| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,311,309
| 1,506,763
|
Fitting a wave function to time-series data
|
<p>I have velocity data sampled over time and I'd like to find some equation/function that can be used to describe it. My data looks like this figure</p>
<p><a href="https://i.sstatic.net/OTOvP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OTOvP.png" alt="Velocity time series data" /></a></p>
<p>As you can see from the image, the data is not a simple sine or cosine wave, and I'm struggling to find a way to fit an equation to it. My signal-processing knowledge is quite limited.</p>
<p>Here's the down-sampled data as an example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

xdata = np.array([11.79, 11.87, 11.95, 12.03, 12.11, 12.19, 12.27, 12.35, 12.43,
12.51, 12.59, 12.67, 12.75, 12.83, 12.91, 12.99, 13.07, 13.15,
13.23, 13.31, 13.39, 13.47, 13.55, 13.63, 13.71, 13.79, 13.87,
13.95, 14.03, 14.11, 14.19, 14.27, 14.35, 14.43, 14.51, 14.59,
14.67, 14.75, 14.83, 14.91, 14.99, 15.07, 15.15, 15.23, 15.31,
15.39, 15.47, 15.55, 15.63, 15.71, 15.79, 15.87, 15.95, 16.03,
16.11, 16.19, 16.27, 16.35, 16.43, 16.51, 16.59, 16.67, 16.75,
16.83, 16.91, 16.99, 17.07, 17.15, 17.23, 17.31, 17.39, 17.47,
17.55, 17.63, 17.71, 17.79, 17.87, 17.95, 18.03, 18.11, 18.19,
18.27, 18.35, 18.43, 18.51, 18.59, 18.67, 18.75, 18.83, 18.91,
18.99, 19.07, 19.15, 19.23, 19.31, 19.39, 19.47, 19.55, 19.63,
19.71, 19.79, 19.87, 19.95])
ydata = np.array([ 1.86470801e-05, -3.05185889e-03, -4.53502752e-03, -5.01501449e-03,
-7.61753339e-03, -7.89916120e-03, -8.45710261e-03, -7.64640792e-03,
-7.28613761e-03, -6.07402134e-03, -4.21708665e-03, -2.53126644e-03,
-1.38970318e-03, 1.59394526e-05, 1.91879565e-03, 3.10854836e-03,
3.37327421e-03, 4.56715556e-03, 5.59283055e-03, 7.38842610e-03,
7.62706763e-03, 9.14228858e-03, 1.24410442e-02, 1.47384372e-02,
1.50136837e-02, 1.14957746e-02, 9.03580024e-03, 7.04710182e-03,
6.94429062e-03, 6.57961389e-03, 5.25393124e-03, 4.75389627e-03,
2.49195903e-03, 2.41027520e-03, 1.67260849e-03, 5.02479585e-04,
2.35275116e-04, -1.91204237e-03, -1.38367516e-03, -1.02639516e-03,
2.78570931e-04, -4.42114657e-04, 2.86560704e-04, 5.69435589e-04,
-2.94260316e-04, 6.68917718e-05, -1.61045579e-03, -2.32730345e-03,
-2.55154534e-03, -4.01547893e-03, -4.72977248e-03, -5.12847064e-03,
-7.10974182e-03, -6.99822300e-03, -8.07665704e-03, -8.84851229e-03,
-9.09903075e-03, -1.03180810e-02, -1.14544281e-02, -1.21251714e-02,
-1.30148026e-02, -1.29700705e-02, -1.25293732e-02, -1.12340153e-02,
-1.05980279e-02, -8.51107110e-03, -6.05923880e-03, -4.50314785e-03,
-3.11505449e-03, -1.61344075e-03, -1.90893378e-04, 8.53961182e-04,
2.53364048e-03, 3.00061370e-03, 4.13971717e-03, 5.83572401e-03,
7.97330030e-03, 9.11719022e-03, 1.07586221e-02, 1.20721136e-02,
1.24900520e-02, 1.07001078e-02, 9.77437191e-03, 8.30650537e-03,
6.17321981e-03, 4.27379777e-03, 3.53587465e-03, 2.37557043e-03,
1.66114222e-03, 6.83006538e-04, -6.38602576e-04, -1.54135169e-03,
-9.86915409e-04, -1.58287464e-03, -2.02820728e-03, -1.53065658e-03,
-8.52157455e-04, 1.62949595e-03, 8.56897304e-04, 1.20745000e-03,
-1.06239388e-03, -2.39230381e-03, -2.39669751e-03])
</code></pre>
<p>My other data is quite regular and I can usually fit it with a sine or cosine function using <code>scipy.optimize</code>, as in the following example.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def cos_func(x, D, E):
    y = D*np.cos(E*x)
    return y

guess = [0.01, 4]
parameters, covariance = curve_fit(cos_func, xdata, ydata, p0=guess)
print(parameters)

fit_D = parameters[0]
fit_E = parameters[1]

fit_cosine = cos_func(xdata, fit_D, fit_E)

plt.plot(xdata, ydata, 'o', label='data')
plt.plot(xdata, fit_cosine, '-', label='fit')
</code></pre>
<p>But that fails miserably on this data:</p>
<p><a href="https://i.sstatic.net/60ums.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/60ums.png" alt="cos function fit" /></a></p>
<p>What function or type of wave should I fit the data with here? I presume it needs to be some compound sine/cosine wave with differing amplitudes to create a wave like this, but I don't know how to automatically find the correct form of equation before optimising the parameters with <code>scipy</code>.</p>
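<p>One possible direction (a sketch, not from the original post): treat the signal as a sum of a small number of cosines and fit all amplitudes, frequencies, and an offset at once with <code>curve_fit</code>. The model, parameter names, and starting guesses below are illustrative; in practice the frequency guesses would come from the dominant peaks of <code>np.fft.rfft</code> on the real data.</p>

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model: a two-component cosine compound wave with an offset.
def compound_wave(x, a1, w1, a2, w2, c):
    return a1 * np.cos(w1 * x) + a2 * np.cos(w2 * x) + c

# Synthetic stand-in for the velocity data in the question.
x = np.linspace(0, 10, 500)
y = 0.01 * np.cos(2.0 * x) + 0.004 * np.cos(4.0 * x)

guess = [0.01, 2.1, 0.005, 3.9, 0.0]  # e.g. seeded from FFT peak frequencies
params, _ = curve_fit(compound_wave, x, y, p0=guess)
```

<p>With reasonable starting frequencies the optimiser converges locally, so seeding the guesses from an FFT is the important step; a flat guess like <code>[0.01, 4]</code> usually lands in the wrong basin.</p>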
|
<python><scipy><signal-processing><curve-fitting><scipy-optimize>
|
2024-04-11 15:01:04
| 2
| 676
|
jpmorr
|
78,311,240
| 1,726,832
|
SQLAlchemy query error when column type FBTEXT exists
|
<p>Table:<br/>
...<br/>
<code>name BLOB SUB_TYPE TEXT(0),</code></p>
<p>Class generated:</p>
<pre><code>class Users(Base):
    __tablename__ = 'users'
    __table_args__ = (
        PrimaryKeyConstraint('userid', name='pk_userid'),
    )
    name = mapped_column(FBTEXT(1, 'NONE', 'NONE'))
</code></pre>
<p>Query:</p>
<pre><code>query_result = (
fb_session.query(Users)
).all()
</code></pre>
<blockquote>
<p>value = bytes(value)<br/>
^^^^^^^^^^^^<br/>
TypeError: string argument without an encoding</p>
</blockquote>
|
<python><sqlalchemy><firebird><sqlacodegen>
|
2024-04-11 14:49:46
| 1
| 6,263
|
rubStackOverflow
|
78,311,118
| 7,419,139
|
How to write stateful tests in python unittest?
|
<p>I want to create a test suite for an existing library that is using <code>unittest</code> and is very hierarchical.</p>
<p>The Class hierarchy looks a bit like this:</p>
<pre><code>A
|_B
|_C
  |_D
  |_E
</code></pre>
<p>Class <code>A</code> is created with an instance of <code>B</code> and <code>C</code> where <code>C</code> needs <code>D</code> and <code>E</code>. The instances are quite expensive to make so I want to reuse them for later testcases.
What I would want is something like this:</p>
<pre><code>class TestsClass(TestCase):
    def test_00_test_e(self):
        self.e = E()
        ...
    def test_01_test_d(self):
        self.d = D()
        ...
    def test_02_test_c(self):
        self.c = C(self.d, self.e)
        ...
    ...
</code></pre>
<p>This however is not possible, since <code>unittest</code> creates a unique object for each test case.
Is there a way to get this functionality within <code>unittest</code>, so that I can work within the test framework of the project and don't have to add dependencies?</p>
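<p>One way to stay within <code>unittest</code> (a sketch; the lists below are stand-ins for the expensive <code>E</code>/<code>D</code>/<code>C</code> instances) is to build the shared objects once per class in <code>setUpClass</code> and store them as class attributes, which every test method can then read through <code>self</code>:</p>

```python
import unittest

class StatefulTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Built once for the whole class, then shared by every test below;
        # the lists stand in for the real expensive E(), D(), C(d, e).
        cls.e = ["E"]
        cls.d = ["D"]
        cls.c = [cls.d, cls.e]  # stand-in for C(d, e)

    def test_00_e(self):
        self.assertEqual(self.e, ["E"])

    def test_02_c(self):
        # The shared instances are the same objects, not fresh copies
        self.assertIs(self.c[0], self.d)
```

<p>Note the trade-off: the tests are no longer independent, so a failure in an early builder can cascade; <code>setUpClass</code> at least keeps the construction in one place rather than hidden in test ordering.</p>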
|
<python><python-unittest>
|
2024-04-11 14:33:48
| 1
| 746
|
iaquobe
|
78,310,933
| 1,630,244
|
Pytest reports missing lines in coverage for imports and constant assignments
|
<p>I'm baffled by differences in coverage behavior for different files in my project and hope someone will please suggest how to debug this more effectively than I am doing now. This is basically the same question as <a href="https://stackoverflow.com/questions/72738672/pytest-incorrectly-reports-missing-lines-in-coverage">Pytest incorrectly reports missing lines in coverage</a></p>
<p>I'm using Python 3.11, tox 4.11.3, coverage 7.4.4, working on an MBP laptop if that matters any. My Python code files all look like this; this one, in my package directory <code>myproj/</code>, is called <code>coverme.py</code>:</p>
<pre><code>import logging

logger = logging.getLogger(__name__)

some_obj = SomeObject(foo='bar')
CONST = 123

def myfunc() -> bool:
    return True

def otherfunc() -> bool:
    """
    nice pydoc goes here
    """
    return CONST
</code></pre>
<p>My test files all look like this; this one, in the project's <code>tests/</code> directory, is called <code>test_coverme.py</code>:</p>
<pre><code>import logging

from myproj import coverme

logger = logging.getLogger(__name__)

def test_myfunc():
    logger.debug('test myfunc')
    assert coverme.myfunc()
</code></pre>
<p>Here's my tox.ini file:</p>
<pre><code>[tox]
envlist = code,flake8
minversion = 2.0

[pytest]
testpaths = tests

[testenv:code]
basepython = python3
deps =
    coverage
    pytest
    pytest-cov
    pytest-mock
commands =
    pytest --cov myproj --cov-report term-missing --cov-fail-under=70 {posargs}

[testenv:flake8]
basepython = python3
skip_install = true
deps = flake8
commands = flake8 setup.py myproj tests
</code></pre>
<p>I can run the test via <code>tox -- tests/test_coverme.py</code> and see these results; I removed results for those files I cannot post here:</p>
<pre><code>---------- coverage: platform darwin, python 3.11.7-final-0 ----------
Name Stmts Miss Cover Missing
-------------------------------------------------------------
myproj/coverme.py 7 1 86% 16
</code></pre>
<p>The simple <code>coverme</code> example shown above works perfectly: pytest-cov reports all lines covered except for line 16, which I agree with, since <code>otherfunc</code> is never called. The same goes for <em>most</em> code files in my project; the reported coverage is as I expect.</p>
<p>Yet for <em>some</em> code files, which of course I cannot post here, the coverage report lists uncovered (missed) lines that are import statements and constant definitions! For example, pytest claims lines like <code>CONST = 123</code> (which appears in the example I posted above) are not covered by any test. It seems impossible that a constant assignment line in a file imported by a test was not executed.</p>
<p>I see in the coverage FAQ advice to invoke <code>coverage</code> from the command line, which I believe tox is doing for me. I ran <code>coverage erase</code> to clean old data, that made no difference. I ran <code>tox -r</code> to reinstall all dependencies, also no difference. I know that I can run single test files or single tests with tox (not the full suite), I have not found that to make a difference either.</p>
<p>Please suggest other ways to figure out what I am doing wrong, or what pytest is possibly not doing correctly. Thanks in advance.</p>
|
<python><pytest><tox><coverage.py><pytest-cov>
|
2024-04-11 14:06:14
| 1
| 4,482
|
chrisinmtown
|
78,310,768
| 276,168
|
numpy Polynomial.fit with degree zero produces unexpected result
|
<pre><code>import numpy as np
f = np.array(
[481.62900766, 511.94542042, 647.40216379, 686.10402156, 849.9420538, 888.64398048, 1029.26087049, 1071.18799217,
1210.51481107, 1266.63254274, 1409.54282743])
s = np.array(
[457.90057373, 520.90911865, 666.19372559, 709.64898682, 862.48828125, 892.19934082, 1031.70675659, 1063.03643799,
1206.41647339, 1239.6506958, 1386.23660278])
series1 = np.polynomial.polynomial.Polynomial.fit(f, s, deg=1)
delta, ratio = series1.convert().coef
print(delta, ratio) # delta = 24.72921108370622, ratio = 0.971307510662272
# ratio is expected to be 1.0, this delta is sufficiently close to zero
series0 = np.polynomial.polynomial.Polynomial.fit(f, s, deg=0) # assume ratio = 1.0
delta = series0.convert().coef[0]
print(delta) # delta = 912.3988175827271 is unreasonably large
# inspired by: https://stats.stackexchange.com/questions/436375/
delta = np.mean(s - f)
print(delta) # delta = -1.4926089272727112 is quite close to zero
</code></pre>
<p>Is there a way to get the second result (delta close to zero) by using <code>Polynomial.fit</code>, with <code>deg=0</code> or otherwise?</p>
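<p>For context on why <code>deg=0</code> behaves this way: a degree-zero fit of <code>s</code> against <code>f</code> fits a single constant to <code>s</code> alone, and the least-squares constant is just <code>mean(s)</code> — hence the large value. One sketch of a workaround that keeps <code>Polynomial.fit</code>: fix the slope at exactly 1 by fitting the residual <code>s - f</code> with <code>deg=0</code>, which recovers the small offset (illustrated here with tiny synthetic data):</p>

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0])
s = f + 0.5  # synthetic data with slope exactly 1 and offset 0.5

# deg=0 on the residual fits only the offset; the ratio is pinned at 1.0,
# and the least-squares constant equals np.mean(s - f)
delta = np.polynomial.polynomial.Polynomial.fit(f, s - f, deg=0).convert().coef[0]
```

<p>This is equivalent to <code>np.mean(s - f)</code> from the question, but expressed through the <code>Polynomial</code> API.</p>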
|
<python><numpy><linear-regression><least-squares>
|
2024-04-11 13:40:09
| 1
| 3,425
|
Dženan
|
78,310,673
| 8,430,733
|
Dynamically change order of nested loops containing business logic in Python
|
<p>I'm developing a script for a lab instrumentation campaign. The instrument we're working with takes in various parameters such as x and y coordinates, power, delay, pulse, and so forth.</p>
<p>One crucial aspect of the operation involves 2D scanning an object, which requires iterating over both x and y coordinates. The sequence and number of loops may change depending on variations in the parameters.</p>
<p>My objective is to enhance the flexibility of the following pseudo-code, ideally reducing the level of nested loops if possible.</p>
<pre class="lang-py prettyprint-override"><code>for x, y in zip(x_coords, y_coords):
    instr.move_axes(x, y)
    do_something_else_x_y()
    for power in params['power_list']:
        instr.set_power(power)
        do_something_else_power()
        for delay in params['delay_list']:
            instr.set_delay(delay)
            do_something_else_delay()
            for pulse in params['pulse_list']:
                instr.set_pulse(pulse)
                do_something_else_pulse()
</code></pre>
<p>I tried something with <code>itertools.product()</code>, which flattens the nested for loops, but I don't know how to insert the logic that communicates with the instrument without resorting to a lot of if conditions. Communicating with the instrument takes a "long" time, so I want to set a value on the instrument only when there is a change.</p>
<p>The following code makes it easier to change the order of elements, but seems to require more logic at the end to avoid calling the instrument unnecessarily.</p>
<pre class="lang-py prettyprint-override"><code>for (x, y), power, delay, pulse in product(zip(x_coords, y_coords), power_list, delay_list, pulse_list):
    print(x, y, power, delay, pulse)
    # implement additional logic to check if x, y, power, delay, pulse etc. changed before calling the instrument
    if difference_xy():
        instr.move_axes(x, y)
        do_something_else_x_y()
    if difference_power():
        instr.set_power(power)
        do_something_else_power()
    if difference_delay():
        instr.set_delay(delay)
        do_something_else_delay()
    if difference_pulse():
        instr.set_pulse(pulse)
        do_something_else_pulse()
    do_something_else()
</code></pre>
<p>Any ideas on this subject would be appreciated.</p>
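<p>One sketch of the change-detection idea (all names are placeholders, not the real instrument API): drive <code>itertools.product</code> over the parameters in any chosen order, and keep a dict of last-set values so each setter fires only when its value actually changes. Reordering the sweep then only means reordering one list:</p>

```python
from itertools import product

def run_sweep(order, param_lists, setters):
    """Iterate parameters in the given order, calling a setter only on change."""
    last = {}
    for combo in product(*(param_lists[name] for name in order)):
        for name, value in zip(order, combo):
            if last.get(name) != value:
                setters[name](value)  # e.g. instr.set_power(value)
                last[name] = value
        # the measurement for this parameter combination goes here

# Illustrative usage with loggers standing in for instrument calls
power_log, delay_log = [], []
run_sweep(
    ["power", "delay"],
    {"power": [1, 2], "delay": [10, 20]},
    {"power": power_log.append, "delay": delay_log.append},
)
```

<p>The slow parameter belongs first in <code>order</code>: the outermost value changes least often, so its setter is called the fewest times.</p>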
|
<python>
|
2024-04-11 13:23:49
| 2
| 1,011
|
Raphael
|
78,310,666
| 12,439,683
|
Adjust yaml.Dumper to export dataclasses and their docstrings
|
<p>I would like to export dataclass objects <strong>including their docstrings</strong> (as <em>#</em>-commented descriptions above each field) to YAML.</p>
<p>So far I've figured out that I need a custom dumper or adjust the current one.</p>
<pre><code>from dataclasses import dataclass

@dataclass
class foo:
    """This would be a nice to have docstring but can be joined"""
    bar: int = 1
    """This is bar"""

# Export
import yaml

class SomeDumper(yaml.Dumper):
    """What is needed here?"""

yaml.dump(foo, "foo.yaml", dumper=SomeDumper)
</code></pre>
<p>Expected output:</p>
<pre class="lang-yaml prettyprint-override"><code># -----
# This would be a nice to have docstring but can be joined
# -----
# This is bar
bar : 1
</code></pre>
<p>How do I need to adjust the Dumper to keep the docstrings like shown above?</p>
<p><strong>For simplicity assume that I have the docstrings, and I only care on how to adjust the dumper, the docstrings I can extract with this <a href="https://gist.github.com/Daraan/fd295f0d1cac4fb3abbf3c28c3c0bed0" rel="nofollow noreferrer">gist</a></strong></p>
|
<python><yaml><python-dataclasses><pyyaml><docstring>
|
2024-04-11 13:23:12
| 1
| 5,101
|
Daraan
|
78,310,611
| 11,335,032
|
Python typing Protocol with default
|
<p>I have a simple function which simply executes a callback:</p>
<pre class="lang-py prettyprint-override"><code>def callback(*, arg1: float | None = None) -> float:
    return arg1 or 1.0

def callback2(*, arg1: int | None = None) -> int:
    return arg1 or 1

def func(arg1 = callback, arg2 = None):
    return arg1(arg1=arg2)

func(callback)
func(callback2)
</code></pre>
<p>For these callback functions <code>arg1</code> and the return type are always the same type and can be either <code>int</code> or <code>float</code>.</p>
<p>My first attempt, without a default for the callback, was</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations

from typing import TypeVar, Protocol

_ResultType = TypeVar("_ResultType", int, float)
_ContraResultType = TypeVar("_ContraResultType", int, float, contravariant=True)
_CoResultType = TypeVar("_CoResultType", int, float, covariant=True)

class MyCallback(Protocol[_ContraResultType, _CoResultType]):
    def __call__(self, *, arg1: _ContraResultType | None) -> _CoResultType: ...

def callback(arg1: float | None = None) -> float:
    return arg1 or 1.0

def callback2(arg1: int | None = None) -> int:
    return arg1 or 1

def func(
    arg1: MyCallback[_ResultType, _ResultType],
    arg2: _ResultType | None = None,
) -> _ResultType:
    return arg1(arg1=arg2)

a: float = func(callback)
b: int = func(callback2)
</code></pre>
<p>which appears to work. However if I update the implementation to include the default in the following way:</p>
<pre class="lang-py prettyprint-override"><code>def func(
    arg1: MyCallback[_ResultType, _ResultType] = callback,
    arg2: _ResultType | None = None,
) -> _ResultType:
    return arg1(arg1=arg2)
</code></pre>
<p>MyPy fails with:</p>
<pre><code>example.py:19: error: Incompatible default for argument "arg1" (default has type "Callable[[Optional[float]], float]", argument has type "MyCallback[int, int]") [assignment]
example.py:19: note: "MyCallback[int, int].__call__" has type "Callable[[NamedArg(Optional[int], 'arg1')], int]"
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>Interestingly if I use <code>callback2</code> as default it fails the other way around:</p>
<pre><code>example.py:19: error: Incompatible default for argument "arg1" (default has type "Callable[[Optional[int]], int]", argument has type "MyCallback[float, float]") [assignment]
example.py:19: note: "MyCallback[float, float].__call__" has type "Callable[[NamedArg(Optional[float], 'arg1')], float]"
</code></pre>
<p>How would I need to adjust the typing so that it is able to properly deduce the result type for the default-argument version as well?</p>
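<p>One pattern type checkers generally accept (a sketch under assumptions; whether it fits the real API is for the reader to judge) is to describe the "no callback passed" case with a separate <code>@overload</code>, so the constrained TypeVar never has to unify with the concrete <code>float</code>-typed default:</p>

```python
from __future__ import annotations

from typing import Protocol, TypeVar, overload

_T = TypeVar("_T", int, float)

class MyCallback(Protocol[_T]):
    def __call__(self, *, arg1: _T | None) -> _T: ...

def callback(*, arg1: float | None = None) -> float:
    return arg1 or 1.0

def callback2(*, arg1: int | None = None) -> int:
    return arg1 or 1

@overload
def func(arg1: MyCallback[_T], arg2: _T | None = ...) -> _T: ...
@overload
def func(arg1: None = ..., arg2: float | None = ...) -> float: ...

def func(arg1=None, arg2=None):
    # Fall back to the float callback when none is supplied
    return (arg1 or callback)(arg1=arg2)
```

<p>The implementation signature stays untyped; the two overloads carry the typing, with the default case pinned to <code>float</code>.</p>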
|
<python><python-typing>
|
2024-04-11 13:16:50
| 2
| 3,335
|
maxbachmann
|
78,310,446
| 10,461,632
|
Fixing mypy error - Incompatible types in assignment (expression has type "xxx", variable has type "yyy")
|
<p>I am running into the following <code>mypy</code> error and cannot figure out how to fix it.</p>
<pre><code>test.py:30: error: Incompatible types in assignment (expression has type "list[str] | list[Path]", variable has type "list[Path]") [assignment]
</code></pre>
<p>Code versions: <code>Python 3.10.13</code>, <code>mypy 1.9.0 (compiled: yes)</code></p>
<p>I tried options 1 and 2 from the accepted answer <a href="https://stackoverflow.com/questions/43910979/mypy-error-incompatible-types-in-assignment">here</a>, but that didn't do anything.</p>
<p>Here's the code:</p>
<pre><code>import re
import glob
from pathlib import Path
from typing import Sequence

def find_files(files: str | Sequence[str], sort: bool = True) -> list[str] | list[Path]:
    """Find files based on shell-style wildcards.

    files = '/path/to/all/*.csv'
    files = [
        '/path/to/all/*.csv',
        '/and/to/all/*.xlsx',
    ]
    """
    # Convert to list if input is a str.
    files = [files] if isinstance(files, str) else files

    # Find files.
    found_files = [Path(fglob).resolve()
                   for f in files
                   for fglob in glob.glob(f, recursive=True)]

    # Make sure no duplicates exist.
    found_files = [*set(found_files)]

    if sort:
        found_files = sort_nicely(found_files)  # TODO: mypy complains here

    return found_files

def sort_nicely(lst: Sequence[str] | Sequence[Path]) -> list[str] | list[Path]:
    """Perform natural human sort.

    sort_nicely(['P1', 'P10', 'P2']) == ['P1', 'P2', 'P10']
    """
    def convert(text: str) -> str | int:
        return int(text) if text.isdigit() else text

    def alpha_key(item: str | Path) -> list[str | int]:
        return [convert(c) for c in re.split('([0-9]+)', str(item))]

    return sorted(lst, key=alpha_key)
</code></pre>
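<p>One way the error typically goes away (a sketch, not a verified fix for this exact project) is a constrained TypeVar: declaring <code>sort_nicely</code> to return the same element type it receives, rather than the union <code>list[str] | list[Path]</code>, lets mypy keep <code>found_files</code> as <code>list[Path]</code> through the reassignment:</p>

```python
from __future__ import annotations

import re
from pathlib import Path
from typing import TypeVar

# Constrained to exactly str or Path, so input and output types match
AnyPath = TypeVar("AnyPath", str, Path)

def sort_nicely(lst: list[AnyPath]) -> list[AnyPath]:
    """Natural human sort that preserves the element type for mypy."""
    def alpha_key(item: AnyPath) -> list[object]:
        # re.split with a capture group yields alternating text/number chunks,
        # so comparisons line up str-with-str and int-with-int
        return [int(c) if c.isdigit() else c for c in re.split(r"([0-9]+)", str(item))]
    return sorted(lst, key=alpha_key)
```

<p>The same idea applies to <code>find_files</code> itself if its return type can be narrowed to <code>list[Path]</code>, which the body suggests it always is.</p>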
|
<python><mypy>
|
2024-04-11 12:47:57
| 2
| 788
|
Simon1
|
78,310,379
| 4,343,563
|
How to convert multi-page PDF file to image without using pdf2Image?
|
<p>I am trying to use pdf2image to convert a multi-page PDF to a single image variable, as is done in the Textractor code, so that I can use it in LazyDocument:</p>
<pre><code>import pdf2image
from pdf2image import convert_from_path, convert_from_bytes
import boto3
from textractcaller import call_textract, OutputConfig
from textractcaller.t_call import Textract_Call_Mode, Textract_API, get_full_json

file_source = 's3://mybucket/my/path/to/file.pdf'
bucket = 'mybucket'
key = 'my/path/to/file.pdf'

session = boto3.session.Session(region_name='us-east-1')
textract_client = session.client("textract", region_name='us-east-1')
output_config = OutputConfig(s3_bucket=bucket, s3_prefix=key)

response = call_textract(
    input_document=file_source,
    output_config=output_config,
    features=[TextractFeatures.FORMS, TextractFeatures.TABLES, TextractFeatures.SIGNATURES, TextractFeatures.LAYOUT],
    return_job_id=True,
    force_async_api=True,
    call_mode=Textract_Call_Mode.FORCE_ASYNC,
    boto3_textract_client=textract_client,
    job_done_polling_interval=1,
)

s3_client = session.client("s3")
file_obj = s3_client.get_object(Bucket=bucket, Key=key).get("Body").read()
images = convert_from_bytes(bytearray(file_obj))

LazyDoc = LazyDocument(
    response["JobId"],
    Textract_API.ANALYZE,
    textract_client=textract_client,
    images=images,
    output_config=output_config,
)
</code></pre>
<p>Although the package installs properly with <code>pip install pdf2image</code>, I am getting the error:</p>
<pre><code>PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH?
</code></pre>
<p>I have seen solutions that say to download the necessary file and specify the poppler_path. However, due to permission issues, I am not allowed to upload any files from online into the required space. Is there another way to get the <code>images</code> variable without using pdf2image?</p>
|
<python><image><pdf><pdf2image>
|
2024-04-11 12:35:59
| 1
| 700
|
mjoy
|
78,310,370
| 9,665,565
|
Check internet button and print a message whether it is checked or not
|
<p>I am trying to do the following exercise: open <a href="https://www.google.com/about/careers/applications/jobs/results?location=Chile&location=Santiago%2C%20Chile" rel="nofollow noreferrer">this page</a> on Python, and check what happens when the "Remote eligible" button is on; are there positions displayed or not?</p>
<p>So far, I have this code, but it is giving me this error: <code>Could not find or click 'Remote eligible' button: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="c1853"]"}</code></p>
<p>When the button is off the CSS selector is:</p>
<pre><code><input class="VfPpkd-muHVFf-bMcfAe" type="checkbox" id="c473" jsname="YPqjbf" jsaction="focus:AHmuwe; blur:O22p3e;change:WPi0i;" data-indeterminate="false">
</code></pre>
<p>When it is on, it is:</p>
<pre><code><input class="VfPpkd-muHVFf-bMcfAe" type="checkbox" id="c556" jsname="YPqjbf" jsaction="focus:AHmuwe; blur:O22p3e;change:WPi0i;" data-indeterminate="false" checked="">
</code></pre>
<p>How can I enhance my code?</p>
<pre><code># !apt-get update
# !apt install chromium-chromedriver
# !pip install selenium
# Import necessary libraries
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
# Configure Selenium to use headless Chrome
chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.binary_location = "/usr/bin/chromium-browser"
# Initialize WebDriver with options
driver = webdriver.Chrome(options=chrome_options)
# Open the webpage
driver.get("https://www.google.com/about/careers/applications/jobs/results?location=Chile&location=Santiago%2C%20Chile")
# Wait for the page elements to load
time.sleep(5) # Adjust the sleep time based on your network speed
# Attempt to locate and click the "Remote eligible" button if it's present
# Note: You'll need to adjust the selectors based on the actual page structure
try:
    # Locate the checkbox element
    checkbox_element = driver.find_element(By.ID, 'c1853')

    # Check the value of the 'checked' attribute to determine if the button is on or off
    is_button_on = checkbox_element.get_attribute('checked')

    # Click the button only if it's off
    if not is_button_on:
        checkbox_element.click()
        print("Clicked 'Remote eligible' to turn it on.")
        time.sleep(3)  # Wait for the page to update after clicking
except Exception as e:
    print(f"Could not find or click 'Remote eligible' button: {e}")

# Now check if there are no positions displayed
# Adjust the method of checking based on how the webpage indicates no positions are available
no_positions_indicator = driver.find_elements(By.XPATH, '//div[contains(text(), "No positions found")]')
if no_positions_indicator:
    print("No remote and/or hybrid positions are displayed.")
else:
    print("Remote and/or hybrid positions are displayed.")

# Close the WebDriver
driver.quit()
</code></pre>
|
<python><html><css><selenium-webdriver><web-scraping>
|
2024-04-11 12:33:53
| 2
| 701
|
Paula
|
78,310,300
| 10,086,915
|
Sympy mapping defined functions
|
<p>I have a user-defined language containing mathematical expressions in Python syntax, but with wrappers for special mathematical functions: e.g. instead of
<code>sqrt(y) - x</code> I have <code>_math.foo(y) - x</code>.</p>
<p>I want to be able to use sympy.parse_expr on these strings and handle them the same way.
I tried different things, none of which give the correct result yet:</p>
<pre><code>import sympy
import math
import sympy.utilities
from sympy.utilities.lambdify import implemented_function

class Math:
    foo = math.sqrt
    bar = implemented_function("sqrt", math.sqrt)

input = "sqrt(y) - x"
input = "_math.bar(y) - x"
input = "_math.foo(y) - x"

expr = sympy.parse_expr(input, {"_math": Math})
print(expr)
print(sympy.solveset(expr, sympy.Symbol("y")))
print(sympy.solveset(expr, sympy.Symbol("x")))
</code></pre>
<p>With <code>input = "sqrt(y) - x"</code>
I do get the result:</p>
<pre><code>-x + sqrt(y)
{x**2}
{sqrt(y)}
</code></pre>
<p>which is the result I want.
The approach <code>input = "_math.bar(y) - x"</code>
gives me the output:</p>
<pre><code>-x + sqrt(y)
ConditionSet(y, Eq(-x + sqrt(y), 0), Complexes)
{sqrt(y)}
</code></pre>
<p>which I do not fully understand, but is not th same as the one above
and the last approach <code>input = "_math.foo(y) - x"</code>
gives me <code>TypeError("Cannot convert expression to float")</code> inside eval.</p>
<p>Note that the string representation of the first two approaches is the same.</p>
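<p>For context on that <code>TypeError</code>: <code>math.sqrt</code> requires a float, so it fails the moment it is handed a <code>Symbol</code> during evaluation. One sketch of a fix (assuming the wrappers can be remapped): point the wrapper at SymPy's own symbolic <code>sqrt</code>, so parsing produces the same expression as the plain <code>sqrt(y) - x</code> input:</p>

```python
import sympy

class Math:
    foo = sympy.sqrt  # symbolic, so it accepts Symbol arguments unchanged

expr = sympy.parse_expr("_math.foo(y) - x", {"_math": Math})
```

<p><code>implemented_function</code> goes the other way: it creates an undefined symbolic function that merely remembers a numeric implementation for <code>lambdify</code>, which is why <code>solveset</code> cannot see through it.</p>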
|
<python><sympy>
|
2024-04-11 12:20:14
| 1
| 638
|
Max Ostrowski
|
78,310,292
| 16,723,655
|
Substitute numpy array values for specific index
|
<p>I have an image array (shape : 256, 256, 3).</p>
<p>There is an index as below.</p>
<pre><code>array([[ 2, 254],
[ 2, 255],
[ 3, 252],
...,
[148, 169],
[148, 170],
[149, 169]], dtype=int64)
</code></pre>
<p>I can get the values of the image array at those indices with the code below.</p>
<pre><code>read_image(image).numpy()[idx[:,0], idx[:,1]]
</code></pre>
<p>The values at those indices are as below (shape: (11978, 3)).</p>
<pre><code>array([[145, 161, 174],
[139, 155, 168],
[157, 171, 184],
...,
[144, 161, 169],
[144, 161, 169],
[146, 163, 171]], dtype=uint8)
</code></pre>
<p>I have another array, as below (size: (11978,)).</p>
<pre><code>array([150, 150, 130, ..., 120, 150, 160])
</code></pre>
<p>I want to set all 3 channels at those index positions using the array above.</p>
<p>For example, the image array at position [2, 254] would be changed to [150, 150, 150] across the RGB channels.</p>
<p>The image array at position [149, 169] would be changed to [160, 160, 160].</p>
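<p>A sketch of one way to do this (a small array stands in for the 256×256×3 image): fancy-index the rows and columns, and broadcast the 1-D values across the channel axis with <code>values[:, None]</code>:</p>

```python
import numpy as np

img = np.zeros((4, 4, 3), dtype=np.uint8)
idx = np.array([[0, 1], [2, 3]])           # (row, col) pairs, as in the question
values = np.array([150, 160], dtype=np.uint8)

# values[:, None] has shape (2, 1) and broadcasts over the 3 channels,
# so each indexed pixel gets the same value in R, G and B
img[idx[:, 0], idx[:, 1]] = values[:, None]
```

<p>The right-hand side could equally be a full (N, 3) array if the channels ever need distinct values.</p>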
|
<python><numpy>
|
2024-04-11 12:19:10
| 1
| 403
|
MCPMH
|
78,310,106
| 1,472,407
|
Handling XML default namespace in Python using Pydantic
|
<p>I am trying to convert some JSON data into XML in Python using the Pydantic-XML library (<a href="https://pydantic-xml.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">https://pydantic-xml.readthedocs.io/en/latest/index.html</a>). However, I am having trouble getting the default namespaces according to my business requirements.</p>
<p>This is my code:</p>
<pre><code>class Socials(
    BaseXmlModel,
    tag='socials',
    nsmap={'': 'http://www.company.com/soc'},
):
    urls: List[str] = element(tag='social')

class Contacts(
    BaseXmlModel,
    tag='contacts',
    nsmap={'': 'http://www.company.com/cnt'},
):
    socials: Socials = element()

class Company(
    BaseXmlModel,
    tag='company',
    nsmap={'': 'http://www.company.com/co'},
):
    contacts: Contacts = element()
</code></pre>
<p><strong>My desired output: (requirement from business, ns starts from 1 not 0)</strong></p>
<pre><code><ns1:company xmlns:ns1="http://www.company.com/co"
xmlns:ns2="http://www.company.com/cnt"
xmlns:ns3="http://www.company.com/soc">
    <ns2:contacts>
        <ns3:socials>
            <ns3:social>https://www.linkedin.com/company/spacex</ns3:social>
            <ns3:social>https://twitter.com/spacex</ns3:social>
            <ns3:social>https://www.youtube.com/spacex</ns3:social>
        </ns3:socials>
    </ns2:contacts>
</ns1:company>
</code></pre>
<p><strong>What I get from library:</strong></p>
<pre><code><company xmlns="http://www.company.com/co">
    <contacts xmlns="http://www.company.com/cnt">
        <socials xmlns="http://www.company.com/soc">
            <social>https://www.linkedin.com/company/spacex</social>
            <social>https://twitter.com/spacex</social>
            <social>https://www.youtube.com/spacex</social>
        </socials>
    </contacts>
</company>
</code></pre>
<p>Can anyone help let me know how to achieve this with the pydantic-xml library or any other python library?</p>
<p>Regards,
Krish</p>
|
<python><xml><xml-namespaces><pydantic>
|
2024-04-11 11:46:28
| 1
| 386
|
dksr
|
78,310,014
| 8,030,794
|
Some manipulations with groupby Pandas
|
<p>I have this Dataframe</p>
<pre><code>import pandas as pd
import math
from pandas import Timestamp
Date = [Timestamp('2024-03-16 23:59:42'), Timestamp('2024-03-16 23:59:42'), Timestamp('2024-03-16 23:59:44'), Timestamp('2024-03-16 23:59:44'), Timestamp('2024-03-16 23:59:44'), Timestamp('2024-03-16 23:59:47'), Timestamp('2024-03-16 23:59:48'), Timestamp('2024-03-16 23:59:48'), Timestamp('2024-03-16 23:59:49'), Timestamp('2024-03-16 23:59:49'), Timestamp('2024-03-16 23:59:49'), Timestamp('2024-03-16 23:59:49'), Timestamp('2024-03-16 23:59:49'), Timestamp('2024-03-16 23:59:49'), Timestamp('2024-03-16 23:59:49'), Timestamp('2024-03-16 23:59:49'), Timestamp('2024-03-16 23:59:49'), Timestamp('2024-03-16 23:59:49'), Timestamp('2024-03-16 23:59:49'), Timestamp('2024-03-16 23:59:49')]
Price = [0.6729, 0.6728, 0.6728, 0.6728, 0.6728, 0.673, 0.6728, 0.6729, 0.6728, 0.6728, 0.6728, 0.6728, 0.6728, 0.6728, 0.6728, 0.6728, 0.6728, 0.6728, 0.6729, 0.6728]
Side = [-1, -1, -1, 1, -1, 1, -1, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, -1]
Amount = [1579.2963000000002, 7.400799999999999, 6.728, 177.61919999999998, 797.2679999999999, 33650.0, 131.196, 48.448800000000006, 0.6728, 0.6728, 0.6728, 6.728, 0.6728, 1.3456, 0.6728, 0.6728, 0.6728, 0.6728, 0.6729, 0.6728]
buy = [math.nan, math.nan, math.nan, 177.61919999999998, math.nan, 33650.0, math.nan, 48.448800000000006, math.nan, math.nan, math.nan, math.nan, math.nan, math.nan, math.nan, math.nan, math.nan, math.nan, 49.121700000000004, math.nan]
df = pd.DataFrame({
'Date':Date,
'Price':Price,
'Side':Side,
'Amount':Amount,
'buy':buy
})
print(df)
</code></pre>
<p>I got <code>buy</code> column using</p>
<p><code>df['buy'] = df[df['Side'] == 1].groupby([df['Date'].dt.floor('H'), 'Price'])['Amount'].cumsum()</code></p>
<p>But I want the <code>buy</code> column to contain 0 instead of NaN when this price has not yet been seen in the group, and otherwise the previous value of the cumulative sum.</p>
<p>The resulting <code>buy</code> column needed: [0, 0, 0, 177.6192, 177.6192, 33650, 177.6192, 48.4488, 177.6192, ...]</p>
<p>How can I implement this?</p>
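<p>One sketch of the idea, shown on a tiny frame without the hourly grouping level for brevity: forward-fill the cumulative sums within each price group, then fill whatever remains (prices not yet seen) with 0. For the real data the group key would also include <code>df['Date'].dt.floor('h')</code>:</p>

```python
import numpy as np
import pandas as pd

d = pd.DataFrame({
    'Price': [0.1, 0.1, 0.2, 0.1],
    'buy':   [np.nan, 5.0, np.nan, np.nan],  # NaN where Side != 1
})

# Carry the last cumulative sum forward per price; 0 before the first buy
d['buy'] = d.groupby('Price')['buy'].ffill().fillna(0)
```

<p><code>ffill</code> inside the groupby never leaks values across price groups, which is what plain <code>fillna(method='ffill')</code> on the whole column would do.</p>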
|
<python><pandas><group-by>
|
2024-04-11 11:28:11
| 1
| 465
|
Fresto
|
78,309,826
| 536,262
|
python playwright block inline elements
|
<p>I can block loading resources, but what about inline svgs:</p>
<pre><code>page.route(re.compile(r"\.(jpg|png|svg)$"), lambda route: route.abort())
page.goto(url)
</code></pre>
<p>This element:</p>
<pre><code><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="16" height="16" class="css-1d3xu67-Icon"><path d="M18,10.82a1,1,0,0,0-1,1V19a1,1,0,0,1-1,1H5a1,1,0,0,1-1-1V8A1,1,0,0,1,5,7h7.18a1,1,0,0,0,0-2H5A3,3,0,0,0,2,8V19a3,3,0,0,0,3,3H16a3,3,0,0,0,3-3V11.82A1,1,0,0,0,18,10.82Zm3.92-8.2a1,1,0,0,0-.54-.54A1,1,0,0,0,21,2H15a1,1,0,0,0,0,2h3.59L8.29,14.29a1,1,0,0,0,0,1.42,1,1,0,0,0,1.42,0L20,5.41V9a1,1,0,0,0,2,0V3A1,1,0,0,0,21.92,2.62Z"></path></svg>
</code></pre>
<p><a href="https://i.sstatic.net/AZ81J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AZ81J.png" alt="enter image description here" /></a></p>
|
<python><css><playwright>
|
2024-04-11 10:50:04
| 1
| 3,731
|
MortenB
|
78,309,687
| 7,495,742
|
How to get a textarea value in Python when there is no value attribute
|
<p>I'm trying to parse an HTML page and get the value from a textarea, but I can't manage it; my code is below (I started with Selenium, but that didn't work either). I don't really understand where this value lives in the DOM. <a href="https://i.sstatic.net/xHWPq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xHWPq.png" alt="You can see Value down right in the inspector" /></a></p>
<p>So the value shows in the inspector but not in the HTML source, and I can't find a way to get it. Can someone please explain? Thanks!</p>
<pre><code>#coding: utf8
import lxml.html as lh
import urllib.request
nb = 36893488147419103232
#x=0
#GRAB
#while x<=1: #A changer en fonction du nb de pages à crawl
url_base="https://bitcoin.oni.su/"+str(nb)#+x)
req = urllib.request.Request(url_base, headers={'User-Agent': 'Mozilla/5.0'})
doc=lh.parse(urllib.request.urlopen(req))
get_btc_adr = doc.xpath('//textarea[@id="BTCaddrC"]')
print(get_btc_adr, get_btc_adr[0].value)#Tried Value, text, element_Text()....
#x+=1
</code></pre>
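<p>In lxml the content of a textarea is the element's <em>text</em>, not a <code>value</code> attribute; and if the site fills it in with JavaScript after load, the static HTML contains nothing to scrape at all. A sketch on a static snippet (the address string is just an example):</p>

```python
import lxml.html as lh

snippet = ('<html><body><textarea id="BTCaddrC">'
           '1BoatSLRHtKNngkdXEeobR76b53LETtpyT'
           '</textarea></body></html>')
doc = lh.fromstring(snippet)
ta = doc.xpath('//textarea[@id="BTCaddrC"]')[0]
print(ta.text)    # the textarea's content lives in .text, not .value
```

<p>If <code>ta.text</code> comes back empty on the real page, the value is injected by JavaScript; then a real browser is required, e.g. Selenium's <code>element.get_attribute("value")</code>.</p>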
|
<python><parsing><textarea><lxml><urllib>
|
2024-04-11 10:23:17
| 2
| 357
|
Garbez François
|
78,309,521
| 4,551,325
|
Dataframe column-wise replace values above cutoff with max values below cutoff
|
<p>I have a sample dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'A': [10, 18, 30, 40],
'B': [50, 20, 40, 6]
})
</code></pre>
<p>Given a cutoff number, say, 25, how to replace all values above the cutoff with max values below cutoff column wise?</p>
<p>A working solution:</p>
<pre><code>import numpy as np
cutoff = 25
for col in df:
    ceiling = df[col][df[col] <= cutoff].max()
    df[col] = np.where(df[col] > cutoff, ceiling, df[col])
</code></pre>
<p>Which gives:</p>
<pre><code> A B
0 10 20
1 18 20
2 18 20
3 18 6
</code></pre>
<p>My actual dataframe is much larger and therefore performance sensitive.</p>
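<p>The loop can be replaced by two whole-frame operations, which avoids Python-level iteration over columns; a sketch reproducing the sample result:</p>

```python
import pandas as pd

df = pd.DataFrame({"A": [10, 18, 30, 40], "B": [50, 20, 40, 6]})
cutoff = 25

# Per-column max over the values at or below the cutoff ...
ceiling = df.where(df <= cutoff).max()
# ... substituted wherever a value exceeds the cutoff:
result = df.mask(df > cutoff, ceiling, axis=1)
print(result)
```

<p>Both steps are vectorized, so this should scale well to large frames; <code>df.clip(upper=ceiling, axis=1)</code> is an equivalent alternative here, since no in-range value can exceed its column's ceiling.</p>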
|
<python><pandas><numpy>
|
2024-04-11 09:58:07
| 2
| 1,755
|
data-monkey
|
78,309,513
| 824,624
|
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b when parsing the downloaded json
|
<p>I'm trying to parse a JSON file on the web using Python, but I run into a problem saying:</p>
<blockquote>
<p>UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position
1: invalid start byte</p>
</blockquote>
<p>here is the code</p>
<pre><code>import json
import urllib.request
with urllib.request.urlopen('https://a.amap.com/Loca/static/loca-v2/demos/mock_data/sz_road_F.json') as url:
url_data = json.loads(url.read().decode())
print(url_data)
</code></pre>
<p>Any idea what is going on with the code?</p>
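<p>Byte <code>0x8b</code> at position 1 is the second byte of the gzip magic number (<code>1f 8b</code>): the server sent a gzip-compressed body, so it must be decompressed before decoding as UTF-8. An offline sketch of the check (the payload here is a stand-in for the real file):</p>

```python
import gzip
import json

payload = json.dumps({"ok": True}).encode("utf-8")   # stand-in payload
raw = gzip.compress(payload)                          # simulate the response body
assert raw[:2] == b"\x1f\x8b"                         # the bytes from the error
data = json.loads(gzip.decompress(raw).decode("utf-8"))
print(data)
```

<p>With urllib, apply the same magic-number test to <code>url.read()</code> before calling <code>decode()</code>, and decompress with <code>gzip.decompress</code> when it matches.</p>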
|
<python>
|
2024-04-11 09:57:12
| 1
| 8,168
|
user824624
|
78,309,501
| 1,942,868
|
speed up by multiple rsync at the same time
|
<p>I am running an rsync command over the internet.</p>
<p>However, <code>rsync</code> is a bit slow, and when I use multiple <code>rsync</code> processes the total speed is faster.</p>
<p>So I made this script.</p>
<pre><code>import subprocess
cmd = "ssh root@my.hopto.org ls /media/"
output = subprocess.run(cmd,shell=True, capture_output=True, text=True).stdout
arr = output.splitlines()
for a in arr:
    print(a)

for i, a in enumerate(arr):
    if i > 5:
        break
    cmd = 'rsync --partial --append --progress -av root@my.hopto.org:"/media/{}" ./test/'.format(a)
    print(cmd)
    subprocess.run(cmd, shell=True)
</code></pre>
<p>My goal is to run five <code>rsync</code> transfers in parallel.</p>
<p>However, this script runs only one rsync at a time, so I then tried</p>
<pre><code>cmd = 'nohup rsync --partial --append --progress -av root@my.hopto.org:"/media/{}" ./test/ &'.format(a)
</code></pre>
<p>However, all five rsync processes appear in <code>ps</code> but make no progress (note the stopped state <code>T</code>).</p>
<pre><code>ps aux | grep rsync
7097 0.0 0.0 408923232 2832 s010 T 6:45PM 0:00.02 rsync --partial --append --progress -av -e ssh -p 17441 root@my.hopto.org..............
.
.
.
</code></pre>
<p>Is there any good idea or alternative way to speed up rsync?</p>
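<p><code>subprocess.run</code> blocks until the command finishes, which is why the transfers happen one by one; and the backgrounded <code>nohup ... &</code> jobs show state <code>T</code>, likely because they try to read from the terminal (e.g. a password prompt) while in the background. A sketch that starts the processes concurrently with <code>Popen</code> and then waits for all of them:</p>

```python
import subprocess

def run_parallel(cmds, limit=5):
    """Start up to `limit` shell commands at once, then wait for them all.

    Popen returns immediately (unlike subprocess.run), so the commands
    overlap; wait() collects the exit codes afterwards.
    """
    procs = [subprocess.Popen(c, shell=True) for c in cmds[:limit]]
    return [p.wait() for p in procs]

# Hypothetical usage with the rsync commands built in the question:
# cmds = ['rsync --partial --append --progress -av '
#         'root@my.hopto.org:"/media/{}" ./test/'.format(a) for a in arr]
# run_parallel(cmds)
```

<p>Key-based SSH authentication avoids the password prompts that usually stop backgrounded rsync jobs.</p>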
|
<python><rsync>
|
2024-04-11 09:53:49
| 0
| 12,599
|
whitebear
|
78,309,174
| 10,143,378
|
Dash input number with thousand separator
|
<p>I am building a simple Dash application, and I have one component that takes input from the user as an integer.</p>
<p>The issue is that the integer should be between 1,000,000 and 1,000,000,000. So, to make it easy to read, I would love to display the thousands separated by commas instead of 1000000 or 1000000000.</p>
<p>Currently the workaround I found is to replace my dcc.Input number:</p>
<pre><code>dcc.Input(id='input', type='number', value=1000000, min=1000000, max=1000000000)
</code></pre>
<p>by a dcc.Input string :</p>
<pre><code> dcc.Input(id='input', type='text', value='1,000,000')
</code></pre>
<p>The main issue is that when the user enters the number I have to handle many potential problems (they didn't enter a number, I have to strip the commas and cast the string to int), so there are a lot of exceptions to handle on the callback side and potentially a lot of bugs.</p>
<p>I would just like to know whether it is possible to keep <code>number</code> as the input type AND have the displayed value comma-separated?</p>
<p>Thanks</p>
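<p>HTML number inputs cannot render thousands separators, so <code>type='number'</code> with a comma-formatted display is not possible in <code>dcc.Input</code> itself. One sketch that keeps the text workaround manageable is to concentrate all the messy cases in a single parsing helper (names here are illustrative):</p>

```python
def parse_amount(text, lo=1_000_000, hi=1_000_000_000):
    """Return the integer the user typed (commas allowed), or None if invalid."""
    try:
        value = int(str(text).replace(",", "").strip())
    except (TypeError, ValueError):
        return None
    return value if lo <= value <= hi else None

print(parse_amount("1,000,000"))   # 1000000
print(parse_amount("abc"))         # None
print(parse_amount("500"))         # None (below the minimum)
```

<p>Every callback then needs only a single <code>parse_amount</code> call plus a None check, instead of scattered exception handling.</p>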
|
<python><plotly-dash>
|
2024-04-11 08:53:17
| 2
| 576
|
kilag
|
78,309,170
| 6,221,742
|
How to fix relative import module error in VSCode
|
<p>I am trying to import a local module in my code using VSCode in my langchain app, but I am getting the following error</p>
<p><a href="https://i.sstatic.net/XJp7d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XJp7d.png" alt="enter image description here" /></a></p>
<p>here is my file structure</p>
<p><a href="https://i.sstatic.net/tTnWJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tTnWJ.png" alt="enter image description here" /></a></p>
<p>and the service.py file where I am trying to import the module</p>
<p><a href="https://i.sstatic.net/hpNG2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hpNG2.png" alt="enter image description here" /></a></p>
<p>I am running the <code>langchain serve</code> command while in the my-app directory.
Also, I have created a settings.json file in .vscode with the following content</p>
<pre><code>{
    "python.analysis.extraPaths": [
        "./Artificial Intelligence/LLMs/LANGSERVE/my-app/packages/sql-pgvector"
    ]
}
</code></pre>
|
<python><visual-studio-code><langchain>
|
2024-04-11 08:52:47
| 1
| 339
|
AndCh
|
78,309,105
| 2,633,803
|
Python Pandas read CSV file where dd/mm/yyyy date is shown as mm/dd
|
<p>I've got a CSV file where it keeps the date column in the format of DD/MM, however, when I click the cell, the real value is in the format of DD/MM/YYYY.</p>
<p>For example:</p>
<pre><code>28/03; 182409; 7579480; 1000; 1100; 1200; 1300
28/03; 220000; 0480000; 1000; 1100; 1200; 1300
28/03; 220000; 0760000; 1000; 1100; 1200; 1300
</code></pre>
<p>The value 28/03 in the first column is actually 28/03/2024. Please note that the year is not always 2024, but could be any possible year.</p>
<p>I think I can make a copy of the CSV file as XLSX and use openpyxl to convert the column to DD/MM/YYYY.</p>
<p>But I'm wondering if there's a way to do it directly using Pandas?</p>
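<p>If the raw file (opened in a text editor rather than Excel) actually contains the full <code>28/03/2024</code>, then the DD/MM display is just Excel cell formatting and pandas can parse the file directly; a sketch under that assumption:</p>

```python
import io
import pandas as pd

# Stand-in for the real file; the full year is assumed to be present on disk.
raw = "28/03/2024; 182409; 7579480; 1000; 1100; 1200; 1300\n"
df = pd.read_csv(io.StringIO(raw), sep=";", header=None, skipinitialspace=True)
df[0] = pd.to_datetime(df[0], format="%d/%m/%Y")
print(df[0].iloc[0])    # 2024-03-28 00:00:00
```

<p>If the file truly stores only DD/MM, the year is not recoverable from the CSV alone, and reading the XLSX copy (where cells hold real dates) with <code>pd.read_excel</code> is the safer route.</p>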
|
<python><pandas><date><openpyxl>
|
2024-04-11 08:40:24
| 1
| 7,498
|
ChangeMyName
|
78,309,090
| 3,433,875
|
Need help understanding how the xycoord axes system works in matplotlib
|
<p>I am always confused about how to annotate things in matplotlib, so I did a deeper dive and discovered the magical world of the different coordinate systems that exist in matplotlib, but I am having trouble understanding it all.</p>
<p>Starting with the axes system, when annotating this is what I got so far:</p>
<pre><code>from matplotlib import pyplot as plt
fig,ax = plt.subplots( figsize=(5,5), dpi=300)
ax.set_xlim(0,100)
ax.set_ylim(0,1000)
# -- Annotations --------------------------------
#coordinate system same as data xy = (100,1000)
ax.annotate(
xy=(80,800),
text='*1 -data',
ha='left', va='top',
xycoords="data"
)
#coordinates system xy = (0,1) from the axes
ax.annotate(
xy=(0.5,1.1),
text='*2 - axes fraction',
ha='left', va='top',
xycoords='axes fraction'
)
#combine data and axes fraction
ax.annotate(
xy=(30,0.5), #x = 30 and y is the middle of y axes
text='*3 -data, axes fraction',
ha='left', va='top',
xycoords=('data','axes fraction')
)
#coordinates system xy = (?,?) from the axes
ax.annotate(
xy=(50,50),
text='*4 - axes points',
ha='left', va='top',
xycoords='axes points'
)
#coordinates system xy = (?,?) from the axes
ax.annotate(
xy=(750,750),
text='*5 - axes pixels',
ha='left', va='top',
xycoords='axes pixels'
)
#annotation for the points coordinate system and xy locations.
t = "Footnotes:\n(1) data, coord @ (100,1000), plotted @ (80,800)\n(2) axes fraction, coord @ (0,1), plotted @ (0.5,1.1)\n(3) data,axes points combined, coord @ x=(0,100), y=(0,1) plotted @ (30,0.5)\n(4) axes points coord @ (?,?) plotted @ (50,50)\n(5)axes pixels, coord @ (?,?) plotted @ (750,750)"
ax.annotate(
xy=(0,-0.1),
text = t,
ha = "left", va= "top", color = "grey",
xycoords = "axes fraction"
)
</code></pre>
<p>Which generates this:
<a href="https://i.sstatic.net/5Y3ky.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Y3ky.png" alt="enter image description here" /></a></p>
<p>Obviously, I have <a href="https://matplotlib.org/stable/gallery/text_labels_and_annotations/annotation_demo.html" rel="nofollow noreferrer">read the documentation</a> but still not clear on this points:</p>
<ol>
<li><p>Why, when I specify xy coordinates outside the axes (example 2), does it still get plotted? I would have expected to have to use annotation_clip.</p>
</li>
<li><p>What are the coordinates for the axes points system? Completely lost on this one.</p>
</li>
<li><p>What are the coordinates for the axes pixel system? I know the figure is 1500x1500 pixels, but how does that translate into the axes? I didn't expect example 5 to be plotted there.</p>
</li>
</ol>
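<p>To answer questions 2 and 3 concretely: both "axes points" and "axes pixels" measure from the lower-left corner of the axes region, points in typographic points (1 pt = 1/72 inch) and pixels in device pixels (dpi per inch). A small sketch of the arithmetic, assuming the figure settings above (figsize 5x5, dpi 300):</p>

```python
# Conversion between the two axes-origin systems:
#   1 point = 1/72 inch, and 1 inch = dpi device pixels.
dpi = 300
pixels_per_point = dpi / 72              # ~4.17 pixels per point at dpi=300

# Example 4: (50, 50) in "axes points" ...
print(50 * pixels_per_point)             # ~208.3 "axes pixels" from the corner

# Example 5: the 5x5-inch figure is 1500x1500 device pixels in total, but the
# axes box is smaller because of the default margins, so (750, 750) in
# "axes pixels" lands in the upper-right region of the axes, not the figure
# centre.
```

<p>On question 1: by default <code>annotation_clip=None</code> clips only when <code>xycoords="data"</code>, which is why the "axes fraction" annotation at (0.5, 1.1) is still drawn outside the axes.</p>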
|
<python><matplotlib>
|
2024-04-11 08:36:46
| 1
| 363
|
ruthpozuelo
|
78,309,015
| 16,798,185
|
IPython installation on Mac
|
<p>I have uninstalled Anaconda on Mac and am now trying to install IPython using pip3.</p>
<p>When I try to install IPython, I get a prompt, "User for us-central1-python.pkg.dev: ". This asks for username and password. What should I enter here? I use <a href="https://en.wikipedia.org/wiki/Google_Cloud_Platform" rel="nofollow noreferrer">GCP</a>, and I have installed <em>gsutil</em>. I'm not sure if this is due to any GCP artifact. As root:</p>
<pre class="lang-none prettyprint-override"><code>cd ~
python --version
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>zsh: command not found: python
</code></pre>
<p>And:</p>
<pre class="lang-none prettyprint-override"><code>python3 --version
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Python 3.9.6
</code></pre>
<p>And:</p>
<pre class="lang-none prettyprint-override"><code>pip3 install ipython
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://pypi.org/simple, https://us-central1-python.pkg.dev/workspace_code/analyticsv2-pypi/simple/
User for us-central1-python.pkg.dev:
</code></pre>
<p>I verified file <em>~/.pip/pip.conf</em>. It has an entry as below</p>
<h3>File <em>~/.pip/pip.conf</em></h3>
<pre class="lang-none prettyprint-override"><code>[global]
extra-index-url = https://us-central1-python.pkg.dev/workspace_code/analyticsv2-pypi/simple/
</code></pre>
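<p>pip reads that extra index from <em>~/.pip/pip.conf</em>, and the Artifact Registry index at us-central1-python.pkg.dev requires credentials, which is why the prompt appears. If the private index is not actually needed (an assumption), a sketch of the fix is to comment the entry out:</p>

```ini
[global]
# extra-index-url = https://us-central1-python.pkg.dev/workspace_code/analyticsv2-pypi/simple/
```

<p>Alternatively, bypass it for a single install with <code>pip3 install ipython --index-url https://pypi.org/simple/</code>. If the private index is required, set up authentication for Artifact Registry (e.g. Google's keyring backend) instead of typing credentials.</p>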
|
<python><python-3.x><gsutil>
|
2024-04-11 08:22:23
| 2
| 377
|
user16798185
|
78,309,013
| 9,097,114
|
changing xpath dynamically in selenium - python
|
<p>Hi, I am unable to automate the Python script in a loop with dynamically changing XPaths, so each time I need to input the path manually.<br />
Below are sample XPaths (I actually have more to input) and the code.</p>
<p>HTML Element looks like below:</p>
<pre><code><i data-v-36de69d2="" aria-label="Play" tabindex="0" role="button" class="zm-icon-play icon"></i>
</code></pre>
<pre><code>path1= driver.find_element("xpath",'//*[@id="4cc49c1a-bbef-48d2-9ca2-b804db7be94a"]/div[5]/div/div[2]/div[1]/div/div[1]/div/div/div/div[3]/div/div[1]/i').click()
path2= driver.find_element("xpath",'//*[@id="cabfd3f2-83ab-4297-ab49-3efd7e0523f3"]/div[5]/div/div[2]/div[1]/div/div[1]/div/div/div/div[3]/div/div[1]/i').click()
</code></pre>
<ul>
<li>List of xpaths</li>
</ul>
<pre><code>xpath_1 = //*[@id="4cc49c1a-bbef-48d2-9ca2-b804db7be94a"]/div[5]/div/div[2]/div[1]/div/div[1]/div/div/div/div[3]/div/div[1]/i
xpath_2 = //*[@id="cabfd3f2-83ab-4297-ab49-3efd7e0523f3"]/div[5]/div/div[2]/div[1]/div/div[1]/div/div/div/div[3]/div/div[1]/i
</code></pre>
<p>How can I build the XPath dynamically in a loop, using a single template for the list above?</p>
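<p>Since only the leading <code>id</code> changes between the two XPaths, one sketch is to keep the ids in a list and build each XPath from a template (ids copied from the question; the suffix is assumed identical for every element):</p>

```python
element_ids = [
    "4cc49c1a-bbef-48d2-9ca2-b804db7be94a",
    "cabfd3f2-83ab-4297-ab49-3efd7e0523f3",
]
SUFFIX = "/div[5]/div/div[2]/div[1]/div/div[1]/div/div/div/div[3]/div/div[1]/i"

def build_xpath(element_id):
    # f-string keeps the id quoted exactly as in the original locators
    return f'//*[@id="{element_id}"]{SUFFIX}'

for eid in element_ids:
    xpath = build_xpath(eid)
    # driver.find_element("xpath", xpath).click()
```

<p>Given the <code>aria-label</code> in the HTML, a shorter alternative may be <code>driver.find_elements("xpath", '//i[@aria-label="Play"]')</code> and clicking each element in the returned list.</p>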
|
<python><selenium-webdriver>
|
2024-04-11 08:22:01
| 1
| 523
|
san1
|
78,309,010
| 19,500,571
|
Remove everything in string after the first occurrence of a word
|
<p>I have a dataframe with a column consisting of strings. I want to trim the strings in the column such that everything is removed after the <strong>first</strong> appearance of a given word. The words are in this list:</p>
<pre><code>words_to_trim_after = ['test', 'hello', 'very good']
</code></pre>
<p>So if I have a dataframe such as the following</p>
<pre><code>df = pd.DataFrame({'a':['test this is a test bla bla', 'hello bla bla this is a test', 'very good qwerty this is nice']})
</code></pre>
<p>I want to end up with</p>
<pre><code>df_end = pd.DataFrame({'a':['test', 'hello', 'very good']})
</code></pre>
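<p>A vectorized sketch: build one regex alternation from the word list, keep the first matched word, and drop everything after it:</p>

```python
import re
import pandas as pd

words_to_trim_after = ["test", "hello", "very good"]
df = pd.DataFrame({"a": ["test this is a test bla bla",
                         "hello bla bla this is a test",
                         "very good qwerty this is nice"]})

# One alternation pattern: capture the first matching word, consume the rest.
# (If some words are prefixes of others, sort them longest-first.)
pattern = "(" + "|".join(map(re.escape, words_to_trim_after)) + ").*"
df["a"] = df["a"].str.replace(pattern, r"\1", regex=True)
print(df["a"].tolist())    # ['test', 'hello', 'very good']
```

<p>Because <code>.*</code> runs to the end of the string, only the first occurrence matters, matching the requested behavior.</p>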
|
<python><pandas>
|
2024-04-11 08:21:14
| 6
| 469
|
TylerD
|
78,308,942
| 13,491,504
|
Plotting a surface over lines plotly
|
<p>I have this code snipped, that should plot a surface over a few lines:</p>
<pre><code>import plotly.graph_objects as go
import plotly.io as pio
import math
import numpy as np
t_changes = np.linspace((0.005*2*math.pi)/60 , (1*2*math.pi)/60 , 10)
omega_all = np.linspace((1*2*math.pi)/60 ,(12*2*math.pi)/60 , 10)
max_values_all = [1.4185558393536863, 1.4325906947905434, 1.438658017942095, 1.4806644284478363, 1.4761508670671415, 1.4649492471829895, 1.464546919691028, 1.4578007061068432, 1.4428714932707587, 1.4500144702965356],[1.6441629388568684, 1.642323391267553, 1.6491736867101752, 1.6412988007431675, 1.6290382708032518, 1.687033971399921, 1.7249970205847203, 1.7375113332368095, 1.734492370556642, 1.7242648069286037],[2.0845437496335575, 2.072843683096361, 2.035169117911497, 2.063686038710132, 1.966491883139673, 2.036435788844538, 2.000384950132592, 1.9602627462709168, 1.9992717869448466, 2.02862099663119],[3.3253818185735087, 3.1215539630481604, 2.981889203258018, 2.8546336239840215, 2.7042612422101002, 2.575780911605224, 2.5755048740741917, 2.6205430458363095, 2.576086019821094, 2.487398700065042],[15.432838261722877, 6.5095720712941985, 5.200282346853118, 4.495357558732455, 4.136530871191216, 3.766634997051916, 3.6827888643956186, 3.4764385554542914, 3.430120257752737, 3.4201713261825426],[75.66433489618231, 15.94566966995464, 10.161220922127605, 8.007523032946853, 6.800634588849489, 6.000935050965966, 5.536898133810305, 5.21365607068837, 5.010566351336815, 4.708208983726101],[75.66286832418304, 15.363231967657855, 15.81080564424909, 13.0105751921456, 11.001714824633169, 9.553264617238883, 8.572992551192133, 8.004222045653602, 7.365317024063387, 7.085229662879642],[75.66242236630372, 15.363226081588929, 10.955988755682252, 13.158249063867139, 14.179122828320526, 13.460821531609223, 12.624633192306533, 11.937231489633758, 11.274180241228226, 10.672516546418693],[75.65799160190585, 21.62711751806979, 18.008379194730093, 14.53079025616018, 8.520673195597784, 14.016979085011064, 16.276408102947748, 17.05937073631494, 17.220381393056392, 16.8222819511244],[118.96662233164312, 66.31352570490306, 68.16962905085047, 61.396954421293266, 74.95559072827379, 67.92902322683065, 63.00643530404031, 63.21829273940191, 65.26976962802642, 67.46680681352494]
fig = go.Figure()
# Loop through each line and add it to the figure
for i in range(len(omega_all)):
    x_vals = [omega_all[i] * 60 / (2 * math.pi)] * len(t_changes)
    y_vals = t_changes
    z_vals = max_values_all[i]
    fig.add_trace(go.Scatter3d(x=x_vals, y=y_vals, z=z_vals, mode='lines'))
fig.add_trace(go.Surface(x=(omega_all*60/(2*math.pi)), y=t_changes, z=max_values_all))
# Set layout
fig.update_layout(scene=dict(aspectmode="cube"))
# Show the figure
fig.show()
</code></pre>
<p>But the result is this:</p>
<p><a href="https://i.sstatic.net/0O0EE.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0O0EE.jpg" alt="enter image description here" /></a></p>
<p>I don't get why, maybe you can help.</p>
<p>As you can see, the lines are plotted just fine, but the surface is on the wrong side and, in my opinion, doesn't trace the lines very accurately.</p>
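<p>One likely cause, sketched below with small synthetic data: <code>go.Surface</code> places <code>z[i][j]</code> at <code>(x=x[j], y=y[i])</code>, i.e. rows of <code>z</code> run along <code>y</code>. In the code above, the rows of <code>max_values_all</code> run along <code>omega_all</code>, which is passed as <code>x</code>, so the surface ends up transposed relative to the lines; transposing <code>z</code> should align them:</p>

```python
import numpy as np

# Synthetic stand-in: row i of z belongs to omega_all[i] (passed as x),
# column j to t_changes[j] (passed as y).
omega_all = np.array([1.0, 2.0, 3.0])
t_changes = np.array([0.1, 0.2])
z = np.array([[10, 11],
              [20, 21],
              [30, 31]])            # shape (len(omega_all), len(t_changes))

# go.Surface wants the rows of z to run along y, so transpose before plotting:
z_surface = z.T                     # shape (len(t_changes), len(omega_all))
# fig.add_trace(go.Surface(x=omega_all * 60 / (2 * math.pi),
#                          y=t_changes, z=z_surface))
print(z_surface.shape)
```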
|
<python><plotly><surface>
|
2024-04-11 08:04:44
| 1
| 637
|
Mo711
|
78,308,826
| 3,114,229
|
sphinx docstrings and vscode
|
<p>I have the following docstring:</p>
<pre><code>def test(foo):
"""this is my function
.. code-block:: python
cool function
now I should be outside the codeblock.
But I'm not?
.. code-block:: python
next function
I'm still inside the original codeblock?
:param foo: _description_
:type foo: _type_
"""
</code></pre>
<p>Which renders like this in VScode:</p>
<p><a href="https://i.sstatic.net/CRdby.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CRdby.png" alt="enter image description here" /></a></p>
<p>Am I writing the syntax incorrectly, or does VScode simply struggle with the rendering of sphinx format?</p>
|
<python><visual-studio-code><python-sphinx><docstring>
|
2024-04-11 07:42:06
| 1
| 419
|
Martin
|
78,308,640
| 1,332,690
|
Spacy matching countries in messy data
|
<p>I have some natural language text that I am trying to parse using <code>spacy</code> but having a hard time getting this to cover all possible variations.</p>
<p>Data is basically a set of recommendations for a set of countries, embedded together in a single text block. I want to extract the list of countries and their associated recommendations.</p>
<p>For example:</p>
<pre><code>Consider implementing the recommendations of the Special Rapporteur on violence against women and CEDAW (India) (Thailand), and France recommends to strengthen measures to increase the participation by ethnic minority women in line with CEDAW recommendations, and consider intensifying human rights education (Ghana).
</code></pre>
<p>Should be processed into 2 lists:</p>
<pre><code>states = [["india", "thailand"], ["ghana"]]
recommendations = ["Consider implementing the recommendations of the Special Rapporteur on violence against women and CEDAW", "France recommends to strengthen measures to increase the participation by ethnic minority women in line with CEDAW recommendations, and consider intensifying human rights education"]
</code></pre>
<p>So far I have been getting away with a custom sentence segmentation in <code>spacy</code> but this is growing out of hand to handle lots of edge cases such as countries appearing in the form <code>(Country1, Country2 and Country3)</code> or sometimes missing parenthesis or typos in countries. Some countries can also have very different spellings, like <code>iran</code>, <code>islamic republic of iran</code>, <code>iran (islamic republic of)</code>, <code>iran, islamic republic of</code>.</p>
<p>Looking for some guidance on what a proper way would be to handle this. I Was thinking using <code>spacy</code>'s <code>Matcher</code> but it wasn't clear how to apply it to such a use case.</p>
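<p>One direction worth sketching is spaCy's <code>PhraseMatcher</code>, which handles multi-token variants and case-insensitive matching without regexes; the alias table below is illustrative, not a complete country list:</p>

```python
import spacy
from spacy.matcher import PhraseMatcher

# A blank pipeline is enough for phrase matching; no model download needed.
nlp = spacy.blank("en")

# Canonical name -> known spelling variants (illustrative, not exhaustive).
aliases = {
    "iran": ["iran", "islamic republic of iran",
             "iran (islamic republic of)", "iran, islamic republic of"],
    "india": ["india"],
    "thailand": ["thailand"],
}
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
for canonical, variants in aliases.items():
    matcher.add(canonical, [nlp.make_doc(v) for v in variants])

doc = nlp("Consider implementing the recommendations (India) (Islamic Republic of Iran)")
found = {nlp.vocab.strings[match_id] for match_id, start, end in matcher(doc)}
print(found)
```

<p>For typos, fuzzy matching (e.g. <code>rapidfuzz</code> against the same alias table) can be layered on top of the exact matcher.</p>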
|
<python><nlp><spacy>
|
2024-04-11 06:59:41
| 1
| 41,578
|
Charles Menguy
|
78,308,420
| 2,545,680
|
Is it possible for state["response"] not to raise KeyError: 'response' when the property is not there
|
<p>I'm looking at <a href="https://github.com/langchain-ai/langgraph/blob/main/examples/plan-and-execute/plan-and-execute.ipynb" rel="nofollow noreferrer">this</a> example here under the <code>Create the Graph</code> section:</p>
<pre><code>class PlanExecute(TypedDict):
input: str
plan: List[str]
past_steps: Annotated[List[Tuple], operator.add]
response: str
def should_end(state: PlanExecute):
if state["response"]:
return True
else:
return False
</code></pre>
<p>For me it throws the <code>KeyError: 'response'</code> exception.</p>
<p>Naturally I'd do it like this:</p>
<pre><code>def should_end(state: PlanExecute):
return 'response' in state
</code></pre>
<p>or like this</p>
<pre><code>def should_end(state: PlanExecute):
if state.get("response"):
return True
else:
return False
</code></pre>
<p>But I'm wondering why in the example that's supposed to work they used the construction that throws the error. Is it simply a typo or there are some cases under which the error wouldn't be thrown?</p>
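<p>A <code>TypedDict</code> only annotates types; at runtime the state is a plain <code>dict</code>, so a key that nothing has written yet raises <code>KeyError</code> exactly as observed, and <code>state.get("response")</code> is the robust form. A minimal sketch:</p>

```python
from typing import TypedDict

class PlanExecute(TypedDict, total=False):
    input: str
    response: str

state: PlanExecute = {"input": "hi"}    # no "response" yet

# TypedDict is erased at runtime: state is a plain dict, so a missing key
# raises KeyError no matter what the annotation says.
try:
    _ = state["response"]
    raised = False
except KeyError:
    raised = True

print(raised)                  # True
print(state.get("response"))   # None -- the safe variant
```

<p>The notebook example presumably only works because some earlier graph step always populates <code>response</code> before <code>should_end</code> runs; the annotation itself provides no default.</p>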
|
<python>
|
2024-04-11 05:58:42
| 1
| 106,269
|
Max Koretskyi
|
78,308,325
| 15,751,564
|
Convert pandas Dataframe to 3D numpy matrix
|
<p>I have a pandas.DataFrame, given below; you can build it with <code>pd.DataFrame(data_dict)</code>:</p>
<pre><code>data_dict =
{'Elevation': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1},
'Azimuth': {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 0, 6: 1, 7: 2, 8: 3, 9: 4},
'median': {0: 255,
1: 255,
2: 255,
3: 255,
4: 255,
5: 256,
6: 256,
7: 256,
8: 256,
9: 256},
'count': {0: 255,
1: 255,
2: 255,
3: 255,
4: 255,
5: 250,
6: 250,
7: 250,
8: 250,
9: 250},
'to_drop': {0: 1,
1: 1,
2: 1,
3: 1,
4: 1,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0}}
</code></pre>
<p>I want to convert it to a 3D matrix in numpy. The shape of the 3D matrix would be <code>[Azimuth.nunique(), Elevation.nunique(), 3(Median,count,to_drop)]</code> i.e, [5,2,3].</p>
<p>I have tried <code>data.groupby(['Elevation','Azimuth']).apply(lambda x: x.values).reset_index().values</code>, which results in a (10, 3) array. How do I get a (5, 2, 3) array?</p>
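<p>A sketch without <code>groupby</code>: sort by the two axes so the values land in a predictable order, take the three value columns as a 2-D array, reshape to (Elevation, Azimuth, 3), then move Azimuth to the front:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Elevation": [0] * 5 + [1] * 5,
    "Azimuth":   list(range(5)) * 2,
    "median":    [255] * 5 + [256] * 5,
    "count":     [255] * 5 + [250] * 5,
    "to_drop":   [1] * 5 + [0] * 5,
})

n_el = df["Elevation"].nunique()    # 2
n_az = df["Azimuth"].nunique()      # 5
vals = (df.sort_values(["Elevation", "Azimuth"])
          [["median", "count", "to_drop"]]
          .to_numpy()
          .reshape(n_el, n_az, 3)   # (Elevation, Azimuth, 3)
          .transpose(1, 0, 2))      # -> (Azimuth, Elevation, 3) == (5, 2, 3)
print(vals.shape)
```

<p>The transpose only reorders axes, so <code>vals[az, el]</code> holds the (median, count, to_drop) triple for that grid cell.</p>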
|
<python><pandas><numpy>
|
2024-04-11 05:21:59
| 1
| 1,398
|
darth baba
|
78,308,082
| 3,078,313
|
import osgeo: ImportError: DLL load failed while importing _gdal
|
<p>I'm loading <code>osgeo</code> in a conda Python environment. Initially I reinstalled <code>gdal</code> and <code>libgdal</code> according to <a href="https://gis.stackexchange.com/questions/461363/modulenotfounderror-no-module-named-gdal-in-ubuntu-20-04-and-python-3-8">this</a> post and it seems to work. I'm using the Python located at: <code>C:\\Users\\Admin\\AppData\\Local\\r-miniconda\\envs\\ENVNAME\\python.exe</code></p>
<pre><code>Python 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:38:46) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
</code></pre>
<p>Running <code>import osgeo</code> works when I launch Python from the Anaconda PowerShell Prompt (miniconda3) found at <code>%windir%\System32\cmd.exe "/K" C:\Users\Admin\miniconda3\Scripts\activate.bat C:\Users\Admin\miniconda3</code></p>
<p>But I get an error when running the same Python launched from Windows PowerShell or the Anaconda Prompt (miniconda3). I'm in the same folder in all three cases.</p>
<p>Anaconda Powershell Prompt (miniconda3): <code>%windir%\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy ByPass -NoExit -Command "& 'C:\Users\Admin\miniconda3\shell\condabin\conda-hook.ps1' ; conda activate 'C:\Users\Admin\miniconda3'</code></p>
<p>Windows PowerShell: <code>%SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe</code></p>
<p>How can I import <code>osgeo</code> on the same Python but from different consoles?</p>
<p>Here the error I got:</p>
<pre><code>(base) C:\Users\Admin>C:\\Users\\Admin\\AppData\\Local\\r-miniconda\\envs\\ENVNAME/python.exe
Python 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:38:46) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import osgeo
Traceback (most recent call last):
File "C:\Users\Admin\AppData\Local\r-miniconda\envs\ENVNAME\lib\site-packages\osgeo\__init__.py", line 30, in swig_import_helper
return importlib.import_module(mname)
File "C:\Users\Admin\AppData\Local\r-miniconda\envs\ENVNAME\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 666, in _load_unlocked
File "<frozen importlib._bootstrap>", line 565, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 1173, in create_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
ImportError: DLL load failed while importing _gdal: Not found specific proccess
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Admin\AppData\Local\r-miniconda\envs\ENVNAME\lib\site-packages\osgeo\__init__.py", line 35, in <module>
_gdal = swig_import_helper()
File "C:\Users\Admin\AppData\Local\r-miniconda\envs\ENVNAME\lib\site-packages\osgeo\__init__.py", line 32, in swig_import_helper
return importlib.import_module('_gdal')
File "C:\Users\Admin\AppData\Local\r-miniconda\envs\ENVNAME\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ImportError: DLL load failed while importing _gdal: Not found specific process
>>>
</code></pre>
<p>Also, running <code>import sys; sys.path</code> gives the same response in all three consoles:</p>
<pre><code>['', 'C:\\Users\\Admin\\AppData\\Local\\r-miniconda\\envs\\cola\\python39.zip',
'C:\\Users\\Admin\\AppData\\Local\\r-miniconda\\envs\\cola\\DLLs',
'C:\\Users\\Admin\\AppData\\Local\\r-miniconda\\envs\\cola\\lib',
'C:\\Users\\Admin\\AppData\\Local\\r-miniconda\\envs\\cola',
'C:\\Users\\Admin\\AppData\\Local\\r-miniconda\\envs\\cola\\lib\\site-packages']
</code></pre>
|
<python><conda><gdal><miniconda><osgeo>
|
2024-04-11 03:38:02
| 1
| 1,358
|
gonzalez.ivan90
|
78,307,771
| 312,444
|
Deploying a python websocket client in GCP
|
<p>I need to consume a websocket server using a Python client. Since I need to hold the connection and keep it open, it seems that the only way is to use a virtual machine. Is there any other option?</p>
|
<python><google-cloud-platform><websocket>
|
2024-04-11 00:59:26
| 1
| 8,446
|
p.magalhaes
|
78,307,569
| 1,149,813
|
selenium in python: ElementNotInteractableException -- Element could not be scrolled into view (but it is)
|
<p>I have the following bit of code in Python: it opens Firefox on a specified page, closes the "accept cookies" window, and scrolls down to click a button, "Carica Altro". When I click the button I get: <code>ElementNotInteractableException: Message: Element <button class="button button--primary medium expanded button--login" type="submit"> could not be scrolled into view</code>,
but the button is in fact visible, because I see it in the UI (as in the picture).
<a href="https://i.sstatic.net/BKRV2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BKRV2.png" alt="result of scrolling with my script" /></a></p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium.webdriver.common.action_chains import ActionChains
import time
my_url='https://www.raiplay.it/programmi/winxclub/episodi/stagione-5'
def main():
    options = FirefoxOptions()
    browser = webdriver.Firefox(options=options)
    browser.implicitly_wait(10)
    browser.get(my_url)
    wait = WebDriverWait(browser, 30)
    try:
        wait.until(EC.element_to_be_clickable((By.CLASS_NAME, "as-oil__btn-primary.as-js-optin"))).click()
        print('Clicked successfully')
    except:
        print('Could not click')
        pass
    l = browser.find_element(By.CLASS_NAME, "button.button--primary.medium.expanded")
    browser.execute_script('window.scrollBy(0, 1500)')
    time.sleep(10)
    l.click()
    return
</code></pre>
|
<python><selenium-webdriver><web-scraping>
|
2024-04-10 23:27:46
| 2
| 2,211
|
simona
|
78,307,330
| 3,127,059
|
Stream OpenCv frames as video pipeline with Gstreamer
|
<p>I have to read frames from a RealSense camera and build an <code>RTSP server</code> to stream the frames as video, so that multiple other clients are able to watch it.
Platform: Windows 10.</p>
<p>To do so, it seems that I have to use the <code>Gstreamer</code> backend of <code>OpenCv VideoWriter</code>.
I have done the following steps.</p>
<ol>
<li>Rebuild <code>OpenCv</code> with <code>Gstreamer</code> backend</li>
<li>Install <code>Gstreamer MSVC 64-bit (VS 2019, Release CRT)</code> runtime and development packages.</li>
<li>Write a simple Python program to test <code>OpenCv</code> + <code>Gstreamer</code></li>
</ol>
<p>It seems that GStreamer can't load its plugins even though I have added the GStreamer bin folder and the GStreamer plugin folder to the DLL search path.</p>
<p><strong>Sample Python program</strong></p>
<pre><code>import os
os.add_dll_directory("C:\\gstreamer\\1.0\\msvc_x86_64\\bin")
os.add_dll_directory("C:\\gstreamer\\1.0\\msvc_x86_64\\lib\\gstreamer-1.0")
import cv2 as cv
import numpy
sourceUrl = "http://pendelcam.kip.uni-heidelberg.de/mjpg/video.mjpg"
#command = "appsrc ! videoconvert ! x264enc ! rtph264pay ! udpsink port=5004 host=127.0.0.1 "
command = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw, format=BGRx ! videoconvert ! x264enc speed-preset=veryfast tune=zerolatency bitrate=800 insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96 config-interval=1 ! udpsink port=5004 host=127.0.0.1 auto-multicast=0"
cap = cv.VideoCapture(sourceUrl)
if not cap.isOpened():
print("Capture Not Opened")
writer = cv.VideoWriter(command, cv.CAP_GSTREAMER, 0, 30, (1280, 720), True)
if not writer.isOpened():
print("WRITER NOT OPENED!!!!!")
if cap.isOpened() and writer.isOpened():
while True:
ret, frame = cap.read()
if not ret:
print("Not Fram received")
break
cv.imshow("frame", frame)
writer.write(frame)
if cv.waitKey(1) > 0:
break
cap.release()
writer.release()
</code></pre>
<p><strong>Error output</strong></p>
<pre><code>(python.exe:6512): GStreamer-WARNING **: 17:18:56.512: Failed to load plugin 'C:\gstreamer\1.0\msvc_x86_64\lib\gstreamer-1.0\gstx264.dll': The specified module could not be found.
This usually means Windows was unable to find a DLL dependency of the plugin. Please check that PATH is correct.
You can run 'dumpbin -dependents' (provided by the Visual Studio developer prompt) to list the DLL deps of any DLL.
There are also some third-party GUIs to list and debug DLL dependencies recursively.
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.512: gst_element_factory_get_element_type: assertion 'GST_IS_ELEMENT_FACTORY (factory)' failed
(python.exe:6512): GLib-GObject-WARNING **: 17:18:56.512: cannot retrieve class for invalid (unclassed) type '<invalid>'
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.512: gst_element_factory_get_element_type: assertion 'GST_IS_ELEMENT_FACTORY (factory)' failed
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.512: gst_element_factory_get_element_type: assertion 'GST_IS_ELEMENT_FACTORY (factory)' failed
(python.exe:6512): GLib-GObject-CRITICAL **: 17:18:56.512: g_object_class_find_property: assertion 'G_IS_OBJECT_CLASS (class)' failed
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.512: gst_object_unref: assertion 'object != NULL' failed
(python.exe:6512): GLib-GObject-CRITICAL **: 17:18:56.512: g_type_class_unref: assertion 'g_class != NULL' failed
(python.exe:6512): GStreamer-WARNING **: 17:18:56.512: Failed to load plugin 'C:\gstreamer\1.0\msvc_x86_64\lib\gstreamer-1.0\gstvideoparsersbad.dll': The specified module could not be found.
This usually means Windows was unable to find a DLL dependency of the plugin. Please check that PATH is correct.
You can run 'dumpbin -dependents' (provided by the Visual Studio developer prompt) to list the DLL deps of any DLL.
There are also some third-party GUIs to list and debug DLL dependencies recursively.
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.512: gst_element_factory_get_element_type: assertion 'GST_IS_ELEMENT_FACTORY (factory)' failed
(python.exe:6512): GLib-GObject-WARNING **: 17:18:56.525: cannot retrieve class for invalid (unclassed) type '<invalid>'
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.526: gst_element_factory_get_element_type: assertion 'GST_IS_ELEMENT_FACTORY (factory)' failed
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.526: gst_element_factory_get_element_type: assertion 'GST_IS_ELEMENT_FACTORY (factory)' failed
(python.exe:6512): GStreamer-WARNING **: 17:18:56.530: Failed to load plugin 'C:\gstreamer\1.0\msvc_x86_64\lib\gstreamer-1.0\gstvideoparsersbad.dll': The specified module could not be found.
This usually means Windows was unable to find a DLL dependency of the plugin. Please check that PATH is correct.
You can run 'dumpbin -dependents' (provided by the Visual Studio developer prompt) to list the DLL deps of any DLL.
There are also some third-party GUIs to list and debug DLL dependencies recursively.
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.531: gst_object_unref: assertion 'object != NULL' failed
(python.exe:6512): GLib-GObject-CRITICAL **: 17:18:56.532: g_type_class_unref: assertion 'g_class != NULL' failed
(python.exe:6512): GStreamer-WARNING **: 17:18:56.535: Failed to load plugin 'C:\gstreamer\1.0\msvc_x86_64\lib\gstreamer-1.0\gstrtp.dll': The specified module could not be found.
This usually means Windows was unable to find a DLL dependency of the plugin. Please check that PATH is correct.
You can run 'dumpbin -dependents' (provided by the Visual Studio developer prompt) to list the DLL deps of any DLL.
There are also some third-party GUIs to list and debug DLL dependencies recursively.
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.537: gst_element_factory_get_element_type: assertion 'GST_IS_ELEMENT_FACTORY (factory)' failed
(python.exe:6512): GLib-GObject-WARNING **: 17:18:56.537: cannot retrieve class for invalid (unclassed) type '<invalid>'
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.539: gst_element_factory_get_element_type: assertion 'GST_IS_ELEMENT_FACTORY (factory)' failed
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.540: gst_element_factory_get_element_type: assertion 'GST_IS_ELEMENT_FACTORY (factory)' failed
(python.exe:6512): GLib-GObject-CRITICAL **: 17:18:56.540: g_object_class_find_property: assertion 'G_IS_OBJECT_CLASS (class)' failed
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.541: gst_object_unref: assertion 'object != NULL' failed
(python.exe:6512): GLib-GObject-CRITICAL **: 17:18:56.541: g_type_class_unref: assertion 'g_class != NULL' failed
(python.exe:6512): GStreamer-WARNING **: 17:18:56.545: Failed to load plugin 'C:\gstreamer\1.0\msvc_x86_64\lib\gstreamer-1.0\gstudp.dll': The specified module could not be found.
This usually means Windows was unable to find a DLL dependency of the plugin. Please check that PATH is correct.
You can run 'dumpbin -dependents' (provided by the Visual Studio developer prompt) to list the DLL deps of any DLL.
There are also some third-party GUIs to list and debug DLL dependencies recursively.
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.546: gst_element_factory_get_element_type: assertion 'GST_IS_ELEMENT_FACTORY (factory)' failed
(python.exe:6512): GLib-GObject-WARNING **: 17:18:56.547: cannot retrieve class for invalid (unclassed) type '<invalid>'
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.548: gst_element_factory_get_element_type: assertion 'GST_IS_ELEMENT_FACTORY (factory)' failed
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.549: gst_element_factory_get_element_type: assertion 'GST_IS_ELEMENT_FACTORY (factory)' failed
(python.exe:6512): GLib-GObject-CRITICAL **: 17:18:56.551: g_object_class_find_property: assertion 'G_IS_OBJECT_CLASS (class)' failed
(python.exe:6512): GStreamer-CRITICAL **: 17:18:56.552: gst_object_unref: assertion 'object != NULL' failed
(python.exe:6512): GLib-GObject-CRITICAL **: 17:18:56.553: g_type_class_unref: assertion 'g_class != NULL' failed
[ WARN:0@1.151] global cap_gstreamer.cpp:2436 cv::CvVideoWriter_GStreamer::open OpenCV | GStreamer warning: error opening writer pipeline: no property "speed-preset" in element "x264enc"
</code></pre>
|
<python><opencv><gstreamer><rtsp>
|
2024-04-10 22:05:31
| 0
| 808
|
JuanDYB
|
78,307,146
| 14,606,046
|
Selenium in python Unable to locate element error 404
|
<p>I am trying to use Selenium in Python to click the "Load More" button to show all the reviews on a specific webpage. However, I am encountering issues when locating the button element in Selenium, which always returns an <strong>Error: 404 - No Such Element: Unable to locate element</strong>.</p>
<pre><code>import requests
from urllib.parse import urljoin
import time
from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
options.add_experimental_option("detach", True)
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()),options=options)
url = 'https://www.walgreens.com/store/c/allegra-adult-24hr-tablet-180-mg,-allergy-relief/ID=300409806-product'
driver.get(url)
time.sleep(5)
while True:
    try:
        # loadMoreBtn1 = driver.find_element(By.CSS_SELECTOR , 'button.bv-rnr__sc-16j1lpy-3 bv-rnr__sc-17t5kn5-1.jSgrbb.gSrzYA')
        # loadMoreBtn2 = driver.find_element(By.XPATH,'//*[@class="bv-rnr__sc-16j1lpy-3 bv-rnr__sc-17t5kn5-1 jSgrbb gSrzYA"]')
        loadMoreBtn3 = driver.find_element(By.CLASS_NAME,'bv-rnr__sc-16j1lpy-3 bv-rnr__sc-17t5kn5-1 jSgrbb gSrzYA')
        loadMoreBtn3.click()
        time.sleep(2)
    except:
        break
</code></pre>
<p>I already tried finding the element by CSS, XPath, and class name, with no luck in making it work. I checked the webpage, and it seems the class name is fixed as well.</p>
<pre><code><div class="bv-rnr__sc-17t5kn5-0 gcGJRh">
<button aria-label="Load More , This action will load more reviews on screen" class="bv-rnr__sc-16j1lpy-3 bv-rnr__sc-17t5kn5-1 jSgrbb gSrzYA">Load More</button>
</div>
</code></pre>
<p>I also tried CTRL+F to test the CSS and XPath in the page source, and it returns zero matches as well.
<a href="https://i.sstatic.net/J6sxh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J6sxh.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver><web-scraping>
|
2024-04-10 21:13:40
| 1
| 6,003
|
Kristkun
|
78,307,104
| 8,006,721
|
Python: How to get SSRS Report Subscriptions published by different user via API
|
<p>I'm using the SSRS API with Python 3 and need to get all reports and their subscriptions.</p>
<p><a href="https://app.swaggerhub.com/apis/microsoft-rs/SSRS/2.0#/info" rel="nofollow noreferrer">https://app.swaggerhub.com/apis/microsoft-rs/SSRS/2.0#/info</a></p>
<pre><code>from requests_negotiate_sspi import HttpNegotiateAuth
import requests
req_url = r'http://<my_server_url>/reports/api/v2.0/Subscriptions'
session = requests.Session()
session.auth = HttpNegotiateAuth()
report_response = session.get(req_url, stream=True)
report_response_json = report_response.json()
session.close()
# do stuff with json response, eventually export to CSV
</code></pre>
<p>This works for getting me all the information I need for reports that <strong>I created</strong> and published. However, there are reports published by other users on the server and I'd like to be able to get their report information too.</p>
|
<python><python-3.x><reporting-services>
|
2024-04-10 21:00:41
| 1
| 332
|
sushi
|
78,307,099
| 7,809,915
|
Data scraping fails: Seeking assistance
|
<p>I attempted the following:</p>
<ul>
<li>Utilize the German stock exchange's API (<a href="https://api.boerse-frankfurt.de/v1/search/equity_search" rel="nofollow noreferrer">https://api.boerse-frankfurt.de/v1/search/equity_search</a>) to retrieve index values.</li>
<li>The API can be accessed externally using parameters. Normally it would be called from <a href="https://www.boerse-frankfurt.de/index/dax/zugehoerige-werte" rel="nofollow noreferrer">https://www.boerse-frankfurt.de/index/dax/zugehoerige-werte</a>.</li>
<li>There is a similar question here (<a href="https://stackoverflow.com/questions/70007317/extract-or-generate-x-client-traceid-for-header-in-get-request">Extract or generate X-Client-TraceId for header in GET-request</a>), but I'm attempting to query a different API endpoint.</li>
<li>There is a blog post here (<a href="https://lwthiker.com/reversing/2022/02/12/analyzing-stock-exchange-api.html" rel="nofollow noreferrer">https://lwthiker.com/reversing/2022/02/12/analyzing-stock-exchange-api.html</a>), but I couldn't adapt it for my scenario.</li>
<li>I suspect that my URL is not correctly composed.</li>
</ul>
<p>This is my source code:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import hashlib
import json
import requests
from pprint import pprint
def generate_headers(url):
    salt = "w4ivc1ATTGta6njAZzMbkL3kJwxMfEAKDa3MNr"
    current_time = datetime.datetime.now(tz=datetime.timezone.utc)
    client_date = (current_time
                   .isoformat(timespec="milliseconds")
                   .replace("+00:00", "Z")
                   )
    client_traceid = hashlib.md5(
        (client_date + url + salt).encode("utf-8")
    )
    security = hashlib.md5(
        current_time.strftime("%Y%m%d%H%M").encode("utf-8")
    )
    return {
        "Client-Date": client_date,
        "X-Client-TraceId": client_traceid.hexdigest(),
        "X-Security": security.hexdigest()
    }
payload = {"indices" : ["DE0007203275"],
"lang" : "de",
"offset" : 0,
"limit" : 50,
"sorting" : "TURNOVER",
"sortOrder": "DESC"}
urlString = "https://api.boerse-frankfurt.de/v1/search/equity_search?indices=['DE0007203275']&lang=de&offset=0&limit=50&sorting=TURNOVER&sortOrder=DESC"
result = generate_headers(urlString)
print(result)
headers = {"Host" : "api.boerse-frankfurt.de",
"Referer" : "https://www.boerse-frankfurt.de/",
"Content-Type" : "application/json; charset=utf-8",
"Client-Date" : result["Client-Date"],
"X-Client-TraceId" : result["X-Client-TraceId"],
"X-Security" : result["X-Security"]
}
response = requests.post(urlString,
                         headers=headers,
                         json=payload)
pprint(json.loads(response.content))
</code></pre>
<p>Any suggestions are welcome. Thank you!!!</p>
|
<python><web-scraping><python-requests><http-headers><stock>
|
2024-04-10 20:59:35
| 1
| 490
|
M14
|
78,307,073
| 7,903,456
|
LangChain agent parsing error with structured_chat_agent and Wikipedia tool, handle_parsing_errors hits limit
|
<p>I am trying to ask GPT 4 to use Wikipedia for a prompt, using agents and tools via LangChain.</p>
<p>The difficulty I'm running into is the book I've been using, <em>Developing Apps with GPT-4 and ChatGPT: Build Intelligent Chatbots, Content Generators, and More</em>, while published in 2023, already has code examples that are deprecated.</p>
<p>For example, I am trying to do something similar to the code provided on page 114 of that book:</p>
<pre><code>from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
tools = load_tools(["wikipedia", "llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
question = """What is the square root of the population of the capital of the
Country where the Olympic Games were held in 2016?"""
agent.run(question)
</code></pre>
<p>I see much of this is deprecated (e.g., <a href="https://api.python.langchain.com/en/latest/agents/langchain.agents.initialize.initialize_agent.html#langchain.agents.initialize.initialize_agent" rel="nofollow noreferrer">initialize_agent</a>), so I have looked around StackOverflow, GitHub, and the LangChain Python documents to come up with this:</p>
<pre><code>from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain.agents import (
load_tools, create_structured_chat_agent, AgentExecutor
)
model = ChatOpenAI(model="gpt-4", temperature=0)
tools = load_tools(["wikipedia"])
prompt = ChatPromptTemplate.from_template(
"""
You are a research assistant, and your job is to retrieve information about
movies and movie directors.
Use the following tool: {tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question. You only
need to give the number, no other information or explanation is necessary.
Begin!
Question: How many movies did the director of the {year} movie {name} direct
before they made {name}?
Thought: {agent_scratchpad}
"""
)
agent = create_structured_chat_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"year": "1991", "name": "thelma and louise"})
</code></pre>
<p>I'm going to be running this through a loop of many movies, so <strong>I'd like it to only return one integer</strong> (in this case, 6). But it seems like I need to give it that full thought process prompt; I can't get it to run if I don't include <code>{tools}</code>, <code>{tool_names}</code>, and <code>{agent_scratchpad}</code> in the prompt (<a href="https://github.com/langchain-ai/langchain/discussions/20199" rel="nofollow noreferrer">per this GitHub post</a>).</p>
<p>The frustrating thing is I eventually do get the correct answer, but note that it is throwing an error:</p>
<pre><code>ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: First, I need to find out who directed the movie "Thelma and Louise" in 1991.
Action: wikipedia
Action Input: {'query': 'Thelma and Louise'}
Observation:
"Thelma & Louise" is a 1991 American female buddy road film directed by Ridley Scott and written by Callie Khouri. It stars Geena Davis as Thelma and Susan Sarandon as Louise, two friends who embark on a road trip with unforeseen consequences. The film became a critical and commercial success, receiving six Academy Award nominations and winning one for Best Original Screenplay for Khouri. Scott was nominated for Best Director.
Thought:
Ridley Scott directed the movie "Thelma and Louise". Now I need to find out how many movies he directed before this one.
Action: wikipedia
Action Input: {'query': 'Ridley Scott filmography'}
Observation:
Ridley Scott is an English filmmaker. Following his commercial breakthrough with the science fiction horror film Alien (1979), his best known works are the neo-noir dystopian science fiction film Blade Runner (1982), historical drama Gladiator (2000), and science fiction film The Martian (2015). Scott has directed more than 25 films and is known for his atmospheric, highly concentrated visual style. His films are also known for their strong female characters. Here is a list of his films before "Thelma & Louise":
1. The Duellists (1977)
2. Alien (1979)
3. Blade Runner (1982)
4. Legend (1985)
5. Someone to Watch Over Me (1987)
6. Black Rain (1989)
Thought:
Ridley Scott directed six movies before "Thelma and Louise".
Final Answer: 6
</code></pre>
<p>This seems to be very common (<a href="https://github.com/langchain-ai/langchain/issues/1358" rel="nofollow noreferrer">here</a>, <a href="https://github.com/langchain-ai/langchain/discussions/4065" rel="nofollow noreferrer">and here</a>, and <a href="https://stackoverflow.com/questions/77391069/parsing-error-on-langchain-agent-with-gpt4all-llm">also here</a>, and <a href="https://www.reddit.com/r/LangChain/comments/16bztmo/outputparserexception_could_not_parse_llm_output/" rel="nofollow noreferrer">lastly here</a>).</p>
<p>So, I do what it tells me (<a href="https://python.langchain.com/docs/modules/agents/how_to/handle_parsing_errors/" rel="nofollow noreferrer">see docs also</a>) and update my AgentExecutor to:</p>
<pre><code>agent_executor = AgentExecutor(
agent=agent,
tools=tools,
handle_parsing_errors=True
)
</code></pre>
<p>And that returns:</p>
<pre><code>{'year': '1991', 'name': 'thelma and louise', 'output': 'Agent stopped due to iteration limit or time limit.'}
</code></pre>
<p>My question: <strong>How can I use LangChain to combine GPT 4 and Wikipedia to get an answer to a query, when all I want back is an integer?</strong></p>
|
<python><nlp><openai-api><langchain><large-language-model>
|
2024-04-10 20:52:34
| 2
| 1,543
|
Mark White
|
78,307,014
| 2,514,130
|
Is there a way to weigh datapoints when doing LOESS/LOWESS in Python
|
<p>I would like to run a LOWESS function where different data points have different weights, but I don't see how I can pass weights to the <code>lowess</code> function. Here's some example code of using <code>lowess</code> without weights.</p>
<pre><code>import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
# Create the data
x = np.random.uniform(low=-2*np.pi, high=2*np.pi, size=500)
y = np.sin(x) + np.random.normal(size=len(x))
# Apply LOWESS (Locally Weighted Scatterplot Smoothing)
lowess = sm.nonparametric.lowess
z = lowess(y, x)
w = lowess(y, x, frac=1./3)
# Plotting
plt.figure(figsize=(12, 6))
plt.scatter(x, y, label='Data', alpha=0.5)
plt.plot(z[:, 0], z[:, 1], label='LOWESS', color='red')
</code></pre>
<p>My points vary in significance, so I would like to be able to create weights like <code>weights = np.random.randint(1, 5, size=500)</code> and have the lowess process use them. I believe this is possible in R, but I'm not sure if it can be done in Python. Is there a way?</p>
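<p>As a sketch of one common workaround (my own illustration, not a statsmodels feature): integer weights can be approximated by repeating each point proportionally to its weight before calling <code>lowess</code>:</p>

```python
import numpy as np

# Hypothetical tiny data set with an integer weight per point.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 4.0])
weights = np.array([1, 3, 2])

# Repeat each (x, y) pair according to its weight; the expanded arrays
# can then be passed to sm.nonparametric.lowess as usual.
x_rep = np.repeat(x, weights)
y_rep = np.repeat(y, weights)
print(x_rep)  # [0. 1. 1. 1. 2. 2.]
```

<p>Note that this only emulates integer weights and grows the data size; it is not equivalent to true per-point weighting inside the local regressions.</p>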
|
<python><statistics><statsmodels>
|
2024-04-10 20:38:35
| 1
| 5,573
|
jss367
|
78,306,973
| 6,458,245
|
Time complexity of bisect on ordered dict?
|
<p>Given this code snippet, can anyone tell me the time complexity?</p>
<pre><code>import collections

a = collections.OrderedDict()
for i in range(100):
    a[i] = i

import bisect
ind = bisect.bisect_left(a.keys(), 45.3)
</code></pre>
<p>Initially it may appear to be O(log n). However, an ordered dict does not implement a BST, so copying the keys over to some array/list for bisect to use would take O(n) time, right?</p>
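<p>As a concrete sketch of that copy (my own illustration): a dict keys view supports no indexing, so <code>bisect</code> needs a materialized list first; building the list is O(n), and the bisection itself is O(log n) on top of that:</p>

```python
import bisect
import collections

a = collections.OrderedDict((i, i) for i in range(100))

keys = list(a.keys())                   # O(n): materialize the view
ind = bisect.bisect_left(keys, 45.3)    # O(log n) on the list
print(ind)  # 46
```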
|
<python><time-complexity>
|
2024-04-10 20:27:12
| 0
| 2,356
|
JobHunter69
|
78,306,866
| 2,153,235
|
Python rule for parsing integer formatting specifier?
|
<p>I am trying to understand code that uses the following expression</p>
<pre><code>f"{current_file_start_time.year}{current_file_start_time.month:02}{current_file_start_time.day:02}"
</code></pre>
<p>Here is a minimum working example, pasted from a Spyder console:</p>
<pre><code>from datetime import datetime, timezone
current_file_start_time = datetime.now(timezone.utc)
current_file_start_time.month
Out[20]: 4
type(_)
Out[21]: int
f"{current_file_start_time.year}{current_file_start_time.month:02}{current_file_start_time.day:02}"
Out[22]: '20240410'
</code></pre>
<p><strong>More specifically,</strong> The month <code>04</code> is due to the component <code>{current_file_start_time.month:02}</code>. The relevant rules seem to be described by <a href="https://docs.python.org/3/library/string.html#formatspec" rel="nofollow noreferrer">this reference page</a>:</p>
<pre><code>format_spec ::= [[fill]align][sign]["z"]["#"]["0"][width][grouping_option]["." precision][type]
fill ::= <any character>
align ::= "<" | ">" | "=" | "^"
sign ::= "+" | "-" | " "
width ::= digit+
grouping_option ::= "_" | ","
precision ::= digit+
type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%"
</code></pre>
<p>The leading <code>0</code> in <code>{current_file_start_time.month:02}</code> seems to be matchable to several components of <code>format_spec</code> above, namely: (i) <code>[fill]</code>; (ii) <code>["0"]</code>; and (iii) the leading digit of <code>[width]</code>. I can't find a description of component (ii); perhaps it's a special case of using <code>"0"</code> for <code>fill</code>.</p>
<p>What are the rules that determine whether the <code>0</code> in <code>{current_file_start_time.month:02}</code> is used for <code>fill</code>, <code>"0"</code>, or the leading digit of <code>width</code>? While it may not matter much in this example, I would like not to have to guess when composing other format specifiers.</p>
<p>I am using Python 3.9.</p>
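<p>A small illustration of the difference (my own examples): a bare <code>0</code> before <code>width</code> is the <code>["0"]</code> component and behaves like fill <code>0</code> with sign-aware <code>=</code> alignment, which becomes visible once a sign is involved:</p>

```python
# A lone "0" before the width is the ["0"] flag: shorthand for
# fill "0" with sign-aware "=" alignment.
a = f"{4:02}"     # zero flag, width 2
b = f"{-4:03}"    # padding goes between the sign and the digits
# An explicit fill must be followed by an align character; with ">"
# the padding goes before the sign instead.
c = f"{-4:0>3}"
print(a, b, c)  # 04 -04 0-4
```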
|
<python><formatting>
|
2024-04-10 20:05:28
| 1
| 1,265
|
user2153235
|
78,306,821
| 11,650,254
|
How to call python functions compiled to C in C++ (Cython)?
|
<h4>Problem</h4>
<p>Everybody extends Python with C, but I need the opposite scenario: I want to use Python code in a <code>C++</code> project.</p>
<h4>Cythonize</h4>
<p>It kind of works. I converted a simple <code>.pyx</code> file to <code>.c</code> using <code>Cython</code>.
The function is very simple, containing only a print.</p>
<pre class="lang-py prettyprint-override"><code># hello_cython.pyx
def func1():
    print("Hello Cython Function.")

if __name__ == "__main__":
    print("Hello World! Run cython.")
</code></pre>
<p><code>setup.py</code>, just for reference (if needed):</p>
<pre class="lang-py prettyprint-override"><code># setup.py
from setuptools import setup, Extension
from Cython.Build import cythonize
# my_extension = Extension('SayHiApp',)
setup(
    name="AppHi",
    ext_modules=cythonize("hello_cython.pyx", show_all_warnings=True),
)
</code></pre>
<h2>To replicate</h2>
<p>Run <code>python setup.py build_ext --inplace</code>. This will create the C file.</p>
<p>A short excerpt from the roughly 5000 generated lines:</p>
<pre class="lang-cpp prettyprint-override"><code>/* Generated by Cython 3.0.10 */
/* BEGIN: Cython Metadata
{
"distutils": {
"name": "hello_cython",
"sources": [
"hello_cython.pyx"
]
},
"module_name": "hello_cython"
}
END: Cython Metadata */
#ifndef PY_SSIZE_T_CLEAN
#define PY_SSIZE_T_CLEAN
#endif /* PY_SSIZE_T_CLEAN */
#if defined(CYTHON_LIMITED_API) && 0
#ifndef Py_LIMITED_API
#if CYTHON_LIMITED_API+0 > 0x03030000
#define Py_LIMITED_API CYTHON_LIMITED_API
#else
#define Py_LIMITED_API 0x03030000
#endif
#endif
#endif
#include "Python.h"
#ifndef Py_PYTHON_H
#error Python headers needed to compile C extensions, please install development version of Python.
#elif PY_VERSION_HEX < 0x02070000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000)
#error Cython requires Python 2.7+ or Python 3.3+.
#else
#if defined(CYTHON_LIMITED_API) && CYTHON_LIMITED_API
#define __PYX_EXTRA_ABI_MODULE_NAME "limited"
#else
#define __PYX_EXTRA_ABI_MODULE_NAME ""
#endif
#define CYTHON_ABI "3_0_10" __PYX_EXTRA_ABI_MODULE_NAME
#define __PYX_ABI_MODULE_NAME "_cython_" CYTHON_ABI
#define __PYX_TYPE_MODULE_PREFIX __PYX_ABI_MODULE_NAME "."
#define CYTHON_HEX_VERSION 0x03000AF0
#define CYTHON_FUTURE_DIVISION 1
#include <stddef.h>
#ifndef offsetof
#define offsetof(type, member) ( (size_t) & ((type*)0) -> member )
#endif
#if !defined(_WIN32) && !defined(WIN32) && !defined(MS_WINDOWS)
#ifndef __stdcall
#define __stdcall
#endif
#ifndef __cdecl
#define __cdecl
#endif
#ifndef __fastcall
#define __fastcall
#endif
#endif
</code></pre>
<h4>C++ project</h4>
<p>I attached the generated <code>.c</code> file to the C++ project and fixed the missing headers and libs. It finally compiles without any errors or warnings (ignore the errors on the screenshot).</p>
<p><code>.cpp</code> file is empty main function that does nothing.</p>
<pre class="lang-cpp prettyprint-override"><code>// LearnPythonAPIIntegration.cpp : This file contains the 'main' function. Program execution begins and ends there.
//
#include <iostream>
int main()
{
    std::cout << "Hello World!\n";
    /// I want to call func1() here somehow
}
</code></pre>
<p>But how can I use it? I don't see any function that looks like the one I defined.</p>
<p>Have I done something wrong? Or maybe I can't reuse it this way and should rewrite it in C++.</p>
|
<python><c++><cython>
|
2024-04-10 19:55:58
| 0
| 301
|
Grzegorz Krug
|
78,306,687
| 1,677,381
|
In a Jupyter notebook, create an object from a class referenced using a string to use with ray
|
<p>My JupyterLab 3.6.3 notebook is running Python 3.10.11. I am trying to use Ray to implement some asynchronous code. This would be simple, except that in my remote function I am trying to instantiate a class object using a string holding the class name.</p>
<p>We have a number of triggers, each trigger causing a specific class to be used. We can't do a bunch of if statements to create the class object as the triggers can change.</p>
<p>The following code is an example of what I am trying to do. "job1" works because I instantiated the class object directly. "job2" does not work because I used eval() to instantiate the class object.</p>
<pre><code>import ray
ray.init(ignore_reinit_error=True)
# Usually imported with a %run class_notebook.ipynb command.
class Test:
    def nothing(self):
        return True

@ray.remote
def do_it1():
    x = Test()  # Class object created directly. This works.
    return x.nothing()

@ray.remote
def do_it2():
    x = eval("Test")()  # Class object created using a string. This causes issues later.
    return x.nothing()

job1 = do_it1.remote()
job2 = do_it2.remote()

ray.get(job1)
ray.get(job2)  # Error occurs here
</code></pre>
<p>The error message is:</p>
<pre><code>---------------------------------------------------------------------------
RayTaskError(NameError) Traceback (most recent call last)
Cell In[9], line 1
----> 1 ray.get(job2)
File /opt/conda/lib/python3.10/site-packages/ray/_private/auto_init_hook.py:24, in wrap_auto_init.<locals>.auto_init_wrapper(*args, **kwargs)
21 @wraps(fn)
22 def auto_init_wrapper(*args, **kwargs):
23 auto_init_ray()
---> 24 return fn(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/ray/_private/client_mode_hook.py:103, in client_mode_hook.<locals>.wrapper(*args, **kwargs)
101 if func.__name__ != "init" or is_client_mode_enabled_by_default:
102 return getattr(ray, func.__name__)(*args, **kwargs)
--> 103 return func(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/ray/_private/worker.py:2493, in get(object_refs, timeout)
2491 worker.core_worker.dump_object_store_memory_usage()
2492 if isinstance(value, RayTaskError):
-> 2493 raise value.as_instanceof_cause()
2494 else:
2495 raise value
RayTaskError(NameError): ray::do_it2() (pid=25675, ip=172.31.3.78)
File "/tmp/ipykernel_25494/1701015360.py", line 3, in do_it2
File "<string>", line 1, in <module>
NameError: name 'Test' is not defined
</code></pre>
<p>What do I need to do in the <code>do_it2</code> function to use a string to create the instance of the class object?</p>
<p>[edit]To instantiate my class object I have tried using globals() but that does not work either.</p>
<pre><code>x = globals()['Test']()
</code></pre>
<p>[/edit]</p>
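<p>A sketch of one alternative (my own, with ray left out and names hypothetical): the worker process does not share the notebook's globals, so instead of <code>eval()</code> the class can be resolved through an explicit registry that is passed into the remote function:</p>

```python
# Sketch without ray: resolve the class through a registry passed in
# explicitly, so the worker never depends on the caller's globals.
class Test:
    def nothing(self):
        return True

REGISTRY = {'Test': Test}

def do_it(class_name, registry):
    cls = registry[class_name]   # no eval(), no globals() lookup
    return cls().nothing()

result = do_it('Test', REGISTRY)
print(result)  # True
```

<p>With ray, the call would then become something like <code>do_it.remote('Test', REGISTRY)</code>.</p>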
|
<python><asynchronous><jupyter-notebook><ray>
|
2024-04-10 19:25:13
| 1
| 363
|
Calab
|
78,306,590
| 5,790,653
|
How to count each value occurrences while having the same value of another key
|
<p>This is a list I have:</p>
<pre class="lang-py prettyprint-override"><code>list1 = [
{'city': 'Tehran', 'street': 'Ferdowsi'},
{'city': 'Tabriz', 'street': 'Imam'},
{'city': 'Sari', 'street': 'Qaran'},
{'city': 'Tehran', 'street': 'Enghelab'},
{'city': 'Tabriz', 'street': 'Imam'},
{'city': 'Tehran', 'street': 'Kan'},
{'city': 'Tehran', 'street': 'Kan'},
{'city': 'Sari', 'street': 'Nader'},
{'city': 'Sari', 'street': 'Nader'},
]
</code></pre>
<p>I want to both count how many <code>street</code>s with the same name each city has, and have a list of them, like the following expected output:</p>
<pre><code>Tehran 1 Ferdowsi
Tabriz 2 Imam
Sari 1 Qaran
Tehran 1 Enghelab
Tehran 2 Kan
Sari 2 Nader
</code></pre>
<p>And also this:</p>
<pre><code>Tehran [Ferdowsi]
Tabriz [Imam, Imam]
Sari [Qaran]
Tehran [Enghelab]
Tehran [Kan, Kan]
Sari [Nader, Nader]
</code></pre>
<p>I tried this, but it shows all <code>street</code>s related to one <code>city</code>:</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict

cities = defaultdict(list)
for name in list1:
    cities[name['city']].append(name['street'])
</code></pre>
<p>Current output:</p>
<pre><code>>>> for x in cities: x, cities[x]
...
('Tehran', ['Ferdowsi', 'Enghelab', 'Kan', 'Kan'])
('Tabriz', ['Imam', 'Imam'])
('Sari', ['Qaran', 'Nader', 'Nader'])
</code></pre>
<p>And also I know this counts how many times each street is repeated:</p>
<pre class="lang-py prettyprint-override"><code>from collections import Counter
counts = Counter(row["street"] for row in list1)
</code></pre>
<p>Its current output:</p>
<pre><code>>>> for x in counts: x, counts[x]
...
('Ferdowsi', 1)
('Imam', 2)
('Qaran', 1)
('Enghelab', 1)
('Kan', 2)
('Nader', 2)
</code></pre>
<p>But the issue is that I'm not sure what to do, or what to search for, to get what I'm looking for.</p>
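<p>For comparison, a sketch (my own) that keys a <code>Counter</code> on the <code>(city, street)</code> pair reproduces the first expected output, since <code>Counter</code> preserves insertion order:</p>

```python
from collections import Counter

list1 = [
    {'city': 'Tehran', 'street': 'Ferdowsi'},
    {'city': 'Tabriz', 'street': 'Imam'},
    {'city': 'Sari', 'street': 'Qaran'},
    {'city': 'Tehran', 'street': 'Enghelab'},
    {'city': 'Tabriz', 'street': 'Imam'},
    {'city': 'Tehran', 'street': 'Kan'},
    {'city': 'Tehran', 'street': 'Kan'},
    {'city': 'Sari', 'street': 'Nader'},
    {'city': 'Sari', 'street': 'Nader'},
]

# Count each (city, street) pair, not each street alone.
pair_counts = Counter((row['city'], row['street']) for row in list1)
for (city, street), n in pair_counts.items():
    print(city, n, street)
```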
|
<python>
|
2024-04-10 19:02:08
| 1
| 4,175
|
Saeed
|
78,306,583
| 1,903,852
|
Pipenv install fails due to deprecated package
|
<p>I've got a Python package with the following Pipfile:</p>
<pre><code>[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
boto3 = "*"
s3urls = "*"
tabulate = "*"
surfaceutils = {editable = true, git = "ssh://git.amazon.com/pkg/MiddleMileSurfaceResearchUtils"}
s3fs = "<0.5.0"
importlib-metadata = "*"
pandas = "*"
isodate = "*"
sklearn = "*"
numpy = "*"
timedelta = "*"
django-timedeltafield = "*"
[requires]
python_version = "3.9"
</code></pre>
<p>When I try running <code>pipenv install</code> from within this package I get the following error:</p>
<pre><code>Creating a virtualenv for this project...
Pipfile: /local/home/kinable/workspace/amazon/Cobra-Benchmarking/Pipfile
Using /local/home/kinable/.pyenv/versions/3.9.19/bin/python3.9 (3.9.19) to create virtualenv...
⠸ Creating virtual environment...created virtual environment CPython3.9.19.final.0-64 in 201ms
creator CPython3Posix(dest=/local/home/kinable/.local/share/virtualenvs/Cobra-Benchmarking-mtu2zcpw, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/local/home/kinable/.local/share/virtualenv)
added seed packages: pip==24.0, setuptools==69.2.0, surfaceutils==2021.9.0, wheel==0.43.0
activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
✔ Successfully created virtual environment!
Virtualenv location: /home/kinable/.local/share/virtualenvs/Cobra-Benchmarking-mtu2zcpw
Installing dependencies from Pipfile.lock (999ffc)...
[pipenv.exceptions.InstallError]: Collecting boto3==1.26.98 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 1))
[pipenv.exceptions.InstallError]: Using cached boto3-1.26.98-py3-none-any.whl (135 kB)
[pipenv.exceptions.InstallError]: Collecting botocore==1.29.98 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 2))
[pipenv.exceptions.InstallError]: Using cached botocore-1.29.98-py3-none-any.whl (10.5 MB)
[pipenv.exceptions.InstallError]: Collecting django-timedeltafield==0.7.10 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 3))
[pipenv.exceptions.InstallError]: Using cached django-timedeltafield-0.7.10.tar.gz (13 kB)
[pipenv.exceptions.InstallError]: Preparing metadata (setup.py): started
[pipenv.exceptions.InstallError]: Preparing metadata (setup.py): finished with status 'done'
[pipenv.exceptions.InstallError]: Collecting fsspec==2023.3.0 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 4))
[pipenv.exceptions.InstallError]: Using cached fsspec-2023.3.0-py3-none-any.whl (145 kB)
[pipenv.exceptions.InstallError]: Collecting importlib-metadata==6.1.0 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 5))
[pipenv.exceptions.InstallError]: Using cached importlib_metadata-6.1.0-py3-none-any.whl (21 kB)
[pipenv.exceptions.InstallError]: Collecting isodate==0.6.1 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 6))
[pipenv.exceptions.InstallError]: Using cached isodate-0.6.1-py2.py3-none-any.whl (41 kB)
[pipenv.exceptions.InstallError]: Collecting jmespath==1.0.1 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 7))
[pipenv.exceptions.InstallError]: Using cached jmespath-1.0.1-py3-none-any.whl (20 kB)
[pipenv.exceptions.InstallError]: Collecting numpy==1.24.2 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 8))
[pipenv.exceptions.InstallError]: Using cached numpy-1.24.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
[pipenv.exceptions.InstallError]: Collecting pandas==1.5.3 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 9))
[pipenv.exceptions.InstallError]: Using cached pandas-1.5.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)
[pipenv.exceptions.InstallError]: Collecting python-dateutil==2.8.2 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 10))
[pipenv.exceptions.InstallError]: Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
[pipenv.exceptions.InstallError]: Collecting pytz==2022.7.1 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 11))
[pipenv.exceptions.InstallError]: Using cached pytz-2022.7.1-py2.py3-none-any.whl (499 kB)
[pipenv.exceptions.InstallError]: Collecting s3fs==0.4.2 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 12))
[pipenv.exceptions.InstallError]: Using cached s3fs-0.4.2-py3-none-any.whl (19 kB)
[pipenv.exceptions.InstallError]: Collecting s3transfer==0.6.0 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 13))
[pipenv.exceptions.InstallError]: Using cached s3transfer-0.6.0-py3-none-any.whl (79 kB)
[pipenv.exceptions.InstallError]: Collecting s3urls==0.0.3 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 14))
[pipenv.exceptions.InstallError]: Using cached s3urls-0.0.3-py3-none-any.whl (2.5 kB)
[pipenv.exceptions.InstallError]: Collecting six==1.16.0 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 15))
[pipenv.exceptions.InstallError]: Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
[pipenv.exceptions.InstallError]: Collecting sklearn==0.0.post1 (from -r /tmp/pipenv-eiti58sv-requirements/pipenv-exmlxe5_-hashed-reqs.txt (line 16))
[pipenv.exceptions.InstallError]: Using cached sklearn-0.0.post1.tar.gz (3.6 kB)
[pipenv.exceptions.InstallError]: Preparing metadata (setup.py): started
[pipenv.exceptions.InstallError]: Preparing metadata (setup.py): finished with status 'error'
[pipenv.exceptions.InstallError]: error: subprocess-exited-with-error
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: × python setup.py egg_info did not run successfully.
[pipenv.exceptions.InstallError]: │ exit code: 1
[pipenv.exceptions.InstallError]: ╰─> [18 lines of output]
[pipenv.exceptions.InstallError]: The 'sklearn' PyPI package is deprecated, use 'scikit-learn'
[pipenv.exceptions.InstallError]: rather than 'sklearn' for pip commands.
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: Here is how to fix this error in the main use cases:
[pipenv.exceptions.InstallError]: - use 'pip install scikit-learn' rather than 'pip install sklearn'
[pipenv.exceptions.InstallError]: - replace 'sklearn' by 'scikit-learn' in your pip requirements files
[pipenv.exceptions.InstallError]: (requirements.txt, setup.py, setup.cfg, Pipfile, etc ...)
[pipenv.exceptions.InstallError]: - if the 'sklearn' package is used by one of your dependencies,
[pipenv.exceptions.InstallError]: it would be great if you take some time to track which package uses
[pipenv.exceptions.InstallError]: 'sklearn' instead of 'scikit-learn' and report it to their issue tracker
[pipenv.exceptions.InstallError]: - as a last resort, set the environment variable
[pipenv.exceptions.InstallError]: SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True to avoid this error
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: More information is available at
[pipenv.exceptions.InstallError]: https://github.com/scikit-learn/sklearn-pypi-package
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: If the previous advice does not cover your use case, feel free to report it at
[pipenv.exceptions.InstallError]: https://github.com/scikit-learn/sklearn-pypi-package/issues/new
[pipenv.exceptions.InstallError]: [end of output]
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: note: This error originates from a subprocess, and is likely not a problem with pip.
[pipenv.exceptions.InstallError]: error: metadata-generation-failed
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: × Encountered error while generating package metadata.
[pipenv.exceptions.InstallError]: ╰─> See above for output.
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: note: This is an issue with the package mentioned above, not pip.
[pipenv.exceptions.InstallError]: hint: See above for details.
ERROR: Couldn't install package: {}
Package installation failed...
/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/subprocess.py:1052: ResourceWarning: subprocess 33373 is still running
_warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name=4 encoding='utf-8'>
ResourceWarning: Enable tracemalloc to get the object allocation traceback
sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name=7 encoding='utf-8'>
ResourceWarning: Enable tracemalloc to get the object allocation traceback
</code></pre>
<p>According to the error message, I need to replace <code>sklearn</code> with <code>scikit-learn</code> in the Pipfile. I tried to accomplish that by running <code>pipenv install scikit-learn</code>. This, however, produces the following error:</p>
<pre><code>Installing scikit-learn...
Resolving scikit-learn...
Added scikit-learn to Pipfile's [packages] ...
✔ Installation Succeeded
Pipfile.lock (999ffc) out of date, updating to (9ef4a3)...
Locking [packages] dependencies...
Building requirements...
Resolving dependencies...
✘ Locking Failed!
⠸ Locking...False
INFO:pip.subprocessor:Running command git clone --filter=blob:none --quiet ssh://git.amazon.com/pkg/MiddleMileSurfaceResearchUtils /tmp/pip-temp-fl2magyw/surfaceutils_d012b7bb121447ddbeccd9fe702be520
ERROR:pip.subprocessor:python setup.py egg_info exited with 1
[ResolutionFailure]: File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/resolver.py", line 645, in _main
[ResolutionFailure]: resolve_packages(
[ResolutionFailure]: File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/resolver.py", line 612, in resolve_packages
[ResolutionFailure]: results, resolver = resolve(
[ResolutionFailure]: File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/resolver.py", line 592, in resolve
[ResolutionFailure]: return resolve_deps(
[ResolutionFailure]: File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/utils/resolver.py", line 918, in resolve_deps
[ResolutionFailure]: results, hashes, internal_resolver = actually_resolve_deps(
[ResolutionFailure]: File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/utils/resolver.py", line 691, in actually_resolve_deps
[ResolutionFailure]: resolver.resolve()
[ResolutionFailure]: File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/utils/resolver.py", line 448, in resolve
[ResolutionFailure]: raise ResolutionFailure(message=str(e))
[pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
You can use $ pipenv run pip install <requirement_name> to bypass this mechanism, then run $ pipenv graph to inspect the versions actually installed in the virtualenv.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: metadata generation failed
Traceback (most recent call last):
File "/home/kinable/.pyenv/versions/3.9.19/bin/pipenv", line 8, in <module>
sys.exit(cli())
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/vendor/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/cli/options.py", line 58, in main
return super().main(*args, **kwargs, windows_expand_args=False)
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/vendor/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/vendor/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/vendor/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/vendor/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/vendor/click/decorators.py", line 92, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/vendor/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/cli/command.py", line 209, in install
do_install(
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/routines/install.py", line 297, in do_install
raise e
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/routines/install.py", line 281, in do_install
do_init(
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/routines/install.py", line 648, in do_init
do_lock(
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/routines/lock.py", line 65, in do_lock
venv_resolve_deps(
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/utils/resolver.py", line 859, in venv_resolve_deps
c = resolve(cmd, st, project=project)
File "/home/kinable/.pyenv/versions/3.9.19/lib/python3.9/site-packages/pipenv/utils/resolver.py", line 728, in resolve
raise RuntimeError("Failed to lock Pipfile.lock!")
RuntimeError: Failed to lock Pipfile.lock!
</code></pre>
<p>I'm not quite sure how to proceed from here.</p>
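<p>For reference, one possible way forward (a sketch, untested against the private git dependency): edit the Pipfile directly so that only the deprecated name changes, then re-lock. Note that the second failure above appears to come from re-resolving the <code>surfaceutils</code> git dependency's <code>setup.py</code>, which is a separate problem from the <code>sklearn</code> rename:</p>

```toml
[packages]
# ... other entries unchanged ...
scikit-learn = "*"   # replaces the deprecated 'sklearn' meta-package
```

<p>After editing, <code>pipenv lock</code> followed by <code>pipenv install</code> regenerates the lock file; existing <code>import sklearn</code> statements keep working, since <code>scikit-learn</code> installs the same <code>sklearn</code> module.</p>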
|
<python><pipenv>
|
2024-04-10 19:00:07
| 0
| 2,431
|
Joris Kinable
|
78,306,109
| 10,025,674
|
Key Error when having spaces and "\n" in multiline f-string
|
<p>Trying to format SPARQL queries in Python using f-strings in order to build different queries from a template,
but I'm getting a <strong>KeyError</strong> when I try to format a multiline string:</p>
<pre><code>query = f'''SELECT ?age
WHERE {{
wd:{0} wdt:{1} ?birthdate.
BIND(now() - ?birthdate AS ?age)
}}'''
</code></pre>
<pre><code>print(query.format('Q84','P1082'))
</code></pre>
<pre><code>KeyError: '\n wd'
</code></pre>
<p>Any idea helps; I'd prefer not to split the queries line by line.</p>
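<p>A hedged sketch of what seems to be happening: the <code>f</code> prefix interpolates immediately, so <code>{{</code> becomes a literal <code>{</code> and <code>{0}</code> becomes the character <code>0</code>; the later <code>.format()</code> call then parses the SPARQL braces as a replacement field (everything up to the first colon, hence the <code>'\n        wd'</code> key). Dropping the <code>f</code> prefix and keeping a plain template avoids the clash:</p>

```python
# Plain template (no f prefix): .format() fills the {0}/{1} fields and
# the doubled braces escape to literal { } for the SPARQL block.
template = '''SELECT ?age
WHERE {{
    wd:{0} wdt:{1} ?birthdate.
    BIND(now() - ?birthdate AS ?age)
}}'''

query = template.format('Q84', 'P1082')
print(query)
```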
|
<python><string><sparql><multiline><f-string>
|
2024-04-10 17:26:01
| 1
| 748
|
m.rp
|
78,305,989
| 13,142,245
|
async / await useful when independent network calls shouldn't block each other
|
<p>My understanding of Python's asyncio library (and async functions for Python in general) is that it uses threads. These threads are not processes, so division of CPU and parallelization is not possible.</p>
<p>However, if an application method has two network calls, A and B, async might be appropriate. If A must finish before B starts, then it's my understanding that using async/await syntax is wasted effort. However, if A and B are independent (neither response is an input to the other), then async/await syntax could indeed be useful. The reasoning is that the execution of network call A is idle time, blocking the start of network call B. Async/await would allow B to begin before A's response is received.</p>
<p>I could be wrong, but this is my understanding. Increasingly, I'm seeing FastAPI tutorials, blogs, etc. use async/await in contexts where it's presumably not harmful to performance but not useful either. <a href="https://github.com/sixfwa/react-fastapi/blob/main/backend/main.py" rel="nofollow noreferrer">One example</a>. A number of these calls are directed to the service's own relational database. This might complicate when async/await is and isn't useful.</p>
<p>Is my understanding accurate?</p>
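<p>A minimal sketch of the independent-calls case described above, with <code>asyncio.sleep</code> standing in for network I/O: both awaits overlap, so the total wall time is roughly one call's latency rather than two.</p>

```python
import asyncio
import time

async def call_a():
    await asyncio.sleep(0.2)  # stands in for network call A's idle wait
    return "A"

async def call_b():
    await asyncio.sleep(0.2)  # stands in for network call B's idle wait
    return "B"

async def main():
    start = time.perf_counter()
    # Independent calls: B is started while A is still waiting on "I/O".
    results = await asyncio.gather(call_a(), call_b())
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results, f"{elapsed:.2f}s")  # roughly 0.2s total, not 0.4s
```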
|
<python><async-await><fastapi>
|
2024-04-10 17:03:29
| 1
| 1,238
|
jbuddy_13
|
78,305,920
| 16,717,009
|
Is there a way to put multiple steps into a polars expression function?
|
<p>I've been learning how to write expression functions in polars, which are wonderful for creating self-documenting chained operations. But I'm struggling with more complex functions. Let's say I want to replace the values in column <code>bar</code> with the first value in column <code>baz</code> when <code>bar</code> is empty, over the group column <code>foo</code>.
To state it more clearly: I have a set of columns which form a sorted group (in my example only <code>foo</code>). I have another column <code>bar</code> which may or may not have empty values. If the first value in <code>bar</code> for the group is empty ('' or NULL), take the corresponding value from another column <code>baz</code> and apply it to every <code>bar</code> in the group. If the first value in <code>bar</code> is not empty, then do nothing to the group.</p>
<p>The below works correctly.</p>
<p>Initial DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({'foo': [1, 1, 1, 2, 2, 2, 3, 3],
'bar': ['a', 'a', 'a', None, None, None, 'c', 'c'],
'baz': ['x', None, 'q', 'z', 'r', None, 'y', 's']})
</code></pre>
<pre class="lang-py prettyprint-override"><code>shape: (8, 3)
┌─────┬──────┬──────┐
│ foo ┆ bar ┆ baz │
│ --- ┆ --- ┆ --- │
│ i64 ┆ str ┆ str │
╞═════╪══════╪══════╡
│ 1 ┆ a ┆ x │
│ 1 ┆ a ┆ null │
│ 1 ┆ a ┆ q │
│ 2 ┆ null ┆ z │
│ 2 ┆ null ┆ r │
│ 2 ┆ null ┆ null │
│ 3 ┆ c ┆ y │
│ 3 ┆ c ┆ s │
└─────┴──────┴──────┘
</code></pre>
<p>Transformation I want to perform:</p>
<pre class="lang-py prettyprint-override"><code>df = (df.with_columns(pl.col('baz').first().over(['foo']).alias('temp'))
.with_columns(pl.when((pl.col('bar') == '') | (pl.col('bar').is_null()))
.then(pl.col('temp'))
.otherwise(pl.col('bar')).alias('bar2'))
.with_columns(pl.col('bar2').alias('bar'))
.drop(['temp', 'bar2'])
)
</code></pre>
<p>Expected Result:</p>
<pre class="lang-py prettyprint-override"><code>┌─────┬──────┬──────┐
│ foo ┆ bar ┆ baz │
│ --- ┆ --- ┆ --- │
│ i64 ┆ str ┆ str │
╞═════╪══════╪══════╡
│ 1 ┆ a ┆ x │
│ 1 ┆ a ┆ null │
│ 1 ┆ a ┆ q │
│ 2 ┆ z ┆ z │
│ 2 ┆ z ┆ r │
│ 2 ┆ z ┆ null │
│ 3 ┆ c ┆ y │
│ 3 ┆ c ┆ s │
└─────┴──────┴──────┘
</code></pre>
<p>In my actual problem, this chain would be only a subset of a larger chain, so it would be great if I could write</p>
<pre class="lang-py prettyprint-override"><code>def update_bar() -> pl.expr:
return (#some voodoo)
</code></pre>
<p>and then:</p>
<pre class="lang-py prettyprint-override"><code>df = (df.with_columns(update_bar())
.drop(['temp', 'bar2'])
)
</code></pre>
<p>or even</p>
<pre class="lang-py prettyprint-override"><code>df = (df.with_columns(update_bar())
.with_columns(pl.col('bar2').alias('bar'))
.drop(['temp', 'bar2'])
)
</code></pre>
<p>The first two operations at the top go together, so I'd really like to avoid writing two functions.
Any guidance on how to go about this?</p>
<p>Or maybe someone even has a more clever way to accomplish what I need all this code to do? Note that <code>foo</code> and <code>bar</code> having matching grouping is only true in this simplified example. In my real case, <code>foo</code> is 3 columns and <code>bar</code> cannot be used as the group alone.</p>
|
<python><dataframe><python-polars>
|
2024-04-10 16:49:36
| 2
| 343
|
MikeP
|
78,305,882
| 1,934,902
|
Boto3 delete ebs snapshots script ...error for parameter snapshotId is invalid. Expected: 'snap-...'
|
<p>My script is trying to delete ebs snapshots, but I've gotten stuck at this point due to this error. I am not sure what the solution is.</p>
<pre><code>import boto3
import pprint
from openpyxl import load_workbook
AWS_REGION = "ap-southeast-2"
ec2 = boto3.client('ec2', region_name = AWS_REGION)
wb = load_workbook("workbook-1.xlsx")
ws = wb['Sheet1']
column = ws['D'][0:3089]
column_list = [column[x].value for x in range(len(column))]
snapshot_string =" ".join(map(str,column_list)).split(",")
pprint.pprint(snapshot_string)
for snap in snapshot_string:
print(f"Snapshot {snap} are deleting")
ec2.delete_snapshot(SnapshotId=snap)
print(f"Snapshot {snap} have been deleted")
</code></pre>
<p>The last print statement starts with</p>
<pre><code>Snapshot Snapshot Id snap-0f51c7d3288247f6b
</code></pre>
<p>Ends with</p>
<pre><code>snap-03e488d013824c092 snap-05e6f9163bc26a4b4 are deleting
</code></pre>
<p>The error is</p>
<pre><code>snap-03e488d013824c092 snap-05e6f9163bc26a4b4)
for parameter snapshotId is invalid. Expected: 'snap-...'.
</code></pre>
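<p>For what it's worth, a sketch of what the join/split produces (the cell values here are hypothetical, mirroring the output above): joining on spaces and then splitting on commas leaves several space-separated IDs inside a single element, which is presumably the invalid <code>SnapshotId</code> value the API rejects. Splitting on any run of whitespace or commas and keeping only <code>snap-</code> tokens yields one ID per element:</p>

```python
import re

# Hypothetical workbook cells, mirroring the output in the question:
column_list = ["Snapshot Id", "snap-0f51c7d3288247f6b",
               "snap-03e488d013824c092, snap-05e6f9163bc26a4b4"]

# The original join/split keeps multiple ids in one element:
broken = " ".join(map(str, column_list)).split(",")
print(broken[0])  # 'Snapshot Id snap-0f51c7d3288247f6b snap-03e488d013824c092'

# Split on whitespace *and* commas, keep only well-formed ids:
snapshot_ids = [tok for tok in re.split(r"[\s,]+", " ".join(map(str, column_list)))
                if tok.startswith("snap-")]
print(snapshot_ids)
```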
|
<python><amazon-web-services><boto3>
|
2024-04-10 16:39:51
| 0
| 333
|
ficestat
|
78,305,761
| 8,262,535
|
Mlflow log_figure deletes artifact
|
<p>I am running mlflow with autologging to track an xgboost model. By default, under artifacts it saves the model, requirements, and feature importances. Cool stuff I want to keep.
<a href="https://i.sstatic.net/Ps4AA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ps4AA.png" alt="enter image description here" /></a></p>
<p>But, if I try to add figures with the code below, it deletes the artifacts folder and puts just my figures into it. What is the best way to append artifacts to the folder?
<a href="https://i.sstatic.net/6l8Mw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6l8Mw.png" alt="enter image description here" /></a></p>
<pre><code>mlflow.autolog()
mlflow.set_experiment('Energy Use Forecasting')
def main():
color_pal = sns.color_palette()
input_file = Path(r'D:\data\ML\PowerConsumption\AEP_hourly.csv')
models_path = Path(r'D:\Models\ML') / Path(__file__).stem
models_path.mkdir(parents=True, exist_ok=True)
df = pd.read_csv(input_file)
df = df.set_index('Datetime')
df.index = pd.to_datetime(df.index)
data_source = input_file.stem.split('_')[0]
df = df.rename(columns={data_source + '_MW': 'MW'})
train, val = train_val_split(df)
model = xgb.XGBRegressor(base_score=0.5, booster='gbtree',
n_estimators=10000,
early_stopping_rounds=50,
objective='reg:squarederror',
max_depth=max_depth,
learning_rate=learning_rate)
model.fit(X_train, y_train,
eval_set=[(X_train, y_train), (X_val, y_val)],
verbose=100)
ax = df[['MW']].plot(figsize=(15, 5))
df['prediction'].plot(ax=ax, style='.')
plt.legend(['Truth Data', 'Predictions'])
plt.axvline(val_split_index, color="gray", lw=3, label=f"Val split point")
ax.set_title('Raw Data and Prediction')
mlflow.log_figure(fig_trainval_preds, 'trainval_predictions.png')
</code></pre>
|
<python><scikit-learn><sklearn-pandas><mlflow><mlops>
|
2024-04-10 16:16:10
| 1
| 385
|
illan
|
78,305,752
| 11,801,298
|
How to interact with matplotlib chart in PyCharm?
|
<p>I want to see the good old matplotlib window with options in PyCharm. I mean the buttons at the bottom that let me zoom the chart and move from one part to another.</p>
<p><a href="https://i.sstatic.net/mwnNX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mwnNX.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/mLeQf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLeQf.png" alt="enter image description here" /></a></p>
<p>These navigation buttons are very useful; I need them.</p>
<p>But in PyCharm such a chart is just a PNG picture without interactive options:
<a href="https://i.sstatic.net/r2TDc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r2TDc.png" alt="enter image description here" /></a>
I cannot navigate through it.</p>
<p>How can I create a chart with these buttons, so I can interact with it?</p>
<p>Code example (my question concerns all charts built with matplotlib, so the exact code doesn't matter):</p>
<pre><code># Plotting the chart
plt.figure(figsize=(10, 5))
plt.plot(res_df['aspect'], res_df['result'], label='Result')
plt.xlabel('Aspect')
plt.ylabel('Result')
plt.title('Result vs. Aspect')
plt.legend()
plt.xticks([0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330, 360])
plt.grid(True)
plt.show()
</code></pre>
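<p>One common cause (an assumption, since it depends on the IDE configuration) is PyCharm's SciView intercepting figures: disabling <em>Settings &gt; Tools &gt; Python Scientific &gt; Show plots in tool window</em>, or forcing an interactive backend before importing <code>pyplot</code>, restores the window with the navigation toolbar:</p>

```python
# Force an interactive backend *before* importing pyplot; 'TkAgg' and
# 'Qt5Agg' are common choices, depending on which GUI toolkit is installed.
import matplotlib
matplotlib.use('TkAgg')
print(matplotlib.get_backend())

# Then the usual code opens a real window with the zoom/pan toolbar:
# import matplotlib.pyplot as plt
# plt.plot([0, 1], [0, 1])
# plt.show()
```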
|
<python><matplotlib><pycharm>
|
2024-04-10 16:14:28
| 0
| 877
|
Igor K.
|
78,305,720
| 3,590,067
|
How to overlap a geopandas dataframe with basemap?
|
<p>I have a shapefile that I read as a geopandas dataframe</p>
<pre><code>import geopandas as gpd
gdf = gpd.read_file('myfile.shp')
gdf.plot()
</code></pre>
<p><a href="https://i.sstatic.net/VZ4FD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VZ4FD.png" alt="enter image description here" /></a></p>
<p>where <code>gdf.crs</code></p>
<pre><code><Projected CRS: ESRI:54009>
Name: World_Mollweide
Axis Info [cartesian]:
- E[east]: Easting (metre)
- N[north]: Northing (metre)
Area of Use:
- name: World.
- bounds: (-180.0, -90.0, 180.0, 90.0)
Coordinate Operation:
- name: World_Mollweide
- method: Mollweide
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
</code></pre>
<p>and <code>gdf.total_bounds</code></p>
<pre><code>array([-17561329.90352868, -6732161.66088735, 17840887.22672861,
8750122.26961274])
</code></pre>
<p>I would like to use <code>basemap</code> to plot the lat/lon grid on top of it. This is what I am doing</p>
<pre><code>from mpl_toolkits.basemap import Basemap
# Create a Basemap instance with the same projection as the GeoDataFrame
map = Basemap(projection='moll', lon_0=-0, lat_0=-0, resolution='c')
# Create a figure and axis
fig, ax = plt.subplots(figsize=(10, 6))
# Plot the basemap
map.drawcoastlines()
map.drawcountries()
map.drawparallels(range(-90, 91, 30), labels=[1,0,0,0], fontsize=10)
map.drawmeridians(range(-180, 181, 60), labels=[0,0,0,1], fontsize=10)
# Plot the GeoDataFrame on top of the basemap
gdf.plot(ax=ax, color='red', markersize=5)
</code></pre>
<p>but this is what I get</p>
<p><a href="https://i.sstatic.net/5Hsq9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5Hsq9.png" alt="enter image description here" /></a></p>
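<p>For reference, a sketch of the likely mismatch (this is an assumption about the cause): the GeoDataFrame's coordinates are Mollweide metres centred on longitude 0, while Basemap's <code>moll</code> axes put the map's lower-left corner at (0, 0), so the raw ESRI:54009 x/y land in the wrong place. Reprojecting the frame to EPSG:4326 and passing lon/lat through the Basemap instance (which is callable) aligns the layers; the CRS conversion alone looks like this:</p>

```python
# ESRI:54009 coordinates are metres relative to (lon 0, equator), so they
# can be negative, unlike Basemap's axes, which start at the map corner.
from pyproj import Transformer

to_moll = Transformer.from_crs("EPSG:4326", "ESRI:54009", always_xy=True)
x, y = to_moll.transform(10.0, 45.0)   # lon, lat
print(round(x), round(y))

# With geopandas + basemap (not runnable here; point geometries assumed):
# gdf_ll = gdf.to_crs(epsg=4326)
# xs, ys = map(gdf_ll.geometry.x.values, gdf_ll.geometry.y.values)
# ax.scatter(xs, ys, color='red', s=5)
```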
|
<python><dataframe><matplotlib><geopandas><matplotlib-basemap>
|
2024-04-10 16:08:16
| 1
| 7,315
|
emax
|
78,305,634
| 613,365
|
How can I make a video restart (loop) after it is done playing?
|
<p>I'm trying to make a video which will stay paused on the first frame for 5 seconds, then play until the end, and rewind (then stay on the first frame for 5 seconds, then play again).</p>
<p>So far I have this code, which detects when the video is done playing, but it does not go back to the beginning; it just stays on the last frame of the video.</p>
<p>Basically, the <code>set_position(0)</code> does not seem to work once the video is done playing.</p>
<pre><code>import vlc
import time
# Path to your video file
video_path = "vid.mp4"
# Create VLC instance
vlc_instance = vlc.Instance('')
# Create media player
player = vlc_instance.media_player_new()
# Load the video file
media = vlc_instance.media_new(video_path)
player.set_media(media)
#player.PlaybackMode("loop")
# Register an event manager
event_manager = player.event_manager()
# Define a callback function for the EndReached event
def on_end_reached(event):
restart_video(player)
# Connect the EndReached event to the callback function
event_manager.event_attach(vlc.EventType.MediaPlayerEndReached, on_end_reached)
def restart_video(player):
print("Putting the video at the beginning")
player.set_position(0)
player.play()
time.sleep(0.5)
player.pause() # Pause the video as the beginning
time.sleep(5)
player.play()
restart_video(player)
</code></pre>
|
<python><python-vlc>
|
2024-04-10 15:53:32
| 0
| 11,333
|
FMaz008
|
78,305,406
| 3,156,085
|
How does `issubclass` usually determine that a class is a subclass of another?
|
<p>How does <code>issubclass(cls, base_cls)</code> work?</p>
<p>Checking whether <code>base_cls</code> is in <code>cls.__mro__</code> or not? Or is it something else or more complicated?</p>
<p><code>issubclass(cls, base_cls)</code> is implemented by <code>base_cls.__subclasscheck__(self, cls)</code> and can have a totally different behavior than the default one. Since <code>__subclasscheck__</code> is also a built-in method, I could also rephrase my question:</p>
<p>How does the <code>__subclasscheck__()</code> method work by default?</p>
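<p>A quick empirical check of the MRO hypothesis (this demonstrates the observable behaviour, not CPython's actual C implementation of <code>type.__subclasscheck__</code>):</p>

```python
class Base: pass
class Child(Base): pass

# For ordinary classes, the default check agrees with MRO membership:
assert issubclass(Child, Base) == (Base in Child.__mro__)      # both True
assert issubclass(Child, object) == (object in Child.__mro__)  # both True
assert issubclass(Base, Child) == (Child in Base.__mro__)      # both False

print([c.__name__ for c in Child.__mro__])  # ['Child', 'Base', 'object']
```

<p>Classes whose metaclass overrides <code>__subclasscheck__</code> (for example <code>abc.ABCMeta</code> with <code>register()</code>) can diverge from the MRO test, which is exactly why the check is delegated to a method at all.</p>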
|
<python><inheritance>
|
2024-04-10 15:15:01
| 1
| 15,848
|
vmonteco
|
78,305,343
| 1,422,096
|
Difference between lstrip and removeprefix?
|
<p>Why is <code>removeprefix</code> needed to remove a prefix from a string (or <code>text[text.startswith(prefix) and len(prefix):]</code> for Python < 3.9, see <a href="https://stackoverflow.com/questions/16891340/remove-a-prefix-from-a-string/16892491#16892491">Remove a prefix from a string</a>), when it seems we can simply use <code>lstrip</code>?</p>
<p>Indeed:</p>
<pre><code>"abchello".removeprefix("abc") # hello
"abchello".lstrip("abc") # hello
</code></pre>
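<p>The two only coincide here by luck: <code>lstrip</code> treats its argument as a <em>set of characters</em> to strip, not as a prefix, so it can remove far more than intended:</p>

```python
print("abchello".removeprefix("abc"))   # hello
print("abchello".lstrip("abc"))         # hello  (same result, by coincidence)

print("aabbcchello".removeprefix("abc"))  # aabbcchello (no exact 'abc' prefix)
print("aabbcchello".lstrip("abc"))        # hello  (keeps stripping a/b/c chars)

print("banana".removeprefix("ban"))  # ana
print("banana".lstrip("ban"))        # '' (every character is in the set)
```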
|
<python>
|
2024-04-10 15:03:05
| 1
| 47,388
|
Basj
|
78,305,307
| 10,143,378
|
Calendar Dash component outside box
|
<p>I am facing an issue in my Dash application; a screenshot shows it directly:</p>
<p><a href="https://i.sstatic.net/sEMrM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sEMrM.png" alt="enter image description here" /></a></p>
<p>You can see that the calendar days are going outside the calendar's box.</p>
<p>To create that, I simply use a DateRangePicker component in my layout:</p>
<pre><code>dmc.DateRangePicker(
id='calendar_id',
style={"width": "100%"},
placeholder='Date Range...'
),
</code></pre>
<p>If you want more of the layout code, it looks like this (I have cleaned it up a little to avoid including too much):</p>
<pre><code>layout = dbc.Container([
dbc.Row([
dbc.Col([
dbc.Card([
dbc.CardBody([
html.Label(
'Some Label:',
className='bold-text'
),
dcc.Dropdown(
id='dropdown_id',
options=[{'label': i, 'value': i} for i in ['a', 'b']]
),
html.Br(),
dmc.DateRangePicker(
id='calendar_id',
style={"width": "100%"},
placeholder='Date Range...'
)
])
], className="mb-4"), # Margin bottom
], md=5),
dbc.Col([
#other code
], md=7)
]),
], fluid=True)
</code></pre>
|
<python><html><css><plotly-dash>
|
2024-04-10 14:55:06
| 1
| 576
|
kilag
|
78,305,085
| 6,850,901
|
How do I get the number of dimensions (aka order or degree) of a tensor in PyTorch?
|
<p>If I have a PyTorch tensor such as</p>
<pre class="lang-py prettyprint-override"><code>t = torch.rand((2,3,4,5))
</code></pre>
<p>how do I get the number of dimensions of this tensor? In this case, it would be 4.</p>
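<p>For reference, a short sketch (assuming a recent PyTorch): both <code>Tensor.dim()</code> and the NumPy-style <code>Tensor.ndim</code> alias return the number of dimensions:</p>

```python
import torch

t = torch.rand((2, 3, 4, 5))
print(t.dim())       # 4
print(t.ndim)        # 4, NumPy-style alias
print(len(t.shape))  # 4, equivalent
```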
|
<python><pytorch><tensor>
|
2024-04-10 14:18:16
| 1
| 1,488
|
Oskar
|
78,305,001
| 8,237,838
|
Using geographical coordinates, given an arbitrary point, how to find the closest existing point on a linestring?
|
<p>I'm working with shapely and django-gis. I'm given a linestring with full coordinates and a point with coordinates rounded to 6 decimal places.</p>
<p>What I need is to find the index of the nearest existing point of my linestring from the given arbitrary point, find the nearest point on the linestring itself, and insert the coordinates of that nearest on-line point at index + 1.</p>
<p>My code is below. However, for some points it raises a ValueError because the closest point is not in the linestring.</p>
<p>Example of successful data:</p>
<p>input : <code>(4.756696, 45.381095)</code>
output : <code>(4.75669614409327, 45.3810948660022)</code></p>
<p>Example of failing data:</p>
<p>input : <code>(4.756614, 45.380532)</code>
output <code>(4.756613683262075, 45.38053204640647) # this point is not inside the given linestring.</code></p>
<pre><code>class SomeClass:
@staticmethod
def closest_point_on_linestring(line_coords: list, point: Point):
line = LineString(line_coords)
min_dist = float('inf')
closest_point = None
for idx in range(len(line_coords) - 1):
segment_start = line_coords[idx]
segment_end = line_coords[idx + 1]
segment = LineString([segment_start, segment_end])
# Project the point onto the segment
projected_point = segment.interpolate(segment.project(point))
# Calculate the distance from the point to the segment
distance = projected_point.distance(point)
# Calculate the distance between the segment's start and end points
segment_length = Point(segment_start).distance(Point(segment_end))
# Check if the distance to the segment is within the segment length
if 0 <= segment.project(projected_point) <= segment_length:
if distance < min_dist:
min_dist = distance
closest_point = projected_point
return closest_point
def another_function(self):
...
closest = self.closest_point_on_linestring(line, Point(landmark_point))
index = list(line_string.coords).index(closest.coords[0])
...
</code></pre>
<p>Below is a terrible drawing of my requirements:</p>
<p>The black line represents my linestring, the red dots the points of this linestring, and the green dots the arbitrary points I'm given. I need to find the closest red dot for each green dot and the closest point inside the linestring (the interpolation is represented with blue lines).</p>
<p><a href="https://i.sstatic.net/K3Jjw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K3Jjw.png" alt="enter image description here" /></a></p>
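<p>A sketch of the vertex lookup with toy coordinates (not the real data): <code>shapely.ops.nearest_points</code> gives the closest point <em>on</em> the line, which generally falls between vertices; that is why <code>list(line_string.coords).index(...)</code> raises. The nearest existing vertex index has to be found by comparing distances instead:</p>

```python
from shapely.geometry import LineString, Point
from shapely.ops import nearest_points

line = LineString([(0, 0), (1, 0), (2, 1)])
p = Point(1.4, 0.5)

# Closest point *on* the line: usually an interpolated point, not a vertex.
on_line = nearest_points(line, p)[0]

# Index of the closest *existing* vertex, by explicit distance comparison.
idx = min(range(len(line.coords)),
          key=lambda i: Point(line.coords[i]).distance(p))

print(idx, (round(on_line.x, 2), round(on_line.y, 2)))
```

<p>The interpolated on-line point can then be inserted into the coordinate list next to <code>idx</code>, without ever calling <code>.index()</code> on a point that is not a vertex.</p>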
|
<python><geolocation><geo><shapely>
|
2024-04-10 14:05:52
| 2
| 1,920
|
May.D
|
78,304,896
| 13,330,700
|
When reading environment variables in Python from Docker's --env-file argument, there is an added '\' to each '\n'. How to not have that extra '\'?
|
<p>I'm in the process of containerizing my Python app. Normally, I'd load in my <code>.env</code> file and use <code>dotenv</code> to read in the variable.</p>
<p>Example:</p>
<pre><code>from dotenv import load_dotenv
import os
def test_env():
load_dotenv()
print(f"private_key : {repr(os.environ.get('PRIVATE_KEY'))}")
if __name__ == '__main__':
test_env()
</code></pre>
<p>and in my <code>.env</code> file I'd have to have quotes around my variable for this to work too. Example:</p>
<pre><code>PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n.....=\n-----END PRIVATE KEY-----\n"
</code></pre>
<p>This gets printed out correctly in my Python application.</p>
<p>However, when I run the same code but using Docker's <code>--env-file</code> I get an extra <code>\</code> added to each <code>\n</code>.</p>
<p>Example docker command:</p>
<pre><code>docker run --env-file ./.env -d -p $(RUN_PORT):$(RUN_PORT) --name $(DOCKER_CONTAINER_NAME) $(FULL_IMG_NAME)
</code></pre>
<p>The output looks like: <code>'-----BEGIN PRIVATE KEY-----\\n...=\\n-----END PRIVATE KEY-----\\n'</code>
This happens regardless of whether the <code>.env</code> file has <code>""</code> around the value or not; with <code>--env-file</code>, quotes are simply passed through literally as part of the value, which is not the case with <code>dotenv</code>.</p>
<p>What needs to be done?</p>
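<p>(For reference, since <code>--env-file</code> passes the two characters <code>\</code> and <code>n</code> through literally rather than interpreting them, one post-processing sketch is to decode them after reading. The key value here is a stand-in, not my real key:)</p>
<pre class="lang-py prettyprint-override"><code>import os

# Simulate what --env-file delivers: a literal backslash followed by 'n'
os.environ["PRIVATE_KEY"] = "-----BEGIN PRIVATE KEY-----\\nABC\\n-----END PRIVATE KEY-----\\n"

# Turn each two-character sequence '\' + 'n' into a real newline
key = os.environ["PRIVATE_KEY"].replace("\\n", "\n")
</code></pre>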
|
<python><docker><environment-variables>
|
2024-04-10 13:47:21
| 1
| 499
|
mike_gundy123
|
78,304,830
| 1,703,316
|
Correct way to unzip the staged file in the Snowflake python UDF
|
<p>I'm working on implementing PyTorch model inference in the Snowflake UDF/UDTF. I'm following the official example <a href="https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-examples#unzipping-a-staged-file" rel="nofollow noreferrer">https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-examples#unzipping-a-staged-file</a></p>
<p>This is the most relevant bit of code from the example:</p>
<pre><code> # File lock class for synchronizing write access to /tmp.
class FileLock:
def __enter__(self):
self._lock = threading.Lock()
self._lock.acquire()
self._fd = open('/tmp/lockfile.LOCK', 'w+')
fcntl.lockf(self._fd, fcntl.LOCK_EX)
def __exit__(self, type, value, traceback):
self._fd.close()
self._lock.release()
# Get the location of the import directory. Snowflake sets the import
# directory location so code can retrieve the location via sys._xoptions.
IMPORT_DIRECTORY_NAME = "snowflake_import_directory"
import_dir = sys._xoptions[IMPORT_DIRECTORY_NAME]
# Get the path to the ZIP file and set the location to extract to.
zip_file_path = import_dir + "spacy_en_core_web_sm.zip"
extracted = '/tmp/en_core_web_sm'
# Extract the contents of the ZIP. This is done under the file lock
# to ensure that only one worker process unzips the contents.
with FileLock():
if not os.path.isdir(extracted + '/en_core_web_sm/en_core_web_sm-2.3.1'):
with zipfile.ZipFile(zip_file_path, 'r') as myzip:
myzip.extractall(extracted)
</code></pre>
<p>The model archive is delivered to the workspace of the UDF and placed in the directory retrieved from <code>sys._xoptions</code>. The example code addresses the issue that the computation is distributed across a bunch of worker threads running on the same machine, so the unzipping should be done once by one of them while the others wait.</p>
<p>Mostly the code makes sense, except for one bit. What is the point of creating a <code>threading.Lock</code> instance in the <code>__enter__</code> method of <code>FileLock</code> class and acquiring it? If this is intended to synchronize the threads, wouldn't it make sense to somehow share the lock object between the threads? Otherwise each thread creates its own instance of the lock and acquires it, how would that achieve anything?</p>
<p>I believe the code actually achieves its purpose by using the file lock, that is actually shared by all threads/processes running this code, because it is an exclusive file lock on the one and the same file for all of them. Am I missing something and using file lock is not enough here and the threading.Lock is also needed?</p>
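<p>(For comparison, here is a minimal sketch, not the Snowflake example itself, where the in-process lock is created once at module level and is therefore actually shared between threads, while the <code>fcntl</code> lock still coordinates separate processes. The path and names are illustrative:)</p>
<pre class="lang-py prettyprint-override"><code>import fcntl
import os
import tempfile
import threading

_THREAD_LOCK = threading.Lock()  # module-level: one instance shared by all threads


class FileLock:
    """Exclusive lock: _THREAD_LOCK serializes threads within this process,
    the fcntl file lock serializes separate worker processes."""

    def __init__(self, path=os.path.join(tempfile.gettempdir(), "lockfile.LOCK")):
        self._path = path

    def __enter__(self):
        _THREAD_LOCK.acquire()
        self._fd = open(self._path, "w+")
        fcntl.lockf(self._fd, fcntl.LOCK_EX)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        fcntl.lockf(self._fd, fcntl.LOCK_UN)
        self._fd.close()
        _THREAD_LOCK.release()
</code></pre>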
|
<python><multithreading><snowflake-cloud-data-platform>
|
2024-04-10 13:36:02
| 1
| 727
|
stys
|
78,304,579
| 2,281,159
|
Jupyter Notebook running in a local pyenv does not see PIP installed packages
|
<p>I am trying to use <code>python</code>, <code>pyenv</code>, and <code>jupyter</code> notebooks on my MacBook.</p>
<p>Everything seems configured to use my <code>pyenv</code> properly, but when I run the Jupyter notebook, it can't find some packages I installed with <code>pip</code>. They are confirmed as installed, as they show up in the <code>pip list</code> command.</p>
<p>I confirmed that pyenv is set up properly and that everything is running within the same local virtual environment; they are all running from the same virtual <code>bin</code> directory, <code>~//.venv/bin</code>.</p>
<p>All of that looks good, but when I launch the Jupyter notebook from the command line within that local pyenv, it doesn't see my additional installed pip packages.</p>
<p>I looked at other answers to a similar problem, but they did not help.</p>
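<p>(A quick diagnostic sketch, run inside the notebook itself: compare the kernel's interpreter against the virtualenv's Python. If they differ, <code>pip</code> and the kernel are looking at different <code>site-packages</code> directories:)</p>
<pre class="lang-py prettyprint-override"><code>import sys

# Run inside the notebook: if this is not the venv's bin/python, the kernel
# is not using the environment where pip installed the packages.
print(sys.executable)
print(sys.prefix)
</code></pre>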
|
<python><pip><jupyter><pyenv>
|
2024-04-10 12:53:31
| 1
| 360
|
Mark_Sagecy
|
78,304,473
| 8,428,634
|
LangChain E-Mails with LLM
|
<p>I am quite new to LangChain and Python, as I'm mainly doing C#, but I am interested in using AI on my own data.
So I wrote some Python code using LangChain that:</p>
<ol>
<li><p>Gets my Emails via IMAP</p>
</li>
<li><p>Creates JSON from my E-Mails (JSONLoader)</p>
</li>
<li><p>Creates a Vectordatabase where each mail is a vector (FAISS, OpenAIEmbeddings)</p>
</li>
<li><p>Does a similarity search according to the query returning the 3 mails that match the query the most</p>
</li>
<li><p>feeds the result of the similarity search to the LLM (GPT 3.5 Turbo) using the query AGAIN</p>
</li>
</ol>
<p>The LLM Prompt then looks something like:</p>
<pre><code>The question is
{query}
Here are some information that can help you to answer the question:
{similarity_search_result}
</code></pre>
<p>Ok so far so good... when my question is:</p>
<pre><code>When was my last mail sent to xyz@gmail.com?
</code></pre>
<p>I get a correct answer, e.g. "last mail received 10.04.2024 14:11".</p>
<p>But what if i want to have an answer to the following question</p>
<pre><code>How many mails have been sent by xyz@gmail.com?
</code></pre>
<p>Because the similarity search only returns the most similar vectors, how can I get an answer about the total amount?
Even if the similarity search delivered 150 mails sent by xyz@gmail.com instead of 3, I can't just feed them all into the LLM prompt, right?</p>
<p>So what is my mistake here?</p>
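<p>(My current thinking: a counting question is an aggregation, so it seems better answered by a structured query over the mail metadata than by a similarity search. A minimal stdlib sketch with an assumed mail schema, no LLM or vector store involved:)</p>
<pre class="lang-py prettyprint-override"><code>from collections import Counter

# Assumed schema: each mail was parsed into a dict with a "from" field
mails = [
    {"from": "xyz@gmail.com", "subject": "Hello"},
    {"from": "abc@example.com", "subject": "Invoice"},
    {"from": "xyz@gmail.com", "subject": "Re: Hello"},
]

sent_counts = Counter(m["from"] for m in mails)
answer = sent_counts["xyz@gmail.com"]  # plain aggregation over metadata
</code></pre>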
|
<python><langchain>
|
2024-04-10 12:36:24
| 1
| 621
|
CrazyEight
|
78,304,167
| 13,806,869
|
Why does calling pd.ExcelWriter in this way create an invalid file format or extension?
|
<p>I am trying to export two Pandas dataframes to the same Excel file, in different sheets.</p>
<p>The code below runs ok, but the file it creates has a size of 0kb. When I try to open it in Excel, I get the message "<em>Excel cannot open the file myfile.xlsx because the file format or file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file</em>":</p>
<pre><code>V_writer = pd.ExcelWriter("myfile.xlsx", mode = 'w')
V_df_1.to_excel(V_writer, sheet_name = "df_1", index = False)
V_df_2.to_excel(V_writer, sheet_name = "df_2", index = False)
</code></pre>
<p>However, the following code works perfectly:</p>
<pre><code>with pd.ExcelWriter("myfile.xlsx", mode = 'w') as V_writer:
V_df_1.to_excel(V_writer, sheet_name = "df_1", index = False)
V_df_2.to_excel(V_writer, sheet_name = "df_1", index = False)
</code></pre>
<p>Could anyone explain why this is please? The file name and file extension are the same in each bit of code and the same parameters are being used, so I don't understand why one creates an invalid file and the other does not.</p>
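<p>(For reference, a sketch of the non-context-manager form that does produce a valid file; the difference is the explicit <code>close()</code>, which the <code>with</code> block calls for you and which actually writes the workbook to disk. The temp path here is illustrative:)</p>
<pre class="lang-py prettyprint-override"><code>import os
import tempfile

import pandas as pd

V_df_1 = pd.DataFrame({"a": [1, 2]})
V_df_2 = pd.DataFrame({"b": [3, 4]})
path = os.path.join(tempfile.mkdtemp(), "myfile.xlsx")

V_writer = pd.ExcelWriter(path, mode="w")
V_df_1.to_excel(V_writer, sheet_name="df_1", index=False)
V_df_2.to_excel(V_writer, sheet_name="df_2", index=False)
V_writer.close()  # without this, the buffered workbook is never written
</code></pre>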
|
<python><pandas><pandas.excelwriter>
|
2024-04-10 11:43:58
| 1
| 521
|
SRJCoding
|
78,304,081
| 13,174,189
|
How to group rows in a dataset by two timestamp range sizes?
|
<p>I have a dataframe with two timestamps:</p>
<pre><code>timestamp1 timestamp2
2022-02-18 2023-01-02
2022-02-19 2023-01-04
2022-02-21 2023-01-11
2022-03-11 2024-02-05
2022-03-12 2024-02-06
2022-03-30 2024-02-07
</code></pre>
<p>I want to group them (create a new group column) by timestamp ranges. Within each group, the range of dates in each timestamp column must not exceed 4 days (so the difference between the max and min within a group must be no higher than 4 days). The desired result would look like:</p>
<pre><code>timestamp1 timestamp2 group
2022-02-18 2023-01-02 1
2022-02-19 2023-01-04 1
2022-02-21 2023-01-11 2
2022-03-11 2024-02-05 3
2022-03-12 2024-02-06 3
2022-03-30 2024-02-07 4
</code></pre>
<p>As you can see, the third row is not in group 1 because its timestamp2 is 2023-01-11 (9 days larger than the smallest timestamp2 in group 1), and the last row is not in group 3 since its timestamp1 is 19 days larger than the min timestamp1 in group 3.</p>
<p>The goal is to have as many rows per group as possible under the constraints mentioned above.</p>
<p>How can I do that? I tried this, but it doesn't give the desired results, especially on larger datasets:</p>
<pre><code>import pandas as pd
# Sample DataFrame
data = {
'timestamp1': ['2022-02-18', '2022-02-19', '2022-02-21', '2022-03-11', '2022-03-12', '2022-03-30'],
'timestamp2': ['2023-01-02', '2023-01-04', '2023-01-11', '2024-02-05', '2024-02-06', '2024-02-07']
}
df = pd.DataFrame(data)
df['timestamp1'] = pd.to_datetime(df['timestamp1'])
df['timestamp2'] = pd.to_datetime(df['timestamp2'])
# Function to assign groups
def assign_groups(df):
groups = []
current_group = 1
start_timestamp2 = df.iloc[0]['timestamp2']
for i, row in df.iterrows():
if (row['timestamp2'] - start_timestamp2).days > 4:
current_group += 1
start_timestamp2 = row['timestamp2']
groups.append(current_group)
return groups
# Assign groups
df['group'] = assign_groups(df)
</code></pre>
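<p>(For reference, a greedy sketch that tracks the minimum of <em>both</em> columns and starts a new group when either span would exceed 4 days. It reproduces the desired output on the sample above, though a greedy pass is not guaranteed to maximize group sizes in general:)</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

data = {
    "timestamp1": ["2022-02-18", "2022-02-19", "2022-02-21",
                   "2022-03-11", "2022-03-12", "2022-03-30"],
    "timestamp2": ["2023-01-02", "2023-01-04", "2023-01-11",
                   "2024-02-05", "2024-02-06", "2024-02-07"],
}
df = pd.DataFrame(data)
df["timestamp1"] = pd.to_datetime(df["timestamp1"])
df["timestamp2"] = pd.to_datetime(df["timestamp2"])


def assign_groups_both(df, max_days=4):
    """Start a new group whenever either column's span would exceed max_days."""
    groups, current = [], 0
    min1 = min2 = None
    for _, row in df.iterrows():
        if (min1 is None
                or (row["timestamp1"] - min1).days > max_days
                or (row["timestamp2"] - min2).days > max_days):
            current += 1
            min1, min2 = row["timestamp1"], row["timestamp2"]
        groups.append(current)
    return groups


df["group"] = assign_groups_both(df)
</code></pre>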
|
<python><python-3.x><dataframe><algorithm><function>
|
2024-04-10 11:25:00
| 1
| 1,199
|
french_fries
|
78,304,004
| 2,397,542
|
Exclude a Flask view from session creation?
|
<p>I have a Flask app that uses Flask-Session to store session data in Redis between a few containers running as a Kubernetes Deployment. That's all working fine, but I recently started looking closer at the Redis database and noticed it has <em>way</em> more keys than I expected, and it seems that the readiness and metrics endpoints of my app (which are polled quite frequently) are creating a new session each time.</p>
<p>Is there a way to exclude specific Flask views from the session creation process? E.g. something like a <code>@flask.nosession</code> decorator for the view would be ideal, or forcing only those sessions to have a much shorter lifetime?</p>
<p><a href="https://i.sstatic.net/57d2q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/57d2q.png" alt="enter image description here" /></a></p>
|
<python><flask><prometheus>
|
2024-04-10 11:11:42
| 1
| 832
|
AnotherHowie
|
78,303,879
| 6,936,682
|
Typing Pydantic mutated @field_validator property
|
<p>Consider the following Pydantic class:</p>
<pre><code>from pydantic import BaseModel, field_validator
class UserAttributes(BaseModel):
user_groups: list[str]
...
@field_validator("user_groups", mode="before")
@classmethod
def make_user_group_list(cls, user_groups: str) -> list[str]:
if not user_groups:
raise ValueError("User groups cannot be empty")
return user_groups.replace(" ", "").split(",")
</code></pre>
<p>As the function suggests, <code>user_groups</code> is a comma-separated list, e.g. "Admin, User", which the validator transforms into an array, e.g. <code>["Admin", "User"]</code>.</p>
<p>The problem now is, when I use the class, like so:</p>
<pre><code>request = UserAttributes(user_groups="Admin, User") # type: ignore
</code></pre>
<p>Type checking obviously wants it to be an array.</p>
<p>Is there a way in which I can type the Pydantic class correctly, without having to resort to "hacky" tricks to make it work? (Such a hacky trick could be creating an intermediate property with a field validator that writes to <code>user_groups</code>.)</p>
|
<python><pydantic>
|
2024-04-10 10:48:24
| 0
| 1,970
|
Jeppe Christensen
|
78,303,796
| 9,251,158
|
YouTube upload documentation code fails to get authenticated service due to OAuth2client AttributeError
|
<p>I am uploading videos to YouTube programmatically following <a href="https://developers.google.com/youtube/v3/guides/uploading_a_video" rel="nofollow noreferrer">the official guide</a>. This worked three weeks ago when I first tried and I logged into my account on a browswer for the OAuth2 flow.</p>
<p>Now my credentials expired and I cannot upload again. This is the relevant part of the code, copied from the sample code:</p>
<pre class="lang-py prettyprint-override"><code>def get_authenticated_service():
flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE,
scope=YOUTUBE_UPLOAD_SCOPE,
message=MISSING_CLIENT_SECRETS_MESSAGE)
storage = Storage("%s-oauth2.json" % sys.argv[0])
credentials = storage.get()
if credentials is None or credentials.invalid:
credentials = run_flow(flow, storage, [])
return build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
http=credentials.authorize(httplib2.Http()))
</code></pre>
<p>I now get this error from the <code>oauth2client</code>:</p>
<pre><code>/usr/local/lib/python3.11/site-packages/oauth2client/_helpers.py:255: UserWarning: Cannot access upload_youtube.py-oauth2.json: No such file or directory
warnings.warn(_MISSING_FILE_MESSAGE.format(filename))
Traceback (most recent call last):
File "~/upload_youtube.py", line 291, in <module>
youtube = get_authenticated_service()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/upload_youtube.py", line 92, in get_authenticated_service
credentials = run_flow(flow, storage, [])
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/oauth2client/_helpers.py", line 133, in positional_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/oauth2client/tools.py", line 195, in run_flow
logging.getLogger().setLevel(getattr(logging, flags.logging_level))
^^^^^^^^^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'logging_level'
</code></pre>
<p>I uninstalled and reinstalled <code>google-api-python-client</code> and <code>oauth2client</code> to make sure I had the latest version, and I still get the same error.</p>
<p><strong>Note</strong>: The file <code>upload_youtube.py-oauth2.json</code> is a cache of the authenticated service so I don't have to login on the browser every time I run it. That authentication expires after 1-2 weeks. The warning (<code>UserWarning: Cannot access upload_youtube.py-oauth2.json: No such file or directory</code>) happens when the file is missing, e.g. when I run the code for the first time or (in this case), when I delete that file in an attempt to solve the error.</p>
<p>How can I work around this error and get an authenticated YouTube service?</p>
|
<python><python-3.x><youtube-data-api>
|
2024-04-10 10:33:16
| 1
| 4,642
|
ginjaemocoes
|
78,303,788
| 6,915,206
|
Django Error DisallowedHost at / Invalid HTTP_HOST header: 'xx.xx.xx.xx' again and again
|
<p><strong>Error:</strong>
<strong>DisallowedHost at /
Invalid HTTP_HOST header: '3.17.142.65'. You may need to add '3.17.142.65' to ALLOWED_HOSTS.</strong></p>
<p>I am trying to deploy my Django site on AWS EC2, deploying through GitHub using git clone on the AWS live CLI. I am getting the following error again and again. My <em><strong>EC2 instance IP is 3.17.142.65</strong></em>, and in my settings file I first had <code>ALLOWED_HOSTS = ['3.17.142.65', 'localhost', '127.0.0.1']</code>, which shows me the same error; then I changed it to <code>ALLOWED_HOSTS = ['3.17.142.65']</code>,
which also gives the same error. (One thing I don't understand: I cloned my GitHub project once at the start, so after that, if I change the settings file on GitHub, how does the AWS CLI know about these changes? Btw, I run the command <code>git pull origin master</code>. Am I right that I should run this command after making any changes to the GitHub files?)</p>
<p>I am new to Ubuntu and deploying websites, so please guide me on what mistake I am making here.</p>
<p><strong>To run the server i executing these commands</strong></p>
<pre><code>sudo systemctl restart nginx
sudo service gunicorn restart
sudo service nginx restart
</code></pre>
<p>My Configured Nginx file</p>
<pre><code>server {
listen 80;
server_name 3.17.142.65;
location = /favicon.ico { access_log off; log_not_found off; }
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
}
</code></pre>
|
<python><django><amazon-web-services>
|
2024-04-10 10:32:02
| 2
| 563
|
Rahul Verma
|
78,303,706
| 5,170,442
|
What is a simple way to round all floats in an f-string to the same number of digits in one step?
|
<p>I have a long f-string containing many float values, all of which I want to round to the same number of digits. Is there a way to do this in one go or in a smart way, or do I need to specify the digits for each one separately?</p>
<p>I know that I can round a value in an f-string using</p>
<pre class="lang-py prettyprint-override"><code>print(f"pi rounded to 2 decimals is {math.pi:.2f}")
</code></pre>
<p>However, I would like to do something along the lines of:</p>
<pre class="lang-py prettyprint-override"><code>print(f"rounding pi and tau and e to 2 digits results in {math.pi}, {math.tau}, {math.e}, respectively:"<some operation to round all to two digits.>)
</code></pre>
<p>(This is a minimal example. In reality, my string has many more floats associated to it (and isn't just a list of mathematical constants)).</p>
<p>I know I can could do something like:</p>
<pre class="lang-py prettyprint-override"><code>constants = [math.pi, math.tau, math.e]
rounded_constants = [str(round(c, 2)) for c in constants]
print(f"rounding pi and tau and e to 2 digits results in {rounded_constants[0]}, {rounded_constants[1]}, {rounded_constants[2]}, respectively")
</code></pre>
<p>but this seems a bit roundabout to me and I was wondering if there's a more direct way to do it.</p>
<p>(EDITED: I edited the question to clarify that I don't really care about doing it in a single step, but just want to find a smarter way than specifying it for each float separately)</p>
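<p>(For reference, two sketches I am aware of: the precision in a format spec can itself be a variable, as in <code>{value:.{digits}f}</code>, and a generator with <code>str.join</code> avoids spelling out each float separately. The <code>constants</code> dict is just for illustration:)</p>
<pre class="lang-py prettyprint-override"><code>import math

digits = 2
constants = {"pi": math.pi, "tau": math.tau, "e": math.e}

# Nested format spec: the precision is parameterized in one place
summary = ", ".join(f"{name}={value:.{digits}f}" for name, value in constants.items())
print(f"rounded to {digits} digits: {summary}")
</code></pre>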
|
<python><string-formatting><f-string>
|
2024-04-10 10:18:44
| 3
| 653
|
db_
|
78,303,605
| 5,303,909
|
Node-gyp prebuild-install || node-gyp rebuild falling with python 3.12
|
<p>I'm working on a <code>React Native Expo</code> project, and I'm encountering an error from node-gyp when I try to install or build dependencies. The error occurs because node-gyp expects <code>Python 3.10</code>, but my system has <code>Python 3.12</code> installed. I've found that using <code>Python 3.10</code> resolves the issue, but this isn't ideal because my Android builds require <code>Python 3.12.</code></p>
<p>Currently, I'm using <code>Node.js v18</code> with <code>node-gyp 8</code>. Is there a proper solution to this problem? The error sometimes occurs with packages like <code>snappy</code> and <code>node-sass</code>. Any help would be appreciated.</p>
<pre><code>/Users/XXX/Documents/my-project/node_modules/snappy: Command failed.
Exit code: 1
Command: prebuild-install || node-gyp rebuild
Arguments:
Directory: Users/XXX/Documents/my-project/node_modules/snappy
</code></pre>
<p>I'm using macOS Sonoma(14) in Apple Silicon</p>
|
<python><node.js><react-native><npm><node-gyp>
|
2024-04-10 10:00:27
| 1
| 1,800
|
NicoleZ
|
78,303,223
| 8,849,755
|
Python logging loop without new lines
|
<p>Is it possible to use the <code>logging</code> module to display information about the progress in a loop without printing hundreds of new lines?</p>
<p>Let me illustrate with this snippet:</p>
<pre class="lang-py prettyprint-override"><code>for n in range(99999999):
if n%99999 == 0:
msg = f'Completed {n} iterations'
print(msg + '\b'*len(msg), end='')
</code></pre>
<p>Is it possible to replicate this behavior with <code>logging</code>? Something like:</p>
<pre class="lang-py prettyprint-override"><code>with logging.end(''):
for n in range(99999999):
if n%99999 == 0:
msg = f'Completed {n} iterations'
logging.info(msg + '\b'*len(msg))
</code></pre>
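<p>(The closest thing I have found is a sketch that changes the handler's <code>terminator</code> attribute from the default <code>'\n'</code> to <code>'\r'</code>, so each record overwrites the previous one on the same line. The helper name is my own:)</p>
<pre class="lang-py prettyprint-override"><code>import logging
import sys


def progress_logger(stream):
    """Build a logger whose records end with '\r' instead of '\n'."""
    handler = logging.StreamHandler(stream)
    handler.terminator = "\r"  # default is "\n"
    handler.setFormatter(logging.Formatter("%(message)s"))
    logger = logging.getLogger("progress")
    logger.handlers = [handler]
    logger.propagate = False
    logger.setLevel(logging.INFO)
    return logger


log = progress_logger(sys.stdout)
for n in range(0, 300000, 99999):
    log.info("Completed %d iterations", n)
</code></pre>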
|
<python><logging>
|
2024-04-10 08:58:41
| 1
| 3,245
|
user171780
|
78,303,180
| 5,462,743
|
Update Azure Machine Learning Compute Instances with Python SDKv2
|
<p>I have some Terraform code that deploys compute instances for AML, but the Terraform provider lags behind, so we cannot add the schedules and auto-shutdown options: <a href="https://github.com/hashicorp/terraform-provider-azurerm/issues/20973" rel="nofollow noreferrer">https://github.com/hashicorp/terraform-provider-azurerm/issues/20973</a></p>
<p>So I want to create a workaround, because Terraform-created compute instances will stay on until someone shuts them off.</p>
<p>I'm trying to list all the compute instances with the python SDK V2 and update each compute instances one by one. But it does not work.</p>
<pre><code>subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx"
resource_group = "XXXXXXXXXXXXXXXXXXXXX"
workspace = "XXXXXXXXXXX"
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
from azure.ai.ml.constants import TimeZone
from azure.ai.ml.entities import ComputeSchedules, ComputeStartStopSchedule, RecurrencePattern, RecurrenceTrigger
# Login (az login or env variables)
ml_client = MLClient(
DefaultAzureCredential(), subscription_id, resource_group, workspace
)
for compute_instance in ml_client.compute.list():
if compute_instance.type == "computeinstance":
rec_trigger = RecurrenceTrigger(start_time="2024-01-10T00:00:00", time_zone=TimeZone.CENTRAL_EUROPEAN_STANDARD_TIME , frequency="day", interval=1, schedule=RecurrencePattern(hours=20, minutes=[0]))
myschedule = ComputeStartStopSchedule(trigger=rec_trigger, action="stop")
com_sch = ComputeSchedules(compute_start_stop=[myschedule])
compute_instance.schedules = com_sch
compute_instance.idle_time_before_shutdown = "PT30M"
compute_instance.idle_time_before_shutdown_minutes = 30
ml_client.begin_create_or_update(compute_instance)
</code></pre>
<p>I get this output <code>Warning: 'Location' is not supported for compute type computeinstance and will not be used.</code>. The <code>location</code> property is coming from the <code>ml_client.compute.get(compute_instance.name)</code>.</p>
<p>I want to idle shutdown after 30 minutes and force shutdown at 8pm. For this I've started to edit one compute instance from the AML Web UI, then used the python code to get the content of this modified compute instance and replicate it for the others.</p>
<p>On the <code>begin_create_or_update</code>, I see that the compute instance is in <code>creating</code> state, but once updated, there's still no schedule nor idle shutdown.</p>
<p>I also tried with <code>az ml compute update</code> but there's an error on the <code>idle</code> argument that is not recognized even if it is in the documentation. <a href="https://github.com/Azure/azure-cli/issues/26911" rel="nofollow noreferrer">https://github.com/Azure/azure-cli/issues/26911</a></p>
|
<python><azure-machine-learning-service><azure-sdk-python>
|
2024-04-10 08:50:13
| 1
| 1,033
|
BeGreen
|
78,303,059
| 5,378,816
|
Is deletion of no longer needed helper functions pythonic?
|
<p>I saw a .py file like this:</p>
<pre><code>def _initmodule():
...
def api_func1(): pass
def api_func2(): pass
_initmodule()
del _initmodule # <--- pythonic?
</code></pre>
<p>The init function is run once and after that it is not needed any more and gets deleted.</p>
<p>Wondering whether it is pythonic/idiomatic, I briefly scanned the std library and found a similar pattern in <code>pickletools</code>:</p>
<pre><code>def assure_pickle_consistency(verbose=False):
...
assure_pickle_consistency()
del assure_pickle_consistency
</code></pre>
<p>and another one in <code>urllib/parse</code>.</p>
<pre><code>def _fix_result_transcoding():
...
_fix_result_transcoding()
del _fix_result_transcoding
</code></pre>
<p>I have probably overlooked some occurrences, but it is definitely a rare pattern.</p>
<p>Is it something we should not forget to do?</p>
|
<python>
|
2024-04-10 08:29:44
| 0
| 17,998
|
VPfB
|
78,302,788
| 16,723,655
|
How can I make fastest calculation speed for given condition for numpy array?
|
<p>I made categories as below.</p>
<pre><code>1~4 : 0
5~9 : 1
10~15 : 2
</code></pre>
<p>I have a numpy array as below.</p>
<pre><code>np.array([2, 5, 10, 13, 7, 9])
</code></pre>
<p>What is the fastest way to change the above numpy array based on the given conditions, as below?</p>
<pre><code>np.array([0, 1, 2, 2, 1, 1])
</code></pre>
<p>I think a 'for loop' will consume a lot of time.</p>
<p>Is there any way to make the calculation as fast as possible?</p>
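<p>(For reference, a vectorized sketch with <code>np.digitize</code>, assuming the category boundaries above: values below 5 map to 0, values from 5 to 9 map to 1, and values of 10 and above map to 2:)</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

arr = np.array([2, 5, 10, 13, 7, 9])
bins = np.array([5, 10])        # left edges of categories 1 and 2
cats = np.digitize(arr, bins)   # index of the bin each value falls into
print(cats)                     # [0 1 2 2 1 1]
</code></pre>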
|
<python><numpy>
|
2024-04-10 07:36:06
| 3
| 403
|
MCPMH
|
78,302,751
| 16,723,655
|
How can I get values of specific indexes for numpy array?
|
<p>I have a numpy array (shape: (256, 256, 3)) of an image, and</p>
<p>I have a numpy array for indexing, as below.</p>
<pre><code>array([[ 3, 230],
[ 3, 231],
[ 3, 232],
...,
[114, 151],
[115, 149],
[115, 150]], dtype=int64)
</code></pre>
<p>However, I want to get the values of the image's numpy array (shape: (256, 256, 3)) for</p>
<p>the indexing below, so that I can get the 3 channel values for each specific index.</p>
<pre><code>array([[ 3, 230, :],
[ 3, 231, :],
[ 3, 232, :],
...,
[114, 151, :],
[115, 149, :],
[115, 150, :]], dtype=int64)
</code></pre>
<p>How can I get values?</p>
|
<python><numpy>
|
2024-04-10 07:29:18
| 1
| 403
|
MCPMH
|
78,302,239
| 5,026,136
|
How to POST using RobinHood Crypto API in Python?
|
<p>I'm using the new RobinHood crypto API documented here: <a href="https://docs.robinhood.com/crypto/trading/#tag/Trading/operation/api_v1_post_crypto_trading_order" rel="nofollow noreferrer">https://docs.robinhood.com/crypto/trading/#tag/Trading/operation/api_v1_post_crypto_trading_order</a></p>
<p>I am successful with all GET endpoints, but can't seem to place an order using the POST endpoints. It fails with a verification error, which testing shows usually means something about the message is wrong. As far as I can tell, though, I've followed the documentation exactly.</p>
<p>Here is my code. Is there something that I'm missing?</p>
<pre><code>url = "https://trading.robinhood.com/api/v1/crypto/trading/orders/"
api_path = "/api/v1/crypto/trading/orders/"
http_method_type = "POST"
body = {
"client_order_id" : "131de903-5a9c-4260-abc1-28d562a5dcf0",
"side" : "buy",
"symbol" : "DOGE-USD",
"type" : "market",
"market_order_config" : {
"asset_quantity" : "1"
}
}
current_unix_timestamp = int(time.time())
message = (
f"{API_KEY_ROBIN_HOOD}"
f"{current_unix_timestamp}"
f"{api_path}"
f"{http_method_type}"
f"{body}"
)
signature = PRIV_KEY.sign(message.encode("utf-8"))
base64_signature = base64.b64encode(signature).decode("utf-8")
PUB_KEY.verify(signature, message.encode("utf-8"))
headers = {
'x-api-key': API_KEY_ROBIN_HOOD,
'x-timestamp': str(current_unix_timestamp),
'x-signature': base64_signature,
'Content-Type': 'application/json; charset=utf-8',
}
response = requests.post(url, headers=headers, json=body)
</code></pre>
<p>The entire code is almost identical to my (working) GET functions, except that this is a POST request and that for the first time, the request (and the message) includes a body element.</p>
<p>Do you see something I've overlooked?</p>
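<p>(One thing worth double-checking, a sketch rather than anything verified against the API: <code>f"{body}"</code> embeds Python's <code>repr</code> of the dict, with single quotes, while <code>requests.post(..., json=body)</code> sends <code>json.dumps(body)</code>, with double quotes, so the signed message may not match the body that is actually sent:)</p>
<pre class="lang-py prettyprint-override"><code>import json

body = {"side": "buy", "symbol": "DOGE-USD"}

print(f"{body}")         # {'side': 'buy', 'symbol': 'DOGE-USD'}  (Python repr)
print(json.dumps(body))  # {"side": "buy", "symbol": "DOGE-USD"}  (what requests sends)

# Candidate for the signed message instead of f"{body}"
message_body = json.dumps(body)
</code></pre>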
|
<python><post><request>
|
2024-04-10 05:19:49
| 2
| 4,392
|
Elliptica
|
78,301,994
| 11,748,924
|
Pillow draw line with gradient color where the color gradually from darkgreen to the lime as y position is increased relatively from starting y point
|
<p>How can I draw a line in Pillow whose color gradually becomes lighter (starting from dark green) as each dot of the line moves toward the bottom, i.e. as y increases relative to the starting point?</p>
<p>I expect there is argument like this:</p>
<pre><code>img = Image.new('RGB', (1024, 800))
draw = ImageDraw.Draw(img)
draw.line([(p1x, p1y), (p2x, p2y)], width=2, start_color='darkgreen', end_color='lime', gradient_based_on='+y')
</code></pre>
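<p>(As far as I know there is no such argument, but a workaround sketch is to split the line into many short segments and interpolate the color yourself. <code>gradient_line</code> and its parameters are my own invention, not a Pillow API:)</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image, ImageDraw


def gradient_line(draw, p1, p2, c1, c2, width=2, steps=100):
    """Approximate a color gradient by drawing many short solid segments."""
    (x1, y1), (x2, y2) = p1, p2
    for i in range(steps):
        t0, t1 = i / steps, (i + 1) / steps
        t = (t0 + t1) / 2  # color sampled at the segment midpoint
        color = tuple(int(a + (b - a) * t) for a, b in zip(c1, c2))
        draw.line([(x1 + (x2 - x1) * t0, y1 + (y2 - y1) * t0),
                   (x1 + (x2 - x1) * t1, y1 + (y2 - y1) * t1)],
                  fill=color, width=width)


img = Image.new("RGB", (1024, 800))
draw = ImageDraw.Draw(img)
# darkgreen (0, 100, 0) at the top fading to lime (0, 255, 0) at the bottom
gradient_line(draw, (100, 100), (100, 700), (0, 100, 0), (0, 255, 0))
</code></pre>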
|
<python><python-imaging-library>
|
2024-04-10 03:24:29
| 0
| 1,252
|
Muhammad Ikhwan Perwira
|
78,301,926
| 1,757,464
|
Asyncio: Creating a producer-consumer flow with async generator output
|
<p>I am trying to set up a asyncio program that takes as its input, one <code>AsyncGenerator</code> and returns data via another <code>AsyncGenerator</code>. Here's a sample program that illustrates the basic flow of data:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import AsyncGenerator
import asyncio
async def input_gen() -> AsyncGenerator[str, None]:
'''a simple generator that yields strings'''
for char in "abc123xyz789":
await asyncio.sleep(0.1)
yield char
async def slow_task(item: str) -> str:
'''simulate a slow task that operates on a single item'''
await asyncio.sleep(0.5)
return f"{item}_loaded"
async def my_gen() -> AsyncGenerator[str, None]:
'''a second generator that yields the results slow_task(item)'''
async for item in input_gen():
yield await slow_task(item)
results = [x async for x in my_gen()]
</code></pre>
<p>In this flow, I would like to a) enable concurrent processing of <code>slow_task</code> for each <code>item</code>, and b) start yielding the outputs of <code>slow_task(item)</code> as soon as they become available. The outputs of <code>my_gen</code> need not be sorted, but should otherwise be identical.</p>
<p>I have been <a href="https://stackoverflow.com/questions/74130544/asyncio-yielding-results-from-multiple-futures-as-they-arrive">trying</a> to find a path to do this using asyncio's Queue in a producer/consumer pattern but I haven't managed to get very far. I'm hoping folks have some suggestions for approaches that will improve this.</p>
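<p>(For reference, the closest I have gotten is this sketch: tasks are started eagerly as items arrive, and a queue-feeding producer lets the outer generator yield results as they complete. The structure and names are mine, not a settled pattern:)</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from collections.abc import AsyncGenerator


async def input_gen() -> AsyncGenerator[str, None]:
    for char in "abc":
        await asyncio.sleep(0.01)
        yield char


async def slow_task(item: str) -> str:
    await asyncio.sleep(0.05)
    return f"{item}_loaded"


async def my_gen() -> AsyncGenerator[str, None]:
    queue: asyncio.Queue = asyncio.Queue()
    _DONE = object()

    async def producer() -> None:
        tasks = []
        async for item in input_gen():
            # Start the slow task immediately; don't await it yet
            tasks.append(asyncio.create_task(slow_task(item)))
        for finished in asyncio.as_completed(tasks):
            await queue.put(await finished)
        await queue.put(_DONE)

    prod = asyncio.create_task(producer())
    while (result := await queue.get()) is not _DONE:
        yield result
    await prod


async def main() -> list[str]:
    return [x async for x in my_gen()]


results = asyncio.run(main())
print(results)
</code></pre>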
|
<python><python-asyncio>
|
2024-04-10 02:54:22
| 2
| 6,534
|
jhamman
|
78,301,692
| 20,386,110
|
Godot: Target position of Ball changes upon calling new instance
|
<p>I'm working on Breakout which is a game with a ball and you use a paddle to hit some bricks with the ball.</p>
<p>The starting position of the ball is in the middle of the screen, and as soon as the game begins, the ball shoots downwards toward the paddle. I also have a script on the bottom wall of the main scene which resets the ball (and the player loses a life) if the ball touches it.</p>
<pre><code>extends CharacterBody2D
var speed: int = 3
func _ready():
var target_position = Vector2(575, 645)
var direction = global_position.direction_to(target_position)
velocity = direction.normalized() * speed
func _physics_process(delta):
var collision = move_and_collide(velocity)
if collision != null:
velocity = velocity.bounce(collision.get_normal())
if collision.get_collider().has_method("hit"):
collision.get_collider().hit()
</code></pre>
<p>This is the script for my <code>Ball</code> scene. The ball is placed in the middle of the main scene at position (575, 450). Upon starting it heads towards position (575, 645) which is below the paddle.</p>
<p>And for my main scene, I have this code.</p>
<pre><code>extends Node2D
var current_ball
var ball_scene = preload("res://scenes/ball.tscn")
func _on_bottom_wall_body_entered(body):
reset_ball()
func _ready():
current_ball = $Ball
ball_position()
func _physics_process(delta):
$Label.text = str(Globals.score)
func reset_ball():
if current_ball:
current_ball.queue_free()
current_ball = ball_scene.instantiate()
await get_tree().create_timer(3).timeout
add_child(current_ball)
ball_position()
func ball_position():
current_ball.position.x = 575
current_ball.position.y = 450
</code></pre>
<p>The bottom area is an <code>Area2D</code> and when the ball enters, it resets back to the middle and is meant to head straight downwards again towards the paddle.</p>
<p>The problem that I'm having is that as soon as the ball starts moving after being reset, the position of the direction changes from (575, 645) to (748, 644) and I have no idea why that's happening. So instead of heading downwards toward the paddle, it slightly rotates and starts heading to the right of the screen where it will hit the <code>Area2D</code> again.</p>
|
<python><game-development><godot><gdscript>
|
2024-04-10 00:53:05
| 1
| 369
|
Dagger
|
78,301,490
| 8,653,226
|
Distinguishing between the landmarks (left hand and right hand)
|
<p>I have been trying to distinguish between landmarks with the same number on the left hand and the right hand in MediaPipe and OpenCV. The problem is that we have 20 id numbers, but for both hands we have 40 landmarks. Eventually I want to extract the coordinates of, for example, landmark #8 on the left hand and right hand separately. Here is the minimal work I have done so far to put markers with different colors (circles) on the landmarks with the same number on the left and right hands, but it did not work (they get the same color again):</p>
<pre><code>import cv2
import mediapipe as mp
import time

cap = cv2.VideoCapture(0)
mpHands = mp.solutions.hands
hands = mpHands.Hands(max_num_hands=2)
mpDraw = mp.solutions.drawing_utils

while True:
    success, img = cap.read()
    img = cv2.flip(img, 1)
    time.sleep(0.1)
    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    results = hands.process(imgRGB)
    if results.multi_hand_landmarks:
        for handLms in results.multi_hand_landmarks:
            for id, lm in enumerate(handLms.landmark):
                h, w, c = img.shape
                cx, cy, cz = int(lm.x * w), int(lm.y * h), int(lm.z * c)
                lbl = results.multi_handedness[0].classification[0].label
                if lbl == 'Right':
                    if id == 8:
                        cv2.circle(img, (cx, cy), 8, (255, 0, 0), cv2.FILLED)
                if lbl == 'Left':
                    if id == 8:
                        cv2.circle(img, (cx, cy), 8, (255, 255, 0), cv2.FILLED)
            mpDraw.draw_landmarks(img, handLms, mpHands.HAND_CONNECTIONS,
                                  mpDraw.DrawingSpec(color=(121, 0, 255)),
                                  mpDraw.DrawingSpec(color=(0, 255, 0)))
    cv2.imshow("image", img)
    c = cv2.waitKey(7) % 0x100
    if c == 27:
        break
</code></pre>
<p>I was hoping there is a solution to fully distinguish between all of the landmarks on both hands.</p>
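<p>A likely cause, judging from the code above: <code>results.multi_handedness[0]</code> always reads the first detected hand, so every landmark is labeled with the same handedness. The <code>multi_handedness</code> and <code>multi_hand_landmarks</code> lists are parallel (entry <em>i</em> of each describes the same detected hand), so zipping them pairs each hand with its own label. Here is a minimal sketch of just that pairing logic, using hypothetical stand-in data so it runs without a camera or MediaPipe:</p>

```python
# Stand-ins for results.multi_handedness labels and results.multi_hand_landmarks:
# entry i of each list describes the same detected hand.
labels = ["Right", "Left"]
hands = [[(0.10 + i * 0.01, 0.20) for i in range(21)],
         [(0.80 - i * 0.01, 0.20) for i in range(21)]]

# One color per handedness label, as in the question.
colors = {"Right": (255, 0, 0), "Left": (255, 255, 0)}

# zip() keeps each hand paired with its own label instead of always reading index 0.
picked = []
for label, landmarks in zip(labels, hands):
    x, y = landmarks[8]  # landmark #8 (index fingertip)
    picked.append((label, colors[label], x, y))

print(picked)
```

<p>In the real loop, this would become <code>for handLms, handedness in zip(results.multi_hand_landmarks, results.multi_handedness):</code> with <code>handedness.classification[0].label</code> replacing the stand-in label.</p>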
|
<python><mediapipe><pose-estimation>
|
2024-04-09 23:21:51
| 1
| 519
|
Leo
|
78,301,391
| 11,618,586
|
Splitting and bracketing months based on number of weeks and determine the beginning of the month
|
<p>I want to define each month of the year by its number of weeks, to avoid the confusion caused by months having 30 or 31 days.
For example, Jan to Dec would have weeks in the following configuration:</p>
<pre><code>Jan - 4 wks
Feb - 4 wks
Mar - 5 wks
Apr - 4 wks
May - 4 wks
Jun - 5 wks
and so on...
</code></pre>
<p>Then, based on the current datetime, I want to find the beginning of the month so I can use that as a date filter in my dataframes.
The beginning of the month will not be the typical 1st of the Julian calendar, but whatever date the first week starts with, where Sunday is the beginning of the week.<br />
For example, the month of April 2024 will have 4 weeks (week numbers 14 to 17). Week 14 starts on Mar 31st, so the <code>datefilter</code> value will be <code>2024-03-31</code>.
Similarly, the month of May 2024 will have 4 weeks (week numbers 18 to 21). Week 18 starts on April 28th, so the <code>datefilter</code> value will be <code>2024-04-28</code>.</p>
<p>My code so far:</p>
<pre><code>import datetime
from datetime import timedelta

import pandas as pd

current_weekday = datetime.datetime.now().weekday()
sunday_as_zero = (current_weekday + 1) % 7  # Setting Sunday weekday as zero
if sunday_as_zero == 0:
    week_to_date = datetime.datetime.now() - timedelta(days=7)
else:
    week_to_date = datetime.datetime.now() - timedelta(days=sunday_as_zero)
weeknum = week_to_date.isocalendar().week
if weeknum in [13, 26, 39, 52]:
    datefilter = datetime.datetime.now() + pd.DateOffset(weeks=-5)
else:
    datefilter = datetime.datetime.now() + pd.DateOffset(weeks=-4)
datefilter = datefilter.replace(hour=0, minute=0, second=0, microsecond=0)
</code></pre>
<p>The <code>13</code>, <code>26</code>, <code>39</code> and <code>52</code> are the ending week numbers of the months that have 5 weeks.
How do I get the offset to the month's beginning week number without literally defining the conditions for each month?
Is there a more elegant way to do this?</p>
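<p>If the months follow a fixed 4-4-5 pattern per quarter, which the week counts above and the ending weeks 13/26/39/52 both suggest, the starting week of any month is just one plus a cumulative sum, with no per-month conditions. A sketch under that assumption, together with a Sunday-based start-of-week helper:</p>

```python
import datetime

# 4-4-5 weeks per month, repeating each quarter (matches endings 13/26/39/52).
WEEKS_PER_MONTH = [4, 4, 5] * 4

def first_week_of_month(month):
    """1-based month -> the week number of its first week."""
    return 1 + sum(WEEKS_PER_MONTH[:month - 1])

def sunday_of_week(d):
    """The Sunday on or before date d (weeks run Sunday..Saturday)."""
    # date.weekday(): Monday=0 ... Sunday=6; shift so Sunday counts as 0.
    return d - datetime.timedelta(days=(d.weekday() + 1) % 7)

print(first_week_of_month(4))                     # -> 14 (April is weeks 14-17)
print(sunday_of_week(datetime.date(2024, 4, 9)))  # -> 2024-04-07
```

<p>The <code>datefilter</code> for the current month would then be <code>sunday_of_week</code> applied to any date falling in that month's first week.</p>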
|
<python><python-3.x><datetime><calendar>
|
2024-04-09 22:44:42
| 0
| 1,264
|
thentangler
|
78,301,370
| 2,774,885
|
python multiprocessing equivalent for lots of bash ampersands and redirects?
|
<p>I've got a script where the functionality I need has outgrown bash (or at least outgrown my desire to maintain it in bash) so I'm looking at how to do this in python instead.</p>
<p>I've looked through the multiprocessing documentation to the best of my ability, and I'm still not entirely sure which combination of map-ping, starmap-ping, async-ing, and other Python magic I need to use. The starting point for this block would be represented in bash like this:</p>
<pre><code>command1 argument1a arg1b > output1 < /dev/null &
command2 argument2a arg2b > output2 < /dev/null &
command3 argument3a arg3b > output3 < /dev/null &
wait
do_something_else
</code></pre>
<p>Building out the command and argument list is easy enough; I'm having trouble with the job-control equivalents and getting the script to wait in the right places.</p>
<p>Some relevant things I'm trying to accomplish:</p>
<ul>
<li>each <code>command</code> may be different or another instance of the same command, but each is associated with a unique <code>output</code> file so I don't have to worry about locking/overwriting.</li>
<li>none of the processes need to take any input.</li>
<li>stderr should be printed to the terminal, although if for some reason it's way easier to put it into a separate file that would be fine as well.</li>
<li>I want all of the processes to start more or less simultaneously (the precise order/timing is not important here)</li>
</ul>
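<p>Since each bash line launches an external command with redirections rather than running Python code, <code>subprocess.Popen</code> maps onto this more directly than <code>multiprocessing</code>: start every child without waiting, then <code>wait()</code> on each, which is the analogue of bash's <code>wait</code>. A sketch with placeholder commands standing in for the real ones:</p>

```python
import subprocess
import sys

# Placeholder jobs as (argv list, output file); the real commands would go here.
jobs = [
    ([sys.executable, "-c", "print('one')"], "output1"),
    ([sys.executable, "-c", "print('two')"], "output2"),
]

procs = []
for argv, outfile in jobs:
    f = open(outfile, "w")
    # stdout > file, stdin < /dev/null, stderr left alone so it still
    # reaches the terminal. Popen returns immediately, like `&` in bash.
    p = subprocess.Popen(argv, stdout=f, stdin=subprocess.DEVNULL)
    procs.append((p, f))

# Equivalent of bash's `wait`: block until every child has exited.
for p, f in procs:
    p.wait()
    f.close()

# do_something_else() runs only after all children are done.
```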
|
<python><multiprocessing><job-control>
|
2024-04-09 22:36:57
| 2
| 1,028
|
ljwobker
|
78,301,339
| 8,521,346
|
How to train Hugging Face Model On Multiple Datasets?
|
<p>I am trying to fine-tune a model on two datasets. Following the example on the Hugging Face website, I have my model training on the Yelp Review dataset, but I also want to train it on the Short Jokes dataset.</p>
<p>These two datasets were picked just to exemplify that the datasets I would like to fine-tune the model on are totally unrelated.</p>
<p>I have seen the <code>interleave_datasets</code> function, but I'm not sure if it's exactly what I should be using.</p>
<p>I've tried this to train on one dataset:</p>
<pre><code>from datasets import load_dataset

yelp_dataset = load_dataset("yelp_review_full")
jokes_dataset = load_dataset("short-jokes")

from transformers import AutoTokenizer, Trainer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

tokenized_datasets = yelp_dataset.map(tokenize_function, batched=True)

small_train_yelp_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_yelp_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))

# Where do these go?
small_train_short_jokes_dataset = jokes_dataset["train"].shuffle(seed=42).select(range(1000))
small_eval_short_jokes_dataset = jokes_dataset["test"].shuffle(seed=42).select(range(1000))

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=5)

from transformers import TrainingArguments

training_args = TrainingArguments(output_dir="test_trainer")

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_yelp_dataset,
    eval_dataset=small_eval_yelp_dataset,
)

trainer.train()
</code></pre>
<p><strong>How would I train my model on two different sets at one time?</strong></p>
|
<python><nlp><huggingface-transformers><bert-language-model><huggingface-datasets>
|
2024-04-09 22:28:18
| 0
| 2,198
|
Bigbob556677
|
78,301,269
| 3,614,578
|
pip install private package 403 azure databricks
|
<p>I'm trying to install a package from a private jfrog artifactory repo using the following command</p>
<pre><code>pip install -i https://user:pass@com.jfrog.io/artifactory/api/pypi/private-pypi/simple --trusted-host com.jfrog.io private_package==0.1.1
</code></pre>
<p>and getting a 403 when running it within Azure Databricks. It's using Python 3.9 and pip 21.2.4.</p>
<p>The exact same command works from another environment in AWS Databricks using Python 3.10 and pip 22.x. It also works from my local in Python 3.10 and pip 23.x. I created a docker container with the exact same version of pip (21.2.4) and python (3.9) as the Azure Databricks environment and the installation works.</p>
<p>In the Azure Databricks environment is also set</p>
<pre><code>pip config set global.index-url https://user:pass@com.jfrog.io/artifactory/api/pypi/private-pypi/simple
pip config set global.trusted-host com.jfrog.io
</code></pre>
<p>with no success</p>
<p>I've also checked the values for <code>HTTPS_PROXY</code>, <code>https_proxy</code>, <code>HTTP_PROXY</code>, and <code>http_proxy</code> and they are all empty.</p>
<p>I also tried installing a different package from our private artifactory in the Azure Databricks environment (using different credentials) and that works.</p>
<p>I'm not sure what could be happening here, can anyone help?</p>
|
<python><pip><azure-databricks><artifactory>
|
2024-04-09 22:04:50
| 1
| 4,360
|
gary69
|
78,301,202
| 19,299,757
|
How to find the text from a label in selenium pytest
|
<p>I have a requirement to capture the payment ID from the text in an element, using Selenium with Python and pytest for UI tests. The text is of the form "Payment 3022594".</p>
<p>I was trying something like this.</p>
<pre><code>PAYMENT_ID_TEXT_XPATH = "//div[@class='BottomLayoutHeaderWrapper']//h5[text()='Payment']"
paymentId = self.get_element(self.PAYMENT_ID_TEXT_XPATH, page)
if paymentId:
paymentId = paymentId.strip()
paymentIdValue = paymentId[8:]
</code></pre>
<p>This doesn't always work, because sometimes paymentId is None, resulting in a TypeError:</p>
<pre><code>TypeError: 'NoneType' object is not subscriptable
</code></pre>
<p>in the line where I am extracting paymentIdValue.</p>
<p>Is there better way to capture the payment ID from the text?
The Payment ID will always be appended to the text "Payment" in the UI.</p>
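<p>One way to make the extraction robust (a sketch of the parsing step only; the element lookup itself is unchanged) is to guard against a missing element and pull the digits out with a regular expression instead of slicing at a fixed offset:</p>

```python
import re

def extract_payment_id(text):
    """Return the digits that follow 'Payment', or None if text is missing or malformed."""
    if not text:
        return None
    match = re.search(r"Payment\s+(\d+)", text)
    return match.group(1) if match else None

print(extract_payment_id("Payment 3022594"))  # -> 3022594
print(extract_payment_id(None))               # -> None
print(extract_payment_id("no id here"))       # -> None
```

<p>The test can then assert that the return value is not None before using it, turning the intermittent TypeError into an explicit, diagnosable failure.</p>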
|
<python><selenium-webdriver><pytest>
|
2024-04-09 21:44:51
| 1
| 433
|
Ram
|
78,301,114
| 412,234
|
Wrapping a class method in a callable vs an ordinary function
|
<p>I am trying to decorate class methods. When I use an ordinary function, it works fine. However, when I use a class that implements <code>__call__</code>, it fails for instance-level access, even though the wrapper behaves identically when not used at the instance level. Here is the SSCCE:</p>
<pre><code>import random

class C():
    """A simple class"""
    def __init__(self):
        self.x = random.random()
    def report(self):
        return 'The value is: ' + str(self.x)

class F():
    """Simulate a function"""
    def __init__(self, f):
        self.f = f
    def __call__(self, *args, **kwargs):
        return self.f(*args, **kwargs)

def decoratef(f):
    def f1(*args, **kwargs):
        return f(*args, **kwargs)
    return f1

def decorateF(f):
    def f1(*args, **kwargs):
        return f(*args, **kwargs)
    return F(f1)  # This F is the only difference.

report0 = C.report
C.report = decoratef(report0)
print('Test_f0:', C.report(C()))  # Works.
print('Test_f1:', C().report())   # Works.

C.report = decorateF(report0)
print('Test_F0:', C.report(C()))  # Works.
print('Test_F1:', C().report())   # TypeError: "C.report() missing 1 required positional argument: 'self'"
</code></pre>
<p>Why is this?</p>
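<p>The short reason: plain functions are descriptors, so accessing <code>C().report</code> triggers <code>function.__get__</code>, which returns a bound method with <code>self</code> filled in. <code>F</code> defines <code>__call__</code> but not <code>__get__</code>, so instance access just returns the bare <code>F</code> object stored on the class and nothing ever binds <code>self</code>. Giving <code>F</code> a <code>__get__</code> restores method behavior; one sketch of that fix (with a fixed value instead of <code>random</code> so the output is deterministic):</p>

```python
import functools

class F:
    """A callable wrapper that also binds like a method, via the descriptor protocol."""
    def __init__(self, f):
        self.f = f
    def __call__(self, *args, **kwargs):
        return self.f(*args, **kwargs)
    def __get__(self, obj, objtype=None):
        # Mimic function.__get__: class access returns the wrapper itself,
        # instance access returns a callable with the instance pre-bound.
        if obj is None:
            return self
        return functools.partial(self.f, obj)

class C:
    def __init__(self):
        self.x = 0.5
    def report(self):
        return 'The value is: ' + str(self.x)

C.report = F(C.report)
print('Test_F0:', C.report(C()))  # works: explicit instance argument
print('Test_F1:', C().report())   # now works: __get__ bound the instance
```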
|
<python><class>
|
2024-04-09 21:13:51
| 1
| 3,589
|
Kevin Kostlan
|
78,301,049
| 1,040,092
|
How to replace HTML with dynamically generated content containing the JS let keyword?
|
<p>I am currently going through older projects and upgrading them use jQuery 3.7. Many of the web pages use AJAX to dynamically generate and render HTML pages through Flask. The AJAX worked previously without any issues. However, after upgrading to jQuery 3.7, I am having issues rendering the content when the template contains JavaScript with the <code>let</code> keyword.</p>
<p><strong>Python</strong></p>
<pre><code>from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def hello_world():
    return render_template('test.html')

@app.route('/dynamic')
def dynamic_content():
    return render_template('dynamic.html')

if __name__ == '__main__':
    app.run()
</code></pre>
<p><strong>test.html</strong></p>
<pre><code><script src="https://code.jquery.com/jquery-3.7.0.min.js"></script>
<div id="content">Hello World!</div>
<button onclick="return loadContent();">
Load Content
</button>
<script>
    function loadContent() {
        $.ajax({
            url: '/dynamic',
            type: 'GET',
            success: function (response) {
                $("#content").html(response)
            }
        })
    }
</script>
</code></pre>
<p><strong>dynamic.html</strong></p>
<pre><code>My dynamic content
<script>
let test = 1;
</script>
</code></pre>
<p>When <code>loadContent</code> is called the first time, it dynamically generates and displays the HTML onto the web page. When it is called a second time, it gets the JavaScript error</p>
<blockquote>
<p>Identifier 'test' has already been declared</p>
</blockquote>
<p>I know what this error means and the scope of <code>let</code>, but as stated previously, doing this worked without errors before the upgrade. It seems that even though the content is being replaced with dynamically generated markup, jQuery 3.7 has issues with JavaScript variables declared with <code>let</code> being re-declared.</p>
<p>Is there any way around this issue, and allow jQuery to replace the dynamic content with the <code>let</code> keyword?</p>
|
<python><jquery><flask>
|
2024-04-09 20:54:30
| 0
| 7,912
|
Wondercricket
|
78,300,949
| 4,674,706
|
How to unpack a string into multiple columns in a Polars DataFrame using expressions?
|
<p>I have a Polars DataFrame containing a column with strings representing 'sparse' sector exposures, like this:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
pl.Series("sector_exposure", [
"Technology=0.207;Financials=0.090;Health Care=0.084;Consumer Discretionary=0.069",
"Financials=0.250;Health Care=0.200;Consumer Staples=0.150;Industrials=0.400"
])
)
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>sector_exposure</th>
</tr>
</thead>
<tbody>
<tr>
<td>Technology=0.207;Financials=0.090;Health Care=0.084;Consumer Discretionary=0.069</td>
</tr>
<tr>
<td>Financials=0.250;Health Care=0.200;Consumer Staples=0.150;Industrials=0.400</td>
</tr>
</tbody>
</table></div>
<p>I want to "unpack" this string into new columns for each sector (e.g., Technology, Financials, Health Care) with associated values or a polars struct with sector names as fields and exposure values.</p>
<p>I'm looking for a more efficient solution using polars expressions only, without resorting to Python loops (or python mapped functions). Can anyone provide guidance on how to accomplish this?</p>
<p>This is what I have come up with so far - which works in producing the desired struct but is a little slow.</p>
<pre class="lang-py prettyprint-override"><code>(
df["sector_exposure"]
.str
.split(";")
.map_elements(lambda x: {entry.split('=')[0]: float(entry.split('=')[1]) for entry in x},
skip_nulls=True,
)
)
</code></pre>
<p>Output:</p>
<pre><code>shape: (2,)
Series: 'sector_exposure' [struct[6]]
[
{0.207,0.09,0.084,0.069,null,null}
{null,0.25,0.2,null,0.15,0.4}
]
</code></pre>
<p>Thanks!</p>
|
<python><dataframe><python-polars>
|
2024-04-09 20:25:28
| 3
| 464
|
Azmy Rajab
|
78,300,945
| 5,981,529
|
Which is considered better practice for defining range() for a loop in python? Using len() or a defined variable?
|
<p>I can't seem to find an answer for this. Which is considered better practice?</p>
<p>For example, let's say I have an array of length 6. Let's also say that it is a sorting algorithm that changes the array in place, and I need the indexes.</p>
<pre><code>for i in range(len(array)):
</code></pre>
<p>or</p>
<pre><code>for i in range(6):
</code></pre>
<p>?</p>
<p>Can anyone point me to the right media that could definitively answer this question?</p>
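<p>For what it's worth, <code>range(len(array))</code> is generally preferred over a hard-coded length because it keeps working when the list's size changes; and when both the index and the value are needed, <code>enumerate</code> is the idiomatic form. A small sketch of both:</p>

```python
array = [30, 10, 20]

# Hard-coding range(3) would silently break if the list grew or shrank;
# range(len(array)) always matches the current length.
for i in range(len(array)):
    array[i] = array[i] + 1

# enumerate yields (index, value) pairs when both are needed.
pairs = list(enumerate(array))
print(pairs)  # -> [(0, 31), (1, 11), (2, 21)]
```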
|
<python><for-loop><range>
|
2024-04-09 20:24:56
| 1
| 356
|
spacedustpi
|
78,300,917
| 12,553,917
|
Checking the size of a file that's being downloaded by the browser causes it to get duplicated
|
<p>I'm writing a script which needs to download a file by hitting a URL. It would be easy enough to curl the URL, but I need to be logged in to the site when doing it, and I've given up on trying to solve the problem of sending a curl request as a logged in user. So what I've settled on doing is opening the URL in the browser and monitoring the downloads folder until the new file appears.</p>
<p>I already wrote a working version for this some time ago in bash, but now I'm in the process of rewriting everything in python. This was my bash solution:</p>
<pre class="lang-bash prettyprint-override"><code>#! /bin/bash
get_latest_csv_in_downloads() {
find ~/Downloads -maxdepth 1 -iname '*.csv' -printf '%B@ %f\0' | sort -znr | head -zn 1 | cut -zd ' ' -f 2- | head -c -1
}
url="$1"
initial_file="$(get_latest_csv_in_downloads)"
# Open the file in the browser which will initiate a download.
python -m webbrowser "$url" > /dev/null
# We'll try to obtain the most recent file in the downloads folder,
# until it's a different one than before we started downloading.
while latest_file="$(get_latest_csv_in_downloads)"; [[ "$initial_file" == "$latest_file" ]]; do :; done
# When the file is created it's sometimes empty for a bit.
# At some point it jumps to being fully written, without any inbetween.
# So this waits for the file size to not be zero.
# NOTE: [[ ! -s "..." ]] would be a lot nicer,
# but for some reason it sometimes creates a copy of the file and messes everything up.
while (( "$(stat --format="%s" ~/Downloads/"$latest_file")" == 0 )); do :; done
echo "got the file $latest_file"
</code></pre>
<p>This works, it prints: <code>got the file example_file.csv</code>. But as you can see, I had to do a bit of hacking here. Pay special attention to the NOTE above the second while loop. <code>[[ -s <path> ]]</code> would be the clean way to check if the file is nonempty, but sometimes, for some mysterious reason, it causes two files to be created in my downloads folder:</p>
<pre class="lang-none prettyprint-override"><code>example_file(1).csv 4KB (or whatever size)
example_file.csv 0KB (empty)
</code></pre>
<p>This causes the first while loop to find <code>example_file.csv</code> and then move on to the second loop, where it gets stuck forever because the file remains empty and the contents are actually written to the mysterious copy <code>example_file(1).csv</code>.</p>
<p>This right here is my problem. I've already solved it in bash, but now I'm trying to rewrite this in python and the same issue happens and I cannot figure out how to solve it. Here is my python version:</p>
<pre class="lang-py prettyprint-override"><code>#! python
import os
import webbrowser
import sys
import glob
def get_latest_csv_in_downloads():
files_in_downloads = glob.glob(os.path.expanduser(os.path.join(os.path.expanduser('~'), 'Downloads', '*.csv')))
latest_file = max(files_in_downloads, key=os.path.getctime, default=None)
return latest_file
url = sys.argv[1]
initial_file = get_latest_csv_in_downloads()
webbrowser.open(url)
while True:
latest_file = get_latest_csv_in_downloads()
if initial_file != latest_file:
break
while os.path.getsize(latest_file) == 0:
pass
print(f'got the file {latest_file}')
</code></pre>
<p>In bash what helped was to use stat, so I tried to use <code>os.stat(latest_file).st_size</code>, but it didn't solve it (probably <code>os.path.getsize</code> just calls <code>stat</code> anyway).</p>
<p>I thought I could solve this more cleanly by obtaining an exclusive lock on the file so my script gets blocked until the browser closes its handle, but alas it appears that getting exclusive access to a file is a surprisingly hard problem to do portably and I tried to use some libraries which did not do the job.</p>
<p>I'm using Firefox. From my tests this issue doesn't reproduce in Edge.</p>
<p>I've verified that this issue reproduces no matter what I try to download or from which website.</p>
<p>And in case it's relevant, I'm on Windows.</p>
<p>Any ideas what could possibly be causing the file to get duplicated, and how to prevent it? Thanks.</p>
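<p>Without knowing the exact cause of the duplication, one detail worth noting is that Firefox typically downloads into a temporary <code>.part</code> file alongside a zero-byte placeholder and renames it when finished; a tight loop stat-ing the placeholder therefore races with that rename. A more forgiving sketch: poll until the file's size is nonzero, has stopped changing between polls, and no <code>.part</code> companion remains:</p>

```python
import os
import time

def wait_until_complete(path, interval=0.5):
    """Block until `path` has a nonzero size that has stopped changing between
    polls and no Firefox-style `path + '.part'` companion file remains."""
    last_size = -1
    while True:
        try:
            size = os.path.getsize(path)
        except OSError:
            size = -1  # the file may briefly vanish while the browser renames it
        part_pending = os.path.exists(path + ".part")
        if size > 0 and size == last_size and not part_pending:
            return size
        last_size = size
        time.sleep(interval)
```

<p>Calling this with <code>latest_file</code> in place of the busy-wait loop also avoids pegging a CPU core while waiting.</p>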
|
<python><firefox><browser><download>
|
2024-04-09 20:18:30
| 0
| 776
|
Verpous
|
78,300,874
| 2,030,532
|
Memory leakage in python?
|
<p>I have a Python script that calls a function <code>foo</code>, which performs some operations using, say, 10 GB of memory and returns a small object <code>res</code> to the main program. After printing res I delete it, but top shows the ipython process is still using about 1 GB of memory. How can I ensure the process frees up all unused memory?</p>
<pre><code>def foo():
    ...  # some complicated operation
    return res

if __name__ == "__main__":
    res = foo()
    print(res)
    del res
</code></pre>
<p>I tried using gc.collect(), but it does not change anything.
I have read that sometimes memory becomes fragmented and Python cannot reclaim all of it.</p>
|
<python><memory>
|
2024-04-09 20:06:30
| 2
| 3,874
|
motam79
|
78,300,839
| 9,251,158
|
How to set video details when uploading to YouTube with Python API client
|
<p>Follow-up on <a href="https://stackoverflow.com/questions/78267481/how-to-set-video-language-when-uploading-to-youtube-with-python-api-client">How to set video language when uploading to YouTube with Python API client</a>, after successfully setting the video language.</p>
<p>I am uploading videos to YouTube programmatically following <a href="https://developers.google.com/youtube/v3/guides/uploading_a_video" rel="nofollow noreferrer">the official guide</a>:</p>
<pre class="lang-py prettyprint-override"><code> body=dict(
snippet=dict(
title=title,
description=description,
tags=tags,
categoryId=10, # Music.
defaultLanguage=language, # Title and description launguage.
defaultAudioLanguage=language, # Video language.
),
status=dict(
privacyStatus="public"
embeddable=True,
publicStatsViewable=True,
publishAt="2024-04-04T18:00:00+00:00",
selfDeclaredMadeForKids=True,
)
)
</code></pre>
<p>I can upload the video, set the title, the description, their language, the video's language, the scheduled publishing date, etc. I can also upload a thumbnail.</p>
<p>But the web interface has other details that I can't see how to set programmatically:</p>
<ul>
<li>Does the video contain altered content</li>
<li>Allow automatic chapters and key moments</li>
<li>Allow automatic concepts</li>
<li>Don't allow remixing</li>
</ul>
<p>I know the documentation does not mention these settings and so they don't seem possible to set. But it does not mention the video's language either, and yet (from <a href="https://stackoverflow.com/questions/78267481/how-to-set-video-language-when-uploading-to-youtube-with-python-api-client">How to set video language when uploading to YouTube with Python API client</a>), it's possible to set it with <code>defaultAudioLanguage</code>. I wonder if someone knows the right parameters for these details as well, which are missing from the documentation.</p>
<p>How can I set these additional details?</p>
|
<python><python-3.x><youtube><youtube-api><youtube-data-api>
|
2024-04-09 19:58:47
| 1
| 4,642
|
ginjaemocoes
|
78,300,823
| 1,183,542
|
pipenv return odd error when installing packages
|
<p>I am new to Python and have multiple versions of Python installed on my Mac. When trying to use pipenv to manage dependencies for a legacy project (3.6 <= python < 3.8), I ran</p>
<pre><code>/Users/user/.pyenv/shims/pipenv --python /Users/user/.pyenv/versions/3.7.16/bin/python install --dev
</code></pre>
<p>and here is what I got</p>
<pre><code>Installing dependencies from Pipfile.lock (166d15)...
[pipenv.exceptions.InstallError]: Traceback (most recent call last):
[pipenv.exceptions.InstallError]: File "/Users/user/Library/Python/3.9/lib/python/site-packages/pipenv/patched/pip/__pip-runner__.py", line 50, in <module>
[pipenv.exceptions.InstallError]: runpy.run_module("pip", run_name="__main__", alter_sys=True)
[pipenv.exceptions.InstallError]: File "/Users/user/.pyenv/versions/3.7.16/lib/python3.7/runpy.py", line 205, in run_module
[pipenv.exceptions.InstallError]: return _run_module_code(code, init_globals, run_name, mod_spec)
[pipenv.exceptions.InstallError]: File "/Users/user/.pyenv/versions/3.7.16/lib/python3.7/runpy.py", line 96, in _run_module_code
[pipenv.exceptions.InstallError]: mod_name, mod_spec, pkg_name, script_name)
[pipenv.exceptions.InstallError]: File "/Users/user/.pyenv/versions/3.7.16/lib/python3.7/runpy.py", line 85, in _run_code
[pipenv.exceptions.InstallError]: exec(code, run_globals)
[pipenv.exceptions.InstallError]: File "/Users/user/Library/Python/3.9/lib/python/site-packages/pipenv/patched/pip/__main__.py", line 28, in <module>
[pipenv.exceptions.InstallError]: spec.loader.exec_module(pipenv)
[pipenv.exceptions.InstallError]: File "<frozen importlib._bootstrap_external>", line 728, in exec_module
[pipenv.exceptions.InstallError]: File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
[pipenv.exceptions.InstallError]: File "/Users/user/Library/Python/3.9/lib/python/site-packages/pipenv/__init__.py", line 30, in <module>
[pipenv.exceptions.InstallError]: from pipenv.cli import cli # noqa
[pipenv.exceptions.InstallError]: File "/Users/user/Library/Python/3.9/lib/python/site-packages/pipenv/cli/__init__.py", line 1, in <module>
[pipenv.exceptions.InstallError]: from .command import cli # noqa
[pipenv.exceptions.InstallError]: File "/Users/user/Library/Python/3.9/lib/python/site-packages/pipenv/cli/command.py", line 4, in <module>
[pipenv.exceptions.InstallError]: from pipenv import environments
[pipenv.exceptions.InstallError]: File "/Users/user/Library/Python/3.9/lib/python/site-packages/pipenv/environments.py", line 9, in <module>
[pipenv.exceptions.InstallError]: from pipenv.utils.shell import env_to_bool, is_env_truthy, isatty
[pipenv.exceptions.InstallError]: File "/Users/user/Library/Python/3.9/lib/python/site-packages/pipenv/utils/shell.py", line 16, in <module>
[pipenv.exceptions.InstallError]: from pipenv.vendor.pythonfinder.utils import ensure_path, parse_python_version
[pipenv.exceptions.InstallError]: File "/Users/user/Library/Python/3.9/lib/python/site-packages/pipenv/vendor/pythonfinder/__init__.py", line 4, in <module>
[pipenv.exceptions.InstallError]: from .models import SystemPath
[pipenv.exceptions.InstallError]: File "/Users/user/Library/Python/3.9/lib/python/site-packages/pipenv/vendor/pythonfinder/models/__init__.py", line 3, in <module>
[pipenv.exceptions.InstallError]: from .path import SystemPath
[pipenv.exceptions.InstallError]: File "/Users/user/Library/Python/3.9/lib/python/site-packages/pipenv/vendor/pythonfinder/models/path.py", line 10, in <module>
[pipenv.exceptions.InstallError]: from functools import cached_property
[pipenv.exceptions.InstallError]: ImportError: cannot import name 'cached_property' from 'functools' (/Users/user/.pyenv/versions/3.7.16/lib/python3.7/functools.py)
ERROR: Couldn't install package: {}
Package installation failed...
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/subprocess.py:1052: ResourceWarning: subprocess 65854 is still running
</code></pre>
<p>It switched to the system Python version (3.9) in the middle, and I have no clue about the "subprocess ... is still running" warning. I have been puzzled by this for weeks. Can anyone help me fix this?</p>
|
<python><python-3.x><pipenv><pipenv-install>
|
2024-04-09 19:55:12
| 1
| 2,214
|
gigi2
|
78,300,681
| 1,934,902
|
Remove quotes from beginning and end from list
|
<p>I have an Excel spreadsheet with a column of AWS EBS volumes that I want to use as a list of volumes to delete with my boto3 Python script. The script displays the volumes with initial and trailing quotes. I was getting a malformed string error and learned that I have to strip out the initial and trailing quotes, but now I am getting another error. The end goal is to delete this list of volumes using the boto3 delete_volume API call.</p>
<pre><code>import boto3
import pprint
from openpyxl import load_workbook

AWS_REGION = "us-gov-west-1"
ec2 = boto3.client('ec2', region_name=AWS_REGION)

wb = load_workbook("WORKBOOK1.xlsx")
ws = wb['Sheet1']
column = ws['D']
column_list = [column[x].value for x in range(len(column))]
stripped_column_list = column_list.strip(",")
pprint.pprint(stripped_column_list)

ec2_info = ec2.delete_volume(
    VolumeId=stripped_column_list,
    DryRun=True
)
</code></pre>
<p>Which gives me this list.</p>
<pre><code>['Volume Id vol-00a1f3aff3ff57b01 vol-04bd4cada863ee544 vol-02147928b7cb0e19e '
'vol-0079ea32b16aeede1 vol-06097f05fbcab73db vol-0b80038092795093a '
'vol-0217cbc5059c93843 vol-04bc423f2e9c53198 vol-05335afa0249b342a '
'vol-0cb37e8948953fff5 vol-038518daeeb9c6602 vol-04a8671f9029ce376 '
'vol-0ec9998450fee4df9 vol-01a7459966f7ac89f vol-05a66a16f8fd8b216 '
'vol-0f9c0d20097d4db17 vol-0d96a5fadd7e628c3 vol-096f66a8746604d81 ']
</code></pre>
<p>I have been unable to figure out how to remove the initial and trailing quotes.</p>
<p>The goal is to use a for loop to iterate over the ec2_info variable to delete each of the volumes.</p>
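<p>Two observations that may help (a sketch, using a shortened stand-in for the cell contents): the quotes in the printed output are just Python's <code>repr</code> of the list's strings, not characters inside the data, and the real issue is that the whole column has been joined into one string containing the "Volume Id" header plus all the ids. Splitting on whitespace and keeping only <code>vol-</code> tokens yields a clean list to loop over:</p>

```python
# Stand-in mirroring the printed output: one string holding the header
# ("Volume Id") plus whitespace-separated volume ids.
cell = ("Volume Id vol-00a1f3aff3ff57b01 vol-04bd4cada863ee544 "
        "vol-02147928b7cb0e19e ")

# split() with no argument handles runs of spaces and trailing blanks;
# the startswith filter drops the "Volume" / "Id" header tokens.
volume_ids = [token for token in cell.split() if token.startswith("vol-")]
print(volume_ids)

# delete_volume accepts a single VolumeId, so each id needs its own call:
# for vol_id in volume_ids:
#     ec2.delete_volume(VolumeId=vol_id, DryRun=True)
```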
|
<python><boto3>
|
2024-04-09 19:23:21
| 1
| 333
|
ficestat
|
78,300,456
| 10,465,835
|
OR-Tools CP-SAT conditional constraint
|
<p>In the problem I am trying to solve, I have a list of length <em>n</em> of boolean variables called <strong>x</strong>. Given an integer <em>m</em> where <em>m < n</em>, I need to define these two constraints:</p>
<ul>
<li>if <code>LinearExpr.Sum(x[m:]) > 0</code> then <code>LinearExpr.Sum(x[:m]) == 0</code></li>
<li>if <code>LinearExpr.Sum(x[:m]) > 0</code> then <code>LinearExpr.Sum(x[m:]) == 0</code></li>
</ul>
<p>From what I've read I should use one or several of the following:</p>
<ul>
<li><a href="https://developers.google.com/optimization/reference/python/sat/python/cp_model#addboolor" rel="nofollow noreferrer">AddOrBool</a></li>
<li><a href="https://developers.google.com/optimization/reference/python/sat/python/cp_model#onlyenforceif" rel="nofollow noreferrer">OnlyEnforceIf</a></li>
<li><a href="https://developers.google.com/optimization/reference/python/sat/python/cp_model#addimplication" rel="nofollow noreferrer">AddImplication</a></li>
</ul>
<p>However, after a few hours of trying I could not figure it out. Any help to solve this is very much appreciated!</p>
|
<python><boolean-logic><or-tools><constraint-programming><cp-sat>
|
2024-04-09 18:30:35
| 2
| 646
|
Zufra
|
78,300,439
| 12,547,996
|
Counting commas in two columns and showing the count side by side
|
<p>Based on the sample dataframe below, how can I get the comma count of two columns?</p>
<p>Sample data:</p>
<pre><code> ID City Zipcode
1 A,B,C,D 1,2,3,4
2 A,B 1,2
3 A 1
4 B,C 2,3
</code></pre>
<p>Desired output:</p>
<pre><code>City Zipcode City_Count Zipcode_Count
A,B,C,D 1,2,3,4 4 4
A,B 1,2 2 2
A 1 1 1
B,C 2,3 2 2
</code></pre>
<p>Code:</p>
<pre><code># Based on my research so far, below is what I have found to count values/commas in a single column
df['value_count'] = df.string_column.str.count(',')
print (df)
</code></pre>
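<p>The desired numbers in the expected output are item counts, i.e. comma count + 1; applying the single-column expression you found to both columns in a short loop keeps it to one line per column. A sketch on the sample data:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2, 3, 4],
    "City": ["A,B,C,D", "A,B", "A", "B,C"],
    "Zipcode": ["1,2,3,4", "1,2", "1", "2,3"],
})

# str.count(",") counts separators; +1 turns that into the number of items.
for col in ["City", "Zipcode"]:
    df[f"{col}_Count"] = df[col].str.count(",") + 1

print(df[["City", "Zipcode", "City_Count", "Zipcode_Count"]])
```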
|
<python><count>
|
2024-04-09 18:27:16
| 1
| 2,043
|
Ed_Gravy
|
78,300,413
| 9,357,484
|
Two different Cudatoolkit version and PyTorch version for same Conda environment
|
<p><code>Conda list</code> returned the following configuration:</p>
<pre><code># packages in environment at /home/user/anaconda3/envs/experiment2:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
asttokens 2.0.5 pyhd3eb1b0_0
backcall 0.2.0 pyhd3eb1b0_0
blas 1.0 mkl
ca-certificates 2024.3.11 h06a4308_0
comm 0.2.1 py38h06a4308_0
cudatoolkit 9.2 0
debugpy 1.6.7 py38h6a678d5_0
decorator 5.1.1 pyhd3eb1b0_0
executing 0.8.3 pyhd3eb1b0_0
freetype 2.12.1 h4a9f257_0
importlib-metadata 7.0.1 py38h06a4308_0
importlib_metadata 7.0.1 hd3eb1b0_0
intel-openmp 2023.1.0 hdb19cb5_46306
ipykernel 6.28.0 py38h06a4308_0
ipython 8.12.2 py38h06a4308_0
jedi 0.18.1 py38h06a4308_1
jpeg 9b h024ee3a_2
jupyter_client 8.6.0 py38h06a4308_0
jupyter_core 5.5.0 py38h06a4308_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libdeflate 1.17 h5eee18b_1
libffi 3.4.4 h6a678d5_0
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libpng 1.6.39 h5eee18b_0
libsodium 1.0.18 h7b6447c_0
libstdcxx-ng 11.2.0 h1234567_1
libtiff 4.2.0 h85742a9_0
libuv 1.44.2 h5eee18b_0
libwebp-base 1.3.2 h5eee18b_0
lz4-c 1.9.4 h6a678d5_0
matplotlib-inline 0.1.6 py38h06a4308_0
mkl 2023.1.0 h213fc3f_46344
mkl-service 2.4.0 py38h5eee18b_1
mkl_fft 1.3.8 py38h5eee18b_0
mkl_random 1.2.4 py38hdb19cb5_0
ncurses 6.4 h6a678d5_0
nest-asyncio 1.6.0 py38h06a4308_0
ninja 1.10.2 h06a4308_5
ninja-base 1.10.2 hd09550d_5
numpy 1.24.3 py38hf6e8229_1
numpy-base 1.24.3 py38h060ed82_1
olefile 0.46 pyhd3eb1b0_0
openjpeg 2.4.0 h3ad879b_0
openssl 3.0.13 h7f8727e_0
packaging 23.2 py38h06a4308_0
parso 0.8.3 pyhd3eb1b0_0
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 8.3.1 py38h2c7a002_0
pip 23.3.1 py38h06a4308_0
platformdirs 3.10.0 py38h06a4308_0
prompt-toolkit 3.0.43 py38h06a4308_0
psutil 5.9.0 py38h5eee18b_0
ptyprocess 0.7.0 pyhd3eb1b0_2
pure_eval 0.2.2 pyhd3eb1b0_0
pygments 2.15.1 py38h06a4308_1
python 3.8.19 h955ad1f_0
python-dateutil 2.8.2 pyhd3eb1b0_0
pytorch 1.7.0 py3.8_cuda9.2.148_cudnn7.6.3_0 pytorch
pyzmq 25.1.2 py38h6a678d5_0
readline 8.2 h5eee18b_0
setuptools 68.2.2 py38h06a4308_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.41.2 h5eee18b_0
stack_data 0.2.0 pyhd3eb1b0_0
tbb 2021.8.0 hdb19cb5_0
tk 8.6.12 h1ccaba5_0
torchaudio 0.7.0 py38 pytorch
torchvision 0.8.0 py38_cu92 pytorch
tornado 6.3.3 py38h5eee18b_0
traitlets 5.7.1 py38h06a4308_0
typing_extensions 4.9.0 py38h06a4308_1
wcwidth 0.2.5 pyhd3eb1b0_0
wheel 0.41.2 py38h06a4308_0
xz 5.4.6 h5eee18b_0
zeromq 4.3.5 h6a678d5_0
zipp 3.17.0 py38h06a4308_0
zlib 1.2.13 h5eee18b_0
zstd 1.4.9 haebb681_0
</code></pre>
<p><code>conda list | grep "torch"</code> returned</p>
<pre><code>pytorch 1.7.0 py3.8_cuda9.2.148_cudnn7.6.3_0 pytorch
torchaudio 0.7.0 py38 pytorch
torchvision 0.8.0 py38_cu92 pytorch
</code></pre>
<p><code>python3 -c "import torch; print(torch.__version__)"</code> returned <code>2.2.1+cu121</code></p>
<p>All outputs are in the same conda environment.</p>
<p>I cannot understand why I get two different versions of PyTorch (2.2.1 and 1.7.0) and two different versions of the CUDA toolkit (cu121 / 9.2)?</p>
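One plausible cause (an assumption here, not confirmed by the output above) is that `python3` resolves to a different interpreter than the conda environment's, or that a pip-installed torch in the user site-packages shadows the conda package. This sketch checks which interpreter is actually running and where `torch` would be imported from, without importing it:

```python
import sys
import importlib.util

# If this path is not inside .../envs/experiment2, conda's package list
# and the interpreter you are running are out of sync.
print(sys.executable)

# Locate the module file that would be imported, without importing it.
spec = importlib.util.find_spec("torch")
print(spec.origin if spec is not None else "torch not importable here")
```

If the printed path points into `~/.local/lib/...` rather than the conda env, a `pip uninstall torch` outside the env (or reinstalling inside it) would reconcile the two listings.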
|
<python><pytorch>
|
2024-04-09 18:19:37
| 0
| 3,446
|
Encipher
|
78,300,398
| 2,475,195
|
Pandas - calculate time series delta from at least 2 days ago
|
<p>For every row in a <code>pandas</code> dataframe, I want to calculate the fractional change relative to the most recent row that is at least <code>n</code> (say 2) days earlier. I want to implement this using <code>shift()</code> rather than <code>for</code> loops. Example below:</p>
<pre><code> x change
Date
2024-04-01 1 0.00000
2024-04-05   2  1.00000  # (2 - 1) / 1
2024-04-06   3  2.00000  # (3 - 1) / 1
2024-04-07   4  1.00000  # (4 - 2) / 2
2024-04-08   5  0.66666  # (5 - 3) / 3
2024-04-09   6  0.50000  # (6 - 4) / 4
</code></pre>
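Because the dates are irregular, a fixed `shift()` offset cannot express "at least 2 days ago". One way to get the desired output (swapping in <code>pandas.merge_asof</code> instead of the requested <code>shift()</code>) is to match each row against the latest row whose date is at or before <code>Date - 2 days</code>:

```python
import pandas as pd

df = pd.DataFrame(
    {"x": [1, 2, 3, 4, 5, 6]},
    index=pd.to_datetime(["2024-04-01", "2024-04-05", "2024-04-06",
                          "2024-04-07", "2024-04-08", "2024-04-09"]),
)
df.index.name = "Date"

left = df.reset_index()
# For each row, find the latest earlier row with Date <= (Date - 2 days).
merged = pd.merge_asof(
    left.assign(cutoff=left["Date"] - pd.Timedelta(days=2)),
    df.reset_index(),
    left_on="cutoff",
    right_on="Date",
    direction="backward",
    suffixes=("", "_prev"),
)
# Fractional change; the first row has no earlier match, so fill with 0.
merged["change"] = ((merged["x"] - merged["x_prev"]) / merged["x_prev"]).fillna(0)
print(merged[["Date", "x", "change"]])
```

This reproduces the example: 1.0 for 2024-04-07 (against 2024-04-05) and 0.66666… for 2024-04-08 (against 2024-04-06). Both inputs to <code>merge_asof</code> must be sorted on their join keys, which holds here since the index is monotonically increasing.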
|
<python><pandas><numpy><time-series>
|
2024-04-09 18:17:04
| 3
| 4,355
|
Baron Yugovich
|