| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (stringdate) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
78,802,454
| 9,381,966
|
Static Files Not Loading in Production in Django-React application
|
<p>I'm running a Django application in a Docker container, and I'm having trouble serving static files in production. Everything works fine locally, but when I deploy to production, the static files don't load, and I get 404 errors.</p>
<p>Here are the relevant parts of my setup:</p>
<p><strong>Django <code>settings.py</code>:</strong></p>
<pre class="lang-py prettyprint-override"><code>TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'build')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATIC_ROOT = '/vol/web/static'
STATICFILES_DIRS = [os.path.join(BASE_DIR, 'build', 'static')]
</code></pre>
<p>The <code>build</code> folder was generated by the <code>npm run build</code> command in a React application.</p>
<p>After running <code>collectstatic</code>, the volume <code>/vol/web/static</code> is correctly populated. However, the browser shows 404 errors for the static files, e.g.,</p>
<pre><code>GET https://site/static/js/main.db771bdd.js [HTTP/2 404 161ms]
GET https://site/static/css/main.4b763604.css [HTTP/2 404 160ms]
Loading failed for the <script> with source “https://mysite/static/js/main.db771bdd.js”.
</code></pre>
<p>These files exist in the <code>build/static</code> directory, but I thought the browser should use the static files collected into <code>/vol/web/static</code>.</p>
<p><strong>Nginx Configuration:</strong></p>
<pre><code>server {
listen ${LISTEN_PORT};
location /static {
alias /vol/static;
}
location / {
uwsgi_pass ${APP_HOST}:${APP_PORT};
include /etc/nginx/uwsgi_params;
client_max_body_size 10M;
}
}
</code></pre>
<p><strong>Dockerfile:</strong></p>
<pre><code>FROM python:3.9-alpine
ENV PYTHONUNBUFFERED 1
ENV PATH="/scripts:${PATH}"
RUN pip install --upgrade "pip<24.1"
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache postgresql-client jpeg-dev \
&& apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev musl-dev zlib zlib-dev libffi-dev \
&& pip install -r /requirements.txt \
&& apk del .tmp-build-deps
RUN mkdir -p /app /vol/web/media /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol /app
COPY ./app /app
COPY ./scripts /scripts
COPY ./requirements.txt /requirements.txt
RUN chmod -R 755 /vol/web /app /scripts \
&& chmod +x /scripts/*
USER user
WORKDIR /app
VOLUME /vol/web
CMD ["entrypoint.sh"]
</code></pre>
<p>For further context, I deployed the Django application and the proxy in separate containers inside an ECS task:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"name": "api",
"image": "${app_image}",
"essential": true,
"memoryReservation": 256,
"environment": [
{"name": "DJANGO_SECRET_KEY", "value": "${django_secret_key}"},
{"name": "DB_HOST", "value": "${db_host}"},
{"name": "DB_NAME", "value": "${db_name}"},
{"name": "DB_USER", "value": "${db_user}"},
{"name": "DB_PASS", "value": "${db_pass}"},
{"name": "ALLOWED_HOSTS", "value": "${allowed_hosts}"},
{"name": "S3_STORAGE_BUCKET_NAME", "value": "${s3_storage_bucket_name}"},
{"name": "S3_STORAGE_BUCKET_REGION", "value": "${s3_storage_bucket_region}"}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "${log_group_name}",
"awslogs-region": "${log_group_region}",
"awslogs-stream-prefix": "api"
}
},
"portMappings": [
{
"containerPort": 9000,
"hostPort": 9000
}
],
"mountPoints": [
{
"readOnly": false,
"containerPath": "/vol/web",
"sourceVolume": "static"
}
]
},
{
"name": "proxy",
"image": "${proxy_image}",
"essential": true,
"portMappings": [
{
"containerPort": 8000,
"hostPort": 8000
}
],
"memoryReservation": 256,
"environment": [
{"name": "APP_HOST", "value": "127.0.0.1"},
{"name": "APP_PORT", "value": "9000"},
{"name": "LISTEN_PORT", "value": "8000"}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "${log_group_name}",
"awslogs-region": "${log_group_region}",
"awslogs-stream-prefix": "proxy"
}
},
"mountPoints": [
{
"readOnly": true,
"containerPath": "/vol/static",
"sourceVolume": "static"
}
]
}
]
</code></pre>
<p>The <code>entrypoint.sh</code> script called by the Dockerfile is:</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/sh
set -e
python manage.py collectstatic --noinput --settings=app.settings.staging
python manage.py wait_for_db --settings=app.settings.staging
python manage.py wait_for_es --settings=app.settings.staging
python manage.py migrate --settings=app.settings.staging
python manage.py search_index --rebuild --settings=app.settings.staging -f
uwsgi --socket :9000 --workers 4 --master --enable-threads --module app.wsgi --env DJANGO_SETTINGS_MODULE=app.settings.staging
</code></pre>
<p>In Terraform, my code is essentially identical to the configuration found <a href="https://gitlab.com/LondonAppDev/recipe-app-api-devops/-/blob/master/deploy/ecs.tf?ref_type=heads" rel="nofollow noreferrer">here</a>.</p>
<p>I suspect there might be an issue with file permissions, but the errors persist after changing the permissions. Any insights into what might be going wrong or how to debug this further?</p>
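<p>To make the path mapping concrete, here is how I understand nginx's <code>alias</code> to rewrite the request path (a sketch in plain Python; the paths come from my configs above, and whether the files actually sit at the resulting location inside the proxy container is what I am unsure about, since the shared volume is mounted at <code>/vol/web</code> in the api container but <code>/vol/static</code> in the proxy):</p>

```python
# Sketch of nginx's alias semantics: for "location /static { alias /vol/static; }"
# the matched prefix "/static" is replaced by the alias target "/vol/static".
def nginx_alias(request_path, location="/static", alias="/vol/static"):
    assert request_path.startswith(location)
    return alias + request_path[len(location):]

print(nginx_alias("/static/js/main.db771bdd.js"))
# /vol/static/js/main.db771bdd.js
#
# collectstatic writes to STATIC_ROOT=/vol/web/static in the api container,
# i.e. to "static/..." inside the shared volume; the proxy mounts that same
# volume at /vol/static, so the files would end up under /vol/static/static/...,
# one directory level deeper than where nginx is looking.
```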
<p>Any help would be greatly appreciated!</p>
|
<python><reactjs><django><nginx><amazon-ecs>
|
2024-07-27 21:03:48
| 2
| 1,590
|
Lucas
|
78,802,361
| 827,927
|
How to generate permutations with constraints
|
<p>I want to generate all possible lists of ["x","y","z",1,2,3] that are ordered in ascending order. "x", "y" and "z" can be any real numbers and so their order can be arbitrary. However, as 1<2<3, 1 must be before 2 and 2 must be before 3. So, for example, the permutation ["x", 1, "y", "z", 2, 3] should be generated, but the permutation ["x", 1, "y", "z", 3, 2] should not be generated.</p>
<p>The simplest solution is</p>
<pre><code>for p in itertools.permutations(["x","y","z",1,2,3]):
if permutation_satisfies_the_constraint(p):
yield p
</code></pre>
<p>but it is very inefficient, as it would generate many unneeded permutations
<sub>(the total number of permutations of 6 elements is 6!=720, but the total number of legal permutations is 6!/3!=120).</sub></p>
<p>What is an efficient way to generate only the permutations that satisfy the constraint on 1,2,3?</p>
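<p>To pin down the spec, here is a sketch of the kind of generator I am hoping for (the function name is mine, just for illustration): choose positions for 1, 2, 3 with <code>itertools.combinations</code>, which fixes their relative order, and permute only "x", "y", "z" into the remaining slots:</p>

```python
import itertools

def constrained_permutations(free, ordered):
    """Permutations of free+ordered where `ordered` keeps its relative order.

    (Illustrative sketch, not necessarily the most efficient formulation.)
    """
    n = len(free) + len(ordered)
    # combinations() yields positions in ascending order, so placing the
    # ordered items at those positions preserves 1 < 2 < 3.
    for positions in itertools.combinations(range(n), len(ordered)):
        for free_perm in itertools.permutations(free):
            result = [None] * n
            for pos, item in zip(positions, ordered):
                result[pos] = item
            free_iter = iter(free_perm)
            for i in range(n):
                if result[i] is None:
                    result[i] = next(free_iter)
            yield result

perms = list(constrained_permutations(["x", "y", "z"], [1, 2, 3]))
print(len(perms))  # 6!/3! = 120, each legal permutation exactly once
```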
|
<python><algorithm><permutation><python-itertools>
|
2024-07-27 20:09:59
| 3
| 37,410
|
Erel Segal-Halevi
|
78,802,205
| 5,506,167
|
pycharm debugger too slow on every "step over"
|
<p>I am using PyCharm to debug a Django web app. When I click "step over" to advance the debugger to the next line, around 50-100 background jobs launch in PyCharm (as shown in the image, taken from PyCharm's status bar), and it takes a few seconds(!) for the debugger to settle, show the new values, and be ready for the next "step over". This impedes development speed.</p>
<p>I tried the same project with the same virtual environment but with the VSCode debugger, and it worked perfectly! So there is something wrong with PyCharm.</p>
<p><a href="https://i.sstatic.net/mLiGq1cD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLiGq1cD.png" alt="enter image description here" /></a></p>
|
<python><debugging><pycharm>
|
2024-07-27 18:48:56
| 0
| 1,962
|
Saleh
|
78,802,160
| 1,794,549
|
Create a PyObject with a pointer to a C function
|
<p>Let's say I have this Python function:</p>
<pre><code>def foo(bar):
bar()
</code></pre>
<p>And this C function</p>
<pre><code>void bar() {
}
</code></pre>
<p>How can I create a <code>PyObject</code> with the Python C API that holds a pointer to the <code>bar</code> function, so that I can call the Python function? Something like this:</p>
<pre><code>PyObject* function = PyObject_GetAttrString(module, "foo");
// how to create PyObject* args?
PyObject* result = PyObject_Call(function, args, NULL);
</code></pre>
<p>I searched and read the docs, but could not find a similar example.</p>
|
<python><cpython><python-c-api>
|
2024-07-27 18:26:50
| 1
| 1,099
|
Marko Devcic
|
78,801,991
| 11,099,842
|
How to access a nested dictionary with a tuple
|
<p>I have a nested dictionary that I want to access via a tuple. How to do that?</p>
<pre><code> my_dict = {
"1": { "a": "hi", "b": "bye" },
"2": { "a": { "i": "howdy"}}
}
# Way 1: Works
my_dict["1"]["b"] # prints "bye"
my_dict["2"]["a"]["i"] # prints "howdy"
# Way 2: Doesn't Work
my_dict[("1", "b")] # fails
my_dict(("2", "a", "i")) # fails
</code></pre>
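<p>One sketch of what I mean by "access via a tuple" (the helper name <code>get_nested</code> is just illustrative): fold the tuple of keys over the dict with <code>functools.reduce</code>:</p>

```python
from functools import reduce

my_dict = {
    "1": {"a": "hi", "b": "bye"},
    "2": {"a": {"i": "howdy"}},
}

def get_nested(d, keys):
    # Apply each key in turn, descending one level per key.
    return reduce(lambda sub, key: sub[key], keys, d)

print(get_nested(my_dict, ("1", "b")))       # bye
print(get_nested(my_dict, ("2", "a", "i")))  # howdy
```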
|
<python>
|
2024-07-27 16:59:45
| 1
| 891
|
Al-Baraa El-Hag
|
78,801,910
| 18,125,313
|
Why is it slower when storing the result of numpy.astype in a variable?
|
<p>The following two functions logically do the same thing, but <code>func2</code>, which stores the result in a temporary variable, is slower.</p>
<pre class="lang-py prettyprint-override"><code>def func1(a):
return a.astype(np.float64) / 255.0
def func2(a):
t = a.astype(np.float64)
return t / 255.0
</code></pre>
<p>Benchmark:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import timeit
import numpy as np
def func1(a):
return a.astype(np.float64) / 255.0
def func2(a):
t = a.astype(np.float64)
return t / 255.0
def main():
print(f"{np.__version__=}")
print(f"{sys.version=}")
size = 1_000_000
n_repeats = 100
n_epochs = 5
a = np.random.randint(0, 256, size=size, dtype=np.uint8)
for f in [func1, func2] * n_epochs:
times = timeit.repeat(lambda: f(a), number=1, repeat=n_repeats)
print(f"{f.__name__}: {min(times)}")
main()
</code></pre>
<p>Result:</p>
<pre class="lang-none prettyprint-override"><code>np.__version__='1.26.4'
sys.version='3.12.4 (main, Jul 16 2024, 19:42:31) [GCC 11.4.0]'
func1: 0.0013509448617696762
func2: 0.0035865511745214462
func1: 0.0013513723388314247
func2: 0.0034992704167962074
func1: 0.0013509979471564293
func2: 0.003565799444913864
func1: 0.0013509783893823624
func2: 0.003563949838280678
func1: 0.0013510659337043762
func2: 0.003569650463759899
</code></pre>
<p>For a <code>size</code> of <code>1_000_000_000</code>:</p>
<pre class="lang-none prettyprint-override"><code>func1: 2.503432061523199
func2: 3.3956982269883156
func1: 2.503927574492991
func2: 3.393561664968729
func1: 2.5052043283358216
func2: 3.3980945963412523
func1: 2.503149318508804
func2: 3.39398608263582
func1: 2.5073573794215918
func2: 3.396817682310939
</code></pre>
<p>Although the relative difference has decreased, the absolute difference is now almost 1 second.
So, I assume there is some difference that affects the entire array, but I couldn't figure out what it was.</p>
<p>Here are the bytecodes for the two functions printed by the dis module.</p>
<pre class="lang-none prettyprint-override"><code> 8 0 RESUME 0
9 2 LOAD_FAST 0 (a)
4 LOAD_ATTR 1 (NULL|self + astype)
24 LOAD_GLOBAL 2 (np)
34 LOAD_ATTR 4 (float64)
54 CALL 1
62 LOAD_CONST 1 (255.0)
64 BINARY_OP 11 (/)
68 RETURN_VALUE
</code></pre>
<pre class="lang-none prettyprint-override"><code> 12 0 RESUME 0
13 2 LOAD_FAST 0 (a)
4 LOAD_ATTR 1 (NULL|self + astype)
24 LOAD_GLOBAL 2 (np)
34 LOAD_ATTR 4 (float64)
54 CALL 1
62 STORE_FAST 1 (t)
14 64 LOAD_FAST 1 (t)
66 LOAD_CONST 1 (255.0)
68 BINARY_OP 11 (/)
72 RETURN_VALUE
</code></pre>
<p>As you can see, the only difference is <code>STORE_FAST</code> and <code>LOAD_FAST</code> of <code>t</code>, which I thought would not affect the entire array. What am I misunderstanding?</p>
<p>Please note that I was only able to reproduce this on Ubuntu (probably GCC), and not on Windows.
I can't tell whether it's OS-dependent or hardware-dependent.</p>
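<p>For reference, the buffer reuse can also be probed without <code>np.byte_bounds</code> (which newer NumPy removes from the top-level namespace) by reading addresses via <code>__array_interface__</code>. In this sketch, the two addresses matching would indicate that the buffer of the refcount-1 temporary was reused for the division; on platforms without that behavior they simply differ:</p>

```python
import numpy as np

def addr(x):
    # Start address of the array's data buffer.
    return x.__array_interface__["data"][0]

a = np.random.randint(0, 256, size=1_000_000, dtype=np.uint8)

seen = []
def hook(x):
    seen.append(addr(x))  # record the temporary's buffer address
    return x

r = hook(a.astype(np.float64)) / 255.0
# If the build reuses refcount-1 temporaries (observed on Linux above), the
# division writes into the temporary's buffer and the addresses match;
# binding the temporary to a name adds a reference and forces a new buffer.
print(seen[0] == addr(r))
```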
<hr />
<h3>UPDATE:</h3>
<p>As @no-comment pointed out in the comments, the first timing (i.e., the one immediately after the function switches) is noticeably different from the later timings. So I think we can assume it was affected by the previous call.</p>
<pre class="lang-none prettyprint-override"><code>func1: 0.0014957496896386147 times[:5]=[0.003002448007464409, 0.002440572716295719, 0.0015078019350767136, 0.001591850072145462, 0.0014987429603934288]
func2: 0.0031856298446655273 times[:5]=[0.0031856298446655273, 0.0035560475662350655, 0.003505030646920204, 0.0035979915410280228, 0.0035021230578422546]
func1: 0.0014953771606087685 times[:5]=[0.002430872991681099, 0.0015014195814728737, 0.0014976952224969864, 0.0015024356544017792, 0.001497727818787098]
func2: 0.0031922338530421257 times[:5]=[0.0031922338530421257, 0.0035140514373779297, 0.003572382964193821, 0.003530893474817276, 0.003493628464639187]
func1: 0.0014957757666707039 times[:5]=[0.0024183401837944984, 0.001501905731856823, 0.0015066321939229965, 0.0015012985095381737, 0.0015163430944085121]
func2: 0.003169679082930088 times[:5]=[0.003169679082930088, 0.0035164309665560722, 0.0034868214279413223, 0.0034810323268175125, 0.003540727309882641]
func1: 0.0014958055689930916 times[:5]=[0.0024264072999358177, 0.0015287799760699272, 0.0014993362128734589, 0.001505572348833084, 0.001497725024819374]
func2: 0.003466635011136532 times[:5]=[0.0038485946133732796, 0.0054071033373475075, 0.0054869744926691055, 0.0034872666001319885, 0.003563432954251766]
func1: 0.0014960560947656631 times[:5]=[0.0024131694808602333, 0.001533709466457367, 0.0015180828049778938, 0.0015001073479652405, 0.0014980453997850418]
func2: 0.003209024667739868 times[:5]=[0.003209024667739868, 0.0037086624652147293, 0.0035465294495224953, 0.0034985439851880074, 0.00348656065762043]
</code></pre>
<hr />
<h3>UPDATE2:</h3>
<p>I was able to confirm that <code>func1</code> reuses memory for division using the following method.</p>
<pre class="lang-py prettyprint-override"><code>def show_address(a):
print(np.byte_bounds(a))
return a
def func1_with_hook(a):
return show_address(show_address(a.astype(np.float64)) / 255.0)
def func2_with_hook(a):
t = a.astype(np.float64)
return show_address(show_address(t) / 255.0)
</code></pre>
<p>However, I discovered something else in the process.</p>
<p>What I tried to do was to play around with memory outside of functions and investigate the changes in performance depending on whether memory (address) was reused or not. In the process, I realized that by storing the result of the function in a variable, <code>func2</code> runs just as fast as <code>func1</code>.</p>
<pre class="lang-py prettyprint-override"><code>def main():
print(f"{np.__version__=}")
print(f"{sys.version=}")
print("-" * 50)
size, n_repeats, n_epochs = 1_000_000, 100, 5
a = np.random.randint(0, 256, size=size, dtype=np.uint8)
for f in [func1, func2] * n_epochs:
times = []
epoch_started = time.perf_counter()
for _ in range(n_repeats):
started = time.perf_counter()
# f(a)
r = f(a) # This makes func2 faster.
times.append(time.perf_counter() - started)
epoch_time = time.perf_counter() - epoch_started
print(f"{f.__name__}: min={min(times):.6f} total={epoch_time:.4f} times={[round(t * 1000, 2) for t in times[:20]]}")
main()
</code></pre>
<pre class="lang-none prettyprint-override"><code>func1: min=0.001484 total=0.1595 times=[4.68, 3.6, 2.95, 2.93, 1.62, 1.5, 1.52, 1.53, 1.7, 1.76, 1.57, 1.51, 1.5, 1.49, 1.57, 1.49, 1.51, 1.49, 1.5, 1.6]
func2: min=0.001506 total=0.1554 times=[2.89, 1.54, 1.53, 1.54, 1.51, 1.53, 1.54, 1.57, 1.6, 1.56, 1.54, 1.52, 1.52, 1.54, 1.54, 1.56, 1.55, 1.54, 1.56, 1.54]
func1: min=0.001485 total=0.1634 times=[2.01, 2.91, 1.51, 1.5, 1.5, 1.49, 1.5, 1.49, 1.5, 1.49, 1.5, 1.56, 1.56, 1.53, 1.52, 1.61, 1.64, 1.53, 1.5, 1.49]
func2: min=0.001516 total=0.1609 times=[2.94, 1.6, 1.53, 1.58, 1.53, 1.57, 1.59, 1.53, 1.54, 1.56, 1.57, 1.62, 1.55, 1.54, 1.55, 1.56, 1.58, 1.57, 1.58, 1.56]
func1: min=0.001485 total=0.1655 times=[1.93, 2.86, 1.52, 1.49, 1.49, 1.49, 1.5, 1.49, 1.52, 1.55, 1.56, 1.5, 1.5, 1.5, 1.5, 1.49, 1.51, 1.52, 1.51, 1.49]
func2: min=0.001507 total=0.1623 times=[2.87, 1.59, 1.56, 1.58, 1.58, 1.63, 1.66, 1.64, 1.64, 1.61, 1.59, 1.61, 1.58, 1.57, 1.59, 1.57, 1.55, 1.56, 1.56, 1.61]
func1: min=0.001485 total=0.1522 times=[1.93, 2.86, 1.51, 1.49, 1.5, 1.6, 1.52, 1.5, 1.5, 1.49, 1.5, 1.51, 1.5, 1.5, 1.5, 1.49, 1.51, 1.49, 1.51, 1.49]
func2: min=0.001506 total=0.1557 times=[2.88, 1.54, 1.53, 1.52, 1.51, 1.52, 1.51, 1.51, 1.52, 1.52, 1.59, 1.58, 1.59, 1.59, 1.54, 1.55, 1.52, 1.53, 1.53, 1.52]
func1: min=0.001486 total=0.1538 times=[1.89, 2.93, 1.62, 1.52, 1.51, 1.52, 1.52, 1.51, 1.51, 1.49, 1.5, 1.49, 1.5, 1.5, 1.5, 1.49, 1.5, 1.5, 1.51, 1.51]
func2: min=0.001506 total=0.1566 times=[2.89, 1.56, 1.53, 1.52, 1.55, 1.57, 1.58, 1.54, 1.53, 1.55, 1.54, 1.53, 1.53, 1.58, 1.62, 1.61, 1.6, 1.6, 1.58, 1.59]
</code></pre>
<p>Please note that I also measured the runtime of the entire loop (see the <code>total</code> above). The timing of the deallocation is expected to shift, but it should still fall within the measured section.</p>
<p>This result brought me back to square one. It is most certainly a memory management issue, but what specific factor is causing it is quite confusing.</p>
<hr />
<h3>UPDATE3:</h3>
<p>The following is a breakdown of the details mentioned in UPDATE2.</p>
<p>If we discard the result (in the same way as with timeit), it will reuse 2 sets of memory spaces.</p>
<pre class="lang-py prettyprint-override"><code>def check_discard_result(size):
a = np.random.randint(0, 256, size=size, dtype=np.uint8)
for i in range(6):
print(f"--- discard {i} ---")
func2_with_hook(a) # Discard the result.
</code></pre>
<pre class="lang-none prettyprint-override"><code>--- discard 2 ---
(25003648, 33003648) <-- (1)
(33003664, 41003664) <-- (2)
--- discard 3 ---
(25003648, 33003648) <-- (1)
(33003664, 41003664) <-- (2)
--- discard 4 ---
(25003648, 33003648) <-- (1)
(33003664, 41003664) <-- (2)
--- discard 5 ---
(25003648, 33003648) <-- (1)
(33003664, 41003664) <-- (2)
</code></pre>
<p>On the other hand, if we keep the results, it will reuse 3 sets of memory spaces. (Which is expected, since the content of the variable <code>r</code> is held until the next result is assigned.)</p>
<pre class="lang-py prettyprint-override"><code>def check_keep_result(size):
a = np.random.randint(0, 256, size=size, dtype=np.uint8)
for i in range(6):
print(f"--- keep {i} ---")
r = func2_with_hook(a) # Keep the result.
</code></pre>
<pre class="lang-none prettyprint-override"><code>--- keep 2 ---
(24003632, 32003632) <-- (1)
(40003664, 48003664) <-- (2)
--- keep 3 ---
(24003632, 32003632) <-- (1)
(32003648, 40003648) <-- (3)
--- keep 4 ---
(24003632, 32003632) <-- (1)
(40003664, 48003664) <-- (2)
--- keep 5 ---
(24003632, 32003632) <-- (1)
(32003648, 40003648) <-- (3)
</code></pre>
<p>In terms of the total memory space used:</p>
<ul>
<li>Former: 25003648 to 41003664</li>
<li>Latter: 24003632 to 48003664</li>
</ul>
<p>In other words, the former reuses memory more aggressively, and the latter accesses a wider range of memory. However, the latter is faster.</p>
<pre class="lang-none prettyprint-override"><code>keep : min=0.000458 total=0.0529 times=[1.33, 1.59, 0.59, 0.51, 0.51, 0.49, 0.51, 0.5, 0.51, 0.48, 0.5, 0.49, 0.48, 0.47, 0.47, 0.46, 0.46, 0.46, 0.47, 0.46]
discard: min=0.001360 total=0.1507 times=[2.03, 1.77, 1.58, 1.4, 1.37, 1.47, 1.45, 1.4, 1.39, 1.58, 1.55, 1.49, 1.51, 1.47, 1.4, 1.37, 1.39, 1.51, 1.39, 1.42]
keep : min=0.000461 total=0.0560 times=[1.6, 1.53, 0.74, 0.56, 0.62, 0.52, 0.54, 0.48, 0.48, 0.47, 0.48, 0.51, 0.7, 0.58, 0.52, 0.48, 0.48, 0.46, 0.48, 0.47]
discard: min=0.001353 total=0.1492 times=[2.15, 1.91, 1.65, 1.55, 1.55, 1.52, 1.46, 1.53, 1.47, 1.54, 1.48, 1.52, 1.44, 1.42, 1.39, 1.56, 1.57, 1.49, 1.49, 1.46]
keep : min=0.000461 total=0.0555 times=[1.66, 1.5, 0.85, 0.59, 0.62, 0.55, 0.62, 0.71, 0.69, 0.7, 0.6, 0.64, 0.61, 0.6, 0.6, 0.53, 0.59, 0.53, 0.54, 0.53]
discard: min=0.001363 total=0.1504 times=[1.85, 1.82, 1.58, 1.55, 1.53, 1.44, 1.37, 1.4, 1.4, 1.41, 1.41, 1.41, 1.57, 1.54, 1.51, 1.52, 1.46, 1.41, 1.41, 1.44]
keep : min=0.000468 total=0.0550 times=[2.05, 1.5, 0.58, 0.54, 0.57, 0.54, 0.59, 0.7, 0.62, 0.61, 0.52, 0.54, 0.51, 0.51, 0.52, 0.5, 0.54, 0.49, 0.52, 0.49]
discard: min=0.001354 total=0.1488 times=[1.95, 1.79, 1.57, 1.5, 1.45, 1.47, 1.45, 1.43, 1.67, 1.62, 1.74, 1.62, 1.52, 1.44, 1.4, 1.42, 1.54, 1.43, 1.38, 1.53]
keep : min=0.000456 total=0.0547 times=[1.82, 1.6, 0.77, 0.55, 0.51, 0.5, 0.51, 0.56, 0.64, 0.57, 0.54, 0.51, 0.5, 0.54, 0.54, 0.54, 0.53, 0.52, 0.51, 0.48]
discard: min=0.001360 total=0.1492 times=[1.89, 1.73, 1.6, 1.49, 1.43, 1.53, 1.58, 1.51, 1.55, 1.48, 1.41, 1.37, 1.37, 1.38, 1.54, 1.44, 1.45, 1.59, 1.5, 1.51]
</code></pre>
<pre class="lang-py prettyprint-override"><code>import sys
import time
import numpy as np
import os
print("LD_PRELOAD:", os.getenv("LD_PRELOAD"))
def show_address(a):
print(np.byte_bounds(a))
return a
def func2_with_hook(a):
t = a.astype(np.float64)
return show_address(show_address(t) / 255.0)
def func2(a):
t = a.astype(np.float64)
return t / 255.0
def check_discard_result(size):
a = np.random.randint(0, 256, size=size, dtype=np.uint8)
for i in range(6):
print(f"--- discard {i} ---")
func2_with_hook(a) # Discard the result.
def check_keep_result(size):
a = np.random.randint(0, 256, size=size, dtype=np.uint8)
for i in range(6):
print(f"--- keep {i} ---")
r = func2_with_hook(a) # Keep the result.
def benchmark_discard_result(size, n_repeats):
times = []
epoch_started = time.perf_counter()
a = np.random.randint(0, 256, size=size, dtype=np.uint8)
for _ in range(n_repeats):
started = time.perf_counter()
func2(a) # Discard the result.
times.append(time.perf_counter() - started)
epoch_time = time.perf_counter() - epoch_started
print(f"discard: min={min(times):.6f} total={epoch_time:.4f} times={[round(t * 1000, 2) for t in times[:20]]}")
def benchmark_keep_result(size, n_repeats):
times = []
epoch_started = time.perf_counter()
a = np.random.randint(0, 256, size=size, dtype=np.uint8)
for _ in range(n_repeats):
started = time.perf_counter()
r = func2(a) # Keep the result.
times.append(time.perf_counter() - started)
epoch_time = time.perf_counter() - epoch_started
print(f"keep : min={min(times):.6f} total={epoch_time:.4f} times={[round(t * 1000, 2) for t in times[:20]]}")
def main():
print(f"{np.__version__=}")
print(f"{sys.version=}")
print("-" * 50)
size, n_repeats, n_epochs = 1_000_000, 100, 5
check_keep_result(size)
check_discard_result(size)
for _ in range(n_epochs):
benchmark_keep_result(size, n_repeats)
benchmark_discard_result(size, n_repeats)
main()
</code></pre>
|
<python><numpy><performance>
|
2024-07-27 16:26:51
| 2
| 3,446
|
ken
|
78,801,590
| 1,136,895
|
Tensorflow consumes both GPU and CPU Memory
|
<p>I have TensorFlow set up with GPU enabled on Debian. Upon using <code>tensorflow.keras.models.load_model</code> to load a model, I noticed that it utilizes both GPU memory and CPU memory (the system's RAM). I'm curious to know if it's typical for TensorFlow to use some of my system's RAM in addition to GPU memory.</p>
|
<python><tensorflow><keras><tensorflow2.0><tensorflow-serving>
|
2024-07-27 14:04:48
| 0
| 1,351
|
Masoud
|
78,801,434
| 11,354,959
|
Test python application and SQL scripts
|
<p>I am creating an application with Flask and I am using SQLAlchemy with it.</p>
<p>I am currently writing some unit/integration tests, and I was wondering: what is the best practice for adding data to my in-memory DB in Python? Should I execute SQL scripts from a file before I run my tests? Should I execute SQL during the test?</p>
<p>For instance, let's take a look at the tests for my register method, which are in my <code>test_auth.py</code> file (there are the register method and the login method):</p>
<pre><code>def test_register(client):
data = {
"username" : "test_user",
"password" : "test_password"
}
response = client.post(f'{AUTH_ROUTE}register', json=data)
assert response.status_code == 201
assert response.get_json() == {'message': 'User created successfully'}
with client.application.app_context():
user = User.query.filter_by(username='test_user').first()
assert user is not None
assert user.username == 'test_user'
assert user.password == 'test_password'
def test_register_user_already_exist(client):
with client.application.app_context():
existing_user = User(username='test_user', password='password')
db.session.add(existing_user)
db.session.commit()
data = {
"username" : "test_user",
"password" : "test_password"
}
response = client.post(f'{AUTH_ROUTE}register', json=data)
assert response.status_code == 400
</code></pre>
<p>I am writing code with <code>app_context()</code> to execute operations on my DB. Is this the correct way to do things? Should I remove the SQL part and put it somewhere else?</p>
<p>Should I first execute a script that runs my SQL for the test_auth part, and then run all the tests?</p>
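<p>To make the pattern concrete, here is the "seed inside the test" shape of my second test, sketched against plain <code>sqlite3</code> so it is self-contained (the real code goes through <code>db.session</code> in an <code>app_context()</code>; names like <code>register</code> here are stand-ins):</p>

```python
import sqlite3

def make_db():
    """In-memory DB created fresh per test (what a pytest fixture would yield)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT PRIMARY KEY, password TEXT)")
    return conn

def register(conn, username, password):
    """Stand-in for the app's register endpoint; returns an HTTP-like status."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO users VALUES (?, ?)", (username, password))
        return 201
    except sqlite3.IntegrityError:
        return 400

def test_register_user_already_exist():
    conn = make_db()
    register(conn, "test_user", "password")           # arrange: seed in the test
    assert register(conn, "test_user", "pw2") == 400  # act + assert

test_register_user_already_exist()
```

<p>The seeding lives in the test's arrange step rather than in an external SQL script, so each test states its own preconditions and the in-memory DB starts empty every time.</p>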
|
<python><sql><flask><pytest>
|
2024-07-27 12:48:46
| 1
| 370
|
El Pandario
|
78,801,372
| 169,992
|
Huggingface Mistral-Nemo-Instruct-2407 python script for text generation just hanging on Mac M3?
|
<p>I installed pytorch, transformers, and python-dotenv to run this script:</p>
<pre><code>from transformers import pipeline
from dotenv import load_dotenv
import torch
import os
import json
# from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
load_dotenv()
device = "cuda" if torch.cuda.is_available() else "cpu"
# device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
def summarize_definitions(definitions):
# Use a pipeline as a high-level helper
pipe = pipeline("text-generation", model="mistralai/Mistral-Nemo-Instruct-2407", device=device)
cleaned_definitions = {}
for i, (term, defs) in enumerate(definitions.items()):
combined_defs = f"Please summarize these definitions into a JSON array of simple ideally 1-3 word definitions: {json.dumps(defs, indent=2)}"
# Summarize the combined definitions
messages = [
{"role": "user", "content": combined_defs},
]
summary = pipe(messages, min_length=1, max_new_tokens=1000)
print(summary)
return cleaned_definitions
def main():
with open('import/language/tibetan/definitions.out.json', 'r', encoding='utf-8') as f:
definitions = json.load(f)
cleaned_definitions = summarize_definitions(definitions)
print(cleaned_definitions)
if __name__ == "__main__":
main()
</code></pre>
<p>When I run it, I just see this (it already previously installed 5 model chunks, each about 4-5GB):</p>
<pre><code>$ python3 transform.py
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████| 5/5 [00:37<00:00,  7.48s/it]
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
</code></pre>
<p>Then it just sits there.</p>
<p>I tried commenting back in this line:</p>
<pre><code>device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
</code></pre>
<p>to see if I could use the GPU to speed things up. But it just said this and exited without any other output:</p>
<pre><code>$ python3 transform.py
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████| 5/5 [00:39<00:00,  7.88s/it]
zsh: killed python3 transform.py
(venv) $ /opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
</code></pre>
<p>Any ideas what I'm doing wrong? Is it supposed to take a long time to get the first output? Or what is supposed to happen? I am new to this AI stuff, though have been programming for a while.</p>
<p>When I <code>CTRL+C</code>, it errors at line 31, which is right when I call <code>pipe(messages)</code>:</p>
<pre><code>^CTraceback (most recent call last):
File "./transform.py", line 60, in <module>
if __name__ == "__main__":
^^^^^^
File "./transform.py", line 56, in main
File "./transform.py", line 31, in summarize_definitions
]
File "./venv/lib/python3.12/site-packages/transformers/pipelines/text_generation.py", line 257, in __call__
return super().__call__(Chat(text_inputs), **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
</code></pre>
|
<python><pytorch><nlp><huggingface-transformers><text-generation>
|
2024-07-27 12:24:40
| 0
| 80,366
|
Lance Pollard
|
78,801,248
| 6,556,388
|
pyspark .display() works but .collect(), .distinct() and show() don't
|
<p>I'm working with a pyspark dataframe in Azure Databricks and I'm trying to count how many unique (distinct) values a particular column has.</p>
<blockquote>
<p>Cluster: 14.3 LTS (includes Apache Spark 3.5.0, Scala 2.12)
pandas 1.5.3
pyspark 3.3.0</p>
</blockquote>
<p>I can .display() and .count() the df without problems, but when I try any of these three commands</p>
<pre><code>df.select("PB_countProducts").distinct().count()
df.collect()
df.show()
</code></pre>
<p>I get this error:</p>
<pre><code>SparkException: Job aborted due to stage failure: Task 6 in stage 57.0 failed 4 times, most recent failure: Lost task 6.3 in stage 57.0 (TID 421) (10.43.64.12 executor 0): org.apache.spark.SparkException: Failed to create session
at com.databricks.spark.api.python.IsolatedPythonWorkerFactory.createRawIsolatedWorker(IsolatedPythonWorkerFactory.scala:233)
at com.databricks.spark.api.python.IsolatedPythonWorkerFactory.create(IsolatedPythonWorkerFactory.scala:293)
at org.apache.spark.SparkEnv.createIsolatedPythonWorker(SparkEnv.scala:300)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:325)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:228)
at org.apache.spark.sql.execution.python.BasePythonUDFRunner.compute(PythonUDFRunner.scala:59)
at org.apache.spark.sql.execution.python.BatchEvalPythonEvaluatorFactory.evaluate(BatchEvalPythonExec.scala:80)
at org.apache.spark.sql.execution.python.EvalPythonEvaluatorFactory$EvalPythonPartitionEvaluator.eval(EvalPythonEvaluatorFactory.scala:114)
at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2(EvalPythonExec.scala:77)
at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2$adapted(EvalPythonExec.scala:76)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2(RDD.scala:920)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2$adapted(RDD.scala:920)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
at org.apache.spark.rdd.RDD.$anonfun$computeOrReadCheckpoint$1(RDD.scala:409)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:406)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:373)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
at org.apache.spark.rdd.RDD.$anonfun$computeOrReadCheckpoint$1(RDD.scala:409)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:406)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:373)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
at org.apache.spark.rdd.RDD.$anonfun$computeOrReadCheckpoint$1(RDD.scala:409)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:406)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:373)
at org.apache.spark.scheduler.ShuffleMapTask.$anonfun$runTask$3(ShuffleMapTask.scala:88)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.ShuffleMapTask.$anonfun$runTask$1(ShuffleMapTask.scala:87)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:58)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:39)
at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:201)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:186)
at org.apache.spark.scheduler.Task.$anonfun$run$5(Task.scala:151)
at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45)
at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103)
at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108)
at scala.util.Using$.resource(Using.scala:269)
at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107)
at org.apache.spark.scheduler.Task.$anonfun$run$1(Task.scala:145)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$9(Executor.scala:958)
at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:105)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:961)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:853)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult: RawInitSandboxError: [IndexOutOfBoundsException] readerIndex(8) + length(2067) exceeds writerIndex(2048): PooledUnsafeDirectByteBuf(ridx: 8, widx: 2048, cap: 2048)
at org.apache.spark.util.SparkThreadUtils$.awaitResult(SparkThreadUtils.scala:51)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:462)
at com.databricks.spark.api.python.IsolatedPythonWorkerFactory.createRawIsolatedWorker(IsolatedPythonWorkerFactory.scala:230)
... 54 more
Caused by: java.lang.RuntimeException: RawInitSandboxError: [IndexOutOfBoundsException] readerIndex(8) + length(2067) exceeds writerIndex(2048): PooledUnsafeDirectByteBuf(ridx: 8, widx: 2048, cap: 2048)
at daemon.safespark.client.SandboxApiClient.rawInitSandbox(SandboxApiClient.scala:272)
at com.databricks.spark.safespark.ApiAdapter.rawInitSandbox(ApiAdapter.scala:123)
at com.databricks.spark.safespark.udf.DispatcherImpl.$anonfun$createRawConnection$3(DispatcherImpl.scala:142)
at com.databricks.logging.UsageLogging.$anonfun$recordOperation$1(UsageLogging.scala:573)
at com.databricks.logging.UsageLogging.executeThunkAndCaptureResultTags$1(UsageLogging.scala:669)
at com.databricks.logging.UsageLogging.$anonfun$recordOperationWithResultTags$4(UsageLogging.scala:687)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:426)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:216)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:424)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:418)
at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:27)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:472)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:455)
at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:27)
at com.databricks.logging.UsageLogging.recordOperationWithResultTags(UsageLogging.scala:664)
at com.databricks.logging.UsageLogging.recordOperationWithResultTags$(UsageLogging.scala:582)
at com.databricks.spark.util.PublicDBLogging.recordOperationWithResultTags(DatabricksSparkUsageLogger.scala:27)
at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:573)
at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:542)
at com.databricks.spark.util.PublicDBLogging.recordOperation(DatabricksSparkUsageLogger.scala:27)
at com.databricks.spark.util.PublicDBLogging.recordOperation0(DatabricksSparkUsageLogger.scala:68)
at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:150)
at com.databricks.spark.util.UsageLogger.recordOperation(UsageLogger.scala:68)
at com.databricks.spark.util.UsageLogger.recordOperation$(UsageLogger.scala:55)
at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:109)
at com.databricks.spark.util.UsageLogging.recordOperation(UsageLogger.scala:429)
at com.databricks.spark.util.UsageLogging.recordOperation$(UsageLogger.scala:408)
at com.databricks.spark.safespark.udf.DispatcherImpl.recordOperation(DispatcherImpl.scala:67)
at com.databricks.spark.safespark.udf.DispatcherImpl.recordDispatcherOperation(DispatcherImpl.scala:443)
at com.databricks.spark.safespark.udf.DispatcherImpl.$anonfun$createRawConnection$2(DispatcherImpl.scala:130)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at org.apache.spark.util.ThreadUtils$$anon$1.execute(ThreadUtils.scala:105)
at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
at scala.concurrent.impl.Promise$KeptPromise$Kept.onComplete(Promise.scala:372)
at scala.concurrent.impl.Promise$KeptPromise$Kept.onComplete$(Promise.scala:371)
at scala.concurrent.impl.Promise$KeptPromise$Successful.onComplete(Promise.scala:379)
at scala.concurrent.impl.Promise.transform(Promise.scala:33)
at scala.concurrent.impl.Promise.transform$(Promise.scala:31)
at scala.concurrent.impl.Promise$KeptPromise$Successful.transform(Promise.scala:379)
at scala.concurrent.Future.map(Future.scala:292)
at scala.concurrent.Future.map$(Future.scala:292)
at scala.concurrent.impl.Promise$KeptPromise$Successful.map(Promise.scala:379)
at scala.concurrent.Future$.apply(Future.scala:659)
at com.databricks.spark.safespark.udf.DispatcherImpl.createRawConnection(DispatcherImpl.scala:169)
at com.databricks.spark.safespark.Dispatcher.createRawConnection(Dispatcher.scala:156)
at com.databricks.spark.api.python.IsolatedPythonWorkerFactory.createRawIsolatedWorker(IsolatedPythonWorkerFactory.scala:228)
... 54 more
Driver stacktrace:
JVM stacktrace:
org.apache.spark.SparkException
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:3908)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:3830)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:3817)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:3817)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1695)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1680)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1680)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:4154)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:4066)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:4054)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:54)
Caused by: org.apache.spark.SparkException: Failed to create session
</code></pre>
<p>I tried to come up with a <code>df_fake</code> to let you reproduce the problem, with the same columns and values, but when I create it from data and a schema, all these functions work.</p>
<p>Here's what I did to create the df_fake:</p>
<pre><code>from pyspark.sql import SparkSession
import pyspark.sql.functions as F

# Create a Spark session
spark = SparkSession.builder.appName("CreateDeltaTable").getOrCreate()
spark.conf.set("spark.sql.parquet.enableVectorizedReader","false")
data = [("3bf293f7-f093-4943-9e1f-1e9eb313c9d6", 1),
("f8342388-3b17-4043-8a5a-73c56335181f", 1),
("1a6d1a34-4291-4e24-b00e-45600c04bc55", 2)]
columns = ["RequestId", "PB_countProducts"]
df_fake = spark.createDataFrame(data, columns)
df_fake = df_fake.withColumn("PB_countProducts", F.col("PB_countProducts").cast("integer"))
df_fake.display()
</code></pre>
<p>When I display my problematic df (called <code>df</code>) and my fake df (<code>df_fake</code>), they look alike:</p>
<pre><code>df.display()
df_fake.display()
</code></pre>
<p><a href="https://i.sstatic.net/Qs26fXen.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qs26fXen.png" alt=".display" /></a></p>
<p>When I print dtypes, they look alike:</p>
<pre><code>print(df.dtypes)
print(df_fake.dtypes)
</code></pre>
<p>[('RequestId', 'string'), ('PB_countProducts', 'int')]
[('RequestId', 'string'), ('PB_countProducts', 'int')]</p>
<p>When I print their schemas, they look alike:</p>
<pre><code>print(df.printSchema())
print(df_fake.printSchema())
</code></pre>
<pre><code>root
 |-- RequestId: string (nullable = true)
 |-- PB_countProducts: integer (nullable = true)

None
root
 |-- RequestId: string (nullable = true)
 |-- PB_countProducts: integer (nullable = true)

None
</code></pre>
<p>But when I try <code>.collect()</code> (or any of the mentioned functions), I get errors only for <code>df</code>, while <code>df_fake</code> works fine:</p>
<pre><code>print(df_fake.collect())
</code></pre>
<pre><code>[Row(RequestId='3bf293f7-f093-4943-9e1f-1e9eb313c9d6', PB_countProducts=1), Row(RequestId='f8342388-3b17-4043-8a5a-73c56335181f', PB_countProducts=1), Row(RequestId='1a6d1a34-4291-4e24-b00e-45600c04bc55', PB_countProducts=2)]
</code></pre>
<p><code>print(df.collect())</code> generates the error above.</p>
<p>Any idea what I can do? ChatGPT wasn't very helpful, and neither was the Databricks assistant.</p>
<p><strong>Update</strong>: After trying a lot of things, the problem was solved. Here are a couple of changes that may or may not have fixed it:</p>
<ul>
<li>I was installing several packages at the beginning of the code, and some of them could have been interfering with others (package versioning has been hard to manage).</li>
<li>One of the columns was filled with dictionaries, and I managed to identify that it was the only column on which I couldn't run <code>.distinct().count()</code>. Dropping it helped.</li>
</ul>
|
<python><apache-spark><pyspark><databricks>
|
2024-07-27 11:28:44
| 0
| 864
|
Diego Rodrigues
|
78,800,937
| 1,188,943
|
Change Enter line break character with shift+enter line break in Python
|
<p>How can I replace newline characters (\n) with a Shift+Enter equivalent in Python to maintain paragraph structure when working with text data? Are there any specific libraries or techniques to achieve this?</p>
<p>I want to fill the textarea with selenium.</p>
|
<python><selenium-webdriver>
|
2024-07-27 08:51:40
| 2
| 1,035
|
Mahdi
|
78,800,904
| 6,223,275
|
Preview Version 5.0 interactive window hangs in Visual Studio Professional 2022 (64-bit) (Python)
|
<ul>
<li><p>I am running the Python Development Workload on Microsoft Visual Studio Professional 2022 (64-bit) - Preview Version 17.11.0 Preview 5.0.</p>
</li>
<li><p>I am following the Python in Visual Studio tutorial <a href="https://learn.microsoft.com/en-us/visualstudio/python/tutorial-working-with-python-in-visual-studio-step-03-interactive-repl?view=vs-2022" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/visualstudio/python/tutorial-working-with-python-in-visual-studio-step-03-interactive-repl?view=vs-2022</a></p>
</li>
<li><p>I have installed the <strong>ipython</strong> and <strong>ipykernel</strong> packages</p>
</li>
</ul>
<p>When I open the Interactive window and enter <code>$help</code>, it returns the keyboard shortcuts and REPL commands – <strong>OK</strong>.</p>
<p>However, when I enter an instruction like <code>2+2</code>, it hangs.</p>
<p>This is similar to the 2 year old Stack Overflow question <a href="https://stackoverflow.com/questions/71420378/ipython-%c4%b0nteractive-window-in-visual-studio">IPython İnteractive Window in Visual Studio</a>, which I have also tried to use without success.</p>
<p>The problem appears a well-known one, see also <a href="https://trycatchdebug.net/news/1346983/python-visual-studio-hangs" rel="nofollow noreferrer">https://trycatchdebug.net/news/1346983/python-visual-studio-hangs</a> Unfortunately their suggestions do not solve my problem.</p>
<p>PS. Thanks to the person that cleaned up my formatting.</p>
<p>What am I doing wrong?</p>
|
<python><visual-studio>
|
2024-07-27 08:31:24
| 1
| 1,366
|
Joe
|
78,800,844
| 6,243,129
|
Paddle OCR not able to extract single digits from image
|
<p>I am doing an image OCR using Paddle OCR. Below is the sample code I am using:</p>
<pre><code>from paddleocr import PaddleOCR
import os
image_file = "3_496.png"
ocr = PaddleOCR(use_gpu=True)
image_path = os.path.join(os.getcwd(), image_file)
result = ocr.ocr(image_path, cls=True)
if None not in result:
print(f"OCR result for {image_file}:")
for line in result[0]:
print(line[1][0])
</code></pre>
<p>Below is the image file:</p>
<p><a href="https://i.sstatic.net/EDGrXvJZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDGrXvJZ.png" alt="enter image description here" /></a></p>
<p>In this image, Paddle OCR is able to extract almost all the items but fails to extract <code>4</code> (PACK SIZE). I have some more very similar images and have noticed that it fails to extract single digits. Is this a known limitation, or am I doing something wrong?</p>
<p>Below is the output after running the above code:</p>
<p><code>OCR result for 3_496.png: PAGLNILE. PIGRWGIGHT 400g DATCHNUMHERE USLOYI 205257 31/07/2024 12.22. CUTIN NZ</code></p>
|
<python><ocr><paddle-paddle><paddleocr><paddle>
|
2024-07-27 07:57:08
| 2
| 7,576
|
S Andrew
|
78,800,797
| 2,268,543
|
How to view the final prompt in a MultiQueryRetriever pipeline using LangChain?
|
<p>I am currently working on a project using the LangChain library where I want to retrieve relevant documents from a vector database and then generate answers based on these documents using the Ollama LLM.</p>
<p>Below is my current implementation:</p>
<pre class="lang-py prettyprint-override"><code>import logging

from langchain.prompts import PromptTemplate
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# `ollama` (the LLM) and `vectordb` are initialized elsewhere in my code.

logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
# Define the prompt template for generating multiple query versions
QUERY_PROMPT = PromptTemplate(
input_variables=["question"],
template="""You are an AI language model assistant. Your task is to generate five
different versions of the given user question to retrieve relevant documents from
a vector database. By generating multiple perspectives on the user question, your
goal is to help the user overcome some of the limitations of the distance-based
similarity search. Provide these alternative questions separated by newlines.
Original question: {question}""",
)
# Initialize the MultiQueryRetriever
retriever = MultiQueryRetriever.from_llm(
vectordb.as_retriever(),
ollama,
prompt=QUERY_PROMPT
)
# Modified RAG prompt for generating the final response
template = """Answer the question based ONLY on the following context:
{context}
Question: {question}
"""
# Create the final QA chain
prompt = ChatPromptTemplate.from_template(template)
from langchain_core.runnables import RunnableLambda
def inspect(state):
"""Print the state passed between Runnables in a langchain and pass it on"""
print(state)
return state
qa_chain = (
{"context": retriever, "question": RunnablePassthrough()}
| RunnableLambda(inspect) # Add the inspector here to print the intermediate results
| prompt
| ollama
| StrOutputParser()
)
# Invoke the QA chain with a sample query
qa_chain.invoke("Give 10 quotes from this articles related to love?")
</code></pre>
<p>How can I view the final prompt that is generated by the <code>qa_chain</code> before it is sent to the Ollama LLM for processing? I would like to see the exact prompt that includes the context and the user's question.</p>
|
<python><langchain><large-language-model><ollama>
|
2024-07-27 07:28:04
| 1
| 2,519
|
Rasik
|
78,800,653
| 1,188,943
|
Remove common company name suffixes in Python using regex
|
<p>I'm struggling to remove suffixes from some company names. The expected result is like below:</p>
<p>Original Names:</p>
<pre><code>Apple Inc.
Sony Corporation
Fiat Chrysler Automobiles S.p.A.
Samsung Electronics Co., Ltd.
</code></pre>
<p>Cleared Names:</p>
<pre><code>Apple
Sony
Fiat Chrysler Automobiles
Samsung Electronics
</code></pre>
<p>What I have done until now:</p>
<pre><code>import re
def remove_company_suffixes(company_name):
suffix_pattern = r"\s*(?:co(?:rp(?:oration)?|mpany)?|ltd\.|llc|gmbh|sa|sp\.a\.|s\.r\.l\.|ag|nv|bv|inc\.|s\.a\.s\.|e\.u\.|s\.l\.|s\.a\.l\.|doo|dooel|d.o.o.|szr|ltd|inc|llc|corp|ag|sa|sp|sl)\.?$"
return re.sub(suffix_pattern, '', company_name.strip())
company_names = ["Apple Inc.", "Sony Corporation", "Fiat Chrysler Automobiles S.p.A.", "Samsung Electronics Co., Ltd.", "Plasticos SA", "ABC GmbH"]
for company_name in company_names:
cleaned_name = remove_company_suffixes(company_name)
print(cleaned_name)
</code></pre>
<p>The result is:</p>
<pre><code>Apple
Sony
Fiat Chrysler Automobiles S.p.A.
Samsung Electronics Co.,
Plasticos
ABC
</code></pre>
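<p>To narrow the failure down, I also checked the two problem cases directly (adding <code>re.IGNORECASE</code> explicitly, since the suffixes in my pattern are lower-case); this is just a diagnostic sketch:</p>

```python
import re

# Same alternation as in my function above, case-insensitive.
suffix_pattern = r"\s*(?:co(?:rp(?:oration)?|mpany)?|ltd\.|llc|gmbh|sa|sp\.a\.|s\.r\.l\.|ag|nv|bv|inc\.|s\.a\.s\.|e\.u\.|s\.l\.|s\.a\.l\.|doo|dooel|d.o.o.|szr|ltd|inc|llc|corp|ag|sa|sp|sl)\.?$"

# "S.p.A." is untouched (the pattern only knows "sp.a.", with no dot after
# the leading "S"), and only the trailing "Ltd." is removed from
# "Co., Ltd." because the pattern is anchored at the end of the string.
for name in ["Fiat Chrysler Automobiles S.p.A.", "Samsung Electronics Co., Ltd."]:
    print(repr(re.sub(suffix_pattern, "", name, flags=re.IGNORECASE)))
```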
|
<python><regex><string>
|
2024-07-27 06:10:57
| 3
| 1,035
|
Mahdi
|
78,800,160
| 3,486,684
|
How to annotate type of an `OrderedDict` that is initialized with literals?
|
<p>Suppose I have the following:</p>
<pre class="lang-py prettyprint-override"><code>from collections import OrderedDict
from dataclasses import dataclass
@dataclass
class HelloWorld:
x: OrderedDict[str, int]
a = OrderedDict([("a", 0), ("c", 2), ("b", 1)])
HelloWorld(a)  # <--- type error here
</code></pre>
<p>The type error produced is:</p>
<pre><code>Argument of type "OrderedDict[Literal['a', 'c', 'b'], Literal[0, 2, 1]]" cannot be assigned to parameter "x" of type "OrderedDict[str, int]" in function "__init__"
"OrderedDict[Literal['a', 'c', 'b'], Literal[0, 2, 1]]" is incompatible with "OrderedDict[str, int]"
Type parameter "_KT@OrderedDict" is invariant, but "Literal['a', 'c', 'b']" is not the same as "str"
Type parameter "_VT@OrderedDict" is invariant, but "Literal[0, 2, 1]" is not the same as "int
</code></pre>
<p>Oddly enough, this very similar snippet does not produce an error:</p>
<pre class="lang-py prettyprint-override"><code>from collections import OrderedDict
from dataclasses import dataclass
@dataclass
class HelloWorld:
x: OrderedDict[str, int]
HelloWorld(OrderedDict([("a", 0), ("c", 2), ("b", 1)])) # <--- no error
</code></pre>
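<p>For what it's worth, explicitly annotating the variable also seems to silence the error in my setup, presumably because the declared type widens the inferred literal types (the runtime behaviour is of course unchanged either way):</p>

```python
from collections import OrderedDict
from dataclasses import dataclass

@dataclass
class HelloWorld:
    x: OrderedDict[str, int]

# Annotating `a` explicitly pins its type to OrderedDict[str, int]
# instead of letting the checker infer literal key/value types.
a: OrderedDict[str, int] = OrderedDict([("a", 0), ("c", 2), ("b", 1)])
print(HelloWorld(a))
```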
|
<python><python-typing><ordereddictionary><pyright>
|
2024-07-26 23:31:04
| 2
| 4,654
|
bzm3r
|
78,800,053
| 197,738
|
Python, pandas parse number and string from string
|
<p>In Python, I want to parse a string and return the numeric portion (may or may not have decimal point) as a float and return the suffix as a string. Examples are:</p>
<ul>
<li><p>7.1inch -> 7.1, inch</p>
</li>
<li><p>7.1″ -> 7.1, ″</p>
</li>
<li><p>7in -> 7.0, in</p>
</li>
<li><p>-10dB -> -10.0, dB</p>
</li>
<li><p>-10.2dB -> -10.2, dB</p>
</li>
</ul>
<p>There's no space between the numeric portion and suffix. Also, I want to apply this to a Pandas DataFrame column that has this format so that I can sort by the floating point value. I then want to append the suffix back to each element in the column after the sort.</p>
<p>How to do this?</p>
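<p>To make the expected behaviour concrete, here is a minimal sketch of the kind of helper I have in mind (the function name is my own placeholder):</p>

```python
import re

def split_value(s):
    # Leading signed number with an optional decimal part; the rest is the suffix.
    m = re.match(r"\s*(-?\d+(?:\.\d+)?)(.*)\Z", s)
    if m is None:
        raise ValueError(f"no leading number in {s!r}")
    return float(m.group(1)), m.group(2)

for s in ["7.1inch", "7in", "-10dB", "-10.2dB"]:
    print(split_value(s))
```

<p>Applied to a DataFrame column, something like <code>df["col"].map(split_value)</code> would then give the pairs to sort on.</p>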
|
<python><pandas><string><floating-point>
|
2024-07-26 22:19:19
| 3
| 421
|
tosa
|
78,799,762
| 6,626,632
|
Overlay a polar matplotlib axis over a geopandas map
|
<p>I would like to overlay a polar axis over a map of a GeoPandas GeoDataFrame centered at a specific point. The ultimate goal is for the plot on the polar axis to be exactly centered at the point. I have found a way to do this but it is hacky and I wonder if there is a better way.</p>
<p>The problem is that the centers of the geopandas plot and the polar axes are slightly offset. For example, this code:</p>
<pre><code>import geopandas as gpd
import matplotlib.pyplot as plt
lat, lon = 37, -122
gdf = gpd.GeoDataFrame({'id': ['my_point']}, geometry=gpd.points_from_xy([lon], [lat], crs=4326))
fig, ax = plt.subplots()
gdf.plot(ax=ax)
ax.set_axis_off()
polar_ax=fig.add_axes(rect=[0, 0, 1, 1], polar=True, frameon=False)
</code></pre>
<p>Yields this figure:</p>
<p><a href="https://i.sstatic.net/GPi69OKQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPi69OKQ.png" alt="enter image description here" /></a></p>
<p>Notice how the point is slightly off the origin of the polar axes.</p>
<p>If I finagle the <code>rect</code> argument in <code>fig.add_axes</code>, I can get the two to line up:</p>
<pre><code>import geopandas as gpd
import matplotlib.pyplot as plt
lat, lon = 37, -122
gdf = gpd.GeoDataFrame({'id': ['my_point']}, geometry=gpd.points_from_xy([lon], [lat], crs=4326))
fig, ax = plt.subplots()
gdf.plot(ax=ax)
polar_ax=fig.add_axes(rect=[0, 0, 1.025, 0.99], polar=True, frameon=False)
</code></pre>
<p><a href="https://i.sstatic.net/oJT43JA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJT43JA4.png" alt="enter image description here" /></a></p>
<p>My questions: Is there a better way to do this (i.e., one that will automatically adjust the polar axes so they are centered at the same location as the geopandas plot)? Will this "adjustment factor" for the <code>rect</code> argument work universally, or does it depend on the particulars of the geodataframe I'm plotting?</p>
|
<python><matplotlib><geopandas><polar-coordinates>
|
2024-07-26 20:13:06
| 2
| 462
|
sdg
|
78,799,730
| 1,892,433
|
Open with Python an R data.table saved as metadata in a Parquet file
|
<p>With R, I created a Parquet file containing a <code>data.table</code> as main data, and another <code>data.table</code> as metadata.</p>
<pre><code>library(data.table)
library(arrow)
dt = data.table(x = c(1, 2, 3), y = c("a", "b", "c"))
dt2 = data.table(a = 22222, b = 45555)
attr(dt, "dt_meta") = dt2
tb = arrow_table(dt)
tb$metadata
write_parquet(tb, "file.parquet")
</code></pre>
<p>Attributes/metadata can be accessed easily when loading the Parquet file in R:</p>
<pre><code>dt = open_dataset("file.parquet")
dt$metadata$r$attributes$dt_meta
dt2 = read_parquet("file.parquet")
attributes(dt2)$dt_meta
</code></pre>
<p>Now I wonder if it is also possible to retrieve the data.table (or data.frame) from metadata of the Parquet file in Python.</p>
<p>Metadata can be accessed in Python with the pyarrow library, and the r field is there, but not correctly decoded.</p>
<pre><code>import pyarrow.parquet as pq
mt = pq.read_metadata("file.parquet")
metadata = mt.metadata[b'r']
metadata
</code></pre>
<p>Result:</p>
<pre><code>b'A\n3\n263169\n197888\n5\nUTF-8\n531\n2\n531\n3\n16\n2\n262153\n10\ndata.table\n262153\n10\ndata.frame\n22\n22\n254\n254\n16\n2\n262153\n1\nx\n262153\n1\ny\n787\n2\n14\n1\n22222\n14\n1\n45555\n1026\n1\n262153\n5\nnames\n16\n2\n262153\n1\na\n262153\n1\nb\n1026\n1\n262153\n9\nrow.names\n13\n2\nNA\n-1\n1026\n1\n262153\n5\nclass\n16\n2\n262153\n10\ndata.table\n262153\n10\ndata.frame\n1026\n1\n262153\n17\n.internal.selfref\n22\n22\n254\n254\n16\n2\n262153\n1\na\n262153\n1\nb\n254\n1026\n1023\n16\n3\n262153\n5\nclass\n262153\n17\n.internal.selfref\n262153\n7\ndt_meta\n254\n531\n2\n254\n254\n1026\n1023\n16\n2\n262153\n1\nx\n262153\n1\ny\n254\n1026\n1023\n16\n2\n262153\n10\nattributes\n262153\n7\ncolumns\n254\n'
</code></pre>
<p>Is it still an R attribute object, or another encoded object?</p>
<p>The names of the different attributes (e.g. <code>dt_meta</code>) can be read in this resulting string but is it possible to fully decode and parse it to retrieve the <code>dt_meta</code> table as a DataFrame?</p>
|
<python><r><parquet><pyarrow><apache-arrow>
|
2024-07-26 20:00:40
| 1
| 1,552
|
julien.leroux5
|
78,799,483
| 7,706,098
|
Can not iterate "cbar" or "cmins" in multiple violinplot in a single plot
|
<p>I am trying to change the colors of each violin when drawing multiple violin plots in a single figure. I can change the colors of the <em>bodies</em>, but I cannot change the colors of the <em>cbars</em>, <em>cmins</em>, and <em>cmaxes</em>. How do I do this?</p>
<pre><code>import matplotlib.pyplot as plt
mock_data= [
[0,1,2,3,4,5,6,7,8,9],
[1,2,3,4,5,6,7,8,9,10],
[2,3,4,5,6,7,8,9,10,11],
[3,4,5,6,7,8,9,10,11,12],
[4,5,6,7,8,9,10,11,12,13],
[5,6,7,8,9,10,11,12,13,14],
[6,7,8,9,10,11,12,13,14,15],
[7,8,9,10,11,12,13,14,15,16],
[8,9,10,11,12,13,14,15,16,17],
[9,10,11,12,13,14,15,16,17,18],
]
pos = [0,1,2,3,4,5,6,7,8,9]
colors = ["C0","C1","C2","C3","C4","C5","C6","C7","C8","C9"]
fig, ax = plt.subplots(1,1)
violin = ax.violinplot(mock_data,positions=pos)
for things in ["bodies","cbars","cmins","cmaxes"]:
for vp, co in zip(violin[things],colors):
vp.set_facecolor(co)
vp.set_edgecolor(co)
plt.show()
</code></pre>
<p>With above code, I get the following error</p>
<pre><code>Traceback (most recent call last):
File "/downloads/violin.py", line 25, in <module>
for vp, co in zip(violin[things],colors):
^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'LineCollection' object is not iterable
</code></pre>
<p>It seems that only <code>violin["bodies"]</code> is iterable, but <code>violin["cbars"]</code>, <code>violin["cmins"]</code>, and <code>violin["cmaxes"]</code> are not. Given this, how do I efficiently change the colors of the bar, min, and max lines?</p>
|
<python><matplotlib>
|
2024-07-26 18:24:50
| 1
| 301
|
Redshoe
|
78,799,460
| 674,039
|
Why does re._compile exist?
|
<p>Here is <a href="https://github.com/python/cpython/blob/v3.12.4/Lib/re/__init__.py#L226" rel="nofollow noreferrer"><code>re.compile</code></a>:</p>
<pre><code>>>> import re, inspect
>>> print(inspect.getsource(re.compile))
def compile(pattern, flags=0):
"Compile a regular expression pattern, returning a Pattern object."
return _compile(pattern, flags)
</code></pre>
<p>It just calls <a href="https://github.com/python/cpython/blob/v3.12.4/Lib/re/__init__.py#L280" rel="nofollow noreferrer"><code>re._compile</code></a> with the same arguments.</p>
<p>What's the point of that? Seems like it just makes a useless extra stack frame, which is relatively expensive in Python. There should be a slight performance improvement, and simpler tracebacks, to just move the body of <code>_compile</code> directly into <code>compile</code>.</p>
<p>What's the reason for this technique using an extra function call <code>re.compile</code> -> <code>re._compile</code>?</p>
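<p>One observable effect of <code>_compile</code> is its internal cache, which <code>re.match</code>, <code>re.search</code>, etc. also go through (CPython behaviour, checked on 3.12):</p>

```python
import re

# _compile caches compiled patterns, so compiling the same pattern
# (with the same flags) twice returns the very same Pattern object.
p1 = re.compile(r"\d+")
p2 = re.compile(r"\d+")
print(p1 is p2)  # True
```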
|
<python><python-re>
|
2024-07-26 18:14:29
| 0
| 367,866
|
wim
|
78,799,407
| 22,886,184
|
JavascriptException Stale element not found after (multiple) redirects
|
<p>I'm building a SeleniumBase scraper that takes a set of tasks and performs them on a website. After performing some actions on the site the script calls <code>driver.execute_script("return window.variable;")</code>. This works fine on all domains I've scraped even with any redirects after clicks.</p>
<p>Today I scraped a new website which includes pressing two buttons that both redirect you to a page (2 redirects total). After the redirects the script runs <code>execute_script</code> and throws a <code>selenium.common.exceptions.JavascriptException</code>.</p>
<pre><code>Traceback (most recent call last):
File "/home/victor/.pyenv/versions/3.12.3/envs/tagprotection-v3/lib/python3.12/site-packages/tenacity/__init__.py", line 478, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/victor/Webvitals/tagprotection-v3/tagprotection/worker.py", line 28, in __collect_datalayer
datalayer = sb.execute_script("return window.variable;")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/victor/.pyenv/versions/3.12.3/envs/tagprotection-v3/lib/python3.12/site-packages/seleniumbase/fixtures/base_case.py", line 3372, in execute_script
return self.driver.execute_script(script, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/victor/.pyenv/versions/3.12.3/envs/tagprotection-v3/lib/python3.12/site-packages/selenium/webdriver/remote/webdriver.py", line 414, in execute_script
return self.execute(command, {"script": script, "args": converted_args})["value"]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/victor/.pyenv/versions/3.12.3/envs/tagprotection-v3/lib/python3.12/site-packages/selenium/webdriver/remote/webdriver.py", line 354, in execute
self.error_handler.check_response(response)
File "/home/victor/.pyenv/versions/3.12.3/envs/tagprotection-v3/lib/python3.12/site-packages/selenium/webdriver/remote/errorhandler.py", line 229, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.JavascriptException: Message: javascript error: {"status":10,"value":"stale element not found in the current frame"}
</code></pre>
<p>I'm not quite sure why this JavaScript variable is reported as a stale element. Data is being pushed to it on every page (similar to Google's dataLayer).</p>
|
<python><selenium-webdriver><seleniumbase>
|
2024-07-26 17:58:06
| 0
| 537
|
Victor
|
78,799,303
| 10,181,236
|
read messages from public telegram channel using telethon
|
<p>I am using Telethon to automate some stuff from Telegram channels. I have already obtained my API key, hash, and token and I can start a new session using Telethon.</p>
<p>The problem is that when a new message arrives from certain selected channels, I want to print the text, but the code I wrote does not seem to work and I do not understand why.
It correctly enters the loop, printing "Listening...", but nothing is printed when a new message arrives.</p>
<p>This is my code</p>
<pre><code>import configparser
import asyncio
from telethon import TelegramClient, events
# Reading Configs
config = configparser.ConfigParser()
config.read("config.ini")
# Setting configuration values
api_id = int(config['Telegram']['api_id'])
api_hash = str(config['Telegram']['api_hash'])
phone = config['Telegram']['phone']
username = config['Telegram']['username']
channels_list = ["channel1", "channel2"] #these are public channel names taken from https://web.telegram.org/k/#@channelname for example
async def main():
client = TelegramClient('session_name', api_id, api_hash)
await client.start(phone)
@client.on(events.newmessage.NewMessage())
async def my_event_handler(event):
print("hello")
print(event)
sender = await event.get_sender()
if sender.username in channels_list:
channel = sender.username
# get last message of the channel
async for message in client.iter_messages(channel, limit=1):
print(message.text)
print("Listening...")
await client.run_until_disconnected()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
</code></pre>
|
<python><telegram><telethon>
|
2024-07-26 17:25:03
| 1
| 512
|
JayJona
|
78,799,154
| 10,335,909
|
How to filter Pandas Dataframe to integer values?
|
<p>I have a dataframe column with string values. I want to filter to the rows that contain an integer. I can do the below to find whether a value is numeric, but this returns floats as well.</p>
<pre><code>result = pd.to_numeric(df['col1'], errors='coerce').notnull()
</code></pre>
<p>'1' should be returned</p>
<p>'1.1' should not be returned.</p>
<p>How do I filter to integers (excluding floats)?</p>
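<p>To make the failure mode concrete with made-up sample data:</p>

```python
import pandas as pd

df = pd.DataFrame({"col1": ["1", "1.1", "foo", "42"]})

# pd.to_numeric happily parses floats too, so '1.1' slips through the mask.
result = pd.to_numeric(df["col1"], errors="coerce").notnull()
print(result.tolist())  # [True, True, False, True] -- but '1.1' should be False
```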
|
<python><pandas>
|
2024-07-26 16:40:20
| 4
| 1,098
|
Melissa Guo
|
78,798,926
| 276,193
|
Calling module from jupyter notebook inside of poetry project results in 'No module named' error
|
<p>I'm working on a module with poetry with a directory structure like</p>
<pre><code>./
/my_module
(module files)
/tests
test_thing.py
/jupyter
demo.ipynb
</code></pre>
<p>In the test files and jupyter notebooks there is import code like</p>
<pre><code>from my_module import thingy, other_thingy
</code></pre>
<p>Now from the project root <code>poetry run pytest</code> works without issue, but invoking jupyter lab via <code>poetry run jupyter lab</code> and then running the notebook results in <code>No module named my_module</code> error. Why is the module on the path for pytest and not jupyter?</p>
<p>Of course I can do the following, but then none of the other modules from the poetry env are available. How can I resolve this?</p>
<pre><code>import sys
sys.path.append('../')
</code></pre>
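<p>To compare the two environments, I have been printing the interpreter and import path from both a pytest run and a notebook cell (plain stdlib, nothing poetry-specific):</p>

```python
import sys

# Run this both in a test and in a notebook cell; if sys.executable
# or sys.prefix differ between the two, the notebook kernel is not
# using the poetry environment at all.
print(sys.executable)   # the interpreter actually running
print(sys.prefix)       # its environment root
print(sys.path[:5])     # first few places imports are resolved from
```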
|
<python><pytest><jupyter><python-poetry>
|
2024-07-26 15:40:17
| 1
| 16,283
|
learnvst
|
78,798,871
| 8,438,604
|
Using pymongo installed via virtualenv and Ansible returns an error
|
<p>Since it's good practice to install pip modules in virtual environment, I have the following task in place:</p>
<pre><code>- name: Install python packages
ansible.builtin.pip:
name:
- pymongo
virtualenv: /home/user/.venv
</code></pre>
<p>However, the task that is supposed to use pymongo returns an error: <code>ModuleNotFoundError: No module named 'pymongo'</code></p>
<p>This is the task:</p>
<pre><code>- name: Create database user
community.mongodb.mongodb_user:
login_user: root
login_password: 12345
database: somedb
name: some_user
password: 12345
state: present
roles:
- { db: "xxx", role: "dbOwner" }
- { db: "yyy", role: "dbOwner" }
</code></pre>
<p>How can I fix the Ansible playbook so that the pymongo installed in the virtual environment is found?
Also, if I run <code>pip list</code>, pymongo is not listed, which leads me to think that I'm missing a piece in the chain.</p>
|
<python><python-3.x><ansible><virtualenv>
|
2024-07-26 15:25:38
| 1
| 391
|
matteo-g
|
78,798,654
| 5,786,649
|
Python: trying to access instance attributes in self.__setattr__
|
<p>I was trying to create a <code>dataclass</code> with shorter aliases for longer fields. Since I wanted to be able to add new fields (with long and short names) without having to write a <code>@property</code> decorated "getter" and a <code>@short_name.setter</code> decorated "setter", I decided to use a dictionary mapping long names to short names, and implement custom <code>__getattr__</code> and <code>__setattr__</code> methods. However, my kernel crashed (when running the code in JupyterLab) / I got a <code>RecursionError</code> (when running the script). After some trial and error, it seems I cannot access any instance or class attribute via <code>self</code> inside the <code>__setattr__</code> method. My current (failing) implementation is</p>
<pre><code>from dataclasses import dataclass, field

@dataclass
class Foo:
very_long_name_of_a_number: int
very_long_name_of_a_string: str
short_name_map: dict = field(
default_factory=lambda: {
"number": "very_long_name_of_a_number",
"string": "very_long_name_of_a_string",
},
init=False,
repr=False,
)
def __getattr__(self, name):
if name in self._short_name_map:
return getattr(self, self._short_name_map[name])
raise AttributeError(
f"'{self.__class__.__name__}' object has no attribute '{name}'"
)
def __setattr__(self, name, value):
short_name = self.short_name_map[name]
self.__dict__[short_name ] = value
</code></pre>
<p>Two questions arise:</p>
<ol>
<li>Where does the problem come from? I see no evil in accessing attributes of self inside <code>.__setattr__</code>, but I am fairly sure that is the problem (renaming <code>__setattr__</code> solves the problem, as does deleting the reference to <code>self.short_name_map</code>).</li>
<li>How could I work around this problem? Or is there some underlying paradigm I did not consider?</li>
</ol>
<hr />
<p>For the record: this is the behaviour I wanted to have, but without having to write the two decorated functions for every field:</p>
<pre><code>@dataclass
class Foo:
very_long_name_of_a_number: int
@property
def number(self) -> int:
return self.very_long_name_of_a_number
@number.setter
def short_name(self, value: int):
self.very_long_name_of_a_number= value
</code></pre>
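<p>For what it's worth, I boiled the crash down to this minimal reproduction outside of dataclasses: any attribute read inside <code>__getattr__</code> that itself misses re-enters <code>__getattr__</code> without bound (the class name <code>Broken</code> is mine, just for illustration):</p>

```python
class Broken:
    def __getattr__(self, name):
        # 'mapping' is never set, so this attribute read re-enters
        # __getattr__ looking for 'mapping', which re-enters it again...
        return self.mapping[name]

b = Broken()
try:
    b.x
except RecursionError:
    print("RecursionError, same failure mode as my dataclass")
```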
|
<python><properties><python-dataclasses><setattr>
|
2024-07-26 14:40:03
| 1
| 543
|
Lukas
|
78,798,620
| 7,919,597
|
Why does my Pydantic model contain an extra attribute when populated from an object with extra=ignore?
|
<p>The title says it all:</p>
<p>Why does my Pydantic model contain an extra attribute when populated from an object with extra=ignore?</p>
<pre><code>from pydantic import BaseModel, ConfigDict
# pydantic 2.4.2
class Test(BaseModel):
a: int
model_config = ConfigDict(extra='ignore',
from_attributes=True)
class TestExtended(Test):
b: float
m = TestExtended(a=2, b=2)
# Why does m_1 contain the additional field "b" even though
# we try to instantiate Test with extra=ignore?
m_1 = Test.model_validate(m)
print(m_1)
</code></pre>
<p>The output shows that the model contains both attributes:</p>
<pre><code>>>> a=2 b=2.0
</code></pre>
<p>Any ideas what the reason for this behaviour is?</p>
|
<python><pydantic><pydantic-v2>
|
2024-07-26 14:31:01
| 1
| 7,233
|
Joe
|
78,798,569
| 20,920,790
|
How to add trigger for run dag into another dag (with decorators) in Airflow?
|
<p>I have 2 DAGs in the same dags folder:</p>
<ol>
<li>dag_update_database (dag_update_database.py)</li>
<li>dag_add_client_loyalty (dag_updade_clients_loyalty.py)</li>
</ol>
<p>I can run dag_add_client_loyalty with this code, but I get an error:</p>
<pre><code>[2024-07-29, 14:34:15 MSK] {task_command.py:423} INFO - Running <TaskInstance: dag_update_database.trigger_add_client_loyalty_dag manual__2024-07-29T11:25:59.801574+00:00 [running]> on host 4e2cf1ad7047
[2024-07-29, 14:34:15 MSK] {taskinstance.py:2510} INFO - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='d-chernovol' AIRFLOW_CTX_DAG_ID='dag_update_database' AIRFLOW_CTX_TASK_ID='trigger_add_client_loyalty_dag' AIRFLOW_CTX_EXECUTION_DATE='2024-07-29T11:25:59.801574+00:00' AIRFLOW_CTX_TRY_NUMBER='6' AIRFLOW_CTX_DAG_RUN_ID='manual__2024-07-29T11:25:59.801574+00:00'
[2024-07-29, 14:34:15 MSK] {taskinstance.py:2728} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/models/taskinstance.py", line 444, in _execute_task
result = _execute_callable(context=context, **execute_callable_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/models/taskinstance.py", line 414, in _execute_callable
return execute_callable(context=context, **execute_callable_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/decorators/base.py", line 241, in execute
return_value = super().execute(context)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/operators/python.py", line 200, in execute
return_value = self.execute_callable()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/operators/python.py", line 217, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/airflow/dags/dag_update_database.py", line 794, in trigger_add_client_loyalty_dag
trigger_dag_add_client_loyalty_task.execute(context={})
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/operators/trigger_dagrun.py", line 192, in execute
ti = context["task_instance"]
~~~~~~~^^^^^^^^^^^^^^^^^
KeyError: 'task_instance'
</code></pre>
<p>After this error, the task retries 4 additional times, so my second DAG gets triggered many times.
It seems I need to pass a proper context to avoid the error.</p>
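<p>Looking at the traceback, the failing line in <code>TriggerDagRunOperator.execute</code> is essentially a plain dict lookup on the context I pass in, and I pass an empty one, so it has to fail:</p>

```python
# The operator does roughly: ti = context["task_instance"],
# and I call execute(context={}), so the lookup raises.
context = {}
try:
    ti = context["task_instance"]
except KeyError as exc:
    print("KeyError:", exc)  # KeyError: 'task_instance'
```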
<p>My first dag:</p>
<pre><code>import datetime
from airflow.decorators import dag, task
from airflow.operators.dagrun_operator import TriggerDagRunOperator
from dag_add_client_loyalty import dag_add_client_loyalty
default_args = {
'owner': 'd-chernovol',
'depends_on_past': False,
'retries': 5,
'retry_delay': datetime.timedelta(minutes=1),
'start_date': datetime.datetime(2024, 6, 30)
}
schedule_interval = '*/20 * * * *'
@dag(default_args=default_args, schedule_interval=schedule_interval, catchup=False, concurrency=4)
def dag_update_database():
@task
def get_salons(api, tries: int):
...
return result_from_salons_api
@task
def get_clients(api, tries: int, result_from_salons_api: list):
...
return clients
@task
def trigger_add_client_loyalty_dag():
trigger_dag_add_client_loyalty_task = TriggerDagRunOperator(
task_id='run_dag_add_client_loyalty',
trigger_dag_id='dag_add_client_loyalty',
dag=dag_update_database
)
trigger_dag_add_client_loyalty_task.execute(context={})
get_salon_task = get_salons(api, 10)
get_clients_task = get_clients(api, 10, get_salon_task)
run_add_client_loyalty_dag_task = trigger_add_client_loyalty_dag()
get_salon_task.set_downstream(get_clients_task)
get_clients_task.set_downstream(run_add_client_loyalty_dag_task)
dag_update_database = dag_update_database()
</code></pre>
<p>Second dag code:</p>
<pre><code>import datetime

from airflow.decorators import dag, task
default_args = {
'owner': 'd-chernovol',
'depends_on_past': False,
'retries': 5,
'retry_delay': datetime.timedelta(minutes=1),
'start_date': datetime.datetime(2024, 6, 30)
}
@dag(default_args=default_args, schedule_interval=None, catchup=False, concurrency=4)
def dag_add_client_loyalty():
@task
def get_salons_list():
return result
@task
def get_records_from_api():
return result
get_salon_task = get_salons_list(api)
get_records_from_api_task = get_records_from_api(api, get_salon_task)
get_salon_task.set_downstream(get_records_from_api_task)
dag_add_client_loyalty = dag_add_client_loyalty()
</code></pre>
|
<python><airflow>
|
2024-07-26 14:21:20
| 2
| 402
|
John Doe
|
78,798,406
| 16,883,182
|
What's the simplest way to annotate a function/method parameter to accept an overlapping Union of types?
|
<p>In all my functions that allow an overlapping union of types, I already have special handling that checks all parameters to ensure that there are no types which don't support operations with each other.</p>
<p>In my specific use-case, I'm writing functions that accept any of <code>int</code>, <code>float</code>, <code>fractions.Fraction</code>, and <code>decimal.Decimal</code> as input arguments, and since <code>decimal.Decimal</code> <em>only</em> supports mathematical operations with instances of itself and <code>int</code>, I wrote a function that implements the afore-mentioned special handling by first checking if there's <em>any</em> argument which is a <code>decimal.Decimal</code> object. And if there is, then the function re-scans all arguments to confirm that all of them are either <code>decimal.Decimal</code> or <code>int</code>. If it finds a violation, then it returns <code>False</code> to notify the caller that incompatible arguments are present (else it returns <code>True</code>). Below are the two functions that implement this check. All my functions & methods call the latter in a guard-clause at some point, and raise a <code>TypeError</code> if the return value is <code>False</code>:</p>
<pre class="lang-python prettyprint-override"><code>from fractions import Fraction
import collections.abc
import decimal as dec
import typing as tp
Real = tp.Union[int, float, Fraction]
DecLike = tp.Union[int, dec.Decimal]
def _check_has_decimals(args: tp.Tuple[tp.Any, ...]) -> collections.abc.Iterator[bool]:
"""
Returns an iterator yielding whether each argument is an instance of 'decimal.Decimal'.
Can be passed to 'any()'.
"""
return map(lambda param: isinstance(param, dec.Decimal), args)
def _check_arg_types(
*args: tp.Any,
has_decimals: tp.Union[tp.Callable[[tp.Tuple[tp.Any, ...]],
collections.abc.Iterator[bool]],
bool] = _check_has_decimals
) -> bool:
"""Returns a boolean indicating if the types of the arguments are valid."""
if not isinstance(has_decimals, bool):
# Expand iterator to a simple 'bool'.
has_decimals = any(has_decimals(args))
if has_decimals:
return all(map(lambda param: isinstance(param, tp.get_args(DecLike)), args))
else:
return all(map(lambda param: isinstance(param, tp.get_args(Real)), args))
</code></pre>
<p>The reason for the callable default argument is so if I'm writing a class that accepts mixed arguments, it can manually call <code>_check_has_decimals</code> and store the result as an instance attribute so that when any of its methods are called, it can supply <code>_check_arg_types</code> with the pre-computed value instead of letting the function re-compute it every time. But if being called from a function, then that function just calls <code>_check_arg_types</code> without the optional argument, as the same set of arguments will only have to be checked once anyway.</p>
<p>So in a function, for example, I would do this:</p>
<pre class="lang-python prettyprint-override"><code>def add_stuff(a, b, c, d):
if not _check_arg_types(a, b, c, d):
err_msg = ("argument types other than 'int' or 'Decimal' aren't allowed if "
"any argument is a 'Decimal'!")
raise TypeError(err_msg)
return a + b + c + d
</code></pre>
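<p>For context, this is the runtime incompatibility the guard protects against: <code>Decimal</code> mixes with <code>int</code> but raises with <code>float</code>:</p>

```python
from decimal import Decimal

print(Decimal("1") + 2)      # prints 3 (a Decimal): int operands are fine
try:
    Decimal("1") + 0.5       # float operands are rejected
except TypeError as exc:
    print("TypeError:", exc)
```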
<p>And now the problem. I can't get the type checker (PyRight in my case) to understand that I'm preventing the incompatible types from colliding in my code! If I simply annotate the above function's signature as:</p>
<pre class="lang-python prettyprint-override"><code>Numeric = tp.Union[Real, DecLike]
def add_stuff(a: Numeric, b: Numeric, c: Numeric, d: Numeric) -> Numeric:
...
</code></pre>
<p>...then the nested union just gets flattened to <code>tp.Union[int, float, Fraction, dec.Decimal]</code>, and when I type-check the code, I would get an error on the line <code>return a + b</code> because the type-checker thinks it's possible for... say <code>a</code> to be <code>float</code> and <code>b</code> to be <code>Decimal</code>. And nothing I do can make it understand better. The only solution seems to be to not type annotate any of the functions/methods in my code that accept overlapping unions, and lose the benefit of static type-checking completely (for those functions/methods/classes, at least).</p>
<p>And here is the list of things I've tried, with help from the Python chatroom. None of them work completely for my use-case:</p>
<ol>
<li><p>Over-loading a function 2 times, with the first signature's parameter only accepting <code>Real</code>s and with return type annotated as <code>Real</code>, and the second signature with everything swapped out for <code>DecLike</code>. Also, make <code>Real</code> and <code>DecLike</code> generic types instead of <code>Union</code>s. It works if you suppress the <code>overload-overlap</code> error on the line of the first overload:</p>
<pre><code>Real = tp.TypeVar("Real", bound=tp.Union[int, float, Fraction])
DecLike = tp.TypeVar("DecLike", bound=tp.Union[int, dec.Decimal])
@tp.overload
def add_stuff(a: Real, b: Real) -> Real: # type: ignore[overload-overlap]
...
@tp.overload
def add_stuff(a: DecLike, b: DecLike) -> DecLike:
...
def add_stuff(a, b):
return a + b
print(tp.reveal_type(add_stuff(1, 1)))
print(tp.reveal_type(add_stuff(1, 1.5)))
print(tp.reveal_type(add_stuff(Fraction(1, 2), 0.5)))
print(tp.reveal_type(add_stuff(3, dec.Decimal("0.5"))))
print(tp.reveal_type(add_stuff(dec.Decimal("1"), dec.Decimal("1"))))
</code></pre>
<p>But the problem with this is that it doesn't work for class instance methods which don't take any arguments and return a value based on an instance attribute. If I tried to define, say:</p>
<pre><code>class Foo:
...
@tp.overload
def return_something(self) -> Real:
...
@tp.overload
def return_something(self) -> DecLike:
...
def return_something(self):
return self.some_instance_attribute
</code></pre>
<p>then it would trigger the error <code>A function returning TypeVar should receive at least one argument containing the same TypeVar [type-var]</code>.</p>
</li>
<li><p>Make the class which supports overlapping unions a generic class, and then create an overloaded helper function which takes constructor arguments, uses them unchanged to create a new instance of the class, and has its return type annotated as a subscription of the generic class.</p>
<pre><code>T = tp.TypeVar("T")
TReal = tp.TypeVar("TReal", bound=tp.Union[int, float, Fraction])
TDec = tp.TypeVar("TDec", bound=tp.Union[int, dec.Decimal])
class C(tp.Generic[T]):
def __init__(self, foo: T, bar: T) -> None:
self.foo, self.bar = foo, bar
def return_something(self) -> T:
return self.foo + self.bar
@tp.overload
def from_number(a: TReal, b: TReal) -> C[TReal]:
...
@tp.overload
def from_number(a: TDec, b: TDec) -> C[TDec]:
...
def from_number(a, b):
return C(a, b)
print(tp.reveal_type(from_number(1, 1)))
print(tp.reveal_type(from_number(1, 1.5)))
print(tp.reveal_type(from_number(Fraction(1, 2), 0.5)))
print(tp.reveal_type(from_number(3, dec.Decimal("0.5"))))
print(tp.reveal_type(from_number(dec.Decimal("1"), dec.Decimal("1"))))
</code></pre>
<p>This gives an <code>Operator "X" not supported for types "T@C" and "T@C"</code> whenever I try to do any operations on the stored instance attributes. So this is basically the same as just annotating as <code>tp.Union[int, float, Fraction, dec.Decimal]</code> in that I have to add a <code># type: ignore[reportOperatorIssue]</code> after <em>every</em> operation between the arguments.</p>
</li>
</ol>
<p>Is there any solution (preferably simple) that actually works for my use case? Or am I trying to do something impossible?</p>
|
<python><generics><python-typing>
|
2024-07-26 13:45:19
| 0
| 315
|
I Like Python
|
78,798,373
| 850,781
|
Numpy 2 has a separate float64(nan) - how to get rid of it?
|
<p>It appears that Numpy 2 introduced its own separate <code>float64(nan)</code> - different from <code>np.nan</code>. Now regression tests fail because while</p>
<pre><code>>>> np.nan is np.nan
True
</code></pre>
<p>we now have</p>
<pre><code>>>> np.nan is np.float64("nan")
False
</code></pre>
<p>(also json cannot serialize those new <code>NaN</code>s).</p>
<p>Why did they make this change?</p>
<p>Is there a way to cast numpy floats to normal floats automatically?</p>
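<p>For comparison, plain Python floats show the same identity pitfall, which makes me suspect the <code>is</code> comparisons in my regression tests were fragile to begin with:</p>

```python
import math

a = float("nan")
b = float("nan")
print(a is b)   # False: two distinct float objects
print(a == b)   # False: NaN never compares equal, even to itself
print(math.isnan(a) and math.isnan(b))  # True: the robust check
```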
|
<python><numpy><nan>
|
2024-07-26 13:38:26
| 1
| 60,468
|
sds
|
78,798,250
| 3,070,421
|
Plotly updatemenus - Only update specific parameters
|
<p>I'm looking for a way to have two updatemenu buttons on a plotly figure. One changes the data and the y-axis title and one switches from linear to log scale. Broadly the code below works but I lose something depending on which method I use.</p>
<p>If the <code>buttons</code> method is <code>update</code>, then when I switch parameters it defaults back to linear scale. If I use <code>restyle</code>, the y-axis title stops updating. Is there a method to only partially update that I've missed?</p>
<pre><code>import plotly.graph_objects as go
import pandas as pd
def main():
figure = go.Figure()
df = pd.DataFrame([['sample 1', 1, 5, 10],
['sample 2', 2, 20, 200],
],
columns=['sample id', 'param1', 'param2', 'param3']
)
options = ['param1', 'param2', 'param3']
figure.add_trace(go.Bar(x=df['sample id'], y=df[options[0]]))
figure.update_yaxes(title=options[0])
buttons = []
for option in options:
buttons.append({'method': 'restyle', ### this can be update or restyle, each has different issues
'label': option,
'args': [{'y': [df[option]],
'name': [option]},
{'yaxis': {'title': option}},
[0]]
})
scale_buttons = [{'method': 'relayout',
'label': 'Linear Scale',
'args': [{'yaxis': {'type': 'linear'}}]
},
{'method': 'relayout',
'label': 'Log Scale',
'args': [{'yaxis': {'type': 'log'}}]
}
]
figure.update_layout(yaxis_title=options[0],
updatemenus=[dict(buttons=buttons,
direction='down',
x=0,
xanchor='left',
y=1.2,
),
dict(buttons=scale_buttons,
direction='down',
x=1,
xanchor='right',
y=1.2,
)],
)
figure.show()
if __name__ == '__main__':
main()
</code></pre>
|
<python><plotly>
|
2024-07-26 13:14:34
| 1
| 2,179
|
Sobigen
|
78,798,106
| 12,057,138
|
AWS Lambda python log file is not created
|
<p>I have a Python AWS lambda with a logger that generates two different log files:</p>
<pre><code>LOG = f"/tmp/{uuid.uuid1()}_log.log"
SLOW_LOG = f"/tmp/{uuid.uuid1()}_slow.log"
logging.basicConfig(level=logging.INFO,
format="%(levelname)s - %(message)s",
handlers=[logging.FileHandler(LOG),
logging.StreamHandler()])
slow_logger = logging.getLogger("slow_requests")
slow_logger.setLevel(logging.INFO)
file_handler = logging.FileHandler(SLOW_LOG)
formatter = logging.Formatter("%(levelname)s - %(message)s")
file_handler.setFormatter(formatter)
slow_logger.addHandler(file_handler)
slow_logger.propagate = False
</code></pre>
<p>When the code was executed from the command line, it created the two different logs and I had no issue. However, when I implemented it as a Lambda, LOG is created empty while SLOW_LOG is fine. I've been losing my mind over this... can anyone help?</p>
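<p>In a fresh local process the file handler does work; my current suspicion (unverified) is that the Lambda runtime pre-configures the root logger, which would make <code>basicConfig</code> a silent no-op for LOG, and <code>force=True</code> is what I am experimenting with:</p>

```python
import logging
import os
import tempfile

log_path = os.path.join(tempfile.gettempdir(), "check_log.log")
# force=True removes any pre-existing root handlers first, which is
# exactly the situation I suspect inside the Lambda runtime.
logging.basicConfig(level=logging.INFO,
                    format="%(levelname)s - %(message)s",
                    handlers=[logging.FileHandler(log_path, mode="w")],
                    force=True)
logging.info("hello")
logging.shutdown()
with open(log_path) as fh:
    print(fh.read())  # INFO - hello
```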
|
<python><logging><aws-lambda>
|
2024-07-26 12:40:09
| 0
| 688
|
PloniStacker
|
78,797,584
| 12,350,600
|
Python nested dictionary to Tree
|
<p>I am trying to display my nested dictionary as a Tree.</p>
<pre><code>data = {"L1" :
{"sub L1" :
"sub sub L1"},
"L2" :
"sub L2",
"L3" :
{"sub L3" :
{"sub sub L3" :
"sub sub sub L3"}}}
</code></pre>
<p>As you can see, it's all text data and there is no real correlation between any of it.</p>
<p>I have tried networkx and dendrograms but they seem to be breaking as I have a nested dictionary.</p>
<p>Is there a better way to do this?</p>
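<p>For reference, a plain-text indented view is easy with a small recursive helper (shown below); what I'm after is a proper graphical tree, though:</p>

```python
def tree_lines(node, indent=0):
    """Render a nested dict as a list of indented text lines."""
    pad = "    " * indent
    lines = []
    if isinstance(node, dict):
        for key, value in node.items():
            lines.append(pad + str(key))
            lines.extend(tree_lines(value, indent + 1))
    else:
        lines.append(pad + str(node))
    return lines

data = {"L1": {"sub L1": "sub sub L1"}, "L2": "sub L2"}
print("\n".join(tree_lines(data)))
```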
|
<python><dictionary><plot><tree>
|
2024-07-26 10:38:50
| 2
| 394
|
Kruti Deepan Panda
|
78,797,526
| 2,485,799
|
Is using a Pandas Dataframe as a read-only table scalable in a Flask App?
|
<p>I'm developing a small website in Flask that relies on data from a CSV file to output data to a table on the frontend using JQuery.</p>
<p>The user would select an ID from a drop-down on the front-end, then a function would run on the back-end where the ID would be used as a filter on the table to return data. The data returned would usually just be a single column from the dataframe as well.</p>
<p>The usual approach, from my understanding, would be to load the CSV data into a SQLite DB on startup and query using SQL methods in python at runtime.</p>
<p>However, in my case, the table is 15MB in size (214K rows) and will never grow past that point. All the data will stay as-is for the duration of the App's lifecycle.</p>
<p>As such, would it be easier and less hassle to just load the dataframe table into memory and just filter on a copy of it when requests come in? Is that scalable or am I just kicking a can down the road?</p>
<p>Example:</p>
<pre><code>import os

import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)
dir_path = os.path.abspath(os.path.dirname(__file__))
with app.app_context():
print("Writing DB on startup")
query_df = pd.read_csv(dir_path+'/query_file.csv')
@app.route('/getData', methods=["POST"])
def get_data():
id = request.get_json()
print("Getting rows....")
data_list = sorted(set(query_df[query_df['ID'] == id]['Name'].tolist()))
return jsonify({'items': data_list, 'ID': id})
</code></pre>
<p>This may be a tad naive on my end but I could not find a straight answer for my particular use-case.</p>
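<p>One middle ground I'm considering is building a plain dict index of ID to sorted names once at startup, so each request is a single lookup instead of a dataframe scan; a sketch with made-up rows:</p>

```python
from collections import defaultdict

def build_index(rows):
    """rows: iterable of (id, name) pairs, e.g. from csv.reader."""
    index = defaultdict(set)
    for id_, name in rows:
        index[id_].add(name)
    # sort once at startup so requests return ready-to-serve lists
    return {id_: sorted(names) for id_, names in index.items()}

index = build_index([(1, "beta"), (1, "alpha"), (2, "gamma"), (1, "alpha")])
print(index[1])  # ['alpha', 'beta']
```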
|
<python><pandas><flask><scalability>
|
2024-07-26 10:29:24
| 3
| 6,883
|
GreenGodot
|
78,797,499
| 24,191,255
|
Cubic spline interpolation in Python - why is it not working?
|
<p>I would like to apply a 2nd order low-pass Butterworth filter on my data, and then use cubic spline interpolation for resampling at every 1 meter in Python.</p>
<p>I tried to prevent having non-finite values but I still receive the following ValueError:</p>
<pre class="lang-none prettyprint-override"><code>cs_v = CubicSpline(distance, filtered_v)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^`
raise ValueError("`x` must contain only finite values.")
ValueError: `x` must contain only finite values.
</code></pre>
<p>Here are the relevant parts of my code:</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.signal import butter, filtfilt
from scipy.interpolate import CubicSpline, interp1d
import matplotlib.pyplot as plt
import plotly.graph_objects as go
# Butterworth filter
def butterworth_filter(data, cutoff, fs, order=2):
nyquist = 0.5 * fs
normal_cutoff = cutoff / nyquist
b, a = butter(order, normal_cutoff, btype='low', analog=False)
y = filtfilt(b, a, data)
return y
# ensuring data arrays are finite
def ensure_finite(data):
nans = np.isnan(data) | np.isinf(data)
if np.any(nans):
interp_func = interp1d(np.arange(len(data))[~nans], data[~nans], kind='linear', fill_value="extrapolate")
data[nans] = interp_func(np.arange(len(data))[nans])
return data
def handle_infinite(data, t):
nans = np.isnan(data) | np.isinf(data)
if np.any(nans):
valid_mask = ~nans
if valid_mask.sum() < 2:
raise ValueError("No valid data.")
interp_func = interp1d(t[valid_mask], data[valid_mask], kind='linear', fill_value="extrapolate")
data[nans] = interp_func(t[nans])
return data
# ...
data = pd.read_excel(r"xy")
t = data['Time'].values
x = data['X'].values
y = data['Y'].values
v = data['speed'].values
z = data['altitude'].values
a = data['acceleration'].values
distance = data['distance'].values
# Set initial NaN value in distance to 0
if np.isnan(distance[0]):
distance[0] = 0
# Ensuring distance array is increasing
sorted_indices = np.argsort(distance)
distance = distance[sorted_indices]
x = x[sorted_indices]
y = y[sorted_indices]
v = v[sorted_indices]
z = z[sorted_indices]
a = a[sorted_indices]
# Ensuring data arrays are finite
v = ensure_finite(v)
a = ensure_finite(a)
z = ensure_finite(z)
x = ensure_finite(x)
y = ensure_finite(y)
# Butterworth filter
fs = 1 / (t[1] - t[0]) # sampling frequency
cutoff = 0.3 # frequency
filtered_v = butterworth_filter(v, cutoff, fs)
filtered_a = butterworth_filter(a, cutoff, fs)
filtered_z = butterworth_filter(z, cutoff, fs)
filtered_v = handle_infinite(filtered_v, t)
filtered_a = handle_infinite(filtered_a, t)
filtered_z = handle_infinite(filtered_z, t)
print(filtered_v)
print(filtered_z)
# Resampling
distance_new = np.arange(0, distance[0], 1) # at every 1 meter
cs_v = CubicSpline(distance, filtered_v)
cs_a = CubicSpline(distance, filtered_a)
cs_z = CubicSpline(distance, filtered_z)
cs_x = CubicSpline(distance, x)
cs_y = CubicSpline(distance, y)
v_cubic = cs_v(distance_new)
a_cubic = cs_a(distance_new)
z_cubic = cs_z(distance_new)
x_cubic = cs_x(distance_new)
y_cubic = cs_y(distance_new)
</code></pre>
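<p>To narrow this down, I wrote a stdlib-only check of the two things <code>CubicSpline</code> requires of <code>x</code> (all values finite, strictly increasing); I notice I never run <code>ensure_finite</code> on <code>distance</code> itself, which I now suspect is the problem:</p>

```python
import math

def diagnose_x(xs):
    """Return (non-finite indices, non-increasing indices) for x."""
    non_finite = [i for i, v in enumerate(xs) if not math.isfinite(v)]
    not_increasing = [i for i in range(1, len(xs))
                      if xs[i] <= xs[i - 1]]
    return non_finite, not_increasing

print(diagnose_x([0.0, 1.0, 1.0, float("nan"), 3.0]))  # ([3], [2])
```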
|
<python><interpolation><spline><butterworth>
|
2024-07-26 10:23:50
| 1
| 606
|
MΓ‘rton HorvΓ‘th
|
78,797,378
| 19,520,266
|
Dynamic numbers of datasets with facebook Hydra
|
<p>I am currently working on a data science project where I can use different datasets, but on top of that I can use some of them in a dynamic manner (e.g I only use dataset1, or dataset1+dataset2, or dataset2+dataset3 etc).</p>
<p>Currently, I have one yaml file for each of these datasets (containing the necessary information to initialize those datasets in Pytorch). I'd like to be able to select one or several datasets from a parent config (e.g a train.yaml file).</p>
<p>The issue is that currently I am not sure how to do it with Hydra.</p>
<p>My config structure is as follow :</p>
<pre><code>.
├── train.yaml
└── datasets
    ├── all_datasets.yaml
    ├── dataset1.yaml
    ├── dataset2.yaml
    └── dataset3.yaml
</code></pre>
<p>So I thought I could do something like this in the train yaml file :</p>
<pre><code># train.yaml
defaults :
- datasets: all_datasets
datasets :
- ${datasets.dataset1}
- ${datasets.dataset2}
</code></pre>
<p>But it does not work because I cannot a merge a DictConfig with a ListConfig.</p>
<p>The only way I found was to use a nested defaults list, such as</p>
<pre><code># train.yaml
defaults :
- datasets:
- datasets1
- datasets2
</code></pre>
<p>But with this approach, since it is in the defaults list, I cannot override it in a simple way, and anyway I'd like not to have it inside the defaults list if possible.</p>
<p>An ugly way would be to create a specific .yaml file for each possible dataset combination, but that would be impossible when I have e.g 10 datasets and anyway this is a problem Hydra is designed to solve normally...</p>
<p>Any advice ?</p>
|
<python><fb-hydra>
|
2024-07-26 09:57:06
| 1
| 303
|
Lelouch
|
78,797,187
| 188,331
|
BertTokenizer vocab_size remains unchanged after adding tokens
|
<p>I am using HuggingFace <code>BertTokenizer</code> and adding some tokens to it. Here is the code:</p>
<pre><code>from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('fnlp/bart-base-chinese')
print(tokenizer)
</code></pre>
<p>which outputs:</p>
<pre><code>BertTokenizer(name_or_path='fnlp/bart-base-chinese', vocab_size=51271, model_max_length=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': '[CLS]', 'eos_token': '[EOS]', 'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'}, clean_up_tokenization_spaces=True), added_tokens_decoder={
0: AddedToken("[PAD]", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
100: AddedToken("[UNK]", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
101: AddedToken("[CLS]", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
102: AddedToken("[SEP]", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
103: AddedToken("[MASK]", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
104: AddedToken("[EOS]", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}
</code></pre>
<p>The vocabulary size is 51271. I add a token using <code>.add_tokens()</code> function:</p>
<pre><code>tokenizer.add_tokens(["new_token"])
</code></pre>
<p>and it returns the number of added tokens, which is <code>1</code>. If I execute the above line one more time, it will return <code>0</code>, proving the token is indeed added to the tokenizer.</p>
<p>But when I print <code>tokenizer.vocab_size</code>, it is still 51271, the same as before. However, <code>len(tokenizer)</code> is 51272, i.e. larger by 1.</p>
<p>How do I update the number of <code>vocab_size</code> after adding the new tokens (and save as a new tokenizer)?</p>
|
<python><huggingface-transformers><huggingface><huggingface-tokenizers>
|
2024-07-26 09:14:28
| 1
| 54,395
|
Raptor
|
78,797,161
| 18,769,241
|
Can I copy paste Python 2.7 packages from lib/site-packages from Windows to Ubuntu?
|
<p>I am moving to Ubuntu 22.04 from Windows 10 and I was wondering if I could skip reinstalling all the packages I had installed on my Windows 10 machine under Python 2.7. Is it possible to save the package folders and paste them into the Python installation folder on Ubuntu?</p>
|
<python><windows><ubuntu><site-packages>
|
2024-07-26 09:08:21
| 1
| 571
|
Sam
|
78,797,159
| 5,423,259
|
Polygons elongation ratio calculation with geopandas - problem with creating a geodataframe with results
|
<p>I have a polygon shapefile (actually, there are 170 such files) and want to calculate the elongation ratio for each polygon. The elongation ratio is the ratio of the short to the long side of the minimum rotated rectangle around the polygon. I have the following script, which works well up to the point where I want to create a geodataframe from the rectangles calculated in the for loop
(the line causing the error is <code>newgdf = gpd.GeoDataFrame(newData)</code>)
where I get this error: <code>GeometryTypeError: Unknown geometry type: 'featurecollection'</code>.
It seems that each rectangle is a geoseries with one element. Maybe they should be concatenated somehow into a single series before being passed to <code>GeoDataFrame()</code>? How to create a geodataframe from <code>mrrs</code>?</p>
<p><strong>Further explanations; why use a for loop?</strong>
Some polygons have corrupted geometry (I got errors when calculating the minimum rotated rectangle in QGIS). I'm afraid that if I apply <code>minimum_rotated_rectangle()</code> to the full geodataframe, the bad polygons may be skipped. The resulting elongation table will then be shorter than the original shapefile and have shifted row IDs, and I will not be able to correctly join the elongation table with the original shapefile. That is why I decided to use a for loop and try one polygon at a time to keep track of the id.</p>
<p>Any suggestions on how this can be done more rationally will be also appreciated.</p>
<p>The script:</p>
<pre><code>import geopandas as gpd
from shapely.geometry import Polygon, LineString
gdf = gpd.read_file('original polygons.shp')
# Add field with unique id
gdf["shapeid"] = gdf.index
shapes = list(gdf.shapeid.unique())
mrrs = []
ids = []
elongs = []
for shape in shapes:
gdfSubset = gdf[gdf['shapeid'] == shape]
try:
# Create minimum_rotated_rectangle
mrr = gdfSubset.minimum_rotated_rectangle()
coords = mrr.geometry.get_coordinates()
x = coords['x'].tolist()
y = coords['y'].tolist()
poly = Polygon(list(zip(x, y)))
# get the edges of a polygon as LineStrings based on
# https://stackoverflow.com/a/68996466/5423259
b = poly.boundary.coords
linestrings = [LineString(b[k:k+2]) for k in range(len(b) - 1)]
lengths = [int(ls.length) for ls in linestrings]
# calculate elongation ratio (E) as:
# E = 1 - S / L, where S is the short-axis length, and L is the long-axis length
elong = 1 - (min(lengths) / max(lengths))
elongs.append(elong)
mrrs.append(mrr)
ids.append(shape)
except:
pass
newData = {'shapeid': ids, 'elong': elongs, 'geometry': mrrs}
newgdf = gpd.GeoDataFrame(newData)
# export rectangles for checking purposes
newgdf.to_file('mrrs.shp')
# skip geometries and join the elongation field to the original shapes
newgdf = newgdf.drop('geometry', axis=1)
finalgdf = gdf.merge(newgdf, left_on='shapeid', right_on='shapeid', how='left')
# export original shapes with elongation field joined
finalgdf.to_file('final.shp')
</code></pre>
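<p>Side note: the elongation formula itself can be checked in isolation. The sketch below is a stdlib-only illustration (not using geopandas) that assumes the rectangle is given as its five closed-ring corner coordinates:</p>

```python
import math

def elongation(ring):
    # ring: closed list of (x, y) rectangle corners, first point repeated last
    sides = [math.dist(ring[k], ring[k + 1]) for k in range(len(ring) - 1)]
    # E = 1 - S / L, with S the short side and L the long side
    return 1 - min(sides) / max(sides)

print(elongation([(0, 0), (4, 0), (4, 2), (0, 2), (0, 0)]))  # 0.5
```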
|
<python><geopandas><shapely>
|
2024-07-26 09:07:26
| 1
| 358
|
ABC
|
78,797,091
| 9,640,238
|
Replace img tag with in-line SVG with BeautifulSoup
|
<p>I have an HTML file produced by pandoc, where SVG illustrations have been embedded. The SVG content is encoded in base64 and included in the <code>src</code> attribute of <code>img</code> elements. It looks like this:</p>
<pre class="lang-html prettyprint-override"><code><figure>
<img role="img" aria-label="Figure 1" src="data:image/svg+xml;base64,<base64str>" alt="Figure 1" />
<figcaption aria-hidden="true">Figure 1</figcaption>
</figure>
</code></pre>
<p>I'd like to replace the <code>img</code> element by the decoded SVG string, with BeautifulSoup. So here's what I do:</p>
<pre class="lang-py prettyprint-override"><code>from bs4 import BeautifulSoup
import base64
with open("file.html") as f:
soup = BeautifulSoup(f, "html.parser")
# get all images
images = soup.find_all("img")
# try with the first one
# decode the SVG string from the src attribute
svg_str = base64.b64decode(images[0]["src"].split(",")[1]).decode()
# replace the tag with the string
images[0].replace_with(soup.new_tag(svg_str))
</code></pre>
<p>However, <code>images[0]</code> remains unchanged, although no error is returned. I've looked at examples on the Internet, but I can't figure out what I'm doing wrong.</p>
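<p>For what it's worth, <code>soup.new_tag(svg_str)</code> creates a tag whose <em>name</em> is the whole SVG string; replacing the element with a parsed fragment, e.g. <code>images[0].replace_with(BeautifulSoup(svg_str, "html.parser"))</code>, is one usual approach. The decoding step itself can be verified in isolation with the stdlib only (the sample SVG below is made up):</p>

```python
import base64

# Round-trip the data-URI encoding used in the src attribute
svg = '<svg xmlns="http://www.w3.org/2000/svg"><circle r="5"/></svg>'
src = "data:image/svg+xml;base64," + base64.b64encode(svg.encode()).decode()
decoded = base64.b64decode(src.split(",")[1]).decode()
print(decoded == svg)  # True
```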
|
<python><html><svg><beautifulsoup>
|
2024-07-26 08:48:04
| 1
| 2,690
|
mrgou
|
78,796,974
| 5,629,527
|
SHAP values for linear model different from those calculated manually
|
<p>I train a linear model to predict house price, and then I compare the Shapley values calculation manually vs the values returned by the <code>SHAP</code> library and they are slightly different.</p>
<p>My understanding is that for linear models the Shapley value is given by:</p>
<pre><code>coeff * features for obs - coeffs * mean(features in training set)
</code></pre>
<p>Or as stated in the SHAP documentation: <code>coef[i] * (x[i] - X.mean(0)[i])</code>, where i is one feature.</p>
<p>The question is, why does SHAP return different values from the manual calculation?</p>
<p>Here is the code:</p>
<pre><code>import pandas as pd
from sklearn.datasets import fetch_california_housing
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import shap
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X = X.drop(columns = ["Latitude", "Longitude", "AveBedrms"])
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0,
)
scaler = MinMaxScaler().set_output(transform="pandas").fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
linreg = LinearRegression().fit(X_train, y_train)
coeffs = pd.Series(linreg.coef_, index=linreg.feature_names_in_)
X_test.reset_index(inplace=True, drop=True)
obs = 6188
# manual shapley calculation
effect = coeffs * X_test.loc[obs]
effect - coeffs * X_train.mean()
</code></pre>
<p>Which returns:</p>
<pre><code>MedInc 0.123210
HouseAge -0.459784
AveRooms -0.128162
Population 0.032673
AveOccup -0.001993
dtype: float64
</code></pre>
<p>And the SHAP library returns something slightly different:</p>
<pre><code>explainer = shap.LinearExplainer(linreg, X_train)
shap_values = explainer(X_test)
shap_values[obs]
</code></pre>
<p>Here the result:</p>
<pre><code>.values =
array([ 0.12039244, -0.47172515, -0.12767778, 0.03473923, -0.00251017])
.base_values =
2.0809714707337523
.data =
array([0.25094137, 0.01960784, 0.06056066, 0.07912217, 0.00437137])
</code></pre>
<p>It is set to ignore interactions:</p>
<pre><code>explainer.feature_perturbation
</code></pre>
<p>returning</p>
<pre><code>'interventional'
</code></pre>
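<p>One thing worth ruling out: with an interventional masker, SHAP may summarize the background data (for example by subsampling it) rather than using the exact mean of the full <code>X_train</code>, which would produce small discrepancies like these. The linear-SHAP formula itself, <code>coef[i] * (x[i] - mean[i])</code>, is easy to check with a stdlib sketch:</p>

```python
def linear_shap(coefs, x, background_mean):
    # coef[i] * (x[i] - mean over the background data of feature i)
    return [c * (xi - mi) for c, xi, mi in zip(coefs, x, background_mean)]

print(linear_shap([2.0, -1.0], [3.0, 5.0], [1.0, 4.0]))  # [4.0, -1.0]
```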
|
<python><machine-learning><shap><xai>
|
2024-07-26 08:22:09
| 1
| 1,134
|
Sole Galli
|
78,796,866
| 21,049,944
|
Polars - when/then conditions from dict
|
<p>I would like to have a function that accepts a list of conditions as a parameter and filters a given dataframe by all of them.
The pseudocode should look like this:</p>
<pre><code>def Filter(df, conditions = ["a","b"]):
conditions_dict = {
"a": pl.col("x") < 5,
"b": pl.col("x") > -3,
"c": pl.col("z") < 7
}
return df.with_columns(
pl.when( any [conditions_dict[c] for c in conditions])
.then(pl.lit(False))
.otherwise(pl.lit(True))
.alias("bool")
)
</code></pre>
<p>How to do it?</p>
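<p>One possibility (a sketch, untested against Polars itself): fold the selected expressions together with <code>functools.reduce</code> and <code>operator.or_</code> — Polars expressions support <code>|</code>, and <code>pl.any_horizontal</code> may be another option. The folding logic works the same way on plain booleans:</p>

```python
from functools import reduce
import operator

def any_of(conditions):
    # OR-combine a list of boolean-like conditions
    # (Polars expressions also support the | operator, so the same reduce works)
    return reduce(operator.or_, conditions)

print(any_of([False, True, False]))  # True
```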
|
<python><dataframe><python-polars>
|
2024-07-26 07:58:37
| 1
| 388
|
Galedon
|
78,796,805
| 3,083,195
|
avg() over a whole dataframe causing different output
|
<p>I see that <code>dataframe.agg(avg(col))</code> works fine, but when I calculate <code>avg()</code> over a window spanning the whole column (without any partition), I see different results based on which column I use with <code>orderBy</code>.</p>
<p>Sample code:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col
spark = SparkSession.builder.appName("sample_for_SE").getOrCreate()
# Sample data
data = [
(1, 10.0, 5.0),
(3, 20.0, None),
(5, 15.0, None)
]
schema = ["id", "value1", "value2"]
df = spark.createDataFrame(data, schema=schema)
# Display DataFrame and avg()
df.show()
df.agg(avg("value1")).show()
</code></pre>
<p>And the output showing DF and avg correctly:</p>
<pre><code>+---+------+------+
| id|value1|value2|
+---+------+------+
| 1| 10.0| 5.0|
| 3| 20.0| NULL|
| 5| 15.0| NULL|
+---+------+------+
+-----------+
|avg(value1)|
+-----------+
| 15.0|
+-----------+
</code></pre>
<p>However with window function:</p>
<pre><code>from pyspark.sql.window import Window
#with orderBy("value1")
#========================
w = Window.orderBy("value1")
df.withColumn("AVG",avg(col("value1")).over(w))\
.sort("id",ascending=True)\
.show()
#with orderBy("id")
#========================
w = Window.orderBy("id")
df.withColumn("AVG",avg(col("value1")).over(w))\
.sort("id",ascending=True)\
.show()
</code></pre>
<p>Output:</p>
<pre><code>| id|value1|value2| AVG|
+---+------+------+----+
| 1| 10.0| 5.0|10.0|
| 3| 20.0| NULL|15.0|
| 5| 15.0| NULL|12.5|
+---+------+------+----+
+---+------+------+----+
| id|value1|value2| AVG|
+---+------+------+----+
| 1| 10.0| 5.0|10.0|
| 3| 20.0| NULL|15.0|
| 5| 15.0| NULL|15.0|
+---+------+------+----+
</code></pre>
<p>Question:</p>
<ol>
<li>Why would it matter which column I choose in the <code>orderBy()</code>, since I am using the whole column anyway for calculating <code>avg()</code>?</li>
<li>Why is the <code>avg()</code> not shown consistently as a fixed number, but instead as 10, 15, 12.5, etc.?</li>
</ol>
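<p>A likely explanation (worth verifying against the Spark docs): when a window has <code>orderBy</code> but no explicit frame, Spark defaults to <code>RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW</code>, so <code>avg</code> becomes a running average up to the current row in that ordering — which is why the result depends on the ordering column. A stdlib sketch reproduces the numbers above:</p>

```python
def running_avg(values):
    # cumulative mean in the given order (mirrors an unbounded-preceding frame)
    out, total = [], 0.0
    for i, v in enumerate(values, start=1):
        total += v
        out.append(total / i)
    return out

print(running_avg([10.0, 20.0, 15.0]))  # ordered by id     -> [10.0, 15.0, 15.0]
print(running_avg([10.0, 15.0, 20.0]))  # ordered by value1 -> [10.0, 12.5, 15.0]
```

<p>Adding an explicit frame such as <code>Window.orderBy("id").rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)</code>, or simply dropping <code>orderBy</code>, should give the fixed 15.0 everywhere.</p>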
|
<python><dataframe><apache-spark><pyspark><rdd>
|
2024-07-26 07:39:05
| 1
| 1,707
|
anurag86
|
78,796,733
| 6,999,569
|
Python WFDB library has no attribute ann2rr
|
<p>I am trying to extract RR interval data from ecg annotation files from physionet by using the <code>ann2rr</code> function according to documentation:</p>
<pre class="lang-py prettyprint-override"><code>from IPython.display import display
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import os
import shutil
import posixpath
import wfdb
# Demo 1 - Read a WFDB record using the 'rdrecord' function into a wfdb.Record object.
# Plot the signals, and show the data.
# This works:
path = 'C:/Users/user/Downloads/bidmc-congestive-heart-failure-database-1.0.0/files/chf09'
ext ='ecg'
record = wfdb.rdrecord(path)
wfdb.plot_wfdb(record=record, title='Record chf09 from BIDMC heart failure database')
display(record.__dict__)
# but that generates the error:
res = wfdb.ann2rr(path, ext, as_array=True)
</code></pre>
<pre><code>AttributeError Traceback (most recent call last)
Cell In[19], line 10
8 display(record.__dict__)
9 sig=np.array(record.p_signal).T
---> 10 res = wfdb.processing.ann2rr(path, ext, as_array=True)
AttributeError: module 'wfdb.processing' has no attribute 'ann2rr'
</code></pre>
<p>I have also tried the variant without processing, i.e. <code>wfdb.ann2rr(...)</code> also without success.</p>
<p>I also have reinstalled the <code>wfdb</code> and <code>pyECG</code> libraries, also without effect.</p>
<p>I'm working with Jupyter Notebook and python version
3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]</p>
<p>Maybe I'm just missing something simple, so I'm hoping for helpful replies.</p>
|
<python><python-import>
|
2024-07-26 07:20:31
| 1
| 846
|
Manfred Weis
|
78,796,237
| 5,595,678
|
django OAuth Toolkit not getting redirect URI
|
<p>I am using Django OAuth toolkit and using the following code for OAuth implementation</p>
<pre><code>import requests
from django.http import JsonResponse
from django.shortcuts import redirect, render
from django.contrib.auth import authenticate, login, logout
from django.contrib.auth.decorators import login_required
from .forms import AuthenticationForm, UserProfileForm
from .models import UserProfile
from oauth2_provider.models import get_application_model
import base64
Application = get_application_model()
def oauth_login(request):
app = Application.objects.get(name="App")
#redirect_uri = request.GET.get("redirect_uri", "http://test.com:8000/callback")
#redirect_uri = request.GET.get("redirect_uri", "http://test.com:8002/malicious_redirect.html")
redirect_uri = request.POST.get("redirect_uri", "http://test.com:8002/malicious_redirect.html")
authorization_url = (
f"http://test.com:8000/o/authorize/?client_id={app.client_id}&response_type=code&redirect_uri={redirect_uri}"
)
return redirect(authorization_url)
def oauth_callback(request):
code = request.GET.get("code")
if not code:
return JsonResponse({'error': 'missing_code', 'details': 'Missing code parameter.'}, status=400)
token_url = "http://test.com:8000/o/token/"
client_id = Application.objects.get(name="App").client_id
client_secret = Application.objects.get(name="App").client_secret
#redirect_uri = request.GET.get("redirect_uri", "http://test.com:8002/callback")
redirect_uri = request.GET.get("redirect_uri", "http://test.com:8002/unique_redirect.html")
data = {
"grant_type": "authorization_code",
"code": code,
"redirect_uri": redirect_uri,
"client_id": client_id,
"client_secret": client_secret,
}
headers = {
'Content-Type': 'application/x-www-form-urlencoded',
'Authorization': f'Basic {base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()}',
}
response = requests.post(token_url, data=data, headers=headers)
tokens = response.json()
print(tokens)
if response.status_code != 200:
return JsonResponse({'error': 'token_exchange_failed', 'details': tokens}, status=response.status_code)
request.session['access_token'] = tokens['access_token']
request.session['refresh_token'] = tokens['refresh_token']
return JsonResponse(tokens)
#return redirect('profile')
</code></pre>
<p>The problem is that if I am logged into the OAuth 2.0 admin panel with superuser credentials, the above code works fine and redirects to the provided URL. Otherwise it doesn't work and uses the <code>LOGIN_REDIRECT_URL = '/profile/'</code> from <code>settings.py</code>.</p>
<p>What could be the reason?</p>
|
<python><django><session><django-oauth><django-oauth-toolkit>
|
2024-07-26 04:41:20
| 1
| 1,765
|
Johnny
|
78,796,192
| 1,935,424
|
tkinter: what is the difference between transient() and wm_transient()?
|
<p>Using tkinter in Python. Created a dialog box from tk.Toplevel()</p>
<p>After reading various posts and docs:</p>
<pre><code> self.resizable(width=False, height=False)
self.wm_transient(self._mainwin)
self.transient(self._mainwin)
self.wait_visibility()
self.grab_set()
self.update_idletasks() # draw
self.wait_window(self)
</code></pre>
<p>Just curious as to what the difference is between wm_transient() and transient()? Do I need both or only one of them?</p>
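<p>As far as I can tell from the tkinter source, <code>transient</code> is simply an alias bound to <code>wm_transient</code> on the <code>Wm</code> mixin, so only one of the two calls should be needed — this can be checked directly:</p>

```python
import tkinter

# In CPython's tkinter, Wm defines wm_transient and then binds `transient = wm_transient`
print(tkinter.Wm.transient is tkinter.Wm.wm_transient)  # True
```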
|
<python><tkinter>
|
2024-07-26 04:09:36
| 1
| 899
|
JohnA
|
78,796,065
| 457,030
|
How to embed Python to .Net?
|
<p>I tried to embed Python to .Net using pythonnet based on the docs <a href="https://pythonnet.github.io/pythonnet/dotnet.html" rel="nofollow noreferrer">here</a> and <a href="https://github.com/pythonnet/pythonnet#embedding-python-in-net" rel="nofollow noreferrer">here</a>.</p>
<p>Here are my code</p>
<pre><code>Runtime.PythonDLL = @"D:\Dev\Console\.conda\python311.dll";
PythonEngine.Initialize();
dynamic sys = Py.Import("sys");
Console.WriteLine("Python version: " + sys.version);
</code></pre>
<p>.conda is virtual environment I created using VSCode.</p>
<p>I get this error message.</p>
<blockquote>
<p>System.TypeInitializationException: 'The type initializer for
'Delegates' threw an exception.' DllNotFoundException: Could not load
D:\Dev\Console.conda\python311.dll. Win32Exception: The specified
module could not be found.</p>
</blockquote>
<p>Thanks.</p>
|
<python><c#><.net><python.net>
|
2024-07-26 02:49:15
| 1
| 4,315
|
Syaiful Nizam Yahya
|
78,795,944
| 1,471,980
|
how do you merge values in rows, replace nan values in pandas
|
<p>I am doing some manipulation on a data frame:</p>
<pre><code>df
Node Interface Speed carrier 1-May 9-May 2-Jun 21-Jun
Server1 internet1 10 ATT 20 30 50 90
Server1 wan3.0 20 Comcast NaN NaN NaN 100
Server1 wan3.0 50 Comcast 30 40 40 NaN
Server2 wan2 100 Sprint 90 70 NaN NaN
Server2 wan2 20 Sprint NaN NaN 88 70
Server2 Internet2 40 Verizon 10 60 90 70
</code></pre>
<p>I need to merge rows in the data frame, grouped by Node and Interface, replacing NaN values with those from the other row and picking the max value of Speed for the interface.</p>
<p>expected data frame should be like this:</p>
<pre><code>df1
Node Interface Speed carrier 1-May 9-May 2-Jun 21-Jun
Server1 internet1 10 ATT 20 30 50 90
Server1 wan3.0 50 Comcast 30 40 40 100
Server2 wan2 100 Sprint 90 70 88 70
Server2 Internet2 40 Verizon 10 60 90 70
</code></pre>
<p>I tried this:</p>
<pre><code>df2 = df.groupby(['Node', 'Interface', 'carrier']).agg({'Speed': 'max'}).reset_index()
df3=df.drop('Speed', axis=1)
df4=df3.ffill().drop_duplicates()
</code></pre>
<p>Not quite working. Is there an easy way to merge rows, replace nan values with the other row values and pick the max Speed for the Speed cell value?</p>
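<p>A possible pandas route (untested sketch) is <code>df.groupby(['Node', 'Interface', 'carrier'], as_index=False).max()</code>, since <code>max</code> skips NaN in each date column and so effectively picks the non-missing value. The per-group merge logic itself looks like this in plain Python (<code>None</code> stands in for NaN):</p>

```python
def merge_group(rows):
    # rows: dicts for one (Node, Interface) group; None marks a missing value
    merged = dict(rows[0])
    for row in rows[1:]:
        for key, value in row.items():
            if key == "Speed":
                merged[key] = max(merged[key], value)   # keep the max Speed
            elif merged.get(key) is None:
                merged[key] = value                     # fill NaN from the other row
    return merged

rows = [
    {"Speed": 20, "carrier": "Comcast", "1-May": None, "21-Jun": 100},
    {"Speed": 50, "carrier": "Comcast", "1-May": 30, "21-Jun": None},
]
print(merge_group(rows))
```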
|
<python><pandas>
|
2024-07-26 01:39:04
| 1
| 10,714
|
user1471980
|
78,795,749
| 880,783
|
How can I invert `dataclass.astuple`?
|
<p>I am trying to construct a hierarchy of <code>dataclass</code>es from a tuple, like so:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import astuple, dataclass
@dataclass
class Child:
name: str
@dataclass
class Parent:
child: Child
# this is what I want
p = Parent(Child("Tim"))
print(p)
# this is what I get
t = astuple(p)
print(Parent(*t))
</code></pre>
<p>However, while this constructs a <code>Parent</code> as expected, its <code>child</code> is not of type <code>Child</code>:</p>
<pre><code>Parent(child=Child(name='Tim'))
Parent(child=('Tim',))
</code></pre>
<p>Is there a way to construct Parent <em>and child</em> from the tuple <code>t</code>, or by some other means?</p>
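<p>One way (a sketch that assumes annotations are actual classes, i.e. no <code>from __future__ import annotations</code>, in which case <code>field.type</code> would be a string) is to walk <code>dataclasses.fields</code> recursively:</p>

```python
from dataclasses import dataclass, fields, is_dataclass, astuple

def from_tuple(cls, values):
    # Rebuild nested dataclasses from a tuple produced by astuple()
    args = []
    for field, value in zip(fields(cls), values):
        if is_dataclass(field.type) and isinstance(value, tuple):
            args.append(from_tuple(field.type, value))  # recurse into nested dataclass
        else:
            args.append(value)
    return cls(*args)

@dataclass
class Child:
    name: str

@dataclass
class Parent:
    child: Child

p = Parent(Child("Tim"))
print(from_tuple(Parent, astuple(p)) == p)  # True
```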
|
<python><python-dataclasses>
|
2024-07-25 23:29:40
| 2
| 6,279
|
bers
|
78,795,740
| 3,284,297
|
Sort Pandas dataframe by Sub Total and count
|
<p>I have a very large dataset called bin_df.</p>
<p>Using pandas and the following code I've assigned sub-total "Total" to each group:</p>
<pre><code> bin_df = df[df["category"].isin(model.BINARY_CATEGORY_VALUES)]
bin_category_mime_type_count_df = (
bin_df.groupby(["category", "mime_type"])["mime_type"]
.count()
.reset_index(name="Count")
)
</code></pre>
<p>The output from bin_category_mime_type_count_df :</p>
<pre><code> category mime_type Count
1 application/x-executable 19
1 application/x-pie-executable 395
1 application/x-sharedlib 1
2 application/x-sharedlib 755
3 application/x-sharedlib 1
6 application/x-object 129
</code></pre>
<p>Then:</p>
<pre><code> bin_category_total_count_df = (
bin_category_mime_type_count_df.groupby(["category", "mime_type"])[
"Count"
]
.sum()
.unstack()
)
bin_category_total_count_df = (
bin_category_total_count_df.assign(
Total=bin_category_total_count_df.sum(1)
)
.stack()
.to_frame("Count")
)
bin_category_total_count_df["Count"] = (
bin_category_total_count_df["Count"].astype("Int64").fillna(0)
)
</code></pre>
<p>This produces the following (it is sorted by category by default):</p>
<pre><code> Count
category mime_type
1 application/x-executable 19
application/x-pie-executable 395
application/x-sharedlib 1
Total 415
2 application/x-sharedlib 755
Total 755
3 application/x-sharedlib 1
Total 1
6 application/x-object 129
Total 129
</code></pre>
<p>I would like it to be sorted by "Total" and then within a category I would like it to be sorted by mime_type Count:</p>
<pre><code> Count
category mime_type
2 application/x-sharedlib 755
Total 755
1 application/x-pie-executable 395
application/x-executable 19
application/x-sharedlib 1
Total 415
6 application/x-object 129
Total 129
3 application/x-sharedlib 1
Total 1
</code></pre>
<p>What kind of function should I look at to get the desired result?</p>
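<p>One option is to build an explicit sort key: group total descending, then the <code>Total</code> row last within each category, then count descending. As a stdlib sketch over <code>(category, mime_type, count)</code> tuples (the same idea ports to pandas via helper columns and <code>sort_values</code>):</p>

```python
rows = [
    (1, "application/x-executable", 19),
    (1, "application/x-pie-executable", 395),
    (1, "application/x-sharedlib", 1),
    (1, "Total", 415),
    (2, "application/x-sharedlib", 755),
    (2, "Total", 755),
    (3, "application/x-sharedlib", 1),
    (3, "Total", 1),
    (6, "application/x-object", 129),
    (6, "Total", 129),
]
totals = {cat: n for cat, mime, n in rows if mime == "Total"}

# total desc, Total row last within its category, then count desc
ordered = sorted(rows, key=lambda r: (-totals[r[0]], r[1] == "Total", -r[2]))
print(ordered[0])  # (2, 'application/x-sharedlib', 755)
```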
|
<python><pandas><group-by>
|
2024-07-25 23:24:55
| 2
| 423
|
Charlotte
|
78,795,739
| 8,037,521
|
Speed up / parallelize multivariate_normal.pdf
|
<p>I have multiple Nx3 points, and I sequentially generate a new value for each from its corresponding multivariate Gaussian, each with 1x3 mean and 3x3 cov. So, together, I have arrays: Nx3 array of points, Nx3 array of means and Nx3x3 array of covs.</p>
<p>I only see how to do it with the classic for-loop:</p>
<pre><code>import numpy as np
from scipy.stats import multivariate_normal
# Generate example data
N = 5 # Small number for minimal example, can be increased for real use case
points = np.random.rand(N, 3)
means = np.random.rand(N, 3)
covs = np.array([np.eye(3) for _ in range(N)]) # Identity matrices as example covariances
# Initialize an array to store the PDF values
pdf_values = np.zeros(N)
# Loop over each point, mean, and covariance matrix
for i in range(N):
pdf_values[i] = multivariate_normal.pdf(points[i], mean=means[i], cov=covs[i])
print("Points:\n", points)
print("Means:\n", means)
print("Covariances:\n", covs)
print("PDF Values:\n", pdf_values)
</code></pre>
<p>Is there any way to speed this up? I tried to pass everything directly to <code>multivariate_normal.pdf</code>, but from the docs that does not seem to be supported (unlike the simpler case of generating values for Nx3 points with the same mean and covariance).</p>
<p>Maybe some implementation not from scipy?</p>
<p>I might be too hopeful, but I hope there is a simple way to speed this up and avoid iterating over a big array of data with a plain Python for-loop.</p>
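<p>One vectorized possibility (a NumPy sketch; worth verifying numerically against scipy on real data) evaluates all N densities at once with batched linear algebra and <code>einsum</code>:</p>

```python
import numpy as np

def batch_mvn_pdf(points, means, covs):
    # points, means: (N, d); covs: (N, d, d)
    d = points.shape[1]
    diff = points - means
    inv = np.linalg.inv(covs)                          # batched inverse, (N, d, d)
    maha = np.einsum("ni,nij,nj->n", diff, inv, diff)  # squared Mahalanobis distances
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(covs))
    return norm * np.exp(-0.5 * maha)

# At the mean with identity covariance the density is (2*pi)**(-d/2)
print(batch_mvn_pdf(np.zeros((1, 3)), np.zeros((1, 3)), np.eye(3)[None]))
```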
|
<python><numpy>
|
2024-07-25 23:24:51
| 1
| 1,277
|
Valeria
|
78,795,722
| 2,136,286
|
gcloud mistakes event trigger for storage trigger
|
<p>There's a cloud function in Python that processes some data when a file is uploaded to firebase's bucket:</p>
<pre><code>@storage_fn.on_object_finalized(bucket = "my-bucket", timeout_sec = timeout_sec, memory = memory, cpu = cpu, region='us-central1')
def validate_file_upload(event: storage_fn.CloudEvent[storage_fn.StorageObjectData]):
process(event)
</code></pre>
<p>When it's deployed via firebase cli the function works properly</p>
<p><code>firebase deploy --only functions:validate_file_upload</code></p>
<p>However, when the same function is deployed via gcloud</p>
<pre><code>gcloud functions deploy validate_file_upload
--gen2
--region=us-central1
.......
--entry-point=validate_file_upload
--trigger-event-filters="type=google.cloud.storage.object.v1.finalized"
--trigger-event-filters="bucket=my-bucket"
</code></pre>
<p>When function is triggered, it fails with</p>
<blockquote>
<p>TypeError: validate_file_upload() takes 1 positional argument but 2 were given</p>
</blockquote>
<p>The reason is that when the function is deployed via <code>firebase</code>, GCP sends an 'Eventarc' object as a single argument to the cloud function, but when it's deployed via <code>gcloud</code> it sends two: (data, context), which naturally causes the exception.</p>
<p>Even in the documentation it states there should be only 1 argument:</p>
<blockquote>
<p>A Cloud Storage trigger is implemented as a CloudEvent function, in which the Cloud Storage event data is passed to your function in the CloudEvents format</p>
</blockquote>
<p><a href="https://cloud.google.com/functions/docs/calling/storage" rel="nofollow noreferrer">https://cloud.google.com/functions/docs/calling/storage</a></p>
<p>How to make sure that <code>gcloud</code> deployment uses correct function prototype?</p>
|
<python><firebase><google-cloud-functions><google-cloud-storage><gcloud>
|
2024-07-25 23:16:48
| 1
| 676
|
the.Legend
|
78,795,712
| 395,857
|
How to find all occurrences of a substring in a string while ignoring some characters in Python?
|
<p>I'd like to find all occurrences of a substring while ignoring some characters. How can I do it in Python?</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>long_string = 'this is a t`es"t. Does the test work?'
small_string = "test"
chars_to_ignore = ['"', '`']
print(find_occurrences(long_string, small_string))
</code></pre>
<p>should return [(10, 16), (27, 31)] because we want to ignore the presence of chars ` and ".</p>
<ul>
<li><code>(10, 16)</code> is the start and end index of t`es"t in <code>long_string</code>,</li>
<li><code>(27, 31)</code> is the start and end index of <code>test</code> in <code>long_string</code>.</li>
</ul>
<hr />
<p>Using <code>re.finditer</code> won't work as it does not ignore the presence of chars ` and ":</p>
<pre><code>import re
long_string = 'this is a t`es"t. Does the test work?'
small_string = "test"
matches = []
[matches.append((m.start(), m.end())) for m in re.finditer(small_string, long_string)]
print(matches)
</code></pre>
<p>outputs <code>[(27, 31)]</code>, which missed <code>(10, 16)</code>.</p>
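<p>One stdlib approach (a sketch; the signature adds the ignore list as an explicit argument): strip the ignored characters while remembering each kept character's original index, search the stripped string, then map match positions back:</p>

```python
def find_occurrences(long_string, small_string, chars_to_ignore):
    # Keep (original_index, char) pairs for every non-ignored character
    kept = [(i, ch) for i, ch in enumerate(long_string) if ch not in chars_to_ignore]
    stripped = "".join(ch for _, ch in kept)
    matches = []
    start = stripped.find(small_string)
    while start != -1:
        end = start + len(small_string) - 1
        # Map stripped-string positions back to original indices
        matches.append((kept[start][0], kept[end][0] + 1))
        start = stripped.find(small_string, start + 1)
    return matches

long_string = 'this is a t`es"t. Does the test work?'
print(find_occurrences(long_string, "test", ['"', '`']))  # [(10, 16), (27, 31)]
```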
|
<python><string><python-re>
|
2024-07-25 23:12:50
| 2
| 84,585
|
Franck Dernoncourt
|
78,795,641
| 579,204
|
How to find rows with value on either side of a given value?
|
<p>Python, Pandas, I have a dataframe containing datetimes and values.</p>
<pre class="lang-py prettyprint-override"><code># Create an empty DataFrame with 'timestamp' and 'value' columns
df = pd.DataFrame(columns=['timestamp', 'value'])
df.set_index('timestamp', inplace=True)
</code></pre>
<p>I append data to that frame over time.</p>
<p>At some point, I want to find the value at a timestamp. If it's already in the df, great, easy to find.</p>
<p>However, if the time I'm looking for is between two existing values, how can I find it quickly and interpolate between those two bracketing values? ChatGPT led me on a merry chase with nothing but invalid comparisons.</p>
<p>Here's what I tried so far, not working:</p>
<pre class="lang-py prettyprint-override"><code># Check if the target timestamp exists in the DataFrame
timestamps = df.index
if target_timestamp in timestamps:
# Exact match found
return df.loc[target_timestamp, 'value']
else:
# Use searchsorted to find the insertion point
pos = timestamps.searchsorted(target_timestamp)
if pos == 0 or pos == len(timestamps) - 1:
raise ValueError("Target timestamp is out of bounds for interpolation")
if target_timestamp > timestamps[pos]:
previous_timestamp = timestamps[pos]
next_timestamp = timestamps[pos + 1]
else:
previous_timestamp = timestamps[pos - 1]
next_timestamp = timestamps[pos]
# Interpolating the value
previous_value = df.loc[previous_timestamp, 'value']
next_value = df.loc[next_timestamp, 'value']
# Linear interpolation formula
interpolated_value = previous_value + (next_value - previous_value) * \
(target_timestamp - previous_timestamp) / (next_timestamp - previous_timestamp)
return interpolated_value
</code></pre>
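<p>One possible simplification (a sketch using the stdlib <code>bisect</code>; with pandas, reindexing on the union of the index and the target followed by <code>interpolate(method="time")</code> may be another route). Note that the bounds check in the code above also rejects a target between the last two timestamps, since <code>pos == len(timestamps) - 1</code> can be a valid interior insertion point:</p>

```python
from bisect import bisect_left

def value_at(xs, ys, target):
    # xs must be sorted; returns the exact value or a linear interpolation
    i = bisect_left(xs, target)
    if i < len(xs) and xs[i] == target:
        return ys[i]
    if i == 0 or i == len(xs):
        raise ValueError("target is out of bounds for interpolation")
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (target - x0) / (x1 - x0)

print(value_at([0, 10, 20], [0.0, 100.0, 200.0], 5))   # 50.0
print(value_at([0, 10, 20], [0.0, 100.0, 200.0], 15))  # 150.0
```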
|
<python><pandas><search><interpolation>
|
2024-07-25 22:36:15
| 3
| 411
|
Dave
|
78,795,606
| 2,655,092
|
FFprobe not reflecting MP4 dimension edits
|
<p>I'm trying to edit MP4 width & height without scaling.</p>
<p>I'm doing that by editing <code>tkhd</code> & <code>stsd</code> boxes of the MP4 header.</p>
<ul>
<li><code>exiftool</code> will show the new width & height but <code>ffprobe</code> will <strong>not</strong>.</li>
</ul>
<p>Before editing:</p>
<p><strong>Exif:</strong> <br><code>$ exiftool $f | egrep -i 'width|height'</code><br></p>
<pre><code>Image Width : 100
Image Height : 100
Source Image Width : 100
Source Image Height : 100
</code></pre>
<p><strong>FFprobe:</strong> <br> <code>$ ffprobe -v quiet -show_streams $f | egrep 'width|height'</code></p>
<pre><code>width=100
height=100
coded_width=100
coded_height=100
</code></pre>
<p>After editing the above sizes I then get this new following python file output:</p>
<pre><code>[ftyp] size:32
[mdat] size:196933
[moov] size:2057
- [mvhd] size:108
- [trak] size:1941
- - [tkhd] size:92
Updated tkhd box: Width: 100 -> 300, Height: 100 -> 400
- - [mdia] size:1841
- - - [mdhd] size:32
- - - [hdlr] size:44
- - - [minf] size:1757
- - - - [vmhd] size:20
- - - - [dinf] size:36
- - - - - [dref] size:28
- - - - [stbl] size:1693
- - - - - [stsd] size:145
Updated stsd box #1: Width: 100 -> 300, Height: 100 -> 400
- - - - - [stts] size:512
- - - - - [stss] size:56
- - - - - [stsc] size:28
- - - - - [stsz] size:924
- - - - - [stco] size:20
</code></pre>
<p>Then running EXIFtool & FFprobe again:</p>
<p><code>$ exiftool $f | egrep -i 'width|height'</code></p>
<pre><code>Image Width : 300
Image Height : 400
Source Image Width : 300
Source Image Height : 400
</code></pre>
<p><code>$ ffprobe -v quiet -show_streams $f | egrep 'width|height'</code></p>
<pre><code>width=100
height=100
coded_width=100
coded_height=100
</code></pre>
<p>This is my Python code:</p>
<pre class="lang-py prettyprint-override"><code>import sys, struct
def read_box(f):
offset = f.tell()
header = f.read(8)
if len(header) < 8:
return None, offset
size, box_type = struct.unpack(">I4s", header)
box_type = box_type.decode("ascii")
if size == 1:
size = struct.unpack(">Q", f.read(8))[0]
elif size == 0:
size = None
return {"type": box_type, "size": size, "start_offset": offset}, offset
def edit_tkhd_box(f, box_start, new_width, new_height, depth):
f.seek(box_start + 84, 0) # Go to the width/height part in tkhd box
try:
old_width = struct.unpack('>I', f.read(4))[0] >> 16
old_height = struct.unpack('>I', f.read(4))[0] >> 16
f.seek(box_start + 84, 0) # Go back to write
f.write(struct.pack('>I', new_width << 16))
f.write(struct.pack('>I', new_height << 16))
print(f"{' ' * depth} Updated tkhd box: Width: {old_width} -> {new_width}, Height: {old_height} -> {new_height}")
except struct.error:
print(f" Error reading or writing width/height to tkhd box")
def edit_stsd_box(f, box_start, new_width, new_height, depth):
f.seek(box_start + 12, 0) # Skip to the entry count in stsd box
try:
entry_count = struct.unpack('>I', f.read(4))[0]
for i in range(entry_count):
entry_start = f.tell()
f.seek(entry_start + 4, 0) # Skip the entry size
format_type = f.read(4).decode("ascii", "ignore")
if format_type == "avc1":
f.seek(entry_start + 32, 0) # Adjust this based on format specifics
try:
old_width = struct.unpack('>H', f.read(2))[0]
old_height = struct.unpack('>H', f.read(2))[0]
f.seek(entry_start + 32, 0) # Go back to write
f.write(struct.pack('>H', new_width))
f.write(struct.pack('>H', new_height))
print(f"{' ' * depth} Updated stsd box #{i + 1}: Width: {old_width} -> {new_width}, Height: {old_height} -> {new_height}")
except struct.error:
print(f" Error reading or writing dimensions to avc1 format in entry {i + 1}")
else:
f.seek(entry_start + 8, 0) # Skip to the next entry
except struct.error:
print(f" Error reading or writing entries in stsd box")
def parse_and_edit_boxes(f, new_width, new_height, depth=0, parent_size=None):
while True:
current_pos = f.tell()
if parent_size is not None and current_pos >= parent_size:
break
box, box_start = read_box(f)
if not box:
break
box_type, box_size = box["type"], box["size"]
print(f'{"- " * depth}[{box_type}] size:{box_size}')
if box_type == "tkhd":
edit_tkhd_box(f, box_start, new_width, new_height, depth)
elif box_type == "stsd":
edit_stsd_box(f, box_start, new_width, new_height, depth)
# Recursively parse children if it's a container box
if box_type in ["moov", "trak", "mdia", "minf", "stbl", "dinf", "edts"]:
parse_and_edit_boxes(f, new_width, new_height, depth + 1, box_start + box_size)
if box_size is None:
f.seek(0, 2) # Move to the end of file
else:
f.seek(box_start + box_size, 0)
if __name__ == '__main__':
if len(sys.argv) != 4:
print("Usage: python script.py <input_file> <new_width> <new_height>")
else:
with open(sys.argv[1], 'r+b') as f:
parse_and_edit_boxes(f, int(sys.argv[2]), int(sys.argv[3]))
</code></pre>
<p>It seems related to <code>ff_h264_decode_seq_parameter_set</code></p>
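<p>That hunch is probably right: ffprobe's decoded <code>width</code>/<code>height</code> come from the H.264 sequence parameter set inside the <code>avcC</code> box (hence <code>ff_h264_decode_seq_parameter_set</code>), not from <code>tkhd</code>/<code>stsd</code>, so patching only those boxes changes container metadata (what exiftool reports) but not the coded dimensions. Incidentally, the <code>tkhd</code> width/height are 16.16 fixed-point values, which the script's shifts handle; the round trip can be sanity-checked in isolation:</p>

```python
import struct

def encode_dim(pixels):
    # tkhd stores width/height as big-endian 16.16 fixed point
    return struct.pack(">I", pixels << 16)

def decode_dim(raw):
    return struct.unpack(">I", raw)[0] >> 16

print(decode_dim(encode_dim(300)))  # 300
```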
|
<python><video><mp4><ffprobe><exiftool>
|
2024-07-25 22:17:20
| 1
| 3,085
|
James W.
|
78,795,597
| 54,873
|
How do I copy an individual Cell's conditional formatting using Openpyxl?
|
<p>I'm trying to move one set of Excel cells to another location in a workbook using python's <code>openpyxl</code>, offset by some columns. The source sheet has some cells that use conditional formatting.</p>
<p>In good news, <code>move_range</code> does much of the work. But I cannot for the life of me figure out how to copy/move the conditional formatting; it just get altogether lost.</p>
<p>Anyone have thoughts?</p>
|
<python><openpyxl>
|
2024-07-25 22:11:10
| 1
| 10,076
|
YGA
|
78,795,514
| 382,982
|
Unable to use implicit IAM/role-based auth when using Boto3 inside Docker on EC2
|
<p>I've run into a frustrating edge case that I'm now having to either introduce custom logic to work around or solve properly.</p>
<p>I'm running a Django application inside a Docker container on EC2. I'm using an IAM role attached to the instance to grant it access to a particular S3 bucket and set of actions. This is all working well and I can confirm that Boto3 can authenticate using the IAM role and access the bucket as expected when Python is run directly on the instance.</p>
<p>However, when Python is running inside a Docker container on the same instance, Boto3 is unable to use that same implicit authentication strategy. I have configured the EC2 instance to use IMDSV2, required token auth and increased the allowed number of hops (currently 63 while I'm messing with this but ideally 2-3). What's particularly odd is that I'm able to access the IMDSV2 endpoint and manually request a token from within the container using Curl or within Python using Requests and thread the token through to Boto and access the S3 bucket as desired.</p>
<p>The downside to manually fetching the token is that I need to implement my own caching/session refresh strategy or suffer the runtime burden of constantly requesting a new token. It's also not entirely clear to me when the Boto3 session expires: is it when the IMDVS2 token expires or at some other arbitrary point?</p>
<p>So, before I implement a cache/retry strategy of my own, is there something obvious I'm missing here which should enable this all to work?</p>
<p>I'm using Boto3 1.34.146, Docker 27 and Docker Compose 2.29.1 and running on Ubuntu 22.04.3. Happy to provide any additional context or specifics.</p>
|
<python><amazon-web-services><docker><amazon-ec2><boto3>
|
2024-07-25 21:34:36
| 2
| 10,529
|
pdoherty926
|
78,795,407
| 7,385,563
|
load a model from checkpoint folder in pyTorch
|
<p>I am trying to load a model from a certain checkpoint and use it for inference. The checkpoint folder looks like this. How do I load the model in torch from this folder? The <a href="https://pytorch.org/tutorials/recipes/recipes/saving_and_loading_a_general_checkpoint.html" rel="nofollow noreferrer">resources</a> I could find are for loading from a checkpoint file, not a folder.</p>
<p><a href="https://i.sstatic.net/wi97AlyY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wi97AlyY.png" alt="enter image description here" /></a></p>
<pre><code>import whisper_timestamped as whisper
from transformers import AutoProcessor, WhisperForConditionalGeneration
from peft import prepare_model_for_kbit_training, LoraConfig, PeftModel, LoraModel, LoraConfig, get_peft_model
from peft import PeftModel, PeftConfig
import torch
from datasets import Dataset, Audio
from transformers import AutoFeatureExtractor, WhisperModel
peft_model_id = "aben118/finetuned_model/checkpoint-3900"
language = "en"
task = "transcribe"
peft_config = PeftConfig.from_pretrained(peft_model_id)
model = WhisperForConditionalGeneration.from_pretrained(
peft_config.base_model_name_or_path, load_in_8bit=False, device_map="auto"
)
model = PeftModel.from_pretrained(model, peft_model_id)
print(model)
model = model.merge_and_unload()
model.save_pretrained(<model_path>)
</code></pre>
<p>But it saves it in <code>.safetensors</code> format. I want it to be a model that I can load using <code>torch.load</code>.</p>
|
<python><pytorch><checkpointing>
|
2024-07-25 20:57:29
| 1
| 691
|
afsara_ben
|
78,795,318
| 8,379,035
|
How to Use a Single INSERT INTO Statement for Multiple Rows in Python?
|
<p>Iβm currently working on a Discord Python bot where I loop through a list of <a href="https://discordpy.readthedocs.io/en/latest/api.html?highlight=forumtag#discord.ForumTag" rel="nofollow noreferrer">ForumTags</a> and generate an <code>INSERT INTO</code> SQL statement for each object to insert data into a MySQL database.</p>
<p>However, I want to optimize my code by combining all these individual <code>INSERT INTO</code> statements into a single query like this as an example:</p>
<pre class="lang-sql prettyprint-override"><code>INSERT INTO guild_support_tags (guild_id, tag_id, tag_name, category) VALUES
(123, 1, "test", "test_category"), (456, 2, "another tag", "test_category2")
</code></pre>
<p>That is my current code:</p>
<pre class="lang-py prettyprint-override"><code>start = time.time()
for tag in forum.available_tags:
await write_query("INSERT INTO guild_support_tags (guild_id, tag_id, tag_name, category) VALUES "
"(:guild_id, :tag_id, :tag_name, :category)",
{"guild_id": interaction.guild_id, "tag_id": tag.id, "tag_name": tag.name,
"category": category})
print(f"loop done after {time.time() - start}")
# from other file - created by myself to execute MySQL statements
# I would like a solution where this part is untouched. But if it can be improved, it's okay.
async def write_query(query: str, params: dict) -> None:
async with async_session() as session:
async with session.begin():
await session.execute(text(query), params)
</code></pre>
<p>Maybe it's good to know: I currently use <strong>SQLAlchemy</strong> with <strong>aiomysql</strong> and <strong>Python3.12</strong> on a <strong>MySQL</strong> database.</p>
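<p>For reference, DBAPI drivers support multi-row inserts via <code>executemany</code>-style calls, and SQLAlchemy dispatches one when you pass a list of parameter dicts to a single <code>execute()</code>. A minimal stdlib sketch of the same idea, using <code>sqlite3</code> as a stand-in for MySQL with a hypothetical table of the same shape:</p>

```python
import sqlite3

# In-memory stand-in for the MySQL database (assumption: same table shape).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE guild_support_tags "
    "(guild_id INTEGER, tag_id INTEGER, tag_name TEXT, category TEXT)"
)

# Build all parameter rows first, then send them in one executemany call
# instead of issuing one INSERT per loop iteration.
rows = [
    (123, 1, "test", "test_category"),
    (456, 2, "another tag", "test_category2"),
]
conn.executemany(
    "INSERT INTO guild_support_tags (guild_id, tag_id, tag_name, category) "
    "VALUES (?, ?, ?, ?)",
    rows,
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM guild_support_tags").fetchone()[0]
```

<p>With SQLAlchemy, the analogous move is collecting the per-tag dicts into a list and calling <code>session.execute(text(query), list_of_dicts)</code> once.</p>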
|
<python><sql><mysql><sql-insert>
|
2024-07-25 20:28:15
| 1
| 891
|
Razzer
|
78,795,251
| 496,289
|
How to capture call inside a with block?
|
<p>I have some code with a testcase for it. It works fine.</p>
<pre><code>import pysftp
from unittest import mock
remote_file_name = 'remote_file_name'
local_path = 'local/path/to/file'
def open_sftp_connection(hostname):
return pysftp.Connection(hostname, username='usr', password='pswd')
def sftp_upload_try(local_path, dest_file_name):
sftp_conn = None
try:
sftp_conn = open_sftp_connection('sftp-host')
sftp_conn.put(local_path, remotepath=dest_file_name)
finally:
if sftp_conn:
sftp_conn.close()
def test_sftp_try():
conn_mock = mock.MagicMock()
with mock.patch('test_sftp.open_sftp_connection', return_value=conn_mock):
sftp_upload_try(local_path, remote_file_name)
print(conn_mock.mock_calls)
assert 1 == conn_mock.put.call_count
assert remote_file_name == conn_mock.put.call_args.kwargs['remotepath']
assert 1 == conn_mock.close.call_count
</code></pre>
<p><code>print(conn_mock.mock_calls)</code> prints:</p>
<pre><code>[call.put('local/path/to/file', remotepath='remote_file_name'),
call.__bool__(),
call.close()]
</code></pre>
<hr />
<p>Instead of <code>try-finally</code>, I would like to use <code>with</code>:</p>
<pre><code>def sftp_upload_with(local_path, dest_file_name):
with open_sftp_connection('sftp-host') as sftp_conn:
sftp_conn.put(f'${local_path}', remotepath=dest_file_name)
def test_sftp_with():
conn_mock = mock.MagicMock()
with mock.patch('test_sftp.open_sftp_connection', return_value=conn_mock):
sftp_upload_with(local_path, remote_file_name)
print(conn_mock.mock_calls)
put_call = next(c for c in conn_mock.mock_calls if '__enter__().put' == c[0])
assert put_call
assert remote_file_name == put_call.kwargs['remotepath']
</code></pre>
<p>This time <code>print(conn_mock.mock_calls)</code> prints:</p>
<pre><code>[call.__enter__(),
call.__enter__().put('$local/path/to/file', remotepath='remote_file_name'),
call.__exit__(None, None, None)]
</code></pre>
<hr />
<p>So question is, is there some elegant way to assert calls when using <code>with</code>? Or do I have to do something ugly like <code>'__enter__().put' == c[0]</code> ?</p>
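<p>One way to avoid matching on the <code>'__enter__().put'</code> string is to assert on <code>conn_mock.__enter__.return_value</code>, which is the object the <code>with</code> statement binds to <code>sftp_conn</code>. A self-contained sketch of the idea (with a dummy upload function standing in for <code>sftp_upload_with</code>):</p>

```python
from unittest import mock

def upload(open_conn):
    # Stand-in for sftp_upload_with: uses the connection as a context manager.
    with open_conn() as conn:
        conn.put("local/path", remotepath="remote_file_name")

conn_mock = mock.MagicMock()
upload(lambda: conn_mock)

# The value bound by `with ... as conn` is __enter__'s return value.
entered = conn_mock.__enter__.return_value
entered.put.assert_called_once()
assert entered.put.call_args.kwargs["remotepath"] == "remote_file_name"
conn_mock.__exit__.assert_called_once()
```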
|
<python><mocking><pytest><pytest-mock>
|
2024-07-25 20:04:15
| 0
| 17,945
|
Kashyap
|
78,795,087
| 4,306,274
|
How to remove the large left and right paddings due to "lines+markers" lines?
|
<p>How to remove the large left and right paddings due to "lines+markers" lines?</p>
<pre><code>import numpy as np
import pandas as pd
import plotly.graph_objects as go
count = 100
df = pd.DataFrame(
{
"A": np.random.randint(10, 20, size=(count)),
"B": np.random.randint(10, 20, size=(count)),
"C": np.random.randint(100, 200, size=(count)),
},
index=[f"bar.{i}" for i in range(count)],
)
fig = go.Figure(
data=[
go.Bar(
name=df.columns[0],
orientation="v",
x=df.index,
y=df.iloc[:, 0],
text=df.iloc[:, 0],
),
go.Bar(
name=df.columns[1],
orientation="v",
x=df.index,
y=df.iloc[:, 1],
text=df.iloc[:, 1],
),
go.Scatter(
name=df.columns[2],
x=df.index,
y=df.iloc[:, 2],
text=df.iloc[:, 2],
mode="lines+markers",
),
],
layout=dict(
width=1024,
height=512,
margin=dict(l=10, r=10, t=10, b=10),
),
)
fig.update_layout(
xaxis_title=None,
yaxis_title=None,
legend_title=None,
)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/kEcY9PPb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kEcY9PPb.png" alt="enter image description here" /></a></p>
<p>If I don't set the mode or reduce the data point count from 100 to 10, I can get rid of the large paddings:</p>
<p><a href="https://i.sstatic.net/rU22EKBk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rU22EKBk.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/FyvjCnIV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyvjCnIV.png" alt="enter image description here" /></a></p>
<p>This is a similar issue to the one in this question: <a href="https://stackoverflow.com/q/78782754/4306274">How to remove the large top and bottom paddings due to top-to-bottom lines?</a></p>
|
<python><plotly>
|
2024-07-25 19:16:12
| 1
| 1,503
|
chaosink
|
78,795,062
| 9,438,845
|
Can't connect to MongoDB server with Tableau or Python but can with Compass on Windows 10
|
<p>It was extremely easy to access the MongoDB server by pasting the connection string into MongoDB Compass on Windows 10, but with Python or Tableau it returns errors.</p>
<p>For Python it's either a timeout or:</p>
<pre><code>Configuration Error: The DNS query name does not exist: _mongodb._tcp.mac88stage.sui9p.mongodb.net.
</code></pre>
<blockquote>
<pre><code>from pymongo import MongoClient
# Connection URI
uri = "mongodb+srv://USERNAME:PASSWORD@SERVER.NAME.mongodb.net/"
client = MongoClient(uri)
db = client.db_name
collection = db.table_name
top_10_rows = collection.find().limit(10)
for row in top_10_rows:
print(row)
</code></pre>
</blockquote>
<p>When creating DSN (following this guide: <a href="https://www.mongodb.com/docs/atlas/tutorial/create-system-dsn/" rel="nofollow noreferrer">https://www.mongodb.com/docs/atlas/tutorial/create-system-dsn/</a>)</p>
<p>I get <code>Unknown MySQL Driver Host</code></p>
<p>For Tableau followed the standard guide:
<a href="https://help.tableau.com/current/pro/desktop/en-us/examples_mongodb.htm" rel="nofollow noreferrer">https://help.tableau.com/current/pro/desktop/en-us/examples_mongodb.htm</a>
added driver and connector into relevant folders in Tableau but getting similar errors to Python</p>
|
<python><mongodb><tableau-desktop><mongodb-biconnector>
|
2024-07-25 19:09:17
| 1
| 309
|
Alexander Ka
|
78,794,702
| 11,878,472
|
Read multiple files parallel into separate dataframe in Pyspark
|
<p>I am trying to read large txt files into a dataframe. Each file is 10-15 GB in size and the IO is taking a long time.</p>
<p>I want to read multiple files in parallel and get them in separate dataframes.</p>
<p>I tried the code below:</p>
<pre><code>from multiprocessing.pool import ThreadPool
def read_file(file_path):
return spark.read.csv(file_path)
pool = ThreadPool(10)
df_list = pool.starmap(read_file,[[file1,file2,file3...]])
</code></pre>
<p>But it gives a pickle error.
How can I do this? Is there any alternative for my requirement?</p>
<p>I want to read multiple files in parallel and get them in separate dataframes.</p>
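<p>For what it's worth, the <code>starmap</code> call above passes the whole file list as the arguments of a single call; with a thread pool the usual shape is <code>map</code> over the list, one element per call. A sketch with a dummy reader standing in for <code>spark.read.csv</code>:</p>

```python
from multiprocessing.pool import ThreadPool

def read_file(file_path):
    # Dummy stand-in for spark.read.csv(file_path); returns a placeholder.
    return f"dataframe for {file_path}"

files = ["file1.txt", "file2.txt", "file3.txt"]

# map() calls read_file(f) once per file, running up to 10 calls concurrently;
# results come back in the same order as the input list.
with ThreadPool(10) as pool:
    df_list = pool.map(read_file, files)
```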
|
<python><pyspark><databricks>
|
2024-07-25 17:26:59
| 0
| 411
|
Tejas
|
78,794,664
| 3,490,622
|
Scipy minimization problem - x never changes
|
<p>I am trying to solve the following problem: I have a sample of stores selected by geography, and I'd like to select 20% of the sample in such a way that the fraction of sales per geography for the selection mimics that of the overall sales distribution by geography. I tried several iterations of the code below, but for some reason the 'selection' never changes inside the objective function, so test_fraction is always the same and there's no real optimization. I have no idea what I am doing wrong. Could someone please help?</p>
<p>Here's my code with sample data:</p>
<pre><code>import pandas as pd
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
# Sample data
np.random.seed(0)
data = pd.DataFrame({
'Store': range(1, 101),
'Geography': np.random.choice(['North', 'South', 'East', 'West'], 100),
'TotalSales': np.random.rand(100) * 1000
})
# Define the number of stores to select
num_stores = len(data)
target_percentage = 0.2
target_num_stores = int(target_percentage * num_stores)
# Calculate overall sales fraction per geography
overall_sales = data.groupby('Geography')['TotalSales'].sum()
overall_fraction = overall_sales / overall_sales.sum()
# Objective function to minimize squared error and approach target percentage
def objective(selection):
selection = expit(selection).round() # Ensure binary selection using sigmoid function
selected_stores = data.iloc[np.where(selection == 1)]
if len(selected_stores) == 0:
return np.inf # Prevent division by zero
test_sales = selected_stores.groupby('Geography')['TotalSales'].sum()
test_fraction = test_sales / test_sales.sum()
print(test_fraction)
print()
# Calculate the squared error
error = ((overall_fraction - test_fraction) ** 2).sum()
# Add penalty for deviation from the target number of stores
penalty = ((np.sum(selection) - target_num_stores) / target_num_stores) ** 2
    return error + penalty
# Constraint to select approximately 20% of the stores
def constraint(selection):
return np.sum(expit(selection).round()) - target_num_stores
# Set up bounds for the decision variables
bounds = [(0, 1) for _ in range(num_stores)]
# Create an initial feasible solution by randomly selecting approximately 20% of the stores
initial_selection = np.random.uniform(low=0, high=1, size=num_stores)
# Set up constraints
constraints = {'type': 'eq', 'fun': constraint}
# Solve the optimization problem
result = minimize(objective, initial_selection, bounds=bounds, method='SLSQP', options={'disp': True})
# Solve the optimization problem
result = minimize(objective, x0=initial_selection)#, bounds=bounds, constraints=constraints, method='SLSQP', options={'disp': True})
</code></pre>
|
<python><scipy><scipy-optimize>
|
2024-07-25 17:17:38
| 0
| 1,011
|
user3490622
|
78,794,566
| 864,245
|
Iterate over list of strings to retrieve groups of strings
|
<p>Given the following list:</p>
<pre class="lang-py prettyprint-override"><code>inp = ["arg1:", "list", "of", "args", "arg2:", "other", "list"]
</code></pre>
<p>How can I develop a dictionary like this?</p>
<pre class="lang-py prettyprint-override"><code>out = {"arg1": ["list", "of", "args"], "arg2": ["other", "list"]}
</code></pre>
<p>It's basically a case of:</p>
<ul>
<li>Loop through the input list until we find an element suffixed with a colon (e.g. <code>arg1:</code>)</li>
<li>Use this (without the colon) to create a key in a dictionary with a value of an empty list</li>
<li>Continue looping through the input list, adding each element to this key's list value, until another element is found with a colon</li>
<li>Rinse and repeat until all elements are processed in this way</li>
</ul>
<p>Other examples would be:</p>
<pre class="lang-py prettyprint-override"><code>inp = ["test:", "one", "help:", "two", "three", "four"]
out = {"test": ["one"], "help": ["two", "three", "four"]}
---
inp = ["one:", "list", "two:", "list", "list", "three:", "list", "list", "list"]
out = {"one": ["list"], "two": ["list", "list"], "three": ["list", "list", "list"]}
</code></pre>
<p>This feels like it <em>should</em> be relatively straightforward (albeit probably not a one-liner!) but I just can't get my head round it.</p>
<p>Any advice appreciated.</p>
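<p>One straightforward way to implement the steps described above is to track the "current" key while looping:</p>

```python
def group_args(inp):
    out = {}
    current = None
    for item in inp:
        if item.endswith(":"):
            current = item[:-1]        # strip the colon to form the key
            out[current] = []
        elif current is not None:
            out[current].append(item)  # add to the most recent key's list
    return out

inp = ["arg1:", "list", "of", "args", "arg2:", "other", "list"]
result = group_args(inp)
# result == {"arg1": ["list", "of", "args"], "arg2": ["other", "list"]}
```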
|
<python><python-3.x>
|
2024-07-25 16:48:11
| 7
| 1,316
|
turbonerd
|
78,794,511
| 1,273,987
|
How can I set a python f-string format programmatically?
|
<p>I would like to set the formatting of a large integer in an fstring programmatically. Specifically, I have an integer that can span several orders of magnitude and I'd like to print it using the underscore "_" delimiter, e.g.</p>
<pre class="lang-py prettyprint-override"><code>num = 123456789
print(f"number is {num:9_d}")
>>> number is 123_456_789 # this is ok
num = 1234
print(f"number is {num:9_d}")
>>> number is 1_234 # this has extra spaces due to hand-picking 9
</code></pre>
<p>How can I set the number preceding "_d" programmatically based on the number of digits in num? (Or in case I asked an XY question, how can I print the number with underscore delimiters and no extra spaces?)</p>
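<p>In case it helps: the minimum field width (the <code>9</code>) is optional in the format spec, so omitting it gives the underscore grouping with no padding at all:</p>

```python
# The "_" grouping option works without a minimum field width,
# so no leading spaces are added regardless of the number's magnitude.
for num in (123456789, 1234, 7):
    print(f"number is {num:_d}")
```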
|
<python><f-string>
|
2024-07-25 16:36:41
| 2
| 2,105
|
Ziofil
|
78,794,242
| 610,569
|
How to convert JSONL to parquet efficiently?
|
<p>Given a jsonl file like this:</p>
<pre><code>{"abc1": "hello world", "foo2": "foo bar"}
{"foo2": "bar bar blah", "foo3": "blah foo"}
</code></pre>
<p>I could convert it to a dataframe like this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
with open('mydata.jsonl') as fin:
df = pd.json_normalize(pd.DataFrame(fin.read().splitlines())[0].apply(eval))
# Sometimes, there's some encoding issues, so this is done:
for col in df.columns:
if df[col].dtype == object:
df[col] = df[col].apply(lambda x: np.nan if x==np.nan else str(x).encode('utf8', 'replace').decode('utf8'))
df.to_parquet('mydata.parquet')
</code></pre>
<p>The above works for small datasets, but for a dataset with billions of lines, reading everything into RAM is excessive and won't fit on a normal machine.</p>
<p><strong>Are there other ways to convert the dataset into parquet efficiently without reading the whole data into RAM?</strong></p>
|
<python><pandas><parquet><jsonlines>
|
2024-07-25 15:31:39
| 3
| 123,325
|
alvas
|
78,794,183
| 5,661,667
|
scipy.special.eval_hermite with complex argument
|
<p>Does the scipy.special eval_hermite function only support real arguments? From the documentation (<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.eval_hermite.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.eval_hermite.html</a>), I assumed complex would work fine and I also don't see any programmatic reason why it should not. However, I get the following error:</p>
<pre><code>scipy.special.eval_hermite(4, 1.j)
# output: TypeError: ufunc 'eval_hermite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p>Does anyone know why this is? Or how to work around it?</p>
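<p>As a workaround (this is not scipy's implementation), the physicists' Hermite polynomials can be evaluated for complex arguments directly from the three-term recurrence H<sub>n+1</sub>(x) = 2x·H<sub>n</sub>(x) − 2n·H<sub>n−1</sub>(x):</p>

```python
def hermite_complex(n, x):
    """Evaluate the physicists' Hermite polynomial H_n at x (complex x is fine)."""
    if n == 0:
        return 1.0 + 0j
    h_prev, h = 1.0 + 0j, 2 * x          # H_0 and H_1
    for k in range(1, n):                 # H_{k+1} = 2x*H_k - 2k*H_{k-1}
        h_prev, h = h, 2 * x * h - 2 * k * h_prev
    return h

value = hermite_complex(4, 1j)  # H_4(x) = 16x^4 - 48x^2 + 12, so H_4(i) = 76
```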
|
<python><scipy><polynomials>
|
2024-07-25 15:19:25
| 1
| 1,321
|
Wolpertinger
|
78,794,103
| 5,110,551
|
Mocking or monkey patching a slow loading resource that loads on import
|
<p>I have one file which loads a particular resource (from a file) that takes a while to load.</p>
<p>I have a function in <strong>another</strong> file which uses that resource by importing it first. When that resource is first imported, it loads and takes a while.</p>
<p>However, when I'm testing the function that requires that resource, I want to pass it a smaller resource.</p>
<p>Unfortunately, there aren't any major refactors I'm able to do to either <code>file1.py</code> or <code>file2.py</code>. Is there a solution using <code>pytest</code>? I haven't been able to get it working.</p>
<h3>file1.py</h3>
<pre class="lang-py prettyprint-override"><code>def load_resource():
print("Loading big resource...")
return {"key": "big resource"}
resource = load_resource()
</code></pre>
<h3>file2.py</h3>
<pre class="lang-py prettyprint-override"><code>from file1 import resource
def my_func():
return resource["key"]
</code></pre>
<h3>test.py</h3>
<pre class="lang-py prettyprint-override"><code>import pytest
@pytest.fixture(autouse=True)
def mock_load_resource(monkeypatch):
def mock_resource():
return {'key': 'mocked_value'}
monkeypatch.setattr('file1', 'load_resource', mock_resource)
monkeypatch.setattr('file1', 'resource', mock_resource())
def test_my_function():
from file2 import my_func
result = my_func()
assert result == 'mocked_value'
</code></pre>
|
<python><testing><mocking><pytest><monkeypatching>
|
2024-07-25 15:04:28
| 1
| 309
|
Sam
|
78,793,963
| 1,256,041
|
Different Python versions for different Bazel targets?
|
<p>I have many <code>py_library</code>, <code>py_binary</code> and <code>py_test</code> targets.</p>
<p>Some of these must run against <code>3.9.16</code> (constraint outside of my control) and others use <code>3.10.6</code> (in order to get more recent features).</p>
<p>In my <code>WORKSPACE</code>, I specify my toolchain:</p>
<pre><code>load("@rules_python//python:repositories.bzl", "python_register_toolchains")
python_register_toolchains(
name = "python3_9_16",
python_version = "3.9.16",
)
load("@python3_9_16//:defs.bzl", python_interpreter = "interpreter")
# python_register_toolchains(
# name = "python3_10_6",
# python_version = "3.10.6",
# )
</code></pre>
<p>How do I tell Bazel which toolchain to use on a per-target basis?</p>
<pre><code># Not real code
py_binary(
name = "app",
main = "main.py",
srcs = [
"main.py",
],
interpreter = "python3_10_6",
)
</code></pre>
<hr />
<p>This question is about 2 vs 3: <a href="https://stackoverflow.com/questions/47184773/way-to-use-different-python-versions-for-different-bazel-rules">Way to use different python versions for different Bazel rules</a></p>
|
<python><bazel><bazel-python>
|
2024-07-25 14:37:50
| 1
| 37,819
|
sdgfsdh
|
78,793,821
| 14,463,396
|
Creating/Updating an excel file with slicers from python
|
<p>I have some python code which creates a few pandas data frames which I need to put into a spreadsheet, with different data on multiple sheets and with slicers for easy filtering. This then gets sent out so others can easily see/filter the data. Currently, I manually copy and paste the updated tables into a template excel file, resave and send out, but I would like to automate this if possible.</p>
<p>My first thought was to use <code>xlsxwriter</code> to write the formatting to generate this spreadsheet, however as per <a href="https://stackoverflow.com/questions/72307488/is-it-possible-to-generate-an-excel-with-slicers-with-python-and-with-the-xlsxw">this answer</a>, slicers aren't supported by the library, so that's no good.</p>
<p>I then considered seeing if I could automate opening a template excel document with the slicers already in place, delete the previous data in the tables and add in the new data. I found <a href="https://stackoverflow.com/questions/58326076/replacing-data-in-xlsx-sheet-with-pandas-dataframe">this question</a> on using <code>openpyxl</code>, and used some of the answers to try the below code to first remove the rows:</p>
<pre><code>wb = openpyxl.load_workbook(r'TEST.xlsx')
ws = wb['sheet 1']
rows = ws.max_row
ws.delete_rows(13, rows)
wb.save(r'TEST2.xlsx')
</code></pre>
<p>However this removes the slicers and all formatting in the excel file completely, and not only saves a new copy but also removes the formatting from the original template as well, so that's no good either.</p>
<p>Does anyone know of a way I could automate creating or updating an excel file from python with slicers?</p>
<p><strong>Edit:</strong> I'm not looking for recommendations on opinions on different libraries as the close vote suggests. I'm looking for a way to solve a specific coding problem, if it's possible, for which the code and research I've already tried has brought me up short.</p>
|
<python><excel><openpyxl><pywin32><xlsxwriter>
|
2024-07-25 14:12:01
| 1
| 3,395
|
Emi OB
|
78,793,812
| 3,636,909
|
How can I apply a transformation only once in open3D
|
<p>I want to apply a simple rotation to my 3D model, say 10° pitch relative to its current orientation, only once, but the model keeps rotating indefinitely. I tried using a rotation matrix in a transformation but I get the same issue.</p>
<p>Here is an example code:</p>
<pre><code>from colorama import init
import numpy as np
import open3d as o3d
import cv2
# Paths to the 3D object and video file
obj_path = "Compressor HD.obj"
# Callback function to update the texture of the plane with video frames
def update(vis):
global obj_mesh, initial_rotation
obj_mesh.rotate(o3d.geometry.get_rotation_matrix_from_xyz(np.radians([10, 0, 0])), center=[0, 0, 0])
vis.update_geometry(obj_mesh)
vis.poll_events()
vis.update_renderer()
return False
if __name__ == "__main__":
vis = o3d.visualization.VisualizerWithKeyCallback()
vis.create_window()
# Load and add the 3D object to the scene
obj_mesh = o3d.io.read_triangle_mesh(obj_path)
obj_mesh.compute_vertex_normals()
vis.add_geometry(obj_mesh)
initial_rotation = obj_mesh.get_rotation_matrix_from_xyz([0, 0, 0])
# Create and add coordinate frame
axis_length = 200
coordinate_frame = o3d.geometry.TriangleMesh.create_coordinate_frame(size=axis_length, origin=[0, 0, 0])
vis.add_geometry(coordinate_frame)
# Register callback to update video frame texture
vis.register_animation_callback(update)
vis.run()
vis.destroy_window()
cv2.destroyAllWindows()
</code></pre>
|
<python><open3d>
|
2024-07-25 14:09:37
| 2
| 2,334
|
Ja_cpp
|
78,793,802
| 4,505,998
|
Detect precision loss in numpy floats
|
<p>If the difference in magnitude between two numbers is too large, the smaller number will be "lost" when they are added. Does this raise any flag that I can check?</p>
<p>In float32, the significand has 24 binary digits, therefore if one number is more than about 10^7.2 times larger than the other, the smaller number will be lost. Obviously, I could use <code>decimal</code> or <code>float64</code>, but this code is just an example:</p>
<pre class="lang-py prettyprint-override"><code># Max 7.2 decimal digits of difference
## First experiment:
np.seterr(all='warn')
n1 = np.pow(np.float32(10), 9)
n2 = np.float32(1)
print("n1:", n1, "n2:", n2, "n1+n2:", n1+n2, "n1+n2==n1:", n1+n2==n1)
# n1: 1000000000.0 n2: 1.0 n1+n2: 1000000000.0 n1+n2==n1: True
## Second experiment
n1 = np.pow(np.float32(10), 9)
total = np.float32(0)
n2 = n1
for i in tqdm(range(10**9)):
total += 1
n2 += 1
print("n1:", n1, "n2:", n2, "total:", total, "n1+total:", n1+total)
# n1: 1000000000.0 n2: 1000000000.0 total: 16777216.0 n1+total: 1016777200.0
</code></pre>
<p>Is it possible to detect if this error happened? How is this kind of error called and how can I search for more information?</p>
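<p>This effect is usually called absorption (a form of round-off error), and floating-point hardware does not raise a dedicated flag for it. One way to detect it yourself is to check whether the addition actually changed the larger operand by the right amount; a pure-Python sketch with float64, where the threshold is 2<sup>53</sup>:</p>

```python
def addition_lost_precision(a, b):
    # If (a + b) - a does not recover b exactly, part of b was absorbed.
    return (a + b) - a != b

# 1e16 is above 2**53, so adding 1.0 is completely absorbed in float64.
assert addition_lost_precision(1e16, 1.0)
assert not addition_lost_precision(100.0, 1.0)
```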
|
<python><numpy><floating-point><precision>
|
2024-07-25 14:07:18
| 1
| 813
|
David DavΓ³
|
78,793,796
| 5,507,132
|
Leave one out encoding on test set with transform
|
<p><strong>Context</strong>: When preprocessing a data set using sklearn, you use <code>fit_transform</code> on the <strong>training</strong> set and <code>transform</code> on the <strong>test</strong> set, to avoid data leakage. Using leave one out (LOO) encoding, you need the target variable value to calculate the encoded value of a feature value. When using the LOO encoder in a pipeline, you can apply it to the training set using the <code>fit_transform</code> function, which accepts the features (<code>X</code>) and the target values (<code>y</code>).</p>
<p>How do I calculate the LOO encodings for the test set with the same pipeline, knowing that <code>transform</code> does not accept the target variable values as an argument? I'm quite confused about this. The <code>transform</code> function indeed transforms the columns but without considering the value of the target, since it doesn't have that information.</p>
|
<python><machine-learning><scikit-learn><encoding>
|
2024-07-25 14:05:44
| 1
| 333
|
Jelle
|
78,793,568
| 1,471,980
|
how do you pick the max value of each row of certain columns in pandas
|
<p>I have this data frame:</p>
<p>df</p>
<pre><code>Node Interface Speed Band_In carrier 1-Jun 10-Jun
Server1 wan1 100 80 ATT 80 30
Server1 wan2 100 60 Sprint 60 30
Server1 wan3 100 96 Verizon 96 15
</code></pre>
<p>I need to create a new column named Mx. It needs to be right after the carrier column, and it must only pick the max value of the columns after carrier. In this case it needs to look at "1-Jun" and "10-Jun", but the number of columns could be greater.</p>
<p>The resulting df needs to look like this:</p>
<pre><code>Node Interface Speed Band_In carrier Mx 1-Jun 10-Jun
Server1 wan1 100 80 ATT 80 80 30
Server1 wan2 100 60 Sprint 60 60 30
Server1 wan3 100 96 Verizon 96 96 15
</code></pre>
<p>I tried this:</p>
<pre><code>df['Mx']=df.max(axis=1)
</code></pre>
<p>but this takes the max over all of the row's values, including Speed and Band_In.</p>
<p>any ideas?</p>
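<p>A sketch of the positional approach, restricting the row-wise max to the columns after <code>carrier</code> and inserting <code>Mx</code> right after it (using a small stand-in frame):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Node": ["Server1", "Server1", "Server1"],
    "carrier": ["ATT", "Sprint", "Verizon"],
    "1-Jun": [80, 60, 96],
    "10-Jun": [30, 30, 15],
})

pos = df.columns.get_loc("carrier") + 1   # index of the first column after 'carrier'
mx = df.iloc[:, pos:].max(axis=1)         # row-wise max over those columns only
df.insert(pos, "Mx", mx)                  # place Mx right after 'carrier'
```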
|
<python><pandas>
|
2024-07-25 13:22:53
| 1
| 10,714
|
user1471980
|
78,793,443
| 2,207,840
|
Convert '3' to numpy.dtypes.Int64DType
|
<p>I have strings containing str representations of numpy values. I also have a list of the original dtypes, such as numpy.dtypes.Int64DType. I do not know all possible dtypes in advance. I cannot for the life of me figure out how to convert the string back to a scalar of the correct dtype.</p>
<pre class="lang-py prettyprint-override"><code>dt = numpy.dtypes.Int64DType
dt('3') # I was somewhat hoping this would work but the dtype is not callable
np.fromstring('3', dtype=dt) # ValueError: string size must be a multiple of element size
np.fromstring('3', dtype=dt, count=1) # ValueError: string is smaller than requested size
np.fromstring('[3]', dtype=dt, count=1) # ValueError: Cannot create an object array from a string
</code></pre>
<p>For the latter three calls, I also get a</p>
<blockquote>
<p>:1: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead</p>
</blockquote>
<p>Which I don't understand, because '3' is not a binary string?</p>
|
<python><numpy>
|
2024-07-25 12:56:23
| 1
| 3,513
|
Eike P.
|
78,793,368
| 100,129
|
How do I find the smallest circle that can enclose a polygon?
|
<p>Using python, how do I find the smallest circle that can enclose a polygon?</p>
<p>A polygon is defined as a set of coordinates that map the vertices inside a 2D plane.</p>
<p><strong>Guidelines</strong>:</p>
<p><strong>Input</strong>: A list of tuples representing the coordinates of the polygonβs vertices. For example, [(x1, y1), (x2, y2), ..., (xn, yn)]. Numpy, pandas, GeoPandas etc may also be sensible input options.</p>
<p><strong>Output</strong>: Coordinates of the center of the smallest enclosing circle and its radius. For instance, as a tuple: ((x_center, y_center), radius). Output can be anything reasonable: list, dict, object attributes etc.</p>
<p><strong>Constraints</strong>:
The algorithm should efficiently handle a reasonable number of vertices. Take 100 vertices as an upper limit. Finding the <a href="https://en.wikipedia.org/wiki/Convex_hull" rel="nofollow noreferrer">convex hull</a> of the vertices will help.</p>
<p><strong>Example</strong>:
For an input square with vertices [(2, 2), (0, 2), (2, 0), (0, 0)], the output should be ((1.0, 1.0), 1.4142).</p>
<p>They're guidelines. I don't want to be overly proscriptive.</p>
<p>You can refer to existing algorithms, source libraries or software packages for finding the minimum bounding circle, but the implementation should be in Python.</p>
<p>This is known as the <a href="https://en.wikipedia.org/wiki/Smallest-circle_problem" rel="nofollow noreferrer">smallest circle problem</a>.</p>
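<p>A minimal pure-Python sketch of Welzl's randomized incremental algorithm; at the stated upper limit of ~100 vertices it is effectively instant. (Degenerate collinear triples are skipped via the <code>None</code> return, which is sufficient for points in general position.)</p>

```python
import math
import random

def _circumcircle(a, b, c):
    # Circle through three points, or None if they are (near) collinear.
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy), math.dist((ux, uy), a)

def _two_point_circle(a, b):
    center = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return center, math.dist(a, b) / 2

def _contains(circle, p):
    return circle is not None and math.dist(circle[0], p) <= circle[1] + 1e-9

def smallest_enclosing_circle(points):
    pts = [tuple(map(float, p)) for p in points]
    random.shuffle(pts)  # random order gives expected O(n) behaviour
    circle = None
    for i, p in enumerate(pts):
        if _contains(circle, p):
            continue
        circle = (p, 0.0)                 # p must lie on the boundary
        for j, q in enumerate(pts[:i]):
            if _contains(circle, q):
                continue
            circle = _two_point_circle(p, q)
            for r in pts[:j]:
                if not _contains(circle, r):
                    circle = _circumcircle(p, q, r)
    return circle

center, radius = smallest_enclosing_circle([(2, 2), (0, 2), (2, 0), (0, 0)])
# center ≈ (1.0, 1.0), radius ≈ 1.4142
```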
|
<python><algorithm><computational-geometry>
|
2024-07-25 12:41:37
| 4
| 1,805
|
makeyourownmaker
|
78,793,248
| 12,990,915
|
How to remove large space between rows in matplotlib plot?
|
<p>Code:</p>
<pre><code>fig, axes = plt.subplots(2, 6)
for row in axes:
for ax in row:
## Set limits
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
## Set aspect ratio
ax.axis('scaled')
## Plot
plt.tight_layout(pad=0.5)
plt.show()
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/gYVtxKUI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYVtxKUI.png" alt="enter image description here" /></a></p>
<p>Removing <code>ax.axis('scaled')</code> removes the large space between the rows, but I would like the plot to be scaled:</p>
<p><a href="https://i.sstatic.net/oTfItkaA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTfItkaA.png" alt="enter image description here" /></a></p>
<p>It is possible to adjust the spacing between the rows using <code>figsize</code> manually, however, getting equal spacing between the subplots has to be done by eye.</p>
|
<python><matplotlib>
|
2024-07-25 12:15:23
| 2
| 383
|
user572780
|
78,793,132
| 241,552
|
Python: init exception from another (like raise ... from but with no raise)
|
<p>I am working on a piece of FastAPI app code that (already) has some exception handling, where handlers receive exceptions as function arguments. Some exceptions are converted into other exceptions and then passed on to other handler functions. My task is to log them, and I would like to keep the full stacktrace, including for those exceptions that resulted from converting other exceptions. However, when <code>return MyNewException(old_exc.args)</code> is called, all the stacktrace from the first exception is lost.</p>
<p>I know that you can chain the stacktrace if you <code>raise MyNewException(old_exc.args) from old_exc</code>, but here there's no raising, only passing on exceptions raised and caught elsewhere.</p>
<p>Is there a way I can incorporate the stacktrace from the original exceptions in my situation?</p>
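<p>One approach (a sketch; <code>MyNewException</code> stands in for whatever the handlers produce): set <code>__cause__</code> on the new exception by hand. This creates the same linkage that <code>raise ... from</code> would, so the <code>traceback</code> module renders the full chain even though nothing is re-raised:</p>

```python
import traceback

class MyNewException(Exception):
    pass

def convert(old_exc):
    new_exc = MyNewException(*old_exc.args)
    # Same linkage that `raise new_exc from old_exc` would create,
    # but without raising anything:
    new_exc.__cause__ = old_exc
    return new_exc

try:
    1 / 0
except ZeroDivisionError as e:
    converted = convert(e)

# format_exception follows __cause__, so the original traceback is kept
report = "".join(traceback.format_exception(
    type(converted), converted, converted.__traceback__))
print(report)
```

<p>The report includes the original <code>ZeroDivisionError</code> traceback followed by "The above exception was the direct cause of the following exception:".</p>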
|
<python><exception>
|
2024-07-25 11:51:00
| 1
| 9,790
|
Ibolit
|
78,793,130
| 4,599,564
|
How to check if one wallet address have already called a method on an EVM contract (PYTHON)
|
<h2>Introduction</h2>
<p>I have a list of wallets and I want to ensure that a contract function is called only once. To achieve this, I iterate through the wallets, check if each wallet has already called that function, and if not, I proceed to call it. <em>(I'm testing on the Linea and Base blockchains, but I'm looking for a universal solution for EVM blockchains.)</em></p>
<h2>Explanation</h2>
<p>I need to check if a specific contract function has been called by a wallet address after a given block number. Here are the data I have:</p>
<ul>
<li>startblock: The block from which I want to start checking.</li>
<li>contract_address: The address of the contract.</li>
<li>wallet_address: The address of the wallet.</li>
<li>contract_abi: The ABI of the contract.</li>
<li>contract_function_name: The name of the contract function I want to check.</li>
</ul>
<h2>What I tried.</h2>
<h3>Using an API to Fetch Transactions:</h3>
<p>I used the Lineascan/Basescan API to fetch transactions with the following code:</p>
<pre><code>url = (f"{self.scan_api_url}"
       f"?module=account"
       f"&action=txlist"
       f"&address={wallet_address if wallet_address is not None else self.wallet_address}"
       f"&startblock={self.quest.start_block}"
       f"&endblock=99999999"
       f"&page=1"
       f"&offset=10000"
       f"&sort=asc"
       f"&apikey={self.scan_api_key}")
</code></pre>
<p>Then I check whether contract_address appears in the returned transaction list.</p>
<p>However, this approach often fails to return recent blocks due to delays in the API.</p>
<h3>Iterating Through Blocks:</h3>
<p>I tried iterating from the latest block back to startblock and checking for wallet_address.</p>
<p><code>block = self.w3.eth.get_block(block_num, full_transactions=True)</code></p>
<p>This method is very slow because it processes a large number of blocks.</p>
<h3>Using get_logs():</h3>
<p>I attempted to use the get_logs() method but faced limitations:</p>
<p><code>logs = self.w3.eth.get_logs()</code></p>
<p>The method has a limit of 1000 blocks and I couldn't figure out how to filter by wallet address and contract function.</p>
<h2>What I'm Looking For:</h2>
<p><strong>Preferred Solution:</strong> A way to efficiently check if wallet_address has called contract_function_name on contract_address after start_block. Ideally, this solution would filter by both wallet address and contract function.</p>
<p><strong>Alternative Solution:</strong> If filtering by both wallet address and contract function is not feasible, a solution that simply checks if wallet_address interacted with contract_address would be acceptable.</p>
<p>Any help or guidance on how to achieve this efficiently would be greatly appreciated!</p>
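<p>For what it's worth, here is a sketch of the matching logic only (no web3 dependency). Transactions are dicts in the shape returned by a *scan-style <code>txlist</code> endpoint; <code>selector</code> is the 4-byte function selector as a hex string (the first 4 bytes of the keccak-256 hash of the function signature) and is assumed to be computed elsewhere:</p>

```python
def has_called(txs, wallet_address, contract_address, selector, start_block):
    """Return True if wallet_address called the function identified by
    `selector` on contract_address at or after start_block."""
    wallet = wallet_address.lower()
    contract = contract_address.lower()
    sel = selector.lower()
    for tx in txs:
        if (int(tx["blockNumber"]) >= start_block
                and tx.get("from", "").lower() == wallet
                and tx.get("to", "").lower() == contract
                and tx.get("input", "").lower().startswith(sel)
                and tx.get("isError", "0") == "0"):  # skip reverted calls
            return True
    return False
```

<p>Note that <code>eth_getLogs</code> cannot filter by transaction sender (only by emitting contract and topics), so selector matching on transaction input is the usual fallback when an indexer API lags.</p>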
|
<python><ethereum><web3py>
|
2024-07-25 11:50:17
| 1
| 1,251
|
Juan Antonio TubΓo
|
78,792,953
| 6,714,667
|
from redis.commands.search.field import TagField, TextField, VectorField ModuleNotFoundError: No module named 'redis.commands' ERROR
|
<p>from redis.commands.search.field import TagField, TextField, VectorField
ModuleNotFoundError: No module named 'redis.commands'</p>
<p>I get the error above, and I am using redis 3.5.3.
How can I rectify this?</p>
|
<python><redis>
|
2024-07-25 11:02:26
| 1
| 999
|
Maths12
|
78,792,885
| 380,111
|
How do i shift the axis of a chart in plotly
|
<p>I'm trying to make this ridgeplot chart in Plotly and it's mostly working OK, but for some reason the top trace is getting cut off and I can't figure out how to move the trace down a bit. I can do it manually by dragging the axis.</p>
<p>The code I have:</p>
<pre><code>for i, time_unit in enumerate(time_units):
    fig = go.Figure()
    df_year = df[df['year'] == time_unit]
    months = sorted(df['month'].unique())
    for j, month in enumerate(months):
        df_month = df_year[df_year['month'] == month]
        fig.add_trace(go.Violin(
            x=df_month['metric_value'],
            name=calendar.month_name[month],
            side='negative',
            box_visible=False
        ))
    fig.update_layout(
        title=f'Ridge Plot for Year {time_unit}',
        xaxis_title='Metric Value',
        yaxis_title='Month',
        yaxis=dict(automargin=True, domain=[0.00, 0.00], autorange='reversed',),
        violingap=0,
        violingroupgap=0,
        violinmode='overlay',
        height=1200
    )
    fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/cW0FJqOg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cW0FJqOg.png" alt="enter image description here" /></a></p>
<p>This is the goal:
<a href="https://i.sstatic.net/YFRAOTux.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YFRAOTux.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2024-07-25 10:48:15
| 2
| 1,565
|
Nathaniel Saxe
|
78,792,882
| 100,129
|
How do I add 'week of year' and 'day of year' columns to a python datatable?
|
<p>There is currently (version 1.1.0) no python <code>datatable.time.week()</code> or <code>datatable.time.day_of_year()</code> functions.</p>
<p>Here is some example data:</p>
<pre><code>from datetime import date
import datatable as dt
DT = dt.Frame({"Date": ["2021-05-24", "2021-06-26", "2021-07-27"]}, stype='date32')
</code></pre>
<p><a href="https://datatable.readthedocs.io/en/latest/api/time.html" rel="nofollow noreferrer">datatable.time documentation</a></p>
<hr />
<p>I'm not interested in using pandas, polars, vaex or other related packages. The question is solely concerned with python datatable and date processing.</p>
|
<python><datetime><python-datetime><py-datatable>
|
2024-07-25 10:47:59
| 1
| 1,805
|
makeyourownmaker
|
78,792,758
| 11,357,695
|
Uninstall package with non-python files that weren't in distribution
|
<p>I have a python app that has been deposited on pypi as described <a href="https://packaging.python.org/en/latest/tutorials/packaging-projects/" rel="nofollow noreferrer">here</a>. The app runs an external software (<a href="https://blast.ncbi.nlm.nih.gov/doc/blast-help/downloadblastdata.html" rel="nofollow noreferrer">blast+</a>) and generates non-python files (various database and <code>txt</code> files, and an <code>xml</code> file that is fed back into the app). Currently, my directory structure (when installed with <code>pip</code> into an environment called <code>env</code>) is:</p>
<pre><code>lib/site-packages/app/file1.py...
</code></pre>
<p>after running this becomes:</p>
<pre><code>lib/site-packages/app/file1.py...xml_file.xml, db_file.txt, db_file.pdb ...
</code></pre>
<p>When I <code>pip uninstall app</code>:</p>
<pre><code>Found existing installation: app 1.1a0
Uninstalling app-1.1a0:
Would remove:
c:\users\username\appdata\local\anaconda3\envs\app_env\lib\site-packages\app-1.1a0.dist-info\*
c:\users\username\appdata\local\anaconda3\envs\app_env\lib\site-packages\app\*
c:\users\username\appdata\local\anaconda3\envs\app_env\scripts\app.exe
Would not remove (might be manually added):
c:\users\username\appdata\local\anaconda3\envs\app_env\lib\site-packages\app\all_proteins.txt
c:\users\username\appdata\local\anaconda3\envs\app_env\lib\site-packages\app\all_proteins_db.pdb
c:\users\username\appdata\local\anaconda3\envs\app_env\lib\site-packages\app\all_proteins_db.phr
c:\users\username\appdata\local\anaconda3\envs\app_env\lib\site-packages\app\all_proteins_db.pin
c:\users\username\appdata\local\anaconda3\envs\app_env\lib\site-packages\app\all_proteins_db.pot
c:\users\username\appdata\local\anaconda3\envs\app_env\lib\site-packages\app\all_proteins_db.psq
c:\users\username\appdata\local\anaconda3\envs\app_env\lib\site-packages\app\all_proteins_db.ptf
c:\users\username\appdata\local\anaconda3\envs\app_env\lib\site-packages\app\all_proteins_db.pto
c:\users\username\appdata\local\anaconda3\envs\app_env\lib\site-packages\app\results.xml
</code></pre>
<p>This isn't a massive deal I think, as the files are rewritten every run and if you install my app again it just regenerates the original folder. But I'd rather not have unnecessary files cluttering the place up and potentially scaring new users with the <code>would not remove</code> message. Is there a way to specify that the package will generate these files and that they should be deleted when the package is (perhaps in pyproject.toml)?</p>
<p>(If not, I could make the user specify a local dir to write the files to, rather than just setting that as the dir holding the python script associated with making the files, which is what I do now - but I'd rather keep the user interface as simple as possible.)</p>
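<p>One common workaround (a sketch; the third-party <code>platformdirs</code> package does this properly across platforms) is to write generated files to a per-user data directory instead of the package directory, so <code>pip uninstall</code> never sees them and the user still doesn't have to choose a path:</p>

```python
import os
from pathlib import Path

def data_dir(app_name="app"):
    # Minimal cross-platform approximation of a per-user data dir.
    base = (os.environ.get("XDG_DATA_HOME")
            or os.environ.get("LOCALAPPDATA")
            or str(Path.home() / ".local" / "share"))
    d = Path(base) / app_name
    d.mkdir(parents=True, exist_ok=True)
    return d

# Generated files then live outside site-packages, e.g.:
# results_path = data_dir() / "results.xml"
```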
|
<python><package><delete-file><pypi><pyproject.toml>
|
2024-07-25 10:19:00
| 1
| 756
|
Tim Kirkwood
|
78,792,498
| 11,748,924
|
Remove blank area of matplotlib between distance of zero for multivariate time series
|
<p>I have two time series:</p>
<ol>
<li>First row is prediction</li>
<li>second row is ground truth</li>
</ol>
<p><a href="https://i.sstatic.net/ZPbUCKmS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZPbUCKmS.png" alt="enter image description here" /></a></p>
<p>I want to remove this red area; it should be flexible.
<a href="https://i.sstatic.net/rUf4Ajek.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rUf4Ajek.png" alt="enter image description here" /></a></p>
<p>I mean there should be no distance between them, like in this picture, without removing the x-axis and y-axis:
<a href="https://i.sstatic.net/9niLQlWK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9niLQlWK.png" alt="enter image description here" /></a></p>
<p>Here is my code, how do I do that?</p>
<pre><code>#@title Animate Define
import matplotlib.pyplot as plt

# Display Parameter
start = 0
end = 1024
Xt = X_unseen[start:end]
yp = y_pred[start:end]
yt = y_true[start:end]

# Get mask of every class for prediction
bl_pred = yp == 0
p_pred = yp == 1
qrs_pred = yp == 2
t_pred = yp == 3

# get mask of every class for ground truth
bl_true = yt == 0
p_true = yt == 1
qrs_true = yt == 2
t_true = yt == 3

# create figure with two rows and one column
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 8))

# Plotting for prediction
prev_class = None
start_idx = start
for i in range(start, end):
    current_class = None
    if bl_pred[i]:
        current_class = 'grey'
    elif p_pred[i]:
        current_class = 'orange'
    elif qrs_pred[i]:
        current_class = 'green'
    elif t_pred[i]:
        current_class = 'purple'
    if current_class != prev_class:
        if prev_class is not None:
            ax1.axvspan(start_idx, i, color=prev_class, alpha=0.5)
        start_idx = i
        prev_class = current_class

# Fill the last region
if prev_class is not None:
    ax1.axvspan(start_idx, end, color=prev_class, alpha=0.5)

# Plotting for ground truth
prev_class = None
start_idx = start
for i in range(start, end):
    current_class = None
    if bl_true[i]:
        current_class = 'grey'
    elif p_true[i]:
        current_class = 'orange'
    elif qrs_true[i]:
        current_class = 'green'
    elif t_true[i]:
        current_class = 'purple'
    if current_class != prev_class:
        if prev_class is not None:
            ax2.axvspan(start_idx, i, color=prev_class, alpha=0.5)
        start_idx = i
        prev_class = current_class

# Fill the last region
if prev_class is not None:
    ax2.axvspan(start_idx, end, color=prev_class, alpha=0.5)

# first row for prediction (X_unseen, y_pred)
ax1.plot(Xt, color='blue')
ax1.set_title('Prediction')

# second row for ground truth (X_unseen, y_true)
ax2.plot(Xt, color='blue')
ax2.set_title('Ground Truth')

# plot
plt.show()
</code></pre>
<p>Basically, this code plots the prediction in the first row and the ground truth in the second row of the same figure. I want to remove the blank (white) area between them.</p>
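<p>One way (a sketch with synthetic data standing in for the ECG arrays): create the two axes with <code>hspace=0</code> and a shared x-axis. Note that a title on the lower axes would reintroduce the gap, so its label is placed on the y-axis instead:</p>

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np

# Stacked rows with zero vertical gap between them
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 8), sharex=True,
                               gridspec_kw={"hspace": 0})
x = np.arange(1024)
ax1.plot(x, np.sin(x / 20), color="blue")
ax2.plot(x, np.cos(x / 20), color="blue")
ax1.set_title("Prediction")
ax2.set_ylabel("Ground Truth")      # avoid a between-rows title
ax1.tick_params(labelbottom=False)  # hide x tick labels on the top row
```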
|
<python><matplotlib>
|
2024-07-25 09:22:16
| 1
| 1,252
|
Muhammad Ikhwan Perwira
|
78,792,386
| 6,936,582
|
Get cumulative weight of edges without repeating already traversed paths
|
<p>I have a water pipe network where each node is a house and each edge is a pipe connecting houses. The edges have a water volume as an attribute.</p>
<p>I want to calculate the total volume reaching node 13.</p>
<p>It should sum to <code>5+2+1+6+0+3+14+4+12+5+8+10+6+9=85</code></p>
<p><a href="https://i.sstatic.net/pzuQ4Gsf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pzuQ4Gsf.png" alt="enter image description here" /></a></p>
<p>I've tried something like this, but it will repeat already traversed paths. For example, I don't want to sum the value from node 1 to 2 (the weight 5) more than once:</p>
<pre><code>import networkx as nx
from itertools import combinations

G = nx.DiGraph()
edges = [(1,2,5), (2,5,0), (3,2,2), (3,4,1), (4,5,6), (5,6,3), (7,8,14), (8,6,4), (6,9,12), (9,11,8), (10,9,5),(10,12,10), (11,12,6),(12,13,9)]
for edge in edges:
    n1, n2, weight = edge
    G.add_edge(n1, n2, volume=weight)

for n1, n2 in combinations(G.nodes, 2):
    paths = list(nx.all_simple_paths(G=G, source=n1, target=n2))
    for path in paths:
        total_weight = nx.path_weight(G=G, path=path, weight="volume")
        print(f"From node {n1} to {n2}, there's the path {'-'.join([str(x) for x in path])} \n with the total volume: {total_weight}")

# From node 1 to 2, there's the path 1-2
# with the total volume: 5
# From node 1 to 5, there's the path 1-2-5
# with the total volume: 5
# From node 1 to 6, there's the path 1-2-5-6
# ...
</code></pre>
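<p>Since each pipe's volume should be counted once, one option (a sketch using the data above) is to avoid path enumeration entirely: take every node that can reach node 13 via <code>nx.ancestors</code>, then sum the volumes of the edges whose endpoints both lie in that upstream set, so each edge contributes exactly once:</p>

```python
import networkx as nx

G = nx.DiGraph()
edges = [(1,2,5), (2,5,0), (3,2,2), (3,4,1), (4,5,6), (5,6,3), (7,8,14),
         (8,6,4), (6,9,12), (9,11,8), (10,9,5), (10,12,10), (11,12,6), (12,13,9)]
for n1, n2, weight in edges:
    G.add_edge(n1, n2, volume=weight)

target = 13
upstream = nx.ancestors(G, target) | {target}  # nodes with a path to 13
total = sum(d["volume"] for u, v, d in G.edges(data=True)
            if u in upstream and v in upstream)  # each edge counted once
print(total)
```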
|
<python><networkx>
|
2024-07-25 09:01:57
| 1
| 2,220
|
Bera
|
78,792,381
| 5,595,678
|
Enable wildcard redirect_uri and disable code verification
|
<p>I am using Django OAuth toolkit in an intentionally vulnerable application. I am painting a scenario in which, for example, there is a client application with <code>redirect_uri</code> <code>abc.com</code>.
I want to show the user that in OAuth 2.0, if the <code>redirect_uri</code> is not carefully validated, an attacker can manipulate it and redirect it to their own server.</p>
<p>In this case, if the attacker sends a link to the victim with <code>redirect_uri</code>, they will be able to receive the victim's authorization code.
Now, my first question is that I cannot see wildcard URI being possible in the OAuth toolkit. Is there any way I can enable it? Otherwise, I have to add <code>evil.com</code> to the allowed <code>redirect_uri</code> section of the app, which is not a valid scenario.</p>
<p>Similarly, suppose the attacker receives the authorization code after redirecting the user to attacker.com. From the authorization code, I cannot get the token, as the OAuth toolkit also verifies the <code>redirect_uri</code> once the code is generated. Is there any way to bypass that, so that the access token can be generated without validation of the <code>redirect_uri</code>?</p>
<p>Maybe I have made a bad decision regarding the choice of toolkit; I mean, the RFC does not say to store the authorization_code with the redirect_uri in the database and then match it during the token generation process.</p>
|
<python><oauth-2.0><oauth><access-token><django-authentication>
|
2024-07-25 09:00:37
| 0
| 1,765
|
Johnny
|
78,792,353
| 7,112,039
|
Celery class based tasks found in the worker, but gets NotRegistered when consumed
|
<p>I am configuring Celery like that</p>
<pre class="lang-py prettyprint-override"><code>from celery import Celery
from settings.config import settings

celery_app = Celery(
    broker=settings.RABBITMQ_URL,
    backend="rpc://",
)
celery_app.config_from_object(settings.CELERY_SETTINGS_MODULE)
celery_app.autodiscover_tasks(["app.tasks"], force=True)
</code></pre>
<p>In another file, I have:</p>
<pre class="lang-py prettyprint-override"><code>class AsyncTask(celery.Task):
    def run(self, *args, **kwargs):
        coro = self._run(*args, **kwargs)
        try:
            result = self._thread_isolated_worker(coro)
            return result
        except (DeprecatedTimeoutError, TimeoutError):
            raise TimeoutError

    @abstractmethod
    async def task(self, *args, **kwargs):
        # do something
        ...

async_task = celery_app.register_task(AsyncTask())
</code></pre>
<p>When I run the worker, I see the task registered, but when I run
this simple script</p>
<pre class="lang-py prettyprint-override"><code>from app.tasks import async_task
async_task.delay().get()
</code></pre>
<p>I get</p>
<pre><code>celery.exceptions.NotRegistered: 'async_task'
</code></pre>
<p>Celery version: <code>celery==5.3.6</code></p>
<p>If I run a task that is registered with the usual @shared_task decorator, it works fine.</p>
<p>I think it's something related to the registration, but it looks correct to me.</p>
|
<python><celery>
|
2024-07-25 08:54:25
| 1
| 303
|
ow-me
|
78,792,196
| 4,701,852
|
Overpass API & Road Lanes
|
<p>I've written a script in Python that, given a lat/long, generates a geojson with the roads edges, pedestrian crosses and lanes. It looks like this:</p>
<pre><code>import json
import requests


def fetch_intersection_data(lat, lon, radius: int = 20):
    """
    Fetch intersection data from OpenStreetMap using Overpass API and convert to GeoJSON.

    :param lat: Latitude of the center of the intersection
    :param lon: Longitude of the center of the intersection
    :param radius: Radius around the point to search (in meters)
    :return: GeoJSON data as a dictionary
    """
    overpass_url = "http://overpass-api.de/api/interpreter"
    overpass_query = f"""
    [out:json];
    (
      way["highway"](around:{radius},{lat},{lon});
      node["highway"="crossing"](around:{radius},{lat},{lon});
      way["lanes"](around:{radius},{lat},{lon});
      way["lanes:forward"](around:{radius},{lat},{lon});
      way["lanes:backward"](around:{radius},{lat},{lon});
    );
    out geom;
    """
    print(overpass_query)
    response = requests.get(overpass_url, params={'data': overpass_query})
    osm_data = response.json()
    with open('raw_data.json', 'w') as f:
        json.dump(osm_data, f)
    try:
        features = []
        for element in osm_data['elements']:
            feature = {
                "type": "Feature",
                "properties": element.get('tags', {}),
                "geometry": None
            }
            if element['type'] == 'way':
                if 'geometry' in element:
                    feature['geometry'] = {
                        "type": "LineString",
                        "coordinates": [[point['lon'], point['lat']] for point in element['geometry']]
                    }
                feature['properties']['id'] = element['id']
            elif element['type'] == 'node':
                feature['geometry'] = {
                    "type": "Point",
                    "coordinates": [element['lon'], element['lat']]
                }
                feature['properties']['id'] = element['id']
            if feature['geometry']:
                features.append(feature)
        geojson = {"type": "FeatureCollection", "features": features}
    except Exception as e:
        print(e)
    return geojson
</code></pre>
<p>So far so good: it's doing almost everything I need. For instance, this is how the geojson looks when I plot it on <code>geojson.io</code>:</p>
<p><a href="https://i.sstatic.net/Imi5FBWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Imi5FBWk.png" alt="enter image description here" /></a></p>
<p>Now, what it is missing is the <strong>lanes within the road</strong>. What I mean by that is that what I'd like to be able to fetch the geometries/coordinates to plot something like this (in red):</p>
<p><a href="https://i.sstatic.net/KPu01qaG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPu01qaG.png" alt="enter image description here" /></a></p>
<p>The JSON file returned by the API doesn't seem to have the lanes coordinates. I'm not sure if my query is missing something or if this information is not available.</p>
<p>Any thoughts?</p>
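<p>For what it's worth: OSM stores a single centerline geometry per way, and the <code>lanes=*</code> tags are counts, not geometries, so per-lane lines have to be derived by offsetting the centerline. A minimal sketch of offsetting one straight segment perpendicular to its direction (shapely's <code>parallel_offset</code> does this for full polylines; the lane width is an assumption, since OSM rarely tags actual widths):</p>

```python
import math

def offset_segment(p1, p2, dist):
    """Shift a 2-point segment sideways by `dist` (left of travel direction)."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length  # unit normal to the segment
    return ((x1 + nx * dist, y1 + ny * dist),
            (x2 + nx * dist, y2 + ny * dist))

# e.g. a 2-lane road: centerline offset by +/- half a lane width
lane_width = 3.5  # metres, an assumption
```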
|
<python><gis><openstreetmap><overpass-api>
|
2024-07-25 08:20:15
| 1
| 361
|
Paulo Henrique PH
|
78,792,162
| 1,082,349
|
Multi-level rolling average with missing values
|
<p>I have data on frequencies (<code>N</code>), for combinations of [from, to, subset], and the month. Importantly, when N=0, the row is missing.</p>
<pre><code> N from to subset
month
1996-01-01 8.956799 1 2 0
1996-02-01 2.068997 1 2 0
1996-03-01 1.086952 1 2 0
1996-05-01 7.103955 1 2 0
1996-01-01 4 1 2 1
1996-03-01 5 1 2 1
1996-04-01 5 1 2 1
1996-05-01 6 1 2 1
</code></pre>
<p>I want to compute a rolling average of N over these, considering the zeros.</p>
<pre><code>df.groupby(['from', 'to', 'subset']).rolling(3, center=True).mean()
</code></pre>
<p>This works, but does not treat missing rows as zeros. So, I first have to fill the missing rows.</p>
<pre><code>df.reset_index().set_index(['month', 'from', 'to', 'subset']).resample('1M', level=0).fillna(0)
</code></pre>
<p>This does not work, and gives me <code>ValueError: Upsampling from level= or on= selection is not supported, use .set_index(...) to explicitly set index to datetime-like</code>.</p>
<p>Can I not upsample for a multi-index? Should I try a different approach altogether?</p>
<p>I would want this to work for any number of groups. My expected output for data above would be missings at the boundary (where the 3M centered rolling average cannot be computed), and a rolling average including zeros for the rest:</p>
<pre><code> N from to subset
month
1996-01-01 np.nan 1 2 0
1996-02-01 [avg Jan-March] 1 2 0
1996-03-01 [avg Feb-April, April=0] 1 2 0
1996-04-01 [avg March-May, May=0] 1 2 0
1996-05-01 np.nan 1 2 0
1996-01-01 np.nan 1 2 1
1996-03-01 [avg Feb-April, Feb=0] 1 2 1
1996-04-01 [avg March-June Feb=0] 1 2 1
1996-05-01 np.nan 1 2 1
</code></pre>
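<p>One way to get the zeros in (a sketch on a single [from, to, subset] group; applying the same steps per group via <code>groupby(...).apply</code> follows the same pattern): reindex the group onto a complete monthly range with <code>fill_value=0</code>, then roll:</p>

```python
import pandas as pd

# One group's data, with the April row missing
g = pd.DataFrame(
    {"N": [8.956799, 2.068997, 1.086952, 7.103955]},
    index=pd.to_datetime(["1996-01-01", "1996-02-01",
                          "1996-03-01", "1996-05-01"]),
)

# Fill the missing months of this group with explicit zeros
full = pd.date_range(g.index.min(), g.index.max(), freq="MS")
filled = g["N"].reindex(full, fill_value=0)

rolled = filled.rolling(3, center=True).mean()
print(rolled)
```

<p>The boundary months come out as NaN, matching the expected output above.</p>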
|
<python><pandas><python-datetime>
|
2024-07-25 08:12:47
| 1
| 16,698
|
FooBar
|
78,792,124
| 10,855,529
|
Cast columns that might not exist in Polars
|
<p>I want to cast a column to another type, but there is a possibility that the column does not exist in the df.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import polars.selectors as cs

# Sample DataFrame
data = {
    "viewCount": [100, 200, 300],
    "likeCount": [10, 20, 30],
    "id": [1, 2, 3]
}
df = pl.DataFrame(data)

# Columns we want to cast
columns_to_cast = ["viewCount", "likeCount", "favoriteCount", "id", "does_not_exist"]

# Check for existence and cast if the column exists
for col in columns_to_cast:
    if col in df.columns:
        df = df.with_columns(pl.col(col).cast(pl.Float64))

print(df)
</code></pre>
<p>Can this be achieved using Polars' expression API?</p>
|
<python><python-polars>
|
2024-07-25 08:04:58
| 1
| 3,833
|
apostofes
|
78,791,781
| 511,302
|
How to encode unicode to bytes, so that the original string can be retrieved? in python 3.11
|
<p>In python 3.11 we can encode a string like:</p>
<p>string.encode('ascii', 'backslashreplace')</p>
<p>Which works neatly for, say: <code>hellö</code> => <code>hell\xf6</code></p>
<p>However, when I insert <code>hellö w\xf6rld</code> I get <code>hell\xf6 w\xf6rld</code>
(notice the second one has a literal part that looks like a character escape sequence).</p>
<p>Or in other words the following holds:</p>
<pre><code>'hellΓΆ wΓΆrld'.encode('ascii', 'backslashreplace') == 'hellΓΆ w\\xf6rld'.encode('ascii', 'backslashreplace')
</code></pre>
<p>Which obviously means that data has been lost by the encoding.</p>
<p>Is there a way to make python actually encode correctly? So also backslashes are escaped themselves? Or a library to do so?</p>
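<p>One candidate (a sketch; assumes round-tripping via escape notation is acceptable): the <code>unicode_escape</code> codec escapes backslashes themselves, so the two strings encode differently and both decode back losslessly:</p>

```python
s1 = 'hellö wörld'
s2 = 'hellö w\\xf6rld'   # contains a literal backslash sequence

e1 = s1.encode('unicode_escape')
e2 = s2.encode('unicode_escape')

# The literal backslash is doubled in e2, so no collision occurs
print(e1)
print(e2)
```

<p>Characters above U+00FF come out as <code>\uXXXX</code> escapes, which also round-trip through <code>.decode('unicode_escape')</code>.</p>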
|
<python><character-encoding>
|
2024-07-25 06:38:00
| 1
| 9,627
|
paul23
|
78,791,738
| 630,971
|
MultiLabelBinarizer: inverse_transform How can get a list of labels sorted according to their probability?
|
<p>I am doing a multi label classification and I use MultiLabelBinarizer to translate the list of labels into Zeros and Ones.</p>
<p>I could get the Labels using inverse_transform which is super. However, in a case where I want to rank the class based on their probability, i.e the higher the probability, the better the judgement of the label even (just) in case its probability is less than 0.5.</p>
<p>How can I get back the sorted list of labels based on their probability?</p>
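<p>A sketch of the ranking step (here plain NumPy arrays stand in for the fitted binarizer's <code>classes_</code> and one row of the classifier's <code>predict_proba</code> output): sort the class indices by descending probability and index the label array with them:</p>

```python
import numpy as np

classes = np.array(["news", "sports", "tech"])   # e.g. mlb.classes_
proba = np.array([0.15, 0.85, 0.40])             # one sample's probabilities

order = np.argsort(proba)[::-1]   # indices, highest probability first
ranked = list(zip(classes[order], proba[order]))
print(ranked)
```

<p>This keeps every label in the ranking, including those below 0.5.</p>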
|
<python><sorting><scikit-learn><classification><multilabel-classification>
|
2024-07-25 06:26:08
| 1
| 472
|
sveer
|
78,791,669
| 8,621,823
|
How does python's open() read mode and read binary mode deal with null bytes and f.seek?
|
<p>I did 2 experiments</p>
<ol>
<li>rb mode</li>
<li>r mode</li>
</ol>
<pre><code>with open("small_file.txt", "w") as f:
    f.seek(2)
    f.write("Content")

with open("small_file.txt", "rb") as f:
    content = f.read()

print(f"START|{content}|END")
</code></pre>
<p>f.seek(2) during write is to insert some nullbytes (2 being arbitrary).</p>
<p><strong>Output:</strong> <code>START|b'\x00\x00Content'|END</code> (No surprise here)</p>
<p>If I change <code>rb</code> to <code>r</code>, I get <code>START|Content|END</code>.</p>
<p><strong>What are the underlying concepts here to explain the behaviour of not reading (or reading but not printing?) null bytes, is it specified in any documentation?</strong></p>
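<p>For what it's worth: text mode does read the NUL characters; they are simply invisible when printed to a terminal. A sketch that makes them visible with <code>repr</code> (the temp file path is arbitrary):</p>

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "small_file.txt")
with open(path, "w") as f:
    f.seek(2)          # leaves two 0x00 bytes at the start of the file
    f.write("Content")

with open(path, "r") as f:
    text = f.read()

# The NULs are present in the str too; print() just renders them invisibly
print(repr(text))
```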
|
<python><io>
|
2024-07-25 06:05:18
| 0
| 517
|
Han Qi
|
78,791,591
| 2,181,188
|
Need help to extract this XML node - Excel connection strings in Python
|
<p>I have a Python program opening up Excel (XLSX) files, and trying to find the <code><connection></code> node.</p>
<p>This is the full XML from the <code>connections.xml</code> file.</p>
<pre><code><?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<connections
xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="xr16"
xmlns:xr16="http://schemas.microsoft.com/office/spreadsheetml/2017/revision16">
<connection
id="1" xr16:uid="{#####}" keepAlive="1"
name="Query - CargoData_small"
description="Connection to the 'CargoData_small' query in the workbook."
type="5" refreshedVersion="7" background="1">
<dbPr connection="Provider=Microsoft.Mashup.OleDb.1;Data Source=$Workbook$;Location=CargoData_small;Extended Properties=&quot;&quot;"
command="SELECT * FROM [CargoData_small]"/>
</connection>
</connections>
</code></pre>
<p>I am trying to find the <code><dbPr></code> node, but I am stuck on the child nodes, as shown in the code below:</p>
<pre><code>import zipfile
from xml.dom.minidom import parseString


def checkfile(filename):
    if zipfile.is_zipfile(filename):
        zf = zipfile.ZipFile(filename, 'r')
        if "xl/connections.xml" in zf.namelist():
            print(filename)
            xml = zf.read('xl/connections.xml')
            root = parseString(xml)
            connections = root.getElementsByTagName('connection')
            try:
                for con in connections:
                    for child in con.childNodes:
                        # there are no 'children'
                        for children in child.childNodes:
                            dsn = children.attributes.values()[0].nodeValue
                            sql = children.attributes.values()[1].nodeValue
                            writeoutput(filename, dsn, sql)
            except:
                pass
</code></pre>
<p>So what happens is I get the 'child' value, but I cannot find the dbPr section.</p>
<p>This is what I am getting as an error:</p>
<p><a href="https://i.sstatic.net/QsiTVEBn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsiTVEBn.png" alt="{TypeError}TypeError("'dict_values' object is not subscriptable")" /></a></p>
<p>I am using Pycharm as the IDE.</p>
<p>Thanks</p>
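<p>For reference, a sketch of reading the <code>dbPr</code> attributes directly with minidom (self-contained here with inline XML in the same shape as the file above). The screenshot's <code>TypeError</code> comes from subscripting <code>dict_values</code>; <code>getAttribute</code> avoids that, and <code>getElementsByTagName("dbPr")</code> skips the child-node walking entirely:</p>

```python
from xml.dom.minidom import parseString

xml = b"""<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<connections xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
  <connection id="1" name="Query - CargoData_small" type="5">
    <dbPr connection="Provider=Microsoft.Mashup.OleDb.1;Data Source=$Workbook$"
          command="SELECT * FROM [CargoData_small]"/>
  </connection>
</connections>"""

root = parseString(xml)
for db in root.getElementsByTagName("dbPr"):
    dsn = db.getAttribute("connection")  # empty string if missing
    sql = db.getAttribute("command")
    print(dsn, sql)
```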
|
<python><excel><xml>
|
2024-07-25 05:29:04
| 3
| 4,945
|
Fandango68
|
78,791,013
| 3,120,501
|
Jax jitting of kd-tree code taking an intractably long amount of time
|
<p>I've written myself into a corner with the following situation:</p>
<ul>
<li>I'm running an optimiser which requires smooth gradients to work, and I'm using Jax for automatic differentiation. Since this code is Jax jitted, this means that anything connected to it has to be Jax jit traceable.</li>
<li>I need to interpolate a function to use with the optimiser, but can't use the Scipy library as it isn't compatible with Jax (there's a jax.scipy.interpolate.RegularGridInterpolator implementation, but this isn't smooth - it only supports linear and nearest neighbour interpolation).</li>
<li>This means that I'm having to write my own Jax-compatible smooth interpolator, which I'm basing off the Scipy RBFInterpolator code. The implementation of this is very nice - it uses a kd-tree to find the nearest neighbours of a queried point in space, and then uses these to construct a local interpolation. This means that I also need to write a Jax-compatible kd-tree class (the Scipy one also isn't compatible with Jax), which I've done.</li>
</ul>
<p>The problem comes with jit-compiling the kd-tree code. I've written it in the 'standard way', using objects for the tree nodes with <code>left</code> and <code>right</code> node fields for the children. At the leaf nodes, these fields have <code>None</code> values to signify the absence of children.</p>
<p>The code runs and is functionally correct, however jit-compiling it takes a long time: 72 seconds for a tree of 64 coordinates, 131 seconds for 343 coordinates, ... and my intended dataset has over 14 million points. I think internally Jax is tracing every single possible path through the tree, which is why it's taking so long. The result is that it's blazingly quick: 0.0075s for kd-tree 10-point retrieval vs 0.4s for a brute force search over all of the points (for 343 points). These are the kind of speeds I'm hoping to obtain for use in the optimiser (without jitting it will be too slow). However it doesn't seem possible if the compilation times are going to continue to grow as experienced.</p>
<p>I thought that the problem might lie in the structure of the tree, with lots of different objects to be stored, so have also implemented a kd-tree search algorithm where the tree is represented by a set of Jax-numpy arrays (e.g. <code>coord</code>, <code>value</code>, <code>left</code> and <code>right</code>; where each index corresponds to a point in the tree) and iteration rather than recursion is used to do the tree search (this was a challenge but it works!). However, converting this to work with jit (changing if-statements for <code>jax.lax.cond</code>) is going to be complicated, and before I start I was wondering if it's going to be worth it - surely I'll have the same problem: Jax will trace all branches of the tree until the 'null terminators' (-1 values in the <code>left</code> and <code>right</code> arrays) are reached, and it will still take a very long time to compile. I've been investigating structures like <code>jax.lax.while_loop</code>, in case they might help?</p>
<p>(I've also written a hybrid of the two approaches, with an array-based tree and a recursion-based algorithm. In this case the tracing goes into an infinite loop, I think because of the fact that the null-terminator is -1 rather than None. But the arrays should be known statically (they don't change after construction, and belong to an object which is marked as a static input), so maybe the solution lies in this and I'm doing something wrong.)</p>
<p>I was wondering if I'm doing anything which is obviously wrong (or if my understanding is wrong), and if there is anything I can do to speed it up? Is it just to be expected that the compile time would be so high when there are so many code paths to trace? I don't suppose I could even build the jitted function only once and then save it?</p>
<p>I'm concerned that the only solution may be to rewrite the optimiser code so that it doesn't use Jax (e.g. if I hard-code the derivatives, and rewrite some of the code so that it operates on arrays directly instead of being vectorised across the inputs).</p>
<p>The code is available here: <a href="https://github.com/FluffyCodeMonster/jax_kd_tree" rel="nofollow noreferrer">https://github.com/FluffyCodeMonster/jax_kd_tree</a></p>
<p>All three varieties described are given: the node-based tree with recursion, the array-based tree with iteration, and the array-based tree with recursion. The former works, but is very slow to jit compile as the number of points in the tree increases; the second also works, but is not written in a jit-able way yet. The last is written to be jitted, but can't jit compile as it gets into an infinite recursion.</p>
<p>I really need to get this working urgently so that I can obtain the optimisation results.</p>
|
<python><scipy><jit><jax><kdtree>
|
2024-07-25 00:03:02
| 1
| 528
|
LordCat
|
78,790,887
| 3,754,125
|
Python multiprocessing.connection.Connection not behaving according to specifications
|
<p>According to python specifications, <code>recv()</code> method of python <code>Connection</code>, (returned from <code>multiprocessing.Pipe()</code>, throws an <code>EOFError</code> when pipe is empty and the other end of the pipe is closed. (Here is the reference: <a href="https://docs.python.org/3.9/library/multiprocessing.html#multiprocessing.connection.Connection.recv" rel="nofollow noreferrer">https://docs.python.org/3.9/library/multiprocessing.html#multiprocessing.connection.Connection.recv</a>)</p>
<p>In the following code, I expect the child process to exit immediately after the pipe is closed on the parent process. In other words, I would not see many <code>still alive!</code> prints and also see a <code>no val (got EOFError)</code> print.</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing as mp
import sys
import time
from multiprocessing.connection import Connection


def foo(conn: Connection):
    while True:
        try:
            val = conn.recv()
        except EOFError:
            print("no val (got EOFError)")
            return
        print(f"got val {val}")


if __name__ == "__main__":
    print(f"Version: {sys.version}")
    conn_r, conn_w = mp.Pipe(duplex=False)
    proc = mp.Process(target=foo, args=(conn_r,))
    proc.start()
    conn_r.close()
    for i in range(10):
        time.sleep(0.1)
        conn_w.send(i)
    conn_w.close()
    while True:
        if not proc.is_alive():
            break
        print("still alive!")
        proc.join(timeout=0.1)
</code></pre>
<p>But here is what I get: Child process never exits and I never catch the <code>EOFError</code> exception.</p>
<pre><code>$ python3 test_pipe.py
Version: 3.9.2 (default, Feb 28 2021, 17:03:44)
[GCC 10.2.1 20210110]
got val 0
got val 1
got val 2
got val 3
got val 4
got val 5
got val 6
got val 7
got val 8
still alive!
got val 9
still alive!
still alive!
still alive!
still alive!
still alive!
still alive!
still alive!
still alive!
still alive!
still alive!
still alive!
still alive!
still alive!
still alive!
still alive!
still alive!
still alive!
...
</code></pre>
<p>I also observed the same behavior on Python 3.10.
Am I missing something here? How does it behave on your computer and Python version? Does this indicate a bug in the Python implementation?</p>
|
<python><python-3.x>
|
2024-07-24 22:53:41
| 1
| 1,121
|
milaniez
|
78,790,767
| 5,013,066
|
Poetry unable to find installation candidates for a private package, but only on a GitLab runner
|
<p>I am having a very bad case of "works on my machine" today. I am using a private GitLab repository and a package from a private package registry on the same GitLab instance.</p>
<p>On my machine, I have configured the package source with a personal access token.</p>
<p>This is my pyproject.toml. (I have obfuscated the naming, but you get the idea.)</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "my-project-name"
version = "0.0.1"
description = ""
authors = ["Eleanor <eleanor@example.com>"]
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.9"

[tool.poetry.group.dev.dependencies]
mypy = "^1.10.1"
pytest = "^8.2.2"
pycodestyle = "^2.12.0"
coverage = "^7.6.0"
pytest-html = "^4.1.1"
boto3-stubs = {extras = ["s3"], version = "^1.34.144"}
boto3 = "^1.34.144"
my-private-package = {version = "^0.0.8", source = "my-private-package-source"}

[[tool.poetry.source]]
name = "my-private-package-source"
url = "https://gitlab.private.gitlab.instance/api/v4/projects/9972/packages/pypi/simple"
priority = "explicit"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>I know that source URL works because I can query it by running</p>
<pre class="lang-bash prettyprint-override"><code>poetry add --group dev --source my-private-package-source my-private-package@latest
</code></pre>
<p>on my machine and the command succeeds.</p>
<p>The relevant portion of my <em>.gitlab-ci.yml</em> is as follows:</p>
<pre class="lang-yaml prettyprint-override"><code>.poetry_setup:
  script:
    - !reference [.py_setup, script]
    - $pycommand -m pip install pipx
    - pipx_command="$pycommand -m pipx"
    - poetry_command="$pipx_command run poetry"
    - $poetry_command --version
    - !reference [.poetry_setup_sources, script]
    - $poetry_command install

.poetry_setup_sources:
  script:
    - $poetry_command config http-basic.my-private-package-source gitlab-ci-token $CI_JOB_TOKEN
    - $poetry_command config certificates.my-private-package-source.cert /cfs-certs.pem
</code></pre>
<p>When I run this GitLab pipeline, the job fails on the <code>$poetry_command install</code> step with the following error:</p>
<pre class="lang-bash prettyprint-override"><code>  - Installing my-private-package (0.0.8)
  - Installing mypy (1.11.0)
  - Installing pytest-html (4.1.1)
  - Installing pycodestyle (2.12.0)

  RuntimeError

  Unable to find installation candidates for my-private-package (0.0.8)

  at ~/.cache/pipx/d2080b0cb8a1427/lib/python3.9/site-packages/poetry/installation/chooser.py:74 in choose_for
       70│
       71│                     links.append(link)
       72│
       73│             if not links:
    →  74│                 raise RuntimeError(f"Unable to find installation candidates for {package}")
       75│
       76│             # Get the best link
       77│             chosen = max(links, key=lambda link: self._sort_key(package, link))
       78│

Cannot install my-private-package.
</code></pre>
<p>To debug, I made a change to the pipeline by switching out the $CI_JOB_TOKEN in the source configuration line to "nonsense", and I got an authentication error. What this tells me is that when the package source <em>is</em> configured correctly as I understand it, poetry does query the correct endpoint and is able to authenticate successfully.</p>
<p>So I'm stumped.</p>
<p>What do you think is going on here? Or what possible debugging next steps should I take?</p>
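<p>One debugging step I'm considering is adding a temporary debug job (a sketch; the curl flags and the <code>-vvv</code> verbosity option are standard curl/Poetry options, and the URL mirrors my source configuration) that queries the package's index page directly from the runner and reruns the failing install verbosely:</p>

```yaml
.poetry_debug:
  script:
    # Query the index page for the package with the same credentials the
    # job uses; --fail surfaces HTTP errors as a nonzero exit code
    - curl --fail --user "gitlab-ci-token:${CI_JOB_TOKEN}" "https://gitlab.private.gitlab.instance/api/v4/projects/9972/packages/pypi/simple/my-private-package/"
    # Verbose install shows every URL Poetry fetches and why candidate
    # links get rejected before the RuntimeError is raised
    - $poetry_command install -vvv
```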
|
<python><python-poetry>
|
2024-07-24 22:06:11
| 1
| 839
|
Eleanor Holley
|
78,790,660
| 6,440,589
|
Does it make sense to use Python's multiprocessing module when deploying code on Azure?
|
<p>Our team has deployed a Python script on <strong>Azure Machine Learning</strong> (AML) to process files stored on an Azure storage account.</p>
<p>Our pipeline consists of a <strong><a href="https://learn.microsoft.com/en-us/azure/data-factory/control-flow-for-each-activity" rel="nofollow noreferrer">ForEach</a></strong> activity, that calls the Python script for each or the listed files. Running it from the Azure Data Factory (ADF) triggers multiple individual pipelines that run <em>concurrently</em>. I am not using the expression <em>in parallel</em> because I am unsure how these individual jobs are assigned across the different vCPUs.</p>
<p>In addition to the "parallelization" managed by AML, would it make sense to parallelize the processing at the Python level using Python's <strong>multiprocessing</strong> module?</p>
<p>I gave it a shot using the following approach, but it did not appear to reduce the overall processing time.</p>
<pre><code>from multiprocessing import Pool

[...]

mp_processes = 2
if mp_processes:
    p = Pool(int(mp_processes))
    output1, output2 = zip(*p.map(process, process_queue))
    p.close()
    p.join()

[...]
</code></pre>
|
<python><azure><parallel-processing><multiprocessing><vcpu>
|
2024-07-24 21:31:53
| 0
| 4,770
|
Sheldon
|