QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,543,201 | 1,228,532 | Using `Unpack` on `TypeVar`s to dynamically generate function signatures | <p>I'm trying to dynamically generate the function signatures for child classes of <code>_AbstractGameObject</code>. I have successfully experimented with <code>Unpack</code>ing the concrete <code>TypedDict</code>s in the child classes, but wondered why I need to do that if I can define the <code>Unpack</code> in the base class. However, I get a mypy error at <code>_AbstractGameObject.load</code>: <code>D</code> is a <code>TypedDict</code>, but apparently it cannot be <code>Unpack</code>ed. Am I expecting too much from this experimental feature, or can it be done?</p>
<pre><code>from abc import ABC
from dataclasses import dataclass
from pathlib import Path
from typing import Generic, Self, TypedDict, TypeVar, Unpack

D = TypeVar("D", bound="_GameObjectDict")

class _GameObjectDict(TypedDict):
    name: str

class AssetDict(_GameObjectDict):
    path: Path

@dataclass
class _AbstractGameObject(ABC, Generic[D]):
    name: str

    @classmethod
    def load(cls, **kwargs: Unpack[D]) -> Self:  # <- error: Unpack item in ** argument must be a TypedDict [misc]
        return cls(**kwargs)

@dataclass
class _GameObject(_AbstractGameObject[D], Generic[D]):
    def to_dict(self):
        return _GameObjectDict(name=self.name)

@dataclass(kw_only=True)
class Asset(_GameObject[AssetDict]):
    path: Path
</code></pre>
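<p>For reference, a minimal sketch of the per-subclass workaround mentioned above, moving the <code>Unpack</code> onto a concrete <code>TypedDict</code> in the child class (class and field names here are illustrative, not from the original project):</p>

```python
# Hedged sketch (illustrative names): Unpack over a *concrete* TypedDict,
# which is the form the asker says already type-checks.
from pathlib import Path
from typing import TypedDict

try:  # Unpack lives in typing from Python 3.11 onward
    from typing import Unpack
except ImportError:  # pragma: no cover
    from typing_extensions import Unpack


class AssetDict(TypedDict):
    name: str
    path: Path


class Asset:
    def __init__(self, *, name: str, path: Path) -> None:
        self.name = name
        self.path = path

    @classmethod
    def load(cls, **kwargs: Unpack[AssetDict]) -> "Asset":
        # Concrete TypedDict, so mypy accepts the Unpack here.
        return cls(**kwargs)
```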
| <python><mypy><python-typing> | 2023-11-24 12:36:47 | 1 | 653 | James N |
77,543,062 | 3,239,855 | Monitor function calls inside Python contexts | <p>Using Python, I would like to:</p>
<ul>
<li>monitor execution of some 'selected' functions (I would like to record function name, execution time, etc.)</li>
<li>store this info only when functions are executed inside a context</li>
<li>if these contexts are nested, info should be recorded in all parent contexts</li>
</ul>
<p>I ended up with the following code, which monitors functions decorated with <code>@monitor_decorator</code>:</p>
<pre><code>import time
from dataclasses import dataclass

@dataclass
class MonitorRecord:
    function: str
    time: float

class MonitorContext:
    def __init__(self):
        self._records: list[MonitorRecord] = []

    def add_record(self, record: MonitorRecord) -> None:
        self._records.append(record)

    def __enter__(self) -> 'MonitorContext':
        handlers.register(self)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        handlers.delete(self)
        return

class MonitorHandlers:
    def __init__(self):
        self._handlers: list[MonitorContext] = []

    def register(self, handler: MonitorContext) -> None:
        self._handlers.append(handler)

    def delete(self, handler: MonitorContext) -> None:
        self._handlers.remove(handler)

    def add_record(self, record: MonitorRecord) -> None:
        for h in self._handlers:
            h.add_record(record)

handlers = MonitorHandlers()

def monitor_decorator(f):
    def _(*args, **kwargs):
        start = time.time()
        result = f(*args, **kwargs)
        handlers.add_record(
            MonitorRecord(
                function=f.__name__,
                time=time.time() - start,
            )
        )
        return result
    return _
</code></pre>
<p>This code works in simple cases like this:</p>
<pre><code>@monitor_decorator
def run():
    time.sleep(5)

with MonitorContext() as m1:
    run()
    with MonitorContext() as m2:
        run()
        run()

print(len(m1._records))
print(len(m2._records))
</code></pre>
<p>Output:</p>
<pre><code>3
2
</code></pre>
<p>But it doesn't work with multithreading:</p>
<pre><code>import threading

@monitor_decorator
def run():
    time.sleep(5)

def nested():
    with MonitorContext() as m:
        run()
    print(len(m._records))

with MonitorContext() as m1:
    threads = [threading.Thread(target=nested)
               for i in range(10)]
    [t.start() for t in threads]
    [t.join() for t in threads]

print(len(m1._records))
</code></pre>
<p>Output:</p>
<pre><code>1
2
3
4
5
6
7
9
8
10
10
</code></pre>
<p>Indeed, in this case, each thread adds its own context to the global variable <code>handlers</code> and the final result is that contexts inside a thread are monitoring other threads.</p>
<p>The problem arises from the global variable <code>handlers</code>, but so far I haven't found a simple and elegant alternative.</p>
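<p>One direction worth sketching (an assumption about a possible rework, not the asker's code): keep the stack of active contexts in a <code>contextvars.ContextVar</code> instead of a global list, and run each thread inside <code>contextvars.copy_context()</code>. Each thread then sees only its own ancestor contexts, while a parent context entered before the threads are spawned still records their calls:</p>

```python
import contextvars
import threading
import time

# Stack of active contexts for the *current* context chain only.
_active: contextvars.ContextVar[tuple] = contextvars.ContextVar("active", default=())

class MonitorContext2:
    def __init__(self):
        self.records = []

    def __enter__(self):
        self._token = _active.set(_active.get() + (self,))
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        _active.reset(self._token)

def monitored(f):
    def wrapper(*args, **kwargs):
        start = time.time()
        result = f(*args, **kwargs)
        for ctx in _active.get():  # only ancestors of the current chain
            ctx.records.append((f.__name__, time.time() - start))
        return result
    return wrapper

@monitored
def work():
    pass

per_thread = []

def in_thread():
    with MonitorContext2() as m:
        work()
    per_thread.append(len(m.records))

with MonitorContext2() as m1:
    # copy_context() snapshots m1 into each child, so m1 still sees the runs,
    # while sibling threads stay isolated from each other.
    threads = [threading.Thread(target=contextvars.copy_context().run, args=(in_thread,))
               for _ in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

print(per_thread, len(m1.records))  # each thread records 1; m1 records all 3
```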
| <python><python-dataclasses> | 2023-11-24 12:15:57 | 1 | 304 | hari |
77,542,849 | 12,454,639 | Very close to having a working Django front page and I'm not sure what is wrong | <p>I'm close to being able to spin up and iterate on this Django website that I am trying to launch.
I seem to be struggling with alternating 404 and 500 errors, presenting as each of these two:</p>
<pre><code>[24/Nov/2023 11:19:56] "GET / HTTP/1.1" 500 145
[24/Nov/2023 11:20:07] "GET / HTTP/1.1" 404 179
</code></pre>
<p>respectively</p>
<p>When I alternate my urls.py line 20 between</p>
<pre><code>path('randoplant_home', views.randoplant_home, name='randoplant_home'), # 404
path('', views.randoplant_home, name='randoplant_home'), # 500
</code></pre>
<p>I don't quite know what else I am missing to be able to spin up my app, and I've not been able to isolate what is causing my code to still error.</p>
<p>here is my project structure:</p>
<pre><code>randoplant/
├── frontend/
│ ├── templates/
│ │ └── randoplant_home.html
│ ├── views.py
│ ├── urls.py
│ └── ...
├── views.py
├── urls.py
├── settings.py
└── ...
</code></pre>
<p>here is my urls.py</p>
<pre><code>urlpatterns = [
    path('admin/', admin.site.urls),
    path('randoplant_home/', views.randoplant_home, name='randoplant_home'),
    # path('', views.randoplant_home, name='Home')
]
</code></pre>
<p>and my views.py</p>
<pre><code>def randoplant_home(request):
    return render(request, 'frontend/templates/randoplant_home.html')

####################### MANAGEMENT #################################

class PlantPageView(TemplateView):
    template_name = 'randoplant_home.html'
</code></pre>
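<p>One detail worth checking, shown here as a hedged sketch (assuming the default app-directories template loading): <code>render()</code> expects a template <em>name</em> relative to a configured template directory, not a filesystem-style path like <code>frontend/templates/...</code>:</p>

```python
# settings.py (fragment), assuming the default app-directories loader is on
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [],
        "APP_DIRS": True,  # picks up frontend/templates/ automatically
    },
]

# frontend/views.py
from django.shortcuts import render

def randoplant_home(request):
    # the name is relative to a template directory, not a filesystem path
    return render(request, "randoplant_home.html")
```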
<p>I'm very close to spinning up my home page for the first time, but I'm really not sure what else is missing.</p>
| <python><django> | 2023-11-24 11:38:16 | 0 | 314 | Syllogism |
77,542,728 | 1,462,770 | Reproducible crawling behavior with Scrapy | <p>How can I make the <a href="https://docs.scrapy.org/" rel="nofollow noreferrer">Scrapy</a> crawler reproducible? By reproducibility, I mean that if we crawl the same website twice, we crawl it in exactly the same way. Therefore, we visit pages in the same order each time. Assume the website is static. In fact, if we print the URL in the following function (i.e., <code>parse</code>) it should print the same thing each time:</p>
<pre><code>class SiteSpider(CrawlSpider):
    name = 'ReproducibleSpider'
    rules = [Rule(LinkExtractor(), callback='parse', follow=True)]

    def __init__(self, urls: list[str], domains: list[str]):
        super().__init__()
        self.start_urls = urls
        self.allowed_domain = domains

    def parse(self, response, **kwargs):
        url = response.url if hasattr(response, 'url') else None
        print(url)
        return None
</code></pre>
<p>I also tried the following piece:</p>
<pre><code>class SortedLinkExtractor(LinkExtractor):
    def extract_links(self, response):
        links = super().extract_links(response)
        return sorted(links, key=lambda x: len(x.url))
</code></pre>
<p>I realized that <a href="https://docs.scrapy.org/en/latest/topics/settings.html#concurrent-requests" rel="nofollow noreferrer">concurrent requests</a> in Scrapy are the source of the problem! Any idea how to fix it without sacrificing performance?</p>
<p>Edit: Specifying the <a href="https://docs.scrapy.org/en/latest/topics/settings.html#concurrent-requests-per-domain" rel="nofollow noreferrer">concurrency</a> with the following setup gives me some reproducibility:</p>
<pre><code> 'CONCURRENT_REQUESTS' = 1
'CONCURRENT_REQUESTS_PER_DOMAIN' = 1
</code></pre>
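<p>For reference, a hedged sketch (an assumption about the setup, not the asker's code) of pinning those settings at the spider level via <code>custom_settings</code>, which is where per-spider overrides normally live:</p>

```python
# Settings fragment: forcing one in-flight request removes the nondeterminism
# introduced by concurrency, at the cost of crawl throughput.
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class DeterministicSiteSpider(CrawlSpider):
    name = 'ReproducibleSpider'
    custom_settings = {
        'CONCURRENT_REQUESTS': 1,
        'CONCURRENT_REQUESTS_PER_DOMAIN': 1,
    }
    rules = [Rule(LinkExtractor(), callback='parse', follow=True)]
```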
| <python><web-scraping><scrapy><web-crawler> | 2023-11-24 11:18:50 | 0 | 16,657 | Amir |
77,542,619 | 12,016,688 | What is the `ExceptionTable` in the output of `dis`? | <p>In <code>python3.13</code>, when I try to disassemble <code>[i for i in range(10)]</code>, the result is as below:</p>
<pre class="lang-py prettyprint-override"><code>>>> import dis
>>>
>>> dis.dis('[i for i in range(10)]')
0 RESUME 0
1 LOAD_NAME 0 (range)
PUSH_NULL
LOAD_CONST 0 (10)
CALL 1
GET_ITER
LOAD_FAST_AND_CLEAR 0 (i)
SWAP 2
L1: BUILD_LIST 0
SWAP 2
L2: FOR_ITER 4 (to L3)
STORE_FAST_LOAD_FAST 0 (i, i)
LIST_APPEND 2
JUMP_BACKWARD 6 (to L2)
L3: END_FOR
L4: SWAP 2
STORE_FAST 0 (i)
RETURN_VALUE
-- L5: SWAP 2
POP_TOP
1 SWAP 2
STORE_FAST 0 (i)
RERAISE 0
ExceptionTable:
L1 to L4 -> L5 [2]
</code></pre>
<p>At the end of the output, there's an <code>ExceptionTable</code> section. It does not exist in previous versions of Python.</p>
<pre class="lang-py prettyprint-override"><code>Python 3.10.0b1 (default, May 4 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dis
>>>
>>> dis.dis('[i for i in range(10)]')
1 0 LOAD_CONST 0 (<code object <listcomp> at 0x7f3d412503a0, file "<dis>", line 1>)
2 LOAD_CONST 1 ('<listcomp>')
4 MAKE_FUNCTION 0
6 LOAD_NAME 0 (range)
8 LOAD_CONST 2 (10)
10 CALL_FUNCTION 1
12 GET_ITER
14 CALL_FUNCTION 1
16 RETURN_VALUE
Disassembly of <code object <listcomp> at 0x7f3d412503a0, file "<dis>", line 1>:
1 0 BUILD_LIST 0
2 LOAD_FAST 0 (.0)
>> 4 FOR_ITER 4 (to 14)
6 STORE_FAST 1 (i)
8 LOAD_FAST 1 (i)
10 LIST_APPEND 2
12 JUMP_ABSOLUTE 2 (to 4)
>> 14 RETURN_VALUE
</code></pre>
<p>I can't understand what it means, and I couldn't find any documentation for it.</p>
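<p>For what it's worth, the table is also reachable programmatically: CPython 3.11+ code objects carry a <code>co_exceptiontable</code> attribute (absent on older versions), which is the raw data that <code>dis</code> renders at the end of its output. A small version-guarded probe:</p>

```python
import dis

code = compile('[i for i in range(10)]', '<example>', 'eval')

# co_exceptiontable exists only on Python 3.11+; default to b"" elsewhere
table = getattr(code, "co_exceptiontable", b"")
print(type(table).__name__, len(table))
```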
| <python><python-3.x><cpython><python-internals> | 2023-11-24 10:58:01 | 1 | 2,470 | Amir reza Riahi |
77,542,569 | 1,232,660 | Fix badly encoded character | <p>I visited a website that contained a character <code>ø</code> in the text. A character with unicode codepoint 248 (0xf8 in hexadecimal). Indeed, the Python console confirms that:</p>
<pre class="lang-py prettyprint-override"><code>>>> chr(248)
'ø'
</code></pre>
<p>But since I understand the text, I know that the character has been encoded with the wrong encoding. It should be <code>ř</code> instead. And indeed, the <a href="https://en.wikipedia.org/wiki/Windows-1250" rel="nofollow noreferrer">Windows-1250 codepoint table</a> confirms that the value 0xf8 corresponds to the character <code>ř</code>.</p>
<p><strong>What conversions should I apply to fix the text encoding? To transform <code>ø</code> to <code>ř</code>?</strong></p>
<p>I cannot figure out the correct sequence of functions. I, quite brainlessly, tried both:</p>
<pre class="lang-py prettyprint-override"><code>>>> chr(248).encode().decode('windows-1250')
'ø'
</code></pre>
<p>and</p>
<pre class="lang-py prettyprint-override"><code>>>> chr(248).encode('windows-1250').decode()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.6/encodings/cp1250.py", line 12, in encode
return codecs.charmap_encode(input,errors,encoding_table)
UnicodeEncodeError: 'charmap' codec can't encode character '\xf8' in position 0: character maps to <undefined>
</code></pre>
<p>but, as you can see, none of it worked.</p>
| <python><character-encoding> | 2023-11-24 10:50:02 | 2 | 3,558 | Jeyekomon |
77,542,461 | 10,694,589 | How to increase contrast and detect edges in a 16-bit image? | <p>My problem is with contrast enhancement and edge detection.
I would like to get good contrast stretching and make the edge detection work.</p>
<p>First of all, I read my images using rawpy because I have .nef files (Nikon raw).
I know that my images have a bit depth per color channel equal to 14 bits.
But I can't find a function or code to read 14-bit images.
So I open my images in 16 bits with rawpy...</p>
<p>Then I use OpenCV to transform my images into grayscale and calculate (I(x,y)-Io(x,y))/Io(x,y), where I(x,y) is "brut" and Io(x,y) is "init".
I apply a mask to hide the uninteresting region.</p>
<p>My masked image (with edges that I want to detect) and my "contrasted image":
<a href="https://i.sstatic.net/CGup2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CGup2.jpg" alt="enter image description here" /></a></p>
<p>and my masked image with my edge detection:</p>
<p><a href="https://i.sstatic.net/lJS1d.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lJS1d.jpg" alt="enter image description here" /></a></p>
<p><a href="https://amubox.univ-amu.fr/s/DWqNkSrryEeAas3" rel="nofollow noreferrer">my two images are here (brut and init)</a></p>
<p>I don't understand why it doesn't work, and why the size of my images (adjusted_image and edges) isn't correct.
If you have any comments or ideas, thank you!</p>
<p>my code :</p>
<pre><code>import numpy as np
import cv2
import rawpy
import rawpy.enhance
import matplotlib.pyplot as plt
import glob
import imutils
####################
# Reading a Nikon RAW (NEF) image 2023-09-25_18-02-21.406
init="initialisation/2023-09-19_19-02-33.473.nef"
brut="DT10/16-45-31_2023-09-06.nef"
####################
# This uses rawpy library
print("reading init file using rawpy.")
raw_init = rawpy.imread(init)
image_init = raw_init.postprocess(use_camera_wb=True, output_bps=16)
print("Size of init image read:" + str(image_init.shape))
print("reading brut file using rawpy.")
raw_brut = rawpy.imread(brut)
image_brut = raw_brut.postprocess(use_camera_wb=True, output_bps=16)
print("Size of brut image read:" + str(image_brut.shape))
####################
# (grayscale) OpenCV
init_grayscale = cv2.cvtColor(image_init, cv2.COLOR_RGB2GRAY)
brut_grayscale = cv2.cvtColor(image_brut, cv2.COLOR_RGB2GRAY)
test = cv2.divide((brut_grayscale-init_grayscale),(init_grayscale))
print("test image max =" + str(np.max(test)))
# Step 1: Create an empty mask of the same shape as your image
mask = np.zeros_like(test)
mask = mask.astype(np.uint8)
# Step 2: Create a circle in the mask
height, width = mask.shape
center_y, center_x = height // 2, width // 2
radius =3* min(height, width) // 6 # Adjust the radius as needed
cv2.circle(mask, (center_x, center_y), radius, 1, thickness=-1)
# Step 3: Apply the mask to your image
masked_image = cv2.bitwise_and(test, test, mask=mask)
print("masked image max =" + str(np.max(masked_image)))
print("masked image type =" + str((masked_image.dtype)))
####################
# Adjust contrast
alpha = 10
adjusted_image = cv2.multiply(test, alpha)
#adjusted_image = np.clip(adjusted_image, 0, 65535)
print(masked_image.dtype, adjusted_image.dtype)
# Display the original image and the contrast-enhanced image
cv2.imshow('Image originale', imutils.resize(masked_image * 65535, width=1080))
cv2.imshow('Contraste augmenté', imutils.resize(adjusted_image * 65535, width=1080))
cv2.waitKey(0)
cv2.destroyAllWindows()
###Edge detection
# Apply the Canny operator to detect edges
seuil_min = 0 # Minimum threshold for weak edges
seuil_max = 0.1 # Maximum threshold for strong edges
edges = cv2.Canny(masked_image.astype(np.uint8), seuil_min, seuil_max)
# Display the original image and the edges image
cv2.imshow("Image originale", imutils.resize(masked_image * 65535, width=1080))
cv2.imshow("Contours détectés (Canny)", imutils.resize(edges * 65535, width=1080))
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
| <python><opencv><image-processing><canny-operator><imutils> | 2023-11-24 10:33:56 | 1 | 351 | Suntory |
77,542,162 | 18,349,319 | How to configure a certificate for a FastAPI app | <p>I created two applications: a backend (FastAPI, running on port 8000) and a frontend (React, running on port 80), which communicate with each other.</p>
<p>My docker-compose file:</p>
<pre><code>version: '3.7'

services:
  frontend:
    container_name: "frontend"
    build:
      context: ./frontend
    stop_signal: SIGTERM
    ports:
      - "80:80"
    volumes:
      - ./uploads:/app/uploads
    networks:
      - good_network
    depends_on:
      - backend

  backend:
    container_name: "backend"
    build:
      context: ./backend
    stop_signal: SIGTERM
    ports:
      - "8000:8000"
    networks:
      - good_network
    volumes:
      - ./uploads:/app/uploads
    depends_on:
      - postgres

  postgres:
    container_name: "postgres"
    image: postgres:16.0
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -d sugar -U postgres" ]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 5s
    restart: unless-stopped
    ports:
      - "5432:5432"
    volumes:
      - ./postgres_data:/var/lib/postgresql/data
    networks:
      - good_network

networks:
  good_network:

volumes:
  postgres_data:
</code></pre>
<p>Please help me with configuring a certificate :(</p>
<p>My tries:</p>
<pre><code>uvicorn.run(..., ssl_keyfile="./privkey.pem", ssl_certfile="./fullchain.pem") # Problem with cors :/
</code></pre>
<p>I've also tried certbot and created the certificate files, but I didn't understand what to do with them...</p>
| <python><docker><deployment><ssl-certificate><fastapi> | 2023-11-24 09:47:17 | 2 | 345 | TASK |
77,542,112 | 15,991,297 | Vectorized Lookup in Dataframe Using Numpy Array | <p>Given a numpy array such as the one below, is it possible to use vectorization to return the associated value in column HHt without using a for loop?</p>
<pre><code>ex_arr = [2643, 2644, 2647]
for i in ex_arr:
h_p = df.at[i, "HHt"]
</code></pre>
<p>Example df:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>HHt</th>
</tr>
</thead>
<tbody>
<tr>
<td>2643</td>
<td>1</td>
</tr>
<tr>
<td>2644</td>
<td>2</td>
</tr>
<tr>
<td>2645</td>
<td>3</td>
</tr>
<tr>
<td>2646</td>
<td>4</td>
</tr>
<tr>
<td>2647</td>
<td>5</td>
</tr>
<tr>
<td>2648</td>
<td>6</td>
</tr>
<tr>
<td>2649</td>
<td>7</td>
</tr>
<tr>
<td>2650</td>
<td>8</td>
</tr>
</tbody>
</table>
</div>
<p>Expected result:
1
2
5</p>
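<p>A hedged sketch of a loop-free lookup (assuming, as in the example, that the numbers in <code>ex_arr</code> are index labels of the DataFrame): pass the whole array to <code>.loc</code>, which selects all labels at once:</p>

```python
import pandas as pd

# Rebuild the example frame: HHt values 1..8 indexed by labels 2643..2650
df = pd.DataFrame({"HHt": [1, 2, 3, 4, 5, 6, 7, 8]}, index=range(2643, 2651))
ex_arr = [2643, 2644, 2647]

h_p = df.loc[ex_arr, "HHt"]   # label-based, vectorized lookup
print(h_p.tolist())           # -> [1, 2, 5]
```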
| <python><pandas><numpy> | 2023-11-24 09:39:32 | 2 | 500 | James |
77,542,003 | 521,347 | Python Pubsub Subscriber client not pulling messages when filter is applied on subscription | <p>I am using a Pub/Sub streaming pull subscription in my Python web application. When I am not applying any subscription filter, the subscriber client is able to successfully pull messages from the subscription.
However, if a subscription filter is applied, the subscriber stops pulling messages. If I go manually to the specific subscription and click on 'Pull', I can see that there are messages in the subscription (which obviously matched the filter criteria and hence are present within the subscription), but the client cannot pull any of these messages.
Do I need to do any additional configuration for the client? The code for my subscriber client is as follows:</p>
<pre><code>import os

from google.cloud import pubsub_v1

from app.services.subscription_service import save_bill_events
from app.utils.constants import BILL_SUBSCRIPTION_GCP_PROJECT_ID, BILL_EVENT_SUBSCRIPTION_ID
from app.utils.logging_tracing_manager import get_logger

logger = get_logger(__file__)

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    save_bill_events(message.data)
    message.ack()

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(os.environ.get(BILL_SUBSCRIPTION_GCP_PROJECT_ID),
                                                 BILL_EVENT_SUBSCRIPTION_ID)

# Limit the subscriber to only have a fixed number of outstanding messages at a time.
flow_control = pubsub_v1.types.FlowControl(max_messages=50)
streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback, flow_control=flow_control)

async def poll_bill_subscription():
    with subscriber:
        try:
            # When `timeout` is not set, result() will block indefinitely,
            # unless an exception is encountered first.
            streaming_pull_future.result()
        except Exception:
            # Even in case of an exception, the subscriber should keep listening.
            logger.error(
                f"An error occurred while pulling message from subscription {BILL_EVENT_SUBSCRIPTION_ID}",
                exc_info=True)
</code></pre>
| <python><google-cloud-pubsub><publish-subscribe> | 2023-11-24 09:20:49 | 1 | 1,780 | Sumit Desai |
77,541,799 | 12,200,808 | How to checkout the source code of keras 2.15.0 | <p>The latest version of <code>keras</code> in <code>pypi</code> is <code>2.15.0</code>:</p>
<p><a href="https://pypi.org/project/keras/" rel="nofollow noreferrer">https://pypi.org/project/keras/</a></p>
<pre><code>keras 2.15.0
pip install keras
Latest version
Released: Nov 7, 2023
</code></pre>
<p>But the latest release of <code>keras</code> in <code>github</code> is <code>2.14.0</code>:</p>
<p><a href="https://github.com/keras-team/keras/releases" rel="nofollow noreferrer">https://github.com/keras-team/keras/releases</a></p>
<pre><code>Keras Release 2.14.0 Latest
What's Changed
[keras/layers/normalization] Standardise docstring usage of "Default to" by @SamuelMarks in #17965
Update Python ver to 3.9 in Dockerfile by @sampathweb in #18076
[keras/saving/legacy/saved_model] Standardise docstring usage of "Default to" by @SamuelMarks in #17978
[keras/metrics] Standardise docstring usage of "Default to" by @SamuelMarks in #17972
Update example losses to bce- metrics/confusion_metrics.py by @Frightera in #18045
...
</code></pre>
<p>Where to download the source code for <code>keras</code> version <code>2.15.0</code>?</p>
| <python><tensorflow><keras><pypi> | 2023-11-24 08:48:16 | 1 | 1,900 | stackbiz |
77,541,722 | 4,281,353 | pip mechanism of ```pip install ".[source-pt]"``` | <p><a href="https://pypi.org/project/deepdoctection/0.15/" rel="nofollow noreferrer">deepdoctection</a> PyPi has the pip installation description:</p>
<pre><code>cd deepdoctection
pip install ".[source-pt]"
</code></pre>
<p><a href="https://github.com/deepdoctection/deepdoctection" rel="nofollow noreferrer">deepdoctection</a> github does not have <code>source-pt</code> in the root.</p>
<p>Help me understand which option or mechanism of <a href="https://pip.pypa.io/en/stable/cli/pip_install/" rel="nofollow noreferrer">pip install</a> is used here, and what <code>source-pt</code> is.</p>
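<p>Background that may help: <code>.[source-pt]</code> means "install the package in the current directory (<code>.</code>) together with the optional dependency group (an <em>extra</em>) named <code>source-pt</code>". Extras are declared in the package's own metadata (<code>setup.py</code>, <code>setup.cfg</code>, or <code>pyproject.toml</code>), not as a file or directory in the repository root, which is why no <code>source-pt</code> appears there. A hypothetical <code>setup.py</code> fragment illustrating the mechanism (not deepdoctection's actual metadata):</p>

```python
# Hypothetical setup.py fragment -- illustrates extras_require only,
# the package/dependency names are placeholders, not the real project's.
from setuptools import setup

setup(
    name="somepackage",
    install_requires=["numpy"],
    extras_require={
        # `pip install ".[source-pt]"` adds these on top of install_requires
        "source-pt": ["torch", "torchvision"],
    },
)
```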
| <python><pip> | 2023-11-24 08:36:32 | 0 | 22,964 | mon |
77,541,607 | 1,422,096 | High-level abstraction for serial communication | <p>On a hardware device with serial communication, I need to continuously query "foo", forever, every second (for example, to log the temperature of the device). At a random time, another thread may query "bar".</p>
<p>One option is to interrupt the "foo" queries, ask for "bar", and once done, re-start the "foo" queries again - but I'm looking for a more general solution.</p>
<p>Is there a way to make <strong>a high-level abstraction, allowing us to just do:</strong></p>
<pre><code>def thread1():
    while True:
        serial_device_abstraction.get("foo")
        time.sleep(1)

def thread2():
    time.sleep(random.random())
    serial_device_abstraction.get("bar")

threading.Thread(target=thread1).start()
threading.Thread(target=thread2).start()
</code></pre>
<p><strong>and letting the high-level abstraction handle the low-level concurrency problems?</strong></p>
<p>NB: obviously, with</p>
<pre><code>def get(query):
    serial_port.write(query)
    data = serial_port.read(8)
</code></pre>
<p>it won't work, because if two threads send bits through the serial wire at the same time, it will send/receive corrupted data.</p>
| <python><multithreading><serial-port><pyserial><abstraction> | 2023-11-24 08:17:28 | 1 | 47,388 | Basj |
77,541,537 | 3,270,926 | How to use the Intel IPP ippiRemap function using ctypes in Python? | <p>I am trying to use <code>ippiRemap</code> from the Intel IPP library shared object (<code>libippi.so</code>),
installed via conda:</p>
<pre><code>conda install -c intel ipp
</code></pre>
<p>The library shared object files are installed in the environment directory. I tried to use the <code>ippiRemap</code> function for image processing with Python <code>ctypes</code>:</p>
<pre><code>import ctypes
ipp = ctypes.cdll.loadLibrary('libippi.so')
ippiRemap = ipp.ippiRemap
</code></pre>
<p>I got the error <code>undefined symbol: ippiRemap</code>. However, this might be due to name mangling of function symbol names by C++.
Trying the <code>readelf</code> command, I got the following output:</p>
<pre><code>$ readelf -D --symbol /home/user_name/anaconda3/envs/env_name/lib/libippi.so |grep -E "FUNC.*GLOBAL.*ippiRemap.*"
376: 000000000006a180 32 FUNC GLOBAL DEFAULT 10 ippiRemap_16s_AC4R
663: 000000000006a100 32 FUNC GLOBAL DEFAULT 10 ippiRemap_16u_AC4R
685: 000000000006a080 32 FUNC GLOBAL DEFAULT 10 ippiRemap_8u_AC4R
773: 000000000006a120 32 FUNC GLOBAL DEFAULT 10 ippiRemap_16s_C1R
862: 000000000006a140 32 FUNC GLOBAL DEFAULT 10 ippiRemap_16s_C3R
910: 000000000006a020 32 FUNC GLOBAL DEFAULT 10 ippiRemap_8u_C1R
920: 000000000006a160 32 FUNC GLOBAL DEFAULT 10 ippiRemap_16s_C4R
1028: 000000000006a040 32 FUNC GLOBAL DEFAULT 10 ippiRemap_8u_C3R
1079: 000000000006a060 32 FUNC GLOBAL DEFAULT 10 ippiRemap_8u_C4R
1433: 000000000006a200 32 FUNC GLOBAL DEFAULT 10 ippiRemap_32f_AC4R
1794: 000000000006a280 32 FUNC GLOBAL DEFAULT 10 ippiRemap_64f_AC4R
1866: 000000000006a0a0 32 FUNC GLOBAL DEFAULT 10 ippiRemap_16u_C1R
1985: 000000000006a0c0 32 FUNC GLOBAL DEFAULT 10 ippiRemap_16u_C3R
2049: 000000000006a0e0 32 FUNC GLOBAL DEFAULT 10 ippiRemap_16u_C4R
2424: 000000000006a220 32 FUNC GLOBAL DEFAULT 10 ippiRemap_64f_C1R
2451: 000000000006a1a0 32 FUNC GLOBAL DEFAULT 10 ippiRemap_32f_C1R
2523: 000000000006a240 32 FUNC GLOBAL DEFAULT 10 ippiRemap_64f_C3R
2562: 000000000006a1c0 32 FUNC GLOBAL DEFAULT 10 ippiRemap_32f_C3R
2585: 000000000006a260 32 FUNC GLOBAL DEFAULT 10 ippiRemap_64f_C4R
2615: 000000000006a1e0 32 FUNC GLOBAL DEFAULT 10 ippiRemap_32f_C4R
</code></pre>
<p>Trying one of the symbol names works:</p>
<pre><code>>> ippiRemap = ipp.ippiRemap_16s_AC4R
>> ippiRemap
<_FuncPtr object at 0x7fc277ba2c80>
</code></pre>
<p>So my question is: what is the difference between each symbol name? Are they the same function? And is there a standard way to use it with <code>ctypes</code>?</p>
| <python><c++><ctypes><intel-ipp> | 2023-11-24 08:08:40 | 1 | 623 | youssef |
77,541,530 | 1,581,090 | How can I fix error "Invalid rrule byxxx generates an empty set." using a MinuteLocator for matplotlib plots? | <p>I want to create a plot using Matplotlib to show a time along the x-axis. The following code is a complete minimum workable example:</p>
<pre><code>import random
from datetime import datetime
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import dates as mdates
times = np.arange(0, 5000, 60)
x = [datetime.utcfromtimestamp(val) for val in times]
y = random.sample(range(1, 100), len(x))
formatter = mdates.DateFormatter("%H:%M")
locator = mdates.MinuteLocator(interval=10)
# locator = mdates.MinuteLocator(interval=10, byminute=10)
# locator = mdates.MinuteLocator(interval=10, byminute=[0,10,20,30,40,50])
plt.clf()
plt.gca().xaxis.set_major_formatter(formatter)
plt.gca().xaxis.set_major_locator(locator)
plt.plot(x, y)
plt.grid(True)
plt.xlabel("time [min]")
plt.ylabel("Random number")
plt.show()
</code></pre>
<p>Along with a <code>DateFormatter</code>, it creates the following axis labels:</p>
<p><a href="https://i.sstatic.net/4ubm7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4ubm7.png" alt="Enter image description here" /></a></p>
<p>However, I want the ticks to be made only at every full 10 minutes, i.e., I want ticks at times 15:30, 15:40, 15:50, etc. and not starting at 15:24. So I tried</p>
<pre><code>locator = mdates.MinuteLocator(interval=10, byminute=10)
</code></pre>
<p>which gives me an error</p>
<blockquote>
<p>ValueError: Invalid rrule byxxx generates an empty set.</p>
</blockquote>
<p>but according to <a href="https://matplotlib.org/stable/api/dates_api.html#matplotlib.dates.MinuteLocator" rel="nofollow noreferrer">the documentation</a>, you also can specify an int value or a list of ints.
But even when trying to use a list of ints</p>
<pre><code>locator = mdates.MinuteLocator(interval=10, byminute=[0,10,20,30,40,50])
</code></pre>
<p>it still gives the same error!</p>
<p>How can I fix this problem?</p>
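<p>For reference, a hedged sketch of the combination that is usually suggested for wall-clock-aligned ticks: use <code>byminute</code> alone and leave <code>interval</code> at its default of 1. The locator is backed by a dateutil <code>rrule</code>, and (this is an assumption about the rrule interaction, worth verifying against your matplotlib version) combining <code>interval=10</code> with <code>byminute</code> asks for minutes that lie both in the list <em>and</em> on the interval grid counted from the rule's start, which can come up empty:</p>

```python
import matplotlib.dates as mdates

# byminute alone (default interval=1) puts ticks exactly on :00, :10, :20, ...
# regardless of where the data starts
locator = mdates.MinuteLocator(byminute=range(0, 60, 10))
```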
| <python><matplotlib><python-dateutil> | 2023-11-24 08:07:03 | 1 | 45,023 | Alex |
77,541,498 | 10,623,444 | Applying Python UDF function per row in a polars dataframe throws unexpected exception 'expected tuple, got list' | <p>I have the following polars DF in Python</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
"user_movies": [[7064, 7153, 78009], [6, 7, 1042], [99, 110, 3927], [2, 11, 152081], [260, 318, 195627]],
"user_ratings": [[5.0, 5.0, 5.0], [4.0, 2.0, 4.0], [4.0, 4.0, 3.0], [3.5, 3.0, 4.0], [1.0, 4.5, 0.5]],
"common_movies": [[7064, 7153], [7], [110, 3927], [2], [260, 195627]]
})
print(df.head())
</code></pre>
<p>I want to create a new column named "common_movie_ratings" that will take from each rating list only the index of the movie rated in the common movies. For example, for the first row, I should return only the ratings for movies [7064, 7153,], for the second row the ratings for the movie [7], and so on and so forth.</p>
<p>For this reason, I created the following function:</p>
<pre class="lang-py prettyprint-override"><code>def get_common_movie_ratings(row): #Each row is a tuple of arrays.
common_movies = row[2] #the index of the tuple denotes the 3rd array, which represents the common_movies column.
user_ratings = row[1]
ratings_for_common_movies= [user_ratings[list(row[0]).index(movie)] for movie in common_movies]
return ratings_for_common_movies
</code></pre>
<p>Finally, I apply the UDF function on the dataframe like</p>
<pre class="lang-py prettyprint-override"><code>df["common_movie_ratings"] = df.apply(get_common_movie_ratings, return_dtype=pl.List(pl.Float64))
</code></pre>
<p>Every time I apply the function, on the 3rd iteration/row I receive the following error</p>
<blockquote>
<p>expected tuple, got list</p>
</blockquote>
<p><a href="https://i.sstatic.net/BsiLL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BsiLL.png" alt="Output after every 3rd iteration" /></a></p>
<p>I have also tried a different approach for the UDF function like</p>
<pre class="lang-py prettyprint-override"><code>def get_common_movie_ratings(row):
common_movies = row[2]
user_ratings = row[1]
ratings = [user_ratings[i] for i, movie in enumerate(row[0]) if movie in common_movies]
return ratings
</code></pre>
<p>But again on the 3rd iteration, I received the same error.</p>
<h3>Update - Data input and scenario scope (<a href="https://stackoverflow.com/q/77567521/10623444">here</a>)</h3>
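<p>Independently of how the rows are fed in, the per-row lookup logic can be isolated and unit-tested in plain Python first (a hedged refactor, not the original code), which at least rules the lookup itself out as the source of the error:</p>

```python
def common_movie_ratings(movies, ratings, common):
    # map each movie id to its rating once, then look the common ones up
    lookup = dict(zip(movies, ratings))
    return [lookup[m] for m in common]

# first three rows of the example frame
print(common_movie_ratings([7064, 7153, 78009], [5.0, 5.0, 5.0], [7064, 7153]))  # -> [5.0, 5.0]
```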
| <python><dataframe><user-defined-functions><python-polars> | 2023-11-24 08:00:48 | 2 | 1,589 | NikSp |
77,541,361 | 1,167,759 | LoRA Fine-Tuning on CPU Error: Using `load_in_8bit=True` requires Accelerate | <p>I'm trying to follow the instructions from the great blog post <a href="https://www.philschmid.de/fine-tune-flan-t5-peft" rel="nofollow noreferrer">Efficient Large Language Model training with LoRA and Hugging Face</a> to fine-tune a small model via LoRA. To prepare as much as possible, I'm trying to run everything locally via CPU first.</p>
<p>When running 'AutoModelForSeq2SeqLM.from_pretrained' I get the following error:</p>
<blockquote>
<p>ImportError: Using <code>load_in_8bit=True</code> requires Accelerate: <code>pip install accelerate</code> and the latest version of bitsandbytes <code>pip install -i https://test.pypi.org/simple/ bitsandbytes</code> or pip install bitsandbytes`</p>
</blockquote>
<p>I've tried various ways to import accelerate and bitsandbytes but without success. The fine-tuning code without LoRA works (<a href="https://www.philschmid.de/fine-tune-flan-t5-peft" rel="nofollow noreferrer">Fine-tune FLAN-T5 for chat & dialogue summarization</a>). That one doesn't use accelerate.</p>
<p>Does anyone know how to fix this issue? Or is accelerate not supported on CPU?</p>
<p>Here is what I've tried:</p>
<pre><code>!pip install "peft"
!pip install "transformers" "datasets" "accelerate" "evaluate" "bitsandbytes" loralib --upgrade --quiet
or
!pip install "peft==0.2.0"
!pip install "transformers==4.27.2" "datasets==2.9.0" "accelerate==0.17.1" "evaluate==0.4.0" "bitsandbytes==0.37.1" loralib --upgrade --quiet
or
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U git+https://github.com/huggingface/accelerate.git
</code></pre>
<p>Update Nov 27th: I get the same error when running on a V100 GPU.</p>
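<p>One sanity check worth running (the two package names below are just the ones from the error message) — in notebooks a <code>pip install</code> only takes effect after a kernel restart, so it helps to confirm what the currently running interpreter can actually see:</p>

```python
# Confirm whether the *running* interpreter can see the packages the
# error complains about. If a package shows MISSING right after a
# successful pip install, restart the kernel and check again.
import importlib.util

required = ("accelerate", "bitsandbytes")  # names taken from the error message
status = {pkg: importlib.util.find_spec(pkg) is not None for pkg in required}
for pkg, ok in status.items():
    print(pkg, "found" if ok else "MISSING -> install, then restart the kernel")
```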
| <python><huggingface-transformers><large-language-model> | 2023-11-24 07:27:30 | 1 | 962 | Niklas Heidloff |
77,541,350 | 5,462,398 | Calling Python (code, executable or some other form) from Java | <p><strong>TLDR;</strong></p>
<p>I have a Java desktop application that runs on Mac, Linux & Windows, and I have a Python library. I am trying to call the Python package's functions from Java. Any approach works as long as Python can be packaged in my application's installer and runs on users' machines without additional setup.</p>
<p><strong>Details:</strong></p>
<p>I have a Java Maven project. I have seen it work on Windows: Java called the Python script and got the response back.</p>
<pre><code> <properties >
<project.build.sourceLevel>21</project.build.sourceLevel>
<project.build.targetLevel>21</project.build.targetLevel>
<maven.compiler.source>21</maven.compiler.source>
<maven.compiler.target>21</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.13.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest</artifactId>
<version>2.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.32</version>
</dependency>
<dependency>
<groupId>org.python</groupId>
<artifactId>jython-slim</artifactId>
<version>2.7.3b1</version>
</dependency>
</dependencies>
</code></pre>
<p><strong>Java</strong></p>
<pre><code> @Test
public void getWhispered() throws Exception {
ProcessBuilder processBuilder = new ProcessBuilder("python", resolvePythonScriptPath("foo5/main.py"), "stringdata");
processBuilder.redirectErrorStream(true);
Process process = processBuilder.start();
List<String> results = readProcessOutput(process.getInputStream());
assertThat("Results should not be empty", results, is(not(empty())));
assertThat("Results should contain output of script", results,
                hasItem(containsString("Argument List"))); // Please excuse inaccuracy ... I am adapting a bit.
int exitCode = process.waitFor();
assertEquals("No errors should be detected", 0, exitCode);
}
private List<String> readProcessOutput(InputStream inputStream) throws IOException {
try (BufferedReader output = new BufferedReader(new InputStreamReader(inputStream))) {
return output.lines()
.collect(Collectors.toList());
}
}
private String resolvePythonScriptPath(String filename) {
File file = new File("src/test/resources/" + filename);
return file.getAbsolutePath();
}
</code></pre>
<p><strong>main.py</strong></p>
<pre><code>import sys
print ('Number of arguments:', len(sys.argv), 'arguments.')
print ('Argument List:', str(sys.argv))
</code></pre>
<p>But now I see the error ... <code>java.io.IOException: Cannot run program "python": CreateProcess error=2, The system cannot find the file specified</code></p>
<p>I ran it both times (when it worked, and now when it does not) without a Python installation. I uninstalled all Python and ran it... I saw it working then. But now I do not see it working.</p>
<p>I suspect that I did not restart Windows, hence maybe there was a cached form of Python available somewhere ...</p>
<p><strong>Question:</strong></p>
<p>In this invocation, is Java calling an actual Python runtime available on the host machine's command line, or does the Jython jar provide the interpreter itself?</p>
<p>And if someone can help: how can I call some Python executable/code/package and get the result back, without installing or setting up additional things on users' machines?</p>
<p>Thank you</p>
| <python><java> | 2023-11-24 07:25:25 | 1 | 1,348 | zur |
77,541,171 | 8,547,986 | Pyright ... (triple dot) is not allowed in this context | <p>I am on Python 3.11 and I am using type hints for annotations. I have a function parameter which takes a list of any length where all elements should be of type str. For this, when I do the following:</p>
<pre class="lang-py prettyprint-override"><code>def create_input(columns: list[str, ...]):
...
</code></pre>
<p>Pyright throws an error in the editor saying triple dots are not allowed in this context. But the docs seem to say it is okay. See the image for reference, and the <a href="https://docs.python.org/3/library/typing.html#annotating-tuples" rel="nofollow noreferrer">docs</a>.</p>
<p><a href="https://i.sstatic.net/9PsGL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9PsGL.png" alt="enter image description here" /></a></p>
<p>Could someone please tell me what I am doing wrong?</p>
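<p>For reference, assuming the intent is "any number of <code>str</code> elements": the <code>...</code> form only has meaning inside <code>tuple[...]</code> (variable-length homogeneous tuples). A list is already variable-length, so the element type alone is the accepted spelling:</p>

```python
# list[str, ...] is rejected because `...` is only valid inside
# tuple[...]; list[str] already means "any number of str elements".
def create_input(columns: list[str]) -> int:
    return len(columns)

# For tuples, the `...` form from the linked docs is the right tool:
def create_point(coords: tuple[float, ...]) -> int:
    return len(coords)

print(create_input(["a", "b", "c"]))  # 3
print(create_point((1.0, 2.0)))       # 2
```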
| <python><python-typing><pyright> | 2023-11-24 06:44:55 | 0 | 1,923 | monte |
77,541,118 | 10,366,334 | Numpy indexing with ndarray and PyTorch tensor | <p>I find numpy array indexing works differently with an ndarray and a PyTorch tensor of shape <code>(1,)</code> and want to know why. Please see the case below:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import torch as th
x = np.arange(10)
y = x[np.array([1])]
z = x[th.tensor([1])]
print(y, z)
</code></pre>
<p><code>y</code> comes out as <code>array([1])</code> while <code>z</code> is just the scalar <code>1</code>. What is the difference exactly?</p>
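<p>My understanding (hedged — <code>FakeTensor</code> below is a stand-in I wrote, not torch): an ndarray index takes NumPy's fancy-indexing path, so the result keeps a dimension, while a one-element torch tensor supports <code>__index__</code>, so NumPy treats it as a plain integer index and returns a scalar. A torch-free sketch of the same mechanism:</p>

```python
import numpy as np

class FakeTensor:
    """Stand-in for a 1-element torch tensor: it defines __index__,
    which NumPy checks before falling back to array conversion."""
    def __init__(self, vals):
        self.vals = list(vals)

    def __index__(self):
        if len(self.vals) != 1:
            raise TypeError("only 1-element tensors convert to an index")
        return self.vals[0]

x = np.arange(10)
y = x[np.array([1])]    # fancy indexing -> array([1]), shape (1,)
z = x[FakeTensor([1])]  # __index__ path  -> scalar, ndim 0
print(y, z)
```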
| <python><numpy><pytorch><numpy-ndarray> | 2023-11-24 06:29:39 | 1 | 314 | Nicolás_Tsu |
77,540,949 | 10,219,156 | df style apply to specific cells | <p>I have a dict from which I create a df:</p>
<pre><code>data = {'BTC': {'PNL': 834.8485, 'FUNDING': 575.20837762, 'DELTA VOL': 18845436.3535, 'HEDGE VOL': 16444662.545, 'OI': 4598885.7407, 'DEFICIT': 149.35084}, 'BTC 4HR': {'PNL': 76.2125, 'FUNDING': 197.52681642, 'DELTA VOL': 1999585.538, 'HEDGE VOL': 1729156.5247, 'OI': None, 'DEFICIT': None}, 'ETH': {'PNL': -661.112, 'FUNDING': -218.42574899, 'DELTA VOL': 10720104.9915, 'HEDGE VOL': 10088271.477, 'OI': 552973.992, 'DEFICIT': 61.9926}, 'ETH 4HR': {'PNL': 65.331, 'FUNDING': 23.57589308, 'DELTA VOL': 373668.3985, 'HEDGE VOL': 356349.724, 'OI': None, 'DEFICIT': None}, 'TOTAL': {'PNL': 2859.98348042, 'FUNDING': 871.26688115, 'DELTA VOL': 33607091.07200743, 'HEDGE VOL': 30570729.07912772, 'OI': 6019831.030358, 'DEFICIT': 9250.907426}, 'TOTAL 4HR': {'PNL': 177.84871257, 'FUNDING': 246.41302914, 'DELTA VOL': 2881419.73415996, 'HEDGE VOL': 2592306.47723815, 'OI': None, 'DEFICIT': None}}
</code></pre>
<p>After creating the df from this, I need to make only cells (0,0), (0,2) and (0,4) <strong>bold</strong>. I tried with style.applymap but couldn't succeed.</p>
<pre><code>df1 = pd.DataFrame(data).T
df1
def color_negative_red(val):
try:
if val < 0:
color = 'red'
elif val>0:
color='green'
return 'color: %s' % color
except:
return 'color: black'
</code></pre>
<p>I am also making color-code changes, which work:</p>
<pre><code>df1=df1.style.apply(styler).set_caption("Futures Risk Monitor").format("{:20,.0f}").set_table_styles([{
'selector': 'thead th',
'props': [('text-align', 'center')]
}])
display(df1)
</code></pre>
| <python><pandas><list><numpy><dictionary> | 2023-11-24 05:44:24 | 1 | 326 | Madan Raj |
77,540,934 | 5,810,125 | How can I add an active class to an image slider dynamically in a Django template? | <p>I have an image slider which does not show anything unless a slide has the active class on it as it slides through a list of images.</p>
<pre><code>{% for v in m.value.images %}
<div class="carousel-item active">
  <div class="container">
    <div class="carousel-caption">
      <h1>Another example headline.</h1>
    </div>
  </div>
{% endfor %}
</code></pre>
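<p>A common pattern for this (assuming <code>m.value.images</code> is the list being iterated, as in the snippet above) is to use <code>forloop.first</code> so only the first slide gets the <code>active</code> class:</p>

```html
{% for v in m.value.images %}
<div class="carousel-item{% if forloop.first %} active{% endif %}">
  <div class="container">
    <div class="carousel-caption">
      <h1>Another example headline.</h1>
    </div>
  </div>
</div>
{% endfor %}
```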
| <javascript><python><html><django> | 2023-11-24 05:41:26 | 1 | 505 | Nsamba Isaac |
77,540,641 | 379,075 | Can you disable Pylint inline messages in Visual Studio Code? | <p>Is there a way to disable inline Pylint (v2023.10.1) messages in Visual Studio Code (version 1.84.2 on <a href="https://en.wikipedia.org/wiki/Windows_11" rel="nofollow noreferrer">Windows 11</a>)? For example, I'm referring to "Formatting a regular string which could be an f-string" in the image below.</p>
<p><a href="https://i.sstatic.net/DPgad.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DPgad.png" alt="Example Pylint inline message" /></a></p>
<p>I work with a lot of legacy code and I can't fix every message. I like the squiggly blue line and the list of <em>problems</em>. But I find the text to be very distracting when there are a lot of messages.</p>
| <python><visual-studio-code><pylint> | 2023-11-24 03:57:55 | 1 | 1,490 | Dwayne Driskill |
77,540,541 | 11,225,821 | DRF self.paginate_queryset(queryset) doesn't respect order from self.get_queryset() | <p>I have a ListAPIView with a custom get_queryset() where the queryset is ordered by a custom annotate() field called 'mentions'.</p>
<p>I have customized the list function in my ListAPIView, but when calling self.paginate_queryset() with the queryset I'm getting a different result (not the first rows from the queryset):</p>
<pre><code>class KeywordList(ListAPIView):
"""return list of keywords
"""
pagination_class = CursorPagination
serializer_class = ListKeywordSerializer
total_reviews_count = Review.objects.all().count()
ordering = ["-mentions"]
def get_queryset(self):
queryset = Keyword.objects.all()
        queryset = queryset.annotate(...).order_by('-mentions')
        return queryset
def list(self, request, *args, **kwargs):
queryset = self.get_queryset()
print('queryset first', queryset[0])
page = self.paginate_queryset(queryset)
print('page first', page[0])
</code></pre>
<p>here is the print log:</p>
<pre><code>queryset first good
page first design program
</code></pre>
<p>As you can see, I'm getting a different result (different first index) after running the queryset through <code>self.paginate_queryset(queryset)</code>.</p>
<p>How can I fix this?</p>
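<p>My reading of the behaviour (the stub below stands in for <code>rest_framework.pagination.CursorPagination</code> so the sketch is self-contained): <code>CursorPagination</code> orders the queryset itself using its own <code>ordering</code> attribute (default <code>"-created"</code>), discarding the <code>order_by()</code> applied in <code>get_queryset()</code>; the view-level <code>ordering</code> attribute is only consulted when an <code>OrderingFilter</code> backend is installed. The usual fix is a paginator subclass:</p>

```python
# Stand-in for rest_framework.pagination.CursorPagination, which orders
# the queryset by its own `ordering` attribute (default "-created"),
# ignoring any order_by() already applied to the queryset.
class CursorPagination:
    ordering = "-created"

# Subclass so the paginator's ordering matches the annotated field,
# then set `pagination_class = MentionsCursorPagination` on the view.
class MentionsCursorPagination(CursorPagination):
    ordering = "-mentions"
```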
| <python><django><django-rest-framework> | 2023-11-24 03:18:58 | 1 | 3,960 | Linh Nguyen |
77,540,502 | 2,850,913 | MLFlow install Python 3.10.12 with pyenv-win? | <p>I'm working with MLFlow 2.7.1 on Windows 10 and when I try to serve a model via <code>mlflow serve</code> it tries to use Pyenv to install Python 3.10.12. I installed the latest version of Pyenv Win with pip and updated via <code>pyenv update</code>, but when I run <code>pyenv install --list</code> it does not list Python 3.10.12. Looking into it I found Pyenv Win does not seem to support Python 3.10.12 and the reason seems to be that python.org does not provide binaries/installers for Python 3.10.12 only source code. I tried modifying the <code>mlflow\utils\virtualenv.py</code> to use Python 3.10.11 by setting <code>version='3.10.11'</code> but then I got another error. Just wondering if anyone has any ideas on what I can do, or if there is some way I can specify to MLFlow to use a different version of Python than 3.10.12?</p>
<p>The command I am using to serve the model with MLFLow is;</p>
<p><code>mlflow models serve -m {path_to_model} -h 0.0.0.0 -p 8001</code></p>
| <python><python-3.x><pyenv><mlflow><pyenv-virtualenv> | 2023-11-24 02:56:27 | 1 | 750 | tail_recursion |
77,540,422 | 1,601,580 | How do I remove base64 strings from an arbitrary string? | <p>I have a string in Python. I want to remove base64 strings from it. I read about <a href="https://en.wikipedia.org/wiki/Base64#Decoding_Base64_with_padding" rel="nofollow noreferrer">the base64 spec</a> and <a href="https://stackoverflow.com/search?q=how%20to%20remove%20base64%20strings%20python%3F">looked around SO</a> but it doesn't look like I was able to find a clean way to remove them.</p>
<p>I tried a couple of hacky regexes but it makes my string worse; e.g., it changes the word <code>problem</code> to <code>lem</code>:</p>
<pre class="lang-py prettyprint-override"><code>def remove_base64_strings(text: str) -> str:
"""
Remove base64 encoded strings from a string.
"""
# Regular expression for matching potential base64 strings
base64_pattern = r"(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?"
# Replace found base64 strings with an empty string
return re.sub(base64_pattern, "", text)
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>import re
base64_regex = r'^([A-Za-z0-9+/]{4})*([A-Za-z0-9+/]{4}|[A-Za-z0-9+/]{3}=|[A-Za-z0-9+/]{2}==)$'
base64_strings = re.findall(base64_regex, text)
</code></pre>
<p>Is there a robust way to remove base64 strings otherwise?</p>
<p>I was thinking of splitting the words by spaces. Then finding a string that matched the above pattern and was at least 12 characters, since base64 strings look like random long strings and I want to remove them for sure.</p>
<hr />
<p>I tried this:</p>
<pre class="lang-py prettyprint-override"><code>def remove_base64_words(text: str, threshold_length: int = 24) -> str:
"""
Remove words that are suspected to be Base64 encoded strings from a sentence.
Args:
sentence (str): The sentence from which to remove Base64 encoded words.
threshold_length (int): The minimum length of a word to be considered a Base64 encoded string.
Returns:
str: The sentence with suspected Base64 encoded words removed.
"""
# # Regex pattern for Base64 encoded strings
# base64_pattern = r"\b(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?\b"
# base64_pattern = r"^([A-Za-z0-9+/]{4}){5,}([A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
# base64_pattern = r"\b(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?\b"
# # Function to replace suspected Base64 encoded words
# def replace_base64_word(matchobj):
# word = matchobj.group(0)
# if len(word) >= threshold_length:
# return ""
# else:
# return word
# # Replace words in the sentence that match the pattern and are above the threshold length
# return re.sub(base64_pattern, replace_base64_word, sentence)
"""
Remove words from the text that are of length 28 or more,
are multiples of 4, and not found in the English dictionary.
Args:
text (str): The input text.
Returns:
str: The text with suspected Base64-like non-dictionary words removed.
"""
import nltk
nltk.download('words')
from nltk.corpus import words
# Set of English words
english_words = set(words.words())
# Split the text into words
words_in_text = text.split()
# Filter out words of specific length properties that are not in the English dictionary
filtered_words = [word for word in words_in_text if not (len(word) >= threshold_length and len(word) % 4 == 0 and word.lower() not in english_words)]
# Reassemble the text
return ' '.join(filtered_words)
</code></pre>
<p>Unit tests:</p>
<pre><code> # base64
# Unit tests
test_sentences = [
("This is a test with no base64", "This is a test with no base64"),
("Base64 example: TWFuIGlzIGRpc3Rpbmd1aXNoZWQ=", "Base64 example: "),
("Short== but not base64", "Short== but not base64"),
("ValidBase64== but too short", "ValidBase64== but too short"),
("Mixed example with TWFuIGlzIGRpc3Rpbmd1aXNoZWQ= base64", "Mixed example with base64"),
]
for input_sentence, expected_output in test_sentences:
our_output: str = remove_base64_words(input_sentence)
print(f'Trying to remove Base64: {input_sentence=} --> {our_output=} {expected_output=}')
# print(f'Trying to remove Base64: {input_sentence=} {expected_output=}')
</code></pre>
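<p>A sketch of the "minimum length" idea from the last paragraph (the threshold is my own choice): anchor the pattern so it only matches standalone runs of base64-alphabet characters whose total length is consistent with padded base64 (payload in groups of 4, optional <code>==</code>/<code>=</code> tail) and at least <code>min_len</code> characters. Note the inherent limitation — any long enough token over the same alphabet (e.g. a hex hash) is indistinguishable from base64 and will also be removed:</p>

```python
import re

def remove_base64_words(text: str, min_len: int = 20) -> str:
    """Remove standalone runs of base64-alphabet characters of length
    >= min_len whose shape matches padded base64. Leaves a doubled
    space where a token was; collapse with ' '.join(text.split())."""
    groups = min_len // 4  # base64 payload comes in 4-char groups
    pattern = re.compile(
        r"(?<![A-Za-z0-9+/=])"                            # not inside a longer run
        + r"(?:[A-Za-z0-9+/]{4}){" + str(groups) + r",}"  # >= min_len payload chars
        + r"(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?"    # optional padding tail
        + r"(?![A-Za-z0-9+/=])"                           # run must end here
    )
    return pattern.sub("", text)

print(remove_base64_words("Mixed example with TWFuIGlzIGRpc3Rpbmd1aXNoZWQ= base64"))
```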
| <python><regex><replace><base64><data-cleaning> | 2023-11-24 02:27:54 | 1 | 6,126 | Charlie Parker |
77,540,338 | 1,564,947 | How can I use Playwright to PUT multipart form-data in Python? | <p>I am trying to use Playwright to PUT multipart form-data with the Python API.
Here is the relevant <a href="https://playwright.dev/python/docs/next/api/class-apirequestcontext#api-request-context-put" rel="nofollow noreferrer">documentation</a>.</p>
<p>However, it's not clear how to structure the <a href="https://playwright.dev/python/docs/next/api/class-apirequestcontext#api-request-context-put-option-multipart" rel="nofollow noreferrer">multipart</a> argument.
The docs say:</p>
<blockquote>
<pre><code>multipart: Dict[str, str|float|bool|[ReadStream]|Dict] (optional)
name: str
File name
mimeType: str
File type
buffer: bytes
File content
</code></pre>
<p>Provides an object that will be serialized as html form using multipart/form-data encoding and sent as this request body. If this parameter is specified content-type header will be set to multipart/form-data unless explicitly provided. File values can be passed either as <a href="https://nodejs.org/api/fs.html#fs_class_fs_readstream" rel="nofollow noreferrer">fs.ReadStream</a> or as file-like object containing file name, mime-type and its content.</p>
</blockquote>
<p>So, we pass it a dict with <code>str</code> keys, and either <code>str</code>, <code>float</code>, <code>bool</code>, <code>ReadStream</code> or <code>Dict</code> values. But what keys should we use? Also, how do I pass an <code>fs.ReadStream</code>, which is a JavaScript object, from a Python script? It looks like the keys should be <code>'name'</code>, <code>'mimeType'</code> and <code>'buffer'</code>. However, checking the generated HTTP request with <code>nc -l -p 8080</code> shows that a new form part is created for each key in the multipart dictionary.</p>
<p>I'm essentially trying to replicate the following <code>curl</code> command:</p>
<pre class="lang-bash prettyprint-override"><code>curl 'localhost:8080' -X PUT --form file='@test.zip;filename=test.zip;type=application/x-zip-compressed'
</code></pre>
<p>Assuming that you have previously run <code>nc -l -p 8080 | less</code>, you will see the following HTTP request:</p>
<pre class="lang-http prettyprint-override"><code>PUT / HTTP/1.1
Host: localhost:8080
User-Agent: curl/7.81.0
Accept: */*
Content-Length: 380
Content-Type: multipart/form-data; boundary=------------------------f3dde3ddff901cd3
--------------------------f3dde3ddff901cd3
Content-Disposition: form-data; name="file"; filename="test.zip"
Content-Type: application/x-zip-compressed
PK^C^D
^@^@^@^@^@<D1>`xW^N<93>ESC<91>
^@^@^@
^@^@^@^H^@^\^@test.txtUT ^@^C^Z<F7>_e^Z<F7>_eux^K^@^A^D<E8>^C^@^@^D<E8>^C^@^@Test file
PK^A^B^^^C
^@^@^@^@^@<D1>`xW^N<93>ESC<91>
^@^@^@
^@^@^@^H^@^X^@^@^@^@^@^A^@^@^@<FF><81>^@^@^@^@test.txtUT^E^@^C^Z<F7>_eux^K^@^A^D<E8>^C^@^@^D<E8>^C^@^@PK^E^F^@^@^@^@^A^@^A^@N^@^@^@L^@^@^@^@^@
--------------------------f3dde3ddff901cd3--
</code></pre>
<p>Here is a failed attempt to achieve an equivalent HTTP request with Playwright:</p>
<pre class="lang-py prettyprint-override"><code>from playwright.sync_api import sync_playwright
with sync_playwright() as playwright:
browser = playwright.chromium.launch()
context = browser.new_context()
page = context.new_page()
upload_file = "test.zip"
with open(upload_file, "rb") as file:
response = page.request.put(
"http://localhost:8080",
multipart={
"name": "file",
"filename": upload_file,
"mimeType": "application/x-zip-compressed",
"buffer": file.read(),
}
)
print(response)
</code></pre>
<p>This produces the following (clearly wrong) HTTP request:</p>
<pre class="lang-http prettyprint-override"><code>PUT / HTTP/1.1
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/115.0.5790.24 Safari/537.36
accept: */*
accept-encoding: gzip,deflate,br
content-type: multipart/form-data; boundary=----WebKitFormBoundarynU6IBe6rgn1KUgpw
content-length: 365
Host: localhost:8080
Connection: close
------WebKitFormBoundarynU6IBe6rgn1KUgpw
content-disposition: form-data; name="name"
file
------WebKitFormBoundarynU6IBe6rgn1KUgpw
content-disposition: form-data; name="filename"
test.zip
------WebKitFormBoundarynU6IBe6rgn1KUgpw
content-disposition: form-data; name="mimeType"
application/x-zip-compressed
------WebKitFormBoundarynU6IBe6rgn1KUgpw--
</code></pre>
<p>How can I replicate the curl HTTP request with Playwright?</p>
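<p>Rereading the quoted docs, my interpretation (hedged — the shape below is inferred from the docs, not verified against every Playwright release) is that each top-level key of <code>multipart</code> is a form <em>field name</em>, and a file is passed as a nested dict with <code>name</code>, <code>mimeType</code> and <code>buffer</code> keys. That would explain the failed attempt: the flat dict produced one text part per key. The intended structure would look like:</p>

```python
# Sketch of the multipart payload shape: the outer key is the form field
# name ("file" -> Content-Disposition name="file"); the nested dict is
# the file payload (name -> filename, mimeType -> Content-Type, buffer
# -> file bytes). `payload` stands in for open("test.zip", "rb").read().
payload = b"PK\x03\x04"
multipart = {
    "file": {
        "name": "test.zip",
        "mimeType": "application/x-zip-compressed",
        "buffer": payload,
    }
}
# then, inside the Playwright session:
# page.request.put("http://localhost:8080", multipart=multipart)
```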
| <python><http><multipartform-data><put><playwright-python> | 2023-11-24 01:55:23 | 1 | 2,879 | daviewales |
77,540,153 | 6,163,463 | Type hinting decorator which injects the value, but also supports passing the value | <p>I am trying to implement a decorator which injects <code>DBConnection</code>. The problem I am facing is that I would like to support both: passing the argument <strong>and</strong> depending on decorator to inject it. I have tried doing this with <code>@overload</code> but failed. Here is reproducible code:</p>
<pre><code>import inspect
from functools import wraps
from typing import Awaitable, Callable, Concatenate, ParamSpec, TypeVar
from typing_extensions import reveal_type
class DBConnection:
...
T = TypeVar("T")
P = ParamSpec("P")
def inject_db_connection(
f: Callable[Concatenate[DBConnection, P], Awaitable[T]]
) -> Callable[P, Awaitable[T]]:
@wraps(f)
async def inner(*args: P.args, **kwargs: P.kwargs) -> T:
signature = inspect.signature(f).parameters
passed_args = dict(zip(signature, args))
if "db_connection" in kwargs or "db_connection" in passed_args:
return await f(*args, **kwargs)
return await f(DBConnection(), *args, **kwargs)
return inner
@inject_db_connection
async def get_user(db_connection: DBConnection, user_id: int) -> dict:
assert db_connection
return {"user_id": user_id}
async def main() -> None:
# ↓ No issue, great!
user1 = await get_user(user_id=1)
# ↓ Understandably fails with:
# `Unexpected keyword argument "db_connection" for "get_user" [call-arg]`
# but I would like to support passing `db_connection` explicitly as well.
db_connection = DBConnection()
user2 = await get_user(db_connection=db_connection, user_id=1)
# ↓ Revealed type is "builtins.dict[Any, Any]", perfect.
reveal_type(user1)
# ↓ Revealed type is "builtins.dict[Any, Any]", perfect.
    reveal_type(user2)
</code></pre>
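<p>For what it's worth, the runtime logic already handles both call styles — stripped of annotations it runs cleanly, which narrows the problem to the static signature alone (the <code>Callable[P, ...]</code> return type erases <code>db_connection</code> from <code>P</code> entirely, so the checker has no way to re-admit it without overloads):</p>

```python
# The decorator from the question with annotations removed: both the
# injected and the explicit call styles work at runtime.
import asyncio
import inspect
from functools import wraps

class DBConnection:
    pass

def inject_db_connection(f):
    @wraps(f)
    async def inner(*args, **kwargs):
        params = inspect.signature(f).parameters
        passed = dict(zip(params, args))
        if "db_connection" in kwargs or "db_connection" in passed:
            return await f(*args, **kwargs)
        return await f(DBConnection(), *args, **kwargs)
    return inner

@inject_db_connection
async def get_user(db_connection, user_id):
    assert isinstance(db_connection, DBConnection)
    return {"user_id": user_id}

injected = asyncio.run(get_user(user_id=1))                                # injected
explicit = asyncio.run(get_user(db_connection=DBConnection(), user_id=2))  # explicit
```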
| <python><mypy><python-typing> | 2023-11-24 00:24:57 | 1 | 337 | Michal K |
77,540,116 | 5,394,072 | Where is the request argument coming from in pytest fixtures? | <p>I have a question with the request argument <a href="https://github.com/GokuMohandas/Made-With-ML/blob/540754392af9f49e637b491009586a61c54e2268/tests/model/conftest.py#L12" rel="nofollow noreferrer">in the following code</a>:</p>
<pre><code>@pytest.fixture(scope="module")
def run_id(request):
return request.config.getoption("--run-id")
</code></pre>
<p>In that line of code, where are we getting the <code>request</code> argument from? I searched the repo and <code>request</code> isn't defined as a fixture.</p>
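<p>To the best of my knowledge, <code>request</code> is one of pytest's built-in fixtures, so it is never defined in any repo's conftest — pytest resolves fixtures by parameter <em>name</em> against its own registry. A toy illustration of that name-based injection (this mimics the idea, it is not pytest's actual implementation):</p>

```python
# Toy name-based injection: the runner inspects a fixture function's
# parameter names and supplies matching entries from a registry.
# pytest does the same, and "request" is always in its registry.
import inspect

BUILTINS = {"request": "<stand-in for pytest's FixtureRequest>"}

def resolve(fixture_func):
    names = inspect.signature(fixture_func).parameters
    return {n: BUILTINS[n] for n in names if n in BUILTINS}

def run_id(request):
    return request

result = run_id(**resolve(run_id))
```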
| <python><pytest> | 2023-11-24 00:06:07 | 1 | 738 | tjt |
77,539,817 | 23,457 | How to get all declared or just typed parameters of a class | <p>How to write a function that prints out all attributes of a class with types?</p>
<pre class="lang-py prettyprint-override"><code>class Parent:
parent_param: str
parent_default: str | None = None
def find_params_meta(self):
...
# enter magic code, filter out methods and attributes starting with `"_"`
class Child(Parent):
child_param: str
_params_meta: dict[str, Any] | None = None
def __init__(self, parent_param: str, child_param: str):
self._params_meta = self.find_params_meta()
self.child_param = child_param
self.parent_param = parent_param
assert Child("parent param", "child param")._params_meta == {
"parent_param": {"types": [str]},
"parent_default": {"types": [str, None], "default": None},
"child_param": {"types": [str]},
}
</code></pre>
<p>The <code>parent_param</code> and <code>child_param</code> attributes are not instantiated.
<code>getattr(self, "parent_param")</code> raises the <code>AttributeError: 'Child' object has no attribute 'parent_param'</code>. <code>type(self).__dict__</code>, <code>dir(self)</code> or <code>inspect.getmembers(self)</code> are not printing them. <code>inspect.get_annotations(type(self))</code> is closer as it prints <code>{'child_param': <class 'str'>, ...}</code>, but it does not return <code>Parent</code>'s attributes.</p>
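<p>One approach that does pick up the parents' attributes (a sketch; <code>Optional</code> is used instead of <code>str | None</code> only to keep the snippet version-agnostic): <code>typing.get_type_hints</code> merges annotations across the whole MRO, unlike <code>inspect.get_annotations</code> on a single class, and private names can be filtered out afterwards:</p>

```python
# get_type_hints walks the MRO, so Parent's annotations are included;
# names starting with "_" are filtered out, as in the question.
from typing import Optional, get_type_hints

class Parent:
    parent_param: str
    parent_default: Optional[str] = None

class Child(Parent):
    child_param: str
    _params_meta: Optional[dict] = None

params = {
    name: tp
    for name, tp in get_type_hints(Child).items()
    if not name.startswith("_")
}
print(params)
```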
| <python><class><inspect> | 2023-11-23 22:03:41 | 2 | 4,959 | zalun |
77,539,710 | 12,415,855 | Always getting the same values when iterating using Selenium? | <p>I try to collect some data using the following code:</p>
<pre><code>import time
import os
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
if __name__ == '__main__':
print(f"Checking Browser driver...")
os.environ['WDM_LOG'] = '0'
options = Options()
options.add_argument("start-maximized")
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
waitWD = WebDriverWait (driver, 10)
baseLink = "https://residentialprotection.alberta.ca/public-registry/Property"
print(f"Working for {baseLink}")
driver.get (baseLink)
waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@aria-owns="Municipality_listbox"]'))).send_keys("Calgary")
waitWD.until(EC.presence_of_element_located((By.XPATH, '//button[@id="show-results"]'))).click()
time.sleep(5)
countElems = driver.find_elements(By.XPATH,'//tbody//tr[@role="row"]')
print(len(countElems))
for idx in range(len(countElems)):
time.sleep(3)
elems = driver.find_elements(By.XPATH,'//tbody//tr[@role="row"]')
elems[idx].click()
time.sleep(3)
soup = BeautifulSoup (driver.page_source, 'lxml')
worker = soup.find("label", {"for": "FileNumber"})
wFileNumber = worker.find_next("td").text.strip()
print(f"{idx}: {wFileNumber}")
closeElem = driver.find_elements(By.XPATH,'//a[@aria-label="Close"]')[-1]
closeElem.click()
driver.quit()
</code></pre>
<p>Watching the opened Chrome window, the script clicks all 10 rows one by one and then parses each file's file number with bs4. But in the output I always get the same file number 10 times (the file number from the first row):</p>
<pre><code>$ python temp2.py
Checking Browser driver...
Working for https://residentialprotection.alberta.ca/public-registry/Property
10
0: 21RU3557182
1: 21RU3557182
2: 21RU3557182
3: 21RU3557182
4: 21RU3557182
5: 21RU3557182
6: 21RU3557182
7: 21RU3557182
8: 21RU3557182
9: 21RU3557182
(selenium)
</code></pre>
<p>Why do I not get the different file numbers that I can see in the Chrome browser opened by Selenium?</p>
| <python><selenium-webdriver><beautifulsoup> | 2023-11-23 21:31:00 | 2 | 1,515 | Rapid1898 |
77,539,700 | 726,730 | Setting QT_AUTO_SCREEN_SCALE_FACTOR cause QTableWidget gridline and headerView border line mis-aligned | <p>Example:</p>
<p>File: <code>table.py</code></p>
<pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'table.ui'
#
# Created by: PyQt5 UI code generator 5.15.9
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Dialog(object):
def setupUi(self, Dialog):
Dialog.setObjectName("Dialog")
Dialog.resize(410, 311)
self.gridLayout = QtWidgets.QGridLayout(Dialog)
self.gridLayout.setObjectName("gridLayout")
self.tableWidget = QtWidgets.QTableWidget(Dialog)
font = QtGui.QFont()
font.setPointSize(26)
self.tableWidget.setFont(font)
self.tableWidget.setFocusPolicy(QtCore.Qt.NoFocus)
self.tableWidget.setStyleSheet("QScrollBar{\n"
" border:1px solid #ABABAB;\n"
"}\n"
"\n"
"QScrollBar::add-page, QScrollBar::sub-page{\n"
" background: rgb(223, 223, 223);\n"
"}\n"
"\n"
"QScrollBar:vertical {\n"
" width: 13px;\n"
" margin: 14px 0 14px 0; \n"
"}\n"
"\n"
"QScrollBar::handle:vertical {\n"
" background:rgb(250,250,250);\n"
" border:1px solid #ABABAB;\n"
" border-left:none;\n"
" border-right:none;\n"
"}\n"
" \n"
"QScrollBar::add-line:vertical {\n"
" background:rgb(249,249,249);\n"
" height: 13px;\n"
" subcontrol-position: bottom;\n"
" subcontrol-origin: margin; \n"
" border:1px solid #ABABAB;\n"
"}\n"
"\n"
"QScrollBar::sub-line:vertical {\n"
" background: rgb(249,249,249);\n"
" height: 13px;\n"
" subcontrol-position: top;\n"
" subcontrol-origin: margin;\n"
" border:1px solid #ABABAB;\n"
"}\n"
"\n"
"QScrollBar:horizontal {\n"
" background:rgb(223, 223, 223);\n"
" height: 13px;\n"
" margin: 0 14px 0 14px; \n"
"}\n"
"\n"
"QScrollBar::handle:horizontal {\n"
" background:rgb(250,250,250);\n"
" border:1px solid #ABABAB;\n"
" border-top:none;\n"
" border-bottom:none;\n"
"}\n"
" \n"
"QScrollBar::add-line:horizontal {\n"
" background:rgb(249,249,249);\n"
" width: 13px;\n"
" subcontrol-position: right;\n"
" subcontrol-origin: margin; \n"
" border:1px solid #ABABAB;\n"
"}\n"
"\n"
"QScrollBar::sub-line:horizontal {\n"
" background: rgb(249,249,249);\n"
" width: 13px;\n"
" subcontrol-position: left;\n"
" subcontrol-origin: margin;\n"
" border:1px solid #ABABAB;\n"
"}\n"
" \n"
"\n"
"QScrollBar::right-arrow,\n"
"QScrollBar::left-arrow,\n"
"QScrollBar::up-arrow,\n"
"QScrollBar::down-arrow {\n"
" width: 6px;\n"
" height: 6px;\n"
" background: white;\n"
"}\n"
"\n"
"QScrollBar::right-arrow {\n"
" image: url(:/scrollbar/assets/icons/scrollbar/arrow-right.png);\n"
"}\n"
"QScrollBar::left-arrow {\n"
" image: url(:/scrollbar/assets/icons/scrollbar/arrow-left.png);\n"
"}\n"
"QScrollBar::up-arrow {\n"
" image: url(:/scrollbar/assets/icons/scrollbar/arrow-up.png)\n"
"}\n"
"QScrollBar::down-arrow {\n"
" image: url(:/scrollbar/assets/icons/scrollbar/arrow-down.png);\n"
"}\n"
"\n"
"QTableWidget{\n"
" border:none;\n"
"}\n"
"\n"
"QHeaderView:section{\n"
" background:rgb(220, 245, 255);\n"
" padding:10px;\n"
"}\n"
"\n"
"QHeaderView:section::vertical{\n"
" border:1px solid #ABABAB;\n"
" border-top:none;\n"
"}\n"
"\n"
"QHeaderView:section:last::vertical{\n"
" border-bottom:none;\n"
"}\n"
"\n"
"\n"
"QHeaderView:section::horizontal{\n"
" border:none;\n"
" border-right:1px solid #ABABAB;\n"
"}\n"
"\n"
"QHeaderView:section::horizontal:last{\n"
" border:none;\n"
"}\n"
"\n"
"QTableWidget::item {\n"
" padding: 10px 10px 10px 10px;\n"
" border:none;\n"
"}\n"
"\n"
"QTableWidget QTableCornerButton::section {\n"
" border:1px solid #ABABAB;\n"
" border-top:none;\n"
" border-left:none;\n"
"}")
self.tableWidget.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
self.tableWidget.setShowGrid(False)
self.tableWidget.setObjectName("tableWidget")
self.tableWidget.setColumnCount(20)
self.tableWidget.setRowCount(18)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(0, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(1, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(2, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(3, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(4, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(5, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(6, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(7, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(8, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(9, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(10, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(11, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(12, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(13, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(14, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(15, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(16, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setVerticalHeaderItem(17, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(0, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(1, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(2, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(3, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(4, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(5, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(6, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(7, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(8, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(9, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(10, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(11, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(12, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(13, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(14, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(15, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(16, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(17, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(18, item)
item = QtWidgets.QTableWidgetItem()
self.tableWidget.setHorizontalHeaderItem(19, item)
self.tableWidget.horizontalHeader().setHighlightSections(False)
self.gridLayout.addWidget(self.tableWidget, 0, 0, 1, 1)
self.retranslateUi(Dialog)
QtCore.QMetaObject.connectSlotsByName(Dialog)
def retranslateUi(self, Dialog):
_translate = QtCore.QCoreApplication.translate
Dialog.setWindowTitle(_translate("Dialog", "Dialog"))
item = self.tableWidget.verticalHeaderItem(0)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(1)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(2)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(3)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(4)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(5)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(6)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(7)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(8)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(9)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(10)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(11)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(12)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(13)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(14)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(15)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(16)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.verticalHeaderItem(17)
item.setText(_translate("Dialog", "New Row"))
item = self.tableWidget.horizontalHeaderItem(0)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(1)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(2)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(3)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(4)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(5)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(6)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(7)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(8)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(9)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(10)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(11)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(12)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(13)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(14)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(15)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(16)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(17)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(18)
item.setText(_translate("Dialog", "New Column"))
item = self.tableWidget.horizontalHeaderItem(19)
item.setText(_translate("Dialog", "New Column"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
Dialog = QtWidgets.QDialog()
ui = Ui_Dialog()
ui.setupUi(Dialog)
Dialog.show()
sys.exit(app.exec_())
</code></pre>
<p>File: <code>run_me.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from PyQt5 import QtWidgets, QtCore, QtGui
from table import Ui_Dialog
import os
import sys
class Run_me:
def __init__(self):
os.environ["QT_AUTO_SCREEN_SCALE_FACTOR"] = "1"
self.app = QtWidgets.QApplication(sys.argv)
self.app.setStyle("Fusion")
self.Dialog = QtWidgets.QDialog()
self.ui = Ui_Dialog()
self.ui.setupUi(self.Dialog)
self.Dialog.show()
self.ui.tableWidget.setItemDelegate(GridDelegate(self.ui.tableWidget))
sys.exit(self.app.exec_())
class GridDelegate(QtWidgets.QStyledItemDelegate):
pen = QtGui.QPen(QtGui.QColor('#000000'), 1)
def paint(self, qp, opt, index):
qp.save()
custom_option = QtWidgets.QStyleOptionViewItem(opt)
custom_option.state &= ~QtWidgets.QStyle.State_Selected
qp.setPen(self.pen)
lastRow = index.model().rowCount() - 1
lastCol = index.model().columnCount() - 1
if not opt.state & QtWidgets.QStyle.State_Selected:
qp.setBrush(QtCore.Qt.NoBrush)
else:
brush = QtGui.QBrush(QtGui.QColor("#1182dc"),QtCore.Qt.SolidPattern)
qp.fillRect(custom_option.rect, brush )
if index.row() < lastRow and index.column() < lastCol:
qp.drawLine(opt.rect.bottomLeft(), opt.rect.bottomRight())
qp.drawLine(opt.rect.topRight(), opt.rect.bottomRight())
elif index.row() == lastRow and index.column() == lastCol:
pass
elif index.row() == lastRow:
qp.drawLine(opt.rect.topRight(), opt.rect.bottomRight())
elif index.column() == lastCol:
qp.drawLine(opt.rect.bottomLeft(), opt.rect.bottomRight())
qp.restore()
super().paint(qp, custom_option, index)
if __name__ == "__main__":
program = Run_me()
</code></pre>
<p>If I set <code>os.environ["QT_AUTO_SCREEN_SCALE_FACTOR"] = "1"</code> then in the QTableWidget the grid lines are misaligned with the QHeaderView border lines.</p>
<p>This also happens with the native grid-line system.</p>
<p>I tried to fix it with <code>QStyledItemDelegate</code> with no luck.</p>
<p>Does anyone have experience with this?</p>
<p>Screenshot:</p>
<p><a href="https://i.sstatic.net/o58IH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o58IH.png" alt="enter image description here" /></a></p>
| <python><pyqt5><qtablewidget><screen-size> | 2023-11-23 21:27:45 | 1 | 2,427 | Chris P |
77,539,624 | 6,458,245 | Do I need to add every version of Python to PATH? | <p>I've been getting this error when using pip to install:
"WARNING: The script foo is installed in '/Users/bar/Library/Python/3.9/bin' which is not on PATH."</p>
<p>Do I need to remember to add to PATH every time I update to a new version of Python and every time I switch to a new shell? What's a permanent solution?</p>
| <python><macos><shell><pip><path> | 2023-11-23 21:07:32 | 2 | 2,356 | JobHunter69 |
77,539,458 | 2,036,930 | Using pytest and hypothesis, how can I make a test immediately return after discovering the first counterexample? | <p>I'm developing a library, and I'm using <a href="https://hypothesis.readthedocs.io/" rel="nofollow noreferrer">hypothesis</a> to test it.
I usually sketch out a (buggy) implementation of a function, implement tests, then iterate by fixing errors and running tests.
Usually these errors are very simple (e.g., a typo), and I don't need simplified test cases to figure out the issue. For example:</p>
<pre class="lang-py prettyprint-override"><code>def foo(value):
return vslue + 1 # a silly typo
@given(st.integers())
def test_foo(x):
assert foo(x) == x + 1
</code></pre>
<p>How can I make hypothesis stop generating test cases as soon as it has found a single counterexample?
Ideally, this would be using commandline flags to <code>pytest</code>.</p>
| <python><python-hypothesis> | 2023-11-23 20:19:51 | 1 | 920 | statusfailed |
77,539,376 | 9,985,849 | Different ways to create dictionary in python | <p>I was exploring different ways to create a dictionary in Python 3 (3.11.1). ChatGPT suggested that I can create a dictionary by using the following syntax:</p>
<pre><code>my_dict = dict('key1'='value1', 'key2'='value2', 'key3'='value3')
</code></pre>
<p><a href="https://i.sstatic.net/vJOfB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vJOfB.png" alt="Suggestion from ChatGPT" /></a></p>
<p>I tried this in IDLE, but I am getting <code>SyntaxError: expression cannot contain assignment, perhaps you meant "=="?</code>.</p>
<p><a href="https://i.sstatic.net/XogoW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XogoW.png" alt="Error thrown by IDLE" /></a></p>
<p>To confirm if ChatGPT may have suggested the wrong method, I asked the same question with Google Bard. Ever Bard is replying that this method can be used to create a dictionary. Am I missing something?</p>
<p><a href="https://i.sstatic.net/VcBHL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VcBHL.png" alt="Google Bard's reply" /></a></p>
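<p>For reference, a minimal sketch of syntaxes that do work — keyword names passed to <code>dict()</code> must be bare identifiers, not quoted strings:</p>

```python
# dict() keyword arguments must be plain identifiers, not string literals.
d1 = dict(key1='value1', key2='value2', key3='value3')

# Quoted keys need the literal syntax or an iterable of key/value pairs.
d2 = {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}
d3 = dict([('key1', 'value1'), ('key2', 'value2'), ('key3', 'value3')])

print(d1 == d2 == d3)  # True
```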
| <python><dictionary> | 2023-11-23 20:00:21 | 1 | 468 | Akshay Katiha |
77,539,206 | 3,861,775 | Add noise to parameters of ensemble in JAX | <p>I use the following code to create parameters for an ensemble of models stored as list of lists holding tuples of weights and biases. How do I efficiently add random noise to all parameters of the ensemble with JAX? I tried to use <code>tree_map()</code> but run into many errors probably caused by the nested structure.</p>
<p>Could you please provide guidance on how to use <code>tree_map()</code> in this case or point to other methods that JAX provides for such a case?</p>
<pre><code>from jax import Array
from jax import random
def layer_params(dim_in: int, dim_out: int, key: Array) -> tuple[Array]:
w_key, b_key = random.split(key=key)
weights = 0 * random.normal(key=w_key, shape=(dim_out, dim_in))
    biases = 0 * random.normal(key=b_key, shape=(dim_out,))
return weights, biases
def init_params(layer_dims: list[int], key: Array) -> list[tuple[Array]]:
keys = random.split(key=key, num=len(layer_dims))
params = []
for dim_in, dim_out, key in zip(layer_dims[:-1], layer_dims[1:], keys):
params.append(layer_params(dim_in=dim_in, dim_out=dim_out, key=key))
return params
def init_ensemble(key: Array, num_models: int, layer_dims: list[int]) -> list:
keys = random.split(key=key, num=num_models)
models = [init_params(layer_dims=layer_dims, key=key) for key in keys]
return models
if __name__ == "__main__":
num_models = 2
layer_dims = [2, 3, 4]
key = random.PRNGKey(seed=1)
key, subkey = random.split(key)
ensemble = init_ensemble(key=subkey, num_models=num_models, layer_dims=layer_dims)
# Add noise to ensemble.
</code></pre>
| <python><jax> | 2023-11-23 19:14:34 | 2 | 3,656 | Gilfoyle |
77,539,171 | 1,843,511 | Pydantic: computed field dependent on attributes of the parent object | <p>As you can see from my example below, I have a computed field that depends on values from a parent object.</p>
<p>A parent has children, so it contains an attribute which should contain a list of Children objects. But I want a computed field for each child that calculates their allowance based on the parent object.</p>
<p>The below example is the best solution I could think of, but I'm unsure about my approach, and I would appreciate your advice on whether there is a better solution than referencing both objects in each other.</p>
<pre><code>from __future__ import annotations
from pydantic import BaseModel, computed_field, ConfigDict
class Parent(BaseModel):
model_config = ConfigDict(validate_assignment=True)
earns_monthly: int = 3000
@computed_field
@property
def available_for_allowance(self) -> int:
return self.earns_monthly * 0.05
class Parents(BaseModel):
model_config = ConfigDict(validate_assignment=True)
parents: list[Parent] = []
children: list[Child] = []
class Child(BaseModel):
model_config = ConfigDict(validate_assignment=True)
parents: Parents = None
@computed_field
@property
def monthly_allowance(self) -> int:
monthly_allowance: int = 0
if self.parents:
for parent in self.parents.parents:
monthly_allowance += parent.available_for_allowance
return monthly_allowance
mom = Parent()
dad = Parent()
child = Child()
parents = Parents(
parents=[mom, dad],
children=[child]
)
# Now the parents object is set, also store this one in the
# child or children objects, so the allowance can be computed.
child.parents = parents
for child in parents.children:
print(child.monthly_allowance)
</code></pre>
<p>It just feels weird to have circular references (children attributes in Parents links to the child object and the parents attribute in the child object refers to the parents object again):</p>
<pre><code>Parents.children -> child object
Child.parents -> parent object
# If I'd do this, it would just result in an infinite loop:
# Obviously the parents attribute is just for back referencing,
# to get access to certain attributes, but still,
# you get my point.
def getchildren(parents):
for child in parents.children:
getchildren(child.parents)
print(getchildren(parents))
</code></pre>
<p>That's the reason I'm asking if there is any advice that this is totally the right way, or there are better ways I'm not familiar of. You're never too old to learn I guess.</p>
| <python><python-3.x><pydantic> | 2023-11-23 19:06:59 | 0 | 5,005 | Erik van de Ven |
77,539,057 | 13,881,506 | LINQ query expression vs using IEnumerable.Where and IEnumerable.Select | <p>I'm coming to C# from Python where we have generator comprehensions such as</p>
<pre><code>nums = range(2, 10)
evenSquares = (num*num for num in nums if num % 2 == 0)
for evenSquare in evenSquares:
print(evenSquare)
</code></pre>
<p>The above would print</p>
<pre><code>4
16
36
64
</code></pre>
<p>I was looking for a C# equivalent or near-equivalent to Python's generator comprehension and found two candidates</p>
<ul>
<li>LINQ query expressions</li>
<li>Using <code>IEnumerable</code> query methods (e.g. <code>Where</code> and <code>Select</code>).</li>
</ul>
<p>The above Python snippet could be done in C# as either</p>
<pre><code>// LINQ query expression
IEnumerable<int> nums = Enumerable.Range(2, 8);
IEnumerable<int> evenSquares =
from num in nums
where num % 2 == 0
select num*num;
</code></pre>
<p>or</p>
<pre><code>// Using IEnumerable.Where and IEnumerable.Select
IEnumerable<int> nums = Enumerable.Range(2, 8);
IEnumerable<int> evenSquares =
nums.Where(num => num % 2 == 0).Select(num => num * num);
</code></pre>
<p>and in either case</p>
<pre><code>foreach (int evenSquare in evenSquares) {
Console.WriteLine(evenSquare);
}
</code></pre>
<p>would also print</p>
<pre><code>4
16
36
64
</code></pre>
<p>So I was wondering which of the two</p>
<ul>
<li>LINQ query expression, or</li>
<li>Using IEnumerable.Where and IEnumerable.Select</li>
</ul>
<p>is preferred and why.</p>
<hr />
<p>AFAIK, like Python generator comprehensions which are evaluated lazily, LINQ query expressions and IEnumerable.Where and IEnumerable.Select are also evaluated lazily (the C# docs call it <strong>deferred execution</strong>, but I'm guessing it's the same as lazy evaluation, right?)</p>
| <python><c#><linq><generator><ienumerable> | 2023-11-23 18:36:09 | 0 | 1,013 | joseville |
77,538,965 | 11,517,893 | Set variable to an expired session out of a view in Django | <p>I’m trying to set a variable to an <strong>expired session</strong> out of a view in Django.</p>
<p>I’m aware of django’s documentation on <a href="https://docs.djangoproject.com/en/4.2/topics/http/sessions/#using-sessions-out-of-views" rel="nofollow noreferrer">Using sessions out of views</a>. But in my case, I try to set a session variable in a <a href="https://docs.djangoproject.com/en/4.2/howto/custom-management-commands/" rel="nofollow noreferrer">custom management command</a>.</p>
<p>Here’s what I tried :</p>
<pre class="lang-python prettyprint-override"><code>from django.contrib.sessions.models import Session
class Command(BaseCommand):
help = "My custom command."
def handle(self, *args, **options):
for s in Session.objects.all():
s['my_variable'] = None
</code></pre>
<p>What I get is this error:</p>
<pre class="lang-none prettyprint-override"><code>TypeError: 'Session' object does not support item assignment
</code></pre>
<p>I also tried:</p>
<pre class="lang-python prettyprint-override"><code>[…]
ss = SessionStore(session_key=s.session_key)
ss['my_variable'] = None
ss.save()
</code></pre>
<p>This creates another session but does not modify the existing one…</p>
<p>How can I set <code>my_variable</code> to <code>None</code> ?</p>
<p><strong>Edit: The session I’m trying to set a variable to is expired</strong></p>
| <python><django><session-variables> | 2023-11-23 18:13:33 | 2 | 363 | Zatigem |
77,538,948 | 7,529,256 | AWS Lambda - security token expired exception on performance testing, changed code - not working | <p>I have an AWS Lambda function which connects to dynamo db (cross-account) using sts.client (boto3 python).</p>
<p>It does a simple task of fetching data based on a query. This code works absolutely fine almost all the time.</p>
<p>However, we find it failing strangely during performance tests.</p>
<p>For example :
Testcase 1 : we tried 75 calls to the api per min for 1 hr. Consistently it fails at somewhere around 55 - 57th min with bad gateway errors.</p>
<p>However, sometimes after Testcase 1 and sometimes before, we try 500 calls/min for 1 hr and this works absolutely fine with 0 bad gateway errors.</p>
<p>The errors we see during test case 1 : None at the api-gateway / dynamo but only at lambda logs :</p>
<p><code>[ERROR] ClientError: An error occurred (ExpiredTokenException) when calling the Query operation: The security token included in the request is expired </code></p>
<p>So, following some StackOverflow links which suggest calling to create token from inside the main function of lambda instead of global. we changed the code, but still we see it failing.</p>
<p>2 Questions here :</p>
<ol>
<li><p>Why would 75 req / min fail and a 500 req/min pass on the same instance at almost same times (tried on 2 different days, times).</p>
</li>
<li><p>Help required to understand why code change is not refreshing the token.</p>
</li>
</ol>
<pre><code>import json
import boto3
import os
from boto3.dynamodb.conditions import Key, Attr
from botocore.credentials import RefreshableCredentials
from decimal import Decimal
import datetime
# Constants
ROLE_ARN = os.environ['arnrole']
ROLE_SESSION_NAME = 'dynamodb-session'
EXPIRATION_THRESHOLD_SECONDS = 3580 # Setting 5 mins check below which it will refresh token
def get_sts_token():
# Create an STS client
sts_client = boto3.client('sts')
# Assume the role to get temporary credentials
response = sts_client.assume_role(
RoleArn=ROLE_ARN,
RoleSessionName=ROLE_SESSION_NAME
)
# Extract and return temporary credentials
print("invoked sts new")
return response['Credentials']
def refresh_sts_token_if_needed(current_credentials):
# Calculate remaining time until expiration\
print(current_credentials)
expiration_time = current_credentials['Expiration']
expiration_date = expiration_time.replace(tzinfo=None)
remaining_seconds = ( expiration_date - datetime.datetime.utcnow()).total_seconds()
print(remaining_seconds)
if remaining_seconds < EXPIRATION_THRESHOLD_SECONDS:
print("inside loop for new token")
# Refresh the STS token if it's close to expiration
return get_sts_token()
else:
# Return the current credentials if they are still valid
print("else loop")
return current_credentials
# Get or refresh STS token
current_credentials = get_sts_token()
""" Convert Decimal to float (test demo)
DynamoDB stores floats as Decimals and that cannot be encoded by json.dumps()
the data first needs to be converted to float """
class DecimalEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Decimal):
return float(obj)
return json.JSONEncoder.default(self, obj)
def get_a(event):
return(event["pathParameters"]["a"])
def get_query_result(table, a):
query_result_json = table.query(KeyConditionExpression=Key('a').eq(a))
query_results = json.loads(json.dumps(query_result_json, cls=DecimalEncoder))
return query_results
def define_api_response(query_results, account_number):
json_body_error = {"ResponseMetadata" : {"Count" : [], "RequestId" : [] , "Message" : []}}
# Define the status code and body of response based on query result
if query_results["ResponseMetadata"]["HTTPStatusCode"] == 200 and query_results["Count"] > 0:
response = create_response_body(200, json.dumps(query_results["Items"][0]))
elif query_results["ResponseMetadata"]["HTTPStatusCode"] == 200 and query_results["Count"] == 0:
response = create_response_body(404, json.dumps(query_results["Items"]))
else:
json_body_error["ResponseMetadata"]["Message"].append(query_results["errorMessage"])
json_body_error["ResponseMetadata"]["RequestId"].append(query_results["ResponseMetadata"]["RequestId"])
response = create_response_body(query_results["ResponseMetadata"]["HTTPStatusCode"], json.dumps(json_body_error))
return response
def create_response_body(statusCode, body):
str_body = str(body)
response = {
"statusCode": statusCode,
"body": str_body,
"headers": {
"Content-Type": "application/json",
"Access-Control-Allow-Origin": "*"
},
}
print(str_body)
return response
def lambda_handler(event, context):
# Use the STS token or refresh if close to expiration
refreshed_credentials = refresh_sts_token_if_needed(current_credentials)
# Dynamo client connection :
dynamodb = boto3.resource('dynamodb', region_name="us-east-1",aws_access_key_id=refreshed_credentials['AccessKeyId'],aws_secret_access_key=refreshed_credentials['SecretAccessKey'], aws_session_token=refreshed_credentials['SessionToken'] )
# Rest of the code
a = get_a(event)
table = dynamodb.Table(os.environ['tablename'])
    query_results = get_query_result(table, a)
    response = define_api_response(query_results, a)
return response
</code></pre>
| <python><amazon-web-services><aws-lambda><boto3> | 2023-11-23 18:09:01 | 0 | 1,584 | Viv |
77,538,837 | 10,485,253 | How to add python type check for a Callable with a defined argument and kwargs | <p>What I have is a function as a parameter. This function will always need to take a dictionary, and then can optionally take any number of keyword arguments. See code below for a simplified example:</p>
<pre class="lang-py prettyprint-override"><code>def my_func(func, **kwargs):
my_map = {'a': 1, 'b': 2}
func(my_map, **kwargs)
</code></pre>
<p>How would I type this? I would have thought I can do the following but I get the error <code>"..." is not allowed in this context</code></p>
<pre class="lang-py prettyprint-override"><code>def default_func(mapping: dict[str, Any]) -> list[Any]:
# do something and return list
def my_func(
func: Callable[[dict[str, Any], ...], list[Any]] = default_func,
**kwargs: Any
) -> Any:
my_map = {'a': 1, 'b': 2}
func(my_map, **kwargs)
</code></pre>
<p>Is there a way to do this or do I just need to use <code>Callable[..., list[Any]]</code>?</p>
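<p>One commonly suggested alternative — a sketch, with the <code>MapFunc</code> name being my own — is a callback protocol, which can express "one positional dict plus arbitrary keyword arguments" in a way <code>Callable</code> cannot:</p>

```python
from typing import Any, Protocol

class MapFunc(Protocol):
    def __call__(self, mapping: dict[str, Any], **kwargs: Any) -> list[Any]: ...

def default_func(mapping: dict[str, Any], **kwargs: Any) -> list[Any]:
    # Toy implementation: just return the values.
    return list(mapping.values())

def my_func(func: MapFunc = default_func, **kwargs: Any) -> list[Any]:
    my_map = {'a': 1, 'b': 2}
    return func(my_map, **kwargs)

print(my_func())  # [1, 2]
```

<p>Any function whose signature is compatible with <code>MapFunc.__call__</code> then type-checks when passed as <code>func</code>.</p>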
| <python><python-typing> | 2023-11-23 17:46:41 | 1 | 887 | TreeWater |
77,538,835 | 3,002,584 | RuntimeWarning shows once and then disappears - how to make it persistent? | <p>When running Python in interactive mode, <code>RuntimeWarning</code> is issued once and then disappears.</p>
<p>Here's an example session:</p>
<pre class="lang-py prettyprint-override"><code>>> import numpy as np
>> arr = np.array([np.inf])
>>
>> arr - arr # <-- first time
<stdin>:1: RuntimeWarning: invalid value encountered in subtract
array([nan])
>>
>> arr - arr # <-- second time, no warning!
array([nan])
</code></pre>
<p>I'm experiencing this in Windows cmd (python 3.10) and also in Google Colab Jupyter notebook.</p>
<p>What's the reason, and how can I make the warning appear every time?</p>
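<p>This looks like Python's warning machinery at work: under the <code>default</code> filter a given warning is shown only once per location. A stdlib-only sketch (using <code>warnings.warn</code> as a stand-in for the NumPy-triggered warning) of making it repeat:</p>

```python
import warnings

def risky():
    warnings.warn("invalid value", RuntimeWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # report every occurrence, not just the first
    risky()
    risky()

print(len(caught))  # 2
```

<p>Calling <code>warnings.simplefilter("always")</code> once at the top of a session should likewise make the NumPy warning reappear on every repetition.</p>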
| <python><numpy><warnings> | 2023-11-23 17:46:26 | 1 | 10,690 | OfirD |
77,538,784 | 148,736 | How do I install a pip python fused_kernels library on a GPU CUDA machine? | <p><a href="https://github.com/EleutherAI/gpt-neox/blob/5dd366539803dbf1fd725cc057013fd002a4cfd4/requirements.txt#L28" rel="nofollow noreferrer">https://github.com/EleutherAI/gpt-neox/blob/5dd366539803dbf1fd725cc057013fd002a4cfd4/requirements.txt#L28</a></p>
<pre><code>fused-kernels @ file:///fsx/hailey/math-lm/gpt-neox/megatron/fused_kernels
</code></pre>
<p>Line 28 here is different from all the others, it's saying use the local file system to find <code>fused_kernels</code>. But I'm missing it:</p>
<pre><code>No such file or directory: '/fsx/hailey/math-lm/gpt-neox/megatron/fused_kernels
</code></pre>
<p>Update: I ran</p>
<pre><code>python ./megatron/fused_kernels/setup.py install
</code></pre>
<p>first and it said it worked but still the problem remains.</p>
<p>How do I solve this chicken-and-egg situation? It seems I need it installed before I can install it?</p>
| <python><pip><gpu><nvidia> | 2023-11-23 17:34:31 | 1 | 5,101 | Andrew Arrow |
77,538,730 | 499,721 | Cannot unpickle an instance of a class which inherits from `time` | <p>Consider the following class, which inherits from <code>datetime.time</code>:</p>
<pre><code>class TimeWithAddSub(time):
_date_base = date(2000, 1, 1)
def __new__(cls, hour=0, minute=0, second=0, microsecond=0, tzinfo=None):
obj = super(TimeWithAddSub, cls).__new__(cls, hour, minute, second, microsecond, tzinfo)
obj._as_datetime = datetime(2000, 1, 1, hour, minute, second, microsecond, tzinfo=tzinfo)
return obj
</code></pre>
<p>Trying to unpickle it gives the following error:</p>
<pre><code>>>> t = TimeWithAddSub(10, 11, 12)
>>> b = pickle.dumps(t)
b'\x80\x04\x95T\x00\x00\x00\x00\x....'
>>> t2 = pickle.loads(b)
>>> ... in __new__ ... TypeError: 'bytes' object cannot be interpreted as an integer
</code></pre>
<p>After some digging, it seems that the <code>__new__()</code> function in the unpickled object does not get the correct parameters:</p>
<ul>
<li><code>hour</code> is set to <code>b'\x0b\x0c\r\x00\x00\x00'</code></li>
<li>the rest of the parameters get their default values (<code>minute=0, second=0, microsecond=0, tzinfo=None</code>)</li>
</ul>
<p>I've tried overriding <code>__reduce__()</code> and then <code>__getnewargs__()</code>, but both approaches result in the same error.</p>
<p>Any ideas how to fix this?</p>
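<p>One sketch that appears to work, relying on the CPython detail that <code>datetime.time</code> unpickles by passing its packed <code>bytes</code> state (plus the tzinfo, if any) back into the constructor — the subclass can detect that path and delegate:</p>

```python
import pickle
from datetime import datetime, time

class TimeWithAddSub(time):
    def __new__(cls, hour=0, minute=0, second=0, microsecond=0, tzinfo=None):
        if isinstance(hour, (bytes, str)):
            # Unpickling path: 'hour' is the packed state, 'minute' the tzinfo.
            obj = super().__new__(cls, hour, minute or None)
        else:
            obj = super().__new__(cls, hour, minute, second, microsecond, tzinfo)
        obj._as_datetime = datetime(2000, 1, 1, obj.hour, obj.minute,
                                    obj.second, obj.microsecond, tzinfo=obj.tzinfo)
        return obj

t = TimeWithAddSub(10, 11, 12)
t2 = pickle.loads(pickle.dumps(t))
print(t2, t2._as_datetime)
```

<p>The other route would be overriding <code>__reduce__</code> so the class pickles as plain integer arguments instead of the packed state.</p>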
| <python><time><pickle> | 2023-11-23 17:26:09 | 2 | 11,117 | bavaza |
77,538,602 | 110,963 | Deadlock with asyncio.Semaphore | <p>I have asyncio code that sometimes freezes in a deadlock, which should not be possible in my opinion. As reality always wins over theory, I must obviously be missing something. Can somebody spot a problem in the following code and tell me why it is possible at all that I can run into a deadlock?</p>
<pre><code>async def main():
sem = asyncio.Semaphore(8)
loop = asyncio.get_running_loop()
tasks = []
# Wrapper around the 'do_the_work' function to make sure that the
# semaphore is released in every case. In my opinion it should be
# impossible to leave this code without releasing the semaphore.
#
# But as I can observe a deadlock in real life, I must be missing
# something!?
async def task(**params):
try:
return await do_the_work(**params)
finally:
# Whatever happens in do_the_work (that does not crash the whole
# interpreter), the semaphore should be released.
sem.release()
for params in jobs:
# Without the wait_for my code freezes at some point. The do_the_work
# function does not take too long, so the 10min timeout is
        # unrealistically high and just a plausibility check to "prove" the
        # deadlock.
try:
await asyncio.wait_for(sem.acquire(), 60*10)
except TimeoutError as e:
raise RuntimeError("Deadlock?") from e
# Start the task. Due to the semaphore there can only be 8 tasks
# running at the same time.
tasks.append(loop.create_task(task(**params)))
# Check tasks which are already done for an exception. If there was
# one just stop immediately and raise it.
for t in tasks:
if t.done():
e = t.exception()
if e:
raise e
# If I reach this point, all tasks were scheduled and the results are
# ready to be consumed.
for result in await asyncio.gather(*tasks):
handle_result(result)
</code></pre>
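<p>For comparison, a minimal sketch (with a trivial stand-in for <code>do_the_work</code>) of the usual restructuring that removes the manual release bookkeeping altogether: acquire the semaphore <em>inside</em> each task with <code>async with</code>, and let <code>gather</code> surface the first exception:</p>

```python
import asyncio

async def main():
    sem = asyncio.Semaphore(8)

    async def task(i):
        async with sem:              # released even if the body raises
            await asyncio.sleep(0)   # stand-in for do_the_work(**params)
            return i * i

    # At most 8 tasks run concurrently; gather preserves submission order.
    return await asyncio.gather(*(task(i) for i in range(20)))

results = asyncio.run(main())
print(results[:3])  # [0, 1, 4]
```

<p>This pattern leaves no window in which a permit is held by code outside the task that is responsible for releasing it.</p>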
| <python><python-asyncio><semaphore> | 2023-11-23 17:00:05 | 1 | 15,684 | Achim |
77,538,420 | 4,056,146 | using yaml.safe_load and hasattr | <p>I have the following code that reads from a yaml file and needs to know if the property "ethernets" exists in it. I use "ethernets1" below to see how the code behaves when the property does not exist.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/python
import yaml
import simplejson as json
NETPLAN = "/etc/netplan/01-network-manager-all.yaml"
# get netplan file
with open(NETPLAN, "r") as stream:
try:
netplan_config = yaml.safe_load(stream)
print("netplan_config = " + json.dumps(netplan_config))
stream.close()
except yaml.YAMLError as e:
print(f"error = {str(e)}")
# test 1
if hasattr(netplan_config["network"], "ethernets"):
print("mark1")
else:
print("mark2")
# test 2
if netplan_config["network"]["ethernets"]:
print("mark3")
else:
print("mark4")
# test 3
if netplan_config["network"]["ethernets1"]:
print("mark5")
else:
print("mark6")
# test 4
try:
if netplan_config["network"]["ethernets1"]:
print("mark7")
except KeyError:
print("mark8")
</code></pre>
<p><strong>The output is:</strong></p>
<pre><code>mark2
mark3
error thrown: KeyError: 'ethernets1'
</code></pre>
<p>The problem is that "mark1" is not printed. I cannot use the test 2 method because it throws an error if it does not exist. I do not understand why <code>hasattr</code> does not work on <code>netplan_config</code>.</p>
<p><strong>YAML file for reference:</strong></p>
<pre class="lang-yaml prettyprint-override"><code>network:
ethernets:
eth0:
addresses:
- 192.168.4.31/22
dhcp4: false
dhcp6: false
match:
macaddress: 24:4b:fe:e2:1c:4a
nameservers:
addresses:
- 8.8.8.8
- 8.8.4.4
routes:
- to: default
via: 192.168.4.1
set-name: eth0
renderer: NetworkManager
version: 2
</code></pre>
<p>Edit: I guess I could use test4 (it works), but I would rather use <code>hasattr</code> as it is cleaner.</p>
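<p>For illustration — using a plain dict standing in for the <code>yaml.safe_load</code> result, since <code>safe_load</code> returns ordinary dicts whose keys are not attributes — the dict-native checks are a membership test and <code>.get()</code>:</p>

```python
netplan_config = {"network": {"ethernets": {"eth0": {}}, "version": 2}}

# hasattr() looks for *attributes*, so it is False for dict keys.
print(hasattr(netplan_config["network"], "ethernets"))  # False

# Key membership and .get() are the cleaner equivalents of test 4.
print("ethernets" in netplan_config["network"])         # True
print(netplan_config["network"].get("ethernets1"))      # None
```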
| <python><yaml><hasattr> | 2023-11-23 16:32:02 | 1 | 3,927 | xinthose |
77,538,397 | 7,684,584 | as400 I series JDBC Connect | <p>I have been trying to connect to an AS400 (IBM DB2 for i) server using JDBC; however, the code I have written neither throws any exception nor prints anything. Can anyone please tell me whether I am doing anything wrong, or whether any server-side configuration change is required?</p>
<pre><code>import jaydebeapi
import os
import configparser
class IbmDb2(object):
def __init__(self):
print("Initializing...")
self.thisfolder = os.path.dirname(os.path.abspath(os.path.join(__file__, "..")))
self.configfile = os.path.join(self.thisfolder, 'config', 'config.ini')
self.config = configparser.ConfigParser()
self.config_read = self.config.read(self.configfile)
self.driver = self.config.get('DRIVER', 'driver')
self.user = self.config.get('USER', 'user')
self.password = self.config.get('PASSWORD', 'password')
# Load the Netsuite JDBC driver
self.jar_path = os.path.join(self.thisfolder, 'jars', "jt400.jar")
print(f"JAR Path: {self.jar_path}")
self.conn_str = "jdbc:as400://XX.XX.XX.XX:9099/WM410BASD"
self.connection = jaydebeapi.connect(self.driver, self.conn_str, [self.user, self.password] , 'C:/google_analytics_data/db2-to-bq/jars/jt400.jar', )
print(self.connection)
print("Connection success")
# Establish the connection
self.cursor = self.connection.cursor()
self.cursor.execute('select count(*) from WM410BASD.PHPICK00')
results = self.cursor.fetchall()
print(results)
self.connection.close()
if __name__ == "__main__":
db2data = IbmDb2()
</code></pre>
| <python><db2><ibm-midrange> | 2023-11-23 16:29:06 | 1 | 1,812 | DKM |
77,538,019 | 12,427,902 | How to convert dates to UTC taking into account winter and summer time? | <p>My objective is to convert a list of dates to the correct UTC format, i.e. taking into consideration winter and summer time. The dates were scraped from a chat.</p>
<p>I am Switzerland based which uses (CET) UTC+01:00 during the winter and (CEST) Central European Summer Time during the summer i.e. UTC+02:00.</p>
<p>Here are the dates:</p>
<pre><code>import pandas as pd
df = pd.DataFrame()
df['dates_raw'] = [
'2022-01-20 01:12:15',
'2022-06-22 12:00:00',
'2022-10-29 05:57:02',
'2022-12-18 09:34:17',
'2023-01-12 06:36:10',
'2023-02-17 20:23:10',
'2023-04-12 02:02:24',
'2023-09-12 15:57:35',]
</code></pre>
<p>And here the desired result:</p>
<pre><code>df['dates_converted'] = [
'2022-01-20 00:12:15',
'2022-06-22 10:00:00',
'2022-10-29 03:57:02',
'2022-12-18 08:34:17',
'2023-01-12 05:36:10',
'2023-02-17 19:23:10',
'2023-04-12 00:02:24',
'2023-09-12 13:57:35',]
</code></pre>
<p>And finally, here are the dates changes from summer to winter to summer... etc. etc.:</p>
<pre><code>dates_changes = {
'st_2022' : '2022-03-27 02:00:00', # UTC + 2 (we gain one hour)
'wt_2022' : '2022-10-30 03:00:00', # UTC + 1 (we lose one hour)
'st_2023' : '2023-03-26 02:00:00', # UTC + 2 (we gain one hour)
'wt_2023' : '2023-10-29 03:00:00', # UTC + 1 (we lose one hour)
'st_2024' : '2024-03-31 02:00:00', # UTC + 2
'wt_2024' : '2024-10-27 03:00:00', # UTC + 1
}
</code></pre>
<p>As the change-over dates look arbitrary, I don't know if there is any built-in function to make the conversion.</p>
<p>Many thanks in advance!</p>
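<p>A sketch of one approach using only the standard library: the tz database already knows every CET/CEST switch date, so the <code>dates_changes</code> table is not needed. (The <code>zoneinfo</code> module is standard since Python 3.9; on pandas the equivalent would be <code>tz_localize('Europe/Zurich')</code> followed by <code>tz_convert('UTC')</code>.)</p>

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

def to_utc(ts: str) -> str:
    # interpret the naive timestamp as Europe/Zurich wall-clock time,
    # then convert; the tz database handles the CET/CEST switch dates
    local = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(
        tzinfo=ZoneInfo("Europe/Zurich"))
    return local.astimezone(ZoneInfo("UTC")).strftime("%Y-%m-%d %H:%M:%S")

print(to_utc("2022-01-20 01:12:15"))  # 2022-01-20 00:12:15 (winter, UTC+1)
print(to_utc("2022-06-22 12:00:00"))  # 2022-06-22 10:00:00 (summer, UTC+2)
```

<p>For a whole column, <code>pd.to_datetime(df['dates_raw']).dt.tz_localize('Europe/Zurich').dt.tz_convert('UTC')</code> does the same conversion vectorised.</p>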
| <python><pandas><date><datetime><utc> | 2023-11-23 15:26:25 | 1 | 509 | plonfat |
77,537,902 | 12,845,199 | Polars map users with more than one event | <pre><code>df = pl.LazyFrame({'col1':['a','a','b','b','c','c'],'col2':['undefined','defined','defined','defined','undefined','undefined']})
</code></pre>
<p>I have the following DF</p>
<p>Imagine I want to select the rows where col2 has more than one distinct value when grouped by col1, and then map those users in a way that tells me how many distinct values they have in col2.</p>
<p>Wanted result</p>
<pre><code>df = pl.LazyFrame({'col1':['a','a','b','b','c','c'],'col2':['undefined','defined','defined','defined','undefined','undefined'],'col3':[2,2,1,1,1,1]})
</code></pre>
<p>Any ideas on how I can achieve that, in an efficient manner, avoiding left joins, separate filters and so on?</p>
| <python><python-polars> | 2023-11-23 15:07:16 | 1 | 1,628 | INGl0R1AM0R1 |
77,537,787 | 1,692,384 | import logic in python - for some modules we follow the structure from A.B.C import D, but for some modules we can bypass that and do from A import E | <p>I'm having trouble figuring out the import logic in Python; I'll use an existing example from the langchain library.</p>
<p>for module name langchain.chains.retrieval_qa.base.VectorDBQA
a direct import works, i.e.</p>
<pre><code>from langchain import VectorDBQA
</code></pre>
<p>but for another module name langchain.document_loaders.epub.UnstructuredEPubLoader
a direct import won't work</p>
<pre><code>from langchain import UnstructuredEPubLoader
</code></pre>
<p>will give me an error</p>
<p>I need to do</p>
<pre><code>from langchain.document_loaders import UnstructuredEPubLoader
</code></pre>
<p>I do not understand the logic behind this. How am I to know, from reading the documentation, which names can be imported directly and which ones need the full module path?</p>
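<p>A toy illustration of the underlying rule: a name is importable directly from a package only if the package's <code>__init__.py</code> binds it (presumably langchain's top-level <code>__init__.py</code> re-exports <code>VectorDBQA</code> but not <code>UnstructuredEPubLoader</code>). The package below is made up for the demo:</p>

```python
import os
import sys
import tempfile

# build a throwaway package on disk
root = tempfile.mkdtemp()
pkg = os.path.join(root, "toypkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .mod import Exported\n")   # re-export at package level
with open(os.path.join(pkg, "mod.py"), "w") as f:
    f.write("class Exported: pass\nclass Hidden: pass\n")
sys.path.insert(0, root)

from toypkg import Exported        # works: bound in toypkg/__init__.py
from toypkg.mod import Hidden      # works: full path to the defining module
try:
    from toypkg import Hidden      # fails: never re-exported at package level
except ImportError as e:
    print("ImportError:", e)
```

<p>So the documentation's dotted path always works; the short form only works for names the library authors chose to re-export.</p>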
| <python><import><module><logic> | 2023-11-23 14:49:39 | 1 | 399 | Black Dagger |
77,537,779 | 18,904,265 | How do I use pip, pipx and virtualenv of the current python version after upgrading? | <p>This is the first time I am upgrading python (from 3.11.3 to 3.12.0) and some questions came up along the way. I think I have somehow understood how Path works on windows, but am not getting the complete picture right now. At the moment I have the issue, that python and pip still by default use the old python installation, so this led to a few questions:</p>
<h2>How do I use the correct version of pip?</h2>
<p>My understanding is that the version of pip which is called is determined by which PATH entry of <code>Python\Scripts</code> is preferred by Windows.</p>
<p>At the moment, python 3.11 and 3.9 are installed into <code>C:\Program Files\</code>, and their installation path is added to <strong>system</strong> PATH.</p>
<p>3.12 however is installed to <code>C:\Users\...\AppData\Local\Programs\Python\Python312\</code> and the installation path is added to <strong>user</strong> PATH.</p>
<p>Should I just delete the PATH entries directing to python 3.9 and 3.11? Does Windows prefer system path over user path?</p>
<h2>How do I ensure the right version of virtualenv and pipx are used?</h2>
<p>In the Scripts directory, I can't find entries for pipx and virtualenv, and there are no separate entries for pipx and virtualenv in either PATH. How does my terminal know where to find the correct executables? Is this managed by pip? And how do I get my system to use the newly installed versions of virtualenv and pipx which use Python 3.12?</p>
<h2>Is there an easier way of upgrading my pipx installed tools than reinstalling all of them for a new python version?</h2>
<p>I have some tools installed via pipx, e.g. hatch, mypy, ipython, virtualenv. Do I need to reinstall all of those tools for every python upgrade I make? Or is there a way to tell pipx that I want it to use a new python version now?</p>
<h2>Edit: My Path entries</h2>
<p>System Path:</p>
<pre><code>C:\Program Files\Python311\Scripts\
C:\Program Files\Python311\
C:\Program Files\Python39\Scripts\
C:\Program Files\Python39\
</code></pre>
<p>User Path:</p>
<pre><code>C:\Users\UserName\AppData\Local\Programs\Python\Python312\Scripts\
C:\Users\UserName\AppData\Local\Programs\Python\Python312\
C:\Users\UserName\AppData\Local\Programs\Python\Launcher\
C:\users\UserName\appdata\roaming\python\python311\scripts
</code></pre>
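<p>A small diagnostic sketch that may help see which interpreter and which pip the PATH actually resolves to (what it prints depends on the machine):</p>

```python
import shutil
import sys

# Which interpreter is actually running, and which python/pip the shell
# would resolve first on PATH. If these disagree, PATH order is the culprit.
print("running interpreter:", sys.executable)
print("first python on PATH:", shutil.which("python"))
print("first pip on PATH:", shutil.which("pip"))
```

<p>Running <code>python -m pip ...</code> (or, with the Windows launcher, <code>py -3.12 -m pip ...</code>) guarantees pip matches that exact interpreter regardless of PATH order.</p>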
| <python><windows><pip><virtualenv><pipx> | 2023-11-23 14:48:18 | 1 | 465 | Jan |
77,537,765 | 5,888,928 | Type hinting in Pycharm for dynamically generated classes | <p>I'm trying to get Pycharm's linter to stop complaining when I use a dynamically created class via the Python <code>type( )</code> function. A simplified version of the actual code looks like the following.</p>
<p>Yes, it's weird to be working with the class object like that, but in the real code, it makes sense (SQLAlchemy ORM classes, if you're curious). Note the final line which points out the Pycharm warning.</p>
<pre><code>class MyClass(object):
ID_NUM: int = 0
def __init__(self, id_num: int, first_name: str, last_name: str):
self.ID_NUM = id_num
self.first_name = first_name
self.last_name = last_name
def make_augmented_class(cls):
def augmented__eq__(self, other):
try:
return self.id_num == other.id_num
except AttributeError:
return False
new_cls = type('{}Augmented'.format(cls.__name__), tuple([cls]), {})
new_cls.__eq__ = augmented__eq__
return new_cls
def do_stuff(my_class):
print(my_class.ID_NUM)
if __name__ == '__main__':
do_stuff(MyClass)
augmented_class = make_augmented_class(MyClass)
do_stuff(augmented_class) # <=== PYCHARM CODE LINTER COMPLAINS THAT "Type 'type' doesn't have expected attribute 'ID_NUM'"
</code></pre>
<p>I've tried a couple of type hints for the <code>do_stuff( )</code> function, such as <code>def do_stuff(my_class: type):</code> and <code>def do_stuff(my_class: MyClass):</code>, but that just causes different linter warnings.</p>
<p>The code works, but I'd like to eliminate the linter warnings somehow...</p>
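<p>One sketch that should quiet the warning: annotate with <code>typing.Type[MyClass]</code> and <code>cast()</code> the dynamically built class, since <code>cast()</code> is a no-op at runtime (the class body is trimmed here for brevity):</p>

```python
from typing import Type, cast

class MyClass:
    ID_NUM: int = 0

def make_augmented_class(cls: Type[MyClass]) -> Type[MyClass]:
    new_cls = type(f"{cls.__name__}Augmented", (cls,), {})
    # cast() does nothing at runtime, but tells the checker the dynamic
    # class is still (a subclass of) MyClass, so ID_NUM is known to exist
    return cast(Type[MyClass], new_cls)

def do_stuff(my_class: Type[MyClass]) -> None:
    print(my_class.ID_NUM)

do_stuff(make_augmented_class(MyClass))  # prints 0
```

<p>The same pattern should carry over when <code>cls</code> is an SQLAlchemy ORM class, since only the annotations change, not the runtime behaviour.</p>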
| <python><sqlalchemy><pycharm><python-typing> | 2023-11-23 14:46:06 | 2 | 460 | Roy Wood |
77,537,463 | 736,662 | How to use the pandas df | <p>I create a df like this:</p>
<pre class="lang-py prettyprint-override"><code>df_ts_ids = pd.read_csv('C:\\share\\pythonProject\\TS_ID.csv').head(10)
</code></pre>
<p>And I have this function:</p>
<pre class="lang-py prettyprint-override"><code>def set_ts_ids():
ts_ids = df_ts_ids.sample()["TS_ID"].iloc[0]
print("ts_ids : ", ts_ids)
return ts_ids
</code></pre>
<p>It prints like this:</p>
<pre><code>ts_ids : 53005
Response status code: 200
ts_ids : 246388
Response status code: 200
ts_ids : 161700
Response status code: 200
ts_ids : 254692
Response status code: 200
ts_ids : 312003
Response status code: 200
ts_ids : 243898
Response status code: 200
ts_ids : 45032
Response status code: 200
ts_ids : 45032
Response status code: 200
</code></pre>
<p>The .csv file looks like this:</p>
<pre><code>TS_ID
53005
246388
312003
243898
161700
</code></pre>
<p>I have a Locust file looking like this using the test data:</p>
<pre class="lang-py prettyprint-override"><code>class LoadValues(FastHttpUser):
host = server_name
def _run_read_ts(self, series, resolution, start, end):
ts_ids = series
resp = self.client.get(f'/api/loadValues?tsIds={ts_ids}&resolution={resolution}'
f'&startUtc={set_from_date()}&endUtc={set_to_date()}',
headers={'X-API-KEY': 'xxx'}, name='/api/loadvalues')
print("Response status code: ", resp.status_code)
# print("Response text: ", resp.text)
@task(1)
def test_get_ts_1(self):
self._run_read_ts(set_ts_ids(), 'PT15M', set_from_date(), set_to_date())
</code></pre>
<p>However I want to call the endpoint with (in this example) 10 TSids (<code>.head(10)</code>)</p>
<p>How can I change this line so that the <code>.head</code> value determines how many TS ids each call, for each VU, puts into the URL?</p>
<p>I guess it needs to be changed here:</p>
<pre><code>ts_ids = df_ts_ids.sample()["TS_ID"].iloc[0]
</code></pre>
<p>?</p>
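<p>A hedged sketch of one way to do it: sample <code>n</code> rows per call and join them into one string. The column name and ids are taken from the question; that the endpoint accepts a comma-separated <code>tsIds</code> list is an assumption.</p>

```python
import pandas as pd

# stand-in for pd.read_csv('TS_ID.csv'); n controls how many ids go in the URL
df_ts_ids = pd.DataFrame({"TS_ID": [53005, 246388, 312003, 243898, 161700]})

def set_ts_ids(n: int = 3) -> str:
    # sample() with n rows instead of the default single row,
    # then join into the comma-separated list the query string expects
    return ",".join(df_ts_ids.sample(n)["TS_ID"].astype(str))

print(set_ts_ids())  # e.g. "161700,53005,312003"
```

<p>In the Locust task this would replace <code>set_ts_ids()</code>, with <code>n</code> passed in instead of hard-coding <code>.head(10)</code> at load time.</p>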
| <python><pandas><locust> | 2023-11-23 13:56:20 | 2 | 1,003 | Magnus Jensen |
77,537,307 | 1,084,174 | ERROR: Cannot install tflite-model-maker (The conflict is caused by other modules) | <p>I am trying to use the tflite-model-maker package in one of my Kaggle notebooks. It showed "module not found", so I tried to install it using:</p>
<pre><code>!pip install tflite_model_maker
</code></pre>
<p>However, it yields the following error:</p>
<pre><code>.....
ERROR: Cannot install tflite-model-maker==0.1.2, tflite-model-maker==0.2.0, tflite-model-maker==0.2.1, tflite-model-maker==0.2.2, tflite-model-maker==0.2.3, tflite-model-maker==0.2.4, tflite-model-maker==0.2.5, tflite-model-maker==0.3.3, tflite-model-maker==0.3.4, tflite-model-maker==0.4.0, tflite-model-maker==0.4.1 and tflite-model-maker==0.4.2 because these package versions have conflicting dependencies.
The conflict is caused by:
tflite-model-maker 0.4.2 depends on numba==0.53
tflite-model-maker 0.4.1 depends on numba==0.53
tflite-model-maker 0.4.0 depends on numba==0.53
tflite-model-maker 0.3.4 depends on numba==0.53
tflite-model-maker 0.3.3 depends on numba==0.53
tflite-model-maker 0.2.5 depends on tflite-support==0.1.0rc4
tflite-model-maker 0.2.4 depends on tflite-support==0.1.0rc4
tflite-model-maker 0.2.3 depends on tflite-support==0.1.0rc3.dev2
tflite-model-maker 0.2.2 depends on tflite-support==0.1.0rc3.dev2
tflite-model-maker 0.2.1 depends on tflite-support==0.1.0rc3.dev2
tflite-model-maker 0.2.0 depends on tflite-support==0.1.0rc3.dev2
tflite-model-maker 0.1.2 depends on tflite-support==0.1.0rc3.dev2
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
</code></pre>
<p>How can I apply the options they suggested? Is there a straightforward way around this?</p>
| <python><jupyter-notebook><pip><anaconda><kaggle> | 2023-11-23 13:34:47 | 1 | 40,671 | Sazzad Hissain Khan |
77,537,303 | 3,740,652 | Dagster unable to import module | <p>I'm new to Dagster and I'm struggling to import my module into the Dagster code.
This is my project structure.</p>
<pre class="lang-bash prettyprint-override"><code>.
├── pyproject.toml
├── README.md
├── setup.cfg
├── setup.py
├── my-project
│ ├── assets.py
│ ├── __init__.py
│ ├── __pycache__
│ │ ├── assets.cpython-311.pyc
│ │ └── __init__.cpython-311.pyc
│ ├── scaled_dataset.csv
│ └── utilities
│ ├── dataset.py
│ ├── __init__.py
│ ├── nn.py
│ ├── plotting.py
│ ├── testing.py
│ └── training.py
</code></pre>
<p>In my <code>__init__.py</code> of <code>my-poject</code> I have set</p>
<pre class="lang-py prettyprint-override"><code>from dagster import Definitions, load_assets_from_modules
from . import assets, utilities
all_assets = load_assets_from_modules([assets, utilities])
defs = Definitions(
assets=all_assets,
)
</code></pre>
<p>And in my code in <code>assets.py</code> I have (for example):</p>
<pre class="lang-py prettyprint-override"><code>from utilities.plotting import plot_dataset
</code></pre>
<p>However, when I run <code>dagster dev</code>, it returns:</p>
<pre class="lang-bash prettyprint-override"><code>ModuleNotFoundError: No module named 'utilities'
Stack Trace:
File "/home/user/PycharmProjects/Dagster/venv/lib/python3.11/site-packages/dagster/_core/code_pointer.py", line 135, in load_python_module
return importlib.import_module(module_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/user/PycharmProjects/Dagster/my-project/my-project/__init__.py", line 3, in <module>
from . import assets, utilities
File "/home/user/PycharmProjects/Dagster/my-project/my-project/assets.py", line 12, in <module>
from utilities.plotting import plot_dataset
</code></pre>
<p>What am I missing?
Thanks</p>
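<p>A runnable toy of the package-relative import form (a sketch, not a verified diagnosis). Note it uses an underscore-free importable name, since <code>my-project</code> with a hyphen is not a valid Python identifier; the file contents are stand-ins:</p>

```python
import os
import sys
import tempfile

# build a throwaway package: myproject/assets.py + myproject/utilities/
root = tempfile.mkdtemp()
pkg = os.path.join(root, "myproject")
util = os.path.join(pkg, "utilities")
os.makedirs(util)
open(os.path.join(pkg, "__init__.py"), "w").close()
open(os.path.join(util, "__init__.py"), "w").close()
with open(os.path.join(util, "plotting.py"), "w") as f:
    f.write("def plot_dataset():\n    return 'plotted'\n")
with open(os.path.join(pkg, "assets.py"), "w") as f:
    # the leading dot means "resolve relative to this package", so the
    # import works no matter where the loader imports the package from
    f.write("from .utilities.plotting import plot_dataset\n")

sys.path.insert(0, root)
import myproject.assets
print(myproject.assets.plot_dataset())
```

<p>The absolute form <code>from utilities.plotting import ...</code> only works if the package directory itself is on <code>sys.path</code>, which it is not when Dagster imports the package by name.</p>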
| <python><import><python-module><dagster> | 2023-11-23 13:33:47 | 1 | 372 | net_programmer |
77,537,125 | 955,273 | ASAN memory leaks in embedded python interpreter in C++ | <p>Address Sanitizer is reporting a memory leak (multiple actually) originating from an embedded python interpreter when testing some python code exposed to c++ using <code>pybind11</code>.</p>
<p>I have distilled the code down to nothing other than calling <code>Py_InitializeEx</code> and then <code>Py_FinalizeEx</code>:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <Python.h>
int main()
{
Py_InitializeEx(0);
Py_FinalizeEx();
return 0;
}
</code></pre>
<p>All the memory leaks originate from a call to <code>Py_InitializeEx</code>.</p>
<p>Example:</p>
<pre><code>Direct leak of 576 byte(s) in 1 object(s) allocated from:
#0 0x7f0d55ce791f in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:69
#1 0x7f0d55784a87 (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x133a87)
#2 0x7f0d55769e4f (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x118e4f)
#3 0x7f0d5576a084 (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x119084)
#4 0x7f0d5576b0fa (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x11a0fa)
#5 0x7f0d557974e6 in PyType_Ready (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x1464e6)
#6 0x7f0d5577fea6 (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x12eea6)
#7 0x7f0d5584eda5 (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x1fdda5)
#8 0x7f0d559697b3 (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x3187b3)
#9 0x7f0d558524e8 in Py_InitializeFromConfig (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x2014e8)
#10 0x7f0d558548fb in Py_InitializeEx (/lib/x86_64-linux-gnu/libpython3.10.so.1.0+0x2038fb)
#11 0x55fe54d040ce in main /home/steve/src/test.cpp:8
#12 0x7f0d54fe7d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
#13 0x7f0d54fe7e3f in __libc_start_main_impl ../csu/libc-start.c:392
#14 0x55fe54d04124 in _start (/home/steve/src/build/test+0x1124)
</code></pre>
<p><strong>Questions:</strong></p>
<ul>
<li>How can I free up the memory so asan is happy?</li>
<li>Failing that, is it safe just to suppress <code>Py_InitializeEx</code> in its entirety?</li>
</ul>
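<p>If the remaining allocations cannot be freed (much of CPython's startup state is intentionally kept for the lifetime of the process), a LeakSanitizer suppression file is the usual way to keep the report clean. A sketch, with frame names taken from the stack trace above:</p>

```
# lsan.supp -- suppress leaks whose stack contains these frames
leak:Py_InitializeEx
leak:Py_InitializeFromConfig
```

<p>Run with <code>LSAN_OPTIONS=suppressions=lsan.supp ./test</code>; suppressed leaks are summarised at the end but no longer reported as errors.</p>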
| <python><c++><address-sanitizer> | 2023-11-23 13:05:59 | 1 | 28,956 | Steve Lorimer |
77,537,067 | 20,399,144 | Use of a single LineString to display two data fields | <p>I have a recurring problem for which I cannot find an easy solution, even though I suspect there is one.</p>
<p>I have a dataset representing road traffic data at a street level. This dataset is associated with a road network graph, where each edge represents a road and each node represents an intersection. For each edge, I have a <code>shapely</code> <em>LineString</em> object containing the geographic properties of the road.</p>
<p>My goal is to draw a map representing this traffic dataset. I can do that quite easily using <code>geopandas</code> for example, but my problem comes from the fact that most roads, even when bi-directional, are represented with a single <em>LineString</em> object, so one of the directions won't be seen, as both will be plotted on top of each other.</p>
<p>Would there be a simple way to solve this problem? Ideally, I would like to end up with a plot like this: <a href="https://i.sstatic.net/QgMhm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QgMhm.png" alt="traffic data representation example" /></a></p>
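<p>A sketch of one possible ingredient, assuming shapely &gt;= 2.0: <code>offset_curve</code> shifts each direction off the shared centreline before plotting (the 0.05 offset and the coordinates are arbitrary illustration values; in practice the offset would be in the CRS units of the data).</p>

```python
from shapely.geometry import LineString

road = LineString([(0, 0), (1, 0), (2, 1)])  # shared centreline of a two-way road
forward = road.offset_curve(0.05)    # positive distance: left of the line
backward = road.offset_curve(-0.05)  # negative distance: right of the line
print(forward.geom_type, backward.geom_type)
```

<p>Plotting <code>forward</code> and <code>backward</code> as two separate geometries (each coloured by its direction's traffic value) should give the side-by-side effect in the example image.</p>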
| <python><plot><geopandas><shapely> | 2023-11-23 12:57:45 | 1 | 438 | R_D |
77,536,952 | 6,797,207 | Issue with using kmeans on filtered pandas data | <p>I have imported a CSV file, and filtered two columns. Very standard and working as expected. However, as soon as i run a KMeans test the outcome is unexpected. I am either running it on the entire dataset (not the filtered data) or on the wrong data.</p>
<pre><code># Load data
data = pd.read_csv('ISD.csv', encoding='ISO-8859-1')
# Remove '%' from the entire DataFrame
data = data.apply(lambda x: x.str.rstrip('%') if x.dtype == 'O' else x)
# Replace '#DIV/0!' with NaN
data.replace('#DIV/0!', pd.NA, inplace=True)
# Convert relevant columns to float, replacing missing values with 0
data['Debt to Equity'] = data['Debt to Equity'].replace(pd.NA, '0').astype(float)
</code></pre>
<p>This loads the data.</p>
<pre><code># Select relevant financial metrics
selected_metrics = data[['Debt to Equity', 'Stable Financial Postion']]
# Apply filters
filtered_data = selected_metrics[(selected_metrics['Stable Financial Postion'] == 1)].copy()
# Check counts of Stable Financial Position in the filtered data
print("Counts of Stable Financial Position in Filtered Data:")
print(filtered_data['Stable Financial Postion'].value_counts())
# Reset index of filtered_data
filtered_data.reset_index(drop=True, inplace=True)
print(filtered_data['Stable Financial Postion'].value_counts())
print (filtered_data['Debt to Equity'].max())
print (filtered_data['Debt to Equity'].min())
</code></pre>
<p>The result from this is as expected and correct</p>
<p>Counts of Stable Financial Position in Filtered Data:</p>
<pre><code>Stable Financial Postion
1 1316
Name: count, dtype: int64
Stable Financial Postion
1 1316
Name: count, dtype: int64
1.923278013
1.25e-09
</code></pre>
<p>However, in the next step, it seems to break down and the results become unexpected.</p>
<pre><code># Clustering
kmeans = KMeans(n_clusters=3, n_init=10) # Adjust the number of clusters as needed
# Fit and predict on the filtered data
cluster_labels = kmeans.fit_predict(filtered_data[['Debt to Equity', 'Stable Financial Postion']])
# Assign cluster labels to the original DataFrame
data.loc[filtered_data.index, 'Cluster'] = cluster_labels
</code></pre>
<p>Running the describe below gives a max value of 226149 for cluster 0, 50000 for cluster 1 and 370 for cluster 2.</p>
<pre><code># Inspect cluster labels
print("\nCluster Counts:")
print(data['Cluster'].value_counts())
# Explore cluster characteristics
cluster_stats = data.groupby('Cluster')['Debt to Equity'].describe()
print("\nCluster Characteristics:")
print(cluster_stats)
</code></pre>
<p>the results are below</p>
<pre><code>Cluster Counts:
Cluster
0.0 998
1.0 244
2.0 74
Name: count, dtype: int64
Cluster Characteristics:
count mean std ... 50% 75% max
Cluster ...
0.0 968.0 524.708244 8306.283145 ... 0.056144 1.629182 226149.000000
1.0 239.0 550.843018 4978.241124 ... 0.054620 1.931508 50000.000000
2.0 71.0 13.488451 64.011376 ... 0.007957 0.373877 370.186275
</code></pre>
<p>Am I going wrong with the filtering or with the Kmeans test?</p>
<p>thanks</p>
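<p>One suspect worth checking (a sketch, not a verified diagnosis): <code>reset_index(drop=True)</code> renumbers <code>filtered_data</code> to 0..1315, so <code>data.loc[filtered_data.index, 'Cluster']</code> writes the labels onto the first 1316 rows of the full frame rather than onto the filtered rows, and the later <code>groupby</code> then describes unfiltered rows. Keeping the original index avoids that; the toy frame below stands in for the CSV:</p>

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the CSV; the point is index alignment.
data = pd.DataFrame({
    "Debt to Equity": [0.5, 3.0, 1.2, 9.9],
    "Stable Financial Postion": [1, 0, 1, 0],
})
filtered = data[data["Stable Financial Postion"] == 1].copy()
# No reset_index(): keeping the original labels means
# data.loc[filtered.index, ...] writes back to the right rows.
labels = np.array([0, 1])  # stand-in for kmeans.fit_predict(filtered[...])
data.loc[filtered.index, "Cluster"] = labels
print(data)
```

<p>With the original index preserved, rows that were filtered out keep <code>NaN</code> in <code>Cluster</code>, so the later <code>groupby('Cluster')</code> only sees the rows that were actually clustered.</p>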
| <python><pandas><k-means> | 2023-11-23 12:39:27 | 1 | 897 | ben121 |
77,536,655 | 14,004,940 | Change percentage legend colour of lightweight-charts-python | <p>I was implementing <a href="https://github.com/louisnw01/lightweight-charts-python" rel="nofollow noreferrer"><strong>lightweight-charts-python</strong></a> library that is python equivalent for tradingview's lightweight-charts library.</p>
<p>In that I wanted to have a label showing open, high, low, close and percentage move of the candle that the crosshair was pointing to.</p>
<p>I found the way to add these labels from <a href="https://github.com/louisnw01/lightweight-charts-python#5-styling" rel="nofollow noreferrer">here</a> but there was no way to change the colour of all labels or only the percentage label dynamically according to the crosshair's candle's colour.</p>
<p>Is there any workaround or a proper solution to this problem?</p>
| <python><lightweight-charts> | 2023-11-23 11:52:33 | 2 | 916 | Mr. Techie |
77,536,632 | 8,930,751 | Same message is being received by multiple receivers using event hubs | <p>An event hub is created in Azure. This event hub has $default consumer group. It has 32 partitions.</p>
<p>There is one sender to this event hub which sends messages at some particular frequency. Now there are multiple receivers to this event hub. The code to send and receive is written in Python exactly as per <a href="https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-python-get-started-send?tabs=passwordless%2Croles-azure-portal" rel="nofollow noreferrer">this document</a>. This receiver Python script is running in 5 different VMs.</p>
<p>The expectation is, only one of the receiver should receive the message and process it. But it is observed that more than one receiver is receiving the same message.</p>
<p>I read that if checkpointing is not done, this might happen. But in the sample code, I see that the checkpointing is done. What else is missing? Or is my expectation wrong?</p>
<p>PS: The people who worked on this previously chose Event Hubs, and the entire infrastructure and code is built around them. It would be difficult to move away from Event Hubs now, so I plan to stick with them for the time being.</p>
| <python><python-3.x><azure><azure-eventhub> | 2023-11-23 11:48:54 | 1 | 2,416 | CrazyCoder |
77,536,610 | 10,836,309 | Filling nulls after merge - but only to the newly merged columns | <p>I have two dataframes that I am merging:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'ID': [1,2,3,4],
'Column A': [2, 3, 2, 3],
'Column B': [None, None, 3, None]})
df2 = pd.DataFrame({'ID': [1,2,4],
'Column C': [2, 3, 22]})
df = df.merge(df2, on='ID', how='left')
df
</code></pre>
<p>The output is:</p>
<pre><code> ID Column A Column B Column C
0 1 2 NaN 2.0
1 2 3 NaN 3.0
2 3 2 3.0 NaN
3 4 3 NaN 22.0
</code></pre>
<p>Position 2 on the new 'Column C' is Null because ID=3 does not exist in df2.</p>
<p>How can I fill the nulls in the new columns only but leave the nulls in the old column intact?
Expected output:</p>
<pre><code> ID Column A Column B Column C
0 1 2 NaN 2.0
1 2 3 NaN 3.0
2 3 2 3.0 0.0
3 4 3 NaN 22.0
</code></pre>
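<p>One sketch of how to scope the fill to the merged-in columns only, using the sample frames above: take the column names from <code>df2</code> (minus the key) and <code>fillna</code> just those.</p>

```python
import pandas as pd

df = pd.DataFrame({'ID': [1, 2, 3, 4],
                   'Column A': [2, 3, 2, 3],
                   'Column B': [None, None, 3, None]})
df2 = pd.DataFrame({'ID': [1, 2, 4], 'Column C': [2, 3, 22]})

# the columns that come from df2, excluding the merge key
new_cols = df2.columns.difference(['ID'])
df = df.merge(df2, on='ID', how='left')
df[new_cols] = df[new_cols].fillna(0)  # touches only the merged-in columns
print(df)
```

<p>Column B keeps its original NaNs, while only Column C's merge-induced NaN becomes 0.</p>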
| <python><pandas> | 2023-11-23 11:46:17 | 1 | 6,594 | gtomer |
77,536,582 | 1,711,271 | Build a LaTeX label programmatically in matplotlib | <p>I can render hardcoded LaTeX expressions in matplotlib labels, no problem:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0., 2.* np.pi, 20)
y = 2. * np.pi * np.stack((np.sin(x), np.cos(x)))
fig, ax = plt.subplots(2, 1, figsize=(6, 8), dpi=200, constrained_layout=True)
i=0
current_ax = ax[i]
current_ax.plot(x, y[i, :], '--.k', markersize=5, alpha=0.5)
current_ax.set_xlabel('x')
current_ax.set_ylabel(r'$2\pi*\sin (x)$')
i=1
current_ax = ax[i]
current_ax.plot(x, y[i, :], '--.k', markersize=5, alpha=0.5)
current_ax.set_xlabel('x')
current_ax.set_ylabel(r'$2\pi*\cos (x)$')
plt.show()
</code></pre>
<p><em>However</em>, I would like to be able to build them programmatically. In other words, I would like to be able to define a string label such as</p>
<pre><code>function = 'sin'
my_label = f"r'$2\pi*\{function} (x)$'"
current_ax.set_ylabel(my_label)
</code></pre>
<p>However, this doesn't seem to work. Is there a way to fix it?</p>
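<p>A sketch of a likely fix: the attempted string embeds the literal characters <code>r'...'</code> inside the value, so matplotlib receives those quotes as text. Combining the raw and format prefixes instead yields the intended TeX source, which can then go straight into <code>set_ylabel</code>:</p>

```python
function = "sin"
# rf-string: raw (backslashes kept literally) + formatted ({function} substituted)
my_label = rf"$2\pi*\{function} (x)$"
print(my_label)  # $2\pi*\sin (x)$
```

<p>Equivalently, <code>r"$2\pi*\%s (x)$" % function</code> or <code>"$2\\pi*\\{} (x)$".format(function)</code> build the same string without an f-string.</p>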
| <python><matplotlib><plot><f-string> | 2023-11-23 11:42:18 | 1 | 5,726 | DeltaIV |
77,536,463 | 1,994,377 | How do you write a VSCode run configuration that will run the current Python file as a module? | <p>VSCode automatically generates something like this:</p>
<pre><code> {
"name": "Foobar",
"type": "python",
"request": "launch",
"module": "project.folder.foobar",
"justMyCode": true
},
</code></pre>
<p>I would like to have a configuration which always works for the current file so that I have to switch less.</p>
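<p>There does not appear to be a built-in substitution variable that converts <code>${file}</code> into a dotted module path. A common compromise, sketched below, is a generic current-file configuration; note that it runs the file as a script rather than as a module, so package-relative imports may still fail:</p>

```json
{
    "name": "Python: Current File",
    "type": "python",
    "request": "launch",
    "program": "${file}",
    "justMyCode": true
}
```

<p>Adding <code>"cwd": "${workspaceFolder}"</code>, and an <code>"env"</code> entry that puts the workspace root on <code>PYTHONPATH</code>, can make absolute imports resolve the same way they would under <code>-m</code>.</p>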
| <python><visual-studio-code> | 2023-11-23 11:21:52 | 1 | 7,042 | Sarien |
77,536,435 | 6,694,814 | Excel file corrupted after bulk saving with Python | <p>The aim of my work is just to save multiple copies of an Excel macro file under almost identical names.
I used the openpyxl library for it. The only thing that changes in each file name is the number at the end.</p>
<p>My code looks as follows:</p>
<pre><code> from openpyxl import Workbook, load_workbook
wb = load_workbook('Collections/Collection Note - 001.xlsm')
ws = wb.active
for x in range(2,25):
newname = "%03d" % x
wb.save('Collections/Collection Note - '+newname+'.xlsm')
</code></pre>
<p>Everything appears to work, as the files are created. However, I have no idea why they are corrupted.</p>
<p>What is wrong here, since I just want to open them and resave them?</p>
<p><a href="https://i.sstatic.net/b0Kk3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b0Kk3.png" alt="enter image description here" /></a></p>
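<p>A hedged guess at the cause (I cannot verify against the original .xlsm): <code>load_workbook()</code> drops the VBA project unless <code>keep_vba=True</code> is passed, so the saved <code>.xlsm</code> copies contain no macro payload and Excel flags them as corrupted. A sketch of the round trip, using a throwaway workbook in a temp directory instead of the real file:</p>

```python
import os
import tempfile
from openpyxl import Workbook, load_workbook

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "note.xlsx")
Workbook().save(src)  # throwaway source file for the demo

# keep_vba=True preserves any VBA project across the load/save round trip;
# for real .xlsm input this is what keeps the saved copies openable
wb = load_workbook(src, keep_vba=True)
for x in range(2, 5):
    wb.save(os.path.join(tmp, f"Collection Note - {x:03d}.xlsx"))
print(sorted(f for f in os.listdir(tmp) if f.startswith("Collection")))
```

<p>For the real files, the same change would be <code>load_workbook('Collections/Collection Note - 001.xlsm', keep_vba=True)</code> before the save loop.</p>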
| <python><excel> | 2023-11-23 11:16:56 | 1 | 1,556 | Geographos |
77,536,328 | 3,575,623 | Test_step returns no loss values | <p>I have been working for some time with a VAE model based off of <a href="https://keras.io/examples/generative/vae/" rel="nofollow noreferrer">this</a> example; since I am working with binary data, it has been modified.</p>
<p>Recently, the computing cluster I was working on suffered a fault and my most recent version of my script was lost. Here is a <a href="https://gist.github.com/callum-b/c682e54d4d9a3668aa4f58e4d92cac8a" rel="nofollow noreferrer">gist</a> with the full code that I am currently running on my local computer, and I've also included it below, and the data can be found <a href="https://pastebin.com/nz6Sr3GL" rel="nofollow noreferrer">here</a>:</p>
<pre><code>import csv
import sys
import numpy as np
import pandas as pd
import math
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input, Embedding, Flatten, Reshape, Dropout, ReLU
from tensorflow.keras.regularizers import l1
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
import tensorflow.keras.callbacks as kcb
def split_train_test(mydata, test_ratio=0.2):
msk = np.random.randn(len(mydata)) < 1-test_ratio
train = mydata[msk]
test = mydata[~msk]
return train, test
class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
def nll(y_true, y_pred):
## From Louis Tiao's post
""" Negative log likelihood (Bernoulli). """
# keras.losses.binary_crossentropy gives the mean
# over the last axis. we require the sum
return keras.backend.sum(keras.backend.binary_crossentropy(y_true, y_pred), axis=-1)
class VAE(keras.Model):
## FROM https://keras.io/examples/generative/vae/
def __init__(self, encoder, decoder, **kwargs):
super(VAE, self).__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
self.total_loss_tracker = keras.metrics.Mean(name="loss")
self.reconstruction_loss_tracker = keras.metrics.Mean(
name="reconstruction_loss"
)
self.kl_loss_tracker = keras.metrics.Mean(name="kl_loss")
@property
def metrics(self):
return [
self.total_loss_tracker,
self.reconstruction_loss_tracker,
self.kl_loss_tracker,
]
def train_step(self, data):
if isinstance(data, tuple):
data = data[0]
with tf.GradientTape() as tape:
z_mean, z_log_var, z = self.encoder(data)
reconstruction = self.decoder(z)
## BASE RECONSTRUCTION LOSS:
# reconstruction_loss = tf.reduce_mean( keras.losses.binary_crossentropy(data, reconstruction) )
## ELBO RECONSTRUCTION LOSS:
reconstruction_loss = tf.reduce_mean( nll(data, reconstruction) )
## KULLBACK-LEIBLER DIVERGENCE (maybe?):
kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
## BASE TOTAL LOSS:
total_loss = reconstruction_loss + kl_loss
grads = tape.gradient(total_loss, self.trainable_weights)
self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
self.total_loss_tracker.update_state(total_loss)
self.reconstruction_loss_tracker.update_state(reconstruction_loss)
self.kl_loss_tracker.update_state(kl_loss)
return {
"loss": self.total_loss_tracker.result(),
"reconstruction_loss": self.reconstruction_loss_tracker.result(),
"kl_loss": self.kl_loss_tracker.result(),
}
def test_step(self, data):
## TENTATIVE CALL FUNCTION FOR VALIDATION DATA
if isinstance(data, tuple):
data = data[0]
z_mean, z_log_var, z = self.encoder(data)
reconstruction = self.decoder(z)
## BASE RECONSTRUCTION LOSS:
# reconstruction_loss = tf.reduce_mean( keras.losses.binary_crossentropy(data, reconstruction) )
## ELBO RECONSTRUCTION LOSS:
reconstruction_loss = tf.reduce_mean( nll(data, reconstruction) )
kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
## BASE TOTAL LOSS:
total_loss = reconstruction_loss + kl_loss
return {
"loss": total_loss,
"reconstruction_loss": reconstruction_loss,
"kl_loss": kl_loss,
}
data = pd.read_csv("/my/dir/my_data.txt", sep="\t", header=None, index_col=0)
x_train, x_test = split_train_test(np.transpose(np.array(data, dtype="float32")))
## DEFINE AUTOENCODER PARAMS
input_size = x_train.shape[1]
hidden_1_size = math.ceil(0.5*input_size)
hidden_2_size = math.ceil(0.2*input_size)
hidden_3_size = math.ceil(0.08*input_size)
code_size = math.ceil(0.05*input_size)
dropout_rate = 0.2
## CREATE AUTOENCODER STRUCTURE FOR CATEGORICAL DATA
myactivation = "sigmoid"
input_data = Input(shape=x_train[1].shape)
hidden_1 = Dense(hidden_1_size, activation=myactivation)(input_data)
e_drop_1 = Dropout(dropout_rate)(hidden_1)
hidden_2 = Dense(hidden_2_size, activation=myactivation)(e_drop_1)
e_drop_2 = Dropout(dropout_rate)(hidden_2)
hidden_3 = Dense(hidden_3_size, activation=myactivation)(e_drop_2)
code_mean = Dense(code_size, name="code_mean")(hidden_3)
code_log_var = Dense(code_size, name="code_log_var")(hidden_3)
code = Sampling()([code_mean, code_log_var])
latent_inputs = Input(shape=(code_size,))
hidden_3_rev = Dense(hidden_3_size, activation=myactivation)(latent_inputs)
d_drop_1 = Dropout(dropout_rate)(hidden_3_rev)
hidden_2_rev = Dense(hidden_2_size, activation=myactivation)(d_drop_1)
d_drop_2 = Dropout(dropout_rate)(hidden_2_rev)
hidden_1_rev = Dense(hidden_1_size, activation=myactivation)(d_drop_2)
pre_output_data = Dense(input_size, activation=myactivation)(hidden_1_rev)
output_data = ReLU(max_value=1.0)(pre_output_data)
## TRAIN AUTOENCODER
encoder = Model(input_data, [code_mean, code_log_var, code], name="encoder")
decoder = Model(latent_inputs, output_data, name="decoder")
var_autoencoder = VAE(encoder, decoder)
var_autoencoder.compile(optimizer='adam', metrics=[tf.keras.metrics.BinaryAccuracy()])
history = var_autoencoder.fit( x_train, x_train, epochs=1000, shuffle=True, validation_data=(x_test, x_test),
callbacks=kcb.EarlyStopping(monitor="val_loss", patience=30, restore_best_weights=True) )
</code></pre>
<p>I wrote the test step myself, and used it to train my models for a few months earlier this year. However, now that I'm trying to run it again, it doesn't seem to return any meaningful values:</p>
<pre><code>> Epoch 1/1000
127/127 [==============================] - 2s 9ms/step - loss: 187.9941 - reconstruction_loss: 187.8929 - kl_loss: 0.1011 - val_loss: 0.0000e+00 - val_reconstruction_loss: 0.0000e+00 - val_kl_loss: 0.0000e+00
> Epoch 2/1000
127/127 [==============================] - 1s 8ms/step - loss: 154.8218 - reconstruction_loss: 154.8206 - kl_loss: 0.0012 - val_loss: 0.0000e+00 - val_reconstruction_loss: 0.0000e+00 - val_kl_loss: 0.0000e+00
> Epoch 3/1000
127/127 [==============================] - 1s 8ms/step - loss: 154.5254 - reconstruction_loss: 154.5229 - kl_loss: 0.0025 - val_loss: 0.0000e+00 - val_reconstruction_loss: 0.0000e+00 - val_kl_loss: 0.0000e+00
</code></pre>
<p>If there's an issue with my <code>test_step()</code>, it's possible I worked through it previously and simply didn't save the solution to my PC.</p>
<p>I'm currently using tensorflow 2.12.0 and Python 3.8.10</p>
<p>Why is it not returning any loss values currently?</p>
| <python><tensorflow><keras><autoencoder> | 2023-11-23 10:59:50 | 1 | 507 | Whitehot |
77,536,080 | 4,346,886 | Python IntelliJ style 'search everywhere' algorithm | <p>I have a list of file names in python like this:</p>
<pre><code>HelloWorld.csv
hello_windsor.pdf
some_file_i_need.jpg
san_fransisco.png
Another.file.txt
A file name.rar
</code></pre>
<p>I am looking for an IntelliJ style search algorithm where you can enter whole words or simply the first letter of each word in the file name, or a combination of both. Example searches:</p>
<pre><code>hw -> HelloWorld.csv, hello_windsor.pdf
hwor -> HelloWorld.csv
winds -> hello_windsor.pdf
sf -> some_file_i_need.jpg, san_francisco.png
sfin -> some_file_i_need.jpg
file need -> some_file_i_need.jpg
sfr -> san_francisco.png
file -> some_file_i_need.jpg, Another.file.txt, A file name.rar
file another -> Another.file.txt
fnrar -> A file name.rar
</code></pre>
<p>You get the idea.</p>
<p>Are there any Python packages that can do this? Ideally they'd also rank matches by 'frecency' (how often the files have been accessed, and how recently) as well as by how strong the match is.</p>
<p>I know pylucene is one option, but it seems very heavyweight given that the list of file names is short and I have no interest in searching the contents of the files. Are there any other options?</p>
| <python><intellij-idea><n-gram><file-search> | 2023-11-23 10:19:52 | 2 | 822 | Adam Griffiths |
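No package is named in the question, but the matching rule it describes — whole words, first letters of words, or a mix — can be sketched without third-party dependencies: split each file name into word tokens, then let every query word consume prefixes of consecutive tokens. A minimal, hedged illustration (the helper names `tokenize`, `word_matches` and `matches` are made up for this sketch, and 'frecency' ranking is not covered):

```python
import re

def tokenize(name):
    # Split a file name into lowercase word tokens:
    # CamelCase humps plus chunks separated by '_', '-', '.' or spaces.
    parts = re.split(r"[_\-. ]+", name)
    tokens = []
    for part in parts:
        tokens += re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", part)
    return [t.lower() for t in tokens]

def word_matches(word, tokens, start=0):
    # `word` matches if it can be split into pieces, each piece being a
    # prefix of a later token (in order): 'hwor' -> 'h' + 'wor' over
    # tokens ['hello', 'world'].
    if not word:
        return True
    for i in range(start, len(tokens)):
        for k in range(min(len(word), len(tokens[i])), 0, -1):
            if tokens[i].startswith(word[:k]) and word_matches(word[k:], tokens, i + 1):
                return True
    return False

def matches(query, name):
    # Every whitespace-separated query word must match independently.
    tokens = tokenize(name)
    return all(word_matches(w, tokens) for w in query.lower().split())
```

Ranking could then sort the surviving hits by match length together with a separately maintained access counter.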
77,535,978 | 10,667,216 | Issue with Pickling in Top2Vec v1.0.34 | <p>I recently upgraded to Top2Vec version 1.0.34 and encountered an issue with pickling the model.
Specifically, I'm getting the following error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[18], line 8
2 model_name_out = os.path.join(
3 stt.DATA_PATH_RESULTS,
4 f"{PROJECT_NAME}.pickle")
7 with open(model_name_out, "wb") as file:
----> 8 pickle.dump(data, file)
AttributeError: Can't pickle local object 'Loader._recreate_base_user_object.<locals>._UserObject'
</code></pre>
<p>Here all the information:</p>
<pre><code>Python version: 3.9.12
Top2Vec version: 1.0.34
</code></pre>
<p>Question: Has anyone else encountered this issue? Are there any workarounds or known fixes for this problem?</p>
<p>Thank you for your help!</p>
| <python><pickle><top2vec> | 2023-11-23 10:07:17 | 1 | 483 | Davood |
77,535,863 | 6,240,756 | Sort list with letters, numbers and special characters | <p>I know this kind of question has already been asked several times but I cannot find a proper answer to my problem.</p>
<p>I have a list of strings containing (or not) any kind of characters, letters, numbers, and special characters (<code>-</code> or <code>_</code>).</p>
<p>I would like to sort this list with the following priority criteria: letters first (case-insensitive; I don't mind whether uppercase or lowercase comes first), then numbers, then special characters (I don't mind the order of the special characters)</p>
<p>For example :</p>
<pre class="lang-py prettyprint-override"><code>>>> data = "abcdefhijklmnopqrstuvwxyz123456789ABCDEFHIJKLMNOPQRSTUVWXYZ-_"
>>> expected = "aAbBcCdDeEfFhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ123456789-_"
>>> result = sorted(data, key=str.lower)
>>> print(result)
['-', '1', '2', '3', '4', '5', '6', '7', '8', '9', '_', 'a', 'A', 'b', 'B', 'c', 'C', 'd', 'D', 'e', 'E', 'f', 'F', 'h', 'H', 'i', 'I', 'j', 'J', 'k', 'K', 'l', 'L', 'm', 'M', 'n', 'N', 'o', 'O', 'p', 'P', 'q', 'Q', 'r', 'R', 's', 'S', 't', 'T', 'u', 'U', 'v', 'V', 'w', 'W', 'x', 'X', 'y', 'Y', 'z', 'Z']
>>> print(result == expected)
False
</code></pre>
<p>I tried some good algorithms from already asked questions but they never sort both number and special characters with letters.</p>
<p>How can I achieve this? Thanks.</p>
| <python><sorting> | 2023-11-23 09:52:20 | 3 | 2,005 | iAmoric |
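The three-bucket ordering asked for above maps naturally onto a tuple sort key — bucket first (letters, digits, everything else), then the case-folded character, then a case tie-break. A small sketch (`sort_key` is an ad-hoc name; it assumes plain ASCII input as in the example):

```python
def sort_key(ch):
    # Bucket 0: letters, compared case-insensitively, with lowercase
    # winning ties (False < True); bucket 1: digits; bucket 2: the rest.
    if ch.isalpha():
        return (0, ch.lower(), ch.isupper())
    if ch.isdigit():
        return (1, ch)
    return (2, ch)

data = "abcdefhijklmnopqrstuvwxyz123456789ABCDEFHIJKLMNOPQRSTUVWXYZ-_"
result = "".join(sorted(data, key=sort_key))
# -> "aAbBcCdDeEfFhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ123456789-_"
```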
77,535,857 | 2,583,346 | python plotly - heatmap with different data above and below diagonal | <p>I'm trying to create a heatmap based on two different matrices with identical dimensions but different data, so that the values for matrix1 are shown below the diagonal and the values for matrix2 are shown above the diagonal. Here is a simple example:</p>
<pre><code>import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
# create the data
matrix1 = pd.DataFrame([
[np.nan,np.nan,np.nan],
[0.2,np.nan,np.nan],
[0.5,0.7,np.nan]
])
matrix2 = pd.DataFrame([
[np.nan,20,40],
[np.nan,np.nan,70],
[np.nan,np.nan,np.nan]
])
# plot
fig = px.imshow(matrix1)
fig.add_heatmap(z=matrix2, colorscale='Bluered')
fig.show()
</code></pre>
<p>Here is what I'm getting:
<a href="https://i.sstatic.net/zmRuA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zmRuA.png" alt="enter image description here" /></a></p>
<p>which is quite close to what I want, except that the legends overlap. How can I change the position of one of the legends? Or maybe my approach is wrong altogether?</p>
| <python><plotly><heatmap> | 2023-11-23 09:51:33 | 0 | 1,278 | soungalo |
77,535,830 | 16,027,663 | Vectorized Lookback with Two Numpy Arrays | <p>Is it possible to vectorize lookbacks using two Numpy arrays? In example one below I need to "loop" through each value in first_arr and check if any of the previous 4 values contain 3 or more values from second_arr. If the condition is true the value in first_arr should be added to result_arr.</p>
<p>If there are fewer than 4 values in first_arr it should check however many values exist (although obviously the condition will never be true if less than 3).</p>
<p>The actual arrays will be much larger, so I am looking for a vectorized solution.</p>
<p>Example one:</p>
<pre><code>first_arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
second_arr = [2, 4, 5, 11, 13]
result_arr = [6]
</code></pre>
<p>Example two:</p>
<pre><code>first_arr = [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]
second_arr = [14, 15, 22, 23, 24, 26, 30, 31, 32]
result_arr = [25, 26, 27, 33]
</code></pre>
| <python><numpy> | 2023-11-23 09:48:15 | 2 | 541 | Andy |
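One vectorized formulation of the rule above: mark membership with `np.isin`, then count marks in the trailing window with a cumulative sum instead of a Python loop. A sketch (`lookback`, `window` and `min_hits` are illustrative names generalizing the "4 previous values / 3 or more matches" rule):

```python
import numpy as np

def lookback(first_arr, second_arr, window=4, min_hits=3):
    first = np.asarray(first_arr)
    # 1 where the element appears in second_arr, else 0.
    hits = np.isin(first, second_arr).astype(int)
    # Prefix sums let us count hits in [i-window, i) without a loop;
    # the element itself is excluded from its own window.
    csum = np.concatenate(([0], np.cumsum(hits)))
    idx = np.arange(len(first))
    start = np.maximum(idx - window, 0)  # shorter window near the start
    counts = csum[idx] - csum[start]
    return first[counts >= min_hits]
```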
77,535,771 | 7,766,158 | Zip deployment of Azure function does not trigger the import of requirements | <p>I want to deploy some Azure functions through Terraform.</p>
<pre><code>resource "azurerm_storage_account" "function_storage" {
name = var.storage_account_name
resource_group_name = var.resource_group_name
location = var.location
account_tier = "Standard"
account_replication_type = "LRS"
min_tls_version = "TLS1_2"
public_network_access_enabled = true
}
resource "azurerm_storage_container" "function_container" {
name = "${var.function_name}files"
storage_account_name = azurerm_storage_account.function_storage.name
container_access_type = "private"
}
resource "azurerm_service_plan" "appplan" {
name = var.app_serviceplan_name
location = var.location
resource_group_name = var.resource_group_name
os_type = "Linux"
sku_name = "S1"
}
data "archive_file" "file_function_app" {
type = "zip"
source_dir = "../src/azure_functions/my_functions"
# Use a hash of all code files as a zip filename to trigger the deployment on every changes
output_path = "archives/${sha1(join("", [for f in fileset("../src/azure_functions/my_functions", "**") : filesha1("../src/azure_functions/my_functions/${f}")]))}-function.zip"
}
resource "azurerm_linux_function_app" "funcapp" {
name = var.funcapp_name
location = var.location
resource_group_name = var.resource_group_name
service_plan_id = azurerm_service_plan.appplan.id
storage_account_name = azurerm_storage_account.function_storage.name
storage_account_access_key = azurerm_storage_account.function_storage.primary_access_key
site_config {
application_stack {
python_version = 3.11
}
}
app_settings = {
WEBSITE_RUN_FROM_PACKAGE = 1
AzureWebJobsFeatureFlags = "EnableWorkerIndexing"
SCM_DO_BUILD_DURING_DEPLOYMENT = true
ENABLE_ORYX_BUILD = true
}
zip_deploy_file = data.archive_file.file_function_app.output_path
}
</code></pre>
<p>When I do so, the Function App is properly deployed and my Function's files appears in the "App files" tab in the portal.</p>
<p><a href="https://i.sstatic.net/XI0xT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XI0xT.png" alt="enter image description here" /></a></p>
<p>However, there is no function showing in the overview tab and I can't trigger my function.</p>
<p><a href="https://i.sstatic.net/jkd7X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jkd7X.png" alt="enter image description here" /></a></p>
<p>After some investigation, I found out that I can deploy a "hello-world" Azure function properly, but as soon as I add an import from an external lib, the Azure function is not recognized anymore, so I assume that there is an issue with the import of dependencies. How can I solve this issue?</p>
<p>Here is an example of <code>function_app.py</code>:</p>
<pre><code>import azure.functions as func
# Adding this import, or any other external lib will cause the problem
# If I remove this line, the function is properly recognized in Azure
import jsonschema
app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)
@app.route(route="p4_profiling", auth_level=func.AuthLevel.FUNCTION)
def p4_profiling(req: func.HttpRequest) -> func.HttpResponse:
return func.HttpResponse("OK", status_code=200)
</code></pre>
<p>And here is my <code>requirements.txt</code>:</p>
<pre><code>azure-functions==1.17.0
jsonschema==4.19.2
numpy==1.26.1
pandas==2.1.2
thefuzz==0.20.0
psycopg2-binary
</code></pre>
<p>Here is my file structure :</p>
<pre><code>├── infrastructure
│ ├── azure-function.tf
│ ├── main.tf
│ └── variable.tf
└── src
├── azure_functions
│ └── my_function
│ ├── function_app.py
│ ├── host.json
│ ├── __init__.py
│ ├── local.settings.json
│ └── requirements.txt
</code></pre>
<p>And here is my zip file structure :</p>
<pre><code>├── function_app.py
├── host.json
├── __init__.py
├── local.settings.json
└── requirements.txt
</code></pre>
| <python><azure><terraform><azure-functions> | 2023-11-23 09:40:27 | 1 | 1,931 | Nakeuh |
77,535,702 | 903,051 | Convert ImageMagick command to Python | <p>I have the following ImageMagick command that works very well for removing grid lines from a table image:</p>
<pre><code>convert test.png -background white -deskew 40% -write mpr:img \
\( mpr:img -morphology close rectangle:30x1 -negate \) \
\( mpr:img -morphology close rectangle:1x30 -negate \) \
-evaluate-sequence add \
result.png
</code></pre>
<p>I can, of course, invoke this code from my Python script.</p>
<p>However, I would like to be able to translate this command into something pythonic like Wand, PIL, cv2 or whatever.</p>
<p>Mainly for portability's sake, but also to avoid having to save intermediate output.</p>
<p>Even though Wand, for instance, has a morphology function I do not seem to be able to figure out the kernel and so on.</p>
<p>As the real images contain confidential information, I have added a similar mock example. I have verified that the above command produces the desired results of entirely clearing the grid lines without affecting the quality of the text.</p>
<p><a href="https://i.sstatic.net/bJHWd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bJHWd.png" alt="As the real images contain confidential information, I have added a similar mock example." /></a></p>
| <python><opencv><python-imaging-library><imagemagick><wand> | 2023-11-23 09:29:22 | 1 | 543 | mirix |
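One possible transliteration of the command's core — leaving out `-deskew` — uses grayscale morphology from `scipy.ndimage`, where `size=(1, 30)` plays the role of `rectangle:30x1` and `255 - x` stands in for `-negate`. This is a sketch under the assumption of dark ink on a white background, not a verified drop-in replacement:

```python
import numpy as np
from scipy.ndimage import grey_closing

def remove_grid(img, length=30):
    # img: 2D uint8 array, dark ink on a white background.
    # Closing with a wide 1x30 element erases thin vertical strokes
    # (keeping horizontal lines), the tall 30x1 element does the
    # opposite -- mirroring the two `-morphology close rectangle:...`
    # steps; 255 - x mirrors `-negate`.
    horizontal = 255 - grey_closing(img, size=(1, length))
    vertical = 255 - grey_closing(img, size=(length, 1))
    # `-evaluate-sequence add`: sum the original and both negated masks,
    # clamping at white, which bleaches the grid lines but not the text.
    total = img.astype(np.int32) + horizontal + vertical
    return np.clip(total, 0, 255).astype(np.uint8)
```

With a real scan you would first read the image into a 2D array, e.g. `np.array(PIL.Image.open('test.png').convert('L'))`, and save the result back out.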
77,535,500 | 8,849,755 | Importing a Function from External Module Without Colliding with Local Module Names in Python | <p>I have the following files:</p>
<pre><code>/somewhere/my_project/main.py
/somewhere/my_project/utils.py
/somewhere/else/utils.py
</code></pre>
<p>and I need to import a function from <code>/somewhere/else/utils.py</code> into <code>main.py</code>. Usually I do <a href="https://stackoverflow.com/a/4383597/8849755">this</a>, i.e.</p>
<pre><code># main.py
import sys
sys.path.append('/somewhere/else')
from utils import somewhere_elses_function
</code></pre>
<p>However, in this case this collides with <code>/somewhere/my_project/utils.py</code>.</p>
<p>How can I solve this?</p>
| <python><import> | 2023-11-23 08:58:41 | 3 | 3,245 | user171780 |
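For completeness, the standard library also allows loading a module by file path rather than by name, which sidesteps the collision entirely (this is the "importing a source file directly" recipe from the `importlib` docs; `import_from_path` is an illustrative wrapper name):

```python
import importlib.util
import sys

def import_from_path(module_name, file_path):
    # Load a module directly from its file path, bypassing sys.path,
    # so it cannot collide with a local module of the same name.
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module   # optional, lets it be re-imported
    spec.loader.exec_module(module)
    return module

# other_utils = import_from_path("other_utils", "/somewhere/else/utils.py")
# somewhere_elses_function = other_utils.somewhere_elses_function
```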
77,535,380 | 4,056,181 | Minimum number of digits for exact double-precision and the fmt='%.18e' of numpy.savetxt() | <p>I want to save 64-bit (double-precision) floating point numbers to a text file in Python using NumPy's <a href="https://numpy.org/doc/stable/reference/generated/numpy.savetxt.html" rel="nofollow noreferrer"><code>savetxt()</code></a>. The default format is <code>fmt='%.18e'</code> which gives 19 significant decimal digits (e.g. <code>f'{np.pi:.18e}' == '3.141592653589793116e+00'</code>). According to e.g. <a href="https://en.wikipedia.org/wiki/Double-precision_floating-point_format" rel="nofollow noreferrer">Wikipedia</a>, however, 64-bit floats only contain 15 to 17 significant decimal digits, 2 lower than the 19 used by default by <code>savetxt()</code>.</p>
<h5>Question</h5>
<p>I want to know if I can get by using <code>savetxt(..., fmt='%.16e')</code> without any loss of precision, i.e. saving and then loading should be binary compatible with the original data. If so, why is <code>fmt='%.18e'</code> the default and not <code>fmt='%.16e'</code>?</p>
<h5>Test code</h5>
<p>To test this experimentally, I wrote the following:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def test(n, iterations=1_000_000, scale=2**20):
for i in range(iterations):
a = scale*(np.random.random()*2 - 1)
for b in [a, np.nextafter(a, -np.inf), np.nextafter(a, np.inf)]:
if float(f'{{:.{n}e}}'.format(b)) != b:
return False
return True
for n in range(19):
print(f'Binary compatibility with fmt=%.{n}e ({n + 1} significant digits)?', test(n))
</code></pre>
<p>which indeed gives</p>
<pre class="lang-none prettyprint-override"><code>.
.
.
Binary compatibility with fmt=%.15e (16 significant digits)? False
Binary compatibility with fmt=%.16e (17 significant digits)? True
Binary compatibility with fmt=%.17e (18 significant digits)? True
Binary compatibility with fmt=%.18e (19 significant digits)? True
</code></pre>
<p>suggesting that 17 significant digits are always enough, i.e. anything beyond <code>fmt='%.16e'</code> is wasteful.</p>
<h5>Edit</h5>
<p>From the <a href="https://en.cppreference.com/w/cpp/header/cfloat" rel="nofollow noreferrer">CPP reference</a>, the needed number of significant digits needed is indeed <code>DBL_DECIMAL_DIG = 17</code>, corresponding to <code>fmt='%.16e'</code>. The question of why <a href="https://numpy.org/doc/stable/reference/generated/numpy.savetxt.html" rel="nofollow noreferrer"><code>savetxt()</code></a> chooses <code>fmt='%.18e'</code> remains.</p>
| <python><numpy><floating-point><binary><precision> | 2023-11-23 08:36:07 | 0 | 13,201 | jmd_dk |
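The round-trip claim can also be checked end-to-end through `savetxt`/`loadtxt` rather than Python's string formatting — a quick sketch confirming that `fmt='%.16e'` (17 significant digits, C's `DBL_DECIMAL_DIG`) reproduces the array bit for bit:

```python
import io
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(10_000) * 2.0**20

buf = io.StringIO()
# 17 significant digits suffice for a lossless round trip of any
# finite float64, so nothing is gained by the default fmt='%.18e'.
np.savetxt(buf, a, fmt="%.16e")
buf.seek(0)
b = np.loadtxt(buf)
assert np.array_equal(a, b)  # bit-for-bit identical
```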
77,535,331 | 14,471,688 | How can I perform XOR between all row pairs of two 2D numpy arrays memory-efficiently? | <p>I am not familiar with XOR operations, and I was wondering if there is a memory-efficient way to perform an XOR operation between every pair of rows of two 2D NumPy arrays.</p>
<p>Here is my toy example:</p>
<pre><code># Example arrays
u_values = np.array([[True, True, True, True, True, True, False, True, True, True],
[True, True, True, True, True, True, True, False, False, True],
[True, True, True, True, False, True, False, True, True, True],
[True, True, True, True, False, True, True, False, False, True],
[True, True, True, True, True, False, False, False, False, True],
[True, True, True, False, False, False, False, False, False, True],
[True, False, False, False, False, False, False, False, False, True],
[True, True, False, True, False, True, False, True, True, True]])
v_values = np.array([[True, True, True, True, True, True, False, True, True, True],
[True, False, True, False, True, True, True, False, False, True],
[True, True, True, True, False, True, False, True, True, True],
[True, False, True, True, False, False, True, False, False, True],
[True, True, False, True, True, False, False, False, False, True],
[True, True, True, False, False, True, False, False, False, True]])
</code></pre>
<p>Here is what I tried:</p>
<pre><code>u_reshaped = u_values[:, None, :]
v_reshaped = v_values[None, :, :]
xor_result = u_reshaped ^ v_reshaped
xor_results = xor_result.reshape((-1, xor_result.shape[2]))
</code></pre>
<p>I even tried to use the <em>np.packbits</em> function to see if I can save the memory:</p>
<pre><code>u_values_packed = np.packbits(u_values, axis=1)
v_values_packed = np.packbits(v_values, axis=1)
result_packed = u_values_packed[:, None, :] ^ v_values_packed[None, :, :]
result_packed = result_packed.reshape((-1, result_packed.shape[2]))
result_unpacked = np.unpackbits(result_packed, axis=1)[:, :u_values.shape[1]]
</code></pre>
<p>When I performed my real use case, I got this error:</p>
<blockquote>
<p>numpy.core._exceptions.MemoryError: Unable to allocate 959. GiB for an array with shape (2788, 1813, 203769) and data type uint8</p>
</blockquote>
<p>How can I achieve this by using less memory?</p>
<p>To clarify the problem, here is my expected output for toy example with dimension (48,10):</p>
<pre class="lang-none prettyprint-override"><code>[[False False False False False False False False False False]
[False True False True False False True True True False]
[False False False False True False False False False False]
[False True False False True True True True True False]
[False False True False False True False True True False]
[False False False True True False False True True False]
[False False False False False False True True True False]
[False True False True False False False False False False]
[False False False False True False True True True False]
[False True False False True True False False False False]
[False False True False False True True False False False]
[False False False True True False True False False False]
[False False False False True False False False False False]
[False True False True True False True True True False]
[False False False False False False False False False False]
[False True False False False True True True True False]
[False False True False True True False True True False]
[False False False True False False False True True False]
[False False False False True False True True True False]
[False True False True True False False False False False]
[False False False False False False True True True False]
[False True False False False True False False False False]
[False False True False True True True False False False]
[False False False True False False True False False False]
[False False False False False True False True True False]
[False True False True False True True False False False]
[False False False False True True False True True False]
[False True False False True False True False False False]
[False False True False False False False False False False]
[False False False True True True False False False False]
[False False False True True True False True True False]
[False True False False True True True False False False]
[False False False True False True False True True False]
[False True False True False False True False False False]
[False False True True True False False False False False]
[False False False False False True False False False False]
[False True True True True True False True True False]
[False False True False True True True False False False]
[False True True True False True False True True False]
[False False True True False False True False False False]
[False True False True True False False False False False]
[False True True False False True False False False False]
[False False True False True False False False False False]
[False True True True True False True True True False]
[False False True False False False False False False False]
[False True True False False True True True True False]
[False False False False True True False True True False]
[False False True True False False False True True False]]
</code></pre>
| <python><numpy><bit-manipulation> | 2023-11-23 08:25:24 | 0 | 381 | Erwin |
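The 959 GiB allocation comes from materializing the whole (len(u), len(v), n_bytes) cube at once; streaming one axis in chunks keeps only a thin slab resident while still broadcasting over the packed bits. A sketch (`pairwise_xor_chunks` and its `chunk` parameter are illustrative names):

```python
import numpy as np

def pairwise_xor_chunks(u_values, v_values, chunk=256):
    # Yields the XOR of every (u-row, v-row) pair, `chunk` rows of u at
    # a time, so peak memory is ~ chunk * len(v) * n_bytes instead of
    # the full len(u) * len(v) * n_bytes cube.
    up = np.packbits(u_values, axis=1)
    vp = np.packbits(v_values, axis=1)
    n_cols = u_values.shape[1]
    for start in range(0, up.shape[0], chunk):
        block = up[start:start + chunk, None, :] ^ vp[None, :, :]
        block = block.reshape(-1, block.shape[-1])
        yield np.unpackbits(block, axis=1)[:, :n_cols].astype(bool)
```

Each yielded block can be consumed (reduced, written to disk, ...) before the next is produced.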
77,535,074 | 6,900,729 | Can I rely on garbage collector to close asynchronous database connections in Python? | <p>My team is working on an asynchronous HTTP web server implemented in Python (CPython 3.11 to be exact). We're using Redis for data storage and connect to it with the help of the <a href="https://redis-py.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer">redis-py</a> library. Since the HTTP server is asynchronous, we use the <code>redis.asyncio.Redis</code> client class - it creates a connection pool internally and manages it automatically.</p>
<p>The Redis server is hosted in AWS and will have password rotation configured. Currently, we're trying to come up with a way to deal with this in our Python code automatically. There are 2 steps that we have to perform:</p>
<ol>
<li>Create a new connection pool as soon as we know that new credentials are available</li>
<li>Close the existing connection pool as soon as we know that <strong>it will not be used anymore</strong></li>
</ol>
<p>The problem here is step #2. It's not guaranteed that we'll be able to introduce any synchronization mechanism that would tell us whether the connection pool can be safely closed manually (i.e. there're no HTTP requests being handled at that very moment which rely on the old connection pool), so we're looking for an alternative automated solution first. Right now I'd like to know whether we can rely on garbage collector to <strong>safely</strong> close any existing connections for us.</p>
<p>According to <a href="https://redis-py.readthedocs.io/en/stable/examples/asyncio_examples.html#Connecting-and-Disconnecting" rel="nofollow noreferrer">the documentation</a>, <code>redis.asyncio.Redis</code> instances must be closed manually because the <code>__del__</code> magic method, which is synchronous by its nature, cannot execute <code>await self.aclose()</code> itself. At the same time, I'm wondering what happens if these objects are simply destroyed by the GC. In theory, the cleanup process should go something like this:</p>
<ol>
<li>GC destroys a <code>redis.asyncio.Redis</code> instance (a.k.a. client) together with all its fields</li>
<li>GC destroys the connection-pool-class instance stored inside that client</li>
<li>GC destroys the connection-list stored inside that connection pool</li>
<li>GC desroys all the connection-class instances stored inside that list</li>
</ol>
<p>I performed an artificial test similar to this:</p>
<pre class="lang-py prettyprint-override"><code>client = Redis(...)
await asyncio.gather(*(
client.get(f"key_{i}") for i in range(100)
))
# checkpoint 1
client = Redis(...)
# checkpoint 2
</code></pre>
<p>And the Redis server reported 100 connections being open at checkpoint #1, and 0 connections being open at checkpoint #2 (in one case immediately, and in a slightly different case I had to make another request to the Redis server using the new client instance beforehand). It seems that (ab-)using the GC like this doesn't keep any connections hanging on the Redis server, but can we be sure that everything will be properly cleaned up on the HTTP server, and we won't end up with any memory leaks or hanging system resources?</p>
| <python><asynchronous><redis><garbage-collection><connection-pooling> | 2023-11-23 07:37:10 | 1 | 456 | Alex F |
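A deterministic alternative to leaning on the GC is a small reference-counting wrapper around whatever client is current: rotation installs the new client immediately, and the old one is `aclose()`d exactly when its last in-flight user releases it. `RotatingClient` below is a hypothetical sketch, not redis-py API — it only assumes the factory returns objects with an async `aclose()`, which `redis.asyncio.Redis` provides:

```python
import asyncio
from contextlib import asynccontextmanager

class RotatingClient:
    # Hypothetical wrapper: requests borrow the *current* client under a
    # per-client reference count; rotate() swaps in a fresh client and
    # the old one is closed once its count drops to zero.
    def __init__(self, factory):
        self._factory = factory
        self._current = factory()
        self._refs = {self._current: 0}
        self._lock = asyncio.Lock()

    @asynccontextmanager
    async def client(self):
        async with self._lock:
            c = self._current
            self._refs[c] += 1
        try:
            yield c
        finally:
            async with self._lock:
                self._refs[c] -= 1
                retire = self._refs[c] == 0 and c is not self._current
                if retire:
                    del self._refs[c]
            if retire:
                await c.aclose()   # last user of a retired client closes it

    async def rotate(self):
        async with self._lock:
            old = self._current
            self._current = self._factory()
            self._refs[self._current] = 0
            retire = self._refs[old] == 0
            if retire:
                del self._refs[old]
        if retire:
            await old.aclose()     # old pool was idle: close immediately
```

Handlers would wrap each Redis interaction in `async with pool.client() as r: ...`, and the rotation task calls `await pool.rotate()` when new credentials land.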
77,534,891 | 139,150 | Find strings of more than 6 characters repeated in a text | <p>I am trying to find repeated strings (not words) in a text.</p>
<pre><code>x = 'This is a sample text and this is lowercase text that is repeated.'
</code></pre>
<p>In this example, the string ' text ' should not return because only 6 characters match with one another. But the string 'his is ' is the expected value returned.</p>
<p>I tried using range, Counter and regular expressions.</p>
<pre><code>import re
from collections import Counter
duplist = list()
for i in range(1, 30):
mylist = re.findall('.{1,'+str(i)+'}', x)
duplist.append([k for k,v in Counter(mylist).items() if v>1])
</code></pre>
| <python><awk><sed><grep> | 2023-11-23 07:04:20 | 4 | 32,554 | shantanuo |
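A dependency-free way to apply the more-than-6-characters rule: sort all suffixes of the text and take the common prefix of each neighbouring pair (the classic sorted-suffixes trick; fine for short inputs, though quadratic in the worst case as written). `repeated_substrings` is an ad-hoc name:

```python
def repeated_substrings(text, min_len=7):
    # Any substring that occurs twice is a common prefix of two
    # suffixes, and the longest such prefixes appear between
    # *adjacent* suffixes once they are sorted.
    suffixes = sorted(text[i:] for i in range(len(text)))
    found = set()
    for a, b in zip(suffixes, suffixes[1:]):
        k = 0
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1
        if k >= min_len:
            found.add(a[:k])
    return found
```

On the sample sentence this yields 'his is ' (plus any other repeat of 7+ characters, e.g. 'e text '), while the 6-character ' text ' is filtered out.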
77,534,387 | 4,281,353 | keras.Model.fit does not work correctly with generator and sparse categorical crossentropy loss | <p><code>tf.keras.Model.fit(x=generator)</code> does not work correctly with
<code>SparseCategoricalCrossentropy</code>/<code>sparse_categorical_crossentropy</code> loss when a generator supplies the training data. The same symptom is reported in <a href="https://stackoverflow.com/questions/64910527/accuracy-killed-when-using-imagedatagenerator-tensorflow-keras#">Accuracy killed when using ImageDataGenerator TensorFlow Keras</a>.</p>
<p>Please advise if this behaviour is as expected or please point out if code is incorrect.</p>
<p>Code excerpt. Entire code at the bottom.</p>
<pre><code># --------------------------------------------------------------------------------
# CIFAR 10
# --------------------------------------------------------------------------------
USE_SPARCE_LABEL = True
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train, x_validation, y_train, y_validation = train_test_split(
x_train, y_train, test_size=0.2, random_state=42
)
# One Hot Encoding the labels when USE_SPARCE_LABEL is False
if not USE_SPARCE_LABEL:
y_train = keras.utils.to_categorical(y_train, NUM_CLASSES)
y_validation = keras.utils.to_categorical(y_validation, NUM_CLASSES)
y_test = keras.utils.to_categorical(y_test, NUM_CLASSES)
# --------------------------------------------------------------------------------
# Model
# --------------------------------------------------------------------------------
model: Model = Model(
inputs=inputs, outputs=outputs, name="cifar10"
)
# --------------------------------------------------------------------------------
# Compile
# --------------------------------------------------------------------------------
if USE_SPARCE_LABEL:
loss_fn=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False) # <--- cause incorrect behavior
else:
loss_fn=tf.keras.losses.CategoricalCrossentropy(from_logits=False)
learning_rate = 1e-3
model.compile(
optimizer=Adam(learning_rate=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-08),
loss=loss_fn, # <---- sparse categorical causes the incorrect behavior
metrics=["accuracy"]
)
# --------------------------------------------------------------------------------
# Train
# --------------------------------------------------------------------------------
batch_size = 16
number_of_epochs = 10
def data_label_generator(x, y):
def _f():
index = 0
length = len(x)
try:
while True:
yield x[index:index+batch_size], y[index:index+batch_size]
index = (index + batch_size) % length
except StopIteration:
return
return _f
earlystop_callback = tf.keras.callbacks.EarlyStopping(
patience=5,
restore_best_weights=True,
monitor='val_accuracy'
)
steps_per_epoch = len(y_train) // batch_size
validation_steps = (len(y_validation) // batch_size) - 1 # To avoid run out of data for validation
history = model.fit(
x=data_label_generator(x_train, y_train)(), # <--- Generator
batch_size=batch_size,
epochs=number_of_epochs,
verbose=1,
validation_data=data_label_generator(x_validation, y_validation)(),
shuffle=True,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
validation_batch_size=batch_size,
callbacks=[
earlystop_callback
]
)
</code></pre>
<h2>Symptom</h2>
<p>Using sparse label indices and <code>SparseCategoricalCrossentropy</code> as the loss function (<code>USE_SPARCE_LABEL = True</code>). The accuracy values stay unstable and low, which triggers the early stop.</p>
<pre><code>2500/2500 [...] - 24s 8ms/step - loss: 1.4824 - accuracy: 0.0998 - val_loss: 1.1893 - val_accuracy: 0.1003
Epoch 2/10
2500/2500 [...] - 21s 8ms/step - loss: 1.0730 - accuracy: 0.1010 - val_loss: 0.8896 - val_accuracy: 0.0832
Epoch 3/10
2500/2500 [...] - 20s 8ms/step - loss: 0.9272 - accuracy: 0.1016 - val_loss: 0.9150 - val_accuracy: 0.0720
Epoch 4/10
2500/2500 [...] - 20s 8ms/step - loss: 0.7987 - accuracy: 0.1019 - val_loss: 0.8087 - val_accuracy: 0.0864
Epoch 5/10
2500/2500 [...] - 20s 8ms/step - loss: 0.7081 - accuracy: 0.1012 - val_loss: 0.8707 - val_accuracy: 0.0928
Epoch 6/10
2500/2500 [...] - 21s 8ms/step - loss: 0.6056 - accuracy: 0.1019 - val_loss: 0.7688 - val_accuracy: 0.0851
</code></pre>
<p>Using one-hot encoded labels and <code>CategoricalCrossentropy</code> as the loss function (<code>USE_SPARCE_LABEL = False</code>). Works as expected.</p>
<pre><code>2500/2500 [...] - 24s 8ms/step - loss: 1.4146 - accuracy: 0.4997 - val_loss: 1.0906 - val_accuracy: 0.6105
Epoch 2/10
2500/2500 [...] - 21s 9ms/step - loss: 1.0306 - accuracy: 0.6375 - val_loss: 0.9779 - val_accuracy: 0.6532
Epoch 3/10
2500/2500 [...] - 22s 9ms/step - loss: 0.8780 - accuracy: 0.6925 - val_loss: 0.8194 - val_accuracy: 0.7127
Epoch 4/10
2500/2500 [...] - 21s 8ms/step - loss: 0.7641 - accuracy: 0.7315 - val_loss: 0.9330 - val_accuracy: 0.7014
Epoch 5/10
2500/2500 [...] - 21s 8ms/step - loss: 0.6797 - accuracy: 0.7614 - val_loss: 0.7908 - val_accuracy: 0.7311
Epoch 6/10
2500/2500 [...] - 21s 9ms/step - loss: 0.6182 - accuracy: 0.7841 - val_loss: 0.7371 - val_accuracy: 0.7533
Epoch 7/10
2500/2500 [...] - 21s 9ms/step - loss: 0.4981 - accuracy: 0.8217 - val_loss: 0.8221 - val_accuracy: 0.7373
Epoch 8/10
2500/2500 [...] - 22s 9ms/step - loss: 0.4363 - accuracy: 0.8437 - val_loss: 0.7865 - val_accuracy: 0.7525
Epoch 9/10
2500/2500 [...] - 23s 9ms/step - loss: 0.3962 - accuracy: 0.8596 - val_loss: 0.8198 - val_accuracy: 0.7505
Epoch 10/10
2500/2500 [...] - 22s 9ms/step - loss: 0.3463 - accuracy: 0.8776 - val_loss: 0.8472 - val_accuracy: 0.7512
</code></pre>
<h2>Code</h2>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow import keras
from keras import (
__version__
)
from keras.layers import (
Layer,
Normalization,
Conv2D,
MaxPooling2D,
BatchNormalization,
Dense,
Flatten,
Dropout,
Reshape,
Activation,
ReLU,
LeakyReLU,
)
from keras.models import (
Model,
)
from keras.layers import (
Layer
)
from keras.optimizers import (
Adam
)
from sklearn.model_selection import train_test_split
print("TensorFlow version: {}".format(tf.__version__))
tf.keras.__version__ = __version__
print("Keras version: {}".format(tf.keras.__version__))
# --------------------------------------------------------------------------------
# CIFAR 10
# --------------------------------------------------------------------------------
NUM_CLASSES = 10
INPUT_SHAPE = (32, 32, 3)
USE_SPARCE_LABEL = False  # Setting this to False makes it work as expected
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train, x_validation, y_train, y_validation = train_test_split(
x_train, y_train, test_size=0.2, random_state=42
)
# One Hot Encoding the labels
if not USE_SPARCE_LABEL:
y_train = keras.utils.to_categorical(y_train, NUM_CLASSES)
y_validation = keras.utils.to_categorical(y_validation, NUM_CLASSES)
y_test = keras.utils.to_categorical(y_test, NUM_CLASSES)
# --------------------------------------------------------------------------------
# Model
# --------------------------------------------------------------------------------
inputs = tf.keras.Input(
name='image',
shape=INPUT_SHAPE,
dtype=tf.float32
)
x = Conv2D(
filters=32,
kernel_size=(3, 3),
strides=(1, 1),
padding="same",
activation='relu',
input_shape=INPUT_SHAPE
)(inputs)
x = BatchNormalization()(x)
x = Conv2D(
filters=64,
kernel_size=(3, 3),
strides=(1, 1),
padding="same",
activation='relu'
)(x)
x = MaxPooling2D(
pool_size=(2, 2)
)(x)
x = Dropout(0.20)(x)
x = Conv2D(
filters=128,
kernel_size=(3, 3),
strides=(1, 1),
padding="same",
activation='relu'
)(x)
x = BatchNormalization()(x)
x = MaxPooling2D(
pool_size=(2, 2)
)(x)
x = Dropout(0.20)(x)
x = Flatten()(x)
x = Dense(300, activation="relu")(x)
x = BatchNormalization()(x)
x = Dropout(0.20)(x)
x = Dense(200, activation="relu")(x)
outputs = Dense(NUM_CLASSES, activation="softmax")(x)
model: Model = Model(
inputs=inputs, outputs=outputs, name="cifar10"
)
# --------------------------------------------------------------------------------
# Compile
# --------------------------------------------------------------------------------
learning_rate = 1e-3
if USE_SPARCE_LABEL:
loss_fn=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
else:
loss_fn=tf.keras.losses.CategoricalCrossentropy(from_logits=False)
model.compile(
optimizer=Adam(learning_rate=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-08),
loss=loss_fn,
metrics=["accuracy"]
)
model.summary()
# --------------------------------------------------------------------------------
# Train
# --------------------------------------------------------------------------------
batch_size = 16
number_of_epochs = 10
def data_label_generator(x, y):
def _f():
index = 0
length = len(x)
try:
while True:
yield x[index:index+batch_size], y[index:index+batch_size]
index = (index + batch_size) % length
except StopIteration:
return
return _f
earlystop_callback = tf.keras.callbacks.EarlyStopping(
patience=5,
restore_best_weights=True,
monitor='val_accuracy'
)
steps_per_epoch = len(y_train) // batch_size
validation_steps = (len(y_validation) // batch_size) - 1  # -1 so validation doesn't run out of data
history = model.fit(
x=data_label_generator(x_train, y_train)(),
batch_size=batch_size,
epochs=number_of_epochs,
verbose=1,
validation_data=data_label_generator(x_validation, y_validation)(),
shuffle=True,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
validation_batch_size=batch_size,
callbacks=[
earlystop_callback
]
)
</code></pre>
<h2>Environment</h2>
<pre><code>TensorFlow version: 2.14.1
Keras version: 2.14.0
Python 3.10.12
Ubuntu 22.04LTS
</code></pre>
<hr />
<h2>Workaround</h2>
<p>The answer by <a href="https://stackoverflow.com/users/9215780/innat">innat</a> worked.</p>
<pre><code>model.compile(
optimizer=Adam(learning_rate=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-08),
#metrics=["accuracy"]
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy(name='accuracy')])
model.summary()
</code></pre>
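<p>Not a root-cause analysis of how Keras resolves the <code>"accuracy"</code> string, just a minimal NumPy sketch (with hypothetical data) showing that sparse integer labels and their one-hot encoding carry the same information, so a correctly resolved accuracy metric must agree for both formats:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
sparse = rng.integers(0, 10, size=8)     # sparse integer labels, shape (8,)
one_hot = np.eye(10)[sparse]             # one-hot encoding of the same labels, shape (8, 10)
probs = rng.random((8, 10))              # stand-in for softmax outputs
pred = probs.argmax(axis=1)

acc_sparse = (pred == sparse).mean()
acc_one_hot = (pred == one_hot.argmax(axis=1)).mean()
assert acc_sparse == acc_one_hot         # identical labels -> identical accuracy
```

<p>With a plain Python generator, Keras cannot inspect the label shape up front, which is presumably why pinning the metric explicitly (as in the workaround above) removes the ambiguity of the string <code>"accuracy"</code>.</p>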
| <python><tensorflow><machine-learning><keras><generator> | 2023-11-23 04:41:21 | 1 | 22,964 | mon |
77,534,243 | 12,242,085 | How to combine Cross Validation and Early Stopping Round in LSTM model? | <p>I would like to combine cross-validation and early stopping in an LSTM model in Python. I would like to run cross-validation on 3 folds and stop training my LSTM model if there is no improvement in the mean AUC of the 3 folds for 3 consecutive epochs.</p>
<p>In this way, I would like to build a model with the highest possible mean AUC of the 3 folds from cross-validation, one that has not improved for 3 epochs in a row.</p>
<p>My current code is:</p>
<pre><code>import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score
from keras.callbacks import EarlyStopping
from keras.optimizers import Adam  # needed for Adam(...) below
model = Sequential()
model.add(LSTM(units=32, input_shape=(X_train.shape[1], X_train.shape[2]), dropout=0.3))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=Adam(learning_rate=0.01), metrics=['AUC'])
kfold = KFold(n_splits=3, shuffle=True)
early_stopping = EarlyStopping(monitor='val_auc', patience=3, mode='max')
for train_idx, val_idx in kfold.split(X_train):
X_train_fold, X_val_fold = X_train[train_idx], X_train[val_idx]
y_train_fold, y_val_fold = y_train[train_idx], y_train[val_idx]
auc_scores = []
for epoch in range(100):
model.fit(X_train_fold, y_train_fold, epochs=1, validation_data=(X_val_fold, y_val_fold), callbacks=[early_stopping], verbose=0)
y_pred = model.predict(X_val_fold)
auc = roc_auc_score(y_val_fold, y_pred)
auc_scores.append(auc)
print("Epoch:", epoch+1, "Avg AUC from 3 folds:", np.mean(auc_scores))
print("Model stopped learning after epoch", early_stopping.stopped_epoch)
</code></pre>
<p>How can I modify my code to achieve that ?</p>
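<p>Not a full answer, but a minimal sketch of the control flow I believe the question is after: keep one model per fold, advance each by one epoch at a time, and stop manually once the mean validation AUC has not improved for <code>patience</code> epochs. The <code>fold_eval_fns</code> callables are hypothetical stand-ins for "train this fold's model one more epoch and return its validation AUC":</p>

```python
import numpy as np

def fit_with_mean_auc_early_stopping(fold_eval_fns, max_epochs=100, patience=3):
    """fold_eval_fns: one callable per fold; each call trains that fold's
    model for one more epoch and returns the fold's validation AUC."""
    best, best_epoch, history = -np.inf, -1, []
    for epoch in range(max_epochs):
        mean_auc = float(np.mean([evaluate() for evaluate in fold_eval_fns]))
        history.append(mean_auc)
        if mean_auc > best:
            best, best_epoch = mean_auc, epoch
        elif epoch - best_epoch >= patience:
            break                      # no improvement for `patience` epochs
    return best, best_epoch, history

# demo with a fake AUC curve instead of real folds
curves = iter([0.50, 0.60, 0.70, 0.70, 0.65, 0.60, 0.55, 0.50])
best, best_epoch, history = fit_with_mean_auc_early_stopping([lambda: next(curves)])
```

<p>With Keras, each callable would call <code>model.fit(..., epochs=1, verbose=0)</code> on its own fold's model and return <code>roc_auc_score</code> on that fold's validation split; the built-in <code>EarlyStopping</code> callback only ever sees one fold, so the patience logic on the mean has to live outside Keras.</p>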
| <python><machine-learning><lstm><cross-validation><auc> | 2023-11-23 03:40:32 | 0 | 2,350 | dingaro |
77,534,184 | 12,178,648 | Not able to update Django model with HTMX button click | <p>I am using Django, Tailwind CSS, and HTMX.</p>
<p>I am trying to update my Django model (<code>Item</code>), whose <code>color</code> starts as <code>None</code>: a dropdown <code>select</code> chooses the color, and a Save button should update <code>Item.color</code> on the Django model. The HTML shows up, and I am able to make the request (the url shows up: "https://localhost/detail-view/my-item-2/?id=my-item-2&color=YELLOW"), but the model is not being updated and I can't get the <code>JsonResponse</code>s to show up. I'm not sure if there is something wrong with how I am using my POST request? Can anyone help?</p>
<p>models.py</p>
<pre class="lang-py prettyprint-override"><code>class Item(models.Model):
id = models.AutoField(primary_key=True)
color = models.CharField(
"Color",
blank=True,
null=True,
choices=MyConstants.Color.choices,
)
</code></pre>
<p>constants.py</p>
<pre class="lang-py prettyprint-override"><code>class Color(models.TextChoices):
RED = "RED"
YELLOW = "YELLOW"
GREEN = "GREEN"
</code></pre>
<p>views.py</p>
<pre class="lang-py prettyprint-override"><code>@require_POST
def update_color(request):
if request.method == "POST":
new_status = request.POST.get("color")
item_id = request.POST.get("item_id")
item = Items.objects.filter(id=item_id)
item.update_or_create(
color=new_color
)
item.save()
return JsonResponse({"message": "Color updated successfully"})
return JsonResponse({"message": "Not a post request: failed to update color"})
</code></pre>
<p>color_picker.html</p>
<pre class="lang-html prettyprint-override"><code>{% block main_content %}
<div class="p-5">
<div class="mb-6">
<h1 class="text-2xl">Overview</h1>
<p class="text-sm">{{ item.id }}</p>
</div>
<div class="pb-4">
<label for="color" class="text-lg ">Color</label>
<form
hx-post="{% url "update_color" %}"
hx-swap="outerHTML"
>
<input type="hidden" name="item_id" value="{{ item.id }}"/>
<div class="flex">
<select
id="color"
name="color"
class="w-1/4 p-2"
>
<option
value=""
{% if not color %}selected{% endif %}
>Choose a score</option>
<option
key="GREEN"
value="GREEN"
{% if color and color.value == "GREEN" %}selected{% endif %}
>RED </option>
<option
key="YELLOW"
value="YELLOW"
{% if color and color.value == "YELLOW" %}selected{% endif %}
>YELLOW </option>
<option
key="RED"
value="RED"
{% if color and color.value == "RED" %}selected{% endif %}
>GREEN </option>
</select>
<button
type="submit"
class="text-white bg-blue-300"
>Save</button>
</div>
</form>
</div>
</div>
{% endblock %}
</code></pre>
<p>urls.py</p>
<pre class="lang-py prettyprint-override"><code>urlpatterns = [
path(r"update_color/", update_color, name="update_color"),
]
</code></pre>
| <python><django><tailwind-css><htmx> | 2023-11-23 03:20:17 | 1 | 1,715 | sepulchre01 |
77,534,100 | 9,443,671 | Is there any way to merge overlapping audio segments into one with very low latency? | <p>I have a problem where I'm faced with overlapping audio segments: in particular, audio segment <code>t</code> overlaps with audio segment <code>t+1</code> in the last couple of words (e.g. <code>t = "I like to move"</code> and <code>t+1 = "to move in circles all day"</code>). Is there any way that I can merge the 2 audio segments so that they sound smooth? I've tried doing something like the following:</p>
<pre><code>curr_audio = curr_audio[len(prev_audio):] #Where both items are PyDub AudioSegments
</code></pre>
<p>but that does not seem to solve it and only gives me the final part of the audio segment with non-overlapping parts missing. Am I doing something wrong here? I've also tried working with bytes and doing a similar approach to above, but it seems like the byte streams of the generated audios are completely different which makes this problem super hard.</p>
<p>Is there a nice way to do this in near real-time?</p>
<p>Edit of a working example:</p>
<p>Here's sample 1: <a href="https://vocaroo.com/14ygGStF6788" rel="nofollow noreferrer">https://vocaroo.com/14ygGStF6788</a>
Here's sample 2: <a href="https://voca.ro/18S4ZjSwgjey" rel="nofollow noreferrer">https://voca.ro/18S4ZjSwgjey</a></p>
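<p>For context, slicing by <code>len(prev_audio)</code> removes that many milliseconds from the front of the current segment regardless of where the overlap actually ends (PyDub slices are in milliseconds, and <code>len()</code> of an <code>AudioSegment</code> is its duration in ms). A minimal sketch of overlap detection on raw 1-D sample arrays, assuming the overlapping samples are bit-identical; for separately encoded clips like the samples above, the decoded samples will differ slightly, and you would need something like cross-correlation on the decoded waveforms instead:</p>

```python
import numpy as np

def merge_overlapping(a, b):
    """Concatenate 1-D sample arrays a and b, dropping the longest
    prefix of b that exactly matches a suffix of a."""
    for n in range(min(len(a), len(b)), 0, -1):
        if np.array_equal(a[-n:], b[:n]):
            return np.concatenate([a, b[n:]])
    return np.concatenate([a, b])            # no overlap detected

a = np.arange(10.0)                          # ... 6 7 8 9
b = np.concatenate([np.arange(6.0, 10.0),    # 6 7 8 9  <- 4-sample overlap
                    np.array([100.0, 101.0, 102.0])])
merged = merge_overlapping(a, b)             # length 10 + 3
```

<p>This is O(n²) in the worst case but cheap for short overlaps, which keeps latency low; a crossfade over the joined region usually hides any remaining seam.</p>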
| <python><arrays><audio><pydub> | 2023-11-23 02:47:16 | 1 | 687 | skidjoe |
77,534,017 | 16,171,413 | Python's built in sum function taking forever to compute sums of very large range of values in a list | <p>This works well and returns <strong>249999500000</strong>:</p>
<pre><code>sum([x for x in range(1_000_000) if x%2==0])
</code></pre>
<p>This is slower but still returns <strong>24999995000000</strong>:</p>
<pre><code>sum([x for x in range(10_000_000) if x%2==0])
</code></pre>
<p>However, larger range of values such as <code>1_000_000_000</code> takes a very long time to compute. In fact, this returns an error:</p>
<pre><code>sum([x for x in range(10_000_000_000) if x%2==0])
</code></pre>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <listcomp>
MemoryError
</code></pre>
<p>I've seen posts saying that the built-in sum function is the fastest but this doesn't look like being fast to me. My question then is, do we have a faster Pythonic approach to compute large sums like this? Or is it perhaps a hardware limitation?</p>
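<p>Two things worth separating here: the <code>MemoryError</code> comes from materialising the list (a generator expression avoids it), while the runtime is inherent to iterating ten billion values. For an arithmetic progression like "even numbers below n" there is also a closed form that is O(1):</p>

```python
def sum_evens_below(n):
    k = (n + 1) // 2            # how many even ints are in range(n)
    return k * (k - 1)          # equals 2 * (0 + 1 + ... + (k - 1))

# matches the brute-force results quoted above
assert sum_evens_below(1_000_000) == 249_999_500_000
assert sum_evens_below(10_000_000) == 24_999_995_000_000

# generator expression: no list in memory, but still O(n) time
assert sum(x for x in range(1_000_000) if x % 2 == 0) == 249_999_500_000
```

<p>So for <code>range(10_000_000_000)</code> the generator form would finish without a <code>MemoryError</code>, but only the closed form makes it instantaneous.</p>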
| <python><sum><list-comprehension> | 2023-11-23 02:12:15 | 3 | 5,413 | Uchenna Adubasim |
77,534,012 | 219,153 | How to define globals for this timeit call? | <p>This Python 3.11 script benchmarks function <code>f</code>:</p>
<pre><code>import numpy as np, timeit as ti
def f(a):
return np.median(a)
a = np.random.rand(10_000)
m = None
fun = f'm = f(a)'
t = 1000 * np.array(ti.repeat(stmt=fun, setup=fun, globals=globals(), number=1, repeat=100))
print(f'{fun}: {np.amin(t):6.3f}ms {np.median(t):6.3f}ms {m}')
</code></pre>
<p>I intended to get the result of <code>f(a)</code> stored in <code>m</code>, but the call to <code>ti.repeat</code> doesn't do it. I'm getting:</p>
<pre><code>m = f(a): 0.104ms 0.118ms None
</code></pre>
<p>What do I need to change in order to get that? I tried <code>globals={'f': f, 'a': a, 'm': m}</code> and <code>setup='global m'</code> with the same result.</p>
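<p>My understanding, offered as a sketch rather than a definitive explanation: <code>timeit</code> compiles <code>stmt</code> into the body of an inner template function, so <code>m = f(a)</code> binds a function-local <code>m</code> and the module-level <code>m</code> is never touched. Declaring <code>global m</code> inside <code>stmt</code> itself makes the assignment hit the namespace passed via <code>globals=</code>. A stdlib-only version (with <code>statistics.median</code> standing in for <code>np.median</code>):</p>

```python
import random
import statistics
import timeit as ti

def f(data):
    return statistics.median(data)

a = [random.random() for _ in range(10_000)]
m = None

# the `global m` declaration travels with the timed statement
stmt = "global m\nm = f(a)"
t = ti.repeat(stmt=stmt, globals=globals(), number=1, repeat=5)
print(m)   # now holds the median instead of None
```

<p>Note that <code>repeat</code> re-runs the statement, so <code>m</code> ends up holding the result of the last repetition.</p>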
| <python><global><timeit> | 2023-11-23 02:11:16 | 1 | 8,585 | Paul Jurczak |
77,533,990 | 22,875,576 | Installed django in virtual env, but I can't import | <p>I installed Django in virtual env.</p>
<p><a href="https://i.sstatic.net/z5jBI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z5jBI.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/BQkTS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BQkTS.png" alt="Installed in venv" /></a></p>
<p>But I got error.</p>
<pre><code>raise ImportError(
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
</code></pre>
<p>I use python 3.12.0 and tried reinstall all venv and packages, but got same error.</p>
<p><a href="https://stackoverflow.com/questions/37579042/after-installing-django-in-virtual-environment-still-cant-import-module">After installing django in virtual environment, still can't import module</a> is not helpful.</p>
| <python><django> | 2023-11-23 02:05:39 | 0 | 462 | solverfox |
77,533,783 | 179,234 | flask app Failed to serve js generate from vite. Error: Failed to load module script | <p>I have a webapp where the backend is build with python (flask) and the frontend is build with nodejs.</p>
<p>For the frontend, I run the following to generate static files:</p>
<pre><code>npm install
npm run build
</code></pre>
<p>This is my vite.config.ts</p>
<pre><code>import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
const port = process.env.PORT || 5000; // Use PORT environment variable if available, otherwise use 5000 as a fallback
export default defineConfig({
plugins: [react()],
base: "/static/",
build: {
outDir: "../backend/static",
emptyOutDir: true,
sourcemap: true
},
envPrefix: ["APP_INSIGHTS_CONN_STRING", "VITE_"],
envDir: "../../.azure/myenv/",
server: {
proxy: {
"/abc": `http://localhost:${port}`,
"/xyz": `http://localhost:${port}`,
"/lmn": `http://localhost:${port}`,
"/wuv": `http://localhost:${port}`
}
}
});
</code></pre>
<p>I then include the generated static files in my deployment package. I zip it and use zip deployment to deploy to Azure.
The code structure in the wwwroot folder in Azure is as follows:</p>
<pre><code>wwwroot
- static
- app.py
- requirements.txt
...other irrelevant files and folders
</code></pre>
<p>In app.py, I have the following code to serve the frontend files</p>
<pre><code>@app.route("/", methods=['GET'])
def index():
return send_from_directory('static', 'index.html')
@app.route("/<path:path>")
def static_file(path):
return app.send_static_file(path)
</code></pre>
<p>When this is deployed, the backend seems to be working fine, but the static files are not served correctly. I get a white page; when I inspect it, it is the index generated by Vite, but the js and css files are not served and I get the following error: Failed to load module script: Expected a JavaScript module script but the server responded with a MIME type of "text/html". Strict MIME type checking is enforced for module scripts per HTML spec.</p>
<p>What am I doing wrong? I tried removing the "base" parameter from the Vite config, but got the same error.</p>
<p>Also when I click on the js, it doesn't open</p>
<p><a href="https://i.sstatic.net/dkT6I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dkT6I.png" alt="enter image description here" /></a></p>
| <python><node.js><flask><azure-web-app-service><vite> | 2023-11-23 00:30:22 | 1 | 1,417 | Sarah |
77,533,755 | 2,347,063 | Apply a method on blobs in a container without downloading blobs from Azure blob storage | <p>I have a large number of containers in my Azure blob storage account. Each container contains several files/blobs of different formats (e.g., txt or json.) I need to compute the hash of these files and store the hash in a database. The way we're doing it now is that we download the blobs in a container to our local machine, and then compute the hash.</p>
<p>Everything is working, but there's one problem. Since the blobs are pretty large (on average the total size of the blobs in a container is ~1GB), downloading the files every time just to compute the hash is very costly for us in terms of bandwidth. I wonder if there's any way to apply our hash computation method (which is a Python method) to the blobs in a container on Azure without the need to download the blobs?</p>
| <python><azure><azure-blob-storage><azure-storage> | 2023-11-23 00:17:38 | 1 | 2,629 | Pedram |
77,533,733 | 1,035,897 | Excluding folders starting with a dot in google drive API query | <p>I am using <code>Python</code> client for <strong>google drive API</strong>.</p>
<p>I want to enumerate folders that has a name that does not start with a dot ('.').</p>
<p>My naive query to find the folders looks like this:</p>
<pre><code>query = f"""
'{root_folder_id}' in parents
and mimeType = '{FOLDER_MIMETYPE}'
and trashed = false
and not name contains '.'
"""
</code></pre>
<p>The reason I used the <code>contains</code> operator here is because it is stated in the <a href="https://developers.google.com/drive/api/guides/ref-search-terms#operators" rel="nofollow noreferrer">official documentation</a> quote:</p>
<blockquote>
<p>The contains operator only performs prefix matching for a name term.
For example, suppose you have a name of HelloWorld. A query of name
contains 'Hello' returns a result, but a query of name contains
'World' doesn't.</p>
</blockquote>
<p>My query does <em>not</em> work. It does not find any folders. If I remove the offending last part <code>and not name contains '.'</code> then it finds all folders, including those with a dot at the start of the name, so I have narrowed it down to this part.</p>
<p>How can I solve this?</p>
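<p>As far as I know, the Drive query language has no "starts with"/negated-prefix operator for <code>name</code>, and the behaviour of <code>contains</code> with punctuation like <code>'.'</code> is not well defined, which may be why the clause filters out everything. A common workaround is to drop the name clause from the query and filter client-side; a sketch with placeholder data standing in for the real <code>files().list(q=query).execute()["files"]</code> response:</p>

```python
root_folder_id = "FAKE_ROOT_ID"                          # placeholder value
FOLDER_MIMETYPE = "application/vnd.google-apps.folder"

query = (
    f"'{root_folder_id}' in parents "
    f"and mimeType = '{FOLDER_MIMETYPE}' "
    f"and trashed = false"
)

# placeholder API result; in real code, page through files().list(q=query)
files = [{"name": ".hidden"}, {"name": "Docs"}, {"name": "Archive.old"}]
visible = [f for f in files if not f["name"].startswith(".")]
```

<p>Since the dot-prefixed folders are presumably a small fraction of the results, the extra client-side pass costs little beyond the listing you already do.</p>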
| <python><google-drive-api><google-drive-shared-drive><flysystem-google-drive> | 2023-11-23 00:11:51 | 1 | 9,788 | Mr. Developerdude |
77,533,721 | 11,192,275 | Scipy stats t-test for the means and degrees of freedom | <p>I am using the stats module from scipy and in particular the function <code>ttest_ind</code>. I want to extract information related to the degrees of freedom when I apply this test. According to the SciPy v1.11.4 documentation, <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html#scipy-stats-ttest-ind" rel="nofollow noreferrer">link</a>, it is mention that the following values are return:</p>
<ul>
<li><strong>statistic</strong>: t-statistic</li>
<li><strong>pvalue</strong>: p-value associated with the given alternative</li>
<li><strong>df</strong>: degrees of freedom used in calculation of the t-statistic</li>
</ul>
<p>However using the following reproducible example I don't see that this is possible:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.stats import ttest_ind
# Example data for two groups
group1 = [25, 30, 22, 28, 32]
group2 = [18, 24, 20, 26, 19]
t_statistic, p_value, degrees_of_freedom = ttest_ind(group1, group2, permutations=None)
#> Traceback (most recent call last):
#> Cell In[6], line 1
#> ----> 1 t_statistic, p_value, degrees_of_freedom = ttest_ind(group1, group2, permutations=None)
#> ValueError: not enough values to unpack (expected 3, got 2)
</code></pre>
<p>Is this an error in the documentation, or is there a way to obtain the degrees of freedom?</p>
| <python><probability><scipy.stats> | 2023-11-23 00:07:03 | 1 | 456 | luifrancgom |
77,533,604 | 8,006,845 | Execution timing with context manager | <p>I was looking for a solution for timing code execution.
Of course, I found all sorts of solutions mostly suggesting the use of <code>timeit</code> module - which is pretty cool - or just using the <code>time</code> module. Neither of those really provides what I'm after.</p>
<p>I wanted a simple solution that</p>
<ul>
<li>I can quickly add to the code without changing the structure or logic</li>
<li>can cover a single line or code block</li>
<li>elegant and clean enough to even leave it in</li>
</ul>
<p>I think I finally figured it out - using the context manager's dunder methods <code>__enter__</code> and <code>__exit__</code>.</p>
<p>The idea is we capture the time when the context manager starts (<code>__enter__</code>), then just let the code run in the context-manager-block, finally, we capture the time when the context manager ends (<code>__exit__</code>) and do something with the result (print and/or return).</p>
<p>So here's the snippet:</p>
<pre class="lang-py prettyprint-override"><code>import time
class ExecutionTimer:
def __init__(self, message='Execution time', unit='s'):
self.message = message
self.unit = unit
self.start_time = None
self.end_time = None
self.result = None
def __enter__(self):
self.start_time = time.time()
return self
def __exit__(self, exc_type, exc_value, traceback):
self.end_time = time.time()
self.result = self.end_time - self.start_time
self.print_time()
def print_time(self):
elapsed_time = self.result
if self.unit == 'ms':
elapsed_time *= 1000 # convert to milliseconds
elif self.unit == 'us':
elapsed_time *= 1e6 # convert to microseconds
elif self.unit == 'ns':
elapsed_time *= 1e9 # convert to nanoseconds
print(f"{self.message}: {elapsed_time}{self.unit}")
if __name__ == '__main__':
start_time = time.time()
time.sleep(1)
end_time = time.time()
print(f"Execution (s): {end_time - start_time}s")
with ExecutionTimer(message='Execution (s)', unit='s'):
time.sleep(1)
with ExecutionTimer(message="Execution (ms)", unit='ms'):
time.sleep(1)
with ExecutionTimer(message="Execution (us)", unit='us'):
time.sleep(1)
with ExecutionTimer(message="Execution (ns)", unit='ns'):
time.sleep(1)
# or just capture the value
with ExecutionTimer(message="Execution (ns)", unit='ns') as my_timer:
time.sleep(1)
# notice we are not in the context manager any more
print(my_timer.result)
print(my_timer.result)
</code></pre>
<p>and its output:</p>
<pre class="lang-none prettyprint-override"><code>Execution (s): 1.0000789165496826s
Execution (s): 1.0000693798065186s
Execution (ms): 1000.067949295044ms
Execution (us): 1000072.0024108887us
Execution (ns): 1000071287.1551514ns
Execution (ns): 1000077962.8753662ns
1.0000779628753662
1.0000779628753662
Process finished with exit code 0
</code></pre>
<p>There's definitely some room for improvement. It can be integrated with logging to only emit messages to a certain log level OR it could use a "dry run" parameter to control the execution print method. Etc.</p>
<p>Feel free to use, test, and modify it as you please. It would be awesome if you could leave your <strong>constructive</strong> comments, ideas, and improvements so we can all learn from them.</p>
<p>Cheers!</p>
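<p>Two improvements worth folding in, sketched below: <code>time.perf_counter()</code> (monotonic and higher resolution than <code>time.time()</code>, which can jump with system clock adjustments), and optional routing through <code>logging</code>. Using <code>contextlib.contextmanager</code> also trims the class boilerplate:</p>

```python
import logging
import time
from contextlib import contextmanager

@contextmanager
def execution_timer(message="Execution time", logger=None, level=logging.DEBUG):
    result = {}                          # mutable, so callers can read it after the block
    start = time.perf_counter()          # monotonic, high-resolution clock
    try:
        yield result
    finally:
        result["seconds"] = time.perf_counter() - start
        line = f"{message}: {result['seconds']:.6f}s"
        if logger is not None:
            logger.log(level, line)      # emitted only at the configured log level
        else:
            print(line)

with execution_timer("sleep test") as timing:
    time.sleep(0.05)
print(timing["seconds"])
```

<p>Passing a logger with <code>level=logging.DEBUG</code> effectively gives the "dry run" behaviour mentioned above: the timing line disappears unless debug logging is enabled.</p>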
| <python><timing><execution-time><contextmanager><timeit> | 2023-11-22 23:29:28 | 0 | 763 | Gergely M |
77,533,488 | 16,674,436 | NLP pre-processing on two columns in data frame gives error | <p>I have the following data frame:</p>
<pre><code>gmeDateDf.head(2)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>title</th>
<th>score</th>
<th>id</th>
<th>url</th>
<th>comms_num</th>
<th>body</th>
<th>timestamp</th>
</tr>
</thead>
<tbody>
<tr>
<td>It's not about the money, it's about sending a...</td>
<td>55.0</td>
<td>l6ulcx</td>
<td><a href="https://v.redd.it/6j75regs72e61" rel="nofollow noreferrer">https://v.redd.it/6j75regs72e61</a></td>
<td>6.0</td>
<td>NaN</td>
<td>2021-01-28 21:37:41</td>
</tr>
<tr>
<td>Math Professor Scott Steiner says the numbers ...</td>
<td>110.0</td>
<td>l6uibd</td>
<td><a href="https://v.redd.it/ah50lyny62e61" rel="nofollow noreferrer">https://v.redd.it/ah50lyny62e61</a></td>
<td>23.0</td>
<td>NaN</td>
<td>2021-01-28 21:32:10</td>
</tr>
</tbody>
</table>
</div>
<p>I have the following function to pre-process the text (with the proper libraries imported and so on):</p>
<pre><code>def preprocess_text(text):
# Tokenize words
tokens = word_tokenize(text.lower())
# Remove stopwords and non-alphabetic words, and lemmatize
processed_tokens = [lemmatizer.lemmatize(word) for word in tokens if word.isalpha() and word not in stop_words]
return processed_tokens
</code></pre>
<p>Then calling it on specific column:</p>
<pre><code>gmeDateDf.loc[:, 'body'] = gmeDateDf['body'].fillna('NaN').astype(str)
gmeDateDfProcessed = gmeDateDf['body'].apply(preprocess_text)
</code></pre>
<p>That works properly as expected. However, when I try to do it on two columns, like so:</p>
<pre><code>gmeDateDf.loc[:, 'title','body'] = gmeDateDf['title', 'body'].fillna('NaN').astype(str)
gmeDateDfProcessed = gmeDateDf['title', 'body'].apply(preprocess_text)
</code></pre>
<p>I get the following error:</p>
<pre><code> 3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
-> 3804 raise KeyError(key) from err
3805 except TypeError:
3806 # If we have a listlike key, _check_indexing_error will raise
KeyError: ('title', 'body')
</code></pre>
<p>I’ve looked around, asked chatGPT for some help, but I can’t figure it out.</p>
<p>Please, bear with me as I’m still learning the basics of Python.</p>
<p>Why can’t I give it a listlike key? And why is it a listlike key only when I have the two columns? When I call it only on <code>gmeDateDf.loc[:, 'body']</code> it is some kind of listlike key, no? So why would it not work otherwise?</p>
<p>I’m confused, and don’t even know where to look to see what I’m doing wrong now.</p>
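<p>The short version of the error, as I read it: <code>gmeDateDf['title', 'body']</code> passes the single <em>tuple</em> <code>('title', 'body')</code> as one column label (hence <code>KeyError: ('title', 'body')</code>); selecting two columns needs a <em>list</em>, <code>gmeDateDf[['title', 'body']]</code>, and the same applies inside <code>.loc</code>. A self-contained sketch with a simple stand-in for the NLTK pipeline:</p>

```python
import pandas as pd

def preprocess_text(text):
    # stand-in for the real tokenize / stopword / lemmatize pipeline
    return [w.lower() for w in str(text).split() if w.isalpha()]

df = pd.DataFrame({"title": ["Hello World 123"], "body": [None]})

cols = ["title", "body"]                       # a LIST of labels, not a tuple
df.loc[:, cols] = df[cols].fillna("NaN").astype(str)
processed = df[cols].apply(lambda col: col.map(preprocess_text))
```

<p><code>df[cols]</code> returns a DataFrame, and <code>apply</code> then runs the element-wise <code>map</code> once per column; a single bracketed label like <code>df['body']</code> works only because it names one column.</p>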
| <python><pandas><dataframe><nlp><nltk> | 2023-11-22 22:54:17 | 1 | 341 | Louis |
77,533,480 | 998,248 | Is multiprocessing.Queue generic or not (pyright)? | <p>I started using pyright recently (over mypy) and it's finding a lot of issues that weren't coming up with mypy. For example</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Queue
queue: Queue = Queue() # Type of "queue" is "Queue[Unknown]" [reportUnknownVariableType]
</code></pre>
<p>This is surprising to me because <code>multiprocessing.Queue</code> doesn't appear to have a generic type argument when I inspect the source (python 3.9). Is this pyright making standard library types generic as a "feature" or am I just wrong about Queue somehow?</p>
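<p>My understanding, offered as a hedged sketch: pyright reads the typeshed stubs, and in the stubs <code>multiprocessing.Queue</code> is declared generic even though the runtime object in 3.9 is not (it is actually a bound method on the default context, so subscripting it at runtime would raise). A string annotation sidesteps the runtime subscripting while still giving pyright a concrete element type:</p>

```python
from multiprocessing import Queue

# the quoted annotation is never evaluated at runtime, so it is safe even
# where Queue[int] would raise, while pyright still sees Queue[int]
queue: "Queue[int]" = Queue()
queue.put(1)
value = queue.get()
```

<p><code>from __future__ import annotations</code> achieves the same thing module-wide by turning all annotations into strings.</p>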
| <python><python-typing><pyright> | 2023-11-22 22:49:14 | 1 | 2,791 | Anthony Naddeo |
77,533,405 | 7,366,596 | Pandas: How to convert datetime objects into float | <p>Running the MRE below gives me the following output:</p>
<pre><code> c1 c2 c3
0 2023-11-12 10:36:00 2023-11-12 03:00:00 0 days 07:36:00
1 2023-11-12 00:41:00 2023-11-11 00:00:00 1 days 00:41:00
</code></pre>
<pre><code>import pandas as pd
lines = [
"2023-11-12T10:36:00\t2023-11-12T03:00:00",
"2023-11-12T00:41:00\t2023-11-11T00:00:00"
]
data = pd.DataFrame([l.split('\t') for l in lines], columns=['c1', 'c2'])
data["c1"] = pd.to_datetime(data["c1"])
data["c2"] = pd.to_datetime(data["c2"])
data["c3"] = data["c1"] - data["c2"]
print(data)
</code></pre>
<p>What I want to see in the third column are 7.36 and 24.41, respectively. I tried using:</p>
<pre><code>data["c3"] = (data["c1"] - data["c2"]).dt.total_seconds() / 3600.0
</code></pre>
<p>however this ends up with the following incorrect result:</p>
<pre><code> c1 c2 c3
0 2023-11-12 10:36:00 2023-11-12 03:00:00 7.600000
1 2023-11-12 00:41:00 2023-11-11 00:00:00 24.683333
</code></pre>
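<p>Worth noting that 7.36 / 24.41 is an "hours.minutes" encoding rather than a true decimal number of hours (7:36 is 7.6 hours, so the 7.600000 above is the correct decimal value). If the hours.minutes encoding is really what is wanted, a sketch via <code>Series.dt.components</code>:</p>

```python
import pandas as pd

lines = [
    "2023-11-12T10:36:00\t2023-11-12T03:00:00",
    "2023-11-12T00:41:00\t2023-11-11T00:00:00",
]
data = pd.DataFrame([l.split("\t") for l in lines], columns=["c1", "c2"])
data["c1"] = pd.to_datetime(data["c1"])
data["c2"] = pd.to_datetime(data["c2"])

comps = (data["c1"] - data["c2"]).dt.components   # days/hours/minutes/... columns
data["c3"] = comps.days * 24 + comps.hours + comps.minutes / 100
print(data["c3"].tolist())   # approximately [7.36, 24.41]
```

<p>Keep in mind this encoding is awkward for arithmetic (the fractional part only ever reaches .59), so the decimal-hours form is usually preferable if the values will be computed on.</p>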
| <python><pandas><dataframe> | 2023-11-22 22:29:18 | 2 | 402 | bbasaran |
77,533,274 | 785,404 | How can I type annotate a decorator that takes optional arguments? | <p>Let's say I define a decorator like this:</p>
<pre class="lang-py prettyprint-override"><code>import functools

def my_decorator(func=None, *, param=42):
if func is None:
return functools.partial(my_decorator, param=param)
...
</code></pre>
<p>This decorator can be used like this</p>
<pre class="lang-py prettyprint-override"><code>@my_decorator
def my_func(...):
...
</code></pre>
<p>or like this</p>
<pre class="lang-py prettyprint-override"><code>@my_decorator(param=123)
def my_func(...):
...
</code></pre>
<p>This pattern is common in the Python standard library; for example, <a href="https://docs.python.org/3/library/functools.html#functools.lru_cache" rel="nofollow noreferrer">functools.lru_cache</a> does this.</p>
<p>What type hints can I add to this decorator so that type checkers will be able to correctly infer the type of the decorated <code>my_func</code> either way it is decorated?</p>
| <python><mypy><python-typing> | 2023-11-22 21:53:49 | 2 | 2,085 | Kerrick Staley |
77,533,242 | 4,213,362 | Finding minimum of 4 columns in postgres with sqlalchemy (computed sqlalchemy model properties) | <p>I am trying to convert a psql query to SQLAlchemy.
The model has virtual/computed fields, so raw SQL is not something that solves my problem.</p>
<pre><code> query = """
SELECT MIN(LEAST(delta_cell_v1, delta_cell_v2, delta_cell_v3, delta_cell_v4)) , MAX(GREATEST(delta_cell_v1, delta_cell_v2, delta_cell_v3, delta_cell_v4)) FROM bus_battery_data
"""
</code></pre>
<p>I tried the following code, and similar variants such as removing the subquery and using <code>func.min(func.least(...))</code> directly.</p>
<p>Model:</p>
<pre><code>
class BusBatteryData(models.Base):
__tablename__ = "bus_battery_data_mv"
imei = Column(String, primary_key=True)
...
...
@property
def delta_cell_v1(self):
try:
return self.max_cell_v1 - self.min_cell_v1
except TypeError:
return None
@property
def delta_cell_v2(self) -> float:
try:
return self.max_cell_v2 - self.min_cell_v2
except TypeError:
return None
@property
def delta_cell_v3(self) -> float:
try:
return self.max_cell_v3 - self.min_cell_v3
except TypeError:
return None
@property
def delta_cell_v4(self):
try:
return self.max_cell_v4 - self.min_cell_v4
except TypeError:
return None
</code></pre>
<p>This is the query that was used earlier, when the deltas were actual columns in the database:</p>
<pre><code>subquery = select(
func.least(
BusBatteryData.delta_cell_v1,
BusBatteryData.delta_cell_v2,
BusBatteryData.delta_cell_v3,
BusBatteryData.delta_cell_v4,
).label("min_"),
func.greatest(
BusBatteryData.delta_cell_v1,
BusBatteryData.delta_cell_v2,
BusBatteryData.delta_cell_v3,
BusBatteryData.delta_cell_v4,
).label("max_"),
).subquery()
query = select(func.min(subquery.c.min_), func.max(subquery.c.max_))
</code></pre>
<p>Error</p>
<pre><code> sqlalchemy.exc.DBAPIError: (sqlalchemy.dialects.postgresql.asyncpg.Error) <class 'asyncpg.exceptions.DataError'>: invalid input for query argument $1: <property object at 0x0000023FDE38EC50> (expected str, got property)
[SQL: SELECT min(anon_1.min_) AS min_1, max(anon_1.max_) AS max_1
FROM (SELECT least($1, $2, $3, $4) AS min_, greatest($5, $6, $7, $8) AS max_) AS anon_1]
[parameters: (<property object at 0x0000023FDE38EC50>, <property object at 0x0000023FDE38EDE0>, <property object at 0x0000023FDE38EE30>, <property object at 0x0000023FDE38EE80>, <property object at 0x0000023FDE38EC50>, <property object at 0x0000023FDE38EDE0>, <property object at 0x0000023FDE38EE30>, <property object at 0x0000023FDE38EE80>)]
(Background on this error at: https://sqlalche.me/e/20/dbapi)
</code></pre>
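A sketch of the usual fix for this class of error (hedged: the column names and types below are invented, and only two of the four deltas are shown): SQLAlchemy's <code>hybrid_property</code> gives each delta a Python-side value on instances <em>and</em> a real column expression inside queries, so <code>LEAST()</code>/<code>GREATEST()</code> receive SQL expressions instead of <code>&lt;property&gt;</code> objects.

```python
# Hedged sketch, not the asker's full model: columns and names are assumptions.
from sqlalchemy import Column, Float, Integer, func, select
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class BusBatteryData(Base):
    __tablename__ = "bus_battery_data"
    id = Column(Integer, primary_key=True)
    min_cell_v1 = Column(Float)
    max_cell_v1 = Column(Float)
    min_cell_v2 = Column(Float)
    max_cell_v2 = Column(Float)

    @hybrid_property
    def delta_cell_v1(self):
        # Python side: plain arithmetic on a loaded instance
        try:
            return self.max_cell_v1 - self.min_cell_v1
        except TypeError:
            return None

    @delta_cell_v1.expression
    def delta_cell_v1(cls):
        # SQL side: a real column expression, usable inside LEAST()/GREATEST()
        return cls.max_cell_v1 - cls.min_cell_v1

    @hybrid_property
    def delta_cell_v2(self):
        try:
            return self.max_cell_v2 - self.min_cell_v2
        except TypeError:
            return None

    @delta_cell_v2.expression
    def delta_cell_v2(cls):
        return cls.max_cell_v2 - cls.min_cell_v2

# Same shape as the original query, now over SQL expressions, not properties
subq = select(
    func.least(BusBatteryData.delta_cell_v1, BusBatteryData.delta_cell_v2).label("min_"),
    func.greatest(BusBatteryData.delta_cell_v1, BusBatteryData.delta_cell_v2).label("max_"),
).subquery()
query = select(func.min(subq.c.min_), func.max(subq.c.max_))
print(query)
```

The `@property` versions keep working on instances; only the class-level `.expression` variants are new, which is why the original attempt bound Python `property` objects as query parameters.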
| <python><postgresql><sqlalchemy> | 2023-11-22 21:45:00 | 1 | 811 | Vishesh Mangla |
77,533,081 | 22,437,734 | Unable to install packages with pip in VS code | <p>I am trying to install <code>Flask</code> with the following command:</p>
<p><code>pip install flask</code></p>
<p>But pip returns:</p>
<pre><code>Requirement already satisfied: flask in /home/john/anaconda3/lib/python3.11/site-packages (3.0.0)
Requirement already satisfied: Werkzeug>=3.0.0 in /home/john/anaconda3/lib/python3.11/site-packages (from flask) (3.0.1)
Requirement already satisfied: Jinja2>=3.1.2 in /home/john/anaconda3/lib/python3.11/site-packages (from flask) (3.1.2)
Requirement already satisfied: itsdangerous>=2.1.2 in /home/john/anaconda3/lib/python3.11/site-packages (from flask) (2.1.2)
Requirement already satisfied: click>=8.1.3 in /home/john/anaconda3/lib/python3.11/site-packages (from flask) (8.1.7)
Requirement already satisfied: blinker>=1.6.2 in /home/john/anaconda3/lib/python3.11/site-packages (from flask) (1.7.0)
Requirement already satisfied: MarkupSafe>=2.0 in /home/john/anaconda3/lib/python3.11/site-packages (from Jinja2>=3.1.2->flask) (2.1.1)
</code></pre>
<p>(I have anaconda and conda installed)</p>
<p>My imports show an error:</p>
<p><img src="https://i.sstatic.net/4e6uo.png" alt="(Error Image)" /></p>
<p>All that my code contains is:</p>
<pre class="lang-py prettyprint-override"><code>from flask import *
</code></pre>
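A plausible cause (hedged: this is a diagnostic sketch, not a confirmed fix) is that the `pip` on the PATH installs into the Anaconda base environment while VS Code runs a different interpreter. Running this from within VS Code shows which interpreter is active and whether it can see Flask; `importlib.util.find_spec` checks importability without importing:

```python
import importlib.util
import sys

# Which Python is actually executing this file?
print("Interpreter:", sys.executable)

# Can *this* interpreter import flask? (find_spec avoids importing it)
print("flask importable:", importlib.util.find_spec("flask") is not None)

# If this prints False, install into this exact interpreter rather than
# whichever `pip` is first on the PATH:
#     <interpreter path printed above> -m pip install flask
# ...or switch to the Anaconda interpreter via "Python: Select Interpreter".
```

Invoking pip as `python -m pip` ties the install to a specific interpreter, which sidesteps the "Requirement already satisfied (in some other environment)" symptom entirely.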
| <python><visual-studio-code><terminal><pip><python-import> | 2023-11-22 21:00:47 | 2 | 473 | Gleb |
77,532,987 | 14,385,099 | Subset MULTIPLE specific rows and columns to update cell value | <p>I have a dataframe that looks like this:</p>
<pre><code> Onset = [17,18,19,20,21]
Event = ['ArrowsTask', 'SelfReframeNeg', 'Memory', 'SocialReframeNeg', 'Cue']
Response = [1,2,3,4,5]
df = pd.DataFrame({'Event':Event,'Onset':Onset, 'Response':Response})
</code></pre>
<p>If <code>Event</code> is 'ArrowsTask', 'Memory' or 'Cue', I want to change the <code>Response</code> values in those rows to <code>np.nan</code>.</p>
<p>How can I subset the dataframe and achieve this goal?</p>
<p>The final df should be:</p>
<pre><code> Onset = [17,18,19,20,21]
Event = ['ArrowsTask', 'SelfReframeNeg', 'Memory', 'SocialReframeNeg', 'Cue']
 Response = [np.nan,2,np.nan,4,np.nan]
df = pd.DataFrame({'Event':Event,'Onset':Onset, 'Response':Response})
</code></pre>
<p>Thank you!</p>
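A sketch of the usual pandas idiom for this: build a boolean mask with `Series.isin` and assign through `.loc` (using the question's own sample data; assign `0` instead of `np.nan` if zeros are the desired fill):

```python
import numpy as np
import pandas as pd

Onset = [17, 18, 19, 20, 21]
Event = ['ArrowsTask', 'SelfReframeNeg', 'Memory', 'SocialReframeNeg', 'Cue']
Response = [1, 2, 3, 4, 5]
df = pd.DataFrame({'Event': Event, 'Onset': Onset, 'Response': Response})

# Rows whose Event is in the list get their Response replaced in one step
mask = df['Event'].isin(['ArrowsTask', 'Memory', 'Cue'])
df.loc[mask, 'Response'] = np.nan
print(df)
```

`.loc[mask, column]` selects rows and the target column at once, so the assignment happens on the original frame rather than on a copy (avoiding the `SettingWithCopyWarning` that chained indexing like `df[mask]['Response'] = ...` would raise).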
| <python><pandas> | 2023-11-22 20:44:51 | 1 | 753 | jo_ |
77,532,861 | 1,422,096 | With a single serial link, how to read continuous data every second (polling), and do other queries in the meantime? | <p>I communicate with a hardware device via serial link RS232 (no other option available).</p>
<ul>
<li>I need to continuously poll some data "foo", every second (if possible).</li>
<li>Sometimes I also need to ask the device for other data, such as "bar" and "baz".</li>
</ul>
<p>Of course, since it's a single serial link (and no high-level interface like HTTP), I cannot do both at the same time just by using 2 different threads: it's important to check that no concurrent data is being sent at the same time through the same wire.</p>
<p>This is my current solution:</p>
<pre><code>class Device:
def __init__(self):
self.serial_port = serial.Serial(port="COM1", baudrate=9600, parity="N", stopbits=1)
self.start_foo_polling()
def start_foo_polling(self):
self.foo_polling = True
threading.Thread(target=self.get_foo_thread).start()
def get_foo_thread(self):
while self.foo_polling:
self.serial_port.write(b"QUERY_FOO")
data = self.serial_port.read(8) # then, do something with data!
time.sleep(1)
def stop_foo_polling(self):
self.foo_polling = False
def query_bar(self):
rerun_foo_polling = False # *
if self.foo_polling: # *
rerun_foo_polling = True # *
self.stop_foo_polling() # *
time.sleep(1) # wait for the thread to finish # *
self.serial_port.write(b"QUERY_BAR")
data = self.serial_port.read(8)
if rerun_foo_polling: # *
self.start_foo_polling() # *
d = Device()
time.sleep(3.9)
d.query_bar()
</code></pre>
<p>but it would require some code duplication of the lines (*) for <code>def query_baz(self): ...</code>, and more generally it does not seem optimal.</p>
<p><strong>What is the standard serial link solution to handle this concurrency problem?</strong></p>
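A hedged sketch of the standard pattern: funnel every write/read transaction through one lock (a single worker thread consuming a command queue is the other common variant), so the poller never has to be stopped and restarted and `query_baz` costs one line. `FakePort` stands in for `serial.Serial` here so the sketch runs without hardware:

```python
import threading

class FakePort:
    """Stand-in for serial.Serial: logs writes, returns canned bytes."""
    def __init__(self):
        self.log = []
    def write(self, data):
        self.log.append(data)
    def read(self, n):
        return b"\x00" * n

class Device:
    def __init__(self, port):
        self.port = port
        self._io_lock = threading.Lock()      # one wire transaction at a time
        self._stop = threading.Event()
        self._poller = threading.Thread(target=self._poll_foo, daemon=True)
        self._poller.start()

    def _transact(self, command, reply_len=8):
        # Each write/read pair is atomic with respect to every other caller,
        # so polling and ad-hoc queries can never interleave on the wire.
        with self._io_lock:
            self.port.write(command)
            return self.port.read(reply_len)

    def _poll_foo(self):
        # Event.wait doubles as an interruptible sleep (~1 s polling period)
        while not self._stop.wait(timeout=1.0):
            self._transact(b"QUERY_FOO")

    def query_bar(self):
        return self._transact(b"QUERY_BAR")   # no pause/resume dance needed

    def query_baz(self):
        return self._transact(b"QUERY_BAZ")

    def close(self):
        self._stop.set()
        self._poller.join()

d = Device(FakePort())
print(d.query_bar())
d.close()
```

Compared with the original, the lock replaces the `stop / sleep(1) / restart` lines marked (*): any number of `query_*` methods can coexist, and stopping relies on `Event` rather than a bare boolean plus a timed sleep.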
| <python><multithreading><serial-port><pyserial><polling> | 2023-11-22 20:16:37 | 0 | 47,388 | Basj |
77,532,707 | 889,349 | bigram calculation - Memory error, large file problem | <p>Here is my code for n-gram (in this case, trigram) counting over a text corpus:</p>
<pre><code>import sys
import csv
import string
import nltk
from nltk import word_tokenize
from nltk.tokenize import RegexpTokenizer
from nltk.util import ngrams
from collections import Counter
tokenizer = RegexpTokenizer(r'\w+')
with open('hugefile.csv', 'r') as file:
text = file.read()
token = tokenizer.tokenize(text)
trigrams = ngrams(token,3)
count = Counter(trigrams)
with open('out.csv','w') as csvfile:
for tag, count in count.items():
csvfile.write('{},{},{},{}\n'.format(tag[0], tag[1], tag[2], count))
</code></pre>
<p><strong>The error:</strong></p>
<pre><code> Traceback (most recent call last):
File "C:\OSPanel\domains\vad\freq.py", line 17, in <module>
text = file.read();
^^^^^^^^^^^
File "<frozen codecs>", line 322, in decode
MemoryError
</code></pre>
<p>The code works correctly when hugefile.csv is relatively small, but when the file is really huge (several GB) the script stops working.</p>
<p>How can I solve this problem?
Thanks.</p>
<p><strong>Updated:</strong></p>
<pre><code>import sys
import csv
import string
import nltk
from nltk import word_tokenize
from nltk.tokenize import RegexpTokenizer
from nltk.util import ngrams
from collections import deque
from collections import Counter
tokenizer = RegexpTokenizer(r'\w+')
def ngram_iter(it, n=3):
fifo = deque(maxlen=3)
for line in it:
for token in tokenizer.tokenize(line):
fifo.append(token)
if len(fifo) == n:
yield list(fifo)
# Create an empty counter to store the trigram counts
with open('file.txt', 'r') as file:
count = Counter(ngram_iter(file, 3))
with open('out_2.csv','w') as csvfile:
for tag, freq in count.items():
csvfile.write('{},{},{},{}\n'.format(tag[0], tag[1], tag[2], freq))
</code></pre>
<p><strong>Error:</strong></p>
<pre><code> Traceback (most recent call last):
File "freq2.py", line 30, in <module>
count = Counter(ngram_iter(file, 3))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\collections\__init__.py", line 599, in __init__
self.update(iterable, **kwds)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1776.0_x64__qbz5n2kfra8p0\Lib\collections\__init__.py", line 690, in update
_count_elements(self, iterable)
TypeError: unhashable type: 'list'
</code></pre>
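A hedged sketch of a fix for the updated code, assuming the `TypeError` comes from `yield list(fifo)`: `Counter` keys must be hashable, so the generator should yield tuples (and the deque's `maxlen` should follow the `n` parameter rather than being hard-coded to 3). A `StringIO` stands in for the huge file so the sketch is self-contained:

```python
import io
import re
from collections import Counter, deque

TOKEN = re.compile(r"\w+")  # same token pattern as RegexpTokenizer(r'\w+')

def ngram_iter(lines, n=3):
    fifo = deque(maxlen=n)              # maxlen=n, not a hard-coded 3
    for line in lines:                  # one line at a time, never file.read()
        for token in TOKEN.findall(line):
            fifo.append(token)
            if len(fifo) == n:
                yield tuple(fifo)       # tuple, not list: hashable Counter key

sample = io.StringIO("the quick brown fox jumps over the quick brown fox")
counts = Counter(ngram_iter(sample, 3))
print(counts[("the", "quick", "brown")])  # → 2
```

Memory stays bounded by the number of *distinct* n-grams (the `Counter`), not by the file size, since at most one line plus an n-token window is held at a time.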
| <python><nltk><n-gram> | 2023-11-22 19:42:56 | 1 | 3,402 | XTRUST.ORG |
77,532,632 | 11,013,417 | How to proxy all inherited methods in python | <p>I want to proxy all inherited methods of a class. I need to do this because I want to add some information before forwarding the method call. I have something like:</p>
<pre class="lang-py prettyprint-override"><code>class Father():
def fatherMethodOne(self):
print("This is father method one")
def fatherMethodTwo(self):
print("This is father method two")
class Child(Father):
ALLOWED_OPTIONS = ["test"]
def __init__(self) -> None:
super().__init__()
self._options = {
"test": True
}
</code></pre>
<p>Notice that there is an <code>_options</code> dict in the <code>Child</code> class; I want to perform some operations on <code>_options</code> before calling any inherited method from <code>Father</code>.</p>
<p>For example, I've tried the following code:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any

class Father():
def fatherMethodOne(self):
print("This is father method one")
def fatherMethodTwo(self):
print("This is father method two")
class Child(Father):
ALLOWED_OPTIONS = ["test"]
def __init__(self) -> None:
super().__init__()
self._options = {
"test": True
}
def __getattribute__(self, __name: str) -> Any:
for key in self._options.keys():
if key not in Child.ALLOWED_OPTIONS:
raise KeyError("Key not allowed")
return super().__getattribute__(__name)
print(Child().fatherMethodOne())
</code></pre>
<p>But I get the following error (I've omitted filepath):</p>
<pre><code>Traceback (most recent call last):
File "filepath", line 105, in <module>
print(Child().fatherMethodOne())
File "filepath", line 99, in __getattribute__
for key in self._options.keys():
File "filepath", line 99, in __getattribute__
for key in self._options.keys():
File "filepath", line 99, in __getattribute__
for key in self._options.keys():
[Previous line repeated 996 more times]
RecursionError: maximum recursion depth exceeded
</code></pre>
<p>I can't override every method from <code>Father</code>, because in the real case there are a lot of functions to override. That is why I want a way that doesn't require overriding every single method from <code>Father</code>.</p>
<p>So, is there a way to do that?</p>
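A hedged sketch of one way out of the recursion: the hook must not trigger another round of interception when it reads its own state, so fetch `_options` through `super().__getattribute__` (which bypasses the override) before doing the checks:

```python
from typing import Any

class Father:
    def fatherMethodOne(self):
        return "This is father method one"
    def fatherMethodTwo(self):
        return "This is father method two"

class Child(Father):
    ALLOWED_OPTIONS = ["test"]

    def __init__(self) -> None:
        super().__init__()
        self._options = {"test": True}

    def __getattribute__(self, name: str) -> Any:
        # The recursion came from `self._options` inside the hook re-entering
        # the hook; super().__getattribute__ skips the interception entirely.
        options = super().__getattribute__("_options")
        for key in options:
            if key not in Child.ALLOWED_OPTIONS:
                raise KeyError("Key not allowed")
        return super().__getattribute__(name)

print(Child().fatherMethodOne())  # → This is father method one
```

The same bypass trick (`object.__getattribute__(self, name)` works too) applies to any state the hook itself needs; if the checks should only guard inherited public methods, filtering on `name` before validating keeps internal lookups cheap.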
| <python><proxy> | 2023-11-22 19:26:36 | 0 | 351 | Matheus |
77,532,617 | 9,604,370 | AttributeError: module 'os' has no attribute 'add_dll_directory' when importing "pandas" library in Azure Function | <p>I'm endeavoring to deploy an Azure Function application that relies on various libraries, including numpy, torch, and pandas (you can find all the prerequisites listed in the attached image). While the application runs smoothly on my <strong>Ubuntu 22.04</strong> laptop, I encounter a persistent issue upon deployment to the remote Azure Function. Specifically, I keep receiving the following error:</p>
<blockquote>
<p>"AttributeError: module 'os' has no attribute 'add_dll_directory'."</p>
</blockquote>
<p>I've provided an image displaying these errors for your reference.</p>
<p>In my attempts to resolve this issue, I explored several solutions available online, such as those outlined in <a href="https://learn.microsoft.com/es-es/answers/questions/1434854/attributeerror-module-os-has-no-attribute-add-dll" rel="nofollow noreferrer">this Microsoft Q&amp;A thread</a>, https://github.com/confluentinc/confluent-kafka-python/issues/1462, and <a href="https://stackoverflow.com/questions/77244413/azure-function-app-error-attributeerror-module-os-has-no-attribute-add-dll">Azure Function App Error: AttributeError: module 'os' has no attribute 'add_dll_directory'</a>, yet none have proven effective so far. Additionally, I verified that the <code>__init__.py</code> file within my local pandas installation does not include a line invoking <strong>_delvewheel_patch_1_5_1()</strong>. Consequently, I suspect that the problem might stem from using an <em>incorrect version</em> of <em>pandas</em> intended for <em>Windows rather than Linux</em>.</p>
<p><a href="https://i.sstatic.net/E8qpx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E8qpx.png" alt="enter image description here" /></a></p>
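A hedged diagnostic sketch along the lines of that suspicion: `os.add_dll_directory` exists only on Windows, and the `_delvewheel_patch` call that invokes it is injected into Windows wheels only, so checking which platform the running worker and the vendored packages target is a reasonable first step. The pip flags in the comment illustrate the approach of forcing Linux wheels; they are not a verified Azure recipe:

```python
import platform
import sysconfig

# Where this code is actually running, and what wheel platform it expects
print("OS:", platform.system())                    # Azure Functions workers run Linux
print("Platform tag:", sysconfig.get_platform())   # e.g. linux-x86_64

# When vendoring packages for deployment, resolve them for Linux explicitly,
# e.g. (illustrative, assumed target directory):
#   pip install --target .python_packages/lib/site-packages \
#       --platform manylinux2014_x86_64 --only-binary=:all: pandas
```

If `platform.system()` reports Linux but the deployed pandas still references `add_dll_directory`, the packaged wheel was resolved on (or for) Windows and needs to be re-resolved for Linux.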
| <python><pandas><azure><ubuntu><azure-functions> | 2023-11-22 19:21:33 | 1 | 409 | Arthur Vaz |