| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,767,889
| 1,818,935
|
Error running a magic programmatically via IPython's run_cell_magic
|
<p>Consider the following program, which I wrote in two Jupyter Notebook cells.</p>
<p>Cell 1:</p>
<pre><code>import rumbledb as rmbl
%load_ext rumbledb
%env RUMBLEDB_SERVER=http://public.rumbledb.org:9090/jsoniq
</code></pre>
<p>Cell 2:</p>
<pre><code>%%jsoniq
parse-json("{\"x\":3}}").x
</code></pre>
<p>After executing</p>
<pre><code>spark-submit rumbledb-1.21.0-for-spark-3.5.jar serve -p 9090
</code></pre>
<p>in a Git Bash console, when I run these two cells in order, the output of the second cell is</p>
<pre><code>Took: 0.2607388496398926 ms
3
</code></pre>
<p>I'd like to rewrite cell 2 so as not to use the cell magic literally (<code>%%</code> syntax) but programmatically, via a function. The reason I'd like to avoid using it literally is so that I can encapsulate it in a function, as I described in <a href="https://stackoverflow.com/q/78767789/1818935">this post</a>.</p>
<p>I tried the advice at the end of <a href="https://stackoverflow.com/questions/10361206/how-to-run-ipython-magic-from-a-script/15898875#15898875">this answer</a> and rewrote cell 2 as follows:</p>
<p>Cell 3:</p>
<pre><code>from IPython import get_ipython
ipython = get_ipython()
ipython.run_cell_magic('jsoniq', '', 'parse-json("{\"x\":3}}").x')
</code></pre>
<p>However, when I ran this cell, I got the following error message:</p>
<pre><code>Took: 2.2189698219299316 ms
There was an error on line 2 in file:/home/ubuntu/:
Code: [XPST0003]
Message: Parser failed.
Metadata: file:/home/ubuntu/:LINE:2:COLUMN:0:
This code can also be looked up in the documentation and specifications for more information.
</code></pre>
<ol>
<li>Why did I get the error message, and how can I get rid of it?</li>
<li>Why did I not get <code>3</code> in the output, and how can I get it?</li>
</ol>
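<p>For reference, a minimal sketch (plain Python, no IPython needed) of one likely difference between the two cells: the <code>%%jsoniq</code> cell receives its body verbatim, but the string handed to <code>run_cell_magic</code> goes through Python's own string parsing first, so <code>\"</code> collapses to <code>"</code> and the JSONiq source the server sees changes. A raw string keeps the backslash intact:</p>

```python
# In a plain string literal, Python consumes the backslash of \", so the
# magic receives different JSONiq text than the literal %%jsoniq cell did.
plain = 'parse-json("{\"x\":3}}").x'   # backslash consumed by Python
raw = r'parse-json("{\"x\":3}}").x'    # matches the original cell text

print(plain)  # parse-json("{"x":3}}").x
print(raw)    # parse-json("{\"x\":3}}").x

# Hypothetical call, inside IPython:
# get_ipython().run_cell_magic('jsoniq', '', raw)
```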
|
<python><jupyter-notebook><ipython-magic><jsoniq>
|
2024-07-19 06:45:04
| 2
| 6,053
|
Evan Aad
|
78,767,789
| 1,818,935
|
How to invoke a cell magic inside a function?
|
<p>Consider the following program, which I wrote in two Jupyter Notebook cells.</p>
<p>Cell 1:</p>
<pre><code>import rumbledb as rmbl
%load_ext rumbledb
%env RUMBLEDB_SERVER=http://public.rumbledb.org:9090/jsoniq
</code></pre>
<p>Cell 2:</p>
<pre><code>%%jsoniq
parse-json("{\"x\":3}}").x
</code></pre>
<p>After executing</p>
<pre><code>spark-submit rumbledb-1.21.0-for-spark-3.5.jar serve -p 8001
</code></pre>
<p>in a Git Bash console, when I run these two cells in order, the output of the second cell is</p>
<pre><code>Took: 0.2607388496398926 ms
3
</code></pre>
<p>How can I encapsulate the second cell in a function, say <code>f</code>, that I can then call like this: <code>f()</code>?</p>
<p>I tried the obvious:</p>
<p>Cell 3:</p>
<pre><code>def f():
%%jsoniq
parse-json("{\"x\":3}}").x
</code></pre>
<p>Cell 4:</p>
<pre><code>f()
</code></pre>
<p>Cell 3 ran successfully; however, when I ran cell 4, I got the following error message:</p>
<pre><code>UsageError: Line magic function `%%jsoniq` not found.
</code></pre>
<p>I then tried to move the <code>%%jsoniq</code> part to the beginning of the cell, as follows:</p>
<p>Cell 5:</p>
<pre><code>%%jsoniq
def f():
parse-json("{\"x\":3}}").x
</code></pre>
<p>However, running cell 5 yielded the following error message:</p>
<pre><code>Took: 3.5366852283477783 ms
There was an error on line 3 in file:/home/ubuntu/:
Code: [XPST0003]
Message: Parser failed.
Metadata: file:/home/ubuntu/:LINE:3:COLUMN:0:
This code can also be looked up in the documentation and specifications for more information.
</code></pre>
<p>I then tried to follow the advice given at the end of <a href="https://stackoverflow.com/a/15898875/1818935">this answer</a>, and ran the following cell:</p>
<p>Cell 6:</p>
<pre><code>from IPython import get_ipython
def f():
ipython = get_ipython()
ipython.run_cell_magic('jsoniq', '', 'parse-json("{\"x\":3}}").x')
</code></pre>
<p>This executed without problem, but when I then ran cell 4 again (<code>f()</code>), I got the following error message:</p>
<pre><code>Took: 0.5450258255004883 ms
There was an error on line 2 in file:/home/ubuntu/:
Code: [XPST0003]
Message: Parser failed.
Metadata: file:/home/ubuntu/:LINE:2:COLUMN:0:
This code can also be looked up in the documentation and specifications for more information.
</code></pre>
<p>How can I encapsulate cell 2's code in a named function of my own making that I can later invoke in other cells?</p>
|
<python><json><jupyter-notebook><ipython><jsoniq>
|
2024-07-19 06:21:47
| 1
| 6,053
|
Evan Aad
|
78,767,751
| 18,595,760
|
How to enable Intellisense when using YouTube API with Python in VSCode
|
<p>I want to get IntelliSense in VSCode when using the Google YouTube API with Python, but I don't know the detailed steps.</p>
<pre><code>from googleapiclient.discovery import build
api_key="****"
youtube=build("youtube","v3",developerKey=api_key)
request=youtube.channels().list(
part="statistics",
forUsername="****"
)
</code></pre>
<p>I get IntelliSense when typing the <code>build(...)</code> method, but not for the <code>channels()</code> and <code>list()</code> methods. Should I import other modules? When I code in Google Colab, I get IntelliSense for all of the methods above, but in VSCode only the <code>build</code> method has it.</p>
|
<python><visual-studio-code><youtube-api><youtube-data-api>
|
2024-07-19 06:07:51
| 1
| 317
|
test1229
|
78,767,565
| 4,556,479
|
f-strings: Skip thousands delimiter (comma) for four digits (1000); limit comma to five or more digits
|
<p>I have a paper where the editors request I remove the thousands separator (comma) for four-digit numbers. I used f-strings:</p>
<pre class="lang-py prettyprint-override"><code>form=",.0f"
format(123456, form)
> '123,456'
format(1234, form)
> '1,234'
</code></pre>
<p>I cannot find the correct format to keep <code>'123,456'</code> but remove the comma for <code>'1234'</code>.</p>
<p>The python <a href="https://docs.python.org/3/library/string.html#format-examples" rel="nofollow noreferrer">docs</a> do not provide an answer. I could provide a custom formatting function, but I would like to avoid this.</p>
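<p>A minimal sketch of the custom-function route mentioned above, assuming the desired rule is "separators only from five digits up" (the helper name <code>fmt</code> is my own):</p>

```python
def fmt(n):
    # Use the comma separator only for five or more digits; plain otherwise.
    return f"{n:,.0f}" if abs(n) >= 10_000 else f"{n:.0f}"

print(fmt(1234))    # 1234
print(fmt(123456))  # 123,456
```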
|
<python><formatting>
|
2024-07-19 04:53:04
| 2
| 3,581
|
Alex
|
78,767,542
| 355,715
|
Is there a way to read sequentially pretty-printed JSON objects in Python?
|
<p>Suppose you have a JSON file like this:</p>
<pre><code>{
"a": 0
}
{
"a": 1
}
</code></pre>
<p>It's not JSONL, because each object takes more than one line. But it's not a single valid JSON object either. It's sequentially listed pretty-printed JSON objects.</p>
<p><code>json.loads</code> in Python gives an error about invalid formatting if you attempt to load this, and the documentation indicates it only loads a single object. But tools like <code>jq</code> can read this kind of data without issue.</p>
<p>Is there some reasonable way to work with data formatted like this using the core json library? I have an issue where I have some complex objects and while just formatting the data as JSONL works, for readability it would be better to store the data like this. I can wrap everything in a list to make it a single JSON object, but that has downsides like requiring reading the whole file in at once.</p>
<p>There's a similar question <a href="https://stackoverflow.com/questions/17730864/load-pretty-printed-json-from-file-into-python">here</a>, but despite the title the data there isn't JSON at all.</p>
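<p>For what it's worth, the core <code>json</code> module can handle this shape via <code>JSONDecoder.raw_decode</code>, which parses one value and reports the index where it stopped. A hedged sketch (the helper <code>iter_json</code> is my own, and it still needs the whole text in memory, unlike a truly streaming reader):</p>

```python
import json

def iter_json(text):
    # Repeatedly parse one value, skip trailing whitespace, continue.
    dec = json.JSONDecoder()
    idx, n = 0, len(text)
    while idx < n:
        while idx < n and text[idx].isspace():
            idx += 1
        if idx >= n:
            break
        obj, idx = dec.raw_decode(text, idx)
        yield obj

data = '{\n  "a": 0\n}\n{\n  "a": 1\n}\n'
print(list(iter_json(data)))  # [{'a': 0}, {'a': 1}]
```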
|
<python><json><pretty-print>
|
2024-07-19 04:39:11
| 2
| 15,817
|
polm23
|
78,767,325
| 2,780,906
|
What allowable date forms can be used in pandas bdate_range holidays?
|
<p>I am currently testing <code>bdate_range</code> to use in my code.</p>
<p>The following code still outputs July 2 as a date, even though I expected it to be excluded.</p>
<pre><code>>>> pd.bdate_range(start='20240701', end='20240705',holidays=["20240702"],freq="C")
DatetimeIndex(['2024-07-01', '2024-07-02', '2024-07-03', '2024-07-04', '2024-07-05'], dtype='datetime64[ns]', freq='C')
</code></pre>
<p>The following modification corrects this, but I don't understand why:</p>
<pre><code>>>> pd.bdate_range(start='20240701', end='20240705',holidays=["2024-07-02"],freq="C")
DatetimeIndex(['2024-07-01', '2024-07-03', '2024-07-04', '2024-07-05'], dtype='datetime64[ns]', freq='C')
</code></pre>
<p>I assume this is because some parts of the function allow more flexibility in specifying datetime arguments than others. Can someone explain?</p>
<p>I am using</p>
<p>pandas : 2.2.1<br />
numpy : 1.26.4</p>
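<p>One way to sidestep the format sensitivity (a sketch, not a definitive explanation) is to normalise the holiday strings through <code>pd.Timestamp</code> before passing them, so the parsing is done by pandas rather than by whatever the underlying business-day machinery accepts:</p>

```python
import pandas as pd

# Parse holiday strings with pd.Timestamp up front, so any format pandas
# understands (including '20240702') is accepted consistently.
holidays = [pd.Timestamp("20240702")]
idx = pd.bdate_range(start="20240701", end="20240705", holidays=holidays, freq="C")
print(idx)
```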
|
<python><pandas><date>
|
2024-07-19 02:39:22
| 0
| 397
|
Tim
|
78,767,238
| 1,601,580
|
Best way to count tokens for Anthropic Claude Models using the API?
|
<h1>Summary</h1>
<p>How can I count the number of tokens before sending it to Anthropic?</p>
<hr />
<h1>Question content</h1>
<p>I'm working with Anthropic's Claude models and need to accurately count the number of tokens in my prompts and responses. I'm using the <code>anthropic_bedrock</code> Python client but recently came across an alternative method using the <code>anthropic</code> client. I'm looking for advice on which approach is better and the proper way to implement token counting.</p>
<p>Here are the two approaches I've found:</p>
<h3>Approach 1: Using <code>anthropic_bedrock</code> Client</h3>
<pre class="lang-py prettyprint-override"><code>from anthropic_bedrock import AnthropicBedrock
client = AnthropicBedrock()
prompt = "Hello, world!"
token_count = client.count_tokens(prompt)
print(token_count)
</code></pre>
<h3>Approach 2: Using <code>anthropic</code> Client</h3>
<pre class="lang-py prettyprint-override"><code>import anthropic
client = anthropic.Client()
token_count = client.count_tokens("Sample text")
print(token_count)
</code></pre>
<h3>My Questions:</h3>
<ol>
<li>Which client (<code>anthropic_bedrock</code> or <code>anthropic</code>) is better for counting tokens in prompts for Claude models?</li>
<li>Are there any significant differences in how these clients handle token counting or any other functionalities that might influence the choice?</li>
<li>Are there best practices I should follow when counting tokens and managing token usage in my applications?</li>
</ol>
<h3>Steps to Reproduce:</h3>
<ol>
<li>Install the appropriate client (<code>anthropic_bedrock</code> or <code>anthropic</code>) using <code>pip</code>.</li>
<li>Authenticate the client with your AWS credentials.</li>
<li>Use the <code>count_tokens</code> method to count tokens in a given prompt.</li>
<li>Print or log the token count for analysis.</li>
</ol>
<h3>References:</h3>
<ul>
<li><a href="https://discord.com/channels/1072196207201501266/1263673928874725479/1263673928874725479" rel="noreferrer">Discord link to the question in their get-help channel</a></li>
<li><a href="https://tokencounter.org/claude_counter" rel="noreferrer">Anthropic Token Counter Documentation</a></li>
<li><a href="https://docs.anthropic.com" rel="noreferrer">Anthropic API Documentation</a></li>
<li><a href="https://www.datacamp.com/tutorial/getting-started-with-anthropic-api" rel="noreferrer">Using the Anthropic API with Python</a></li>
</ul>
<hr />
<p>They give different outputs!</p>
<pre><code>>>> client = AnthropicBedrock()
prompt = "Hello, world!"
token_count = client.count_tokens(prompt)
print(token_count)>>> prompt = "Hello, world!"
>>> token_count = client.count_tokens(prompt)
>>> print(token_count)
4
>>> import anthropic
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'anthropic'
>>>
>>> client = anthropic.Client()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'anthropic' is not defined
>>>
>>> token_count = client.count_tokens("Sample text")
>>> print(token_count)
2
</code></pre>
<p>2 vs 4. Which one to use?</p>
|
<python>
|
2024-07-19 01:47:58
| 4
| 6,126
|
Charlie Parker
|
78,767,207
| 2,925,620
|
pyqtgraph: Elegant way to remove a plot from GraphicsLayoutWidget?
|
<p>I am trying to programmatically add and remove plots in a <code>GraphicsLayoutWidget</code>. Adding works as expected via <code>addPlot(row, col)</code> (col is zero for me). Unfortunately there is no <code>removePlot</code> function (which I find odd!), so I am trying to use <code>removeItem(plot)</code>. This does not seem to work, though.</p>
<p>Here's a MWE:</p>
<pre><code>import pyqtgraph as pg
import numpy as np
app = pg.mkQApp("MWE")
win = pg.GraphicsLayoutWidget(show=True)
x = np.cos(np.linspace(0, 2*np.pi, 1000))
y = np.sin(np.linspace(0, 4*np.pi, 1000))
plots = []
def add_plot():
p = win.addPlot(row=len(plots), col=0)
plots.append(p)
p.plot(x,y)
def remove_plot(i):
if 0 <= i <= len(plots)-1:
p = plots[i]
win.removeItem(p)
plots.pop(i)
p.deleteLater()
add_plot()
add_plot()
add_plot()
remove_plot(1)
add_plot()
pg.exec()
</code></pre>
<p>Executing this code leads to <code>QGridLayoutEngine::addItem: Cell (2, 0) already taken</code>. In fact, zooming in or out of the second plot reveals that something is wrong.</p>
<p>I found out that after my attempt at removing a plot <code>win.ci.items</code> and <code>win.ci.rows</code> still have the wrong cell, so I have tried to correct these manually. But, since this did not solve the problem, there must be something else still holding the wrong cells after removing a plot.</p>
<ol>
<li>What am I missing here?</li>
<li>Is there a counterpart to <code>addPlot</code>? If not, why?</li>
<li>What is an elegant way to remove a plot from <code>GraphicsLayoutWidget</code>?</li>
</ol>
<p>EDIT: My unsatisfying current workaround is to call <code>win.clear()</code> and add all plots that have not been removed again. Not too elegant.</p>
|
<python><pyqtgraph>
|
2024-07-19 01:28:22
| 1
| 357
|
emma
|
78,767,203
| 6,395,612
|
Pyspark Filtering Array inside a Struct column
|
<p>I have a column in my Spark DataFrame that has this schema:</p>
<pre><code>root
|-- my_feature_name: struct (nullable = true)
| |-- first_profiles: map (nullable = true)
| | |-- key: string
| | |-- value: array (valueContainsNull = true)
| | | |-- element: struct (containsNull = true)
| | | | |-- storeId: string (nullable = true)
| | | | |-- storeIdHashed: string (nullable = true)
| | | | |-- lastUpdated: long (nullable = true)
| | | | |-- sum1d: long (nullable = true)
| |-- second_profile: map (nullable = true)
| | |-- key: string
| | |-- value: array (valueContainsNull = true)
| | | |-- element: struct (containsNull = true)
| | | | |-- storeId: string (nullable = true)
| | | | |-- storeIdHashed: string (nullable = true)
| | | | |-- lastUpdated: long (nullable = true)
| | | | |-- sum1d: long (nullable = true)
</code></pre>
<p>I want to remove the entries in the most deeply nested array where <code>storeId</code> equals some value.
The closest I have come is something like <code>F.expr("filter(my_feature_name.first_profiles.attributed, x -> x.storeId NOT IN (123124124))")</code>,</p>
<p>but that only gives me back the final array without the struct on top of it.
Thanks for the help.</p>
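<p>Since the struct wrapper is the sticking point, here is the shape of the wanted transformation sketched on plain Python dicts (Spark-free; in PySpark the equivalent pieces would be, I believe, <code>withField</code>, <code>transform_values</code>, and <code>filter</code> from <code>pyspark.sql.functions</code>): filter every array value of the map while leaving the surrounding struct in place.</p>

```python
# Toy stand-in for one row's my_feature_name struct.
feature = {
    "first_profiles": {
        "k1": [
            {"storeId": "123124124", "sum1d": 1},
            {"storeId": "999", "sum1d": 2},
        ]
    },
    "second_profile": {},
}

# Rewrite only first_profiles: drop matching entries from each array value,
# keeping the outer struct and the map keys intact.
feature["first_profiles"] = {
    k: [x for x in v if x["storeId"] != "123124124"]
    for k, v in feature["first_profiles"].items()
}
print(feature["first_profiles"]["k1"])  # [{'storeId': '999', 'sum1d': 2}]
```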
|
<python><dataframe><pyspark><filter>
|
2024-07-19 01:25:06
| 2
| 402
|
MathLal
|
78,767,142
| 3,120,501
|
Dictionary indexing with Numpy/Jax
|
<p>I'm writing an interpolation routine and have a dictionary which stores the function values at the fitting points. Ideally, the dictionary keys would be 2D Numpy arrays of the fitting point coordinates, <code>np.array([x, y])</code>, but since Numpy arrays aren't hashable these are converted to tuples for the keys.</p>
<pre><code># fit_pt_coords: (n_pts, n_dims) array
# fn_vals: (n_pts,) array
def fit(fit_pt_coords, fn_vals):
pt_map = {tuple(k): v for k, v in zip(fit_pt_coords, fn_vals)}
...
</code></pre>
<p>Later in the code I need to get the function values using coordinates as keys in order to do the interpolation fitting. I'd like this to be within <code>@jax.jit</code>ed code, but the coordinate values are of type <code><class 'jax._src.interpreters.partial_eval.DynamicJaxprTracer'></code>, which can't be converted to a tuple. I've tried other things, like creating a dictionary key as <code>(x + y, x - y)</code>, but again this requires concrete values, and calling <code>.item()</code> results in a <code>ConcretizationTypeError</code>.</p>
<p>At the moment I've <code>@jax.jit</code>ed all of the code I can, and have just left this code un-jitted. It would be great if I could jit this code as well, however. Are there any better ways to do the dictionary indexing (or better Jax-compatible data structures) which would allow all of the code to be jitted? I am new to Jax and still understanding how it works, so I'm sure there must be better ways of doing it...</p>
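<p>One jit-friendly alternative to the dict, sketched below with NumPy (the same operations exist in <code>jax.numpy</code>, so the function should be traceable): store the fit coordinates and values as parallel arrays and turn the lookup into array operations. The helper names are my own, and this assumes every queried point is exactly one of the fit points.</p>

```python
import numpy as np

# Parallel arrays replace the {coords: value} dict.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([10.0, 20.0, 30.0])

def lookup(pt):
    # Boolean row match instead of a hash lookup; argmax picks the first hit.
    mask = np.all(coords == pt, axis=1)
    return vals[np.argmax(mask)]

print(lookup(np.array([1.0, 0.0])))  # 20.0
```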
|
<python><numpy><dictionary><jit><jax>
|
2024-07-19 00:48:27
| 2
| 528
|
LordCat
|
78,767,002
| 15,800,558
|
"detail": "Not Found" Error With Python Django REST API
|
<p>I'm having an issue with a Python Django REST API I've built that manages file upload and retrieval to and from a Google Cloud SQL database. I am not exactly sure what's causing the error: when I run it, it starts fine, successfully connects to the database, and serves at <a href="http://127.0.0.1:8000/" rel="nofollow noreferrer">http://127.0.0.1:8000/</a>, but whenever I test an endpoint, for example <a href="http://127.0.0.1:8000/upload" rel="nofollow noreferrer">http://127.0.0.1:8000/upload</a>, it returns a 404:
<code>{ "detail": "Not Found" }</code></p>
<p>When the app itself runs it logs this to the console:</p>
<pre><code>Could not find platform independent libraries <prefix>
Could not find platform independent libraries <prefix>
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
You have 18 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run 'python manage.py migrate' to apply them.
July 18, 2024 - 19:03:09
Django version 5.0.6, using settings 'JurisScan_REST_API.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
</code></pre>
<p>Here are the project files</p>
<p>asgi.py</p>
<pre><code>"""
ASGI config for JurisScan_REST_API project.
It exposes the ASGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/5.0/howto/deployment/asgi/
"""
import os
from django.core.asgi import get_asgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'JurisScan_REST_API.settings')
application = get_asgi_application()
</code></pre>
<p>settings.py</p>
<pre><code>"""
Django settings for JurisScan_REST_API project.
Generated by 'django-admin startproject' using Django 5.0.6.
For more information on this file, see
https://docs.djangoproject.com/en/5.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/5.0/ref/settings/
"""
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/5.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = ''
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'JurisScan_REST_API'
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'JurisScan_REST_API.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'JurisScan_REST_API.wsgi.application'
# Database
# https://docs.djangoproject.com/en/5.0/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'jurisscan_db',
'USER': 'root',
'PASSWORD': '',
'HOST': '35.221.31.137',
'PORT': '3306',
}
}
# Password validation
# https://docs.djangoproject.com/en/5.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/5.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/5.0/howto/static-files/
STATIC_URL = 'static/'
# Default primary key field type
# https://docs.djangoproject.com/en/5.0/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
</code></pre>
<p>views.py</p>
<pre><code>import base64
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status
from django.core.exceptions import ObjectDoesNotExist
from django.db import connection
from .models import UserFile
from .serializers import UserFileSerializer
class UploadFileView(APIView):
def post(self, request):
user_id = request.data.get('user_id')
file = request.data.get('file')
file_path = request.data.get('file_path')
if not user_id or not file:
return Response({'error': 'user_id and file are required'}, status=status.HTTP_400_BAD_REQUEST)
# Save file to user's table
table_name = f'user_{user_id}'
with connection.cursor() as cursor:
cursor.execute(
f"CREATE TABLE IF NOT EXISTS {table_name} (file_name VARCHAR(255), file_path VARCHAR(255), file LONGBLOB)")
cursor.execute(f"INSERT INTO {table_name} (file_name, file_path, file) VALUES (%s, %s, %s)",
[file.name, file_path, file.read()])
return Response({'message': 'File uploaded successfully'}, status=status.HTTP_201_CREATED)
class GetUserFilesView(APIView):
def get(self, request, user_id):
table_name = f'user_{user_id}'
with connection.cursor() as cursor:
cursor.execute(f"SHOW TABLES LIKE '{table_name}'")
if cursor.fetchone() is None:
return Response({'error': 'User table does not exist'}, status=status.HTTP_404_NOT_FOUND)
cursor.execute(f"SELECT file_name, file_path, file FROM {table_name}")
files = cursor.fetchall()
response_files = [
{'file_name': file[0], 'file_path': file[1], 'file_content': base64.b64encode(file[2]).decode('utf-8')}
for file in files]
return Response({'files': response_files}, status=status.HTTP_200_OK)
</code></pre>
<p>urls.py</p>
<pre><code>from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('api/', include('myapp.urls')),
]
</code></pre>
<p>REST_API/urls.py</p>
<pre><code>from django.urls import path
from .views import UploadFileView, GetUserFilesView
urlpatterns = [
path('upload/', UploadFileView.as_view(), name='file-upload'),
path('get_user_files/<str:user_id>/', GetUserFilesView.as_view(), name='get-user-files'),
]
</code></pre>
<p>wsgi.py</p>
<pre><code>"""
WSGI config for JurisScan_REST_API project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/5.0/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'JurisScan_REST_API.settings')
application = get_wsgi_application()
</code></pre>
<p>manage.py</p>
<pre><code>#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
def main():
"""Run administrative tasks."""
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'JurisScan_REST_API.settings')
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
</code></pre>
|
<python><django><debugging><django-rest-framework><http-status-code-404>
|
2024-07-18 23:25:07
| 1
| 317
|
Sarimm Chaudhry
|
78,766,850
| 1,119,391
|
Are Python 3.12 subinterpreters multi-CPU?
|
<p>Python 3.12 exposes the subinterpreter functionality: after starting the main Python executable you can spawn multiple subinterpreter threads.
Questions:</p>
<ol>
<li>As the subinterpreter threads run in the same process as the Python main thread, does that mean the subinterpreters can only use the cores of the CPU the main thread is running on?</li>
<li>If so, how would you use subinterpreters that employ the full range of CPUs/cores available (ranging from 8-16 on modern hardware)?</li>
</ol>
<p>Let me explain the context:</p>
<p>I'm using a C(++) <code>reactor</code> framework that brings its own task scheduling runtime: execution of a <code>reactor</code>'s multiple <code>reaction</code>s (methods) are scheduled over a pool of runtime workers.</p>
<p>Using the <code>subinterpreter</code> C-API, I'm trying to arrange (using Python's subinterpreter functionality) that <code>reaction</code>s can call into Python code and update the <code>reactor</code>'s <code>PyThreadState</code>. To that end,</p>
<ol>
<li>I start a Python main interpreter at that top-level reactor.</li>
<li>I initialize each sub-reactor to have its own Python <code>PyThreadState*</code> and private GIL-owned subinterpreter:</li>
</ol>
<pre><code>PyInterpreterConfig py_config = {
.use_main_obmalloc = 0,
.allow_fork = 0,
.allow_exec = 0,
.allow_threads = 1,
.allow_daemon_threads = 0,
.check_multi_interp_extensions = 1,
.gil = PyInterpreterConfig_OWN_GIL,
};
this->py_status = Py_NewInterpreterFromConfig(
&self->py_thread, &py_config);
assert(this->py_thread != NULL);
this->py_interp = PyThreadState_GetInterpreter(this->py_thread);
this->py_interp_id = _PyInterpreterState_GetIDObject(this->py_interp);
assert(this->py_interp_id != NULL);
PyObject *main_mod = _PyInterpreterState_GetMainModule(this->py_interp);
this->py_ns = PyModule_GetDict(main_mod);
Py_DECREF(main_mod);
Py_INCREF(this->py_ns);
const char* codestr =
"id = \"Reactor_1\";"
"import threading;"
"print(f\"{id} @thread \" +
str(threading.get_ident()));";
PyObject *result = PyRun_StringFlags(
codestr, Py_file_input, this->py_ns, this->py_ns, NULL);
</code></pre>
<p>What I find, though, is that all Python (sub)interpreters (called upon in the various reactions) run in the same thread; using the subinterpreter framework, I would expect them to run on different cores (i.e. in different threads). (Obviously, but IMO unrelatedly, the workers that the reactions are scheduled on at runtime run in separate threads.)</p>
<p>(Relatedly, what I find is that closing down the subinterpreters, like <code>Py_EndInterpreter(this->py_thread);</code>, in a finalizing reaction may crash the system when <code>this->py_thread</code> is not the current thread.)</p>
<p>Perhaps someone has an idea of how to resolve this?</p>
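<p>For context on question 1 as I understand it: a subinterpreter has no OS thread of its own; it executes in whichever thread calls into it. To span multiple cores, you start one OS thread per subinterpreter and have each thread make its interpreter current (via <code>PyThreadState_Swap</code> in the C API) before running code. A minimal pure-Python illustration of the thread-placement point (plain threads only; the comment marks where the C-API swap would go):</p>

```python
import threading

idents = []
barrier = threading.Barrier(2)

def work():
    # In the embedding code, this thread would first swap in its
    # subinterpreter's PyThreadState, then run that interpreter's code here.
    idents.append(threading.get_ident())
    barrier.wait()  # keep both threads alive together so idents are distinct

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(set(idents)))  # 2: two threads, hence potentially two cores
```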
|
<python><python-3.x><python-subinterpreters>
|
2024-07-18 22:10:06
| 1
| 335
|
stustd
|
78,766,780
| 1,749,551
|
How to extract a rectangle in an image from identified lines
|
<p><strong>(See below for update with partially working code.)</strong></p>
<p>I have thousands of images that look like this:</p>
<p><a href="https://i.sstatic.net/531hwP7H.jpg" rel="noreferrer"><img src="https://i.sstatic.net/531hwP7H.jpg" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/KPDK7M8G.jpg" rel="noreferrer"><img src="https://i.sstatic.net/KPDK7M8G.jpg" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/8Mz22eLT.jpg" rel="noreferrer"><img src="https://i.sstatic.net/8Mz22eLT.jpg" alt="enter image description here" /></a></p>
<p>I need to run an OCR algorithm on the "1930 E.D." column. I find that when I crop the image down to just that column, I get much better results from Tesseract. So I want to identify this long, vertical rectangle automatically and then crop the image and OCR just that bit (with a margin that I'll tweak).</p>
<p>However, the dimensions of the image and the location of the column aren't fixed. There can also be a small bit of rotation in the image which leads the vertical lines to be a few degrees off strictly vertical.</p>
<p>If I were able to reliably identify the long vertical and horizontal lines (and ideally be able to distinguish between single lines and line pairs), then I could easily find the column I need. But the images can be quite poor, so sometimes the lines are interrupted (see third test image). This is the closest I've come, based on <a href="https://stackoverflow.com/a/45560545/1749551">this very helpful SO answer</a>:</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
img = cv2.imread(path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
height, width, channels = img.shape
center_x = width // 2
center_y = height // 2
center_point = (center_x, center_y)
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
rho = 1 # distance resolution in px of Hough grid
theta = np.pi / 180 # angular resolution in rad of Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 50 # minimum number of px making up a line
max_line_gap = 5 # maximum gap in px btw line segments
line_image = np.copy(img) * 0 # creating a blank to draw lines on
lines = cv2.HoughLinesP(
edges,
rho,
theta,
threshold,
np.array([]),
min_line_length,
max_line_gap,
)
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
</code></pre>
<p>Which gives me an image like this:</p>
<p><a href="https://i.sstatic.net/M64POCFp.png" rel="noreferrer"><img src="https://i.sstatic.net/M64POCFp.png" alt="enter image description here" /></a></p>
<p>This looks like it <em>could</em> be good enough line discovery. <strong>But my question is:</strong> how do I group the lines/contours and then determine the coordinates that define the crop rectangle?</p>
<p>Assuming accurate line discovery, a reliable heuristic will be that the rectangle in question will consist of the first two (double) lines to the left of the centerpoint of the image, with the top edge being the first (single) line above the center point. Putting the bottom edge at the end of the image should be fine: the OCR isn't going to identify any text in the black border area. Basically, every image shows the same paper form.</p>
<p>Thanks in advance!</p>
<p><strong>UPDATE:</strong> I have tried <a href="https://stackoverflow.com/users/7355741/fmw42">@fmw42</a>'s suggested approach, and have made some progress! Here's the code as of now. Points to note:</p>
<ol>
<li>I've used <code>equalizeHist</code> to improve the contrast. This seems to marginally improve results.</li>
<li>I'm relying on a morphology kernel of 400x30, which selects for tall, narrow boxes.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import sys
from pathlib import Path

import cv2
import numpy as np


def find_column(path):
    image = cv2.imread(path)
    out = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    equal = cv2.equalizeHist(gray)
    _, thresh = cv2.threshold(equal, 10, 255, cv2.THRESH_BINARY_INV)
    kernel = np.ones((400, 30), np.uint8)
    morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(morph, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite(Path("/tmp") / path.name, out)  # Save
    show_img(path.stem, out)


def show_img(title, img):
    cv2.imshow(title, img)
    code = cv2.waitKeyEx(0)
    cv2.destroyAllWindows()
    if code == 113:  # 'q'
        sys.exit(0)
</code></pre>
<p>That results in this:</p>
<p><a href="https://i.sstatic.net/TYUF5fJj.jpg" rel="noreferrer"><img src="https://i.sstatic.net/TYUF5fJj.jpg" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/2fJpPxiM.jpg" rel="noreferrer"><img src="https://i.sstatic.net/2fJpPxiM.jpg" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/604pmFBM.jpg" rel="noreferrer"><img src="https://i.sstatic.net/604pmFBM.jpg" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/IY7sePZW.jpg" rel="noreferrer"><img src="https://i.sstatic.net/IY7sePZW.jpg" alt="enter image description here" /></a></p>
<p>This does work sometimes, and quite well in those cases. But more often, the box still isn't properly aligned or doesn't extend far enough up the column to capture all of the text. Any other suggestions about how to tweak these knobs to reliably get the coordinates of the column I'm after?</p>
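<p>One more knob worth trying (a sketch, with guessed thresholds to tune): instead of drawing every contour, filter the bounding boxes by geometry — keep only boxes that are tall enough and sit entirely left of the image center, then pick the one whose right edge is closest to the center line.</p>

```python
def pick_column_box(rects, img_w, img_h):
    """Pick the most plausible column box from findContours bounding rects.

    rects: iterable of (x, y, w, h) tuples as returned by cv2.boundingRect.
    The 0.4 height fraction and "entirely left of center" rule are guesses.
    """
    cx = img_w // 2
    candidates = [
        r for r in rects
        if r[3] > 0.4 * img_h       # tall enough to be the column
        and r[0] + r[2] <= cx       # entirely left of the image center
    ]
    if not candidates:
        return None
    # the column of interest is the candidate closest to the center line
    return max(candidates, key=lambda r: r[0] + r[2])
```

<p>This would slot into <code>find_column</code> right after the <code>findContours</code> call, replacing the loop that draws every rectangle.</p>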
|
<python><opencv><ocr><hough-transform><document-layout-analysis>
|
2024-07-18 21:40:26
| 1
| 4,798
|
Nick K9
|
78,766,731
| 3,367,091
|
"in" operator for list does not call item's __eq__ method
|
<p>This is the code setup:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Self, Iterable


class PersonNotFoundError(Exception):
    """Person not found in collection."""


class Person:
    """A person with a name."""

    def __init__(self: Self, name: str) -> None:
        self.name = name

    def __eq__(self: Self, other: Self) -> bool:
        print("Calling __eq__ in Person class")
        if type(self) != type(other):
            return NotImplemented
        return self.name == other.name


class PersonCollection:
    """A collection of persons."""

    def __init__(self: Self, iterable: Iterable[Person]) -> None:
        self._data: list[Person] = list(iterable)

    def __contains__(self: Self, person: Person) -> bool:
        if person in self._data:
            return True
        raise PersonNotFoundError


p = Person("Person1")
p2 = Person("Person2")
pc = PersonCollection([p, p2])
print(p in pc)   # Does not call Person.__eq__
print(p2 in pc)  # Does call Person.__eq__
</code></pre>
<p>Why isn't <code>Person.__eq__</code> being called when evaluating <code>p in pc</code>? Since it wasn't called, and the result still comes back <code>True</code>, the <code>person in self._data</code> would have had to use reference equality instead. But why?</p>
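<p>For context, the membership test is documented as <code>any(x is e or x == e for e in s)</code> — identity is checked before equality, so an object always "finds itself" in a list without <code>__eq__</code> ever running. A small sketch illustrating that shortcut:</p>

```python
class Probe:
    """Counts equality checks so we can see when __eq__ actually runs."""
    eq_calls = 0  # class-level counter, bumped on whichever operand is compared

    def __init__(self, tag):
        self.tag = tag

    def __eq__(self, other):
        Probe.eq_calls += 1
        return isinstance(other, Probe) and self.tag == other.tag

a, b = Probe("a"), Probe("b")
items = [a, b]

assert a in items
first = Probe.eq_calls       # identity matched, so __eq__ never ran

assert Probe("a") in items   # a new, equal-but-not-identical object
second = Probe.eq_calls      # now equality had to be consulted
```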
|
<python>
|
2024-07-18 21:26:02
| 2
| 2,890
|
jensa
|
78,766,698
| 4,228,320
|
AMBIGUOUS_REFERENCE error when trying to aggregate a dataframe in azure
|
<p>I have a dataframe with the following columns:
<code>year, month, loc_code, usg_type, id_code, usg</code></p>
<p>I'm trying to aggregate it; in SQL it would have been:</p>
<pre><code>select year, month, loc_code, usg_type, count(distinct id_code) as id_count, sum(usg) as traffic
group by all
</code></pre>
<p>In Python I've tried the following:</p>
<pre><code>step4_df = (
    step3_df
    .groupBy("year", "month", "loc_code", "usg_type")
    .agg(
        countDistinct("id_code").alias("id_count"),
        sum("usg").alias("traffic")
    )
)
</code></pre>
<p>But I'm getting an <code>AnalysisException: [AMBIGUOUS_REFERENCE] Reference `id_code` is ambiguous</code> error, followed by a list of suggested columns that have nothing to do with what I am trying to sum/count.</p>
<p>How do I go about resolving it?
Thank you</p>
|
<python><dataframe><azure>
|
2024-07-18 21:13:47
| 1
| 489
|
Ben
|
78,766,495
| 1,818,935
|
How to use JSONiq in a python program?
|
<p>How can I use JSONiq to query JSON objects in a python program running on my PC?</p>
<p>I have managed to use JSONiq in a Jupyter Notebook running inside Visual Studio Code, but I'm interested in using JSONiq in a regular python program (*.py), rather than a notebook (*.ipynb).</p>
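<p>A minimal sketch of one route, assuming the RumbleDB HTTP server is running locally (as started with <code>serve -p 9090</code>) and that it accepts the raw JSONiq query text as the POST body — which matches what the notebook extension appears to do with <code>RUMBLEDB_SERVER</code>; the exact response shape may differ:</p>

```python
import json
import urllib.request

# assumed local RumbleDB HTTP server, started with: spark-submit ... serve -p 9090
RUMBLEDB_URL = "http://localhost:9090/jsoniq"

def build_request(query, url=RUMBLEDB_URL):
    # the raw JSONiq text goes in the POST body
    return urllib.request.Request(url, data=query.encode("utf-8"), method="POST")

def run_jsoniq(query, url=RUMBLEDB_URL):
    """Send a JSONiq query to the server and return the decoded JSON reply."""
    with urllib.request.urlopen(build_request(query, url)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

<p>This keeps the Jupyter machinery out entirely, so it works in a plain <code>*.py</code> script.</p>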
|
<python><json><jsoniq>
|
2024-07-18 20:16:05
| 2
| 6,053
|
Evan Aad
|
78,766,475
| 162,758
|
module not found error with poetry and multistage docker image
|
<p>I am trying to execute a multistage docker build to create a light image. I am using poetry for dependency management. My dockerfile is as below</p>
<pre><code>FROM python:3.12-slim as builder
RUN pip install poetry
RUN mkdir -p /app
COPY . /app
WORKDIR /app
RUN python -m venv .venv
RUN poetry install
FROM python:3.12-slim as base
COPY --from=builder /app /app
WORKDIR /app
ENV PATH="/app/.venv/bin:$PATH"
CMD ["python", "-m", "main"]
</code></pre>
<p>My main.py file is (yes i know I am not using the yaml import, just trying to test dependency resolution)</p>
<pre class="lang-py prettyprint-override"><code>import yaml
if __name__ == '__main__':
    print(f'Hi, Poetry')
</code></pre>
<p>My pyproject.toml file is as below</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "poetrydemo"
version = "0.1.0"
description = ""
readme = "README.md"
package-mode = false
[tool.poetry.dependencies]
python = "^3.12"
pyyaml = "^6.0.1"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>After I build
<code>docker build -t poetrydemo:latest .</code>
and run
<code>docker run poetrydemo:latest </code>
I get a <code>ModuleNotFoundError: No module named 'yaml'</code> error</p>
<p>Examining the contents of the container,
<code>docker run -it --entrypoint sh poetrydemo:latest</code>
I see the .venv file is copied over correctly and the path is set correctly, but still does not find the module. What am I missing? If it helps my folder structure is as below
<a href="https://i.sstatic.net/H3b5nIzO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3b5nIzO.png" alt="enter image description here" /></a></p>
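<p>One possible cause worth checking: <code>poetry install</code> does not necessarily use a manually created <code>.venv</code> — unless in-project virtualenvs are enabled, Poetry installs into its own cache directory, which is then not copied into the final stage. A hedged sketch of the builder stage with that setting forced (the env var is Poetry's documented switch for this):</p>

```dockerfile
FROM python:3.12-slim as builder

RUN pip install poetry

# Make Poetry create the venv at /app/.venv so the later COPY picks it up
ENV POETRY_VIRTUALENVS_IN_PROJECT=true

COPY . /app
WORKDIR /app
RUN poetry install
```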
|
<python><docker><python-poetry><docker-multi-stage-build>
|
2024-07-18 20:07:52
| 1
| 2,344
|
VDev
|
78,766,426
| 13,142,245
|
Export Pydantic model classes (not instances) to JSON
|
<p>I understand that Pydantic can export models to JSON, see <a href="https://docs.pydantic.dev/1.10/usage/exporting_models/" rel="nofollow noreferrer">ref</a>. But in practice, this means instances of a model:</p>
<pre><code>from datetime import datetime

from pydantic import BaseModel


class BarModel(BaseModel):
    whatever: int


class FooBarModel(BaseModel):
    foo: datetime
    bar: BarModel


m = FooBarModel(foo=datetime(2032, 6, 1, 12, 13, 14), bar={'whatever': 123})
print(m.json())
#> {"foo": "2032-06-01T12:13:14", "bar": {"whatever": 123}}
</code></pre>
<p>My question is parsing the class itself; so that the class could be understood by other languages, for example JavaScript.</p>
<p>Here is an example of what I'm envisioning:</p>
<pre><code>{BarModel: {
'whatever': 'int'}
}
{FooBarModel: {
'foo': 'datetime',
'bar': {BarModel: {'whatever': 'int'}}
}
</code></pre>
<p>Is this possible using built-in Pydantic functionality?</p>
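<p>For reference, Pydantic v2 models expose <code>model_json_schema()</code> (v1: <code>schema()</code>), which emits a JSON Schema description of the class itself — that is the usual interchange format for other languages. If the simpler shape envisioned above is specifically wanted, a stdlib-only sketch over plain annotated classes (the class and function names here are stand-ins, not Pydantic internals) could look like:</p>

```python
from datetime import datetime

class Bar:
    whatever: int

class FooBar:
    foo: datetime
    bar: Bar

def describe(cls):
    # recurse into classes that declare their own annotations;
    # everything else is reduced to its type name
    out = {}
    for name, tp in cls.__annotations__.items():
        if isinstance(tp, type) and tp.__dict__.get("__annotations__"):
            out[name] = describe(tp)
        else:
            out[name] = getattr(tp, "__name__", str(tp))
    return {cls.__name__: out}
```

<p>Since Pydantic models also carry <code>__annotations__</code>, the same walk works on them, but <code>model_json_schema()</code> is the battle-tested path.</p>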
|
<python><json><pydantic>
|
2024-07-18 19:53:24
| 1
| 1,238
|
jbuddy_13
|
78,765,650
| 893,254
|
Is it possible (or recommended?) to mix abstract base classes with Pydantic BaseModel?
|
<p>In the example code below, I tried to combine abstract base classes with Pydantic. I wanted to write this code because I had an existing type <code>Ticker</code> which has two implementations. It can either be represented as a string or an integer value.</p>
<p>I later wanted to add some data classes to represent messages. It seemed intuitive to implement these using Pydantic, because these message types are closely related to some FastAPI data classes (FastAPI uses Pydantic as part of its framework).</p>
<p>I have my suspicions about the below design. I somewhat suspect that either the <code>ABC</code> approach should be used or the <code>BaseModel</code> approach should be used, and not both. This would also remove multiple inheritance from the design, which would likely be another benefit - for simplicity if nothing else.</p>
<p>If this is the case, I would prefer to keep the runtime type-checking behaviour offered by Pydantic. But in this case, I do not know how to implement a class <code>Ticker</code> which can have two representations - as a <code>str</code> and <code>int</code>.</p>
<p>The immediate problem is a runtime error:</p>
<pre><code>pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for <class '__main__.Ticker'>. Set `arbitrary_types_allowed=True` in the model_config to ignore this error or implement `__get_pydantic_core_schema__` on your type to fully support it.
If you got this error by calling handler(<some type>) within `__get_pydantic_core_schema__` then you likely need to call `handler.generate_schema(<some type>)` since we do not call `__get_pydantic_core_schema__` on `<some type>` otherwise to avoid infinite recursion.
For further information visit https://errors.pydantic.dev/2.8/u/schema-for-unknown-type
</code></pre>
<p>Example code:</p>
<pre><code>from abc import ABC
from abc import abstractmethod

from pydantic import BaseModel


class AbstractTicker(ABC):

    @abstractmethod
    def to_str(self) -> str:
        pass

    @abstractmethod
    def _get_value(self) -> str|int:
        pass


class Ticker():

    _ticker: AbstractTicker

    def __init__(self, ticker: object):
        if isinstance(ticker, str):
            self._ticker = TickerStr(ticker)
        elif isinstance(ticker, int):
            self._ticker = TickerInt(ticker)
        else:
            raise TypeError(f'unsupported type {type(ticker)} for ticker')

    def __str__(self) -> str:
        return str(self._ticker)

    def to_str(self) -> str:
        return self._ticker.to_str()

    def __eq__(self, ticker: object) -> bool:
        if isinstance(ticker, Ticker):
            return self._ticker._get_value() == ticker._ticker._get_value()
        return False


class TickerStr(AbstractTicker, BaseModel):

    _ticker_str: str

    def __init__(self, ticker_str) -> None:
        super().__init__()
        assert isinstance(ticker_str, str), f'ticker must be of type str'
        assert len(ticker_str) > 0, f'ticker cannot be empty string'
        self._ticker_str = ticker_str

    def __str__(self) -> str:
        return f'Ticker[str]({self._ticker_str})'

    def _get_value(self) -> str:
        return self._ticker_str

    def to_str(self) -> str:
        return self._ticker_str


class TickerInt(AbstractTicker, BaseModel):

    _ticker_int: int

    def __init__(self, ticker_int) -> None:
        super().__init__()
        assert isinstance(ticker_int, int), f'ticker must be of type int'
        assert ticker_int > 0, f'ticker must be > 0'
        self._ticker_int = ticker_int

    def __str__(self) -> str:
        return f'Ticker[int]({self._ticker_int})'

    def _get_value(self) -> int:
        return self._ticker_int

    def to_str(self) -> str:
        return str(self._ticker_int)


class AbstractMessage(ABC, BaseModel):

    def __init__(self) -> None:
        pass

    def __str__(self) -> str:
        pass


class ConcreteMessage(AbstractMessage):

    ticker: Ticker

    def __init__(self) -> None:
        super().__init__()
        self.ticker = Ticker('TEST')

    def __str__(self) -> str:
        return f'{str(self.ticker)}'


def main():
    str_ticker = TickerStr('NVDA')
    int_ticker = TickerInt(1234)
    print(str_ticker)
    print(int_ticker)
    concrete_message = ConcreteMessage()
    print(concrete_message)


if __name__ == '__main__':
    main()
</code></pre>
<p>It could be that what I have tried to write in the example code is inadvisable or there might otherwise be a reason why this is a bad idea. If this is the case, please do let me know.</p>
|
<python><pydantic>
|
2024-07-18 16:31:46
| 0
| 18,579
|
user2138149
|
78,765,628
| 6,691,051
|
Reading .csv column with decimal commas and trailing percentage signs as floats using Pandas
|
<p>I am faced with reading a .csv file with some columns like this:</p>
<pre><code>Data1 [-]; Data2 [%]
9,46;94,2%
9,45;94,1%
9,42;93,8%
</code></pre>
<p>I want to read the <code>Data2 [%]</code> column into a pandas DataFrame with the values <code>[94.2, 94.1, 93.8]</code>.</p>
<p>I am able to remove the percentage sign using <a href="https://stackoverflow.com/questions/25669588/convert-percent-string-to-float-in-pandas-read-csv">this answer</a> after reading the csv; however, this prevents me from using <code>read_csv(decimal=",")</code> to turn the commas into decimal points.</p>
<p>Is there a way that I can convert these numbers into floats as I am reading the .csv file, or do I have to manipulate the DataFrame (removing percentage sign and converting to float values) after reading the .csv?</p>
<p>EDIT: here is the code I am currently using for reference:</p>
<p>.csv file:</p>
<pre><code>Data1 [g/m3];Data2 [%]
9,46;94,2%
9,45;94,1%
9,42;93,8%
</code></pre>
<p>Python file:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.read_csv(f, encoding="latin-1", sep=";", decimal=",")
for col in df:
    if str(df[col][0])[-1] == "%":
        df[col] = df[col].str.rstrip('%').str.replace(',', '.').astype('float')
</code></pre>
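<p>If the conversion should happen during parsing, <code>read_csv</code>'s <code>converters</code> argument can handle the percent column while <code>decimal=","</code> still covers the rest — a sketch with the sample data inlined:</p>

```python
import io

import pandas as pd

csv_text = "Data1 [g/m3];Data2 [%]\n9,46;94,2%\n9,45;94,1%\n9,42;93,8%\n"

to_pct = lambda s: float(s.rstrip("%").replace(",", "."))

df = pd.read_csv(
    io.StringIO(csv_text),
    sep=";",
    decimal=",",                       # handles the plain numeric columns
    converters={"Data2 [%]": to_pct},  # this column bypasses `decimal`
)
```

<p>Note that a column listed in <code>converters</code> is handed to the function as a raw string, so the <code>decimal</code> option does not apply to it — which is exactly what is wanted here.</p>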
|
<python><pandas><dataframe><csv>
|
2024-07-18 16:26:32
| 1
| 2,081
|
FerventHippo
|
78,765,604
| 2,066,725
|
How to enable "vi mode" for the python 3.13 interactive interpreter?
|
<p>One of the new features in python 3.13 is a <a href="https://docs.python.org/3.13/whatsnew/3.13.html#a-better-interactive-interpreter" rel="nofollow noreferrer">more sophisticated interactive interpreter</a>. However, it seems that this interpreter is no longer based on <code>readline</code>, and therefore no longer respects <code>~/.inputrc</code>. I particularly miss the <code>set editing-mode vi</code> behavior.
Is there a way to get this same "vi mode" behavior with the python 3.13 interpreter (other than just disabling the new interpreter completely with <code>PYTHON_BASIC_REPL=1</code>)?</p>
<p>Caveat: The specific version of python I'm using is <code>3.13.0b3</code>. This is a beta release, so I'd be satisfied if the answer is "just wait for the actual release of python 3.13; this feature is planned but hasn't been implemented yet".</p>
|
<python><python-3.x>
|
2024-07-18 16:19:53
| 1
| 1,534
|
Kale Kundert
|
78,765,505
| 10,958,326
|
How to use overload from the typing package within a higher-order function in Python?
|
<p>Returning an overloaded function from a higher-order callable does not produce expected results with respect to type hints: namely, the resulting function behaves as if it was unannotated at all.</p>
<p>Here a minimal example:</p>
<pre><code>from typing import Optional, overload


def get_f(b: int):

    @overload
    def f(x: int) -> str: ...

    @overload
    def f(x: int, c: bool) -> int: ...

    def f(x: int, c: Optional[bool] = None) -> int | str:
        return 2 * x * b if c is not None else "s"

    return f


d = get_f(2)
a = d(2)
b = a ** 2
</code></pre>
<p>Here mypy does not realize that <code>b = a**2</code> leads to a <code>TypeError</code>. How to remedy this?</p>
|
<python><overloading><mypy><python-typing>
|
2024-07-18 15:56:51
| 1
| 390
|
algebruh
|
78,765,504
| 7,112,039
|
SQLALchemy: Object deleted by Cascade option is still retrievable even after flushing the session
|
<p>I have a simple SqlAlchemy Relationship defined as follows</p>
<pre class="lang-py prettyprint-override"><code>class User(BaseModel):
    cars = relationship(
        "Car",
        back_populates="user",
        passive_deletes="all",
    )


class Car(BaseModel):
    user_id: Mapped[Optional[uuid.UUID]] = mapped_column(
        ForeignKey("user_id", ondelete="CASCADE"),
        index=True,
    )
</code></pre>
<p>My test is run as follows</p>
<pre class="lang-py prettyprint-override"><code>@staticmethod
async def test_car_is_cascade_deleted(
    db, fixture_user, fixture_car
):
    user = await fixture_user()  # just a factory that creates a User object and flushes
    car = await fixture_car(
        user=user
    )
    await db.refresh(car)

    await db.delete(user)
    await db.commit()

    assert await db.get(User, user.id) is None
    assert await db.get(Car, car.id) is None  # this fails if await db.refresh(car) is not called
</code></pre>
<p>As described in the comment, <code>await db.refresh(car)</code> appears to be needed for
<code>assert await db.get(Car, car.id)</code> to return <code>None</code>; otherwise an actual object is retrieved.</p>
<p>I also echoed the queries in SqlAlchemy and I clearly see the Delete instructions</p>
<pre><code>DELETE FROM car WHERE car.user_id = %(user_id)s::UUID
2024-07-18 17:48:46,521 INFO sqlalchemy.engine.Engine [generated in 0.00007s] [{'user_id': 'b652d7c5-6328-4725-b276-914264c0064a'}, {'user_id': 'b652d7c5-6328-4725-b276-914264c0064a'}]
2024-07-18 17:48:46,523 INFO sqlalchemy.engine.Engine DELETE FROM user WHERE user.id = %(id)s::UUID {'id': 'b652d7c5-6328-4725-b276-914264c0064a'}
</code></pre>
<p>What I didn't get from SqlAlchemy session management?</p>
|
<python><sqlalchemy>
|
2024-07-18 15:56:44
| 0
| 303
|
ow-me
|
78,765,477
| 8,605,685
|
Why does except Exception appear to not be catching an exception?
|
<p>The code below raises two exceptions: A <code>ValueError</code> and a library specific <code>ApplicationError</code>. The <code>ValueError</code> is still being printed to the console. Why is the <code>except</code> block not suppressing its output?</p>
<pre><code>from pyomo.common.errors import ApplicationError
from pyomo.environ import SolverFactory

try:
    solver = SolverFactory("glpk", executable="bad_path")
    if solver.available():
        print("GLPK solver is available.")
    else:
        print("GLPK solver not available.")
except (ValueError, ApplicationError) as e:
# except Exception as e:  # also doesn't work
    print(f"Caught {type(e).__name__}")
</code></pre>
<p>Console output:</p>
<pre><code>> python .\test_pyomo_glpk_solver.py
WARNING: Failed to create solver with name '_glpk_shell': Failed to set
executable for solver glpk. File with name=bad_path either does not exist or
it is not executable. To skip this validation, call set_executable with
validate=False.
Traceback (most recent call last):
File ".\env\Lib\site-packages\pyomo\opt\base\solvers.py", line 148, in __call__
opt = self._cls[_name](**kwds)
^^^^^^^^^^^^^^^^^^^^^^^^
File ".\env\Lib\site-packages\pyomo\solvers\plugins\solvers\GLPK.py", line 92, in __init__
SystemCallSolver.__init__(self, **kwargs)
File ".\env\Lib\site-packages\pyomo\opt\solver\shellcmd.py", line 66, in __init__
self.set_executable(name=executable, validate=validate)
File ".\env\Lib\site-packages\pyomo\opt\solver\shellcmd.py", line 115, in set_executable
raise ValueError(
ValueError: Failed to set executable for solver glpk. File with name=bad_path either does not exist or it is not executable. To skip this validation, call set_executable with validate=False.
Caught ApplicationError
</code></pre>
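<p>The console output is consistent with the library <em>logging</em> the <code>ValueError</code> traceback internally (everything above the final line) while only the <code>ApplicationError</code> actually propagates — so the <code>except</code> block is working; the traceback text is log output, not an uncaught exception. A sketch of that logging-then-raising pattern (all names hypothetical, not pyomo's actual internals):</p>

```python
import logging
from io import StringIO

log_stream = StringIO()
logging.basicConfig(stream=log_stream, level=logging.WARNING, force=True)

def make_solver(path):
    # stand-in for the library call: it LOGS the ValueError traceback,
    # then raises a different error type to the caller
    try:
        raise ValueError(f"bad executable: {path}")
    except ValueError:
        logging.exception("Failed to create solver")
        raise RuntimeError("solver unavailable")

try:
    make_solver("bad_path")
except (ValueError, RuntimeError) as e:
    caught = type(e).__name__
```

<p>Here <code>caught</code> ends up as <code>"RuntimeError"</code> even though a full <code>ValueError</code> traceback appears in the log — the same shape as the pyomo output above.</p>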
|
<python><exception>
|
2024-07-18 15:50:29
| 1
| 12,587
|
Salvatore
|
78,765,384
| 20,409,539
|
Tkinter interpreting the long press of the button as many press and release events
|
<p>I want to create a small virtual joystick with <code>tkinter</code>, so I need correct handling of long press of buttons.</p>
<p>I've created a code like this:</p>
<pre><code>import tkinter as tk


def up():
    print("Up")
    up_button.config(image=up_active_image)


def reset_up(event):
    print("Up released")
    up_button.config(image=up_image)


root = tk.Tk()

frame = tk.Frame(root)
frame.grid(row=0, column=0)

up_image = tk.PhotoImage(file="assets/up.png")
up_active_image = tk.PhotoImage(file="assets/up_active.png")

up_button = tk.Label(frame, image=up_image)
up_button.grid(row=0, column=1)

root.grid_rowconfigure(0, weight=1)
root.grid_columnconfigure(1, weight=1)

root.bind('<Up>', lambda event: up())
root.bind('<KeyRelease-Up>', reset_up)

root.mainloop()
</code></pre>
<p>But when I press arrow-up for a long time I receive this output:</p>
<pre><code>Up
Up released
Up
Up released
Up
Up released
Up
Up released
Up
Up released
...
</code></pre>
<p>While I'm expecting just one <code>Up</code> line until the key is actually released.</p>
<p>What am I doing wrong? How to fix this problem?</p>
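<p>What's happening is OS key auto-repeat: holding the key synthesizes repeated press/release pairs. A common workaround is to debounce the release — only treat it as real if no new press arrives within a short delay. A tkinter-free sketch of that logic (with tkinter the <code>tick</code> check would be scheduled via <code>root.after(delay, ...)</code>; the 50 ms delay is a guess to tune):</p>

```python
class KeyDebouncer:
    """Collapse auto-repeat press/release pairs into one logical press/release.

    `delay` is how long a release must go unchallenged by a new press before
    it counts as real; times are arbitrary units (ms in practice).
    """

    def __init__(self, delay, on_press, on_release):
        self.delay = delay
        self.on_press = on_press
        self.on_release = on_release
        self.down = False
        self.release_due = None

    def press(self, now):
        self.release_due = None          # a repeat press cancels the pending release
        if not self.down:
            self.down = True
            self.on_press()

    def release(self, now):
        self.release_due = now + self.delay

    def tick(self, now):
        # with tkinter this would be the root.after callback
        if self.down and self.release_due is not None and now >= self.release_due:
            self.down = False
            self.release_due = None
            self.on_release()

events = []
kd = KeyDebouncer(delay=50,
                  on_press=lambda: events.append("down"),
                  on_release=lambda: events.append("up"))

# simulate auto-repeat: rapid press/release pairs, then a real release
kd.press(0); kd.release(10); kd.press(11); kd.release(20); kd.press(21)
kd.tick(30)    # nothing fires: the pending release was cancelled by a repeat press
kd.release(40)
kd.tick(100)   # the real release is confirmed once the delay has passed
```

<p>The event handlers would call <code>press</code>/<code>release</code>, leaving only one logical "down" and one "up" per physical hold.</p>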
|
<python><tkinter>
|
2024-07-18 15:31:36
| 1
| 317
|
Ilia Nechaev
|
78,765,225
| 11,357,695
|
monkeypatch logging basicConfig with pytest
|
<p>I have a function that that involves setting up a logger:</p>
<p><code>app/src/app/pipelines.py</code>:</p>
<pre><code>import logging


def get_log_handlers(filepath):
    return [logging.FileHandler(filepath, 'w'),
            logging.StreamHandler()
            ]


def function(results_dir):
    log_handlers = get_log_handlers(f'{results_dir}\\log.txt')
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s %(levelname)s: %(message)s',
        handlers=log_handlers
    )
    logger = logging.getLogger()
    logger.info('---PARAMETERS---')
</code></pre>
<p>I would like to test that the logging is going as expected using pytest, but don't want to write to a file, so in my test file I call <code>function</code> and monkeypatch a handler generation function so that a <code>FileHandler</code> is omitted for the test:</p>
<p><code>app/tests/test_pipelines.py</code>:</p>
<pre><code>from io import StringIO
import logging


def test_function(monkeypatch):
    def mock_get_log_handlers(path):
        nonlocal test_stream
        return [logging.StreamHandler(test_stream)]

    test_stream = StringIO()
    monkeypatch.setattr('app.pipelines.get_log_handlers', mock_get_log_handlers)
    function('result_dir')
    test_stream.seek(0)
    content = test_stream.readlines()
    assert content[0] == '---PARAMETERS---'
</code></pre>
<p>My mock is getting called, but nothing is getting written to the test_stream (<code>AssertionError: assert [] == ['---PARAMETERS---']</code>).</p>
<p>What am I doing wrong?</p>
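<p>One thing to rule out: <code>logging.basicConfig()</code> is a silent no-op when the root logger already has handlers — and under pytest it often does — so the monkeypatched handlers may never be installed. Passing <code>force=True</code> (Python 3.8+) replaces any existing handlers. A minimal sketch of the difference:</p>

```python
import logging
from io import StringIO

stream = StringIO()

# simulate an earlier configuration (e.g. by the test runner or an import)
logging.basicConfig(level=logging.WARNING)

# without force=True this second call would be silently ignored
logging.basicConfig(
    level=logging.INFO,
    format="%(message)s",
    handlers=[logging.StreamHandler(stream)],
    force=True,
)
logging.getLogger().info("---PARAMETERS---")
```

<p>So adding <code>force=True</code> to the <code>basicConfig</code> call in <code>function</code> may be enough; also note the handler writes a trailing newline, so comparing against <code>readlines()</code> output needs <code>'---PARAMETERS---\n'</code> or a <code>strip()</code>.</p>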
|
<python><logging><pytest><monkeypatching><stringio>
|
2024-07-18 15:01:09
| 1
| 756
|
Tim Kirkwood
|
78,765,111
| 13,866,965
|
Issue with Custom Rounding Function
|
<p>I have implemented a custom rounding function in Python using Pandas, but it's not producing the expected results in certain cases. It basically should always <strong>round down</strong> to the nearest step.Here's the function and how I'm applying it:</p>
<pre><code>import pandas as pd

# Define the custom rounding function
def custom_round(x, step):
    return x if round(x / step) * step == x else (x // step) * step

# Sample DataFrame similar to my data structure
data = {
    "1": [1.300, 1.400, 1.333, 1.364, 1.400],
    "X": [5.0, 5.0, 5.0, 4.5, 4.5]
}
outcome = pd.DataFrame(data)

# Define the step for rounding
step = 0.1

# Apply custom_round to create new columns
outcome["1_cluster"] = outcome["1"].apply(lambda x: custom_round(x, step))
outcome["X_cluster"] = outcome["X"].apply(lambda x: custom_round(x, step))

print(outcome)
</code></pre>
<p>The output I get from this script is:</p>
<pre><code> 1 X 1_cluster X_cluster
0 1.300 5.0 1.3 5.0
1 1.400 5.0 1.3 5.0
2 1.333 5.0 1.3 5.0
3 1.364 4.5 1.3 4.5
4 1.400 4.5 1.3 4.5
</code></pre>
<p>However, the correct output for 1_cluster and X_cluster should be:</p>
<pre><code> 1 X 1_cluster X_cluster
0 1.300 5.0 1.3 5.0
1 1.400 5.0 1.4 5.0
2 1.333 5.0 1.3 5.0
3 1.364 4.5 1.3 4.5
4 1.400 4.5 1.4 4.5
</code></pre>
<p>It seems the function custom_round correctly rounds to 5.0 or 4.5 but fails to round correctly to 1.4 when it should. How can I modify custom_round to ensure it correctly rounds to 1.4 as well?</p>
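<p>The culprit is binary floating point: <code>1.4 / 0.1</code> evaluates to <code>13.999999999999998</code>, so <code>1.4 // 0.1</code> floors to 13 and yields 1.3. One way to sidestep this (a sketch for positive values) is to do the division exactly in <code>Decimal</code>:</p>

```python
from decimal import Decimal

def custom_round(x, step):
    # str() gives the shortest repr, so Decimal(str(1.4)) is exactly Decimal('1.4')
    quotient = Decimal(str(x)) // Decimal(str(step))  # exact truncating division
    return float(quotient * Decimal(str(step)))
```

<p>Note <code>Decimal</code>'s <code>//</code> truncates toward zero rather than flooring, so negative inputs would need an adjustment if they can occur.</p>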
|
<python><pandas><dataframe>
|
2024-07-18 14:38:54
| 1
| 451
|
arrabattapp man
|
78,765,048
| 8,112,883
|
Embedding variable with HTML tags in plotly dash layout
|
<p>I have a variable with some text already containing HTML tags - like line breaks/bold/etc.
I'm extracting this variable from a dataframe based on some conditions.</p>
<p>I want to use this inside my Dash layout.</p>
<p><code>myData = df["HTML_TEXT_COLUMN"].values[0]</code></p>
<p>For example, you can consider <code>myData</code> variable to have the below text -</p>
<pre><code>line1 <br> line2 <br> line3
</code></pre>
<p>I'm trying to use this variable as an HTML inside my dash layout like this -</p>
<pre><code>app.layout = html.Div(children=[
    html.H1("Some header"),
    myData
    # OR html.P(myData)
])
</code></pre>
<p>This is considering the layout as a text with the HTML tags as text.
Instead, I would like to consider that variable as an HTML and consider the line breaks.
What am I doing wrong?</p>
<p>Ideally, I would like to do this without saving the variable as an HTML file and then using that html file in an iFrame - since I don't want to have unnecessary HTML files saved.</p>
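<p>Dash escapes raw strings by design, which is why the tags render as text. If the embedded HTML is limited to simple tags like <code>&lt;br&gt;</code>, one option is to split the string and interleave real components — a sketch (the helper name is made up; in Dash it would be called with <code>br_factory=html.Br</code>):</p>

```python
def html_text_to_children(text, br_factory):
    """Turn 'a <br> b <br> c' into ['a', Br(), 'b', Br(), 'c']."""
    children = []
    for i, part in enumerate(text.split("<br>")):
        if i:
            children.append(br_factory())  # a fresh Br component per break
        children.append(part.strip())
    return children
```

<p>Alternatively, <code>dcc.Markdown(myData, dangerously_allow_html=True)</code> may render the markup directly, at the usual injection-risk cost the parameter name advertises.</p>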
|
<python><html><plotly-dash>
|
2024-07-18 14:29:41
| 1
| 374
|
Manu Manjunath
|
78,764,968
| 449,693
|
Detecting a specific type of invalid JSON file
|
<p>I have JSON-like records being written by sensors.</p>
<p>The files are not valid because they look like concatenated records like the following:</p>
<pre><code>{
    "timestamp": "2024-07-17T12:31:12Z",
    "source_ip": "10.0.0.2",
    "source_port": 5678,
    "destination_ip": "192.168.1.1",
    "destination_port": 443,
    "protocol": "UDP",
    "bytes_sent": 800
}
{
    "timestamp": "2024-07-17T12:31:12Z"
}
{
    "timestamp": "2024-07-17T12:31:12Z"
}
</code></pre>
<p>There are no leading and trailing square brackets and there are no commas between the records.</p>
<p>I can clean this up with <code>jq</code> easily enough if I can detect the issue. Not all files are broken like this, other files are properly formatted. Also, these files can be large (hundreds of Gb) so I am using <code>ijson</code>. They are customer files, I have no control over their formatting or content, I can only deal with it.</p>
<p>What's the <code>ijson</code> way to determine if a file is invalid per the above or not? This seems to work but it feels like I am killing an ant with a sledgehammer:</p>
<pre><code>def test_file_for_array_markers(file_path):
    with open(file_path, 'r') as file:
        try:
            for record in ijson.items(file, 'item'):
                return isinstance(record, dict)
        except ijson.JSONError as e:
            return False
</code></pre>
<p>While the above may work, I don't think it is the best way. Running with what Rodrigo said, I am now trying to read the files using:</p>
<pre><code>for record in ijson.items(file, 'item', multiple_values=True):
</code></pre>
<p>If I remove the <code>multiple_values</code> flag I get the "trailing garbage" error which is fine. But when I add it, I get nothing at all. I never enter the <code>for</code> loop. I amended the question above to include an actual example of what I am reading.</p>
|
<python><ijson>
|
2024-07-18 14:16:00
| 2
| 12,431
|
Tony Ennis
|
78,764,883
| 10,164,750
|
Decoding Byte to String is adding additional characters "\"
|
<p>I am trying to convert a <code>bytes</code> field to <code>str</code>. I tried the following code. Not sure, why it is adding an extra <code>char</code> before and after each <code>key</code> and <code>value</code>.</p>
<pre><code>print(event_json)
print(type(event_json))
event_json_nobytes = event_json.decode('utf-8')
event_obj = json.dumps(event_json_nobytes, indent=2)
print(event_obj)
print(type(event_json_nobytes))
</code></pre>
<p>output:</p>
<pre><code>b'{"specversion": "1.0", "id": "xxxxyy", "source": "campaign", "type": "clientfed", "datacontenttype": "H8908", "time": "2024-01-27T11:51:19.288Z", "data": {"clientIdentifier": "H7508", "portfolioIdentifier": "H8908"}}'
<class 'bytes'>
"{\"specversion\": \"1.0\", \"id\": \"xxxxyy\", \"source\": \"campaign\", \"type\": \"clientfed\", \"datacontenttype\": \"H8908\", \"time\": \"2024-01-27T11:51:19.288Z\", \"data\": {\"clientIdentifier\": \"H7508\", \"portfolioIdentifier\": \"H8908\"}}"
<class 'str'>
</code></pre>
<p>Any lead in this regard will be really helpful. Thank you.</p>
<p>P.S. - I got the <code>event_json</code> field from a <code>cloudevents</code> output.</p>
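<p>The backslashes appear because <code>json.dumps</code> <em>serializes</em> the already-serialized string, escaping its quotes. The bytes hold JSON text, so the inverse operation is wanted — decode and parse:</p>

```python
import json

event_json = b'{"specversion": "1.0", "id": "xxxxyy", "source": "campaign"}'

# parse the JSON text, don't re-serialize it
event_obj = json.loads(event_json.decode("utf-8"))
```

<p>As a shortcut, <code>json.loads</code> also accepts <code>bytes</code> directly, so the explicit <code>decode</code> step is optional.</p>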
|
<python><json><cloudevents>
|
2024-07-18 13:57:23
| 1
| 331
|
SDS
|
78,764,795
| 7,160,592
|
Is there a way to enable, pause, resume chrome devtools?
|
<p>Prerequisites:</p>
<ol>
<li>Launch selenium script in python</li>
<li>Once launched we are trying to connect to chrome devtools and
control the execution of script for debugging.</li>
<li>As and when selenium script starts replaying, I'm fetching websocket
url and executing below code snippet.</li>
</ol>
<p>Below is my code snippet to connect to chrome devtools through python</p>
<pre><code>import json
import threading
import time

import websocket


def connect_devtools(ws_url):
    global ws
    ws = websocket.WebSocketApp(ws_url,
                                on_open=on_open,
                                on_message=on_message,
                                on_error=on_error,
                                on_close=on_close)
    # Run WebSocket in a separate thread to keep it open
    wst = threading.Thread(target=ws.run_forever)
    wst.daemon = True
    wst.start()
    time.sleep(1)  # Wait for the connection to be established


def enable_debugger():
    enable_debugger_command = {
        "id": 1,
        "method": "Debugger.enable",
        "params": {}
    }
    ws.send(json.dumps(enable_debugger_command))


def pause_debugger():
    if ws:
        pause_command = {
            "id": 2,
            "method": "Debugger.pause",
            "params": {}
        }
        ws.send(json.dumps(pause_command))
    else:
        print("WebSocket is not connected")


def on_open(ws):
    print("WebSocket connection opened")


def on_message(ws, message):
    print("Received message: ", message)


if __name__ == "__main__":
    ws_url = "ws://127.0.0.1:35192/devtools/browser/9e552787-2329-4e7d-9cca-77a60e8d603c"  # Replace with your actual WebSocket URL
    connect_devtools(ws_url)
    # Wait a bit before pausing to see the effect
    time.sleep(2)
    enable_debugger()
    pause_debugger()
</code></pre>
<p>I see below output:</p>
<pre><code>WebSocket connection opened
Received message: {"id":1,"error":{"code":-32601,"message":"'Debugger.enable' wasn't found"}}
Received message: {"id":2,"error":{"code":-32601,"message":"'Debugger.pause' wasn't found"}}
</code></pre>
<p>As we can see here, the WebSocket connection is opened, but the debugger commands are not recognized, so we are unable to pause.</p>
<p>Is there a way to enable, pause, resume chrome devtools without debugger?</p>
|
<python><selenium-webdriver><websocket><google-chrome-devtools>
|
2024-07-18 13:41:10
| 1
| 509
|
sridattas
|
78,764,778
| 5,950,598
|
How to fix "XPath syntax error: Invalid expression" when using etree.XPath in Python while using union operator "|"
|
<p>I'm trying to compile an XPath expression using etree.XPath in Python, but I'm encountering a syntax error. Here's the code snippet:</p>
<pre class="lang-py prettyprint-override"><code>from lxml import etree

XPATH = '//bridge-domain/(bridge-domain-group-name|bridge-domain-name|bridge-acs/bridge-ac/interface-name|bridge-acs/bridge-ac/attachment-circuit/state|bridge-access-pws/bridge-access-pw/neighbor|bridge-access-pws/bridge-access-pw/pseudowire/state)'

try:
    xpath_obj = etree.XPath(XPATH)
    print("XPath expression compiled successfully.")
except etree.XPathSyntaxError as e:
    print(f"XPath syntax error: {e}")
</code></pre>
<p>Upon running this code, I receive the following error message:</p>
<pre><code>XPath syntax error: Invalid expression
</code></pre>
<p>The XPath expression causing the error is:</p>
<pre><code>XPATH = '//bridge-domain/(bridge-domain-group-name|bridge-domain-name)'
</code></pre>
<p>Could someone please explain what might be causing this error and suggest how to correct the XPath expression so that it compiles correctly? Any guidance or examples would be greatly appreciated.</p>
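<p>For context, lxml implements XPath 1.0, where the union operator <code>|</code> can only join full location paths — parenthesized step alternation like <code>parent/(a|b)</code> is XPath 2.0 syntax. Each branch therefore has to be spelled out in full; a small helper to generate that rewrite (the function is illustrative, not part of lxml):</p>

```python
def expand_alternation(base, names):
    """Rewrite base/(a|b|c) -- XPath 2.0 syntax -- into the XPath 1.0
    union of full location paths, which lxml accepts."""
    return " | ".join(f"{base}/{name}" for name in names)

XPATH = expand_alternation(
    "//bridge-domain",
    ["bridge-domain-group-name", "bridge-domain-name"],
)
```

<p>The resulting string compiles with <code>etree.XPath</code>, since every union operand is now a complete path.</p>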
|
<python><xpath><lxml>
|
2024-07-18 13:38:23
| 1
| 516
|
Jan Krupa
|
78,764,677
| 16,578,438
|
cast list type value of dictionary column in pandas
|
<p>I am trying to write a complex type to dynamodb, but getting into the error:</p>
<pre><code>import pandas as pd

df = pd.DataFrame.from_records([{"col_w_status": {"ACTIVE": ["ABC100-01"], "INACTIVE": ["ABC100"]}}])
col_list = ["col_w_status"]
display(df)

for col in col_list:
    df[col] = df[col].apply(lambda x: dict(x))

# wr.dynamodb.put_df(df, <tableName>)
display(df)
</code></pre>
<p>Error out with below message:</p>
<pre><code>Unexpected err=TypeError('Unsupported type "<class \'numpy.ndarray\'>" for value "[\'ABC100-01\']"'), type(err)=<class 'TypeError'>
</code></pre>
<p>I think nested lambda might work, but I am not sure how to implement it.</p>
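One hedged interpretation of the error: the serializer chokes on `numpy.ndarray` values nested inside the dict, so converting every array-like to a plain Python list before writing may help. The sketch below is illustrative and duck-types on `.tolist()` (which `numpy.ndarray` exposes) rather than importing numpy:

```python
def to_plain(obj):
    """Recursively replace array-likes (anything with .tolist()) by lists."""
    if hasattr(obj, "tolist"):          # numpy.ndarray, pandas arrays, ...
        obj = obj.tolist()
    if isinstance(obj, dict):
        return {k: to_plain(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_plain(v) for v in obj]
    return obj
```

Applied to the question's loop this would be `df[col] = df[col].apply(to_plain)` instead of `dict(x)`, which only converts the top level.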
|
<python><pandas><aws-data-wrangler>
|
2024-07-18 13:17:38
| 1
| 428
|
NNM
|
78,764,662
| 3,437,787
|
How to pivot predictions over several timesteps against actual values?
|
<p>I have a python script that produces predictions for a timeseries over several timesteps like this:</p>
<p><a href="https://i.sstatic.net/zTYvhG5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zTYvhG5n.png" alt="Image of desired output" /></a></p>
<p>I would like to pivot or rearrange the predictions so that I have a new column named <code>Predictions</code>. In that column I have, in sequential order, <code>Prediction_1, Prediction_2</code>, etc. Then we have four columns <code>time_from_actual_1, time_from_actual_2</code>, etc., that contain the prediction values. This way, for each row, we have a date, an actual value, a "Predictions" value that indicates the number of the prediction, and then four values that show the four different timesteps in the predictions.</p>
<p>The output should look like this:
<a href="https://i.sstatic.net/H2d3f1Oy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H2d3f1Oy.png" alt="enter image description here" /></a></p>
<p>Here is what I've tried with sample data:</p>
<pre><code>
import pandas as pd
import numpy as np
# Set random seed for reproducibility
np.random.seed(42)
# Create a DataFrame with daily values for 10 days
date_range = pd.date_range(start="2023-01-01", end="2023-01-10", freq='D')
actual_values = np.random.randint(90, 110, size=len(date_range))
df = pd.DataFrame({'Date': date_range, 'Actual': actual_values})
# Create prediction columns
prediction_period = 4 # 4 days for each prediction
predictions_df = df.copy()
for i in range(len(df)):
prediction_dates = pd.date_range(start=df['Date'][i], periods=prediction_period, freq='D')
predictions = np.random.randint(90, 110, size=prediction_period)
# Add the predictions to the DataFrame
for j, pred_date in enumerate(prediction_dates):
pred_col_name = f'prediction_{i+1}'
if pred_date in predictions_df['Date'].values:
predictions_df.loc[predictions_df['Date'] == pred_date, pred_col_name] = predictions[j]
# Fill NaN values with empty strings
predictions_df = predictions_df.fillna('')
# Reshape the DataFrame
reshaped_data = []
for i in range(len(df)):
prediction_name = f'prediction_{i+1}'
if prediction_name in predictions_df.columns:
for j in range(prediction_period):
if i + j < len(predictions_df):
reshaped_data.append({
'Date': predictions_df['Date'][i + j],
'Actual': predictions_df['Actual'][i + j],
'Prediction': prediction_name,
'time_from_actual_1': predictions_df.loc[i + j, prediction_name] if j == 0 else '',
'time_from_actual_2': predictions_df.loc[i + j, prediction_name] if j == 1 else '',
'time_from_actual_3': predictions_df.loc[i + j, prediction_name] if j == 2 else '',
'time_from_actual_4': predictions_df.loc[i + j, prediction_name] if j == 3 else '',
})
reshaped_df = pd.DataFrame(reshaped_data)
</code></pre>
<p>Thank you for any suggestions!</p>
|
<python><pandas>
|
2024-07-18 13:15:28
| 4
| 62,064
|
vestland
|
78,764,659
| 22,370,136
|
Scraping all pages using Scrapy-Crawler & LinkExtractor-Rules
|
<p>I am trying to scrape dockerhub.com with the vertical-crawling approach using the Scrapy crawler, and I need to define a rule that gathers all pages with the following pattern:</p>
<ul>
<li><a href="https://hub.docker.com/search?q=python&page=1" rel="nofollow noreferrer">https://hub.docker.com/search?q=python&page=1</a></li>
<li><a href="https://hub.docker.com/search?q=python&page=2" rel="nofollow noreferrer">https://hub.docker.com/search?q=python&page=2</a></li>
<li><a href="https://hub.docker.com/search?q=python&page=3" rel="nofollow noreferrer">https://hub.docker.com/search?q=python&page=3</a></li>
<li><a href="https://hub.docker.com/search?q=python&page=4" rel="nofollow noreferrer">https://hub.docker.com/search?q=python&page=4</a></li>
<li><a href="https://hub.docker.com/search?q=python&page=..." rel="nofollow noreferrer">https://hub.docker.com/search?q=python&page=...</a>.</li>
</ul>
<p>I have tried using:</p>
<pre><code> rules = (
Rule(LinkExtractor(allow=r'\?q=.*&page=\d+'),
follow=True,
process_request=set_playwright_true,
callback=parse_page
),
)
def parse_page(self, response):
self.logger.info("Parsing page: %s", response.url)
for result in response.css('div#searchResults div.some-class'):
yield {
'title': result.css('h2 a::text').get(),
'link': result.css('h2 a::attr(href)').get(),
}
</code></pre>
<p>but this approach gathers just one page.</p>
<p>Does anyone have an idea how to get every page matching my pattern?</p>
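A possible explanation worth checking: `LinkExtractor` can only follow links that are present in the HTML Scrapy receives, and Docker Hub's search results are rendered client-side, so the `page=N` links may simply not exist in the response. One hedged workaround is to generate the paginated URLs yourself and request them directly (the function name and the fixed page limit below are assumptions for illustration):

```python
def search_page_urls(query, last_page):
    """Build Docker Hub search URLs for pages 1..last_page."""
    base = "https://hub.docker.com/search"
    return [f"{base}?q={query}&page={page}" for page in range(1, last_page + 1)]

urls = search_page_urls("python", 4)
print(urls[0])  # https://hub.docker.com/search?q=python&page=1
```

In the spider these could be yielded from `start_requests` as `scrapy.Request(url, meta={"playwright": True}, callback=self.parse_page)` — a sketch, untested against the live site.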
|
<python><web-scraping><browser><scrapy><playwright>
|
2024-07-18 13:15:21
| 1
| 407
|
Vlajic Stevan
|
78,764,497
| 15,175,771
|
When calling torchvision.transforms.Normalize and converting to PIL.Image, how are values above 1 handled?
|
<p>Applying <a href="https://pytorch.org/vision/main/generated/torchvision.transforms.Normalize.html" rel="nofollow noreferrer">torchvision.Normalize</a> should lead to values below 0 and above 1, and thus below 0 and above 255 when switching to integer values.</p>
<p>However, when watching the values, this seems NOT to be the case. How is this handled?</p>
<p>Please find below a code sample to reproduce the issue.</p>
<p>For a bit of context, I am trying to integrate a neural network using onnx into a C++ code, but I fail to reproduce my python results as values below 0 and above 1 are clipped.</p>
<pre><code>from PIL import Image
import numpy as np  # needed for np.array / np.ones_like below
from torchvision import transforms
def make_transforms(normalize: bool = True) -> transforms.Compose:
results = [
transforms.Resize((224, 224)),
transforms.ToTensor(),
]
if normalize:
results.append(transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]))
return transforms.Compose(results)
def main() -> None:
resize_a_white_image_without_normalization() # works as expected
resize_a_white_image_WITH_normalization() # how are value > 255 handled???
def resize_a_white_image_WITH_normalization():
# given a given image
white = (255, 255, 255)
white_img = Image.new("RGB", (300, 300), white)
# when resizing the image, and normalizing it
resized_img = make_transforms(normalize=True)(white_img)
resized_img_pil = transforms.ToPILImage()(resized_img)
# expected normalized value
normalized_val = [(1.0 - 0.485) / 0.229, (1.0 - 0.456) / 0.224, (1 - 0.406) / 0.225]
normalized_val_int = [int(i * 255) for i in normalized_val]
print(normalized_val) # [2.2489082969432315, 2.428571428571429, 2.6399999999999997] > 1 ??
print(normalized_val_int) # [573, 619, 673] > 255??
print([between_0_and_255(i) for i in normalized_val_int]) # [63, 109, 163]
print(np.array(resized_img_pil)[0,0]) # [ 61 107 161] ???? still different from above!
def resize_a_white_image_without_normalization() -> None:
# given a given image
white = (255, 255, 255)
white_img = Image.new("RGB", (300, 300), white)
# when resizing the image, but not normalizing it,
resized_img = make_transforms(normalize=False)(white_img)
# then all pixels should remain white
assert (np.array(resized_img) == np.ones_like(np.array(resized_img))).all()
def between_0_and_255(value: int):
return value % 255
if __name__ == "__main__":
main()
</code></pre>
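A note on the observed numbers: they are consistent with unsigned 8-bit *wraparound* rather than clipping. `ToPILImage` scales the float tensor by 255 and casts to `uint8`, and a cast to an unsigned byte wraps modulo 256, whereas the question's helper uses `% 255` — that one-off in the modulus exactly accounts for the difference (the exact cast behaviour can vary across torchvision versions, so checking `resized_img.mul(255).to(torch.uint8)` directly would confirm). A plain-integer illustration:

```python
# The normalized white pixel from the question, scaled to integer range:
floats = (2.2489082969432315, 2.428571428571429, 2.6399999999999997)
vals = [int(v * 255) for v in floats]
print(vals)                      # [573, 619, 673]
print([v % 256 for v in vals])   # [61, 107, 161]  -> matches np.array(resized_img_pil)[0,0]
print([v % 255 for v in vals])   # [63, 109, 163]  -> the question's between_0_and_255 helper
```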
|
<python><pytorch><python-imaging-library><torch><torchvision>
|
2024-07-18 12:43:46
| 1
| 340
|
GabrielGodefroy
|
78,764,364
| 16,389,095
|
How to convert non-readable PDF into readable PDF with OcrMyPdf: troubles with tesseract and configparser
|
<p>I'm trying to convert a scanned PDF into a readable one.</p>
<p>The original PDF contains text, tables, and images/logos. The desired output file should be exactly the same as the original file.</p>
<p>I found different strategies on the web to do it in Python:</p>
<p><a href="https://basilchackomathew.medium.com/best-ocr-tools-in-python-4f16a9b6b116" rel="nofollow noreferrer">https://basilchackomathew.medium.com/best-ocr-tools-in-python-4f16a9b6b116</a></p>
<p><a href="https://medium.com/@dr.booma19/extracting-text-from-pdf-files-using-ocr-a-step-by-step-guide-with-python-code-becf221529ef" rel="nofollow noreferrer">https://medium.com/@dr.booma19/extracting-text-from-pdf-files-using-ocr-a-step-by-step-guide-with-python-code-becf221529ef</a></p>
<p><a href="https://ploomber.io/blog/pdf-ocr/" rel="nofollow noreferrer">https://ploomber.io/blog/pdf-ocr/</a></p>
<p>I'm trying to use the <a href="https://ocrmypdf.readthedocs.io/en/latest/api.html" rel="nofollow noreferrer">OcrMyPdf</a> because it seems to be the most direct way for the conversion. Once installed, as well as its dependency <em>tesseract</em>, I tried to run this code:</p>
<pre><code>import ocrmypdf, tesseract
if __name__ == '__main__':
file_path = r'C:\Users\...\Desktop\Test.pdf'
save_path = r'C:\Users\...\Desktop\Test_ocr.pdf'
ocrmypdf.ocr(file_path, save_path, deskew=True)
</code></pre>
<p>Surprisingly, I got this error:</p>
<pre><code> File "C:\Users\...\miniconda3\Lib\site-packages\tesseract\__init__.py", line 34
print 'Creating user config file: {}'.format(_config_file_usr)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?
</code></pre>
<p>which I corrected by adding parentheses to the <code>print</code> call.
Running the code again, I got this exception:</p>
<pre><code> File "C:\Users\...\miniconda3\Lib\site-packages\tesseract\__init__.py", line 26, in <module>
import ConfigParser
ModuleNotFoundError: No module named 'ConfigParser'
</code></pre>
<p>I subsequently installed that module, but running the code again, I still got the same exception:</p>
<pre><code> File "C:\Users\...\miniconda3\Lib\site-packages\tesseract\__init__.py", line 26, in <module>
import ConfigParser
ModuleNotFoundError: No module named 'ConfigParser'
</code></pre>
<p>Even importing the <em>ConfigParser</em> module into my code, I got the same exception.</p>
<p>I'm trying to run the code into Windows 11 using Python 3.11.5. I installed the following packages with <em>Pip</em>:</p>
<p><strong>ConfigParser</strong> 7.0.0</p>
<p><strong>OcrMyPdf</strong> 16.4.1</p>
<p><strong>Tesseract</strong> 0.1.3</p>
<p>This is the Tesseract <em>init</em> file:</p>
<pre><code>#!/usr/bin/python
"""
tesseract
=========
A package for measuring the concentration of halos from Nbody simulations
non-parametrically using Voronoi tessellation.
Subpackages
-----------
voro
Routines for running and manipulating data returned by the Voronoi
tesselation routine vorovol.
nfw
Routines relating to fitting and determining properties of NFW profiles.
io
Routines for data input and output.
util
Misc. utility routines
tests
Routines for running and plotting different tests for the provided test halos.
"""
# Basic dependencies
import ConfigParser
import os
import shutil
# Initialize config file
_config_file_def = os.path.join(os.path.dirname(__file__),"default_config.ini")
_config_file_usr = os.path.expanduser("~/.tessrc")
if not os.path.isfile(_config_file_usr):
print('Creating user config file: {}'.format(_config_file_usr))
shutil.copyfile(_config_file_def,_config_file_usr)
# Read config file
config_parser = ConfigParser.ConfigParser()
config_parser.optionxform = str
config_parser.read(_config_file_def)
config_parser.read(_config_file_usr) # Overrides defaults with user options
# General options
config = {}
if config_parser.has_option('general','outputdir'):
config['outputdir'] = os.path.expanduser(config_parser.get('general','outputdir').strip())
else:
config['outputdir'] = os.getcwd()
# Subpackages
import voro
import nfw
import io
import util
import tests
__all__ = ['voro','nfw','io','util','tests']
</code></pre>
<p>How can I run that simple code for PDF conversion?</p>
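A reading of the tracebacks suggests the real problem: the PyPI package named <code>tesseract</code> (0.1.3) is an unrelated, Python-2-era astrophysics package (its docstring about "halos from Nbody simulations" confirms this), not the OCR engine. ocrmypdf does not need any <code>tesseract</code> Python package at all; it shells out to the Tesseract <em>binary</em>, which must be on PATH. A hedged sketch of the check (the assumed fix is <code>pip uninstall tesseract</code> plus installing the Tesseract engine itself, e.g. via its Windows installer or conda-forge):

```python
import shutil

def tesseract_on_path():
    """Return the path of the tesseract executable, or None if missing.

    ocrmypdf invokes this binary directly; no `import tesseract` is needed.
    """
    return shutil.which("tesseract")

print(tesseract_on_path())  # a path once the Tesseract engine is installed
```

With the stray PyPI package removed and the binary installed, the original `ocrmypdf.ocr(file_path, save_path, deskew=True)` call (without `import tesseract`) should run.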
|
<python><pdf><ocr><ocrmypdf>
|
2024-07-18 12:14:52
| 0
| 421
|
eljamba
|
78,764,092
| 9,036,382
|
How to update a polars dataframe column at a specific range?
|
<p>I want to update a specific column at a specific row index range.</p>
<p>Here's what I want to achieve in pandas:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({ "foo": [0,0,0,0] })
df["foo"].iloc[0:3] = 1
# or
df.iloc[0:3, df.columns.get_loc("foo")] = 1
</code></pre>
<p>How can I achieve this seemingly simple operation in polars? It seems to be possible to update a single row with the following:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({ "foo": [0,0,0,0] })
df[0, "foo"] = 1
</code></pre>
<p>but trying to update a range fails:</p>
<pre class="lang-py prettyprint-override"><code>df[0:3, "foo"] = 1
# TypeError: cannot use "slice(0, 3, None)" for indexing
df[0:3]["foo"] = 1
# TypeError: DataFrame object does not support `Series` assignment by index
</code></pre>
<p>The recommended answer of using <code>pl.when(pl.col("row_number").between(...)).then(...)</code> adds a significant overhead that shouldn't be needed considering that the row number is sequential, ordered, and starting at 0. On a dataset with a million rows, I'm seeing a 20x difference in performance between pandas <code>df.iloc[...] = x</code> and the current polars solution. Is there really no alternative?</p>
|
<python><python-polars>
|
2024-07-18 11:20:23
| 2
| 534
|
Oreille
|
78,764,081
| 3,446,351
|
Subtle mistake in conda create syntax will install a patch = 0 version of python
|
<p>I'm creating a new conda environment with no additional packages.</p>
<p>It is interesting to note the difference in the python interpreter version installed with a small change in the <strong>conda create</strong> syntax...</p>
<pre><code>conda create --name test python=3.11
</code></pre>
<p>...installs python 3.11.9 as expected.</p>
<p>However, putting a double <code>==</code> rather than a single <code>=</code> when specifying the python version...</p>
<pre><code>conda create --name test python==3.11
</code></pre>
<p>...does not throw any errors but creates an environment containing python 3.11.0, not the 3.11.9 that I was expecting.</p>
<p><strong>This is consistent across 3.9, 3.10, 3.11 and 3.12, all install patch version 0.</strong></p>
<p>I assume the double == is not the correct syntax and that this is not an intended behaviour but this is not caught and certainly confused me for a while.</p>
<p>Windows 10, conda version 23.11.0</p>
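This appears to be documented MatchSpec behaviour rather than a bug: in conda, <code>python=3.11</code> is a fuzzy (prefix) constraint matching any 3.11.x, so the solver picks the newest, while <code>python==3.11</code> is an exact-version constraint, which conda's version comparison treats as equal to 3.11.0. The following is a deliberately simplified pure-Python illustration of the two matching rules — it is not conda's real implementation:

```python
def matches(spec, version):
    """Toy version matcher: '=X' is a prefix match, '==X' is exact."""
    if spec.startswith("=="):
        return version == spec[2:]
    if spec.startswith("="):
        target = spec[1:]
        return version == target or version.startswith(target + ".")
    return False

candidates = ["3.11.0", "3.11.5", "3.11.9"]
print([v for v in candidates if matches("=3.11", v)])   # ['3.11.0', '3.11.5', '3.11.9']
print([v for v in candidates if matches("==3.11", v)])  # [] in this toy model
```

In conda itself the second spec does match one build, because its version ordering considers `3.11` and `3.11.0` equal — hence the surprising 3.11.0 install.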
|
<python><conda>
|
2024-07-18 11:17:37
| 1
| 691
|
Ymareth
|
78,763,960
| 6,144,940
|
WindRoseAxe generated incompleted axis
|
<p>I just encountered this problem, which never happened before.</p>
<p>The axis is not complete. The code worked well in previous runs.</p>
<p>I am not sure what detail I should provide for resolving the problem. Please let me know.</p>
<p>Python version : 3.8.8 (default, Apr 13 2021, 15:08:03) [MSC v.1916 64 bit (AMD64)]</p>
<p>matplotlib version 3.7.5</p>
<p>Data in *csv format for testing can be downloaded va: <a href="https://drive.google.com/file/d/1nIIT0vTVOtj2B89Fxh1tzBLqsYQq_hXP/view?usp=drive_link" rel="nofollow noreferrer">https://drive.google.com/file/d/1nIIT0vTVOtj2B89Fxh1tzBLqsYQq_hXP/view?usp=drive_link</a></p>
<p><a href="https://i.sstatic.net/9QY2o2aK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QY2o2aK.png" alt="enter image description here" /></a></p>
<p>My code is :</p>
<pre><code>from windrose import WindroseAxes
from matplotlib import pyplot as plt
import matplotlib.cm as cm
import numpy as np
import os
file_path = 'the directory with csv file'
# the code will process all the *csv files in the directory
has_legend = False
files = os.listdir(file_path)
def create_rose(file_name):
# Create wind speed and direction variables
# CSV data structure : Date , Time , Speed , Direction
csv = np.genfromtxt(file_path + file_name,
delimiter =',', skip_header =1)
pdf_1 = file_path + 'with_legend_'+ file_name + '.pdf'
pdf_2 = file_path + 'no_legend_'+ file_name + '.pdf'
# ws: wind speed
# wd: wind direction
# in python, range index starts with zero
# ws = csv[ : , 2]
ws = csv[ : , 0]
wd = csv[ : , 1] + 15
speed_bin = np.arange(0, 4, 0.5)
len(ws)
ax = WindroseAxes.from_ax()
ax.bar(wd, ws, normed =True, opening=1, edgecolor='white', bins = speed_bin, cmap = cm.jet, nsector = 12)
ax.set_xticklabels([90, 45, 0, 315, 270, 225, 180, 135])
if(has_legend):
ax.set_legend()
ax.legend(loc ='lower right', decimal_places=2)
plt.savefig(pdf_1)
else:
plt.savefig(pdf_2)
for file_x in files:
print(file_x)
create_rose(file_x)
print('All is finished')
</code></pre>
|
<python><python-3.x><windrose>
|
2024-07-18 10:56:34
| 1
| 473
|
Justin
|
78,763,857
| 3,391,592
|
Rows not getting added to Snowflake table using Python connector
|
<p>I am trying to create and load a table in Snowflake using Python. The data is in a pandas data frame. Below is the code I'm using. The table gets created, but it has 0 rows and there's no error message shown.</p>
<pre><code>with engine.connect() as con:
data.head(0).to_sql(name=table_name, con=con, if_exists="replace", index=False)
query="put file://" + file_path + "* @%" + table_name
con.execute(text(query))
query2="copy into " + table_name + " ON_ERROR=CONTINUE"
con.execute(text(query2))
</code></pre>
<p>Some notes:</p>
<ul>
<li>I had initially used the f-string approach, but for some reason it was showing a "not executable object" error, so I changed it to use <code>text()</code></li>
<li>When I run the exact same code on a different computer, it works as expected and loads rows in the table, so I don't quite understand what could be happening here</li>
</ul>
|
<python><sqlalchemy><snowflake-cloud-data-platform>
|
2024-07-18 10:35:24
| 2
| 478
|
Moon_Watcher
|
78,763,468
| 3,510,201
|
Use exc_info for log record information when log is raised within an exceptionhook
|
<p><strong>TLDR;</strong> When the log record is passed an <code>exc_info</code> is there a way for me to use that for the log record data instead of were it was logged from?</p>
<p>I am using an exception hook to log any uncaught exceptions. For normal logging calls I am using the data within the log record to get additional information such as the file, line number and so on. The log record will use the line the log call was made from as the basis for that information. This means that when the logger within the exception hook logs the exception, the extra log record data will point to the exception hook instead of the location where the original exception occurred.</p>
<p>Is there a way for me to use the exc_info data for the log record instead of the log call?</p>
<p>This is best illustrated in a two file setup. The first file raises the error</p>
<pre class="lang-py prettyprint-override"><code># File log_record_test.py
import logging
import log_record_setup
import sys
sys.excepthook = log_record_setup.exc_hook
logger = logging.getLogger()
logger.addHandler(log_record_setup.LogHandler())
def test():
logger.critical("Normal logging here")
def test2():
raise TypeError()
test()
test2()
</code></pre>
<p>The second file contains the set up, most importantly the exception hook.</p>
<pre class="lang-py prettyprint-override"><code># File log_record_setup.py
import logging
logger = logging.getLogger("TestLogger")
class LogHandler(logging.Handler):
def __init__(self):
super(LogHandler, self).__init__()
def emit(self, record):
self.format(record)
log_record_dict = record.__dict__.copy()
print(f"{log_record_dict['filename']} at {log_record_dict['lineno']} within {log_record_dict['funcName']}")
def exc_hook(exc_type, exc_value, tb):
logger.critical("Error happened", exc_info=(exc_type, exc_value, tb))
</code></pre>
<p>If the <code>log_record_test.py</code> file now runs we get this output.</p>
<pre><code>log_record_test.py at 12 within test
log_record_setup.py at 18 within exc_hook
</code></pre>
<p>The first test function has the output we want, but the second test, which raises an exception, says that it originates from the exception hook. This is the part I would like to change so that it reports the location where the <code>TypeError</code> was raised within the example.</p>
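One hedged way to get this behaviour with the stdlib: when a record carries <code>exc_info</code>, rewrite its location fields from the traceback's innermost frame. A <code>logging.Filter</code> runs before handlers see the record, so the hook itself stays unchanged (the class name is invented for the sketch):

```python
import logging
import os
import traceback

class ExcInfoLocation(logging.Filter):
    """Rewrite a record's location fields from the innermost exc_info frame.

    Attach to the logger used by the exception hook so that records carrying
    exc_info report where the exception was raised, not where .critical()
    was called.
    """
    def filter(self, record):
        if record.exc_info and record.exc_info[2] is not None:
            frame = traceback.extract_tb(record.exc_info[2])[-1]
            record.pathname = frame.filename
            record.filename = os.path.basename(frame.filename)
            record.lineno = frame.lineno
            record.funcName = frame.name
        return True

logger = logging.getLogger("TestLogger")
logger.addFilter(ExcInfoLocation())
```

With this filter in place, the handler's <code>record.__dict__</code> lookup from the question should show the raising function (e.g. <code>test2</code>) instead of <code>exc_hook</code>.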
|
<python><logging>
|
2024-07-18 09:19:41
| 1
| 539
|
Jerakin
|
78,763,404
| 2,261,637
|
Using structlog with Datadog
|
<p>We're using the logging module from the stdlib to send logs from our Django app to Datadog and we have customised our logging based on <a href="https://pypi.org/project/django-datadog-logger" rel="nofollow noreferrer">django-datadog-logger</a></p>
<p>We're exploring a move to <a href="https://www.structlog.org" rel="nofollow noreferrer">structlog</a>, and I was wondering if there is a package that gives a good starting point to get log formatted for Datadog, similar to what <a href="https://github.com/namespace-ee/django-datadog-logger/blob/main/django_datadog_logger/formatters/datadog.py" rel="nofollow noreferrer">the above library does</a> as Datadog <a href="https://docs.datadoghq.com/logs/log_collection/python/" rel="nofollow noreferrer">expects a few top level keys</a>.</p>
<p>Has anyone built something reusable already? Is there a recipe that we could reuse?</p>
<p>PS: I already found <a href="https://pypi.org/project/django-structlog/" rel="nofollow noreferrer">the structlog Django extension</a>, my question is more around formatting for Datadog</p>
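For anyone in the same situation: a structlog processor is just a callable <code>(logger, method_name, event_dict) -> event_dict</code>, so the django-datadog-logger formatter can be approximated in a few lines without a dedicated package. A hedged sketch that maps structlog's conventions onto the top-level attributes Datadog expects (<code>message</code>, <code>status</code>); everything beyond those two documented keys is an assumption:

```python
def datadog_processor(logger, method_name, event_dict):
    """Rename structlog keys to the top-level attributes Datadog expects."""
    event_dict["message"] = event_dict.pop("event", "")
    event_dict["status"] = method_name                     # "info", "error", ...
    event_dict.setdefault("logger", {})["name"] = getattr(logger, "name", None)
    return event_dict

# Plugged in just before the JSON renderer, e.g.:
# structlog.configure(processors=[..., datadog_processor,
#                                 structlog.processors.JSONRenderer()])
```

Trace-correlation keys (<code>dd.trace_id</code>, <code>dd.span_id</code>) would still need to come from ddtrace's log-injection support.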
|
<python><django><datadog><python-logging><structlog>
|
2024-07-18 09:05:07
| 0
| 1,875
|
Bruno A.
|
78,763,342
| 16,815,358
|
Compromise between quality and file size, how to save a very detailed image into a file with reasonable size (<1MB)?
|
<p>I am facing a small (big) problem: I want to generate a high resolution speckle pattern and save it as a file that I can import into a laser engraver. Can be PNG, JPEG, PDF, SVG, or TIFF.</p>
<p>My script does a decent job of generating the pattern that I want:</p>
<p>The user needs to first define the inputs, these are:</p>
<pre><code>############
# INPUTS #
############
dpi = 1000 # dots per inch
dpmm = 0.03937 * dpi # dots per mm
widthOfSampleMM = 50 # mm
heightOfSampleMM = 50 # mm
patternSizeMM = 0.1 # mm
density = 0.75 # 1 is very dense, 0 is not fine at all
variation = 0.75 # 1 is very bad, 0 is very good
############
</code></pre>
<p>After this, I generate the empty matrix and fill it with black shapes, in this case a circle.</p>
<pre><code>import numpy as np
import cv2

# conversions to pixels
widthOfSamplesPX = int(np.ceil(widthOfSampleMM*dpmm)) # get the width
widthOfSamplesPX = widthOfSamplesPX + 10 - widthOfSamplesPX % 10 # round up the width to nearest 10
heightOfSamplePX = int(np.ceil(heightOfSampleMM*dpmm)) # get the height
heightOfSamplePX = heightOfSamplePX + 10 - heightOfSamplePX % 10 # round up the height to nearest 10
patternSizePX = patternSizeMM*dpmm # this is the size of the pattern, so far I am going with circles
# init an empty image
im = 255*np.ones((heightOfSamplePX, widthOfSamplesPX), dtype = np.uint8)
# horizontal circle centres
numPoints = int(density*heightOfSamplePX/patternSizePX) # get number of patterns possible
if numPoints==1:
horizontal = [heightOfSamplePX // 2]
else:
horizontal = [int(i * heightOfSamplePX / (numPoints + 1)) for i in range(1, numPoints + 1)]
# vertical circle centres
numPoints = int(density*widthOfSamplesPX/patternSizePX)
if numPoints==1:
vertical = [widthOfSamplesPX // 2]
else:
vertical = [int(i * widthOfSamplesPX / (numPoints + 1)) for i in range(1, numPoints + 1)]
for i in vertical:
for j in horizontal:
# generate the noisy information
iWithNoise = i+variation*np.random.randint(-2*patternSizePX/density, +2*patternSizePX/density)
jWithNoise = j+variation*np.random.randint(-2*patternSizePX/density, +2*patternSizePX/density)
patternSizePXWithNoise = patternSizePX+patternSizePX*variation*(np.random.rand()-0.5)/2
cv2.circle(im, (int(iWithNoise),int(jWithNoise)), int(patternSizePXWithNoise//2), 0, -1) # add circle
</code></pre>
<p>After this step, I can get <code>im</code>, here's a low quality example at <code>dpi=1000</code>:</p>
<p><a href="https://i.sstatic.net/zKAmUW5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zKAmUW5n.png" alt="bad example" /></a></p>
<p>And here's one with my target dpi (5280):</p>
<p><a href="https://i.sstatic.net/82lyLbdT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82lyLbdT.png" alt="good example" /></a></p>
<p>Now I would like to save <code>im</code> in a manageable way at high quality (DPI > 1000). Is there any way to do this?</p>
<hr />
<p>Stuff that I have tried so far:</p>
<ol>
<li>plotting and saving the figure as PNG, TIFF, SVG, or PDF via
<code>plt.savefig()</code> with different DPI values</li>
<li><code>cv2.imwrite()</code>:
the file is too large; the only solution here is to reduce the DPI, which also reduces quality</li>
<li>SVG written from the matrix:
I developed this function, but ultimately the files were too large:</li>
</ol>
<pre><code>import svgwrite
import numpy as np
def matrix_to_svg(matrix, filename, padding = 0, cellSize=1):
# get matrix dimensions and extremes
rows, cols = matrix.shape
minVal = np.min(matrix)
maxVal = np.max(matrix)
# get a drawing
dwg = svgwrite.Drawing(filename, profile='tiny',
size = (cols*cellSize+2*padding,rows*cellSize+2*padding))
# define the colormap, in this case grayscale since black and white
colorScale = lambda val: svgwrite.utils.rgb(int(255*(val-minVal)/(maxVal-minVal)),
int(255*(val-minVal)/(maxVal-minVal)),
int(255*(val-minVal)/(maxVal-minVal)))
# get the color of each pixel in the matrix and draw it
for i in range(rows):
for j in range(cols):
color = colorScale(matrix[i, j])
dwg.add(dwg.rect(insert=(j * cellSize + padding, i * cellSize + padding),
size=(cellSize, cellSize),
fill=color))
dwg.save() # save
</code></pre>
<ol start="4">
<li><code>PIL.save()</code>. Files too large</li>
</ol>
<p>The problem could be also solved by generating better shapes. This would not be an obstacle either. I am open to re-write using a different method, would be grateful if someone would just point me in the right direction.</p>
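One direction that may help: the pattern is purely black and white, yet an 8-bit grayscale buffer spends a whole byte per pixel. Saving as a 1-bit image — e.g. Pillow's <code>Image.fromarray(im).convert("1").save("pattern.png", dpi=(dpi, dpi))</code>, which is an assumption to verify against the engraver's import filters — cuts the raw data 8x before PNG's lossless compression even starts. A stdlib-only sketch of why this packing matters:

```python
import zlib

# Toy bilevel "image": 8000 x 100 pixels, white with periodic black dots.
width, height = 8_000, 100
row = bytearray([255] * width)
for x in range(0, width, 40):
    row[x] = 0

one_byte_per_pixel = bytes(row) * height      # how an 8-bit grayscale buffer stores it

# Pack 8 bilevel pixels per byte -- what a 1-bit image format stores per row.
packed = bytearray(width // 8)
for i, px in enumerate(row):
    if px:
        packed[i // 8] |= 0x80 >> (i % 8)
one_bit_per_pixel = bytes(packed) * height

print(len(one_byte_per_pixel), len(one_bit_per_pixel))  # raw sizes differ by 8x
print(len(zlib.compress(one_bit_per_pixel)))            # deflate shrinks it further
```

SVG, by contrast, scales with the number of circles drawn, so at this density a compressed 1-bit raster is likely the smaller representation.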
|
<python><opencv><image-processing>
|
2024-07-18 08:52:04
| 3
| 2,784
|
Tino D
|
78,763,286
| 12,439,683
|
MyST implicit references work locally but not on Read The Docs
|
<p>I am using Sphinx to build my documentation and I parse markdown files with MyST.
I used <a href="https://myst-parser.readthedocs.io/en/latest/syntax/cross-referencing.html#inline-links-with-implicit-text" rel="nofollow noreferrer">implicit text</a> to link to different references, for example <code>[](#my-heading)</code> or <code>[](#MyPyClass)</code>. This works fine locally but not on Read The Docs; the text is not filled in and references to different pages are not resolved.</p>
<h4>How it works locally:</h4>
<p><code>See [](#my-heading)</code> results in "See <em>My_Heading</em>" with a hyperlink to the section.</p>
<p>Using <code>[](#MyPyClass)</code> in file1.html creates link references correctly from <code>file://path/file1.html</code> to <code>file://path/modules/classes#MyPyClass</code> and the text written will be "MyPyClass" or "modules.classes.MyPyClass" if I use the full dot-fragment.</p>
<h4>On Read The Docs</h4>
<p><strong>Problem 1: Same page</strong>: If I use <code>see [](#my-heading)</code> on a page where <em>#My Heading</em> is present, it does not fill in the text. <code>[](#my-heading)</code> results in an empty <code>see <a href=#my-heading></a></code> block, instead of writing "<em>see My Heading</em>"</p>
<p><strong>Problem 2: Different pages</strong>: If I use <code>[](#MyPyClass)</code> or <code>[](#modules.classes.MyPyClass)</code> it will also be empty and result only in a file-internal reference, e.g. <code>file1.html#MyPyClass</code>.</p>
<hr />
<p>I double checked. I use the same python (3.10), sphinx (v7.3.7) and myst_parser (3.0.1) versions in both environments. The logs are mostly similar, except that locally more warnings are generated at the end. These local warnings are about 'myst' cross-references not found that are <em>not yet</em> in the documentation and are therefore <em>correct</em>; online, these warnings are not logged.</p>
<p>Any ideas or solutions why it is not working and the cross-referencing of myst is bugged?</p>
|
<python><hyperlink><python-sphinx><read-the-docs><myst>
|
2024-07-18 08:39:51
| 1
| 5,101
|
Daraan
|
78,763,225
| 19,745,277
|
Python Ping IPv4 and IPv6 from docker container
|
<p>I am trying to ping IPv4 and IPv6 addresses from inside my docker container.</p>
<p>I already tried using <code>ping3</code>, which does not support IPv6:</p>
<pre><code>>>> from ping3 import ping
>>> v4 = ping("142.250.185.110")
>>> v6 = ping("2a00:1450:4001:831::200e")
>>> print(v4)
0.003611326217651367
>>> print(v6)
False
</code></pre>
<p>I also tried using <code>multiping</code>, but it returns response times even for unreachable IP addresses (e.g. <code>1.2.3.4</code>) from inside my Docker container:</p>
<pre><code>from multiping import MultiPing

mp = MultiPing([server_address])
mp.send()
responses, no_responses = mp.receive(timeout)
print(server_address + " " + str(responses))
# 1.2.3.5 {'1.2.3.5': 0.004548549652099609}
# 1.2.3.4 {'1.2.3.4': 0.004738330841064453}
</code></pre>
<p>Is there a good and reliable way to ping IPv4 and IPv6 addresses including response times?</p>
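For readers hitting the same wall: `ping3` opens IPv4 ICMP sockets only, hence the `False` for the IPv6 target. One dependency-free approach is to shell out to the system ping, selecting the address family with the stdlib `ipaddress` module; a hedged sketch (the `-4`/`-6`/`-c`/`-W` flags assume iputils, as shipped in most Linux containers):

```python
import ipaddress
import subprocess

def ping_command(host, count=1, timeout_s=1):
    """Build an iputils ping command for an IPv4 or IPv6 address."""
    version = ipaddress.ip_address(host).version
    family_flag = "-4" if version == 4 else "-6"
    return ["ping", family_flag, "-c", str(count), "-W", str(timeout_s), host]

def host_answers(host):
    """Return True if the host replied; RTT parsing from stdout is left out."""
    result = subprocess.run(ping_command(host), capture_output=True, text=True)
    return result.returncode == 0

print(ping_command("2a00:1450:4001:831::200e"))
```

Alternatively, the `icmplib` package supports both address families natively with raw sockets and reports RTTs directly.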
|
<python><docker><ping>
|
2024-07-18 08:25:35
| 1
| 349
|
noah
|
78,763,062
| 1,397,843
|
Why `pd.merge()` still works even though the `on` column is located in the index
|
<p>Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df1 = pd.DataFrame(
data={'A': range(5)},
index=range(10, 15),
).rename_axis(index='ID')
df2 = df1.add(100).reset_index()
print(df1)
# A
# ID
# 10 0
# 11 1
# 12 2
# 13 3
# 14 4
print(df2)
# ID A
# 0 10 100
# 1 11 101
# 2 12 102
# 3 13 103
# 4 14 104
</code></pre>
<p>Here, we have two dataframes:</p>
<ul>
<li><code>df1</code>: Includes the <code>ID</code> as index</li>
<li><code>df2</code>: Includes the <code>ID</code> as a column</li>
</ul>
<p>To my surprise, <code>pd.merge()</code> still works:</p>
<pre class="lang-py prettyprint-override"><code>result = df1.merge(df2, on='ID', left_index=False)
print(result)
# ID A_x A_y
# 0 10 0 100
# 1 11 1 101
# 2 12 2 102
# 3 13 3 103
# 4 14 4 104
</code></pre>
<p>You can also leave out <code>left_index=False</code> as it's the default. It still works.</p>
<p>However, <code>'ID'</code> does not exist as a column in <code>df1</code>, so I expected it to raise an error.</p>
<p>Am I missing something here?</p>
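The behaviour readers may be missing here: `merge`'s `on` keys resolve against *named index levels* as well as columns — when the key name isn't found among a frame's columns, pandas falls back to an index level with that name, so `df1`'s `ID` index qualifies. A quick check reproducing the question's setup (requires pandas):

```python
import pandas as pd

# df1 has "ID" as a named index level; df2 has it as a regular column.
df1 = pd.DataFrame({"A": range(5)}, index=range(10, 15)).rename_axis(index="ID")
df2 = df1.add(100).reset_index()

# "ID" is matched from df1's index level and df2's column -- no error.
merged = df1.merge(df2, on="ID")
print(merged.columns.tolist())
```

So rather than a bug, this is the documented key-resolution order; an actual `KeyError` only occurs when the name matches neither a column nor an index level.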
|
<python><pandas><merge>
|
2024-07-18 07:49:53
| 1
| 386
|
Amin.A
|
78,763,016
| 13,160,199
|
Way to convert a latex file to pdf only with python? (without having to locally install anything)
|
<p>I am building mathematical software with PyQt, and I would like to provide the users with a result in a file written in LaTeX and also the compiled version of it in PDF. However, I can't find a way to do so. All solutions I've seen require me to install a compiler locally.</p>
<p>I cannot expect the users to install a compiler on their computers once they have the executable version of the software. That is why I am looking for a solution that does not require my system to have it.</p>
<p>This question is <em>not</em> about displaying mathematical typesetting in Qt, but about generating a document report in LaTeX and also an easy way to visualize it in PDF.</p>
|
<python><pdf><pyqt><latex>
|
2024-07-18 07:39:31
| 1
| 335
|
Lavínia Beghini
|
78,762,922
| 11,345,585
|
What is the best approach to handle class instances in django application in a production environment?
|
<p>I would like to compare two approaches. First, you create a class and a module-level instance of it, and inside the views you import the instance and call its methods. Second, you import the class and create an instance of it inside the view function while handling a request. Which one do you think is better? I am not asking about Django's own classes such as models, forms or serializers, but about the ones I create myself.</p>
<p>Approach 1: Create an instance at module level and import it</p>
<pre><code># utils.py
class Calc:
def run(self):
# perform calculations
return result
calc_instance = Calc()
# views.py
from utils import calc_instance
def view_function(request):
result = calc_instance.run()
# rest of the view logic
</code></pre>
<p>Approach 2: Import the class and create an instance inside the view function</p>
<pre><code># utils.py
class Calc:
def run(self):
# perform calculations
return result
# views.py
from utils import Calc
def view_function(request):
calc_instance = Calc()
result = calc_instance.run()
# rest of the view logic
</code></pre>
<p>Also, I see Approach 1 creates a single instance that remains in memory for the lifetime of the process, at least in the development server.</p>
<p>So based on your experience, which approach would you suggest, or do both exhibit similar behaviour in a production environment?</p>
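<p>A minimal, framework-free sketch of the practical difference (the <code>Calc</code> here is a stand-in for the class in the question, not Django API): a module-level instance keeps its mutable state for the life of the process, while a per-request instance starts fresh each time:</p>

```python
# Framework-free sketch; `Calc` stands in for the class in the question.
class Calc:
    def __init__(self):
        self.calls = 0            # mutable state on the instance

    def run(self):
        self.calls += 1
        return self.calls

shared = Calc()                   # Approach 1: one instance per process

def view_shared(request=None):
    return shared.run()           # state persists across "requests"

def view_fresh(request=None):
    return Calc().run()           # Approach 2: fresh state every request

a, b = view_shared(), view_shared()   # accumulates: 1, then 2
c, d = view_fresh(), view_fresh()     # independent: 1 and 1
```

<p>If <code>Calc</code> is stateless, the module-level instance is harmless and avoids repeated construction; if it carries any per-request state, Approach 2 (or a thread-local) is the safer default under multi-worker servers.</p>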
|
<python><django><class><instance><production-environment>
|
2024-07-18 07:18:50
| 1
| 380
|
Sachin Das
|
78,762,664
| 7,802,751
|
When using Google Colab, Python package 'datasets' just disappeared from virtualenv directory 'site-packages'
|
<p>I'm using Google Colab and trying make a virtual environment to work.</p>
<p>My code is:</p>
<pre><code> from google.colab import drive
drive.mount('/content/drive')
!pip install virtualenv
myenv_dir = '/content/drive/MyDrive/virtual_env/'
!virtualenv {myenv_dir}
!chmod +x {myenv_dir}bin/pip;
!chmod +x {myenv_dir}bin/activate;
!source {myenv_dir}bin/activate; pip install accelerate==0.29.3 -U
!source {myenv_dir}bin/activate; pip install datasets==2.19.1
import sys
packages_dir = myenv_dir + "lib/python3.10/site-packages/"
sys.path.append(packages_dir)
import accelerate
import datasets
</code></pre>
<p>This code runs OK, and at this point I can import both the <code>accelerate</code> and <code>datasets</code> packages. When I look at the Google Drive file explorer, the subdirectories are there, both for <code>accelerate</code> and <code>datasets</code>.</p>
<p>Now I disconnect the notebook, reconnect it, and run just the code below (enough to connect to the virtual environment and start using the packages without reinstalling them on Colab):</p>
<pre><code> drive.mount('/content/drive')
myenv_dir = '/content/drive/MyDrive/virtual_env/'
!chmod +x {myenv_dir}bin/activate;
!source {myenv_dir}bin/activate;
import sys
packages_dir = myenv_dir + "lib/python3.10/site-packages/"
sys.path.append(packages_dir)
import accelerate
import datasets
</code></pre>
<p>And here comes the weird part: <code>import accelerate</code> works fine. <code>import datasets</code> returns a</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'datasets'.</p>
</blockquote>
<p>If I look in the directory tree, either in Colab's file explorer or in Google Drive, the <code>accelerate</code> subdirectory is there but the <code>datasets</code> directory is gone. Just vanished.</p>
<p>I'm at a loss on what's happening here!</p>
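<p>A small, standard-library debugging aid that may help narrow this down: <code>importlib.util.find_spec</code> reports where a module would be imported from — or <code>None</code> if it cannot be found at all — which distinguishes a <code>sys.path</code> problem from a genuinely missing install. <code>json</code> stands in for <code>datasets</code> below so the sketch runs anywhere:</p>

```python
import importlib.util

# Where would this module be imported from? `origin` is the file path of
# the package that wins on sys.path; None means "not importable at all".
spec = importlib.util.find_spec("json")          # stand-in for "datasets"
origin = spec.origin if spec else None

missing = importlib.util.find_spec("no_such_package_xyz123")
```

<p>Running this right after reconnecting (with <code>"datasets"</code> in place of <code>"json"</code>) would show whether Python even sees the Drive copy before the import fails.</p>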
|
<python><google-drive-api><google-colaboratory><virtualenv><google-cloud-colab-enterprise>
|
2024-07-18 06:12:13
| 2
| 997
|
Gustavo Mirapalheta
|
78,762,448
| 1,818,935
|
Unable to load JSONATA into duktape JavaScript engine
|
<p>I'm trying to use <a href="https://pypi.org/project/jsonata/" rel="nofollow noreferrer">jsonata for python</a> in a Jupyter notebook running in Visual Studio Code. I have installed jsonata via <code>%pip install jsonata</code>, and have imported it via <code>import jsonata</code>. However, when I try to run a command such as <code>jsonata.Context()</code> or <code>jsonata.transform('x', {'x': 10})</code>, I get the following error message: <code>Unable to load JSONATA into duktape JavaScript engine</code>.</p>
<p>Visual Studio Code version 1.91.1<br />
python 3.12.4<br />
jsonata 0.2.5 (it's the python API's version)<br />
Windows 10 Version 22H2<br />
Computer architecture x64</p>
|
<python><visual-studio-code><jupyter-notebook><jsonata>
|
2024-07-18 04:46:15
| 0
| 6,053
|
Evan Aad
|
78,762,421
| 445,345
|
Can you put target value in input x in keras lstm model?
|
<p>In keras documentation:</p>
<p>Arguments x:</p>
<ul>
<li>A tf.data.Dataset. Should return a tuple of either (inputs, <strong>targets</strong>) or (inputs, <strong>targets</strong>, sample_weights).</li>
<li>A keras.utils.PyDataset returning (inputs, <strong>targets</strong>) or (inputs, <strong>targets</strong>, sample_weights).</li>
</ul>
<p>Does it mean that the target values can be included in the input variable x? I have seen an example that uses x. Would it make the model inaccurate? Thanks in advance.</p>
<p>Method: <code>Model.fit()</code></p>
<p><code>x = (1, 2, target value)</code>, <code>y = (target value)</code></p>
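<p>A framework-free sketch of what the docs describe (no TensorFlow here): <code>x</code> being a dataset of <code>(inputs, targets)</code> pairs means the targets ride along <em>inside the x object</em> and <code>fit(x)</code> is called without a separate <code>y</code> — which is different from putting the target value into the input <em>features</em>, as that leaks the label into the model:</p>

```python
# A dataset that yields (inputs, targets) tuples — the shape the Keras docs
# describe for tf.data.Dataset / PyDataset passed as `x`.
def dataset():
    data = [((1.0, 2.0), 3.0), ((2.0, 3.0), 5.0)]
    for inputs, target in data:
        yield inputs, target

pairs = list(dataset())
# Leakage would instead be: inputs = (1.0, 2.0, 3.0) with 3.0 also the target.
```
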
|
<python><tensorflow><keras>
|
2024-07-18 04:33:15
| 0
| 3,464
|
tipsywacky
|
78,762,359
| 16,869,946
|
Solving system of integral equations using fsolve and quad in python scipy
|
<p>I am trying to solve the following system of integral equations with unknowns theta1, theta2, theta3: <a href="https://i.sstatic.net/zOkKCrX5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOkKCrX5.png" alt="enter image description here" /></a></p>
<p>where Phi and phi are the CDF and PDF of the standard normal distribution, respectively, using scipy's <code>fsolve</code> and <code>integrate</code>. Here is my code:</p>
<pre><code>import numpy as np
import math
import scipy.integrate as integrate
from scipy import integrate
from scipy.stats import norm
from scipy.optimize import fsolve
def function(xi, thetai, thetaj, thetak):
return (1 - norm.cdf(xi - thetaj)) * (1 - norm.cdf(xi - thetak)) * norm.pdf(xi - thetai)
def pi_i(thetai, thetaj, thetak):
return integrate.quad(function, -np.inf, np.inf)[0]
def equations(p):
t1, t2, t3 = p
return (pi_i(t1,t2,t3) - 0.5, pi_i(t2,t1,t3) - 0.3, pi_i(t3,t1,t2) - 0.2)
t1, t2, t3 = fsolve(equations, (1, 1, 1))
print(equations((t1, t2, t3)))
</code></pre>
<p>However, when I run my code, the following error pops up:</p>
<pre><code>TypeError: function() missing 3 required positional arguments: 'thetai', 'thetaj', and 'thetak'
</code></pre>
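<p>For what it's worth, the traceback points at <code>integrate.quad(function, -np.inf, np.inf)</code> receiving no values for the three theta parameters; <code>quad</code> forwards extras to the integrand through its <code>args</code> keyword. A sketch of that repair, checked here on the symmetric case (three identical players, where the win probability should be exactly 1/3):</p>

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def integrand(xi, thetai, thetaj, thetak):
    return ((1 - norm.cdf(xi - thetaj))
            * (1 - norm.cdf(xi - thetak))
            * norm.pdf(xi - thetai))

def pi_i(thetai, thetaj, thetak):
    # `args` forwards the three thetas into the integrand on every call.
    val, _err = integrate.quad(integrand, -np.inf, np.inf,
                               args=(thetai, thetaj, thetak))
    return val

# Sanity check: with identical thetas, each of the 3 players wins 1/3,
# since ∫ (1 - Φ(x))² φ(x) dx = ∫₀¹ u² du = 1/3.
p = pi_i(0.0, 0.0, 0.0)
```
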
|
<python><scipy><quad><fsolve>
|
2024-07-18 04:03:14
| 1
| 592
|
Ishigami
|
78,761,783
| 2,475,195
|
XGBoostError: value -1 for Parameter verbosity exceed bound [0,3]
|
<p>Error message as in the title. It doesn't make sense to me, per my code below:</p>
<pre><code>clf = xgboost.XGBClassifier(verbosity=1)
print (clf.__class__, clf.verbosity)
# prints <class 'xgboost.sklearn.XGBClassifier'> 1
clf.fit(X=train_data_iter[features].fillna(0), y=train_data_iter['y']) # the error is raised here
</code></pre>
<p>The value is clearly 1, but it somehow gets -1? I don't get it.</p>
|
<python><machine-learning><xgboost><xgbclassifier>
|
2024-07-17 22:12:23
| 1
| 4,355
|
Baron Yugovich
|
78,761,747
| 1,016,004
|
Can you specify a default value for an optional element in structural pattern matching?
|
<p>I have a class <code>Expr</code> that can contain an array of slugs; the first slug defines the type of operation, and in one case it can be followed by either 1 or 2 more slugs. Is there a way to use structural pattern matching such that the last slug gets a default value when it's not given? Here are some close-ish attempts:</p>
<pre class="lang-py prettyprint-override"><code>class Expr:
def __init__(self):
self.slugs = ["flows", "step1"]
e = Expr()
</code></pre>
<p><strong>Solution 1:</strong></p>
<p>This one works but requires me to add validation logic and duplicates the wildcard error logic, which I could otherwise fall back on:</p>
<pre><code>match e:
case Expr(slugs=["flows", step, *output]):
if len(output) > 1:
raise Exception("Invalid expression")
output = output[0] if output else None
case _:
raise Exception("Invalid expression")
</code></pre>
<p><strong>Solution 2:</strong></p>
<p>This one doesn't work because one branch doesn't bind the <code>output</code> (<em>SyntaxError: alternative patterns bind different names)</em>:</p>
<pre><code>match e:
case Expr(slugs=["flows", step]) | Expr(slugs=["flows", step, output]):
</code></pre>
<p><strong>Solution 3:</strong></p>
<p>This one would be my ideal (<em>SyntaxError: invalid syntax</em>):</p>
<pre><code>match e:
case Expr(slugs=["flows", step, output = None]):
</code></pre>
<p>I know solution 1 sort of does the job and it's a nitpick, but is it possible to get to a solution where the default value of the <code>output</code> variable can be bound within the <code>case</code> statement?</p>
|
<python><structural-pattern-matching>
|
2024-07-17 21:55:05
| 1
| 6,505
|
Robin De Schepper
|
78,761,659
| 6,029,488
|
Spyder: pip under different interpreter is used for installing packages
|
<p>I started using Spyder recently. I have Python 3.7 and 3.11 installed on my machine. I want to use the second one (3.11); therefore, I have successfully (confirmed by checking <code>sys.path</code> in the console) adjusted the interpreter in the preferences:
<a href="https://i.sstatic.net/4af6pt1L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4af6pt1L.png" alt="enter image description here" /></a></p>
<p>Nevertheless, when I want to install a package using <code>pip</code> in Spyder's console (<code>!pip install NAME</code>), the pip under Python 3.7 is utilized, and the package is installed in the Lib folder of Python 3.7:</p>
<p><a href="https://i.sstatic.net/yrPwQYe0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrPwQYe0.png" alt="enter image description here" /></a></p>
<p>Is it possible to use the pip installed under the Python 3.11 interpreter in Spyder's console to install the desired modules?</p>
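<p>A common, hedged workaround (not a Spyder-specific API): invoke pip as a module of the interpreter that is actually running, so packages land in that interpreter's <code>site-packages</code> regardless of which <code>pip</code> is first on <code>PATH</code>. In Spyder's IPython console that would be <code>!{sys.executable} -m pip install NAME</code>; the sketch below shows the same idea side-effect-free:</p>

```python
import subprocess
import sys

# `sys.executable` is the interpreter running this code; `-m pip` runs the
# pip that belongs to it. `--version` is used here instead of `install`
# so the sketch has no side effects.
out = subprocess.run([sys.executable, "-m", "pip", "--version"],
                     capture_output=True, text=True)
```
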
|
<python><pip><spyder>
|
2024-07-17 21:21:33
| 2
| 479
|
Whitebeard13
|
78,761,516
| 926,918
|
Generate stable combinations of an unsorted list in python
|
<p>I have a list whose elements are <strong>not</strong> in sorted order. I wish to generate combinations in such a way that the order of the elements is as in the list. However, the Python <a href="https://docs.python.org/3/library/itertools.html#itertools.combinations" rel="nofollow noreferrer">documentation</a> and its behavior confuse me. To quote:</p>
<ol>
<li><p>The combination tuples are emitted in <strong>lexicographic order</strong> according to the order of the input iterable. If the input iterable is sorted, the output tuples will be produced in sorted order.</p>
</li>
<li><p>Elements are treated as unique based on their position, <strong>not on their value</strong>. If the input elements are unique, there will be no repeated values within each combination.</p>
</li>
</ol>
<p>I experimented a little and found the following behavior in Python 3.12.4:</p>
<pre><code>>>> [c for c in combinations([7,3,4],2)]
[(7, 3), (7, 4), (3, 4)]
>>> [c for c in combinations({7,3,4},2)]
[(3, 4), (3, 7), (4, 7)]
</code></pre>
<p>While the first outcome, which uses a list, serves my purpose, it seems to contradict the documentation and may not be reliable. The second outcome, which uses a set, is consistent with the documentation but does not serve my purpose.</p>
<p>I would be grateful for any clarification on these observations.</p>
<p>PS: I understand that the problem can be solved using additional list comprehension, but I would like to avoid it if that step is not required.</p>
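<p>For what it's worth, a quick check of the positional reading of the docs — "lexicographic order according to the order of the input iterable" refers to positions, so an unsorted list is handled stably, while a set is first consumed in its own iteration order:</p>

```python
from itertools import combinations

# combinations() follows the order in which the iterable *yields* its
# elements (positions, not values), so an unsorted list is handled stably.
data = [7, 3, 4]
pairs = list(combinations(data, 2))

# A set is consumed in its own iteration order first; for these small ints
# that happens to look sorted, but the ordering comes from set iteration,
# not from combinations().
set_pairs = list(combinations({7, 3, 4}, 2))
```
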
|
<python><python-3.x><combinations><python-itertools>
|
2024-07-17 20:31:29
| 0
| 1,196
|
Quiescent
|
78,761,475
| 5,594,008
|
Parse a JS string to Python
|
<p>I have a string containing JS code:</p>
<pre><code> [{label: '&nbsp;', name: 'Confirmed', width: 30, formatter: 'checkbox'},{label: 'ID', name: 'ID', width: 100, key:true},{label: 'E-mail', name: 'F250275', width: null, formatter: clearText},{label: 'Agree', name: 'F250386', width: null, formatter: clearText},]
</code></pre>
<p>Is there a way to parse it in Python to get a list? I've tried <code>json.loads</code>, but get the error <code>json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 3 column 18 (char 27)</code></p>
<p><strong>Some update, based on comments.</strong></p>
<p>There isn't any option to use JS here. That's why I didn't add the JS tag</p>
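<p>A hedged, standard-library sketch of one approach: massage the JS object literal into valid JSON with a few regexes (quote bare keys, swap quote styles, quote bare identifiers such as <code>clearText</code>, drop trailing commas) and then hand it to <code>json.loads</code>. This is deliberately naive — it assumes no apostrophes or colons inside the string values — so treat it as a starting point, not a general JS parser:</p>

```python
import json
import re

JS = ("[{label: '&nbsp;', name: 'Confirmed', width: 30, formatter: 'checkbox'},"
      "{label: 'ID', name: 'ID', width: 100, key:true},"
      "{label: 'E-mail', name: 'F250275', width: null, formatter: clearText},"
      "{label: 'Agree', name: 'F250386', width: null, formatter: clearText},]")

def js_literal_to_python(s):
    s = re.sub(r'([{,]\s*)([A-Za-z_]\w*)\s*:', r'\1"\2":', s)   # quote bare keys
    s = s.replace("'", '"')                                     # JSON wants double quotes
    s = re.sub(r':\s*([A-Za-z_]\w*)',                           # quote bare values
               lambda m: m.group(0) if m.group(1) in ('true', 'false', 'null')
               else ': "%s"' % m.group(1), s)
    s = re.sub(r',\s*([}\]])', r'\1', s)                        # drop trailing commas
    return json.loads(s)

rows = js_literal_to_python(JS)
```
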
|
<python><parsing>
|
2024-07-17 20:18:37
| 2
| 2,352
|
Headmaster
|
78,761,143
| 13,226,563
|
Streaming audio to Amazon Connect through Kinesis Video Streams
|
<p>What I am trying to achieve is the following.</p>
<p>In Amazon Connect, when I create a Flow with a Start Streaming Data block, it starts streaming to a Kinesis Video Stream, and I can call a Lambda function with the stream's information.</p>
<p>Next, I want this lambda function to stream some audio into the Kinesis Video Stream, so the customer on the phone can hear it.</p>
<p>I have created the following Python script, but I cannot hear the audio on the phone call. I have looked at the error logs of Connect, and no errors are occurring. A solution to fix this script, or any examples of a similar pipeline, would be awesome.</p>
<pre><code>import boto3
import botocore.exceptions
import types
from functools import lru_cache
import time
import io
import wave
class KinesisVideo(object):
_CONTROL_SERVICES = ('kinesisvideo', )
_DATA_SERVICES = ('kinesis-video-media', 'kinesis-video-archived-media')
def __init__(self, session=None):
self._session = session or boto3.Session()
self._methods = {}
for service in self._CONTROL_SERVICES + self._DATA_SERVICES:
prototype = self._get_client_for_service(service)
for method in prototype.meta.method_to_api_mapping.keys():
self._methods[method] = service
@lru_cache()
def _get_arn_for_stream_name(self, stream_name):
response = self.describe_stream(StreamName=stream_name)
return response['StreamInfo']['StreamARN']
@lru_cache()
def _get_endpoint_for_stream_method(self, stream_arn, method):
response = self.get_data_endpoint(StreamARN=stream_arn, APIName=method.upper())
return response['DataEndpoint']
@lru_cache()
def _get_client_for_service(self, service, endpoint_url=None):
client = self._session.client(service, endpoint_url=endpoint_url)
if service == 'kinesis-video-media':
client = self._patch_kinesis_video_media(client)
return client
@lru_cache()
def _get_client_by_arguments(self, method, stream_name=None, stream_arn=None):
service = self._methods[method]
if service not in self._DATA_SERVICES:
return self._get_client_for_service(service)
if not (bool(stream_name) ^ bool(stream_arn)):
raise botocore.exceptions.ParamValidationError(report=
'One of StreamName or StreamARN must be defined ' + \
'to determine service endpoint'
)
stream_arn = self._get_arn_for_stream_name(stream_name) if stream_name else stream_arn
endpoint_url = self._get_endpoint_for_stream_method(stream_arn, method)
return self._get_client_for_service(service, endpoint_url)
def __getattr__(self, method):
if method not in self._methods:
return getattr(super(), method)
kwarg_map = {'StreamName': 'stream_name', 'StreamARN': 'stream_arn'}
def _api_call(**kwargs):
filtered_kwargs = {kwarg_map[k]: v for k, v in kwargs.items() if k in kwarg_map}
client = self._get_client_by_arguments(method, **filtered_kwargs)
return getattr(client, method)(**kwargs)
return _api_call
@staticmethod
def _patch_kinesis_video_media(client):
client.meta.service_model._service_description['operations']['PutMedia'] = {
'name': 'PutMedia',
'http': {'method': 'POST', 'requestUri': '/putMedia'},
'input': {'shape': 'PutMediaInput'},
'output': {'shape': 'PutMediaOutput'},
'errors': [
{'shape': 'ResourceNotFoundException'},
{'shape': 'NotAuthorizedException'},
{'shape': 'InvalidEndpointException'},
{'shape': 'ClientLimitExceededException'},
{'shape': 'ConnectionLimitExceededException'},
{'shape': 'InvalidArgumentException'}
],
'authtype': 'v4-unsigned-body',
}
client.meta.service_model._shape_resolver._shape_map['PutMediaInput'] = {
'type': 'structure',
'required': ['FragmentTimecodeType', 'ProducerStartTimestamp'],
'members': {
'FragmentTimecodeType': {
'shape': 'FragmentTimecodeType',
'location': 'header',
'locationName': 'x-amzn-fragment-timecode-type',
},
'ProducerStartTimestamp': {
'shape': 'Timestamp',
'location': 'header',
'locationName': 'x-amzn-producer-start-timestamp',
},
'StreamARN': {
'shape': 'ResourceARN',
'location': 'header',
'locationName': 'x-amzn-stream-arn',
},
'StreamName': {
'shape': 'StreamName',
'location': 'header',
'locationName': 'x-amzn-stream-name',
},
'Payload': {
'shape': 'Payload',
},
},
'payload': 'Payload',
}
client.meta.service_model._shape_resolver._shape_map['PutMediaOutput'] = {
'type': 'structure',
'members': {'Payload': {'shape': 'Payload'}},
'payload': 'Payload',
}
client.meta.service_model._shape_resolver._shape_map['FragmentTimecodeType'] = {
'type': 'string',
'enum': ['ABSOLUTE', 'RELATIVE'],
}
client.put_media = types.MethodType(
lambda self, **kwargs: self._make_api_call('PutMedia', kwargs),
client,
)
client.meta.method_to_api_mapping['put_media'] = 'PutMedia'
return client
def main():
session = boto3.Session(
aws_access_key_id='MY_ACCESS_KEY_ID',
aws_secret_access_key='MY_SECRET_ACCESS_KEY',
region_name='MY_REGION'
)
video = KinesisVideo(session=session)
print(video.list_streams())
start_tmstp = repr(time.time()).split('.')[0]
print(start_tmstp)
# Ensure the audio data is correctly handled for Amazon Connect
with open('packages/python/output.mkv', 'rb') as payload_file:
payload_data = payload_file.read()
# Create a PCM file from the audio data
pcm_data = io.BytesIO(payload_data)
with wave.open(pcm_data, 'wb') as wav_file:
wav_file.setnchannels(1) # Mono
wav_file.setsampwidth(2) # 2 bytes per sample
wav_file.setframerate(8000) # Sample rate
wav_file.writeframes(payload_data)
response = video.put_media(
StreamName='MY_STREAM_NAME',
Payload=pcm_data,
FragmentTimecodeType='RELATIVE',
ProducerStartTimestamp=start_tmstp,
)
import json
for event in map(json.loads, response['Payload'].read().decode('utf-8').splitlines()):
print(event)
if __name__ == '__main__':
main()
</code></pre>
|
<python><amazon-web-services><amazon-kinesis><amazon-connect><amazon-kinesis-video-streams>
|
2024-07-17 18:42:33
| 1
| 570
|
Andy_ye
|
78,761,023
| 11,402,025
|
fastapi : "GET /docs HTTP/1.1" 404 Not Found
|
<p>I recently updated FastAPI from v0.89.1 to v0.110.0, and I am getting the following error:</p>
<p>"GET /docs HTTP/1.1" 404 Not Found</p>
<p>There is no other error to help me debug the issue.</p>
<pre><code>INFO: Uvicorn running on http://0.0.0.0:portNumber (Press CTRL+C to quit)
INFO: Started reloader process [xxxxx] using StatReload
INFO: Started server process [xxxxx]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: - "GET / HTTP/1.1" 404 Not Found
INFO: - "GET /docs HTTP/1.1" 404 Not Found
</code></pre>
<p>Everything was working fine before the upgrade.</p>
|
<python><python-3.x><fastapi><openapi>
|
2024-07-17 18:10:58
| 1
| 1,712
|
Tanu
|
78,760,980
| 3,724,318
|
creating hexagonal lattice graph in networkx
|
<p>I need to create a hexagonal lattice network based on another researcher's work. The researcher describes the structure of their network in the yellow-highlighted section of text here.</p>
<p><a href="https://i.sstatic.net/jyB7d7fF.png" rel="noreferrer"><img src="https://i.sstatic.net/jyB7d7fF.png" alt="enter image description here" /></a></p>
<p>Here's a little bit more info from the actual paper (again, I'm only in need of help with the hexagonal lattice construction):</p>
<p><a href="https://i.sstatic.net/jQmznaFd.png" rel="noreferrer"><img src="https://i.sstatic.net/jQmznaFd.png" alt="enter image description here" /></a></p>
<p>The full paper is <a href="https://www.science.org/doi/full/10.1126/science.1185231" rel="noreferrer">here</a>.</p>
<p>I suspect I'm missing something. Here's my code:</p>
<pre><code>import networkx as nx
import random
import matplotlib.pyplot as plt
def create_hexagonal_lattice_graph(rows, cols):
G = nx.hexagonal_lattice_graph(rows, cols)
simple_G = nx.Graph(G)
return simple_G
# Define the number of rows and columns to get approximately 200 nodes
rows, cols = 6, 6 # Adjusting to get roughly 200 nodes (10*10 grid)
# Create the initial hexagonal lattice graph
G = create_hexagonal_lattice_graph(rows, cols)
# Ensure all nodes have degree 6 by adding missing edges
for node in G.nodes():
while G.degree(node) < 6:
possible_nodes = set(G.nodes()) - set(G.neighbors(node)) - {node}
new_neighbor = random.choice(list(possible_nodes))
G.add_edge(node, new_neighbor)
# Check the number of nodes and ensure all have degree 6
num_nodes = G.number_of_nodes()
degrees = dict(G.degree())
degree_check = all(degree == 6 for degree in degrees.values())
print(f"Number of nodes: {num_nodes}")
print(f"All nodes have degree 6: {degree_check}")
plt.title("Initial Hexagonal Lattice")
nx.draw_circular(G, node_size=20, with_labels=False)
</code></pre>
<p>The result does not have the degree distribution and clustering coefficient that I am hoping for. I feel like I am missing necessary information to construct the network. How do I modify my code to create an undirected hexagonal lattice graph that satisfies these bare-bones characteristics (Z=6; CC=.4)?</p>
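<p>One observation worth checking: degree Z = 6 with clustering 0.4 is the signature of a <em>triangular</em> lattice — in <code>nx.hexagonal_lattice_graph</code> every node has degree at most 3, which is why adding random edges never recovers the target statistics. A hedged sketch (the <code>(10, 20)</code> size is illustrative; <code>periodic=True</code> removes boundary effects so every node really has degree 6):</p>

```python
import networkx as nx

# Degree 6 with clustering 0.4 is the signature of a triangular lattice:
# each node's 6 neighbours form a ring containing 6 of the possible 15
# neighbour-pair edges, giving C = 6/15 = 0.4 exactly.
G = nx.triangular_lattice_graph(10, 20, periodic=True)

degrees = [d for _, d in G.degree()]
cc = nx.average_clustering(G)
```

<p>From there, the paper's rewiring step (replacing lattice edges with random ones at probability p) can be applied on top of this base graph.</p>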
|
<python><networkx><graph-theory>
|
2024-07-17 17:59:00
| 1
| 1,023
|
J.Q
|
78,760,916
| 1,284,735
|
Difference between integer division vs floating point division + int conversion
|
<p>Assuming <code>x ≥ 0</code> and <code>y > 0</code> are both 64-bit unsigned integers, would there be any correctness difference between the implementation below and <code>x // y</code>?</p>
<pre class="lang-py prettyprint-override"><code>def int_divide(x, y):
return int(x / float(y))
</code></pre>
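<p>For what it's worth, there is a correctness difference: <code>float</code> has a 53-bit significand, so converting a 64-bit integer can round it, and the rounded quotient can differ from exact floor division. A small self-contained counterexample:</p>

```python
# float64 has a 53-bit significand, so converting a 64-bit integer to
# float can round it, and the rounded quotient can then differ from
# exact floor division.
def int_divide(x, y):
    return int(x / float(y))

x, y = 10**17 + 1, 1          # 10**17 + 1 is not representable as a float
exact = x // y                # 100000000000000001
approx = int_divide(x, y)     # 100000000000000000 (low bit lost in float)
```

<p>Any <code>x</code> or <code>y</code> above 2**53 may silently lose low-order bits this way, so the two expressions only agree when the operands (and the true quotient) survive the float round-trip.</p>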
|
<python><floating-point>
|
2024-07-17 17:42:35
| 1
| 1,587
|
petabyte
|
78,760,822
| 11,357,695
|
pytest import errors - module not found
|
<p>I have this directory structure:</p>
<pre><code>app/
pyproject.toml
src/
app/
__init__.py
app.py
functions1.py
functions2.py
tests/
test_functions1.py
test_functions2.py
</code></pre>
<p>If I run this on the command line from the app directory (<code>app>python path/to/app/src/app/app.py -flags</code>) it works. However, when I run my tests with pytest (<code>app>pytest path/to/app/tests/test_functions1.py</code>), I get the following error (adapted to this dummy example) during collection:</p>
<pre><code>ImportError while importing test module 'path/to/app/tests/test_functions1.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests\test_functions1.py:9: in <module>
from app.functions1 import MyClass
src\app\functions1.py:13: in <module>
from functions2 import other_function
E ModuleNotFoundError: No module named 'functions2'
</code></pre>
<p>As per the <a href="https://docs.pytest.org/en/7.2.x/explanation/goodpractices.html#tests-outside-application-code" rel="nofollow noreferrer">pytest docs</a>, I have the following in my TOML:</p>
<pre><code>[tool.pytest.ini_options]
pythonpath = "src"
addopts = [
"--import-mode=importlib",
]
</code></pre>
<p>Can someone tell me the proper way to import functions in this context?</p>
<p>EDIT:
Thanks for the <a href="https://stackoverflow.com/questions/14132789/relative-imports-for-the-billionth-time">link suggestion</a>. I have made some changes and updated this question, but I believe my issues may be specific to pytest (the link was a more general discussion).</p>
<p>I believe the issue is that my import namespaces are incorrect when run with pytest, but I'm not sure how to reference them properly without breaking the app outside of testing. If I do:</p>
<p><em><strong>functions1.py</strong></em></p>
<pre><code>from app.functions2 import other_function
</code></pre>
<p>rather than:</p>
<p><strong>functions1.py</strong></p>
<pre><code>from functions2 import other_function
</code></pre>
<p>Then my tests work, but <code>app>python path/to/app/src/app/app.py -flags</code> fails with <code>ModuleNotFoundError: No module named 'app'</code>. Please note, <code>other_function</code> is used by functions in <code>functions2.py</code>, but not by the specific function from <code>functions2.py</code> I am testing.</p>
|
<python><import><module><package><pytest>
|
2024-07-17 17:14:44
| 1
| 756
|
Tim Kirkwood
|
78,760,634
| 4,687,531
|
Setting the return type of a custom namedtuple
|
<p>I have a <code>sample_config.json</code> file with a potentially arbitrary number of parameters. An example of <code>sample_config.json</code> is as follows:</p>
<pre class="lang-json prettyprint-override"><code>{
"tableA_ID": "tableA.csv",
"tableB_ID": "tableB.csv",
"time_period" : 10,
"start_date": "2024-07-01",
"optimization_metric": "L1",
"id": 353
}
</code></pre>
<p>In this case, there are 6 keys, but there could be arbitrarily many more. I have the following helper function to read this JSON in as a custom <code>namedtuple</code>, with the resulting type named <code>ConfigTuple</code>.</p>
<pre class="lang-py prettyprint-override"><code>import json
from collections import namedtuple
from pathlib import Path
def get_config_namedtuple(config_path: str | Path) -> ConfigTuple:
config_path = Path(config_path)
with open(f"{config_path}") as f:
config = json.load(f)
ConfigTuple = namedtuple("ConfigTuple", config)
return ConfigTuple(**config)
</code></pre>
<p>This can be used successfully as follows:</p>
<pre class="lang-py prettyprint-override"><code>config_path = Path("sample_config.json")
config = get_config_namedtuple(config_path=config_path)
# using the namedtuple
print(f"{config.tableB_ID=}")
</code></pre>
<p>This works, and prints <code>"tableB.csv"</code>. However, the return type of the function <code>-> ConfigTuple</code> keeps throwing LSP errors, namely <code>"ConfigTuple" is not defined</code>.</p>
<p>This makes sense, since we define it in the function body rather than globally. Is there a way to elegantly fix this issue, i.e., ensure that the return type <code>ConfigTuple</code> can be used for a JSON object with an arbitrary number of keys?</p>
<p>I'd like for the type to be correctly respected in the return type of the function signature, and not just ignore the type error.</p>
|
<python><json><namedtuple>
|
2024-07-17 16:21:31
| 0
| 1,131
|
user4687531
|
78,760,601
| 11,626,909
|
Iterating over an object in Python
|
<p>I am new to Python. I am trying to parse some 10-Ks from EDGAR using the <code>edgartools</code> and <code>sec-parsers</code> modules in Python. Here is my code -</p>
<pre><code>import pandas as pd
# pip install edgartools
from edgar import *
# Tell the SEC who you are
set_identity("Your Name myemail@outlook.com")
filings = get_filings( form = "10-K", filing_date="2023-12-15:2024-07-16",amendments=False )
filings_df = filings.to_pandas() # all filings info now in a data frame
filings[5].document.url # to get the url of a individual document
</code></pre>
<p>But when I run the following code -</p>
<pre><code>for x in filings:
filings[x].document.url
</code></pre>
<p>The error shows: <code>'Filing' object cannot be interpreted as an integer</code>.</p>
<p>I am not sure why this is happening. I want the result of the for loop above in a list so that I can later use it with <code>sec-parsers</code> like this -</p>
<pre><code>from sec_parsers import Filing, download_sec_filing, set_headers
def print_first_n_lines(text, n):
lines = text.split('\n')
for line in lines[:n]:
print(line)
html = download_sec_filing(filings[6070].document.url) # for example I need the url for filings index 6070
filing = Filing(html)
filing.html
filing.parse() # parses filing
filing.xml
item1c = filing.find_nodes_by_title('item 1c') [0]
item1c_text = filing.get_node_text(item1c)
print_first_n_lines(item1c_text,50)
</code></pre>
<p>My goal is to create a data frame for all filings with the text from Item 1C in the 10-K and add this Item 1C text to the <code>filings_df</code> data frame as an additional column. Note that to add the Item 1C text as a column, we can use the CIK and the year variable (from the filing_date variable).</p>
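<p>For what it's worth, the <code>'Filing' object cannot be interpreted as an integer</code> error is a plain Python iteration issue, independent of <code>edgartools</code>: <code>for x in filings</code> yields the <em>elements</em> (Filing objects), not integer positions, so <code>filings[x]</code> tries to index a sequence with a Filing. A library-free sketch:</p>

```python
# Library-free sketch of the bug: iterating a sequence yields its elements,
# not indices, so `filings[x]` fails when x is already an element.
class Doc:
    def __init__(self, url):
        self.url = url

filings = [Doc("https://example.test/1"), Doc("https://example.test/2")]

urls_wrong = None
try:
    urls_wrong = [filings[x].url for x in filings]   # TypeError: bad index
except TypeError:
    pass

urls = [f.url for f in filings]                      # iterate the elements
```

<p>Applied to the question, <code>[f.document.url for f in filings]</code> (or <code>range(len(filings))</code> if positions are genuinely needed) should give the list of URLs — assuming each iterated Filing exposes <code>.document.url</code> the way the indexed access does.</p>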
<p>Thanks</p>
|
<python><edgar>
|
2024-07-17 16:15:05
| 3
| 401
|
Sharif
|
78,760,584
| 1,371,481
|
Polars match element counts in List columns before exploding
|
<p>I have a dataframe with multiple columns containing list items.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
"a": [[1, 2], [3], [4, 5], [1]],
"b": [[4, 5, 7], [6], [4, 5], [3, 2]]
})
</code></pre>
<pre><code>shape: (4, 2)
┌───────────┬───────────┐
│ a ┆ b │
│ --- ┆ --- │
│ list[i64] ┆ list[i64] │
╞═══════════╪═══════════╡
│ [1, 2] ┆ [4, 5, 7] │
│ [3] ┆ [6] │
│ [4, 5] ┆ [4, 5] │
│ [1] ┆ [3, 2] │
└───────────┴───────────┘
</code></pre>
<p>I eventually want to explode the list columns into rows. This will only work if all lists have the same length (per row).</p>
<pre class="lang-py prettyprint-override"><code>df.explode("a", "b")
# ShapeError: exploded columns must have matching element counts
</code></pre>
<p>For each row, to match the number of elements in the lists, I would like to insert dummy items into the lists.</p>
<pre class="lang-py prettyprint-override"><code>def generate_dummy(c1, c2):
return pl.lit([""] * (pl.col(c1).cast(pl.Int32) - pl.col(c2).cast(pl.Int32)), dtype=pl.List(pl.String))
# Collect the list lengths in each column.
df = df.with_columns(alens=pl.col("a").list.len(), blens=pl.col("b").list.len())
### ERROR STEP ###
# Add dummy element [""] where the length is shorter.
df = df.with_columns(
pl.when(pl.col("alens") > pl.col("blens"))
.then(pl.col("b").list.concat(generate_dummy("alens", "blens")))
.otherwise(pl.col("a").list.concat(generate_dummy("blens", "alens")))
)
</code></pre>
<p>But I am stuck when computing the number of dummy elements to be added.</p>
<p>The error I get:</p>
<pre class="lang-py prettyprint-override"><code># TypeError: cannot create expression literal for value of type Expr: <Expr ['[(Series[literal]) * ([(col("a…'] at 0x15007DCF7370>
#
# Hint: Pass `allow_object=True` to accept any value and create a literal of type Object.
</code></pre>
<p>I tried with kwargs <code>allow_object=True</code> and end up with the error</p>
<pre class="lang-py prettyprint-override"><code># ComputeError: cannot cast 'Object' type
</code></pre>
<p>How can I match the element counts of the list columns in a dataframe?</p>
|
<python><dataframe><python-polars>
|
2024-07-17 16:11:11
| 2
| 1,254
|
DOOM
|
78,760,560
| 447,426
|
How to read / restore a checkpointed Dataframe - across batches
|
<p>I need to "checkpoint" certain information during my batch processing with pyspark that are needed in the next batches.</p>
<p>For this use case, DataFrame.checkpoint seems to fit. While I found many places that explain how to create the one, I did not find any how to restore or read a checkpoint.</p>
<p>To test this, I created a simple test class with two tests. The first reads a CSV and computes a sum. The second one should retrieve that sum and continue summing:</p>
<pre><code>import pytest
from pyspark.sql import functions as f
class TestCheckpoint:
@pytest.fixture(autouse=True)
def init_test(self, spark_unit_test_fixture, data_dir, tmp_path):
self.spark = spark_unit_test_fixture
self.dir = data_dir("")
self.checkpoint_dir = tmp_path
def test_first(self):
df = (self.spark.read.format("csv")
.option("pathGlobFilter", "numbers.csv")
.load(self.dir))
sum = df.agg(f.sum("_c1").alias("sum"))
sum.checkpoint()
assert 1 == 1
def test_second(self):
df = (self.spark.read.format("csv")
.option("pathGlobFilter", "numbers2.csv")
.load(self.dir))
sum = # how to get back the sum?
</code></pre>
<p>Creating the checkpoint in first test works fine (set tmp_path as checkpoint dir) and i see a folder created with a file.</p>
<p>But how do I read it?</p>
<p>And how do you handle multiple checkpoints? For example, one checkpoint on the sum and another for the average?</p>
<p>Are there better approaches to storing state across batches?</p>
<p>For sake of completeness, the CSV looks like this:</p>
<pre><code>1719228973,1
1719228974,2
</code></pre>
<p>And this is only a minimal example to get it running - my real scenario is more complex.</p>
|
<python><pyspark>
|
2024-07-17 16:05:57
| 1
| 13,125
|
dermoritz
|
78,760,550
| 16,869,946
|
Permutation summation in Pandas dataframe growing super exponentially
|
<p>I have a pandas dataframe that looks like</p>
<pre><code>import pandas as pd
data = {
"Race_ID": [2,2,2,2,2,5,5,5,5,5,5],
"Student_ID": [1,2,3,4,5,9,10,2,3,6,5],
"theta": [8,9,2,12,4,5,30,3,2,1,50]
}
df = pd.DataFrame(data)
</code></pre>
<p>And I would like to create a new column <code>df['feature']</code> by the following method: within each <code>Race_ID</code>, suppose the <code>Student_ID</code> is equal to i; then we define the feature to be</p>
<p><a href="https://i.sstatic.net/f5gWvDA6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5gWvDA6.png" alt="enter image description here" /></a></p>
<pre><code>def f(thetak, thetaj, thetai, *theta):
prod = 1;
for t in theta:
prod = prod * t;
return ((thetai + thetaj) / (thetai + thetaj + thetai * thetak)) * prod
</code></pre>
<p>where k, j, l are the <code>Student_ID</code>s within the same <code>Race_ID</code> such that k =/= i, j =/= i,k, l =/= i,j,k, and theta_i is the <code>theta</code> with <code>Student_ID</code> equal to i. So, for example, for <code>Race_ID</code> = 2, <code>Student_ID</code> = 1, the feature equals</p>
<p>f(2,3,1,4,5)+f(2,3,1,5,4)+f(2,4,1,3,5)+f(2,4,1,5,3)+f(2,5,1,3,4)+f(2,5,1,4,3)+f(3,2,1,4,5)+f(3,2,1,5,4)+f(3,4,1,2,5)+f(3,4,1,5,2)+f(3,5,1,2,4)+f(3,5,1,4,2)+f(4,2,1,3,5)+f(4,2,1,5,3)+f(4,3,1,2,5)+f(4,3,1,5,2)+f(4,5,1,2,3)+f(4,5,1,3,2)+f(5,2,1,3,4)+f(5,2,1,4,3)+f(5,3,1,2,4)+f(5,3,1,4,2)+f(5,4,1,2,3)+f(5,4,1,3,2)</p>
<p>which is equal to 299.1960138012742.</p>
<p>But as one quickly realises, the number of terms in the sum grows super exponentially with the number of students in a race: if there are n students in a race, then there are (n-1)! terms in the sum.</p>
<p>Fortunately, due to the symmetry property of f, we can reduce the number of terms to a mere (n-1)(n-2) terms by noting the following:</p>
<p>Let i,j,k be given and 1,2,3 (for example sake) be different from i,j,k (i.e. 1,2,3 is in *arg). Then f(k,j,i,1,2,3) = f(k,j,i,1,3,2) = f(k,j,i,2,1,3) = f(k,j,i,2,3,1) = f(k,j,i,3,1,2) = f(k,j,i,3,2,1). Hence we can reduce the number of terms if we just compute any one of the terms and then multiply it by (n-3)!</p>
<p>So for example, for <code>Race_ID</code> =5, <code>Student_ID</code> =9, there would have been 5!=120 terms to sum, but using the above symmetry property, we only have to sum 5x4 = 20 terms (5 choices for k, 4 choices for j, and 1 (non-unique) choice for the l's), namely</p>
<p>f(2,3,9,5,6,10)+f(2,5,9,3,6,10)+f(2,6,9,3,5,10)+f(2,10,9,3,5,6)+f(3,2,9,5,6,10)+f(3,5,9,3,6,10)+f(3,6,9,2,5,10)+f(3,10,9,2,5,6)+f(5,2,9,3,6,10)+f(5,3,9,2,6,10)+f(5,6,9,2,3,10)+f(5,10,9,2,3,6)+f(6,2,9,3,5,10)+f(6,3,9,2,5,10)+f(6,5,9,2,3,10)+f(6,10,9,2,3,5)+f(10,2,9,3,5,6)+f(10,3,9,2,5,6)+f(10,5,9,2,3,6)+f(10,6,9,2,3,5)</p>
<p>and the feature for student 9 in race 5 will be equal to the above sum times 3! = 53588.197759</p>
<p>So my question is: how do I write the sum for the above dataframe? I have computed the features by hand for checking, and the desired outcome looks like:</p>
<pre><code>import pandas as pd
data = {
"Race_ID": [2,2,2,2,2,5,5,5,5,5,5],
"Student_ID": [1,2,3,4,5,9,10,2,3,6,5],
"theta": [8,9,2,12,4,5,30,3,2,1,50],
"feature": [299.1960138012742, 268.93506341257876, 634.7909309816431, 204.18901708653254, 483.7234700875771, 53588.197759, 9395.539167178009, 78005.26224935807, 92907.8753942894, 118315.38359654899, 5600.243276203378]
}
df = pd.DataFrame(data)
</code></pre>
<p>Thank you so much.</p>
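<p>The reduced (n-1)(n-2)-term sum described above can be sketched directly with <code>itertools.permutations</code> over ordered pairs (k, j) of the other students, scaling by (n-3)! at the end (function and variable names here are illustrative):</p>

```python
from itertools import permutations
from math import factorial, prod

def f(thetak, thetaj, thetai, *theta):
    # same f as in the question
    return (thetai + thetaj) / (thetai + thetaj + thetai * thetak) * prod(theta)

def feature(thetas, i):
    """Feature for the student at index i in one race, via the reduced sum."""
    ti = thetas[i]
    others = [t for m, t in enumerate(thetas) if m != i]
    total = 0.0
    for k, j in permutations(range(len(others)), 2):  # ordered pairs, k != j
        rest = [t for m, t in enumerate(others) if m not in (k, j)]
        total += f(others[k], others[j], ti, *rest)
    return total * factorial(len(thetas) - 3)

print(feature([8, 9, 2, 12, 4], 0))  # Race_ID 2, Student_ID 1 -> 299.1960138012742
```

<p>Applied per group (e.g. via <code>df.groupby("Race_ID")</code>), this reproduces the hand-computed values above.</p>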
|
<python><pandas><dataframe><algorithm><group-by>
|
2024-07-17 16:04:20
| 1
| 592
|
Ishigami
|
78,760,340
| 17,800,932
|
QML layout and padding working in Qt Design Studio but not Qt Creator with Python
|
<p>I have the following QML code:</p>
<pre><code>import QtQuick
import QtQuick.Window
import QtQuick.Controls
import QtQuick.Layouts
Window {
visible: true
width: 300
height: 300
title: "Padding test"
Frame {
anchors.centerIn: parent
padding: 20
ColumnLayout {
spacing: 10
Label {
text: "Section"
font.pixelSize: 20
Layout.alignment: Qt.AlignHCenter
}
Repeater {
model: 4
delegate: Rectangle {
Layout.preferredWidth: 200
Layout.preferredHeight: 20
border.color: "black"
color: "lightBlue"
}
}
}
}
}
</code></pre>
<h2>In Qt Design Studio</h2>
<p>In Qt Design Studio, this code works just as I would expect. It produces this:</p>
<p><a href="https://i.sstatic.net/wiIGIJpY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wiIGIJpY.png" alt="enter image description here" /></a></p>
<p>I am using Qt Design Studio 4.5.1, which according to the document <a href="https://doc.qt.io/qtdesignstudio/studio-finding-the-qt-runtime-version.html" rel="nofollow noreferrer">Finding the Qt Runtime Version</a> means that it is using Qt 6.6.2.</p>
<h2>In Qt Creator</h2>
<p>In Qt Creator, the exact same code produces this:</p>
<p><a href="https://i.sstatic.net/0vQea2CY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0vQea2CY.png" alt="enter image description here" /></a></p>
<p>which is not what I would expect. I am using Qt Creator 13.0.2. The application is in Python, and not C++, and so I am using PySide6, which appears to be using Qt 6.7.2.</p>
<p>The Python code that launches this is in Python 3.12.1 and is:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from pathlib import Path
from PySide6.QtGui import QGuiApplication
from PySide6.QtQml import QQmlApplicationEngine
if __name__ == "__main__":
app = QGuiApplication(sys.argv)
engine = QQmlApplicationEngine()
qml_file = Path(__file__).resolve().parent / "main.qml"
engine.load(qml_file)
if not engine.rootObjects():
sys.exit(-1)
sys.exit(app.exec())
</code></pre>
<h2>Questions</h2>
<ol>
<li>Why is there a difference between these two methods?</li>
<li>Is there something missing in the Python setup that is not loading some component of QML?</li>
</ol>
<p>There's not a way that I know of to make Qt Design Studio use Qt 6.7, so I have no way of knowing if it's a versioning problem. However, that would be very disturbing indeed. Hopefully there is something simple here. My only guess is that this is a Qt bug or there is some "QML component" I can load into the engine from Python.</p>
<p>Thank you for any help!</p>
|
<python><qt><qml><pyside><qtquick2>
|
2024-07-17 15:24:16
| 0
| 908
|
bmitc
|
78,760,314
| 9,421,213
|
Python Locust using raw socket requests
|
<p>I am using the Locust library to load test a server that listens for TCP/IP socket requests and expects custom protobuf messages. I have a simulator sending messages to the server, and with Locust I am able to create multiple instances and send requests simultaneously. However, the statistics of my requests are not showing up on my dashboard and <code>@events.request.add_listener</code> also doesn't work, although the users are spawning and making requests, which my server receives. What am I missing? How do I get the statistics on my dashboard on localhost:8089 if I am using a raw socket? I have listed the locustfile and my client class below.</p>
<p><strong>locustfile.py</strong></p>
<pre><code>from locust import User, task
from client.protobuf import ProtobufMsg
from helpers.fleetsync_excel import ConfigFile, ExcelData
from helpers.fleetsync_general import GpsCollection, Machine
from helpers.gps_generator import Field, Route
from locust_fleetsync_wrapper import FleetSyncLocustClient
class FleetSyncUser(User):
"""
A minimal Locust user class that provides an FleetSyncClient to its subclasses
"""
abstract = True # dont instantiate this as an actual user when running Locust
def __init__(self, environment):
super().__init__(environment)
self.client = FleetSyncLocustClient(self.host, request_event=environment.events.request)
self.excel_data_helper = ExcelData()
self.config_file = ConfigFile()
self.machine = self.start_machine("farmcentredummy")
def start_machine(self, user_name) -> Machine:
"""Set the terminal_id and device_id as configured on the selected user sheet.
Reading these from acc server or test server columns using machine name and account id"""
row = self.config_file.get_randow_row_from_sheet(user_name)
machine_type = row[1]
terminal_id = row[4]
device_id = row[5]
if machine_type is None or terminal_id is None or device_id is None:
raise ValueError(f"terminal ID and/or device ID missing for machine '{machine_type}', '{terminal_id}', '{device_id}'"
f"in {self.config_file.NAME} sheet:{user_name}")
print(f"{machine_type} with terminal_id:{terminal_id} and device_id:{device_id} started")
return Machine(machine_type, terminal_id, device_id)
class Terminal(FleetSyncUser):
host = "<my-url.com>"
def on_start(self):
self.client.connect_terminal()
@task
def simulate_route(self):
"""Simulate gps route configured in ts_config sheet 'route'"""
gpsroute_config = self.config_file.get_row_from_sheet("route", "field1")
route = Route(self.machine, gpsroute_config)
gps_coordinates = route.generate_gps_coordinates()
self.send_gps(gps_coordinates)
self.send_sovs(route.speed_handler.sovs)
@task
def simulate_field(self):
"""Simulate gps and speed on field configured in ts_config sheet 'field'"""
gpsfield_config = self.config_file.get_row_from_sheet("field", "datapoint")
field = Field(self.machine, gpsfield_config)
gps_coordinates = field.generate_gps_coordinates_and_speed_sov()
self.send_gps(gps_coordinates)
self.send_sovs(field.speed_handler.sovs)
@task
def simulate_gps_data(self):
"""Simulate gps data configured in ts_config sheet 'gps'"""
gps_config = self.config_file.get_row_from_sheet("gps", "real-tractor")
gps_coordinates = self.excel_data_helper.read_gps_coordinates(gps_config[1], gps_config[2])
self.send_gps(gps_coordinates)
def send_gps(self, gps_coordinates: GpsCollection):
pb_msg = ProtobufMsg(self.machine.terminal_id, self.machine.device_id)
for gps in gps_coordinates.records:
pb_msg.add_gps_coordinate(gps)
self.send_message(pb_msg)
def send_sovs(self, sovs):
pb_msg = ProtobufMsg(self.machine.terminal_id, self.machine.device_id)
for sov in sovs.records:
pb_msg.add_sov_data(sov)
self.send_message(pb_msg)
def send_message(self, pb_msg: ProtobufMsg):
# when datapoints reaches limit 250 the message must be sent (data sent via several messages)
if pb_msg.datapoints >= ProtobufMsg.datapoints_limit:
response = self.client.send_protobuf_message(pb_msg.msg)
print(f"Terminal: {self.machine.terminal_id} received: {response}")
pb_msg.datapoints = 0
pb_msg.update_request_msg()
</code></pre>
<p><strong>locust_fleetsync_wrapper.py</strong></p>
<pre><code>import time
from client.fleetsync_client import FleetSyncClient
class FleetSyncLocustClient(FleetSyncClient):
def __init__(self, host, request_event):
super().__init__(host)
self._request_event = request_event
def __getattr__(self, name):
print("running __getattr__")
func = FleetSyncClient.__dict__[name]
def wrapper(*args, **kwargs):
request_meta = {
"request_type": "socket",
"name": name,
"start_time": time.time(),
"response_length": 0, # calculating this for an xmlrpc.client response would be too hard
"response": None,
"context": {}, # see HttpUser if you actually want to implement contexts
"exception": None,
}
start_perf_counter = time.perf_counter()
try:
request_meta["response"] = func(*args, **kwargs)
except Exception as e:
request_meta["exception"] = e
request_meta["response_time"] = (time.perf_counter() - start_perf_counter) * 1000
self._request_event.fire(**request_meta) # This is what makes the request actually get logged in Locust
return request_meta["response"]
return wrapper
</code></pre>
<p><strong>fleetsync_client</strong></p>
<pre><code>import ssl
import struct
import sys
import time
import certifi
from client.kvgmprotobuf import PbUnion_pb2
import socket
class FleetSyncClient():
CERTIFICATE = '<my_certificate_file>'
def __init__(self, host):
self.hostname = host
self.ssock = None
def connect_terminal(self):
try:
# Set up the SSL context
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
context.load_cert_chain(certfile=self.CERTIFICATE)
# Open the SSL socket to the server
with socket.create_connection((self.hostname, 443)) as sock:
try:
self.ssock = context.wrap_socket(sock, server_hostname=self.hostname)
print(f"Connected to FarmCentre server: {self.hostname}")
return
except ssl.CertificateError as e:
print("Certificate validation error:", e)
print("Please check the server certificate")
except ssl.SSLError as e:
print(f"Connection with server {self.hostname} failed: {e}")
except Exception as e:
print("An error occurred:", e)
except Exception as err:
print(f"Connection with server {self.hostname} failed unexpected {err=}, {type(err)=}")
print("Failed to connect to server")
sys.exit(1)
def disconnect_terminal(self):
if self.ssock is not None:
try:
self.ssock.close()
except Exception as err:
print(f"Close connection failed{err=}, {type(err)=}")
sys.exit(1)
def abort(self, text):
print(f"###Error: {text}")
self.disconnect_terminal()
sys.exit(1)
def _send_message(self, data):
try:
# add data length 4 bytes MSB first
data = struct.pack('<I', len(data)) + data
# add two currently not used bytes
data = b'\x00\x00' + data
# add total length 4 bytes MSB first
data = struct.pack('<I', len(data)) + data
# print(binascii.hexlify(data))
####print(f"Sending {len(data)} bytes")
self.ssock.sendall(data)
except socket.error as e:
self.abort(f"Error sending data: {e}")
try:
# receive the response
self.ssock.settimeout(30)
header_bytes = self.ssock.recv(10)
# kvgmprotobuf message length is the last 4 bytes of the header
protobuf_msg_len = struct.unpack('<I', header_bytes[-4:])[0]
protobuf_response = b""
while len(protobuf_response) < protobuf_msg_len:
chunk = self.ssock.recv(1024)
protobuf_response += chunk
# print(f"Protobuf message: {binascii.hexlify(protobuf_response)}")
####print(f"Server response: {len(protobuf_response)} protobuf bytes")
return protobuf_response
except socket.error as e:
self.abort(f"Error receiving response: {e}")
# send kvgmprotobuf message to the server and return the server response kvgmprotobuf message
def send_protobuf_message(self, message):
####Logger.debug(f"sent message:\n{message}")
response_data = self._send_message(message.SerializeToString())
# parse the serialized response bytes to a kvgmprotobuf message
response_msg = PbUnion_pb2.PbUnion()
response_msg.ParseFromString(response_data)
####print(f"{response_msg}")
return response_msg
</code></pre>
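<p>One detail worth noting about <code>locust_fleetsync_wrapper.py</code>: <code>__getattr__</code> is only invoked for attributes that are <em>not</em> found through normal lookup, so with <code>FleetSyncLocustClient</code> inheriting from <code>FleetSyncClient</code>, the inherited methods never pass through the wrapper and no request event ever fires. A minimal, Locust-free sketch of the same timing wrapper built on composition instead of inheritance (<code>EventRecorder</code> is a hypothetical stand-in for Locust's <code>environment.events.request</code>):</p>

```python
import time

class EventRecorder:
    """Hypothetical stand-in for Locust's environment.events.request."""
    def __init__(self):
        self.fired = []

    def fire(self, **kwargs):
        self.fired.append(kwargs)

class TimedClient:
    """Times every call on a wrapped client and fires a request event.

    The raw client is held by composition, not inheritance: __getattr__
    only runs for attributes *not* found by normal lookup, so inherited
    methods would bypass it entirely.
    """
    def __init__(self, wrapped, request_event):
        self._wrapped = wrapped
        self._request_event = request_event

    def __getattr__(self, name):
        func = getattr(self._wrapped, name)

        def wrapper(*args, **kwargs):
            meta = {"request_type": "socket", "name": name,
                    "start_time": time.time(), "response_length": 0,
                    "response": None, "context": {}, "exception": None}
            start = time.perf_counter()
            try:
                meta["response"] = func(*args, **kwargs)
            except Exception as e:
                meta["exception"] = e
            meta["response_time"] = (time.perf_counter() - start) * 1000
            self._request_event.fire(**meta)  # what gets a call logged
            return meta["response"]

        return wrapper

class DummyClient:
    def ping(self):
        return "pong"

rec = EventRecorder()
client = TimedClient(DummyClient(), rec)
result = client.ping()
print(result, rec.fired[0]["name"])  # pong ping
```

<p>With the real socket client passed in place of <code>DummyClient</code>, every call is timed and reported, so the entries should show up in the Locust statistics.</p>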
|
<python><python-3.x><locust>
|
2024-07-17 15:16:58
| 1
| 681
|
Jiren
|
78,760,313
| 11,684,473
|
Assign DataFrame to particular cell in DataFrame
|
<p>I have a case where I want to nest a DataFrame into another one. The problem is, I want to handle it one by one due to the way the data is populated.</p>
<p>Some sources of documentation recommend using the <code>.at</code> accessor to set the value of a particular cell. While this works well with scalar types, it causes a problem because <code>.at</code> has special logic for handling assignments of Series and DataFrames.</p>
<pre class="lang-py prettyprint-override"><code>df.at["foo","newCol"] = DataFrame(...)
</code></pre>
<p>Is there any way of bypassing this logic?</p>
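<p>For what it's worth, one way to sidestep that logic is to build the column as a plain object ndarray first: NumPy has no special-case handling for DataFrame elements, so the nested frame is stored as an ordinary cell value (the names below are illustrative):</p>

```python
import numpy as np
import pandas as pd

outer = pd.DataFrame({"a": [1, 2]}, index=["foo", "bar"])
inner = pd.DataFrame({"x": [3]})

# An object array from np.empty is filled with None; place the nested
# DataFrame at the right position, then attach the array as a column.
col = np.empty(len(outer), dtype=object)
col[outer.index.get_loc("foo")] = inner
outer["newCol"] = col

print(type(outer.at["foo", "newCol"]))  # <class 'pandas.core.frame.DataFrame'>
```

<p>Reading the cell back with <code>.at</code> works normally; only the assignment path special-cases DataFrame values.</p>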
|
<python><pandas>
|
2024-07-17 15:16:22
| 3
| 1,565
|
majkrzak
|
78,760,141
| 11,815,097
|
Python: List containers in the Azure Data Lake Storage
|
<p>I'm trying to list the containers inside a specific directory within an Azure Data Lake Storage account, but there doesn't seem to be any function that can handle this:</p>
<p>Here is my hierarchy:</p>
<pre><code>assets
root
container1
container2
container3
container4
container5
</code></pre>
<p>I wrote the following function that gets the paths, and it shows all of the containers, even those inside containerX. What I want to achieve is to list just the names of the containers in assets/root, without descending deeper than containerX.</p>
<pre><code>import os
from azure.storage.filedatalake import DataLakeServiceClient
connection_string = os.getenv("AZURE_STORAGE_CONNECTION_STRING")
data_lake_service_client = DataLakeServiceClient.from_connection_string(conn_str=connection_string)
filesystem_client = data_lake_service_client.get_file_system_client(file_system="assets")
paths = filesystem_client.get_paths(path="root")
for path in paths:
if path.is_directory:
print("\t" + path.name)
</code></pre>
<p>It's quite strange that there are no functions like</p>
<blockquote>
<p>get_containers(path="")
or
list_containers(path="")</p>
</blockquote>
<p>to just list them.</p>
|
<python><azure><storage><azure-data-lake><azure-storage-account>
|
2024-07-17 14:40:54
| 1
| 315
|
Yasin Amini
|
78,759,990
| 10,749,925
|
How do I install these packages on Ubuntu?
|
<p>I'm trying to install 2 packages that were no issue in an Anaconda virtual environment. But on an AWS EC2 Ubuntu instance (v24.04 LTS, GNU/Linux 6.8.0-1010-aws x86_64), it's not working.</p>
<p>Typically I would just run <code>pip3 install langchain</code> and <code>pip3 install langchain-community</code>, but I get the error that this is an 'externally managed environment'.</p>
<p>I have installed most packages on this machine like this:</p>
<pre><code>sudo apt install python3-PACKAGEXYZ
</code></pre>
<p>But even that comes with error:</p>
<pre><code>Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package python3-langchain
</code></pre>
<p>Same for installing <code>sudo apt install python3-langchain-community</code></p>
<p>Can someone please tell me the correct commands to install <code>langchain</code> and <code>langchain-community</code> on Ubuntu?</p>
|
<python><linux><ubuntu><amazon-ec2><langchain>
|
2024-07-17 14:06:55
| 2
| 463
|
chai86
|
78,759,930
| 893,254
|
Is it possible to implement RAII with Python in the context of ensuring that the close() function is called on opened files?
|
<p>I asked a question earlier today, <a href="https://stackoverflow.com/questions/78758529/is-it-important-to-call-close-on-an-file-opened-with-open-if-flush-i"><em><strong>Is it important to call <code>close()</code> on a file opened with <code>open()</code> if <code>flush()</code> is called after each write() operation?</strong></em></a></p>
<p>I asked this question while thinking about data loss, something which <code>flush()</code> would account for. However, there are other reasons why properly calling <code>close()</code> is a good idea.</p>
<ul>
<li>Calling <code>close()</code> allows a resource (file handle) to be freed, rather than relying on the OS to do it.</li>
<li>It is also good design and good practice to write code which puts things back as it found them. If nothing else, it is what future readers of code expect to see. So better to not confuse them.</li>
</ul>
<p>My question is: if we want to manage a file using an object (a class), is there a way to implement RAII-like behaviour?</p>
<p>In a language without GC, such as C++, this would be done with a combination of class destructor functions and scope. Destructors are automatically called by the compiler at the end of scope.</p>
<p>Python doesn't have such a mechanism. It only has one way of expressing RAII, and that is with the <code>with</code> syntax (Context Managers):</p>
<pre><code>with open(example_filename, 'w') as ofile:
ofile.write()
</code></pre>
<p>Is there a way to write a class which manages a resource in a RAII like manner in Python?</p>
<p>One possible way to do it might be to pass a context manager to a class <code>__init__</code>.</p>
<pre><code>class ResourceManager():
def __init__(self, file):
self._file = file
    def write(self):
self._file.write(f'hello world\n')
self._file.flush()
with open(example_filename, 'w') as ofile:
manager = ResourceManager(ofile)
manager.write()
manager.write()
manager.write()
</code></pre>
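<p>A sketch of what an RAII-style class could look like: the class acquires the file itself and implements the context-manager protocol, so the <code>with</code> block plays the role that scope plays in C++ (class and file names here are hypothetical):</p>

```python
import os
import tempfile

class ManagedFile:
    """Acquires the file on construction, guarantees release in __exit__."""
    def __init__(self, path, mode='w'):
        self._file = open(path, mode)

    def write(self, text):
        self._file.write(text)
        self._file.flush()

    def close(self):
        if not self._file.closed:
            self._file.close()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()       # runs even if the block raised
        return False       # do not swallow exceptions

path = os.path.join(tempfile.mkdtemp(), 'raii_demo.txt')
with ManagedFile(path) as mf:
    mf.write('hello world\n')

with open(path) as f:
    contents = f.read()
print(contents, end='')  # hello world
```

<p>Unlike a C++ destructor, <code>__del__</code> runs at an unspecified time (if at all), so <code>__exit__</code> (or something like <code>weakref.finalize</code>) is the only reliable release point.</p>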
|
<python><contextmanager><raii>
|
2024-07-17 13:54:48
| 1
| 18,579
|
user2138149
|
78,759,927
| 16,436,095
|
Raising NotImplementedError in abstract methods
|
<p>While studying the source code of the <code>numbers.py</code> module from the built-in library, I came across two variants of <code>@abstractmethod</code> (with and without raising <code>NotImplementedError</code>). Example:</p>
<pre class="lang-py prettyprint-override"><code>class Complex(ABC):
@abstractmethod
def __complex__(self):
"""Return a builtin complex instance. Called for complex(self)."""
@abstractmethod
def __neg__(self):
"""-self"""
raise NotImplementedError
</code></pre>
<p>Next, I created a child class:</p>
<pre class="lang-py prettyprint-override"><code>class MyComplex(Complex):
def __complex__(self):
return complex(1, 1)
def __neg__(self):
return complex(-1, -1)
</code></pre>
<p>After playing with this code for a bit, I found only one difference:</p>
<pre class="lang-bash prettyprint-override"><code>>>> c = MyComplex()
>>> Complex.__complex__(c)
>>> Complex.__neg__(c)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/maskalev/dev/foo/bar.py", line 15, in __neg__
raise NotImplementedError
NotImplementedError
</code></pre>
<p>When calling the <code>__complex__()</code> method, <code>None</code> is returned, and when calling <code>__neg__()</code>, an exception is raised (which, in fact, is described in the code).</p>
<p>So I have some questions:</p>
<ol>
<li>Is this the only difference?</li>
<li>Why is <code>NotImplementedError</code> raising not used in the first case, but used in the second? (Isn't it better to do it uniformly?)</li>
<li>What is the best practice? (I think it's better to raise an exception explicitly, but maybe I don't see the whole picture.)</li>
</ol>
|
<python><abc>
|
2024-07-17 13:53:58
| 0
| 370
|
maskalev
|
78,759,886
| 12,439,683
|
Add additional class to Interpreted Text Roles in Sphinx
|
<p>What I am trying to achieve is to (manually) insert an HTML class into certain elements using <a href="https://www.sphinx-doc.org/en/master/usage/domains/python.html#" rel="nofollow noreferrer">sphinx' python domain</a></p>
<p>For example I have this string:</p>
<pre class="lang-none prettyprint-override"><code>Lore Ipsum :py:mod:`dataclasses`
</code></pre>
<p>Which results in:</p>
<pre class="lang-html prettyprint-override"><code>Lore Ipsum <a class="reference external" href="...dataclasses.html#module-dataclasses">
<code class="xref py py-mod docutils literal notranslate">
<span class="pre">dataclass</span>
</code>
</a>
</code></pre>
<p>To either the <code>a</code> tag or <code>code.class</code> I would like to add an additional class. e.g. <code>"injected"</code>, to have the following result</p>
<pre class="lang-html prettyprint-override"><code> <code class="xref py py-mod docutils literal notranslate injected">
</code></pre>
<hr />
<p>Some research I did to find a solution</p>
<h4>"Inherit" role 💥</h4>
<pre class="lang-none prettyprint-override"><code>.. role : injected_mod(?py:mod?)
:class: injected
:injected_mod:`dataclasses`
</code></pre>
<p>Problem: I do not know what to put into the brackets, I think I cannot use domains there -> Not a valid role.</p>
<h4>Register a new role ❌</h4>
<p>Possible but, <strong>Problem</strong>: I want to keep the functionality from the <code>py</code> domain.</p>
<h4>Add role to <code>:py:</code> domain ❓</h4>
<pre class="lang-py prettyprint-override"><code># conf.py
def setup():
app.add_role_to_domain("py", "injected", PyXRefRole())
</code></pre>
<p><strong>What works</strong>: Adds a <code>"py-injected"</code> class that I could work with<br />
<strong>Problem?</strong>: The lookup feature and linking to <code>py:module</code> does not work, i.e. no <code>&lt;a class="reference external"&gt;</code> is added. I haven't been able to determine where in the sphinx module the lookup takes place and whether it is possible to extend the <code>PyXRefRole</code> to do both.</p>
<h4>Nested Parsing/Roles 😑(nearly)</h4>
<p>The question of <a href="https://stackoverflow.com/q/44829580/12439683">composing roles</a> is similar and provides a useful answer in the <a href="https://sphinx.silverrainz.me/comboroles/index.html" rel="nofollow noreferrer">comboroles</a> extension.</p>
<p>This is somewhat nice as I can combine it with a role directive to add the class.</p>
<pre class="lang-py prettyprint-override"><code>:inject:`:py:mod:\`dataclasses\``
</code></pre>
<p><strong>Problem</strong>: This adds an extra <code><span class=injected></code> around the resulting block from <code>py:mod</code>, instead of modifying the existing tags.</p>
<hr />
<p>I am not sure if nested parsing is somewhat overkill, but so far I have not found a solution to add an additional class.<br />
I think using <a href="https://sphinx.silverrainz.me/comboroles/index.html" rel="nofollow noreferrer">comboroles</a> looks the most promising currently to continue with, but I am not yet sure how to extend it or port its utility to a custom role_function that injects a class instead of nesting an additional tag.
I guess I need to access and modify the nodes in a custom function, but this is where I am stuck.</p>
<p>Notes:</p>
<ul>
<li>I know this is somewhat easy using MyST parsing, but currently I cannot parse this text globally via MyST.</li>
</ul>
|
<python><python-sphinx><restructuredtext><docutils>
|
2024-07-17 13:45:57
| 2
| 5,101
|
Daraan
|
78,759,751
| 1,867,328
|
Dropping rows of a dataframe where selected columns has na values
|
<p>I have the code below:</p>
<pre><code>df = pd.DataFrame(dict(
age=[5, 6, np.nan],
born=[pd.NaT, pd.Timestamp('1939-05-27'), pd.Timestamp('1940-04-25')],
name=['Alfred', 'Batman', np.nan],
toy=[np.nan, 'Batmobile', 'Joker']))
</code></pre>
<p>Now I want to drop those rows where the columns <code>name</code> OR <code>toy</code> have NaN/empty string values. I tried the code below:</p>
<pre><code>df[~df[['name', 'toy']].isna()]
</code></pre>
<p>I was expecting only the 2nd row to be returned. Could you please point out where I went wrong?</p>
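<p>For reference, a sketch of the usual approach: map empty strings to NaN first, then use <code>dropna(subset=...)</code>, which drops a row if <em>any</em> of the listed columns is missing:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(dict(
    age=[5, 6, np.nan],
    born=[pd.NaT, pd.Timestamp('1939-05-27'), pd.Timestamp('1940-04-25')],
    name=['Alfred', 'Batman', np.nan],
    toy=[np.nan, 'Batmobile', 'Joker']))

# Treat empty strings as missing too, then drop rows where either
# 'name' or 'toy' is NaN (how='any' is the default).
cleaned = df.replace('', np.nan).dropna(subset=['name', 'toy'])
print(cleaned)  # only the 'Batman' row survives
```

<p>The original expression fails because <code>~df[['name', 'toy']].isna()</code> is a boolean <em>DataFrame</em>, which masks individual cells rather than filtering rows.</p>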
|
<python><pandas>
|
2024-07-17 13:16:33
| 1
| 3,832
|
Bogaso
|
78,759,613
| 14,833,503
|
AttributeError: module 'keras.src.activations' has no attribute 'get'
|
<p>I am running the following error when I try to optimize a LSTM using keras tuner in Python: AttributeError: module 'keras.src.activations' has no attribute 'get'.</p>
<p>I am using the following versions: | Python version: 3.11.7 | Keras version: 3.4.1 | TensorFlow version: 2.16.2 | Keras Tuner version: 1.0.5</p>
<pre><code>#LOADING REQUIRED PACKAGES
import pandas as pd
import math
import keras
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import save_model
from tensorflow.keras.models import model_from_json
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dropout
from kerastuner.tuners import RandomSearch
from kerastuner.engine.hyperparameters import HyperParameters
#GENERATING SAMPLE DATA
# Creating x_train_data with 300 observations and 10 columns
x_train_data = pd.DataFrame(np.random.rand(300, 10))
# Creating y_train_data with 300 observations and 1 column
y_train_data = pd.DataFrame(np.random.rand(300, 1))
# Creating x_test_data with 10 observations and 10 columns
x_test_data = pd.DataFrame(np.random.rand(10, 10))
# Creating y_test_data with 10 observations and 1 column
y_test_data = pd.DataFrame(np.random.rand(10, 1))
#RESHAPING DATA
nrow_xtrain, ncol_xtrain = x_train_data.shape
x_train_data_lstm = x_train_data.values.reshape(1, nrow_xtrain, ncol_xtrain)
nrow_ytrain = y_train_data.shape[0]
y_train_data_lstm = y_train_data.values.reshape(1, nrow_ytrain, 1)
nrow_ytest = y_test_data.shape[0]
y_test_data_lstm = y_test_data.values.reshape(1, nrow_ytest, 1)
nrow_xtest, ncol_xtest = x_test_data.shape
x_test_data_lstm = x_test_data.values.reshape(1, nrow_xtest, ncol_xtest)
#BUILDING AND ESTIMATING MODEL
def build_model(hp):
model = Sequential()
model.add(LSTM(hp.Int('input_unit',min_value=1,max_value=512,step=32),return_sequences=True, input_shape=(x_train_data_lstm.shape[1],x_train_data_lstm.shape[2])))
for i in range(hp.Int('n_layers', 1, 4)):
model.add(LSTM(hp.Int(f'lstm_{i}_units',min_value=1,max_value=512,step=32),return_sequences=True))
model.add(LSTM(hp.Int('layer_2_neurons',min_value=1,max_value=512,step=32)))
model.add(Dropout(hp.Float('Dropout_rate',min_value=0,max_value=0.5,step=0.1)))
model.add(Dense(10, activation=hp.Choice('dense_activation',values=['relu', 'sigmoid',"linear"],default='relu')))
model.compile(loss='mean_squared_error', optimizer='adam',metrics = ['mse'])
return model
tuner= RandomSearch(
build_model,
objective='mse',
max_trials=2,
executions_per_trial=1
)
tuner.search(
x=x_train_data_lstm,
y=y_train_data_lstm,
epochs=20,
batch_size=128,
validation_data=(x_test_data_lstm,y_test_data_lstm),
)
</code></pre>
<p>Full Error Traceback:</p>
<pre><code>C:\Workspace\Python_Runtime\Envs\bbk\Lib\site-packages\keras\src\layers\rnn\rnn.py:204: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
super().__init__(**kwargs)
Traceback (most recent call last):
Cell In[82], line 12
tuner= RandomSearch(
File C:\Workspace\Python_Runtime\Envs\bbk\Lib\site-packages\keras_tuner\src\tuners\randomsearch.py:174 in __init__
super().__init__(oracle, hypermodel, **kwargs)
File C:\Workspace\Python_Runtime\Envs\bbk\Lib\site-packages\keras_tuner\src\engine\tuner.py:122 in __init__
super().__init__(
File C:\Workspace\Python_Runtime\Envs\bbk\Lib\site-packages\keras_tuner\src\engine\base_tuner.py:132 in __init__
self._populate_initial_space()
File C:\Workspace\Python_Runtime\Envs\bbk\Lib\site-packages\keras_tuner\src\engine\base_tuner.py:192 in _populate_initial_space
self._activate_all_conditions()
File C:\Workspace\Python_Runtime\Envs\bbk\Lib\site-packages\keras_tuner\src\engine\base_tuner.py:149 in _activate_all_conditions
self.hypermodel.build(hp)
Cell In[82], line 8 in build_model
model.add(Dense(10, activation=hp.Choice('dense_activation',values=['relu', 'sigmoid',"linear"],default='relu')))
File C:\Workspace\Python_Runtime\Envs\bbk\Lib\site-packages\keras\src\layers\core\dense.py:89 in __init__
self.activation = activations.get(activation)
AttributeError: module 'keras.src.activations' has no attribute 'get'
</code></pre>
<p>UPDATE</p>
<p>After a restart of the kernel, suddenly the code worked and the error did not pop up anymore. However, in order to understand the error, I will still leave the question online.</p>
|
<python><tensorflow><keras><deep-learning>
|
2024-07-17 12:48:24
| 0
| 405
|
Joe94
|
78,759,142
| 665,335
|
performance issue converting csv file to xlsx in python
|
<p>I use the code below to convert csv file to xlsx file.</p>
<p>For a csv file that is 61MB in size, contains 18 columns and 438,000 rows,
it took 3:30 mins; the xlsx file is 29MB in size.</p>
<p>For a csv file that is 480MB in size, contains 95 columns and 760,000 rows,
it took 30 mins; the xlsx file is 276MB in size.</p>
<pre><code>import glob
import csv
from xlsxwriter.workbook import Workbook
def create_excel_file_from_csv(csv_file_path, new_file_path):
"""Create an Excel file from a CSV file"""
for csvfile in glob.glob(csv_file_path):
workbook = Workbook(new_file_path, {'constant_memory': True,'strings_to_numbers':False})
workbook.use_zip64()
worksheet = workbook.add_worksheet()
with open(csvfile, 'rt', encoding='utf8') as f:
reader = csv.reader(f)
for r, row in enumerate(reader):
for c, col in enumerate(row):
worksheet.write(r, c, col)
workbook.close()
</code></pre>
<p>Does anyone know how to improve the performance?</p>
<p>I am happy to change other package or method if required.</p>
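<p>One small change that often helps: replace the per-cell <code>worksheet.write()</code> loop with one <code>worksheet.write_row()</code> call per row, which removes a Python-level method call per cell. A sketch (file names and the demo data are illustrative):</p>

```python
import csv
import os
import tempfile
from xlsxwriter.workbook import Workbook

def create_excel_file_from_csv(csv_path, xlsx_path):
    """Convert one CSV file to XLSX, streaming rows in constant memory."""
    workbook = Workbook(xlsx_path, {'constant_memory': True})
    workbook.use_zip64()
    worksheet = workbook.add_worksheet()
    with open(csv_path, 'rt', encoding='utf8', newline='') as f:
        for r, row in enumerate(csv.reader(f)):
            worksheet.write_row(r, 0, row)  # one call per row, not per cell
    workbook.close()

# tiny demo with hypothetical file names
tmp = tempfile.mkdtemp()
csv_path = os.path.join(tmp, 'demo.csv')
xlsx_path = os.path.join(tmp, 'demo.xlsx')
with open(csv_path, 'w', newline='') as f:
    csv.writer(f).writerows([['a', 'b'], ['1', '2']])
create_excel_file_from_csv(csv_path, xlsx_path)
print(os.path.getsize(xlsx_path) > 0)  # True
```

<p>The bulk of the time is still spent serializing the XML inside the XLSX, so gains from this are modest; much larger speedups usually require a different output format.</p>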
|
<python><xlsxwriter>
|
2024-07-17 11:01:35
| 1
| 8,097
|
Pingpong
|
78,759,111
| 18,769,241
|
How to install Python 3.8.7 using Pip (or easy_install) only in a virtualenv
|
<p>I have created a virtualenv and want to install Python v3.8.7 using either pip or easy_install, without affecting the Python version installed on the system, which is Python 2.7.</p>
<p>I tried using:</p>
<pre><code>pip install python3 --upgrade #in the virtual environment
</code></pre>
<p>But the error states that I need to upgrade Python 2.7, which is the main Python used on my machine (not the virtualenv):</p>
<pre><code> DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality.
ERROR: Could not find a version that satisfies the requirement python3 (from versions: none)
ERROR: No matching distribution found for python3
python -m pip3 install --upgrade pip
</code></pre>
<p>I tried to upgrade pip only in the virtualenv, but got the same deprecation warning:</p>
<pre><code>DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality.
Requirement already up-to-date: pip in ./lib/python2.7/site-packages (20.3.4)
</code></pre>
<p>How can I install Python 3.8.7 in a virtual environment using either pip or easy_install?</p>
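<p>(A hedged sketch of the usual approach: pip installs packages, not interpreters, so a virtualenv cannot pull in Python 3.8.7 by itself. Install CPython 3.8 first, via the OS package manager or pyenv, then create the environment from that interpreter. The <code>python3.8</code> name below is an assumption about the system; this sketch uses the current interpreter so it runs anywhere.)</p>

```python
# A virtual environment reuses an existing Python binary. After installing
# CPython 3.8, you would substitute "python3.8" for sys.executable below.
# (On Windows the env's interpreter lives under Scripts\python.exe.)
import os
import subprocess
import sys
import tempfile

env_dir = os.path.join(tempfile.mkdtemp(), "py38env")
subprocess.run([sys.executable, "-m", "venv", env_dir], check=True)

# the env's interpreter is a copy/symlink of the one it was created from
env_python = os.path.join(env_dir, "bin", "python")
subprocess.run([env_python, "--version"], check=True)
```

<p>With the <code>virtualenv</code> package the equivalent is <code>virtualenv -p /path/to/python3.8 py38env</code>.</p>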
|
<python><python-3.x><pip>
|
2024-07-17 10:54:45
| 1
| 571
|
Sam
|
78,759,049
| 515,028
|
Using Sparse Categorical CrossEntropy, the loss becomes negative (tensorflow/keras)
|
<p>I am doing the TensorFlow Transformer tutorial (<a href="https://www.tensorflow.org/text/tutorials/transformer" rel="nofollow noreferrer">https://www.tensorflow.org/text/tutorials/transformer</a>) but with my own data. My data is not related to text, but it is sequences of tokens anyway, with a start token and an end token. The tokens go from 0 to 30 (the start token is 31, the end token is 32). The length of the sequence is 64 (66 in total with the start and end tokens). A sequence looks like:</p>
<pre><code>tf.Tensor(
[31 10 10 10 10 18 10 19 27 22 5 19 10 10 10 10 10 19 10 19 10 1 1 20
22 15 12 26 14 22 17 3 10 14 22 9 25 25 20 7 19 28 4 7 15 14 13 25
21 15 15 17 14 18 14 14 14 27 14 19 25 19 5 3 17 32], shape=(66,), dtype=int32)
</code></pre>
<p>My code is very similar to the one from the tutorial, with very small changes:</p>
<pre><code>import pickle, os
import numpy as np
import tensorflow as tf
from tensorflow import keras
from keras import layers
from src.utils.callbacks import VQTransCallback
from src.models.layers import GlobalSelfAttention, CrossAttention, CausalSelfAttention
from src.models.layers import TransformerFeedForward as FeedForward
class PositionalEmbedding(tf.keras.layers.Layer):
def __init__(self, vocab_size, d_model):
super().__init__()
self.d_model = d_model
self.embedding = layers.Embedding(vocab_size, d_model, mask_zero=True)
self.pos_encoding = self.positional_encoding(length=66, depth=d_model)
def compute_mask(self, *args, **kwargs):
return self.embedding.compute_mask(*args, **kwargs)
def positional_encoding(self, length, depth):
depth = depth/2
positions = np.arange(length)[:, np.newaxis] # (seq, 1)
depths = np.arange(depth)[np.newaxis, :]/depth # (1, depth)
angle_rates = 1 / (10000**depths) # (1, depth)
angle_rads = positions * angle_rates # (pos, depth)
pos_encoding = np.concatenate(
[np.sin(angle_rads), np.cos(angle_rads)],
axis=-1)
return tf.cast(pos_encoding, dtype=tf.float32)
def call(self, x):
length = tf.shape(x)[1]
x = self.embedding(x)
        # This factor sets the relative scale of the embedding and positional_encoding.
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x = x + self.pos_encoding[tf.newaxis, :length, :]
return x
class EncoderLayer(tf.keras.layers.Layer):
def __init__(self,*, d_model, num_heads, dff, dropout_rate=0.1):
super().__init__()
self.self_attention = GlobalSelfAttention(
num_heads=num_heads,
key_dim=d_model,
dropout=dropout_rate)
self.ffn = FeedForward(d_model, dff)
self.supports_masking = True
def call(self, x):
x = self.self_attention(x)
x = self.ffn(x)
return x
class Encoder(tf.keras.layers.Layer):
def __init__(self, *, num_layers, d_model, num_heads,
dff, vocab_size, dropout_rate=0.1):
super().__init__()
self.d_model = d_model
self.num_layers = num_layers
self.pos_embedding = PositionalEmbedding(vocab_size=vocab_size, d_model=d_model)
self.enc_layers = [
EncoderLayer(d_model=d_model,
num_heads=num_heads,
dff=dff,
dropout_rate=dropout_rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(dropout_rate)
self.supports_masking = True
def call(self, x):
x = self.pos_embedding(x) # Shape `(batch_size, d_model)`.
x = self.dropout(x)
for i in range(self.num_layers):
x = self.enc_layers[i](x)
return x # Shape `(batch_size, seq_len, d_model)`.
class DecoderLayer(tf.keras.layers.Layer):
def __init__(self, *, d_model, num_heads, dff, dropout_rate=0.1):
super(DecoderLayer, self).__init__()
self.causal_self_attention = CausalSelfAttention(
num_heads=num_heads,
key_dim=d_model,
dropout=dropout_rate)
self.cross_attention = CrossAttention(
num_heads=num_heads,
key_dim=d_model,
dropout=dropout_rate)
self.ffn = FeedForward(d_model, dff)
self.supports_masking = True
def call(self, x, context):
x = self.causal_self_attention(x=x)
x = self.cross_attention(x=x, context=context)
x = self.ffn(x) # Shape `(batch_size, seq_len, d_model)`.
return x
class Decoder(tf.keras.layers.Layer):
def __init__(self, *, num_layers, d_model, num_heads, dff, vocab_size,
dropout_rate=0.1):
super(Decoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.pos_embedding = PositionalEmbedding(vocab_size=vocab_size, d_model=d_model)
self.dropout = tf.keras.layers.Dropout(dropout_rate)
self.dec_layers = [
DecoderLayer(d_model=d_model, num_heads=num_heads,
dff=dff, dropout_rate=dropout_rate)
for _ in range(num_layers)]
self.supports_masking = True
def call(self, x, context):
x = self.pos_embedding(x) # (batch_size, target_seq_len, d_model)
x = self.dropout(x)
for i in range(self.num_layers):
x = self.dec_layers[i](x, context)
return x
class VQVAE2Transformer(tf.keras.Model):
def __init__(self, *, enc_num_layers, dec_num_layers, d_model, num_heads, dff,
vocab_size, codebook_length, dropout_rate=0.1):
super().__init__()
self.encoder = Encoder(num_layers=enc_num_layers, d_model=d_model,
num_heads=num_heads, dff=dff, vocab_size=vocab_size,
dropout_rate=dropout_rate)
self.decoder = Decoder(num_layers=dec_num_layers, d_model=d_model,
num_heads=num_heads, dff=dff, vocab_size=vocab_size,
dropout_rate=dropout_rate)
self.enc_num_layers = enc_num_layers
self.dec_num_layers = dec_num_layers
self.d_model = d_model
self.num_heads = num_heads
self.dff = dff
self.vocab_size = vocab_size + 3 + 1 # +3 start, end and mask
self.codebook_length = codebook_length + 2 # +2 start and end tokens
self.start_token = vocab_size+1
self.end_token = vocab_size+2
self.final_layer = tf.keras.layers.Dense(self.vocab_size)
self.supports_masking = True
def call(self, inputs):
enc_in, dec_in = inputs[0], inputs[1]
context = self.encoder(enc_in) # (batch_size, context_len, d_model)
x = self.decoder(dec_in, context) # (batch_size, target_len, d_model)
# Final linear layer output.
logits = self.final_layer(x) # (batch_size, target_len, latent_size)
try:
# Drop the keras mask, so it doesn't scale the losses/metrics.
# b/250038731
del logits._keras_mask
except AttributeError:
pass
# Return the final output and the attention weights.
return logits
def accuracy(self, label, pred):
pred = tf.argmax(pred, axis=2)
label = tf.cast(label, pred.dtype)
match = label == pred
match = tf.cast(match, dtype=tf.float32)
return tf.reduce_sum(match) / self.codebook_length
def compile_model(self):
optimizer = keras.optimizers.AdamW()
scce = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True)
self.compile(optimizer=optimizer,
loss=scce,
metrics=["accuracy"])
def save_build(self, folder):
"""Saves the config before the training starts. The model itself will be saved
later on using keras checkpoints.
Args:
folder: Where to save the config parameters
"""
if not os.path.exists(folder):
os.makedirs(folder)
os.makedirs(os.path.join(folder, 'weights'))
with open(os.path.join(folder, 'params.pkl'), 'wb') as f:
pickle.dump([
self.enc_num_layers,
self.dec_num_layers,
self.d_model,
self.num_heads,
self.dff,
self.vocab_size,
], f)
def train_step(self, batch):
"""Processes one batch inside model.fit()."""
enc_in, dec_in = batch[0][:,:66], batch[0][:,66:]
dec_input = dec_in[:, :-1]
dec_target = dec_in[:, 1:]
with tf.GradientTape() as tape:
preds = self([enc_in, dec_input])
loss = self.compute_loss(None, dec_target, preds)
trainable_vars = self.trainable_variables
gradients = tape.gradient(loss, trainable_vars)
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Update the metrics
self.compiled_metrics.update_state(dec_target, preds)
return {m.name: m.result() for m in self.metrics}
def test_step(self, batch):
enc_in, dec_in = batch[0][:,:66], batch[0][:,66:]
dec_input = dec_in[:, :-1]
dec_target = dec_in[:, 1:]
preds = self([enc_in, dec_input])
self.compiled_metrics.update_state(dec_target, preds)
return {m.name: m.result() for m in self.metrics}
def train(
self, train_dataset, valid_dataset, epochs, run_folder, initial_epoch=0,
print_every_n_epochs=5
):
test_batch = next(valid_dataset.dataset_iter)
display_cb = VQTransCallback(test_batch, run_folder)
checkpoint_filepath = os.path.join(
run_folder, "weights/{epoch:03d}-{loss:.5f}-{val_loss:.5f}.weights.h5")
checkpoint1 = keras.callbacks.ModelCheckpoint(
checkpoint_filepath, save_weights_only=True, save_best_only=True)
checkpoint2 = keras.callbacks.ModelCheckpoint(
os.path.join(run_folder, 'weights/last.weights.h5'),
save_weights_only=True, save_best_only=True)
callbacks_list = [checkpoint1, checkpoint2, display_cb]
self.fit(
train_dataset.dataset, validation_data=valid_dataset.dataset,
epochs=epochs, initial_epoch=initial_epoch, callbacks=callbacks_list,
#steps_per_epoch=1000, validation_steps=1000
)
def generate(self, enc_in, dec_in, startid=0):
"""Performs inference over one batch of inputs."""
bs = tf.shape(enc_in)[0]
# scce = tf.keras.losses.SparseCategoricalCrossentropy(
# from_logits=True)
# logits = self([enc_in, dec_in[:,:-1]])
# print("SCCE LOSS")
# print(scce(dec_in[:,1:],logits))
if startid == 0:
dec_in = tf.ones((bs, 1), dtype=tf.int32) * self.start_token
else:
dec_in = tf.cast(dec_in[:,:startid], dtype=tf.int32)
for _ in range(self.codebook_length - startid):
logits = self([enc_in, dec_in])
logits = tf.argmax(logits, axis=-1, output_type=tf.int32)
last_logit = tf.expand_dims(logits[:, -1], axis=-1)
dec_in = tf.concat([dec_in, last_logit], axis=-1)
return dec_in
</code></pre>
<p>I am using sparse categorical cross-entropy, as in the tutorial.</p>
<p>I have a problem where the loss displayed during training becomes negative. For example, right now it says -2.0593. When I monitor the loss of some test batches with a callback, the returned value is never negative, usually somewhere between 1.5 and 2.</p>
<p>As you can see, my last layer in the Transformer is a Dense layer without an activation function (as in the tutorial), so in the loss I set <code>from_logits=True</code>. I have tried using a softmax activation in that last layer and setting <code>from_logits=False</code> instead, but then the model does not seem to train: it gets stuck at a loss of around 0.274 and never moves.</p>
<p>I have no idea why the loss becomes negative, when the loss function always seems to output positive numbers when I test it.</p>
<p>Overall the results of the training are also not good.</p>
|
<python><tensorflow><keras><transformer-model>
|
2024-07-17 10:40:58
| 0
| 1,638
|
Dr Sokoban
|
78,758,991
| 2,393,472
|
Parallel calculation of several custom formulas in LibreOfficeCalc
|
<p>I wrote a custom formula in Python (for example, the name <strong>Formula 1</strong>) and added it as an extension of LibreOfficeCalc.</p>
<p>There can be many such formulas on the sheet, with different parameters.</p>
<p>Is it possible to force LibreOffice Calc to execute these formulas in parallel, instead of sequentially as it does now?</p>
|
<python><libreoffice-calc><uno>
|
2024-07-17 10:26:48
| 0
| 333
|
Anton
|
78,758,736
| 8,282,251
|
How does toml.TomlPreserveInlineDictEncoder() work?
|
<p>I had a look at the toml library source code, and I understand this encoder to format dictionaries on one line instead of in the classic TOML table format.</p>
<p>My expected behavior is the following:</p>
<pre><code>import toml
data = {'a': 1, 'b': 2, 'c': 3}
# Default Encoder
toml_string = toml.dumps(data)
print(toml_string)
# Output:
# a = 1
# b = 2
# c = 3
# TomlPreserveInlineDictEncoder
toml_string = toml.dumps(data, encoder=toml.TomlPreserveInlineDictEncoder())
print(toml_string)
# Output:
# {a = 1, b = 2, c = 3}
</code></pre>
<p>But in fact, changing the encoder doesn't change anything in the dumped string (I also tried with nested dictionaries).</p>
<p>Did I miss something? Is this functionality supposed to work?<br />
(I tested it in Python 3.8 and 3.10)</p>
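<p>(A hedged sketch based on a skim of the same source: the encoder appears to inline only dicts that are instances of <code>toml.decoder.InlineTableDict</code>, so a plain <code>dict</code> is dumped in the normal table format regardless of the encoder. Worth verifying against your toml version.)</p>

```python
import toml

class InlineDict(dict, toml.decoder.InlineTableDict):
    """A plain dict tagged so TomlPreserveInlineDictEncoder will inline it."""

data = {'point': InlineDict({'x': 1, 'y': 2})}
out = toml.dumps(data, encoder=toml.TomlPreserveInlineDictEncoder())
print(out)  # something like: point = { x = 1, y = 2 }
```

<p>Note that the top level of a TOML document must still be a table, so only nested dicts can be inlined.</p>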
|
<python><python-3.x><toml>
|
2024-07-17 09:34:51
| 1
| 623
|
Gautitho
|
78,758,686
| 2,351,481
|
Schedule Firebase Functions in Python
|
<p>I'm trying to schedule this function to run every week on Monday at 00:05 AM.</p>
<p>But it doesn't run at that time. Using <code>@scheduler_fn.on_schedule</code> has no effect on my function.</p>
<pre><code>@https_fn.on_request()
@scheduler_fn.on_schedule(
schedule="5 0 * * 0",
timezone=scheduler_fn.Timezone("Europe/Bucharest")
)
def scheduleFunction(req: https_fn.Request) -> https_fn.Response:
</code></pre>
|
<python><firebase><google-cloud-platform><google-cloud-functions>
|
2024-07-17 09:22:46
| 1
| 365
|
Cosmin Mihu
|
78,758,644
| 9,877,065
|
Python argparse parse_known_args(namespace=); use of namespace?
|
<p>Trying to understand <a href="https://github.com/schrodinger/pymol-open-source/blob/master/setup.py" rel="nofollow noreferrer">pymol-open-source setup.py</a>, I came across this use of the
<code>argparse.parse_known_args()</code> <code>namespace</code> keyword:</p>
<pre><code>import argparse
...
class options:
osx_frameworks = True
jobs = int(os.getenv('JOBS', 0))
no_libxml = False
no_glut = True
use_msgpackc = 'guess'
testing = False
openvr = False
use_openmp = 'no' if MAC else 'yes'
use_vtkm = 'no'
vmd_plugins = True
...
parser = argparse.ArgumentParser()
parser.add_argument('--glut', dest='no_glut', action="store_false",
help="link with GLUT (legacy GUI)")
parser.add_argument('--no-osx-frameworks', dest='osx_frameworks',
help="on MacOS use XQuartz instead of native frameworks",
action="store_false")
parser.add_argument('--jobs', '-j', type=int, help="for parallel builds "
"(defaults to number of processors)")
parser.add_argument('--no-libxml', action="store_true",
help="skip libxml2 dependency, disables COLLADA export")
parser.add_argument('--use-openmp', choices=('yes', 'no'),
help="Use OpenMP")
parser.add_argument('--use-vtkm', choices=('1.5', '1.6', '1.7', 'no'),
help="Use VTK-m for isosurface generation")
parser.add_argument('--use-msgpackc', choices=('c++11', 'c', 'guess', 'no'),
help="c++11: use msgpack-c header-only library; c: link against "
"shared library; no: disable fast MMTF load support")
parser.add_argument('--testing', action="store_true",
help="Build C-level tests")
parser.add_argument('--openvr', dest='openvr', action='store_true')
parser.add_argument('--no-vmd-plugins', dest='vmd_plugins',
action='store_false',
help='Disable VMD molfile plugins (libnetcdf dependency)')
options, sys.argv[1:] = parser.parse_known_args(namespace=options)
...
</code></pre>
<p>where in <code>options, sys.argv[1:] = parser.parse_known_args(namespace=options)</code>, the
<code>namespace</code> points to the <code>options</code> class.</p>
<p>I guess it is used to filter <code>sys.argv</code> before it is passed to <code>setuptools.setup</code>?</p>
<p>Is this the preferred / pythonic / correct way to use the
<code>namespace</code> keyword of <code>parser.parse_known_args</code>?</p>
<p>Usually <code>parse_known_args</code> returns a namespace of type <code><class 'argparse.Namespace'></code>.</p>
<p>Here I got <code><class '__main__.Namespace'></code>.</p>
<p>Is this behaviour documented for <code>argparse</code>? Does the <code>namespace</code> keyword accept other kinds of objects that could be useful for navigating pre-setup options?</p>
<p>Please bear with me; I am not an expert in Python, setuptools, or argparse.</p>
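<p>(A minimal self-contained sketch, independent of the pymol code: <code>namespace=</code> accepts any object that attributes can be set on, a class included. argparse then returns that same object instead of a fresh <code>argparse.Namespace</code>, and attributes that already exist on it act as defaults.)</p>

```python
import argparse

class Options:
    jobs = 0          # class attributes act as defaults
    testing = False

parser = argparse.ArgumentParser()
parser.add_argument('--jobs', type=int)
parser.add_argument('--testing', action='store_true')

# Parsed attributes are set directly on the supplied object; unrecognized
# arguments are returned as a list instead of raising an error.
ns, rest = parser.parse_known_args(['--jobs', '4', 'build'], namespace=Options)
print(ns is Options)   # True: the class itself was returned
print(Options.jobs)    # 4
print(rest)            # ['build']
```

<p>This explains the <code><class '__main__...'></code> type: the returned "namespace" is simply the object you passed in.</p>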
|
<python><argparse><argv>
|
2024-07-17 09:16:00
| 2
| 3,346
|
pippo1980
|
78,758,574
| 13,765,728
|
How to retrieve which DAG has updated a Composer Airflow Dataset
|
<p>Regarding Google Cloud Composer, I have defined a DAG in this way:</p>
<pre><code>dataset = Dataset("//my_Dataset")
dag = DAG(
dag_id='my_dag',
default_args=default_args,
schedule=[dataset],
catchup=False)
</code></pre>
<p>The Dataset (//my_Dataset) can be updated by two different DAGs. My aim is to retrieve which DAG last updated the Dataset. My final goal is to trigger the downstream DAG (my_dag) with different parameters, depending on which DAG updated the Dataset.</p>
|
<python><airflow><directed-acyclic-graphs><google-cloud-composer>
|
2024-07-17 08:59:50
| 1
| 457
|
domiziano
|
78,758,529
| 893,254
|
Is it important to call `close()` on an file opened with `open()` if `flush()` is called after each write() operation?
|
<p>I have read that it is important to call <code>close()</code> on a file which has been opened with <code>open(filename, 'w'|'a')</code> because otherwise changes made to the opened file may not be persisted. I believe this is due to buffering.</p>
<p>My understanding is that the GC mechanism cannot be relied on to properly close an opened file, because this is not portable across different Python interpreters.</p>
<p>I also understand that a close equivalent of RAII is implemented using contexts.</p>
<p>For example, the following <code>with</code> context automatically calls <code>close()</code> when the scope of the <code>with</code> statement ends.</p>
<pre><code>with open(filename, 'w') as ofile:
# pass
# ofile.close() called automatically (literally, or effectively?)
</code></pre>
<p>Does the situation change if after every call to <code>.write()</code>, we follow up with a call to <code>.flush()</code>? In this context, is it still necessary/important/advisable (delete as appropriate) to also explicitly call <code>.close()</code> rather than leaving the open file to be garbage collected later?</p>
|
<python><garbage-collection><raii>
|
2024-07-17 08:48:57
| 1
| 18,579
|
user2138149
|
78,758,516
| 8,946,188
|
Where to put checks on the inputs of a class?
|
<p>Where should I put checks on the inputs of a class? Right now I'm putting them in <code>__init__</code>, as follows, but I'm not sure if that's correct. See the example below.</p>
<pre><code>import numpy as np
class MedianTwoSortedArrays:
def __init__(self, sorted_array1, sorted_array2):
# check inputs --------------------------------------------------------
# check if input arrays are np.ndarray's
        if not isinstance(sorted_array1, np.ndarray) or \
                not isinstance(sorted_array2, np.ndarray):
            raise TypeError("Input arrays need to be sorted np.ndarray's")
        # check if input arrays are 1D
        if sorted_array1.ndim > 1 or sorted_array2.ndim > 1:
            raise ValueError("Input arrays need to be 1D np.ndarray's")
        # check if both input arrays are sorted - note that this is O(n + m)
        for arr in (sorted_array1, sorted_array2):
            for ind in range(len(arr) - 1):
                if arr[ind] > arr[ind + 1]:
                    raise ValueError("Input arrays need to be sorted")
# end of input checks--------------------------------------------------
self.sorted_array1 = sorted_array1
self.sorted_array2 = sorted_array2
</code></pre>
|
<python><python-3.x><validation>
|
2024-07-17 08:46:43
| 1
| 462
|
Amazonian
|
78,758,511
| 1,982,032
|
How can I convert the dataframe into the desired new dataframe efficiently?
|
<p><code>x</code> is a dataframe:</p>
<pre><code>x
year mar 31, 2024 mar 31, 2023
0 net income 306.000 524.0000
1 net income growth -0.416 -0.0455
2 retained rate NaN NaN
3 pe 419.930 0.0000
</code></pre>
<p>Its row index and column names:</p>
<pre><code>x.index
RangeIndex(start=0, stop=4, step=1)
x.columns
Index(['year', 'mar 31, 2024', 'mar 31, 2023'], dtype='object')
</code></pre>
<p>I want a new dataframe with the format below:</p>
<p>new_x</p>
<pre><code> year net income net income growth retained rate pe
0 mar 31, 2024 306.0 -0.416 NaN 419.93
1 mar 31, 2023 524.0 -0.0455 NaN 0.0
new_x.index
RangeIndex(start=0, stop=1, step=1)
new_x.columns
['year', 'net income', 'net income growth', 'retained rate', 'pe']
</code></pre>
<p>How can I get the new dataframe efficiently?</p>
<pre><code>x.T
0 1 2 3
year net income net income growth retained rate pe
mar 31, 2024 306.0 -0.416 NaN 419.93
mar 31, 2023 524.0 -0.0455 NaN 0.0
</code></pre>
<p>The transpose method alone doesn't produce the desired layout.</p>
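<p>(A hedged sketch, assuming <code>year</code> holds the metric names as in the sample, and pandas ≥ 1.5 for <code>reset_index(names=...)</code>: set the metric names as the index, transpose, then move the dates back into a <code>year</code> column.)</p>

```python
import numpy as np
import pandas as pd

x = pd.DataFrame({
    'year': ['net income', 'net income growth', 'retained rate', 'pe'],
    'mar 31, 2024': [306.0, -0.416, np.nan, 419.93],
    'mar 31, 2023': [524.0, -0.0455, np.nan, 0.0],
})

new_x = (x.set_index('year')             # metric names become the index
           .T                            # dates become rows, metrics columns
           .reset_index(names='year')    # dates back into a 'year' column
           .rename_axis(columns=None))   # drop the leftover 'year' axis name
print(new_x)
```

<p>On older pandas, <code>.reset_index().rename(columns={'index': 'year'})</code> achieves the same.</p>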
|
<python><pandas><dataframe>
|
2024-07-17 08:45:27
| 1
| 355
|
showkey
|
78,757,840
| 16,527,170
|
Converting 1H OHLC Data to 3H OHLC Data using Pandas Dataframe
|
<p>I have <code>1hr</code> data in <code>df</code> from <code>yfinance</code>. I need to convert that data into <code>3h</code> interval.</p>
<p>Sample Code:</p>
<pre><code>pip install yfinance
import yfinance as yf
ticker_symbol = '^NSEI' # Nifty index symbol on Yahoo Finance
df = yf.download(ticker_symbol, start='2023-01-01', end='2024-01-01',interval='1h')
df_new = df.resample("3H").mean()
</code></pre>
<p>df Output:</p>
<pre><code> Open High Low Close Adj Close Volume
Datetime
2023-01-02 09:15:00+05:30 18131.699219 18172.949219 18087.550781 18162.900391 18162.900391 0
2023-01-02 10:15:00+05:30 18159.400391 18199.300781 18155.400391 18186.500000 18186.500000 0
2023-01-02 11:15:00+05:30 18186.099609 18192.949219 18170.500000 18192.050781 18192.050781 0
2023-01-02 12:15:00+05:30 18190.650391 18197.300781 18153.900391 18182.349609 18182.349609 0
2023-01-02 13:15:00+05:30 18182.250000 18193.449219 18157.949219 18170.099609 18170.099609 0
2023-01-02 14:15:00+05:30 18169.949219 18205.099609 18123.199219 18193.050781 18193.050781 0
2023-01-02 15:15:00+05:30 18192.099609 18215.099609 18192.099609 18208.949219 18208.949219 0
2023-01-03 09:15:00+05:30 18163.199219 18232.500000 18149.949219 18230.250000 18230.250000 0
2023-01-03 10:15:00+05:30 18230.099609 18238.949219 18215.949219 18231.250000 18231.250000 0
2023-01-03 11:15:00+05:30 18231.099609 18240.400391 18195.800781 18198.199219 18198.199219 0
2023-01-03 12:15:00+05:30 18198.349609 18198.949219 18162.500000 18184.550781 18184.550781 0
</code></pre>
<p>Current Output: df_new</p>
<pre><code> Open High Low Close Adj Close Volume
Datetime
2023-01-02 09:00:00+05:30 18159.066406 18188.399740 18137.817057 18180.483724 18180.483724 0.0
2023-01-02 12:00:00+05:30 18180.949870 18198.616536 18145.016276 18181.833333 18181.833333 0.0
2023-01-02 15:00:00+05:30 18192.099609 18215.099609 18192.099609 18208.949219 18208.949219 0.0
2023-01-02 18:00:00+05:30 NaN NaN NaN NaN NaN NaN
2023-01-02 21:00:00+05:30 NaN NaN NaN NaN NaN NaN
2023-01-03 00:00:00+05:30 NaN NaN NaN NaN NaN NaN
</code></pre>
<p>Expected Output:</p>
<pre><code> Open High Low Close Adj Close Volume
Datetime
2023-01-02 09:15:00+05:30 18131.699219 18199.300781 18087.550781 18192.050781 18192.050781 0.0
2023-01-02 12:15:00+05:30 18190.650391 18205.099609 18123.199219 18193.050781 18193.050781 0.0
2023-01-02 15:15:00+05:30 18192.099609 18215.099609 18192.099609 18208.949219 18208.949219 0.0
2023-01-03 09:15:00+05:30 18163.199219 18240.400391 18149.949219 18198.199219 18198.199219 0
</code></pre>
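<p>(A hedged sketch of the usual approach: OHLC bars are resampled with per-column aggregations rather than <code>mean()</code>, and <code>origin='start'</code> anchors the 3-hour bins at the first timestamp, 09:15, instead of the clock hour. The toy frame below stands in for the yfinance download; the column-to-aggregation mapping is the conventional one, not taken from the question.)</p>

```python
import numpy as np
import pandas as pd

# toy hourly OHLC frame standing in for the yfinance download
idx = pd.date_range('2023-01-02 09:15', periods=7, freq='h', tz='Asia/Kolkata')
df = pd.DataFrame({
    'Open':   np.arange(7.0),
    'High':   np.arange(7.0) + 1,
    'Low':    np.arange(7.0) - 1,
    'Close':  np.arange(7.0) + 0.5,
    'Volume': np.ones(7),
}, index=idx)

agg = {'Open': 'first', 'High': 'max', 'Low': 'min',
       'Close': 'last', 'Volume': 'sum'}
df_3h = (df.resample('3h', origin='start')  # bins start at 09:15, not 09:00
           .agg(agg)
           .dropna())                        # drop empty (overnight) bins
print(df_3h)
```

<p><code>dropna()</code> removes the NaN rows that resampling creates for the hours when the market is closed.</p>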
|
<python><pandas><dataframe>
|
2024-07-17 05:51:37
| 1
| 1,077
|
Divyank
|
78,757,696
| 1,176,573
|
Python - Check if the last value in a sequence is relatively higher than the rest
|
<p>For a list of percentage data, I need to check if the last value (<code>90.2</code>) is <em>somehow higher</em> and <em>somewhat "abnormal"</em> than the rest of the data. Clearly it is in this sequence.</p>
<p><code>delivery_pct = [59.45, 55.2, 54.16, 66.57, 68.62, 64.19, 60.57, 44.12, 71.52, 90.2]</code></p>
<p>But for the below sequence the last value is not:</p>
<p><code>delivery_pct = [ 63.6, 62.64, 60.36, 72.8, 70.86, 40.51, 52.06, 61.47, 51.55, 74.03 ]</code></p>
<p>How do I check if the last value is abnormally higher than the rest?</p>
<p><em>About Data</em>: <em>The data point has the range between 0-100%. But since this is percentage of delivery taken for a stock for last 10 days, so it is usually range bound based on nature of stock (highly traded vs less frequently traded), unless something good happens about the stock and there is higher delivery of that stock on that day in anticipation of good news.</em></p>
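<p>(A simple heuristic sketch; the threshold is an assumption to tune for the data: compare the last value's z-score against the mean and standard deviation of the preceding nine values.)</p>

```python
from statistics import mean, stdev

def last_is_abnormal(values, z_threshold=2.0):
    """Return True if the final value lies more than z_threshold standard
    deviations above the mean of the earlier values."""
    head, last = values[:-1], values[-1]
    mu, sigma = mean(head), stdev(head)
    return (last - mu) / sigma > z_threshold

print(last_is_abnormal([59.45, 55.2, 54.16, 66.57, 68.62, 64.19, 60.57, 44.12, 71.52, 90.2]))   # True
print(last_is_abnormal([63.6, 62.64, 60.36, 72.8, 70.86, 40.51, 52.06, 61.47, 51.55, 74.03]))   # False
```

<p>Since the data is bounded at 0-100% and only 9 reference points are available, a robust variant using the median and IQR is another option.</p>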
|
<python><statistics>
|
2024-07-17 04:54:24
| 2
| 1,536
|
RSW
|
78,757,552
| 342,553
|
Work out the correct path for Python mock.patch
|
<p>I am aware of the basics of the target path for <code>mock.patch</code>, but it becomes increasingly difficult to work out the correct path when the object is encapsulated by layers of dynamic construction, e.g. <a href="https://stackoverflow.com/questions/76250086/django-viewflow-how-to-mock-viewflow-handlers">django viewflow how to mock viewflow handlers</a>.</p>
<p>eg.</p>
<p>forms.py</p>
<pre class="lang-py prettyprint-override"><code>def get_options():
# this is the one I want to mock
...
class Form(...):
field = Field(choices=get_options)
...
</code></pre>
<p>I am just making up stuff here to illustrate some of the complications.</p>
<p>flow.py</p>
<pre class="lang-py prettyprint-override"><code>from forms import Form
class Flow(...):
start = (
flow.Start(..., form_class=Form).Next(...)
)
</code></pre>
<p>I have tried <code>mock.patch('forms.get_options')</code> and it didn't work. I suspect the <code>get_options</code> reference has been changed somehow.</p>
<p>I am wondering if there is some print-stack statement that could show me the path I need to patch, e.g.:</p>
<p>forms.py</p>
<pre class="lang-py prettyprint-override"><code>def get_options():
# print the stack here and output something like
# "complicated_dynamic_class_creation.flow.instance.blah.blah.get_options"
# this is the one I want to mock
...
</code></pre>
<p>I have also tried getting the dot notation of the current context in which the function is being executed:</p>
<pre class="lang-py prettyprint-override"><code>import inspect
def get_current_function_dot_notation():
# Get the current stack frame
frame = inspect.currentframe()
# Get the caller frame
caller_frame = frame.f_back
# Get the function name
function_name = caller_frame.f_code.co_name
# Get the module name
module_name = caller_frame.f_globals["__name__"]
# Try to get the class name (if the function is a method of a class)
class_name = None
if 'self' in caller_frame.f_locals:
class_name = caller_frame.f_locals['self'].__class__.__name__
elif 'cls' in caller_frame.f_locals:
class_name = caller_frame.f_locals['cls'].__name__
# Construct the dot notation
if class_name:
dot_notation = f"{module_name}.{class_name}.{function_name}"
else:
dot_notation = f"{module_name}.{function_name}"
return dot_notation
# Example usage
def get_options():
print(get_current_function_dot_notation())
...
</code></pre>
<p>This returns <code>forms.get_options</code>, and it doesn't work in my "flow" case.</p>
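<p>(A hedged, self-contained sketch of one likely complication, not specific to viewflow: when a function object is passed as a value while a class body is evaluated, the consumer keeps a direct reference to the original function, so later patching the module attribute has no effect. Patching the attribute on the object that holds the reference does work. <code>SimpleNamespace</code> stands in for the <code>forms</code> module here.)</p>

```python
from types import SimpleNamespace
from unittest import mock

forms = SimpleNamespace()            # stand-in for the forms module

def get_options():
    return ["real"]

forms.get_options = get_options

class Field:
    def __init__(self, choices):
        self.choices = choices

# the reference is captured here, when the class body / default is evaluated
field = Field(choices=forms.get_options)

with mock.patch.object(forms, "get_options", return_value=["mocked"]):
    print(field.choices())       # ['real'] - field holds the original function
    print(forms.get_options())   # ['mocked'] - only the module attribute changed

# patch the holder of the reference instead:
with mock.patch.object(field, "choices", return_value=["mocked"]):
    print(field.choices())       # ['mocked']
```

<p>This is the "patch where it is looked up" rule: the target must be the attribute the code under test actually reads at call time.</p>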
|
<python><python-mock><django-viewflow>
|
2024-07-17 03:47:46
| 1
| 26,828
|
James Lin
|
78,757,328
| 4,420,797
|
Log file: Selection of specific log content inside log file by start and end date
|
<p>I am working on log analysis where I need to analyze a log file by first extracting the dates within the file. Then, I need to use these dates to define a start date and an end date. Based on the selected start and end dates, only the specific content within that range should be available, effectively filtering the log content by date.</p>
<p>I have managed to extract the dates successfully using a regex, but the function that filters the log content based on the start and end dates is not working as expected.</p>
<pre><code>@staticmethod
def filter_log_entries(log_content, start_date, end_date):
start_datetime = datetime.strptime(start_date, '%d/%b/%Y').replace(tzinfo=timezone.utc)
end_datetime = datetime.strptime(end_date, '%d/%b/%Y').replace(tzinfo=timezone.utc)
# Adjust end_datetime to include the entire end day
end_datetime = end_datetime + timedelta(days=1) - timedelta(seconds=1)
log_entry_pattern = re.compile(r'\[(\d{2}/[A-Za-z]{3}/\d{4}:\d{2}:\d{2}:\d{2} [+-]\d{4})\]')
filtered_entries = []
for line in log_content.split('\n'):
match = log_entry_pattern.search(line)
if match:
entry_datetime_str = match.group(1)
try:
entry_datetime = datetime.strptime(entry_datetime_str, '%d/%b/%Y:%H:%M:%S %z')
if start_datetime <= entry_datetime <= end_datetime:
filtered_entries.append(line)
except ValueError:
st.write(f"Date parsing error for line: {line}")
filtered_log_content = "\n".join(filtered_entries)
return filtered_log_content
</code></pre>
<p>Log Content (to show):</p>
<p>The date format in the log file is [17/May/2015:10:05:03 +0000], and the log file ends on [20/May/2015:10:05:03 +0000]. I want to filter the log content so that if I select the date range from 17/May/2015 to 18/May/2015, only the content within this timeline is selected.</p>
<pre><code>83.149.9.216 - - [17/May/2015:10:05:03 +0000] "GET /presentations/logstash-monitorama-2013/images/kibana-search.png HTTP/1.1" 200 203023 "http://semicomplete.com/presentations/logstash-monitorama-2013/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"83.149.9.216 - - [17/May/2015:10:05:43 +0000] "GET /presentations/logstash-monitorama-2013/images/kibana-dashboard3.png HTTP/1.1" 200 171717 "http://semicomplete.com/presentations/logstash-monitorama-2013/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"83.149.9.216 - - [17/May/2015:10:05:47 +0000] "GET /presentations/logstash-monitorama-2013/plugin/highlight/highlight.js HTTP/1.1" 200 26185 "http://semicomplete.com/presentations/logstash-monitorama-2013/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"83.149.9.216 - - [17/May/2015:10:05:12 +0000] "GET /presentations/logstash-monitorama-2013/plugin/zoom-js/zoom.js HTTP/1.1" 200 7697 "http://semicomplete.com/presentations/logstash-monitorama-2013/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"83.149.9.216 - - [17/May/2015:10:05:07 +0000] "GET /presentations/logstash-monitorama-2013/plugin/notes/notes.js HTTP/1.1" 200 2892 "http://semicomplete.com/presentations/logstash-monitorama-2013/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"83.149.9.216 - - [17/May/2015:10:05:34 +0000] "GET /presentations/logstash-monitorama-2013/images/sad-medic.png HTTP/1.1" 200 430406 "http://semicomplete.com/presentations/logstash-monitorama-2013/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"83.149.9.216 - - [17/May/2015:10:05:57 +0000] 
"GET /presentations/logstash-monitorama-2013/css/fonts/Roboto-Bold.ttf HTTP/1.1" 200 38720 "http://semicomplete.com/presentations/logstash-monitorama-2013/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"83.149.9.216 - - [17/May/2015:10:05:50 +0000] "GET /presentations/logstash-monitorama-2013/css/fonts/Roboto-Regular.ttf HTTP/1.1" 200 41820 "http://semicomplete.com/presentations/logstash-monitorama-2013/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"83.149.9.216 - - [17/May/2015:10:05:24 +0000] "GET /presentations/logstash-monitorama-2013/images/frontend-response-codes.png HTTP/1.1" 200 52878 "http://semicomplete.com/presentations/logstash-monitorama-2013/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"83.149.9.216 - - [17/May/2015:10:05:50 +0000]
</code></pre>
<p>Complete Link: <a href="https://github.com/linuxacademy/content-elastic-log-samples/blob/master/access.log" rel="nofollow noreferrer">https://github.com/linuxacademy/content-elastic-log-samples/blob/master/access.log</a></p>
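<p>For reference, each entry above is in the standard Apache "combined" log format (IP, identd, user, timestamp, request, status, bytes, referer, user-agent). A minimal parsing sketch in Python (the regex and field names are my own, not taken from the question):</p>

```python
import re

# Apache "combined" log format: IP, identd, user, [timestamp], "request",
# status, size, "referer", "user-agent".
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse_line(line):
    """Return the fields of one combined-format log line, or None."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

sample = ('83.149.9.216 - - [17/May/2015:10:05:03 +0000] '
          '"GET /presentations/logstash-monitorama-2013/images/kibana-search.png HTTP/1.1" '
          '200 203023 "http://semicomplete.com/presentations/logstash-monitorama-2013/" '
          '"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 '
          '(KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"')
entry = parse_line(sample)
print(entry['ip'], entry['status'], entry['request'])
```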
|
<python><linux><ubuntu><azure-log-analytics>
|
2024-07-17 01:57:00
| 1
| 2,984
|
Khawar Islam
|
78,757,257
| 4,576,447
|
Plotly figure with subplots and dropdown hides second plot when non-default option selected
|
<p>I am trying to generate a Plotly figure with two subplots and a dropdown menu. The default option works fine (Fig. <a href="https://i.sstatic.net/iU1zCFj8.png" rel="nofollow noreferrer">1</a>), but when other values are selected, the second plot disappears (Fig. <a href="https://i.sstatic.net/E4BfwMyZ.png" rel="nofollow noreferrer">2</a>). How can I fix this so that the second plot and its legend appear for every selected option?</p>
<p><a href="https://i.sstatic.net/iU1zCFj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iU1zCFj8.png" alt="Fig. 1" /></a></p>
<p>Fig. 1</p>
<p><a href="https://i.sstatic.net/E4BfwMyZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E4BfwMyZ.png" alt="Fig. 2" /></a></p>
<p>Fig. 2</p>
<p><strong>MWE</strong></p>
<pre><code>import pandas as pd
import numpy as np
import plotly.graph_objects as go
from plotly.subplots import make_subplots
np.random.seed(42)
data = {
    'Value1': np.random.randint(1, 1000, 100),
    'Value2': np.random.randint(1, 1000, 100),
    'Value3': np.random.randint(1, 1000, 100),
    'Value4': np.random.randint(1, 1000, 100),
    'Value5': np.random.randint(1, 1000, 100),
    'Value6': np.random.randint(1, 1000, 100),
}
df = pd.DataFrame(data)
fig = make_subplots(rows = 2, cols = 1, vertical_spacing = 0.1, shared_xaxes = True)
for r in df.columns[2:-1]:
    fig.add_trace(go.Scatter(x = df['Value1'], y = df[r], mode ='markers', marker_symbol = 'circle', visible = False), row = 1, col = 1, secondary_y = False)
fig.data[0].visible = True
dropdown_buttons = [{'label': column, 'method': 'update', 'args': [{'visible': [col == column for col in df.columns[2:-1]]}]} for column in df.columns[2:-1]]
fig.add_trace(go.Scatter(x = df['Value1'], y = df['Value2'], mode = 'markers', visible = True), row = 2, col = 1, secondary_y = False)
fig.update_xaxes(title_text = 'Value1', row = 2, col = 1)
fig.update_layout(updatemenus=[{'buttons': dropdown_buttons, 'direction': 'down', 'showactive': False}],
                  template = 'plotly')
fig.write_html('plot.html')
</code></pre>
|
<python><button><plotly><dropdown><scatter-plot>
|
2024-07-17 01:16:15
| 1
| 6,536
|
Tom Kurushingal
|
78,757,230
| 8,761,448
|
Can Cloud Function HTTP Trigger Logging other info Excluding Http Response
|
<p><a href="https://i.sstatic.net/2fHhtgdM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fHhtgdM.png" alt="enter image description here" /></a>I am new to Cloud Function HTTP triggers. I wrote a simple function:</p>
<pre><code>import functions_framework
import logging

@functions_framework.http
def cr_trigger(request):
    """HTTP Cloud Function.
    Args:
        request (flask.Request): The request object.
        <https://flask.palletsprojects.com/en/1.1.x/api/#incoming-request-data>
    Returns:
        The response text, or any set of values that can be turned into a
        Response object using `make_response`
        <https://flask.palletsprojects.com/en/1.1.x/api/#flask.make_response>.
    """
    logging.info(f'START TO PROCESS THE API RESPONSE')
    request_json = request.get_json(silent=True)
    logging.info(f'request_json {request_json}')
    return request_json
</code></pre>
<p>And when I test and trigger the CF, it does not log the <code>START TO PROCESS THE API RESPONSE</code> message. Is this expected, or is there a way I can log other information?</p>
<p>Thanks!</p>
|
<python><google-cloud-platform><google-cloud-functions>
|
2024-07-17 00:52:52
| 0
| 573
|
YihanBao
|
78,757,198
| 6,498,757
|
AWS CodeBuild Fails with YAML_FILE_ERROR When Running Python Script in buildspec.yml
|
<p>I’m trying to run a Python script in AWS CodeBuild using a <code>buildspec.yml</code> file. The Python script is supposed to send an email using AWS SES. Here’s the relevant part of my <code>buildspec.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip install boto3
  build:
    commands:
      - |
        echo "import os
        import boto3
        from botocore.exceptions import ClientError
        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText
        from email.mime.application import MIMEApplication
        SUBJECT='There are high or medium alerts..'
        SENDER='xxx'
        RECIPIENT='xxx'
        ATTACHMENT='$CODEBUILD_SRC_DIR/scan-result/compressed_report.zip'
        BODY_TEXT = 'Please check file for report'
        # The HTML body of the email.
        BODY_HTML = '''\
        <html>
        <head></head>
        <body>
        <h1>Hello!</h1>
        <p>Please see the attached file for a list of customers to contact.</p>
        </body>
        </html>
        '''
        CHARSET = 'utf-8'
        # Create a new SES resource and specify a region.
        client = boto3.client('ses')
        # Create a multipart/mixed parent container.
        msg = MIMEMultipart('mixed')
        # Add subject, from and to lines.
        msg['Subject'] = SUBJECT
        msg['From'] = SENDER
        msg['To'] = RECIPIENT
        # Create a multipart/alternative child container.
        msg_body = MIMEMultipart('alternative')
        # Encode the text and HTML content and set the character encoding. This step is
        # necessary if you're sending a message with characters outside the ASCII range.
        textpart = MIMEText(BODY_TEXT.encode(CHARSET), 'plain', CHARSET)
        htmlpart = MIMEText(BODY_HTML.encode(CHARSET), 'html', CHARSET)
        # Add the text and HTML parts to the child container.
        msg_body.attach(textpart)
        msg_body.attach(htmlpart)
        # Define the attachment part and encode it using MIMEApplication.
        att = MIMEApplication(open(ATTACHMENT, 'rb').read())
        # Add a header to tell the email client to treat this part as an attachment,
        # and to give the attachment a name.
        att.add_header('Content-Disposition','attachment',filename=os.path.basename(ATTACHMENT))
        # Attach the multipart/alternative child container to the multipart/mixed
        # parent container.
        msg.attach(msg_body)
        # Add the attachment to the parent container.
        msg.attach(att)
        #print(msg)
        try:
            #Provide the contents of the email.
            response = client.send_raw_email(
                Source=SENDER,
                Destinations=[
                    RECIPIENT
                ],
                RawMessage={
                    'Data':msg.as_string(),
                },
                ConfigurationSetName=CONFIGURATION_SET
            )
        # Display an error if something goes wrong.
        except ClientError as e:
            print(e.response['Error']['Message'])
        else:
            print('Email sent! Message ID:'),
            print(response['MessageId'])" > script.py
      - python script.py
</code></pre>
<p>However, when I run the build, I get the following error:</p>
<pre><code>YAML_FILE_ERROR Message: could not find expected ':' at line xx(the line of echo "import os)
</code></pre>
<p>I’ve tried enclosing the Python code in double quotes and using single quotes inside the Python code, but the error persists. Any ideas on how to fix this?</p>
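<p>A likely cause (my reading of the error, not confirmed): every line inside a YAML literal block scalar (<code>- |</code>) must be indented further than the dash, so any line of the embedded script that starts at a lower column ends the scalar early and breaks parsing. A sturdier layout is to keep the Python out of the YAML entirely and commit it as a repository file — the file name <code>send_email.py</code> below is an assumption:</p>

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip install boto3
  build:
    commands:
      # send_email.py lives in the source repo, so no echo quoting or
      # block-scalar indentation can break the buildspec parse.
      - python send_email.py "$CODEBUILD_SRC_DIR/scan-result/compressed_report.zip"
```

This also avoids the shell interpolating `$`-variables and quotes inside the echoed script text.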
|
<python><yaml><cicd><amazon-ses><aws-codebuild>
|
2024-07-17 00:25:59
| 1
| 351
|
Yiffany
|
78,757,188
| 1,609,514
|
Symbolic conversion of transfer function to state-space model using Sympy looks incorrect
|
<p>I'm trying out the new <a href="https://docs.sympy.org/latest/modules/physics/control/lti.html" rel="nofollow noreferrer">Sympy control module</a> and a bit puzzled by the output of this symbolic conversion of a Laplace transfer function to a continuous-time state-space model:</p>
<pre class="lang-python prettyprint-override"><code>import sympy
assert sympy.__version__ >= '1.13.0'
from sympy.physics.control.lti import StateSpace, TransferFunction
s, K, T1 = sympy.symbols('s, K, T1')
G = TransferFunction(K, T1 * s + 1, s)
print(G)
Gss = G.rewrite(StateSpace)
print(Gss)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>TransferFunction(K, T1*s + 1, s)
StateSpace(Matrix([[-1/T1]]), Matrix([[1]]), Matrix([[K]]), Matrix([[0]]))
</code></pre>
<p>When I simulated this, I got a steady-state gain 3 times higher than K.
So I would have expected, for example, the C matrix to be <code>Matrix([[K/T1]])</code> rather than <code>Matrix([[K]])</code>, or a corresponding adjustment to the B matrix.</p>
<p>If I plug some numbers into Python Control I get a result consistent with my expectation:</p>
<pre><code>import control as con
K = 2
T1 = 3
G = con.tf(K, [T1, 1])
print(G)
Gss = con.ss(G)
print(Gss)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code><TransferFunction>: sys[14]
Inputs (1): ['u[0]']
Outputs (1): ['y[0]']
2
-------
3 s + 1
<StateSpace>: sys[14]
Inputs (1): ['u[0]']
Outputs (1): ['y[0]']
States (1): ['x[0]']
A = [[-0.33333333]]
B = [[1.]]
C = [[0.66666667]]
D = [[0.]]
</code></pre>
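<p>One way to check which realization actually matches the transfer function is to symbolically recompute C(sI − A)⁻¹B + D. The helper below is my own sketch, not part of either library:</p>

```python
import sympy

s, K, T1 = sympy.symbols('s K T1')

def tf_of(A, B, C, D):
    # Recover C * (s*I - A)^(-1) * B + D for a SISO state-space realization.
    n = A.shape[0]
    return sympy.simplify((C * (s * sympy.eye(n) - A).inv() * B + D)[0, 0])

M = sympy.Matrix
# The realization sympy reported: its DC gain works out to K*T1, not K.
G_reported = tf_of(M([[-1/T1]]), M([[1]]), M([[K]]), M([[0]]))
# A realization consistent with python-control's numbers (B = 1/T1 instead of 1):
G_expected = tf_of(M([[-1/T1]]), M([[1/T1]]), M([[K]]), M([[0]]))

print(G_reported)
print(sympy.simplify(G_expected - K/(T1*s + 1)))  # 0: this one matches G
```

This reproduces the factor-of-T1 discrepancy symbolically (with T1 = 3, a gain 3 times higher than K).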
|
<python><sympy><state-space><python-control>
|
2024-07-17 00:19:58
| 0
| 11,755
|
Bill
|
78,757,169
| 3,949,008
|
Python pandas read_sas with chunk size option fails with value error on index mismatch
|
<p>I have a very large SAS file that won't fit in the memory of my server. I simply need to convert it to a Parquet-formatted file. To do so, I am reading it in chunks using the <code>chunksize</code> option of the <code>read_sas</code> method in pandas. It is mostly working / doing its job, except that it fails with the following error after a while.</p>
<p>This particular SAS file has 79422642 rows of data. It is not clear why it fails in the middle.</p>
<pre><code>import pandas as pd
filename = 'mysasfile.sas7bdat'
SAS_CHUNK_SIZE = 2000000
sas_chunks = pd.read_sas(filename, chunksize = SAS_CHUNK_SIZE, iterator = True)
for sasDf in sas_chunks:
    print(sasDf.shape)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
(2000000, 184)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/io/sas/sas7bdat.py", line 340, in __next__
da = self.read(nrows=self.chunksize or 1)
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/io/sas/sas7bdat.py", line 742, in read
rslt = self._chunk_to_dataframe()
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/io/sas/sas7bdat.py", line 795, in _chunk_to_dataframe
rslt[name] = pd.Series(self._string_chunk[js, :], index=ix)
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/core/series.py", line 461, in __init__
com.require_length_match(data, index)
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/core/common.py", line 571, in require_length_match
raise ValueError(
ValueError: Length of values (2000000) does not match length of index (1179974)
</code></pre>
<p>I just tested the same logic of the code on a smaller SAS file with fewer rows using a smaller chunk size as follows, and it seems to work fine without any errors, and also handles the last remaining chunk that is smaller than the chunk size parameter:</p>
<pre><code>filename = 'mysmallersasfile.sas7bdat'
SAS_CHUNK_SIZE = 1000
sas_chunks = pd.read_sas(filename, chunksize = SAS_CHUNK_SIZE, iterator = True)
for sasDf in sas_chunks:
    print(sasDf.shape)
(1000, 5)
(1000, 5)
(1000, 5)
(1000, 5)
(983, 5)
</code></pre>
|
<python><pandas><sas>
|
2024-07-17 00:10:50
| 1
| 10,535
|
Gopala
|
78,757,088
| 2,009,558
|
How can I model the curve of asymmetric peaks using scipy.stats.beta?
|
<p>I am trying to integrate chromatographic peaks. I need to be able to model the peak shape to consistently determine when the peaks begin and end. I can model Gaussian peaks, but many peaks are asymmetrical, as in this example data, and would be better modelled with a Beta distribution. I can fit Gaussian models to the data, but I can't seem to obtain Beta models that fit the peak. What am I doing wrong here? I have read many SO posts with similar questions, but none of them demonstrate how to plot the derived model over the raw data to show the fit.
E.g.:</p>
<p><a href="https://stackoverflow.com/questions/64394200/scipy-how-to-fit-this-beta-distribution-using-python-scipy-curve-fit">Scipy - How to fit this beta distribution using Python Scipy Curve Fit</a></p>
<p><a href="https://stackoverflow.com/questions/74154672/struggling-to-graph-a-beta-distribution-using-python">Struggling to graph a Beta Distribution using Python</a></p>
<p><a href="https://stackoverflow.com/questions/73460400/how-to-properly-plot-the-pdf-of-a-beta-function-in-scipy-stats">How to properly plot the pdf of a beta function in scipy.stats</a></p>
<p>Here's my code:</p>
<pre><code>from scipy.stats import beta
import plotly.graph_objects as go
import peakutils
# raw data
yy = [128459, 1822448, 10216680, 24042041, 30715114, 29537797, 25022446, 18416199, 14138783, 12116635, 9596337, 7201602, 5668133, 4671416, 3920953, 3259980, 2756295, 2326780, 2095209, 1858894, 1646824, 1375129, 1300799, 1253879, 1086045, 968363, 932041, 793707, 741462, 741593]
xx = list(range(0,len(yy)))
fig = go.Figure()
fig.add_trace(go.Scatter(x = xx, y = yy, name = 'raw', mode = 'lines'))
# Gaussian model
gamp, gcen, gwid = peakutils.peak.gaussian_fit(xx, yy, center_only = False) # this calculates the Amplitude, Center & Width of a Gaussian peak
print(gamp, gcen, gwid)
gauss_model = [peakutils.peak.gaussian(x, gamp, gcen, gwid) for x in xx]
fig.add_trace(go.Scatter(x = xx, y = gauss_model, mode = 'lines', name = 'Gaussian'))
# Beta distribution model
aa, bb, loc, scale = beta.fit(yy)  # note: fit() treats its input as raw samples
print('beta fit:', aa, bb, loc, scale)
beta_model = beta.pdf(xx, aa, bb, loc, scale)
fig.add_trace(go.Scatter(x = xx, y = beta_model, mode = 'lines', name = 'Beta'))
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/BOsrRaVz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOsrRaVz.png" alt="enter image description here" /></a></p>
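<p>A note on the likely root cause: <code>beta.fit</code> estimates distribution parameters from raw samples, whereas here the y-values are curve heights at each x. Fitting an amplitude-scaled Beta PDF with <code>scipy.optimize.curve_fit</code> is one way around that. In this sketch the amplitude parameter, starting guesses, and bounds are my own assumptions, not part of the question:</p>

```python
import numpy as np
from scipy.stats import beta
from scipy.optimize import curve_fit

def beta_peak(x, amp, a, b, loc, scale):
    # A Beta PDF scaled by an amplitude so it can be fitted to peak heights.
    return amp * beta.pdf(x, a, b, loc=loc, scale=scale)

yy = np.array([128459, 1822448, 10216680, 24042041, 30715114, 29537797,
               25022446, 18416199, 14138783, 12116635, 9596337, 7201602,
               5668133, 4671416, 3920953, 3259980, 2756295, 2326780, 2095209,
               1858894, 1646824, 1375129, 1300799, 1253879, 1086045, 968363,
               932041, 793707, 741462, 741593], dtype=float)
xx = np.arange(len(yy), dtype=float)

# Starting guesses and bounds (my own): amplitude ~ total area under the
# curve; the support [loc, loc + scale] is kept slightly wider than the
# sampled x-range so the PDF stays nonzero at every data point.
p0 = [yy.sum(), 1.5, 6.0, -1.0, 45.0]
bounds = ([0.0, 0.5, 0.5, -10.0, 30.0], [np.inf, 50.0, 50.0, -0.01, 200.0])
popt, _ = curve_fit(beta_peak, xx, yy, p0=p0, bounds=bounds, maxfev=20000)
beta_model = beta_peak(xx, *popt)  # overlay this on the raw data to show the fit
```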
|
<python><scipy><curve-fitting><gaussian><beta-distribution>
|
2024-07-16 23:34:11
| 2
| 341
|
Ninja Chris
|
78,756,920
| 2,170,917
|
Stop execution of a python script as if it ran to the end
|
<p>I want to stop the execution of a Python (3.12) script with a behaviour that is identical to the script running to completion in ideally all contexts, such as:</p>
<ul>
<li>run with <code>python script.py</code>;</li>
<li>run with <code>python -i script.py</code>;</li>
<li>run inside Thonny's shell (or any other IDE's shell).</li>
</ul>
<p>To simplify the discussion, let us assume a simple Python script as follows: (EDIT: This is just a simple example to play with; in reality, the exit function could be called from somewhere nested in a branch, a loop, or whatever.)</p>
<pre><code>x = 1
print("HELLO")
magic_exit_function()
print("won't see this")
y = 1
</code></pre>
<p>I want a <code>magic_exit_function</code> that (a) ensures that <code>HELLO</code> is output, but <code>won't see this</code> isn't, (b) if run in interactive mode, ensures that the value of <code>x</code> can be inspected after the script ends, but not the value of <code>y</code>, and (c) doesn't produce any noise on stdout or stderr.</p>
<p>I have tried the following:</p>
<ol>
<li><p>Use <code>exit</code> or <code>quit</code> or <code>sys.exit</code>. This works with <code>python</code>; with <code>python -i</code> it almost works (there is an ugly traceback with SystemExit, not satisfying (c)); with Thonny's shell it ends the subprocess and the value of <code>x</code> cannot be inspected afterwards (thus not satisfying (b)).</p>
</li>
<li><p>Raise a new kind of exception deriving from <code>Exception</code>. This produces noise, but at least I can inspect the value of <code>x</code> even in Thonny's shell.</p>
</li>
<li><p>Raise a new kind of exception deriving from <code>Exception</code> and set the <code>sys.excepthook</code> to ignore this kind of exception specifically. Works well with <code>python</code> and <code>python -i</code>, but still produces noise in Thonny's shell (Thonny seems to catch the exception and ignore <code>sys.excepthook</code>).</p>
</li>
<li><p>Try the same thing, but derive the exception from <code>BaseException</code>. There is no noise in Thonny, but the subprocess ends, and I cannot inspect the value of <code>x</code> afterwards.</p>
</li>
<li><p>Use <code>sys.settrace</code> and the ability to set <code>f_lineno</code> of the current frame. This <em>almost</em> works (in all tested scenarios), but the problem is that I cannot use this to jump to the end of the current module; I can only jump to the last line of the module, and this last line is always executed. This breaks (b), because the <code>y = 1</code> line is executed.</p>
</li>
</ol>
<p>Are there any other possibilities <strong>without</strong> changing the script itself (apart from the <code>magic_exit_function</code>)?</p>
<p>The implementation of the various attempts at <code>magic_exit_function</code> follow:</p>
<pre><code>def magic_exit_function1():
    import sys
    sys.exit()  # or exit() or quit() or raise SystemExit()

def magic_exit_function2():
    class MyExit(Exception):
        pass
    raise MyExit()

def magic_exit_function3():
    import sys

    class MyExit(Exception):
        pass

    def hook(exc_type, exc, tb):
        if exc_type is MyExit:
            return None
        return sys.__excepthook__(exc_type, exc, tb)

    sys.excepthook = hook
    raise MyExit()

def magic_exit_function4():
    import sys

    class MyExit(BaseException):
        pass

    def hook(exc_type, exc, tb):
        if exc_type is MyExit:
            return None
        return sys.__excepthook__(exc_type, exc, tb)

    sys.excepthook = hook
    raise MyExit()

def magic_exit_function5():
    import sys
    import inspect

    def trace(frame, event, arg):
        if event != 'line':
            return trace
        frame.f_lineno = last_line
        return trace

    sys.settrace(lambda *args, **kwargs: None)
    frame = inspect.currentframe().f_back
    _, _, last_line = list(frame.f_code.co_lines())[-1]
    frame.f_trace = trace
</code></pre>
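<p>The criteria (a) and (c) can be checked objectively for any candidate by running the toy script in a subprocess. The harness below is my own addition, shown here exercising attempt 3 (which the question reports works under plain <code>python</code>):</p>

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Candidate under test: attempt 3 (custom exception swallowed by excepthook).
CANDIDATE = textwrap.dedent("""\
    def magic_exit_function():
        import sys
        class MyExit(Exception):
            pass
        def hook(exc_type, exc, tb):
            if exc_type is MyExit:
                return None
            return sys.__excepthook__(exc_type, exc, tb)
        sys.excepthook = hook
        raise MyExit()
""")

SCRIPT = textwrap.dedent("""\
    x = 1
    print("HELLO")
    magic_exit_function()
    print("won't see this")
    y = 1
""")

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'script.py')
    with open(path, 'w') as f:
        f.write(CANDIDATE + SCRIPT)
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True)

# (a): HELLO printed and the later line skipped; (c): nothing on stderr.
print(repr(proc.stdout), repr(proc.stderr))
```

The same harness can be pointed at `[sys.executable, '-i', path]` (feeding stdin) to probe criterion (b) interactively.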
|
<python><exit>
|
2024-07-16 22:05:38
| 0
| 2,801
|
Nikola Benes
|