| QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
77,793,939
| 12,436,050
|
Join pandas dataframes with column having pipe delimited values
|
<p>I have a dataframe df2 where one of the columns contains pipe-separated values. I would like to join df2 with another dataframe df1 using the label column first and, if no match is found, then use the synonym column. The synonym column can also have more than 2 values.</p>
<pre><code>df1
label
Werner syndrome
Ageing
df2
id label synonym
1 Werner's syndrome Werner syndrome|Werner disease
2 Ageing
</code></pre>
<p>Expected output is:</p>
<pre><code>df
label id label synonym
Werner syndrome 1 Werner's syndrome Werner syndrome|Werner disease
Ageing 2 Ageing
</code></pre>
<p>Any help is highly appreciated.</p>
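A sketch of one possible approach (not from the original post): explode the pipe-delimited synonym column into one token per row, build a lookup of both labels and synonyms, and merge df1 against that. Frame and column names follow the question; the label-before-synonym priority handling is an assumption.

```python
import pandas as pd

# Sample frames mirroring the question
df1 = pd.DataFrame({"label": ["Werner syndrome", "Ageing"]})
df2 = pd.DataFrame({
    "id": [1, 2],
    "label": ["Werner's syndrome", "Ageing"],
    "synonym": ["Werner syndrome|Werner disease", ""],
})

# Build a lookup mapping every label and every pipe-separated synonym
# token to its df2 id; labels are concatenated first so they win conflicts
tokens = pd.concat([
    df2[["id", "label"]].rename(columns={"label": "key"}),
    df2.assign(key=df2["synonym"].str.split("|")).explode("key")[["id", "key"]],
])
tokens = tokens[tokens["key"] != ""].drop_duplicates("key", keep="first")

# Merge df1 on the lookup key, then pull in the full df2 row by id
result = (
    df1.merge(tokens, left_on="label", right_on="key", how="left")
       .drop(columns="key")
       .merge(df2, on="id", how="left", suffixes=("", "_df2"))
)
print(result)
```

The second merge brings back the original df2 label and synonym columns, suffixed `_df2` where they collide with df1's.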
|
<python><pandas><dataframe>
|
2024-01-10 14:00:52
| 0
| 1,495
|
rshar
|
77,793,935
| 5,594,712
|
Y-labels on secondary axis on top of each other
|
<p>I'm trying to plot the following and I notice that the labels on the secondary y-axis are all on top of each other.</p>
<p><a href="https://i.sstatic.net/d78HD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d78HD.png" alt="enter image description here" /></a></p>
<p>How can I avoid this?</p>
<p>I am using the code below to make the plots:</p>
<pre><code>plt.figure(figsize=(18, 7))
fig, ax1 = plt.subplots(figsize=(18, 7))
for i in range(0, len(df_data), 1):
    ax2 = ax1.twinx()
    _df_data = df_data[i][(df_data[i].wav_um > 9.29) & (df_data[i].wav_um < 9.51)].reset_index(drop=True)
    ax1.plot(_df_data.wav_um, _df_data.mircat_auc, 'o--', linewidth=1, c='blue', mfc='lightblue', mec='black', ms=4, alpha=0.7, label=hdf5_file_locations[i])
    ax2.plot(_df_data.wav_um, _df_data.thorlabs_pw_mw, 'o--', linewidth=1, c='orange', mfc='salmon', mec='gray', ms=3, alpha=0.7)
    ax1.set_xlabel(r'Wavelength [$\mu m$]', fontsize=14)
    ax1.set_ylabel(r' AUC', fontsize=14, color='blue')
    ax2.set_ylabel(r' power [mW]', fontsize=14, color='orange')
    ax1.tick_params(axis="x", labelsize=15)
    ax2.grid(which='both', color='0.9', linestyle='-', linewidth=1)
</code></pre>
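One likely cause (an assumption, since the full script isn't shown): `ax1.twinx()` is called inside the loop, so every iteration stacks a brand-new secondary axis, each drawing its own y-labels on top of the previous ones. A minimal sketch with synthetic data, creating the twin axis once before the loop:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(9.3, 9.5, 20)  # hypothetical wavelength data

fig, ax1 = plt.subplots(figsize=(18, 7))
ax2 = ax1.twinx()  # create the secondary axis ONCE, outside the loop

for i in range(3):
    ax1.plot(x, np.sin(x + i), "o--", ms=4)
    ax2.plot(x, np.cos(x + i), "o--", ms=3)

ax1.set_xlabel(r"Wavelength [$\mu m$]")
ax1.set_ylabel("AUC", color="blue")
ax2.set_ylabel("power [mW]", color="orange")
```

With one shared twin axis, only one set of secondary y-labels is drawn regardless of how many datasets are plotted.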
|
<python><matplotlib>
|
2024-01-10 14:00:31
| 1
| 1,244
|
Adam Merckx
|
77,793,897
| 4,859,499
|
Numpy: Accessing sub ndarrays with multiple index lists
|
<p>I am trying to access (more precisely: add to) a subset of a numpy array using multiple index lists.
However, what works perfectly with slices does not work as expected with arbitrary index lists:</p>
<pre><code>import numpy as np
# this one works:
A_4th_order = np.zeros( (4,2,4,2) )
sub_a = np.ones( (3,1,3,1) )
i = k = slice(0,3)
j = l = slice(1,2)
A_4th_order[i,j,k,l] += sub_a
# this one doesn't:
B_4th_order = np.zeros( (4,2,4,2) )
sub_b = np.ones( (3,1,3,1) )
i = k = np.array([0,1,2], dtype=int)
j = l = np.array([1], dtype=int)
B_4th_order[i,j,k,l] += sub_b
</code></pre>
<p>How can I achieve this operation in an efficient and readable manner?</p>
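A sketch using `np.ix_`, which builds the same rectangular (outer-product) selection that the slice version performs, instead of the pointwise fancy indexing that plain index arrays trigger:

```python
import numpy as np

B = np.zeros((4, 2, 4, 2))
sub = np.ones((3, 1, 3, 1))
i = k = np.array([0, 1, 2])
j = l = np.array([1])

# np.ix_ turns the 1-D index arrays into an open mesh, so together they
# select a (3, 1, 3, 1) block exactly like the slice-based version
B[np.ix_(i, j, k, l)] += sub
```

Plain `B[i, j, k, l]` instead broadcasts the index arrays against each other and picks individual elements, which is why the shapes no longer match.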
|
<python><numpy><indexing>
|
2024-01-10 13:55:54
| 1
| 457
|
mneuner
|
77,793,887
| 12,692,466
|
Quickfix python: memory leak on UtcTimeStamp
|
<p>I started using the library <a href="https://github.com/quickfix/quickfix" rel="nofollow noreferrer">quickfix</a> and I faced an issue using the python version:</p>
<pre><code>swig/python detected a memory leak of type 'UtcTimeStamp *', no destructor found
</code></pre>
<p>And when I go to the C++ source code of the library I can see that the class <code>FIX::UtcTimeStamp</code> doesn't have any destructor (<a href="https://github.com/quickfix/quickfix/blob/0b88788710b6b9767440cd430bf24c6b6e2080a2/src/C%2B%2B/FieldTypes.h#L583" rel="nofollow noreferrer">FieldTypes.h:583</a>).</p>
<p>Is it a false positive? or a real error and in this case should I modify internaly the source code of <code>quickfix</code>?</p>
<p><code>Python3</code> version: <code>3.10.4</code><br/>
<code>pip</code> version: <code>22.0.2</code><br/>
<code>quickfix</code> version: <code>1.15.1</code></p>
<blockquote>
<p>The <code>quickfix</code> library was not built locally, but installed directly with <code>pip</code>.</p>
</blockquote>
<blockquote>
<p>I'm not good enough in python to know how to trace back the memory leak; don't hesitate to post how in the comments and I will edit the question with the related information.</p>
</blockquote>
<h4>Edit:</h4>
<ul>
<li>After checking the C++ <a href="https://en.cppreference.com/w/cpp/language/destructor" rel="nofollow noreferrer">destructor</a> documentation, it's possible to have an implicit one. So the issue doesn't come from there.</li>
</ul>
|
<python><swig><quickfix>
|
2024-01-10 13:54:11
| 1
| 572
|
DipStax
|
77,793,527
| 4,638,207
|
How to interpret values returned by numpy.ndarray.item() when downcasting floats
|
<p>I work with spatial data and often have coordinates that require, at most, precision to only a few decimal places, e.g. <code>89.995</code>.</p>
<p>A simple example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
arr = np.array([-89.995])
</code></pre>
<p>Inspect the dtypes:</p>
<pre class="lang-py prettyprint-override"><code>>>> arr.dtype
dtype('float64')
>>> arr[0]
-89.995 # This is what I need. Good.
</code></pre>
<p>But for my purposes, I often work with large gridded data where I don't need the precision of 64-bit floats, so I'd like to downcast coordinate values to <code>float32</code> or, ideally, <code>float16</code> to reduce memory load. However, after downcasting, when inspecting the items I see precision issues that don't exist in <code>float64</code>, despite the values being far from exceeding the precision limits of <code>float32</code>.</p>
<p>E.g.</p>
<pre class="lang-py prettyprint-override"><code>>>> arr[0].item()
-89.995
>>> arr[0].astype(np.float32).item()
-89.99500274658203
</code></pre>
<p>I need coordinates to be exact. Since I'm primarily working within a <code>numpy</code> context, should I care about this? Are they functionally the same value (and I'm just misunderstanding how <code>item()</code> works?), or is there actually a fundamental difference between the two values?</p>
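A small sketch illustrating the distinction: the two values really are different numbers, but the difference sits far below typical coordinate precision.

```python
import numpy as np

x64 = np.float64(-89.995)
x32 = np.float32(-89.995)

# -89.995 has no exact binary representation; float64 rounds to a value
# that prints back as -89.995, while float32 keeps only about 7 significant
# decimal digits, so .item() exposes the nearest representable float32
print(float(x32))                # -89.99500274658203
print(float(x32) == float(x64))  # False: genuinely different numbers
print(abs(float(x32) - float(x64)))  # a difference of roughly 3e-6
```

So `item()` is reporting faithfully: after the downcast the stored value is the nearest float32, and whether that matters depends on whether a few microdegrees of error is acceptable for the coordinates.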
|
<python><numpy><casting><floating-point>
|
2024-01-10 12:53:24
| 2
| 572
|
lusk
|
77,793,505
| 3,457,761
|
Code scan issue: URLValidator does not prohibit newlines and tabs
|
<p>I am getting a vulnerability while running a Prisma scan.</p>
<pre><code>- CVM : CVE-2021-32052
- PACKAGE : Python
- VERSION: 3.9.18
- STATUS: fixed in 3.2.2, 3.1.10, 2.2.22
- DESCRIPTION: In Django 2.2 before 2.2.22, 3.1 before 3.1.10, and 3.2 before 3.2.2 (with Python 3.9.5+), URLValidator does not prohibit newlines and tabs (unless the URLField form field is used). If an application uses values with newlines in an HTTP response, header injection can occur. Django itself is unaffected because HttpResponse prohibits newlines in HTTP headers. (CVE-2021-32052)
</code></pre>
<p>We are not using django at all, but it seems some dependency package is causing the above vulnerability to be flagged by the code scan.</p>
<p><strong>Question is</strong>:</p>
<p>How can I check which package version is causing the issue, or what code change is needed to resolve it:</p>
<pre><code>URLValidator does not prohibit newlines and tabs (unless the URLField form field is used). If an application uses values with newlines in an HTTP response, header injection can occur. Django itself is unaffected because HttpResponse prohibits newlines in HTTP headers.
</code></pre>
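A sketch (hypothetical, since the actual environment isn't shown) for checking from Python which installed distributions declare a dependency on Django; if nothing prints, no installed package requires it and the finding is likely a false positive against the base Python image:

```python
# List every installed distribution whose declared requirements mention
# Django, to find which package (if any) drags it in
import importlib.metadata as md

for dist in md.distributions():
    requires = dist.requires or []
    hits = [r for r in requires if r.lower().startswith("django")]
    if hits:
        print(dist.metadata["Name"], "->", hits)
```

The same check can be done from a shell with `pip list` plus `pip show <package>` on any suspicious entry.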
|
<python><security><security-code-scan>
|
2024-01-10 12:49:57
| 0
| 7,288
|
Harsha Biyani
|
77,793,435
| 8,040,369
|
Convert a list of floats in a df column into percentage and round off to 1 decimal digit
|
<p>Is there a way to convert the list of values in a df column to percentages and round them off?
I have a df as</p>
<pre><code>id name value
============================================
1 AA [0.0012, 0.0022, 0.365]
2 BB [0.0412, 0.822, 0.3]
3 CC [0.692, 0.0002, 0.56]
</code></pre>
<p>Is there a way to convert the above df to</p>
<pre><code>id name value
============================================
1 AA [0.1, 0.2, 36.5]
2 BB [4.1, 82.2, 3]
3 CC [69.2, 0.0, 56]
</code></pre>
<p>Thanks,</p>
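A possible sketch using `apply` over the list column, multiplying each element by 100 and rounding to one decimal place (data copied from the question):

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2, 3],
    "name": ["AA", "BB", "CC"],
    "value": [[0.0012, 0.0022, 0.365],
              [0.0412, 0.822, 0.3],
              [0.692, 0.0002, 0.56]],
})

# Scale each element to a percentage and round to one decimal digit
df["value"] = df["value"].apply(lambda xs: [round(x * 100, 1) for x in xs])
print(df)
```

Since the cells hold Python lists rather than scalars, a row-wise `apply` with a list comprehension is simpler than trying to vectorize across the ragged column.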
|
<python><pandas><dataframe>
|
2024-01-10 12:40:02
| 2
| 787
|
SM079
|
77,793,386
| 3,732,793
|
Pandas smarter conditional merge for two dataframes
|
<p>Here I tried to write the shortest possible example code for a conditional merge of two dataframes, where I try to match rows based on a condition. Currently I have to iterate over both dataframes for that.</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame([
    ['a', 'Test1', 2995],
    ['b', 'Test2', 3995],
    ['c', 'Test3', 5000]],
    columns=['ID', 'Name', 'Value'])
df2 = pd.DataFrame([
    ['1', 1, 2],
    ['2', 3, 4],
    ['3', 99, 100]],
    columns=['ID', 'Start', 'Context'])

df_test = pd.DataFrame(columns=['ID', 'Name', 'Value', 'Start'])
i = 0
for index, row1 in df1.iterrows():
    for index, row2 in df2.iterrows():
        if row1['Value'] + 25 > row2['Start'] * 1000 and row1['Value'] < row2['Start'] * 1000:
            df_test.loc[i] = row1
            df_test.loc[i]['Start'] = row2['Start']
            i += 1
</code></pre>
<p>And it works, so df_test is</p>
<pre><code> ID Name Value Start
0 a Test1 2995 3
</code></pre>
<p>Is there a smarter way of doing that?</p>
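One alternative sketch (requires pandas >= 1.2 for `how='cross'`): build the cross product of both frames once and filter it with a vectorized condition, which replaces the nested `iterrows` loops:

```python
import pandas as pd

df1 = pd.DataFrame([['a', 'Test1', 2995],
                    ['b', 'Test2', 3995],
                    ['c', 'Test3', 5000]],
                   columns=['ID', 'Name', 'Value'])
df2 = pd.DataFrame([['1', 1, 2],
                    ['2', 3, 4],
                    ['3', 99, 100]],
                   columns=['ID', 'Start', 'Context'])

# Cross join every row of df1 with every row of df2, then keep only the
# pairs satisfying the condition; the overlapping ID column from df2 is
# suffixed with _2
merged = df1.merge(df2, how='cross', suffixes=('', '_2'))
mask = ((merged['Value'] + 25 > merged['Start'] * 1000)
        & (merged['Value'] < merged['Start'] * 1000))
df_test = merged.loc[mask, ['ID', 'Name', 'Value', 'Start']].reset_index(drop=True)
print(df_test)
```

For large frames the cross product can get big (len(df1) * len(df2) rows), so this trades memory for speed; for interval-style conditions, `pd.merge_asof` or `pd.IntervalIndex` may scale better.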
|
<python><pandas>
|
2024-01-10 12:30:51
| 1
| 1,990
|
user3732793
|
77,793,292
| 7,713,770
|
How to resolve error: collectstatic - Could not find backend 'storages.custom_azure.AzureStaticStorage'
|
<p>I have a django app and I have installed the module: <code>django-storages[azure]</code> with the command: <code>pipenv install django-storages[azure]</code></p>
<p>And now I try to run the collectstatic command. But if I enter the command: <code>python manage.py collectstatic</code></p>
<p>I get this error:</p>
<pre><code>Traceback (most recent call last):
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\core\files\storage\handler.py", line 52, in create_storage
    storage_cls = import_string(backend)
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\utils\module_loading.py", line 30, in import_string
    return cached_import(module_path, class_name)
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\utils\module_loading.py", line 15, in cached_import
    module = import_module(module_path)
  File "C:\Python310\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'storages.custom_azure'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\repos\django_upload\myuploader\manage.py", line 22, in <module>
    main()
  File "C:\repos\django_upload\myuploader\manage.py", line 18, in main
    execute_from_command_line(sys.argv)
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\core\management\__init__.py", line 442, in execute_from_command_line
    utility.execute()
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\core\management\__init__.py", line 436, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\core\management\base.py", line 412, in run_from_argv
    self.execute(*args, **cmd_options)
    output = self.handle(*args, **options)
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 184, in handle
    if self.is_local_storage() and self.storage.location:
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 245, in is_local_storage
    return isinstance(self.storage, FileSystemStorage)
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\utils\functional.py", line 280, in __getattribute__
    value = super().__getattribute__(name)
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\utils\functional.py", line 251, in inner
    self._setup()
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\contrib\staticfiles\storage.py", line 540, in _setup
    self._wrapped = storages[STATICFILES_STORAGE_ALIAS]
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\core\files\storage\handler.py", line 43, in __getitem__
    storage = self.create_storage(params)
  File "C:\Users\engel\.virtualenvs\django_upload-sYcQHG5j\lib\site-packages\django\core\files\storage\handler.py", line 54, in create_storage
    raise InvalidStorageError(f"Could not find backend {backend!r}: {e}") from e
django.core.files.storage.handler.InvalidStorageError: Could not find backend 'storages.custom_azure.AzureStaticStorage': No module named 'storages.custom_azure'
</code></pre>
<p>So my pipfile looks:</p>
<pre><code>[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
django = "*"
django-storages = {extras = ["azure"], version = "*"}
[dev-packages]
[requires]
python_version = "3.10"
</code></pre>
<p>settings.py file:</p>
<pre><code>INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    "uploader",
    "storages",
]
STATIC_URL = '/static/'
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
DEFAULT_FILE_STORAGE = 'uploader.custom_azure.AzureMediaStorage'
STATICFILES_STORAGE = 'storages.custom_azure.AzureStaticStorage'
STATIC_LOCATION = "static"
MEDIA_LOCATION = "media"
AZURE_ACCOUNT_NAME = "name"
AZURE_CUSTOM_DOMAIN = f'{AZURE_ACCOUNT_NAME}.blob.core.windows.net'
STATIC_URL = f'https://{AZURE_CUSTOM_DOMAIN}/{STATIC_LOCATION}/'
MEDIA_URL = f'https://{AZURE_CUSTOM_DOMAIN}/{MEDIA_LOCATION}/'
</code></pre>
<p>and I have a file custom_azure.py in the app uploader:</p>
<pre><code>from storages.backends.azure_storage import AzureStorage

class AzureMediaStorage(AzureStorage):
    account_name = 'name'
    # Must be replaced by your <storage_account_key>
    account_key = 'key'
    azure_container = 'media'
    expiration_secs = None

class AzureStaticStorage(AzureStorage):
    account_name = 'name'
    account_key = 'key'
    azure_container = 'static'
    expiration_secs = None
</code></pre>
<p>Question: how to resolve the error: <code>Could not find backend 'storages.custom_azure.AzureStaticStorage': No module named 'storages.custom_azure' </code></p>
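One likely cause, given that custom_azure.py lives in the uploader app: the `STATICFILES_STORAGE` dotted path points at the installed `storages` package instead of the app. A sketch of the corrected setting (an assumption based on the file location shown above):

```python
# settings.py: custom_azure.py lives inside the "uploader" app, so the
# dotted path must start with "uploader" (as DEFAULT_FILE_STORAGE already
# does), not with the third-party "storages" package
STATICFILES_STORAGE = 'uploader.custom_azure.AzureStaticStorage'
```

Python then imports `uploader.custom_azure` rather than looking for a nonexistent `storages.custom_azure` module.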
|
<python><django><azure><azure-blob-storage>
|
2024-01-10 12:16:59
| 1
| 3,991
|
mightycode Newton
|
77,792,905
| 5,663,844
|
PySide2: Load SVG from Variable rather than from file
|
<p>I want to display an <code>svg</code> file using PySide. But I would prefer not having to create a <code>file.svg</code> and then load it using <code>QSvgWidget</code></p>
<p>Rather, I already have the contents of <code>file.svg</code> stored in a variable as a string.</p>
<p>How would I go about loading the SVG directly from the variable rather than from the file?</p>
<p>Best</p>
|
<python><pyside2>
|
2024-01-10 11:08:21
| 2
| 480
|
Janosch
|
77,792,821
| 5,429,320
|
Azure Function App in Docker Desktop does not have access to local sql server database on host
|
<p>I have my mssql server on my local windows desktop, which holds the development database for my application. I have an Azure function app written in python which has a couple of endpoints that query the database.</p>
<p>If I run this function app in VSCode it connects to the database. It uses the following env variables:</p>
<pre><code> "DB_HOST": "HOME\\SQLEXPRESS",
"DB_NAME": "devdb.local",
"DB_USERNAME": "user",
"DB_PASS": "...",
</code></pre>
<p>I am trying to run my function app in docker (Docker Desktop with Linux containers) so I can run multiple function apps at the same time.</p>
<p>I have the following Dockerfile:</p>
<pre><code>FROM python:3.10-slim-buster
# Set a non-root user and create the working directory.
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc \
g++ \
wget \
gnupg \
apt-transport-https \
lsb-release \
# Add Microsoft package signing key and repository
&& wget -q https://packages.microsoft.com/keys/microsoft.asc -O- | apt-key add - \
&& wget -q https://packages.microsoft.com/config/debian/10/prod.list -O /etc/apt/sources.list.d/mssql-release.list \
&& apt-get update \
# Install ODBC Driver for SQL Server
&& ACCEPT_EULA=Y apt-get install -y msodbcsql17 \
# Other dependencies
&& apt-get install -y unixodbc-dev \
&& apt-get install -y libicu-dev \
&& apt-get install -y azure-functions-core-tools-4 \
&& rm -rf /var/lib/apt/lists/* /packages-microsoft-prod.deb
# Copy only the requirements file and install Python dependencies.
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application code.
COPY . .
# Expose the port on which the app will run.
EXPOSE 7071
# Start the function app.
CMD ["func", "start", "--python"]
</code></pre>
<p>Docker Compose File:</p>
<pre><code>version: '3.8'

services:
  app:
    image: func-app:latest
    ports:
      - "8001:7071"
    environment:
      - AzureWebJobsStorage=UseDevelopmentStorage=true
      - AzureWebJobsFeatureFlags=EnableWorkerIndexing
      - FUNCTIONS_WORKER_RUNTIME=python
      - ENVIRONMENT=dev
      - DB_HOST=HOME\\SQLEXPRESS
      - DB_NAME=devdb.local
      - DB_USERNAME=user
      - DB_PASS=...
      - AZURE_STORAGE_ACCOUNT_CONNECTIONSTRING=...
      - AZURE_CONTAINER=media
      - CARD_MARKET_APP_SECRET=...
      - CARD_MARKET_APP_TOKEN=...
      - DEV_SHOP=...
      - SETS_PROCESSED=100
    command: ["func", "start", "--python"]
</code></pre>
<p>When I run this in docker desktop that app runs as expected but I get the following error:</p>
<pre><code>{
"error": "Database connection failed: Failed to connect to database: ('01000', \"[01000] [unixODBC][Driver Manager]Can't open lib '/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.10.so.5.1' : file not found (0) (SQLDriverConnect)\")"
}
</code></pre>
<p>in my requirements.txt I have the packages for database connection:</p>
<ul>
<li>mysql-connector-python==8.2.0</li>
<li>pyodbc==5.0.1</li>
</ul>
<p>I can't seem to figure out why my function app in a docker container does not have access to my database on the host.</p>
|
<python><sql-server><azure><docker><azure-functions>
|
2024-01-10 10:54:41
| 1
| 2,467
|
Ross
|
77,792,819
| 1,619,432
|
Linear fit through point (using weights in NumPy Polynomial.fit())
|
<p>How can a linear fit be forced to exactly pass through a point (specifically, the origin) with <a href="https://numpy.org/doc/stable/reference/generated/numpy.polynomial.polynomial.Polynomial.fit.html" rel="nofollow noreferrer">NumPy's Polynomial.fit()</a> function?
I've tried some weights, but I'm unclear on how this works (despite <a href="https://stackoverflow.com/questions/19667877/what-are-the-weight-values-to-use-in-numpy-polyfit-and-what-is-the-error-of-the">What are the weight values to use in numpy polyfit and what is the error of the fit</a>) so I don't know whether this is the approach to use for an exact solution:</p>
<pre><code>""" force fit through point """
import numpy as np
import matplotlib.pyplot as plt
x = np.array([0, 1, 2, 3])
y = np.array([0, 4, 5, 6])
fit, info = np.polynomial.polynomial.Polynomial.fit(
x, y, 1, full=True, w=[1, 0.1, 0.1, 0.1] #!?
)
print(fit)
fitfunc = fit.convert()
print(fitfunc)
print(fitfunc.coef)
print(info)
print(fitfunc(0)) # y-intercept should be zero
fig, ax1 = plt.subplots()
ax1.plot(x, y, ".")
ax1.plot(x, fitfunc(x), label=str(fitfunc))
plt.legend()
plt.show()
</code></pre>
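Weights can only approximate an exact constraint. For a fit forced exactly through the origin, one sketch is to drop the constant term entirely and solve y = m*x by least squares, instead of `Polynomial.fit()`:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 4.0, 5.0, 6.0])

# With no intercept column in the design matrix, the fitted line passes
# through the origin exactly; the solution is sum(x*y)/sum(x*x)
m, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)
slope = m[0]
print(slope)  # 32/14
```

Fixing other points works the same way after shifting the data so the fixed point sits at the origin, fitting, and shifting back.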
|
<python><numpy><linear-regression>
|
2024-01-10 10:54:15
| 2
| 6,500
|
handle
|
77,792,759
| 7,295,599
|
How to rotate+scale PDF pages around the center with pypdf?
|
<p>I would like to rotate PDF pages around the center (other than just multiples of 90°) in a PDF document and optionally scale them to fit into the original page.</p>
<p>Here on StackOverflow, I found a few similar questions, however, mostly about the outdated PyPDF2.
And in the <a href="https://pypdf.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">latest pypdf documentation</a>, I could not find (or overlooked) a recipe to rotate pages around the center, e.g. for slightly tilted scanned documents, which require rotation of a few degrees.</p>
<p>I know that there is the <code>Transformation</code> class, but the standard rotation is around the lower-left corner, and the documentation does not explain in detail what the matrix elements actually are.</p>
<p>How to rotate a PDF page around the center and optionally scale it that it fits into the original page?</p>
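A sketch of the underlying math: composing translate-to-origin, rotate/scale, translate-back yields a rotation about the page center. The element names follow the PDF current-transformation-matrix convention (a, b, c, d, e, f); feeding the resulting tuple to pypdf, e.g. via `page.add_transformation(Transformation(ctm=(a, b, c, d, e, f)))`, is the presumed usage.

```python
import numpy as np

def center_rotation_matrix(angle_deg, width, height, scale=1.0):
    """PDF transformation coefficients (a, b, c, d, e, f) rotating by
    angle_deg counter-clockwise about the page center with optional
    uniform scaling; points map as x' = a*x + c*y + e, y' = b*x + d*y + f."""
    t = np.radians(angle_deg)
    a = scale * np.cos(t)
    b = scale * np.sin(t)
    cx, cy = width / 2.0, height / 2.0
    # compose translate(-cx, -cy), rotate+scale, translate(cx, cy)
    e = cx - a * cx + b * cy
    f = cy - b * cx - a * cy
    return (a, b, -b, a, e, f)

# e.g. 3 degrees about the center of an A4-sized page (595 x 842 points)
a, b, c, d, e, f = center_rotation_matrix(3, 595, 842)
```

Choosing `scale = min(width, height) / (width*|sin| + height*|cos|)`-style factors would shrink the rotated page back inside the original media box; the exact fit criterion depends on the use case.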
|
<python><pdf><rotation><pypdf>
|
2024-01-10 10:46:20
| 2
| 27,030
|
theozh
|
77,792,697
| 6,630,397
|
How to properly aggregate Decimal values in a django app: 'decimal.Decimal' object has no attribute 'aggregate'
|
<p>I'm in a <a href="https://django-tables2.readthedocs.io/en/latest/" rel="nofollow noreferrer">django-tables2</a> <code>Table</code>, trying to compute the sum of a column (based on a <code>MoneyField</code> (<a href="https://django-money.readthedocs.io/en/latest/" rel="nofollow noreferrer"><code>django-money</code></a>) in the model, see hereunder):</p>
<pre class="lang-py prettyprint-override"><code>import django_tables2 as table

class PriceAmountCol(table.Column):
    # print(f"Total price is: {record.price.amount}")
    # print(f"Type of total price is: {type(record.price.amount)}")
    def render(self, value, bound_column, record):
        total_price = record.price.amount.aggregate(
            total=Sum("price")
        )

class MyTable(table.Table):
    # Fields
    price = PriceAmountCol(
        verbose_name=_("Price [EUR]"),
    )
    #...
</code></pre>
<p>But this error is raised:</p>
<pre><code> File "/code/djapp/apps/myapp/tables.py", line 192, in render
total_price = record.price.amount.aggregate(
AttributeError: 'decimal.Decimal' object has no attribute 'aggregate'
</code></pre>
<p>If not commented out, the two <code>print</code> instructions give:</p>
<pre class="lang-py prettyprint-override"><code>Total price is: 112.80
Type of total price is: <class 'decimal.Decimal'>
</code></pre>
<p>But the value <code>112.80</code> is only the first one in the table. There are others.</p>
<p>How could I properly aggregate price values (<code>Decimal</code>) in my current table column?</p>
<p>The field in the model is as follow:</p>
<pre class="lang-py prettyprint-override"><code> price = MoneyField(
default=0.0,
decimal_places=2,
max_digits=12,
default_currency="EUR",
verbose_name=_("Price [EUR]"),
)
</code></pre>
<h3>Version info</h3>
<p>django 4.2.8<br />
djmoney 3.4.1<br />
django-tables2 2.7.0<br />
python 3.10.6</p>
<h3>Related information</h3>
<p><a href="https://django-tables2.readthedocs.io/en/latest/pages/column-headers-and-footers.html#adding-column-footers" rel="nofollow noreferrer">https://django-tables2.readthedocs.io/en/latest/pages/column-headers-and-footers.html#adding-column-footers</a><br />
<a href="https://stackoverflow.com/questions/37701875/django-tables2-column-total">Django-tables2 column total</a><br />
<a href="https://stackoverflow.com/questions/51216880/django-tables2-footer-sum-of-computed-column-values">Django-tables2 footer sum of computed column values</a></p>
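The error itself comes from calling `aggregate()` (a QuerySet method) on a single `Decimal` cell value. A minimal sketch of the distinction with plain Decimals (the prices are hypothetical stand-ins for the table rows):

```python
from decimal import Decimal

# record.price.amount is one Decimal per row; aggregate() only exists on
# querysets. To total the column, sum the Decimals across all rows:
prices = [Decimal("112.80"), Decimal("45.20"), Decimal("10.00")]
total = sum(prices, Decimal("0"))
print(total)  # 168.00
```

In the table context the same idea would mean summing over all records, e.g. `sum(r.price.amount for r in table.data)` in a column footer callable, or running `queryset.aggregate(total=Sum("price"))` once on the queryset rather than inside a per-cell `render` method.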
|
<python><django><decimal><django-tables2><django-money>
|
2024-01-10 10:37:52
| 1
| 8,371
|
swiss_knight
|
77,792,636
| 8,532,748
|
OpenTelemetry not logging messages with exception details to Application Insights from python code
|
<p>I am in process of migrating a python 3.10 project from OpenCensus to OpenTelemetry as the former is being <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/opentelemetry-python-opencensus-migrate?tabs=python" rel="nofollow noreferrer">sunsetted this year</a>.</p>
<p>The logs are working correctly for standard <code>info</code>, <code>warning</code>, <code>error</code>, etc logging but I am having an issue with logging <code>exceptions</code>.</p>
<p>Take the following example code:</p>
<pre><code>from azure.monitor.opentelemetry import configure_azure_monitor
import logging

configure_azure_monitor(
    connection_string="MY_APPINSIGHTS_CONNECTION_STRING",
)

try:
    logging.info("My INFO message")
    logging.warning("My WARNING message")
    logging.error("My ERROR message")
    raise Exception("My EXCEPTION message")
except Exception:
    logging.exception('My CUSTOM message')
    logging.error('My CUSTOM message', stack_info=True, exc_info=True)
</code></pre>
<p>The logs before the exception is raised work as expected. What I expected for the logs in the except block is that an Application Insights log would be created with an <code>outerMessage</code> or <code>details..message</code> of "<em>My CUSTOM message</em>", or at least "<em>My CUSTOM message</em>" somewhere in the record. What I am seeing when I check, though, is the following:</p>
<pre><code>exceptions
| order by timestamp desc
</code></pre>
<p><a href="https://i.sstatic.net/XJjO0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XJjO0.png" alt="enter image description here" /></a></p>
<p>"<em>My CUSTOM message</em>" is nowhere to be found. Is there a configuration option I can set so that I get the custom error message? Note I get the exact same log for the following two lines:</p>
<pre><code>logging.exception('My CUSTOM message')
logging.error('My CUSTOM message', stack_info=True, exc_info=True)
</code></pre>
<p>This behaviour is different from the old OpenCensus behaviour which did log the custom message within the record along with the exception data.</p>
|
<python><azure><logging><azure-application-insights><open-telemetry>
|
2024-01-10 10:29:45
| 1
| 4,260
|
SBFrancies
|
77,792,415
| 2,784,493
|
How can I run a python github project to my nodejs project?
|
<p>So I want to use this python project <a href="https://github.com/m-bain/whisperX" rel="nofollow noreferrer">https://github.com/m-bain/whisperX</a> and access it in my nodejs project.</p>
<p>I am thinking of using <a href="https://www.npmjs.com/package/python-shell" rel="nofollow noreferrer">python-shell</a> but I am not really sure which .py file to target since this is a github project. I am not a python guy, so I am just generally confused about how to include that in my project and access the <code>whisperx</code> "command" (using <a href="https://nodejs.org/api/child_process.html#child_processexecsynccommand-options" rel="nofollow noreferrer">https://nodejs.org/api/child_process.html#child_processexecsynccommand-options</a>).</p>
<p>ChatGPT told me to clone the github repo into my repo and then access it, but if I do that, I still don't know which .py file to target. Do I have to "bundle" it as a single .py file somehow?</p>
|
<javascript><python><node.js><interop><openai-whisper>
|
2024-01-10 09:56:18
| 1
| 4,680
|
I am L
|
77,792,414
| 2,735,286
|
Handling psycopg_pool.PoolTimeout connection errors
|
<p>How do you deal with pooled connection errors when using the Python library <code>psycopg = {extras = ["binary", "pool"], version = "^3.1.16"}</code>?</p>
<p>For context: I have implemented a Python web app which runs on aiohttp and uses a connection pool to access Postgres.</p>
<p>The pool is instantiated at startup, like so:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from psycopg_pool import AsyncConnectionPool

def create_pool():
    async_pool = AsyncConnectionPool(conninfo=db_cfg.db_conn_str, open=False)
    logger.info("Using %s", db_cfg.db_conn_str)
    return async_pool

asynch_pool = create_pool()

async def open_pool():
    try:
        await asynch_pool.open()
        await asynch_pool.wait()
        logger.info("Asynch connection pool opened")
    except Exception:
        logger.error("Could not open pool")

asyncio.run(open_pool())
</code></pre>
<p>Then we open the connections and open cursors like this:</p>
<pre class="lang-py prettyprint-override"><code>async def create_cursor(func: Callable) -> Any:
    await asynch_pool.check()
    async with asynch_pool.connection() as conn:
        async with conn.cursor() as cur:
            return await func(cur)
</code></pre>
<p>After starting up the application, all works fine for an hour or so. However, after some time all connections in the pool become unusable and time out every time.</p>
<p>Does anyone know what is wrong here?</p>
<p>Just some extra information about my environment:</p>
<ul>
<li>Python 3.12</li>
<li>Conda 22.9.0</li>
<li>aiohttp 3.9.1</li>
</ul>
|
<python><psycopg3>
|
2024-01-10 09:56:14
| 1
| 14,809
|
gil.fernandes
|
77,791,698
| 13,132,728
|
How to find the most common pairings of size n in a set of values out of all rows pandas
|
<h2>MY DATA</h2>
<p>Let's say I have the following dataframe where each row represents a unique shopping cart:</p>
<pre><code>pd.DataFrame({'0':['banana','apple','orange','milk'],'1':['apple','milk','bread','cheese'],'2':['bread','cheese','banana','eggs']})
        0       1       2
0  banana   apple   bread
1   apple    milk  cheese
2  orange   bread  banana
3    milk  cheese    eggs
</code></pre>
<h2>WHAT I AM TRYING TO DO</h2>
<p>I am trying to create a list of the most common pairings of size n out of each of these shopping carts. For example, the most common pairings of size 2 would be <code>banana, bread</code> and <code>milk, cheese</code></p>
<pre><code>pairing count
banana, bread 2
milk, cheese 2
apple, bread 1
...
orange, banana 1
</code></pre>
<p><em>To clarify, order does not matter here, or in other words, whichever item shows up in the cart first is irrelevant. <code>banana, bread</code> is the same as <code>bread, banana</code></em></p>
<h2>WHAT I HAVE TRIED</h2>
<p>I tried putting all unique values in a list, iterating through each row, and brute-forcing the pairings with <code>itertools</code>, but this seems like a very hacky and unpythonic workaround; plus, I didn't even get it to work properly.</p>
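For what it's worth, `itertools` is arguably the idiomatic tool here; a sketch that counts unordered pairings with frozensets (data copied from the question):

```python
import pandas as pd
from itertools import combinations
from collections import Counter

df = pd.DataFrame({'0': ['banana', 'apple', 'orange', 'milk'],
                   '1': ['apple', 'milk', 'bread', 'cheese'],
                   '2': ['bread', 'cheese', 'banana', 'eggs']})

n = 2  # pairing size
# frozenset makes (banana, bread) and (bread, banana) count as one pairing
counts = Counter(
    frozenset(combo)
    for _, row in df.iterrows()
    for combo in combinations(row, n)
)

result = (pd.Series({', '.join(sorted(k)): v for k, v in counts.items()})
            .sort_values(ascending=False)
            .rename_axis('pairing')
            .reset_index(name='count'))
print(result.head())
```

Changing `n` gives triples, quadruples, and so on; for very wide carts the combination count grows combinatorially, which is the usual motivation for dedicated frequent-itemset algorithms like Apriori.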
|
<python><pandas>
|
2024-01-10 07:56:41
| 1
| 1,645
|
bismo
|
77,791,623
| 2,149,718
|
Recording script based on gstreamer doesn’t work with Dahua camera
|
<p>I have the following python script that records video from an rtsp camera:</p>
<pre><code># initialize GStreamer
Gst.init(None)
# calculate pipeline string
pipeline_str = ...
pipeline_str += f"! filesink location={full_path}.mkv"
# build the pipeline
pipeline = Gst.parse_launch(pipeline_str)
# start playing
pipeline.set_state(Gst.State.PLAYING)
# sleep for duration seconds
time.sleep(opt.duration)
print("Send EoS")
Gst.Element.send_event(pipeline, Gst.Event.new_eos())
# wait until EOS or error
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS)
# free resources
print("Switch to NULL state")
pipeline.set_state(Gst.State.NULL)
</code></pre>
<p>Above script works well with Hikvision camera. Sample pipeline string for Hikvision camera:</p>
<blockquote>
<p>rtspsrc location=rtsp://192.168.1.20/Streaming/Channels/1/
user-id=admin user-pw=mypass ! application/x-rtp, media=video,
encoding-name=H264 ! queue ! rtph264depay ! h264parse ! nvv4l2decoder
! nvvidconv ! video/x-raw(memory:NVMM),width=1280,height=720 !
nvv4l2h265enc bitrate=2000000 ! h265parse ! matroskamux ! filesink
location=/home/nvidia/recordings/20240109_2044.mkv</p>
</blockquote>
<p>For Dahua camera I am using the following pipeline string (it’s the same string only location part differs):</p>
<blockquote>
<p>rtspsrc
location=rtsp://admin:pass@192.168.1.2:554/cam/realmonitor?channel=1/subtype=0 user-id=admin user-pw=pass ! application/x-rtp, media=video,
encoding-name=H264 ! queue ! rtph264depay ! h264parse ! nvv4l2decoder
! nvvidconv ! video/x-raw(memory:NVMM),width=1280,height=720 !
nvv4l2h265enc bitrate=2000000 ! h265parse ! matroskamux ! filesink
location=/home/nvidia/recordings/20240109_2047.mkv</p>
</blockquote>
<p>All I get with the second pipeline is an empty file of size 1 KB. Any idea what might be wrong?</p>
<p>Please note that Dahua camera playback has been tested with OpenCV. Using the following lines I can play back the video stream from the Dahua camera with no issues:</p>
<pre><code># RTSP stream URL
rtsp_url = "rtsp://admin:pass@192.168.1.2:554/cam/realmonitor?channel=1&subtype=0"
# Create a VideoCapture object
cap = cv2.VideoCapture(rtsp_url)
</code></pre>
<p><strong>UPDATE:</strong></p>
<p>I have managed to narrow down the issue. If, when using Dahua camera, I substitute <code>nvv4l2decoder</code> with <code>decodebin</code> then the script works ok.</p>
<p>Any idea why <code>nvv4l2decoder</code> cannot handle a stream coming from a Dahua camera?</p>
|
<python><gstreamer><rtsp><ip-camera>
|
2024-01-10 07:41:36
| 1
| 72,345
|
Giorgos Betsos
|
77,791,600
| 13,999,613
|
Odoo14 Action Server Error - Unable to Call Python Method Correctly
|
<p>I am facing an issue while trying to define an action server in Odoo14. The objective is to call a Python method that returns a window action because I need to access fields in the current model and pass them to the context. Unfortunately, directly using a window action with a context that calls fields doesn't seem to be working as expected. Below is the code snippet I am using:</p>
<p><strong>XML Code:</strong></p>
<pre><code><field name="name">Créer un paiement</field>
<field name="model_id" ref="account.model_account_bank_statement_line"/>
<field name="binding_model_id" ref="account.model_account_bank_statement_line"/>
<field name="state">code</field>
<field name="binding_view_types">list</field>
<field name="code">action = model.action_payment_bank_statement_line()</field>
</code></pre>
<p><strong>Python Method:</strong></p>
<pre><code>def action_payment_bank_statement_line(self):
action_ref = 'account.view_account_payment_form'
print(action_ref)
action = self.env['ir.actions.act_window']._for_xml_id(action_ref)
action['context'] = {
'default_payment_type': 'outbound',
'default_partner_type': 'customer',
'default_ref': self.payment_ref,
'default_amount': self.amount,
'default_date': datetime.strptime(self.date, '%Y-%m-%d')
}
action['view_mode'] = 'form'
action['target'] = 'new'
return action
</code></pre>
<p>However, I'm encountering the following error when I click on the action button:</p>
<pre><code> result = request.dispatch()
File "/opt/odoo14/odoo/odoo/http.py", line 696, in dispatch
result = self._call_function(**self.params)
File "/opt/odoo14/odoo/odoo/http.py", line 370, in _call_function
return checked_call(self.db, *args, **kwargs)
File "/opt/odoo14/odoo/odoo/service/model.py", line 94, in wrapper
return f(dbname, *args, **kwargs)
File "/opt/odoo14/odoo/odoo/http.py", line 358, in checked_call
result = self.endpoint(*a, **kw)
File "/opt/odoo14/odoo/odoo/http.py", line 919, in __call__
return self.method(*args, **kw)
File "/opt/odoo14/odoo/odoo/http.py", line 544, in response_wrap
response = f(*args, **kw)
File "/opt/odoo14/odoo/addons/web/controllers/main.py", line 1728, in run
result = action.run()
File "/opt/odoo14/odoo/odoo/addons/base/models/ir_actions.py", line 632, in run
res = runner(run_self, eval_context=eval_context)
File "/opt/odoo14/odoo/addons/website/models/ir_actions.py", line 61, in _run_action_code_multi
res = super(ServerAction, self)._run_action_code_multi(eval_context)
File "/opt/odoo14/odoo/odoo/addons/base/models/ir_actions.py", line 501, in _run_action_code_multi
safe_eval(self.code.strip(), eval_context, mode="exec", nocopy=True) # nocopy allows to return 'action'
File "/opt/odoo14/odoo/odoo/tools/safe_eval.py", line 347, in safe_eval
raise ValueError('%s: "%s" while evaluating\n%r' % (ustr(type(e)), ustr(e), expr))
Exception
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/odoo14/odoo/odoo/http.py", line 652, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/opt/odoo14/odoo/odoo/http.py", line 317, in _handle_exception
raise exception.with_traceback(None) from new_cause
ValueError: <class 'AssertionError'>: "" while evaluating
'action = record.action_payment_bank_statement_line()'
</code></pre>
<p>I would appreciate any assistance in identifying and correcting the issue in the code, or any alternative suggestions for defining the action server correctly.</p>
|
<python><xml><odoo><odoo-14>
|
2024-01-10 07:35:58
| 1
| 311
|
Ka_
|
77,791,525
| 13,132,728
|
How to get the value counts of each unique value in a row for all rows in a pandas dataframe
|
<h2>MY DATA</h2>
<p>Let's say I have a dataframe where each row represents one shopping cart, like so:</p>
<pre><code>pd.DataFrame({'0':['banana','apple','orange'],'1':['apple','milk','bread'],'2':['bread','cheese','banana']})
0 1 2
0 banana apple bread
1 apple milk cheese
2 orange bread banana
</code></pre>
<h2>WHAT I AM TRYING TO DO</h2>
<p>What I would like to do is get the value counts of each shopping cart and create an overall list. For example:</p>
<pre><code> count
banana 2
bread 2
apple 2
milk 1
cheese 1
orange 1
</code></pre>
<h2>WHAT I HAVE TRIED</h2>
<p>I tried the following, thinking a row-wise function application would work. It almost gets me there, but not quite:</p>
<pre><code>df.apply(lambda x : x.value_counts())
0 1 2
apple 1.0 1.0 NaN
banana 1.0 NaN 1.0
bread NaN 1.0 1.0
cheese NaN NaN 1.0
milk NaN 1.0 NaN
orange 1.0 NaN NaN
</code></pre>
<p>After this, I would need to sum those columns into one column to get the overall count. Am I missing a parameter, or is there a more pythonic way to do this in a single call?</p>
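Since every cell holds the same kind of value, one way to get the overall counts in a single call is to flatten the frame first and count once. A minimal sketch using the sample data above:

```python
import pandas as pd

df = pd.DataFrame({'0': ['banana', 'apple', 'orange'],
                   '1': ['apple', 'milk', 'bread'],
                   '2': ['bread', 'cheese', 'banana']})

# stack() flattens all cells into one Series; value_counts() then counts them all
counts = df.stack().value_counts()
print(counts)
```

`df.to_numpy().ravel()` fed to `collections.Counter` would work as well; `stack()` just keeps everything in pandas.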
|
<python><pandas>
|
2024-01-10 07:20:38
| 1
| 1,645
|
bismo
|
77,791,392
| 6,822,178
|
Proportion of each unique value of a chosen column for each unique combination of the other in a DataFrame
|
<p>I have a DataFrame with a variable number of columns. I need to calculate the proportion of each unique value of a chosen column for each unique combination of the other. For example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
a="A";b="B"
df = pd.DataFrame({
"n": [a,a,a,b,a,b,b,b],
"X": [0,0,0,0,1,1,1,1],
"Y": [0,0,1,1,0,0,0,0],
})
print(df)
</code></pre>
<pre><code> n X Y
0 A 0 0
1 A 0 0
2 A 0 1
3 B 0 1
4 A 1 0
5 B 1 0
6 B 1 0
7 B 1 0
</code></pre>
<p>Say we want to calculate the proportion of unique <code>n</code> (absolute frequency <code>n_ru</code>) for each unique combination of <code>X</code> and <code>Y</code> (absolute frequency <code>n_u</code>).
Here for example, we got 3 <code>n=B</code> for the combination of 4 <code>(X=1,Y=0)</code>, thus the proportion is <code>3/4</code>, etc.</p>
<p>I thought to do it like</p>
<pre class="lang-py prettyprint-override"><code># complete column list
col = list(df.columns.values)
# column list except n
cov = list(df.columns[1:].values)
# merge absolute frequencies
count = pd.merge(
# absolute freq of each (X,Y)
df.groupby(cov).count(),
# absolute freq of n for each (X,Y)
df.groupby(col).aggregate("n").count(),
# options
on=cov, suffixes=["_u", "_ru"]
)
print(count)
# calculate ell metric
ell = np.sum(
np.log(count["n_ru"]/count["n_u"])
)
print(f"ell = {ell:.3f}")
</code></pre>
<pre><code> n_ru n_u
X Y
0 0 2 2
1 1 2
1 1 2
1 0 1 4
0 3 4
ell = -3.060
</code></pre>
<p>Is there a better way to do it?</p>
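As a sketch of a shorter route (same data as above): group on the covariates and use <code>value_counts(normalize=True)</code>, which yields <code>n_ru / n_u</code> per combination directly, so the merge of two frequency tables is not needed.

```python
import numpy as np
import pandas as pd

a, b = "A", "B"
df = pd.DataFrame({
    "n": [a, a, a, b, a, b, b, b],
    "X": [0, 0, 0, 0, 1, 1, 1, 1],
    "Y": [0, 0, 1, 1, 0, 0, 0, 0],
})

# proportion of each n within each (X, Y) group, in one call
prop = df.groupby(["X", "Y"])["n"].value_counts(normalize=True)
print(prop)

# the ell metric is then a single reduction over the proportions
ell = np.log(prop).sum()
print(f"ell = {ell:.3f}")  # ell = -3.060
```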
|
<python><pandas><dataframe><count><unique>
|
2024-01-10 06:51:23
| 1
| 2,289
|
Max Pierini
|
77,791,241
| 126,362
|
Consistent order of tracks in mediainfo library (pymediainfo)
|
<p>I would like to get a consistent ordering of tracks from mediainfo - in other words, for a given media file, the order should be the same every time the code is run, on any machine.</p>
<p>(Assuming for now the same library version is used, but if there is a way to get an ordering that won't change with future library versions, that would be excellent too).</p>
<p>I think mediainfo already tries to ensure a consistent order. I have read <a href="https://sourceforge.net/p/mediainfo/discussion/297610/thread/42b6b1b5/" rel="nofollow noreferrer">this helpful answer</a> which says the order is</p>
<blockquote>
<p>... video first, then audio, then text, then other (including timecode); inside each kind of track, it is in the ID ascending order</p>
</blockquote>
<p>This is good information about the rationale/intended behaviour but I think it is outdated, because it doesn't seem to be sorted by track ID. Using <a href="https://github.com/ietf-wg-cellar/matroska-test-files/blob/master/test_files/test5.mkv" rel="nofollow noreferrer">this test mkv file</a> from <a href="https://github.com/ietf-wg-cellar/matroska-test-files" rel="nofollow noreferrer">this repo</a>, I can see the text streams are not in order of ID (the text track with id 7 comes last, after id 11). They do however seem to be sorted (within each track type) by "StreamOrder" property.</p>
<p>I noticed there is also "typeorder" attribute in the XML output (and "@typeorder" in the JSON output) which seems to also describe the ordering within each track type.</p>
<p>In pymediainfo there is a "stream_identifier" which seems to give the correct order.</p>
<p>I am wondering, to save having to figure out which property I should be using for the ordering/sequence number and test across different media files, handle edge cases, etc., if I can just rely on the index of the track within the list of tracks. E.g. in pymediainfo <code>for i, track in enumerate(media_info.tracks): ...</code>, and use the index <code>i</code> as a sequence number for ordering.</p>
<p>(Note: I'm using pymediainfo, but I think this question applies to mediainfo/libmediainfo generally because from the pymediainfo source code I can see it does not sort the tracks at all but uses the ordering outputted by the underlying library's "OLDXML" output).</p>
<p>Thank you.</p>
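If relying on the library's emitted order feels fragile, one defensive option is to sort explicitly on the two properties the ordering is supposed to be derived from, so a library-version change in emission order cannot affect you. This is only a sketch against stand-in objects — in real code you would iterate <code>media_info.tracks</code>, and the attribute names <code>track_type</code> and <code>stream_identifier</code> are assumptions taken from the question:

```python
from types import SimpleNamespace

# stand-in tracks; in real use these would be pymediainfo Track objects
tracks = [
    SimpleNamespace(track_type='Text', stream_identifier=5),
    SimpleNamespace(track_type='Video', stream_identifier=0),
    SimpleNamespace(track_type='Audio', stream_identifier=1),
    SimpleNamespace(track_type='Text', stream_identifier=4),
]

# video first, then audio, then text, then other; within a type, by stream order
type_rank = {'General': 0, 'Video': 1, 'Audio': 2, 'Text': 3, 'Other': 4}
ordered = sorted(tracks, key=lambda t: (type_rank.get(t.track_type, 99),
                                        int(t.stream_identifier)))
print([(t.track_type, t.stream_identifier) for t in ordered])
```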
|
<python><metadata><mediainfo>
|
2024-01-10 06:11:32
| 0
| 835
|
ejm
|
77,791,091
| 10,200,497
|
Finding the maximum value between two columns where one of them is shifted and change the value of last row
|
<p>My DataFrame is:</p>
<pre><code>df = pd.DataFrame(
{
'a': [20, 9, 31, 40],
'b': [1, 10, 17, 30],
}
)
</code></pre>
<p>Expected output: Creating column <code>c</code> and <code>name</code></p>
<pre><code> a b c name
0 20 1 20 NaN
1 9 10 20 NaN
2 31 17 17 NaN
3 40 30 40 a
</code></pre>
<p>Steps:</p>
<p>a) <code>c</code> is created by <code>df['c'] = np.fmax(df['a'].shift().bfill(), df['b'])</code></p>
<p>b) for the last row: <code>df['c'] = df[['a', 'b']].max()</code>. Since for the last row <code>a > b</code> 40 is chosen.</p>
<p>c) Get the name of max value between <code>a</code> or <code>b</code> for the last row.</p>
<p>My attempt:</p>
<pre><code>df['c'] = np.fmax(df['a'].shift().bfill(), df['b'])
df.loc[df.index[-1], 'c'] = df.loc[df.index[-1], ['a', 'b']].max()
df.loc[df.index[-1], 'name'] = df.loc[df.index[-1], ['a', 'b']].idxmax()
</code></pre>
<p>Is it the cleanest way / best approach?</p>
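The three-statement version above is already clear. One small variation, sketched below, reuses the <code>idxmax</code> result so the last row's value and its name come from a single comparison instead of separate <code>max</code> and <code>idxmax</code> calls:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [20, 9, 31, 40], 'b': [1, 10, 17, 30]})

df['c'] = np.fmax(df['a'].shift().bfill(), df['b'])

# idxmax gives the winning column name; reuse it to fetch the value too
last = df.index[-1]
name = df.loc[last, ['a', 'b']].idxmax()
df.loc[last, 'c'] = df.at[last, name]
df.loc[last, 'name'] = name
print(df)
```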
|
<python><pandas><dataframe>
|
2024-01-10 05:25:51
| 1
| 2,679
|
AmirX
|
77,790,973
| 11,049,287
|
Lengths of overlapping time ranges listed by rows
|
<p>I am using pandas version 1.0.5</p>
<p>The example dataframe below lists time intervals, recorded over three days, and I seek where some time intervals overlap <strong>every</strong> day.</p>
<p><a href="https://i.sstatic.net/1fICR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1fICR.png" alt="Interim- for illustration" /></a></p>
<p>For example,
one of the overlapping times across all three dates (yellow highlighted) is 1:16 - 2:13. The other (blue highlighted) would be 18:45 - 19:00.</p>
<p>So my expected output would be like: <code>[57,15]</code> because</p>
<ul>
<li>57 - Minutes between 1:16 - 2:13.</li>
<li>15 - Minutes between 18:45 - 19:00</li>
</ul>
<p>Please use this generator of the input dataframe:</p>
<pre><code>import pandas as pd
dat1 = [
['2023-12-27','2023-12-27 00:00:00','2023-12-27 02:14:00'],
['2023-12-27','2023-12-27 03:16:00','2023-12-27 04:19:00'],
['2023-12-27','2023-12-27 18:11:00','2023-12-27 20:13:00'],
['2023-12-28','2023-12-28 01:16:00','2023-12-28 02:14:00'],
['2023-12-28','2023-12-28 02:16:00','2023-12-28 02:28:00'],
['2023-12-28','2023-12-28 02:30:00','2023-12-28 02:56:00'],
['2023-12-28','2023-12-28 18:45:00','2023-12-28 19:00:00'],
['2023-12-29','2023-12-29 01:16:00','2023-12-29 02:13:00'],
['2023-12-29','2023-12-29 04:16:00','2023-12-29 05:09:00'],
['2023-12-29','2023-12-29 05:11:00','2023-12-29 05:14:00'],
['2023-12-29','2023-12-29 18:00:00','2023-12-29 19:00:00']
]
df = pd.DataFrame(dat1,columns = ['date','Start_tmp','End_tmp'])
df["Start_tmp"] = pd.to_datetime(df["Start_tmp"])
df["End_tmp"] = pd.to_datetime(df["End_tmp"])
</code></pre>
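One approach, sketched below on the sample data, is to work at minute granularity: represent each day's intervals as a set of minute-of-day values (half-open, so 1:16 - 2:13 covers minutes 76..132), intersect the sets across all dates, and then measure the lengths of the consecutive runs that survive. This assumes no interval crosses midnight, which holds for the sample data.

```python
import pandas as pd

dat1 = [
    ['2023-12-27', '2023-12-27 00:00:00', '2023-12-27 02:14:00'],
    ['2023-12-27', '2023-12-27 03:16:00', '2023-12-27 04:19:00'],
    ['2023-12-27', '2023-12-27 18:11:00', '2023-12-27 20:13:00'],
    ['2023-12-28', '2023-12-28 01:16:00', '2023-12-28 02:14:00'],
    ['2023-12-28', '2023-12-28 02:16:00', '2023-12-28 02:28:00'],
    ['2023-12-28', '2023-12-28 02:30:00', '2023-12-28 02:56:00'],
    ['2023-12-28', '2023-12-28 18:45:00', '2023-12-28 19:00:00'],
    ['2023-12-29', '2023-12-29 01:16:00', '2023-12-29 02:13:00'],
    ['2023-12-29', '2023-12-29 04:16:00', '2023-12-29 05:09:00'],
    ['2023-12-29', '2023-12-29 05:11:00', '2023-12-29 05:14:00'],
    ['2023-12-29', '2023-12-29 18:00:00', '2023-12-29 19:00:00'],
]
df = pd.DataFrame(dat1, columns=['date', 'Start_tmp', 'End_tmp'])
df['Start_tmp'] = pd.to_datetime(df['Start_tmp'])
df['End_tmp'] = pd.to_datetime(df['End_tmp'])

# minute-of-day sets per date, half-open [start, end); intersect across dates
common = None
for _, g in df.groupby('date'):
    minutes = set()
    for s, e in zip(g['Start_tmp'], g['End_tmp']):
        minutes.update(range(s.hour * 60 + s.minute, e.hour * 60 + e.minute))
    common = minutes if common is None else common & minutes

# collapse the surviving minutes into consecutive runs and take their lengths
runs = []
for m in sorted(common):
    if runs and m == runs[-1][1]:
        runs[-1][1] = m + 1          # extend the current run
    else:
        runs.append([m, m + 1])      # start a new run
lengths = [end - start for start, end in runs]
print(lengths)  # [57, 15]
```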
|
<python><pandas><dataframe><numpy><python-datetime>
|
2024-01-10 04:45:37
| 1
| 838
|
usr_lal123
|
77,790,846
| 10,200,497
|
Finding the maximum value between two columns where one of them is shifted
|
<p>My DataFrame is:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [20, 9, 31, 40],
'b': [1, 10, 17, 30],
}
)
</code></pre>
<p>Expected output: Creating column <code>c</code></p>
<pre><code> a b c
0 20 1 20
1 9 10 20
2 31 17 17
3 40 30 31
</code></pre>
<p>Steps:</p>
<p><code>c</code> is the maximum value between <code>df.b</code> and <code>df.a.shift(1).bfill()</code>.</p>
<p><a href="https://i.sstatic.net/AKf9S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AKf9S.png" alt="enter image description here" /></a></p>
<p>My attempt:</p>
<pre><code>df['temp'] = df.a.shift(1).bfill()
df['c'] = df[['temp', 'b']].max(axis=1)
</code></pre>
<p>Is it the cleanest way / best approach?</p>
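The approach works; the temporary column mainly buys readability. A sketch of a single-expression variant (<code>np.maximum</code> is fine here because <code>bfill()</code> leaves no NaNs; <code>np.fmax</code> would be the NaN-tolerant choice):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [20, 9, 31, 40], 'b': [1, 10, 17, 30]})

# elementwise max of the shifted/backfilled a against b, no temp column
df['c'] = np.maximum(df['a'].shift(1).bfill(), df['b'])
print(df['c'].tolist())  # [20.0, 20.0, 17.0, 31.0]
```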
|
<python><pandas><dataframe><max>
|
2024-01-10 03:56:10
| 3
| 2,679
|
AmirX
|
77,790,767
| 2,924,334
|
Runtime.ImportModuleError: Unable to import module 'main': cannot import name 'is_arabic' from
|
<p>When I do:
<code>sam local invoke MyFunction --event events/my_event.json</code></p>
<p>I get this error:
<code>[ERROR] Runtime.ImportModuleError: Unable to import module 'main': cannot import name 'is_arabic' from 'charset_normalizer.utils' (/var/task/charset_normalizer/utils.py)</code></p>
<p>Sorry, I am not too familiar with <code>aws-sam</code>; I have just been reusing a prior script and it was working fine. Any pointer would be much appreciated. Thank you!</p>
|
<python><aws-sam><aws-sam-cli>
|
2024-01-10 03:18:42
| 1
| 587
|
tikka
|
77,790,704
| 270,043
|
Matching a nested column in pandas dataframe
|
<p>I have a pandas dataframe with 5 columns, and one of the columns is a list. If I print out the list only, I get something like this:</p>
<pre><code>Row(a='abc',b='def',c=['qwe','rty'])
Row(a='123',b='456',c=['789','089'])
...
</code></pre>
<p>I'm trying to match on the field <code>a</code> and also on the list <code>c</code> (separately). For example, if I match for <code>a='abc'</code>, I would get the first Row (the entire dataframe with all 5 columns), and if I match for <code>089</code> in <code>c</code>, I will get the second Row (the entire dataframe with all 5 columns).</p>
<p>I wrote this function for matching against <code>a</code>, but the results only contain the <code>Row</code>, i.e. without the other 4 non-nested columns. I can't figure out how to refer back to the dataframe itself from the match. Is there a way where I can match and get the dataframes that match <code>a</code> or an element in <code>c</code>?</p>
<pre><code>def find_a(nested_col_list, a_value):
for nested_cols in nested_col_list:
for nested_col in nested_cols:
if nested_col.a == a_value:
                print(nested_col)
find_a(df["nested_col"], "abc")
</code></pre>
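To keep the full rows, one option is to build a boolean mask over the nested column with <code>.apply</code> and index the DataFrame with it, instead of printing inside a loop. A sketch with stand-in data — the <code>namedtuple</code> Row and the extra <code>id</code> column are illustrative assumptions, not the actual schema:

```python
import pandas as pd
from collections import namedtuple

Row = namedtuple('Row', ['a', 'b', 'c'])
df = pd.DataFrame({
    'id': [1, 2],
    'nested_col': [
        [Row(a='abc', b='def', c=['qwe', 'rty'])],
        [Row(a='123', b='456', c=['789', '089'])],
    ],
})

def match_a(rows, a_value):
    return any(r.a == a_value for r in rows)

def match_c(rows, c_value):
    return any(c_value in r.c for r in rows)

# boolean masks keep the whole matching rows, all columns included
print(df[df['nested_col'].apply(match_a, a_value='abc')])
print(df[df['nested_col'].apply(match_c, c_value='089')])
```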
|
<python><pandas><dataframe>
|
2024-01-10 02:59:02
| 2
| 15,187
|
Rayne
|
77,790,649
| 5,082,463
|
Qt Creator Python debugger
|
<p>After using QtCreator on C++ projects for years, and seeing it being advertised for python development I tried and it works.</p>
<p>But, no debugger?</p>
<p>Even SublimeText has a python debugger <a href="https://github.com/daveleroy/SublimeDebugger" rel="nofollow noreferrer">https://github.com/daveleroy/SublimeDebugger</a>.</p>
<p>So how do we approach the issue of having a Python debugger in QtCreator? Those of you using QtCreator to develop in Python, how do you do it?</p>
|
<python><debugging><qt-creator><debuggervisualizer>
|
2024-01-10 02:30:30
| 1
| 6,367
|
KcFnMi
|
77,790,633
| 6,943,690
|
Why can not get body field in response of Microsoft Graph API for listing messages?
|
<p>I want to read certain email messages and filter them. I am using the Microsoft Graph API to query the Office 365 mailbox as below. I also added the "Mail.ReadBasic.All" API permission so the application can read mail.</p>
<pre><code>graph_api_endpoint = 'https://graph.microsoft.com/v1.0/users/xyz@outlook.com/mailFolders/Inbox/messages?$select=body'
# Function to get inbox messages
def get_inbox_messages(access_token):
headers = {
'Authorization': f'Bearer {access_token}',
'Accept': 'application/json',
'Prefer': 'outlook.body-content-type="text"',
}
response = requests.get(graph_api_endpoint, headers=headers)
print('get_inbox_messages response:', response)
return response.json().get('value', [])
# MSAL ConfidentialClientApplication
app = ConfidentialClientApplication(
client_id,
authority=authority,
client_credential=client_secret,
)
token_response = app.acquire_token_for_client(scopes=[scope])
# Access token
access_token = token_response['access_token']
print(f"Access Token: {access_token}")
if access_token:
inbox_messages = get_inbox_messages(access_token)
print('Inbox Messages:', inbox_messages)
else:
print('Failed to obtain access token')
</code></pre>
<p>But inbox_messages doesn't contain body fields. Its value is like this.</p>
<pre><code>[{'@odata.etag': 'W/"CQAAABYAAAB3N5CnvFQSSqeWD3xLEpaUAACbgpYA"', 'id': 'AAMkAGYyMzUwZDAxLTJkNmItNGJjYi1iNThkLTcxMWU2MzIxMjg2ZgBGAAAAAAAKfqhNGZ8HSr-8OzkXZKcQBwB3N5CnvFQSSqeWD3xLEpaUAAAAAAEMAAB3N5CnvFQSSqeWD3xLEpaUAACb9YkJAAA='}, {'@odata.etag': 'W/"CQAAABYAAAB3N5CnvFQSSqeWD3xLEpaUAACbgpAa"', 'id': 'AAMkAGYyMzUwZDAxLTJkNmItNGJjYi1iNThkLTcxMWU2MzIxMjg2ZgBGAAAAAAAKfqhNGZ8HSr-8OzkXZKcQBwB3N5CnvFQSSqeWD3xLEpaUAAAAAAEMAAB3N5CnvFQSSqeWD3xLEpaUAACb9YkIAAA='}, {'@odata.etag': 'W/"CQAAABYAAAB3N5CnvFQSSqeWD3xLEpaUAACazuV6"', 'id': 'AAMkAGYyMzUwZDAxLTJkNmItNGJjYi1iNThkLTcxMWU2MzIxMjg2ZgBGAAAAAAAKfqhNGZ8HSr-8OzkXZKcQBwB3N5CnvFQSSqeWD3xLEpaUAAAAAAEMAAB3N5CnvFQSSqeWD3xLEpaUAACbQPdBAAA='}, {'@odata.etag': 'W/"CQAAABYAAAB3N5CnvFQSSqeWD3xLEpaUAACazuV4"', 'id': 'AAMkAGYyMzUwZDAxLTJkNmItNGJjYi1iNThkLTcxMWU2MzIxMjg2ZgBGAAAAAAAKfqhNGZ8HSr-8OzkXZKcQBwB3N5CnvFQSSqeWD3xLEpaUAAAAAAEMAAB3N5CnvFQSSqeWD3xLEpaUAACbQPdAAAA='}, {'@odata.etag': 'W/"CQAAABYAAAB3N5CnvFQSSqeWD3xLEpaUAACazuVn"', 'id': 'AAMkAGYyMzUwZDAxLTJkNmItNGJjYi1iNThkLTcxMWU2MzIxMjg2ZgBGAAAAAAAKfqhNGZ8HSr-8OzkXZKcQBwB3N5CnvFQSSqeWD3xLEpaUAAAAAAEMAAB3N5CnvFQSSqeWD3xLEpaUAACbQPc-AAA='}, {'@odata.etag': 'W/"CQAAABYAAAB3N5CnvFQSSqeWD3xLEpaUAACazuVg"', 'id': 'AAMkAGYyMzUwZDAxLTJkNmItNGJjYi1iNThkLTcxMWU2MzIxMjg2ZgBGAAAAAAAKfqhNGZ8HSr-8OzkXZKcQBwB3N5CnvFQSSqeWD3xLEpaUAAAAAAEMAAB3N5CnvFQSSqeWD3xLEpaUAACbQPc_AAA='}]
</code></pre>
<p>Please help me get the body in HTML or text format.</p>
|
<python><email><microsoft-graph-api><office365api><outlook-restapi>
|
2024-01-10 02:23:52
| 1
| 2,740
|
avantdev
|
77,790,619
| 7,802,751
|
What is the default datatype for the emotions dataset (one of huggingface's site dataset)?
|
<p>I'm working with the emotions dataset from HuggingFace library.</p>
<p>I know I can switch its presentation format to pandas by doing:</p>
<pre><code>emotions.set_format(type='pandas')
</code></pre>
<p>And I can switch it back to its default presentation format by doing:</p>
<pre><code>emotions.reset_format()
</code></pre>
<p>I have two questions: first, what types are allowed besides 'pandas'? And second, what is the name of the default format? I tried searching the internet and some books but found nothing.</p>
|
<python><formatting><format><huggingface><huggingface-datasets>
|
2024-01-10 02:17:58
| 0
| 997
|
Gustavo Mirapalheta
|
77,790,459
| 4,126,652
|
python concurrent futures executing even on unhandled exceptions
|
<p>Consider the following code snippet:</p>
<pre class="lang-py prettyprint-override"><code>import concurrent.futures
import time
def throw_func(a):
print(a)
time.sleep(10)
raise ValueError(a)
params = list(range(5000))
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
future_to_row = {executor.submit(throw_func, param): param for param in params}
for future in concurrent.futures.as_completed(future_to_row):
row = future_to_row[future]
future.result()
</code></pre>
<p>It seems like when there is an unhandled exception, the program still tries to complete all the submitted futures.</p>
<p>And no exception messages are printed on the screen.</p>
<p>Why is this happening? How do I make sure that when there is an unhandled exception, the program will exit?</p>
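What happens in the snippet above: <code>future.result()</code> re-raises the worker's exception in the main thread, but the <code>with</code> block's implicit <code>shutdown(wait=True)</code> then blocks until every already-submitted task has finished, and only afterwards does the traceback print. A sketch of one way to bail out early (<code>cancel_futures</code> requires Python 3.9+):

```python
import concurrent.futures

def work(n):
    if n == 0:
        raise ValueError(n)
    return n

caught = None
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
    futures = {executor.submit(work, n): n for n in range(100)}
    try:
        for future in concurrent.futures.as_completed(futures):
            future.result()  # re-raises the worker's exception here
    except Exception as exc:
        caught = exc
        # drop everything still queued so the pool can shut down promptly
        executor.shutdown(wait=False, cancel_futures=True)
        # a real script would re-raise or sys.exit(1) here
print(type(caught).__name__)
```

Note that cancellation only affects tasks still waiting in the queue; tasks a worker thread has already started will run to completion.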
|
<python><multithreading>
|
2024-01-10 01:12:28
| 3
| 3,263
|
Vikash Balasubramanian
|
77,790,274
| 2,386,605
|
Multi-stage build in Docker shows unknown PATH
|
<p>I want to build a Docker image with a multi-stage build. The image is based on FastAPI and Uvicorn.</p>
<p>If I use the following <code>Dockerfile</code> for a multi-stage build</p>
<pre><code># Stage 1: Build stage
FROM pytorch/pytorch:2.1.2-cuda12.1-cudnn8-runtime as builder
# set working directory
WORKDIR /usr/src
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update \
&& apt-get install -y --no-install-recommends python3 python3-pip git \
&& ln -sf python3 /usr/bin/python \
&& ln -sf pip3 /usr/bin/pip \
&& pip install --upgrade pip
# install python dependencies
RUN pip install --upgrade pip setuptools wheel
COPY ./requirements.txt .
RUN pip install -r requirements.txt
RUN apt-get remove -y git
# Stage 2: Final stage
FROM pytorch/pytorch:2.1.2-cuda12.1-cudnn8-runtime
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update \
&& apt-get install -y --no-install-recommends python3 \
&& ln -sf python3 /usr/bin/python
# Copy the installed dependencies from the previous stage
COPY --from=builder /usr/local /usr/local
ENV PATH=/usr/local/bin:$PATH
WORKDIR /usr/src
# Copy your app code
COPY ./app/ ./app/
</code></pre>
<p>and run it with the following <code>docker-compose.yml</code></p>
<pre><code>version: '3.8'
services:
scraiber_gpt:
build:
context: .
dockerfile: Dockerfile
secrets:
- githubpat
command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
volumes:
- ./src:/usr/src
ports:
- '8000:8000'
</code></pre>
<p>I get</p>
<pre><code>Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "uvicorn": executable file not found in $PATH: unknown
</code></pre>
<p>However, for the non-multi-stage build with the <code>Dockerfile</code></p>
<pre><code>FROM pytorch/pytorch:2.1.2-cuda12.1-cudnn8-runtime
# set working directory
WORKDIR /usr/src
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update \
&& apt-get install -y --no-install-recommends python3 python3-pip git \
&& ln -sf python3 /usr/bin/python \
&& ln -sf pip3 /usr/bin/pip \
&& pip install --upgrade pip
# install python dependencies
RUN pip install --upgrade pip setuptools wheel
COPY ./requirements.txt .
RUN pip install -r requirements.txt
RUN apt-get remove -y git
# Copy your app code
COPY ./app/ ./app/
</code></pre>
<p>things work. Hence, I wonder how I could make the multi-stage build work.</p>
|
<python><docker><docker-compose><fastapi><uvicorn>
|
2024-01-09 23:47:00
| 1
| 879
|
tobias
|
77,790,158
| 10,749,925
|
How do i put a .env file in GitHub?
|
<p>In most cases, <strong>.env</strong> files for Python projects are listed in <strong>.gitignore</strong>. However, it's crucial for me to have my .env file in my private repository. I need to be able to <code>git pull</code> it to my local machines or my code does not run. Doing something simple such as:</p>
<pre><code>import streamlit as st
import os
from dotenv import load_dotenv
load_dotenv()
st.title(os.getenv("MY_SECRET_KEY"))
</code></pre>
<p>Where MY_SECRET_KEY is defined in a <strong>.env</strong> file in the same directory. When I try to upload this with GitHub Desktop, GitHub says the file is hidden, and when I <code>git pull</code> I can't even retrieve it. It's gone!</p>
<p>How do I add my <strong>.env</strong> file to GitHub and have it available for use?</p>
|
<python><github><version-control>
|
2024-01-09 23:02:26
| 1
| 463
|
chai86
|
77,789,700
| 18,476,381
|
Docker not properly setting environment variables via GitHub actions
|
<p>In my python application I am trying to access/print out an environment variable but it is printing out None.</p>
<pre><code>logger.info(f"OIDC SECRET: {os.getenv('OIDC_CLIENT_SECRET')}")
</code></pre>
<p>This is my step/job in github actions</p>
<pre><code>name: Build and push image to harbor
run: |
IMAGE_VERSION=$GITHUB_RUN_NUMBER
IMAGE_WITH_TAG="$IMAGE_NAME:develop.$IMAGE_VERSION"
docker build \
-t $IMAGE_WITH_TAG --file "$DOCKERFILE_PATH" \
--build-arg OIDC_CLIENT_SECRET=${{ secrets.OIDC_CLIENT_SECRET }} \
--build-arg DB_PWD=${{ secrets.DB_PWD }} \
--build-arg APP_ENV="dev" .
docker login TEST.com -u "$DEV_DEPLOY_USENAME" -p "$DEV_DEPLOY_PWD"
docker push $IMAGE_WITH_TAG
echo "IMAGE_VERSION=$IMAGE_VERSION" >> $GITHUB_ENV
echo "PUSHED_IMAGE=$IMAGE_WITH_TAG" >> $GITHUB_ENV
echo "develop branch"
echo "NAMESPACE=$PROJECT-dev" >> $GITHUB_ENV
env:
DEV_DEPLOY_USENAME: ${{ secrets.DEV_DEPLOY_USENAME }}
DEV_DEPLOY_PWD: ${{ secrets.DEV_DEPLOY_PWD }}
PUSHED_IMAGE: ${{ env.PUSHED_IMAGE }}
BRANCH: ${{ env.BRANCH }}
</code></pre>
<p>And below is my docker file.</p>
<pre><code>FROM TEST.com/library/python:3.10-slim as builder
RUN pip install poetry==1.6.1
ENV POETRY_NO_INTERACTION=1 \
POETRY_VIRTUALENVS_IN_PROJECT=1 \
POETRY_VIRTUALENVS_CREATE=1 \
POETRY_CACHE_DIR=/tmp/poetry_cache
WORKDIR /usr/src
ARG OIDC_CLIENT_SECRET
ARG DB_PWD
ARG APP_ENV
ENV OIDC_CLIENT_SECRET=$OIDC_CLIENT_SECRET
ENV DB_PWD=$DB_PWD
ENV APP_ENV=$APP_ENV
RUN echo $OIDC_CLIENT_SECRET
RUN echo $DB_PWD
RUN echo $APP_ENV
COPY pyproject.toml ./
RUN poetry install --without dev --no-root && rm -rf $POETRY_CACHE_DIR
# The runtime image, used to just run the code provided its virtual environment
FROM TEST.com/library/python:3.10-slim as runtime
ARG WORKER_COUNT=1
ENV WORKER_COUNT=${WORKER_COUNT}
RUN mkdir -p /usr/src
WORKDIR /usr/src
ENV VIRTUAL_ENV=/usr/src/.venv \
PATH="/usr/src:/usr/src/.venv/bin:$PATH"
ENV TZ=America/Chicago
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
COPY --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
COPY ./src/ /usr/src/
RUN ls -R
EXPOSE 5000
CMD uvicorn --workers $WORKER_COUNT --host 0.0.0.0 --port 5000 main:app
</code></pre>
<p>My project deploys successfully; however, the OIDC_CLIENT_SECRET and DB_PWD environment variables are set to None, which is confirmed by my log statement. I double-checked the names and values throughout my files, including the secrets in GitHub. I even have echo statements in my Dockerfile, which output <code>***</code>; I assume that is still correct and the value is just being masked in the logs.</p>
<p>Is there something I am doing wrong that prevents my environment variables from reaching my project?</p>
|
<python><docker><github-actions>
|
2024-01-09 20:56:39
| 1
| 609
|
Masterstack8080
|
77,789,573
| 1,839,674
|
VSCode Pytest Discovers but won't run
|
<p>In VScode, pytests are discovered. Using a conda env and it is the selected Python:Interpreter.</p>
<p>However, when I try to run one or all of the tests, it just says "Finished running tests!". I can't get into debugging, and it doesn't give green checkmarks or red X's.</p>
<p>If I run <code>pytest</code> in the terminal, it works just fine.</p>
<p>I have reinstalled everything and started from scratch. I have no idea what is going on.</p>
<p>Please help! :)</p>
<p>Added Python Test Log. This is the output when I discover and then try to run a test.</p>
<p><a href="https://i.sstatic.net/4p9a3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4p9a3.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Azbaa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Azbaa.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/HzhXJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HzhXJ.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/q6faD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q6faD.png" alt="Python Test Log" /></a></p>
|
<python><visual-studio-code><pytest>
|
2024-01-09 20:28:00
| 2
| 620
|
lr100
|
77,789,489
| 2,386,113
|
Efficiently find neighbors in python
|
<p>I have a three-dimensional grid. For each point on the grid, I want to find its nearest neighbours. Since my grid is uniformly sampled, I just want to gather the immediate neighbors.</p>
<p><strong>Sample grid:</strong></p>
<p><a href="https://i.sstatic.net/BWTu9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BWTu9.png" alt="enter image description here" /></a></p>
<p><strong>Required neighbors:</strong></p>
<p>For a given point X, I need the following neighbors:</p>
<p><a href="https://i.sstatic.net/93SpH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/93SpH.png" alt="enter image description here" /></a></p>
<p><strong>My Current Working Code:</strong></p>
<pre><code>import numpy as np
import cProfile
class Neighbours:
# Get neighbors
@classmethod
def get_neighbour_indices(cls, row, col, frame, distance=1):
# Define the indices for the neighbor pixels
r = np.linspace(row - distance, row + distance, 2 * distance + 1)
c = np.linspace(col - distance, col + distance, 2 * distance + 1)
f = np.linspace(frame - distance, frame + distance, 2 * distance + 1)
nc, nr, nf = np.meshgrid(c, r, f)
neighbors = np.vstack((nr.flatten(), nc.flatten(), nf.flatten())).T
# Filter out valid neighbor indices within the array bounds
valid_indices = (neighbors[ :, 0] >= 0) & (neighbors[ :, 0] < nRows) & (neighbors[ :, 1] >= 0) & (neighbors[ :, 1] < nCols) & (neighbors[ :, 2] >= 0) & (neighbors[ :, 2] < nFrames)
# Return the valid neighbor indices
valid_neighbors = neighbors[valid_indices]
return valid_neighbors
@classmethod
def MapIndexVsNeighbours(cls):
neighbours_info = np.empty((nRows * nCols * nFrames), dtype=object)
for frame in range(nFrames):
for row in range(nRows):
for col in range(nCols):
neighbour_indices = cls.get_neighbour_indices(row, col, frame, distance=1)
flat_idx = frame * (nRows * nCols) + (row * nCols + col)
neighbours_info[flat_idx] = neighbour_indices
return neighbours_info
########################------------------main()-------##################
####--run
if __name__ == "__main__":
nRows = 151
nCols = 151
nFrames = 24
cProfile.run('Neighbours.MapIndexVsNeighbours()', sort='cumulative')
print()
</code></pre>
<p><strong>Problem:</strong> For larger grids (e.g. 201 x 201 x 24), the program takes a very long time. In the profiling results from <code>cProfile</code>, I can see that <code>meshgrid()</code> in <code>get_neighbour_indices()</code> takes quite long. All in all, this is not an efficient implementation. Furthermore, I tried to execute <code>MapIndexVsNeighbours()</code> on a separate thread, but due to the GIL it does not really execute in parallel. So an implementation that can run in parallel would also be desirable.</p>
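Since the offset pattern is identical for every point, a sketch of a faster variant computes the (2d+1)³ integer offsets once and then only does one broadcasted add-and-bounds-check per point, instead of rebuilding a meshgrid on every call:

```python
import numpy as np

def neighbour_offsets(distance=1):
    """All (dr, dc, df) offsets in a (2d+1)^3 cube, computed once."""
    r = np.arange(-distance, distance + 1)
    return np.stack(np.meshgrid(r, r, r, indexing='ij'), axis=-1).reshape(-1, 3)

def neighbours_of(point, shape, offsets):
    """Valid neighbour indices of one point via a single broadcasted add."""
    pts = np.asarray(point) + offsets
    valid = ((pts >= 0) & (pts < np.asarray(shape))).all(axis=1)
    return pts[valid]

shape = (151, 151, 24)
offsets = neighbour_offsets(1)
corner = neighbours_of((0, 0, 0), shape, offsets)    # corner: 2*2*2 = 8 cells
interior = neighbours_of((5, 5, 5), shape, offsets)  # interior: full 27 cells
print(len(corner), len(interior))  # 8 27
```

Using integer offsets also avoids the float indices that the original's <code>np.linspace</code> produces.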
|
<python><performance><parallel-processing><cpython><nearest-neighbor>
|
2024-01-09 20:10:59
| 1
| 5,777
|
skm
|
77,789,467
| 11,551,386
|
ipython - docstring not showing for functions that are wrapped
|
<p>With the code below entered in ipython, entering subsequent <code>func_with_wrap?</code> shows <code><no docstring></code> while <code>func_no_wrap?</code> shows the docstring as expected. How can I get <code>func_with_wrap?</code> to properly show the docstring of <code>func_with_wrap</code>?</p>
<pre class="lang-py prettyprint-override"><code>def wrap(f):
    """
    wrap docstring
    """
    def wrapped():
        return
    return wrapped


@wrap
def func_with_wrap():
    """
    func_with_wrap docstring
    """
    return


def func_no_wrap():
    """
    func_no_wrap docstring
    """
    return
</code></pre>
<p>Here are the outputs:</p>
<pre><code>In [12]: func_with_wrap?
Signature: func_with_wrap()
Docstring: <no docstring>
File: c:\confidential>
Type: function
In [13]: func_no_wrap?
Signature: func_no_wrap()
Docstring: func_no_wrap docstring
File: c:\confidential>
Type: function
</code></pre>
<p>EDIT:</p>
<p>When trying the following:</p>
<pre><code>def wrap(f):
    """
    wrap docstring
    """
    def wrapped():
        wrapped.__doc__ = f.__doc__
        return
    return wrapped
</code></pre>
<p><code>ipython</code> still does not show <code>func_with_wrap</code>'s docstring.</p>
<p>BUT,</p>
<pre><code>def wrap(f):
    """
    wrap docstring
    """
    def wrapped():
        return
    wrapped.__doc__ = f.__doc__
    return wrapped
</code></pre>
<p>does work.</p>
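<p>For completeness, the standard-library route is <code>functools.wraps</code>, which copies <code>__doc__</code> (along with <code>__name__</code>, <code>__qualname__</code>, and other metadata) automatically; a minimal sketch:</p>

```python
import functools

def wrap(f):
    """wrap docstring"""
    @functools.wraps(f)  # copies __doc__, __name__, __qualname__, ... from f
    def wrapped():
        return
    return wrapped

@wrap
def func_with_wrap():
    """func_with_wrap docstring"""
    return
```

<p>With this, <code>func_with_wrap?</code> in IPython shows the wrapped function's own docstring.</p>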
|
<python><ipython>
|
2024-01-09 20:06:16
| 1
| 344
|
likethevegetable
|
77,789,428
| 8,447,743
|
Does the requests module truncate headers?
|
<p>I'm trying to make a POST request to my API using a fairly long (1017 chars) Bearer token for authorization.</p>
<p>No matter what I try, as I fire up the request I get an error saying that <code>\u2026</code> (horizontal ellipsis) in position 512 can't be encoded with the 'latin-1' encoding. Since I can't find any problem (or ellipsis) with the header I specified, my only explanation is that the requests module internally and silently truncates my token. Could that be the case? Can it be disabled? Any other workarounds?</p>
|
<python><python-requests>
|
2024-01-09 19:59:02
| 2
| 916
|
NoBullsh1t
|
77,789,207
| 441,709
|
Searching Gmail from Python
|
<p>I am trying to write a script to search my Gmail. I have followed the <a href="https://developers.google.com/gmail/api/quickstart/python" rel="nofollow noreferrer">quickstart</a> guide, got that working, and played with altering it with other API calls. When I follow the <a href="https://developers.google.com/gmail/api/guides/filtering" rel="nofollow noreferrer">documentation</a> for searching, it shows not an API call (as for getting labels, sending messages, etc.) but just a simple GET request. This, of course, does not work as a plain request:</p>
<pre><code>response = requests.get('https://www.googleapis.com/gmail/v1/users/me/messages?q=in:sent after:2014/01/01 before:2014/02/01')
</code></pre>
<p>Just returns unauthorised. This doesn't surprise me given the other examples.</p>
<p>What am I missing? Is the documentation out of date (my intuition), or is there a way to do a simple GET request with the authorisation that is used by the API?</p>
<p>I.e., the authorisation used by the quickstart example:</p>
<pre><code>if os.path.exists("token.json"):
    creds = Credentials.from_authorized_user_file("token.json", SCOPES)
</code></pre>
<p>I can't seem to find an example of this use case with a simple GET.</p>
|
<python><gmail>
|
2024-01-09 19:14:46
| 0
| 675
|
Tommy
|
77,789,103
| 2,893,712
|
Python Requests Login Automation
|
<p>I am trying to use Python Requests to log in to my UPS MyChoice account. Each page request has its own <code>CSRFToken</code> value. I am trying to simulate this with Python Requests but I keep getting a 403 Forbidden error.</p>
<p>UPS does their login in a two-stage setup: you enter the username and make a request, then you enter the password (using the CSRFToken from the previous request) and log in. I am unable to get past the step to enter the username.</p>
<p>Here is my code:</p>
<pre><code>import requests
from bs4 import BeautifulSoup

s = requests.Session()
jar = requests.cookies.RequestsCookieJar()  # Create cookie jar

# Go to home page to get init cookies
response = s.get("https://www.ups.com/us/en/Home.page", headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36'})
jar.update(response.cookies)

# Go to login page, update cookie jar, get CSRFToken
response = s.get("https://www.ups.com/lasso/login?loc=en_US&returnto=https%3A%2F%2Fwww.ups.com%2Fus%2Fen%2FHome.page", headers=headers_main, cookies=jar)
jar.update(response.cookies)

soup = BeautifulSoup(response.content, 'html.parser')
csrftoken = soup.find(id='CSRFToken')['value']

url = "https://www.ups.com/lasso/login"
payload = f'CSRFToken={csrftoken}&loc=en_US&returnto=https%253A%252F%252Fwww.ups.com%252Fus%252Fen%252FHome.page&forgotusername=YZ&connectWithSocial=YZ&userID=MYUSERNAME&getTokenWithPassword1=&ioBlackBox=&ioElapsedTime='
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
    'Accept-Language': 'en-US,en;q=0.9',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Origin': 'https://www.ups.com',
    'Pragma': 'no-cache',
    'Referer': 'https://www.ups.com/us/en/Home.page',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36',
}

response = s.post(url, headers=headers, data=payload, cookies=jar)  # Returns 403
</code></pre>
<p>What am I doing wrong in my code? I am comparing the requests between Chrome DevTools and Python Requests and they seem similar.</p>
|
<python><cookies><automation><python-requests>
|
2024-01-09 18:52:13
| 0
| 8,806
|
Bijan
|
77,789,028
| 6,467,512
|
I keep getting bad verification code when requesting github access token
|
<p>I am using auth0-vue for user authentication. I have added the GitHub login connection in my Auth0 console and selected the permission to view the user repo. I have also created the GitHub Auth0 application in my developer settings, and the login/permission process in my frontend works as expected. Then, when the code and state are returned after the user logs in, I send the code to my backend server to make the following request for the GitHub access token:</p>
<pre><code>def get_github_token(code: str, state: str = None):
    conn = http.client.HTTPSConnection("github.com")
    payload = ""
    conn.request("POST", f"/login/oauth/access_token?code={code}&client_id={my_github_client_id}&client_secret={my_github_client_secret}", payload)
    res = conn.getresponse()
    data = res.read()
    print(data.decode("utf-8"))
    return None
</code></pre>
<p>But I keep getting:</p>
<pre><code>error=bad_verification_code&error_description=The+code+passed+is+incorrect+or+expired.&error_uri=https%3A%2F%2Fdocs.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-oauth-app-access-token-request-errors%2F%23bad-verification-code
</code></pre>
<p>I have double- and triple-checked the GitHub client id and secret I am using and they are correct. Where am I going wrong?</p>
<p>Here is my callback page, where I receive and send the code to my backend:</p>
<pre><code><template>
  <PageLayout v-if="error">
    <div class="content-layout">
      <h1 id="page-title" class="content__title">Error</h1>
      <div class="content__body">
        <p id="page-description">
          <span>{{ error.message }}</span>
        </p>
      </div>
    </div>
  </PageLayout>
  <div class="page-layout" v-else>
    <NavBar />
    <MobileNavBar />
    <div class="page-layout__content" />
  </div>
</template>

<script setup>
import { onBeforeMount } from "vue";
import { useAuth0 } from "@auth0/auth0-vue";
import { useRoute } from 'vue-router'
import { postProtectedResource } from "@/services/message.service";
import NavBar from "@/components/navigation/desktop/nav-bar.vue";
import MobileNavBar from "@/components/navigation/mobile/mobile-nav-bar.vue";
import PageLayout from "@/components/page-layout.vue";

const { error, getAccessTokenSilently, user } = useAuth0();
const route = useRoute()

const set_github_access = async (code, state) => {
  const access_token = await getAccessTokenSilently();
  const email = user.value.email;
  const payload = {
    "code": code,
    "state": state,
    "email": email,
  }
  const response = await postProtectedResource(
    "Users/save_github_cred",
    access_token,
    payload
  );
  console.log(response);
}

onBeforeMount(() => {
  console.log(route.query);
  const { code, state } = route.query;
  set_github_access(code, state);
})
</script>
</code></pre>
|
<python><vue.js><authentication><github><github-api>
|
2024-01-09 18:36:44
| 0
| 323
|
AynonT
|
77,789,024
| 2,159,372
|
Poetry Install Subprocess PoetryException
|
<p>I am facing the below issue with <code>poetry install</code> in Cloud Composer. It fails at the installation of URLObject. I'm looking for suggestions on resolving this, as my online search didn't turn up any results.</p>
<p>The shell script portion is</p>
<pre><code>cat ${SPARK_HOME}/conf/spark-defaults.conf
export POETRY_HOME=/etc/poetry
export PATH="/etc/poetry/bin:$PATH"
curl -sSLfo poetry-install.py https://install.python-poetry.org \
&& python3 poetry-install.py --version 1.3.2
apt-get update && apt-get install -y pkgconf libhdf5-dev
cd $CODE_DIR/test && poetry config virtualenvs.in-project true && poetry install && source .venv/bin/activate
pip3 install shortuuid
</code></pre>
<p>I believe the line "cd $CODE_DIR/test && poetry config virtualenvs.in-project true && poetry install && source .venv/bin/activate" is causing the issue.</p>
<pre><code>[2024-01-09, 12:24:50 UTC] {pod_manager.py:431} INFO - [base] • Installing urlobject (2.4.3)
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] • Installing websocket-client (0.59.0)
2024-01-09T12:24:55.208257615Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] CalledProcessError
2024-01-09T12:24:55.208333478Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Command '['/tmp/backend/ingest/.venv/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/tmp/backend/ingest/.venv', '--no-deps', '/root/.cache/pypoetry/artifacts/23/1e/82/ffa13ae9fb790ca9aab416980cfab9da947b36f6fe14267ae27813c9f3/URLObject-2.4.3.tar.gz']' returned non-zero exit status 1.
2024-01-09T12:24:55.208628370Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] at /usr/lib/python3.8/subprocess.py:516 in run
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 512│ # We don't call process.wait() as .__exit__ does that for us.
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 513│ raise
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 514│ retcode = process.poll()
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 515│ if check and retcode:
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] → 516│ raise CalledProcessError(retcode, process.args,
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 517│ output=stdout, stderr=stderr)
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 518│ return CompletedProcess(process.args, retcode, stdout, stderr)
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 519│
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 520│
2024-01-09T12:24:55.385496695Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] The following error occurred when trying to handle this error:
2024-01-09T12:24:55.385507722Z
2024-01-09T12:24:55.385512832Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] EnvCommandError
2024-01-09T12:24:55.385524223Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Command ['/tmp/backend/ingest/.venv/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/tmp/backend/ingest/.venv', '--no-deps', '/root/.cache/pypoetry/artifacts/23/1e/82/ffa13ae9fb790ca9aab416980cfab9da947b36f6fe14267ae27813c9f3/URLObject-2.4.3.tar.gz'] errored with the following return code 1, and output:
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Processing /root/.cache/pypoetry/artifacts/23/1e/82/ffa13ae9fb790ca9aab416980cfab9da947b36f6fe14267ae27813c9f3/URLObject-2.4.3.tar.gz
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Installing build dependencies: started
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Installing build dependencies: finished with status 'error'
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] error: subprocess-exited-with-error
2024-01-09T12:24:55.385596508Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] × pip subprocess to install build dependencies did not run successfully.
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] │ exit code: 1
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] ╰─> [15 lines of output]
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Collecting setuptools>=40.8.0
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Downloading setuptools-69.0.3-py3-none-any.whl.metadata (6.3 kB)
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Collecting wheel
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Using cached wheel-0.42.0-py3-none-any.whl.metadata (2.2 kB)
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Downloading setuptools-69.0.3-py3-none-any.whl (819 kB)
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 819.5/819.5 kB 10.0 MB/s eta 0:00:00
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Using cached wheel-0.42.0-py3-none-any.whl (65 kB)
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] unknown package:
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Expected sha256 177f9c9b0d45c47873b619f5b650346d632cdc35fb5e4d25058e09c9e581433d
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Got e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2024-01-09T12:24:55.386460841Z
2024-01-09T12:24:55.386465642Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] [notice] A new release of pip is available: 23.3 -> 23.3.2
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] [notice] To update, run: pip install --upgrade pip
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] [end of output]
2024-01-09T12:24:55.386485896Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] note: This error originates from a subprocess, and is likely not a problem with pip.
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] error: subprocess-exited-with-error
2024-01-09T12:24:55.386500760Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] × pip subprocess to install build dependencies did not run successfully.
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] │ exit code: 1
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] ╰─> See above for output.
2024-01-09T12:24:55.386520313Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] note: This error originates from a subprocess, and is likely not a problem with pip.
2024-01-09T12:24:55.386538557Z
2024-01-09T12:24:55.386554330Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] at /etc/poetry/venv/lib/python3.8/site-packages/poetry/utils/env.py:1540 in _run
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 1536│ output = subprocess.check_output(
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 1537│ command, stderr=subprocess.STDOUT, env=env, **kwargs
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 1538│ )
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 1539│ except CalledProcessError as e:
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] → 1540│ raise EnvCommandError(e, input=input_)
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 1541│
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 1542│ return decode(output)
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 1543│
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 1544│ def execute(self, bin: str, *args: str, **kwargs: Any) -> int:
2024-01-09T12:24:55.553353534Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] The following error occurred when trying to handle this error:
2024-01-09T12:24:55.553448024Z
2024-01-09T12:24:55.553459187Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] PoetryException
2024-01-09T12:24:55.553539955Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] Failed to install /root/.cache/pypoetry/artifacts/23/1e/82/ffa13ae9fb790ca9aab416980cfab9da947b36f6fe14267ae27813c9f3/URLObject-2.4.3.tar.gz
2024-01-09T12:24:55.554136296Z
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] at /etc/poetry/venv/lib/python3.8/site-packages/poetry/utils/pip.py:58 in pip_install
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 54│
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 55│ try:
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 56│ return environment.run_pip(*args)
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] 57│ except EnvCommandError as e:
[2024-01-09, 12:24:55 UTC] {pod_manager.py:431} INFO - [base] → 58│ raise PoetryException(f"Failed to install {path.as_posix()}") from e
[2024-01-09, 12:24:57 UTC] {pod_manager.py:431} INFO - [base] 59│
2024-01-09T12:24:55.560534096Z
[2024-01-09, 12:25:00 UTC] {pod_manager.py:431} INFO - [base] Warning: The file chosen for install of s3transfer 0.8.1 (s3transfer-0.8.1-py3-none-any.whl) is yanked.
[2024-01-09, 12:25:00 UTC] {pod_manager.py:431} INFO - [base] Collecting shortuuid
[2024-01-09, 12:25:00 UTC] {pod_manager.py:431} INFO - [base] Downloading shortuuid-1.0.11-py3-none-any.whl (10 kB)
</code></pre>
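<p>Worth noting: the "Got" hash in the log, <code>e3b0c442...</code>, is the SHA-256 digest of zero bytes, which suggests the cached <code>URLObject-2.4.3.tar.gz</code> artifact is an empty file (i.e. a corrupted Poetry cache) rather than a tampered package. A quick check of that fact:</p>

```python
import hashlib

# The SHA-256 of an empty byte string matches the "Got" hash in the error
# output, suggesting the cached tarball was written as a zero-byte file.
print(hashlib.sha256(b"").hexdigest())
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

<p>If that is the case, clearing the Poetry artifact cache before <code>poetry install</code> may help (an assumption to verify, not a confirmed fix).</p>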
|
<python><setuptools><sha256><google-cloud-composer><python-poetry>
|
2024-01-09 18:36:01
| 1
| 1,538
|
mehere
|
77,788,942
| 2,835,427
|
Altair - plot on background image
|
<p>I would like to display an image and plot markers on it. However, the image does not appear to align with the axis well, resulting in the markers in wrong places. Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import altair as alt
import pandas as pd
from PIL import Image
import numpy as np
from io import BytesIO
import base64


def toBase64(np_img):
    # convert numpy image to base64 string
    with BytesIO() as buffer:
        Image.fromarray(np_img).save(buffer, 'png')
        img_str = base64.encodebytes(buffer.getvalue()).decode('utf-8')
    return f"data:image/png;base64,{img_str}"


h, w = 250, 410
im0 = np.zeros((h, w, 3), dtype=np.uint8)
tmp = pd.DataFrame([
    {"x": w//2, "y": h//2, "img": toBase64(im0)}
])
backgrd = alt.Chart(tmp).mark_image(
    width=w,
    height=h
).encode(x='x', y='y', url='img')

b0_df = pd.DataFrame({'x': [0, 0, w, w], 'y': [0, h, 0, h]})
b0 = alt.Chart(b0_df).mark_circle(size=30, color='red').encode(x='x', y='y').properties(width=w, height=h)

(backgrd + b0)
</code></pre>
<p>The results look like this:</p>
<p><a href="https://i.sstatic.net/DbiD3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DbiD3.png" alt="enter image description here" /></a></p>
<p>The markers are plotted correctly but not the image. Also, it seems that the image has been stretched a bit. In Plotly I can set <code>xref</code> and <code>yref</code> to make the image scale match the axis scale; how can I do this in Altair?</p>
|
<python><altair><vega-lite><python-interactive>
|
2024-01-09 18:19:41
| 1
| 1,692
|
Tu Bui
|
77,788,836
| 2,731,076
|
How to set new index for Pandas dataframe from existing column that has duplicates?
|
<p>I am grabbing data from a MongoDB database and converting it into a Pandas dataframe for additional operations to be done later. The MongoDB database contains a bunch of time-based entries and due to how they are stored, each sample for each channel is its own document. Some of these channels always sample at the same time while others are on different schedules. Below is a quick example of a document.</p>
<pre><code>timestamp:
2024-01-05T08:16:30.848+00:00
metaData:
deviceId:
"123"
channelName:
"Channel1"
_id:
659c23016ad87924ff552882
Channel1:
10345
</code></pre>
<p>So when I try to grab a few channels from the database using something like</p>
<pre><code>b = pd.DataFrame(list(timeCol.find({'metaData.deviceId':'123','metaData.channelName':{'$in':['Channel1','Channel2','Channel3','Channel4','Channel5']}},{'_id':0,'metaData':0}).sort('timestamp')))
</code></pre>
<p>I get a dataframe that looks something like below</p>
<pre><code> timestamp Channel1 Channel2 Channel3 Channel4 Channel5
0 2024-01-05 20:27:31.340 0.0 NaN NaN NaN NaN
1 2024-01-05 20:27:31.382 1.0 NaN NaN NaN NaN
2 2024-01-05 20:27:31.400 NaN 2456 NaN NaN NaN
3 2024-01-05 20:27:31.400 NaN NaN 10.231 NaN NaN
4 2024-01-05 20:27:31.400 NaN NaN NaN 2.4 NaN
</code></pre>
<p>But it has many more entries because I'm usually interested in a timespan of a few hours. Anyway, as you can see, Channels 2-5 typically share a timestamp but Channel1 is at a higher rate.</p>
<p><strong>Is there any way that I can set the timestamp column to be the index and have Pandas only use unique entries for timestamp and then correctly sample the other columns?</strong></p>
<p>I know I can probably do this by creating a series for each column and then merging/joining them, but I think that would require a separate call to the DB for every channel and I would prefer to limit DB calls for speed and efficiency. I could request some changes to the DB but this is how the data is broadcast (separate messages for each channel/device) and nothing guarantees channels will be on the same timestamps but that appears to happen more often than not for certain channels. There are also additional channels that are broadcast at a much higher rate that I also need to work into my analysis but I plan to query for those separately and add them in later.</p>
<p>Thanks!</p>
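<p>One possible direction without extra DB calls (a sketch, assuming at most one non-null value per timestamp/channel pair): group on the timestamp column and take the first non-null value per channel, which collapses duplicate timestamps into a single indexed row in one pass:</p>

```python
import numpy as np
import pandas as pd

# Hypothetical miniature of the frame returned from MongoDB.
df = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-05 20:27:31.340", "2024-01-05 20:27:31.400",
        "2024-01-05 20:27:31.400", "2024-01-05 20:27:31.400",
    ]),
    "Channel1": [0.0, np.nan, np.nan, np.nan],
    "Channel2": [np.nan, 2456, np.nan, np.nan],
    "Channel3": [np.nan, np.nan, 10.231, np.nan],
})

# groupby(...).first() keeps the first non-null value in each column,
# so rows sharing a timestamp merge into one row indexed by timestamp.
collapsed = df.groupby("timestamp").first()
```

<p>The higher-rate channels queried separately could then be joined onto this index afterwards.</p>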
|
<python><pandas><mongodb>
|
2024-01-09 18:00:10
| 1
| 813
|
user2731076
|
77,788,494
| 14,649,310
|
Partial validation with marshmallow to ignore unknown fields
|
<p>I have searched for this as much as I could, read the docs, and looked at various posts, but I am starting to believe it is not possible. I want to have a partial validation schema for a Flask app with marshmallow, like this:</p>
<pre><code>from marshmallow import Schema, fields, validate

class MySchema(Schema):
    product_type = fields.Str(required=True, allow_none=True)
</code></pre>
<p>and I want to validate a dictionary and ensure that it has the <code>product_type</code> field. Now, this dictionary might have any number of other fields unknown to me, but I don't care; I want to partially validate that <code>product_type</code> exists. For example:</p>
<pre><code>data={"product_type":"consumable","other_field1":67,"other_field2":"info"...}
</code></pre>
<p>this passes no matter what fields are there BUT only if product type is missing it fails.
How is this done?</p>
|
<python><validation><flask><marshmallow>
|
2024-01-09 17:01:26
| 1
| 4,999
|
KZiovas
|
77,788,492
| 11,277,108
|
Reduce size between table subplots
|
<p>I'm trying to add two tables to a report but am having issues with the space between the tables. By way of an MRE:</p>
<pre><code>import pandas as pd
from plotly.subplots import make_subplots


def main():
    fig = make_subplots(
        rows=2,
        cols=1,
        vertical_spacing=0,
        subplot_titles=("Overall summary", "Year"),
        specs=[
            [{"type": "table"}],
            [{"type": "table"}],
        ],
    )
    df1 = pd.DataFrame(
        {
            "all": [1],
            "n_match": [1219076],
            "n_odds": [228758],
            "acc": [0.735],
        }
    )
    fig.add_table(
        header=dict(
            values=list(df1.columns),
            align="right",
        ),
        cells=dict(
            values=[df1[k].tolist() for k in df1.columns[0:]],
            align="right",
        ),
        row=1,
        col=1,
    )
    df2 = pd.DataFrame(
        {
            "year": list(range(2009, 2024)),
            "n_match": [79510, 77020, 80496, 86096, 89434, 93976, 96190, 97396,
                        96578, 92086, 76258, 27054, 67464, 85188, 74330],
            "n_odds": [4422, 4756, 5010, 5556, 6288, 15644, 17166, 16632,
                       15952, 16060, 18232, 9908, 34294, 40430, 18408],
            "acc": [0.734, 0.734, 0.742, 0.743, 0.743, 0.75, 0.748, 0.747,
                    0.745, 0.733, 0.718, 0.707, 0.715, 0.718, 0.715],
        }
    )
    fig.add_table(
        header=dict(
            values=list(df2.columns),
            align="right",
        ),
        cells=dict(
            values=[df2[k].tolist() for k in df2.columns[0:]],
            align="right",
        ),
        row=2,
        col=1,
    )
    fig.update_layout(width=500, height=850, title_text="Report")
    fig.show()


if __name__ == "__main__":
    main()
</code></pre>
<p>This creates the following plot:</p>
<p><a href="https://i.sstatic.net/SjyUU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SjyUU.png" alt="enter image description here" /></a></p>
<p>Things I've tried:</p>
<ol>
<li>Adjusting the <code>height</code> in <code>fig.update_layout</code> but less than 850 cuts the bottom off the second table</li>
<li>The <code>vertical_spacing</code> parameter in <code>make_sub_plots</code> doesn't go negative</li>
<li>I can't set different <code>rowspan</code> values for the <code>specs</code> parameter in <code>make_sub_plots</code></li>
</ol>
<p>How would I reduce the size of the gap between the tables?</p>
|
<python><plotly>
|
2024-01-09 17:01:08
| 1
| 1,121
|
Jossy
|
77,788,481
| 3,813,064
|
Correct type annotation for a 2D transpose function in python
|
<p>In our code we do a lot of 2D transposes like <code>zip(*something)</code> where <code>something</code> is a list of tuples.</p>
<p>For example take:</p>
<pre class="lang-py prettyprint-override"><code>>>> a = [('a', 1), ('b', 2), ('c', 3)]
>>> result = tuple(zip(*a))
>>> result  # type checkers should reveal tuple[Iterable[str], Iterable[int]]
(('a', 'b', 'c'), (1, 2, 3))
</code></pre>
<p>In both MyPy (<a href="https://mypy-play.net/?mypy=latest&python=3.12&gist=525897b62bbde583e43d94084190a46f" rel="nofollow noreferrer">Play</a>, <a href="https://github.com/python/mypy/issues/5247" rel="nofollow noreferrer">Issue</a>) and Pyright (<a href="https://pyright-play.net/?code=IYAgvCDaAUDkywDQgIwEplwEZJAJgxDgGNcBmNAXQCgAnAUwDd7gAbAfQBcBPAB3uicArr1YCAXgEte0AFTA0i6sqwAuEK0kBnTpGGj6kAGasA9sE7IdtZFm6d6WypXBQadJiw49%2B0KTNksJSA" rel="nofollow noreferrer">Play</a>) the type of <code>tuple(zip(*a))</code> reveals something like <code>tuple[Any]</code>. This causes the code to lose all type information, which I don't want.</p>
<p>Therefore, I want to create a special <code>def transpose(iterable)</code> function that ensures the type checkers are aware of the resulting types: a workaround to avoid losing type information.</p>
<p>How to do this?</p>
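<p>One workaround sketch (hedged: the <code>cast</code> assumes callers really pass homogeneous 2-tuples; a fully general transpose would need <code>typing</code> overloads per arity):</p>

```python
from typing import Iterable, TypeVar, cast

T1 = TypeVar("T1")
T2 = TypeVar("T2")

def transpose2(rows: Iterable[tuple[T1, T2]]) -> tuple[tuple[T1, ...], tuple[T2, ...]]:
    """Transpose an iterable of 2-tuples, preserving per-column element types."""
    cols = tuple(zip(*rows))
    # zip() erases the per-column types, so reassert them for the checker.
    return cast("tuple[tuple[T1, ...], tuple[T2, ...]]", cols)

names, numbers = transpose2([("a", 1), ("b", 2), ("c", 3)])
# names is typed tuple[str, ...], numbers is typed tuple[int, ...]
```

<p>At runtime this is just <code>tuple(zip(*rows))</code>; the annotation and <code>cast</code> exist purely so MyPy/Pyright stop inferring <code>Any</code>.</p>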
|
<python><python-typing>
|
2024-01-09 16:59:12
| 1
| 2,711
|
Kound
|
77,788,233
| 8,223,367
|
use pyalsaaudio to connect an input and output device
|
<p>I'm facing an issue with my Python audio streaming code. Currently, when running the code, I can only hear a "tuuuut" sound on the Bluetooth audio speaker. I expected the audio from the input sound device to be properly streamed to the speaker.</p>
<p>The magic happens in the function <code>_process_audio</code>, and I am not able to figure out how to feed the data from capture into playback.</p>
<p>Does anybody have an idea what is wrong and how to improve it?</p>
<hr />
<p>EDIT:</p>
<p>I found out, that if I just record it to a wav and play it with aplay, it's the same.</p>
<p>on the bash I can do it like this</p>
<pre><code>arecord -D hw:0,0 -f cd -r 44100 -c 2 | aplay -D bluealsa
</code></pre>
<p>and it works!</p>
<p>Why does it not work with my Python solution?</p>
<hr />
<pre><code>import alsaaudio
import multiprocessing


class AudioProcessor:
    def __init__(
        self,
        playback_device="bluealsa",
        capture_device="hw:0,0",
        channels=2,
        rate=44100,
        format=alsaaudio.PCM_FORMAT_S16_LE,
        mode=alsaaudio.PCM_NONBLOCK,
        buffer_size=128,
    ):
        """
        Initializes the AudioProcessor class.

        Args:
            playback_device (str): The playback device name.
            capture_device (str): The capture device name.
            channels (int): The number of audio channels.
            rate (int): The audio sample rate.
            format (int): The audio sample format.
            mode (int): The audio mode.
            buffer_size (int): The audio buffer size.
        """
        # Set up the capture (record) device
        self.capture = alsaaudio.PCM(
            type=alsaaudio.PCM_CAPTURE,
            device=capture_device,
            format=format,
            rate=rate,
            channels=channels,
            periodsize=buffer_size,
            mode=mode,
        )

        # Set up the playback device
        self.playback = alsaaudio.PCM(
            type=alsaaudio.PCM_PLAYBACK,
            device=playback_device,
            format=format,
            rate=rate,
            channels=channels,
            periodsize=buffer_size,
            mode=mode,
        )

        # Flag for graceful termination
        self.terminate_flag = multiprocessing.Event()

    def _process_audio(self):
        """
        Continuously processes audio data from the capture device and plays it back.
        This method is intended to be run in a separate process.
        """
        while not self.terminate_flag.is_set():
            try:
                # Capture audio
                # _, input_data = self.capture.read()
                # Playback audio
                self.playback.write(self.capture.read()[1])
            except alsaaudio.ALSAAudioError as e:
                print(f"Error in audio processing: {e}")

    def start_playback(self):
        """Starts the audio playback process"""
        self.playback_process = multiprocessing.Process(
            target=self._process_audio, name="playback_process"
        )
        self.playback_process.start()

    def close_streams(self):
        """
        Closes the audio streams and terminates the playback process.
        This method should be called to gracefully stop audio processing.
        """
        self.terminate_flag.set()
        self.playback_process.join()  # Wait for the process to finish
        with self.playback, self.capture:
            pass
</code></pre>
|
<python><pyalsaaudio>
|
2024-01-09 16:18:50
| 1
| 327
|
theother
|
77,788,150
| 1,008,588
|
Pandas data frame use interpolate() partitioning with specific columns
|
<p>I have the following Pandas data frame (called <code>df</code>).</p>
<pre><code>+--------+--------+------+--------+
| Person | Animal | Year | Number |
+--------+--------+------+--------+
| John | Dogs | 2000 | 2 |
| John | Dogs | 2001 | 2 |
| John | Dogs | 2002 | 2 |
| John | Dogs | 2003 | 2 |
| John | Dogs | 2004 | 2 |
| John | Dogs | 2005 | 2 |
| John | Cats | 2000 | 1 |
| John | Cats | 2001 | NaN |
| John | Cats | 2002 | NaN |
| John | Cats | 2003 | 4 |
| John | Cats | 2004 | 5 |
| John | Cats | 2005 | 6 |
| Peter | Dogs | 2000 | NaN |
| Peter | Dogs | 2001 | 1 |
| Peter | Dogs | 2002 | NaN |
| Peter | Dogs | 2003 | 5 |
| Peter | Dogs | 2004 | 5 |
| Peter | Dogs | 2005 | 5 |
| Peter | Cats | 2000 | NaN |
| Peter | Cats | 2001 | 4 |
| Peter | Cats | 2002 | 4 |
| Peter | Cats | 2003 | 4 |
| Peter | Cats | 2004 | 4 |
| Peter | Cats | 2005 | 4 |
+--------+--------+------+--------+
</code></pre>
<p>My target is to get the following, which means using the interpolate method to fill the <code>NaN</code> values, but based on the other column value. In other words, it should</p>
<ol>
<li>partition the df using the <code>Person</code> and <code>Animal</code> columns</li>
<li>order by <code>Year</code> (asc)</li>
<li>apply the <a href="https://github.com/nkmk/python-snippets/blob/20611e7fec7b73380534460784071f486f57e1bd/notebook/pandas_interpolate.py#L18-L24" rel="nofollow noreferrer">interpolate</a> method</li>
</ol>
<pre><code>+--------+--------+------+--------+
| Person | Animal | Year | Number |
+--------+--------+------+--------+
| John | Dogs | 2000 | 2 |
| John | Dogs | 2001 | 2 |
| John | Dogs | 2002 | 2 |
| John | Dogs | 2003 | 2 |
| John | Dogs | 2004 | 2 |
| John | Dogs | 2005 | 2 |
| John | Cats | 2000 | 1 |
| John | Cats | 2001 | 2 |
| John | Cats | 2002 | 3 |
| John | Cats | 2003 | 4 |
| John | Cats | 2004 | 5 |
| John | Cats | 2005 | 6 |
| Peter | Dogs | 2000 | NaN |
| Peter | Dogs | 2001 | 1 |
| Peter | Dogs | 2002 | 3 |
| Peter | Dogs | 2003 | 5 |
| Peter | Dogs | 2004 | 5 |
| Peter | Dogs | 2005 | 5 |
| Peter | Cats | 2000 | NaN |
| Peter | Cats | 2001 | 4 |
| Peter | Cats | 2002 | 4 |
| Peter | Cats | 2003 | 4 |
| Peter | Cats | 2004 | 4 |
| Peter | Cats | 2005 | 4 |
+--------+--------+------+--------+
</code></pre>
<p><strong>What I have done</strong></p>
<p>I can filter for each Person and each Animal, apply the interpolate method, and finally merge everything back together, but this sounds tedious and long-winded if there are many combinations.</p>
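<p>The per-group interpolation described above can be sketched with <code>groupby</code> plus <code>transform</code>, which avoids the manual filter-and-merge loop (a minimal sketch using a subset of the sample data; column names as in the question):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Person": ["John"] * 6 + ["Peter"] * 6,
    "Animal": ["Cats"] * 6 + ["Dogs"] * 6,
    "Year": list(range(2000, 2006)) * 2,
    "Number": [1, np.nan, np.nan, 4, 5, 6,
               np.nan, 1, np.nan, 5, 5, 5],
})

# Sort so interpolation runs in Year order within each (Person, Animal) group,
# then interpolate each group independently.
df = df.sort_values(["Person", "Animal", "Year"])
df["Number"] = (
    df.groupby(["Person", "Animal"])["Number"]
      .transform(lambda s: s.interpolate())
)
```

<p>Note that the default <code>interpolate()</code> does not backfill a leading <code>NaN</code> (Peter/Dogs/2000 stays <code>NaN</code>), which matches the expected output shown above.</p>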
|
<python><pandas><dataframe><interpolation><fillna>
|
2024-01-09 16:06:17
| 2
| 2,764
|
Nicolaesse
|
77,787,937
| 601,493
|
Use network namespace per thread with sytem DNS resolution
|
<p>I would like to have single Python application to work with different network namespaces. This to my understanding can be achieved on per-thread basis via <code>pthread.setns(...)</code> as explained for example in <a href="https://stackoverflow.com/questions/28846059/can-i-open-sockets-in-multiple-network-namespaces-from-my-python-code">Can I open sockets in multiple network namespaces from my Python code?</a></p>
<p>My code uses a 3rd-party library which internally uses DNS resolution through socket (e.g. <code>socket.gethostbyaddr()</code>), and I would like these socket calls to consult different per-namespace DNS settings, ideally the same way <code>ip netns exec ...</code> would bind-mount <code>/etc/netns/namespace-name/resolv.conf</code>. This should happen based on the namespace of the thread the socket call is invoked from.</p>
<p>Any idea how to achieve this? Obviously I could patch the library to use some other DNS library where I can provide the settings myself, but for now I would like to keep delegating to the system DNS resolver.</p>
|
<python><linux-kernel><namespaces>
|
2024-01-09 15:32:33
| 0
| 17,968
|
Jan Zyka
|
77,787,715
| 22,538,132
|
How to approximate contour into four points
|
<p>I have binary masks obtained from a segmentation model, and I want to get only four corner points of each contour that enclose the majority of the points with minimal area, as you can see in this image:
<a href="https://i.sstatic.net/qN7gQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qN7gQ.png" alt="enter image description here" /></a></p>
<p>The corners are not necessarily forming a rectangle, and the mask is noisy:</p>
<p><a href="https://i.sstatic.net/4snPs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4snPs.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/CtLoM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CtLoM.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Ay6hO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ay6hO.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/dYCtG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dYCtG.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/dhyfl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dhyfl.png" alt="enter image description here" /></a></p>
<p>I have tried contour detection, approximation, and <code>minAreaRect</code>, but the rectangle is in many cases wider than the shape, and not minimal to the limit I want:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import os
from os import path as osp
import cv2
import numpy as np
path = "seg_masks"
im_list = os.listdir(path)
def lcc(binary_image:np.ndarray)->np.ndarray:
# Find connected components
print(binary_image.shape, binary_image.dtype)
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary_image, connectivity=8)
# Find the label (index) of the largest connected component
largest_component_label = np.argmax(stats[1:, cv2.CC_STAT_AREA]) + 1 # Skip background label (0)
largest_component_mask = (labels == largest_component_label).astype(np.uint8)
largest_component_mask = largest_component_mask.astype(np.uint8)
return largest_component_mask
for img_name in im_list:
bgr_img_mask = cv2.imread(osp.join(path, img_name), 0)
cv2.imwrite(osp.join(path, "white", img_name), bgr_img_mask)
lcc_mask = lcc(bgr_img_mask)
# Erosion to clean the mask contour a bit
cl_ker = 5
kernel = np.ones((cl_ker, cl_ker), np.uint8)
erosion = cv2.erode(lcc_mask,kernel,iterations = 3)
contours, _ = cv2.findContours(
lcc_mask, mode=cv2.RETR_TREE,
method=cv2.CHAIN_APPROX_NONE
)
if(len(contours)):
max_cnt = max(contours, key=cv2.contourArea)
epsilon = 0.008 * cv2.arcLength(max_cnt, True)
approx = cv2.approxPolyDP(max_cnt, epsilon, True)
approx = np.squeeze(np.array(approx, dtype=int), axis=1)
rect = cv2.minAreaRect(approx)
box = cv2.boxPoints(rect)
box = np.int0(box)
bgr_img_mask = cv2.drawContours(bgr_img_mask, [box], 0, 200, 2)
cv2.drawContours(bgr_img_mask, [approx], -1, 128, 2)
bgr_img_mask = cv2.putText(bgr_img_mask, f"{round(epsilon,2)}", (50, 50) ,
cv2.FONT_HERSHEY_SIMPLEX , 1, 255, 2, cv2.LINE_AA)
win_name = "img"
cv2.namedWindow(win_name, cv2.WINDOW_NORMAL)
cv2.imshow(win_name, bgr_img_mask)
cv2.waitKey(0)
</code></pre>
<p>output:</p>
<p><a href="https://i.sstatic.net/D1BiD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D1BiD.png" alt="enter image description here" /></a></p>
<p>Can you please guide me on how to do that? thanks.</p>
|
<python><opencv><image-processing><computer-vision>
|
2024-01-09 15:02:56
| 1
| 304
|
bhomaidan90
|
77,787,408
| 16,383,578
|
How to tell if a bunch of shapes are all mirror images using OpenCV?
|
<p>I had taken an online visual IQ test, in it a lot of questions are like the following:</p>
<p><img src="https://www.idrlabs.com/static/i/eysenck-iq/en/3.png" alt="" /></p>
<p><img src="https://www.idrlabs.com/static/i/eysenck-iq/en/5.png" alt="" /></p>
<p><img src="https://www.idrlabs.com/static/i/eysenck-iq/en/7.png" alt="" /></p>
<p>The addresses of the images are:</p>
<pre><code>[f"https://www.idrlabs.com/static/i/eysenck-iq/en/{i}.png" for i in range(1, 51)]
</code></pre>
<p>In these images there are several shapes that are almost identical and of nearly the same size. Most of these shapes can be obtained from the others by rotation and translation, but there are some shapes that cannot superpose the others via rotation and translation only, and must involve reflection, these shapes have a different chirality from the others, and they are "the odd men". The task is to find them.</p>
<p>The answers here are 2, 1, and 4, respectively. In these questions, you are tasked to find n shapes that have a different chirality from the others, there are always at least 2n + 1 shapes. Yes, sometimes you are tasked to find multiple shapes that are different:</p>
<p><img src="https://www.idrlabs.com/static/i/eysenck-iq/en/49.png" alt="" /></p>
<p>In the above question, the answers are 5, 6 and 8.</p>
<p>I wish to automate it, and I am extremely close to success, but there is one single problem.</p>
<p>I have written code to load the image from the address, find contours, eliminate the contours that are too small, find the minimum area bounding boxes, and extract the shapes.</p>
<p>I extract the shapes by rotating the image so that the bounding box is straight, and calculating the new coordinates of the bounding box after rotation, and then using index slicing to extract the shape. I also use bit masking with the filled convex hull of the contour to remove extra bits.</p>
<p>I have even written a function to sort the bounding boxes left to right, top to bottom. I do this by repeatedly finding the box that is at the top and removing it from the list, calculating its radius, and then, for each remaining box, checking if its center is between top-y - radius and top-y + radius; if so, I remove that box from the list and add it to the current row, sort the current row by x, concatenate the row with the result, and then find the top box again... until the list is empty.</p>
<hr />
<p>I have written code to calculate the Hu Moments, and I have found if the shapes have different chirality then their seventh Hu Moment have opposite signs, and if they have the same chirality then their seventh Hu Moment have the same sign.</p>
<p>So the task is to split the shapes into two groups by their seventh Hu Moment sign and identify whichever group has fewer members. It is extremely easy. Because it is so easy I won't show how I do it here.</p>
<p>But here is the hard part, how to tell whether or not all the shapes are almost identical, that is, they can superpose one another via translation, rotation, and reflection?</p>
<p>All above shapes are identical to others in the same group, problem is, the questions aren't always like this.</p>
<p>In fact the very first question isn't like this:</p>
<p><img src="https://www.idrlabs.com/static/i/eysenck-iq/en/1.png" alt="" /></p>
<p>The most obvious answer is 2, because it has 3 segments that cross its center unlike the others which have 2 segments. And as you can see, in it the shapes are different.</p>
<p>Also the fourth question:</p>
<p><img src="https://www.idrlabs.com/static/i/eysenck-iq/en/4.png" alt="" /></p>
<p>The most obvious answer is 3 because the pattern is right up repeat and 3 breaks the pattern. But as you can see, these are all of the same chirality. This is easy to deal with.</p>
<p>And there are questions that contain shapes that are extremely similar but not the same, such as:</p>
<p><img src="https://www.idrlabs.com/static/i/eysenck-iq/en/12.png" alt="" /></p>
<p>By the way, in it, the program finds more than 5 contours because the small geometric figures are counted as distinct shapes, despite the fact that they are obviously intended to be grouped into five groups.</p>
<p>So how do I determine if the shapes are identical? Obviously this is going to involve some heuristics, but using what heuristics?</p>
<hr />
<p>I have calculated the log10 scaled Hu Moments of each shapes, and the mean, standard deviation, and norm of the moments, I have also written a function to compare every pair of images in a group using <code>cv2.matchShapes</code>, and calculated the mean, std-dev, norm of all the pairs associated with each element:</p>
<pre><code>import cv2
import requests
import numpy as np
def grayscale(image):
return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
def get_contours(image):
_, thresh = cv2.threshold(grayscale(image), 128, 255, 0)
thresh = ~thresh
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
return contours
def sort_bounding_rectangles(rectangles):
l = len(rectangles)
remaining = list(range(l))
rows = []
while remaining:
top = min(((rectangles[i], i) for i in remaining), key=lambda x: x[0][0][1])
remaining.remove(top[1])
row = [top]
y = top[0][0][1]
width, height = top[0][1]
radius = (width**2 + height**2) ** 0.5 / 2
low, high = y - radius, y + radius
for i in remaining.copy():
rectangle = rectangles[i]
if low <= rectangle[0][1] <= high:
row.append((rectangle, i))
remaining.remove(i)
row.sort(key=lambda x: x[0][0][0])
rows.extend(row)
return rows
def find_largest_shapes(image):
contours = [
e for e in get_contours(image) if cv2.contourArea(cv2.convexHull(e)) >= 1000
]
rectangles = [cv2.minAreaRect(contour) for contour in contours]
return [
(rectangle, contours[i])
for rectangle, i in sort_bounding_rectangles(rectangles)
]
def rotate_image(image, angle):
size_reverse = np.array(image.shape[1::-1])
M = cv2.getRotationMatrix2D(tuple(size_reverse / 2.0), angle, 1.0)
MM = np.absolute(M[:, :2])
size_new = MM @ size_reverse
M[:, -1] += (size_new - size_reverse) / 2.0
return cv2.warpAffine(image, M, tuple(size_new.astype(int)))
def int_sort(arr):
return np.sort(np.intp(np.floor(arr + 0.5)))
RADIANS = {}
def rotate(x, y, angle):
if pair := RADIANS.get(angle):
cosa, sina = pair
else:
a = angle / 180 * np.pi
cosa, sina = np.cos(a), np.sin(a)
RADIANS[angle] = (cosa, sina)
return x * cosa - y * sina, y * cosa + x * sina
def new_border(x, y, angle):
nx, ny = rotate(x, y, angle)
nx = int_sort(nx)
ny = int_sort(ny)
return nx[3] - nx[0], ny[3] - ny[0]
def coords_to_pixels(x, y, w, h):
cx, cy = w / 2, h / 2
nx, ny = x + cx, cy - y
nx, ny = int_sort(nx), int_sort(ny)
a, b = nx[0], ny[0]
return a, b, nx[3] - a, ny[3] - b
def new_contour_bounds(pixels, w, h, angle):
cx, cy = w / 2, h / 2
x = np.array([-cx, cx, cx, -cx])
y = np.array([cy, cy, -cy, -cy])
nw, nh = new_border(x, y, angle)
bx, by = pixels[..., 0] - cx, cy - pixels[..., 1]
nx, ny = rotate(bx, by, angle)
return coords_to_pixels(nx, ny, nw, nh)
def extract_shape_helper(box, contour, image, rectangle):
h, w = image.shape[:2]
angle = rectangle[2]
x, y, dx, dy = new_contour_bounds(box, w, h, angle)
empty = np.zeros_like(image)
mask = cv2.fillConvexPoly(empty, cv2.convexHull(contour), (255, 255, 255))
return rotate_image(image & mask, angle)[y : y + dy, x : x + dx]
def extract_shape(rectangle, contour, image):
box = np.intp(np.floor(cv2.boxPoints(rectangle) + 0.5))
shape = extract_shape_helper(box, contour, image, rectangle)
sh, sw = shape.shape[:2]
if sh < sw:
shape = np.rot90(shape)
return shape
def extract_all_shapes(image):
result = []
for i, (rectangle, contour) in enumerate(find_largest_shapes(image)):
        shape = extract_shape(rectangle, contour, image)
result.append((shape, contour, i))
return result
def scaled_HuMoments(image):
moments = cv2.HuMoments(cv2.moments(grayscale(image))).flatten()
return -1 * np.copysign(1.0, moments) * np.log10(np.abs(moments))
def load_online_image(url):
return cv2.imdecode(
np.asarray(
bytearray(
requests.get(url).content,
),
dtype=np.uint8,
),
-1,
)
def pairwise_compare(shapes):
l = len(shapes)
comparison = [[] for _ in range(l)]
shapes = [grayscale(shape) for shape in shapes]
for i, e in enumerate(shapes[: l - 1]):
for j in range(i + 1, l):
diff = cv2.matchShapes(e, shapes[j], cv2.CONTOURS_MATCH_I1, 0)
comparison[i].append(diff)
comparison[j].append(diff)
return comparison
for i in range(1, 51):
img = load_online_image(f"https://www.idrlabs.com/static/i/eysenck-iq/en/{i}.png")
shapes, contours, indices = zip(*extract_all_shapes(img))
moments = [scaled_HuMoments(shape) for shape in shapes]
comparison = pairwise_compare(shapes)
print(
i,
np.mean(moments, axis=0),
np.std(moments, axis=0),
np.linalg.norm(moments, axis=0),
[
np.mean([np.linalg.norm(e - f) for f in moments[:i] + moments[i + 1 :]])
for i, e in enumerate(moments)
],
[(np.mean(e), np.std(e), np.linalg.norm(e)) for e in comparison],
)
</code></pre>
<p>The output is quite long, and here is a sample:</p>
<pre><code>1 [ 3.07390334 9.20804671 12.8461357 14.51696789 16.68961852 3.3599212
4.38850922] [ 0.02824479 0.5203131 2.20535896 0.83599035 23.43547701 18.90969881
28.06080182] [ 6.87374697 20.62266349 29.14505196 32.51470718 64.33369835 42.94565052
63.50856683] [62.566545021375056, 48.74582883360671, 64.43159607200943, 78.42156717100089, 51.21300989005003] [(0.005014132959887091, 0.002491910083805807, 0.01119841867500928), (0.006020965019003746, 0.00199687700714013, 0.012686928318818815), (0.0030201480551207693, 0.0015969796757999888, 0.006832755918300609), (0.003546920727516889, 0.0012401031295098184, 0.007514919139713742), (0.0033065038420672516, 0.002016290195678553, 0.007745551965042893)]
2 [ 2.97172593 6.44666183 9.8363414 10.60604303 20.86557466 13.8481828
12.76418785] [1.41401183e-02 7.24228229e-02 3.74833367e-02 8.26491630e-02
1.50811464e-01 1.22104849e-01 1.69709762e+01] [ 6.64505641 14.4160837 21.99488771 23.71655326 46.658062 30.9666818
47.48360373] [10.818253126228177, 10.982586601638047, 42.428013449717554, 10.910867281706224, 10.848427217759596] [(0.001506921852584192, 0.0009882097830638086, 0.0036040932535875334), (0.003516907872134406, 0.0008668050736689571, 0.007244305906522502), (0.0015011522238101704, 0.000993415709169907, 0.003600184867628755), (0.0018174060147817805, 0.001260294847499211, 0.004423260211743394), (0.0023787361481882874, 0.0014468593917222597, 0.005568406508908112)]
3 [ 2.61771073 5.30659133 8.8536436 9.50848944 18.86447369
12.7987164 -11.35360489] [ 0.01619639 0.03695343 0.05006323 0.09325839 0.23476942 0.38993702
15.02706999] [ 5.85349117 11.86618664 19.79766544 21.26265136 42.18551197 28.63207925
42.11396303] [9.747008299781871, 37.573686324953485, 10.243633427009456, 9.720464324546901, 9.78238944786298] [(0.0024618215364579227, 0.0012707943194968936, 0.00554093258570563), (0.002364270277469327, 0.0012510519833639265, 0.005349730838090344), (0.0038182383231789296, 0.002023821027669545, 0.008642868839599421), (0.002909711587421565, 0.0016709432962376343, 0.0067107296238834765), (0.004788150219463536, 0.0015576789867383207, 0.010070302150357719)]
4 [ 2.9954428 6.82028526 9.54511531 10.9612486 -21.23828174
-14.38515271 6.91790527] [4.77222668e-03 7.17286208e-02 3.69215392e-02 4.81258831e-02
8.67832272e-02 7.61551081e-02 2.09493605e+01] [ 7.33731573 16.70714266 23.38083696 26.84972479 52.02338757 35.23677779
54.04071356] [17.94732078416027, 35.29165646520066, 17.950699158333215, 18.23280085935311, 17.93185442022226, 36.313920730849] [(0.0012898553531571987, 0.0002516968837579109, 0.002938603540256601), (0.0004973573147971644, 0.0004829472326333535, 0.0015501650365210475), (0.0004903504494842004, 0.0003222701546145148, 0.0013120625287398094), (0.0004903504494842004, 0.00047557236310889273, 0.0015274368004312548), (0.0005065526488026162, 0.0003082944435671216, 0.0013259733216458537), (0.0006724236253112581, 0.0005150535939673667, 0.0018939822287120812)]
5 [ 3.09082421 7.61628388 9.97917351 11.593797 22.4659269
15.53389331 -13.56588549] [2.12577685e-03 5.70692714e-02 2.52947012e-02 3.34935722e-02
5.10378438e-02 2.73210581e-02 1.81105332e+01] [ 6.91129468 17.03100658 22.31418202 25.92462638 50.23546937 34.73489512
50.59766105] [45.27603301509411, 11.424555924626425, 11.571023004898416, 11.439967497985075, 11.423495225541958] [(0.00033414574755881443, 0.00019891946914752526, 0.000777746323212324), (0.00023785531982488395, 0.00014708424579467746, 0.0005593171856111233), (0.0003804657093928465, 0.00018151804352047388, 0.0008430965689582721), (0.000319213439906485, 0.0001910892479572674, 0.0007440761275617035), (0.00023333035037390037, 0.00013786485948325051, 0.0005420323676163391)]
</code></pre>
<p>You can run the code to see the full output.</p>
<p>The problem is, how to interpret these numbers, what heuristics should I use, so that when the computer sees these numbers, it then knows whether all shapes are identical or not? What cutoff should I use?</p>
|
<python><opencv><image-processing><computer-vision><heuristics>
|
2024-01-09 14:15:42
| 1
| 3,930
|
Ξένη Γήινος
|
77,787,361
| 775,155
|
How do I concatenate two data frames/arrays where each row must have the same key in Python?
|
<p>I have two CSV files:</p>
<pre><code>index,X,Y
1,1.0,2.0
3,1.3,2.3
</code></pre>
<p>and</p>
<pre><code>index,Z
1,3.0
</code></pre>
<p>that I want to read and concatenate into an m x 4 numpy array in Python
with the rule that only rows with indices that are present in both files should be used.</p>
<p>The result of the two files above should be a 1 x 4 dataframe or array:</p>
<pre><code>index,X,Y,Z
1,1.0,2.0,3.0
</code></pre>
<p>I have written 50 lines of (not very pythonic I am afraid) code myself that does this but I would prefer using a more compact and better tested external code. I would like the solution to use either/both numpy and pandas.</p>
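<p>The "indices present in both files" rule described above is exactly an inner join, which can be sketched with <code>pandas.merge</code> (a minimal sketch with the two sample files inlined via <code>StringIO</code>):</p>

```python
from io import StringIO

import pandas as pd

csv1 = StringIO("index,X,Y\n1,1.0,2.0\n3,1.3,2.3\n")
csv2 = StringIO("index,Z\n1,3.0\n")

df1 = pd.read_csv(csv1)
df2 = pd.read_csv(csv2)

# An inner merge keeps only rows whose index appears in both files.
merged = df1.merge(df2, on="index", how="inner")
arr = merged.to_numpy()  # m x 4 array: index, X, Y, Z
```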
|
<python><pandas><numpy>
|
2024-01-09 14:08:04
| 1
| 4,009
|
Andy
|
77,787,231
| 1,422,096
|
Function parameter parsing if the type can vary
|
<p>I have a function that can be called with either a boolean <code>onoff</code> or a dictionary <code>request</code>. If <code>onoff</code> comes <em>directly</em> as a parameter, it can be <code>True</code>, <code>False</code> or <code>None</code>. If it comes from the <code>request</code> dictionary, then <code>request["onoff"]</code> can be either an int or a string (it comes from an HTTP API request; note: it is already sanitized).</p>
<pre class="lang-py prettyprint-override"><code>def func(onoff=None, request=None):
if request is not None:
onoff = request.get("onoff")
if onoff is not None:
onoff = bool(int(onoff)) # works but it seems not very explicit
if onoff:
print("Turn On")
else:
print("Turn Off")
else:
print("onoff is None, so no modification")
func()
func(onoff=True) # True
func(request={"onoff": 1}) # True
func(request={"onoff": "1"}) # True
func(onoff=False) # False
func(request={"onoff": 0}) # False
func(request={"onoff": "0"}) # False
</code></pre>
<p>Is there a way to avoid this custom <code>bool(int(onoff))</code> and have a general way to handle this situation?</p>
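<p>One way to make the coercion explicit is to factor it into a small helper that names the accepted inputs, instead of relying on <code>bool(int(onoff))</code> inline (a sketch; the <code>to_bool</code> name and the strictness of the string check are illustrative choices, not an established convention):</p>

```python
def to_bool(value):
    """Coerce True/False/1/0/'1'/'0' to bool; pass None through unchanged."""
    if value is None:
        return None
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        value = value.strip()
        if value not in ("0", "1"):
            raise ValueError(f"expected '0' or '1', got {value!r}")
    return bool(int(value))


def func(onoff=None, request=None):
    if request is not None:
        onoff = request.get("onoff")
    onoff = to_bool(onoff)
    if onoff is None:
        print("onoff is None, so no modification")
    elif onoff:
        print("Turn On")
    else:
        print("Turn Off")
```

<p>The behavior is the same as the original for all the listed call patterns, but unexpected strings now fail loudly instead of being silently coerced.</p>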
|
<python>
|
2024-01-09 13:47:07
| 2
| 47,388
|
Basj
|
77,787,226
| 3,941,671
|
define type of queue items in python
|
<p>I have a function with a <code>queue.Queue</code> as an argument. This queue contains elements of my self-defined class <code>PlantCommand</code> and should be processed by another thread.
Therefore, I defined the thread target function in the following way:</p>
<pre><code>def run(command_queue: queue.Queue, status: PlantStatus) -> None:
</code></pre>
<p>When I now remove one item from the queue using its <code>get()</code>-function, the type of the element is not known anymore. Is there a way to define the type of the queue items? Something like</p>
<pre><code>def run(command_queue: queue.Queue<PlantCommand>, status: PlantStatus) -> None:
</code></pre>
<p>Or can I define the item type afterwards?
I often use auto-completion and also check my code via syntax highlighting. In the case of missing type information, neither works anymore; i.e., when I type <code>_cmd.</code> I get no suggestions related to the <code>PlantCommand</code> class of the items that I put into the queue.</p>
<p><a href="https://i.sstatic.net/i0KT8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i0KT8.png" alt="syntax highlighting" /></a></p>
<pre><code>class PlantCommand():
NO_CMD = 0
ENABLE_REMOTE_CONTROL = 1
RUN_NC_SCRIPT = 2
ABORT = 3
TERMINATE = 4
_valid_commands = (NO_CMD,
ENABLE_REMOTE_CONTROL,
RUN_NC_SCRIPT,
ABORT,
TERMINATE)
def __init__(self, command: int = NO_CMD, file: str = '') -> None:
if command not in PlantCommand._valid_commands:
raise ValueError('No valid command.')
self.command = command
self.file = file
def run(command_queue: queue.Queue, status: PlantStatus) -> None:
logging.debug('Plant thread started.')
# statemachine
while True:
try:
_cmd = command_queue.get_nowait()
if _cmd.command == PlantCommand.TERMINATE:
pass
except queue.Empty:
# no item in queue -> continue
pass
</code></pre>
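<p>Since Python 3.9, <code>queue.Queue</code> is subscriptable, so the item type can be declared directly in the annotation; on older versions the same spelling works inside a string annotation, which editors also understand. A minimal sketch, with <code>PlantCommand</code> trimmed to the relevant fields:</p>

```python
import queue


class PlantCommand:
    NO_CMD = 0
    TERMINATE = 4

    def __init__(self, command: int = NO_CMD, file: str = "") -> None:
        self.command = command
        self.file = file


# "queue.Queue[PlantCommand]" tells the type checker what get() returns,
# so _cmd. then offers PlantCommand attributes in auto-completion.
def run(command_queue: "queue.Queue[PlantCommand]") -> None:
    try:
        _cmd = command_queue.get_nowait()  # inferred as PlantCommand
        if _cmd.command == PlantCommand.TERMINATE:
            pass
    except queue.Empty:
        pass
```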
|
<python>
|
2024-01-09 13:46:20
| 1
| 471
|
paul_schaefer
|
77,787,084
| 5,230,928
|
Python: Iterate and modify a complex dict including lists
|
<p>I need to iterate through a complex dictionary and modify it according to several conditions.</p>
<p>This is a very basic example of the dict</p>
<pre><code>a = {
'x' : ["x1", "x2", 'x3'],
'y' : [
{'y1' : 1},
{'y2' : 2.0},
{'y3' : True},
{'y4' : 99},
],
'z' : {
'x' : ["a1", "a2", 'a3'],
'y' : [
{'y1' : 66},
{'y2' : False},
{'y3' : 3},
{'y4' : 4.3},
]
},
'y3' : "Delete Me"
}
</code></pre>
<p>Iterating through it is also no problem:</p>
<pre><code> pTypes = [str, bool, int, float]
def parse(d, c):
t = type(d)
if t == dict:
for k, v in d.items():
parse(v, c+[k])
elif t == list:
for i, p in enumerate(d):
parse(p,c+[i])
elif t in pTypes:
print(c,'=>',d)
else:
print('Error: ',c,d,t)
parse(a,[])
</code></pre>
<p>My problem is that I need to delete all items which have a key == 'y3' or a value > 50.</p>
<p>Additionally, all strings need to be enclosed by '-', like 'x1' => '-x1-'.</p>
<p>Just modifying the data does not work (the change is not persistent):</p>
<pre><code>...
elif t in pTypes:
if t == str:
d = '-'+d+'-'
...
</code></pre>
<p>And I have no idea how to remove items while iterating the dict itself.</p>
<p>Since the dict can be huge, I would prefer not to make a "reduced" copy of the original.</p>
<p>Can anybody help?</p>
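<p>One possible reading of the requirements above can be sketched as an in-place recursive transform; iterating over <code>list(node)</code> makes deletion during iteration safe, and list items are visited in reverse so indices stay valid after a <code>del</code>. Note one consequence of this reading: deleting e.g. <code>'y4': 99</code> leaves an empty dict behind in the list, which may or may not be desired:</p>

```python
def transform(node):
    """Modify node in place: drop key 'y3' and numeric values > 50, wrap strings."""
    if isinstance(node, dict):
        for key in list(node):  # list() snapshot lets us delete while iterating
            value = node[key]
            if key == 'y3' or (isinstance(value, (int, float))
                               and not isinstance(value, bool) and value > 50):
                del node[key]
            elif isinstance(value, str):
                node[key] = '-' + value + '-'
            else:
                transform(value)
    elif isinstance(node, list):
        for i in range(len(node) - 1, -1, -1):  # reverse: indices survive del
            value = node[i]
            if (isinstance(value, (int, float))
                    and not isinstance(value, bool) and value > 50):
                del node[i]
            elif isinstance(value, str):
                node[i] = '-' + value + '-'
            else:
                transform(value)
```

<p>The <code>isinstance(value, bool)</code> guard is there because <code>bool</code> is a subclass of <code>int</code> in Python, so without it <code>True</code>/<code>False</code> would be treated as numbers.</p>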
|
<python><dictionary><parsing>
|
2024-01-09 13:21:08
| 2
| 475
|
JustMe
|
77,787,027
| 13,364,448
|
How to apply filters on MongoDBAtlasVectorSearch similarity_search_with_score” of langchain?
|
<p>I am using <a href="https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas" rel="nofollow noreferrer">MongoDBAtlasVectorSearch</a> and I want to search for the most similar documents, so I use the function <a href="https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas#pre-filtering-with-similarity-search" rel="nofollow noreferrer">similarity_search_with_score</a>.</p>
<p>However, it seems like I am not able to add filters in this similarity_search_with_score function.</p>
<p>This is my code:</p>
<pre><code>vector_search = MongoDBAtlasVectorSearch(
collection=client[os.getenv("MONGODB_DB")]["files"],
embedding=embeddings,
index_name=os.getenv("ATLAS_VECTOR_SEARCH_INDEX_NAME"),
)
results = vector_search.similarity_search_with_score(
query="What are the engagements of the company",
k=5,
pre_filter={
"compound": {
"filter": [
{"equals": {"path": "uploaded_by", "value": chat_owner}},
{"in": {"path": "file_name", "values": file_names}},
]
}
},
)
</code></pre>
<p>This is my index:</p>
<pre><code>{
"mappings": {
"dynamic": true,
"fields": {
"embedding": {
"dimensions": 1536,
"similarity": "cosine",
"type": "knnVector"
},
"file_name": {
"normalizer": "lowercase",
"type": "token"
},
"uploaded_by": {
"normalizer": "lowercase",
"type": "token"
}
}
}
}
</code></pre>
<p>However, this gives me the following error :</p>
<p><code>pymongo.errors.OperationFailure: "knnBeta.filter.compound.filter[1].in.value" is required, full error: {'ok': 0.0, 'errmsg': '"knnBeta.filter.compound.filter[1].in.value" is required', 'code': 8, 'codeName': 'UnknownError', '$clusterTime': {'clusterTime': Timestamp(1704804627, 1), 'signature': {'hash': b'\xfa\x15s+Q\x1d\xa86]R\xb2!\x9d\xc5b-G\xce\xa6S', 'keyId': 7283272637088792583}}, 'operationTime': Timestamp(1704804627, 1)}</code></p>
<p>I also tried like this :</p>
<pre><code> pre_filter={
"$and": [
{"uploaded_by": {"$eq": chat_owner}},
{"file_name": {"$in": file_names}},
]
},
</code></pre>
<p>But I got this error:</p>
<pre><code>pymongo.errors.OperationFailure: "knnBeta.filter" one of [autocomplete, compound, embeddedDocument, equals, exists, geoShape, geoWithin, in, knnBeta, moreLikeThis, near, phrase, queryString, range, regex, search, span, term, text, wildcard] must be present, full error: {'ok': 0.0, 'errmsg': '"knnBeta.filter" one of [autocomplete, compound, embeddedDocument, equals, exists, geoShape, geoWithin, in, knnBeta, moreLikeThis, near, phrase, queryString, range, regex, search, span, term, text, wildcard] must be present', 'code': 8, 'codeName': 'UnknownError', '$clusterTime': {'clusterTime': Timestamp(1704802325, 9), 'signature': {'hash': b'`\xd27-\x81+\x16\xd0a\x14\xc7\x99\xa8\x05|Sx?\x0e:', 'keyId': 7283272637088792583}}, 'operationTime': Timestamp(1704802325, 9)}
WARNING: StatReload detected changes in 'src/routes/chats/chats.py'. Reloading...
</code></pre>
<p>How can I use filters in the similarity_search_with_score properly ?</p>
|
<python><mongodb><langchain><vector-search>
|
2024-01-09 13:13:06
| 1
| 877
|
colla
|
77,786,930
| 1,830,758
|
Correct virtual environment setup to allow VS Code to format Python Files (using autopep8, Black Formatter or Ruff)
|
<p>In Visual Studio Code, all formatter extensions for Python are failing to format my .py files. The message is <strong>"Skipping standard library file"</strong>.</p>
<p>I understand that the formatters built into the VS Code Python extension are no longer supported, and I have installed the separate <a href="https://stackoverflow.com/questions/77283648/why-does-the-vs-code-python-extension-circa-v2018-19-no-longer-provide-support">formatter extensions</a> autopep8, Black Formatter, and Ruff.</p>
<p>I also understand that the formatter will not work if the Python interpreter cannot be determined. I had uninstalled and re-installed Python in a different location and the Python version wasn't showing in the status bar. I have now selected the Python interpreter location and the version is correctly showing in the VS Code status bar. <a href="https://stackoverflow.com/questions/65101442/formatter-black-is-not-working-on-my-vscode-but-why">See this question</a></p>
<p>I also understand that there is an issue where project files are saved within the virtual environment. However, I believe I've correctly separated my project code from my virtual environment (env and project code are in peer folders) (See below and <a href="https://stackoverflow.com/questions/76187582/python-file-not-formating-in-vscode-due-to-it-being-skipped-by-formatter">this question</a>)</p>
<p>Currently I have</p>
<ul>
<li><a href="https://code.visualstudio.com/updates/v1_85" rel="nofollow noreferrer">VS Code v1.85</a></li>
<li><a href="https://www.python.org/downloads/release/python-3120/" rel="nofollow noreferrer">Python v3.12.1</a></li>
<li>autopep8 v2023.8.0</li>
<li>Black Formatter 2023.6.0</li>
<li>Ruff 2024.0.0</li>
</ul>
<p>Using the context menu for the file and selecting <strong>"Format Document with"</strong> then each formatter in turn, I've viewed the Output for that formatter. All report <strong>"Skipping standard library file"</strong>, so this must be a VS Code issue, or a misunderstanding on my part with setting up the virtual environment. However, I have separate folders for</p>
<ul>
<li>Python (hovering over the Python version in the VS code status bar shows correct location)</li>
<li><a href="https://docs.python.org/3/library/venv.html" rel="nofollow noreferrer">Python Dev Env</a></li>
<li>Python Django Projects (completely separate form the virtual environment)</li>
</ul>
<p>I'm less than a week into learning Python/Django, so maybe I've missed something basic along the way in relation to virtual environment setup that's causing VS Code to believe my project file is a standard library file?</p>
|
<python><visual-studio-code><format>
|
2024-01-09 12:52:29
| 3
| 617
|
Alex Cooper
|
77,786,853
| 1,860,805
|
How to print timedelta consistently (i.e. formatted)
|
<p>I have this code that prints the time difference in milliseconds.</p>
<pre><code>#!/usr/bin/python
import datetime
import sys
date1= datetime.datetime.strptime('20231107-08:52:53.539', '%Y%m%d-%H:%M:%S.%f')
date2= datetime.datetime.strptime('20231107-08:52:53.537', '%Y%m%d-%H:%M:%S.%f')
diff = date1-date2
sys.stdout.write(str(diff) + '\t')
date1= datetime.datetime.strptime('20231107-08:52:53.537', '%Y%m%d-%H:%M:%S.%f')
date2= datetime.datetime.strptime('20231107-08:52:53.537', '%Y%m%d-%H:%M:%S.%f')
diff = date1-date2
sys.stdout.write(str(diff) + '\t')
date1= datetime.datetime.strptime('20231107-08:52:53.532', '%Y%m%d-%H:%M:%S.%f')
date2= datetime.datetime.strptime('20231107-08:52:53.537', '%Y%m%d-%H:%M:%S.%f')
diff = date1-date2
sys.stdout.write(str(diff) + '\n')
</code></pre>
<p>And it prints this, with inconsistent formatting:</p>
<blockquote>
<p>$ python ./prntdatetest.py</p>
<p>0:00:00.002000 <tab> 0:00:00 <tab> -1 day, 23:59:59.995000</p>
</blockquote>
<p>I want this to be printed like this</p>
<blockquote>
<p>00:00:00.002 <tab> 00:00:00.000 <tab> -0:00:00.005</p>
</blockquote>
<p>I do not want to use <strong>print</strong>; I want to use <strong>stdout.write</strong>.</p>
<p>How can I do this?</p>
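<p>A small formatting helper can produce a fixed <code>[-]HH:MM:SS.mmm</code> layout for any <code>timedelta</code>, including negative ones (a sketch; hours are zero-padded to two digits, which differs slightly from the single-digit <code>-0:00:00.005</code> shown above):</p>

```python
import datetime
import sys


def fmt_td(td: datetime.timedelta) -> str:
    """Format a timedelta as [-]HH:MM:SS.mmm with a fixed millisecond field."""
    sign = '-' if td < datetime.timedelta(0) else ''
    td = abs(td)  # avoids the "-1 day, 23:59:59.995000" representation
    total_ms = round(td.total_seconds() * 1000)
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, ms = divmod(rem, 1000)
    return f"{sign}{hours:02d}:{minutes:02d}:{seconds:02d}.{ms:03d}"
```

<p>Usage then stays with <code>stdout.write</code>: <code>sys.stdout.write(fmt_td(diff) + '\t')</code>.</p>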
|
<python><datetime><printing><timedelta>
|
2024-01-09 12:43:08
| 2
| 523
|
Ramanan T
|
77,786,816
| 8,877,753
|
Radar Vegation Index scaling issues [0, 1] for Sentinel-1 data
|
<p>I am trying to understand some data issues I am having with using Sentinel-1 VV and VH data to create RVI statistics.</p>
<p>I am familiar with the fact that you need true quad polarisation to calculate the RVI. The best approximate RVI equation for S1 has been discussed at length at the ESA forum <a href="https://forum.step.esa.int/t/creating-radar-vegetation-index/12444/28" rel="nofollow noreferrer">here</a>.
Hence I use the equation:</p>
<pre><code>RVI=4*VH/(VV+VH)
</code></pre>
<p>I am also aware that it is a mathematical no-go to use the dB scaled data as input to the RVI equation (division) and that the main reason for scaling images logarithmically is to compress the dynamic pixel range so that it can be visualised in plotting.</p>
<p><strong>What I am struggling with is why the RVI product never stays within the expected range of 0 to 1.</strong></p>
<p>Allow me to illustrate. The first image below shows a cutout of an S1 tile. It's a random farmland area in Germany.</p>
<p><strong>EDIT:</strong>
As pointed out by a sharp user, I did not include the VV or the VH band range information. See the two images below for dB scale
<a href="https://i.sstatic.net/8YsW0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8YsW0.png" alt="enter image description here" /></a>
Min: -26.01792335510254, Mean: -10.698317527770996, Max: 15.05242919921875
<a href="https://i.sstatic.net/ooFbo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ooFbo.png" alt="enter image description here" /></a>
Min: -48.487674713134766, Mean: -17.52926254272461, Max: 11.8444242477417
and the two for linear converted below:
<a href="https://i.sstatic.net/xQqmu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xQqmu.png" alt="enter image description here" /></a>
Min: 0.002501541282981634, Mean: 0.1089702844619751, Max: 32.00685119628906
<a href="https://i.sstatic.net/oGS3i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oGS3i.png" alt="enter image description here" /></a>
Min: 1.4165510947350413e-05, Mean: 0.029651135206222534, Max: 15.291229248046875.</p>
<p>If I clip the values at 1, so all values above are set to 1 I get this:
<a href="https://i.sstatic.net/LMAsz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LMAsz.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/sd8Dt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sd8Dt.png" alt="enter image description here" /></a>
Which would create an RVI like this:
<a href="https://i.sstatic.net/LnKLC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LnKLC.png" alt="enter image description here" /></a>
<strong>End edit</strong></p>
<p>In the first example I have deliberately done it wrong by using the VV and VH bands that have been logarithmically scaled as input to the RVI equation. We see the obvious issue of exceeding the expected scale of 0-1, as the values present a textbook Gaussian distribution around the value 2.5.
<a href="https://i.sstatic.net/dlVIo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dlVIo.png" alt="enter image description here" /></a></p>
<a href="https://i.sstatic.net/dlVIo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dlVIo.png" alt="enter image description here" /></a></p>
<p>In the second example I have converted VV and VH from dB to linear scale through this python code:</p>
<pre><code>from numpy import ndarray

def db_to_linear(data: ndarray) -> ndarray:
    return 10 ** (0.1 * data)
</code></pre>
<p>and then calculated the RVI. The image and the scale are much more in line with what one would expect, but as you can observe from the red lines, a substantial amount of data falls above 1.
Now I can understand that issues with the image such as processing artefacts, sensor issues, outliers etc. can produce odd values, but it just seems to be a lot of data points.</p>
<p><a href="https://i.sstatic.net/jWZjX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jWZjX.png" alt="enter image description here" /></a></p>
<p>I then tried setting all values outside the range of 0,1 to nan and then plot it again, which provides figure 3 below.</p>
<p><a href="https://i.sstatic.net/uhGo1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uhGo1.png" alt="enter image description here" /></a></p>
<p>Cutting out the data is definitely not a solution: as the image next to the histogram shows, a lot of data is lost.</p>
<p>Alternatively I can normalise the data to be within 0-1 so it looks like this:
<a href="https://i.sstatic.net/YNMI2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YNMI2.png" alt="enter image description here" /></a></p>
<p>But at this point I am unsure whether what I am doing is sound, as I push the RVI values much lower than they likely should be. Am I just forcing something to look "correct", rather than fixing some core issue that still eludes me?</p>
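<p>For concreteness, the clipping variant discussed above can be sketched like this (assuming <code>vv</code> and <code>vh</code> are already linear-scale NumPy arrays; the values are illustrative, and whether [0, 1] is the right range for the dual-pol approximation is exactly the open question here):</p>

```python
import numpy as np

def rvi(vv: np.ndarray, vh: np.ndarray) -> np.ndarray:
    # Dual-pol approximation RVI = 4*VH / (VV + VH), on linear-scale data
    return 4.0 * vh / (vv + vh)

vv = np.array([3.0, 1.0, 0.5])
vh = np.array([1.0, 1.0, 1.5])

raw = rvi(vv, vh)                 # [1.0, 2.0, 3.0] -- can exceed 1
clipped = np.clip(raw, 0.0, 1.0)  # outliers saturated at 1 instead of dropped
```

<p>Clipping preserves the pixel count (unlike setting out-of-range values to NaN), at the cost of flattening everything above the ceiling to the same value.</p>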
<p>Thank you for your input and your time.</p>
|
<python><image-processing><statistics><sentinel1>
|
2024-01-09 12:36:51
| 0
| 351
|
Mars
|
77,786,729
| 4,133,464
|
Pandas concat converts dtype to object instead of numeric
|
<p>I have a dataset separated into 10 different .csv files. After importing each separately, I use concat to merge them into one. This converts each column to dtype "object". As a result, when I use Keras model .fit to run the model, I get the error: ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type int).</p>
<p>Running the exact same code on one .csv file, i.e. without concatenating, works fine.</p>
<pre><code>df1 = pd.read_csv("CSE-CIC-IDS2018/02-14-2018.csv", low_memory=False, nrows=10000)
df2 = pd.read_csv("CSE-CIC-IDS2018/02-15-2018.csv", low_memory=False, nrows=10000)
df3 = pd.read_csv("CSE-CIC-IDS2018/02-16-2018.csv", low_memory=False, nrows=10000)
df = pd.concat([df1,df2,df3])
df.drop(columns=['Timestamp','Flow ID', 'Src IP', 'Src Port', 'Dst IP'], inplace=True)
df.head()
Dst Port Protocol Flow Duration Tot Fwd Pkts Tot Bwd Pkts TotLen Fwd Pkts TotLen Bwd Pkts Fwd Pkt Len Max Fwd Pkt Len Min Fwd Pkt Len Mean ... Fwd Seg Size Min Active Mean Active Std Active Max Active Min Idle Mean Idle Std Idle Max Idle Min Label
0 0 0 112641719 3 0 0 0 0 0 0.0 ... 0 0 0 0 0 56320859.5 139.300036 56320958 56320761 Benign
1 0 0 112641466 3 0 0 0 0 0 0.0 ... 0 0 0 0 0 56320733.0 114.551299 56320814 56320652 Benign
2 0 0 112638623 3 0 0 0 0 0 0.0 ... 0 0 0 0 0 56319311.5 301.934596 56319525 56319098 Benign
3 22 6 6453966 15 10 1239 2273 744 0 82.6 ... 32 0 0 0 0 0.0 0.0 0 0 Benign
4 22 6 8804066 14 11 1143 2209 744 0 81.642857 ... 32 0 0 0 0 0.0 0.0 0 0 Benign
x = df.drop(["Label"], axis=1)
y = df["Label"].apply(lambda x:0 if x=="Benign" else 1)
x_train, x_remaining, y_train, y_remaining = train_test_split(x,y, train_size=0.60, random_state=4)
x_val, x_test, y_val, y_test = train_test_split(x_remaining, y_remaining, test_size=0.5, random_state=4)
#Model
model = Sequential()
model.add(Dense(16, activation="relu"))#, input_dim=len(x_train.columns)))
model.add(Dense(32, activation="relu"))
model.add(Dense(units=1, activation="sigmoid"))
#model.add(Dense(1, activation="softmax"))
#Compiler
model.compile(loss="binary_crossentropy", optimizer="Adam", metrics="accuracy")
model.fit(x_train, y_train, epochs=10, batch_size=512)
</code></pre>
<p><strong>ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type int).</strong></p>
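<p>The coercion to object typically happens when the per-file dtypes disagree (e.g. one CSV parses a column as int and another as str). A small sketch that reproduces the effect and one hedged fix — converting back with <code>pd.to_numeric</code> after the concat (the column name is illustrative):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'Dst Port': [80, 443]})     # parsed as int64
df2 = pd.DataFrame({'Dst Port': ['80', '22']})  # parsed as strings
df = pd.concat([df1, df2], ignore_index=True)
print(df['Dst Port'].dtype)                     # object

# Convert every column back to numeric where possible
numeric = df.apply(pd.to_numeric, errors='coerce')
print(numeric['Dst Port'].dtype)                # int64
```

<p>Note that <code>errors='coerce'</code> would also turn genuinely non-numeric columns (like <code>Label</code> above) into NaN, so in practice you would apply it only to the columns that are supposed to be numeric.</p>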
|
<python><pandas><numpy><tensorflow><keras>
|
2024-01-09 12:23:19
| 1
| 819
|
deadpixels
|
77,786,578
| 11,159,734
|
How to get proper context from AI-Search documents when calling OpenAI completion endpoint
|
<p>I found <a href="https://github.com/openai/openai-cookbook/blob/main/examples/azure/chat_with_your_own_data.ipynb" rel="nofollow noreferrer">this documentation</a> about chatting with your own data using AI-Search & OpenAI.</p>
<p>It works fine for my data; however, I don't get any additional context aside from the content and the score:</p>
<pre><code>{"content":"<MY CONTENT>", "id":null,"title":null,"filepath":null,"url":null,"metadata":{"chunking":"orignal document size=2000. Scores=3.6962261Org Highlight count=31."},"chunk_id":"0"}
</code></pre>
<p>I think the additional fields in AI-Search need to be specified somewhere in the code but I don't know where and I couldn't find any example for it.</p>
<p>In the Azure OpenAI Chat Playground you can select the fields within your AI-Search Index. And then it is also correctly displayed in the sample chat app.</p>
<p><a href="https://i.sstatic.net/MaGVX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MaGVX.png" alt="enter image description here" /></a></p>
<p>How can I achieve the same in my code using the code example referenced above?</p>
|
<python><azure-cognitive-services><openai-api><azure-openai>
|
2024-01-09 11:54:41
| 1
| 1,025
|
Daniel
|
77,786,441
| 7,441,757
|
Using pd.cut with duplicate bins and labels
|
<p>I'm using <code>pd.cut</code> with the keyword argument <code>duplicates='drop'</code>. However, this gives errors when you combine it with the keyword argument <code>labels</code>.</p>
<p>The question is similar to <a href="https://stackoverflow.com/questions/70986532/how-to-fix-pd-cut-with-duplicate-edges">this</a> question, but that ignores the label part.</p>
<p>Does not work:</p>
<p><code>pd.cut(pd.Series([0, 1, 2, 3, 4, 5]), bins=[0, 1, 1, 2])</code></p>
<p>Works:</p>
<p><code>pd.cut(pd.Series([0, 1, 2, 3, 4, 5]), bins=[0, 1, 1, 2], duplicates='drop')</code></p>
<p>Does not work:</p>
<p><code>pd.cut(pd.Series([0, 1, 2, 3, 4, 5]), bins=[0, 1, 1, 2], duplicates='drop', labels=[0, 1, 1, 2])</code></p>
<p>Wouldn't we expect it to drop the label corresponding to the duplicate entry?</p>
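<p>For reference, <code>pd.cut</code> does not drop labels together with duplicate edges, so one workaround (a sketch, not documented pandas behaviour) is to deduplicate bins and labels yourself before calling it — keeping in mind that <code>labels</code> must have one entry per interval, i.e. <code>len(bins) - 1</code>:</p>

```python
import numpy as np
import pandas as pd

bins = np.array([0, 1, 1, 2])
labels = ['a', 'b', 'c']                  # one label per original interval

keep = np.diff(bins) > 0                  # zero-width intervals to drop
clean_bins = np.concatenate([bins[:1], bins[1:][keep]])
clean_labels = [lab for lab, k in zip(labels, keep) if k]

out = pd.cut(pd.Series([0.5, 1.5]), bins=clean_bins, labels=clean_labels)
print(list(out))                          # ['a', 'c']
```

<p>This makes the label dropping explicit instead of relying on <code>duplicates='drop'</code> to guess which label belonged to the collapsed edge.</p>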
|
<python><pandas>
|
2024-01-09 11:29:30
| 1
| 5,199
|
Roelant
|
77,786,410
| 8,113,126
|
QuickFIX Python Session File Corruption Issue
|
<p>I am encountering a recurring problem in my production environment using QuickFIX Python. The issue revolves around the corruption of session files (FIX.4.4-A-B.session) in the Sessions folder. The session file is expected to contain a valid date-time format (e.g., <code>20240108-11:42:01</code>), but at random times, additional and seemingly random characters are appended(<code>20231228-22:00:02§♥♥ 0qÄ@☼sS_½•LÕ☼s£↓› GLrµ©5¹vj↕Æ#¿ÈíÜ</code>), leading to date-time conversion errors.</p>
<p><a href="https://i.sstatic.net/diEvu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/diEvu.png" alt="enter image description here" /></a></p>
<p>This is a critical problem as it causes disruptions in the normal functioning of the system, and unfortunately, the error is being thrown from the underlying C++ codebase, making it challenging to catch and handle programmatically.</p>
<p><a href="https://i.sstatic.net/pxtVj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pxtVj.png" alt="enter image description here" /></a></p>
<ol>
<li><strong>Are there known reasons or common scenarios that lead to such corruption in session files?</strong></li>
<li><strong>How can I better handle or prevent such corruption in the session files within my Python code?</strong></li>
<li><strong>Is there a recommended way to catch and handle errors originating from the C++ core of QuickFIX within the Python environment?</strong></li>
</ol>
<p><em>Versions</em>:</p>
<ul>
<li>Fix version - FIX44</li>
<li>I am using quickfix-ssl 1.15.1.post4</li>
<li>OS: Linux - ubuntu - 18.04</li>
</ul>
<p>Any insights or suggestions would be greatly appreciated. Thank you!</p>
|
<python><quickfix><fix-protocol><quickfixj><quickfixn>
|
2024-01-09 11:24:46
| 0
| 340
|
Jai Simha Ramanujapura
|
77,786,242
| 16,525,263
|
How to call the functions that are inside a python class
|
<p>I have a Python class where I have nested functions. I'm not able to call the functions that are nested.</p>
<pre><code>class Data:
    __instance__ = None

    def writeCleanData(df, path, d_path, row, days, lk):
        Data.write(Data.unify(df, row), path, "append")
        d_path[path] = os.path.split(d_path[path])[0]
        Data.CreateFolder(d_path[path], lk)
        Data.CleanPath(lk, path, d_path)

    def unify(df, onCol):
        return df.repartition(10, onCol)

    def CreateFolder(base, lake):
        x = fd.path(base)

    def write(df, data_nm, mode):
        Data.get_instance._write(df, data_nm, mode)
</code></pre>
<p>I'm getting <code>'Data' has no attribute 'unify'</code>.</p>
<p>Please let me know how to call these functions</p>
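<p>For what it's worth, that error usually means the name isn't actually defined at class level — e.g. the <code>def</code> is accidentally indented inside another method. Functions defined directly in the class body that don't take <code>self</code> are conventionally marked <code>@staticmethod</code>; a minimal sketch (the method bodies are illustrative, not the question's Spark code):</p>

```python
class Data:
    @staticmethod
    def unify(values, factor):
        # plain helper: needs no instance state, callable as Data.unify(...)
        return [v * factor for v in values]

    @classmethod
    def describe(cls):
        # receives the class itself instead of an instance
        return cls.__name__

print(Data.unify([1, 2], 10))  # [10, 20]
print(Data.describe())         # Data
```

<p>With that layout, <code>Data.unify(...)</code> works exactly as the original code attempts to call it.</p>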
|
<python>
|
2024-01-09 10:59:54
| 1
| 434
|
user175025
|
77,786,212
| 5,091,467
|
How to choose pandas Sparse dtype and what are the memory implications?
|
<p>I am trying to understand how to set up a sparse pandas matrix to minimize memory usage and retain precision of all values. I did not find the answers in the pandas Sparse <a href="https://pandas.pydata.org/docs/user_guide/sparse.html" rel="nofollow noreferrer">documentation</a>. Below is an example which illustrates my questions:</p>
<ol>
<li><p>Why does a <code>Sparse(int32)</code> dataframe take as much memory as a <code>Sparse(float32)</code> dataframe? Is there any advantage in specifying a <code>Sparse(int)</code> dtype if this is the case?</p>
</li>
<li><p>How does pandas decide what specific <code>Sparse(int)</code> dtype to use, e.g. <code>int8</code> or <code>int32</code>? Given the example below (please see dataframes <code>sdf_int32</code> and <code>sdf_high_int32</code>), it appears <code>Sparse(int32)</code> is always chosen regardless of whether <code>Sparse(int8)</code> might be more memory-efficient, or <code>Sparse(int32)</code> might truncate some values.</p>
</li>
<li><p>Is the only way to avoid truncation and achieve minimum memory usage to specify <code>Sparse(intNN)</code> or <code>Sparse(floatNN)</code> dtype for each column?</p>
</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
# Generate binary dense matrix with low density
df = pd.DataFrame()
for col in ['col1', 'col2', 'col3']:
    df[col] = np.where(np.random.random_sample(100_000_000) > 0.98, 1, 0)
df.name = 'Dense'
# Replace one column by values too high for int32 dtype
df_high = df.copy()
df_high['col1'] = df_high['col1'] * 100_000_000_000
# Convert df to sparse of various dtypes
sdf_float32 = df.astype(pd.SparseDtype('float32', 0))
sdf_float32.name = 'Sparse, float32'
sdf_int8 = df.astype(pd.SparseDtype('int8', 0))
sdf_int8.name = 'Sparse, int8'
sdf_int32 = df.astype(pd.SparseDtype('int', 0))
sdf_int32.name = 'Sparse, int32'
sdf_int64 = df.astype(pd.SparseDtype('int64', 0))
sdf_int64.name = 'Sparse, int64'
# Convert df_high to Sparse(int)
sdf_high_int32 = df_high.astype(pd.SparseDtype('int', 0))
sdf_high_int32.dtypes
sdf_high_int32['col1'].value_counts()
sdf_high_int32.name = 'Sparse, int32 highval'
# Print info for all dataframes
print(f" {df.name} Dataframe; Memory size: {df.memory_usage(deep=True).sum() / 1024 ** 2:.1f} MB, {df['col1'].dtype}")
for data in [sdf_float32, sdf_int8, sdf_int32, sdf_high_int32, sdf_int64]:
    print(f" {data.name} Dataframe; Memory size: {data.memory_usage(deep=True).sum() / 1024**2:.1f} MB,"
          f"Density {data.sparse.density:.5%}, {data['col1'].dtype}")
"""
Dense Dataframe; Memory size: 1144.4 MB, int32
Sparse, float32 Dataframe; Memory size: 45.8 MB,Density 1.99980%, Sparse[float32, 0]
Sparse, int8 Dataframe; Memory size: 28.6 MB,Density 1.99980%, Sparse[int8, 0]
Sparse, int32 Dataframe; Memory size: 45.8 MB,Density 1.99980%, Sparse[int32, 0]
Sparse, int32 highval Dataframe; Memory size: 45.8 MB,Density 1.99980%, Sparse[int32, 0]
Sparse, int64 Dataframe; Memory size: 68.7 MB,Density 1.99980%, Sparse[int64, 0]
"""
# Show truncated values for sdf_high_int32
print(f"Values for sdf_high_int32, col1: \n {sdf_high_int32['col1'].value_counts()}")
"""
Values for sdf_high_int32, col1:
col1
0 98001473
1215752192 1998527
Name: count, dtype: int64
"""
</code></pre>
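<p>Regarding question 3: <code>astype</code> does accept a per-column mapping, so you can pick the narrowest safe dtype column by column (a sketch with illustrative column names and values):</p>

```python
import pandas as pd

df = pd.DataFrame({'col1': [0, 1, 0], 'col2': [0.0, 2.5, 0.0]})

# One explicit SparseDtype per column instead of a single dtype for the frame
sdf = df.astype({
    'col1': pd.SparseDtype('int8', 0),       # small counts fit int8
    'col2': pd.SparseDtype('float32', 0.0),  # floats keep a float subtype
})
print(sdf.dtypes)  # col1 -> Sparse[int8, 0], col2 -> Sparse[float32, 0.0]
```

<p>This avoids relying on whatever default width <code>pd.SparseDtype('int', 0)</code> resolves to.</p>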
|
<python><pandas><precision><sparse-matrix><dtype>
|
2024-01-09 10:54:45
| 1
| 714
|
Dudelstein
|
77,785,950
| 3,932,463
|
Plotting rows of a df based on their label (colour coded)
|
<p>I am trying to plot a dataset where the rows hold the information. I want to plot the rows as a line plot (so there will be 6 lines), coloured according to their labels (e.g. <code>label 0 is blue, label 1 is red</code>).
The example df is:</p>
<pre><code>Label,col1,col2,col3
0,43.55,98.86,2.34
1,21.42,51.42,71.05
0,49.17,13.55,101.00
0,5.00,17.88,28.00
1,44.00,2.42,34.69
1,41.88,144.00,9.75
</code></pre>
<p>According to it, the lines for rows 0, 2 and 3 should be blue, and rows 1, 4 and 5 should be red.</p>
<p>What I tried: convert rows to columns and plot:</p>
<pre><code>df = pd.read_csv(dfname,header=0)
df.index = df['Label'] # index the label
df_T = df.iloc[:,1:].T # remove the label transpose the df
</code></pre>
<p>df_T looks like this:</p>
<pre><code>Label 0 1 0 0 1 1
col1 43.55 21.42 49.17 5.00 44.00 41.88
col2 98.86 51.42 13.55 17.88 2.42 144.00
col3 2.34 71.05 101.00 28.00 34.69 9.75
</code></pre>
<p>However, when I plot the columns with <code>df_T.plot.line()</code>, each column gets its own independent colour:</p>
<p><a href="https://i.sstatic.net/UkIMq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UkIMq.png" alt="enter image description here" /></a></p>
<p>I would be happy to have your help, thanks!</p>
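<p>A sketch of plotting row-wise with an explicit label-to-colour mapping, instead of relying on the default per-column colour cycle (the colour names are arbitrary choices):</p>

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({'Label': [0, 1, 0],
                   'col1': [43.55, 21.42, 49.17],
                   'col2': [98.86, 51.42, 13.55],
                   'col3': [2.34, 71.05, 101.00]})

color_map = {0: 'tab:blue', 1: 'tab:red'}  # label -> colour
cols = ['col1', 'col2', 'col3']

fig, ax = plt.subplots()
for _, row in df.iterrows():
    # one line per row, coloured by that row's label
    ax.plot(cols, row[cols].to_numpy(), color=color_map[row['Label']])
```

<p>Because each <code>ax.plot</code> call gets an explicit <code>color=</code>, rows sharing a label share a colour regardless of plotting order.</p>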
|
<python><pandas><line-plot><color-codes>
|
2024-01-09 10:16:27
| 2
| 909
|
bapors
|
77,785,410
| 7,622,324
|
Pylance: use Mypy for autocomplete
|
<p>I have python code that uses <code>__getattr__</code> to relay non existing member accesses to another object.</p>
<p>I have written a Mypy plugin which makes the type hints work.</p>
<pre class="lang-py prettyprint-override"><code>class StrProxy(Proxy[str])
s: str
def __init__(self, s: str):
self.s = s
def get(self) -> str:
return self.s
def __deref__(self) -> str:
return s
sp = StrProxy("test")
sp.get # exists according to mypy & pylance
sp.encode # exists according to mypy only (because of my plugin)
</code></pre>
<p>The problem: Pylance uses its own type checker (Pyright) for autocomplete.</p>
<p>Is there any way I can tell Pylance to use Mypy? Or switch pylance for an LSP that respects Mypy?</p>
|
<python><mypy><pylance>
|
2024-01-09 08:47:33
| 0
| 669
|
Omer Lubin
|
77,785,396
| 1,826,066
|
Run group_by_dynamic in polars but only on timestamp
|
<p>I have some dummy data like such:</p>
<pre><code>datetime,duration_in_traffic_s
2023-12-20T10:50:43.063641000,221.0
2023-12-20T10:59:09.884939000,219.0
2023-12-20T11:09:56.003331000,206.0
...
more rows with different dates
...
</code></pre>
<p>Assume this data is stored in a file <code>mwe.csv</code>.
Using <code>polars</code>, I now want to compute averages over the second column, grouped in one hour chunks. I want to use <code>group_by_dynamic</code> (<a href="https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.group_by_dynamic.html" rel="nofollow noreferrer">doc</a>) to get the data every 10 minutes. I run</p>
<pre class="lang-py prettyprint-override"><code>(
    pl.read_csv("mwe.csv")
    .with_columns(pl.col("datetime").cast(pl.Datetime))
    .sort("datetime")
    .group_by_dynamic(
        index_column="datetime",
        every="10m",
        period="1h",
    )
    .agg(pl.col("duration_in_traffic_s").mean())
)
</code></pre>
<p>and the result looks like this
<a href="https://i.sstatic.net/Mpj9l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mpj9l.png" alt="enter image description here" /></a></p>
<p>However, I don't want the averaging to take the date into account, only the time, e.g. <code>2023-12-20 10:40</code> and <code>2023-12-21 10:40</code> should fall into the same bin.</p>
<p>I hoped that adding <code>.with_columns(pl.col("datetime").dt.time())</code> to the pipeline would help but <code>group_by_dynamic</code> doesn't work with time data.</p>
<p>I could compute the time column as float manually as such</p>
<pre class="lang-py prettyprint-override"><code>(
    pl.read_csv("mwe.csv")
    .with_columns(pl.col("datetime").cast(dtype=pl.Datetime))
    .with_columns(
        t=pl.col("datetime").dt.hour().cast(pl.Float64)
        + pl.col("datetime").dt.minute().cast(pl.Float64) / 60
        + pl.col("datetime").dt.second().cast(pl.Float64) / 60 / 60
    )
).sort("t")
</code></pre>
<p>But I am not sure how to do the grouping then. Also, I do like the datetime format, so my hope was that I could preserve it.</p>
<p><strong>Is there a way to do the dynamic grouping on the time data only, ignoring the date?</strong></p>
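<p>The underlying idea — bucket by minutes-since-midnight instead of the full timestamp — can be sketched with the standard library (in polars the analogous move would be computing a bucket column from <code>dt.hour()</code>/<code>dt.minute()</code> expressions and using a plain <code>group_by</code> on it; note this only reproduces the tumbling <code>every="10m"</code> part, not the overlapping <code>period="1h"</code> windows):</p>

```python
from collections import defaultdict
from datetime import datetime

rows = [
    (datetime(2023, 12, 20, 10, 50, 43), 221.0),
    (datetime(2023, 12, 21, 10, 52,  0), 219.0),  # next day, same 10-min slot
    (datetime(2023, 12, 20, 11,  9, 56), 206.0),
]

buckets = defaultdict(list)
for ts, value in rows:
    minute_of_day = ts.hour * 60 + ts.minute  # date component discarded
    buckets[minute_of_day // 10 * 10].append(value)

means = {k: sum(v) / len(v) for k, v in buckets.items()}
print(means)  # {650: 220.0, 660: 206.0}
```

<p>The two observations from different days but the same 10-minute slot of day (10:50-10:59) land in the same bucket and are averaged together.</p>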
<p>Here's the full <code>mwe.csv</code> file:</p>
<pre><code>datetime,duration_in_traffic_s
2023-12-20T10:50:43.063641000,221.0
2023-12-20T10:59:09.884939000,219.0
2023-12-20T11:09:56.003331000,206.0
2023-12-20T11:12:42.347660000,206.0
2023-12-20T11:17:40.084821000,200.0
2023-12-20T11:31:14.957092000,222.0
2023-12-20T11:46:08.886872000,209.0
2023-12-20T12:00:02.024328000,198.0
2023-12-20T12:15:01.910446000,251.0
2023-12-20T12:30:01.447496000,229.0
2023-12-20T12:45:02.761839000,206.0
2023-12-20T14:00:01.456811000,262.0
2023-12-20T14:15:01.718898000,226.0
2023-12-20T14:30:02.452185000,194.0
2023-12-20T14:45:01.717522000,191.0
2023-12-20T14:49:10.150735000,196.0
2023-12-20T14:50:55.800417000,194.0
2023-12-20T14:57:05.230577000,202.0
2023-12-20T14:59:23.005408000,192.0
2023-12-20T15:00:01.316240000,193.0
2023-12-20T15:00:14.842233000,193.33333333333334
2023-12-20T15:00:49.370172000,193.66666666666666
2023-12-20T15:01:06.300133000,193.66666666666666
2023-12-20T15:15:01.943587000,183.0
2023-12-20T15:20:01.567126000,184.0
2023-12-20T15:30:01.784686000,197.0
2023-12-20T15:40:02.468132000,188.0
2023-12-20T15:50:01.968746000,226.0
2023-12-20T16:00:01.864652000,233.0
2023-12-20T16:10:01.185016000,213.0
2023-12-20T16:20:01.544796000,252.0
2023-12-20T16:30:01.621331000,224.0
2023-12-20T16:40:03.567996000,228.0
2023-12-20T16:50:01.014911000,220.0
2023-12-20T17:00:01.723306000,232.0
2023-12-20T17:10:02.490695000,215.0
2023-12-20T17:20:01.844304000,214.0
2023-12-20T17:30:02.147457000,204.0
2023-12-20T17:40:02.217333000,198.0
2023-12-20T17:50:01.741479000,193.0
2023-12-20T18:00:01.665714000,193.0
2023-12-20T18:10:02.334926000,182.0
2023-12-20T18:26:43.135849000,185.0
2023-12-20T18:30:02.434296000,184.0
2023-12-20T18:32:41.033250000,175.0
2023-12-20T18:40:02.941171000,176.0
2023-12-20T19:36:47.313925000,175.0
2023-12-20T19:40:01.895983000,171.0
2023-12-20T19:50:02.049567000,167.0
2023-12-20T20:00:08.284378000,166.0
2023-12-20T20:10:02.727202000,166.0
2023-12-20T20:40:02.407489000,161.0
2023-12-20T21:10:02.100392000,158.0
2023-12-20T21:21:56.063346000,157.0
2023-12-20T21:30:02.005594000,159.0
2023-12-20T21:40:01.915306000,153.0
2023-12-20T21:50:02.318419000,152.0
2023-12-20T22:00:02.369086000,154.0
2023-12-20T22:10:02.704019000,154.0
2023-12-20T22:20:01.968418000,160.0
2023-12-20T22:30:01.965742000,159.0
2023-12-20T22:40:02.718295000,164.0
2023-12-20T22:50:02.347303000,160.0
2023-12-21T05:00:02.595535000,164.0
2023-12-21T05:10:02.642932000,163.0
2023-12-21T05:20:02.390676000,164.0
2023-12-21T05:30:01.971166000,165.0
2023-12-21T05:40:01.874958000,169.0
2023-12-21T05:50:01.806441000,167.0
2023-12-21T06:00:02.396094000,169.0
2023-12-21T06:10:02.350196000,169.0
2023-12-21T06:20:02.041357000,169.0
2023-12-21T06:33:43.895397000,177.0
2023-12-21T07:30:02.240918000,210.0
2023-12-21T07:47:16.654805000,200.0
2023-12-21T07:50:02.960362000,199.0
2023-12-21T08:10:16.746286000,194.0
2023-12-21T08:20:02.218056000,198.0
2023-12-21T08:30:01.729418000,198.0
2023-12-21T08:40:02.345477000,194.0
2023-12-21T08:50:01.464156000,190.0
2023-12-21T09:00:02.476057000,188.0
2023-12-21T09:10:02.130653000,213.0
2023-12-21T09:20:02.364758000,188.0
2023-12-21T09:30:02.499917000,188.0
2023-12-21T09:40:01.911754000,188.0
2023-12-21T09:50:01.885705000,197.0
2023-12-21T10:00:01.633757000,198.0
2023-12-21T10:10:02.531765000,200.0
2023-12-21T10:20:01.685657000,221.0
2023-12-21T10:30:01.567600000,207.0
2023-12-21T10:40:02.279429000,203.0
2023-12-21T10:50:02.548892000,191.0
2023-12-21T11:00:01.622794000,219.0
2023-12-21T11:10:01.435424000,200.0
2023-12-21T11:20:01.849114000,234.0
2023-12-21T11:30:02.391425000,222.0
2023-12-21T11:40:01.796607000,191.0
2023-12-21T11:50:01.776906000,205.0
2023-12-21T12:00:02.485984000,239.0
</code></pre>
|
<python><python-polars>
|
2024-01-09 08:45:04
| 1
| 1,351
|
Thomas
|
77,785,263
| 3,510,201
|
Deprecating a function that is being replaced with a property
|
<p>I am refactoring parts of an API and wish to add deprecation warnings to parts that will eventually be removed. However I have stumbled into an issue where I would like to replace a function call with a property sharing a name.</p>
<p>Is there a hack where I can support both calling <code>.length</code> as a property <em>and</em> as a function? I have tinkered with <code>__getattribute__</code> and <code>__getattr__</code> and can't think of a way.</p>
<pre class="lang-py prettyprint-override"><code>import warnings

class A:
    @property
    def length(self):
        return 1

    def length(self):
        warnings.warn(".length function is deprecated. Use the .length property", DeprecationWarning)
        return 1
</code></pre>
<p>P.S.<br />
Preferably I would like the solution to be Python 2.7 compatible.</p>
<h3>Additional context</h3>
<p>The only "kind of" solution I have thought of is to overwrite the return value and skip the properties for now and add them in later when the deprecation warnings are removed. This solution would work for my case, if there really isn't any other way, but I would prefer a solution that is a lot less hacky.</p>
<pre class="lang-py prettyprint-override"><code>import warnings

class F(float):
    def __init__(self, v):
        self.v = v

    def __new__(cls, value):
        return float.__new__(cls, value)

    def __call__(self, *args, **kwargs):
        warnings.warn(".length function is deprecated. Use the .length property", DeprecationWarning)
        return self.v

class A(object):
    def __getattribute__(self, item):
        if item == "length":
            # This is a hack to enable a deprecation warning when calling .length()
            # Remove this in favor of the @property, when the deprecation warnings are removed.
            return F(1)
        return super(A, self).__getattribute__(item)

    # @property
    # def length(self):
    #     # type: () -> float
    #     return 1.0
</code></pre>
|
<python>
|
2024-01-09 08:24:04
| 1
| 539
|
Jerakin
|
77,785,036
| 7,583,919
|
Iteratively convert an arbitrary depth list to a dict
|
<p>The input has a pattern: every element in the list is a dict with fixed keys.</p>
<pre class="lang-py prettyprint-override"><code>[{'key': 'a', 'children': [{'key': 'a1', 'children': [{'key': 'a11', 'children': []}]}]}, {'key': 'b', 'children': [{'key': 'b1', 'children': [{'key': 'b11', 'children': []}]}]},]
</code></pre>
<p>expected output</p>
<pre><code>{'a': {'a1': {'a11': ''}}, 'b': {'b1': {'b11': ''}}}
</code></pre>
<p>I want to do it in an iterative way. Currently I'm able to collect all the values of the fixed key <code>key</code>, but I fail to compose them into the target dict.</p>
<pre class="lang-py prettyprint-override"><code>def get_res(stack, key='key'):
    result = []
    while stack:
        elem = stack.pop()
        if isinstance(elem, dict):
            for k, v in elem.items():
                if k == key:
                    # breakpoint()
                    result.append(v)
                stack.append(v)
        elif isinstance(elem, list):
            stack.extend(elem)
    print(result)
    return result
</code></pre>
<p>I'm also stuck on a recursive approach:</p>
<pre><code>def gen_x(stack):
    for bx in stack:
        if 'children' not in bx:
            return {bx['key']: ''}
        tem_ls = bx['children']
        xs = gen_x(tem_ls)
        print(xs)
</code></pre>
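<p>For reference, a stack-based version that builds the nested dict while walking (the helper name <code>to_nested</code> is mine):</p>

```python
def to_nested(items):
    result = {}
    stack = [(items, result)]  # (list of nodes, dict to fill with their keys)
    while stack:
        nodes, parent = stack.pop()
        for node in nodes:
            if node['children']:
                child = {}
                parent[node['key']] = child          # attach nested dict
                stack.append((node['children'], child))
            else:
                parent[node['key']] = ''             # leaf: empty string
    return result

data = [
    {'key': 'a', 'children': [{'key': 'a1', 'children': [{'key': 'a11', 'children': []}]}]},
    {'key': 'b', 'children': [{'key': 'b1', 'children': [{'key': 'b11', 'children': []}]}]},
]
print(to_nested(data))  # {'a': {'a1': {'a11': ''}}, 'b': {'b1': {'b11': ''}}}
```

<p>The trick is carrying the <em>target dict</em> on the stack alongside the node list, so each popped item knows where its keys belong.</p>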
|
<python><algorithm><iteration>
|
2024-01-09 07:41:04
| 2
| 4,211
|
ComplicatedPhenomenon
|
77,784,846
| 709,439
|
How to find similar sounding words?
|
<p>I'm writing a specialized (in the food realm) multi-lingual search engine.<br />
I use Python and NLTK libraries.
I have quite a big database of recipes for all the cultures I want to support.</p>
<p>I'm asking if and how it is possible to find a <em>misspelled</em> word in my indexed word corpus...<br />
For example, in Italian, to look for the word "couscous", many users would say/write "cus cus" or "cuscus"...</p>
<p>In summary, this is an example of how I tokenize my index of lexemes for search:</p>
<pre><code>import re
import nltk
import string
corpus = 'italian'
stemmer = nltk.stem.snowball.ItalianStemmer()
stopWords = nltk.corpus.stopwords.words(corpus)
# tokenize the sentence(s)
wordTokenizedList = nltk.tokenize.word_tokenize(text)
# remove punctuation and everything lower case
wordTokenizedListNoPunct = [ word.lower() for word in wordTokenizedList if word not in string.punctuation ]
# remove stop words
wordTokenizedListNoPunctNoStopWords = [ word for word in wordTokenizedListNoPunct if word not in stopWords ]
# snowball stemmer
wordTokenizedListNoPunctNoStopWordsStems = [ stemmer.stem(i) for i in wordTokenizedListNoPunctNoStopWords ]
return wordTokenizedListNoPunctNoStopWordsStems
</code></pre>
<p>Should I prepare my index someway differently to reach my goal?</p>
<p>Any additional remarks about a more complete text-analysis flow for tokenization are welcome, of course... :-)</p>
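<p>As one hedged option — edit-distance rather than true phonetic matching (for sound-alike matching you'd look at phonetic algorithms such as Soundex/Metaphone, e.g. via the <code>jellyfish</code> package) — the standard library's <code>difflib</code> already catches spellings like "cuscus" against an index of canonical terms:</p>

```python
from difflib import get_close_matches

index_terms = ['couscous', 'pasta', 'risotto']  # illustrative index

for query in ['cuscus', 'cus cus', 'couscous']:
    # normalise the query the same way as the index (lowercase, no spaces)
    q = query.lower().replace(' ', '')
    print(query, '->', get_close_matches(q, index_terms, n=1, cutoff=0.6))
```

<p>The <code>cutoff</code> similarity threshold would need tuning per language; applying the same stemming/normalisation to queries and index terms keeps the comparison fair.</p>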
|
<python><nlp><nltk>
|
2024-01-09 06:57:38
| 2
| 17,761
|
MarcoS
|
77,784,646
| 15,361,826
|
Issues while using related names
|
<p>This is the Course and LearnerCourse model:</p>
<pre><code>class Course(BaseModel, Timestamps, SoftDelete):
    name = models.TextField(_("Course Name"), max_length=100,
                            blank=True, null=True, unique=False)
    description = models.TextField(
        _("Course Description"), blank=True, null=True)
    course_number = models.CharField(
        max_length=15, null=True, blank=True, unique=True)....

class LearnerCourse(BaseModel, Timestamps, SoftDelete):
    course = models.ForeignKey(Course, verbose_name=_("Course"), on_delete=models.CASCADE, db_index=True, related_name='course_learner_courses')
    learner = models.ForeignKey(Learner, verbose_name=_("Learner Course"), on_delete=models.CASCADE, db_index=True)
    course_subscription = models.ForeignKey(CourseSubscription, null=True, on_delete=models.CASCADE, db_index=True)
    enroll_date = models.DateTimeField(verbose_name=_("Course Enroll Date"))
    is_favourite = models.BooleanField(default=False)
</code></pre>
<p>In LearnerCourse I used a related name for the course field. After that, when used like this:</p>
<pre><code> details = LearnerCourse.objects.filter(learner=learner_id).all().order_by('-datetime_created')
print("details", details)
courses = details.course_learner_courses.all()
print("courses", courses)
</code></pre>
<p>which causes the error <code>'SafeDeleteQueryset' object has no attribute 'course_learner_courses'</code>.
Please give me some suggestions to fix this problem.</p>
|
<python><django><django-models><django-rest-framework>
|
2024-01-09 06:04:51
| 2
| 331
|
user15361826
|
77,784,599
| 12,458,212
|
Adding value to 2d numpy array based on condition
|
<p>Having trouble manipulating a 2D numpy array (below). I'm using np.where to add a value if the given condition is met, but it doesn't seem to work. Is this simply not possible, or am I making a simple mistake?</p>
<pre><code>a = ['circle','square']
b = ['north','south']
c = ['red','blue']
d = ['long','short']

x = []
for shape in a:
    for direction in b:
        for color in c:
            for length in d:
                x.append([shape, direction, color, length, 0])
x = np.array(x)

# conditional add - if condition is met, add 1 to numeric index, else assign value of 2
x[:,4] = np.where((x[:,0]=='circle') & (x[:,1]=='south'), x[:,4]+1, 2)
</code></pre>
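<p>One likely culprit (a guess from the snippet): <code>np.array(x)</code> on rows that mix strings and numbers produces a fixed-width string dtype (<code>&lt;U…</code>), so <code>x[:,4] + 1</code> is no longer numeric arithmetic. A hedged sketch that keeps the counter in a separate numeric array instead:</p>

```python
import numpy as np

a = ['circle', 'square']
b = ['north', 'south']

rows = [[shape, direction] for shape in a for direction in b]
x = np.array(rows)                    # dtype is a fixed-width string
counts = np.zeros(len(x), dtype=int)  # numeric state kept separately

mask = (x[:, 0] == 'circle') & (x[:, 1] == 'south')
counts = np.where(mask, counts + 1, 2)
print(counts)  # [2 1 2 2]
```

<p>Keeping the labels and the numeric column in separate arrays (or in a pandas DataFrame with per-column dtypes) avoids the silent cast back to strings on assignment.</p>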
|
<python><numpy>
|
2024-01-09 05:50:30
| 2
| 695
|
chicagobeast12
|
77,784,528
| 11,053,801
|
how to reinitialise global variable in flask app python
|
<p>Here is the Flask app.py</p>
<pre><code>from flask import Flask, redirect, url_for, request
import trainer

app = Flask(__name__)

@app.route('/success/<name>')
def success(name):
    gValue = trainer.myFunction()
    name = name + gValue
    return 'welcome %s' % name

@app.route('/login', methods=['POST', 'GET'])
def login():
    if request.method == 'POST':
        user = request.form['nm']
        return redirect(url_for('success', name=user))
    else:
        user = request.args.get('nm')
        return redirect(url_for('success', name=user))

if __name__ == '__main__':
    app.run(debug=True, threaded=True, use_reloader=False)
</code></pre>
<p>HTML file</p>
<pre><code><html>
<body>
<form action = "http://localhost:5000/login" method = "post">
<p>Enter Name:</p>
<p><input type = "text" name = "nm" /></p>
<p><input type = "submit" value = "submit" /></p>
</form>
</body>
</html>
</code></pre>
<p>trainer.py</p>
<pre><code>from utils import sumValue

def myFunction():
    pullValue = sumValue()
    return pullValue
</code></pre>
<p>utils.py</p>
<pre><code>aggregateValue = " defaultValue"

def sumValue():
    global aggregateValue
    x = " mainValue"
    x1 = x + aggregateValue
    aggregateValue = " modifiedValue"
    x2 = x1 + aggregateValue
    print(x2)
    return x2
</code></pre>
<p>running the app</p>
<pre><code>C:\test>python app.py
* Serving Flask app 'app'
* Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:5000
Press CTRL+C to quit
127.0.0.1 - - [09/Jan/2024 16:12:57] "POST /login HTTP/1.1" 302 -
mainValue defaultValue modifiedValue
127.0.0.1 - - [09/Jan/2024 16:12:57] "GET /success/Value HTTP/1.1" 200 -
127.0.0.1 - - [09/Jan/2024 16:15:59] "POST /login HTTP/1.1" 302 -
mainValue modifiedValue modifiedValue
127.0.0.1 - - [09/Jan/2024 16:15:59] "GET /success/Value HTTP/1.1" 200 -
</code></pre>
<p>in the first run we get</p>
<p>mainValue defaultValue modifiedValue</p>
<p>in 2nd run</p>
<p>mainValue modifiedValue modifiedValue</p>
<p>as the 2nd run is a fresh run, I expect the middle value should be defaultValue instead of modifiedValue,</p>
<p>The global declaration in utils.py is not getting initialized during 2nd run,</p>
<p>If we Ctrl+C and start again, memory is cleared to get defaultValue</p>
<p>May I know how to initialize the global variable here?</p>
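Module-level globals live for the lifetime of the server process, not per request: with `use_reloader=False` the module is imported once, so the mutated `aggregateValue` survives into the second request. If the goal is a fresh value on every call, one sketch is to keep an immutable module-level default and re-initialise a local from it each call, never mutating the global:

```python
DEFAULT_VALUE = " defaultValue"  # module-level constant, never mutated


def sum_value():
    aggregate_value = DEFAULT_VALUE   # local: re-initialised on every request
    x = " mainValue"
    x1 = x + aggregate_value
    aggregate_value = " modifiedValue"  # rebinding the local, not the constant
    x2 = x1 + aggregate_value
    return x2
```

With this, every request returns " mainValue defaultValue modifiedValue", no matter how many times it runs in the same process.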
|
<python><flask>
|
2024-01-09 05:28:35
| 2
| 1,616
|
hanzgs
|
77,784,445
| 11,637,422
|
How to carry out a function in parallel for a large df?
|
<p>I am trying to carry out a function that searches through my dataframe and substitutes newer entries for older ones. My dataframe is dependent on players and their tickets, which means I have had to split the dataframe by players instead of into even-sized chunks. The function works fine for smaller dataframes, but for a bigger one I have tried parallelising the function with the "multiprocessing" module. The set-up I have carried out is as follows:</p>
<pre><code>
# Callback function for updating progress
def update_progress(result):
with progress_lock:
progress.value += 1
print(f"Progress: {progress.value}/{num_chunks} chunks processed.")
def split_dataframe_by_player(df, num_chunks):
grouped = df.groupby('PLAYER_ID')
sorted_groups = sorted(grouped, key=lambda x: len(x[1]), reverse=True)
chunks = [[] for _ in range(num_chunks)]
for i, group in enumerate(sorted_groups):
chunks[i % num_chunks].append(group[1])
return [pd.concat(chunk) if chunk else pd.DataFrame() for chunk in chunks]
# Parallel processing function
def parallel_process(df, func):
global df_chunks, progress, progress_lock
num_cores = mp.cpu_count()
df_chunks = split_dataframe_by_player(df, num_cores)
progress = Value('i', 0)
progress_lock = Lock()
pool = mp.Pool(num_cores)
results = [pool.apply_async(func, args=(chunk,), callback=update_progress) for chunk in df_chunks]
pool.close()
pool.join()
return pd.concat([r.get() for r in results])
</code></pre>
<p>I don't think it has sped up my computing time much, but I am new to multiprocessing so that might be a direct product of my inexperience. Let me know if I need to improve my multiprocessing functions or if there are better options.</p>
<p>Thanks for the help!</p>
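One thing worth verifying in isolation, before blaming the pool itself, is that the per-player split really is balanced and keeps each player inside a single chunk — if `func` is light relative to the cost of pickling chunks to workers, that overhead alone can erase the speedup. A standalone sketch of the splitting logic with a tiny made-up frame:

```python
import pandas as pd


def split_dataframe_by_player(df, num_chunks):
    # deal the biggest player groups out first, round-robin,
    # so chunk sizes stay roughly balanced
    groups = sorted(df.groupby('PLAYER_ID'), key=lambda g: len(g[1]), reverse=True)
    chunks = [[] for _ in range(num_chunks)]
    for i, (_, group) in enumerate(groups):
        chunks[i % num_chunks].append(group)
    return [pd.concat(c) if c else pd.DataFrame() for c in chunks]


df = pd.DataFrame({'PLAYER_ID': [1, 1, 1, 2, 3], 'TICKET': range(5)})
chunks = split_dataframe_by_player(df, 2)
```

Each player's rows end up in exactly one chunk, and the total row count is preserved.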
|
<python><multiprocessing>
|
2024-01-09 05:00:10
| 0
| 341
|
bbbb
|
77,784,413
| 16,405,935
|
Fill N/A with previous day data
|
<p>I have a dataframe and it just has data for weekday. Below is sample dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'BAS_DT': ['2023-01-02', '2023-01-03', '2023-01-04', '2023-01-05', '2023-01-05', '2023-01-05', '2023-01-06', '2023-01-07'],
'CUS_NO': [np.nan, np.nan, '900816636', '900816636', '900816946', '900816931', np.nan, np.nan],
'VALUE': [np.nan, np.nan, 10, 10, 7, 8, np.nan, np.nan],
'BR': [np.nan, np.nan, 100, 100, 200, 300, np.nan, np.nan]})
df
BAS_DT CUS_NO VALUE BR
0 2023-01-02 NaN NaN NaN
1 2023-01-03 NaN NaN NaN
2 2023-01-04 900816636 10.0 100.0
3 2023-01-05 900816636 10.0 100.0
4 2023-01-05 900816946 7.0 200.0
5 2023-01-05 900816931 8.0 300.0
6 2023-01-06 NaN NaN NaN
7 2023-01-07 NaN NaN NaN
</code></pre>
<p>I want to fill <code>2023-01-06</code> and <code>2023-01-07</code> same with <code>2023-01-05</code>. I tried <code>ffill</code> but it just fill with the first row that closest to NaN row. Below is my desired Output:</p>
<pre><code> BAS_DT CUS_NO VALUE BR
0 2023-01-02 NaN NaN NaN
1 2023-01-03 NaN NaN NaN
2 2023-01-04 900816636 10.0 100.0
3 2023-01-05 900816636 10.0 100.0
4 2023-01-05 900816946 7.0 200.0
5 2023-01-05 900816931 8.0 300.0
7 2023-01-06 900816636 10.0 100.0
8 2023-01-06 900816946 7.0 200.0
9 2023-01-06 900816931 8.0 300.0
10 2023-01-07 900816636 10.0 100.0
11 2023-01-07 900816946 7.0 200.0
12 2023-01-07 900816931 8.0 300.0
</code></pre>
<p>Thank you.</p>
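`ffill` works row by row, so it cannot duplicate a whole day's block of rows. One hedged approach: find the dates whose rows are all NaN and, for each, copy the most recent populated date's rows under the new date:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'BAS_DT': ['2023-01-02', '2023-01-03', '2023-01-04', '2023-01-05',
               '2023-01-05', '2023-01-05', '2023-01-06', '2023-01-07'],
    'CUS_NO': [np.nan, np.nan, '900816636', '900816636', '900816946',
               '900816931', np.nan, np.nan],
    'VALUE': [np.nan, np.nan, 10, 10, 7, 8, np.nan, np.nan],
    'BR': [np.nan, np.nan, 100, 100, 200, 300, np.nan, np.nan]})

valid = df.dropna(subset=['CUS_NO'])
populated_days = set(valid['BAS_DT'])
pieces = []
for day in df['BAS_DT'].unique():           # ISO dates compare correctly as strings
    if day in populated_days:
        pieces.append(valid[valid['BAS_DT'] == day])
        continue
    earlier = valid[valid['BAS_DT'] < day]
    if earlier.empty:                        # nothing before it: keep the NaN rows
        pieces.append(df[df['BAS_DT'] == day])
    else:
        prev = earlier['BAS_DT'].max()       # latest populated day before the gap
        pieces.append(valid[valid['BAS_DT'] == prev].assign(BAS_DT=day))
out = pd.concat(pieces, ignore_index=True)
```

On the sample data this keeps the two leading NaN days as-is and expands 2023-01-06 and 2023-01-07 into three rows each, copied from 2023-01-05.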
|
<python><pandas>
|
2024-01-09 04:46:14
| 4
| 1,793
|
hoa tran
|
77,784,396
| 2,272,824
|
How to quickly (vectorized?) extract varying length slices from one numpy 1D array an insert into a 2d array
|
<p>I'm trying to extract multiple, non-overlapping, slices from a single 1D numpy array into consecutive rows of a 2D array.</p>
<p>Let's say the source array is,</p>
<pre><code>s=array([0, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12])
</code></pre>
<p>and the target array is a 2D array of zeroes such as,</p>
<pre><code>t = array([[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]])
</code></pre>
<p>And I want to extract the slices <code>[1:4:1]</code> and <code>[6:10:1]</code> from <code>s</code> and inject them into <code>t</code>. I have as many slices in <code>s</code> as there are rows in <code>t</code>. All slices are shorter than the row length of <code>t</code>. Each extracted slice should replace the values of the corresponding row in <code>t</code>.</p>
<p>The desired outcome for <code>t</code> is:</p>
<pre><code>array([[1, 2, 3, 0, 0],
[6, 7, 8, 9, 0]])
</code></pre>
<p>If the extracted slices were all equal length then I could do this with a 2D index array, but they are not.</p>
<p>I suspect I should be able to do this by creating two arrays of slices. One of them defines the slices that select the data from the source, and the other identifies the target row positions and then uses something like,</p>
<pre><code>b[:,target_slice_array]=a[source_slice_array]
</code></pre>
<p>but I just can't figure out any syntax that works.</p>
<p>Here is the simple loop approach I'm using at the moment. This is what I'm trying to speed up.</p>
<pre><code>source=np.array([0, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12])
source_slices=[np.s_[0:2:1], np.s_[7:10:1]]
target_slices=[np.s_[0:2:1], np.s_[0:3:1]]
target=np.zeros(shape=(2,10), dtype=np.int32)
#for index, (source_slice, target_slice) in enumerate(zip(source_slices, target_slices)):
#target[index,target_slice]=source[source_slice]
# Doubles the speed by moving the for loop to use range
for index in range(len(source_slices)):
    target[index, target_slices[index]] = source[source_slices[index]]
</code></pre>
<p>This updates target to,</p>
<pre><code>[[ 0 1 0 0 0 0 0 0 0 0]
[ 7 9 10 0 0 0 0 0 0 0]]
</code></pre>
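One way to avoid the Python-level assignment loop is to expand every slice into flat index arrays once, then do a single vectorized scatter. A sketch assuming unit step, with starts/stops written out explicitly (the list comprehensions still loop, but only to build indices):

```python
import numpy as np

source = np.array([0, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12])
starts = np.array([0, 7])       # per-row slice start in `source`
stops = np.array([2, 10])       # per-row slice stop in `source`
lengths = stops - starts
target = np.zeros((2, 10), dtype=np.int32)

# flatten all source slices into one fancy index, with matching (row, col) targets
src_idx = np.concatenate([np.arange(a, b) for a, b in zip(starts, stops)])
rows = np.repeat(np.arange(len(starts)), lengths)
cols = np.concatenate([np.arange(n) for n in lengths])
target[rows, cols] = source[src_idx]
```

The final line is the only one that touches the data, and it handles all rows at once regardless of how the slice lengths differ.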
|
<python><numpy><slice>
|
2024-01-09 04:39:21
| 1
| 391
|
scotsman60
|
77,784,395
| 338,479
|
How can I read a file with varying encoding?
|
<p>Writing code that will read a file in Berkeley mbox format. This is a file containing multiple email messages, each of which may have its own encoding. Code is currently crashing because it encounters a non-utf8 character in the file.</p>
<pre><code> # test; read and print just the headers from an email message
print("file = %s" % ifile)
try:
while True:
line = ifile.readline()
print("line = %s %s" % (type(line), line))
if not line: break
if line.startswith("From "): break
except Exception as e:
print("problem reading file: %s" % e)
raise
</code></pre>
<p>generates</p>
<pre><code>file = <_io.TextIOWrapper name='/Users/falk/Mail/art2' mode='r' encoding='UTF-8'>)
problem reading file: 'utf-8' codec can't decode byte 0x9a in position 7081: invalid start byte
</code></pre>
<p>Does anybody know of a standard way or a utility to handle this, or am I stuck opening the file in binary and figuring out myself? I looked at the email.header and email.message packages and found no joy there.</p>
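The standard library's `mailbox` module is built for exactly this case: it reads the mbox file in binary and defers per-message decoding, so a stray non-UTF-8 byte in one message doesn't abort the whole parse. A sketch against a tiny synthetic mbox containing an invalid UTF-8 byte:

```python
import mailbox
import os
import tempfile

# a one-message mbox whose body contains 0xe9, which is invalid UTF-8
raw = (b"From alice Mon Jan  1 00:00:00 2024\n"
       b"From: alice@example.com\n"
       b"Subject: hello\n"
       b"\n"
       b"caf\xe9 body with a raw latin-1 byte\n")
path = os.path.join(tempfile.mkdtemp(), "test.mbox")
with open(path, "wb") as f:
    f.write(raw)

box = mailbox.mbox(path)
subjects = [msg["Subject"] for msg in box]
```

If you'd rather keep the manual line-by-line loop, opening with `open(path, encoding="utf-8", errors="replace")` (or `errors="surrogateescape"` to round-trip the bytes) also avoids the crash.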
|
<python><email><character-encoding>
|
2024-01-09 04:38:19
| 1
| 10,195
|
Edward Falk
|
77,784,148
| 13,528,142
|
Pandas groupby.head(-n) drops some groups
|
<p>I have a df such as below:</p>
<p><a href="https://i.sstatic.net/iXZVW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iXZVW.png" alt="dataframe" /></a></p>
<p>For each reviewerID I want to select 2 last instances (sorted by reviewTime) as test data and the rest (up to 2 last instances) as train dataset. This is the code I have:</p>
<pre><code>df = df.sort_values("reviewTime")
df_train = df.groupby('reviewerID').head(-2).reset_index(drop=True)
df_val = df.groupby('reviewerID').tail(2).reset_index(drop=True)
</code></pre>
<p>However, this code drops some groups (reviewerIDs) from the train data. so if I get the number of unique reviewerIDs in the original dataframe and the df_train I will have different numbers:</p>
<pre><code>print(df_train['reviewerID'].nunique())
print(df_val['reviewerID'].nunique())
print(df['reviewerID'].nunique())
</code></pre>
<pre><code>776775
777242
777242
</code></pre>
<p>Note that the test set has correct values.
Now I'm guessing that in cases where my reviewerID has less than 2 rows, pandas is assigning all that to the test df and not the train df.
I'm wondering how to get around this issue now, and keep the reviewerIDs with fewer than 2 values as part of the train data instead of validation, considering that I don't know how much training data I have per reviewerID.</p>
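One way around `head(-2)` silently dropping small groups is to compute each row's rank from the end of its group and route rows explicitly; with this rule (my reading of the desired behaviour), groups with 2 or fewer rows stay entirely in train:

```python
import pandas as pd

df = pd.DataFrame({'reviewerID': ['a', 'a', 'a', 'b'],
                   'reviewTime': [1, 2, 3, 1]})
df = df.sort_values('reviewTime')

grp = df.groupby('reviewerID')
rank_from_end = grp.cumcount(ascending=False)      # 0 = last row of its group
group_size = grp['reviewTime'].transform('size')

is_val = (rank_from_end < 2) & (group_size > 2)    # last 2 rows, big groups only
df_train = df[~is_val].reset_index(drop=True)
df_val = df[is_val].reset_index(drop=True)
```

Here reviewer 'a' (3 rows) contributes its last two reviews to validation, while reviewer 'b' (1 row) is kept in train rather than vanishing.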
|
<python><pandas><dataframe><group-by>
|
2024-01-09 03:03:47
| 2
| 391
|
Zeinab Sobhani
|
77,784,048
| 1,492,229
|
What does the value of LIME mean?
|
<p>I'm exploring the workings of <strong>LIME</strong> in <strong>NLP</strong> models to understand how it elucidates positive and negative words.</p>
<p>I possess a trove of documents resembling this excerpt:</p>
<blockquote>
<p>The UN children’s agency says the 'world cannot stand by and watch' the suffering in Gaza.</p>
</blockquote>
<blockquote>
<p>'Intensifying conflict, malnutrition, and disease in Gaza are creating a deadly cycle that is threatening over 1.1 million children,' UNICEF said in a social media post.</p>
</blockquote>
<blockquote>
<p>At least 249 Palestinians have been killed and 510 wounded in the previous 24 hours in Gaza, the health ministry says."</p>
</blockquote>
<p>My classification model distinguishes each document as either "<strong>United Nation Related</strong>" or "<strong>NON-UN Related</strong>" denoted by labels <strong>1</strong> (United Nation Related) or <strong>0</strong> (NON-UN).</p>
<p>Here's a snippet of the code implementation:</p>
<pre class="lang-py prettyprint-override"><code>exp = explainer.explain_instance(X_test.values[i], clf.predict_proba, num_features=LIMEMaxFeatures)
lst = exp.as_list()
</code></pre>
<p>Initially, everything seems fine. However, an issue arises when employing LIME for certain documents classified as NON-UN Related.</p>
<p>Consider this example:</p>
<blockquote>
<p>"Guterres invoked this responsibility, saying he believed the
situation in Israel and the occupied Palestinian territories, 'may
aggravate existing threats to the maintenance of international peace
and security'."</p>
</blockquote>
<p>Despite this text, my model classifies it as <strong>NON-UN</strong> Related. Upon using LIME to delve into the reasons behind this classification, the results are as follows:</p>
<pre><code> Word Value
occupied -0.130118107160623
situation -0.284915997715762
Guterres 0.22668070156952
Gaza 0.144198872750898
</code></pre>
<p>My query pertains to the word "Guterres," where the LIME value is <strong>POSITIVE</strong>. Does this imply that "Guterres" supports the decision of label <strong>0</strong> prediction (NON-UN Related)? <em>In essence, does a higher value for "Guterres" signify the model's stronger confidence in labeling the document as NON-UN Related?</em></p>
<p>Alternatively, does the <strong>POSITIVE</strong> value for "Guterres" signify a tendency towards label <strong>1</strong> (United Nation Related)? <em>Does a higher positive value for "Guterres" indicate a stronger inclination towards labeling the document as United Nation Related?</em></p>
|
<python><nlp><lime>
|
2024-01-09 02:19:24
| 1
| 8,150
|
asmgx
|
77,783,847
| 2,986,153
|
ModuleNotFoundError: No module named 'yaml' despite having installed yaml and pyyaml
|
<p>I have installed Quarto in VSCode on my M1 Mac.</p>
<p>When I try to render a qmd file into an html file ...</p>
<pre><code>quarto render quarto_vscode.qmd
</code></pre>
<p>...I get a strange error message</p>
<pre><code>Starting python3 kernel...Traceback (most recent call last):
File "/Applications/quarto/share/jupyter/jupyter.py", line 21, in <module>
from notebook import notebook_execute, RestartKernel
File "/Applications/quarto/share/jupyter/notebook.py", line 14, in <module>
from yaml import safe_load
ModuleNotFoundError: No module named 'yaml'
</code></pre>
<p>I have confirmed via <code>conda list</code> that both yaml and pyyaml are installed to my virtual environment.</p>
<pre><code>pyyaml 6.0.1 py310h2aa6e3c_1 conda-forge
yaml 0.2.5 h3422bc3_2 conda-forge
</code></pre>
<p>At this stage I am just trying to get Quarto to render so I have a very simple qmd file:</p>
<pre><code>---
title: "VSCode Quarto Report"
date: "2023-01-07"
format: html
editor: source
include: TRUE
echo: TRUE
jupyter: python3
---
```{python}
print('hello')
```
</code></pre>
|
<python><visual-studio-code><quarto>
|
2024-01-09 00:50:18
| 1
| 3,836
|
Joe
|
77,783,488
| 5,638,335
|
Airflow same dag runs notebook code at different times
|
<p>New to airflow:
I need to run notebook code from my application DAG that's currently scheduled to run at 5 AM PST for the US region. But I want to expand to a few more countries, and that notebook code needs to run at 5 AM local time for each given country.</p>
<p>E.g.</p>
<p>For UK, run the notebook at 5 AM UTC - while passing UK specific parameters to the notebook</p>
<p>For JP, run the notebook at 5 AM JST - while passing JP specific parameters to the notebook</p>
<p>Any ideas on how to do that? Can we pass multiple <code>schedule_interval</code>? Not sure how I'll associate parameters with each region's run tho in the same dag. Thanks in advance.</p>
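A single DAG can only carry one schedule, so the usual pattern is to generate one DAG per region in a loop, giving each a timezone-aware `start_date` (Airflow then interprets the cron expression in that timezone) and baking the region's parameters into that DAG. The timezone arithmetic behind "5 AM local" can be sketched with just the standard library; the DAG wiring in the trailing comment is an assumed pattern, not tested here:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

REGIONS = {
    "US": {"tz": "America/Los_Angeles", "params": {"country": "US"}},
    "UK": {"tz": "Europe/London", "params": {"country": "UK"}},
    "JP": {"tz": "Asia/Tokyo", "params": {"country": "JP"}},
}


def five_am_local_in_utc(tz_name, year=2024, month=1, day=15):
    """UTC hour at which 05:00 local falls on a given (winter) date."""
    local = datetime(year, month, day, 5, tzinfo=ZoneInfo(tz_name))
    return local.astimezone(ZoneInfo("UTC")).hour


# in a real DAG file you would loop over REGIONS, roughly:
#   for region, cfg in REGIONS.items():
#       DAG(dag_id=f"report_{region}", schedule="0 5 * * *",
#           start_date=pendulum.datetime(2024, 1, 1, tz=cfg["tz"]),
#           params=cfg["params"])
```

Each generated DAG then receives its own region-specific parameters, so nothing needs to be multiplexed inside one DAG.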
|
<python><jupyter-notebook><airflow>
|
2024-01-08 22:32:42
| 1
| 680
|
priya khokher
|
77,783,264
| 15,781,591
|
How to grab color mapping from matplotlib pie chart in python?
|
<p>From pandas documentation on creating pie charts from data frames:</p>
<p>I have the following code:</p>
<pre><code>df = pd.DataFrame({'mass': [0.330, 4.87 , 5.97],
'radius': [2439.7, 6051.8, 6378.1]},
index=['Mercury', 'Venus', 'Earth'])
plot = df.plot.pie(y='mass', figsize=(5, 5))
</code></pre>
<p>This create a pie chart of planets by mass:</p>
<p><a href="https://i.sstatic.net/PmonP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PmonP.png" alt="enter image description here" /></a></p>
<p>Now I add an argument in the plotting line to specify I want to use a specific Seaborn color palette "Set2":</p>
<pre><code>df = pd.DataFrame({'mass': [0.330, 4.87 , 5.97],
'radius': [2439.7, 6051.8, 6378.1]},
index=['Mercury', 'Venus', 'Earth'])
plot = df.plot.pie(y='mass', figsize=(5, 5), colors=sns.color_palette('Set2'))
</code></pre>
<p>And we get the same plot, but now with a different color palette used:</p>
<p><a href="https://i.sstatic.net/Tqzp3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tqzp3.png" alt="enter image description here" /></a></p>
<p>When I run each of these code chunks, with each color palette, the color mapping with the planet names remains the same, telling me that the mapping is based on the value assigned to each planet name.</p>
<p>What I want to do is "grab" that mapping as a dictionary, such that it can be used in a for loop.</p>
<p>To explain the goal here, I have a data frame containing many different mass values assigned to each planet, from different analyses. This means that experiment "A" had these specific mass values for our three planets here, and experiment "B" had different values for the three planets, and experiment "C" had another different set of values of the three planets, etc. Each of the dataframes with these experiment values are generated from a for loop, iterating through each experiment. For each experiment I want to create a pie chart with the mass values. The main point here, is that I want all of the color mappings for the planets to be the same for each generated plot in the for loop. And so the idea is to "grab" the color mapping dictionary from the first iteration of the for loop, and then apply that exact color mapping to each following iteration of the for loop, assuming there will always just be three planets in the experiment. How can I do this, specifically getting that color mapping dictionary from the first iteration?</p>
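Since the wedge colors are just the palette zipped with the index order, one approach is to build the mapping yourself once, before the loop, and reuse it for every experiment. A sketch using matplotlib's own copy of the "Set2" palette (so it runs without seaborn; `seaborn.color_palette('Set2')` would give the same colors):

```python
import matplotlib
matplotlib.use("Agg")            # headless backend; drop this line in a notebook
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({'mass': [0.330, 4.87, 5.97]},
                  index=['Mercury', 'Venus', 'Earth'])

palette = plt.get_cmap('Set2').colors        # the fixed Set2 color cycle
color_map = dict(zip(df.index, palette))     # planet -> RGB, frozen once

# every later iteration reuses color_map, so colors stay stable even if a
# particular experiment's frame orders the planets differently
ax = df.plot.pie(y='mass', colors=[color_map[p] for p in df.index])
```

Inside the loop, each experiment's plot call then uses `colors=[color_map[p] for p in that_df.index]`, which pins each planet to the same color in every chart.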
|
<python><pandas><matplotlib>
|
2024-01-08 21:32:57
| 2
| 641
|
LostinSpatialAnalysis
|
77,783,172
| 9,983,652
|
Permutation feature importance with multi-class classification problem
|
<p>I am wondering if we can do Permutation feature importance for multi-class classification problem?</p>
<pre><code>from sklearn.inspection import permutation_importance
metrics = ['balanced_accuracy', 'recall']
pfi_scores = {}
for metric in metrics:
print('Computing permutation importance with {0}...'.format(metric))
pfi_scores[metric] = permutation_importance(xgb, Xtst, ytst, scoring=metric, n_repeats=30, random_state=7)
Cell In[5], line 10
8 for metric in metrics:
9 print('Computing permutation importance with {0}...'.format(metric))
---> 10 pfi_scores[metric] = permutation_importance(xgb, Xtst, ytst, scoring=metric, n_repeats=30, random_state=7)
File c:\ProgramData\anaconda_envs\dash2\lib\site-packages\sklearn\utils\_param_validation.py:214, in validate_params.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
208 try:
209 with config_context(
210 skip_parameter_validation=(
211 prefer_skip_nested_validation or global_skip_validation
212 )
213 ):
--> 214 return func(*args, **kwargs)
215 except InvalidParameterError as e:
216 # When the function is just a wrapper around an estimator, we allow
217 # the function to delegate validation to the estimator, but we replace
218 # the name of the estimator by the name of the function in the error
219 # message to avoid confusion.
220 msg = re.sub(
221 r"parameter of \w+ must be",
222 f"parameter of {func.__qualname__} must be",
223 str(e),
224 )
...
(...)
1528 UserWarning,
1529 )
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
</code></pre>
<p>Then I tried to use <code>average='weighted'</code>, then I still got an error saying <code>average='weighted'</code> is not available. so how can I add <code>average='weighted'</code> into <code>permutation_importance()</code> for multi-class classification?</p>
<pre><code>from sklearn.inspection import permutation_importance
metrics = ['balanced_accuracy', 'recall']
pfi_scores = {}
for metric in metrics:
print('Computing permutation importance with {0}...'.format(metric))
pfi_scores[metric] = permutation_importance(xgb, Xtst, ytst, scoring=metric, n_repeats=30, random_state=7, average='weighted')
TypeError: got an unexpected keyword argument 'average'
</code></pre>
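`permutation_importance` doesn't forward extra keyword arguments to the metric; instead, the averaging choice has to travel inside the scorer via `make_scorer`. A sketch on a synthetic 3-class problem (note `'balanced_accuracy'` already handles multiclass as-is):

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, recall_score

X, y = make_classification(n_samples=150, n_features=8, n_informative=4,
                           n_classes=3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# bake average='weighted' into the scorer itself
weighted_recall = make_scorer(recall_score, average='weighted')
result = permutation_importance(clf, X, y, scoring=weighted_recall,
                                n_repeats=5, random_state=7)
```

The same pattern works for precision or F1: any keyword the metric accepts can be fixed at `make_scorer` time.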
|
<python><scikit-learn>
|
2024-01-08 21:07:38
| 1
| 4,338
|
roudan
|
77,783,148
| 2,801,669
|
how to deal with compression when using pydantic for deserialization and serialization
|
<p>Consider the following simple example of a class called <code>TableConfigs</code>:</p>
<pre><code>import pydantic
from enum import Enum
class TimeUnit(str, Enum):
days = "days"
hours = "hours"
minutes = "minutes"
seconds = "seconds"
class TableNames(str, Enum):
surname = "surname"
weather = "weather"
traffic = "traffic"
class TimeQuantity(pydantic.BaseModel):
value: int
unit: TimeUnit
class TableConfig(pydantic.BaseModel):
on: list[str]
time_quantities: list[TimeQuantity]
class TableConfigs(pydantic.BaseModel):
values: dict[TableNames, TableConfig]
</code></pre>
<p>As I am using pydantic I can easily deseralize/serialize an instance of the class:</p>
<pre><code>tc = TableConfigs(
values={
"weather": TableConfig(
on=["city"],
time_quantities=[
TimeQuantity(value=60, unit=TimeUnit.days),
TimeQuantity(value=7, unit=TimeUnit.days),
TimeQuantity(value=1, unit=TimeUnit.seconds),
],
)
}
)
tc_copy = TableConfigs(**tc.model_dump())
</code></pre>
<p>The json representation leaves lot of room for what I called compression, as the result of the model_dump</p>
<pre><code>{'values':
{
<TableNames.weather: 'weather'>:
{
'on': ['city'],
'time_quantities': [
{'value': 60, 'unit': <TimeUnit.days: 'days'>
},
{'value': 7, 'unit': <TimeUnit.days: 'days'>
},
{'value': 1, 'unit': <TimeUnit.seconds: 'seconds'>
}
]
}
}
}
</code></pre>
<p>could without loss of information be written as (encoded):</p>
<pre><code>{'tables': 'weather:city;60d,7d,1s'}
</code></pre>
<p>Which can be achieved by adding a <code>field_serializer</code> and a <code>model_serializer</code>:</p>
<pre><code>class TableConfigs(pydantic.BaseModel):
values: dict[TableNames, TableConfig]
@pydantic.field_serializer("values")
def serialize_values(self, values: dict[TableNames, TableConfig]) -> str:
s = ""
for i, table_name in enumerate(values):
if i > 0:
s += "#"
s += table_name + ":"
s += ",".join(values[table_name].on) + ";"
for time_quantity in values[table_name].time_quantities:
s += str(time_quantity.value) + time_quantity.unit + ","
s = s[:-1]
s = (
s.replace("days", "d")
.replace("hours", "h")
.replace("minutes", "m")
.replace("seconds", "s")
)
return s
@pydantic.model_serializer
def ser_model(self) -> dict[str, Any]:
return {"tables": f"{self.serialize_values(self.values)}"}
tc = TableConfigs(
values={
"weather": TableConfig(
on=["city"],
time_quantities=[
TimeQuantity(value=60, unit=TimeUnit.days),
TimeQuantity(value=7, unit=TimeUnit.days),
TimeQuantity(value=1, unit=TimeUnit.seconds),
],
)
}
)
print(tc.model_dump())
</code></pre>
<p>prints</p>
<pre><code>{'tables': 'weather:city;60d,7d,1s}
</code></pre>
<p>But how can I now again deserialize? I guess I have to write a function which takes the compressed form and converts it into a dictionary which is then passed to the regular pydantic functions used for deserialization.</p>
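Yes — writing the inverse parser and feeding its dict into normal validation is the usual route, e.g. by hanging it on a `@pydantic.model_validator(mode="before")` in pydantic v2. The parsing itself needs no pydantic at all; a sketch of the inverse of the serializer above:

```python
UNIT_MAP = {"d": "days", "h": "hours", "m": "minutes", "s": "seconds"}


def decompress(tables: str) -> dict:
    """Inverse of the compact form, e.g. 'weather:city;60d,7d,1s'."""
    values = {}
    for table in tables.split("#"):                  # '#' separates tables
        name, rest = table.split(":")
        on_part, quantity_part = rest.split(";")
        values[name] = {
            "on": on_part.split(","),
            "time_quantities": [
                {"value": int(q[:-1]), "unit": UNIT_MAP[q[-1]]}
                for q in quantity_part.split(",")
            ],
        }
    return {"values": values}


# then: TableConfigs(**decompress(payload["tables"])), or call decompress from
# a model_validator(mode="before") when the input is the compressed dict
```

Wiring it into a `mode="before"` validator keeps both forms accepted: plain nested dicts pass through untouched, while `{"tables": "..."}` payloads are expanded first.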
|
<python><pydantic>
|
2024-01-08 21:02:03
| 1
| 1,080
|
newandlost
|
77,783,143
| 9,357,484
|
The requirments.txt file on Github repo unable to install the python packages
|
<p>I downloaded a repository from GitHub and saved it to my computer's Downloads folder (not cloned). In the description section of the repository it is written that the project's dependencies are defined in the requirments.txt file and one can install the packages using <strong>"pip3 install -r requirments.txt"</strong>. So I navigated to the folder on my local computer where requirments.txt exists, and ran the command.</p>
<p>I received the error <strong>ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirments.txt'</strong></p>
<p>However, I checked the folder using the ls command, and the requirments.txt file is within the folder. I am not sure why pip3 is unable to find requirments.txt there.</p>
<p>I am using Ubuntu 20.04.5 LTS as operating system.</p>
|
<python><github><pip>
|
2024-01-08 21:01:04
| 2
| 3,446
|
Encipher
|
77,783,062
| 3,182,496
|
MacOS - how to choose audio device from terminal
|
<p>I've been working on a Python program to create audio and also play back existing sound files. I can spawn multiple processes and have them all play to the laptop speakers, but I was wondering if it was possible to send each signal to a separate sound device. This is so I can apply effects to some processes but not all together.</p>
<p>I'm using a MacBook and python <code>simpleaudio</code>, which calls AudioToolbox to connect to the output device. I've also got <code>ffmpeg</code> installed, so could use <code>ffplay</code> if that is easier. The <code>pydub</code> library uses this - it exports the current wave to a temp file then uses subprocess and <code>ffplay</code> to play it back.</p>
<p>I can get a list of devices, but am not sure how to use this list to choose a device.</p>
<pre><code>% ffplay -devices
Devices:
D. = Demuxing supported
.E = Muxing supported
--
E audiotoolbox AudioToolbox output device
D avfoundation AVFoundation input device
D lavfi Libavfilter virtual input device
E sdl,sdl2 SDL2 output device
D x11grab X11 screen capture, using XCB
</code></pre>
<p>I did see a post that suggested using <code>ffmpeg</code> to list devices, again I can't figure out how to use this list.</p>
<pre><code>% ffmpeg -f lavfi -i sine=r=44100 -f audiotoolbox -list_devices true -
Input #0, lavfi, from 'sine=r=44100':
Duration: N/A, start: 0.000000, bitrate: 705 kb/s
Stream #0:0: Audio: pcm_s16le, 44100 Hz, mono, s16, 705 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (pcm_s16le (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
[AudioToolbox @ 0x135e3f230] CoreAudio devices:
[AudioToolbox @ 0x135e3f230] [0] Background Music, (null)
[AudioToolbox @ 0x135e3f230] [1] Background Music (UI Sounds), BGMDevice_UISounds
[AudioToolbox @ 0x135e3f230] [2] MacBook Air Microphone, BuiltInMicrophoneDevice
[AudioToolbox @ 0x135e3f230] [3] MacBook Air Speakers, BuiltInSpeakerDevice
[AudioToolbox @ 0x135e3f230] [4] Aggregate Device, ~:AMS2_Aggregate:0
Output #0, audiotoolbox, to 'pipe:':
Metadata:
encoder : Lavf59.27.100
Stream #0:0: Audio: pcm_s16le, 44100 Hz, mono, s16, 705 kb/s
Metadata:
encoder : Lavc59.37.100 pcm_s16le
size=N/A time=00:00:05.06 bitrate=N/A speed=0.984x
video:0kB audio:436kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Exiting normally, received signal 2.
</code></pre>
<p>This does at least give me a recognisable list of devices. If I add more Aggregate Devices, can I play back different files to each device?</p>
|
<python><macos><audio><ffmpeg>
|
2024-01-08 20:39:38
| 2
| 1,278
|
jon_two
|
77,782,973
| 7,802,751
|
How do I return to the default output (cell output) in Google Colab? Reassign sys.stdout to sys.__stdout__ doesn't seem to be working
|
<p>I'm trying to save results from printing the help of a function in a file, instead of presenting them on the cell output in Google Colab.</p>
<p>My code is below:</p>
<pre><code>import sys
import io
def capture_help_to_buffer(obj):
output_buffer = io.StringIO()
sys.stdout = output_buffer
try:
help(obj)
except:
pass
sys.stdout = sys.__stdout__
return output_buffer.getvalue()
# Example usage
help_output = capture_help_to_buffer(list)
print(help_output)
</code></pre>
<p>My problem is: after running this code, <code>print</code> returns nothing. In other words: it seems that the reassigning of <code>sys.stdout</code> to its supposedly default value (<code>sys.__stdout__</code>) is not working.</p>
<p>My question: to what value should I reassign <code>sys.stdout</code> so <code>print</code> will return to its normal behavior (presenting its output in Google Colab's cell's output)?</p>
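In Colab/IPython, `sys.__stdout__` is the *process* stdout, not the notebook's stream object, so restoring to it detaches `print` from the cell output. The fix is to restore whatever stream was active just before the swap — or let `contextlib.redirect_stdout` do that bookkeeping automatically:

```python
import io
import sys
from contextlib import redirect_stdout


def capture_help_to_buffer(obj):
    buf = io.StringIO()
    with redirect_stdout(buf):   # swaps sys.stdout in, then restores the
        help(obj)                # *previous* stream on exit, whatever it was
    return buf.getvalue()


previous = sys.stdout
help_output = capture_help_to_buffer(obj=list)
```

After the call, `sys.stdout` is exactly the object it was before, so `print` keeps writing to the notebook cell (or terminal) as usual.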
|
<python><io><printf><google-colaboratory><stdout>
|
2024-01-08 20:14:49
| 1
| 997
|
Gustavo Mirapalheta
|
77,782,906
| 7,462,275
|
How to add a column in a panda series that contains the exponential of another column in multiprecision?
|
<p>Considering this panda data frame :</p>
<pre><code>df = pd.DataFrame({'x': list(range(-10000, 10001, 1000))})
</code></pre>
<p>I would like to insert a column (named <code>exp_x</code>) that contains the exponential of these numbers.
Multiprecision is needed because some numbers are too large or too small. The gmpy2 library is used (for example: <code>exp(mpfr(5000))</code> or <code>exp(mpfr(-5000))</code>).</p>
<p>What is the shortest method to do that?</p>
<p>Thanks for answer.</p>
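With gmpy2 installed, the shortest route is probably `df['exp_x'] = df['x'].map(lambda v: gmpy2.exp(gmpy2.mpfr(v)))` — `map` applies the multiprecision exponential element-wise and stores the mpfr objects in an object-dtype column. The same shape is sketched below with the standard library's `decimal` (so it runs even without gmpy2); swap the lambda for the gmpy2 one as needed:

```python
from decimal import Decimal

import pandas as pd

df = pd.DataFrame({'x': list(range(-10000, 10001, 1000))})

# element-wise multiprecision exponential in an object-dtype column;
# with gmpy2 the lambda would be: lambda v: exp(mpfr(v))
df['exp_x'] = df['x'].map(lambda v: Decimal(v).exp())
```

Neither exp(-10000) nor exp(10000) underflows or overflows here, which is the whole point of the multiprecision type.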
|
<python><pandas><multiprecision><gmpy>
|
2024-01-08 19:56:03
| 0
| 2,515
|
Stef1611
|
77,782,603
| 23,179,206
|
Is there a way to read the values property of a PySimpleGUI combobox?
|
<p>I have created a PySimpleGUI combo box with code which simplifies down to:</p>
<pre><code>combo_list = ['Choice 1', 'Choice 2']
layout = [[sg.Combo(combo_list, default_value=combo_list[0], key='-COMBO-', enable_events=True)]]
</code></pre>
<p>The user is able to add items to the drop-down list, and when he quits the app, I save the updated combo_list so that it can be used when the program is next run. At the moment I am doing this by adding any new entries made by the user, to my combo_list array. But is there a way to simply read back the updated parameter 'values' from the Combo element itself rather than having to shadow it by updating the combo_list array?</p>
<p>There is no problem reading back the value that the user has chosen using <code>window['-COMBO-'].get()</code>, but in spite of checking the PySimpleGUI docs, and more generally, I have not been able to find a way to read the list of values. If this is not possible, I will continue to update the combo_list array whenever the user adds an item. But it would be good to know.</p>
|
<python><pysimplegui>
|
2024-01-08 18:44:34
| 1
| 910
|
Lee-xp
|
77,782,599
| 9,443,671
|
How can I extract all the frames from a particular time interval in a video?
|
<p>I'm trying to extract all the frames from a particular time frame in Python. For example, I want to take the frames of a video from <code>start_time = 3.25</code> to <code>end_time = 5.5</code>. Let's assume the fps of the video is 60. How would I do that?</p>
<p>Very loose code of what I'm thinking of doing:</p>
<pre><code>
video = load_video(my_video_path)
fps = 60
start_frame = 3.25 * fps
end_frame = 5.5 * fps
sliced_video = video[start_frame:end_frame]
</code></pre>
<p>Not sure what the correct libraries and methodology is to do the following?</p>
|
<python><opencv><video-processing>
|
2024-01-08 18:43:27
| 1
| 687
|
skidjoe
|
77,782,460
| 1,830,639
|
How to configure VsCode to debug python uvicorn + open telemetry
|
<p>This <a href="https://stackoverflow.com/a/61000306/1830639">answer</a> works for debugging FastAPI applications in VsCode.</p>
<p>But now I would like to debug FastAPI while using OpenTelemetry automatic instrumentation for python.</p>
<p>To run your python code with open telemetry, you need to run through the CLI like this (<a href="https://opentelemetry.io/docs/instrumentation/python/getting-started/#run-the-instrumented-app" rel="nofollow noreferrer">example using Flask</a>):</p>
<pre><code>opentelemetry-instrument --traces_exporter console --metrics_exporter console --logs_exporter console --service_name dice-server flask run -p 8080
</code></pre>
<p>Now, how can I run like that using VsCode and still be able to debug FastAPI?</p>
<p>Here's my <code>launch.json</code></p>
<pre><code>{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python: Uvicorn",
"type": "python",
"request": "launch",
"module": "uvicorn",
"console": "integratedTerminal",
"justMyCode": true,
"args": [
"server.main:app",
//"--reload",
"--port",
"8000"
],
"envFile": "${workspaceFolder}/.env"
}
]
}
</code></pre>
|
<python><open-telemetry><uvicorn><otel>
|
2024-01-08 18:16:09
| 1
| 1,033
|
JobaDiniz
|
77,782,437
| 5,924,264
|
How to check that all non-nan values in dataframe column are > 0?
|
<p>I have a dataframe column that may contain NaNs, which are acceptable; however, non-positive values are not acceptable.</p>
<p>I tried to do</p>
<pre><code>assert (df[col] > 0).all()
</code></pre>
<p>But this asserts if there is a nan column.</p>
<p>I also tried</p>
<pre><code>assert (df[col] > 0 | df[col] == np.nan).all()
</code></pre>
<p>But this gives the error</p>
<pre><code>E TypeError: cannot compare a dtyped [float64] array with a scalar of type [bool]
</code></pre>
<p>What is the appropriate way to check this?</p>
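<p>A minimal sketch of one way to express this, relying on the pandas/NumPy convention that any comparison with NaN evaluates to False (so NaN is neither <code>&gt; 0</code> nor <code>&lt;= 0</code>):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col": [1.0, 2.5, np.nan, 3.0]})

# Test only the non-NaN values explicitly:
ok = (df["col"].dropna() > 0).all()

# Equivalent: no value is known to be <= 0 (NaN <= 0 is False):
ok2 = not (df["col"] <= 0).any()

assert ok and ok2
```

<p>The second form avoids a copy from <code>dropna()</code>, at the cost of being slightly less obvious to read.</p>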
|
<python><pandas><dataframe><nan>
|
2024-01-08 18:11:06
| 4
| 2,502
|
roulette01
|
77,782,213
| 5,133,005
|
Django Foreign Key To Any Subclass of Abstract Model
|
<p>I have an abstract Django model with two concrete implementations. E.g.</p>
<pre class="lang-py prettyprint-override"><code>class Animal(models.Model):
name = models.CharField("name", max_length=100)
class Meta:
abstract = True
class Dog(Animal):
...
class Cat(Animal):
...
</code></pre>
<p>And then I want to create a generic foreign key that points to a Dog or a Cat. From the documentation about <a href="https://docs.djangoproject.com/en/4.2/ref/contrib/contenttypes/#django.contrib.contenttypes.generic.GenericForeignKey" rel="nofollow noreferrer">GenericForeignKeys</a> I know that I can do something like</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models
class Appointment(models.Model):
content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
object_id = models.PositiveIntegerField()
patient = GenericForeignKey("content_type", "object_id")
</code></pre>
<p>But</p>
<ol>
<li>I have to do this for every model that can point to a <code>Dog</code> or a <code>Cat</code>.</li>
<li>It doesn't enforce that the foreign key actual points to an <code>Animal</code>.</li>
</ol>
<p>Is there a way to set up my <code>Animal</code> model such that in the <code>Appointment</code> model I can just do</p>
<pre class="lang-py prettyprint-override"><code>class Appointment(models.Model):
patient = GenericRelation(Animal)
</code></pre>
|
<python><django><generic-foreign-key>
|
2024-01-08 17:26:25
| 1
| 589
|
Gree Tree Python
|
77,782,132
| 7,243,493
|
Getting lastrowid with sqlite in flask app
|
<p>I have a Flask app, built with the factory pattern, that just uses SQLite.</p>
<p>When I try "last_row = db.lastrowid" I get the error:</p>
<p>AttributeError: 'sqlite3.Connection' object has no attribute 'lastrowid'</p>
<p>When I use the SQLite function "last_insert_rowid"
<a href="https://www.w3resource.com/sqlite/core-functions-last_insert_rowid.php" rel="nofollow noreferrer">https://www.w3resource.com/sqlite/core-functions-last_insert_rowid.php</a></p>
<p>I get the error:</p>
<p>"TypeError: 'sqlite3.Cursor' object is not subscriptable"</p>
<pre><code>def get_db():
if 'db' not in g:
g.db = sqlite3.connect(
current_app.config['DATABASE'],
detect_types=sqlite3.PARSE_DECLTYPES
)
# return rows that behave like dicts. This allows accessing the columns by name.
g.db.row_factory = sqlite3.Row
return g.db
@bp.route('/createstrat', methods=('GET', 'POST'))
@login_required
def createstrat():
if request.method == 'POST':
strategy_name = request.form['strategy_name']
info = request.form['info']
exchange = request.form['exchange']
error = None
if not strategy_name:
error = 'strategy_name is required.'
if error is not None:
flash(error)
else:
db = get_db()
db.execute(
'INSERT INTO strategies (strategy_name, info, fk_user_id, fk_exchange_id)'
' VALUES (?, ?, ?, ?)',
(strategy_name, info, g.user['id'], exchange)
)
db.commit()
# Get the ID of the last inserted row??
#this dont work
#last_row = db.execute('SELECT last_insert_rowid()')
last_row = db.lastrowid
print(last_row, "LAST ROW")
if last_row:
last_inserted_id = last_row[0]
print("Last inserted row ID:", last_inserted_id)
return redirect(url_for('strategy.index'))
</code></pre>
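<p>For context: in the stdlib <code>sqlite3</code> module, <code>lastrowid</code> is an attribute of the <code>Cursor</code> that <code>execute()</code> returns, not of the <code>Connection</code>; and <code>execute()</code> itself returns a cursor, which must be fetched from before subscripting. A minimal sketch of both fixes (table name reused from the question, schema invented for the demo):</p>

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE strategies (id INTEGER PRIMARY KEY, strategy_name TEXT)")

# Connection.execute() is a shortcut that returns a Cursor;
# lastrowid lives on that cursor, not on the connection.
cur = db.execute("INSERT INTO strategies (strategy_name) VALUES (?)", ("demo",))
db.commit()
last_id = cur.lastrowid

# Alternative: last_insert_rowid() returns a result set; fetch a row first.
row = db.execute("SELECT last_insert_rowid()").fetchone()
assert last_id == row[0] == 1
```

<p>So in the route, keeping the return value of <code>db.execute(...)</code> in a variable and reading <code>.lastrowid</code> from it should resolve both errors.</p>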
|
<python><sqlite><flask>
|
2024-01-08 17:11:05
| 1
| 568
|
Soma Juice
|
77,782,115
| 3,757,672
|
WGET working in WSL shell but not in pythons subprocess
|
<p><strong>Context</strong></p>
<p>In a Python (3.11) script in Windows WSL (version 1) I wanted to download a file using the <code>requests</code>-module. It didn't work so I did it manually in the shell using <code>wget</code>:</p>
<p><code>wget https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken</code></p>
<p>Works without any problem, so I thought I'd just wrap it in a subprocess like:</p>
<pre class="lang-py prettyprint-override"><code>check_output("wget https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken", shell=True)
# Alternative
run(["wget", "https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken"])
</code></pre>
<p>This fails for the same reason the requests library failed: it's unable to connect to the source.</p>
<p><strong>Question</strong></p>
<p><strong>Question</strong></p>
<p>Besides the obvious issue that the download does not work: I do not understand why I can issue the command in the shell but not via <code>subprocess</code>. Shouldn't it work the same way?
I also assumed Python was using some proxy, as I've set the <code>https_proxy</code>/<code>http_proxy</code> env variables for pip; however, deleting them didn't help either.</p>
<p>Does anybody have an idea why this occurs (hopefully ultimately leading to resolving the issue...)?</p>
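<p>One diagnostic worth noting (a sketch for narrowing the problem down, not a fix): <code>subprocess</code> inherits the parent Python process's environment by default, so if the interactive shell and the Python process see different proxy variables, shell <code>wget</code> and subprocess <code>wget</code> will behave differently. You can inspect and override the environment explicitly:</p>

```python
import os
import subprocess

# What the Python process (and thus any child subprocess) actually sees:
print(os.environ.get("https_proxy"), os.environ.get("http_proxy"))

# To rule out environment differences, pass an explicit env copy,
# e.g. with the proxy variables cleared:
env = dict(os.environ, https_proxy="", http_proxy="")
result = subprocess.run(["echo", "ok"], env=env, capture_output=True, text=True)
assert result.returncode == 0 and result.stdout.strip() == "ok"
```

<p>If the values printed here differ from what <code>echo $https_proxy</code> shows in the shell where <code>wget</code> worked, that mismatch is the likely culprit.</p>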
|
<python><python-requests><subprocess><windows-subsystem-for-linux>
|
2024-01-08 17:07:23
| 0
| 2,497
|
Markus
|
77,782,091
| 4,399,016
|
Removing rows that have outliers in pandas data frame using Z - Score method
|
<p>I am using <a href="https://stackoverflow.com/a/23202269/4399016">this code</a> to remove outliers.</p>
<pre><code>import pandas as pd
import numpy as np
from scipy import stats
df = pd.DataFrame(np.random.randn(100, 3))
df[np.abs(stats.zscore(df[0])) < 1.5]
</code></pre>
<p>This works. We can see that the number of rows in the data frame has been reduced. However, I need to remove outliers in the percentage-change values of a similar data frame.</p>
<pre><code>df = df.pct_change()
df.plot.line(subplots=True)
df[np.abs(stats.zscore(df[0])) < 1.5]
</code></pre>
<p>This results in an empty data frame. What am I doing wrong? Should the value 1.5 be adjusted?
I tried several values. Nothing works.</p>
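<p>A likely explanation, sketched below: <code>pct_change()</code> leaves NaN in the first row, and with SciPy's default <code>nan_policy='propagate'</code> a single NaN makes the mean (and hence every z-score) NaN; since <code>NaN &lt; 1.5</code> is False, the mask drops every row. Assuming a SciPy version whose <code>zscore</code> accepts <code>nan_policy</code>, ignoring NaN restores usable scores:</p>

```python
import numpy as np
import pandas as pd
from scipy import stats

np.random.seed(0)
df = pd.DataFrame(np.random.randn(100, 3)).pct_change()

# Default behaviour: one NaN poisons the whole z-score array
z_naive = stats.zscore(df[0])
assert np.isnan(z_naive).all()

# Compute mean/std over the non-NaN values instead
z = stats.zscore(df[0], nan_policy="omit")
filtered = df[np.abs(z) < 1.5]
assert len(filtered) > 0
```

<p>The NaN rows themselves still fail the <code>&lt; 1.5</code> test and are dropped, which is usually the desired behaviour here.</p>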
|
<python><pandas><outliers><z-score>
|
2024-01-08 17:03:21
| 1
| 680
|
prashanth manohar
|
77,781,998
| 4,451,315
|
Vectorized numpy functions to get elements of array between bounds defined by other array
|
<p>If I have</p>
<pre><code>data = np.array([97, 98, 99, 100])
offsets = np.array([0, 1, 2, 2, 3, 3, 4])
</code></pre>
<p>then I'd like to do</p>
<pre class="lang-py prettyprint-override"><code>result = []
for i in range(len(offsets)-1):
result.append(data[offsets[i]: offsets[i+1]])
</code></pre>
<p>and get</p>
<pre class="lang-py prettyprint-override"><code>[array([97]),
array([98]),
array([], dtype=int64),
array([99]),
array([], dtype=int64),
array([100])]
</code></pre>
<p>This works, but is slow when <code>data</code> gets large-ish.</p>
<p>Is there any vectorized operation I could make use of here to speed it up?</p>
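<p>One candidate, assuming <code>offsets[0] == 0</code> and <code>offsets[-1] == len(data)</code> as in the example, is <code>np.split</code> on the interior offsets. It returns views rather than new arrays, though it still builds a Python list internally, so the speedup over the loop may be modest:</p>

```python
import numpy as np

data = np.array([97, 98, 99, 100])
offsets = np.array([0, 1, 2, 2, 3, 3, 4])

# Splitting at the interior offsets reproduces the loop's output,
# including the empty segments where consecutive offsets are equal.
result = np.split(data, offsets[1:-1])
assert [a.tolist() for a in result] == [[97], [98], [], [99], [], [100]]
```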
|
<python><numpy>
|
2024-01-08 16:47:50
| 1
| 11,062
|
ignoring_gravity
|