QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 |
|---|---|---|---|---|---|---|---|---|
76,934,225 | 7,662,085 | Pandas - Rolling sum of time deltas (days) of activity in a specific period | <p>I have a DataFrame of dates indicating the days when a machine was turned on and off.</p>
<pre><code> Date Action
0 2019-09-25 Turned On
1 2019-10-10 Turned Off
2 2019-12-01 Turned On
3 2020-01-11 Turned Off
4 2020-01-23 Turned On
5 2020-03-28 Turned Off
6 2020-08-27 Turned On
7 2021-01-10 Turned Off
8 2021-02-17 Turned On
9 2021-06-30 Turned Off
</code></pre>
<p>I need to calculate how many days the machine spent turned on during a specified period, e.g. each quarter. For example, for the fourth quarter of 2019, it spent 40 days turned on:</p>
<ul>
<li>The period 2019-10-01 to 2019-10-10, plus</li>
<li>The period 2019-12-01 to 2019-12-31.</li>
</ul>
<p>The challenge is that the start and end of the quarter don't coincide with the entries in the DataFrame. One needs to work out that, in this example, the days in September and January need to be excluded.</p>
<p>Similarly, to calculate the days of activity in the first quarter of 2020, one needs to subtract the days in the interval 2019-12-01 to 2020-01-11 (rows 2 and 3) since they fall outside of 2020.</p>
<p>I have already attempted to take the difference between consecutive dates and take the sum of days that correspond to active intervals:</p>
<pre><code> Date Direction Days
0 2019-09-25 Turned On NaT
1 2019-10-10 Turned Off 15 days
2 2019-12-01 Turned On 50 days
3 2020-01-11 Turned Off 41 days
4 2020-01-23 Turned On 12 days
5 2020-03-28 Turned Off 65 days
6 2020-08-27 Turned On 152 days
</code></pre>
<p>However, this fails to count intervals that started before each measurement period, and erroneously counts ones that start during the period but continue beyond it.</p>
<p><strong>Note:</strong> I mention quarters as an example, but I also need to be able to work out the same information for arbitrary intervals. For example, for the 12 months after the first recorded activation, which in the example is 2019-09-25 to 2020-09-25, and then every 12 month period thereafter.</p>
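One possible approach (a sketch, not a tested answer): pair each Turned On date with the following Turned Off date, clip both ends of each interval to the query window, and sum the positive overlaps. Using the half-open convention [start, end) below, the Q4 2019 window 2019-10-01 to 2020-01-01 reproduces the 40-day figure from the question.

```python
import pandas as pd

def days_on(events: pd.DataFrame, start, end) -> float:
    """Total days the machine was on within the half-open window [start, end).

    Assumes 'events' has columns 'Date' (datetime64) and 'Action', sorted by
    date with strictly alternating 'Turned On' / 'Turned Off' rows.
    """
    start, end = pd.Timestamp(start), pd.Timestamp(end)
    ons = events.loc[events["Action"] == "Turned On", "Date"].reset_index(drop=True)
    offs = events.loc[events["Action"] == "Turned Off", "Date"].reset_index(drop=True)
    # Clip each On->Off interval to the window; negative overlaps become zero.
    overlap = (offs.clip(upper=end) - ons.clip(lower=start)).clip(lower=pd.Timedelta(0))
    return overlap.dt.total_seconds().sum() / 86400
```

Because the window is a plain pair of timestamps, the same function covers arbitrary intervals (e.g. 12-month spans from the first activation). If the last 'Turned On' has no matching 'Turned Off', append a synthetic off-date (the window end, or today) before calling it.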
<p>Thanks in advance to anyone who can help!</p>
| <python><pandas> | 2023-08-19 08:31:49 | 1 | 8,927 | steliosbl |
76,934,185 | 13,490,662 | Dir and file created by python program doesn't show up in Windows explorer | <p>I use sqlite with a python program.
To create and locate the db file I do this:</p>
<pre><code>home_directory = os.path.expanduser( '~' )
os.mkdir(home_directory+"\AppData\Local\\myprog")
db_url="sqlite:///"+home_directory+"\AppData\Local\\myprog\db1"
</code></pre>
<p>The database file seems to be created, as I can retrieve data after closing and restarting the application.
Nevertheless, when I visit the Local dir in the user's AppData I do not see the created myprog dir.</p>
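A hedged sketch of a way to rule out path problems: build the path with pathlib instead of concatenating backslash strings (where escapes like `\A` or `\d` are easy to get subtly wrong), and print the resolved location so you can paste it straight into Explorer. The `myprog` name is taken from the question.

```python
from pathlib import Path

def make_db_url(base=None) -> str:
    """Create <base>/AppData/Local/myprog and return the sqlite URL for db1."""
    base = Path(base) if base is not None else Path.home()
    app_dir = base / "AppData" / "Local" / "myprog"
    app_dir.mkdir(parents=True, exist_ok=True)  # no error if it already exists
    print(app_dir.resolve())  # paste this exact path into Explorer to verify
    return "sqlite:///" + str(app_dir / "db1")
```

If the printed path differs from the folder you are browsing, the program is likely running as a different user (or elevated), so `os.path.expanduser('~')` resolves to another profile.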
| <python> | 2023-08-19 08:21:56 | 1 | 535 | Meaulnes |
76,934,153 | 3,909,890 | Alembic does not recognize the default schema when creating initial migrations | <p>I'm at my wits end here. I'm trying to create an initial migration to my Postgres database using alembic. I'm autogenerating the migration from my SQLAlchemy ORM models.</p>
<p>Here's my <code>models.py</code> file:</p>
<pre><code>from typing import List, Optional
from sqlalchemy import Boolean, ForeignKey, String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from myapp.database.core import Base, TimestampMixin
from .choices import RoleChoices
class Company(TimestampMixin, Base):
__tablename__ = "companies"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str]
domain: Mapped[str] = mapped_column(String(128), unique=True)
is_active: Mapped[bool] = mapped_column(Boolean, default=True)
users: Mapped[List["User"]] = relationship(
back_populates="company", cascade="all, delete-orphan", passive_deletes=True
)
def __repr__(self) -> str:
return f"""
<Company(
id={self.id},
name={self.name},
domain={self.domain},
is_active={self.id})>"""
class User(TimestampMixin, Base):
__tablename__ = "users"
id: Mapped[int] = mapped_column(primary_key=True)
email: Mapped[str] = mapped_column(String(128), unique=True)
username: Mapped[Optional[str]] = mapped_column(String(128))
is_active: Mapped[bool] = mapped_column(Boolean, default=True)
role: Mapped[RoleChoices]
company_id: Mapped[int] = mapped_column(
ForeignKey("companies.id", ondelete="CASCADE")
)
company: Mapped["Company"] = relationship(back_populates="users")
def __repr__(self) -> str:
return f"<User(id={self.id}, email={self.email}, username={self.username})>"
</code></pre>
<p>My base model is:</p>
<pre><code>from datetime import datetime
from sqlalchemy import DateTime
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
class Base(DeclarativeBase):
pass
class TimestampMixin:
created_at: Mapped[DateTime] = mapped_column(
DateTime, nullable=False, default=datetime.utcnow
)
updated_at: Mapped[DateTime] = mapped_column(DateTime, nullable=True)
</code></pre>
<p>My <code>alembic.ini</code> file is the autogenerated file from running <code>alembic init migrations</code>, so I haven't edited it at all.</p>
<p>My <code>env.py</code> file inside the <code>migrations</code> folder looks like this:</p>
<pre><code>import os
from logging.config import fileConfig
from alembic import context
from dotenv import find_dotenv, load_dotenv
from sqlalchemy import engine_from_config, pool
from myapp.auth.models import Company, User # noqa
from myapp.database.core import Base
load_dotenv(find_dotenv())
DB_USER = os.environ.get("DB_USER")
DB_NAME = os.environ.get("DB_NAME")
DB_PASSWORD = os.environ.get("DB_PASSWORD")
DB_HOST = os.environ.get("DB_HOST")
DB_PORT = os.environ.get("DB_PORT")
connection_string = (
f"postgresql+psycopg2://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
)
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
config.set_main_option("sqlalchemy.url", connection_string)
# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = Base.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline() -> None:
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online() -> None:
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
connectable = engine_from_config(
config.get_section(config.config_ini_section, {}),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
with connectable.connect() as connection:
context.configure(
connection=connection, target_metadata=target_metadata
)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
</code></pre>
<p>Finally the way I setup my database is via docker, but I have the commands in my Makefile:</p>
<pre><code>.PHONY: db-create db-start db-stop db-remove db-drop-extensions db-init
db-create:
@echo "Creating database container..."
docker run --name $(DB_CONTAINER_NAME) -e POSTGRES_PASSWORD=$(DB_PASSWORD) -d postgres
db-start:
@echo "Starting database container..."
docker start postgres
db-stop:
@echo "Stopping database container..."
docker stop postgres
db-remove:
@echo "Removing database container..."
docker rm postgres
db-init: db-create db-start
@echo "Initialising database..."
sleep 10
db-teardown: db-stop db-remove
</code></pre>
<p>Oh, and for the purpose of testing, my <code>.env</code> file looks like this:</p>
<pre><code>export DB_NAME="postgres"
export DB_USER="postgres"
export DB_PASSWORD="postgres"
export DB_HOST="localhost"
export DB_PORT="5432"
export DB_CONTAINER_NAME="postgres"
</code></pre>
<p>After I run <code>source .env</code> I then execute the following make command <code>make db-init</code>. After which this gets printed to my terminal:</p>
<pre><code>Creating database container...
docker run --name postgres -e POSTGRES_PASSWORD=postgres -d postgres
f32873ec9faf9c09bd0ea1e5ff1247dadd60c3eb0ed82b63a318813b27ca783f
Starting database container...
docker start postgres
postgres
Initialising database...
sleep 10
</code></pre>
<p>Now I have my database container up and running.</p>
<p>Finally when I try to run an alembic revision like <code>alembic revision --autogenerate -m "add company and user models"</code> I get the following error message:</p>
<pre><code>INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
Traceback (most recent call last):
File "/Users/myuser/my-app/.venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1965, in _exec_single_context
self.dialect.do_execute(
File "/Users/myuser/my-app/.venv/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 921, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.InvalidSchemaName: no schema has been selected to create in
LINE 2: CREATE TABLE alembic_version (
...
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.InvalidSchemaName) no schema has been selected to create in
LINE 2: CREATE TABLE alembic_version (
^
[SQL:
CREATE TABLE alembic_version (
version_num VARCHAR(32) NOT NULL,
CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num)
)
]
</code></pre>
<p>What's going on here? It seems like Alembic doesn't have access to the right schema. On the other hand, I haven't explicitly set a schema, so I'm under the impression that Alembic should default to the <code>public</code> schema of Postgres.</p>
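One thing worth checking (an assumption, not a confirmed diagnosis): this error typically means the connecting role's <code>search_path</code> resolves to nothing — for instance the <code>public</code> schema was dropped, or the role's <code>search_path</code> was cleared. You can confirm with <code>SELECT current_schema();</code> in psql — a NULL result means the search path is the problem, not Alembic. A sketch of pinning it in <code>env.py</code>'s <code>run_migrations_online</code>, with <code>public</code> assumed as the target schema:

```python
# Hypothetical change to env.py: pass psycopg2 startup options through
# engine_from_config so every migration connection has a schema to create in.
connectable = engine_from_config(
    config.get_section(config.config_ini_section, {}),
    prefix="sqlalchemy.",
    poolclass=pool.NullPool,
    connect_args={"options": "-csearch_path=public"},
)
```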
| <python><postgresql><docker><sqlalchemy><alembic> | 2023-08-19 08:10:07 | 1 | 2,491 | P4nd4b0b3r1n0 |
76,934,145 | 5,109,681 | Python: How to convert date string into datetime object | <p>I have a date string - "Fri Jun 05 15:59:14 PDT 2020". How can I convert it into a <a href="https://docs.python.org/3/library/datetime.html#datetime-objects" rel="nofollow noreferrer">datetime</a> object?</p>
<p>I have tried the format below, but it's not working:</p>
<pre><code>'%a %b %d %H:%M:%S %Z %Y'
</code></pre>
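For what it's worth, <code>%Z</code> is the usual culprit here: <code>strptime</code> only accepts a few timezone names (UTC, GMT, and the local zone), so "PDT" raises <code>ValueError</code> on most systems. A stdlib sketch that maps the abbreviation manually — the offsets below are assumptions you would extend for your data:

```python
from datetime import datetime, timedelta, timezone

# Map the abbreviations you expect to fixed UTC offsets (assumed values).
TZINFOS = {"PST": timezone(timedelta(hours=-8)), "PDT": timezone(timedelta(hours=-7))}

def parse_ctime_tz(s: str) -> datetime:
    """Parse strings like 'Fri Jun 05 15:59:14 PDT 2020'."""
    parts = s.split()
    tz = TZINFOS[parts[4]]  # the abbreviation sits in the fifth field
    naive = datetime.strptime(" ".join(parts[:4] + parts[5:]), "%a %b %d %H:%M:%S %Y")
    return naive.replace(tzinfo=tz)
```

If a third-party package is an option, `dateutil.parser.parse(s, tzinfos=...)` handles the same mapping with less string surgery.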
| <python><datetime><python-datetime> | 2023-08-19 08:07:41 | 1 | 1,057 | Rehan Shikkalgar |
76,933,933 | 1,415,826 | LangChain specific default response | <p>Using LangChain and OpenAI, how can I have the model return a specific default response? For instance, let's say I have these statements/responses:</p>
<pre><code>Statement: Hi, I need to update my email address.
Answer: Thank you for updating us. Please text it here.
Statement: Hi, I have a few questions regarding my case. Can you call me back?
Answer: Hi. Yes, one of our case managers will give you a call shortly.
</code></pre>
<p>If the input is similar to one of the above statements, I would like OpenAI to respond with the corresponding specific answer.</p>
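A sketch of one way to do this without relying on the model at all: gate the LLM call behind a cheap similarity check against the known statements, and short-circuit with the canned answer on a match. The difflib scorer and the 0.6 threshold are assumptions to tune — LangChain setups often use embedding similarity (e.g. a vector-store lookup) for the same gate.

```python
from difflib import SequenceMatcher

CANNED = {
    "Hi, I need to update my email address.":
        "Thank you for updating us. Please text it here.",
    "Hi, I have a few questions regarding my case. Can you call me back?":
        "Hi. Yes, one of our case managers will give you a call shortly.",
}

def default_response(message: str, threshold: float = 0.6):
    """Return the canned answer for the closest known statement, or None."""
    best, score = None, 0.0
    for statement, answer in CANNED.items():
        ratio = SequenceMatcher(None, message.lower(), statement.lower()).ratio()
        if ratio > score:
            best, score = answer, ratio
    return best if score >= threshold else None  # None -> fall through to the LLM
```

When `default_response` returns `None`, you invoke the chain as usual; otherwise you reply immediately with the fixed text.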
| <python><openai-api><langchain> | 2023-08-19 06:59:56 | 1 | 945 | iambdot |
76,933,625 | 7,120,087 | SageMaker complains that /opt/ml/model does not appear to have a file named config.json | <p>I am using a huggingface model alongside a custom pipeline to deploy my model onto SageMaker, my model.tar.gz structure looks like below:</p>
<pre><code>├── added_tokens.json
├── code
│   ├── inference.py
│   ├── pipeline.py
│   └── requirements.txt
├── config.json
├── generation_config.json
├── model-00001-of-00002.safetensors
├── model-00002-of-00002.safetensors
├── model.safetensors.index.json
├── special_tokens_map.json
├── tokenizer_config.json
├── tokenizer.json
└── tokenizer.model
</code></pre>
<p>I deployed my model via</p>
<pre class="lang-py prettyprint-override"><code>from sagemaker.huggingface.model import HuggingFaceModel
hub = {
'HF_TASK':'text-generation'
}
huggingface_model = HuggingFaceModel(
env=hub,
model_data="s3://my_model_bucket/model.tar.gz",
role=role,
transformers_version="4.28",
pytorch_version="2.0",
py_version='py310',
)
# deploy the endpoint
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.g5.xlarge"
)
</code></pre>
<p>However, when I try to invoke the model, here is my response</p>
<pre class="lang-json prettyprint-override"><code>{
"code": 400,
"type": "InternalServerException",
"message": "/opt/ml/model does not appear to have a file named config.json. Checkout \u0027https://huggingface.co//opt/ml/model/None\u0027 for available files."
}
</code></pre>
<p>Another error is <code>W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - OSError: /opt/ml/model does not appear to have a file named config.json. Checkout 'https://huggingface.co//opt/ml/model/None' for available files.</code></p>
<p>But config.json is clearly in my model directory. Here is my inference.py code</p>
<pre class="lang-py prettyprint-override"><code>import torch
from typing import Dict
from transformers import AutoTokenizer, AutoModelForCausalLM
from pipeline import MyCustomPipeline
pipeline = None
def model_fn(model_dir):
print("Loading model from: " + model_dir)
tokenizer = AutoTokenizer.from_pretrained(
model_dir,
local_files_only=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_dir,
local_files_only=True,
device_map="auto",
torch_dtype=torch.bfloat16,
trust_remote_code=True,
)
pipeline = MyCustomPipeline(model, tokenizer)
return model, tokenizer
def transform_fn(model, input_data, content_type, accept):
return pipeline(input_data)
</code></pre>
<p>What am I doing wrong here? I should have followed all needed steps to deploy a huggingface model onto SageMaker.</p>
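One common cause of this exact message — offered as an assumption to check, not a confirmed diagnosis — is an archive whose files sit under a wrapper directory (e.g. <code>model/config.json</code>), since SageMaker extracts the tarball directly into <code>/opt/ml/model</code>. A sketch that repacks with everything at the archive root:

```python
import os
import tarfile

def make_model_tarball(model_dir: str, out_path: str) -> None:
    """Package model_dir so each file is a top-level member of the archive.

    After extraction into /opt/ml/model, config.json then sits at
    /opt/ml/model/config.json rather than /opt/ml/model/<dir>/config.json.
    """
    with tarfile.open(out_path, "w:gz") as tar:
        for name in sorted(os.listdir(model_dir)):
            tar.add(os.path.join(model_dir, name), arcname=name)
```

`tar -tzf model.tar.gz` should then list `config.json` and `code/inference.py` with no leading directory component.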
| <python><huggingface-transformers><amazon-sagemaker><huggingface> | 2023-08-19 04:47:50 | 1 | 1,341 | Baiqing |
76,933,432 | 2,398,040 | How to check if two filenames are similar and select the latest version | <p>I have a messy folder structure with a lot of files and people having saved a lot of versions of these files over time, something like:</p>
<pre><code>Our awesome presentation v0.pptx
our_awesome_presentation v1.pptx
Our_Awesome_Presentation_vF.pptx
</code></pre>
<p>I'd like to pick one of these files and discard the rest, using python. There are potentially tens of thousands of such files with thousands of unique documents. I do NOT need to be 100% accurate in picking the files, so I think for selecting files it's enough for me to look at file creation date or last modified date. If I miss a few files that are similar (i.e., if I pick up two versions of the same file thinking they are two different files), that's also fine. In short, I don't need to be perfect, I just want to reduce the size of the folder while retaining as much unique information as I can.</p>
<p>Now I am thinking I need some sort of string similarity score for filenames. What is a good way to do this? Can I use something like nltk? How?</p>
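A stdlib sketch (no nltk needed): normalize each filename — strip the extension, unify separators and case, drop version suffixes like v0/v1/vF — then compare stems with difflib. The 0.85 threshold and the version-suffix pattern are assumptions to tune on your data.

```python
import re
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Reduce a filename to a comparable stem."""
    stem = re.sub(r"\.[^.]+$", "", name)         # drop the extension
    stem = re.sub(r"[_\s]+", " ", stem).lower()  # unify separators and case
    stem = re.sub(r"\bv(?:f|\d+)\b", "", stem)   # drop v0 / v1 / vF markers
    return stem.strip()

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    """True when two filenames look like versions of the same document."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold
```

From there, group files whose stems are similar and keep `max(group, key=os.path.getmtime)` — consistent with the "last modified wins, misses are acceptable" requirement.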
| <python><nltk> | 2023-08-19 03:11:06 | 2 | 1,057 | ste_kwr |
76,933,392 | 14,745,738 | Do I have to define functions first before calling them when running a Python script in VS Code? | <p>I've been struggling with Python in VS Code because my functions weren't being recognized.</p>
<p>I tried following a bunch of stuff in this Q&A:
<a href="https://stackoverflow.com/questions/64255834/no-definition-found-for-function-vscode-python">No definition found for function - VSCode Python</a>.
I also tried messing with the interpreter, the Python settings, updating to Python 3.11, and different environments, and nothing worked.</p>
<p>Then I discovered when running this, it works totally fine.</p>
<pre><code>def hello():
print("Hello World!")
hello()
</code></pre>
<p>But when running this, it gives an error that <code>hello()</code> isn't defined</p>
<pre><code>hello()
def hello():
print("Hello World!")
</code></pre>
<p>It's been a few years since I've done any coding with Python, but this feels weird. I'm used to this logic in PowerShell, but everywhere else I'm used to it not caring <em>where</em> something is defined as long as it <em>is</em> defined properly.</p>
<p>Unless this is something related to it being a basic script file that doesn't have a <code>main()</code> function. It's a script that I only plan to run from VS Code, so if I have to define my functions before invoking them I can do it that way. I just prefer to put my functions at the bottom because I'm usually writing them as I'm figuring things out and breaking things into smaller parts.</p>
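This is Python itself, not VS Code: a <code>def</code> statement is executed like any other statement, so the name only exists once the line defining it has run. The usual way to keep functions at the bottom is a main guard — function bodies are not executed at definition time, so <code>main</code> can call <code>hello</code> before <code>hello</code>'s <code>def</code> appears, as long as nothing runs until the end of the file:

```python
def main():
    hello()  # fine: looked up when main() runs, after the whole file loaded

def hello():
    print("Hello World!")

if __name__ == "__main__":
    main()  # last statement of the file, so every def above has executed
```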
| <python><visual-studio-code> | 2023-08-19 02:46:07 | 2 | 539 | dbarnes |
76,933,329 | 13,916,049 | Assign column value if index contains substring that matches the index of another dataframe | <p>I have a list of adata objects <code>dfs</code> derived from <code>scanpy</code>, which uses Pandas under the hood. Its observations can be obtained, for example, using <code>dfs["GSM4819737"].obs</code>.</p>
<p>If the index of the observations matches the substring of <code>anno.index</code> after the last <code>_</code> delimiter, I want to assign the <code>label</code> column from <code>anno</code> to the corresponding row in the observations table.</p>
<p>If there are no matches (i.e., the row is un-annotated), remove the row.</p>
<pre><code>import pandas as pd
import numpy as np
import os
import scanpy as sc
for subdir, dirs, files in os.walk(directory_path):
for file in files:
if file.endswith("_ccRCC.h5"):
file_path = os.path.join(subdir, file) # Get the full file path
id = file.split("_")[0]
dfs[id] = sc.read_10x_h5(file_path, genome="GRCh38", backup_url=None)
dfs[id].var_names_make_unique()
results_file[id] = id + ".h5ad"
for i in dfs:
ids = anno.index.str.extract(r'([^_]*)$')[0].tolist()
if any(j in dfs[i].obs.index.map(str).tolist() for j in ids):
for k in ids:
dfs[i].obs = dfs[i].obs.join(anno.set_index(k)).dropna()
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Input In [124], in <cell line: 1>()
3 if any(j in dfs[i].obs.index.map(str).tolist() for j in ids):
4 for k in ids:
----> 5 dfs[i].obs = dfs[i].obs.join(anno.set_index(k)).dropna()
File ~/.local/lib/python3.9/site-packages/pandas/core/frame.py:5859, in DataFrame.set_index(self, keys, drop, append, inplace, verify_integrity)
5856 missing.append(col)
5858 if missing:
-> 5859 raise KeyError(f"None of {missing} are in the columns")
5861 if inplace:
5862 frame = self
KeyError: "None of ['AAACCTGCAAGTAGTA-1'] are in the columns"
</code></pre>
<p>Input:</p>
<p><code>dfs</code></p>
<pre><code>{'GSM4819737': AnnData object with n_obs Γ n_vars = 1916 Γ 2252
obs: 'n_genes', 'n_genes_by_counts', 'total_counts', 'total_counts_mt', 'pct_counts_mt', 'leiden'
var: 'gene_ids', 'n_cells', 'mt', 'n_cells_by_counts', 'mean_counts', 'pct_dropout_by_counts', 'total_counts', 'highly_variable', 'means', 'dispersions', 'dispersions_norm', 'mean', 'std'
uns: 'hvg', 'leiden', 'leiden_colors', 'leiden_sizes', 'log1p', 'neighbors', 'paga', 'pca', 'rank_genes_groups', 'umap'
obsm: 'X_pca', 'X_umap'
varm: 'PCs'
obsp: 'connectivities', 'distances',
 'GSM4819735': AnnData object with n_obs × n_vars = 713 × 2619
obs: 'n_genes', 'n_genes_by_counts', 'total_counts', 'total_counts_mt', 'pct_counts_mt', 'leiden'
var: 'gene_ids', 'n_cells', 'mt', 'n_cells_by_counts', 'mean_counts', 'pct_dropout_by_counts', 'total_counts', 'highly_variable', 'means', 'dispersions', 'dispersions_norm', 'mean', 'std'
uns: 'hvg', 'leiden', 'leiden_colors', 'leiden_sizes', 'log1p', 'neighbors', 'paga', 'pca', 'rank_genes_groups', 'umap'
obsm: 'X_pca', 'X_umap'
varm: 'PCs'
obsp: 'connectivities', 'distances'}
</code></pre>
<p><code>dfs["GSM4819737"].obs.iloc[0:2,:]</code></p>
<pre><code>pd.DataFrame({'n_genes': {'AAACCTGGTAAATGTG-1': 2261, 'AAACCTGCAAGTAGTA-1': 2497},
'n_genes_by_counts': {'AAACCTGGTAAATGTG-1': 2261, 'AAACCTGCAAGTAGTA-1': 2497},
'total_counts': {'AAACCTGGTAAATGTG-1': 5744.0, 'AAACCTGCAAGTAGTA-1': 10502.0},
'total_counts_mt': {'AAACCTGGTAAATGTG-1': 146.0, 'AAACCTGCAAGTAGTA-1': 18.0},
'pct_counts_mt': {'AAACCTGGTAAATGTG-1': 2.5417826175689697,
'AAACCTGCAAGTAGTA-1': 0.1713959276676178},
'leiden': {'AAACCTGGTAAATGTG-1': '2', 'AAACCTGCAAGTAGTA-1': '1'}})
</code></pre>
<p><code>anno.head().iloc[0:2,-2:]</code></p>
<pre><code>pd.DataFrame({'label': {'SI_18854_AAACCTGCAAGTAGTA-1': '1:Tumor',
'SI_18854_AAACCTGTCCACTGGG-1': '1:Tumor'},
'patient': {'SI_18854_AAACCTGCAAGTAGTA-1': 'SS_2005',
'SI_18854_AAACCTGTCCACTGGG-1': 'SS_2005'}})
</code></pre>
<p>Desired output:</p>
<pre><code>pd.DataFrame({'n_genes': {'AAACCTGCAAGTAGTA-1': 2497},
'n_genes_by_counts': {'AAACCTGCAAGTAGTA-1': 2497},
'total_counts': {'AAACCTGCAAGTAGTA-1': 10502.0},
'total_counts_mt': {'AAACCTGCAAGTAGTA-1': 18.0},
'pct_counts_mt': {'AAACCTGCAAGTAGTA-1': 0.1713959276676178},
 'leiden': {'AAACCTGCAAGTAGTA-1': '1'},
 'label': {'AAACCTGCAAGTAGTA-1': '1:Tumor'}})
</code></pre>
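A sketch of a join-free approach, assuming each barcode suffix appears at most once in <code>anno</code>: turn the index suffixes into a lookup Series, map it onto the observation index, and drop the rows with no match. In the loop this would replace the per-column <code>set_index</code>/<code>join</code> attempt — <code>set_index</code> expects a column name, and 'AAACCTGCAAGTAGTA-1' is an index value, which is exactly what raised the KeyError.

```python
import pandas as pd

def annotate_obs(obs: pd.DataFrame, anno: pd.DataFrame) -> pd.DataFrame:
    """Attach anno['label'] to obs by barcode suffix; drop unmatched rows.

    Assumes the part of each anno.index entry after the last '_' is unique
    and, when annotated, equals a barcode in obs.index.
    """
    suffix = anno.index.str.rsplit("_", n=1).str[-1]
    label_by_barcode = pd.Series(anno["label"].values, index=suffix)
    out = obs.copy()
    out["label"] = out.index.map(label_by_barcode)  # NaN where un-annotated
    return out.dropna(subset=["label"])
```

Usage would then be `dfs[i].obs = annotate_obs(dfs[i].obs, anno)` for each sample.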
| <python><pandas><dataframe><for-loop><bioinformatics> | 2023-08-19 02:06:26 | 0 | 1,545 | Anon |
76,933,266 | 11,556,864 | Packages installed using poetry are not available inside `poetry shell` | <p>Packages installed using <code>poetry install</code> are available in <code>poetry run</code> but not in <code>poetry shell</code>.</p>
<pre class="lang-bash prettyprint-override"><code>$ poetry add {package}
$ poetry install
</code></pre>
<pre class="lang-bash prettyprint-override"><code># this works
$ poetry run python
> import {package}
</code></pre>
<pre class="lang-bash prettyprint-override"><code># ...but not this
$ poetry shell
$ python
> import {package}
</code></pre>
<p>Am I doing something wrong or misunderstanding what <code>poetry shell</code> does?<br />
I thought it would activate the virtual environment setup by poetry.<br />
<code>poetry --version</code> prints <code>Poetry (version 1.5.1)</code></p>
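A quick way to see what is actually happening (assuming a Unix-like shell): compare the interpreter each mode resolves to. If the two paths differ, <code>poetry shell</code> spawned a shell whose PATH puts another <code>python</code> first — pyenv shims, conda's base environment, or a shell rc file that re-activates something are common culprits.

```shell
poetry run which python   # interpreter inside the poetry venv
poetry shell
which python              # should print the same .venv path as above
python -c "import sys; print(sys.prefix)"
```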
| <python><python-venv><python-poetry> | 2023-08-19 01:28:15 | 2 | 1,137 | WieeRd |
76,933,178 | 11,922,765 | Python Detect isolated edges in the histogram plot for outliers detection in time-series data | <p>I am attempting to find outliers my own way. How? Plot the histogram and search for isolated edges: bins with a few counts whose neighboring bins have zero counts. Usually they will be at the far end of the histogram. Those could be outliers; detect and drop them. What kind of data is it? Time-series coming from the field. Sometimes you would see weird numbers (while sensor data is around 50-100, outliers may be -10000 or 1000) when the sensors fail to communicate data in time and the data logger stores these weird numbers. They are momentary, may occur a few times in a year of data, and would be less than 1% of total samples.</p>
<p>My code:</p>
<pre><code># vals, edges = np.histogram(df['column'],bins=20)
# obtained result is
vals = [ 38 0 0 0 0 0 0 0 0 0
0 0 0 0 0 1 11 126664 13853 4536]
edges = [ 0. 2.911165 5.82233 8.733495 11.64466 14.555825 17.46699
20.378155 23.28932 26.200485 29.11165 32.022815 34.93398 37.845145
40.75631 43.667475 46.57864 49.489805 52.40097 55.312135 58.2233 ]
# repeat the last sample in vals. Why: vals always has one sample fewer than edges
vals = np.append(vals, vals[-1])
vedf = pd.DataFrame(data = {'edges':edges,'vals':vals})
# Replace all zero counts with NaN so these rows will not be recognized as occupied.
vedf['vals'] = vedf['vals'].replace(0,np.nan)
# Identify the isolated edges by looking at the number of samples, say, < 50
vedf['IsolatedEdge?'] = vedf['vals'] <50
# plot histogram
plt.plot(vedf['edges'],vedf['vals'],'o')
plt.show()
</code></pre>
<p>Present output:</p>
<p>This is not the correct output. Why? There is only one isolated edge, at the beginning at value 0. However, my code detected the values at 43 and 46 as isolated ones just because they have low counts.</p>
<pre><code>vedf =
edges vals IsolatedEdge?
0 0.000000 38.0 True
1 2.911165 NaN False
2 5.822330 NaN False
3 8.733495 NaN False
4 11.644660 NaN False
5 14.555825 NaN False
6 17.466990 NaN False
7 20.378155 NaN False
8 23.289320 NaN False
9 26.200485 NaN False
10 29.111650 NaN False
11 32.022815 NaN False
12 34.933980 NaN False
13 37.845145 NaN False
14 40.756310 NaN False
15 43.667475 1.0 True
16 46.578640 11.0 True
17 49.489805 126664.0 False
18 52.400970 13853.0 False
19 55.312135 4536.0 False
20 58.223300 4536.0 False
</code></pre>
<p><a href="https://i.sstatic.net/Waebn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Waebn.png" alt="enter image description here" /></a></p>
<p>Expected output:</p>
<p><a href="https://i.sstatic.net/W5CVp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W5CVp.png" alt="enter image description here" /></a></p>
<pre><code>vedf =
edges vals IsolatedEdge?
0 0.000000 38.0 True
1 2.911165 NaN False
2 5.822330 NaN False
3 8.733495 NaN False
4 11.644660 NaN False
5 14.555825 NaN False
6 17.466990 NaN False
7 20.378155 NaN False
8 23.289320 NaN False
9 26.200485 NaN False
10 29.111650 NaN False
11 32.022815 NaN False
12 34.933980 NaN False
13 37.845145 NaN False
14 40.756310 NaN False
15 43.667475 1.0 False
16 46.578640 11.0 False
17 49.489805 126664.0 False
18 52.400970 13853.0 False
19 55.312135 4536.0 False
20 58.223300 4536.0 False
</code></pre>
<p>Once I know a specific edge is an isolated one, I can drop all the samples in that edge.</p>
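For what it's worth, the missing ingredient seems to be the neighbor check itself: a bin is only isolated when it is small <em>and</em> both adjacent bins are empty. A numpy sketch of that test (the &lt;50 cutoff is kept from the question):

```python
import numpy as np

def isolated_edges(vals, min_count=50):
    """Flag histogram bins that are occupied but small AND have empty neighbours.

    A bin is isolated when its count is positive but below min_count and both
    adjacent bins hold zero counts (a missing neighbour at either end of the
    array counts as empty).
    """
    v = np.asarray(vals, dtype=float)
    occupied = v > 0
    left = np.r_[False, occupied[:-1]]    # is the left neighbour occupied?
    right = np.r_[occupied[1:], False]    # is the right neighbour occupied?
    return occupied & (v < min_count) & ~left & ~right
```

Note it should be fed the raw counts (zeros included), e.g. `vedf['IsolatedEdge?'] = isolated_edges(np.nan_to_num(vedf['vals']))` — the NaN-replacement step hides exactly the zero-neighbour information this test needs.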
| <python><pandas><dataframe><numpy><histogram> | 2023-08-19 00:42:24 | 2 | 4,702 | Mainland |
76,933,151 | 670,687 | Multiple targets from one script, copying to multiple different directories in Makefile | <h2>Scenario</h2>
<p>I am working on updating a script that generates source code files for both <strong>C</strong> and <strong>Java</strong> at build time, for each of our builds.<br />
This needs to run on both Windows and Linux.<br />
Two file types need to be created for each language: <strong>Enums</strong> and <strong>Tables</strong>.</p>
<p>Each set of four files (C Enum, C Table, Java Enum, Java Table) is generated from a corresponding CSV file.<br />
Given an input CSV file called <code>A.csv</code>, the output files are <code>AEnum.inc</code>, <code>ATable.inc</code>, <code>AEnum.java</code>, and <code>ATable.java</code>.</p>
<p>All four files can be created at once by calling a python script, with an activated venv, like so:</p>
<pre><code>python main.py --input A.csv --output ./output --enum-c --table-c --enum-java --table-java
</code></pre>
<p>This is shortened in the makefile with a function called <code>run_autogen_all</code>.</p>
<p>Once the files have been created, they need to be copied to different directories:</p>
<ul>
<li>C enums & tables go in <code>$(C_AUTOGEN)/</code></li>
<li>Java enums go in <code>$(JAVA_ENUM)/</code></li>
<li>Java tables go in <code>$(JAVA_TABLE)/</code></li>
</ul>
<h2>Problem</h2>
<p>I have not been able to figure out how to ensure the python script runs only once (runtime is non-negligible), while also copying all of the files to the correct directories.</p>
<p>How can I achieve this behavior with Make?</p>
<h2>Attempted solution</h2>
<p>Here is my current solution that <em>works</em>, but it runs the python script twice and copies files twice.</p>
<pre><code>MAIN_4_CSV_FILES = A.CSV B.CSV C.CSV D.CSV E.CSV F.CSV G.CSV H.CSV I.CSV J.CSV K.CSV L.CSV M.CSV N.CSV O.CSV P.CSV Q.CSV R.CSV S.CSV T.CSV U.CSV V.CSV W.CSV X.CSV Y.CSV Z.CSV
.PHONY : JavaFiles CFiles
JavaFiles : *.java
CFiles : *.inc
*.java *.inc : $(MAIN_4_CSV_FILES)
$(call run_autogen_all,$^)
$(COPYF) *Enum.java $(JAVA_ENUM)
$(COPYF) *Table.java $(JAVA_TABLE)
$(COPYF) *.inc $(C_AUTOGEN)
</code></pre>
<p>Running <code>make JavaFiles CFiles -n</code> gives the following output</p>
<pre><code>python "../../../../../build/Common/CSV_Parser/autogenerate_from_csv.py" --output "./" --csv-files A.CSV B.CSV C.CSV D.CSV E.CSV F.CSV G.CSV H.CSV I.CSV J.CSV K.CSV L.CSV M.CSV N.CSV O.CSV P.CSV Q.CSV R.CSV S.CSV T.CSV U.CSV V.CSV W.CSV X.CSV Y.CSV Z.CSV
cp -f *Enum.java ../../../../Project/java/enums
cp -f *Table.java ../../../../Project/java/tables
cp -f *Enum.inc ../../../../Project/c
cp -f *Table.inc ../../../../Project/c
python "../../../../../build/Common/CSV_Parser/autogenerate_from_csv.py" --output "./" --csv-files A.CSV B.CSV C.CSV D.CSV E.CSV F.CSV G.CSV H.CSV I.CSV J.CSV K.CSV L.CSV M.CSV N.CSV O.CSV P.CSV Q.CSV R.CSV S.CSV T.CSV U.CSV V.CSV W.CSV X.CSV Y.CSV Z.CSV
cp -f *Enum.java ../../../../Project/java/enums
cp -f *Table.java ../../../../Project/java/tables
cp -f *Enum.inc ../../../../Project/c
cp -f *Table.inc ../../../../Project/c
</code></pre>
<h3>Previous attempts</h3>
<p>I've tried many options, here are a few that didn't work</p>
<pre><code>%Enum.inc %Enum.java %Table.inc %Table.java :
$(call run_autogen_all,$<)
$(call copy_attrib,$(notdir $@),$@)
AFiles = $(C_AUTOGEN)/AEnum.c $(C_AUTOGEN)/ATable.c $(JAVA_ENUM)/AEnum.java $(JAVA_TABLE)/ATable.java
$(AFiles) : A.csv
[...]
</code></pre>
<pre><code>JavaEnums : $(JAVA_ENUM)/AEnum.java $(JAVA_ENUM)/BEnum.java [...]
$(JAVA_ENUM)/%Enum.java : %Enum.java
$(call run_autogen_all,$*.csv)
$(call copy_attrib,$(notdir $@),$@)
[...]
</code></pre>
<pre><code>%Enum.java %Enum.c %Table.java %Table.c : %.csv
$(call run_autogen_all,$<)
$(call copy_attrib,$(notdir $@),$@)
[...]
</code></pre>
<p>I also tried messing with <code>.INTERMEDIATE</code> and <code>.SECONDARY</code> to no avail.</p>
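If GNU Make &gt;= 4.3 is available on both platforms (an assumption worth checking), grouped targets are the intended tool for this: the <code>&amp;:</code> separator declares that one recipe invocation produces <em>all</em> of the listed files, so make runs the generator and the copies exactly once. A sketch reusing the question's variables:

```make
# Every file the generator emits, derived from the CSV list.
GEN_FILES := $(foreach f,$(basename $(MAIN_4_CSV_FILES)),\
               $(f)Enum.inc $(f)Table.inc $(f)Enum.java $(f)Table.java)

# '&:' (grouped targets, GNU Make 4.3+): one run of this recipe updates ALL
# of GEN_FILES, so the python script is invoked a single time.
$(GEN_FILES) &: $(MAIN_4_CSV_FILES)
	$(call run_autogen_all,$^)
	$(COPYF) *Enum.java $(JAVA_ENUM)
	$(COPYF) *Table.java $(JAVA_TABLE)
	$(COPYF) *.inc $(C_AUTOGEN)

.PHONY : JavaFiles CFiles
JavaFiles : $(filter %.java,$(GEN_FILES))
CFiles : $(filter %.inc,$(GEN_FILES))
```

On older Make versions, the common fallback is a single stamp file: have `JavaFiles` and `CFiles` depend on e.g. `.autogen.stamp`, whose one recipe runs the script, performs the copies, and touches the stamp.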
| <python><csv><makefile> | 2023-08-19 00:30:43 | 1 | 2,298 | Elec0 |
76,933,117 | 1,633,272 | How to get class name from an annotation for static method? | <p>Here is my code:</p>
<pre><code>import os
import pickle
import hashlib
def read_file(path):
# Implement your read_file logic here
pass
def write_file(path, content):
# Implement your write_file logic here
pass
class CachedStaticMethod:
path = "cache"
def __init__(self, func):
self.func = func
def __get__(self, instance, owner):
if instance is None:
return self # Return the decorator instance for staticmethod to use
return self.func.__get__(instance, owner) # For regular methods
def __call__(self, *args, **kwargs):
cache_path = self.__get_path(*args, **kwargs)
try:
content = read_file(cache_path)
result = pickle.loads(content)
except Exception as e:
result = self.func(*args, **kwargs)
content = pickle.dumps(result)
write_file(cache_path, content)
return result
def __get_path(self, *args, **kwargs):
class_name = self.func.__qualname__.split('.<locals>', 1)[0]
function_name = self.func.__name__
hash_input = f"{class_name}.{function_name}({args}, {kwargs})".encode("utf-8")
hash_value = hashlib.md5(hash_input).hexdigest()
filename = f"{hash_value}.cache"
return os.path.join(self.path, filename)
class MyClass:
@CachedStaticMethod
@staticmethod
def my_static_method(param1: int, param2: str) -> float:
"""
This is a cached static method of MyClass.
"""
result = 3.14 * param1 + len(param2)
return result
# Usage
result1 = MyClass.my_static_method(5, "hello")
result2 = MyClass.my_static_method(5, "hello") # Should use cached result
print(result1)
print(result2)
</code></pre>
<p>This is the Error it's throwing:</p>
<blockquote>
<p>AttributeError Traceback (most recent call
last) in
54
55 # Usage
---> 56 result1 = MyClass.my_static_method(5, "hello")
57 result2 = MyClass.my_static_method(5, "hello") # Should use cached result
58</p>
<p> in __call__(self, *args, **kwargs)
23
24 def __call__(self, *args, **kwargs):
---> 25 cache_path = self.__get_path(*args, **kwargs)
26
27 try:</p>
<p> in __get_path(self, *args, **kwargs)
35
36 def __get_path(self, *args, **kwargs):
---> 37 class_name = self.func.__qualname__.split('.', 1)[0]
38 function_name = self.func.__name__
39</p>
<p>AttributeError: 'staticmethod' object has no attribute '__qualname__'</p>
</blockquote>
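<p>For what it's worth, a minimal check suggests the decorator receives the raw <code>staticmethod</code> wrapper object rather than the function (and, as far as I can tell, <code>staticmethod</code> only started forwarding attributes like <code>__qualname__</code> in Python 3.10). The wrapped function is reachable through <code>__func__</code> on every version:</p>

```python
def f():
    pass

sm = staticmethod(f)      # what a decorator stacked above @staticmethod receives
print(type(sm))           # <class 'staticmethod'>
print(sm.__func__ is f)   # True -- the plain function is always reachable here
print(sm.__func__.__qualname__)  # f
```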
| <python><python-3.x> | 2023-08-19 00:11:30 | 1 | 2,343 | user1633272 |
76,933,039 | 3,848,207 | How to launch this python GUI app without having a console in the background? | <p>I have this python script named <code>clock.py</code>.</p>
<p>Here is the source code.</p>
<pre><code># Source: https://www.geeksforgeeks.org/python-create-a-digital-clock-using-tkinter/

# importing whole module
from tkinter import *
from tkinter.ttk import *

# importing strftime function to
# retrieve system's time
from time import strftime

# creating tkinter window
root = Tk()
root.title('Clock')


# This function is used to
# display time on the label
def time():
    string = strftime('%H:%M:%S %p')
    lbl.config(text=string)
    lbl.after(1000, time)


# Styling the label widget so that clock
# will look more attractive
lbl = Label(root, font=('calibri', 40, 'bold'),
            background='purple',
            foreground='white')

# Placing clock at the centre
# of the tkinter window
lbl.pack(anchor='center')

time()
mainloop()
</code></pre>
<p>I placed a shortcut to this Python script on Windows. When I double-click the shortcut icon, the Tkinter app launches with its GUI, but a console window is also opened. This console is redundant.</p>
<p>How can I launch this Python script without launching the console? Can I modify the code, or is there some Windows setting for it?</p>
<p><a href="https://i.sstatic.net/AkodK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AkodK.png" alt="enter image description here" /></a></p>
<p>I am using Windows 11.</p>
| <python> | 2023-08-18 23:41:14 | 2 | 5,287 | user3848207 |
76,932,941 | 12,358,733 | Migrating from oauth2client to google-auth for retrieving Google ADC access token in Python | <p>I have a few python scripts designed to be run by an end user with the GCP permissions then controlled by group-based IAM policies. To authenticate, the user generates application default credentials (ADC) by running <code>gcloud auth application-default login</code> on the CLI. I then use <a href="https://pypi.org/project/oauth2client/" rel="nofollow noreferrer">oauth2client</a>'s GoogleCredentials() and get_access_token() to get the access token:</p>
<pre><code>from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
access_token = credentials.get_access_token().access_token
</code></pre>
<p>I then pass <code>access_token</code> to an HTTPS request using the <code>Authorization</code> header in this format:</p>
<pre><code>Authorization: Bearer <access_token>
</code></pre>
<p>Works fine, but oauth2client was deprecated a few years back and apparently <a href="https://google-auth.readthedocs.io" rel="nofollow noreferrer">google-auth</a> and <a href="https://github.com/oauthlib/oauthlib/tree/master" rel="nofollow noreferrer">oauthlib</a> should be used instead. Problem is, I've been completely unable to locate any examples of retrieving the user-generated ADC access token. Seems like this should work:</p>
<pre><code>from google.auth import default
credentials = default()
access_token = credentials.token
</code></pre>
<p>I'm basing this off the following code that uses Service Accounts and works fine:</p>
<pre><code>from os import environ
from google.oauth2.service_account import Credentials
from google.auth.transport.requests import Request
ADC_VAR = 'GOOGLE_APPLICATION_CREDENTIALS'
SCOPES = ['https://www.googleapis.com/auth/cloud-platform']
credentials = Credentials.from_service_account_file(environ.get(ADC_VAR), scopes=SCOPES)
credentials.refresh(Request())
access_token = credentials.token
</code></pre>
| <python><google-cloud-platform><oauth-2.0><google-oauth> | 2023-08-18 23:00:29 | 1 | 931 | John Heyer |
76,932,614 | 7,347,925 | How to make sure every unique label's count number larger than a value? | <p>I have a 1D array that contains labels beginning from 0. The goal is to make sure every unique label's count is >= a constant (e.g. 10). If not, merge the nearest label into it until the count is at least 10.</p>
<p>Here's an example:</p>
<pre><code>import random
import numpy as np
data = np.concatenate(([0]*5, [1]*10, [2]*15, [3]*10, [4]*12, [5]*5))
random.Random(4).shuffle(data)
print(data)
</code></pre>
<pre><code>array([2, 3, 4, 5, 2, 4, 2, 4, 2, 2, 0, 4, 1, 4, 4, 5, 4, 2, 1, 3, 1, 1,
5, 0, 3, 3, 2, 2, 4, 4, 3, 3, 4, 4, 1, 2, 5, 1, 2, 2, 3, 3, 1, 0,
2, 3, 5, 0, 0, 1, 1, 3, 2, 4, 1, 2, 2])
</code></pre>
<p>The logic should be like this:</p>
<p>Beginning from label <code>0</code>: because the count of <code>0</code> is 5 (< 10), merge <code>1</code> into <code>0</code> by replacing every <code>1</code> with <code>0</code>. Then label <code>0</code> has enough counts (15).</p>
<p>Then, continue to the next label <code>2</code> which meets the condition ...</p>
<p>The last label that needs to be merged is <code>5</code>, which should be replaced by <code>4</code>.</p>
<p>I came up with this method: loop through <code>np.unique(data)</code> and check the count with <code>np.bincount(data)</code>. However, this method is slow if we have a large <code>data</code> array.</p>
<pre><code>import random
import numpy as np

data = np.concatenate(([0]*5, [1]*10, [2]*15, [3]*10, [4]*12, [5]*5))
random.Random(4).shuffle(data)

counts = np.bincount(data)
new_label = 0
count_num = 0

for label in np.unique(data):
    if count_num > 0:
        data[data==label] = label-1
    count_num += counts[label]
    if count_num >= 10:
        count_num = 0
    if label == np.unique(data)[-1] and counts[label] < 10:
        data[data==label] = label-1
</code></pre>
<pre><code>array([2, 3, 4, 4, 2, 4, 2, 4, 2, 2, 0, 4, 0, 4, 4, 4, 4, 2, 0, 3, 0, 0,
4, 0, 3, 3, 2, 2, 4, 4, 3, 3, 4, 4, 0, 2, 4, 0, 2, 2, 3, 3, 0, 0,
2, 3, 4, 0, 0, 0, 0, 3, 2, 4, 0, 2, 2])
</code></pre>
<p>Any ideas for this kind of data merging? Thanks!</p>
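<p>A plain-Python version of the merging logic I'm describing (not vectorized — just to pin down the intended behaviour; folding a trailing short group backwards into the previous group is my reading of the last-label rule):</p>

```python
from collections import Counter

def merge_labels(data, min_count=10):
    counts = Counter(data)
    groups, group, acc = [], [], 0
    for lab in sorted(counts):          # walk labels in ascending order
        group.append(lab)
        acc += counts[lab]
        if acc >= min_count:            # group has enough members, close it
            groups.append(group)
            group, acc = [], 0
    if group:                           # trailing labels still below the threshold
        if groups:
            groups[-1].extend(group)    # fold them into the previous group
        else:
            groups.append(group)
    mapping = {lab: g[0] for g in groups for lab in g}
    return [mapping[x] for x in data]

data = [0]*5 + [1]*10 + [2]*15 + [3]*10 + [4]*12 + [5]*5
merged = merge_labels(data)
print(sorted(Counter(merged).items()))  # [(0, 15), (2, 15), (3, 10), (4, 17)]
```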
| <python><arrays><numpy> | 2023-08-18 21:14:37 | 2 | 1,039 | zxdawn |
76,932,588 | 1,804,173 | How to format a Fraction as string in Python? | <p>I'm experimenting with <a href="https://docs.python.org/3/library/fractions.html#fractions.Fraction" rel="nofollow noreferrer"><code>fractions.Fraction</code></a> and I'm wondering how to convert them into decimal strings, possibly specifying the precision. Say we have:</p>
<pre class="lang-py prettyprint-override"><code>from fractions import Fraction
a = Fraction.from_float(1e10)
b = Fraction.from_float(1e-10)
c = a + b
</code></pre>
<p><code>str(c)</code> returns a string based on the numerator/denominator of the fraction, i.e. <code>773712524553362671819689765245533627/77371252455336267181195264</code>.</p>
<p><code>f"{c:.20f}"</code> errors with <code>TypeError: unsupported format string passed to Fraction.__format__</code>.</p>
<p>Converting back to float via <code>float(c)</code> allows to obtain a string, but of course loses the precision again.</p>
<p>What I'm interested is to represent <code>c</code> as a string like <code>"1000000000.0000000001[...]"</code>. Is there a way to accomplish that?</p>
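<p>The closest I have come is routing through <code>decimal</code>, though I'm not sure it's the intended approach (I'm using exact fractions below as stand-ins, since <code>from_float(1e-10)</code> is not exactly 10<sup>-10</sup>):</p>

```python
from decimal import Decimal, getcontext
from fractions import Fraction

c = Fraction(10**10) + Fraction(1, 10**10)

getcontext().prec = 30                      # choose the precision explicitly
s = str(Decimal(c.numerator) / Decimal(c.denominator))
print(s)  # 10000000000.0000000001
```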
| <python> | 2023-08-18 21:08:30 | 1 | 27,316 | bluenote10 |
76,932,584 | 9,415,280 | loading model cause Unable to create dataset (name already exists) on re-trainning | <p>I train my model in multiple steps. When I train a first round, change the learning rate, and train again, everything works fine.</p>
<p>but</p>
<p>When I train the first round, load the best-epoch trained model, and change the learning rate, I get this error:</p>
<pre><code>File "E:\Anaconda3\envs\tf2.7_bigData\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "E:\Anaconda3\envs\tf2.7_bigData\lib\site-packages\h5py\_hl\group.py", line 161, in create_dataset
dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
File "E:\Anaconda3\envs\tf2.7_bigData\lib\site-packages\h5py\_hl\dataset.py", line 156, in make_new_dset
dset_id = h5d.create(parent.id, name, tid, sid, dcpl=dcpl, dapl=dapl)
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5d.pyx", line 87, in h5py.h5d.create
ValueError: Unable to create dataset (name already exists)
</code></pre>
<p>This is the way I load my model and change the learning rate:</p>
<pre><code>model = tensorflow.keras.models.load_model('resultats/' + str(bv_num) + '_convergeance_' + nom_exp + '.h5')
optimizer.lr.assign(learning_rates[1])
model.compile(optimizer=optimizer, loss='mse')
</code></pre>
<p>The optimizer was created earlier this way:</p>
<pre><code>optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
</code></pre>
<p>I tried removing the learning-rate change and the error didn't disappear, so it seems to be the loading of the model that creates the problem.</p>
<p>I have seen a lot of discussion about similar errors and the need to give each layer a unique name (which I did), but without loading the model everything works great, so I imagine this is not the problem...</p>
| <python><tensorflow><keras> | 2023-08-18 21:07:59 | 1 | 451 | Jonathan Roy |
76,932,520 | 9,873,381 | How can I access the data stored in a VM on GoDaddy? | <p>I have <code>n</code> tables that I have stored in a VM on GoDaddy. Currently, I use MySQL-Front to access the data. However, understandably, this approach is not feasible for programmatic exploration and summarization of the data.</p>
<p>Hence, I would like to know how can I access this data using Python (or any other language) so that I can create some reports based on this data.</p>
<p>I have the host's IP address, port, username, password, and the name of the DB which I am trying to read. For MySQL-Front, this much information is enough to access the DB.</p>
<p>I tried to SSH into the DB using the aforementioned username and password but got the error <code>Permission denied, please try again.</code></p>
| <python><database><ssh><report><godaddy-api> | 2023-08-18 20:53:44 | 1 | 672 | Skywalker |
76,932,405 | 4,478,556 | Can't connect to MySQL server on 'localhost:3306' in dockerised Django app using docker-compose | <p>I'm trying to build a Dockerized Django application with a MySQl database, but am unable to connect to the database when running <code>docker-compose up</code></p>
<p>My <code>Dockerfile</code> is:</p>
<pre class="lang-bash prettyprint-override"><code># Use the official Python image as the base image
FROM python:3.10
# Set environment variables
ENV PYTHONUNBUFFERED 1
# Set the working directory inside the container
WORKDIR /app
# Copy the requirements file and install dependencies
COPY requirements.txt /app/
RUN pip install -r requirements.txt
CMD ["sh", "-c", "python manage.py migrate"]
# Copy the project files into the container
COPY . /app/
</code></pre>
<p>And <code>docker-compose.yml</code> looks like:</p>
<pre class="lang-bash prettyprint-override"><code>version: '3'

services:
  db:
    image: mysql:8
    ports:
      - "3306:3306"
    environment:
      - MYSQL_DATABASE='model_db'
      - MYSQL_USER='root'
      - MYSQL_PASSWORD='password'
      - MYSQL_ROOT_PASSWORD='password'
    volumes:
      - /tmp/app/mysqld:/var/run/mysqld
      - ./db:/var/lib/mysql

  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    env_file:
      - .env
</code></pre>
<p>I'm using a <code>.env</code> file to define environment variables, and it is:</p>
<pre class="lang-bash prettyprint-override"><code>MYSQL_DATABASE=model_db
MYSQL_USER=root
MYSQL_PASSWORD=password
MYSQL_ROOT_PASSWORD=password
MYSQL_HOST=db
</code></pre>
<p>These are then loaded to the app <code>settings.py</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>BASE_DIR = Path(__file__).resolve().parent.parent

env = environ.Env()

if READ_DOT_ENV_FILE := env.bool("DJANGO_READ_DOT_ENV_FILE", default=True):
    # OS environment variables take precedence over variables from .env
    env.read_env(env_file=os.path.join(BASE_DIR, '.env'))

MYSQL_DATABASE = env('MYSQL_DATABASE', default=None)
MYSQL_USER = env('MYSQL_USER', default=None)
MYSQL_PASSWORD = env('MYSQL_PASSWORD', default=None)
MYSQL_ROOT_PASSWORD = env('MYSQL_ROOT_PASSWORD', default=None)
MYSQL_HOST = env('MYSQL_HOST', default=None)

# Database
# https://docs.djangoproject.com/en/4.2/ref/settings/#databases

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'MYSQL_DATABASE': MYSQL_DATABASE,
        'USER': MYSQL_USER,  # Not used with sqlite3.
        'PASSWORD': MYSQL_PASSWORD,  # Not used with sqlite3.
        'MYSQL_ROOT_PASSWORD': MYSQL_ROOT_PASSWORD,
        'HOST': MYSQL_HOST,
        'PORT': 3306,
    },
}
</code></pre>
<p>When I run <code>docker-compose up</code> it errors out with <code>_mysql_connector.MySQLInterfaceError: Can't connect to MySQL server on 'db:3306' (99)</code></p>
<p>Any suggestions?</p>
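<p>One thing I've come across while debugging (not sure it's the fix): <code>depends_on</code> in this form only waits for the <code>db</code> container to start, not for MySQL inside it to accept connections. A healthcheck-based variant I've been considering, sketched from the Compose docs:</p>

```yaml
services:
  db:
    image: mysql:8
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      retries: 10
  web:
    depends_on:
      db:
        condition: service_healthy
```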
| <python><mysql><django><docker><docker-compose> | 2023-08-18 20:26:58 | 2 | 649 | Cole Robertson |
76,932,375 | 2,185,035 | Matching end of string or ; in regex | <p>Why does the following regex not match the last occurrence of the search word?</p>
<pre><code>(?i)(^|,|;)\s*old\s*($|;)
</code></pre>
<p>This matches the first and second <code>old</code>, but not the third:</p>
<pre><code>One,old;Two,old;old
^ ^ ^
match match \_ no match
</code></pre>
<p>However, if I remove the last <code>;</code> (<code>(?i)(^|,|;)\s*old\s*$</code>), it does match the last occurrence.</p>
<p>P.S. I'm using Python.</p>
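<p>A runnable reproduction of what I'm seeing — my current suspicion is that the second match consumes the <code>;</code> that the third <code>old</code> would need:</p>

```python
import re

pattern = r'(?i)(^|,|;)\s*old\s*($|;)'
s = 'One,old;Two,old;old'

matches = [m.group(0) for m in re.finditer(pattern, s)]
print(matches)  # [',old;', ',old;'] -- only two matches, the final 'old' is missed
```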
| <python><regex> | 2023-08-18 20:20:54 | 1 | 372 | jak123 |
76,932,276 | 10,858,691 | Two consecutive increase in values starts a series of True values until we run into two consecutive False values which starts False values | <p>This is the dataframe</p>
<pre><code> Date Ratio
0 1993-01-29 0.44
1 1993-02-01 0.44
2 1993-02-02 0.45
3 1993-02-03 0.44
4 1993-02-04 0.44
5 1993-02-05 0.56
6 1993-02-08 0.59
7 1993-02-09 0.58
8 1993-02-10 0.57
9 1993-02-11 0.54
10 1993-02-12 0.53
11 1993-02-16 0.47
12 1993-02-17 0.42
13 1993-02-18 0.38
14 1993-02-23 0.35
15 1993-02-24 0.39
16 1993-02-25 0.43
17 1993-02-26 0.46
</code></pre>
<p>I am trying to create a new column 'signal', which has a default value of True.</p>
<p>The 'signal' column looks at the "Ratio" column for two consecutive decreases in value. When it finds them, all the values FOLLOWING the two consecutive decreases are set to False, UNTIL we get two consecutive increases, which stop the series of Falses.</p>
<p>We then look for two consecutive decreases further down the column, and again set the values to False until we run into two consecutive increases, which stop the signal.</p>
<p>Rinse and repeat for the entire column.</p>
<p>So for the above column:</p>
<p>Row 9 starts the run of False values because there is a decrease from row 6 to row 7 (.59 to .58) and another from row 7 to row 8 (.58 to .57).</p>
<p>The False run continues until row 17, which closes the signal because we have had two consecutive increases.</p>
<p>Output should be this:</p>
<pre><code> Signal
0 True
1 True
2 True
3 True
4 True
5 True
6 True
7 True
8 True
9 False
10 False
11 False
12 False
13 False
14 False
15 False
16 False
17 True
</code></pre>
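<p>A plain-Python loop that reproduces the expected output above (two of my own assumptions baked in: equal consecutive values reset both counters, and a flip only takes effect on the rows <em>after</em> the second decrease/increase):</p>

```python
ratio = [0.44, 0.44, 0.45, 0.44, 0.44, 0.56, 0.59, 0.58, 0.57,
         0.54, 0.53, 0.47, 0.42, 0.38, 0.35, 0.39, 0.43, 0.46]

signal = [True] * len(ratio)
state = True        # value carried forward into each new row
dec = inc = 0       # consecutive decrease / increase counters

for i in range(1, len(ratio)):
    if ratio[i] < ratio[i - 1]:
        dec, inc = dec + 1, 0
    elif ratio[i] > ratio[i - 1]:
        inc, dec = inc + 1, 0
    else:
        dec = inc = 0             # assumption: equal values reset both counters
    signal[i] = state             # current row keeps the pre-flip state
    if state and dec >= 2:        # two consecutive decreases -> later rows False
        state = False
    elif not state and inc >= 2:  # two consecutive increases -> later rows True
        state = True

print(signal == [True] * 9 + [False] * 8 + [True])  # True
```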
| <python><pandas> | 2023-08-18 20:00:47 | 2 | 614 | MasayoMusic |
76,931,999 | 13,916,049 | Split the index by nth delimiter | <p>I want to split the index of the <code>anno</code> dataframe at the 2nd occurrence of the <code>_</code> delimiter.</p>
<pre><code>anno.index = ','.join(str(v) for v in anno.index)
'_'.join(anno.index.split('_')[:2])
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [66], in <cell line: 2>()
1 anno = pd.read_csv(directory_path + "GSE159115_ccRCC_anno.csv.gz", compression="gzip", header=0, sep=",", on_bad_lines="warn", index_col=0)
----> 2 anno.index = ','.join(str(v) for v in anno.index)
3 '_'.join(anno.index.split('_')[:2])
File ~/.local/lib/python3.9/site-packages/pandas/core/generic.py:6002, in NDFrame.__setattr__(self, name, value)
6000 try:
6001 object.__getattribute__(self, name)
-> 6002 return object.__setattr__(self, name, value)
6003 except AttributeError:
6004 pass
File ~/.local/lib/python3.9/site-packages/pandas/_libs/properties.pyx:69, in pandas._libs.properties.AxisProperty.__set__()
File ~/.local/lib/python3.9/site-packages/pandas/core/generic.py:729, in NDFrame._set_axis(self, axis, labels)
723 @final
724 def _set_axis(self, axis: AxisInt, labels: AnyArrayLike | list) -> None:
725 """
726 This is called from the cython code when we set the `index` attribute
727 directly, e.g. `series.index = [1, 2, 3]`.
728 """
--> 729 labels = ensure_index(labels)
730 self._mgr.set_axis(axis, labels)
731 self._clear_item_cache()
File ~/.local/lib/python3.9/site-packages/pandas/core/indexes/base.py:7128, in ensure_index(index_like, copy)
7126 return Index(index_like, copy=copy, tupleize_cols=False)
7127 else:
-> 7128 return Index(index_like, copy=copy)
File ~/.local/lib/python3.9/site-packages/pandas/core/indexes/base.py:516, in Index.__new__(cls, data, dtype, copy, name, tupleize_cols)
513 data = com.asarray_tuplesafe(data, dtype=_dtype_obj)
515 elif is_scalar(data):
--> 516 raise cls._raise_scalar_data_error(data)
517 elif hasattr(data, "__array__"):
518 return Index(np.asarray(data), dtype=dtype, copy=copy, name=name)
File ~/.local/lib/python3.9/site-packages/pandas/core/indexes/base.py:5066, in Index._raise_scalar_data_error(cls, data)
5061 @final
5062 @classmethod
5063 def _raise_scalar_data_error(cls, data):
5064 # We return the TypeError so that we can raise it from the constructor
5065 # in order to keep mypy happy
-> 5066 raise TypeError(
5067 f"{cls.__name__}(...) must be called with a collection of some "
5068 f"kind, {repr(data)} was passed"
5069 )
TypeError: Index(...) must be called with a collection of some kind, 'SI_18854_AAACCTGCAAGTAGTA-1,SI_18854_AAACCTGTCCACTGGG-1,SI_18854_AAACCTGTCCTTTCTC-1,SI_18854_AAACGGGCAAACTGCT-1,SI_18854_AAACGGGCAAGGTTTC-1' was passed
</code></pre>
<p>Input:</p>
<pre><code>Index(['SI_18854_AAACCTGCAAGTAGTA-1', 'SI_18854_AAACCTGTCCACTGGG-1',
       'SI_18854_AAACCTGTCCTTTCTC-1', 'SI_18854_AAACGGGCAAACTGCT-1',
       'SI_18854_AAACGGGCAAGGTTTC-1'],
      dtype='object', name='cell', length=20748)
</code></pre>
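<p>To be explicit, this is the transformation I'm after, shown on plain strings:</p>

```python
idx = [
    'SI_18854_AAACCTGCAAGTAGTA-1',
    'SI_18854_AAACCTGTCCACTGGG-1',
    'SI_18854_AAACCTGTCCTTTCTC-1',
]

# keep everything before the 2nd '_', i.e. the first two underscore-separated parts
trimmed = ['_'.join(s.split('_')[:2]) for s in idx]
print(trimmed)  # ['SI_18854', 'SI_18854', 'SI_18854']
```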
| <python><pandas> | 2023-08-18 19:05:38 | 1 | 1,545 | Anon |
76,931,931 | 15,531,189 | Django rendering {% csrf_token %} as string when I set innerHTML | <p>I have a page in which I have created a modal. Now I'm creating form in the modal that can be generated dynamically on the basis of which button is clicked.</p>
<p>I have this JS function that sets the modal content with the form markup.</p>
<pre class="lang-js prettyprint-override"><code>function openModal(askField, platformName) {
    const modal = document.getElementById("modal");
    const modalContent = modal.querySelector('.modal-content');

    modalContent.innerHTML = `
        <h2 class="m-2 text-xl font-semibold text-center">${platformName} Authorization</h2>
        <form class="p-2" method="post" action="/authorize/${platformName.toLowerCase()}">
            {% csrf_token %}
            <div class="p-3 border border-gray-500 rounded-lg">
                <div class="m-2 flex justify-start space-x-2">
                    <label class="font-bold" for="ask_field">${titleCase(askField.replace("_", " "))}</label>
                    <input class="border" id="ask_field" name="ask_field" required>
                </div>
                <div class="flex justify-between">
                    <button class="px-4 rounded-lg text-white transition ease-in-out delay-150 bg-red-600 hover:-translate-y-1 hover:scale-110 hover:bg-indigo-500 duration-300" onclick="closeModal()">
                        Close
                    </button>
                    <button class="px-4 rounded-lg text-white transition ease-in-out delay-150 bg-blue-500 hover:-translate-y-1 hover:scale-110 hover:bg-indigo-500 duration-300">
                        Authorize
                    </button>
                </div>
            </div>
        </form>
    `;

    modal.classList.remove('hidden');
}
</code></pre>
<p>This modal works successfully. The only problem is that the <code>{% csrf_token %}</code> tag is not actually rendered by Django; it ends up in the page as a literal string.</p>
<p><a href="https://i.sstatic.net/Ys8rG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ys8rG.png" alt="enter image description here" /></a></p>
<p>I know this is happening because I'm adding this piece of HTML after the page has already been rendered by Django.</p>
<p>Is there any way that I can get this working?</p>
| <javascript><python><html><django> | 2023-08-18 18:51:41 | 1 | 343 | SHIVAM SINGH |
76,931,867 | 5,666,203 | Check intersection between each element in array of lists | <p>In Python, I have a list of different size arrays. I would like to know the intersection for all arrays contained in the list. For instance:</p>
<pre><code>import numpy as np
array0 = [0,1,2,3,4,5,6]
array1 = [0,2,3,5,6,7,8,9,10]
array2 = [4,5,6]
array3 = [5,6,7,8,9,10,11,12,13]
array_list = [array0, array1, array2, array3]
</code></pre>
<p>I want to use a function like <code>np.intersect1d</code> across all of the arrays in <code>array_list</code> and get the intersection across all data arrays. From a quick glance, the answer would be</p>
<pre><code>intersection = np.array([5,6])
</code></pre>
<p>I've tried using list comprehension, something like:</p>
<pre><code>np.intersect1d([arraylist[x] for x in np.arange(len(arraylist))]),
</code></pre>
<p>but Numpy expects 2 arguments instead of the one I'm providing. For a similar reason,</p>
<pre><code>list(filter(lambda x:x in [x for x in arraylist]))
</code></pre>
<p>will not work -- <code>filter</code> expects 2 arguments, and is only getting one.</p>
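<p>For reference, a set-based version pins down the expected result on the example data (I'd still prefer a NumPy-native answer):</p>

```python
from functools import reduce

array0 = [0, 1, 2, 3, 4, 5, 6]
array1 = [0, 2, 3, 5, 6, 7, 8, 9, 10]
array2 = [4, 5, 6]
array3 = [5, 6, 7, 8, 9, 10, 11, 12, 13]
array_list = [array0, array1, array2, array3]

# pairwise intersection folded across the whole list
intersection = sorted(reduce(lambda a, b: set(a) & set(b), array_list))
print(intersection)  # [5, 6]
```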
| <python><arrays><list-comprehension><intersection> | 2023-08-18 18:42:14 | 1 | 1,144 | AaronJPung |
76,931,624 | 15,804,190 | Pandas Testing with Pytest - see which rows are different | <p>Relatively new to actually using pytest / testing modules, so I may be missing something very obvious.</p>
<p>I'm using the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.testing.assert_frame_equal.html" rel="nofollow noreferrer">pandas.testing.assert_frame_equal()</a> method in my testing, and I would like to see which row is causing the test to fail when using pytest.</p>
<p>I have my tests set up and they are running, I am calling <code>pytest</code> from the terminal. The output I am getting is like this</p>
<pre><code>#other stuff, name of test that failed, etc
E AssertionError: DataFrame.iloc[:, 0] (column name="Name") are different
E
E DataFrame.iloc[:, 0] (column name="Name") values are different (1.23457 %)
E [index]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80]
E [left]: [Alice, Bob, ...
E [right]: [Alice, Bob, ...
pandas\_libs\testing.pyx:167: AssertionError
============================================== short test summary info ===============================================
FAILED tests/test_outputs.py::test_june - AssertionError: DataFrame.iloc[:, 0] (column name="Name") are different
</code></pre>
<p>It would help to know which row failed here; that is, which name in left didn't match the one in right. I found myself having to separately run code that did a line-by-line comparison and printed out names and indices...</p>
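<p>Right now that separate comparison looks roughly like this (plain lists standing in for the two name columns):</p>

```python
left = ['Alice', 'Bob', 'Carol', 'Dan']
right = ['Alice', 'Bob', 'Karol', 'Dan']

# collect (index, left value, right value) for every differing row
mismatches = [(i, l, r) for i, (l, r) in enumerate(zip(left, right)) if l != r]
print(mismatches)  # [(2, 'Carol', 'Karol')]
```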
<p>Is there a flag on pytest or pandas that can show this in the pytest printout?</p>
| <python><pandas><pytest> | 2023-08-18 17:57:51 | 0 | 3,163 | scotscotmcc |
76,931,562 | 22,128,188 | Multiple Except with Same Code to be executed in python: If we use logical OR it does not work any idea why? | <p>Any idea how we can write multiple except clauses on the same line, since the code to be executed for both exceptions is the same?</p>
<p>I tried logical OR but it did not work. My code works as expected if I write both exceptions separately.</p>
<pre><code>except FileNotFoundError:
    with open("./file.json", "w") as f:
        json.dump(new_data, f, indent=4)
except ValueError:
    with open("./file.json", "w") as f:
        json.dump(new_data, f, indent=4)
</code></pre>
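<p>To show what I mean by "logical OR did not work" — as far as I can tell the <code>or</code> expression collapses to just the first class before the handler is even considered:</p>

```python
print(FileNotFoundError or ValueError)        # <class 'FileNotFoundError'>

caught = None
try:
    try:
        int('nope')                           # raises ValueError
    except FileNotFoundError or ValueError:   # really just: except FileNotFoundError
        caught = 'inner handler'
except ValueError:
    caught = 'outer handler'                  # the ValueError escaped the inner try

print(caught)  # outer handler
```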
| <python> | 2023-08-18 17:47:20 | 2 | 396 | navalega0109 |
76,931,509 | 378,661 | Is there a Python parsing library that can parse a TOML-like format that specifies nested fields with [ParentHeader_ChildSection]? | <p>I want to parse an externally defined (and undocumented) file format in Python. It looks somewhat similar to TOML, but with different text styles, and no quoting. For example:</p>
<pre><code>[Schedule_Step122]
m_nMaxCurrent=0
m_szAddIn=Relay OFF
m_szLabel=06 - End Charge
m_uLimitNum=2
[Schedule_Step122_Limit0]
Equation0_szCompareSign=>=
Equation0_szRight=F_05_Charge_Capacity
Equation0_szLeft=PV_CHAN_Charge_Capacity
m_bStepLimit=1
m_szGotoStep=End Test
[Schedule_Step122_Limit1]
Equation0_szCompareSign=>=
Equation0_szLeft=PV_CHAN_Voltage
Equation0_szRight=3
m_bStepLimit=1
m_szGotoStep=End Test
</code></pre>
<p>(This is <a href="https://arbin.com/software/#create-and-manage-test-profiles" rel="nofollow noreferrer">Arbin's test schedule</a> format.)</p>
<p>I would like the parsed structure to be something like:</p>
<pre class="lang-json prettyprint-override"><code>"steps": [
{
"max_current": 0,
"add_in": RELAY_OFF,
"label": "09 - End Charge",
"limits": [
{
"equations": [
{
"left": PV_CHAN_CHARGE_CAPACITY,
"compare_sign": ">=",
"right": F_05_CHARGE_CAPACITY
}
],
"step_limit": 1,
"goto_step": END_TEST
},
{
"equations": [
{
"left": PV_CHAN_VOLTAGE,
"compare_sign": ">=",
"right": 6
}
],
"step_limit": 1,
"goto_step": END_TEST
}
]
}
]
</code></pre>
<p>The format seems superficially similar to <a href="https://toml.io/en/" rel="nofollow noreferrer">TOML</a>, including some of the nesting, but the string handling is different. I would also like to capture certain values as named constants.</p>
<p>I was also looking into defining a context-free grammar and using a lexer/parser like <a href="https://github.com/jszheng/py3antlr4book" rel="nofollow noreferrer">ANTLR</a>, <a href="https://www.dabeaz.com/ply/ply.html#ply_nn7" rel="nofollow noreferrer">PLY</a>, <a href="https://github.com/pyparsing/pyparsing?search=1" rel="nofollow noreferrer">pyparsing</a>, or <a href="https://github.com/lark-parser/lark/blob/master/docs/json_tutorial.md" rel="nofollow noreferrer">Lark</a>. I'm familiar with reading grammars in documentation, but haven't written or used one with a parser before. However, I don't know how one would represent the nesting structure (such as <code>Schedule_Step122_Limit0</code> being a member of <code>Schedule_Step122</code>) or the lack of guaranteed order among related keys (like <code>Equation0_szCompareSign</code>, <code>Equation0_szLeft</code>, etc).</p>
<p>Is there a generic parsing tool I could write a definition for, which would give me the parsed/structured output? Or is the best approach here to write custom parsing logic?</p>
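<p>For comparison, this is roughly the hand-rolled first pass I have in mind if no parser generator fits — a flat section/key scan; building the nesting by splitting header names on <code>_</code> would come afterwards:</p>

```python
import re

text = """\
[Schedule_Step122]
m_nMaxCurrent=0
m_szLabel=06 - End Charge
[Schedule_Step122_Limit0]
Equation0_szCompareSign=>=
Equation0_szLeft=PV_CHAN_Charge_Capacity
"""

sections = {}
current = None
for line in text.splitlines():
    line = line.strip()
    if not line:
        continue
    header = re.fullmatch(r'\[(.+)\]', line)
    if header:
        current = sections.setdefault(header.group(1), {})
    elif '=' in line and current is not None:
        key, value = line.split('=', 1)   # values are unquoted and may contain '='
        current[key] = value

print(sections['Schedule_Step122_Limit0']['Equation0_szCompareSign'])  # >=
```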
| <python><parsing><ply><lark-parser> | 2023-08-18 17:37:42 | 3 | 793 | markfickett |
76,931,489 | 13,102,905 | reading and decoding .dat file with python | <p>I have a .dat file, and I have no idea how it was created, what delimiter was used, or any other details about it.</p>
<p>I tried this:</p>
<pre><code>infile = open('x.dat', 'r')
for line in infile:
    variable, value = line.split('=')
    variable = variable.strip()
infile.close()
</code></pre>
<p>but i got:</p>
<pre><code>Traceback (most recent call last):
File "/home/gbcdev/py/read_dat.py", line 2, in <module>
for line in infile:
File "/usr/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc3 in position 2: invalid continuation byte
</code></pre>
<p>I would first like to understand what is in this .dat file, and then change it. Can anybody help me?</p>
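<p>So far the only diagnostic I know is reading the file in binary mode and eyeballing the bytes (the byte string below is a made-up stand-in for <code>open('x.dat', 'rb').read()</code>):</p>

```python
raw = b'ab\xc3cd'             # hypothetical stand-in for the file's raw bytes

print(raw.hex(' '))           # 61 62 c3 63 64
print(raw.decode('latin-1'))  # latin-1 maps every byte, so it never raises

try:
    raw.decode('utf-8')
except UnicodeDecodeError as exc:
    print(exc)                # same kind of error as in my traceback
```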
| <python> | 2023-08-18 17:32:51 | 0 | 1,652 | Ming |
76,931,482 | 6,936,582 | Group rows consecutively into near equal sum | <p>I want to create a group column attribute where each group's cumulative sum is as equal as possible between the groups. The row order must not change.</p>
<pre><code>import pandas as pd
import numpy as np
data = {"id":[1,2,3,4,5,6,7,8,9,10],
"area":[489.8,1099,1004,37.2,371.3,2390.5,2500,2500,1298.7,1125.7]}
df = pd.DataFrame(data)
id area
0 1 489.8
1 2 1099.0
2 3 1004.0
3 4 37.2
4 5 371.3
5 6 2390.5
6 7 2500.0
7 8 2500.0
8 9 1298.7
9 10 1125.7
numgroups = 3 #I want this number of groups
group_area_target = df.area.sum()/numgroups
#4272
# My goal is to create the group column:
#df["group"] = [1,1,1,1,1,2,2,3,3,3]
# id area group
# 0 1 489.8 1
# 1 2 1099.0 1
# 2 3 1004.0 1
# 3 4 37.2 1
# 4 5 371.3 1
# 5 6 2390.5 2
# 6 7 2500.0 2
# 7 8 2500.0 3
# 8 9 1298.7 3
# 9 10 1125.7 3
#sums = df.groupby("group")["area"].sum()
# group
# 1 3001.3
# 2 4890.5
# 3 4924.4
#The total deviation from target is (sums-group_area_target).abs().sum() 2541
</code></pre>
<p>I found <a href="https://stackoverflow.com/questions/41639804/group-a-column-such-that-its-sum-is-approximately-equal-in-each-group">this question and answer</a> which roughly suggests:</p>
<pre><code>a = df.area.values
shift_num = group_area_target*np.arange(1, numgroups)
idx = np.searchsorted(a.cumsum(), shift_num,'right').tolist()
for e, i in enumerate(idx,1):
    df.loc[i, "group"] = e
sums2 = df.groupby("group")["area"].sum()
#The total deviation is (sums2-group_area_target).abs().sum() 3695
df.group = df.group.bfill().fillna(df.group.max()+1)
id area group
0 1 489.8 1.0
1 2 1099.0 1.0
2 3 1004.0 1.0
3 4 37.2 1.0
4 5 371.3 1.0
5 6 2390.5 1.0
6 7 2500.0 2.0
7 8 2500.0 2.0
8 9 1298.7 3.0
9 10 1125.7 3.0
#The total deviation is (sums2-group_area_target).abs().sum() 3695
</code></pre>
<p>So the result is less optimal than the manual assignment of groups. How can I automate finding the best solution?</p>
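<p>Since three groups only need two cut points, the example is small enough to brute-force, which at least tells me what "best" looks like (this is my yardstick, not the solution I'd want for a large frame):</p>

```python
from itertools import combinations

areas = [489.8, 1099, 1004, 37.2, 371.3, 2390.5, 2500, 2500, 1298.7, 1125.7]
target = sum(areas) / 3

best_cuts, best_dev = None, float('inf')
for i, j in combinations(range(1, len(areas)), 2):   # two ordered cut points
    parts = [areas[:i], areas[i:j], areas[j:]]
    dev = sum(abs(sum(p) - target) for p in parts)
    if dev < best_dev:
        best_cuts, best_dev = (i, j), dev

print(best_cuts, round(best_dev, 1))  # (5, 7) 2541.5 -- the manual grouping
```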
| <python><pandas> | 2023-08-18 17:31:32 | 1 | 2,220 | Bera |
76,931,480 | 3,951,929 | Python generator for nested dictionary keys | <p>Let's say my dictionary is</p>
<pre class="lang-py prettyprint-override"><code>data = {
'a': {
'b': {
'c': {
'd': 1
}
}
}
}
</code></pre>
<p>I want to code something like:</p>
<pre class="lang-py prettyprint-override"><code>for k,v in iterator(data):
if some_conditions(k,v):
v = some_value()
</code></pre>
<p>For all the keys of the dictionary, I want to check if the key, value pair matches some condition and update the value in the dictionary if necessary. The idea is that I don't have to keep track of the chain of keys to set the new value.</p>
<p>So, why not an iterator?</p>
<p>The idea seemed good, I would loop through all the keys of the dictionary and update the value on the fly if conditions were met. But I'm having some trouble implementing this, and the reason may be my lack of experience with iterators.</p>
<p>I attempted to code something but it fails and I don't understand why:</p>
<pre><code>def gen(dic):
    for k, v in dic.items():
        if isinstance(v, dict):
            yield v
            gen(v)

g = gen(data)
next(g)
</code></pre>
<p>The final data is more complex: some values may be lists having dictionaries as members and I will also have to deal with those dictionaries.</p>
<pre class="lang-py prettyprint-override"><code>data = {
'a': {
'b': {
'c': [
{
'd': 1,
'e': 2
},
1,
{
'g': 1,
'h': 2
},
[
{
'i': {
'j': 1
}
}
]
]
}
}
}
</code></pre>
<p>But if I can figure it out with a simple dictionary, I can then code for the larger case. I am open to different solutions that do not use iterators but remain clean and Pythonic.</p>
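<p>The closest I've gotten to the shape I want is a <code>yield from</code> variant that yields the containing object together with the key, so the caller can reassign without tracking the chain of keys (yielding the parent is just one convention I've considered, not necessarily the Pythonic one):</p>

```python
def walk(node):
    """Yield (container, key, value) triples for every dict/list member."""
    if isinstance(node, dict):
        for k, v in node.items():
            yield node, k, v
            yield from walk(v)          # recurse *and* surface nested results
    elif isinstance(node, list):
        for i, v in enumerate(node):
            yield node, i, v
            yield from walk(v)

data = {'a': {'b': {'c': {'d': 1}}}}
for parent, key, value in walk(data):
    if key == 'd' and value == 1:
        parent[key] = 99                # mutate in place, no key chain needed

print(data)  # {'a': {'b': {'c': {'d': 99}}}}
```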
| <python><iterator><generator> | 2023-08-18 17:31:23 | 3 | 2,524 | Eduardo |
76,931,359 | 5,134,817 | Custom log level accurate filename | <h3>TLDR</h3>
<p>How can I get a <em>custom</em> logging level to accurately find out the pathname?</p>
<h3>Problem description</h3>
<p>When I try and add my own custom logging levels (e.g. for very verbose program tracing) I want to print the pathname. For most of the standard loggers, it is able to get this correctly, but for my own custom loggers, these all ultimately call <code>logging.log(levelNum, message, *args, **kwargs)</code> with <code>levelNum</code> as my custom level number. This means that only this line is ever printed, not the desired line.</p>
<p>A similar issue was raised here: <a href="https://stackoverflow.com/q/16122712/5134817">Python, logging: wrong pathname when extending Logger</a> and in <a href="https://stackoverflow.com/questions/16122712/python-logging-wrong-pathname-when-extending-logger#comment23029914_16122821">this comment</a> by "<a href="https://stackoverflow.com/users/172176/aya">Aya</a>" they say:</p>
<blockquote>
<p>actually, you can get the correct pathname, but it's much more complicated [...]</p>
</blockquote>
<p>I am currently using <a href="https://stackoverflow.com/a/14859558/5134817">the formatter approach outlined here</a>, and the <a href="https://stackoverflow.com/a/35804945/5134817">adding of the log levels outlined here</a>.</p>
<p><em><strong>Is there a way to get the desired behaviour?</strong></em></p>
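<p>One thing I did notice: since Python 3.8 the stdlib's <code>Logger._log</code> accepts a <code>stacklevel</code> argument that shifts which frame is reported as <code>pathname</code>/<code>lineno</code>. I suspect this is the "much more complicated" hook Aya alluded to, but I have not worked out how to thread it through <code>add_logging_level</code>:</p>

```python
import inspect
import logging

# Since Python 3.8, Logger._log (and the logger.debug/info/... wrappers)
# accept a 'stacklevel' argument controlling which stack frame is used
# for pathname/lineno in the emitted record.
sig = inspect.signature(logging.Logger._log)
print('stacklevel' in sig.parameters)  # True on Python >= 3.8
```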
<h3>Some example code</h3>
<p>The output from the below is:</p>
<pre><code>/Users/oliver/ClionProjects/testing/venv/bin/python3 /Users/oliver/ClionProjects/testing/misc/misc.py
writing some logs...
DEBUG: (/Users/oliver/ClionProjects/testing/misc/misc.py:90) something debug message
TRACE: (/Users/oliver/ClionProjects/testing/venv/lib/python3.11/site-packages/haggis/logs.py:211) something trace message
[2023-08-18 19:10:17,539]: WARNING: (/Users/oliver/ClionProjects/testing/misc/misc.py:92) something warning message
Process finished with exit code 0
</code></pre>
<p>Notably, the trace line is showing the wrong file path...</p>
<h4>The code:</h4>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
"""
Some wrapping around the default logging module.
"""
import logging
import sys
from haggis.logs import add_logging_level
from termcolor import colored
class MyFormatter(logging.Formatter):
"""A nice formatter for logging messages."""
line_formatting = f" {colored('(%(pathname)s:%(lineno)d)', 'light_grey')}"
timestamp_formatting = f"{colored('[%(asctime)s]: ', 'green')}"
trace_format = f"{colored('TRACE', 'cyan')}:{line_formatting} %(msg)s"
debug_format = f"{colored('DEBUG', 'magenta')}:{line_formatting} %(msg)s"
info_format = f"{colored('INFO', 'blue')}:{line_formatting} %(msg)s"
print_format = f"%(msg)s"
warning_format = f"{timestamp_formatting}{colored('WARNING', 'yellow')}:{line_formatting} %(msg)s"
error_format = f"{timestamp_formatting}{colored('ERROR', 'red')}:{line_formatting} %(msg)s"
critical_format = f"{timestamp_formatting}{colored('CRITICAL', 'red', attrs=['reverse', 'blink', 'bold'])}: {line_formatting} %(msg)s"
def __init__(self):
super().__init__(fmt=f"UNKNOWN: %(msg)s", datefmt=None, style='%')
def format(self, record):
# Save the original format configured by the user
# when the logger formatter was instantiated
format_orig = self._style._fmt
# Replace the original format with one customized by logging level
if record.levelno == logging.TRACE:
self._style._fmt = MyFormatter.trace_format
elif record.levelno == logging.DEBUG:
self._style._fmt = MyFormatter.debug_format
elif record.levelno == logging.INFO:
self._style._fmt = MyFormatter.info_format
elif record.levelno == logging.PRINT:
self._style._fmt = MyFormatter.print_format
elif record.levelno == logging.WARNING:
self._style._fmt = MyFormatter.warning_format
elif record.levelno == logging.ERROR:
self._style._fmt = MyFormatter.error_format
elif record.levelno == logging.CRITICAL:
self._style._fmt = MyFormatter.critical_format
else:
raise NotImplementedError(f"We don't know how to format logging levels: {record.levelno}")
# Call the original formatter class to do the grunt work
result = logging.Formatter.format(self, record)
# Restore the original format configured by the user
self._style._fmt = format_orig
return result
class StdOutFilter(logging.Filter):
def filter(self, rec):
return rec.levelno <= logging.PRINT
class StdErrFilter(logging.Filter):
def filter(self, rec):
stdout_filter = StdOutFilter()
return not stdout_filter.filter(rec)
def setup_console_output():
fmt = MyFormatter()
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setFormatter(fmt)
stdout_handler.addFilter(StdOutFilter())
logging.root.addHandler(stdout_handler)
stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setFormatter(fmt)
stderr_handler.addFilter(StdErrFilter())
logging.root.addHandler(stderr_handler)
add_logging_level('TRACE', logging.DEBUG - 5)
add_logging_level('PRINT', logging.WARNING - 5)
if __name__ == "__main__":
logging.root.setLevel(logging.TRACE)
setup_console_output()
print("writing some logs...")
logging.debug("something debug message")
logging.trace("something trace message")
logging.warning("something warning message")
</code></pre>
| <python><logging><path><formatting> | 2023-08-18 17:14:05 | 1 | 1,987 | oliversm |
76,931,353 | 17,473,587 | refer to a path in root of Django project from a template | <p>I have this structure:</p>
<ul>
<li>Project
<ul>
<li>common Application
<ul>
<li>templates
<ul>
<li>common
<ul>
<li>index.html</li>
</ul>
</li>
</ul>
</li>
<li>static
<ul>
<li>...</li>
</ul>
</li>
</ul>
</li>
<li>app1 Application
<ul>
<li>...</li>
</ul>
</li>
<li>app2 Application
<ul>
<li>...</li>
</ul>
</li>
<li>...</li>
<li>node_modules
<ul>
<li>swiper
<ul>
<li>swiper-bundle.min.css</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>I have written this code in index.html:</p>
<pre><code><link rel="stylesheet" href="../../../node_modules/swiper/swiper-bundle.min.css">
</code></pre>
<p>But it's not working and the swiper-bundle.min.css file is not found.</p>
<p>So, if I use the path <code>href="/static/swiper-bundle.min.css"</code> and <code>swiper-bundle.min.css</code> is stored in <code>root/common/static</code>, it works correctly.</p>
<p>How can I access the swiper-bundle.min.css which is located in <code>root/node_modules/swiper/swiper-bundle.min.css</code> ?</p>
| <python><django><django-templates> | 2023-08-18 17:12:26 | 1 | 360 | parmer_110 |
76,931,295 | 14,997,048 | How to annotate django queryest using subquery where that subquery returns list of ids from related model records | <p>I have one database model, let's say <code>ModelA</code>:</p>
<pre class="lang-py prettyprint-override"><code>class ModelA(models.Model):
field_1 = models.ManyToManyField(
"core.ModelB", blank=True
)
user = models.ForeignKey("core.User", on_delete=models.CASCADE)
type = models.CharField(choices=ModelATypes.choices)
</code></pre>
<p>The <code>field_1</code> of <code>ModelA</code> is a many to many field that references <code>ModelB</code>.</p>
<pre class="lang-py prettyprint-override"><code>class ModelB(models.Model):
uid = models.UUIDField(unique=True, default=uuid.uuid4, editable=False)
amount = models.IntegerField()
</code></pre>
<p>Now, I want to annotate the queryset to show the list of uid's from <code>field_1</code> based on some filter like get the list of <code>ModelB</code> uid's from <code>field_1</code> for all the ModelA records where user is in list of applicable user ids.</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>from django.db.models import Subquery
user_ids = [1, 2, 3]
result = ModelA.objects.filter(type="year").annotate(
model_b_uids_list=Subquery(
ModelA.objects.filter(user__id__in=user_ids).values_list("field_1__uid", flat=True)
)
)
</code></pre>
<p>This gives the error <code>more than one row returned by a subquery used as an expression</code>. I tried using the <code>ArrayAgg</code> function but that does not seem to be working.</p>
<p>I want the annotated field <code>model_b_uids_list</code> on ModelA to have a list of uid's from the field_1 from multiple applicable records on ModelA.</p>
<p>Can anyone help me understand what's going wrong here?</p>
| <python><django><postgresql><django-models><django-rest-framework> | 2023-08-18 17:01:28 | 3 | 1,214 | Ankush Chavan |
76,931,266 | 1,942,868 | Ordering by the column which is connected with foreign key | <p>I have classes like this, <code>Drawing</code> has the one <code>CustomUser</code></p>
<pre><code>class CustomUser(AbstractUser):
detail = models.JSONField(default=dict,null=True,blank=True)
NAM = models.CharField(max_length=1024,null=True,blank=True,)
class Drawing(SafeDeleteModel):
detail = models.JSONField(default=dict,null=True, blank=True)
update_user = models.ForeignKey(CustomUser,on_delete=models.CASCADE,related_name='update_user')
</code></pre>
<p>Now I want to get the list of <code>Drawing</code> sorted by <code>Drawing.update_user.NAM</code>.</p>
<p>Then I wrote code like this:</p>
<pre><code>class DrawingViewSet(viewsets.ModelViewSet):
queryset = m.Drawing.objects.all()
serializer_class = s.DrawingSerializer
pagination_class = DrawingResultsSetPagination
filter_backends = [filters.OrderingFilter]
ordering_fields = ['id','update_user.NAM']
ordering = ['-id']
</code></pre>
<p>Then I call it with these URLs:</p>
<p>http://localhost/api/drawings/?ordering=-update_user.NAM</p>
<p>http://localhost/api/drawings/?ordering=update_user.NAM</p>
<p>However, it was in vain; neither URL changes the ordering.</p>
<p>How can I make this ordering work?</p>
| <python><django> | 2023-08-18 16:56:51 | 1 | 12,599 | whitebear |
76,931,152 | 6,725,213 | How to display data in a TableView based on the currentIndex in a ListView when both share the same model? | <p>I have a <code>QListView</code> and a <code>QTableView</code>.<br />
<code>QListView</code> shows the <code>request_group</code> of <code>Suite</code>.</p>
<p>My question is: how can I have <code>QTableView</code> display all <code>RequestItem</code> values of the <code>RequestGroup</code> that is selected in <code>QListView</code>, while also letting the user edit, append and remove <code>RequestItem</code> rows in <code>QTableView</code>, with changes updating the values in <code>Suite</code>?</p>
<p>I've already searched through the documentation. I'm wondering if I should use a ProxyModel, such as <code>SortFilterProxyModel</code> or <code>IdentityProxyModel</code>, or if it would be better to use a <code>QTreeView</code>?</p>
<h4>Data schemas</h4>
<pre class="lang-py prettyprint-override"><code>@dataclass
class RequestItem:
method: Optional[str] = field(default=None, metadata={'QtHeaderName': 'Method'})
url: Optional[str] = field(default=None, metadata={'QtHeaderName': 'URL'})
@dataclass
class RequestGroup:
name: str
description: Optional[str] = field(default=None)
requests: [RequestItem] = field(default_factory=list)
@dataclass
class Suite:
groups: [RequestGroup] = field(default_factory=list)
</code></pre>
<h4>Example Data</h4>
<pre class="lang-py prettyprint-override"><code>suite = Suite()
r0_0 = RequestItem('GET', 'https://google.com')
r1_0 = RequestItem('GET', 'https://google.com')
r1_1 = RequestItem('GET', 'https://google.com')
r2_0 = RequestItem('GET', 'https://google.com')
r2_1 = RequestItem('GET', 'https://google.com')
r2_2 = RequestItem('GET', 'https://google.com')
g0 = RequestGroup('group0', 'desc0', [r0_0])
g1 = RequestGroup('group1', 'desc1', [r1_0, r1_1])
g2 = RequestGroup('group2', 'desc2', [r2_0, r2_1, r2_2])
suite.groups = [g0, g1, g2]
model_suite = ModelSuite(suite)
</code></pre>
<h4>Qt Model</h4>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtCore import Qt, QModelIndex, QAbstractTableModel
class ModelSuite(QAbstractTableModel):
def __init__(self, suite: Suite, parent=None):
super().__init__(parent)
self.suite = suite
def rowCount(self, parent=QModelIndex()):
# Set ListView rowCount to the length of requestGroup in suite
return len(self.suite.groups)
def columnCount(self, parent=QModelIndex()):
# Set TableView Header section count to the length of RequestItem properties
return len(fields(RequestItem))
def data(self, index: QModelIndex, role=None):
if not index.isValid():
return None
if role == Qt.ItemDataRole.DisplayRole or role == Qt.ItemDataRole.EditRole:
request_group: RequestGroup = self.suite.groups[index.row()]
return request_group.name
if role == Qt.ItemDataRole.ToolTipRole:
return self.suite.groups[index.row()].description
def setData(self, index: QModelIndex, value: Any, role: int = None):
if not index.isValid():
return False
if role == Qt.ItemDataRole.EditRole:
request_group: RequestGroup = self.suite.groups[index.row()]
request_group.name = value
self.dataChanged.emit(index, index)
return True
...
return False
def insertRows(self, row: int, count: int = 1, parent: QModelIndex = QModelIndex()) -> bool:
...
def removeRows(self, row: int, count: int, parent: QModelIndex = QModelIndex()) -> bool:
...
</code></pre>
| <python><qt><pyqt><pyqt5><pyqt6> | 2023-08-18 16:37:18 | 0 | 1,678 | Chweng Mega |
76,930,807 | 5,767,535 | Override a method by an attribute at instance level | <p>This is related to <a href="https://stackoverflow.com/q/394770/5767535">this question</a>; however, the methods described there do not work.</p>
<p>I have a certain class <code>Dog</code> with a method <code>woofwoof</code> and an attribute <code>woof</code>. There is a second method <code>bark</code> which returns <code>woofwoof()</code>. <strong>For a given instance of <code>Dog</code>, I want to redefine <code>bark</code> so that it returns the value of <code>woof</code>.</strong> Importantly, <code>bark</code> must remain a method given downstream code relies on calling <code>bark</code>.</p>
<p>See a minimum working example below.</p>
<pre class="lang-py prettyprint-override"><code>class Dog:
woof = 'woof'
def woofwoof(self):
return 'woof woof'
def woOof(self):
return 'woOof'
def bark(self):
return self.woofwoof()
def test(foo):
print(type(foo.bark))
print(foo.bark())
foo = Dog()
test(foo)
foo.bark = foo.woOof
test(foo)
foo.bark = foo.woof
test(foo)
</code></pre>
<p>The above code yields:</p>
<pre class="lang-py prettyprint-override"><code><class 'method'>
woof woof
<class 'method'>
woOof
<class 'str'>
TypeError: 'str' object is not callable
</code></pre>
<p>However the output I want from the last 2 lines is:</p>
<pre class="lang-py prettyprint-override"><code><class 'method'>
woof
</code></pre>
<p>That is preserve the type of <code>bark</code> as a method, but make it return the value of attribute <code>woof</code> (in particular so that if I change the value of <code>woof</code> later on, <code>bark</code> returns the updated value when called).</p>
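<p>For completeness, I also experimented with binding a plain function to the instance. It gets the value right, but as far as I can tell the attribute is then reported as a <code>function</code>, not a <code>method</code> (this snippet is just my experiment, not something I rely on):</p>

```python
class Dog:
    woof = 'woof'
    def bark(self):
        return 'woof woof'

foo = Dog()
foo.bark = lambda: foo.woof  # shadows the method on this instance only
print(foo.bark())      # woof
print(type(foo.bark))  # <class 'function'>, not <class 'method'>
```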
| <python><class><methods><attributes> | 2023-08-18 15:42:53 | 2 | 2,343 | Daneel Olivaw |
76,930,775 | 1,200,914 | How to concat two rows by index in a dataframe except for nan values? | <p>I have a dataset that looks like (with more columns and rows):</p>
<pre><code> id type value
0 104 0 7999
1 105 1 196193579
2 108 0 245744
3 NaN 1 NaN
</code></pre>
<p>Some rows have NaN values, and I already have the indexes for these rows. Now I would like to concat these rows with their previous row, except for NaN values. If I say <code>indexes=[3]</code>, then the new dataframe should be:</p>
<pre><code> id type value
0 104 0 7999
1 105 1 196193579
2 108 01 245744
</code></pre>
<p>How can I do this?</p>
<p><strong>NOTE</strong>: First row never will be in the list of indexes I have. The solution must be given my list of indexes. I also know the names of the columns where NaN values are, if necessary.</p>
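<p>To make the example reproducible, here is a brute-force loop that produces the output I want; I am hoping for something more idiomatic. (I am assuming the <code>type</code> column is stored as strings, so that <code>0</code> and <code>1</code> can concatenate to <code>01</code>.)</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [104.0, 105.0, 108.0, np.nan],
                   'type': ['0', '1', '0', '1'],
                   'value': [7999.0, 196193579.0, 245744.0, np.nan]})
indexes = [3]

for i in indexes:
    for col in df.columns:
        v = df.at[i, col]
        if pd.notna(v):
            # concatenate each non-NaN value onto the previous row
            df.at[i - 1, col] = f"{df.at[i - 1, col]}{v}"

df = df.drop(indexes).reset_index(drop=True)
print(df)
```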
| <python><pandas> | 2023-08-18 15:38:23 | 2 | 3,052 | Learning from masters |
76,930,666 | 19,588,737 | RuntimeWarning: Mean of empty slice return np.nanmean(a, axis, out=out, keepdims=keepdims), but not using nanmean anywhere, and a np.polyfit error | <p>I keep getting this <code>RuntimeWarning</code>, and I can't figure out where it's coming from, since I don't use <code>np.nanmean</code> anywhere in my program. I do use <code>np.mean</code>, though. Does anyone know if there's some other <code>numpy</code> function (e.g., <code>np.mean</code>) that calls <code>np.nanmean</code>, and may be throwing this warning?</p>
<p>I would just suppress it; however, each time the warning occurs, when I afterward go to fit a polynomial to the points being processed, I get the following error:</p>
<pre><code>C:\Users\M.Modeler\AppData\Local\miniconda3\envs\cloudtexture\lib\site-packages\numpy\lib\nanfunctions.py:1559: RuntimeWarning: Mean of empty slice
return np.nanmean(a, axis, out=out, keepdims=keepdims)
Intel MKL ERROR: Parameter 6 was incorrect on entry to DGELSD.
Traceback (most recent call last):
File "D:\Work_D\RD22-02_GSM\cloudTexture\cloudTexture.py", line 345, in <module>
thinned = ct.normalize(thinned,
File "D:\Work_D\RD22-02_GSM\cloudTexture\ctUtil.py", line 332, in normalize
poly = np.flip(np.polyfit(blnd[p], relrough[p], deg = polyorder))
File "<__array_function__ internals>", line 180, in polyfit
File "C:\Users\M.Modeler\AppData\Local\miniconda3\envs\cloudtexture\lib\site-packages\numpy\lib\polynomial.py", line 668, in polyfit
c, resids, rank, s = lstsq(lhs, rhs, rcond)
File "<__array_function__ internals>", line 180, in lstsq
File "C:\Users\M.Modeler\AppData\Local\miniconda3\envs\cloudtexture\lib\site-packages\numpy\linalg\linalg.py", line 2292, in lstsq
x, resids, rank, s = gufunc(a, b, rcond, signature=signature, extobj=extobj)
File "C:\Users\M.Modeler\AppData\Local\miniconda3\envs\cloudtexture\lib\site-packages\numpy\linalg\linalg.py", line 100, in _raise_linalgerror_lstsq
raise LinAlgError("SVD did not converge in Linear Least Squares")
numpy.linalg.LinAlgError: SVD did not converge in Linear Least Squares
</code></pre>
<p>For datasets where I don't get the mean of the empty slice warning, the <code>polyfit</code> is successful. So it seems the <code>nanmean</code> warning is telling me something useful, I just can't find what's causing it. Any ideas?</p>
<p>Additionally, any insight into the potential triggers of the above error would be helpful in tracking down its root cause.</p>
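<p>For reference, the warning itself is trivial to reproduce directly, so presumably something in my pipeline ends up calling <code>np.nanmean</code> on an empty or all-NaN slice:</p>

```python
import warnings
import numpy as np

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = np.nanmean(np.array([np.nan]))  # all-NaN input

print(result)             # nan
print(caught[0].message)  # Mean of empty slice
```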
| <python><numpy> | 2023-08-18 15:22:41 | 1 | 307 | bt3 |
76,930,658 | 8,791,568 | Python crontab script writing to two locations | <p>I have a python script that runs via crontab.</p>
<p>This is the crontab:</p>
<pre><code>0 14 * * * python3 /opt/app/script.py >> /var/log/app/script_log.txt
</code></pre>
<p>Previously, I had a print statement at the end of the script that would write to <code>/var/log/app/script_log.txt</code> but after switching to <code>logging</code>, nothing gets written there (script successfully writes to <code>data.log</code> - as set up in logging handlers).</p>
<p>This is my logging configuration:</p>
<pre><code>logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler("data.log"),
logging.StreamHandler()
]
)
</code></pre>
<p>Is there a way to set up the logger to write to both locations?</p>
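<p>One detail I noticed while debugging, in case it is relevant: a <code>StreamHandler()</code> created without arguments writes to <code>stderr</code>, not <code>stdout</code>, and the <code>>></code> in my crontab only redirects stdout:</p>

```python
import logging
import sys

handler = logging.StreamHandler()    # no stream argument given
print(handler.stream is sys.stderr)  # True: defaults to stderr
```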
| <python><python-3.x><logging><cron> | 2023-08-18 15:21:49 | 1 | 8,524 | Mate MrΕ‘e |
76,930,572 | 12,040,751 | Custom Pydantic JSON encoder ignored when composing BaseModels | <p>I have two Pydantic BaseModels (I am using version 1.10), one of them is used as the field of the other:</p>
<pre><code>from datetime import datetime
from pydantic import BaseModel
class Asd(BaseModel):
time: datetime
class Lol(BaseModel):
asd: Asd
</code></pre>
<p>I want the time in the <code>Asd</code> class to be serialized in a custom way, ie I want only the year, month and day. I have successfully achieved this like so</p>
<pre><code>class Asd(BaseModel):
time: datetime
class Config:
        json_encoders = {datetime: lambda dt: dt.strftime("%Y-%m-%d")}
</code></pre>
<pre><code>>>> asd = Asd(time=datetime.strptime("2020-01-01 00:00:00", "%Y-%m-%d %H:%M:%S"))
>>> asd
Asd(time=datetime.datetime(2020, 1, 1, 0, 0))
>>> asd.json()
'{"time": "2020-01-01"}'
</code></pre>
<p>When I serialize <code>Lol</code> I would expect the same behaviour to be enforced, however the custom encoding disappears</p>
<pre><code>>>> Lol(asd=asd).json()
'{"asd": {"time": "2020-01-01T00:00:00"}}'
</code></pre>
<p>How can I make <code>Lol</code> implement the json serializer of <code>Asd</code>?</p>
| <python><json><pydantic> | 2023-08-18 15:09:46 | 1 | 1,569 | edd313 |
76,930,554 | 494,134 | Return HTTP 404 response with no content-type? | <p>Is there a way in Django to return a HTTP 404 response without a content-type?</p>
<p>I tried <code>return HttpResponseNotFound()</code> from my view, but that sends a <code>Content-Type: text/html; charset=utf-8</code> header.</p>
| <python><django> | 2023-08-18 15:07:39 | 1 | 33,765 | John Gordon |
76,930,535 | 4,865,723 | Calculate difference between quarter periods in pandas | <p>I want to know the "difference" between two quarters.</p>
<pre><code>>>> a = pandas.Period('2021Q1', freq='Q')
>>> b = pandas.Period('2021Q2', freq='Q')
>>> c = pandas.Period('2021Q4', freq='Q')
>>> c - b
<2 * QuarterEnds: startingMonth=12>
>>> b - a
<QuarterEnd: startingMonth=12>
</code></pre>
<p>I am not sure how to interpret the result of these subtractions.</p>
<p>Here I would expect <code>True</code>.</p>
<pre><code>>>> (c - b) == 2
False
>>> (c - b) > 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: '>' not supported between instances of 'pandas._libs.tslibs.offsets.QuarterEnd' and 'int'
</code></pre>
<p>What is <code>QuarterEnd</code> exactly, and how do I handle it in a safe way?</p>
<p>I want to have the difference in quarters, not in days, years, hours or anything else. In fact I just want to know if one quarter directly follows the other: if the difference is greater than 1, it does not.</p>
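<p>The closest I have found is poking at the returned offset's <code>n</code> attribute, or comparing the periods' <code>ordinal</code> values, but I am not sure whether relying on either is safe:</p>

```python
import pandas as pd

b = pd.Period('2021Q2', freq='Q')
c = pd.Period('2021Q4', freq='Q')

print((c - b).n)                   # 2: the offset spans two quarters
print(c.ordinal - b.ordinal)       # 2: difference in quarter ordinals
print(c.ordinal - b.ordinal == 1)  # False: c does not directly follow b
```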
| <python><pandas> | 2023-08-18 15:05:17 | 2 | 12,450 | buhtz |
76,930,361 | 3,616,293 | Using PyTorch's DDP for multi-GPU training with mp.spawn() doesn't work | <p>I am trying to implement multi-GPU single machine training with PyTorch and DDP.</p>
<p>My dataset and dataloader look like this:</p>
<pre><code># Define transformations using albumentations-
transform_train = A.Compose(
[
# A.Resize(width = 32, height = 32),
# A.RandomCrop(width = 20, height = 20),
A.Rotate(limit = 40, p = 0.9, border_mode = cv2.BORDER_CONSTANT),
A.HorizontalFlip(p = 0.5),
A.VerticalFlip(p = 0.1),
A.RGBShift(r_shift_limit = 25, g_shift_limit = 25, b_shift_limit = 25, p = 0.9),
A.OneOf([
A.Blur(blur_limit = 3, p = 0.5),
A.ColorJitter(p = 0.5),
], p = 1.0),
A.Normalize(
# mean = [0.4914, 0.4822, 0.4465],
# std = [0.247, 0.243, 0.261],
mean = [0, 0, 0],
std = [1, 1, 1],
max_pixel_value = 255,
),
# This is not dividing by 255, which it does in PyTorch-
ToTensorV2(),
]
)
transform_test = A.Compose(
[
A.Normalize(
mean = [0, 0, 0],
std = [1, 1, 1],
max_pixel_value = 255
),
ToTensorV2()
]
)
class Cifar10Dataset(torchvision.datasets.CIFAR10):
def __init__(
self, root = "~/data/cifar10",
train = True, download = True,
transform = None
):
super().__init__(
root = root, train = train,
download = download, transform = transform
)
def __getitem__(self, index):
image, label = self.data[index], self.targets[index]
if self.transform is not None:
transformed = self.transform(image = image)
image = transformed["image"]
return image, label
def get_cifar10_data(
rank, world_size,
path_to_files, num_workers,
batch_size = 256, pin_memory = False
):
"""
Split the dataloader
We can split our dataloader with 'torch.utils.data.distributed.DistributedSampler'.
The sampler returns a iterator over indices, which are fed into dataloader to bachify.
The 'DistributedSampler' splits the total indices of the dataset into 'world_size' parts,
and evenly distributes them to the dataloader in each process without duplication.
'DistributedSampler' imposes even partition of indices.
You might set 'num_workers = 0' for distributed training, because creating extra threads in
the children processes may be problemistic. The author also found 'pin_memory = False' avoids
many horrible bugs, maybe such things are machine-specific.
"""
# Define train and test sets-
train_dataset = Cifar10Dataset(
root = path_to_files, train = True,
download = True, transform = transform_train
)
test_dataset = Cifar10Dataset(
root = path_to_files, train = False,
download = True, transform = transform_test
)
train_sampler = DistributedSampler(
dataset = train_dataset, num_replicas = world_size,
rank = rank, shuffle = False,
drop_last = False
)
test_sampler = DistributedSampler(
dataset = test_dataset, num_replicas = world_size,
rank = rank, shuffle = False,
drop_last = False
)
# Define train and test loaders-
train_loader = torch.utils.data.DataLoader(
dataset = train_dataset, batch_size = batch_size,
pin_memory = pin_memory, shuffle = False,
num_workers = num_workers, sampler = train_sampler,
drop_last = False
)
test_loader = torch.utils.data.DataLoader(
dataset = test_dataset, batch_size = batch_size,
pin_memory = pin_memory, shuffle = False,
num_workers = num_workers, sampler = test_sampler,
drop_last = False
)
return train_loader, test_loader, train_dataset, test_dataset
</code></pre>
<p>The rest of the code is:</p>
<pre><code>def setup_process_group(rank, world_size):
# function to setup the process group.
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "12344"
dist.init_process_group("nccl", rank = rank, world_size = world_size)
return None
def cleanup_process_group():
dist.destroy_process_group()
def main(rank, world_size):
# Setup process groups-
setup_process_group(rank = rank, world_size = world_size)
# Get distributed datasets and data loaders-
train_loader, test_loader, train_dataset, test_dataset = get_cifar10_data(
rank, world_size,
path_to_files, num_workers = 0,
batch_size = 256, pin_memory = False
)
# Initialize model and move to correct device-
model = ResNet50(beta = 1.0).to(rank)
"""
Wrap model in DDP
'device_id' tells DDP where your model is. 'output_device' tells DDP where to output.
In this case, it is rank.
'find_unused_parameters = True' instructs DDP to find unused output of the forward()
function of any module in the model
"""
model = DDP(module = model, device_ids = [rank], output_device = rank, find_unused_parameters = True)
# Define loss function and optimizer-
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
params = model.parameters(), lr = 1e-3,
momentum = 0.9, weight_decay = 5e-4
)
best_acc = 50
for epoch in range(1, 51):
# When using DistributedSampler, we have to tell which epoch this is-
train_loader.sampler.set_epoch(train_loader)
running_loss = 0.0
running_corrects = 0.0
for step, x in enumerate(train_loader):
optimizer.zero_grad(set_to_none = True)
output = model(x[0])
loss = loss_fn(output, x[1])
loss.backward()
optimizer.step()
# Compute model's performance statistics-
running_loss += loss.item() * x[0].size(0)
_, predicted = torch.max(output, 1)
running_corrects += torch.sum(predicted == x[1].data)
train_loss = running_loss / len(train_dataset)
train_acc = (running_corrects.double() / len(train_dataset)) * 100
print(f"epoch = {epoch}, loss = {train_loss:.5f}, acc = {train_acc:.3f}%")
if train_acc > best_acc:
best_acc = train_acc
print(f"saving best acc model = {best_acc:.3f}%")
# Save best model-
torch.save(model.module.state_dict(), "ResNet50_swish_best_trainacc.pth")
cleanup_process_group()
if __name__ == "__main__":
    # Say, we have 4 GPUs-
# world_size = 4
world_size = torch.cuda.device_count()
print(f"world size = {world_size}")
mp.spawn(
main, args = (world_size),
nprocs = world_size
)
</code></pre>
<p>On executing this, I get the error:</p>
<blockquote>
<p>Traceback (most recent call last): File
"/home/majumdar/Deep_Learning/PyTorch_DDP_Tutorial/PyTorch_DDP_Tutorial.py",
line 184, in
mp.spawn( File "/home/majumdar/anaconda3/envs/torch-cuda-new/lib/python3.10/site-packages/torch/multiprocessing/spawn.py",
line 240, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File
"/home/majumdar/anaconda3/envs/torch-cuda-new/lib/python3.10/site-packages/torch/multiprocessing/spawn.py",
line 198, in start_processes
while not context.join(): File "/home/majumdar/anaconda3/envs/torch-cuda-new/lib/python3.10/site-packages/torch/multiprocessing/spawn.py",
line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid) torch.multiprocessing.spawn.ProcessRaisedException:</p>
<p>-- Process 7 terminated with the following error: Traceback (most recent call last): File
"/home/majumdar/anaconda3/envs/torch-cuda-new/lib/python3.10/site-packages/torch/multiprocessing/spawn.py",
line 69, in _wrap
fn(i, *args) TypeError: Value after * must be an iterable, not int</p>
</blockquote>
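<p>While staring at the message <code>Value after * must be an iterable, not int</code>, I noticed this basic Python detail, in case it is related: parentheses alone do not make a tuple, a trailing comma does.</p>

```python
a = (4)    # just the int 4: parentheses only group
b = (4,)   # a one-element tuple: the comma makes the tuple
print(type(a))  # <class 'int'>
print(type(b))  # <class 'tuple'>
```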
| <python><pytorch><distributed-computing><torch> | 2023-08-18 14:43:57 | 1 | 2,518 | Arun |
76,930,360 | 1,991,502 | VSCode Microsoft extensions "Black Formatter" and "Pylint" failing to initialise "ModuleNotFoundError: No module named 'lsp_jsonrpc'" | <p>I am trying to install the pylint and black formatter extensions on VSCode, but the services fail to initialise. Both give an error like this</p>
<pre class="lang-none prettyprint-override"><code>2023-08-18 15:25:55.613 [info] Traceback (most recent call last):
File "c:\Users\djames\.vscode\extensions\ms-python.black-formatter-
2023.4.1\bundled\tool\lsp_server.py", line 38, in <module>
import lsp_jsonrpc as jsonrpc
ModuleNotFoundError: No module named 'lsp_jsonrpc'
</code></pre>
<p>In my json file I have the line which points to a 3.10 interpreter</p>
<pre class="lang-none prettyprint-override"><code>"python.defaultInterpreterPath": "C:\\Users\\djames\\Devel\\Python\\Python310\\python.exe",
</code></pre>
<p>and with this interpreter, I tried</p>
<pre class="lang-none prettyprint-override"><code>python -m pip install python-lsp_jsonrpc
</code></pre>
<p>which installed successfully but did not fix the issue.</p>
<p>My version of VSCode is 1.81.1. I am on Windows 10. My Python3.10 was installed using an embeddable zip file.</p>
| <python><visual-studio-code> | 2023-08-18 14:43:39 | 1 | 749 | DJames |
76,930,140 | 19,580,067 | Warning: tensorflow: Can save best model only with val_acc available, skipping | <p>I have an issue with tf.keras.callbacks.ModelCheckpoint.
As you can see in my log file, the warning always comes before the last iteration, where val_accuracy is calculated. Therefore, ModelCheckpoint never finds val_accuracy.</p>
<p><a href="https://i.sstatic.net/Ubcbm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ubcbm.png" alt="enter image description here" /></a></p>
<p>The validation losses are not even available in the log files.</p>
<p>These are the metrics from the log:
<code>loss: 2.4456 - table_mask_loss: 0.3486 - col_mask_loss: 0.3167 - table_mask_accuracy: 0.8739 - col_mask_accuracy: 0.8705</code></p>
<p>This is my code</p>
<pre><code>filepath = "/content/table_net.h5"
model_checkpoint = tf.keras.callbacks.ModelCheckpoint("table_net.h5", monitor = 'val_accuracy', save_best_only=True, verbose = 0, mode="min",save_freq='epoch')
es = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', mode='min', patience=5,)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=init_lr, epsilon=1e-8,),
loss=losses,
metrics=['accuracy'] )
EPOCHS = 5
VAL_SUBSPLITS = 30
VALIDATION_STEPS = len(X_test)//BATCH_SIZE//VAL_SUBSPLITS
history = model.fit(train_dataset,
epochs=EPOCHS,
steps_per_epoch=train_steps,
validation_data=test_dataset,
validation_steps=VALIDATION_STEPS,
callbacks=[model_checkpoint])
</code></pre>
| <python><tensorflow><machine-learning><keras><deep-learning> | 2023-08-18 14:14:00 | 0 | 359 | Pravin |
76,930,060 | 2,587,816 | Looking for a way to accept or reject a dictionary trail w/o excepting | <p>I want to do this:</p>
<pre><code> foo23 = base["foo23"]["subfoo"]["subsubfoo"]
print(foo23)
[2,3]
foo23 = base["noexist"]["nothereeither"]["nope"]
print(foo23)
None
</code></pre>
<p>I can't seem to accomplish this using <code>defaultdict</code> and specialized dictionaries.
The failed access of the first key can return <code>None</code>, but the subsequent subscripts then raise an exception because <code>None</code> is not subscriptable. Just wondering if this is possible.</p>
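<p>A minimal illustration of the failure mode with a plain dict:</p>

```python
base = {'foo23': {'subfoo': {'subsubfoo': [2, 3]}}}

print(base['foo23']['subfoo']['subsubfoo'])  # [2, 3]

first = base.get('noexist')  # returns None instead of raising
print(first)                 # None

try:
    first['nothereeither']['nope']
    subscriptable = True
except TypeError:
    subscriptable = False
print(subscriptable)  # False: None is not subscriptable
```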
| <python><dictionary><defaultdict> | 2023-08-18 14:03:31 | 2 | 5,170 | Jiminion |
76,930,031 | 8,510,149 | preserving original df index while doing groupby, transform and explode | <p>I have an issue with keeping indices after group by. I have a use case that requires a groupby, then I want to feed the values in each group to a function, input is a list and the function returns a list. The next step is to explode the list and redistribute the updated values.</p>
<p>However, I can't join this new df back to the original one since the indices are different. Is there a way to preserve the indices and use them to reshuffle df1 according to the original index positions?</p>
<pre><code>def modify_list(values):
    modified_values = []
    for i in range(len(values)):
        if i != 0 and values[i] > 6:
            modified_values.append(0)
        else:
            modified_values.append(values[i])
    return modified_values
df = pd.DataFrame({'group':['A', 'B', 'A', 'A', 'A', 'B', 'B', 'B', 'A'],
'class':['s','s', 'l', 'me', 'mi', 'me', 'mi', 'l', 'ml'],
'value':[0.25, 0.05, 0.34, 0.15, 0.25, 0.45, 0.25, 0.25, 0.01]}
)
df1 = df.groupby('group')['value'].agg(list).transform(modify_list).explode().reset_index().rename(columns={'value':'value2'})
</code></pre>
<p>If I merge df1 and df here, the values will lose their relation to the class feature.</p>
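<p>A sketch of one way around the explode/reset_index round-trip entirely: <code>transform</code> returns a result aligned to the original index, so the modified values can be assigned straight back onto the original frame (shown on a reduced version of the data).</p>

```python
# Feed each group's values to the list function, then hand back a Series
# re-indexed with the group's original index so transform aligns it.
import pandas as pd

def modify_list(values):
    # same logic as in the question, condensed
    return [0 if i != 0 and v > 6 else v for i, v in enumerate(values)]

df = pd.DataFrame({'group': ['A', 'A', 'B'],
                   'class': ['s', 'l', 's'],
                   'value': [7, 8, 9]})

df['value2'] = df.groupby('group')['value'].transform(
    lambda s: pd.Series(modify_list(list(s)), index=s.index))
print(df)
```

Because the index never changes, <code>value2</code> stays next to <code>class</code> with no merge needed.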
| <python><pandas><transform> | 2023-08-18 13:58:54 | 1 | 1,255 | Henri |
76,929,998 | 21,305,238 | Use runtime type checking (isinstance) along with type hints | <p>I have a class which looks like this:</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
    def __init__(self, string: str) -> None:
        self._string = string

    def __add__(self, other: str) -> str:
        if not isinstance(other, str):
            return NotImplemented
        return f'{self._string}foo{other}'
</code></pre>
<p>Mypy <a href="https://mypy-play.net/?mypy=latest&python=3.11&flags=strict%2Cwarn-unreachable&gist=2eaea255883efaf1f72bee7695ed6737" rel="nofollow noreferrer">says</a> the <code>return NotImplemented</code> line is unreachable, which is reasonable since I already type-hinted <code>other</code> as a <code>str</code>. However, at runtime, <code>other</code> might not be a string, and if that's the case I would like to return <code>NotImplemented</code> so that Python would raise an exception unless <code>other</code> can handle the operation.</p>
<p>Is there a way, other than turning off <code>--warn-unreachable</code> and use a comment, to nicely let mypy know that no one would ever hear its rambling at runtime and that I need an explicit check?</p>
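<p>One possible approach (to the best of my understanding of mypy's behavior, worth verifying against your mypy version): annotate <code>other</code> as <code>object</code> and narrow with <code>isinstance</code>. The fallback branch is then reachable for mypy, callers passing a <code>str</code> still type-check, and mypy special-cases <code>return NotImplemented</code> inside binary dunder methods.</p>

```python
# Widening `other` to object makes the runtime check a legitimate
# type-narrowing step instead of dead code in mypy's eyes.
class Foo:
    def __init__(self, string: str) -> None:
        self._string = string

    def __add__(self, other: object) -> str:
        if not isinstance(other, str):
            return NotImplemented  # reachable: other is only narrowed here
        return f'{self._string}foo{other}'

print(Foo('a') + 'b')  # afoob
```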
| <python><mypy><python-typing> | 2023-08-18 13:53:41 | 0 | 12,143 | InSync |
76,929,890 | 1,795,924 | Directories not being copied on databricks notebook | <p>Considering this piece of code of mine:</p>
<pre><code># Define the base directory on DBFS and local directories
dbfs_base_dir = 'dbfs:/FileStore/tables/cnh_dataset/'
local_base_dir = '/tmp/cnh_dataset/'
# Create the local directories if they don't exist
os.makedirs(local_base_dir + 'train', exist_ok=True)
os.makedirs(local_base_dir + 'val', exist_ok=True)
# Copy data from DBFS to local directories
dbutils.fs.cp(dbfs_base_dir + 'train/', local_base_dir + 'train/', recurse=True) # Updated paths
dbutils.fs.cp(dbfs_base_dir + 'val/', local_base_dir + 'val/', recurse=True) # Updated paths
# Print the contents of the local directory after the copy operation
print("Contents of /tmp/cnh_dataset/train:", os.listdir('/tmp/cnh_dataset/train'))
# Check that the expected folder structure is present
print("Files in /tmp/cnh_dataset/train:", os.listdir(local_base_dir + 'train'))
print("Files in /tmp/cnh_dataset/val:", os.listdir(local_base_dir + 'val'))
</code></pre>
<p>All files from the dbfs file should be copied to the underlying OS of the notebook, and I know the path is correct because I've checked using the databricks CLI. But I still get this error:</p>
<pre><code>Contents of /tmp/cnh_dataset/train: []
Files in /tmp/cnh_dataset/train: []
Files in /tmp/cnh_dataset/val: []
FileNotFoundError: Couldn't find any class folder in /tmp/cnh_dataset/train.
</code></pre>
<p>Nothing is being copied, am I using <code>dbutils.fs.cp</code> wrong? I've read the documentation, it seems ok:</p>
<p><a href="https://docs.databricks.com/en/dev-tools/databricks-utils.html" rel="nofollow noreferrer">https://docs.databricks.com/en/dev-tools/databricks-utils.html</a></p>
| <python><databricks> | 2023-08-18 13:39:09 | 1 | 7,893 | Ericson Willians |
76,929,836 | 6,734,243 | How to get all the metadata that have the same name from a python package? | <p>I want to read metadata from a python package to feed a table with some information. The package is built using setuptools, so all urls are saved under the <code>[project.urls]</code> field in the pyproject.toml file.</p>
<p>My problem is that when using <code>importlib</code> from a python script I only get the first one:</p>
<pre><code>from importlib import metadata
metadata.metadata("pandas")["Project-URL"] # using pandas as an example
>>> 'homepage, https://pandas.pydata.org'
</code></pre>
<p>When in fact there are 3 of them:</p>
<pre class="lang-ini prettyprint-override"><code>[project.urls]
homepage = 'https://pandas.pydata.org'
documentation = 'https://pandas.pydata.org/docs/'
repository = 'https://github.com/pandas-dev/pandas'
</code></pre>
<p>Is there a way to get all these metadata as a dict or at least a list that I can parse ?</p>
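<p>A sketch of one likely route: the object returned by <code>metadata.metadata()</code> behaves like an <code>email.message.Message</code>, whose <code>get_all()</code> returns every value of a repeated header. The demonstration below builds such a Message from raw METADATA-style text so it is self-contained; against an installed package the equivalent call would be <code>metadata.metadata("pandas").get_all("Project-URL")</code>.</p>

```python
# Repeated "Project-URL" headers, as they appear in a wheel's METADATA file,
# read back with get_all() and split into a name -> url dict.
from email import message_from_string

meta = message_from_string(
    "Project-URL: homepage, https://pandas.pydata.org\n"
    "Project-URL: documentation, https://pandas.pydata.org/docs/\n"
    "Project-URL: repository, https://github.com/pandas-dev/pandas\n"
)
urls = dict(entry.split(", ", 1) for entry in meta.get_all("Project-URL"))
print(urls["repository"])  # https://github.com/pandas-dev/pandas
```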
| <python><python-importlib> | 2023-08-18 13:32:52 | 1 | 2,670 | Pierrick Rambaud |
76,929,607 | 8,412,665 | User inputs for controlling a plotly plot, using Shiny in Python | <p>I am trying to have user defined input to control sunburst plots. This is all coded in Python, and used both the Shiny and Plotly package. The reason for using these over dash is as I have worked with both of these in R, however this project is required to be in Python.</p>
<p>The idea is that, using numeric inputs, a user can edit the parameters that feed into a sunburst plot.
There will be multiple inputs, but the code below applies to just a single value, as I assume that any answer will be scalable.</p>
<pre><code>import plotly.graph_objs as go
import plotly.express as px
from shiny import App, reactive, render, ui, Inputs, Outputs, Session
from shinywidgets import output_widget, register_widget
import pandas as pd
def panel_box(*args, **kwargs):
    return ui.div(
        ui.div(*args, class_="card-body"),
        **kwargs,
        class_="card mb-3",
    )
app_ui = ui.page_fluid(
{"class": "p-4"},
ui.row(
ui.column(
4,
panel_box(
ui.input_numeric("FirstValue", "FirstValue", min = 0, value=2),
),
),
ui.column(
8,
output_widget("scatterplot"),
),
),
)
def server(input: Inputs, output: Outputs, session: Session):
    FirstValue = reactive.Value(2)

    @reactive.Effect
    @reactive.event(input.FirstValue)
    def _():
        FirstValue.set(input.FirstValue())

    scatterplot = go.FigureWidget(
        data=[
            go.Sunburst(
                labels = ["Eve", "Cain", "Seth", "Enos", "Noam", "Abel", "Awan", "Enoch", "Azura"],
                parents = ["", "Eve", "Eve", "Seth", "Seth", "Eve", "Eve", "Awan", "Eve" ],
                values = [2, 14, 12, 10, 2, 6, 6, 4, 4],
            ),
        ],
        layout={"showlegend": False},
    )

    @reactive.Effect
    def _():
        scatterplot.data[0].values = [FirstValue, 14, 12, 10, 2, 6, 6, 4, 4]

    register_widget("scatterplot", scatterplot)
app = App(app_ui, server)
</code></pre>
<p>This currently comes up with the error <code>Error in Effect: <shiny.reactive._reactives.Value object at 0x000002275FC35540> is not JSON serializable</code>.</p>
<p>I've tried several other approaches, many of which break the reactive property - this is the closest I have gotten.</p>
<p>How can I make the plot linked to the values the user defines?</p>
| <python><plotly><py-shiny> | 2023-08-18 13:03:28 | 1 | 518 | Beavis |
76,929,209 | 4,818,789 | json.dumps doesn't correctly encode emoji unicode | <p>We are getting a JSON file from a WhatsApp contact to process in AWS Glue.
The usernames contain a lot of emojis, and Glue fails to process the JSON each time.</p>
<p>We noticed that when calling json.dumps while creating the JSON file, some emojis are converted to unicode escapes.
For example <code>LE JOOCHAR π€πͺ</code> is converted to <code>LE JOOCHAR \U0001f90cπͺ</code></p>
<p>Because of the unicode <code>\U0001f90c</code> Glue doesn't succeed to parse the file.</p>
<p>We tried lot of solutions</p>
<pre><code>string_data = string_data.replace('\U0001f90c', '\u0001f90c')
</code></pre>
<p>Not working. The Unicode is not even replaced in the string.</p>
<pre><code>contact_response_obj = json.dumps(contact_response_obj, ensure_ascii=False).encode('utf8')
</code></pre>
<p>Not working, always have this unicode character</p>
<p>Please how could we do to remove or convert correctly this unicode ?</p>
<p>Thanks a lot.</p>
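<p>A sketch of what is likely going on (an interpretation of the symptoms, not a verified diagnosis of the Glue job): <code>\U0001f90c</code> is a Python-style escape, not a JSON one. JSON only knows <code>\uXXXX</code>, so characters outside the Basic Multilingual Plane must be written as a UTF-16 surrogate pair. Letting <code>json.dumps</code> escape with its default <code>ensure_ascii=True</code> produces exactly that, and the result parses back to the original emoji:</p>

```python
# With ensure_ascii=True (the default), non-ASCII characters are emitted
# as JSON \uXXXX escapes; astral-plane emoji become surrogate pairs,
# which every JSON parser understands.
import json

name = 'LE JOOCHAR \U0001f90c'       # ends with the pinched-fingers emoji
encoded = json.dumps(name)           # ensure_ascii=True is the default
print(encoded)                       # "LE JOOCHAR \ud83e\udd0c"
print(json.loads(encoded) == name)   # True: round-trips cleanly
```

In other words, dropping the <code>ensure_ascii=False</code> / manual <code>replace</code> attempts and emitting default-escaped JSON may be enough for Glue to parse the file.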
| <python><json><unicode> | 2023-08-18 12:08:50 | 1 | 1,341 | Teddy Kossoko |
76,929,190 | 876,592 | QWizard with custom widgets | <p>I implemented a small widget to allow users to select a file and show the path in a QLineEdit:</p>
<pre class="lang-py prettyprint-override"><code>class FileWidget(QWidget):
    fileChanged = pyqtSignal(str)
    filename = ""

    def __init__(self):
        super().__init__()
        self.setLayout(QHBoxLayout())
        self.__file_edit = QLineEdit()
        self.__file_edit.setEnabled(False)
        self.layout().addWidget(self.__file_edit)
        self.__file_button = QPushButton()
        self.__file_button.clicked.connect(self.__open_file_dlg)
        self.__file_button.setText("Select Template")
        self.layout().addWidget(self.__file_button)

    def __open_file_dlg(self, *args):
        selected_file = QFileDialog().getOpenFileName(self, "Select file", str(pathlib.Path.home().absolute()), "Documents (*.docx)")[0]
        self.__file_edit.setText(selected_file)
        self.filename = selected_file
        self.fileChanged.emit(selected_file)
</code></pre>
<p>Now, I'd like to use this widget in a QWizard. According to the documentation (<a href="https://doc.qt.io/qt-6/qwizard.html#setDefaultProperty" rel="nofollow noreferrer">https://doc.qt.io/qt-6/qwizard.html#setDefaultProperty</a>) I need to specify the property which holds the value as well as a signal that is emitted, when the property changes:</p>
<pre class="lang-py prettyprint-override"><code>wizard = QWizard()
wizard_page = QWizardPage()
wizard_page.setLayout(QFormLayout())
wizard.addPage(wizard_page)
widget = FileWidget()
wizard.setDefaultProperty(FileWidget.__name__, "filename", FileWidget.fileChanged)
wizard_page.layout().addRow("File Name:", widget)
wizard_page.registerField("filename*", widget)
wizard.show()
wizard.exec()
</code></pre>
<p>This displays everything correctly, however once I select a file, the QLineEdit in FileWidget receives the correct value, but it seems like the wizard is not notified.</p>
<p>I feel like I'm just missing one crucial step here. Would be glad if anybody shares their ideas.</p>
| <python><python-3.x><pyqt6><qwizard> | 2023-08-18 12:05:47 | 0 | 429 | Shelling |
76,928,948 | 4,494,781 | is there a way to have an optional argument and subparsers for argparse | <p>I would like to have my program handle commands (<code>gui</code>, <code>daemon</code>...) via subparsers, but also handle one optional positional <code>url</code> argument. The optional argument would only be used when the OS calls my program because the user clicked on some URI associated with my program.</p>
<p>This is a simplified example of what I would like to achieve:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument("url", nargs="?")
# global options
parser.add_argument("--password")
parser.add_argument("--verbose", action="store_true")
# subparser
subparsers = parser.add_subparsers(dest="cmd", help='sub-command help')
gui_parser = subparsers.add_parser("gui")
gui_parser.add_argument("--testnet", action="store_true")
daemon_parser = subparsers.add_parser("daemon")
daemon_parser.add_argument("subcommand", nargs="?", help="start, stop, status or commands added by plugins")
daemon_parser.add_argument("subargs", nargs="*", metavar="arg", help="additional arguments (used by plugins)")
url = 'ecash:sdfqsdf?amount=1337.42&message="monthly payment to supplier #42"'
args = parser.parse_args([url])
print(args.url, args.cmd, args.verbose, args.password)
args = parser.parse_args(["--verbose", url])
print(args.url, args.cmd, args.verbose, args.password)
args = parser.parse_args(["--verbose", "--password", "123", "gui", "--testnet"])
print(args.url, args.password, args.cmd, args.testnet)
args = parser.parse_args(["daemon", "start"])
print(args.url, args.password, args.cmd, args.subcommand)
args = parser.parse_args(["daemon", "show_label", "id00021"])
print(args.url, args.password, args.cmd, args.subcommand, args.subargs)
</code></pre>
<p>Unfortunately the optional <code>url</code> argument does not seem to work:</p>
<pre><code>$ python test.py
usage: test.py [-h] [--password PASSWORD] [--verbose] [url] {gui,daemon} ...
test.py: error: argument cmd: invalid choice: 'ecash:sdfqsdf?amount=1337.42&message="monthly payment to supplier #42"' (choose from 'gui', 'daemon')
</code></pre>
<p>Is there a way to do that, or are subparsers incompatible with additional arbitrary positional arguments?</p>
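<p>One workaround sketch (a heuristic, not a built-in argparse feature): decide up front whether the invocation is a URL call or a subcommand call, and build the parser accordingly. Argparse cannot mix an arbitrary optional positional with subparsers, because the first positional token is always matched against the subcommand choices.</p>

```python
# Crude dispatch: if any token names a known command, build the subparser
# variant; otherwise accept an optional positional url. (A url literally
# equal to "gui" or "daemon" would confuse this heuristic.)
import argparse

COMMANDS = {"gui", "daemon"}

def parse(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--password")
    parser.add_argument("--verbose", action="store_true")
    if any(token in COMMANDS for token in argv):
        sub = parser.add_subparsers(dest="cmd")
        gui = sub.add_parser("gui")
        gui.add_argument("--testnet", action="store_true")
        daemon = sub.add_parser("daemon")
        daemon.add_argument("subcommand", nargs="?")
        daemon.add_argument("subargs", nargs="*", metavar="arg")
    else:
        parser.add_argument("url", nargs="?")
    return parser.parse_args(argv)

args = parse(["--verbose", "ecash:sdfqsdf"])
print(args.url, args.verbose)     # ecash:sdfqsdf True
args = parse(["--password", "123", "gui", "--testnet"])
print(args.cmd, args.testnet)     # gui True
```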
| <python><argparse> | 2023-08-18 11:32:41 | 1 | 1,105 | PiRK |
76,928,930 | 5,838,180 | How to make subprocess in python print the outputs immediately? | <p>I have a simple function for executing terminal commands and it looks something like this:</p>
<pre><code>import subprocess
def func(command):
    subprocess.run(command, shell=True)

func('python my_python_program.py --par my_parameter_file.py')
</code></pre>
<p>The function indeed executes the command, but there should be some output/print statements coming from the program I am executing. The program takes about 10 min to execute and creates a lot of print statements. They get displayed when I just execute the command in a terminal, but <code>subprocess</code> doesn't display them.</p>
<p>How can I get the output to be printed out when using <code>subprocess</code>? Can you provide me with 2 different options - 1) if I want the output just printed on the screen, each line immediately, not everything when the command is completely done ; 2) if I want the output to be saved into a file, also immediately line after line? Tnx!</p>
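<p>A sketch covering both requests with <code>Popen</code>: read the child's stdout line by line as it is produced, print each line immediately, and optionally tee it into a file. (If the child itself buffers its output, running it with <code>python -u</code> helps; the <code>shlex.quote</code> used below is POSIX-style quoting.)</p>

```python
# Stream a shell command's output: print each line as it arrives and,
# when a logfile is given, also write and flush it immediately.
import shlex
import subprocess
import sys

def run_streaming(command, logfile=None):
    proc = subprocess.Popen(command, shell=True,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            text=True, bufsize=1)     # line-buffered text mode
    log = open(logfile, "w") if logfile else None
    try:
        for line in proc.stdout:      # yields lines as the child emits them
            print(line, end="")       # 1) immediate display
            if log:
                log.write(line)       # 2) immediate save
                log.flush()
    finally:
        if log:
            log.close()
    return proc.wait()

run_streaming(f'{shlex.quote(sys.executable)} -c "print(1); print(2)"')
```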
| <python><shell><subprocess> | 2023-08-18 11:30:34 | 1 | 2,072 | NeStack |
76,928,802 | 1,252,347 | Python - How to copy a DB record? | <p>Newbie to Python, Pandas and SQLAlchemy.</p>
<ol>
<li>I need to read a record from a table</li>
<li>Update a field ('N' to 'Y') for this original record</li>
<li>Changing some other fields of the original record</li>
<li>Save the record as new record in the table</li>
</ol>
<pre><code>def set_prj01_status(db: Session, ifu_infounit_id: str, statusDesc: str):
    logger.info(f"[set_prj01_status] set '{ifu_infounit_id}' to '{statusDesc}'")
    # read current status record for the id
    query = fetch_prj01_status(db=db, ifu_infounit_id=ifu_infounit_id)
    # print(query.statement)
    query_df = pd.read_sql(query.statement, query.session.bind)
    print(f"[set_prj01_status][fetch_prj01] count: {query_df.shape[0]}")
    # the original record (only one)
    record = query_df.iloc[0]
    prj_id = record[0]
    print(f"prj_id={prj_id}")
    # Update modified record
    record.FLAG_DELETED = 'Y'
    # update_prj01_status( db=db, record)
    db.query(PRJ01).filter(PRJ01.PRJ_ID == prj_id).update({'FLAG_DELETED': 'Y'})
    # New record to add - same record?
    record.PRJ01_CMS_PROJECTDATA_EID = None
    record.FLAG_DELETED = 'N'
    record.PRJ01_LEGENDPROCESSSTATUS = statusDesc
    print("-- newRecord:")
    print(record)
    # insert_prj01_status( db=db, new_record=record)
    # db.add(record)
</code></pre>
<p>In this way I simply have the two operations (update and insert) but I receive the error 'pandas.core.series.Series is not mapped'</p>
<p>I'm wondering if there is a way to build (and save) the new record starting from the original record, changing only some fields.</p>
<p>Thanks</p>
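<p>A sketch of the "copy a row, tweak a few fields, insert as new" pattern in plain SQLAlchemy ORM, with no pandas round-trip (so no unmapped Series ends up in <code>db.add</code>). The model below is a simplified stand-in for PRJ01, not the real table:</p>

```python
# Steps 1-4 from the question: load the record, flag the original as
# deleted, then detach the same object, clear its PK, change fields,
# and re-add it so SQLAlchemy inserts it as a brand-new row.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session, make_transient

Base = declarative_base()

class PRJ01(Base):
    __tablename__ = "prj01"
    PRJ_ID = Column(Integer, primary_key=True)
    FLAG_DELETED = Column(String)
    STATUS = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(PRJ01(FLAG_DELETED="N", STATUS="old"))
    session.commit()

    record = session.query(PRJ01).filter_by(FLAG_DELETED="N").one()
    record.FLAG_DELETED = "Y"        # step 2: flag the original
    session.flush()

    session.expunge(record)          # detach from the session
    make_transient(record)           # forget its database identity
    record.PRJ_ID = None             # let the DB assign a fresh PK
    record.FLAG_DELETED = "N"        # step 3: change some fields
    record.STATUS = "new"
    session.add(record)              # step 4: insert as a new row
    session.commit()

    print(session.query(PRJ01).count())  # 2
```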
| <python><pandas><sqlalchemy> | 2023-08-18 11:12:52 | 1 | 418 | SteMMo |
76,928,772 | 3,070,181 | How to decompress lzip in python | <p>I have a string that I have compressed in javascript using <a href="https://pieroxy.net/blog/pages/lz-string/index.html" rel="nofollow noreferrer">lz-string</a>.</p>
<p><strong>compress.js</strong></p>
<pre><code>var compressed = LZString.compress('abc');
</code></pre>
<p>I can decompress it in javascript using</p>
<pre><code>console.log(LZString.decompress(compressed));
</code></pre>
<p>I have copied the compressed string to the clipboard and I am attempting to decompress in python using <a href="https://pypi.org/project/lzip/" rel="nofollow noreferrer">lzip</a></p>
<p><strong>decompress.py</strong></p>
<pre><code>import lzip
# compressed is the contents of the clipboard from the javascript
compressed = 'βγδ'
compressed_bytes = str.encode(compressed)
print(compressed_bytes)
print(lzip.decompress(compressed_bytes))
</code></pre>
<p>I get the error</p>
<blockquote>
<p>RuntimeError: Lzip error: Header error</p>
</blockquote>
<p>Is this the right approach?</p>
<p>It appears that the compression implementation is different in lz-string from lzip</p>
<p>When I run</p>
<pre><code>import lzip
compressed = 'βγδ'
python_compressed = lzip.compress_to_buffer(str.encode('abc'))
print(f'{python_compressed=}')
compressed_bytes = str.encode(compressed)
print(f'{compressed_bytes=}')
</code></pre>
<p>I get</p>
<blockquote>
<p>python_compressed=b"LZIP\x01\x17\x000\x98\x88\xa4J\x8e\x9f\xff\xf6c\x80\x00\xc2A$5\x03\x00\x00\x00\x00\x00\x00\x00'\x00\x00\x00\x00\x00\x00\x00"</p>
</blockquote>
<p>and</p>
<blockquote>
<p>compressed_bytes=b'\xe2\x86\x82\xe3\x83\x86\xe4\x80\x80'</p>
</blockquote>
<p>So there is a mismatch somewhere</p>
| <python><lzip> | 2023-08-18 11:07:19 | 1 | 3,841 | Psionman |
76,928,770 | 6,168,231 | python dict: how to assign a value to a key which will raise an error if it is accessed through the key as a one-liner? | <p>What I'm looking for is to define a dict and assign initial values to keys. Furthermore, I want to define certain keys as errors, and I want these errors to be raised immediately if the key is used. E.g.</p>
<pre><code>d = {}
d["foo"] = "bar"
d["baz"] = "rat"
d["no-no"] = raise ValueError("This is a no-no") # this is invalid syntax, but should clarify what I'm looking for.
</code></pre>
<p>such that when somebody would call</p>
<pre><code>>>> d["foo"]
bar
>>> d["baz"]
rat
>>> d["no-no"]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: This is a no-no
</code></pre>
<p>Is it possible to do this as a one-liner? The only other solution I can think of is to create a subclass of <code>dict</code> and override the lookup method, but I'd like to avoid that and solve it as a one-liner instead.</p>
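<p>There is no literal one-line syntax for this with a plain dict, but a tiny subclass gets close: store exception <em>instances</em> as values and raise them on access, so each assignment stays a one-liner. A sketch (the asker hoped to avoid subclassing, so this is the minimal fallback):</p>

```python
# A dict whose __getitem__ raises any stored exception instance instead
# of returning it; assigning an "error key" is then a single line.
class RaisingDict(dict):
    def __getitem__(self, key):
        value = super().__getitem__(key)
        if isinstance(value, BaseException):
            raise value
        return value

d = RaisingDict()
d["foo"] = "bar"
d["baz"] = "rat"
d["no-no"] = ValueError("This is a no-no")   # the one-liner assignment

print(d["foo"])   # bar
try:
    d["no-no"]
except ValueError as exc:
    print(exc)    # This is a no-no
```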
| <python><python-3.x><dictionary><error-handling> | 2023-08-18 11:07:14 | 0 | 562 | mivkov |
76,928,719 | 6,930,340 | Squeeze rows containing missing values in multi-index dataframe | <p>Consider the following multi-index <code>pd.DataFrame</code> that has a number of missing values.</p>
<pre><code>import numpy as np
import pandas as pd
# Create multi-index
index = pd.MultiIndex.from_tuples(
[
("A", "X", "I"),
("A", "X", "I"),
("A", "Y", "I"),
("A", "Y", "II"),
("A", "Y", "I"),
],
names=["level_1", "level_2", "level_3"],
)
# Create dataframe
data = [[1, np.nan], [np.nan, 1], [np.nan, 1], [np.nan, 1], [1, np.nan]]
df = pd.DataFrame(data, index=index, columns=["column1", "column2"])
print(df)
column1 column2
level_1 level_2 level_3
A X I 1.0 NaN
I NaN 1.0
Y I NaN 1.0
II NaN 1.0
I 1.0 NaN
</code></pre>
<p>How can I squeeze the rows as much as possible? I am looking for the following result:</p>
<pre><code> column1 column2
level_1 level_2 level_3
A X I 1.0 1.0
Y I 1.0 1.0
II NaN 1.0
</code></pre>
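<p>One possible approach: group on the full index and take the first non-null value per column — <code>groupby(...).first()</code> skips NaNs, which is exactly the "squeeze" wanted here.</p>

```python
# Rebuild the frame from the question, then collapse duplicate index
# tuples by taking the first non-null value in each column.
import numpy as np
import pandas as pd

index = pd.MultiIndex.from_tuples(
    [("A", "X", "I"), ("A", "X", "I"), ("A", "Y", "I"),
     ("A", "Y", "II"), ("A", "Y", "I")],
    names=["level_1", "level_2", "level_3"],
)
df = pd.DataFrame(
    [[1, np.nan], [np.nan, 1], [np.nan, 1], [np.nan, 1], [1, np.nan]],
    index=index, columns=["column1", "column2"],
)

squeezed = df.groupby(level=df.index.names).first()
print(squeezed)
```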
| <python><pandas><dataframe><multi-index> | 2023-08-18 10:59:53 | 2 | 5,167 | Andi |
76,928,604 | 3,848,207 | How to make this clock app always on top? | <p>I have this clock app written in python. It runs successfully on Windows.</p>
<p>Here is the source code.</p>
<pre><code># Source: https://www.geeksforgeeks.org/python-create-a-digital-clock-using-tkinter/
# importing whole module
from tkinter import *
from tkinter.ttk import *
# importing strftime function to
# retrieve system's time
from time import strftime
# creating tkinter window
root = Tk()
root.title('Clock')
# This function is used to
# display time on the label
def time():
    string = strftime('%H:%M:%S %p')
    lbl.config(text=string)
    lbl.after(1000, time)
# Styling the label widget so that clock
# will look more attractive
lbl = Label(root, font=('calibri', 40, 'bold'),
background='purple',
foreground='white')
# Placing clock at the centre
# of the tkinter window
lbl.pack(anchor='center')
time()
mainloop()
</code></pre>
<p>This is how the clock app looks.</p>
<p><a href="https://i.sstatic.net/HwtdZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HwtdZ.png" alt="enter image description here" /></a></p>
<p>It works fine. However, I want to make the app always appear on top in Windows. How can I modify the code to make the app always appear on top?</p>
<p>I am using Windows 11.</p>
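<p>A sketch of the usual fix: the <code>-topmost</code> window attribute keeps a Tk window above all others on Windows, so one added line after creating the root should be enough. (The try/except below only exists so the sketch survives a headless session; the real app would just call <code>mainloop()</code>.)</p>

```python
# Set the -topmost window-manager attribute right after creating the root.
from tkinter import Tk, TclError

try:
    root = Tk()
except TclError:          # e.g. no display available
    root = None

if root is not None:
    root.title('Clock')
    root.attributes('-topmost', True)   # keep the window always on top
    topmost = root.attributes('-topmost')
    root.destroy()        # the real app would call mainloop() here instead
else:
    topmost = None
```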
| <python><tkinter> | 2023-08-18 10:45:09 | 3 | 5,287 | user3848207 |
76,928,544 | 11,594,202 | Upserting a dictionary with another dictionary in python | <p>Some attributes of objects in our stack are stored in JSON format. When a part of an object is edited in the front-end, the information should be updated in the back-end.</p>
<p>To accomplish this, I have written the following function:</p>
<pre><code>def save_update(d1: dict, d2: dict):
    for k, v in d1.items():
        if k in d2:
            if isinstance(v, dict):
                d2[k].update(v)
            elif isinstance(v, list):
                d2[k].extend(v)
    d1.update(d2)
    return d1
</code></pre>
<p>The problem here is that we run the risk of extending the list with duplicates whenever the function is called. In an ideal world this should not happen, but we want to prevent side-effects before they occur. An alternative would be to change the function like so:</p>
<pre><code>def upsert_dictionary(d1: dict, d2: dict):
    """updates d1 with the key-value pairs of d2 and
    checks for nested dicts and lists 1 deep
    """
    # loop over key-value pairs
    for k, v in d1.items():
        # if d2 contains this key we check for nested lists and dicts and perform an update
        if k in d2:
            if isinstance(v, dict):
                d2[k].update(v)
            # if the value is a list, we prevent duplicates by converting to a set
            elif isinstance(v, list):
                extended_list = d2[k] + v
                d2[k] = list(set(extended_list))
    d1.update(d2)
    return d1
</code></pre>
<p>This last solution seems fine, but it is going to throw an exception whenever the contents of the list cannot be converted to a set. This means either strict guidelines for the format are required (which goes against the point of using JSON in the first place) or an additional check is needed.</p>
<p>I can imagine others have tried a similar thing before, is there a better way to go about this problem?</p>
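<p>One possible sketch: deduplicate with <code>dict.fromkeys</code> (order-preserving) when the items are hashable, and fall back to a manual scan when they are not, so unhashable JSON values such as nested dicts no longer raise. The merge itself can also recurse so nested dicts one level down merge too:</p>

```python
# Order-preserving de-duplication that tolerates unhashable items, used
# by a recursive dict merge.
def dedup(items):
    try:
        return list(dict.fromkeys(items))   # fast path: hashable items
    except TypeError:                        # e.g. a list of dicts
        seen = []
        for item in items:
            if item not in seen:
                seen.append(item)
        return seen

def upsert(d1, d2):
    for key, value in d2.items():
        if isinstance(value, dict) and isinstance(d1.get(key), dict):
            upsert(d1[key], value)           # merge nested dicts
        elif isinstance(value, list) and isinstance(d1.get(key), list):
            d1[key] = dedup(d1[key] + value) # concatenate, drop duplicates
        else:
            d1[key] = value
    return d1

print(upsert({"tags": ["a", "b"], "meta": {"x": 1}},
             {"tags": ["b", "c"], "meta": {"y": 2}}))
# {'tags': ['a', 'b', 'c'], 'meta': {'x': 1, 'y': 2}}
```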
| <python><json><dictionary> | 2023-08-18 10:35:58 | 1 | 920 | Jeroen Vermunt |
76,928,445 | 12,275,675 | How to restart iteration regardless of error in python | <p>I have this code which produces a lot of errors that cause it to break out of the loop. These errors are due to coding issues in this code or to remote server(s) issues. Is there any way to restart the code block no matter what the error was? What I was thinking of was a while loop inside a while loop, but for some reason it still breaks out of the iteration.</p>
<pre><code>while True:
while True:
[CODE BLOCK]
</code></pre>
<p>Could you please advise how a code block can be forced to restart iteration no matter the error, or whether there is any way to skip any error and just move on.</p>
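<p>A sketch of the standard pattern: a nested loop cannot do this, but a <code>try</code>/<code>except</code> inside the loop body can — catch the error, log it, and let the loop come back around. (The simulated <code>code_block</code> below is a stand-in for the real work; note that <code>except Exception</code> deliberately does not swallow <code>KeyboardInterrupt</code>, so Ctrl-C still works, and real code would sleep/back off before retrying.)</p>

```python
# Retry the code block until it completes without raising.
import time
import traceback

failures = iter([RuntimeError("server down"), None])  # simulated outcomes

def code_block():
    outcome = next(failures)
    if outcome is not None:
        raise outcome          # first call fails, second succeeds
    return "done"

while True:
    try:
        result = code_block()
        break                  # finished cleanly, leave the loop
    except Exception:
        traceback.print_exc()  # log the error instead of crashing
        time.sleep(0)          # real code would back off here

print(result)  # done
```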
| <python><python-3.x><iteration> | 2023-08-18 10:21:23 | 1 | 1,220 | Slartibartfast |
76,928,336 | 3,992,990 | Python is decoding my Base64-String into a false String-Representation | <p>I have the following C# Code to create a Base64-String out of a String with german Umlauts:</p>
<pre><code>var file = @"C:\Work\Podcast\GehΓΆr\Folgen\2023-08-08 11-05-01.mkv";
var filePathBytes = Encoding.UTF8.GetBytes(file);
var encodedFilePathString = Convert.ToBase64String(filePathBytes);
</code></pre>
<p>I am then giving it over to Python to do some work on this video file with specific python modules like so:</p>
<pre><code>var startInfo = new ProcessStartInfo()
{
WindowStyle = ProcessWindowStyle.Hidden,
FileName = "cmd",
Arguments = $"/c cd {ConfigurationManager.AppSettings["PythonScriptPath"]} & {ConfigurationManager.AppSettings["PythonFilePath"]} -c \"import MyModule; MyModule.doWork('{encodedFilePathString}')\"",
UseShellExecute = false,
CreateNoWindow = false,
RedirectStandardOutput = true,
RedirectStandardError = true
};
</code></pre>
<p>And on Python-side I am (for now) trying to output the String to see how the encoding works:</p>
<pre><code>decodedFileName = base64.b64decode(encodedVideoFilePath).decode('utf-8')
the_encoding = chardet.detect(base64.b64decode(encodedVideoFilePath))['encoding']
print(the_encoding, flush=True)
print(encodedVideoFilePath, flush=True)
print(decodedFileName, flush=True)
print(os.path.exists(decodedFileName), flush=True)
</code></pre>
<p>And the Python-Output looks as following:</p>
<pre><code>Received from standard out: ISO-8859-9
Received from standard out: QzpcV29ya1xQb2RjYXN0XEdlaMO2clxGb2xnZW5cMjAyMy0wOC0wOCAxMS0wNS0wMS5ta3Y=
Received from standard out: C:\Work\Podcast\Geh÷r\Folgen\2023-08-08 11-05-01.mkv
Received from standard out: True
Received from standard out:
</code></pre>
<p>(This output is from my Process-Console and thus prefixed with "Received from standard out:")</p>
<p>The only real issue here is that the string of the MKV file isn't correctly displayed — in the print output, and further down in the python code where I need the string to be inserted somewhere else, this <code>÷</code>-char is still there.</p>
<p>But Python can find the File even with this falsy encoding. So what should I do to handle this UTF-8 encoded string correctly?</p>
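<p>A round trip showing that the Base64 transport itself is lossless (a plausible reading of the symptoms, since <code>os.path.exists</code> already returns True: the odd character is likely a Windows console code-page display artifact, not a decoding error):</p>

```python
# Encode the same path the C# side encodes, decode it back, and check
# that the umlaut survives intact. Any garbling happens only at display
# time, in the console's code page, not in the decoded string itself.
import base64

original = r"C:\Work\Podcast\Gehör\Folgen\2023-08-08 11-05-01.mkv"
encoded = base64.b64encode(original.encode("utf-8")).decode("ascii")
print(encoded)

decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded == original)  # True
```

If the console output matters, setting the console to UTF-8 (<code>chcp 65001</code>, or <code>PYTHONIOENCODING=utf-8</code>) should make the printed path render correctly.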
| <python><c#><python-3.x><windows><utf-8> | 2023-08-18 10:05:12 | 0 | 1,025 | Gadawadara |
76,928,323 | 14,860,526 | Combine coverage reports into one xml in python | <p>my python project is composed of several different packages</p>
<pre><code>main_dir
|- package_1
|- package_2
|_ package_3
</code></pre>
<p>i'm testing each package separately with pytest and pytest-cov</p>
<pre><code>python -m pytest package_1\tests --junitxml package_1\test_result.xml --cov package_1 --cov-branch --cov-report xml:package_1\test_coverage.xml
python -m pytest package_2\tests --junitxml package_2\test_result.xml --cov package_2 --cov-branch --cov-report xml:package_2\test_coverage.xml
python -m pytest package_3\tests --junitxml package_3\test_result.xml --cov package_3 --cov-branch --cov-report xml:package_3\test_coverage.xml
</code></pre>
<p>this produces separate <strong>test_coverage.xml</strong> and <strong>.coverage</strong> file for each package</p>
<p>i want to merge the test coverage files into one xml file</p>
<p>what i did:</p>
<pre><code>coverage combine package_1\.coverage package_2\.coverage package_3\.coverage
</code></pre>
<p>which works, generating a new <strong>.coverage</strong> file in my working directory, but then if I run:</p>
<pre><code>coverage xml .coverage
</code></pre>
<p>i get this error:</p>
<pre><code>Couldn't parse 'C:\Users\xxxx\Documents\git\main_dir\.coverage' as Python source: 'EOF in multi-line statement' at line 6580
</code></pre>
<p>Any idea on how I can achieve what i want? Is it possible to combine the xml files directly into one without passing through the .coverage files?</p>
<p>Many thanks</p>
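<p>A likely explanation of the error (worth verifying against your coverage.py version): the positional argument of <code>coverage xml</code> names modules or source files to <em>report on</em>, so <code>coverage xml .coverage</code> tries to parse the binary data file as Python source — hence the "Couldn't parse ... as Python source" message. After <code>coverage combine</code>, the combined <code>.coverage</code> data file in the current directory is picked up automatically, so the data file should not be passed at all:</p>

<pre><code>coverage combine package_1\.coverage package_2\.coverage package_3\.coverage
coverage xml -o coverage.xml
</code></pre>

<p>This produces a single <code>coverage.xml</code> from the merged data without ever merging the XML files directly.</p>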
| <python><coverage.py> | 2023-08-18 10:04:06 | 1 | 642 | Alberto B |
76,928,115 | 14,681,038 | is it possible to generate examples from PolymorphicProxySerializer using drf-spectacular | <p>I am using <code>drf-spectacular</code> to generate some swagger documentation. There is one endpoint that supports multiple request bodies, and I am using <code>PolymorphicProxySerializer</code> to describe this <code>oneOf</code> schema. I also see that by default one example is auto-generated for one of the serializers. I was wondering if it is possible to also have a second, automatically generated example for the second serializer.</p>
<p>This is my schema:</p>
<pre><code> @extend_schema(
request=PolymorphicProxySerializer(
"PatchDifferentProducts",
serializers=[create_update_single_product_serializer, create_update_portfolio_product_serializer],
resource_type_field_name=None,
),
responses={status.HTTP_200_OK: ""},
)
</code></pre>
<p>I want a dropdown for both serializers, so I can have an example value for each of them.
<a href="https://i.sstatic.net/g4NDG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g4NDG.png" alt="enter image description here" /></a></p>
<p>Could not find anything related to this in docs.</p>
| <python><django-rest-framework><drf-spectacular> | 2023-08-18 09:33:51 | 0 | 643 | Sharmiko |
76,928,107 | 1,991,502 | Creating a dataclass that generates an instance of another class as a default attribute. How can I satisfy this Pylint warning? | <p>Consider this python script</p>
<pre><code>from dataclasses import dataclass, field


class ClassA:
    def __init__(self):
        pass


@dataclass
class ClassB:
    class_a: ClassA = field(default_factory=lambda: ClassA())
</code></pre>
<p>Pylint warns that the <a href="https://pylint.readthedocs.io/en/latest/user_guide/messages/warning/unnecessary-lambda.html" rel="nofollow noreferrer">lambda might not be necessary</a>. Is there a cleaner alternative?</p>
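<p>A sketch of the cleaner alternative Pylint's <code>unnecessary-lambda</code> check is pointing at: since <code>ClassA()</code> takes no arguments, the class itself is already a zero-argument callable and can serve as the factory directly.</p>

```python
# Passing the class as the factory replaces lambda: ClassA() and still
# creates a fresh ClassA per ClassB instance.
from dataclasses import dataclass, field

class ClassA:
    def __init__(self):
        pass

@dataclass
class ClassB:
    class_a: ClassA = field(default_factory=ClassA)

b1, b2 = ClassB(), ClassB()
print(b1.class_a is b2.class_a)  # False: each instance gets its own ClassA
```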
| <python><pylint><python-dataclasses> | 2023-08-18 09:32:45 | 1 | 749 | DJames |
76,928,034 | 5,022,051 | Understand the behavior of `super.__init__()` with dynamic value in parent class | <p>I am using python 3.8 (same behavior in 3.9) and I have the below parent class (simplified to illustrate the example) where <code>date</code> is dynamically set when instantiating the class.</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timezone
import time


class Parent():
    def __init__(self, date=datetime.now(timezone.utc).timestamp()):
        self.date = date

    def date_id(self):
        return id(self.date)
</code></pre>
<p>Instantiating a child class as below returns the same id 10 times for the variable <code>self.date</code>. My assumption was that instantiating a new <code>Child</code> class would create a new <code>self.date</code> variable.</p>
<pre><code>class Child(Parent):
    def __init__(self):
        super().__init__()


for i in range(10):
    time.sleep(1)
    print(Child().date_id())
</code></pre>
<p>If I want to achieve the expected behavior I instead need to define my dynamic assignment in the <code>__init__</code> body as such</p>
<pre class="lang-py prettyprint-override"><code>class Parent():
    def __init__(self):
        self.date = datetime.now(timezone.utc).timestamp()
</code></pre>
<p>What is the mechanism in python that results in this behavior?</p>
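<p>The mechanism can be demonstrated without any class machinery: default argument values are evaluated once, when the <code>def</code> statement runs, not once per call — the classic mutable/dynamic-default pitfall. A minimal sketch:</p>

```python
# The default is computed a single time, at function-definition time;
# every later call without an argument reuses that frozen value. The
# standard idiom is a None sentinel computed inside the body.
import time

def stamp(when=time.time()):        # evaluated once, at definition
    return when

first = stamp()
time.sleep(0.01)
print(stamp() == first)             # True: both calls reuse the frozen value

def stamp_now(when=None):           # the standard idiom
    return time.time() if when is None else when

a = stamp_now()
time.sleep(0.01)
print(stamp_now() == a)             # False: computed fresh on each call
```

This is exactly why moving the assignment into the <code>__init__</code> body gives a new timestamp per instance.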
| <python><python-3.x><oop><inheritance> | 2023-08-18 09:21:54 | 1 | 665 | Teddy |
76,928,014 | 7,069,189 | Azure CosmosDB Trigger (Python) not getting triggered when new document is inserted | <p>I am unable to trigger a response when I add documents in Azure Cosmos DB. I have applied solutions from the following links
<a href="https://stackoverflow.com/questions/51199933/unable-to-run-cosmosdb-trigger-locally-with-azure-function-core-tools">Unable to run cosmosDB trigger locally with Azure Function Core Tools</a>
<a href="https://stackoverflow.com/questions/48342175/azure-functions-cosmosdbtrigger-not-triggering-in-visual-studio">Azure Functions: CosmosDBTrigger not triggering in Visual Studio</a></p>
<p>I am trying to create a Cosmos DB trigger that gets triggered when a new document is added to the collection. I created a resource group <code>appRG</code> and then added an Azure Function App and an Azure Cosmos DB.
I used the Python v1 programming model, not Python v2, and created the function. My <code>__init__.py</code> is as follows:</p>
<pre><code>import logging

import azure.functions as func


def main(documents: func.DocumentList) -> str:
    if documents:
        logging.info('Document id: %s', documents[0]['id'])
<p>This is the basic code that gets created when you create an Azure Function trigger. I followed this link: <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-trigger?tabs=python-v1%2Cin-process%2Cfunctionsv2&pivots=programming-language-python" rel="nofollow noreferrer">Azure CosmosDB Python Trigger</a>. I am using Functions 2.x, so the following <code>function.json</code> gets created:</p>
<pre><code>{
"scriptFile": "__init__.py",
"bindings": [
{
"type": "cosmosDBTrigger",
"name": "documents",
"direction": "in",
"leaseCollectionName": "leases",
"connectionStringSetting": "app_DOCUMENTDB",
"databaseName": "test",
"collectionName": "story",
"createLeaseCollectionIfNotExists": true
}
]
}
</code></pre>
<p>The <code>connectionStringSetting</code> "app_DocumentDB" starts with <code>AccountEndpoint</code> as explained in <a href="https://stackoverflow.com/questions/62872969/the-connection-string-is-missing-a-required-property-accountendpoint-error-whil">connection string</a>. Before deployment, I debugged and executed the function, and it was triggered. I got the errors during debugging as explained in this <a href="https://stackoverflow.com/questions/48862833/azure-functions-unable-to-convert-trigger-to-cosmosdbtrigger">link</a>, but when I executed the function it displayed the following result:</p>
<pre><code>Document id: sample
[2023-08-18T08:01:16.039Z] Executed 'Functions.CosmosTrigger1' (Succeeded, Id=8144c581-a6df-
4f93-a6bb-e5aeccd3f45b, Duration=94ms)
</code></pre>
<p>When I deploy the function to the function app, I also upload the settings, so under the Function App's Configuration → Application settings, <code>app_DocumentDB</code> is stored with the same value as shown in the <code>local.settings.json</code> file. <a href="https://github.com/Azure/Azure-Functions/issues/1206" rel="nofollow noreferrer">github issue</a></p>
<p>But when I insert a document into the story collection, nothing gets triggered: nothing is shown under the Monitor Invocations (I have waited for 5 minutes).
<a href="https://i.sstatic.net/e7siB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e7siB.png" alt="Monitor Invocation" /></a></p>
<p>I also tried adding the cosmosDB primary connection string into the app configuration and saving it in the variable <code>app_conn</code></p>
<pre><code>app_conn = mongodb://d-app:......................
</code></pre>
<p>and using that variable in
"connectionStringSetting":"app_conn" in the local.settings.json file before deployment.</p>
<p>One thing that I would like to mention: as per <a href="https://stackoverflow.com/questions/62408892/azure-function-trigger-not-working-for-a-different-resource-id-of-cosmosdb">Azure Function Trigger not working for a different resource id of cosmosdb</a>, my primary connection string in Azure Cosmos DB does not start with <code>AccountEndpoint</code>.</p>
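<p>As a quick sanity check on that point: the SQL (Core) API connection string the <code>cosmosDBTrigger</code> binding expects has the shape <code>AccountEndpoint=...;AccountKey=...;</code>, while a <code>mongodb://</code> string belongs to the MongoDB API and will not work for this binding. A small hypothetical helper (the endpoint and key below are placeholders, not real credentials):</p>

```python
def looks_like_sql_api_connection_string(conn_str: str) -> bool:
    # Parse "Key=Value;" segments and require the two fields a
    # SQL (Core) API connection string always carries.
    parts = dict(
        segment.split("=", 1)
        for segment in conn_str.rstrip(";").split(";")
        if "=" in segment
    )
    return "AccountEndpoint" in parts and "AccountKey" in parts

good = "AccountEndpoint=https://myaccount.documents.azure.com:443/;AccountKey=abc123==;"
bad = "mongodb://d-app:secret@myaccount.mongo.cosmos.azure.com:10255/"

print(looks_like_sql_api_connection_string(good))  # True
print(looks_like_sql_api_connection_string(bad))   # False
```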
<p>However, the function is still not getting triggered.</p>
<p>Please help: I have read the GitHub issues and the Azure documentation and followed most of the strategies mentioned on Stack Overflow. But since I am new to this, I may have missed some key configuration.</p>
<p>Any help will be appreciated. Thanks in advance.</p>
| <python><azure><azure-functions><azure-cosmosdb> | 2023-08-18 09:19:24 | 1 | 329 | Abhinav Choudhury |
76,927,998 | 11,858,026 | HTTP Error 409 while accessing registered dataset as a URI file (Azure SDKv2) | <p>In Azure ML I'm trying to read data from blob storage and register it as a <code>URI File</code>. This is needed to keep data versioning and, at the same time, (sort of) data lineage, because components require either a <code>URI File</code> or a <code>URI Folder</code>.</p>
<p>So, this is how I try to manage that:</p>
<pre class="lang-py prettyprint-override"><code># Setting the credentials
subscription_id = 'my_id'
resource_group = 'my_group'
workspace_name = 'my_workspace'
workspace = Workspace(subscription_id, resource_group, workspace_name)
# authenticate
credential = DefaultAzureCredential()
credential.get_token("https://management.azure.com/.default")
# Get a handle to the workspace
ml_client = MLClient(
credential=credential,
subscription_id=subscription_id,
resource_group_name=resource_group,
workspace_name=workspace_name,
)
# Waking up "lazy" client
data_asset = ml_client.data.get(name="coffee_data", version=1)
print(f"Data asset URI: {data_asset.path}")
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes
web_path = 'https://containername12345.blob.core.windows.net/my-container/raw_data/raw.csv'
raw_data = Data(
name="TESTESTEST",
path=web_path,
type=AssetTypes.URI_FILE,
description="Dataset for some work I do!",
tags={"user":"Egor"},
)
data = ml_client.data.create_or_update(raw_data)
print(
f"Dataset with name {data.name} was registered to workspace, the dataset version is {data.version}"
)
</code></pre>
<pre><code>Output:
Dataset with name TESTESTEST was registered to workspace, the dataset version is 4
</code></pre>
<p>Then I try to get this data, which is a <code>URI File</code> now.</p>
<pre class="lang-py prettyprint-override"><code>data_asset = ml_client.data.get(name="TESTESTEST", version = ml_client.data._get_latest_version('TESTESTEST').version)
print(f"Data asset URI: {data_asset.path}")
</code></pre>
<pre><code>Output:
Data asset URI: https://containername12345.blob.core.windows.net/my-container/raw_data/raw.csv
</code></pre>
<p>And here's the funny thing. If I follow this URL, which is clickable, I see this:</p>
<pre class="lang-xml prettyprint-override"><code>This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>PublicAccessNotPermitted</Code>
<Message>Public access is not permitted on this storage account. RequestId:abcde-12345 Time:2023-08-18T07:31:07.6896441Z</Message>
</Error>
</code></pre>
<p>This basically leads to a problem when I try to read data through this <code>URI File</code>.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_csv(data_asset.path)
</code></pre>
<pre><code>Output:
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
Cell In[8], line 1
----> 1 df = pd.read_csv(data_asset.path)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/pandas/io/parsers/readers.py:912, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, date_format, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options, dtype_backend)
899 kwds_defaults = _refine_defaults_read(
900 dialect,
901 delimiter,
(...)
908 dtype_backend=dtype_backend,
909 )
910 kwds.update(kwds_defaults)
--> 912 return _read(filepath_or_buffer, kwds)
... (this contains many lines)
HTTPError: HTTP Error 409: Public access is not permitted on this storage account.
</code></pre>
| <python><pandas><azure><azure-machine-learning-service><azureml-python-sdk> | 2023-08-18 09:16:27 | 0 | 333 | Egorsky |
76,927,688 | 11,154,036 | Fractional truncation into deadlock SSMS | <p>In short:
I need to write data for the past 8 years to a database per quarter. Every quarter will be written in 9 batches. When I try to run the script over the 32 quarters (8*4) it randomly errors at one point. I get many "Fractional truncation" errors with a single deadlock mixed in.</p>
<p>Since the data is significantly large, I can't find the row that errors. IT WORKS when I redo the exact batch that gave the "fractional truncation" errors and deadlock errors. So there is no way to narrow the batch down to the problem.</p>
<p>The process takes several days (an issue for another time) so I can't monitor until it breaks, manually redo the batch, filter out duplicates, check if everything is in there and restart the other quarters.</p>
<p>In detail:
In the SQL Server database, I'm reading data from Table A into Python. I'm predicting column X and column Y. I'm dumping the data in table B, which is table A + columns X, Y and Z (timestamp).</p>
<p>My analysis:
Since the column types of the overlapping columns in A and B are the same, these shouldn't give a problem. In Python, I check for nans, extremes, lengths, and I even regex the floats to see if those fit in a decimal datatype. So those columns should not be an issue. The timestamp is just a date stamp and can be written as well. The X and Y columns are floating values and are also controlled with regex to fit in the decimal(7,2).</p>
<p>The issue probably isn't the data itself, since the data can be written to the database when I redo the batch manually. What could be causing the fractional truncation? What could cause the deadlock error, and is it even relevant? I'm 99.9% sure nobody is reading from this table since I'm one of the few that has access to it, and it's on the develop server. Is it because, while I'm writing those batches, SQL Server is doing something in the background as I write the next batch to the table?</p>
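<p>One hedged mitigation worth trying, assuming the truncation really does come from float values carrying more fractional digits than the destination <code>decimal</code> columns can hold (e.g. an artifact like 12.300000000000001 going into <code>decimal(7,2)</code>): round those columns explicitly before <code>to_sql</code>, so the driver never has to truncate. The frame and the column/scale mapping below are illustrative placeholders:</p>

```python
import pandas as pd

# Hypothetical stand-in for one write batch; "W" and "G" mirror the
# decimal(7,2) / decimal(18,2) columns of the destination table.
batch = pd.DataFrame({"W": [12.300000000000001, 7.25], "G": [1.005, 2.675]})

# Round to the scale of each target decimal column so the ODBC driver
# receives values that already fit.
for col, scale in {"W": 2, "G": 2}.items():
    batch[col] = batch[col].round(scale)

print(batch)
```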
<p>Additional information:
The data I'm writing is around 8.5 million rows per quarter and around 14 columns. So nothing extreme.
Error:</p>
<pre><code>Exception has occurred: DBAPIError x
(pyodbc.Error) ('01S07', '[01S07] [Microsoft][ODBC Driver 17 for SQL Server] Fractional truncation (0) (SQLExecute); [01S07] [Microsoft][ODBC Driver 17 for SQL Server] Fractional truncation (0) (SQLExecute); [01S07] [Microsoft][ODBC Driver 17 for SQL Server] Fractional truncation (0) (SQLExecute); [01S07] [Microsoft][ODBC Driver 17 for SQL Server] Fractional truncation (0) (SQLExecute); [01S07] [Microsoft][ODBC Driver 17 for SQL Server] Fractional truncation (0) (SQLExecute); [01S07] [Microsoft][ODBC Driver 17 for SQL Server] Fractional truncation (0) (SQLExecute); [01S07] [Microsoft][ODBC Driver 17 for SQL Server] Fractional truncation (0) (SQLExecute); [01S07] [Microsoft][ODBC Driver 17 for SQL Server] Transaction (Process ID 517) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. (1205); (SQLExecute); [01S07] [Microsoft][ODBC Driver 17 for SQL Server] Fractional truncation (0) (SQLExecute); ...
</code></pre>
<p>Script to write to database:</p>
<pre><code>def write_to_db(
data,
table,
database,
cred=cred,
server=server,
schema=schema,
if_exists="append",
index=False,
chunksize=100,
):
"""
purpose: Write data to the SSMS database system.
input:
data -- Dataframe containing data to write to server
table -- Tablename to write to
database -- database to login to
cred -- file location of the login credentials
server -- SSMS server you want to operate on
schema -- Schema to use for writing
if_exists -- how to handle existing tables with the same name.
- Append means we just add new rows
- Replace means the full table is replaced.
- Truncate means all rows are deleted
and then we append new rows without deleting table.
index, chunksize -- Passed on to pandas method,
for documentation see:
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html
output:
None
"""
user, passw = get_info(cred)
cn_str = (
"Driver={ODBC Driver 17 for SQL Server};"
f"Server={server},1433;"
f"Database={database};"
f"UID={user};"
f"PWD={passw}"
)
connection_uri = (
"mssql+pyodbc:///?"
f"odbc_connect={urllib.parse.quote_plus(cn_str)}"
"&autocommit=true"
)
engine = sa.create_engine(connection_uri, fast_executemany=True)
# Check if table exists
if if_exists in ["truncate", "append"]:
with pyodbc.connect(cn_str) as con:
with con.cursor() as cursor:
if not cursor.tables(table=table, tableType="TABLE").fetchone(): # noqa
raise ValueError(
f"Object [{database}].[{schema}].[{table}] "
"does not exist" # noqa
)
# Truncate table, if required
if if_exists == "truncate":
with pyodbc.connect(cn_str) as con:
with con.cursor() as cursor:
query = f"truncate table [{database}].[{schema}].[{table}]"
cursor.execute(query)
if_exists = "append"
data.to_sql(
table,
engine,
index=index,
schema=schema,
if_exists=if_exists,
chunksize=chunksize,
)
return None
</code></pre>
<p>Python 3.10.10
sqlalchemy 1.4.47
Microsoft SQL Server 2019</p>
<p>Added DDL Table A (source table):</p>
<pre><code>SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [YourSchemaName].[YourSourceTable](
[A] [bigint] NOT NULL,
[B] [bigint] NULL,
[C] [varchar](25) NOT NULL,
[D] [int] NOT NULL,
[E] [int] NOT NULL,
[F] [decimal](18, 2) NOT NULL,
[G] [decimal](18, 2) NOT NULL,
[H] [tinyint] NOT NULL,
[I] [tinyint] NOT NULL,
[J] [varchar](6) NOT NULL,
[K] [varchar](4) NOT NULL,
[L] [int] NULL,
[M] [varchar](20) NULL,
[N] [varchar](30) NULL,
[O] [varchar](6) NULL,
[P] [varchar](50) NULL,
[Q] [decimal](15, 12) NULL,
[R] [decimal](15, 2) NULL,
[S] [bit] NOT NULL,
[T] [date] NOT NULL,
[U] [varchar](50) NOT NULL,
CONSTRAINT [PK_AnonymousTable] PRIMARY KEY CLUSTERED
(
[A] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [YourSchemaName].[YourSourceTable] ADD DEFAULT ('TypeA') FOR [U]
GO
</code></pre>
<p>Table B (destination table):</p>
<pre><code>USE [yourdatabase]
GO
/****** Object: Table [python].[AVM] Script Date: 18-8-2023 13:37:37 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [yourschema].[yourtable](
A [bigint] NOT NULL,
V [date] NOT NULL,
W [decimal](7, 2) NOT NULL,
X [int] NOT NULL,
Y [int] NOT NULL,
Z [int] NOT NULL,
H [tinyint] NOT NULL,
I [tinyint] NOT NULL,
G [decimal](18, 2) NOT NULL,
C [varchar](25) NOT NULL,
D [int] NOT NULL,
F [decimal](18, 2) NOT NULL,
J [varchar](6) NOT NULL,
K [varchar](4) NOT NULL,
ZZ [date] NOT NULL
) ON [PRIMARY]
GO
</code></pre>
| <python><sql-server><odbc><deadlock> | 2023-08-18 08:37:13 | 0 | 302 | Hestaron |
76,927,626 | 12,493,545 | How can I pass a parameter list to FastAPI using request? | <p>I send the list as a parameter:</p>
<pre class="lang-py prettyprint-override"><code>import requests
def test():
params = {"test_param": [1, 2, 3]}
response = requests.get("http://localhost:8000/test", params=params, timeout=20)
response_data = response.json()
print(response_data)
</code></pre>
<p>and handle everything with FastAPI</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
from fastapi import FastAPI
from fastapi.responses import JSONResponse
import uvicorn
app = FastAPI()
@app.get("/test")
def test(test_param: list):
    return JSONResponse(content={"message": test_param})
if __name__ == "__main__":
uvicorn.run("minimal:app", host="0.0.0.0", port=8000,
workers=2) # Use the on_starting event)
</code></pre>
<p>and I get the error <code>"GET /test?test_param=1&test_param=2&test_param=3 HTTP/1.1" 422 Unprocessable Entity</code>, but I thought that listing the parameter multiple times is exactly how FastAPI expects to receive lists...</p>
<p>My error might be that I am not using typing as described here, but imho that shouldn't influence the result as typing is ignored by python anyway: <a href="https://fastapi.tiangolo.com/tutorial/query-params-str-validations/#query-parameter-list-multiple-values" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/query-params-str-validations/#query-parameter-list-multiple-values</a></p>
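<p>The request side is in fact doing what the FastAPI docs describe: <code>requests</code> encodes a list-valued param by repeating the key, which is the same thing the standard library's <code>doseq=True</code> does. A quick check:</p>

```python
from urllib.parse import urlencode

# requests serializes {"test_param": [1, 2, 3]} by repeating the key,
# exactly as urlencode does with doseq=True.
params = {"test_param": [1, 2, 3]}
query = urlencode(params, doseq=True)
print(query)  # test_param=1&test_param=2&test_param=3
```

<p>So the 422 comes from the server side. Per the linked docs, a list-typed query parameter has to be declared explicitly, e.g. <code>test_param: list[int] = Query(...)</code>; FastAPI does read the type annotations (they are not ignored here), and a bare <code>list</code> parameter without <code>Query</code> is interpreted as a request body instead.</p>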
| <python><request><fastapi> | 2023-08-18 08:26:58 | 0 | 1,133 | Natan |
76,927,610 | 14,336,726 | TqdmWarning: IProgress not found | <p>I am operating in a virtual environment in Visual Studio Code, using Python 3.10.0. After importing libraries in Python I get the following odd message right after the cell:</p>
<pre><code>c:\Users\user12345\Desktop\Project\venv\lib\site-packages\tqdm\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
</code></pre>
<p>I go to the website that is mentioned in the error message. There it is instructed to <code>pip install ipywidgets</code> but when I run that command, I get another error message, this time in the terminal</p>
<pre><code>pip : The term 'pip' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify th
at the path is correct and try again.
At line:1 char:1
+ pip install ipywidgets
+ ~~~
+ CategoryInfo : ObjectNotFound: (pip:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
</code></pre>
<p>I don't understand what this is, because I'm sure I have pip installed. Then I checked in the command prompt whether I have pip, and there I get a message saying "Python is not found". So what is all this about?</p>
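<p>A hedged workaround for the missing-<code>pip</code> part: invoke pip as a module of the exact interpreter the notebook/venv uses, which sidesteps the PATH lookup that produced "The term 'pip' is not recognized". Sketch:</p>

```python
import subprocess
import sys

# sys.executable is the running interpreter, so this finds its pip even
# when neither "pip" nor "python" is on the shell's PATH.
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())

# The actual fix suggested by the TqdmWarning would then be:
#   subprocess.run([sys.executable, "-m", "pip", "install", "ipywidgets"])
```

<p>From a terminal the equivalent is <code>python -m pip install ipywidgets</code> (or the venv's <code>python.exe</code> by full path), which also avoids the "Python is not found" PATH issue as long as the venv is activated.</p>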
| <python><visual-studio-code> | 2023-08-18 08:24:26 | 1 | 480 | Espejito |
76,927,497 | 6,308,605 | How to parse dag_run.conf to a custom function using a custom operator | <p>This is my current code:</p>
<pre><code>def render_file_name(task_type, table_name):
if 'cleanup' == task_type:
        file_name = f"path/to/{table_name}_his_full_cleanup.sql"
else:
raise ValueError(f"Unknown task type: {task_type}")
return file_name
with DAG("load_data", template_searchpath=SCRIPTS_PATH) as dag:
cleanup_sql = render_file_name(task_type='cleanup', table_name='{{ dag_run.conf["table_name"] }}')
table_his_full_cleanup = MyCustomOperator(
task_id="table_full_cleanup",
name="table_full_cleanup",
sql=cleanup_sql,
parameters={
"env": ENV,
"temp_table_name": '{{ dag_run.conf["table_name"] }}_temp',
},
default_iam_role=IAM_ROLE,
spark_cluster=MY_SPARK,
)
</code></pre>
<p>I'm getting the following error:</p>
<blockquote>
<p>jinja2.exceptions.TemplateNotFound: path/to/{{ dag_run.conf["table_name"] }}_his_full_cleanup.sql</p>
</blockquote>
<p>It seems that <code>sql</code> is unable to take <code>dag_run.conf["table_name"]</code>,
but <code>parameters</code> is able to parse the value correctly. Why is that?</p>
<p>How do I solve this?
I can share the source code of <code>MyCustomOperator</code> if needed.</p>
| <python><airflow> | 2023-08-18 08:03:12 | 1 | 761 | user6308605 |
76,927,313 | 3,746,802 | How can I create a python object based on a variable name? | <p>I'm using Flask to build a chat interface to the Google Cloud Large Language Model (Palm2). In order to support multiple concurrent users, I need to create a different chat object for each user so they maintain their own context. To achieve this I am generating a uuid for each user, and I would like to create the chat object based on <code>session['sessionid']</code>, but I can't figure out how to do it! I've read about variable variables and looked at dictionaries, but after several hours looking at this I still can't work it out, and I'm sure there is a simple solution! Here is the snippet:</p>
<pre><code>@app.route('/')
def index():
if session.get('sessionid'):
print ('sessionid already set: ',session['sessionid'])
else: session['sessionid'] = uuid.uuid4().hex
chat = chat_model.start_chat()
return redirect(url_for('chat'))
</code></pre>
<p>You can see on line 6 I create the chat object, but I need to base the object name on the session uuid to ensure it is unique for each user.</p>
<p>I then need to make a call to the LLM using that object. You can see in the snippet below that I'm calling it chat, but again, this actually needs to be based on <code>session['sessionid']</code>:</p>
<pre><code>response = chat.send_message(user_input, **parameters)
</code></pre>
<p>If anyone can help me understand how to do this I will be eternally grateful, I've been looking at this for hours and I'm at my wits end!</p>
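<p>Rather than trying to create a dynamically named variable, the usual pattern is a dictionary keyed by the session id: one entry per user, each holding that user's chat object. A minimal sketch, where <code>DummyChat</code> is a stand-in for whatever <code>chat_model.start_chat()</code> returns:</p>

```python
import uuid

class DummyChat:
    """Stand-in for the object returned by chat_model.start_chat()."""
    def send_message(self, text):
        return f"echo: {text}"

# One shared mapping: session id -> that user's chat object.
chats = {}

def get_chat(session_id):
    # Create the chat lazily the first time this session shows up,
    # then keep reusing the same object so context is preserved.
    if session_id not in chats:
        chats[session_id] = DummyChat()
    return chats[session_id]

sid_a = uuid.uuid4().hex
sid_b = uuid.uuid4().hex
print(get_chat(sid_a) is get_chat(sid_a))      # True: same user, same object
print(get_chat(sid_a) is get_chat(sid_b))      # False: users are isolated
print(get_chat(sid_a).send_message("hello"))
```

<p>In the Flask view this would become <code>chat = get_chat(session['sessionid'])</code>, and later <code>response = chat.send_message(user_input, **parameters)</code>. For a multi-process deployment the mapping would need to live in shared storage rather than a module-level dict.</p>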
| <python><flask><session> | 2023-08-18 07:38:58 | 1 | 497 | Kelly Norton |
76,927,255 | 7,658,051 | Cannot import from my custom python package. ModuleNotFoundError: No module named 'a_python_file_inside_my_custom_module' | <p>I am trying to build a python package following <a href="https://www.freecodecamp.org/news/build-your-first-python-package/" rel="nofollow noreferrer">this tutorial</a>.</p>
<p>So I have made a base folder, then a module folder, and inside it I have placed the file starting_template.py, which contains the function <code>load_img</code>, and an <code>__init__.py</code> file whose content is simply:</p>
<pre><code>from starting_template import load_img
print("loaded!")
</code></pre>
<p>If I manually run <code>__init__.py</code>, it works without showing any error, it prints "loaded!".</p>
<p>So this is the directory tree:</p>
<p><a href="https://i.sstatic.net/f85W0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f85W0.png" alt="enter image description here" /></a></p>
<p>I have built the package by running</p>
<pre><code>python setup.py sdist bdist_wheel
</code></pre>
<p>in the base directory, then I have uploaded the package on my pypi account by running</p>
<pre><code>twine upload dist/opencv_auxiliary_for_vscode-0.0.2*
</code></pre>
<p>where <code>0.0.2</code> is the version number of the distribution,<br>
and finally installed the module by running</p>
<pre><code>pip install opencv_auxiliary_for_vscode==0.0.2
</code></pre>
<p>The package is uploaded <a href="https://pypi.org/project/opencv-auxiliary-for-vscode/0.0.2/" rel="nofollow noreferrer">here</a>.<br>
It is lame code, but it is just to practice the process of building Python packages.</p>
<p>Then, in a python file, I wrote the import statement</p>
<pre><code>from opencv_auxiliary_for_vscode import load_img
</code></pre>
<p>but as I run it, I get the error</p>
<pre><code>ModuleNotFoundError: No module named 'starting_template'
</code></pre>
<p>I have also tried to import</p>
<pre><code>from opencv_auxiliary_for_vscode.starting_template import load_img
</code></pre>
<p>but it does not work.</p>
<p>What am I possibly doing wrong?</p>
| <python><package><init><python-packaging><modulenotfounderror> | 2023-08-18 07:29:32 | 1 | 4,389 | Tms91 |
76,927,208 | 2,195,440 | why pylsp is not returning any response? | <p>I started pylsp with</p>
<pre><code>pylsp --verbose --tcp --host 127.0.0.1 --port 9999
</code></pre>
<p>Now I connected to it with following python code:</p>
<pre><code>import json
import socket
def send_request(request):
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
sock.connect(('127.0.0.1', 9999)) # Assuming the LSP server is running on localhost and port 9999
request_str = json.dumps(request)
# Send the length of the request followed by the request itself
sock.sendall((len(request_str).to_bytes(4, byteorder='little')) + request_str.encode('utf-8'))
# Read the response length (first 4 bytes) and then the response
response_length = int.from_bytes(sock.recv(4), byteorder='little')
        response_str = sock.recv(response_length).decode('utf-8')
        response = json.loads(response_str)
return response
# Initialize the LSP session
initialize_request = {
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"processId": 12345, # Arbitrary process ID
"rootPath": "/path/to/my/repo",
"capabilities": {}, # For simplicity, we're sending empty capabilities
}
}
response = send_request(initialize_request)
print("Definition:", response)
</code></pre>
<p>It does not return anything.</p>
<p>So I checked netstat and see :</p>
<pre><code>tcp4 0 0 127.0.0.1.9999 *.* LISTEN
</code></pre>
<p>So it is listening to this port.</p>
<p>I also tried to do telnet and send the following command. But the server does not return anything. What am I missing here?</p>
<pre><code>telnet 127.0.0.1 9999
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"processId": 12345,
"rootPath": "/path/to/my/repo",
"capabilities": {}
}
}
</code></pre>
| <python><language-server-protocol><python-language-server><python-lsp-server><pylsp> | 2023-08-18 07:20:21 | 0 | 3,657 | Exploring |
76,927,099 | 736,662 | For loop configuration in Locust/Python | <p>I have a working script in (Locust/Python) shown below:</p>
<pre><code>import json
# import requests
import base64
from locust import HttpUser, between, task
server_name = "https://xxx"
from_date = "2023-07-04T08:00:00.000Z"
to_date = "2023-07-04T09:00:00.000Z"
# url = f"https://{server_name}/api/SaveValues"
# Basic authentication:
username = "xxx"
password = "xxx"
auth_token = f"{username}:{password}"
# Encode the authentication token in base64:
auth_token_bytes = auth_token.encode("ascii")
auth_header = "Basic " + base64.b64encode(auth_token_bytes).decode("ascii")
headers={'X-API-KEY': 'xxx', 'Content-Type': 'application/json'}
# headers = {
# "Content-Type": "application/json",
# "Authorization": auth_header
# }
TS_IDs = {
'10158': 10,
'10159': 20,
'10174': 30,
'10182': 40,
'10185': 50,
'11016': 60,
'10479': 70,
'10482': 80
}
def get_data(ts_id, from_date, to_date, ts_value):
myjson = {
"id": int(ts_id),
"values": [
{
"from": from_date,
"to": to_date,
"value": ts_value
}
]
}
return myjson
class SaveValuesUser(HttpUser):
host = server_name
# wait_time = between(10, 15)
@task
def save_list_values(self):
data_list = []
for ts_id, ts_value in TS_IDs.items():
data = get_data(ts_id, from_date, to_date, ts_value)
data_list.append(data)
json_data = json.dumps(data_list, indent=2)
self.save_values(json_data)
def save_values(self, json_data):
print(type(json_data))
print(json_data)
# Make the PUT request with authentication:
response = self.client.put("/api/SaveValues", data=json_data, headers=headers)
# Check the response:
if response.status_code == 200:
print("SaveValues successful!")
print("Response:", response.json())
else:
print("SaveValues failed.")
print("Response:", response.text)
</code></pre>
<p>The script will do the for-loop (and construct the payload) for as many iterations as there are elements in the structure <code>TS_IDs</code>.</p>
<p>However, I want to make the number of times the for-loop runs configurable, and hence the size of the payload configurable.</p>
<p>Looking at the "range" function in Python I would assume I could do this:</p>
<pre><code> for ts_id, ts_value in range(3):
data = get_data(ts_id, from_date, to_date, ts_value)
data_list.append(data)
</code></pre>
<p>This results in this error:</p>
<p><code>[2023-08-18 09:00:21,361] N52820/ERROR/locust.user.task: cannot unpack non-iterable int object Traceback (most recent call last): File "C:\PythonScripting\Applications\Mambaforge\envs\_scripting310w\lib\site-packages\locust\user\task.py", line 347, in run self.execute_next_task() File "C:\PythonScripting\Applications\Mambaforge\envs\_scripting310w\lib\site-packages\locust\user\task.py", line 372, in execute_next_task self.execute_task(self._task_queue.pop(0)) File "C:\PythonScripting\Applications\Mambaforge\envs\_scripting310w\lib\site-packages\locust\user\task.py", line 493, in execute_task task(self.user) File "C:\PythonScripting\MyCode\pythonProject\SaveValues_FIXED.py", line 60, in save_list_values for ts_id, ts_value in range(3): TypeError: cannot unpack non-iterable int object</code></p>
<p>Any tips on how to make the number of times the for-loop runs configurable, and hence how "big" the payload is?</p>
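<p><code>range(3)</code> yields plain ints, which cannot be unpacked into <code>ts_id, ts_value</code>, hence the <code>TypeError</code>. To cap how many entries of <code>TS_IDs</code> the loop consumes (and so how big the payload is), one option is to slice the dict's items; a sketch with a stand-in mapping and a simplified payload:</p>

```python
from itertools import islice

# Stand-in for the module-level TS_IDs mapping.
TS_IDs = {'10158': 10, '10159': 20, '10174': 30, '10182': 40}

batch_size = 3  # configurable: how many entries go into the payload

data_list = []
for ts_id, ts_value in islice(TS_IDs.items(), batch_size):
    data_list.append({"id": int(ts_id), "value": ts_value})

print(len(data_list))  # 3
```

<p>Inside the Locust task this would become <code>for ts_id, ts_value in islice(TS_IDs.items(), batch_size):</code>, with <code>batch_size</code> defined once at module level (or read from an environment variable) so the payload size stays configurable.</p>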
| <python><locust> | 2023-08-18 07:03:09 | 1 | 1,003 | Magnus Jensen |
76,927,014 | 4,277,485 | Read .txt file in python, assign first line to a variable and from 2nd line to a dataframe | <p>I am working on a Python script to read .txt files (space-separated) into a pandas DataFrame. However, the first line contains server information. How do I extract the first line to a separate variable and the remaining file content to a pandas DataFrame?</p>
<p>Sample file1.txt</p>
<p><strong>srv123 12_45/56-01V top</strong><br/>
Date location character1 character2 character3<br/>
2023-01-24 asd 3434.56 67.567 898.898<br/>
2023-01-24 axs 345.56 78.567 934.898<br/>2023-01-24 ert 4567.56 89.123 901.898<br/>2023-01-25 tgb 7879.56 90.567 456.898<br/></p>
<p>Expected Dataframe :<br/></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">server</th>
<th style="text-align: center;">Date</th>
<th style="text-align: right;">location</th>
<th style="text-align: left;">character1</th>
<th style="text-align: center;">character2</th>
<th style="text-align: right;">character3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">srv123</td>
<td style="text-align: center;">2023-01-24</td>
<td style="text-align: right;">asd</td>
<td style="text-align: left;">3434.56</td>
<td style="text-align: center;">67.567</td>
<td style="text-align: right;">898.898</td>
</tr>
<tr>
<td style="text-align: left;">srv123</td>
<td style="text-align: center;">2023-01-24</td>
<td style="text-align: right;">axs</td>
<td style="text-align: left;">345.56</td>
<td style="text-align: center;">78.567</td>
<td style="text-align: right;">934.898</td>
</tr>
<tr>
<td style="text-align: left;">srv123</td>
<td style="text-align: center;">2023-01-24</td>
<td style="text-align: right;">ert</td>
<td style="text-align: left;">4567.56</td>
<td style="text-align: center;">89.123</td>
<td style="text-align: right;">901.898</td>
</tr>
<tr>
<td style="text-align: left;">srv123</td>
<td style="text-align: center;">2023-01-25</td>
<td style="text-align: right;">tgb</td>
<td style="text-align: left;">7879.56</td>
<td style="text-align: center;">90.567</td>
<td style="text-align: right;">456.898</td>
</tr>
</tbody>
</table>
</div><br/>
<p>I tried <code>read_csv</code>, but the first line and the header get messed up.</p>
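<p>A sketch of the split read, assuming the file really is whitespace-separated with the server line first: open the file, pull the first line off by hand, then hand the still-open handle to <code>read_csv</code>, which continues from the second line, where the real header lives. The inline string stands in for <code>file1.txt</code>:</p>

```python
import io
import pandas as pd

raw = """srv123 12_45/56-01V top
Date location character1 character2 character3
2023-01-24 asd 3434.56 67.567 898.898
2023-01-24 axs 345.56 78.567 934.898
"""

buf = io.StringIO(raw)           # with a real file: open("file1.txt")
server_line = buf.readline().strip()
server = server_line.split()[0]  # "srv123"; the rest of line 1 is ignored

df = pd.read_csv(buf, sep=r"\s+")  # the handle is already past line 1
df.insert(0, "server", server)     # prepend the server column
print(df)
```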
| <python><pandas><text-files><read-csv> | 2023-08-18 06:47:30 | 2 | 438 | Kavya shree |
76,926,828 | 12,224,591 | Calculate X-values at Certain Y-value of High-Degree Polynomial Fit? (Python) | <p>I'm attempting to calculate X-values at a certain Y-value for a very high degree polynomial using Python's numpy function <code>roots</code>. I'm using the numpy function <code>polyfit</code> to generate the coefficents of my high-degree polynomial. I'm using Python 3.10.</p>
<p>The main issue I'm running into is the fact that the <code>roots</code> function doesn't appear to produce correct answers for my X-values at the desired Y-value.</p>
<p>Here's a reproducible example code with some sample data that illustrates the issue:</p>
<pre><code>import numpy as np
from numpy.polynomial import Polynomial as poly
import matplotlib.pyplot as plt

def main():
    # Declare sample data
    dataX = [0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0, 14.5, 15.0, 15.5, 16.0, 16.5, 17.0]
    dataY = [0.0, -0.008539747658600522, -0.06274870291613317, -0.19444406675443215, -0.36136059621877487, -0.600774409182093, -0.9162127771014803, -1.323039561600279, -1.8023953285501042, -2.3960659331052065, -3.1540213754489517, -4.067701143485799, -5.252273439144881, -6.771579806673188, -8.841718736169389, -11.702002554463427, -10.622244389289413, -7.093229874530799, -4.658886103196484, -2.6582045851707403, -1.046251560907664, 0.26783134185570256, 1.3575816391972544, 2.262728874235532, 2.990654481815292, 3.557426755904812, 3.955698729737411, 4.191874958827449, 4.253204560527751, 4.116558660927984, 3.7485922103648823, 3.1198702528381874]
    polyDegree = 60

    # Perform polynomial fit (ascending coefficient order)
    polyCoeffs = np.flip(np.polyfit(dataX, dataY, polyDegree))

    # Find all X-values at the desired Y-value of polynomial
    y = 0.8
    x = (poly(polyCoeffs) - y).roots().tolist()

    # Filter out all found X-values that are imaginary or beyond desired limits (0, 17)
    i = 0
    while (i < len(x)):
        if (x[i].imag != 0) or (x[i].real < 0) or (x[i].real > 17):
            del x[i]
            i -= 1
        else:
            x[i] = x[i].real
        i += 1

    # Plot data, polynomial and found X-values
    plt.xlim(-1, 18)
    plt.ylim(-20, 20)
    polyDataY = []
    for i in range(len(dataX)):
        value = 0
        for j in range(len(polyCoeffs)):
            value += polyCoeffs[j] * pow(dataX[i], j)
        polyDataY.append(value)
    plt.scatter(dataX, dataY, c = "dodgerblue", label = "Original Data")
    plt.plot(dataX, polyDataY, color = "orange", label = "Polynomial Fit")
    plt.axhline(y, color = "red", label = "Desired Y-value")
    for i in range(len(x)):
        plt.axvline(x[i], color = "forestgreen")
    plt.axvline(99999, color = "forestgreen", label = "Found X-values")
    plt.legend()
    plt.show()
    plt.close()
    plt.clf()

if (__name__ == '__main__'):
    main()
</code></pre>
<p>In the example code above, I generate my 60 degree polynomial from some sample data, and store the generated coefficients in the <code>polyCoeffs</code> variable. I subsequently try to calculate all X-values at the Y-value of <code>0.8</code>, stored in the variables <code>x</code> and <code>y</code> respectively. I then filter out all imaginary X-value answers, and any answers that are below 0 or above 17. Finally, I plot the original data, fitted polynomial, desired Y-value and all found X-values onto a single plot.</p>
<p>The example code above currently generates the following plot:
<a href="https://i.sstatic.net/KvWg2l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KvWg2l.png" alt="enter image description here" /></a></p>
<p>Unfortunately, there appears to be something clearly wrong here.</p>
<p>While one of the found X-values appears to be correctly placed near my desired Y-value, there are a bunch of listed X-values to the right-hand side of the plot:
<a href="https://i.sstatic.net/h1RUql.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h1RUql.png" alt="enter image description here" /></a></p>
<p>I also tried lowering the polynomial degree, as I suspected the <code>roots</code> function may simply not be able to calculate all X-values of such a high degree polynomial, but even a polynomial of degree 30 appears to give incorrect results for my X-values:
<a href="https://i.sstatic.net/soRHll.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/soRHll.png" alt="enter image description here" /></a></p>
<p>What is going on here? Why is the <code>roots</code> function failing to provide me with the correct X-values for the Y-value of <code>0.8</code> here? Is the <code>roots</code> function simply not able to calculate X-values of such a high degree polynomial? Is there perhaps a better way of calculating for X for such high-degree polynomials?</p>
<p>Thanks for reading my post, any guidance is appreciated.</p>
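A hedged note on conditioning: a raw power-basis fit via <code>np.polyfit</code> is numerically ill-conditioned at high degree, whereas <code>numpy.polynomial.Polynomial.fit</code> rescales the x-data onto <code>[-1, 1]</code> before solving, which tends to keep both the fit and the subsequent root-finding stable. A minimal sketch of that route, with toy quadratic data standing in for the real series:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Toy data standing in for the real series: y = x ** 2 exactly.
dataX = [0.0, 1.0, 2.0, 3.0, 4.0]
dataY = [0.0, 1.0, 4.0, 9.0, 16.0]

# Polynomial.fit maps x onto [-1, 1] internally before solving the
# least-squares system, keeping it well conditioned even at high degree.
p = Polynomial.fit(dataX, dataY, deg=2)

# Solve p(x) = 4 and keep only real roots inside the data range.
roots = (p - 4.0).roots()
real_roots = [r.real for r in roots
              if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 4.0]
```

The same filter-by-range idea as in the question applies; the only change is fitting in the mapped domain.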
| <python><curve-fitting><polynomials> | 2023-08-18 06:11:28 | 0 | 705 | Runsva |
76,926,667 | 6,628,096 | Octave freqz command with fliplr --> Python syntax | <p>I'm trying to translate the command</p>
<pre><code>[h(:,m), w] = freqz(fliplr(b), fliplr(a),2048); % fliplr --> freqz works on powers of z^-1
</code></pre>
<p>from Octave to Python (with scipy's <code>freqz</code> and numpy's <code>fliplr</code>), but it raises an error when written <em>as is</em>:</p>
<pre><code> File "<__array_function__ internals>", line 180, in fliplr
File "/home/xxxx/yyyy/venv/lib/python3.8/site-packages/numpy/lib/twodim_base.py", line 98, in fliplr
raise ValueError("Input must be >= 2-d.")
ValueError: Input must be >= 2-d.
</code></pre>
<p>This suggests that numpy's <a href="https://github.com/numpy/numpy/blob/v1.25.0/numpy/lib/twodim_base.py#L48-L99" rel="nofollow noreferrer">fliplr</a> works somewhat differently from Octave's fliplr function.</p>
<p>Here are my <em>b</em> and <em>a</em> arrays:</p>
<pre><code>b= [ 1.01063287e+00 -1.46490341e+01 9.94030209e+01 -4.19168764e+02
1.22949513e+03 -2.66000588e+03 4.39112431e+03 -5.64225597e+03
5.70320516e+03 -4.55022454e+03 2.85602975e+03 -1.39550096e+03
5.20372994e+02 -1.43160328e+02 2.74037105e+01 -3.26098385e+00
1.81735269e-01]
a= [ 1.00000000e+00 -1.45159238e+01 9.86464912e+01 -4.16614074e+02
1.22391361e+03 -2.65216678e+03 4.38533779e+03 -5.64421414e+03
5.71487734e+03 -4.56742504e+03 2.87187255e+03 -1.40575405e+03
5.25150201e+02 -1.44741759e+02 2.77584882e+01 -3.30950845e+00
1.84797453e-01]
</code></pre>
<p>I've also tried slicing, with and without flipping:</p>
<pre><code>[h[:, m], w] = freqz(np.flip(b[0:N]), np.flip(a[0:N]), 2048)
</code></pre>
<p>Both of these run without errors, but the plots look weird.</p>
<p>Any suggestions on translation?</p>
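For reference, a hedged sketch of the translation with toy coefficients standing in for the real <code>b</code> and <code>a</code>: Octave's <code>fliplr</code> accepts row vectors, but numpy's <code>fliplr</code> requires 2-D input, so <code>np.flip</code> (or <code>b[::-1]</code>) is the 1-D equivalent; also note scipy returns <code>(w, h)</code> in the opposite order from Octave's <code>[h, w]</code>:

```python
import numpy as np
from scipy.signal import freqz

# Toy filter coefficients standing in for the real b and a arrays.
b = np.array([1.0, -2.0, 1.0])
a = np.array([1.0, -0.5, 0.25])

# Octave's fliplr works on row vectors; numpy's fliplr raises for 1-D
# input, so reverse the 1-D coefficient arrays with np.flip instead.
# Octave returns [h, w]; scipy.signal.freqz returns (w, h).
w, h = freqz(np.flip(b), np.flip(a), worN=2048)
```

The swapped return order alone can make plots "look weird" if <code>h</code> and <code>w</code> are assigned Octave-style.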
| <python><octave> | 2023-08-18 05:32:30 | 1 | 327 | Juha P |
76,926,353 | 3,642,360 | How to create row number for customer and order id in pyspark | <p>I have a dataframe, as follows:</p>
<pre><code>customer_id order_ts order_nbr item_nbr
162038 04/04/23 18:42 1258 972395
162038 04/04/23 18:42 1258 551984
162038 04/04/23 18:42 1258 488298
162038 04/04/23 18:42 1258 649230
162038 26/02/23 16:28 2715 372225
162038 26/02/23 16:28 2715 911716
162038 26/02/23 16:28 2715 696677
162038 26/02/23 16:28 2715 229455
162038 26/02/23 16:28 2715 870016
162038 29/01/23 13:07 1171 113719
162038 29/01/23 13:07 1171 553461
162060 01/05/23 18:42 1259 300911
162060 01/05/23 18:42 1259 574962
162060 01/05/23 18:42 1259 843300
162060 01/05/23 18:42 1259 173719
162060 05/05/23 18:42 2719 254899
162060 05/05/23 18:42 2719 776553
162060 05/05/23 18:42 2719 244739
162060 05/05/23 18:42 2719 170742
162060 05/05/23 18:42 2719 525719
162060 10/05/23 18:42 1161 896919
162060 10/05/23 18:42 1161 759465
</code></pre>
<p>I am interested to create a column "row_num" as follows:</p>
<pre><code>customer_id order_ts order_nbr item_nbr row_num
162038 04/04/23 18:42 1258 972395 1
162038 04/04/23 18:42 1258 551984 1
162038 04/04/23 18:42 1258 488298 1
162038 04/04/23 18:42 1258 649230 1
162038 26/02/23 16:28 2715 372225 2
162038 26/02/23 16:28 2715 911716 2
162038 26/02/23 16:28 2715 696677 2
162038 26/02/23 16:28 2715 229455 2
162038 26/02/23 16:28 2715 870016 2
162038 29/01/23 13:07 1171 113719 3
162038 29/01/23 13:07 1171 553461 3
162060 01/05/23 18:42 1259 300911 1
162060 01/05/23 18:42 1259 574962 1
162060 01/05/23 18:42 1259 843300 1
162060 01/05/23 18:42 1259 173719 1
162060 05/05/23 18:42 2719 254899 2
162060 05/05/23 18:42 2719 776553 2
162060 05/05/23 18:42 2719 244739 2
162060 05/05/23 18:42 2719 170742 2
162060 05/05/23 18:42 2719 525719 2
162060 10/05/23 18:42 1161 896919 3
162060 10/05/23 18:42 1161 759465 3
</code></pre>
<p>"row_num" column is basically unique for a particular customer_id, order_ts, order_nbr. For example, customer_id 162038, order_ts = 04/04/23 18:42, order_nbr = 1258, row_num = 1 for all the item numbers.</p>
<p>I did as follows:</p>
<pre><code>windowDept = Window.partitionBy("customer_id","order_ts","order_nbr").orderBy(col("order_ts").desc())
df1 = df.withColumn("row_num",row_number().over(windowDept))
</code></pre>
<p>But I am getting this as output:</p>
<p><a href="https://i.sstatic.net/kBK9q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kBK9q.png" alt="enter image description here" /></a></p>
<p>Any help will be highly appreciated. Thanks.</p>
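A hedged observation: partitioning by all three of <code>customer_id</code>, <code>order_ts</code> and <code>order_nbr</code> makes every order its own partition, so <code>row_number()</code> restarts at 1 each time. Partitioning by customer only and ranking distinct orders (e.g. <code>dense_rank()</code> over <code>Window.partitionBy("customer_id").orderBy("order_ts")</code>) is the usual fix. The intended numbering, sketched in plain Python:

```python
def add_row_num(rows):
    # rows: (customer_id, order_ts, order_nbr) tuples in display order.
    # Dense rank: one number per distinct order within each customer,
    # assigned in order of first appearance.
    seen = {}
    out = []
    for cust, ts, nbr in rows:
        per_cust = seen.setdefault(cust, {})
        if (ts, nbr) not in per_cust:
            per_cust[(ts, nbr)] = len(per_cust) + 1
        out.append(per_cust[(ts, nbr)])
    return out

rows = [
    ("162038", "04/04/23 18:42", 1258),
    ("162038", "04/04/23 18:42", 1258),
    ("162038", "26/02/23 16:28", 2715),
    ("162038", "29/01/23 13:07", 1171),
    ("162060", "01/05/23 18:42", 1259),
]
row_nums = add_row_num(rows)
```

Note the sample's desired ordering is ascending for one customer and descending for the other, so the exact <code>orderBy</code> direction is left open here.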
| <python><apache-spark><pyspark><apache-spark-sql> | 2023-08-18 03:56:49 | 1 | 792 | user3642360 |
76,926,251 | 206,253 | Can a pytest test change the value of a session fixture and affect subsequent tests? | <p>Let's say that a pytest session fixture returns a dictionary and this dictionary is then manipulated within a test. Would this affect subsequent tests which use the same fixture?</p>
| <python><pytest><fixtures> | 2023-08-18 03:18:26 | 1 | 3,144 | Nick |
76,926,237 | 123,246 | PySpark/Python - best way to create new calculated columns from variable number of column inputs | <p>Given this data structure</p>
<ul>
<li><p>age_group{#}</p>
<ul>
<li>is the count of records that fit into a user defined set of age groups (ex: age_group1 = the count of records between ages 0-10, age_group2 is 11-21...). There can be up to 10 age groups.</li>
</ul>
</li>
<li><p>bin_number</p>
<ul>
<li>user can define multiple age groupings. Ex: bin 1 is age_group1 = 0-10, age_group2 = 11-21, bin 2 is age_group1 = 0-17, age_group2 = 18-44. There can be any number of bins of age groups.</li>
</ul>
</li>
<li><p>t_group</p>
<ul>
<li>The total population from the census for this age group by county, where the format of the column name represents t_{age group #}_{bin #}. Ex: t_group1_1 is the total population for ages 0-10 in bin 1.</li>
</ul>
</li>
</ul>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">county</th>
<th style="text-align: center;">bin_number</th>
<th style="text-align: right;">age_group1</th>
<th style="text-align: right;">age_group2</th>
<th style="text-align: right;">t_group1_1</th>
<th style="text-align: right;">t_group2_1</th>
<th style="text-align: right;">t_group1_2</th>
<th style="text-align: right;">t_group2_2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">01001</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">200</td>
<td style="text-align: right;">100</td>
<td style="text-align: right;">300</td>
<td style="text-align: right;">400</td>
</tr>
<tr>
<td style="text-align: left;">01001</td>
<td style="text-align: center;">2</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">100</td>
<td style="text-align: right;">200</td>
<td style="text-align: right;">300</td>
<td style="text-align: right;">400</td>
</tr>
<tr>
<td style="text-align: left;">01003</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">200</td>
<td style="text-align: right;">100</td>
<td style="text-align: right;">300</td>
<td style="text-align: right;">400</td>
</tr>
<tr>
<td style="text-align: left;">01003</td>
<td style="text-align: center;">2</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">100</td>
<td style="text-align: right;">200</td>
<td style="text-align: right;">300</td>
<td style="text-align: right;">400</td>
</tr>
</tbody>
</table>
</div>
<p>The goal is to add new columns with the following calculations. Based on this cut of sample data, the new columns would be:<br/>
where bin_number = 1: (age_group1 / t_group1_1) * 100000<br/>
where bin_number = 1: (age_group2 / t_group2_1) * 100000<br/>
where bin_number = 2: (age_group1 / t_group1_2) * 100000<br/>
where bin_number = 2: (age_group2 / t_group2_2) * 100000<br/></p>
<p>The best I have come up with is to loop through the number of bins (hardcoded for the 2 in this sample), filter by the bin_number, select/alias the calculated columns, and then union each separate dataframe back together.</p>
<p>I'm stuck trying to think of other ways to do this that might be cleaner/more efficient. I keep coming back to groupBy or window partitions but the complexity of the data structure has me lost.</p>
<p>Note: The data structure is flexible and can be changed to simplify a solution, but I do need the final output to be in the same structure as the table above. Thank you for any feedback!</p>
<pre><code>dfs = []
i = 1
while i <= 2:
    dfGroup = df.filter(F.col("bin_number") == i)  # df is the table in this post
    totalBins = [x for x in df.columns if x.startswith("t_group") and x.endswith(str(i))]
    dfGroup = dfGroup.select(
        "*",
        *[((F.col(f"age_group{x}") / F.col(f"t_group{x}_{i}")) * RATE).alias(
            f"crude_rate_age_group_bin_{x}"
        ) for x in range(1, len(totalBins) + 1)],  # + 1 so the last group is included
    )
    dfs.append(dfGroup)
    i += 1  # was `i += i`
dfRate = reduce(DataFrame.unionAll, dfs)  # DataFrame from pyspark.sql, not F
</code></pre>
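A possible simplification, hedged: since each row's <code>bin_number</code> already identifies which <code>t_group{g}_{b}</code> column to divide by, one <code>F.when(...)</code>/<code>coalesce</code> expression per age group can replace the filter-and-union loop entirely. The row-wise logic in plain Python, using the column names from the table above:

```python
RATE = 100_000  # per-100k rate, as in the post

def crude_rates(row, n_groups=2):
    # The row's own bin_number picks the matching denominator column,
    # so no per-bin filter/union pass is needed.
    b = row["bin_number"]
    return {
        f"crude_rate_age_group{g}":
            row[f"age_group{g}"] / row[f"t_group{g}_{b}"] * RATE
        for g in range(1, n_groups + 1)
    }

row = {"bin_number": 1, "age_group1": 5, "age_group2": 10,
       "t_group1_1": 200, "t_group2_1": 100}
rates = crude_rates(row)
```

In Spark the same selection would be a single <code>select</code> with one <code>when</code> branch per bin, keeping one pass over the DataFrame.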
| <python><pyspark> | 2023-08-18 03:12:32 | 1 | 858 | tessa |
76,926,155 | 9,625,777 | Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!" | <p>I am trying to export <code>.engine</code> from <code>onnx</code> for the pretrained <code>yolov8m</code> model but get into <code>trtexec</code> issue. <strong>Note that I am targeting for a model supporting <code>dynamic</code> batch-size.</strong></p>
<p>I got the onnx by following the official instructions from ultralytics.</p>
<pre class="lang-py prettyprint-override"><code>from ultralytics import YOLO
# Load a model
model = YOLO('yolov8m.pt') # load an official model
model = YOLO('path/to/best.pt') # load a custom trained
# Export the model
model.export(format='onnx',dynamic=True) # Note the dynamic arg
</code></pre>
<p>I get the corresponding onnx. Now when I try to run <code>trtexec</code></p>
<pre><code>trtexec --onnx=yolov8m.onnx --workspace=8144 --fp16 --minShapes=input:1x3x640x640 --optShapes=input:2x3x640x640 --maxShapes=input:10x3x640x640 --saveEngine=my.engine
</code></pre>
<p>I get</p>
<pre><code>[08/10/2023-23:53:10] [I] TensorRT version: 8.2.5
[08/10/2023-23:53:11] [I] [TRT] [MemUsageChange] Init CUDA: CPU +336, GPU +0, now: CPU 348, GPU 4361 (MiB)
[08/10/2023-23:53:11] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 348 MiB, GPU 4361 MiB
[08/10/2023-23:53:12] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 483 MiB, GPU 4393 MiB
[08/10/2023-23:53:12] [I] Start parsing network model
[08/10/2023-23:53:12] [I] [TRT] ----------------------------------------------------------------
[08/10/2023-23:53:12] [I] [TRT] Input filename: yolov8m.onnx
[08/10/2023-23:53:12] [I] [TRT] ONNX IR version: 0.0.8
[08/10/2023-23:53:12] [I] [TRT] Opset version: 17
[08/10/2023-23:53:12] [I] [TRT] Producer name: pytorch
[08/10/2023-23:53:12] [I] [TRT] Producer version: 2.0.1
[08/10/2023-23:53:12] [I] [TRT] Domain:
[08/10/2023-23:53:12] [I] [TRT] Model version: 0
[08/10/2023-23:53:12] [I] [TRT] Doc string:
[08/10/2023-23:53:12] [I] [TRT] ----------------------------------------------------------------
[08/10/2023-23:53:12] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:773: While parsing node number 305 [Range -> "/model.22/Range_output_0"]:
[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:775: input: "/model.22/Constant_8_output_0"
input: "/model.22/Cast_output_0"
input: "/model.22/Constant_9_output_0"
output: "/model.22/Range_output_0"
name: "/model.22/Range"
op_type: "Range"
[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3353 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
[08/10/2023-23:53:12] [E] Failed to parse onnx file
[08/10/2023-23:53:12] [I] Finish parsing network model
[08/10/2023-23:53:12] [E] Parsing model failed
[08/10/2023-23:53:12] [E] Failed to create engine from model.
</code></pre>
<p>I am aware that some people suggest upgrading to the latest TensorRT version, but I am looking for an alternative solution.</p>
| <python><yolo><tensorrt><yolov8> | 2023-08-18 02:48:46 | 1 | 3,109 | Pe Dro |
76,926,118 | 3,236,841 | setup reticulate virtual environment with python 3.11.4 | <p>I have Python 3.11.4 installed. I have also installed <code>keras</code> package. I was trying to create a virtual environment but the command wants to install a new version of python (3.9). Is it possible to use the system-wide installed 3.11.4?</p>
<p>I tried:</p>
<pre><code>install.packages("keras")
library(reticulate)
virtualenv_create("r-reticulate", python = install_python())
</code></pre>
<p>and this pulls in and wants to install python 3.9. My question is simply if we can make the reticulate virtual environment to use python 3.11.4?</p>
<p>I am using Fedora 38 linux environment.</p>
<p>I tried:</p>
<pre><code>virtualenv_create("r-reticulate", python = NULL)
Error in stop_no_virtualenv_starter(version) :
Suitable Python installation for creating a venv not found.
Please install Python with one of following methods:
- https://github.com/rstudio/python-builds/
- reticulate::install_python(version = '<version>')
</code></pre>
<p>I also tried:</p>
<pre><code>virtualenv_create("r-reticulate", python = '/usr/lib/python3.11')
Error in stop_no_virtualenv_starter() :
Suitable Python installation for creating a venv not found.
Please install Python with one of following methods:
- https://github.com/rstudio/python-builds/
- reticulate::install_python(version = '<version>')
</code></pre>
<p>Though I am not sure if my call is correct, because even the following did not work.</p>
<pre><code>virtualenv_create("r-reticulate", python = '/usr/lib/python3.11')
Error in stop_no_virtualenv_starter() :
Suitable Python installation for creating a venv not found.
Please install Python with one of following methods:
- https://github.com/rstudio/python-builds/
- reticulate::install_python(version = '<version>')
</code></pre>
<p>Per suggestion:</p>
<pre><code>virtualenv_create("r-reticulate", python = NULL, version='3.11')
Error in stop_no_virtualenv_starter(version) :
Suitable Python installation for creating a venv not found.
Requested version constraint: 3.11
Please install Python with one of following methods:
- https://github.com/rstudio/python-builds/
- reticulate::install_python(version = '<version>')
</code></pre>
<p>which python?</p>
<pre><code>$ which python
python is /bin/python
python is /usr/bin/python
</code></pre>
<p>Update: the issue continues even with the suggestions:</p>
<pre><code>virtualenv_create("r-reticulate", python = '/bin/python')
Error in stop_no_virtualenv_starter() :
Suitable Python installation for creating a venv not found.
Please install Python with one of following methods:
- https://github.com/rstudio/python-builds/
- reticulate::install_python(version = '<version>')
</code></pre>
<p>I do get:</p>
<pre><code> virtualenv_starter(version='3.11')
[1] "/usr/bin/python3.11"
</code></pre>
<p>Any suggestions would be very appreciated.</p>
| <python><r><keras><reticulate> | 2023-08-18 02:33:36 | 0 | 1,396 | user3236841 |
76,925,785 | 1,285,061 | Unique rows of numpy array while preserving their order | <p>How can I get the unique rows of an array while preserving the order (of first appearance) of the rows in the result?</p>
<p>The below code tried with variations resulting in a single array.<br />
<code>array_equal</code> compares with element positions.</p>
<pre><code>import numpy as np

unique = np.array([])
arr = np.array([[1,2,3,4,5],[3,4,5,6,7],[5,6,7,8,9],[7,8,9,0,1],[9,0,1,2,3],[1,2,3,4,5],[-8,-7,-6,-5,-4]])
u = 0
for idx, i in enumerate(arr):
    if np.array_equal(unique, i) == False:
        unique = np.append(unique, i, axis=None)
        u += 1
print(unique)
print(u)
>>> print (unique)
[ 1. 2. 3. 4. 5. 3. 4. 5. 6. 7. 5. 6. 7. 8. 9. 7. 8. 9.
0. 1. 9. 0. 1. 2. 3. 1. 2. 3. 4. 5. -8. -7. -6. -5. -4.]
>>> print(u)
7
>>>
</code></pre>
<p>For this example, the expected result is an array with 6 unique rows.</p>
<pre><code>[[1,2,3,4,5],[3,4,5,6,7],[5,6,7,8,9],[7,8,9,0,1],[9,0,1,2,3],[-8,-7,-6,-5,-4]]
</code></pre>
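For reference, <code>np.unique</code> with <code>axis=0</code> and <code>return_index=True</code> does this without a Python loop; sorting the first-occurrence indices restores the original row order:

```python
import numpy as np

arr = np.array([[1, 2, 3, 4, 5],
                [3, 4, 5, 6, 7],
                [5, 6, 7, 8, 9],
                [7, 8, 9, 0, 1],
                [9, 0, 1, 2, 3],
                [1, 2, 3, 4, 5],
                [-8, -7, -6, -5, -4]])

# np.unique sorts its output, so take the index of each row's first
# appearance and re-index by the sorted indices to keep original order.
_, first_idx = np.unique(arr, axis=0, return_index=True)
unique_rows = arr[np.sort(first_idx)]
```

This yields the 6 unique rows in order of first appearance, dropping only the duplicate <code>[1, 2, 3, 4, 5]</code>.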
| <python><numpy> | 2023-08-18 00:26:24 | 1 | 3,201 | Majoris |
76,925,727 | 1,322,962 | Pandas `*** TypeError: boolean value of NA is ambiguous` When All Boolean Columns Have Values? | <h1>Error</h1>
<pre><code>tests/test_flatten_rates_etl.py:153:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
pandas/_libs/testing.pyx:52: in pandas._libs.testing.assert_almost_equal
???
pandas/_libs/testing.pyx:159: in pandas._libs.testing.assert_almost_equal
???
pandas/_libs/testing.pyx:105: in pandas._libs.testing.assert_almost_equal
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E TypeError: boolean value of NA is ambiguous
pandas/_libs/missing.pyx:388: TypeError
</code></pre>
<h1>What I'm Looking For</h1>
<p>I'm looking either for help finding the columns with the <code>NA</code>s OR I'm looking for someone that knows what my actual issue is and can explain how to fix this</p>
<h1>PDB Sessions</h1>
<pre><code>(Pdb) actual_df.select_dtypes(include=["bool", "boolean", bool])
service_level__is_trackable service_level__available_shippo object_custom_logic_applied is_trackable includes_insurance available_shippo address_from__is_residential address_from__validate address_to__is_residential address_to__validate address_return__is_residential address_return__validate shipment__bi_state_is_test
0 True False True False False False False False False False False False False
1 True True True False False False False False False False False False False
2 True True True False False False False False False False False False False
3 True True True False False False False False False False False False False
(Pdb) expected_df.select_dtypes(include=["bool", "boolean", bool])
service_level__is_trackable service_level__available_shippo object_custom_logic_applied is_trackable includes_insurance available_shippo address_from__is_residential address_from__validate address_to__is_residential address_to__validate address_return__is_residential address_return__validate shipment__bi_state_is_test
0 True False True False False False False False False False False False False
1 True True True False False False False False False False False False False
2 True True True False False False False False False False False False False
3 True True True False False False False False False False False False False
(Pdb) assert_frame_equal(actual_df, expected_df, check_like=True)
*** TypeError: boolean value of NA is ambiguous
</code></pre>
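One way to hunt for the offending values, hedged: the ambiguous <code>NA</code> usually lives in a nullable-dtype or object column that the boolean <code>select_dtypes</code> view does not reveal, since <code>bool(pd.NA)</code> is exactly what raises this <code>TypeError</code>:

```python
import pandas as pd

df = pd.DataFrame({
    # The nullable "boolean" extension dtype can hold pd.NA,
    # unlike plain numpy bool.
    "flag": pd.array([True, False, pd.NA], dtype="boolean"),
    "x": [1, 2, 3],
})

# Locate every column containing at least one missing value;
# these are the candidates tripping assert_frame_equal.
na_cols = df.columns[df.isna().any()].tolist()
```

Checking <code>actual_df.dtypes</code> for <code>boolean</code> (capital-B nullable) versus <code>bool</code> columns, and running this on both frames, should pinpoint the column the PDB session missed.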
| <python><pandas> | 2023-08-18 00:00:53 | 0 | 8,598 | AlexLordThorsen |
76,925,701 | 22,212,435 | How to make the layout of the Toplevel windows to be fixed? | <p>I want to create a few windows, and I want them to stay in the same 'position' compare to the other Toplevel siblings. They still all should be moveable, but just not pop up on the top of others on mouse click. I only know the '-topmost' attribute, but it seems to not work for many windows. So here is just a simple code example:</p>
<pre><code>import tkinter as tk

root = tk.Tk()

class Window(tk.Toplevel):
    move_pos = 0

    def __init__(self, master=None, *args, **kwargs):
        super().__init__(master, *args, **kwargs)
        self.geometry(f'+{self.move_pos}+100')
        self.attributes('-topmost', True)
        self.__class__.move_pos += 100
        self.bind('<Escape>', lambda e: root.quit())

root.withdraw()
for i in range(10):
    Window()
root.mainloop()
</code></pre>
<p>Note: There is no need to handle the case where some windows have been minimized, because I will use 'overrideredirect' for each of them in my code. Also, the code must work together with overrideredirect.</p>
| <python><tkinter> | 2023-08-17 23:51:24 | 1 | 610 | Danya K |
76,925,658 | 7,885,426 | Efficient way to assign positions to longest match | <p>I have been bending my head about this one for a bit. The data I have is quite simple. Something like this:</p>
<pre><code>Position match IDs
1 1, 5, 8
2 1, 10, 30
3 10, 40, 70
4 1, 10, 90, 100, 200, 203, 300
</code></pre>
<p>I want to assign each <code>Position</code> to the "longest" match, in this case:</p>
<pre><code>Position match ID
1 1
2 10
3 10
4 10
</code></pre>
<p>For position <code>2</code> we have options 1, 10, and 30. To get their lengths we have to check how many consecutive positions each match covers:</p>
<pre><code>Match ID covers positions
1 1-2 (also 4, but itβs not consecutive anymore)
10 2-4
30 2
</code></pre>
<p>So the longest match for position 2 is 10, covering 3 positions.</p>
<p>NOTE: the actual data has thousands of unique match ids, so making a matrix of all of them first, for example, is too memory intensive.</p>
<hr />
<p>A solution I thought of is something like this but I feel like there should be a much easier way:</p>
<pre><code>from collections import defaultdict

text = """Position matches
1 1, 5, 8
2 1, 10, 30
3 10, 40, 70
4 1, 10, 90, 100, 200, 203, 300"""

d = defaultdict(list)

# Get all the positions each match occurs in
n_data = 0
for (i, line) in enumerate(text.splitlines()):
    if i == 0:
        continue
    else:
        n_data += 1
        position, matches = line.split(" ", 1)
        for match_id in map(int, matches.split(", ")):
            d[match_id].append(int(position))

# Find consecutive positions to get full match coverage
intervals = []
for (match_id, positions) in d.items():
    start = positions[0]
    end = positions[0]
    for num in positions[1:]:
        if num == end + 1:
            end = num
        else:
            intervals.append((match_id, start, end))
            start = num
            end = num
    # Append the last interval if there is one
    intervals.append((match_id, start, end))

# Only keep the max value
max_id = [0] * (n_data + 1)
max_vector = [0] * (n_data + 1)
for interval in intervals:
    match_id, start, end = interval
    match_size = end - start + 1
    for i in range(start, end + 1):
        if max_vector[i] < match_size:
            max_vector[i] = match_size
            max_id[i] = match_id

# Print it
for i in range(1, len(max_id)):
    print("Position{}: {}".format(i, max_id[i]))
</code></pre>
<p>I can implement the algorithm in Python or Julia so both are fine. Pseudocode is also fine.</p>
<p>Curious to see if there is a smarter way to approach this :)</p>
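For comparison, the same two-pass idea compressed into one structure (a sketch over the sample data; ties keep the first match id encountered):

```python
from collections import defaultdict

# position -> list of match ids, as in the question's table
rows = {1: [1, 5, 8], 2: [1, 10, 30], 3: [10, 40, 70],
        4: [1, 10, 90, 100, 200, 203, 300]}

# Invert to match_id -> sorted positions, split into maximal consecutive
# runs, and let each position keep the id of the longest covering run.
positions = defaultdict(list)
for pos in sorted(rows):
    for m in rows[pos]:
        positions[m].append(pos)

best = {pos: (0, None) for pos in rows}
for m, ps in positions.items():
    start = ps[0]
    for a, b in zip(ps, ps[1:] + [None]):
        if b != a + 1:  # the current run ends at position a
            length = a - start + 1
            for p in range(start, a + 1):
                if length > best[p][0]:
                    best[p] = (length, m)
            start = b

result = {p: m for p, (_, m) in best.items()}
```

The inner assignment loop makes this O(total interval length), same as the original, but without the separate interval list and vectors.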
| <python><algorithm><performance><julia> | 2023-08-17 23:36:30 | 3 | 1,840 | CodeNoob |
76,925,642 | 48,956 | Can yield from a context manager (or similar) multiple times after exception? | <p>I'd like to report multiple errors while processing a list:
For example:</p>
<pre><code>with MultipleExceptions.testItems(items) as value:
    ... process value
    if value==3: raise Exception("3 err")
    if value==5: raise Exception("5 err")
</code></pre>
<p>should possibly collect and report an exception for <em>each</em> value of items that raises an exception in the block (not just the first).</p>
<p>The naive implementation:</p>
<pre><code>from contextlib import contextmanager

class MultipleExceptions(Exception):
    @staticmethod
    @contextmanager
    def testAllItems(items):
        errs = []
        for i, value in enumerate(items):
            try:
                yield value
            except Exception as e:
                errs.append((i, e, value))
        if errs: ...  # compose and raise mega exception message
</code></pre>
<p>... fails because context managers can only yield once. It's not obvious why this has to be true. Is there a way to yield multiple times?</p>
<p>As a generator:</p>
<pre><code>for value in MultipleExceptions.testAllItems(items):
    ... process value
    if value==3: raise Exception("3 err")
    if value==5: raise Exception("5 err")
</code></pre>
<pre><code>class MultipleExceptions(Exception):
    @staticmethod
    def testAllItems(items):
        errs = []
        for i, value in enumerate(items):
            try:
                yield value
            except Exception as e:
                errs.append((i, e, value))
        if errs: ...  # compose and raise mega exception message
</code></pre>
<p>... fails because the exception is never caught within testAllItems.</p>
<p>Clearly I could wrap my block into a function and provide that to a utility (without yield):</p>
<pre><code>def f(value):
    ... process value
    if value==3: raise Exception("3 err")
    if value==5: raise Exception("5 err")

MultipleExceptions.testItems(items, f)
</code></pre>
<p>... but yielding to a block with a generator or context manager seems so much cleaner and requires less work from the caller. Is it possible to treat a block more like a function that can be run many times?</p>
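One workaround that keeps the block's shape without the caller writing a named function (a hedged sketch, not a standard-library feature): yield a small per-item context manager alongside each value, and raise the aggregate after the loop:

```python
from contextlib import contextmanager

class MultipleExceptions(Exception):
    pass

def test_all_items(items):
    # Yield (value, catcher) pairs; each catcher swallows and records the
    # block's exception, so iteration continues past failures.
    errs = []

    @contextmanager
    def catcher(i, value):
        try:
            yield
        except Exception as e:
            errs.append((i, e, value))

    for i, value in enumerate(items):
        yield value, catcher(i, value)
    if errs:
        raise MultipleExceptions(errs)

# usage:
# for value, catch in test_all_items(items):
#     with catch:
#         ... process value
```

The raise happens inside the generator's final <code>next()</code> call, so it propagates out of the caller's <code>for</code> loop once all items have been seen.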
| <python><yield><contextmanager> | 2023-08-17 23:30:20 | 2 | 15,918 | user48956 |
76,925,624 | 3,746,802 | How do I create a session for each anonymous user in Flask? | <p>I'm writing a chatbot interface using Flask for users to query the Google Palm2 LLM via Vertex AI. Once complete I'll package this up in a container and publish it using Cloud Run. I'm using the ChatModel class from the Google aiplatform python sdk, the (very sparse!) documentation is here:</p>
<p><a href="https://cloud.google.com/python/docs/reference/aiplatform/latest/vertexai.language_models.ChatModel" rel="nofollow noreferrer">https://cloud.google.com/python/docs/reference/aiplatform/latest/vertexai.language_models.ChatModel</a></p>
<p>The chat_model.start_chat() call returns a ChatSession that maintains state and context of the chat session, so the LLM can answer questions with context. The code snippet in question looks like this:</p>
<pre><code>chat_model = ChatModel.from_pretrained("chat-bison@001")
parameters = {
"temperature": 0.2,
"max_output_tokens": 256,
"top_p": 0.8,
"top_k": 40
}
chat = chat_model.start_chat(
context="""Context about the bot goes here""",
)
</code></pre>
<p>The challenge is, once I give access to our users and have multiple people hitting Cloud Run at the same time, they are all going to be inside the same ChatSession, which will lead to crazy race conditions and is not an option.</p>
<p>So I'm wondering, is there a way to use Flask sessions, and tie each incoming anonymous user to their own copy of chat_model.start_chat()?</p>
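A hedged sketch of the usual pattern: give each visitor a key stored in Flask's signed session cookie and keep the <code>ChatSession</code> objects server-side in a registry keyed by it (the name <code>get_chat</code> is illustrative; note an in-memory dict will not survive multiple Cloud Run instances or restarts, so a shared store such as Redis or Firestore is needed there). The keying logic, with a plain dict standing in for <code>flask.session</code>:

```python
import uuid

# Server-side registry of per-user chats; in real code the values would
# be chat_model.start_chat(...) ChatSession objects.
_chat_sessions = {}

def get_chat(session):
    # `session` stands in for flask.session, a per-visitor dict backed by
    # a signed cookie. Store only the key in the cookie, never the chat.
    if "chat_id" not in session:
        session["chat_id"] = str(uuid.uuid4())
    key = session["chat_id"]
    if key not in _chat_sessions:
        _chat_sessions[key] = object()  # placeholder for start_chat()
    return _chat_sessions[key]
```

Two visitors with different cookies then get independent chats, while repeat requests from the same visitor reuse theirs.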
| <python><flask><google-cloud-platform><artificial-intelligence> | 2023-08-17 23:22:33 | 0 | 497 | Kelly Norton |
76,925,588 | 22,307,474 | GLError 1281 invalid value in glGetUniformLocation | <p>I wrote a shader class and for some reason I get an error:</p>
<pre><code>OpenGL.error.GLError: GLError(
err = 1281,
description = b'invalid value',
baseOperation = glGetUniformLocation,
cArguments = (0, b'projection\x00'),
result = -1
)
</code></pre>
<p>Here is my code:</p>
<pre><code># shader.py
from rengine.debugging import Debug
from rengine.math import REMath
from rengine.gfx.gl import *

class Shader():
    def __init__(self, vertex_src:str = None, fragment_src:str = None,
                 vertex_path:str = None, fragment_path:str = None):
        self.__uniform_locations = {}
        if vertex_src and fragment_src:
            self.load_from_src(vertex_src, fragment_src)
        elif vertex_path and fragment_path:
            self.load_from_file(vertex_path, fragment_path)
        else: self.__program_id = 0

    def use(self):
        glUseProgram(self.__program_id)

    def unbind(self):
        glUseProgram(0)

    def destroy(self):
        glDeleteShader(self.__vertex_shader)
        glDeleteShader(self.__fragment_shader)

    def set_uniform_location(self, uniform_name:str):
        if uniform_name not in self.__uniform_locations:
            self.__uniform_locations[uniform_name] = glGetUniformLocation(self.__program_id, uniform_name)
            # OpenGL.error.GLError: GLError(
            #     err = 1281,
            #     description = b'invalid value',
            #     baseOperation = glGetUniformLocation,
            #     cArguments = (0, b'projection\x00'),
            #     result = -1
            # )

    def set_uniform_mat4(self, uniform_name:str, matrix:REMath.mat4):
        self.set_uniform_location(uniform_name)
        glUniformMatrix4fv(self.__uniform_locations[uniform_name], 1, GL_FALSE, REMath.value_ptr(matrix))

    def set_uniform_float(self, uniform_name:str, value:float):
        self.set_uniform_location(uniform_name)
        glUniform1f(self.__uniform_locations[uniform_name], value)

    def set_uniform_int(self, uniform_name:str, value:int):
        self.set_uniform_location(uniform_name)
        glUniform1i(self.__uniform_locations[uniform_name], value)

    def set_uniform_vec2(self, uniform_name:str, vector:REMath.vec2):
        self.set_uniform_location(uniform_name)
        glUniform2fv(self.__uniform_locations[uniform_name], 1, REMath.value_ptr(vector))

    def set_uniform_vec3(self, uniform_name:str, vector:REMath.vec3):
        self.set_uniform_location(uniform_name)
        glUniform3fv(self.__uniform_locations[uniform_name], 1, REMath.value_ptr(vector))

    def set_uniform_vec4(self, uniform_name:str, vector:REMath.vec4):
        self.set_uniform_location(uniform_name)
        glUniform4fv(self.__uniform_locations[uniform_name], 1, REMath.value_ptr(vector))

    def compile_shader(self, source:str, shader_type:GL_SHADER_TYPE):
        shader_id = glCreateShader(shader_type)
        glShaderSource(shader_id, source)
        glCompileShader(shader_id)
        if not glGetShaderiv(shader_id, GL_COMPILE_STATUS):
            error = glGetShaderInfoLog(shader_id).decode()
            Debug.print_err(f"Shader Compilation Failed: {error}")
            glDeleteShader(shader_id)
            return
        Debug.print_succ(f"Shader Compiled Successfully: {shader_id}")
        return shader_id

    def compile_program(self, vertex_shader:GL_SHADER, fragment_shader:GL_SHADER):
        program_id = glCreateProgram()
        glAttachShader(program_id, vertex_shader)
        glAttachShader(program_id, fragment_shader)
        glLinkProgram(program_id)
        return program_id

    def load_from_src(self, vertex_src:str, fragment_src:str):
        self.__vertex_shader = self.compile_shader(vertex_src, GL_VERTEX_SHADER)
        self.__fragment_shader = self.compile_shader(fragment_src, GL_FRAGMENT_SHADER)
        self.__program_id = self.compile_program(self.__vertex_shader, self.__fragment_shader)

    def load_from_file(self, vertex_path:str, fragment_path:str):
        with open(vertex_path, "r") as vf:
            self.__vertex_shader = self.compile_shader(vf.read(), GL_VERTEX_SHADER)
        with open(fragment_path, "r") as ff:
            self.__fragment_shader = self.compile_shader(ff.read(), GL_FRAGMENT_SHADER)
        self.__program_id = self.compile_program(self.__vertex_shader, self.__fragment_shader)
</code></pre>
<p>Here are my shaders:
VERTEX SHADER:</p>
<pre><code>#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
</code></pre>
<p>FRAGMENT SHADER:</p>
<pre><code>#version 330 core
out vec4 fragColor;
void main()
{
    fragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
</code></pre>
<p>I tried to debug the code and noticed that when I create the shaders they both get the same id (0). I think this is the problem, but I don't know why it happens.</p>
<p>It looks like your post is mostly code; please add some more details.</p>
| <python><pyopengl> | 2023-08-17 23:07:54 | 1 | 510 | bin4ry |
76,925,491 | 22,407,544 | Why is my Django view not receiving input from my HTML form? | <p>I am making a website that takes user text, modifies it and returns a response using Ajax. I am using Django on the backend and vanilla JS on the frontend. The user chooses <code>source_language</code> and <code>target_language</code>, enters some text in <code>input_text</code>, and the browser outputs a translation in the <code>output_text</code> textarea. However, whenever I try, I keep getting either <code>None</code> as output_text (the output from my translation API) or a <code>NOT NULL</code> error for the input text whenever it tries to save to the database, so it seems the input text is not being detected by my Django view. The same goes for the source language and target language. When I comment out the code that saves the translation to the database, I get <code>None</code> as output text. However, when I check the payload being sent (in developer tools), I do see the input text and the selected languages. I use the <code><form></code> tag in my HTML with AJAX so as to prevent page reloads when the user receives a response. When I remove all JS it works as expected, but of course the presentation is a black screen and requires a page reload. Here is my code:</p>
<p>JS:</p>
<pre><code>const csrfToken = document.querySelector('[name=csrfmiddlewaretoken]').value;

document.addEventListener('DOMContentLoaded', () => {
    //const myForm = document.getElementById('translate-form');
    const myForm = document.querySelector('#translate-form');
    myForm.addEventListener('submit', (e) => {
        e.preventDefault();
        /* Convert form data to JSON */
        let formData = new FormData(e.target);
        formData = formDataToJSON(formData);
        /* Create a new Request object */
        let request = new Request('/translator/', {
            method: 'POST',
            body: formData,
            headers: {
                'content-type': 'application/json',
                'X-CSRFToken': csrfToken
            }
        });
        /* Send the request and wait for response */
        fetch(request)
            .then(response => {
                if (!response.ok) throw new Error('Failed to send request.');
                return response.json();
            }).then(data => {
                /* Do something with data from response */
                console.log(data);
                //let outputTextArea = document.getElementById('translate-form');
                outputTextArea.value = !data.output_text ? `Translation error: ${data.error}` : data.output_text;
            }).catch(console.error);
    });
});

/* Converts FormData to JSON */
function formDataToJSON(formData) {
    let obj = {};
    for (let key of formData.keys()) obj[key] = formData.get(key);
    return JSON.stringify(obj);
}
</code></pre>
<p>Here is Django view:</p>
<pre><code>from django.http import JsonResponse
from django.views.decorators.csrf import csrf_protect
from django.shortcuts import render
from django.http import HttpResponse
from .models import TranslationRequest

@csrf_protect
def translateText(request):
    if request.method == 'POST':
        source_language = request.POST.get('source_language')
        input_text = request.POST.get('input_text')
        target_language = request.POST.get('target_language')
        try:
            # Call the API for translation logic
            output_text = translation_via_API
            # Save the translation to database
            translation_object = TranslationRequest.objects.create(
                source_language = source_language,
                target_language = target_language,
                input_text = input_text,
                output_text = output_text
            )
            return JsonResponse({'output_text': output_text})
        except Exception as e:
            # Handle API and model save errors here
            error_message = f"An error occurred: {e}"
            return JsonResponse({'error': error_message}, status=500)
        return render(request, 'translator/index.html', {response})
    return render(request, 'translator/index.html')
</code></pre>
<p>Here is the relevant HTML:</p>
<pre><code><form id="translate-form" action="{% url 'translateText' %}" method="post">
{% csrf_token %}
<label for="input-lang-select"></label>
<select class="input-lang-select" name="source_language">
<option value="zh">Chinese</option>
<option value="en">English</option>
<option value="es">Spanish</option>
<option value="de">German</option>
<option value="jp">Japanese</option>
<option value="ko">Korean</option>
<option value="ru">Russian</option>
<option value="pt">Portugese</option>
<!-- add more language options as needed -->
</select>
</div>
<!--<p class="box-title">Detected language: <span id="detected-language">Detecting...</span></p>-->
<textarea id="input-text-area" name="input_text"
autofocus
minlength="1"
maxlength="10000"
placeholder="Enter text here"></textarea>
<!--change min and maxlength-->
<div class="submit-btn-container">
<button type="submit" id="submit">Translate</button>
</div>
<div class="word-count-div">
<span class="word-count"><span class="string-count">0</span>/10000</span>
<!--change max wordcount-->
</div>
</div>
<div class="output-box">
<div class="output-lang-select-div">
<label for="output-lang-select">></label>
<select class="output-lang-select" name="target_language">
<option value="en">English</option>
<option value="zh">Chinese</option>
<option value="es">Spanish</option>
<option value="de">German</option>
<option value="jp">Japanese</option>
<option value="ko">Korean</option>
<option value="ru">Russian</option>
<option value="pt">Portugese</option>
<!-- add more language options as needed -->
</select>
</form>
</code></pre>
<p>Here's my mysite\urls.py:</p>
<pre><code>from django.contrib import admin
from django.urls import include, path
from translator import views

urlpatterns = [
    path('translator/', include('translator.urls')),
    path('admin/', admin.site.urls),
]
</code></pre>
<p>And my translator\urls.py:</p>
<pre><code>from django.urls import path
from . import views

urlpatterns = [
    path("", views.translateText, name="translateText"),
]
</code></pre>
<p>I read Django documentation back and forth and tried everything I understood but it made no difference. And as I said on checking the payload I saw that each field is being sent to the backend with the relevant data.</p>
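<p>My working suspicion (an assumption on my part, not something I have verified) is that because the body is sent as JSON, the fields never appear in <code>request.POST</code> and would have to be parsed from <code>request.body</code> instead. A minimal stand-alone sketch of that idea; the payload below is hypothetical, shaped like what I see in developer tools:</p>

```python
import json

# Hypothetical payload, shaped like the one I see in the Network tab.
raw_body = b'{"source_language": "en", "target_language": "es", "input_text": "hello"}'

# request.POST.get("input_text") would return None for a JSON body (my assumption);
# parsing the raw bytes by hand does find the fields.
data = json.loads(raw_body)
print(data.get("input_text"))        # hello
print(data.get("source_language"))   # en
```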
| <python><html><django><ajax><forms> | 2023-08-17 22:35:43 | 0 | 359 | tthheemmaannii |
76,925,417 | 2,320,476 | How to read the values in a dictionary of a list | <p>I have a list that is initialized as below and populated elsewhere:</p>
<pre><code>my_test_data = []
</code></pre>
<p>When I print <code>my_test_data</code>, I get the result below:</p>
<pre><code>[
{
'tnType': None,
'tnCode': None,
'postedDt': None,
'amt': None, 'tranDesc1': None
}
]
</code></pre>
<p>This to me looks like a dictionary in a list</p>
<p>I tried printing like this</p>
<pre><code>for key, val in my_test_data.items():
    print(key, val)
</code></pre>
<p>I got this error message: <code>AttributeError: 'list' object has no attribute 'items'</code></p>
<p>I want to be able to read the value on every row and store that value in another list.</p>
<p>How can I read this one by one?</p>
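<p>To make this easy to reproduce, here is a minimal stand-alone version of the structure and my failing attempt (the <code>None</code> values are placeholders for the real data):</p>

```python
# Minimal reproduction: a dictionary inside a list.
my_test_data = [
    {
        "tnType": None,
        "tnCode": None,
        "postedDt": None,
        "amt": None,
        "tranDesc1": None,
    }
]

# This is what I tried; it fails because the outer object is a list, not a dict:
try:
    for key, val in my_test_data.items():
        print(key, val)
except AttributeError as e:
    print(e)  # 'list' object has no attribute 'items'
```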
| <python> | 2023-08-17 22:14:48 | 2 | 2,247 | Baba |
76,925,343 | 2,223,106 | Django field with foreignKey | <p>I'm using <code>Django v4.1</code> and have a <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#django.db.models.ForeignKey" rel="nofollow noreferrer">foreignKey</a> relationship I want to pull into a form.</p>
<p>The two models:</p>
<pre><code>class Service(models.Model):
    client = models.ForeignKey("Client", on_delete=models.SET_NULL, null=True)
    # some other fields
</code></pre>
<p>and</p>
<pre><code>class Client(models.Model):
    name = models.CharField(max_length=50)
    # a few more fields
</code></pre>
<p>The form:</p>
<pre><code>class ServiceForm(forms.ModelForm):
    class Meta:
        model = Service
        fields = "__all__"
</code></pre>
<p>However, in the front-facing form, "Client" is generated, but without a <code>SELECT</code> field.</p>
<p>There is another model with a foreignKey relationship to Client, order:</p>
<pre><code>class Order(models.Model):
    client = models.ForeignKey("Client", on_delete=models.SET_NULL, null=True)
    # more fields...
</code></pre>
<p>And it's form:</p>
<pre><code>class OrderModelForm(forms.ModelForm):
    class Meta:
        model = Order
        fields = "__all__"
</code></pre>
<p>Which renders as expected with a <code>SELECT</code> field for the <code>Client</code>.</p>
<p>Looking at the <a href="https://docs.djangoproject.com/en/4.1/topics/forms/modelforms/" rel="nofollow noreferrer">forms docs</a>, I experimented with importing <code>Client</code> from <code>models</code> and adding a <a href="https://docs.djangoproject.com/en/4.1/ref/forms/fields/#django.forms.ModelChoiceField" rel="nofollow noreferrer">ModelChoiceField</a>
(<code>"client": forms.ModelChoiceField(queryset=Client.objects.all())</code>, both inside and outside of a <code>widgets</code> dict), but based on <a href="https://stackoverflow.com/a/21165052/2223106">this SO post</a> I'm thinking Django should be rendering that on its own, as it does for <code>Order</code>.</p>
<p>Please share suggestions in debugging. Thanks much.</p>
| <python><django><django-models><modelchoicefield> | 2023-08-17 21:57:07 | 1 | 6,610 | MikeiLL |
76,925,283 | 329,174 | Python multiple inheritance accessing all inherited attributes | <p>I am trying to wrap my head around multiple inheritance and contrived a very simple example in which I am not able to access <em>all</em> of the inherited attributes. This may or may not be possible; if it is, I could use some help. Here is how my code is structured, along with some comments.</p>
<p>I made a <code>Status</code> class with a single attribute (<code>status</code>). The idea is that all of the other classes can inherit from <code>Status</code> and thus get the <code>status</code> attribute.</p>
<pre><code>class Status():
    def __init__(self, status: str = ""):
        self.status = status
</code></pre>
<p>Now I will make a class 'A' which inherits from Status and adds some methods to connect or disconnect from something.</p>
<pre><code>class A(Status):
    def __init__(self):
        super().__init__("Disconnected")

    def connect(self):
        if self.status == "Disconnected":
            print("Connecting A")
            self.status = "Connected"
        else:
            print("A already connected, doing nothing")

    def disconnect(self):
        if self.status == "Connected":
            print("Disconnecting A")
            self.status = "Disconnected"
        else:
            print("A already disconnected, doing nothing")
</code></pre>
<p>This seems to work. In my test code, I can instantiate an instance of A, call the new methods, as well as access A's status.</p>
<pre><code> print('\n***Class A**')
a = A()
print(a.status)
a.connect()
print(a.status)
</code></pre>
<p>I then repeated this process and made a very simple class (B) which also inherits from Status. Using test code similar to the above, I can instantiate an instance of B and access its status.</p>
<pre><code>class B(Status):
    def __init__(self):
        super().__init__("B Uninitialized")
</code></pre>
<p>Now I would like to make a new class 'Composite' that inherits from A, B, and Status</p>
<pre><code>class Composite(A, B, Status):
    def __init__(self):
        super(A, self).__init__()
        super(B, self).__init__()
        super(Status, self).__init__()
        self.status = "Composite Uninitialized"
</code></pre>
<p>This "works" as composite can read/write its status (inherited from Status) as well as call the methods inherited from A. However, I would also like to be able to access the status of the A and B objects in addition to composite's own status. It seems this should be doable, but I cannot figure out how. Thanks!!</p>
| <python><inheritance><multiple-inheritance> | 2023-08-17 21:42:46 | 0 | 331 | Frac |
76,925,105 | 13,650,733 | Cleaner way to assign NaNs in cells below the lower-left to upper-right diagonal of pandas dataframe | <p>I don't know the exact terminology for what I'm looking to do, so apologies if I'm not describing the desired result properly. The example below should make it clear.</p>
<p>Also, I do have a solution that is working, but it's super clunky. I'm hoping someone can help me find a much cleaner way to do this.</p>
<p><strong>The Issue:</strong></p>
<p>I am trying to take a dataframe of values, like this (it doesn't matter what the values are but note the columns are integers increasing from 0 to whatever):</p>
<pre><code>In [1]: a = np.random.randint(0,10,(5,5))
In [2]: df = pd.DataFrame(a, index=['A','B','C','D','E'],columns=[x for x in range(5)])
In [3]: df
Out [3]:
0 1 2 3 4
A 0 8 9 4 5
B 3 4 5 0 1
C 8 5 5 4 1
D 8 5 3 8 3
E 7 2 6 7 2
</code></pre>
<p><strong>Desired result:</strong></p>
<p>Then what I'm looking to do is return a version of the df that has NaNs in the lower, right-hand side below the "lower-left to upper-right" diagonal.</p>
<p>So desired output would be this:</p>
<pre><code> 0 1 2 3 4
A 0 8.0 9.0 4.0 5.0
B 3 4.0 5.0 0.0 NaN
C 8 5.0 5.0 NaN NaN
D 8 5.0 NaN NaN NaN
E 7 NaN NaN NaN NaN
</code></pre>
<p>I've found several answers on assigning specific values up the diagonal, but nothing quite like this.</p>
<p><strong>Working solution:</strong></p>
<p>As I mentioned, I did get a very clunky solution together, but I'm hoping someone has a much cleaner way to do this. Here is what I have done:</p>
<pre><code>In [4]: check_digit_max = df.columns[-1] # the columns are ints, so just get the highest value
In [5]: df['check_digit'] = [i for i,x in enumerate(df.index)]
</code></pre>
<p>So now I have this:</p>
<pre><code>In [6]: df
Out [6]:
0 1 2 3 4 check_digit
A 0 8 9 4 5 0
B 3 4 5 0 1 1
C 8 5 5 4 1 2
D 8 5 3 8 3 3
E 7 2 6 7 2 4
</code></pre>
<p>So basically, if the column number plus the check digit is greater than <code>check_digit_max</code>, the values should be NaN. And this messy loop does the trick:</p>
<pre><code>In [7]: for col in df.iloc[:,:-1].columns:
   ...:     df[col] = [x if int(col)+y <= check_digit_max else np.nan
   ...:                for x,y in zip(df[col], df['check_digit'])]
In [8]: df.drop('check_digit',axis=1)
Out [8]:
0 1 2 3 4
A 0 8.0 9.0 4.0 5.0
B 3 4.0 5.0 0.0 NaN
C 8 5.0 5.0 NaN NaN
D 8 5.0 NaN NaN NaN
E 7 NaN NaN NaN NaN
</code></pre>
<p>Hopefully someone has a much better way of doing this. Any help appreciated!</p>
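<p>For anyone who wants to reproduce the clunky version end to end, here it is condensed into one runnable script (seeded so it is repeatable; the actual values don't matter, only the NaN pattern does):</p>

```python
import numpy as np
import pandas as pd

np.random.seed(0)  # any values work; only the NaN pattern matters
a = np.random.randint(0, 10, (5, 5))
df = pd.DataFrame(a, index=list("ABCDE"), columns=range(5))

# Highest integer column label, captured before the helper column is added.
check_digit_max = df.columns[-1]
df["check_digit"] = range(len(df))

# NaN out any cell whose column number plus check digit exceeds the max.
for col in df.columns[:-1]:
    df[col] = [x if int(col) + y <= check_digit_max else np.nan
               for x, y in zip(df[col], df["check_digit"])]

result = df.drop("check_digit", axis=1)
print(result)
```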
| <python><pandas><numpy> | 2023-08-17 21:01:48 | 2 | 315 | markd227 |
76,924,920 | 6,324,055 | How to specify platform-specific dependences with Poetry? | <p>I want to constrain a <em>single</em> dependency (that is, a single package with a single name) based on the OS platform. For instance, the package version or the package origin (URL, local wheel, etc.) could change depending on the OS.</p>
<p>I tried the solution linked in the <a href="https://python-poetry.org/docs/dependency-specification/#multiple-constraints-dependencies" rel="nofollow noreferrer">documentation</a> but that does not work: Poetry tries to install the wrong package for the wrong OS platform. I also searched StackOverflow and found <a href="https://python-poetry.org/docs/dependency-specification/#multiple-constraints-dependencies" rel="nofollow noreferrer">1 related question</a> but it does not help.</p>
<p>As a practical use case, I want to install PyTorch 2.0.1 from PyPI on macOS and a specific wheel (with a specific version of CUDA) on Ubuntu. As such, my package specification is:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
python = "^3.10"
torch = [
    {platform = "linux", url = "https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp310-cp310-linux_x86_64.whl"},
    {platform = "darwin", version = "2.0.1"},
]
</code></pre>
<p>Unfortunately, on macOS Poetry tried to install the Linux package as mentioned in the error message:</p>
<pre><code>Installing dependencies from lock file
Package operations: 1 install, 0 updates, 0 removals
β’ Installing torch (2.0.1+cu118 https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp310-cp310-linux_x86_64.whl): Failed
RuntimeError
Package https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp310-cp310-linux_x86_64.whl cannot be installed in the current environment {'implementation_name': 'cpython', 'implementation_version': '3.10.11', 'os_name': 'posix', 'platform_machine': 'arm64', 'platform_release': '22.5.0', 'platform_system': 'Darwin', 'platform_version': 'Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:20 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6000', 'python_full_version': '3.10.11', 'platform_python_implementation': 'CPython', 'python_version': '3.10', 'sys_platform': 'darwin', 'version_info': [3, 10, 11, 'final', 0], 'interpreter_name': 'cp', 'interpreter_version': '3_10'}
at ~/Library/Application Support/pypoetry/venv/lib/python3.10/site-packages/poetry/installation/executor.py:788 in _download_link
784β # Since we previously downloaded an archive, we now should have
785β # something cached that we can use here. The only case in which
786β # archive is None is if the original archive is not valid for the
787β # current environment.
β 788β raise RuntimeError(
789β f"Package {link.url} cannot be installed in the current environment"
790β f" {self._env.marker_env}"
791β )
792β
</code></pre>
<p>Please note that I made sure the lock file is consistent with <code>pyproject.toml</code> by running <code>poetry lock</code> beforehand.</p>
<p><strong>Is there a solution to this issue?</strong></p>
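<p>For completeness, my understanding from the dependency-specification docs (an untested assumption on my part) is that the <code>platform</code> keys above should be equivalent to spelling the constraint with explicit environment markers:</p>

```toml
[tool.poetry.dependencies]
python = "^3.10"
torch = [
    {markers = "sys_platform == 'linux'", url = "https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp310-cp310-linux_x86_64.whl"},
    {markers = "sys_platform == 'darwin'", version = "2.0.1"},
]
```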
| <python><pytorch><python-poetry> | 2023-08-17 20:30:01 | 1 | 6,586 | Louis Lac |
76,924,873 | 1,171,746 | Can I add custom data to a pyproject.toml file | <p>I am using the <a href="https://pypi.org/project/toml/" rel="noreferrer">toml</a> package to read my <code>pyproject.toml</code> file.
I want to add custom data, which in this case I will read in my <code>docs/conf.py</code> file.
When I try to add a custom section, I get errors and warnings from the <a href="https://marketplace.visualstudio.com/items?itemName=tamasfe.even-better-toml" rel="noreferrer">Even Better TOML</a> extension in VS Code stating that my custom data is not allowed.</p>
<p>Example TOML section in <code>pyproject.toml</code></p>
<pre class="lang-ini prettyprint-override"><code>[custom.metadata]
docs_name = "GUI Automation for windows"
docs_author = "myname"
docs_copyright = "2023"
docs_url= "https://someurl.readthedocs.io/en/latest/"
</code></pre>
<p>So, my question is: Is there a valid way of adding custom data to a <code>pyproject.toml</code> file?</p>
| <python><python-packaging><pyproject.toml><toml> | 2023-08-17 20:22:13 | 1 | 327 | Amour Spirit |