| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string date) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string, nullable) |
|---|---|---|---|---|---|---|---|---|
77,733,663
| 2,985,049
|
Ray: Parallel read of audio in chunks that preserves ordering
|
<p>I would like to use Ray to perform large-scale inference on audio files. I would like to read each file in chunk by chunk, run inference on each chunk, and then write each file out chunk by chunk, so that I never have to hold an entire file in memory at one time. At the same time, I'd like to parallelize the operation so that multiple audio files can be processed simultaneously.</p>
<p>I've created a datasource and a datasink that work if you load the entire file into memory at once. But I haven't figured out how to do it when loading a file chunk by chunk, where I have to maintain the order and make sure I have all the chunks when performing the write operation.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Iterator

import numpy as np
import soundfile as sf
import soxr
from ray.data.block import Block
from ray.data.datasource import FileBasedDatasource, RowBasedFileDatasink
from ray.data._internal.delegating_block_builder import DelegatingBlockBuilder


class AudioDataSink(RowBasedFileDatasink):
    def __init__(self, path: str, file_format: str, model_sample_rate: int, **file_datasink_kwargs):
        super().__init__(path, file_format=file_format, **file_datasink_kwargs)
        self.model_sr = model_sample_rate
        self.file_format = file_format

    def write_row_to_file(self, row: dict[str, Any], file: "pyarrow.NativeFile"):
        audio = row["audio"]
        if self.model_sr != row["original_sr"]:
            audio = soxr.resample(audio, self.model_sr, row["original_sr"])
        sf.write(file, audio, self.model_sr)


class AudioDatasource(FileBasedDatasource):
    _WRITE_FILE_PER_ROW = True
    _NUM_THREADS_PER_TASK = 8

    def __init__(
        self,
        paths: str | list[str],
        model_sample_rate: int,
        **file_based_datasource_kwargs,
    ):
        super().__init__(
            paths,
            file_extensions=["wav", "mp3", "flac", "ogg", "aiff"],
            **file_based_datasource_kwargs,
        )
        self.model_sr = model_sample_rate

    def _read_stream(self, f: "pyarrow.NativeFile", path: str, **reader_args) -> Iterator[Block]:
        try:
            audio, sr = sf.read(f, dtype="float32", always_2d=True)
        except RuntimeError as e:
            raise ValueError(f"Error reading {f}: {e}")
        if sr != self.model_sr:
            audio = soxr.resample(audio, sr, self.model_sr)
        audio = np.transpose(audio)
        builder = DelegatingBlockBuilder()
        item = {"audio": audio, "original_sr": sr}
        builder.add(item)
        block = builder.build()
        yield block
</code></pre>
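One way to tackle the ordering half of the question (a sketch independent of Ray's internal APIs; `chunk_audio` and `reassemble` are hypothetical helper names, not part of Ray or soundfile): have the datasource emit one row per chunk, tagged with its chunk index and the file's total chunk count, so a downstream sink can group rows by file, verify completeness, and restore order before writing.

```python
import numpy as np

def chunk_audio(audio: np.ndarray, frames_per_chunk: int) -> list[dict]:
    """Split a (channels, frames) array into ordered chunk records."""
    n = audio.shape[1]
    num_chunks = -(-n // frames_per_chunk)  # ceiling division
    return [{"chunk_index": i,
             "num_chunks": num_chunks,
             "audio": audio[:, i * frames_per_chunk:(i + 1) * frames_per_chunk]}
            for i in range(num_chunks)]

def reassemble(rows: list[dict]) -> np.ndarray:
    """Restore the original sample order; fail loudly if a chunk is missing."""
    rows = sorted(rows, key=lambda r: r["chunk_index"])
    if len(rows) != rows[0]["num_chunks"]:
        raise ValueError("cannot write file: not all chunks have arrived")
    return np.concatenate([r["audio"] for r in rows], axis=1)
```

In a Ray pipeline the same metadata would travel in each row's dict next to `audio` and `original_sr`, and the datasink would group rows by file path and call a reassembly step like this before writing.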
|
<python><ray>
|
2023-12-29 19:21:35
| 0
| 7,189
|
Luke
|
77,733,650
| 3,023,116
|
Mypy error is not reproducible in local environment
|
<p>I have a GitHub Action and a pre-commit hook for my Python code.</p>
<p>Below is the YAML file for the GitHub Action:</p>
<pre><code>name: Mypy
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.11"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          pip install "mypy==1.7.0" "pydantic>=2.4" "alembic>=1.8.1" "types-aiofiles>=23.2.0" "types-redis>=4.6.0" --quiet
      - name: Running mypy checks
        run: |
          mypy . --ignore-missing-imports --config-file backend/app/mypy.ini
</code></pre>
<p>Here is my pre-commit config:</p>
<pre><code>repos:
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.7.0
    hooks:
      - id: mypy
        args: [--ignore-missing-imports, --config-file, backend/app/mypy.ini]
        verbose: true
        additional_dependencies:
          - "pydantic>=2.4"
          - "alembic>=1.8.1"
          - "types-aiofiles>=23.2.0"
          - "types-redis>=4.6.0"
</code></pre>
<p>The code I am checking looks like this</p>
<pre><code>async def total_monthly_size(self, user_id: int) -> int:
    now = utcnow()
    current_month = datetime(now.year, now.month, 1)
    sum_total_size_query = select(func.sum(self.model.total_size or self.model.estimated_total_size)).where(
        self.model.user_id == user_id,
        self.model.is_failed.is_(False),
        self.model.requested_at > current_month,
    )
    sum_total_size_result = await self._db.execute(sum_total_size_query)
    sum_total_size = sum_total_size_result.scalar()
    return int(sum_total_size or 0)
</code></pre>
<p>pre-commit runs without errors but git action fails with</p>
<pre><code>error: Need type annotation for "sum_total_size_query" [var-annotated]
</code></pre>
<p>How do I achieve the consistent behavior of both tools?</p>
<p><strong>UPDATE:</strong> When I run the GitHub Action's command in my local environment, I do not get an error either:</p>
<pre><code>mypy . --ignore-missing-imports --config-file backend/app/mypy.ini
</code></pre>
|
<python><github-actions><mypy><pre-commit><pre-commit.com>
|
2023-12-29 19:18:13
| 1
| 6,935
|
taras
|
77,733,599
| 10,499,034
|
How to get Python to recognize installed ost module
|
<p>This is not the same error as shown at <a href="https://stackoverflow.com/questions/14295680/unable-to-import-a-module-that-is-definitely-installed">Unable to import a module that is definitely installed</a>, so it is not a duplicate. The resolution of that error involved the module being installed in the wrong directory for the wrong version of Python. That is not the case here.</p>
<p>I installed the <code>ost</code> module with <code>pip install ost</code>.</p>
<p>I confirmed that it is installed with <code>pip list</code>.</p>
<p><a href="https://i.sstatic.net/hl6CY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hl6CY.png" alt="enter image description here" /></a></p>
<p>I try to run a tutorial script that uses ost:</p>
<pre><code>import modeller
import ost
# change directory
os.chdir('C:/Users/realt/OneDrive/Documents/Biology Projects/Junk/')
# print content of directory to screen
print('Files before modelling:\n' + ' '.join(os.listdir()) + '\n\n')
# perform modelling
routine = hm.routines.Routine_promod3(
alignment=aln,
target='ARAF',
templates=['3NY5'],
tag='model')
routine.generate_models()
print('Files after modelling:\n' + ' '.join(os.listdir()) + '\n')
# remove model
os.remove('model_1.pdb')
# change back to tutorial directory
os.chdir('../..')
</code></pre>
<p>and I get the error:</p>
<pre><code>"ost" is required for this functionality, but could not be imported.
</code></pre>
<p>How can I get Python to recognize and import <code>ost</code>?</p>
<p>I have tried restarting python, anaconda and my whole system to no avail.</p>
<p>I have run:</p>
<pre><code>(base) PS C:\Users\realt> pip show ost
</code></pre>
<p>And it returned:</p>
<pre><code>Name: ost
Version: 0.1.dev13
Summary: OpenSubtitles interface for Python 3
Home-page: http://www.github.com/rlirey/ost
Author: Ryan L. Irey
Author-email: ireyx001@umn.edu
License: Apache 2.0 License
Location: c:\users\realt\anaconda3\lib\site-packages
Requires: requests
Required-by:
</code></pre>
<p>I have also run the following code in a Jupyter notebook:</p>
<pre><code>import sys; print(sys.path)
</code></pre>
<p>And it returned:</p>
<pre><code>['C:\\Users\\realt\\OneDrive\\Documents\\Biology Projects\\Scripts\\Python Scripts', 'C:\\Users\\realt\\anaconda3\\python310.zip', 'C:\\Users\\realt\\anaconda3\\DLLs', 'C:\\Users\\realt\\anaconda3\\lib', 'C:\\Users\\realt\\anaconda3', '', 'C:\\Users\\realt\\anaconda3\\lib\\site-packages', 'C:\\Users\\realt\\anaconda3\\lib\\site-packages\\win32', 'C:\\Users\\realt\\anaconda3\\lib\\site-packages\\win32\\lib', 'C:\\Users\\realt\\anaconda3\\lib\\site-packages\\Pythonwin']
</code></pre>
<p>And here is the full traceback of the error:</p>
<pre><code>---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[17], line 9
      6 print('Files before modelling:\n' + ' '.join(os.listdir()) + '\n\n')
      8 # perform modelling
----> 9 routine = hm.routines.Routine_promod3(
     10     alignment=aln,
     11     target='ARAF',
     12     templates=['3NY5'],
     13     tag='model')
     14 routine.generate_models()
     16 print('Files after modelling:\n' + ' '.join(os.listdir()) + '\n')
</code></pre>
<pre><code>File ~\anaconda3\lib\site-packages\homelette\routines.py:924, in Routine_promod3.__init__(self, alignment, target, templates, tag)
915 def __init__(
916 self,
917 alignment: typing.Type["Alignment"],
(...)
921 ) -> None:
922 # check dependencies
923 dependencies = ["ost", "promod3"]
--> 924 self._check_dependencies(dependencies)
925 # check number of templates
926 if len(templates) != 1:
File ~\anaconda3\lib\site-packages\homelette\routines.py:238, in Routine._check_dependencies(dependencies)
236 for dependency in dependencies:
237 if not _IMPORTS[dependency]:
--> 238 raise ImportError(
239 '"{}" is required for this functionality, but could '
240 "not be imported.".format(dependency)
241 )
ImportError: "ost" is required for this functionality, but could not be imported.
</code></pre>
<p>The full code that I am working with is as follows:</p>
<pre><code>import os
import homelette as hm
os.chdir('C:/Users/realt/OneDrive/Documents/Biology Projects/Junk/')
# read in the alignment
aln = hm.Alignment('aln_01.fasta')
# print to screen to check alignment
aln.print_clustal(line_wrap=70)
# annotate the alignment
aln.get_sequence('ARAF').annotate(seq_type = 'sequence')
aln.get_sequence('3NY5').annotate(seq_type = 'structure',
pdb_code = '3NY5',
begin_res = '1',
begin_chain = 'A',
end_res = '81',
end_chain = 'A')
import modeller
import ost
# change directory
os.chdir('C:/Users/realt/OneDrive/Documents/Biology Projects/Junk/')
# print content of directory to screen
print('Files before modelling:\n' + ' '.join(os.listdir()) + '\n\n')
# perform modelling
routine = hm.routines.Routine_promod3(
alignment=aln,
target='ARAF',
templates=['3NY5'],
tag='model')
routine.generate_models()
print('Files after modelling:\n' + ' '.join(os.listdir()) + '\n')
# remove model
os.remove('model_1.pdb')
# change back to tutorial directory
os.chdir('../..')
</code></pre>
<p>I installed Homelette on a totally different machine and tried to import it. It gave me numerous warnings about dependencies that could not be imported, so I think the problem is with Homelette. The warnings were:</p>
<pre><code>C:\Users\jamie\anaconda3\lib\site-packages\homelette\__init__.py:68: UserWarning: Module "altmod" could not be imported.
  warnings.warn(msg)
C:\Users\jamie\anaconda3\lib\site-packages\homelette\__init__.py:68: UserWarning: Module "modeller" could not be imported.
  warnings.warn(msg)
C:\Users\jamie\anaconda3\lib\site-packages\homelette\__init__.py:68: UserWarning: Module "ost" could not be imported.
  warnings.warn(msg)
C:\Users\jamie\anaconda3\lib\site-packages\homelette\__init__.py:68: UserWarning: Module "promod3" could not be imported.
  warnings.warn(msg)
C:\Users\jamie\anaconda3\lib\site-packages\homelette\__init__.py:68: UserWarning: Module "qmean" could not be imported.
  warnings.warn(msg)
C:\Users\jamie\anaconda3\lib\site-packages\homelette\__init__.py:75: UserWarning: Please install the missing modules in order to enjoy the full functionality of "homology"
  warnings.warn(msg)
</code></pre>
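Worth noting for anyone triaging this question: the <code>pip show</code> output above describes <code>ost</code> as an OpenSubtitles interface, while homelette appears to expect the OpenStructure toolkit, which shares the same import name, so the installed distribution may simply be the wrong project. A generic way to check which file Python actually resolves for a module name (demonstrated with a stdlib module, since <code>ost</code> may not be importable at all):

```python
import importlib.util

def module_origin(name: str):
    """Return the file Python would load for `name`, or None if unresolvable."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec and spec.origin else None

# Example with a module that is guaranteed to exist:
print(module_origin("json"))  # path of the stdlib json package
```

Comparing that path against the `Location` reported by `pip show` makes name collisions like this visible.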
|
<python>
|
2023-12-29 19:03:47
| 0
| 792
|
Jamie
|
77,733,521
| 3,291,993
|
How to vectorize operations in pandas data frame?
|
<pre><code>import pandas as pd
columns = ['S1', 'S2', 'S3', 'S4', 'S5']
df = pd.DataFrame({'Patient':['p1', 'p2', 'p3', 'p4', 'p5', 'p6', 'p7', 'p8', 'p9', 'p10'],
'S1':[0.7, 0.3, 0.5, 0.8, 0.9, 0.1, 0.9, 0.2, 0.6, 0.3],
'S2':[0.2, 0.3, 0.5, 0.4, 0.9, 0.1, 0.9, 0.7, 0.4, 0.3],
'S3':[0.6, 0.3, 0.5, 0.8, 0.9, 0.8, 0.9, 0.3, 0.6, 0.3],
'S4':[0.2, 0.3, 0.7, 0.8, 0.9, 0.1, 0.9, 0.7, 0.3, 0.3 ],
'S5':[0.9, 0.8, 0.5, 0.8, 0.9, 0.7, 0.2, 0.7, 0.6, 0.3 ]})
# vectorized operations in data frame
# get the number of the cells that are >=0.5 for each column
arr1 = df[columns].ge(0.5).sum().to_numpy()
# get the sum the cells that are >=0.5 for each column
arr2 = df[df[columns]>=0.5][columns].sum().to_numpy()
print(arr1)
print(arr2)
</code></pre>
<p>How do I get the list of patients or a set of patients for each column in the df like below?</p>
<pre><code>[('p1', 'p3', 'p4', 'p5', 'p7', 'p9'),
('p3', 'p5', 'p7', 'p8'),
('p1', 'p3', 'p4', 'p5', 'p6', 'p7', 'p9'),
(...),
(...)]
</code></pre>
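One possible direction (a sketch assuming sample data like that above, with patients p1–p10): build the boolean mask once with `ge(0.5)` and collect the qualifying patients per column with a comprehension.

```python
import pandas as pd

# Small self-contained subset of the question's data (columns S1-S3):
df = pd.DataFrame({'Patient': ['p1','p2','p3','p4','p5','p6','p7','p8','p9','p10'],
                   'S1': [0.7, 0.3, 0.5, 0.8, 0.9, 0.1, 0.9, 0.2, 0.6, 0.3],
                   'S2': [0.2, 0.3, 0.5, 0.4, 0.9, 0.1, 0.9, 0.7, 0.4, 0.3],
                   'S3': [0.6, 0.3, 0.5, 0.8, 0.9, 0.8, 0.9, 0.3, 0.6, 0.3]})
columns = ['S1', 'S2', 'S3']

# One vectorized comparison for all columns, then per-column selection:
mask = df[columns].ge(0.5)
patients = [tuple(df.loc[mask[c], 'Patient']) for c in columns]
```

The comparison itself is vectorized; only the final collection into tuples loops over the (few) columns rather than the (many) rows.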
|
<python><pandas><dataframe><vectorization>
|
2023-12-29 18:42:23
| 3
| 1,147
|
burcak
|
77,733,490
| 1,234,434
|
Why does my generator function not return a value
|
<p>I'm learning Python.</p>
<p>I'm trying to read in a text file (Linux: <code>/etc/passwd</code>). I've taken the answer from <a href="https://stackoverflow.com/questions/4842057/easiest-way-to-ignore-blank-lines-when-reading-a-file-in-python">here</a> and tried to implement it with what I've learned so far. I know from my reading that many websites recommend <code>readline()</code> (it helps with big files).</p>
<pre><code>import pandas as pd
import numpy as np

def nonblank_lines(f):
    rawline = f.readline()
    while rawline != '':
        line = rawline.rstrip()
        print("#'#'#'#'#'", line)
        if line:
            yield line
            rawline = f.readline()

filein = "/etc/passwd"
datain = []
columnnames = ['Username', 'Password', 'UID', 'GID', 'Name of User', 'HOMEDIR', 'Login Shell']

with open(filein, 'r') as passwdline:
    print(f"passwdline: {passwdline}")
    for line in nonblank_lines(passwdline):
        print(f"Back from function ====, {line}")
        datain.append(line.split(':'))
</code></pre>
<p>For reference I edited the <code>/etc/passwd</code> file and put in a blank line to test my program</p>
<pre><code>root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
</code></pre>
<p>However, my program keeps returning empty lines instead of each non-blank line in the <code>/etc/passwd</code> file:</p>
<pre><code>#'#'#'#'#'
#'#'#'#'#'
#'#'#'#'#'
#'#'#'#'#'
#'#'#'#'#'
</code></pre>
<p>repeated continuously.</p>
<p>What am I doing wrong? My guess is that it has to do with putting <code>readline()</code> after the <code>yield</code> statement, but I have no idea why that would be a problem. Any thoughts?</p>
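A diagnostic hint for the symptom above: if the `readline()` call ended up indented under the `if line:` branch, hitting a blank line would mean the file pointer never advances, producing exactly the endless empty prints shown. For comparison, here is a variant that cannot stall, iterating the file object directly (file iteration already stops at EOF, so no manual `readline()` bookkeeping is needed):

```python
import io

def nonblank_lines(f):
    """Yield stripped, non-empty lines from an open file object."""
    for rawline in f:
        line = rawline.rstrip()
        if line:
            yield line

# Simulated /etc/passwd with an inserted blank line:
sample = io.StringIO("root:x:0:0:root:/root:/bin/bash\n\nbin:x:1:1:bin:/bin:/sbin/nologin\n")
rows = [line.split(':') for line in nonblank_lines(sample)]
```

The blank line is skipped and both real records come back in order.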
|
<python>
|
2023-12-29 18:34:24
| 1
| 1,033
|
Dan
|
77,733,356
| 1,867,246
|
Does IntelliJ treat .py files differently from .txt files when they are created
|
<p>I accidentally created a new .txt file when I wanted a .py file. Of course, Run doesn't work. But it made me wonder what happens inside IntelliJ when I create a .py file, compared to when I create a .txt file.</p>
<p>Here are some possibilities I considered:</p>
<ol>
<li><p>Is there hidden code in the file itself? Seems unlikely because then there would have to be cleanup before committing.</p>
</li>
<li><p>Is there a table of some kind inside IntelliJ that it uses to decide what to render on the screen and what functions to make available?</p>
</li>
<li><p>Is there some other structure involved that I haven't thought of?</p>
</li>
</ol>
|
<python><file><intellij-idea><ide>
|
2023-12-29 17:56:53
| 2
| 531
|
Nora McDougall-Collins
|
77,733,255
| 11,145,820
|
How to copy a collection from one instance to another instance with Qdrant?
|
<p>In the process that I'm running, I need very low latency to process a job, so I use a local instance of Qdrant to be able to insert everything very fast.</p>
<p>After finishing the job, I want to copy the whole collection from the local instance to a cloud instance of Qdrant.</p>
<p>I was just wondering if there is any better way of doing this other than simply scrolling through the whole collection and inserting it into the other instance.</p>
|
<python><python-embedding><openaiembeddings><qdrant><qdrantclient>
|
2023-12-29 17:32:39
| 2
| 1,979
|
Guinther Kovalski
|
77,733,213
| 908,924
|
df.groupby()[].transform('max') returns indices, not the data in the column
|
<p>I have an Excel sheet with two columns of data. The data in the first column is grouped by rows; the adjacent column contains some data to be processed. What I would like to do is take the data in column 'B', find its maximum, and print that max next to the data in column 'A'.</p>
<p>The data in both columns looks like</p>
<p><a href="https://i.sstatic.net/IFPLM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IFPLM.png" alt="enter image description here" /></a></p>
<p>what I would like is</p>
<p><a href="https://i.sstatic.net/lthby.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lthby.png" alt="enter image description here" /></a></p>
<p>When I try</p>
<pre><code> df = pd.read_excel("data.xlsx", usecols= "A, B")
df.groupby('A')['B'].transform('max')
</code></pre>
<p>The result is</p>
<p><a href="https://i.sstatic.net/tv6dv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tv6dv.png" alt="enter image description here" /></a></p>
<p>Which returns the "max", but instead of the values in 'A' the indices are returned. Is there a way to do this with <code>.groupby</code>, or do I need to try something else?</p>
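A note on what is happening: `transform('max')` returns a Series aligned to the original row index, and the left-hand column printed at the REPL is that index, not column 'A'. Assigning the result as a new column places the group maxima next to the original data, which matches the desired layout. A self-contained sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'A': ['x', 'x', 'y', 'y', 'y'],
                   'B': [1, 5, 2, 7, 3]})

# transform returns a Series aligned to df's index, so it can be
# assigned directly as a new column:
df['B_max'] = df.groupby('A')['B'].transform('max')
print(df)
#    A  B  B_max
# 0  x  1      5
# 1  x  5      5
# 2  y  2      7
# 3  y  7      7
# 4  y  3      7
```

This keeps one row per original row; `groupby(...).max()` alone would instead collapse to one row per group.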
|
<python><pandas>
|
2023-12-29 17:19:59
| 0
| 2,355
|
Stripers247
|
77,733,036
| 1,684,518
|
Python script not releasing memory in Linux
|
<p>I have a Python script that reads a CSV that contains URLs to PDFs, opens them, does some analytics, and saves that to another CSV. The input CSV is like this:</p>
<pre><code>Bid Number,Items
45455454,['https://example.com/file1.pdf']
230000000237,"['https://example.com/file2.pdf', 'https://example.com/file3.pdf']"
</code></pre>
<p>So, for every bid, I can have multiple files that I need to check. What I did was create a <code>ProcessPoolExecutor</code> with 50 processes, and each process creates N threads (one per URL). URLs per bid are always fewer than 10.</p>
<p>The Python script I have is the following:</p>
<pre><code>import argparse
import ast
import csv
import fitz
import gc
import pandas as pd
import requests
import threading
from documents import best_document, evaluate_document
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor, as_completed


def parse_url(url):
    """
    Receives a url and returns a dictionary containing the following keys:
    - url: the url of the document
    - number_of_pages: the number of pages in the document
    - number_of_words: the number of words in the document
    - keywords_counter: the number of keywords in the document
    - negative_keywords_counter: the number of negative keywords in the document
    """
    print(f"(INFO) Processing {url} on thread {threading.current_thread().name}")
    response = requests.get(url)
    if response.status_code != 200:
        return None
    word_count, keywords_counter, negative_keywords_counter, page_count = 0, 0, 0, 0
    pdf = None  # ensure the name exists in `finally` even if fitz.open() raises
    try:
        pdf = fitz.open(stream=response.content, filetype="pdf")
        word_count, keywords_counter, negative_keywords_counter = evaluate_document(pdf)
        page_count = pdf.page_count
    except:
        return None
    finally:
        if pdf:
            pdf.close()
            del pdf
        if response:
            response.close()
            del response
    return {
        "url": url,
        "number_of_pages": page_count,
        "number_of_words": word_count,
        "keywords_counter": keywords_counter,
        "negative_keywords_counter": negative_keywords_counter,
    }


def process_bid(bid_number, urls):
    print(f"(INFO) Processing bid number {bid_number}...")
    with ThreadPoolExecutor(max_workers=len(urls)) as executor:
        bid_results = []
        futures = [executor.submit(parse_url, url) for url in urls]
        bid_results = [
            future.result()
            for future in as_completed(futures)
            if future.result() is not None
        ]
        del futures
    best_result = None
    if len(bid_results) != 0:
        best_result = best_document(bid_results, bid_number)
    del bid_results
    return (bid_number, best_result)


def main(max_processes=50, input_filename="bids_grouped.csv"):
    last_bid_index = 0
    dataframe = pd.read_csv(input_filename, skiprows=range(1, last_bid_index + 1))
    # Prepare a list to store future objects
    futures = []
    iteration_number = 0
    with open("best_documents.csv", "w+", newline="") as csv_file:
        writer = csv.writer(csv_file)
        writer.writerow(["Bid Number", "Best Document"])
        with ProcessPoolExecutor(max_workers=max_processes) as executor:
            # Submit tasks
            for _, row in dataframe.iterrows():
                bid_number = row["Bid Number"]
                items = ast.literal_eval(row["Items"])
                future = executor.submit(process_bid, bid_number, items)
                futures.append(future)
            # Collect results as they are completed
            for future in as_completed(futures):
                try:
                    bid_number, best_result = future.result()
                    if best_result:
                        writer.writerow([bid_number, best_result["url"]])
                    else:
                        print(f"(INFO) No feasible documents for bid {bid_number}")
                    print(f"(INFO) Finished processing bid number {bid_number}")
                except Exception as exc:
                    print(f"URL processing generated an exception: {exc}")
                finally:
                    iteration_number += 1
                    if iteration_number % 1000 == 0:
                        gc.collect()


if __name__ == "__main__":
    parser = argparse.ArgumentParser("parse_bids.py")
    parser.add_argument(
        "-p",
        "--processes",
        help="Number of max processes to spawn. Each bid is going to be assigned to a process",
        type=int,
        default=50,
    )
    parser.add_argument(
        "-i",
        "--input",
        help="Input CSV containing grouped bids",
        default="bids_grouped.csv",
    )
    args = parser.parse_args()
    print(f"(INFO) Going to use {args.processes} max processes")
    print(f"(INFO) Using input CSV {args.input}")
    main(args.processes, args.input)
</code></pre>
<p>The issue I'm having is that the CSV contains millions of lines. I developed the script on an M1 Mac (my everyday machine), where memory does not increase linearly with time.
I need to run it in production on a Linux machine, but for some reason, when I run it there, memory keeps increasing linearly with time (and eventually I run out of memory and the processes/threads start to fail).
The Python version is the same (3.10), and library versions are the same:</p>
<pre><code>certifi==2023.11.17
cffi==1.16.0
charset-normalizer==3.3.2
cryptography==41.0.7
idna==3.6
numpy==1.26.2
pandas==2.1.4
pycparser==2.21
PyMuPDF==1.23.8
PyMuPDFb==1.23.7
python-dateutil==2.8.2
pytz==2023.3.post1
requests==2.31.0
six==1.16.0
tqdm==4.66.1
tzdata==2023.3
urllib3==2.1.0
</code></pre>
<p>I might be leaking memory somewhere, but I guess I should see that on macOS, too, if that were the case.</p>
<p>Thanks!</p>
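One platform difference worth ruling out (an assumption, not a confirmed diagnosis for this script): glibc's allocator on Linux often keeps freed memory in per-thread arenas instead of returning it to the OS, so resident memory can grow even without a Python-level leak, while macOS's allocator behaves differently. glibc exposes `malloc_trim` to hand free heap pages back, callable via `ctypes`:

```python
import ctypes
import ctypes.util
import platform

def trim_heap() -> bool:
    """Ask glibc to return free heap pages to the OS; no-op off Linux."""
    if platform.system() != "Linux":
        return False
    libc_path = ctypes.util.find_library("c")
    if libc_path is None:  # e.g. musl-based systems without glibc
        return False
    libc = ctypes.CDLL(libc_path)
    # malloc_trim(0) returns 1 if memory was released back to the OS.
    return bool(libc.malloc_trim(0))
```

Calling `trim_heap()` at the point where the script already runs `gc.collect()` would test this hypothesis; setting `MALLOC_ARENA_MAX` to a small value before starting the workers is another common mitigation.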
|
<python><memory-management><memory-leaks>
|
2023-12-29 16:40:23
| 0
| 1,222
|
jpbalarini
|
77,732,940
| 967,621
|
Aggregate dataframe using simpler (vectorized?) operations instead of loops
|
<p>I have a piece of code that works correctly (gives the expected answer), but is both inefficient and unnecessarily complex. It uses loops that I want to simplify and make more efficient, possibly using vectorized operations. It also converts a dataframe to a series and then back into a dataframe again - another code chunk that needs work. In other words, I want to make this piece of code more pythonic.</p>
<p>I marked the problematic places in the code (below) with comments that start with <code># TODO:</code>.</p>
<p>The goal of the code is to summarize and aggregate the input dataframe <code>df</code> (which has the distributions of DNA fragment lengths for two types of regions: <code>all</code> and <code>captured</code>). This is a bioinformatic problem, part of a larger project that ranks enzymes by their ability to cut certain DNA regions into pieces of defined length. For the purpose of this question, the only relevant information is that <code>length</code> is integer and there are two types of DNA <code>regions</code>: <code>all</code> and <code>captured</code>. The aim is to produce the dataframe <code>df_pur</code> with purity <em>vs.</em> <code>length_cutoff</code> (the cutoff of length when purifying the DNA). The steps are:</p>
<ul>
<li>Compute the fraction of the total length for each type of <code>regions</code> that is above each of the <code>length_cutoffs</code>.</li>
<li>Find the ratio of this fraction: <code>captured / all</code> for each of the <code>length_cutoffs</code> and store the result in a dataframe.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import io
import pandas as pd

# This is a minimal reproducible example. The real dataset has 2
# columns and 10s of millions of rows. Column 1 is integer, column 2
# has 2 values: 'all' and 'captured':
TESTDATA="""
length regions
1 all
49 all
200 all
20 captured
480 captured
2000 captured
"""
df = pd.read_csv(io.StringIO(TESTDATA), sep='\s+')

# This is a minimal reproducible example. The real list has ~10
# integer values (cutoffs):
length_cutoffs = [10, 100, 1000]

df_tot_length = pd.DataFrame(columns=['tot_length'])
df_tot_length['tot_length'] = df.groupby(['regions']).length.sum()
df_tot_length.reset_index(inplace=True)
print(df_tot_length)
#     regions  tot_length
# 0       all         250
# 1  captured        2500

df_frc_tot = pd.DataFrame(columns=['regions', 'length_cutoff', 'sum_lengths'])
regions = df['regions'].unique()
df_index = pd.DataFrame({'regions': regions}).set_index('regions')

# TODO: simplify this loop (vectorize?):
for length_cutoff in length_cutoffs:
    df_cur = (pd.DataFrame({'length_cutoff': length_cutoff,
                            'sum_lengths': df[df['length'] >= length_cutoff]
                            .groupby(['regions']).length.sum()},
                           # Prevent dropping rows where no elements
                           # are selected by the above
                           # condition. Re-insert the dropped rows,
                           # use for those sum_lengths = NaN
                           index=df_index.index)
              # Correct the above sum_lengths = NaN to 0:
              .fillna(0)).reset_index()
    # Undo the effect of `fillna(0)` above, which casts the
    # integer column as float:
    df_cur['sum_lengths'] = df_cur['sum_lengths'].astype('int')
    # TODO: simplify this loop (vectorize?):
    for region in regions:
        df_cur.loc[df_cur['regions'] == region, 'frc_tot_length'] = (
            df_cur.loc[df_cur['regions'] == region, 'sum_lengths'] /
            df_tot_length.loc[df_tot_length['regions'] == region, 'tot_length'])
    df_frc_tot = pd.concat([df_frc_tot, df_cur], axis=0)

df_frc_tot.reset_index(inplace=True, drop=True)
print(df_frc_tot)
#     regions  length_cutoff  sum_lengths  frc_tot_length
# 0       all             10          249           0.996
# 1  captured             10         2500           1.000
# 2       all            100          200           0.800
# 3  captured            100         2480           0.992
# 4       all           1000            0           0.000
# 5  captured           1000         2000           0.800

# TODO: simplify the next 2 statements:
ser_pur = (df_frc_tot.loc[df_frc_tot['regions'] == 'captured', 'frc_tot_length']
           .reset_index(drop=True) /
           df_frc_tot.loc[df_frc_tot['regions'] == 'all', 'frc_tot_length']
           .reset_index(drop=True))
df_pur = pd.DataFrame({'length_cutoff': length_cutoffs, 'purity': ser_pur})
print(df_pur)
#    length_cutoff    purity
# 0             10  1.004016
# 1            100  1.240000
# 2           1000       inf
</code></pre>
<p><strong>Note:</strong></p>
<p><strong>I am primarily interested in making the code more clear, simple and pythonic.</strong></p>
<p>Among the answers that are tied for the above, I will prefer the more efficient one in terms of speed.</p>
<p>I have 8 GB available by default per job, but can increase this to 32 GB if needed. To benchmark efficiency, please use this real life-size example dataframe:</p>
<pre><code>num_rows = int(1e7)
df = pd.concat([
pd.DataFrame({'length': np.random.choice(range(1, 201), size=num_rows, replace=True), 'regions': 'all'}),
pd.DataFrame({'length': np.random.choice(range(20, 2001), size=num_rows, replace=True), 'regions': 'captured'}),
]).reset_index(drop=True)
</code></pre>
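One candidate shape for the whole computation (a sketch, not a drop-in answer): sort each region's lengths once, precompute suffix sums, and look up every cutoff with `np.searchsorted`. Both TODO loops disappear, and for the toy data it reproduces the `df_pur` values above, including the `inf` at cutoff 1000.

```python
import io
import numpy as np
import pandas as pd

TESTDATA = """
length regions
1 all
49 all
200 all
20 captured
480 captured
2000 captured
"""
df = pd.read_csv(io.StringIO(TESTDATA), sep=r'\s+')
length_cutoffs = [10, 100, 1000]

def frac_above(lengths: np.ndarray, cutoffs) -> np.ndarray:
    """Fraction of total length contributed by elements >= each cutoff."""
    lengths = np.sort(lengths)
    # suffix[i] = sum of lengths[i:]; the appended 0 handles cutoffs
    # beyond the maximum length:
    suffix = np.concatenate([np.cumsum(lengths[::-1])[::-1], [0]])
    idx = np.searchsorted(lengths, cutoffs, side='left')
    return suffix[idx] / suffix[0]

frc = {region: frac_above(group['length'].to_numpy(), length_cutoffs)
       for region, group in df.groupby('regions')}
with np.errstate(divide='ignore'):
    df_pur = pd.DataFrame({'length_cutoff': length_cutoffs,
                           'purity': frc['captured'] / frc['all']})
print(df_pur)
```

This touches the big arrays only once per region (one sort plus one cumulative sum), so memory stays at a couple of copies of the `length` column rather than growing with the number of cutoffs.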
|
<python><pandas><group-by><aggregate><vectorization>
|
2023-12-29 16:18:18
| 4
| 12,712
|
Timur Shtatland
|
77,732,923
| 17,519,895
|
Can't directly paste a string directly in a cell of Jupyter notebook
|
<p>In Jupyter Notebook version 7 (the latest), I can't directly paste a string/text into a cell; I first have to do a workaround (paste it into the search bar, then re-copy it and paste it into the cell). In contrast, if I use an older version, i.e. <strong>6.5.2</strong>, it works like it is supposed to.
Is this a problem with the new Notebook, or am I doing something wrong?</p>
|
<python><jupyter-notebook><jupyter>
|
2023-12-29 16:14:42
| 1
| 421
|
Aleef
|
77,732,836
| 7,243,493
|
Posting URL id to a flask blueprint endpoint
|
<p>I'm using Flask with Blueprints. I am trying to pass an ID from the URL with a POST request, but I don't understand the behaviour here:</p>
<p>i am on page localhost/2/updatestrat</p>
<p>This will post to /add_indicator</p>
<pre><code>let indi_data = await postJsonGetData(data, "/add_indicator");
</code></pre>
<pre><code>async function postJsonGetData(data, endpoint, method = "POST") {
  const options = {
    method: method,
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify(data),
  };
  let response = await fetch(endpoint, options);
  if (!response.ok) {
    throw new Error("Request failed");
  }
  const responseData = await response.json();
  return responseData;
}
</code></pre>
</code></pre>
<p>This will post to /2/load_conditions</p>
<pre><code> const { sell_conds, buy_conds } = await getJson("load_conditions");
</code></pre>
<pre><code>async function getJson(endpoint) {
  const options = {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
  };
  let response = await fetch(endpoint, options);
  if (!response.ok) {
    throw new Error("Request failed");
  }
  const responseData = await response.json();
  console.log(responseData, "DDDDDDDD");
  return responseData;
}
</code></pre>
</code></pre>
<p>How come getJson includes the ID of the page in the endpoint but postJsonGetData doesn't?</p>
<p>I can easily get the data from getJson like this</p>
<pre><code>@bp.route('/<int:id>/load_conditions', methods=['POST'])
def load_conditions(id):
</code></pre>
<p>but when I try to do the same with <code>/add_indicator</code>:</p>
<pre><code>@bp.route('/<int:strategy_id>/add_indicator', methods=('POST',))
@login_required
def add_indicator(strategy_id):
    if request.method == 'POST':
        print(strategy_id)
</code></pre>
<p>I get this error:
POST <a href="http://127.0.0.1:5000/add_indicator" rel="nofollow noreferrer">http://127.0.0.1:5000/add_indicator</a> 404 (NOT FOUND)</p>
<p>So clearly it is not posting to the correct endpoint.</p>
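The difference comes down to ordinary relative-URL resolution, which `fetch` inherits from the browser: a path without a leading slash resolves against the current page's path, while a leading slash resolves from the site root. The same rules can be checked from Python with `urllib.parse.urljoin`:

```python
from urllib.parse import urljoin

base = "http://127.0.0.1:5000/2/updatestrat"  # the page issuing the fetch

# "load_conditions" (no leading slash) replaces only the last path
# segment, keeping the /2/ prefix -- so the blueprint route matches:
print(urljoin(base, "load_conditions"))  # http://127.0.0.1:5000/2/load_conditions

# "/add_indicator" (leading slash) starts over from the root, so the
# <int:strategy_id> part of the route is never supplied -> 404:
print(urljoin(base, "/add_indicator"))   # http://127.0.0.1:5000/add_indicator
```

So the fix is to include the ID when building the endpoint, e.g. fetch `"add_indicator"` relative to the page, or an absolute path like `/2/add_indicator` built from the page's ID.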
|
<javascript><python><flask><post>
|
2023-12-29 15:54:25
| 1
| 568
|
Soma Juice
|
77,732,688
| 2,755,307
|
Forcing an 'eq' constraint function to 0 in scipy.optimize.minimize causes a failure
|
<p>I want to put a "dummy" constraint into my scipy.optimize.minimize call so that I can use it to interrogate variables as they change. I add this simple constraint:</p>
<pre><code># Code that fails.
def tracer(actuals, sReturn):  # To allow troubleshooting and insight.
    print('You asked to return {}.'.format(sReturn))
    return sReturn

consTracer = { 'type': 'eq', 'fun': tracer, 'args': (0,) }  # Fails!
</code></pre>
<p>The code goes on to call minimize() with <code>method = 'SLSQP'</code>. But it gets this message:</p>
<pre><code>Singular matrix C in LSQ subproblem (Exit mode 6)
Current function value: -0.0
Iterations: 1
Function evaluations: 8
Gradient evaluations: 1
Failed to find solution. Exit code 6
</code></pre>
<p>Now, I can solve this by changing it to 'ineq' and returning 1.0. So this works:</p>
<pre><code># Code that works.
def tracer(actuals, sReturn):  # To allow troubleshooting and insight.
    print('You asked to return {}.'.format(sReturn))
    return sReturn

consTracer = { 'type': 'ineq', 'fun': tracer, 'args': (1,) }  # Works!
</code></pre>
<p>That's a fine work-around, and I understand why it works, but <strong>I would like a conceptual understanding of why 'eq' of a hard coded 0.0 fails</strong>.</p>
<p>Wouldn't it be the case that every time it called "tracer" and saw 0.0, it would essentially find the constraint satisfied and move on to the other constraints?</p>
<p>Note there are eight other constraints, and without the "tracer" constraint above, it all works fine.</p>
<p><strong>Why wouldn't an 'eq' constraint that always returns 0.0 just work?</strong></p>
<p>The full working code is below. (Update, there are two bodies of working code, and the second narrows it even more)</p>
<p>Thanks!</p>
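A conceptual sketch of the likely cause (my reading, not a quote from the scipy docs): SLSQP linearizes every constraint at each iteration, and a function that returns a hard-coded constant has a gradient of exactly zero everywhere. The row that constraint contributes to the linearized equality-constraint matrix is therefore all zeros, which is what makes the LSQ subproblem's matrix singular; an 'ineq' constraint that is strictly satisfied can instead be treated as inactive. A finite-difference check makes the zero row visible:

```python
import numpy as np

def fd_grad(fun, x, eps=1e-6):
    """Central finite-difference gradient of a scalar function."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = (fun(x + step) - fun(x - step)) / (2 * eps)
    return g

def tracer(actuals, sReturn=0):  # the dummy constraint from the question
    return sReturn

grad = fd_grad(tracer, np.zeros(7))
print(grad)  # all zeros: this constraint contributes a zero Jacobian row
```

The same reasoning would explain the rounding variant below: rounding makes the constraint piecewise-constant, so its numerical gradient is zero almost everywhere as well.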
<pre><code>import numpy as np
from scipy.optimize import minimize

chips = 5               # You have five chips to play.
value = [3,8,7,2,5,4,1] # Each position has a different value.
bounds = list((0,2) for _ in range(len(value)))  # Max 2 chips per position.
actuals = [0,0,0,0,0,0,0]

print("@ value   = {}".format(value))
print("@ actuals = {}".format(actuals))
print("@ bounds  = {}".format(bounds))

def objective(actuals):
    return -np.sum(actuals * value)  # Maximize the chips * position.

def useAll(actuals):
    return np.sum(actuals) - chips   # Use all your chips.

consUseAll = { 'type': 'eq', 'fun': useAll }

def tracer(actuals, sReturn):  # To allow troubleshooting and insight.
    print('You asked to return {}.'.format(sReturn))
    return sReturn

# Pick one of the following: <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
consTracer = { 'type': 'eq', 'fun': tracer, 'args': (0,) }  # Fails!
# consTracer = { 'type': 'ineq', 'fun': tracer, 'args': (1,) }  # Works!

constraints = [consUseAll, consTracer]

solution = minimize(
    fun = objective,
    x0 = actuals,
    bounds = bounds,
    method = 'SLSQP',
    constraints = constraints,
    options = {'disp': True}
)

if (solution.success) and (solution.status == 0):  # Solution is feasible and optimal.
    print("@ solution.fun (inverted) = {} # Expecting 35.".format(-round(solution.fun)))
    print(solution.x.round())
elif solution.status != 0:
    print("Failed to find solution. Exit code", solution.status)
else:
    print(solution.message)  # Something else went wrong.
</code></pre>
<p>UPDATE: Forget my dummy tracer function. Notice that even if I do a "round" on the return value of the constraint function, it fails. If you run it without the "round", it gets into tiny fractions. But I wonder if it needs those tiny fractions somehow?</p>
<p>Try the code below with and without the commented line that does the rounding and see what happens.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
chips = 5 # You have five chips to play.
value = [3,8,7,2,5,4,1] # Each position has a different value.
bounds = list((0,2) for _ in range(len(value))) # Max 2 chips per position.
actuals = [0,0,0,0,0,0,0]
def objective(actuals):
return -np.sum(actuals * value) # Maximize the chips times position.
def useAll(actuals):
fReturn = np.sum(actuals) - chips
# fReturn = round(fReturn) # <<<<< Note that if we round, it fails too.
return fReturn
constraints = { 'type': 'eq', 'fun': useAll }
solution = minimize (
fun = objective,
x0 = actuals,
bounds = bounds,
method = 'SLSQP',
constraints = constraints,
options = {'disp':True}
)
if (solution.success) and (solution.status == 0): # Solution is feasible and optimal.
print("@ solution.fun (inverted) = {} # Expecting 35.".format(-round(solution.fun)))
print(solution.x.round())
elif solution.status != 0:
print("Failed to find solution. Exit code", solution.status)
else:
print(solution.message) # Something else went wrong.
</code></pre>
<p>I further suspect that the constraint functions for 'eq' must allow for movement "past" the zero line. Consider this output, made by putting a print in the "useAll" function:</p>
<pre><code>-4.999999985098839
-4.999999985098839
-4.999999985098839
5.329070518200751e-15
5.329070518200751e-15
1.4901166522918174e-08
1.4901166522918174e-08
1.4901166522918174e-08
</code></pre>
<p>Notice that it starts out less than zero, then crosses over into more than zero. Which I suspect means it deliberately "looks past" zero for some reason, then comes back and says, "meh, that's zero alright". That's obviously an intuitive observation, not a rigorous one! But something like that is going on.</p>
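<p>As a sanity check of this intuition (my own sketch, not from the SciPy docs): SLSQP linearizes each constraint, and the row it builds in the matrix C is the constraint's gradient. For a function that always returns a hard-coded 0.0, that gradient is identically zero, which makes C singular:</p>

```python
def tracer(x):
    return 0.0  # hard-coded 'eq' constraint value, as in the question

# Forward-difference gradient, roughly what SLSQP computes internally
# when no analytic Jacobian is supplied.
x0 = [0.0] * 7
eps = 1.49e-8
grad = [(tracer(x0[:i] + [x0[i] + eps] + x0[i + 1:]) - tracer(x0)) / eps
        for i in range(7)]
print(grad)  # [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

<p>An all-zero constraint row gives the linearized equality system no usable direction, which is consistent with "Singular matrix C in LSQ subproblem".</p>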
|
<python><scipy><scipy-optimize>
|
2023-12-29 15:15:24
| 0
| 953
|
James Madison
|
77,732,534
| 4,799,172
|
Check for duplicates in a database and report back whether a new record was created or not
|
<p>I have a repeated pattern in my code that I feel is an antipattern. I have an <code>sqlalchemy</code> engine set up as this:</p>
<pre><code>engine = create_engine(Config.CONN_STRING)
autoengine = engine.execution_options(isolation_level="AUTOCOMMIT")
</code></pre>
<p>When I want to make a new entry, using raw SQL I have, e.g.:</p>
<pre><code>class ProductCategory(Base):
__tablename__ = 'product_categories'
id: Mapped[int] = mapped_column(primary_key=True)
name = Column(String)
@staticmethod
def create_new(
name: str
):
with autoengine.connect() as conn:
q = text(
"""
SELECT
name
FROM
product_categories
WHERE
name = :name
""")
data = list(conn.execute(q, {'name': name}))
if data:
return False, f"Product category: {name} already exists"
q = text(
"""
INSERT INTO product_categories (
name
)
VALUES (
:name
)
""")
conn.execute(q, {'name': name})
return True, f"Product category: {name} successfully created"
</code></pre>
<p>To me, it should be possible to have a single query to do this <em>and</em> report back whether it was successful or not (note the <code>return</code> statements giving me a tuple to give back to the frontend in both cases). I've had a go with CTEs and gone down the path of something like <code>ON CONFLICT DO NOTHING RETURNING EXCLUDED.name</code> but the primary key is not <code>name</code>, it's an auto-generated <code>id</code> and I got lost. I can do the reverse no problem because I get an <code>id</code> from the frontend:</p>
<pre><code>@staticmethod
def delete(id_: int):
with autoengine.connect() as conn:
q = text(
"""
UPDATE
unit_measures
SET
is_deprecated = true
WHERE
id = :id
RETURNING
name
""")
deleted = list(conn.execute(q, {'id': id_}))
if not deleted:
return False, "Unit measure ID not found"
return True, f"Unit measure successfully deleted: {deleted[0][0]}"
</code></pre>
<p>Is there a way to consolidate the two queries into one? This is Postgres but I'm happy for any dialect if the approach can be translated.</p>
<hr />
<p>To clarify: I am well aware of race conditions and I know how they are likely with the current setup. If creating a <code>UNIQUE</code> constraint or an index is <em>necessary</em> to consolidate my query, that's an acceptable answer. However, it would be great if there is a solution that uses the existing DB setup (no index on <code>name</code>).</p>
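<p>For what it's worth, here is a runnable sketch of the single-statement idea using an in-memory SQLite database (table and column names mirror the question). Note it does rely on a <code>UNIQUE</code> constraint on <code>name</code>, which the question says is acceptable if necessary; Postgres supports the same <code>ON CONFLICT (name) DO NOTHING</code> clause:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product_categories "
             "(id INTEGER PRIMARY KEY, name TEXT UNIQUE)")

def create_new(name):
    # rowcount is 1 when the row was inserted, 0 when the conflict
    # clause skipped it -- so one statement answers both questions.
    cur = conn.execute(
        "INSERT INTO product_categories (name) VALUES (?) "
        "ON CONFLICT (name) DO NOTHING", (name,))
    if cur.rowcount:
        return True, f"Product category: {name} successfully created"
    return False, f"Product category: {name} already exists"

print(create_new("widgets"))  # (True, '... successfully created')
print(create_new("widgets"))  # (False, '... already exists')
```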
|
<python><sql><sqlalchemy>
|
2023-12-29 14:41:23
| 1
| 13,314
|
roganjosh
|
77,732,498
| 221,342
|
How to get for each node the sum of the edge weights for children in a networkx tree
|
<p>In the example below, how to get the sum of all the children edges from the ROOT.</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
G = nx.DiGraph()
G.add_node("ROOT")
for i in range(5):
G.add_node("Child_%i" % i)
G.add_node("Grandchild_%i" % i)
G.add_node("Greatgrandchild_%i" % i)
G.add_edge("ROOT", "Child_%i" % i, weight=(i+1))
G.add_edge("Child_%i" % i, "Grandchild_%i" % i, weight=(2*i+1))
G.add_edge("Grandchild_%i" % i, "Greatgrandchild_%i" % i, weight=(3*i+1))
from networkx.drawing.nx_pydot import graphviz_layout
pos = graphviz_layout(G, prog="dot")
nx.draw(G, pos, with_labels=True, arrows=True)
labels = nx.get_edge_attributes(G,'weight')
nx.draw_networkx_edge_labels(G,pos,edge_labels=labels)
plt.show()
</code></pre>
<p>For example in the example below, I would like to have for every node:</p>
<ul>
<li>Grandchild_4 = 13</li>
<li>Child_4 = 22</li>
<li>GrandChild_3 = 10</li>
<li>Child_3 = 17</li>
<li>...</li>
<li>ROOT = <code>sum of all the edges on this graph</code></li>
</ul>
<p><a href="https://i.sstatic.net/nuc7n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuc7n.png" alt="enter image description here" /></a></p>
<p>EDIT: here is an example which works in the accepted solution, but not the other one:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
G = nx.DiGraph()
G.add_node("ROOT")
s = 0
for i in range(5):
G.add_node("C_%i" % i)
G.add_node("GC_%i" % i)
G.add_node("GGC_%i" % i)
G.add_node("GGD_%i" % i)
G.add_edge("ROOT", "C_%i" % i, weight=(i+1))
s += i+1
G.add_edge("C_%i" % i, "GC_%i" % i, weight=(2*i+1))
s += 2*i+1
G.add_edge("GC_%i" % i, "GGC_%i" % i, weight=(3*i+1))
s += 3*i+1
G.add_edge("GC_%i" % i, "GGD_%i" % i, weight=(4 * i + 1))
s +=4 * i + 1
print(s)
from networkx.drawing.nx_pydot import graphviz_layout
pos = graphviz_layout(G, prog="dot")
nx.draw(G, pos, with_labels=True, arrows=True)
labels = nx.get_edge_attributes(G,'weight')
nx.draw_networkx_edge_labels(G,pos,edge_labels=labels)
# plt.show()
###
leaves = {n for n in G.nodes() if G.out_degree(n)==0}
out = {}
for n in leaves:
for parent, child in reversed(next(nx.all_simple_edge_paths(G, 'ROOT', n))):
out[parent] = (out.get(parent, 0) + out.get(child, 0)
+ G.get_edge_data(parent, child)['weight']
)
from pprint import pp
pp(out)
data = {
node: sum(G.edges[edge]["weight"] for edge in nx.bfs_edges(G, node))
for node in G.nodes
}
from pprint import pp
pp(data)
</code></pre>
|
<python><graph><tree><networkx>
|
2023-12-29 14:34:50
| 2
| 10,193
|
BlueTrin
|
77,732,267
| 2,003,686
|
Take python input data directly from a Jupyter cell
|
<p>I would like to create an interactive notebook with Jupyter Lab. My idea is to use this notebook for teaching Python programming in a way similar to platforms like HackerRank, using test cases with different input sets. The students would write their code in a Jupyter cell, and this code would be tested with input taken from another cell.</p>
<p>To illustrate this with an easy example, I may ask a student to write a program that takes two numbers and prints their sum. Each number is given on a separate line.
The student could write the following program in a code cell:</p>
<pre><code>x = (int)(input())
y = (int)(input())
print(x+y)
</code></pre>
<p>And then I would provide a test case in another cell with the following input data for that program:</p>
<pre><code>5
6
</code></pre>
<p>I would give the result (11) in another cell and the student should execute his program and compare the result.</p>
<p>At first, I thought about providing the input datasets in dataframes imported from a csv file included in the notebook, but then I was wondering if something as in the former example (taking the input dataset directly from a jupyter cell) would be possible.</p>
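<p>For reference, redirecting <code>sys.stdin</code> is one way to make <code>input()</code> read from a string, which is essentially what "taking the input from another cell" would boil down to (a minimal sketch; the variable names are mine):</p>

```python
import io
import sys

test_case = "5\n6\n"          # contents of the hypothetical input cell
sys.stdin = io.StringIO(test_case)

x = int(input())
y = int(input())
result = x + y
print(result)  # 11

sys.stdin = sys.__stdin__     # restore the real stdin afterwards
```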
|
<python><jupyter-notebook><jupyter><user-input><jupyter-lab>
|
2023-12-29 13:35:26
| 1
| 1,940
|
rodrunner
|
77,732,146
| 4,451,521
|
Can not use add with Poetry
|
<p>I just did poetry init (and did not install anything)</p>
<p>After that if I do</p>
<pre><code>poetry add pandas
'HttpResponse' object has no attribute 'strict'
</code></pre>
<p>What might be the problem, and how can I solve it?
(I read a similar question, but did not understand the reason for the problem. Please don't just say "read the answer.")</p>
|
<python><python-poetry>
|
2023-12-29 13:08:21
| 1
| 10,576
|
KansaiRobot
|
77,732,090
| 955,273
|
polars: multiply 2 LazyFrames together by column
|
<p>I have 2 polars LazyFrames:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import numpy as np

n = 5  # row count; `n` was not defined in the original snippet
df1 = pl.LazyFrame(data={
'foo': np.random.uniform(0,127, size= n).astype(np.float64),
'bar': np.random.uniform(1e3,32767, size= n).astype(np.float64),
'baz': np.random.uniform(1e6,2147483, size= n).astype(np.float64)
})
df2 = pl.LazyFrame(data={
'foo': np.random.uniform(0,127, size= n).astype(np.float64),
'bar': np.random.uniform(1e3,32767, size= n).astype(np.float64),
'baz': np.random.uniform(1e6,2147483, size= n).astype(np.float64)
})
</code></pre>
<p>I would like to multiply each column in df1 with its respective column in df2.</p>
<p>If I convert these to non-lazy <code>DataFrames</code> I can achieve this:</p>
<pre class="lang-py prettyprint-override"><code>df1.collect() * df2.collect()
</code></pre>
<pre class="lang-none prettyprint-override"><code>foo bar baz
f64 f64 f64
3831.295563 6.4637e6 3.3669e12
164.194271 2.9691e8 2.2696e12
3655.918761 1.9444e7 2.3625e12
7191.48868 3.7044e7 3.1687e12
9559.505277 2.6864e8 2.5426e12
</code></pre>
<p>However, if I try to perform the same expression on the <code>LazyFrames</code>, I get an exception</p>
<pre class="lang-py prettyprint-override"><code>df1 * df2
</code></pre>
<blockquote>
<p><code>TypeError</code>: unsupported operand type(s) for <code>*</code>: '<code>LazyFrame</code>' and '<code>LazyFrame</code>'</p>
</blockquote>
<p>How can I perform column-wise multiplication across 2 <code>LazyFrames</code>?</p>
|
<python><dataframe><python-polars>
|
2023-12-29 12:52:35
| 1
| 28,956
|
Steve Lorimer
|
77,731,909
| 3,291,993
|
Running a function on multiple columns of a pandas dataframe in parallel
|
<p>Assume we have a pandas dataframe and 100 columns from S1 to S100.
There might be other columns in the data frame, but we are interested in these only.
We need to get the number of rows satisfying the condition below.</p>
<pre><code>num_of_rows = len(df[df[S1] >= float(cutoff)])
</code></pre>
<p>Is there a way to do this in parallel for 100 columns and get an array of num_of_rows resulting from each column?</p>
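<p>For context, the per-column check can usually be vectorized into a single comparison over all columns at once, which sidesteps the need for explicit parallelism (a sketch with a small 3-column frame standing in for S1..S100):</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cols = [f"S{i}" for i in range(1, 4)]      # would be S1..S100 in practice
df = pd.DataFrame(rng.random((1000, 3)), columns=cols)
cutoff = 0.5

# One vectorized comparison; the result is a Series of counts per column.
num_of_rows = (df[cols] >= float(cutoff)).sum()
print(num_of_rows)
```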
|
<python><pandas><dataframe>
|
2023-12-29 12:08:30
| 1
| 1,147
|
burcak
|
77,731,840
| 13,608,794
|
Python - os.path.normpath, but leaving the slashes intact
|
<p>I want to create a Python function that will simultaneously:</p>
<ul>
<li>normalize a path</li>
<li>leave the slashes intact</li>
</ul>
<pre class="lang-py prettyprint-override"><code>def rawnorm(path: str) -> str:
"""
Normalizes path, leaving the slashes intact.
Args:
path: File path.
"""
slashes = "\\", "/"
# ?????
return path
</code></pre>
<p>What I want to achieve:</p>
<pre class="lang-py prettyprint-override"><code>>>> import os
>>>
>>> os.path.normpath(r"a\b\c\..\d/e\f\g\..\h/i")
'a\\b\\d\\e\\f\\h\\i'
>>>
>>> # I want this
>>> rawnorm(r"a\b\c\..\d/e\f\g\..\h/i")
'a\\b\\d/e\\f\\h/i'
</code></pre>
<p>How can I do it? I am fairly sure there is a built-in function for this somewhere, but I cannot find it.</p>
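<p>I don't know of a stdlib function that does exactly this, but here is a hand-rolled sketch (my own approach, names mine) that collapses <code>x/..</code> pairs while remembering which separator preceded each surviving component:</p>

```python
import re

def rawnorm(path: str) -> str:
    # Split on both separators, keeping them via the capture group,
    # then pair each component with the separator that preceded it.
    parts = re.split(r'([\\/])', path)
    pairs = [(None, parts[0])] + list(zip(parts[1::2], parts[2::2]))
    stack = []
    for sep, tok in pairs:
        if tok == '..' and stack and stack[-1][1] not in ('..', ''):
            stack.pop()          # drop the component the '..' cancels
        else:
            stack.append((sep, tok))
    return ''.join((sep or '') + tok for sep, tok in stack)

print(rawnorm(r"a\b\c\..\d/e\f\g\..\h/i"))  # a\b\d/e\f\h/i
```

<p>This is only a sketch: it doesn't handle <code>.</code> segments, drive letters, or leading <code>..</code> the way <code>os.path.normpath</code> does.</p>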
|
<python><operating-system>
|
2023-12-29 11:56:04
| 0
| 303
|
kubinka0505
|
77,731,430
| 11,211,759
|
Specify a list of arguments in the pylintrc file for "unused-argument"
|
<p>Can I specify a list of arguments in the pylintrc file instead of disabling unused-argument altogether?</p>
<p>I need it to avoid messages like this:</p>
<p>W0613: Unused argument 'evt' (unused-argument)</p>
|
<python><pylint>
|
2023-12-29 10:32:05
| 1
| 1,318
|
1966bc
|
77,731,218
| 13,838,385
|
Why do optional arguments appear before positional arguments in the help output from argparse?
|
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>import argparse
import sys
from fontbro import Font
def get_font_format(font_path, ignore_flavor):
"""
Gets the font format: otf, ttf, woff, woff2.
:param font_path: Path to the font file.
:param ignore_flavor: If True, the original format without compression will be returned.
:returns: The format.
:rtype: str or None
"""
font = Font(font_path)
return font.get_format(ignore_flavor=ignore_flavor)
def main():
parser = argparse.ArgumentParser(description='Get the font format of a font file.')
parser.add_argument('font_path', type=str, help='The path to the font file')
parser.add_argument('--ignore_flavor', action='store_true', help='Return original format without compression if set to True')
# Print help if no arguments supplied
if len(sys.argv)==1:
parser.print_help(sys.stderr)
sys.exit(1)
args = parser.parse_args()
# Get and print the font format
format = get_font_format(args.font_path, args.ignore_flavor)
print(format)
if __name__ == "__main__":
main()
</code></pre>
<p>If I run the script without any arguments I get help/usage:</p>
<pre><code>usage: FontOpsGetFontFormat.py [-h] [--ignore_flavor] font_path
Get the font format of a font file.
positional arguments:
font_path The path to the font file
options:
-h, --help show this help message and exit
--ignore_flavor Return original format without compression if set to True
</code></pre>
<p>Why isn't the usage info formatted like this instead?</p>
<p><code>usage: FontOpsGetFontFormat.py font_path [-h] [--ignore_flavor]</code></p>
<p>From my limited understanding, positional arguments come first and others come after - is this incorrect?</p>
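<p>To my knowledge, argparse's default <code>HelpFormatter</code> always prints optionals before positionals in the generated usage line, regardless of the order of <code>add_argument</code> calls. If you want a different order, you can supply the usage string yourself (a sketch):</p>

```python
import argparse

# Overriding `usage` bypasses the auto-generated ordering entirely.
parser = argparse.ArgumentParser(
    prog="FontOpsGetFontFormat.py",
    usage="%(prog)s font_path [-h] [--ignore_flavor]",
    description="Get the font format of a font file.")
parser.add_argument("font_path")
parser.add_argument("--ignore_flavor", action="store_true")
print(parser.format_usage())
```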
|
<python><argparse>
|
2023-12-29 09:39:15
| 0
| 577
|
fmotion1
|
77,731,198
| 955,273
|
polars: product of all columns except one in 2 LazyFrames
|
<p>I am learning polars, having come from pandas.</p>
<p>In pandas land, I frequently operate on 2 dataframes, each with a datetime index, and the same columns.</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

n = 5  # row count; imports and `n` were not shown in the original snippet
df1 = pd.DataFrame(data={
'time': pd.date_range('2023-01-01', periods=n, freq='1 min'),
'foo': np.random.uniform(0,127, size=n).astype(np.float64),
'bar': np.random.uniform(1e3,32767, size=n).astype(np.float64),
'baz': np.random.uniform(1e6,2147483, size=n).astype(np.float64)
}).set_index('time')
df2 = pd.DataFrame(data={
'time': pd.date_range('2023-01-01', periods=n, freq='1 min'),
'foo': np.random.uniform(0,127, size=n).astype(np.float64),
'bar': np.random.uniform(1e3,32767, size=n).astype(np.float64),
'baz': np.random.uniform(1e6,2147483, size=n).astype(np.float64)
}).set_index('time')
</code></pre>
<p>To calculate the product of the columns I can do the following:</p>
<pre class="lang-py prettyprint-override"><code>df1 * df2
</code></pre>
<pre><code> foo bar baz
time
2023-01-01 00:00:00 8720.704791 1.632745e+08 2.276452e+12
2023-01-01 00:01:00 310.271257 3.375341e+08 2.195998e+12
2023-01-01 00:02:00 2936.646429 5.506997e+08 2.005228e+12
2023-01-01 00:03:00 12342.312737 3.383745e+08 3.779531e+12
2023-01-01 00:04:00 382.163185 1.371315e+08 1.529299e+12
</code></pre>
<p>The index remains the same, and each column in dataframe 1 is multiplied by its respective column in dataframe 2.</p>
<p>I am now trying to do the same with polars LazyFrames.</p>
<p>Here is the polars LazyFrame equivalent to my pandas dataframes above:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import polars as pl

n = 5  # row count; imports and `n` were not shown in the original snippet
df1 = pl.DataFrame(data={
'time': pd.date_range('2023-01-01', periods=n, freq='1 min'),
'foo': np.random.uniform(0,127, size= n).astype(np.float64),
'bar': np.random.uniform(1e3,32767, size= n).astype(np.float64),
'baz': np.random.uniform(1e6,2147483, size= n).astype(np.float64)
}).lazy()
df2 = pl.DataFrame(data={
'time': pd.date_range('2023-01-01', periods=n, freq='1 min'),
'foo': np.random.uniform(0,127, size= n).astype(np.float64),
'bar': np.random.uniform(1e3,32767, size= n).astype(np.float64),
'baz': np.random.uniform(1e6,2147483, size= n).astype(np.float64)
}).lazy()
</code></pre>
<p>I believe the correct way to operate on these 2 LazyFrames is to <a href="https://docs.pola.rs/py-polars/html/reference/api/polars.concat.html" rel="nofollow noreferrer"><code>concat</code></a>, <a href="https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.group_by.html" rel="nofollow noreferrer"><code>group_by</code></a> and then apply some aggregation.</p>
<p>However, whilst something like <code>sum</code> works as expected:</p>
<pre class="lang-py prettyprint-override"><code>pl.concat([df1, df2]).group_by('time').sum().sort('time').collect()
</code></pre>
<pre><code>time foo bar baz
datetime[ns] f64 f64 f64
2023-01-01 00:00:00 117.758176 35887.733953 3.4859e6
2023-01-01 00:01:00 83.828093 32037.128498 3.3425e6
2023-01-01 00:02:00 158.950876 51900.065898 2.1312e6
2023-01-01 00:03:00 90.781075 41924.70712 3.4727e6
2023-01-01 00:04:00 156.831011 34252.423581 3.0899e6
</code></pre>
<p>I do not know how to perform a <code>product</code> aggregation</p>
<p>Things I have tried:</p>
<ul>
<li><code>agg(col('*').mul(col'*'))</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code>pl.concat([df1, df2]).group_by('time').agg(pl.col("*").mul(pl.col("*"))).sort('time').collect()
</code></pre>
<p>I'm not even sure what this is doing - it produces a list of values for each column</p>
<pre><code>time foo bar baz
datetime[ns] list[f64] list[f64] list[f64]
2023-01-01 00:00:00 [12359.923484, 114.893701] [1.3056e8, 4.3358e8] [1.2441e12, 3.0416e12]
2023-01-01 00:01:00 [1212.065713, 15767.846044] [8.9478e8, 8.6531e8] [1.3674e12, 2.0145e12]
2023-01-01 00:02:00 [38.587401, 3658.818448] [7.2488e8, 2.1755e8] [3.6923e12, 1.6436e12]
2023-01-01 00:03:00 [298.835241, 1202.613949] [7.9310e8, 5.6334e8] [1.6880e12, 1.9158e12]
2023-01-01 00:04:00 [11931.488236, 697.035171] [7.1008e7, 1.0676e9] [2.1519e12, 3.2458e12]
</code></pre>
<p>How can I perform column-wise multiplication on my 2 dataframes, using <code>time</code> as the index?</p>
|
<python><python-polars>
|
2023-12-29 09:35:12
| 1
| 28,956
|
Steve Lorimer
|
77,730,997
| 1,134,071
|
Deactivate manually a virtual environment activated by direnv without leaving the project directory or shell
|
<p>This is my first approach to direnv. I've been trying to adapt direnv to Poetry and venv.</p>
<p>~/.zshrc:</p>
<pre><code># skip loading direnv if it is already loaded
if test -z $DIRENV_DIR; then
    if whereis direnv > /dev/null 2>&1 ; then
case "$(basename $SHELL)" in
zsh)
eval "$(direnv hook zsh)"
;;
bash)
eval "$(direnv hook bash)"
;;
*)
                echo "skip" > /dev/null
;;
esac
fi
fi
# substitute prompt (direnv)
setopt PROMPT_SUBST
show_virtual_env() {
if [[ -n "$VIRTUAL_ENV" && -n "$DIRENV_DIR" ]]; then
echo "($(basename $VIRTUAL_ENV)) "
fi
}
PS1='$(show_virtual_env)'$PS1
</code></pre>
<p>~/.config/direnv/direnvrc:</p>
<pre><code>layout_poetry() {
PYPROJECT_TOML="${PYPROJECT_TOML:-pyproject.toml}"
if [[ ! -f "$PYPROJECT_TOML" ]]; then
log_status "No pyproject.toml found. Executing \`poetry init\` to create a \`$PYPROJECT_TOML\` first."
poetry init
fi
if [[ -d ".venv" ]]; then
VIRTUAL_ENV="$(pwd)/.venv"
else
VIRTUAL_ENV=$(poetry env info --path 2>/dev/null ; true)
fi
if [[ -z $VIRTUAL_ENV || ! -d $VIRTUAL_ENV ]]; then
log_status "No virtual environment exists. Executing \`poetry install\` to create one."
poetry install
VIRTUAL_ENV=$(poetry env info --path)
fi
PATH_add "$VIRTUAL_ENV/bin"
export POETRY_ACTIVE=1
export VIRTUAL_ENV
}
</code></pre>
<p>Everything works, but there's no way to deactivate the virtual environment.</p>
<p>.envrc with Poetry:</p>
<pre><code>layout poetry
</code></pre>
<p>.envrc with (<code>python -m venv .venv</code>):</p>
<pre><code>layout python
</code></pre>
<p>In both cases, <code>C-d</code> or <code>exit</code> exits the shell, and <code>deactivate</code> is unavailable, which is somewhat unexpected in the latter case. What can I do to deactivate the virtual environment and stay within the same directory? And to get back into the virtual environment the same way, for that matter.</p>
|
<python><python-venv><python-poetry><direnv>
|
2023-12-29 08:43:40
| 0
| 2,884
|
Alexey Orlov
|
77,730,834
| 8,792,159
|
Separate inner logic of function from displaying user information using tqdm
|
<p>I wonder if it is possible to separate the logic of a function from the user information displayed with <code>tqdm</code>. Currently, my function takes a <code>verbose</code> argument and includes an if-else statement so that the progress information is displayed or not depending on the user's input. But I think it would be cleaner if the function itself had nothing to do with user information, and verbosity could somehow be defined outside of its scope (that would allow me to get rid of the if-else statement and the verbose argument). Currently I do something like this:</p>
<pre class="lang-py prettyprint-override"><code>from tqdm import trange
from time import sleep
# user defined
verbose = True
def my_function(verbose):
if verbose == True:
for i in trange(100):
sleep(0.01)
elif verbose == False:
for i in range(100):
sleep(0.01)
my_function(verbose)
</code></pre>
<p><strong>EDIT:</strong> The code above is just an example of a tqdm progress bar. I don't look for an exact solution that works with <code>trange</code> but I might also end up using <a href="https://github.com/tqdm/tqdm#usage" rel="nofollow noreferrer">any of the other implementations</a>. I should specify that I only know that this function does something using an iterable / for-loop + I want to use <code>tqdm</code> to print the progress to the console + I want this function to be either verbose or not without having to use an if-else statement inside that function.</p>
<p>Instead, is it possible to do something like this?</p>
<pre class="lang-py prettyprint-override"><code>verbose = True
if verbose == True:
with tqdm():
my_function()
elif verbose == False:
my_function()
</code></pre>
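<p>One pattern that achieves this separation (my own sketch, not a tqdm feature) is to inject the iterator factory, so the function body never mentions tqdm at all:</p>

```python
def my_function(progress=range):
    # The function only knows it receives *some* range-like factory;
    # whether it draws a progress bar is the caller's decision.
    total = 0
    for i in progress(100):
        total += i
    return total

# The caller decides about verbosity:
# from tqdm import trange
# my_function(progress=trange)   # verbose, with a progress bar
print(my_function())             # silent  -> 4950
```

<p>The same idea extends to other tqdm wrappers, e.g. passing <code>functools.partial(tqdm, ...)</code> as the factory.</p>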
|
<python><progress-bar><tqdm>
|
2023-12-29 07:55:57
| 1
| 1,317
|
Johannes Wiesner
|
77,730,831
| 8,046,443
|
Image dimension mismatch while trying to add Noise to image using Keras Sequential
|
<p>To Recreate this question's ask on your system, please find the Source code and Dataset <a href="https://www.swisstransfer.com/d/55b0d34b-02a0-4cd6-be26-78cee6e4899c" rel="nofollow noreferrer">here</a></p>
<p><strong>What I am trying?</strong><br />
I am trying to create a simple GAN (Generative Adversarial N/w) where I am trying to recolor Black and White images using a few ImageNet images.</p>
<hr />
<p><strong>What Process am I following?</strong><br />
I have taken a few dog images, which are stored in the <code>./ImageNet/dogs/</code> directory. Using Python code I have created 2 more steps, where I convert:</p>
<ol>
<li>Dog images into 224 x 224 resolution and save in <code>./ImageNet/dogs_lowres/</code></li>
<li>Dog Low Res. images into Grayscale and save in <code>./ImageNet/dogs_bnw/</code></li>
<li>Feed the Low Res BnW images to GAN model and generate colored images.</li>
</ol>
<hr />
<p><strong>Where am I Stuck?</strong><br />
I am stuck at understanding how the image dimensions / shapes are used.
I am getting this error:</p>
<pre><code>ValueError: `logits` and `labels` must have the same shape, received ((32, 28, 28, 3) vs (32, 224, 224)).
</code></pre>
<p>Here's the code for Generator and Discriminator:</p>
<pre><code># GAN model for recoloring black and white images
generator = Sequential()
generator.add(Dense(7 * 7 * 128, input_dim=100))
generator.add(Reshape((7, 7, 128)))
generator.add(Conv2DTranspose(64, kernel_size=5, strides=2, padding='same'))
generator.add(Conv2DTranspose(32, kernel_size=5, strides=2, padding='same'))
generator.add(Conv2DTranspose(3, kernel_size=5, activation='sigmoid', padding='same'))
# Discriminator model
discriminator = Sequential()
discriminator.add(Flatten(input_shape=(224, 224, 3)))
discriminator.add(Dense(1, activation='sigmoid'))
# Compile the generator model
optimizer = Adam(learning_rate=0.0002, beta_1=0.5)
generator.compile(loss='binary_crossentropy', optimizer=optimizer)
# Train the GAN to recolor images
epochs = 10000
batch_size = 32
</code></pre>
<p>and the training loop is as follows:</p>
<pre><code>for epoch in range(epochs):
idx = np.random.randint(0, bw_images.shape[0], batch_size)
real_images = bw_images[idx]
noise = np.random.normal(0, 1, (batch_size, 100))
generated_images = generator.predict(noise)
# noise_rs = noise.reshape(-1, 1)
g_loss = generator.train_on_batch(noise, real_images)
if epoch % 100 == 0:
print(f"Epoch: {epoch}, Generator Loss: {g_loss}")
</code></pre>
<hr />
<p><strong>Where is the Error?</strong>
I get the error on this line:<br />
<code>g_loss = generator.train_on_batch(noise, real_images)</code></p>
<p>When I check for the <code>shape</code> of noise and real_images objects, this is what I get:</p>
<pre><code>real_images.shape
(32, 224, 224)
noise.shape
(32, 100)
</code></pre>
<p>Any help/suggestion is appreciated.</p>
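<p>A quick arithmetic sketch of where the <code>(32, 28, 28, 3)</code> in the error comes from: the generator starts from a 7x7 feature map, and each stride-2 <code>Conv2DTranspose</code> doubles the spatial size, so two of them end at 28x28, while the targets are 224x224 grayscale images:</p>

```python
size = 7                    # from Reshape((7, 7, 128))
for _ in range(2):          # two Conv2DTranspose layers with strides=2
    size *= 2
print(size)                 # 28 -> generator output is (batch, 28, 28, 3)
# Targets are (batch, 224, 224) grayscale, so both the spatial size
# and the channel dimension disagree -- hence the shape error.
```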
|
<python><tensorflow><keras><generative-adversarial-network>
|
2023-12-29 07:55:21
| 1
| 761
|
T3J45
|
77,730,802
| 1,939,432
|
How to remove duplicated entries?
|
<pre><code>Input: new_mainArr[] =
['the', 'at', 2]
['fulton', 'np-tl', 1]
['county', 'nn-tl', 1]
['grand', 'jj-tl', 1]
['jury', 'nn-tl', 1]
['said', 'vbd', 2]
['friday', 'nr', 1]
['an', 'at', 1]
['investigation', 'nn', 1]
['of', 'in', 1]
["atlanta's", 'np$', 1]
['recent', 'jj', 1]
['primary', 'nn', 1]
['election', 'nn', 1]
['produced', 'vbd', 1]
['.', '.', 2]
['the', 'nn', 1]
['jury', 'nn', 1]
['further', 'rbr', 1]
['in', 'in', 1]
['term-end', 'nn', 1]
['presentments', 'nns', 1]
['that', 'cs', 1]
['city', 'nn-tl', 1]
</code></pre>
<p>I want to collect entries with the same 0th-index element into one row, to remove duplicates. As you can see, there are two 'the' and two 'jury' elements. I want to bring the following elements of the second occurrence into the first row,
and delete the second 'the' and 'jury' rows.
I wrote the code below, but the second 'the' and 'jury' rows still stay. Where should I edit my code?</p>
<pre><code># add duplicate tags for the same word
resultArr = []
temp = []
for i in range(0, len(new_mainArr)):
temp = []
temp.append(new_mainArr[i][0])
temp.append(new_mainArr[i][1])
temp.append(new_mainArr[i][2])
hook = new_mainArr[i][0]
for j in range(i+1, len(new_mainArr)):
if(new_mainArr[j][0] == hook):
temp.append(new_mainArr[j][1])
temp.append(new_mainArr[j][2])
resultArr.append(temp)
</code></pre>
<p>Output:</p>
<p>---word with different tags---</p>
<pre><code>['the', 'at', 2, 'nn', 1]
['fulton', 'np-tl', 1]
['county', 'nn-tl', 1]
['grand', 'jj-tl', 1]
['jury', 'nn-tl', 1, 'nn', 1]
['said', 'vbd', 2]
['friday', 'nr', 1]
['an', 'at', 1]
['investigation', 'nn', 1]
['of', 'in', 1]
["atlanta's", 'np$', 1]
['recent', 'jj', 1]
['primary', 'nn', 1]
['election', 'nn', 1]
['produced', 'vbd', 1]
['.', '.', 2]
['the', 'nn', 1] <= this one should not be in the output
['jury', 'nn', 1] <= this one should not be in the output
['further', 'rbr', 1]
['in', 'in', 1]
['term-end', 'nn', 1]
['presentments', 'nns', 1]
['that', 'cs', 1]
['city', 'nn-tl', 1]
</code></pre>
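<p>For comparison, here is a dict-based sketch (my own, using a small sample in place of <code>new_mainArr</code>) that merges rows sharing the first element while preserving order, avoiding the nested-loop bookkeeping:</p>

```python
new_mainArr = [
    ['the', 'at', 2],
    ['fulton', 'np-tl', 1],
    ['the', 'nn', 1],
    ['jury', 'nn-tl', 1],
    ['jury', 'nn', 1],
]

merged = {}                 # dicts preserve insertion order (Python 3.7+)
for word, tag, count in new_mainArr:
    # First occurrence creates [word]; later ones just append tag/count.
    merged.setdefault(word, [word]).extend([tag, count])
resultArr = list(merged.values())
print(resultArr)
# [['the', 'at', 2, 'nn', 1], ['fulton', 'np-tl', 1],
#  ['jury', 'nn-tl', 1, 'nn', 1]]
```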
|
<python><arrays>
|
2023-12-29 07:48:17
| 2
| 2,050
|
Lyrk
|
77,730,744
| 2,604,247
|
SQL Alchemy ORM Query Does not Include Inner Join Condition
|
<p>The problem is as the question says. Using AWS athena server with pyathena engine.</p>
<p>Here is some context. I am basically filtering a table containing all the bookings to include only the most recent ones, based on an <code>earliest_history</code> date (anything earlier than that will be dropped).
I need two columns after this: the date and the customer id.</p>
<p>There is another table mapping customer id to some other data, that I need to join with the previous result. Here is how I am implementing it in sql alchemy ORM.</p>
<pre class="lang-py prettyprint-override"><code># SELECT date, customer_id FROM bookings
query: Query = session.query(func.date(bookings.columns[TIMESTAMP]).label(name=TIMESTAMP),
bookings.columns[CUSTOMER_ID])
# Alias as I need to get some filter
all_history: Alias = cast(typ=Alias, val=aliased(element=query.subquery()))
# noinspection PyTypeChecker
# SELECT * FROM all_history WHERE timestamp>=earliest_history
query = session.query(all_history).filter(
all_history.columns[TIMESTAMP] >= func.date(earliest_history)).distinct()
# noinspection PyTypeChecker
customer_results: Alias = aliased(element=query.subquery())
mapping_table: str = config.get(section='db', option='MAPPING_TABLE', raw=True)
mappings: Table = metadata.tables[mapping_table]
# noinspection PyTypeChecker
# Inner join customer results with the mappings table
join = customer_results.join(right=mappings,
onclause=mappings.columns[CUSTOMER_ID_MAPPING] == customer_results.columns[CUSTOMER_ID])
query=session.query(join)
raw_query: str = str(query.statement.compile(compile_kwargs={'literal_binds': True}))
print(raw_query)
</code></pre>
<p>The problem is, this is the raw_query (formatting and comments mine)</p>
<pre class="lang-sql prettyprint-override"><code>SELECT
anon_1.booking_dt,
anon_1.cust_id,
customer_mapping.ustb_customer_id,
customer_mapping.ustb_customer_create_time,
customer_mapping.esc_customer_id,
customer_mapping.esc_customer_create_time
FROM
(
SELECT
DISTINCT anon_2.booking_dt AS booking_dt,
anon_2.cust_id AS cust_id
FROM
(
SELECT
date(esc_booking.booking_dt) AS booking_dt,
esc_booking.cust_id AS cust_id
FROM
esc_booking
) AS anon_2
WHERE
anon_2.booking_dt >= date('2023-11-29')
) AS anon_1, -- Should I expect the join clause here?
customer_mapping; -- What does it mean?
</code></pre>
<p>Basically, it seems SQLAlchemy is dropping my join clause entirely for some reason, producing a cross join instead. How can I get SQLAlchemy to actually perform the join?</p>
<p>Related, is the <code>filter</code> method in SQL Alchemy an ORM counterpart of <code>SELECT * WHERE</code> or of <code>JOIN</code>?</p>
|
<python><mysql><sqlalchemy><orm><pyathena>
|
2023-12-29 07:31:46
| 0
| 1,720
|
Della
|
77,730,123
| 12,569,596
|
Generate hooks for selected functions in Python Abstract Base Class
|
<p>I have Python ABC like,</p>
<pre class="lang-py prettyprint-override"><code>class Foo(ABC):
@abstractmethod
def bar(self):
...
@abstractmethod
def baz(self):
...
</code></pre>
<p>When I subclass it, I want to implement hooks that are not defined in the parent class but still run automatically, in order. Is this possible?</p>
<pre class="lang-py prettyprint-override"><code>class FooChild(Foo):
def pre_bar(self): # not defined in parent but will run as well
...
def bar(self):
...
def post_bar(self):
...
</code></pre>
<p>So when I run</p>
<pre class="lang-py prettyprint-override"><code>foo_child = FooChild()
foo_child.bar()
</code></pre>
<p><code>pre_bar</code> and <code>post_bar</code> will also run automatically.</p>
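<p>For context, this is the kind of mechanism I have been experimenting with: a sketch (my own attempt, not necessarily the right approach) that uses <code>__init_subclass__</code> to wrap each implemented abstract method with optional <code>pre_</code>/<code>post_</code> hooks:</p>

```python
import functools
from abc import ABC, abstractmethod

def _with_hooks(method, name):
    # Wrap a concrete method so optional pre_<name>/post_<name> hooks
    # defined on the instance's class run around it.
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        pre = getattr(self, f"pre_{name}", None)
        if pre is not None:
            pre()
        result = method(self, *args, **kwargs)
        post = getattr(self, f"post_{name}", None)
        if post is not None:
            post()
        return result
    return wrapper

class Foo(ABC):
    @abstractmethod
    def bar(self):
        ...

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Wrap each abstract method the subclass actually implements.
        for name in Foo.__abstractmethods__:
            impl = cls.__dict__.get(name)
            if impl is not None:
                setattr(cls, name, _with_hooks(impl, name))

calls = []

class FooChild(Foo):
    def pre_bar(self):
        calls.append("pre_bar")
    def bar(self):
        calls.append("bar")
    def post_bar(self):
        calls.append("post_bar")

FooChild().bar()
print(calls)  # ['pre_bar', 'bar', 'post_bar']
```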
|
<python><abstract-class>
|
2023-12-29 03:37:21
| 0
| 3,005
|
kennysliding
|
77,729,925
| 1,689,987
|
Errors installing TA-Lib in Debian Linux Dockerfile
|
<p>When installing TA-Lib in a Dockerfile that inherits from python:3.10.13-slim-bookworm, I am using these Dockerfiles as examples, but I am still getting errors:</p>
<p><a href="https://dev.to/lazypro/build-a-small-ta-lib-container-image-4ca1" rel="nofollow noreferrer">https://dev.to/lazypro/build-a-small-ta-lib-container-image-4ca1</a></p>
<p><a href="https://github.com/deepnox-io/docker-python-ta-lib/blob/main/v0.4.17/python/3.8.1/alpine/3.11/Dockerfile" rel="nofollow noreferrer">https://github.com/deepnox-io/docker-python-ta-lib/blob/main/v0.4.17/python/3.8.1/alpine/3.11/Dockerfile</a></p>
<p>Example errors I am seeing:</p>
<pre><code>Problem with the CMake installation, aborting build. CMake executable is cmake
CMake was unable to find a build program corresponding to "Ninja".
Unit test failures:
Could not build wheels for patchelf, which is required to install pyproject.toml-based projects
FAIL: set-interpreter-long.sh
FAIL: set-rpath.sh
FAIL: add-rpath.sh
</code></pre>
<p>How do I install TA-Lib into a Dockerfile?</p>
|
<python><docker><ta-lib>
|
2023-12-29 01:59:20
| 1
| 1,666
|
user1689987
|
77,729,847
| 5,765,761
|
Pandas Dataframe to jsonlines grouping by columns
|
<pre><code>data = {
'my_index': [1, 2],
'start': ['2023-12-28 00:00:00', '2023-12-29 00:00:00'],
'target': [['value1', 'value2'], ['value3']],
'dynamic_feat': [[['feat1', 'feat2'], ['feat3']], [['feat4']]]
}
df = pd.DataFrame(data)
pd.DataFrame.from_dict(data)
</code></pre>
<p>expected jsonlines format:</p>
<pre><code>{
"my_index": 1,
"features": {
"start": "2023-12-28 00:00:00",
"target": [],
"dynamic_feat": [[]]
}
}
...
</code></pre>
<p>I have a data that requires 1 column to be independent key while the rest of the columns to be grouped as "features". What is the best way to achieve this?</p>
<p>The data is huge so I can't do simple iteration to achieve this. Initially I used to_json to create the portion of features, but now I need to match it back to my_index. Not sure if there is any sleek way to do this!</p>
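<p>For reference, one loop-free shape I have been considering (sketch only; <code>to_dict(orient='records')</code> still materialises the rows, so I am not sure it scales as far as I need):</p>

```python
import json
import pandas as pd

data = {
    'my_index': [1, 2],
    'start': ['2023-12-28 00:00:00', '2023-12-29 00:00:00'],
    'target': [['value1', 'value2'], ['value3']],
    'dynamic_feat': [[['feat1', 'feat2'], ['feat3']], [['feat4']]],
}
df = pd.DataFrame(data)

# Split the key column off, convert the remaining columns to per-row dicts
# in one call, then zip them back together -- no df.iterrows() loop.
feature_cols = [c for c in df.columns if c != 'my_index']
features = df[feature_cols].to_dict(orient='records')
lines = [json.dumps({'my_index': i, 'features': f})
         for i, f in zip(df['my_index'].tolist(), features)]
print(lines[0])
```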
|
<python><json><pandas><dataframe>
|
2023-12-29 01:13:06
| 1
| 1,090
|
tmhs
|
77,729,786
| 519,422
|
Pandas dataframe: how to multiply a specific value by using its row index?
|
<p>I want to change specific values in a Pandas dataframe. Here is an example dataframe (in reality, there are many more rows):</p>
<pre><code> Value Property
0 CH4 Type
1 -10.90979 Density (g/cm3)
2 5.00000 Temperature (K)
</code></pre>
<p>Here I want to multiply "<code>10.90979</code>" by <code>10</code> in the row labeled "<code>1</code>". I don't want to have to write "<code>10.90979 * 10</code>" because I only know that I have a property called "<code>Density (g/cm3)</code>". I don't know its value. So I want to use the index of the row in which "<code>Density (g/cm3)</code>" appears in the multiplication.</p>
<p>I have tried:</p>
<pre><code>row_index = df.index.get_loc(df[df['Property'] == 'Density (g/cm3)'].index[0])
new_value = df.iloc[row_index][0] * 10
df["Value"].replace(df.iloc[row_index][0], new_value, inplace=True)
</code></pre>
<p>However, this gives me weird output. I get:</p>
<pre><code> Value Property
0 CH4 Type
1 -10.90979-10.90979... Density (g/cm3)
2 5.00000 Temperature (K)
</code></pre>
<p>I can't post the details of the code but am hoping someone will recognize a simple mistake. I am not sure that I'm using multiplication correctly for a dataframe. I also tried using</p>
<pre><code>df.iloc[row_index][0].mul(10)
</code></pre>
<p>but get the error <code>AttributeError: 'str' object has no attribute 'mul'</code>.</p>
<p>Can anyone please point me in the right direction?</p>
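<p>A minimal sketch of what I suspect is happening (the <code>Value</code> column is object dtype because it mixes text and numbers, so the "multiplication" is actually Python string repetition) and a cast-first workaround:</p>

```python
import pandas as pd

df = pd.DataFrame({'Value': ['CH4', '-10.90979', '5.00000'],
                   'Property': ['Type', 'Density (g/cm3)', 'Temperature (K)']})

# df.iloc[row][0] returns a *string* here, and '-10.90979' * 10
# repeats the string ten times -- hence the weird concatenated output.
mask = df['Property'] == 'Density (g/cm3)'
df.loc[mask, 'Value'] = float(df.loc[mask, 'Value'].iloc[0]) * 10
print(df.loc[mask, 'Value'].iloc[0])
```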
|
<python><python-3.x><pandas><dataframe><replace>
|
2023-12-29 00:35:53
| 4
| 897
|
Ant
|
77,729,490
| 10,006,534
|
How to do hyper-parameter tuning with panel data in sklearn framework?
|
<p>Imagine we have multiple time-series observations for multiple entities, and we want to perform hyper-parameter tuning on a single model, splitting the data in a time-series cross-validation fashion.</p>
<p>To my knowledge, there isn't a straightforward solution to performing this hyper-parameter tuning operation within the scikit-learn framework. There exists the functionality to do this with a single time-series using TimeSeriesSplit, however this doesn't work for multiple entities.</p>
<p>As a simple example imagine we have a dataframe:</p>
<pre><code>from itertools import product
# create a dataframe
countries = ['ESP','FRA']
periods = list(range(10))
df = pd.DataFrame(list(product(countries,periods)), columns = ['country','period'])
df['target'] = np.concatenate((np.repeat(1, 10), np.repeat(0, 10)))
df['a_feature'] = np.random.randn(20, 1)
# this produces the following dataframe:
country,period,target,a_feature
ESP,0,1,0.08
ESP,1,1,-2.0
ESP,2,1,0.1
ESP,3,1,-0.59
ESP,4,1,-0.83
ESP,5,1,0.05
ESP,6,1,0.05
ESP,7,1,0.42
ESP,8,1,0.04
ESP,9,1,2.17
FRA,0,0,-0.44
FRA,1,0,-0.48
FRA,2,0,0.82
FRA,3,0,-1.64
FRA,4,0,0.19
FRA,5,0,0.6
FRA,6,0,-0.73
FRA,7,0,-0.5
FRA,8,0,1.11
FRA,9,0,-0.75
</code></pre>
<p>We want to train a single model across Spain and France: take all the data up to a certain period, then use that trained model to predict the next period for both countries. We also want to assess which set of hyper-parameters performs best.</p>
<p>How can I do hyper-parameter tuning with panel data in a time-series cross-validation framework?</p>
<p>Similar questions have been asked here:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/56362366/unbalanced-panel-data-how-to-use-time-series-splits-cross-validation?rq=3">Unbalanced Panel data: How to use Time Series Splits Cross-Validation?</a></li>
<li><a href="https://stackoverflow.com/questions/69482251/random-forest-hyper-parameters-tuning-with-panel-data-in-python">Random Forest hyper parameters tuning with panel data in python</a></li>
<li><a href="https://stats.stackexchange.com/questions/369397/correct-cross-validation-procedure-for-single-model-applied-to-panel-data">https://stats.stackexchange.com/questions/369397/correct-cross-validation-procedure-for-single-model-applied-to-panel-data</a></li>
</ul>
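<p>The closest I have come is writing a custom splitter (sketch below; my understanding is that scikit-learn only requires <code>split</code> and <code>get_n_splits</code>, and that <code>GridSearchCV</code> forwards <code>groups</code> to it), using the period as the group so both countries' rows for a period land in the same fold:</p>

```python
import numpy as np

class PanelTimeSeriesSplit:
    """Expanding-window splitter keyed on a per-row period label."""
    def __init__(self, n_splits=3):
        self.n_splits = n_splits

    def get_n_splits(self, X=None, y=None, groups=None):
        return self.n_splits

    def split(self, X, y=None, groups=None):
        # `groups` carries each row's period (shared across entities).
        periods = np.sort(np.unique(groups))
        for p in periods[-self.n_splits:]:
            train_idx = np.where(groups < p)[0]   # everything before p
            test_idx = np.where(groups == p)[0]   # all entities at p
            yield train_idx, test_idx

# Toy check: 4 periods x 2 entities, last 2 periods used as test folds.
groups = np.array([0, 0, 1, 1, 2, 2, 3, 3])
cv = PanelTimeSeriesSplit(n_splits=2)
splits = list(cv.split(np.zeros((8, 1)), groups=groups))
print(splits)
```

<p>Then, hypothetically, something like <code>GridSearchCV(model, param_grid, cv=PanelTimeSeriesSplit()).fit(X, y, groups=df['period'])</code>.</p>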
|
<python><scikit-learn><time-series><hyperparameters><panel-data>
|
2023-12-28 22:47:02
| 1
| 581
|
Slash
|
77,729,310
| 7,128,934
|
paramiko issue with host keys - BadHostKeyException
|
<p>Here is the script that I am using to connect to an SFTP server. <code>sftp.pem</code> and <code>host.pem</code> are private keys in OpenSSH format. I created the user and added the host key to the server myself.</p>
<p>I keep getting an error with this approach (see error message below). I can log in successfully if I add <code>client.set_missing_host_key_policy(paramiko.AutoAddPolicy())</code>. But my understanding is that it is not a good practice.</p>
<p>What is the proper way to connect to the SFTP server with private keys?</p>
<pre class="lang-py prettyprint-override"><code>import paramiko
private_key_path = "./sftp.pem"
host_key_path = "./host.pem"
client = paramiko.SSHClient()
hostname = '##.###.##.#'
username = 'username'
key_obj = paramiko.RSAKey(filename=host_key_path)
client.get_host_keys().add(hostname, "ssh-rsa", key_obj)
client.connect(hostname=hostname,
username=username,
key_filename=private_key_path)
</code></pre>
<p><strong>ERROR</strong></p>
<blockquote>
<p>paramiko.ssh_exception.BadHostKeyException: Host key for server
'##.###.##.#' does not match: got
'AAAAB3NzaC1yc2EAAAADAQABAAABAQ...+jCkEL', expected
'AAAAB3NzaC1yc2EAAAADAQABAAABAQ...feZ6l'</p>
</blockquote>
|
<python><paramiko>
|
2023-12-28 21:45:46
| 2
| 32,598
|
d.b
|
77,729,297
| 6,367,971
|
Why does using Requests fail when the URL works fine in browser?
|
<p>I have this code that used to work but I now get connection timeout errors when I run it. I assumed that something changed on the URL but when I paste the URL into a browser I get a successful response. Yet when curling from Terminal or running it in Pycharm I get the timeout error.</p>
<p>I have tried setting headers and retries but still can't get a successful response when using Requests.</p>
<pre><code>import requests
# Creating a session object
session = requests.Session()
# Add headers with your user-agent
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.3 Safari/605.1.15'
}
# Making a POST request using the session
response = session.post('https://www.valueinvestorsclub.com/ideas/loadideas', headers=headers)
# Print the response
print(response)
</code></pre>
<pre><code>requests.exceptions.ConnectionError: HTTPSConnectionPool(host='www.valueinvestorsclub.com', port=443): Max retries exceeded with url: /ideas/loadideas (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x122309b80>: Failed to establish a new connection: [Errno 60] Operation timed out'))
</code></pre>
|
<python><python-requests>
|
2023-12-28 21:42:06
| 2
| 978
|
user53526356
|
77,729,182
| 9,135,359
|
Unable to encode categorical variables for Deep learning classification model
|
<p>I tried to train a convolutional neural network to predict the labels (categorical data) given the criteria (text). This should have been a simple classification problem. There are 7 labels, hence my network has 7 output neurons with <code>sigmoid</code> activation functions.</p>
<p>I encoded training data using the following simple format, in a txt file, using text descriptors (<code>'criteria'</code>) and categorical label variables (<code>'label'</code>):</p>
<pre><code>'criteria'|'label'
</code></pre>
<p>Here's a peek at one entry from the data file:</p>
<pre><code>Headache location: Bilateral (intracranial). Facial pain: Nil. Pain quality: Pulsating. Thunderclap onset: Nil. Pain duration: 11. Pain episodes per month: 26. Chronic pain: No. Remission between episodes: Yes. Remission duration: 25. Pain intensity: Moderate (4-7). Aggravating/triggering factors: Innocuous facial stimuli, Bathing and/or showering, Chocolate, Exertion, Cold stimulus, Emotion, Valsalva maneuvers. Relieving factors: Nil. Headaches worse in the mornings and/or night: Nil. Associated symptoms: Nausea and/or vomiting. Reversible symptoms: Nil. Examination findings: Nil. Aura present: Yes. Reversible aura: Motor, Sensory, Brainstem, Visual. Duration of auras: 47. Aura in relation to headache: Aura proceeds headache. History of CNS disorders: Multiple Sclerosis, Angle-closure glaucoma. Past history: Nil. Temporal association: No. Disease worsening headache: Nil. Improved cause: Nil. Pain ipsilateral: Nil. Medication overuse: Nil. Establish drug overuse: Nil. Investigations: Nil.|Migraine with aura
</code></pre>
<p>Here's a snippet of the code from the training algorithm:</p>
<pre><code>'''A. IMPORT DATA'''
dataset = pd.read_csv('Data/ICHD3_Database.txt', names=['criteria', 'label'], sep='|')
features = dataset['criteria'].values
labels = dataset['label'].values
labels = labels.reshape(len(labels), 1) # Reshape target to be a 2d array
'''B. DATA PRE-PROCESSING: WORD EMBEDDINGS'''
def word_embeddings(features):
maxlen = 200
features_train, features_test, labels_train, labels_test = train_test_split(features, labels, test_size=0.33, random_state=42)
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(features_train)
features_train = pad_sequences(tokenizer.texts_to_sequences(features_train), padding='post', maxlen=maxlen)
features_test = pad_sequences(tokenizer.texts_to_sequences(features_test), padding='post', maxlen=maxlen)
labels_train = pad_sequences(tokenizer.texts_to_sequences(labels_train), padding='post', maxlen=maxlen)
labels_test = pad_sequences(tokenizer.texts_to_sequences(labels_test), padding='post', maxlen=maxlen)
vocab_size = len(tokenizer.word_index) + 1 # Adding 1 because of reserved 0 index
return features_train, features_test, labels_train, labels_test, vocab_size, maxlen
features_train, features_test, labels_train, labels_test, vocab_size, maxlen = word_embeddings(features) # Pre-process text using word embeddings
'''C. CREATE THE MODEL'''
def design_model(features, hidden_layers=2, number_neurons=128):
model = Sequential(name = "My_Sequential_Model")
model.add(layers.Embedding(input_dim=vocab_size, output_dim=50, input_length=maxlen))
model.add(layers.Conv1D(128, 5, activation='relu'))
model.add(layers.GlobalMaxPool1D())
for i in range(hidden_layers):
model.add(Dense(number_neurons, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(7, activation='sigmoid'))
opt = Adam(learning_rate=0.01)
model.compile(loss='binary_crossentropy', metrics=['mae'], optimizer=opt)
return model
</code></pre>
<p>I then pipe the model through a <code>GridSearchCV</code> to find the optimal number of epochs, batch size, etc.</p>
<p>However, before it even gets to the <code>GridSearchCV</code>, when I run it, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "c:\Users\user\Desktop\Deep Learning\deep_learning_headache.py", line 51, in <module>
features_train, features_test, labels_train, labels_test, vocab_size, maxlen = word_embeddings(features) # Pre-process text using word embeddings
File "c:\Users\user\Desktop\Deep Learning\deep_learning_headache.py", line 45, in word_embeddings
labels_train = pad_sequences(tokenizer.texts_to_sequences(labels_train), padding='post', maxlen=maxlen)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\src\preprocessing\text.py", line 357, in texts_to_sequences
return list(self.texts_to_sequences_generator(texts))
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\src\preprocessing\text.py", line 386, in texts_to_sequences_generator
seq = text_to_word_sequence(
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\src\preprocessing\text.py", line 74, in text_to_word_sequence
input_text = input_text.lower()
AttributeError: 'numpy.ndarray' object has no attribute 'lower'
</code></pre>
<p>Where am I going wrong?</p>
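<p>As an aside, this is how I would encode the labels if I drop the tokenizer for them entirely (sketch; the traceback itself comes from feeding a 2-D array of label strings to <code>texts_to_sequences</code>, which expects plain strings):</p>

```python
import numpy as np

# Hypothetical subset of the label column.
labels = np.array(['Migraine with aura', 'Tension-type', 'Migraine with aura'])

# Encode categorical labels as integer codes, then one-hot -- no text
# tokenizer involved, so no .lower() is ever called on an ndarray.
classes, y_int = np.unique(labels, return_inverse=True)
y_onehot = np.eye(len(classes))[y_int]
print(y_onehot)
```

<p>If the 7 labels are mutually exclusive, my understanding is that a <code>softmax</code> output with <code>categorical_crossentropy</code> is the usual pairing rather than <code>sigmoid</code> with <code>binary_crossentropy</code>, but I am not certain that is my only problem.</p>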
|
<python><tensorflow><keras>
|
2023-12-28 21:12:12
| 1
| 844
|
Code Monkey
|
77,729,040
| 10,853,071
|
Dask Dataframe Mode on groupy?
|
<p>I am trying to extract the "mode" of a series under a groupby agregation in a dask dataframe. I could find the <a href="https://docs.dask.org/en/stable/generated/dask.dataframe.DataFrame.mode.html" rel="nofollow noreferrer">documentation of mode</a>, but not how to use it under a group by.</p>
<pre><code>import pandas as pd
import numpy as np
data = pd.DataFrame({
'status' : ['pending', 'pending','pending', 'canceled','canceled','canceled', 'confirmed', 'confirmed','confirmed'],
'clientId' : ['A', 'B', 'C', 'A', 'D', 'C', 'A', 'B','C'],
'partner' : ['A', np.nan,'C', 'A',np.nan,'C', 'A', np.nan,'C'],
'product' : ['afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard'],
'brand' : ['brand_1', 'brand_2', 'brand_3','brand_1', 'brand_2', 'brand_3','brand_1', 'brand_3', 'brand_3'],
'gmv' : [100,100,100,100,100,100,100,100,100]})
data = data.astype({'partner':'category','status':'category','product':'category', 'brand':'category'})
import dask.dataframe as dd
df = dd.from_pandas(data,npartitions=1)
df.groupby(['clientId', 'product'], observed=True).aggregate({'brand':'mode'})
df.compute()
</code></pre>
<p>Thanks!</p>
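<p>In plain pandas I can get what I want with an <code>apply</code> (sketch below); I assume the dask equivalent is <code>groupby(...).apply(..., meta=...)</code> or a custom <code>dd.Aggregation</code>, but I could not make either work:</p>

```python
import pandas as pd

pdf = pd.DataFrame({
    'clientId': ['A', 'A', 'A', 'B', 'B'],
    'product': ['afiliates'] * 3 + ['pre-paid'] * 2,
    'brand': ['brand_1', 'brand_1', 'brand_2', 'brand_3', 'brand_3'],
})

# Take the first mode of each group's brand column.
mode_per_group = (pdf.groupby(['clientId', 'product'])['brand']
                     .apply(lambda s: s.mode().iat[0]))
print(mode_per_group)
```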
|
<python><pandas><group-by><dask><dask-dataframe>
|
2023-12-28 20:27:51
| 1
| 457
|
FábioRB
|
77,728,901
| 19,127,570
|
Use curve fitting to approximate a group of many straight line segments in python
|
<p>Suppose you have a group of connected line segments like in the picture below. How can you find a curve to approximate the collection of line segments as shown in the image on the right?</p>
<p><a href="https://i.sstatic.net/LjIeXm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LjIeXm.png" alt="Connected line segments number 1" /></a>
<a href="https://i.sstatic.net/nKh0om.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nKh0om.png" alt="Fitted line segments number 1" /></a></p>
<p>This is similar to another question (<a href="https://stackoverflow.com/questions/25368615/approximate-a-group-of-line-segments-as-a-single-best-fit-straight-line">Approximate a group of line segments as a single best fit straight line</a>) with a couple of key differences.</p>
<ol>
<li>There are many more lines, some of which are perpendicular to the line of best fit.</li>
<li>I am looking for a <em><strong>CURVE</strong></em>, not just a line. This is important because I have other examples that cannot be approximated by a straight line. See below for one such example.</li>
<li>This is different from other questions about regression because I am asking specifically about how to fit a curve to lines instead of points. Also, I don't think that the solution is linear regression in this situation.</li>
</ol>
<p><a href="https://i.sstatic.net/Ab2L0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ab2L0.png" alt="Connected line segments number 2" /></a></p>
<h4>Sample Data</h4>
<p>This is the data used to create the examples in the pictures. Pass these ndarrays to the <code>LineCollection(...)</code> function from matplotlib to replicate the plots.</p>
<pre><code>data1 = np.array([[[553, 420], [553, 422]],[[553, 420], [554, 419]],[[553, 420], [555, 419]],[[553, 420], [554, 421]],[[553, 420], [555, 421]],[[553, 420], [554, 422]],[[553, 422], [554, 421]],[[553, 422], [555, 421]],[[553, 422], [554, 422]],[[557, 413], [558, 412]],[[558, 412], [558, 414]],[[557, 413], [558, 414]],[[556, 415], [557, 413]],[[557, 413], [557, 415]],[[556, 415], [558, 414]],[[557, 415], [558, 414]],[[556, 415], [557, 415]],[[556, 416], [558, 414]],[[557, 416], [558, 414]],[[556, 415], [556, 416]],[[556, 415], [557, 416]],[[555, 417], [556, 415]],[[556, 415], [557, 417]],[[556, 416], [557, 415]],[[557, 415], [557, 416]],[[555, 417], [557, 415]],[[557, 415], [557, 417]],[[556, 416], [557, 416]],[[555, 417], [556, 416]],[[556, 416], [557, 417]],[[556, 416], [556, 418]],[[555, 417], [557, 416]],[[557, 416], [557, 417]],[[556, 418], [557, 416]],[[555, 417], [557, 417]],[[555, 417], [556, 418]],[[554, 419], [555, 417]],[[555, 417], [555, 419]],[[556, 418], [557, 417]],[[555, 419], [557, 417]],[[554, 419], [556, 418]],[[555, 419], [556, 418]],[[554, 419], [555, 419]],[[554, 419], [554, 421]],[[554, 419], [555, 421]],[[554, 421], [555, 419]],[[555, 419], [555, 421]],[[554, 421], [555, 421]],[[554, 421], [554, 422]],[[554, 422], [555, 421]],[[558, 412], [559, 411]],[[558, 412], [559, 412]],[[558, 412], [560, 410]],[[557, 413], [559, 411]],[[557, 413], [559, 412]],[[558, 414], [559, 412]],[[559, 409], [559, 411]],[[559, 409], [560, 408]],[[559, 409], [560, 410]],[[559, 409], [561, 407]],[[559, 409], [561, 408]],[[559, 411], [559, 412]],[[559, 411], [560, 410]],[[559, 412], [560, 410]],[[560, 408], [560, 410]],[[560, 408], [561, 407]],[[560, 408], [561, 408]],[[560, 408], [562, 406]],[[560, 410], [561, 408]],[[561, 407], [561, 408]],[[561, 407], [562, 406]],[[561, 408], [562, 406]],[[553, 422], [553, 423]],[[552, 423], [553, 422]],[[553, 423], [554, 421]],[[552, 423], [554, 421]],[[553, 423], [555, 421]],[[553, 423], [554, 422]],[[552, 423], [554, 
422]],[[552, 423], [553, 423]],[[552, 425], [553, 423]],[[551, 425], [553, 423]],[[553, 423], [553, 425]],[[552, 423], [552, 425]],[[551, 425], [552, 423]],[[552, 423], [553, 425]],[[551, 425], [552, 425]],[[552, 425], [553, 425]],[[551, 426], [552, 425]],[[552, 425], [552, 426]],[[551, 425], [553, 425]],[[551, 425], [551, 426]],[[551, 425], [552, 426]],[[551, 426], [553, 425]],[[552, 426], [553, 425]],[[551, 426], [552, 426]],[[551, 426], [551, 428]],[[550, 428], [551, 426]],[[551, 428], [552, 426]],[[550, 428], [552, 426]],[[550, 428], [551, 428]],[[550, 429], [551, 428]],[[550, 430], [551, 428]],[[550, 428], [550, 429]],[[550, 428], [550, 430]],[[550, 429], [550, 430]],[[549, 431], [550, 429]],[[550, 429], [550, 431]],[[549, 431], [550, 430]],[[550, 430], [550, 431]],[[548, 432], [550, 430]],[[549, 431], [550, 431]],[[548, 432], [549, 431]],[[549, 431], [549, 433]],[[548, 432], [550, 431]],[[549, 433], [550, 431]],[[548, 432], [549, 433]],[[548, 432], [549, 434]],[[548, 432], [548, 434]],[[549, 433], [549, 434]],[[548, 434], [549, 433]],[[547, 435], [549, 433]],[[548, 434], [549, 434]],[[547, 435], [549, 434]],[[548, 436], [549, 434]],[[547, 435], [548, 434]],[[548, 434], [548, 436]],[[547, 435], [548, 436]],[[547, 435], [547, 437]],[[547, 437], [548, 436]]], dtype=np.int64)
</code></pre>
<pre><code>data2 = np.array([[[1142, 1698], [1142, 1699]],[[1142, 1698], [1144, 1699]],[[1142, 1698], [1144, 1700]],[[1142, 1699], [1144, 1699]],[[1142, 1699], [1144, 1700]],[[1144, 1699], [1144, 1700]],[[1144, 1699], [1145, 1698]],[[1144, 1699], [1146, 1699]],[[1144, 1699], [1146, 1700]],[[1144, 1700], [1145, 1698]],[[1144, 1700], [1146, 1699]],[[1144, 1700], [1146, 1700]],[[1145, 1698], [1146, 1699]],[[1145, 1698], [1146, 1700]],[[1146, 1699], [1146, 1700]],[[1146, 1699], [1148, 1700]],[[1146, 1700], [1148, 1700]],[[1144, 1699], [1145, 1701]],[[1144, 1699], [1146, 1701]],[[1144, 1700], [1145, 1701]],[[1144, 1700], [1146, 1701]],[[1145, 1701], [1146, 1699]],[[1146, 1699], [1146, 1701]],[[1146, 1699], [1148, 1701]],[[1145, 1701], [1146, 1700]],[[1146, 1700], [1146, 1701]],[[1146, 1700], [1148, 1701]],[[1146, 1700], [1147, 1702]],[[1146, 1701], [1148, 1700]],[[1148, 1700], [1148, 1701]],[[1147, 1702], [1148, 1700]],[[1148, 1700], [1150, 1702]],[[1145, 1701], [1146, 1701]],[[1145, 1701], [1147, 1702]],[[1145, 1701], [1146, 1703]],[[1146, 1701], [1148, 1701]],[[1146, 1701], [1147, 1702]],[[1146, 1701], [1146, 1703]],[[1147, 1702], [1148, 1701]],[[1148, 1701], [1150, 1702]],[[1146, 1703], [1148, 1701]],[[1148, 1701], [1149, 1703]],[[1146, 1703], [1147, 1702]],[[1147, 1702], [1149, 1703]],[[1149, 1703], [1150, 1702]],[[1147, 1702], [1148, 1704]],[[1148, 1704], [1150, 1702]],[[1150, 1702], [1151, 1704]],[[1146, 1703], [1148, 1704]],[[1148, 1704], [1149, 1703]],[[1149, 1703], [1151, 1704]],[[1149, 1703], [1150, 1705]],[[1148, 1704], [1150, 1705]],[[1150, 1705], [1151, 1704]],[[1151, 1704], [1151, 1706]],[[1151, 1704], [1152, 1706]],[[1150, 1705], [1151, 1706]],[[1150, 1705], [1152, 1706]],[[1150, 1705], [1151, 1707]],[[1151, 1706], [1152, 1706]],[[1151, 1706], [1151, 1707]],[[1151, 1706], [1152, 1708]],[[1151, 1706], [1151, 1708]],[[1151, 1707], [1152, 1706]],[[1152, 1706], [1152, 1708]],[[1151, 1708], [1152, 1706]],[[1151, 1707], [1152, 1708]],[[1151, 1707], [1151, 
1708]],[[1151, 1707], [1152, 1709]],[[1151, 1708], [1152, 1708]],[[1152, 1708], [1152, 1709]],[[1151, 1710], [1152, 1708]],[[1151, 1708], [1152, 1709]],[[1151, 1708], [1151, 1710]],[[1151, 1710], [1152, 1709]],[[1151, 1710], [1152, 1712]],[[1152, 1718], [1152, 1719]],[[1152, 1718], [1152, 1720]],[[1152, 1719], [1152, 1720]],[[1152, 1719], [1152, 1721]],[[1152, 1720], [1152, 1721]],[[1151, 1722], [1152, 1720]],[[1152, 1720], [1152, 1722]],[[1151, 1722], [1152, 1721]],[[1152, 1721], [1152, 1722]],[[1151, 1723], [1152, 1721]],[[1151, 1722], [1152, 1722]],[[1151, 1722], [1151, 1723]],[[1151, 1723], [1152, 1722]],[[1151, 1722], [1152, 1724]],[[1150, 1724], [1151, 1722]],[[1152, 1722], [1152, 1724]],[[1150, 1724], [1152, 1722]],[[1151, 1723], [1152, 1724]],[[1150, 1724], [1151, 1723]],[[1151, 1723], [1151, 1725]],[[1150, 1724], [1152, 1724]],[[1151, 1725], [1152, 1724]],[[1150, 1726], [1152, 1724]],[[1151, 1726], [1152, 1724]],[[1150, 1724], [1151, 1725]],[[1150, 1724], [1150, 1726]],[[1150, 1724], [1151, 1726]],[[1150, 1726], [1151, 1725]],[[1151, 1725], [1151, 1726]],[[1149, 1727], [1151, 1725]],[[1150, 1727], [1151, 1725]],[[1150, 1726], [1151, 1726]],[[1149, 1727], [1150, 1726]],[[1150, 1726], [1150, 1727]],[[1149, 1728], [1150, 1726]],[[1149, 1727], [1151, 1726]],[[1150, 1727], [1151, 1726]],[[1149, 1728], [1151, 1726]],[[1149, 1727], [1150, 1727]],[[1147, 1728], [1149, 1727]],[[1149, 1727], [1149, 1728]],[[1149, 1728], [1150, 1727]],[[1147, 1728], [1149, 1728]],[[1151, 1704], [1153, 1704]],[[1151, 1706], [1153, 1704]],[[1151, 1706], [1153, 1707]],[[1151, 1706], [1153, 1708]],[[1152, 1706], [1153, 1704]],[[1152, 1706], [1153, 1707]],[[1152, 1706], [1153, 1708]],[[1151, 1707], [1153, 1707]],[[1151, 1707], [1153, 1708]],[[1152, 1708], [1153, 1707]],[[1152, 1708], [1153, 1708]],[[1152, 1708], [1154, 1709]],[[1152, 1708], [1154, 1710]],[[1151, 1708], [1153, 1707]],[[1151, 1708], [1153, 1708]],[[1152, 1709], [1153, 1707]],[[1152, 1709], [1153, 1708]],[[1152, 1709], 
[1154, 1709]],[[1152, 1709], [1154, 1710]],[[1152, 1709], [1153, 1711]],[[1151, 1710], [1153, 1708]],[[1151, 1710], [1153, 1711]],[[1152, 1712], [1154, 1710]],[[1152, 1712], [1153, 1711]],[[1152, 1712], [1154, 1712]],[[1152, 1712], [1153, 1713]],[[1152, 1712], [1154, 1713]],[[1152, 1712], [1153, 1714]],[[1152, 1718], [1153, 1716]],[[1152, 1718], [1154, 1716]],[[1152, 1718], [1153, 1717]],[[1152, 1718], [1154, 1718]],[[1152, 1719], [1153, 1717]],[[1152, 1719], [1154, 1718]],[[1152, 1720], [1154, 1718]],[[1152, 1718], [1153, 1719]],[[1152, 1718], [1154, 1719]],[[1152, 1718], [1153, 1720]],[[1152, 1719], [1153, 1719]],[[1152, 1719], [1154, 1719]],[[1152, 1719], [1153, 1720]],[[1152, 1719], [1154, 1721]],[[1152, 1720], [1153, 1719]],[[1152, 1720], [1154, 1719]],[[1152, 1720], [1153, 1720]],[[1152, 1720], [1154, 1721]],[[1152, 1720], [1153, 1722]],[[1152, 1720], [1154, 1722]],[[1152, 1721], [1153, 1719]],[[1152, 1721], [1154, 1719]],[[1152, 1721], [1153, 1720]],[[1152, 1721], [1154, 1721]],[[1152, 1721], [1153, 1722]],[[1152, 1721], [1154, 1722]],[[1152, 1721], [1153, 1723]],[[1151, 1722], [1153, 1720]],[[1151, 1722], [1153, 1722]],[[1151, 1722], [1153, 1723]],[[1152, 1722], [1153, 1720]],[[1152, 1722], [1154, 1721]],[[1152, 1722], [1153, 1722]],[[1152, 1722], [1154, 1722]],[[1152, 1722], [1153, 1723]],[[1151, 1723], [1153, 1722]],[[1151, 1723], [1153, 1723]],[[1152, 1724], [1153, 1722]],[[1152, 1724], [1154, 1722]],[[1152, 1724], [1153, 1723]],[[1151, 1725], [1153, 1723]],[[1153, 1707], [1153, 1708]],[[1153, 1707], [1154, 1709]],[[1153, 1708], [1154, 1709]],[[1153, 1708], [1154, 1710]],[[1154, 1709], [1154, 1710]],[[1153, 1711], [1154, 1709]],[[1154, 1709], [1155, 1711]],[[1153, 1711], [1154, 1710]],[[1154, 1710], [1155, 1711]],[[1153, 1711], [1155, 1711]],[[1154, 1710], [1154, 1712]],[[1153, 1711], [1154, 1712]],[[1153, 1711], [1153, 1713]],[[1153, 1711], [1154, 1713]],[[1154, 1712], [1155, 1711]],[[1153, 1713], [1155, 1711]],[[1154, 1713], [1155, 1711]],[[1155, 
1711], [1156, 1713]],[[1153, 1713], [1154, 1712]],[[1154, 1712], [1154, 1713]],[[1154, 1712], [1156, 1713]],[[1153, 1714], [1154, 1712]],[[1154, 1712], [1155, 1714]],[[1153, 1713], [1154, 1713]],[[1153, 1713], [1153, 1714]],[[1153, 1713], [1155, 1714]],[[1153, 1713], [1155, 1715]],[[1153, 1713], [1154, 1715]],[[1154, 1713], [1156, 1713]],[[1153, 1714], [1154, 1713]],[[1154, 1713], [1155, 1714]],[[1154, 1713], [1155, 1715]],[[1154, 1713], [1154, 1715]],[[1155, 1714], [1156, 1713]],[[1155, 1715], [1156, 1713]],[[1154, 1715], [1156, 1713]],[[1153, 1714], [1155, 1714]],[[1153, 1714], [1155, 1715]],[[1153, 1714], [1154, 1715]],[[1155, 1714], [1155, 1715]],[[1154, 1715], [1155, 1714]],[[1154, 1715], [1155, 1715]],[[1153, 1714], [1153, 1716]],[[1153, 1714], [1154, 1716]],[[1153, 1716], [1155, 1714]],[[1154, 1716], [1155, 1714]],[[1153, 1716], [1155, 1715]],[[1154, 1716], [1155, 1715]],[[1153, 1717], [1155, 1715]],[[1153, 1716], [1154, 1715]],[[1154, 1715], [1154, 1716]],[[1153, 1717], [1154, 1715]],[[1153, 1714], [1155, 1716]],[[1155, 1714], [1155, 1716]],[[1155, 1714], [1156, 1716]],[[1155, 1715], [1155, 1716]],[[1155, 1715], [1155, 1717]],[[1155, 1715], [1156, 1716]],[[1154, 1715], [1155, 1716]],[[1154, 1715], [1155, 1717]],[[1154, 1715], [1156, 1716]],[[1153, 1716], [1154, 1716]],[[1153, 1716], [1153, 1717]],[[1153, 1716], [1154, 1718]],[[1153, 1717], [1154, 1716]],[[1154, 1716], [1154, 1718]],[[1153, 1717], [1154, 1718]],[[1153, 1717], [1153, 1719]],[[1153, 1717], [1154, 1719]],[[1153, 1719], [1154, 1718]],[[1154, 1718], [1154, 1719]],[[1153, 1720], [1154, 1718]],[[1153, 1719], [1154, 1719]],[[1153, 1719], [1153, 1720]],[[1153, 1719], [1154, 1721]],[[1153, 1720], [1154, 1719]],[[1154, 1719], [1154, 1721]],[[1153, 1720], [1154, 1721]],[[1153, 1720], [1153, 1722]],[[1153, 1720], [1154, 1722]],[[1153, 1722], [1154, 1721]],[[1154, 1721], [1154, 1722]],[[1153, 1723], [1154, 1721]],[[1153, 1722], [1154, 1722]],[[1153, 1722], [1153, 1723]],[[1153, 1723], [1154, 
1722]],[[1153, 1716], [1155, 1716]],[[1153, 1716], [1155, 1717]],[[1153, 1716], [1155, 1718]],[[1154, 1716], [1155, 1716]],[[1154, 1716], [1155, 1717]],[[1154, 1716], [1155, 1718]],[[1154, 1716], [1156, 1716]],[[1154, 1716], [1156, 1718]],[[1153, 1717], [1155, 1716]],[[1153, 1717], [1155, 1717]],[[1153, 1717], [1155, 1718]],[[1154, 1718], [1155, 1716]],[[1154, 1718], [1155, 1717]],[[1154, 1718], [1155, 1718]],[[1154, 1718], [1156, 1716]],[[1154, 1718], [1156, 1718]],[[1153, 1719], [1155, 1717]],[[1153, 1719], [1155, 1718]],[[1154, 1719], [1155, 1717]],[[1154, 1719], [1155, 1718]],[[1154, 1719], [1156, 1718]],[[1153, 1720], [1155, 1718]],[[1155, 1716], [1155, 1717]],[[1155, 1716], [1155, 1718]],[[1155, 1716], [1156, 1716]],[[1155, 1716], [1156, 1718]],[[1155, 1717], [1155, 1718]],[[1155, 1717], [1156, 1716]],[[1155, 1717], [1156, 1718]],[[1155, 1718], [1156, 1716]],[[1155, 1718], [1156, 1718]],[[1156, 1716], [1156, 1718]],[[1148, 1729], [1149, 1727]],[[1148, 1729], [1150, 1727]],[[1147, 1728], [1148, 1729]],[[1148, 1729], [1149, 1728]]], dtype=np.int64)
</code></pre>
<h4>What I have tried</h4>
<p>The answer to the question linked above helped me write the following code. Essentially, it finds the vertices of the convex hull around the collection of line segments. Then, using these vertices, the code performs linear regression to find the line of best fit. Finally, the line is cropped so that it is entirely contained by the convex hull polygon. Of course this does not work for non-linear line segment collections. Any suggestions would be welcome!</p>
<p><a href="https://i.sstatic.net/ThA97m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ThA97m.png" alt="Convex hull around line collection" /></a>
<a href="https://i.sstatic.net/wrynTm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wrynTm.png" alt="Line of best fit extended" /></a>
<a href="https://i.sstatic.net/loMexm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/loMexm.png" alt="Line of best fit cropped" /></a></p>
<h4>Code</h4>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.collections import PatchCollection, LineCollection
from scipy.stats import linregress
from scipy.spatial import KDTree, ConvexHull
from shapely import box, LineString, normalize, Polygon, polygons, intersection
fig = plt.figure(figsize=(6,11))
ax = fig.add_subplot()
pairs = np.reshape(np.array(data1), (data1.shape[0]*2,2))
hull = ConvexHull(pairs)
line_coll = LineCollection(data1, linewidth=1.0, colors='r')
ax.add_collection(line_coll)
for simp in hull.simplices:
ax.plot(pairs[simp,0], pairs[simp,1], c='w')
lreg = linregress(pairs)
xs = [0, (-1 *(lreg.intercept /lreg.slope))]
ys = [lreg.intercept, 0]
lstr = LineString([(xs[0],ys[0]), (xs[1],ys[1])])
plyg = polygons(pairs[hull.vertices])
coords = intersection(lstr, plyg).xy
ax.plot(coords[0], coords[1], linewidth=2.5, c='b')
plt.show()
</code></pre>
|
<python><numpy><scipy><regression><shapely>
|
2023-12-28 19:53:52
| 0
| 773
|
trent
|
77,728,795
| 1,285,061
|
Convert Numpy array into a sine wave signal for Fourier analysis
|
<p>I have long tabular data: a simple peak-amplitude measurement with respect to time, i.e. a 1D array whose index represents a 1-minute interval.</p>
<pre><code>>>> a = np.array([5, 7, -6, 9, 0, 1, 15, -2, 8])
#array is of length of many thousands readings taken on interval of 1 minute.
#The index represents 1-minute interval.
</code></pre>
<p>How can I convert this NumPy array into a "signal" that I can use to perform FFT on it, either in NumPy <code>numpy.fft</code> or any other library?</p>
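<p>For what it's worth, a minimal sketch of one approach: the 1D array already is the discrete signal, so it can be passed straight to <code>numpy.fft.rfft</code>; the 1-minute sampling interval only matters for labelling the frequency axis. The sample values below are placeholders.</p>

```python
import numpy as np

# Hypothetical stand-in for the long measurement series; readings are taken
# once per minute, so the sample spacing is 60 seconds.
a = np.array([5, 7, -6, 9, 0, 1, 15, -2, 8], dtype=float)

spectrum = np.fft.rfft(a)                # one-sided FFT of the real-valued signal
freqs = np.fft.rfftfreq(len(a), d=60.0)  # frequency of each bin, in Hz
amplitudes = np.abs(spectrum)            # magnitude spectrum
```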
|
<python><numpy><signal-processing><fft>
|
2023-12-28 19:26:50
| 1
| 3,201
|
Majoris
|
77,728,583
| 21,306,952
|
Python Selenium ActionChains not working with canvas
|
<p>I am trying to test a canvas element with selenium in python: in the canvas there's a custom video editor in which I can drag elements. I'm using Selenium ActionChains, but it doesn't seem to work in the canvas to drag and drop different elements. Here's a code extract:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
canvas = driver.find_element(By.CSS_SELECTOR, "canvas")
# Get the position of the canvas element on the page
canvas_x, canvas_y = canvas.location["x"], canvas.location["y"]
initial_x = 0
initial_y = 180
offset_x = 0
offset_y = -180
# Perform the move and drag action
ActionChains(driver).move_to_element_with_offset(
canvas, initial_x, initial_y
).click_and_hold().move_by_offset(offset_x, offset_y).release().perform()
</code></pre>
<p>In this code, I am trying to drag an element positioned 180px below the center of the canvas and move it up to the center. It looks like the canvas doesn't detect the mouse click-and-hold at all, and I don't know how to fix this.</p>
|
<python><selenium-webdriver><testing><canvas>
|
2023-12-28 18:37:35
| 1
| 918
|
Alexxino
|
77,728,423
| 5,822,440
|
Drawing the Tangent of 3D Curve Data Points in an Artificial Neural Network From Scratch
|
<p>I'm working on building an Artificial Neural Network (ANN) from scratch, and I have a set of 3D curve data points representing the output of the network. I'm interested in visualizing the tangents of the curve at various points. How can I draw the tangents of these 3D curve data points within the context of my ANN implementation? What are the steps and code examples I should follow to achieve this?</p>
<p><a href="https://i.sstatic.net/cVJYN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cVJYN.png" alt="My ANN architecture is looking like this" /></a></p>
<p>My primary goal is to identify and visualize the decision boundaries of the data points processed by my ANN. Additionally, I would like to trace the tangents of these decision boundaries.</p>
<p>Could someone please provide guidance on how to determine these decision boundaries and effectively trace the tangents using Python or any relevant tools or libraries? Any code examples or insights into the implementation would be highly valuable.</p>
<p>Thank you for your assistance!</p>
<p><a href="https://i.sstatic.net/E1TuA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E1TuA.png" alt="enter image description here" /></a></p>
|
<python><math><artificial-intelligence>
|
2023-12-28 17:52:49
| 0
| 411
|
Fatihi Youssef
|
77,728,364
| 15,098,472
|
Update the settings of a class after construction
|
<p>I got the following structure:</p>
<p>First, a config class:</p>
<pre><code>class Config:
def __init__(self, config_path='./configs') -> None:
file = open(config_path, 'r')
self.config_path = config_path
self.config = yaml.safe_load(file)
</code></pre>
<p>The purpose of the class is basically to load a .yaml file and set the configurations specified in that yaml as an attribute. The idea, is that other classes can inherit from that, like so:</p>
<pre><code>class SARSA_TileEncoding(Config):
def __init__(self, env: gym.Env, config_dir='./configs') -> None:
        super().__init__(config_dir)
self.tile_encoder = TileEncoding(env, self.config)
</code></pre>
<p>With this, I can now simply use <code>self.config['parameter']</code> in the class, for example:</p>
<pre><code>class SARSA_TileEncoding(Config):
def __init__(self, env: gym.Env, config_dir='./configs') -> None:
        super().__init__(config_dir)
self.tile_encoder = TileEncoding(env, self.config)
def do_something(self):
for n in self.config['n_times']:
print('Hello')
</code></pre>
<p>However, the config might get updated after construction, which is not a problem, since the parameters are accessed on the fly. But, as you can see the class <code>SARSA_TileEncoding</code> has the attribute</p>
<pre><code>self.tile_encoder = TileEncoding(env, self.config)
</code></pre>
<p>which also relies on the config. The issue now is, that if the config gets updated, this class <code>TileEncoding</code> will still uses the values from the config at construction. Is there a good way to solve this, i.e., also update the config from <code>TileEncoding</code>?</p>
|
<python><class><constructor>
|
2023-12-28 17:41:01
| 0
| 574
|
kklaw
|
77,728,334
| 12,343,115
|
install lightgbm GPU in a WSL conda env
|
<p>-------------------- original question ---------------------------------</p>
<p>How do I install LightGBM with GPU support?
I have checked multiple sources but still failed to install it.</p>
<p>I tried pip and conda but both return the error:</p>
<pre><code>[LightGBM] [Warning] Using sparse features with CUDA is currently not supported.
[LightGBM] [Fatal] CUDA Tree Learner was not enabled in this build.
Please recompile with CMake option -DUSE_CUDA=1
</code></pre>
<p>What I have tried is the following:</p>
<pre><code>git clone --recursive https://github.com/microsoft/LightGBM
cd LightGBM/
mkdir -p build
cd build
cmake -DUSE_GPU=1 ..
make -j$(nproc)
cd ../python-package
pip install .
</code></pre>
<p>-------------------- My solution below (cuda) ---------------------------------</p>
<p>Thanks for the replies, guys. I tried a few approaches and it finally works as below.
First, make sure cmake is installed (under WSL):</p>
<pre><code>sudo apt-get update
sudo apt-get install cmake
sudo apt-get install g++
</code></pre>
<p>Then,</p>
<pre><code>git clone --recursive https://github.com/microsoft/LightGBM
cd LightGBM
mkdir build
cd build
cmake -DUSE_GPU=1 -DOpenCL_LIBRARY=/usr/local/cuda/lib64/libOpenCL.so -DOpenCL_INCLUDE_DIR=/usr/local/cuda/include/ ..
make -j4
</code></pre>
<p>Currently, the install is not linked to any conda env yet. To do this, under the VS Code terminal (or still WSL), activate a conda env and then create a Jupyter notebook for testing.
Make sure that <code>lib_lightgbm.so</code> is under <code>LightGBM/python-package</code>; if not, copy it into that folder.
Then in the Jupyter notebook:</p>
<pre><code>import sys
import numpy as np
sys.path.append('/mnt/d/lgm-test2/LightGBM/python-package')
import lightgbm as lgb
</code></pre>
<p>The final bit is that, as James's reply notes, the device needs to be set to 'cuda' instead of 'gpu'.</p>
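<p>For reference, a hedged sketch of the parameter dict for a CUDA-enabled build; the non-<code>device</code> keys below are illustrative placeholders, and in practice this dict would be passed to <code>lgb.train</code>.</p>

```python
# Illustrative LightGBM parameters for a CUDA build; 'device' is the key
# setting ('cuda' for CUDA builds rather than 'gpu').
params = {
    "objective": "regression",  # placeholder objective
    "device": "cuda",
}
```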
|
<python><machine-learning><lightgbm>
|
2023-12-28 17:34:48
| 2
| 811
|
ZKK
|
77,728,246
| 9,788,900
|
Files not being written to mapped network drive on windows using python script
|
<p>I have a python script which accesses a mapped network drive on Windows OS and creates a file in the mapped location to write. Here's the code to write.</p>
<pre><code>output_file = f"{output_dirPath}/{directory_name}{output_suffix}"
with open(output_file, 'w') as file:
file.write("Sample Name\tFilename\tSample Type\tProcessing Setting\tReference\n")
for path in file_paths:
file_name = os.path.basename(path) # Extract the file name from the path
final_filename = os.path.splitext(file_name)[0] # Remove the .bam extension
file.write(f"{final_filename}\t{path}\t{sample_type}\t{processing_setting}\t{reference}\n")
</code></pre>
<p>where <code>output_dirPath</code> is the mapped network drive, which I am accessing as below:</p>
<pre><code>output_dirPath = "//abc/data/lab/Archives/Test/{}".format(directory_name)
</code></pre>
<p>The directory name can be any alphanumeric string with spaces, underscores, and hyphens in it.</p>
<p>The output suffix is <code>_template.txt</code></p>
<p>When I run this script, the earlier section of the code (not shown here) runs and unzips the files; however, the script does not write the file and gives the following error.</p>
<pre><code>Traceback (most recent call last):
File "Template.py", line 76, in <module>
with open(output_file, 'w') as file:
^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '//abc/data/lab/Archives/Test/Output Folder abc 1ef3D00.example/Output Folder abc 1ef3D00.example_bamTemplate.txt'
</code></pre>
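<p>For context, a hedged guess at the cause: <code>open(..., 'w')</code> raises <code>FileNotFoundError</code> when the parent directory does not exist, so creating the directory first may help. A sketch with a temporary directory standing in for the UNC share:</p>

```python
import os
import tempfile

# Hypothetical local stand-in for the UNC path //abc/data/lab/Archives/Test/...
base = tempfile.mkdtemp()
output_dirPath = os.path.join(base, "Archives", "Test", "Output Folder abc")
output_file = os.path.join(output_dirPath, "example_bamTemplate.txt")

os.makedirs(output_dirPath, exist_ok=True)  # create any missing parent dirs
with open(output_file, "w") as f:
    f.write("Sample Name\tFilename\n")
```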
|
<python><unc>
|
2023-12-28 17:12:25
| 0
| 343
|
Callie
|
77,728,178
| 11,436,917
|
generate python models from k8s CRDs
|
<p>I'm trying to use Python to add/delete/get the status of CRDs. Since the official SDK provides only generic functions to deal with CRDs, I want to implement my own wrapper to manage these particular CRDs (mainly Crossplane ones).</p>
<p>is there a way to generate Python models ideally with validation from a yaml CRD file like this one: <a href="https://github.com/crossplane-contrib/provider-gcp/blob/master/package/crds/cache.gcp.crossplane.io_cloudmemorystoreinstances.yaml" rel="nofollow noreferrer">https://github.com/crossplane-contrib/provider-gcp/blob/master/package/crds/cache.gcp.crossplane.io_cloudmemorystoreinstances.yaml</a></p>
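<p>For context, a sketch under the assumption that the embedded schema is the target: a CRD carries an ordinary JSON-Schema document under <code>spec.versions[*].schema.openAPIV3Schema</code>, which JSON-Schema-to-model generators (e.g. <code>datamodel-code-generator</code>) can consume. The truncated CRD dict below is hypothetical; in practice it would come from <code>yaml.safe_load()</code> on the CRD file.</p>

```python
# Hypothetical, heavily truncated CRD already parsed into a dict.
crd = {
    "spec": {
        "versions": [
            {
                "name": "v1beta1",
                "schema": {
                    "openAPIV3Schema": {
                        "type": "object",
                        "properties": {"spec": {"type": "object"}},
                    }
                },
            }
        ]
    }
}

# Extract the per-version JSON Schema, ready to hand to a model generator.
version = crd["spec"]["versions"][0]
json_schema = version["schema"]["openAPIV3Schema"]
```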
|
<python><kubernetes><automation><cross-platform><code-generation>
|
2023-12-28 17:00:28
| 0
| 634
|
ilyesAj
|
77,728,058
| 908,924
|
Given a column with sets of data print data that corresponds to group in the adjacent row
|
<p>I have an Excel sheet with two columns of data. The data in the first column is grouped by rows; the adjacent column contains some data to be processed.</p>
<p>for example</p>
<p><img src="https://i.sstatic.net/e8xCW.png" alt="For example" /></p>
<p>What I would like to do is read in the rows in column "A" that have "0" and print the data in column "B" that corresponds to it then move to the second "0.25" group and again print the data, and again move on to the next group. What I have so far is</p>
<pre><code>import pandas as pd
df = pd.read_excel(r"C:\data.xlsx", usecols= "A, B")
x = 0
for index, row in df.iterrows():
if (row["A"] == x):
print(row["B"])
if row["A"] != x:
x = x + 0.25
print(row["B"])
</code></pre>
<p>Which only prints the first "0" group but does not increment and move to the next group. What am I doing wrong?</p>
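<p>For reference, a minimal sketch (with made-up stand-in data) of how <code>groupby</code> could replace the manual increment, so the group boundaries never need to be tracked by hand:</p>

```python
import pandas as pd

# Hypothetical stand-in for the Excel data: column A holds the group value,
# column B the data to process.
df = pd.DataFrame({"A": [0, 0, 0, 0.25, 0.25, 0.5],
                   "B": ["a", "b", "c", "d", "e", "f"]})

# Collect the B values of each A group, preserving file order.
result = {key: grp["B"].tolist() for key, grp in df.groupby("A", sort=False)}
```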
|
<python><pandas>
|
2023-12-28 16:32:11
| 2
| 2,355
|
Stripers247
|
77,728,032
| 4,343,563
|
Split a text by word from list of words and keep the word in Python
|
<p>I have a list of addresses like:</p>
<pre><code>addresses = [['123 Abc Ave Santa Monica, CA'], ['595 Apts 76 Box Rd Washington, DC'],['Avalon Apts 34 Plain St Dallas, TX']]
</code></pre>
<p>I am trying to split the addresses to get the street address, city, and state separately. Like so:</p>
<pre><code>street_add = ['123 Abc Ave', '76 Box Rd', '34 Plain St']
city = ['Santa Monica', 'Washington', 'Dallas']
state = ['CA', 'DC', 'TX']
</code></pre>
<p>I have tried the solution from here for splitting using a list of words, but I am unable to keep the string I'm splitting on: <a href="https://stackoverflow.com/questions/35976827/python-split-string-using-any-word-from-a-list-of-word">Python. Split string using any word from a list of word</a></p>
<pre><code> #trails = ("(St)", "(Street)", "(Dr)", "(Drive)", "(Avenue)", "(Ave)", "(Court)", "(Road)")
trails = ("St", "Street", "Dr", "Drive", "(Avenue)", "(Ave)", "(Court)", "(Road)")
# \b means word boundaries.
regex = r"\b(?:{}).*".format("|".join(trails))
addresses = [re.split(regex, y) for x in addresses for y in x]
</code></pre>
<p>but this gives me:</p>
<pre><code>[['123 Abc ', None, None, None, None, ''], ['76 Box', 'Road', None, None, None, ''], ['34 Plain', None, None, None, None, '']]
</code></pre>
<p>How can I get just the addresses, city, and state separately?</p>
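<p>For reference, one hedged sketch: capture the street suffix with a named group instead of splitting on it. The pattern below is only an illustration for addresses shaped like the examples and will not cover every real address format.</p>

```python
import re

addresses = [['123 Abc Ave Santa Monica, CA'],
             ['595 Apts 76 Box Rd Washington, DC'],
             ['Avalon Apts 34 Plain St Dallas, TX']]

# Longer alternatives first so e.g. "Street" is not half-matched as "St".
# The greedy leading ".*" pushes the street match to the last number group,
# so "595 Apts" is treated as a building prefix rather than the street number.
pattern = re.compile(
    r".*\b(?P<street>\d+\s.*?\b(?:Street|St|Drive|Dr|Avenue|Ave|Court|Road|Rd))"
    r"\s+(?P<city>.+),\s*(?P<state>[A-Z]{2})$"
)

street_add, city, state = [], [], []
for (addr,) in addresses:
    m = pattern.search(addr)
    if m:
        street_add.append(m.group("street"))
        city.append(m.group("city"))
        state.append(m.group("state"))
```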
|
<python>
|
2023-12-28 16:26:45
| 0
| 700
|
mjoy
|
77,727,724
| 3,388,491
|
What's wrong with my pytest moto context in this test?
|
<p>As a frequent user of moto, I'm rather confused about the issue I'm facing.
I set up a test class with a pytest fixture providing a DynamoDB table which I want to access in my test.
Business as usual:</p>
<pre><code>class TestDynamodbClient:
@pytest.fixture
def test_table(self):
with mock_dynamodb():
table_name = "test_table"
dynamodb = boto3.resource('dynamodb')
params = {
'TableName': table_name,
'KeySchema': [
{'AttributeName': 'id', 'KeyType': 'HASH'},
],
'AttributeDefinitions': [
{'AttributeName': 'id', 'AttributeType': 'N'},
],
'ProvisionedThroughput': {
'ReadCapacityUnits': 5,
'WriteCapacityUnits': 5
}
}
table = dynamodb.create_table(**params)
table.wait_until_exists()
assert table_name in [t.name for t in dynamodb.tables.all()] # this is to illustrate the setup works
return table_name
</code></pre>
<p>Using the debugger, I confirmed the fixture to be evaluated prior to the test execution.
In my test, however, the table does not exist:</p>
<pre><code> @pytest.mark.integration
def test_recreate_table__table_exists__recreates_table(self, test_table):
with mock_dynamodb():
# given
client = DynamodbClient()
# this is to illustrate the setup does not work
dynamodb = boto3.resource('dynamodb')
assert test_table in [t.name for t in dynamodb.tables.all()] # raises AssertionError: assert 'test_table' in []
# when
client.recreate_table("test_table")
# then
# ...
</code></pre>
<p>My <code>conftest.py</code> looks as follows:</p>
<pre><code>def pytest_configure(config):
"""Mocked AWS Credentials for moto."""
os.environ["AWS_ACCESS_KEY_ID"] = "testing"
os.environ["AWS_SECRET_ACCESS_KEY"] = "testing"
os.environ["AWS_SECURITY_TOKEN"] = "testing"
os.environ["AWS_SESSION_TOKEN"] = "testing"
os.environ["AWS_DEFAULT_REGION"] = "eu-central-1"
os.environ["AWS_REGION"] = "eu-central-1"
config.addinivalue_line(
"markers",
"integration: mark test to run only run as part of the integration test suite",
)
</code></pre>
<p>I'm flabbergasted as to why the <code>"test_table"</code> table does not exist in my test, despite my fixture being evaluated.
Does anyone have any pointers towards possible causes?</p>
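<p>For context, a hedged guess at the cause, illustrated with a plain context manager instead of moto: <code>return</code> inside <code>with mock_dynamodb():</code> exits the mock the moment the fixture returns, while a generator fixture (<code>yield</code>) keeps the context open for the duration of the test.</p>

```python
from contextlib import contextmanager

active = []  # tracks whether our fake "mock" is currently in effect

@contextmanager
def fake_mock():
    active.append("dynamodb")
    try:
        yield
    finally:
        active.remove("dynamodb")

def fixture_with_return():
    with fake_mock():
        return "table"   # the with-block exits here -> mock already torn down

def fixture_with_yield():
    with fake_mock():
        yield "table"    # the with-block stays open until the test finishes

name = fixture_with_return()
assert active == []              # mock gone before the test even starts

gen = fixture_with_yield()
name = next(gen)
assert active == ["dynamodb"]    # mock still active while the test runs
gen.close()                      # pytest does this teardown automatically
assert active == []
```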
|
<python><pytest><moto>
|
2023-12-28 15:20:28
| 2
| 2,708
|
oschlueter
|
77,727,545
| 8,869,570
|
How to get around unexpected keyword arguments when callsite uses keyword argument but API uses unnamed arg?
|
<p>I have a class with a method</p>
<pre><code>def method(self, keyarg1, keyarg2):
# definition
</code></pre>
<p>At the callsite, it uses keyword arguments when passing in the arguments, i.e.,</p>
<pre><code>class_instance.method(keyarg1=..., keyarg2=...)
</code></pre>
<p>I'm trying to write a mock class with <code>method</code>, where <code>keyarg1, keyarg2</code> aren't used, so I wrote it as follows:</p>
<pre><code>class MockedClass:
def method(self, _, __):
# definition
</code></pre>
<p>However, this causes issues at the callsite since the client passes in the arguments using <code>keyarg1=, keyarg2=</code>. What is the canonical way in python to get around this issue?</p>
<p>I could, of course, define <code>method</code> in <code>MockedClass</code> to take in <code>keyarg1, keyarg2</code>, but I will run into linter errors that I will have to silence and would prefer avoiding that route.</p>
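<p>For reference, one common sketch: accept and ignore arbitrary keyword arguments. This keeps keyword call sites working without naming (or using) the parameters, and the underscore prefix conventionally signals to linters that the arguments are intentionally unused.</p>

```python
# Minimal mock: **_kwargs swallows keyarg1=..., keyarg2=... without using them.
class MockedClass:
    def method(self, **_kwargs):
        return "mocked"

instance = MockedClass()
result = instance.method(keyarg1=1, keyarg2=2)  # keyword call site still works
```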
|
<python>
|
2023-12-28 14:46:02
| 1
| 2,328
|
24n8
|
77,727,456
| 1,907,755
|
How to read DeltaTable on azure storage account using GraphQL?
|
<p>I'm looking for different ways to read a <code>DeltaTable</code> on an Azure Storage account using APIs.
One option I found is using <code>delta-rs</code> to read the Delta table and expose that as <code>Azure Functions</code>.</p>
<p>I was wondering whether there is any way we could use <code>GraphQL</code> to read the Delta tables and expose that as Azure Functions.</p>
|
<python><graphql><azure-functions><delta-lake>
|
2023-12-28 14:27:48
| 1
| 9,019
|
Shankar
|
77,727,072
| 525,865
|
Scraper with BS4 does not give back any response (data) on the screen
|
<p>I want to get an overview of the data of Austrian hospitals: how do I scrape the data from this site: <a href="https://www.klinikguide.at/oesterreichs-kliniken/" rel="nofollow noreferrer">https://www.klinikguide.at/oesterreichs-kliniken/</a></p>
<p>My approach with BS4 is the few lines below. Note: at the moment I only want to print the data to the screen; further processing or saving to a file, database, etc. can come later.</p>
<p>On Colab I get no response at all.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
# URL of the website
url = "https://www.klinikguide.at/oesterreichs-kliniken/"
# Send a GET request to the URL
response = requests.get(url)
# Check if the request was successful (status code 200)
if response.status_code == 200:
# Parse the HTML content of the page
soup = BeautifulSoup(response.text, 'html.parser')
# Find the elements containing the hospital data
hospital_elements = soup.find_all('div', class_='your-hospital-data-class')
# Process and print the data
for hospital in hospital_elements:
# Extract relevant information (adjust based on the structure of the website)
hospital_name = hospital.find('span', class_='hospital-name').text
address = hospital.find('span', class_='address').text
# Extract other relevant information as needed
# Print or store the extracted data
print(f"Hospital Name: {hospital_name}")
print(f"Address: {address}")
print("")
# You can further process the data or save it to a file, database, etc.
else:
print(f"Failed to retrieve the page. Status code: {response.status_code}")
</code></pre>
<p><strong>Update</strong>: note, I changed the code to this:</p>
<pre><code>import re
from urllib.request import urlopen
from bs4 import BeautifulSoup as BS
url = "https://www.klinikguide.at/oesterreichs-kliniken/"
response = urlopen(url)
html_content = response.read()
soup = BS(html_content, "html.parser")
pat_kontakt = (
r"(?P<address>.+),\s*"
r"Tel:\s*(?P<tel>.+),\s*"
r"E-Mail:\s*(?P<email>.+),\s*"
r"Webadresse:\s*"
r"(?P<web_adress>.+)"
)
out = {}
for tag in soup.select(".hospitals a,h2"):
if tag.name == "h2":
category = tag.get_text()
elif tag.name == "a":
hospital_name = tag.get_text()
hospital_url = tag["href"]
with urlopen(hospital_url) as hospital_response:
hospital_html_content = hospital_response.read()
soup_kontakt = BS(hospital_html_content, "html.parser")
m = re.search(pat_kontakt, soup_kontakt.select_one(".kontakt_hospital > p").get_text())
out.setdefault(category, []).append(dict(name=hospital_name, **m.groupdict()))
# print(f"{category:<20}", hospital_name) # uncomment if needed
# import time; time.sleep(2) # You might want to include a delay to be respectful when scraping
# Now 'out' contains the scraped data
print(out)
</code></pre>
<p>and get back the following error</p>
<hr />
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-1-faa3c9672b73> in <cell line: 21>()
29 soup_kontakt = BS(hospital_html_content, "html.parser")
30 m = re.search(pat_kontakt, soup_kontakt.select_one(".kontakt_hospital > p").get_text())
---> 31 out.setdefault(category, []).append(dict(name=hospital_name, **m.groupdict()))
32
33 # print(f"{category:<20}", hospital_name) # uncomment if needed
AttributeError: 'NoneType' object has no attribute 'groupdict'
</code></pre>
|
<python><web-scraping><beautifulsoup>
|
2023-12-28 13:09:39
| 2
| 1,223
|
zero
|
77,727,056
| 12,276,279
|
How can I use `df.shift(n)` in pandas dataframe such that I can bring `n` item from bottom to the top instead of `nan` values or vice versa?
|
<p>I have a pandas dataframe <code>df</code> containing 5 rows and 2 columns.</p>
<pre><code>A B
0 10 0
1 20 5
2 30 10
3 40 15
4 50 20
</code></pre>
<p><code>df.to_dict()</code> returns</p>
<pre><code>{'A': {0: 10, 1: 20, 2: 30, 3: 40, 4: 50},
'B': {0: 0, 1: 5, 2: 10, 3: 15, 4: 20}}
</code></pre>
<p>For column A, I want to shift each item to two rows below. Instead of having <code>nan</code> values on top, I want to bring two elements that would be pushed out in the bottom to the top.</p>
<p>For column B, I want to do the opposite - shift each item to two rows above. Instead of having <code>nan</code> values on bottom, I want to bring two elements that would be pushed out in the top to the bottom.</p>
<p>I can use <code>df["A"].shift(2)</code> and <code>df["B"].shift(-2)</code>. However, I get nan values.</p>
<p>My expected result is:</p>
<pre><code>A B
0 40 10
1 50 15
2 10 20
3 20 0
4 30 5
</code></pre>
<p>How can I achieve this?</p>
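<p>For reference, a sketch using <code>numpy.roll</code>, which wraps elements around the ends instead of introducing <code>NaN</code> values:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [10, 20, 30, 40, 50], "B": [0, 5, 10, 15, 20]})

# Positive shift moves items down (bottom wraps to top); negative moves up.
out = pd.DataFrame({"A": np.roll(df["A"], 2),
                    "B": np.roll(df["B"], -2)})
```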
|
<python><python-3.x><pandas><dataframe><shift>
|
2023-12-28 13:05:37
| 1
| 1,810
|
hbstha123
|
77,726,963
| 19,130,803
|
extract file name from html file uploader
|
<p>I am developing a <code>python</code> web app. One of the pages has file-upload functionality using the HTML <code><input type="file" id="file"></code> element. I want to get the filename of the file uploaded by the user. In my backend the function receives the input box value as a <code>str</code>, e.g. <code>C:\fakepath\hello.txt</code>. I tried the code below to extract the filename; suppose the user uploaded the <code>hello.txt</code> file:</p>
<pre><code>
def process(value: str) -> bool:
status: bool = False
print(f'{value}') # gives C:\fakepath\hello.txt
file_path: Path = Path(value)
file_name: str = file_path.name # gives C:\fakepath\hello.txt and not hello.txt
# further processing code
return status
</code></pre>
<p>What am I missing?</p>
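<p>For context, a hedged guess: on a POSIX backend, <code>pathlib.Path</code> treats <code>\</code> as an ordinary character, so <code>.name</code> returns the whole string; <code>PureWindowsPath</code> parses backslash-separated paths on any platform.</p>

```python
from pathlib import PureWindowsPath

# Browsers send the Windows-style fake path regardless of the server's OS.
value = r"C:\fakepath\hello.txt"
file_name = PureWindowsPath(value).name  # parses '\' as a separator
```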
|
<python>
|
2023-12-28 12:43:49
| 2
| 962
|
winter
|
77,726,479
| 315,168
|
Python: Converting SQL list of dicts to dict of lists in fast manner (from row data to columnar)
|
<p>I am reading and processing row-oriented data from SQL database, and then write it out as columnar data as a Parquet file.</p>
<p>Converting this data in Python is simple. The problem is that the dataset is very large and the raw speed of Python code is a practical bottleneck. My code is spending a lot of time converting a Python list of dictionaries to dictionary of lists to feed it to PyArrow's <code>ParquetWriter.write_table()</code>.</p>
<p>The data is read with SQLAlchemy and psycopg2.</p>
<p>The simplified loop looks like:</p>
<pre class="lang-py prettyprint-override"><code># Note: I am using a trick of preallocated lists here already
columnar_data = {"a": [], "b": []}
for row in get_rows_from_sql():
    columnar_data["a"].append(process(row["a"]))
    columnar_data["b"].append(process(row["b"]))
</code></pre>
<p>What I would like to do:</p>
<pre class="lang-py prettyprint-override"><code>input_data = get_rows_from_sql()
columnar_input = convert_from_list_of_dicts_to_dict_of_lists_very_fast(input_data)
columnar_output["a"] = map(process, columnar_input["a"])
columnar_output["b"] = map(process, columnar_input["b"])
</code></pre>
<p>I would like to move as much as possible of the loop of transforming data from Python native to CPython internal, so that the code runs faster.</p>
<p>SQLAlchemy or psycopg2 does not seem to natively support columnar data output, as SQL is row-oriented, but I might be wrong here.</p>
<p>My question is what kind of Python optimisations can be applied here? I assume this is a very common problem, as Pandas and Polars operate on column-oriented data, whereas data input is often row-oriented like SQL.</p>
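<p>For reference, one micro-optimisation sketch (with made-up stand-in rows): fetch rows as tuples and transpose them with <code>zip(*rows)</code>, which runs in C, instead of appending field by field in a Python loop.</p>

```python
# Hypothetical stand-in for get_rows_from_sql() returning row tuples.
rows = [(1, "x"), (2, "y"), (3, "z")]
columns = ("a", "b")

# zip(*rows) transposes row tuples into column tuples in C.
cols = dict(zip(columns, map(list, zip(*rows))))

# Per-column processing can then run over whole columns at once.
cols["a"] = [v * 10 for v in cols["a"]]  # stand-in for process()
```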
|
<python><pandas><psycopg2><parquet>
|
2023-12-28 11:02:44
| 2
| 84,872
|
Mikko Ohtamaa
|
77,726,384
| 9,359,785
|
How can a Windows program detect Python installations?
|
<p>I am trying to determine how a given Windows application locates and loads the Python installation it's going to use.</p>
<p>Some context : I have <strong>Python 3.12</strong> installed on my system (for my own programming). But I also have a Windows application installed, named "DaVinci Resolve", which needs <strong>Python 3.6</strong> in order to offer a certain functionality.</p>
<p>Problem: That application keeps detecting and loading Python 3.12.</p>
<p>Please note that it succeeds in loading Python 3.12, which is a good thing because then I can run some commands to check <code>sys.path</code> and <code>platform.python_version()</code>. The issues only arise later, when I try to use that certain functionality I was mentioning (it's a scripting API. It requires Python 3.6). That API is the whole point of using that application, it's my end goal. Don't let it distract you, if you want the full story you can find it here : <a href="https://forum.blackmagicdesign.com/viewtopic.php?f=12&t=168568&p=1009509#p1009509" rel="nofollow noreferrer">https://forum.blackmagicdesign.com/viewtopic.php?f=12&t=168568&p=1009509#p1009509</a></p>
<p>I'm giving you the short version of my attempts:</p>
<ul>
<li><p><strong>If %PATH% <em>only</em> contains Python 3.6 then the application <em>still</em> loads Python 3.12</strong> (I've verified by starting the application from a
command prompt where I <code>echo %PATH%</code> before, and also do
<code>print(sys.path)</code> from within the application after startup. Both show
only python36)</p>
</li>
<li><p><strong>If I "hide" the folder containing Python 3.12</strong> (by renaming it temporarily) then the application says that Python is not installed on the system. If it was merely scanning the folders then it would still detect Python 3.6!</p>
</li>
</ul>
<p><strong>Hence the question :
What are the standard-ish ways used by a Windows application to locate Python installations on the system?</strong></p>
<ul>
<li>PATH is out of the way, as demonstrated</li>
<li>I'm not looking for completely custom (goofy) methods, only standard methods that I might not know but that experienced Python users might know.</li>
</ul>
|
<python><davinci-resolve>
|
2023-12-28 10:41:55
| 0
| 1,785
|
jeancallisti
|
77,726,216
| 5,710,451
|
How to cover room in light with least number of light sources?
|
<p>I have an image of a map of a room, and I want to calculate the minimum number of points on this map where I can place a lamp so that the entire map is lit.
I read the image and ran edge detection, but I don't know how to calculate the number of lamps.
Does Python have a function for this?</p>
<p>For example:
<a href="https://i.sstatic.net/Hkz6r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hkz6r.png" alt="enter image description here" /></a></p>
<p>and the answer: 2</p>
<p><a href="https://i.sstatic.net/hROPs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hROPs.png" alt="enter image description here" /></a></p>
|
<python><image-processing><geometry>
|
2023-12-28 10:07:32
| 0
| 370
|
z.gomar
|
77,725,842
| 865,169
|
How can I check if a Python variable is an integer, including numpy.int*?
|
<p>I would like to use an abstract way of checking whether a variable is an integer. I know I can do:</p>
<pre><code>>>> a = 1
>>> isinstance(a, int)
True
</code></pre>
<p>However, this does not work for some special types of integers, e.g.:</p>
<pre><code>>>> import numpy as np
>>> b = np.int64(1)
>>> isinstance(b, int)
False
</code></pre>
<p>I know I can also do:</p>
<pre><code>>>> isinstance(b, np.integer)
True
</code></pre>
<p>But unfortunately, <code>np.integer</code> is not "abstract" enough to match an ordinary <code>int</code>:</p>
<pre><code>>>> isinstance(a, np.integer)
False
</code></pre>
<p>What can I do to check and conclude that both <code>a</code> and <code>b</code> are "integers", preferably in a way that also matches other "integer-in-spirit" special data types that other packages than NumPy might bring to the party?</p>
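<p>For reference, a sketch using <code>numbers.Integral</code>, the abstract base class that both <code>int</code> and NumPy's integer scalar types register against (note one caveat: <code>bool</code> also passes this check, since it subclasses <code>int</code>):</p>

```python
import numbers

import numpy as np

a = 1
b = np.int64(1)

# numbers.Integral matches int, NumPy integer scalars, and any third-party
# type that registers itself as an integer ABC.
assert isinstance(a, numbers.Integral)
assert isinstance(b, numbers.Integral)
assert not isinstance(1.0, numbers.Integral)
```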
|
<python><integer>
|
2023-12-28 08:49:19
| 0
| 1,372
|
Thomas Arildsen
|
77,725,219
| 3,623,537
|
Type comments for list of values
|
<p>Is there a way to type, e.g., a list of ints without <code>typing</code>?</p>
<p>I use Python 2.7, where there is no <code>typing</code> module yet, and the only way to define a type for type hints is type comments.</p>
<p>The goal is that PyLance would see the variables as list of integers.</p>
<pre class="lang-py prettyprint-override"><code># python 3+
from typing import List
parameters = [] # type: List[int]
# python 2.7
parameters = [] # type: ???
</code></pre>
|
<python><python-2.7><python-typing><pyright>
|
2023-12-28 05:51:29
| 1
| 469
|
FamousSnake
|
77,725,199
| 9,797,207
|
Convert datetime string to with timezone in django based on active time zone
|
<p>I am reading a GET request and adding the data to the db.</p>
<pre><code>import dateutil.parser as dt
date_string = '2023-12-14 08:00:00'
date_obj = dt.parse(date_string)
</code></pre>
<p>It shows warning</p>
<blockquote>
<p>RuntimeWarning: DateTimeField RequestLab.end_time received a naive datetime (2023-12-14 08:00:00) while time zone support is active.</p>
</blockquote>
<p>Is there any way I can convert this to a datetime object with a time zone</p>
<pre><code>from django.utils import timezone
curr_timezone = timezone.get_current_timezone()
</code></pre>
<p>and store in the db..</p>
<p>I tried using various methods like</p>
<pre><code>datetime_obj_test1 = date_obj.replace(tzinfo=curr_timezone)
datetime_obj_test2 = date_obj.astimezone(curr_timezone)
</code></pre>
<p>But these two change the time, i.e. for <em>timezone = "Asia/Kolkata"</em></p>
<blockquote>
<p>datetime_obj_test1 = 2023-12-14 08:00:00+05:53</p>
</blockquote>
<blockquote>
<p>datetime_obj_test2 = 2023-12-14 19:30:00+05:30</p>
</blockquote>
<p>I am expecting this as output:</p>
<blockquote>
<p>2023-12-14 08:00:00+05:30</p>
</blockquote>
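<p>For context, a hedged sketch with the stdlib <code>zoneinfo</code> module (Python 3.9+): unlike a pytz-style tzinfo, attaching a <code>ZoneInfo</code> via <code>replace</code> computes the correct modern offset for that date, which avoids the odd <code>+05:53</code> (LMT) result. In Django, <code>timezone.make_aware(date_obj)</code> with the zoneinfo backend should behave the same way.</p>

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

date_obj = datetime(2023, 12, 14, 8, 0, 0)  # the parsed naive datetime
aware = date_obj.replace(tzinfo=ZoneInfo("Asia/Kolkata"))
# str(aware) -> '2023-12-14 08:00:00+05:30'
```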
|
<python><django><datetime><django-timezone>
|
2023-12-28 05:44:19
| 2
| 467
|
saibhaskar
|
77,725,122
| 856,804
|
Does TextRecognitionModel from opencv support GPU mode?
|
<p>I've made the <code>TextDetectionModel</code> tutorial at <a href="https://docs.opencv.org/4.x/d4/d43/tutorial_dnn_text_spotting.html" rel="nofollow noreferrer">https://docs.opencv.org/4.x/d4/d43/tutorial_dnn_text_spotting.html</a> work, and I'd like to speed up the detection with GPU by adding a couple of lines, but I'm hitting error.</p>
<p>The code I'm running is below, and I've commented the two lines that would lead to error</p>
<pre class="lang-py prettyprint-override"><code>import io
import PIL.Image
import cv2 as cv
import numpy as np
import requests
# Downloaded following instruction from https://docs.opencv.org/4.x/d4/d43/tutorial_dnn_text_spotting.html
db_model_path = (
"/home/dev/disk/projects/learn/opencv/DB_TD500_resnet50.onnx" # eng and ch
)
db_model = cv.dnn.TextDetectionModel_DB(db_model_path)
binThresh = 0.3
polyThresh = 0.5
maxCandidates = 200
unclipRatio = 2.0
(
db_model.setBinaryThreshold(binThresh)
.setPolygonThreshold(polyThresh)
.setMaxCandidates(maxCandidates)
.setUnclipRatio(unclipRatio)
)
scale = 1.0 / 255.0
mean = (122.67891434, 116.66876762, 104.00698793)
inputSize = (736, 736)
db_model.setInputParams(scale, inputSize, mean)
# These are the two lines that create error.
db_model.setPreferableBackend(cv.dnn.DNN_BACKEND_CUDA)
db_model.setPreferableTarget(cv.dnn.DNN_BACKEND_CUDA)
response = requests.get("https://docs.opencv.org/4.x/detect_test1.jpg")
img = PIL.Image.open(io.BytesIO(response.content))
out = db_model.detect(np.asarray(img))
</code></pre>
<p>The error:</p>
<pre><code>error: OpenCV(4.8.1) /io/opencv/modules/dnn/src/net_impl.cpp:119: error: (-215:Assertion failed) preferableBackend != DNN_BACKEND_CUDA || IS_DNN_CUDA_TARGET(preferableTarget) in function 'validateBackendAndTarget'
</code></pre>
<p>I wonder if <code>TextDetectionModel_DB</code> supports GPU mode? If so, what's the proper way to set it up?</p>
|
<python><c++><opencv>
|
2023-12-28 05:10:44
| 0
| 9,110
|
zyxue
|
77,724,574
| 4,281,353
|
LangChain Hugging Face text splitter can overflow the max token size
|
<p>It looks like the LangChain <a href="https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/split_by_token#hugging-face-tokenizer" rel="nofollow noreferrer">Hugging Face tokenizer Text Splitter</a> is broken and cannot split a text into chunks whose token counts stay below the max token length the model can accept.</p>
<p>Tokenization may not preserve the same word it had tokenized if tokens get chopped. For example, tokenizer of the model <strong>all-MiniLM-L6-v2</strong> will tokenize <strong>8 trillions</strong> into <code>['[CLS]', '8', 'trillion', '##s', '[SEP]']</code> where the plural <strong>s</strong> in <strong>trillions</strong> becomes a token, not part of the token of the word, hence a word can generate multiple tokens. This can cause an issue.</p>
<p>Suppose <strong>8 trillions</strong> is a part of a long text to split into multiple by counting its tokens.</p>
<ol>
<li>tokenizes the long text including <strong>8 trillions</strong> as <code>['[CLS]', '8', 'trillion', '##s', '[SEP]']</code>.</li>
<li>split the tokens at <code>##s</code>.</li>
<li>restoring the tokens of the split makes <strong>##s</strong> a new word.</li>
<li>text split including <strong>##s</strong> is then re-tokenized into <code>['[CLS]', '#', '#', 's', '[SEP]']</code> as model inputs, adding two more token counts which can overflow the max token length.</li>
</ol>
<pre><code>from transformers import (
AutoTokenizer,
PreTrainedTokenizer
)
model_name: str = "all-MiniLM-L6-v2"
tokenizer: PreTrainedTokenizer = AutoTokenizer.from_pretrained(
f'sentence-transformers/{model_name}'
)
text = "8 trillions"
tokenizer.batch_decode(tokenizer(text)['input_ids'])
-----
['[CLS]', '8', 'trillion', '##s', '[SEP]']
</code></pre>
<pre><code>tokenizer.batch_decode(tokenizer("##s")['input_ids'])
-----
['[CLS]', '#', '#', 's', '[SEP]'] # 's' is used to be one token as part of 'trillions', now become three.
</code></pre>
<p>It also assumes that the tokenizers generate the structure of <code>[start_token][tokens][end_token]</code> and use <code>[1:-1]</code> to truncate start_token and end_token.</p>
<p><a href="https://api.python.langchain.com/en/stable/_modules/langchain/text_splitter.html#SentenceTransformersTokenTextSplitter" rel="nofollow noreferrer">SentenceTransformersTokenTextSplitter</a></p>
<pre><code>def split_text(self, text: str) -> List[str]:
def encode_strip_start_and_stop_token_ids(text: str) -> List[int]:
return self._encode(text)[1:-1] # <-----
tokenizer = Tokenizer(
chunk_overlap=self._chunk_overlap,
tokens_per_chunk=self.tokens_per_chunk,
decode=self.tokenizer.decode,
encode=encode_strip_start_and_stop_token_ids,
)
return split_text_on_tokens(text=text, tokenizer=tokenizer)
</code></pre>
<p>However, this does not work because some models may not always generate a start token. For instance, tokenizing the word <strong>a</strong> yields start token <code>3</code>, which is an empty string, with the <code>gtr-t5-large</code> model.</p>
<pre><code>model_name: str ="gtr-t5-large"
tokenizer: PreTrainedTokenizer = AutoTokenizer.from_pretrained(
f'sentence-transformers/{model_name}'
)
text = "a"
tokenizer(text)['input_ids']
-----
[3, 9, 1]
</code></pre>
<pre><code>tokenizer.batch_decode(tokenizer(text)['input_ids'])
-----
['', 'a', '</s>']
</code></pre>
<p>But tokenizing <strong>Madame</strong> does not have the start token. Then slicing tokens with <code>[1:-1]</code> will lose the first token.</p>
<pre><code>text = "Madame"
tokenizer(text)['input_ids']
-----
[27328, 1]
</code></pre>
<pre><code>tokenizer.batch_decode(tokenizer(text)['input_ids'])
-----
['Madame', '</s>']
</code></pre>
<p>Below shows naively slicing with <code>[1:-1]</code> will chop off the first word.</p>
<pre><code>from langchain.text_splitter import (
SentenceTransformersTokenTextSplitter
)
splitter = SentenceTransformersTokenTextSplitter(
model_name="gtr-t5-large",
chunk_overlap=0
)
text: str = "Madame Speaker, Vice President Biden, members of Congress, distinguished guests, and fellow Americans:"
splitter.split_text(text)
-----
# 'Madame' is lost, if run it again on the result, then 'Speaker' gets lost.
['Speaker, Vice President Biden, members of Congress, distinguished guests, and fellow Americans:']
</code></pre>
|
<python><huggingface-tokenizers><py-langchain>
|
2023-12-28 00:41:56
| 0
| 22,964
|
mon
|
77,724,454
| 12,974,079
|
Patching a property of a used class
|
<p>I'm trying to patch two properties of a class, but the mocking is returning a MagicMock instead of the expected return value (a string).</p>
<p>Client class:</p>
<pre><code>class ClientApi:
def create_path(self):
return f"client_api_{self.time_now}T{self.date_now}.json"
@property
def date_now(self):
return datetime.now().strftime("%Y-%m-%d")
@property
def time_now(self):
return datetime.now().strftime("%H:%M:%S")
</code></pre>
<p>Test Class</p>
<pre><code>class TestClientApi:
@pytest.fixture
def setup_class(self):
yield ClientApi()
def test_create_path(self, setup_class):
with patch("client_api.ClientApi.date_now") as mock_date_now, \
patch("client_api.ClientApi.time_now") as mock_time_now:
mock_date_now.return_value = "2023-11-28"
mock_time_now.return_value = "00:00:00"
expected_path = "client_api_2023-11-28T00:00:00.json"
assert expected_path == setup_class.create_path()
</code></pre>
<p>The create_path method is returning <code>"<MagicMock name='time_now' id='1687227445600'>T<MagicMock name='date_now' id='1687227461936'>.json"</code> and I would like to understand why.</p>
<p>I would also like to ask if anyone knows how I can compress the two patches into one, and whether it will work, for example:</p>
<pre><code>def test_create_path(self, setup_class):
with patch("client_api.ClientApi") as mock_client_api:
mock_client_api.return_value.date_now.return_value = "2023-11-28"
mock_client_api.return_value.time_now.return_value = "00:00:00"
</code></pre>
<p>Or any similar way; this version is not mocking it and just returns the current timestamp.</p>
<p>Thank you in advance</p>
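For context, properties are usually patched with <code>new_callable=PropertyMock</code>, which replaces the property itself so that plain attribute access returns the configured value. A minimal self-contained sketch (the class is redeclared here so it runs standalone; note also that <code>create_path</code> puts <code>time_now</code> before <code>date_now</code>, so the result string has time first):

```python
from unittest.mock import patch, PropertyMock
from datetime import datetime

class ClientApi:
    def create_path(self):
        return f"client_api_{self.time_now}T{self.date_now}.json"

    @property
    def date_now(self):
        return datetime.now().strftime("%Y-%m-%d")

    @property
    def time_now(self):
        return datetime.now().strftime("%H:%M:%S")

# patch.object with new_callable=PropertyMock replaces the property itself,
# so attribute access returns return_value instead of a bound MagicMock
with patch.object(ClientApi, "date_now", new_callable=PropertyMock) as mock_date, \
     patch.object(ClientApi, "time_now", new_callable=PropertyMock) as mock_time:
    mock_date.return_value = "2023-11-28"
    mock_time.return_value = "00:00:00"
    result = ClientApi().create_path()

print(result)  # time comes first because of the f-string order in create_path
```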
|
<python><python-3.x><python-unittest><python-unittest.mock>
|
2023-12-27 23:41:26
| 1
| 363
|
HouKaide
|
77,724,269
| 11,932,910
|
Why does VSCode debugger ignore raised exception despite the checkmark?
|
<p>I was debugging Pandas in VSCode (Python). I ran the following code in the debugger:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
index = pd.Index([1,2,3,4]).get_loc('w')
</code></pre>
<p>which resulted in an error (<code>KeyError: 'w'</code>). However, this error was not caught by the debugger, but simply printed in the output.</p>
<p>I expected the debugger to stop on the <code>raise</code> inside <code>except</code> here in this Pandas file (<a href="https://github.com/pandas-dev/pandas/blob/bdb509f95a8c0ff16530cedb01c2efc822c0d314/pandas/core/indexes/base.py#L3652-L3655" rel="nofollow noreferrer">permalink</a>):</p>
<pre class="lang-py prettyprint-override"><code> try:
return self._engine.get_loc(casted_key)
except KeyError as err:
raise KeyError(key) from err
</code></pre>
<p>I have the following breakpoint settings:</p>
<ul>
<li>(unchecked) Raised Exceptions</li>
<li>(checked) Uncaught Exceptions</li>
<li>(checked) User Uncaught Exceptions</li>
</ul>
<p>and also <code>"justMyCode": false,</code> in my debugging profile in <code>launch.json</code>.</p>
<p><strong>Question</strong>: How do I make VSCode debugger stop on this uncaught exception (instead of terminating and printing an error message)?</p>
<p>If I check "Raised Exceptions", then the debugger will stop <s>where I expect it to stop</s> inside <code>try</code>. But it will also stop tens of times in many other places, which I am not interested in, so this is not an option.</p>
<p>Now, if I follow the same code pattern and try this:</p>
<pre class="lang-py prettyprint-override"><code>try:
print(1 / 0)
except ZeroDivisionError as err:
raise ZeroDivisionError('bla') from err
</code></pre>
<p>then VSCode Python debugger will stop at the last line (and let me debug). This shows that raising inside <code>except</code> counts as an uncaught exception.</p>
<hr />
<p>P.S. To answer @starball's <a href="https://stackoverflow.com/questions/77724269/why-does-vscode-debugger-ignore-raised-exception-despite-the-checkmark?noredirect=1#comment137024764_77724269">comment</a>, if I run</p>
<pre class="lang-py prettyprint-override"><code>try:
index = pd.Index([1,2,3,4]).get_loc('w')
except KeyError as err:
raise KeyError('w') from err
</code></pre>
<p>exactly the same happens, as if I ran the same line without try-except.</p>
|
<python><pandas><visual-studio-code><vscode-debugger>
|
2023-12-27 22:34:43
| 0
| 381
|
paperskilltrees
|
77,724,104
| 3,810,748
|
Why does pandas.apply return an index on the first iteration instead of the actual element?
|
<h2>Current Problem:</h2>
<p>I currently have the following pandas dataframe object:</p>
<pre><code>>>> print(my_df)
Date Revenue
0 2023-12-27 00:00:00-05:00 3880359
1 2023-12-26 00:00:00-05:00 3139100
2 2023-12-22 00:00:00-05:00 2849700
3 2023-12-21 00:00:00-05:00 4884800
4 2023-12-20 00:00:00-05:00 4032200
5 2023-12-19 00:00:00-05:00 4979100
6 2023-12-18 00:00:00-05:00 6314700
7 2023-12-15 00:00:00-05:00 11503000
8 2023-12-14 00:00:00-05:00 8033300
9 2023-12-13 00:00:00-05:00 7727900
</code></pre>
<p>I get normal expected results when I loop through <code>Revenue</code> column:</p>
<pre><code>>>> my_df['Revenue'].apply(lambda x: print(x, type(x)))
3880359 <class 'int'>
3139100 <class 'int'>
2849700 <class 'int'>
4884800 <class 'int'>
4032200 <class 'int'>
4979100 <class 'int'>
6314700 <class 'int'>
11503000 <class 'int'>
8033300 <class 'int'>
7727900 <class 'int'>
</code></pre>
<p>I get abnormal unexpected results when I loop through <code>Date</code> column:</p>
<pre><code>>>> my_df['Date'].apply(lambda x: print(x, type(x)))
DatetimeIndex(['2023-12-27 00:00:00-05:00', '2023-12-26 00:00:00-05:00', '2023-12-22 00:00:00-05:00', '2023-12-21 00:00:00-05:00', '2023-12-20 00:00:00-05:00', '2023-12-19 00:00:00-05:00', '2023-12-18 00:00:00-05:00', '2023-12-15 00:00:00-05:00', '2023-12-14 00:00:00-05:00', '2023-12-13 00:00:00-05:00'], dtype='datetime64[ns, America/New_York]', freq=None) <class 'pandas.core.indexes.datetimes.DatetimeIndex'>
2023-12-27 00:00:00-05:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'>
2023-12-26 00:00:00-05:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'>
2023-12-22 00:00:00-05:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'>
2023-12-21 00:00:00-05:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'>
2023-12-20 00:00:00-05:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'>
2023-12-19 00:00:00-05:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'>
2023-12-18 00:00:00-05:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'>
2023-12-15 00:00:00-05:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'>
2023-12-14 00:00:00-05:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'>
2023-12-13 00:00:00-05:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'>
</code></pre>
<p><strong>Why is this happening? Why do I get an index object at first?</strong></p>
<h2>Steps to Reproduce:</h2>
<p>Create a file locally named <code>example.json</code> with the following contents:</p>
<pre><code>{"Date":{"0":1703653200000,"1":1703566800000,"2":1703221200000,"3":1703134800000,"4":1703048400000,"5":1702962000000,"6":1702875600000,"7":1702616400000,"8":1702530000000,"9":1702443600000},"Revenue":{"0":3880359,"1":3139100,"2":2849700,"3":4884800,"4":4032200,"5":4979100,"6":6314700,"7":11503000,"8":8033300,"9":7727900}}
</code></pre>
<p>Create a file locally named <code>example.py</code> with the following contents:</p>
<pre><code>import pandas as pd
# Assuming "example.json" is in the same directory as your Python script or notebook
file_path = 'example.json'
# Read DataFrame from JSON file
df = pd.read_json(file_path)
# Display the DataFrame
print(df)
# Recreate the problem
df['Date'].apply(lambda x: print(x, type(x)))
</code></pre>
|
<python><pandas><date><datetime>
|
2023-12-27 21:35:57
| 1
| 6,155
|
AlanSTACK
|
77,724,047
| 3,973,175
|
how to add comment to SVG file written by matplotlib?
|
<p>Given an example plot</p>
<pre><code>import matplotlib.pyplot as plt
plt.plot([1,2,3], [2,4,6])
plt.savefig('simple.plot.svg')
</code></pre>
<p>how can I write a comment into that SVG file?</p>
<p>For example, I would like to record that the SVG file was written by <code>simple.plot.py</code>.</p>
<p>I have seen there may be a way to add comments: <a href="https://matplotlib.org/stable/api/backend_svg_api.html" rel="nofollow noreferrer">https://matplotlib.org/stable/api/backend_svg_api.html</a> I am not a python or matplotlib expert, however, and I don't know how I could implement that.</p>
<p>I was hoping that something like <code>plt.savefig('simple.plot.svg', comment = "written by simple.plot.py")</code> would be available, but alas, this functionality doesn't seem to exist: <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.savefig.html#matplotlib.pyplot.savefig" rel="nofollow noreferrer">https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.savefig.html#matplotlib.pyplot.savefig</a></p>
<p>but I have no idea how I could implement that in a simple script like the one above. How can I do so?</p>
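For what it's worth, <code>savefig</code> accepts a <code>metadata</code> dict that the SVG backend embeds in the output file (keys such as <code>Title</code> and <code>Description</code>). Whether a metadata entry counts as a "comment" depends on your needs, but the text does end up inside the SVG. A hedged sketch:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe in scripts
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [2, 4, 6])
# the SVG writer embeds these values in the file's metadata section
plt.savefig("simple.plot.svg", metadata={"Description": "written by simple.plot.py"})

with open("simple.plot.svg") as fh:
    svg = fh.read()
print("written by simple.plot.py" in svg)
```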
|
<python><matplotlib><svg>
|
2023-12-27 21:16:35
| 1
| 6,227
|
con
|
77,724,025
| 1,390,272
|
pyenv installing any python version is resulting in ModuleNotFoundError: No module named '_ssl'
|
<p>As a new user on my Ubuntu WSL system, I am installing Python versions using pyenv to manage multiple versions. I'm following the standard instructions:</p>
<p>I installed homebrew:</p>
<pre><code>/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
</code></pre>
<p>Updated brew and installed pyenv using homebrew (currently the recommended process)</p>
<pre><code>brew update
brew install pyenv
</code></pre>
<p>Install any python version using this newly installed pyenv:</p>
<pre><code>pyenv install 3.10.13
</code></pre>
<p>And I get the following error:</p>
<pre><code>Downloading Python-3.10.13.tar.xz...
-> https://www.python.org/ftp/python/3.10.13/Python-3.10.13.tar.xz
Installing Python-3.10.13...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/adam/.pyenv/versions/3.10.13/lib/python3.10/ssl.py", line 99, in <module>
import _ssl # if we can't import it, let the error propagate
ModuleNotFoundError: No module named '_ssl'
ERROR: The Python ssl extension was not compiled. Missing the OpenSSL lib?
Please consult to the Wiki page to fix the problem.
https://github.com/pyenv/pyenv/wiki/Common-build-problems
BUILD FAILED (Ubuntu 20.04 using python-build 2.3.35-5-g2d850751)
Inspect or clean up the working tree at /tmp/python-build.20231227145009.23362
Results logged to /tmp/python-build.20231227145009.23362.log
Last 10 log lines:
LD_LIBRARY_PATH=/tmp/python-build.20231227145009.23362/Python-3.10.13 ./python -E -m ensurepip \
$ensurepip --root=/ ; \
fi
Looking in links: /tmp/tmprqhmro5y
Processing /tmp/tmprqhmro5y/setuptools-65.5.0-py3-none-any.whl
Processing /tmp/tmprqhmro5y/pip-23.0.1-py3-none-any.whl
Installing collected packages: setuptools, pip
WARNING: The scripts pip3 and pip3.10 are installed in '/home/adam/.pyenv/versions/3.10.13/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed pip-23.0.1 setuptools-65.5.0
</code></pre>
<p>Any thoughts on the cause?
I've tried the following with no luck:</p>
<pre><code>sudo apt install libssl-dev
</code></pre>
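A common cause (per the pyenv wiki linked in the error output) is that pyenv compiles Python from source and needs more build dependencies than <code>libssl-dev</code> alone; on WSL it may also be picking up Homebrew's toolchain rather than the system OpenSSL. A hedged sketch of the usual Ubuntu/Debian setup, shown as an environment-setup fragment rather than a verified fix:

```shell
# build dependencies suggested by the pyenv wiki for Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y build-essential libssl-dev zlib1g-dev libbz2-dev \
  libreadline-dev libsqlite3-dev curl libncursesw5-dev xz-utils tk-dev \
  libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev

# retry the build; -v shows configure output, including OpenSSL detection
pyenv install -v 3.10.13
```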
|
<python><linux><openssl><homebrew><pyenv>
|
2023-12-27 21:08:45
| 1
| 1,673
|
Goku
|
77,723,993
| 17,835,120
|
Gemini Pro API Blocking Replies
|
<p>Running the Gemini API, I added code to avoid blocking but am still getting blocked. My prompts do not get blocked when using ChatGPT.</p>
<p><strong>Code in Python</strong></p>
<pre><code>def get_gemini_response(question, safety_settings=None):
# If safety settings are not provided, set it to 'block_none' for all categories
if safety_settings is None:
safety_settings = {
'SEXUALLY_EXPLICIT': 'block_none',
'HATE_SPEECH': 'block_none',
'HARASSMENT': 'block_none',
'DANGEROUS_CONTENT': 'block_none'
}
</code></pre>
<p><strong>Reply from Gemini Pro</strong></p>
<pre><code>BlockedPromptException: block_reason: SAFETY safety_ratings { category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: NEGLIGIBLE } safety_ratings { category: HARM_CATEGORY_HATE_SPEECH probability: HIGH } safety_ratings { category: HARM_CATEGORY_HARASSMENT probability: NEGLIGIBLE } safety_ratings { category: HARM_CATEGORY_DANGEROUS_CONTENT probability: NEGLIGIBLE }
</code></pre>
<p>The <a href="https://ai.google.dev/tutorials/python_quickstart#safety_settings" rel="nofollow noreferrer">safety settings page</a> from Google implies it may still get blocked:</p>
<pre><code>Now provide the same prompt to the model with newly configured safety settings, **and you may get a response.**
response = model.generate_content('[Questionable prompt here]',
safety_settings={'HARASSMENT':'block_none'})
</code></pre>
|
<python><google-gemini>
|
2023-12-27 21:01:08
| 1
| 457
|
MMsmithH
|
77,723,957
| 4,652,030
|
Permutation where element can be repeated specific times and do it fast
|
<p>I have been looking for a function like this; sadly, I was not able to find it.</p>
<p>Here is code that does what I describe:</p>
<pre class="lang-py prettyprint-override"><code>import itertools
#Repeat every element specific times
data = {
1: 3,
2: 1,
3: 2
}
#Length
n = 0
to_rep = []
for i in data:
to_rep += [i]*data[i]
n += data[i]
#itertools will generate also duplicated ones
ret = itertools.permutations(to_rep, n)
#clean dups
ret = list(set(ret))
</code></pre>
<p>So, the code will show all lists of length 6 where there are 3 occurrences of "1", 1 of "2", and 2 of "3"; the code works.</p>
<p>So... the challenge here is time: this method is too expensive! Which would be the fastest way to do this?</p>
<p>I have tested this with some samples of 27 times True and one time False, which is not much; in total there are 28 ways to do it, but this code takes forever... there are even more ways, but I would like to know a more efficient one.</p>
<p>I was able to write code for the case when we have two elements, like True/False, but it seems a lot more complex with more than two elements.</p>
<pre class="lang-py prettyprint-override"><code>def f(n_true, n_false):
ret = []
length = n_true + n_false
full = [True]*length
for rep in itertools.combinations(range(length), n_false):
full_ = full.copy()
for i in rep:
full_[i] = False
ret.append(full_)
return ret
</code></pre>
<p>Does a function that already does this exist?
Which is the best way to run this?</p>
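One standard approach is to generate each distinct multiset permutation exactly once instead of deduplicating <code>itertools.permutations</code>; <code>sympy.utilities.iterables.multiset_permutations</code> does this, and a small stdlib-only sketch (my own helper, not a library function) looks like:

```python
from collections import Counter

def multiset_permutations(counter):
    """Yield each distinct permutation of a multiset exactly once."""
    if not counter:
        yield ()
        return
    # branch on each distinct remaining element, never on duplicate copies,
    # so no duplicate permutations are ever produced
    for item in sorted(counter):
        counter[item] -= 1
        if counter[item] == 0:
            del counter[item]
        for rest in multiset_permutations(counter):
            yield (item,) + rest
        counter[item] = counter.get(item, 0) + 1  # backtrack

data = {1: 3, 2: 1, 3: 2}
perms = list(multiset_permutations(Counter(data)))
print(len(perms))  # 6! / (3! * 1! * 2!) = 60 distinct orderings
```

The output size is the multinomial coefficient, so for 27 True and 1 False this yields exactly the 28 arrangements without ever materialising the 28! raw permutations.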
|
<python>
|
2023-12-27 20:51:53
| 2
| 307
|
Latot
|
77,723,939
| 4,974,679
|
Adding two lists with different lengths to a dictionary then to a DataFrame
|
<p>I have two lists with different lengths, such as below:</p>
<pre><code>list1 = ['col1', 'col2', 'col3', 'col4']
list2 = [[1, 2, 3], [2, 3], [1, 8, 4, 3], [22, 35, 32], [65], [2, 45, 55]]
</code></pre>
<p>and I have a DataFrame as follows, with 6 rows:</p>
<pre><code>df = pd.DataFrame([['Alex', 33, 'Male'], ['Marly', 28, 'Female'], ['Charlie', 30, 'Female'], ['Mimi', 37, 'Female'], ['James', 44, 'Male'], ['Jone', 25, 'Male']], columns=['Name', 'Age', 'Gender'])
|name |age|Gender|
|:------|:--|:-----|
|Alex |33 |Male |
|Marly |28 |Female|
|Charlie|30 |Female|
|Mimi |37 |Female|
|James |44 |Male |
|Jone |25 |Male |
</code></pre>
<p>I want to concat list1 as columns and list2 as rows as below:</p>
<pre><code>|name |age|Gender|col1|col2|col3|col4|
|:------|:--|:-----|:---|:---|:---|:---|
|Alex |33 |Male |1 |2 |3 |0 |
|Marly |28 |Female|2 |3 |0 |0 |
|Charlie|30 |Female|1 |8 |4 |3 |
|Mimi |37 |Female|22 |35 |32 |0 |
|James |44 |Male |65 |0 |0 |0 |
|Jone |25 |Male |2 |45 |55 |0 |
</code></pre>
<p>However, list1 and list2 are updated <strong>inside a loop</strong> and added to the Dataframe, so <strong>I want to do that using a dictionary or any other efficient way</strong> as I am dealing with a large number of columns; once I tried to add the columns using</p>
<pre><code>df[list1] = pd.DataFrame(list2, index=df.index)
</code></pre>
<p>I got a PerformanceWarning: "DataFrame is highly fragmented."</p>
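The fragmentation warning typically comes from inserting many columns one by one; building all new columns in a single DataFrame and concatenating once is a common workaround. A hedged sketch (padding each ragged row with 0 up to the full column count):

```python
import pandas as pd

list1 = ['col1', 'col2', 'col3', 'col4']
list2 = [[1, 2, 3], [2, 3], [1, 8, 4, 3], [22, 35, 32], [65], [2, 45, 55]]

df = pd.DataFrame(
    [['Alex', 33, 'Male'], ['Marly', 28, 'Female'], ['Charlie', 30, 'Female'],
     ['Mimi', 37, 'Female'], ['James', 44, 'Male'], ['Jone', 25, 'Male']],
    columns=['Name', 'Age', 'Gender'],
)

# pad every ragged row to the full column count, build ONE frame for all the
# new columns, and concatenate once -- adding columns one at a time in a loop
# is what triggers the "highly fragmented" PerformanceWarning
padded = [row + [0] * (len(list1) - len(row)) for row in list2]
extra = pd.DataFrame(padded, columns=list1, index=df.index)
out = pd.concat([df, extra], axis=1)
print(out)
```

Inside a loop, collecting each iteration's (columns, rows) pair in a list and doing a single <code>pd.concat</code> at the end follows the same idea.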
|
<python><dataframe><list>
|
2023-12-27 20:46:11
| 1
| 587
|
Arwa
|
77,723,918
| 17,176,270
|
Cyrillic content of file attached to email has wrong encoding when sending it in Python
|
<p>I have a JSON file with the following content; it has Cyrillic letters.</p>
<pre><code>{
"Полотенцесушитель EF 600L (белый)": {
"sku": "Полотенцесушитель EF 600L (белый)",
"price": "2 690 грн",
"link": "https://example.org/some_link_bla_bla"
},
</code></pre>
<p>I attach this file to email and send it with this code:</p>
<pre><code>def send_email(subject: str, json_file_path):
"""Send email."""
m = MIMEMultipart()
m.add_header("from", sender)
m.add_header("to", receiver)
m.add_header("subject", subject)
if json_file_path:
with open(json_file_path, 'r', encoding='utf-8') as file:
attachment = MIMEApplication(file.read(), _subtype="json")
attachment.add_header('Content-Disposition', 'attachment', filename='file.json')
m.attach(attachment)
context = ssl.create_default_context()
try:
with smtplib.SMTP_SSL(
smtp_server, smtp_port, context=context, timeout=20
) as server:
server.login(smtp_username, smtp_password)
server.sendmail(sender, receiver, m.as_string())
ic("Email sent")
except Exception as e:
logging.error(e)
ic(e)
</code></pre>
<p>It works, but all Cyrillic letters in the attached file come out like this:</p>
<pre><code>{
"\u041f\u043e\u043b\u043e\u0442\u0435\u043d\u0446\u0435\u0441\u0443\u0448\u0438\u0442\u0435\u043b\u044c EF 600L (\u0431\u0435\u043b\u044b\u0439)": {
"sku": "\u041f\u043e\u043b\u043e\u0442\u0435\u043d\u0446\u0435\u0441\u0443\u0448\u0438\u0442\u0435\u043b\u044c EF 600L (\u0431\u0435\u043b\u044b\u0439)",
"price": "2 690 \u0433\u0440\u043d",
"link": "https://example.org/some_link_bla_bla"
},
</code></pre>
<p>How to fix that issue?</p>
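One hedged hypothesis: the <code>\uXXXX</code> escapes are produced when the JSON file is written, not by the email code, because <code>json.dump</code> defaults to <code>ensure_ascii=True</code>. Both forms decode to identical data, so the attachment is valid JSON, just unreadable; writing the file with <code>ensure_ascii=False</code> and UTF-8 keeps the Cyrillic intact:

```python
import json

data = {"Полотенцесушитель EF 600L (белый)": {"price": "2 690 грн"}}

# default: every non-ASCII character is escaped as \uXXXX on output
escaped = json.dumps(data)

# ensure_ascii=False keeps the Cyrillic readable; pair it with
# open(path, "w", encoding="utf-8") when writing the file
readable = json.dumps(data, ensure_ascii=False)
print(readable)
```

If that hypothesis holds, the fix belongs where the JSON file is created, not in <code>send_email</code>.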
|
<python><json><email><encoding>
|
2023-12-27 20:39:43
| 0
| 780
|
Vitalii Mytenko
|
77,723,914
| 342,331
|
How to share a common argument between nested function calls in Python?
|
<p>I have two Python functions <code>f()</code> and <code>g()</code> which share one argument: <code>a</code>.</p>
<pre class="lang-py prettyprint-override"><code>def f(a, b):
return a + b
def g(a, c):
return a * c
</code></pre>
<p>Sometimes, <code>g()</code> is called standalone, in which case both arguments are necessary:</p>
<pre class="lang-py prettyprint-override"><code>g(a = 1, c = 3)
</code></pre>
<p>Sometimes, <code>g()</code> is called as an argument inside a <code>f()</code> call:</p>
<pre class="lang-py prettyprint-override"><code>f(a = 1, b = g(a = 1, c = 3))
</code></pre>
<p>Notice that the <code>a</code> argument is redundant, because it always has to have the same value in <code>f()</code> and <code>g()</code>. I would like to avoid this repetition.</p>
<p><code>g()</code> should figure out if it is called inside a <code>f()</code> call. If so, and if the <code>a</code> argument is not explicitly provided, then it should use the same <code>a</code> argument specified in the <code>f()</code> call. In other words, I want the same result as above with this simpler call:</p>
<pre class="lang-py prettyprint-override"><code>f(a = 1, g(c = 3))
</code></pre>
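Two caveats worth noting before any solution: <code>f(a = 1, g(c = 3))</code> as written is a SyntaxError (a positional argument cannot follow a keyword argument), and a Python function cannot ordinarily see its caller's arguments. A common workaround (my own sketch, not a standard-library feature) is deferred evaluation: <code>g(c=3)</code> without <code>a</code> returns a closure that <code>f</code> completes with its own <code>a</code>:

```python
def f(a, b):
    # if b was built by a partial g() call, finish it with the shared a
    if callable(b):
        b = b(a)
    return a + b

def g(a=None, c=None):
    if a is None:
        # deferred form: remember c, receive a later from f()
        return lambda a: a * c
    return a * c

standalone = g(a=1, c=3)
nested = f(a=1, b=g(c=3))   # no repeated a
print(standalone, nested)
```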
|
<python>
|
2023-12-27 20:38:51
| 2
| 18,208
|
Vincent
|
77,723,899
| 19,130,803
|
persisting input box value entered by user
|
<p>I am developing a <code>dash</code> application. I want the data a user has entered to be persisted if the page gets refreshed or the user switches between different pages.</p>
<p>I am first trying a basic example using an <code>input</code> box to persist the value entered by the user while manually clicking the browser's refresh button.</p>
<pre><code>from dash import Dash
import dash_bootstrap_components as dbc
dbc_css = (
"https://cdn.jsdelivr.net/gh/AnnMarieW/dash-bootstrap-templates@V1.0.2/dbc.min.css"
)
app = Dash(
__name__,
suppress_callback_exceptions=True,
external_stylesheets=[dbc.themes.BOOTSTRAP, dbc_css],
)
# With placeholder
ip1 = dbc.Input(placeholder="Please input count of persons", persistence=True)
ip2 = dbc.Input(placeholder="Please input count of persons", persistence=True, persistence_type="session")
ip3 = dbc.Input(placeholder="Please input count of persons", persistence=True, persistence_type="session", persisted_props="value")
# Without placeholder
ip4 = dbc.Input(persistence=True)
ip5 = dbc.Input(persistence=True, persistence_type="session")
ip6 = dbc.Input(persistence=True, persistence_type="session", persisted_props="value")
app.layout = dbc.Container([ip1, ip2, ip3, ip4, ip5, ip6])
if __name__ == "__main__":
app.run(host="0.0.0.0", port="8001", debug=True)
</code></pre>
<p>All the above cases failed to persist the value entered by the user. What am I missing?</p>
|
<python><plotly><plotly-dash>
|
2023-12-27 20:33:48
| 0
| 962
|
winter
|
77,723,713
| 7,167,564
|
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 18 but got size 17 for tensor number 1 in the list
|
<p>I am new to PyTorch and am using a pre-trained model for my Django app which restores broken images. I took help from the code given in <a href="https://github.com/safwankdb/Deep-Image-Prior/tree/master" rel="nofollow noreferrer">this repository</a>.</p>
<p>My ML Model Class's code goes like this:</p>
<pre><code>class Hourglass(nn.Module):
def __init__(self):
super(Hourglass, self).__init__()
self.leaky_relu = nn.LeakyReLU()
self.d_conv_1 = nn.Conv2d(2, 8, 5, stride=2, padding=2)
self.d_bn_1 = nn.BatchNorm2d(8)
self.d_conv_2 = nn.Conv2d(8, 16, 5, stride=2, padding=2)
self.d_bn_2 = nn.BatchNorm2d(16)
self.d_conv_3 = nn.Conv2d(16, 32, 5, stride=2, padding=2)
self.d_bn_3 = nn.BatchNorm2d(32)
self.s_conv_3 = nn.Conv2d(32, 4, 5, stride=1, padding=2)
self.d_conv_4 = nn.Conv2d(32, 64, 5, stride=2, padding=2)
self.d_bn_4 = nn.BatchNorm2d(64)
self.s_conv_4 = nn.Conv2d(64, 4, 5, stride=1, padding=2)
self.d_conv_5 = nn.Conv2d(64, 128, 5, stride=2, padding=2)
self.d_bn_5 = nn.BatchNorm2d(128)
self.s_conv_5 = nn.Conv2d(128, 4, 5, stride=1, padding=2)
self.d_conv_6 = nn.Conv2d(128, 256, 5, stride=2, padding=2)
self.d_bn_6 = nn.BatchNorm2d(256)
self.u_deconv_5 = nn.ConvTranspose2d(256, 64, 4, stride=2, padding=1)
self.u_bn_5 = nn.BatchNorm2d(64)
self.u_deconv_4 = nn.ConvTranspose2d(128, 32, 4, stride=2, padding=1)
self.u_bn_4 = nn.BatchNorm2d(32)
self.u_deconv_3 = nn.ConvTranspose2d(64, 16, 4, stride=2, padding=1)
self.u_bn_3 = nn.BatchNorm2d(16)
self.u_deconv_2 = nn.ConvTranspose2d(32, 8, 4, stride=2, padding=1)
self.u_bn_2 = nn.BatchNorm2d(8)
self.u_deconv_1 = nn.ConvTranspose2d(16, 4, 4, stride=2, padding=1)
self.u_bn_1 = nn.BatchNorm2d(4)
self.out_deconv = nn.ConvTranspose2d(4, 4, 4, stride=2, padding=1)
self.out_bn = nn.BatchNorm2d(4)
def forward(self, noise):
down_1 = self.d_conv_1(noise)
down_1 = self.d_bn_1(down_1)
down_1 = self.leaky_relu(down_1)
down_2 = self.d_conv_2(down_1)
down_2 = self.d_bn_2(down_2)
down_2 = self.leaky_relu(down_2)
down_3 = self.d_conv_3(down_2)
down_3 = self.d_bn_3(down_3)
down_3 = self.leaky_relu(down_3)
skip_3 = self.s_conv_3(down_3)
down_4 = self.d_conv_4(down_3)
down_4 = self.d_bn_4(down_4)
down_4 = self.leaky_relu(down_4)
skip_4 = self.s_conv_4(down_4)
down_5 = self.d_conv_5(down_4)
down_5 = self.d_bn_5(down_5)
down_5 = self.leaky_relu(down_5)
skip_5 = self.s_conv_5(down_5)
down_6 = self.d_conv_6(down_5)
down_6 = self.d_bn_6(down_6)
down_6 = self.leaky_relu(down_6)
up_5 = self.u_deconv_5(down_6)
up_5 = torch.cat([up_5, skip_5], 1)
up_5 = self.u_bn_5(up_5)
up_5 = self.leaky_relu(up_5)
up_4 = self.u_deconv_4(up_5)
up_4 = torch.cat([up_4, skip_4], 1)
up_4 = self.u_bn_4(up_4)
up_4 = self.leaky_relu(up_4)
up_3 = self.u_deconv_3(up_4)
up_3 = torch.cat([up_3, skip_3], 1)
up_3 = self.u_bn_3(up_3)
up_3 = self.leaky_relu(up_3)
up_2 = self.u_deconv_2(up_3)
up_2 = self.u_bn_2(up_2)
up_2 = self.leaky_relu(up_2)
up_1 = self.u_deconv_1(up_2)
up_1 = self.u_bn_1(up_1)
up_1 = self.leaky_relu(up_1)
out = self.out_deconv(up_1)
out = self.out_bn(out)
out = nn.Sigmoid()(out)
return out
</code></pre>
<p>and I am using it in my view to restore image which goes like this:</p>
<pre><code>def restore_image_deep_image_prior(original_image_path):
lr = 1e-2
device = 'cpu'
print('Using {} for computation'.format(device.upper()))
hg_net = Hourglass()
hg_net.to(device)
mse = nn.MSELoss()
optimizer = optim.Adam(hg_net.parameters(), lr=lr)
n_iter = 500
images = []
losses = []
to_tensor = tv.transforms.ToTensor()
z = torch.Tensor(np.mgrid[:542, :347]).unsqueeze(0).to(device) / 512 # Adjust the size according to your requirement
x = PILImage.open(original_image_path)
x = to_tensor(x).unsqueeze(0)
x, mask = pixel_thanos(x, 0.5)
mask = mask[:, :3, :, :].to(device).float() # Keep only the first 3 channels if mask has 4 channels
x = x.to(device)
for i in range(n_iter):
optimizer.zero_grad()
y = hg_net(z)
loss = mse(x, y*mask)
losses.append(loss.item())
loss.backward()
optimizer.step()
if i < 1000 and (i+1)%4==0 or i==0:
with torch.no_grad():
out = x + y * (1 - mask)
out = out[0].cpu().detach().permute(1, 2, 0)*255
out = np.array(out, np.uint8)
images.append(out)
if (i+1) % 20 == 0:
print('Iteration: {} Loss: {:.07f}'.format(i+1, losses[-1]))
restored_image_bytes = image_to_bytes(images)
return restored_image_bytes
def pixel_thanos(img, p=0.5):
assert p > 0 and p < 1, 'The probability value should lie in (0, 1)'
mask = torch.rand(1, 3, 542, 347)
img[mask < p,] = 0
mask = mask > p
mask = mask.repeat(1, 3, 1, 1)
return img, mask
def image_to_bytes(image):
buffer = BytesIO()
image.save(buffer, format="JPEG")
return buffer.getvalue()
</code></pre>
<p>Now, the problem is that I am getting the error listed in the title, which in detail is:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\khubi\AppData\Local\Programs\Python\Python39\lib\site-packages\django\core\handlers\exception.py", line 55, in inner
response = get_response(request)
File "C:\Users\khubi\AppData\Local\Programs\Python\Python39\lib\site-packages\django\core\handlers\base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\khubi\Desktop\Projects\VU\CS619\image restorer\image_restorer\image_app\views.py", line 35, in restore_image
restored_image = restore_image_deep_image_prior(original_image.image_file.path)
File "C:\Users\khubi\Desktop\Projects\VU\CS619\image restorer\image_restorer\image_app\views.py", line 76, in restore_image_deep_image_prior
y = hg_net(z)
File "C:\Users\khubi\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\khubi\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\khubi\Desktop\Projects\VU\CS619\image restorer\image_restorer\image_app\views.py", line 182, in forward
up_5 = torch.cat([up_5, skip_5], 1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 18 but got size 17 for tensor number 1 in the list.
</code></pre>
<p>I know that I had to set the channels; I set them to 4 as the image I am using for restoration has 4 channels, but I am still getting the error. How can I resolve it?</p>
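A plausible explanation inferred from the shapes (an assumption, not from the repo): each stride-2, kernel-5, padding-2 conv maps width n to ceil(n/2), so 542 shrinks as 542 → 271 → 136 → 68 → 34 → 17 → 9, while each ConvTranspose2d doubles exactly, so the upsampled map (2·9 = 18) no longer matches its skip connection (17) and <code>torch.cat</code> fails with exactly the 18-vs-17 mismatch in the traceback. Padding height and width to a multiple of 2**6 before the forward pass would keep every level consistent. The arithmetic, checked:

```python
import math

def conv_out(n):
    # stride-2, kernel-5, padding-2 conv: floor((n + 4 - 5) / 2) + 1 == ceil(n / 2)
    return (n - 1) // 2 + 1

def deconv_out(n):
    # stride-2, kernel-4, padding-1 transposed conv: exact doubling
    return 2 * n

sizes = [542]
for _ in range(6):
    sizes.append(conv_out(sizes[-1]))
print(sizes)             # [542, 271, 136, 68, 34, 17, 9]

skip_5, down_6 = sizes[5], sizes[6]
up_5 = deconv_out(down_6)
print(up_5, skip_5)      # 18 vs 17 -> the failing torch.cat

def pad_to_multiple(n, m=2 ** 6):
    # smallest multiple of m that is >= n; pad z, the image, and the mask to this
    return math.ceil(n / m) * m

print(pad_to_multiple(542), pad_to_multiple(347))  # 576 384
```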
|
<python><machine-learning><deep-learning><pytorch><tensor>
|
2023-12-27 19:38:15
| 1
| 311
|
Khubaib Khawar
|
77,723,648
| 3,615,274
|
Databricks ODBC Driver fails if number of rows are exceeding the limit
|
<p>After installing and configuring the Databricks ODBC Driver from <a href="https://www.databricks.com/spark/odbc-drivers-download" rel="nofollow noreferrer">https://www.databricks.com/spark/odbc-drivers-download</a>, it can fetch records successfully below 16,000, but fails when the record count reaches 16,000 or more.</p>
<p>Looks like this is related to the cloud fetch architecture mentioned at <a href="https://www.databricks.com/blog/2021/08/11/how-we-achieved-high-bandwidth-connectivity-with-bi-tools.html" rel="nofollow noreferrer">https://www.databricks.com/blog/2021/08/11/how-we-achieved-high-bandwidth-connectivity-with-bi-tools.html</a></p>
<p>I am looking for a workaround or option to bypass this and use the default architecture when the number of records is more than 16,000. Is there any known parameter for this?</p>
|
<python><odbc><databricks>
|
2023-12-27 19:23:44
| 1
| 389
|
Vijred
|
77,723,585
| 3,142,695
|
wfdb in python script returns `IndexError: positional indexers are out-of-bounds`
|
<p>I am trying to convert WFDB formatted data to csv (using the wfdb package), but I do get an <code>IndexError: positional indexers are out-of-bounds</code> error in the python script, which I do not understand.</p>
<p>I am using the <code>infant10_ecg.*</code> data from here: <a href="https://physionet.org/content/picsdb/1.0.0/" rel="nofollow noreferrer">https://physionet.org/content/picsdb/1.0.0/</a></p>
<p>This is the script I'm using:</p>
<pre><code>import wfdb # WaveForm-Database package. A library of tools for reading, writing, and processing WFDB signals and annotations.
import pandas as pd
import numpy as np
import glob
dat_files=glob.glob('*.dat')
df=pd.DataFrame(data=dat_files)
df.to_csv("files_list.csv",index=False,header=None)
files=pd.read_csv("files_list.csv",header=None)
for i in range(1,len(files)+1):
recordname=str(files.iloc[[i]])
print(recordname[:-4])
recordname_new=recordname[-7:-4]
record = wfdb.rdsamp(recordname_new)
record=np.asarray(record[0])
path=recordname_new+".csv"
np.savetxt(path,record,delimiter=",")
print("Files done: %s/%s"% (i,len(files)))
print("\nAll files done!")
</code></pre>
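<p>For what it's worth, the <code>IndexError</code> is likely the off-by-one loop bound: <code>range(1, len(files)+1)</code> asks for row <code>len(files)</code> on the last pass, and <code>str(files.iloc[[i]])</code> stringifies a whole DataFrame rather than extracting the filename. A hedged sketch of just the indexing fix (the wfdb calls are unchanged and omitted here):</p>

```python
import pandas as pd

files = pd.DataFrame(["infant10_ecg.dat", "infant1_ecg.dat"])  # stand-in rows

names = []
for i in range(len(files)):        # 0-based: range(1, len(files) + 1) walks one past the end
    recordname = files.iat[i, 0]   # the filename string itself, not str(files.iloc[[i]])
    names.append(recordname[:-4])  # strip ".dat"; wfdb.rdsamp would take this record name
```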
|
<python><csv>
|
2023-12-27 19:04:04
| 0
| 17,484
|
user3142695
|
77,723,202
| 2,573,075
|
How to avoid ODOO 15 render error for missing values
|
<p>In ODOO 15 I have made my own template that runs a method that returns some data to show in a dictionary.</p>
<p>This is a piece of the template:</p>
<pre><code><t t-set="PRI_par_DSP_par_stage" t-value="o.PRI_par_DSP_par_stage(o.date_start, o.date_end, o.source, o.domain)"/>
<table border="1" style="text-align: left; width: auto; margin: 0 auto;">
<tbody>
<t t-foreach="PRI_par_DSP_par_stage" t-as="row">
<tr>
<td><t t-esc="row"/></td>
<td><t t-esc="row['dsp_id']"/></td>
<td><t t-esc="row['Brouillon']"/></td>
[....]
</tbody>
</table>
</t>
</code></pre>
<p>And my method returns something like:</p>
<pre><code>[{'dsp_id': 'DEBIT', 'Brouillon': 3936.0, 'Qualification': 40299.24, 'Closing': 156753.59}, {'dsp_id': 'THD', 'Closing': 22487.8}]
</code></pre>
<p>When rendering this I got an 500 error:</p>
<pre><code>Web
Error message:
Error when render the template
KeyError: 'Brouillon'
Template: 1026
Path: /t/t/div/main/t/div/div[7]/table/tbody/t/tr/td[3]/t
Node: <t t-esc="row['Brouillon']"/>
The error occurred while displaying the model and evaluated the following expressions: 1026<t t-esc="row['Brouillon']"/>
</code></pre>
<p>Because, of course, I don't have that key in the second dictionary of the list.
Is there a way (other than adding those keys to my response) to overcome this issue, such as checking whether the key really exists before trying to render it? I have tried <code>t-if=row...</code> and <code>t-if=define(row...)</code>, but I still have this issue.</p>
<p>Any ideas will be highly appreciated. :)</p>
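<p>Assuming QWeb <code>t-esc</code> expressions are evaluated as Python (which they are in Odoo), one light option may be <code>&lt;t t-esc="row.get('Brouillon', 0)"/&gt;</code>, since <code>dict.get</code> never raises. A quick illustration of the difference:</p>

```python
row = {'dsp_id': 'THD', 'Closing': 22487.8}  # the dict that lacks 'Brouillon'

present = row.get('Closing', 0)      # key exists: the normal value
fallback = row.get('Brouillon', 0)   # key missing: no KeyError, falls back to 0
```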
|
<python><html><render><qweb><odoo-15>
|
2023-12-27 17:17:42
| 1
| 633
|
Claudiu
|
77,723,163
| 9,335,403
|
Can a decorator name classes it creates?
|
<p>I have a decorator which wraps functions and produces classes (for a better reason than in the following example). It all works, except that I'd like to be able to set the name of the class given by <code>type()</code>.</p>
<p>For example,</p>
<pre><code>>>> class Animal:
... pass
...
>>> def linnean_name(genus, species):
... def _linnean_name(fn):
... class _Animal(Animal):
... binomial_name = (genus, species)
... def converse(self):
... fn()
... _Animal.__name__ = fn.__name__.title()
... _Animal.__module__ = fn.__module__
... return _Animal
... return _linnean_name
...
>>> @linnean_name('Vombatus', 'ursinus')
... def Wombat():
... print("Hello, I am a wombat.")
...
>>> sheila = Wombat()
>>> sheila.binomial_name
('Vombatus', 'ursinus')
>>> sheila.converse()
Hello, I am a wombat.
</code></pre>
<p>All well and good, but</p>
<pre><code>>>> type(sheila)
<class '__main__.linnean_name.<locals>._linnean_name.<locals>._Animal'>
</code></pre>
<p>where I was hoping to see</p>
<pre><code><class '__main__.Wombat'>
</code></pre>
<p>This was the reason for</p>
<pre><code>... _Animal.__name__ = fn.__name__.title()
... _Animal.__module__ = fn.__module__
</code></pre>
<p>which appear not to do anything particularly useful. How can I do this?</p>
|
<python>
|
2023-12-27 17:09:55
| 1
| 435
|
Marnanel Thurman
|
77,723,131
| 14,449,816
|
How to return YouTube channel information based on a unique channel name or ID
|
<p>I am trying to write a simple Python script that returns some information about a channel via an <code>@username</code> or <code>UCrandomID123</code> search.</p>
<p>I know that you can use the following endpoint to access channel data: <a href="https://www.googleapis.com/youtube/v3/channels" rel="noreferrer">https://www.googleapis.com/youtube/v3/channels</a></p>
<p>However, I came across some possible restrictions and problems. Some channels might not have their custom "name" in their channel URL:</p>
<p><a href="https://i.sstatic.net/0usVQ.png" rel="noreferrer"><img src="https://i.sstatic.net/0usVQ.png" alt="enter image description here" /></a></p>
<p>vs.</p>
<p><a href="https://i.sstatic.net/8KihM.png" rel="noreferrer"><img src="https://i.sstatic.net/8KihM.png" alt="enter image description here" /></a></p>
<p>I looked up the problem and came across multiple posts addressing the issue:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/73582656/how-to-find-channels-with-youtube-api">How to find channels with youtube API</a></li>
<li><a href="https://stackoverflow.com/questions/67508794/find-youtube-user-name-for-a-channel">Find YouTube user name for a channel</a></li>
</ul>
<p>But both of the solutions are rather old, and I was wondering if there's now a better solution to this issue?</p>
<p><strong>I have the following code:</strong></p>
<pre class="lang-py prettyprint-override"><code>async def get_channel_id(self, channel_input):
api_url = f"https://www.googleapis.com/youtube/v3/channels"
params = {
'part': 'snippet',
'key': DEVELOPER_KEY,
'maxResults': 1
}
# Determine if the input is a channel ID or username
if channel_input.startswith('UC'):
# Channel ID provided
params['id'] = channel_input
else:
# Assume it's a custom username
params['forUsername'] = channel_input
response = await self.make_api_request(api_url, params) # Just a function to make an async req.
if response and response.get('items'):
channel_info = response['items'][0]
channel_id = channel_info['id']
channel_title = channel_info['snippet']['title']
channel_url = f"https://www.youtube.com/channel/{channel_id}"
return channel_id, channel_title, channel_url
return None, None, None
</code></pre>
<p>This one works fine if I have the <code>UC....</code> URL, but given the unique "username" it fails to find the channel for some reason. The reason I want to catch both cases is that, right now, the majority of channels have such a custom name URL.</p>
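<p>One possibly newer route: the <code>channels.list</code> endpoint also accepts a <code>forHandle</code> parameter for <code>@handle</code>-style names, while <code>forUsername</code> only matches legacy usernames. A hedged sketch (the function name is hypothetical, and <code>forHandle</code> availability should be verified against the current API docs):</p>

```python
def channel_lookup_params(channel_input, api_key):
    # forUsername only matches legacy YouTube usernames; "@handle" channels
    # are resolved with the newer forHandle parameter instead.
    params = {"part": "snippet", "key": api_key, "maxResults": 1}
    if channel_input.startswith("UC"):
        params["id"] = channel_input
    else:
        handle = channel_input if channel_input.startswith("@") else "@" + channel_input
        params["forHandle"] = handle
    return params
```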
|
<python><youtube-data-api>
|
2023-12-27 17:03:02
| 1
| 3,582
|
Dominik
|
77,722,617
| 19,363,912
|
Measurement of progress using ObjectiveValue and BestObjectiveBound
|
<p>How to measure progress of (cp-sat) solver?</p>
<p>I am currently approximating with</p>
<pre><code>o = solver.ObjectiveValue()
b = solver.BestObjectiveBound()
p = 100 * o / b
</code></pre>
<p>This works only when maximizing a sum over bools multiplied by positive coefficients, so the value cannot be below 0.</p>
<p>Any alternative methods?</p>
<p>If one introduces negative coefficients, it seems necessary to floor / cap certain values.</p>
<p>Also, the best bound moves constantly. Shouldn't one reflect its progress as well (e.g. by saving the starting value)?</p>
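<p>One sign-safe alternative, borrowed from the usual MIP convention, is a relative gap rather than a ratio of objective to bound; a hedged sketch (the <code>1e-9</code> floor is an arbitrary guard against division by zero):</p>

```python
def relative_gap(objective, bound):
    # 0.0 means the bound meets the objective (proven optimal);
    # robust to zero or negative objective values
    if objective == bound:
        return 0.0
    return abs(bound - objective) / max(1e-9, abs(objective))

# progress could then be reported as 100 * (1 - min(1.0, relative_gap(o, b)))
```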
|
<python><or-tools><cp-sat>
|
2023-12-27 15:03:44
| 1
| 447
|
aeiou
|
77,722,578
| 374,458
|
How to serve concurrent matplotlib figures as images in an HTTP API?
|
<p>I use <code>FastAPI</code> and <code>Uvicorn</code> to run an HTTP server.</p>
<p>This server serves images generated from <code>Matplotlib</code> figures, using also <code>Image</code> from <code>PIL</code>:</p>
<pre class="lang-py prettyprint-override"><code> import matplotlib.pyplot as plt
from io import BytesIO
fig = fig, ax = plt.subplots(figsize=(SIZE_X, SIZE_Y))
# Adding elements on figure...
fig_buf = io.BytesIO()
plt.savefig(fig_buf, ax=ax, format='png')
fig_buf.seek(0)
fig_img = Image.open(fig_buf)
</code></pre>
<p>It works well, but when serving multiple HTTP requests in the same time, I get the following error: <code>RuntimeError: main thread is not in main loop</code>.</p>
<p>To fix this, I've tried to add <code>matplotlib.use('AGG')</code> to use a non-interactive backend, but it doesn't work: I no longer get errors, but only one of the concurrent figures is served correctly. The other ones are almost blank.</p>
<p>Any idea?</p>
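<p>For context on why this can break: <code>pyplot</code> keeps per-process global state (the "current figure") that is not thread-safe, so concurrent requests can draw into each other's figures. A hedged sketch of the object-oriented alternative, building each <code>Figure</code> directly so every request owns its own state (<code>render_png</code> is a hypothetical name):</p>

```python
from io import BytesIO
from matplotlib.figure import Figure  # no pyplot: no shared global state

def render_png():
    fig = Figure(figsize=(4, 3))  # this figure is local to the request
    ax = fig.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
    buf = BytesIO()
    fig.savefig(buf, format="png")
    buf.seek(0)
    return buf
```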
|
<python><matplotlib><concurrency><python-imaging-library><fastapi>
|
2023-12-27 14:53:42
| 0
| 1,870
|
Nicolas
|
77,722,349
| 1,659,599
|
Install python 2.7 on cygwin
|
<p>How to install Python 2.7 on new cygwin versions (as of 2023)?</p>
<p>Packages for python 2 seem not to be available. Only Python 3 is available.</p>
<p>There are several packages <code>python2-###</code> with dependencies to <code>python2</code> but no <code>python2</code> package.</p>
|
<python><python-2.7><cygwin>
|
2023-12-27 14:07:13
| 0
| 7,359
|
wolfrevo
|
77,722,325
| 3,079,439
|
How do I remove a specific part of a filename in Python?
|
<p>I have a folder containing several images (let's say 10) in .jpg format. The names of the files are the following:</p>
<pre><code>IM001
IM002
IM003
etc.
</code></pre>
<p>Is there a way to remove a certain part of the filename (for example <code>IM00</code>) and leave the rest, using Python? My only idea is <code>os.rename</code>, but I'm not sure if this is the right one.</p>
<p>Thanks!</p>
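<p><code>os.rename</code> works; a slightly more idiomatic sketch with <code>pathlib</code> (the folder path is a placeholder, and <code>str.removeprefix</code> needs Python 3.9+):</p>

```python
from pathlib import Path

def strip_prefix(folder, prefix="IM00"):
    # "IM001.jpg" -> "1.jpg"; files without the prefix are left untouched
    for p in Path(folder).glob(f"{prefix}*.jpg"):
        p.rename(p.with_name(p.name.removeprefix(prefix)))
```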
|
<python><directory><filenames>
|
2023-12-27 14:02:26
| 1
| 3,158
|
Keithx
|
77,722,308
| 64,023
|
Python way to declare test dependencies
|
<p>There should be one-- and preferably only one --obvious way to do it.</p>
<p>According to the Zen of Python... Python's test dependency management is not preferable.</p>
<p>There are dozens upon dozens of ways to manage test dependencies. For instance: dev-dependencies.txt, setup.py extras_require, tests_require, setup.cfg, some sort of tox incantation (tox supports anything you can dream up), and I think maybe a [tool] section of pyproject.toml?</p>
<p>Is there a good write up anywhere on the current best practice? This is a wild one to navigate, there are so many possibilities, I can't seem to find consensus anywhere.</p>
<p>If there isn't a general consensus on how to do this is there a best way that integrates well with PyCharm? Most of my build experience is with Java through Gradle/Maven which does have a fairly opinionated way of handling dependencies, I'm looking for something similar (something strongly opinionated).</p>
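<p>For what it's worth, packaging guidance has largely converged on declaring optional dependency groups in PEP 621 metadata inside <code>pyproject.toml</code>; a minimal sketch (the project name is a placeholder):</p>

```toml
# pyproject.toml
[project]
name = "myproject"
version = "0.1.0"

[project.optional-dependencies]
test = ["pytest", "pytest-cov"]
```

<p>The group is then installed with <code>pip install -e ".[test]"</code>.</p>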
|
<python><python-packaging><pyproject.toml>
|
2023-12-27 13:58:58
| 1
| 3,648
|
Michael
|
77,722,234
| 12,016,688
|
Why does manually writing to stderr suppress other exceptions' error messages in Python?
|
<p>I figured out that after I manually write something to the <code>stderr</code> file, no error messages appear in the shell.</p>
<pre class="lang-py prettyprint-override"><code>>>> f = open(2, 'w')
>>> f.write('hello\n')
hello
6
>>> f.close()
>>> raise Exception
>>>
>>>
>>> 1/0
>>>
</code></pre>
<p>Why does this happen?</p>
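<p>A likely explanation: <code>open(2, 'w')</code> takes ownership of file descriptor 2 by default, so <code>f.close()</code> closes the real stderr descriptor; later tracebacks are still formatted but written into a dead fd and lost. Passing <code>closefd=False</code> closes only the wrapper. A side-effect-free demo using a pipe in place of fd 2:</p>

```python
import os

r, w = os.pipe()                 # stand-in for fd 2, keeping the demo side-effect free
f = open(w, "w", closefd=False)  # closefd=False: close() drops only the wrapper
f.write("hello\n")
f.close()                        # flushes, but the underlying fd stays open
os.write(w, b"still alive\n")    # so later writes to the fd still work
os.close(w)
```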
|
<python><exception><io><stderr>
|
2023-12-27 13:43:22
| 0
| 2,470
|
Amir reza Riahi
|
77,722,219
| 1,234,434
|
pandas pivot_table column name should not be present and how to order a text column
|
<p>I'm learning pandas.</p>
<p>I have created this test dataframe:</p>
<pre><code>dfdict = {'product':['ruler', 'pencil', 'case', 'rubber'],'sold':[4,23,0,14],'Quarter':['Q1/22','Q2/23','Q3/22','Q1/23']}
dftest=pd.DataFrame(dfdict)
dftemp=dftest.pivot_table(index=['product'],columns=['Quarter'],values=['sold'],aggfunc=sum,fill_value=0)
print(f"{dftemp}")
</code></pre>
<p>which produces:</p>
<pre><code> sold
Quarter Q1/22 Q1/23 Q2/23 Q3/22
product
case 0 0 0 0
pencil 0 0 23 0
rubber 0 14 0 0
ruler 4 0 0 0
</code></pre>
<p>Two points I need help with:</p>
<ol>
<li><p>How do I remove only the <code>sold</code> column name? I don't want it there as when I write it to a csv each <code>Quarter</code> column has a "sold" name above it.</p>
</li>
<li><p>How can I order the <code>Quarter</code> columns in date order, right now they are text. What's the best way? Is it to somehow convert them into a date and then order them and write it out in the same style?</p>
</li>
</ol>
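<p>A sketch addressing both points: passing <code>values</code> as a scalar rather than a one-element list drops the extra <code>sold</code> column level, and the <code>Qn/yy</code> labels can be sorted by (year, quarter) without converting them to real dates:</p>

```python
import pandas as pd

dfdict = {'product': ['ruler', 'pencil', 'case', 'rubber'],
          'sold': [4, 23, 0, 14],
          'Quarter': ['Q1/22', 'Q2/23', 'Q3/22', 'Q1/23']}
dftest = pd.DataFrame(dfdict)

# 1) values as a scalar (not ['sold']) avoids the extra 'sold' column level
dftemp = dftest.pivot_table(index='product', columns='Quarter',
                            values='sold', aggfunc='sum', fill_value=0)

# 2) 'Q1/22' -> sort key ('22', '1'): year first, then quarter number
dftemp = dftemp[sorted(dftemp.columns, key=lambda q: (q[3:], q[1]))]
```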
|
<python><pandas>
|
2023-12-27 13:39:50
| 2
| 1,033
|
Dan
|
77,721,953
| 518,973
|
Handling Backslash Escaping Issue in AWS Lambda PyODBC Connection String for SQL Server
|
<p>In one of my AWS Lambda functions, I need to connect to an on-prem SQL Server instance and my lambda function is as follows :</p>
<pre><code>import os
import json
import pyodbc
def lambda_handler(event, context):
# Establish a connection to the SQL Server database
connection_string = 'DRIVER={ODBC Driver 17 for SQL Server};SERVER=1.10.143.76;PORT=1433;DATABASE=MyDB;UID=COMPT\svcacct_clientloc;PWD=MyPassword;'
print(connection_string)
connection = pyodbc.connect(connection_string)
cursor = connection.cursor()
</code></pre>
<p>Now when I execute the code, I am getting the following error</p>
<pre><code>Response
{
"errorMessage": "('28000', \"[28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user 'COMPT\\\\svcacct_clientloc'. (18456) (SQLDriverConnect)\")",
"errorType": "InterfaceError",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 22, in lambda_handler\n connection = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=1.10.143.76;PORT=1433;DATABASE=MyDB;UID=COMPT\\svcacct_clientloc;PWD=MyPassword;')\n"
]
}
</code></pre>
<p>Here the error message shows the username as <code>'COMPT\\\\svcacct_clientloc'</code>, with four backslashes, while the stack trace shows <code>'COMPT\\svcacct_clientloc'</code>, with two. I am not sure how to fix this, but I have tried all of the following methods.</p>
<pre><code>server = '1.10.143.76'
database = MyDB
username = 'COMPT\svcacct_clientloc'
password = 'Password'
port=1433
driver='{ODBC Driver 17 for SQL Server}'
connectUrl = "DRIVER={driver};SERVER={server};PORT={port};DATABASE={db_name};UID={username};PWD={password};".format(
driver=driver,
server=server,
port=port,
db_name=database,
username=username,
password=password
)
</code></pre>
<p>and also tried <code>username = r'COMPT\svcacct_clientloc'</code> and <code>username = 'COMPT\svcacct_clientloc'.strip()</code>, but everything gives the same error. I am running this on the Python 3.8 runtime and the corresponding layers have been uploaded. The code also works from an EC2 instance, where I am able to fetch from the DB.</p>
<p>It would be helpful if someone can tell me how can I properly escape the username here.</p>
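<p>For what it's worth, the escalating backslash counts are consistent with one real backslash being re-escaped at each layer (Python's <code>repr</code> once, then the JSON encoding again), so the string reaching the driver probably contains a single backslash. In that case error 18456 is a genuine authentication failure; since Lambda is not domain-joined, it may be worth confirming, as an assumption, that the account can log in with SQL authentication at all. A quick check of the escaping:</p>

```python
user = 'COMPT\\svcacct_clientloc'  # one real backslash, written escaped

once = repr(user)    # repr doubles it, as in the stack trace
twice = repr(once)   # repr-of-repr quadruples it, as in the JSON errorMessage
```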
|
<python><python-3.x><sql-server><pyodbc>
|
2023-12-27 12:44:13
| 0
| 4,802
|
Happy Coder
|
77,721,850
| 2,178,956
|
Parallel processing fails with Paddle OCR
|
<p>I am trying to implement parallel processing using Paddle OCR.
Please refer to the method <code>predictx_parallel</code>.</p>
<p><code>predictx_parallel(input_images, ocr_params, num_threads=1)</code> - setting num_threads=1 works all the time</p>
<p><code>predictx_parallel(input_images, ocr_params, num_threads=2)</code> - setting num_threads>1 fails with varying errors.</p>
<pre class="lang-py prettyprint-override"><code>def predictx_parallel(input_images: List[Image], ocr_params: OcrParams, num_threads: int) -> Tuple[List[Dict], List[Image]]:
def ocr_image(image):
image_array = np.array(image) # type(image) : <class 'PIL.Image.Image'>
# type(image_array) : <class 'numpy.ndarray'>
results = ocr.ocr(image_array) # [[[[[381.0, 285.0], [537.0, 285.0], [537.0, 333.0], [381.0, 333.0]], ('PAGE1A', 0.9997838139533997)], [[[388.0, 371.0], [530.0, 371.0], [530.0, 419.0], [388.0, 419.0]], ('PAGE1B', 0.998117983341217)]]]
return results
ocr = get_ocr_obj(params=ocr_params) # <class 'paddleocr.paddleocr.PaddleOCR'>
with ThreadPoolExecutor(max_workers=num_threads) as executor:
results = list(executor.map(ocr_image, input_images))
</code></pre>
<pre><code>UnimplementedError: Currently, only can set dims from DenseTensor or SelectedRows. (at /paddle/paddle/fluid/framework/infershape_utils.cc:314)
[operator < fused_conv2d > error]
</code></pre>
<pre><code>NotFoundError: Variable Id 29797 is not registered.
[Hint: Expected it != Instance().id_to_type_map_.end(), but received it == Instance().id_to_type_map_.end().] (at /paddle/paddle/fluid/framework/var_type_traits.cc:103)
[operator < fused_conv2d > error]
</code></pre>
<p><code>paddlepaddle==2.6.0</code>
<code>paddleocr==2.7.0.3</code>
<code>python==3.9.12</code></p>
<p>Any suggestions please.</p>
<p>--- EDIT ---</p>
<p>I was able to get it to work.
As I understand it, the trick is to use a new Paddle OCR object per process.</p>
<p>CAUSE: I created one single ocr object and used the same ocr object across multiple threads.</p>
<p>FIX: I tried multiprocessing, and in each process, create a new instance of ocr. It worked.</p>
<pre class="lang-py prettyprint-override"><code># pipeline.py
def predictx_parallel_processes(input_images, num_processes):
with Pool(processes=num_processes) as pool:
pool.map(ocr_image_x, input_images)
</code></pre>
<pre class="lang-py prettyprint-override"><code># ocr_processing.py
def ocr_image_x(image):
process_pid = os.getpid()
logger.info(f"Process PID: {process_pid}")
ocr = PaddleOCR() # Create new ocr object each time
image_array = np.array(image)
results = ocr.ocr(image_array)
logger.info(results)
</code></pre>
|
<python><ocr><paddle-paddle><paddleocr>
|
2023-12-27 12:24:01
| 1
| 893
|
Soumya
|
77,721,825
| 424,333
|
How can I create HTML email drafts in Gmail using Python?
|
<p>I can successfully create plaintext email drafts in Gmail using this snippet:</p>
<pre><code>from email.message import Message
import imaplib
import time
message = Message()
# Create message
message["To"] = "john.doe@gmail.com"
message["Subject"] = "Subject line"
message.set_payload("This is the email <a href=#>body</a>")
utf8_message = str(message).encode("utf-8")
# Send message
status, data = imap_ssl.append('"[Google Mail]/Drafts"', "", imaplib.Time2Internaldate(time.time()), utf8_message)
</code></pre>
<p>However, it displays as:</p>
<p><code>This is the email <a href=#>body</a></code></p>
<p>Is there a way to get it to display as this?</p>
<blockquote>
<p>This is the email <a href="https://#" rel="nofollow noreferrer">body</a></p>
</blockquote>
<p>I've tried playing with MIMEText, which I can successfully use to send HTML emails via <code>smtplib</code>, but I'm not sure if it's possible with <code>imaplib</code>.</p>
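<p><code>imaplib</code> appends whatever RFC 822 bytes you give it, so the same <code>MIMEText</code> message that works with <code>smtplib</code> should work here too; a minimal sketch of just the message construction:</p>

```python
from email.mime.text import MIMEText

# "html" sets Content-Type: text/html, so Gmail renders the markup
message = MIMEText('This is the email <a href="#">body</a>', "html")
message["To"] = "john.doe@gmail.com"
message["Subject"] = "Subject line"
utf8_message = message.as_bytes()  # pass this to imap_ssl.append(...) as before
```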
|
<python><email><gmail><html-email><imaplib>
|
2023-12-27 12:17:48
| 1
| 3,656
|
Sebastian
|
77,721,396
| 3,291,993
|
Input output error for the already opened file
|
<p>I'm directing <code>sys.stderr</code> to a file and opening that file in write mode.
Only at the end, I'm closing <code>sys.stderr</code> and redirecting it to original <code>sys.__stderr__</code></p>
<pre><code>sys.__stderr__ = sys.stderr
sys.stderr = open(error_file, 'w')
# main code in between
sys.stderr.close()
sys.stderr = sys.__stderr__ # redirect to original stderr
</code></pre>
<p>However, it gives me the following error.</p>
<pre><code>sys.stderr.flush()
ValueError: I/O operation on closed file
</code></pre>
<p>.</p>
|
<python>
|
2023-12-27 10:38:52
| 2
| 1,147
|
burcak
|
77,721,135
| 8,584,739
|
Get the closest element from a list of dictionaries based on a variable
|
<p>I need to extract the closest element, counting backwards, from a list of dicts based on a variable <code>utime</code>. If <code>utime</code> is less than <code>uid.time</code>, it should not print anything.</p>
<p>I have created the script below, which does the work but doesn't seem efficient or foolproof, as I rely on a preset value <code>dif = 999999</code>. How can I get rid of this and make it foolproof? Also, if <code>utime</code> is less than every <code>uid.time</code> value, it should not print anything.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/python3
utime = 172200
uid = [
{
"id": 52,
"time": 172100
},
{
"id": 58,
"time": 172120
},
{
"id": 33,
"time": 172153
},
{
"id": 75,
"time": 172150
},
{
"id": 73,
"time": 172210
}
]
dif = 999999
ind = 0
for indx,i in enumerate(uid):
tDiff = utime - i['time']
if (tDiff > 0 and tDiff < dif):
dif = tDiff
ind = indx
print(uid[ind])
</code></pre>
<p>Expected Output (which I am getting anyway)</p>
<pre class="lang-json prettyprint-override"><code>{'id': 33, 'time': 172153}
</code></pre>
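<p>One sentinel-free sketch using <code>max</code> with a <code>default</code>, matching the strict <code>tDiff &gt; 0</code> behaviour of the loop above:</p>

```python
utime = 172200
uid = [{"id": 52, "time": 172100}, {"id": 58, "time": 172120},
       {"id": 33, "time": 172153}, {"id": 75, "time": 172150},
       {"id": 73, "time": 172210}]

# keep entries strictly before utime, then take the latest one;
# default=None replaces the dif = 999999 sentinel entirely
best = max((d for d in uid if d["time"] < utime),
           key=lambda d: d["time"], default=None)
if best is not None:
    print(best)
```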
|
<python><python-3.x>
|
2023-12-27 09:42:30
| 4
| 1,228
|
Vijesh
|
77,720,977
| 7,798,822
|
How to import openai package using jupyter notebook?
|
<p>I am getting the below error message when importing openai as ai in a Jupyter notebook:</p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-9-3f86bb4abbfc> in <module>
----> 1 import openai as ai
/opt/anaconda3/lib/python3.8/site-packages/openai/__init__.py in <module>
4
5 import os as _os
----> 6 from typing_extensions import override
7
8 from . import types
ImportError: cannot import name 'override' from 'typing_extensions'
(/opt/anaconda3/lib/python3.8/site-packages/typing_extensions.py)
</code></pre>
<p>I have no idea how to fix this. Any ideas?</p>
|
<python><openai-api><large-language-model>
|
2023-12-27 09:09:46
| 1
| 321
|
Natália Resende
|
77,720,936
| 3,291,993
|
Python inner function can not resolve the local variable of the outer function
|
<p>I have defined <code>df</code> locally in a Python function <code>func1</code>.
I want the inner function <code>accumulate_df</code> to see <code>df</code>,
but it gives an <code>Unresolved reference 'df'</code> error.</p>
<p>I don't want a global variable; results coming from multiple processes will be accumulated in these variables.</p>
<p>I have similar code that gives no error, but I can't understand what is happening here.</p>
<pre><code>import pandas as pd
import numpy as np
def func1():
# Initialization for accumulated df
df = pd.DataFrame(columns=['col1',
'col2',
'col3',
'col4'])
df['col1'] = df['col1'].astype('string')
df['col2'] = df['col2'].astype(np.float32)
df['col3'] = df['col3'].astype(np.int32)
df['col4'] = df['col4'].astype(np.float64)
def accumulate_df(result_tuple):
col1_value = result_tuple[0]
col2_value = result_tuple[1]
col3_value = result_tuple[2]
col4_value = result_tuple[3]
if df[(df['col1'] == col1_value) & (df['col2'] == col2_value)].values.any():
# Update Accumulate
df.loc[(df['col1'] == col1_value) & (df['col2'] == col2_value), 'col3'] += col3_value
df.loc[(df['col1'] == col1_value) & (df['col2'] == col2_value), 'col4'] += col4_value
else:
df = df.append({'col1': col1_value,
'col2': col2_value,
'col3': col3_value,
'col4': col4_value}, ignore_index=True)
</code></pre>
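<p>A likely explanation: the assignment <code>df = df.append(...)</code> inside <code>accumulate_df</code> makes <code>df</code> local to the inner function for its entire body, so the earlier reads become unresolved. Declaring <code>nonlocal df</code> fixes the binding; a minimal sketch with a scalar accumulator (note also that <code>DataFrame.append</code> was removed in pandas 2.x, so <code>pd.concat</code> would be needed there anyway):</p>

```python
def func1():
    total = 0.0

    def accumulate(result_tuple):
        nonlocal total            # without this, the assignment below makes
        total += result_tuple[1]  # 'total' local and earlier reads unresolved
        return total

    accumulate(("a", 2.5))
    accumulate(("b", 1.5))
    return total
```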
|
<python><pandas><dataframe>
|
2023-12-27 09:01:21
| 0
| 1,147
|
burcak
|
77,720,872
| 424,333
|
How do I create a Gmail email draft in Python?
|
<p>I'm trying to create an email in Python that will appear in my Gmail drafts.</p>
<p>Here's my code:</p>
<pre><code>from email.message import Message
import imaplib
import time
with imaplib.IMAP4_SSL(host="imap.gmail.com", port=imaplib.IMAP4_SSL_PORT) as imap_ssl:
print("Logging into mailbox...")
resp_code, response = imap_ssl.login('name@gmail.com', 'password key')
# Create message
message = Message()
message["From"] = "You <me@example.com>"
    message["To"] = "recipient <someone@example.com>"
message["Subject"] = "This is a subject"
message.set_payload("This is the email body")
utf8_message = str(message).encode("utf-8")
# Send message
status, data = imap_ssl.append("[Gmail]/Drafts", "", imaplib.Time2Internaldate(time.time()), utf8_message)
print(data)
</code></pre>
<p>The output is: <code>[b"[TRYCREATE] Folder doesn't exist. (Failure)"]</code></p>
<p>I understand <code>[Gmail]/Drafts</code> is the issue, but I'm not sure what it should be instead?</p>
<p>N.B. The code will only be used with one email account so interoperability isn't a requirement.</p>
<p><strong>Update</strong></p>
<p><code>imap_ssl.list()</code> returned, among other things, <code>b'(\\Drafts \\HasNoChildren) "/" "[Google Mail]/Drafts"'</code></p>
<p>When I try:</p>
<pre><code>imap_ssl.append("[Google Mail]/Drafts", "", imaplib.Time2Internaldate(time.time()), utf8_message)
</code></pre>
<p>I get:</p>
<pre><code>Traceback (most recent call last):
File "/Users/apple/test.py", line 20, in <module>
status, data = imap_ssl.append("[Google Mail]/Drafts", "", imaplib.Time2Internaldate(time.time()), utf8_message)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/imaplib.py", line 417, in append
return self._simple_command(name, mailbox, flags, date_time)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/imaplib.py", line 1230, in _simple_command
return self._command_complete(name, self._command(name, *args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/imaplib.py", line 1055, in _command_complete
raise self.error('%s command error: %s %s' % (name, typ, data))
imaplib.IMAP4.error: APPEND command error: BAD [b'Could not parse command']
</code></pre>
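<p>Regarding the update: <code>imaplib</code> passes the mailbox argument through verbatim, and an IMAP mailbox name containing a space must be sent as a quoted string, so <code>'"[Google Mail]/Drafts"'</code> (with the embedded double quotes) should parse. A hypothetical helper:</p>

```python
def quote_mailbox(name):
    # imaplib passes the mailbox argument through verbatim, and an IMAP
    # mailbox name containing a space must be sent as a quoted string
    if " " in name and not name.startswith('"'):
        return '"%s"' % name
    return name
```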
|
<python><email><gmail><imap>
|
2023-12-27 08:47:07
| 0
| 3,656
|
Sebastian
|
77,720,387
| 3,297,613
|
ChromaDB index update using gunicorn multiple uviworkers on macos throws YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__ error
|
<p>I have created a simple FastAPI app for updating/uploading documents to ChromaDB Vectorstore on Mac OSX in-order to do a simple query search. Here is the below code,</p>
<pre><code>import asyncio
from fastapi import BackgroundTasks, FastAPI
from langchain.document_loaders import DirectoryLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores.chroma import Chroma
app = FastAPI()
directory = "pets"
embedding = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
def load_docs(directory):
return [Document(page_content="Hi, My name is Tom. My job is to collect tickets.", metadata={"source": "tom"})]
def split_docs(documents, chunk_size=1000, chunk_overlap=20):
text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
docs = text_splitter.split_documents(documents)
return docs
@app.post("/update")
def update():
print("loading docs")
documents = load_docs(directory)
print("splitting docs")
docs = split_docs(documents)
print("Index updating..")
db = Chroma.from_documents(docs, embedding, persist_directory="chromadb")
db.persist()
print('Done.')
return {"status": "done"}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8001)
</code></pre>
<p>If I run the above script directly, the index update works perfectly fine upon calling the <code>/update</code> endpoint.</p>
<pre><code>(venv) $ python test.py
INFO: Started server process [32951]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)
loading docs
splitting docs
Index updating..
Done.
INFO: 127.0.0.1:53373 - "POST /update HTTP/1.1" 200 OK
</code></pre>
<p>But if I run the same code using <code>gunicorn</code> with multiple <code>UvicornWorker</code>, it throws
<code>The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec().</code> error.</p>
<pre><code>(venv) $ gunicorn test:app -w 4 -k uvicorn.workers.UvicornWorker --preload
[2023-12-27 11:44:06 +0530] [33014] [INFO] Starting gunicorn 21.2.0
[2023-12-27 11:44:06 +0530] [33014] [INFO] Listening at: http://127.0.0.1:8000 (33014)
[2023-12-27 11:44:06 +0530] [33014] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2023-12-27 11:44:06 +0530] [33022] [INFO] Booting worker with pid: 33022
[2023-12-27 11:44:06 +0530] [33023] [INFO] Booting worker with pid: 33023
[2023-12-27 11:44:06 +0530] [33022] [INFO] Started server process [33022]
[2023-12-27 11:44:06 +0530] [33022] [INFO] Waiting for application startup.
[2023-12-27 11:44:06 +0530] [33022] [INFO] Application startup complete.
[2023-12-27 11:44:06 +0530] [33023] [INFO] Started server process [33023]
[2023-12-27 11:44:06 +0530] [33023] [INFO] Waiting for application startup.
[2023-12-27 11:44:06 +0530] [33023] [INFO] Application startup complete.
[2023-12-27 11:44:06 +0530] [33024] [INFO] Booting worker with pid: 33024
[2023-12-27 11:44:06 +0530] [33024] [INFO] Started server process [33024]
[2023-12-27 11:44:06 +0530] [33024] [INFO] Waiting for application startup.
[2023-12-27 11:44:06 +0530] [33024] [INFO] Application startup complete.
[2023-12-27 11:44:06 +0530] [33025] [INFO] Booting worker with pid: 33025
[2023-12-27 11:44:06 +0530] [33025] [INFO] Started server process [33025]
[2023-12-27 11:44:06 +0530] [33025] [INFO] Waiting for application startup.
[2023-12-27 11:44:06 +0530] [33025] [INFO] Application startup complete.
loading docs
splitting docs
Index updating..
The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec().
Break on __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__() to debug.
[2023-12-27 11:44:32 +0530] [33014] [ERROR] Worker (pid:33025) was sent SIGSEGV!
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
[2023-12-27 11:44:32 +0530] [33047] [INFO] Booting worker with pid: 33047
</code></pre>
<p><em>Spec:</em></p>
<pre><code>OS: MacOsx Ventura
Python Version: 3.10.1
gunicorn: 21.2.0
</code></pre>
<p>PS: I want to keep the <code>--preload</code> option.</p>
|
<python><macos><fastapi><gunicorn><chromadb>
|
2023-12-27 06:30:07
| 1
| 175,204
|
Avinash Raj
|
77,720,235
| 10,200,497
|
Groupby column of sequence of numbers
|
<p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame(
{
'a': [101, 9, 1, 10, 100, 103, 102, 105, 90, 110],
'b': [1, 0, 1, 0, 0, 1, 1, 1, 0, 1],
}
)
</code></pre>
<p>And this is the way that I want to group them by column <code>b</code>:</p>
<pre><code> a b
# -------------------------
0 101 1
1 9 0
2 1 1
3 10 0
4 100 0
5 103 1
6 102 1
7 105 1
8 90 0
9 110 1
# -------------------------
2 1 1
3 10 0
4 100 0
5 103 1
6 102 1
7 105 1
8 90 0
9 110 1
# -------------------------
5 103 1
6 102 1
7 105 1
8 90 0
9 110 1
# -------------------------
9 110 1
</code></pre>
<p>I need to find the rows in <code>b</code> where a 1 comes after a 0, or where 1 is the first value, and then create groups from each such row to the end.</p>
<p>This image clarifies the point:</p>
<p><a href="https://i.sstatic.net/34BXp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/34BXp.png" alt="enter image description here" /></a></p>
<p>I could identify these rows, but I don't know how to continue. Note that I just need the groups, to apply some functions to them later. Also, I couldn't identify the first row:</p>
<pre><code>df.loc[(df.b == 1) & (df.b.shift(1) == 0), 'c'] = 'x'
a b c
0 101 1 NaN
1 9 0 NaN
2 1 1 x
3 10 0 NaN
4 100 0 NaN
5 103 1 x
6 102 1 NaN
7 105 1 NaN
8 90 0 NaN
9 110 1 x
</code></pre>
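<p>One possible approach (a sketch, assuming the goal is exactly the overlapping suffix groups shown above, one per start row; <code>starts</code> and <code>groups</code> are names chosen for illustration):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'a': [101, 9, 1, 10, 100, 103, 102, 105, 90, 110],
    'b': [1, 0, 1, 0, 0, 1, 1, 1, 0, 1],
})

# Start rows: b == 1 where the previous b was 0. Using shift(fill_value=0)
# also marks a leading 1 as a start, covering the "first row" case.
starts = df.index[(df['b'] == 1) & (df['b'].shift(fill_value=0) == 0)]

# Each group is the suffix of the frame from a start row to the end.
groups = [df.loc[s:] for s in starts]
```

<p>Each element of <code>groups</code> is a regular DataFrame, so any later functions can be applied to them directly.</p>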
|
<python><pandas>
|
2023-12-27 05:47:24
| 1
| 2,679
|
AmirX
|
77,720,146
| 7,040,692
|
numpy array matrix-like multiplication
|
<p>I have the following definition:</p>
<pre><code>class Transform:
mat_ = np.ones([2, 2], dtype=float)
def __init__(self, mat):
self.mat_ = mat
# vec should be a column vector
def r_mult(self, vec):
return self.mat_ @ vec.T
</code></pre>
<p>I'm using it to initiate an object, like</p>
<pre><code>t = transform.Transform(np.array([[1, 1], [2, 1]], dtype=float))
</code></pre>
<p>Then, I call</p>
<pre><code>t.r_mult(np.array([1, 0.0]))
</code></pre>
<p>however, the answer I get from the PyCharm console is:</p>
<pre><code>array([[1., 0.],
[2., 0.]])
</code></pre>
<p>This is a point-wise multiplication, not the matrix-vector product I would expect. A direct calculation without the class in the middle works as expected:</p>
<pre><code>np.array([[1, 1], [2, 1]], dtype=float) @ np.array([1, 0.0])
</code></pre>
<p>leads to</p>
<pre><code>array([1., 2.])
</code></pre>
<p>Is there something I should know about Python classes, or is there another reason the multiplication does not work as I expect?</p>
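<p>As a side note, a small demonstration of the relevant NumPy behavior may help narrow this down (not the asker's code, just an illustration): <code>.T</code> is a no-op on a 1-D array, so <code>mat @ vec.T</code> is still an ordinary matrix-vector product, whereas element-wise broadcasting with <code>*</code> reproduces the reported output exactly:</p>

```python
import numpy as np

mat = np.array([[1, 1], [2, 1]], dtype=float)
vec = np.array([1, 0.0])

# .T does nothing to a 1-D array, so @ is a true matrix-vector product:
print(mat @ vec.T)   # [1. 2.]

# Element-wise broadcasting reproduces the result shown in the question,
# which suggests * was evaluated somewhere instead of @:
print(mat * vec)     # [[1. 0.] [2. 0.]]
```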
|
<python><arrays><numpy>
|
2023-12-27 05:13:16
| 0
| 630
|
dEmigOd
|
77,720,049
| 10,853,071
|
optimizing multiple try except code for web scraping
|
<p>I have a web scraping script where, for many reasons, some parts of the code "break" when they do not find the information as expected. I am handling this with multiple try/except blocks.</p>
<pre><code> asin = item.get('data-asin')
title = item.find_all('span',{'class' : 'a-size-base-plus a-color-base a-text-normal'})[0].get_text()
try:
label = item.find_all('span',{'aria-label' : 'Escolha da Amazon'})[0].get('aria-label')
except IndexError :
label = None
try:
current_whole_price = item.find_all('span', {'class' : 'a-price'})[0].find('span', {'class' : 'a-price-whole'}).get_text().replace(',','').replace('.','')
except:
current_whole_price = '0'
try :
current_fraction_price = item.find_all('span', {'class' : 'a-price'})[0].find('span', {'class' : 'a-price-fraction'}).get_text()
except :
current_fraction_price = '0'
current_price = float(current_whole_price+'.'+current_fraction_price)
try :
rating_info = item.find('div', {'class':'a-row a-size-small'}).get_text()
rating = float(rating_info[:3].replace(',','.'))
rating_count = int(re.findall(r'\d+', rating_info)[-1])
except :
rating = None
rating_count = None
try:
ad = True if (item.find_all('span', {'class' : 'a-color-secondary'})[0].get_text() == 'Patrocinado') else False
except IndexError:
ad = False
_ = {'productId' : itemId,
'asin' : asin,
'opt_label' : label,
#"ad": True if (item.find_all('span', {'class' : 'a-color-secondary'})[0].get_text() == 'Patrocinado') else False ,
"ad": ad,
'title' : title,
'current_price' : current_price,
'url':f'https://www.amazon.com.br/dp/{asin}',
'rating' : rating,
'rating_count' : rating_count,
}
</code></pre>
<p>But, looking at my code, you can see that many of the try/except blocks are similar.
I wonder if I could use some kind of function where I just pass the item, the desired selector, and a failsafe value to return if the lookup fails.</p>
<p>I intend to make my code simpler and smaller. I take any tips!</p>
<p>Regards!</p>
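<p>One possible shape for such a helper (a sketch, not tested against the real page; <code>safe_extract</code> is a name made up for illustration): pass a zero-argument callable containing the full lookup chain, plus a fallback value:</p>

```python
def safe_extract(getter, default=None, exceptions=(IndexError, AttributeError)):
    """Run a scraping callable; return `default` if the lookup fails."""
    try:
        return getter()
    except exceptions:
        return default

# Usage with the selectors from the question (item is a BeautifulSoup tag):
# label = safe_extract(
#     lambda: item.find_all('span', {'aria-label': 'Escolha da Amazon'})[0]
#                 .get('aria-label'))
# current_whole_price = safe_extract(
#     lambda: item.find_all('span', {'class': 'a-price'})[0]
#                 .find('span', {'class': 'a-price-whole'})
#                 .get_text().replace(',', '').replace('.', ''),
#     default='0')
```

<p>The lambda delays evaluation until inside the try block, so one helper covers both the <code>IndexError</code> (empty <code>find_all</code>) and <code>AttributeError</code> (<code>find</code> returning <code>None</code>) cases; catching specific exceptions also avoids the bare <code>except:</code> in the original code.</p>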
|
<python>
|
2023-12-27 04:30:11
| 1
| 457
|
FábioRB
|
77,719,957
| 10,853,071
|
Playwright typing annotation
|
<p>Even though it is only informational, I am trying to set up typing annotations on my script, as they help me debug it later.</p>
<p>I am trying to set up Playwright's browser and context as the return of the following code, but I am not being successful:</p>
<pre><code>import asyncio
from playwright.async_api import async_playwright
async def run_browser() :
p = await async_playwright().start()
browser = await p.chromium.launch(headless=False)
context = await browser.new_context(java_script_enabled=True,locale='pt-br')
return context, p
</code></pre>
<p>What would I have to put after <code>-></code> as the return type?</p>
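<p>A hedged sketch of one way to annotate it, assuming a current Playwright version (which exports <code>Playwright</code> and <code>BrowserContext</code> from <code>playwright.async_api</code>); the <code>TYPE_CHECKING</code> guard plus postponed evaluation keeps the annotation usable even where Playwright is not importable at runtime:</p>

```python
from __future__ import annotations  # annotations stay as strings at runtime

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by type checkers, never imported when the script runs.
    from playwright.async_api import BrowserContext, Playwright

async def run_browser() -> tuple[BrowserContext, Playwright]:
    from playwright.async_api import async_playwright
    p = await async_playwright().start()
    browser = await p.chromium.launch(headless=False)
    context = await browser.new_context(java_script_enabled=True, locale='pt-br')
    return context, p
```

<p>Returning the <code>Playwright</code> handle alongside the context (as the question's code does) lets the caller shut everything down later with <code>await p.stop()</code>.</p>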
|
<python><playwright><python-typing>
|
2023-12-27 03:48:56
| 1
| 457
|
FábioRB
|
77,719,741
| 7,563,454
|
Change the color of a pixel on a surface from a thread pool
|
<p>I'm working on a voxel raytracer in Python, just ported from Tkinter to Pygame for window management and pixel drawing. I use a process pool (<code>multiprocessing.Pool</code>) to do the raytracing for each pixel; in my original code the <code>trace</code> function does various calculations before returning the color as a hex string. The main loop runs periodically on the main thread (e.g. 30 times a second for 30 FPS) and calls the pool with a range to request new traces and update all pixel colors; each index is translated to a 2D position to know which location each color refers to. I left out functions unrelated to my question in this simplified example, like how I'm converting index <code>i</code> to two <code>x, y</code> integer positions in a custom vector class, and the hex to <code>r, g, b</code> converter. And yes, I have a way to break out of the <code>while True</code> loop when quitting; the representation below runs just as intended.</p>
<pre><code>import multiprocessing as mp
import pygame as pg
def trace(i):
# Rays are calculated here, for simplicity of example return a fixed color
return "ff7f00"
pg.init()
screen = pg.display.set_mode((64, 16))
clock = pg.time.Clock()
pool = mp.Pool()
while True:
# Raytrace each pixel and draw the new color, 64 * 16 = 1024
    result = pool.map(trace, range(0, 1024))
for i, c in enumerate(result):
pos = vec2_from_index(i)
col = rgb_from_hex(c)
screen.set_at((pos.x, pos.y), (col.r, col.g, col.b))
clock.tick(30)
</code></pre>
<p>But there's a problem: performance is very slow on the main thread, which acts as a bottleneck; the tracing workers don't even get to run at their full potential because of this. At higher resolutions there are a lot more pixels, e.g. 240 x 120 = 28800 entries in the <code>result</code> array; merely fetching it without doing anything to the result saddles the main thread, and enumerating the result to apply the colors makes it even worse. I'm hoping to remove this burden by changing the pixel being traced directly on the worker calculating it, instead of the worker merely returning the 6-character hex string and the main thread having to process it. The ideal code would then look something like this instead:</p>
<pre><code>import multiprocessing as mp
import pygame as pg
pg.init()
screen = pg.display.set_mode((64, 16))
clock = pg.time.Clock()
pool = mp.Pool()
def trace(i):
# Rays are calculated here, for simplicity of example return a fixed color
pos = vec2_from_index(i)
col = rgb_from_hex("ff7f00")
screen.set_at((pos.x, pos.y), (col.r, col.g, col.b))
while True:
# Raytrace each pixel and draw the new color, 64 * 16 = 1024
    pool.map(trace, range(0, 1024))
clock.tick(30)
</code></pre>
<p>However, this approach is bound to fail due to the way multiprocessing works: workers can only return a modified result when the function ends; they can't edit outside variables directly in a way that will be seen by the main process or other workers. Any changes made by a worker are thus temporary and only exist within that process before it finishes.</p>
<p>What do you see as the best solution here, in case anything better than my current approach is possible? Is there a way for pool workers to execute <code>pygame.set_at</code> on the screen surface with permanent results? Also, in this case I wouldn't need the pool to return a result... should I use something other than <code>pool.map</code> for more efficiency?</p>
|
<python><python-3.x><multithreading><pygame><multiprocessing>
|
2023-12-27 01:55:54
| 1
| 1,161
|
MirceaKitsune
|
77,719,698
| 9,335,403
|
Should <instance>.<property> += v call the property's setter?
|
<p>Given</p>
<pre><code>class TameWombat:
def __init__(self):
self.stomach = []
def __iadd__(self, v):
self.stomach += v
return self
class Fred:
def __init__(self):
self._pet = TameWombat()
@property
def wombat(self):
return self._pet
@wombat.setter
def wombat(self, v):
raise ValueError("Fred only wants this particular wombat, thanks.")
</code></pre>
<p>we cannot use the <code>__iadd__()</code> method of the <code>TameWombat</code> directly with the property:</p>
<pre><code>>>> fred = Fred()
>>>
>>> fred.wombat += 'delicious food'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 9, in wombat
ValueError: Fred only wants this particular wombat, thanks.
</code></pre>
<p>Yet we know that <code>+=</code> here applies to the result of the getter, not to <code>fred</code> himself:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/11987949/how-to-implement-iadd-for-a-python-property">How to implement __iadd__ for a Python property</a></li>
<li><a href="https://stackoverflow.com/questions/60029429/should-iadd-return-a-new-object-or-manipulate-self">Should __iadd__ return a new object or manipulate self?</a></li>
</ul>
<p>What's going on?</p>
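<p>What's going on is that <code>fred.wombat += v</code> desugars to <code>fred.wombat = fred.wombat.__iadd__(v)</code>: Python calls the getter, calls <code>__iadd__</code> on the result, then assigns the return value back through the setter. A minimal standalone demonstration (using a hypothetical <code>Box</code> class, not the question's code):</p>

```python
class Box:
    def __init__(self):
        self._items = []

    @property
    def items(self):
        return self._items

    @items.setter
    def items(self, v):
        raise ValueError("read-only")

b = Box()
try:
    # Equivalent to: b.items = b.items.__iadd__(['x'])
    # list.__iadd__ mutates in place and returns self; the final
    # assignment step then triggers the setter and raises.
    b.items += ['x']
except ValueError:
    pass

print(b._items)  # ['x'] — the in-place mutation already happened
```

<p>Note the classic gotcha this exposes: because <code>list.__iadd__</code> mutates before the setter runs, the underlying list has already changed even though the statement raised.</p>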
|
<python>
|
2023-12-27 01:30:38
| 1
| 435
|
Marnanel Thurman
|