| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,644,550
| 12,226,377
|
Rule based extraction from a pandas string using NLP
|
<p>I have a situation where I am performing keyword extraction and building a category on top of it, which works fine since I am able to do a <code>str.contains</code> for a single keyword.
But what if I want to use a set of keywords and build multiple sub-categories on top of it? For example, from the column Text I would like to extract the Target Sub Category based on the rules defined below:</p>
<pre><code>Text Target Sub Category
invoice received payment sent data csv Invoice & Payment and Data Request
</code></pre>
<p><strong>Rules</strong></p>
<p>The first part of the Sub Category i.e. <code>"Invoice & Payment"</code> is derived from keywords like <code>"invoice" & "payment"</code> and the second part of the sub category i.e. <code>"Data & Information"</code> is derived from keywords like <code>"data" and "csv"</code>.</p>
<p>How can I build something like this? The keywords can come in any order, so essentially an intelligent search has to be built, and there are a lot of different keywords on which I would like to build this sub-category. Any directional sense on how I can build this?</p>
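One possible direction (a sketch, not the only design): keep the rules in a dict mapping each sub-category name to its keyword set, and fire a rule when all of its keywords occur in the text, in any order. The rule names and keywords below are taken from the question's Rules section.

```python
# Hypothetical rule table: a sub-category fires when ALL of its keywords
# appear somewhere in the text, regardless of order.
RULES = {
    "Invoice & Payment": {"invoice", "payment"},
    "Data & Information": {"data", "csv"},
}

def sub_categories(text):
    """Return every matching sub-category name, joined with ' and '."""
    words = set(text.lower().split())
    hits = [name for name, keywords in RULES.items() if keywords <= words]
    return " and ".join(hits)

print(sub_categories("invoice received payment sent data csv"))
# With a DataFrame: df["Target Sub Category"] = df["Text"].apply(sub_categories)
```

Because the rules live in one dict, adding more keywords or sub-categories means editing data, not code.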
|
<python><pandas><text-extraction>
|
2023-03-05 18:44:55
| 1
| 807
|
Django0602
|
75,644,507
| 11,665,178
|
Python AWS Lambda in error because of pyjwt[crypto] (cryptography)
|
<p>I have the following error when I run my AWS Lambda under Python 3.9:</p>
<pre><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': /lib64/libc.so.6: version `GLIBC_2.28' not found (required by /var/task/cryptography/hazmat/bindings/_rust.abi3.so)
Traceback (most recent call last):
</code></pre>
<p>I am aware that this is somehow a compilation issue, so here are the steps I have taken up to the AWS Lambda deployment:</p>
<ul>
<li>Create a Dockerfile :</li>
</ul>
<pre><code># syntax=docker/dockerfile:1
FROM ubuntu:latest
ENV TZ=Europe/Paris
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update -y
RUN apt-get install software-properties-common -y
RUN add-apt-repository ppa:deadsnakes/ppa
# Install py39 from deadsnakes repository
RUN apt-get install python3.9 -y
# Install pip from standard ubuntu packages
RUN apt-get install python3-pip -y
RUN apt-get install zip -y
RUN apt-get install libc6 -y
RUN mkdir /home/packages
RUN pip install --target=/home/packages pyjwt[crypto]==2.6.0
RUN pip install --target=/home/packages pymongo[srv]==4.3.3
</code></pre>
<ul>
<li>Inside the Docker container, I run: <code>cd /home/packages</code></li>
<li>Then: <code>zip -r ../package.zip .</code></li>
<li>Then I use <code>docker cp</code> to copy the <code>package.zip</code> to my macOS host.</li>
</ul>
<p>I run <code>zip -g package.zip lambda_function.py</code> and upload the <code>.zip</code> file using boto3.</p>
<p><a href="https://i.sstatic.net/5sjII.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5sjII.png" alt="enter image description here" /></a></p>
<p>I would like to know why this is not enough, or what am I missing here?</p>
<p>Note: I need to keep using the <code>zip</code> method to upload the lambda package for other reasons, unless there is no other choice, of course.</p>
<p>EDIT: In response to @Tom's answer, if I add the mentioned <code>pip</code> command to my <code>Dockerfile</code>:</p>
<pre><code> > [12/14] RUN pip install --platform --platform manylinux2014_x86_64 --implementation cp --python 3.9 --only-binary=:all: --upgrade --target /home/packages cryptography:
#19 3.185 ERROR: Could not find a version that satisfies the requirement manylinux2014_x86_64 (from versions: none)
#19 3.186 ERROR: No matching distribution found for manylinux2014_x86_64
</code></pre>
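For reference, the quoted failure comes from the command itself: <code>--platform</code> appears twice (so pip parses <code>manylinux2014_x86_64</code> as a requirement name), and <code>--python</code> should be <code>--python-version</code>. A corrected sketch of that Dockerfile line, following the question's paths:

```dockerfile
# Fetch manylinux wheels (built against an older glibc) so the compiled
# cryptography extension can load inside the Lambda runtime.
RUN pip install \
    --platform manylinux2014_x86_64 \
    --implementation cp \
    --python-version 3.9 \
    --only-binary=:all: \
    --upgrade \
    --target /home/packages \
    cryptography
```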
|
<python><docker><ubuntu><aws-lambda><python-cryptography>
|
2023-03-05 18:40:52
| 1
| 2,975
|
Tom3652
|
75,644,459
| 12,871,587
|
Polars read_excel issue with a date format
|
<p>I'm trying to read rather messy Excel file(s) into a Polars dataframe, but I am getting an <code>XlsxValueError: Error: potential invalid date format.</code></p>
<p>I believe the issue is that some date column values are in Excel's numerical date format, which raises the error. Is there a setting for Polars or the Xlsx2csv options to read those columns as string types rather than trying to convert them to dates? I tried setting infer_schema_length to 0 to read the columns as strings, but it looks like it is already the xlsx2csv writer that raises the error.</p>
<p>My code currently as below:</p>
<pre><code>pl.read_excel(file="file_path",
sheet_name="sheet_name",
read_csv_options={"infer_schema_length":0},
xlsx2csv_options={"skip_hidden_rows":False})
</code></pre>
<p>Out:</p>
<pre><code>XlsxValueError: Error: potential invalid date format.
</code></pre>
<p>When reading the data in with Pandas read_excel, it does not raise errors. When trying to convert the Pandas df to Polars, it raises an error:</p>
<pre><code>df = pd.read_excel("file_name",
sheet_name="sheet_name")
pl.from_pandas(df)
</code></pre>
<p>Out:</p>
<pre><code>ArrowTypeError: Expected bytes, got a 'int' object
</code></pre>
<p>Current workaround (not ideal) I have is to read the data with Pandas in string format, and then convert to Polars dataframe and start cleaning the data.</p>
|
<python><python-polars>
|
2023-03-05 18:32:16
| 1
| 713
|
miroslaavi
|
75,644,257
| 1,873,108
|
Unable to cast Python instance of type <class 'dict'> to C++
|
<p>I'm trying to pass a dict[str, object] from Python to C++ to do some operations, and keep the object alive in C++ while it's being used.</p>
<pre class="lang-cpp prettyprint-override"><code>typedef std::map<std::string, py::object> dataMap;
void init_basicWindowMule(pybind11::module_ & m) {
pybind11::class_<testObject, QObject> widgetClass(m, "basicWindow");
widgetClass.def(pybind11::init<>());
widgetClass.def("create", [](py::object obj) {
manager = new testObject();
manager->setData(obj.cast<dataMap>());
});
}
</code></pre>
<pre class="lang-py prettyprint-override"><code>PyBindTest.basicWindow.create({"someStuff":testObject()})
</code></pre>
<p>What did I mess up?</p>
|
<python><c++><pybind11>
|
2023-03-05 17:56:44
| 1
| 1,076
|
Dariusz
|
75,644,182
| 3,552,975
|
Change the color of placeholder text of prompt_toolkit prompt in Python?
|
<p>The following is my code that causes this error: <code>TypeError: argument of type 'HTML' is not iterable</code></p>
<pre><code>from prompt_toolkit import prompt
from prompt_toolkit.formatted_text import FormattedText, HTML
def get_input():
prompt_text = [
("class:prompt-prefix", HTML('<ansired><b>{}</b></ansired>'.format('your input > '))),
("", '')
]
placeholder = HTML('<b>enter `q` or `exit` to exit </b>')
return prompt(prompt_text, placeholder=placeholder, default='').lower()
</code></pre>
<p>I tested and found that it is the <code>prompt_text</code> that introduces errors and I changed it to this:</p>
<pre><code>prompt_text = HTML('<ansired><b>{}</b></ansired>'.format('your input \> '))
# escape > character or not does not matter, I found
</code></pre>
<p>It still does not work. How can I fix this?</p>
|
<python><colors><prompt-toolkit>
|
2023-03-05 17:43:15
| 1
| 7,248
|
Lerner Zhang
|
75,644,169
| 2,326,961
|
How to stop subprocesses that communicate with the main process through request and response queues?
|
<p>I have a Python program that starts <em>N</em> subprocesses (clients) which send requests to and listen for responses from the main process (server). The interprocess communication uses pipes through <code>multiprocessing.Queue</code> objects according to the following scheme (one queue per consumer, so one request queue and <em>N</em> response queues):</p>
<pre><code> 1 req_queue
<-- Process-1
MainProcess <-- ============= <-- …
<-- Process-N
N resp_queues
--> ============= --> Process-1
MainProcess --> ============= --> …
--> ============= --> Process-N
</code></pre>
<p>The (simplified) program:</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing
def work(event, req_queue, resp_queue):
while not event.is_set():
name = multiprocessing.current_process().name
x = 3
req_queue.put((name, x))
print(name, 'input:', x)
y = resp_queue.get()
print(name, 'output:', y)
if __name__ == '__main__':
event = multiprocessing.Event()
req_queue = multiprocessing.Queue()
resp_queues = {}
processes = {}
N = 10
for _ in range(N): # start N subprocesses
resp_queue = multiprocessing.Queue()
process = multiprocessing.Process(
target=work, args=(event, req_queue, resp_queue))
resp_queues[process.name] = resp_queue
processes[process.name] = process
process.start()
for _ in range(100): # handle 100 requests
(name, x) = req_queue.get()
y = x ** 2
resp_queues[name].put(y)
event.set() # stop the subprocesses
for process in processes.values():
process.join()
</code></pre>
<p>The problem that I am facing is that the execution of this program (under Python 3.11.2) sometimes never stops, hanging at the line <code>y = resp_queue.get()</code> in some subprocess once the main process notifies the subprocesses to stop at the line <code>event.set()</code>. The problem is the same if I use the <code>threading</code> library instead of the <code>multiprocessing</code> library.</p>
<p>How to stop the subprocesses?</p>
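One common fix is to make the server wake every blocked worker after it stops serving: push a sentinel into each response queue, and have workers treat the sentinel as a stop signal. A condensed sketch of that pattern using the threading flavor (the multiprocessing version is analogous); the names and sizes here are illustrative:

```python
import queue
import threading

STOP = object()  # sentinel telling a worker to exit

def work(req_queue, resp_queue, name):
    while True:
        req_queue.put((name, 3))
        y = resp_queue.get()      # a worker blocked here is woken by STOP
        if y is STOP:
            break

def serve(n_workers, n_requests):
    req_queue = queue.Queue()
    resp_queues = {i: queue.Queue() for i in range(n_workers)}
    workers = [threading.Thread(target=work, args=(req_queue, resp_queues[i], i))
               for i in range(n_workers)]
    for w in workers:
        w.start()
    for _ in range(n_requests):          # handle a fixed number of requests
        name, x = req_queue.get()
        resp_queues[name].put(x ** 2)
    for q in resp_queues.values():       # wake every worker, even mid-get()
        q.put(STOP)
    for w in workers:
        w.join()
    return True
```

Because each response queue is FIFO, a worker always consumes its pending real responses before seeing STOP, so no response is lost and no <code>get()</code> blocks forever.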
|
<python><multiprocessing><pipe><client-server><message-queue>
|
2023-03-05 17:41:34
| 2
| 8,424
|
Géry Ogam
|
75,644,097
| 2,783,767
|
Is there a way to clear GPU memory after training the TF2 model
|
<p>I’m training multiple models sequentially, which will be memory-consuming if I keep all models without any cleanup. However, I am not aware of any way to clear the graph and free the GPU memory in TensorFlow 2.x. Is there a way to do so?
Below is my code.</p>
<pre><code>import os
os.chdir(os.path.dirname(os.path.abspath(__file__)))
import pandas as pd
import traceback
import numpy as np
from sklearn.preprocessing import StandardScaler
from pickle import load, dump
import tensorflow as tf
from imblearn.under_sampling import RandomUnderSampler
from tensorflow.keras.layers import LSTM, Dense
from sklearn.metrics import precision_recall_curve
import matplotlib.pyplot as plt
from keras.layers import Conv1D, BatchNormalization, GlobalAveragePooling1D, Permute, Dropout
from keras.layers import Input, Bidirectional, CuDNNLSTM, concatenate, Activation
from keras.models import Model, load_model
from tensorflow.keras.callbacks import ModelCheckpoint
for idx in range(10):
ep = 20
#load X_train, y_train, X_test, y_test
for size in units:
#create model
model = Model(ip, out)
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[
# tf.keras.metrics.Precision(),
# tf.keras.metrics.Recall()
])
model.fit(X_train, y_train, epochs=ep, batch_size=256, validation_data=(X_test, y_test))
</code></pre>
|
<python><tensorflow>
|
2023-03-05 17:30:50
| 1
| 394
|
Granth
|
75,644,077
| 6,494,707
|
PyTorch: RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
|
<p>I have two tensors:</p>
<pre><code># losses_q
tensor(0.0870, device='cuda:0', grad_fn=<SumBackward0>)
# this_loss_q
tensor([0.0874], device='cuda:0', grad_fn=<AddBackward0>)
</code></pre>
<p>When I try to concatenate them, PyTorch raises an error:</p>
<pre><code>losses_q = torch.cat((losses_q, this_loss_q), dim=0)
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
</code></pre>
<p>How to resolve this error?</p>
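A minimal reproduction and one possible fix: <code>torch.cat</code> requires every input to have at least one dimension, so lift the scalar to shape <code>(1,)</code> first (<code>reshape(1)</code>, or keeping <code>losses_q</code> 1-D from the start, also works):

```python
import torch

losses_q = torch.tensor(0.0870)       # zero-dimensional (scalar) tensor
this_loss_q = torch.tensor([0.0874])  # shape (1,)

# torch.cat needs every tensor to have at least one dimension,
# so promote the scalar to shape (1,) before concatenating.
losses_q = torch.cat((losses_q.unsqueeze(0), this_loss_q), dim=0)
```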
|
<python><python-3.x><pytorch><tensor><pytorch-geometric>
|
2023-03-05 17:28:01
| 1
| 2,236
|
S.EB
|
75,644,033
| 403,425
|
Pillow gives AttributeError: 'NoneType' object has no attribute 'seek' when trying to save a file
|
<p>I'm trying to create a view in Django, using Django REST Framework and its <code>FileUploadParser</code>, where I can upload an image to, and it then stores both the original image and a thumbnail.</p>
<pre class="lang-py prettyprint-override"><code>class ImageUploadParser(FileUploadParser):
media_type = "image/*"
class ImageUploadView(APIView):
permission_classes = (permissions.IsAuthenticated,)
parser_classes = [ImageUploadParser]
def post(self, request, filename):
if "file" not in request.data:
raise ParseError("Empty content")
file_obj = request.data["file"]
try:
img = Image.open(file_obj)
img.verify()
except UnidentifiedImageError:
raise ParseError("Unsupported image type")
path = settings.MEDIA_ROOT / f"{request.user.id}"
unique = uuid.uuid4().hex
extension = img.format.lower()
filename_full = f"{unique}.{extension}"
filename_thumb = f"{unique}.thumb.{extension}"
if not os.path.exists(path):
os.makedirs(path)
img.thumbnail((580, 580), Image.ANTIALIAS)
img.save(path / filename_thumb, "image/png", optimize=True)
return Response(f"{request.user.id}/{filename_full}", status=status.HTTP_201_CREATED)
</code></pre>
<p>When I post a file, this is the error I get:</p>
<pre><code>Internal Server Error: /api/uploads/upload/test.png
Traceback (most recent call last):
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/PIL/ImageFile.py", line 182, in load
seek = self.load_seek
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/PIL/Image.py", line 529, in __getattr__
raise AttributeError(name)
AttributeError: load_seek
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/asgiref/sync.py", line 486, in thread_handler
raise exc_info[1]
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 43, in inner
response = await get_response(request)
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
response = await wrapped_callback(
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/asgiref/sync.py", line 448, in __call__
ret = await asyncio.wait_for(future, timeout=None)
File "/opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
return await fut
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/asgiref/current_thread_executor.py", line 22, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/asgiref/sync.py", line 490, in thread_handler
return func(*args, **kwargs)
File "/opt/homebrew/Cellar/python@3.10/3.10.9/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/django/views/decorators/csrf.py", line 55, in wrapped_view
return view_func(*args, **kwargs)
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/django/views/generic/base.py", line 103, in view
return self.dispatch(request, *args, **kwargs)
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/Users/kevin/Workspace/cn-django/criticalnotes/uploads/views.py", line 74, in post
img.thumbnail((580, 580), Image.ANTIALIAS)
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/PIL/Image.py", line 2618, in thumbnail
im = self.resize(size, resample, box=box, reducing_gap=reducing_gap)
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/PIL/Image.py", line 2156, in resize
self.load()
File "/Users/kevin/Workspace/cn-django/.venv/lib/python3.10/site-packages/PIL/ImageFile.py", line 185, in load
seek = self.fp.seek
AttributeError: 'NoneType' object has no attribute 'seek'
</code></pre>
<p>I am not sure what is going wrong or what exactly is <code>None</code>, because <code>img</code> at least is not <code>None</code>.</p>
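For what it's worth, one well-known Pillow behavior matches this traceback: <code>Image.verify()</code> invalidates the underlying file pointer, so a later pixel access (<code>thumbnail</code> calls <code>resize</code>, which calls <code>load</code>) finds <code>self.fp</code> set to <code>None</code>. A sketch of the rewind-and-reopen pattern, using an in-memory PNG as a stand-in for the upload:

```python
import io
from PIL import Image

# In-memory PNG standing in for the uploaded file object.
file_obj = io.BytesIO()
Image.new("RGB", (32, 32)).save(file_obj, format="PNG")
file_obj.seek(0)

img = Image.open(file_obj)
img.verify()                # verify() consumes and invalidates the file handle

file_obj.seek(0)            # so rewind the stream...
img = Image.open(file_obj)  # ...and re-open before any pixel access
img.thumbnail((580, 580))   # thumbnail never upsizes, so the image stays 32x32
```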
|
<python><django-rest-framework><python-imaging-library>
|
2023-03-05 17:19:13
| 1
| 5,828
|
Kevin Renskers
|
75,644,014
| 10,566,763
|
Python `pip` on Windows - SSL: CERTIFICATE_VERIFY_FAILED
|
<p>I'm working on Windows, and since two days ago I have been getting the following error:</p>
<pre><code> ERROR: Could not install packages due to an OSError: HTTPSConnectionPool(host='f
iles.pythonhosted.org', port=443): Max retries exceeded with url: /packages/da/6
d/1235da14daddaa6e47f74ba0c255358f0ce7a6ee05da8bf8eb49161aa6b5/pandas-1.5.3-cp31
1-cp311-win_amd64.whl (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CER
TIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl
.c:992)')))
</code></pre>
<p>I get the error both with the Python installed directly on my PC and inside different virtual environments. I tried it on Python 3.10 and 3.9 and both get it.</p>
<p>To solve it I tried the following solutions, but none of them worked:</p>
<blockquote>
This error occurs when the SSL certificate verification fails during a package installation using pip in Python on a Windows system. To fix this error, you can follow the steps given below:
<p>Download the cacert.pem file from the official cURL website (<a href="https://curl.se/docs/caextract.html" rel="nofollow noreferrer">https://curl.se/docs/caextract.html</a>) and save it in a suitable location on your computer.</p>
<p>Set the SSL_CERT_FILE environment variable to the path of the "cacert.pem" file. You can do this by opening the Command Prompt and running the following command:</p>
<pre><code>setx SSL_CERT_FILE "C:\path\to\cacert.pem"
</code></pre>
<p>Make sure to replace C:\path\to\cacert.pem with the actual path where you saved the "cacert.pem" file.</p>
<p>Close and reopen the Command Prompt to ensure that the environment variable is set correctly.</p>
<p>Try running the pip install command again.</p>
<p>This should resolve the error and allow you to install packages using pip on your Windows system.</p>
</blockquote>
<blockquote>
Upgrade pip to the latest version using the following command:
<pre><code>python -m pip install --upgrade pip
</code></pre>
<p>Add the --trusted-host pypi.org --trusted-host files.pythonhosted.org option to your pip install command to instruct pip to trust the SSL certificates of these domains. For example:</p>
<pre><code>pip install <package-name> --trusted-host pypi.org --trusted-host files.pythonhosted.org
</code></pre>
<p>If the above steps did not work, you can try updating the certifi package using the following command:</p>
<pre><code>pip install --upgrade certifi
</code></pre>
<p>This will update the CA certificates that are used by Python to verify SSL connections.</p>
<p>If you are still facing the issue, you can try disabling SSL verification by adding the <code>--trusted-host</code> and <code>--no-verify</code> options to your pip install command:</p>
<pre><code>pip install <package-name> --trusted-host pypi.org --trusted-host files.pythonhosted.org --no-verify
</code></pre>
<p>However, this is not recommended as it can pose a security risk.</p>
</blockquote>
<p>As I mentioned, neither solution fixed the error. What is wrong with the SSL certificate?</p>
|
<python><pip><ssl-certificate>
|
2023-03-05 17:15:51
| 3
| 706
|
notarealgreal
|
75,643,921
| 4,020,302
|
Append new column to csv based on lookup
|
<p>I have two csv files, lookup.csv and data.csv. I'm converting lookup.csv to a dictionary and need to add a new column to data.csv based on it.</p>
<p>Input:</p>
<p>lookup.csv</p>
<pre><code> 1 first
2 second
...
</code></pre>
<p>data.csv</p>
<pre><code> 101 NYC 1
202 DC 2
</code></pre>
<p>Expected output:</p>
<p>data.csv</p>
<pre><code> col1 col2 col3 col4
101 NYC 1 first
202 DC 2 second
...
</code></pre>
<p>Here, for the first row, the new column col4 has "first" because col3 has 1 and its corresponding value in lookup.csv is "first".</p>
<p>I tried the below logic but failing here:</p>
<pre><code>df = pd.read_csv("lookup.csv",header=None, index_col=0, squeeze=True).to_dict()
df1 = pd.read_csv("data.csv")
df1['col4'] = df.get(df1['col3'])
Error: TypeError: unhashable type: 'Series'
</code></pre>
<p>Can someone please help in resolving this issue?</p>
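The TypeError happens because <code>dict.get</code> expects a single hashable key, but <code>df1['col3']</code> is a whole Series; <code>Series.map</code> applies the lookup element-wise instead. A sketch with in-memory stand-ins for the two csv files (note that <code>read_csv</code>'s <code>squeeze</code> argument was removed in pandas 2.0, hence <code>DataFrame.squeeze</code>):

```python
import io
import pandas as pd

# In-memory stand-ins for lookup.csv and data.csv.
lookup_csv = io.StringIO("1,first\n2,second\n")
data_csv = io.StringIO("101,NYC,1\n202,DC,2\n")

# Build the id -> label mapping from the lookup file.
lookup = pd.read_csv(lookup_csv, header=None, index_col=0).squeeze("columns").to_dict()

df = pd.read_csv(data_csv, header=None, names=["col1", "col2", "col3"])
# Series.map looks each col3 value up in the dict, element by element.
df["col4"] = df["col3"].map(lookup)
```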
|
<python><python-3.x><pandas><csv>
|
2023-03-05 17:02:47
| 3
| 405
|
praneethh
|
75,643,822
| 14,729,820
|
How to replace specific string in data frame column value?
|
<p>I have an <code>input.txt</code> file that has 2 columns (<code>file_name,text</code>), and I want to replace the tab separator character (tab, because I separated the txt file by this character) that appears in the <code>text</code> column.</p>
<p>example for input file :</p>
<pre><code>0.jpg Jól olvasom? Összesen négy, azaz 4 számot játszott el
1.jpg a csapat a koncerten Ilyet még nem is hallottam
</code></pre>
<p>I wrote the following code :</p>
<pre><code>df = pd.read_csv(f'{path}labels.txt',# labels labels_tab_remove
header=None,
delimiter=' ',
encoding="utf8",
engine='python'
)
df.rename(columns={0: "file_name", 1: "text"}, inplace=True)
print(df.head())
</code></pre>
<p>So I want to replace the tab character with a single space:</p>
<pre><code>for idx in range(len(df)):
df['text'][idx].replace(" "," ")
</code></pre>
<p>So the expected output:</p>
<pre><code>0.jpg Jól olvasom? Összesen négy, azaz 4 számot játszott el
1.jpg a csapat a koncerten Ilyet még nem is hallottam
</code></pre>
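Two things matter here: the vectorized <code>Series.str.replace</code> handles the whole column at once, and plain <code>str.replace</code> on one cell returns a new string that the loop above throws away (strings are immutable, so the result must be assigned back). A minimal sketch with a one-row stand-in for the data:

```python
import pandas as pd

df = pd.DataFrame({"text": ["Jól\tolvasom?\tÖsszesen négy"]})

# Vectorized replace of the literal tab with a single space; assign the
# result back, since Series string ops return a new Series.
df["text"] = df["text"].str.replace("\t", " ", regex=False)
print(df["text"][0])  # → Jól olvasom? Összesen négy
```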
|
<python><pandas><dataframe><text><deep-learning>
|
2023-03-05 16:45:52
| 2
| 366
|
Mohammed
|
75,643,736
| 5,378,816
|
What is the correct typing for this case (subclassed method)?
|
<p>I can't get the following type hint right. Could you please help?</p>
<pre><code># example edited to make simpler (was async)
from typing import Any, Callable, ClassVar
class Base:
def n_a(self) -> str:
return 'N/A'
meth1: ClassVar[Callable[[Base], str]] = n_a
# also meth2, meth3, ...
class Subclass(Base):
def meth1(self) -> str: # <-- line 11
return 'yes'
</code></pre>
<p>the error is:</p>
<pre><code>test.py:11: error: Signature of "meth1" incompatible with supertype "Base" [override]
test.py:11: note: Superclass:
test.py:11: note: def (Base, /) -> str
test.py:11: note: Subclass:
test.py:11: note: def meth1(self) -> str
</code></pre>
<p>The problem is probably here:</p>
<pre><code>ClassVar[Callable[[Base], str]]
^^^^^^
</code></pre>
<p>because this is accepted without errors:</p>
<pre><code> ClassVar[Callable[..., str]]
</code></pre>
<hr />
<p>Update: attempt to use <code>Self</code> as suggested in the comments:</p>
<pre><code>test-self.py:11: error: Signature of "meth1" incompatible with supertype "Base" [override]
test-self.py:11: note: Superclass:
test-self.py:11: note: def (Self, /) -> str
test-self.py:11: note: Subclass:
test-self.py:11: note: def meth1(self) -> str
</code></pre>
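Since the question already observes that <code>ClassVar[Callable[..., str]]</code> is accepted, here is a self-contained sketch of that workaround (the <code>...</code> gives up checking of the argument list, which is the trade-off):

```python
from typing import Callable, ClassVar

class Base:
    def n_a(self) -> str:
        return 'N/A'
    # Callable[..., str] skips checking the first (self) parameter, which is
    # what made mypy reject the subclass override of the Base-typed version.
    meth1: ClassVar[Callable[..., str]] = n_a

class Subclass(Base):
    def meth1(self) -> str:
        return 'yes'

# Functions are descriptors, so Base().meth1() still binds self and
# falls back to n_a, while Subclass overrides it normally.
print(Base().meth1(), Subclass().meth1())  # → N/A yes
```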
|
<python><python-typing>
|
2023-03-05 16:33:51
| 1
| 17,998
|
VPfB
|
75,643,701
| 10,415,970
|
How to stop headless Python Selenium from stealing window focus?
|
<p>I'm using Python Selenium in headless mode and I have some parts of the script that open new tabs, do clicks, etc.</p>
<p>However, it keeps stealing my computer's window focus and putting it on the script. Is there a way to prevent this from happening?</p>
<p>For example: I'm doing literally anything else on my computer, say typing this post, and my Selenium automation puts focus on its logging window, so my typing stops registering in the window I was using.</p>
|
<python><selenium-webdriver><headless-browser>
|
2023-03-05 16:28:40
| 0
| 4,320
|
Zack Plauché
|
75,643,572
| 1,770,724
|
Get Dropbox folder path in Python
|
<p>I need to know the local Dropbox path on any machine (Mac or Windows), wherever the folder is (it might be on a secondary hard drive) and whatever its name is (it might be 'E:\My_funny_dropbox'). I'm using the Dropbox API. On the Dropbox website, I've set a token to grant access to "files.metadata.read". The token was saved in an environment variable for security reasons, where I can successfully read it.
Here is the current code:</p>
<pre><code># Attempting to use the Dropbox API to locate the Dropbox directory.
import os, dropbox
"""Acquiring necessary variables"""
tokens = os.environ
access_token = tokens['DROPBOX_ACCESS_TOKEN']
if __name__ == "__main__":
    print(f"The Dropbox access token is: {access_token}")  # The access token to my Dropbox, as previously saved in the system. So far so good; this works.
# Initialize the Dropbox API with your access token
dbx = dropbox.Dropbox(access_token)
# Get the Dropbox user's account information
account_info = dbx.users_get_current_account()
# Get the root folder information
# The program stucks here
root_folder = dbx.files_get_metadata("/")
# Get the path to the root folder
dropbox_path = root_folder.path_display
</code></pre>
<p>In case it helps the full log is:</p>
<pre><code>E:\Dropbox\pythonProject1\Scripts\python.exe "E:\Dropbox\pythonProject1\dropbox_folder.py"
The Dropbox access token is: *** got the right token ***
Traceback (most recent call last):
File "E:\Dropbox\pythonProject1\Lib\site-packages\requests\models.py", line 971, in json
return complexjson.loads(self.text, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\me\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\me\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\me\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\Dropbox\pythonProject1\Lib\site-packages\dropbox\dropbox_client.py", line 624, in raise_dropbox_error_for_resp
if res.json()['error'] == 'invalid_grant':
^^^^^^^^^^
File "E:\Dropbox\pythonProject1\Lib\site-packages\requests\models.py", line 975, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\Dropbox\pythonProject1\dropbox_folder.py", line 20, in <module>
root_folder = dbx.files_get_metadata("/")
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Dropbox\pythonProject1\Lib\site-packages\dropbox\base.py", line 1671, in files_get_metadata
r = self.request(
^^^^^^^^^^^^^
File "E:\Dropbox\pythonProject1\Lib\site-packages\dropbox\dropbox_client.py", line 326, in request
res = self.request_json_string_with_retry(host,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Dropbox\pythonProject1\Lib\site-packages\dropbox\dropbox_client.py", line 476, in request_json_string_with_retry
return self.request_json_string(host,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Dropbox\pythonProject1\Lib\site-packages\dropbox\dropbox_client.py", line 596, in request_json_string
self.raise_dropbox_error_for_resp(r)
File "E:\Dropbox\pythonProject1\Lib\site-packages\dropbox\dropbox_client.py", line 632, in raise_dropbox_error_for_resp
raise BadInputError(request_id, res.text)
dropbox.exceptions.BadInputError: BadInputError('db812a8d6e8948308f2439a3ca50bc53', 'Error in call to API function "files/get_metadata": request body: path: The root folder is unsupported.')
Process finished with exit code 1
</code></pre>
<p>To my best understanding, the meaning of this lies in the last line: 'path: The root folder is unsupported', which seems related to this line of the program: <code>root_folder = dbx.files_get_metadata("/")</code>. The Dropbox API is said to accept an empty string instead, like this: <code>root_folder = dbx.files_get_metadata("")</code>, but that doesn't work either.</p>
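A side note that may reframe the problem: the HTTP API cannot know where a given machine keeps its local sync folder, but the desktop client records that location in an <code>info.json</code> file. A hedged sketch; the file locations and the <code>"path"</code> key below follow the Dropbox desktop client's documented behavior, but treat them as assumptions and verify against the Dropbox help pages. The function returns <code>None</code> when no client configuration is found.

```python
import json
import os
import pathlib

def local_dropbox_path():
    """Find the local Dropbox sync folder via the desktop client's info.json.

    The HTTP API has no knowledge of local machines; the desktop client
    writes its sync-folder location to info.json instead.
    """
    candidates = [
        pathlib.Path(os.path.expandvars(r"%APPDATA%\Dropbox\info.json")),
        pathlib.Path(os.path.expandvars(r"%LOCALAPPDATA%\Dropbox\info.json")),
        pathlib.Path.home() / ".dropbox" / "info.json",
    ]
    for candidate in candidates:
        if candidate.exists():
            info = json.loads(candidate.read_text())
            # Top-level keys are account types ("personal", "business"),
            # each holding a "path" entry for the local folder.
            for account in info.values():
                return account["path"]
    return None
```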
|
<python><python-3.x><dropbox>
|
2023-03-05 16:12:07
| 2
| 4,878
|
quickbug
|
75,643,203
| 12,715,723
|
How to make a random function without using any library?
|
<p>I know that the way to produce random numbers from 0 to 1 (<code>0.0 ≤ n < 1.0</code>) can be done using a code like this in Python:</p>
<pre class="lang-py prettyprint-override"><code>import random
print(random.random())
</code></pre>
<p>But I'm curious how this function can be made and how it works. Can anyone explain how to make a random function from scratch, without using any library?</p>
<p>The result I want:</p>
<pre class="lang-py prettyprint-override"><code>def getRandom():
# return float between 0 and 1 (0 ≤ n < 1)
</code></pre>
<p>I have found several sources online, but I didn't get a good understanding from them.</p>
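For the curious, the classic from-scratch construction is a linear congruential generator (LCG): keep an integer state, update it as <code>state = (a*state + c) mod m</code>, and divide by <code>m</code> to land in <code>[0, 1)</code>. A sketch with well-known constants (the ones popularized by glibc's rand); this is simple and fast but not cryptographically secure:

```python
_state = 12345  # seed; any starting integer works

def get_random():
    """Return a pseudo-random float n with 0 <= n < 1, via an LCG update."""
    global _state
    _state = (1103515245 * _state + 12345) % (2 ** 31)
    return _state / (2 ** 31)
```

Real libraries use stronger generators (CPython's <code>random</code> uses the Mersenne Twister), but the state-update-then-normalize shape is the same.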
|
<python><random>
|
2023-03-05 15:13:55
| 1
| 2,037
|
Jordy
|
75,643,086
| 11,498,867
|
how to loop over a list of dataframes and change specific values
|
<p>I have a list of dataframes. Each dataframe of the list has following format:</p>
<p><a href="https://i.sstatic.net/s5qYC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s5qYC.png" alt="enter image description here" /></a></p>
<p>(screenshot from spyder)</p>
<p>On this list of dataframes, I perform several tasks. For example, I want to change the name of "specA" to "Homo sapiens" in all of the dataframes, so I do the following:</p>
<pre><code>for i,df in enumerate(dataframes_list):
dataframes_list[i]=df.replace("specA","Homo sapiens")
</code></pre>
<p>This gives me the desired output.</p>
<p>I perform several other of these kind of <code>for loops</code> with the same structure, i.e.:</p>
<pre><code>for i,df in enumerate(dataframes_list):
dataframes_list[i]= *expression*
</code></pre>
<p>In the end, for each dataframe I sum the "reads" column of all species:</p>
<pre><code>for i,df in enumerate(dataframes_list):
dataframes_list[i]=(df.groupby('species')['reads'].sum()).to_frame()
</code></pre>
<p>and thereafter merge all dataframes in the list to a single dataframe:</p>
<pre><code>df_merged = reduce(lambda left,right: pd.merge(left,right,on=['species'],
how='outer'), dataframes_list)
</code></pre>
<p>This gives me exactly the output I want to have.</p>
<p>Now there is a new task I want to perform, but I don't know how to implement it.</p>
<p>For each dataframe in the list, I want to change the "perc_ID" value of species "specC" to "100". This could be done with .loc:</p>
<pre><code>df.loc[df.species == "specC","perc_ID"]=100
</code></pre>
<p>But I need to loop over de DFs in the list so that would be:</p>
<pre><code>for i,df in enumerate(dataframes_list):
dataframes_list[i]=df.loc[df.species == "specC","perc_ID"]=100
</code></pre>
<p>This obviously does not work, as the second line contains two "=" assignments.</p>
<p>I could change this by removing <code>enumerate</code>:</p>
<pre><code>for df in dataframes_list:
df.loc[df.species == "specC","perc_ID"]=100
</code></pre>
<p>this works. However, as I mentioned, I perform several of these <code>for loops</code> using <code>enumerate</code>. For some reason, if I remove <code>enumerate</code> in combination with <code>dataframes_list[i]</code> from all the for loops, my merging of the dataframes gets messed up. Therefore I would like to keep <code>enumerate</code> and <code>dataframes_list[i]</code> in all my <code>for loops</code>.</p>
<p>So my question: How can I change the value of a specific column, based on the value of another column, when looping over a list of dataframes?</p>
<p>So, how to write the following code, but keeping <code>enumerate</code> and <code>dataframes_list[i]</code>?</p>
<pre><code>for i,df in enumerate(dataframes_list):
dataframes_list[i]=df.loc[df.species == "specC","perc_ID"]=100
</code></pre>
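<p>For concreteness, a tiny self-contained sketch (toy frames with invented values) of a pattern that keeps <code>enumerate</code> and <code>dataframes_list[i]</code>: do the <code>.loc</code> update as its own statement, then store the frame back.</p>

```python
import pandas as pd

dataframes_list = [
    pd.DataFrame({"species": ["specA", "specC"], "perc_ID": [97.5, 93.0], "reads": [10, 20]}),
    pd.DataFrame({"species": ["specC", "specB"], "perc_ID": [91.2, 95.0], "reads": [5, 7]}),
]

for i, df in enumerate(dataframes_list):
    df.loc[df.species == "specC", "perc_ID"] = 100  # in-place update, no "=" chaining
    dataframes_list[i] = df                          # keep the list-assignment structure
```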
<p>Thanks!</p>
<p><strong>PS: there are likely way better methods to work with this kind of data (no for loops, no dataframe lists,..) However, I'm not a Python expert, and I understand what I write in this way. Therefore I like to keep this structure.</strong></p>
|
<python><pandas><dataframe><for-loop>
|
2023-03-05 14:55:00
| 1
| 1,302
|
RobH
|
75,643,059
| 683,367
|
How to yield into asyncio.run?
|
<p>I know I'm missing something basic here.</p>
<p><strong>Prerequisite</strong>: I'm not allowed to edit <code>'__main__'</code>.</p>
<p>Given that prerequisite, how can I print <code>x</code> as soon as <code>fun()</code> yields something? I know I can use <code>asyncio.gather</code>, but that would mean waiting for everything to complete.</p>
<pre><code>import asyncio
async def fun(n):
for i in range(n):
await asyncio.sleep(i*.1)
yield i
def main(n):
yield from asyncio.run(fun(n))
if __name__ == '__main__':
n = 5
for x in main(n):
print(x)
</code></pre>
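<p>For reference, one direction that seems to work without touching <code>'__main__'</code> (a sketch: drive the async generator manually with <code>run_until_complete</code>, yielding each item as soon as it is produced — note that <code>asyncio.run(fun(n))</code> fails anyway, because <code>fun(n)</code> is an async generator, not a coroutine):</p>

```python
import asyncio

async def fun(n):
    for i in range(n):
        await asyncio.sleep(i * 0.1)
        yield i

def main(n):
    # Drive the async generator step by step from sync code,
    # yielding each item the moment it is produced.
    loop = asyncio.new_event_loop()
    gen = fun(n)
    try:
        while True:
            try:
                yield loop.run_until_complete(gen.__anext__())
            except StopAsyncIteration:
                break
    finally:
        loop.close()
```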
|
<python><python-asyncio>
|
2023-03-05 14:50:59
| 0
| 3,102
|
Jibin
|
75,642,930
| 4,473,615
|
For loop in a single line and assign it to a variable in Python
|
<p>I need to assign the result of a for loop to a variable. It is easy to print the loop's output on a single line using <code>end=" "</code>, but I am unable to assign the result as a single value.</p>
<pre><code>a = [1, 2, 3, 4]
for i in range(4):
print(a[i], end =" ")
</code></pre>
<p>Result is <code>1 2 3 4 </code></p>
<p>But I have to assign the result to a variable as <code>var = 1234</code>.</p>
<p>I am trying the following, which is incorrect and does not provide the result:</p>
<pre><code>a = [1, 2, 3, 4]
for i in range(4):
var = print(a[i], end =" ")
print(var) # It provides the result in multiple line
</code></pre>
<p>What can I try next?</p>
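<p>A hedged sketch of the usual approach: build the string with <code>str.join</code> instead of capturing <code>print</code> (which always returns <code>None</code>), and convert to <code>int</code> if a number is wanted.</p>

```python
a = [1, 2, 3, 4]

var = "".join(str(x) for x in a)      # the string "1234"
spaced = " ".join(str(x) for x in a)  # the string "1 2 3 4"
num = int(var)                        # the integer 1234

print(var, spaced, num)
```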
|
<python>
|
2023-03-05 14:30:23
| 3
| 5,241
|
Jim Macaulay
|
75,642,865
| 6,275,705
|
How can I modify the ViT PyTorch transformer model for a regression task?
|
<p>How can I modify the ViT PyTorch transformer model for a regression task on datasets such as:</p>
<ol>
<li>Stock Prediction Dataset</li>
<li>Real Estate Price Prediction</li>
</ol>
<p>Does anyone have code for, or experience with, a transformer model for regression?</p>
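<p>Not ViT-specific, but a minimal sketch of the general idea for any transformer regression model: keep the encoder, pool the sequence, and replace the classification head with a single-output linear layer trained with an MSE-style loss (all layer sizes here are invented; with <code>timm</code>, <code>create_model(..., num_classes=1)</code> is a comparable shortcut for ViT itself):</p>

```python
import torch
import torch.nn as nn

class TransformerRegressor(nn.Module):
    """Toy transformer encoder with a 1-unit regression head."""
    def __init__(self, n_features, d_model=32, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)   # regression: single continuous output

    def forward(self, x):                   # x: (batch, seq_len, n_features)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1)).squeeze(-1)   # mean-pool over the sequence

model = TransformerRegressor(n_features=5)
pred = model(torch.randn(2, 7, 5))          # e.g. 7 time steps of 5 features each
```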
|
<python><regression><transformer-model>
|
2023-03-05 14:17:00
| 1
| 396
|
saifhassan
|
75,642,673
| 11,028,342
|
ImportError: cannot import name 'ViT' from 'tensorflow.keras.applications'
|
<p>How can I use Vision Transformer (ViT) models for image classification in TensorFlow? I'm unable to import them on my PC with TensorFlow (2.4.0) or in Google Colab (TensorFlow 2.11) with</p>
<pre><code>from tensorflow.keras.applications import ViT
</code></pre>
<p>The error below occurred while executing the code:</p>
<pre><code>ImportError: cannot import name 'ViT' from 'tensorflow.keras.applications'
</code></pre>
|
<python><tensorflow>
|
2023-03-05 13:46:10
| 2
| 547
|
Baya Lina
|
75,642,455
| 1,469,465
|
Django admin - building HTML page with 2,000 inlines is slow, while DB query is fast
|
<p><strong>Question in short</strong>: I have a model admin with tabular inlines. There are around 2,000 related records. Fetching them from the database takes only 1 ms, but then it takes 4-5 seconds to render them into an HTML page. What can I do to speed this up?</p>
<p><strong>Question in detail:</strong></p>
<p>I have the following (simplified) models:</p>
<pre><code>class Location(models.Model):
name = models.CharField(max_length=128, unique=True)
class Measurement(models.Model):
decimal_settings = {'decimal_places': 1, 'max_digits': 8, 'null': True, 'blank': True, 'default': None}
location = models.ForeignKey(Location, related_name='measurements', on_delete=models.CASCADE)
day = models.DateField() # DB indexing is done using the unique_together with location
temperature_avg = models.DecimalField(**decimal_settings)
temperature_min = models.DecimalField(**decimal_settings)
temperature_max = models.DecimalField(**decimal_settings)
feels_like_temperature_avg = models.DecimalField(**decimal_settings)
feels_like_temperature_min = models.DecimalField(**decimal_settings)
feels_like_temperature_max = models.DecimalField(**decimal_settings)
wind_speed = models.DecimalField(**decimal_settings)
precipitation = models.DecimalField(**decimal_settings)
precipitation_duration = models.DecimalField(**decimal_settings)
class Meta:
unique_together = ('day', 'location')
</code></pre>
<p>I have created the following inlines on the Location admin:</p>
<pre><code>class MeasurementInline(TabularInline):
model = Measurement
fields = ('day',
'temperature_avg', 'temperature_min', 'temperature_max',
'feels_like_temperature_avg', 'feels_like_temperature_min', 'feels_like_temperature_max',
'wind_speed', 'precipitation', 'precipitation_duration')
readonly_fields = fields
extra = 0
show_change_link = False
def has_add_permission(self, request, obj=None):
return False
</code></pre>
<p>When I open a Location in the admin panel, I get a nice-looking overview of all measurements.</p>
<p><a href="https://i.sstatic.net/u7z8e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u7z8e.png" alt="Location admin panel" /></a></p>
<p>However, this takes 4-5 seconds to load. I have installed Django Debug Toolbar to see what takes so much time. To my surprise, the time wasn't spent in the database, but on CPU.</p>
<p><a href="https://i.sstatic.net/R4FNg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R4FNg.png" alt="Django Debug Toolbar screenshot" /></a></p>
<p>When clicking for more details on CPU usage, I see the following. I find it hard to interpret. I assume it's taking a long time to build the HTML page from the template.</p>
<p><a href="https://i.sstatic.net/ODH2H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ODH2H.png" alt="CPU usage details from Django Debug Toolbar" /></a></p>
<p>Now my questions are:</p>
<ul>
<li>How can I easily paginate these inlines, so I don't need to show 2,000 at once? I tried to add <code>paginate_by = 100</code> to the class <code>MeasurementInline</code>, but I still see all 2,000 records.</li>
<li>How do I investigate further what takes so much CPU time to return the admin page?</li>
<li>What can I do to speed this up?</li>
</ul>
<p>p.s. I found <a href="https://stackoverflow.com/questions/12870302/django-query-fast-while-rendering-slow">this related question</a>, but there is no answer given there either.</p>
|
<python><django><django-templates><django-admin>
|
2023-03-05 13:13:13
| 1
| 6,938
|
physicalattraction
|
75,642,266
| 4,317,058
|
Default lib jars folder for Apache Toree kernel
|
<p>Say I want a default relative <code>lib</code> folder in my Jupyter notebook project directory where I can download custom jars, so that I can import them later without the <code>%addjar</code> magic.</p>
<p>I was under <a href="https://toree.incubator.apache.org/docs/current/user/installation/" rel="nofollow noreferrer">impression</a> I can do something like:</p>
<pre><code>"__TOREE_OPTS__": "--jar-dir=./lib/"
</code></pre>
<p>in <code>~/.local/share/jupyter/kernels/apache_toree_scala/kernel.json</code>,
but this doesn't work.</p>
<p>What am I missing?</p>
|
<python><scala><apache-spark><jupyter-notebook><apache-toree>
|
2023-03-05 12:38:44
| 2
| 25,529
|
Sergey Bushmanov
|
75,642,045
| 17,696,880
|
Separate this string using these separator elements but without removing them from the resulting strings
|
<pre class="lang-py prettyprint-override"><code>import re
input_string = "Sus cosas deben ser llevadas allí, ella tomo a sí, Lucy la hermana menor, esta muy entusiasmada. por verte hoy por la tarde\n sdsdsd"
#result_list = re.split(r"(?:.\s*\n|.|\n|;|,\s*[A-Z])", input_string)
result_list = re.split(r"(?=[.,;]|(?<=\s)[A-Z])", input_string)
print(result_list)
</code></pre>
<p>Separate the string <code>input_string</code> using these separators <code>r"(?:.\s*\n|.|\n|;|,\s*[A-Z])"</code> , but without removing them from the substrings of the resulting list.</p>
<p>When I use a positive lookahead assertion instead of a non-capturing group, it splits the input string at the positions immediately before the separators while keeping the separators in the substrings. But I get this wrong output list:</p>
<pre><code>['Sus cosas deben ser llevadas allí', ', ella tomo a sí', ', ', 'Lucy la hermana menor', ', esta muy entusiasmada', '. por verte hoy por la tarde\n sdsdsd']
</code></pre>
<p>How can I obtain this correct output list when printing:</p>
<pre><code>["Sus cosas deben ser llevadas allí, ella tomo a sí,", " Lucy la hermana menor, esta muy entusiasmada.", " por verte hoy por la tarde\n", " sdsdsd"]
</code></pre>
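<p>A zero-width pattern along these lines (a sketch; the rules may need tuning for other inputs) appears to produce exactly that list — split after a comma that is followed by a space and a capital letter, after a period followed by whitespace, and after a newline:</p>

```python
import re

input_string = ("Sus cosas deben ser llevadas allí, ella tomo a sí, "
                "Lucy la hermana menor, esta muy entusiasmada. "
                "por verte hoy por la tarde\n sdsdsd")

# lookbehind/lookahead only, so nothing is consumed and the
# separators stay attached to the preceding substring
pattern = r"(?<=,)(?=\s[A-Z])|(?<=\.)(?=\s)|(?<=\n)"
result_list = re.split(pattern, input_string)
print(result_list)
```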
|
<python><regex><split>
|
2023-03-05 11:55:52
| 2
| 875
|
Matt095
|
75,641,934
| 3,718,065
|
matplotlib fixed size when dragging RectangleSelector
|
<p>When I click the center of a <code>RectangleSelector</code> and drag it to the edges of the <code>axes</code>, the <strong>width</strong> or <strong>height</strong> changes (becomes smaller). This behavior differs from <code>EllipseSelector</code>, whose <strong>width</strong> and <strong>height</strong> don't change and which can be dragged outside the current <code>axes</code> view. So how can I keep the <code>RectangleSelector</code> size fixed when dragging?</p>
<p><a href="https://i.sstatic.net/b86VD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b86VD.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2023-03-05 11:35:53
| 2
| 791
|
sfzhang
|
75,641,925
| 6,494,707
|
How to deal with peptide sequences that have atypical amino acids in the sequences?
|
<p>I am not a bioinformatician and my question may sound basic.</p>
<p>I have some issues with RDKit.
<strong>The issue:</strong> some antimicrobial peptide sequences contain <code>X</code>, and RDKit does not seem to be able to process these cases. For example, for sequences such as
<code>seq = ['HFXGTLVNLAKKIL', 'HFLGXLVNLAKKIL', 'HFLGTLVNXAKKIL', 'fPVXLfPXXL', 'SRWPSPGRPRPFPGRPKPIFRPRPXNXYAPPXPXDRW', ...]</code>, <code>Chem.MolFromSequence(seq[i])</code> returns <code>None</code>.</p>
<p>My question is: how do I deal with this kind of sequence?</p>
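<p>One pragmatic direction (a sketch under the assumption that an approximate structure is acceptable — it does change the molecule): substitute the unknown residue <code>X</code> with a real amino acid such as glycine before calling <code>Chem.MolFromSequence</code>, or simply skip sequences for which it returns <code>None</code>. The choice of glycine here is my own arbitrary stand-in, not a property of the data.</p>

```python
def clean_sequence(seq):
    """Replace unknown residues (X/x) with glycine so RDKit can parse the peptide.

    NOTE: this is an approximation -- 'G' is an arbitrary stand-in chosen here,
    and the resulting molecule differs from the (unknown) original.
    """
    return seq.replace("X", "G").replace("x", "g")

print(clean_sequence("HFXGTLVNLAKKIL"))  # HFGGTLVNLAKKIL
```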
|
<python><bioinformatics><fingerprint><rdkit><cheminformatics>
|
2023-03-05 11:33:45
| 1
| 2,236
|
S.EB
|
75,641,662
| 12,902,027
|
Load csv file, using python
|
<pre><code>Beijing, China, [123,456,789,11]
Tokyo, Japan, [153,456,788,12]
Seoul, Korea, [144,559,363,211]
</code></pre>
<p>Using Python, I would like to load this format of CSV file. I know there are many ways to achieve this; please show me a reasonably clean one. I would especially like to know how to do it with the <code>pickle</code> module.</p>
<p>The points are...</p>
<ol>
<li>input file should be csv file, not json file.</li>
<li>the 3rd item is a sequence of numbers separated by ",", beginning with "[" and ending with "]".</li>
<li>output is not defined. I am thinking of dictionary with "Country", "City", and "Value" keys.</li>
</ol>
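<p>Since the bracketed list breaks naive comma-splitting, a regex per line seems simplest (sketch; <code>parse_line</code> and the key names are my own invention). The resulting list of dicts can then be persisted with <code>pickle.dump</code> / <code>pickle.load</code>.</p>

```python
import re

def parse_line(line):
    """Parse 'City, Country, [v1,v2,...]' into a dict (returns None on mismatch)."""
    m = re.match(r"\s*([^,]+),\s*([^,]+),\s*\[([^\]]*)\]", line)
    if not m:
        return None
    city, country, values = m.groups()
    return {"City": city.strip(), "Country": country.strip(),
            "Value": [int(v) for v in values.split(",")]}

lines = ["Beijing, China, [123,456,789,11]",
         "Tokyo, Japan, [153,456,788,12]",
         "Seoul, Korea, [144,559,363,211]"]
records = [parse_line(l) for l in lines]
```

<p>To persist: <code>pickle.dump(records, open("cities.pkl", "wb"))</code> and load them back with <code>pickle.load</code>.</p>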
|
<python><json><csv><pickle>
|
2023-03-05 10:44:50
| 1
| 301
|
agongji
|
75,641,472
| 19,694,624
|
How to expose FastAPI app to the outside world?
|
<p>I've built a simple rest api with just one POST endpoint and I want to send the link (i.e., <code>https://mylink/docs</code>) to a another person who will be sending a POST request to my application.</p>
<p>How do I send them the link? I'm running the app on <code>localhost</code> now obviously. I tried accessing <code>https://my_ip/docs</code> but it didn't work.</p>
|
<python><fastapi>
|
2023-03-05 10:11:37
| 0
| 303
|
syrok
|
75,641,459
| 10,535,123
|
What are Keyword-only fields in Python, and when should I use them?
|
<p>I'm having trouble understanding what their role is and when I would want to use them. I would appreciate an explanation with an example.</p>
<p>From the <a href="https://docs.python.org/3/whatsnew/3.10.html" rel="nofollow noreferrer">doc</a>: dataclasses now supports fields that are keyword-only in the generated <code>__init__</code> method. There are a number of ways of specifying keyword-only fields.</p>
<p>Example:</p>
<pre><code>import datetime
from dataclasses import dataclass
@dataclass(kw_only=True)
class Birthday:
name: str
birthday: datetime.date
</code></pre>
|
<python><python-dataclasses>
|
2023-03-05 10:09:04
| 0
| 829
|
nirkov
|
75,641,448
| 16,347,614
|
How to query nested object (with a particular key, value pair) in an array (in firebase) with python?
|
<p>I am new to Firebase and have started working with it. I have stored the data, and a document is in the following format:</p>
<pre class="lang-js prettyprint-override"><code>{
title: "Tera Hone Laga Hoon",
album: "Ajab Prem Ki Ghazab Kahani"
artists: [
{name: "Atif Aslam", thumbnail: "https://download.org/At98yvnsh-oid"},
{name: "Joi Barua", thumbnail: "https://download.org/Joi-0osmaa"}
]
}
</code></pre>
<p><strong>I want to write a query to select all documents having an artist with the name <code>Atif Aslam</code></strong></p>
<p>I have used the following code:</p>
<pre class="lang-py prettyprint-override"><code>db.collection("songs").where(
u"artists",
u"array_contains",
{"name": "Atif Aslam", "thumbnail": "https://download.org/At98yvnsh-oid"}
)
</code></pre>
<p><em>This code works fine, but the thumbnail url does not match for every case; the url for some cases is different.</em></p>
<p><strong>So, is there any way I could query throughout the array with a particular key, <code>like if the array contains an object of key-value pair of {name: "Atif Aslam"}</code></strong></p>
<p>Any help would be appreciated.</p>
|
<python><firebase><google-cloud-firestore><nosql>
|
2023-03-05 10:06:29
| 1
| 574
|
Harsh Narwariya
|
75,641,389
| 7,301,792
|
How to set ticks invisible but leave tick-labels visible
|
<p>I plot a clock panel as:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Create a polar coordinates
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111, polar=True)
# set ticks and labels
angles = np.radians(np.linspace(0, 360, 12, endpoint=False))
labels = [str(l) for l in range(12)]
ax.set_xticks(angles, labels)
ax.set_yticks([])
</code></pre>
<p>It display as:</p>
<p><a href="https://i.sstatic.net/iVfjo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVfjo.png" alt="enter image description here" /></a></p>
<p>I want to make the ticks invisible but leave the labels visible, so I tried adding the line <code>ax.set_xticks([])</code>.</p>
<p>Then all the ticks and tick-labels disappear.</p>
<p>How could I set ticks invisible but leave tick-labels visible?</p>
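<p>A sketch that appears to do this: keep the <code>set_xticks(angles, labels)</code> call and then zero out the tick-mark length with <code>tick_params</code>, which hides the marks but leaves the labels alone.</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for the sketch
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111, polar=True)

angles = np.radians(np.linspace(0, 360, 12, endpoint=False))
labels = [str(l) for l in range(12)]
ax.set_xticks(angles, labels)
ax.set_yticks([])

# Hide the tick marks themselves; the tick labels stay visible.
ax.tick_params(axis="x", which="both", length=0)
```

<p>(On a polar axis the marks may already be barely visible; <code>length=0</code> is the general-purpose switch that works on any axes.)</p>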
|
<python><matplotlib>
|
2023-03-05 09:55:51
| 1
| 22,663
|
Wizard
|
75,641,372
| 17,973,259
|
Pygame resizable window when game is not active
|
<p>How can I make the window in my game resizable only when the game is not active, i.e. when <code>self.stats.game_active</code> is <code>False</code> in my code?
This is how I'm setting up the window in the <code>__init__</code> of the main class:</p>
<pre><code>def __init__(self):
"""Initialize the game, and create game resources."""
pygame.init()
self.clock = pygame.time.Clock()
self.settings = Settings()
self.screen = pygame.display.set_mode((1250, 660), pygame.RESIZABLE)
self.settings.screen_width = self.screen.get_rect().width
self.settings.screen_height = self.screen.get_rect().height
</code></pre>
<p>And this is my run_game method:</p>
<pre><code>def run_game(self):
"""Main loop for the game."""
running = True
i = 0
while running:
if not self.paused: # check if the game is paused
if self.stats.level <= 1:
self.bg_img = self.reset_bg
elif self.stats.level == 4:
self.bg_img = self.second_bg
elif self.stats.level >= 6:
self.bg_img = self.third_bg
self.screen.blit(self.bg_img, [0,i])
self.screen.blit(self.bg_img, [0, i - self.settings.screen_height])
if i >= self.settings.screen_height:
i = 0
i += 1
self.check_events()
self._check_game_over()
if self.stats.game_active:
if self.stats.level >= 7:
self._create_asteroids()
self._update_asteroids()
self._check_asteroids_collisions()
self._create_power_ups()
self._update_power_ups()
self._create_alien_bullets(3)
self._update_alien_bullets()
self._check_alien_bullets_collisions()
self._check_power_ups_collisions()
self._update_bullets()
self._update_aliens()
self.first_player_ship.update()
self.second_player_ship.update()
self._shield_collisions(self.ships, self.aliens,
self.alien_bullet, self.asteroids)
self._update_screen()
self.clock.tick(60)
self._check_for_pause()
</code></pre>
<p>When self.stats.game_active is True, I want the window to not be resizable.</p>
<p>Alternatively, it might be better to offer a list of resolutions from which the player can choose, instead of letting the window be resizable.</p>
|
<python><pygame>
|
2023-03-05 09:51:49
| 3
| 878
|
Alex
|
75,641,287
| 18,877,953
|
Using the right api_version of the azure SDK
|
<p>I am trying to verify the existence of some resources in my tenant.
I use the <code>ResourceManagementClient</code> to do so, either the <code>check_existence_by_id</code> method or <code>get_by_id</code> as described in <a href="https://github.com/Azure/azure-sdk-for-python/issues/2808" rel="nofollow noreferrer">this</a> issue.</p>
<p>My script uses a constant api_version for now, "2022-09-01", which apparently isn't supported for every resource type, because I sometimes get a <code>NoRegisteredProviderFound</code> like this:</p>
<pre><code>HttpResponseError: (NoRegisteredProviderFound) No registered resource provider found for location 'westeurope' and API version '2022-03-01' for type 'firewallPolicies'. The supported api-versions are '2019-06-01, 2019-07-01, 2019-08-01, 2019-09-01, 2019-11-01, 2019-12-01, 2020-01-01, 2020-03-01, 2020-04-01, 2020-05-01, 2020-06-01, 2020-07-01, 2020-08-01, 2020-11-01, 2021-01-01, 2021-02-01, 2021-03-01, 2021-04-01, 2021-05-01, 2021-06-01, 2021-08-01, 2021-12-01, 2022-01-01, 2022-05-01, 2022-07-01, 2022-09-01'. The supported locations are 'qatarcentral, uaenorth, australiacentral2, uaecentral, germanynorth, centralindia, koreasouth, switzerlandnorth, switzerlandwest, japanwest, francesouth, southafricawest, westindia, canadaeast, southindia, germanywestcentral, norwayeast, norwaywest, southafricanorth, eastasia, southeastasia, koreacentral, brazilsouth, brazilsoutheast, westus3, jioindiawest, swedencentral, japaneast, ukwest, westus, eastus, northeurope, westeurope, westcentralus, southcentralus, australiaeast, australiacentral, australiasoutheast, uksouth, eastus2, westus2, northcentralus, canadacentral, francecentral, centralus'.
Code: NoRegisteredProviderFound
</code></pre>
<p>From what I've seen, the client interfaces remain similar across versions, but for some reason the Azure API service doesn't support every version for every resource type.</p>
<p>How can I determine a right version to use for each resource type during runtime? the list of resources I want to query is huge and has many types of resources.</p>
<p><strong>Edit:</strong> The answer that @SiddheshDesai provided works perfectly but if you face a similar issue when using the <code>ResourceMangamentClient</code> I also recommend trying the <code>ResourceGraphClient</code> from <code>azure.mgmt.resoucegraph</code>. It provides an interface that lets you query the same information very easily.</p>
|
<python><azure><azure-sdk><azure-sdk-python>
|
2023-03-05 09:36:28
| 1
| 780
|
LITzman
|
75,641,232
| 4,442,753
|
Numba: difference between using a factory function vs `cache=True`
|
<p>While looking into using (and improving the execution speed of) a jitted (numba) function that takes one or several other jitted functions as parameters, I came across the following in numba's FAQ:</p>
<pre><code>dispatching with arguments that are functions has extra overhead. If this matters for your application, you can also use a factory function to capture the function argument in a closure:
</code></pre>
<pre class="lang-py prettyprint-override"><code>def make_f(g):
# Note: a new f() is created each time make_f() is called!
@jit(nopython=True)
def f(x):
return g(x) + g(-x)
return f
f = make_f(jitted_g_function)
result = f(1)
</code></pre>
<p>Please, my questions are:</p>
<ul>
<li>What happens in the inside of numba then?</li>
<li>What is the difference with using <code>@njit(cache=True)</code>?</li>
<li>Ultimately, is it advised to use both, a factory function and <code>cache=True</code>?</li>
</ul>
<pre class="lang-py prettyprint-override"><code>def make_f(g):
@njit(cache=True)
def f(x):
return g(x) + g(-x)
return f
</code></pre>
<p>To provide a bit more insight into my actual use case: it is slightly more complex. The closure would look like this:</p>
<pre class="lang-py prettyprint-override"><code>from numba import njit, literal_unroll
from numpy import zeros
def make_f(tup):
@njit
def f(x):
# 'x' is a 2d array
res = zeros((3,x.shape[1]), dtype=x.dtype)
slice_start = 0
third_of_x = int(len(x)/3)
slice_ends = (third_of_x, third_of_x * 2, len(x))
for i in range(3):
slice_end = slice_ends[i]
for item in literal_unroll(tup):
cols, func = item
res[i, cols] = func(x[slice_start:slice_end,cols])
slice_start = slice_end
return res
return f
</code></pre>
<p>With <code>tup</code> being a tuple of (column indices, callable) tuples, for instance:</p>
<pre class="lang-py prettyprint-override"><code>tup=(
(np.array([0,2], dtype="int64"), np.sum),
(np.array([1,3], dtype="int64"), np.max)
)
</code></pre>
<p>I am aware that in the example it does not make much sense to slice <code>x</code> into 3 row-wise chunks.
But in my actual use case, each chunk is of a 'specific type'. The calculation for a chunk depends on the type of the previous chunk and on the type of the current chunk.
So I need to operate row chunk by row chunk, one after the other.</p>
<p>If looking at the 'real' code is necessary, I am speaking about the <a href="https://github.com/yohplala/oups/blob/0987194cb44dd48fd6faf622eb689cf3bba75355/oups/jcumsegagg.py#L220" rel="nofollow noreferrer"><code>jcsagg</code> function</a> (currently not in a factory function) within <a href="https://github.com/yohplala/oups" rel="nofollow noreferrer"><code>oups</code> project</a>.
<code>jcsagg()</code> is then called by the non-jitted <a href="https://github.com/yohplala/oups/blob/0987194cb44dd48fd6faf622eb689cf3bba75355/oups/cumsegagg.py" rel="nofollow noreferrer"><code>cumsegagg</code> function</a>.</p>
<p>Thanks for your advice!</p>
|
<python><caching><numba>
|
2023-03-05 09:23:26
| 1
| 1,003
|
pierre_j
|
75,640,960
| 5,695,336
|
Run a coroutine in the background from a non-async function
|
<p>Here is a simplified version of what my code is doing, hope it is self-explanatory enough.</p>
<pre><code>def config_changed(new_config: Any):
apply_config(new_config)
start()
asyncio.create_task(upload_to_db({"running": True}))
async def upload_to_db(data: Any):
await some_db_code(data)
def start():
asyncio.create_task(run())
async def run():
while True:
do_something_every_second()
await asyncio.sleep(1)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
some_db_client.listen_to_change(path_to_config, callback=config_changed)
loop.run_forever()
</code></pre>
<p>The code above runs into an error: <code>no running loop</code> at the line <code>asyncio.create_task(run())</code>. So I changed all <code>asyncio.create_task</code> into:</p>
<pre><code>try:
loop = asyncio.get_event_loop()
except RuntimeError:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.create_task(...)
</code></pre>
<p>Now the task never starts running. I confirmed <code>start</code> has been called.</p>
<p>I changed <code>loop.create_task(...)</code> into:</p>
<pre><code>task = loop.create_task(...)
loop.run_until_complete(task)
</code></pre>
<p>Now it will run, but <code>upload_to_db</code> was never called. It seems the first <code>run_until_complete</code> blocks the code below it.</p>
<p>What is the correct way of doing this?</p>
<p>I am aware that using threading can probably solve this, but I think this should be solvable without threading. After all, <code>asyncio</code> is meant to spare us from threading in these kinds of situations, right?</p>
<p><strong>Edit: add more context.</strong></p>
<p>The reason <code>start</code> needs to be non-async is that it is a function in an abstract class; in fact, all of these functions are members of that abstract class, but only <code>start</code> is abstract. Some child classes may use it to run an async function like my example above; others may use it to run a function that will call back in the future but is not itself async, for example:</p>
<pre><code>def start():
some_db_client.listen_to_change(path, callback=my_callback)
</code></pre>
<p>So <code>start</code> has to be non-async and use <code>asyncio.create_task</code> when the implementation requires async, in order to be compatible with both cases.</p>
<p>If not for that limitation, I can just call</p>
<pre><code>asyncio.gather(start(), upload_to_db({"running": True}))
</code></pre>
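<p>A runnable sketch of one direction: keep a reference to the loop and hand coroutines to it with <code>asyncio.run_coroutine_threadsafe</code> (safe even if the DB client fires the callback from its own thread; if the callback runs on the loop's own thread, <code>loop.call_soon_threadsafe(loop.create_task, coro)</code> also works). The thread below only stands in for the <code>loop.run_forever()</code> in my snippet so the sketch is self-contained:</p>

```python
import asyncio
import threading

results = []

async def upload_to_db(data):
    results.append(data)            # stand-in for the real DB write

def config_changed(loop, new_config):
    # Plain sync callback: schedule the coroutine on the already-running loop.
    return asyncio.run_coroutine_threadsafe(
        upload_to_db({"running": True}), loop)

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

future = config_changed(loop, None)
future.result(timeout=5)            # only for the demo; normally fire-and-forget
loop.call_soon_threadsafe(loop.stop)
```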
|
<python><python-asyncio>
|
2023-03-05 08:24:05
| 2
| 2,017
|
Jeffrey Chen
|
75,640,922
| 2,908,017
|
How to make a ReadOnly Edit in a Python FMX GUI App
|
<p>I made an <code>Edit</code> on a <code>Form</code> using the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX Python Library</a>, but how do I make an <code>Edit</code> that is read-only?</p>
<p>The user should be able to read text in the Edit, but not be able to enter their own text.</p>
<p>Here's my current code to make an <code>Edit</code>:</p>
<pre><code>self.edt = Edit(self)
self.edt.Parent = self
self.edt.Align = "Center"
self.edt.Width = 500
self.edt.Height = 100
self.edt.Text = "Hello World"
self.edt.StyledSettings = ""
self.edt.TextSettings.Font.Size = 50
</code></pre>
|
<python><user-interface><firemonkey>
|
2023-03-05 08:15:34
| 1
| 4,263
|
Shaun Roselt
|
75,640,866
| 2,908,017
|
How to add a border to a label in a Python VCL GUI App?
|
<p>I've made a simple <code>Hello World!</code> app in the <a href="https://github.com/Embarcadero/DelphiVCL4Python" rel="nofollow noreferrer">DelphiVCL GUI Library for Python</a>. "Hello World!" is shown on a <code>Label</code> on the <code>Form</code> as can be seen below with my code and screenshot:</p>
<pre><code>from delphivcl import *
class frmMain(Form):
def __init__(self, owner):
self.Caption = 'Hello World'
self.Width = 1000
self.Height = 500
self.myLabel = Label(self)
self.myLabel.Parent = self
self.myLabel.Caption = "Hello World!"
self.myLabel.Font.Size = 85
self.myLabel.Left = (self.Width - self.myLabel.Width) / 2
self.myLabel.Top = (self.Height - self.myLabel.Height) / 2
Application.Initialize()
Application.Title = 'Hello World'
MainForm = frmMain(Application)
MainForm.Show()
FreeConsole()
Application.Run()
</code></pre>
<p><a href="https://i.sstatic.net/Fxxgf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fxxgf.png" alt="Python GUI Hello World App" /></a></p>
<p>Is there a way to add a border around the <code>Label</code>? How would one do this?</p>
<p>I tried doing things like the following code, but it doesn't work:</p>
<pre><code>self.myLabel.Stroke.Kind = "Solid"
self.myLabel.Stroke.Color = "Black"
self.myLabel.Stroke.Thickness = 1
</code></pre>
<p>Is it possible to add a border around a <code>Label</code>?, if yes, then how?</p>
|
<python><user-interface><vcl>
|
2023-03-05 08:02:29
| 1
| 4,263
|
Shaun Roselt
|
75,640,849
| 13,412,418
|
How to Store file_paths for fast matching queries?
|
<p>I am trying to store the file paths of all files in a <strong>BUCKET</strong>. In my case a bucket can have millions of files. I have to display that folder structure in a UI for navigation.</p>
<pre><code>
storage_folder:
- bucket_1
- bucket_files.txt
- bucket_metadata.txt
- bucket_2
- bucket_files.txt
- bucket_metadata.txt
# bucket_file.txt contains
folder1/sub_folder1/file1.txt
folder1/sub_folder1/file2.zip
folder1/sub_folder2/file1.txt
folder2/.....
....
....
</code></pre>
<h3>Current approach:</h3>
<p>I will create a <code>.txt</code> file for each <strong>BUCKET</strong>, containing the absolute path of every file in that bucket. Then, whenever a query comes in, the whole file goes through lots of string matching, which is really slow.</p>
<h3>What I want:</h3>
<p>I want to store those files in a tree based structure for optimized queries.</p>
<h3>Type of queries:</h3>
<p>a. List the contents of a particular directory.
b. Prefix-search</p>
<p>Also, the file is not kept in memory between queries.</p>
<p>Is there any solution for this? For context, I am building my backend in Django.</p>
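<p>For the in-memory shape, a nested-dict trie built once per bucket makes both query types cheap (a sketch; the function names are my own). Persistence between requests — e.g. pickling the trie per bucket, or a process-level cache — is a separate question.</p>

```python
def build_tree(paths):
    """Build a nested-dict trie: one dict level per path component."""
    root = {}
    for path in paths:
        node = root
        for part in path.strip().split("/"):
            node = node.setdefault(part, {})
    return root

def list_dir(tree, dirname=""):
    """Query (a): immediate children of a directory."""
    node = tree
    for part in filter(None, dirname.split("/")):
        node = node.get(part, {})
    return sorted(node)

def prefix_search(tree, dirname, stem):
    """Query (b): children of dirname whose names start with stem."""
    return [name for name in list_dir(tree, dirname) if name.startswith(stem)]

paths = ["folder1/sub_folder1/file1.txt",
         "folder1/sub_folder1/file2.zip",
         "folder1/sub_folder2/file1.txt"]
tree = build_tree(paths)
```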
|
<python><django>
|
2023-03-05 07:59:02
| 1
| 1,888
|
Abhishek Prajapat
|
75,640,837
| 3,459,293
|
Databricks to SQL connection Error : [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 17 for SQL Server'
|
<p>I am trying to connect to Azure SQL from Databricks by using following</p>
<pre><code>import pyodbc
# Connect to Azure SQL database
server = 'xxxx.database.windows.net'
database = 'db-dev-xxxx'
username = 'abc'
password = 'xyz'
driver= '{ODBC Driver 17 for SQL Server}'
cnxn = pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password)
</code></pre>
<p>However, I get the following error</p>
<h3>Error</h3>
<pre><code>Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 17 for SQL Server': file not found (0) (SQLDriverConnect)")
Command took 0.19 seconds -- by g@xxx.onmicrosoft.com at 3/1/2023, 3:59:15 PM on g @xxx.onmicrosoft.com's Cluster
</code></pre>
<p>Any suggestions/advice would be appreciated.</p>
<p>Thanks in advance</p>
|
<python><sql-server><odbc><azure-databricks><databricks-sql>
|
2023-03-05 07:53:54
| 1
| 340
|
user3459293
|
75,640,781
| 1,872,639
|
How to access environment variables in dbt Python model
|
<ul>
<li><a href="https://docs.getdbt.com/docs/build/environment-variables" rel="nofollow noreferrer">https://docs.getdbt.com/docs/build/environment-variables</a></li>
<li><a href="https://docs.getdbt.com/reference/dbt-jinja-functions/env_var" rel="nofollow noreferrer">https://docs.getdbt.com/reference/dbt-jinja-functions/env_var</a></li>
</ul>
<p>These documents only explain how to access environment variables in a dbt SQL model.
How can I do it in a dbt Python model?</p>
|
<python><snowflake-cloud-data-platform><dbt>
|
2023-03-05 07:36:10
| 2
| 1,524
|
Yohei Onishi
|
75,640,477
| 3,783,002
|
How to better visualize python file structure with VS 2022
|
<p>In PyCharm, I can check the structure of my current file using the "Structure" window. The figure below shows this window circled in red.</p>
<p><a href="https://i.sstatic.net/IGHV3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IGHV3.png" alt="enter image description here" /></a></p>
<p>In Visual Studio 2022, all I could find was the drop down as circled in the figure below.</p>
<p><a href="https://i.sstatic.net/KCSuG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KCSuG.png" alt="enter image description here" /></a></p>
<p>I find this to be quite underwhelming. Is it possible to get a slightly more informative file structure in Visual Studio 2022 for the Python files I'm working on? If so how?</p>
|
<python><visual-studio><pycharm><ide><visual-studio-2022>
|
2023-03-05 06:01:14
| 1
| 6,067
|
user32882
|
75,640,466
| 11,586,653
|
Change plot axis with bioinfokit
|
<p>I have the following code which produces a volcano plot using Bioinfokit:</p>
<pre><code># pip install bioinfokit  (run in a shell or notebook cell, not as Python)
import pandas as pd
import numpy as np
import bioinfokit
from bioinfokit import analys, visuz
import random
lg2f = [random.uniform(-5,10) for i in range(0,4000)]
pad = [random.uniform(0.0000001,0.999) for i in range(0,4000)]
df1 = {'log2FoldChange':lg2f, 'padj':pad}
df1 = pd.DataFrame(df1)
bioinfokit.visuz.GeneExpression.volcano(df=df1, lfc='log2FoldChange', pv='padj', lfc_thr=(1.5,1.5), sign_line=True,plotlegend=True, legendpos='upper right', legendanchor=(1.46,1), dotsize=0.1, gfont=7,xlm=(-5.0,10.0,1), pv_thr=(0.05,0.05))
</code></pre>
<p>The following plot is produced:</p>
<p><a href="https://i.sstatic.net/tVgPR.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tVgPR.jpg" alt="enter image description here" /></a></p>
<p>However, I am trying to find a way to extend the X-axis so that it ranges from -10 to 10. Currently the x-axis extends from -5 to 9.</p>
<p>Please let me know if you have ideas. I tried adding a "fake" data point in which to extend the axis but this does not really work very well.</p>
<p>Thanks</p>
|
<python><pandas><plot><bioinformatics>
|
2023-03-05 05:57:41
| 1
| 471
|
Chip
|
75,640,303
| 1,039,860
|
problem setting column span (setSpan) in QTableWidget with Python
|
<p>I have my header information in HeaderCol objects (below) that hold the text, the actual row the text is to appear on (it is one of the first two rows) the start column, and the span (number of columns)</p>
<p>I have the following code:</p>
<pre><code>import sys

from PyQt5.QtWidgets import QTableWidget, QTableWidgetItem, QApplication, QDialog, QVBoxLayout, QHBoxLayout

class TenantEvent:
    TENANT_EVENTS = []

    def __init__(self, text):
        self.text = text
        TenantEvent.TENANT_EVENTS.append(self.text)

class HeaderCol:
    MAX_COL = 0
    HEADERS = [[]]

    def __init__(self, text, row, col, span=1, header_row=0):
        if row == 0:
            HeaderCol.MAX_COL += col
        self.text = text
        self.row = row
        self.col = col
        self.span = span
        if len(HeaderCol.HEADERS) <= header_row:
            HeaderCol.HEADERS.append([])
        HeaderCol.HEADERS[header_row].append(self)

class TransactionsTable(QTableWidget):
    HEADER_TOP_ROW = 0
    HEADER_SECOND_ROW = 1

    col = 0
    row = HEADER_TOP_ROW
    ENTRY_COL = HeaderCol('Entry', row, col, 2)
    col += ENTRY_COL.span
    DUE_COL = HeaderCol('Due', row, col, 4)
    col += DUE_COL.span
    PAID_COL = HeaderCol('Paid', row, col, 7)
    col += PAID_COL.span
    MANAGEMENT_COL = HeaderCol('Management', row, col, 2)
    col += MANAGEMENT_COL.span

    col = 0
    row = HEADER_SECOND_ROW
    DATE_COL = HeaderCol('Date', row, col, 1, 1)
    col += DATE_COL.span
    EVENT_COL = HeaderCol('Event', row, col, 1, 1)
    col += EVENT_COL.span
    FEES_AND_CHARGES_COL = HeaderCol('Fees & Charges', row, col, 1, 1)
    col += FEES_AND_CHARGES_COL.span
    RENT_COL = HeaderCol('Rent', row, col, 1, 1)
    col += RENT_COL.span
    TAX_DUE_COL = HeaderCol('Tax', row, col, 1, 1)
    col += TAX_DUE_COL.span
    TOTAL_DUE_COL = HeaderCol('Total Due', row, col, 1, 1)
    col += TOTAL_DUE_COL.span
    PAID_COL = HeaderCol('Paid', row, col, 1, 1)
    col += PAID_COL.span
    PAYMENT_METHOD_COL = HeaderCol('Payment Method', row, col, 1, 1)
    col += PAYMENT_METHOD_COL.span
    CHECK_NUMBER_COL = HeaderCol('Check Number', row, col, 1, 1)
    col += CHECK_NUMBER_COL.span
    DATE_BANKED_COL = HeaderCol('Date Banked', row, col, 1, 1)
    col += DATE_BANKED_COL.span
    RENT_COL = HeaderCol('Rent', row, col, 1, 1)
    col += RENT_COL.span
    TAX_PAID_COL = HeaderCol('Tax', row, col, 1, 1)
    col += TAX_PAID_COL.span
    TOTAL_PAID_COL = HeaderCol('Total Paid', row, col, 1, 1)
    col += TOTAL_PAID_COL.span
    TENANT_STATUS_COL = HeaderCol('Tenant Status', row, col, 1, 1)
    col += TENANT_STATUS_COL.span
    EXPENSE_COL = HeaderCol('Expense', row, col)
    col += EXPENSE_COL.span
    MANAGEMENT_FEE_COL = HeaderCol('Management Fee', row, col)
    HeaderCol.MAX_COL = col + MANAGEMENT_FEE_COL.span
    NET_COL = HeaderCol('Net', row, col)
    col += NET_COL.span
    NOTES_COL = HeaderCol('Notes', row, col)
    col += NOTES_COL.span

    FEE_DUE_EVENT = TenantEvent('Fee Due')
    MANAGEMENT_EXPENSE_EVENT = TenantEvent('Management Expense')
    RENT_DUE_EVENT = TenantEvent('Rent Due')
    LATE_FEE_EVENT = TenantEvent('Late Fee')
    RENT_PAID_EVENT = TenantEvent('Rent Paid')
    FEE_PAID_EVENT = TenantEvent('Fee Paid')
    BOUNCED_RENT_CHECK_EVENT = TenantEvent('Bounced Rent Check')
    BOUNCED_CHECK_FEE_EVENT = TenantEvent('Bounced Check Fee Due')
    REPAIRS_AND_MAINTENANCE_EVENT = TenantEvent('Repairs & Maintenance')

    def __init__(self, data, *args):
        QTableWidget.__init__(self, *args)
        self.verticalHeader().setVisible(False)
        self.horizontalHeader().setVisible(False)
        self.data = data
        # self.set_data()
        self.setRowCount(20)
        self.setColumnCount(HeaderCol.MAX_COL+30)
        self.resizeColumnsToContents()
        self.resizeRowsToContents()

    def add_headers(self):
        print(f'{"text":>15}\trow\tcol\tr_s\tc_s')
        for header_row in range(2):
            for header in HeaderCol.HEADERS[header_row]:
                row = header_row
                col = header.col
                row_span = 1
                col_span = header.span
                print(f'{header.text:>15}\t{row}\t{col}\t{row_span}\t{col_span}')
                self.setSpan(row, col, row_span, col_span)
                new_item = QTableWidgetItem(header.text)
                self.setItem(header_row, header.col, new_item)

class TransactionsDialog(QDialog):
    def __init__(self, data, *args):
        super().__init__()
        top_layout = QVBoxLayout()
        layout = QHBoxLayout()
        self.table = TransactionsTable(data, *args)
        layout.addWidget(self.table)
        top_layout.addLayout(layout)
        self.setLayout(top_layout)
        self.resize(2400, 600)

def main():
    app = QApplication(sys.argv)
    data = {'col1': ['1', '2', '3', '4'],
            'col2': ['1', '2', '1', '3'],
            'col3': ['1', '1', '2', '1']}
    dialog = TransactionsDialog(data, 4, 3)
    dialog.show()
    dialog.exec()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
</code></pre>
<p>The print output is this (which looks correct to me):</p>
<pre><code> text row col r_s c_s
Entry 0 0 1 2
Due 0 2 1 4
Paid 0 6 1 7
Management 0 13 1 2
Expense 0 14 1 1
Management Fee 0 15 1 1
Net 0 15 1 1
Notes 0 16 1 1
Date 1 0 1 1
Event 1 1 1 1
Fees & Charges 1 2 1 1
Rent 1 3 1 1
Tax 1 4 1 1
Total Due 1 5 1 1
Paid 1 6 1 1
Payment Method 1 7 1 1
Check Number 1 8 1 1
Date Banked 1 9 1 1
Rent 1 10 1 1
Tax 1 11 1 1
Total Paid 1 12 1 1
Tenant Status 1 13 1 1
</code></pre>
<p>The problem is that the cells are not laid out correctly and don't get the correct text:</p>
<p>The first entry and row are correct (Entry 0 0 1 2): the first two cells on the first row are spanned together. The problem is that the next entry (Due 0 2 1 4) merges all of the remaining cells on the first row, and I assume that because there are no more cells on the first row, the rest is distributed down the table. I have tried stopping after the second entry, but the problem of the span taking up all of the remaining cells on the first row is still there.</p>
<p><a href="https://i.sstatic.net/FfRLz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FfRLz.png" alt="enter image description here" /></a></p>
<p>I just tried the following code:</p>
<pre><code>def add_headers(self):
    row = 0
    col = 0
    row_span = 1
    col_span = 3
    self.setSpan(row, col, row_span, col_span)

    row = 0
    col = 4
    row_span = 1
    col_span = 3
    self.setSpan(row, col, row_span, col_span)
</code></pre>
<p>The first two cells were merged in row 1 but the next cells remained as normal. Arrggh!</p>
<p>OK, I boiled my problem into this:</p>
<pre><code>---------------------------------------------------------------------------------
! ! ! ! !
! ! ! ! !
! ! ! ! !
---------------------------------------------------------------------------------
! ! ! ! ! ! ! ! ! ! !
! col=0 ! col=1 ! col=2 ! col=3 ! col=4 ! col=5 ! col=6 ! col=7 ! col=8 ! col=9 !
! ! ! ! ! ! ! ! ! ! !
--------------------------------------------------------------------------------
! ! ! ! ! ! ! ! ! ! !
- ! ! ! ! ! ! ! ! ! !
- ! ! ! ! ! ! ! ! ! !
---------------------------------------------------------------------------------
row = 0
col = 0
row_span = 1
col_span = 1
self.setSpan(row, col, row_span, col_span)
new_item = QTableWidgetItem(f"setSpan({row}, {col}, {row_span}, {col_span})")
self.setItem(row, col, new_item)
col = 1
col_span = 2
self.setSpan(row, col, row_span, col_span)
new_item = QTableWidgetItem(f"setSpan({row}, {col}, {row_span}, {col_span})")
self.setItem(row, col, new_item)
col = 3
col_span = 3
self.setSpan(row, col, row_span, col_span)
new_item = QTableWidgetItem(f"setSpan({row}, {col}, {row_span}, {col_span})")
self.setItem(row, col, new_item)
col = 6
col_span = 4
self.setSpan(row, col, row_span, col_span)
new_item = QTableWidgetItem(f"setSpan({row}, {col}, {row_span}, {col_span})")
self.setItem(row, col, new_item)
return
</code></pre>
<p>This does not work at all:
<a href="https://i.sstatic.net/kSMQc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kSMQc.png" alt="enter image description here" /></a></p>
|
<python><qt><qtablewidget>
|
2023-03-05 05:04:39
| 1
| 1,116
|
jordanthompson
|
75,640,254
| 2,975,684
|
How can I serialize / deserialize private fields in pydantic models
|
<p>I'm using pydantic to model objects which are then serialized to JSON and persisted in MongoDB.
For better encapsulation, I want some fields to be private,
but I still want them to be serialized to JSON when saving to MongoDB, and then deserialized back from JSON when I fetch the object from the db.</p>
<p>How can this be done?</p>
<p>Example Model:</p>
<pre><code>from datetime import datetime
from pydantic import BaseModel

class MyModel(BaseModel):
    public_field: str
    _creation_time: str

    def __init__(self, public_field: str):
        super().__init__(public_field=public_field,
                         _creation_time=str(datetime.now()))

model = MyModel(public_field='foo')
json_str = model.json()
print(json_str)
</code></pre>
<p>The output of this code is:</p>
<pre><code>{"public_field": "foo"}
</code></pre>
<p>I would like it to be something like this:</p>
<pre><code>{"public_field": "foo", "_creation_time": "2023-03-03 09:43:47.796720"}
</code></pre>
<p>and then also to be able to deserialize the above json back with the private field populated</p>
|
<python><json><serialization><pydantic>
|
2023-03-05 04:48:07
| 1
| 443
|
Nir Brachel
|
75,640,162
| 16,853,253
|
Django showing error 'constraints' refers to the joined field
|
<p>I have two models, Product and Cart. The Product model has <code>maximum_order_quantity</code>. While updating the quantity in the cart, I have to check at database level whether the quantity is greater than <code>maximum_order_quantity</code>. For that, I am comparing quantity with <code>maximum_order_quantity</code> in the Cart model, but it throws an error when I try to migrate:</p>
<p><code>cart.CartItems: (models.E041) 'constraints' refers to the joined field 'product__maximum_order_quantity'</code>.</p>
<p>Below are my models</p>
<pre><code>class Products(models.Model):
    category = models.ForeignKey(
        Category, on_delete=models.CASCADE, related_name="products"
    )
    product_name = models.CharField(max_length=50, unique=True)
    base_price = models.IntegerField()
    product_image = models.ImageField(
        upload_to="photos/products", null=True, blank=True
    )
    stock = models.IntegerField(validators=[MinValueValidator(0)])
    maximum_order_quantity = models.IntegerField(null=True, blank=True)
</code></pre>
<pre><code>class CartItems(models.Model):
    cart = models.ForeignKey(Cart, on_delete=models.CASCADE)
    product = models.ForeignKey(Products, on_delete=models.CASCADE)
    quantity = models.IntegerField()

    class Meta:
        verbose_name_plural = "Cart Items"
        constraints = [
            models.CheckConstraint(
                check=models.Q(quantity__gt=models.F("product__maximum_order_quantity")),
                name="Quantity cannot be more than maximum order quantity"
            )
        ]
</code></pre>
<h1>Error</h1>
<pre><code>SystemCheckError: System check identified some issues:
ERRORS:
cart.CartItems: (models.E041) 'constraints' refers to the joined field 'product__maximum_order_quantity'.
</code></pre>
|
<python><django><django-models><foreign-keys><django-validation>
|
2023-03-05 04:17:54
| 1
| 387
|
Sins97
|
75,640,152
| 10,570,372
|
Child class with different signatures, how to reasonable resolve it without breaking the code?
|
<p>I am implementing machine learning algorithms from scratch using Python. I have a base class called <code>BaseEstimator</code> with the following structure:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations

from abc import ABC, abstractmethod
from typing import Optional, TypeVar

import numpy as np
import torch

T = TypeVar("T", np.ndarray, torch.Tensor)

class BaseEstimator(ABC):
    """Base Abstract Class for Estimators."""

    @abstractmethod
    def fit(self, X: T, y: Optional[T] = None) -> BaseEstimator:
        """Fit the model according to the given training data.

        Parameters
        ----------
        X : array-like, shape (n_samples, n_features)
            Training vector, where n_samples is the number of samples and
            n_features is the number of features.
        y : array-like, shape (n_samples,) or (n_samples, n_outputs), optional
            Target relative to X for classification or regression;
            None for unsupervised learning.

        Returns
        -------
        self : object
            Returns self.
        """

    @abstractmethod
    def predict(self, X: T) -> T:
        """Predict class labels for samples in X.

        Parameters
        ----------
        X : array-like, shape (n_samples, n_features)
            Samples.

        Returns
        -------
        C : array, shape (n_samples,)
            Predicted class label per sample.
        """

class KMeans(BaseEstimator):
    def fit(self, X: T) -> BaseEstimator:
        ...

    def predict(self, X: T) -> T:
        ...

class LogisticRegression(BaseEstimator):
    def fit(self, X: T, y: Optional[T] = None) -> BaseEstimator:
        ...

    def predict(self, X: T) -> T:
        ...
<p>Now when I implemented the base class, I did not plan properly, some algorithms such as <code>KMeans</code> are unsupervised and hence do not need <code>y</code> at all in <code>fit</code>. Now a quick fix I thought of is to type hint <code>y</code> as <code>Optional</code>, so that it can be <code>None</code>, is that okay? In that case, in <code>KMeans</code>' <code>fit</code> method, I will also have to include the <code>y: Optional[T] = None</code>, which will never be used.</p>
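<p>One common resolution (a sketch mirroring scikit-learn's convention, not a definitive design): let unsupervised estimators accept <code>y</code> with a default of <code>None</code> and simply ignore it, so every subclass keeps a signature compatible with the base class:</p>

```python
from __future__ import annotations

from abc import ABC, abstractmethod
from typing import Any, Optional

class BaseEstimator(ABC):
    @abstractmethod
    def fit(self, X: Any, y: Optional[Any] = None) -> "BaseEstimator":
        ...

class KMeans(BaseEstimator):
    # Unsupervised: y is accepted for interface compatibility and ignored,
    # which is the convention scikit-learn uses for unsupervised estimators.
    def fit(self, X: Any, y: Optional[Any] = None) -> "KMeans":
        self.n_samples_ = len(X)
        return self

km = KMeans().fit([[1.0], [2.0], [3.0]])
print(km.n_samples_)  # 3
```

<p>The unused parameter costs nothing at runtime and keeps pipelines that pass <code>y</code> unconditionally working with every estimator.</p>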
|
<python><oop><design-patterns>
|
2023-03-05 04:12:51
| 1
| 1,043
|
ilovewt
|
75,640,108
| 4,021,436
|
Why does pdb sometimes skip stepping into multi-part conditional?
|
<p>I have a Minimal Working Example code :</p>
<p><code>test.py :</code></p>
<pre><code>a = 4
b = 6
l = [4, 6, 7, 8]
for t in l :
    if t == a or t == b:
        continue
    print(t)
</code></pre>
<p>I am stepping through the code using <code>pdb</code> (<code>python-3.9.2</code>) :</p>
<pre><code>local: ~ $ python -m pdb test.py
> test.py(1)<module>()
-> a = 4
(Pdb) b 5
Breakpoint 1 at test.py:5
(Pdb) c
> test.py(5)<module>()
-> if t == a or t == b:
(Pdb) p t,a
(4, 4)
(Pdb) p t == a
True
(Pdb) t == a or t == b
True
(Pdb) n #### <<--- Conditional is True, why doesn't it explicitly step into it?
> test.py(4)<module>()
-> for t in l :
(Pdb) n
> test.py(5)<module>()
-> if t == a or t == b:
(Pdb) p t==a, t==b
(False, True)
(Pdb) t == a or t == b
True
(Pdb) n #### <<--- Conditional is True, and it explicitly steps into it
> test.py(6)<module>()
-> continue
(Pdb) t == a or t == b
True
</code></pre>
<p>QUESTION :</p>
<ol>
<li>Why does <code>pdb</code> explicitly step into the conditional (i.e. it explicitly goes to line 6, the <code>continue</code> statement) when <code>t==b</code> and not when <code>t==a</code>? Is this an optimization?</li>
</ol>
|
<python><pdb>
|
2023-03-05 03:59:27
| 1
| 5,207
|
irritable_phd_syndrome
|
75,640,079
| 4,212,875
|
Removing rows in a pandas dataframe after groupby based on number of elements in the group
|
<p>I'm stuck trying to figure out the following: Given a pandas dataframe, I would like to group by by one of the columns, remove the first row in each group if the group has less than <code>n</code> rows, but remove the first and last row in each group if the group has <code>n</code> or more rows. Is there an efficient way to achieve this?</p>
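<p>One possible direction (a sketch, assuming "first" and "last" mean positional rows within each group): apply a small trimming function per group with <code>groupby(...).apply</code>:</p>

```python
import pandas as pd

def trim_groups(df: pd.DataFrame, by: str, n: int) -> pd.DataFrame:
    # Per group: drop the first row; if the group has n or more rows,
    # also drop the last row.
    def trim(g: pd.DataFrame) -> pd.DataFrame:
        return g.iloc[1:-1] if len(g) >= n else g.iloc[1:]
    return df.groupby(by, group_keys=False).apply(trim)

df = pd.DataFrame({"k": ["a", "a", "b", "b", "b"], "v": [1, 2, 3, 4, 5]})
out = trim_groups(df, "k", n=3)
print(out)   # keeps v=2 (group "a", 2 rows) and v=4 (group "b", 3 rows)
```

<p><code>group_keys=False</code> keeps the original index instead of prepending the group key.</p>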
|
<python><pandas><dataframe>
|
2023-03-05 03:46:59
| 2
| 411
|
Yandle
|
75,640,046
| 14,328,098
|
How to use a fixed number of multi-processes in python?
|
<p>I want to analyze the results of different benchmark evaluations.
I have many benchmarks, and when running on the server I want to evaluate 10 at a time in parallel. In my Python script, there is a function that does the evaluation:</p>
<pre><code>def load_single_stat(benchmark, prefetcher, retry=False):
</code></pre>
<p>But each call blocks: execution continues only after the function returns.</p>
<p>I can write a shell script to run ten python scripts.</p>
<pre><code>for((i=0;i<${#PREFETCH_METHODS[@]};i++))
do
    for ((j=1; j<=$BENCHMARK_NUM; j++))
    do
        sleep 2
        array=($(ps -aux | grep -o ${PREFETCH_METHODS[i]}))
        echo ${#array[@]}
        while [ ${#array[@]} -ge 10 ]
        do
            sleep 60
            array=($(ps -aux | grep -o ${PREFETCH_METHODS[i]}))
        done
        cmd="python my_script.py ${PREFETCH_METHODS[i]} $BENCHMARK_NUM "
        $cmd &
    done
done
</code></pre>
<p>Can the above work be done in a Python script? I can use multiple processes to run functions, but I can't control the number of them running (the server has other users, and I don't want to take up all the resources).</p>
<p>How can I do it more efficiently?
Thanks</p>
|
<python><multiprocessing>
|
2023-03-05 03:35:00
| 1
| 816
|
Gerrie
|
75,640,043
| 308,827
|
Fill in using unique values in pandas groupby
|
<pre><code>df = pd.DataFrame({'name': ['A','A', 'B','B','B','B', 'C','C','C'], 'value': [1, np.nan, np.nan, 2, 2, 2, 3, np.nan, 3]})
name value
0 A 1
1 A NaN
2 B NaN
3 B 2
4 B 2
5 B 2
6 C 3
7 C NaN
8 C 3
</code></pre>
<p>In the dataframe above, I want to use groupby to fill in the missing values for each group (based on the <code>name</code> column) using the <code>unique</code> value for each group. It is guaranteed that each group will have a single unique value apart from NaNs. How do I do that?</p>
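<p>One possible approach (a sketch relying on the stated guarantee of a single non-NaN value per group): broadcast each group's first non-NaN value back over the group with <code>transform('first')</code>:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'name': ['A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'C'],
                   'value': [1, np.nan, np.nan, 2, 2, 2, 3, np.nan, 3]})

# GroupBy 'first' skips NaNs within each group, so the group's single
# non-NaN value is broadcast back over every row of that group.
df['value'] = df.groupby('name')['value'].transform('first')
print(df['value'].tolist())   # [1.0, 1.0, 2.0, 2.0, 2.0, 2.0, 3.0, 3.0, 3.0]
```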
|
<python><pandas>
|
2023-03-05 03:33:25
| 1
| 22,341
|
user308827
|
75,639,759
| 1,354,439
|
Parallelize AsyncGenerators
|
<p>I am looking for a <code>asyncio.create_task</code> equivalent for <code>AsyncGenerator</code>. I want the generator to already start executing in the background, without awaiting results explicitly.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>async def g():
    for i in range(3):
        await asyncio.sleep(1.0)
        yield i

async def main():
    g1 = g()
    g2 = g()
    t = time.time()
    async for i in g1:
        print(i, time.time() - t)
    async for i in g2:
        print(i, time.time() - t)
</code></pre>
<p>This takes 6 seconds to execute:</p>
<pre><code>0 1.001204013824463
1 2.0024218559265137
2 3.004373788833618
0 4.00572395324707
1 5.007828950881958
2 6.009296894073486
</code></pre>
<p>If both generators were executed in parallel, the total execution would take just ~3 seconds. What is the recommended approach here?</p>
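<p>One possible direction (a sketch, not the only recommended approach): wrap each generator so a background task starts pumping its items into a queue immediately, and iterate the queue instead. Shorter sleeps are used here so the demo runs quickly:</p>

```python
import asyncio
import time

async def g():
    for i in range(3):
        await asyncio.sleep(0.2)
        yield i

def eager(agen):
    # Start draining `agen` into a queue right away via a background task,
    # then re-yield the buffered items; a sentinel marks exhaustion.
    q: asyncio.Queue = asyncio.Queue()
    done = object()

    async def pump():
        async for item in agen:
            await q.put(item)
        await q.put(done)

    task = asyncio.ensure_future(pump())  # needs a running event loop

    async def drain():
        while True:
            item = await q.get()
            if item is done:
                await task  # surface any exception from the producer
                return
            yield item

    return drain()

async def main():
    g1, g2 = eager(g()), eager(g())  # both start producing immediately
    t = time.time()
    out = []
    async for i in g1:
        out.append(i)
    async for i in g2:
        out.append(i)
    return out, time.time() - t

out, elapsed = asyncio.run(main())
print(out, round(elapsed, 1))   # roughly 0.6s instead of the sequential 1.2s
```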
|
<python><asynchronous><python-asyncio><generator>
|
2023-03-05 01:43:16
| 2
| 5,979
|
Piotr Dabkowski
|
75,639,679
| 3,566,606
|
Type hint for a cast-like function that raises if casting is not possible
|
<p>I am having a function <code>safe_cast</code> which casts a value to a given type, but raises if the value fails to comply with the type at runtime:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, TypeVar, cast

T = TypeVar('T')

def safe_cast(t: type[T], value: Any) -> T:
    if isinstance(value, t):
        return cast(T, value)
    raise TypeError()
</code></pre>
<p>This works nicely with primitive types. But I run into problems if I want to <code>safe_cast</code> against a UnionType:</p>
<pre class="lang-py prettyprint-override"><code>string = "string"
casted: str | int = safe_cast(str | int, string)
</code></pre>
<p>The instance check works with a union type. But my solution does not work, because mypy gives me</p>
<pre><code>error: Argument 1 to "safe_cast" has incompatible type "UnionType"; expected "Type[<nothing>]" [arg-type]
</code></pre>
<p>I figure that <code><nothing></code> refers to the unspecified type variable <code>T</code> here. I also figure that apparently mypy cannot resolve <code>Union[str, int]</code> to <code>Type[T]</code>. My question is: How can I solve this?</p>
<p>I looked into creating an overload for the UnionType. IIUC, in order to write the overload, I would need to create a Generic Union Type with a variadic number of arguments. I failed to get this done.</p>
<p>Is this the right direction? If yes, how do I get it done? If no, how can I solve my problem with <code>safe_cast</code>ing Union types?</p>
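<p>One runtime-level workaround (a sketch that sidesteps, rather than solves, the mypy typing question): type the first parameter loosely and rely on <code>isinstance()</code>'s existing support for union types (Python 3.10+) and tuples of types. The return type then loses its inference, which is the trade-off:</p>

```python
from typing import Any

def safe_cast(t: Any, value: Any) -> Any:
    # Runtime-only sketch: isinstance() already accepts `str | int` on
    # Python 3.10+ and tuples of types everywhere, so typing `t` loosely
    # avoids the UnionType complaint at the cost of the inferred return
    # type -- the static-typing question itself stays open.
    if isinstance(value, t):
        return value
    raise TypeError(f"{value!r} is not an instance of {t}")

print(safe_cast(str, "string"))
print(safe_cast((str, int), 7))   # tuple form works on all versions
try:
    safe_cast(int, "string")
except TypeError as exc:
    print("raised:", exc)
```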
|
<python><casting><mypy><python-typing>
|
2023-03-05 01:16:08
| 2
| 6,374
|
Jonathan Herrera
|
75,639,455
| 16,009,435
|
Modify the value of a multi-nested dictionary
|
<p>What is the best way to access a multi-nested dictionary by its value and modify that value? The value must be modified based on its current value. To explain better: the example array <code>myArr</code> below has a nested dictionary with value <code>c2</code>; how can I access that dictionary and change <code>c2</code> into <code>c2_edit</code>? The final output should look like <code>editedArr</code>. What is the most efficient way to achieve this for a large array? Thanks in advance.</p>
<pre><code>myArr = [{
"a1": [{
"b1": "c1"
}, {
"b2": "c2"
}]
}, {
"a2": [{
"b3": "c3"
}, {
"b4": "c4"
}]
}]
#expected output
editedArr = myArr = [{
"a1": [{
"b1": "c1"
}, {
"b2": "c2_edit"
}]
}, {
"a2": [{
"b3": "c3"
}, {
"b4": "c4"
}]
}]
</code></pre>
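<p>One straightforward approach (a sketch, assuming the structure is arbitrarily nested dicts and lists): recurse through the container and rewrite matching values in place:</p>

```python
def replace_value(obj, old, new):
    # Walk nested lists/dicts in place and rewrite every occurrence of `old`.
    if isinstance(obj, dict):
        for k, v in obj.items():
            if v == old:
                obj[k] = new
            else:
                replace_value(v, old, new)
    elif isinstance(obj, list):
        for item in obj:
            replace_value(item, old, new)

myArr = [{"a1": [{"b1": "c1"}, {"b2": "c2"}]},
         {"a2": [{"b3": "c3"}, {"b4": "c4"}]}]
replace_value(myArr, "c2", "c2_edit")
print(myArr[0]["a1"][1])   # {'b2': 'c2_edit'}
```

<p>This is linear in the total number of elements, which is hard to beat without an auxiliary index from values to their containing dicts.</p>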
|
<python>
|
2023-03-05 00:04:25
| 2
| 1,387
|
seriously
|
75,639,294
| 2,908,017
|
How to add a border to a label in a Python FMX GUI App?
|
<p>I've made a simple <code>Hello World!</code> app in the <a href="https://github.com/Embarcadero/DelphiFMX4Python" rel="nofollow noreferrer">DelphiFMX GUI Library for Python</a>. "Hello World!" is shown on a <code>Label</code> on the <code>Form</code> as can be seen below with my code and screenshot:</p>
<pre><code>from delphifmx import *

class frmMain(Form):
    def __init__(self, owner):
        self.Caption = 'Hello World'
        self.Width = 1000
        self.Height = 500
        self.Position = "ScreenCenter"

        self.myLabel = Label(self)
        self.myLabel.Parent = self
        self.myLabel.Text = "Hello World!"
        self.myLabel.Align = "Client"
        self.myLabel.StyledSettings = ""
        self.myLabel.TextSettings.Font.Size = 85
        self.myLabel.TextSettings.HorzAlign = "Center"

Application.Initialize()
Application.Title = "My Application"
Application.MainForm = frmMain(Application)
Application.MainForm.Show()
Application.Run()
Application.MainForm.Destroy()
</code></pre>
<p><a href="https://i.sstatic.net/PXUmG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PXUmG.png" alt="Python GUI Hello World App" /></a></p>
<p>Is there a way to add a border around the <code>Label</code>? How would one do this?</p>
<p>I tried doing things like the following code, but it doesn't work:</p>
<pre><code>self.myLabel.Stroke.Kind = "Solid"
self.myLabel.Stroke.Color = "Black"
self.myLabel.Stroke.Thickness = 1
</code></pre>
<p>Is it possible to add a border around a <code>Label</code>?, if yes, then how?</p>
|
<python><user-interface><firemonkey>
|
2023-03-04 23:21:46
| 1
| 4,263
|
Shaun Roselt
|
75,639,160
| 2,908,017
|
What is the absolute simplest way to make a Python FMX GUI App?
|
<p>I've got the following Python code to make the FMX GUI Form, but I'm trying to make the code shorter if possible. What is the least amount of code that is required to make only a <code>Form</code> and show it. Here's my current code:</p>
<pre><code>from delphifmx import *

class frmMain(Form):
    def __init__(self, owner):
        self.Width = 300
        self.Height = 150

Application.Initialize()
Application.MainForm = frmMain(Application)
Application.MainForm.Show()
Application.Run()
Application.MainForm.Destroy()
</code></pre>
<p><a href="https://i.sstatic.net/pUgtQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pUgtQ.png" alt="Python GUI Empty Form" /></a></p>
<p>Is there a way to decrease the code? Maybe a shorter way to write the same code?</p>
|
<python><user-interface><firemonkey>
|
2023-03-04 22:49:26
| 1
| 4,263
|
Shaun Roselt
|
75,639,149
| 661,720
|
Create a widget that acts as a terminal
|
<p>I need to write a Streamlit app that acts as a Terminal, meaning that it behaves exactly like the terminal would, including by asking the user for input() when required by the script.</p>
<p>The script "danse_macabre.py" is a command line text RPG, so the user needs to give regular inputs and get responses from the system. It works well in the Terminal but I wanted to make a web version of it using Streamlit, hence my question.</p>
<p>Here is what I have so far, but it doesn't work; it keeps loading forever.</p>
<p>Can you help?</p>
<pre><code>import streamlit as st
import subprocess

st.title("Command-line interface")

# Define the command to be executed
cmd = ["python", "danse_macabre.py"]

# Create a subprocess object
process = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)

# Loop until the Python program exits
while process.poll() is None:
    # Check if the program has output
    output = process.stdout.readline()
    if output:
        st.text(output.strip())

    # Check if the program has an error
    error = process.stderr.readline()
    if error:
        st.error(error.strip())

    # Get user input using a Streamlit text_input component
    user_input = st.text_input("Input", "")

    # Pass user input to the Python program
    if user_input:
        process.stdin.write(user_input + "\n")
        process.stdin.flush()
</code></pre>
|
<python><subprocess><streamlit>
|
2023-03-04 22:46:54
| 1
| 1,349
|
Taiko
|
75,639,090
| 3,316,136
|
Reading a CSV file with quotation and backslashes not working correctly
|
<p>I am trying to read a <strong>CSV file</strong> containing the following lines into a Pandas DataFrame:</p>
<pre><code>103028,"Kinokompaniya \"Ego Production\"",[ru],,K5251,K5251,ef6ba1a20ed58265766c35d9e2823f17
60985,"Studio \"Orlenok\", Central Television USSR",[ru],,S3645,S3645,909356683cb8bb5f9872a7f34242b81f
159429,TBWA\CHIAT\DAY,[us],,T123,T1232,e59c9a8f96296cf1418fd92777a5f543
82924,"\"I, of the Craft...\"",[us],,I1326,I1326,3e798b706075164fb6b21cbf20e472d2
130274,"Producentska grupa \"Most\", Zagreb",,,P6325,,1e9cf3da625add311321a8cab69458df
</code></pre>
<p>However, I am experiencing problems regarding the quotation of the strings and the backslashes.</p>
<p>I have tried adding the <code>quotechar='"'</code> and <code>escapechar='\\'</code> arguments to the <code>read_csv</code> function call. However, in this case the back slashes from <code>TBWA\CHIAT\DAY</code> were removed, which is not desired.</p>
<p>Here you can see the output of the whole DataFrame for this case, which would be correct with the exception of line with index 2.</p>
<pre><code> id name country_code imdb_id name_pcode_nf name_pcode_sf md5sum
0 103028 Kinokompaniya "Ego Production" [ru] NaN K5251 K5251 ef6ba1a20ed58265766c35d9e2823f17
1 60985 Studio "Orlenok", Central Television USSR [ru] NaN S3645 S3645 909356683cb8bb5f9872a7f34242b81f
2 159429 TBWACHIATDAY [us] NaN T123 T1232 e59c9a8f96296cf1418fd92777a5f543
3 82924 "I, of the Craft..." [us] NaN I1326 I1326 3e798b706075164fb6b21cbf20e472d2
4 130274 Producentska grupa "Most", Zagreb NaN NaN P6325 NaN 1e9cf3da625add311321a8cab69458df
</code></pre>
<p>Without adding these two arguments, I get the error:</p>
<blockquote>
<p>pandas.errors.ParserError: Error tokenizing data. C error: Expected 7 fields in line 2, saw 8</p>
</blockquote>
<p>This is my current code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from io import StringIO
data = '''103028,"Kinokompaniya \\"Ego Production\\"",[ru],,K5251,K5251,ef6ba1a20ed58265766c35d9e2823f17
60985,"Studio \\"Orlenok\\", Central Television USSR",[ru],,S3645,S3645,909356683cb8bb5f9872a7f34242b81f
159429,TBWA\CHIAT\DAY,[us],,T123,T1232,e59c9a8f96296cf1418fd92777a5f543
82924,"\\"I, of the Craft...\\"",[us],,I1326,I1326,3e798b706075164fb6b21cbf20e472d2
130274,"Producentska grupa \\"Most\\", Zagreb",,,P6325,,1e9cf3da625add311321a8cab69458df
'''
print(data)
</code></pre>
<p>How can I ensure that all lines are correctly read into the DataFrame?</p>
|
<python><pandas>
|
2023-03-04 22:33:58
| 2
| 333
|
Alexander
|
75,639,058
| 198,301
|
Creating an OFX file which an be opened by Quicken (v6.12.3) for macOS
|
<p>I have some investment accounts which I am tracking with Quicken for <em>macOS</em>, using the Quicken feature to auto-download the status of each account. However, the company does not support downloading the individual transactions, which I would also like to track inside of Quicken, and I do not want to enter these transactions manually. They do allow me to download the transactions as a CSV file, so I had the idea of converting that CSV file to OFX, which Quicken should be able to import. However, the OFX file I am creating is not valid, and I am not sure how to fix it so Quicken will import it.</p>
<p>The data below is mocked, but is same format.</p>
<p>When I try to import the OFX file, the error I get from Quicken is:</p>
<pre><code>This FI is inactive, we cannot connect.
</code></pre>
<p>If I remove the FI block, I get the error:</p>
<pre><code>Unable to read the selected Web Connect file.
</code></pre>
<p>How can I change the OFX file?</p>
<p>The CSV file is:</p>
<pre><code>
Brokerage
Run Date,Action,Symbol,Security Description,Security Type,Quantity,Price ($),Commission ($),Fees ($),Accrued Interest ($),Amount ($),Settlement Date
01/02/2023,YOU BOUGHT,A,AGILENT TECHNOLOGIES INC,Cash,42,84,,,,-3528.00,01/03/2023
01/03/2023, YOU BOUGHT,AA,ALCOA CORPORATION,Cash,43,86,,,,-3698.00,01/04/2023
01/04/2023, YOU BOUGHT,AAC,ARES ACQUISITION CORP,Cash,44,88,,,,-3872.00,01/05/2023
</code></pre>
<p>The OFX file I am creating is:</p>
<pre><code><?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?OFX OFXHEADER="200" VERSION="220" SECURITY="NONE" OLDFILEUID="NONE" NEWFILEUID="NONE"?>
<OFX>
<SIGNONMSGSRSV1>
<SONRS>
<STATUS>
<CODE>0</CODE>
<SEVERITY>INFO</SEVERITY>
</STATUS>
<DTSERVER>20230304170917.174[+0:UTC]</DTSERVER>
<LANGUAGE>ENG</LANGUAGE>
<FI>
<ORG>Investments</ORG>
<FID>1234</FID>
</FI>
</SONRS>
</SIGNONMSGSRSV1>
<INVSTMTMSGSRSV1>
<INVSTMTTRNRS>
<TRNUID>0</TRNUID>
<STATUS>
<CODE>0</CODE>
<SEVERITY>INFO</SEVERITY>
</STATUS>
<INVSTMTRS>
<DTASOF>20230304170917.178[+0:UTC]</DTASOF>
<CURDEF>USD</CURDEF>
        <INVACCTFROM>
          <BROKERID>investments.com</BROKERID>
          <ACCTID>Y12345678</ACCTID>
        </INVACCTFROM>
        <INVTRANLIST>
          <DTSTART>20230102000000.000[+0:UTC]</DTSTART>
          <DTEND>20230104000000.000[+0:UTC]</DTEND>
          <BUYOTHER>
            <INVBUY>
              <INVTRAN>
                <FITID>725bb8d9-ef22-4b9b-9214-8080a4a59ddb</FITID>
                <DTTRADE>20230102000000.000[+0:UTC]</DTTRADE>
                <DTSETTLE>20230103000000.000[+0:UTC]</DTSETTLE>
              </INVTRAN>
              <SECID>
                <UNIQUEID>100000000</UNIQUEID>
                <UNIQUEIDTYPE>OTHER</UNIQUEIDTYPE>
              </SECID>
              <UNITS>42</UNITS>
              <UNITPRICE>84</UNITPRICE>
              <TOTAL>3528</TOTAL>
              <SUBACCTSEC>CASH</SUBACCTSEC>
              <SUBACCTFUND>CASH</SUBACCTFUND>
            </INVBUY>
          </BUYOTHER>
          <BUYOTHER>
            <INVBUY>
              <INVTRAN>
                <FITID>a04dad7f-7374-4920-bf1c-ed90bdef5991</FITID>
                <DTTRADE>20230103000000.000[+0:UTC]</DTTRADE>
                <DTSETTLE>20230104000000.000[+0:UTC]</DTSETTLE>
              </INVTRAN>
              <SECID>
                <UNIQUEID>100000001</UNIQUEID>
                <UNIQUEIDTYPE>OTHER</UNIQUEIDTYPE>
              </SECID>
              <UNITS>43</UNITS>
              <UNITPRICE>86</UNITPRICE>
              <TOTAL>3698</TOTAL>
              <SUBACCTSEC>CASH</SUBACCTSEC>
              <SUBACCTFUND>CASH</SUBACCTFUND>
            </INVBUY>
          </BUYOTHER>
          <BUYOTHER>
            <INVBUY>
              <INVTRAN>
                <FITID>f2a9db1d-22e8-43ea-aab9-ad9c27e40cee</FITID>
                <DTTRADE>20230104000000.000[+0:UTC]</DTTRADE>
                <DTSETTLE>20230105000000.000[+0:UTC]</DTSETTLE>
              </INVTRAN>
              <SECID>
                <UNIQUEID>100000002</UNIQUEID>
                <UNIQUEIDTYPE>OTHER</UNIQUEIDTYPE>
              </SECID>
              <UNITS>44</UNITS>
              <UNITPRICE>88</UNITPRICE>
              <TOTAL>3872</TOTAL>
              <SUBACCTSEC>CASH</SUBACCTSEC>
              <SUBACCTFUND>CASH</SUBACCTFUND>
            </INVBUY>
          </BUYOTHER>
        </INVTRANLIST>
      </INVSTMTRS>
    </INVSTMTTRNRS>
  </INVSTMTMSGSRSV1>
  <SECLISTMSGSRSV1>
    <SECLIST>
      <MFINFO>
        <SECINFO>
          <SECID>
            <UNIQUEID>100000000</UNIQUEID>
            <UNIQUEIDTYPE>OTHER</UNIQUEIDTYPE>
          </SECID>
          <SECNAME>AGILENT TECHNOLOGIES INC</SECNAME>
          <TICKER>A</TICKER>
        </SECINFO>
      </MFINFO>
      <MFINFO>
        <SECINFO>
          <SECID>
            <UNIQUEID>100000001</UNIQUEID>
            <UNIQUEIDTYPE>OTHER</UNIQUEIDTYPE>
          </SECID>
          <SECNAME>ALCOA CORPORATION</SECNAME>
          <TICKER>AA</TICKER>
        </SECINFO>
      </MFINFO>
      <MFINFO>
        <SECINFO>
          <SECID>
            <UNIQUEID>100000002</UNIQUEID>
            <UNIQUEIDTYPE>OTHER</UNIQUEIDTYPE>
          </SECID>
          <SECNAME>ARES ACQUISITION CORP</SECNAME>
          <TICKER>AAC</TICKER>
        </SECINFO>
      </MFINFO>
    </SECLIST>
  </SECLISTMSGSRSV1>
</OFX>
</code></pre>
<p>The python code I wrote to do the conversion is:</p>
<pre><code>from ofxtools.models import *
from ofxtools.Types import *
from ofxtools.utils import UTC
from decimal import Decimal
from datetime import datetime
from pprint import pprint
from ofxtools.header import make_header
import xml.etree.ElementTree as ET
import csv
import uuid
import os
import re

PATH = "History_for_Account_Y12345678.csv"
OUT_PATH = "History_for_Account_Y12345678.ofx"
HEADER = ['Run Date', 'Action', 'Symbol', 'Security Description', 'Security Type', 'Quantity', 'Price ($)', 'Commission ($)', 'Fees ($)', 'Accrued Interest ($)', 'Amount ($)', 'Settlement Date' ]

filename = os.path.basename( PATH )
filename = re.search( r".*_(.*)\.csv", filename )
acctid = filename.group(1)

def validate_file( lines ):
    if lines[3] != ['Brokerage']:
        print( "[!] Fourth line does not contain Brokerage" )
        print( lines[3] )
        return False
    fileHeader = lines[5]
    if len( HEADER ) != len( fileHeader ):
        print( "[!] Header Length Mismatch" )
        return False
    for column in HEADER:
        if column not in fileHeader:
            print( f"[!] Header Column Not Found: {column}" )
            return False
    return True

def extract_unique_securities( lines ):
    lines = lines[1:]
    uniqueSecurities = set()
    identifier = 100000000
    for line in lines:
        if line[3].strip() != 'No Description':
            uniqueSecurities.add( ( line[2].strip(), line[3].strip(), identifier ) )
            identifier = identifier + 1
    uniqueSecurities = list( uniqueSecurities )
    securityMap = {}
    for security in uniqueSecurities:
        securityMap[ security[0] ] = security
    # pprint( securityMap )
    return securityMap

def make_security_list_message_set_response_messages( securityMap ):  # SECLISTMSGSRSV1
    messages = []
    securityList = []
    for security in securityMap.values():
        secid = SECID( uniqueid = str( security[2] ), uniqueidtype = 'OTHER' )
        secname = security[1]
        ticker = security[0]
        secinfo = SECINFO( secid = secid, secname = secname, ticker = ticker )
        mfinfo = MFINFO( secinfo = secinfo )
        securityList.append( mfinfo )
    seclist = SECLIST( *securityList )
    messages = SECLISTMSGSRSV1( seclist )
    return messages

def make_investment_statement_message_set_response_messages( securityMap, transactions ):
    transactionList = []
    response = None
    trnuid = "0"
    status = STATUS( code = 0, severity = 'INFO' )
    startDate = datetime( 3000, 1, 1, tzinfo = UTC )
    endDate = datetime( 1970, 1, 1, tzinfo = UTC )
    for transaction in transactions[1:]:
        transactionDate = datetime.strptime( transaction[0].strip(), "%m/%d/%Y" ).replace( tzinfo = UTC )
        if startDate > transactionDate:
            startDate = transactionDate
        if endDate < transactionDate:
            endDate = transactionDate
        description = transaction[1].strip()
        symbol = transaction[2].strip() if len( transaction[2].strip() ) > 0 else None
        securityType = transaction[4].strip() if len( transaction[4].strip() ) > 0 else None
        quantity = float( transaction[5].strip() ) if len( transaction[5].strip() ) > 0 else None
        price = float( transaction[6].strip() ) if len( transaction[6].strip() ) > 0 else None
        fee = float( transaction[8].strip() ) if len( transaction[8].strip() ) > 0 else None
        amount = float( transaction[10].strip() ) if len( transaction[10].strip() ) > 0 else None
        settlementDate = datetime.strptime( transaction[11].strip(), "%m/%d/%Y" ).replace( tzinfo = UTC ) if len( transaction[11].strip() ) > 0 else None
        # print( f"{transactionDate} {symbol} {quantity} {price} {fee} {amount} {settlementDate}" )
        if symbol and amount and quantity and price and securityType == 'Cash':
            if amount < 0:
                invtran = INVTRAN( fitid = str( uuid.uuid4() ), dttrade = transactionDate, dtsettle = settlementDate )
                secid = SECID( uniqueid = str( securityMap[ symbol ][2] ), uniqueidtype = 'OTHER' )
                units = quantity
                unitprice = price
                fees = fee
                total = amount * -1
                subacctsec = 'CASH'
                subacctfund = 'CASH'
                invbuy = INVBUY( invtran = invtran, secid = secid, units = units, unitprice = unitprice, fees = fees, total = total, subacctsec = subacctsec, subacctfund = subacctfund )
                buyother = BUYOTHER( invbuy = invbuy )
                transactionList.append( buyother )
        else:
            print( f"[?] Not Handled {transaction}" )
    invtranlist = INVTRANLIST( dtstart = startDate, dtend = endDate, *transactionList )
    currentDate = datetime.now().replace( tzinfo = UTC )
    invacctfrom = INVACCTFROM( brokerid = "investments.com", acctid = acctid )
    invstmtrs = INVSTMTRS( dtasof = currentDate, curdef = 'USD', invacctfrom = invacctfrom, invtranlist = invtranlist )
    transactionResponse = INVSTMTTRNRS( trnuid = trnuid, status = status, invstmtrs = invstmtrs )
    messages = INVSTMTMSGSRSV1( transactionResponse )
    return messages

def process_file( lines ):
    fileHeader = lines[0]
    transactions = list( filter( lambda line: len( line ) > 1, lines ) )
    currentDate = datetime.now().replace( tzinfo = UTC )
    securityMap = extract_unique_securities( transactions )
    securityMessages = make_security_list_message_set_response_messages( securityMap )
    transactionMessages = make_investment_statement_message_set_response_messages( securityMap, transactions )
    status = STATUS( code = 0, severity = 'INFO' )
    fi = FI( org = 'Investments', fid = '1234' )
    sonrs = SONRS( status = status, dtserver = currentDate, language='ENG', fi = fi )
    signonmsgs = SIGNONMSGSRSV1( sonrs = sonrs )
    ofx = OFX( signonmsgsrsv1 = signonmsgs, seclistmsgsrsv1 = securityMessages, invstmtmsgsrsv1 = transactionMessages )
    root = ofx.to_etree()
    ET.indent( root )
    fileData = ET.tostring( root ).decode()
    header = str( make_header( version = 220 ) )
    # print( header )
    # print( fileData )
    with open( OUT_PATH, "w" ) as fp:
        fp.write( header )
        fp.write( fileData )

with open(PATH, newline='') as f:
    fileLines = list( csv.reader( f ) )

if validate_file( fileLines ):
    process_file( fileLines[5:] )
</code></pre>
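<p>As a side note, the account-id extraction at the top of the script is easy to exercise on its own. A minimal sketch (the filename pattern is the one the script already assumes; the function name is illustrative):</p>

```python
import re

def account_id_from_filename(filename: str) -> str:
    """Extract the trailing account id from a name like
    'History_for_Account_Y12345678.csv' (pattern taken from the script)."""
    match = re.search(r".*_(.*)\.csv", filename)
    if match is None:
        raise ValueError(f"unexpected filename: {filename}")
    return match.group(1)

print(account_id_from_filename("History_for_Account_Y12345678.csv"))  # Y12345678
```

<p>Because the leading <code>.*_</code> is greedy, the captured group starts after the last underscore, which is exactly the account id.</p>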
|
<python><ofx><quicken>
|
2023-03-04 22:27:22
| 1
| 8,782
|
ericg
|
75,639,004
| 16,009,435
|
positional argument error when trying to send a post request
|
<p>I am trying to send a post request to a python backend running on a fast API server. The current method I am using below throws error <code>test() takes 0 positional arguments but 1 was given</code> but I haven't passed any arguments to the function <code>test</code>. Why is this happening and how can I fix it? Thanks in advance.</p>
<pre><code>import requests
data = {"foo": "bar"}
r = requests.post("http://127.0.0.1:8000/test", json = data)
print(r.text)
</code></pre>
<pre><code>import requests
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.route('/test', methods=['POST'])
def test(request):
    print(request.form['foo'])
    return 'thisIsResponse'
</code></pre>
<p><strong>EDIT:</strong> as per John Gordon's answer I updated my code by assigning a positional argument to the test function now I get error <code>TypeError: 'method' object is not subscriptable</code></p>
<p>Stack trace:</p>
<pre><code>Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 375, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/usr/lib/python3/dist-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
    return await self.app(scope, receive, send)
  File "/home/spree/.local/lib/python3.10/site-packages/fastapi/applications.py", line 271, in __call__
    await super().__call__(scope, receive, send)
  File "/home/spree/.local/lib/python3.10/site-packages/starlette/applications.py", line 118, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/spree/.local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/home/spree/.local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/home/spree/.local/lib/python3.10/site-packages/starlette/middleware/cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "/home/spree/.local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/home/spree/.local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/home/spree/.local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/home/spree/.local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/home/spree/.local/lib/python3.10/site-packages/starlette/routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "/home/spree/.local/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/home/spree/.local/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/home/spree/spreeTv/./ods_info.py", line 30, in test
    print(request.form['foo'])
TypeError: 'method' object is not subscriptable
</code></pre>
|
<python><fastapi>
|
2023-03-04 22:14:47
| 2
| 1,387
|
seriously
|
75,638,803
| 1,421,322
|
google cloud. upsert large csv file by chunks start to take a lot longer than expected
|
<p>I have a large csv file of prices that I want to upsert into a postgres table. I am reading it with pandas in chunks, uploading each chunk into a temp table, copying the temp table into the prices table, and finally dropping the temp table. Locally this is really fast: each chunk takes 18s, and the file of 18M rows takes 14 minutes. In Google Cloud Run, however, the first 3 chunks take 17s, the 4th one takes 42s, the 5th takes 102s, and after that the chunks take between 100 and 120s (sometimes more).</p>
<p>My Cloud Run service has 2 CPUs, concurrency 200, and 1 GB of RAM, with CPU only allocated during request processing. My Cloud SQL instance has 1 CPU and 628 MB of memory.</p>
<p>Here is the upsert code:</p>
<pre><code>import time
import pandas as pd
from sqlalchemy.pool import NullPool
from aldjemy.core import get_engine
import uuid
from typing import List
import pandas as pd
import sqlalchemy as sa

engine = get_engine(connect_args={'connect_timeout': 3600, 'pool_size': NullPool})

i = 0
with pd.read_csv("prices.csv", chunksize=350000) as reader:
    for prices in reader:
        time.sleep(0.01)
        i = i + 1
        print("chunk number %s" % str(i))
        t = time.time()
        try:
            _ = df_upsert(data_frame=prices, engine=engine, table_name=ModelPrice._meta.db_table, match_columns=['id', 'date'])
        except Exception as e:
            raise Exception(f"{self.__class__.__name__}: {str(e)}")
        elapsed = time.time() - t
        print('price data upserted. TOTAL elapsed = %d' % elapsed)
</code></pre>
<p>and <code>df_upsert</code>:</p>
<pre><code>def df_upsert(data_frame: pd.DataFrame, table_name: str, engine: sa.engine.Engine, match_columns: List[str] = None):
    """
    Perform an "upsert" on a PostgreSQL table from a DataFrame.
    Constructs an INSERT … ON CONFLICT statement, uploads the DataFrame to a
    temporary table, and then executes the INSERT.
    Parameters
    ----------
    data_frame : pandas.DataFrame
        The DataFrame to be upserted.
    table_name : str
        The name of the target table. Note that this string value is injected
        directly into the SQL statements, so proper quoting is required for
        table names that contain spaces, etc. A schema can be specified as
        well, e.g., 'my_schema."my table"'.
    engine : sa.engine.Engine
        The SQLAlchemy Engine to use.
    match_columns : list of str, optional
        A list of the column name(s) on which to match. If omitted, the
        primary key columns of the target table will be used. Note that these
        names *are* automatically quoted in the INSERT statement, so do not
        quote them in this list, e.g., ["my column"], not ['"my column"'].
    """
    df_columns = list(data_frame.columns)
    if not match_columns:
        insp = sa.inspect(engine)
        match_columns = insp.get_pk_constraint(table_name)[
            "constrained_columns"
        ]
    temp_table_name = f"temp_{uuid.uuid4().hex[:6]}"
    columns_to_update = [col for col in df_columns if col not in match_columns]
    insert_col_list = ", ".join([f'"{col_name}"' for col_name in df_columns])
    stmt = f"INSERT INTO {table_name} ({insert_col_list})\n"
    stmt += f"SELECT {insert_col_list} FROM {temp_table_name}\n"
    match_col_list = ", ".join([f'"{col}"' for col in match_columns])
    stmt += f"ON CONFLICT ({match_col_list}) DO UPDATE SET\n"
    stmt += ", ".join(
        [f'"{col}" = EXCLUDED."{col}"' for col in columns_to_update]
    )
    with engine.begin() as conn:
        t = time.time()
        conn.exec_driver_sql(
            f"CREATE TEMPORARY TABLE {temp_table_name} AS SELECT * FROM {table_name} WHERE false"
        )
        elapsed = time.time() - t
        print('TEMPORARY TABLE created. elapsed = %d' % elapsed)
        t = time.time()
        data_frame.to_sql(temp_table_name, conn, if_exists="append", index=False)
        elapsed = time.time() - t
        print('dataframe written to TEMPORARY TABLE. elapsed = %d' % elapsed)
        t = time.time()
        conn.exec_driver_sql(stmt)
        elapsed = time.time() - t
        print('TEMPORARY TABLE copied into price table. elapsed = %d' % elapsed)
        t = time.time()
        engine.execute(f'DROP TABLE "{temp_table_name}"')
        elapsed = time.time() - t
        print('TEMPORARY TABLE dropped. elapsed = %d' % elapsed)
</code></pre>
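<p>To inspect what the helper actually sends to Postgres, the statement construction can be replayed without any database round trips. A small sketch (table, column, and temp-table names below are illustrative):</p>

```python
from typing import List

def build_upsert_sql(table_name: str, df_columns: List[str],
                     match_columns: List[str], temp_table_name: str) -> str:
    # Same string-building steps as df_upsert above, isolated for inspection.
    columns_to_update = [c for c in df_columns if c not in match_columns]
    insert_col_list = ", ".join(f'"{c}"' for c in df_columns)
    match_col_list = ", ".join(f'"{c}"' for c in match_columns)
    stmt = f"INSERT INTO {table_name} ({insert_col_list})\n"
    stmt += f"SELECT {insert_col_list} FROM {temp_table_name}\n"
    stmt += f"ON CONFLICT ({match_col_list}) DO UPDATE SET\n"
    stmt += ", ".join(f'"{c}" = EXCLUDED."{c}"' for c in columns_to_update)
    return stmt

print(build_upsert_sql("prices", ["id", "date", "price"], ["id", "date"], "temp_abc123"))
```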
<p>I have tried:</p>
<ol>
<li>Reducing the chunk size (to 100k). Smaller chunks is worse actually. Relatively each chunk becomes slower.</li>
<li>Deleting the prices table and rebuilding the index and doing an upsert with on empty table.</li>
<li>Increase RAM and CPU on cloud run</li>
<li>Use CPU always allocated on cloud run</li>
</ol>
<p>None of them worked.
A little more detail on the timing breakdown.
First 3 chunks:
Temp Table creation: 0s
Copy to Temp table: 9s
Copy from Temp table to prices table: 7-9s
Drop Temp table: 0s</p>
<p>Afterwards:
Temp Table creation: 0s
Copy to Temp table: 48s
Copy from Temp table to prices table: 74s
Drop Temp table: 0s</p>
|
<python><pandas><google-cloud-platform><google-cloud-sql><google-cloud-run>
|
2023-03-04 21:33:00
| 0
| 993
|
Courvoisier
|
75,638,799
| 19,694,624
|
I run into a discord.gateway warning "Shard ID None heartbeat blocked for more than x seconds."
|
<p>My simple bot checks whether it got a .json file from an API; if it did, the bot outputs some info and then deletes that .json file. If it did not, it checks again until there is a .json file in that directory.</p>
<p>The bot actually works fine, but I get a warning: <code>Discord.gateway warning "Shard ID None heartbeat blocked for more than x seconds."</code></p>
<p>Can it crash my bot? The bot is supposed to be constantly running.</p>
<pre><code>import discord
from discord.ext import commands
import os
import json
import time
from config import TOKEN

intents = discord.Intents.default()
intents.members = True
intents.message_content = True

bot = commands.AutoShardedBot(shard_count=2, command_prefix="!", intents=intents)

@bot.event
async def on_ready():
    print(f'Logged in as {bot.user} (ID: {bot.user.id})')

@bot.command()
async def example(ctx):
    while True:
        try:
            f = open("result.json", "r")
            result = json.load(f)
            await ctx.send(f"some message, {result['name']}")
            os.remove("/home/user/Projects/project_folder/result.json")
        except FileNotFoundError:
            continue

if __name__ == "__main__":
    bot.run(TOKEN)
</code></pre>
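<p>The warning text is a useful hint here: a <code>while True:</code> loop that never <code>await</code>s anything runs synchronously inside the event loop, so the gateway's heartbeat coroutine is starved until the loop yields. A discord-free sketch of the effect (plain <code>asyncio</code>; names and timings are illustrative):</p>

```python
import asyncio
import time

async def heartbeat(beats):
    # Stands in for discord.py's gateway heartbeat task.
    for _ in range(3):
        await asyncio.sleep(0.05)
        beats.append(time.monotonic())

async def blocking_poll():
    t0 = time.monotonic()
    while time.monotonic() - t0 < 0.3:
        pass  # tight loop: never awaits, so nothing else can run

async def cooperative_poll():
    t0 = time.monotonic()
    while time.monotonic() - t0 < 0.3:
        await asyncio.sleep(0.05)  # yields to the event loop each iteration

async def first_beat_delay(poll):
    beats = []
    start = time.monotonic()
    await asyncio.gather(heartbeat(beats), poll())
    return beats[0] - start  # delay before the first heartbeat fires

blocked = asyncio.run(first_beat_delay(blocking_poll))
cooperative = asyncio.run(first_beat_delay(cooperative_poll))
print(f"blocking poll delays first heartbeat by ~{blocked:.2f}s")
print(f"cooperative poll delays it by only ~{cooperative:.2f}s")
```

<p>In the bot itself, the usual remedies are the same idea: put an <code>await asyncio.sleep(...)</code> inside the polling loop, or move the file check into a background task, so the heartbeat keeps running.</p>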
|
<python><discord.py>
|
2023-03-04 21:32:32
| 1
| 303
|
syrok
|
75,638,561
| 11,741,232
|
Shared Memory in Python doesn't work if used within a Class
|
<p>I'm using the <code>multiprocessing.shared_memory</code> module in Python to very rapidly send images between Python processes on the same computer. When I use the shared memory code raw, it works without any problem.</p>
<p>However, when I try to abstract it into a class, which would make it easier to use, it doesn't work and gives this error: <code>Process finished with exit code -1073741510 (0xC000013A: interrupted by Ctrl+C)")</code> (which has been seen on SO before, but for the underlying C functions, which doesn't help me a lot).</p>
<p>Working code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from multiprocessing import shared_memory
import cv2
w, h = 1920, 1080
shm = shared_memory.SharedMemory(create=True, name="SharedVideoBuffer", size=w*h*3)
im = np.ndarray((h, w, 3), dtype=np.uint8, buffer=shm.buf)
image = cv2.imread("19201080.jpg")
while True:
    im[:] = image
</code></pre>
<p>Code that errors:</p>
<pre><code>import numpy as np
from multiprocessing import shared_memory
import cv2

w, h = 1920, 1080

class SharedMemImagePub:
    def __init__(self):
        shm = shared_memory.SharedMemory(create=True, name="SharedVideoBuffer", size=w * h * 3)
        self.im = np.ndarray((h, w, 3), dtype=np.uint8, buffer=shm.buf)

    def publish_image(self, cv_image):
        self.im[:] = cv_image

if __name__ == '__main__':
    shmem_image_pub = SharedMemImagePub()
    image = cv2.imread("19201080.jpg")
    while True:
        shmem_image_pub.publish_image(image)
</code></pre>
<p>Abstracting code into a class like this should behave identically anywhere else in Python, but in this case it does not. Does anyone know why?</p>
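<p>A likely culprit (worth verifying): in <code>__init__</code> the <code>SharedMemory</code> object is bound only to the local name <code>shm</code>, so it can be garbage-collected while <code>self.im</code> still points into its buffer, and the interpreter dies tearing the mapping down. A cv2/numpy-free sketch that keeps the handle alive on <code>self</code> (class and buffer names are illustrative):</p>

```python
from multiprocessing import shared_memory

class SharedBufferPub:
    """Like SharedMemImagePub above, but the SharedMemory handle is stored
    on self so it lives as long as the object that uses its buffer."""
    def __init__(self, size=16):
        # Omitting name= lets the OS pick a unique segment name.
        self.shm = shared_memory.SharedMemory(create=True, size=size)
        self.buf = self.shm.buf  # stays valid while self.shm is alive

    def publish(self, payload: bytes):
        self.buf[:len(payload)] = payload

    def close(self):
        del self.buf           # release the exported view before closing
        self.shm.close()
        self.shm.unlink()

pub = SharedBufferPub()
pub.publish(b"hello")
print(bytes(pub.buf[:5]))  # b'hello'
pub.close()
```

<p>The same one-line change in the original class would be <code>self.shm = shared_memory.SharedMemory(...)</code>, keeping the segment alive for the lifetime of the publisher.</p>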
|
<python><shared-memory><cpython>
|
2023-03-04 20:46:28
| 0
| 694
|
kevinlinxc
|
75,638,498
| 7,265,114
|
How to calculate cdf using scipy and xarray
|
<p>I want to fit a <code>gamma</code> distribution and get the <code>cdf</code> from the n-d DataArray <code>test_data</code> below. Currently I can do it with the numpy.apply_along_axis function, but I failed to implement it with xarray.apply_ufunc(), and I am not sure why.</p>
<p>Here is sample code.</p>
<pre><code>import xarray as xr
import numpy as np
import pandas as pd
import scipy.stats as st

# make some random data
random_data = np.random.randint(1, 100, (100, 50, 50))
# Make random datetime
time_dim = pd.date_range("2000-01-01", periods=len(random_data), freq='MS')
# Make DataArray
test_data = xr.DataArray(random_data, dims=("time", "y", "x"))
test_data["time"] = time_dim

# Fitting gamma distribution and calculating cdf for each data point along the time dimension.
def fit_gamma(a):
    if np.isnan(a).all():
        return np.array(a, dtype=np.float32)
    else:
        filter_nan = a[~np.isnan(a)]
        shape, loc, scale = st.gamma.fit(filter_nan, scale=np.std(filter_nan))
        cdf = st.gamma.cdf(a, shape, loc=loc, scale=scale)
        ppf = st.norm.ppf(cdf)
        return np.array(ppf, dtype=np.float32)

# Test for numpy apply_along_axis, and it worked.
result_numpy = np.apply_along_axis(fit_gamma, 0, test_data.values)
# Test for xarray apply_ufunc, but it failed to work.
result = xr.apply_ufunc(fit_gamma, test_data, input_core_dims=[["time"]], vectorize=True)
</code></pre>
<p>It throws long errors</p>
<pre><code>---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
TypeError: only size-1 arrays can be converted to Python scalars

The above exception was the direct cause of the following exception:

ValueError                                Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_6860/3359275878.py in <module>
     23 # result_numpy=np.apply_along_axis(fit_gamma,0, test_data.values)
     24 # test for xarray ufunc, but failed to work
---> 25 result=xr.apply_ufunc(fit_gamma, test_data, input_core_dims=[["time"]], vectorize=True)

c:\users\hava_tu\appdata\local\programs\python\python38\lib\site-packages\xarray\core\computation.py in apply_ufunc(func, input_core_dims, output_core_dims, exclude_dims, vectorize, join, dataset_join, dataset_fill_value, keep_attrs, kwargs, dask, output_dtypes, output_sizes, meta, dask_gufunc_kwargs, *args)
   1163     # feed DataArray apply_variable_ufunc through apply_dataarray_vfunc
   1164     elif any(isinstance(a, DataArray) for a in args):
-> 1165         return apply_dataarray_vfunc(
   1166             variables_vfunc,
   1167             *args,

c:\users\hava_tu\appdata\local\programs\python\python38\lib\site-packages\xarray\core\computation.py in apply_dataarray_vfunc(func, signature, join, exclude_dims, keep_attrs, *args)
    288
    289     data_vars = [getattr(a, "variable", a) for a in args]
--> 290     result_var = func(*data_vars)
    291
    292     if signature.num_outputs > 1:

c:\users\hava_tu\appdata\local\programs\python\python38\lib\site-packages\xarray\core\computation.py in apply_variable_ufunc(func, signature, exclude_dims, dask, output_dtypes, vectorize, keep_attrs, dask_gufunc_kwargs, *args)
    731         )
    732
--> 733     result_data = func(*input_data)
    734
    735     if signature.num_outputs == 1:

c:\users\hava_tu\appdata\local\programs\python\python38\lib\site-packages\numpy\lib\function_base.py in __call__(self, *args, **kwargs)
   2327             vargs.extend([kwargs[_n] for _n in names])
   2328
-> 2329         return self._vectorize_call(func=func, args=vargs)
   2330
   2331     def _get_ufunc_and_otypes(self, func, args):

c:\users\hava_tu\appdata\local\programs\python\python38\lib\site-packages\numpy\lib\function_base.py in _vectorize_call(self, func, args)
   2401         """Vectorized call to `func` over positional `args`."""
   2402         if self.signature is not None:
-> 2403             res = self._vectorize_call_with_signature(func, args)
   2404         elif not args:
   2405             res = func()

c:\users\hava_tu\appdata\local\programs\python\python38\lib\site-packages\numpy\lib\function_base.py in _vectorize_call_with_signature(self, func, args)
   2461
   2462         for output, result in zip(outputs, results):
-> 2463             output[index] = result
   2464
   2465         if outputs is None:

ValueError: setting an array element with a sequence.
</code></pre>
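<p>The bottom frame of the traceback (<code>output[index] = result</code> inside <code>np.vectorize</code>) points at the mismatch: with <code>vectorize=True</code> and only <code>input_core_dims</code> given, the generated signature is <code>(time)->()</code>, i.e. a scalar output per series, while <code>fit_gamma</code> returns a whole array. A numpy-only sketch of the same failure and its fix (the <code>per_series</code> function is an illustrative stand-in for <code>fit_gamma</code>):</p>

```python
import numpy as np

def per_series(a):
    # Returns an array the same length as its input, like fit_gamma does.
    return a - a.mean()

data = np.arange(12.0).reshape(3, 4)

# Scalar-output signature: fails like the apply_ufunc call above.
try:
    np.vectorize(per_series, signature="(n)->()")(data)
except ValueError as exc:
    print("scalar signature:", exc)

# Array-output signature: works. In xarray terms this is what adding
# output_core_dims=[["time"]] to apply_ufunc would declare.
result = np.vectorize(per_series, signature="(n)->(n)")(data)
print(result.shape)  # (3, 4)
```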
|
<python><scipy><numpy-ndarray><python-xarray>
|
2023-03-04 20:34:45
| 0
| 1,141
|
Tuyen
|
75,638,494
| 6,357,916
|
Cupy API and elementwise kernel for faster neural network
|
<p>I know I can easily rewrite numpy dot product in cupy dot product by using corresponding API:</p>
<pre><code>import numpy as np
import cupy as cp
arr1 = np.array([[1, 2], [3, 4]])
arr2 = np.array([[5, 6], [7, 8]])
np.dot(arr1, arr2)
arr1_ = cp.asarray(arr1)
arr2_ = cp.asarray(arr2)
cp.dot(arr1_, arr2_)
</code></pre>
<p>I <a href="https://towardsdatascience.com/make-your-python-functions-10x-faster-142ab40b31a7" rel="nofollow noreferrer">read</a> that elementwise kernels in cupy can run a lot faster (hundreds of times faster than the corresponding numpy operations). So I was wondering: <strong>Q1.</strong> Can I do the above dot product using an elementwise kernel, or is it that a dot product is simply not an elementwise operation?</p>
<p>I want to increase the execution speed of a neural network that I coded from scratch in numpy (for academic purposes; I don't want to use pytorch or tensorflow). Most of the operations in neural network computation involve dot products. <strong>Q2.</strong> So, if we cannot use a cupy elementwise kernel for the dot product, what else can I use elementwise kernels for (in the context of neural networks involving multiclass classification)?</p>
<p><strong>Q3.</strong> Is there any faster alternative in cupy to <code>cupy.dot</code>? (I just want to be sure I am using the fastest approach.)</p>
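<p>On Q1, the distinction can be made concrete: an elementwise kernel computes <code>out[i, j]</code> from the inputs at the same index only, while each element of a dot product sums over an entire row/column pair, i.e. it is a contraction/reduction. In plain numpy (cupy mirrors this API):</p>

```python
import numpy as np

arr1 = np.array([[1, 2], [3, 4]])
arr2 = np.array([[5, 6], [7, 8]])

# Elementwise: out[i, j] depends only on arr1[i, j] and arr2[i, j].
elementwise = arr1 * arr2
print(elementwise.tolist())  # [[5, 12], [21, 32]]

# Dot product: each out[i, j] sums over a whole row of arr1 and a whole
# column of arr2, so it cannot be written as a single elementwise kernel.
dot = arr1 @ arr2
print(dot.tolist())  # [[19, 22], [43, 50]]
```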
|
<python><numpy><neural-network><cupy>
|
2023-03-04 20:33:50
| 0
| 3,029
|
MsA
|
75,638,453
| 19,425,874
|
How to combine creds.json file into python script and create a gui?
|
<p>It's been a long road, but I had 4 separate python scripts I wanted to combine as one.</p>
<p>This is working and is listed below:</p>
<pre><code>import gspread
import requests
from bs4 import BeautifulSoup
import pandas as pd
from oauth2client.service_account import ServiceAccountCredentials

# Set up credentials and authorize the client
scope = ['https://spreadsheets.google.com/feeds',
         'https://www.googleapis.com/auth/drive']
creds = {
    "type": "service_account",
    "project_id": "g-league-tracker-final",
    "private_key_id": "1b8efa2e9cc9ff846ee358811687b98f0425d4ea",
    "private_key": "-----BEGIN PRIVATE KEY-----\nMII\n-----END PRIVATE KEY-----\n",
    "client_email": "g",
    "client_id": "1",
    "auth_uri": "https://acc",
    "token_uri": "http",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/gleaguetracker%40g-league-tracker-final.iam.gserviceaccount.com"
}
client = gspread.authorize(creds)

print('STARTING G LEAGUE PROFILES')
# G Leauge PROFILES
gc = gspread.service_account(creds)
sh = gc.open_by_key('1DpasSS8yC1UX6WqAbkQ515BwEEjdDL-x74T0eTW8hLM')
worksheet = sh.worksheet('GLeague Profile Details')

# AddValue = ["Test", 25, "Test2"]
# worksheet.insert_row(AddValue, 3)

def get_links(url):
    data = []
    req_url = requests.get(url)
    soup = BeautifulSoup(req_url.content, "html.parser")
    for td in soup.find_all('td', {'data-th': 'Player'}):
        a_tag = td.a
        name = a_tag.text
        player_url = a_tag['href']
        pos = td.find_next_sibling('td').text
        print(f"Getting {name}")
        req_player_url = requests.get(
            f"https://basketball.realgm.com{player_url}")
        soup_player = BeautifulSoup(req_player_url.content, "html.parser")
        div_profile_box = soup_player.find("div", class_="profile-box")
        row = {"Name": name, "URL": player_url, "pos_option1": pos}
        row['pos_option2'] = div_profile_box.h2.span.text
        for p in div_profile_box.find_all("p"):
            try:
                key, value = p.get_text(strip=True).split(':', 1)
                row[key.strip()] = value.strip()
            except:  # not all entries have values
                pass
        data.append(row)
    return data

urls = [
    'https://basketball.realgm.com/dleague/players/2022',
]

res = []
for url in urls:
    print(f"Getting: {url}")
    data = get_links(url)
    res = [*res, *data]

if res != []:
    header = list(res[0].keys())
    values = [
        header, *[[e[k] if e.get(k) else "" for k in header] for e in res]]
    worksheet.append_rows(values, value_input_option="USER_ENTERED")

print('FINISHED G LEAGUE PROFILES')
print('STARTING INTERNATIONAL PROFILES')

# STARTING INTERNATIONAL PROFILES
worksheet2 = sh.worksheet('International Profile Details')

# AddValue = ["Test", 25, "Test2"]
# worksheet.insert_row(AddValue, 3)

def get_links2(url):
    data = []
    req_url = requests.get(url)
    soup = BeautifulSoup(req_url.content, "html.parser")
    for td in soup.select('td.nowrap'):
        a_tag = td.a
        if a_tag:
            name = a_tag.text
            player_url = a_tag['href']
            pos = td.find_next_sibling('td').text
            print(f"Getting {name}")
            req_player_url = requests.get(
                f"https://basketball.realgm.com{player_url}")
            soup_player = BeautifulSoup(req_player_url.content, "html.parser")
            div_profile_box = soup_player.find("div", class_="profile-box")
            row = {"Name": name, "URL": player_url, "pos_option1": pos}
            row['pos_option2'] = div_profile_box.h2.span.text if div_profile_box.h2.span else None
            for p in div_profile_box.find_all("p"):
                try:
                    key, value = p.get_text(strip=True).split(':', 1)
                    row[key.strip()] = value.strip()
                except:  # not all entries have values
                    pass
            data.append(row)
    return data

urls2 = ["https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/player/All/desc",
         "https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/2"]

res2 = []
for url in urls2:
    data = get_links2(url)
    res2 = [*res2, *data]
# print(res2)

if res2 != []:
    header = list(res2[0].keys())
    values = [
        header, *[[e[k] if e.get(k) else "" for k in header] for e in res2]]
    worksheet2.append_rows(values, value_input_option="USER_ENTERED")

print('FINISHED INTERNATIONAL PROFILES')
print('STARTING G LEAGUE PROFILES')

# STARTING GLEAGUE STATS
worksheet_name1 = "All G League Stats"
worksheet1 = sh.worksheet(worksheet_name1)
url = 'https://basketball.realgm.com/dleague/stats/2023/Averages/Qualified/player/All/desc/1/Regular_Season'
res = []
for count in range(1, 99):
    # pd.read_html accepts a URL too so no need to make a separate request
    df_list = pd.read_html(f"{url}/{count}")
    res.append(df_list[-1])
data = pd.concat(res)

# Convert the data to a list of lists
values = data.values.tolist()
# Add header row
header = data.columns.tolist()
values.insert(0, header)

# Write the data to the worksheet
worksheet1.clear()  # Clear any existing data
worksheet1.append_rows(values, value_input_option="USER_ENTERED",
                       insert_data_option="INSERT_ROWS", table_range="B1")

print('FINISHED G LEAGUE STATS')
print('STARTING INTERNATIONAL STATS')

# STARTING INTERNATIONAL STATS
worksheet_name2 = "All International Stats"
worksheet2 = sh.worksheet(worksheet_name2)
url = 'https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/player/All/desc'
res = []
for count in range(1, 95):
    # pd.read_html accepts a URL too so no need to make a separate request
    df_list = pd.read_html(f"{url}/{count}")
    res.append(df_list[-1])
data = pd.concat(res)
# Replace NaN values with an empty string
data = data.fillna("")
# Convert the data to a list of lists
values = data.values.tolist()
# Add header row
header = data.columns.tolist()
values.insert(0, header)
# Write the data to the worksheet
worksheet2.clear()  # Clear any existing data
worksheet2.append_rows(values, value_input_option="USER_ENTERED",
                       insert_data_option="OVERWRITE", table_range="B1")
</code></pre>
<p>I wanted to just wrap it in a simple GUI so that I could run from my desktop:</p>
<pre><code>import tkinter as tk
import threading

def update_data():
    # Set the status to "In progress"
    status_label.config(text="In progress...")
    root.update()

    # Paste your code here

    # Set the status to "Completed"
    status_label.config(text="Completed")
    root.update()
    print("Data updated.")

# Create the main window
root = tk.Tk()
root.geometry("400x250")
root.title("G League & International Finder")

# Create the title label
title_label = tk.Label(root, text="G League & International Finder", font=("Helvetica", 16))
title_label.pack(pady=10)

# Create the update data button
update_data_button = tk.Button(root, text="Update Data", font=("Helvetica", 14), command=update_data)
update_data_button.pack(pady=20)

# Create the status label
status_label = tk.Label(root, text="", font=("Helvetica", 12))
status_label.pack(pady=10)

# Start the main loop
root.mainloop()
</code></pre>
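<p>One thing to note in this wrapper: <code>threading</code> is imported but never used, so whatever is pasted into <code>update_data</code> runs on the Tk main thread and freezes the window until it finishes. A GUI-free sketch of handing long work to a background thread, which is the usual pattern for a tkinter button callback (function names are illustrative):</p>

```python
import threading
import time

def long_running_job(results):
    # Stand-in for the scraping/upload work; runs off the main thread.
    time.sleep(0.1)
    results.append("Completed")

def start_job(results):
    # What the button callback would do: spawn the worker and return
    # immediately, so the event loop (root.mainloop) stays responsive.
    worker = threading.Thread(target=long_running_job, args=(results,), daemon=True)
    worker.start()
    return worker

results = []
worker = start_job(results)
print("main thread free while job runs:", results)  # []
worker.join()
print("after join:", results)  # ['Completed']
```

<p>Inside tkinter, the worker should not touch widgets directly; it can post results back via a <code>queue.Queue</code> that the main thread polls with <code>root.after(...)</code>.</p>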
<p>I'm adding the code in where it's supposed to go, but I'm receiving various errors. I thought the code should look like this:</p>
<pre><code>import tkinter as tk
import threading
import gspread
import requests
from bs4 import BeautifulSoup
import pandas as pd
from oauth2client.service_account import ServiceAccountCredentials

# Set up credentials and authorize the client
scope = ['https://spreadsheets.google.com/feeds',
         'https://www.googleapis.com/auth/drive']
creds = {
    "type": "service_account",
    "project_id": "g",
    "private_key_id": "1b8efa2e9c",
    "private_key": "-----BEGIN PRIVATE KEY-----\\+DEpmj73dM8TUFEGuI7BSbW\ndCvEgLYRbFNE4d1AoGdxjpntne64DyzHwOKWVV0/aQKBgFZOTfyKxp16bThXmcDI\ntuZbLGK5PEP+OAsqM9lQ0DveaDXsl942LNHLKYj11+ZZ375DFmZeIHsFjcO73XuQ\nFRK9+zSsWL9PZWr18PwUUdqaLkMqh7EKoMHo2JcG9EOo6o4srdrtH8SFQoJ1Eklm\n7vzwtoJU0aGPoOqoJIxKH/z7\n-----END PRIVATE KEY-----\n",
    "client_email": "",
    "client_id": "",
    "auth_uri": "",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/gleaguetracker%40g-league-tracker-final.iam.gserviceaccount.com"
}
client = gspread.authorize(creds)

print('STARTING G LEAGUE PROFILES')
# G Leauge PROFILES
gc = gspread.service_account(creds)
sh = gc.open_by_key('1DpasSS8yC1UX6WqAbkQ515BwEEjdDL-x74T0eTW8hLM')

def update_data():
    # Set the status to "In progress"
    status_label.config(text="In progress...")
    root.update()

    # Paste your code here
    worksheet = sh.worksheet('GLeague Profile Details')

    # AddValue = ["Test", 25, "Test2"]
    # worksheet.insert_row(AddValue, 3)

    def get_links(url):
        data = []
        req_url = requests.get(url)
        soup = BeautifulSoup(req_url.content, "html.parser")
        for td in soup.find_all('td', {'data-th': 'Player'}):
            a_tag = td.a
            name = a_tag.text
            player_url = a_tag['href']
            pos = td.find_next_sibling('td').text
            print(f"Getting {name}")
            req_player_url = requests.get(
                f"https://basketball.realgm.com{player_url}")
            soup_player = BeautifulSoup(req_player_url.content, "html.parser")
            div_profile_box = soup_player.find("div", class_="profile-box")
            row = {"Name": name, "URL": player_url, "pos_option1": pos}
            row['pos_option2'] = div_profile_box.h2.span.text
            for p in div_profile_box.find_all("p"):
                try:
                    key, value = p.get_text(strip=True).split(':', 1)
                    row[key.strip()] = value.strip()
                except:  # not all entries have values
                    pass
            data.append(row)
        return data

    urls = [
        'https://basketball.realgm.com/dleague/players/2022',
    ]

    res = []
    for url in urls:
        print(f"Getting: {url}")
        data = get_links(url)
        res = [*res, *data]

    if res != []:
        header = list(res[0].keys())
        values = [
            header, *[[e[k] if e.get(k) else "" for k in header] for e in res]]
        worksheet.append_rows(values, value_input_option="USER_ENTERED")

    print('FINISHED G LEAGUE PROFILES')
    print('STARTING INTERNATIONAL PROFILES')

    # STARTING INTERNATIONAL PROFILES
    worksheet2 = sh.worksheet('International Profile Details')

    # AddValue = ["Test", 25, "Test2"]
    # worksheet.insert_row(AddValue, 3)

    def get_links2(url):
        data = []
        req_url = requests.get(url)
        soup = BeautifulSoup(req_url.content, "html.parser")
        for td in soup.select('td.nowrap'):
            a_tag = td.a
            if a_tag:
                name = a_tag.text
                player_url = a_tag['href']
                pos = td.find_next_sibling('td').text
                print(f"Getting {name}")
                req_player_url = requests.get(
                    f"https://basketball.realgm.com{player_url}")
                soup_player = BeautifulSoup(req_player_url.content, "html.parser")
                div_profile_box = soup_player.find("div", class_="profile-box")
                row = {"Name": name, "URL": player_url, "pos_option1": pos}
                row['pos_option2'] = div_profile_box.h2.span.text if div_profile_box.h2.span else None
                for p in div_profile_box.find_all("p"):
                    try:
                        key, value = p.get_text(strip=True).split(':', 1)
                        row[key.strip()] = value.strip()
                    except:  # not all entries have values
                        pass
                data.append(row)
        return data

    urls2 = ["https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/player/All/desc",
             "https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/2"]

    res2 = []
    for url in urls2:
        data = get_links2(url)
        res2 = [*res2, *data]
    # print(res2)

    if res2 != []:
        header = list(res2[0].keys())
        values = [
            header, *[[e[k] if e.get(k) else "" for k in header] for e in res2]]
        worksheet2.append_rows(values, value_input_option="USER_ENTERED")

    print('FINISHED INTERNATIONAL PROFILES')
print('STARTING G LEAGUE PROFILES')
# STARTING GLEAGUE STATS
worksheet_name1 = "All G League Stats"
worksheet1 = sh.worksheet(worksheet_name1)
url = 'https://basketball.realgm.com/dleague/stats/2023/Averages/Qualified/player/All/desc/1/Regular_Season'
res = []
for count in range(1, 99):
# pd.read_html accepts a URL too so no need to make a separate request
df_list = pd.read_html(f"{url}/{count}")
res.append(df_list[-1])
data = pd.concat(res)
# Convert the data to a list of lists
values = data.values.tolist()
# Add header row
header = data.columns.tolist()
values.insert(0, header)
# Write the data to the worksheet
worksheet1.clear() # Clear any existing data
worksheet1.append_rows(values, value_input_option="USER_ENTERED",
insert_data_option="INSERT_ROWS", table_range="B1")
print('FINISHED G LEAGUE STATS')
print('STARTING INTERNATIONAL STATS')
# STARTING INTERNATIONAL STATS
worksheet_name2 = "All International Stats"
worksheet2 = sh.worksheet(worksheet_name2)
url = 'https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/player/All/desc'
res = []
for count in range(1, 95):
# pd.read_html accepts a URL too so no need to make a separate request
df_list = pd.read_html(f"{url}/{count}")
res.append(df_list[-1])
data = pd.concat(res)
# Replace NaN values with an empty string
data = data.fillna("")
# Convert the data to a list of lists
values = data.values.tolist()
# Add header row
header = data.columns.tolist()
values.insert(0, header)
# Write the data to the worksheet
worksheet2.clear() # Clear any existing data
worksheet2.append_rows(values, value_input_option="USER_ENTERED",
insert_data_option="OVERWRITE", table_range="B1")
# Set the status to "Completed"
status_label.config(text="Completed")
root.update()
print("Data updated.")
# Create the main window
root = tk.Tk()
root.geometry("400x250")
root.title("G League & International Finder")
# Create the title label
title_label = tk.Label(
root, text="G League & International Finder", font=("Helvetica", 16))
title_label.pack(pady=10)
# Create the update data button
update_data_button = tk.Button(root, text="Update Data", font=(
"Helvetica", 14), command=update_data)
update_data_button.pack(pady=20)
# Create the status label
status_label = tk.Label(root, text="", font=("Helvetica", 12))
status_label.pack(pady=10)
# Start the main loop
root.mainloop()
</code></pre>
<p>But I'm receiving the error below; is it coming from the credentials? I originally had them in a separate file, but needed to inline them into my code so it was all together. I'm not sure what is going wrong here, and any advice on how I can package this all as a GUI would be much appreciated.</p>
<pre><code>Traceback (most recent call last):
File "c:\Users\AMadle\GLeagueFinal\gui.py", line 32, in <module>
client = gspread.authorize(creds)
File "C:\Python\python3.10.5\lib\site-packages\gspread\__init__.py", line 40, in authorize
client = client_class(auth=credentials)
File "C:\Python\python3.10.5\lib\site-packages\gspread\client.py", line 46, in __init__
self.auth = convert_credentials(auth)
File "C:\Python\python3.10.5\lib\site-packages\gspread\utils.py", line 67, in convert_credentials
module = credentials.__module__
AttributeError: 'dict' object has no attribute '__module__'. Did you mean: '__reduce__'?
PS C:\Users\AMadle\GLeagueFinal>
</code></pre>
<p>4PM EST 3/4 UPDATE:</p>
<p>I was wrong - it seems the error is definitely coming from placing the creds dict directly into my file. As I mentioned, it was a separate file before. I'm going to try to figure this out, but if anyone has a solution it would be much appreciated.</p>
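<p>For what it's worth, a likely fix (assuming a recent gspread version): <code>gspread.authorize()</code> expects a google-auth <code>Credentials</code> object, which is why handing it a plain dict raises <code>AttributeError: 'dict' object has no attribute '__module__'</code>. gspread ships a helper that accepts the dict directly; a minimal sketch:</p>

```python
def make_client(creds: dict):
    """Build a gspread client from an in-code service-account dict.

    service_account_from_dict() wraps the dict in proper Credentials;
    passing the raw dict to gspread.authorize() is what triggers the
    AttributeError in the traceback.
    """
    import gspread  # imported here so the sketch stands alone
    return gspread.service_account_from_dict(creds)
```

<p>Equivalently, <code>google.oauth2.service_account.Credentials.from_service_account_info(creds)</code> can be passed to <code>gspread.authorize()</code>. Note that the separate <code>gspread.service_account(...)</code> call in the code above expects a <em>filename</em>, not a dict.</p>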
|
<python><pandas><user-interface><python-requests><credentials>
|
2023-03-04 20:25:45
| 1
| 393
|
Anthony Madle
|
75,638,331
| 11,092,636
|
SQLAlchemy returns `list` instead of `Sequence[Row[_TP]]`
|
<p>The type hints of <code>sqlalchemy.engine.result.Result.fetchall()</code> say the method returns a <code>Sequence[Row[_TP]]</code>, but <code>print(type(result_1.fetchall()))</code> outputs <code>list</code>.
<a href="https://i.sstatic.net/WAS2n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WAS2n.png" alt="enter image description here" /></a></p>
<p>Is there anything I'm missing?</p>
<p>My code is the following:</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == "__main__":
engine: sqlalchemy.engine.base.Engine = sqlalchemy.create_engine(
"sqlite:///data_SQL/new_db.db", echo=False
)
with engine.connect() as connection:
result_1: sqlalchemy.engine.cursor.CursorResult = connection.execute(
sqlalchemy.text("SELECT * FROM MY_TABLE WHERE batch = 'sdfg';")
)
data_fetched_1: sqlalchemy.Sequence = result_1.fetchall()
print(type(data_fetched_1))
</code></pre>
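<p>A short standard-library check shows there is no contradiction: a built-in <code>list</code> satisfies the <code>Sequence</code> ABC, so <code>fetchall()</code> handing back a list is consistent with its annotation. The hint promises an interface (indexing, <code>len</code>, iteration), not a concrete class, which leaves SQLAlchemy free to return other sequence types in future:</p>

```python
from collections.abc import Sequence

# Stand-in for the Row objects fetchall() returns; the point is only
# that the concrete container is a list, which *is* a Sequence.
rows = [("a", 1), ("b", 2)]
print(isinstance(rows, Sequence))   # True
```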
|
<python><sqlalchemy><python-typing>
|
2023-03-04 20:00:15
| 2
| 720
|
FluidMechanics Potential Flows
|
75,638,222
| 1,315,621
|
Sqlalchemy + psycopg: cannot use column reference in DEFAULT expression
|
<p>I am trying to create the following table with Sqlalchemy on Postgres db:</p>
<pre><code>from sqlalchemy import Column, DateTime, Float, Integer, String, func
from sqlalchemy.orm import declarative_base
Base = declarative_base()
class Test(Base):
__tablename__ = "tests"
id = Column(Integer, primary_key=True)
name = Column(String, unique=True)
create_date = Column(DateTime, server_default=func.sysdate())
</code></pre>
<p>When running:</p>
<pre><code>Test.__table__.create(bind=engine, checkfirst=True)
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1964, in _exec_single_context
self.dialect.do_execute(
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 747, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.FeatureNotSupported: cannot use column reference in DEFAULT expression
LINE 5: create_date TIMESTAMP WITHOUT TIME ZONE DEFAULT sysdate,
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/matteo/Documents/DataKind/MyProject/application/backend/src/app/db/import_data.py", line 22, in <module>
Test.__table__.create(bind=engine, checkfirst=True)
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 1149, in create
bind._run_ddl_visitor(ddl.SchemaGenerator, self, checkfirst=checkfirst)
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3220, in _run_ddl_visitor
conn._run_ddl_visitor(visitorcallable, element, **kwargs)
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2427, in _run_ddl_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/sql/visitors.py", line 670, in traverse_single
return meth(obj, **kw)
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/sql/ddl.py", line 958, in visit_table
CreateTable(
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/sql/ddl.py", line 315, in _invoke_with
return bind.execute(self)
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1414, in execute
return meth(
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/sql/ddl.py", line 181, in _execute_on_connection
return connection._execute_ddl(
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1526, in _execute_ddl
ret = self._execute_context(
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1842, in _execute_context
return self._exec_single_context(
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1983, in _exec_single_context
self._handle_dbapi_exception(
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2325, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1964, in _exec_single_context
self.dialect.do_execute(
File "/Users/matteo/Documents/DataKind/MyProject/venv/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 747, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.NotSupportedError: (psycopg2.errors.FeatureNotSupported) cannot use column reference in DEFAULT expression
LINE 5: create_date TIMESTAMP WITHOUT TIME ZONE DEFAULT sysdate,
^
[SQL:
CREATE TABLE tests (
id SERIAL NOT NULL,
name VARCHAR,
create_date TIMESTAMP WITHOUT TIME ZONE DEFAULT sysdate,
PRIMARY KEY (id),
UNIQUE (name)
)
]
(Background on this error at: https://sqlalche.me/e/20/tw8g)
</code></pre>
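<p>A hedged sketch of the likely fix: <code>sysdate</code> is an Oracle function, so Postgres rejects it (and mis-reports it as a column reference). Replacing it with <code>func.now()</code>, or the portable <code>func.current_timestamp()</code>, compiles to a default Postgres accepts:</p>

```python
from sqlalchemy import Column, DateTime, Integer, String, func
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Test(Base):
    __tablename__ = "tests"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)
    # sysdate is Oracle-only; now() works on Postgres, and
    # func.current_timestamp() is portable across backends
    create_date = Column(DateTime, server_default=func.now())
```

<p>The emitted DDL then uses <code>DEFAULT now()</code> for <code>create_date</code>, which Postgres executes at insert time.</p>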
|
<python><postgresql><sqlalchemy>
|
2023-03-04 19:40:03
| 1
| 3,412
|
user1315621
|
75,638,209
| 15,222,211
|
How to parse Fortigate config by python
|
<p>I need to parse a Fortigate config to find and change some commands with Python, but I have not found a good solution. Maybe someone has experience parsing the Fortigate configuration and can recommend a good tool for this task.</p>
<p>I have experience with the <a href="https://github.com/mpenning/ciscoconfparse" rel="nofollow noreferrer">ciscoconfparse</a> library and would like to use some thing similar to play with the Fortigate config.</p>
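<p>For what it's worth, FortiOS configs are regular enough (nested <code>config</code>/<code>edit</code> ... <code>next</code>/<code>end</code> blocks) that a small stack-based parser can turn them into a nested dict if no ready-made library fits. A rough sketch, with the structure assumed from typical FortiOS output:</p>

```python
def parse_fortigate(text):
    """Parse nested config/edit ... next/end blocks into a dict tree."""
    root, stack = {}, []
    node = root
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith(("config ", "edit ")):
            # open a nested block keyed by the section/entry name
            key = line.split(None, 1)[1].strip('"')
            stack.append(node)
            node = node.setdefault(key, {})
        elif line in ("end", "next"):
            if stack:                       # close the current block
                node = stack.pop()
        elif line.startswith("set "):
            _, name, *values = line.split()
            node[name] = " ".join(v.strip('"') for v in values)
    return root
```

<p>Walking or rewriting commands is then ordinary dict manipulation; serialising back out would need the inverse function.</p>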
|
<python><parsing><fortigate><ciscoconfparse>
|
2023-03-04 19:36:33
| 1
| 814
|
pyjedy
|
75,638,199
| 10,077,354
|
Why is unpickling a custom class with a restricted Unpickler forbidden although it is allowed in find_class?
|
<p>I'm required to run some code repeatedly to train a model. I found that using pickle for saving my object after one iteration of the code was useful, and I could load it and use it in my second iteration.</p>
<p>But since pickle has known security issues, I wanted to use the <a href="https://docs.python.org/3/library/pickle.html#restricting-globals" rel="nofollow noreferrer">restricted_loads</a> approach. However, I can't seem to get it working for custom classes. Here's a smaller block of code that reproduces the error:</p>
<pre class="lang-py prettyprint-override"><code>import builtins
import io
import os
import pickle
safe_builtins = {
'range',
'complex',
'set',
'frozenset',
'slice',
}
allow_classes = {
'__main__.Shape'
}
class RestrictedUnpickler(pickle.Unpickler):
def find_class(self, module, name):
# Only allow safe classes from builtins.
if module == "builtins" and name in safe_builtins | allow_classes:
return getattr(builtins, name)
# Forbid everything else.
raise pickle.UnpicklingError("global '%s.%s' is forbidden" %
(module, name))
def restricted_loads(s):
"""Helper function analogous to pickle.loads()."""
return RestrictedUnpickler(io.BytesIO(s)).load()
class Person:
def __init__(
self,
name: str,
age: int,
):
self.name = name
self.age = age
class Shape:
def __init__(
self,
name: Person,
n: int = 50,
):
self.person = Person(
name = name,
age = "10",
)
self.n = n
s = Shape(
name = "name1",
n = 30,
)
filepath = os.path.join(os.getcwd(), "temp.pkl")
with open(filepath, 'wb') as outp:
pickle.dump(s, outp, -1)
with open(filepath, 'rb') as inp:
x = restricted_loads(inp.read())
</code></pre>
<p>Error:</p>
<pre class="lang-none prettyprint-override"><code>UnpicklingError Traceback (most recent call last)
Cell In[20], line 63
60 pickle.dump(s, outp, -1)
62 with open(filepath, 'rb') as inp:
---> 63 x = restricted_loads(inp.read())
Cell In[20], line 30, in restricted_loads(s)
28 def restricted_loads(s):
29 """Helper function analogous to pickle.loads()."""
---> 30 return RestrictedUnpickler(io.BytesIO(s)).load()
Cell In[20], line 25, in RestrictedUnpickler.find_class(self, module, name)
23 return getattr(builtins, name)
24 # Forbid everything else.
---> 25 raise pickle.UnpicklingError("global '%s.%s' is forbidden" %
26 (module, name))
UnpicklingError: global '__main__.Shape' is forbidden
</code></pre>
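<p>A sketch of why this fails and one way around it: the <code>find_class</code> above only returns names when <code>module == "builtins"</code>, so <code>__main__.Shape</code> always falls through to the error, and <code>getattr(builtins, name)</code> could never find it anyway. Custom classes live in <code>sys.modules[module]</code>, not in <code>builtins</code>, so they need their own lookup path. Note the nested <code>Person</code> must be allowed too, since a <code>Shape</code> pickle contains one:</p>

```python
import builtins
import io
import pickle
import sys

safe_builtins = {"range", "complex", "set", "frozenset", "slice"}
# fully qualified "module.name" strings for application classes
allow_classes = {"__main__.Shape", "__main__.Person"}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # builtins are looked up on the builtins module...
        if module == "builtins" and name in safe_builtins:
            return getattr(builtins, name)
        # ...while allowed custom classes are resolved from the module
        # they were defined in (which must already be imported)
        if f"{module}.{name}" in allow_classes:
            return getattr(sys.modules[module], name)
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")

def restricted_loads(s):
    """Helper function analogous to pickle.loads()."""
    return RestrictedUnpickler(io.BytesIO(s)).load()
```

<p>Everything not on either list still raises <code>UnpicklingError</code>, which keeps the allow-list property of the original recipe.</p>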
|
<python><python-3.x><pickle>
|
2023-03-04 19:35:23
| 1
| 2,487
|
Suraj
|
75,638,100
| 6,840,039
|
Python: how to calculate removal effect correctly using Markov Chains
|
<p>I have a bunch of path-to-purchase chains, and the more I have, the more stuck I get on building the graph and calculating the removal effect.</p>
<p>Let's imagine we have 5 different chains:</p>
<ol>
<li>start -> facebook -> google -> remarketing -> conversion</li>
<li>start -> facebook -> google -> null</li>
<li>start -> facebook -> remarketing -> conversion</li>
<li>start -> google -> remarketing -> null</li>
<li>start -> facebook -> google -> facebook -> google -> conversion</li>
</ol>
<p>I build the adjacency matrix using all possible pairs from these chains and I get here:</p>
<pre class="lang-py prettyprint-override"><code>['start', 'facebook', 'google', 'remarketing', 'conversion', 'null']
array([[0, 4, 1, 0, 0, 0],
[0, 0, 4, 1, 0, 0],
[0, 1, 0, 2, 1, 1],
[0, 0, 0, 0, 2, 1],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])
</code></pre>
<p>Let's try to compute the <strong>removal effect</strong> for <code>remarketing</code>: 1/5 * 1/5 = 1/25 ~ 0.04 (start -> google -> conversion).
Does that make any sense? It looks like the repeated channels in chain 5 make the adjacency matrix incorrect, so the result is incorrect too.</p>
<hr />
<p>I understand how to do that without the 5th chain.
The adjacency matrix will be</p>
<pre class="lang-py prettyprint-override"><code>['start', 'facebook', 'google', 'remarketing', 'conversion', 'null']
array([[0, 3, 1, 0, 0, 0],
[0, 0, 2, 1, 0, 0],
[0, 0, 0, 2, 0, 1],
[0, 0, 0, 0, 2, 1],
[0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0]])
</code></pre>
<p>and <code>remarketing</code> will be 0.
And <strong>Removal effect</strong> for <code>facebook</code>, for example, will be 1/4 * 2/3 * 2/3 ~ 0.11</p>
<hr />
<p>So how to calculate <strong>Removal Effect</strong> correctly when you have lots of complicated chains?</p>
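<p>A directional sketch (assuming the standard Markov-chain attribution approach): rather than multiplying probabilities along one hand-picked path, row-normalise the count matrix into a transition matrix, compute the overall conversion probability by solving the absorbing chain, and define the removal effect of a channel as the relative drop in conversion probability when all traffic entering that channel is redirected to <code>null</code>. Repeated visits, as in chain 5, are then handled naturally because they only change the transition counts:</p>

```python
import numpy as np

states = ['start', 'facebook', 'google', 'remarketing', 'conversion', 'null']
counts = np.array([[0, 4, 1, 0, 0, 0],
                   [0, 0, 4, 1, 0, 0],
                   [0, 1, 0, 2, 1, 1],
                   [0, 0, 0, 0, 2, 1],
                   [0, 0, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 0]], dtype=float)

def p_conversion(counts, start=0, conv=4, null=5):
    """Probability of reaching 'conversion' from 'start' in the absorbing chain."""
    T = counts.copy()
    rows = T.sum(axis=1, keepdims=True)
    np.divide(T, rows, out=T, where=rows > 0)   # row-normalise transition counts
    n = len(T)
    A = np.eye(n) - T                           # transient rows: p_i = sum_j T_ij p_j
    b = np.zeros(n)
    for absorbing, value in ((conv, 1.0), (null, 0.0)):
        A[absorbing] = 0.0
        A[absorbing, absorbing] = 1.0           # absorbing rows: p fixed to 1 or 0
        b[absorbing] = value
    return np.linalg.solve(A, b)[start]

def removal_effect(counts, channel, null=5):
    """Drop a channel by redirecting its incoming traffic to 'null'."""
    removed = counts.copy()
    removed[:, null] += removed[:, channel]
    removed[:, channel] = 0.0
    removed[channel, :] = 0.0
    return 1.0 - p_conversion(removed) / p_conversion(counts)

print(p_conversion(counts))               # about 0.6
print(removal_effect(counts, channel=3))  # remarketing, about 0.67
```

<p>For the five example chains this gives an overall conversion probability of 0.6 (3 of 5 chains convert, a good sanity check) and a removal effect of about 0.67 for <code>remarketing</code>: without it only chain 5 converts (0.2), and 1 - 0.2/0.6 = 2/3.</p>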
|
<python><graph><markov-chains>
|
2023-03-04 19:15:15
| 0
| 4,492
|
Petr Petrov
|
75,638,045
| 4,503,546
|
How to loop through variables w/ trend trading system
|
<p>I built a simple moving average trading system like so:</p>
<pre><code>import pandas as pd
import numpy as np
def trendfunc(f1,fastnum,slownum):
f1['Change'] = f1['Close'].pct_change()
f1['Fast'] = f1['Close'].rolling(window=fastnum,center=False).mean()
f1['Slow'] = f1['Close'].rolling(window=slownum,center=False).mean()
f1['Trend'] = f1['Fast'] - f1['Slow']
f1['Position'] = np.where(f1['Trend'].shift(2)>0,1,0)
f1['Result'] = f1['Position']*f1['Change']
f1 = f1.dropna()
f1['MktVAMI'] = f1['Change']+1
f1['MktVAMI'].iloc[0] = 1000
f1['MktVAMI'] = f1['MktVAMI'].cumprod()
f1['MktHi'] = f1['MktVAMI'].cummax()
f1['MktDD'] = (f1['MktVAMI']/f1['MktHi'])-1
f1['MktMaxDD'] = f1['MktDD'].cummin()
f1['SysVAMI'] = f1['Result']+1
f1['SysVAMI'].iloc[0] = 1000
f1['SysVAMI'] = f1['SysVAMI'].cumprod()
f1['SysHi'] = f1['SysVAMI'].cummax()
f1['SysDD'] = (f1['SysVAMI']/f1['SysHi'])-1
f1['SysMaxDD'] = f1['SysDD'].cummin()
keep = ['Date','MktVAMI','MktMaxDD','SysVAMI','SysMaxDD']
f2 = f1[keep].tail(1)
return f2
tkrs = ['spy']
for tkr in tkrs:
df1 = pd.read_csv(f'C:\\Path\\To\\Date\\{tkr}.csv')
FastMA = 50
SlowMA = 200
AA = trendfunc(df1,FastMA,SlowMA)
</code></pre>
<p>In the code above, I use one "fast" moving average (FastMA = 50) and one "slow" moving average (SlowMA = 200). I'd like to be able to loop through several possible values for both FastMA and SlowMA.</p>
<p>So, I might want to use:</p>
<p>FastMA = [10,20,30]
SlowMA = [100,200,300]</p>
<p>That gives 9 possible combinations, each of which is fed to the <code>trendfunc</code> function.</p>
<p>I'd also like to append each result to a dataframe, together with the variable values used, so I can then export that dataframe and compare results.</p>
<p>My loop skills aren't very strong when it comes to multi-level nesting and appending to a dataframe, hence this post. Please help. Thanks.</p>
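<p>One way to sketch the grid search, assuming <code>trendfunc</code> and <code>df1</code> as defined above (note that <code>trendfunc</code> mutates its input dataframe, so each run should get a fresh copy):</p>

```python
import itertools
import pandas as pd

def run_grid(df, trendfunc, fast_values, slow_values):
    """Run trendfunc for every fast/slow combination and collect results."""
    results = []
    for fast, slow in itertools.product(fast_values, slow_values):
        # trendfunc mutates its input, so hand it a fresh copy each time
        out = trendfunc(df.copy(), fast, slow)
        # record which parameters produced this row
        results.append(out.assign(FastMA=fast, SlowMA=slow))
    return pd.concat(results, ignore_index=True)
```

<p>Usage would then be something like <code>summary = run_grid(df1, trendfunc, [10, 20, 30], [100, 200, 300])</code>, followed by <code>summary.to_csv(...)</code> for comparison.</p>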
|
<python><pandas><dataframe><loops>
|
2023-03-04 19:09:09
| 1
| 407
|
GC123
|
75,637,997
| 4,792,565
|
writing Yaml file in python with no new line
|
<p>Let's say I have the following snippet:</p>
<pre><code>import yaml
Key = ["STAGE0", "STAGE1"]
dict = {}
dict[Key[0]] = [' ']
dict[Key[1]] = [' ']
dict[Key[0]][0]="HEY"
dict[Key[0]][0]="WHY newline?"
with open("SUMMARY.YAML", "w") as file_yaml:
yaml.dump(dict, file_yaml)
</code></pre>
<p>The output <code>SUMMARY.YAML</code> file looks like this :</p>
<pre><code>STAGE0:
- WHY newline?
STAGE1:
- ' '
</code></pre>
<p>However I need to save them, in the following desired format :</p>
<pre><code>STAGE0: WHY newline?
STAGE1: ' '
</code></pre>
<p>I am unable to get this output.</p>
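<p>A minimal sketch of the fix: the stray <code>- value</code> lines appear because each value is stored as a one-element <em>list</em> (<code>dict[Key[0]] = [' ']</code>), and PyYAML dumps lists in block style on their own lines. Storing plain string values produces the desired <code>KEY: value</code> layout:</p>

```python
import yaml

# Plain strings instead of one-element lists give "KEY: value" lines;
# sort_keys=False preserves the insertion order of the stages.
data = {"STAGE0": "WHY newline?", "STAGE1": " "}
text = yaml.dump(data, default_flow_style=False, sort_keys=False)
print(text)
```

<p>Writing <code>text</code> (or passing the file object to <code>yaml.dump</code> directly) then yields <code>STAGE0: WHY newline?</code> with no list dash or extra newline.</p>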
|
<python><file><io><yaml><pyyaml>
|
2023-03-04 19:02:32
| 2
| 525
|
Ayan Mitra
|
75,637,853
| 313,768
|
Adapt von Mises KDE to Seaborn
|
<p>I am attempting to use Seaborn to plot a bivariate (joint) KDE on a polar projection. There is no support for this in Seaborn, and no direct support for an angular (von Mises) KDE in Scipy.</p>
<p><a href="https://stackoverflow.com/a/44783738">scipy gaussian_kde and circular data</a> solves a related but different case. Similarities are - random variable is defined over linearly spaced angles on the unit circle; the KDE is plotted. Differences: I want to use Seaborn's <a href="https://seaborn.pydata.org/examples/joint_kde.html" rel="nofollow noreferrer">joint kernel density estimate support</a> to produce a contour plot of this kind -</p>
<p><a href="https://i.sstatic.net/PDwM4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PDwM4.png" alt="contour joint KDE" /></a></p>
<p>but with no categorical ("species") variation, and on a polar projection. The marginal plots would be nice-to-have but are not important.</p>
<p>The rectilinear version of my situation would be</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
import seaborn as sns
from numpy.random._generator import default_rng
angle = np.repeat(
np.deg2rad(
np.arange(0, 360, 10)
),
100,
)
rand = default_rng(seed=0)
data = pd.Series(
rand.normal(loc=50, scale=10, size=angle.size),
index=pd.Index(angle, name='angle'),
name='power',
)
matplotlib.use(backend='TkAgg')
joint = sns.JointGrid(
data.reset_index(),
x='angle', y='power'
)
joint.plot_joint(sns.kdeplot, bw_adjust=0.7, linewidths=1)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/I1145.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I1145.png" alt="rectilinear" /></a></p>
<p>but this is shown in the wrong projection, and also there should be no decreasing contour lines between the angles of 0 and 360.</p>
<p>Of course, as <a href="https://stackoverflow.com/a/62197982">Creating a circular density plot using matplotlib and seaborn</a> explains, the naive approach of using the existing Gaussian KDE in a polar projection is not valid, and even if I wanted to I couldn't, because <code>axisgrid.py</code> hard-codes the subplot setup with no parameters:</p>
<pre class="lang-py prettyprint-override"><code> f = plt.figure(figsize=(height, height))
gs = plt.GridSpec(ratio + 1, ratio + 1)
ax_joint = f.add_subplot(gs[1:, :-1])
ax_marg_x = f.add_subplot(gs[0, :-1], sharex=ax_joint)
ax_marg_y = f.add_subplot(gs[1:, -1], sharey=ax_joint)
</code></pre>
<p>I started in with a monkeypatching approach:</p>
<pre class="lang-py prettyprint-override"><code>import scipy.stats._kde
import numpy as np
def von_mises_estimate(
points: np.ndarray,
values: np.ndarray,
xi: np.ndarray,
cho_cov: np.ndarray,
dtype: np.dtype,
real: int = 0
) -> np.ndarray:
"""
Mimics the signature of gaussian_kernel_estimate
https://github.com/scipy/scipy/blob/main/scipy/stats/_stats.pyx#L740
"""
# https://stackoverflow.com/a/44783738
# Will make this a parameter
kappa = 20
# I am unclear on how 'values' would be used here
class VonMisesKDE(scipy.stats._kde.gaussian_kde):
def __call__(self, points: np.ndarray) -> np.ndarray:
points = np.atleast_2d(points)
result = von_mises_estimate(
self.dataset.T,
self.weights[:, None],
points.T,
self.inv_cov,
points.dtype,
)
return result[:, 0]
import seaborn._statistics
seaborn._statistics.gaussian_kde = VonMisesKDE
</code></pre>
<p>and this does successfully get called in place of the default Gaussian function, but (1) it's incomplete, and (2) I'm not clear that it would be possible to convince the joint plot methods to use the new projection.</p>
<p>A very distorted and low-quality preview of what this would look like, via Gimp transformation:</p>
<p><a href="https://i.sstatic.net/yMEHu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yMEHu.png" alt="gimp transformation" /></a></p>
<p>though the radial axis would increase instead of decrease from the centre out.</p>
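<p>As a building block, a 1-D von Mises KDE is straightforward to write by hand, following the approach in the linked answer (<code>vonmises_kde</code> is an illustrative name):</p>

```python
import numpy as np
from scipy.special import i0

def vonmises_kde(samples, kappa, n_grid=256):
    """1-D angular KDE: mean of von Mises kernels centred on each sample."""
    grid = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
    # pairwise angular differences between grid points and samples
    diff = grid[:, None] - samples[None, :]
    dens = np.exp(kappa * np.cos(diff)).mean(axis=1) / (2 * np.pi * i0(kappa))
    return grid, dens
```

<p>For the bivariate case one could evaluate the product of this angular kernel and a Gaussian kernel in the radial direction on a polar meshgrid, then draw it with <code>plt.subplots(subplot_kw={'projection': 'polar'})</code> and <code>ax.contour</code>, side-stepping <code>JointGrid</code>'s hard-coded rectilinear axes entirely.</p>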
|
<python><numpy><scipy><seaborn><kernel-density>
|
2023-03-04 18:41:20
| 2
| 16,660
|
Reinderien
|
75,637,814
| 3,747,724
|
Overwrite Blender python class method?
|
<p>I'm not sure what the right words are for this, but I'm using Python to edit a 3D object in Blender. I can change the position by using:</p>
<pre><code>some_3d_variable.obj.location = (0, 1, 0)
</code></pre>
<p>This moves the object to location (0, 1, 0) in meters, i.e. 100 centimeters along the y axis:</p>
<p><a href="https://i.sstatic.net/tKgIS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tKgIS.png" alt="enter image description here" /></a></p>
<p>I would like the location to be in centimeters. Obviously I could write</p>
<pre><code>some_3d_variable.obj.location = (0, 1/100,0)
</code></pre>
<p><a href="https://i.sstatic.net/LkDs0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LkDs0.png" alt="enter image description here" /></a></p>
<p>But that's not a long-term solution. Can I do this under the hood?</p>
<p>I have a custom class that stores the <code>self.obj</code> that exposes the location attribute. The location attribute comes from Blender. Can I do some Python magic such that setting the location divides the input by 100 while still setting the 3D position in Blender?</p>
<p>Here is an example. You can run this script in blender:</p>
<pre><code>import bpy
from dataclasses import dataclass, field, KW_ONLY
import bmesh
@dataclass
class Sphere():
def __post_init__(self):
bm = bmesh.new()
bmesh.ops.create_uvsphere(bm,u_segments=62, v_segments=36, radius=1)
mesh = bpy.data.meshes.new('Sphere')
bm.to_mesh(mesh)
mesh.update()
bm.free()
self.obj = bpy.data.objects.new("Sphere", mesh)
bpy.context.scene.collection.objects.link(self.obj)
# todo put here a method to handle obj.location, such that all the values are divided by 100
s = Sphere()
# update position
s.obj.location = (1, 0, 0)
</code></pre>
<p>The sphere's position is now set to 1 meter. I need it to be 1/100 of that, i.e. 1 centimeter. Any ideas?</p>
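<p>One sketch of the "under the hood" approach: keep Blender's <code>obj.location</code> in metres, and expose a centimetre-based property on the wrapper class instead. <code>DummyObj</code> below stands in for the Blender object so the idea runs outside Blender; inside Blender, <code>self.obj</code> would be the object created in <code>__post_init__</code>:</p>

```python
class DummyObj:
    """Stand-in for a Blender object; only the .location attribute matters."""
    def __init__(self):
        self.location = (0.0, 0.0, 0.0)

class Sphere:
    def __init__(self):
        self.obj = DummyObj()   # in Blender: bpy.data.objects.new(...)

    @property
    def location(self):
        """Wrapper location in centimetres (converted from Blender metres)."""
        return tuple(v * 100 for v in self.obj.location)

    @location.setter
    def location(self, value):
        # store metres on the Blender object, accept centimetres here
        self.obj.location = tuple(v / 100 for v in value)

s = Sphere()
s.location = (0, 1, 0)      # centimetres on the wrapper...
print(s.obj.location)       # ...(0.0, 0.01, 0.0) metres on the Blender object
```

<p>Callers then work in centimetres via <code>s.location</code>, while Blender itself keeps seeing metres; the Blender property itself can't easily be redefined, so the wrapper is the practical place for the conversion.</p>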
|
<python><blender>
|
2023-03-04 18:35:58
| 0
| 304
|
nammerkage
|
75,637,796
| 54,873
|
In pandas, how do I get multiple slices of a MultiIndexed-dataframe at a time?
|
<p>In pandas, I'm familiar with how to slice a Multi-Index with a list to get multiple values, like such:</p>
<pre><code>(Pdb) df = pd.DataFrame({"A": range(0,10), "B": -1, "C": range(20,30), "D": range(30,40), "E":range(40,50)}).set_index(["A", "B", "C"])
(Pdb) df
D E
A B C
0 -1 20 30 40
1 -1 21 31 41
2 -1 22 32 42
3 -1 23 33 43
4 -1 24 34 44
5 -1 25 35 45
6 -1 26 36 46
7 -1 27 37 47
8 -1 28 38 48
9 -1 29 39 49
(Pdb) df.loc[ [0,1,2]]
D E
A B C
0 -1 20 30 40
1 -1 21 31 41
2 -1 22 32 42
</code></pre>
<p>But how can I do this for multiple levels at a time?</p>
<pre><code>(Pdb) df.loc[ [0,1,2], -1]
*** KeyError: -1
</code></pre>
<p>Or ideally:</p>
<pre><code>(Pdb) df.loc[ [0,1,2], [-1]]
*** KeyError: "None of [Int64Index([-1], dtype='int64')] are in the [columns]"
</code></pre>
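<p>A sketch of the multi-level selection using the question's frame: pass one list per index level, and keep an explicit column selector so pandas doesn't mistake the second list for column labels (the cause of the second <code>KeyError</code>). <code>pd.IndexSlice</code> makes this readable:</p>

```python
import pandas as pd

df = pd.DataFrame({"A": range(0, 10), "B": -1, "C": range(20, 30),
                   "D": range(30, 40), "E": range(40, 50)}).set_index(["A", "B", "C"])

# one list per level (A, B, C), plus ":" for the column axis
idx = pd.IndexSlice
sub = df.loc[idx[[0, 1, 2], [-1], :], :]
print(sub)
```

<p><code>df.loc[([0, 1, 2], [-1]), :]</code> is equivalent; the trailing <code>:</code> for the column axis is what prevents the <code>KeyError</code> about columns.</p>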
|
<python><pandas>
|
2023-03-04 18:32:56
| 1
| 10,076
|
YGA
|
75,637,688
| 14,688,879
|
How to evenly space the grid on a matplotlib log scale
|
<p>How can I set the scale of a log-scaled axis in matplotlib to show values less than base^1 and space them evenly?</p>
<p><strong>Example</strong>:</p>
<p>I have a scale that looks like this:</p>
<p><a href="https://i.sstatic.net/SMf5S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SMf5S.png" alt="enter image description here" /></a></p>
<p>But I want a scale that looks like this:</p>
<p><a href="https://i.sstatic.net/gRg12.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gRg12.png" alt="enter image description here" /></a></p>
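<p>Since the screenshots aren't reproducible here, this is a guess at the intent: evenly spaced decade gridlines that extend below <code>base**1</code> can be forced with explicit axis limits and a <code>LogLocator</code> (a sketch, not necessarily the exact pictured plot):</p>

```python
import matplotlib
matplotlib.use("Agg")   # headless backend; drop this line when running interactively
import matplotlib.pyplot as plt
from matplotlib.ticker import LogLocator

fig, ax = plt.subplots()
ax.set_yscale("log")
ax.set_ylim(1e-3, 1e3)          # explicitly extend the range below 10**1
# one evenly spaced major tick/gridline per decade across the whole range
ax.yaxis.set_major_locator(LogLocator(base=10, numticks=12))
ax.grid(True, which="major", axis="y")
fig.canvas.draw()
```

<p>On a log scale "evenly spaced" means one gridline per decade; minor gridlines between decades can be toggled with <code>ax.grid(..., which="minor")</code>.</p>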
|
<python><matplotlib><graph><logarithm>
|
2023-03-04 18:15:56
| 1
| 342
|
confusedandsad
|
75,637,585
| 19,694,624
|
The rest of the python code doesn't work with FastAPI
|
<p>I want to make a Discord bot that works with data sent from a web application via a POST request. For that I need to build my own API exposing that POST endpoint with FastAPI.</p>
<p>My question is: how can I make my API work with the rest of the code?</p>
<p>Consider this example:</p>
<pre><code>from fastapi import FastAPI, Request
from pydantic import BaseModel
app = FastAPI()
dict = {"name": "","description": ""}
class Item(BaseModel):
name: str
description: str
@app.post("/item")
async def create_item(request: Request, item: Item):
result = await request.json()
dict["name"] = result["name"]
print(dict)
print(dict)
</code></pre>
<p>When I run the API, type the values in, and print <code>dict</code> for the first time, it outputs something like: <code>{'name': 'some_name', 'desc': 'some_desc'}</code>.
But when I run my file as plain Python code, only <code>{'name': '', 'desc': ''}</code> gets printed out.</p>
<p>I thought that after I type in values in <code>dict</code> on my API page (https://localhost:800/docs), python would output the exact values I typed, but it didn't happen.</p>
<p>What do I do?</p>
|
<python><fastapi>
|
2023-03-04 18:00:59
| 2
| 303
|
syrok
|
75,637,560
| 3,672,883
|
Is this the correct way to use Protocol, and if so, why does mypy fail?
|
<p>I have the following two classes:</p>
<pre><code>@runtime_checkable
class AbstractFolder(Protocol):
def __iter__(self) -> "AbstractFolder":
raise NotImplementedError
def __next__(self) -> AbstractFileReadable:
raise NotImplementedError
</code></pre>
<p>and their implementation:</p>
<pre><code>class FileSystemFolder(AbstractFolder, Iterator):
def __init__(self, path: str):
self.path: str = path
def __iter__(self) -> "FileSystemFolder":
self.jobs: List[AbstractFileReadable] = [
FileSystemFileReadable(path)
for path in [*glob(os.path.join(self.path, "*"))]
]
self.current: int = -1
return self
def __next__(self) -> FileSystemFileReadable:
self.current += 1
if self.current >= len(self.jobs):
raise StopIteration
return self.jobs[self.current]
</code></pre>
<p>and the following function</p>
<pre><code>def process(folder: AbstractFolder) -> None:
...
</code></pre>
<p>The implementations return instances of concrete classes (and there could be many of them), but when I run mypy I get the following error:</p>
<pre><code>error: Incompatible return value type (got "AbstractFileReadable", expected "FileSystemFileReadable")
</code></pre>
<p>Is this the correct way to implement and use Protocol and typing?</p>
<p>Thanks</p>
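<p>A self-contained sketch of what I believe resolves the mypy error (class names here are illustrative): the message suggests <code>self.jobs</code> is annotated as <code>List[AbstractFileReadable]</code> while <code>__next__</code> promises the concrete <code>FileSystemFileReadable</code>; annotating the list with the concrete type (or widening the return annotation) fixes it. Also note that with <code>Protocol</code>, the implementation doesn't need to inherit from the protocol at all; structural typing is the point:</p>

```python
from typing import Iterator, List, Protocol, runtime_checkable

class AbstractFileReadable(Protocol):
    def read(self) -> str: ...

@runtime_checkable
class AbstractFolder(Protocol):
    def __iter__(self) -> Iterator[AbstractFileReadable]: ...

class FileReadable:
    def __init__(self, path: str) -> None:
        self.path = path
    def read(self) -> str:
        return self.path

class Folder:
    """Satisfies AbstractFolder structurally; no inheritance needed."""
    def __init__(self, paths: List[str]) -> None:
        # annotate with the *concrete* type so narrower return
        # annotations elsewhere type-check cleanly
        self.files: List[FileReadable] = [FileReadable(p) for p in paths]
    def __iter__(self) -> Iterator[FileReadable]:
        return iter(self.files)

def process(folder: AbstractFolder) -> None:
    for f in folder:
        f.read()
```

<p>Returning the narrower concrete type from <code>__iter__</code>/<code>__next__</code> is fine (return types are covariant); the mismatch only arises when the stored elements are annotated with the wider abstract type.</p>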
|
<python><python-typing><mypy>
|
2023-03-04 17:57:11
| 1
| 5,342
|
Tlaloc-ES
|
75,637,544
| 5,356,096
|
Type-hinting based on custom config file structure
|
<p>Is there a method in Python or PyCharm to produce attribute type hints for a class that dynamically loads the attributes from a file?</p>
<p>Consider the following <code>Settings</code> class:</p>
<pre class="lang-py prettyprint-override"><code>import re
class Settings:
"""
This class loads a settings file and creates properties for each name in the file.
The values are either strings or numbers.
If the name starts with a star, the value is saved to self.cfg_dict to be used for wandb config.
"""
def __init__(self, path):
self.path = path
self.cfg_dict = {}
self.load_settings()
def load_settings(self):
with open(self.path, 'r') as file:
for line in file:
# if line is empty or starts with a comment, skip it
if not line.strip() or line.strip().startswith('#'):
continue
if '=' in line:
name, value = line.split('=')
name = name.strip()
value = value.strip()
# check if name has a star on the beginning and if so, save it to self.cfg_dict
if name.startswith('*'):
name = name[1:]
self.cfg_dict[name] = value.strip('"')
else:
# check if value is a number, either int or float
if re.match(r'^-?\d+$', value):
value = int(value)
elif re.match(r'^-?\d+\.\d+$', value):
value = float(value)
else:
value = value.strip('"')
setattr(self, name, value)
</code></pre>
<p>And a sample config:</p>
<pre><code>var_a=5
var_b="Hello World"
</code></pre>
<p>Therefore, initializing <code>Settings</code> with the config file (<code>a = Settings('myconfig.cfg')</code>) would produce type hints for the new attributes: <code>a.var_a</code> and <code>a.var_b</code>. Is this possible to achieve in any shape or form, or am I best off just creating a dataclass and modifying the parameters on the fly?</p>
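<p>One lightweight option that PyCharm understands: declare the expected attributes as class-level annotations (kept in sync with the config by hand, or generated into a <code>.pyi</code> stub). <code>setattr</code> still assigns the values dynamically; the annotations exist purely for static tooling. A condensed sketch, with the loader reduced to a string-based version for brevity and the names taken from the sample config:</p>

```python
class Settings:
    # Declared for static tooling (PyCharm completion / type checking);
    # the setattr() call below still assigns the real values dynamically.
    var_a: int
    var_b: str

    def __init__(self, text: str) -> None:
        for line in text.splitlines():
            if "=" in line:
                name, value = (part.strip() for part in line.split("=", 1))
                setattr(self, name,
                        int(value) if value.lstrip("-").isdigit()
                        else value.strip('"'))

s = Settings('var_a=5\nvar_b="Hello World"')
print(s.var_a, s.var_b)   # 5 Hello World
```

<p>Fully config-driven hints (with no hand-written declarations) need a code-generation step producing a stub file, since static analysis can't see what a file will contain at runtime; the dataclass route is effectively the same trade-off.</p>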
|
<python><pycharm><type-hinting>
|
2023-03-04 17:54:26
| 1
| 1,665
|
Jack Avante
|
75,637,511
| 17,082,611
|
os.listdir returns the error: no such file or directory even though the directory actually exists
|
<p>This is my project files structure:</p>
<p><a href="https://i.sstatic.net/Jubqj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jubqj.png" alt="hierarchy" /></a></p>
<p>and this is my <code>main.py</code> script:</p>
<pre><code>if __name__ == '__main__':
leukemia_dir = "../dataset/leukemia" # if I click here, I get redirected to the folder
file_names = os.listdir(leukemia_dir) # << won't work
</code></pre>
<p>Unfortunately, <code>os.listdir(leukemia_dir)</code> returns me the following error:</p>
<blockquote>
<p>FileNotFoundError: [Errno 2] No such file or directory: '../dataset/leukemia'</p>
</blockquote>
<p>If I remove <code>../</code> from <code>leukemia_dir</code>, it works. Furthermore, <code>os.getcwd()</code> returns <code>/Users/John/Desktop/Leukemia</code>.</p>
<p>What am I missing?</p>
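For reference, relative paths like <code>../dataset/leukemia</code> are resolved against the current working directory, not against the script's location. A hedged sketch of building the path from the script file instead (the absolute path below is hypothetical; in real code you would start from <code>Path(__file__).resolve()</code>):

```python
from pathlib import Path

# Hypothetical script location - in main.py this would be Path(__file__).resolve()
script_path = Path("/Users/John/Desktop/Leukemia/project/main.py")

# Build the dataset path relative to the script, then collapse the ".."
leukemia_dir = (script_path.parent / ".." / "dataset" / "leukemia").resolve()
print(leukemia_dir)
```

Built this way, the directory is found no matter which working directory the script is launched from.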
|
<python><directory><pycharm>
|
2023-03-04 17:49:34
| 2
| 481
|
tail
|
75,637,278
| 8,667,071
|
Why do small changes have dramatic effects on the runtime of my numba parallel function?
|
<p>I'm trying to understand why my parallelized numba function is acting the way it does. In particular, why it is so sensitive to how arrays are being used.</p>
<p>I have the following function:</p>
<pre><code>from numpy import zeros, sqrt
from numba import njit, prange

@njit(parallel=True)
def f(n):
g = lambda i,j: zeros(3) + sqrt(i*j)
x = zeros((n,3))
for i in prange(n):
for j in range(n):
tmp = g(i,j)
x[i] += tmp
return x
</code></pre>
<p>Trust that n is large enough for parallel computing to be useful. For some reason this actually runs faster with fewer cores. Now I make a small change (<code>x[i]</code> -> <code>x[i, :]</code>):</p>
<pre><code>@njit(parallel=True)
def f(n):
g = lambda i,j: zeros(3) + sqrt(i*j)
x = zeros((n,3))
for i in prange(n):
for j in range(n):
tmp = g(i,j)
x[i, :] += tmp
return x
</code></pre>
<p>The performance is significantly better, and it scales properly with the number of cores (i.e. more cores is faster). Why does slicing make the performance better? To go even further, another change that makes a big difference is turning the <code>lambda</code> function into an external njit function.</p>
<pre><code>@njit
def g(i,j):
x = zeros(3) + sqrt(i*j)
return x
@njit(parallel=True)
def f(n):
x = zeros((n,3))
for i in prange(n):
for j in range(n):
tmp = g(i,j)
x[i, :] += tmp
return x
</code></pre>
<p>This again ruins the performance and scaling, reverting to runtimes equal to or slower than the first case. Why does this external function ruin the performance? The performance can be recovered with the two options shown below.</p>
<pre><code>@njit
def g(i,j):
x = sqrt(i*j)
return x
@njit(parallel=True)
def f(n):
x = zeros((n,3))
for i in prange(n):
for j in range(n):
tmp = zeros(3) + g(i,j)
x[i, :] += tmp
return x
</code></pre>
<pre><code>@njit(parallel=True)
def f(n):
def g(i,j):
x = zeros(3) + sqrt(i*j)
return x
x = zeros((n,3))
for i in prange(n):
for j in range(n):
tmp = g(i,j)
x[i, :] += tmp
return x
</code></pre>
<p>Why is the <code>parallel=True</code> numba decorated function so sensitive to how arrays are being used? I know arrays are not trivially parallelizable, but the exact reason each of these changes dramatically affects performance isn't obvious to me.</p>
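For what it's worth, in plain NumPy the two spellings are semantically identical in-place row updates; the performance gap in the question comes entirely from how numba lowers each form. A quick sanity check of the NumPy semantics:

```python
import numpy as np

x = np.zeros((4, 3))
x[1] += np.ones(3)      # plain row indexing
x[2, :] += np.ones(3)   # explicit slice - same result in NumPy
print(np.array_equal(x[1], x[2]))  # True
```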
|
<python><performance><numba>
|
2023-03-04 17:11:44
| 1
| 366
|
Cavenfish
|
75,637,211
| 15,491,774
|
python schedule package with init function
|
<p>I need to create an init function that runs once before the scheduler starts. I am using the schedule package.</p>
<p>Is there an easy way to add an init function that runs before the scheduler starts?</p>
<pre><code>import schedule
def call_me():
print("I am invoked")
schedule.every(1).seconds.do(call_me)
while True:
schedule.run_pending()
</code></pre>
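Nothing scheduler-specific seems to be needed: calling the init function once, before entering the loop, already gives the desired ordering. A library-free sketch of the control flow (with the schedule package, the loop body would be <code>schedule.run_pending()</code> plus a short <code>time.sleep</code> to avoid busy-waiting):

```python
import time

calls = []

def init():
    # one-time setup that must finish before any scheduled job runs
    calls.append("init")

def call_me():
    calls.append("invoked")

init()                     # runs exactly once, up front
for _ in range(3):         # stands in for `while True:` here
    call_me()              # stands in for schedule.run_pending()
    time.sleep(0.01)

print(calls)  # ['init', 'invoked', 'invoked', 'invoked']
```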
|
<python><schedule>
|
2023-03-04 17:00:53
| 1
| 448
|
stylepatrick
|
75,637,156
| 8,176,763
|
sub count of column values after group by pandas
|
<p>I have a dataframe like this:</p>
<pre><code>df = pd.DataFrame({
'org':['a','a','a','a','b','b'],
'product_version':['bpm','bpm','bpm','bpm','ppp','ppp'],
'release_date':['2022-07','2022-07','2022-07','2022-07','2022-08','2022-08'],
'date_avail':['no','no','no','yes','no','no'],
'status':['green','green','yellow','yellow','green','green']
})
</code></pre>
<p>that looks like that:</p>
<pre><code> org product_version release_date date_avail status
0 a bpm 2022-07 no green
1 a bpm 2022-07 no green
2 a bpm 2022-07 no yellow
3 a bpm 2022-07 yes yellow
4 b ppp 2022-08 no green
5 b ppp 2022-08 no green
</code></pre>
<p>I would like to have the total count after grouping by columns <code>['org','product_version','release_date']</code>. This is straightforward:</p>
<pre><code>print(df.groupby(['org','product_version','release_date']).size())
org product_version release_date
a bpm 2022-07 4
b ppp 2022-08 2
</code></pre>
<p>However, I would also like to get sub-counts within each group for the different values of the other columns that were not grouped. For example, the first group, which has a total count of <code>4</code>, is <code>a bpm 2022-07</code>. This group has <code>3</code> <code>no</code> and <code>1</code> <code>yes</code> for column <code>date_avail</code>, and <code>2</code> <code>green</code> and <code>2</code> <code>yellow</code> for column <code>status</code>.</p>
<p>So my desired table result would look like:</p>
<pre><code>org product release_date total number_of_no number_of_yes number_of_green number_of_yellow
a bpm 2022-07 4 3 1 2 2
b ppp 2022-08 2 2 0 2 0
</code></pre>
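One possible route, sketched below: one-hot encode the two value columns with <code>pd.get_dummies</code>, then sum the indicator columns inside each group and attach the group size as <code>total</code>:

```python
import pandas as pd

df = pd.DataFrame({
    'org': ['a', 'a', 'a', 'a', 'b', 'b'],
    'product_version': ['bpm', 'bpm', 'bpm', 'bpm', 'ppp', 'ppp'],
    'release_date': ['2022-07', '2022-07', '2022-07', '2022-07', '2022-08', '2022-08'],
    'date_avail': ['no', 'no', 'no', 'yes', 'no', 'no'],
    'status': ['green', 'green', 'yellow', 'yellow', 'green', 'green'],
})

keys = ['org', 'product_version', 'release_date']
# one-hot encode -> number_of_no / number_of_yes / number_of_green / number_of_yellow
dummies = pd.get_dummies(df[['date_avail', 'status']], prefix='number_of')
g = pd.concat([df[keys], dummies], axis=1).groupby(keys)

out = g.sum()                     # per-group counts of each value
out.insert(0, 'total', g.size())  # per-group total row count
out = out.reset_index()
print(out)
```

Summing the boolean indicator columns per group yields exactly the sub-counts in the desired table.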
|
<python><pandas>
|
2023-03-04 16:50:18
| 4
| 2,459
|
moth
|
75,637,133
| 5,281,775
|
Padding time dimension in softmax output for CTC loss
|
<p>Network:</p>
<pre><code>Input sequence -> BiLSTM---------> BiLSTM --------> Dense with softmax
Output shapes: (None, 5, 256) (None, 5, 128) (None, 5, 11)
</code></pre>
<p>Here is my CTC loss:</p>
<pre><code>def calculate_ctc_loss(y_true, y_pred):
batch_length = tf.cast(tf.shape(y_true)[0], dtype="int64")
input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64")
label_length = tf.cast(tf.shape(y_true)[1], dtype="int64")
input_length = input_length * tf.ones(shape=(batch_length, 1), dtype="int64")
label_length = label_length * tf.ones(shape=(batch_length, 1), dtype="int64")
loss = tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length)
return loss
</code></pre>
<p>There are 10 classes in total. For the first batch with a batch size of 16 the shapes are:</p>
<pre><code>y_true: (16, 7)
y_pred: (16, 5, 11)
</code></pre>
<p>I tried to pad the time dimension in <code>y_pred</code> so that the shape is <code>(16, 7, 11)</code>, but the loss turned into <code>nan</code>.</p>
<p>Question: How do I correctly pad the time dimension in this case so that <code>y_true</code> and <code>y_pred</code> have compatible shapes for the CTC calculation?</p>
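A likely cause of the <code>nan</code>: CTC requires the number of time steps to be at least the label length (plus one extra frame for every pair of equal adjacent labels), so padding 5 frames up to 7 cannot make a 7-symbol alignment feasible — the network itself would need to emit at least 7 time steps. A small check of that constraint, assuming integer label sequences:

```python
def ctc_feasible(time_steps, labels):
    # minimum frames: one per label, plus a mandatory blank between equal neighbours
    need = len(labels) + sum(1 for a, b in zip(labels, labels[1:]) if a == b)
    return time_steps >= need

print(ctc_feasible(5, [1, 2, 3, 4, 5, 6, 7]))  # False - why the loss blows up
print(ctc_feasible(7, [1, 2, 3, 4, 5, 6, 7]))  # True
```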
|
<python><tensorflow><keras><loss-function><ctc>
|
2023-03-04 16:46:33
| 0
| 2,325
|
enterML
|
75,636,887
| 1,663,762
|
ModuleNotFoundError: No module named 'importlib.util' in
|
<p>I am trying to install a project I downloaded from github:</p>
<pre><code>jlinkels@schrans-pc:/tmp/subsai/src/subsai$ pip3 install -I git+file:///tmp/subsai
Collecting git+file:/tmp/subsai
Cloning file:///tmp/subsai to /tmp/pip-req-build-8znnvukr
Running command git clone -q file:///tmp/subsai /tmp/pip-req-build-8znnvukr
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Collecting whisper-timestamped@ git+https://github.com/linto-ai/whisper-timestamped
Cloning https://github.com/linto-ai/whisper-timestamped to /tmp/pip-install-dk7lxwc1/whisper-timestamped_9286d14b0de04a85a41b274c894ca025
Running command git clone -q https://github.com/linto-ai/whisper-timestamped /tmp/pip-install-dk7lxwc1/whisper-timestamped_9286d14b0de04a85a41b274c894ca025
Collecting openai-whisper<20230124.1,>=20230124
Using cached openai-whisper-20230124.tar.gz (1.2 MB)
Collecting streamlit-player<0.2.0,>=0.1.5
Using cached streamlit_player-0.1.5-py3-none-any.whl (1.7 MB)
Collecting pandas<1.6.0,>=1.5.2
Using cached pandas-1.5.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)
Collecting importlib<1.1.0,>=1.0.4
Using cached importlib-1.0.4.zip (7.1 kB)
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-dk7lxwc1/importlib_89709f26aefb4b5da3eeb624637ca00e/setup.py'"'"'; __file__='"'"'/tmp/pip-install-dk7lxwc1/importlib_89709f26aefb4b5da3eeb624637ca00e/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-xff5mn0s
cwd: /tmp/pip-install-dk7lxwc1/importlib_89709f26aefb4b5da3eeb624637ca00e/
Complete output (11 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 10, in <module>
import distutils.core
File "/usr/lib/python3.9/distutils/core.py", line 16, in <module>
from distutils.dist import Distribution
File "/usr/lib/python3.9/distutils/dist.py", line 19, in <module>
from distutils.util import check_environ, strtobool, rfc822_escape
File "/usr/lib/python3.9/distutils/util.py", line 9, in <module>
import importlib.util
ModuleNotFoundError: No module named 'importlib.util'
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/31/77/3781f65cafe55480b56914def99022a5d2965a4bb269655c89ef2f1de3cd/importlib-1.0.4.zip#sha256=b6ee7066fea66e35f8d0acee24d98006de1a0a8a94a8ce6efe73a9a23c8d9826 (from https://pypi.org/simple/importlib/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement importlib<1.1.0,>=1.0.4 (from subsai)
ERROR: No matching distribution found for importlib<1.1.0,>=1.0.4
</code></pre>
<p>The installation fails here:</p>
<pre><code> File "/usr/lib/python3.9/distutils/util.py", line 9, in <module>
import importlib.util
ModuleNotFoundError: No module named 'importlib.util'
</code></pre>
<p>importlib is part of Python 3.9's standard library, so this should not fail.</p>
<p>Indeed, when I start a Python3 shell I can import that module:</p>
<pre><code>Python 3.9.2 (default, Feb 28 2021, 17:03:44)
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import importlib.util
>>>
</code></pre>
<p>Furthermore, I don't understand this line:</p>
<pre><code>Collecting importlib<1.1.0,>=1.0.4
Using cached importlib-1.0.4.zip (7.1 kB)
</code></pre>
<p>All other dependencies are in the requirement.txt but not importlib:</p>
<pre><code>torch~=1.13.1
openai-whisper~=20230124
streamlit~=1.18.1
streamlit_player~=0.1.5
streamlit-aggrid~=0.3.3
ffsubsync~=0.4.23
git+https://github.com/linto-ai/whisper-timestamped
pandas~=1.5.2
pysubs2~=1.6.0
</code></pre>
<p>Now it might be relevant that this line:</p>
<pre><code>importlib~=1.0.4
</code></pre>
<p><strong>was</strong> present in the upstream repository. But installing threw the same error as I still have now.</p>
<p>Update: I also removed any references in pyproject.toml and poetry.lock but I failed to include that in my original post.</p>
<p>Therefore I cloned the project and removed the reference to importlib~=1.0.4 from the requirements.txt.</p>
<p>Why is pip still insisting on installing it? And why does it fail here, while the same import succeeds in a Python 3 shell?</p>
<p>Environment: Debian 11 Bullseye<br />
pip 20.3.4 from /usr/lib/python3/dist-packages/pip (python 3.9)<br />
Python 3.9.2</p>
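As a sanity check, <code>importlib.util</code> ships with the standard library on every Python 3 version in use, which is why the PyPI <code>importlib</code> backport (a Python 2 era package) should never appear in a Python 3 dependency list — its ancient <code>setup.py</code> is what crashes the build above:

```python
import sys
import importlib.util

# The stdlib importlib.util is importable and can locate other stdlib modules
print(sys.version_info[:2])
print(importlib.util.find_spec("json") is not None)  # True
```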
|
<python><python-3.x><pip>
|
2023-03-04 16:06:34
| 1
| 353
|
Johannes Linkels
|
75,636,358
| 7,236,133
|
Add a new column to Pandas data frame with aggregated data
|
<p>I implemented my own K-Fold cross validation (had a special case to deal with), and I need to save the predictions and its confidence as new columns.</p>
<p>1- In each iteration: <code>test_predictions = clf.predict(X_test)</code></p>
<p>2- Compare the predictions to ground truth:</p>
<pre><code>treatments = test_fold.loc[:, 'treatment'].unique().tolist()
idx = df.index[df['treatment'].isin(treatments)].tolist()
</code></pre>
<p>and tried saving these values into a new column, but at each iteration I only have values for one test fold, not the entire data set, so it didn't work:</p>
<pre><code>df.iloc[idx]['new_col'] = (y_test == test_predictions)
</code></pre>
<p>where y_test is the label data for the fold that was picked for testing (comparing the real labeled data with the classifier predictions)</p>
<p>How can I aggregate all the predictions corresponding to the correct indices of each test fold across iterations, and then save them as a new column on the original data frame at the end (or save one part during each iteration)?</p>
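One sketch of that aggregation: pre-create the column once, then write each fold's slice with <code>.loc</code> (the chained <code>df.iloc[idx]['new_col'] = ...</code> form assigns to a temporary copy, which is probably why it didn't stick). The <code>idx</code>/<code>y_test</code>/<code>test_predictions</code> values below are stand-ins for one fold:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"treatment": ["a", "a", "b", "b"], "y": [1, 0, 1, 1]})
df["new_col"] = np.nan          # create the column once, before the K-fold loop

# inside one iteration (stand-in fold data):
idx = [2, 3]
y_test = np.array([1, 1])
test_predictions = np.array([1, 0])
# .loc writes into the original frame; each fold fills in its own rows
df.loc[idx, "new_col"] = (y_test == test_predictions).astype(float)

print(df)
```

After the loop finishes, every row has been written exactly once by the fold it belonged to.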
|
<python><pandas>
|
2023-03-04 14:37:44
| 1
| 679
|
zbeedatm
|
75,636,356
| 1,946,052
|
Get solution value of variable by name in pySCIPopt
|
<p>I want to get the solution value of a model variable by name in pySCIPopt.</p>
<p>It should be something like <code>model.getSolByName("x")</code>.</p>
<p>Is this possible? (I didn't find anything like this.) I wrote my own child class of Model to do this, but wonder if there is a built-in method.</p>
<pre><code>from pyscipopt import Model
# ==========================================================
class myModel(Model):
def __init__(self,*args,**kwargs):
super().__init__(*args,**kwargs)
self._varByName = {}
def __add__(self,constraint):
if type(constraint) == tuple:
cnstr = constraint[0]
name = constraint[1]
else:
cnstr = constraint
name = ""
self.addCons(cnstr,name=name)
return self
def addVar(self,*args,**kwargs):
var = args[0]
obj = super().addVar(*args,**kwargs)
self._varByName[var] = obj
return obj
def optimize(self):
super().optimize()
self.sol = self.getBestSol()
def varByName(self,name):
return self._varByName[name]
def solByName(self,name):
return self.sol[self._varByName[name]]
# ==========================================================
model = myModel("Example")
x = model.addVar("x")
y = model.addVar("y",vtype="INTEGER")
model += 2*x - y*y >= 3, "cnst01"
model += y >= 1
model.setObjective(x + y)
model.setIntParam("display/verblevel",1)
model.optimize()
print("Status:",model.getStatus())
print("solByName('x') = %s" % (model.solByName("x")))
print("solByName('y') = %s" % (model.solByName("y")))
</code></pre>
|
<python><pyscipopt>
|
2023-03-04 14:37:29
| 1
| 2,283
|
Michael Hecht
|
75,636,205
| 9,859,642
|
Merging series of 2D DataFrames to 3D xarray
|
<p>I have a series of 2D DataFrames that should be merged into a 3D xarray. The structure of DataFrames looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>5 6 4</th>
<th>8 -1 3</th>
<th>angle</th>
</tr>
</thead>
<tbody>
<tr>
<td>5</td>
<td>105.87</td>
<td>459.62</td>
<td>0.1</td>
</tr>
<tr>
<td>10</td>
<td>211.74</td>
<td>919.24</td>
<td>0.1</td>
</tr>
</tbody>
</table>
</div><div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>5 6 4</th>
<th>8 -1 3</th>
<th>angle</th>
</tr>
</thead>
<tbody>
<tr>
<td>5</td>
<td>125.87</td>
<td>439.62</td>
<td>0.2</td>
</tr>
<tr>
<td>10</td>
<td>241.74</td>
<td>949.24</td>
<td>0.2</td>
</tr>
</tbody>
</table>
</div>
<p>My goal is to have a 3D xarray that will have such structure:</p>
<pre><code>Dimensions: (xyz: 2, thickness: 2, angle: 2)
Coordinates:
* xyz (xyz) object '5 6 4' '8 -1 3'
* thickness (thickness) int 5 10
* angle (angle) float64 0.1 0.2
Data variables:
I don't know how the variables should be sorted
</code></pre>
<p>For now I changed DataFrames into xarrays in such a manner:</p>
<pre><code>xa = xarray.Dataset.from_dataframe(df).set_coords("angle")
</code></pre>
<p>The 2D xarrays look like this:</p>
<pre><code>Dimensions: (thickness: 2)
Coordinates:
* thickness (thickness) int 5 10
angles (thickness) float64 0.1 0.1
Data variables:
5 6 4 (thickness) float64 105.87 211.74
8 -1 3 (thickness) float64 459.62 919.24
</code></pre>
<p>Then when I try to merge the xarrays with <code>.merge</code>, I got an error <code>MergeError: conflicting values for variable '0 0 0 ' on objects to be combined. You can skip this check by specifying compat='override'.</code></p>
<p>I wanted to know:</p>
<ol>
<li>How to turn angles into a dimension? It seems to be something different from coordinates.</li>
<li>How to make this list of xyz coordinates ('5 6 4', '8 -1 3') into another dimension called 'xyz'?</li>
</ol>
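One direction worth trying, sketched with the two sample frames: concatenate them into one long (tidy) table, make <code>thickness</code>, <code>angle</code>, and the xyz column labels explicit columns, and index by all three. A Series with such a MultiIndex converts to a 3-D array via <code>.to_xarray()</code> (that final call is left commented here):

```python
import pandas as pd

df1 = pd.DataFrame({"5 6 4": [105.87, 211.74], "8 -1 3": [459.62, 919.24],
                    "angle": [0.1, 0.1]}, index=[5, 10])
df2 = pd.DataFrame({"5 6 4": [125.87, 241.74], "8 -1 3": [439.62, 949.24],
                    "angle": [0.2, 0.2]}, index=[5, 10])

long = (pd.concat([df1, df2])
          .rename_axis("thickness")    # name the index so it becomes a column
          .reset_index()
          .melt(id_vars=["thickness", "angle"], var_name="xyz"))
indexed = long.set_index(["xyz", "thickness", "angle"])["value"]
# da = indexed.to_xarray()  # DataArray with dims (xyz, thickness, angle)
print(indexed)
```

This sidesteps the merge conflict entirely: instead of merging per-angle Datasets, every angle becomes ordinary rows of one long table before the conversion.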
|
<python><dataframe><python-xarray>
|
2023-03-04 14:11:16
| 1
| 632
|
Anavae
|
75,636,154
| 1,727,657
|
How do I append the index of a string to a list using extend() in python?
|
<p>I'm trying to look through a long string to find instances of a substring, then I want to create a list that has the index of each substring found and the substring found. But instead of the index in a readable form, I'm getting a reference to the object, such as <code> [<built-in method index of str object at 0x000001687B01E930>, 'b']</code>. I'd rather have <code>[123, 'b']</code>.</p>
<p>Here's the code I've tried:</p>
<pre><code>test_string = "abcdefg"
look_for = ["b","f"]
result = []
for each in test_string:
if each in look_for:
result.extend([each.index, each])
print(result)
</code></pre>
<p>I know I could do this with a list comprehension, but I plan to add a bunch of other code to this <code>for</code> later and am only asking about the index issue here.</p>
<p>I've tried <code>str(each.index)</code> and <code>print(str(result))</code></p>
<p>But that doesn't help. What am I missing?</p>
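For the record, <code>each.index</code> is a reference to the string method itself (it is never called), and calling it wouldn't help either, since <code>each</code> is a single character with no memory of its position in <code>test_string</code>. <code>enumerate</code> carries the position along instead; a sketch:

```python
test_string = "abcdefg"
look_for = ["b", "f"]
result = []
for i, each in enumerate(test_string):  # i is the position of each character
    if each in look_for:
        result.extend([i, each])
print(result)  # [1, 'b', 5, 'f']
```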
|
<python>
|
2023-03-04 14:02:56
| 3
| 477
|
OutThere
|
75,635,611
| 16,127,735
|
Pacman game with Pygame
|
<p>I am making a Pacman game with Python. Currently, the player moves continuously only while I hold a key down; for example, holding the left key moves the player continuously to the left.
But in the original Pacman game, you press a key once and the character keeps moving in that direction instead of requiring the key to be held.
So I want the player to keep moving after a single key press, rather than only while the key is held.</p>
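A common fix, sketched without pygame below: on KEYDOWN set an absolute direction and ignore KEYUP entirely, so one press keeps the player moving until another key arrives. In the real event loop the string keys would be <code>pygame.K_LEFT</code> etc., and a <code>setspeed</code> like this would replace the paired <code>changespeed</code> calls:

```python
# Maps a pressed key to an absolute (change_x, change_y); the speeds mirror the
# +-30 steps used in the question's event loop.
DIRECTIONS = {"left": (-30, 0), "right": (30, 0), "up": (0, -30), "down": (0, 30)}

class PlayerSpeed:
    def __init__(self):
        self.change_x = 0
        self.change_y = 0

    def setspeed(self, key):
        # overwrite rather than accumulate: the previous direction is dropped,
        # so no KEYUP bookkeeping is needed
        self.change_x, self.change_y = DIRECTIONS[key]

p = PlayerSpeed()
p.setspeed("left")
p.setspeed("up")               # a second press replaces the first direction
print(p.change_x, p.change_y)  # 0 -30
```

Because the speed is set absolutely instead of incremented, the symmetric KEYUP handlers (which currently cancel the motion) can simply be removed.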
<p>This part is responsible for the player:</p>
<pre><code>class Player(pygame.sprite.Sprite):
# Set speed vector
change_x=0
change_y=0
# Constructor function
def __init__(self,x,y, filename):
# Call the parent's constructor
pygame.sprite.Sprite.__init__(self)
# Set height, width
self.image = pygame.image.load(filename).convert()
# Make our top-left corner the passed-in location.
self.rect = self.image.get_rect()
self.rect.top = y
self.rect.left = x
self.prev_x = x
self.prev_y = y
# Clear the speed of the player
def prevdirection(self):
self.prev_x = self.change_x
self.prev_y = self.change_y
# Change the speed of the player
def changespeed(self,x,y):
self.change_x+=x
self.change_y+=y
# Find a new position for the player
def update(self,walls,gate):
# Get the old position, in case we need to go back to it
old_x=self.rect.left
new_x=old_x+self.change_x
prev_x=old_x+self.prev_x
self.rect.left = new_x
old_y=self.rect.top
new_y=old_y+self.change_y
prev_y=old_y+self.prev_y
# Did this update cause us to hit a wall?
x_collide = pygame.sprite.spritecollide(self, walls, False)
if x_collide:
# Whoops, hit a wall. Go back to the old position
self.rect.left=old_x
# self.rect.top=prev_y
# y_collide = pygame.sprite.spritecollide(self, walls, False)
# if y_collide:
# # Whoops, hit a wall. Go back to the old position
# self.rect.top=old_y
# print('a')
else:
self.rect.top = new_y
# Did this update cause us to hit a wall?
y_collide = pygame.sprite.spritecollide(self, walls, False)
if y_collide:
# Whoops, hit a wall. Go back to the old position
self.rect.top=old_y
# self.rect.left=prev_x
# x_collide = pygame.sprite.spritecollide(self, walls, False)
# if x_collide:
# # Whoops, hit a wall. Go back to the old position
# self.rect.left=old_x
# print('b')
if gate != False:
gate_hit = pygame.sprite.spritecollide(self, gate, False)
if gate_hit:
self.rect.left=old_x
self.rect.top=old_y
</code></pre>
<p>This is event proccessing:</p>
<pre><code> bll = len(block_list)
score = 0
done = False
i = 0
while done == False:
# ALL EVENT PROCESSING SHOULD GO BELOW THIS COMMENT
for event in pygame.event.get():
if event.type == pygame.QUIT:
done=True
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_LEFT:
Pacman.changespeed(-30,0)
if event.key == pygame.K_RIGHT:
Pacman.changespeed(30,0)
if event.key == pygame.K_UP:
Pacman.changespeed(0,-30)
if event.key == pygame.K_DOWN:
Pacman.changespeed(0,30)
if event.type == pygame.KEYUP:
if event.key == pygame.K_LEFT:
Pacman.changespeed(30,0)
if event.key == pygame.K_RIGHT:
Pacman.changespeed(-30,0)
if event.key == pygame.K_UP:
Pacman.changespeed(0,30)
if event.key == pygame.K_DOWN:
Pacman.changespeed(0,-30)
# ALL EVENT PROCESSING SHOULD GO ABOVE THIS COMMENT
# ALL GAME LOGIC SHOULD GO BELOW THIS COMMENT
Pacman.update(wall_list,gate)
returned = Pinky.changespeed(Pinky_directions,False,p_turn,p_steps,pl)
p_turn = returned[0]
p_steps = returned[1]
Pinky.changespeed(Pinky_directions,False,p_turn,p_steps,pl)
Pinky.update(wall_list,False)
returned = Blinky.changespeed(Blinky_directions,False,b_turn,b_steps,bl)
b_turn = returned[0]
b_steps = returned[1]
Blinky.changespeed(Blinky_directions,False,b_turn,b_steps,bl)
Blinky.update(wall_list,False)
returned = Inky.changespeed(Inky_directions,False,i_turn,i_steps,il)
i_turn = returned[0]
i_steps = returned[1]
Inky.changespeed(Inky_directions,False,i_turn,i_steps,il)
Inky.update(wall_list,False)
returned = Clyde.changespeed(Clyde_directions,"clyde",c_turn,c_steps,cl)
c_turn = returned[0]
c_steps = returned[1]
Clyde.changespeed(Clyde_directions,"clyde",c_turn,c_steps,cl)
Clyde.update(wall_list,False)
# See if the Pacman block has collided with anything.
blocks_hit_list = pygame.sprite.spritecollide(Pacman, block_list, True)
# Check the list of collisions.
if len(blocks_hit_list) > 0:
score +=len(blocks_hit_list)
# ALL GAME LOGIC SHOULD GO ABOVE THIS COMMENT
# ALL CODE TO DRAW SHOULD GO BELOW THIS COMMENT
screen.fill(black)
wall_list.draw(screen)
gate.draw(screen)
all_sprites_list.draw(screen)
monsta_list.draw(screen)
text=font.render("Score: "+str(score)+"/"+str(bll), True, red)
screen.blit(text, [10, 10])
if score == bll:
doNext("Congratulations, you won!",145,all_sprites_list,block_list,monsta_list,pacman_collide,wall_list,gate)
monsta_hit_list = pygame.sprite.spritecollide(Pacman, monsta_list, False)
if monsta_hit_list:
doNext("Game Over",235,all_sprites_list,block_list,monsta_list,pacman_collide,wall_list,gate)
# ALL CODE TO DRAW SHOULD GO ABOVE THIS COMMENT
pygame.display.flip()
clock.tick(10)
</code></pre>
<p>This is the full code:</p>
<pre><code>#Pacman in Python with PyGame
#https://github.com/hbokmann/Pacman
import pygame
black = (0,0,0)
white = (255,255,255)
blue = (0,0,255)
green = (0,255,0)
red = (255,0,0)
purple = (255,0,255)
yellow = ( 255, 255, 0)
Trollicon=pygame.image.load('images/Trollman.png')
pygame.display.set_icon(Trollicon)
#Add music
pygame.mixer.init()
pygame.mixer.music.load('pacman.mp3')
pygame.mixer.music.play(-1, 0.0)
# This class represents the bar at the bottom that the player controls
class Wall(pygame.sprite.Sprite):
# Constructor function
def __init__(self,x,y,width,height, color):
# Call the parent's constructor
pygame.sprite.Sprite.__init__(self)
# Make a blue wall, of the size specified in the parameters
self.image = pygame.Surface([width, height])
self.image.fill(color)
# Make our top-left corner the passed-in location.
self.rect = self.image.get_rect()
self.rect.top = y
self.rect.left = x
# This creates all the walls in room 1
def setupRoomOne(all_sprites_list):
# Make the walls. (x_pos, y_pos, width, height)
wall_list=pygame.sprite.RenderPlain()
# This is a list of walls. Each is in the form [x, y, width, height]
walls = [ [0,0,6,600],
[0,0,600,6],
[0,600,606,6],
[600,0,6,606],
[300,0,6,66],
[60,60,186,6],
[360,60,186,6],
[60,120,66,6],
[60,120,6,126],
[180,120,246,6],
[300,120,6,66],
[480,120,66,6],
[540,120,6,126],
[120,180,126,6],
[120,180,6,126],
[360,180,126,6],
[480,180,6,126],
[180,240,6,126],
[180,360,246,6],
[420,240,6,126],
[240,240,42,6],
[324,240,42,6],
[240,240,6,66],
[240,300,126,6],
[360,240,6,66],
[0,300,66,6],
[540,300,66,6],
[60,360,66,6],
[60,360,6,186],
[480,360,66,6],
[540,360,6,186],
[120,420,366,6],
[120,420,6,66],
[480,420,6,66],
[180,480,246,6],
[300,480,6,66],
[120,540,126,6],
[360,540,126,6]
]
# Loop through the list. Create the wall, add it to the list
for item in walls:
wall=Wall(item[0],item[1],item[2],item[3],blue)
wall_list.add(wall)
all_sprites_list.add(wall)
# return our new list
return wall_list
def setupGate(all_sprites_list):
gate = pygame.sprite.RenderPlain()
gate.add(Wall(282,242,42,2,white))
all_sprites_list.add(gate)
return gate
# This class represents the ball
# It derives from the "Sprite" class in Pygame
class Block(pygame.sprite.Sprite):
# Constructor. Pass in the color of the block,
# and its x and y position
def __init__(self, color, width, height):
# Call the parent class (Sprite) constructor
pygame.sprite.Sprite.__init__(self)
# Create an image of the block, and fill it with a color.
# This could also be an image loaded from the disk.
self.image = pygame.Surface([width, height])
self.image.fill(white)
self.image.set_colorkey(white)
pygame.draw.ellipse(self.image,color,[0,0,width,height])
# Fetch the rectangle object that has the dimensions of the image
# image.
# Update the position of this object by setting the values
# of rect.x and rect.y
self.rect = self.image.get_rect()
# This class represents the bar at the bottom that the player controls
class Player(pygame.sprite.Sprite):
# Set speed vector
change_x=0
change_y=0
# Constructor function
def __init__(self,x,y, filename):
# Call the parent's constructor
pygame.sprite.Sprite.__init__(self)
# Set height, width
self.image = pygame.image.load(filename).convert()
# Make our top-left corner the passed-in location.
self.rect = self.image.get_rect()
self.rect.top = y
self.rect.left = x
self.prev_x = x
self.prev_y = y
# Clear the speed of the player
def prevdirection(self):
self.prev_x = self.change_x
self.prev_y = self.change_y
# Change the speed of the player
def changespeed(self,x,y):
self.change_x+=x
self.change_y+=y
# Find a new position for the player
def update(self,walls,gate):
# Get the old position, in case we need to go back to it
old_x=self.rect.left
new_x=old_x+self.change_x
prev_x=old_x+self.prev_x
self.rect.left = new_x
old_y=self.rect.top
new_y=old_y+self.change_y
prev_y=old_y+self.prev_y
# Did this update cause us to hit a wall?
x_collide = pygame.sprite.spritecollide(self, walls, False)
if x_collide:
# Whoops, hit a wall. Go back to the old position
self.rect.left=old_x
# self.rect.top=prev_y
# y_collide = pygame.sprite.spritecollide(self, walls, False)
# if y_collide:
# # Whoops, hit a wall. Go back to the old position
# self.rect.top=old_y
# print('a')
else:
self.rect.top = new_y
# Did this update cause us to hit a wall?
y_collide = pygame.sprite.spritecollide(self, walls, False)
if y_collide:
# Whoops, hit a wall. Go back to the old position
self.rect.top=old_y
# self.rect.left=prev_x
# x_collide = pygame.sprite.spritecollide(self, walls, False)
# if x_collide:
# # Whoops, hit a wall. Go back to the old position
# self.rect.left=old_x
# print('b')
if gate != False:
gate_hit = pygame.sprite.spritecollide(self, gate, False)
if gate_hit:
self.rect.left=old_x
self.rect.top=old_y
#Inheritime Player klassist
class Ghost(Player):
# Change the speed of the ghost
def changespeed(self,list,ghost,turn,steps,l):
try:
z=list[turn][2]
if steps < z:
self.change_x=list[turn][0]
self.change_y=list[turn][1]
steps+=1
else:
if turn < l:
turn+=1
elif ghost == "clyde":
turn = 2
else:
turn = 0
self.change_x=list[turn][0]
self.change_y=list[turn][1]
steps = 0
return [turn,steps]
except IndexError:
return [0,0]
Pinky_directions = [
[0,-30,4],
[15,0,9],
[0,15,11],
[-15,0,23],
[0,15,7],
[15,0,3],
[0,-15,3],
[15,0,19],
[0,15,3],
[15,0,3],
[0,15,3],
[15,0,3],
[0,-15,15],
[-15,0,7],
[0,15,3],
[-15,0,19],
[0,-15,11],
[15,0,9]
]
Blinky_directions = [
[0,-15,4],
[15,0,9],
[0,15,11],
[15,0,3],
[0,15,7],
[-15,0,11],
[0,15,3],
[15,0,15],
[0,-15,15],
[15,0,3],
[0,-15,11],
[-15,0,3],
[0,-15,11],
[-15,0,3],
[0,-15,3],
[-15,0,7],
[0,-15,3],
[15,0,15],
[0,15,15],
[-15,0,3],
[0,15,3],
[-15,0,3],
[0,-15,7],
[-15,0,3],
[0,15,7],
[-15,0,11],
[0,-15,7],
[15,0,5]
]
Inky_directions = [
[30,0,2],
[0,-15,4],
[15,0,10],
[0,15,7],
[15,0,3],
[0,-15,3],
[15,0,3],
[0,-15,15],
[-15,0,15],
[0,15,3],
[15,0,15],
[0,15,11],
[-15,0,3],
[0,-15,7],
[-15,0,11],
[0,15,3],
[-15,0,11],
[0,15,7],
[-15,0,3],
[0,-15,3],
[-15,0,3],
[0,-15,15],
[15,0,15],
[0,15,3],
[-15,0,15],
[0,15,11],
[15,0,3],
[0,-15,11],
[15,0,11],
[0,15,3],
[15,0,1],
]
Clyde_directions = [
[-30,0,2],
[0,-15,4],
[15,0,5],
[0,15,7],
[-15,0,11],
[0,-15,7],
[-15,0,3],
[0,15,7],
[-15,0,7],
[0,15,15],
[15,0,15],
[0,-15,3],
[-15,0,11],
[0,-15,7],
[15,0,3],
[0,-15,11],
[15,0,9],
]
pl = len(Pinky_directions)-1
bl = len(Blinky_directions)-1
il = len(Inky_directions)-1
cl = len(Clyde_directions)-1
# Call this function so the Pygame library can initialize itself
pygame.init()
# Create an 606x606 sized screen
screen = pygame.display.set_mode([606, 606])
# This is a list of 'sprites.' Each block in the program is
# added to this list. The list is managed by a class called 'RenderPlain.'
# Set the title of the window
pygame.display.set_caption('Pacman')
# Create a surface we can draw on
background = pygame.Surface(screen.get_size())
# Used for converting color maps and such
background = background.convert()
# Fill the screen with a black background
background.fill(black)
clock = pygame.time.Clock()
pygame.font.init()
font = pygame.font.Font("freesansbold.ttf", 24)
#default locations for Pacman and monstas
w = 303-16 #Width
p_h = (7*60)+19 #Pacman height
m_h = (4*60)+19 #Monster height
b_h = (3*60)+19 #Binky height
i_w = 303-16-32 #Inky width
c_w = 303+(32-16) #Clyde width
def startGame():
all_sprites_list = pygame.sprite.RenderPlain()
block_list = pygame.sprite.RenderPlain()
monsta_list = pygame.sprite.RenderPlain()
pacman_collide = pygame.sprite.RenderPlain()
wall_list = setupRoomOne(all_sprites_list)
gate = setupGate(all_sprites_list)
p_turn = 0
p_steps = 0
b_turn = 0
b_steps = 0
i_turn = 0
i_steps = 0
c_turn = 0
c_steps = 0
# Create the player paddle object
Pacman = Player( w, p_h, "images/Trollman.png" )
all_sprites_list.add(Pacman)
pacman_collide.add(Pacman)
Blinky=Ghost( w, b_h, "images/Blinky.png" )
monsta_list.add(Blinky)
all_sprites_list.add(Blinky)
Pinky=Ghost( w, m_h, "images/Pinky.png" )
monsta_list.add(Pinky)
all_sprites_list.add(Pinky)
Inky=Ghost( i_w, m_h, "images/Inky.png" )
monsta_list.add(Inky)
all_sprites_list.add(Inky)
Clyde=Ghost( c_w, m_h, "images/Clyde.png" )
monsta_list.add(Clyde)
all_sprites_list.add(Clyde)
# Draw the grid
for row in range(19):
for column in range(19):
if (row == 7 or row == 8) and (column == 8 or column == 9 or column == 10):
continue
else:
block = Block(yellow, 4, 4)
# Set a random location for the block
block.rect.x = (30*column+6)+26
block.rect.y = (30*row+6)+26
b_collide = pygame.sprite.spritecollide(block, wall_list, False)
p_collide = pygame.sprite.spritecollide(block, pacman_collide, False)
if b_collide:
continue
elif p_collide:
continue
else:
# Add the block to the list of objects
block_list.add(block)
all_sprites_list.add(block)
bll = len(block_list)
score = 0
done = False
i = 0
while done == False:
# ALL EVENT PROCESSING SHOULD GO BELOW THIS COMMENT
for event in pygame.event.get():
if event.type == pygame.QUIT:
done=True
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_LEFT:
Pacman.changespeed(-30,0)
if event.key == pygame.K_RIGHT:
Pacman.changespeed(30,0)
if event.key == pygame.K_UP:
Pacman.changespeed(0,-30)
if event.key == pygame.K_DOWN:
Pacman.changespeed(0,30)
if event.type == pygame.KEYUP:
if event.key == pygame.K_LEFT:
Pacman.changespeed(30,0)
if event.key == pygame.K_RIGHT:
Pacman.changespeed(-30,0)
if event.key == pygame.K_UP:
Pacman.changespeed(0,30)
if event.key == pygame.K_DOWN:
Pacman.changespeed(0,-30)
# ALL EVENT PROCESSING SHOULD GO ABOVE THIS COMMENT
# ALL GAME LOGIC SHOULD GO BELOW THIS COMMENT
Pacman.update(wall_list,gate)
returned = Pinky.changespeed(Pinky_directions,False,p_turn,p_steps,pl)
p_turn = returned[0]
p_steps = returned[1]
Pinky.changespeed(Pinky_directions,False,p_turn,p_steps,pl)
Pinky.update(wall_list,False)
returned = Blinky.changespeed(Blinky_directions,False,b_turn,b_steps,bl)
b_turn = returned[0]
b_steps = returned[1]
Blinky.changespeed(Blinky_directions,False,b_turn,b_steps,bl)
Blinky.update(wall_list,False)
returned = Inky.changespeed(Inky_directions,False,i_turn,i_steps,il)
i_turn = returned[0]
i_steps = returned[1]
Inky.changespeed(Inky_directions,False,i_turn,i_steps,il)
Inky.update(wall_list,False)
returned = Clyde.changespeed(Clyde_directions,"clyde",c_turn,c_steps,cl)
c_turn = returned[0]
c_steps = returned[1]
Clyde.changespeed(Clyde_directions,"clyde",c_turn,c_steps,cl)
Clyde.update(wall_list,False)
# See if the Pacman block has collided with anything.
blocks_hit_list = pygame.sprite.spritecollide(Pacman, block_list, True)
# Check the list of collisions.
if len(blocks_hit_list) > 0:
score +=len(blocks_hit_list)
# ALL GAME LOGIC SHOULD GO ABOVE THIS COMMENT
# ALL CODE TO DRAW SHOULD GO BELOW THIS COMMENT
screen.fill(black)
wall_list.draw(screen)
gate.draw(screen)
all_sprites_list.draw(screen)
monsta_list.draw(screen)
text=font.render("Score: "+str(score)+"/"+str(bll), True, red)
screen.blit(text, [10, 10])
if score == bll:
doNext("Congratulations, you won!",145,all_sprites_list,block_list,monsta_list,pacman_collide,wall_list,gate)
monsta_hit_list = pygame.sprite.spritecollide(Pacman, monsta_list, False)
if monsta_hit_list:
doNext("Game Over",235,all_sprites_list,block_list,monsta_list,pacman_collide,wall_list,gate)
# ALL CODE TO DRAW SHOULD GO ABOVE THIS COMMENT
pygame.display.flip()
clock.tick(10)
def doNext(message,left,all_sprites_list,block_list,monsta_list,pacman_collide,wall_list,gate):
while True:
# ALL EVENT PROCESSING SHOULD GO BELOW THIS COMMENT
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_ESCAPE:
pygame.quit()
if event.key == pygame.K_RETURN:
del all_sprites_list
del block_list
del monsta_list
del pacman_collide
del wall_list
del gate
startGame()
#Grey background
w = pygame.Surface((400,200)) # the size of your rect
w.set_alpha(10) # alpha level
w.fill((128,128,128)) # this fills the entire surface
screen.blit(w, (100,200)) # (0,0) are the top-left coordinates
#Won or lost
text1=font.render(message, True, white)
screen.blit(text1, [left, 233])
text2=font.render("To play again, press ENTER.", True, white)
screen.blit(text2, [135, 303])
text3=font.render("To quit, press ESCAPE.", True, white)
screen.blit(text3, [165, 333])
pygame.display.flip()
clock.tick(10)
startGame()
pygame.quit()
</code></pre>
|
<python><pygame>
|
2023-03-04 12:16:33
| 0
| 1,958
|
Alon Alush
|
75,635,590
| 2,613,271
|
Subplots with common x and y labels and a common legend under the x-axis
|
<p>I am trying to plot a subplot with a common legend displayed at the bottom of the figure below a common x axis label, and with a common y-axis label. I have two ways of almost getting it working, except the first has the common y-axis label overlapping the axis tick labels, while with the second I can't figure out how to get the legend to show on the plot (it hangs off the page).</p>
<p>Option 2, using the newer supx/ylabel, puts too much space between the subplots and the labels as well - but I think that is fixable (quite a few questions on that one).</p>
<p>These are just example plots; the actual plots use more decimal places in the labels, so the overlap is considerable. I will likely also be setting the figure sizes to print (and save) the plots as well.</p>
<p><a href="https://i.sstatic.net/VefDB.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VefDB.jpg" alt="Plots showing two options given by code below" /></a></p>
<p>MWE</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# Some points to plot
x = np.linspace(0, 2 * np.pi, 400)
y = np.sin(x ** 2)
z = np.sin((1.03* x) ** 2)
#option 1 - problem is with my real data the common y label is over the labels of the left hand plot
fig, axs = plt.subplots(2, 2)
axs[0, 0].plot(x, y)
axs[0, 0].plot(x, z, '--')
axs[0, 1].plot(x, y)
axs[0, 1].plot(x, z, '--')
axs[1, 0].plot(x, -y)
axs[1, 0].plot(x, -z, '--')
axs[1, 1].plot(x, -y)
axs[1, 1].plot(x, -z, '--')
fig.add_subplot(111, frameon=False)
plt.tick_params(labelcolor='none', which='both', top=False, bottom=False, left=False, right=False)
plt.xlabel("The X label")
plt.ylabel("The Y label")
fig.subplots_adjust(bottom=0.2)
labels = ["A","B"]
fig.legend(labels,loc='lower center', ncol=len(labels), bbox_to_anchor=(0.55, 0))
fig.tight_layout()
# Option 2 - problem is I can't get the legend to show (it is off the page)
fig, axs = plt.subplots(2, 2)
axs[0, 0].plot(x, y)
axs[0, 0].plot(x, z, '--')
axs[0, 1].plot(x, y)
axs[0, 1].plot(x, z, '--')
axs[1, 0].plot(x, -y)
axs[1, 0].plot(x, -z, '--')
axs[1, 1].plot(x, -y)
axs[1, 1].plot(x, -z, '--')
fig.supxlabel("The X label")
fig.supylabel("The Y label")
fig.subplots_adjust(bottom=0.2)
labels = ["A","B"]
fig.legend(labels,loc='lower center', ncol=len(labels), bbox_to_anchor=(0.55, 0))
fig.tight_layout()
</code></pre>
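<p>With matplotlib 3.7+ the two goals can be combined: <code>layout='constrained'</code> keeps <code>supxlabel</code>/<code>supylabel</code> clear of the tick labels, and <code>loc='outside lower center'</code> reserves room for a figure legend below the axes. A sketch (this assumes matplotlib ≥ 3.7; on older versions the <code>outside</code> location is not available):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 400)
y = np.sin(x ** 2)
z = np.sin((1.03 * x) ** 2)

# constrained layout makes room for sup-labels and outside legends
fig, axs = plt.subplots(2, 2, layout="constrained")
for ax, sign in zip(axs.flat, (1, 1, -1, -1)):
    ax.plot(x, sign * y)
    ax.plot(x, sign * z, "--")

fig.supxlabel("The X label")
fig.supylabel("The Y label")
# 'outside lower center' places the legend below the axes without overlap
leg = fig.legend(["A", "B"], loc="outside lower center", ncol=2)
```

<p>Because the layout engine accounts for the legend, neither <code>subplots_adjust</code> nor <code>tight_layout</code> is needed.</p>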
|
<python><matplotlib>
|
2023-03-04 12:13:05
| 1
| 1,530
|
Esme_
|
75,635,528
| 12,439,683
|
Optuna sample fixed parameter depending on another parameter
|
<p>In my setting I have an abstract situation like the following, which shall only function as an example case:</p>
<pre class="lang-py prettyprint-override"><code>base = trial.suggest_int("base", 1, 3)
power = trial.suggest_int("power", 1, 10)
# value = base ** power
</code></pre>
<p>When <code>base == 1</code>, the power parameter becomes irrelevant, so I would like to fix it to 1.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>base = trial.suggest_int("base", 1, 3)
if base == 1:
# Different distribution! But still inside the other.
power = trial.suggest_int("power", 1, 1)
else:
power = trial.suggest_int("power", 1, 10)
</code></pre>
<p>While this works it later causes problems in the form of <code>ValueError</code>s because the underlying distributions are not the same.</p>
<hr />
<p>How can I suggest a fixed value <strong>with the same parameter name</strong> that depends on another value that is sampled within the trial?</p>
|
<python><machine-learning><sampling><hyperparameters><optuna>
|
2023-03-04 12:01:19
| 1
| 5,101
|
Daraan
|
75,635,119
| 363,028
|
Set anchortype="paragraph" for image using odf.text
|
<p>I want to change <a href="https://github.com/turulomio/pdf2odt" rel="nofollow noreferrer">https://github.com/turulomio/pdf2odt</a> so that images are 21cm wide (that was easy) and so that images are anchored to paragraph instead of as-char, so that the image goes all the way to the edge of the page.</p>
<p>The ODT file contains:</p>
<pre><code> <draw:frame draw:style-name="fr1" draw:name="Frame.3" text:anchor-type="as-char" svg:width="21.001cm" svg:height="29.713cm" draw:z-index="2">
</code></pre>
<p>and I want it to contain:</p>
<pre><code> <draw:frame draw:style-name="fr1" draw:name="Frame.3" text:anchor-type="paragraph" svg:width="21.001cm" svg:height="29.713cm" draw:z-index="2">
</code></pre>
<p>I have the feeling that I have to add <code>anchortype="paragraph"</code> or possibly <code>SetAttribute('text:anchor-type', 'paragraph')</code> somewhere in this section:</p>
<pre><code> for filename in sorted(glob("pdfpage*.png")):
img = Image.open(filename)
x,y=img.size
cmx=21
cmy=y*cmx/x
img.close()
doc.addImage(filename, filename)
p = P(stylename="Illustration")
p.addElement(doc.image(filename, cmx,cmy))
doc.insertInCursor(p, after=True)
if args.tesseract==True:
for line in open(filename[:-4] +".txt", "r", encoding='UTF-8').readlines():
p=P(stylename="Standard")
p.addText(line)
doc.insertInCursor(p, after=True)
doc.save()
</code></pre>
<p>But I get errors like these:</p>
<pre><code>TypeError: ODT.image() got an unexpected keyword argument 'anchortype'
AttributeError: Attribute text:anchor-type is not allowed in <draw:frame>
</code></pre>
<p>How do I find out how and where to add this?</p>
|
<python><odf>
|
2023-03-04 10:42:33
| 1
| 34,146
|
Ole Tange
|
75,634,747
| 1,815,739
|
I want to get keys as integers using json serialization and deserialization and python
|
<p>I have this dict:</p>
<pre><code>d={0:[1,2,3,None,1],1:[1,2,4,0,3],2:[4,6,2,3,4],3:[4,2,6,1,2],4:[2,2,6,2,None]}
</code></pre>
<p>I save it:</p>
<pre><code>fo=open("test.json","w")
json.dump(d,fo, sort_keys=True, indent=2, separators=(',', ': '))
fo.close()
</code></pre>
<p>I restore it:</p>
<pre><code>fi.open("test.json","r")
g=json.load(fi)
</code></pre>
<p>And g becomes:</p>
<pre><code>{'0': [1, 2, 3, None, 1], '1': [1, 2, 5, 0, 3], '2': [4, 6, 2, 3, 4],...
</code></pre>
<p>The keys of the dict have been converted to strings! I need them to be integers. How can I do that easily in Python?</p>
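<p>JSON object keys are always strings, so the conversion has to happen on load. A minimal sketch using <code>object_hook</code> (it uses <code>dumps</code>/<code>loads</code> instead of files for brevity, and assumes every key in every object is numeric; a non-numeric key would raise <code>ValueError</code>):</p>

```python
import json

d = {0: [1, 2, 3, None, 1], 1: [1, 2, 4, 0, 3], 2: [4, 6, 2, 3, 4]}

s = json.dumps(d)  # keys become "0", "1", "2" in the JSON text

# object_hook runs on every decoded object, converting keys back to int
g = json.loads(s, object_hook=lambda obj: {int(k): v for k, v in obj.items()})

print(g == d)  # True
```

<p>The same <code>object_hook</code> argument works with <code>json.load</code> on a file handle.</p>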
|
<python><json><dictionary><serialization><deserialization>
|
2023-03-04 09:25:57
| 1
| 496
|
The Dare Guy
|
75,634,703
| 2,202,718
|
csrf_exempt for class based views
|
<p>Code:</p>
<pre><code>class ErrorReportView(View):
def get(self, request, *args, **kwargs):
return HttpResponse('Hello, World!')
@method_decorator(csrf_exempt)
def post(self, request, *args, **kwargs):
return HttpResponse('Hello, World!')
</code></pre>
<p><a href="https://i.sstatic.net/Cxmnw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cxmnw.png" alt="enter image description here" /></a></p>
<p>I use Postman, but I get</p>
<pre><code><p>CSRF verification failed. Request aborted.</p>
</code></pre>
<p>Documentation: <a href="https://docs.djangoproject.com/en/4.1/topics/class-based-views/intro/#decorating-the-class" rel="nofollow noreferrer">https://docs.djangoproject.com/en/4.1/topics/class-based-views/intro/#decorating-the-class</a></p>
<p>What can I try to resolve this?</p>
|
<python><django><django-views>
|
2023-03-04 09:17:13
| 1
| 1,069
|
Trts
|
75,634,630
| 866,333
|
pytest command reports "Exception ignored" and "OSError: [WinError 6] The handle is invalid"
|
<p>I found this error running pytest from the command line. All my tests still passed but it bothers me:</p>
<pre><code>Exception ignored in: <function Pool.__del__ at 0x000001F5C70214E0>
Traceback (most recent call last):
File "C:\Program Files\Python311\Lib\multiprocessing\pool.py", line 271, in __del__
self._change_notifier.put(None)
File "C:\Program Files\Python311\Lib\multiprocessing\queues.py", line 374, in put
self._writer.send_bytes(obj)
File "C:\Program Files\Python311\Lib\multiprocessing\connection.py", line 199, in send_bytes
self._send_bytes(m[offset:offset + size])
File "C:\Program Files\Python311\Lib\multiprocessing\connection.py", line 279, in _send_bytes
ov, err = _winapi.WriteFile(self._handle, buf, overlapped=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [WinError 6] The handle is invalid
</code></pre>
<p>Running from PyCharm, the error is completely hidden.</p>
<p>What could be causing it?</p>
<p>I narrowed it down to a recently introduced test, and this fragment in particular:</p>
<pre><code> thread_pool = ThreadPool(pool_size)
results = run_in_pool(function_list, thread_pool)
</code></pre>
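<p>The traceback is the <code>Pool.__del__</code>-at-shutdown symptom: a pool that is never closed gets garbage-collected while the interpreter is tearing down, and on Windows its internal pipe handle is already invalid by then. Closing the pool deterministically avoids it; a sketch (the <code>square</code> worker is a hypothetical stand-in for <code>run_in_pool</code>/<code>function_list</code>):</p>

```python
from multiprocessing.pool import ThreadPool

def square(n):
    return n * n

# The context manager terminates the pool on exit, so __del__
# never fires during interpreter shutdown.
with ThreadPool(4) as pool:
    results = pool.map(square, range(5))

print(results)  # [0, 1, 4, 9, 16]
```

<p>Without a context manager, calling <code>pool.close()</code> followed by <code>pool.join()</code> in the test's teardown has the same effect.</p>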
|
<python><threadpool>
|
2023-03-04 09:02:53
| 1
| 6,796
|
John
|
75,634,466
| 5,962,321
|
Include or exclude (license) files from package data with pyproject.toml and setuptools
|
<h2>TL;DR</h2>
<p>How does one reliably include files from <code>LICENSES/</code> (REUSE-style) in the source archive and wheels for a Python package with a <code>src/</code> layout? How does one exclude specific files?</p>
<h2>Details</h2>
<p>I have a project structure that looks like</p>
<pre><code>.
├── pyproject.toml
├── LICENSES
│ ├── MAIN.txt
│ ├── SECUNDARY.txt
├── MANIFEST.in
├── random_package
│ ├── __init__.py
│ ├── foo1.cpp
│ ├── foo2.cpp
│ ├── submodule1
│ │ ├── __init__.py
│ │ ├── bar1.cpp
│ ├── submodule2
│ │ ├── __init__.py
│ │ ├── bar2.cpp
</code></pre>
<p>The <code>pyproject.toml</code> looks like</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"
[project]
name = "random_package"
version = "0.1.0"
license = {file = "LICENSES/MAIN.txt"}
[metadata] # EDIT: metadata was the issue
license-files = ["LICENSES/*.txt"] # this line should be in [tool.setuptools]
[tool.setuptools]
package-dir = {"" = "."}
include-package-data = true # tried both true and false
[tool.setuptools.packages.find]
where = ["."]
include = ["random_package*"]
</code></pre>
<p>How do I include all cpp files <em>except</em> <code>submodule1/bar1.cpp</code> into the installation?</p>
<p>I have tried the following entries in the toml (one at a time):</p>
<pre class="lang-ini prettyprint-override"><code>[tool.setuptools.exclude-package-data]
"*" = ["bar1.cpp"]
"random_package.submodule1" = ["bar1.cpp"]
</code></pre>
<p>I even set <code>include-package-data</code> to false and entered cpp files manually (except bar1.cpp) and even that did not work for both source and wheels.</p>
<p>Nothing works reliably: for any and all combinations of these options, I always get bar1.cpp in either the zip/tar.gz archive or the wheel when I do <code>python -m build</code>.</p>
<p>As for the license files, I get <code>LICENSE/MAIN.txt</code> in the source build, but not the others and no licenses are present in the wheels.</p>
<h2>Partial solution</h2>
<p>I have something that works for the source dist using a <code>MANIFEST.in</code> with an include for the <code>LICENSES/*.txt</code> files and a manual include for the .cpp files instead of the data options in <code>pyproject.toml</code>, but even this does not work for the wheel: I don't get the licenses in <code>random_package-0.1.0.dist-info</code>.</p>
<p>Am I wrong in expecting the license files in the wheel? With the old <code>setup.py</code> scheme, back when I was using a single <code>License.txt</code> file, I did get the license file in there... And is there no way to do that with the toml alone?</p>
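<p>For the license half, the inline EDIT comments already point at the likely culprit: <code>license-files</code> is read from <code>[tool.setuptools]</code>, while <code>[metadata]</code> is a setup.cfg section name that setuptools ignores inside <code>pyproject.toml</code>. A sketch of the relevant tables (whether the globbed licenses land in the wheel's <code>.dist-info</code> also depends on the setuptools version, and a <code>MANIFEST.in</code> may still be needed for the sdist):</p>

```toml
[tool.setuptools]
license-files = ["LICENSES/*.txt"]
include-package-data = false

[tool.setuptools.package-data]
# "*" applies the pattern to every package, including submodules
"*" = ["*.cpp"]

[tool.setuptools.exclude-package-data]
"random_package.submodule1" = ["bar1.cpp"]
```

<p>The <code>"*"</code> key matters here: a pattern keyed on <code>random_package</code> alone would not reach files inside <code>submodule1</code> or <code>submodule2</code>.</p>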
|
<python><setuptools><python-packaging><python-wheel><pyproject.toml>
|
2023-03-04 08:22:54
| 1
| 2,011
|
Silmathoron
|
75,634,357
| 3,368,201
|
Dynamically modifying a class upon creation
|
<p>I have a <code>classmethod</code> that I want to be called automatically before or when any child of the class it belongs to gets created. How can I achieve that?</p>
<p>Since I understand this can be an XY problem, here is why I (think I) need it, and my basic implementation.</p>
<p>I have a class that gets inherited, and the children of that class can specify a list of parameters that need to be converted into properties. Here is the relevant code:</p>
<pre><code>class BaseData:
@staticmethod
def internalName(name: str) -> str:
return '_' + name
def __init__(self):
for k, v in self._dataProperties.items():
setattr(self, BaseData.internalName(k), v)
self._isModified = False
@classmethod
def initProperties(cls):
for k, v in cls._dataProperties.items():
# Create dynamic getter and setter function
getterFunc = eval(f'lambda self: self.getProperty("{k}")')
setterFunc = eval(f'lambda self, v: self.setProperty("{k}", v)')
# Make them a property
setattr(cls, k, property(fget=getterFunc, fset=setterFunc))
class ChildData(BaseData):
_dataProperties = {
'date': None,
'value': '0',
}
ChildData.initProperties()
</code></pre>
<p>Function <code>initProperties</code> needs to be called once for each child, and to enforce that I have to call it below each class definition. I find it a bit ugly, so... is there any other way to do it?</p>
<p>I already tried to put the code in <code>__init__</code>, but it does not get called when I unpickle the objects.</p>
<p>So, basically:</p>
<ol>
<li>Generic question: is there any way to force a function to be called the first time a child class is used (even when <code>__init__</code> does not get called)?</li>
<li>If not, more specific question: is there a way to automatically call a specific function (it can be <code>__init__</code>, but also another one) when unpickling?</li>
<li>If not, single use-case question: Is there a better way to do what I'm doing here?</li>
</ol>
<p>I already read <a href="https://stackoverflow.com/q/45505605/3368201">Python class constructor (static)</a>, and while it has the same question I did not find a reply that could solve my use case.</p>
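<p>For question 1, <code>__init_subclass__</code> is the usual hook: it runs once when each child class is created, which also covers unpickling, since the class already exists by the time instances are restored. A simplified sketch of the same property machinery, using closures instead of <code>eval</code>:</p>

```python
class BaseData:
    _dataProperties = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.initProperties()  # runs automatically for every subclass

    @classmethod
    def initProperties(cls):
        for k in cls._dataProperties:
            # bind k via a default argument so each property keeps its own name
            def getter(self, _k=k):
                return getattr(self, "_" + _k)

            def setter(self, v, _k=k):
                setattr(self, "_" + _k, v)

            setattr(cls, k, property(getter, setter))

    def __init__(self):
        for k, v in self._dataProperties.items():
            setattr(self, "_" + k, v)


class ChildData(BaseData):
    _dataProperties = {"date": None, "value": "0"}

# no explicit ChildData.initProperties() call needed
c = ChildData()
c.value = "7"
print(c.value)  # 7
```

<p>Because the properties live on the class, not the instance, they survive pickling round-trips even when <code>__init__</code> is bypassed.</p>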
|
<python>
|
2023-03-04 07:52:31
| 3
| 2,880
|
frarugi87
|
75,634,294
| 4,473,615
|
Pandas DataFrame header to html
|
<p>I am unable to convert the pandas header alone to HTML. I have the code below:</p>
<pre><code>df = pd.read_csv("Employees.csv")
df1= df.columns.to_frame()
df1.to_html("upload.html")
</code></pre>
<p>Result of the code is ,</p>
<p><a href="https://i.sstatic.net/jZvwh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jZvwh.png" alt="enter image description here" /></a></p>
<p>Expected result is,</p>
<p><a href="https://i.sstatic.net/DdmCe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdmCe.png" alt="enter image description here" /></a></p>
<p>I am unable to get the header alone from the DataFrame's columns. Any suggestions would be appreciated.</p>
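<p>One way to get a header-only table is to serialize an empty slice of the frame: <code>df.head(0)</code> keeps the columns and drops every row. A sketch with a small inline frame standing in for <code>Employees.csv</code>:</p>

```python
import pandas as pd

# stand-in for pd.read_csv("Employees.csv")
df = pd.DataFrame({"Name": ["Ann"], "Dept": ["IT"], "Salary": [100]})

# head(0) keeps the columns but no rows, so only the header is rendered
html = df.head(0).to_html(index=False)
```

<p>Writing <code>html</code> to <code>upload.html</code> then produces a table with just the column headers.</p>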
|
<python><html><pandas>
|
2023-03-04 07:35:07
| 3
| 5,241
|
Jim Macaulay
|
75,634,191
| 761,069
|
Django unions and temporary fields in serializers
|
<p>I have a DjangoRest view that finds several QuerySets of a Model (Item), calculates a "rank" for each item, and then passes the final list to a serializer.</p>
<p>My "rank" field is a temporary field added by the serializer.</p>
<p>When I try something like the this:</p>
<pre class="lang-py prettyprint-override"><code>q_a = Item.objects.filter(some filter)
q_b = Item.objects.filter(some other filter)
q_c = Item.objects.filter(some other filter)
for item in q_a:
    item.rank = 5  # fixed rank
for item in q_b:
    item.rank = calculateRank(type_b)  # my function
for item in q_c:
    item.rank = calculateRank(type_c)  # my function
final_q = q_a | q_b | q_c
serializer = ItemSerializer(final_q, many=True)
</code></pre>
<p>My rank field is lost by the serializer.</p>
<p>However, if I do this:</p>
<pre class="lang-py prettyprint-override"><code>q_a = Item.objects.filter(some filter)
q_b = Item.objects.filter(some other filter)
q_c = Item.objects.filter(some other filter)
final_q = q_a | q_b | q_c
for item in final_q:
    item.rank = calculateRank()  # with type logic inside now
serializer = ItemSerializer(final_q, many=True)
</code></pre>
<p>It works fine.</p>
<p>The second version is cleaner code and probably superior but I don't really understand what the issue is and would like to know.</p>
|
<python><django><django-rest-framework>
|
2023-03-04 07:07:56
| 1
| 610
|
tanbog
|