QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,639,647 | 324,362 | How can I JSON serialize a Python generic class? | <p>I want to serialize a generic Python class:</p>
<pre><code>from typing import TypeVar, Generic
import json
T = TypeVar('T')
class MyGenericClass(Generic[T]):
def __init__(self, param: T) -> None:
super().__init__()
self.param = param
pass
r = MyGenericClass[str]('test')
print(json.dumps(r, default= lambda o: o.__dict__))
</code></pre>
<p>But I receive the following exception:</p>
<blockquote>
<p>AttributeError: 'mappingproxy' object has no attribute '__dict__'. Did you mean: '__dir__'?</p>
</blockquote>
<p>I searched a lot for why I receive this error, but I didn't find any relevant results.</p>
<p>How can I resolve the issue?</p>
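One possible workaround (a sketch, not necessarily the idiomatic fix): instances of a parameterized generic carry an extra `__orig_class__` entry in their `__dict__`, so `json.dumps` recurses into typing internals until it reaches an object whose `__dict__` is a `mappingproxy`. Filtering out dunder keys in the `default` callable avoids that:

```python
import json
from typing import Generic, TypeVar

T = TypeVar('T')

class MyGenericClass(Generic[T]):
    def __init__(self, param: T) -> None:
        super().__init__()
        self.param = param

def to_serializable(o):
    # Keep only regular attributes; drop typing bookkeeping such as __orig_class__
    return {k: v for k, v in vars(o).items() if not k.startswith('__')}

r = MyGenericClass[str]('test')
print(json.dumps(r, default=to_serializable))  # {"param": "test"}
```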
| <python><json><generics><serialization> | 2023-07-07 19:13:08 | 0 | 2,554 | anonim |
76,639,622 | 3,077,881 | Wikipedia data via Beautiful Soup | <p>I am new to beautiful soup and trying to pull some tables into a Python notebook. There are multiple tables at the site, but if I can get one working I think I can figure out the others. Have followed a few tutorials and think I must be missing something... maybe due to the <code>collapsible</code> class?</p>
<p>Here is my most recent attempt:</p>
<pre><code>from bs4 import BeautifulSoup
import requests as r
import pandas as pd
uswnt_wiki_request = r.get("https://en.wikipedia.org/wiki/United_States_women%27s_national_soccer_team_results")
uswnt_wiki_text = uswnt_wiki_request.text
uswnt_soup = BeautifulSoup(uswnt_wiki_text, 'html.parser')
table = uswnt_soup.find('table', class_="wikitable collapsible collapsed")
df = pd.read_html(str(table))
df = pd.concat(df)
print(df)
</code></pre>
<p>Can somebody help nudge me in the right direction?</p>
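As a sketch of one likely cause (an assumption, not verified against that exact page): passing a multi-word string to `class_` only matches tags whose `class` attribute is exactly that string, in that order; a CSS selector matches each class independently of order and of any extra classes. The HTML below is a made-up miniature of a wikitable:

```python
from bs4 import BeautifulSoup

html = """
<table class="collapsible wikitable collapsed sortable">
  <tr><th>Date</th><th>Result</th></tr>
  <tr><td>1991-11-30</td><td>Win</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

# Exact-string match fails here: the class order and extra classes differ
print(soup.find("table", class_="wikitable collapsible collapsed"))  # None

# A CSS selector matches each class independently
table = soup.select_one("table.wikitable.collapsible.collapsed")
print(table is not None)  # True
```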
| <python><pandas><beautifulsoup> | 2023-07-07 19:08:14 | 2 | 1,754 | pyll |
76,639,594 | 15,461,255 | ImportError: Couldn't import Django having it installed | <p>I'm trying to run a Django app on an Ubuntu VM but when I execute <code>python manage.py runserver</code> I get this error message:</p>
<p><code>ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?</code></p>
<p>The code is the default of a Django project, i.e.,</p>
<pre><code> try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
</code></pre>
<p>I've tried all of the things described in <a href="https://stackoverflow.com/questions/46210934/importerror-couldnt-import-django">this</a> post (i.e., I'm in a recently created venv with Django installed, checked the Django installation, verified that PYTHONPATH is pointing to <code>venv/lib/python3.8/site-packages</code>, etc.) but I still get the error. Any ideas?</p>
| <python><django><python-venv> | 2023-07-07 19:03:56 | 2 | 350 | Palinuro |
76,639,580 | 1,915,846 | How to get exact message sent to LLM using LangChain's LLMChain (python)? | <p>Currently, when using an <code>LLMChain</code> in LangChain, I can get the template prompt used and the response from the model, but is it possible to get the exact text message sent as query to the model, without having to manually do the prompt template filling?</p>
<p>An example:</p>
<pre class="lang-py prettyprint-override"><code>from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
llm = OpenAI(model_name="gpt-3.5-turbo-0613")
prompt = PromptTemplate(input_variables=["a", "b"], template="Hello {a} and {b}")
chain = LLMChain(llm=llm, prompt=prompt)
result = chain.call({"a": "some text", "b": "some other text"})
</code></pre>
<p>I cannot find anything like what I am looking for in the <code>chain</code> or <code>result</code> objects. I tried some options such as <code>return_final_only=True</code> and <code>include_run_info=True</code>, but they don't include what I am looking for.</p>
| <python><langchain><py-langchain> | 2023-07-07 19:01:54 | 5 | 912 | cserpell |
76,639,506 | 8,297,745 | How to maintain a connection on SSH using Paramiko to execute further commands? | <p>I am trying to create a Python application that establishes an SSH connection and sends a few commands.</p>
<p>At the moment, I am at the step where I just want to open an SSH connection, send a <code>pwd</code> command, and disconnect from the session.</p>
<p>This is the code I did so far:</p>
<pre><code>import paramiko
class Host:
"""This class represents the SSH connection to a host.
It has attributes related to the remote host being connected and the state of the SSH connection."""
def __init__(self, remote_host, login, password):
self.remote_host = remote_host
self.login = login
self.password = password
self.ssh_client = None # Fix Variable Before Assignment Warning.
def connect(self):
try:
# Try to connect and handle errors - Instantiating an object of the class, not a local object.
self.ssh_client = paramiko.SSHClient() # Self instantiates the object, not local.
self.ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Try for only 1 second.
self.ssh_client.connect(self.remote_host, username=self.login, password=self.password, timeout=1)
# Multiple Assignment / Data Unpacking
# stdin, stdout, stderr = ssh_client.exec_command('pwd')
# response_stdout = stdout.read().decode().strip()
return "Connected to SSH!"
except paramiko.ssh_exception.SSHException as ssh_exception:
return f"SSHException - Failed to connect to the host: {ssh_exception}"
except TimeoutError as timeout_error:
return f"TimeoutError - Host unavailable: {timeout_error}"
# finally:
# ssh_client.close()
def isconnected(self):
if self.ssh_client:
return self.ssh_client
else:
return None
def disconnect(self):
# Use the object attribute that represents the connection state to disconnect.
if self.ssh_client is not None:
self.ssh_client.close()
return "Disconnected from SSH!"
else:
return "No active SSH connection."
</code></pre>
<p>Now, what I expected was the following flow:</p>
<p>--- Connect to SSH (Retain Connection Open) ----</p>
<p>--- Do something ... Maybe a PWD or something ----</p>
<p>--- Close the SSH Connection ---</p>
<p>But for some reason, when calling:</p>
<pre><code>print("Testing SSH Connection.")
host = Host("192.168.1.10", 'administrator', 'SuperStrongP477')
if host.isconnected():
print("SSH Connected.")
else:
print("SSH Not Connected.")
</code></pre>
<p>I get:</p>
<blockquote>
<p>Testing SSH Connection. SSH Not Connected.</p>
</blockquote>
<p>I already looked into other posts, but they did not help me, since I did not clearly understand how to adapt them to my code. The answers I looked up that did not help were:</p>
<p><a href="https://stackoverflow.com/questions/40230978/execute-git-command-in-a-remote-machine-using-paramiko">execute git command in a remote machine using paramiko</a>
<a href="https://stackoverflow.com/questions/36490989/how-to-keep-ssh-session-not-expired-using-paramiko">How to keep ssh session not expired using paramiko?</a>
<a href="https://stackoverflow.com/questions/49492621/execute-multiple-commands-in-paramiko-so-that-commands-are-affected-by-their-pre">Execute multiple commands in Paramiko so that commands are affected by their predecessors</a></p>
<p>Those posts did clarify some doubts, but I was still not able to adapt their solutions to my own code.</p>
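For what it's worth, a sketch of the likely issue (hedged, since the full calling code isn't shown): in the test snippet, `host.connect()` is never called before `host.isconnected()`, so `self.ssh_client` is still `None`. And even after `connect()`, the mere existence of the client object doesn't prove the session is open; asking the transport is more reliable. The password below is a placeholder:

```python
class Host:
    """Minimal sketch of the connection-state check; paramiko is imported
    lazily so the class can be defined even where paramiko is absent."""

    def __init__(self, remote_host, login, password):
        self.remote_host = remote_host
        self.login = login
        self.password = password
        self.ssh_client = None

    def connect(self):
        import paramiko  # lazy import for this sketch
        self.ssh_client = paramiko.SSHClient()
        self.ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.ssh_client.connect(self.remote_host, username=self.login,
                                password=self.password, timeout=1)

    def isconnected(self):
        # A client object alone is not an open session; check the transport
        if self.ssh_client is None:
            return False
        transport = self.ssh_client.get_transport()
        return transport is not None and transport.is_active()


host = Host("192.168.1.10", "administrator", "hypothetical-password")
print(host.isconnected())  # False until connect() has succeeded
```

The intended flow then becomes `connect()` → `exec_command('pwd')` → `disconnect()`, with `isconnected()` usable at any point in between.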
| <python><ssh><paramiko> | 2023-07-07 18:48:19 | 1 | 849 | Raul Chiarella |
76,639,478 | 857,741 | set() inside list comprehension inefficient? | <p>I'd like to compute the difference between two lists while maintaining ordering ("stable" removal):</p>
<p>Compute a list that has all values contained in list <code>b</code> removed from list <code>a</code>, while maintaining <code>a</code>'s order in the result.</p>
<p>I found comments that this solution is inefficient because of using <code>set</code> in a list comprehension:</p>
<pre><code>def diff1(a, b):
return [x for x in a if x not in set(b)]
</code></pre>
<p>And that this is more efficient:</p>
<pre><code>def diff2(a, b):
bset = set(b)
return [x for x in a if x not in bset]
</code></pre>
<p>I tried to verify it empirically. On the face of it it seems like it is true. I tried to test it using <code>iPython</code>:</p>
<pre><code>mysetup='''
a = [1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19]
b=[3,8,9,10]
'''
s1='def diff1(a, b): return [x for x in a if x not in set(b)]'
s2='''
def diff2(a, b):
bset = set(b)
return [x for x in a if x not in bset]
'''
</code></pre>
<p>Result:</p>
<pre><code>In [80]: timeit.timeit(setup=mysetup, stmt=s1, number=100000000)
Out[80]: 3.826001249952242
In [81]: timeit.timeit(setup=mysetup, stmt=s2, number=100000000)
Out[81]: 3.703278487082571
</code></pre>
<p>I've tested this several times and <code>diff2</code> is consistently faster than <code>diff1</code>.</p>
<p>Why? Shouldn't <code>set(b)</code> inside list comprehension be computed once?</p>
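A small instrumented sketch can confirm the suspicion: the `if` condition of a list comprehension is re-evaluated for every element, so `set(b)` is rebuilt on each iteration rather than computed once. The `tracked_set` helper is a hypothetical stand-in that just counts the calls:

```python
calls = 0

def tracked_set(b):
    # Stand-in for set(b) that counts how often it is evaluated
    global calls
    calls += 1
    return set(b)

a = list(range(1, 20))
b = [3, 8, 9, 10]

result = [x for x in a if x not in tracked_set(b)]
print(calls)  # one call per element of a, not one call total
```

As a side note, the `s1`/`s2` statements passed to `timeit` above only *define* `diff1` and `diff2` and never call them, so those timings compare the cost of executing the `def` statements themselves rather than the cost of the membership tests.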
| <python><list><performance><list-comprehension> | 2023-07-07 18:42:01 | 1 | 6,914 | LetMeSOThat4U |
76,639,423 | 5,684,405 | Installing libmagic with pip fails | <p>In my Jupyter Notebook (running as a JupyterLab container, as the jovan user without root access), with <code>cmake 3.26.4</code> already installed in the conda env, I try to install libmagic with pip:</p>
<pre><code>pip install python-libmagic
</code></pre>
<p>but I keep getting error:</p>
<pre><code>Collecting python-libmagic
Using cached python_libmagic-0.4.0-py3-none-any.whl
Collecting cffi==1.7.0 (from python-libmagic)
Using cached cffi-1.7.0.tar.gz (400 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: pycparser in /opt/conda/envs/cho_env/lib/python3.10/site-packages (from cffi==1.7.0->python-libmagic) (2.21)
Building wheels for collected packages: cffi
Building wheel for cffi (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [254 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-310
creating build/lib.linux-x86_64-cpython-310/cffi
copying cffi/ffiplatform.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/cffi_opcode.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/verifier.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/commontypes.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/vengine_gen.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/setuptools_ext.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/recompiler.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/cparser.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/lock.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/__init__.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/model.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/api.py -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/_cffi_include.h -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/parse_c_type.h -> build/lib.linux-x86_64-cpython-310/cffi
copying cffi/_embedding.h -> build/lib.linux-x86_64-cpython-310/cffi
running build_ext
building '_cffi_backend' extension
creating build/temp.linux-x86_64-cpython-310
creating build/temp.linux-x86_64-cpython-310/c
gcc -pthread -B /opt/conda/envs/cho_env/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/cho_env/include -fPIC -O2 -isystem /opt/conda/envs/cho_env/include -fPIC -DUSE__THREAD -I/usr/include/ffi -I/usr/include/libffi -I/opt/conda/envs/cho_env/include/python3.10 -c c/_cffi_backend.c -o build/temp.linux-x86_64-cpython-310/c/_cffi_backend.o
In file included from c/_cffi_backend.c:274:
c/minibuffer.h: In function ‘mb_ass_slice’:
c/minibuffer.h:66:5: warning: ‘PyObject_AsReadBuffer’ is deprecated [-Wdeprecated-declarations]
66 | if (PyObject_AsReadBuffer(other, &buffer, &buffer_len) < 0)
| ^~
In file included from /opt/conda/envs/cho_env/include/python3.10/genobject.h:12,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:110,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/abstract.h:343:17: note: declared here
343 | PyAPI_FUNC(int) PyObject_AsReadBuffer(PyObject *obj,
| ^~~~~~~~~~~~~~~~~~~~~
In file included from c/_cffi_backend.c:277:
c/file_emulator.h: In function ‘PyFile_AsFile’:
c/file_emulator.h:54:14: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
54 | mode = PyText_AsUTF8(ob_mode);
| ^
In file included from c/_cffi_backend.c:281:
c/wchar_helper.h: In function ‘_my_PyUnicode_AsSingleWideChar’:
c/wchar_helper.h:83:5: warning: ‘PyUnicode_AsUnicode’ is deprecated [-Wdeprecated-declarations]
83 | Py_UNICODE *u = PyUnicode_AS_UNICODE(unicode);
| ^~~~~~~~~~
In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here
580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
| ^~~~~~~~~~~~~~~~~~~
In file included from c/_cffi_backend.c:281:
c/wchar_helper.h:84:5: warning: ‘_PyUnicode_get_wstr_length’ is deprecated [-Wdeprecated-declarations]
84 | if (PyUnicode_GET_SIZE(unicode) == 1) {
| ^~
In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from c/_cffi_backend.c:281:
c/wchar_helper.h:84:5: warning: ‘PyUnicode_AsUnicode’ is deprecated [-Wdeprecated-declarations]
84 | if (PyUnicode_GET_SIZE(unicode) == 1) {
| ^~
In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here
580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
| ^~~~~~~~~~~~~~~~~~~
In file included from c/_cffi_backend.c:281:
c/wchar_helper.h:84:5: warning: ‘_PyUnicode_get_wstr_length’ is deprecated [-Wdeprecated-declarations]
84 | if (PyUnicode_GET_SIZE(unicode) == 1) {
| ^~
In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from c/_cffi_backend.c:281:
c/wchar_helper.h: In function ‘_my_PyUnicode_SizeAsWideChar’:
c/wchar_helper.h:99:5: warning: ‘_PyUnicode_get_wstr_length’ is deprecated [-Wdeprecated-declarations]
99 | Py_ssize_t length = PyUnicode_GET_SIZE(unicode);
| ^~~~~~~~~~
In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from c/_cffi_backend.c:281:
c/wchar_helper.h:99:5: warning: ‘PyUnicode_AsUnicode’ is deprecated [-Wdeprecated-declarations]
99 | Py_ssize_t length = PyUnicode_GET_SIZE(unicode);
| ^~~~~~~~~~
In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here
580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
| ^~~~~~~~~~~~~~~~~~~
In file included from c/_cffi_backend.c:281:
c/wchar_helper.h:99:5: warning: ‘_PyUnicode_get_wstr_length’ is deprecated [-Wdeprecated-declarations]
99 | Py_ssize_t length = PyUnicode_GET_SIZE(unicode);
| ^~~~~~~~~~
In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from c/_cffi_backend.c:281:
c/wchar_helper.h: In function ‘_my_PyUnicode_AsWideChar’:
c/wchar_helper.h:118:5: warning: ‘PyUnicode_AsUnicode’ is deprecated [-Wdeprecated-declarations]
118 | Py_UNICODE *u = PyUnicode_AS_UNICODE(unicode);
| ^~~~~~~~~~
In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here
580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
| ^~~~~~~~~~~~~~~~~~~
c/_cffi_backend.c: In function ‘ctypedescr_dealloc’:
c/_cffi_backend.c:352:23: error: lvalue required as left operand of assignment
352 | Py_REFCNT(ct) = 43;
| ^
c/_cffi_backend.c:355:23: error: lvalue required as left operand of assignment
355 | Py_REFCNT(ct) = 0;
| ^
c/_cffi_backend.c: In function ‘cast_to_integer_or_char’:
c/_cffi_backend.c:3331:26: warning: ‘_PyUnicode_get_wstr_length’ is deprecated [-Wdeprecated-declarations]
3331 | PyUnicode_GET_SIZE(ob), ct->ct_name);
| ^~~~~~~~~~~~~~~~~~
In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
c/_cffi_backend.c:3331:26: warning: ‘PyUnicode_AsUnicode’ is deprecated [-Wdeprecated-declarations]
3331 | PyUnicode_GET_SIZE(ob), ct->ct_name);
| ^~~~~~~~~~~~~~~~~~
In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here
580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
| ^~~~~~~~~~~~~~~~~~~
c/_cffi_backend.c:3331:26: warning: ‘_PyUnicode_get_wstr_length’ is deprecated [-Wdeprecated-declarations]
3331 | PyUnicode_GET_SIZE(ob), ct->ct_name);
| ^~~~~~~~~~~~~~~~~~
In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here
446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
c/_cffi_backend.c: In function ‘b_complete_struct_or_union’:
c/_cffi_backend.c:4251:17: warning: ‘PyUnicode_GetSize’ is deprecated [-Wdeprecated-declarations]
4251 | do_align = PyText_GetSize(fname) > 0;
| ^~~~~~~~
In file included from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:177:43: note: declared here
177 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_ssize_t) PyUnicode_GetSize(
| ^~~~~~~~~~~~~~~~~
c/_cffi_backend.c:4283:13: warning: ‘PyUnicode_GetSize’ is deprecated [-Wdeprecated-declarations]
4283 | if (PyText_GetSize(fname) == 0 &&
| ^~
In file included from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:177:43: note: declared here
177 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_ssize_t) PyUnicode_GetSize(
| ^~~~~~~~~~~~~~~~~
c/_cffi_backend.c:4353:17: warning: ‘PyUnicode_GetSize’ is deprecated [-Wdeprecated-declarations]
4353 | if (PyText_GetSize(fname) > 0) {
| ^~
In file included from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:177:43: note: declared here
177 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_ssize_t) PyUnicode_GetSize(
| ^~~~~~~~~~~~~~~~~
c/_cffi_backend.c: In function ‘prepare_callback_info_tuple’:
c/_cffi_backend.c:5214:5: warning: ‘PyEval_InitThreads’ is deprecated [-Wdeprecated-declarations]
5214 | PyEval_InitThreads();
| ^~~~~~~~~~~~~~~~~~
In file included from /opt/conda/envs/cho_env/include/python3.10/Python.h:130,
from c/_cffi_backend.c:2:
/opt/conda/envs/cho_env/include/python3.10/ceval.h:122:37: note: declared here
122 | Py_DEPRECATED(3.9) PyAPI_FUNC(void) PyEval_InitThreads(void);
| ^~~~~~~~~~~~~~~~~~
c/_cffi_backend.c: In function ‘b_callback’:
c/_cffi_backend.c:5255:5: warning: ‘ffi_prep_closure’ is deprecated: use ffi_prep_closure_loc instead [-Wdeprecated-declarations]
5255 | if (ffi_prep_closure(closure, &cif_descr->cif,
| ^~
In file included from c/_cffi_backend.c:15:
/opt/conda/envs/cho_env/include/ffi.h:347:1: note: declared here
347 | ffi_prep_closure (ffi_closure*,
| ^~~~~~~~~~~~~~~~
In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046,
from /opt/conda/envs/cho_env/include/python3.10/Python.h:83,
from c/_cffi_backend.c:2:
c/ffi_obj.c: In function ‘_ffi_type’:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:744:29: warning: initialization discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
744 | #define _PyUnicode_AsString PyUnicode_AsUTF8
| ^~~~~~~~~~~~~~~~
c/_cffi_backend.c:72:25: note: in expansion of macro ‘_PyUnicode_AsString’
72 | # define PyText_AS_UTF8 _PyUnicode_AsString
| ^~~~~~~~~~~~~~~~~~~
c/ffi_obj.c:191:32: note: in expansion of macro ‘PyText_AS_UTF8’
191 | char *input_text = PyText_AS_UTF8(arg);
| ^~~~~~~~~~~~~~
c/lib_obj.c: In function ‘lib_build_cpython_func’:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:744:29: warning: initialization discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
744 | #define _PyUnicode_AsString PyUnicode_AsUTF8
| ^~~~~~~~~~~~~~~~
c/_cffi_backend.c:72:25: note: in expansion of macro ‘_PyUnicode_AsString’
72 | # define PyText_AS_UTF8 _PyUnicode_AsString
| ^~~~~~~~~~~~~~~~~~~
c/lib_obj.c:129:21: note: in expansion of macro ‘PyText_AS_UTF8’
129 | char *libname = PyText_AS_UTF8(lib->l_libname);
| ^~~~~~~~~~~~~~
c/lib_obj.c: In function ‘lib_build_and_cache_attr’:
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:744:29: warning: initialization discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
744 | #define _PyUnicode_AsString PyUnicode_AsUTF8
| ^~~~~~~~~~~~~~~~
c/_cffi_backend.c:71:24: note: in expansion of macro ‘_PyUnicode_AsString’
71 | # define PyText_AsUTF8 _PyUnicode_AsString /* PyUnicode_AsUTF8 in Py3.3 */
| ^~~~~~~~~~~~~~~~~~~
c/lib_obj.c:208:15: note: in expansion of macro ‘PyText_AsUTF8’
208 | char *s = PyText_AsUTF8(name);
| ^~~~~~~~~~~~~
In file included from c/cffi1_module.c:16,
from c/_cffi_backend.c:6636:
c/lib_obj.c: In function ‘lib_getattr’:
c/lib_obj.c:506:7: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
506 | p = PyText_AsUTF8(name);
| ^
In file included from c/cffi1_module.c:19,
from c/_cffi_backend.c:6636:
c/call_python.c: In function ‘_get_interpstate_dict’:
c/call_python.c:20:30: error: dereferencing pointer to incomplete type ‘PyInterpreterState’ {aka ‘struct _is’}
20 | builtins = tstate->interp->builtins;
| ^~
c/call_python.c: In function ‘_ffi_def_extern_decorator’:
c/call_python.c:73:11: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
73 | s = PyText_AsUTF8(name);
| ^
error: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for cffi
Running setup.py clean for cffi
Failed to build cffi
ERROR: Could not build wheels for cffi, which is required to install pyproject.toml-based projects
</code></pre>
<p>How can I fix this?</p>
| <python><pip><libmagic> | 2023-07-07 18:31:20 | 2 | 2,969 | mCs |
76,639,223 | 13,721,819 | How can I assert in a unit test that a regex pattern occurs multiple times in a string? | <p>I have a unit test that asserts that a certain regex pattern appears in a multiline string:</p>
<pre class="lang-py prettyprint-override"><code>import unittest
class TestStrRegex(unittest.TestCase):
def test_pattern_matches_thrice(self):
string = """\
repeatedval=78
badval=89
# comment
repeatedval=20
repeatedval=8978
"""
self.assertRegex(string, 'repeatedval=\d+')
</code></pre>
<p>I would like to assert that the <code>'repeatedval=\d+'</code> pattern matches exactly three times in the string, something like this:</p>
<pre class="lang-py prettyprint-override"><code>self.assertRegex(string, 'repeatedval=\d+', exact_number_of_matches=3)
</code></pre>
<p>But that doesn't work. Is there another way to make this assertion?</p>
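One way (a sketch; as far as I know `assertRegex` itself has no count parameter): collect the matches with `re.findall` and assert on their number:

```python
import re
import unittest

class TestStrRegex(unittest.TestCase):
    def test_pattern_matches_thrice(self):
        string = """\
repeatedval=78
badval=89
# comment
repeatedval=20
repeatedval=8978
"""
        matches = re.findall(r'repeatedval=\d+', string)
        self.assertEqual(len(matches), 3)

# Standalone check of the same idea
print(len(re.findall(r'repeatedval=\d+', "repeatedval=1\nrepeatedval=2\n")))  # 2
```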
| <python><python-unittest><python-re> | 2023-07-07 17:58:23 | 1 | 612 | Wilson |
76,639,117 | 1,737,830 | How to update a nested object, pushing new dictionary to an array? | <p>There's a document in a MongoDB collection:</p>
<pre><code>{
_id: 'd2f5',
url: 'https://exam.pl/b-4',
creation_time: '2023.07.07 16:28:45'
}
</code></pre>
<p>If a <code>check</code> appears for the first time, I'd like to have it updated like this:</p>
<pre><code>{
_id: 'd2f5',
url: 'https://exam.pl/b-4',
creation_time: '2023.07.07 16:28:45',
checks: {
'10.08.2023-11.08.2023': {
start_date: '10.08.2023',
current_temp_forecast: '29.5',
temp_forecast_history: [
{
'2023.07.07 18:08:21': '29.5'
}
]
}
}
}
</code></pre>
<p>When an existing <code>check</code> gets updated with a new value, I'd like to have the old value stored in <code>temp_forecast_history</code> together with the new value, and have the current value extracted one level up:</p>
<pre><code>{
_id: 'd2f5',
url: 'https://exam.pl/b-4',
creation_time: '2023.07.07 16:28:45',
checks: {
'10.08.2023-11.08.2023': {
start_date: '10.08.2023',
current_temp_forecast: '28.2',
temp_forecast_history: [
{
'2023.07.08 19:39:52': '28.2'
},
{
'2023.07.07 18:08:21': '29.5'
}
]
}
}
}
</code></pre>
<p>How to achieve it, using <code>pymongo</code>?</p>
<p>Already tried doing updates like:</p>
<pre class="lang-py prettyprint-override"><code>update_query = {
"$set": {
f"checks.{check}.start_date": start_date,
f"checks.{check}.current_temp_forecast": current_temp_forecast,
},
"$addToSet": { # used "$push" before, but with error as well
f"checks.{check}.temp_forecast_history": {
"$each": [
{
timestamp: current_temp_forecast
}
],
"$position": 0
}
}
}
collection.update_one({"_id": document_id}, update_query)
</code></pre>
<p>But it failed with error saying:</p>
<pre><code>pymongo.errors.WriteError: The dollar ($) prefixed field '$addToSet' in '$addToSet' is not allowed in the context of an update's replacement document. Consider using an aggregation pipeline with $replaceWith., full error: {'index': 0, 'code': 52, 'errmsg': "The dollar ($) prefixed field '$addToSet' in '$addToSet' is not allowed in the context of an update's replacement document. Consider using an aggregation pipeline with $replaceWith."}
</code></pre>
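A sketch of what may be going on (hedged, since only a fragment of the code is shown): `$position` is documented as a modifier of `$push` only; `$addToSet` accepts `$each` but not `$position`. Also, the quoted error is the one the server raises when the update document ends up being treated as a *replacement* document, which typically means some top-level key was not a `$` operator (e.g. the operators were nested one level too deep), so that is worth checking in the real code. The values below are placeholders taken from the question:

```python
check = "10.08.2023-11.08.2023"
timestamp = "2023.07.08 19:39:52"
current_temp_forecast = "28.2"
start_date = "10.08.2023"

update_query = {
    "$set": {
        f"checks.{check}.start_date": start_date,
        f"checks.{check}.current_temp_forecast": current_temp_forecast,
    },
    # $position (like $sort and $slice) modifies $push, not $addToSet
    "$push": {
        f"checks.{check}.temp_forecast_history": {
            "$each": [{timestamp: current_temp_forecast}],
            "$position": 0,
        }
    },
}

# Every top-level key must be an update operator, otherwise the document is
# treated as a full replacement and $-prefixed fields are rejected
print(all(key.startswith("$") for key in update_query))  # True
```

This dict would then be passed unchanged to `collection.update_one({"_id": document_id}, update_query)`.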
| <python><pymongo> | 2023-07-07 17:39:34 | 0 | 2,368 | AbreQueVoy |
76,638,901 | 12,670,481 | Circular import problems for Python typing | <h2>Introduction</h2>
<p>I am trying to use a one-to-many relationship in Python, but I am having trouble implementing it because of circular dependency problems with Python typing.</p>
<p>I understand the following code is a problem, but I spent hours searching for a workaround and did not find anything.</p>
<h2>Class Diagram</h2>
<p>Let's say I have the following class diagram:
<a href="https://i.sstatic.net/KCdGS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KCdGS.png" alt="enter image description here" /></a></p>
<h2>What I try to achieve</h2>
<p>I am trying to write the following child of <code>Task</code>:</p>
<p><em>modules/specific_task.py</em></p>
<pre><code>from specific_executor import SpecificExecutor
from task import Task
class SpecificTask(Task[SpecificExecutor]):
def run(self):
self.my_executor.function_in_specific_executor()
</code></pre>
<p>Knowing the following <code>SpecificExecutor</code>:</p>
<pre class="lang-py prettyprint-override"><code>from executor import Executor
class SpecificExecutor(Executor):
def function_in_specific_executor(self):
pass
</code></pre>
<p>I need <code>self.my_executor</code> to be set to type <code>SpecificExecutor</code> to make sure that I have access to functions from <code>SpecificExecutor</code>.</p>
<h2>What it would look like without circular problems</h2>
<p>I thus have the following files implementing them:</p>
<p><em>modules/executor.py</em></p>
<pre class="lang-py prettyprint-override"><code>from task import Task
from typing import Generic, TypeVar, List
TaskType = TypeVar("TaskType", bound=Task)
class Executor(Generic[TaskType]):
def __init__(self, tasks: List[TaskType]):
self.my_tasks = tasks
def run_function(self):
pass
</code></pre>
<p><em>modules/task.py</em></p>
<pre class="lang-py prettyprint-override"><code>from executor import Executor
from typing import Generic, TypeVar
ExecutorType = TypeVar("ExecutorType", bound=Executor)
class Task(Generic[ExecutorType]):
def __init__(self, executor: ExecutorType):
self.my_executor = executor
def do_something(self):
self.my_executor.run_function()
</code></pre>
<h2>What I did try</h2>
<ul>
<li>I am aware of the existence of the <code>TYPE_CHECKING</code> variable, but it does not work when using <code>bound</code> in <code>TypeVar</code>, because that is evaluated at runtime.</li>
<li>It would be possible to use <code>cast(SpecificExecutor, self.executor)</code> every time I use this variable but this is very verbose and I would prefer to avoid that, because I know that for <code>SpecificTask</code>, <code>self.executor</code> will always be <code>SpecificExecutor</code>.</li>
<li>I tried to add <code>from __future__ import annotations</code> at the beginning of every file, but it does not fix the circular import issue</li>
</ul>
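Regarding the first bullet, a sketch of a variant that avoids the runtime evaluation (worth double-checking against your type checker): `bound=` can be given as a *string*, which `typing` stores as a forward reference and does not resolve at runtime, so the real import can stay behind `TYPE_CHECKING`. A hypothetical <em>modules/task.py</em> would look like:

```python
from typing import TYPE_CHECKING, Generic, TypeVar

if TYPE_CHECKING:
    # Seen only by type checkers, so no circular import at runtime
    from executor import Executor

# A quoted bound is kept as a forward reference, not evaluated at runtime
ExecutorType = TypeVar("ExecutorType", bound="Executor")

class Task(Generic[ExecutorType]):
    def __init__(self, executor: ExecutorType):
        self.my_executor = executor

    def do_something(self):
        self.my_executor.run_function()
```

`executor.py` can then import `Task` normally, and `SpecificTask(Task[SpecificExecutor])` keeps `self.my_executor` typed as `SpecificExecutor` for the checker.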
| <python><python-typing> | 2023-07-07 17:00:09 | 1 | 593 | HenriChab |
76,638,679 | 1,303,577 | How to use quart-db with hypercorn? | <p>I'm trying to use the quart-db extension to connect to a postgresql DB from a Quart app that is served by hypercorn. But I'm not understanding how to use the db connection for an app that is served by hypercorn where the app initialization and route definitions are spread across multiple files.</p>
<p>My simplified setup is:</p>
<p>App is served by running: <code>hypercorn myapp:app</code></p>
<p><code>myapp.py</code> has</p>
<pre><code>import os
from app import create_app
app = create_app(os.getenv('MYAPP_CONFIG') or 'default')
</code></pre>
<p><code>app/__init__.py</code> has</p>
<pre><code>from quart import Quart
from config import config
def create_app(config_name):
app = Quart(__name__)
app.config.from_object(config[config_name])
config[config_name].init_app(app)
from .main import main as main_blueprint
app.register_blueprint(main_blueprint)
return app
</code></pre>
<p>And <code>main/__init__.py</code> has the following:</p>
<pre><code>from quart import Blueprint
import os
main = Blueprint('main', __name__)
from .views import views
</code></pre>
<p>And <code>main/views/views.py</code> defines the routes where I want to access the PostgreSQL DB connection.</p>
<pre><code>from quart import render_template, session, redirect, url_for, current_app, request, Response, g
from quart_db import QuartDB
from .. import main
# I tried this but it didn't work
#db = QuartDB(current_app, url="postgresql://user:password@postgresfqdn:5432/mydb")
@main.route('/', methods=['GET', 'HEAD'])
async def index():
# Do something cool with the db connection here
return "Myapp is running!\n"
</code></pre>
<p>Should <code>db = QuartDB(...)</code> be defined in <code>main/views/views.py</code>, or should it be defined elsewhere and somehow imported into <code>main/views/views.py</code>?</p>
| <python><quart><hypercorn> | 2023-07-07 16:26:03 | 1 | 1,945 | Rusty Lemur |
76,638,630 | 6,699,447 | How to select last column of polars dataframe | <p>I have the following Polars DataFrame and I want to select the last column dynamically.</p>
<pre class="lang-py prettyprint-override"><code>>>> import polars as pl
>>>
>>> df = pl.DataFrame({
... "col1": [1, 2],
... "col2": ["2", "3"],
... "col3": [3, 4]
... })
>>>
>>> df
shape: (2, 3)
┌──────┬──────┬──────┐
│ col1 ┆ col2 ┆ col3 │
│ --- ┆ --- ┆ --- │
│ i64 ┆ str ┆ i64 │
╞══════╪══════╪══════╡
│ 1 ┆ 2 ┆ 3 │
│ 2 ┆ 3 ┆ 4 │
└──────┴──────┴──────┘
>>> # How to select col3? which is the last column in df
</code></pre>
<p>How can I do this in Polars? I can do <a href="https://stackoverflow.com/a/40145561/6699447"><code>df.iloc[:,-1:]</code> to select the last column if it's a pandas DataFrame</a>.</p>
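<p>For reference, the pandas behaviour I am trying to reproduce (self-contained, using the <code>iloc</code> call from the linked answer):</p>

```python
import pandas as pd

df = pd.DataFrame({"col1": [1, 2], "col2": ["2", "3"], "col3": [3, 4]})

last_col = df.iloc[:, -1:]  # last column, kept as a one-column DataFrame
print(list(last_col.columns))  # prints: ['col3']
```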
<p>Additional info:</p>
<pre class="lang-py prettyprint-override"><code>>>> import sys
>>> sys.version_info
sys.version_info(major=3, minor=11, micro=0, releaselevel='final', serial=0)
>>> import polars
>>> polars.__version__
'0.18.3'
</code></pre>
| <python><python-3.x><python-polars> | 2023-07-07 16:19:55 | 5 | 25,841 | user459872 |
76,638,502 | 14,637,258 | FastAPI authorization via decorator | <p>The following FastAPI app can be executed and returns a valid JWT for the POST request to the endpoint <code>http://localhost:8000/login</code> with <code>{"username": "admin", "password": "password"}</code> as the body. Then this JWT can be sent in the header of a GET request to the endpoint <code>http://localhost:8000/items</code> and decoded again. The GET request as shown below in <code>test.py</code> works if the decorator <code>@authorize(["admin"])</code> is removed in the <code>item_routes.py</code> file. Without the decorator it returns the following:</p>
<pre><code>{
"items": [
{
"id": 1,
"name": "item A"
}
]
}
</code></pre>
<p>However, when the decorator stays in the code, the FastAPI app logs <code>422 Unprocessable Entity</code> and returns:</p>
<pre><code>{
"detail": [
{
"loc": [
"query",
"kwargs"
],
"msg": "field required",
"type": "value_error.missing"
}
]
}
</code></pre>
<p>What needs to be changed in the code to have <code>@authorize(["admin"])</code> work when running <code>test.py</code>?</p>
<pre><code># test.py
import requests
base_url = "http://localhost:8000"
login_url = base_url + "/login"
login_data = {"username": "admin", "password": "password"}
response = requests.post(login_url, json=login_data)
token = response.json()["token"]
headers = {"Authorization": f"Bearer {token}"}
print(headers) # {'Authorization': 'Bearer eyJh... and so on'}
items_url = base_url + "/items"
response = requests.get(items_url, headers=headers)
print(response.json()) # {'detail': [{'loc': ['query', 'kwargs'],
# 'msg': 'field required', 'type': 'value_error.missing'}]}
item_id = 1
item_url = f"{base_url}/items/{item_id}"
response = requests.get(item_url, headers=headers)
print(response.json()) # {'detail': [{'loc': ['query', 'kwargs'],
# 'msg': 'field required', 'type': 'value_error.missing'}]}
</code></pre>
<pre><code># main.py
from fastapi import FastAPI
from item_routes import router as item_router
from login import router as login_router
import uvicorn
app = FastAPI()
app.include_router(item_router)
app.include_router(login_router)
if __name__ == "__main__":
uvicorn.run(app, host="localhost", port=8000)
</code></pre>
<pre><code># login.py
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel
from jwt import create_jwt

router = APIRouter()
class Login(BaseModel):
username: str
password: str
@router.post("/login")
async def login(login: Login):
if login.username == "admin" and login.password == "password":
role = "admin"
token = create_jwt(login.username, role)
return {"token": token}
raise HTTPException(status_code=401, detail="Invalid credentials")
</code></pre>
<pre><code># jwt.py
from jose import jwt
SECRET_KEY = "secret-key"
def create_jwt(username, role):
payload = {"sub": username, "role": role}
token = jwt.encode(payload, SECRET_KEY, algorithm="HS256")
return token
</code></pre>
<pre><code># item_routes.py
from fastapi import APIRouter
from authorization import authorize
router = APIRouter()
@router.get("/items")
@authorize(["admin"])
async def get_items():
return {"items": [{"id": 1, "name": "item A"}]}
@router.get("/items/{item_id}")
@authorize(["admin"])
async def get_item(item_id):
return {"item": item_id}
</code></pre>
<pre><code># authorization.py
from fastapi import Depends, HTTPException
from fastapi.security import HTTPBearer
from jose import jwt, JWTError
from jwt import SECRET_KEY
ROLES = {"admin": []}
async def get_current_user(token=Depends(HTTPBearer())):
try:
payload = jwt.decode(token.credentials, SECRET_KEY, algorithms=["HS256"])
username = payload.get("sub")
role = payload.get("role")
if role not in ROLES:
raise HTTPException(status_code=403, detail="Invalid role")
print({"username": username, "role": role})
return {"username": username, "role": role}
except JWTError:
raise HTTPException(status_code=401, detail="Invalid token")
def authorize(roles):
def decorator(func):
async def wrapper(current_user=Depends(get_current_user), **kwargs):
if current_user["role"] not in roles:
raise HTTPException(status_code=403, detail="Unauthorized")
return await func(**kwargs)
return wrapper
return decorator
</code></pre>
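<p>While debugging I noticed the 422 complains about a <code>kwargs</code> query parameter, so I suspect FastAPI is inspecting the wrapper's signature instead of the endpoint's. A FastAPI-free sketch of what I mean — <code>functools.wraps</code> makes <code>inspect.signature</code> report the wrapped function's signature again (I am not sure this alone is the full fix):</p>

```python
import functools
import inspect

def plain_decorator(func):
    async def wrapper(current_user=None, **kwargs):
        return await func(**kwargs)
    return wrapper

def wraps_decorator(func):
    @functools.wraps(func)  # copies __wrapped__, which inspect.signature follows
    async def wrapper(current_user=None, **kwargs):
        return await func(**kwargs)
    return wrapper

async def get_items():
    return {"items": []}

print(inspect.signature(plain_decorator(get_items)))  # prints: (current_user=None, **kwargs)
print(inspect.signature(wraps_decorator(get_items)))  # prints: ()
```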
| <python><fastapi> | 2023-07-07 16:01:20 | 2 | 329 | Anne Maier |
76,638,234 | 2,641,187 | Python method cache as object dict like cached_property | <p>What I want is a cache decorator for a method that works like <code>functools.cached_property</code>. <code>cached_property</code> caches the value of a property by adding it to the object <code>__dict__</code>, not to some global dict. Therefore, the object need not be hashable, as the "cache" (it's only one value) is stored in the object itself.</p>
<p>Now I would like the same thing for a method. So, something like a mechanism that creates a cache dict for the method in the object and memoizes method calls there. I know there are
<code>functools.cache</code> and <code>functools.lru_cache</code>, but they require the instance to be hashable, as the method is just treated as a function whose first argument is <code>self</code>.</p>
<p>I have looked quite a bit; the best thing I could find is the <a href="https://github.com/martenlienen/cached_method" rel="nofollow noreferrer">cached_method</a> package, but it only has very few stars. So either nobody needs this, there is a more popular implementation that I just can't find, or this is trivial to implement in a few lines of code.</p>
<p>Does anyone know other more popular implementations?</p>
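<p>To make "trivial to implement in a few lines" concrete, this is the kind of sketch I have in mind — the cache lives in the instance's <code>__dict__</code>, so the instance itself never needs to be hashable (the arguments still do):</p>

```python
import functools

def cached_method(func):
    """Memoize per instance, storing the cache in the object's __dict__."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        cache = self.__dict__.setdefault(f"_cache_{func.__name__}", {})
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(self, *args, **kwargs)
        return cache[key]
    return wrapper

class Unhashable:
    __hash__ = None  # deliberately unhashable, like my real class
    calls = 0

    @cached_method
    def double(self, x):
        Unhashable.calls += 1
        return 2 * x

u = Unhashable()
print(u.double(3), u.double(3), Unhashable.calls)  # prints: 6 6 1
```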
| <python><caching><methods><python-decorators> | 2023-07-07 15:22:35 | 0 | 931 | Darkdragon84 |
76,638,199 | 2,281,274 | Python source code line normalizing formatter | <p>There are many formatters which will do pep8/flake or alternative styles (e.g. black). But are there tools which do the reverse? Given Python code, convert it to one statement per line? No multiline arguments for function calls, etc.?</p>
<p>I need it to make a few automatic changes, and multiline arguments just break my sed (I want to replace debug/info statements with 'pass' in a large project to see if it speeds things up).</p>
<p>Or, maybe, is there a way to configure existing formatters to do so?</p>
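<p>For example, I noticed the standard library can already round-trip source into one statement per line (Python 3.9+ <code>ast.unparse</code>), which may be close to what I need:</p>

```python
import ast

src = """result = some_function(
    arg_one,
    arg_two,
)
log.debug(
    "message",
)"""

# ast.unparse re-emits each top-level statement on a single line
flat = ast.unparse(ast.parse(src))
print(flat)
```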
| <python><pep8> | 2023-07-07 15:18:29 | 0 | 8,055 | George Shuklin |
76,638,184 | 8,089,441 | Django model with properties that come from an external API | <p>I want to have properties on a model that come from an external API. I also want to leave those properties blank when the ORM loads an instance of the model so that queries within the ORM aren't making API requests unless explicitly asked. It'd be nice if I could call a function on the model to retrieve those external properties, maybe something like <code>.with_external()</code>. How could I achieve this? Here's what I'm starting with:</p>
<pre><code>import requests
from django.db import models


class Project(models.Model):
id = models.AutoField(primary_key=True)
name = models.CharField(max_length=255, null=True, blank=True)
@property
def external_id(self):
external_id = requests.get({API_URL})
return external_id
</code></pre>
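<p>A plain-Python sketch of the access pattern I am imagining — nothing is fetched when the object is created, and <code>.with_external()</code> triggers the request exactly once (the fetch body is a made-up stand-in for the real <code>requests.get</code> call):</p>

```python
class Project:
    def __init__(self, name):
        self.name = name
        self._external_id = None  # left blank until explicitly requested
        self.fetches = 0

    def with_external(self):
        if self._external_id is None:
            self._external_id = self._fetch_external_id()
        return self

    def _fetch_external_id(self):
        # stand-in for requests.get(API_URL); hypothetical
        self.fetches += 1
        return f"ext-{self.name}"

p = Project("demo")
p.with_external().with_external()  # idempotent: only one fetch happens
print(p._external_id, p.fetches)  # prints: ext-demo 1
```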
| <python><django><class><django-models><django-rest-framework> | 2023-07-07 15:16:13 | 1 | 731 | rustyshackleford |
76,638,174 | 850,781 | How to keep track of default function arguments? | <p>I have a bunch of functions that call each other, which have a few optional parameters:</p>
<pre><code>def f0(..., p1=3, p2=5):
...
def f1(..., p1=3, p2=5, p3=8, p4=13):
...
f0(..., p1=p1, p2=p2)
...
def f2(..., p1=3, p2=5, p3=8, p4=13, p5=21, p6=123):
...
f1(..., p1=p1, p2=p2, p3=p3, p4=p4)
...
def f3(..., p1=3, p2=5, p3=8, p4=13, p5=21, p6=123, ...):
...
f2(..., p1=p1, p2=p2, p3=p3, p4=p4, p5=p5, p6=p6)
...
</code></pre>
<p>Obviously, I don't want to keep repeating those default values (123, 21, 13 &c)
all the time. It would also be nice to be able to pass <code>None</code> <em>explicitly</em> to
mean "use the appropriate default".</p>
<p>The obvious way to deal with this is something like</p>
<pre><code>p1_default=3
p2_default=5
def f0(..., p1=None, p2=None):
p1 = p1 if p1 is not None else p1_default
p2 = p2 if p2 is not None else p2_default
...
p3_default=8
p4_default=13
def f1(..., p1=None, p2=None, p3=None, p4=None):
p3 = p3 if p3 is not None else p3_default
p4 = p4 if p4 is not None else p4_default
...
f0(..., p1=p1, p2=p2)
...
</code></pre>
<p>but this looks ugly and repetitive.</p>
<p>What is the "pythonic" way?</p>
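<p>For concreteness, this is the kind of repetition-free pattern I am hoping exists — a shared defaults dict plus a small helper, though I am not sure it counts as idiomatic:</p>

```python
DEFAULTS = {"p1": 3, "p2": 5, "p3": 8, "p4": 13}

def _fill(overrides):
    """Use the shared default whenever a value is missing or explicitly None."""
    return {k: overrides[k] if overrides.get(k) is not None else d
            for k, d in DEFAULTS.items()}

def f0(**kw):
    p = _fill(kw)
    return p["p1"] + p["p2"]

def f1(**kw):
    p = _fill(kw)
    return f0(p1=p["p1"], p2=p["p2"]) + p["p3"] + p["p4"]

print(f0(), f0(p1=None), f1(p3=1))  # prints: 8 8 22
```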
| <python><optional-parameters> | 2023-07-07 15:15:06 | 3 | 60,468 | sds |
76,638,153 | 18,950,910 | Recording microphone input with pyaudio fails | <p>I am trying to record input from several microphones using pyaudio but the recording is failing. I have built a test script which attempts to read 10 seconds of audio input. However, the reading function returns immediately, while I would expect it to wait for the required time. The wav file the script creates is the correct length; however, the data within the wav file is predominantly zeros. I have tried several microphones, different hostApis and different data rates with no luck. I am running the latest pyaudio library version, <code>0.2.13</code>, on Windows 10. I have checked my microphone privacy restrictions, where desktop apps are allowed to access the microphone, and the correct python executable is shown to have accessed a microphone at an appropriate time.</p>
<p>I would be grateful for any advice.</p>
<p>The code I am using is as follows:</p>
<pre class="lang-py prettyprint-override"><code>import pyaudio
import wave
FORMAT = pyaudio.paInt16 # Data format
CHANNELS = 1 # Mono audio
RATE = 44100 # Sample rate
CHUNK = 1024 # Frames per buffer
RECORD_SECONDS = 10 # Duration to record, adjust as needed
name_filter = 'UMM-6'
host_api_filter = pyaudio.paDirectSound
# Create a PyAudio instance
p = pyaudio.PyAudio()
def record_audio(device_index):
device_info = p.get_device_info_by_index(device_index)
print(f"Recording started for device {device_info.get('name')} ({device_index}) hostAPI {device_info.get('hostApi')} rate {RATE} channels {CHANNELS} format {FORMAT}")
# Start the stream
stream = p.open(format=FORMAT,
channels=CHANNELS,
rate=RATE,
input=True,
frames_per_buffer=CHUNK,
input_device_index=device_index)
frames = []
for _ in range(0, int((RATE * RECORD_SECONDS) / CHUNK)):
data = stream.read(CHUNK)
frames.append(data)
# Stop the stream
stream.stop_stream()
stream.close()
# Save the recorded data as a WAV file
with wave.open(f"recording_{device_index}.wav", 'wb') as wf:
wf.setnchannels(CHANNELS)
wf.setsampwidth(p.get_sample_size(FORMAT))
wf.setframerate(RATE)
wf.writeframes(b''.join(frames))
print(f"Recording finished for device {device_index}")
# Get a list of microphones
microphones = [idx for idx in range(p.get_device_count())
if (device_info := p.get_device_info_by_index(idx)) and
device_info.get('maxInputChannels') > 0 and
name_filter in device_info.get('name') and
((host_api_filter == 0) or (device_info.get('hostApi') == host_api_filter))]
for mic in microphones:
record_audio(mic)
p.terminate()
</code></pre>
| <python><pyaudio> | 2023-07-07 15:12:04 | 1 | 805 | John M. |
76,638,068 | 10,095,440 | using Matplotlib animation to simulate pendulum on a cart | <p>For me, the solution of the dynamical system looks good, but the visualization has a bug and I can't find it; I'd appreciate your help. I defined two shapes: the first is a circle simulating the tip of the pendulum, and the second is a rectangle simulating the cart. The rectangle doesn't show at all in the animation. Moreover, I want a line that connects the cart (rectangle) with the circle.</p>
<pre><code>import numpy
import scipy
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
#from matplotlib.animation import FuncAnimation
import matplotlib.animation as animation
def cart_pend(t, x, pend_mass, cart_Mass, L, g, d, u):
"""
This function is to idenstify the our equations that link our states to the inputs
"""
Sx = numpy.sin(x[2])
Cx = numpy.cos(x[2])
D = pend_mass * L * L * (cart_Mass + pend_mass * (1 - Cx**2))
dx1_dt = x[1]
dx2_dt = (1 / D) * (
-(pend_mass**2) * L**2 * g * Cx * Sx
+ pend_mass * L**2 * (pend_mass * L * x[3] ** 2 * Sx - d * x[1])
) + pend_mass * L * L * (1 / D) * u
dx3_dt = x[3]
dx4_dt = (1 / D) * (
(pend_mass + cart_Mass) * pend_mass * g * L * Sx
- pend_mass * L * Cx * (pend_mass * L * x[3] ** 2 * Sx - d * x[1])
) - pend_mass * L * Cx * (1 / D) * u
return [dx1_dt, dx2_dt, dx3_dt, dx4_dt]
m = 1
M = 5
L = 2
g = -10
d = 1
tspan = (1.0, 10.0)
x0 = [0, 0, numpy.pi, 0.5]
print("the shape of initial state vector is", numpy.shape(x0))
sol = solve_ivp(lambda t, x: cart_pend(t, x, m, M, L, g, d, -1), tspan, x0)
t = sol.t
y1, y2, y3, y4 = sol.y
print("Time points:", t)
print("Solution for y1:", y1)
print("Solution for y2:", y2)
plt.plot(t, y1, label="y1")
plt.plot(t, y2, label="y2")
plt.xlabel("Time")
plt.ylabel("State Variables")
plt.legend()
plt.grid(True)
plt.show()
#fig, ax = plt.subplots()
#(line,) = ax.plot([], [], "b")
#
#
#def init():
# ax.set_xlim(-2, 2)
# ax.set_ylim(-2, 2)
# return (line,)
#
#
#def update(frame):
# line.set_data(y1[:frame], y2[:frame])
# return (line,)
#
#
#ani = FuncAnimation(fig, update, frames=len(t), init_func=init, blit=True)
#ani.save("simulation.gif", writer="Pillow")
#
#plt.show()
# Initialize figure and axis
fig, ax = plt.subplots()
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
# Initialize shapes
shape1 = plt.Circle((0, 0), 0.1, color='red')
shape2 = plt.Rectangle((0, 0), 0.2, 0.2, color='blue')
def init():
shape1 = plt.Circle((0, 0), 0.1, color='red')
shape2 = plt.Rectangle((0, 0), 0.2, 0.2, color='blue')
return shape1, shape2
# Update function to compute new positions
def update(frame):
theta = sol.y[0, frame]
x = sol.y[2, frame]
shape1.center = (numpy.sin(theta), numpy.cos(theta))
shape2.set_xy([x - 0.1, -0.1])
return shape1, shape2
# Animation function
def animate(frame):
ax.clear()
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
ax.add_patch(shape1)
ax.add_patch(shape2)
return update(frame)
# Create animation
anim = animation.FuncAnimation(fig, animate, frames=sol.y.shape[1], interval=100, blit=True)
# Display the animation
plt.show()
# Convert animation to HTML format
html_anim = anim.to_jshtml()
# Save animation to an HTML file
with open("animation.html", "w") as f:
f.write(html_anim)
# Display the animation
plt.close()
</code></pre>
| <python><matplotlib><matplotlib-animation> | 2023-07-07 15:00:29 | 1 | 317 | Ibrahim zawra |
76,637,924 | 1,292,104 | unittest Mock - how to set function variable value as return_value for patch | <p>I'm trying to test following method under <code>my_module</code> module.</p>
<pre><code>def function_to_test(inputs):
foo = compute_foo(inputs) #Test this
foo = store_and_return_foo() # Do not test this
do_work(foo) # Test this
</code></pre>
<p>Is there a way to skip line 2 in the test? (Assumption: <code>do_work</code> can give the expected result regardless of whether I use <code>foo</code> from line 1 or line 2. Line 2 is not just a side effect; that return value is useful in production but not in development/testing.)</p>
<p>I can add a flag to method. eg. <code>function_to_test(inputs, store: bool)</code> but just wanted to see if there's a way to get around without adding it and still unittest it.</p>
<p>During the test, can I assign <code>foo</code> from line 1 to <code>foo</code> on line 2? Something like <code>patch("my_module.store_and_return_foo", return_value='my_module.function_to_test.foo')</code>.</p>
<p>Open to use any other mock library as well.</p>
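<p>A self-contained sketch of the closest thing I found — patching line 2 with a <code>side_effect</code> that re-derives <code>foo</code> for a known test input (the function bodies below are invented stand-ins for <code>my_module</code>):</p>

```python
from unittest import mock

def compute_foo(inputs):          # stand-in body
    return sum(inputs)

def store_and_return_foo():       # must never run in tests
    raise RuntimeError("hits production storage")

def do_work(foo):                 # stand-in body
    return foo * 2

def function_to_test(inputs):
    foo = compute_foo(inputs)
    foo = store_and_return_foo()  # line 2, neutralized in the test below
    return do_work(foo)

# Patch line 2 so it just re-derives foo for the known test input.
with mock.patch(f"{__name__}.store_and_return_foo",
                side_effect=lambda: compute_foo([1, 2, 3])):
    result = function_to_test([1, 2, 3])

print(result)  # prints: 12
```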
| <python><python-3.x> | 2023-07-07 14:40:01 | 1 | 3,898 | nir |
76,637,851 | 10,805,665 | How to authenticate against ms project online odata interface in python? | <p>My goal is to access the OData API of MS Project (Online = cloud version) in Python. The API is accessible under:</p>
<pre><code>https://mycompany.sharepoint.com/sites/myprojectinstance/_api/ProjectData/
</code></pre>
<p>My active-directory-user has the rights to access the ms-project-instance and the odata-api: I can access the data using a webbrowser (being logged in to my microsoft-account) and in Power-BI. The following is part of the successful response:</p>
<pre><code>service xml:base="https://mycompany.sharepoint.com/sites/myprojectinstance/_api/ProjectData/">
<workspace>
<atom:title>Default</atom:title>
<collection href="Projekte">
<atom:title>Projekte</atom:title>
</collection>
</code></pre>
<p>But what I want to do is write a Python script to access the OData feed. The problem: authentication isn't working using basic auth or NTLM auth. Here is my NTLM example:</p>
<pre><code>import json
import requests
from requests_ntlm import HttpNtlmAuth
# Set the API endpoint URL
ms_project_api_url = 'https://mycompany.sharepoint.com/sites/myprojectinstance/_api/ProjectData/'
with open('../secrets.json') as f:
secrets = json.load(f)
response = requests.get(ms_project_api_url, auth=HttpNtlmAuth(secrets['username'], secrets['password']))
</code></pre>
<p>For the username I tried different syntax: prename.lastname, domain\prename.lastname, domain\prename.lastname@company.de etc.</p>
<p>I always get the following error:</p>
<pre><code><m:error xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<m:code>-2147024891, System.UnauthorizedAccessException</m:code>
<m:message xml:lang="de-DE">Attempted to perform an unauthorized operation.</m:message>
</m:error>
</code></pre>
<p>I know that it would probably work using the <a href="https://github.com/AzureAD/microsoft-authentication-library-for-python" rel="nofollow noreferrer">microsoft authentication library for python</a>, but I want to find a way without registering an app in the Azure Portal.
During my research I found a <a href="https://stackoverflow.com/questions/74049708/project-online-authenticate-odata-feed-using-azure-ad-and-postman">post</a> regarding postman using the app-registration, but no (working) solution on how to use ntlm-auth. <strong>How can I get the ntlm-auth to work?</strong></p>
| <python><odata><ms-project> | 2023-07-07 14:29:01 | 0 | 308 | ediordna |
76,637,841 | 1,908,126 | Why pytorch is running slow in conda env | <p>I'm using an M1 Pro MacBook, and I installed PyTorch directly on my machine using the command <code>pip3 install torch torchvision torchaudio</code>. I also created a virtual env (Python 3.10) with conda, using the same install command.</p>
<p>When I run a very small training set the result differed significantly. The non-virtualized version resulted in 20s and virtualized version resulted in 40s.</p>
<p>First I thought maybe the Apple SIP has some effect on the execution of the conda env, so I disabled it. But it turned out that it made no difference.</p>
<p>So, what is causing training process in conda env so much slower?</p>
| <python><pytorch><conda><apple-silicon> | 2023-07-07 14:27:44 | 1 | 1,187 | Liang |
76,637,684 | 5,346,123 | Regex with multiple cues: find all shortest options | <p>I have a problem closely related to this question:
<a href="https://stackoverflow.com/questions/44841244/regex-find-match-within-a-string">Regex find match within a string</a></p>
<p>In that case the problem is to find <code>Warner Music Group</code> instead of <code>XYZ becomes Chief Digital Officer and EVP, Business Development of Warner Music Group</code> for</p>
<pre><code>Ole Abraham of XYZ becomes Chief Digital Officer and EVP, Business Development of Warner Music Group.
</code></pre>
<p>which is solved using <code>.*\bof\s+([^.]+)</code></p>
<p>Now I have a very similar problem, with the difference that I want all matches, and the previous solution returns only one. Here you have my basic setup with the solution above: <a href="https://regex101.com/r/bIbFaW/1" rel="nofollow noreferrer">https://regex101.com/r/bIbFaW/1</a></p>
<p>The problem is that for the string</p>
<pre><code>This is a test with a string with punctuation, and an end. Then test words, and more text. And here whith more text with more punctuation, like that.
</code></pre>
<p>the pattern <code>.*\bwith(.*?),</code> will only get me <code> more punctuation</code> (a good match), missing an earlier option <code> punctuation</code> from the first sentence.</p>
<p>Is it possible to do this or should I approach it differently? For example <code>with(.*?),</code> gets all matches, but they are the longer options (<code> a string with punctuation</code> instead of <code> punctuation,</code>). I could then try to find matches within my matches, but doing this at this moment has unrelated overhead which would be nice to avoid if possible.</p>
<p><a href="https://i.sstatic.net/8IDHK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8IDHK.png" alt="example text, with colours highlighting different parts of the string" /></a></p>
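<p>The closest I have come is a tempered pattern that forbids another <code>with</code> inside the match, which does return the short options — but I do not know whether this approach generalizes:</p>

```python
import re

text = ("This is a test with a string with punctuation, and an end. "
        "Then test words, and more text. "
        "And here whith more text with more punctuation, like that.")

# After "with", consume anything up to the comma as long as it is not another "with".
pattern = r"\bwith\s+((?:(?!\bwith\b)[^,])*),"
print(re.findall(pattern, text))  # prints: ['punctuation', 'more punctuation']
```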
| <python><regex> | 2023-07-07 14:06:49 | 2 | 1,491 | Pablo |
76,637,630 | 3,510,201 | Can I type hint my object as a TypedDict? | <p>I have an object that looks and behaves like a dictionary, over which I have minimal control, but I would like to add precise typing to a function that receives it, maybe type hinted in a way similar to <code>TypedDict</code>. Is this possible?</p>
<p>The object that behaves like a dictionary is implemented something like this</p>
<pre><code>class MyDictImplementation:
def __init__(self, values):
self.values = values
def __setitem__(self, key, value):
self.values[key] = value
def __getitem__(self, item):
return self.values[item]
def __repr__(self):
return f'{self.__class__.__name__}({self.values.__repr__()})'
</code></pre>
<p>This would be initiated with</p>
<pre><code>foo = MyDictImplementation({'project': MyDictImplementation({'states': MyDictImplementation({0: MyDictImplementation({'enabled': True, 'group': 'Meta'})})})})
</code></pre>
<p>Which would be this as a dictionary</p>
<pre><code>foo = {'project': {"states": {0: {"enabled": True, "group": 'Meta'}}}}
</code></pre>
<p>To type hint this dictionary I could create a chain of <code>TypedDict</code> and set that as the functions type hint.</p>
<pre><code>class StatusState(typing.TypedDict):
enabled: bool
group: str
class States(typing.TypedDict):
states: typing.Dict[int, StatusState]
class Project(typing.TypedDict):
project: States
def typing_test(input_: Project):
pass
foo: Project = {'project': {"states": {0: {"enabled": True, "group": 'Meta'}}}}
typing_test(foo) # Works!
</code></pre>
<p>I would like to give <code>typing_test</code> a <code>MyDictImplementation</code> object instead and have it properly type hinted. Is this possible?</p>
<pre><code>foo = MyDictImplementation({'project': MyDictImplementation({'states': MyDictImplementation({0: MyDictImplementation({'enabled': True, 'group': 'Meta'})})})})
typing_test(foo)
</code></pre>
<hr />
<p>Full test/example for your convenience</p>
<pre><code>class StatusState(typing.TypedDict):
enabled: bool
group: str
class States(typing.TypedDict):
states: typing.Dict[int, StatusState]
class Project(typing.TypedDict):
project: States
class MyDictImplementation:
def __init__(self, values):
self.values = values
def __setitem__(self, key, value):
self.values[key] = value
def __getitem__(self, item):
return self.values[item]
def __repr__(self):
return f'{self.__class__.__name__}({self.values.__repr__()})'
def typing_test(input_: Project):
pass
foo: Project = {'project': {"states": {0: {"enabled": True, "group": 'Meta'}}}}
bar = MyDictImplementation({'project': MyDictImplementation({'states': MyDictImplementation({0: MyDictImplementation({'enabled': True, 'group': 'Meta'})})})})
typing_test(foo) # This is fine
typing_test(bar) # This isn't
</code></pre>
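<p>The closest I could get on my own is a <code>typing.Protocol</code> that describes the indexing behaviour structurally — but it loses the per-key precision that <code>TypedDict</code> gives, which is why I am hoping there is something better:</p>

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class SupportsGetItem(Protocol):
    def __getitem__(self, key: Any) -> Any: ...

class MyDictImplementation:
    def __init__(self, values):
        self.values = values

    def __getitem__(self, item):
        return self.values[item]

def typing_test(input_: SupportsGetItem) -> None:
    pass

bar = MyDictImplementation({"project": "..."})
typing_test(bar)  # accepted structurally, no inheritance needed
print(isinstance(bar, SupportsGetItem))  # prints: True
```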
| <python><python-typing> | 2023-07-07 13:59:39 | 0 | 539 | Jerakin |
76,637,539 | 7,429,624 | DataprocInstantiateInlineWorkflowTemplateOperator : Error in pysparkJob template | <p>Hello fellow Stackoverflowers,</p>
<p>I have been trying to use the <code>DataprocInstantiateInlineWorkflowTemplateOperator</code> to run a PySpark job. Sadly, after following all the documentation, I am getting an error in Composer: <code>ValueError: Protocol message OrderedJob has no "stepID" field.</code></p>
<p>Here is the template that I am using.</p>
<pre><code>{
"id": "my-workflow-template",
"jobs": [
{
"stepID": "123456dfgy",
"pysparkJob": {
"mainPythonFileUri": "gs://gcp-gmp/app.py"
}
}
],
"name": "My Workflow Template",
"placement": {
"managedCluster": {
"clusterName": "my-managed-cluster",
"config": {
"master_config": {
"disk_config": {
"boot_disk_size_gb": 1024,
"boot_disk_type": "pd-standard"
},
"machine_type_uri": "n1-standard-4",
"num_instances": 1
},
"worker_config": {
"disk_config": {
"boot_disk_size_gb": 1024,
"boot_disk_type": "pd-standard"
},
"machine_type_uri": "n1-standard-4",
"num_instances": 2
}
}
}
}
}
</code></pre>
<p>Here is the entire python code.</p>
<pre><code>import json
from datetime import datetime ,timedelta
from airflow import DAG
from airflow.utils.trigger_rule import TriggerRule
from airflow.providers.google.cloud.operators.dataproc import DataprocInstantiateInlineWorkflowTemplateOperator
from airflow.operators.dummy import DummyOperator
DAG_ID= 'Dataproc_Instantiate_Inline_Workflow_TemplateOper_example'
JSON_CONTENT = """{
"id": "my-workflow-template",
"jobs": [
{
"stepID": "123456dfgy",
"pysparkJob": {
"mainPythonFileUri": "gs://my-bucket/app.py"
}
}
],
"name": "My Workflow Template",
"placement": {
"managedCluster": {
"clusterName": "my-managed-cluster",
"config": {
"master_config": {
"disk_config": {
"boot_disk_size_gb": 1024,
"boot_disk_type": "pd-standard"
},
"machine_type_uri": "n1-standard-4",
"num_instances": 1
},
"worker_config": {
"disk_config": {
"boot_disk_size_gb": 1024,
"boot_disk_type": "pd-standard"
},
"machine_type_uri": "n1-standard-4",
"num_instances": 2
}
}
}
}
}"""
template_dict = json.loads(JSON_CONTENT)
default_args = {
'start_date': datetime(2023, 6, 29),
'retries': 1,
'retry_delay': timedelta(minutes=2),
}
dag = DAG(
dag_id = DAG_ID,
default_args=default_args,
schedule_interval=None,
)
start = DummyOperator(
task_id = 'start',
dag = dag
)
create_dataproc_template = DataprocInstantiateInlineWorkflowTemplateOperator(
template = template_dict,
task_id = 'create_dataproc_template',
project_id= 'my-project',
region = 'us-central1',
gcp_conn_id = 'google_cloud_default',
dag = dag
)
complete = DummyOperator(
task_id = 'complete',
trigger_rule = TriggerRule.NONE_FAILED,
dag = dag
)
start >> create_dataproc_template >> complete
</code></pre>
<p>Strangely, when I was not using the <code>stepID</code> field, the error was <code>ValueError: Protocol message OrderedJob has no "pysparkJob" field.</code></p>
<p>Any help is appreciated.</p>
| <python><google-cloud-platform><pyspark><google-cloud-dataproc><google-cloud-composer> | 2023-07-07 13:46:17 | 1 | 826 | Aman Saurav |
76,637,392 | 1,734,843 | "Invalid value encountered in intersects" when comparing polygons with shapely | <p>I've seen "invalid value encountered in intersection" before, and it seemed like adding a check for <code>if a.intersects(b)</code> was the fix for that. Now I'm getting a warning from <code>intersects()</code> itself. Any idea why <code>intersects()</code> would be throwing warnings?</p>
<p>My code looks like this:</p>
<pre><code>results = []
for geom in geom_list:
if target_polygon.intersects(geom) and target_polygon.intersection(geom).geom_type != 'Point':
results.append(geom)
</code></pre>
<p><code>make_valid()</code> is being called elsewhere, so I don't think that's the issue.</p>
| <python><shapely> | 2023-07-07 13:23:48 | 0 | 1,251 | melicent |
76,637,339 | 17,676,984 | How to append 'OPTION (MAXDOP 1)' to SQLAlchemy select statement? | <p>I am translating queries from raw Transact-SQL to SQLAlchemy (v1.4 with future mode, happy to consider v2 as well).</p>
<p>Some of the queries limit the max degree of parallelism with <code>OPTION (MAXDOP 1)</code>.</p>
<pre class="lang-sql prettyprint-override"><code>SELECT
[...]
OPTION (MAXDOP 1);
</code></pre>
<p>Is there a way to append this <code>OPTION (MAXDOP 1)</code> to a SQLAlchemy select without a compiler extension ?</p>
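<p>For reference, the closest generic hook I have found so far is <code>suffix_with()</code>, though I am not certain it is the intended mechanism for query hints:</p>

```python
from sqlalchemy import Column, Integer, MetaData, Table, select

t = Table("t", MetaData(), Column("id", Integer))

# suffix_with() appends raw text after the rendered statement body
stmt = select(t).suffix_with("OPTION (MAXDOP 1)")
print(stmt)
```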
| <python><sql-server><t-sql><sqlalchemy> | 2023-07-07 13:16:53 | 1 | 5,373 | ljmc |
76,637,315 | 4,948,705 | How to avoid AG Grid theme selection overwriting the Voila theme | <p>I have a Voila app that displays an AG Grid; as soon as the table is created, the Voila theme is changed.
See the minimal reproducible example below: I am using Voila's "dark" theme, and once the table is created the Voila theme changes to "light":</p>
<pre><code>{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from time import sleep\n",
"from ipyaggrid import Grid\n",
"print('Theme is dark now')\n",
"sleep(10)\n",
"Grid(grid_data=[{'Foo': 'spam', 'Bar': 'eggs'}], grid_options={'columnDefs': [{'field': 'Foo'}, {'field': 'Bar'}]})\n",
"print('Theme is light now')"
]
}
],
"metadata": {
"jupytext": {
"formats": "ipynb,py:light",
"text_representation": {
"extension": ".py",
"format_name": "light",
"format_version": "1.4",
"jupytext_version": "1.2.0"
}
},
"kernelspec": {
"display_name": "Python 3.8.10 ('env_local')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"voila": {
"theme": "dark"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
</code></pre>
<hr />
<p>voila == 0.4.0</p>
<p>ipyaggrid == 0.4.0</p>
| <python><ag-grid><voila> | 2023-07-07 13:14:32 | 0 | 529 | Diego F Medina |
76,637,096 | 8,219,760 | Type hint input and output dtype as a TypeVar for a numpy function | <p>For example, having two functions</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numpy.typing import NDArray
from typing import TypeVar
T = TypeVar("T", bound=np.dtype)
A = TypeVar("A")
def generic_example(value: A) -> A:
print(type(value))
return value
def create_from_array(array: NDArray[T]) -> NDArray[T]:
return array * np.arange(array.size, dtype=array.dtype).reshape(array.shape)
if __name__ == "__main__":
array = np.array([[1, 2, 3, 4], [4, 3, 2, 1]])
mylist = generic_example(list(range(19)))
array = generic_example(array)
print(create_from_array(array))
</code></pre>
<p>cause <code>pyright</code> error</p>
<pre><code>.../numpy_typing_test.py
.../numpy_typing_test.py:14:30 - error: Could not specialize type "NDArray[ScalarType@NDArray]"
Type "T@create_from_array" cannot be assigned to type "generic"
"dtype[Unknown]*" is incompatible with "generic"
.../numpy_typing_test.py:14:45 - error: Could not specialize type "NDArray[ScalarType@NDArray]"
Type "T@create_from_array" cannot be assigned to type "generic"
"dtype[Unknown]*" is incompatible with "generic"
.../numpy_typing_test.py:15:12 - error: Operator "*" not supported for types "NDArray[Unknown]" and "ndarray[Any, dtype[ScalarType@NDArray]]" (reportGeneralTypeIssues)
3 errors, 0 warnings, 0 informations
</code></pre>
<p>The <code>TypeVar</code> for <code>np.dtype</code> does not work.</p>
<p>How can I type hint, for pyright, that the <code>dtype</code> of the function output is the same as that of one of the inputs, or the same as a type that has been given as an input?</p>
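<p>While writing this up I also experimented with binding the variable to <code>np.generic</code> instead of <code>np.dtype</code>, since <code>NDArray</code> appears to be parameterized by the scalar type rather than the dtype; I'm not certain this is the intended idiom:</p>

```python
import numpy as np
from numpy.typing import NDArray
from typing import TypeVar

# Assumption: NDArray[...] takes a scalar type, so the bound should be
# np.generic, not np.dtype.
S = TypeVar("S", bound=np.generic)

def create_from_array(array: NDArray[S]) -> NDArray[S]:
    return array * np.arange(array.size, dtype=array.dtype).reshape(array.shape)
```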
| <python><numpy><python-typing><pyright> | 2023-07-07 12:44:08 | 0 | 673 | vahvero |
76,637,091 | 13,014,864 | PySpark filter intersection of two columns | <p>I have a PySpark DataFrame and I would like to filter it such that only rows where entries in <code>col_a</code> are present in <code>col_b</code> and vice versa. Example:</p>
<pre><code>df = spark.createDataFrame(
[
("abc", "ddc", 1, 4.5),
("abb", "ddc", 4, 9.1),
("baa", "abc", 2, 3.2),
("abb", "bca", 1, 5.1),
("ddc", "abc", 2, 3.6),
("abc", "baa", 3, 2.6)
],
["col_a", "col_b", "col_c", "col_d"]
)
</code></pre>
<p>This yields a DataFrame that looks like this:</p>
<pre><code>+-----+-----+-----+-----+
|col_a|col_b|col_c|col_d|
+-----+-----+-----+-----+
| abc| ddc| 1| 4.5|
| abb| ddc| 4| 9.1|
| baa| abc| 2| 3.2|
| abb| bca| 1| 5.1|
| ddc| abc| 2| 3.6|
| abc| baa| 3| 2.6|
+-----+-----+-----+-----+
</code></pre>
<p>So, what I would like is a Spark-y method to keep only rows where <code>col_a</code> is in <code>col_b</code>, or in other words, perform an intersection between <code>col_a</code> and <code>col_b</code>. My naive way of doing this is as follows:</p>
<pre><code>col_a_and_col_b = (
set(df.select("col_a").distinct().toPandas()["col_a"].tolist())
.intersection(df.select("col_b").distinct().toPandas()["col_b"].tolist())
)
filtered_df = df.filter(
(col("col_a").isin(col_a_and_col_b))
& (col("col_b").isin(col_a_and_col_b))
)
</code></pre>
<p>(Here <code>col</code> is taken from <code>from pyspark.sql.functions import col</code>.) However, when working with large datasets, the variable <code>col_a_and_col_b</code> can be pretty large and then Spark sends that single variable out to all the nodes for processing. This strikes me as inefficient (and Spark does warn me about this), but I can't quite figure out a good way to do this. The result I'm looking for is:</p>
<pre><code>+-----+-----+-----+-----+
|col_a|col_b|col_c|col_d|
+-----+-----+-----+-----+
| abc| ddc| 1| 4.5|
| baa| abc| 2| 3.2|
| ddc| abc| 2| 3.6|
| abc| baa| 3| 2.6|
+-----+-----+-----+-----+
</code></pre>
| <python><apache-spark><pyspark><intersection> | 2023-07-07 12:42:50 | 2 | 931 | CopyOfA |
76,636,893 | 12,320,370 | Python YouTube Reporting API Authentication | <p>I've sorted through many threads on this topic but most seem outdated.</p>
<p>I am trying to call the YouTube Reporting API with <a href="https://developers.google.com/youtube/reporting/v1/code_samples/python?authuser=1" rel="nofollow noreferrer">this script</a></p>
<p>However I keep getting an error:</p>
<ul>
<li>When using 'Desktop Application' OAuth, I get:</li>
</ul>
<blockquote>
<p><strong>Error 400: invalid_request</strong>, The out-of-band (OOB) flow has been blocked in order to keep users secure.</p>
</blockquote>
<ul>
<li>When using 'Web Application' OAuth, I get:</li>
</ul>
<blockquote>
<p><strong>Error 400: redirect_uri_mismatch</strong>, The redirect URI in the request, urn:ietf:wg:oauth:2.0:oob, can only be used by a Client ID for native application. It is not allowed for the WEB client type.</p>
</blockquote>
<p>I am just still testing my code and been running out of jupyter notebook and Visual Studio Code. Same errors on both.</p>
<p>I am still a little confused as to which one I should be using, but adding redirect URIs for my localhost is not working, and I'm not sure how to proceed.</p>
<p><strong>EDIT:</strong>
I am using the below code sample directly from YouTube Documentation:</p>
<pre><code>import os
import google.oauth2.credentials
import google_auth_oauthlib.flow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from google_auth_oauthlib.flow import InstalledAppFlow
SCOPES = ['https://www.googleapis.com/auth/yt-analytics.readonly']
API_SERVICE_NAME = 'youtubeAnalytics'
API_VERSION = 'v2'
CLIENT_SECRETS_FILE = 'CREDENTIALS.json'
def get_service():
flow = InstalledAppFlow.from_client_secrets_file(CLIENT_SECRETS_FILE, SCOPES)
credentials = flow.run_console()
return build(API_SERVICE_NAME, API_VERSION, credentials = credentials)
def execute_api_request(client_library_function, **kwargs):
response = client_library_function(
**kwargs
).execute()
print(response)
if __name__ == '__main__':
# Disable OAuthlib's HTTPs verification when running locally.
# *DO NOT* leave this option enabled when running in production.
os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = '1'
youtubeAnalytics = get_service()
execute_api_request(
youtubeAnalytics.reports().query,
ids='channel==MINE',
startDate='2017-01-01',
endDate='2017-12-31',
metrics='estimatedMinutesWatched,views,likes,subscribersGained',
dimensions='day',
sort='day'
)
</code></pre>
<p>As mentioned, I have http://localhost:8888, http://localhost:8888/oauth2callback, etc. in my JSON file and in the Google Console.</p>
<p>'urn:ietf:wg:oauth:2.0:oob' is not in my code/file/google console at any point. I also have the .../auth/yt-analytics.readonly defined in my OAuth consent screen settings.</p>
<p>Really not sure what I have missed out on.</p>
| <python><google-api><youtube><youtube-api><google-api-python-client> | 2023-07-07 12:13:13 | 2 | 333 | Nairda123 |
76,636,713 | 6,199,146 | Using InteractiveBrowserCredential in DataFactoryManagementClient gives error InvalidAuthenticationTokenTenant | <p>I want to start a pipeline run in my Azure Data Factory using python. I have a multi-tenant system, where my Azure account is in one tenant, and the data factory is in the other. I have contributor role to use the data factory and I can use it from the UI.</p>
<p>In python, I use the <code>DataFactoryManagementClient</code> to connect to the ADF using <code>InteractiveBrowserCredential</code> like in the code below, and I get an error saying:</p>
<pre><code>Code: InvalidAuthenticationTokenTenant
Message: The access token is from the wrong issuer ...
</code></pre>
<p>Full code:</p>
<pre class="lang-py prettyprint-override"><code>credential = InteractiveBrowserCredential(additionally_allowed_tenants='*')
adf_client = DataFactoryManagementClient(credential=credential, subscription_id='<subscription-id>')
run_response = adf_client.pipelines.create_run(
resource_group_name=rg_name,
factory_name=adf_name,
pipeline_name=pipeline_name,
parameters={'counter': '100'})
</code></pre>
<p>I know that I can create a service principal, give it access to the data factory and use it for creating a pipeline run, but in my org, getting a service principal can take some time because it has to come from IT.</p>
<p>Can I not use <code>DataFactoryManagementClient</code> with a token from a different tenant at all? Is there an alternative to creating a service principal?</p>
| <python><azure><azure-data-factory><azure-authentication> | 2023-07-07 11:46:35 | 1 | 510 | ar7 |
76,636,371 | 534,238 | Python type hint of type "any type"? | <p>I want to provide a type hint that is any <strong>type</strong>.</p>
<p>For example, the type can be <code>int</code>, <code>float</code>, <code>str</code>, or even a yet-to-be-created custom type. It is being used in a case where I am creating custom types, and I'll be creating more.</p>
<p>So I need something like:</p>
<pre class="lang-py prettyprint-override"><code>def myfunc(entry: Union[str, float, char, MyCustomType1, MyCustomType2, ...]) -> bool:
</code></pre>
<p>where <code>entry</code> can be anything as long as it is a <em>type</em>. To be clear, I am not saying it can be <em>anything</em>. E.g. <code>entry</code> can be <code>int</code>, but it cannot be <code>7</code>.</p>
<p>It is not clear to me what I can provide in the type hint to represent a <code>Union</code> of any type, including types not yet defined in the module (to allow for the library to grow, and break nothing).</p>
| <python><python-typing> | 2023-07-07 11:01:16 | 1 | 3,558 | Mike Williamson |
76,636,266 | 1,230,911 | Alembic autogenerate from current database? | <p>I have a database with a few tables in schema_a and views of these tables in schema_b, and want to use <code>alembic revision --autogenerate</code> to have Alembic detect the views I've created.</p>
<p>When I run <code>alembic revision --autogenerate</code> using the metadata from a models.py generated by <code>sqlacodegen</code>, my views are represented as tables in the resulting code. Ideally I would like to create a migration that represents the <em>current state</em> of the database, as it was initialized using handmade SQL for all the views and tables. When I look into the autogenerated migrations, I see discrepancies between schemas and views/tables as mentioned. Is there any way to get a <code>MetaData()</code> object for a running PSQL database instead of relying on the <code>models.py</code> metadata object (which doesn't seem to be correct)?</p>
| <python><sqlalchemy><alembic> | 2023-07-07 10:47:05 | 1 | 725 | enrm |
76,636,253 | 1,333,294 | Django models relationships - the most efficient way to create\update and save | <p>Sorry for the newbie question.<br />
I'm wondering what is the most efficient way to save multiple related models to the database.<br />
Let's say I have 3 models with relationships:</p>
<pre><code>class ModelA(BaseModel):
some_field = models.BooleanField()
</code></pre>
<pre><code>class ModelB(BaseModel):
some_field = models.BooleanField()
relation = models.OneToOneField(ModelA, on_delete=models.CASCADE, related_name="modelB", null=True, blank=True)
</code></pre>
<pre><code>class ModelC(BaseModel):
some_field = models.BooleanField()
relation = models.ForeignKey(ModelA, on_delete=models.CASCADE, related_name='modelC')
</code></pre>
<p>And I'm creating new <code>ModelA</code> in the database, and I want to save <code>ModelB</code> and <code>ModelC</code> as relationships.</p>
<pre><code>model_a = ModelA.objects.create(some_field=True)
model_b = ModelB(some_field=True, relation= model_a)
model_c1 = ModelC(some_field=True, relation= model_a)
model_c2 = ModelC(some_field=True, relation= model_a)
model_a.save() # <-- This won't save the model_b and model_c objects to the DB
</code></pre>
<p>I'm looking for a more efficient way to save model_a and its related objects, instead of saving after each relationship creation.<br />
Thanks</p>
| <python><django><django-models> | 2023-07-07 10:44:08 | 1 | 989 | ItayAmza |
76,636,041 | 2,060,348 | How to scale an overlay image based on the Google Maps zoom level | <p>I am trying to paste an overlay image on a static Google Maps screenshot. Here is the code snippet:</p>
<pre><code>import requests
from PIL import Image
import io
API_KEY = '***'
plot_center = [75.49927187059548, 26.613400716542262]
zoom = 18
image_size = "640x480"
# Download Google image
url = f"https://maps.googleapis.com/maps/api/staticmap?center={plot_center[1]},{plot_center[0]}&zoom={zoom}&size={image_size}&key={API_KEY}&maptype=satellite"
response = requests.get(url)
google_image = Image.open(io.BytesIO(response.content))
# Download the overlay image
response = requests.get("https://storage.googleapis.com/kawa-public/4ed05009-8a61-4eb9-8b17-faa8d0d65ae3/kvi/kvi-s2-2023-07-03T15-01-25.838Z.png")
overlay_image = Image.open(io.BytesIO(response.content))
output_image = Image.new("RGBA", google_image.size)
output_image.paste(google_image, (0, 0))
overlay_position = (
(google_image.width - overlay_image.width) // 2,
(google_image.height - overlay_image.height) // 2,
)
output_image.paste(overlay_image, overlay_position, mask=overlay_image)
output_image.save("output_image.png")
</code></pre>
<p>The current output just pastes the overlay image at the center of the map:</p>
<p><a href="https://i.sstatic.net/AuXuR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AuXuR.png" alt="enter image description here" /></a></p>
<p>I want the overlay image to cover the farm plot it is currently in.</p>
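<p>For context, my understanding (which may be off) is that the ground resolution of a Web Mercator map is roughly <code>156543.03392 * cos(latitude) / 2**zoom</code> metres per pixel, so any scaling of the overlay would have to be derived from that:</p>

```python
import math

def meters_per_pixel(lat_deg: float, zoom: int) -> float:
    # Web Mercator ground resolution: ~156543.03392 m/px at the equator
    # at zoom 0, halving with every zoom level.
    return 156543.03392 * math.cos(math.radians(lat_deg)) / (2 ** zoom)

# e.g. at the plot centre latitude and zoom 18
resolution = meters_per_pixel(26.613400716542262, 18)
```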
| <python><python-3.x><google-maps><python-imaging-library><google-maps-static-api> | 2023-07-07 10:15:30 | 2 | 4,428 | Anurag-Sharma |
76,636,003 | 1,942,868 | Adding a column to a serializer which is not a member of the database table | <p>I have a <code>model</code> and a <code>serializer</code> with <code>restframework</code>.</p>
<p>This is the <code>model</code></p>
<pre><code>class Drawing(models.Model):
detail = models.JSONField(default=dict,null=True, blank=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
</code></pre>
<p>and <code>serializer</code>.</p>
<pre><code>class DrawingSerializer(ModelSerializer):
detail = serializers.JSONField(allow_null=True)
s3_url = serializers.CharField(read_only=True)
class Meta:
model = m.Drawing
fields = ('id','detail','s3_url','created_at','updated_at')
</code></pre>
<p>In <code>serializer</code>, it has <code>s3_url</code> column, but it is not in the <code>model</code>.</p>
<p>So, I want to compute <code>s3_url</code> for each object and return it like the example below.</p>
<p>This is not correct code, but it shows what I want to do:</p>
<p>calculate <code>s3_url</code> from <code>detail</code> for each row of the queryset.</p>
<pre><code>class DrawingViewSet(viewsets.ModelViewSet):
queryset = m.Drawing.objects.all()
serializer_class = s.DrawingSerializer
def list(self, request):
queryset = m.Drawing.objects.filter(user=request.user).order_by("-id")
for each in queryset:
each['s3_url'] = detailToUrl(each['detail'])
serializer = s.DrawingSerializer(queryset, many=True)
return Response(serializer.data)
</code></pre>
<p>The returned JSON should be:</p>
<pre><code>[
    {
        "id": 1,
        "detail": {"test": 1},
        "s3_url": "http://test1/"
    },
    {
        "id": 2,
        "detail": {"test": 2},
        "s3_url": "http://test2/"
    }
]
<p>How can I make it?</p>
| <python><django><django-rest-framework> | 2023-07-07 10:10:53 | 2 | 12,599 | whitebear |
76,635,996 | 11,552,661 | Scraping Issue: Only retrieving quotes from one page despite attempts to scrape multiple pages | <p>I was tasked with implementing a program that automatically scrapes all quotes (from all next pages) from the <a href="http://quotes.toscrape.com/js-delayed/" rel="nofollow noreferrer">http://quotes.toscrape.com/js-delayed/</a> website and saves them to a single jsonl file upon execution.</p>
<p>To achieve this, I created the following script:</p>
<pre><code>import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from webdriver_manager.chrome import ChromeDriverManager
from dotenv import load_dotenv
import os
import jsonlines
class QuotesScraper:
def __init__(self):
self.driver = None
self.wait = None
self.load_env_vars()
def load_env_vars(self):
load_dotenv()
self.proxy = os.getenv('PROXY')
self.input_url = os.getenv('INPUT_URL')
self.output_file = os.getenv('OUTPUT_FILE')
def setup_driver(self):
chrome_options = Options()
chrome_options.add_experimental_option("excludeSwitches", ["enable-logging"])
self.driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options)
self.wait = WebDriverWait(self.driver, 30)
def scrape_page(self):
time.sleep(20)
quote_elements = self.driver.find_elements(By.CLASS_NAME, "quote")
quotes = []
for quote_element in quote_elements:
text = quote_element.find_element(By.CLASS_NAME, "text").get_attribute("innerText")
author = quote_element.find_element(By.CLASS_NAME, "author").get_attribute("innerText")
tags = [tag.get_attribute("innerText") for tag in quote_element.find_elements(By.CLASS_NAME, "tag")]
quotes.append({"text": text, "by": author, "tags": tags})
return quotes
def scrape_quotes(self):
self.driver.get(self.input_url)
all_quotes = []
while True:
all_quotes.extend(self.scrape_page())
try:
next_page_button = self.wait.until(EC.element_to_be_clickable((By.CLASS_NAME, "next")))
next_page_button.click()
except Exception:
break
return all_quotes
def write_quotes_to_file(self, quotes):
with jsonlines.open(self.output_file, mode='w') as writer:
writer.write_all(quotes)
def run(self):
self.setup_driver()
quotes = self.scrape_quotes()
self.write_quotes_to_file(quotes)
self.driver.quit()
if __name__ == "__main__":
scraper = QuotesScraper()
scraper.run()
</code></pre>
<p><strong>The <code>output.jsonl</code> file is being saved correctly</strong>, but it only contains quotes from one page. I've tried different approaches to bypass this issue, but each time it only retrieves one page. My output.jsonl looks like this:</p>
<pre><code>{"text": "“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”", "by": "Albert Einstein", "tags": ["change", "deep-thoughts", "thinking", "world"]}......
</code></pre>
<br>
<p><strong>How can I fix this? Is there a more optimal approach to this problem?</strong></p>
| <python><selenium-webdriver> | 2023-07-07 10:10:06 | 1 | 1,354 | tbone |
76,635,974 | 11,850,171 | Why are there fewer nodes in the saved file than in the original graph? | <p>I am trying to save a graph as an edge list using networkx as follows:</p>
<pre><code>nx.write_edgelist(G, 'test_edges.csv', delimiter= ' ,',data=False)
</code></pre>
<p>And then I read it back as follows:</p>
<pre><code>G11=nx.read_edgelist('test_edges.csv',create_using=nx.Graph(), nodetype=int,delimiter= ' ,')
</code></pre>
<p>but when I print the number of nodes I get different results:</p>
<pre><code>print(G.number_of_nodes())   # 6606
print(G11.number_of_nodes())  # 1790
</code></pre>
<p>What's wrong, and what should I do? Thank you for your support.</p>
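<p>A hedged guess at the cause: an edge list only stores edges, so nodes without any incident edge are simply not written out and are lost on the round-trip. A minimal sketch of what I mean:</p>

```python
import networkx as nx

G = nx.Graph()
G.add_nodes_from(range(5))  # 5 nodes...
G.add_edge(0, 1)            # ...but only one edge

# Only the edge 0-1 is emitted; the isolated nodes 2, 3, 4 never appear.
lines = list(nx.generate_edgelist(G, data=False))
```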
| <python><networkx> | 2023-07-07 10:06:52 | 0 | 321 | TinaTz |
76,635,884 | 11,961,036 | Catching CTRL-C in PowerShell script | <p>I have created a shell script for both Bash and PowerShell that creates a Python <code>venv</code>, activates it, installs requirements and runs the Django migrations and server, like so:</p>
<pre><code>cd my-proj
python -m virtualenv venv
source venv/bin/activate #### .\venv\Scripts\activate FOR PS
pip install -r requirements.txt
python manage.py makemigrations
python manage.py migrate
python manage.py runserver
deactivate
cd ..
</code></pre>
<p>In Bash, when I stop the server with CTRL-C, the script will continue its execution, so it will correctly exit the <code>venv</code> and return to the previous folder.</p>
<p>But not in PowerShell. In PS it will stop the whole script. I would like to safely stop the <code>runserver</code> command by catching the CTRL-C. Is it possible?</p>
| <python><powershell><try-catch> | 2023-07-07 09:54:59 | 0 | 1,574 | Marco Frag Delle Monache |
76,635,750 | 1,739,884 | Type hint for special keys of dict subclass | <p>I would like to create type hints for a specific key of a dictionary. That is to say that the value of a key with a specific name has a specific type.</p>
<p>Say we have a <code>dict</code> with value type <code>Union[int,str]</code> and now want to specify that only the key <code>"intkey"</code> has a corresponding value of type <code>int</code> and all other keys have a value type <code>str</code>.</p>
<p>How can we do this? My attempt is below, but that does not work with mypy.</p>
<p>Note: <strong>We cannot use <a href="https://docs.python.org/3/library/typing.html#typing.TypedDict" rel="nofollow noreferrer"><code>TypedDict</code></a> here</strong> because with that all keys must be known in advance.</p>
<pre class="lang-py prettyprint-override"><code>class SpecialDict(Dict[str,Union[int,str]]):
@overload # <---- error A
def __getitem__(self, key: Literal["intkey"],/) -> int: ...
@overload
def __getitem__(self, key: str,/) -> str: ...
def __getitem__(self, key,/): return super().__getitem__(key)
@overload # <---- error B
def __setitem__(self, key: Literal["intkey"], val: int,/) -> None: ...
@overload
def __setitem__(self, key: str, val: str, /) -> None: ...
def __setitem__(self, key, val,/): return super().__setitem__(key, val)
</code></pre>
<p>But that does not work. mypy 1.4 generates the following errors</p>
<ul>
<li>A
<pre><code>Overloaded function signatures 1 and 2 overlap with incompatible return types [misc]
</code></pre>
</li>
<li>B
<pre><code>Signature of "__setitem__" incompatible with supertype "dict" [override]
Superclass:
def __setitem__(self, str, Union[int, str], /) -> None
Subclass:
@overload
def __setitem__(self, Literal['intkey'], int, /) -> None
@overload
def __setitem__(self, str, str, /) -> None
Signature of "__setitem__" incompatible with supertype "MutableMapping" [override]
Superclass:
def __setitem__(self, str, Union[int, str], /) -> None
Subclass:
@overload
def __setitem__(self, Literal['intkey'], int, /) -> None
@overload
def __setitem__(self, str, str, /) -> None
</code></pre>
</li>
</ul>
<p>I don't know why these errors are there, and I don't have even the slightest idea what the problem with error B is.</p>
| <python><dictionary><overloading><python-typing> | 2023-07-07 09:37:55 | 1 | 6,230 | Andreas H. |
76,635,665 | 8,740,854 | Call __iadd__ in case of non-existence of __add__ in python object? | <p>I know that, if an object <code>obj</code> has the method <code>__add__</code> but doesn't have <code>__iadd__</code>, then</p>
<pre class="lang-py prettyprint-override"><code>obj = obj + other # Works, obj.__add__(other) is called
obj += other # Works, try to call obj.__iadd__(other)
# then obj.__add__(other) is called
</code></pre>
<p><strong>Question:</strong> What about the reverse? Is there a way to, in case of <code>__add__</code> doesn't exist, create a copy, and call <code>__iadd__</code> on this copy, without implementing the function <code>__add__</code>?</p>
<p><strong>Description:</strong>
I will use this example with a number to explain my question; only the structure matters.</p>
<pre class="lang-py prettyprint-override"><code>class MyInteger:
def __init__(self, number: int):
self.internal = number
def __add__(self, other: int):
return self.__class__(self.internal + other)
def __str__(self) -> str:
return str(self.internal)
myinteger = MyInteger(1)
print(myinteger, id(myinteger)) # 1 2614144923680
myinteger = myinteger + 1
print(myinteger, id(myinteger)) # 2 2614144921856
myinteger += 1
print(myinteger, id(myinteger)) # 3 2614144923680
</code></pre>
<p>We see that <code>id(#2) != id(#3)</code>; the operation <code>+=</code> creates a new object instead of modifying the one that already exists.</p>
<p>To avoid creating a new object whenever <code>+=</code> is called, I can change the method <code>__add__</code> to <code>__iadd__</code>, but doing so breaks the operation <code>myinteger = myinteger + 1</code>, which still should create a new object:</p>
<pre class="lang-py prettyprint-override"><code>class MyInteger:
# ...
def __iadd__(self, other: int):
self.internal += other
return self
# ...
myinteger = MyInteger(1)
print(myinteger, id(myinteger)) # 1 2302027888624
myinteger += 1
print(myinteger, id(myinteger)) # 2 2302027888624
myinteger = myinteger + 1 # TypeError
</code></pre>
<p><strong>Disclaimer:</strong> My class is not a number, but a structure which must have <code>+</code>, <code>-</code>, <code>*</code>, <code>/</code>, <code>|</code> and <code>&</code> operations. I came up with <code>MyInteger</code> only as an example.</p>
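<p>One workaround I'm considering (not sure it's idiomatic): define <code>__add__</code> once in a small mixin that copies the object and then delegates to <code>__iadd__</code>, so each structure only has to implement the in-place operators:</p>

```python
import copy

class AddViaIAdd:
    def __add__(self, other):
        new = copy.copy(self)  # shallow copy; mutable internals may need __copy__
        new += other           # dispatches to the copy's __iadd__
        return new

class MyInteger(AddViaIAdd):
    def __init__(self, number: int):
        self.internal = number

    def __iadd__(self, other: int):
        self.internal += other
        return self
```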
| <python><methods><operator-overloading> | 2023-07-07 09:28:46 | 0 | 472 | Carlos Adir |
76,635,478 | 3,521,180 | Why is my Python program failing for 2 test cases? | <p>I have the code below to find the winner based on the score:</p>
<pre><code>def minion_game(string):
vowels = 'AEIOU'
kevin = 0
stuart = 0
n = len(string)
for i in range(n):
if string[i] in vowels:
kevin += n - i
else:
stuart += n - i
if kevin > stuart:
winner = "Kevin"
score = kevin
elif kevin < stuart:
winner = "Stuart"
score = stuart
else:
winner = "Draw"
score = kevin
print(winner, score)
if __name__ == '__main__':
s = input()
minion_game(s)
</code></pre>
<p>Below is the problem statement that I got from the HackerRank <a href="https://www.hackerrank.com/challenges/the-minion-game/problem?isFullScreen=true" rel="nofollow noreferrer">website</a>:</p>
<pre><code>Kevin and Stuart want to play 'The Minion Game'.

Game Rules:
Both players are given the same string.
Both players have to make substrings using the letters of the string.
Stuart has to make words starting with consonants.
Kevin has to make words starting with vowels.
The game ends when both players have made all possible substrings.

Scoring:
A player gets +1 point for each occurrence of the substring in the string.

For example:
String = BANANA
Kevin's vowel-beginning word = ANA
Here, ANA occurs twice in BANANA. Hence, Kevin will get 2 points.
</code></pre>
<p>I have below queries:</p>
<pre><code>- The code works fine in my IDE, but the first time I executed it on HackerRank it threw an error. I was returning "score" and the name ("Kevin", "Stuart"). When I replaced "return" with print(), the code passed. Why?
- There are 15 test cases, but 2 of them failed, and I am not sure why. I do not have enough "hackos" to see the details of the failing test cases.
</code></pre>
<p>Please suggest</p>
| <python> | 2023-07-07 09:02:58 | 1 | 1,150 | user3521180 |
76,635,461 | 5,239,013 | Function App not able to connect to Storage container | <p>I am using an Azure Python Function with the consumption-based model. In my Azure Function, I am trying to connect to a storage account and download a file using the code below. It works when connecting to a storage container which is not behind a firewall, but fails to connect to storage…</p>
<pre><code> blob = BlobClient(account_url="https://"+StorageContName+".blob.core.windows.net",
container_name=ContName+"/"+"mdhubinput",
blob_name=filename+'.csv',
credential=storagecred)
stream = blob.download_blob()
</code></pre>
| <python><azure-functions> | 2023-07-07 08:59:37 | 1 | 378 | Anshul Dubey |
76,635,283 | 2,390,362 | How to send a list as multipart/form-data in Python Requests? | <p>I'm surprised by how hard this has been for such a trivial thing, but I can't for the life of me get it to work.</p>
<p>I want to write a <code>requests.post</code> function call that basically recreates this <code>curl</code>:</p>
<pre><code>curl -s --user 'api:YOUR_API_KEY' \
https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages \
-F to=alice@example.com \
-F to=bob@example.com
</code></pre>
<p>Ignoring the auth portion, I tried this</p>
<pre><code>requests.post(url, files={"to":["alice@example.com", "bob@example.com"]})
</code></pre>
<p>However using <code>curlify</code> you can see that only <code>bob@example.com</code> is passed to the request (and the API behavior mirrors this fact).</p>
<p>A deeper look into the library source code for <code>requests</code> shows why this happens (see <code>_encode_files</code> in <a href="https://github.com/psf/requests/blob/main/requests/models.py" rel="nofollow noreferrer">https://github.com/psf/requests/blob/main/requests/models.py</a>) but I still can't figure out how to get it to do what I want.</p>
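<p>For what it's worth, <code>requests</code> also accepts <code>files</code> as a list of tuples instead of a dict, and as far as I can tell that is the only way to repeat a field name; wrapping each value as <code>(None, value)</code> sends it as a plain form field rather than a file upload. A sketch that only builds the request body, without hitting the network:</p>

```python
import requests

req = requests.Request(
    "POST",
    "https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages",
    files=[
        # (field, (filename, content)); filename=None => plain form field
        ("to", (None, "alice@example.com")),
        ("to", (None, "bob@example.com")),
    ],
)
prepared = req.prepare()  # multipart body is encoded here, no network I/O
```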
<p>Help would be greatly appreciated.</p>
| <python><python-requests><multipartform-data> | 2023-07-07 08:39:48 | 0 | 3,055 | Jad S |
76,635,011 | 4,489,082 | Plotting independent datasets in matplotlib 3D | <p>I am interested in plotting independent datasets using <code>matplotlib</code>. For that purpose I placed the independent data coordinates in the columns of X, Y and Z.</p>
<p>Consider the following Python code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
X = np.array([[1, 2 ,5], [6, 1, 7]])
Y = np.array([[4, 6, 9], [7, 1, 5]])
Z = np.array([[5, 7, 4], [2, 1, 6]])
fig1, (ax) = plt.subplots(subplot_kw={"projection": "3d"})
ax.plot(X, Y, Z)
</code></pre>
<p>I believed that for a two-dimensional array the columns represent separate data sets, but the line <code>ax.plot(X, Y, Z)</code> produces the following error:</p>
<pre><code>ValueError: operands could not be broadcast together with remapped shapes [original-
>remapped]: (6,) and requested shape (2,)
...
AttributeError: 'Line3D' object has no attribute '_verts3d'
</code></pre>
<p>I can produce the desired output by replacing <code>ax.plot(X, Y, Z)</code> with the following:</p>
<pre><code>for i in range(X.shape[1]):
ax.plot(X[:, i], Y[:, i], Z[:, i])
</code></pre>
<p>Is there a way to avoid the loop?</p>
| <python><matplotlib><matplotlib-3d> | 2023-07-07 08:00:21 | 1 | 793 | pkj |
76,634,998 | 7,339,624 | Pip: ERROR: Could not install packages due to an OSError: [Errno 5] Input/output error | <p>I'm trying to install the package <code>xformers</code> via pip on a remote server. But I get this error:</p>
<pre><code>ERROR: Could not install packages due to an OSError: [Errno 5] Input/output error
</code></pre>
<p>I don't know what to do! I looked at some other similar questions but none of them helped.</p>
| <python><python-3.x><pip> | 2023-07-07 07:58:00 | 1 | 4,337 | Peyman |
76,634,987 | 301,513 | What's python socket's equivalent of socket.on('end') event in Node.js to handle socket closure? | <p>My TCP server is written in Node.js. In Node.js there is a <a href="https://nodejs.org/api/net.html#event-end" rel="nofollow noreferrer">close event</a> to handle socket closure. So a simple Node.js socket example looks like this, copied from the <a href="https://nodejs.org/api/net.html#netcreateconnectionoptions-connectlistener" rel="nofollow noreferrer">official document</a>:</p>
<pre><code>const net = require('node:net');
const client = net.createConnection({ port: 8124 }, () => {
// 'connect' listener.
console.log('connected to server!');
...
});
client.on('data', (data) => {
//receive data
});
client.on('end', () => {
console.log('disconnected from server');
});
</code></pre>
<p>I have some TCP client code written in Python. But to my surprise, I can't find the equivalent close event handler. My Python script looks like this:</p>
<pre><code>sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('localhost', 8124))
try:
data = sock.recv(1024)
except ConnectionResetError:
print("ConnectionResetError closed by peer")
except socket.error as e:
print("socket.error closed by peer")
except Exception as e:
print("Exception closed by peer")
</code></pre>
<p>I had thought the close event would be handled either in <code>ConnectionResetError</code> or <code>socket.error</code>, but when the Node.js server closes the socket, none of those except clauses is triggered. The Python program just jumps out of the try block and runs the next line after that.</p>
<p>So how do I handle the close event in my python script?</p>
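<p>For reference, a quick stdlib experiment I ran while writing this: an orderly close by the peer does not raise anything; <code>recv()</code> simply returns an empty bytes object, which I assume is Python's counterpart of Node's <code>'end'</code> event:</p>

```python
import socket

a, b = socket.socketpair()
b.close()            # the peer performs an orderly shutdown

data = a.recv(1024)  # does NOT raise; returns b'' to signal EOF
if data == b"":
    print("disconnected from server")  # ~ the Node.js 'end' handler
a.close()
```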
<p>---- update ----</p>
<p>With the answer I got, I also found these similar questions. Coming from a Node.js background, I had thought there would be a straightforward way like the one Node.js provides.</p>
<ol>
<li><p><a href="https://stackoverflow.com/questions/28018799/python-sockets-how-to-detect-that-client-has-closed-disconnected-from-the-serve">python sockets: how to detect that client has closed/disconnected from the server side</a></p>
</li>
<li><p><a href="https://stackoverflow.com/questions/35861484/how-to-know-the-if-the-socket-connection-is-closed-in-python">How to know the if the socket connection is closed in python?</a></p>
</li>
<li><p><a href="https://stackoverflow.com/questions/17386487/python-detect-when-a-socket-disconnects-for-any-reason">Python: detect when a socket disconnects for any reason?</a></p>
</li>
</ol>
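<p>For reference, a minimal self-contained sketch of the convention those answers describe: on the Python side an orderly close by the peer shows up as <code>recv()</code> returning empty bytes, not as an exception (the toy server, thread, and port choice here are illustrative only):</p>

```python
import socket
import threading

ready = threading.Event()
info = {}

def server():
    # toy "server" that accepts one connection and closes it immediately
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('127.0.0.1', 0))          # pick any free port
    srv.listen(1)
    info['port'] = srv.getsockname()[1]
    ready.set()
    conn, _ = srv.accept()
    conn.close()                        # orderly close: Node.js would fire 'end'
    srv.close()

threading.Thread(target=server).start()
ready.wait()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('127.0.0.1', info['port']))
data = sock.recv(1024)                  # returns b'' once the peer has closed
sock.close()
print(repr(data))
```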
| <python><node.js><sockets> | 2023-07-07 07:56:29 | 1 | 12,833 | Qiulang |
76,634,778 | 3,999,951 | Cannot rename indices of multi-index pandas dataframe | <p>There is a multi-index dataframe, with two indices, result_df:</p>
<pre><code>print(result_df)
</code></pre>
<p>returns</p>
<pre><code> a_column ... another_column
datetime_end_format datetime_end_format ...
1 2023 1159 ... 1027
2 2023 2765 ... 2466
3 2023 2493 ... 2303
4 2023 3016 ... 2860
5 2023 4174 ... 4036
6 2023 3916 ... 3870
7 2023 539 ... 542
</code></pre>
<p>Both indices are named "datetime_end_format":</p>
<pre><code>print(result_df.index)
</code></pre>
<p>returns</p>
<pre><code>MultiIndex([(1, 2023),
(2, 2023),
(3, 2023),
(4, 2023),
(5, 2023),
(6, 2023),
(7, 2023)],
names=['datetime_end_format', 'datetime_end_format'])
</code></pre>
<p>An attempt is made to rename these:</p>
<pre><code>result_df.index.rename(['month','year'])
</code></pre>
<p>However, the index names are unchanged: running <code>print(result_df.index)</code> again shows no change.</p>
<p>What is the error in the code or the approach?</p>
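<p>For reference, a minimal sketch of the behaviour described above (toy data): <code>Index.rename</code> returns a new index object rather than mutating the frame in place, so the original names stay untouched unless the result is assigned back:</p>

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [(1, 2023), (2, 2023)],
    names=['datetime_end_format', 'datetime_end_format'])
df = pd.DataFrame({'a_column': [1159, 2765]}, index=idx)

renamed = df.index.rename(['month', 'year'])   # returns a NEW index object
print(list(df.index.names))                    # the original frame is unchanged
print(list(renamed.names))
```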
| <python><python-3.x><pandas> | 2023-07-07 07:27:12 | 0 | 467 | acolls_badger |
76,634,309 | 15,075,450 | Convert SVG with embedded images to PDF | <p>I am making a script that auto-generates PDF documents from a template. The document is generated as an SVG, which should then be converted to a PDF using <a href="https://cairosvg.org/" rel="nofollow noreferrer">cairosvg</a>. This step, however, fails when an embedded image is encountered.</p>
<pre class="lang-xml prettyprint-override"><code><!-- test.svg -->
<svg viewBox="0 0 1 1" width="100" height="100" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<rect x="0" y="0" width="1" height="1" fill="green"/>
<image x="0" y="0" width="1" height="1" xlink:href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAMAAAADCAYAAABWKLW/AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsIAAA7CARUoSoAAAAATSURBVBhXYwCC/yACicZgMPwHAFnNBPyimoN0AAAAAElFTkSuQmCC"/>
</svg>
</code></pre>
<p>Cairosvg appears to be silently disregarding <code><image></code> tags when converting to other formats, not just pdf.</p>
<pre class="lang-py prettyprint-override"><code>import cairosvg
cairosvg.svg2pdf(url='test.svg', write_to='out.pdf')
cairosvg.svg2png(url='test.svg', write_to='out.png')
cairosvg.svg2svg(url='test.svg', write_to='out.svg')
</code></pre>
<p>In this example, all outputs are missing the embedded image. Even svg output, which yields</p>
<pre class="lang-xml prettyprint-override"><code><!-- out.svg -->
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="75pt" height="75pt" viewBox="0 0 75 75" version="1.1">
<g id="surface7">
<rect x="0" y="0" width="75" height="75" style="fill:rgb(0%,50.196078%,0%);fill-opacity:1;stroke:none;"/>
</g>
</svg>
</code></pre>
<p>I have tried both filesystem paths and data URLs for <code>href</code>, and running cairosvg from a command line. In all cases, the embeds were silently discarded.</p>
<p>I have seen examples of cairosvg converting documents with embedded images properly, but without any code I could compare mine against. How can the images be included in conversion?</p>
| <python><pdf><svg><cairo> | 2023-07-07 06:05:46 | 1 | 1,048 | IWonderWhatThisAPIDoes |
76,634,211 | 3,457,761 | Read outlook inbox mails using Python | <p>I need to read the unread Outlook inbox mails using the <code>win32com</code> library.
I tried:</p>
<pre><code>import win32com.client
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6)
print(inbox)
for account in outlook.Accounts:
print(account.DeliveryStore.DisplayName)
messages = inbox.Items
message = messages.GetLast()
body_content = message.body
print (body_content)
</code></pre>
<p>I can see my account display name, but <code>body_content</code> is empty.
Where am I wrong? How can I read all unread mails?</p>
| <python><python-3.x><outlook><win32com> | 2023-07-07 05:45:03 | 1 | 7,288 | Harsha Biyani |
76,634,039 | 2,769,240 | How do I update to new version of conda? | <p>My conda version is 4.x</p>
<p>while new version is 23.x</p>
<p>I am trying to run the command it asks me to run to update, but that doesn't seem to update it.</p>
<p>I installed miniconda in base while doing initial installation</p>
<pre><code>==> WARNING: A newer version of conda exists. <==
current version: 4.11.0
latest version: 23.5.0
Please update conda by running
$ conda update -n base -c defaults conda
</code></pre>
<p>I have tried the above and all I get is:</p>
<pre><code>Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
</code></pre>
<p>But it still shows version as 4.x</p>
| <python><anaconda><conda> | 2023-07-07 04:54:34 | 2 | 7,580 | Baktaawar |
76,633,836 | 1,870,832 | What does langchain CharacterTextSplitter's chunk_size param even do? | <p>My default assumption was that the <code>chunk_size</code> parameter would set a ceiling on the size of the chunks/splits that come out of the <code>split_text</code> method, but that's clearly not right:</p>
<pre><code>from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter
chunk_size = 6
chunk_overlap = 2
c_splitter = CharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
text = 'abcdefghijklmnopqrstuvwxyz'
c_splitter.split_text(text)
</code></pre>
<p>prints: <code>['abcdefghijklmnopqrstuvwxyz']</code>, i.e. one single chunk that is much larger than <code>chunk_size=6</code>.</p>
<p>So I understand that it didn't split the text into chunks because it never encountered the separator. But so then the question is what <em>is</em> the <code>chunk_size</code> even doing?</p>
<p>I checked the documentation page for <code>langchain.text_splitter.CharacterTextSplitter</code> <a href="https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/character_text_splitter" rel="noreferrer">here</a> but did not see an answer to this question. And I asked the "mendable" chat-with-langchain-docs search functionality, but got the answer "The chunk_size parameter of the CharacterTextSplitter determines the maximum number of characters in each chunk of text."...which is not true, as the code sample above shows.</p>
| <python><machine-learning><text><nlp><langchain> | 2023-07-07 03:50:27 | 3 | 9,136 | Max Power |
76,633,821 | 222,279 | Why is numpy converting floats to string during an array conversion? | <p>I am seeing something strange. I have an initial Python list of strings, where you can consider the strings as the rows in a table with the first row containing the header labels. I want this in a numpy array. However, when I convert the Python list to a numpy array, the values in the list seem to be converted back to strings unless I leave the header row out of the numpy array. Here is the code:</p>
<pre><code>import numpy as np
mylist = ['Col1,Col2,Col3,Label','1,2,3,0','2,2,2,0','3,3,3,0']
X = []
for line in range(len(mylist)):
sp = mylist[line].split(',')
if line != 0:
sp = [float(element) for element in sp]
X.append(sp)
else:
# Header labels
X.append(sp)
print(X)
W = np.array(X)
print(W)
</code></pre>
<p>You can see the output here where the first line is before the conversion of the list to an numpy array and the second line is after the conversion.</p>
<pre><code>[['Col1', 'Col2', 'Col3', 'Label'], [1.0, 2.0, 3.0, 0.0], [2.0, 2.0, 2.0, 0.0], [3.0, 3.0, 3.0, 0.0]]
[['Col1' 'Col2' 'Col3' 'Label']
['1.0' '2.0' '3.0' '0.0']
['2.0' '2.0' '2.0' '0.0']
['3.0' '3.0' '3.0' '0.0']]
</code></pre>
<p>If I remove the appending of the header row then it works...of course without the header array but the types are correct:</p>
<pre><code>mylist = ['Col1,Col2,Col3,Label','1,2,3,0','2,2,2,0','3,3,3,0']
X = []
for line in range(len(mylist)):
sp = mylist[line].split(',')
if line != 0:
sp = [float(element) for element in sp]
X.append(sp)
print(X)
W = np.array(X)
print(W)
</code></pre>
<p>Output:</p>
<pre><code>[[1.0, 2.0, 3.0, 0.0], [2.0, 2.0, 2.0, 0.0], [3.0, 3.0, 3.0, 0.0]]
[[1. 2. 3. 0.]
[2. 2. 2. 0.]
[3. 3. 3. 0.]]
</code></pre>
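<p>For reference, a minimal sketch of the promotion at work: NumPy arrays are homogeneous, so mixing a string row with float rows upcasts everything to a common string dtype, while an all-float input stays <code>float64</code>:</p>

```python
import numpy as np

with_header = np.array([['Col1', 'Col2'], [1.0, 2.0]])
without_header = np.array([[1.0, 2.0], [3.0, 4.0]])

print(with_header.dtype)      # a Unicode string dtype: the floats became strings
print(without_header.dtype)   # float64
```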
| <python><numpy> | 2023-07-07 03:45:04 | 2 | 13,026 | GregH |
76,633,787 | 3,604,745 | Persistent CORS issues with Python-based Google Cloud Functions, even using the Docs example | <p>CORS is causing a highly persistent error for my web app when trying to query a Cloud Function with <code>axios</code>. I have numerous Node.js Cloud Functions that work seemlessly with my Flutter apps, but in this Next.js app I specifically need to leverage a Python Cloud Function.</p>
<p>I've tried handling CORS numerous ways, but even if I just literally use the <a href="https://cloud.google.com/functions/docs/samples/functions-http-cors#functions_http_cors-python" rel="nofollow noreferrer">Docs</a>' example I still get the CORS error. Here's the Cloud Function code:</p>
<pre><code>import functions_framework
@functions_framework.http
def myFun(request):
# Set CORS headers for the preflight request
if request.method == "OPTIONS":
# Allows GET requests from any origin with the Content-Type
# header and caches preflight response for an 3600s
headers = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET",
"Access-Control-Allow-Headers": "Content-Type",
"Access-Control-Max-Age": "3600",
}
return ("", 204, headers)
# Set CORS headers for the main request
headers = {"Access-Control-Allow-Origin": "*"}
return ("Hello World!", 200, headers)
</code></pre>
<p>This is how I call the function:</p>
<pre><code> axios
.post(
"https://myurl.cloudfunctions.net/myFun",
body,
{
headers: {
Authorization: `bearer ${token}`,
"Content-Type": "application/json",
},
},
)
.then((response) => {
console.log("Raw response: ", response.data);
const completion = response.data;
botMessage.content = completion;
console.log("Clean response: ", completion);
get().onNewMessage(botMessage);
})
.catch((error) => {
console.error(
"Error while calling Cloud Function:",
error,
);
});
</code></pre>
<p>The error:</p>
<blockquote>
<pre><code>message: "Network Error", name: "AxiosError", code: "ERR_NETWORK", config: {…}, request: XMLHttpRequest }
app-index.js:33:22
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource
</code></pre>
</blockquote>
<p>I tried changing the "GET" in the docs example to "POST".</p>
| <python><google-cloud-platform><google-cloud-functions><cors> | 2023-07-07 03:33:14 | 1 | 23,531 | Hack-R |
76,633,627 | 9,422,114 | Python logger has no attribute 'info', 'debug' | <p>I have a class that I use to have a single logger across my project.</p>
<pre><code>import os
from logging import DEBUG, FileHandler, Formatter, Logger, StreamHandler, getLogger
from typing import Set
class SingleLogger:
__logger: Logger = None # type: ignore
__handler: StreamHandler = None # type: ignore
__logging_files: Set[str] = set()
@classmethod
def __create_and_configure_logger(cls) -> None:
if cls.__logger is not None:
return
logger = getLogger("shared_logger")
logger.level = DEBUG
stream_handler = StreamHandler()
stream_handler.level = DEBUG
stream_handler.formatter = Formatter(
"%(asctime)s %(name)s %(levelname)s: %(message)s"
)
__handler = stream_handler
logger.addHandler(__handler)
cls.__logger = logger
@classmethod
def add_filehandler_to_shared_logger(
cls, file: str = "/tmp/gipcheslogs.txt", for_notebook: bool = False
) -> None:
if file in cls.__logging_files:
return
try:
if not os.path.exists(file):
with open(file=file, mode="w", encoding="utf-8") as log_file:
log_file.write("")
log_file.close()
except IOError:
return
cls.__logging_files.add(file)
file_handler = FileHandler(file)
file_handler.level = DEBUG
file_handler.formatter = Formatter(
"%(asctime)s %(name)s %(levelname)s: %(message)s"
)
cls.shared_logger.addHandler(file_handler)
if for_notebook:
cls.__logger.removeHandler(cls.__handler)
@classmethod
@property
def shared_logger(cls) -> Logger:
"""
Returns:
Logger: the shared singleton logger, and create it if necessary.
"""
if not cls.__logger:
cls.__create_and_configure_logger()
return cls.__logger
SingleLogger.shared_logger.info(msg="Hello")
SingleLogger.shared_logger.debug(msg="Hello")
</code></pre>
<p>The code above runs with no problems on Python3.9 or above, but when I run the same code on Python3.8 or lower, I immediately get the following error:</p>
<pre><code>AttributeError: 'property' object has no attribute 'info'
</code></pre>
<p>I'm not sure what the difference between the two versions of Python is and why that affects the Logger.</p>
<p>Removing the <code>@property</code> from the <code>shared_logger</code> method and calling it using <code>SingleLogger.shared_logger().info("Hi")</code> fixes the issue, but once again, I don't know why.</p>
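<p>For reference, a minimal sketch of the version-dependent behaviour: chaining <code>classmethod</code> with <code>property</code> was only honoured on Python 3.9&ndash;3.12 (added in 3.9, deprecated in 3.11, and removed again in 3.13); elsewhere the class-attribute access yields a descriptor wrapper around the bare <code>property</code> object, which has no <code>info</code> attribute:</p>

```python
import sys

class C:
    @classmethod
    @property
    def value(cls):
        return 42

v = C.value
# On 3.9-3.12 this prints 42; on 3.8 and earlier (and on 3.13+) it prints
# a wrapper around the property object instead of the computed value.
print(sys.version_info[:2], v)
```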
| <python><python-3.8><python-3.9><python-logging> | 2023-07-07 02:41:29 | 1 | 1,401 | Jacobo |
76,633,395 | 11,501,976 | In matplotlib, how can I directly adjust the legend box size? | <p>I am using matplotlib to save the plot as svg file for my LaTeX document. I am disabling <code>"text.parse_math"</code> setting to let the text typeset while building my document (see comment).</p>
<p>Here is an example for my Python code (matplotlib 3.6.2):</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
plt.rcParams["svg.fonttype"] = "none"
plt.rcParams["text.parse_math"] = False
plt.rcParams["axes.unicode_minus"] = False
plt.bar([1, 2, 3], [1, 2, 3], label=r"$\tilde{H}\left(t_\infty\right)$")
plt.legend()
plt.show()
</code></pre>
<p>and matplotlib shows the plot like this</p>
<p><a href="https://i.sstatic.net/2ig5O.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ig5O.jpg" alt="Plot rendered by matplotlib" /></a></p>
<p>while the LaTeX shows like this</p>
<p><a href="https://i.sstatic.net/a1ufN.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a1ufN.jpg" alt="Plot rendered by LaTeX" /></a></p>
<p>The label in the legend is converted to math mode by LaTeX as expected. However, since the legend box size is determined by the raw label text, its width is much larger than I want it to be in LaTeX mode.</p>
<p>I searched for the solution but the documents and the other stackoverflow questions are all about modifying the legend artist or controlling the padding. How can I directly reduce the legend box size? Specifically, I want to know if any of these is possible and which will be the best:</p>
<ol>
<li><p>Use something like a "proxy label", i.e., make the box size fit "H(t_oo)" while the real label keeps the longer LaTeX syntax.</p>
</li>
<li><p>Directly adjust the legend size from <code>plt.legend()</code></p>
</li>
<li><p>Directly create <code>Legend()</code> instance with appropriate size</p>
</li>
<li><p>Subclass <code>Legend</code> class to define my own legend.</p>
</li>
</ol>
<p>(edit: enhance code and specify matplotlib version)</p>
| <python><matplotlib><svg><latex> | 2023-07-07 01:17:50 | 2 | 378 | JS S |
76,633,121 | 7,284,068 | Getting ValueError: time data doesn't match format "%Y-%m-%d %H:%M:%S.%f%z" error | <p>I am trying to subtract <code>start_time_ns</code> from <code>end_time_ns</code> in the pandas data frame by using:
<code>df['time'] = pd.to_datetime(df['end_time_ns']) - pd.to_datetime(df['start_time_ns'])</code>, where both columns are given in nanoseconds.<br />
I am reading the CSV as <code>pd.read_csv(filename, parse_dates=[2, 3], chunksize=chunksize)</code>, where columns 2 and 3 are <code>start_time_ns</code> and <code>end_time_ns</code> respectively.
The subtraction works fine for the first chunk, but I get an error when applying it to the ~30 GB CSV file. The error is:</p>
<pre><code>Traceback (most recent call last):
File "2rg.py", line 17, in <module>
df['time'] = pd.to_datetime(df['end_time_ns']) - pd.to_datetime(df['start_time_ns'])
File "/home/nnazarov/.local/lib/python3.8/site-packages/pandas/core/tools/datetimes.py", line 1050, in to_datetime
values = convert_listlike(arg._values, format)
File "/home/nnazarov/.local/lib/python3.8/site-packages/pandas/core/tools/datetimes.py", line 453, in _convert_listlike_datetimes
return _array_strptime_with_fallback(arg, name, utc, format, exact, errors)
File "/home/nnazarov/.local/lib/python3.8/site-packages/pandas/core/tools/datetimes.py", line 484, in _array_strptime_with_fallback
result, timezones = array_strptime(arg, fmt, exact=exact, errors=errors, utc=utc)
File "pandas/_libs/tslibs/strptime.pyx", line 530, in pandas._libs.tslibs.strptime.array_strptime
File "pandas/_libs/tslibs/strptime.pyx", line 351, in pandas._libs.tslibs.strptime.array_strptime
ValueError: time data "2023-06-20 20:41:11+00:00" doesn't match format "%Y-%m-%d %H:%M:%S.%f%z", at position 816780. You might want to try:
- passing `format` if your strings have a consistent format;
- passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;
- passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.
</code></pre>
<p>For reference, here is line 816780 together with its neighbouring lines:</p>
<pre><code>NGN,NGN,2023-06-20 20:30:08.305255+00:00,2023-06-20 20:41:08.317472+00:00,131.243.51.211,144.195.208.70,49851,8801,active,UDP,503,0.000107876,"[0, 52, 46, 405, 0, 0, 0, 0]"
NGN,NGN,2023-06-20 20:40:53.903338+00:00,2023-06-20 20:41:11+00:00,2001:400:0:40::200:205,2001:400:211:81::d1,161,56640,idle,UDP,503,0.0001688016,"[0, 0, 0, 503, 0, 0, 0, 0]"
NGN,NGN,2023-06-20 20:40:53.890268+00:00,2023-06-20 20:41:10.986850+00:00,2001:400:211:81::d1,2001:400:0:40::200:205,56640,161,idle,UDP,503,4.6164e-05,"[0, 503, 0, 0, 0, 0, 0, 0]"
</code></pre>
<p>How can I resolve the issue?</p>
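<p>For reference, a minimal reproduction together with the <code>format='ISO8601'</code> escape hatch that the error message suggests (a sketch assuming pandas &ge; 2.0; both stamps are valid ISO 8601, they just don't share one strftime format because the second one has no fractional seconds):</p>

```python
import pandas as pd

s = pd.Series(['2023-06-20 20:30:08.305255+00:00',   # has .%f
               '2023-06-20 20:41:11+00:00'])         # no .%f
# format='ISO8601' accepts ISO timestamps that differ in precision
parsed = pd.to_datetime(s, format='ISO8601')
print((parsed.iloc[1] - parsed.iloc[0]).total_seconds())
```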
| <python><pandas><datetime><types> | 2023-07-06 23:41:42 | 1 | 372 | Nagmat |
76,632,939 | 11,974,225 | read_csv using Dask Dataframe - UnicodeDecodeError | <p>I'm trying to read multiple csv files using Dask:</p>
<pre><code>df = dd.read_csv(path + '*.csv', usecols = relevant_columns)
df = df.compute()
</code></pre>
<p>and I encounter the following error:</p>
<pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0x93 in position 73732: invalid start byte
</code></pre>
<p>I've tried multiple <code>encoding</code> types.</p>
| <python><pandas><dask><dask-dataframe> | 2023-07-06 22:44:00 | 0 | 907 | qwerty |
76,632,765 | 1,494,193 | How to optimise an inequality join in Pandas? | <p>I have a <em>cross-join-like operation</em> that I have implemented using a for loop. I need to make it fast and preferably elegant. It creates a block entry per day with a date range condition.</p>
<p>This works fine for <strong>small datasets</strong> but completely stalls into a very slow runtime for <strong>larger datasets</strong>. I know that it can be vectorized. My implementation is very bad.</p>
<p><em>I have looked at the other posts</em> on how to vectorize loops in DataFrames. I read <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/10min.html" rel="nofollow noreferrer">10 minutes to pandas</a> as suggested by this post <a href="https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas/16476974#16476974">How to iterate over rows in a DataFrame in Pandas</a>, tried using lambda functions. Messed with Cython. I just can't get it.</p>
<p>I tried implementing <a href="https://pandas.pydata.org/docs/reference/api/pandas.MultiIndex.to_frame.html" rel="nofollow noreferrer">[pandas.MultiIndex.to_frame]</a> and I have a strong feeling this, or one of its cousins, is a good way to go. I have also tried a bunch of other things and nothing worked.</p>
<p>I want to learn to code elegantly. All suggestions, variations on the solution, and comments are welcome.</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
import pandas as pd
beginning = pd.to_datetime('14/09/2021', dayfirst=True)
today = pd.to_datetime(datetime.today())
date_range = pd.date_range(start=beginning, end=today) # .tolist()
frame = pd.DataFrame(columns=['Record_Date', 'Identifier', 'start_date', 'end_date', 'color'])
block = pd.DataFrame(
{'Identifier': ['4913151F', 'F4E9124A', '31715888', 'D0C57FCA', '57B4D7EB', 'E46F1E5D', '99E0A2F8', 'D77E342E',
'C596D233', 'D0EED63F', 'D0C57FCA'],
'start_date': ['03/11/2020', '05/07/2022', '22/12/2016', '17/03/2024', '14/10/2022', '08/08/2022', '04/11/2020',
'13/03/2023', '05/11/2021', '12/27/2022', '13/06/2022'],
'end_date': ['11/07/2023', '11/04/2023', '14/12/2018', '20/01/2025', '15/06/2023', '09/01/2023', '16/07/2022',
'19/05/2024', '24/09/2022', '17/11/2023', '13/06/2023'],
'color': ['red', 'green', 'magenta', 'yellow', 'light_blue', 'dark_blue', 'black', 'white', 'pink', 'orange',
'yellow']})
block.start_date = pd.to_datetime(block.start_date, dayfirst=True, format='mixed')
block.end_date = pd.to_datetime(block.end_date, dayfirst=True, format='mixed')
block_uniques = block.drop_duplicates(['Identifier', 'start_date'])
for x in date_range:
temp_df = block_uniques[(block_uniques.start_date <= x) & (block_uniques.end_date >= x)]
temp_df.insert(0, 'Record_Date', x)
frame = pd.concat([frame, temp_df])
frame = frame.sort_values(['Record_Date', 'Identifier'])
frame = frame.reset_index().drop('index', axis=1)
print(frame)
</code></pre>
<p>Output and solution:</p>
<pre><code> Record_Date Identifier start_date end_date color
0 2021-09-14 4913151F 2020-11-03 2023-07-11 red
1 2021-09-14 99E0A2F8 2020-11-04 2022-07-16 black
2 2021-09-15 4913151F 2020-11-03 2023-07-11 red
3 2021-09-15 99E0A2F8 2020-11-04 2022-07-16 black
4 2021-09-16 4913151F 2020-11-03 2023-07-11 red
... ... ... ... ... ...
2641 2023-07-05 D0EED63F 2022-12-27 2023-11-17 orange
2642 2023-07-05 D77E342E 2023-03-13 2024-05-19 white
2643 2023-07-06 4913151F 2020-11-03 2023-07-11 red
2644 2023-07-06 D0EED63F 2022-12-27 2023-11-17 orange
2645 2023-07-06 D77E342E 2023-03-13 2024-05-19 white
[2646 rows x 5 columns]
</code></pre>
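<p>For reference, the shape of a vectorised cross-join-then-filter formulation of the same operation (a sketch with toy data, not the real frame; <code>how='cross'</code> needs pandas &ge; 1.2):</p>

```python
import pandas as pd

block = pd.DataFrame({
    'Identifier': ['4913151F', '99E0A2F8'],
    'start_date': pd.to_datetime(['2021-09-13', '2021-09-15']),
    'end_date':   pd.to_datetime(['2021-09-16', '2021-09-15']),
})
dates = pd.DataFrame({'Record_Date': pd.date_range('2021-09-14', '2021-09-16')})

# one cross join instead of a per-day loop, then a single boolean filter
frame = (dates.merge(block, how='cross')
              .query('start_date <= Record_Date <= end_date')
              .reset_index(drop=True))
print(frame)
```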
| <python><pandas><dataframe><optimization><vectorization> | 2023-07-06 22:01:22 | 1 | 401 | Gorgonzola |
76,632,637 | 2,386,605 | Langchain for document chat with references | <p>I would like to use Langchain for a chat that answers based on documents we make available to a model with Langhchain. However, I would also like to give answers within the chat that make references/citations to the respective document, from which they got the information from.</p>
<p>Ideally, I also would like to self-host a model that is used by Langchain, i.e., I do not want to rely on a proprietary service.</p>
<p>Is such a thing possible? If so, how?</p>
| <python><huggingface-transformers><langchain> | 2023-07-06 21:32:22 | 1 | 879 | tobias |
76,632,582 | 10,918,680 | Pandas remove rows where cells in certain columns equates to certain values | <p>Suppose I have a dataframe like this:</p>
<pre><code>df = pd.DataFrame({'animal': ['Falcon', 'Falcon',
'Parrot', 'Parrot'],
'speed': [380., 370., 24., None],
'height': [380., 370., [], None],
'weight': [380., 370., None, None]})
animal speed height weight
0 Falcon 380.0 380.0 380.0
1 Falcon 370.0 370.0 370.0
2 Parrot 24.0 [] NaN
3 Parrot NaN None NaN
</code></pre>
<p>I would like to remove rows where the values in the 'height' and 'weight' columns are either NaN/None or [ ].
So far, I can only remove rows where the cells in the columns of interest are NaN/None, by doing this:</p>
<pre><code>cols = ['height','weight'] #suppose there are potentially many cols
df2 = df.dropna(axis=0, how='all', subset=cols)
animal speed height weight
0 Falcon 380.0 380.0 380.0
1 Falcon 370.0 370.0 370.0
2 Parrot 24.0 [] NaN
</code></pre>
<p>But I want the row with the '[ ]' removed as well. Please advise.
Thanks</p>
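<p>For reference, a minimal sketch of one possible direction (an assumption here: empty lists should simply count as missing, so they can be mapped to <code>NaN</code> before the dropna-style filtering):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'animal': ['Falcon', 'Falcon', 'Parrot', 'Parrot'],
                   'speed':  [380.0, 370.0, 24.0, None],
                   'height': [380.0, 370.0, [], None],
                   'weight': [380.0, 370.0, None, None]})
cols = ['height', 'weight']

# treat empty lists as missing, then keep rows with at least one real value
as_missing = df[cols].apply(
    lambda s: s.map(lambda v: np.nan if isinstance(v, list) and not v else v))
df2 = df[~as_missing.isna().all(axis=1)]
print(df2)
```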
| <python><pandas> | 2023-07-06 21:20:01 | 3 | 425 | user173729 |
76,632,387 | 2,469,027 | Understanding code for pricing Asian Option | <p>I am looking at the code here for Asian option pricing:</p>
<p><a href="https://github.com/saulwiggin/finance-with-python/blob/master/Monte%20Carlo%20and%20Pricing%20Exotic%20Options/asian-option.py" rel="nofollow noreferrer">https://github.com/saulwiggin/finance-with-python/blob/master/Monte%20Carlo%20and%20Pricing%20Exotic%20Options/asian-option.py</a>.</p>
<p>Here is the code:</p>
<pre><code>import scipy as sp
s0=40. #today stock price
x=40. #excercise price
T=0.5 #maturity in years
r=0.05 #risk-free rate
sigma=0.2 # volatility
n_simulation =100 # number of simulations
n_steps=100
dt=T/n_steps
call=sp.zeros([n_simulation],dtype=float)
for j in range(0, n_simulation):
sT*=s0
total=0
for i in range(0,int(n_steps)):
e=sp.random.normal()
sT*=sp.exp((r-0.5*sigma*sigma)*dt+sigma*e*sp.sqrt(dt))
total+=sT
price_average=total/n_steps
call[j]=max(price_average-x,0)
call_price=mean(call)*exp(-r*T)
print 'call price = ', round(call_price,3)
</code></pre>
<p>What is the <code>sT</code> variable defined as here? I don't see a definition. What is the for loop <code>for j in range(0, n_simulation)</code> doing exactly? From context it seems like it should be the stock price at time T. If so how should <code>sT</code> be defined?</p>
<p>Thanks.</p>
<p>EDIT: I understand the code is illegal/won't compile, I'm just asking if anyone can help fill in the blanks</p>
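<p>For anyone filling in the blanks: below is a hedged sketch of what the loop presumably intends, under the assumption that <code>sT</code> is the simulated stock price and should be reset to <code>s0</code> (i.e. <code>sT = s0</code>, not <code>sT *= s0</code>) at the start of every path; the outer <code>for j ...</code> loop then runs one Monte Carlo path per iteration:</p>

```python
import numpy as np

s0, x, T, r, sigma = 40.0, 40.0, 0.5, 0.05, 0.2
n_simulation, n_steps = 100, 100
dt = T / n_steps
rng = np.random.default_rng(0)

call = np.zeros(n_simulation)
for j in range(n_simulation):          # one simulated price path per iteration
    sT = s0                            # assumed: reset the path to today's price
    total = 0.0
    for _ in range(n_steps):
        e = rng.standard_normal()
        sT *= np.exp((r - 0.5 * sigma**2) * dt + sigma * e * np.sqrt(dt))
        total += sT
    call[j] = max(total / n_steps - x, 0.0)   # average-price (Asian) payoff

call_price = call.mean() * np.exp(-r * T)
print('call price =', round(call_price, 3))
```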
| <python><python-2.x> | 2023-07-06 20:34:08 | 1 | 11,287 | lost_in_the_source |
76,632,235 | 9,462,829 | Why is data transmission slow when running API locally? | <p>Preamble: I'm very new to APIs and computer science is not my background.</p>
<p>So, I was testing an API, through <code>fastapi</code>, my function is very simple, it loads a dataset, performs a couple validations and then returns it in <code>json</code> format. A simplified version would look like this:</p>
<pre class="lang-py prettyprint-override"><code>@app.get('/data/{dataset}/{version}')
async def download_data(dataset: str, version: str):
now = time()
file = "data/{dataset}/{dataset}_{year}.feather".format(dataset = dataset, year = version)
data = pd.read_feather(file)
json_data = {"data": data.to_dict()}
print(time() - now)
return json_data
</code></pre>
<p>If I run this code locally for a given dataset (which is stored on my computer), it takes about 7 seconds (the result from <code>print(time() - now)</code>), but if I run it through the API, like this:</p>
<pre><code>r = requests.get('http://127.0.0.1:8000/data/dataset_name/version_name')
</code></pre>
<p>It takes about 80 seconds. On my coworker's computer it takes about 30 (if he's running it locally on his machine). So, my question is: why does data transmission take so long if everything's running on my machine? What am I missing?</p>
<p>Sorry I can't provide a reproducible example, not sure if possible in this case.</p>
| <python><rest><fastapi> | 2023-07-06 20:08:41 | 1 | 6,148 | Juan C |
76,632,167 | 8,245,469 | How to build a string by repeating a string with inserted values from an array | <p>I'm trying to build a regular expression to search for alternative values populated from an array. I'm sure there is a shorthand way to achieve this.</p>
<pre><code>searchValues = ['aa', 'bb', 'cc']
testValue = "|(x)"
</code></pre>
<p>I'm looking for some magic using <code>testValue</code> where it is repeated with <code>searchValues</code> items replacing <code>x</code>, resulting in <code>"(aa)|(bb)|(cc)"</code>.</p>
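<p>For reference, a minimal sketch of the usual idiom for this (<code>str.join</code> over a generator that wraps each value):</p>

```python
searchValues = ['aa', 'bb', 'cc']
pattern = "|".join(f"({v})" for v in searchValues)
print(pattern)  # (aa)|(bb)|(cc)
```

<p>If the values can contain regex metacharacters, wrapping each one in <code>re.escape(v)</code> inside the generator keeps them literal.</p>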
| <python><arrays> | 2023-07-06 19:55:32 | 1 | 385 | Crashmeister |
76,632,108 | 14,584,978 | Does python polars have a native way to store attributes? | <p>I'm regularly working with unique, non-repeating data often found in CSV files or dataframes, and these are treated as attributes of the dataframe.</p>
<p>For instance, I might have a dataframe that looks like this:</p>
<pre><code>import polars as pl
df = pl.DataFrame({
'A': [1, 2, 3, 4, 5],
'B': ['a', 'b', 'c', 'd', 'e'],
})
</code></pre>
<p>And the associated attributes are stored in JSON format:</p>
<pre><code>{
"source": "dataset.csv",
"created_on": "2023-07-06",
"unique_identifier": "ABC123"
}
</code></pre>
<p>While PyArrow has the ability to store these attributes in a Parquet file, I haven't yet used it for this particular application because of how complex and convoluted its documentation is.</p>
<p>As a frequent user of the Polars library, I'm looking to reliably read and write these attributes using the built-in Parquet implementation from polars.</p>
| <python><attributes><python-polars> | 2023-07-06 19:45:24 | 1 | 374 | Isaacnfairplay |
76,632,075 | 850,781 | How do I show animation in Jupyter? | <p>I copied <a href="https://matplotlib.org/stable/tutorials/introductory/animation_tutorial.html" rel="nofollow noreferrer">code from the manual</a>:</p>
<pre><code>%matplotlib notebook
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np
fig, ax = plt.subplots()
t = np.linspace(0, 3, 40)
g = -9.81
v0 = 12
z = g * t**2 / 2 + v0 * t
v02 = 5
z2 = g * t**2 / 2 + v02 * t
scat = ax.scatter(t[0], z[0], c="b", s=5, label=f'v0 = {v0} m/s')
line2 = ax.plot(t[0], z2[0], label=f'v0 = {v02} m/s')[0]
ax.set(xlim=[0, 3], ylim=[-4, 10], xlabel='Time [s]', ylabel='Z [m]')
ax.legend()
def update(frame):
# for each frame, update the data stored on each artist.
x = t[:frame]
y = z[:frame]
# update the scatter plot:
data = np.stack([x, y]).T
scat.set_offsets(data)
# update the line plot:
line2.set_xdata(t[:frame])
line2.set_ydata(z2[:frame])
return (scat, line2)
ani = animation.FuncAnimation(fig=fig, func=update, frames=40, interval=30)
plt.show()
</code></pre>
<p>into a new notebook and run it.</p>
<p>Alas, all I see is the figure:</p>
<p><a href="https://i.sstatic.net/kyhll.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kyhll.png" alt="enter image description here" /></a></p>
<p>(and the cell is marked with <code>*</code>).</p>
<p>When I restart the kernel, sometimes I get the beginning of the animation:</p>
<p><a href="https://i.sstatic.net/X4zQx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X4zQx.png" alt="enter image description here" /></a></p>
<p>but I never get the controls present in the matplotlib manual.</p>
<pre><code>Server Information:
You are using Jupyter Notebook.
The version of the notebook server is: 6.5.4-762f3618
The server is running on this version of Python:
Python 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0]
Current Kernel Information:
Python 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0]
</code></pre>
<p>I can save the animation in an <code>mp4</code> file and view it, but I would much prefer the interactive facility.</p>
<p>What am I doing wrong?</p>
| <python><matplotlib><animation><jupyter-notebook> | 2023-07-06 19:39:38 | 1 | 60,468 | sds |
76,631,783 | 5,550,284 | How to compare two data frames and keep rows of the left one based on common values and a time diff in pandas? | <p>I have two datasets that look like below:</p>
<pre><code>saved_info = [(datetime.datetime(2023, 7, 4, 23, 18, 22, 113476), 't55643', 'ab$ff$55'),
              (datetime.datetime(2023, 7, 4, 23, 26, 22, 113476), 't55643', '5b$ff$15'),
              (datetime.datetime(2023, 7, 4, 23, 27, 22, 133470), 't55643', 'ab$ff$55')
             ]
new_info = [('t55643', 'ab$ff$55', 44),
('t55643', 'be$qf$34', 33)
]
</code></pre>
<p>I load them into pandas as follows</p>
<pre><code>df1 = pd.DataFrame(new_info)
df1.columns = ["tid", "cid", "val"]
df2 = pd.DataFrame(saved_info)
df2.columns = ["modified_at", "tid", "cid"]
</code></pre>
<p>So the data frames look like below</p>
<p><strong>df1</strong></p>
<pre><code> tid cid val
0 t55643 ab$ff$55 44
1 t55643 be$qf$34 33
</code></pre>
<p><strong>df2</strong></p>
<pre><code> modified_at tid cid
0 datetime.datetime(2023, 7, 4, 23, 18, 22, 113476) t55643 ab$ff$55
1 datetime.datetime(2023, 7, 4, 23, 26, 22, 112471) t55643 5b$ff$15
2 datetime.datetime(2023, 7, 4, 23, 27, 22, 133470) t55643 ab$ff$55
</code></pre>
<p>Now I want to get rows from <code>df1</code> that have common <code>cid</code> value with <code>df2</code> and <code>modified_at</code> value of <code>df2</code> should be greater than <code>15mins</code></p>
<p>So lets say datetime right now is <code>2023-07-04 23:36:38</code> So accordingly the final result of <code>df1</code> should be</p>
<p><strong>df1 (final)</strong></p>
<pre><code> tid cid val
0 t55643 ab$ff$55 44
</code></pre>
<p>As you can see the <code>cid</code> value of first row of <code>df1</code> matches with the first row of <code>df2</code> and also the time diff of <code>modified_at</code> value of first row of <code>df2</code> with current time is greater than <code>15 mins</code>.</p>
<p>Now I can get rows of <code>df1</code> that share common value on <code>cid</code> column with <code>df2</code> by doing something like below</p>
<pre><code>common = df1.merge(df2, on=['cid'])
df1_final = df1[(df1.cid.isin(common.cid))]
</code></pre>
<p>For comparing time diff between rows of two data frames, I found a stackoverflow answer <a href="https://stackoverflow.com/a/46966942/5550284">https://stackoverflow.com/a/46966942/5550284</a></p>
<p>But in my case I need to check a column value against the current <strong>UTC</strong> time and furthermore I don't know how do I chain these two conditions together.</p>
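<p>The two conditions can be chained without the merge: first filter <code>df2</code> down to rows older than 15 minutes, then keep the <code>df1</code> rows whose <code>cid</code> appears there. A sketch (it treats a <code>cid</code> as eligible when <em>any</em> saved row for it is old enough, which reproduces the expected output):</p>

```python
import pandas as pd
from datetime import datetime

saved_info = [(datetime(2023, 7, 4, 23, 18, 22), 't55643', 'ab$ff$55'),
              (datetime(2023, 7, 4, 23, 26, 22), 't55643', '5b$ff$15'),
              (datetime(2023, 7, 4, 23, 27, 22), 't55643', 'ab$ff$55')]
new_info = [('t55643', 'ab$ff$55', 44), ('t55643', 'be$qf$34', 33)]
df1 = pd.DataFrame(new_info, columns=['tid', 'cid', 'val'])
df2 = pd.DataFrame(saved_info, columns=['modified_at', 'tid', 'cid'])

# Fixed "now" to mirror the example; in real code use pd.Timestamp.now('UTC')
# and make sure modified_at carries matching timezone awareness.
now = pd.Timestamp(2023, 7, 4, 23, 36, 38)

stale = df2[now - df2['modified_at'] > pd.Timedelta(minutes=15)]
df1_final = df1[df1['cid'].isin(stale['cid'])].reset_index(drop=True)
print(df1_final)
```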
<p>Can someone help me?</p>
| <python><pandas> | 2023-07-06 18:55:32 | 1 | 3,056 | Souvik Ray |
76,631,780 | 3,247,006 | Why is session unexpectedly saved in Django? | <p>I could properly experiment when session is and isn't saved in Django as shown in <a href="https://stackoverflow.com/questions/76622009/how-to-experiment-when-session-is-and-isnt-saved-in-django/76630505#76630505">my answer</a> referring <a href="https://docs.djangoproject.com/en/4.2/topics/http/sessions/#when-sessions-are-saved" rel="nofollow noreferrer">When sessions are saved</a>. *I use <strong>Django 4.2.3</strong>.</p>
<p>But if I run all cases together as shown below (The 1st run):</p>
<pre class="lang-py prettyprint-override"><code># "views.py"
from django.http import HttpResponse
def test(request):
# The 1st case
request.session["foo"] = "bar"
# The 2nd case
del request.session["foo"]
# The 3rd case
request.session["foo"] = {}
# The 4th case
request.session["foo"]["bar"] = "baz"
return HttpResponse('Test')
</code></pre>
<p>Then, the 4th case is unexpectedly saved as shown below (The 2nd run):</p>
<pre class="lang-py prettyprint-override"><code># "views.py"
from django.http import HttpResponse
def test(request):
print(request.session.get('foo')) # {'bar': 'baz'}
return HttpResponse('Test')
</code></pre>
<p>Because if I run the 1st, 2nd and 3rd cases together except the 4th case as shown below (The 1st run):</p>
<pre class="lang-py prettyprint-override"><code># "views.py"
from django.http import HttpResponse
def test(request):
# The 1st case
request.session["foo"] = "bar"
# The 2nd case
del request.session["foo"]
# The 3rd case
request.session["foo"] = {}
return HttpResponse('Test')
</code></pre>
<p>Then, run only the 4th case as shown below (The 2nd run):</p>
<pre class="lang-py prettyprint-override"><code># "views.py"
from django.http import HttpResponse
def test(request):
# The 4th case
request.session["foo"]["bar"] = "baz"
return HttpResponse('Test')
</code></pre>
<p>Then, the 4th case is not saved as shown below (The 3rd run):</p>
<pre class="lang-py prettyprint-override"><code># "views.py"
from django.http import HttpResponse
def test(request):
print(request.session.get('foo')) # {}
return HttpResponse('Test')
</code></pre>
<p>Actually, I don't use the code below in the code above as you can see:</p>
<pre class="lang-py prettyprint-override"><code>request.session.modified = True
</code></pre>
<p>And, I set <a href="https://docs.djangoproject.com/en/4.2/ref/settings/#session-save-every-request" rel="nofollow noreferrer">SESSION_SAVE_EVERY_REQUEST</a> <code>False</code> in <code>settings.py</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "settings.py"
SESSION_SAVE_EVERY_REQUEST = False
</code></pre>
<p>So, why is the 4th case unexpectedly saved if I run all cases together?</p>
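<p>The behaviour is consistent with the documented rule: the session is marked dirty only when a <em>top-level</em> key is assigned or deleted. In the combined run, the 3rd case (<code>request.session["foo"] = {}</code>) sets <code>request.session.modified</code>, so at the end of that request the whole session is serialized, including the nested <code>"bar": "baz"</code> written by the 4th case. A minimal, Django-free sketch of that detection logic (an illustration, not Django's actual implementation):</p>

```python
class TrackingSession(dict):
    """Illustration only: direct key assignment/deletion flips .modified,
    mutating a nested object does not."""
    def __init__(self):
        super().__init__()
        self.modified = False

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.modified = True

    def __delitem__(self, key):
        super().__delitem__(key)
        self.modified = True

# All four cases in one request: case 3 already set the flag, so the
# nested write of case 4 is swept along when the session is saved.
s = TrackingSession()
s["foo"] = "bar"         # case 1 -> modified
del s["foo"]             # case 2 -> modified
s["foo"] = {}            # case 3 -> modified
s["foo"]["bar"] = "baz"  # case 4 -> flag already True, whole dict saved

# Case 4 alone in a fresh request: nothing flips the flag, nothing saves.
s2 = TrackingSession()
s2["foo"] = {}
s2.modified = False      # pretend the {} was loaded from the session store
s2["foo"]["bar"] = "baz"
print(s.modified, s2.modified)  # True False
```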
| <python><django><session><save><django-sessions> | 2023-07-06 18:55:08 | 2 | 42,516 | Super Kai - Kazuya Ito |
76,631,540 | 3,971,621 | Why is this query so slow compared to Python code? | <p>I have an SQLite database with multiple tables. Two of these tables are <code>users</code> and <code>posts</code> created using the following statements in Python:</p>
<pre><code>cursor.execute('''CREATE TABLE IF NOT EXISTS users
(user_id INTEGER PRIMARY KEY,
user_name TEXT,
num_posts_parsed INTEGER DEFAULT 0,
num_posts_userpage INTEGER DEFAULT 0,
num_received_likes INTEGER DEFAULT 0,
num_points INTEGER DEFAULT 0,
banned INTEGER DEFAULT 0,
deleted INTEGER DEFAULT 0,
join_date TEXT)''')
cursor.execute('''CREATE TABLE IF NOT EXISTS posts
(post_id INTEGER PRIMARY KEY,
thread_id INTEGER,
author_id INTEGER,
post_number INTEGER,
creation_date TEXT,
post_text TEXT,
post_link TEXT,
number_chars INTEGER DEFAULT 0,
number_words INTEGER DEFAULT 0,
number_smilies INTEGER DEFAULT 0,
number_likes INTEGER DEFAULT 0,
number_edits INTEGER DEFAULT 0,
FOREIGN KEY (thread_id) REFERENCES threads(thread_id),
FOREIGN KEY (author_id) REFERENCES users(user_id))''')
</code></pre>
<p><code>posts</code> table has 2.1 million rows and <code>user</code> table contains about 10k entries. I want an SQL statement that:</p>
<ol>
<li>Obtains all posts created after 2019-04-01 and contain at least one word.</li>
<li>For each author that created one of these posts, counts how many likes (column <code>number_likes</code>) per post were received.</li>
<li>But only includes those authors who have written at least 100 posts since 2019-04-01.</li>
<li>Retrieves top n authors (user names) with the highest like per post ratio.</li>
</ol>
<p>I came up with:</p>
<pre><code>query = '''SELECT users.user_name, CAST(SUM(posts.number_likes) AS FLOAT) / COUNT(posts.post_id) as ratio
FROM users
JOIN (
SELECT *
FROM posts
WHERE number_words > 0 AND creation_date >= '2019-04-01'
) AS posts ON users.user_id = posts.author_id
WHERE users.user_id IN (
SELECT author_id
FROM posts
WHERE creation_date >= '2019-04-01'
GROUP BY author_id
HAVING COUNT(*) >= 100
)
GROUP BY users.user_name
HAVING SUM(posts.number_likes) > 0
ORDER BY ratio DESC
LIMIT ?'''
</code></pre>
<p>This is terribly slow. I stopped it after several hours. Instead, I wrote a Python function:</p>
<pre><code>def get_users_likes_per_post_after_like_introduction(db, n):
query = '''SELECT author_id, number_likes
FROM posts
JOIN users ON users.user_id = posts.author_id
WHERE number_words > 0 AND creation_date >= '2019-04-01'
'''
db.cursor.execute(query)
authors_likes = db.cursor.fetchall()
res = {}
for (author_id, likes) in authors_likes:
if author_id in res:
res[author_id][0] += 1
res[author_id][1] += likes
else:
res[author_id] = [1, likes]
likes_per_post = [(author_id, likes / num_posts) for author_id, (num_posts, likes) in res.items() if
num_posts >= 100]
likes_per_post_sorted = sorted(likes_per_post, key=lambda x: x[1], reverse=True)
relevant_part = likes_per_post_sorted[:n]
user_ids = [x[0] for x in relevant_part]
like_ratio = [x[1] for x in relevant_part]
user_names = []
for uid in user_ids:
query = f"SELECT user_name FROM users WHERE user_id = {uid}"
user_names.append(db.cursor.execute(query).fetchone()[0])
return [(name, like_ratio) for name, like_ratio in zip(user_names, like_ratio)]
</code></pre>
<p>This finishes after seconds. There may be ways to clean up this function and make it even faster, but that is not the point. Why is the SQL statement so slow? How would an SQL statement look that is of similar speed as my Python code? There are no indexes in the database.</p>
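<p>A large part of the cost is that SQLite evaluates the <code>IN (SELECT … GROUP BY …)</code> subquery and the derived-table join with no index to lean on, so <code>posts</code> gets scanned repeatedly. The whole computation can be expressed as a single aggregation pass, helped by an index on <code>(creation_date, author_id)</code>. A hedged sketch on a toy in-memory database (note two deliberate simplifications: the post-count threshold here counts only posts with at least one word, a slight semantic change from the original, and it is lowered to 1 to fit the toy data):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (user_id INTEGER PRIMARY KEY, user_name TEXT);
CREATE TABLE posts (post_id INTEGER PRIMARY KEY, author_id INTEGER,
                    creation_date TEXT, number_words INTEGER,
                    number_likes INTEGER DEFAULT 0);
-- lets SQLite seek straight to the recent rows instead of scanning
CREATE INDEX idx_posts_date_author ON posts(creation_date, author_id);
INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
INSERT INTO posts (author_id, creation_date, number_words, number_likes) VALUES
    (1, '2020-01-01', 5, 2), (1, '2020-02-01', 7, 4), (1, '2020-03-01', 3, 0),
    (2, '2020-01-15', 4, 1), (2, '2020-02-15', 6, 1),
    (1, '2018-01-01', 9, 99);  -- too old: filtered out
""")

query = """
SELECT u.user_name,
       CAST(SUM(p.number_likes) AS FLOAT) / COUNT(*) AS ratio
FROM posts p
JOIN users u ON u.user_id = p.author_id
WHERE p.creation_date >= '2019-04-01' AND p.number_words > 0
GROUP BY p.author_id
HAVING COUNT(*) >= ? AND SUM(p.number_likes) > 0
ORDER BY ratio DESC
LIMIT ?
"""
rows = conn.execute(query, (1, 10)).fetchall()  # min_posts=1 for the toy data
print(rows)  # [('alice', 2.0), ('bob', 1.0)]
```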
| <python><sql><sqlite><query-optimization> | 2023-07-06 18:15:47 | 1 | 1,831 | Merlin1896 |
76,631,451 | 5,842,023 | ModuleNotFoundError on Google Cloud Run, not locally | <p>I am deploying a Python Flask application behind gunicorn on Google Cloud run. The container runs fine locally, but I notice when I build and push the service revision I get an error and my gunicorn workers crash:</p>
<pre><code>ModuleNotFoundError: No module named 'lib'
</code></pre>
<p>My directory structure is like so:</p>
<pre><code>├── Dockerfile
├── README.md
├── gunicorn.conf.py
├── lib
│ ├── __init__.py
│ └── misc
│ ├── __init__.py
│ └── daily_message.py
├── requirements.txt
└── server.py
</code></pre>
<p>I import the function <code>get_daily_message</code> in <code>server.py</code> like so:</p>
<pre><code>from lib.misc.daily_message import get_daily_message
</code></pre>
<p>This all works fine locally and also when I build and run my container image locally.</p>
<p>Here is my Dockerfile:</p>
<pre><code># Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.11-slim
# Allow statements and log messages to immediately appear in the logs
ENV PYTHONUNBUFFERED True
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
# Copy the entire project directory to the container.
COPY . ./
# Install production dependencies.
RUN pip install --no-cache-dir -r requirements.txt
# Run the web service on container startup.
CMD ["gunicorn", "--bind", ":8080", "--workers", "2", "--threads", "4", "--timeout", "0", "server:app"]
</code></pre>
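<p>One common cause worth ruling out (an assumption, since the build log isn't shown): deployments built via <code>gcloud</code> honor a <code>.gcloudignore</code> file, falling back to <code>.gitignore</code>, so <code>lib/</code> can be silently excluded from the uploaded build context even though a local <code>docker build</code> includes it. A config sketch:</p>

```text
# .gcloudignore — make sure lib/ is NOT matched here (or in .gitignore /
# .dockerignore); you can verify what actually shipped by inspecting the image:
#   docker build -t app . && docker run --rm app ls -R /app
.git
__pycache__/
# lib/   <- a line like this would produce ModuleNotFoundError on Cloud Run only
```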
| <python><docker><flask><google-cloud-run> | 2023-07-06 18:01:18 | 1 | 1,965 | Smitty |
76,631,028 | 1,524,372 | How can I create a FractionEnum in Python without a metaclass conflict? | <p>I am trying to create a FractionEnum similar to StrEnum or IntEnum. My first attempt resulted in a metaclass conflict:</p>
<pre><code>import fractions
from enum import Enum

class FractionEnum(fractions.Fraction, Enum):
VALUE_1 = 1, 1
VALUE_2 = 8, 9
</code></pre>
<pre><code>TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
</code></pre>
<p>I followed the suggestion from this answer <a href="https://stackoverflow.com/questions/69005034/multiple-inheritance-metaclass-conflict-involving-enum">Multiple inheritance metaclass conflict involving Enum</a> and created a new metaclass:</p>
<pre><code>class FractionEnumMeta(type(Enum), type(fractions.Fraction)):
pass
class FractionEnum(fractions.Fraction, Enum, metaclass=FractionEnumMeta):
VALUE_1 = 1, 1
VALUE_2 = 8, 9
</code></pre>
<p>This solved the above error but now I get:</p>
<pre><code> File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/enum.py", line 289, in __new__
enum_member = __new__(enum_class, *args)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/fractions.py", line 93, in __new__
self = super(Fraction, cls).__new__(cls)
TypeError: Enum.__new__() missing 1 required positional argument: 'value'
</code></pre>
<p>The issue seems to be that the <code>__new__</code> call inside Fraction is trying to create an enum, from the call inside the EnumMeta metaclass:</p>
<pre><code> else:
enum_member = __new__(enum_class, *args)
</code></pre>
<p>I'm misunderstanding how the metaclasses can work together to create an object that is both a fraction and an Enum - it seems to work out of the box with int or str or classes that don't define a metaclass.</p>
<h2>Update:</h2>
<p>I was able to use the code below to have the enumeration replace the Fraction's new method, but I am getting an error if I try deepcopy a class that has the enum as a member:</p>
<pre><code>/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/enum.py:497: in _create_
_, first_enum = cls._get_mixins_(cls, bases)
</code></pre>
<p>in the Enum code:</p>
<pre><code> # ensure final parent class is an Enum derivative, find any concrete
# data type, and check that Enum has no members
first_enum = bases[-1]
if not issubclass(first_enum, Enum):
raise TypeError("new enumerations should be created as "
"`EnumName([mixin_type, ...] [data_type,] enum_type)`")
member_type = _find_data_type(bases) or object
if first_enum._member_names_:
> raise TypeError("Cannot extend enumerations")
E TypeError: Cannot extend enumerations
</code></pre>
<p>Sample to reproduce:</p>
<pre><code>import copy

class TestFractionEnum(FractionEnum):
VALUE_1 = 1, 1
VALUE_2 = 8, 9
class C:
def __init__(self):
self.fraction_enum = TestFractionEnum.VALUE_1
c = C()
print(c)
print(c.fraction_enum)
d = copy.copy(c)
print(d)
e = copy.deepcopy(c)
print(e)
</code></pre>
<h2>Update 2:</h2>
<p>Overriding deepcopy on the enum seems to work:</p>
<pre><code>def __deepcopy__(self, memo):
if type(self) == Fraction:
return self
for item in self.__class__:
if self == item:
return item
    raise AssertionError(f'Invalid enum: {self}')
</code></pre>
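<p>Putting the pieces together: skip both conflicting <code>__new__</code> implementations by building the instance with <code>object.__new__</code>, and make the copy hooks return the member itself. Enum members are singletons, and it is <code>Fraction.__deepcopy__</code> calling the class (which <code>EnumMeta</code> interprets as extending the enum) that raises <code>Cannot extend enumerations</code>. A sketch (lightly tested; enum internals shifted between Python 3.10 and 3.12, so verify on your version):</p>

```python
import copy
import fractions
from enum import Enum

class FractionEnumMeta(type(Enum), type(fractions.Fraction)):
    pass

class FractionEnum(fractions.Fraction, Enum, metaclass=FractionEnumMeta):
    def __new__(cls, numerator, denominator):
        # Bypass Fraction.__new__ (whose super().__new__ lands on
        # Enum.__new__ in this MRO) and fill Fraction's slots directly.
        self = object.__new__(cls)
        self._numerator = numerator
        self._denominator = denominator
        self._value_ = (numerator, denominator)
        return self

    # Members are singletons: copying must hand back the same object,
    # otherwise Fraction's copy hooks try to call the enum class.
    def __copy__(self):
        return self

    def __deepcopy__(self, memo):
        return self

    VALUE_1 = 1, 1
    VALUE_2 = 8, 9

print(FractionEnum.VALUE_1 + FractionEnum.VALUE_2)                   # 17/9
print(copy.deepcopy(FractionEnum.VALUE_2) is FractionEnum.VALUE_2)   # True
```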
| <python><enums><metaclass> | 2023-07-06 16:55:42 | 1 | 311 | Paul |
76,630,992 | 4,502,950 | Filter out rows of the last available date of the quarter pandas | <p>I have a dataset that looks like this</p>
<pre><code>data = {'Date': ['2022-01-01', '2022-02-15', '2022-03-10', '2022-04-20', '2022-05-05', '2022-06-30', '2022-07-15', '2022-08-10', '2022-09-25', '2022-09-25'],
'Value': [10, 15, 20, 25, 30, 35, 40, 45, 50, 50]}
</code></pre>
<p>What I want to do is get the last available date of each quarter so that the result would look like</p>
<pre><code> Date Value
0 2022-03-10 20
1 2022-06-30 35
2 2022-09-25 50
3 2022-09-25 50
</code></pre>
<p>What I have done is something like this</p>
<pre><code>import pandas as pd
# Create sample DataFrame
data = {'Date': ['2022-01-01', '2022-02-15', '2022-03-10', '2022-04-20', '2022-05-05', '2022-06-30', '2022-07-15', '2022-08-10', '2022-09-25', '2022-09-25'],
'Value': [10, 15, 20, 25, 30, 35, 40, 45, 50, 50]}
df = pd.DataFrame(data)
# Convert 'Date' column to datetime format
df['Date'] = pd.to_datetime(df['Date'])
# Create a new DataFrame with last dates of quarters
last_dates = df.resample('Q', on='Date').last().reset_index()
# Merge the original DataFrame with the last_dates DataFrame
df = pd.merge(df, last_dates, on='Date')
print(df)
</code></pre>
<p>But it keeps throwing me the error</p>
<pre><code>ValueError: cannot insert Date, already exists
</code></pre>
<p>How can I resolve this issue?</p>
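<p>The <code>ValueError</code> comes from <code>reset_index()</code>: <code>resample('Q', on='Date').last()</code> keeps a <code>Date</code> column <em>and</em> names the new index <code>Date</code>. You can sidestep the merge entirely by broadcasting each quarter's max date back onto the rows. A sketch:</p>

```python
import pandas as pd

data = {'Date': ['2022-01-01', '2022-02-15', '2022-03-10', '2022-04-20',
                 '2022-05-05', '2022-06-30', '2022-07-15', '2022-08-10',
                 '2022-09-25', '2022-09-25'],
        'Value': [10, 15, 20, 25, 30, 35, 40, 45, 50, 50]}
df = pd.DataFrame(data)
df['Date'] = pd.to_datetime(df['Date'])

# Broadcast the max date of each quarter back onto the rows, then filter;
# duplicate last-dates (like the two 2022-09-25 rows) are all kept.
quarter_max = df.groupby(df['Date'].dt.to_period('Q'))['Date'].transform('max')
result = df[df['Date'] == quarter_max].reset_index(drop=True)
print(result)
```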
| <python><pandas> | 2023-07-06 16:50:18 | 1 | 693 | hyeri |
76,630,965 | 14,427,714 | Service /usr/bin/chromedriver unexpectedly exited Status code was: 1 | <p>I have been trying to host a Django project with Selenium inside a DigitalOcean droplet. I installed all the necessary things, but I am getting this error:</p>
<pre><code>Service /usr/bin/chromedriver unexpectedly exited. Status code was: 1\n
</code></pre>
<p>If I write this command: chromedriver I get this:</p>
<pre><code>Starting ChromeDriver 114.0.5735.198 (c3029382d11c5f499e4fc317353a43d411a5ce1c-refs/branch-heads/5735@{#1394}) on port 9515
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
</code></pre>
<p>This is my chromedriver version:</p>
<pre><code>ChromeDriver 114.0.5735.198 (c3029382d11c5f499e4fc317353a43d411a5ce1c-refs/branch-heads/5735@{#1394})
</code></pre>
<p>This is my google-chrome version:</p>
<pre><code>Google Chrome 114.0.5735.198
</code></pre>
<p>I have deployed it using nginx and gunicorn. The server itself is running well, but I get this error whenever I send a request that uses the Selenium chromedriver.</p>
<p>Here is a code snippet for this automation.py:</p>
<pre><code>class Scrape:
def find_email_and_phone(self, url):
payloads = {
"company_name": self.remove_and_fetch_name(url),
"email": "",
"links": [],
"numbers": []
}
links = []
driver_location = "/usr/bin/google-chrome"
# driver_service = Service("/chromedriver_linux64/chromedriver")
chrome_options_ = Options()
chrome_options_.add_argument('--verbose')
chrome_options_.add_argument('--headless')
chrome_options_.binary_location = '/usr/bin/google-chrome'
chrome_options_.add_argument('--no-sandbox')
chrome_options_.add_argument('--disable-dev-shm-usage')
chrome_options_.add_argument('')
driver_ = webdriver.Chrome(options=chrome_options_, service=Service(executable_path=driver_location))
try:
driver_.get(url)
page_content = driver_.page_source
email_pattern = re.search(r"[\w.+-]+@[\w-]+\.[\w.-]+", page_content)
# links_pattern = re.search(r"")
if email_pattern:
payloads["email"] = email_pattern.group()
links.append(email_pattern.group())
# print(links)
else:
print("No Email Found!")
# finding all social links (searching for linkedin / facebook)
links_pattern = re.findall(r'href=[\'"]?([^\'" >]+)', page_content)
https_links = [link for link in links_pattern if link.startswith("https://")]
filtered_links = []
keywords = ["linkedin"]
for link in https_links:
if any(keyword in link for keyword in keywords):
filtered_links.append(link)
payloads["links"] = [link for link in filtered_links]
# finding phone numbers that are present inside the website
phone_numbers = re.findall(
r'\b(?:\+?\d{1,3}\s*(?:\(\d{1,}\))?)?[.\-\s]?\(?(\d{3})\)?[.\-\s]?(\d{3})[.\-\s]?(\d{4})\b',
page_content)
formatted_phone_numbers = [
f"({number[0]}) {number[1]}-{number[2]}" for number in set(phone_numbers)]
payloads["numbers"] = [number for number in formatted_phone_numbers]
# df = pd.DataFrame([payloads])
# df['numbers'] = df['numbers'].apply(lambda x: ', '.join(x))
# df.to_csv(f"{datetime.now()}.csv", index=False)
return payloads
except Exception as e:
return str(e)
finally:
driver_.quit()
</code></pre>
<p>Here is my views.py:</p>
<pre><code>def post(self, request):
    try:
        email_and_phone = []
        scrap = Scrape()
        query = request.data.get("query")
        data = scrap.extract_important_links(query, int(request.data.get("number_of_results")))
        for d in data:
            sc = scrap.find_email_and_phone(d)
            email_and_phone.append(sc)

        for item in email_and_phone:
            dataset = DataSet.objects.create(
                company_name=item["company_name"],
                email=item["email"]
            )
            for n in item["numbers"]:
                numbers = Numbers.objects.create(
                    number=n
                )
                dataset.numbers.add(numbers.id)
            for li in item["links"]:
                links = Links.objects.create(
                    link=li
                )
                dataset.links.add(links.id)
        return response({
            "success": True,
            "data": email_and_phone
        }, status=status.HTTP_200_OK)
    except Exception as e:
        return response({
            "success": False,
            "message": str(e)
        }, status=status.HTTP_500_INTERNAL_SERVER_ERROR)
</code></pre>
<p>I saw a lot of solutions on Stack Overflow, but couldn't find one that works for me. The script runs well when I execute it directly with <code>python3 automation.py</code>: it doesn't throw any exception. It also runs well when I start the dev server with this command:</p>
<pre><code>python3 manage.py runserver my_ip:8000
</code></pre>
<p>But it doesn't work properly when I send the request to the deployed server (nginx + gunicorn) instead of going through <code>runserver</code>.</p>
| <python><django><selenium-webdriver><selenium-chromedriver><digital> | 2023-07-06 16:45:29 | 1 | 549 | Sakib ovi |
76,630,950 | 1,709,440 | Fit the chat response into a list in GPT API | <p>I'm trying to get the emotion in a text using chatgpt API</p>
<pre><code>def infer_feeling(text):
prompt = f"What feeling is filled in the following text?\nText: {text}\nFeeling:"
response = openai.ChatCompletion.create(
model=model,
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
)
reply = response.choices[0].message['content']
emotions = ["happiness", "sadness", "anger", "fear", "trust", "curiosity", "hope", "despair"]
</code></pre>
<p>What I want is to get the reply as one of the elements of that array (<code>emotions</code>). Is it possible to match the response of GPT to an element of this array? I want it to return the best-matching emotion in that array, and nothing else.</p>
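<p>Two complementary tactics (a sketch; model replies vary): constrain the model up front by listing the allowed labels in the prompt, e.g. appending <code>"Answer with exactly one word from this list: " + ", ".join(emotions)</code>, and still normalize the free-form reply onto the list afterwards, since the model may add punctuation or extra words:</p>

```python
import difflib

emotions = ["happiness", "sadness", "anger", "fear", "trust",
            "curiosity", "hope", "despair"]

def match_emotion(reply: str, emotions: list[str]) -> str:
    """Map a free-form model reply onto the closest allowed label."""
    reply_l = reply.lower()
    for e in emotions:            # an exact label mentioned anywhere wins
        if e in reply_l:
            return e
    # otherwise fall back to the closest fuzzy match
    return difflib.get_close_matches(reply_l.strip(" ."), emotions,
                                     n=1, cutoff=0.0)[0]

print(match_emotion("The text is filled with sadness and longing.", emotions))
```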
<p>Thanks in advance for any help</p>
| <python><openai-api><chatgpt-api><large-language-model> | 2023-07-06 16:43:35 | 2 | 325 | mhmtemnacr |
76,630,830 | 8,262,535 | Images sizes not matched when downsampling then upsampling images | <p>I am trying to downsample an image (for speed), run prediction, then upsample it back. Due to rounding, I get mismatches with the original image size for some pixel dimensions/voxel sizes. What is the best way to handle this?</p>
<blockquote>
<p>Forward pass</p>
</blockquote>
<pre><code>original_size = (192, 192, 299)
original_spacing = (3.6458332538605, 3.6458332538605, 3.27)
out_spacing= (5.0, 5.0, 5.0)
out_size = [
int(np.round(original_size[0] * (original_spacing[0] / out_spacing[0]))),
int(np.round(original_size[1] * (original_spacing[1] / out_spacing[1]))),
int(np.round(original_size[2] * (original_spacing[2] / out_spacing[2])))]
</code></pre>
<blockquote>
<p>= [140, 140, 196]</p>
</blockquote>
<blockquote>
<p>Reverse Pass</p>
</blockquote>
<pre><code>original_size = (140, 140, 196)
original_spacing = (5.0, 5.0, 5.0)
out_spacing= (3.6458332538605, 3.6458332538605, 3.27)
out_size = [
int(np.round(original_size[0] * (original_spacing[0] / out_spacing[0]))),
int(np.round(original_size[1] * (original_spacing[1] / out_spacing[1]))),
int(np.round(original_size[2] * (original_spacing[2] / out_spacing[2])))]
</code></pre>
<blockquote>
<p>out_size = [192, 192, 300]</p>
</blockquote>
<p>The forward-reverse output size has 300 slices vs the input which has 299, due to rounding.</p>
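<p>Since the downsampled geometry is derived from the original, the robust fix is to carry the original size/spacing along and reuse them verbatim on the way back (with SimpleITK, for example, by resampling with the original image as the reference) instead of re-deriving them from already-rounded values. A sketch of the arithmetic:</p>

```python
def downsample_size(original_size, original_spacing, out_spacing):
    # round once, on the way down only
    return [int(round(sz * sp / out_sp))
            for sz, sp, out_sp in zip(original_size, original_spacing, out_spacing)]

original_size = (192, 192, 299)
original_spacing = (3.6458332538605, 3.6458332538605, 3.27)
out_spacing = (5.0, 5.0, 5.0)

down_size = downsample_size(original_size, original_spacing, out_spacing)
print(down_size)  # [140, 140, 196]

# Reverse pass: do NOT recompute from (down_size, out_spacing); just
# reuse the stored originals, which match the input by construction.
up_size = list(original_size)      # [192, 192, 299]
up_spacing = list(original_spacing)
```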
| <python><image><image-processing><resampling><simpleitk> | 2023-07-06 16:25:11 | 1 | 385 | illan |
76,630,777 | 2,886,575 | Python urllib encoding graphql POST data? | <p>I am trying to make a POST request to a graphql server using <code>urllib</code>. I can successfully accomplish this with <code>requests</code>, but I need to use <code>urllib</code> because reasons.</p>
<p>Here is a minimal example of the functional <code>requests</code> implementation (<a href="https://swapi-graphql.netlify.app/?query=query%20MyQuery(%24pid%3A%20ID)%20%7B%0A%09person(id%3A%20%24pid)%20%7B%0A%20%20%20%20name%0A%20%20%7D%0A%7D&operationName=MyQuery&variables=%7B%0A%20%20%22pid%22%3A%20%22cGVvcGxlOjE%3D%22%0A%7D" rel="nofollow noreferrer">link to graphql playground</a>):</p>
<pre><code>import requests
url = "https://swapi-graphql.netlify.app/.netlify/functions/index"
data = {
"query": "query MyQuery($pid: ID) {\n\tperson(id: $pid) {\n name\n }\n}",
"variables": {
"pid": "cGVvcGxlOjE="
},
"operationName": "MyQuery"
}
response = requests.post(url=url, json=data)
print(response.content)
</code></pre>
<p>When I try to implement this with <code>urllib</code>, something about the encoding isn't working correctly, and I get a 400: Bad Request:</p>
<pre><code>import urllib.parse
import urllib.request
encoded_data = urllib.parse.urlencode(data).encode()
req = urllib.request.Request(url, data=encoded_data, method="POST")
with urllib.request.urlopen(req) as response:
the_page = response.read()
print(the_page.decode("utf-8"))
</code></pre>
<p>If I delete the "variables" and "operationName" bits in my data dictionary, the request goes through, and I get the obvious complaint about the missing variable.... so, somehow, the problem is in my encoding of the <code>data</code> dictionary.</p>
<p>How can I properly encode graphql queries to use with <code>urllib</code>?</p>
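<p>The 400 comes from the encoding: <code>urllib.parse.urlencode</code> produces form encoding (and flattens the nested dicts to their <code>str()</code>), while the GraphQL endpoint expects an <code>application/json</code> body, which is what <code>requests.post(..., json=data)</code> sends. Build the JSON bytes yourself and set the header. A sketch:</p>

```python
import json
import urllib.request

def graphql_request(url, query, variables=None, operation_name=None):
    """Build a JSON POST request suitable for a GraphQL endpoint."""
    payload = {"query": query}
    if variables is not None:
        payload["variables"] = variables
    if operation_name is not None:
        payload["operationName"] = operation_name
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        method="POST",
    )

req = graphql_request(
    "https://swapi-graphql.netlify.app/.netlify/functions/index",
    "query MyQuery($pid: ID) {\n\tperson(id: $pid) {\n    name\n  }\n}",
    variables={"pid": "cGVvcGxlOjE="},
    operation_name="MyQuery",
)
# urllib.request.urlopen(req) then returns the same JSON as requests.post
```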
| <python><graphql><urllib> | 2023-07-06 16:19:17 | 0 | 5,605 | Him |
76,630,768 | 19,130,803 | Python property not working in match-case | <p>I am trying to access <code>Foo</code> through a <code>property</code> on <code>Bar</code>.</p>
<pre><code>class Foo:
ONE = "one"
TWO = "two"
THREE = "three"
class Bar:
@property
def foo(self):
return Foo
word = "one"
match word:
case Foo.ONE: # This is working
print("GOOD")
case Bar().foo.ONE: # This is not working
print("BEST")
</code></pre>
<p>What am I missing?</p>
| <python><python-3.10> | 2023-07-06 16:17:51 | 1 | 962 | winter |
76,630,640 | 4,551,325 | Pandas rolling-apply to calculate correlations | <p>This question is linked to my other question <a href="https://stackoverflow.com/questions/76628358/python-built-in-function-for-exponential-weighting-with-half-life-parameter?noredirect=1#comment135104793_76628358">here</a>.</p>
<p>I want to calculate rolling exponentially-weighted correlation between two columns in a pandas Dataframe.</p>
<pre><code>import pandas as pd
import numpy as np
n_obs = 10000
window = 260
min_obs = 130
halflife = 130
# create a dummy dataframe
df = pd.DataFrame(np.random.rand(n_obs, 2), columns=['A', 'B'])
# initialize weights
w = 0.5**((window-np.arange(window)-1)/(halflife-1))
# correlation for a single sliding window.
# note because of `min_obs`, df_win might be of various lengths
def _corr_single_window_(df_win):
return df_win.mul(w[-df_win.shape[0]:], axis=0).corr().iloc[0, 1]
# test on the dummy dataframe
df.rolling(window=window, min_periods=min_obs).apply(_corr_single_window_)
</code></pre>
<p>And I got this error:</p>
<blockquote>
<p>DataError: No numeric types to aggregate</p>
</blockquote>
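<p>The error is structural: <code>rolling(...).apply()</code> hands each column to the function one at a time as a 1-D Series/array, so the two-column frame that <code>_corr_single_window_</code> expects never arrives. On pandas ≥ 1.3 you can try <code>df.rolling(window, min_periods=min_obs, method="table").apply(..., raw=True, engine="numba")</code>; otherwise iterate over the windows yourself. A sketch (with a smaller <code>n_obs</code> for brevity):</p>

```python
import numpy as np
import pandas as pd

n_obs, window, min_obs, halflife = 1000, 260, 130, 130
df = pd.DataFrame(np.random.rand(n_obs, 2), columns=['A', 'B'])
w = 0.5 ** ((window - np.arange(window) - 1) / (halflife - 1))

def _corr_single_window_(df_win):
    return df_win.mul(w[-df_win.shape[0]:], axis=0).corr().iloc[0, 1]

# Iterate the (possibly shorter-than-window) slices manually, so the
# function sees both columns at once.
out = pd.Series(np.nan, index=df.index, dtype=float)
for i in range(min_obs - 1, n_obs):
    lo = max(0, i - window + 1)
    out.iloc[i] = _corr_single_window_(df.iloc[lo:i + 1])
```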
| <python><pandas><numpy><rolling-computation> | 2023-07-06 16:00:19 | 1 | 1,755 | data-monkey |
76,630,627 | 2,542,516 | How to ensure that two properties are always overriden together in Python? | <p>I have an abstract parent class as follows:</p>
<pre class="lang-py prettyprint-override"><code>import abc
class Parent(abc.ABC):
def __init__(self):
pass
@property
def prop1(self):
return None
@property
def prop2(self):
return None
</code></pre>
<p>Is there a way to enforce the constraint that when either of <code>prop1</code> or <code>prop2</code> is overriden in an inheriting class, the other is also overriden?</p>
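<p>One way is a <code>__init_subclass__</code> hook on the parent, which runs every time a subclass is defined and can inspect which of the two names were (re)defined below <code>Parent</code> in the MRO. A sketch (trimmed to the relevant parts):</p>

```python
import abc

class Parent(abc.ABC):
    @property
    def prop1(self):
        return None

    @property
    def prop2(self):
        return None

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # "Overridden" = defined in any class below Parent in the MRO.
        below_parent = cls.__mro__[:cls.__mro__.index(Parent)]
        overridden = {name: any(name in klass.__dict__ for klass in below_parent)
                      for name in ("prop1", "prop2")}
        if len(set(overridden.values())) > 1:
            raise TypeError(f"{cls.__name__} must override prop1 and prop2 "
                            f"together (got {overridden})")

class Good(Parent):
    prop1 = property(lambda self: 1)
    prop2 = property(lambda self: 2)

try:
    class Bad(Parent):          # overrides only one of the pair
        prop1 = property(lambda self: 1)
    bad_rejected = False
except TypeError:
    bad_rejected = True

print(Good().prop1, bad_rejected)  # 1 True
```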
| <python><inheritance> | 2023-07-06 15:58:13 | 1 | 2,937 | Priyatham |
76,630,604 | 3,017,906 | Symbolic polynomial factorization in Sympy/Python | <p>I would like to see whether it is possible to get single-variable polynomials, given as a string of coefficient/variable degrees, into a factorized form. The coefficients of these polynomials are to be "symbols" and so have to appear in the factors in the output of the function.</p>
<p>So, if I gave as input a polynomial in the form</p>
<pre><code>x**2-c**2
</code></pre>
<p>I would expect the function to give back:</p>
<pre><code>(x+c)*(x-c)
</code></pre>
<p>However, in <code>sympy</code> I cannot make it work for anything more than trivial. If I first declare:</p>
<pre><code>from sympy import *
x, a, b, c = symbols('x a b c')
</code></pre>
<p>Then something as simple as:</p>
<pre><code>factor(x**2-a**2)
</code></pre>
<p>Works:</p>
<p>(−𝑎+𝑥)(𝑎+𝑥)</p>
<p>But already:</p>
<pre><code>factor(x**2-a)
</code></pre>
<p>Fails:</p>
<p>−𝑎+𝑥²</p>
<p>Of course, even slightly more complicated polynomials like a full second order won't work.</p>
<pre><code>factor(-x**2-a*x-3,x)
</code></pre>
<p>gives out:</p>
<p>−𝑎𝑥−𝑥²−3</p>
<p>Am I just misusing the function?</p>
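<p>Not quite misusing it, but expecting more than it promises: <code>factor()</code> factors over the rationals, treating every symbol as a polynomial generator. <code>x**2 - a**2</code> is a rational factorization in the two generators <code>x, a</code>, but <code>x**2 - a</code> only factors once <code>sqrt(a)</code> is adjoined, and <code>factor</code> will not introduce algebraic extensions on its own. One workaround is to build the factorization from the roots (a sketch; <code>factor_over_roots</code> is a hypothetical helper, not a SymPy builtin, and only works when <code>roots</code> can solve the polynomial):</p>

```python
from sympy import symbols, roots, expand, Mul, LC

x, a = symbols('x a')

def factor_over_roots(poly, var):
    """Hypothetical helper: factor a univariate polynomial with
    symbolic coefficients via its roots."""
    rs = roots(poly, var)                     # {root: multiplicity}
    lead = LC(poly, var)                      # leading coefficient
    return lead * Mul(*[(var - r) ** m for r, m in rs.items()])

f1 = factor_over_roots(x**2 - a, x)           # (x - sqrt(a))*(x + sqrt(a))
f2 = factor_over_roots(-x**2 - a*x - 3, x)    # full symbolic quadratic
print(f1)
print(f2)
```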
| <python><sympy> | 2023-07-06 15:54:54 | 2 | 1,343 | Michele Ancis |
76,630,592 | 2,483,642 | Create empty pandas dataframe from pandera DataFrameModel | <p>Is there a way to create an <strong>empty pandas dataframe</strong> from a <strong>pandera schema</strong>?</p>
<p>Given the following schema, I would like to get an empty dataframe as shown below:</p>
<pre><code>import pandas as pd
import pandera as pa
from pandera.typing import Series, DataFrame
class MySchema(pa.DataFrameModel):
state: Series[str]
city: Series[str]
price: Series[int]
def get_empty_df_of_schema(schema: pa.DataFrameModel) -> pd.DataFrame:
pass
wanted_result = pd.DataFrame(
columns=['state', 'city', 'price']
).astype({'state': str, 'city': str, 'price': int})
wanted_result.info()
</code></pre>
<p>Desired result:</p>
<pre><code>Index: 0 entries
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 state 0 non-null object
1 city 0 non-null object
2 price 0 non-null int64
</code></pre>
<hr />
<p>Edit:</p>
<p>Found a working solution:</p>
<pre><code>def get_empty_df_of_pandera_model(model: [DataFrameModel, MetaModel]) -> pd.DataFrame:
schema = model.to_schema()
column_names = list(schema.columns.keys())
data_types = {column_name: column_type.dtype.type.name for column_name, column_type in schema.columns.items()}
return pd.DataFrame(columns=column_names).astype(data_types)
</code></pre>
| <python><pandas><pandera> | 2023-07-06 15:52:44 | 2 | 427 | MJA |
76,630,555 | 3,416,725 | Airflow API POST trigger new dag run conf splits payload | <p>I am using <code>requests</code> to POST to trigger a new DAG run in Airflow.</p>
<p>I set the <code>conf</code> payload accordingly. But when it gets passed to my script that is executed in Airflow, the JSON string has somehow been split apart. I have a workaround where I join it back together, but surely this should not happen.</p>
<p>For example in my python code I make the request like so:</p>
<pre><code>data = {
'my_data': {
'id': '1',
'firstName': 'John',
'lastName': 'Smith'
}
}
body = {
"conf": {
"data": data
},
}
headers = {
'accept': 'application/json',
'Content-Type': 'application/json',
}
req = requests.post(
url=f"https://example_url.com/api/v1/dags/my_dag_id/dagRuns",
data=json.dumps(body),
auth=("admin", os.getenv("AIRFLOW_PASSWORD")),
headers=headers
)
</code></pre>
<p>This posts fine and triggers the new dag.</p>
<p>My python script then gets passed the <code>conf</code> via an Airflow K8 pod operator like so:</p>
<pre><code>processor = KubernetesPodOperator(
image="my_image_url",
cmds=["bash"],
arguments=["-c", "python3 -m processer {{ dag_run.conf['data'] }}"],
name="processor",
task_id="processor",
container_resources=resources_w_gpu,
**common_options
)
</code></pre>
<p>Then the script executes from <code>__main__</code> like so:</p>
<pre><code>if __name__ == "__main__":
if len(sys.argv) >= 2:
try:
logger.info(sys.argv[1:])
data = json.loads(sys.argv[1:])
main(data)
except Exception as e:
traceback.print_exc()
logger.error(str(e))
</code></pre>
<p>But currently this throws a type error for the <code>json.loads(sys.argv[1:])</code>:</p>
<p><code>TypeError: the JSON object must be str, bytes or bytearray, not list</code></p>
<p>So I added the logger above to see what is going on and it logged this:</p>
<p><code>['{my_data:', '{id:', '1,', 'firstName:', 'John,', 'lastName:', 'Smith']</code></p>
<p>It seems it has split somehow on the commas.</p>
<p>A way I hacked it to work was use <code>replace('"', '\\"')</code> here:</p>
<pre><code>body = {
"conf": {
"data": json.dumps(data).replace('"', '\\"')
},
}
</code></pre>
<p>Then back in my script in the main block I joined like so:</p>
<pre><code>if __name__ == "__main__":
if len(sys.argv) >= 2:
data = json.loads(" ".join(sys.argv[1:]))
main(data)
</code></pre>
<p>But surely airflow should handle this better?</p>
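<p>The split happens in the shell, not in Airflow's API: <code>{{ dag_run.conf['data'] }}</code> renders the dict via <code>str()</code> with no quoting, and <code>bash -c</code> then word-splits it into many <code>sys.argv</code> entries. Rendering real JSON and shell-quoting it makes the payload arrive as a single argument. A self-contained sketch of both behaviours:</p>

```python
import json
import shlex

conf = {"my_data": {"id": "1", "firstName": "John", "lastName": "Smith"}}

# What "{{ dag_run.conf['data'] }}" renders: str(dict), unquoted, so the
# shell splits it on whitespace into pieces like the ones in the logs:
rendered_bad = f"python3 -m processer {conf}"
print(shlex.split(rendered_bad)[3:])  # ['{my_data:', '{id:', '1,', ...]

# Render real JSON and shell-quote it so it arrives as one argv entry:
rendered_good = f"python3 -m processer {shlex.quote(json.dumps(conf))}"
argv = shlex.split(rendered_good)
data = json.loads(argv[3])            # whole payload, no joining required
```

<p>In the operator, the equivalent (hedged, since filter availability depends on your Airflow/Jinja version) is quoting the template and using <code>tojson</code>, e.g. <code>arguments=["-c", "python3 -m processer '{{ dag_run.conf['data'] | tojson }}'"]</code>, after which the script can simply do <code>json.loads(sys.argv[1])</code>.</p>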
| <python><airflow> | 2023-07-06 15:47:46 | 1 | 493 | mp252 |
76,630,471 | 3,446,051 | Coverage.py returns error "No source for code:..." when running tensorflow.keras.model.fit function | <p>I have built unittests for my code and everything works fine when running them from vscode. Even running <code>coverage run</code> runs successfully.
But when I try to run <code>coverage report</code> I get the following output:</p>
<pre><code>No source for code: 'C:\Users\XXX\AppData\Local\Temp\1\__autograph_generated_file4ilniiln.py'
</code></pre>
<p>I found out that this happens exactly when I add a test case which contains <code>tensorflow.keras.Model.fit</code> function. If I remove <code>tensorflow.keras.Model.fit</code> then this message does not appear for <code>coverage report</code> command.</p>
<p>How can I solve this issue?</p>
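<p>A hedged suggestion (I have not confirmed the exact glob for your setup): the crash is coverage failing to re-read the AutoGraph-generated temp source at report time. Telling coverage to skip unreadable sources, or to omit those generated files entirely, usually clears it:</p>

```ini
; .coveragerc (assumption: adjust the glob to the temp-file naming you see)
[report]
ignore_errors = True

[run]
omit =
    *__autograph_generated_file*
```

<p>The same keys can live under <code>[tool.coverage.report]</code> / <code>[tool.coverage.run]</code> in <code>pyproject.toml</code>.</p>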
| <python><unit-testing><tensorflow><keras><coverage.py> | 2023-07-06 15:35:13 | 1 | 5,459 | Code Pope |
76,630,414 | 3,357,270 | How can I mock dynamic attributes using django test case? | <p>I have some functionality that makes a request to a third party. Within that request are some values that are dynamically created—for example, a date.</p>
<pre><code># example.py
...
payload = json.dumps({
    'id': uuid.uuid4(),
    'name': 'example',
    'date': datetime.now(),
    'foo': 'bar'
})
...
response = self.make_request(url, params=None, data=payload, method='post')
</code></pre>
<p>The functionality works great. Now I am attempting to write a unit test.</p>
<pre><code># test_example.py
@mock.patch('path.to.module.make_request')
def test_successfully_does_stuff(self, make_request_mock):
    make_request_mock.return_value = {
        'status': 'OK',
        'success': True,
    }
    self.make_request()
    ...
    make_request_mock.assert_called_with('https://example.com', params=None, data=json.dumps({
        'id': uuid.uuid4(),
        'name': 'example',
        'date': datetime.now(),
        'foo': 'bar'
    }))
</code></pre>
<p>Asserting <code>called_once</code> works great:</p>
<pre><code>make_request_mock.assert_called_once()
</code></pre>
<p>What I would like to assert is that the <code>make_request</code> call has all of the necessary attributes. Meaning I'd like to assert that <code>id</code>, <code>name</code>, <code>date</code> are included in the request. If I add a new attribute—or accidentally remove an attribute in the code—I'd like my test to pick that up.</p>
<p>When I run this test, I get an error that my expected and actual data doesn't match. This makes sense as <code>datetime.now()</code> and <code>uuid4()</code> will render different values when they are called.</p>
<p>For example, here is the result from date:</p>
<pre><code>Expected: "date": "2023-07-05T20:00:28.130604"
Actual: "date": "2023-07-05T20:40:51.575151",
</code></pre>
<p>I realize that the mock data is completely arbitrary and it could be anything I want. However I feel it may be important to test this. Is there a way to assert that my call has the necessary attributes?</p>
<p><strong>EDIT</strong></p>
<p>For clarification, it's the object attributes I want to assert are there, the actual values aren't as important.</p>
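<p>To make the intent concrete, here is a standalone sketch (with made-up values) of the kind of check I'm after: capture the actual call and assert on the payload's keys rather than on the dynamic values themselves:</p>

```python
import json
from unittest import mock

# Stand-in for the patched method, called the way the production code
# would call it (the values here are arbitrary).
make_request = mock.Mock()
make_request('https://example.com', params=None, method='post',
             data=json.dumps({'id': 'some-uuid', 'name': 'example',
                              'date': '2023-07-05T20:00:28', 'foo': 'bar'}))

# Inspect the captured call instead of rebuilding the exact payload.
args, kwargs = make_request.call_args
sent = json.loads(kwargs['data'])
assert set(sent) == {'id', 'name', 'date', 'foo'}
```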
| <python><unit-testing><django-unittest> | 2023-07-06 15:27:56 | 1 | 4,544 | Damon |
76,630,308 | 15,439,115 | confusion in async flow using python | <pre class="lang-py prettyprint-override"><code>import asyncio
import json
def save_settings(event, context):
    async def background_processing():
        import time
        time.sleep(2)
        print("123")
        time.sleep(1)
        print(1)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(background_processing())
    loop.close()

    # Return the response immediately
    response = {
        'statusCode': 200,
        'body': json.dumps({
            'message': 'Background process initiated successfully'
        })
    }
    return response
</code></pre>
<p>Actually, I want to run this function in the background and return a success response immediately. Only this code should run in the background:</p>
<pre class="lang-py prettyprint-override"><code>async def background_processing():
    import time
    time.sleep(2)
    print("123")
    time.sleep(1)
    print(1)
</code></pre>
<p>But when I run it, all of the code executes in sequence and I never get the JSON response back immediately.</p>
| <python><python-3.x><async-await> | 2023-07-06 15:15:26 | 0 | 309 | Ninja |
76,630,196 | 1,219,317 | How to fix random seed in pytorch, while keeping the dropout randomness? | <p>I am trying to approximate a Bayesian model by keeping dropout active during both training and inference (Monte Carlo dropout), in order to obtain the epistemic uncertainty of the model.</p>
<p>Is there a way to fix all sources of randomness for reproducibility (random seed), but maintain the randomness of the dropout?</p>
<pre><code># Set random seed for reproducibility
seed = 123
torch.manual_seed(seed)
random.seed(seed)
np.random.seed(seed)
# Training and Inference phase (with dropout)
dropout_mask = torch.bernoulli(torch.full_like(input, 1 - self.dropout))
skip = self.skip0(input * dropout_mask / (1 - self.dropout))
for i in range(self.layers):
    residual = x
    filter = self.filter_convs[i](x)
    filter = torch.tanh(filter)
    gate = self.gate_convs[i](x)
    gate = torch.sigmoid(gate)
    x = filter * gate
    dropout_mask = torch.bernoulli(torch.full_like(x, 1 - self.dropout))
    x = x * dropout_mask / (1 - self.dropout)
    s = x
    s = self.skip_convs[i](s)
    skip = s + skip
    if self.gcn_true:
        x = self.gconv1[i](x, adp) + self.gconv2[i](x, adp.transpose(1, 0))
    else:
        x = self.residual_convs[i](x)
    x = x + residual[:, :, :, -x.size(3):]
    if idx is None:
        x = self.norm[i](x, self.idx)
    else:
        x = self.norm[i](x, idx)

skip = self.skipE(x) + skip
x = F.relu(skip)
x = F.relu(self.end_conv_1(x))
x = self.end_conv_2(x)
return x
</code></pre>
<p>The above code produces the same result every time, which is not what I am trying to do.</p>
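<p>For illustration, what I would like is something along these lines (just a sketch; I am assuming a separate, independently seeded <code>torch.Generator</code> could be used for the masks, but I don't know if that is the right approach):</p>

```python
import torch

torch.manual_seed(123)  # fixes weights, data shuffling, etc.

# A separate generator for the dropout masks, re-seeded from system
# entropy so it is NOT tied to the fixed global seed.
g = torch.Generator()
g.seed()

x = torch.ones(4, 4)
mask_a = torch.bernoulli(torch.full_like(x, 0.5), generator=g)
mask_b = torch.bernoulli(torch.full_like(x, 0.5), generator=g)
# Everything drawn from the default generator stays reproducible,
# while the masks change from run to run.
```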
| <python><pytorch><bayesian><montecarlo><dropout> | 2023-07-06 15:02:29 | 2 | 2,281 | Travelling Salesman |
76,630,107 | 188,740 | VSCode's Jupyter Notebook: environment variables mysteriously loads from .env | <p>I'm using VSCode's Jupyter Notebook extension. I'm trying to figure out how environment variables get magically loaded from an .env file. I'm not using <code>python-dotenv</code> or doing anything special to load these variables, but it just seems to work.</p>
<p>Here's what I did.</p>
<pre><code>mkdir testing
cd testing
python -m venv env
source env/bin/activate
touch .env
</code></pre>
<p>Inside of a new <code>.env</code> file, I added this:</p>
<pre><code>FOO=bar
</code></pre>
<p>Then inside of a notebook, magic:</p>
<p><a href="https://i.sstatic.net/e1q7Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e1q7Y.png" alt="enter image description here" /></a></p>
<p>Anyone know how this is possible? Is this only a VSCode thing (i.e. open this notebook in another tool and env variables won't load)?</p>
<p>Tested in macOS and Windows and they both behave the same.</p>
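<p>For context, what I expected to need (and deliberately did not do) is an explicit load step, roughly like this hypothetical sketch of what <code>python-dotenv</code> does:</p>

```python
import os
import tempfile

# Simulate an .env file and the explicit parse-and-load step I expected
# to be required before os.environ would see the variable.
with tempfile.TemporaryDirectory() as d:
    env_path = os.path.join(d, ".env")
    with open(env_path, "w") as f:
        f.write("FOO=bar\n")
    for line in open(env_path):
        key, _, value = line.strip().partition("=")
        os.environ[key] = value

print(os.environ["FOO"])  # bar
```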
| <python><visual-studio-code><jupyter-notebook><environment-variables> | 2023-07-06 14:52:49 | 1 | 57,942 | Johnny Oshika |
76,630,012 | 5,132,559 | python-pptx insert picture in PlaceholderPicture | <p>I'm trying to insert a picture into a pptx placeholder.</p>
<pre><code>for shape in presentation.slides[(len(presentation.slides) - 1)].placeholders:
    print(shape.name)
    print(shape.shape_type)
    print(shape.placeholder_format.type)
    if "Bild" in shape.name:
        shape.insert_picture(os.path.join(self.image_folder, "media_value.png"))
</code></pre>
<p>my output:</p>
<pre><code>Bildplatzhalter 17
PLACEHOLDER (14)
PICTURE (18)
</code></pre>
<p>the error:</p>
<pre><code>shape.insert_picture(os.path.join(self.image_folder, "media_value.png"))
AttributeError: 'PlaceholderPicture' object has no attribute 'insert_picture'
</code></pre>
<p>From the <a href="https://python-pptx.readthedocs.io/en/latest/dev/analysis/placeholders/slide-placeholders/picture-placeholder.html" rel="nofollow noreferrer">documentation</a>, it should work like this.
What am I doing wrong?</p>
<p><em>python 3.9</em><br />
<em>python-pptx 0.6.21</em></p>
| <python><python-pptx> | 2023-07-06 14:42:24 | 1 | 1,562 | Tobias |
76,629,944 | 3,176,696 | performant writes to apache Iceberg | <p>I'm trying to achieve performant record writes from pandas (or ideally Polars if possible) in a Python environment to our Apache Iceberg deployment (with hive metastore) directly, or via Trino query engine based on which is the most performant.</p>
<p>Given what I've already tried, my current record-write throughput is still pretty rubbish. Can someone please point me in the right direction for getting performant write connections directly to Iceberg (or via Trino, if that is expected to go through a query engine)? I'm also not finding documentation that gives guidance here.</p>
<p>My attempts are as follows:</p>
<ul>
<li>Originally via <a href="https://github.com/trinodb/trino-python-client" rel="nofollow noreferrer">Trino python package</a> to parse the content of a pandas df into batches of sql syntax queries. - clunky and slowest</li>
<li>Then dug around and found an <a href="https://pypi.org/project/aiotrino/" rel="nofollow noreferrer">async Trino python package</a> which would at least allow me to make the calls with asyncio and benefit from call concurrency based on the number of Trino workers deployed.</li>
<li>Utilized Trino's jdbc connector type, with the underlying trino-jdbc.jar and python's <em>jaydebeapi package</em>, to see if a performance improvement could be expected from the underlying Java runtime execution - data read successful, but write unsuccessful</li>
<li>Iceberg direct connection via <em>pySpark</em> - connection issues described under this <a href="https://stackoverflow.com/questions/76014842/pyspark-read-iceberg-table-via-hive-metastore-onto-s3">post</a></li>
<li>pyIceberg as of version 0.4.0 still does not contain record writing functionality from what I could gather from <a href="https://py.iceberg.apache.org/" rel="nofollow noreferrer">documentation</a> or digging in the package</li>
<li>Stumbled onto this approach with <strong>pandas.to_sql()</strong> method - best so far but still not nearly enough</li>
</ul>
<blockquote>
<p>In one discussion thread I also read that a dataframe can be uploaded to S3 directly in Parquet format and then linked to the relevant Iceberg catalog table via metadata calls (which makes sense compression- and transfer-wise)... but I haven't been able to get this working either.</p>
</blockquote>
<p>The code snippet of <code>pandas.to_sql()</code> via Trino is the only one worth sharing:</p>
<pre class="lang-py prettyprint-override"><code>import warnings
# Ignore all warnings
warnings.filterwarnings("ignore")
import logging
logging.getLogger().setLevel(logging.DEBUG)
from sqlalchemy import create_engine
from trino.auth import BasicAuthentication
import pandas as pd
from datetime import datetime
trino_target_host = "..."
trino_target_port = 443
trino_target_catalog = 'iceberg'
trino_target_schema = '...'
def write_content(table: str, df, batch_size: int):
    print(f"size: {len(df)}")
    try:
        start = datetime.utcnow()
        engine = create_engine(
            f"trino://{trino_target_host}:{trino_target_port}/{trino_target_catalog}/{trino_target_schema}",
            connect_args={
                "auth": BasicAuthentication("...", "..."),
                "http_scheme": "https"
            }
        )
        # chunksize = writing df in batches of size - saving memory
        # method = instead of writing a single record at a time, 'multi' will insert multiple rows as 1 statement
        output = df.to_sql(table, engine, if_exists='append', index=False, chunksize=batch_size, method='multi')
        print(f"elapsed: {datetime.utcnow() - start}")
        return output
    except Exception as e:
        if 'Request Entity Too Large' in str(e):
            print(f"Batch size '{batch_size}' too large. Reduce size")
            return None
        print(f"failed: {e}")
        return None
sample = df[:70000].copy(deep=True)
write_content('table_name', sample, 1000)
</code></pre>
<blockquote>
<p>size: 70000</p>
<p>EXECUTE IMMEDIATE not available for trino.dp.iotnxt.io:443; defaulting to legacy prepared statements (TrinoUserError(type=USER_ERROR, name=SYNTAX_ERROR, message="line 1:19: mismatched input ''SELECT 1''. Expecting: 'USING', ", query_id=20230629_161633_03350_rptf8))</p>
<p>elapsed: 0:14:05.315631</p>
</blockquote>
<p>This is very deployment-specific, but just to give a relative idea in comparison:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>count</th>
<th>Read</th>
<th>write (batch = 1000)</th>
<th>write (batch = 1500)</th>
</tr>
</thead>
<tbody>
<tr>
<td>10</td>
<td></td>
<td>3.67 / 3.7 / 3.6 secs</td>
<td></td>
</tr>
<tr>
<td>100</td>
<td>2.9 / 4.0 / 2.8 / 2.2 sec</td>
<td>4.4 / 4.2 / 4.1 secs</td>
<td></td>
</tr>
<tr>
<td>1000</td>
<td>2.2 / 3.8 / 2.4 secs</td>
<td>11.2 / 11.2 / 11.0 secs</td>
<td></td>
</tr>
<tr>
<td>10 000</td>
<td>3.2 / 2.8 / 2.9 secs</td>
<td>118.4 / 105.7 / 108.8 sec</td>
<td>98.2 / 104.6 / 108.5 secs</td>
</tr>
<tr>
<td>70 000</td>
<td>6.8 / 6.5 / 7.2 secs</td>
<td>845.31 secs</td>
<td></td>
</tr>
<tr>
<td>140 000</td>
<td>9.8 secs</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div> | <python><bigdata><trino><apache-iceberg> | 2023-07-06 14:34:08 | 0 | 906 | Paul |
76,629,925 | 2,178,774 | Subplots for shap.plots.bar() plots? | <p>I'd like to add the <code>shap.plots.bar</code> (<a href="https://github.com/slundberg/shap" rel="nofollow noreferrer">https://github.com/slundberg/shap</a>) figure to a subplot. Something like this...</p>
<pre class="lang-py prettyprint-override"><code>fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(8,20))
for (X, y) in [(x1, y1), (x2, y2)]:
    model = xgboost.XGBRegressor().fit(X, y)
    explainer = shap.Explainer(model, check_additivity=False)
    shap_values = explainer(X, check_additivity=False)
    shap.plots.bar(shap_values, max_display=6, show=False)  # ax=ax ??
plt.show()
</code></pre>
<p>However, <code>shap.plots.bar</code> does not accept an <code>ax</code> argument, unlike some other plotting methods such as <code>shap.dependence_plot(..., ax=ax[0, 0], show=False)</code>. Is there a way to add many bar plots to a subplot?</p>
| <python><matplotlib><xgboost><shap> | 2023-07-06 14:32:20 | 2 | 516 | There |
76,629,738 | 422,953 | How do I pass the same kwargs to all classes I inherit from? | <p>I am trying to write an <code>__init__</code> method for a class such that all the classes it derives from get the same set of keyword arguments. Let's say,</p>
<pre class="lang-py prettyprint-override"><code>class First(object):
    def __init__(self, *args, **kwargs):
        super(First, self).__init__()
        print('First', kwargs)

class Second(object):
    def __init__(self, *args, **kwargs):
        super(Second, self).__init__()
        print('Second', kwargs)

class Third(First, Second):
    def __init__(self, *args, **kwargs):
        super(Third, self).__init__(*args, **kwargs)
        print('Third', kwargs)
</code></pre>
<p>If I instantiate <code>Third</code> with a bunch of keywords, I can see that they all get "taken out" by the <code>__init__</code> method of <code>First</code>,</p>
<pre class="lang-ipython prettyprint-override"><code>In [5]: tc = Third(key='value', someother='nothing')
Second {}
First {'key': 'value', 'someother': 'nothing'}
Third {'key': 'value', 'someother': 'nothing'}
</code></pre>
<p>I understand that this is the expected behavior, so I'm not complaining. However, is there a way to write this class hierarchy such that <strong>both</strong> <code>First</code> and <code>Second</code> get the same set of <code>kwargs</code>? More generally, I'd like to write the hierarchy such that if <code>Third</code> derives from <code>N</code> classes, all <code>N</code> of those should get the same set of <code>kwargs</code> unless one of them explicitly pops one of the keys.</p>
<p>I have looked at other questions such as <a href="https://stackoverflow.com/q/75078418/422953">this one</a>, and answers have pointed out that that's not the default behaviour. I get that. But, <strong>is there</strong> a way to do it?</p>
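<p>For what it's worth, the closest I've come is the cooperative pattern below (a sketch, and I'm not sure it's the idiomatic solution): every class forwards the kwargs up the MRO, and a shared base class swallows them before they reach <code>object</code>:</p>

```python
class Base:
    def __init__(self, *args, **kwargs):
        # Swallow whatever is left before reaching object.__init__,
        # which accepts no extra arguments.
        super().__init__()

class First(Base):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)  # forward instead of dropping
        self.first_kwargs = dict(kwargs)

class Second(Base):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.second_kwargs = dict(kwargs)

class Third(First, Second):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

t = Third(key='value', someother='nothing')
print(t.first_kwargs == t.second_kwargs)  # True: both saw the full kwargs
```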
| <python><inheritance><class-hierarchy> | 2023-07-06 14:11:51 | 0 | 340 | TM5 |
76,629,307 | 7,082,564 | Find minimum and maximum values in OHLC data | <p>I would like to find (in python) the local minimum and maximum values in OHLC data, under the condition that the distance between these values is at least +-5%.</p>
<p><strong>Temporal Condition</strong></p>
<p>Note that</p>
<ul>
<li>for an UP movement (close>open), <code>low</code> price comes BEFORE <code>high</code> price</li>
<li>for a DOWN movement (close<open), <code>low</code> price comes AFTER <code>high</code> price</li>
</ul>
<p>The best way to explain what I would like to achieve is by a graphical example:</p>
<p><a href="https://i.sstatic.net/H1KXA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H1KXA.png" alt="enter image description here" /></a></p>
<p>OHLC data is in this format:</p>
<pre class="lang-none prettyprint-override"><code>open_time open high low close
2023-07-02 0.12800000 0.12800000 0.12090000 0.12390000
2023-07-03 0.12360000 0.13050000 0.12220000 0.12830000
2023-07-04 0.12830000 0.12830000 0.12320000 0.12410000
2023-07-05 0.12410000 0.12530000 0.11800000 0.11980000
2023-07-06 0.11990000 0.12270000 0.11470000 0.11500000
</code></pre>
<p>The result should be something like:</p>
<pre class="lang-none prettyprint-override"><code>date1 val1 date2 val2 <---up
date2 val2 date3 val3 <---down
date3 val3 date4 val4 <---up
date4 val4 date5 val5 <---down
.
.
.
</code></pre>
<p>As for the data in the example the result should be:</p>
<pre class="lang-none prettyprint-override"><code>2023-07-02 0.1280 2023-07-02 0.1209 -5.55%
2023-07-02 0.1209 2023-07-03 0.1305 7.94%
2023-07-03 0.1305 2023-07-06 0.1147 -12.11%
</code></pre>
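<p>where the percentage in the last column is just the plain relative change between consecutive extrema, i.e.:</p>

```python
def pct_change(a, b):
    """Relative change from a to b, in percent."""
    return (b - a) / a * 100

print(round(pct_change(0.1280, 0.1209), 2))  # -5.55
print(round(pct_change(0.1209, 0.1305), 2))  # 7.94
print(round(pct_change(0.1305, 0.1147), 2))  # -12.11
```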
<p>Is there a name for this task?</p>
<hr />
<p><strong>ADDENDUM</strong></p>
<p>I add a new example, with a different condition (+-3%).</p>
<p>This is the data:</p>
<pre class="lang-none prettyprint-override"><code>2022-02-25 38340.4200 39699.0000 38038.4600 39237.0600
2022-02-26 39237.0700 40300.0000 38600.4600 39138.1100
2022-02-27 39138.1100 39881.7700 37027.5500 37714.4300
2022-02-28 37714.4200 44200.0000 37468.2800 43181.2700
2022-03-01 43176.4100 44968.1300 42838.6800 44434.0900
</code></pre>
<p>And the final result should be:</p>
<pre class="lang-none prettyprint-override"><code>2022-02-25 38038 2022-02-26 40300 5.95%
2022-02-26 40300 2022-02-26 38600 -4.22%
2022-02-26 38600 2022-02-27 39881 3.32%
2022-02-27 39881 2022-02-27 37027 -7.16%
2022-02-27 37027 2022-02-28 44200 19.37%
2022-02-28 44200 2022-03-01 42838 -3.08%
</code></pre>
| <python><python-3.x><time><time-series><ohlc> | 2023-07-06 13:20:58 | 3 | 346 | soo |
76,629,269 | 2,225,190 | Confusing documentation from Python Boto3 to create bucket? | <p>As per the Request Syntax in the link below, we can pass the ACL parameter to the create_bucket method with ACL set to 'public-read'.</p>
<p><a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/create_bucket.html" rel="nofollow noreferrer">https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/create_bucket.html</a></p>
<p>But when I pass it, I get the following error:</p>
<blockquote>
<p>botocore.exceptions.ClientError: An error occurred (InvalidBucketAclWithBlockPublicAccessError) when calling the CreateBucket operation: Bucket cannot have public ACLs set with BlockPublicAccess enabled</p>
</blockquote>
<p>If "public-read" can raise that error, why is that option mentioned in the documentation at all? Should we simply call "put_public_access_block" and then "put_bucket_acl" instead?</p>
<p>Below is a code sample of what I tried:</p>
<pre><code>def create_bucket(bucket_name, acl):
    bucket = boto3.client('s3')
    response = bucket.create_bucket(
        Bucket=bucket_name,
        ObjectOwnership='BucketOwnerPreferred',
        ACL=acl,
        CreateBucketConfiguration={
            'LocationConstraint': 'us-west-1',
        }
    )

create_bucket('sample_bucket', 'public-read')
</code></pre>
<p><strong>Account level setting for block public access</strong>
<a href="https://i.sstatic.net/kLGoL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kLGoL.png" alt="Account level setting for block public access" /></a></p>
| <python><amazon-web-services><amazon-s3><boto3> | 2023-07-06 13:17:24 | 1 | 549 | user2225190 |
76,629,191 | 9,640,238 | ArrowInvalid: Cannot locate timezone 'UTC': Timezone database not found | <p>I'm starting to experiment with pyarrow, and I'm hitting a strange error when writing a CSV file. Say I have this CSV input as <code>dates.csv</code>:</p>
<pre><code>dates
2022-10-04T15:52:25.000Z
2022-03-29T08:08:13.000Z
2023-01-05T19:24:13.000Z
2020-12-04T18:56:30.000Z
</code></pre>
<p>Now, if I just try to load and write back to CSV, here's what I get:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: from pyarrow import csv
In [2]: t = csv.read_csv("dates.csv")
In [3]: t
Out[3]:
pyarrow.Table
dates: timestamp[ns, tz=UTC]
----
dates: [[2022-10-04 15:52:25.000000000,2022-03-29 08:08:13.000000000,2023-01-05 19:24:13.000000000,2020-12-04 18:56:30.000000000]]
In [4]: csv.write_csv(t, "out.csv")
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
Cell In[4], line 1
----> 1 csv.write_csv(t, "out.csv")
File c:\Users\user\miniconda3\envs\py311\Lib\site-packages\pyarrow\_csv.pyx:1483, in pyarrow._csv.write_csv()
File c:\Users\user\miniconda3\envs\py311\Lib\site-packages\pyarrow\error.pxi:100, in pyarrow.lib.check_status()
ArrowInvalid: Cannot locate timezone 'UTC': Timezone database not found at "C:\Users\user\Downloads\tzdata"
</code></pre>
<p>Now, I see <a href="https://github.com/apache/arrow/issues/35600" rel="nofollow noreferrer">here</a> that it's hard coded that the timezone database be located in the profile's <code>Downloads</code> folder (on Windows). Not ideal, but workable, if I can find what exactly I need to place in that folder. Any hint?</p>
<p>Alternatively, I guess I could remove the timezone from the <code>timestamp</code> column, but I couldn't find how it's done in pyarrow.</p>
<p>In the end, I hope the backend will be updated so the location is no longer hard coded and the database is installed along with pyarrow.</p>
| <python><datetime><timezone><pyarrow><apache-arrow> | 2023-07-06 13:07:42 | 0 | 2,690 | mrgou |
76,629,079 | 11,460,896 | PydanticUserError: A non-annotated attribute was detected in Airflow db init command | <p>I'm trying to run the <code>airflow db init</code> command in my Airflow project, but I'm encountering the following error:</p>
<pre><code>(venv) ➜ aflow airflow standalone
/home/ugur/aflow/venv/lib/python3.10/site-packages/pydantic/_internal/_config.py:261 UserWarning: Valid config keys have changed in V2:
* 'orm_mode' has been renamed to 'from_attributes'
/home/ugur/aflow/venv/lib/python3.10/site-packages/pydantic/_internal/_config.py:261 UserWarning: Valid config keys have changed in V2:
* 'orm_mode' has been renamed to 'from_attributes'
Traceback (most recent call last):
File "/home/ugur/aflow/venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/ugur/aflow/venv/lib/python3.10/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/home/ugur/aflow/venv/lib/python3.10/site-packages/airflow/cli/cli_config.py", line 51, in command
func = import_string(import_path)
File "/home/ugur/aflow/venv/lib/python3.10/site-packages/airflow/utils/module_loading.py", line 36, in import_string
module = import_module(module_path)
File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/ugur/aflow/venv/lib/python3.10/site-packages/airflow/cli/commands/standalone_command.py", line 35, in <module>
from airflow.jobs.scheduler_job_runner import SchedulerJobRunner
File "/home/ugur/aflow/venv/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 58, in <module>
from airflow.models.serialized_dag import SerializedDagModel
File "/home/ugur/aflow/venv/lib/python3.10/site-packages/airflow/models/serialized_dag.py", line 34, in <module>
from airflow.serialization.serialized_objects import DagDependency, SerializedDAG
File "/home/ugur/aflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 57, in <module>
from airflow.serialization.pydantic.dag_run import DagRunPydantic
File "/home/ugur/aflow/venv/lib/python3.10/site-packages/airflow/serialization/pydantic/dag_run.py", line 24, in <module>
from airflow.serialization.pydantic.dataset import DatasetEventPydantic
File "/home/ugur/aflow/venv/lib/python3.10/site-packages/airflow/serialization/pydantic/dataset.py", line 40, in <module>
class TaskOutletDatasetReferencePydantic(BaseModelPydantic):
File "/home/ugur/aflow/venv/lib/python3.10/site-packages/pydantic/_internal/_model_construction.py", line 95, in __new__
private_attributes = inspect_namespace(
File "/home/ugur/aflow/venv/lib/python3.10/site-packages/pydantic/_internal/_model_construction.py", line 328, in inspect_namespace
raise PydanticUserError(
pydantic.errors.PydanticUserError: A non-annotated attribute was detected: `dag_id = <class 'str'>`. All model fields require a type annotation; if `dag_id` is not meant to be a field, you may be able to resolve this error by annotating it as a `ClassVar` or updating `model_config['ignored_types']`.
For further information visit https://errors.pydantic.dev/2.0.2/u/model-field-missing-annotation
</code></pre>
<p>How can I resolve this issue?</p>
<p>python --version: Python 3.10.12<br />
airflow version: 2.6.2</p>
| <python><airflow><virtualenv><pydantic> | 2023-07-06 12:56:04 | 2 | 307 | birdalugur |
76,629,004 | 2,163,392 | Unable to serialize to JSON. Unrecognized type Eagertensor when using ModelCheckpoint callback and validation data | <p>I am trying to replicate the <a href="https://keras.io/examples/vision/vit_small_ds/" rel="nofollow noreferrer">Keras example of Vision Transformers with small datasets</a>. The differences between my code and the original code are that I added and called a callback in training and I also have validation data. The callback and its call are described below.</p>
<pre><code>cbs = [ModelCheckpoint("ViT-Model-new-small-dataset.h5", save_best_only=True),
       EarlyStopping(patience=50, monitor='val_Accuracy', mode='max', restore_best_weights=True)]

history = model.fit(
    x=x_train,
    y=y_train,
    validation_data=(x_valid, y_valid),
    batch_size=BATCH_SIZE,
    epochs=EPOCHS, callbacks=cbs
)
</code></pre>
<p>However, whenever I run such code I get the following error after the first training epoch:</p>
<pre><code> File "vit_test_small_datasets.py", line 359, in <module>
history = run_experiment(vit_sl)
File "vit_test_small_datasets.py", line 335, in run_experiment
history = model.fit(
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/lib/python3.8/json/__init__.py", line 234, in dumps
return cls(
File "/usr/lib/python3.8/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.8/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
TypeError: Unable to serialize [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107
108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125
126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143] to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
</code></pre>
<p><strong>What I tried:</strong>
There is a discussion <a href="https://discuss.tensorflow.org/t/using-efficientnetb0-and-save-model-will-result-unable-to-serialize-2-0896919-2-1128857-2-1081853-to-json-unrecognized-type-class-tensorflow-python-framework-ops-eagertensor/12518" rel="nofollow noreferrer">here</a> saying that it must be a TensorFlow bug and that it can work with tf 2.9.1 or 2.9.0. However, in my case the error persists even after downgrading the TensorFlow version to those described.</p>
<p><strong>Edit:</strong> I realized that the error happens when the validation accuracy is calculated. If I run model.fit without validation_data=(x_valid, y_valid) and without ModelCheckpoint, the error does not happen.</p>
<p>Is it possible to run the <a href="https://keras.io/examples/vision/vit_small_ds/" rel="nofollow noreferrer">Keras example of Vision Transformers with small datasets</a> source code with a callback like the one I showed above?</p>
| <python><tensorflow><keras> | 2023-07-06 12:48:03 | 0 | 2,799 | mad |
76,628,731 | 3,518,757 | Airflow trigger rule, all but one has succeeded | <p>I'm new to Airflow. Is there an Airflow trigger rule (or a custom rule) that fires a task once all but one of its upstream tasks have succeeded (while the remaining one is still running)?</p>
<p>How about a trigger rule for when a certain proportion of the upstream tasks have succeeded, say 80% (rounded to the closest integer, of course)?</p>
| <python><airflow> | 2023-07-06 12:15:30 | 1 | 7,991 | oikonomiyaki |
76,628,619 | 312,605 | AWS Codebuild fails since buildspec is unable to install python version | <p>My build fails because it is unable to install Python 3.8, even though Python 3.8 is a supported runtime.
<a href="https://i.sstatic.net/o2Ot1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o2Ot1.png" alt="enter image description here" /></a></p>
<p>Reference:</p>
<p><a href="https://docs.aws.amazon.com/codebuild/latest/userguide/runtime-versions.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/codebuild/latest/userguide/runtime-versions.html</a>
<a href="https://docs.aws.amazon.com/codebuild/latest/userguide/sample-runtime-versions.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/codebuild/latest/userguide/sample-runtime-versions.html</a></p>
<p>My build spec file:</p>
<pre><code>{
  "version": "0.2",
  "phases": {
    "install": {
      "runtime-versions": {
        "python": "3.8"
      }
    },
    "build": {
      "commands": [
        "cd lambda",
        "python -m pip install -r requirements.txt",
        "python -m pip install -r test/requirements.txt",
        "python -m unittest"
      ]
    }
  },
  "artifacts": {
    "files": "**/*"
  }
}
</code></pre>
<p>Error:</p>
<pre><code>[Container] 2023/07/06 11:54:17 Selecting 'python' runtime version '3.8' based on manual selections...
[Container] 2023/07/06 11:54:17 Phase complete: DOWNLOAD_SOURCE State: FAILED
[Container] 2023/07/06 11:54:17 Phase context status code: YAML_FILE_ERROR Message: Unknown runtime version named '3.8' of python. This build image has the following versions: 3.10
</code></pre>
| <python><amazon-web-services><aws-codebuild> | 2023-07-06 12:02:11 | 1 | 2,801 | aspdeepak |
76,628,570 | 11,426,624 | pandas groupby nunique per multiple columns | <p>I have a dataframe:</p>
<pre><code>df = pd.DataFrame({'id': [1, 1, 2, 2, 3],
                   'user': ['u1', 'u1', 'u2', 'u2', 'u3'],
                   'date': ['2021-04-25', '2021-04-25', '2021-04-25', '2021-04-26', '2021-04-25'],
                   'sth_else1': ['xx', 'yy', 'xx', 'xx', 'xx'],
                   'sth_else2': ['zz', 'yy', 'zz', 'xx', 'xx']})
</code></pre>
<pre><code> id user date sth_else1 sth_else2
0 1 u1 2021-04-25 xx zz
1 1 u1 2021-04-25 yy yy
2 2 u2 2021-04-25 xx zz
3 2 u2 2021-04-26 xx xx
4 3 u3 2021-04-25 xx xx
</code></pre>
<p>I would like to add a column (so probably use groupby with transform?) that shows the number of unique user/date combinations per id across the whole dataframe, so that I would get this:</p>
<pre><code> id user date sth_else1 sth_else2 count_per_id_user_and_date
0 1 u1 2021-04-25 xx zz 1
1 1 u1 2021-04-25 yy yy 1
2 2 u2 2021-04-25 xx zz 2
3 2 u2 2021-04-26 xx xx 2
4 3 u3 2021-04-25 xx xx 1
</code></pre>
<p>How would I do this?</p>
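<p>The closest I have gotten is collapsing the two columns into a single key first (a sketch, and I'm not sure it is the idiomatic way):</p>

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 2, 2, 3],
                   'user': ['u1', 'u1', 'u2', 'u2', 'u3'],
                   'date': ['2021-04-25', '2021-04-25', '2021-04-25',
                            '2021-04-26', '2021-04-25']})

# Build one combined user/date key, then count its distinct values per id.
combo = df['user'] + '|' + df['date']
df['count_per_id_user_and_date'] = combo.groupby(df['id']).transform('nunique')
print(df['count_per_id_user_and_date'].tolist())  # [1, 1, 2, 2, 1]
```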
| <python><pandas><dataframe><group-by> | 2023-07-06 11:55:10 | 2 | 734 | corianne1234 |