| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,319,132 | 8,947,889 | How do you use custom prompts with LangChain's RetrievalQA without specifying from_chain_type? | <p>According to <a href="https://python.langchain.com/docs/use_cases/question_answering/vector_db_qa" rel="nofollow noreferrer">LangChain's documentation</a>,</p>
<p>"There are two ways to load different chain types. First, you can specify the chain type argument in the from_chain_type method. This allows you to pass in the name of the chain type you want to use....The above way allows you to really simply change the chain_type, but it doesn't provide a ton of flexibility over parameters to that chain type. If you want to control those parameters, you can load the chain directly."</p>
<p>The difference between the two options is:</p>
<pre><code>qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="map_reduce", retriever=docsearch.as_retriever())
</code></pre>
<p>compared to:</p>
<pre><code>qa_chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())
</code></pre>
<p>The issue I am running into is that I can't use the second option's flexibility with custom prompts. Running <code>inspect.getfullargspec(RetrievalQA.from_chain_type)</code> shows a <code>chain_type_kwargs</code> argument, which is how you <a href="https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/" rel="nofollow noreferrer">pass a prompt</a>. There is no <code>chain_type_kwargs</code> argument in either <code>load_qa_chain</code> or <code>RetrievalQA</code>. LangChain's documentation does not provide any additional information on how to pass prompts using the more flexible method. I would like to be able to combine the use of prompts with the ability to change the parameters of the chain type.</p>
<p>Logically, one would think something like this would work:</p>
<pre><code>qa_chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever(), chain_type_kwargs=chain_type_kwargs)
</code></pre>
<p>But it results in a:</p>
<pre><code>ValidationError: 1 validation error for RetrievalQA chain_type_kwargs extra fields not permitted (type=value_error.extra)
</code></pre>
<p>As mentioned before, this does work:</p>
<pre><code>qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(), chain_type_kwargs=chain_type_kwargs)
</code></pre>
| <python><chatbot><langchain><large-language-model> | 2023-10-18 19:33:58 | 1 | 677 | Stephen Strosko |
77,319,016 | 10,219,001 | Pandas extract substring between two characters | <p>I have a dataframe column in pandas which contains a long piece of text</p>
<pre><code> Value_column
Car:"Ford",Colour:"Black", Price:2000,
</code></pre>
<p>I'd like to split this into three columns</p>
<p>So it would look like</p>
<pre><code>Car Colour Price
Ford Black 2000
</code></pre>
<p>I've been able to do it for the first split using</p>
<pre><code>df['Car']=df['Value_column'].str.split("Car",expand=True).iloc[:,1:]
df['Car']=df['Car'].str[0:5]
</code></pre>
<p>But I can't figure out a neat way of doing it for all values. The tricky bit is telling the code where each value ends; it only works for Ford because I know Ford is 4 letters long.</p>
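<p>For reference, one way this kind of labelled-field split is often handled is with <code>str.extract</code> and named capture groups, so the regex delimiters decide where each value ends instead of a hard-coded length. A sketch (column and value names taken from the question):</p>

```python
import pandas as pd

df = pd.DataFrame({"Value_column": ['Car:"Ford",Colour:"Black", Price:2000,']})

# Named capture groups become the output columns; each value is bounded by
# its label and the surrounding quotes/comma, so value length never matters.
pattern = r'Car:"(?P<Car>[^"]*)",\s*Colour:"(?P<Colour>[^"]*)",\s*Price:(?P<Price>\d+)'
result = df["Value_column"].str.extract(pattern)
```

<p>Each named group becomes its own column, so <code>result</code> holds <code>Car</code>, <code>Colour</code>, and <code>Price</code> for every row.</p>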
| <python><pandas> | 2023-10-18 19:09:57 | 2 | 2,155 | fred.schwartz |
77,318,902 | 17,653,423 | Why use yield instead of return in pytest fixtures? | <p>I see why the <code>yield</code> keyword is used when we want to run our tests and then clean things up by going back into the fixture and running the code after the <code>yield</code> statement, as in the code below.</p>
<pre><code>@pytest.fixture
def sending_user(mail_admin):
user = mail_admin.create_user()
yield user
mail_admin.delete_user(user)
</code></pre>
<p>However, why is it necessary to use <code>yield</code> when we only want to return an object and never come back to the fixture, for example when returning a <code>patch</code>?</p>
<pre><code>from unittest.mock import patch, mock_open
from pytest import raises, fixture
import os.path
def read_from_file(file_path):
if not os.path.exists(file_path):
        raise Exception("File does not exist!")
with open(file_path, "r") as f:
return f.read().splitlines()[0]
@fixture()
def mock_open_file():
with patch("builtins.open", new_callable=mock_open, read_data="Correct string") as mocked:
yield mocked #instead of: return mocked
@fixture()
def mock_os():
with patch("os.path.exists", return_value=True) as mocked:
yield mocked #instead of: return mocked
def test_read_file_and_returns_the_correct_string_with_one_line(mock_os, mock_open_file):
result = read_from_file("xyz")
mock_open_file.assert_called_once_with("xyz", "r")
assert result == "Correct string"
def test_throws_exception_when_file_doesnt_exist(mock_os, mock_open_file):
mock_os.return_value = False
with raises(Exception):
read_from_file("xyz")
</code></pre>
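<p>The short reason: <code>return</code> would leave the <code>with patch(...)</code> block immediately, so the patch is undone before the test body ever runs; <code>yield</code> suspends the fixture <em>inside</em> the block, keeping the patch active until the fixture is finalized. A minimal sketch outside pytest (the names here are illustrative):</p>

```python
from unittest.mock import patch, mock_open
import builtins

real_open = builtins.open

def fixture_return():
    with patch("builtins.open", new_callable=mock_open, read_data="x") as m:
        return m  # leaving the with-block here already undoes the patch

def fixture_yield():
    with patch("builtins.open", new_callable=mock_open, read_data="x") as m:
        yield m  # execution pauses here, so the patch stays active

m1 = fixture_return()
reverted_after_return = builtins.open is real_open  # patch already gone

gen = fixture_yield()
m2 = next(gen)  # run the fixture up to its yield
patched_while_paused = builtins.open is not real_open  # mock still in place
gen.close()  # finalizing the generator runs the with-block's exit
reverted_after_close = builtins.open is real_open
```

<p>So even when there is no explicit teardown code after the <code>yield</code>, the still-open <code>with</code> block itself is the teardown.</p>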
| <python><python-3.x><pytest><pytest-mock> | 2023-10-18 18:50:31 | 1 | 391 | Luiz |
77,318,838 | 14,179,793 | How to diagnose docker compose "Attaching to ..." taking a long time? | <p>We have a repo that uses a python subprocess to call docker compose and run a container that processes an input file and produces a new output file. For most files this works as expected, but it appears that as the file grows in size the "Attaching to ..." step takes significantly longer.</p>
<p>I am not sure where to look to diagnose the cause of this and resolve it.</p>
<p>I have experimented with adding a <code>.dockerignore</code>, but this makes no difference. Given that attaching only takes longer as the input file grows, I don't think this is the cause anyway.</p>
<p>This is how compose is called in the python code:</p>
<pre><code>compose = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
while True:
line = compose.stdout.readline()
if not line:
break
print(line.decode())
</code></pre>
<p>Where <code>cmd</code> will be this:</p>
<pre><code>docker compose -f /path/to/install/location/service_env/lib/python3.8/site-packages/file_generator/docker-compose.yml up service_1
</code></pre>
<p>Example compose.yml:</p>
<pre><code>version: '3'
services:
service_1:
image: some/image:latest
environment:
- PAYLOAD=${PAYLOAD}
- ARGS=${DMRPP_ARGS}
volumes:
- ${INPUT_FILES_PATH:-/tmp}:/usr/share/
</code></pre>
<p>Using a <code>16mb</code> input file:</p>
<pre><code>$ time service -p /some/path/to/inputs/
Container service_1 Created
Attaching to service_1 <- Gets delayed here for all but the last ~1.0 seconds of the execution
service_1 | ...logs...
service_1 exited with code 0
real 1m39.291s
user 0m0.261s
sys 0m0.093s
</code></pre>
<p>Using a <code>924K</code> input file:</p>
<pre><code>$ time service -p /some/path/to/inputs/ --no-validate
Container service_1 Created
Attaching to service_1
service_1 | ...logs...
service_1 exited with code 0
real 0m3.534s
user 0m0.262s
sys 0m0.113s
</code></pre>
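<p>One way to narrow down where the time goes is to timestamp every line the subprocess emits from the Python side. A small stdlib sketch (the compose command in the trailing comment is the one from the question, shown only as an illustration):</p>

```python
import subprocess
import time

def run_timed(cmd):
    """Run cmd and collect (seconds_since_start, line) pairs so the
    stage that stalls stands out in the output."""
    start = time.monotonic()
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    lines = []
    for raw in proc.stdout:
        lines.append((time.monotonic() - start, raw.decode().rstrip()))
    proc.wait()
    return lines

# e.g. run_timed(["docker", "compose", "-f", "docker-compose.yml", "up", "service_1"])
```

<p>Running the real compose command through this shows exactly how many seconds elapse before and after the "Attaching to ..." line.</p>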
| <python><docker><docker-compose> | 2023-10-18 18:38:06 | 0 | 898 | Cogito Ergo Sum |
77,318,785 | 11,827,673 | How do I detect the usage of a ContextManager in Python? | <p>I am trying to refactor the following:</p>
<pre class="lang-py prettyprint-override"><code>with MyContext() as ctx:
ctx.some_function()
</code></pre>
<p>...into something more like this:</p>
<pre class="lang-py prettyprint-override"><code>with MyContext():
some_function()
</code></pre>
<p>How can I detect in the body of <code>some_function</code> that I have called it within the context of <code>MyContext()</code>? I would prefer that this is done in a thread-safe way.</p>
<p>This appears to be possible because it is done in the builtin <code>decimal</code> module:</p>
<pre class="lang-py prettyprint-override"><code>from decimal import localcontext
with localcontext() as ctx:
ctx.prec = 42 # Perform a high precision calculation
s = calculate_something()
</code></pre>
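<p>This is possible, and <code>decimal</code> does something very similar with a per-thread context. A sketch using <code>contextvars</code> (all names here are illustrative), which is safe under both threads and asyncio tasks:</p>

```python
import contextvars

# A module-level ContextVar plays the role decimal's context plays;
# it is isolated per thread and per asyncio task.
_active = contextvars.ContextVar("my_context_active", default=None)

class MyContext:
    def __enter__(self):
        self._token = _active.set(self)
        return self

    def __exit__(self, *exc):
        _active.reset(self._token)

def some_function():
    ctx = _active.get()
    if ctx is None:
        raise RuntimeError("some_function() called outside MyContext")
    return ctx
```

<p><code>ContextVar.set</code> returns a token, and resetting it in <code>__exit__</code> also handles nested uses of <code>MyContext</code> correctly.</p>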
| <python><refactoring><contextmanager> | 2023-10-18 18:28:28 | 2 | 378 | JeremyEastham |
77,318,623 | 4,953,820 | Check if dataframe or specific string literal was passed to function | <p>I have a function that takes an argument which, among other things, could be either a dataframe or a string literal.</p>
<pre><code>def func(could_be_df):
if could_be_df=='optionA':data=stuffA()
elif could_be_df=='optionB':data=stuffB()
else:data=get_data(could_be_df)
</code></pre>
<p>When a dataframe is passed, this results in the "the truth value of an array with multiple elements is ambiguous" error. I want <code>could_be_df=='optionA'</code> to resolve to <code>False</code>, since a dataframe obviously isn't the string literal I'm looking for. How do I cleanly perform this check?</p>
<p>Stuff I've tried:</p>
<ol>
<li><code>if could_be_df is 'optionA'</code>. Syntax warning, and sounds like it may not be safe to use "is".</li>
<li><code>if isinstance(could_be_df,pd.DataFrame)</code>. Could also be a series, array, list, etc.</li>
<li><code>if isinstance(could_be_df,str)</code>. Works, but gets messy since strings other than the literals should still go to get_data. Also, not duck-typing.</li>
</ol>
<p>Is there a clean, pythonic way to perform this check?</p>
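<p>One pattern that avoids the ambiguous-truth-value error is to gate the equality comparisons behind an <code>isinstance</code> check, so a dataframe never reaches <code>==</code> and non-matching strings still fall through to the data path. A sketch with the question's branches replaced by stand-in markers:</p>

```python
import pandas as pd

def func(could_be_df):
    # Only strings are compared with ==, so a DataFrame (or Series/array)
    # can never trigger the ambiguous-truth-value error.
    if isinstance(could_be_df, str):
        if could_be_df == "optionA":
            return "A"  # stand-in for stuffA()
        if could_be_df == "optionB":
            return "B"  # stand-in for stuffB()
    return "data"  # stand-in for get_data(could_be_df); other strings land here
```

<p>Strings that are not one of the literals fall through to the final branch, which addresses the "gets messy" concern about plain <code>isinstance</code> dispatch.</p>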
| <python><pandas><equality> | 2023-10-18 18:00:40 | 1 | 617 | Kalev Maricq |
77,318,492 | 13,118,009 | Building wheel for pyarrow (pyproject.toml) did not run successfully | <p>Just had IT install Python 3.12 on my Windows machine. I do not have admin rights on my machine, which may or may not be important. During install, the following were done:</p>
<ul>
<li>Clicked "Add Python 3.n to Path" box.</li>
<li>Went into Customize installation, made sure pip was selected, and selected "install for all users".</li>
</ul>
<p>I'm having trouble installing Snowflake Connector Python, which has output further below.</p>
<p>For sanity, I tried installing numpy, which seemingly worked even though "platform independent libraries not found":</p>
<pre><code>py -m pip install numpy1
Could not find platform independent libraries <prefix>
Collecting numpy1
Using cached numpy1-0.0.1-py3-none-any.whl
Installing collected packages: numpy1
Successfully installed numpy1-0.0.1
</code></pre>
<p>And here's the attempt at installing the Snowflake Connector. Would anyone have ideas on my problem(s)?</p>
<pre><code>#Have tried both, producing same errors
py -m pip install snowflake-connector-python
py -m pip install --upgrade snowflake-connector-python
Could not find platform independent libraries <prefix>
Collecting snowflake-connector-python
Using cached snowflake-connector-python-3.3.0.tar.gz (716 kB)
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [326 lines of output]
Could not find platform independent libraries <prefix>
Collecting setuptools>=40.6.0
Using cached setuptools-68.2.2-py3-none-any.whl.metadata (6.3 kB)
Collecting wheel
Using cached wheel-0.41.2-py3-none-any.whl.metadata (2.2 kB)
Collecting cython
Using cached Cython-3.0.4-cp312-cp312-win_amd64.whl.metadata (3.2 kB)
Collecting pyarrow<10.1.0,>=10.0.1
Using cached pyarrow-10.0.1.tar.gz (994 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting numpy>=1.16.6 (from pyarrow<10.1.0,>=10.0.1)
Using cached numpy-1.26.1-cp312-cp312-win_amd64.whl.metadata (61 kB)
Using cached setuptools-68.2.2-py3-none-any.whl (807 kB)
Using cached wheel-0.41.2-py3-none-any.whl (64 kB)
Using cached Cython-3.0.4-cp312-cp312-win_amd64.whl (2.8 MB)
Using cached numpy-1.26.1-cp312-cp312-win_amd64.whl (15.5 MB)
Building wheels for collected packages: pyarrow
Building wheel for pyarrow (pyproject.toml): started
Building wheel for pyarrow (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
Building wheel for pyarrow (pyproject.toml) did not run successfully.
exit code: 1
[290 lines of output]
Could not find platform independent libraries <prefix>
<string>:36: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
WARNING setuptools_scm.pyproject_reading toml section missing 'pyproject.toml does not contain a tool.setuptools_scm section'
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-312
creating build\lib.win-amd64-cpython-312\pyarrow
copying pyarrow\benchmark.py -> build\lib.win-amd64-cpython-312\pyarrow
copying pyarrow\cffi.py -> build\lib.win-amd64-cpython-312\pyarrow
copying pyarrow\compute.py -> build\lib.win-amd64-cpython-312\pyarrow
copying pyarrow\conftest.py -> build\lib.win-amd64-cpython-312\pyarrow
copying pyarrow\csv.py -> build\lib.win-amd64-cpython-312\pyarrow
[had to remove similar rows to fit question]
copying pyarrow\tests\test_util.py -> build\lib.win-amd64-cpython-312\pyarrow\tests
copying pyarrow\tests\util.py -> build\lib.win-amd64-cpython-312\pyarrow\tests
copying pyarrow\tests\__init__.py -> build\lib.win-amd64-cpython-312\pyarrow\tests
creating build\lib.win-amd64-cpython-312\pyarrow\vendored
copying pyarrow\vendored\docscrape.py -> build\lib.win-amd64-cpython-312\pyarrow\vendored
copying pyarrow\vendored\version.py -> build\lib.win-amd64-cpython-312\pyarrow\vendored
copying pyarrow\vendored\__init__.py -> build\lib.win-amd64-cpython-312\pyarrow\vendored
creating build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\common.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\conftest.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\encryption.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_basic.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_compliant_nested_type.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_dataset.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_data_types.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_datetime.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_encryption.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_metadata.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_pandas.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_parquet_file.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_parquet_writer.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
copying pyarrow\tests\parquet\__init__.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet
running egg_info
writing pyarrow.egg-info\PKG-INFO
writing dependency_links to pyarrow.egg-info\dependency_links.txt
writing entry points to pyarrow.egg-info\entry_points.txt
writing requirements to pyarrow.egg-info\requires.txt
writing top-level names to pyarrow.egg-info\top_level.txt
ERROR setuptools_scm._file_finders.git listing git files failed - pretending there aren't any
reading manifest file 'pyarrow.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '..\LICENSE.txt'
warning: no files found matching '..\NOTICE.txt'
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*~' found anywhere in distribution
warning: no previously-included files matching '#*' found anywhere in distribution
warning: no previously-included files matching '.git*' found anywhere in distribution
warning: no previously-included files matching '.DS_Store' found anywhere in distribution
no previously-included directories found matching '.asv'
writing manifest file 'pyarrow.egg-info\SOURCES.txt'
copying pyarrow\__init__.pxd -> build\lib.win-amd64-cpython-312\pyarrow
copying pyarrow\_compute.pxd -> build\lib.win-amd64-cpython-312\pyarrow
copying pyarrow\_compute.pyx -> build\lib.win-amd64-cpython-312\pyarrow
copying pyarrow\_csv.pxd -> build\lib.win-amd64-cpython-312\pyarrow
[had to remove similar rows to fit question]
copying pyarrow\includes\libarrow_flight.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes
copying pyarrow\includes\libarrow_fs.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes
copying pyarrow\includes\libarrow_python.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes
copying pyarrow\includes\libarrow_substrait.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes
copying pyarrow\includes\libgandiva.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes
copying pyarrow\includes\libplasma.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes
copying pyarrow\includes\__init__.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes
creating build\lib.win-amd64-cpython-312\pyarrow\src
copying pyarrow\src\ArrowPythonConfig.cmake.in -> build\lib.win-amd64-cpython-312\pyarrow\src
copying pyarrow\src\ArrowPythonFlightConfig.cmake.in -> build\lib.win-amd64-cpython-312\pyarrow\src
copying pyarrow\src\CMakeLists.txt -> build\lib.win-amd64-cpython-312\pyarrow\src
copying pyarrow\src\arrow-python-flight.pc.in -> build\lib.win-amd64-cpython-312\pyarrow\src
copying pyarrow\src\arrow-python.pc.in -> build\lib.win-amd64-cpython-312\pyarrow\src
creating build\lib.win-amd64-cpython-312\pyarrow\tensorflow
copying pyarrow\tensorflow\plasma_op.cc -> build\lib.win-amd64-cpython-312\pyarrow\tensorflow
copying pyarrow\tests\bound_function_visit_strings.pyx -> build\lib.win-amd64-cpython-312\pyarrow\tests
copying pyarrow\tests\pyarrow_cython_example.pyx -> build\lib.win-amd64-cpython-312\pyarrow\tests
creating build\lib.win-amd64-cpython-312\pyarrow\src\arrow
creating build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\CMakeLists.txt -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\api.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\arrow_to_pandas.cc -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\arrow_to_pandas.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\arrow_to_python_internal.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\benchmark.cc -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python
[had to remove similar rows to fit question]
copying pyarrow\src\arrow\python\serialize.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\type_traits.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\udf.cc -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\udf.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\visibility.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python
creating build\lib.win-amd64-cpython-312\pyarrow\tests\data
creating build\lib.win-amd64-cpython-312\pyarrow\tests\data\feather
copying pyarrow\tests\data\feather\v0.17.0.version.2-compression.lz4.feather -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\feather
creating build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\README.md -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.emptyFile.jsn.gz -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.emptyFile.orc -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.test1.jsn.gz -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.test1.orc -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.testDate1900.jsn.gz -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.testDate1900.orc -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\decimal.jsn.gz -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\decimal.orc -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc
creating build\lib.win-amd64-cpython-312\pyarrow\tests\data\parquet
copying pyarrow\tests\data\parquet\v0.7.1.all-named-index.parquet -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\parquet
copying pyarrow\tests\data\parquet\v0.7.1.column-metadata-handling.parquet -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\parquet
copying pyarrow\tests\data\parquet\v0.7.1.parquet -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\parquet
copying pyarrow\tests\data\parquet\v0.7.1.some-named-index.parquet -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\parquet
running build_ext
creating C:\Users\myusername\AppData\Local\Temp\pip-install-3n6isyev\pyarrow_2f98fb6170b843699540aba751d4b9e7\build\cpp
-- Running CMake for PyArrow C++
cmake -DARROW_BUILD_DIR=build -DCMAKE_BUILD_TYPE=release -DCMAKE_INSTALL_LIBDIR=lib -DCMAKE_INSTALL_PREFIX=C:\Users\myusername\AppData\Local\Temp\pip-install-3n6isyev\pyarrow_2f98fb6170b843699540aba751d4b9e7\build\dist -DPYTHON_EXECUTABLE=C:\Users\myusername\AppData\Local\Programs\Python\Python312\python.exe -DPython3_EXECUTABLE=C:\Users\myusername\AppData\Local\Programs\Python\Python312\python.exe -DPYARROW_CXXFLAGS= -DPYARROW_WITH_DATASET=off -DPYARROW_WITH_PARQUET_ENCRYPTION=off -DPYARROW_WITH_HDFS=off -DPYARROW_WITH_FLIGHT=off -G "Visual Studio 15 2017 Win64" C:\Users\myusername\AppData\Local\Temp\pip-install-3n6isyev\pyarrow_2f98fb6170b843699540aba751d4b9e7\pyarrow/src
error: command 'cmake' failed: None
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pyarrow
Failed to build pyarrow
ERROR: Could not build wheels for pyarrow, which is required to install pyproject.toml-based projects
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
| <python><python-3.x><pip><snowflake-cloud-data-platform> | 2023-10-18 17:32:50 | 2 | 6,494 | Isolated |
77,318,367 | 12,888,866 | Prevent pandas resample from filling in gap day while downsampling | <p>I have a pandas Series with measurements at a 1-minute interval. I want to downsample this data to a 5-minute interval. <code>series</code> contains measurements from the end of October 18th, none from October 19th, and then measurements at the start of October 20th. Using <code>series.resample("5T").mean()</code> fills October 19th with <code>NaN</code>'s, and <code>series.resample("5T").sum()</code> fills the missing day with <code>0</code>'s:</p>
<pre><code>index1 = pd.date_range("2023-10-18 23:50", "2023-10-18 23:59", freq="T")
index2 = pd.date_range("2023-10-20 00:00", "2023-10-20 00:10", freq="T")
series1 = pd.Series(range(len(index1)), index=index1)
series2 = pd.Series(range(100, len(index2)+100), index=index2)
series = pd.concat([series1, series2])
series.resample("5T").mean()
</code></pre>
<p>Out:</p>
<pre><code>2023-10-18 23:50:00 2.0
2023-10-18 23:55:00 7.0
2023-10-19 00:00:00 NaN
2023-10-19 00:05:00 NaN
2023-10-19 00:10:00 NaN
...
2023-10-19 23:50:00 NaN
2023-10-19 23:55:00 NaN
2023-10-20 00:00:00 102.0
2023-10-20 00:05:00 107.0
2023-10-20 00:10:00 110.0
Freq: 5T, Length: 293, dtype: float64
</code></pre>
<p>I need <code>pd.Series.resample</code> to stick to the days that are in <code>series</code> and not fill in anything for the missing day. How can this be done?</p>
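<p>One approach (a sketch): resample as usual, then keep only the buckets whose calendar day actually appears in the original index, dropping the empty October 19th entirely:</p>

```python
import pandas as pd

index1 = pd.date_range("2023-10-18 23:50", "2023-10-18 23:59", freq="min")
index2 = pd.date_range("2023-10-20 00:00", "2023-10-20 00:10", freq="min")
series = pd.concat([
    pd.Series(range(len(index1)), index=index1),
    pd.Series(range(100, len(index2) + 100), index=index2),
])

# Resample normally, then keep only buckets whose calendar day appears
# in the original index -- the empty Oct 19th is removed entirely.
res = series.resample("5min").mean()
days_present = series.index.normalize().unique()
res = res[res.index.normalize().isin(days_present)]
```

<p>This reproduces the question's values for October 18th and 20th while leaving no rows at all for the missing day.</p>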
| <python><pandas><datetime><pandas-resample> | 2023-10-18 17:10:53 | 1 | 377 | Timo |
77,318,361 | 4,324,496 | ModuleNotFoundError: No module named 'some_module' | <p>Could anyone please point out where I am going wrong? I have tried every option I could find; I don't want to resort to adding to <code>sys.path</code>.</p>
<p>Below is my project structure; I am trying to import a module:</p>
<pre><code> from pipeline_modules.pdf2docx_convertor import *
# Getting error: ModuleNotFoundError: No module named 'pipeline_modules'
</code></pre>
<p>I printed sys.path values</p>
<pre><code> ['c:\\microservice\\pdf_processing_pipepline\\pipeline', 'C:\\microservice_env\\python39.zip', 'C:\\microservice_env\\DLLs', 'C:\\microservice_env\\lib', 'C:\\microservice_env', 'C:\\microservice_env\\lib\\site-packages']
</code></pre>
<p><a href="https://i.sstatic.net/H54WC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H54WC.png" alt="enter image description here" /></a></p>
| <python><module><package> | 2023-10-18 17:10:20 | 1 | 2,954 | ZKS |
77,318,322 | 4,844,359 | Python breaks HTTP response status code after trying to alter returned response with a new array or map | <p>I tried to alter a response body in Lambda before returning it to my front end (Flutter). However, I got a 500 Internal Server Error, even though my CloudWatch log shows the newly created response printing fine.</p>
<p>My Lambda code, where I take <code>sample_json_response</code> and try to rebuild the response with a new list (I also tried with a dict, but both returned the same result):</p>
<pre><code> # print(sample_json_response)
ingredient_response = json.dumps(inference_response['body']['ingredients']['items'])
print('ingredient response: ')
print(ingredient_response)
# add _id details to return unique id for identifying documentdb result
unique_id = request_id + "-" + upload_timestamp
result = []
    result.append({"unique_id": unique_id})
    result.append({"ingredient_response": ingredient_response})
print('updated result: ')
print(result)
</code></pre>
<p>After that i return the response with status code 200 and body</p>
<pre><code> return {
'statusCode': 200,
'body': result
}
</code></pre>
<p>I checked my log: the array or map printed fine, and there was no error in my Lambda either. This image shows the new array I built, just prepended with additional details:
<a href="https://i.sstatic.net/HtGws.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HtGws.png" alt="enter image description here" /></a></p>
<p>However, I get a 500 Internal Server Error on my Flutter end; printing <code>response.statusCode</code> shows 500. If I change back to my initial sample response, it works fine.</p>
<p>What is causing the status code to be overwritten when I am returning the same format, just rebuilt into a new array with additional details?</p>
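<p>One common cause of exactly this symptom, assuming the Lambda sits behind an API Gateway proxy integration, is returning a non-string <code>body</code>: the integration response is then considered malformed and surfaces to the client as a 500, even though the function itself logged and exited cleanly. A sketch of the serialized form:</p>

```python
import json

def build_response(result):
    # With an API Gateway proxy integration, `body` must be a string;
    # serialize the list/dict instead of returning it directly.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```

<p>The client then receives the 200 with a JSON string body it can decode.</p>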
| <python><flutter><http> | 2023-10-18 17:05:41 | 1 | 1,100 | unacorn |
77,318,163 | 22,466,650 | How to iteratively merge dfs and increment the suffixes at the same time? | <p>I receive a list of CSV files (the count varies from 3 to 20), and I need to perform an inner merge on the primary key <code>A</code>, then place all the remaining columns next to each other in a single consolidated dataframe.</p>
<p>Since I can't share the whole csv files, I made some examples :</p>
<pre><code>from functools import reduce
df1 = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
df2 = pd.DataFrame({'A': [2, 3, 4], 'C': [10, 11, 12], 'B': [4, 5, 6]})
df3 = pd.DataFrame({'A': [3], 'B': [0], 'D': [13]})
df4 = pd.DataFrame({'A': [3, 7], 'B': [9, 1], 'D': [1, 2]})
list_1 = [df1, df2, df3, df4]
list_2 = [df1, df2, df3]
merged_df = reduce(lambda left, right: pd.merge(left, right, on='A'), list_1)
</code></pre>
<p>As you can see, the resulting dataframe should have only one row, because <code>3</code> is the only primary key shared between all dataframes. The code above runs with no errors for 3 dataframes, but with 4 I get this error:</p>
<p><strong>MergeError: Passing 'suffixes' which cause duplicate columns {'B_x'} is not allowed.</strong></p>
<p>Here is my expected output (<code>1 ROW / 8 COLUMNS</code>) :</p>
<p>I added the merge titles and the pipes just for clarity.</p>
<pre><code> # | MERGE 1 | MERGE 2 | MERGE 3
A B_1 | C B_2 | B_3 D_1 | B_4 D_2
3 6 | 11 5 | 0 13 | 9 1
</code></pre>
<p>The duplicated columns <code>B and D</code> should have an incremented suffix like a counter.</p>
<p>Do you have any ideas on how to fix that? I feel like we need to use <code>enumerate</code> inside <code>reduce</code>, but I really don't know how to incorporate that. The site suggested <a href="https://stackoverflow.com/questions/48236438/merge-multiple-dataframe-with-specified-suffix">merge multiple dataframe with specified suffix</a>, but I don't think it's suited to my use case.</p>
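<p>One way to get incrementing suffixes (a sketch, not the only approach) is to rename duplicate non-key columns with a running counter before each merge, so pandas never has to generate suffixes at all. Note this counts every non-key column, so <code>C</code> comes out as <code>C_1</code> rather than staying bare:</p>

```python
import pandas as pd

def merge_with_counter(dfs, key="A"):
    """Inner-merge dfs on `key`, renaming every non-key column with a
    running per-name counter so pandas never has to invent suffixes."""
    counts = {}
    merged = None
    for df in dfs:
        renames = {}
        for col in df.columns:
            if col == key:
                continue
            counts[col] = counts.get(col, 0) + 1
            renames[col] = f"{col}_{counts[col]}"
        df = df.rename(columns=renames)
        merged = df if merged is None else merged.merge(df, on=key)
    return merged

df1 = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
df2 = pd.DataFrame({'A': [2, 3, 4], 'C': [10, 11, 12], 'B': [4, 5, 6]})
df3 = pd.DataFrame({'A': [3], 'B': [0], 'D': [13]})
df4 = pd.DataFrame({'A': [3, 7], 'B': [9, 1], 'D': [1, 2]})
out = merge_with_counter([df1, df2, df3, df4])
```

<p>With the question's four dataframes this yields one row with columns <code>A, B_1, C_1, B_2, B_3, D_1, B_4, D_2</code>.</p>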
| <python><pandas> | 2023-10-18 16:37:36 | 1 | 1,085 | VERBOSE |
77,318,043 | 274,460 | How to use python zeroconf in an asyncio application? | <p>Here's a minimal reproducible example demonstrating the problem:</p>
<pre><code>import asyncio
from zeroconf import Zeroconf, ServiceStateChange
from zeroconf.asyncio import AsyncServiceBrowser
def on_service_state_change(zeroconf, service_type, name, state_change):
if state_change == ServiceStateChange.Added:
info = zeroconf.get_service_info(service_type, name)
print(info)
async def browse_services():
zeroconf = Zeroconf()
service_type = "_http._tcp.local."
browser = AsyncServiceBrowser(zeroconf, service_type, handlers=[on_service_state_change])
try:
await asyncio.Event().wait()
finally:
zeroconf.close()
if __name__ == "__main__":
asyncio.run(browse_services())
</code></pre>
<p>This fails, like this:</p>
<pre><code>Exception in callback _SelectorDatagramTransport._read_ready()
handle: &lt;Handle _SelectorDatagramTransport._read_ready()&gt;
Traceback (most recent call last):
  File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/usr/lib/python3.10/asyncio/selector_events.py", line 1035, in _read_ready
    self._protocol.datagram_received(data, addr)
  File "src/zeroconf/_listener.py", line 80, in zeroconf._listener.AsyncListener.datagram_received
  File "src/zeroconf/_listener.py", line 161, in zeroconf._listener.AsyncListener.datagram_received
  File "src/zeroconf/_handlers/record_manager.py", line 162, in zeroconf._handlers.record_manager.RecordManager.async_updates_from_response
  File "src/zeroconf/_handlers/record_manager.py", line 71, in zeroconf._handlers.record_manager.RecordManager.async_updates_complete
  File "src/zeroconf/_services/browser.py", line 441, in zeroconf._services.browser._ServiceBrowserBase.async_update_records_complete
  File "src/zeroconf/_services/browser.py", line 451, in zeroconf._services.browser._ServiceBrowserBase.async_update_records_complete
  File "src/zeroconf/_services/browser.py", line 454, in zeroconf._services.browser._ServiceBrowserBase._fire_service_state_changed_event
  File "src/zeroconf/_services/browser.py", line 454, in zeroconf._services.browser._ServiceBrowserBase._fire_service_state_changed_event
  File "src/zeroconf/_services/browser.py", line 464, in zeroconf._services.browser._ServiceBrowserBase._fire_service_state_changed_event
  File "/home/tkcook/.virtualenvs/veeahub/lib/python3.10/site-packages/zeroconf/_services/__init__.py", line 56, in fire
    h(**kwargs)
  File "/home/tkcook/Scratch/zeroconf_test.py", line 7, in on_service_state_change
    info = zeroconf.get_service_info(service_type, name)
  File "/home/tkcook/.virtualenvs/veeahub/lib/python3.10/site-packages/zeroconf/_core.py", line 270, in get_service_info
    if info.request(self, timeout, question_type):
  File "src/zeroconf/_services/info.py", line 752, in zeroconf._services.info.ServiceInfo.request
RuntimeError: Use AsyncServiceInfo.async_request from the event loop
</code></pre>
<p>I can't use <code>async_request()</code> here (and by implication <code>async_get_service_info()</code>) because the service listener is not async and so the resulting coroutine can't be awaited. As far as I can tell, there is no way to add an async service listener. <code>AsyncZeroconf.async_add_service_listener</code> does not result in the callback functions being awaited and therefore doesn't help with this.</p>
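<p>The general shape of the constraint — a synchronous callback that needs to start async work — can be sketched with plain <code>asyncio</code>. This sketch is my illustration, not code from the zeroconf API, and whether zeroconf invokes its handlers on the loop's thread is an assumption that would need checking:</p>

```python
import asyncio

# A minimal stdlib-only sketch of the constraint described above: a
# synchronous callback (like the zeroconf service handler) cannot await,
# but it can hand a coroutine to the already-running loop via create_task.
# The names below are illustrative stand-ins, not part of the zeroconf API.
async def fake_async_lookup(name):
    await asyncio.sleep(0)  # stands in for AsyncServiceInfo.async_request
    return f"info for {name}"

results = []

def sync_callback(name):
    # Schedule the coroutine instead of calling it; this only works while
    # the event loop is running in the current thread.
    task = asyncio.get_running_loop().create_task(fake_async_lookup(name))
    task.add_done_callback(lambda t: results.append(t.result()))

async def main():
    sync_callback("_http._tcp.local.")
    await asyncio.sleep(0.01)  # give the scheduled task time to finish

asyncio.run(main())
print(results)
```

<p>If the handler does run on the loop's thread, scheduling the coroutine rather than calling a blocking method would sidestep the "Use AsyncServiceInfo.async_request from the event loop" error above.</p>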
<p>Am I missing something? Is there some way to do this that I've overlooked?</p>
| <python><python-asyncio><zeroconf> | 2023-10-18 16:19:09 | 0 | 8,161 | Tom |
77,317,546 | 9,525,983 | Caching pip packages in requirements.txt using vercel.json | <p>Whenever I deploy to Vercel, it takes 10+ minutes to install the pip packages.
I have been digging into the Vercel documentation but cannot locate any article on caching pip packages. Any insight would be greatly appreciated.</p>
<p>requirements.txt</p>
<pre><code>Flask==2.3.3
Flask-Admin==1.6.1
Flask-BabelEx==0.9.4
Flask-Cors==4.0.0
Flask-Login==0.6.2
Flask-Mail==0.9.1
Flask-Migrate==4.0.5
Flask-Principal==0.4.0
Flask-RESTful==0.3.10
Flask-Security-Too==5.3.0
Flask-SQLAlchemy==3.1.1
Flask-WTF==1.1.1
</code></pre>
<p>vercel.json</p>
<pre><code>{
"rewrites": [
{ "source": "/(.*)", "destination": "/api/app" }
]
}
</code></pre>
| <python><flask><pip><serverless><vercel> | 2023-10-18 15:07:36 | 0 | 1,231 | Wils |
77,317,536 | 1,914,781 | atlassian module has been installed but import reports it not found | <p>I have installed the module with the command below on Ubuntu 22.04:</p>
<pre><code>$ pip3 install atlassian
Requirement already satisfied: atlassian in /home/funny/.local/lib/python3.8/site-packages (0.0.0)
</code></pre>
<p>But when I try the Python code below in x.py:</p>
<pre><code>#!/usr/bin/env python3
from atlassian import Jira
</code></pre>
<p>When I run pip list, the package is there:</p>
<pre><code>$ pip3 list |grep atlassian
atlassian 0.0.0
</code></pre>
<p>Below error happen:</p>
<pre><code>$ ./x.py
Traceback (most recent call last):
File "./x.py", line 2, in <module>
from atlassian import Jira
ModuleNotFoundError: No module named 'atlassian'
</code></pre>
<p>Why does the install report success while the import reports an error?</p>
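<p>Two details above stand out as worth checking (my observations, not stated in the question): the install path mentions <code>python3.8</code> while Ubuntu 22.04 ships Python 3.10, so pip3 and the script may be using different interpreters; and the Jira client is distributed on PyPI as <code>atlassian-python-api</code>, so a distribution literally named <code>atlassian</code> at version 0.0.0 may be an empty placeholder (an assumption verifiable with <code>pip3 show atlassian</code>). A quick stdlib diagnostic for the first point:</p>

```python
# A diagnostic sketch (not from the question): when pip3 says a package is
# installed but `import` fails, the script is often running under a different
# interpreter than the one pip3 installed into. These lines show where *this*
# interpreter actually looks.
import sys
import sysconfig

print(sys.executable)                    # interpreter running this script
print(sysconfig.get_paths()["purelib"])  # site-packages it imports from
print(sys.path)                          # full search path used by `import`
```

<p>If the path printed here does not match the <code>/home/funny/.local/lib/python3.8/...</code> path pip3 reported, the two tools are pointed at different Python installations.</p>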
| <python> | 2023-10-18 15:06:55 | 0 | 9,011 | lucky1928 |
77,317,463 | 5,561,649 | Why doesn't stubgen generate types for literal collections? | <p>For example, on this file:</p>
<pre class="lang-py prettyprint-override"><code>MY_DICT = {'a': 0, 'b': 1, 'c': 2}
MY_LIST = ['a', 'b', 'c']
MY_TUPLE = ('a', 'b', 'c')
</code></pre>
<p>It generates:</p>
<pre class="lang-py prettyprint-override"><code>from _typeshed import Incomplete
MY_DICT: Incomplete
MY_LIST: Incomplete
MY_TUPLE: Incomplete
</code></pre>
<p>I would have expected:</p>
<pre class="lang-py prettyprint-override"><code>MY_DICT: dict[str, int]
MY_LIST: list[str]
MY_TUPLE: tuple[str]
</code></pre>
<p>Why can't it infer it? Or does it operate on the premise that the values can change and we can only really know what the <em>intended</em> type is from a handwritten annotation?</p>
| <python><mypy><stub> | 2023-10-18 14:57:57 | 0 | 550 | LoneCodeRanger |
77,317,418 | 955,273 | pandas.Timestamp.floor gives different results | <p>I am running pandas 1.5.0</p>
<p>Below, I am running the same code for 2 different timestamps, and getting 2 different answers.</p>
<pre><code>import pandas as pd

def test(now):
    now = pd.Timestamp(now)
    interval = "48 hours"
    floored = now.floor(pd.Timedelta(interval))
    diff = now - floored
    print(f"{now}, {interval}, {floored}, {diff}")

test('2023-10-18 23:59:03.793642')
test('2023-10-11 23:59:03.793642')
</code></pre>
<p>In the <code>test</code> function I floor the timestamp to a 48 hour interval, and then calculate the timedelta difference between the original timestamp and the floored timestamp.</p>
<p>The first timestamp floors to midnight on the same day (so to a 24 hour interval).<br />
The second timestamp floors to midnight on the previous day (so to a 48 hour interval - as requested).</p>
<p><strong>Output:</strong></p>
<blockquote>
<pre><code>2023-10-18 23:59:03.793642, 48 hours, 2023-10-18 00:00:00, 0 days 23:59:03.793642
2023-10-11 23:59:03.793642, 48 hours, 2023-10-10 00:00:00, 1 days 23:59:03.793642
</code></pre>
</blockquote>
<p>What is happening here?</p>
<p>How come I get different results?</p>
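<p>One plausible explanation (my assumption, to be checked against the pandas docs — it is not stated in the question) is that flooring to a multiple of a frequency is anchored at a fixed origin such as the Unix epoch, rather than counted back from the timestamp itself. A stdlib-only sketch of that hypothesis:</p>

```python
# If the 48-hour floor snaps to midnights that fall an even number of days
# after 1970-01-01, both observed outputs follow: one date sits on a
# boundary, the other does not. (This anchoring is my assumption, not a
# statement from the question.)
from datetime import date

EPOCH = date(1970, 1, 1)

def days_since_epoch(y, m, d):
    return (date(y, m, d) - EPOCH).days

print(days_since_epoch(2023, 10, 18) % 2)  # 0 -> itself a 48h boundary
print(days_since_epoch(2023, 10, 11) % 2)  # 1 -> boundary is the day before
```

<p>If that hypothesis holds, both outputs are consistent: the floor is not "48 hours back from now" but the nearest earlier epoch-aligned 48-hour boundary.</p>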
| <python><pandas> | 2023-10-18 14:51:26 | 1 | 28,956 | Steve Lorimer |
77,317,412 | 12,257,924 | MarianMTModel stops translating on encountering “-” character | <p>I’m trying to translate simple text from Polish to English:</p>
<p>“Życie nigdy się nie kończy – przygotuj się zatem na ciąg dalszy. Zasilany twoją energią zegarek z widocznym mechanizmem dopasuje się do ciebie..”</p>
<p>The model behaves strangely: when it encounters the <code>-</code> character, it stops translating and returns only the translation of the text that precedes it.
When I move this character, the translation always ends right before it.</p>
<p>After further investigation, the model returns the generated ids:
<code>tensor([[63429, 7157, 522, 10126, 15, 0]])</code></p>
<p>when decoded:
<code>'<pad> Life never ends </s>'</code>.</p>
<p>Surprisingly, when I set <code>num_beams</code> to 2 instead of 1, I get a good result. The problem is that, because of time constraints, I can't use <code>num_beams=2</code>.</p>
<p>Does anyone know what is happening?</p>
| <python><machine-learning><deep-learning><nlp><huggingface-transformers> | 2023-10-18 14:50:27 | 0 | 639 | Karol |
77,317,144 | 20,386,689 | Trade-offs and considerations for conditional tuple unpacking in Python: primarily performance but also stylistic considerations for maintainability | <p>I have a piece of logic in <strong>Python</strong> that involves tuple unpacking based on conditions.</p>
<p>The code instantiates two variables, and assigns to these variables the unpacked values of a tuple.</p>
<p>This tuple is defined conditionally: if a conditional_expression is true, then a function executes returning a tuple, else default values are provided and assigned.</p>
<p>My understanding is there are two simple ways to implement this, as follows:</p>
<pre class="lang-py prettyprint-override"><code># Method 1 #
if conditional_expression:
    processed_data, context = tuple_returner(raw_data)
else:
    processed_data, context = raw_data[0], None

# Method 2 #
processed_data, context = (
    tuple_returner(raw_data)
    if conditional_expression
    else (raw_data[0], None)
)
</code></pre>
<p>Now I have tried both ways, and both work well as expected.</p>
<p>My question is two-fold:</p>
<ol>
<li><p>Is either method likely to be more <strong>performant</strong> than the other (in a standard CPython implementation)? I do not notice any substantial differences when running <code>timeit</code> tests (although method 2 seems ever so slightly slower - <em>see bottom snippet</em>), but I was wondering if anything deeper is happening under-the-hood that I'm not aware about.</p>
</li>
<li><p>Is either method more <strong>Pythonic</strong> and generally preferred by the community than the other? I couldn't see anything specific inside <strong>PEP8</strong> to guide me on this. Both seem readable to me but this code-base will last a long time and be worked on by several contributors, so would like to adhere to best practices.</p>
</li>
</ol>
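<p>One stdlib way to see whether anything deeper happens under the hood is to compare the compiled bytecode of the two forms — this sketch is added for illustration and is not part of the original question:</p>

```python
# A stdlib sketch comparing the two forms: `dis` shows the bytecode CPython
# actually runs, and the asserts confirm both forms agree on every branch
# before any timing comparison matters.
import dis

def tuple_returner(data):
    return data * 2, data * 3

def method1(conditional_expression, raw_data):
    if conditional_expression:
        processed_data, context = tuple_returner(raw_data)
    else:
        processed_data, context = raw_data, None
    return processed_data, context

def method2(conditional_expression, raw_data):
    processed_data, context = (
        tuple_returner(raw_data) if conditional_expression else (raw_data, None)
    )
    return processed_data, context

assert method1(True, 5) == method2(True, 5) == (10, 15)
assert method1(False, 5) == method2(False, 5) == (5, None)

dis.dis(method1)  # branch, then two separate assignment/unpack paths
dis.dis(method2)  # branch inside one expression, then a single unpack
```

<p>If the bytecode differs only in instruction ordering, any timing gap is likely noise rather than a structural cost.</p>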
<p>FWIW the actual code is part of a large <strong>Django</strong> project, involving some complex business logic inside a Celery task, and this paradigm is repeated a lot throughout the tasks where we process data and maintain context <em>(and sometimes we unpack far more than just two values)</em>. Therefore if there are Django-specific stylistic considerations then that would be useful to know!</p>
<hr />
<p>For those who want to see my <code>timeit</code> evaluation, see below:</p>
<pre class="lang-py prettyprint-override"><code>import timeit
import random
import statistics


def tuple_returner(data):
    return data * 2, data * 3


def method1():
    conditional_expression = random.choice([True, False])
    raw_data = random.randint(1, 1000)
    if conditional_expression:
        processed_data, context = tuple_returner(raw_data)
    else:
        processed_data, context = raw_data, None


def method2():
    conditional_expression = random.choice([True, False])
    raw_data = random.randint(1, 1000)
    processed_data, context = tuple_returner(raw_data) if conditional_expression else (raw_data, None)


times1 = timeit.repeat(method1, number=10000000, repeat=5)
times2 = timeit.repeat(method2, number=10000000, repeat=5)

print("Method 1: mean = {0}, median = {1}".format(statistics.mean(times1), statistics.median(times1)))
## -> Output: Method 1: mean = 10.77958505996503, median = 10.693453999934718
print("Method 2: mean = {0}, median = {1}".format(statistics.mean(times2), statistics.median(times2)))
## -> Output: Method 2: mean = 11.350917899981141, median = 11.356302100000903
</code></pre>
| <python><django> | 2023-10-18 14:13:21 | 1 | 780 | KingRanTheMan |
77,317,117 | 19,024,379 | pyparsing how to express functions inside an infixNotation | <p>I'm building a recursive grammar using pyparsing, to express a series of possible operations, but I've hit an issue with how to express functions. I wish to describe an operator that has a name, followed by a comma delimited set of arguments:</p>
<pre class="lang-py prettyprint-override"><code>import pyparsing as pp
test_string = "f(x, g(h(i,j)))"
expr = pp.Forward()
variable = pp.Word(pp.alphas)
term = pp.infixNotation(
    variable, ...  # Insert operators here
)
expr <<= term
expr.parse_string(test_string)
</code></pre>
<p>I originally came up with a version where I added a comma operator, but this unfortunately also recognises <code>x,y,z</code> without it being inside a function call.</p>
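<p>A sketch of one possible grammar (my assumption, not the asker's solution): make a function call its own rule — a name followed by a parenthesised, comma-delimited list of sub-expressions — so a bare <code>x,y,z</code> never matches:</p>

```python
import pyparsing as pp

# Commas only appear inside a function call's argument list, so top-level
# comma-separated names are rejected.
expr = pp.Forward()
name = pp.Word(pp.alphas)
func_call = pp.Group(
    name + pp.Suppress("(") + pp.Group(pp.delimitedList(expr)) + pp.Suppress(")")
)
expr <<= func_call | name

result = expr.parse_string("f(x, g(h(i,j)))", parse_all=True)
print(result.as_list())

try:
    expr.parse_string("x,y,z", parse_all=True)
    bare_list_accepted = True
except pp.ParseException:
    bare_list_accepted = False
print(bare_list_accepted)
```

<p>The function-call rule would still compose with <code>infixNotation</code> by using <code>func_call | name</code> as the base operand instead of <code>variable</code> alone.</p>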
| <python><pyparsing> | 2023-10-18 14:10:40 | 1 | 1,172 | Mark |
77,317,085 | 14,044,486 | How do I make a file "executable" in github actions? | <p>I build a little command-line tool using python, and have been trying to use Github Actions to build it for the various different platforms. To build the project into a standalone binary, I am using the <a href="https://github.com/Nuitka/Nuitka-Action" rel="nofollow noreferrer">Nuitka Action</a>, and then I'm releasing it using the <a href="https://github.com/softprops/action-gh-release" rel="nofollow noreferrer">Github Release</a>. The relevant parts of the <code>.yml</code> file looks like this:</p>
<pre><code>      - name: Build python script into a stand-alone binary (ubuntu)
        if: startsWith(matrix.os, 'ubuntu')
        uses: Nuitka/Nuitka-Action@main
        with:
          working-directory: src
          nuitka-version: main
          script-name: directory/main.py
          onefile: true
          output-file: main.bin

      - name: Make binary executable
        run: chmod +x ./src/build/main.bin

      - name: Upload Artifact
        uses: actions/upload-artifact@v3
        with:
          name: ${{ matrix.os }} build
          path: src/build/main.bin

      - name: Release
        uses: softprops/action-gh-release@v1
        with:
          files: src/build/main.bin
</code></pre>
<p>When I push the appropriate tag, the correct action gets triggered, and it creates a Release, builds my binary and attaches it.</p>
<p>The issue is that the file I download from the release is not marked as executable. I literally need to locally <code>chmod +x</code> the file I've downloaded, otherwise it opens and looks like this:</p>
<p><a href="https://i.sstatic.net/DGKzR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DGKzR.png" alt="A screenshot of the file internals that looks like corrupted data" /></a></p>
<p>You can see that I've tried adding a "Make binary executable" step, but this doesn't seem to fix my issue. How do I force my file to remember it's supposed to be an executable upon download?</p>
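<p>One plausible mechanism (my assumption — whether the artifact or release pipeline round-trips the file through a zip archive would need verifying) is that zip archives store the Unix mode bits but common extraction paths do not restore them. A stdlib sketch:</p>

```python
# The zip format keeps Unix mode bits in each entry's external_attr, but
# plain ZipFile.extract() does not restore them, so an archived executable
# comes back without its +x bit. Paths and payload here are illustrative.
import os
import stat
import tempfile
import zipfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())
binary = tmp / "main.bin"
binary.write_bytes(b"\x7fELF stand-in payload")
binary.chmod(0o755)  # mark executable before archiving

archive = tmp / "build.zip"
with zipfile.ZipFile(archive, "w") as zf:
    zf.write(binary, "main.bin")

out = tmp / "out"
with zipfile.ZipFile(archive) as zf:
    zf.extract("main.bin", out)
    mode_in_zip = zf.getinfo("main.bin").external_attr >> 16

extracted_mode = stat.S_IMODE(os.stat(out / "main.bin").st_mode)
print(oct(stat.S_IMODE(mode_in_zip)))  # the archive remembered the mode bits
print(oct(extracted_mode))             # the extracted file did not get them back
```

<p>If that is the cause, packing the binary into a tarball before upload (tar preserves and restores permissions) is one common workaround.</p>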
| <python><github><github-actions><nuitka> | 2023-10-18 14:07:01 | 1 | 593 | Drphoton |
77,317,072 | 2,497,309 | Download docx file created in Flask using VueJS | <p>I'm trying to figure out how to download a docx file in VueJS. I initially tried to create it in the frontend but the file ends up being corrupt so I'm using flask to create the docx file (which when I save locally opens just fine) and then sending it over to the frontend where I want to download it but the file ends up being corrupted.</p>
<p>For the Flask API this is what I use to create the file:</p>
<pre><code>def download_docx():
    try:
        data = request.get_json()
        write_to_docx = data.get('docxData')

        document = Document()
        new_parser = HtmlToDocx()
        new_parser.add_html_to_document(write_to_docx, document)

        docx_file = BytesIO()
        file_name = "test.docx"
        document.save(docx_file)
        docx_file.seek(0)

        return send_file(docx_file,
                         attachment_filename=file_name,
                         as_attachment=True,
                         mimetype='application/vnd.openxmlformats-officedocument.wordprocessingml.document',
                         cache_timeout=0)
    except Exception as e:
        print(traceback.print_exc())
        return render_template('index.html')
</code></pre>
<p>For the VueJS side I have:</p>
<pre><code>export function downloadDocx(token, data) {
  const headers = authHeader(token)
  return mainAxios.post(`${API_URL}/download/docx`, data, { headers })
}

downloadDocx() {
  let data = {
    docxData: this.fullHtml
  }
  this.$store.dispatch('downloadDocx', data)
    .then(response => {
      if (response.data && response.status == 200) {
        const blob = new Blob([ response.data ], { type: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document' })
        const url = URL.createObjectURL(blob);
        var element = document.createElement('a');
        element.setAttribute('href', url);
        element.setAttribute('download', 'test.docx');
        element.style.display = 'none';
        document.body.appendChild(element);
        element.click();
        document.body.removeChild(element);
      }
      else {
        console.log("FAILED")
      }
    })
}
</code></pre>
| <javascript><python><vue.js><flask> | 2023-10-18 14:05:09 | 1 | 947 | asm |
77,317,045 | 4,451,521 | My dash subplots are too flat. How can I enlarge the Y axis? | <p>I have used python dash to make some subplots but when I run it it is like this</p>
<p><a href="https://i.sstatic.net/a1NnU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a1NnU.png" alt="enter image description here" /></a></p>
<p>You can see that the Y axis is too small compared to the X axis.
I don't want to make the scales equal, since the values on the two axes are quite different, but for visualization purposes the plot should be a bit more square,
like this:</p>
<p><a href="https://i.sstatic.net/7ONwR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7ONwR.png" alt="enter image description here" /></a></p>
<p>Notice that in the second one the Y is more elongated</p>
<p>How can I do this?</p>
| <python><plotly-dash> | 2023-10-18 14:02:39 | 0 | 10,576 | KansaiRobot |
77,317,040 | 1,034,637 | Transform.orient function in python Math3D ver 4.0.0 | <p>In Python math3d version 4.0.0, which function corresponds to the transform.orient function from version 3.4.1?</p>
| <python> | 2023-10-18 14:02:23 | 0 | 10,844 | Itay Gal |
77,316,817 | 9,137,547 | Python sibling relative import error: 'no known parent package' | <p>I want to import a module from a subpackage so I went here <a href="https://stackoverflow.com/questions/14057464/relative-importing-modules-from-parent-folder-subfolder">Relative importing modules from parent folder subfolder</a> and since it was not working I read all the literature here on stack and found a smaller problem that I can reproduce but cannot solve.</p>
<p>I want to use relative imports because I don't wanna deal with sys.path etc and I don't wanna install every module of my project to be imported everywhere. I wanna make it work with relative imports.</p>
<p>My project structure:</p>
<pre><code>project/
    __init__.py
    bar.py
    foo.py
    main.py
</code></pre>
<p>bar.py:</p>
<pre><code>from .foo import Foo

class Bar():
    @staticmethod
    def get_foo():
        return Foo()
</code></pre>
<p>foo.py:</p>
<pre><code>class Foo():
    pass
</code></pre>
<p>main.py:</p>
<pre><code>from bar import Bar

def main():
    f = Bar.get_foo()

if __name__ == '__main__':
    main()
</code></pre>
<p>I am running the project code from terminal with <code>python main.py</code> and I get the following:</p>
<pre><code>Traceback (most recent call last):
  File "**omitted** project/main.py", line 1, in &lt;module&gt;
    from bar import Bar
  File "**omitted** project/bar.py", line 1, in &lt;module&gt;
    from .foo import Foo
ImportError: attempted relative import with no known parent package
</code></pre>
<p>Why am I getting this error? It seems that bar doesn't recognize project as the parent package but:</p>
<ul>
<li><code>__init__.py</code> is in place</li>
<li><code>bar.py</code> is being run as a module not a script since it is called from main and not from the command line (so <code>__package__</code> and <code>__name__</code> should be in place to help solve the relative import <a href="https://stackoverflow.com/questions/14132789/relative-imports-for-the-billionth-time">Relative imports for the billionth time</a>)</li>
</ul>
<p>Why am I getting this error? What am I getting wrong? I have worked for a while just adding the parent of cwd to the PYTHONPATH but I wanna fix this once and for all.</p>
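<p>An illustration I'm adding (not the asker's code) of the mechanism in play: <code>__package__</code> is derived from how the module was <em>started</em>, not from the directory layout, so the very same file fails as a script and succeeds under <code>-m</code>:</p>

```python
# Build a throwaway package and run bar.py two ways: directly as a script
# (loaded as a top-level module, so the relative import has no parent
# package) and as project.bar via -m (so __package__ == "project").
import subprocess
import sys
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
pkg = root / "project"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "foo.py").write_text("class Foo:\n    pass\n")
(pkg / "bar.py").write_text("from .foo import Foo\n")

as_script = subprocess.run([sys.executable, str(pkg / "bar.py")],
                           capture_output=True, text=True)
print("script run ok:", as_script.returncode == 0)

as_module = subprocess.run([sys.executable, "-m", "project.bar"],
                           cwd=root, capture_output=True, text=True)
print("module run ok:", as_module.returncode == 0)
```

<p>This suggests the presence of <code>__init__.py</code> alone is not enough: running <code>python main.py</code> imports <code>bar</code> as a top-level module, so <code>python -m project.main</code> from the parent directory is the invocation that makes the package context available.</p>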
| <python><python-import><relative-import> | 2023-10-18 13:34:26 | 3 | 659 | Umberto Fontanazza |
77,316,751 | 512,480 | How to Actually pass parameters from API Gateway test screen to Lambda | <p>I am learning API Gateway. I have created an API method that is linked to a lambda function. I can demonstrate that the lambda function is actually called in a timely fashion. However, when I put a query string in the "Query strings" space in the test screen and then press the Test button, the information put there does not arrive to the Lambda function. No matter what I put in there, the event handed to the Lambda comes up empty, and returns as empty. What am I doing wrong?</p>
<p>Lambda code:</p>
<pre><code>def lambda_handler(event, context):
    print("hello")
    print("event:", event)
    print("hello")
    return event
</code></pre>
<p>Log:</p>
<pre><code>INIT_START Runtime Version: python:3.11.v14 Runtime Version ARN: arn:aws:lambda:us-west-2::runtime:9c87c21a94b293e1a306aad2c23cfa6928e7a79a3d3356158b15f4cbe880b390
START RequestId: 459979a0-f0f5-4571-9198-a5bb57ff1943 Version: $LATEST
hello
event: {}
hello
END RequestId: 459979a0-f0f5-4571-9198-a5bb57ff1943
REPORT RequestId: 459979a0-f0f5-4571-9198-a5bb57ff1943 Duration: 1.72 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 72 MB Init Duration: 316.97 ms
</code></pre>
<p>API setup:<a href="https://i.sstatic.net/WsM2r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WsM2r.png" alt="enter image description here" /></a></p>
<p>API test: <a href="https://i.sstatic.net/8LZgy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8LZgy.png" alt="enter image description here" /></a></p>
| <python><aws-lambda><aws-api-gateway> | 2023-10-18 13:25:29 | 1 | 1,624 | Joymaker |
77,316,603 | 12,226,377 | Extract text from Word document and store in an Excel file using python | <p>I have a Word document that is a standard template used for our wider meetings. There are two columns in my Word document: the first column has all the headers, and the second column holds the corresponding details. I have attached a screenshot to show the structure of the document.<a href="https://i.sstatic.net/7zyJn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7zyJn.png" alt="enter image description here" /></a></p>
<p>I would now like to extract the text from both of the columns using python and store them in a dataframe. The resultant dataframe should look like the following:</p>
<pre><code>Title   In Force?   Date         Who attended the event?
Test    Yes         03/10/1999   X, Y
</code></pre>
<p>How can I achieve this?</p>
| <python><ocr><docx><text-parsing> | 2023-10-18 13:07:14 | 1 | 807 | Django0602 |
77,316,538 | 19,128,410 | How can you format y-axis differently in each subplot? | <p>I have the following piece of code that draws some subplots:</p>
<pre><code>from matplotlib import pyplot as plt
from matplotlib import ticker
import seaborn as sns
import pandas as pd
import numpy as np

df = pd.DataFrame(data=[[0, 1], [1, 0]], columns=['prc_col', 'money_col'])

fig, axs = plt.subplots(1, 2)
fig.tight_layout(pad=5.0)

formatting_dict = {'prc_col': lambda num: f'{round(num)}%', 'money_col': lambda num: f'${round(num)}'}

for idx, col in enumerate(df.columns):
    ax = axs[idx]
    g = sns.boxplot(data=df, y=col, showfliers=False, ax=ax)
    g.yaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: formatting_dict[col](x)))

plt.show()
</code></pre>
<p>and I would like to see y-axis for boxplot corresponding to prc_column to be formatted as percentages and y-axis for money_col to be formatted as money. However, both plots get formatted as money, as seen in the picture below.</p>
<p><img src="https://i.sstatic.net/6bueo.png" alt="Output I get" /></p>
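<p>A stdlib sketch I'm adding (not part of the question) of Python's late-binding closures, which may be a factor here, since Matplotlib calls formatters at draw time, after the loop has finished:</p>

```python
# Every lambda created in the loop reads the *current* value of `col` when
# it is finally called, not the value it had when the lambda was created.
late = []
for col in ['prc_col', 'money_col']:
    late.append(lambda x: (col, x))
print([f(1) for f in late])    # both lambdas now see col == 'money_col'

# Binding the loop variable as a default argument freezes it per iteration.
bound = [lambda x, col=col: (col, x) for col in ['prc_col', 'money_col']]
print([f(1) for f in bound])   # each lambda keeps its own col
```

<p>If this is what is happening in the plot code, both formatters end up consulting <code>formatting_dict['money_col']</code> by the time the axes are drawn.</p>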
| <python><matplotlib><lambda><seaborn> | 2023-10-18 12:58:40 | 0 | 311 | Sanjin Juric Fot |
77,316,505 | 2,110,805 | I'm having a bad time installing some modules with poetry | <p><strong>First, a bit of context</strong></p>
<p>I have an old project managed with poetry and I wanted to install new modules. I tried installing them with <code>poetry add <mymodule></code>... but it failed, <code>poetry update</code> failed, everything failed... So, after a big headache time, as always with poetry, I decided to start from scratch.</p>
<p><strong>What I did</strong></p>
<p>I installed the latest python 3.12</p>
<pre><code>sudo add-apt-repository ppa:deadsnakes/ppa -y
sudo apt install python3.12-full
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.12 1
</code></pre>
<p>Then, the latest poetry (1.6.1) and created a new project</p>
<pre><code>poetry 1.6.1: curl -sSL https://install.python-poetry.org | python3 -
poetry env remove <my_old_env>
poetry init (I selected python 3.12)
poetry env use /usr/bin/python3.12
poetry add <all_my_needed_modules>
</code></pre>
<p><strong>And it failed again....</strong></p>
<p>So I tried adding modules one by one and, at the end, I had 2 failing modules : <code>pyarrow</code> and <code>pytickersymbols</code>.</p>
<p>For <code>pytickersymbols</code>, it was <code>pyyaml</code> who failed when downgrading from 6.0.1 to 6.0</p>
<p>For <code>pyarrow</code>, poetry said it was a problem with wheel (and, as a matter of fact, it said the same thing with <code>pyyaml</code> 6.0)</p>
<p>With both of them, I had a message at the end :</p>
<pre><code>Note: This error originates from the build backend, and is likely not a problem with poetry but with pyarrow (13.0.0) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "pyarrow (==13.0.0)"'
</code></pre>
<p>Same for pyyaml (<code>pip wheel --use-pep517 "pyyaml (==6.0)"</code>)</p>
<p><strong>Really?</strong></p>
<p>Well, I tried these 2 commands, <code>pip wheel --use-pep517 "pyarrow (==13.0.0)"</code> and <code>pip wheel --use-pep517 "pyyaml (==6.0)"</code>... and it downloaded the .whl files just fine. So, after a bit of research, I found that I could force these files in the <code>pyproject.toml</code> configuration file:</p>
<pre><code>[tool.poetry.dependencies]
pyyaml = { file = "whl/PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl" }
pytickersymbols = "^1.13.0"
pyarrow = { file = "whl/pyarrow-13.0.0-cp310-cp310-manylinux_2_28_x86_64.whl" }
</code></pre>
<p>And they installed without complaining.</p>
<p><strong>And now?</strong></p>
<p>It seems to work for <code>pyyaml</code>/<code>pytickersymbols</code>, but <code>pyarrow</code> is not functional: <code>ImportError: Missing optional dependency 'pyarrow'. pyarrow is required for parquet</code></p>
<p>I don't know what to do from here... How can I install <code>pyarrow</code>?</p>
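<p>A stdlib sketch of my assumption about the failure (not stated in the question): the forced wheel filenames target CPython 3.10 (<code>cp310</code>), so their compiled extension modules cannot load under a Python 3.12 environment, which would leave <code>pyarrow</code> "installed" yet unimportable:</p>

```python
# Wheel filenames encode the interpreter they were built for; comparing the
# wheel's python tag with the running interpreter's tag makes the mismatch
# visible. The filename below is the one forced in pyproject.toml.
import sys

wheel = "pyarrow-13.0.0-cp310-cp310-manylinux_2_28_x86_64.whl"
wheel_python_tag = wheel.split("-")[2]  # 'cp310' -> built for CPython 3.10
this_python_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"

print(wheel_python_tag)
print(this_python_tag)
print("compatible:", wheel_python_tag == this_python_tag)
```

<p>If that assumption is right, a <code>cp312</code> wheel (or a version of pyarrow that publishes one) would be needed rather than forcing the cp310 file.</p>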
<p><strong>[EDIT]</strong>
I guess I don't understand the installation mechanisms, but it seems poetry does not download wheels and instead tries to compile the modules:</p>
<pre><code>$ poetry add pyarrow --python=3.12
Using version ^13.0.0 for pyarrow
Updating dependencies
Resolving dependencies... (0.5s)
Package operations: 1 install, 0 updates, 0 removals
• Installing pyarrow (13.0.0): Failed
ChefBuildError
Backend subprocess exited when trying to invoke build_wheel
<string>:34: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
WARNING setuptools_scm.pyproject_reading toml section missing 'pyproject.toml does not contain a tool.setuptools_scm section'
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-312
creating build/lib.linux-x86_64-cpython-312/pyarrow
copying pyarrow/filesystem.py -> build/lib.linux-x86_64-cpython-312/pyarrow
copying pyarrow/pandas_compat.py -> build/lib.linux-x86_64-cpython-312/pyarrow
(...)
copying pyarrow/src/arrow/python/visibility.h -> build/lib.linux-x86_64-cpython-312/pyarrow/src/arrow/python
running build_ext
creating /tmp/tmpbsvpo2s2/pyarrow-13.0.0/build/temp.linux-x86_64-cpython-312
-- Running cmake for PyArrow
cmake -DCMAKE_INSTALL_PREFIX=/tmp/tmpbsvpo2s2/pyarrow-13.0.0/build/lib.linux-x86_64-cpython-312/pyarrow -DPYTHON_EXECUTABLE=/tmp/tmpbfzby4k_/.venv/bin/python -DPython3_EXECUTABLE=/tmp/tmpbfzby4k_/.venv/bin/python -DPYARROW_CXXFLAGS= -DPYARROW_BUILD_CUDA=off -DPYARROW_BUILD_SUBSTRAIT=off -DPYARROW_BUILD_FLIGHT=off -DPYARROW_BUILD_GANDIVA=off -DPYARROW_BUILD_ACERO=off -DPYARROW_BUILD_DATASET=off -DPYARROW_BUILD_ORC=off -DPYARROW_BUILD_PARQUET=off -DPYARROW_BUILD_PARQUET_ENCRYPTION=off -DPYARROW_BUILD_GCS=off -DPYARROW_BUILD_S3=off -DPYARROW_BUILD_HDFS=off -DPYARROW_BUNDLE_ARROW_CPP=off -DPYARROW_BUNDLE_CYTHON_CPP=off -DPYARROW_GENERATE_COVERAGE=off -DCMAKE_BUILD_TYPE=release /tmp/tmpbsvpo2s2/pyarrow-13.0.0
-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- System processor: x86_64
-- Performing Test CXX_SUPPORTS_SSE4_2
-- Performing Test CXX_SUPPORTS_SSE4_2 - Success
-- Performing Test CXX_SUPPORTS_AVX2
-- Performing Test CXX_SUPPORTS_AVX2 - Success
-- Performing Test CXX_SUPPORTS_AVX512
-- Performing Test CXX_SUPPORTS_AVX512 - Success
-- Arrow build warning level: PRODUCTION
-- Using ld linker
-- Build Type: RELEASE
-- CMAKE_C_FLAGS: -Wall -fno-semantic-interposition -msse4.2 -fdiagnostics-color=always -fno-omit-frame-pointer -Wno-unused-variable -Wno-maybe-uninitialized
-- CMAKE_CXX_FLAGS: -Wno-noexcept-type -Wall -fno-semantic-interposition -msse4.2 -fdiagnostics-color=always -fno-omit-frame-pointer -Wno-unused-variable -Wno-maybe-uninitialized
-- Generator: Unix Makefiles
-- Build output directory: /tmp/tmpbsvpo2s2/pyarrow-13.0.0/build/temp.linux-x86_64-cpython-312/release
CMake Error at /usr/share/cmake-3.22/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
Could NOT find Python3 (missing: Python3_INCLUDE_DIRS Development.Module
NumPy) (found version "3.12.0")
Call Stack (most recent call first):
/usr/share/cmake-3.22/Modules/FindPackageHandleStandardArgs.cmake:594 (_FPHSA_FAILURE_MESSAGE)
/usr/share/cmake-3.22/Modules/FindPython/Support.cmake:3180 (find_package_handle_standard_args)
/usr/share/cmake-3.22/Modules/FindPython3.cmake:490 (include)
cmake_modules/FindPython3Alt.cmake:51 (find_package)
CMakeLists.txt:255 (find_package)
-- Configuring incomplete, errors occurred!
See also "/tmp/tmpbsvpo2s2/pyarrow-13.0.0/build/temp.linux-x86_64-cpython-312/CMakeFiles/CMakeOutput.log".
error: command '/usr/bin/cmake' failed with exit code 1
at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/poetry/installation/chef.py:147 in _prepare
143│
144│ error = ChefBuildError("\n\n".join(message_parts))
145│
146│ if error is not None:
→ 147│ raise error from None
148│
149│ return path
150│
151│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
Note: This error originates from the build backend, and is likely not a problem with poetry but with pyarrow (13.0.0) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "pyarrow (==13.0.0)"'.
</code></pre>
<p>And why "python3.10" in <code>at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/poetry/installation/chef.py</code>?</p>
| <python><linux><python-poetry> | 2023-10-18 12:55:44 | 0 | 14,653 | Cyrille |
77,316,494 | 16,383,578 | PyQt6 how to update 96 widgets simultaneously as fast as possible? | <p>This is about my GUI application.</p>
<p><a href="https://i.sstatic.net/7xH67.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7xH67.png" alt="enter image description here" /></a></p>
<p>What you are seeing is the window I have created to let the user customize the style of the application. I don't know if it looks good, but I did my best to make it look the best according to my aesthetics.</p>
<p>The window is split into 9 areas, each area is used to change the style of a group of widgets that are related. The preview of the widgets is on the left in each area, on the right of each area is the scroll-area containing widgets that change a certain aspect of the style.</p>
<p>The available options for customization, i.e. the attributes of the widgets' styles that can be changed, are the border-style, border-color, text-color, and the background color if the widget uses a flat background, or else the start and middle colors of the gradient if the background uses one. The gradient has 3 stops; the first and the last stops use the same color.</p>
<p>These attributes can be changed independently for each state of all possible states the widget can be in.</p>
<p>There are two types of widgets that change the styles. One type changes the color, the other changes border-style. The border-style is changed via a QComboBox, and the colors can be changed using both a QLineEdit and QColorDialog, the dialog is called by clicking the associated button.</p>
<p>Each group controls a key in the underlining dictionary that stores the style configurations, when the state of the group is changed, the corresponding key is changed and the style is compiled using a class and then the window is updated.</p>
<p>When the start button is clicked, its text will be changed to stop and many widgets in the preview area will be updated every 125 milliseconds. And the animation for the board is a live tic-tac-toe game between two AIs I wrote. Pressing the pause button and it becomes the resume button, and the animation pauses. Pressing the resume button and the animation resumes. Pressing the stop button and everything stops and is reset.</p>
<p>Pressing the randomize button generates a random style, all 96 groups that control the style are updated at once, and the style is applied, like so:</p>
<p><a href="https://i.sstatic.net/jg1HD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jg1HD.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/PSpog.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PSpog.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/V1YEJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V1YEJ.png" alt="enter image description here" /></a></p>
<p>It works; the problem is that updating all these widgets simultaneously causes stuttering: the window stops responding for a few seconds and only then repaints. I want to eliminate this lag.</p>
<p>Due to the size of the application I won't post the full code here. Also, because the problem only arises when 96 widgets are updated simultaneously, I won't provide a minimal reproducible example. The application isn't complete and isn't working as intended, so I cannot post it on Code Review just yet; in truth it is almost complete, and I will post it there when it is.</p>
<p>I will show some fragments of code related to this issue; you won't be able to run them on their own, so I uploaded the complete project to Google Drive so that you can run it: <a href="https://drive.google.com/file/d/1vnI9eBmYesFw6G0mI1Wqjb3382vCr5qG/view?usp=sharing" rel="nofollow noreferrer">link</a>. Everything I have implemented so far works; there are no known bugs.</p>
<pre><code>import asyncio
import qasync
import random
import sys
from concurrent.futures import ThreadPoolExecutor


class ColorGetter(Box):
    instances = []

    def __init__(self, widget: str, key: str, name: str):
        super().__init__()
        ColorGetter.instances.append(self)
        self.config = CONFIG[widget] if widget else CONFIG
        self.key = key
        self.name = name
        self.set_color(self.config[key])
        self.init_GUI()
        self.button.clicked.connect(self._show_color)
        self.picker.accepted.connect(self.pick_color)
        self.color_edit.returnPressed.connect(self.edit_color)

    def init_GUI(self):
        self.color_edit = ColorEdit(self.color_text)
        self.button = Button(self.color_text)
        self.vbox = make_vbox(self)
        self.vbox.addWidget(Label(self.name))
        self.vbox.addWidget(self.color_edit)
        self.vbox.addWidget(self.button)
        self.picker = ColorPicker()

    def set_color(self, text: str):
        self.color = [int(text[a:b], 16) for a, b in ((1, 3), (3, 5), (5, 7))]

    @property
    def color_text(self):
        r, g, b = self.color
        return f"#{r:02x}{g:02x}{b:02x}"

    def _show_color(self):
        self.picker.setCurrentColor(QColor(*self.color))
        self.picker.show()

    def _update(self):
        self.button.setText(self.color_text)
        self.config[self.key] = self.color_text
        GLOBALS["Window"].update_style()

    def pick_color(self):
        self.color = self.picker.currentColor().getRgb()[:3]
        self.color_edit.setText(self.color_text)
        self.color_edit.color = self.color_text
        self._update()

    def edit_color(self):
        text = self.color_edit.text()
        self.color_edit.color = text
        self.set_color(text)
        self.color_edit.clearFocus()
        self._update()

    def sync_config(self):
        color_html = self.config[self.key]
        self.color = [int(color_html[a:b], 16) for a, b in ((1, 3), (3, 5), (5, 7))]
        self.color_edit.setText(color_html)
        self.color_edit.color = color_html
        self.button.setText(color_html)


class BorderStylizer(Box):
    instances = []

    def __init__(self, widget: str, key: str, name: str):
        super().__init__()
        BorderStylizer.instances.append(self)
        self.config = CONFIG[widget] if widget else CONFIG
        self.key = key
        self.name = name
        self.borderstyle = self.config[key]
        self.init_GUI()

    def init_GUI(self):
        self.vbox = make_vbox(self)
        self.vbox.addWidget(Label(self.name))
        self.combobox = ComboBox(BORDER_STYLES)
        self.vbox.addWidget(self.combobox)
        self.combobox.setCurrentText(self.borderstyle)
        self.combobox.currentTextChanged.connect(self._update)

    def _update(self):
        self.borderstyle = self.combobox.currentText()
        self.config[self.key] = self.borderstyle
        GLOBALS["Window"].update_style()

    def sync_config(self):
        self.borderstyle = self.config[self.key]
        self.combobox.setCurrentText(self.borderstyle)


async def sync_config():
    loop = asyncio.get_event_loop()
    with ThreadPoolExecutor(max_workers=64) as executor:
        await asyncio.gather(
            *(
                loop.run_in_executor(executor, instance.sync_config)
                for instance in ColorGetter.instances + BorderStylizer.instances
            )
        )


def random_style():
    for entry in CONFIG.values():
        for k in entry:
            entry[k] = (
                random.choice(BORDER_STYLES)
                if k == "borderstyle"
                else f"#{random.randrange(16777216):06x}"
            )
    asyncio.run(sync_config())
    GLOBALS["Window"].update_style()
    GLOBALS["Window"].qthread.change.emit()


if __name__ == "__main__":
    app = QApplication(sys.argv)
    app.setStyle("Fusion")
    if sys.platform == "win32":
        asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
    loop = qasync.QEventLoop(app)
    asyncio.set_event_loop(loop)
</code></pre>
<p>I tried to eliminate the lagging by making the update run asynchronously, but it doesn't work very well.</p>
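One direction I am considering, since Qt widgets may only be touched from the GUI thread (so threads cannot remove the cost): collapse the repeated <code>update_style()</code> calls into a single recompile. Below is a runnable sketch where <code>StyleWindow</code> is a hypothetical stand-in for my real window, not code from the project:

```python
from contextlib import contextmanager

class StyleWindow:
    """Hypothetical stand-in for the real window; update_style() models
    the expensive stylesheet recompilation and repaint."""
    def __init__(self):
        self.compiles = 0
        self._batching = False

    def update_style(self):
        if self._batching:       # suppressed while a batch is in progress
            return
        self.compiles += 1       # stands in for recompiling the stylesheet

    @contextmanager
    def batch_updates(self):
        self._batching = True
        try:
            yield
        finally:
            self._batching = False
            self.update_style()  # one recompile for the whole batch

window = StyleWindow()
with window.batch_updates():
    for _ in range(96):          # all 96 groups syncing at once
        window.update_style()    # each group's _update() would call this
print(window.compiles)           # 1 instead of 96
```

In the real application this could be combined with <code>setUpdatesEnabled(False)</code>/<code>(True)</code> around the loop so only one repaint happens.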
<p>What is a better solution?</p>
| <python><asynchronous><python-asyncio><pyqt6> | 2023-10-18 12:53:57 | 2 | 3,930 | Ξένη Γήινος |
77,316,382 | 2,248,271 | database locked when using sqlalchemy to_sql method | <p>I have a function to upload a dataframe to a SQL server, defined as follows:</p>
<pre><code>def uploadtoDatabase(data, columns, tablename):
    connect_string = urllib.parse.quote_plus(f'DRIVER={{ODBC Driver 17 for SQL Server}};Server=ServerName;Database=DatabaseName;Encrypt=yes;Trusted_Connection=yes;TrustServerCertificate=yes')
    engine = sqlalchemy.create_engine(f'mssql+pyodbc:///?odbc_connect={connect_string}', fast_executemany=False)

    # define data to upload and chunksize (max 2100 parameters)
    requireddata = columns
    chunksize = 1000 // len(requireddata)

    # convert boolean columns
    boolcolumns = data.select_dtypes(include=['bool']).columns
    data.loc[:, boolcolumns] = data[boolcolumns].astype(int)

    # convert objects to string
    objectcolumns = data.select_dtypes(include=['object']).columns
    data.loc[:, objectcolumns] = data[objectcolumns].astype(str)

    # load data
    with engine.connect() as connection:
        data[requireddata].to_sql(tablename, connection, index=False, if_exists='replace', schema='dbo')
    engine.dispose()
</code></pre>
<p>I have tried different options but every time I execute this insert (4M records, which takes a while), the database is locked. Other processes are waiting on the Python process to finish.</p>
<p>Is there any way to ensure other database transactions continue to execute while this function is running, other than creating a for loop and executing the function multiple times on smaller batches of the data?</p>
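For reference, <code>to_sql</code> has a built-in <code>chunksize</code> argument that splits the insert into smaller batches without an outer loop; it does not by itself change the table-level locking caused by <code>if_exists='replace'</code> on SQL Server, but it bounds each batch. A minimal sketch of the call shape against an in-memory SQLite database (standing in for the real SQL Server connection):

```python
import pandas as pd
import sqlalchemy as sa

# In-memory SQLite stands in for the real SQL Server engine here
engine = sa.create_engine("sqlite://")

df = pd.DataFrame({"a": range(10), "b": ["x"] * 10})

# chunksize bounds how many rows go into each INSERT batch; on SQL Server,
# rows_per_chunk * n_columns must stay below the 2100-parameter limit
df.to_sql("t", engine, index=False, if_exists="replace", chunksize=4)

with engine.connect() as conn:
    count = conn.execute(sa.text("SELECT COUNT(*) FROM t")).scalar()
print(count)  # 10
```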
| <python><sql-server><sqlalchemy> | 2023-10-18 12:36:55 | 1 | 6,020 | Wietze314 |
77,316,349 | 11,329,736 | Send email after Snakemake workflow finishes successfully on SLURM | <p>Is there a way to send an email after a <code>Snakemake</code> workflow finishes when using the <code>--slurm</code> option?
Something like this:</p>
<pre><code>onsuccess:
    email(example@hello.com, title="Snakemake job xxx finished successfully")

onerror:
    email(example@hello.com, title="Snakemake job xxx failed")
</code></pre>
<p>SLURM scripts have this option:</p>
<pre><code>#!/bin/bash
#SBATCH --mail-type=END,FAIL
</code></pre>
<p>I would not like an email after each rule has finished though, but after the whole workflow has finished.</p>
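From the Snakemake documentation, workflow-level <code>onsuccess</code> and <code>onerror</code> handlers run exactly once after the whole DAG has finished, and <code>shell()</code> is available inside them; a sketch assuming a <code>mail</code> command exists on the head node (the <code>{log}</code> variable holds the path to the workflow log):

```
onsuccess:
    shell("mail -s 'Snakemake workflow finished successfully' example@hello.com < {log}")

onerror:
    shell("mail -s 'Snakemake workflow failed' example@hello.com < {log}")
```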
| <python><python-3.x><shell><email><snakemake> | 2023-10-18 12:32:54 | 1 | 1,095 | justinian482 |
77,316,345 | 1,581,090 | How to use re.split in python but keep the splitting expression? | <p>I have a string</p>
<pre><code>a = "Example text 123"
</code></pre>
<p>which I want to split by the text <code>text</code></p>
<pre><code>b = re.split(r"text", a)
</code></pre>
<p>which gives</p>
<pre><code>['Example ', ' 123']
</code></pre>
<p>but I want to keep the text I used in the regex, so finally I have as result</p>
<pre><code>['Example ', 'text 123']
</code></pre>
<p>I can do that by</p>
<pre><code>parts = re.split(r"text", a)
myresult = [parts[0]] + ["text" + part for part in parts[1:]]
</code></pre>
<p>but maybe there is a better way to do that?</p>
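To make the goal concrete, here is a runnable sketch of two pattern tweaks that keep the delimiter: a capture group (the separators come back as their own list items) and a zero-width lookahead (the separator stays glued to the following piece):

```python
import re

a = "Example text 123"

# Capture group: the separators are returned as their own list items
print(re.split(r"(text)", a))    # ['Example ', 'text', ' 123']

# Zero-width lookahead: split *before* 'text', keeping it attached
print(re.split(r"(?=text)", a))  # ['Example ', 'text 123']
```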
| <python><regex> | 2023-10-18 12:32:32 | 1 | 45,023 | Alex |
77,316,276 | 16,739,843 | How to solve or debug that IPyWidgets are just partially shown? | <p>I have just installed Jupyter Notebooks and a new Python environment using <code>venv</code> on my vanilla Macbook (I have not used Jupyter on this hardware before). Now I am running the Jupyter Notebook, that is using IPyWidgets as input controls, and recognized that only some of the widgets are shown.</p>
<p>A minimal example, which reproduces the behaviour on my machine:</p>
<pre><code>import ipywidgets
display(
ipywidgets.Combobox(description='test1'),
ipywidgets.Checkbox(description='test2'),
ipywidgets.Button(description='test3')
)
</code></pre>
<p>This is what the output looks like on my machine:</p>
<p><a href="https://i.sstatic.net/dDY32.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dDY32.png" alt="enter image description here" /></a></p>
<p>And this is what it's supposed to look like (taken on another machine):</p>
<p><a href="https://i.sstatic.net/LmMfi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LmMfi.png" alt="enter image description here" /></a></p>
<p>As you can see, on my machine only the button is displayed (which also means that <code>ipywidgets</code> is essentially working). As no error is shown, I do not have any clue where to start debugging this nor how to solve it and would appreciate advice on how to continue from here.</p>
<p>I have already tried different browsers (namely Chrome & Safari) without any effect on the output.</p>
| <python><macos><jupyter-notebook><ipywidgets> | 2023-10-18 12:23:56 | 1 | 1,191 | Mathias Kemeter |
77,315,840 | 827,927 | How to use matplotlib for debugging, without making it a dependency | <p>I am developing a library of algorithms on graphs. During development, I regularly need to plot graphs generated during the algorithms, for debugging. I use matplotlib for this, so in my source code I need to <code>import matplotlib</code>.</p>
<p>But I do not want to make matplotlib a dependency of the library, because the library itself does not need it.</p>
<p>How can I use matplotlib during development, but not make it a dependency of the library?</p>
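A common pattern, sketched here under the assumption that the plotting helper lives inside the library: import matplotlib lazily inside the debug function, so the import only happens when the function is actually called, and declare matplotlib as an optional extra (e.g. a <code>[dev]</code> extra in <code>pyproject.toml</code>) rather than a runtime dependency:

```python
import importlib.util

def matplotlib_available() -> bool:
    # True if matplotlib could be imported, without actually importing it
    return importlib.util.find_spec("matplotlib") is not None

def debug_plot_graph(points):
    """Development-only helper; matplotlib is imported lazily so the
    library itself never depends on it at import time."""
    if not matplotlib_available():
        raise RuntimeError(
            "debug_plot_graph needs matplotlib; "
            "install it in your dev environment (e.g. pip install matplotlib)"
        )
    import matplotlib.pyplot as plt  # deferred, dev-only import
    xs, ys = zip(*points)
    plt.scatter(xs, ys)
    plt.show()

print(isinstance(matplotlib_available(), bool))  # True
```

With this layout, <code>pip install yourlibrary</code> never pulls matplotlib in, while calling the debug helper without it fails with a clear message.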
| <python><matplotlib> | 2023-10-18 11:21:42 | 1 | 37,410 | Erel Segal-Halevi |
77,315,780 | 13,222,679 | How to merge rows in dictionary python using pandas | <p>I have a dataframe like this:</p>
<pre><code>MV id NAME ADDRESS DOC DOCTYPE PHONE
1 100 Mark Home 299 NI {123,456}
2 100 John Work A123 Pass {789,101}
3 100 Club
</code></pre>
<p>What I want to do is merge the rows that have the same id into one dictionary per cell, like below, taking each dictionary's keys from another column:</p>
<pre><code>id NAME ADDRESS DOC PHONE
100 {1:Mark,2:John} {1:'Home',2:'Work',3:'Club'} {NI:'299',Pass:'A123'} {1:{123,456},2:{789,101}}
</code></pre>
<p>As you can see, I used two columns as keys: column <code>MV</code> is the key in <code>(Name, Address, Phone)</code>, and column <code>DOCTYPE</code> is the key in <code>DOC</code>. How can I do something like that? I tried this:</p>
<pre><code>agg = {'id': 'first', 'NAME': dict, 'ADDRESS': dict, 'PHONE': dict}
df_new = df.groupby(['id'], as_index=False).aggregate(agg)
return df_new
</code></pre>
<p>but it gave me this output</p>
<pre><code>id Name Address Phone
100 {0:Mark,1:John} {0:Home,1:Work,2:Club} {0:{123,456},1:{789,101}}
</code></pre>
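For what it's worth, a sketch of the idea with a trimmed-down frame (only <code>NAME</code> and <code>DOC</code>; the <code>keyed_dict</code> helper is hypothetical): build each cell's dict by zipping the value column with its key column, instead of <code>dict</code>, which uses the row index as keys.

```python
import pandas as pd

df = pd.DataFrame({
    'MV':      [1, 2, 3],
    'id':      [100, 100, 100],
    'NAME':    ['Mark', 'John', None],
    'DOC':     ['299', 'A123', None],
    'DOCTYPE': ['NI', 'Pass', None],
})

def keyed_dict(keys, values):
    # pair each value with the key taken from another column, skipping blanks
    return {k: v for k, v in zip(keys, values) if pd.notna(v)}

out = (
    df.groupby('id')
      .apply(lambda g: pd.Series({
          'NAME': keyed_dict(g['MV'], g['NAME']),
          'DOC':  keyed_dict(g['DOCTYPE'], g['DOC']),
      }))
      .reset_index()
)
print(out.loc[0, 'NAME'])  # {1: 'Mark', 2: 'John'}
print(out.loc[0, 'DOC'])   # {'NI': '299', 'Pass': 'A123'}
```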
| <python><pandas><dataframe><dictionary><group-by> | 2023-10-18 11:12:14 | 2 | 311 | Bahy Mohamed |
77,315,715 | 5,775,358 | Python subclass of abstract baseclass shows type error | <p>Pylance gives a type error even though the types of the variables are defined. Below is a small example which shows the structure of the code and also produces the error:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Generator, Any


@dataclass
class Base(ABC):
    data: list[str | int]

    def __iter__(self) -> Generator[str | int, Any, None]:
        for x in self.data:
            yield x

    def __getitem__(self, key) -> str | int:
        return self.data[key]

    def __len__(self) -> int:
        return len(self.data)

    @abstractmethod
    def some_general_function(self) -> None:
        pass


class Test1(Base):
    data: list[str]

    def some_general_function(self) -> None:
        print("General function from list with strings")


if __name__ == '__main__':
    test = Test1(['a', 'b', 'abc'])
    for ele in test:
        print(ele.capitalize)
</code></pre>
<p>The error is on the last line, the error says that the function <code>.capitalize</code> is not available for the type <code>int</code>.</p>
<p>In the implementation of the class (<code>Test1</code>) it is defined that the list data consist of elements with type <code>str</code>.</p>
<p>Also when this is tested in a <code>__post_init__</code> method in the <code>Test1</code> class the error still exists.</p>
<pre><code>class Test1(Base):
    data: list[str]

    def __post_init__(self):
        if not all(map(lambda x: isinstance(x, str), self.data)):
            raise TypeError("data should be a `str`")
</code></pre>
<p>The only solution is to rewrite the <code>__iter__</code> of the baseclass, since the type hint there says that the generator can be a <code>str</code> and an <code>int</code>.</p>
<p>Is it possible to rewrite only the return type (or yield type) of a function without rewriting the function itself?</p>
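One approach I am aware of (it changes the base class, but only its annotations, not the method bodies): make the element type a type variable, so a subclass pins it via <code>Base[str]</code> instead of overriding <code>__iter__</code>. A runnable sketch:

```python
from __future__ import annotations

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Generic, Iterator, TypeVar

# The element type becomes a parameter of Base instead of a fixed union
T = TypeVar("T", str, int)

@dataclass
class Base(ABC, Generic[T]):
    data: list[T]

    def __iter__(self) -> Iterator[T]:
        yield from self.data

    @abstractmethod
    def some_general_function(self) -> None: ...

class Test1(Base[str]):  # pins T = str, so iteration yields str for checkers
    def some_general_function(self) -> None:
        print("General function from list with strings")

test = Test1(['a', 'b', 'abc'])
print([ele.capitalize() for ele in test])  # ['A', 'B', 'Abc']
```

With this, Pylance sees <code>ele</code> as <code>str</code> inside the loop, and the <code>__iter__</code> body is written exactly once in the base class.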
| <python><typing><pylance> | 2023-10-18 11:03:17 | 1 | 2,406 | 3dSpatialUser |
77,315,541 | 7,052,826 | Plotly express bar re-orders the x-axis due to missing data | <p>I have some data which includes dates in a year-quarter format.</p>
<p>Sorting the dataframe works fine, however when plotting the data, <code>Plotly</code> automatically re-orders the x-axis, by placing data with missing values at the end, instead of adhering to the desired order.</p>
<pre><code># Example data that is not yet ordered on 'Date'
import pandas as pd
import plotly.express as px

df = pd.DataFrame([
    ['2021-Q4', 'A', 1],
    ['2021-Q4', 'B', 5],
    ['2022-Q1', 'B', 5],
    ['2023-Q2', 'B', 3],
    ['2023-Q3', 'B', 16],
    ['2022-Q2', 'B', 4],
    ['2022-Q2', 'A', 1],
    ['2022-Q3', 'B', 5],
    ['2022-Q4', 'B', 6],
    ['2022-Q4', 'A', 4],
    ['2023-Q1', 'A', 1],
    ['2023-Q1', 'B', 9],
    ['2023-Q3', 'A', 1]
], columns=['Date', 'Type', 'Count'])

# we explicitly order the data;
# note that 2022-Q1 is now between 2021-Q4 and 2022-Q2
df = df.sort_values('Date', key=lambda e: e.str.replace('Q', ''))

# Now plot; note the broken chronology on the x-axis, i.e., 2022-Q1 at the end
fg = px.bar(df, x="Date", y="Count", color="Type", barmode="group")
fg.show()
</code></pre>
<p><a href="https://i.sstatic.net/gWbqR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gWbqR.png" alt="enter image description here" /></a></p>
<p>My desired behavior is that the x-axis remains in the same order as the <code>df</code> is after applying <code>sort_values</code>. Instead, rows with empty data are places in the end, no longer on the chronological order. How can I override this behavior?</p>
| <python><pandas><plotly><plotly-express> | 2023-10-18 10:36:30 | 1 | 4,155 | Mitchell van Zuylen |
77,315,369 | 3,433,875 | Import a global variable from an external function into flask app | <p>I am really new to flask and fairly new to python.</p>
<p>Here is my issue:
I have created a flask app that displays a bunch of matplotlib charts with the following structure:</p>
<pre><code>└──app
├──templates
| └──index.html
├──assets
├──functions
| └──by_continent.py
└──app.py
</code></pre>
<p>So the app gets the country selected by the user in index.html (countryVal), passes it to my matplotlib chart in by_continent.py, and plots the in-memory image generated in the HTML file. This works perfectly.</p>
<p>My app.py looks like:</p>
<pre><code>from flask import Flask, render_template, request
from functions.by_continent import scatter_plot

app = Flask(__name__)

@app.route("/")
def hello():
    return render_template('index.html')

@app.route('/', methods=['POST', 'GET'])
def getCountry():
    if request.method == 'POST':
        scatter_img = scatter_plot(countryVal)
        return render_template('index.html',
                               scatter_img=scatter_img,
                               country=countryVal)
    else:
        return render_template('index.html')
</code></pre>
<p>and my by_continent.py script where the matplotlib chart and the variable countryCode is generated and it looks like:</p>
<pre><code>import matplotlib.pyplot as plt

def scatter_plot(countryVal):
    global countryCode
    ......
    pop = pd.read_sql_query("SELECT * from population", conn)
    df = df.filter(['Edition', 'Country/Territory', 'Region', 'Total', 'status_name'])
    countryCode = pop[pop.country_name.str.lower() == current_country.lower()]['country_iso_2code'].values[0]
    .....
    return scatter_img
</code></pre>
<p>There is a variable in by_continent.py that I want to pass to the HTML code, so I defined it as a global variable and I am passing it to the app as <strong>country_code = scatter_plot.countryCode</strong>.
See below:</p>
<pre><code>from functions.by_continent import scatter_plot
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/")
def hello():
    return render_template('index.html')

@app.route('/', methods=['POST', 'GET'])
def getCountry():
    country_code = scatter_plot.countryCode
    if request.method == 'POST':
        scatter_img = scatter_plot(countryVal)
        return render_template('index.html',
                               scatter_img=scatter_img,
                               country=countryVal)
    else:
        return render_template('index.html')
</code></pre>
<p>But I get the following error:</p>
<pre><code>AttributeError: 'function' object has no attribute 'countryCode'
</code></pre>
<p>What am I doing wrong?</p>
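For context, my understanding so far: the global lives on the <em>module</em> (by_continent), not on the <code>scatter_plot</code> function object, which is why the attribute access fails. A less stateful route, sketched with a hypothetical <code>ISO_CODES</code> lookup standing in for my SQL query, is to return both values from the function:

```python
# Hypothetical stand-in for by_continent.py: the country code is computed
# inside scatter_plot, so return it alongside the image instead of a global
ISO_CODES = {"spain": "ES", "france": "FR"}  # demo lookup table (assumption)

def scatter_plot(country_val):
    country_code = ISO_CODES[country_val.lower()]
    scatter_img = f"<img for {country_val}>"   # stands in for the real chart
    return scatter_img, country_code

# In the Flask view, unpack both and hand them to render_template:
scatter_img, country_code = scatter_plot("Spain")
print(country_code)  # ES
```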
| <python><flask> | 2023-10-18 10:08:59 | 2 | 363 | ruthpozuelo |
77,315,308 | 9,997,385 | How to convert string to Python object (Flask specific) | <p>If I have this code:</p>
<pre><code>from datetime import timezone
DEFAULT_TZ = timezone.utc
</code></pre>
<p>And I want to pass <code>timezone.utc</code> as an input variable from a JSON file:</p>
<pre><code>from datetime import timezone
app.config.from_file(filename="config.json", load=json.load)
DEFAULT_TZ = app.config['DEFAULT_TIMEZONE']
</code></pre>
<p>Where <code>config.json</code> is:</p>
<pre><code>{
"DEFAULT_TIMEZONE": "timezone.utc"
}
</code></pre>
<p>Is something like this possible?</p>
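For reference, one safe pattern is to treat the JSON value as a key into an explicit mapping (or as an IANA zone name) rather than evaluating it as Python; a sketch using only the standard library:

```python
from datetime import timezone
from zoneinfo import ZoneInfo

# Explicit whitelist of config strings -> timezone objects; anything else
# is treated as an IANA zone name and handed to zoneinfo
_NAMED = {"timezone.utc": timezone.utc}

def resolve_timezone(name: str):
    if name in _NAMED:
        return _NAMED[name]
    return ZoneInfo(name)  # e.g. "Europe/Madrid"

print(resolve_timezone("timezone.utc") is timezone.utc)  # True
```

With this, the JSON stays plain data and the set of resolvable objects is bounded, avoiding the risks of <code>eval</code>.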
| <python><flask> | 2023-10-18 10:01:48 | 1 | 643 | A. Gardner |
77,314,901 | 1,279,355 | Strip scripts from pybuild | <p>I need to build a Debian package from a Python package.
The Python package is not under my control, so I cannot easily change the setup.py file.<br />
It uses setuptools and has a very standard setup.py containing
scripts. But for my resulting Debian package I don't want any of the scripts included.</p>
<pre><code>setup(
...
scripts=glob.glob(os.path.join('examples', '*.py')),
...
)
</code></pre>
<p>I tried to use <code>--install-scripts=/dev/null</code>, but since this runs inside a Debian build, the scripts still end up in the package as <code>./dev/null/script.py</code>.</p>
<p>Is there a way to control the setuptools install to suppress/omit the scripts, or do I need to remove them from setup.py via some sed magic?</p>
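One alternative I am aware of that avoids touching setup.py at all (a sketch; the exact path under <code>debian/</code> depends on where pybuild installs the scripts, so <code>usr/bin</code> is an assumption): an override in <code>debian/rules</code> that deletes the installed scripts after <code>dh_auto_install</code>:

```make
# debian/rules fragment (sketch): let the normal install run, then drop
# the example scripts before the package is assembled
override_dh_auto_install:
	dh_auto_install
	rm -rf debian/*/usr/bin
```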
| <python><debian><setuptools><packaging> | 2023-10-18 09:01:20 | 1 | 4,420 | Sir l33tname |
77,314,839 | 12,081,269 | How do I connect to Clickhouse locally using sqlalchemy | <p>I am trying to establish a Clickhouse connection from my local machine, but I receive an error that <code>sqlalchemy</code> does not know how to deal with the clickhouse dialect (<code>Can't load plugin: sqlalchemy.dialects:clickhouse.native</code>). However, everything works in JupyterHub.</p>
<p>I cannot use JupyterHub due to large files that I need to upload into the environment.</p>
<p>Here's how I am trying to connect:</p>
<pre><code>import sqlalchemy as sa
import pandas as pd
import polars as pl
import os
ch_host = os.getenv('CH_HOST', default='my_host')
ch_cert = os.getenv('CH_CERT', default='../path/to/cert')
ch_port = os.getenv('CH_PORT', default='my_port')
ch_db = os.getenv('CH_DB', default='name_db')
ch_user = os.getenv('CH_USER', default='username')
ch_pass = os.getenv('CH_PASS', default='pass')
engine = sa.create_engine(f'clickhouse+native://{ch_user}:{ch_pass}@{ch_host}:{ch_port}/{ch_db}?secure=True&ca_certs={ch_cert}')
# test connection
print(pd.read_sql('show tables', engine).head(5))
</code></pre>
<p>Package versions are the following:</p>
<pre><code>print(pd.__version__)
print(pl.__version__)
print(sa.__version__)
2.0.3
0.18.10
2.0.19
</code></pre>
<p>How do I solve this problem?</p>
| <python><sqlalchemy><clickhouse> | 2023-10-18 08:52:25 | 0 | 897 | rg4s |
77,314,836 | 12,590,879 | Storing 'formattable' strings in database | <p>I'm working on a project where I need to store certain descriptions in a database. These descriptions need to be strings akin to prepared statements in SQL, where I can later 'inject' my own values wherever necessary. The reason is that the data that goes in the fields needs to be generated by the backend on demand (also the descriptions will potentially be localized later on).</p>
<p>Currently I'm simply storing the descriptions in the form of "This is the description for {} and {}" and in the backend I call <code>.format</code> on it to pass the generated values. It's probably worth noting that the user can't interact with this procedure, meaning it shouldn't really be a security hole. However, is there a better way of doing this in Django? Or is my design somehow flawed?</p>
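Since the descriptions are operator-authored rather than user input, <code>str.format</code> is workable; that said, named placeholders tend to age better once localization enters, because translators can reorder fields, and <code>string.Template.safe_substitute</code> won't crash on a missing one. A small comparison:

```python
from string import Template

# str.format with named fields: order-independent, but raises KeyError on
# missing keys
desc = "This is the description for {user} and {plan}"
print(desc.format(user="Alice", plan="Pro"))

# string.Template: $-style placeholders; safe_substitute leaves unknown
# fields in place instead of raising, which is forgiving for translations
t = Template("This is the description for $user and $plan")
print(t.safe_substitute(user="Alice"))
```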
<p>Thanks.</p>
| <python><sql><django> | 2023-10-18 08:51:36 | 0 | 325 | Pol |
77,314,758 | 1,306,747 | VSCode Python unittest discovery fails: 'utils' is not a package -> sys.path ordering? | <p>For a few days now, Visual Studio Code's Python unittest discovery has no longer worked for me in several projects that contain a package called <code>utils</code>:</p>
<pre><code>2023-10-18 09:37:36.901 [info] Discovering unittest tests with arguments: /home/phip/.vscode/extensions/ms-python.python-2023.19.12901009/pythonFiles/unittestadapter/discovery.py,--udiscovery,-v,-s,./tests,-p,test_*.py
...
ModuleNotFoundError: No module named 'utils.export_to_dict_helper'; 'utils' is not a package
</code></pre>
<p>It works as soon as I rename the whole <code>utils</code> folder to something else, like <code>helper_utils</code>. Running the tests from the terminal using</p>
<pre><code>python3 -m unittest discover -v tests
</code></pre>
<p>works just fine, even with the directory still named <code>utils</code>. I already tried to downgrade the "Python" extension by a few months and switching to another Python version (tried 3.8 and 3.11 so far), but the error is consistent.</p>
<p>The top of the test file looks like this (print of sys.path for debugging):</p>
<pre class="lang-py prettyprint-override"><code>import sys
print(sys.path)
# Other (working) imports...
from utils.data_storage_selector import DataStorageSelector
from utils.export_to_dict_helper import ExportToDictHelper
</code></pre>
<p>The full output of the test discovery looks like this:</p>
<pre><code>2023-10-18 09:37:36.901 [info] Discovering unittest tests with arguments: /home/phip/.vscode/extensions/ms-python.python-2023.19.12901009/pythonFiles/unittestadapter/discovery.py,--udiscovery,-v,-s,./tests,-p,test_*.py
2023-10-18 09:37:36.901 [info] > ~/.pyenv/versions/3.8.16/bin/python ~/.vscode/extensions/ms-python.python-2023.19.12901009/pythonFiles/unittestadapter/discovery.py --udiscovery -v -s ./tests -p test_*.py
2023-10-18 09:37:36.901 [info] cwd: .
2023-10-18 09:37:37.218 [info] [
'/home/phip/.../poleno-event-model/tests',
'/home/phip/.vscode/extensions/ms-python.python-2023.19.12901009/pythonFiles/lib/python',
'/home/phip/.vscode/extensions/ms-python.python-2023.19.12901009/pythonFiles/unittestadapter',
'/home/phip/.../poleno-event-model',
'/home/phip/.pyenv/versions/3.8.16/lib/python38.zip',
'/home/phip/.pyenv/versions/3.8.16/lib/python3.8',
'/home/phip/.pyenv/versions/3.8.16/lib/python3.8/lib-dynload',
'/home/phip/.pyenv/versions/3.8.16/lib/python3.8/site-packages',
'/home/phip/.pyenv/versions/3.8.16/lib/python3.8/site-packages/CharPyLS-1.0.3-py3.8-linux-x86_64.egg',
'/home/phip/.vscode/extensions/ms-python.python-2023.19.12901009/pythonFiles',
'/home/phip/.vscode/extensions/ms-python.python-2023.19.12901009/pythonFiles',
'/home/phip/.vscode/extensions/ms-python.python-2023.19.12901009/pythonFiles/lib/python',
'/home/phip/.vscode/extensions/ms-python.python-2023.19.12901009/pythonFiles',
'/home/phip/.vscode/extensions/ms-python.python-2023.19.12901009/pythonFiles/lib/python'
]
2023-10-18 09:37:37.498 [info] Test server connected to a client.
2023-10-18 09:37:37.571 [error] Unittest test discovery error
Failed to import test module: test_data_model_base
Traceback (most recent call last):
File "/home/phip/.pyenv/versions/3.8.16/lib/python3.8/unittest/loader.py", line 436, in _find_test_path
module = self._get_module_from_name(name)
File "/home/phip/.pyenv/versions/3.8.16/lib/python3.8/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
File "/home/phip/.../poleno-event-model/tests/test_data_model_base.py", line 20, in <module>
from poleno_event_model.data_model_base import DataModelBase
File "/home/phip/.../poleno-event-model/poleno_event_model/__init__.py", line 18, in <module>
from .event_model_decl import event_model
File "/home/phip/.../poleno-event-model/poleno_event_model/event_model_decl.py", line 44, in <module>
from utils.export_to_dict_helper import AttributeFilter
ModuleNotFoundError: No module named 'utils.export_to_dict_helper'; 'utils' is not a package
Failed to import test module: test_poleno_event_model
Traceback (most recent call last):
File "/home/phip/.pyenv/versions/3.8.16/lib/python3.8/unittest/loader.py", line 436, in _find_test_path
module = self._get_module_from_name(name)
File "/home/phip/.pyenv/versions/3.8.16/lib/python3.8/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
File "/home/phip/.../poleno-event-model/tests/test_poleno_event_model.py", line 26, in <module>
from poleno_event_model import BackgroundData, MeasurementEvent
File "/home/phip/.../poleno-event-model/poleno_event_model/__init__.py", line 18, in <module>
from .event_model_decl import event_model
File "/home/phip/.../poleno-event-model/poleno_event_model/event_model_decl.py", line 44, in <module>
from utils.export_to_dict_helper import AttributeFilter
ModuleNotFoundError: No module named 'utils.export_to_dict_helper'; 'utils' is not a package
</code></pre>
<p>For comparison, this is the output when running the tests manually:</p>
<pre><code>python3 -m unittest discover tests
[
'/home/phip/.../poleno-event-model/tests',
'/home/phip/.../poleno-event-model',
'/home/phip/.../poleno-event-model',
'/home/phip/.pyenv/versions/3.8.16/lib/python38.zip',
'/home/phip/.pyenv/versions/3.8.16/lib/python3.8',
'/home/phip/.pyenv/versions/3.8.16/lib/python3.8/lib-dynload',
'/home/phip/.pyenv/versions/3.8.16/lib/python3.8/site-packages',
'/home/phip/.pyenv/versions/3.8.16/lib/python3.8/site-packages/CharPyLS-1.0.3-py3.8-linux-x86_64.egg'
]
................
----------------------------------------------------------------------
Ran 17 tests in 0.192s
OK
</code></pre>
<p>The <code>utils</code> directory is located at the root of the project (same level as <code>tests</code> and most other local package directories) and contains an <code>__init__.py</code> file.</p>
<p>In case it matters, tool versions are as follows:</p>
<ul>
<li>OS: Ubuntu 23.04</li>
<li>Python: Tested 3.8 and 3.11 via pyenv</li>
<li>VSCode: 1.83.1 (same problem with 1.83.0 and no older snap available to compare)</li>
<li>Python extension: v2023.19.12901009 (pre-release for testing, but same problem with current and past release version)</li>
</ul>
<h3>Progress</h3>
<p>While writing the question, I probably discovered the problem. The VSCode extension folder <code>unittestadapter</code> that is put into <code>sys.path</code> during discovery and <em>before the workspace folder</em> looks like this:</p>
<pre><code>ls -alh /home/phip/.vscode/extensions/ms-python.python-2023.19.12901009/pythonFiles/unittestadapter
total 44K
drwxrwxr-x 3 phip phip 4.0K Okt 17 16:18 .
drwxrwxr-x 8 phip phip 4.0K Okt 17 16:17 ..
-rw-rw-r-- 1 phip phip 4.6K Okt 17 16:17 discovery.py
-rw-rw-r-- 1 phip phip 11K Okt 17 16:17 execution.py
-rw-rw-r-- 1 phip phip 94 Okt 17 16:17 __init__.py
drwxrwxr-x 2 phip phip 4.0K Okt 17 16:25 __pycache__
-rw-rw-r-- 1 phip phip 7.9K Okt 17 16:17 utils.py
</code></pre>
<p>This <code>utils.py</code> module there shadows my project-local <code>utils</code> package.</p>
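The shadowing can be reproduced in isolation, outside VSCode, with temporary directories standing in for the extension folder and the workspace: put a directory containing a flat <code>utils.py</code> ahead of one containing a <code>utils/</code> package on <code>sys.path</code>, and the module wins, producing exactly this error:

```python
import pathlib
import sys
import tempfile

root = pathlib.Path(tempfile.mkdtemp())

# A dir with a flat utils.py, like the extension's unittestadapter folder
adapter = root / "adapter"
adapter.mkdir()
(adapter / "utils.py").write_text("marker = 'module'\n")

# A dir with a utils/ package, like the project workspace
project = root / "project"
(project / "utils").mkdir(parents=True)
(project / "utils" / "__init__.py").write_text("marker = 'package'\n")
(project / "utils" / "helper.py").write_text("x = 1\n")

sys.path[:0] = [str(adapter), str(project)]  # adapter first, as in discovery

import utils
print(utils.marker)                # 'module' -- the flat file shadows the package
print(hasattr(utils, "__path__"))  # False -- so 'utils' is not a package

try:
    import utils.helper
except ModuleNotFoundError as err:
    print(err)                     # ... 'utils' is not a package
```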
<h3>Question</h3>
<p>With the finding above, the problem boils down to <strong>How to fix the ordering of the sys.path elements when running the VSCode unittest discovery so that the workspace folder comes <em>before</em> any extension directories?</strong></p>
<p>The secondary question would be why this suddenly changes and apparently only affects me; at least I wasn't able to find something similar in online searches. If this was a problem with a recent VSCode version, I'd assume the web to be riddled with reports about it.</p>
<h3>Update</h3>
<p>I still haven't found a solution for the problem, but investigated some more: Apparently, the <code>unittestadapter</code> directory in <code>sys.path</code> should not be there at all, as its internal <code>utils</code> module is used in a qualified way (<code>unittestadapter.utils</code>). So only the <code>pythonFiles</code> path should be added, which is in the list a bit further down.</p>
<p>Interestingly, I can't reproduce the problem when creating a new minimal project (just a main module, a utils package with a single module and a test file); no additional <code>unittestadapter</code> gets added in this case. So it is probably not a problem with the VSCode installation. However, the issue persists with at least two projects containing a <code>utils</code> module. And it does not get fixed by removing the <code>__pycache__</code> directories or renaming the project folder (to invalidate any cache in VSCode installation/config files).</p>
| <python><visual-studio-code><python-unittest><pythonpath> | 2023-10-18 08:42:38 | 2 | 989 | Philipp Burch |
77,314,703 | 7,029,724 | QGIS python plugins: How to install python packages, f.e. cx_Oracle | <p>I want to develop a Python plugin for QGIS. For this I need to install packages (e.g. cx_Oracle) on my computer and, of course, on client computers when the plugin is deployed.
Can anyone tell me how to do this, preferably without the need for admin rights?</p>
| <python><plugins><qgis> | 2023-10-18 08:36:30 | 2 | 371 | am2 |
77,314,656 | 8,849,071 | Maintaining isolation between modules in Django monolith | <p>In our company, we have a monolith. Right now it is not that big, but we are already seeing some problems with the isolation between the modules we have. Our setup is pretty similar to other Django backends: we have a common User model and another UserProfile model where we store additional information about the user. As an additional thing, we also have a company model to group our users by company.</p>
<p>These models are shared across all the modules, at least they are coupled as foreign keys in the models of the rest of the modules. They are contained in a <code>companies</code> module, where we have our business rules about how many users a company can have, invariants in their profile, etc. I think this is fine because we usually just do simple and fast join by user ID or filter by it, so we treat this dependency as read-only so we don't skip any business rules/boundaries related to the companies module.</p>
<p>My problem comes when one module needs to trigger an application service in another. For instance, let's say <code>moduleB</code> needs to access a resource in <code>moduleA</code>. The code would be something like:</p>
<pre class="lang-py prettyprint-override"><code># The GetResourceInterface is defined in module A!
from moduleA import GetResourceInterface
class AServiceInModuleB:
# We use dependency injection
def __init__(service: GetResourceInterface):
pass
</code></pre>
<p>Now this is a direct import from a different module, so it seems we are breaking the boundaries (even though it is not the domain layer, just the application layer). Nonetheless, sometimes we don't return DTOs from services because our domain objects are not big or complex enough to justify defining DTOs (we would just be duplicating a class with a different name but almost the same attributes). So in those cases, we would indeed be breaking the isolation.</p>
<p>A possible alternative we have thought about is to contact other modules using their REST API (we define an API for every module because we also use it in our clients). In that case, the code would be something like:</p>
<pre class="lang-py prettyprint-override"><code># The name could be different (not sure which for now)
from dependencies import ModuleAConnectorInterface
class AServiceInModuleB:
def __init__(connector: ModuleAConnectorInterface):
pass
</code></pre>
<p>The main advantage I think here is that the dependencies between modules are way more explicit to read in the file structure and in the code. Also, if more operations are needed from <code>moduleA</code>, they can be added to that connector too.</p>
<p>What possible problems do we see in this approach? The additional overhead of an additional HTTP request. This would be authenticated with JWT because that's pretty easy and fast to verify. The HTTP request would possibly just be made locally because even though we have a monolith, we have around 10 or 15 replicas of the Django app running the web server. Also, we had a bit of a worry about having a chatty architecture, where every service makes a lot of calls to other services, but honestly, this is a minor concern because we don't see it as a real thing happening with our current/future requirements.</p>
<p>So, my question is, is this a valid solution? Are we totally overengineering this isolation of boundaries? Our main concern is that we come from a messy monolith with no boundaries, and now we are remaking the app and the team is growing. For the sake of simplicity, we are keeping a mono-repository with our monolith, but every team is going to own a couple of modules, so we want to avoid interdependencies/breaking other people's code/stopping other people from refactoring/etc.</p>
<p>Sorry for the long text and let me know if you need any additional information to answer the question! Thanks! :D</p>
<p>Note: not sure if useful, we are using DDD, so we try to define clear bounded contexts with different domains and isolated data structures to represent the domain.</p>
| <python><django><architecture><domain-driven-design> | 2023-10-18 08:31:12 | 1 | 2,163 | Antonio Gamiz Delgado |
77,314,581 | 6,851,715 | dynamically add a sequential date column to a data frame with a starting with each date is repeated N times in consecutive rows | <p>I've got the following data.frame:</p>
<pre><code>import pandas as pd
import random
data = {
'Column1': [random.randint(1, 100) for _ in range(9)],
'Column2': [random.uniform(0, 1) for _ in range(9)],
'Column3': [chr(random.randint(65, 90)) for _ in range(9)],
'Column4': [random.choice(['A', 'B', 'C']) for _ in range(9)]
}
df = pd.DataFrame(data)
Column1 Column2 Column3 Column4
0 87 0.208179 M C
1 85 0.049071 Q C
2 4 0.474926 X C
3 35 0.966357 L B
4 58 0.295134 C B
5 23 0.633367 R B
6 87 0.069583 V B
7 83 0.427594 N A
8 16 0.592413 R C
</code></pre>
<p>I'd like to add a new sequential DATE column with entries starting from a chosen start_date (= '2022-01-01'), so that each date is repeated N (=2) times across the whole dataset. I would also like to add another column called SHIFT that cycles through the n (=2) selected_values=['Day','Night'].</p>
<pre><code>## desired output for N=2 and start_date = '2022-01-01', and n=2 with selected_values = ['Day','Night']
Column1 Column2 Column3 Column4 DATE SHIFT
0 87 0.208179 M C 2022-01-01 Day
1 85 0.049071 Q C 2022-01-01 Night
2 4 0.474926 X C 2022-01-02 Day
3 35 0.966357 L B 2022-01-02 Night
4 58 0.295134 C B 2022-01-03 Day
5 23 0.633367 R B 2022-01-03 Night
6 87 0.069583 V B 2022-01-04 Day
7 83 0.427594 N A 2022-01-04 Night
8 16 0.592413 R C 2022-01-05 Day
</code></pre>
<p><strong>- N, n, selected_values, and start_date are all dynamic. with n = number of elements in selected_values</strong></p>
<p>to make it more clear, here's the desired output for different parameters:</p>
<pre><code>## desired output for N=2 and start_date = '2022-01-01', and n=3 with selected_values = ['Day','Night','Afternoon']
Column1 Column2 Column3 Column4 DATE SHIFT
0 87 0.208179 M C 2022-01-01 Day
1 85 0.049071 Q C 2022-01-01 Night
2 4 0.474926 X C 2022-01-02 Afternoon
3 35 0.966357 L B 2022-01-02 Day
4 58 0.295134 C B 2022-01-03 Night
5 23 0.633367 R B 2022-01-03 Afternoon
6 87 0.069583 V B 2022-01-04 Day
7 83 0.427594 N A 2022-01-04 Night
8 16 0.592413 R C 2022-01-05 Afternoon
</code></pre>
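<p>For what it's worth, here is one way such columns can be built dynamically with NumPy repeat/tile (a sketch; the helper name <code>add_date_shift</code> is my own invention, not from any library):</p>

```python
import numpy as np
import pandas as pd

def add_date_shift(df, start_date, N, selected_values):
    # Each calendar day covers N consecutive rows; ceil-divide so a partial
    # day at the end of the frame still receives a date
    n_days = -(-len(df) // N)
    dates = np.repeat(pd.date_range(start_date, periods=n_days, freq="D"), N)
    out = df.copy()
    out["DATE"] = dates[: len(df)]
    # Cycle through the shift labels row by row, truncated to the frame length
    n = len(selected_values)
    out["SHIFT"] = np.tile(selected_values, -(-len(df) // n))[: len(df)]
    return out
```

<p>With the 9-row frame above, <code>add_date_shift(df, '2022-01-01', 2, ['Day','Night'])</code> should reproduce the first desired output, and passing <code>['Day','Night','Afternoon']</code> the second.</p>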
| <python><dataframe><date><repeat> | 2023-10-18 08:20:55 | 1 | 1,430 | Ankhnesmerira |
77,314,559 | 7,745,011 | Is it possible to partially build a pydantic basemodel before validation? | <p>Working on a data access package which handles queries from a database and should return <code>pydantic</code> models.</p>
<p><b>Here is the Problem:</b><br>
Most of the models are derived from multiple tables. This means that after querying a model's table, I do not yet have all of the model's properties (I first have to query them from tables referenced by foreign keys, for example).</p>
<p><b>Current (bad) solution</b><br>
Currently I have simply solved this by making the properties <code>Optional</code> and constructing the model in steps. The database queries are set to return dictionaries, so I use <code>model_validate()</code> to instantiate the models. Here is an example:</p>
<pre><code>with self.connection.cursor(dictionary=True, buffered=True) as cursor:
cursor.execute(
query,
)
result = cursor.fetchone()
datasets = []
while isinstance(result, dict):
dataset = DatasetModel.model_validate(result)
dataset.format = FormatModel.model_validate(result)
dataset.scans = self.scan_access.get(
dataset_ids=[result["dataset_id"]]
)
result = cursor.fetchone()
</code></pre>
<p>The issue with this is that these properties are in fact <strong>not</strong> optional, and in the code using the package a lot of asserts are needed to keep the IDE quiet (I am using types, black, ruff, ...)</p>
<p>Other solutions I thought of but don't like are tinkering with the results dictionary (adding all properties), before calling <code>model_validate()</code> or creating separate models mirroring the respective tables (both seem like a bit too much hassle tbh).</p>
<p>So the question is simple: Is it possible to construct the model partially and do the validation only when it is completed?</p>
| <python><validation><pydantic> | 2023-10-18 08:17:53 | 0 | 2,980 | Roland Deschain |
77,314,507 | 972,647 | How to pass a constant value (a list) to concurrent.futures.Executor.map? | <p>Related question that doesn't answer my exact use-case:</p>
<p><a href="https://stackoverflow.com/questions/6785226/pass-multiple-parameters-to-concurrent-futures-executor-map">Pass multiple parameters to concurrent.futures.Executor.map?</a></p>
<p>I have a function that takes 2 arguments. The second argument is a list and in my case constant for all calls to <code>executor.map</code>.</p>
<pre><code>a = [1,2,3] # values to map and pass as single value
b = ["constant", "values"]
for result in executor.map(f, a, b): # how to pass always all values of b to f?
# do stuff
</code></pre>
<p>I don't rely on concurrent, I could also use multiprocessing if that makes this possible or easier.</p>
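<p>Two standard ways to freeze the second argument come to mind (a hedged sketch, not necessarily the only options): bind it with <code>functools.partial</code>, or pass <code>itertools.repeat(b)</code> so that <code>map</code> zips an endless supply of <code>b</code> against <code>a</code>:</p>

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
from itertools import repeat

def f(x, const):
    # `const` receives the whole list on every call
    return (x, tuple(const))

a = [1, 2, 3]
b = ["constant", "values"]

with ThreadPoolExecutor() as executor:
    # Option 1: bind b once with functools.partial
    r1 = list(executor.map(partial(f, const=b), a))
    # Option 2: itertools.repeat yields b for every element of a;
    # map stops when the shortest iterable (a) is exhausted
    r2 = list(executor.map(f, a, repeat(b)))

assert r1 == r2
```

<p>Both approaches should also work with <code>ProcessPoolExecutor</code> or <code>multiprocessing</code>, as long as <code>f</code> is a module-level function and <code>b</code> is picklable.</p>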
| <python><concurrent.futures> | 2023-10-18 08:08:51 | 2 | 7,652 | beginner_ |
77,314,320 | 5,079,255 | Is there any way to use custom Consumer Tag with pika | <p>I am using pika to work with RabbitMQ. I have a queue with a dozen consumers. I want to map consumer information in Rabbit with my application logs. So is there any way of either getting the attached consumer_tag after a connection was established or using a custom tag generated by my app?</p>
<p>Thanks</p>
| <python><python-2.7><rabbitmq><pika> | 2023-10-18 07:37:53 | 0 | 2,148 | GhostKU |
77,313,860 | 1,059,860 | Ruff doesn't catch undefined argument to function | <p>I'm not even sure whether it is ruff's job to catch an undefined keyword argument passed to a function, but is there a tool that does this? Or is there a ruff configuration that I'm missing, so that ruff can do this?</p>
<pre><code>from typing import Optional
def my_func(bar: int, baz: int, foo: Optional[int] = None) -> int:
if foo:
return foo + bar + baz
else:
return bar + baz
my_func(foo=1, bar=2, baz=3)
my_func(bar=2, baz=3)
my_func(f=1, bar=2, baz=3) # this should have been caught by ruff, but isn't and I'm
</code></pre>
<p>my ruff.toml</p>
<pre><code>select = [
"F", # https://beta.ruff.rs/docs/rules/#pyflakes-f
"W", # https://beta.ruff.rs/docs/rules/#warning-w
"E", # https://beta.ruff.rs/docs/rules/#error-e
"I", # https://beta.ruff.rs/docs/rules/#isort-i
"N", # https://beta.ruff.rs/docs/rules/#pep8-naming-n
"ANN", # https://beta.ruff.rs/docs/rules/#flake8-annotations-ann
"B", # https://beta.ruff.rs/docs/rules/#flake8-bugbear-b
"RUF", # https://beta.ruff.rs/docs/rules/#ruff-specific-rules-ruf
"PT", # https://beta.ruff.rs/docs/rules/#flake8-pytest-style-pt
"D",
]
include = ["*.py"]
force-exclude = true
fixable = ["ALL"]
pydocstyle.convention = "numpy"
exclude = [".mypy_cache", ".ruff_cache", ".venv", "__pypackages__"]
ignore = [
"ANN101", # https://beta.ruff.rs/docs/rules/#flake8-annotations-ann -- missing self type annotation
"E501", # Line-length is handled by black
"N812", # https://beta.ruff.rs/docs/rules/lowercase-imported-as-non-lowercase/
"ANN401", # Ignore typing Any
"D1", # Don't complain about missing docstrings
]
</code></pre>
| <python><python-3.x><lint><ruff> | 2023-10-18 06:16:09 | 1 | 2,258 | tandem |
77,313,849 | 2,541,276 | ModuleNotFoundError: No module named '_tkinter' when installing python 3.12.0 using pyenv in Fedora 38 | <p>I'm trying to install the latest Python 3.12.0 on Fedora 38 using pyenv. The command is <code>pyenv install 3.12.0</code>.</p>
<p>I get below error -</p>
<pre class="lang-bash prettyprint-override"><code>~ pyenv install 3.12.0
Downloading Python-3.12.0.tar.xz...
-> https://www.python.org/ftp/python/3.12.0/Python-3.12.0.tar.xz
Installing Python-3.12.0...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/raj/.pyenv/versions/3.12.0/lib/python3.12/tkinter/__init__.py", line 38, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named '_tkinter'
WARNING: The Python tkinter extension was not compiled and GUI subsystem has been detected. Missing the Tk toolkit?
Installed Python-3.12.0 to /home/raj/.pyenv/versions/3.12.0
</code></pre>
<p>I also installed tkinter in fedora but its version is 3.11.6.</p>
<pre class="lang-bash prettyprint-override"><code> ~ sudo dnf install python3-tkinter -y
Last metadata expiration check: 1:29:18 ago on Tue 17 Oct 2023 09:30:15 PM PDT.
Package python3-tkinter-3.11.6-1.fc38.x86_64 is already installed.
Dependencies resolved.
</code></pre>
<p>How can I fix this error?</p>
<p>Note:</p>
<p><b> OS Details </b></p>
<pre class="lang-bash prettyprint-override"><code>~ lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 38 (Thirty Eight)
Release: 38
Codename: ThirtyEight
</code></pre>
<p><b> pyenv version </b></p>
<pre class="lang-bash prettyprint-override"><code>~ pyenv --version
pyenv 2.3.30
</code></pre>
| <python><tkinter><pyenv> | 2023-10-18 06:12:25 | 4 | 10,555 | user51 |
77,313,847 | 8,523,868 | Not able to click the link from the container | <p>Hi all, thanks for reading my question. I am trying to automate some of my work. Using an XPath finder I found the three XPaths below for the Add button. I tried to click the Add button, but it is not working; the cursor stays in the first-name box. My code is below. Please help me click the Add button.</p>
<pre><code>#driver.find_element(By.XPATH, "//div[@class='content col-center']/..//input[@name='Add']").click()
driver.find_element(By.XPATH, "//body/div[@id='contWrapper']/div[@id='ContContent']/div[@id='pageContent']/form[@id='viewAllSales']/div[@class='container']/div[@class='content col-center']/div[5]/..//input[@name='Add']").click()
#driver.find_element(By.XPATH, "(//input[@name='Add'])[1]").click()
driver.implicitly_wait(60)
</code></pre>
<p><a href="https://i.sstatic.net/aGpfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aGpfg.png" alt="enter image description here" /></a></p>
| <python><selenium-webdriver><automation> | 2023-10-18 06:12:12 | 2 | 911 | vivek rajagopalan |
77,313,546 | 1,538,049 | Numpy advanced indexing with index arrays | <p>In numpy, I got stuck on an advanced indexing problem. Suppose that I have an array <code>A</code> with the shape <code>(D0,D1,...,D(N-1),A,B)</code>. So, the first N dimensions are where I want to query for indices. The last two dimensions (A,B) hold the actual data. I have a list of N-dimensional tuples, of length <code>K</code>, that act as indices into the first N dimensions, say <code>indices</code>. I expected that <code>A[indices]</code> would return another array of the shape <code>(K,A,B)</code>. However, this does not seem to be the case. An example:</p>
<pre><code>arr = np.random.uniform(size=(4, 4, 4, 3, 3))
indices = [(0, 1, 2), (2, 1, 1), (0, 0, 1), (1, 1, 2), (2, 2, 2)]
indices = np.stack(indices, axis=0)
selected_arr = arr[indices]
print(selected_arr.shape)
</code></pre>
<p>The result is <code>(5, 3, 4, 4, 3, 3)</code> instead of <code>(5, 3, 3)</code>. I thought that every row of the indices array would index the first three dimensions of <code>arr</code>; however, this doesn't seem to be true.</p>
<p>Is there a way to achieve the indexing behavior I am thinking of in numpy?</p>
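<p>As far as I can tell, the expected behavior needs one 1-D index array per axis rather than a single (K, N) array, which indexes only the first axis. A sketch:</p>

```python
import numpy as np

arr = np.random.uniform(size=(4, 4, 4, 3, 3))
indices = np.array([(0, 1, 2), (2, 1, 1), (0, 0, 1), (1, 1, 2), (2, 2, 2)])

# A single (K, N) integer array indexes only axis 0, giving (5, 3, 4, 4, 3, 3).
# Passing one 1-D array per axis pairs the indices up element-wise instead:
selected = arr[indices[:, 0], indices[:, 1], indices[:, 2]]
assert selected.shape == (5, 3, 3)

# Equivalent and generic over N: transpose to N rows and unpack as a tuple
selected2 = arr[tuple(indices.T)]
assert np.array_equal(selected2, selected)
```

<p>Each row of <code>indices</code> then selects one <code>(3, 3)</code> block, e.g. the first result equals <code>arr[0, 1, 2]</code>.</p>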
| <python><arrays><numpy> | 2023-10-18 04:48:48 | 0 | 3,679 | Ufuk Can Bicici |
77,313,491 | 7,636,462 | Patching the `__call__()` method of a child class from a parent class in Python | <p>I have the following lines of code in Python:</p>
<pre class="lang-py prettyprint-override"><code>from functools import partial
class DummyClass:
def load_values(self, prompt, num_inference_steps):
self = partial(self, prompt=prompt, num_inference_steps=num_inference_steps)
class AnotherDummyClass(DummyClass):
def __init__(self):
self.some_val = 5
def __call__(self, prompt=None, num_inference_steps=10):
if prompt is None:
raise ValueError("prompt cannot be None.")
return f"Called with {prompt} and {num_inference_steps}"
</code></pre>
<p>I am testing this with:</p>
<pre class="lang-py prettyprint-override"><code>some_class = AnotherDummyClass()
some_class.load_values(prompt="Sayak", num_inference_steps=25)
some_class()
</code></pre>
<p>As one can see the <code>load_values()</code> function explicitly patches the call arguments of the child class. But even then, when I call <code>some_class()</code> it leads to the following error:</p>
<pre><code>ValueError: prompt cannot be None.
</code></pre>
<p>How can I remedy this?</p>
<p>If I do the patching outside the scope of the parent class i.e.,</p>
<pre class="lang-py prettyprint-override"><code>from functools import partial
some_class = AnotherDummyClass()
some_class = partial(some_class, prompt="hey", num_inference_steps=20)
some_class()
</code></pre>
<p>it works as expected.</p>
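<p>A possible explanation (my reading, not authoritative): inside <code>load_values()</code>, the assignment <code>self = partial(...)</code> only rebinds the local name <code>self</code>; the instance the caller holds is never changed, so <code>some_class()</code> still runs <code>__call__</code> with <code>prompt=None</code>. A sketch that stores the loaded values on the instance instead:</p>

```python
class DummyClass:
    def load_values(self, **kwargs):
        # Rebinding the local name `self` has no effect outside this method,
        # so store the defaults on the instance instead
        self._defaults = kwargs

class AnotherDummyClass(DummyClass):
    def __call__(self, prompt=None, num_inference_steps=None):
        defaults = getattr(self, "_defaults", {})
        if prompt is None:
            prompt = defaults.get("prompt")
        if num_inference_steps is None:
            num_inference_steps = defaults.get("num_inference_steps", 10)
        if prompt is None:
            raise ValueError("prompt cannot be None.")
        return f"Called with {prompt} and {num_inference_steps}"

some_class = AnotherDummyClass()
some_class.load_values(prompt="Sayak", num_inference_steps=25)
assert some_class() == "Called with Sayak and 25"
```

<p>Explicit arguments still win over the stored defaults, since the defaults are only consulted when a parameter is left at <code>None</code>.</p>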
| <python><inheritance> | 2023-10-18 04:28:53 | 2 | 400 | S. P |
77,313,357 | 4,451,521 | python assignment of dataframes points to the same object? | <p>I am debugging a python script that uses dataframes and there is a line like this</p>
<pre><code>df=myObject.df
df["error_value"] = ""
</code></pre>
<p>myObject is an object that has a dataframe as a member variable (df)</p>
<p>I run it until the first line and sure enough <code>df</code> has the same value as <code>myObject.df</code> and the <code>error_value</code> column is not yet added.
Then I run the next line and the <code>df</code> has an empty column <code>error_value</code></p>
<p>But then if I do <code>myObject.df.error_value</code> I can see that this column has been added too. So I am perplexed.</p>
<p>I thought df was a <em>copy</em> of the internal dataframe of the object. But is it just pointing to the same place? (as pointers in C?)</p>
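<p>For illustration, a minimal sketch of the aliasing at play (the <code>MyObject</code> class here is a stand-in I invented):</p>

```python
import pandas as pd

class MyObject:
    def __init__(self):
        self.df = pd.DataFrame({"a": [1, 2]})

my_object = MyObject()
df = my_object.df          # no copy: both names refer to one object
df["error_value"] = ""
assert df is my_object.df                      # same object
assert "error_value" in my_object.df.columns   # mutation visible via both names

independent = my_object.df.copy()  # an explicit copy decouples them
independent["other"] = 0
assert "other" not in my_object.df.columns
```

<p>So plain assignment binds another name to the same DataFrame, much like copying a pointer in C; <code>.copy()</code> is needed to get an independent frame.</p>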
| <python><pandas><dataframe> | 2023-10-18 03:36:22 | 0 | 10,576 | KansaiRobot |
77,313,188 | 138,776 | HTTP request fails in Flask request, succeeds from command line | <p>In trying to write a flask app, I ran into a problem making an outbound HTTP call using <a href="https://requests.readthedocs.io/" rel="nofollow noreferrer">requests</a> to an API while processing the request. If I run the same function from the python command line, it works. <strong>Why?</strong></p>
<p><strong>Update</strong> If I add gunicorn and make the call to flask via gunicorn, it works!</p>
<p>I've simplified my flask app and I get the same error.</p>
<pre class="lang-py prettyprint-override"><code>import requests
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello_world():
try:
response = requests.get("https://httpbin.org/get")
data = response.json()
except Exception as e:
print(f"Caught exception {e}")
data = "no data: error"
return(f'<pre>{data}</pre>')
</code></pre>
<p>Saved as <code>sslerror.py</code>, requires installing <code>flask</code> and <code>requests</code>.</p>
<p><strong>Web Request</strong></p>
<p>I run the flask app using <code>flask --app sslerror run</code>. Flask listens at <code>http://127.0.0.1:5000</code>. When I visit that in my browser, I get this error:</p>
<pre><code>Caught exception HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /get (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:3098)')))`
</code></pre>
<p><strong>Python REPL</strong></p>
<p>From the command line, on the other hand:</p>
<pre class="lang-py prettyprint-override"><code>>>> from sslerror import hello_world
>>> hello_world()
"<pre>{'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.31.0', 'X-Amzn-Trace-Id': 'Root=1-652f409a-abcdefabcdef'}, 'origin': '108.x.x.x', 'url': 'https://httpbin.org/get'}</pre>"
</code></pre>
<p><strong>Adding Gunicorn</strong></p>
<pre class="lang-bash prettyprint-override"><code>$ pip install gunicorn
$ gunicorn -w 1 sslerror:app
</code></pre>
<p>Gunicorn listens at <code>http://127.0.0.1:8000</code>. When I visit that, I see results in my browser</p>
<p>Environment info: MacOS 13.6 (Ventura) on Apple m2 chip. I'm using python 3.11.5 from <a href="https://github.com/pyenv/pyenv" rel="nofollow noreferrer">pyenv</a>.</p>
<p>I've also tried adding <code>verify=False</code> and <code>allow_redirects=False</code> to the <code>get()</code> call. The result was the same each time.</p>
<p><strong>WHY</strong> does the outbound HTTP request work from the command line but not in the context of the flask request?</p>
<p><em>Update:</em> I also tried in new virtualenvs with python 3.8 (from pyenv) and 3.11 from homebrew. Same behavior.</p>
<p><em>Update 2:</em> I added gunicorn and it works?!</p>
<p><em>Update 3:</em> I tried swapping out requests and using <a href="https://www.python-httpx.org/" rel="nofollow noreferrer">httpx</a>. Breaks the same way. This is smelling like a flask issue.</p>
<p><em>Update 4:</em> I'm now also getting this when making an HTTP call from a django runserver process. No flask in the mix for that.</p>
<p><em>Update 5:</em> If I change the URL in the <code>requests.get()</code> call to use <code>http</code> rather than <code>https</code>, it works through flask. That doesn't solve the problem since service I really want to connect to only uses https. I mention it here as a data point that might help determine the cause.</p>
| <python><flask><ssl><python-requests><gunicorn> | 2023-10-18 02:33:35 | 1 | 3,429 | Doug Harris |
77,312,806 | 14,947,895 | Defekt Python-C linking leads code to deviates after relative number of loops and not absolute | <p>For performance reasons I implemented a C function of an (adaptive) Gaussian filter. For a sanity check, I compared it to my old Python implementation. It seems to work well at first, but then starts to break down. Strangely, this does not seem to depend on the absolute number of frequencies evaluated, but somehow on the relative number. So I assume it has something to do with memory allocation or something? I am fairly new to C programming, so I may have overlooked a common problem.</p>
<p>Here is the signature of the function; I used gcc-13 to compile on macOS13.4 (intel chip)</p>
<pre class="lang-c prettyprint-override"><code>double complex* adapt_gf(double complex* rfft_, double* c_freq, double* freq_out, int n_freq_in, int n_freq_out)
</code></pre>
<p>Attached are two images showing the discrepancies I am talking of:</p>
<p><a href="https://i.sstatic.net/a4oD3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a4oD3.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/IzYdJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IzYdJ.png" alt="enter image description here" /></a></p>
| <python><c><parallel-processing><simd> | 2023-10-17 23:52:45 | 1 | 496 | Helmut |
77,312,801 | 2,153,235 | Spark Row object instantiated differently from overloaded prototypes? | <p>The Spark <code>Row</code> class in <code>pyspark/sql/types.py</code> contains no <code>__init__</code>
method, but shows the following overloaded type hints for <code>__new__</code>:</p>
<pre><code>@overload
def __new__(cls, *args: str) -> "Row": ...
@overload
def __new__(cls, **kwargs: Any) -> "Row": ...
def __new__(cls, *args: Optional[str], **kwargs: Optional[Any]) -> "Row":
</code></pre>
<p>The doc string for <code>Row</code> shows various instantiations:</p>
<pre><code>>>> Person = Row("name", "age")
>>> row1 = Row("Alice", 11) # This is the one that is hard to understand
>>> row2 = Row(name="Alice", age=11)
>>> row1 == row2
True
</code></pre>
<p>The <em>second</em> line above does not fit any of the overloaded prototypes.
It <em>almost</em> fits the prototype with <code>*args</code>, except for the fact that all
of the arguments for <code>*args</code> are supposed to be <em>strings</em>. This is
obviously not the case for <code>Row("Alice",11)</code>, but that invocation
doesn't generate any messages when issued at the REPL prompt.
Obviously, there is something that I am missing about how
type hinting and overloading works. Can someone please explain?</p>
<p><strong>P.S.</strong> For context,
I got to this point by trying to see how the constructor
knows that <code>Row("name","age")</code> specifies field names while
<code>Row("Alice", 11)</code> specifies field values. The source code for
<code>__new__</code> shows that it depends on whether the argument list is
<code>*args</code> or <code>**kwargs</code>. Both of the <code>Row</code> method invocations in
this paragraph use <code>*args</code>, but the
second one simply doesn't fit the prototype for <code>*args</code> above.</p>
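<p>A stripped-down sketch of why no message appears (my understanding: <code>@overload</code> signatures exist only for static type checkers, and nothing validates the hints at runtime, so <code>Row("Alice", 11)</code> simply lands in <code>*args</code> of the real implementation):</p>

```python
from typing import overload

class Demo:
    @overload
    def __new__(cls, *args: str) -> "Demo": ...
    @overload
    def __new__(cls, **kwargs: int) -> "Demo": ...
    def __new__(cls, *args, **kwargs):
        # Only this implementation runs; the str/int annotations on the
        # overload stubs above are never checked at runtime
        obj = super().__new__(cls)
        obj.args, obj.kwargs = args, kwargs
        return obj

d = Demo("Alice", 11)        # violates the str-only hint, yet runs fine
assert d.args == ("Alice", 11)
d2 = Demo(name="Alice")
assert d2.kwargs == {"name": "Alice"}
```

<p>A static checker such as mypy would flag <code>Demo("Alice", 11)</code>; the REPL never does.</p>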
| <python><apache-spark><pyspark><overloading><python-typing> | 2023-10-17 23:50:45 | 1 | 1,265 | user2153235 |
77,312,694 | 11,277,108 | AttributeError: 'ClassManager' object has no attribute 'obj' when listening for "first_init" event | <p>I'd like to update a table in a standard way when any of a variety of scripts I've written runs. To that end I'd like to use the <code>event.listens_for</code> decorator so I don't need to call my update function at the beginning of each script.</p>
<p>I opted to listen for the <code>first_init</code> event using the example in the SQLAlchemy docs <a href="https://docs.sqlalchemy.org/en/20/orm/events.html#sqlalchemy.orm.InstanceEvents.first_init" rel="nofollow noreferrer">here</a>.</p>
<p>However, the following MRE:</p>
<pre><code>import sqlalchemy as sa
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
from sqlalchemy.schema import CreateSchema, DropSchema
class Base(DeclarativeBase):
pass
class Country(Base):
__tablename__ = "country"
__table_args__ = {"schema": "country_test"}
country_id: Mapped[int] = mapped_column(sa.SmallInteger, primary_key=True)
country_name: Mapped[str] = mapped_column(sa.String(50), unique=True)
def __init__(self, country_name: str) -> None:
self.country_name = country_name
@sa.event.listens_for(Country, "first_init")
def my_update_function(manager, cls):
# code that scrapes an up to date list of countries and adds new ones as required
print("my_update_function ran")
if __name__ == "__main__":
engine = sa.create_engine(<conn_str>)
connection = engine.connect()
connection.execute(DropSchema("country_test", if_exists=True))
connection.execute(CreateSchema("country_test"))
Base.metadata.create_all(bind=engine)
record_1 = Country("test1")
record_2 = Country("test2")
</code></pre>
<p>Gives me the error message:</p>
<pre><code>AttributeError: 'ClassManager' object has no attribute 'obj'
</code></pre>
<p>If I remove <code>my_update_function</code> then everything runs fine so I'm pretty sure the issue is with my implementation of the decorator. However, I'm not sure what the error message means so I'm a little stuck on where to go from here...</p>
| <python><sqlalchemy> | 2023-10-17 23:07:00 | 0 | 1,121 | Jossy |
77,312,659 | 2,805,482 | plotting timeseries line graph for unique values in a column | <p>I am trying to plot a time-series graph for all the unique keys in my dataset. Below is my dataset; I have 7 unique keys, and I am trying to plot event_date on the x-axis and a line graph of count on the y-axis.</p>
<p>I am looping through each unique key and trying to plot date vs count; however, I am getting the error below. Based on the error, I am not able to understand why the shape for y is (1,).</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import pandas as pd
df_pandas = df.toPandas()
df_pandas.event_date = pd.to_datetime(df_pandas.event_date) # converting object to pandas datetime
colors = ['r', 'g', 'b', 'y', 'k', 'c', 'm']
for i, k in enumerate(df_pandas.key.unique()):
plt.plot(df_pandas[df_pandas.key == k].event_date, df_pandas[df_pandas.key == k].count, '-o', label=k, c=colors[i])
plt.gcf().autofmt_xdate() ## Rotate X-axis so you can see dates clearly without overlap
plt.legend() ## Show legend
</code></pre>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/tmp/1697561012681-0/zeppelin_python.py", line 153, in <module>
exec(code, _zcUserQueryNameSpace)
File "<stdin>", line 13, in <module>
File "/usr/local/lib64/python3.7/site-packages/matplotlib/pyplot.py", line 2813, in plot
is not None else {}), **kwargs)
File "/usr/local/lib64/python3.7/site-packages/matplotlib/__init__.py", line 1805, in inner
return func(ax, *args, **kwargs)
File "/usr/local/lib64/python3.7/site-packages/matplotlib/axes/_axes.py", line 1603, in plot
for line in self._get_lines(*args, **kwargs):
File "/usr/local/lib64/python3.7/site-packages/matplotlib/axes/_base.py", line 393, in _grab_next_args
yield from self._plot_args(this, kwargs)
File "/usr/local/lib64/python3.7/site-packages/matplotlib/axes/_base.py", line 370, in _plot_args
x, y = self._xy_from_xy(x, y)
File "/usr/local/lib64/python3.7/site-packages/matplotlib/axes/_base.py", line 231, in _xy_from_xy
"have shapes {} and {}".format(x.shape, y.shape))
ValueError: x and y must have same first dimension, but have shapes (21,) and (1,)
</code></pre>
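<p>One detail worth checking (my guess at the culprit, not verified against the full setup): <code>df_pandas[df_pandas.key == k].count</code> is attribute access, and <code>count</code> collides with the built-in <code>DataFrame.count()</code> method, so matplotlib never receives the column values. Bracket access avoids the clash:</p>

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "count": [10, 20, 30]})
sub = df[df.key == "a"]

# Attribute access resolves to the DataFrame.count() *method*, not the column
assert callable(sub.count)

# Bracket access returns the actual column values
assert sub["count"].tolist() == [10, 20]
```

<p>So plotting <code>df_pandas[df_pandas.key == k]["count"]</code> on the y-axis may resolve the shape mismatch.</p>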
<pre class="lang-none prettyprint-override"><code>event_date key count
7/23/23 0:00 389628-135052 74858
7/28/23 0:00 389628-135052 75139
7/12/23 0:00 389631-135055 60910
7/18/23 0:00 389632-135056 68850
7/26/23 0:00 389632-135056 33704
7/27/23 0:00 389630-135054 119679
7/20/23 0:00 389632-135056 71281
7/15/23 0:00 389632-135056 68854
7/23/23 0:00 389634-135058 69020
7/20/23 0:00 389629-135053 59536
7/21/23 0:00 389631-135055 71065
7/25/23 0:00 389629-135053 66887
7/15/23 0:00 389629-135053 66150
7/12/23 0:00 389633-135057 53096
7/14/23 0:00 389634-135058 62948
7/25/23 0:00 389628-135052 74872
7/15/23 0:00 389631-135055 73870
7/18/23 0:00 389631-135055 74548
7/17/23 0:00 389632-135056 68402
7/20/23 0:00 389633-135057 54665
7/15/23 0:00 389633-135057 64637
7/30/23 0:00 389630-135054 123113
7/21/23 0:00 389630-135054 67368
7/12/23 0:00 389632-135056 55618
7/19/23 0:00 389633-135057 70942
7/21/23 0:00 389633-135057 68221
7/18/23 0:00 389628-135052 76602
8/1/23 0:00 389631-135055 13252
7/17/23 0:00 389629-135053 64287
7/25/23 0:00 389634-135058 68104
7/17/23 0:00 389634-135058 66301
7/12/23 0:00 389628-135052 61841
7/18/23 0:00 389630-135054 71472
7/24/23 0:00 389629-135053 68495
7/30/23 0:00 389629-135053 122907
7/26/23 0:00 389630-135054 26650
7/30/23 0:00 389632-135056 134425
7/22/23 0:00 389634-135058 62225
7/18/23 0:00 389633-135057 61047
7/22/23 0:00 389633-135057 60926
7/18/23 0:00 389634-135058 67725
7/16/23 0:00 389633-135057 64254
7/14/23 0:00 389633-135057 61383
7/24/23 0:00 389633-135057 66471
7/16/23 0:00 389629-135053 66548
7/19/23 0:00 389628-135052 75846
7/17/23 0:00 389631-135055 73452
7/13/23 0:00 389631-135055 82725
7/31/23 0:00 389634-135058 41786
7/26/23 0:00 389629-135053 68862
8/1/23 0:00 389633-135057 12333
7/21/23 0:00 389628-135052 72381
7/30/23 0:00 389628-135052 77991
7/19/23 0:00 389630-135054 68765
8/1/23 0:00 389630-135054 12798
7/21/23 0:00 389632-135056 66499
7/29/23 0:00 389633-135057 16644
7/20/23 0:00 389631-135055 74593
7/24/23 0:00 389630-135054 72015
7/27/23 0:00 389632-135056 98245
7/31/23 0:00 389630-135054 56117
7/22/23 0:00 389629-135053 62669
7/23/23 0:00 389631-135055 74936
7/25/23 0:00 389632-135056 69935
7/29/23 0:00 389630-135054 23579
7/13/23 0:00 389632-135056 71917
7/13/23 0:00 389633-135057 67979
7/19/23 0:00 389631-135055 74154
7/23/23 0:00 389632-135056 71347
7/27/23 0:00 389634-135058 57570
7/26/23 0:00 389633-135057 60073
7/17/23 0:00 389633-135057 66860
7/15/23 0:00 389628-135052 75962
7/29/23 0:00 389628-135052 69251
7/27/23 0:00 389631-135055 69051
7/28/23 0:00 389631-135055 74231
7/23/23 0:00 389633-135057 66237
7/30/23 0:00 389634-135058 130063
7/25/23 0:00 389631-135055 73097
7/31/23 0:00 389628-135052 75428
7/24/23 0:00 389634-135058 69958
7/13/23 0:00 389634-135058 72563
7/27/23 0:00 389629-135053 63235
8/1/23 0:00 389629-135053 12444
7/26/23 0:00 389628-135052 80514
7/14/23 0:00 389628-135052 72334
7/30/23 0:00 389631-135055 130862
7/29/23 0:00 389634-135058 21234
7/14/23 0:00 389629-135053 62070
7/30/23 0:00 389633-135057 117653
7/17/23 0:00 389628-135052 74903
7/24/23 0:00 389628-135052 76074
7/28/23 0:00 389630-135054 58201
7/25/23 0:00 389630-135054 70594
7/21/23 0:00 389629-135053 70020
7/20/23 0:00 389634-135058 59776
7/22/23 0:00 389631-135055 73678
7/19/23 0:00 389632-135056 68493
7/28/23 0:00 389632-135056 69294
7/29/23 0:00 389632-135056 16416
8/1/23 0:00 389634-135058 12202
7/16/23 0:00 389628-135052 74739
7/13/23 0:00 389628-135052 78198
7/16/23 0:00 389630-135054 70980
7/31/23 0:00 389632-135056 50267
7/26/23 0:00 389634-135058 77612
7/31/23 0:00 389631-135055 45171
7/22/23 0:00 389630-135054 70867
7/15/23 0:00 389634-135058 67374
7/31/23 0:00 389633-135057 50583
7/19/23 0:00 389629-135053 72029
7/22/23 0:00 389628-135052 75503
7/14/23 0:00 389632-135056 65704
7/12/23 0:00 389634-135058 54886
7/21/23 0:00 389634-135058 70547
7/16/23 0:00 389634-135058 68285
7/27/23 0:00 389628-135052 68078
7/17/23 0:00 389630-135054 70450
7/31/23 0:00 389629-135053 50228
7/28/23 0:00 389629-135053 65580
7/13/23 0:00 389629-135053 69155
7/23/23 0:00 389630-135054 70885
7/14/23 0:00 389630-135054 67892
7/19/23 0:00 389634-135058 73446
7/27/23 0:00 389633-135057 60125
7/28/23 0:00 389633-135057 64199
7/29/23 0:00 389631-135055 33164
7/16/23 0:00 389631-135055 73831
7/24/23 0:00 389632-135056 70333
7/16/23 0:00 389632-135056 68069
7/18/23 0:00 389629-135053 67103
7/24/23 0:00 389631-135055 73852
7/20/23 0:00 389630-135054 72357
7/15/23 0:00 389630-135054 70673
7/12/23 0:00 389630-135054 57477
7/29/23 0:00 389629-135053 18198
7/14/23 0:00 389631-135055 63845
8/1/23 0:00 389632-135056 12744
7/28/23 0:00 389634-135058 67665
7/25/23 0:00 389633-135057 68534
7/12/23 0:00 389629-135053 53612
8/1/23 0:00 389628-135052 12663
7/20/23 0:00 389628-135052 75662
7/26/23 0:00 389631-135055 75014
7/22/23 0:00 389632-135056 68687
7/13/23 0:00 389630-135054 72816
7/23/23 0:00 389629-135053 68272
</code></pre>
| <python><pandas><matplotlib><time-series><line-plot> | 2023-10-17 22:57:04 | 1 | 1,677 | Explorer |
77,312,571 | 3,562,088 | Numerical implementation of ODE differs largely from analytical solution | <p>I am trying to solve the ODE of a free fall including air resistance.</p>
<p>I therefore defined my ODE as:</p>
<pre><code>def f(v, g, k, m):
return g - k/m * v**2
</code></pre>
<p>which in my opinion should represent the system correctly because</p>
<pre><code>m*a = m*g -k*v**2
</code></pre>
<p>where <code>a=vdot</code>.</p>
<p>Now I solve this ODE using the explicit Euler method like this:</p>
<pre><code>h = 0.1
t = np.arange(0, 1000 + h, h)
v0 = 0
g = 9.81
k = 0.1
m = 1.
# Explicit Euler Method
v_num = np.zeros(len(t))
v_num[0] = 0
x_num = np.zeros(len(t))
x_num[0] = 100
for i in range(0, len(t) - 1):
v_num[i + 1] = v_num[i] + h*f(v_num[i], g, k, m)
x_num[i + 1] = x_num[i] - v_num[i + 1] * h
</code></pre>
<p>At first glance this seems to work fine. However, I plotted it against an analytical solution of the ODE that I found online:</p>
<pre><code>v_ana = m*g*(1.-np.exp(-k/m*t))/k
</code></pre>
<p>And they seem to differ significantly, as shown below.</p>
<p><a href="https://i.sstatic.net/ZlwAg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZlwAg.png" alt="Chart of numerical and analytical solution of ODE. Analytical solution reaches much higher terminal velocity" /></a></p>
<p>Where did I go wrong here?</p>
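<p>For what it's worth, here is a quick sanity check of the terminal velocities implied by the two models (this comparison is my own; I'm not sure it explains the discrepancy). Setting <code>vdot = 0</code> in my ODE gives a different limit than the formula I found:</p>

```python
import math

g, k, m = 9.81, 0.1, 1.0

# Terminal velocity of the quadratic-drag ODE vdot = g - (k/m)*v**2,
# obtained by setting vdot = 0:
v_t_ode = math.sqrt(m * g / k)

# Limit of the analytical formula v = m*g*(1 - exp(-k/m*t))/k as t -> infinity:
v_t_formula = m * g / k

print(v_t_ode)      # ~9.90
print(v_t_formula)  # ~98.1
```

<p>So the two curves cannot possibly agree at large <code>t</code>, which matches the plot.</p>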
| <python><numerical-methods><differential-equations> | 2023-10-17 22:28:10 | 1 | 1,485 | Axel |
77,312,513 | 2,595,216 | infinite generator integers from range with step | <p>I need an infinite generator, like:</p>
<pre class="lang-python prettyprint-override"><code>def zigzag_iterator(current_value, step, max_val):
direction = 1
while True:
if current_value >= max_val:
direction = -1
elif current_value <= 0:
direction = 1
current_value += step * direction
if direction == 1:
current_value = min(current_value, max_val)
else:
current_value = max(0, current_value)
yield current_value
</code></pre>
<p>For input like:</p>
<pre><code>g = zigzag_iterator(current_value=2, step=1, max_val=4)
print(*[next(g) for _ in range(15)])
</code></pre>
<p>it produces the correct sequence:
3, 4, 3, 2, 1, 0, 1, 2, 3, 4, 3...
(I need the first generated value to be the one after <code>current_value</code>.)</p>
<p>But for: <code>current_value=0, step=32, max_val=260</code>
I would like to get: 31, 63, 95, 127, 159, 191, 223, 255, 260, 228, 196 ...</p>
<p>Count from 0 and include <code>max_val</code> even if the last step is smaller (see 255, 260), then subtract a full step from there; the same applies when approaching 0, i.e. ... 51, 19, 0, 31.</p>
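<p>For reference, this is the little harness (the helper name is mine) I use to check a candidate generator against an expected prefix; my current implementation passes the first case but not the second:</p>

```python
def zigzag_iterator(current_value, step, max_val):
    # my current implementation, repeated here so this runs standalone
    direction = 1
    while True:
        if current_value >= max_val:
            direction = -1
        elif current_value <= 0:
            direction = 1
        current_value += step * direction
        if direction == 1:
            current_value = min(current_value, max_val)
        else:
            current_value = max(0, current_value)
        yield current_value

def check(gen, expected):
    """Compare the first len(expected) generated values against expected."""
    return [next(gen) for _ in range(len(expected))] == expected

print(check(zigzag_iterator(2, 1, 4), [3, 4, 3, 2, 1, 0, 1, 2, 3, 4]))  # True
print(check(zigzag_iterator(0, 32, 260), [31, 63, 95]))                 # False for me
```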
| <python><iterator><generator> | 2023-10-17 22:09:02 | 1 | 553 | emcek |
77,312,448 | 10,364,235 | simplejwt token work for all tenants for django-tenants | <p>Let me describe my problem. I have already researched many documents but could not find any solution.</p>
<p>I am trying to use <code>django-tenants</code>. Everything seems OK, but if any user obtains a token after login, <strong>that token works for all tenants.</strong></p>
<p>That is a big security leak.</p>
<p>My idea was to change the <strong>SIGNING_KEY</strong> per tenant; if I could do that, the problem might be fixed. But it did not work.</p>
<pre><code>class TenantJWTAuthenticationMiddleware(MiddlewareMixin):
def process_request(self, request):
tenant = Client.objects.get(schema_name=connection.schema_name)
jwt_secret_key = tenant.jwt_secret_key
settings.SIMPLE_JWT['SIGNING_KEY'] = jwt_secret_key
</code></pre>
<p>This is my Middleware to change SIGNING_KEY for each tenant.</p>
<pre><code>class Client(TenantMixin):
name = models.CharField(max_length=100)
paid_until = models.DateField()
on_trial = models.BooleanField()
created_on = models.DateField(auto_now_add=True)
jwt_secret_key = models.CharField(max_length=100, null=True, blank=True)
# default true, schema will be automatically created and synced when it is saved
auto_create_schema = True
class Domain(DomainMixin):
pass
</code></pre>
<p>This is my model.</p>
<p>So, I added <code>jwt_secret_key</code> to my model, read this field in the middleware, and tried to set the JWT SIGNING_KEY with it. But still, a JWT obtained after user login can be used for all tenants.</p>
<p>Does anyone have any idea about my problem? Any suggestion or amendment to my solution?</p>
| <python><django><django-rest-framework><django-rest-framework-simplejwt><django-tenants> | 2023-10-17 21:51:54 | 3 | 436 | kalememre |
77,312,382 | 2,153,235 | How to interpret DataFrame[approx_count_distinct(salary): bigint]? | <p>I am simultaneously spinning up on Python and Spark via PySpark by
following <a href="https://sparkbyexamples.com/pyspark/pyspark-aggregate-functions" rel="nofollow noreferrer">this
tutorial</a>.
Here is the minimum working example:</p>
<pre><code>>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.master("local[10]").appName("SparkExamples.com").getOrCreate()
>>> simpleData = [ ("James", "Sales", 3000),
("Michael", "Sales", 4600),
("Robert", "Sales", 4100),
("Maria", "Finance", 3000),
("James", "Sales", 3000),
("Scott", "Finance", 3300),
("Jen", "Finance", 3900),
("Jeff", "Marketing", 3000),
("Kumar", "Marketing", 2000),
("Saif", "Sales", 4100) ]
>>> schema = ["employee_name", "department", "salary"]
>>> df = spark.createDataFrame(data=simpleData, schema = schema)
>>> df.show(truncate=False)
+-------------+----------+------+
|employee_name|department|salary|
+-------------+----------+------+
| James| Sales| 3000|
| Michael| Sales| 4600|
| Robert| Sales| 4100|
| Maria| Finance| 3000|
| James| Sales| 3000|
| Scott| Finance| 3300|
| Jen| Finance| 3900|
| Jeff| Marketing| 3000|
| Kumar| Marketing| 2000|
| Saif| Sales| 4100|
+-------------+----------+------+
</code></pre>
<p>Section "approx_count_distinct Aggregate Function" shows the
following example for counting the distinct values in a column:</p>
<pre><code>>>> from pyspark.sql.functions import approx_count_distinct
>>> print( "approx_count_distinct: " +
str( df.select(approx_count_distinct("salary"))
.collect()[0][0] ) )
approx_count_distinct: 6
</code></pre>
<p>I tried to get some insight into some of the intermediate objects
returned by the methods <code>approx_count_distinct</code>, <code>select</code>, and
<code>collect</code>:</p>
<pre><code>>>> approx_count_distinct("salary")
Out[148]: Column<'approx_count_distinct(salary)'>
df.select(approx_count_distinct("salary"))
Out[149]: DataFrame[approx_count_distinct(salary): bigint]
</code></pre>
<p>I can guess that a scalar <code>bigint</code> is a large integer type for the
count that is being sought. However, I'm not sure how the square
brackets are supposed to be interpreted. Square brackets are normally
for indexing and slicing, but that doesn't make sense when the inside
is a scalar count value. I'm not aware of any type hinting
conventions for which the above use of square brackets makes sense.</p>
<p><em><strong>Would it be right to conclude that this doesn't follow any convention
at all,</strong></em> and that it is just an ad hoc way to say that result is new
object of class <code>DataFrame</code> with a single cell containing the large
integer value representing the count of distinct values of <code>df.salary</code>?
That's what the following seems to show, but I want to be sure that
I'm not ignorant of a specific convention that I should become
familiar with, which can help my navigation of the documentation:</p>
<pre><code>>>> df.select(approx_count_distinct("salary")).show()
+-----------------------------+
|approx_count_distinct(salary)|
+-----------------------------+
| 6|
+-----------------------------+
</code></pre>
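<p>For what it's worth, my working hypothesis is that this repr is simply the schema rendered as <code>DataFrame[name: dtype, ...]</code>, i.e. the square brackets are notation for the column list, not indexing. A plain-Python sketch of that convention (my guess, not PySpark's actual code):</p>

```python
# Each (column name, dtype) pair, as df.dtypes would report them,
# joined into the repr string:
dtypes = [("approx_count_distinct(salary)", "bigint")]
repr_string = "DataFrame[%s]" % ", ".join("%s: %s" % pair for pair in dtypes)
print(repr_string)  # DataFrame[approx_count_distinct(salary): bigint]
```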
| <python><apache-spark><pyspark> | 2023-10-17 21:37:37 | 1 | 1,265 | user2153235 |
77,312,205 | 850,781 | Merge weighted averages in groupby | <p>I have a <a href="https://pandas.pydata.org/docs/reference/frame.html" rel="nofollow noreferrer">Pandas <code>DataFrame</code></a>:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
"commodity":["Potatos","Potatos","Apples","Apples","Apples"],
"amount":[1,2,3,4,None],
"price":[4,5,6,7,8],
"attr1":[None,1,None,"foo","bar"],
})
</code></pre>
<p>(actually, I have a gazillion columns like <code>attr1</code> of different types).</p>
<p>I need to summarize the frame by <code>commodity</code>, keeping the <em><strong>first</strong></em> non-null <code>attr1</code> (and <em><strong>last</strong></em> non-null <code>attr2</code>).</p>
<p>Here is how I do it now:</p>
<pre><code>df["cost"] = df.amount * df.price
def first(se):
"Get the first non-None element of the Series"
assert isinstance(se, pd.Series)
se = se.dropna()
if se.empty:
return None
return se.iloc[0]
summary = df.groupby("commodity").agg({"amount":sum, "cost":sum, "attr1":first})
df.drop(columns=["cost"], inplace=True)
summary["price"] = summary.cost / summary.amount
summary.drop(columns=["cost"], inplace=True)
columns = list(df.columns)
columns.remove("commodity")
summary = summary[columns]
summary
amount price attr1
commodity
Apples 7.0 6.571429 foo
Potatos 3.0 4.666667 1
</code></pre>
<p>This does what I want, but the function <code>first</code> seems to be unbearably expensive (as <a href="https://stackoverflow.com/q/26812763/850781">expected</a>).</p>
<p>I wonder if this can be done more efficiently.</p>
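<p>For context, this is roughly the shape of solution I was hoping for, if the built-in <code>GroupBy.first</code> aggregation (which, as I understand it, already skips nulls) is equivalent to my helper:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "commodity": ["Potatos", "Potatos", "Apples", "Apples", "Apples"],
    "amount": [1, 2, 3, 4, None],
    "price": [4, 5, 6, 7, 8],
    "attr1": [None, 1, None, "foo", "bar"],
})
df["cost"] = df.amount * df.price
# "first" is pandas' built-in aggregation, which skips NaN values:
summary = df.groupby("commodity").agg({"amount": "sum", "cost": "sum", "attr1": "first"})
summary["price"] = summary.cost / summary.amount
print(summary[["amount", "price", "attr1"]])
```

<p>On this toy frame it gives the same result as my version, but I don't know whether it is actually faster, or whether it handles every dtype my real columns have.</p>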
| <python><pandas><dataframe><group-by><aggregate> | 2023-10-17 20:54:30 | 2 | 60,468 | sds |
77,312,008 | 6,248,790 | codebuild stage fails with "Specified key does not exist error"" in AWS CDK 2.63.2 | <p>I have the following CDK code that creates a CloudFormation stack with an S3 bucket, codecommit and codepipeline.</p>
<pre><code> assets = s3d.BucketDeployment(
self,
"codeassets",
destination_bucket=pipeline_bucket,
role=reader_role,
cache_control=[
s3d.CacheControl.from_string(
"max-age=0,no-cache,no-store,must-revalidate"
)
],
memory_limit=2048,
sources=[s3d.Source.asset(assetspath)],
)
q = codecommit.CfnRepository(
scope=self,
code={
"branch_name": "main",
"s3": {"bucket": pipeline_bucket.bucket_name, "key": "code.zip"},
},
id="coderepo",
repository_name=pipeline.repo,
)
q.node.add_dependency(assets)
p = codepipeline.Pipeline(
scope=self,
id=f"{pipeline.name}",
pipeline_name=f"{pipeline.name}",
restart_execution_on_update=True,
artifact_bucket=pipeline_bucket,
)
p.node.add_dependency(q)
</code></pre>
<p>Even though the dependencies are added correctly, the CodePipeline stages are failing with the following error every time after the CloudFormation stack is updated.</p>
<pre><code>[Container] 2023/10/17 10:01:14 Waiting for agent ping
[Container] 2023/10/17 10:01:26 Waiting for DOWNLOAD_SOURCE
NoSuchKey: The specified key does not exist.
status code: 404, request id: 3Z90D1YGC46ZQTXZ, host id: sgsg+kiejfbfbhj+sgrf+/sg= for primary source and source version arn:aws:s3:::pipeline-name-us-east-1-63bd58d0/pipeline-name/Artifact_S/SJKGDS
</code></pre>
<p>Provided below is one of the stages where the error pops up after the stack update.</p>
<h1>Source stage</h1>
<pre><code> p.add_stage(
stage_name="Source",
actions=[
codepipeline_actions.CodeCommitSourceAction(
action_name="CodeCommit",
branch="main",
output=source_artifact,
trigger=codepipeline_actions.CodeCommitTrigger.EVENTS,
repository=codecommit.Repository.from_repository_name(
self, "source_glue_repo", repository_name=pipeline.repo
),
)
],
)
    build_artifact_run_tests = codepipeline.Artifact()
    p.add_stage(
        stage_name="UnitTests",
        actions=[
            codepipeline_actions.CodeBuildAction(
                action_name="UnitTests",
                input=source_artifact,
                project=build_project_run_tests,
                outputs=[build_artifact_run_tests],
            )
        ],
    )
</code></pre>
<p>When I check S3, I cannot even find the folder "pipeline-name/Artifact_S/".</p>
<p>But when I click the "Release Change" button at the top right of the pipeline page in the AWS console, those artifact folders are created and the pipeline executions succeed.</p>
<p>How do I ensure the Source output artifacts are in S3 before the UnitTests CodePipeline stage uses them as input artifacts?</p>
| <python><amazon-s3><aws-cdk><aws-codepipeline><aws-codebuild> | 2023-10-17 20:14:31 | 1 | 1,279 | Ashfaq |
77,311,966 | 1,422,096 | Find how many events in each 30-minute interval, without looping many times on the events | <p>This works and prints the number of events in each 30-minute interval:</p>
<p>00:00 to 00:30, 00:30 to 01:00, ..., 23:30 to 24:00</p>
<pre><code>import time, datetime
L = ["20231017_021000", "20231017_021100", "20231017_021200", "20231017_052800", "20231017_093100", "20231017_093900"]
d = datetime.datetime.strptime("20231017_000000", "%Y%m%d_%H%M%S")
M = [(d + datetime.timedelta(minutes=30*k)).strftime("%Y%m%d_%H%M%S") for k in range(49)]
Y = [sum([m1 < l <= m2 for l in L]) for m1, m2 in zip(M, M[1:])]
print(Y)
# [0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
# => 3 events between 02:00 and 02:30
# => 1 event between 05:00 and 05:30
# => 2 events between 09:30 and 10:00
</code></pre>
<p>Problem: it loops 48 times on the list <code>L</code> which can be long.</p>
<p><strong>How to do the same with a single loop pass on <code>L</code>?</strong> (without pandas, numpy, etc. but just Python built-in modules)?</p>
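<p>To clarify the kind of thing I'm after: a single pass that computes a bucket index per event, something like the sketch below (I wrote this while asking; I'm unsure whether the boundary semantics match my original <code>m1 &lt; l &lt;= m2</code> comparison exactly):</p>

```python
import datetime

L = ["20231017_021000", "20231017_021100", "20231017_021200",
     "20231017_052800", "20231017_093100", "20231017_093900"]

Y = [0] * 48
for s in L:
    t = datetime.datetime.strptime(s, "%Y%m%d_%H%M%S")
    Y[(t.hour * 60 + t.minute) // 30] += 1  # index of the 30-minute bucket
print(Y)
```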
| <python><performance><loops><datetime><intervals> | 2023-10-17 20:07:46 | 1 | 47,388 | Basj |
77,311,784 | 16,383,578 | How to split a list of integers into two groups of integers with minimal difference of their sums? | <p>I don't know if this is a duplicate, but Google search returns nothing relevant as usual.</p>
<p>So I have a list of integers, which represents the lengths of segments. And I want to split the list of integers into two lists of integers, such that the difference between the sum of lengths of the groups is minimal.</p>
<p>This is about my GUI program:</p>
<p><a href="https://i.sstatic.net/Fc0Oy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fc0Oy.png" alt="enter image description here" /></a></p>
<p>I created the program completely by myself; it is nearing completion, and I will post it to Code Review when it is done. I developed each part separately, and now I am trying to improve the layout to minimize unused area inside the window.</p>
<p>This is a rectangle packing problem, but because the heights of the parts are equal, it simplifies to a line packing problem. I only need to split the parts into two rows and minimize the width difference of the rows.</p>
<p>By the way, many parts of the GUI are controlled by a thread and animate when triggered. And that is a live Tic Tac Toe game between two AIs I wrote. You can see my program in action <a href="https://drive.google.com/file/d/1ZlSLfhIbyYVEJQUQEFxOugi7DcbjX6yJ/view?usp=sharing" rel="nofollow noreferrer">here</a>.</p>
<p>So I tried to solve the problem, and I did find a solution, but that solution is extremely inefficient, it takes 282240 iterations to find the solution for just 8 items.</p>
<p>My mathematics isn't very good, so I chose the most direct approach: I generated every permutation of the sequence, split each permutation into every possible pair of sub-lists by index slicing, calculated the absolute difference, stored the pairs and the corresponding differences in a dictionary, and then found the minimum.</p>
<pre><code>from itertools import permutations
lengths = [240, 255, 270, 240, 220, 230, 420, 470]
items = {}
for perm in permutations(lengths):
for i in range(1, 8):
a = perm[:i]
b = perm[i:]
items[(a, b)] = abs(sum(a) - sum(b))
min(items.items(), key=lambda x: x[1])
</code></pre>
<p>I got this:</p>
<pre><code>(((240, 270, 240, 420), (255, 220, 230, 470)), 5)
</code></pre>
<p>Is there a more efficient way to do this?</p>
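<p>One reduction I did notice while writing this up: the order inside each group doesn't matter, so combinations should suffice instead of permutations, and only one group needs enumerating since the other is its complement (my own observation; it's still exponential):</p>

```python
from itertools import combinations

lengths = [240, 255, 270, 240, 220, 230, 420, 470]
total = sum(lengths)

# For a group with sum s, the complement has sum total - s,
# so the difference is |total - 2*s|.
best_diff, best_group = min(
    (abs(total - 2 * sum(c)), c)
    for r in range(1, len(lengths))
    for c in combinations(lengths, r)
)
print(best_diff)  # 5
```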
| <python><algorithm><math> | 2023-10-17 19:34:33 | 3 | 3,930 | Ξένη Γήινος |
77,311,671 | 10,565,820 | "reportMissingImports" error even though code and import works fine in VSCode | <p>Small question, but annoying: VS Code reports a "reportMissingImports" error even though the import and the code work fine. Here is the relevant portion.</p>
<pre><code>import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.ssl_ import create_urllib3_context
...
context = create_urllib3_context(ciphers=":".join(custom_cipher_suite))
</code></pre>
<p>The line importing <code>create_urllib3_context</code> is reported as an error, but the code runs fine and executes the module, as the last line shows. It also works in the Python interpreter:</p>
<pre><code>Python 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from requests.packages.urllib3.util.ssl_ import create_urllib3_context
>>> create_urllib3_context()
<ssl.SSLContext object at 0x7f6c22d2d340>
</code></pre>
<p>Any idea why this is? I would like to avoid suppressing the error wholesale, as that would also hide it in legitimate cases.</p>
<p>EDIT:
Here is a screenshot of the program and problem in VS Code:</p>
<p><a href="https://i.sstatic.net/q8JCq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q8JCq.png" alt="enter image description here" /></a></p>
<p>Here is the output:</p>
<pre><code>TLS_AES_256_GCM_SHA384
TLS_CHACHA20_POLY1305_SHA256
TLS_AES_128_GCM_SHA256
ECDHE-ECDSA-AES256-GCM-SHA384
ECDHE-RSA-AES256-GCM-SHA384
ECDHE-ECDSA-CHACHA20-POLY1305
ECDHE-RSA-CHACHA20-POLY1305
ECDHE-ECDSA-AES256-CCM8
ECDHE-ECDSA-AES256-CCM
ECDHE-ECDSA-ARIA256-GCM-SHA384
ECDHE-ARIA256-GCM-SHA384
ECDHE-ECDSA-AES128-GCM-SHA256
ECDHE-RSA-AES128-GCM-SHA256
ECDHE-ECDSA-AES128-CCM8
ECDHE-ECDSA-AES128-CCM
ECDHE-ECDSA-ARIA128-GCM-SHA256
ECDHE-ARIA128-GCM-SHA256
ECDHE-ECDSA-AES256-SHA384
ECDHE-RSA-AES256-SHA384
ECDHE-ECDSA-CAMELLIA256-SHA384
ECDHE-RSA-CAMELLIA256-SHA384
ECDHE-ECDSA-AES128-SHA256
ECDHE-RSA-AES128-SHA256
ECDHE-ECDSA-CAMELLIA128-SHA256
ECDHE-RSA-CAMELLIA128-SHA256
ECDHE-ECDSA-AES256-SHA
ECDHE-RSA-AES256-SHA
ECDHE-ECDSA-AES128-SHA
ECDHE-RSA-AES128-SHA
RSA-PSK-AES256-GCM-SHA384
DHE-PSK-AES256-GCM-SHA384
RSA-PSK-CHACHA20-POLY1305
DHE-PSK-CHACHA20-POLY1305
ECDHE-PSK-CHACHA20-POLY1305
DHE-PSK-AES256-CCM8
DHE-PSK-AES256-CCM
RSA-PSK-ARIA256-GCM-SHA384
DHE-PSK-ARIA256-GCM-SHA384
AES256-GCM-SHA384
AES256-CCM8
AES256-CCM
ARIA256-GCM-SHA384
PSK-AES256-GCM-SHA384
PSK-CHACHA20-POLY1305
PSK-AES256-CCM8
PSK-AES256-CCM
PSK-ARIA256-GCM-SHA384
RSA-PSK-AES128-GCM-SHA256
DHE-PSK-AES128-GCM-SHA256
DHE-PSK-AES128-CCM8
DHE-PSK-AES128-CCM
RSA-PSK-ARIA128-GCM-SHA256
DHE-PSK-ARIA128-GCM-SHA256
AES128-GCM-SHA256
AES128-CCM8
AES128-CCM
ARIA128-GCM-SHA256
PSK-AES128-GCM-SHA256
PSK-AES128-CCM8
PSK-AES128-CCM
PSK-ARIA128-GCM-SHA256
AES256-SHA256
CAMELLIA256-SHA256
AES128-SHA256
CAMELLIA128-SHA256
ECDHE-PSK-AES256-CBC-SHA384
ECDHE-PSK-AES256-CBC-SHA
SRP-DSS-AES-256-CBC-SHA
SRP-RSA-AES-256-CBC-SHA
SRP-AES-256-CBC-SHA
RSA-PSK-AES256-CBC-SHA384
DHE-PSK-AES256-CBC-SHA384
RSA-PSK-AES256-CBC-SHA
DHE-PSK-AES256-CBC-SHA
ECDHE-PSK-CAMELLIA256-SHA384
RSA-PSK-CAMELLIA256-SHA384
DHE-PSK-CAMELLIA256-SHA384
AES256-SHA
CAMELLIA256-SHA
PSK-AES256-CBC-SHA384
PSK-AES256-CBC-SHA
PSK-CAMELLIA256-SHA384
ECDHE-PSK-AES128-CBC-SHA256
ECDHE-PSK-AES128-CBC-SHA
SRP-DSS-AES-128-CBC-SHA
SRP-RSA-AES-128-CBC-SHA
SRP-AES-128-CBC-SHA
RSA-PSK-AES128-CBC-SHA256
DHE-PSK-AES128-CBC-SHA256
RSA-PSK-AES128-CBC-SHA
DHE-PSK-AES128-CBC-SHA
ECDHE-PSK-CAMELLIA128-SHA256
RSA-PSK-CAMELLIA128-SHA256
DHE-PSK-CAMELLIA128-SHA256
AES128-SHA
CAMELLIA128-SHA
PSK-AES128-CBC-SHA256
PSK-AES128-CBC-SHA
PSK-CAMELLIA128-SHA256
</code></pre>
| <python><visual-studio-code><pylance> | 2023-10-17 19:13:21 | 1 | 644 | geckels1 |
77,311,656 | 8,858,832 | Access parent object data member from sub-object in Python | <p>I have a setup like the following</p>
<pre><code>class decorator:
def __init__(self, fn):
self.fn = fn
@staticmethod
def wrap(fn):
return decorator(fn)
def __call__(self):
print(f"decorator {self} function called")
class A:
@decorator.wrap
def foo(self):
print(f"Object {self} called")
def __init__(self):
self.boo = 'Boo'
</code></pre>
<p>How do I access the <code>boo</code> variable from the decorator object?</p>
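<p>What I have verified so far (repeating the classes so this runs standalone): <code>foo</code> is replaced by a single shared <code>decorator</code> instance at class-definition time, so the instance <code>a</code> is never passed in and <code>boo</code> is unreachable from <code>__call__</code>:</p>

```python
class decorator:
    def __init__(self, fn):
        self.fn = fn
    @staticmethod
    def wrap(fn):
        return decorator(fn)
    def __call__(self):
        print(f"decorator {self} function called")

class A:
    @decorator.wrap
    def foo(self):
        print(f"Object {self} called")
    def __init__(self):
        self.boo = 'Boo'

a = A()
print(isinstance(a.foo, decorator))  # True: attribute lookup returns the decorator itself
print(a.foo is A.foo)                # True: no per-instance binding happens
```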
| <python><decorator><composition> | 2023-10-17 19:10:42 | 1 | 1,824 | Niteya Shah |
77,311,649 | 11,197,796 | Fastest way to assign row to dataframe in pandas groupby loop | <p>Ok so I have 2 dataframes:</p>
<pre><code>df = pd.DataFrame({'A':['German Shepherd','Border Collie','Golden Retriever','Beagle','Daschund']})
df = df.T
df.columns = df.iloc[0]
df = df.drop(df.index[0])
A German Shepherd Border Collie Golden Retriever Beagle Daschund
df2 = pd.DataFrame({'ID':['A','A','A','B','C','C','C','C','C'],
'Breed':['German Shepherd','Beagle','Dashung','Border Collie',
'German Shepherd','Border Collie','Golden Retriever','Beagle','Daschund']})
ID Breed
0 A German Shepherd
1 A Beagle
2 A Dashung
3 B Border Collie
4 C German Shepherd
5 C Border Collie
6 C Golden Retriever
7 C Beagle
8 C Daschund
</code></pre>
<p>I want to find which IDs each dog breed appears under in <code>df2</code> and then update <code>df</code> where the breed is present for that ID:</p>
<pre><code>dogs_grouped = df2.groupby('ID')
missing_dogs = []
vals = [np.nan for i in df.columns]
for group_name, df_group in dogs_grouped:
print(f'Cluster: {group_name}')
cluster_dogs = sorted(list(set(df_group['Breed'].to_list())))
cluster_dogs = [i for i in cluster_dogs if i in all_dogs]
weird_dogs = [i for i in cluster_dogs if i not in all_dogs]
missing_dogs.append(weird_dogs)
df = df.append(pd.Series(vals, index=df.columns, name=group_name))
df.loc[group_name][cluster_dogs] = 1
df = df.fillna(0)
</code></pre>
<p>My code works, but it's extremely slow for large datasets. I have a dataset with 500k rows I am iterating through and it's taking hours to create a 4000 x 30,000 matrix.</p>
<pre><code>A German Shepherd Border Collie Golden Retriever Beagle Daschund
A 1 0 0 1 0
B 0 1 0 0 0
C 1 1 1 1 1
</code></pre>
<p>There has to be a more pythonic/pandas way to approach this?</p>
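<p>For illustration, this is the kind of vectorized one-liner I suspect exists (I stumbled on <code>pd.crosstab</code> while searching, but I'm not sure it's the right or fastest tool, and it won't flag my unknown "weird" breeds):</p>

```python
import pandas as pd

df2 = pd.DataFrame({'ID': ['A', 'A', 'A', 'B', 'C', 'C', 'C', 'C', 'C'],
                    'Breed': ['German Shepherd', 'Beagle', 'Dashung', 'Border Collie',
                              'German Shepherd', 'Border Collie', 'Golden Retriever',
                              'Beagle', 'Daschund']})
# One row per ID, one column per breed, 1 if the breed occurs for that ID:
matrix = pd.crosstab(df2['ID'], df2['Breed']).clip(upper=1)
print(matrix)
```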
| <python><pandas><dataframe><numpy> | 2023-10-17 19:09:25 | 1 | 440 | skiventist |
77,311,583 | 6,423,456 | How can I recursively iterate through a directory in Python while ignoring some subdirectories? | <p>I have a directory structure on my filesystem, like this:</p>
<pre><code>folder_to_scan/
important_file_a
important_file_b
important_folder_a/
important_file_c
important_folder_b/
important_file_d
useless_folder/
...
</code></pre>
<p>I want to recursively scan through <code>folder_to_scan/</code>, and get all the file names.
At the same time, I want to ignore <code>useless_folder/</code>, and anything under it.</p>
<p>If I do something like this:</p>
<pre><code>path_to_search = Path("folder_to_scan")
[pth for pth in path_to_search.rglob("*") if pth.is_file() and 'useless_folder' not in [parent.name for parent in pth.parents]]
</code></pre>
<p>It will work (probably - I didn't bother trying), but the problem is, <code>useless_folder/</code> contains millions of files, and <code>rglob</code> will still traverse all of them, take ages, and only apply the filter when constructing the final list.</p>
<p>Is there a way to tell Python not to waste time traversing useless folders (<code>useless_folder/</code> in my case)?</p>
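<p>For comparison, I know <code>os.walk</code> allows pruning directories in place, which is the behaviour I want (sketch below); I'm mainly asking whether <code>pathlib</code> offers something equivalent:</p>

```python
import os
from pathlib import Path

def iter_files(root, skip=("useless_folder",)):
    """Yield all files under root, pruning the skipped directory names."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Mutating dirnames in place stops os.walk from descending into them.
        dirnames[:] = [d for d in dirnames if d not in skip]
        for name in filenames:
            yield Path(dirpath) / name
```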
| <python><pathlib> | 2023-10-17 18:57:38 | 3 | 2,774 | John |
77,311,427 | 17,653,423 | How to mock open file and raise if path does not exist? | <p>I'm facing a problem when testing a function that reads the first line of a file and raises an <code>Exception</code> when the path of the file doesn't exist.</p>
<p>Current code:</p>
<pre><code>from unittest.mock import patch, mock_open
from pytest import raises
from os.path import exists
def read_from_file(file_path):
if not exists(file_path):
raise Exception("File does not exists!")
with open(file_path, "r") as f:
return f.read().splitlines()[0]
@patch("builtins.open", new_callable=mock_open, read_data="Correct string\nWrong string\nWrong string")
@patch("os.path.exists", return_value=True)
def test_read_file_and_returns_the_correct_string_with_multiple_lines(mock_os, mock_file):
result = read_from_file("xyz")
mock_file.assert_called_once_with("xyz", "r")
assert result == "Correct string"
@patch("builtins.open", new_callable=mock_open, read_data="Correct string")
@patch("os.path.exists", return_value=False)
def test_throws_exception_when_file_doesnt_exist(mock_os, mock_file):
with raises(Exception):
read_from_file("xyz")
</code></pre>
<p>The decorators <code>@patch("os.path.exists", return_value=True)</code> and <code>@patch("os.path.exists", return_value=False)</code> seem to have no effect in either test.</p>
<p>How can I mock the existence of a file?</p>
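<p>One thing I noticed while debugging (not sure if it's the cause): because of the <code>from os.path import exists</code> style import, patching <code>os.path.exists</code> does not seem to affect the already-imported name:</p>

```python
from unittest.mock import patch
from os.path import exists

with patch("os.path.exists", return_value=False):
    import os.path
    print(os.path.exists("/"))  # False: the patched attribute on the module
    print(exists("/"))          # True: the name bound at import time is untouched
```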
| <python><pytest><python-unittest><python-unittest.mock><pytest-mock> | 2023-10-17 18:28:15 | 1 | 391 | Luiz |
77,311,398 | 12,027,869 | optuna: Different Results Even With Same random_state | <p>I am trying to understand why running the code below for hyperparameter tuning with optuna gives me different best parameter values even though I am running the exact same code with the same <code>random_state = 42</code>. Where is the random part coming from?</p>
<pre><code>import optuna
import sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
def objective(trial):
digits = sklearn.datasets.load_digits()
x, y = digits.data, digits.target
max_depth = trial.suggest_int("rf_max_depth", 2, 64, log=True)
max_samples = trial.suggest_float("rf_max_samples", 0.2, 1)
rf_model = RandomForestClassifier(
max_depth = max_depth,
max_samples = max_samples,
n_estimators = 50,
random_state = 42
)
score = cross_val_score(rf_model, x, y, cv=3).mean()
return score
study = optuna.create_study(direction = "maximize")
study.optimize(objective, n_trials = 3)
trial = study.best_trial
print("Best Score: ", trial.value)
print("Best Params: ")
for key, value in trial.params.items():
print(" {}: {}".format(key, value))
</code></pre>
| <python><optuna> | 2023-10-17 18:24:47 | 2 | 737 | shsh |
77,311,383 | 8,849,755 | Python get element pointed by iterator and not move | <p>Consider the iterator <code>i = iter([1,2,3,4])</code>. I am looking for some <code>current</code> function that does this:</p>
<pre><code>first = next(i) # 1
second = next(i) # 2
second_2 = current(i) # 2
third = next(i) # 3
...
</code></pre>
<p>I.e., it gives me the current element pointed by the iterator without advancing. How can I do that?</p>
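<p>The closest I have come is a small wrapper class that remembers the last yielded value (the names are mine); I'm asking whether something built-in does this for a plain iterator:</p>

```python
class Remember:
    """Wrap an iterator and keep the most recently yielded value in .current."""
    def __init__(self, iterable):
        self._it = iter(iterable)
        self.current = None  # nothing yielded yet
    def __iter__(self):
        return self
    def __next__(self):
        self.current = next(self._it)
        return self.current

i = Remember([1, 2, 3, 4])
first = next(i)       # 1
second = next(i)      # 2
second_2 = i.current  # 2, without advancing
third = next(i)       # 3
```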
| <python><iterator> | 2023-10-17 18:21:21 | 2 | 3,245 | user171780 |
77,311,352 | 9,112,151 | How to parse this simple XML? | <p>I'm trying to parse this simple XML:</p>
<pre><code>import xml.etree.ElementTree as ET
data = """
<ns2:Request xmlns:ns2="urn://www.example.com">
<ns2:User>
<ns2:Name>John</ns2:Name>
<ns2:Surname>Snow</ns2:Surname>
<ns2:Email>joihn.show@gmail.com</ns2:Email>
<ns2:Birthday>2005-10-23T04:00:00+03:00</ns2:Birthday>
</ns2:User>
</ns2:Request>
"""
namespaces = {"ns2": "urn://www.example.com"}
xml = ET.fromstring(data)
name = xml.find("ns2:Email", namespaces) # returns None
</code></pre>
<p>The <code>find</code> call returns <code>None</code>; the code above does not work as expected. How do I parse the individual <code>Name</code>, <code>Surname</code>, <code>Email</code> and <code>Birthday</code> fields?</p>
| <python><xml> | 2023-10-17 18:16:12 | 3 | 1,019 | Альберт Александров |
77,311,289 | 2,274,981 | Is there a more readable/efficient way of doing this? | <p>I have a bot that processes some metadata from a replay file (it reads the binary), from which I then parse out all the variables I require.</p>
<p>Because of how the game developer structured the files, they can be a bit confusing, so I've written some code to work out the result I need (basically appending player names to a list that is then pumped into an embed).</p>
<p>I'm curious to know if there is a much more efficient or readable way of doing what I have got working with this logic. I've covered all the eventualities from the metadata; it just looks god-awful.</p>
<pre><code> if ownervictory < 3:
for player in playersdata:
if len(playersdata) == 2:
if owneralliance == int(player['PlayerAlliance']):
loserlist.append(player)
else:
winnerlist.append(player)
elif len(playersdata) == 4:
replayPlayer = 0
if owneralliance == 2 or owneralliance == 3:
replayPlayer = 1
if replayPlayer == int(player['PlayerAlliance']):
loserlist.append(player)
else:
winnerlist.append(player)
elif len(playersdata) == 6:
replayPlayer = 0
if owneralliance == 3 or owneralliance == 4:
replayPlayer = 1
if replayPlayer == int(player['PlayerAlliance']):
loserlist.append(player)
else:
winnerlist.append(player)
elif len(playersdata) == 8:
replayPlayer = 0
if owneralliance == 4 or owneralliance == 5 or owneralliance == 6 or owneralliance == 7:
replayPlayer = 1
if replayPlayer == int(player['PlayerAlliance']):
loserlist.append(player)
else:
winnerlist.append(player)
elif ownervictory >= 3:
for player in playersdata:
if len(playersdata) == 2:
if owneralliance != int(player['PlayerAlliance']):
loserlist.append(player)
else:
winnerlist.append(player)
elif len(playersdata) == 4:
replayPlayer = 0
if owneralliance == 2 or owneralliance == 3:
replayPlayer = 1
if replayPlayer != int(player['PlayerAlliance']):
loserlist.append(player)
else:
winnerlist.append(player)
elif len(playersdata) == 6:
replayPlayer = 0
            if owneralliance == 3 or owneralliance == 4:
replayPlayer = 1
if replayPlayer != int(player['PlayerAlliance']):
loserlist.append(player)
else:
winnerlist.append(player)
elif len(playersdata) == 8:
replayPlayer = 0
if owneralliance == 4 or owneralliance == 5 or owneralliance == 6 or owneralliance == 7:
replayPlayer = 1
if replayPlayer != int(player['PlayerAlliance']):
loserlist.append(player)
else:
winnerlist.append(player)
</code></pre>
| <python> | 2023-10-17 18:05:33 | 1 | 1,149 | Lynchie |
77,311,288 | 11,354,959 | Apply a function to all dict elements of an array in python | <p>Hi, I am wondering what the best way is to apply a function to all elements of an array (where each element is a dict). I have a dict object that contains elements, one of them being another array of dicts.
I want to avoid doing the following:</p>
<pre><code>my_array = api.get_elements().json() # return an array of dict
for elem in my_array:
    for sub_elem in elem['sub_array']: # here I access the sub array
process(sub_elem)
</code></pre>
<p><code>process</code> would be a function in which I modify some values of <code>sub_elem</code>.
From what I have read so far, I could use the <code>map()</code> function:</p>
<pre><code>map_res = map(process, elem['sub_array'])
</code></pre>
<p>or I could use a list comprehension</p>
<pre><code>res = [process(sub_elem) for sub_elem in elem['sub_array']]
</code></pre>
<p>Someone then told me I could also use a generator, because it is better for memory when dealing with large data, but he didn't tell me how. So I tried it my way:</p>
<pre><code>processed_data_generator = (process(d) for d in elem['sub_array'])
def process(elem: dict):
elem['name'] = 'NAME'
for processed_dict in processed_data_generator:
print(processed_dict)
</code></pre>
<p>but I am not sure: do I have to return the item, or put in a yield statement? I can't find a way to access the result of the generator right now, and it seems like my code is not doing anything besides "initializing" a generator object.</p>
<p>What would be the correct way to deal with my problem?</p>
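For reference, a minimal self-contained sketch of how a generator version of this loop is usually consumed (the `process` here both mutates in place and returns the dict so the consumer can see the result — a guess at the intended usage, not code from the real API):

```python
def process(elem: dict) -> dict:
    # Mutate the dict in place; returning it lets the consumer see the result.
    elem["name"] = "NAME"
    return elem

my_array = [{"sub_array": [{"name": "a"}, {"name": "b"}]}]

for elem in my_array:
    # Generator expression: nothing runs until it is iterated over.
    processed = (process(sub) for sub in elem["sub_array"])
    for d in processed:
        print(d)
```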
| <python><python-3.x><dictionary><generator> | 2023-10-17 18:05:18 | 2 | 370 | El Pandario |
77,311,284 | 3,319,713 | Merge tuples in a list based on their dates items | <p>I have a list of tuples, where the first two items in each tuple are dates and the third one is always some name. I want to check: 1) whether any two or more tuples have the same name as the third item; 2) whether the two dates of a tuple fall within the two dates of any other tuple. If 1) and 2) are true, then only keep the tuple that has the same name and the longest time span between its two dates.</p>
<p>I have an example data list below:</p>
<pre><code>data = [(pd.Timestamp('2017-01-02 00:00:00'), pd.Timestamp('2017-01-21 00:00:00'), 'John'),
(pd.Timestamp('2017-01-02 00:00:00'), pd.Timestamp('2017-01-21 00:00:00'), 'John'),
(pd.Timestamp('2017-01-02 00:00:00'), pd.Timestamp('2017-02-04 00:00:00'), 'Jane'),
(pd.Timestamp('2017-01-02 00:00:00'), pd.Timestamp('2017-02-04 00:00:00'), 'John'),
(pd.Timestamp('2017-01-21 00:00:00'), pd.Timestamp('2017-02-04 00:00:00'), 'John'),
(pd.Timestamp('2017-01-01 00:00:00'), pd.Timestamp('2017-02-10 00:00:00'), 'Jane'),]
</code></pre>
<p>It should output the list below because the two tuples have dates that cover all the other tuples that have the same name (i.e., "John" or "Jane"):</p>
<pre><code>[(Timestamp('2017-01-02 00:00:00'), Timestamp('2017-02-04 00:00:00'), 'John'),
(Timestamp('2017-01-01 00:00:00'), Timestamp('2017-02-10 00:00:00'), 'Jane')]
</code></pre>
<p>However, my code as shown below</p>
<pre><code>names = set([x[2] for x in data])
to_remove = []
for i in range(len(data)):
for j in range(i+1, len(data)):
if data[i][2] == data[j][2]:
if data[i][0] >= data[j][0] and data[i][1] <= data[j][1]:
to_remove.append(j)
elif data[j][0] >= data[i][0] and data[j][1] <= data[i][1]:
to_remove.append(i)
data = [x for i,x in enumerate(data) if i not in set(to_remove)]
</code></pre>
<p>outputs the wrong answer:</p>
<pre><code>[(Timestamp('2017-01-02 00:00:00'), Timestamp('2017-01-21 00:00:00'), 'John'),
(Timestamp('2017-01-02 00:00:00'), Timestamp('2017-02-04 00:00:00'), 'Jane'),
(Timestamp('2017-01-21 00:00:00'), Timestamp('2017-02-04 00:00:00'), 'John')]
</code></pre>
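To restate the rule independently of my attempt: a tuple should be dropped when another tuple with the same name spans at least the same interval but is not an exact-span duplicate, and exact duplicates should be kept once. A plain-Python sketch of that rule (using `datetime` objects in place of pandas Timestamps for brevity — a restatement, not my real code):

```python
from datetime import datetime

data = [
    (datetime(2017, 1, 2), datetime(2017, 1, 21), "John"),
    (datetime(2017, 1, 2), datetime(2017, 1, 21), "John"),
    (datetime(2017, 1, 2), datetime(2017, 2, 4), "Jane"),
    (datetime(2017, 1, 2), datetime(2017, 2, 4), "John"),
    (datetime(2017, 1, 21), datetime(2017, 2, 4), "John"),
    (datetime(2017, 1, 1), datetime(2017, 2, 10), "Jane"),
]

def covered(t, u):
    """True when u has the same name as t and u's span contains t's span."""
    return u[2] == t[2] and u[0] <= t[0] and u[1] >= t[1]

# Drop t when some same-named u strictly contains it (different span).
result = [
    t for t in data
    if not any(covered(t, u) and (u[0], u[1]) != (t[0], t[1]) for u in data)
]
# Collapse exact duplicates while preserving order.
result = list(dict.fromkeys(result))
print(result)
```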
| <python><python-3.x><pandas> | 2023-10-17 18:04:31 | 3 | 3,206 | Blue482 |
77,311,257 | 7,200,174 | Check if Dataframe Values are Equal across specific columns | <p><strong>CONTEXT</strong></p>
<p>I have two Dataframes and I want to validate that each DataFrame has the same value for each ID. I want to create a dataframe of all the rows/outliers that don't match.</p>
<p>The IDs are not in the same order, and the dataframes do not have the same number of rows.</p>
<p><strong>SAMPLE DATA</strong></p>
<pre><code>DF1
ID State Occupation
111 AZ Doctor
222 NY Teacher
333 MO Analyst
444 NC Nurse
DF2
ID State Occupation
111 AZ Doctor
222 NY Teacher
333 MO Analyst
444 NC Student <---- It should flag this
</code></pre>
<p>The actual dataset is about 30,000+ rows. Is it possible to do this based on the ID, checking only specific columns such as "State" and "Occupation" across the two dataframes?</p>
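A hedged sketch of the kind of check I have in mind (merging on ID and flagging the rows where the chosen columns disagree — column names follow the sample above):

```python
import pandas as pd

df1 = pd.DataFrame({"ID": [111, 222, 333, 444],
                    "State": ["AZ", "NY", "MO", "NC"],
                    "Occupation": ["Doctor", "Teacher", "Analyst", "Nurse"]})
df2 = pd.DataFrame({"ID": [222, 111, 333, 444],
                    "State": ["NY", "AZ", "MO", "NC"],
                    "Occupation": ["Teacher", "Doctor", "Analyst", "Student"]})

check_cols = ["State", "Occupation"]
merged = df1.merge(df2, on="ID", how="inner", suffixes=("_1", "_2"))

# A row is an outlier if any checked column disagrees between the two frames.
mismatch = (merged[[f"{c}_1" for c in check_cols]].to_numpy()
            != merged[[f"{c}_2" for c in check_cols]].to_numpy())
outliers = merged[mismatch.any(axis=1)]
print(outliers)
```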
| <python><pandas><dataframe><compare> | 2023-10-17 17:59:05 | 3 | 331 | KL_ |
77,311,076 | 10,270,246 | How to construct a Python object with multiple inheritance? | <p>I succeeded in constructing a Python object with multiple inheritance, but I feel like I didn't do it the proper way...</p>
<p>Here is my code :</p>
<pre class="lang-py prettyprint-override"><code>class A:
def __init__(self, a):
print("A ctor called!")
self.a_ = a
def printa(self):
print(self.a_)
class B:
def __init__(self, b):
print("B ctor called!")
self.b_ = b
def printb(self):
print(self.b_)
class C(A, B):
def __init__(self, a, b, c):
self.a_ = a
self.b_ = b
self.c_ = c
</code></pre>
<p>If I type :</p>
<pre class="lang-py prettyprint-override"><code>c = C(11, 22, 33)
c.printb()
</code></pre>
<p>The output is :</p>
<pre class="lang-py prettyprint-override"><code>22
</code></pre>
<p>which is what I wanted. But the thing that puzzles me is that I managed to build an object from my class <code>C</code> without calling the constructors of classes <code>A</code> and <code>B</code>...</p>
<p>Is this normal, or did I do something wrong?</p>
<p>I think what I should do is use the <code>super()</code> function, but I don't know how to use it with multiple inheritance to call both parents <code>A</code> and <code>B</code>...</p>
<p>Cheers</p>
<p>EDIT: My question has been labelled as a duplicate, but I'm not sure it totally is one. The topic <a href="https://stackoverflow.com/questions/9575409/calling-parent-class-init-with-multiple-inheritance-whats-the-right-way">Calling parent class __init__ with multiple inheritance, what's the right way?</a> doesn't deal with constructors that take parameters. But in my question, it's the parameters that cause the problem.</p>
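For reference, my current understanding of the cooperative pattern as a sketch — each <code>__init__</code> consumes its own parameter and forwards the leftover keyword arguments along the MRO; this may not be the only correct way to do it:

```python
class A:
    def __init__(self, a, **kwargs):
        super().__init__(**kwargs)  # continue along the MRO
        print("A ctor called!")
        self.a_ = a

class B:
    def __init__(self, b, **kwargs):
        super().__init__(**kwargs)
        print("B ctor called!")
        self.b_ = b

class C(A, B):
    def __init__(self, a, b, c):
        # One super() call walks the whole MRO: A.__init__ runs,
        # whose own super() call then reaches B.__init__.
        super().__init__(a=a, b=b)
        self.c_ = c

c = C(11, 22, 33)
print(c.a_, c.b_, c.c_)
```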
| <python><multiple-inheritance> | 2023-10-17 17:29:03 | 0 | 633 | Autechre |
77,311,036 | 13,132,728 | How to merge pandas dataframes (on a common key) that are dictionary values | <p>I have a dictionary <code>data</code> where the values are pandas dataframes:</p>
<pre><code>{'x': player x1
0 foo x_val
1 bar x_val
'y': player y1
0 foo y_val
1 bar y_val
'z': player z1
0 foo z_val
1 bar z_val
}
</code></pre>
<p>I would like to merge all of these dataframes on <code>player</code> and get the following result:</p>
<pre><code> player x1 y1 z1
0 foo x_val y_val z_val
1 bar x_val y_val z_val
</code></pre>
<p>Yes, I know I could just index them and merge like so:</p>
<pre><code>df = (data['x']
.merge(data['y'],on='player',how='outer')
.merge(data['z'],on='player',how='outer')
)
</code></pre>
<p>But the problem with this is that the number of items in <code>data</code> is variable. Sometimes I might have all of <code>x</code>, <code>y</code>, and <code>z</code>, sometimes I might not have <code>z</code>, and sometimes I might also have <code>a</code>, etc., so I can't brute-force it like this.</p>
<p>I tried using a try except block, but that didn't work. I thought maybe I could use a for loop and <code>enumerate()</code> to try and initiate these merges, but that didn't work either.</p>
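For context, the shape of solution I was imagining — a sketch that folds however many frames are present with <code>functools.reduce</code>, assuming every frame has a <code>player</code> column (not something I have gotten working in my real code):

```python
from functools import reduce
import pandas as pd

data = {
    "x": pd.DataFrame({"player": ["foo", "bar"], "x1": ["x_val", "x_val"]}),
    "y": pd.DataFrame({"player": ["foo", "bar"], "y1": ["y_val", "y_val"]}),
    "z": pd.DataFrame({"player": ["foo", "bar"], "z1": ["z_val", "z_val"]}),
}

# Fold the variable number of frames into one outer merge on "player".
df = reduce(
    lambda left, right: left.merge(right, on="player", how="outer"),
    data.values(),
)
print(df)
```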
| <python><pandas><dictionary><merge> | 2023-10-17 17:22:28 | 0 | 1,645 | bismo |
77,311,031 | 6,013,700 | CVXPY finds no solution (unbounded) to a problem that definitely has a solution - how to debug | <p>I encountered the following special case where CVXPY was returning a <code>None</code> solution, with status <code>unbounded</code>, for a problem that definitely has a solution.</p>
<p>It doesn't seem like I can enter LaTeX, so <code>@</code> signifies matrix multiplication, <code>.T</code> means transpose, lowercase letters are column vectors and capital letters are matrices. I'm looking to solve a minimization (or also argmin) problem of the form:</p>
<pre><code>min_w ( w.T @ A @ w + 2 w.T @ b )
</code></pre>
<p>If <code>A</code> is symmetric positive definite, this should have a solution. If <code>A</code> is symmetric, we can complete the square and reformulate this as</p>
<pre><code>min_w ( (U.T @ w + c).T @ (U.T @ w + c) - c.T @ c )
</code></pre>
<p>with <code>c = U^(-1) @ b</code>, and <code>A = U @ U.T</code>, where <code>U</code> is the eigenvector matrix multiplied by the diagonal matrix containing the square roots of the eigenvalues. We can calculate <code>c</code> and <code>U</code> if <code>A</code> is symmetric PD.</p>
<p>However, in the following example, I have a symmetric positive definite <code>A</code>, and yet CVXPY fails to find a solution</p>
<pre><code>import numpy as np
import cvxpy as cp
A = np.array([
[4.06361264e-05, 4.78152602e-05, 3.58988931e-05, 3.45978540e-05, 4.56347190e-05, 3.30402841e-05],
[4.78152602e-05, 6.20243040e-05, 4.44641914e-05, 4.39168274e-05, 5.87695986e-05, 4.13444571e-05],
[3.58988931e-05, 4.44641914e-05, 5.11820897e-05, 3.16788337e-05, 4.22214735e-05, 3.07681460e-05],
[3.45978540e-05, 4.39168274e-05, 3.16788337e-05, 3.43880152e-05, 4.18058439e-05, 3.05503889e-05],
[4.56347190e-05, 5.87695986e-05, 4.22214735e-05, 4.18058439e-05, 5.68791131e-05, 3.93696860e-05],
[3.30402841e-05, 4.13444571e-05, 3.07681460e-05, 3.05503889e-05, 3.93696860e-05, 3.06327553e-05]
])
K = np.array([
[4.37566619e-05, 2.69688463e-05, 4.38180809e-05],
[5.45281929e-05, 3.46869714e-05, 5.63081373e-05],
[4.43561108e-05, 2.42773248e-05, 4.05177865e-05],
[3.92084931e-05, 2.55688144e-05, 3.98470925e-05],
[5.20176190e-05, 3.29607102e-05, 5.30698420e-05],
[3.72710707e-05, 2.41940081e-05, 3.80516226e-05]
])
h = np.array([ 5151.2868695 , 1970.57848573, 10322.36834168])
b = K @ h
w = cp.Variable(len(A))
objective_1 = cp.Minimize(w @ A @ w + 2 * w @ b)
constraints = [True]
prob = cp.Problem(objective_1, constraints)
prob.solve()
solution = w.value
print(f"Solution: {solution}, status: {prob.status}")
Solution: None, status: unbounded
</code></pre>
<p>The condition number of A isn't particularly bad.</p>
<p><strong>My questions</strong>:</p>
<ul>
<li>What could be the cause of issues like this? Some sort of floating point arithmetic problem?</li>
<li>I'm not very experienced with CVXPY and quadratic programming, but how would you go about debugging a case where a solver can't find a solution when it should have one?</li>
<li>Last alternative: was I maybe wrong and am I missing a reason why this optimization problem really doesn't have a solution?</li>
</ul>
<p>Also, I'm not looking for the solution itself; it can be calculated analytically. I'm just trying to understand what aspect of CVXPY and optimization I'm misunderstanding here.</p>
<p>This is not a duplicate - other questions are also asking why they get an unbounded solution but not with this optimization problem.</p>
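For illustration, the kind of analytic sanity check I mean, on a toy PD matrix rather than my actual <code>A</code>: when A is symmetric positive definite, the unconstrained minimizer of <code>w.T @ A @ w + 2 w.T @ b</code> exists in closed form as <code>w* = -A^(-1) b</code>, since setting the gradient <code>2 A w + 2 b</code> to zero gives it directly:

```python
import numpy as np

# Toy symmetric positive-definite matrix (an assumption for this sketch).
A = np.array([[2.0, 0.5],
              [0.5, 3.0]])
b = np.array([1.0, -1.0])

# Check positive definiteness via eigenvalues of the symmetric part.
eigvals = np.linalg.eigvalsh((A + A.T) / 2)
assert eigvals.min() > 0, "A is not positive definite"

# Closed-form unconstrained minimizer: gradient 2*A@w + 2*b = 0.
w_star = -np.linalg.solve(A, b)
grad = 2 * A @ w_star + 2 * b
print(w_star, np.linalg.norm(grad))
```

A finite gradient-zero point confirms the problem is bounded below, so a solver returning "unbounded" points at how it sees the problem, not at the problem itself.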
| <python><optimization><cvxpy><convex-optimization><quadratic-programming> | 2023-10-17 17:22:11 | 0 | 1,602 | Marses |
77,310,945 | 20,266,647 | Issue seeing xticks with subplots | <p>I have a problem seeing the x-axis tick labels for the first subplot (I used Axes.set_xticks). See this sample of code:</p>
<pre><code>import matplotlib.axes
import matplotlib.pyplot as plt
y_lst=[100,200,150,500]
x_lst=[2,4,8,16]
plt.style.use("bmh") #"ggplot" "seaborn-v0_8-poster"
fig, ax = plt.subplots(2, 1, sharex="all", squeeze=False, figsize=(15, 6))
ax_cur: matplotlib.axes.Axes = ax[0][0]
plt.suptitle("Performance",weight='bold', fontsize=18, ha="center", va="top")
ax_cur.set_title("title", fontsize=14,ha="center", va="top")
ax_cur.plot(x_lst, y_lst, color='green', linestyle="-")
ax_cur.legend()
ax_cur.set_ylabel('RTF [calls/sec]')
ax_cur.set_xticks(x_lst)
plt.grid()
plt.show()
</code></pre>
<p>See the output:
<a href="https://i.sstatic.net/yife9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yife9.png" alt="enter image description here" /></a></p>
<p>Has anyone solved the same problem?</p>
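For completeness, the workaround I have been experimenting with — re-enabling the bottom tick labels on the top axes after <code>sharex</code> hides them; I am not sure this is the intended API usage:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

y_lst = [100, 200, 150, 500]
x_lst = [2, 4, 8, 16]

fig, ax = plt.subplots(2, 1, sharex="all", squeeze=False, figsize=(15, 6))
ax_top = ax[0][0]
ax_top.plot(x_lst, y_lst, color="green", linestyle="-")
ax_top.set_xticks(x_lst)
# sharex hides the upper subplot's x tick labels; turn them back on.
ax_top.xaxis.set_tick_params(labelbottom=True)
fig.canvas.draw()
labels = [t.get_text() for t in ax_top.get_xticklabels()]
print(labels)
```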
| <python><matplotlib><subplot> | 2023-10-17 17:07:27 | 2 | 1,390 | JIST |
77,310,849 | 7,457,808 | Mesh smoothing/minimal surface algorithm, ideally in python | <p>I am in need of an algorithm for approximating a minimal surface (with some added constraints) starting from an initial triangular 3D mesh. For now, I would be quite satisfied with a reproduction of what is implemented in Blender as "smooth vertices".</p>
<p>I believe that effectively one would move every node in the direction of the normal vectors (or edges?) of neighboring faces, probably weighting them by the adjacent angle. But I am not sure what really matches the minimal surface criterion best. In addition, I would love to have a reference implementation to see how others handle the stepsize/convergence and "in-face-smoothing" issues.</p>
<p>I am working with python, but in the end it is more about the algorithm than about a preexisting suitable implementation.</p>
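To make the kind of operation I mean concrete, here is a rough sketch of uniform Laplacian smoothing — each vertex moves a fraction λ toward the mean of its edge-connected neighbours. I understand this only approximates a minimal surface, and the weighting here is uniform rather than angle-based:

```python
import numpy as np

def laplacian_smooth(vertices, faces, lam=0.5, iterations=10):
    """Uniform Laplacian smoothing of a triangle mesh (a rough sketch).

    vertices: (n, 3) floats; faces: (m, 3) vertex indices. Each iteration
    moves every vertex a fraction `lam` toward the mean of its neighbours.
    Boundary handling and angle weighting are ignored here.
    """
    n = len(vertices)
    neighbours = [set() for _ in range(n)]
    for i, j, k in faces:  # collect edge-connected neighbours
        neighbours[i] |= {j, k}
        neighbours[j] |= {i, k}
        neighbours[k] |= {i, j}

    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        means = np.array([v[sorted(nb)].mean(axis=0) if nb else v[idx]
                          for idx, nb in enumerate(neighbours)])
        v += lam * (means - v)
    return v

# A square pyramid: smoothing should pull the apex toward the base.
pyramid = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0], [0.5, 0.5, 1]]
faces = [[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]]
smoothed = laplacian_smooth(pyramid, faces, lam=0.5, iterations=5)
print(smoothed[4])
```

Blender's "smooth vertices" behaves roughly like repeated applications of this; a closer minimal-surface approximation would replace the uniform weights with cotangent weights.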
| <python><algorithm><computational-geometry><mesh> | 2023-10-17 16:53:01 | 1 | 360 | Franz |
77,310,823 | 3,878,377 | How to extract best parameters from a crossvalidator in pyspark | <p>I have a question regarding extracting the hyperparameters from a crossvalidator in PySpark. I have tried every single solution on Stack Overflow and none of them works for me. I am not sure whether those answers are outdated or I am making a mistake somewhere.</p>
<p>I can perform distributed training and prediction successfully with SparkXGBRegressor, I can do cross validation using:</p>
<pre><code> crossval = CrossValidator(estimator=estimator,
estimatorParamMaps=parm_grid,
evaluator=evaluator,
numFolds=numFolds,
parallelism = parallelism)
cv_model = crossval.fit(data)
</code></pre>
<p>The <code>param_grid</code> is</p>
<pre><code>param_grid = ParamGridBuilder()\
.addGrid(estimator.n_estimators, [100, 200]) \
.addGrid(estimator.max_depth, [4,6])\
.build()
</code></pre>
<p>However, after training I want to get the best hyperparameters, and I am not sure how to do that, since this is not working:</p>
<pre><code>cv_model.bestModel
</code></pre>
<p>and returns something like this:</p>
<pre><code>SparkXGBRegressor_7651797ba9ba
</code></pre>
<p>This does not make any sense. Can anyone explain how to return the best hyperparameters, similar to scikit-learn in Python, which is like</p>
<pre><code>cv_model.best_params_
</code></pre>
<p>I am sure there is a way to do this.</p>
<p>I have looked at answers such as</p>
<p><a href="https://stackoverflow.com/questions/36697304/how-to-extract-model-hyper-parameters-from-spark-ml-in-pyspark">How to extract model hyper-parameters from spark.ml in PySpark?</a></p>
<p><a href="https://stackoverflow.com/questions/39529012/pyspark-get-all-parameters-of-models-created-with-paramgridbuilder">Pyspark - Get all parameters of models created with ParamGridBuilder</a></p>
<p><a href="https://stackoverflow.com/questions/31749593/how-to-extract-best-parameters-from-a-crossvalidatormodel">How to extract best parameters from a CrossValidatorModel</a></p>
<p>None worked for me. I would appreciate if anyone can help me here.</p>
| <python><apache-spark><pyspark><scikit-learn><cross-validation> | 2023-10-17 16:49:24 | 0 | 1,013 | user59419 |
77,310,754 | 4,655,853 | lambda function Runtime.ImportModuleError: Unable to import module 'lambda_function': No module named 'cerberus' | <p>I am trying to use Cerberus in AWS lambda but running into the below error when trying to execute the code. I created a custom layer and pointed it to zip file in S3 downloaded from <a href="https://pypi.org/project/Cerberus/#files" rel="nofollow noreferrer">here</a>. Runtime env for lambda and layer is <code>python3.11</code>. Any ideas on how to fix this and use Cerberus in lambda?</p>
<blockquote>
<p>[ERROR] Runtime.ImportModuleError: Unable to import module
'lambda_function': No module named 'cerberus'</p>
</blockquote>
<pre><code>from cerberus import Validator
...
v = Validator(schema)
...
</code></pre>
<p>Zip file structure:</p>
<p><a href="https://i.sstatic.net/YkFKI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YkFKI.jpg" alt="enter image description here" /></a></p>
| <python><aws-lambda><cerberus> | 2023-10-17 16:35:25 | 1 | 387 | user0000 |
77,310,400 | 6,595,663 | Join or concat Polars DataFrames without knowing the matching column names | <p>I am using the polars library in python and have two data frames that look like this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
data1 = {
'A': [1, 2, 3],
'B': [4, 5, 6],
'C': [7, 8, 9]
}
df1 = pl.DataFrame(data1)
data2 = {
'B': [4, 5, 6],
'C': [7, 8, 9],
'D': [1, 2, 3]
}
df2 = pl.DataFrame(data2)
# the column B and C are same in both data frames
# TODO: Join/Concat the data frames into one.
</code></pre>
<p>The data2 can vary: sometimes it has 2 common columns, sometimes 1 common column, and sometimes more.</p>
<p>The result should look like:</p>
<pre class="lang-py prettyprint-override"><code>result = {
'A': [1, 2, 3],
'B': [4, 5, 6],
'C': [7, 8, 9],
'D': [1, 2, 3]
}
pl.DataFrame(result)
</code></pre>
<pre><code>shape: (3, 4)
┌─────┬─────┬─────┬─────┐
│ A ┆ B ┆ C ┆ D │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╪═════╡
│ 1 ┆ 4 ┆ 7 ┆ 1 │
│ 2 ┆ 5 ┆ 8 ┆ 2 │
│ 3 ┆ 6 ┆ 9 ┆ 3 │
└─────┴─────┴─────┴─────┘
</code></pre>
<p>I am not quite sure how to <code>join</code> or <code>concat</code> the polars data frames in order to achieve this.</p>
| <python><dataframe><python-polars> | 2023-10-17 15:40:26 | 1 | 3,271 | BhanuKiran |
77,310,361 | 10,573,543 | Why I am getting blank response from the AWS lambda function even though AWS lambda test works properly? | <h4>Description</h4>
<hr />
<blockquote>
<p>I am creating an intermediary service between my frontend and my backend using an AWS Lambda function.
The job of the function is to take the payload from the UI and send it to the backend; if the backend is live, return the <code>200</code> status (progress of an ML model, short polling). If the backend is not live because its pod is restarting, use the previous JSON saved inside a <code>response.json</code> file and return that back to the frontend until the pod is restarted and live to serve.
Here is my Lambda function.</p>
</blockquote>
<hr />
<p>My AWS Lambda code</p>
<pre class="lang-py prettyprint-override"><code>import json
import os
import requests
import boto3
# Initialize the S3 client
s3 = boto3.client('s3')
def save_json_file_to_s3(response_payload, filename="response.json"):
# Name of your S3 bucket
bucket_name = 'mys3store'
s3.put_object(Bucket=bucket_name, Key=filename, Body=json.dumps(response_payload))
def read_json_file_from_s3(filename="response.json"):
bucket_name = 'mys3store'
response = s3.get_object(Bucket=bucket_name, Key=filename)
content = response['Body'].read().decode('utf-8')
response_payload= json.loads(content)
return response_payload
def lambda_handler(event, context):
"""Forwards a POST request to another service and redirects back to the first microservice.
Args:
event: The Lambda event.
context: The Lambda context.
Returns:
A JSON response object.
"""
print(event)
print(type(event))
print("=========================================================")
# Get the request data from the event.
try:
request_data = json.loads(event["body"])
except KeyError as e:
request_data = event
# Get the destination URL from the request data.
dest_url = request_data["dest_url"]
# Forward the POST request to the destination service.
response = requests.post(dest_url, json=request_data)
# Check the response status code.
if response.status_code == 200:
# Get the response payload.
response_payload = json.loads(response.content)
print("++++++++++++++++++++++++++++++++++")
print(response_payload)
print("++++++++++++++++++++++++++++++++++")
# Save the response payload to a temporary file.
save_json_file_to_s3(response_payload)
# Redirect back to the first microservice with the response payload.
return {
"statusCode": 200,
"body": json.dumps(response_payload),
"headers": {
"Content-Type": "application/json"
}
}
elif response.status_code == 502:
# Read the response payload from the temporary file.
response_payload = read_json_file_from_s3()
# Redirect back to the first microservice with the response payload.
return {
"statusCode": 200,
"body": json.dumps(response_payload),
"headers": {
"Content-Type": "application/json"
}
}
else:
# Return an error response.
return {
"statusCode": 500,
"body": json.dumps({
"error": "Unexpected response from destination service."
}),
"headers": {
"Content-Type": "application/json"
}
}
</code></pre>
<hr />
<p>Now to test the lambda I created a test event <code>checkLiveOrNot</code></p>
<pre><code>{
"app": "my_app",
"session_id": "3432428374827347234",
"table": "progress",
"user_id": "myaccount@gmail.cm",
"flag": false,
"project_name": "mybig1gbproject",
"user_name": "john doe",
"file_name": "",
"fit_progress": 0,
"model_fit": 0,
"total_fit": 0,
"model_calc": 0,
"total_calc": 0,
"calc_progress": 0,
"model_arch": "lambda",
"dest_url": "https://mybackendservice/api/maincore/progress"
}
</code></pre>
<p>and invoked it, and it returned the proper output in the Lambda test console.</p>
<pre><code>Test Event Name
checkLiveOrNot
Response
{
"statusCode": 200,
"body": "{\"_id\": \"34347tyi738483h8d73h\", \"session_id\": \"3432428374827347234\", \"data\": 0, \"model_no\": 0, \"table\": \"progress\", \"total_fit\": 1, \"total_calc\": 1, \"orc_fail\": 0, \"model_progress\": 0.0, \"current_model\": 0, \"model_progress\": 100.0, \"current_calc\": 1}",
"headers": {
"Content-Type": "application/json"
}
}
Function Logs
START RequestId: 0d820ce34rtry9-2e11-420dsdfa-961e-c4434515b18f Version: $LATEST
{'app': 'my-app', 'session_id': '34347tyi738483h8d73h', 'table': 'progress', 'user_id': 'danish.xavier@iqvia.com', 's3_flag': False, 'project_name': 'checksampleforapitesting', 'user_name': 'danish.xavier', 'file_name': '', 'model_fit_progress': 0, 'current_model_fit': 0, 'total_model_fit': 0, 'current_model_calc': 0, 'total_model_calc': 0, 'model_calc_progress': 0, 'model_arch': 'eks', 'dest_url': 'https://promo-internal-promo-usv-demotemp-env.mmx.mco.solutions.iqvia.com/api/core/progress_lambda'}
<class 'dict'>
=========================================================
++++++++++++++++++++++++++++++++++
{'_id': '34347tyi738483h8d73h', 'session_id': '34347tyi738483h8d73h', 'data': 0, 'model_no': 0, 'table': 'progress', 'total_fit': 1, 'total_calc': 1, 'orc_fail': 0, 'model_progress': 0.0, 'current_model': 0, 'model_progress': 100.0, 'current_calc': 1}
++++++++++++++++++++++++++++++++++
END RequestId: 0d820ce9-2e11-420a-961e-c4434515b18f
REPORT RequestId: 0d82dfdf0ce9-2dfdfe11-42dfdf0a-961e-c44345dfdf15b18f Duration: 281.79 ms Billed Duration: 282 ms Memory Size: 128 MB Max Memory Used: 54 MB Init Duration: 324.60 ms
Request ID
0d820ce9-2e11-420a-961e-c4434515b18f
</code></pre>
<blockquote>
<p>Now when I send a POST request to this Lambda via Postman with the same payload, I get a 200 response but nothing shows on the Postman screen. I also tried sending the request with Python; here is the code.</p>
</blockquote>
<hr />
<pre class="lang-py prettyprint-override"><code>import json
import requests
# Create the request headers
headers = {
'Content-Type': 'application/json'
}
# Create the request body
payload = {
"app": "my_app",
"session_id": "3432428374827347234",
"table": "progress",
"user_id": "myaccount@gmail.cm",
    "flag": False,
"project_name": "mybig1gbproject",
"user_name": "john doe",
"file_name": "",
"fit_progress": 0,
"model_fit": 0,
"total_fit": 0,
"model_calc": 0,
"total_calc": 0,
"calc_progress": 0,
"model_arch": "lambda",
"dest_url": "https://mybackendservice/api/maincore/progress"
}
# Send the POST request
response = requests.post(
'https://8973043wa7lrta.execute-api.us-east-1.amazonaws.com/dev/progress_lambda',
headers=headers,
json=payload
)
# Check the response status code
if response.status_code == 200:
# The request was successful
print('Request successful')
# Get the response payload
response_payload = response.content
print(response_payload)
</code></pre>
<p>The Output from the program:</p>
<pre><code>Request successful
b''
</code></pre>
| <python><json><amazon-web-services><aws-lambda> | 2023-10-17 15:33:26 | 2 | 1,166 | Danish Xavier |
77,310,349 | 13,184,183 | How to take sample from pyspark dataframe by column? | <p>I have a very large pyspark dataframe with two columns of interest: a <code>date</code> column and an <code>id</code> column. The dataframe is partitioned by <code>date</code>. I want to take a sample from that dataset such that if some <code>id</code> is in the sample, then all rows with that <code>id</code> are in the sample.</p>
<p>I can think of the following method:</p>
<pre><code>id_df = df[['id']].sample(fraction = 0.001) # can be duplicates, ok for now
sampled_df = id_df.join(df, on='id')
</code></pre>
<p>Is there a way to do it faster?</p>
| <python><pyspark> | 2023-10-17 15:32:09 | 0 | 956 | Nourless |
77,310,131 | 7,211,014 | ansible proper way to split an extended variable and create new variable to use in other tasks? | <p><strong>Running ansible 2.10.17</strong></p>
<p>I keep getting <code>% Invalid input detected at '^' marker.</code> when trying to run a role.
Here is my code. Note that "filter", "trust", and "untrust" are passed in with <code>-e</code> as extra variables.</p>
<pre class="lang-yaml prettyprint-override"><code>- name: Include vars
include_vars:
file: ../../../settings.yaml
name: settings
- name: Extract tags from trust + untrust variables
set_fact:
trust_tag_filter: "{ {{ trust }}.split(':')[0] }"
untrust_tag_filter: "{ {{ untrust }}.split(':')[0] }"
- name: Routers configuration, filter "{{ filter }}", with trust tag filter "{{ trust_tag_filter }}", and untrust tag filter "{{ untrust_tag_filter }}"
iosxr_command:
commands:
- show run formal object-group network ipv4 | u egrep {{ filter }}
- show run ipv4 access-list acl_trust_in | u egrep {{ filter }}
- show run formal interface | u egrep "BVI{{ trust_tag_filter }}"
- show run formal interface | u egrep "BVI{{ untrust_tag_filter }}"
- show run formal interface | u egrep "Bundle-Ether21.{{ trust_tag_filter }}"
- show run formal interface | u egrep "Bundle-Ether21.{{ untrust_tag_filter }}"
- show run formal interface | u egrep "Bundle-Ether22.{{ trust_tag_filter }}"
- show run formal interface | u egrep "Bundle-Ether22.{{ untrust_tag_filter }}"
- show run formal interface | u egrep "Bundle-Ether23.{{ trust_tag_filter }}"
- show run formal interface | u egrep "Bundle-Ether23.{{ untrust_tag_filter }}"
- show run formal interface | u egrep "Bundle-Ether24.{{ trust_tag_filter }}"
- show run formal interface | u egrep "Bundle-Ether24.{{ untrust_tag_filter }}"
- show run l2vpn bridge group TRUST bridge-domain "VLAN{{ trust_tag_filter }}"
- show run l2vpn bridge group UNTRUST bridge-domain "VLAN{{ untrust_tag_filter }}"
- show run router static address-family ipv4 unicast | utility egrep 'router\ static|family|{{settings.lan_subnet_prefix}}{{filter}}|10.30.{{filter}}'
- show evpn evi | u egrep 'VPN-ID|{{ trust_tag_filter }}|{{ untrust_tag_filter }}'
register: response
- debug: msg="{{ response.stdout }}"
</code></pre>
<p>But when I run it I keep getting this error</p>
<pre><code>The full traceback is:
File "/tmp/ansible_iosxr_command_payload_kxd7v36e/ansible_iosxr_command_payload.zip/ansible_collections/cisco/iosxr/plugins/module_utils/network/iosxr/iosxr.py", line 586, in run_commands
return connection.run_commands(commands=commands, check_rc=check_rc)
File "/tmp/ansible_iosxr_command_payload_kxd7v36e/ansible_iosxr_command_payload.zip/ansible/module_utils/connection.py", line 200, in __rpc__
raise ConnectionError(to_text(msg, errors='surrogate_then_replace'), code=code)
fatal: [router.our.domain]: FAILED! => changed=false
invocation:
module_args:
commands:
- show run formal object-group network ipv4 | u egrep 89
- show run ipv4 access-list acl_trust_in | u egrep 89
- show run formal interface | u egrep "BVI{ 1234:192.168.20.0/29.split(':')[0] }"
- show run formal interface | u egrep "BVI{ 1235:192.168.30.0/29.split(':')[0] }"
- show run formal interface | u egrep "Bundle-Ether21.{ 1234:192.168.20.0/29.split(':')[0] }"
- show run formal interface | u egrep "Bundle-Ether21.{ 1235:192.168.30.0/29.split(':')[0] }"
- show run formal interface | u egrep "Bundle-Ether22.{ 1234:192.168.20.0/29.split(':')[0] }"
- show run formal interface | u egrep "Bundle-Ether22.{ 1235:192.168.30.0/29.split(':')[0] }"
- show run formal interface | u egrep "Bundle-Ether23.{ 1234:192.168.20.0/29.split(':')[0] }"
- show run formal interface | u egrep "Bundle-Ether23.{ 1235:192.168.30.0/29.split(':')[0] }"
- show run formal interface | u egrep "Bundle-Ether24.{ 1234:192.168.20.0/29.split(':')[0] }"
- show run formal interface | u egrep "Bundle-Ether24.{ 1235:192.168.30.0/29.split(':')[0] }"
- show run l2vpn bridge group TRUST bridge-domain "VLAN{ 1234:192.168.20.0/29.split(':')[0] }"
- show run l2vpn bridge group UNTRUST bridge-domain "VLAN{ 1235:192.168.30.0/29.split(':')[0] }"
- show run router static address-family ipv4 unicast | utility egrep 'router\ static|family|10.35.89|10.30.89'
- show evpn evi | u egrep 'VPN-ID|{ 1234:192.168.20.0/29.split(':')[0] }|{ 1235:192.168.30.0/29.split(':')[0] }'
interval: 1
match: all
provider: null
retries: 10
wait_for: null
msg: |-
show run formal interface | u egrep "BVI{ 1234:192.168.20.0/29.split(':')[0] }"
^
% Invalid input detected at '^' marker.
RP/0/RP0/CPU0:router#
</code></pre>
<p>Why? Am I not defining the variables correctly?</p>
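For reference, here is the <code>set_fact</code> form I now suspect was intended — a guess on my part: the whole expression sits inside one pair of double braces, with no extra braces around it:

```yaml
- name: Extract tags from trust + untrust variables
  set_fact:
    trust_tag_filter: "{{ trust.split(':')[0] }}"
    untrust_tag_filter: "{{ untrust.split(':')[0] }}"
```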
| <python><variables><split><ansible><cisco> | 2023-10-17 15:01:11 | 1 | 1,338 | Dave |
77,310,109 | 3,015,186 | How to check if a datetime value is valid (and not in time gap caused by DST change)? | <p>Daylight Saving Time (DST) changes usually occur twice a year. During fall, clocks are moved backwards, which creates a fold, meaning that a given local time has two possible meanings. This is covered in <a href="https://peps.python.org/pep-0495/" rel="nofollow noreferrer">PEP-495</a>. During spring, clocks are moved forwards, which creates a gap in the possible local time values, typically one hour long.</p>
<p>For example in "Europe/London" timezone the possible minutes at 2023-03-26 near the DST change are: 00:58, 00:59, 02:00, 02:01; The values 01:00 to 01:59 do not exist.</p>
<h2>Example</h2>
<p>Here is an example of cases which should be detected as valid (possible) and impossible:</p>
<pre class="lang-py prettyprint-override"><code>import datetime as dt
from zoneinfo import ZoneInfo
timestamp_possible = dt.datetime.strptime(
"2023-03-26 00:30:00", "%Y-%m-%d %H:%M:%S"
).replace(tzinfo=ZoneInfo("Europe/London"))
timestamp_impossible = dt.datetime.strptime(
"2023-03-26 01:30:00", "%Y-%m-%d %H:%M:%S"
).replace(tzinfo=ZoneInfo("Europe/London"))
</code></pre>
<p>Here is how I could do it using <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer">pandas.to_datetime</a>, but I'm not entirely sure <em>why</em> it works and if that is an implementation detail of pandas (which might change?) or guaranteed behaviour.</p>
<pre class="lang-py prettyprint-override"><code>def is_valid(timestamp):
    return pd.to_datetime(timestamp).to_pydatetime() == timestamp
</code></pre>
<p>which gives</p>
<pre class="lang-py prettyprint-override"><code>>>> is_valid(timestamp_possible)
True
>>> is_valid(timestamp_impossible)
False
</code></pre>
<h2>Question</h2>
<p>What would be the simplest possible way to detect if a given <code>datetime.datetime</code> object is valid or hitting the "nonexistent time values" gap? Is there some way to do this in Python standard library?</p>
| <python><datetime><timezone><dst> | 2023-10-17 14:58:07 | 1 | 35,267 | Niko Fohr |
77,310,098 | 3,105,520 | fit function expected 2D array, got 1D array instead | <p>I am using Linear Regression to predict the 'Value' throughout the years. I have the following data, imported from a CSV file:
<code>df.head()</code>:</p>
<pre><code>LOCATION INDICATOR SUBJECT MEASURE FREQUENCY TIME Value Flag Codes
0 IRL LTINT TOT PC_PA M 1.167610e+18 4.04 NaN
1 IRL LTINT TOT PC_PA M 1.170288e+18 4.07 NaN
2 IRL LTINT TOT PC_PA M 1.172707e+18 3.97 NaN
3 IRL LTINT TOT PC_PA M 1.175386e+18 4.19 NaN
4 IRL LTINT TOT PC_PA M 1.177978e+18 4.32 NaN
</code></pre>
<p>The format of the TIME column was YYYY-DD-MM before I used the following</p>
<pre><code>df['TIME'] = pd.to_datetime(df['TIME'])
df['TIME'] = pd.to_numeric(df['TIME'])
df['TIME'] = df['TIME'].astype(float)
</code></pre>
<p>In order to fit the data, I used the following code:</p>
<pre><code>X = df['TIME']
Y = df['Value']
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.4, random_state=101)
lm = LinearRegression()
</code></pre>
<p>When executing the fit function <em><strong>lm.fit(X_train, Y_train)</strong></em>, I got the following error:</p>
<pre><code>ValueError                                Traceback (most recent call last)
Cell In[12], line 1
----> 1 lm.fit(X_train, Y_train)
File ~\anaconda3\Lib\site-packages\sklearn\base.py:1151, in _fit_context.<locals>.decorator.<locals>.wrapper(estimator, *args, **kwargs)
1144 estimator._validate_params()
1146 with config_context(
1147 skip_parameter_validation=(
1148 prefer_skip_nested_validation or global_skip_validation
1149 )
1150 ):
-> 1151 return fit_method(estimator, *args, **kwargs)
File ~\anaconda3\Lib\site-packages\sklearn\linear_model\_base.py:678, in LinearRegression.fit(self, X, y, sample_weight)
674 n_jobs_ = self.n_jobs
676 accept_sparse = False if self.positive else ["csr", "csc", "coo"]
--> 678 X, y = self._validate_data(
679 X, y, accept_sparse=accept_sparse, y_numeric=True, multi_output=True
680 )
682 has_sw = sample_weight is not None
683 if has_sw:
File ~\anaconda3\Lib\site-packages\sklearn\base.py:621, in BaseEstimator._validate_data(self, X, y, reset, validate_separately, cast_to_ndarray, **check_params)
619 y = check_array(y, input_name="y", **check_y_params)
620 else:
--> 621 X, y = check_X_y(X, y, **check_params)
622 out = X, y
624 if not no_val_X and check_params.get("ensure_2d", True):
File ~\anaconda3\Lib\site-packages\sklearn\utils\validation.py:1147, in check_X_y(X, y, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, estimator)
1142 estimator_name = _check_estimator_name(estimator)
1143 raise ValueError(
1144 f"{estimator_name} requires y to be passed, but the target y is None"
1145 )
-> 1147 X = check_array(
1148 X,
1149 accept_sparse=accept_sparse,
1150 accept_large_sparse=accept_large_sparse,
1151 dtype=dtype,
1152 order=order,
1153 copy=copy,
1154 force_all_finite=force_all_finite,
1155 ensure_2d=ensure_2d,
1156 allow_nd=allow_nd,
1157 ensure_min_samples=ensure_min_samples,
1158 ensure_min_features=ensure_min_features,
1159 estimator=estimator,
1160 input_name="X",
1161 )
1163 y = _check_y(y, multi_output=multi_output, y_numeric=y_numeric, estimator=estimator)
1165 check_consistent_length(X, y)
File ~\anaconda3\Lib\site-packages\sklearn\utils\validation.py:940, in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator, input_name)
938 # If input is 1D raise error
939 if array.ndim == 1:
--> 940 raise ValueError(
941 "Expected 2D array, got 1D array instead:\narray={}.\n"
942 "Reshape your data either using array.reshape(-1, 1) if "
943 "your data has a single feature or array.reshape(1, -1) "
944 "if it contains a single sample.".format(array)
945 )
947 if dtype_numeric and hasattr(array.dtype, "kind") and array.dtype.kind in "USV":
948 raise ValueError(
949 "dtype='numeric' is not compatible with arrays of bytes/strings."
950 "Convert your data to numeric values explicitly instead."
951 )
ValueError: Expected 2D array, got 1D array instead:
array=[1.3437792e+18 1.6198272e+18 1.3596768e+18 ... 1.3596768e+18 1.5805152e+18
1.5751584e+18].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
</code></pre>
<p>Do you have any idea how I can resolve this error?
Thank you in advance.</p>
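<p>As a minimal sketch of the usual fix (the data below is made up to stand in for the question's data frame): scikit-learn expects <code>X</code> to be two-dimensional, shape <code>(n_samples, n_features)</code>. Selecting the feature column with a <em>list</em> of names keeps it 2-D; <code>X.values.reshape(-1, 1)</code> would work equally well.</p>

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for the asker's data frame
df = pd.DataFrame({"TIME": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
                   "Value": [4.04, 4.07, 3.97, 4.19, 4.32, 4.40]})

# df[["TIME"]] (note the double brackets) is a DataFrame of shape
# (n_samples, 1); df["TIME"] would be a 1-D Series and trigger the error.
X = df[["TIME"]]
Y = df["Value"]

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.4, random_state=101)
lm = LinearRegression()
lm.fit(X_train, Y_train)
print(lm.coef_.shape)  # (1,) — one coefficient for the single feature
```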
| <python><pandas><linear-regression><reshape><data-fitting> | 2023-10-17 14:57:10 | 2 | 421 | Najoua |
77,310,054 | 11,665,178 | How to authenticate with a service account or OAuth consent automatically for Google Drive API? | <p>I am trying to make a simple POST request like the following:</p>
<pre><code>POST https://www.googleapis.com/drive/v3/changes/watch
Authorization: Bearer auth_token_for_current_user
Content-Type: application/json
{
"id": "4ba78bf0-6a47-11e2-bcfd-0800200c9a77", // Your channel ID.
"type": "web_hook",
"address": "https://www.example.com/notifications", // Your receiving URL.
...
"token": "target=myApp-myChangesChannelDest", // (Optional) Your changes channel token.
"expiration": 1426325213000 // (Optional) Your requested channel expiration date and time.
}
</code></pre>
<p>This request comes from this <a href="https://developers.google.com/drive/api/guides/push" rel="nofollow noreferrer">guide</a>, and I need the <code>auth_token_for_current_user</code> value from authentication.</p>
<p>I need to make this request from Python, but I am unable to find documentation on how to use a client library, or on how to obtain an authenticated token to use in a POST request with the <code>requests</code> package.</p>
<p>My question is: how do I authenticate for this API? I would prefer an automated solution over an OAuth consent screen that I have to fill in manually.</p>
<p>Thanks in advance</p>
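<p>A sketch of the non-interactive route (the channel id, receiver URL, and file path below are hypothetical, and the setup is an assumption): a service account avoids the consent screen entirely — the <code>google-auth</code> package can mint the bearer token, which then goes in the <code>Authorization</code> header. The helper below only assembles the request pieces so the shape is clear; the token-minting calls are shown as comments.</p>

```python
import json

def build_watch_request(access_token: str, channel_id: str, address: str):
    """Assemble URL, headers and JSON body for Drive changes.watch (illustrative)."""
    url = "https://www.googleapis.com/drive/v3/changes/watch"
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"id": channel_id, "type": "web_hook", "address": address})
    return url, headers, body

# Obtaining the token non-interactively (sketch; assumes a service account
# key file with the Drive scope, and `pip install google-auth`):
#   from google.oauth2 import service_account
#   from google.auth.transport.requests import Request
#   creds = service_account.Credentials.from_service_account_file(
#       "service-account.json",
#       scopes=["https://www.googleapis.com/auth/drive"])
#   creds.refresh(Request())
#   token = creds.token
# The actual call would then be: requests.post(url, headers=headers, data=body)

url, headers, body = build_watch_request("TOKEN", "my-channel", "https://example.com/hook")
print(headers["Authorization"])  # Bearer TOKEN
```

<p>Note that for a service account to see your files, the relevant Drive folders must be shared with the service account's email address.</p>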
| <python><google-drive-api> | 2023-10-17 14:51:55 | 1 | 2,975 | Tom3652 |
77,310,043 | 5,924,264 | How to prefilter a list of dicts that I want to insert into a sql table where each dict has an extra column | <p>I have a sqlite3 connection, and I would like to insert a list of dicts into a SQL table.
However, this list of dicts may have an extra key that isn't part of the SQL table schema.</p>
<p>Obviously, I can pre-filter the list of dicts to remove this key prior to using <code>executemany</code>, and that's what I'm currently doing. Is there a native way for sqlite3 to handle this filtering for me? If I just try to insert the dicts without filtering, it fails and says I provided more columns than necessary.</p>
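<p>One thing worth checking (a sketch — the table and keys below are made up): with <em>named</em> placeholders, sqlite3 only looks up the keys the statement actually references, so extra dict keys appear to be tolerated without any pre-filtering. The "more values supplied" failure mode described above is typical of positional <code>?</code> placeholders.</p>

```python
import sqlite3

rows = [
    {"name": "ada", "age": 36, "note": "not in the schema"},
    {"name": "bob", "age": 41, "note": "also never consulted"},
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT, age INTEGER)")

# The statement names only :name and :age, so the extra "note" key in
# each dict is never looked up — no pre-filtering pass needed.
con.executemany("INSERT INTO people (name, age) VALUES (:name, :age)", rows)

print(con.execute("SELECT name, age FROM people ORDER BY name").fetchall())
```

<p>Worth verifying on your exact Python version, but this behaviour has held across recent CPython releases.</p>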
| <python><sql><sqlite> | 2023-10-17 14:50:33 | 2 | 2,502 | roulette01 |
77,309,956 | 5,231,110 | Compare two strings by meaning using LLMs | <p>I'd like to use some of the good large language models to estimate how similar the <strong>meanings</strong> of two strings are, for example "cat" and "someone who likes to play with yarn", or "cat" and "car".</p>
<p>Maybe some libraries provide a function for comparing strings, or we could implement some method such as measuring the similarity of their embeddings in a deep layer or whatever is appropriate.</p>
<p>I hope that something without much boilerplate code is possible. Something like:</p>
<pre class="lang-py prettyprint-override"><code>import language_models, math
my_llm = language_models.load('llama2')
print(math.dist(
my_llm.embedding('cat'),
my_llm.embedding('someone who likes to play with yarn')))
</code></pre>
<p>Ideally, it should be easy to try different recent LLMs. (In the "example" above, that would mean replacing <code>'llama2'</code> by another model name.)</p>
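<p>As a sketch of the embedding-distance idea (cosine similarity is the usual choice over raw Euclidean distance for text embeddings; the vectors below are made up, and the commented-out model call shows one real option via the <code>sentence-transformers</code> package):</p>

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction,
    # 0.0 means orthogonal (unrelated), independent of vector magnitude.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# With a real embedding model the vectors would come from, e.g.:
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("all-MiniLM-L6-v2")
#   v1, v2 = model.encode(["cat", "someone who likes to play with yarn"])
v1, v2 = [1.0, 0.0, 1.0], [0.9, 0.1, 0.8]  # made-up stand-in vectors
print(cosine_similarity(v1, v2))
```

<p>Swapping models is then a one-line change of the model name string, which matches the "easy to try different LLMs" requirement.</p>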
| <python><neural-network><artificial-intelligence><large-language-model> | 2023-10-17 14:37:12 | 3 | 2,936 | root |
77,309,900 | 4,399,016 | Saving multiple Pandas data frames to different work sheets of a Single Excel file | <p>I have this code:</p>
<pre><code>import pandas as pd
import yfinance as yF
import datetime
from functools import reduce
def get_returns_change_period(tic,com,OHLC_COL1,OHLC_COL2):
df_Stock = yF.download(tickers = tic, period = "38y", interval = "1mo", prepost = False, repair = False)
df_Stock['MONTH'] = pd.to_datetime(df_Stock.index)
df_Stock = df_Stock.sort_values(by='MONTH')
df_Stock[com + ' % Change '+'3M'] = (((df_Stock[OHLC_COL1].shift(2) - df_Stock[OHLC_COL2]))/df_Stock[OHLC_COL2]) *100
df_Stock[com + ' % Change '+'3M'] = (df_Stock[com + ' % Change '+'3M'] * 100 )/df_Stock[OHLC_COL2]
df_Stock[com + ' % Change '+'2M'] = (((df_Stock[OHLC_COL1].shift(1) - df_Stock[OHLC_COL2]))/df_Stock[OHLC_COL2]) *100
df_Stock[com + ' % Change '+'2M'] = (df_Stock[com + ' % Change '+'2M'] * 100 )/df_Stock[OHLC_COL2]
df_Stock[com + ' % Change '+'M'] = (((df_Stock[OHLC_COL1].shift(0) - df_Stock[OHLC_COL2]))/df_Stock[OHLC_COL2]) *100
df_Stock[com + ' % Change '+'M'] = (df_Stock[com + ' % Change '+'M'] * 100 )/df_Stock[OHLC_COL2]
return df_Stock.filter(regex='% Change')
</code></pre>
<p>The function is then called like this:</p>
<pre><code>get_returns_change_period('XOM','EXXON MOBIL','High','Open')
</code></pre>
<p>df_Industry contains 2 different companies and corresponding ticker symbols.</p>
<pre><code>df_Industry = pd.DataFrame({'ID':['1', '2'], 'Ticker': ['AIN', 'TILE'], 'Company':['Albany International', 'Interface']})
df1 = df_Industry.apply(lambda x: get_returns_change_period(x.Ticker, x.Company, 'High','Open'), axis=1)
</code></pre>
<p>A lambda passed to <code>apply</code> supplies each company's ticker and name from the data frame as parameters to the function.</p>
<pre><code>merged_df = pd.concat(df1.tolist(), axis=1)
merged_df
#merged_df.to_excel("output.xlsx")
#merged_df.to_csv("output.csv", index=False)
</code></pre>
<p>This gives a single pandas data frame with both company details.</p>
<p>How can I store the results in a single Excel file, in two separate worksheets named after the company names from <code>df_Industry</code>?</p>
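<p>A sketch of the usual pattern (the frames below are toy stand-ins for the output of <code>get_returns_change_period</code>, and writing <code>.xlsx</code> assumes the openpyxl engine is installed): open one <code>pd.ExcelWriter</code> and call <code>to_excel</code> once per frame with a per-company <code>sheet_name</code>. Excel caps sheet names at 31 characters, hence the slice.</p>

```python
import pandas as pd

df_Industry = pd.DataFrame({"Ticker": ["AIN", "TILE"],
                            "Company": ["Albany International", "Interface"]})
# Toy stand-ins for the per-company return frames
frames = [pd.DataFrame({"% Change M": [1.2, 3.4]}),
          pd.DataFrame({"% Change M": [5.6, 7.8]})]

# One writer, one sheet per company; the writer saves on exiting the block.
with pd.ExcelWriter("output.xlsx") as writer:
    for company, frame in zip(df_Industry["Company"], frames):
        frame.to_excel(writer, sheet_name=company[:31])
```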
| <python><pandas><excel><dataframe> | 2023-10-17 14:29:04 | 1 | 680 | prashanth manohar |
77,309,857 | 19,299,757 | How to pass parameter values to lambda event from API call | <p>I have a Lambda function which runs a Docker image and creates a CSV file. This Lambda function expects 5 arguments, which are passed from the command line when invoking the Lambda as shown below.
These 5 parameters are configured in an input.json file and passed to the CLI.</p>
<pre><code>aws lambda invoke --function-name dev-poc-csv-utility-CsvUtility-v1 --payload file://input.json --cli-binary-format raw-in-base64-out response.json
</code></pre>
<p>This works great.
Now I've enabled a REST API on this Lambda function.</p>
<p>This is my lambda function</p>
<pre><code>import json

def respond(err, res=None):
return {
'statusCode': '400' if err else '200',
        'body': str(err) if err else json.dumps(res),
'headers': {
'Content-Type': 'application/json',
},
}
def lambda_handler(event, context):
# Logic for creating csv file here and upload it to S3 bucket
return respond(None, res="Success")
</code></pre>
<p>When I test this Lambda from the AWS console, I get "Internal Server Error":
<code>Lambda execution failed with status 200 due to customer function error: 'rows'</code></p>
<p>I know this is because the parameters are not being sent to the Lambda correctly. I am new to APIs and I can't figure out how to pass these 5 parameters when calling the API for this Lambda.
The 5 parameters are "rows", "client", "insType", "Source", "email" respectively.</p>
<p>Any help is very much appreciated.</p>
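<p>A sketch that may help (assuming Lambda <em>proxy</em> integration, which is what the API Gateway console sets up by default for REST APIs — that part is an assumption about your setup): the JSON you previously passed via <code>--payload</code> arrives as a JSON <em>string</em> in <code>event["body"]</code>, so the handler must decode it itself; query-string parameters, if used instead, arrive in <code>event["queryStringParameters"]</code>.</p>

```python
import json

def lambda_handler(event, context):
    # Under proxy integration the POST body is a JSON string; query-string
    # parameters come pre-parsed as a dict (or None when absent).
    body = json.loads(event.get("body") or "{}")
    params = event.get("queryStringParameters") or {}

    def get(name):
        # Prefer the JSON body, fall back to the query string.
        return body.get(name, params.get(name))

    payload = {name: get(name)
               for name in ("rows", "client", "insType", "Source", "email")}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }

# Simulating the event API Gateway would deliver for a POST:
event = {"body": json.dumps({"rows": 10, "client": "acme", "insType": "auto",
                             "Source": "s3", "email": "a@b.c"})}
print(lambda_handler(event, None)["statusCode"])  # 200
```

<p>With this in place the caller just POSTs the same JSON that used to live in input.json.</p>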
| <python><aws-lambda> | 2023-10-17 14:23:32 | 1 | 433 | Ram |
77,309,717 | 9,891,565 | Version controller in websocket with fastapi | <p>So, I want to use a router with the websocket call.
This is what I have at the moment (using dummy data).</p>
<p>version -> v1 -> <code>__init__.py</code></p>
<pre><code>router.include_router(websocket_controller, prefix="/ws")
router_without_prefix.include_router(websocket_controller)
</code></pre>
<p>version -> v1 -> <code>websocket_controller.py</code>
So this is the endpoint:</p>
<pre><code>router = APIRouter()
@router.websocket("/ws/something")
async def websocket_endpoint(websocket: WebSocket):
</code></pre>
<p>but I want to remove this line:
<code>router_without_prefix.include_router(websocket_controller)</code>. But when I remove this line and call the websocket, I get a 403. Can someone explain how I can correct this, please?</p>
| <python><fastapi> | 2023-10-17 14:04:31 | 1 | 487 | rainbow12 |
77,309,676 | 6,423,456 | Does Django DRF have a way to do read-only tokens? | <p>I have a Django REST API using Django REST Framework.
I want to use <a href="https://github.com/encode/django-rest-framework/blob/f56b85b7dd7e4f786e0769bba6b7609d4507da83/rest_framework/authentication.py#L151" rel="nofollow noreferrer">TokenAuthentication</a>, but with two kinds of tokens: read-only tokens, which have access only to GET methods, and read-write tokens, which have access to everything (POST, PUT, DELETE, etc.).</p>
<p>Does DRF already have something for this? Or would I just create my own <a href="https://github.com/encode/django-rest-framework/blob/f56b85b7dd7e4f786e0769bba6b7609d4507da83/rest_framework/authtoken/models.py#L9" rel="nofollow noreferrer">Token</a> model with an extra field that keeps track of the type of token, and use my own version of <code>TokenAuthentication</code> that returns a 403 if the token is READ_ONLY, and the method is other than a GET method?</p>
<p>I don't want to be re-inventing the wheel here if this already exists.</p>
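<p>As far as I know DRF has no built-in token scoping, so a custom <code>Token</code> model plus either a custom authentication class or (simpler) a permission class is the usual route. A framework-free sketch of the permission logic (the <code>read_only</code> flag is a hypothetical field you would add to your token model; in DRF this check would live in a <code>BasePermission.has_permission</code>, using <code>rest_framework.permissions.SAFE_METHODS</code>):</p>

```python
# Mirrors rest_framework.permissions.SAFE_METHODS
SAFE_METHODS = ("GET", "HEAD", "OPTIONS")

def token_may_perform(method: str, read_only: bool) -> bool:
    # Read-write tokens can do anything; read-only tokens are limited
    # to the safe (non-mutating) HTTP methods.
    return (not read_only) or method.upper() in SAFE_METHODS

print(token_may_perform("GET", read_only=True))     # True
print(token_may_perform("DELETE", read_only=True))  # False
```

<p>Using a permission class rather than returning 403 from the authentication class keeps authentication ("who are you?") separate from authorization ("what may you do?"), which is how DRF factors these concerns.</p>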
| <python><django><django-rest-framework> | 2023-10-17 14:00:03 | 1 | 2,774 | John |