QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,260,702 | 11,246,348 | How do I know if my `concurrent.futures.ThreadPoolExecutor` is actually running in parallel? | <p>I am trying to parallelize a for loop that appends to a list in Python, using the <code>concurrent.futures.ThreadPoolExecutor</code> class. My code works, but I'm not sure if it's actually <em>doing</em> multithreading. According to Python's documentation <a href="https://docs.python.org/3/library/concurrent.futures.html" rel="nofollow noreferrer">here</a>, the <code>.map(func, iterable)</code> method of <code>Executor</code> instances should apply the function to each element of the iterable, with the calls potentially executing concurrently:</p>
<pre><code>map(func, *iterables, timeout=None, chunksize=1)
Similar to map(func, *iterables) except:
- the iterables are collected immediately rather than lazily;
- func is executed asynchronously and several calls to func may
be made concurrently.
</code></pre>
<p>But the output of my code appears in the same order as the iterable, which makes me think it isn't actually running in parallel:</p>
<pre class="lang-py prettyprint-override"><code>from concurrent.futures import ThreadPoolExecutor
import itertools
def foo(xy):
x, y = xy
return f"{x} + {y} = {x + y}"
if __name__ == "__main__":
n = 8
with ThreadPoolExecutor(4) as e:
prod = itertools.product(range(0, n), range(n, 2 * n))
lst = list(e.map(foo, prod))
assert len(lst) == n * n # make sure we got them all
print(lst)
</code></pre>
<p>The output is</p>
<pre><code>> python main.py
['0 + 8 = 8', '0 + 9 = 9', '0 + 10 = 10', '0 + 11 = 11', '0 + 12 = 12', '0 + 13 = 13', '0 + 14 = 14', '0 + 15 = 15', '1 + 8 = 9', '1 + 9 = 10', '1 + 10 = 11', '1 + 11 = 12', '1 + 12 = 13', '1 + 13 = 14', '1 + 14 = 15', '1 + 15 = 16', '2 + 8 = 10', '2 + 9 = 11', '2 + 10 = 12', '2 + 11 = 13', '2 + 12 = 14', '2 + 13 = 15', '2 + 14 = 16', '2 + 15 = 17', '3 + 8 = 11', '3 + 9 = 12', '3 + 10 = 13', '3 + 11 = 14', '3 + 12 = 15', '3 + 13 = 16', '3 + 14 = 17', '3 + 15 = 18', '4 + 8 = 12', '4 + 9 = 13', '4 + 10 = 14', '4 + 11 = 15', '4 + 12 = 16', '4 + 13 = 17', '4 + 14 = 18', '4 + 15 = 19', '5 + 8 = 13', '5 + 9 = 14', '5 + 10 = 15', '5 + 11 = 16', '5 + 12 = 17', '5 + 13 = 18', '5 + 14 = 19', '5 + 15 = 20', '6 + 8 = 14', '6 + 9 = 15', '6 + 10 = 16', '6 + 11 = 17', '6 + 12 = 18', '6 + 13 = 19', '6 + 14 = 20', '6 + 15 = 21', '7 + 8 = 15', '7 + 9 = 16', '7 + 10 = 17', '7 + 11 = 18', '7 + 12 = 19', '7 + 13 = 20', '7 + 14 = 21', '7 + 15 = 22']
</code></pre>
<p>which is ordered exactly the same way as the product iterator.</p>
<p>Compare, for instance, the following Julia code:</p>
<pre><code>using Base.Threads
foo(x, y)::String = "$x + $y = $(x + y)"
function main()
vec = String[]
n = 8
tuples = Iterators.product(1:n, n+1:2n) |> collect
@threads for (x, y) in tuples
push!(vec, foo(x, y))
end
@assert length(vec) == n * n
display(vec)
end
main()
</code></pre>
<p>The output is not in the same order as the iterator, which implies that
the code was executed in parallel—which is what I want:</p>
<pre><code>> julia --threads=auto main.jl
64-element Vector{String}:
"1 + 9 = 10"
"2 + 9 = 11"
"3 + 9 = 12"
"4 + 9 = 13"
"5 + 9 = 14"
"6 + 9 = 15"
"3 + 13 = 16"
"4 + 13 = 17"
"5 + 13 = 18"
"6 + 13 = 19"
"7 + 13 = 20"
"5 + 10 = 15"
"6 + 10 = 16"
"7 + 10 = 17"
"8 + 10 = 18"
"1 + 11 = 12"
"2 + 11 = 13"
"5 + 14 = 19"
"6 + 14 = 20"
...
</code></pre>
<h2>Questions</h2>
<ul>
<li>How can I tell if my Python code is actually running in parallel?</li>
<li>If it isn't, how can I make it so?</li>
</ul>
<hr />
<p><a href="https://stackoverflow.com/questions/2846653/how-do-i-use-threading-in-python">This question</a> shows how to obtain similar behavior to Julia's <code>@threads</code> using <code>from multiprocessing.dummy import Pool</code>, but I am trying to learn the <code>concurrent.futures.ThreadPoolExecutor</code> class, hence the new question.</p>
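<p>Worth noting: <code>Executor.map</code> hands results back in the order of the input iterable, regardless of which call finished first, so ordered output by itself says nothing about concurrency. A small timing sketch (a generic I/O-bound stand-in for <code>foo</code>, not the original code) makes the overlap visible:</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_double(x):
    # Sleeping releases the GIL, so the threads genuinely overlap here.
    time.sleep(0.2)
    return x * 2

start = time.perf_counter()
with ThreadPoolExecutor(4) as pool:
    results = list(pool.map(slow_double, range(8)))
elapsed = time.perf_counter() - start

# 8 tasks of 0.2 s each on 4 threads finish in roughly 0.4 s rather than 1.6 s,
# even though map() yields the results in input order.
print(results, round(elapsed, 2))
```

<p>Note that the sleeping stand-in releases the GIL; pure-Python CPU-bound work will not run in parallel on threads, in which case <code>ProcessPoolExecutor</code> is the usual escape hatch.</p>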
| <python> | 2023-10-09 17:19:52 | 2 | 887 | Max |
77,260,645 | 7,077,532 | Write All Filenames in Specific Folder to Pandas Dataframe Table Column | <p>Let's say I have the following folder which contains .txt and .wav audio files.</p>
<p><a href="https://i.sstatic.net/CQWRk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CQWRk.png" alt="enter image description here" /></a></p>
<p>I want to create a dataframe table that has each row of data as one of those .wav files. It would look something like this as a sample output:</p>
<p><a href="https://i.sstatic.net/Mnm0m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mnm0m.png" alt="enter image description here" /></a></p>
<p>In total, there should be 460 rows of data, one for each .wav file.</p>
<p>How do I do this in Python code? I've searched all around StackOverflow but can't find an example of this.</p>
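<p>A minimal sketch of one way to do this with <code>pathlib</code> and pandas. It builds a throwaway folder with sample files so it is runnable as-is; in practice <code>folder</code> would point at the real directory:</p>

```python
import tempfile
from pathlib import Path
import pandas as pd

# Throwaway folder with sample files, standing in for the real one.
folder = Path(tempfile.mkdtemp())
for name in ["a.wav", "b.wav", "notes.txt"]:
    (folder / name).touch()

# Collect only the .wav filenames into a one-column dataframe.
wav_files = sorted(p.name for p in folder.glob("*.wav"))
df = pd.DataFrame({"filename": wav_files})
print(df)
```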
| <python><dataframe><import><directory><filenames> | 2023-10-09 17:09:12 | 1 | 5,244 | PineNuts0 |
77,260,536 | 4,911,426 | Run ADB Shell and then interact with console app in Python | <p>I have a situation where I need to run the <code>adb shell</code> command which takes me to the root access of my device.</p>
<p>Then I run the interactive console app against my device using this command: <code>/oem/console --no-logging</code>.</p>
<p>The command returns this output:</p>
<ul>
<li>Console Process: connecting to application...</li>
<li>Console Process: connected to application...</li>
</ul>
<p>After this, I need to PRESS ENTER so that the console accepts events (the arrow cursor is ready to accept invoke events). This console app accepts certain events with which I can simulate my device's behavior. For example:</p>
<ol>
<li><code>invoke put.device.into.charge 1</code></li>
<li><code>invoke call.cell.hangup 1</code></li>
<li><code>invoke xyz</code></li>
</ol>
<p>So my question is, how do I one-by-one enter these commands into this console app and wait a couple of seconds after each command?</p>
<p>This is the code I have so far:</p>
<pre><code>import subprocess
from subprocess import Popen

def run_adb_shell_command(command, return_process=False):
"""Run an ADB shell command."""
cmd = command.split()
process = Popen(cmd, stdin=subprocess.PIPE)
if return_process:
return process
def send_command_to_console_app(command, process):
"""SEND Commands to Console App."""
new_line = '\n'.encode()
# encoded_command = str(command).encode()
s0 = command
s1 = 'event 1'
s2 = 'event 2'
s3 = 'event 3'
concat_query = "{}\n".format(s0)
process.communicate(input=concat_query.encode('utf-8'))[0]
process.kill()
</code></pre>
<p>The problem is that when the <code>communicate()</code> function is called for one of the events, it just hangs and never finishes, so I can't run the next event.</p>
<p>What is the best approach for this?</p>
| <python><android><python-3.x> | 2023-10-09 16:46:51 | 1 | 556 | Shahboz |
77,260,375 | 13,000,229 | `pip install pmdarima` fails due to missing Numpy even though Numpy exists | <h2>Problem</h2>
<p><code>pip install pmdarima</code> failed due to <code>ModuleNotFoundError: No module named 'numpy'</code> even though another part of the log says <code>Requirement already satisfied: numpy>=1.21.2 in c:\users\my.name\code\prediction\.venv\lib\site-packages (from pmdarima) (1.26.0)</code>.</p>
<p>I tried (1) reinstalling Numpy and (2) running <code>pip install -I pmdarima --no-cache-dir --force-reinstall</code>, but the result was the same.</p>
<p>I don't understand why this happens or how to solve this issue.</p>
<h2>Environment</h2>
<ul>
<li>Windows 11 (I used Power Shell)</li>
<li>Python 3.12.0 (I downloaded Python from Microsoft Store)</li>
<li>Numpy 1.26.0</li>
</ul>
<h2>Error message</h2>
<pre><code>(.venv) PS C:\Users\my.name\code\prediction> pip install pmdarima
Collecting pmdarima
Using cached pmdarima-2.0.3.tar.gz (630 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: joblib>=0.11 in c:\users\my.name\code\prediction\.venv\lib\site-packages (from pmdarima) (1.3.2)
Collecting Cython!=0.29.18,!=0.29.31,>=0.29 (from pmdarima)
Obtaining dependency information for Cython!=0.29.18,!=0.29.31,>=0.29 from https://files.pythonhosted.org/packages/f0/a7/42116e4be098b5ae75669b76ad62216e2f67c5a9b8f87d6aa2b99bc9f9d7/Cython-3.0.3-cp312-cp312-win_amd64.whl.metadata
Using cached Cython-3.0.3-cp312-cp312-win_amd64.whl.metadata (3.2 kB)
Requirement already satisfied: numpy>=1.21.2 in c:\users\my.name\code\prediction\.venv\lib\site-packages (from pmdarima) (1.26.0)
Requirement already satisfied: pandas>=0.19 in c:\users\my.name\code\prediction\.venv\lib\site-packages (from pmdarima) (2.1.1)
Requirement already satisfied: scikit-learn>=0.22 in c:\users\my.name\code\prediction\.venv\lib\site-packages (from pmdarima) (1.3.1)
Requirement already satisfied: scipy>=1.3.2 in c:\users\my.name\code\prediction\.venv\lib\site-packages (from pmdarima) (1.11.3)
Collecting statsmodels>=0.13.2 (from pmdarima)
Obtaining dependency information for statsmodels>=0.13.2 from https://files.pythonhosted.org/packages/a5/59/a4c19b49684ca2a469d7cd1a5682950e327c95c68e13aeea15533e576a8e/statsmodels-0.14.0-cp312-cp312-win_amd64.whl.metadata
Using cached statsmodels-0.14.0-cp312-cp312-win_amd64.whl.metadata (9.3 kB)
Requirement already satisfied: urllib3 in c:\users\my.name\code\prediction\.venv\lib\site-packages (from pmdarima) (2.0.6)
Collecting setuptools!=50.0.0,>=38.6.0 (from pmdarima)
Obtaining dependency information for setuptools!=50.0.0,>=38.6.0 from https://files.pythonhosted.org/packages/bb/26/7945080113158354380a12ce26873dd6c1ebd88d47f5bc24e2c5bb38c16a/setuptools-68.2.2-py3-none-any.whl.metadata
Using cached setuptools-68.2.2-py3-none-any.whl.metadata (6.3 kB)
Requirement already satisfied: python-dateutil>=2.8.2 in c:\users\my.name\code\prediction\.venv\lib\site-packages (from pandas>=0.19->pmdarima) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in c:\users\my.name\code\prediction\.venv\lib\site-packages (from pandas>=0.19->pmdarima) (2023.3.post1)
Requirement already satisfied: tzdata>=2022.1 in c:\users\my.name\code\prediction\.venv\lib\site-packages (from pandas>=0.19->pmdarima) (2023.3)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\users\my.name\code\prediction\.venv\lib\site-packages (from scikit-learn>=0.22->pmdarima) (3.2.0)
Collecting patsy>=0.5.2 (from statsmodels>=0.13.2->pmdarima)
Using cached patsy-0.5.3-py2.py3-none-any.whl (233 kB)
Requirement already satisfied: packaging>=21.3 in c:\users\my.name\code\prediction\.venv\lib\site-packages (from statsmodels>=0.13.2->pmdarima) (23.2)
Requirement already satisfied: six in c:\users\my.name\code\prediction\.venv\lib\site-packages (from patsy>=0.5.2->statsmodels>=0.13.2->pmdarima) (1.16.0)
Using cached Cython-3.0.3-cp312-cp312-win_amd64.whl (2.8 MB)
Using cached setuptools-68.2.2-py3-none-any.whl (807 kB)
Using cached statsmodels-0.14.0-cp312-cp312-win_amd64.whl (9.1 MB)
Building wheels for collected packages: pmdarima
Building wheel for pmdarima (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for pmdarima (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [38 lines of output]
<string>:15: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
Partial import of pmdarima during the build process.
Traceback (most recent call last):
File "<string>", line 190, in check_package_status
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.240.0_x64__qbz5n2kfra8p0\Lib\importlib\__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1381, in _gcd_import
File "<frozen importlib._bootstrap>", line 1354, in _find_and_load
File "<frozen importlib._bootstrap>", line 1318, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'numpy'
Requirements: ['joblib>=0.11\nCython>=0.29,!=0.29.18,!=0.29.31\nnumpy>=1.21.2\npandas>=0.19\nscikit-learn>=0.22\nscipy>=1.3.2\nstatsmodels>=0.13.2\nurllib3\nsetuptools>=38.6.0,!=50.0.0\n']
Adding extra setuptools args
Traceback (most recent call last):
File "C:\Users\my.name\code\prediction\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\my.name\code\prediction\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\my.name\code\prediction\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 251, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\my.name\AppData\Local\Temp\pip-build-env-yz_r_c0e\overlay\Lib\site-packages\setuptools\build_meta.py", line 434, in build_wheel
return self._build_with_temp_dir(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\my.name\AppData\Local\Temp\pip-build-env-yz_r_c0e\overlay\Lib\site-packages\setuptools\build_meta.py", line 419, in _build_with_temp_dir
self.run_setup()
File "C:\Users\my.name\AppData\Local\Temp\pip-build-env-yz_r_c0e\overlay\Lib\site-packages\setuptools\build_meta.py", line 507, in run_setup
super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
File "C:\Users\my.name\AppData\Local\Temp\pip-build-env-yz_r_c0e\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in run_setup
exec(code, locals())
File "<string>", line 340, in <module>
File "<string>", line 327, in do_setup
File "<string>", line 210, in check_package_status
ImportError: numpy is not installed.
pmdarima requires numpy >= 1.16.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pmdarima
Failed to build pmdarima
ERROR: Could not build wheels for pmdarima, which is required to install pyproject.toml-based projects
</code></pre>
| <python><numpy><pip><pmdarima> | 2023-10-09 16:18:20 | 2 | 1,883 | dmjy |
77,260,253 | 482,439 | VSCode, Django and javascript: Why does VSCode indicate an error when I use {{ }}? | <p>I have the following code that works in my html file:</p>
<pre><code> {% for stocks_and_pair_data in stocks_and_pair_data_list %}
<script>
var ctxClose = document.getElementById('ChartOfClosePrices{{ forloop.counter }}');
new Chart(ctxClose, {
type: 'line',
data: {
labels: {{stocks_and_pair_data.dates|safe}},
datasets: [{
label: '{{ stocks_and_pair_data.stock1_name|safe }}',
data: {{stocks_and_pair_data.stock1_close_prices|safe}}
},{
label: '{{ stocks_and_pair_data.stock2_name|safe }}',
data: {{stocks_and_pair_data.stock2_close_prices|safe}}
}]
}
});
</script>
{% endfor %}
</code></pre>
<p>But VSCode indicates the error "Property assignment expected" at the 'labels' line and at both 'data' lines, which is where I use double {{ }} without quotes. I can't use quotes there because these are lists, not strings.</p>
<p>Am I doing something wrong, despite the code working? What should I do to correct it?</p>
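<p>The squiggles come from VSCode parsing the file as plain JavaScript, where <code>{{ ... }}</code> is not valid syntax, even though Django renders it correctly. One common way to avoid this class of warning is to hand the data to the page as a single JSON token instead of interpolating Python reprs; the view-side half is just <code>json.dumps</code>. A minimal sketch with hypothetical data:</p>

```python
import json

# Hypothetical stand-ins for the template context values.
chart_data = {
    "dates": ["2023-01-02", "2023-01-03"],
    "stock1_close_prices": [101.2, 102.5],
}

# json.dumps produces a string that is also a valid JavaScript literal,
# so the rendered script only ever contains one well-formed token.
payload = json.dumps(chart_data)
print(payload)
```

<p>On the template side, Django's <code>{{ chart_data|json_script:"chart-data" }}</code> filter plus <code>JSON.parse(document.getElementById("chart-data").textContent)</code> in the script keeps the <code>&lt;script&gt;</code> block free of template syntax entirely.</p>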
| <javascript><python><django> | 2023-10-09 15:55:52 | 1 | 625 | AntonioR |
77,260,170 | 2,529,125 | dataframe to_excel - versionchanged:: 1.4.0 Zstandard support TypeError: ExcelFormatter.write() got an unexpected keyword argument 'engine_kwargs' | <p>With the versions listed below, I get the following error when writing a pandas dataframe to an Excel workbook:</p>
<blockquote>
<p>File
~\AppData\Local\anaconda3\envs\prod-env-text-bert\Lib\site-packages\pandas\core\generic.py:2345
in to_excel
.. versionchanged:: 1.4.0 Zstandard support. TypeError: ExcelFormatter.write() got an unexpected keyword argument
'engine_kwargs'</p>
</blockquote>
<pre><code># importing packages
import pandas as pd
# dictionary of data
dct = {'ID': {0: 23, 1: 43, 2: 12,
3: 13, 4: 67, 5: 89,
6: 90, 7: 56, 8: 34},
'Name': {0: 'Ram', 1: 'Deep',
2: 'Yash', 3: 'Aman',
4: 'Arjun', 5: 'Aditya',
6: 'Divya', 7: 'Chalsea',
8: 'Akash' },
'Marks': {0: 89, 1: 97, 2: 45, 3: 78,
4: 56, 5: 76, 6: 100, 7: 87,
8: 81},
'Grade': {0: 'B', 1: 'A', 2: 'F', 3: 'C',
4: 'E', 5: 'C', 6: 'A', 7: 'B',
8: 'B'}
}
# forming dataframe
data = pd.DataFrame(dct)
# storing into the excel file
data.to_excel("output.xlsx")
</code></pre>
<p>library snapshot</p>
<pre><code> numpy 1.24.4
numba 0.57.1
xlrd 2.0.1
XlsxWriter 3.1.2
pandas 2.0.3
Python 3.11.5
</code></pre>
<p>The complete list of packages used to setup the environment is the following:</p>
<p>NB: packages without "pip install" were installed via the Anaconda IDE.
prod-env-text-bert setup</p>
<pre><code>Python 3.11.5
https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst
https://visualstudio.microsoft.com/visual-cpp-build-tools/
download build tools
Select: Workloads → Desktop development with C++
Then for Individual Components, select only:
Windows 10 SDK
C++ x64/x86 build tools
please restart at this point
#pip install -U bertopic
pip install bertopic[spacy]
pandas
pip install numpy==1.24.4
pip install numba==0.57.1 -- please check versions between numba and numpy
pip install matplotlib
wordcloud
pyarrow
openpyxl
pip install xlsxwriter==3.1.2
pip install gensim
pip install xlrd
python -m spacy download en_core_web_sm
pip install langdetect
pip install google_trans_new
pip install symspellpy
seaborn
</code></pre>
| <python><pandas><dataframe> | 2023-10-09 15:42:04 | 0 | 511 | Sade |
77,260,102 | 1,110,328 | Python `typing.cast` method vs colon type hints | <p>What is the difference between using the Python <code>typing.cast</code> function</p>
<pre><code>x = cast(str, x)
</code></pre>
<p>compared with using type hints/colon notation/left hand side type annotation?</p>
<pre><code>x: str
</code></pre>
<p>I've seen the use of the cast method in codebases but don't have an apparent reason to use it instead of type hint notation, which is more concise.</p>
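<p>A runtime sketch of the difference may help: <code>cast</code> is a no-op that returns its argument unchanged and merely tells the type checker to treat it as the given type, while a bare annotation only declares a type and binds nothing:</p>

```python
from typing import cast

value: object = "hello"          # annotation: checker information only

s = cast(str, value)             # cast: runtime no-op that narrows the type
assert s is value                # the very same object comes back

# A bare `x: str` line does not even create the variable:
def demo():
    x: str                       # annotation without assignment
    try:
        return x                 # raises NameError: x was never bound
    except NameError:
        return "unbound"

print(demo())
```

<p>So <code>cast</code> is useful when an expression's inferred type is too wide and you want to narrow it mid-flow, whereas the colon notation declares the intended type of a name up front.</p>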
| <python><python-typing> | 2023-10-09 15:31:54 | 5 | 1,691 | joshlk |
77,260,101 | 14,637 | How to use Dask to parallelize up a `sel()` operation in xarray? | <p>I have an array of values called <code>speed</code>, and I'm mapping it to another array of values of the same shape called <code>power</code> by looking up the nearest value in a lookup table <code>speed_to_power_lut</code>. This process takes about 2.5 seconds on my machine, and I want to speed it up.</p>
<pre class="lang-py prettyprint-override"><code>import time
import numpy as np
import xarray as xr
LON = np.arange(0, 360, 0.25)
LAT = np.arange(-90, 90, 0.25)
TIME = np.arange(0, 24)
speed = xr.DataArray(
np.random.uniform(high=10, size=(len(LON), len(LAT), len(TIME))),
coords={'lon': LON, 'lat': LAT, 'time': TIME})
speed_to_power_lut = xr.DataArray(
np.random.uniform(high=100.0, size=(100,)),
coords={'speed': np.arange(0, 10, 0.1)})
start = time.perf_counter()
power = speed_to_power_lut.sel(speed=speed, method='nearest')
print(f'Without chunk: {time.perf_counter() - start:.3f} s')
speed = speed.chunk({'lon': len(LON) // 16})
start = time.perf_counter()
power = speed_to_power_lut.sel(speed=speed, method='nearest')
print(f'With chunk: {time.perf_counter() - start:.3f} s')
</code></pre>
<p>The <a href="https://docs.xarray.dev/en/stable/user-guide/dask.html#using-dask-with-xarray" rel="nofollow noreferrer">xarray documentation</a> suggests that, if I chunk the array, Dask will automatically be used under the hood to make things faster. Unfortunately, that's not what I'm seeing:</p>
<pre><code>Without chunk: 2.477 s
With chunk: 2.499 s
</code></pre>
<p>I'm somewhat new to xarray and entirely new to Dask, so maybe I'm just missing something trivial. Or is this particular use case not parallelized?</p>
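<p>Separately from the Dask question, the lookup coordinate here is a uniform 0.1-spaced grid, so the nearest-neighbour <code>sel</code> could be replaced by direct integer indexing, which avoids the per-element coordinate search entirely. A plain-NumPy sketch of that idea (scaled down so it runs quickly; the 0.1 spacing is assumed from the code above):</p>

```python
import numpy as np

step = 0.1
lut = np.random.uniform(high=100.0, size=100)        # values on the 0.0 .. 9.9 grid
speed = np.random.uniform(high=10, size=(144, 72, 24))

# Nearest grid point by arithmetic instead of a per-element search:
idx = np.clip(np.round(speed / step).astype(int), 0, lut.size - 1)
power = lut[idx]                                      # same shape as speed
```

<p>Applied to the original arrays, this would mean computing <code>idx</code> from <code>speed.values</code> and wrapping <code>speed_to_power_lut.values[idx]</code> back into a <code>DataArray</code> with <code>speed</code>'s coordinates.</p>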
| <python><dask><python-xarray> | 2023-10-09 15:31:51 | 1 | 183,379 | Thomas |
77,260,097 | 2,299,245 | Using PySpark Pandas to read in filename with a space in it | <p>I'm trying to read this file:</p>
<pre><code>abfss://myaccount@myorganisation.dfs.core.windows.net/perils_database/Bushfire AUSTRALIA - PERILS Industry Exposure Database 2023.xlsx
</code></pre>
<p>Using this Python code:</p>
<pre><code>import pyspark.pandas as ps
input_file = "abfss://myaccount@myorganisation.dfs.core.windows.net/perils_database/Bushfire AUSTRALIA - PERILS Industry Exposure Database 2023.xlsx"
ps.read_excel(input_file)
</code></pre>
<p>I get this error:</p>
<pre><code>IllegalArgumentException: Illegal character in path at index 97: abfss://myaccount@myorganisation.dfs.core.windows.net/perils_database/Bushfire AUSTRALIA - PERILS Industry Exposure Database 2023.xlsx
</code></pre>
<p>Because the file path has spaces in it. I'm unsure how to fix this using PySpark Pandas. Help?</p>
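<p>One thing that may be worth trying (untested against ADLS here) is percent-encoding the path, since the exception points at the raw space character in the URI. The standard library can do the encoding:</p>

```python
from urllib.parse import quote

raw = ("abfss://myaccount@myorganisation.dfs.core.windows.net/perils_database/"
       "Bushfire AUSTRALIA - PERILS Industry Exposure Database 2023.xlsx")

# Percent-encode the spaces while leaving scheme/authority characters alone.
encoded = quote(raw, safe=":/@")
print(encoded)
```

<p>Whether the Hadoop <code>abfss</code> filesystem layer then resolves the <code>%20</code>-encoded path to the same blob is something to verify against the actual storage account.</p>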
| <python><apache-spark><pyspark><pyspark-pandas> | 2023-10-09 15:31:10 | 0 | 949 | TheRealJimShady |
77,259,950 | 6,464,947 | How do you change a value in a cell from python to excel with the insert python in the new beta channel of Microsoft insider? | <p>I just started with Python in Excel (the Insider beta channel of Microsoft Office), and I understand how to read a cell's value into Python, but I can't figure out how to do the opposite: put a value from Python into a cell.</p>
| <python><excel> | 2023-10-09 15:07:30 | 1 | 23,563 | PythonProgrammi |
77,259,947 | 11,426,624 | Add a 2nd header to a pandas dataframe | <p>I have a dataframe</p>
<pre><code>df = pd.DataFrame({'col1':[1,2], 'col2':[3,4], 'col3':[5,6], 'col4':[7,8], 'col5':[9,10]})
</code></pre>
<p>and would like to add a second header to it such that col1, col2 and col3 go under header1 and col4 and col5 go under header2, so instead of this</p>
<pre><code> col1 col2 col3 col4 col5
0 1 3 5 7 9
1 2 4 6 8 10
</code></pre>
<p>I would like to have this</p>
<pre><code> header1 header2
col1 col2 col3 col4 col5
0 1 3 5 7 9
1 2 4 6 8 10
</code></pre>
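<p>A sketch of one way to do this with <code>pd.MultiIndex.from_tuples</code>, replacing the flat column index with a two-level one (the header names are taken from the desired output above):</p>

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4], 'col3': [5, 6],
                   'col4': [7, 8], 'col5': [9, 10]})

# Replace the flat columns with a two-level MultiIndex.
df.columns = pd.MultiIndex.from_tuples([
    ('header1', 'col1'), ('header1', 'col2'), ('header1', 'col3'),
    ('header2', 'col4'), ('header2', 'col5'),
])
print(df)
```

<p>After this, <code>df['header1']</code> selects the first three columns as a sub-frame, and individual columns are addressed by tuple, e.g. <code>df[('header2', 'col5')]</code>.</p>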
| <python><pandas><dataframe> | 2023-10-09 15:07:13 | 2 | 734 | corianne1234 |
77,259,921 | 1,497,139 | neo4j.exceptions.ServiceUnavailable Connection to 127.0.0.1:7687 closed without handshake response | <p>I keep getting</p>
<pre><code>neo4j.exceptions.ServiceUnavailable: Couldn't connect to localhost:7687 (resolved to ()):
Connection to 127.0.0.1:7687 closed without handshake response
</code></pre>
<p>without a proper explanation of why the service is considered unavailable. I am using the official neo4j:latest docker image as outlined below. The check that the port is available succeeds, and the web interface at port 7474 works fine, happily connecting to port 7687. Only the Python driver complains.</p>
<p>Interestingly, the CI test on GitHub works fine; only the test in my Jenkins environment on a local Ubuntu server reports the problem.</p>
<p><strong>What might be the cause of this misbehavior?</strong></p>
<p><strong>My Environment</strong></p>
<ul>
<li>Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]</li>
<li>Driver version: neo4j 5.10.0 (Neo4j Bolt driver for Python)</li>
<li>Server version and edition: neo4j:latest docker image</li>
<li>Operating system: Ubuntu 22.04.3 LTS (jammy)</li>
</ul>
<p>Issue to be addressed: <a href="https://github.com/WolfgangFahl/pyCEURmake/issues/84" rel="nofollow noreferrer">https://github.com/WolfgangFahl/pyCEURmake/issues/84</a>.
You'll also find the source code of the snippets below in the above project.</p>
<p><strong>Script to start neo4j</strong>
see <a href="https://github.com/WolfgangFahl/pyCEURmake/blob/main/scripts/neo4j" rel="nofollow noreferrer">https://github.com/WolfgangFahl/pyCEURmake/blob/main/scripts/neo4j</a></p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
# WF 2023-07-15
# Stop and remove any existing Neo4j container
docker stop neo4j-instance
docker rm neo4j-instance
# Run the Neo4j container
# https://neo4j.com/developer/docker-run-neo4j/
docker run -d --name neo4j-instance \
--publish=7474:7474 --publish=7687:7687 \
--env NEO4J_AUTH=neo4j/password \
--volume=$HOME/neo4j/data:/data \
neo4j:latest
# Function to display a progress bar
display_progress() {
local progress=$1
local max_progress=$2
local bar_length=40
local filled_length=$((progress * bar_length / max_progress))
local empty_length=$((bar_length - filled_length))
# Build the progress bar string
local bar="["
bar+="$(printf "%${filled_length}s" | tr ' ' '#')"
bar+="$(printf "%${empty_length}s" | tr ' ' '-')"
bar+="]"
# Print the progress bar
printf "\r%s %d%%" "$bar" $((progress * 100 / max_progress))
}
# Wait for Neo4j to start
progress=0
max_progress=20
sleep_duration=0.5
while [ "$(docker inspect -f '{{.State.Running}}' neo4j-instance 2>/dev/null)" != "true" ] && [ $progress -lt $max_progress ]; do
display_progress $progress $max_progress
((progress++))
sleep $sleep_duration
done
# Clear the progress bar line
printf "\r%${COLUMNS}s\r"
if [ "$(docker inspect -f '{{.State.Running}}' neo4j-instance 2>/dev/null)" == "true" ]; then
echo "Neo4j is running."
else
echo "Failed to start Neo4j."
fi
# Display the logs to check the status
docker logs neo4j-instance
</code></pre>
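<p>One detail worth flagging: the progress loop in the script above only waits until Docker reports the container as <code>Running</code>, but Neo4j can accept TCP connections before the Bolt handshake is actually served, which matches the symptom (port open, driver refused). A generic poll helper like the sketch below could wrap the driver's <code>verify_connectivity()</code> call (available in the 5.x Python driver) instead of the plain socket check:</p>

```python
import time

def wait_for(check, timeout=20.0, interval=0.5):
    """Poll check() until it returns truthy or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Usage sketch (hypothetical): treat the server as up only once the
# handshake succeeds, not merely once the port accepts connections.
#
# def bolt_ready():
#     try:
#         driver.verify_connectivity()
#         return True
#     except ServiceUnavailable:
#         return False
#
# assert wait_for(bolt_ready, timeout=30)
```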
<p><strong>neo4j wrapper class</strong></p>
<pre class="lang-py prettyprint-override"><code>import argparse
import json
import os
import re
import requests
import socket
import sys
from typing import List, Optional
from dataclasses import dataclass, field
from neo4j import GraphDatabase
from neo4j.exceptions import ServiceUnavailable, AuthError, ConfigurationError
class Neo4j:
"""
Neo4j wrapper class
"""
def __init__(self,host:str="localhost",bolt_port:int=7687,auth=("neo4j", "password"),encrypted:bool=False):
"""
constructor
"""
self.driver = None
self.error = None
self.host=host
self.bolt_port=bolt_port
self.encrypted=encrypted
try:
uri=f"bolt://{host}:{bolt_port}"
if not Neo4j.is_port_available(host,bolt_port):
raise ValueError(f"port at {uri} not available")
self.driver = GraphDatabase.driver(uri, auth=auth,encrypted=encrypted)
except (ServiceUnavailable, AuthError, ConfigurationError) as e:
self.error = e
@classmethod
def is_port_available(cls,host, port:int)->bool:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(1) # 1 Second Timeout
try:
sock.connect((host, port))
except socket.error:
return False
return True
def close(self):
if self.driver is not None:
self.driver.close()
@dataclass
class Volume:
"""
Represents a volume with its attributes.
"""
acronym: str
title: str
loctime: str
editors: List['Editor'] = field(default_factory=list)
@classmethod
def from_json(cls, json_data):
"""
Create a Volume instance from JSON data.
Args:
json_data (dict): The JSON data representing the volume.
Returns:
Volume: The Volume instance created from the JSON data.
"""
editor_names = json_data.get('editors')
if editor_names:
editor_names = editor_names.split(",")
else:
editor_names = []
editors = [Editor(name=name.strip()) for name in editor_names]
return cls(
acronym=json_data.get('acronym'),
title=json_data.get('title'),
loctime=json_data.get('loctime'),
editors=editors
)
def create_node(self, tx) -> int:
"""
Create a Volume node in Neo4j.
Args:
tx: The Neo4j transaction.
Returns:
int: The ID of the created node.
"""
query = """
CREATE (v:Volume {acronym: $acronym, title: $title, loctime: $loctime})
RETURN id(v) as node_id
"""
parameters = {
"acronym": self.acronym,
"title": self.title,
"loctime": self.loctime
}
result = tx.run(query, parameters)
record = result.single()
if record is not None:
return record["node_id"]
else:
return None
@staticmethod
def load_json_file(source: str) -> List['Volume']:
"""
Load volumes from the source JSON file.
Args:
source (str): Path to the source JSON file.
Returns:
List[Volume]: The list of loaded volumes.
"""
with open(source, 'r') as file:
json_data = json.load(file)
volumes = [Volume.from_json(volume_data) for volume_data in json_data]
return volumes
@classmethod
def default_source(cls)->str:
"""
get the default source
"""
default_source = os.path.expanduser('~/.ceurws/volumes.json')
return default_source
@classmethod
def parse_args(cls,argv:list=None):
"""
Parse command line arguments.
Args:
argv(list): command line arguments
Returns:
argparse.Namespace: The parsed command line arguments.
"""
default_source = cls.default_source()
parser = argparse.ArgumentParser(description="Volume/Editor/Location Information")
parser.add_argument('--source', default=default_source, help="Source JSON file path")
# Add progress option
parser.add_argument('--progress', action='store_true', help="Display progress information")
return parser.parse_args(argv)
@staticmethod
def main(argv=None):
if argv is None:
argv=sys.argv[1:]
args = Volume.parse_args(argv)
volumes = Volume.load_json_file(args.source)
# Connect to Neo4j
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
for volume in volumes:
volume_node_id = volume.create_node(session)
for editor in volume.editors:
editor.search_by_name()
editor.create_node(session, volume_node_id)
@dataclass
class Editor:
"""
Represents an editor with their name and ORCID.
"""
name: str
orcid: str = None
likelihood: float = None
@classmethod
def from_json(cls, json_data):
"""
Create an Editor instance from JSON data.
Args:
json_data (dict): The JSON data representing the editor.
Returns:
Editor: The Editor instance created from the JSON data.
"""
return cls(
name=json_data.get('name'),
orcid=json_data.get('orcid')
)
def search_by_name(self):
"""
Search the editor by name using the ORCID API and calculate the likelihood.
"""
if self.name:
url = f"https://pub.orcid.org/v3.0/search/?q={self.name}"
headers = {
"Accept": "application/json"
}
response = requests.get(url, headers=headers)
if response.status_code == 200:
data = response.json()
num_results = data.get("num-found", 0)
self.likelihood = num_results / 10 # Arbitrary calculation, adjust as needed
def create_node(self, tx, volume_node_id: int) -> int:
"""
Create an Editor node in Neo4j and establish a relationship with a Volume node.
Args:
tx: The Neo4j transaction.
volume_node_id (int): The ID of the volume node.
Returns:
int: The ID of the created Editor node.
"""
query = """
MATCH (v:Volume)
WHERE id(v) = $volume_node_id
CREATE (v)-[:HAS_EDITOR]->(e:Editor {name: $name, orcid: $orcid, likelihood: $likelihood})
RETURN id(e) as node_id
"""
parameters = {
"volume_node_id": volume_node_id,
"name": self.name,
"orcid": self.orcid,
"likelihood": self.likelihood
}
result = tx.run(query, parameters)
record = result.single()
if record is not None:
return record["node_id"]
else:
return None
@dataclass
class Location:
city: str
country: str
date: str
@staticmethod
def parse(location_str: str) -> Optional['Location']:
"""
Parse a location string of the format "City, Country, Date"
Args:
location_str: The location string to parse.
Returns:
A Location object or None if the string could not be parsed.
"""
match = re.match(r'^(.*), (.*), (.*)$', location_str)
if match:
city, country, date = match.groups()
return Location(city, country, date)
else:
return None
if __name__ == "__main__":
Volume.main()
</code></pre>
<p><strong>unittest</strong></p>
<pre class="lang-py prettyprint-override"><code>import json
from ceurws.volume_neo4j import Neo4j, Volume, Editor
from ceurws.location import LocationLookup
from tests.basetest import Basetest


class TestVolumeEditorLocation(Basetest):
    """
    Unit tests for Volume, Editor, and Location classes.
    """

    def setUp(self, debug=False, profile=True):
        Basetest.setUp(self, debug=debug, profile=profile)
        self.neo4j = Neo4j()

    def tearDown(self):
        Basetest.tearDown(self)
        self.neo4j.close()

    def test_neo4j_available(self):
        """
        test the port availability
        """
        for service, port in [("bolt", self.neo4j.bolt_port), ("http", 7474)]:
            available = Neo4j.is_port_available(self.neo4j.host, port)
            self.assertTrue(available, f"{service} service at {port}")

    def create_test_volume(self, year: int = 2023) -> int:
        """
        Creates a test Volume node for the given year.

        Args:
            year (int): The year for which to create the Volume.

        Returns:
            int: The ID of the created Volume node.
        """
        with self.neo4j.driver.session() as session:
            with session.begin_transaction() as tx:
                acronym = f"CILC {year}"
                title = f"Proceedings of the {year-1985}th Italian Conference on Computational Logic"
                loctime = f"Some City, Italy, June 21-23, {year}"
                volume = Volume(acronym=acronym, title=title, loctime=loctime)
                volume_id = volume.create_node(tx)
                return volume_id

    def test_volume_create_node(self):
        """
        Test the create_node method of the Volume class.
        """
        volume_id = self.create_test_volume()
        self.assertIsNotNone(volume_id)

    def test_editor_create_editor_node(self):
        """
        Test the create_editor_node method of the Editor class.
        """
        volume_id_2023 = self.create_test_volume(2023)
        volume_id_2024 = self.create_test_volume(2024)
        with self.neo4j.driver.session() as session:
            with session.begin_transaction() as tx:
                # Test creating one editor for multiple volumes
                editor = Editor(name="John Doe", likelihood=0.8)
                editor_id_2023 = editor.create_node(tx, volume_id_2023)
                editor_id_2024 = editor.create_node(tx, volume_id_2024)
                self.assertIsNotNone(editor_id_2023)
                self.assertIsNotNone(editor_id_2024)

    def test_location_lookup(self):
        """
        Test the lookup method of the LocationLookup class.
        """
        location_lookup = LocationLookup()
        location = location_lookup.lookup("Amsterdam, Netherlands")
        self.assertIsNotNone(location)
        self.assertEqual(location.name, "Amsterdam")
        self.assertEqual(location.country.iso, "NL")

    def test_parse_args(self):
        """
        Test the parse_args function.
        """
        source = Volume.default_source()
        args = Volume.parse_args(["--source", source])
        self.assertEqual(args.source, source)

    def test_json_loading(self):
        """
        Test the JSON loading from a file.
        """
        with open(Volume.default_source()) as file:
            json_data = json.load(file)
        self.assertIsNotNone(json_data)
</code></pre>
| <python><docker><neo4j> | 2023-10-09 15:03:31 | 1 | 15,707 | Wolfgang Fahl |
77,259,854 | 8,661,071 | langchain and aws lambda layer in Python | <p>I need to create a Lambda layer for a Python 3.9 function to be deployed in AWS Lambda.</p>
<p>My lambda uses:</p>
<ul>
<li>langchain.prompts,</li>
<li>langchain.retrievers,</li>
<li>langchain.llms,</li>
<li>langchain.chains</li>
</ul>
<p>I was forced to pin the architecture and Python version because otherwise pydantic, a dependency of langchain, won’t work.</p>
<p>So I created a requirements.txt for the layer as shown below:</p>
<pre><code>langchain[all]
</code></pre>
<p>I ran pip install as follows, so as not to break pydantic:</p>
<pre><code>pip install -r requirements.txt --platform manylinux1_x86_64 --python-version 39 --only-binary=:all:
</code></pre>
<p>However now I’m getting the following error when invoking the lambda</p>
<p><code>Can not import PrompTemplate from langchain.prompts ... Runtime Import Module</code></p>
<p>===========SECOND OPTION=====================</p>
<p>As a second option I decided to modify requirements.txt to be like this</p>
<pre><code>langchain
langchain.prompts
langchain.retrievers
langchain.llms.bedrock
langchain.chains
</code></pre>
<p>However now pip install command described above fails with</p>
<pre><code>Can not find a version that satisfies the requirement langchain.prompts ( from version:none)
</code></pre>
| <python><aws-lambda><artificial-intelligence><aws-lambda-layers> | 2023-10-09 14:53:42 | 1 | 1,349 | MasterOfTheHouse |
77,259,741 | 1,256,925 | Keras CNN example giving much worse output than stated on example | <p>I am trying to gain some understanding of Keras, by trying to run some of the provided examples (such as <a href="https://github.com/keras-team/keras/blob/tf-keras-2/examples/mnist_cnn.py" rel="nofollow noreferrer">https://github.com/keras-team/keras/blob/tf-keras-2/examples/mnist_cnn.py</a> in this particular case) and seeing how I can work with them to see what happens. However, the very baseline output of the example as stated at the top of that file, namely 99.25% accuracy, is way higher than the output I'm getting in Google Colab (using a T4 GPU), namely 85% (0.8503000140190125).</p>
<p>My output, when simply copying and pasting the linked file into Google Colab, gives me the following output:</p>
<pre><code>x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
Epoch 1/12
469/469 [==============================] - 8s 10ms/step - loss: 2.2807 - accuracy: 0.1435 - val_loss: 2.2415 - val_accuracy: 0.3414
Epoch 2/12
469/469 [==============================] - 4s 10ms/step - loss: 2.2157 - accuracy: 0.2814 - val_loss: 2.1615 - val_accuracy: 0.5900
Epoch 3/12
469/469 [==============================] - 4s 9ms/step - loss: 2.1305 - accuracy: 0.4081 - val_loss: 2.0526 - val_accuracy: 0.6552
Epoch 4/12
469/469 [==============================] - 4s 9ms/step - loss: 2.0150 - accuracy: 0.4893 - val_loss: 1.9049 - val_accuracy: 0.6928
Epoch 5/12
469/469 [==============================] - 5s 10ms/step - loss: 1.8653 - accuracy: 0.5421 - val_loss: 1.7169 - val_accuracy: 0.7290
Epoch 6/12
469/469 [==============================] - 4s 9ms/step - loss: 1.6864 - accuracy: 0.5822 - val_loss: 1.4985 - val_accuracy: 0.7573
Epoch 7/12
469/469 [==============================] - 5s 10ms/step - loss: 1.4975 - accuracy: 0.6175 - val_loss: 1.2778 - val_accuracy: 0.7841
Epoch 8/12
469/469 [==============================] - 4s 9ms/step - loss: 1.3218 - accuracy: 0.6478 - val_loss: 1.0859 - val_accuracy: 0.8070
Epoch 9/12
469/469 [==============================] - 5s 10ms/step - loss: 1.1783 - accuracy: 0.6739 - val_loss: 0.9350 - val_accuracy: 0.8256
Epoch 10/12
469/469 [==============================] - 4s 10ms/step - loss: 1.0702 - accuracy: 0.6944 - val_loss: 0.8224 - val_accuracy: 0.8354
Epoch 11/12
469/469 [==============================] - 4s 9ms/step - loss: 0.9836 - accuracy: 0.7120 - val_loss: 0.7383 - val_accuracy: 0.8433
Epoch 12/12
469/469 [==============================] - 4s 9ms/step - loss: 0.9166 - accuracy: 0.7276 - val_loss: 0.6741 - val_accuracy: 0.8503
Test loss: 0.6741476655006409
Test accuracy: 0.8503000140190125
</code></pre>
<p>As you can see, the Google Colab run also takes much less time per epoch compared to the comment at the top of the example file. I'd love to know if there's something I'm missing here. For example, why do they say "there is still a lot of margin for parameter tuning"? Is this supposed to be some kind of 'tutorial' where I'm supposed to tweak those parameters until I get their 'holy grail' of 99.25%?</p>
| <python><keras><conv-neural-network> | 2023-10-09 14:38:25 | 1 | 19,172 | Joeytje50 |
77,259,728 | 996,247 | How to organize python package file structure for development and deployment? | <p>For a Python package I am developing, I have my package contents organized under a src/ subdir like so:</p>
<pre><code>mylib/
dist/
docs/
src/
__init__.py
foo/
__init__.py
foo1.py
...
bar/
__init__.py
bar1.py
...
...
README.rst
pyproject.toml
requirements.txt
</code></pre>
<p>Building a wheel with <code>python -m build</code> and deploying it works, but on my dev machine I would like to import the new package right from the development directory rather than local-installing it every time I make a change. How do I omit the src/ directory from the import statement? I.e., in Python, I would like to be able to do</p>
<pre><code>>>> from mylib import foo
</code></pre>
<p>rather than</p>
<pre><code>>>> from mylib.src import foo
</code></pre>
<p>How do I do that?</p>
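<p>For what it's worth, a sketch of a development-time workaround: put the <code>src/</code> directory itself on <code>sys.path</code> so the packages under it import as top-level names. The path <code>mylib/src</code> below is an assumption based on the tree shown above; point it at your own checkout.</p>

```python
import sys
from pathlib import Path

# Quick dev-time workaround: make the packages under src/ importable as
# top-level names. "mylib/src" is the layout from the question above.
project_src = Path("mylib/src").resolve()
if str(project_src) not in sys.path:
    sys.path.insert(0, str(project_src))

# With this, `import foo` works, but `from mylib import foo` still will not:
# the usual fix for that is to move the packages into src/mylib/ and run
# `pip install -e .` (an editable install), which makes the dev tree
# importable without reinstalling after every change.
```

<p>The editable install is the standard solution for the src layout; the <code>sys.path</code> hack is only a stopgap for quick experiments.</p>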
| <python><package><project-organization> | 2023-10-09 14:36:34 | 0 | 995 | Sven |
77,259,606 | 7,819,329 | Problem in running an executable file in Python | <p>I have a simple Python code (3.12 version). It just computes the sum of the ASCII's of a name entered by the user:</p>
<pre><code>import sys

args = sys.argv
name = args[1]
print(f"Hello, {name}!")
print("Here is some intelligence that will return a number you will NEVER understand how it is computed :-D \n")
name = name.lower()
ascii_sum = 0
for char in name:
    if char.isalpha():
        ascii_value = ord(char)
        ascii_sum += ascii_value
print(ascii_sum)
</code></pre>
<p>that I have transformed into an executable called <code>dummy_fct.exe</code>, because for some reason I need to be able to call a (Python) executable from an ipynb notebook.</p>
<p>Now, I am trying to call this executable from a new notebook:</p>
<pre><code>import subprocess
import sys
name = input("Quel est ton nom ? ")
subprocess.run(["dummy_fct.exe", name], capture_output=True, text=True)
</code></pre>
<p>But I got the following error:</p>
<blockquote>
<p>CompletedProcess(args=['dummy_fct.exe', 'PDH'], returncode=1, stdout='', stderr='[4004] Module object for pyimod02_importers is NULL!\nTraceback (most recent call last):\n File "PyInstaller\loader\pyimod02_importers.py", line 22, in \n File "pathlib.py", line 14, in \n File "urllib\parse.py", line 40, in \nModuleNotFoundError: No module named 'ipaddress'\nTraceback (most recent call last):\n File "PyInstaller\loader\pyiboot01_bootstrap.py", line 17, in \nModuleNotFoundError: No module named 'pyimod02_importers'\n[4004] Failed to execute script 'pyiboot01_bootstrap' due to unhandled exception!\n')</p>
</blockquote>
<p>I must say that I do not understand this error message at all... Can you help me?</p>
<p>EDIT: The executable was created running these two commands:</p>
<pre><code>jupyter nbconvert --to script dummy_fct.ipynb
pyinstaller --onefile dummy_fct.py
</code></pre>
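<p>Independent of the PyInstaller error itself, it helps while debugging to capture and print the <code>CompletedProcess</code> fields explicitly rather than relying on the notebook's default display. A runnable sketch, using <code>sys.executable -c ...</code> as a stand-in for <code>dummy_fct.exe</code> so the snippet works anywhere:</p>

```python
import subprocess
import sys

# Run a child process the same way the notebook does and surface its output.
# The inline `-c` program stands in for dummy_fct.exe; with `python -c`,
# sys.argv is ['-c', 'PDH'], so sys.argv[1] is the name argument.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(f'Hello, {sys.argv[1]}!')", "PDH"],
    capture_output=True,
    text=True,
)
print("return code:", result.returncode)
print("stdout:", result.stdout)
print("stderr:", result.stderr)
```

<p>A non-zero <code>returncode</code> together with a non-empty <code>stderr</code>, as in the traceback above, points at the child process failing on its own (here, a broken PyInstaller bundle) rather than at the <code>subprocess</code> call.</p>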
| <python><subprocess><executable> | 2023-10-09 14:20:54 | 1 | 1,161 | MysteryGuy |
77,259,572 | 4,508,605 | Curl command in Python | <p>I am using Python 3.9. I have the below curl command:</p>
<pre><code>curl -x proxy1.req:8080 --location --request POST 'https://auth.trst.com/2.0' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'ct_id=33d3e1-4988-9a08-e7773634ba67' \
--data-urlencode 'ct_sec=56GgtoQA8.WIH9nFM3Eh3sT~cwH' \
--data-urlencode 'scope=https://auth.trst.com/settlement/.default' \
--data-urlencode 'grant_type=client_credentials'
</code></pre>
<p>Which is returning response as below:</p>
<pre><code>% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1355 100 1163 100 192 1340 221 --:--:-- --:--:-- --:--:-- 1566{"access_token":"eyJhbGciOiJSUzI1NiIsImtpZCI6Ilg1ZVhrNHh5b2pORnVtMWtsMll0djhkbE5QNC1jNTdkTzZRR1RWQndhTmsiLCJ0eXAiOiJKV9SRSIsInNjcCI6ImRlZmF1bHQiLCJhenBhY3IiOiIxIiwic3ViIjoiZWI3OTU1OTktMzZhMy00SqQe5v9xuYRSfOb_gMgpt7Kez8B0dsnJ_SmT17Hbd7dXLS5p5xTva2iteHp80E2PBYy8jIdtDFcyyC3JSb_d1jzeRrtn5ILlR9eMiMMQa5k65rEXuWYlDpCCtyNS0vcbzA6raxuT1ux8-riCRnGe_TX-VP7FTzPxE7bN1N6N60Ahm7cvzmdDjtKgga0ltybKT2LKtT_hwIwYnuPOdYCGy5jYQ78kaZH86PVSXBuzHw","token_type":"Bearer","not_before":1696859605,"expires_in":3600,"expires_on":1696863205,"resource":"6f4cd1ba-556d-4cb3-ab69-12eac3243387"}
</code></pre>
<p>I want to execute this <code>curl</code> command in <code>Python</code> and store the token value in a variable, so that I can use it for further logic in <code>Python</code>, but I am not sure how to execute this <code>curl</code> command from <code>Python</code>.</p>
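<p>For reference, a sketch of the same call using only the standard library; the URL, form fields, and proxy are the (redacted) values from the curl command above, so substitute your own:</p>

```python
import json
import urllib.parse
import urllib.request


def build_token_request():
    """Recreate the curl call: a POST with url-encoded form fields.
    The URL and credentials are the redacted values from the question.
    """
    form = {
        "ct_id": "33d3e1-4988-9a08-e7773634ba67",
        "ct_sec": "56GgtoQA8.WIH9nFM3Eh3sT~cwH",
        "scope": "https://auth.trst.com/settlement/.default",
        "grant_type": "client_credentials",
    }
    body = urllib.parse.urlencode(form).encode()  # --data-urlencode equivalent
    return urllib.request.Request(
        "https://auth.trst.com/2.0",
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )


def extract_token(response_body: str) -> str:
    """Pull access_token out of the JSON response shown in the question."""
    return json.loads(response_body)["access_token"]


# To actually send it (add a ProxyHandler for proxy1.req:8080 if needed):
# with urllib.request.urlopen(build_token_request()) as resp:
#     token = extract_token(resp.read().decode())
```

<p>If the third-party <code>requests</code> package is available, the same call collapses to <code>requests.post(url, data=form, proxies={'https': 'http://proxy1.req:8080'}).json()['access_token']</code>, since <code>requests</code> form-encodes a <code>data</code> dict automatically.</p>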
| <python><python-3.x><variables><curl><token> | 2023-10-09 14:15:54 | 2 | 4,021 | Marcus |
77,259,458 | 13,959,133 | Trying to get domain id from Cloudflare, always getting response 400 | <p>I'm currently trying to get a list of the domains in a specific Cloudflare zone in order to get the id of a subdomain. I have tried it with the following code but it didn't work:</p>
<pre><code>url = CLOUDFLARE_API_URL + CLOUDFLARE_ZONE_ID + '/dns_records?name=' + subdomain
headers = {
    'Authorization': 'Bearer ' + CLOUDFLARE_API_KEY,
    'Content-Type': 'application/json'
}
response = requests.get(url, headers=headers)
if response.status_code == 200:
    identifier = response.json()['result'][0]['id']
    url = CLOUDFLARE_API_URL + CLOUDFLARE_ZONE_ID + '/dns_records/' + identifier
    response = requests.delete(url, headers=headers)
    if response.status_code == 200:
        filename = subdomain + '.conf'
        path = '/etc/nginx/sites-available/' + filename
        try:
            os.remove(path)
            os.remove('/etc/nginx/sites-enabled/' + filename)
            path = '/var/www/' + subdomain
            try:
                os.rmdir(path)
                result = reload_nginx()
                if result:
                    return True
                else:
                    return False
            except:
                return False
        except:
            return False
<p>Does anybody know why I'm getting a code 400 in response?</p>
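<p>A first debugging step is to print the body of the 400 response rather than just the status code: the Cloudflare v4 API returns a JSON envelope of the form <code>{"success": false, "errors": [{"code": ..., "message": ...}]}</code> explaining the rejection. A small helper, assuming that documented envelope shape:</p>

```python
def describe_cloudflare_error(payload: dict) -> str:
    """Summarise the error envelope the Cloudflare v4 API returns on
    non-2xx responses: {"success": false, "errors": [{"code", "message"}]}.
    """
    errors = payload.get("errors") or []
    summary = "; ".join(f"{e.get('code')}: {e.get('message')}" for e in errors)
    return summary or "no error details"


# In the code above, before assuming status 200, one could do:
# if response.status_code != 200:
#     print(response.status_code, describe_cloudflare_error(response.json()))
```

<p>Also worth checking in the snippet above: that <code>CLOUDFLARE_API_URL</code> ends with <code>/zones/</code> (or at least a trailing slash) before the zone ID is concatenated, since a missing separator is a plausible cause of a 400 here.</p>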
| <python> | 2023-10-09 14:00:49 | 0 | 387 | BestRazer |
77,259,343 | 583,464 | remove strings and numbers after new line character in list | <p>I have the following list.</p>
<pre><code>mylist = [[
['K', 'X', 'Y'],
['1', '351822.546', '4521159.92'],
['2', '351846.066', '4521086.924'],
['3', '351873.561', '4521010.066'],
['4', '351922.38', '4521027.586'],
['5', '351923.346', '4521027.937'],
['6', '351880.618', '4521159.601'],
['7', '351877.79', '4521159.243'],
['8', '351868.626', '4521159.13'],
['9', '351832.0', '4521158.375'],
['10', '351822.546', '4521159.92'],
['K', 'X', 'Y'],
['1', '352818.88899999997', '4512961.064277913'],
['2', '352818.7852508776', '4512961.008231428'],
['3', '352818.1413800079', '4512961.541700357'],
['4', '352818.13964983355', '4512961.542853194'],
['5', '352818.13771866815', '4512961.543623349'],
['6', '352818.13566998736', '4512961.543977531'],
['7', '352818.13359234604', '4512961.54390043'],
['8', '352818.1315755511', '4512961.54339538'],
['9', '352818.1297067792', '4512961.54248421'],
['10', '352807.6488787196', '4512955.003591482'],
['11', '352804.8280548779', '4512953.398691706'],
['12', '352804.8263759837', '4512953.397491489'],
['', '13', '352804.8249792202', '4512953.395972193', ''],
[None, '14', '352804.82392407843', '4512953.394198529', None],
[None, '15', '352804.82325549907', '4512953.3922460405', None],
[None, '16', '352804.8230019582', '4512953.390197889', None],
[None, '17', '352818.7027811786\nDate: 06/10/2', '4512961.279168891\n023 14:38', None],
[None, '19', '352818.7027811786\nDate: 06/10/2', '4512961.279168891\n23 14:39', None],
[None, '20', '352818.7027811786\nDate: 06/10/2', '4512961.279168891\n 14:40', None],
[None, '21', '352818.7027811786\nDate: 06/10/2', '4512961.279168891\n15:38', None],
['22', '352818.7707811786', '4512961.225168891'],
['23', '352818.7722922728', '4512961.224177455'],
['K', 'X', 'Y'],
['1', '352748.814', '4512151.882'],
['2', '352813.891', '4512068.331'],
['3', '352813.891', '4512068.331'],
['4', '352833.26206323004', '4512074.104626368'],
['5', '352859.8678037648', '4512078.776475903'],
['6', '352858.816', '4512082.705'],
['7', '352837.633', '4512169.397'],
['8', '352748.814', '4512151.882'],
]]
</code></pre>
<p>How can I get rid of the <code>\nDate..</code> and <code>\n023..</code> fragments that appear after some numbers?</p>
<p>Also, I want to get rid of the <code>None</code> and <code>''</code> values, so that I have only 3 columns (and get rid of the <code>K, X, Y</code> header rows as well).</p>
<p>So, I want to use the data from lines:</p>
<pre><code>[None, '17', '352818.7027811786\nDate: 06/10/2', '4512961.279168891\n023 14:38', None],
[None, '19', '352818.7027811786\nDate: 06/10/2', '4512961.279168891\n23 14:39', None],
[None, '20', '352818.7027811786\nDate: 06/10/2', '4512961.279168891\n 14:40', None],
[None, '21', '352818.7027811786\nDate: 06/10/2', '4512961.279168891\n15:38', None],
</code></pre>
<p>but only the columns and numbers:</p>
<pre><code>['17', '352818.7027811786', '4512961.279168891'],
['19', '352818.7027811786', '4512961.279168891'],
['20', '352818.7027811786', '4512961.279168891'],
['21', '352818.7027811786', '4512961.279168891'],
</code></pre>
<p>So, my final list will be:</p>
<pre><code>[
['1', '351822.546', '4521159.92'],
['2', '351846.066', '4521086.924'],
['3', '351873.561', '4521010.066'],
['4', '351922.38', '4521027.586'],
['5', '351923.346', '4521027.937'],
['6', '351880.618', '4521159.601'],
['7', '351877.79', '4521159.243'],
['8', '351868.626', '4521159.13'],
['9', '351832.0', '4521158.375'],
['10', '351822.546', '4521159.92'],
['1', '352818.88899999997', '4512961.064277913'],
['2', '352818.7852508776', '4512961.008231428'],
['3', '352818.1413800079', '4512961.541700357'],
['4', '352818.13964983355', '4512961.542853194'],
['5', '352818.13771866815', '4512961.543623349'],
['6', '352818.13566998736', '4512961.543977531'],
['7', '352818.13359234604', '4512961.54390043'],
['8', '352818.1315755511', '4512961.54339538'],
['9', '352818.1297067792', '4512961.54248421'],
['10', '352807.6488787196', '4512955.003591482'],
['11', '352804.8280548779', '4512953.398691706'],
['12', '352804.8263759837', '4512953.397491489'],
['13', '352804.8249792202', '4512953.395972193'],
['14', '352804.82392407843', '4512953.394198529'],
['15', '352804.82325549907', '4512953.3922460405'],
['16', '352804.8230019582', '4512953.390197889'],
['17', '352818.7027811786', '4512961.279168891'],
['19', '352818.7027811786', '4512961.279168891'],
    ['20', '352818.7027811786', '4512961.279168891'],
    ['21', '352818.7027811786', '4512961.279168891'],
['22', '352818.7707811786', '4512961.225168891'],
['23', '352818.7722922728', '4512961.224177455'],
['1', '352748.814', '4512151.882'],
['2', '352813.891', '4512068.331'],
['3', '352813.891', '4512068.331'],
['4', '352833.26206323004', '4512074.104626368'],
['5', '352859.8678037648', '4512078.776475903'],
['6', '352858.816', '4512082.705'],
['7', '352837.633', '4512169.397'],
['8', '352748.814', '4512151.882'],
]
</code></pre>
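<p>For what it's worth, a minimal sketch of one way to do this cleanup, assuming every unwanted fragment starts at a cell's first embedded newline, as in the rows above:</p>

```python
def clean_rows(nested):
    """Flatten the nested table, dropping None/'' cells, ['K', 'X', 'Y']
    header rows, and everything from the first newline onward in each cell."""
    cleaned = []
    for table in nested:          # mylist has one extra level of nesting
        for row in table:
            # Drop the None and empty-string padding cells.
            cells = [c for c in row if c not in (None, '')]
            # Skip header rows like ['K', 'X', 'Y'].
            if cells[:1] == ['K']:
                continue
            # Keep only the part of each cell before any embedded newline.
            cleaned.append([c.split('\n', 1)[0] for c in cells])
    return cleaned
```

<p>Applied to <code>mylist</code>, this should yield the 3-column rows shown in the desired output.</p>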
| <python><list> | 2023-10-09 13:45:21 | 3 | 5,751 | George |
77,259,323 | 19,238,204 | Can Sympy Compute Definite Integral By Cases? | <p>This is a heat equation integration problem.</p>
<p>So we have <code>u(x,0)</code>, the initial condition, that we want to integrate, that is:</p>
<ol>
<li><strong>x</strong>, when <code>0 <= x <= 10</code></li>
<li><strong>20-x</strong>, when <code>10 <= x <= 20</code></li>
</ol>
<p>Now, the problem is, if I want to calculate the integral of <code>u(x,0)</code> from 0 to 20, then computing by hand it should be:
<code>int_{0}^{10} x dx + int_{10}^{20} (20-x) dx</code></p>
<p>with Python's SymPy this is what I can create, very manually (the integral of <code>u(x,0)</code> from 0 to 20 is represented by the function <code>g</code>):</p>
<pre><code>import numpy as np
import sympy as sm
from sympy import *
from spb import *
x = sm.symbols("x")
t = sm.symbols("t")
n = sm.symbols("n", integer=True)
L = 20
f1 = (2/L)*x*sin(n*np.pi*x/20)
f2 = (2/L)*(20-x)*sin(n*np.pi*x/20)
fint1 = sm.integrate(f1,(x,0,10))
fint2 = sm.integrate(f2,(x,10,20))
D = 0.475
g = (fint1+fint2)*sin(n*np.pi*x/20)*exp(-(n**2)*(np.pi**2)*D*t/400).nsimplify()
</code></pre>
<p>Is there a SymPy technique of integration for this case, so that I do not need to write <code>fint1</code>, <code>fint2</code>, ... manually?</p>
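<p>One way to avoid the manual split, sketched below, is SymPy's <code>Piecewise</code>, which lets <code>integrate</code> handle the cases itself. Using SymPy's exact <code>pi</code> instead of <code>np.pi</code> keeps the result symbolic; this is a sketch of the same Fourier-coefficient integral, not a drop-in for the full script above:</p>

```python
import sympy as sm

x = sm.symbols("x")
n = sm.symbols("n", integer=True, positive=True)
L = 20

# u(x, 0) defined by cases; integrate() splits the interval automatically.
u0 = sm.Piecewise((x, x <= 10), (20 - x, True))

# Sanity check: the plain integral over [0, 20] is 50 + 50 = 100.
area = sm.integrate(u0, (x, 0, 20))

# The Fourier sine coefficient, in one call instead of fint1 + fint2.
bn = sm.integrate(sm.Rational(2, L) * u0 * sm.sin(n * sm.pi * x / L), (x, 0, L))
```

<p>From here <code>bn</code> can replace <code>fint1 + fint2</code> when building <code>g</code>.</p>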
| <python><numpy><sympy> | 2023-10-09 13:43:01 | 1 | 435 | Freya the Goddess |
77,259,232 | 13,174,189 | How to test an NLU model on multiple requests per second? | <p>I am testing an NLU model called LaBSE. I want to know how much GPU memory is required when I run it. Here is an example of how to run it on a sentence and turn it into embeddings:</p>
<pre><code>from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence"]
model = SentenceTransformer('sentence-transformers/LaBSE')
embeddings = model.encode(sentences)
print(embeddings)
</code></pre>
<p>It shows that 1.9GB of GPU memory is used. But I want to know how much GPU memory will be used when I run it with 10, 25, or 50 requests per second. How could I do that? How could I run <code>model.encode</code> on 25 requests per second, for example? I basically want to know how much GPU and RAM memory is required if I deploy it and there are multiple requests per second.</p>
| <python><python-3.x><deep-learning><transformer-model><sentence-transformers> | 2023-10-09 13:29:47 | 1 | 1,199 | french_fries |
77,259,224 | 2,858,084 | langchain conversation completes both human and AI conversation [sometimes] | <p>It happens sometimes, and it happens only with langchain so far. Here are a few things that may be relevant to the problem.</p>
<pre><code> prompt = PromptTemplate.from_template(
template= role + '''\
\n
Current conversation:
{history}
Human: {input}
AI:\
''')
</code></pre>
<p>model:</p>
<pre><code>msgs = StreamlitChatMessageHistory(key="langchain_messages")
memory = ConversationBufferMemory(chat_memory=msgs)

llm_chain = LLMChain(
    llm=ChatOpenAI(
        openai_api_key=openai_api_key,
        temperature=temperature,
        model=model,
    ),
    prompt=prompt,
    memory=memory)
</code></pre>
<p>temp is usually <code>0.25</code> and model is <code>gpt-3.5-turbo-16k</code></p>
<ul>
<li>I tried changing the order of the prompt</li>
<li>I set the temperature to 0 or 1, and sometimes it doesn't happen at all</li>
<li>I will not change the model because I know it works</li>
</ul>
| <python><langchain><large-language-model><py-langchain> | 2023-10-09 13:28:22 | 0 | 1,347 | MAFiA303 |
77,259,162 | 8,184,694 | How to specify correct dtype for column of lists when creating a dask dataframe from_pandas? | <p>When creating a dask <code>Dataframe</code> with the <code>from_pandas</code> method, the formerly correct dtype <code>object</code> becomes a <code>string[pyarrow]</code>.</p>
<pre><code>import dask.dataframe as dd
import pandas as pd
df = pd.DataFrame(
    {
        "lists": [["a", "b"], ["c", "b", "a"], ["b"]],
        "a": [1, 1, 0],
        "b": [1, 0, 1],
        "c": [0, 1, 0],
    }
)
print(df)
lists a b c
0 [a, b] 1 1 0
1 [c, b, a] 1 0 1
2 [b] 0 1 0
print(df.dtypes)
lists object
a int64
b int64
c int64
dtype: object
</code></pre>
<p>I can correctly access the first element of the lists, pandas works as expected:</p>
<pre><code>print(df["lists"].str[0])
0 a
1 c
2 b
Name: lists, dtype: object
</code></pre>
<p>But with dask, the dtype is incorrect:</p>
<pre><code>df = dd.from_pandas(df, npartitions=2)
print(df.dtypes)
lists string[pyarrow]
a int64
b int64
c int64
</code></pre>
<p>Setting the dtype explicitly also doesn't work</p>
<pre><code>df["lists"] = df["lists"].astype(object)
print(df.dtypes)
lists string[pyarrow]
a int64
b int64
c int64
</code></pre>
<p>How do I tell dask that this column does not consist of strings?</p>
| <python><dask> | 2023-10-09 13:17:15 | 1 | 541 | spettekaka |
77,259,096 | 4,880,184 | Python global variables in shared modules | <p>I am trying to understand the behavior of variables shared across modules. My impression was a global variable in one module could be accessed in another, and sometimes this seems to be what happens, but sometimes it is not. (I recognize global variables are usually not the right tool, but there are situations where their use is appropriate.)</p>
<p>Here is a code example to illustrate my confusion:</p>
<p>shared.py</p>
<pre><code>globalvar = 0                   # a global scalar variable
globalarray1 = [1, 2, 3]        # a global array modified by .append()
globalarray2 = ['A', 'B', 'C']  # a global array modified by reassignment

def set_globals():
    global globalvar
    global globalarray1
    global globalarray2
    globalvar = 1
    globalarray1.append(4)
    globalarray2 = ['a', 'b', 'c']
    print("set_globals: globalvar=" + str(globalvar))
    print("set_globals: globalarray1=" + str(globalarray1))
    print("set_globals: globalarray2=" + str(globalarray2))
</code></pre>
<p>module.py</p>
<pre><code>from paas.shared import *

def _main_():
    global globalvar
    global globalarray1
    global globalarray2
    set_globals()
    print("main: globalvar=" + str(globalvar))
    print("main: globalarray1=" + str(globalarray1))
    print("main: globalarray2=" + str(globalarray2))

_main_()
</code></pre>
<p>When I run the program, I get the following output:</p>
<pre><code>set_globals: globalvar=1
set_globals: globalarray1=[1, 2, 3, 4]
set_globals: globalarray2=['a', 'b', 'c']
main: globalvar=0
main: globalarray1=[1, 2, 3, 4]
main: globalarray2=['A', 'B', 'C']
</code></pre>
<p>So</p>
<ul>
<li>'main' is not seeing the same scalar variable 'globalvar' that is being set by the 'set_globals' method in the shared module (the assignment 'took' and was printed within 'set_globals' but 'main' is printing something else.</li>
<li>It does appear to be seeing the same 'globalarray1' - the append(4) that modified globalarray1 in set_globals is recognized when it is printed by 'main'.</li>
<li>It is not seeing the same 'globalarray2' - the variable assigned in set_globals is correctly printed by 'set_globals', but 'main' is printing the original value assigned to globalarray2.</li>
</ul>
<p>Can someone explain what is going on? I have only seen fairly simple tutorial-type documentation about global variables; if there is documentation that describes what is happening here I would appreciate a pointer.</p>
<p>ADDED NOTE: This is not related to the way a local variable in a function hides a global variable with the same name; those are two different variables with the same name and different initial values. It does appear that the import creates a new module variable with the same name, but it starts with the initial value of the variable of the same name from the imported module.</p>
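<p>The behaviour described in the note can be demonstrated in one self-contained snippet: <code>from module import *</code> copies each <em>current</em> binding into the importer's namespace, so a later <em>rebinding</em> in the source module is invisible through the copy, while in-place <em>mutation</em> of a shared object remains visible through both names. A sketch, building the module in memory with <code>types.ModuleType</code> purely for illustration:</p>

```python
import sys
import types

# Build an in-memory stand-in for shared.py.
shared = types.ModuleType("shared")
exec(
    "globalvar = 0\n"
    "globalarray1 = [1, 2, 3]\n"
    "def set_globals():\n"
    "    global globalvar\n"
    "    globalvar = 1          # rebinds shared.globalvar\n"
    "    globalarray1.append(4) # mutates the shared list in place\n",
    shared.__dict__,
)
sys.modules["shared"] = shared

from shared import globalvar, globalarray1  # copies the current bindings

shared.set_globals()
print(globalvar)          # 0 -- the copied name still points at the old int
print(shared.globalvar)   # 1 -- the module attribute was rebound
print(globalarray1)       # [1, 2, 3, 4] -- same list object, mutation visible
```

<p>The usual fix for module.py is therefore to import the module itself (<code>from paas import shared</code>) and always read <code>shared.globalvar</code>, so every access goes through the module's current binding.</p>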
| <python><global> | 2023-10-09 13:04:43 | 1 | 1,399 | John Elion |
77,259,092 | 10,367,360 | Textures Segmentation | <p>I need to segment different textures in an image without AI or machine-learning methods, based only on <code>opencv2</code> or <code>skimage</code> (plus python, numpy, etc.).</p>
<p>Image:</p>
<p><a href="https://i.sstatic.net/5apcP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5apcP.png" alt="enter image description here" /></a></p>
<p>Ground truth:</p>
<p><a href="https://i.sstatic.net/WXIw3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WXIw3.png" alt="enter image description here" /></a></p>
<p>This is how the solution should look like:</p>
<p><a href="https://i.sstatic.net/QJoNG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QJoNG.png" alt="enter image description here" /></a></p>
<p>I have already tried many things: the watershed algorithm + KMeans clustering (possible to use), morphological operators, and Gaussian blur, but I still cannot find a good solution since I don't have a clear idea how to approach it. If you have ever encountered the same problem, please let me know. All images are black and white. If you want any code from me, let me know too.</p>
<p>UPD1:</p>
<p>Here you can find other examples: <a href="https://mosaic.utia.cas.cz/index.php?act=comp_imgs&bid=4&sort=2&dir=0&nt=-2&ndl=-1&alg=-1&ver=&f1=0&f2=-1&f3=-1&f4=-1&vis=1&hr=0&dyn=0" rel="nofollow noreferrer">https://mosaic.utia.cas.cz/index.php?act=comp_imgs&bid=4&sort=2&dir=0&nt=-2&ndl=-1&alg=-1&ver=&f1=0&f2=-1&f3=-1&f4=-1&vis=1&hr=0&dyn=0</a></p>
<p>Here is my Google Colab notebook where I try to write my own solution: <a href="https://colab.research.google.com/drive/10PtPfBArM0AgDPKHYY7tLwozpaZZho3I?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/10PtPfBArM0AgDPKHYY7tLwozpaZZho3I?usp=sharing</a></p>
| <python><numpy><opencv><computer-vision><scikit-image> | 2023-10-09 13:04:08 | 0 | 306 | Mark Minerov |
77,258,901 | 9,038,562 | Can we use GNN on graphs with only edge features? | <p>I'm trying to use a GNN to classify phylogeny data (fully bifurcated, singly directed trees). I converted the trees from phylo format in R to a PyTorch dataset. Taking one of the trees as an example:</p>
<p><a href="https://i.sstatic.net/n48b0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n48b0.png" alt="enter image description here" /></a></p>
<pre><code>Data(x=[83, 1], edge_index=[2, 82], edge_attr=[82, 1], y=[1], num_nodes=83)
</code></pre>
<p>It has <code>83</code> nodes (internals + tips, <code>x=[83, 1]</code>). I assigned <code>0</code>s to all the nodes, so every node has a feature value of <code>0</code>. I constructed an <code>82 x 1</code> matrix containing all the lengths of the directed edges between nodes (<code>edge_attr=[82, 1]</code>); I intend to use <code>edge_attr</code> to express the edge lengths and use them as weights. There is a label for each tree for classification purposes (<code>y=[1]</code>, values in {0, 1, 2}).</p>
<p><strong>As you can see, node feature is not important in my case, the only thing matters is edge feature (edge length).</strong></p>
<p>Below is my code implementation for modelling and training:</p>
<pre class="lang-py prettyprint-override"><code>tree_dataset = TreeData(root=None, data_list=all_graphs)


class GCN(torch.nn.Module):
    def __init__(self, hidden_size=32):
        super(GCN, self).__init__()
        self.conv1 = GCNConv(tree_dataset.num_node_features, hidden_size)
        self.conv2 = GCNConv(hidden_size, hidden_size)
        self.linear = Linear(hidden_size, tree_dataset.num_classes)

    def forward(self, x, edge_index, edge_attr, batch):
        # 1. Obtain node embeddings
        x = self.conv1(x, edge_index, edge_attr)
        x = x.relu()
        x = self.conv2(x, edge_index, edge_attr)

        # 2. Readout layer
        x = global_mean_pool(x, batch)  # [batch_size, hidden_channels]

        # 3. Apply a final classifier
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.linear(x)
        return x


model = GCN(hidden_size=32)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()
train_loader = DataLoader(tree_dataset, batch_size=64, shuffle=True)
print(model)


def train():
    model.train()
    lost_all = 0
    for data in train_loader:
        optimizer.zero_grad()  # Clear gradients.
        out = model(data.x, data.edge_index, data.edge_attr, data.batch)  # Perform a single forward pass.
        loss = criterion(out, data.y)  # Compute the loss.
        loss.backward()  # Derive gradients.
        lost_all += loss.item() * data.num_graphs
        optimizer.step()  # Update parameters based on gradients.
    return lost_all / len(train_loader.dataset)


def test(loader):
    model.eval()
    correct = 0
    for data in loader:  # Iterate in batches over the training/test dataset.
        out = model(data.x, data.edge_index, data.edge_attr, data.batch)
        pred = out.argmax(dim=1)  # Use the class with highest probability.
        correct += int((pred == data.y).sum())  # Check against ground-truth labels.
    return correct / len(loader.dataset)  # Derive ratio of correct predictions.


for epoch in range(1, 20):
    loss = train()
    train_acc = test(train_loader)
    # test_acc = test(test_loader)
    print(f'Epoch: {epoch:03d}, Train Acc: {train_acc:.4f}, Loss: {loss:.4f}')
<p>It seems that my code is not working at all:</p>
<pre><code>......
Epoch: 015, Train Acc: 0.3333, Loss: 1.0988
Epoch: 016, Train Acc: 0.3333, Loss: 1.0979
Epoch: 017, Train Acc: 0.3333, Loss: 1.0938
Epoch: 018, Train Acc: 0.3333, Loss: 1.1044
Epoch: 019, Train Acc: 0.3333, Loss: 1.1012
......
Epoch: 199, Train Acc: 0.3333, Loss: 1.0965
</code></pre>
<p>Is it because we can't use GNN without meaningful node features? Or is there any problem with my implementation?</p>
| <python><machine-learning><neural-network><pytorch-geometric><graph-neural-network> | 2023-10-09 12:36:47 | 1 | 639 | Tianjian Qin |
77,258,872 | 4,635,676 | Extract all lines between almost the same string in python excluding the last if it contains a different string | <p>I need to extract all the lines between each 'Fetching' string (or EOF), including the first 'Fetching', check if the block contains <code>40500000</code>, and write it to a '405' file, or else write it to a 'Not' file. Note there could potentially be hundreds of thousands (or more) of these 'Fetching' strings. My text file excerpt is as follows:</p>
<pre><code>Fetching: 0000007442476828
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
Fetching: 0000007442477158
0000007442477158|40320|4000|1|0||0|||12345679
Fetching: 0000007442477234
0000007442477234|40320|4000|1|0||0|||40500000
Fetching: 0000007442477223
Fetching: 0000007442477333
0000007442477333|500000|4001|1|0||0|||40500000
0000007442477333|40320|4000|1|0||0|||40500000
0000007442477333|12345|3000|1|0||0|||40500000
Fetching: 0000007442477999
<EOF>
</code></pre>
<p>My 'Not' file would look like this:</p>
<pre><code>Fetching: 0000007442476828
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
gzip: Archive/*2023-09-18*.gz: No such file or directory
Fetching: 0000007442477158
0000007442477158|40320|4000|1|0||0|||12345679
Fetching: 0000007442477234
Fetching: 0000007442477223
Fetching: 0000007442477999
</code></pre>
<p>And my '405' file would then look like this</p>
<pre><code>Fetching: 0000007442477234
0000007442477234|40320|4000|1|0||0|||40500000
Fetching: 0000007442477333
0000007442477333|500000|4001|1|0||0|||40500000
0000007442477333|40320|4000|1|0||0|||40500000
0000007442477333|12345|3000|1|0||0|||40500000
</code></pre>
<p>Everything I'm reading up on seems to be about a <code><start></code> and <code><end></code> string. As you can imagine, I have no idea what my end string would be other than what it starts with, and I have no idea of the number of entries between each <code>Fetching</code>. I'm still in the infancy of the code and I'm already stuck just printing the lines between two 'Fetching' markers:</p>
<pre><code>a = []
with open("FileName") as file:
for i in file.readlines():
a.append(i)
for j in a:
c=2
while c!=0:
if 'Fetching:' in j:
c=c-1
print(j)
break
</code></pre>
<p>It honestly looks like my approach is all wrong</p>
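<p>One way to frame the problem (a sketch, with made-up marker and needle arguments): treat each 'Fetching' line as the start of a block, buffer lines until the next 'Fetching' or EOF, then route the completed block by whether it contains <code>40500000</code>. Streaming line by line keeps memory constant no matter how many blocks there are.</p>

```python
def route_blocks(lines, marker="Fetching:", needle="40500000"):
    """Yield (is_405, block) for each block starting at a marker line."""
    block = []
    for line in lines:
        if line.startswith(marker) and block:
            yield (any(needle in l for l in block), block)
            block = []
        block.append(line)
    if block:                                     # flush the final block at EOF
        yield (any(needle in l for l in block), block)

lines = [
    "Fetching: 0000007442477158",
    "0000007442477158|40320|4000|1|0||0|||12345679",
    "Fetching: 0000007442477234",
    "0000007442477234|40320|4000|1|0||0|||40500000",
    "Fetching: 0000007442477223",
]
routed = list(route_blocks(lines))
```

<p>In practice you would iterate over the open input file and write each block to the '405' or 'Not' output file as soon as it completes, instead of collecting everything in a list.</p>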
| <python> | 2023-10-09 12:31:21 | 2 | 436 | imp |
77,258,737 | 10,759,785 | How can I automatically calculate the derivative of an array based on the best group of points? | <p>I have a group of points like the one below.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.array([1,1,1,1,2,3,4,4,4,4,4.5,5,5.5,6,6.5,7,7.5,8,8.1,8.2,8.3,8.4,8.5,8.5,8.5,8.5])
y = np.linspace(10,20,len(x))
plt.plot(x*0.1,y,'o', label='x')
plt.xlabel('x * 0.1')
plt.ylabel('y')
</code></pre>
<p>Figure 1</p>
<p><a href="https://i.sstatic.net/ASAJ7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ASAJ7.png" alt="enter image description here" /></a></p>
<p>And I want to calculate the derivative so that it finds the best combination of a group of points that have growth trends in common. In the end, I would like to have a result like this:</p>
<pre><code>x_diff = np.diff(x)
y_diff = np.linspace(10.5,19.5,len(x_diff))
plt.plot(x_diff,y_diff, label='diff(x)')
plt.legend()
</code></pre>
<p>Figure 2</p>
<p><a href="https://i.sstatic.net/gqTPO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gqTPO.png" alt="enter image description here" /></a></p>
<p>To get the result above, I used Numpy's diff function and it calculated the derivative point by point. As the data is noiseless, it works perfectly. But in real data with random noise, the diff function would give me a result like this:</p>
<pre><code>x_noise = x + np.random.rand(len(x))*0.3
plt.plot(x_noise*0.1,y,'o', label='x + noise')
plt.xlabel('x + noise * 0.1')
plt.ylabel('y')
x_diff = np.diff(x_noise)
y_diff = np.linspace(10.5,19.5,len(x_diff))
plt.plot(x_diff,y_diff, label='diff(x + noise)')
plt.legend()
</code></pre>
<p>Figure 3</p>
<p><a href="https://i.sstatic.net/IV0fj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IV0fj.png" alt="enter image description here" /></a></p>
<p>Would there be any way to obtain the same result as in Figure 2 with the noisy data in Figure 3? In other words, a way to calculate not the derivative point by point, but an algorithm that automatically finds the best group of points whose derivative would be smoothest?</p>
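<p>One common approach (a sketch, and certainly not the only option): smooth the noisy signal before differencing, for example with a moving average, so that the point-by-point derivative follows the trend rather than the noise. The window size of 5 here is an arbitrary choice.</p>

```python
import numpy as np

def smoothed_diff(x, window=5):
    """Moving-average smoothing followed by a finite difference."""
    kernel = np.ones(window) / window
    x_smooth = np.convolve(x, kernel, mode="valid")   # shorter by window-1
    return np.diff(x_smooth)

rng = np.random.default_rng(0)
x = np.array([1, 1, 1, 1, 2, 3, 4, 4, 4, 4, 4.5, 5, 5.5, 6, 6.5,
              7, 7.5, 8, 8.1, 8.2, 8.3, 8.4, 8.5, 8.5, 8.5, 8.5])
x_noise = x + rng.random(len(x)) * 0.3
d = smoothed_diff(x_noise)          # much closer to Figure 2 than raw np.diff
```

<p>If SciPy is available, <code>scipy.signal.savgol_filter</code> does a similar job while preserving local slopes better than a plain moving average.</p>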
| <python><numpy><automation><derivative><integral> | 2023-10-09 12:12:28 | 2 | 331 | Lucas Oliveira |
77,258,715 | 2,386,113 | Generate normal noise column-wise using numpy in python | <p>I have a matrix <code>A</code> in which the signals are contained along the columns, I want to generate a noise matrix in which each <strong>column contains Gaussian normal noise</strong> (mean = 0, std = 1).</p>
<p><a href="https://i.sstatic.net/SLfI5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SLfI5.png" alt="enter image description here" /></a></p>
<p><strong>Wrong Logic Code:</strong></p>
<p>I can do something like below but in that case the condition <code>mean = 0</code> would be true for the whole matrix and not just for the columns.</p>
<pre><code>import numpy as np
A = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]
)
# Generate Gaussian noise with mean 0 and std deviation 1
noise = np.random.normal(0, 1, size=A.shape)
</code></pre>
<p><strong>PS:</strong> Need to be performance optimized (avoid for-loops)</p>
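<p>If "mean 0, std 1 per column" must hold exactly in the drawn sample (not just in expectation), one vectorized option (a sketch) is to draw the noise and then re-standardize each column with broadcasting, which needs no Python loops:</p>

```python
import numpy as np

rng = np.random.default_rng(42)
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

noise = rng.normal(0, 1, size=A.shape)
# enforce the statistics column-wise: subtract each column's mean,
# divide by each column's std (both broadcast along axis 0)
noise = (noise - noise.mean(axis=0)) / noise.std(axis=0)
noisy_A = A + noise
```

<p>Without the re-standardization step, each column's sample mean is only approximately 0, with the approximation improving as the number of rows grows.</p>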
| <python><numpy> | 2023-10-09 12:07:48 | 1 | 5,777 | skm |
77,258,711 | 357,313 | Change column in-place with copy-on-write | <p>I have a large DataFrame. I have enabled <a href="https://pandas.pydata.org/docs/user_guide/copy_on_write.html" rel="nofollow noreferrer">copy_on_write</a>, as is expected to be the default for 3.0. I want to limit the values of some of the columns (in-place). Like this:</p>
<pre><code>import pandas as pd
pd.options.mode.copy_on_write = True
df = pd.DataFrame({'a': '.', 'x': [1, 2, 3]})
df['x'].clip(lower=0, inplace=True)
</code></pre>
<p>This gives me a <code>ChainedAssignmentError</code>:</p>
<blockquote>
<p>A value is trying to be set on a copy of a
DataFrame or Series through chained assignment using an inplace
method.<br />
When using the Copy-on-Write mode, such inplace method never
works to update the original DataFrame or Series, because the
intermediate object on which we are setting values always behaves as a
copy.</p>
<p>For example, when doing <code>df[col].method(value, inplace=True)</code>, try
using <code>df.method({col: value}, inplace=True)</code> instead, to perform the
operation inplace on the original object.</p>
</blockquote>
<p>How is this supposed to work? I tried variations of the suggestion in the last paragraph and other things, but I keep getting the error.</p>
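<p>Under copy-on-write, the pattern that works (as I read the pandas docs) is to assign the result back to the column instead of mutating a column selection in place. A sketch on a small stand-in frame (the negative value is mine, to make the clip visible):</p>

```python
import pandas as pd

pd.options.mode.copy_on_write = True
df = pd.DataFrame({'a': '.', 'x': [-1, 2, 3]})

# assign back instead of df['x'].clip(..., inplace=True)
df['x'] = df['x'].clip(lower=0)
```

<p>With copy-on-write, <code>df['x']</code> always behaves as a copy, so the <code>inplace=True</code> form can never reach the parent frame; plain column assignment is the supported way to "modify in place".</p>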
| <python><pandas><in-place><copy-on-write> | 2023-10-09 12:07:21 | 1 | 8,135 | Michel de Ruiter |
77,258,698 | 3,458,788 | SettingWithCopyWarning pandas using str.split | <p>I am trying to understand why the <code>SettingWithCopyWarning</code> is appearing here and how I can fix it. Here's my reproducible code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
grants = pd.read_csv('https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2023/2023-10-03/grants.csv', na_values = [" ", "Not Available"])
grants_sub = grants[(grants["expected_number_of_awards"] > 0) | (grants["estimated_funding"] > 0)]
## extract the year, month, and day
grants_sub[["year", "month", "day"]] = grants_sub["posted_date"].str.split(pat = "-", expand = True)
grants_sub
</code></pre>
<p>Which provides the following messages</p>
<pre class="lang-py prettyprint-override"><code>/var/folders/77/jqqmprj106jgtfpl96dr7910wy6yyk/T/ipykernel_81958/3239128668.py:11: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
grants_sub[["year", "month", "day"]] = grants_sub["posted_date"].str.split(pat = "-", expand = True)
/var/folders/77/jqqmprj106jgtfpl96dr7910wy6yyk/T/ipykernel_81958/3239128668.py:11: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
grants_sub[["year", "month", "day"]] = grants_sub["posted_date"].str.split(pat = "-", expand = True)
/var/folders/77/jqqmprj106jgtfpl96dr7910wy6yyk/T/ipykernel_81958/3239128668.py:11: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
grants_sub[["year", "month", "day"]] = grants_sub["posted_date"].str.split(pat = "-", expand = True)
</code></pre>
<p>I interpreted it (wrongly?) to use <code>.loc</code> instead, so if I do the following instead:</p>
<pre class="lang-py prettyprint-override"><code>grants_sub.loc[:, ["year", "month", "day"]] = grants_sub.loc[:,"posted_date"].str.split(pat = "-", expand = True)
grants_sub
</code></pre>
<p>It gives <code>NaN</code> for the year, month, and day columns.</p>
<p>Clearly this is an operator issue, so what am I doing wrong and why? The referenced page uses an example related to chaining, noting that chaining can result in slow performance or unexpected results, but the suggested fix doesn't work here. I am guessing this is related to pandas being unsure whether I want to work with a view or a copy, but I don't understand why.</p>
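<p>The warning fires because <code>grants_sub</code> is a filtered slice of <code>grants</code>, so pandas cannot tell whether writes should reach the original. One standard fix is an explicit <code>.copy()</code> before adding columns, sketched here on a tiny stand-in frame since the original CSV is remote:</p>

```python
import pandas as pd

grants = pd.DataFrame({
    "posted_date": ["2023-10-03", "2022-01-15"],
    "expected_number_of_awards": [1, 0],
    "estimated_funding": [0, 5],
})

grants_sub = grants[
    (grants["expected_number_of_awards"] > 0) | (grants["estimated_funding"] > 0)
].copy()  # explicit copy: later writes unambiguously target the new frame

grants_sub[["year", "month", "day"]] = grants_sub["posted_date"].str.split("-", expand=True)
```

<p>The <code>.loc</code> variant produced <code>NaN</code> for a different reason: <code>expand=True</code> returns a DataFrame with columns <code>0, 1, 2</code>, and <code>.loc</code> assignment aligns by column label, so nothing matches <code>["year", "month", "day"]</code>.</p>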
| <python><pandas><dataframe> | 2023-10-09 12:04:38 | 0 | 515 | cdd |
77,258,692 | 18,904,265 | How can I efficiently get multiple slices out of a large dataset? | <p>I want to get multiple small slices out of a large time-series dataset (~ 25GB, ~800M rows). At the moment, this looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
sample = pl.scan_csv(FILENAME, new_columns=["time", "force"]).slice(660_000_000, 3000).collect()
</code></pre>
<p>This code takes about 0-5 minutes, depending on the position of the slice I want to get. If I want 5 slices, this takes maybe 15 minutes to run everything. However, since polars is reading the whole csv anyhow, I was wondering if there is a way to get all my slices I want in one go, so polars only has to read the csv once.</p>
<p>Chaining multiple slices (obviously) doesn't work, maybe there is some other way?</p>
| <python><python-polars> | 2023-10-09 12:03:38 | 1 | 465 | Jan |
77,258,642 | 221,270 | Feature Importance keras regressionmodel | <p>I am trying to extract the feature importance or saliency map from my keras regression model:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import shap
from sklearn.inspection import permutation_importance
# Load data (modify as needed)
s_data = np.array([
[1.0, 0.0, np.nan, 0.0],
[2.0, 0.0, np.nan, 2.0],
[0.0, 1.0, 2.0, 0.0]
])
# Load phenotype labels as a NumPy array (modify as needed)
labels = np.array([
[7.],
[9.],
[2.] ])
s_data[np.isnan(s_data)] = -1
# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(s_data, labels, test_size=0.2, random_state=42)
# Standardize the input features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Define a deep learning model
model = keras.Sequential([
layers.Input(shape=(X_train.shape[1],)),
layers.Dense(128, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(1, activation='linear')
])
# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')
# Train the model
history = model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.2)
</code></pre>
<p>I have tried to use the eli5 library:</p>
<pre><code>import eli5
from eli5.sklearn import PermutationImportance
perm = PermutationImportance(model, random_state=1).fit(X_train,y_train)
eli5.show_weights(perm, feature_names = X_train.columns.tolist())
# Evaluate the model on the test set
test_loss = model.evaluate(X_test, y_test)
print(f'Test Loss: {test_loss}')
</code></pre>
<p>But it raises this error:</p>
<blockquote>
<p>TypeError: If no scoring is specified, the estimator passed should have a 'score' method. The estimator <keras.engine.sequential.Sequential object at 0x000002B1ABA41448> does not.</p>
</blockquote>
<p>I also tried to write my own permutation features importance function:</p>
<pre><code>n_permutations = 100 # Number of permutations to perform
feature_importances = np.zeros(X_test.shape[1])
print (feature_importances.max())
for _ in range(n_permutations):
shuffled_X_test = X_test.copy()
np.random.shuffle(shuffled_X_test) # Permute the feature values
shuffled_loss = model.evaluate(shuffled_X_test, y_test, verbose=0)
importance = test_loss - shuffled_loss
feature_importances += importance
# Normalize feature importances
feature_importances /= n_permutations
# Print feature importances
print("Feature Importances:")
for i, importance in enumerate(feature_importances):
print(f"SNP{i+1}: {importance:.4f}")
</code></pre>
<p>But this gives me the same value for each data point.</p>
<p>How can I extract the feature importance for a regression model?</p>
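<p>One reason the custom loop gives the same value everywhere: <code>np.random.shuffle</code> permutes whole rows, so every feature is destroyed at once. Permutation importance permutes one column at a time and measures the loss increase per feature. A framework-agnostic sketch, where a plain prediction function and loss stand in for <code>model.predict</code> / <code>model.evaluate</code>:</p>

```python
import numpy as np

def permutation_importance(predict, X, y, loss, n_repeats=10, seed=0):
    """Mean loss increase when each feature column is permuted in isolation."""
    rng = np.random.default_rng(seed)
    base = loss(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):          # permute ONE column at a time
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])        # in-place shuffle of column j only
            importances[j] += loss(y, predict(Xp)) - base
    return importances / n_repeats

# toy check: the target depends only on column 0
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 3 * X[:, 0]
mse = lambda y_true, y_pred: float(np.mean((y_true - y_pred) ** 2))
imp = permutation_importance(lambda X: 3 * X[:, 0], X, y, mse)
```

<p>With the Keras model you would pass something like <code>predict=lambda X: model.predict(X, verbose=0).ravel()</code>. As for eli5, the error says the estimator lacks a <code>score</code> method; wrapping the model in a scikit-learn-compatible wrapper (for example scikeras' <code>KerasRegressor</code>, assuming it fits your versions) is one way people address that.</p>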
| <python><keras> | 2023-10-09 11:55:47 | 1 | 2,520 | honeymoon |
77,258,525 | 5,273,308 | Importing a Cython object into a Python file | <p>I have code in two files, one of which imports the other.
File1</p>
<h1>parameters.pyx</h1>
<pre><code>cdef class Parameters:
cdef int myInt
def __init__(self, int myInt):
self.myInt = myInt
cdef set_myInt(self, int value):
self.myInt = value
cdef int get_myInt(self):
return self.myInt
</code></pre>
<p>File2</p>
<h1>model.py</h1>
<pre><code>from parameters import Parameters
def single_run():
parameters = Parameters(10)
print(parameters.get_myInt()) # Access it using the get_myInt method
parameters.set_myInt(100) # Use the set_myInt method to modify it
print(parameters.get_myInt()) # Access it again to see the modified value
</code></pre>
<p>I get the following error:</p>
<p>'parameters.Parameters' object has no attribute 'set_myInt'</p>
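<p>A likely cause: <code>cdef</code> methods exist only at the C level and are invisible to Python callers such as <code>model.py</code>; only <code>def</code> and <code>cpdef</code> methods get a Python-callable wrapper. A sketch of the change (Cython, not executed here):</p>

```cython
cdef class Parameters:
    cdef int myInt

    def __init__(self, int myInt):
        self.myInt = myInt

    cpdef set_myInt(self, int value):   # cpdef: callable from both C and Python
        self.myInt = value

    cpdef int get_myInt(self):
        return self.myInt
```

<p>If the methods are only ever called from Python, plain <code>def</code> works just as well; <code>cpdef</code> keeps the fast C-level path available inside Cython code.</p>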
| <python><class><cython> | 2023-10-09 11:37:11 | 1 | 1,633 | user58925 |
77,258,483 | 2,520,186 | OpenCV: Prepare angle-based kernel for erosion | <p>I have an image that contains several straight lines at various angles (e.g. horizontal, vertical, diagonal, cross-diagonal). I have code that can filter the lines in a desired direction, since I know how to set a kernel matrix for the horizontal, vertical, diagonal, and cross-diagonal directions. However, for lines at an arbitrary angle, I have no clue how to do it. See my code below.</p>
<pre><code>import cv2
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
# Define a image display function to work on Jupyter notebook
def showimage(title,myimage):
# Reverse OpenCV's BGR order to RGB for colored images (ndim>2)
if (myimage.ndim>2):
myimage = myimage[:,:,::-1]
fig, ax = plt.subplots(figsize=(6,6))
ax.imshow(myimage, cmap = 'gray', interpolation = 'bicubic')
plt.xticks([]), plt.yticks([]) # hide tick values on X and Y axes
plt.title(title)
plt.show()
# Load the image
imfile = 'multilines.jpg' # working image file
image = cv2.imread(imfile)
showimage('Original',image)
# Convert the image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Apply Canny edge detection
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
### Erosion:
# Create a horizontal kernel for erosion
kernel_h = np.array([[0, 0, 0],
[1, 1, 1],
[0, 0, 0]], dtype=np.uint8)
# Create a vertical kernel for erosion
kernel_v = np.array([[0, 1, 0],
[0, 1, 0],
[0, 1, 0]], dtype=np.uint8)
# Create a diagonal kernel for erosion (45 degrees)
kernel_d = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]], dtype=np.uint8)
# Create a cross-diagonal kernel for erosion (45 degrees)
kernel_crossd = np.array([[0, 0, 1],
[0, 1, 0],
[1, 0, 0]], dtype=np.uint8)
kernel = kernel_h; tag = 'Horizontal'
kernel = kernel_v; tag = 'Vertical'
kernel = kernel_d; tag = 'Diagonal'
kernel = kernel_crossd; tag = 'Cross-diagonal'
# TO DO begins ----------------------
# Create a generic angle dependent kernel
theta = 90
theta = np.deg2rad(theta)
## kernel_angle = ? <- Here to uncomment and modify
## kernel = kernel_angle ; tag = 'Generic angle-based' # Uncomment once modify above
# TO DO ends ------------------------
print(f'{tag} erosion applied.')
# Apply erosion to isolate desired lines
erosion_lines = cv2.erode(edges, kernel, iterations=1)
showimage('After erosion',erosion_lines)
# Invert the colors (convert to negative to get white background)
result = cv2.bitwise_not(erosion_lines)
# Show final image
showimage('Final',result)
</code></pre>
<p><strong>How can I create a generic kernel that creates filters for any arbitrarily angled line?</strong> This will also reduce the tedious task of assigning the kernel of each type of line (e.g. horizontal, vertical, diagonal, etc).</p>
<p>The "TO DO" part needs to get modified. Any solution?</p>
<p>I am using this image (filename: 'multilines.jpg'):</p>
<p><a href="https://i.sstatic.net/Ybrr7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ybrr7.jpg" alt="enter image description here" /></a></p>
<p>Output image:
<a href="https://i.sstatic.net/z1poo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z1poo.png" alt="enter image description here" /></a></p>
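<p>For the "TO DO" block, one way (a sketch) is to rasterize a short line segment through the centre of the kernel. Note the sign flip on the row step: image rows grow downward, so with this convention the mathematical 45° line lands on the array's anti-diagonal, which the code above labels "cross-diagonal".</p>

```python
import numpy as np

def line_kernel(size, angle_deg):
    """Binary square kernel containing a line at the given angle through the centre."""
    kernel = np.zeros((size, size), dtype=np.uint8)
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for r in range(-c, c + 1):
        col = c + int(round(r * np.cos(theta)))
        row = c - int(round(r * np.sin(theta)))   # minus: image y axis points down
        if 0 <= row < size and 0 <= col < size:
            kernel[row, col] = 1
    return kernel

kernel_angle = line_kernel(3, 45)
```

<p>A 3x3 kernel only resolves angles in 45° steps; a larger <code>size</code> (e.g. 7 or 11) gives finer angular selectivity. <code>cv2.getStructuringElement</code> offers rectangle, ellipse, and cross shapes but no line, so building the kernel manually like this is a reasonable route.</p>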
| <python><opencv><image-processing><filtering><mathematical-morphology> | 2023-10-09 11:30:52 | 1 | 2,394 | hbaromega |
77,258,241 | 13,280,838 | What is the behaviour/internal working of a normal python function in Spark/Databricks | <p>Hope you are all doing well.</p>
<p>Once again request your help in understanding a very small concept which continues to confuse me.</p>
<p>Say I have a databricks notebook with few cells. In cell 1, I have a small python function</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
from dateutil import tz

def getCurTsZn(z):
    zone = tz.gettz(z)  # a local name other than tz, so the imported module is not shadowed
    ts = datetime.now(zone)
    return ts
</code></pre>
<p>Now this function is called in the subsequent cells, say within normal Python/PySpark code. The following are some of the questions I have.</p>
<ol>
<li>Is the function a User Defined Function ? Or only functions written in the format mentioned <a href="https://docs.databricks.com/en/udf/index.html" rel="nofollow noreferrer">here</a> be considered as a UDF in Spark/Databricks.</li>
<li>How does this function work internally? When it is invoked from Python code in subsequent cells, I have read something about data going to the code and causing a performance issue.</li>
<li>I read that UDFs are black boxes, which cause the optimizer to restrict its optimization to before and after the UDF. Does even such a simple function (not registered) also act like a black box and hinder optimizations?</li>
<li>I understand that the function needs to be registered for it to be used within <code>spark.sql("SELECT getCurTsZn('some zone')")</code>, but if that is not needed, does registering actually make a difference? I already looked at the <a href="https://stackoverflow.com/questions/38296609/spark-functions-vs-udf-performance">linked</a> post, but I am not asking about Spark-level optimized functions, just a simple function like the one mentioned above.</li>
<li>Where do vectorized Python UDFs make an appearance? I understand that a vectorized Python UDF works on multiple rows instead of a single row. So does creating a function, passing a DataFrame to it, and acting upon it make a vectorized function?</li>
</ol>
<p>I hope to have a better understanding of the basics with your kind help. Thank you, cheers...</p>
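<p>For points 1 and 4, the distinction may be easier to see in code. A sketch (not executed here; it assumes a live <code>SparkSession</code> named <code>spark</code> and a DataFrame <code>df</code> with a string column <code>zone</code>): a plain Python function only ever runs on the driver, while a wrapped or registered UDF is serialized to the executors and called per row as an opaque black box.</p>

```python
from pyspark.sql.functions import udf
from pyspark.sql.types import TimestampType

# plain Python call: runs once, on the driver, outside Spark's query plan
ts = getCurTsZn("UTC")

# wrapped as a UDF: shipped to executors, invoked per row, opaque to the optimizer
getCurTsZn_udf = udf(getCurTsZn, TimestampType())
df = df.withColumn("cur_ts", getCurTsZn_udf(df["zone"]))

# registration is only needed for the SQL string API
spark.udf.register("getCurTsZn", getCurTsZn, TimestampType())
spark.sql("SELECT getCurTsZn('UTC')")
```

<p>Only the wrapped/registered forms are "UDFs" in the Spark sense; calling the bare function from driver-side Python involves Spark not at all.</p>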
| <python><apache-spark><pyspark><apache-spark-sql><databricks> | 2023-10-09 10:57:27 | 1 | 669 | rainingdistros |
77,258,199 | 19,238,204 | How to Replace the Variable from Sympy Computation so it can be Plotted with Matplotlib and Numpy? | <p>I have this computation (integral and then summation)</p>
<p>The result is <code>s</code>; it is in terms of the independent variables <code>x</code> and <code>t</code>.</p>
<p>But when I try to define it as <code>u(x,t)</code> with <code>return s</code>, it does not replace the <code>x</code> variable with the one I define as <code>x = np.linspace(0, 20, 50)</code>, nor the <code>t</code> variable with <code>t = np.linspace(0, 10, 50)</code>. I think it is still using the <code>x</code> defined by SymPy at the beginning, and the same happens with <code>t</code>.</p>
<p>I want to draw a 3D plot / wireframe of this; the <code>Z</code> axis is <code>u(x,t)</code>. I hope someone can help me with this.</p>
<p>My MWE:</p>
<pre><code>import numpy as np
import sympy as sm
from sympy import *
#from spb import *
#from mpmath import nsum, inf
x = sm.symbols("x")
t = sm.symbols("t")
n = sm.symbols("n", integer=True)
L = 20
f1 = (2/L)*x*sin(n*np.pi*x/20)
f2 = (2/L)*(20-x)*sin(n*np.pi*x/20)
fint1 = sm.integrate(f1,(x,0,10))
fint2 = sm.integrate(f2,(x,10,20))
D = 0.475
g = (fint1+fint2)*sin(n*np.pi*x/20)*exp(-(n**2)*(np.pi**2)*D*t/400).nsimplify()
print('')
sm.pretty_print(g)
#gn = g.subs({n:1})
s = 0
for c in range(10):
s += g.subs({n:c})
print(s)
print('')
print('The function u(x,t) : ')
print('')
sm.pretty_print(s)
from mpl_toolkits import mplot3d
import numpy as np
import matplotlib.pyplot as plt
# defining surface and axes
x = np.linspace(0, 20, 50)
t = np.linspace(0, 10, 50)
def u(x, t):
return s
X, T = np.meshgrid(x, t)
Z = u(X, T)
print('')
print('u(x,t)')
print(Z)
#fig = plt.figure()
# syntax for 3-D plotting
#ax = plt.axes(projection='3d')
# syntax for plotting
#ax.plot_wireframe(X,T,Z, cmap='viridis',edgecolor='green')
#ax.set_title('Wireframe')
#plt.show()
</code></pre>
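<p>The NumPy arrays assigned to <code>x</code> and <code>t</code> later never reach the SymPy symbols inside <code>s</code>; <code>u(X, T)</code> simply returns the symbolic expression unchanged. The usual bridge is <code>sympy.lambdify</code>, which compiles the expression into a NumPy-aware function. A sketch, with a single stand-in term in place of the full ten-term series:</p>

```python
import numpy as np
import sympy as sm

xs, ts = sm.symbols("x t")
# stand-in for the series s (one mode of the same form)
s = sm.sin(sm.pi * xs / 20) * sm.exp(-0.475 * sm.pi**2 * ts / 400)
u = sm.lambdify((xs, ts), s, "numpy")   # symbolic expression -> numeric function

X, T = np.meshgrid(np.linspace(0, 20, 50), np.linspace(0, 10, 50))
Z = u(X, T)                              # plain ndarray, ready for plot_wireframe
```

<p>With the real <code>s</code>, passing the same symbol objects used to build it (rather than re-creating <code>x</code> and <code>t</code>) is what lets <code>lambdify</code> substitute correctly.</p>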
| <python><numpy><matplotlib><sympy> | 2023-10-09 10:50:02 | 1 | 435 | Freya the Goddess |
77,258,198 | 5,743,692 | Full list symbols stripped with str.strip() by default | <p>As said in <a href="https://docs.python.org/3/library/stdtypes.html?highlight=str%20strip#str.strip" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>str.strip([chars])<br />
Return a copy of the string with the leading and trailing characters removed. The chars argument is a string specifying the set of characters to be removed. If omitted or None, the chars argument defaults to removing whitespace.</p>
</blockquote>
<p>What is <strong>whitespace</strong>?</p>
<pre><code> import string
print(string.whitespace)
</code></pre>
<p>gives smth like <code>' \t\n\r\x0b\x0c'</code></p>
<p>in the same time,</p>
<pre><code>'\t\n\r\f\x85\x1c\x1d\v\u2028\u2029'.strip()
</code></pre>
<p>gives '' too.</p>
<p>So the question is: what is the full list of symbols stripped by default with str.strip()?<br />
Sorry, but GPT says <a href="https://sl.bing.net/h0DLiPy4v36" rel="nofollow noreferrer">rubbish</a> about it.</p>
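<p>CPython's <code>str.strip()</code> with no argument removes exactly the characters for which <code>str.isspace()</code> is true, a Unicode-wide set that is larger than <code>string.whitespace</code>. One way to enumerate the full list (a brute-force sketch over every code point):</p>

```python
# all code points that str.strip() removes by default
stripped_by_default = [c for c in map(chr, range(0x110000)) if c.isspace()]
```

<p>The list includes ASCII whitespace plus characters such as NEL (<code>\x85</code>), the file/group/record/unit separators (<code>\x1c</code>-<code>\x1f</code>), and the Unicode line and paragraph separators (<code>\u2028</code>, <code>\u2029</code>); the exact set can vary slightly with the Unicode version bundled in your Python build.</p>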
| <python><string><strip><removing-whitespace> | 2023-10-09 10:49:59 | 3 | 451 | Vasyl Kolomiets |
77,258,190 | 4,171,008 | Expert system using PyDatalog to model and infer various family relations | <p>I'm working on an expert system using <code>PyDatalog</code> to model and infer various family relations.</p>
<p>This is the data used (loaded from a CSV file):</p>
<pre><code> name father_name mother_name gender
0 Ahmad Bachar Rafah Male
1 Amjad Bachar Rafah Male
2 Danah Bachar Rafah Female
3 Yazan Hassan Ghalia Male
4 Leen Hassan Ghalia Female
5 Dema Faiaz Sahar Female
6 Dania Faiaz Sahar Female
7 Tareq Faiaz Sahar Male
8 Asmaa Waleed Hanan Female
9 Alaa Waleed Hanan Female
10 Tasneem Waleed Hanan Female
11 Firas Waleed Hanan Male
12 Farouk Shareef Sameah Male
13 Usaema Saeed Adebeh Female
14 Bachar Farouk Usaema Male
15 Hassan Farouk Usaema Male
16 Rafah Zuhair Rukaieh Female
17 Zoukaa Zuhair Rukaieh Female
18 Lujain Adnan Zoukaa Female
19 Mohammad Adnan Zoukaa Male
</code></pre>
<p>First I added the terms:</p>
<pre><code>pyDatalog.create_terms('X,Y,Z,W, parent, male, female, father_of, mother_of,
son, daughter, aunt, uncle, cousin, niece, nephew, sibling, brother, sister')
</code></pre>
<p>And this is how I am defining the rules <code>son, male, daughter, female, father_of, mother_of</code>:</p>
<pre><code>for index, row in data.iterrows():
name, father_name, mother_name, gender = row['name'], row['father_name'], row['mother_name'], row['gender']
if gender == 'Male':
+son(name, father_name)
+son(name, mother_name)
+male(name)
else:
+daughter(name, father_name)
+daughter(name, mother_name)
+female(name)
+father_of(father_name, name)
+mother_of(mother_name, name)
</code></pre>
<p>Testing the rules gave the right answers:</p>
<pre><code>print(son(X, "Bachar"))
print(son(X, "Sahar"))
# Output:
X
-----
Amjad
Ahmad
X
-----
Tareq
</code></pre>
<p>This is for the <code>parent, brother, sister, sibling</code> also: <em>(Won't include the results to avoid making the question long but tested and worked well 100%)</em></p>
<pre><code>parent(X, Y) <= father_of(X, Y)
parent(X, Y) <= mother_of(X, Y)
brother(X, Y) <= (parent(Z, X) & parent(Z, Y) & male(X) & (X != Y))
sister(X, Y) <= (parent(Z, X) & parent(Z, Y) & (X != Y) & female(X))
sibling(X,Y) <= parent(Z, X) & parent(Z, Y) & (X != Y)
</code></pre>
<p>The issue and purpose of the question is defining the <code>uncle</code> and <code>aunt</code>:
I wrote this:</p>
<pre><code>uncle(X, Y) <= brother(X, Z) & parent(Z, Y) & male(X)
aunt(X, Y) <= sister(X, Z) & parent(Z, Y) & female(X)
cousin(X, Y) <= parent(Z, X) & (sister(X, Z) & parent(Z, Y))
</code></pre>
<p>Testing it with "Yazan" and "Ahmad" is OK, but with "Asmaa" it outputs an empty list; going by the data above, the output should be "Bachar, Hassan". The <code>aunt</code> and <code>cousin</code> rules likewise output wrong answers. Is there something I'm missing here that needs to be corrected?</p>
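<p>Two structural points may explain this (a sketch, not something I could run against your data). First, <code>brother(X, Z)</code> can only be derived when <code>Z</code> is itself recorded as somebody's child; in the rows shown, neither Waleed nor Hanan (Asmaa's parents) appears in the <code>name</code> column, so no sibling of Asmaa's parents, and hence no uncle, can ever be derived, whatever the rules say. Second, a cousin relation should link the two children through sibling parents, using the <code>W</code> term your <code>create_terms</code> already declares:</p>

```python
uncle(X, Y)  <= parent(Z, Y) & brother(X, Z)
aunt(X, Y)   <= parent(Z, Y) & sister(X, Z)
cousin(X, Y) <= parent(Z, X) & parent(W, Y) & sibling(Z, W)
```

<p>The original <code>cousin</code> rule requires <code>X</code> to be both a child and a sister of the same <code>Z</code>, which can never hold for real cousins.</p>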
| <python><python-3.x><pydatalog> | 2023-10-09 10:49:17 | 1 | 1,884 | Ahmad |
77,258,119 | 15,468,925 | Why docker-entrypoint.sh is not being found within the directory? | <p>I'm running a Flask app in Docker using docker-compose:</p>
<pre><code>version: '3.7'
services:
users:
build:
context: ./services/users
dockerfile: Dockerfile
volumes:
- './services/users:/usr/src/app'
ports:
- 5001:5000
environment:
- FLASK_ENV=development
- APP_SETTINGS=project.config.DevelopmentConfig
- DATABASE_URL=postgresql://postgres:postgres@users-db:5432/users_dev
- DATABASE_TEST_URL=postgresql://postgres:postgres@users-db:5432/users_test
depends_on:
- users-db
users-db:
build:
context: ./services/users/project/db
dockerfile: Dockerfile
ports:
- 5435:5432
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
</code></pre>
<p>Dockerfile:</p>
<pre><code>FROM python:3.11.6-alpine
RUN apk update && apk add --virtual --no-cache gcc python3-dev musl-dev && \
apk add postgresql-dev && apk add netcat-openbsd
WORKDIR /usr/src/app
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
COPY ./docker-entrypoint.sh /usr/src/app/docker-entrypoint.sh
RUN chmod +x /usr/src/app/docker-entrypoint.sh
COPY . /usr/src/app
CMD ["/usr/src/app/docker-entrypoint.sh"]
</code></pre>
<p>error output:</p>
<blockquote>
<p>Cannot start service users: failed to create task for container:
failed to create shim task: OCI runtime create failed: runc create
failed: unable to start container process: exec:
"/usr/src/app/docker-entrypoint.sh": stat
/usr/src/app/docker-entrypoint.sh: no such file or directory: unknown</p>
</blockquote>
<p>I've double-checked that the <code>docker-entrypoint.sh</code> file is in the correct location, and it is.
Strangely enough, the build works locally, but when I deploy it on an AWS EC2 instance it fails with the error output above. Both environments use the same operating system, yet I get different behaviours.</p>
<p>The error does not make sense to me since locally the build finds the file just fine but in remote server it cannot be found.</p>
<p>Thank you for any suggestions.</p>
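<p>Two causes fit the "works locally, fails on EC2" pattern (both are assumptions to verify, not certainties). First, the bind mount <code>./services/users:/usr/src/app</code> replaces the image's <code>/usr/src/app</code> with the host directory, so the script must exist (and be executable) on the EC2 host as well, regardless of what the <code>COPY</code> steps put in the image. Second, Windows-style CRLF line endings in the script make the kernel look for an interpreter literally named <code>/bin/sh</code> plus a carriage return, which produces exactly this "no such file or directory" error. A quick check-and-fix for the second cause:</p>

```shell
# reproduce and repair CRLF endings in an entrypoint script
printf '#!/bin/sh\r\necho ok\r\n' > docker-entrypoint.sh   # broken: CRLF endings
sed -i 's/\r$//' docker-entrypoint.sh                      # strip the carriage returns
chmod +x docker-entrypoint.sh
./docker-entrypoint.sh
```

<p><code>file docker-entrypoint.sh</code> reporting "with CRLF line terminators" on the EC2 copy would confirm this diagnosis; a <code>.gitattributes</code> rule forcing LF endings prevents it recurring.</p>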
| <python><docker><docker-compose> | 2023-10-09 10:35:39 | 0 | 305 | itsallgood |
77,258,114 | 12,428,848 | How to address N+1 problem in django with prefetch related? | <p>With the following code I am getting N queries, one per loop iteration. How can I avoid that?</p>
<p>I tried using prefetch_related, but that didn't work. Or am I doing it the wrong way?</p>
<p>models</p>
<pre><code>class Product(models.Model):
    name = models.CharField(max_length=255)
    ....

class ProductOffer(models.Model):
    product = models.ForeignKey(Product, related_name="offers", on_delete=models.CASCADE)
    ....
def my_view(request):
qs = Product.objects.filter(is_active=True).prefetch_related("offers")
data = []
for product in qs:
data.append(
{
....,
"offer":product.offers.filter(is_active=True, is_archived=False).last()
}
)
paginator = Paginator(data, 10)
try:
page_obj = paginator.page(page_num)
except PageNotAnInteger:
page_obj = paginator.page(1)
except EmptyPage:
page_obj = paginator.page(paginator.num_pages)
return page_obj
</code></pre>
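<p>The prefetch is loaded but then discarded: calling <code>product.offers.filter(...)</code> inside the loop issues a fresh query per product, because applying a new filter to a related manager bypasses the prefetched cache. Baking the filter into a <code>Prefetch</code> object avoids that. A sketch (assuming the offer model is named <code>ProductOffer</code>):</p>

```python
from django.db.models import Prefetch

qs = Product.objects.filter(is_active=True).prefetch_related(
    Prefetch(
        "offers",
        queryset=ProductOffer.objects.filter(is_active=True, is_archived=False),
        to_attr="active_offers",   # prefetched rows land in this plain list attribute
    )
)

data = [
    {"offer": product.active_offers[-1] if product.active_offers else None}
    for product in qs
]
```

<p>This runs two queries in total: one for the products and one for all matching offers. Note that paginating the list after building it still evaluates the whole queryset; paginating <code>qs</code> itself would keep the work proportional to one page.</p>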
| <python><django><query-optimization> | 2023-10-09 10:34:46 | 2 | 862 | D_P |
77,258,097 | 16,383,578 | How to programmatically open QComboBox dropdown in PyQt6 | <p>I want to make a QComboBox automatically open its dropdown menu at program start-up and without mouse clicking, and the menu should stay open indefinitely, with a cell being highlighted.</p>
<p>I want to do this because I am writing a widget to customize the stylesheets of every single GUI component of an application I am writing, and there are hundreds of lines of stylesheets alone. I want to make a dummy QComboBox stay open so as to let the user preview the styled result. And the dummy QComboBox should be disabled.</p>
<p>I have searched ways to open the dropdown programmatically and haven't found a method that works with PyQt6. I have tried a few methods myself and they don't work.</p>
<p>Minimal reproducible example:</p>
<pre><code>import sys
from PyQt6.QtCore import *
from PyQt6.QtGui import *
from PyQt6.QtWidgets import *
app = QApplication([])
app.setStyle("Fusion")
class ComboBox(QComboBox):
def __init__(self, texts):
super().__init__()
self.addItems(texts)
self.setContentsMargins(3, 3, 3, 3)
self.setFixedWidth(100)
class Window(QMainWindow):
def __init__(self):
super().__init__()
self.centralwidget = QWidget(self)
self.setCentralWidget(self.centralwidget)
self.vbox = QVBoxLayout(self.centralwidget)
box = ComboBox(["Lorem", "Ipsum", "Dolor"])
self.vbox.addWidget(box)
box.showPopup()
box.DroppedDown = True
box.setProperty("DroppedDown", True)
box.raise_()
self.setStyleSheet("""
QMainWindow {
background: #201a33;
}
QComboBox {
border: 3px outset #552b80;
border-radius: 8px;
background: #422e80;
color: #c000ff;
selection-background-color: #7800d7;
selection-color: #ffb2ff;
}
QComboBox::drop-down {
border: 0px;
padding: 0px 0px 0px 0px;
}
QComboBox::down-arrow {
image: url(D:/Asceteria/downarrow.png);
width: 10px;
height: 10px;
}
QComboBox QAbstractItemView {
border: 2px groove #8000c0;
border-radius: 6px;
background: #422e80;
}""")
window = Window()
window.show()
sys.exit(app.exec())
</code></pre>
<p>The combobox only opens its dropdown menu when it is clicked; I want it to open automatically without needing to click it. How do I do this?</p>
<hr />
<p>I am perfectly capable of cheating and using a combination of QGroupBox and QLabels to make a mock-up, but I want to know if there is a proper way to do this.</p>
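<p>One detail worth checking (an assumption on my part, since I cannot run a GUI here): in the code above, <code>showPopup()</code> runs inside <code>__init__</code>, before <code>window.show()</code>, and a popup generally cannot open for a widget that is not yet visible. Deferring the call until the event loop is running is a common workaround; this sketch assumes the combo box is kept as an attribute, e.g. <code>self.box = box</code>:</p>

```python
from PyQt6.QtCore import QTimer

window = Window()
window.show()
# defer the popup until the window is visible and the event loop has started
QTimer.singleShot(0, window.box.showPopup)
sys.exit(app.exec())
```

<p>Disabling the combo box may still suppress the popup even with this change, so for a pure style preview the QGroupBox/QLabel mock-up you mention could end up being the more reliable route.</p>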
| <python><pyqt><pyqt6><qcombobox> | 2023-10-09 10:31:46 | 1 | 3,930 | Ξένη Γήινος |
77,258,063 | 1,833,326 | Pydantic Version 2: Check for complex fields | <p>I would like to migrate from <code>pydantic</code> Version >1,<2 to Version >2. However, I cannot work out how to check for complex fields using the newest version (currently v2.4.2).</p>
<p>Here is the code that worked with the "legacy" <code>pydantic</code>:</p>
<pre><code>from pydantic import BaseModel
import typing as t

class ClassItems(BaseModel):
    item_id: str

class A(BaseModel):
    id: str
    items: t.List[ClassItems]

for i_field_name, i_field in A.model_fields.items():
    if i_field.is_complex():
        print(f"{i_field_name} is complex")
</code></pre>
<p>This now raises the error:</p>
<blockquote>
<p>AttributeError: 'FieldInfo' object has no attribute 'is_complex'</p>
</blockquote>
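For reference, a possible workaround sketch (an assumption, not an official pydantic API): in v2 the field type lives on <code>FieldInfo.annotation</code>, so a rough replacement for the removed <code>is_complex()</code> can be built from the stdlib <code>typing</code> helpers. The pydantic field loop is shown only as hypothetical usage in comments:

```python
import typing as t

def is_complex(annotation: t.Any) -> bool:
    # rough stand-in for the removed v1 check: treat parametrised
    # generics such as List[...] or Dict[...] as "complex"
    return t.get_origin(annotation) is not None

# hypothetical usage with a pydantic v2 model:
# for name, field in A.model_fields.items():
#     if is_complex(field.annotation):
#         print(f"{name} is complex")

assert is_complex(t.List[str])
assert not is_complex(str)
```

Whether this matches v1's exact notion of "complex" for every field type is an assumption worth verifying against your models.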
| <python><pydantic> | 2023-10-09 10:26:19 | 0 | 1,018 | Lazloo Xp |
77,257,989 | 4,267,439 | Python subprocess communicate hangs calling an external program, which asks for a keypress at the end | <p>I'm trying to integrate <a href="https://johndfenton.com/Steady-waves/Fourier.zip" rel="nofollow noreferrer">this</a> executable in a Python program. The executable is a console application: it does some calculations based on input files, prints some results to the console, writes a couple of text files, and then asks the user to press any key to exit.
Since I would prefer not to rebuild the executable, I'd like to work around the following issue. This is the code:</p>
<pre><code>exe_path = 'Fourier/Fourier.exe'
cwd_path = 'Fourier'
process = Popen(exe_path, cwd=cwd_path, stdin=PIPE, stdout=PIPE, stderr=PIPE)
(stdout, stderr) = process.communicate(b"\r\n")
</code></pre>
<p>The problem is that the program runs, but it hangs on <code>process.communicate</code>. If I remove the last line everything works, but the program keeps running in the background, which is not acceptable for me. Any ideas?</p>
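One possible workaround sketch: many old console programs read the final keypress directly from the console (e.g. via <code>getch()</code>) rather than from stdin, in which case writing to the pipe never unblocks them. Sending a newline with a timeout, and killing the process as a fallback, covers both cases. The child process below is only a stand-in for Fourier.exe:

```python
import sys
from subprocess import Popen, PIPE, TimeoutExpired

# stand-in for Fourier.exe: prints results, then blocks waiting for a key
child = [sys.executable, '-c', "print('results written'); input()"]

process = Popen(child, stdin=PIPE, stdout=PIPE, stderr=PIPE)
try:
    # if the program reads its "press any key" from stdin, this unblocks it
    stdout, stderr = process.communicate(input=b'\r\n', timeout=30)
except TimeoutExpired:
    # if it reads the key from the console instead, stop it ourselves;
    # the output files have already been written by this point
    process.kill()
    stdout, stderr = process.communicate()

print(stdout.decode().strip())  # results written
```

Whether 30 seconds is enough for the real calculation is an assumption to adjust.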
| <python><subprocess> | 2023-10-09 10:12:54 | 0 | 2,825 | rok |
77,257,983 | 13,793,478 | python Dict operation for flask | <p>How do I add (sum) all the <code>pnl</code> values of the dictionaries that have the same date?</p>
<p>from this <code>[{'pnl': 10, 'date': 2}, {'pnl': 30, 'date': 2}, {'pnl': 20, 'date': 3}]</code></p>
<p>To this <code>[{'pnl': 40, 'date': 2}, {'pnl': 20, 'date': 3}]</code></p>
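A minimal sketch of one way to do this with a <code>defaultdict</code> (insertion order of the dates is preserved):

```python
from collections import defaultdict

data = [{'pnl': 10, 'date': 2}, {'pnl': 30, 'date': 2}, {'pnl': 20, 'date': 3}]

# accumulate pnl per date, then rebuild the list of dicts
totals = defaultdict(int)
for row in data:
    totals[row['date']] += row['pnl']

result = [{'pnl': pnl, 'date': date} for date, pnl in totals.items()]
print(result)  # [{'pnl': 40, 'date': 2}, {'pnl': 20, 'date': 3}]
```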
| <python> | 2023-10-09 10:12:22 | 1 | 514 | Mt Khalifa |
77,257,954 | 502,144 | How to use common subexpression elimination (CSE) together with codegen | <p>I'm trying to use sympy.utilities.codegen. I need to compute a complicated function and its derivative. As a simplified example, the function <code>f</code> is</p>
<pre class="lang-py prettyprint-override"><code>x = Symbol('x')
y = Symbol('y')
f = 1 / (x - y)
df = f.diff(x)
[(_, c_code), _] = codegen([('f', f), ('df', df)], 'C99', header=False, empty=False)
print(c_code)
</code></pre>
<p>This generates the following code:</p>
<pre class="lang-c prettyprint-override"><code>#include "f.h"
#include <math.h>
double f(double x, double y) {
double f_result;
f_result = 1.0/(x - y);
return f_result;
}
double df(double x, double y) {
double df_result;
df_result = -1/pow(x - y, 2);
return df_result;
}
</code></pre>
<p>Although the C compiler might be able to eliminate common subexpressions, I don't want to rely fully on this. Moreover, I doubt that the compiler can do that across different functions. Also, I want the generated code to be at least somewhat readable. So I'm using common subexpression elimination:</p>
<pre class="lang-py prettyprint-override"><code>substitutions, result = cse([f, df])
print(substitutions, result)
</code></pre>
<pre class="lang-none prettyprint-override"><code>[(x0, x - y)] [1/x0, -1/x0**2]
</code></pre>
<p>But when I try to generate code for the substitutions, I get a function instead of a variable for <code>x0</code>, and I can't use it for the calculation of <code>f</code> and <code>df</code>:</p>
<pre><code>[(_, c_code), _] = codegen((str(substitutions[0][0]), substitutions[0][1]), 'C99', header=False, empty=False)
print(c_code)
</code></pre>
<pre class="lang-c prettyprint-override"><code>#include "x0.h"
#include <math.h>
double x0(double x, double y) {
double x0_result;
x0_result = x - y;
return x0_result;
}
</code></pre>
<p>I've also tried to use <code>CodeBlock</code></p>
<pre class="lang-py prettyprint-override"><code>f2 = Symbol('f')
code_block = CodeBlock(Assignment(*substitutions[0]), Assignment(f2, result[0]))
[(c_name, c_code), _] = codegen(('f', code_block), 'C99', header=False, empty=False)
</code></pre>
<p>But the resulting code is ill-formed:</p>
<pre class="lang-c prettyprint-override"><code>#include "f.h"
#include <math.h>
double f(double x, double y) {
double f_result;
f_result = x0 = x - y;
f = 1.0/x0;
return f_result;
}
</code></pre>
<p>There is the question <a href="https://stackoverflow.com/questions/43442940/common-sub-expression-elimination-using-sympy">Common sub-expression elimination using sympy</a>, but it uses <code>sympy.printing</code>, which is lower level and doesn't fit me.</p>
<p>There is also relevant answer <a href="https://stackoverflow.com/a/25323791/502144">https://stackoverflow.com/a/25323791/502144</a>, but it doesn't show how to generate code from simplified expression.</p>
<p><strong>So, the question is:</strong> how can I generate code using <code>sympy.utilities.codegen</code> after expression is simplified with <code>cse</code>? I want the code to look like this:</p>
<pre class="lang-c prettyprint-override"><code>#include <math.h>
void compute(double x, double y, double* f, double* df) {
double x0 = x - y;
*f = 1.0/x0;
*df = -1/pow(x0, 2);
}
</code></pre>
| <python><sympy><code-generation><abstract-syntax-tree> | 2023-10-09 10:07:29 | 1 | 3,744 | fdermishin |
77,257,750 | 22,466,650 | How to move up values in specific columns as long as possible? | <p>My input is this dataframe :</p>
<pre><code>df = pd.DataFrame({'class': ['class_a',
'class_a',
'class_a',
'class_a',
'class_b',
'class_b',
'class_c',
'class_c',
'class_d'],
'id': ['id1', 'id2', 'id3', '', '', 'id4', 'id5', '', 'id6'],
'name': ['abc1', '', '', 'abc2', 'abc3', '', '', 'abc4', 'abc5']})
print(df)
class id name
0 class_a id1 abc1
1 class_a id2
2 class_a id3
3 class_a abc2
4 class_b abc3
5 class_b id4
6 class_c id5
7 class_c abc4
8 class_d id6 abc5
</code></pre>
<p>And I need to have one single row per pair (class/id). We need to move up the values in <code>id</code> and <code>name</code> as long as it's possible.</p>
<pre><code> class id name
0 class_a id1 abc1
1 class_a id2 abc2
2 class_a id3
3 class_b id4 abc3
4 class_c id5 abc4
5 class_d id6 abc5
</code></pre>
<p>I tried with the code below but I get a messed up result :</p>
<pre><code>
df.replace('',np.nan,inplace=True)
df.groupby(['class', 'id'], as_index=False, sort=False).first()
class id name
0 class_a id1 abc1
1 class_a id2 None
2 class_a id3 None
3  class_b  id4   None
4 class_c id5 None
5 class_d id6 abc5
</code></pre>
<p>Can you provide an explanation please ?</p>
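As for the explanation: grouping by <code>['class', 'id']</code> puts every id in its own one-row group, so <code>first()</code> can never pull a name up from another row. A sketch of one way to get the desired shape instead (re-creating the question's frame): convert empty strings to NaN, then within each class drop the missing values of each column and realign them from the top:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'class': ['class_a'] * 4 + ['class_b'] * 2 + ['class_c'] * 2 + ['class_d'],
    'id': ['id1', 'id2', 'id3', '', '', 'id4', 'id5', '', 'id6'],
    'name': ['abc1', '', '', 'abc2', 'abc3', '', '', 'abc4', 'abc5'],
})

out = (df.replace('', np.nan)
         .groupby('class', group_keys=True)
         # per group, compact each column by dropping NaNs and re-indexing from 0;
         # shorter columns are padded with NaN when the frame is rebuilt
         .apply(lambda g: g[['id', 'name']]
                           .apply(lambda s: s.dropna().reset_index(drop=True)))
         .reset_index(level=0)
         .reset_index(drop=True))
print(out)
```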
| <python><pandas> | 2023-10-09 09:36:59 | 2 | 1,085 | VERBOSE |
77,257,739 | 1,632,812 | iteratively selecting a subset of a model by select fields in odoo 14 | <p>I have a model</p>
<p>I need to provide 4 select fields</p>
<p>The value chosen in the first select field will be used to calculate a subset of the model to show in the second select field.</p>
<p>The value chosen in the second select field will be used to calculate a subset of the model to show in the third select field,</p>
<p>and so on</p>
<p>I read returning domains from on_change methods is deprecated in odoo 14</p>
<p>I'm a bit confused</p>
<p>How am I supposed to do this ?</p>
| <python><odoo><odoo-14> | 2023-10-09 09:35:48 | 1 | 603 | user1632812 |
77,257,710 | 2,386,113 | How to multiply rows with corresponding columns in Python (cuPy/Numpy)? | <p>I need to multiply each row with a column at the corresponding row-index. Consider the graphics below in which I have a <code>3 x 3</code> matrix. The required operation is to multiply the <code>row[0]</code> of matrix with <code>col[0]</code> of <code>transposed_matrix</code>, <code>row[1]</code> of matrix with <code>col[1]</code> of <code>transposed_matrix</code>, and so on.</p>
<p><a href="https://i.sstatic.net/gib7b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gib7b.png" alt="enter image description here" /></a></p>
<p><strong>Question:</strong> How can I achieve it in <strong>CuPy/NumPy</strong> in a smart way (i.e. without using for-loops)?</p>
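Without the figure it is ambiguous whether an elementwise product or a per-row dot product is meant, so here is a sketch covering both readings; note that column <code>i</code> of the transpose is just row <code>i</code> of the original matrix, and CuPy mirrors the NumPy API so <code>np</code> can be swapped for <code>cupy</code>:

```python
import numpy as np

A = np.arange(1, 10).reshape(3, 3)
B = A.T  # the transposed matrix

# reading 1: elementwise product of A's row i with B's column i
# (B.T[i] is B's column i, so this is simply A * A here)
elementwise = A * B.T

# reading 2: dot product of A's row i with B's column i, one scalar per row
dots = np.einsum('ij,ji->i', A, B)

# the two readings are related: the dot is the row-sum of the elementwise product
assert np.array_equal(dots, (A * B.T).sum(axis=1))
print(dots)  # [ 14  77 194]
```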
| <python><python-3.x><numpy><cupy> | 2023-10-09 09:30:29 | 3 | 5,777 | skm |
77,257,598 | 206,253 | Chaining dataframe sort in pandas produces unexpected result | <p>This is the dataframe <code>activity</code>:</p>
<pre><code>| player_id | device_id | event_date | games_played |
| --------- | --------- | ---------- | ------------ |
| 1 | 2 | 2016-03-01 | 5 |
| 1 | 2 | 2016-05-02 | 1 |
| 1 | 3 | 2015-06-25 | 2 |
| 3 | 1 | 2016-03-02 | 10 |
| 3 | 4 | 2016-02-03 | 15 |
</code></pre>
<p>Running</p>
<pre><code>activity.sort_values('event_date').assign(\
games_played_so_far = activity.groupby('player_id')\
.games_played.cumsum())[['player_id', 'event_date', 'games_played_so_far']]
</code></pre>
<p>produces</p>
<pre><code>| player_id | event_date | games_played_so_far |
| --------- | ---------- | ------------------- |
| 1 | 2015-06-25 | 8 |
| 3 | 2016-02-03 | 25 |
| 1 | 2016-03-01 | 5 |
| 3 | 2016-03-02 | 10 |
| 1 | 2016-05-02 | 6 |
</code></pre>
<p>whereas doing the sorting separately</p>
<pre><code>activity.sort_values('event_date', inplace = True)
activity.assign(games_played_so_far = activity.groupby('player_id')\
.games_played.cumsum())[['player_id', 'event_date', 'games_played_so_far']]
</code></pre>
<p>produces</p>
<pre><code>| player_id | event_date | games_played_so_far |
| --------- | ---------- | ------------------- |
| 1 | 2015-06-25 | 2 |
| 1 | 2016-03-01 | 7 |
| 1 | 2016-05-02 | 8 |
| 3 | 2016-02-03 | 15 |
| 3 | 2016-03-02 | 25 |
</code></pre>
<p>Why?</p>
<p>I noticed that this unexpected behaviour occurs irregularly - for many dataframes the result is the same.</p>
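For reference, a sketch of what is likely going on, plus one possible fix: in the chained version the name <code>activity</code> inside <code>assign</code> still refers to the unsorted frame, so the cumulative sum is computed in the original row order and then aligned back to the sorted frame by index. Using a lambda makes <code>assign</code> operate on the already-sorted intermediate result (the frame below re-creates the question's data):

```python
import pandas as pd

activity = pd.DataFrame({
    'player_id':    [1, 1, 1, 3, 3],
    'device_id':    [2, 2, 3, 1, 4],
    'event_date':   ['2016-03-01', '2016-05-02', '2015-06-25',
                     '2016-03-02', '2016-02-03'],
    'games_played': [5, 1, 2, 10, 15],
})

# the lambda receives the sorted intermediate frame, not the original `activity`
out = (activity.sort_values('event_date')
       .assign(games_played_so_far=lambda d: d.groupby('player_id')
                                              .games_played.cumsum())
       [['player_id', 'event_date', 'games_played_so_far']])
print(out)
```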
| <python><pandas> | 2023-10-09 09:17:40 | 2 | 3,144 | Nick |
77,257,474 | 6,839,048 | how to find the largest subsets of a correlation matrix whose correlations are below a given value? | <p>I used the following code to get a correlation matrix.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.rand(20,4))
df = df.corr()
</code></pre>
<p>say the df is as below</p>
<pre><code> 0 1 2 3
0 1.000000 -0.156813 0.294344 -0.034569
1 -0.156813 1.000000 -0.238828 0.222677
2 0.294344 -0.238828 1.000000 0.071389
3 -0.034569 0.222677 0.071389 1.000000
</code></pre>
<p>the largest subsets of the above correlation matrix whose correlations are below 0.23 are [0,1,3] and [1,2,3], as all correlations within [0,1,3] and [1,2,3] are below 0.23, while other subsets like [0,3], [1,2] contain only two elements.</p>
<p>if the df is a 200 × 200 matrix, what's the quickest way to find such subsets? Checking all 2^200 possibilities would be too slow.</p>
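Finding the largest such subsets is the maximum-clique problem on the graph whose edges join pairs with correlation below the threshold. It is NP-hard in general, so no method is guaranteed fast for every 200 × 200 matrix, but enumerating maximal cliques with Bron-Kerbosch is far faster than checking all 2^200 subsets and is often practical when the graph is sparse. A sketch, using the example matrix from the question (negative correlations count as below the threshold here, as in the question):

```python
def largest_low_corr_subsets(corr, threshold):
    """Largest index subsets whose pairwise correlations are all below
    `threshold`, found as maximum cliques of the low-correlation graph."""
    n = len(corr)
    adj = {i: {j for j in range(n) if j != i and corr[i][j] < threshold}
           for i in range(n)}
    cliques = []

    def bron_kerbosch(r, p, x):
        # r: current clique, p: candidates, x: already-processed vertices
        if not p and not x:
            cliques.append(sorted(r))
        for v in list(p):
            bron_kerbosch(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)

    bron_kerbosch(set(), set(range(n)), set())
    best = max(map(len, cliques))
    return [c for c in cliques if len(c) == best]

corr = [[1.000, -0.157, 0.294, -0.035],
        [-0.157, 1.000, -0.239, 0.223],
        [0.294, -0.239, 1.000, 0.071],
        [-0.035, 0.223, 0.071, 1.000]]
print(largest_low_corr_subsets(corr, 0.23))  # [[0, 1, 3], [1, 2, 3]]
```

For dense graphs at n = 200, an off-the-shelf implementation with pivoting (e.g. networkx's clique routines) would be a more robust choice than this bare sketch.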
| <python><pandas><algorithm> | 2023-10-09 08:58:18 | 1 | 321 | Lei Yu |
77,257,099 | 2,695,082 | Link a dll with a static library containing boost python import code causing the application to crash | <p>I have created a static library containing a function used to import a module:</p>
<pre><code>void callFn()
{
    _putenv_s("PYTHONPATH", ".");
    Py_Initialize();
    namespace python = boost::python;
    try
    {
        python::object my_python_class_module = python::import("pythonFile");
        python::object test = my_python_class_module.attr("Test")();
        test.attr("fn")("from c++");
    }
    catch (const python::error_already_set&)
    {
        PyErr_Print();
    }
}
</code></pre>
</code></pre>
<p>Content of pythonFile.py:</p>
<pre><code>class Test():
    def fn(self, message):
        print("From python")
</code></pre>
<p>I am calling this function(callFn()) from another x.dll used in an exe:</p>
<pre><code>void ClassName::abc()
{
    callFn();
}
</code></pre>
<p>Initially when I compiled x.dll, I got a linking error:
LINK : fatal error LNK1104: cannot open file 'boost_python39-vc142-mt-x64-1_71.lib'</p>
<p>Then I built the boost source code to create boost_python39-vc142-mt-x64-1_71.lib.
Doing this made the compilation succeed; however, on running the application, it crashes with the error:</p>
<p>Error loading library x - Cannot load library x: The specified module could not be found.</p>
<p>Please note, on commenting the 3 lines in "try" in callFn(), and repeating the above process does not crash the application. Any ideas why boost::python::import() causes application to crash?</p>
| <python><c++><python-3.x><boost> | 2023-10-09 07:54:42 | 1 | 329 | user2695082 |
77,257,048 | 951,757 | how to transform pandas dataframe by type | <p>I have a pandas dataframe like the one below:</p>
<pre><code> index date hour type tiantan
2014-01-01 00:00:00 20140101 0 PM2.5 32.0
2014-01-01 00:00:00 20140101 0 PM10 110.0
2014-01-01 01:00:00 20140101 1 PM2.5 56.0
2014-01-01 01:00:00 20140101 1 PM10 126.0
</code></pre>
<p>I would like to transform it to the format below. What's the best way to achieve this?</p>
<pre><code> index date hour pm25 pm10
2014-01-01 00:00:00 20140101 0 32.0 110.0
2014-01-01 01:00:00 20140101 1 56.0 126
</code></pre>
| <python><pandas><dataframe> | 2023-10-09 07:47:27 | 1 | 45,630 | billz |
77,257,035 | 188,331 | Exception: Custom Normalizer cannot be serialized | <p>I'm using a custom normalizer for my custom tokenizer.</p>
<p>The custom normalizer is as follows:</p>
<pre><code>class CustomNormalizer:
    def normalize(self, normalized: NormalizedString):
        # Most of these can be replaced by a `Sequence` combining some provided Normalizer,
        # (ie Sequence([ NFKC(), Replace(Regex("\s+"), " "), Lowercase() ])
        # and it should be the preferred way. That being said, here is an example of the kind
        # of things that can be done here:
        try:
            if normalized is None:
                normalized = NormalizedString("")
            else:
                normalized.nfkc()
                normalized.filter(lambda char: not char.isnumeric())
                normalized.replace(Regex("\s+"), " ")
                normalized.lowercase()
        except TypeError as te:
            print("CustomNormalizer TypeError:", te)
            print(normalized)
</code></pre>
<p>which the codes are adopted here: <a href="https://github.com/huggingface/tokenizers/blob/b24a2fc1781d5da4e6ebcd3ecb5b91edffc0a05f/bindings/python/examples/custom_components.py" rel="nofollow noreferrer">https://github.com/huggingface/tokenizers/blob/b24a2fc1781d5da4e6ebcd3ecb5b91edffc0a05f/bindings/python/examples/custom_components.py</a></p>
<p>When I use this normalizer with a custom Tokenizer (codes below) and try to save the trained tokenizer, it said:</p>
<blockquote>
<p>Exception: Custom Normalizer cannot be serialized</p>
</blockquote>
<p>The custom tokenizer code is as follows:</p>
<pre><code>model = models.WordPiece(unk_token="[UNK]")
tokenizer = Tokenizer(model)
tokenizer.normalizer = Normalizer.custom(CustomNormalizer())

trainer = trainers.WordPieceTrainer(
    vocab_size=2500,
    special_tokens=special_tokens,
    show_progress=True
)
tokenizer.train_from_iterator(get_training_corpus(), trainer=trainer, length=len(dataset))

# Save the Tokenizer result
tokenizer.save('saved.json')  # in this line, it gives Exception
</code></pre>
<p>How can I resolve this exception?</p>
| <python><huggingface-tokenizers> | 2023-10-09 07:44:14 | 1 | 54,395 | Raptor |
77,256,848 | 19,238,204 | How to Implement Sum with Substitute in For Loop with Python | <p>I have a computation of an integral with several symbolic variables <strong>x, t, n</strong>.</p>
<p>After I get the <code>g</code> function in terms of <strong>x, t, n</strong>, I want to do a summation with a for loop that replaces the <strong>n</strong> variable with a value.</p>
<p>The command <code>g.subs({n:c})</code> is not working. The variable <code>n</code> here should be replaced by 1, then 2, and so on up to the range limit of the for loop.</p>
<p>In mathematical terms, <code>g</code> at the last line is the sum of a function <code>g(x,t)</code> over <code>n</code> from 1 to 10.</p>
<p>Is there a series function from sympy that can substitute the symbolic variable <code>n</code> when a function has three unknown variables?</p>
<p>This is my MWE:</p>
<pre><code>from sympy import pi, series
#from spb import *
#from mpmath import nsum, inf
import sympy as sm

x = sm.symbols("x")
t = sm.symbols("t")
n = sm.symbols("n", integer=True)
L = 20
f1 = (2/L)*x*sm.sin(n*pi*x/20)
f2 = (2/L)*(20-x)*sm.sin(n*pi*x/20)
sm.pretty_print(sm.integrate(f1,(x,0,10)))
sm.pretty_print(sm.integrate(f2,(x,10,20)))

D = 0.475
g = (f1+f2)*sm.sin(n*pi*x/20)*sm.exp(-(n**2)*(pi**2)*D*t/400)
sm.pretty_print(g)
gn = g.subs({n:1})
sm.pretty_print(gn)

for c in range(10):
    g += g.subs({n:c})
print(g)
</code></pre>
| <python><sympy> | 2023-10-09 07:07:30 | 1 | 435 | Freya the Goddess |
77,256,804 | 12,881,307 | pandas json normalize directly from file | <p>I am writing a wrapper class for <code>pymongo</code>. The main functionality I am aiming for is for it to be compatible with <code>pandas</code>, so that I can do this:</p>
<pre class="lang-py prettyprint-override"><code>from loader import PyMongoLoader
import pandas as pd
_loader = PyMongoLoader(url="my_url", port="my_port")
df = pd.read_json(_loader, orient="records")
</code></pre>
<p>From the pandas documentation, <code>pd.read_json</code> accepts any object with a <code>read</code> method as the first argument. Internally, I call <code>collection.find</code> and parse the result to a string using <code>bson.dumps</code>:</p>
<pre class="lang-py prettyprint-override"><code>    def read(self, size: int = -1) -> str:
        assert self._is_valid, "MongoDB connection must be valid for retrieving data"
        self.logger.debug(
            f"Retrieving documents from {self._collection}. Expected size: \033[94m{self._collection.count_documents({})}\033[0m"
        )
        result = list(self._collection.find())
        self.logger.debug(
            f"Retrieved {len(result)} documents from remote. Size of json: {sum([len(document) for document in result])}"
        )
        return dumps(result)
</code></pre>
<p><strong>Problem</strong></p>
<p>This works fine. However, the database I am working with has complex field names like "26344T Control Measure [5.3-10.8]". Because of these names, the data is stored as a nested json in the database. I want to be able to normalize these fields, but as far as I know, <code>pd.json_normalize</code> does not accept a path argument, only a string.</p>
<p>I've thought of two solutions, but neither convince me much:</p>
<ul>
<li>Changing the database field names. This would solve the issue, but it would be a hassle to keep in mind any time I have to add new data to the database.</li>
<li>Patching <code>pd.read_json</code> or <code>pd.json_normalize</code> in the loader source file. While this would also solve the problem, I'm sure it is not a good solution and it could potentially break other code using pandas.</li>
</ul>
<p><strong>Question</strong></p>
<p>Is there a supported way to normalize a json directly from a file? If not, how can I normalize the json I'm passing to <code>pd.read_json</code> to get rid of the weird indentation problems in my database?</p>
<p><strong>Edit</strong></p>
<p>The JSON results I get from my database follow this format:</p>
<pre class="lang-py prettyprint-override"><code>{
    "_id": {
        "$oid": "651ec788c110096a55c8d4de"
    },
    "DateTime": "20/02/2017 15:00:00",
    "546321B": {
        "measure": 0
    },
    "538612B": {
        "measure": 80
    },
    "517713B": {
        "measure": 70
    },
    "508021V": {
        "avg": 37
    }
}
</code></pre>
<p>and I want my dataframe to be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>DateTime</th>
<th>546321B.measure</th>
<th>538612B.measure</th>
<th>517713B.measure</th>
<th>508021V.avg</th>
</tr>
</thead>
<tbody>
<tr>
<td>"20/02/2017 15:00:00"</td>
<td>0</td>
<td>80</td>
<td>70</td>
<td>37</td>
</tr>
</tbody>
</table>
</div>
<p>Ideally, I'd want to get this result directly from <code>pd.read_json(loader, orient="records")</code></p>
| <python><json><pandas><pymongo> | 2023-10-09 06:57:57 | 1 | 316 | Pollastre |
77,256,712 | 7,585,973 | Pandas dataframe has 0 rows after `pandas.read_json()` even though the JSON has values | <p>I have a JSON file that consists of one object with 36 keys, each key having a single value (not a nested object):</p>
<pre><code>{
...
"created_at": "Sat Apr 14 11:15:29 +0000 2012",
"description": "Pemerhati sospol hukum dan ekonomi",
"is_translator": false,
"can_media_tag": true,
"pinned_tweet_ids_str": []
...
}
</code></pre>
<p>I tried to read it with pandas using <code>pd.read_json('...')</code>, but the resulting dataframe has <code>0 rows × 36 columns</code> instead of <code>1 row × 36 columns</code>.</p>
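A hedged explanation sketch: with a single flat object, <code>pd.read_json</code> parses the keys column-wise, and a value like the empty list likely leaves it with an empty row index. Treating the object as one record with <code>pd.json_normalize</code> is a possible workaround; the string below is a trimmed stand-in for the actual file:

```python
import json
import pandas as pd

# trimmed stand-in for the 36-key object from the question
raw = """{
    "created_at": "Sat Apr 14 11:15:29 +0000 2012",
    "description": "Pemerhati sospol hukum dan ekonomi",
    "is_translator": false,
    "can_media_tag": true,
    "pinned_tweet_ids_str": []
}"""

# treat the whole object as a single record rather than column-oriented data
df = pd.json_normalize(json.loads(raw))
print(df.shape)  # (1, 5)
```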
| <python><json><pandas> | 2023-10-09 06:40:14 | 1 | 7,445 | Nabih Bawazir |
77,256,626 | 2,170,269 | Type-annotating a curry function | <p>I'm having a problem annotating a curry function correctly. I use <code>Concatenate</code> and <code>ParamSpec</code> to catch the first argument to extract, and then I return a function from that. However, <code>mypy</code> complains when I try to call the function. A minimal example looks like this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, ParamSpec, Concatenate
from typing import Callable as Fn, reveal_type

P = ParamSpec("P")
R = TypeVar("R")
T = TypeVar("T")

def curry(f: Fn[Concatenate[T, P], R]) -> Fn[[T], Fn[P, R]]:
    """An attempt at currying."""
    def outer(x: T) -> Fn[P, R]:
        def inner(*args: P.args, **kwargs: P.kwargs) -> R:
            return f(x, *args, **kwargs)
        return inner
    return outer

@curry
def f(x: T, y: int) -> T:
    """Test function."""
    return x

def g(x: T, /) -> Fn[[int], T]:
    """Test function."""
    return lambda _: x

reveal_type(f)
reveal_type(g)
reveal_type(f(1))  # <- this fails in mypy
reveal_type(g(1))
</code></pre>
<p>I get the impression that <code>mypy</code> binds the type variable too early here, so it cannot be bound to the call later, but that is a bit of a wild guess. The playground for <code>mypy</code> is <a href="https://mypy-play.net/?mypy=latest&python=3.11&gist=22c1e6390ee923bad7f99b34a04768ab" rel="nofollow noreferrer">here</a>. In <code>pyright</code> the code is checked and understood as I would expect, <a href="https://pyright-playground.decorator-factory.su/?gzip=H4sIAE2bI2UC_42RwWqEMBCG73mKwVOUdEuvQkvLQs9iQy8iS9ZNRKpRYmz17TuJuqvbSwMyyZ9_vpmMyrQN2KmrdAlV07XGAp86-SkMg0QY0Xx0smBwbHUhrNT4EfU35SjqWpxrCaKHd83AyG8p6hOaJCEJPN9QNEiCkKQoLWVokKLAtwJHgZCLVFAMxkxUxQjNNi1kHJvLGaR5CA8v7jLjeMSYODGPCeAKguBNg7BWNp3FONOw6wPeEG9xNdrBSkPHGPgK85CZsZoqrdEUCVP2MSQHFxlE0dfPqsw7T0hvqW4ZaQejQdERM_aJISF3Ll9nVhfFt4fjePXd-6ko3y2DKUa_9TX59clc9hbUoAtbtdq_dAMbl7mWC-HxOj8E4QR5_j9OLZrzRcApdsTNz6Yq3B3L_VHRp_DO4JVf4E6QtYUCAAA%3D" rel="nofollow noreferrer">playground here</a>, but unfortunately I need to sneak the code by <code>mypy</code> for the QA setup I have in the project I'm working on.</p>
<p>Is there any way I can change the type annotation so <code>mypy</code> will accept what I am trying to tell it?</p>
| <python><mypy> | 2023-10-09 06:21:28 | 0 | 1,844 | Thomas Mailund |
77,256,543 | 19,506,623 | How to print list in grid format with continuous lines? | <p>I'm looking for a way to print (plot) one or more grids with continuous lines, with the values from an NxN list inside each square. I found something similar in <a href="https://stackoverflow.com/questions/20368413/draw-grid-lines-over-an-image-in-matplotlib">this</a> thread, but in that case they are loading an image.</p>
<p>In my case, if I have an even number (4 in this case) of 2D lists of 5x5, I'd like to print them like the image below (not with "---+---" as in markdown format, but with continuous lines). How can this be done? Thanks in advance.</p>
<pre><code>list1 = [[258,658,937,560,455],[868,108,638,992,392],[649,370,825,773,751],[507,734,894,658,442],[157,831,619,614,707]]
list2 = [[130,637,502,634,901],[232,340,820,609,323],[979,549,551,116,839],[591,471,310,865,872],[969,300,321,113,218]]
list3 = [[109,908,748,165,572],[450,540,261,991,231],[327,379,965,566,932],[280,950,862,159,710],[137,450,158,441,139]]
list4 = [[987,470,155,487,851],[816,750,636,696,699],[826,264,654,114,728],[826,847,197,539,245],[915,269,821,755,552]]
</code></pre>
<p><a href="https://i.sstatic.net/z0nkv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z0nkv.png" alt="enter image description here" /></a></p>
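If an actual image is not strictly required, one text-only interpretation sketch (an assumption about what counts as "continuous lines") draws the grid with Unicode box-drawing characters; for a true plot, matplotlib grid lines as in the linked thread would be the route. A 3×3 excerpt is used for brevity:

```python
def render_grid(m):
    """Render a 2-D list as a grid with continuous (box-drawing) lines."""
    w = max(len(str(v)) for row in m for v in row)   # widest cell
    cell = '─' * (w + 2)
    mid = '├' + '┼'.join(cell for _ in m[0]) + '┤'
    lines = ['┌' + '┬'.join(cell for _ in m[0]) + '┐']
    for i, row in enumerate(m):
        lines.append('│' + '│'.join(f' {v:>{w}} ' for v in row) + '│')
        lines.append(mid if i < len(m) - 1 else
                     '└' + '┴'.join(cell for _ in m[0]) + '┘')
    return '\n'.join(lines)

list1 = [[258, 658, 937], [868, 108, 638], [649, 370, 825]]
print(render_grid(list1))
```

Placing several grids side by side would then just be a matter of joining the rendered lines of each grid row-wise.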
| <python><grid> | 2023-10-09 06:03:34 | 0 | 737 | Rasec Malkic |
77,256,470 | 211,858 | How to use aws encryption sdk against localstack kms? | <p>I would like to use the python aws encryption sdk to encrypt and decrypt text. I would like to test my code locally against an instance of kms provided by localstack. I have <code>localstack</code> running in a docker image. Below is some sample code. How does one point the <code>EncryptionSDKClient</code> to the kms instance at <code>localhost:4566</code>? Thank you.</p>
<pre><code>import aws_encryption_sdk
import boto3

kms = boto3.client("kms", endpoint_url="http://localhost:4566")
kms.list_keys()

create_key_response = kms.create_key()
key_id = create_key_response["KeyMetadata"]["KeyId"]

client = aws_encryption_sdk.EncryptionSDKClient()
kms_key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
    key_ids=[key_id],
)

my_plaintext = "password123"

my_ciphertext, encryptor_header = client.encrypt(
    source=my_plaintext, key_provider=kms_key_provider
)
decrypted_plaintext, decryptor_header = client.decrypt(
    source=my_ciphertext, key_provider=kms_key_provider
)
</code></pre>
| <python><amazon-web-services><amazon-kms><localstack> | 2023-10-09 05:42:13 | 0 | 1,459 | hwong557 |
77,256,367 | 1,422,058 | How can I calculate overlaps in Pandas dataframe scoped to groups | <p>I want to extract overlapping events in an dataframe which contains a grouping criteria of the events.</p>
<pre><code>df = pd.DataFrame({
    'id': [1, 2, 3, 4, 5, 6, 7, 8],
    'group': ['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b'],
    'event': ['event1', 'event2', 'event3', 'event4', 'event5', 'event6', 'event7', 'event8'],
    'time_start': ['2000-01-01 08:00:00',
                   '2000-01-01 07:30:00',
                   '2000-01-01 11:00:00',
                   '2000-01-01 12:30:00',
                   '2000-01-01 13:00:00',
                   '2000-01-01 08:00:00',
                   '2000-01-01 07:30:00',
                   '2000-01-01 08:30:00'
                   ],
    'time_end': ['2000-01-01 09:00:00',
                 '2000-01-01 10:30:00',
                 '2000-01-01 12:00:00',
                 '2000-01-01 13:30:00',
                 '2000-01-01 14:00:00',
                 '2000-01-01 09:00:00',
                 '2000-01-01 09:30:00',
                 '2000-01-01 10:00:00'
                 ]
})
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">id</th>
<th>group</th>
<th>event</th>
<th style="text-align: right;">time_start</th>
<th style="text-align: right;">time_end</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">1</td>
<td>a</td>
<td>event1</td>
<td style="text-align: right;">2000-01-01 08:00:00</td>
<td style="text-align: right;">2000-01-01 09:00:00</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td>a</td>
<td>event2</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:30:00</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td>a</td>
<td>event3</td>
<td style="text-align: right;">2000-01-01 11:00:00</td>
<td style="text-align: right;">2000-01-01 12:00:00</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td>a</td>
<td>event4</td>
<td style="text-align: right;">2000-01-01 12:30:00</td>
<td style="text-align: right;">2000-01-01 13:30:00</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td>a</td>
<td>event5</td>
<td style="text-align: right;">2000-01-01 13:00:00</td>
<td style="text-align: right;">2000-01-01 14:00:00</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td>b</td>
<td>event6</td>
<td style="text-align: right;">2000-01-01 08:00:00</td>
<td style="text-align: right;">2000-01-01 09:00:00</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td>b</td>
<td>event7</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 09:30:00</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td>b</td>
<td>event8</td>
<td style="text-align: right;">2000-01-01 08:30:00</td>
<td style="text-align: right;">2000-01-01 10:00:00</td>
</tr>
</tbody>
</table>
</div>
<p>event1 + event2 and event4 + event5 are overlapping in group 'a', and event6, event7, event8 are overlapping in group 'b'.
The events of group 'a' do not overlap with events of group 'b'.</p>
<p>The result which I try to achieve is the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">id</th>
<th>group</th>
<th>event</th>
<th style="text-align: right;">time_start</th>
<th style="text-align: right;">time_end</th>
<th style="text-align: right;">overlap</th>
<th style="text-align: right;">time_start_overlap</th>
<th style="text-align: right;">time_end_overlap</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">1</td>
<td>a</td>
<td>event1</td>
<td style="text-align: right;">2000-01-01 08:00:00</td>
<td style="text-align: right;">2000-01-01 09:00:00</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:30:00</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td>a</td>
<td>event2</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:30:00</td>
<td style="text-align: right;">10</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:30:00</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td>a</td>
<td>event3</td>
<td style="text-align: right;">2000-01-01 11:00:00</td>
<td style="text-align: right;">2000-01-01 12:00:00</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">2000-01-01 11:00:00</td>
<td style="text-align: right;">2000-01-01 12:00:00</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td>a</td>
<td>event4</td>
<td style="text-align: right;">2000-01-01 12:30:00</td>
<td style="text-align: right;">2000-01-01 13:30:00</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">2000-01-01 12:30:00</td>
<td style="text-align: right;">2000-01-01 14:00:00</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td>a</td>
<td>event5</td>
<td style="text-align: right;">2000-01-01 13:00:00</td>
<td style="text-align: right;">2000-01-01 14:00:00</td>
<td style="text-align: right;">11</td>
<td style="text-align: right;">2000-01-01 12:30:00</td>
<td style="text-align: right;">2000-01-01 14:00:00</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td>b</td>
<td>event6</td>
<td style="text-align: right;">2000-01-01 08:00:00</td>
<td style="text-align: right;">2000-01-01 09:00:00</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:00:00</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td>b</td>
<td>event7</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 09:30:00</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:00:00</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td>b</td>
<td>event8</td>
<td style="text-align: right;">2000-01-01 08:30:00</td>
<td style="text-align: right;">2000-01-01 10:00:00</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:00:00</td>
</tr>
</tbody>
</table>
</div>
<p>The column overlap 'marks' the rows which belong to the same overlap, and the columns time_start_overlap and time_end_overlap is the period of that overlap, which includes the period of the event.</p>
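A sketch of one group-aware alternative, re-creating the question's frame in compact form: within each group, sort by start time and start a new overlap block whenever an event begins after the running maximum of the earlier end times. The block labels are arbitrary integers, unlike the specific ids shown in the table above:

```python
import pandas as pd

df = pd.DataFrame({
    'id': range(1, 9),
    'group': list('aaaaabbb'),
    'event': [f'event{i}' for i in range(1, 9)],
    'time_start': ['08:00', '07:30', '11:00', '12:30', '13:00', '08:00', '07:30', '08:30'],
    'time_end':   ['09:00', '10:30', '12:00', '13:30', '14:00', '09:00', '09:30', '10:00'],
})
df['time_start'] = pd.to_datetime('2000-01-01 ' + df['time_start'])
df['time_end'] = pd.to_datetime('2000-01-01 ' + df['time_end'])

df = df.sort_values(['group', 'time_start'])
# latest end time seen so far within each group, excluding the current row
prev_end = (df.groupby('group')['time_end'].cummax()
              .groupby(df['group']).shift())
# a new block starts at each group's first row, or when the event
# begins after everything before it in the group has ended
new_block = prev_end.isna() | (df['time_start'] > prev_end)
df['overlap'] = new_block.cumsum()
df['time_start_overlap'] = df.groupby('overlap')['time_start'].transform('min')
df['time_end_overlap'] = df.groupby('overlap')['time_end'].transform('max')
```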
<p>I can get it working when there is only one group (e.g. name = 'a'), but when multiple groups are involved I struggle. Here is the code I was able to assemble:</p>
<pre><code>iix = pd.IntervalIndex.from_arrays(df['time_start'], df['time_end'], closed='both')
overlaps = [iix.overlaps(x) for x in iix if iix.overlaps(x).sum() > 1]

df['overlap'] = df.index
i = len(df) + 1
for overlap in overlaps:
    df['overlap'] = df['overlap'].where(~overlap, other=i, inplace=False)
    i += 1

g = df.groupby(['group', 'overlap']).agg(
    time_start_overlap = ('time_start', min),
    time_end_overlap = ('time_end', max)
)
df = pd.merge(df, g, on=['group', 'overlap'])
</code></pre>
<p>This leads to the following result. It can be seen that overlap 15 marks events from both group a and group b, whereas I want to treat them as separate groups with separate overlaps.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">id</th>
<th>group</th>
<th>event</th>
<th style="text-align: right;">time_start</th>
<th style="text-align: right;">time_end</th>
<th style="text-align: right;">overlap</th>
<th style="text-align: right;">time_start_overlap</th>
<th style="text-align: right;">time_end_overlap</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">1</td>
<td>a</td>
<td>event1</td>
<td style="text-align: right;">2000-01-01 08:00:00</td>
<td style="text-align: right;">2000-01-01 09:00:00</td>
<td style="text-align: right;">15</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:30:00</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td>a</td>
<td>event2</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:30:00</td>
<td style="text-align: right;">15</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:30:00</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td>a</td>
<td>event3</td>
<td style="text-align: right;">2000-01-01 11:00:00</td>
<td style="text-align: right;">2000-01-01 12:00:00</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">2000-01-01 11:00:00</td>
<td style="text-align: right;">2000-01-01 12:00:00</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td>a</td>
<td>event4</td>
<td style="text-align: right;">2000-01-01 12:30:00</td>
<td style="text-align: right;">2000-01-01 13:30:00</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">2000-01-01 12:30:00</td>
<td style="text-align: right;">2000-01-01 14:00:00</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td>a</td>
<td>event5</td>
<td style="text-align: right;">2000-01-01 13:00:00</td>
<td style="text-align: right;">2000-01-01 14:00:00</td>
<td style="text-align: right;">12</td>
<td style="text-align: right;">2000-01-01 12:30:00</td>
<td style="text-align: right;">2000-01-01 14:00:00</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td>b</td>
<td>event6</td>
<td style="text-align: right;">2000-01-01 08:00:00</td>
<td style="text-align: right;">2000-01-01 09:00:00</td>
<td style="text-align: right;">15</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:00:00</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td>b</td>
<td>event7</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 09:30:00</td>
<td style="text-align: right;">15</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:00:00</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td>b</td>
<td>event8</td>
<td style="text-align: right;">2000-01-01 08:30:00</td>
<td style="text-align: right;">2000-01-01 10:00:00</td>
<td style="text-align: right;">15</td>
<td style="text-align: right;">2000-01-01 07:30:00</td>
<td style="text-align: right;">2000-01-01 10:00:00</td>
</tr>
</tbody>
</table>
</div> | <python><pandas> | 2023-10-09 05:06:42 | 1 | 1,029 | Joysn |
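A per-group alternative (a hedged sketch with hypothetical data mirroring the question's columns, not the asker's exact frame): instead of building one `IntervalIndex` over the whole frame, sort within each group and start a new overlap cluster whenever an event begins after every earlier event in its group has already ended.

```python
import pandas as pd

# hypothetical events mirroring the question's columns
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b"],
    "time_start": pd.to_datetime(["2000-01-01 08:00", "2000-01-01 07:30",
                                  "2000-01-01 11:00", "2000-01-01 08:00",
                                  "2000-01-01 07:30"]),
    "time_end": pd.to_datetime(["2000-01-01 09:00", "2000-01-01 10:30",
                                "2000-01-01 12:00", "2000-01-01 09:00",
                                "2000-01-01 09:30"]),
})

df = df.sort_values(["group", "time_start"]).reset_index(drop=True)

# latest end time seen so far among *earlier* events of the same group
prev_end = df.groupby("group")["time_end"].transform(lambda s: s.cummax().shift())

# a new cluster starts when an event begins after all earlier ones have ended
df["overlap"] = (df["time_start"] > prev_end).astype(int).groupby(df["group"]).cumsum()

bounds = df.groupby(["group", "overlap"]).agg(
    time_start_overlap=("time_start", "min"),
    time_end_overlap=("time_end", "max"),
)
df = df.merge(bounds, on=["group", "overlap"])
print(df)
```

Because the comparison is strict (`>`), events that merely touch at an endpoint land in the same cluster, which matches the question's `closed='both'` intervals; clusters are numbered per group, so groups can never share an overlap id.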
77,256,337 | 10,480,181 | Different dataclass object show equal | <p>I have a simple dataclass with 3 member variables. When I create two different objects for the class and check for equality, it always comes out True, even if the values of the member variables are different.</p>
<p><strong>Data Class:</strong></p>
<pre><code>@dataclass(init=False)
class DiagCodes:
    __slots__ = ["indx_nbr", "diag_cd", "diag_typ_cd"]

    def __init__(
        self,
        indx_nbr: Optional[int] = None,
        diag_cd: Optional[str] = None,
        diag_typ_cd: Optional[str] = None,
    ) -> None:
        self.indx_nbr = indx_nbr
        self.diag_cd = diag_cd
        self.diag_typ_cd = diag_typ_cd

if __name__ == "__main__":
    dc = DiagCodes(1)
    dc_empty = DiagCodes()
    print(dc == dc_empty)
</code></pre>
<p>Output:</p>
<blockquote>
<p>True</p>
</blockquote>
<p>Shouldn't the output come out as false with the default implementation of <code>__eq__</code> function of the dataclass?</p>
<p>The python version I am working on is <strong>Python 3.8.10</strong></p>
| <python><python-dataclasses> | 2023-10-09 04:55:05 | 1 | 883 | Vandit Goel |
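What is likely happening (a sketch with a simplified stand-in, not the asker's exact file): `@dataclass` discovers fields through class-level *annotations*, and this class has none — `__slots__` is a plain assignment, not an annotation — so the generated `__eq__` compares empty field tuples and returns True for any two instances of the class.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass(init=False)
class DiagCodes:
    __slots__ = ["indx_nbr", "diag_cd", "diag_typ_cd"]

    def __init__(self, indx_nbr=None, diag_cd=None, diag_typ_cd=None):
        self.indx_nbr = indx_nbr
        self.diag_cd = diag_cd
        self.diag_typ_cd = diag_typ_cd

print(fields(DiagCodes))            # () -- no annotated fields were found
print(DiagCodes(1) == DiagCodes())  # True: the generated __eq__ compares () == ()

# annotating the attributes gives the dataclass real fields to compare
@dataclass
class DiagCodesFixed:
    indx_nbr: Optional[int] = None
    diag_cd: Optional[str] = None
    diag_typ_cd: Optional[str] = None

print(DiagCodesFixed(1) == DiagCodesFixed())  # False
```

On Python 3.10+ `@dataclass(slots=True)` can generate the `__slots__` from the annotations, but on the asker's 3.8 the annotations and the slots list have to be kept in sync by hand.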
77,256,079 | 3,492,125 | XSD unit test in python / sql | <p>I'm working on a data feed for an external partner. The partner defined a huge XSD schema for the XML data to be fed to them. The concept is that they'll send me a user id, and I'll send them an XML file containing data that's relevant to the user. The XML file may contain various sub-components, and each component may nest several levels deep.</p>
<p>For example, a mockup XML can be something like below:</p>
<pre><code><user id='abc'>
<job>
<title>manager</title>
<company>
<name>pepsi</name>
....
</company>
</job>
<residence>
<address>....</address>
....
</residence>
<education>...</education>
</user>
</code></pre>
<p>For the sub-components, the data is sourced from different systems. I need to gather the data, compose the XML components, and then group them into one final XML document. While I could validate the final XML document against the XSD schema, I wonder if it's possible to validate each sub-component against the corresponding fragment of the XSD schema?</p>
<p>I'm new to XML/XSD, so I might be using improper terminology. Any advice would be appreciated.</p>
| <python><xml><unit-testing><xsd> | 2023-10-09 03:07:51 | 1 | 3,062 | Lee |
77,256,043 | 2,981,639 | Celery circular import | <p>I'm using celery in a streamlit application, and whilst reading the documentation the <a href="https://docs.celeryq.dev/en/stable/getting-started/next-steps.html" rel="nofollow noreferrer">example</a> they present doesn't make sense to me - it seems that it will always result in a circular import error</p>
<p>The project structure is</p>
<pre><code>proj/__init__.py
/celery.py
/tasks.py
</code></pre>
<p>celery.py</p>
<pre class="lang-py prettyprint-override"><code>from celery import Celery

app = Celery('proj',
             broker='amqp://',
             backend='rpc://',
             include=['proj.tasks'])

# Optional configuration, see the application user guide.
app.conf.update(
    result_expires=3600,
)

if __name__ == '__main__':
    app.start()
</code></pre>
<p>How can this work? In the file named <code>celery.py</code> the class <code>Celery</code> is intended to be imported from the <code>celery</code> package - but this is shadowed by the fact that the module itself is named the same as the package (i.e. celery) so instead it tries to import <code>Celery</code> from <code>celery.py</code></p>
<p>I'm wondering if I missing something here - or are the examples simply wrong? Most of the unofficial tutorials avoid this by using names like <code>celeryapp.py</code> or <code>tasks.py</code> so I'm a little confused how why it's like this in the official examples?</p>
| <python><celery> | 2023-10-09 02:53:35 | 0 | 2,963 | David Waterworth |
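The example works because Python 3 imports are absolute by default: inside `proj/celery.py`, `from celery import Celery` resolves against top-level packages (the installed `celery` distribution), not against the module's own name. A runnable demonstration of the same situation, using a throwaway package whose `json.py` module imports the stdlib `json`:

```python
import importlib
import os
import sys
import tempfile

# build a throwaway package whose json.py imports the stdlib json,
# mirroring how proj/celery.py can import the installed celery package
root = tempfile.mkdtemp()
pkg = os.path.join(root, "proj")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "json.py"), "w") as f:
    f.write("import json\nresolved = json.__name__\n")

sys.path.insert(0, root)
mod = importlib.import_module("proj.json")
print(mod.__name__, mod.resolved)   # proj.json json
```

The shadowing the question fears only bites when the module's own directory lands on `sys.path` — e.g. running `python proj/celery.py` directly puts `proj/` first on the path — which is presumably why the examples launch it as `python -m` or via the `celery -A proj` CLI instead.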
77,256,028 | 754,136 | Multiple defaults in Hydra from command line | <p>I have the following folder structure</p>
<pre><code>configs
|------- default.yaml
|------- setting_A.yaml
|------- setting_B.yaml
|------- a bunch of folders
</code></pre>
<p>My <code>default.yaml</code> is</p>
<pre><code>defaults:
- agent: default
# - setting_A
# - setting_B
other configs from subfolders
</code></pre>
<p><code>setting_A</code> and <code>setting_B</code> define parameters from different configs file in the subfolders. For instance size of network, timeout, etc... For example, in <code>setting_A</code> I am doing something simple so I need small networks, short timeout, etc...
I can select which set of defaults to use by uncommenting either <code>setting_A</code> or <code>setting_B</code> in <code>default.yaml</code>. But this is very annoying. Is there a way to pass it via command line?</p>
<p>Something like <code>python main.py +defaults=setting_A</code>.</p>
| <python><fb-hydra> | 2023-10-09 02:47:58 | 1 | 5,474 | Simon |
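A hedged sketch of one way Hydra supports this (file names illustrative): defaults become selectable from the command line when they live in a config group, so moving the two bundles into a `configs/setting/` subfolder avoids editing `default.yaml` by hand:

```yaml
# configs/default.yaml
defaults:
  - agent: default
  - setting: setting_A    # picks configs/setting/setting_A.yaml by default
```

With that layout, `python main.py setting=setting_B` switches bundles from the CLI; if no bundle should load unless asked for, an entry like `- setting: null` (overridden the same way) is a common pattern.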
77,255,766 | 482,439 | How to pass a list of dates from Django ORM to a javascript date list to use as labels in chart.js | <p>I'm trying to plot some charts comparing the close price of pairs of stocks in a timeline.</p>
<p>My Django view.py function looks like this:</p>
<pre><code>
# Access stock pairs for the user settings
stock_pairs = StockPair.objects.filter(setting=setting)

# Create a context dictionary including, for each stock_pair in stock_pairs, the close
# prices of the stocks in the pair, and pass the dictionary to the template
context = {}
stocks_and_pair_data_list = []
for stock_pair in stock_pairs:
    stocks_and_pair_data = {}
    stocks_and_pair_data['stock1_name'] = stock_pair.stock1.name
    stocks_and_pair_data['stock2_name'] = stock_pair.stock2.name
    stocks_and_pair_data['dates'] = [dt for dt in stock_pair.stock_pair_data_set.values_list('date', flat=True)]
    stocks_and_pair_data['stock1_close_prices'] = [float(close) for close in stock_pair.stock1.stock_data_set.values_list('close', flat=True)]
    stocks_and_pair_data['stock2_close_prices'] = [float(close) for close in stock_pair.stock2.stock_data_set.values_list('close', flat=True)]
    stocks_and_pair_data_list.append(stocks_and_pair_data)

context['stocks_and_pair_data_list'] = stocks_and_pair_data_list
return render(request, 'my_page.html', context)
</code></pre>
<p>and my_page.html has this script section:</p>
<pre><code>
<!-- For each stocks_and_pair_data in stocks_and_pair_data_list create, using chart.js, a chart of the close prices of stock1 and stock2 -->
{% for stocks_and_pair_data in stocks_and_pair_data_list %}
<script>
    const ctxClose = document.getElementById('ChartOfClosePrices{{ forloop.counter }}');
    new Chart(ctxClose, {
        type: 'line',
        data: {
            labels: '{{stocks_and_pair_data.dates}}',
            datasets: [{
                label: '{{ stocks_and_pair_data.stock1_name }}',
                data: '{{stocks_and_pair_data.stock1_close_prices}}'
            }, {
                label: '{{ stocks_and_pair_data.stock2_name }}',
                data: '{{stocks_and_pair_data.stock2_close_prices}}'
            }]
        }
    });
</script>
{% endfor %}
</code></pre>
<p>I'm having 2 problems. The first problem is that I get the following error on the browser for each stocks_and_pair_data:</p>
<p><code>SyntaxError: Identifier 'ctxClose' has already been declared</code></p>
<p>The second problem is that the x axis doesn't show the dates in a proper format. Instead, the axis shows values that don't mean anything to me, like</p>
<blockquote>
<p>[ a e i e d t ( 0 2 0 ) a e i e d t ( 0 2 0 ) (and so on...)</p>
</blockquote>
<p>So how do I name ctxClose dynamically so the declarations don't conflict with each other, and what am I doing wrong when passing the data from the view function to JavaScript, so that the date axis is shown properly?</p>
<p>Thanks in advance for any suggestions that may help.</p>
| <javascript><python><django> | 2023-10-09 00:21:57 | 1 | 625 | AntonioR |
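On the serialization side, one hedged fix for the garbled labels: rendering a Python list with `{{ ... }}` inserts its `repr` as one quoted string, and Chart.js then iterates that *string* character by character — which matches the `[ a e i e d t ( 0 2 0 )` symptom. Serializing to JSON in the view and parsing it in JS avoids that; a minimal sketch of the Python half (variable names illustrative):

```python
import json
from datetime import date

dates = [date(2023, 1, 2), date(2023, 1, 3)]
closes = [101.5, 103.25]

# ISO strings survive the template round-trip and work as Chart.js labels
chart_payload = json.dumps({
    "dates": [d.isoformat() for d in dates],
    "closes": closes,
})
print(chart_payload)
```

The payload can then be emitted with Django's `json_script` template filter (or `JSON.parse('{{ chart_payload|escapejs }}')`), and the `const ctxClose` redeclaration goes away if each per-loop `<script>` body is wrapped in its own block scope (`{ ... }`) or uses unique names.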
77,255,760 | 22,407,544 | Why is my Django view not automatically downloading a file | <p>My view transcribes a file and outputs the SRT transcript. Then it locates the transcript and is supposed to auto-download it, but nothing happens after transcription completes (the file was submitted on a previous page that uses the transcribeSubmit view; the view that handles and returns the file is the initiate_transcription view). Here is my views.py:</p>
<pre><code>@csrf_protect
def transcribeSubmit(request):
    if request.method == 'POST':
        form = UploadFileForm(request.POST, request.FILES)
        if form.is_valid():
            uploaded_file = request.FILES['file']
            fs = FileSystemStorage()
            filename = fs.save(uploaded_file.name, uploaded_file)
            request.session['uploaded_file_name'] = filename
            request.session['uploaded_file_path'] = fs.path(filename)
            #transcribe_file(uploaded_file)
            #return redirect(reverse('proofreadFile'))
            return render(request, 'transcribe/transcribe-complete.html', {"form": form})
    else:
        form = UploadFileForm()
    return render(request, 'transcribe/transcribe.html', {"form": form})

@csrf_protect
def initiate_transcription(request):
    if request.method == 'POST':
        # get the file's name and path from the session
        file_name = request.session.get('uploaded_file_name')
        file_path = request.session.get('uploaded_file_path')
        if file_name and file_path:
            with open(file_path, 'rb') as f:
                path_string = f.name
            transcribe_file(path_string)
            file_extension = ('.' + (str(file_name).split('.')[-1]))
            transcript_name = file_name.replace(file_extension, '.srt')
            transcript_path = file_path.replace((str(file_path).split('\\')[-1]), transcript_name)
            file_location = transcript_path
            with open(file_location, 'r') as f:
                file_data = f.read()
            response = HttpResponse(file_data, content_type='text/plain')
            response['Content-Disposition'] = 'attachment; filename="' + transcript_name + '"'
            return response
    else:
        #complete
        return render(request, 'transcribe/transcribe-complete.html', {"form": form})

def transcribe_file(path):
    ...  #transcription logic
</code></pre>
<p>JS:</p>
<pre><code>const form = document.querySelector('form');

form.addEventListener('submit', function (event) {
    event.preventDefault();
    const formData = new FormData(form);
    const xhr = new XMLHttpRequest();
    xhr.open('POST', form.getAttribute('action'), true);
    xhr.onreadystatechange = function () {
        console.log("ready state: ", xhr.readyState);
        console.log("status: ", xhr.status);
    };
    xhr.responseType = 'blob';
    xhr.send(formData);
});
</code></pre>
<p>HTML:</p>
<pre><code><form id="initiate-transcription-form" method="post" action="{% url 'initiate_transcription' %}" enctype="multipart/form-data">
{% csrf_token %}
<div>
<label for="output-lang--select-summary"></label>
<select class="output-lang-select" id="output-lang--select-summary" name="audio_language">
<option value="detect">Detect language</option>
<option value="zh">Chinese</option>
<option value="en">English</option>
<option value="es">Spanish</option>
<option value="de">German</option>
<option value="jp">Japanese</option>
<option value="ko">Korean</option>
<option value="ru">Russian</option>
<option value="pt">Portugese</option>
<!-- add more language options as needed -->
</select>
</div>
<button id="transcribe-button" type="submit">Click to Transcribe
<svg class="download-transcript" width="18" height="18" viewBox="0 0 24 24">
<path d="M19,9h-4V3H9v6H5l7,7L19,9z M5,18v2h14v-2H5z" fill="white"/>
</svg>
</button>
</form>
</code></pre>
<p>The only response I get is the status: 200 console log in JS.</p>
| <javascript><python><html><django><file> | 2023-10-09 00:17:47 | 0 | 359 | tthheemmaannii |
77,255,758 | 20,235,789 | How can I mock my environment variables for my pytest? | <p>I've looked at some examples to mock environment variables.
<a href="https://adamj.eu/tech/2020/10/13/how-to-mock-environment-variables-with-pytest/" rel="noreferrer">Here</a></p>
<p>In my case, I have to mock a variable in my <code>config.py</code> file that is setup as follows:</p>
<pre><code>class ServiceConfig(BaseSettings):
    ...
    API_AUDIENCE: str = os.environ.get("API_AUDIENCE")
    ...
    ...
</code></pre>
<p>I have that called here:</p>
<pre><code>class Config(BaseSettings):
    auth: ClassVar[ServiceConfig] = ServiceConfig()
</code></pre>
<p>I've attempted to patch that value as follows:</p>
<pre><code>@mock.patch.dict(os.environ, {"API_AUDIENCE": "https://mock.com"})
class Test(TestCase):
</code></pre>
<p>Another way I've tried is:</p>
<pre><code>@patch('config.Config.API_AUDIENCE', "https://mock.com")
def ....:
</code></pre>
<p>The setup is incorrect both ways, getting this error:</p>
<pre><code>E pydantic_core._pydantic_core.ValidationError: 1 validation error for
ServiceConfig
E API_AUDIENCE
</code></pre>
<p>What is the correct way to setup mocking an environment variable?</p>
| <python><pytest><pydantic><pytest-mock-patch> | 2023-10-09 00:15:49 | 4 | 441 | GustaMan9000 |
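One thing to check (a sketch with a simplified stand-in, not pydantic itself): a class-level default like `os.environ.get("API_AUDIENCE")` is evaluated when the class body runs — i.e. at import time — so a test-time patch only helps if the config object is created (or its module imported) while the patch is active:

```python
import os
from unittest import mock

class ServiceConfig:
    # simplified stand-in: reads the variable when instantiated
    def __init__(self):
        self.api_audience = os.environ.get("API_AUDIENCE")

with mock.patch.dict(os.environ, {"API_AUDIENCE": "https://mock.com"}):
    cfg = ServiceConfig()       # created inside the patch: sees the mock
print(cfg.api_audience)         # https://mock.com

outside = ServiceConfig()       # created after the patch is undone
print(outside.api_audience)     # None, unless API_AUDIENCE is really set
```

With the question's layout, the `Config` class instantiates `ServiceConfig()` at import time, so either the import has to be deferred until after the patch, or the attribute patched directly on the already-built instance.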
77,255,588 | 897,272 | Python tab completion without defining arguments directly in class as required by cmd | <p>We have a command line interface for our program written in python. We want to add tab completion to this interface. the cmd module seems the obvious choice, except for one problem. It requires all the possible arguments to be defined right in the class in the form of methods.</p>
<p>We have a number of commands in our CLI, and modules can add new commands specific to them through a sort of plug-in interface. We do this by having a command class for each potential argument that defines the command, its description, behaviors, etc. This has proven cleaner and more flexible for us, and I don't want to change it by trying to throw dozens of methods into one class; especially since I won't know all the possible commands that can be run, as some depend on third-party modules that will be added later.</p>
<p>So is there a way to get pretty tab completion without having to predefine every possible command ahead of time in the cmd class directly?</p>
<p>Ideally we would also like to allow tab completion of some arguments, not just commands. I'd also love to be able to capture a few hotkeys, but that's not important enough to reinvent the wheel with everything a terminal does for us if I can't find a third-party tool that makes handling hotkeys easier. This has to run on both Linux and Windows, so I don't want any OS-dependent terminal hacking either.</p>
<p>Can I get the cmd module to work without predefining commands directly as methods of a class? Failing that, are there any other third-party Python tools which might support more complex command-line behavior than the cmd module allows?</p>
| <python><command-line-interface><python-3.9> | 2023-10-08 22:48:28 | 1 | 6,521 | dsollen |
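`cmd` looks its commands up with `dir()` at completion time, so `do_*` methods attached to the class at runtime — e.g. from a plug-in registry — still get tab completion; nothing forces them to be written out by hand. A hedged sketch (registry names are illustrative):

```python
import cmd

class PluginShell(cmd.Cmd):
    prompt = "> "

def register_command(name, func, help_text=""):
    """Attach a plug-in-supplied command to the shell class at runtime."""
    def do_command(self, arg):
        return func(arg)
    do_command.__doc__ = help_text
    setattr(PluginShell, f"do_{name}", do_command)

# hypothetical plug-ins registering themselves
register_command("deploy", lambda arg: print("deploying", arg))
register_command("destroy", lambda arg: print("destroying", arg))

shell = PluginShell()
print(shell.completenames("de"))   # ['deploy', 'destroy']
```

Argument completion works the same way: `cmd` looks up a `complete_<name>` method by `getattr`, so those can be `setattr`'d alongside each `do_<name>`.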
77,255,552 | 15,781,591 | How to make bar chart that shows percent change with fainter color on same bar in python | <p>I have the following dataframe that shows dollar cost associated with each color hat:</p>
<pre><code> Hat Dollars
----------------------
0 Blue 33
1 Green 42
2 Red 54
</code></pre>
<p>And so the blue hat costs $33, the green hat costs $42, and the red hat costs $54.</p>
<p>I can plot this as a bar chart using this simple code with <code>seaborn</code>:</p>
<pre><code>dataplot = sns.barplot(x = 'Hat',
                       y = 'Dollars',
                       data = df, hue = 'Hat')
plt.gca().set_xticklabels([])
plt.ylabel("Dollars", fontsize=15)
plt.xlabel("Hat", fontsize=0)
plt.legend(title="Hat Costs")
</code></pre>
<p>Now, let's say because of inflation the price of hats goes up the next year, and so the next year's hat costs are represented in this dataframe:</p>
<pre><code> Hat Dollar
---------------------
0 Blue 33
1 Green 42
2 Red 54
</code></pre>
<p>What I want to do is take my first bar chart showing hat costs, and then insert the new year's hat costs into it so that there are still only three bars, one for each color. The new amount should be shown as an extension of the original bar, going up or down, with the new area drawn in a faded version of the same color or, if that is not doable, filled with faint diagonal lines. How can I modify the code for my original bar chart so that the new values are displayed on the original bars?</p>
| <python><pandas><matplotlib><seaborn> | 2023-10-08 22:29:25 | 1 | 641 | LostinSpatialAnalysis |
77,255,536 | 8,782,672 | How to execute an mpiexec command from ansible (ad-doc or playbook) | <p>How to run <code>mpiexec -host node1,node2,node3,node4 python3 hello_mpi.py</code> from ansible either using script or shell module where hello_mpi.py file resides on the ansible executor (local machine not the server where it is executed). I know you can use script module for a script on your local machine that needs to be executed on the remote server but I don't know how to do it for python scripts that use <code>mpiexec</code>.</p>
<p>UPDATE: <code>ansible master -m script -a "mpi_run.py chdir=/home/pi/cloud executable='mpiexec -host master,node1,node2,node3 python3'"</code> doesn't throw any errors, but it freezes and then drops the SSH connection with a host-unreachable message.</p>
<p><code>/home/pi/cloud</code> is the NFS directory shared between all the nodes. The script <code>mpi_run.py</code> has to be run from that directory so that other nodes can access it.</p>
| <python><shell><ansible><mpi><mpi4py> | 2023-10-08 22:24:04 | 0 | 574 | Coddy |
77,255,463 | 8,081,835 | PyCharm Not Launching After macOS Update to Sonoma 14.0 | <p>I am encountering issues with PyCharm following the latest macOS Sonoma 14.0 update. Here are the problems I'm facing:</p>
<ul>
<li>PyCharm is not appearing in Spotlight Search.</li>
<li>When I attempt to run PyCharm from the JetBrains Toolbox, it doesn't open.</li>
<li>When running from iTerm using <code>pycharm .</code> it returns the following error:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>The application /Users/user_name/Applications/PyCharm Professional Edition.app/Contents/MacOS/pycharm
cannot be opened for an unexpected reason, error=Error Domain=NSCocoaErrorDomain Code=260
"The file “pycharm” couldn’t be opened because there is no such file."
UserInfo={NSURL=file:///Users/user_name/Applications/PyCharm%20Professional%20Edition.app/Contents/MacOS/pycharm,
NSFilePath=/Users/user_name/Applications/PyCharm Professional Edition.app/Contents/MacOS/pycharm,
NSUnderlyingError=0x6000029e8780
{Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
</code></pre>
| <python><macos><pycharm> | 2023-10-08 21:50:05 | 1 | 771 | Mateusz Dorobek |
77,255,420 | 22,674,380 | Keras training ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 3 dimensions | <p>I'm trying to train a simple binary classification model using <code>Keras</code> and <code>Tensorflow</code> on a dataset of around 21,000 images. But when I run the training code, it throws this error:</p>
<pre><code>Traceback (most recent call last):
File "train.py", line 83, in <module>
training_images = np.array([i[0] for i in training_data])
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 3 dimensions. The detected shape was (21527, 300, 300) + inhomogeneous part.
</code></pre>
<p>I used this dataset and the training code before, so I'm not sure what is causing the problem now. What is the issue? How do I fix it?</p>
<p>Here is my simple training code that loads the dataset and tries training/fitting the model:</p>
<pre><code>from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import BatchNormalization
from PIL import Image
from random import shuffle, choice
import numpy as np
import os
from keras.callbacks import ModelCheckpoint
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from sklearn.model_selection import train_test_split
from keras import optimizers
IMAGE_SIZE = 300
epochs_num = 50
batch_size = 64
IMAGE_DIRECTORY = './data'
retrain_from_prior_model=True
def label_img(name):
    if name == 'object': return np.array([1, 0])
    elif name == 'none': return np.array([0, 1])

def load_data():
    print("Loading images...")
    train_data = []
    directories = next(os.walk(IMAGE_DIRECTORY))[1]
    for dirname in directories:
        print("Loading {0}".format(dirname))
        file_names = next(os.walk(os.path.join(IMAGE_DIRECTORY, dirname)))[2]
        for i in range(len(file_names)):
            image_name = choice(file_names)
            image_path = os.path.join(IMAGE_DIRECTORY, dirname, image_name)
            label = label_img(dirname)
            if "DS_Store" not in image_path:
                try:
                    img = Image.open(image_path)
                    img = img.resize((IMAGE_SIZE, IMAGE_SIZE), Image.LANCZOS)
                    train_data.append([np.array(img), label])
                except Exception as e:
                    print(f"Error processing image: {image_path}")
                    print(f"Error message: {str(e)}")
                    continue  # Skip this image and continue with the next one
    shuffle(train_data)
    return train_data
def create_model():
    channels = 3
    model = Sequential()
    #change first one to 64
    model.add(Conv2D(32, kernel_size=(5, 5), activation='relu', input_shape=(IMAGE_SIZE, IMAGE_SIZE, channels)))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(256, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(128, activation='relu'))
    model.add(Dense(2, activation='softmax'))
    return model
training_data = load_data()
training_images = np.array([i[0] for i in training_data])
training_labels = np.array([i[1] for i in training_data])
print(str(len(training_images)))
# Split the data
training_images, validation_images, training_labels, validation_labels = train_test_split(training_images, training_labels, test_size=0.2, shuffle= True)
print(str(len(training_images)))
print('creating model')
#========================
if retrain_from_prior_model == False:
    model = create_model()
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    print('training model...model.count_params: ' + str(model.count_params()) + '...model.count_layers: ' + str(len(model.layers)))
else:
    model = load_model("model_0.989.h5")
    model.compile(loss='binary_crossentropy', optimizer=optimizers.Adam(learning_rate=0.0001), metrics=['accuracy'])
filepath="./checkpoints/model_{epoch:03d}_{accuracy:.4f}_{val_accuracy:.4f}_{val_loss:.7f}.h5"
checkpoint = ModelCheckpoint(filepath, monitor=["accuracy"], verbose=1, mode='max', save_weights_only=False)
callbacks_list = [checkpoint]
# if you want data augmentation: rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, brightness_range=[0.9,1.1]
datagen = ImageDataGenerator(zoom_range=0.2, horizontal_flip=True)
datagen.fit(training_images)
train_gen=datagen.flow(training_images, training_labels, batch_size=batch_size)
#validation
val_datagen = ImageDataGenerator(horizontal_flip=True)
val_datagen.fit(training_images)
val_gen=datagen.flow(validation_images, validation_labels, batch_size=batch_size)
model.fit(train_gen, validation_data=val_gen, epochs=epochs_num, verbose=1, callbacks=callbacks_list)
print('Training finished. Model saved.')
</code></pre>
| <python><numpy><tensorflow><keras><deep-learning> | 2023-10-08 21:35:01 | 1 | 5,687 | angel_30 |
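The error message is the clue (a hedged guess): `(21527, 300, 300) + inhomogeneous part` means the images agree on height and width but not on the trailing dimension — a dataset that mixes RGB with grayscale or RGBA files will do exactly this, e.g. if new images were added since the last run. Normalizing every image to three channels before stacking fixes it; with PIL that is `img = img.convert('RGB')` right after `Image.open`. A pure-NumPy sketch of the same idea:

```python
import numpy as np

images = [
    np.zeros((4, 4, 3)),  # RGB
    np.zeros((4, 4)),     # grayscale: no channel axis
    np.zeros((4, 4, 4)),  # RGBA: extra alpha channel
]

def to_rgb(a):
    if a.ndim == 2:                  # promote grayscale to 3 channels
        a = np.stack([a] * 3, axis=-1)
    return a[..., :3]                # drop any alpha channel

batch = np.array([to_rgb(a) for a in images])
print(batch.shape)   # (3, 4, 4, 3)
```

Without the normalization, `np.array(images)` cannot build a rectangular array from the mismatched shapes, which is precisely the `ValueError` the traceback shows.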
77,255,391 | 1,422,096 | Higher-level than socket functions with Python? | <p>I'm doing a client/server communication with Python <code>socket</code>:</p>
<pre><code>for i in range(10):
    connection.send(f"{i} and some other data\n".encode())  # real data is more complex
connection.send(b"OK, finished")
</code></pre>
<p>On the other side, I receive it with:</p>
<pre><code>while True:
    res = clientsocket.recv(16384)
    if not res or res == b"OK, finished":  # doesn't always work
        break
    print(res)
</code></pre>
<p>The problem is that sometimes the received data is <code>"1234 and some other data\nOK, finished"</code> (then the previous condition doesn't work), and sometimes <code>"OK, finished"</code> on its own.</p>
<p>More generally, it is not very handy to have to receive an arbitrary size (here 16 KB), and assemble the potentially multiple parts, and to test if some ending condition is met.</p>
<p><strong>Is there a way to have a higher level access to the <code>socket</code> functions, allowing multi-line data, multi-part binary data to be sent/received with <code>socket</code>?</strong></p>
<p>Note: I want to keep the stateful aspect of a TCP <code>socket</code>, with a handler that lasts for the whole time of the client connection - thus I cannot use a HTTP server (which is stateless).</p>
| <python><sockets><tcp> | 2023-10-08 21:24:37 | 0 | 47,388 | Basj |
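For line-oriented text, the stdlib already offers a higher-level wrapper: `socket.makefile()` turns each end of the connection into a file object, so the receiver can iterate line by line and never sees two messages glued together or one split in half. A sketch over a `socketpair` (which stands in for the question's TCP connection):

```python
import socket

server, client = socket.socketpair()
wf = server.makefile("w", encoding="utf-8", newline="\n")
rf = client.makefile("r", encoding="utf-8", newline="\n")

for i in range(3):
    wf.write(f"{i} and some other data\n")
wf.write("OK, finished\n")   # sentinel on its own line
wf.flush()

received = []
for line in rf:              # file iteration re-frames the stream on newlines
    line = line.rstrip("\n")
    if line == "OK, finished":
        break
    received.append(line)
print(received)
```

For binary multi-part payloads, the usual stateful-TCP pattern is length-prefix framing (send a `struct.pack`ed size, then exactly that many bytes) rather than a sentinel value that might appear in the data.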
77,255,341 | 10,065,556 | Interning objects in shell and .py file | <p>In Python shell if we do:</p>
<pre class="lang-bash prettyprint-override"><code>>>> a = 98765
>>> b = 98765
>>> a is b
False
</code></pre>
<p>However, when we run the following .py file:</p>
<pre class="lang-py prettyprint-override"><code>a = 98765
b = 98765
print(a is b)
</code></pre>
<p>Output</p>
<pre class="lang-bash prettyprint-override"><code>True
</code></pre>
<p>This is understandable because Python seems to be optimising memory by interning immutable objects.</p>
<p>Why is this treated differently in a Python shell?</p>
<p><a href="https://stackoverflow.com/a/38189759/10065556">This answer</a> states</p>
<blockquote>
<p>Generally speaking, every time you define an object in Python, you'll create a new object with a new identity</p>
</blockquote>
<p>which seems to be wrong when we are defining immutable objects in a .py file.</p>
| <python><memory-management> | 2023-10-08 21:05:09 | 0 | 994 | ScriptKiddieOnAComputer |
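A way to observe the difference directly (CPython-specific behavior, so treat it as an implementation detail): the compiler deduplicates equal constants within a single code object, and a .py file is compiled as one unit, while the interactive shell compiles every statement separately:

```python
one_unit = {}
exec(compile("a = 98765\nb = 98765", "<file-like>", "exec"), one_unit)
print(one_unit["a"] is one_unit["b"])   # True: one code object, one constants table

per_line = {}
exec(compile("a = 98765", "<repl-1>", "exec"), per_line)
exec(compile("b = 98765", "<repl-2>", "exec"), per_line)
print(per_line["a"] is per_line["b"])   # False: two code objects, two 98765 objects
```

This is constant folding within a compilation unit, not the small-integer cache: integers in -5..256 would compare identical in both cases.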
77,255,242 | 7,212,809 | How to get number of workers available in ProcessPoolExecutor? | <p>I am running some cpu bound code inside an async function, using <code>ProcessPoolExecutor</code></p>
<pre><code>pool = concurrent.futures.ProcessPoolExecutor(10)
...
loop.run_in_executor(
    pool, cpu_bound_func)
</code></pre>
<p><strong>How do I get the number of workers available to be used?</strong></p>
| <python><python-3.x><proc> | 2023-10-08 20:33:52 | 1 | 7,771 | nz_21 |
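There is no public accessor, but the configured size is kept on the executor as `_max_workers` — a private CPython implementation detail, so hedge accordingly. "Available" workers can then be estimated by tracking unfinished futures yourself:

```python
from concurrent.futures import ProcessPoolExecutor

pool = ProcessPoolExecutor(10)
print(pool._max_workers)        # 10 -- private attribute, may change between versions

futures = []                    # keep references to submitted futures
in_flight = sum(1 for f in futures if not f.done())
print(pool._max_workers - in_flight)   # free slots, assuming one task per worker
pool.shutdown()
```

The same attribute exists on `ThreadPoolExecutor`; since it is private, a more future-proof option is simply to store the number you passed to the constructor.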
77,255,095 | 15,140,144 | How to suppress mypy error: "some/path/to/module" shadows library module "module" | <p>I'm well aware that normally you wouldn't want to suppress this error, but my case is special. I'm "poly-filling" <a href="https://skulpt.org" rel="nofollow noreferrer">skulpt</a> with some modules that it lacks in stdlib ("typing" and "typing_extensions" to be exact). How can I suppress this error? Preferably where:</p>
<ol>
<li>Suppression only applies for specific files.</li>
<li>Mypy doesn't use type-hints/annotations from my local file, but rather from where it would take them if my file didn't exist...</li>
<li>Mypy still runs on that specified file.</li>
</ol>
| <python><mypy><error-suppression><skulpt> | 2023-10-08 19:47:01 | 0 | 316 | oBrstisf8o |
77,254,892 | 4,932,316 | Finding and replacing match for exact abbreviations using Regex in Python | <p>I have a <code>REPLACEMENTS</code> dictionary whose keys are the strings that I want to find <strong>exactly</strong>. Then I want to replace them with their corresponding dictionary values.</p>
<p>For example,</p>
<pre><code>REPLACEMENTS = dict([('max.', ' maximum '),
                     ('inkl.', ' inklusive '),
                     ('z.b.', ' zum beispiel '),
                     ('ggf.', ' gegebenfalls ')])
sample_input_text = "Hallo, ggf ggf. max z.b. alpha z.b beta ca. 25 cm ca inkl. inkl. inkl"
</code></pre>
<p><strong>Expected output</strong></p>
<pre><code>"Hallo, ggf gegebenfalls max zum beispiel alpha z.b beta circa 25 cm ca inklusive inklusive inkl"`
</code></pre>
<p>As you can notice, I do not want to replace words like <code>ggf</code>, <code>ca</code>, and <code>inkl</code> because they don't match exactly with the dictionary keys <code>ggf.</code>, <code>ca.</code>, and <code>inkl.</code> due to the missing dot.</p>
<p><strong>My Attempt:</strong></p>
<p>As you can see below, I am getting matches like <code>'ggf ', 'max '</code> and <code>'z.b '</code> that do not match exactly with the dictionary keys. These partial matches are then replaced by a blank character when I use <code>re.sub</code>.</p>
<pre><code>import re
REPLACEMENTS = dict([('max.', ' maximum '),
                     ('inkl.', ' inklusive '),
                     ('z.b.', ' zum beispiel '),
                     ('ggf.', ' gegebenfalls ')])
sample_input_text = "Hallo, ggf ggf. max z.b. alpha z.b beta ca. 25 cm ca inkl. inkl. inkl"
joined = '|'.join(REPLACEMENTS.keys())
print(re.findall(joined, sample_input_text))
>> ['ggf ', 'ggf.', 'max ', 'z.b.', 'z.b ', 'inkl.', 'inkl.']
pattern = re.compile(joined)
output_text = pattern.sub(lambda m: REPLACEMENTS.get(m.group()), sample_input_text)
print(output_text)
>> 'Hallo, gegebenfalls zum beispiel alpha beta ca. 25 cm ca inklusive inklusive inkl'
</code></pre>
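<p>To make the failure mode concrete: I suspect the unescaped dots in my keys match <em>any</em> character, and that the pattern also matches inside longer tokens. Escaping the keys and anchoring on whitespace behaves the way I want - a sketch, and I'm not sure it covers every case in my real data:</p>

```python
import re

# Sketch: escape the literal dots and require that the match is not
# embedded in a longer token (whitespace lookarounds on both sides).
REPLACEMENTS = {"max.": "maximum", "inkl.": "inklusive",
                "z.b.": "zum beispiel", "ggf.": "gegebenfalls"}

pattern = re.compile(
    r"(?<!\S)(" + "|".join(map(re.escape, REPLACEMENTS)) + r")(?!\S)")
text = "Hallo, ggf ggf. max z.b. alpha z.b beta inkl. inkl"
result = pattern.sub(lambda m: REPLACEMENTS[m.group(1)], text)
print(result)  # Hallo, ggf gegebenfalls max zum beispiel alpha z.b beta inklusive inkl
```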
<p>What would be the regex pattern for this problem?</p>
| <python><python-3.x><regex> | 2023-10-08 18:44:52 | 1 | 39,202 | Sheldore |
77,254,887 | 7,657,180 | Decode Arabic characters cid mapping | <p>I am trying to decode a string of Arabic characters using the following code, but
some characters are not replaced correctly and the spaces are not right at all.</p>
<pre><code>char_code_mapping = {
"(cid:993)": "م",
"(cid:987)": "ك",
"(cid:931)": "ح",
"(cid:991)": "ل",
"(cid:909)": "ا",
"(cid:937)": "د",
"(cid:913)": "ب",
"(cid:971)": "ع",
"(cid:941)": "ر",
"(cid:984)": "ق",
"(cid:955)": "ص",
"(cid:917)": "ت",
"(cid:943)": "ز",
"(cid:997)": "ن",
"(cid:910)": "ـا",
"(cid:1004)": "هـ",
"(cid:1011)": "ـيـ",
"(cid:927)": "جـ",
"(cid:3)": " ",
}
line = "(cid:993)(cid:987)(cid:931)(cid:991)(cid:909)(cid:3)(cid:937)(cid:913)(cid:971)(cid:3)(cid:941)(cid:984)(cid:955)(cid:3)(cid:917)(cid:943)(cid:971)(cid:3)(cid:997)(cid:910)(cid:1004)(cid:1011)(cid:927)"
decoded_line = ""
i = 0
while i < len(line):
if line[i:i+9] in char_code_mapping:
decoded_line += char_code_mapping[line[i:i+9]]
if line[i:i+9] == "(cid:3)":
decoded_line += " "
i += 9
else:
i += 1
reversed_decoded_line = decoded_line[::-1]
text_output_path = "decoded_output.txt"
with open(text_output_path, "w", encoding="utf-8") as text_file:
text_file.write(reversed_decoded_line)
print(f"Reversed decoded Arabic text saved to {text_output_path}")
</code></pre>
<p>The correct output should be <code>جيهان عزت صقر عبد الحكم</code></p>
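<p>While debugging I noticed the <code>(cid:NNN)</code> tokens vary in length (<code>(cid:3)</code> is 7 characters, <code>(cid:1004)</code> is 10), so my fixed 9-character slice skips or misreads some of them. A regex-based tokenizer sketch that avoids the fixed width:</p>

```python
import re

# Sketch: find each "(cid:NNN)" token whole instead of slicing a fixed
# 9 characters, so codes of any length are handled.
def decode(line, mapping):
    tokens = re.findall(r"\(cid:\d+\)", line)
    return "".join(mapping.get(tok, "") for tok in tokens)
```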
| <python> | 2023-10-08 18:43:34 | 1 | 9,608 | YasserKhalil |
77,254,868 | 219,153 | Inconsistent results of NumPy memory allocation | <p>This Python 3.11 script on Ubuntu 22.04 with about 16.4GB memory available (no swap file):</p>
<pre><code>import numpy as np, psutil
print(f'{psutil.virtual_memory().available:,}')
try:
a = np.empty((100_000_000_000), dtype='u1') # variant A
# a = np.empty((30_000_000_000), dtype='u1') # variant B
# a = np.ones((30_000_000_000), dtype='u1') # variant C
print(f'{a.size:,}')
except RuntimeError as e:
print(e)
print('Done')
</code></pre>
<p>produces unexpected inconsistent results.</p>
<p>Variant A produces an exception as expected:</p>
<pre><code>16,436,457,472
...
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 93.1 GiB for an array with shape (100000000000,) and data type uint8
</code></pre>
<p>Variant B unexpectedly allocates an array larger than available memory:</p>
<pre><code>16,335,642,624
30,000,000,000
Done
</code></pre>
<p>Variant C fails silently:</p>
<pre><code>16,337,346,560
</code></pre>
<p>Can someone explain variant B and C behavior?</p>
| <python><arrays><numpy><out-of-memory><allocation> | 2023-10-08 18:38:27 | 0 | 8,585 | Paul Jurczak |
77,254,777 | 13,123,667 | Alternative to .concat() of empty dataframe, now that it is being deprecated? | <p>I have two dataframes that can both be empty, and I want to concat them.</p>
<p>Before I could just do :</p>
<pre><code>output_df= pd.concat([df1, df2])
</code></pre>
<p>But now I run into</p>
<blockquote>
<p>FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.</p>
</blockquote>
<p>An easy fix would be:</p>
<pre><code>if not df1.empty and not df2.empty:
result_df = pd.concat([df1, df2], axis=0)
elif not df1.empty:
result_df = df1.copy()
elif not df2.empty:
result_df = df2.copy()
else:
result_df = pd.DataFrame()
</code></pre>
<p>But that seems pretty ugly. Does anyone have a better solution?</p>
<p>FYI: this appeared after pandas released <a href="https://pandas.pydata.org/docs/whatsnew/v2.1.0.html#deprecations" rel="noreferrer">v2.1.0</a></p>
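<p>For context, the shortest alternative I've come up with so far is filtering out the empty frames before concatenating - a sketch; I'm not sure it addresses the all-NA (as opposed to empty) case the warning also mentions:</p>

```python
import pandas as pd

# Sketch: drop empty frames first, fall back to an empty DataFrame
# when everything was empty.
def concat_non_empty(*dfs):
    kept = [df for df in dfs if not df.empty]
    return pd.concat(kept) if kept else pd.DataFrame()

df1 = pd.DataFrame({"a": [1, 2]})
df2 = pd.DataFrame()
out = concat_non_empty(df1, df2)
```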
| <python><pandas><dataframe><concatenation> | 2023-10-08 18:12:04 | 8 | 896 | Timothee W |
77,254,706 | 1,034,974 | aggregate calculation vmap jax | <p>I'm trying to implement a fast routine to calculate an array of energies and find the smallest calculated value and its index. Here is my code, which is working fine:</p>
<pre class="lang-py prettyprint-override"><code>@jit
def findMinEnergy(x):
def calcEnergy(a):
return a*a # very simplified body, it is actually 15 lines of code
energies = vmap(calcEnergy, in_axes=(0))(x)
idx = energies.argmin(axis=0)
minenrgy = energies[idx]
return idx, minenrgy
</code></pre>
<p>I wonder if it is possible to avoid the (separate) <code>argmin</code> call and instead return the minimum calculated energy value and its index from the <code>vmap</code> (similar to how other aggregate functions work, e.g. <code>jnp.sum</code>)? I hope that it could be more efficient.</p>
| <python><numpy><jax> | 2023-10-08 17:51:40 | 2 | 348 | Terry |
77,254,697 | 4,505,998 | View numpy array of pairs as tuples | <p>I have an array of pairs of size (N,2), and I want to convert them to tuples to be able to perform an <code>isin</code> operation.</p>
<p>I tried using <code>view</code>, but it doesn't seem to work:</p>
<pre class="lang-py prettyprint-override"><code>coo = np.array([[1,2],[1,6],[5,3],[3,6]]) # coordinates
coo.view(dtype='i,i').reshape((-1))
>>> (array([(1, 0), (2, 0), (1, 0), (6, 0), (5, 0), (3, 0), (3, 0), (6, 0)],
dtype=[('f0', '<i4'), ('f1', '<i4')]),)
</code></pre>
<p>The solution should be something like:</p>
<pre><code>[(1,2), (1,6), (5,3), (3,6)]
</code></pre>
<p>But it appears that <code>view</code>, instead of creating tuples with the two elements, just adds a zero as the second element of each tuple.</p>
<p>I know I can use a map, but I'd like to use something like view so the operation is faster, as the original array takes a couple of mb.</p>
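<p>After more experimenting, I think the zeros come from viewing the int64 data as pairs of int32 (<code>'i,i'</code>); building the record dtype from the array's own dtype seems to work - a sketch, assuming the array is C-contiguous:</p>

```python
import numpy as np

coo = np.array([[1, 2], [1, 6], [5, 3], [3, 6]])

# Record dtype whose two fields match the array's OWN integer width;
# viewing int64 data as 'i,i' (two int32) is what produced the zeros.
pair = np.dtype([("x", coo.dtype), ("y", coo.dtype)])
pairs = coo.view(pair).reshape(-1)
print(pairs)  # [(1, 2) (1, 6) (5, 3) (3, 6)]
```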
| <python><numpy><numpy-ndarray> | 2023-10-08 17:49:19 | 0 | 813 | David Davó |
77,254,506 | 15,111,105 | Python thread started but stuck because of twisted reactor in other thread (possibly) | <p>Trying to make an API based on twisted work (<a href="https://github.com/spotware/OpenApiPy" rel="nofollow noreferrer">https://github.com/spotware/OpenApiPy</a>)</p>
<p>Here is some incomplete code that starts a <code>twisted reactor</code>:</p>
<pre class="lang-py prettyprint-override"><code>def onError(failure):
print("Message Error: ", failure)
def connected(client):
print("\nConnected")
request = ProtoOAApplicationAuthReq()
request.clientId = app_id
request.clientSecret = app_secret
deferred = client.send(request)
deferred.addErrback(onError)
def disconnected(client, reason): # Callback for client disconnection
print("\nDisconnected: ", reason)
def onMessageReceived(client, message): # Callback for receiving all messages
print("Message received: \n", Protobuf.extract(message))
def send_messages():
print('waiting for messages to send')
while True:
print(1)
message = message_queue.get()
deferred = client.send(message)
deferred.addErrback(onError)
time.sleep(10)
def send_message_thread(message):
message_queue.put(message)
def main():
client.setConnectedCallback(connected)
client.setDisconnectedCallback(disconnected)
client.setMessageReceivedCallback(onMessageReceived)
client.startService()
# send_messages()
send_thread = threading.Thread(target=send_messages)
send_thread.daemon = True # The thread will exit when the main program exits
send_thread.start()
# Run Twisted reactor
reactor.run()
if __name__ == "__main__":
main()
</code></pre>
<p>It connects successfully and this is the output:</p>
<pre><code>waiting for messages to send
1
Connected
Message received:
Message received:
Message received:
Message received:
...
</code></pre>
<p>So it looks like the <code>send_thread</code> is started but then the <code>while True</code> loop is actually executed just once - does anybody know why? (I guess it is blocked by <code>reactor.run()</code>, yet I'm not familiar with this twisted stuff.)</p>
<blockquote>
<p>If the <code>send_thread</code> was running I guess there should be more <code>1</code> printed out</p>
</blockquote>
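<p>For what it's worth, I now suspect <code>message_queue.get()</code> simply blocks forever after printing <code>1</code>, since nothing ever calls <code>send_message_thread</code>. A non-blocking sketch I'm considering - the <code>reactor.callFromThread</code> hand-off mentioned in the comment is an assumption on my part, since Twisted APIs aren't thread-safe as far as I understand:</p>

```python
import queue

# Sketch: drain the queue without blocking; the caller passes a function
# that hands the real send back to the reactor thread, e.g.
#   lambda msg: reactor.callFromThread(client.send, msg)
def drain_queue(message_queue, send_via_reactor):
    sent = 0
    while True:
        try:
            message = message_queue.get_nowait()
        except queue.Empty:
            return sent
        send_via_reactor(message)
        sent += 1
```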
| <python><python-multithreading><twisted> | 2023-10-08 16:59:18 | 1 | 311 | wiktor |
77,253,733 | 2,386,113 | Memory leak with cuPy (CUDA in Python) | <p>I am using raw CUDA kernels in python scripts. In the below MWE, I have a super simple raw kernel which is not doing anything. In the code below, I am simply creating a large array (around 2 GB) and passing it to the CUDA kernel.</p>
<p><strong>MWE (Python - cupPy, Not Working):</strong></p>
<pre><code>import numpy as np
import cupy as cp
# custom raw kernel
custom_kernel = cp.RawKernel(r'''
extern "C" __global__
void custom_kernel(double* large_array)
{
int y = blockIdx.y * blockDim.y + threadIdx.y;
int x = blockIdx.x * blockDim.x + threadIdx.x;
int frame = blockIdx.z * blockDim.z + threadIdx.z;
}
''', 'custom_kernel')
# launch kernel
large_array_gpu = cp.zeros((101*101*9*9*301), dtype=cp.float64) # around 2 GB
block_dim_2 = (32, 32, 1)
bx2 = int((101 * 101 + block_dim_2[0] - 1) / block_dim_2[0])
by2 = int((9 * 9 + block_dim_2[1] - 1) / block_dim_2[1])
bz2 = int((301 + block_dim_2[2] -1 ) / block_dim_2[2])
grid_dim_2 = (bx2, by2, bz2)
custom_kernel(grid_dim_2, block_dim_2, large_array_gpu) # gets stuck at this statement, and RAM usage keeps increasing
large_array_cpu = cp.asnumpy(large_array_gpu)
print('done')
</code></pre>
<p><strong>Problem:</strong> As soon as the kernel gets called at the line <code>custom_kernel(grid_dim_2, block_dim_2, large_array_gpu)</code>, my RAM usage starts increasing towards the maximum capacity of 32 GB (almost exponentially), and the kernel never finishes. As shown in the screenshot below, the GPU memory usage is around 2 GB (which was expected) but the CPU RAM usage keeps increasing. <em><strong>Just as a test, I wrote the C++ version of the program; it works fine and quite fast (given below).</strong></em></p>
<ul>
<li>Why is there such a memory leak on the CPU side?</li>
<li>Why does the CUDA kernel never finish?</li>
</ul>
<p><a href="https://i.sstatic.net/L7yKV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L7yKV.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/WYAoo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WYAoo.png" alt="enter image description here" /></a></p>
<p><strong>C++ Version (Working Fine):</strong></p>
<pre><code>#include <stdio.h>
// gpu
#include <cuda_runtime.h>
#include "device_launch_parameters.h"
__global__ void test_kernel(double* large_array)
{
int y = blockIdx.y * blockDim.y + threadIdx.y;
int x = blockIdx.x * blockDim.x + threadIdx.x;
int frame = blockIdx.z * blockDim.z + threadIdx.z;
if (y < (9 * 9) && x < (101 * 101) && frame < 301)
{
int resultIdx = (frame * (101 * 101) * (9 * 9)) + (y * (101 * 101) + x);
large_array[resultIdx] = 1.1;
}
}
int main()
{
printf("start...");
cudaError_t cudaStatus;
// device
double* dev_largeArray = 0;
// Memory allocations
cudaStatus = cudaMalloc((void**)&dev_largeArray, 101 * 101 * 9 * 9 * 301 * sizeof(double));
cudaMemset(dev_largeArray, 0, 101 * 101 * 9 * 9 * 301 * sizeof(double)); // initialize the result with zeros
dim3 blockSize(32, 32, 1);
int bx2 = ((101 * 101) + blockSize.x - 1) / blockSize.x;
int by2 = ((9 * 9) + blockSize.y - 1) / blockSize.y;
int bz2 = (301 + blockSize.z - 1) / blockSize.z;
dim3 gridSize = dim3(bx2, by2, bz2);
test_kernel << <gridSize, blockSize >> > (dev_largeArray);
// Check for any errors launching the kernel
cudaStatus = cudaGetLastError();
if (cudaStatus != cudaSuccess) {
fprintf(stderr, "Kernel launch failed: %s\n", cudaGetErrorString(cudaStatus));
}
// cudaDeviceSynchronize waits for the kernel to finish, and returns
// any errors encountered during the launch.
cudaStatus = cudaDeviceSynchronize();
if (cudaStatus != cudaSuccess) {
fprintf(stderr, "cudaDeviceSynchronize returned error code %d after launching Kernel!\n", cudaStatus);
}
// Copy the results back to the host
double* h_largeArray = new double[101 * 101 * 9 * 9 * 301];
cudaStatus = cudaMemcpy(h_largeArray, dev_largeArray, 101 * 101 * 9 * 9 * 301 * sizeof(double), cudaMemcpyDeviceToHost);
if (cudaStatus != cudaSuccess) {
fprintf(stderr, "cudaMemcpy failed!");
}
delete[] h_largeArray;
cudaFree(dev_largeArray);
return 0;
}
</code></pre>
| <python><cuda><cupy> | 2023-10-08 13:07:11 | 1 | 5,777 | skm |
77,253,713 | 10,353,865 | calling loc with a boolean array containing NA | <p>The pandas doc on loc states that it can be used with boolean arrays, more specifically it states the following:</p>
<p><em>"Allowed inputs are: ... A boolean array (any <code>NA</code> values will be treated as <code>False</code>)."</em></p>
<p>My question: How can you create a boolean array containing NA values? I mean: a numpy bool array can't contain NaNs, and if we interpret this more liberally as "a list containing boolean values and NA", then loc throws exceptions, e.g.:</p>
<pre><code>d_test = pd.DataFrame({"id": [1,2,3,5], "q1": [1,4,4,2], "q2": [4,np.nan,9,0]}, index=["a","b","c","d"])
t1 = [True,False,False,np.nan]
d_test.loc[t1] # KeyError
#same with None:
t1 = [True,False,False,None]
</code></pre>
<p>So my question: How is this sentence to be interpreted?</p>
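<p>The closest I've come to actually constructing such an array is pandas' nullable <code>BooleanDtype</code>, which - unlike a plain list or a numpy bool array - really can hold missing values; with it, <code>loc</code> behaves as documented (the NA entry acts as <code>False</code>):</p>

```python
import pandas as pd

d_test = pd.DataFrame({"id": [1, 2, 3, 5], "q1": [1, 4, 4, 2]},
                      index=["a", "b", "c", "d"])

# A nullable boolean array can genuinely contain pd.NA:
mask = pd.array([True, False, False, pd.NA], dtype="boolean")
selected = d_test.loc[mask]
print(selected)  # only row "a"; the NA entry is treated as False
```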
| <python><pandas><dataframe> | 2023-10-08 13:00:48 | 2 | 702 | P.Jo |
77,253,645 | 402,281 | Why is __get__() specified to have an optional third argument? | <p>The <a href="https://docs.python.org/3/howto/descriptor.html#descriptor-protocol" rel="nofollow noreferrer">Descriptor HowTo Guide (Descriptor protocol)</a> specifies that <code>__get__()</code> must be callable with 2 and 3 arguments:</p>
<pre><code>descr.__get__(self, obj, type=None) -> value
</code></pre>
<p>Why has it been defined like this? AFAIK CPython will always call this method with 3 arguments, so that <code>None</code> default argument is never used.</p>
<p>I have seen <a href="https://stackoverflow.com/questions/31626653/why-do-people-default-owner-parameter-to-none-in-get">Why do people default owner parameter to None in __get__?</a>, but that asks a different question, i.e. why people write their descriptors with that signature, not why the signature has been defined to be like this, and none of the current answers go into why it has been specified like this.</p>
| <python><python-descriptors> | 2023-10-08 12:39:34 | 1 | 10,085 | Feuermurmel |
77,253,558 | 9,206,337 | LocalEntryNotFoundError while building Docker Image with Hugging Face model | <p>I am trying to create an AWS Lambda function using a Docker image as the source.
I am executing the following code as part of the image build phase to download all the dependencies:</p>
<pre class="lang-py prettyprint-override"><code>import logging
logging.basicConfig(level=logging.DEBUG)
from langchain.embeddings import HuggingFaceInstructEmbeddings
from transformers import AutoModelForSequenceClassification, AutoTokenizer
instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large",
model_kwargs={"device": "cpu"},encode_kwargs={"batch_size": 1})
# Replace 'model_id' with your specific model identifier
model_id = "SamLowe/roberta-base-go_emotions"
# Download the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_id,force_download=True)
tokenizer = AutoTokenizer.from_pretrained(model_id,force_download=True)
# Save the model and tokenizer to a directory (e.g., "./models/")
model.save_pretrained("./model")
tokenizer.save_pretrained("./tokenizer")
</code></pre>
<p>Here is the dockerfile</p>
<pre><code># lambda base image for Docker from AWS
FROM public.ecr.aws/lambda/python:latest
# Update the package list and install development tools for C++ support
RUN yum update -y && \
yum install deltarpm -y && \
yum groupinstall "Development Tools" -y
# Export CXXFLAGS environment variable for C++11 support
ENV CXXFLAGS="-std=c++11"
ENV AWS_DEFAULT_REGION="eu-central-1"
ENV TRANSFORMERS_OFFLINE="1"
ENV SENTENCE_TRANSFORMERS_HOME="/root/.cache/huggingface/"
# Install packages
COPY requirements.txt ./
RUN python3 -m pip install -r requirements.txt
# Setup directories
RUN mkdir -p /tmp/content/pickles/ ~/.cached
RUN chmod 0700 ~/.cached
# copy all code and lambda handler
COPY *.py ./
RUN python3 dependency.py
# run lambda handler
CMD ["function.handler"]
</code></pre>
<p>While running the docker build command, the phase executing the file <code>dependency.py</code> leads to the error below:</p>
<pre><code>docker build . -t ml-server
[+] Building 6.5s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.38kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 32B 0.0s
=> [internal] load metadata for public.ecr.aws/lambda/python:latest 1.3s
=> [internal] load build context 0.0s
=> => transferring context: 287B 0.0s
=> [1/9] FROM public.ecr.aws/lambda/python:latest@sha256:d8a8324834a079dbdfc6551831325113512a147bf70003622412565f216 0.0s
=> CACHED [2/9] RUN yum update -y && yum install deltarpm -y && yum groupinstall "Development Tools" -y 0.0s
=> CACHED [3/9] COPY requirements.txt ./ 0.0s
=> CACHED [4/9] RUN echo `whoami` 0.0s
=> CACHED [5/9] RUN python3 -m pip install -r requirements.txt 0.0s
=> [6/9] RUN mkdir -p /tmp/content/pickles/ ~/.cached 0.3s
=> [7/9] RUN chmod 0700 ~/.cached 0.2s
=> [8/9] COPY *.py ./ 0.0s
=> ERROR [9/9] RUN python3 dependency.py 4.6s
------
> [9/9] RUN python3 dependency.py:
#13 1.260 INFO:numexpr.utils:NumExpr defaulting to 4 threads.
#13 2.806 INFO:sentence_transformers.SentenceTransformer:Load pretrained SentenceTransformer: hkunlp/instructor-large
#13 2.807 DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
#13 3.658 DEBUG:urllib3.connectionpool:https://huggingface.co:443 "GET /api/models/hkunlp/instructor-large HTTP/1.1" 200 140783
#13 4.016 Traceback (most recent call last):
#13 4.016 File "/var/task/dependency.py", line 7, in <module>
#13 4.016 instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large",
#13 4.016 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#13 4.016 File "/var/lang/lib/python3.11/site-packages/langchain/embeddings/huggingface.py", line 150, in __init__
#13 4.016 self.client = INSTRUCTOR(
#13 4.016 ^^^^^^^^^^^
#13 4.016 File "/var/lang/lib/python3.11/site-packages/sentence_transformers/SentenceTransformer.py", line 87, in __init__
#13 4.017 snapshot_download(model_name_or_path,
#13 4.017 File "/var/lang/lib/python3.11/site-packages/sentence_transformers/util.py", line 491, in snapshot_download
#13 4.017 path = cached_download(**cached_download_args)
#13 4.017 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#13 4.017 File "/var/lang/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
#13 4.017 return fn(*args, **kwargs)
#13 4.017 ^^^^^^^^^^^^^^^^^^^
#13 4.017 File "/var/lang/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 749, in cached_download
#13 4.018 raise LocalEntryNotFoundError(
#13 4.018 huggingface_hub.utils._errors.LocalEntryNotFoundError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
------
executor failed running [/bin/sh -c python3 dependency.py]: exit code: 1
</code></pre>
<p>When I try to execute the same file on macOS it runs without an issue. I am able to get packages from pip within the image so network connection should not be an issue</p>
<p>Judging from the error message I thought this would be an error related to the embeddings method, so I added the <code>force_download</code> flag and set the download <code>batch_size</code> to 1.</p>
<p>That also resulted in the same error.
The size of the downloaded dependencies is not larger than 3 GB.
Is there a CPU bottleneck while running the download pass?</p>
| <python><docker><aws-lambda><dockerfile><huggingface-transformers> | 2023-10-08 12:12:15 | 1 | 1,060 | Fardeen Khan |
77,253,135 | 2,312,666 | python string characters to key and value? | <p>I'm doing something wrong here, but not sure what. I would like to be able to iterate through the characters of a string, see if the characters match a key in a dictionary and if so return the corresponding value.</p>
<pre><code>string = 'happy'
dictionary = {
"A":"10","B":"20","C":"30","D":"40","E":"50","F":"60","G":"70","H":"80","I":"90","J":"100","K":"110",
"L":"120","M":"130","N":"140","O":"150","P":"160","Q":"170","R":"180","S":"190","T":"200","U":"210","V":"220",
"W":"230","X":"240","Y":"250","Z":"260"}
for character in string:
print (character)
for key in dictionary.keys():
if str(key) == str(character):
print (value)
</code></pre>
<p>Thank you</p>
| <python><string><dictionary> | 2023-10-08 09:57:02 | 2 | 309 | yeleek |
77,253,088 | 10,919,370 | Python variable annotation and syntax error on version < 3.6 | <p>I have the following line:</p>
<pre><code>status_dict : dict = result.get('status', {})
</code></pre>
<p>I'm using the variable annotation to make my coding easier afterwards (autocompletion).
It works perfectly on Python >= 3.6, but on older versions I get:</p>
<pre><code>File "/home/marcel/work/py/svn_utils.py", line 261
status_dict : dict = svn_result.get('status', {})
^
SyntaxError: invalid syntax
</code></pre>
<p>Any idea how I can make the code throw a meaningful error message, e.g. "Python 3.6 or newer is required"?</p>
<p>What I've tried so far: at the beginning of the script I've added these lines:</p>
<pre><code>import sys
MIN_PYTHON = (2, 6)
if sys.version_info < MIN_PYTHON:
sys.exit("Python %s.%s or newer is required.\n" % MIN_PYTHON)
</code></pre>
<p>but it doesn't work because the parser complains about the syntax first.</p>
<p>The only solution that I see is a wrapper script that checks the minimum version and decides whether the main script will be executed or not.
But maybe there is some other solution that avoids the wrapper.</p>
<p>I've got some hints/questions if <a href="https://stackoverflow.com/questions/33323433/is-there-a-strategy-for-making-python-3-5-code-with-type-annotations-backwards-c">Is there a strategy for making Python 3.5 code with type-annotations backwards compatible?</a> is not the answer to my question.</p>
<p>The answer is "No": that question refers to type annotations for functions (parameters and return value), whereas I'm asking about the syntax for variable type annotations, which is newer (see <a href="https://peps.python.org/pep-0526/" rel="nofollow noreferrer">PEP 526</a>).</p>
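<p>For completeness, the wrapper direction I'm currently falling back to - a launcher that only uses old syntax, checks the version, and imports the real module afterwards (the <code>svn_utils</code> import is commented out here so the sketch is self-contained):</p>

```python
import sys

MIN_PYTHON = (3, 6)

def check_version(version_info=sys.version_info):
    """Return an error message if the interpreter is too old, else None."""
    if version_info < MIN_PYTHON:
        return "Python %d.%d or newer is required." % MIN_PYTHON
    return None

msg = check_version()
if msg:
    sys.exit(msg)
# Only now import the module that uses variable annotations, so an old
# interpreter never has to parse the annotated-assignment syntax:
# import svn_utils
```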
| <python><python-typing> | 2023-10-08 09:45:00 | 0 | 1,215 | Marcel Preda |
77,253,053 | 20,892,267 | Two way communication between two Processes python | <p>I am running two classes in two different processes. I am able to set up one-way communication and initiate the <code>run</code> method of the <code>Runner</code> class via the <code>GUI</code> class.</p>
<p>My requirement is that once the <code>run</code> method finishes its job, the <code>GUI</code> class should initiate the <code>terminate</code> method.</p>
<p><strong>Note:</strong> I do not want to use multi-threading because my GUI is lagging.</p>
<p>Below is my code:</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import *
from tkinter import ttk
from multiprocessing import Process, Pipe
import time
class GUI:
def __init__(self,conn) -> None:
self.conn = conn
self.root = Tk()
self.root.geometry("100x100")
self.start_btn = Button(self.root, text ="Start", command = self.start)
self.start_btn.pack()
self.root.mainloop()
def start(self):
self.conn.send("Start")
self.start_btn.pack_forget()
self.prog = ttk.Progressbar(self.root,orient='horizontal',mode='indeterminate',length=280)
self.prog.pack()
self.prog.start()
def terminate(self):
self.prog.pack_forget()
terminate_btn = Button(self.root,text="Terminate",command=self.root.destroy)
terminate_btn.pack()
class Runner:
def __init__(self,conn) -> None:
print("Initiating Runner")
self.conn = conn
self.main()
def run(self):
print("running")
time.sleep(3)
print("Done")
return
def main(self):
print("in Runner main")
while 1:
msg = self.conn.recv()
if msg == "Start":
self.run()
break
class Main:
def __init__(self) -> None:
parent_conn, child_conn = Pipe()
gui = Process(target=GUI,args=(parent_conn,))
runner = Process(target=Runner,args=(child_conn,))
gui.start()
runner.start()
gui.join()
runner.join()
if __name__ == "__main__":
Main()
</code></pre>
<p>I tried multiple methods to communicate back to GUI but it's not working. Please help.</p>
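<p>For context, the polling variant I've been experimenting with checks the pipe without blocking, so it can be driven from Tk's event loop via <code>root.after</code> instead of a blocking <code>recv()</code> - a sketch:</p>

```python
from multiprocessing import Pipe

# Sketch: non-blocking pipe check, meant to be rescheduled from the GUI
# with something like: self.root.after(100, self.check)
def poll_pipe(conn, on_done):
    """Return True once a 'Done' message has arrived and on_done ran."""
    if conn.poll():  # non-blocking readiness check
        if conn.recv() == "Done":
            on_done()
            return True
    return False
```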
| <python><tkinter><multiprocessing> | 2023-10-08 09:33:04 | 1 | 332 | Roshan Shetty |
77,252,899 | 8,547,986 | what is the recommended way to use `not` in python? | <p>I have observed that both of the following expressions work and would like to know which is the recommended way and why?</p>
<pre class="lang-py prettyprint-override"><code>>>> not(True)
False
>>> not True
False
</code></pre>
<p>Coming from functional programming background (I have experience in R), I am more inclined to use the first way. Is that accepted or are there any downfalls to it?</p>
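<p>One detail I noticed while testing, which suggests <code>not</code> is an operator rather than a function: the parentheses after it only group the operand, and with a comma they build a tuple instead of a second argument:</p>

```python
# 'not' is an operator: parentheses after it just group the operand.
a = not (True)          # identical to: not True
b = not (False, False)  # operand is a non-empty (truthy) tuple
c = not ()              # empty tuple is falsy
print(a, b, c)          # False False True
```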
| <python><python-3.x> | 2023-10-08 08:45:36 | 3 | 1,923 | monte |
77,252,880 | 4,502,721 | File executable not working on Raspberry Pi | <p>I have a .py file (<code>camera.py</code>) that is programmed to capture images from a USB camera connected to the Raspberry Pi. The <code>camera.py</code> file is below and is running as expected from the terminal of the Raspberry Pi.</p>
<pre><code>import cv2
cam = cv2.VideoCapture(0)
while True:
ret, image = cam.read()
cv2.imshow('Imagetest',image)
k = cv2.waitKey(1)
if k != -1:
break
cv2.imwrite('/home/pi/testimage.jpg', image)
cam.release()
cv2.destroyAllWindows()
</code></pre>
<p>Now I want to turn this <code>camera.py</code> file into a standalone .exe and here is where I am getting into trouble.</p>
<p>I have tried two ways to accomplish this: pyinstaller and the auto-py-to-exe GUI.</p>
<p>In both cases the .exe built without issues, but when I run it, nothing happens - neither with 'Execute' nor with 'Execute in Terminal' (in the latter case, a terminal window doesn't even open).</p>
<p>When I try to execute the file manually from terminal with below commands:</p>
<pre><code>chmod a+x camera
./camera
</code></pre>
<p>I get the below error:</p>
<pre class="lang-none prettyprint-override"><code>./camera: /lib/arm-linux-gnueabihf/libc.so.6: version `GLIBC_2.33' not found (required by ./camera)
./camera: /lib/arm-linux-gnueabihf/libc.so.6: version `GLIBC_2.34' not found (required by ./camera)
</code></pre>
<p>My guess is the problem lies in version incompatibility with GLIBC, so I have looked up the version installed, and the highest available in my Debian 11 is 2.31.</p>
<p><a href="https://i.sstatic.net/tiWjX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tiWjX.png" alt="enter image description here" /></a></p>
<p>If it is really a glibc version problem, can you tell me how I can install the required version in my Debian 11 Raspberry Pi. I guess there are issues with it given, my Google search.</p>
<p>If it is not the version incompatibility, can you tell me what I should correct.</p>
<p><strong>Create .desktop file Approach</strong></p>
<p>As per suggestions, I created a .desktop file (contents below) following this <a href="https://linuxhint.com/run-python-script-desktop-icon-linux/" rel="nofollow noreferrer">documentation</a>. But the program still fails to launch the video stream. A window opens up momentarily, but that's it.</p>
<pre><code>[Desktop Entry]
Version= 1.0
Icon= /home/pi3/videorec.png
Name= videorec
Exec=//home/pi3/camera.py
Terminal=true
Type=Application
</code></pre>
| <python><opencv><raspberry-pi><debian><glibc> | 2023-10-08 08:37:07 | 2 | 335 | Amrita Deb |
77,252,705 | 4,281,353 | How to exactly match a word with one regexp? | <p>Using Python 3.9.13.</p>
<p>How can I match an exact word without preceding or trailing character with a simple one regexp expression?</p>
<p>There are answers using the boundary word <code>\b</code>. However, according to the <a href="https://docs.python.org/3/library/re.html#regular-expression-syntax" rel="nofollow noreferrer">Python doc</a>, it can match non word characters.</p>
<blockquote>
<p><code>\b</code>: Matches the empty string, but only at the beginning or end of a word. A word is defined as a sequence of word characters. Note that formally, <code>\b</code> is defined as the boundary between a <code>\w</code> and a <code>\W</code> character (or vice versa), or between <code>\w</code> and the beginning/end of the string.</p>
</blockquote>
<p>For instance, I want to get a match only with the word <code>cool</code>, but no match with <code>#cool</code> or <code>cool.</code>.</p>
<pre><code>re.findall(
pattern=r'\bcool\b',
string=" #cool",
flags=re.IGNORECASE|re.ASCII
)
---
['cool']
</code></pre>
<p>Alternative can be listing multiple patterns but it is not clean.</p>
<pre><code>re.findall(
pattern=r'\Acool\Z|\s+cool\s+|^cool\s+|\s+cool\Z',
string=" #cool",
flags=re.IGNORECASE|re.ASCII
)
---
</code></pre>
<p>Please advise if there is better and simple one regexp pattern to achieve this.</p>
<hr />
<h1>Update</h1>
<p>As answered by <a href="https://stackoverflow.com/users/5424988/the-fourth-bird">The fourth bird</a></p>
<pre><code>re.findall(
pattern=r'(?<!\S)cool(?!\S)',
string=" #cool cool& cool cool\n (cool)",
flags=re.IGNORECASE|re.ASCII
)
---
['cool', 'cool']
</code></pre>
<p>or</p>
<pre><code>re.findall(
pattern=r'(?<=\s|^)cool(?=\s|$)',
string=" #cool cool& cool cool\n (cool)",
flags=re.IGNORECASE|re.ASCII
)
---
['cool', 'cool']
</code></pre>
| <python><regex> | 2023-10-08 07:31:44 | 1 | 22,964 | mon |
77,252,602 | 8,971,938 | Is there a table widget for Shiny for python that calls back edited cells? | <p>Similar question to <a href="https://stackoverflow.com/questions/76579007/is-there-an-interactive-table-widget-for-shiny-for-python-that-calls-back-select">Is there an interactive table widget for Shiny for python that calls back selected rows?</a>, but slightly different: Instead of just selecting rows (and getting the information about which row is selected), I want to edit cells in the displayed table and get back the edited table in a callback.</p>
<p>On <a href="https://shiny.posit.co/py/docs/ipywidgets.html" rel="nofollow noreferrer">https://shiny.posit.co/py/docs/ipywidgets.html</a>, there are three ipywidgets libraries mentioned. I tried for example with <a href="https://github.com/finos/ipyregulartable/tree/main" rel="nofollow noreferrer">ipyregulartable</a>, this example should be runnable:</p>
<pre><code>from shiny import ui, render, App, reactive
import ipyregulartable as rt
import pandas as pd
from shinywidgets import output_widget, reactive_read, register_widget
app_ui = ui.page_fluid(
ui.layout_sidebar(
ui.panel_sidebar(),
ui.panel_main(
output_widget("table"),
ui.output_table("changed_table"),
),
)
)
def server(input, output, session):
dummy = pd.DataFrame(
{
"title": ["A", "B", "C"],
"img_name": ["food.png", "food2.jpg", "food3.jpg"],
}
)
tbl = rt.RegularTableWidget(dummy)
tbl.editable(0, 100) # don't know if necessary
register_widget("table", tbl)
@output
@render.table
def changed_table():
changed = reactive_read(tbl, "datamodel")
return pd.DataFrame(changed._data)
app = App(app_ui, server)
</code></pre>
<p>The dummy data is both displayed as a widget and the data received by the <code>reactive_read()</code> is displayed as well. However, when I edit the table, the changes are not reflected in the attribute "datamodel" which is caught by <code>reactive_read</code>.</p>
<p>Does anybody know how to catch the changes with ipyregulartable? Or any other possibility to have an interactive table in shiny/python with a callback for cell edits? Similar question was also posted <a href="https://community.rstudio.com/t/will-shiny-for-python-have-editable-datatables/171417" rel="nofollow noreferrer">here</a>.</p>
| <python><py-shiny> | 2023-10-08 06:50:44 | 1 | 597 | Requin |
77,252,333 | 10,565,820 | SSL error with Zeep - how to change cipher suite? | <p>I am trying to use Zeep to load a WSDL file, but when I do, I receive the following error:</p>
<pre><code>requests.exceptions.SSLError: HTTPSConnectionPool(host='api-mte.itespp.org', port=443): Max retries exceeded with url: /markets/VirtualService/v2/?WSDL (Caused by SSLError(SSLError(1, '[SSL: DH_KEY_TOO_SMALL] dh key too small (_ssl.c:997)')))
</code></pre>
<p>I have read in another answer (<a href="https://stackoverflow.com/questions/38015537/python-requests-exceptions-sslerror-dh-key-too-small">Python - requests.exceptions.SSLError - dh key too small</a>) that this can be solved using a different cipher suite (as I think the server is old which is what's causing this error), but I don't know how to do this with Zeep. Any ideas? Thanks!</p>
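<p>For reference, the cipher-suite fix from the linked answer can be adapted to Zeep by mounting a custom <code>requests</code> transport adapter and handing the session to Zeep. The sketch below is untested against this particular server; the adapter class name is made up, and the WSDL URL is just the one from the error message:</p>

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.ssl_ import create_urllib3_context

class LegacyCipherAdapter(HTTPAdapter):
    """Transport adapter that relaxes OpenSSL's security level."""
    def init_poolmanager(self, *args, **kwargs):
        # "@SECLEVEL=1" re-enables small DH keys that the default
        # security level (2 in recent OpenSSL builds) rejects.
        ctx = create_urllib3_context(ciphers="DEFAULT:@SECLEVEL=1")
        kwargs["ssl_context"] = ctx
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://", LegacyCipherAdapter())

# Hand the session to Zeep (uncomment with zeep installed):
# from zeep import Client
# from zeep.transports import Transport
# client = Client("https://api-mte.itespp.org/markets/VirtualService/v2/?WSDL",
#                 transport=Transport(session=session))
```

<p>Lowering the security level weakens TLS for every request made through this session, so it is best confined to the one host that needs it.</p>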
| <python><ssl><openssl><wsdl><zeep> | 2023-10-08 04:34:55 | 1 | 644 | geckels1 |
77,252,144 | 15,587,184 | How to calculate deltas and percentage changes in Pandas grouping on date by specific conditions | <p>I have a dataset named df with the following structure:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>date</th>
<th>word</th>
<th>total</th>
<th>top</th>
</tr>
</thead>
<tbody>
<tr>
<td>14/10/2023</td>
<td>python</td>
<td>52</td>
<td>5</td>
</tr>
<tr>
<td>15/10/2023</td>
<td>python</td>
<td>54</td>
<td>9</td>
</tr>
<tr>
<td>16/10/2023</td>
<td>R</td>
<td>52</td>
<td>2</td>
</tr>
<tr>
<td>17/10/2023</td>
<td>R</td>
<td>12</td>
<td>1</td>
</tr>
<tr>
<td>18/10/2023</td>
<td>R</td>
<td>45</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>I need to manipulate the data in a way that I can create two new columns:</p>
<p>The column "delta_top" should display the difference in the "top" column from the current date to the previous date for that word.</p>
<p>The column "delta_total" should show the percentage increase or decrease for that word on the current date compared to the previous date. Note that if there is no previous date, we should assign "NA."</p>
<p>For instance, for the word "R," the oldest date of reference is 16/10/2023, so we can't calculate its "delta_top" or "delta_total." Therefore, we assign "NA." But on date 17/10/2023, the word "R" went from top 2 to top 1, so we subtract the previous value from the current value, resulting in 1 (it has gone up 1 spot). However, "delta_total" will show -0.76, indicating that the subtotal went down by 76%.</p>
<p>My desired output would look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>date</th>
<th>word</th>
<th>total</th>
<th>top</th>
<th>delta_top</th>
<th>delta_total</th>
</tr>
</thead>
<tbody>
<tr>
<td>14/10/2023</td>
<td>python</td>
<td>52</td>
<td>5</td>
<td>NA</td>
<td>NA</td>
</tr>
<tr>
<td>15/10/2023</td>
<td>python</td>
<td>54</td>
<td>9</td>
<td>4</td>
<td>0.037037037</td>
</tr>
<tr>
<td>16/10/2023</td>
<td>R</td>
<td>52</td>
<td>2</td>
<td>NA</td>
<td>NA</td>
</tr>
<tr>
<td>17/10/2023</td>
<td>R</td>
<td>12</td>
<td>1</td>
<td>1</td>
<td>-0.769230769</td>
</tr>
<tr>
<td>18/10/2023</td>
<td>R</td>
<td>45</td>
<td>1</td>
<td>0</td>
<td>2.75</td>
</tr>
</tbody>
</table>
</div>
<p>I have been trying to create this table using the chaining method in pandas, but I always get an error. The truth is that my actual dataset has over 3 million records, so I need to come up with a fast and convenient solution. I'm quite new to Python.</p>
<p>Workaround:</p>
<pre><code>df = (
    df.assign(date=pd.to_datetime(df['date'], format='%d/%m/%Y'))
    .sort_values(by=['word', 'date'])
    .assign(delta_top=lambda x: x['top'] - x.groupby('word')['top'].shift(1),
            delta_total=lambda x: ((x['total'] - x.groupby('word')['total'].shift(1))
                                   / x.groupby('word')['total'].shift(1)).fillna('NA'))
)
</code></pre>
<p>I feel like the logic is sound, but this code takes forever to run.</p>
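<p>For what it's worth, the repeated <code>shift(1)</code> calls can be collapsed into a single <code>groupby</code> with <code>diff()</code>/<code>pct_change()</code>, which avoids regrouping three times. A sketch on the sample data follows; note it keeps the workaround's current-minus-previous definition of <code>delta_top</code> (so R on 17/10 gets -1, not the +1 in the desired table, as a "spots gained" sign flip would), and it leaves <code>NaN</code> rather than the string <code>"NA"</code>, which keeps the columns numeric and fast:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["14/10/2023", "15/10/2023", "16/10/2023", "17/10/2023", "18/10/2023"],
    "word": ["python", "python", "R", "R", "R"],
    "total": [52, 54, 52, 12, 45],
    "top": [5, 9, 2, 1, 1],
})

df["date"] = pd.to_datetime(df["date"], format="%d/%m/%Y")
df = df.sort_values(["word", "date"])

g = df.groupby("word")
df["delta_top"] = g["top"].diff()            # current top minus previous top
df["delta_total"] = g["total"].pct_change()  # fractional change vs previous total

print(df)
```

<p>On a few million rows, the single-pass <code>diff()</code>/<code>pct_change()</code> should be noticeably faster than five separate <code>groupby(...).shift(1)</code> evaluations.</p>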
| <python><pandas><group-by><data-wrangling> | 2023-10-08 02:51:55 | 2 | 809 | R_Student |
77,252,117 | 10,650,942 | What is the difference between built-in sum() and math.fsum() in Python 3.12? | <p>In the <a href="https://docs.python.org/3/whatsnew/3.12.html#other-language-changes:%7E:text=sum()%20now%20uses%20Neumaier%20summation" rel="nofollow noreferrer">What's New document</a> for 3.12 there is a point:</p>
<blockquote>
<p>sum() now uses Neumaier summation to improve accuracy and commutativity when summing floats or mixed ints and floats. (Contributed by Raymond Hettinger in gh-100425.)</p>
</blockquote>
<p>But also there is a <a href="https://docs.python.org/3/library/math.html#math.fsum" rel="nofollow noreferrer"><code>math.fsum</code></a> function that is intended to be used for exact floating-point number summation. I guess it should be using some similar algorithm.</p>
<p>I tried different cases on 3.11 and 3.12. On 3.11 <code>sum()</code> gives less precise results, as expected. But on 3.12 they always return the same good results, and I wasn't able to find a case where they differ.</p>
<p>Which one should I use? Are there cases left where one should prefer <a href="https://docs.python.org/3/library/math.html#math.fsum" rel="nofollow noreferrer"><code>math.fsum()</code></a> over built-in <a href="https://docs.python.org/3/library/functions.html#sum" rel="nofollow noreferrer"><code>sum()</code></a>?</p>
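<p>One distinction worth keeping in mind: <code>math.fsum()</code> tracks exact partial sums and is guaranteed correctly rounded, while Neumaier summation in 3.12's <code>sum()</code> is a cheaper error-compensation technique without that guarantee. A quick comparison; <code>sum()</code>'s results depend on the interpreter version, so they are not annotated:</p>

```python
import math

# Classic cancellation case: the two 1.0s are swallowed by the huge
# terms in naive left-to-right summation.
values = [1.0, 1e100, 1.0, -1e100]
print(math.fsum(values))   # 2.0 -- fsum is exact, so this always holds

# Pre-3.12 sum() returns 0.0 here; Neumaier summation recovers this case.
print(sum(values))

# The documented repeated-0.1 example:
print(math.fsum([0.1] * 10))   # exactly 1.0
print(sum([0.1] * 10))         # 0.9999999999999999 before 3.12
```
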
| <python><sum><python-3.12> | 2023-10-08 02:37:46 | 1 | 2,915 | Andrey Semakin |
77,251,901 | 14,509,604 | Pydantic.BaseModel.model_dump() throws AttributeError | <p>I'm trying to use <code>pydantic.BaseModel.model_dump()</code>, but when I call it, <code>AttributeError: type object 'BaseModel' has no attribute 'model_dump'</code> is raised. I also tried it after instantiating the <code>BaseModel</code> class. The strange thing is that the VS Code hint tool shows it as an available method, and when I use <code>pydantic.BaseModel.dict()</code> it warns me that I should use <code>model_dump()</code>.</p>
<p>I'm using Python 3.10.4</p>
<pre><code>> pip freeze
annotated-types==0.6.0
anyio==3.7.1
exceptiongroup==1.1.3
fastapi==0.103.2
idna==3.4
pydantic==2.4.2
pydantic_core==2.10.1
sniffio==1.3.0
starlette==0.27.0
</code></pre>
<p>What's really strange is that when I print <code>pydantic.__version__</code> in my <code>FastAPI</code> web API, it returns v1.10.2. What's going on here?</p>
<p>EDIT:</p>
<pre><code>from typing import Optional

from fastapi import FastAPI
from fastapi.params import Body
from pydantic import BaseModel
import pydantic

app = FastAPI()

class Post(BaseModel):
    title: str
    content: str
    published: bool = True  # default value
    rating: Optional[int] = None

@app.post("/createpost")
def create_posts(new_post: Post):
    print('>>>>', pydantic.__version__)
    print(pydantic.BaseModel.model_dump())  # => raises the error
    # print(new_post.model_dump())  # => raises the error
    return {"data": "new_post"}
</code></pre>
<pre><code>(fastapi-course) ➜ fastapi-course uvicorn main:app --reload
INFO: Will watch for changes in these directories: ['/home/juanc/Documents/fastapi-course']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [76395] using WatchFiles
INFO: Started server process [76397]
INFO: Waiting for application startup.
INFO: Application startup complete.
'>>>>' 1.10.2
INFO: 127.0.0.1:40264 - "POST /createpost HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/juanc/.local/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 404, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/juanc/.local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/juanc/.local/lib/python3.8/site-packages/fastapi/applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
File "/home/juanc/.local/lib/python3.8/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/juanc/.local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/juanc/.local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/juanc/.local/lib/python3.8/site-packages/starlette/middleware/exceptions.py", line 75, in __call__
raise exc
File "/home/juanc/.local/lib/python3.8/site-packages/starlette/middleware/exceptions.py", line 64, in __call__
await self.app(scope, receive, sender)
File "/home/juanc/.local/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/home/juanc/.local/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/home/juanc/.local/lib/python3.8/site-packages/starlette/routing.py", line 680, in __call__
await route.handle(scope, receive, send)
File "/home/juanc/.local/lib/python3.8/site-packages/starlette/routing.py", line 275, in handle
await self.app(scope, receive, send)
File "/home/juanc/.local/lib/python3.8/site-packages/starlette/routing.py", line 65, in app
response = await func(request)
File "/home/juanc/.local/lib/python3.8/site-packages/fastapi/routing.py", line 231, in app
raw_response = await run_endpoint_function(
File "/home/juanc/.local/lib/python3.8/site-packages/fastapi/routing.py", line 162, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "/home/juanc/.local/lib/python3.8/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "/home/juanc/.local/lib/python3.8/site-packages/anyio/to_thread.py", line 28, in run_sync
return await get_asynclib().run_sync_in_worker_thread(func, *args, cancellable=cancellable,
File "/home/juanc/.local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 818, in run_sync_in_worker_thread
return await future
File "/home/juanc/.local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 754, in run
result = context.run(func, *args)
File "/home/juanc/Documents/fastapi-course/./main.py", line 26, in create_posts
print(pydantic.BaseModel.model_dump())
AttributeError: type object 'BaseModel' has no attribute 'model_dump'
</code></pre>
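<p>(The traceback paths point at <code>python3.8</code> user site-packages, while <code>pip freeze</code> was presumably run in a different environment, which suggests two interpreters are involved. A stdlib-only sketch to check which interpreter and which pydantic install the running process actually uses:)</p>

```python
import sys
from importlib.metadata import PackageNotFoundError, version

print(sys.executable)  # the interpreter this process runs under
try:
    # version of pydantic visible to THIS interpreter -- may differ
    # from what `pip freeze` reports in another environment
    print(version("pydantic"))
except PackageNotFoundError:
    print("pydantic is not installed for this interpreter")
```

<p>Running this inside the FastAPI route and again in the shell where <code>pip freeze</code> was run would show whether <code>uvicorn</code> is picking up a different Python than the virtualenv's.</p>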
| <python><fastapi><pydantic> | 2023-10-08 00:19:53 | 1 | 329 | juanmac |
77,251,865 | 6,197,439 | Python subprocess (calling gdb) freezes when called from make? | <p>Ok, this is a tricky one.</p>
<p>I have a CMake project to build an executable, and then I have a separate independent "target" which lets me run a Python script which uses subprocess to call <code>gdb</code> and to inspect this executable. Note that I run all of this in a MINGW64 (MSYS2) <code>bash</code> shell on Windows 10.</p>
<p>A script that generates the example files is posted at end of the post as <code>gen_files.sh</code>; to generate the example, run the following <code>bash</code> commands:</p>
<pre class="lang-bash prettyprint-override"><code>mkdir C:/tmp/cmake_py_gdb
cd C:/tmp/cmake_py_gdb
# save the file as gen_files.sh there
bash gen_files.sh
mkdir build
cd build
cmake ../ -DCMAKE_BUILD_TYPE=Debug -G "MSYS Makefiles"
make
./myprogram.exe
# to trigger the freeze:
make py_gdb_inspect
</code></pre>
<p>Here is the tricky part: <code>make py_gdb_inspect</code> essentially calls <code>python3 py_gdb_inspect.py</code>; however, if I call <code>python3 py_gdb_inspect.py</code> directly from the MINGW64 <code>bash</code> shell, everything works as intended:</p>
<pre class="lang-none prettyprint-override"><code>$ (cd C:/tmp/cmake_py_gdb; python3 py_gdb_inspect.py)
> Working directory C:\tmp\cmake_py_gdb.
> Working directory C:\tmp\cmake_py_gdb\build.
> Before start_engine...<function start_engine at 0x000001bb42f94ae0>
> start_engine
> Starting debug engine/process: "myprogram.exe"
> resp1:
> resp2: Breakpoint 1 at 0x1400014d3: file C:/tmp/cmake_py_gdb/myprogram.c, line 11.
> [New Thread 20724.0x3c1c]
> mynumber_val: -1
>
> [... snip empty lines ...]
>
py_gdb_inspect.py finished!
</code></pre>
<p>However, when I call <code>make py_gdb_inspect</code>, the script freezes here:</p>
<pre class="lang-none prettyprint-override"><code>$ make py_gdb_inspect
make[1]: Entering directory '/c/tmp/cmake_py_gdb/build'
make[2]: Entering directory '/c/tmp/cmake_py_gdb/build'
make[3]: Entering directory '/c/tmp/cmake_py_gdb/build'
make[3]: Leaving directory '/c/tmp/cmake_py_gdb/build'
make[3]: Entering directory '/c/tmp/cmake_py_gdb/build'
[100%] run py_gdb_inspect.py in C:/tmp/cmake_py_gdb
> Working directory C:\tmp\cmake_py_gdb.
> Working directory C:\tmp\cmake_py_gdb\build.
> Before start_engine...<function start_engine at 0x0000011da8014ae0>
> start_engine
> Starting debug engine/process: "myprogram.exe"
> resp1:
> resp2: Breakpoint 1 at 0x1400014d3: file C:/tmp/cmake_py_gdb/myprogram.c, line 11.
</code></pre>
<p>... and I cannot even stop this with <kbd>CTRL</kbd>+<kbd>C</kbd>, I have to use <code>pkill -9 -f python</code>, upon which <code>make</code> outputs also:</p>
<pre class="lang-none prettyprint-override"><code>make[3]: *** [CMakeFiles/py_gdb_inspect.dir/build.make:58: CMakeFiles/py_gdb_inspect] Killed
make[3]: Leaving directory '/c/tmp/cmake_py_gdb/build'
make[2]: *** [CMakeFiles/Makefile2:110: CMakeFiles/py_gdb_inspect.dir/all] Error 2
make[2]: Leaving directory '/c/tmp/cmake_py_gdb/build'
make[1]: *** [CMakeFiles/Makefile2:117: CMakeFiles/py_gdb_inspect.dir/rule] Error 2
make[1]: Leaving directory '/c/tmp/cmake_py_gdb/build'
make: *** [Makefile:131: py_gdb_inspect] Error 2
</code></pre>
<p>My guess is that when the time comes for <code>gdb</code> to start "[New Thread 20724.0x3c1c]", the stdout/stderr file descriptors for the terminals get messed up somehow -- especially since we have a call chain here: make -> python (the MINGW64 one) -> gdb -> python (the one built into GDB) <sup>[1]</sup></p>
<p>Unfortunately I cannot figure out exactly what happens - so my question is: how can I force the <code>python3 py_gdb_inspect.py</code> command to complete, and not freeze, also when it is called via <code>make py_gdb_inspect</code>?</p>
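<p>(One thing worth ruling out, sketched below: when <code>make</code> runs the script, the spawned <code>gdb</code> inherits a non-interactive stdin and may block waiting on it; passing <code>stdin=subprocess.DEVNULL</code> to <code>Popen</code> detaches it explicitly. The demo uses a Python child as a stand-in for the real <code>gdb</code> command so it runs anywhere; whether this cures the freeze in this exact MSYS2 setup is an open question.)</p>

```python
import subprocess
import sys

# Stand-in for the gdb command line from py_gdb_inspect.py.
cmd = [sys.executable, "-c", "import sys; print(sys.stdin.isatty())"]

proc = subprocess.Popen(
    cmd,
    stdin=subprocess.DEVNULL,    # child cannot block reading our stdin
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    universal_newlines=True,
)
out, _ = proc.communicate()
print(out.strip())  # False: the child sees a detached, non-tty stdin
```
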
<p><strong><code>gen_files.sh</code></strong></p>
<pre class="lang-bash prettyprint-override"><code>read -r -d '' README <<'EOF'
To run this example:
mkdir C:/tmp/cmake_py_gdb
cd C:/tmp/cmake_py_gdb
# save this file as gen_files.sh there
bash gen_files.sh
mkdir build
cd build
cmake ../ -DCMAKE_BUILD_TYPE=Debug -G "MSYS Makefiles"
make
./myprogram.exe
# to trigger the freeze:
make py_gdb_inspect
EOF
cat > myprogram.c <<EOF
#include <stdio.h>

int add_two_vars(int a, int b) {
  int result = a + b;
  return result;
}

int mynumber = -1;

int main() {
  int a = 5;
  int b = 2;
  mynumber = add_two_vars(a, b);
  printf("Hello from myprogram.exe! mynumber is %d\n", mynumber);
  return 0;
}
EOF
cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.13)

find_package(Python3 COMPONENTS Interpreter)
message(STATUS
  "Python: version=${Python3_VERSION} interpreter=${Python3_EXECUTABLE}")
if(NOT Python3_FOUND)
  # find_package() will not abort the build if anything's missing.
  string(JOIN "\n" errmsg
    "  Python3 not found."
    "  - Python3_FOUND=${Python3_FOUND}"
  )
  message(FATAL_ERROR ${errmsg})
endif()

project(myprogram C)
set(CMAKE_C_STANDARD 11)

set(${PROJECT_NAME}_sources
  myprogram.c
)
add_executable(${PROJECT_NAME} ${${PROJECT_NAME}_sources})

add_custom_target(py_gdb_inspect
  ${Python3_EXECUTABLE} py_gdb_inspect.py
  WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
  COMMENT "run py_gdb_inspect.py in ${CMAKE_CURRENT_SOURCE_DIR}"
)
EOF
cat > py_gdb_inspect.py <<'EOF'
import sys, os
import subprocess

GDB = "D:/msys64/mingw64/bin/gdb-multiarch.exe"
# GDB scripts are known as "GDB command files";
# extension ".gdb" for "GDB's own command language"
GDB_TEST_SCRIPT = "test_cmdfile.gdb"
GDB_TEST_CMD = [GDB, "-q", "-batch", "-x", GDB_TEST_SCRIPT]

try:
    result = subprocess.Popen(
        GDB_TEST_CMD, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
        universal_newlines=True, cwd=os.getcwd(), bufsize=1, close_fds=True,
    )
except Exception as ex:
    print(ex)
    sys.exit(1)

while result.poll() is None:  # for line in iter(result.stdout.readline, b''):
    tline = result.stdout.readline().strip()
    print("> {}".format(tline))

print("py_gdb_inspect.py finished!")
EOF
cat > test_cmdfile.gdb <<'EOF'
python

import time

def start_engine(in_exe, printresp=False):
    print('start_engine')
    print('Starting debug engine/process: "{}"'.format(in_exe))
    response_1 = gdb.execute('file {}'.format(in_exe), to_string=True)
    if printresp: print("resp1: " + response_1.strip())
    # silent breakpoint (avoid printing "Thread 1 hit Breakpoint 1"): https://stackoverflow.com/q/58528800
    response_2 = gdb.execute("b main\ncommands\nsilent\nend", to_string=True)
    if printresp: print("resp2: " + response_2.strip())
    # FOR SOME REASON, now when we run gdb.execute('r') when script is called from make, it BLOCKS! from_tty=True does not help
    #time.sleep(1)
    #response_3 = gdb.execute('r', to_string=True, from_tty=True)
    #if printresp: print("resp3: " + response_3.strip())

def print_value():
    mynumber_val = gdb.execute('printf "%d", mynumber', to_string=True)
    print("mynumber_val: {}".format(mynumber_val))

end

pwd
cd build
pwd

python print("Before start_engine...{}".format(start_engine))
python start_engine("myprogram.exe", printresp=True)

#tty gdb_tty.txt
run
py print_value()
EOF
</code></pre>
<p><sup>[1] Sorry, I cannot resist pointing out the meme "Yo Dawg, I herd you like Python, so I put an Python in your gdb so you can Python while you Python." <code>:)</code> </sup></p>
| <python><subprocess><gdb> | 2023-10-07 23:55:40 | 1 | 5,938 | sdbbs |