| QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, may be null) |
|---|---|---|---|---|---|---|---|---|
77,828,089
| 812,102
|
Setting DynamicMap as stream source
|
<p>In the following MWE, clicking on a point of the scatter plot should show its index in the "debug" textbox. It does when the layout is <code>scatter * dmap</code>; it doesn't when the layout is simply <code>dmap</code>. Indeed, since <code>scatter</code> is the source of the <code>Selection1D</code> stream, its points are the ones being selected, not the points of the <code>DynamicMap</code>. Setting <code>source=dmap</code> doesn't seem to help.</p>
<p>This is not satisfactory because in my real use case I need the scatter plot to be updated after the first click (and a click on the second scatter plot must then be registered), hence the <code>DynamicMap</code>.</p>
<p>How do I get rid of <code>scatter</code> in <code>layout</code> so that <code>dmap</code> is the actual source of events?</p>
<pre><code>import numpy as np
import holoviews as hv
import panel as pn

data = np.random.randn(100, 2)
scatter = hv.Scatter(
    data
).opts(
    tools=["tap"]
)
debug = pn.widgets.TextInput(name="debug", value="")

def on_click(index):
    if index is None:
        return scatter
    debug.value = str(index)
    return scatter

stream = hv.streams.Selection1D(source=scatter)
dmap = hv.DynamicMap(on_click, streams=[stream])
# layout = scatter * dmap  # works
layout = dmap  # doesn't work
pn.Column(
    debug,
    layout
)
</code></pre>
|
<python><panel><holoviews><holoviz-panel>
|
2024-01-16 18:49:13
| 1
| 7,852
|
Skippy le Grand Gourou
|
77,827,982
| 11,357,623
|
Python generate documentation using pydoc
|
<p>I have a library with a nested folder structure, and my question is about generating documentation from its source files.</p>
<h1>Folder Structure</h1>
<pre><code>mylib
├── foo
│   ├── __init__.py
│   └── Foo.py
├── bar
│   ├── __init__.py
│   └── Bar.py
└── baz
    └── qux
        ├── file1.py
        ├── file2.py
        ├── file3.py
        ├── file4.py
        ├── file5.py
        └── __init__.py
</code></pre>
<h1>Documentation</h1>
<p>I was looking for a built-in solution (without installing third-party packages) to generate documentation for the Python library, and came across pydoc.</p>
<p><strong>Documentation</strong>: <a href="https://docs.python.org/3/library/pydoc.html" rel="nofollow noreferrer">https://docs.python.org/3/library/pydoc.html</a></p>
<p>From the documentation, I ran this command:</p>
<p><code>python3 -m pydoc mylib</code></p>
<h1>Issue:</h1>
<p>Only a single page is generated for <code>mylib</code>, and it merely lists the three sub-directories as links.</p>
<h1>Code</h1>
<p>As per the request, here's a sample code file.</p>
<pre><code>from typing import Any


class Foo:
    """
    :class: Foo
    A base `Foo` implementation.
    """
    def __init__(self, name: str = None, data: Any = None):
        """
        Constructor

        :param name: The name of the foo. Defaults to None.
        :type name: str
        :param data: The data. Defaults to None.
        :type data: Any
        """
        super().__init__()
        self.name = name
        self.data = data
</code></pre>
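<p>For reference, a minimal sketch of how the recursive generation could be scripted with the standard library alone (<code>pkgutil</code> plus <code>pydoc.writedoc</code>). Since <code>mylib</code> isn't available here, the demo walks the stdlib <code>json</code> package; substitute the real package name:</p>

```python
import importlib
import pkgutil
import pydoc

def write_docs(package_name):
    """Write one pydoc HTML page per module in the package, recursively."""
    pkg = importlib.import_module(package_name)
    pydoc.writedoc(package_name)  # top-level page, e.g. mylib.html
    written = [package_name]
    for info in pkgutil.walk_packages(pkg.__path__, prefix=package_name + "."):
        pydoc.writedoc(info.name)  # one page per submodule
        written.append(info.name)
    return written

# Demo on a stdlib package; replace "json" with "mylib" for the real library.
pages = write_docs("json")
print(pages)
```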
|
<python><pydoc>
|
2024-01-16 18:31:28
| 0
| 2,180
|
AppDeveloper
|
77,827,945
| 13,930,389
|
Get maximum and minimum theoretical output from xgboost classifier
|
<p>Assume I have a trained XGBClassifier object (the sklearn interface). How could I determine what is the highest possible output the model could provide, in terms of the output from model.predict_proba(input)[:,1]? That is, the probability of the class 1 in a binary classification problem.</p>
<p>Of course, I know the max predicted value found in my training data. However, given unseen data, I assume that there could be higher values, because of the high number of possible traversal paths on every tree that XGBoost constructed.</p>
<p>So how could I come up with an answer to this using Python?</p>
<p>What I've tried so far was basically to determine the min and max values of every feature I had in training data (which were all continuous, possibly containing np.nans) and use scipy.optimize.minimize, specifying the min/max bounds for every feature. My "cost" function would simply be the model predict_proba. Also, to account for the np.nans, I used a placeholder (-1) and made the search space account for that.</p>
<p>This was not successful at all. All the methods I tried gave different answers, and all of them were equal to or lower than my known max value in the training data.</p>
<p>Code I used, simplified:</p>
<pre class="lang-py prettyprint-override"><code>import joblib
import numpy as np
from scipy.optimize import minimize

minmax = {feature: {'min': df[feature].min(), 'max': df[feature].max()} for feature in features}
md = joblib.load('my_xgboost_model.joblib')

def cost(x):
    x[x < 0] = np.nan
    cst = md.predict_proba(np.array([x]))[:, 1]
    return -float(cst[0])

# random initial guess
x0 = np.random.randn(22)
x0[x0 <= -1] = -0.5

opt = minimize(
    cost,
    x0=x0,
    method='Nelder-Mead',
    tol=1e-15,
    bounds=[(-1, v['max']) for k, v in minmax.items()]
)
</code></pre>
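<p>For what it's worth, a search-free way to get a hedged bound: a binary XGBoost classifier's probability is <code>sigmoid(base_margin + sum of the selected leaf values)</code>, so summing each tree's largest (smallest) leaf bounds the maximum (minimum) attainable <code>predict_proba</code>. The per-tree leaf values can be read from <code>model.get_booster().trees_to_dataframe()</code> (rows where <code>Feature == "Leaf"</code>). Note the bound may not be attainable, since one tree's best leaf can be incompatible with another's. The sketch below just shows the arithmetic, on made-up leaf values:</p>

```python
import math

# Hypothetical per-tree leaf values; in practice collect them from
# model.get_booster().trees_to_dataframe() (rows where Feature == "Leaf").
tree_leaves = [
    [-0.3, 0.1, 0.45],
    [-0.2, 0.05, 0.3],
    [-0.5, 0.6],
]
base_margin = 0.0  # logit of the default base_score = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Upper/lower bound on the margin: take each tree's best/worst leaf.
max_margin = base_margin + sum(max(leaves) for leaves in tree_leaves)
min_margin = base_margin + sum(min(leaves) for leaves in tree_leaves)

print(sigmoid(max_margin), sigmoid(min_margin))  # ≈ 0.794 and ≈ 0.269
```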
|
<python><xgboost>
|
2024-01-16 18:23:58
| 1
| 1,781
|
eduardokapp
|
77,827,800
| 14,566,295
|
Failing to apply negative lookahead to select column using regex
|
<p>I have the below Pandas dataframe:</p>
<pre><code>import pandas as pd
dat1 = pd.DataFrame({'col1' : ['A', 'B', 'A', 'C'], 'col2_y' : ['Z', 'Z', 'X', 'Y']})
dat1.filter(regex = "(?!_y)")
</code></pre>
<p>I want to select the columns which do NOT contain <code>"_y"</code>. However, the above code returns the whole dataframe.</p>
<p>Could you please help to point out what went wrong with the above code?</p>
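<p>For reference, one likely fix (assuming the intent is to drop every column whose name contains <code>_y</code>): <code>DataFrame.filter</code> keeps a column when <code>re.search</code> finds a match anywhere in the name, and a bare <code>(?!_y)</code> succeeds at almost every position. Anchoring the lookahead and letting it scan the whole name behaves as intended:</p>

```python
import pandas as pd

dat1 = pd.DataFrame({'col1': ['A', 'B', 'A', 'C'],
                     'col2_y': ['Z', 'Z', 'X', 'Y']})

# "^" pins the check to the start of the name, and ".*_y" makes the
# negative lookahead reject a "_y" anywhere in the column name.
kept = dat1.filter(regex=r"^(?!.*_y)")
print(list(kept.columns))  # ['col1']
```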
|
<python><python-3.x><pandas><regex><python-re>
|
2024-01-16 17:58:59
| 1
| 1,679
|
Brian Smith
|
77,827,769
| 991,077
|
Applying conditional rate limiting decorator for async class method
|
<p>I am using <code>ccxt</code> library for downloading OHLC data from various crypto exchanges. This is done internally using REST requests. These exchanges have different request rate limits, so I have to apply rate limiting to my methods. For rate limiting I use the library <code>limiter</code>. Here is what I have at the moment:</p>
<pre><code>import ccxt.async_support as ccxt
from limiter import Limiter

limiter_binance = Limiter(rate=19, capacity=19, consume=1)
limiter_coinbase = Limiter(rate=3, capacity=3, consume=1)


class CCXTFeed:
    def __init__(self, exchange_name):
        self._exchange = ccxt.binance()
        self._exchange_name = exchange_name

    @limiter_binance
    async def __FetchSymbolOHLCChunk(self, symbol, timeframe, startTime, limit=1000):
        data = await self._exchange.fetch_ohlcv(symbol, timeframe, since=startTime, limit=limit)
        return data
</code></pre>
<p>What I now want is to somehow apply <code>limiter_coinbase</code> if the class was instantiated with <code>exchange_name="coinbase"</code>. I want to be able to choose the applied rate-limiter decorator based on the exchange I am currently working with. The exchange name is only known at runtime.</p>
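<p>One pattern worth sketching: skip decorator syntax and wrap the bound method in <code>__init__</code>, once the exchange name is known. The <code>make_limiter</code> below is a self-contained stand-in that mimics how a <code>limiter.Limiter</code> is applied (as a callable decorator), and <code>_fetch_chunk</code> is a hypothetical simplified fetch method:</p>

```python
import asyncio

# Stand-in for limiter.Limiter: any object usable as "@limiter" can be
# applied by hand the same way.
def make_limiter(name):
    def decorator(fn):
        async def wrapped(*args, **kwargs):
            # a real limiter would await its token bucket here
            return await fn(*args, **kwargs)
        wrapped.limiter_name = name
        return wrapped
    return decorator

LIMITERS = {
    "binance": make_limiter("binance"),
    "coinbase": make_limiter("coinbase"),
}

class CCXTFeed:
    def __init__(self, exchange_name):
        self._exchange_name = exchange_name
        # Pick the decorator at runtime and wrap the bound method once:
        self._fetch_chunk = LIMITERS[exchange_name](self._fetch_chunk)

    async def _fetch_chunk(self, symbol):  # hypothetical fetch method
        return f"{self._exchange_name}:{symbol}"

feed = CCXTFeed("coinbase")
result = asyncio.run(feed._fetch_chunk("BTC/USD"))
print(result)  # coinbase:BTC/USD
```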
|
<python><python-asyncio><python-decorators><rate-limiting><ccxt>
|
2024-01-16 17:53:44
| 1
| 734
|
shda
|
77,827,719
| 8,231,100
|
Python Memory Leak - Scikit-learn + Numpy
|
<p>I can't seem to fix my memory leak, but I managed to replicate it easily. It seems to happen when I use sklearn's GaussianMixture multiple times while varying the number of training pixels I give it.</p>
<p>This code <strong>does not</strong> leak:</p>
<pre><code>import os, psutil
from sklearn import mixture
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt

process = psutil.Process()

def get_size():
    process = psutil.Process()
    return process.memory_info().rss / 100000

sizes = []
for _ in range(100):
    training_pixels = np.random.random((500000, 3))
    gmm = mixture.GaussianMixture(
        n_components=5, covariance_type="full", random_state=40
    ).fit(training_pixels)
    size = get_size()
    sizes.append(size)
    del gmm, training_pixels

plt.plot(sizes)
</code></pre>
<p><a href="https://i.sstatic.net/dh2Jf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dh2Jf.png" alt="Weird oscillations, but why not" /></a></p>
<p>This code, however, leaks (notice that the number of training pixels is never larger than in the previous example; it can only be smaller than or equal to 500000):</p>
<pre><code>import os, psutil
from sklearn import mixture
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt

process = psutil.Process()

def get_size():
    process = psutil.Process()
    return process.memory_info().rss / 100000

sizes = []
for _ in range(100):
    training_pixels = np.random.random((np.random.randint(200000, 500000), 3))
    gmm = mixture.GaussianMixture(
        n_components=5, covariance_type="full", random_state=40
    ).fit(training_pixels)
    size = get_size()
    sizes.append(size)
    del gmm, training_pixels

plt.plot(sizes)
</code></pre>
<p><a href="https://i.sstatic.net/wPTi3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wPTi3.png" alt="enter image description here" /></a></p>
<p>Any idea why?</p>
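<p>Not a diagnosis, but a common mitigation when the allocator holds on to fragmented memory (which varying allocation sizes tend to provoke): run each fit in a short-lived worker process, so everything it allocated goes back to the OS when the worker exits. A dependency-free sketch of the pattern, with a random-list workload standing in for the GaussianMixture fit:</p>

```python
import random
from multiprocessing import get_context

def fit_once(n_pixels):
    # Everything allocated here lives only in the child process.
    data = [random.random() for _ in range(n_pixels)]
    return sum(data) / n_pixels  # stand-in for the fitted model's summary

# "fork" avoids re-importing this module in the child; on Windows/macOS
# use "spawn" together with an `if __name__ == "__main__":` guard.
ctx = get_context("fork")
means = []
for n in (2000, 5000, 3000):  # varying sizes, as in the leaking loop
    with ctx.Pool(1) as pool:
        means.append(pool.apply(fit_once, (n,)))
print(len(means))
```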
|
<python><numpy><scikit-learn><memory-management><memory-leaks>
|
2024-01-16 17:44:53
| 0
| 365
|
Adrien Nivaggioli
|
77,827,689
| 1,911,036
|
OpenCV python slow performance in Win10
|
<p>I have a simple code that works much slower in Win10, and fast as hell in Linux.
The code returns the sum of all pixel intensity for all frames in video file.
Unfortunately, I must run it in Win10, so I'd like to know the reason for such a slowdown.
I have tried different CPUs, but that's not the cause.</p>
<p>In Win10 I can process tested video file in 25 seconds.
<a href="https://i.sstatic.net/K5OQF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K5OQF.png" alt="enter image description here" /></a></p>
<p>In Linux, it takes only 1.5 seconds
<a href="https://i.sstatic.net/Kz7Ok.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kz7Ok.png" alt="enter image description here" /></a></p>
<p>What is the reason for such behavior? Can I speed up this somehow in Win10?</p>
<pre><code>import sys
import cv2
from tqdm import tqdm
import time

start = time.time()
file_name = sys.argv[1]
cap = cv2.VideoCapture(file_name)
success, image = cap.read()
count = 0
fps = cap.get(cv2.CAP_PROP_FPS)
N_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
width = int(cap.get(3))   # float `width`
height = int(cap.get(4))  # float `height`
w = 70
print(f"FPS = {fps}, N_frames = {N_frames}, WxH={width}x{height}")

with open("res_vid.txt", "w") as res:
    pbar = tqdm(total=N_frames)
    while success:
        count += 1
        gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # omitted in the original snippet
        res.write(f"{count:5d} {gray_img.sum():10d}\n")
        pbar.update(1)
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break
        success, image = cap.read()
    pbar.close()

cap.release()
cv2.destroyAllWindows()
end = time.time()
print(end - start)
</code></pre>
<p>UPDATE:</p>
<p>win10 getBuildInformation()</p>
<pre><code>
General configuration for OpenCV 4.9.0 =====================================
Version control: 4.9.0
Platform:
Timestamp: 2023-12-31T11:21:12Z
Host: Windows 10.0.17763 AMD64
CMake: 3.24.2
CMake generator: Visual Studio 14 2015
CMake build tool: MSBuild.exe
MSVC: 1900
Configuration: Debug Release
CPU/HW features:
Baseline: SSE SSE2 SSE3
requested: SSE3
Dispatched code generation: SSE4_1 SSE4_2 FP16 AVX AVX2
requested: SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX
SSE4_1 (16 files): + SSSE3 SSE4_1
SSE4_2 (1 files): + SSSE3 SSE4_1 POPCNT SSE4_2
FP16 (0 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
AVX (8 files): + SSSE3 SSE4_1 POPCNT SSE4_2 AVX
AVX2 (36 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2
C/C++:
Built as dynamic libs?: NO
C++ standard: 11
C++ Compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe (ver 19.0.24247.2)
C++ flags (Release): /DWIN32 /D_WINDOWS /W4 /GR /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /fp:precise /EHa /wd4127 /wd4251 /wd4324 /wd4275 /wd4512 /wd4589 /wd4819 /MP /O2 /Ob2 /DNDEBUG
C++ flags (Debug): /DWIN32 /D_WINDOWS /W4 /GR /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /fp:precise /EHa /wd4127 /wd4251 /wd4324 /wd4275 /wd4512 /wd4589 /wd4819 /MP /Zi /Ob0 /Od /RTC1
C Compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe
C flags (Release): /DWIN32 /D_WINDOWS /W3 /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /fp:precise /MP /O2 /Ob2 /DNDEBUG
C flags (Debug): /DWIN32 /D_WINDOWS /W3 /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /fp:precise /MP /Zi /Ob0 /Od /RTC1
Linker flags (Release): /machine:x64 /NODEFAULTLIB:atlthunk.lib /INCREMENTAL:NO /NODEFAULTLIB:libcmtd.lib /NODEFAULTLIB:libcpmtd.lib /NODEFAULTLIB:msvcrtd.lib
Linker flags (Debug): /machine:x64 /NODEFAULTLIB:atlthunk.lib /debug /INCREMENTAL /NODEFAULTLIB:libcmt.lib /NODEFAULTLIB:libcpmt.lib /NODEFAULTLIB:msvcrt.lib
ccache: NO
Precompiled headers: YES
Extra dependencies: wsock32 comctl32 gdi32 ole32 setupapi ws2_32
3rdparty dependencies: libprotobuf ade ittnotify libjpeg-turbo libwebp libpng libtiff libopenjp2 IlmImf zlib ippiw ippicv
OpenCV modules:
To be built: calib3d core dnn features2d flann gapi highgui imgcodecs imgproc ml objdetect photo python3 stitching video videoio
Disabled: java world
Disabled by dependency: -
Unavailable: python2 ts
Applications: -
Documentation: NO
Non-free algorithms: NO
Windows RT support: NO
GUI: WIN32UI
Win32 UI: YES
VTK support: NO
Media I/O:
ZLib: build (ver 1.3)
JPEG: build-libjpeg-turbo (ver 2.1.3-62)
SIMD Support Request: YES
SIMD Support: NO
WEBP: build (ver encoder: 0x020f)
PNG: build (ver 1.6.37)
TIFF: build (ver 42 - 4.2.0)
JPEG 2000: build (ver 2.5.0)
OpenEXR: build (ver 2.3.0)
HDR: YES
SUNRASTER: YES
PXM: YES
PFM: YES
Video I/O:
DC1394: NO
FFMPEG: YES (prebuilt binaries)
avcodec: YES (58.134.100)
avformat: YES (58.76.100)
avutil: YES (56.70.100)
swscale: YES (5.9.100)
avresample: YES (4.0.0)
GStreamer: NO
DirectShow: YES
Media Foundation: YES
DXVA: YES
Parallel framework: Concurrency
Trace: YES (with Intel ITT)
Other third-party libraries:
Intel IPP: 2021.11.0 [2021.11.0]
at: D:/a/opencv-python/opencv-python/_skbuild/win-amd64-3.7/cmake-build/3rdparty/ippicv/ippicv_win/icv
Intel IPP IW: sources (2021.11.0)
at: D:/a/opencv-python/opencv-python/_skbuild/win-amd64-3.7/cmake-build/3rdparty/ippicv/ippicv_win/iw
Lapack: NO
Eigen: NO
Custom HAL: NO
Protobuf: build (3.19.1)
Flatbuffers: builtin/3rdparty (23.5.9)
OpenCL: YES (NVD3D11)
Include path: D:/a/opencv-python/opencv-python/opencv/3rdparty/include/opencl/1.2
Link libraries: Dynamic load
Python 3:
Interpreter: C:/hostedtoolcache/windows/Python/3.7.9/x64/python.exe (ver 3.7.9)
Libraries: C:/hostedtoolcache/windows/Python/3.7.9/x64/libs/python37.lib (ver 3.7.9)
numpy: C:/hostedtoolcache/windows/Python/3.7.9/x64/lib/site-packages/numpy/core/include (ver 1.17.0)
install path: python/cv2/python-3
Python (for build): C:\\hostedtoolcache\\windows\\Python\\3.7.9\\x64\\python.exe
Java:
ant: NO
Java: YES (ver 1.8.0.392)
JNI: C:/hostedtoolcache/windows/Java_Temurin-Hotspot_jdk/8.0.392-8/x64/include C:/hostedtoolcache/windows/Java_Temurin-Hotspot_jdk/8.0.392-8/x64/include/win32 C:/hostedtoolcache/windows/Java_Temurin-Hotspot_jdk/8.0.392-8/x64/include
Java wrappers: NO
Java tests: NO
Install to: D:/a/opencv-python/opencv-python/_skbuild/win-amd64-3.7/cmake-install
-----------------------------------------------------------------
</code></pre>
<p>Linux getBuildInformation()</p>
<pre><code>
General configuration for OpenCV 4.9.0 =====================================
Version control: 4.9.0-dirty
Platform:
Timestamp: 2023-12-31T11:18:53Z
Host: Linux 5.15.0-1053-azure x86_64
CMake: 3.28.1
CMake generator: Unix Makefiles
CMake build tool: /bin/gmake
Configuration: Release
CPU/HW features:
Baseline: SSE SSE2 SSE3
requested: SSE3
Dispatched code generation: SSE4_1 SSE4_2 FP16 AVX AVX2 AVX512_SKX
requested: SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX
SSE4_1 (16 files): + SSSE3 SSE4_1
SSE4_2 (1 files): + SSSE3 SSE4_1 POPCNT SSE4_2
FP16 (0 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
AVX (8 files): + SSSE3 SSE4_1 POPCNT SSE4_2 AVX
AVX2 (36 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2
AVX512_SKX (5 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2 AVX_512F AVX512_COMMON AVX512_SKX
C/C++:
Built as dynamic libs?: NO
C++ standard: 11
C++ Compiler: /opt/rh/devtoolset-10/root/usr/bin/c++ (ver 10.2.1)
C++ flags (Release): -Wl,-strip-all -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
C++ flags (Debug): -Wl,-strip-all -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
C Compiler: /opt/rh/devtoolset-10/root/usr/bin/cc
C flags (Release): -Wl,-strip-all -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
C flags (Debug): -Wl,-strip-all -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
Linker flags (Release): -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a -L/ffmpeg_build/lib -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined
Linker flags (Debug): -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a -L/ffmpeg_build/lib -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined
ccache: YES
Precompiled headers: NO
Extra dependencies: /lib64/libopenblas.so Qt5::Core Qt5::Gui Qt5::Widgets Qt5::Test Qt5::Concurrent /usr/local/lib/libpng.so /lib64/libz.so dl m pthread rt
3rdparty dependencies: libprotobuf ade ittnotify libjpeg-turbo libwebp libtiff libopenjp2 IlmImf ippiw ippicv
OpenCV modules:
To be built: calib3d core dnn features2d flann gapi highgui imgcodecs imgproc ml objdetect photo python3 stitching video videoio
Disabled: world
Disabled by dependency: -
Unavailable: java python2 ts
Applications: -
Documentation: NO
Non-free algorithms: NO
GUI: QT5
QT: YES (ver 5.15.0 )
QT OpenGL support: NO
GTK+: NO
VTK support: NO
Media I/O:
ZLib: /lib64/libz.so (ver 1.2.7)
JPEG: libjpeg-turbo (ver 2.1.3-62)
WEBP: build (ver encoder: 0x020f)
PNG: /usr/local/lib/libpng.so (ver 1.6.40)
TIFF: build (ver 42 - 4.2.0)
JPEG 2000: build (ver 2.5.0)
OpenEXR: build (ver 2.3.0)
HDR: YES
SUNRASTER: YES
PXM: YES
PFM: YES
Video I/O:
DC1394: NO
FFMPEG: YES
avcodec: YES (59.37.100)
avformat: YES (59.27.100)
avutil: YES (57.28.100)
swscale: YES (6.7.100)
avresample: NO
GStreamer: NO
v4l/v4l2: YES (linux/videodev2.h)
Parallel framework: pthreads
Trace: YES (with Intel ITT)
Other third-party libraries:
Intel IPP: 2021.10.0 [2021.10.0]
at: /io/_skbuild/linux-x86_64-3.7/cmake-build/3rdparty/ippicv/ippicv_lnx/icv
Intel IPP IW: sources (2021.10.0)
at: /io/_skbuild/linux-x86_64-3.7/cmake-build/3rdparty/ippicv/ippicv_lnx/iw
VA: NO
Lapack: YES (/lib64/libopenblas.so)
Eigen: NO
Custom HAL: NO
Protobuf: build (3.19.1)
Flatbuffers: builtin/3rdparty (23.5.9)
OpenCL: YES (no extra features)
Include path: /io/opencv/3rdparty/include/opencl/1.2
Link libraries: Dynamic load
Python 3:
Interpreter: /opt/python/cp37-cp37m/bin/python3.7 (ver 3.7.17)
Libraries: libpython3.7m.a (ver 3.7.17)
numpy: /home/ci/.local/lib/python3.7/site-packages/numpy/core/include (ver 1.17.0)
install path: python/cv2/python-3
Python (for build): /opt/python/cp37-cp37m/bin/python3.7
Java:
ant: NO
Java: NO
JNI: NO
Java wrappers: NO
Java tests: NO
Install to: /io/_skbuild/linux-x86_64-3.7/cmake-install
</code></pre>
|
<python><linux><opencv><user-interface>
|
2024-01-16 17:39:35
| 0
| 333
|
Viktor
|
77,827,677
| 12,603,110
|
Sympy: how to symbolically perform a change of variables for derivatives?
|
<p>Given</p>
<pre><code>x = x_c * chi
t = t_c * tau
</code></pre>
<p>Expressing dx/dt in terms of chi and tau gives: (x_c/t_c) * dchi/dtau<br />
<a href="https://www.wikiwand.com/en/Nondimensionalization#Conventions" rel="nofollow noreferrer">https://www.wikiwand.com/en/Nondimensionalization#Conventions</a></p>
<p>How do I evaluate the dx/dt change of variables with sympy, given the expressions for x(chi) and t(tau)?</p>
<pre class="lang-py prettyprint-override"><code>from sympy import symbols, diff
# Define the symbols
x, chi, x_c, t, tau, t_c = symbols('x chi x_c t tau t_c', real=True)
# Define the expressions for x and t in terms of chi and tau
x_expr = x_c * chi
t_expr = t_c * tau
x_expr.diff(t_expr)
</code></pre>
<p>results in the somewhat obvious error:</p>
<pre><code>ValueError: Can't calculate derivative wrt t_c*tau.
</code></pre>
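<p>A hedged sketch of one way to express this in sympy: make <code>chi</code> an unevaluated <code>Function</code> of <code>tau</code>, write both <code>x</code> and <code>t</code> in terms of <code>tau</code>, and apply the chain rule dx/dt = (dx/dtau)/(dt/dtau) explicitly:</p>

```python
import sympy as sp

tau, x_c, t_c = sp.symbols('tau x_c t_c', positive=True)
chi = sp.Function('chi')(tau)

x = x_c * chi   # x = x_c * chi(tau)
t = t_c * tau   # t = t_c * tau

# Chain rule: dx/dt = (dx/dtau) / (dt/dtau)
dxdt = sp.diff(x, tau) / sp.diff(t, tau)
print(dxdt)
```

This yields <code>x_c/t_c</code> times the derivative of <code>chi</code> with respect to <code>tau</code>, matching the expected nondimensionalization result.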
|
<python><sympy><derivative>
|
2024-01-16 17:38:08
| 0
| 812
|
Yorai Levi
|
77,827,499
| 5,506,167
|
Why does Python use predictable hashes for numbers?
|
<p>The <a href="https://docs.python.org/3/reference/datamodel.html#object.__hash__" rel="nofollow noreferrer">Python doc</a> says:</p>
<blockquote>
<p>By default, the <code>__hash__()</code> values of <strong>str and bytes</strong> objects are “salted” with an unpredictable random value. Although they remain constant within an individual Python process, they are not predictable between repeated invocations of Python.
This is intended to provide protection against a denial-of-service caused by carefully chosen inputs that exploit the worst case performance of a dict insertion, <code>O(n^2)</code> complexity.</p>
</blockquote>
<p>But <a href="https://docs.python.org/3/library/stdtypes.html#hashing-of-numeric-types" rel="nofollow noreferrer">the algorithm used for computing hash()</a> of numbers is deterministic (the salt is only applied to str and bytes). Why can't an attacker use integers to mount a DoS attack for the same reason?</p>
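<p>For context, the determinism is easy to observe: on a 64-bit CPython build, the hash of a positive int is the value reduced modulo the Mersenne prime 2**61 - 1, with no salt, so colliding keys are trivial to construct:</p>

```python
# hash() of ints is unsalted: n and n + k*(2**61 - 1) hash identically
# on a 64-bit CPython build.
M = 2**61 - 1
colliding = [5 + k * M for k in range(4)]
print([hash(n) for n in colliding])  # all identical
```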
|
<python><security><hash>
|
2024-01-16 17:04:35
| 1
| 1,962
|
Saleh
|
77,827,315
| 4,340,985
|
How to use seaborn.clustermap with large (20 000 entries) data sets?
|
<p>I'm working a lot with <a href="https://seaborn.pydata.org/generated/seaborn.clustermap.html#seaborn-clustermap" rel="nofollow noreferrer"><code>sns.clustermap</code></a> and I'm quite enjoying it as an easy tool to get an overview over a large amount of data (time series data in my case, a few hundred entries mostly).</p>
<p>So I figured I'll enjoy it with some larger data too, but well, not so much...</p>
<p>I don't mind that it takes about 6 hours to calculate on my machine, but not getting any output for those 6 hours is rather annoying.</p>
<p><a href="https://i.sstatic.net/sEnig.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sEnig.jpg" alt="empty clustermap" /></a></p>
<p>It does show the dendrogram, but the field that should contain the matrix just remains empty. Now, assuming that seaborn doesn't have built-in routines for interpolating between pixels, this kinda makes sense: at 1 px per row, the matrix alone should be 20 000 pixels high for a 20 000-entry dataset. But if I increase my <code>figsize</code> to, say, <code>figsize=(400, 400)</code>, it takes a whole day and then crashes because it's out of memory and/or because my figure is too big (apparently, there's a hard limit on how big a png file can be).</p>
<p><strong>Is there any way to produce a clustermap with such large data sets?</strong></p>
<p>When I run it with a <em>small</em> image size, I can produce the shown, empty picture, e.g.:</p>
<pre><code>correlations = data.corr()
clusterplot = sns.clustermap(correlations, center=0,
                             linewidths=0.4, figsize=(150, 150))
clusterplot.savefig('Cluster.png', facecolor='white', transparent=False)
</code></pre>
<p>With <code>data</code> being a pandas Dataframe with <code>df.shape</code> = <code>(800, 20000)</code>.</p>
<p>Since it takes quite some time and doesn't produce an error, I assume <code>clusterplot</code> contains, well, the clusterplot, but how can I get to it? Assuming there is a way, can I also calculate the clustermap without any output? I.e. something like <code>clusterplot = sns.clustermap(correlations, center=0, showmap=False)</code> to save some time?</p>
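<p>One possible split, assuming the goal is to separate the slow clustering from the fragile rendering: compute the linkage yourself with scipy, reorder the matrix by the dendrogram leaves, and then draw it however you like (e.g. <code>plt.imshow</code>, or downsampled). <code>sns.clustermap</code> also accepts precomputed <code>row_linkage</code>/<code>col_linkage</code> arguments, so the clustering result can be reused. A small sketch with a stand-in matrix:</p>

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

rng = np.random.default_rng(0)
corr = np.corrcoef(rng.random((12, 50)))  # small stand-in for data.corr()

Z = linkage(corr, method="average")   # the expensive clustering step
order = leaves_list(Z)                # row/col order a clustermap would use
reordered = corr[np.ix_(order, order)]

# "reordered" can now be drawn cheaply (plt.imshow / plt.matshow), or the
# linkage passed back via sns.clustermap(..., row_linkage=Z, col_linkage=Z).
print(reordered.shape)
```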
|
<python><seaborn><cluster-analysis>
|
2024-01-16 16:31:40
| 2
| 2,668
|
JC_CL
|
77,827,305
| 2,289,030
|
How to prevent attribute re-access until all attributes have been accessed once (Python metaclass)?
|
<p>I have a list of attributes, say <code>["foo", "bar", "baz"]</code>, and I want to write a metaclass which ensures the following properties:</p>
<ul>
<li>They can all be accessed once, in any order</li>
<li>They cannot be accessed again until all have been accessed once</li>
</ul>
<p>For example, the following is how it might work:</p>
<pre><code>class TrackedAttrsMeta(type):
    @classmethod
    def with_attributes(cls, *args):
        cls._expected_attrs = set(args)
        return cls


class MyClass(metaclass=TrackedAttrsMeta.with_attributes("foo", "bar", "baz")):
    foo: "quux"
    bar: {"blah": -1}
    baz: 1


ob = MyClass()
for _ in range(10):
    print(ob.foo)  # Allowed
    print(ob.bar)  # Allowed
    print(ob.foo)  # Fails with an error because "baz" has not been accessed yet
    print(ob.baz)  # Would be allowed if not for above
    print(ob.foo)  # Would be allowed because we've accessed every attribute in the list once
</code></pre>
<p>The implementation of a custom <code>__getattribute__</code> might look like so:</p>
<pre><code>def _tracking_getattr(self, name):
    value = super().__getattribute__(name)
    remaining = list(expected_attrs - self._accessed_attrs)
    if name in self._accessed_attrs:
        raise ValueError(f"Already accessed '{name}', must access all of {remaining} before this is allowed again")
    self._accessed_attrs.add(name)
    # Once we've seen them all, clear out the record so we can see them again
    if self._accessed_attrs == expected_attrs:
        self._accessed_attrs = set()
    return value
</code></pre>
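<p>For what it's worth, the bookkeeping above can be made to work without a metaclass at all; here is a minimal self-contained sketch (a class-level attribute set in place of the <code>with_attributes</code> factory, plus a plain <code>__getattribute__</code> override) that shows the intended behaviour end to end:</p>

```python
class TrackedAttrs:
    # Minimal sketch without a metaclass: track accesses per instance.
    _expected_attrs = frozenset({"foo", "bar", "baz"})

    def __init__(self, **kwargs):
        object.__setattr__(self, "_accessed", set())
        for k, v in kwargs.items():
            object.__setattr__(self, k, v)

    def __getattribute__(self, name):
        value = object.__getattribute__(self, name)
        expected = type(self)._expected_attrs
        if name in expected:
            accessed = object.__getattribute__(self, "_accessed")
            if name in accessed:
                remaining = sorted(expected - accessed)
                raise ValueError(f"already accessed {name!r}; "
                                 f"still need {remaining}")
            accessed.add(name)
            if accessed == expected:
                accessed.clear()  # full cycle done, start over
        return value

ob = TrackedAttrs(foo="quux", bar={"blah": -1}, baz=1)
print(ob.foo, ob.bar)
try:
    ob.foo                       # "baz" not yet accessed -> error
except ValueError as e:
    print("blocked:", e)
print(ob.baz, ob.foo)            # cycle completed, foo allowed again
```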
|
<python><metaclass>
|
2024-01-16 16:30:22
| 2
| 968
|
ijustlovemath
|
77,827,172
| 1,980,630
|
Setup a conversion rate in pint in python
|
<p>I have this problem I'd like to solve with python pint.</p>
<ul>
<li>0.29 points / minutes</li>
<li>118 points / (17.35 usd)</li>
<li>How much is 5 minutes in usd ?</li>
</ul>
<p>I thus have two new abstract units, "points" and "usd", and I reuse the existing "time" unit "minute".</p>
<p>I can use simple mathematical operations for that :</p>
<pre><code>import pint
units = pint.UnitRegistry()
units.define("usd = [usd]")
units.define("points = [points]")
a = 0.29 * units.points / units.minutes
b = 118 * units.points / (17.35 * units.usd)
</code></pre>
<p>Now I want to convert 5 minutes to usd, but of course I need to do the computation myself. Do I multiply or divide by a? Same for b? The answer is one of these lines:</p>
<pre><code>5 * units.minutes * a * b
5 * units.minutes * a / b
5 * units.minutes / a * b
5 * units.minutes / a / b
</code></pre>
<p>Of course, only one of those lines gives the unit "usd", so I could compute all four and keep the one whose unit is usd. Turns out it's the second one.</p>
<p>I wish the library would find a path to convert my "5 minutes" using those relations in this context.</p>
<p>I have read about pint contexts but I didn't find any working solution. I was hoping to be able to write something like this:</p>
<pre><code>(5 * units.minutes).to(units.usd, mycontext_with_the_two_equations)
</code></pre>
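<p>Short of a pint context, the "try every combination" idea can at least be automated generically by tracking unit exponents by hand; this dependency-free sketch searches the four multiply/divide combinations and picks the one that lands on plain "usd" (using the numbers from the question):</p>

```python
def combine(q1, q2, sign):
    """Multiply (sign=+1) or divide (sign=-1) two (magnitude, units) pairs."""
    mag1, u1 = q1
    mag2, u2 = q2
    mag = mag1 * mag2 if sign > 0 else mag1 / mag2
    units = dict(u1)
    for unit, exp in u2.items():
        units[unit] = units.get(unit, 0) + sign * exp
        if units[unit] == 0:
            del units[unit]
    return mag, units

a = (0.29, {"points": 1, "minute": -1})       # 0.29 points / minute
b = (118 / 17.35, {"points": 1, "usd": -1})   # 118 points / 17.35 usd
start = (5.0, {"minute": 1})                  # 5 minutes

answer = None
for sa in (+1, -1):
    for sb in (+1, -1):
        mag, units = combine(combine(start, a, sa), b, sb)
        if units == {"usd": 1}:
            answer = mag          # only one combination lands on usd
print(answer)  # ≈ 0.213 usd  (5 min * a / b)
```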
|
<python><pint>
|
2024-01-16 16:09:29
| 1
| 768
|
Robert Vanden Eynde
|
77,827,144
| 7,051,394
|
Use a custom font with pycairo
|
<p>I'm using Pycairo to draw images with text on it, with a custom font.
This works well as long as that font is installed on the system.</p>
<p>However, I would like to distribute my package with my custom font attached as a data file (ttf or any other type), and to have my code refer to that file rather than assume the font is installed on the client's system.</p>
<p>In essence here's my code:</p>
<pre class="lang-py prettyprint-override"><code>import cairo

with cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 200) as surface:
    ctx = cairo.Context(surface)
    ctx.select_font_face("my-font")
    ctx.set_font_size(30)
    ctx.move_to(10, 50)
    ctx.show_text("Hello World")
    surface.write_to_png("hello.png")
</code></pre>
<p>Now instead of <code>ctx.select_font_face("my-font")</code>, I'd like to do something like <code>ctx.select_font_face("./assets/my-font.ttf")</code>.
I've read <a href="https://pycairo.readthedocs.io/en/latest/reference/text.html" rel="nofollow noreferrer">Pycairo's doc about <code>Text</code></a>, but I couldn't find any way to load a font from a file; it seems it can only use fonts installed on the system.</p>
<p>Is there a way to get Pycairo to load a font from a file?</p>
|
<python><fonts><cairo><pycairo>
|
2024-01-16 16:04:34
| 1
| 16,957
|
Right leg
|
77,827,003
| 21,440,451
|
Is it possible to specify route as an empty string in Azure function for Python?
|
<p>I'm trying to have a "vanilla path" in my API. I've tried decorating my function with each of the following.</p>
<pre><code>@app.route(route="")
@app.route(route=None)
@app.route()
@app.route(route="/")
</code></pre>
<p>I'm not interested in modifying the route via <code>function.json</code> or changing the base route from a central point. Googling turned up solutions based on manipulating that file, <a href="https://stackoverflow.com/questions/45357825/different-routes-in-azure-functions">like so</a>. However, I want to be able to have a route parameter without amending the base path.</p>
<p>If it's not possible, I wonder what the technical reason behind it is.</p>
|
<python><azure><azure-functions>
|
2024-01-16 15:44:02
| 1
| 492
|
danilo
|
77,826,967
| 22,466,650
|
How to reorganize two pairs of columns and choose the order as well?
|
<p>My input is a dataframe :</p>
<pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame({'node1': ['_abc-1', 'xyz-1', 'abc-1', '-xyz-2', 'xyz-2', 'abc-2'],
                   'p1': [1, 10, 3, 1, 2, 6],
                   'p2': [9, 2, 11, 4, 5, 3],
                   'node2': ['xyz-1', 'abc-1', '-xyz-1', 'def-2', 'def-2', '-xyz-1']})
# print(df)
#     node1  p1  p2   node2
# 0  _abc-1   1   9   xyz-1
# 1   xyz-1  10   2   abc-1
# 2   abc-1   3  11  -xyz-1
# 3  -xyz-2   1   4   def-2
# 4   xyz-2   2   5   def-2
# 5   abc-2   6   3  -xyz-1
</code></pre>
<p>I want to keep only the rows that contain <code>abc</code> and <code>xyz</code> in the nodes, and then reorganize them so that the same category of nodes ends up in the same column. I also want to be able to say, for example: let's put the <code>xyz</code> nodes in the left part (i.e., node1 and p1).
Thanks to you guys in an older post, I was able to build this code, but I'm not getting my expected output anymore and I don't know how to impose an order.</p>
<pre><code>mask1 = (
(
df["node1"].str.contains("abc", case=False) &
df["node2"].str.contains("xyz", case=False)
)
|
(
df["node2"].str.contains("abc", case=False) &
df["node1"].str.contains("xyz", case=False)
)
)
df = df.loc[mask1]
final = df[['node1', 'node2']].apply(np.sort, axis=1).apply(pd.Series)
final.columns = ['node1', 'node2']
mask2 = final.eq(df[['node1', 'node2']]).all(axis=1)
final[['p1', 'p2']] = (
df[['p1', 'p2']].where(mask2, other=df[['p2', 'p1']].values)
)
final = final[df.columns]
print(final)
# node1 p1 p2 node2
# 0 _abc-1 1 9 xyz-1
# 1 abc-1 2 10 xyz-1
# 2 -xyz-1 11 3 abc-1
# 5 -xyz-1 3 6 abc-2
</code></pre>
<p>I feel like my code could be shortened considerably and fixed at the same time.</p>
<p>Do you have any ideas to share? I'm open to any suggestion.</p>
<p>My expected output is this (if <code>xyz</code> is chosen to be at left) :</p>
<pre><code># node1 p1 p2 node2
# 0 xyz-1 9 1 _abc-1
# 1 xyz-1 10 2 abc-1
# 2 -xyz-1 11 3 abc-1
# 5 -xyz-1 3 6 abc-2
</code></pre>
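For what it's worth, here is one minimal sketch of the filtering-and-swapping step (the helper name <code>pair_left</code> is made up for illustration, and it assumes the chosen category never appears on both sides of a row):

```python
import pandas as pd

def pair_left(df, left="xyz"):
    # Keep rows whose two nodes contain 'abc' and 'xyz' (one on each side).
    has = lambda col, pat: df[col].str.contains(pat, case=False)
    mask = (has("node1", "abc") & has("node2", "xyz")) | \
           (has("node1", "xyz") & has("node2", "abc"))
    out = df[mask].copy()
    # Rows where the chosen category already sits in node1 stay put;
    # in the others, swap the nodes and their p-columns together.
    swap = ~out["node1"].str.contains(left, case=False)
    out.loc[swap, ["node1", "p1", "p2", "node2"]] = \
        out.loc[swap, ["node2", "p2", "p1", "node1"]].values
    return out

df = pd.DataFrame({"node1": ["_abc-1", "xyz-1", "abc-1", "-xyz-2", "xyz-2", "abc-2"],
                   "p1": [1, 10, 3, 1, 2, 6],
                   "p2": [9, 2, 11, 4, 5, 3],
                   "node2": ["xyz-1", "abc-1", "-xyz-1", "def-2", "def-2", "-xyz-1"]})
result = pair_left(df, left="xyz")
```

Passing <code>left="abc"</code> would flip the orientation instead of requiring a second code path.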
|
<python><pandas>
|
2024-01-16 15:38:24
| 1
| 1,085
|
VERBOSE
|
77,826,914
| 385,390
|
Register image onto a smaller, partial, image but with some common features
|
<p>I want to transform images of written text (taken by a mobile phone camera with different 3D rotation, scale etc.) so that they look "orthogonal" (by this I mean as if you read them from an ebook reader). My next step will be to OCR these images and I don't want the results to be affected by camera rotation.</p>
<p>All input images have a header and footer which is identical to all input images. That is, there is an identical logo at the top of each image. Let's say these are title, logo etc. In between header and footer there is various text content whose height (i.e. text rows) varies.</p>
<p>Each page looks like this,</p>
<pre><code>HEADER HEADER LOGO
-------------------
content |
content | height of content varies
... |
content |
------------------
FOOTER FOOTER LOGO
</code></pre>
<p>I have created two "templates" by taking one such input image page and applied manual transform to "orthogonalize" it and then isolated the header and footer by removing all text content in between.</p>
<p>So, I now have 2 template images <code>TH</code> and <code>TF</code> with no skew and correct rotation etc. These two template images contain only the header or footer and so they are much smaller in size than the input images which contain header, footer and text content in between.</p>
<p>Then, for each input image that needs to be registered I am calculating Feature descriptors (AKAZE/SIFT, etc.), using OpenCV, and matching those with features from the template <code>TH</code> (optionally with <code>TF</code> separately for robustness). A Homography Matrix (<code>H</code>) is calculated (via <code>findHomography()</code>) and applied to the input image via <code>warpPerspective()</code>.</p>
<p>The features are calculated correctly, there are matches between input image and <code>TH</code> (or <code>TF</code>).</p>
<p>The problem is when I apply the homography to the input image to unskew it. It shrinks it since the template only contains the header and footer and not text content and therefore it is tiny compared to the input page.</p>
<p>Ideally, I would like the Homography Matrix to not contain any scaling or translating because the template is so tiny. All I want it to contain is the rotation/skew information. For me, it would be enough to rotate/"unskew" the input image for better results of the next stage which is OCR.</p>
<p>I am using OpenCV (python or C++ not a problem).</p>
<p>Can I remove some items from the Homography Matrix in order to keep only rotation?</p>
<p>Or is the proposed pipeline flawed?</p>
<p>I guess the most generic question is: how can I register a large image using a much smaller image as a reference (which contains say, just a logo).</p>
<p>EDIT: I have remove the sample code, please use the code in the answer I posted below: <a href="https://stackoverflow.com/a/77834722/385390">https://stackoverflow.com/a/77834722/385390</a> .</p>
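One possible sketch of the "keep only rotation" idea (my own assumption, not part of the tutorial pipeline): take a polar decomposition of the upper-left 2×2 block of <code>H</code> to get the nearest pure rotation, discarding scale, shear and translation. This ignores the perspective terms in the bottom row, so it is only reasonable when the page is roughly fronto-parallel:

```python
import numpy as np

def rotation_only(H):
    # Polar decomposition of the upper-left 2x2 block: the nearest pure
    # rotation to that block, with scale/shear and translation discarded.
    U, _, Vt = np.linalg.svd(H[:2, :2])
    R = U @ Vt
    if np.linalg.det(R) < 0:      # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    out = np.eye(3)
    out[:2, :2] = R
    return out
```

The resulting 3×3 matrix can then be fed to <code>warpPerspective()</code> (or <code>warpAffine()</code> after dropping the last row) so the input keeps its size while only being rotated.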
|
<python><opencv><image-processing><computer-vision><affinetransform>
|
2024-01-16 15:29:58
| 1
| 1,181
|
bliako
|
77,826,628
| 2,891,152
|
Compute maximum number of consecutive identical integers in array column
|
<p>Consider the following:</p>
<pre><code>df = spark.createDataFrame([
[0, [1, 1, 4, 4, 4]],
[1, [3, 2, 2, -4]],
[2, [1, 1, 5, 5]],
[3, [-1, -9, -9, -9, -9]]]
,
['id', 'array_col']
)
df.show()
'''
+---+--------------------+
| id| array_col|
+---+--------------------+
| 0| [1, 1, 4, 4, 4]|
| 1| [3, 2, 2, -4]|
| 2| [1, 1, 5, 5]|
| 3|[-1, -9, -9, -9, -9]|
+---+--------------------+
'''
</code></pre>
<p>The desired result would be:</p>
<pre><code>'''
+---+--------------------+-------------------------+
| id| array_col|max_consecutive_identical|
+---+--------------------+-------------------------+
| 0| [1, 1, 4, 4, 4]| 3|
| 1| [3, 2, 2, -4]| 2|
| 2| [1, 1, 5, 5]| 2|
| 3|[-1, -9, -9, -9, -9]| 4|
+---+--------------------+-------------------------+
'''
</code></pre>
<p>I've tried solving it by joining the array as a string, then doing a <code>regexp_extract_all</code> according to the regex I found here <a href="https://stackoverflow.com/questions/6306098/regexp-match-repeated-characters">RegExp match repeated characters</a>.</p>
<pre><code>from pyspark.sql.functions import col, concat_ws, expr
df = df.withColumn('joined_str', concat_ws('', col('array_col')))
df.show()
'''
+---+--------------------+-----------+
| id| array_col| joined_str|
+---+--------------------+-----------+
| 0| [1, 1, 4, 4, 4]| 11444|
| 1| [3, 2, 2, -4]| 322-4|
| 2| [1, 1, 5, 5]| 1155|
| 3|[-1, -9, -9, -9, -9]| -1-9-9-9-9|
+---+--------------------+-----------+
'''
df = df.withColumn('regexp_extracted', expr('regexp_extract_all(joined_str, "([0-9])\1*", 1)'))
df.show()
'''
+---+--------------------+----------+----------+----------------+
| id| array_col| concat_ws|joined_str|regexp_extracted|
+---+--------------------+----------+----------+----------------+
| 0| [1, 1, 4, 4, 4]| 11444| 11444| [1, 1, 4, 4, 4]|
| 1| [3, 2, 2, -4]| 322-4| 322-4| [3, 2, 2, 4]|
| 2| [1, 1, 5, 5]| 1155| 1155| [1, 1, 5, 5]|
| 3|[-1, -9, -9, -9, -9]|-1-9-9-9-9|-1-9-9-9-9| [1, 9, 9, 9, 9]|
+---+--------------------+----------+----------+----------------+
'''
</code></pre>
<p>But then I got stuck because of 3 problems:</p>
<ul>
<li>Negative numbers would be a problem to match</li>
<li>Numbers with more than 1 digit would be a problem to match</li>
<li>Even if all numbers were between 0-9, the regex doesn't seem to be working</li>
</ul>
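For reference, the quantity itself is just a run-length maximum; a plain-Python sketch of the logic (not PySpark — e.g. for use inside a UDF if performance allows):

```python
from itertools import groupby

def max_consecutive_identical(xs):
    # groupby collapses runs of equal neighbours; take the longest run,
    # returning 0 for an empty array.
    return max((sum(1 for _ in g) for _, g in groupby(xs)), default=0)
```

In Spark this could be wrapped in a <code>udf</code>, or expressed with the <code>aggregate</code> higher-order function over the array column; either way it avoids the string/regex detour and its problems with negatives and multi-digit numbers.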
|
<python><sql><pyspark>
|
2024-01-16 14:51:26
| 2
| 456
|
L.B.
|
77,826,532
| 7,626,061
|
Fastest method to read from stdin in python, for competitive programming?
|
<p>A few days ago I stumbled upon a competitive programming problem which has never been solved in Python. That's because the problem has a very large input, and Python seems to be very slow at reading from stdin (standard input). The input consists of a sequence of integers separated by a single character, either a space (" ") or a line break ("\n").</p>
<p>The fastest input reading code I was able to create is the one below. It reads the entire binary input at once, then joins those sets of bytes to form integers. And it provides the result through an input iterator (using the <code>yield</code> clause). Knowing that the digits 0-9 have ASCII code of 48-57 and the integer separators use characters below this range, all it needs to check is if the byte is >= 48.</p>
<pre class="lang-py prettyprint-override"><code>from os import read
from os import fstat
def createInputIterator():
vs = 0
for v in read(0, fstat(0).st_size):
if v >= 48:
vs *= 10
vs += v - 48
else:
yield vs # another complete integer has been read
vs = 0
iterator = createInputIterator()
# numbers are consumed one by one like this:
# for i in range( ... ):
x = next(iterator)
</code></pre>
<p>Is it possible to make this code any faster?</p>
<p>PS: The code I have just needs a 5% performance improvement to finish the task within the expected time frame of 2s, and I have measured the input-reading part to be the bottleneck (it consumes more than 50% of the execution time).</p>
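One common alternative worth benchmarking (a sketch, not a guaranteed win): read everything once and let <code>bytes.split</code> and <code>int</code> do the work, since both run at C speed:

```python
import sys

def parse_ints(blob):
    # split() with no argument splits on any run of whitespace, covering
    # both ' ' and '\n' separators; int() is also implemented in C.
    return [int(tok) for tok in blob.split()]

def read_ints():
    # Read all of stdin as bytes in one call, then tokenize.
    return parse_ints(sys.stdin.buffer.read())
```

This also handles negative numbers, which the byte-wise <code>>= 48</code> trick does not; whether it beats the hand-rolled loop depends on the input shape, so both should be timed.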
|
<python>
|
2024-01-16 14:37:03
| 1
| 530
|
Murilo Perrone
|
77,826,406
| 9,779,999
|
want to install peft and accelerate compatible with torch 1.9.0+cu111
|
<p>I want to install peft and accelerate:</p>
<pre><code>!pip install -q git+https://github.com/huggingface/peft.git
!pip install -q git+https://github.com/huggingface/accelerate.git
</code></pre>
<p>But as my torch version is 1.9.0+cu111, the latest releases don't support it:</p>
<ul>
<li>The latest accelerate 0.27.0.dev0 requires torch>=1.10.0, which is not compatible with my torch 1.9.0+cu111.</li>
<li>The latest peft 0.7.2.dev0 requires torch>=1.13.0, which is not compatible with my torch 1.9.0+cu111.</li>
</ul>
<p>the command i am using is :</p>
<pre><code>!pip install -q git+https://github.com/huggingface/transformers.git
!pip install -q git+https://github.com/huggingface/peft.git
!pip install -q git+https://github.com/huggingface/accelerate.git
</code></pre>
<p>My torch and cuda are:</p>
<pre><code>import torch
print("torch.__version__", torch.__version__)
print("torch.version.cuda", torch.version.cuda)
print("torch.__config__", torch.__config__.show())
print("torch.cuda.device_count", torch.cuda.device_count()) # Print the number of CUDA devices
import torchvision
print("torchvision", torchvision.__version__)
torch.__version__ 1.9.0+cu111
torch.version.cuda 11.1
torch.__config__ PyTorch built with:
- C++ Version: 199711
- MSVC 192829337
- Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)
- OpenMP 2019
- CPU capability usage: AVX2
- CUDA Runtime 11.1
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
- CuDNN 8.0.5
- Magma 2.5.4
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=C:/w/b/windows/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/w/b/windows/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON,
torch.cuda.device_count 1
torchvision 0.10.0+cu111
</code></pre>
<p>I appreciate your help.</p>
|
<python><pytorch><gpu><large-language-model>
|
2024-01-16 14:18:24
| 1
| 1,669
|
yts61
|
77,826,404
| 707,519
|
else block not invoked when there is no exception in wrapped function
|
<p>In the code below, when no exception is raised from <code>bar</code>, I expect the <code>else</code> block of the decorator to be called, but this is not happening.</p>
<pre><code>#!/usr/bin/python
from functools import wraps
def handle_exceptions(func):
@wraps(func)
def wrapper(*arg, **kwargs):
try:
errorFlag = True
print("error flag set as true")
return func(*arg, **kwargs)
except Exception:
raise
else:
print("else block hit")
errorFlag = False
finally:
if errorFlag:
print("Error seen in finally")
return wrapper
def bar():
pass
@handle_exceptions
def foo():
bar()
foo()
</code></pre>
<p>Output when <code>bar</code> does not raise any exceptions:</p>
<pre><code>error flag set as true
Error seen in finally
</code></pre>
<p>Why is <code>print("else block hit")</code> missing here?</p>
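The behaviour can be reproduced without decorators (my own minimal sketch): leaving the <code>try</code> block via <code>return</code> skips <code>else</code> entirely, while <code>finally</code> still runs on the way out:

```python
def trace():
    hits = []
    try:
        hits.append("try")
        return hits              # returning here bypasses the else block
    except Exception:
        hits.append("except")
    else:
        hits.append("else")      # only reached if try falls through normally
    finally:
        hits.append("finally")   # always runs, even while returning

# The return value is evaluated first, then finally mutates the same list,
# so the caller receives ["try", "finally"] — "else" never appears.
```

So in the decorator, either move the bookkeeping out of <code>else</code>, or store the result and <code>return</code> only after the <code>try</code>/<code>except</code>/<code>else</code>/<code>finally</code> statement completes.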
|
<python><python-decorators><try-except>
|
2024-01-16 14:18:17
| 1
| 2,348
|
m.divya.mohan
|
77,826,204
| 12,518,487
|
Error when trying to remove Lease from file using Azure Python SDK
|
<p>I'm facing some issues when trying to break a lease on a file in an Azure Storage Account using the code below.</p>
<pre><code>from azure.identity import DefaultAzureCredential, AzureCliCredential, ChainedTokenCredential, ClientSecretCredential
from azure.storage.blob import BlobServiceClient, BlobClient, BlobLeaseClient
import datetime
dbutils.widgets.text('client_id', '')
dbutils.widgets.text('tenant_id', '')
dbutils.widgets.text('lake_secret', '')
client_id = dbutils.widgets.get('client_id')
tenant_id = dbutils.widgets.get('tenant_id')
lake_secret = dbutils.widgets.get('lake_secret')
azure_secret_credentials = ClientSecretCredential(client_secret=lake_secret, client_id=client_id, tenant_id=tenant_id)
credential = ChainedTokenCredential(AzureCliCredential(), azure_secret_credentials)
token = credential.get_token('.default')
api_version = '2023-08-03'
request_time = datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
headers = {
'x-ms-date' : request_time,
'x-ms-version' : api_version,
'Authorization' : azure_secret_credentials.get_token('https://storage.azure.com/.default'),
"x-ms-lease-action": "break",
"x-ms-version": "2021-12-02",
"Accept": "application/xml"
}
account_url = "<url>"
bsc = BlobServiceClient(account_url=account_url, credential=credential)
bc = bsc.get_blob_client('lake', '<path>')
blc = BlobLeaseClient(bc)
blc.break_lease(0, headers=headers)
</code></pre>
<p>As a result I get the message below:</p>
<pre><code>HttpResponseError: (MissingRequiredHeader) An HTTP header that's mandatory for this request is not specified.
</code></pre>
<p>I've already checked the documentation but it seems we're not missing any required header argument. Any ideas?</p>
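One detail worth noting (my observation, separate from the error itself): a Python dict literal with a duplicated key silently keeps only the last value, so the <code>headers</code> above actually send a single <code>x-ms-version</code>:

```python
# Duplicate keys in a dict literal are legal, but only the last one wins.
headers = {
    "x-ms-version": "2023-08-03",
    "x-ms-version": "2021-12-02",  # this entry overwrites the first
}
```

Also, <code>azure.identity</code>'s <code>get_token()</code> returns an <code>AccessToken</code> object rather than a plain string; if a raw <code>Authorization</code> header were really needed it would be <code>"Bearer " + token.token</code> (an assumption to verify against the SDK docs — normally the SDK builds that header itself and no manual headers are required).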
<p>Thanks!</p>
|
<python><azure><databricks><azure-storage-account>
|
2024-01-16 13:49:14
| 1
| 4,125
|
rmesteves
|
77,826,191
| 21,185,825
|
pyautogui - pass parameters to hotkey?
|
<p>The keys I want <strong>pyautogui</strong> to press <strong>come from a string</strong> that is parsed into an array.</p>
<p>so instead of doing this:</p>
<pre><code>pyautogui.hotkey('alt','f')
</code></pre>
<p>I tried this:</p>
<pre><code>keys = "alt+f" # comes from config file
sub_keys = keys.split("+")
pyautogui.hotkey(sub_keys)
</code></pre>
<p>I also tried this: <code>pyautogui.hotkey(*sub_keys)</code></p>
<p>tried a tuple:</p>
<pre><code>_sub_keys = tuple(sub_keys)
pyautogui.hotkey(*_sub_keys)
</code></pre>
<p>I also tried passing the raw string directly to <code>hotkey()</code>:</p>
<pre><code>pyautogui.hotkey(keys)
</code></pre>
<p>I also tried this:</p>
<pre><code>for key in sub_keys:
pyautogui.keyDown(key)
time.sleep(0.2)
for key in sub_keys[::-1]:
pyautogui.keyUp(key)
time.sleep(0.2)
</code></pre>
<p>Nothing works; <strong>it is as if the 'alt' key is pressed, but not the 'f' key</strong>.</p>
<p>what am I missing ?</p>
<p><a href="https://i.sstatic.net/mS4FV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mS4FV.png" alt="enter image description here" /></a></p>
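A small sketch of the parsing side (<code>parse_hotkey</code> is a made-up helper name); normalizing case and whitespace rules out config-file artifacts, and <code>hotkey()</code> also accepts an <code>interval</code> argument that sometimes helps when the second key seems to get dropped:

```python
def parse_hotkey(spec):
    # "alt+f" -> ["alt", "f"]; tolerate stray spaces and capital letters,
    # since key names in a config file may not match pyautogui's lowercase names.
    return [k.strip().lower() for k in spec.split("+") if k.strip()]

# usage sketch (assumes a working pyautogui install):
# pyautogui.hotkey(*parse_hotkey(keys), interval=0.1)
```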
|
<python><pyautogui>
|
2024-01-16 13:46:41
| 1
| 511
|
pf12345678910
|
77,826,156
| 1,815,577
|
PyInstaller with Playwright and Firefox
|
<p>I'm having trouble using PyInstaller to package a script that uses Playwright with the Firefox driver.</p>
<p>This is what the Python looks like (bare minimum):</p>
<pre class="lang-py prettyprint-override"><code>import time
from playwright.sync_api import sync_playwright, TimeoutError
with sync_playwright() as p:
browser = p.firefox.launch(headless=True)
context = browser.new_context(
viewport={"width": 1280, "height": 900}
)
page = context.new_page()
page.goto("https://www.python.org/")
time.sleep(10)
</code></pre>
<p>I have installed Firefox:</p>
<pre class="lang-bash prettyprint-override"><code>PLAYWRIGHT_BROWSERS_PATH=0 playwright install firefox
</code></pre>
<p>And I wrap up the script using this:</p>
<pre class="lang-bash prettyprint-override"><code>pyinstaller -y --onefile tool.py
</code></pre>
<p>When I launch the resulting executable I get an error about missing <code>libmozsandbox.so</code>:</p>
<pre><code>libmozsandbox.so: cannot open shared object file: No such file or directory
</code></pre>
<p>⚠️ If I replace Firefox with Chromium throughout then everything works fine. So this issue applies specifically to Firefox.</p>
<p>Things I have tried:</p>
<ul>
<li>installing Firefox with and without the <code>--with-deps</code> option;</li>
<li>installing Firefox with and without <code>PLAYWRIGHT_BROWSERS_PATH=0</code>; and</li>
<li>launching Firefox with the <code>--no-sandbox</code> argument by passing <code>args=['--no-sandbox']</code> to the <code>.launch()</code> method.</li>
</ul>
|
<python><firefox><playwright>
|
2024-01-16 13:39:45
| 1
| 6,802
|
datawookie
|
77,826,141
| 22,054,564
|
Status is 200 OK but the response is empty in Azure Functions (.NET 6) with Python Executable Application
|
<p>Python File:</p>
<pre class="lang-py prettyprint-override"><code>num1 = 1.5
num2 = 6.3
sum = num1 + num2
print('The sum of {0} and {1} is {2}'.format(num1, num2, sum))
</code></pre>
<p>Azure Functions - .NET 6:</p>
<pre><code>using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using System.Diagnostics;
namespace PyScriptTrigger
{
public static class HttpTrigger
{
[FunctionName("HttpTrigger")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
string result = "";
ProcessStartInfo start = new ProcessStartInfo();
start.FileName = @"C:\Users\HARIKA556\source\repos\Krishna\PyScriptTrigger\test.exe";
start.UseShellExecute = false;
start.RedirectStandardOutput = true;
using (Process process = Process.Start(start))
{
using (StreamReader reader = process.StandardOutput)
{
log.LogInformation("File Starting Executed");
result = await reader.ReadToEndAsync();
Console.WriteLine(result);
log.LogInformation("File executed");
}
};
Console.WriteLine("Out of the Process block: " + result);
return new OkObjectResult(result);
}
}
}
</code></pre>
<p>Not sure where I went wrong; it is not giving the response either locally or in the Azure Portal Function App.</p>
<p>I'm not getting the result value in the output: the status is 200 OK but the response body is empty.</p>
<p>Expected result in Postman or the Azure Function Portal App:
<code>The sum of 1.5 and 6.3 is 7.8</code></p>
|
<python><c#><azure><azure-functions><.net-6.0>
|
2024-01-16 13:36:56
| 0
| 837
|
VivekAnandChakravarthy
|
77,826,003
| 6,402,099
|
Custom class that inherits LGBMClassifier doesn't work: KeyError: 'random_state'
|
<p>I create a random dataset to train a <code>LGBM</code> model:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.datasets import make_classification
X, y = make_classification()
</code></pre>
<p>Then I train and predict the original LGBM model with no issues:</p>
<pre class="lang-py prettyprint-override"><code>from lightgbm import LGBMClassifier
clf = LGBMClassifier()
clf.fit(X, y=y)
clf.predict(X)
clf.predict_proba(X)
</code></pre>
<p>But when I create a custom class of <code>LGBMClassifier</code>, I get an error:</p>
<pre class="lang-py prettyprint-override"><code>class MyClf(LGBMClassifier):
def __init__(self, **kwargs):
super().__init__(**kwargs)
def fit(self, X, y=None):
return super().fit(X, y=y)
def predict(self, X):
return super().predict(X)
def predict_proba(self, X):
return super().predict_proba(X)
clf = MyClf()
clf.fit(X, y=y)
clf.predict(X)
clf.predict_proba(X)
</code></pre>
<p>In <code>clf.fit</code>:</p>
<pre><code>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[15], line 15
12 return super().predict_proba(X)
14 clf = MyClf()
---> 15 clf.fit(X, y=y)
16 clf.predict(X)
17 clf.predict_proba(X)
Cell In[15], line 6
5 def fit(self, X, y=None):
----> 6 return super().fit(X, y=y)
File lib/python3.9/site-packages/lightgbm/sklearn.py:890, in LGBMClassifier.fit(self, X, y, sample_weight, init_score, eval_set, eval_names, eval_sample_weight, eval_class_weight, eval_init_score, eval_metric, early_stopping_rounds, verbose, feature_name, categorical_feature, callbacks, init_model)
887 else:
888 valid_sets[i] = (valid_x, self._le.transform(valid_y))
--> 890 super().fit(X, _y, sample_weight=sample_weight, init_score=init_score, eval_set=valid_sets,
891 eval_names=eval_names, eval_sample_weight=eval_sample_weight,
892 eval_class_weight=eval_class_weight, eval_init_score=eval_init_score,
893 eval_metric=eval_metric, early_stopping_rounds=early_stopping_rounds,
894 verbose=verbose, feature_name=feature_name, categorical_feature=categorical_feature,
895 callbacks=callbacks, init_model=init_model)
896 return self
File lib/python3.9/site-packages/lightgbm/sklearn.py:570, in LGBMModel.fit(self, X, y, sample_weight, init_score, group, eval_set, eval_names, eval_sample_weight, eval_class_weight, eval_init_score, eval_group, eval_metric, early_stopping_rounds, verbose, feature_name, categorical_feature, callbacks, init_model)
568 params.pop('n_estimators', None)
569 params.pop('class_weight', None)
--> 570 if isinstance(params['random_state'], np.random.RandomState):
571 params['random_state'] = params['random_state'].randint(np.iinfo(np.int32).max)
572 for alias in _ConfigAliases.get('objective'):
KeyError: 'random_state'
</code></pre>
<p>I couldn't find the issue even though I have inspected the source code of <code>LGBMClassifier</code>.</p>
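The mechanism can be reproduced without LightGBM (a sketch with made-up class names): scikit-learn-style estimators build <code>get_params()</code> from the <em>signature</em> of <code>__init__</code>, and a <code>**kwargs</code>-only signature exposes no parameter names at all, which would explain why <code>params['random_state']</code> is missing when the parent's <code>fit</code> reads back its own parameters:

```python
import inspect

class Base:
    def __init__(self, random_state=None, n_estimators=100):
        self.random_state = random_state
        self.n_estimators = n_estimators

class Child(Base):
    # Mirrors MyClf: the signature that gets introspected is now just **kwargs.
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

def visible_params(cls):
    # Roughly what sklearn-style get_params does: read the __init__
    # signature, dropping 'self' and the **kwargs catch-all.
    sig = inspect.signature(cls.__init__)
    return [n for n, p in sig.parameters.items()
            if n != "self" and p.kind is not p.VAR_KEYWORD]
```

If that is indeed the cause, spelling out the parent's parameters explicitly in <code>MyClf.__init__</code> (or simply not overriding <code>__init__</code>) should make the error go away.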
|
<python><scikit-learn><lightgbm>
|
2024-01-16 13:16:32
| 2
| 1,775
|
emremrah
|
77,825,815
| 1,738,879
|
`ModuleNotFoundError` with custom Python project layout
|
<p>I'm trying to set up a Python project following the structure shown <a href="https://realpython.com/python-application-layouts/#application-with-internal-packages" rel="nofollow noreferrer">here</a>. In my case, I have:</p>
<pre><code>.
└── helloworld/
├── hello/
│ ├── hello.py
│ └── __init__.py
├── __init__.py
├── main.py
└── world/
├── __init__.py
├── utils.py
└── world.py
</code></pre>
<p>The contents of <code>hello/hello.py</code>:</p>
<pre><code>from helloworld.world.utils import get_number
def say_hello(greeting):
return (greeting.upper(), get_number())
</code></pre>
<p>Contents of <code>world/world.py</code>:</p>
<pre><code>def say_world(place):
return place.upper()
</code></pre>
<p>Contents of <code>world/utils.py</code>:</p>
<pre><code>from random import randrange
def get_number():
return randrange(10)
</code></pre>
<p>And in <code>main.py</code> I simply have an import so far:</p>
<pre><code>from hello.hello import say_hello
</code></pre>
<p>I think I have all the necessary <code>__init__.py</code> files, including in the project folder, which should make it be treated as a package, but I can't figure out how to work around the import error I'm getting:</p>
<pre><code>$ python3 helloworld/main.py
Traceback (most recent call last):
File "helloworld/main.py", line 3, in <module>
from hello.hello import say_hello
File "/home/foobar/proj-layout/helloworld/hello/hello.py", line 1, in <module>
from helloworld.world.utils import get_number
ModuleNotFoundError: No module named 'helloworld'
</code></pre>
<p>What am I doing wrong here, and how can a project structure like this one work with <code>import helloworld</code>?</p>
<p>I don't want to have this project installed for now, so something like <code>setup.py -e</code> would not be the best solution.</p>
<p>I know similar questions have been asked before, but none of the ones I could find seemed to work in my case. Maybe there's something on SO that I have overseen, but I honestly couldn't find it before posting this question.</p>
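As a sanity check (a sketch with hypothetical file contents, not a recommendation over running the project properly with <code>python -m</code>): absolute imports like <code>helloworld.world.utils</code> resolve only when the <em>parent</em> of <code>helloworld/</code> is on <code>sys.path</code>. Running <code>python helloworld/main.py</code> puts <code>helloworld/</code> itself there instead, which is exactly the failure observed; <code>python -m helloworld.main</code> from the parent directory adds the right entry:

```python
import os
import sys
import tempfile

def make_pkg(root):
    # Recreate a minimal slice of the layout from the question.
    pkg = os.path.join(root, "helloworld")
    os.makedirs(os.path.join(pkg, "hello"))
    for rel in ("__init__.py", os.path.join("hello", "__init__.py")):
        open(os.path.join(pkg, rel), "w").close()
    with open(os.path.join(pkg, "hello", "hello.py"), "w") as f:
        f.write("def say_hello(greeting):\n    return greeting.upper()\n")

root = tempfile.mkdtemp()
make_pkg(root)
sys.path.insert(0, root)  # the *parent* of helloworld/, not helloworld/ itself
from helloworld.hello.hello import say_hello
```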
|
<python><python-3.x><project-layout>
|
2024-01-16 12:41:51
| 2
| 1,925
|
PedroA
|
77,825,701
| 7,298,014
|
Python Efficient conversion of time strings in multiple dimensions to datetime objects
|
<p>I want to convert the string date values inside an xarray DataArray to a datetime index or Python datetime objects.
For example, let's take the DataArray:</p>
<pre><code>import numpy as np
import xarray as xr
import pandas as pd
b = np.asarray(['2020-01-01T09:41:54.647000Z', '2020-01-01T09:41:55.487000Z',
'2020-01-01T09:41:56.327000Z', '2020-01-01T09:41:57.167000Z',
'2020-01-01T09:41:58.007000Z', '2020-01-01T09:41:58.847000Z',
'2020-01-01T09:41:59.687000Z', '2020-01-01T09:42:00.527000Z',
'2020-01-01T09:42:01.367000Z', '2020-01-01T09:42:02.207000Z',
'2020-01-01T09:42:03.047000Z', '2020-01-01T09:42:03.887000Z'],
dtype=object)
ds = xr.Dataset(data_vars={'time_utc': (['x', 'y', 'z'], b.reshape((2,2,3)))},
coords={'x': [0,1], 'y': [2,3], 'z':[0,1,2]})
</code></pre>
<p>In principle, I can convert a 1-D array using <code>pd.to_datetime()</code>. For example:</p>
<pre><code>dates = pd.to_datetime(b)
</code></pre>
<p>But my question is: how do I apply this function to an xarray DataArray? I tried using <code>xr.apply_ufunc</code> as follows:</p>
<pre><code>xr.apply_ufunc(pd.to_datetime, ds.time_utc)
</code></pre>
<p>but it throws an error</p>
<blockquote>
<p>TypeError: arg must be a string, datetime, list, tuple, 1-d array, or
Series</p>
</blockquote>
<p>Most likely, <code>pd.to_datetime</code> is not vectorized, so I can set:</p>
<pre><code>xr.apply_ufunc(pd.to_datetime, ds.time_utc, vectorize=True)
</code></pre>
<p>But then it is very slow (I have a large dataset). Is there some internal xarray utility for fast datetime conversion similar to that in Pandas?</p>
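One workaround sketch (assuming all strings share the same ISO/UTC format, and the helper name is made up): flatten, call the vectorized pandas parser once, and restore the shape. The result can then be wrapped back into the DataArray with its original dims and coords:

```python
import numpy as np
import pandas as pd

def to_datetime_nd(values):
    # pd.to_datetime only accepts 1-D input, so parse the raveled array
    # in a single vectorized call and reshape afterwards.
    arr = np.asarray(values)
    flat = pd.to_datetime(arr.ravel())
    return flat.values.reshape(arr.shape)
```

Usage would look something like <code>ds["time_utc"] = (ds.time_utc.dims, to_datetime_nd(ds.time_utc.values))</code>; this pays the parsing cost once rather than per element as <code>vectorize=True</code> does.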
|
<python><pandas><datetime><python-xarray>
|
2024-01-16 12:18:19
| 1
| 1,652
|
Vinod Kumar
|
77,825,699
| 3,801,239
|
OpenCV Python imutils.grab_contours and sorted methods are not available in the Java/Scala API; my Scala implementation doesn't produce the same results
|
<p>I am new to OpenCV and following this <a href="https://pyimagesearch.com/2015/11/30/detecting-machine-readable-zones-in-passport-images/" rel="nofollow noreferrer">tutorial</a>.
Here is the Python code:</p>
<pre><code> from imutils import paths
import numpy as np
import imutils
import cv2
import os
rectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (13, 5))
sqKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 21))
image = cv2.imread("src/main/resources/passport/passport.jpg")
image = imutils.resize(image, height=600)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (3, 3), 0)
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, rectKernel)
gradX = cv2.Sobel(blackhat, ddepth=cv2.CV_32F, dx=1, dy=0, ksize=-1)
gradX = np.absolute(gradX)
(minVal, maxVal) = (np.min(gradX), np.max(gradX))
gradX = (255 * ((gradX - minVal) / (maxVal - minVal))).astype("uint8")
gradX = cv2.morphologyEx(gradX, cv2.MORPH_CLOSE, rectKernel)
thresh = cv2.threshold(gradX, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, sqKernel)
thresh = cv2.erode(thresh, None, iterations=4)
p = int(image.shape[1] * 0.05)
thresh[:, 0:p] = 0
thresh[:, image.shape[1] - p:] = 0
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
for c in cnts:
(x, y, w, h) = cv2.boundingRect(c)
ar = w / float(h)
crWidth = w / float(gray.shape[1])
if ar > 5 and crWidth > 0.75:
pX = int((x + w) * 0.03)
pY = int((y + h) * 0.03)
(x, y) = (x - pX, y - pY)
(w, h) = (w + (pX * 2), h + (pY * 2))
roi = image[y:y + h, x:x + w].copy()
cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
break
cv2.imshow("Image", image)
cv2.imshow("ROI", roi)
cv2.imwrite("src/main/resources/passport/mrz.jpg", roi)
</code></pre>
<p>And here is the converted Scala code:</p>
<pre><code>System.loadLibrary(Core.NATIVE_LIBRARY_NAME)

    // Initialize a rectangular and square structuring kernel
val rectKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(25, 7))
val sqKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(21, 21))
val imagePath = config.getString("passport.path")
val image = Imgcodecs.imread(imagePath)
val resizedImage = new Mat()
val targetHeight = 600
val aspectRatio = image.width().toDouble / image.height()
val targetWidth = Math.round(targetHeight * aspectRatio)
Imgproc.resize(image, resizedImage, new Size(targetWidth, targetHeight), 0, 0, Imgproc.INTER_AREA)
val grayScale = new Mat()
Imgproc.cvtColor(resizedImage, grayScale, Imgproc.COLOR_BGR2GRAY)
val grayBlurred = new Mat()
Imgproc.GaussianBlur(grayScale, grayBlurred, new Size(3, 3), 0)
val blackhat = new Mat()
Imgproc.morphologyEx(grayBlurred, blackhat, Imgproc.MORPH_BLACKHAT, rectKernel)
val blacHatImagePath = "src/main/resources/ocr/passport/passport-blackhat.jpeg"
Imgcodecs.imwrite(blacHatImagePath, blackhat)
val gradX = new Mat()
Imgproc.Sobel(blackhat, gradX, CvType.CV_32F, 1, 0, -1) // Example kernel size
Core.absdiff(gradX, new Scalar(0), gradX) // Calculate absolute values
val minVal = Core.minMaxLoc(gradX).minVal
val maxVal = Core.minMaxLoc(gradX).maxVal
Core.normalize(gradX, gradX, 0, 255, Core.NORM_MINMAX, CvType.CV_8U)
Imgproc.morphologyEx(gradX, gradX, Imgproc.MORPH_CLOSE, rectKernel) // Apply closing operation
val thresh = new Mat()
Imgproc.threshold(gradX, thresh, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU) // Apply Otsu's thresholding
Imgproc.morphologyEx(thresh, thresh, Imgproc.MORPH_CLOSE, sqKernel)
Imgproc.erode(thresh, thresh, new Mat(), new Point(-1, -1), 4)
val numColumnsToZero = (resizedImage.cols() * 0.05).toInt
val leftRect = new Rect(0, 0, numColumnsToZero, resizedImage.rows())
val rightRect = new Rect(resizedImage.cols() - numColumnsToZero, 0, numColumnsToZero, resizedImage.rows())
Imgproc.rectangle(thresh, leftRect.tl(), leftRect.br(), Scalar.all(0), Core.FILLED)
Imgproc.rectangle(thresh, rightRect.tl(), rightRect.br(), Scalar.all(0), Core.FILLED)
val borderImagePath = "src/main/resources/ocr/passport/passport-thresh.jpeg"
Imgcodecs.imwrite(borderImagePath, thresh)
val contours = new ArrayBuffer[MatOfPoint]() // Use ArrayBuffer to store contours
Imgproc.findContours(thresh.clone(), contours.asJava, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE)
val sortedContours = contours.sortBy(-Imgproc.contourArea(_)) // Scala's pattern matching and '-' for descending order
var breakLoop = false
for (c <- sortedContours if !breakLoop) {
val rect = Imgproc.boundingRect(c)
var x = rect.x
var y = rect.y
var w = rect.width
var h = rect.height
// Calculate the aspect ratio and coverage ratio
val aspectRatio = w.toDouble / h
val coverageRatio = w.toDouble / grayScale.cols() // Assuming gray is the original image
var roi = new Mat()
if (aspectRatio > 5 && coverageRatio > 0.75) {
val paddingX = ((x + w)* 0.03).toInt
val paddingY = ((y + h)* 0.03).toInt
x = x - paddingX
y = y - paddingY
          w = w + (paddingX * 2)
h = h + (paddingY * 2)
// Extract ROI and draw a rectangle
roi = image.submat(y, y + h, x, x + w)
Imgproc.rectangle(resizedImage, new Point(x, y), new Point(x + w, y + h), new Scalar(0, 255, 0), 2)
breakLoop = true
}
val roiImagePath = "src/main/resources/ocr/passport/passport-mrz-detected.jpeg"
Imgcodecs.imwrite(roiImagePath, roi)
}
</code></pre>
<p>I am using the same image, but the two versions print different sorted contour values. Also, the Python code detects the MRZ region correctly while the Scala code detects the wrong part of the image.</p>
<p>Here are the output values of the Python code:</p>
<pre><code>sorted contours is [array([[[ 22, 563]],

    [[ 22, 583]],
[[372, 583]],
[[372, 582]],
[[373, 581]],
[[373, 565]],
[[219, 565]],
[[217, 563]]], dtype=int32), array([[[192, 412]],
[[191, 413]],
[[156, 413]],
[[156, 450]],
[[157, 451]],
[[157, 501]],
[[158, 502]],
[[163, 502]],
[[164, 503]],
[[164, 518]],
[[165, 519]],
[[171, 519]],
[[172, 520]],
[[172, 523]],
[[217, 523]],
[[217, 479]],
[[218, 478]],
[[218, 458]],
[[219, 457]],
[[232, 457]],
[[232, 456]],
[[217, 456]],
[[216, 455]],
[[216, 413]],
[[217, 412]],
[[247, 412]]], dtype=int32), array([[[204, 252]],
[[204, 265]],
[[357, 265]],
[[358, 264]],
[[386, 264]],
[[387, 265]],
[[409, 265]],
[[409, 252]],
[[395, 252]],
[[394, 253]],
[[321, 253]],
[[320, 252]]], dtype=int32), array([[[ 83, 252]],
[[ 82, 253]],
[[ 60, 253]],
[[ 59, 254]],
[[ 59, 255]],
[[ 58, 256]],
[[ 58, 262]],
[[ 60, 264]],
[[ 61, 264]],
[[ 62, 265]],
[[166, 265]],
[[167, 264]],
[[166, 263]],
[[166, 253]],
[[152, 253]],
[[151, 252]]], dtype=int32), array([[[369, 460]],
[[369, 481]],
[[368, 482]],
[[352, 482]],
[[351, 483]],
[[349, 483]],
[[349, 495]],
[[405, 495]],
[[405, 488]],
[[404, 487]],
[[404, 485]],
[[397, 485]],
[[396, 484]],
[[396, 474]],
[[371, 474]],
[[370, 473]],
[[370, 463]],
[[369, 462]]], dtype=int32), array([[[335, 323]],
[[335, 324]],
[[334, 325]],
[[319, 325]],
[[319, 330]],
[[333, 330]],
[[334, 331]],
[[334, 352]],
[[335, 353]],
[[389, 353]],
[[389, 351]],
[[348, 351]],
[[347, 350]],
[[347, 323]]], dtype=int32), array([[[ 81, 500]],
[[ 81, 507]],
[[ 82, 508]],
[[ 82, 520]],
[[ 86, 520]],
[[ 86, 505]],
[[ 85, 504]],
[[ 85, 503]],
[[ 84, 502]],
[[ 84, 501]],
[[ 83, 500]]], dtype=int32), array([[[ 85, 355]],
[[ 85, 356]],
[[ 97, 356]],
[[ 97, 355]]], dtype=int32), array([[[ 61, 353]],
[[ 61, 354]],
[[ 68, 354]],
[[ 68, 353]]], dtype=int32), array([[[108, 427]],
[[108, 428]],
[[109, 428]],
[[110, 427]]], dtype=int32), array([[[ 51, 429]],
[[ 53, 429]]], dtype=int32), array([[[178, 326]],
[[178, 331]]], dtype=int32)]
p is 22
gardx is (600, 447)
minVal is 0.0
maxVal is 2469.0
pX is 11
py is 17
x is 11
y is 546
w is 374
h is 55
ar is 16.761904761904763
crWidth is 0.7874720357941835
c is (8, 1)
roi is [[[255 250 248]
[255 254 246]
[254 253 245]
...
[248 244 245]
[246 245 245]
[248 248 248]]
[[254 252 249]
[255 252 248]
[254 251 247]
...
[252 245 242]
[252 246 243]
[253 248 245]]
[[250 250 248]
[255 252 250]
[255 251 249]
...
[255 250 246]
[254 249 244]
[254 248 243]]
...
[[255 255 255]
[255 255 255]
[254 254 254]
...
[252 249 244]
[253 249 244]
[253 250 244]]
[[255 255 255]
[255 255 255]
[254 254 254]
...
[251 251 245]
[251 252 246]
[251 252 246]]
[[255 255 255]
[255 255 255]
[254 254 254]
...
[250 252 246]
[250 252 246]
[251 253 247]]]
</code></pre>
<p>and here are Scala code output values</p>
<pre><code>[info] orignal image rows 805 and column 600
[info] orignal image width 600 and height 805
[info] resized image rows 600 and column 447
[info] resized image width 447 and height 600
[info] minVal is 0.0
[info] maxVal is 2469.0
[info] grad x is 447x600
[info] value of p is 22
[info] sorted contours ArrayBuffer(Mat [ 9*1*CV_32SC2, isCont=true, isSubmat=false, nativeObj=0x7f84a0ead8b0, dataAddr=0x7f84a0eadac0 ], Mat [ 31*1*CV_32SC2, isCont=true, isSubmat=false, nativeObj=0x7f84a0e6c870, dataAddr=0x7f84a0bfd500 ], Mat [ 14*1*CV_32SC2, isCont=true, isSubmat=false, nativeObj=0x7f84a0e6ca30, dataAddr=0x7f84a0e6bf00 ], Mat [ 12*1*CV_32SC2, isCont=true, isSubmat=false, nativeObj=0x7f84a0e6caa0, dataAddr=0x7f84a0e6c040 ], Mat [ 16*1*CV_32SC2, isCont=true, isSubmat=false, nativeObj=0x7f84a0e6d670, dataAddr=0x7f84a0e6ba40 ], Mat [ 9*1*CV_32SC2, isCont=true, isSubmat=false, nativeObj=0x7f84a0e6d590, dataAddr=0x7f84a0e6b900 ], Mat [ 5*1*CV_32SC2, isCont=true, isSubmat=false, nativeObj=0x7f84a0e6c9c0, dataAddr=0x7f84a0ead540 ], Mat [ 8*1*CV_32SC2, isCont=true, isSubmat=false, nativeObj=0x7f84a0e6c800, dataAddr=0x7f84a0e6bb80 ], Mat [ 2*1*CV_32SC2, isCont=true, isSubmat=false, nativeObj=0x7f84a0e6d600, dataAddr=0x7f84a0e6b980 ], Mat [ 2*1*CV_32SC2, isCont=true, isSubmat=false, nativeObj=0x7f84a0e6c8e0, dataAddr=0x7f84a0e6bd00 ], Mat [ 2*1*CV_32SC2, isCont=true, isSubmat=false, nativeObj=0x7f84a0e6c950, dataAddr=0x7f84a0e6bdc0 ])
[info] c is Mat [ 9*1*CV_32SC2, isCont=true, isSubmat=false, nativeObj=0x7f84a0ead8b0, dataAddr=0x7f84a0eadac0 ]
[info] in if block
[info] px is 11
[info] py is 17
[info] x is 12
[info] y is 546
[info] w is 373
[info] h is 55
[info] aspectRatio is 16.714285714285715
[info] coverageRatio is 0.785234899328859
[info] c is1x9
[info] original image is empty false
[info] new mrz image is empty false
[info] roi is Mat [ 55*373*CV_8UC3, isCont=false, isSubmat=true, nativeObj=0x7f84a0e9a950, dataAddr=0x7f845814ef74 ]
</code></pre>
<p>I was unable to write the Scala equivalent of this Python code:</p>
<pre><code>cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
</code></pre>
<p>I added a workaround for that, which may be what is causing the issue. Please help.</p>
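<p>For reference, <code>imutils.grab_contours</code> only normalizes the tuple returned by <code>cv2.findContours</code> across OpenCV versions, and the second line is just a descending sort by contour area. The same sort, sketched in plain Python with the shoelace formula standing in for <code>cv2.contourArea</code> (the names here are illustrative, not from either code base):</p>

```python
def polygon_area(points):
    """Shoelace formula: area of a simple polygon given as [(x, y), ...]."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def sort_contours_by_area_desc(contours):
    """Equivalent of sorted(cnts, key=cv2.contourArea, reverse=True)."""
    return sorted(contours, key=polygon_area, reverse=True)

square = [(0, 0), (10, 0), (10, 10), (0, 10)]   # area 100
triangle = [(0, 0), (4, 0), (0, 3)]             # area 6
print([polygon_area(c) for c in sort_contours_by_area_desc([triangle, square])])
# → [100.0, 6.0]
```

<p>The Scala <code>contours.sortBy(-Imgproc.contourArea(_))</code> in the question follows the same descending-by-area idea, so the sort itself is unlikely to be the difference.</p>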
<p><a href="https://i.sstatic.net/uPQKB.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uPQKB.jpg" alt="this is the orignal image " /></a></p>
<p><a href="https://i.sstatic.net/lveMC.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lveMC.jpg" alt="here is the output of python image which i desire in scala" /></a>
<a href="https://i.sstatic.net/W63LX.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W63LX.jpg" alt="here is the output of scala code" /></a></p>
<ol>
<li>the first image is the original image</li>
<li>the second image is the output of the Python code, which is what I want from Scala</li>
<li>the third image is the output of the Scala code, which is incorrect</li>
</ol>
|
<python><scala><opencv><computer-vision><imutils>
|
2024-01-16 12:17:51
| 0
| 3,525
|
sarah w
|
77,825,485
| 1,334,609
|
pexpect: failed to get the string after the marker
|
<p>Below is the code I used; I want to use pexpect to get the entire output of a command running in an SSH session:</p>
<p>I have tried many ways to get the output of the command, but it always returns only the string before the marker; I can't get the string after the marker. I need the entire output of the command. Please help me with the function <code>run_command_and_return_status_and_output</code> below.</p>
<pre><code>import pexpect
import sys
def ssh_login(ssh_host, ssh_username, ssh_password, timeout=10):
ssh_command = f"ssh {ssh_username}@{ssh_host}"
child = pexpect.spawn(ssh_command, timeout=timeout)
try:
# Expect the SSH password prompt
index = child.expect([pexpect.TIMEOUT, 'password:', pexpect.EOF], timeout=timeout)
if index == 0:
raise Exception("SSH connection timeout")
if index == 1:
# Send the SSH password
child.sendline(ssh_password)
# Expect the shell prompt after successful login
index = child.expect(['7503', pexpect.EOF], timeout=timeout)
if index == 0:
return child # Successfully logged in
raise Exception("Authentication failed")
except Exception as e:
child.close()
raise e
def run_command_and_return_status_and_output(child, command, success_marker, failure_marker, timeout=10):
status = "Pass" # Initialize status as Pass, assuming the command will pass
output = ""
try:
# Send the command
child.sendline(command)
# Expect the output
index = child.expect([success_marker, failure_marker, pexpect.EOF, pexpect.TIMEOUT], timeout=timeout)
if index == 0:
status = "Pass"
if index == 1:
status = "Fail"
if index not in [0, 1]:
status = "Exception"
output = child.before.decode('utf-8')+child.after.decode('utf-8')
if output:
# Remove the command from the output
output = output.replace(command, "")
except Exception as e:
status = "Exception"
output = str(e)
finally:
return status, output
def run_command_and_return_status_and_output(child, command, success_marker, failure_marker, timeout=10):
status = "Pass" # Initialize status as Pass, assuming the command will pass
output = ""
try:
# Send the command
child.sendline(command)
# Expect the output
index = child.expect([success_marker, failure_marker, pexpect.EOF, pexpect.TIMEOUT], timeout=timeout)
if index == 0:
status = "Pass"
elif index == 1:
status = "Fail"
# Get the entire output, including before and after
output = child.before.decode('utf-8') + child.after.decode('utf-8')
except Exception as e:
status = "Exception"
output = str(e)
finally:
return status, output
# Example usage
ssh_host = "your host"
ssh_username = "your username"
ssh_password = "your password"
child = ssh_login(ssh_host, ssh_username, ssh_password)
command = "ifconfig -a"
success_marker = "lo"
failure_marker = "xxx"
timeout = 10
status, output = run_command_and_return_status_and_output(child, command, success_marker, failure_marker, timeout)
print("Status:", status)
print("Output:", output)
# Close the SSH connection
child.close()
</code></pre>
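<p>For context: pexpect's <code>expect()</code> consumes input only up to the end of the match — <code>child.before</code> holds everything before the match, <code>child.after</code> holds just the matched text, and anything the command prints after the marker stays in the buffer until a later <code>expect()</code> (for example of the shell prompt or <code>pexpect.EOF</code>). The same slicing semantics, illustrated on a plain string with no SSH involved:</p>

```python
import re

# Simulated terminal output: the marker "lo" appears mid-stream.
output = "eth0: flags=4163<UP>\nlo: flags=73<UP,LOOPBACK>\ntext after the marker\n$ "
match = re.search("lo", output)

before = output[:match.start()]            # what pexpect exposes as child.before
after = output[match.start():match.end()]  # child.after: only the matched marker
rest = output[match.end():]                # stays buffered until the next expect()

print(after)                                  # → lo
print("after the marker" in before + after)  # → False
```

<p>So one hedged fix is to <code>expect</code> the shell prompt (or <code>pexpect.EOF</code> for non-interactive runs) instead of a marker that occurs in the middle of the output, and then read <code>child.before</code>.</p>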
|
<python><pexpect>
|
2024-01-16 11:39:16
| 0
| 441
|
user1334609
|
77,825,283
| 5,834,711
|
How to use multiprocessing in geo pandas and intersection procedures by dividing the database in parts?
|
<p>I use the following code on macOS with 8 cores, but while the procedure is running I see that Python uses only one core. How could I divide the land use database into e.g. 6 parts, run the intersections on each core separately, and then combine the results?</p>
<pre><code>import os
import geopandas as gpd
from pathlib import Path
from multiprocessing import Pool
import time
start_time = time.time()
# Set the paths
grid_file_pattern = "emiss*.shp"
output_path = "/outpath/"
land_use_path = "/pathlu/"
def process_land_use(selected_snp, land_use_category):
# Read the gridded emissions density file
grid_density_file = os.path.join(output_path, f"emiss_{selected_snp}_density.shp")
gdf_density = gpd.read_file(grid_density_file)
# Read land use shapefile and set the CRS if not already set
land_use_file = f"lu_ll_{land_use_category}.shp"
gdf_land_use = gpd.read_file(os.path.join(land_use_path, land_use_file))
# Check if CRS is defined, if not, set it to the appropriate value
if gdf_land_use.crs is None:
gdf_land_use.crs = "EPSG:4326"
# Reproject to WGS84 (EPSG:4326)
gdf_land_use = gdf_land_use.to_crs("EPSG:4326")
# Perform spatial intersection
intersection = gpd.overlay(gdf_density, gdf_land_use, how='intersection')
# Calculate emissions in each land use category
for pollutant in ["CO", "NH3", "NMVOC", "NOx", "SO2", "PMc", "PM2_5"]:
intersection[f"{pollutant}_emissions"] = intersection[f"{pollutant}_m2"] * intersection["area"]
# Save the result
output_file = os.path.join(output_path, f"{selected_snp}_{land_use_category}_emissions.shp")
intersection.to_file(output_file)
if __name__ == '__main__':
start_time = time.time()
# Distribute gridded emissions over land use for specific SNPs
snp_distributions = {
"SNP1": "ind",
}
with Pool(8) as pool:
pool.starmap(process_land_use, snp_distributions.items())
end_time = time.time()
elapsed_time = end_time - start_time
print(f"Execution completed in {elapsed_time / 60:.2f} minutes.")
</code></pre>
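<p>The generic split/map/combine pattern can be sketched with plain Python. In the real code, something like <code>np.array_split(gdf_land_use, 6)</code> would produce the chunks, each worker would run <code>gpd.overlay</code> for its chunk against <code>gdf_density</code>, and <code>pd.concat</code> would combine the partial results — those specific calls are assumptions about the data; the pattern below is the generic skeleton:</p>

```python
from multiprocessing import Pool

def split_into_parts(items, n_parts):
    """Split a sequence into n_parts chunks of near-equal size."""
    k, r = divmod(len(items), n_parts)
    chunks, start = [], 0
    for i in range(n_parts):
        end = start + k + (1 if i < r else 0)
        chunks.append(items[start:end])
        start = end
    return chunks

def process_chunk(chunk):
    """Stand-in for the per-chunk overlay/intersection work."""
    return [x * 2 for x in chunk]

if __name__ == "__main__":
    data = list(range(100))
    with Pool(6) as pool:
        partial_results = pool.map(process_chunk, split_into_parts(data, 6))
    combined = [x for part in partial_results for x in part]  # pd.concat equivalent
    print(combined[:5])  # → [0, 2, 4, 6, 8]
```

<p>Splitting only one side of the overlay (the land use layer) and concatenating the partial intersections is valid because an intersection of a union of chunks is the union of the per-chunk intersections.</p>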
|
<python><multiprocessing><geopandas>
|
2024-01-16 11:02:44
| 2
| 345
|
Nat
|
77,825,266
| 4,540,866
|
Streamlit state management does not work for text input field
|
<p>The goal is to build a Streamlit app where data is rendered based on the selected group. The user can type the group or use the "next group" button. I have 2 approaches but they both have issues.</p>
<h1>Approach 1</h1>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
groups = [1,2,3,4,5,6,7,8,9]
group_id = st.text_input(f'Group (1-{len(groups)}):', value=1, key='selected_group')
selected_group = int(st.session_state["selected_group"])
st.text(selected_group)
if st.button('Next group'):
st.session_state['selected_group'] += 1
</code></pre>
<p>There is an error when pressing the 'next' button:</p>
<blockquote>
<p>StreamlitAPIException: st.session_state.selected_group cannot be modified after the widget with key selected_group is instantiated.</p>
</blockquote>
<h1>Approach 2</h1>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
groups = [1,2,3,4,5,6,7,8,9]
st.session_state["selected_group"] = st.session_state.get("selected_group", 1)
group_id = st.text_input(f'Group (1-{len(groups)}):', value=st.session_state["selected_group"])
st.session_state["selected_group"] = int(group_id)
selected_group = int(st.session_state["selected_group"])
st.text(selected_group)
if st.button('Next group'):
st.session_state['selected_group'] = selected_group + 1
st.experimental_rerun()
</code></pre>
<p>Here the 'next' button works correctly. However, there is a weird behavior. The first time I input a custom group, it works. From the second time on:</p>
<ul>
<li>I insert a group in the input and press enter</li>
<li>Nothing happens</li>
<li>The group is still the previous one</li>
<li>I insert the new group again</li>
<li>This time it works</li>
</ul>
|
<python><streamlit>
|
2024-01-16 11:00:20
| 1
| 480
|
Vlad Gheorghe
|
77,825,112
| 4,035,257
|
Filtering data based on boolean columns in python
|
<p>I have the following pandas dataframe, and I would like a function, using <code>groupby</code>, that returns the data for IDs with at least 1 True value in the <code>bool_1</code> column, 2 True values in <code>bool_2</code>, and 3 True values in <code>bool_3</code>.</p>
<pre><code>index ID bool_1 bool_2 bool_3
0 7 True True True
1 7 False True True
2 7 False False True
3 8 True True True
4 8 True True True
5 8 False False True
6 9 True True True
7 9 True False True
8 9 True False True
9 9 True False False
</code></pre>
<p>As output I would expect complete data for <code>ID</code> <code>7</code> and <code>8</code> to be returned, since <code>9</code> has only 1 True value for <code>bool_2</code>.
Any idea for that function? Thank you!</p>
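<p>The required counting rule, sketched with plain Python over the sample rows (in pandas the same idea would be a per-ID sum of each boolean column compared against the thresholds — treat this as an illustration of the rule rather than the groupby call itself):</p>

```python
rows = [
    (7, True,  True,  True),
    (7, False, True,  True),
    (7, False, False, True),
    (8, True,  True,  True),
    (8, True,  True,  True),
    (8, False, False, True),
    (9, True,  True,  True),
    (9, True,  False, True),
    (9, True,  False, True),
    (9, True,  False, False),
]
thresholds = (1, 2, 3)  # minimum True counts for bool_1, bool_2, bool_3

counts = {}
for rid, b1, b2, b3 in rows:
    c = counts.setdefault(rid, [0, 0, 0])
    c[0] += b1; c[1] += b2; c[2] += b3

keep = {rid for rid, c in counts.items()
        if all(c[i] >= thresholds[i] for i in range(3))}
print(sorted(keep))  # → [7, 8]
```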
|
<python><pandas><group-by><filtering>
|
2024-01-16 10:37:37
| 3
| 362
|
Telis
|
77,824,906
| 532,054
|
Apple Store Server Notification: Python OpenSSL certificate loading fails in "header too long"
|
<p>On my django server, I'm trying to decode the JWT content sent by the Apple Store Server Notification using Apple's package <code>app-store-server-library-python</code>.</p>
<p>In the <a href="https://github.com/apple/app-store-server-library-python" rel="nofollow noreferrer">official documentation</a>, when starting the verification and decoding, we find this line:</p>
<pre><code>root_certificates = load_root_certificates()
</code></pre>
<p>The implementation is unfortunately not provided. By looking here and there, I've tried several implementations that always fail with:</p>
<blockquote>
<p>OpenSSL.crypto.Error: [('asn1 encoding routines', '', 'header too long')]</p>
</blockquote>
<p>Here is my implementation:</p>
<pre><code>def load_root_certificates():
in_file = open("AppleRootCA-G3.cer", "rb")
data = in_file.read()
in_file.close()
return load_certificate(FILETYPE_ASN1, data)
</code></pre>
<p>What could be the cause of the issue here?
Thanks for the help!</p>
|
<python><django><cryptography><decoding>
|
2024-01-16 10:05:21
| 2
| 1,771
|
lorenzo
|
77,824,830
| 17,914,734
|
How to colour the outer ring (like a doughnut plot) in a radar plot according to column values
|
<p>I have data in this form :</p>
<pre><code>data = {'Letter': ['A', 'B', 'C', 'D', 'E'],
'Type': ['Apples', 'Apples', 'Oranges', 'Oranges', 'Bananas'],
'Value': [1, 2, 0, 5, 6]}
df = pd.DataFrame(data)
</code></pre>
<p>I want to combine a doughnut plot and a radar plot, where the outer ring will be coloured according to the column "Type".</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
data = {'Letter': ['A', 'B', 'C', 'D', 'E'],
'Type': ['Apples', 'Apples', 'Oranges', 'Oranges', 'Bananas'],
'Value': [1, 2, 0, 5, 6]}
df = pd.DataFrame(data)
num_categories = len(df)
angles = np.linspace(0, 2 * np.pi, num_categories, endpoint=False).tolist()
values = df['Value'].tolist()
values += values[:1]
angles += angles[:1]
plt.figure(figsize=(8, 8))
plt.polar(angles, values, marker='o', linestyle='-', linewidth=2)
plt.fill(angles, values, alpha=0.25)
plt.xticks(angles[:-1], df['Letter'])
types = df['Type'].unique()
color_map = {t: i / len(types) for i, t in enumerate(types)}
colors = df['Type'].map(color_map)
plt.fill(angles, values, color=plt.cm.viridis(colors), alpha=0.25)
plt.show()
</code></pre>
<p>I want this to look like this : <a href="https://i.sstatic.net/z4cDJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z4cDJ.png" alt="enter image description here" /></a></p>
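<p>One way to build the outer ring is to turn consecutive runs of the <code>Type</code> column into angular spans and draw each span as a thin bar on the same polar axes (<code>ax.bar</code> accepts angular x-positions on a polar projection). The span computation itself, using this question's data, is plain Python — treat the drawing call in the comment as a sketch:</p>

```python
import math

letters = ['A', 'B', 'C', 'D', 'E']
types   = ['Apples', 'Apples', 'Oranges', 'Oranges', 'Bananas']

step = 2 * math.pi / len(letters)   # angular width of one category slot
spans = []                          # (type, start_angle, angular_width)
i = 0
while i < len(types):
    j = i
    while j < len(types) and types[j] == types[i]:
        j += 1                      # extend the run of identical types
    spans.append((types[i], i * step, (j - i) * step))
    i = j

print(spans)
# Each span could then be drawn as one ring segment, e.g. (hypothetical values):
#   ax.bar(x=start + width / 2, height=0.5, width=width,
#          bottom=max(values) + 0.5, color=color_for_type, align='center')
```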
|
<python><pandas><matplotlib>
|
2024-01-16 09:55:02
| 1
| 309
|
zorals
|
77,824,635
| 5,008,968
|
How to install PyQt4 in 2024 on Ubuntu (to run an old project)?
|
<p>In 2016/2017, when PyQt4 was still supported, I <a href="https://github.com/Hamza5/TALN-Quran/" rel="nofollow noreferrer">made a project</a> based on <strong>Python 3</strong> (not sure about the exact version, maybe 3.4), <strong>PyQt4</strong>, <strong>Numpy</strong>, and <strong>Matplotlib</strong> (Unfortunately I didn't even include a <code>requirements.txt</code> file at that time).</p>
<p>I want to run this code again on my Ubuntu system (Kubuntu 22.04, to be specific), but I cannot figure out how to run PyQt4 apps nowadays. PyQt4 is no longer installable through <code>pip</code> with the Python 3.10 that I have on my system.</p>
<p>I tried to look for <a href="https://hub.docker.com/search?q=pyqt4" rel="nofollow noreferrer">random Docker images that includes PyQt4</a>, but no success, as they are mostly designed for Python 2 and not 3, and I do not really know how to run a GUI app from a Docker container.</p>
<p>I tried to convert my project <a href="https://github.com/rferrazz/pyqt4topyqt5" rel="nofollow noreferrer">into PyQt5 using a tool</a>, but it introduced a lot of errors and I have no time to debug them all, especially since most of them happened in the code that was generated from the <code>.ui</code> files. So, I need to run the project as-is.</p>
<p>If you have any ideas on how to run my project, so I can at least take some screenshots of it, please let me know.</p>
|
<python><python-3.x><pyqt><pyqt4><python-3.4>
|
2024-01-16 09:21:18
| 1
| 706
|
Hamza Abbad
|
77,824,610
| 2,339,525
|
Python webscraper not scraping the latest information
|
<p>Work requires us to stay up to date on developments regarding customs regulations. Instead of manually going to websites, I attempted to create a simple web scraper that goes to defined websites, gets the latest items there, and writes them to an Excel file.</p>
<pre><code>import requests
import pandas as pd
import regex as re
import openpyxl
from bs4 import BeautifulSoup
urls = ['https://www.evofenedex.nl/actualiteiten/', 'https://douaneinfo.nl/index.php/nieuws']
myworkbook = openpyxl.load_workbook('output.xlsx')
worksheet = myworkbook.get_sheet_by_name('Output')
for index, url in enumerate(urls):
response = requests.get(url)
if response.status_code == 200:
#empty array to store the links in
links = []
#evofenedex
if index == 0:
# Parse the HTML content
soup = BeautifulSoup(response.text, 'html.parser')
# Find the elements containing the news items
news_items = soup.find_all('div', class_='block__content')
title_element = soup.find_all('p', class_="list__title")
date_element = soup.find_all('p', class_="list__subtitle")
x = 0
link_elements = []
for titles in title_element:
link_elements.append(soup.find_all('a', title=title_element[x].text))
x = x + 1
for link_element in link_elements:
reg_str = re.findall(r'"(.*?)"', str(link_element))
links.append(f"www.evofenedex.nl{reg_str[1]}")
#douaneinfo
if index == 1:
# Parse the HTML content
soup = BeautifulSoup(response.text, 'html.parser')
news_items = soup.find_all('div', class_='content-category')
for item in news_items:
title_element = soup.find_all('th', class_="list-title")
date_element = soup.find_all('td', class_="list-date small")
for element in title_element:
element_string = str(element)
reg_str = re.findall(r'"(.*?)"', element_string)[2]
links.append(f"www.douaneinfo.nl{reg_str}")
if title_element and date_element:
y = 0
x = 1
z = 1
#Loops through elements to add them to the excel file
for element in title_element:
titleX = element.text.strip()
date = date_element[y].text.strip()
link = links[y]
cellref = worksheet.cell(row = x, column = z)
cellref.value = titleX
z = z + 1
cellref = worksheet.cell(row = x, column = z)
cellref.value = date
z = z + 1
cellref = worksheet.cell(row = x, column = z)
cellref.value = link
z = 1
y = y + 1
x = x + 1
myworkbook.save('output.xlsx')
print('The scraping is complete')
</code></pre>
<p>The issue I'm having is that the first website doesn't get the latest information but rather starts on information that's a few months old.</p>
<p><a href="https://i.sstatic.net/66N3F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/66N3F.png" alt="Output" /></a></p>
<p>If you go to the first website, the first row of data I'm scraping is (currently) on the second page of the news URL.</p>
|
<python><web-scraping><beautifulsoup><python-requests>
|
2024-01-16 09:15:38
| 1
| 820
|
Marco Geertsma
|
77,824,468
| 7,489,397
|
How to write a Python test for an environment variable that is loaded on import
|
<p>I want to test how the system behaves when the initial environment variable changes.</p>
<p>the_file.py:</p>
<pre class="lang-py prettyprint-override"><code>import os
from fastapi import APIRouter
PREFIX = os.getenv("SERVICE_PREFIX") or "/default"
router = APIRouter(prefix=PREFIX)
</code></pre>
<p>test.py:</p>
<pre class="lang-py prettyprint-override"><code># Following test is using pytest-mock library for mocker fixture
import os
def test_prefix_value_digested_correctly(mocker):
mocker.patch.dict(
os.environ,
{
"SERVICE_PREFIX": "/services",
},
clear=True,
)
from the_file import router
assert router.prefix == "/services"
</code></pre>
<p>The above test fails because the environment variable is read on the initial import:</p>
<pre><code>assert router.prefix == "/services"
E AssertionError: assert '/default' == '/services'
E - /services
E + /default
</code></pre>
<p>I could solve this by creating a function <code>get_prefix</code> that reads the environment on the fly, but that is not the goal here. The goal is a sanity check: "if I set the SERVICE_PREFIX env variable, router.prefix should change".</p>
<p>Is there a way to dynamically pass such an environment variable to Python unittest/pytest?</p>
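<p>The mechanism behind the failure: a module's body runs once, at first import, and the module object is then cached in <code>sys.modules</code>, so patching <code>os.environ</code> afterwards changes nothing. A hedged way to keep the sanity-check shape of the test is to patch the environment first and then <code>importlib.reload(the_file)</code> (or delay the import until after the patch). The caching behavior itself, demonstrated with an in-memory module so the snippet is self-contained (re-running <code>exec</code> stands in for <code>importlib.reload</code>):</p>

```python
import os
import sys
import types

os.environ.pop("SERVICE_PREFIX", None)   # start from a clean slate

# Synthesize a module whose body reads the environment at import time,
# mirroring the_file.py from the question.
source = 'import os\nPREFIX = os.getenv("SERVICE_PREFIX") or "/default"\n'
module = types.ModuleType("the_file_demo")
exec(source, module.__dict__)            # "first import": the body runs once
sys.modules["the_file_demo"] = module

first = module.PREFIX                    # "/default"

os.environ["SERVICE_PREFIX"] = "/services"
cached = sys.modules["the_file_demo"].PREFIX  # still "/default": imports are cached

exec(source, module.__dict__)            # what importlib.reload(the_file) would do
reloaded = module.PREFIX                 # "/services"

print(first, cached, reloaded)  # → /default /default /services
```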
|
<python><pytest><python-unittest>
|
2024-01-16 08:50:49
| 1
| 979
|
The Hog
|
77,824,427
| 1,582,790
|
python class variable not updated when passed to new Process
|
<p>Why doesn't a new Process get the updated class attribute value (on Windows, by the way)? (It's the same if the attribute is just a str.) I'm guessing the third case worked because the assignment gave it an <em>instance</em> attribute.</p>
<p><a href="https://stackoverflow.com/a/7525883/1582790">https://stackoverflow.com/a/7525883/1582790</a> This post says something about static/class members but doesn't explain why. (The bit about setting the class member using <code>instance.attr=</code> seems incorrect, as seen in my third case.)</p>
<pre><code>class A:
a1 = {}
def fp(a):
print(a.a1)
if __name__ == '__main__':
a = A()
A.a1.update({"p":"1"}) # update class var
p1 = Process(target=fp, args=(a,))
p1.start()
A.a1 = {"p":"2"}
# print('--', a.a1) # {'p': '2'}
p2 = Process(target=fp, args=(a,))
p2.start()
a.a1 = {"p":"3"} # doesn't change A.a1
# print('--', A.a1) # {'p': '2'}
p3 = Process(target=fp, args=(a,))
p3.start()
</code></pre>
<p>prints:</p>
<pre><code>{}
{}
{'p': '3'}
</code></pre>
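<p>What the child process receives is a pickle of <code>a</code>, and pickling an instance serializes its <code>__dict__</code> — class attributes are not included. Assuming the spawn start method (the Windows default), the child re-imports the module, so <code>a.a1</code> resolves against a freshly created <code>A.a1 = {}</code> there; the updates made under <code>if __name__ == '__main__'</code> never run in the child. Only the <code>a.a1 = {...}</code> assignment puts anything into the instance <code>__dict__</code>, which is why only the third value travels. The class-vs-instance storage, shown without processes:</p>

```python
class A:
    a1 = {}

a = A()

A.a1.update({"p": "1"})   # mutates the class attribute in this process
print(a.__dict__)          # → {} : nothing is stored on the instance itself

A.a1 = {"p": "2"}          # rebinds the class attribute
print(a.__dict__)          # → {} : still nothing instance-level

a.a1 = {"p": "3"}          # attribute assignment through the instance
print(a.__dict__)          # → {'a1': {'p': '3'}} : now part of the pickled state
print(A.a1)                # → {'p': '2'} : the class attribute is untouched
```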
|
<python><process><multiprocessing><static-members><class-variables>
|
2024-01-16 08:39:50
| 0
| 1,324
|
TurtleTread
|
77,824,316
| 8,176,763
|
display dropdown options in fastapi without using enum type
|
<p>I am trying to display options in the FastAPI auto-generated documentation page. I can do this for enum types as the documentation suggests here: <a href="https://fastapi.tiangolo.com/tutorial/path-params/#create-an-enum-class" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/path-params/#create-an-enum-class</a>.</p>
<p>My code is as follows:</p>
<p>for <code>main.py</code> I have:</p>
<pre><code>from enum import Enum
from db import engine,SessionLocal
from models import Example,ExampleModel,Base,Color
from fastapi import FastAPI,Depends
from sqlalchemy.orm import Session
from sqlalchemy import select
app = FastAPI()
@app.on_event("startup")
def on_startup():
Base.metadata.create_all(bind=engine)
def get_db():
db = SessionLocal()
try:
yield db
finally:
db.close()
@app.get("/colors",response_model=list[ExampleModel])
def read_colors(color: Color ,db: Session = Depends(get_db)):
return db.execute(select(Example).where(Example.colors == color)).scalars().all()
</code></pre>
<p>For <code>models.py</code> I have:</p>
<pre><code>from sqlalchemy.orm import Mapped,mapped_column,DeclarativeBase
from sqlalchemy import Identity,Enum
from pydantic import BaseModel, ConfigDict
import enum
class Base(DeclarativeBase):
pass
class Color(str,enum.Enum):
RED = 'RED'
BLUE = 'BLUE'
class Example(Base):
__tablename__='example'
id: Mapped[int] = mapped_column(Identity(always=True),primary_key=True)
colors: Mapped[Color] = mapped_column(Enum(Color))
class ExampleModel(BaseModel):
model_config = ConfigDict(from_attributes=True)
id: int
colors: Color
</code></pre>
<p>This way <code>Color</code> renders nicely in the docs page:</p>
<p><a href="https://i.sstatic.net/kCejF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kCejF.png" alt="enter image description here" /></a></p>
<p>However let's assume my database table would be like this:</p>
<pre><code>class Example(Base):
__tablename__='example'
id: Mapped[int] = mapped_column(Identity(always=True),primary_key=True)
colors: Mapped[str]
</code></pre>
<p>In this case <code>colors</code> is just a text field containing maybe 10 to 15 different values, and these color values may change over time. I would like to dynamically generate a dropdown for the different values of colors in this case as well. Is this possible?</p>
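<p>Since FastAPI renders the dropdown from an Enum type, one hedged approach is to build the Enum dynamically at startup from the distinct values in the column. The functional <code>enum.Enum</code> API below is standard; the query feeding it is the assumption, and the generated docs would only refresh when the app restarts:</p>

```python
import enum

def make_color_enum(values):
    """Build a str-backed Enum from a list of distinct column values."""
    return enum.Enum("Color", {v: v for v in values}, type=str)

# In the app, `values` would come from something like (hypothetical query):
#   db.execute(select(Example.colors).distinct()).scalars().all()
Color = make_color_enum(["RED", "BLUE", "GREEN"])

print([c.value for c in Color])  # → ['RED', 'BLUE', 'GREEN']
print(Color("GREEN").name)       # → GREEN
```

<p>The resulting <code>Color</code> can then be used as the parameter annotation exactly like the hand-written enum in the question.</p>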
|
<python><fastapi><pydantic>
|
2024-01-16 08:20:26
| 1
| 2,459
|
moth
|
77,824,299
| 9,506,773
|
404 error when doing workload analysis using Locust on the OpenAI API (GPT-3.5)
|
<p>I am trying to do some workload analysis on OpenAI GPT-3.5-TURBO using locust.</p>
<pre class="lang-py prettyprint-override"><code>from locust import HttpUser, between, task
class OpenAIUser(HttpUser):
wait_time = between(1, 2) # wait between 1 and 2 seconds
host = "https://api.openai.com/"
def on_start(self):
self.headers = {
"Content-Type": "application/json",
"Authorization": "Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
self.data = {
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "system",
"content": "You are a helpful story teller."
},
{
"role": "user",
"content": "Tell me a 100 word story"
}
]
}
@task
def test_chat_completion(self):
self.client.post(
"https://api.openai.com/v1/chat/completion/",
json=self.data,
headers=self.headers
)
</code></pre>
<p>I get this error:</p>
<pre class="lang-bash prettyprint-override"><code>POST /v1/chat/completion/: HTTPError('404 Client Error: Not Found for url: /v1/chat/completion/')
</code></pre>
<p>My script for Azure OpenAI workload analysis works fine with this exact same structure. What am I missing?</p>
|
<python><openai-api><locust>
|
2024-01-16 08:15:24
| 1
| 3,629
|
Mike B
|
77,824,281
| 2,989,330
|
"SSL certificate verify failed: self-signed certificate in certificate chain" when pip install
|
<p>I'm behind a company proxy with a self-signed certificate and I want to install <code>tensorstore</code> via <code>pip</code>. <code>pip</code> apparently downloads and runs a Python script <code>bazelisk.py</code> that in turn uses <code>urllib</code> to get more stuff from the Internet. However, this fails with a <code>CERTIFICATE_VERIFY_FAILED</code> error message:</p>
<pre><code>$ pip install --trusted-host=example.com --index-url=http://example.com/pypi/simple
...
Downloading https://releases.bazel.build/6.4.0/release/bazel-6.4.0-linux-arm64...
Traceback (most recent call last):
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/urllib/request.py", line 1346, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/http/client.py", line 1285, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/http/client.py", line 1331, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/http/client.py", line 1280, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/http/client.py", line 1040, in _send_output
self.send(msg)
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/http/client.py", line 980, in send
self.connect()
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/http/client.py", line 1454, in connect
self.sock = self._context.wrap_socket(self.sock,
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/ssl.py", line 1040, in _create
self.do_handshake()
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1129)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/pip-install-ycop_psv/tensorstore_1008eee73d464825b2e191c044b9e306/bazelisk.py", line 492, in <module>
sys.exit(main())
File "/tmp/pip-install-ycop_psv/tensorstore_1008eee73d464825b2e191c044b9e306/bazelisk.py", line 477, in main
bazel_path = get_bazel_path()
File "/tmp/pip-install-ycop_psv/tensorstore_1008eee73d464825b2e191c044b9e306/bazelisk.py", line 470, in get_bazel_path
return download_bazel_into_directory(bazel_version, is_commit, bazel_directory)
File "/tmp/pip-install-ycop_psv/tensorstore_1008eee73d464825b2e191c044b9e306/bazelisk.py", line 304, in download_bazel_into_directory
download(bazel_url, destination_path)
File "/tmp/pip-install-ycop_psv/tensorstore_1008eee73d464825b2e191c044b9e306/bazelisk.py", line 353, in download
with closing(urlopen(request)) as response, open(destination_path, "wb") as file:
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/urllib/request.py", line 214, in urlopen
return opener.open(url, data, timeout)
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/urllib/request.py", line 517, in open
response = self._open(req, data)
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/urllib/request.py", line 534, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/urllib/request.py", line 494, in _call_chain
result = func(*args)
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/urllib/request.py", line 1389, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "/home/user/anaconda3/envs/PyTorch-1.11.0/lib/python3.9/urllib/request.py", line 1349, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1129)>
error: command '/home/user/anaconda3/envs/PyTorch-1.11.0/bin/python3.9' failed with exit code 1
----------------------------------------
ERROR: Failed building wheel for tensorstore
Failed to build tensorstore
ERROR: Could not build wheels for tensorstore which use PEP 517 and cannot be installed directly
</code></pre>
<p>I already know this error message from software such as Huggingface, and managed to solve it in many cases.</p>
<p>I already put the required company certificates to <code>/etc/pki/ca-trust/source/anchors</code> and run <code>update-ca-trust</code> afterwards (Note: I'm on a CentOS-derived distro). By verifying the timestamps and contents of <code>/etc/pki/tls/cert.pem</code>, I made sure the update was successful. <code>curl https://www.google.com</code> works. But <code>pip install</code> still fails.</p>
<p>So, I ran <a href="https://stackoverflow.com/a/43855394"><code>pip install certifi</code></a> and retried. It still fails. <code>certifi</code> ships a Mozilla-derived certificate bundle, which naturally doesn't include our company's self-signed certificates. So, I replaced certifi's PEM file with a link to the above-mentioned <code>/etc/pki/tls/cert.pem</code>:</p>
<pre><code>mv "$(python -m certifi)"{,.bak}
ln -s "/etc/pki/tls/cert.pem" "$(python -m certifi)"
</code></pre>
<p>But this changes nothing. <code>pip install tensorstore</code> still fails with above-mentioned error.</p>
<p>Setting <a href="https://stackoverflow.com/a/33717517"><code>REQUESTS_CA_BUNDLE</code></a> doesn't work here, because <code>requests</code> is not involved.</p>
<p>At this point, I don't know how to proceed. Every solution I have found online revolves around the approaches I already mentioned. Not even a quick-and-dirty <a href="https://stackoverflow.com/a/15445989"><code>verify=False</code> equivalent</a>, <a href="https://stackoverflow.com/a/71380824">PYTHONHTTPSVERIFY</a>, or an <a href="https://stackoverflow.com/a/49174340">unverified SSL context</a> can be applied to my situation, because the script being run is temporary and is downloaded anew every time I try to install.</p>
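<p>One more angle worth noting (an assumption on my side, not yet verified): bazelisk's <code>urlopen</code> goes through Python's stdlib <code>ssl</code> module, whose default context honors OpenSSL's <code>SSL_CERT_FILE</code>/<code>SSL_CERT_DIR</code> environment variables, so something like this might redirect it to the updated system bundle:</p>

```shell
# Point the stdlib ssl module (used by urllib/bazelisk during the build)
# at the system bundle that already contains the company certificates.
export SSL_CERT_FILE=/etc/pki/tls/cert.pem
pip install tensorstore
```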
<p>So, how could I tackle this problem?</p>
|
<python><ssl><pip><self-signed>
|
2024-01-16 08:11:57
| 2
| 3,203
|
Green 绿色
|
77,824,174
| 7,396,306
|
Using numpy random and pandas sample to make a random choice from a DF, not random distribution of choices after 100k runs
|
<p>I have made a small script for my D&D group that, when a spell is cast, chooses whether another random spell gets cast, the original spell gets cast, or nothing happens (with a 50%, 25%, 25% ratio, respectively).</p>
<p>In order to do this, I made a <code>pd.DataFrame</code> of every possible spell in the game (using data from <a href="http://dnd5e.wikidot.com/spells" rel="nofollow noreferrer">http://dnd5e.wikidot.com/spells</a>), and then appended two data frames (one that says "Original Cast" in the spell name and the other that says "Nothing Happens"), each with <code>len == len(df)/2</code> so as to have a full <code>df</code> twice the size of the original, as shown in the code below.</p>
<pre><code>import pandas as pd
import numpy as np
from os import urandom
def create_df():
df = pd.read_csv("all_spells.csv", encoding="utf-8")
df = df.dropna(axis=0, subset="Spell Name")
df_b = df[~df["Spell Name"].str.contains(r"\((.*?)\)")].reset_index(drop=True)
OG_cast = ['Orginal Cast' for i in range(int(len(df.index)/2))]
No_Magic = ['Nothing Happens' for i in range(int(len(df.index)/2))]
Nadda = [None for i in range(int(len(df.index)/2))]
df_same = pd.DataFrame(columns=df.columns,
data={
df.columns[0]: OG_cast,
df.columns[1]: Nadda,
df.columns[2]: Nadda,
df.columns[3]: Nadda,
df.columns[4]: Nadda
})
df_nothing = pd.DataFrame(columns=df.columns,
data={
df.columns[0]: No_Magic,
df.columns[1]: Nadda,
df.columns[2]: Nadda,
df.columns[3]: Nadda,
df.columns[4]: Nadda
})
df_full = pd.concat([df_b, df_same, df_nothing], axis=0).reset_index(drop=True)
return df_full
</code></pre>
<p><code>df_full.sample(n=10)</code> is shown below, for reference.</p>
<pre><code>+-----+----------------------------+---------------+----------------+---------+--------------------------------+--------------+
| | Spell Name | School | Casting Time | Range | Duration | Components |
+=====+============================+===============+================+=========+================================+==============+
| 12 | Psychic Scream | Enchantment | 1 Action | 90 feet | Instantaneous | S |
+-----+----------------------------+---------------+----------------+---------+--------------------------------+--------------+
| 18 | True Polymorph | Transmutation | 1 Action | 30 feet | Concentration up to 1 hour | V S M |
+-----+----------------------------+---------------+----------------+---------+--------------------------------+--------------+
| 670 | Orginal Cast | | | | | nan |
+-----+----------------------------+---------------+----------------+---------+--------------------------------+--------------+
| 193 | Conjure Woodland Beings | Conjuration | 1 Action | 60 feet | Concentration up to 1 hour | V S M |
+-----+----------------------------+---------------+----------------+---------+--------------------------------+--------------+
| 795 | Orginal Cast | | | | | nan |
+-----+----------------------------+---------------+----------------+---------+--------------------------------+--------------+
| 218 | Otilukes Resilient Sphere | Evocation | 1 Action | 30 feet | Concentration up to 1 minute | V S M |
+-----+----------------------------+---------------+----------------+---------+--------------------------------+--------------+
| 353 | Levitate | Transmutation | 1 Action | 60 feet | Concentration up to 10 minutes | V S M |
+-----+----------------------------+---------------+----------------+---------+--------------------------------+--------------+
| 839 | Nothing Happens | | | | | nan |
+-----+----------------------------+---------------+----------------+---------+--------------------------------+--------------+
| 459 | Silent Image | Illusion | 1 Action | 60 feet | Concentration up to 10 minutes | V S M |
+-----+----------------------------+---------------+----------------+---------+--------------------------------+--------------+
| 719 | Orginal Cast | | | | | nan |
+-----+----------------------------+---------------+----------------+---------+--------------------------------+--------------+
</code></pre>
<p>I then call the script below to get what happens when a spell is cast.</p>
<pre><code> df = create_df()
seed = int(np.random.uniform(0, len(df.index)*10))
spell = df.sample(1, random_state=seed)['Spell Name'].values[0]
print("The spell cast is:", spell)
</code></pre>
<p>To test if this was giving me the distribution I wanted (50% of the time, there is a random spell cast, 25% nothing happens, and 25% the spell works as intended), I ran</p>
<pre><code> OC = 0
NH = 0
N = 0
for i in range(100000):
seed = int(np.random.uniform(0, len(df.index)*10))
arb = df.sample(1, random_state=seed)['Spell Name'].values[0]
# print(arb)
if arb == 'Orginal Cast':
OC += 1
elif arb == 'Nothing Happens':
NH += 1
else: N += 1
print(OC, NH, N)
</code></pre>
<p>And instead of getting 50/25/25 (for the things stated above), I get an extremely consistent 47/26.5/26.5. Does anyone know why this is happening? And does anyone have a better idea for random sampling so as to more consistently get the correct ratio?</p>
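<p>As a sanity check of the target distribution itself (independent of the spell table; the category names below are placeholders), drawing the outcome category directly with explicit probabilities does give the expected ratio:</p>

```python
import numpy as np

# Sketch: decide the outcome category first with explicit probabilities;
# a concrete spell would only be picked when the category requires one.
rng = np.random.default_rng(0)
outcomes = rng.choice(
    ["random spell", "original cast", "nothing happens"],
    size=100_000,
    p=[0.5, 0.25, 0.25],
)
```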
|
<python><pandas><numpy><random>
|
2024-01-16 07:50:40
| 1
| 859
|
DrakeMurdoch
|
77,823,919
| 3,050,341
|
Manim adding simple label updaters in loop
|
<p>I have two lists of <code>MObjects</code>: elements and labels.
I need to add an updater to every label so that it stays inside the corresponding element.</p>
<p>So this is what I do:</p>
<pre class="lang-py prettyprint-override"><code>self.elements: list[Circle] = []
self.labels: list[Tex] = []
for i in range(5):
angle = i * (360 / num_elements)
point = self.circle.point_at_angle(angle*DEGREES)
element = Circle(.75, WHITE, fill_opacity=1)
element.move_to(point)
self.elements.append(element)
label = Tex(f'$\\mathbf{i+1}$', color=BLACK, font_size=70)
label.move_to(point)
label.add_updater(lambda mob: mob.move_to(element))
self.labels.append(label)
</code></pre>
<p>The problem is that all labels stick to the last element:</p>
<p><a href="https://i.sstatic.net/YmHeG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YmHeG.png" alt="enter image description here" /></a></p>
<p>How do I fix this problem?</p>
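<p>For what it's worth, here is a minimal sketch, independent of Manim, of what I suspect is going on: lambdas created in a loop close over the loop variable itself, not its current value, whereas a default argument binds the value at definition time:</p>

```python
# Late binding: every lambda sees the loop variable's final value.
late = [lambda: i for i in range(3)]
print([f() for f in late])   # all three return 2

# Early binding via a default argument captures each value when defined.
early = [lambda i=i: i for i in range(3)]
print([f() for f in early])  # returns 0, 1, 2
```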
|
<python><manim>
|
2024-01-16 06:55:25
| 1
| 3,111
|
CMTV
|
77,823,803
| 594,763
|
how can I force specific mime types for python's http.server?
|
<p>I have a bunch of files without extensions, so http.server is serving them all as <code>application/octet-stream</code> and this is causing problems...</p>
<p>Rather than giving them extensions, I would like a way to define a dictionary that would be filename -> mimetype. And then have GET requests to http.server look up the filename in that dictionary object, and set the <code>Content-type</code> header to its value.</p>
<p>Is there a way to do this?</p>
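<p>To illustrate the kind of thing I'm after, here is a rough sketch (the mapping and file names are made up). I believe <code>SimpleHTTPRequestHandler</code> consults <code>guess_type()</code> to set the <code>Content-type</code> header when serving GET requests, so something like this is what I imagine:</p>

```python
import http.server
import os

# Hypothetical mapping: extensionless filename -> MIME type
MIME_OVERRIDES = {
    "report": "application/pdf",
    "notes": "text/plain",
}

class OverrideHandler(http.server.SimpleHTTPRequestHandler):
    # SimpleHTTPRequestHandler calls guess_type() to choose Content-type,
    # so overriding it lets the dictionary take precedence.
    def guess_type(self, path):
        name = os.path.basename(str(path))
        if name in MIME_OVERRIDES:
            return MIME_OVERRIDES[name]
        return super().guess_type(path)

# Serving would then be something like:
#   import socketserver
#   with socketserver.TCPServer(("", 8000), OverrideHandler) as httpd:
#       httpd.serve_forever()
```

Whether overriding <code>guess_type</code> is the sanctioned hook for this is part of my question.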
|
<python><http.server>
|
2024-01-16 06:29:06
| 1
| 9,814
|
patrick
|
77,823,712
| 23,002,898
|
In Django how to compare two texts and print a message about correctness or error?
|
<p>I'm new to Django. In Django I would like to compare two texts, one from the frontend and one from the backend, and receive a message saying whether they are the same or different. In reality these two texts are two short HTML snippets. The page can reload, that's fine; I'm only interested in the logic and code behind comparing the two texts (not in the use of Ajax, jQuery, or JS, although if the page doesn't reload, it's obviously better).</p>
<p><a href="https://i.sstatic.net/QsnTm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsnTm.png" alt="enter image description here" /></a></p>
<p><strong>INPUT USER.</strong> A text will be inserted in the front-end by the user, in a black <code>div</code> (<strong>no textarea</strong>) with <code>contenteditable="true"</code>, which is already present in my code. I have already prepared an example text in the black panel, so I would like to compare the entire block of HTML text (from <code><!DOCTYPE html></code> up to <code></html></code>). For several reasons I want to use a div, and not a textarea.</p>
<p><strong>TEXT TO COMPARE IN THE BACKEND.</strong> While the other text will be contained in the backend (in a variable or something similar), and it will be the same html code (considering that the purpose is precisely comparison). I inserted the text to be compared into a variable named <code>corrected_text_backend</code>, but I don't know if it's a good idea, because in the future I would like to add conditions to the comparison, for example comparing only small parts of code (and not the entire code). So I would like to use a better way if it exists.</p>
<p><strong>COMPARISON RESULT.</strong> Then I will click the button, and in the gray rectangle I would like to print <code>CORRECT: the two texts are the same</code> or <code>ERROR: the two texts are different</code>.</p>
<p>Importantly, I don't want to use a textarea; as already said, I want to use the div with contenteditable="true" that I already have in my code.</p>
<p>I'm having problems with the logic and code and can't proceed with the code in <code>views.py</code>. Can you help me and show me how? I'm new to Django, sorry.</p>
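<p>What I have in mind for the helper, as a rough sketch (the function names and the whitespace normalization are my own placeholders and assumptions about what "same" should mean):</p>

```python
import re

def normalize(html_text: str) -> str:
    # Collapse runs of whitespace so indentation/newline differences
    # don't count as errors (an assumption about what "same" means).
    return re.sub(r"\s+", " ", html_text).strip()

def compare_texts(user_text: str, corrected_text_backend: str) -> str:
    if normalize(user_text) == normalize(corrected_text_backend):
        return "CORRECT: the two texts are the same"
    return "ERROR: the two texts are different"
```

How to wire this into the view and get the user's div content to the backend is where I'm stuck.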
<p><strong>INDEX.HTML</strong></p>
<pre><code>{% load static %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>University</title>
<link rel="stylesheet" href ="{% static 'css/styleparagone.css' %}" type="text/css">
</head>
<body>
<div class="test">
<div>Input User</div>
<div class="hilite row">
<pre class="hilite-colors"><code class="language-html"></code></pre>
<div
data-lang="html"
class="hilite-editor"
contenteditable="true"
spellcheck="false"
autocorrect="off"
autocapitalize="off"
>&lt;!DOCTYPE html>
&lt;html>
&lt;head>
&lt;title>Page Title&lt;/title>
&lt;/head>
&lt;body>
&lt;h1 class="heading">This is a Heading&lt;/h1>
&lt;p>This is a paragraph.&lt;/p>
&lt;/body>
&lt;/html></div>
</div>
<div>Comparison Result</div>
<div class="result row2 rowstyle2"></div>
</div>
<form action="{% url 'function_comparison' %}" method="post">
{% csrf_token %}
<button type="submit" name='mybtn'>Button</button>
</form>
</div>
<script src="{% static 'js/editor.js' %}"></script>
</body>
</html>
</code></pre>
<p><strong>EDITOR.JS</strong></p>
<pre><code>/// CODE EDITOR ///
const elHTML = document.querySelector(`[data-lang="html"]`);
const elCSS = document.querySelector(`[data-lang="css"]`);
const elJS = document.querySelector(`[data-lang="js"]`);
const elPreview = document.querySelector("#preview");
const hilite = (el) => {
const elCode = el.previousElementSibling.querySelector("code");
elCode.textContent = el.textContent;
delete elCode.dataset.highlighted;
hljs.highlightElement(elCode);
};
const preview = () => {
const encodedCSS = encodeURIComponent(`<style>${elCSS.textContent}</style>`);
const encodedHTML = encodeURIComponent(elHTML.textContent);
const encodedJS = encodeURIComponent(`<scr` + `ipt>${elJS.textContent}</scr` + `ipt>`);
const dataURL = `data:text/html;charset=utf-8,${encodedCSS + encodedHTML + encodedJS}`;
elPreview.src = dataURL;
};
// Initialize!
[elHTML, elCSS, elJS].forEach((el) => {
el.addEventListener("input", () => {
hilite(el);
preview();
});
hilite(el);
});
preview();
</code></pre>
<p><strong>CSS</strong></p>
<pre><code>.row {
width: 500px;
padding: 10px;
height: 150px; /* Should be removed. Only for demonstration */
}
.rowstyle1 {
background-color: black;
color: white;
}
.row2 {
margin-top: 20px;
width: 500px;
padding: 10px;
height: 15px; /* Should be removed. Only for demonstration */
}
.rowstyle2 {
background-color: #ededed;;
color: black;
}
/******** CODE EDITOR CSS **********/
/* Scrollbars */
::-webkit-scrollbar {
width: 5px;
height: 5px;
}
::-webkit-scrollbar-track {
background: rgba(0, 0, 0, 0.1);
border-radius: 0px;
}
::-webkit-scrollbar-thumb {
background-color: rgba(255, 255, 255, 0.3);
border-radius: 1rem;
}
.hilite {
position: relative;
background: #1e1e1e;
height: 120px;
overflow: auto;
width: 500px;
height: 250px;
}
.hilite-colors code,
.hilite-editor {
padding: 1rem !important;
top: 0;
left: 0;
right: 0;
bottom: 0;
white-space: pre-wrap;
font: 13px/1.4 monospace;
width: 100%;
background: transparent;
border: 0;
outline: 0;
}
/* THE OVERLAYING CONTENTEDITABLE WITH TRANSPARENT TEXT */
.hilite-editor {
display: inline-block;
position: relative;
color: transparent; /* Make text invisible */
caret-color: hsl( 50, 75%, 70%); /* But keep caret visible */
width: 100%;
}
/* THE UNDERLAYING DIV WITH HIGHLIGHT COLORS */
.hilite-colors {
position: absolute;
user-select: none;
width: 100%;
color: white;
</code></pre>
<p><strong>MYAPP/URLS.PY</strong></p>
<pre><code>from django.urls import path
from . import views
from . import views as function_comparison
urlpatterns=[
path('', views.index, name='index'),
path('function_comparison/', function_comparison,name="function_comparison"),
]
</code></pre>
<p><strong>PROJECT/URLS.PY</strong></p>
<pre><code>from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('', include('App1.urls')),
]
</code></pre>
<p><strong>MYAPP/VIEWS.PY</strong></p>
<pre><code>from django.shortcuts import render, redirect
from django.http import HttpResponse
def index(request):
"""View function for home page of site."""
return render(request, 'index.html')
def function_comparison(request):
if request.method == "GET":
corrected_text_backend = """
<!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1 class="heading">This is a Heading</h1>
<p>This is a paragraph.</p>
</body>
</html>"""
.....
return render(request, "index.html")
</code></pre>
|
<python><html><python-3.x><django><django-views>
|
2024-01-16 06:01:22
| 1
| 307
|
Nodigap
|
77,823,499
| 107,294
|
How to get tox to use a Python interpreter on a specifc path
|
<p>I'm using tox version 4.12.0, and have <code>env_list = py{3.8,3.9}-pytest5</code>. This successfully finds and uses the OS-supplied Python 3.9 to run those tests, but skips testing with Python 3.8 because it can't find an interpreter. (This is correct behaviour so far: I have no Python 3.8 interpreter in my path and do not want one in my path.)</p>
<p>So I'd like to get tox to use a Python interpreter on a specifc path, in this case one compiled with <code>pythonz</code>: <code>/home/cjs/.pythonz/pythons/CPython-3.8.18/bin/python3</code>. However, when I add a <code>base_python</code> setting for this:</p>
<pre><code>base_python =
py3.8: /home/cjs/.pythonz/pythons/CPython-3.8.18/bin/python3
</code></pre>
<p>It gives me the following complaint:</p>
<pre><code>py3.8-pytest5: failed with env name py3.8-pytest5 conflicting with base python /home/cjs/.pythonz/pythons/CPython-3.8.18/bin/python3
</code></pre>
<p>I get the same complaint even if I change the executable name at the end of the path to <code>python3.8</code> (which also exists in that directory). I've looked at the documentation for <a href="https://tox.wiki/en/latest/config.html#ignore_base_python_conflict" rel="nofollow noreferrer"><code>ignore_base_python_conflict</code></a>, which I don't really understand, and tried both <code>true</code> and <code>false</code> settings of that variable, but it seems to make no difference.</p>
<p>How do I tell tox to use a specific Python interpreter on a specific path for a particular test?</p>
<p>Bonus points if you can tell me how to easily set this path at runtime, since for other users it's obviously not going to be under <code>/home/cjs/...</code>.</p>
|
<python><tox>
|
2024-01-16 04:38:35
| 1
| 27,842
|
cjs
|
77,823,233
| 1,492,229
|
How to filter a dataframe based on another dataframe's index
|
<p>I am building an ML model.</p>
<p>There are two datasets, <code>X_test</code> and <code>y_test</code>.</p>
<p>Here is how they look:</p>
<pre><code>X_test
Out[19]:
AJ3158003 SY0942007 WW3873005 ... LZ014003 QP4868006 RepID
9072 5 5 5 ... 0 0 292715
2296 10 10 10 ... 0 8 239729
6333 7 7 7 ... -1 -2 98758
8631 8 8 8 ... -1 0 261983
8420 5 5 5 ... 0 0 248760
... ... ... ... ... ... ...
4417 9 9 9 ... -1 -1 300160
4028 9 9 9 ... 0 0 138284
5881 9 9 9 ... 0 0 66981
9052 5 5 5 ... 0 0 291510
1106 5 5 5 ... 0 0 153736
</code></pre>
<p>and <code>y_test</code> looks like this</p>
<pre><code>y_test
Out[21]:
9072 1
2296 0
6333 1
8631 1
8420 1
..
4417 0
4028 0
5881 1
9052 1
1106 0
Name: ABCL, Length: 1932, dtype: int32
</code></pre>
<p>You can see that the indices in both dataframes match in length and order.</p>
<p>What I did is that I filtered <code>X_test</code> based on a third dataset called <code>dfDone</code></p>
<pre><code>X_test = X_test[~X_test.RepID.isin(dfDone.RepID)]
</code></pre>
<p>Now <code>X_test</code> and <code>y_test</code> do not match</p>
<p>I want to filter <code>y_test</code> so it only keeps the records that match the index of the filtered <code>X_test</code></p>
<p>How to do that?</p>
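<p>To make the setup concrete, here is a miniature made-up reproduction of the situation, including the index-based alignment I'm considering (I'm unsure whether this is the idiomatic way):</p>

```python
import pandas as pd

# Tiny hypothetical stand-ins for X_test / y_test sharing the same index
X_test = pd.DataFrame({"RepID": [292715, 239729, 98758]}, index=[9072, 2296, 6333])
y_test = pd.Series([1, 0, 1], index=[9072, 2296, 6333], name="ABCL")
done_ids = [239729]  # stand-in for dfDone.RepID

# Filter X_test as in the question...
X_test = X_test[~X_test.RepID.isin(done_ids)]
# ...then align y_test on whatever index survived the filter
y_test = y_test.loc[X_test.index]
```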
|
<python><dataframe>
|
2024-01-16 02:48:12
| 0
| 8,150
|
asmgx
|
77,823,220
| 9,306,620
|
Python: search for a list of substrings in a list of list of strings
|
<p>I'm following the post <a href="https://stackoverflow.com/questions/9316970/search-a-list-of-list-of-strings-for-a-list-of-strings-in-python-efficiently">Search a list of list of strings for a list of strings in python efficiently</a> and trying to search for a list of substrings in a list of lists of strings. That post finds the indices of the sublists that match a list of strings. In my code, I take substrings of L1 and flatten it to match the L2 strings. How do I get a list of all the L1 strings that contain L2 strings as substrings? Right now, I'm getting the indices of the L1 sublists that match each L2 string.</p>
<p>This is how far I got. The code that I'm following:</p>
<pre><code>from bisect import bisect_left, bisect_right
from itertools import chain
L1=[["animal:cat","pet:dog","fruit:apple"],["fruit:orange","color:green","color:red","fruit:apple"]]
L2=["apple", "cat","red"]
M1 = [[i]*len(j) for i, j in enumerate(L1)]
M1 = list(chain(*M1))
L1flat = list(chain(*L1))
I = sorted(range(len(L1flat)), key=L1flat.__getitem__)
L1flat = sorted([L1flat[i].split(':')[1] for i in I])
print(L1flat)
M1 = [M1[i] for i in I]
for item in L2:
s = bisect_left(L1flat, item)
e = bisect_right(L1flat, item)
print(item, M1[s:e])
#print(L1flat[s:e])
sub = M1[s:e]
for y in sub:
print('%s found in %s' % (item, str(L1(y))))
</code></pre>
<p>Edit: I just realized I'm getting errors in my search for the second and third items.</p>
<p>3 things:</p>
<ol>
<li><p>I created the M1 by enumerating split elements of L1</p>
<p>L1Splitted = [i[0].split(':')[1] for i in L1]</p>
<p>M1 = [[i]*len(j) for i, j in enumerate(L1Splitted)]</p>
</li>
<li><p>I reversed the elements in L1flat and split the elements</p>
<p>L1flatReversed = []</p>
<p>for j, x in enumerate(L1flat)</p>
<pre><code> L1flatReversed.append(reverseString(x, ':'))
</code></pre>
</li>
<li><p>Then I made another list of reversed strings split</p>
<p>L1flatReversedSplit = [L1flatReversed[i].split(':')[0] for i in I]</p>
</li>
</ol>
<p>now my s and e are bisecting on L1flatReversedSplit</p>
|
<python><search>
|
2024-01-16 02:42:30
| 2
| 1,041
|
SoftwareDveloper
|
77,823,105
| 7,663,420
|
I am classifying each word in a sentence (Named Entity Recognition) but I receive ...an unexpected keyword argument 'grouped_entities'
|
<pre><code>sentence = 'American Airlines was the first airline to fly every A380 flight perfectly when President George Bush was in Office. The Woodlands Texas is a great place to be.'
ner = pipeline('text-classification', model='dbmdz/bert-large-cased-finetuned-conll03-english', grouped_entities=True)
ners = ner(sentence)
print('\nSentence:')
print(wrapper.fill(sentence))
print('\n')
for n in ners:
print(f"{n['word']} -> {n['entity_group']}")
</code></pre>
<p>I am working inside Google Colab.
I tried
<code>!pip install transformers --upgrade</code>, assuming the error was caused by a bug in the transformers library and that installing the latest version would fix it,
but I received the following:</p>
<pre><code>/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_fast.py in _encode_plus(self, text, text_pair, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
574 ) -> BatchEncoding:
575 batched_input = [(text, text_pair)] if text_pair else [text]
--> 576 batched_output = self._batch_encode_plus(
577 batched_input,
578 is_split_into_words=is_split_into_words,
TypeError: PreTrainedTokenizerFast._batch_encode_plus() got an unexpected keyword argument 'grouped_entities'
</code></pre>
|
<python><nlp><google-cloud-colab-enterprise>
|
2024-01-16 01:47:34
| 1
| 303
|
Nathaniel Hibbler
|
77,823,058
| 3,719,459
|
How to create a 2-row table header with docutils
|
<p>I wrote an extension for Sphinx to read code coverage files and present them as a table in a Sphinx generated HTML documentation.</p>
<p><a href="https://i.sstatic.net/5VNdj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5VNdj.png" alt="enter image description here" /></a></p>
<p>Currently the table has a single header row, with e.g. 3 columns for statement-related values and 4 columns for branch-related data. I would like to create a 2-row table header so that multiple columns are grouped.</p>
<p>In pure HTML this would be done by adding <code>colspan=3</code>. But how can I achieve that with docutils?</p>
<p>The full sources can be found here: <a href="https://github.com/pyTooling/sphinx-reports/blob/main/sphinx_reports/CodeCoverage.py#L169" rel="nofollow noreferrer">https://github.com/pyTooling/sphinx-reports/blob/main/sphinx_reports/CodeCoverage.py#L169</a></p>
<p>Interesting code is this:</p>
<pre class="lang-py prettyprint-override"><code> def _PrepareTable(self, columns: Dict[str, int], identifier: str, classes: List[str]) -> Tuple[nodes.table, nodes.tgroup]:
table = nodes.table("", identifier=identifier, classes=classes)
tableGroup = nodes.tgroup(cols=(len(columns)))
table += tableGroup
tableRow = nodes.row()
for columnTitle, width in columns.items():
tableGroup += nodes.colspec(colwidth=width)
tableRow += nodes.entry("", nodes.paragraph(text=columnTitle))
tableGroup += nodes.thead("", tableRow)
return table, tableGroup
def _GenerateCoverageTable(self) -> nodes.table:
# Create a table and table header with 5 columns
table, tableGroup = self._PrepareTable(
identifier=self._packageID,
columns={
"Module": 500,
"Total Statements": 100,
"Excluded Statements": 100,
"Covered Statements": 100,
"Missing Statements": 100,
"Total Branches": 100,
"Covered Branches": 100,
"Partial Branches": 100,
"Missing Branches": 100,
"Coverage in %": 100
},
classes=["report-doccov-table"]
)
tableBody = nodes.tbody()
tableGroup += tableBody
</code></pre>
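<p>My current understanding, which I have not verified end-to-end, is that docutils <code>entry</code> nodes accept <code>morecols</code>/<code>morerows</code> attributes as the <code>colspan</code>/<code>rowspan</code> equivalent, roughly like this:</p>

```python
from docutils import nodes

thead = nodes.thead()

# First header row: one grouping cell spanning 3 columns.
# morecols=2 means "2 additional columns", i.e. 3 columns in total.
row1 = nodes.row()
group = nodes.entry(morecols=2)
group += nodes.paragraph(text="Statements")
row1 += group
thead += row1

# Second header row: the individual column titles under the group.
row2 = nodes.row()
for title in ("Total", "Excluded", "Covered"):
    cell = nodes.entry()
    cell += nodes.paragraph(text=title)
    row2 += cell
thead += row2
```

Is this the intended mechanism, and does the HTML writer translate it to <code>colspan</code>?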
|
<python><html-table><code-coverage><python-sphinx><docutils>
|
2024-01-16 01:26:32
| 2
| 16,480
|
Paebbels
|
77,822,962
| 866,082
|
How to manage datasets in MLflow?
|
<p>Consider the following code snippet taken from MLflow <a href="https://mlflow.org/docs/latest/python_api/mlflow.data.html#mlflow-data" rel="nofollow noreferrer">documentation page</a>:</p>
<pre class="lang-py prettyprint-override"><code>import mlflow.data
import pandas as pd
from mlflow.data.pandas_dataset import PandasDataset
# Construct a Pandas DataFrame using iris flower data from a web URL
dataset_source_url = "http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv"
df = pd.read_csv(dataset_source_url)
# Construct an MLflow PandasDataset from the Pandas DataFrame, and specify the web URL
# as the source
dataset: PandasDataset = mlflow.data.from_pandas(df, source=dataset_source_url)
with mlflow.start_run():
# Log the dataset to the MLflow Run. Specify the "training" context to indicate that the
# dataset is used for model training
mlflow.log_input(dataset, context="training")
# Retrieve the run, including dataset information
run = mlflow.get_run(mlflow.last_active_run().info.run_id)
dataset_info = run.inputs.dataset_inputs[0].dataset
print(f"Dataset name: {dataset_info.name}")
print(f"Dataset digest: {dataset_info.digest}")
print(f"Dataset profile: {dataset_info.profile}")
print(f"Dataset schema: {dataset_info.schema}")
# Load the dataset's source, which downloads the content from the source URL to the local
# filesystem
dataset_source = mlflow.data.get_source(dataset_info)
dataset_source.load()
</code></pre>
<p>This code starts a new run and logs an input which is a dataset. Does this mean that in MLflow we are saving datasets as part of runs? If so, how do we associate the training run of a model with a dataset? I'm confused about how MLflow handles/tracks datasets! TBH, I was expecting datasets to be a different entity type (than runs) that we could link to each model-training run.</p>
|
<python><machine-learning><dataset><mlflow>
|
2024-01-16 00:40:36
| 1
| 17,161
|
Mehran
|
77,822,824
| 181,105
|
How to extend TSP to MTSP using Pulp
|
<p>We've studied TSP and are now tasked with extending it to multiple salespersons.
Below is code using PuLP with my added logic, which unfortunately does not work: it does not identify the correct tours.</p>
<p>Eg. using below cost matrix:</p>
<pre><code>cost_matrix = [[ 0, 1, 3, 4],
[ 1, 0, 2, 3 ],
[ 3, 2, 0, 4 ],
[ 4, 3, 4, 0]]
</code></pre>
<p>and n = len(cost_matrix)</p>
<p>for salespersons (k = 3) the results should be:</p>
<p>The path of SP_1 is: 0 => 1 => 0</p>
<p>The path of SP_2 is: 0 => 2 => 3 => 0</p>
<p>Can someone help me solve this problem?</p>
<pre><code> # create encoding variables
bin_vars = [ # add a binary variable x_{ij} if i not = j else simply add None
[ LpVariable(f'x_{i}_{j}', cat='Binary') if i != j else None for j in range(n)]
for i in range(n) ]
time_stamps = [LpVariable(f't_{j}', lowBound=0, upBound=n, cat='Continuous') for j in range(1, n)]
# create add the objective function
objective_function = lpSum( [ lpSum([xij*cj if xij != None else 0 for (xij, cj) in zip(brow, crow) ])
for (brow, crow) in zip(bin_vars, cost_matrix)] )
prob += objective_function
# add constraints
for i in range(n):
# Exactly one leaving variable
prob += lpSum([xj for xj in bin_vars[i] if xj != None]) == 1
# Exactly one entering
prob += lpSum([bin_vars[j][i] for j in range(n) if j != i]) == 1
# add timestamp constraints
for i in range(1,n):
for j in range(1, n):
if i == j:
continue
xij = bin_vars[i][j]
ti = time_stamps[i-1]
tj = time_stamps[j -1]
prob += tj >= ti + xij - (1-xij)*(n+1)
# Binary variables to ensure each node is visited by a salesperson
visit_vars = [LpVariable(f'u_{i}', cat='Binary') for i in range(1, n)]
# Salespersons constraints
prob += lpSum([bin_vars[0][j] for j in range(1, n)]) == k
prob += lpSum([bin_vars[i][0] for i in range(1, n)]) == k
for i in range(1, n):
prob += lpSum([bin_vars[i][j] for j in range(n) if j != i]) == visit_vars[i - 1]
prob += lpSum([bin_vars[j][i] for j in range(n) if j != i]) == visit_vars[i - 1]
# Done: solve the problem
status = prob.solve(PULP_CBC_CMD(msg=False))
</code></pre>
|
<python><pulp><traveling-salesman>
|
2024-01-15 23:38:09
| 1
| 413
|
Sys
|
77,822,809
| 9,782,274
|
pydantic: How to parse & validate a model from a string with the model attributes as names (not as python dict)
|
<p>I'm able to export records as a csv from my fastapi/sqlmodel/sqladmin backend. The sqladmin (backend panel) only allows limited configuration of the export format and I end up with the following export data for my many-to-many relationship called <strong>"assessed_in"</strong>:</p>
<pre><code>"[Period(id=1, name='1. quarter 2022', period_id='q1_2022', status=<PeriodStatusEnum.in_assessment: 'in_assessment'>), Period(id=2, name='2. Quarter 2022', period_id='q2_2022', status=<PeriodStatusEnum.in_assessment: 'in_assessment'>)]"
</code></pre>
<p>My models are setup as follows:</p>
<pre><code># Database Models
class PeriodStatusEnum(str, enum.Enum):
in_assessment = "in_assessment"
readonly = "readonly"
hidden = "hidden"
class Period(BaseClass, table=True):
period_id: str = Field(index=True, unique=True)
name: str
status: PeriodStatusEnum = Field(sa_column=Column(Enum(PeriodStatusEnum)))
class DimensionPeriodLink(SQLModel, table=True):
dimension_id: int | None = Field(default=None, foreign_key="dimension.id", primary_key=True)
period_id: int | None = Field(default=None, foreign_key="period.id", primary_key=True)
class Dimension(BaseClass, table=True):
dimension_id: str = Field(index=True, unique=True)
name_EN: str
name_DE: str
order_nr: int
assessed_in: list[Period] = Relationship(link_model=DimensionPeriodLink)
</code></pre>
<p>In a later stage I need to read in the csv and create the records with the many to many relationship in python -> which works fine for all attributes except the "assessed_in" m2m relationship (as it's not a pure dict).</p>
<pre><code>#when I read in the exported csv file, I get the following data per row/record
dimension_dict = {'assessed_in': "[Period(status=<PeriodStatusEnum.in_assessment: 'in_assessment'>, id=1, name='1. Quartal 2022', period_id='q1_2022'), Period(status=<PeriodStatusEnum.in_assessment: 'in_assessment'>, id=2, name='2. Quartal 2022', period_id='q2_2022')]", 'level_definitions': '[]', 'questions': '[]', 'id': '1', 'dimension_id': 'DIM10', 'name_EN': 'Organizational structure and roles', 'name_DE': 'Organisationsstruktur und Rollen', 'order_nr': '1'}
new_record = Dimension.model_validate(dimension_dict)
</code></pre>
<p>What would be a smart solution for parsing that model, including the relationship, with pydantic/sqlmodel?
Thanks a lot for your inputs, it's highly appreciated! <3</p>
|
<python><fastapi><pydantic><sqlmodel>
|
2024-01-15 23:30:33
| 0
| 397
|
Simon
|
77,822,594
| 3,358,488
|
Python relative import not working even though root package name is being used
|
<p>I've read quite a bit about the tricky aspects of relative imports (especially <a href="https://stackoverflow.com/questions/14132789/relative-imports-for-the-trilli%D0%BEnth-time/14132912#14132912">this question</a>). However, the following example is still not working.</p>
<p>I have the following project (<a href="https://github.com/rodrigodesalvobraz/UnittestDiscovery" rel="nofollow noreferrer">available on GitHub</a>):</p>
<p><a href="https://i.sstatic.net/f6JIn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f6JIn.png" alt="DirectoryTree" /></a></p>
<p>Here's <code>test_some_unit.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import unittest
from ..util.foo import run_foo
class TestSomeUnit(unittest.TestCase):
def test_some_unit(self):
run_foo()
print("Unit package")
</code></pre>
<p>The <code>integration</code> side is analogous.</p>
<p>Running discovery from current working directory <code>UnittestDiscovery</code>which is the parent of <code>package</code>, for the root package <code>package</code> produces a relative import error:</p>
<pre class="lang-none prettyprint-override"><code>UnittestDiscovery> python -m unittest discover package
EE
======================================================================
ERROR: integration.test_some_integration (unittest.loader._FailedTest.integration.test_some_integration)
----------------------------------------------------------------------
ImportError: Failed to import test module: integration.test_some_integration
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\Lib\unittest\loader.py", line 407, in _find_test_path
module = self._get_module_from_name(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\anaconda3\Lib\unittest\loader.py", line 350, in _get_module_from_name
__import__(name)
File "C:\Users\rodrigobraz\Documents\PyCharmProjects\UnittestDiscovery\package\integration\test_some_integration.py", line 3, in <module>
from ..util.foo import run_foo
ImportError: attempted relative import beyond top-level package
======================================================================
ERROR: unit.test_some_unit (unittest.loader._FailedTest.unit.test_some_unit)
----------------------------------------------------------------------
ImportError: Failed to import test module: unit.test_some_unit
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\Lib\unittest\loader.py", line 407, in _find_test_path
module = self._get_module_from_name(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ProgramData\anaconda3\Lib\unittest\loader.py", line 350, in _get_module_from_name
__import__(name)
File "C:\Users\rodrigobraz\Documents\PyCharmProjects\UnittestDiscovery\package\unit\test_some_unit.py", line 3, in <module>
from ..util.foo import run_foo
ImportError: attempted relative import beyond top-level package
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (errors=2)
</code></pre>
<p>Why does this fail?</p>
|
<python><relative-import>
|
2024-01-15 22:16:59
| 2
| 5,872
|
user118967
|
77,822,577
| 336,489
|
Azure Function using v2 programming model for debugging
|
<p>I have inherited the following code in Azure Functions and I want to ensure it is correct and find out the right way to call it using the Python Azure Functions v2 programming model.</p>
<pre><code>import os
import gnupg
import azure.functions as func
gpg = gnupg.GPG(gnupghome='/tmp')
class PGPDecryption:
def __init__(self, prikey):
self.fullkey = prikey
def decrypt(self, blob_data):
key_data = self.fullkey.strip()
gpg.import_keys(key_data)
decrypted_data = gpg.decrypt(blob_data)
return str(decrypted_data)
# Blob Trigger for input and Blob Output binding for decrypted file
@func.blob_input(name='inputBlob', path='input-container/{name}',
connection='AzureWebJobsStorage')
@func.blob_output(name='outputBlob', path='output-container/{name}.decrypted',
connection='AzureWebJobsStorage')
def main(inputBlob: func.InputStream, outputBlob: func.Out[func.InputStream], context:
func.Context):
prikey = '<Your PGP Private Key>' # Replace with your PGP private key
decryptor = PGPDecryption(prikey)
decrypted_data = decryptor.decrypt(inputBlob.read())
# Writing decrypted data to output binding
outputBlob.set(decrypted_data)
</code></pre>
<p>The objective is for the code to get a file from Azure Blob and decrypt it and put it back to another Azure Blob location.
Does the code actually achieve this and how can I debug this?</p>
|
<python><azure><encryption><azure-functions><pgp>
|
2024-01-15 22:12:35
| 1
| 5,130
|
GilliVilla
|
77,822,487
| 14,954,262
|
Python go from a list of tuple to a list of integer without quotes
|
<p>I have this code :</p>
<pre><code>selected_listbox_select_criteria_column = [('0', 'firstName'),('1', 'lastName'),('2', 'phone')]
column_to_check_for_criteria = []
column_to_check_for_criteria(', '.join(elems[0] for elems in selected_listbox_select_criteria_column))
</code></pre>
<p>It gives me this result :</p>
<pre><code>['0,1,2']
</code></pre>
<p>How can I have a strict list of integer like this :</p>
<pre><code>[0,1,2]
</code></pre>
<p>without the quotes, to finally get the equivalent of: <code>column_to_check_for_criteria = [0,1,2]</code></p>
<p>Thanks</p>
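<p>For reference, a sketch based on the sample data above: a list comprehension that converts the first element of each tuple to <code>int</code> gives the desired list directly, without joining strings:</p>

```python
selected_listbox_select_criteria_column = [('0', 'firstName'), ('1', 'lastName'), ('2', 'phone')]

# Convert the first element of each tuple to an int instead of joining strings.
column_to_check_for_criteria = [int(elems[0]) for elems in selected_listbox_select_criteria_column]
print(column_to_check_for_criteria)  # [0, 1, 2]
```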
|
<python><list><integer><quotes>
|
2024-01-15 21:46:07
| 4
| 399
|
Nico44044
|
77,822,378
| 4,267,726
|
Get buffer of line without joining itself in shapely
|
<p>I have a LineString which crosses itself (<code>LineString(...)</code>)
<a href="https://i.sstatic.net/dkk8z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dkk8z.png" alt="enter image description here" /></a></p>
<p>and make a Buffer out of it (<code>LineString().buffer(2, cap_style=BufferCapStyle.flat)</code>)</p>
<p><a href="https://i.sstatic.net/lkCEN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lkCEN.png" alt="enter image description here" /></a></p>
<p>How to make the buffer not join itself and let it look like</p>
<p><a href="https://i.sstatic.net/L6gco.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L6gco.png" alt="enter image description here" /></a></p>
<p>?</p>
|
<python><shapely>
|
2024-01-15 21:15:31
| 0
| 385
|
harrow
|
77,822,340
| 3,358,488
|
Unittest discovery not working as expected
|
<p>I have the following project (<a href="https://github.com/rodrigodesalvobraz/UnittestDiscovery" rel="nofollow noreferrer">available on GitHub</a>):</p>
<p><a href="https://i.sstatic.net/MZYGE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MZYGE.png" alt="enter image description here" /></a></p>
<p>Here's <code>test_some_unit.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from ..util.foo import run_foo
def test_some_unit():
run_foo()
print("Unit test")
</code></pre>
<p>Running discovery for <code>test.unit</code> produces 0 tests:</p>
<pre><code>python -m unittest discover test.unit
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
</code></pre>
<p>Why?</p>
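<p>One likely cause (an assumption, since discovery can fail for several reasons, e.g. a missing <code>__init__.py</code>): plain module-level functions are not collected by unittest. Unlike pytest, unittest discovery only picks up test methods defined on <code>unittest.TestCase</code> subclasses. A minimal sketch:</p>

```python
import unittest

# unittest only collects test methods defined on TestCase subclasses;
# a module-level test_some_unit() function is silently ignored.
class TestSomeUnit(unittest.TestCase):
    def test_some_unit(self):
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSomeUnit)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```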
|
<python><unit-testing>
|
2024-01-15 21:06:27
| 1
| 5,872
|
user118967
|
77,822,303
| 625,396
|
For Python numpy.unique(ar, return_index=False) - indexes of unique are returned in the natural order ALWAYS?
|
<p>In Python's numpy, will the indexes returned by np.unique(a, return_index=True) ALWAYS come in natural order?
I.e. do we get the index of the FIRST occurrence of each element, or can the order be arbitrary?</p>
<p>In most examples I tried - the order is natural, but just want to confirm it is ALWAYS the case ?</p>
<p>Example:</p>
<pre><code>a = np.array(['a', 'b', 'b', 'c', 'a'])
u, indices = np.unique(a, return_index=True)
</code></pre>
<pre><code>array([0, 1, 3])
</code></pre>
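<p>For reference, the documented behaviour is that <code>return_index</code> gives the indices of the <em>first</em> occurrences of the unique values in the original array (ordered by sorted unique value, not by position). A quick check against the example above:</p>

```python
import numpy as np

a = np.array(['a', 'b', 'b', 'c', 'a'])
u, indices = np.unique(a, return_index=True)
# Each returned index is the position of the FIRST occurrence of that value.
firsts = np.array([np.flatnonzero(a == v)[0] for v in u])
print(indices)  # [0 1 3]
```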
|
<python><numpy>
|
2024-01-15 20:58:01
| 1
| 974
|
Alexander Chervov
|
77,822,220
| 15,148,870
|
Python - remove commas and leave decimal points from string as number
|
<p>As of title, I need to remove commas from string number and leave decimal point, but the problem is input can be with <code>.</code> or <code>,</code>. This input will be saved for Django model's <code>DecimalField</code></p>
<p>For example:</p>
<h5>Input:</h5>
<p>"250,000,50"<br />
"250,000"<br />
"250000.00"<br />
"25,000"</p>
<h5>Output:</h5>
<p>250000.50<br />
250000.00<br />
250000.00<br />
25000.00</p>
<h4>What I have tried:</h4>
<pre><code>def _normalize_amount(self, value: str):
normalized_amount = value[:-3].replace(",", "") + value[-3:]
return Decimal(normalized_amount)
</code></pre>
<p>Problem is that this function can not handle the cases where user enters <code>,</code> as decimal point. How to solve this case ?</p>
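<p>One heuristic sketch (this rests on an assumption about the input format: a trailing separator followed by exactly two digits marks the decimal part, while every other separator is a thousands separator):</p>

```python
from decimal import Decimal

def normalize_amount(value: str) -> Decimal:
    # Treat "." and "," uniformly, then decide whether the last group
    # is a two-digit decimal part or a three-digit thousands group.
    unified = value.strip().replace(".", ",")
    head, sep, tail = unified.rpartition(",")
    if sep and len(tail) == 2:
        number = head.replace(",", "") + "." + tail
    else:
        number = unified.replace(",", "")
    return Decimal(number).quantize(Decimal("0.01"))

print(normalize_amount("250,000,50"))  # 250000.50
```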
|
<python><python-3.x><django><regex>
|
2024-01-15 20:35:23
| 1
| 328
|
Saidamir
|
77,822,202
| 900,394
|
How to add a dense layer on top of SentenceTransformer?
|
<p>In this tutorial (<a href="https://huggingface.co/blog/how-to-train-sentence-transformers" rel="nofollow noreferrer">Train and Fine-Tune Sentence Transformers Models</a>) they go through creating a SentenceTransformer by combining a word embedding module with a pooling layer:</p>
<pre><code>from sentence_transformers import SentenceTransformer, models
## Step 1: use an existing language model
word_embedding_model = models.Transformer('distilroberta-base')
## Step 2: use a pool function over the token embeddings
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
## Join steps 1 and 2 using the modules argument
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
# model.encode("Hi there") # => works fine
</code></pre>
<p>And then they say:</p>
<blockquote>
<p>If necessary, additional layers can be added, for example, dense, bag of words, and convolutional.</p>
</blockquote>
<p>I tried to add a dense layer on top of the model, but I'm getting an error:</p>
<pre><code>from sentence_transformers import SentenceTransformer, models
## Step 1: use an existing language model
word_embedding_model = models.Transformer('distilroberta-base')
## Step 2: use a pool function over the token embeddings
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
## My Dense Layer
dense_layer = torch.nn.Linear(pooling_model.get_sentence_embedding_dimension(), 128)
## Join steps 1 and 2 using the modules argument
model = SentenceTransformer(modules=[word_embedding_model, pooling_model, dense_layer])
</code></pre>
<p>And when I run <code>model.encode("hi there")</code> I get:</p>
<blockquote>
<p>TypeError: linear(): argument 'input' (position 1) must be Tensor, not dict</p>
</blockquote>
<p>I found the same error <a href="https://stackoverflow.com/questions/65082243/dropout-argument-input-position-1-must-be-tensor-not-str-when-using-bert">here</a> but using <code>BertModel.from_pretrained</code>, not <code>models.Transformer</code>. The suggested answer (passing the argument <code>return_dict=False</code>) doesn't work:</p>
<pre><code>word_embedding_model = models.Transformer('distilroberta-base', return_dict=False)
</code></pre>
<blockquote>
<p>TypeError: Transformer.<strong>init</strong>() got an unexpected keyword argument 'return_dict'</p>
</blockquote>
<p>Any ideas how to add a dense layer correctly?</p>
|
<python><nlp><huggingface-transformers><sentence-transformers>
|
2024-01-15 20:29:40
| 1
| 5,382
|
Alaa M.
|
77,822,172
| 6,419,736
|
CRC with bit-wise operations in Dart
|
<p>I am having trouble with performing a CRC-64 ECMA calculation in Dart. While I am getting <em>a</em> result, it is not correct, as evidenced by a Python script that performs the same CRC calculation.</p>
<p>Here's the Python code:</p>
<pre><code>def crc64ecma(data):
"""Calculate the CRC64-ECMA value for given data."""
crc = 0xFFFFFFFFFFFFFFFF
poly = 0x42EA3693F0E1EBA9
print(f"[crc64ecma] Received data is {data.hex()}") # Log the received data in hex format
for byte in data:
print(f"[crc64ecma] Byte is {byte}")
crc ^= byte
print(f"[crc64ecma] CRC is {crc:#018x}")
for _ in range(8):
if crc & 1:
crc = (crc >> 1) ^ poly
print(f"[crc64ecma] CRC is XOR with POLY as {crc:#018x}")
else:
crc >>= 1
print(f"[crc64ecma] CRC is shifted as {crc:#018x}")
crc = ~crc & 0xFFFFFFFFFFFFFFFF
print(f"[crc64ecma] Final CRC is: {crc:#018x}")
return crc
if __name__ == "__main__":
while True:
# Getting user input for 4 bytes
user_input = input("Enter 4 bytes (in hexadecimal format, e.g., 1A2B3C4D): ")
try:
# Convert the input to bytes
data = bytes.fromhex(user_input)
# Ensure the input is exactly 4 bytes
if len(data) != 4:
raise ValueError("Input must be exactly 4 bytes")
# Calculating CRC value
crc_value = crc64ecma(data)
# Printing the CRC result
print(f"CRC: {crc_value:#018x}\n") # Print CRC in hex format
except ValueError as e:
print(f"Error: {e}")
# Option to continue or break the loop
if input("Continue? (y/n): ").lower() != 'y':
break
</code></pre>
<p>And for an input such as 0x65a58220 it yields the expected result of: 0xc179f267d045a14e</p>
<p>Here's my Dart version of such script:</p>
<pre><code>import 'dart:developer';
import 'dart:typed_data';
const int POLY = 0x42EA3693F0E1EBA9;
Uint8List crc64ecma(Uint8List d) {
var d2 = [101, 165, 130, 32];
var data = Uint8List.fromList(d2);
int crc = 0xFFFFFFFFFFFFFFFF;
log('[crc64ecma] Received data is ${bytesToHex(data)}');
for (var byte in data) {
log('[crc64ecma] Byte is ${byte.toRadixString(16)}');
crc ^= byte;
log('[crc64ecma] CRC is ${crc.toRadixString(16)}');
for (int i = 0; i < 8; i++) {
if (crc & 1 != 0) {
crc = (crc >> 1) ^ POLY;
log('[crc64ecma] CRC is XOR with POLY as ${crc.toRadixString(16)}');
} else {
crc >>= 1;
log('[crc64ecma] CRC is shifted as ${crc.toRadixString(16)}');
}
}
}
crc = ~crc & 0xFFFFFFFFFFFFFFFF;
ByteData byteData = ByteData(8);
byteData.setUint64(0, crc, Endian.big);
log('[crc64ecma] Final CRC is: ${bytesToHex(byteData.buffer.asUint8List())}');
return byteData.buffer.asUint8List();
}
String bytesToHex(Uint8List bytes) {
return bytes.map((byte) => byte.toRadixString(16)).join();
}
</code></pre>
<p>The Dart script yields a CRC of 0x3e86d98d045a14e.</p>
<p>So what appears to be the problem? The handling of bit-wise operations.</p>
<p>I have the logs for each step of the algorithm. Python logs this:</p>
<pre><code>[crc64ecma] CRC is 0xffffffffffffff9a
</code></pre>
<p>While Dart logs this:</p>
<pre><code>[crc64ecma] CRC is -66 // 0xFF9A
</code></pre>
<p>It appears that Dart's bitwise operation handles bytes with different sizes differently, truncating the CRC value. What could I do to force Dart to use the full width of the operator?</p>
|
<python><flutter><dart><crc>
|
2024-01-15 20:22:47
| 2
| 748
|
Raphael Sauer
|
77,822,135
| 726,730
|
ffmpeg - store desired output to python variable
|
<p>If i run: <code>ffmpeg -list_devices true -f dshow -i dummy</code> i get:</p>
<pre><code>ffmpeg version 2023-01-30-git-2d202985b7-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil 57. 44.100 / 57. 44.100
libavcodec 59. 59.100 / 59. 59.100
libavformat 59. 36.100 / 59. 36.100
libavdevice 59. 8.101 / 59. 8.101
libavfilter 8. 56.100 / 8. 56.100
libswscale 6. 8.112 / 6. 8.112
libswresample 4. 9.100 / 4. 9.100
libpostproc 56. 7.100 / 56. 7.100
[dshow @ 00000236357d0480] "HP True Vision HD Camera" (video)
[dshow @ 00000236357d0480] Alternative name "@device_pnp_\\?\usb#vid_04f2&pid_b6ab&mi_00#6&763f234&2&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"
[dshow @ 00000236357d0480] "Microphone Array (Intel® Smart Sound Technology for Digital Microphones)" (audio)
[dshow @ 00000236357d0480] Alternative name "@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{E8824DE9-F848-47F1-BB2A-EB24E11050FC}"
dummy: Immediate exit requested
</code></pre>
<p>From this output i want to store: "HP True Vision HD Camera" (first video output) in a python variable.</p>
<p>Is this possible?</p>
<p>I am trying</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
ffmpeg_command = ["ffmpeg", "-list_devices", "true","-f", "dshow", "-i", "dummy"]
pipe = subprocess.run(ffmpeg_command,stdout=subprocess.PIPE,stderr=subprocess.PIPE,bufsize=10**8)
output = pipe.stdout
lines = output.splitlines()
for line in lines:
if line.startswith("[dshow @"):
if "(video)" in line:
camera_info = line.split("\"")[1]
print(camera_info)
</code></pre>
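<p>Note (a likely culprit): ffmpeg writes its banner and the device list to <em>stderr</em>, not stdout, and <code>subprocess.PIPE</code> yields bytes rather than str, so <code>pipe.stdout</code> will be empty and the lines need decoding. A sketch of the parsing, shown here against a captured sample so it runs without ffmpeg installed:</p>

```python
# Sample of what ffmpeg prints to *stderr* (not stdout).
sample_stderr = b'''[dshow @ 00000236357d0480] "HP True Vision HD Camera" (video)
[dshow @ 00000236357d0480] "Microphone Array" (audio)'''

def first_video_device(stderr_bytes: bytes):
    # Decode the bytes, then pull the quoted name from the first "(video)" line.
    for line in stderr_bytes.decode(errors="replace").splitlines():
        if line.startswith("[dshow @") and "(video)" in line:
            return line.split('"')[1]
    return None

print(first_video_device(sample_stderr))  # HP True Vision HD Camera
```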
|
<python><ffmpeg><process>
|
2024-01-15 20:09:02
| 2
| 2,427
|
Chris P
|
77,822,063
| 16,907,012
|
Python subprocess read output line by line
|
<p>I'm trying to run a python script (<code>test.py</code>) from <code>main.py</code> using <code>asyncio.subprocess</code> and basically, it works fine if <code>test.py</code> doesn't have any interactivity i.e. if there's no user interaction.</p>
<p>When my script contains <code>input(prompt)</code> with some prompt, the prompt doesn't get displayed back from the subprocess. This is because <code>await process.stdout.readline()</code> waits for a full line, while <code>input()</code> writes the prompt to <code>sys.stdout</code> without a trailing newline.</p>
<p>Below is the code for <code>main.py</code></p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import sys
async def main():
process = await asyncio.create_subprocess_exec('python3', '-u', 'test.py',
stdin=sys.stdin,
stdout=asyncio.subprocess.PIPE)
while True:
data = await process.stdout.readline()
if not data:
break
print(";", data.decode().strip())
await process.wait()
asyncio.run(main())
</code></pre>
<p><code>test.py</code></p>
<pre class="lang-py prettyprint-override"><code># import time
# for i in range(10):
# time.sleep(0.5)
# print(i)
print('hello, world!')
n = input('name: ')
print(f'hey {n}!')
</code></pre>
<p>When I run <code>main.py</code> it waits for the input but the prompt isn't visible; here's the actual result</p>
<pre class="lang-bash prettyprint-override"><code>python3 main.py
; hello, world!
s
; name: hey s!
</code></pre>
<p>Here's what I expected</p>
<pre class="lang-bash prettyprint-override"><code>python3 main.py
; hello, world!
; name: s // here it should wait for user input
; hey s!
</code></pre>
<p>Any help would be appreciated! Thanks.</p>
<p><strong>Update</strong></p>
<p><code>test.py</code> with <code>flush=True</code> - This too didn't help in any way. It would be better not to make any changes to the <code>test.py</code> as I will be passing the <code>data</code> to the client (web browser).</p>
<pre class="lang-py prettyprint-override"><code># import time
# for i in range(10):
# time.sleep(0.5)
# print(i)
import sys
print(file=sys.stdout , flush=True)
print('hello, world',file=sys.stdout , flush=True)
sys.stdout.flush()
n = input('name: ')
print(f'hey {n}!', file=sys.stdout , flush=True)
</code></pre>
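<p>A sketch of one workaround (assuming the goal is to surface prompt text as soon as it arrives): read fixed-size chunks instead of whole lines, since <code>readline()</code> blocks until a newline that the prompt never prints. Shown with an inline child script so it is self-contained:</p>

```python
import asyncio
import sys
import textwrap

# Hypothetical child script standing in for test.py.
CHILD = textwrap.dedent("""
    import sys
    sys.stdout.write('name: ')   # prompt with no trailing newline
    sys.stdout.flush()
    n = sys.stdin.readline().strip()
    print(f'hey {n}!')
""")

async def main() -> str:
    process = await asyncio.create_subprocess_exec(
        sys.executable, '-u', '-c', CHILD,
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE)
    process.stdin.write(b's\n')
    await process.stdin.drain()
    chunks = []
    while True:
        # read() returns as soon as any bytes are available, so the
        # newline-less prompt is seen immediately.
        chunk = await process.stdout.read(64)
        if not chunk:
            break
        chunks.append(chunk.decode())
    await process.wait()
    return ''.join(chunks)

output = asyncio.run(main())
print(repr(output))
```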
|
<python><input><subprocess><python-asyncio>
|
2024-01-15 19:52:55
| 0
| 354
|
princesanjivy
|
77,822,021
| 11,618,586
|
Calculating slope in degrees from a fitted line
|
<p>I have fitted a line using Scipy's <code>linregress</code> on some data resulting in the following plot:</p>
<p><a href="https://i.sstatic.net/hl2nd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hl2nd.png" alt="Plot" /></a></p>
<p>I'm trying to convert the slope to degrees and used the following method:</p>
<pre><code>in_rads=np.arctan(res.slope)
degrees=np.degrees(in_rads)
print("Slope=",res.slope)
print("Slope in Radians=", in_rads)
print("Slope in degrees", degrees)
</code></pre>
<p>But I get the following result:</p>
<pre><code>Slope= 0.00010043299346655021
Slope in Radians= 0.00010043299312886817
Slope in degrees 0.005754386630150542
</code></pre>
<p>There is no way that slope is 0.00575 degrees.
What am I doing wrong?</p>
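<p>For reference, the arithmetic itself is right: a slope of ~0.0001 really is ~0.0058°. The catch (an assumption about the plot): arctan(slope) is the visual angle only when x and y are in the same units and plotted at equal scale; when x spans tens of thousands while y spans a few units, a line that <em>looks</em> steep still has a tiny slope in data units. A quick check:</p>

```python
import numpy as np

slope = 0.00010043299346655021
# For small slopes, arctan(s) ≈ s, so degrees ≈ slope * 180 / pi.
angle_deg = np.degrees(np.arctan(slope))
print(round(angle_deg, 6))
```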
|
<python><python-3.x><numpy><scipy>
|
2024-01-15 19:44:50
| 0
| 1,264
|
thentangler
|
77,821,944
| 993,812
|
Is there an easy way to remove nodes while keeping descendants?
|
<p>I've got a graph with some nodes I'd like to remove based on criteria. Is there an easy way to remove the node, but keep the descendants intact?</p>
<p>Super simple, but say I've got this graph:
1->2->3->4->5->6</p>
<p>If 2 and 4 meet criteria for removal, how can I be left with something like the following?
1->3->5->6</p>
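<p>One common approach (a sketch assuming a networkx <code>DiGraph</code>): before removing each node, connect every predecessor to every successor, so paths through the removed node survive:</p>

```python
import networkx as nx

def remove_keep_descendants(G: nx.DiGraph, nodes) -> nx.DiGraph:
    # For each node to drop, bridge predecessors to successors,
    # then remove the node itself.
    for n in nodes:
        preds = list(G.predecessors(n))
        succs = list(G.successors(n))
        G.add_edges_from((p, s) for p in preds for s in succs)
        G.remove_node(n)
    return G

G = nx.DiGraph([(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)])
remove_keep_descendants(G, [2, 4])
print(sorted(G.edges()))  # [(1, 3), (3, 5), (5, 6)]
```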
|
<python><networkx>
|
2024-01-15 19:25:55
| 2
| 555
|
John
|
77,821,822
| 15,452,898
|
Display the row index number given conditions
|
<p>I am currently trying to practice some data manipulation procedures and have run into the problem of how to make a subset based on a special condition.</p>
<p>Let's assume that the dataframe looks like this:</p>
<pre><code>Name ID ContractDate LoanSum DurationOfDelay
A ID1 2023-01-01 10 10
A ID1 2023-01-03 15 15
A ID1 2022-12-29 20 0
A ID1 2022-12-28 40 0
B ID2 2023-01-05 15 19
B ID2 2023-01-10 30 0
B ID2 2023-01-07 35 25
B ID2 2023-01-06 35 0
</code></pre>
<p>My goal is to display for each unique ID (or Name) the index number of the loan issued first with DurationOfDelay > 0</p>
<p><strong>Expected result:</strong></p>
<pre><code>Name ID IndexNum
A ID1 3
B ID2 1
</code></pre>
<p>Explanation:
For ID1 four loans were issued: on 2022-12-28, 2022-12-29, 2023-01-01 and 2023-01-03. We can identify the existence of DurationOfDelay > 0 first on 2023-01-01, and this is the third loan issued to the borrower.</p>
<p>For ID2 also four loans were issued: on 2023-01-05, 2023-01-06, 2023-01-07 and 2023-01-10. We can identify the existence of DurationOfDelay > 0 first on 2023-01-05, and this is the firstloan issued to the borrower.</p>
<p>What I have done so far:</p>
<pre><code>window_spec_subset = Window.partitionBy('ID').orderBy('ContractDate')
subset = df.filter(F.col('DurationOfDelay') > 0) \
.withColumn('row_num', F.row_number().over(window_spec_subset)) \
.filter(F.col('row_num') == 1) \
.drop('row_num')
subset.show()
+----+---+------------+-------+---------------+
|Name| ID|ContractDate|LoanSum|DurationOfDelay|
+----+---+------------+-------+---------------+
| A|ID1| 2023-01-01| 10| 10|
| B|ID2| 2023-01-05| 15| 19|
+----+---+------------+-------+---------------+
</code></pre>
<p>This code allows me to group the data in such a way that for each borrower only the loan issued first with DurationOfDelay > 0 is returned.</p>
<p>But I'm stuck on how to display the index number of the loan issued first with DurationOfDelay > 0 instead.</p>
<p>Would you be so kind to help me achieve these results?
Any kind of help is highly appreciated!</p>
|
<python><pyspark><subset><data-manipulation><row-number>
|
2024-01-15 18:55:44
| 1
| 333
|
lenpyspanacb
|
77,821,670
| 1,725,974
|
Neural networks - unable to understand the behaviour of the output layer
|
<p>I'd like to know why this works (look for the comment "<em># This is the output layer and this is what I am talking about</em>"):</p>
<pre><code>model = Sequential() # Not talking about this
model.add(Dense(32, activation='relu', input_dim = X_train.shape[1])) # Not talking about this
model.add(Dense(16, activation='relu')) # Not talking about this
model.add(Dropout(0.2)) # Not talking about this
model.add(Dense(16, activation='relu')) # Not talking about this
model.add(Dense(y_train.nunique()+1, activation='softmax')) # This is the output layer and this is what I am talking about
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam(learning_rate = 0.01), metrics=['accuracy']) # Not talking about this
model.summary() # Not talking about this
</code></pre>
<p>and not this (look for the comment "<em># This is the output layer and this is what I am talking about</em>")::</p>
<pre><code>model = Sequential() # Same as above
model.add(Dense(32, activation='relu', input_dim = X_train.shape[1])) # Same as above
model.add(Dense(16, activation='relu')) # Same as above
model.add(Dropout(0.2)) # Same as above
model.add(Dense(16, activation='relu')) # Same as above
model.add(Dense(y_train.nunique(), activation='softmax')) # This is the output layer and this is what I am talking about
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam(learning_rate = 0.01), metrics=['accuracy']) # Same as above
model.summary() # Same as above
</code></pre>
<p>So what's going on here is that I have this very basic NN that I am using to predict a multiclass dataset. There are 10 classes in the target, starting from 0 and going all the way to 10 (with the exception of 9; 9 isn't there). At where I've commented "<em># This is the output layer and this is what I am talking about</em>", when I give the unit of output neurons as the number of unique values of target (<code>y_train.nunique()</code>), it throws an error like this:</p>
<pre><code>Detected at node sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits defined at (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
...
File "c:\<redacted>\Projects\<redacted>\Lib\site-packages\keras\src\backend.py", line 5775, in sparse_categorical_crossentropy
Received a label value of 10 which is outside the valid range of [0, 10). Label values: 5 0 3 3 1 8 10 4 3 1 0 0 1 3 5 6 10 6 10 8 4 6 6 6 1 2 7 10 8 0 4 8
[[{{node sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]] [Op:__inference_train_function_737445]
</code></pre>
<p>On the other hand, when I give the number of units as anything more than that, like in the above example it is <code>y_train.nunique()+1</code>, it goes through all 200 epochs. I don't understand what's going on.</p>
<p>Also,</p>
<ol>
<li>Is the output layer correct (for the number of classes in question)?</li>
<li>Is my understanding of the output layer correct (which is - for this specific problem - the number of neurons in the output must be equal to the number of unique values of the target (which are also the classes the data falls into))?</li>
</ol>
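<p>A hedged reading of the error: <code>sparse_categorical_crossentropy</code> requires labels in the range [0, units), and with classes 0–10 (9 absent) the max label is 10 while <code>nunique()</code> is only 10, so 10 units rejects label 10, and <code>nunique()+1</code> = 11 happens to cover it (at the cost of one output neuron that never fires). A cleaner fix is to remap the labels to a dense 0..9 range and keep <code>nunique()</code> units; a sketch with plain numpy:</p>

```python
import numpy as np

# Labels 0..10 with 9 missing: nunique() == 10 but max label == 10.
y = np.array([5, 0, 3, 8, 10, 4, 1, 6, 7, 2])
classes = np.unique(y)                       # [0 1 2 3 4 5 6 7 8 10]
remap = {c: i for i, c in enumerate(classes)}
y_dense = np.array([remap[v] for v in y])    # now strictly in 0..9
print(y_dense.max())  # 9
```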
|
<python><keras><deep-learning><neural-network>
|
2024-01-15 18:20:56
| 1
| 1,539
|
Anonymous Person
|
77,821,667
| 4,828,720
|
Querying postgrest with httpx
|
<p>Consider the first "Operator Modifiers" example from <a href="https://postgrest.org/en/stable/references/api/tables_views.html#operator-modifiers" rel="nofollow noreferrer">https://postgrest.org/en/stable/references/api/tables_views.html#operator-modifiers</a>:</p>
<pre><code>curl "http://localhost:3000/people?last_name=like(any).{O*,P*}"
</code></pre>
<p>If I try to replicate it using <code>httpx</code> (or <code>requests</code>, <code>urllib3</code> or any other Python module I tried) the URL that ends up at <code>PostgREST</code>'s end is urlencoded:</p>
<pre><code>>>> import httpx
>>> r = httpx.get("http://localhost:3000/people?last_name=like(any).{O*,P*}")
>>> print(r.url)
URL('http://localhost:3000/people?last_name=like(any).%7BO*,P*%7D')
</code></pre>
<p><code>PostgREST</code> responds with an error to that:</p>
<pre><code>{'code': '22P02',
'details': 'Expected ":", but found "}".',
'hint': None,
'message': 'invalid input syntax for type json'}
</code></pre>
<p>It does not seem possible to disable the automatic urlencoding on the client's end.</p>
<p>How can I query <code>PostgREST</code> using <code>httpx</code>?</p>
|
<python><urlencode><httpx><postgrest>
|
2024-01-15 18:20:45
| 1
| 1,190
|
bugmenot123
|
77,821,648
| 9,518,886
|
Managing ALLOWED_HOSTS in Django for Kubernetes health check
|
<p>I have a Django application running on Kubernetes, using an API for health checks. The issue I'm facing is that every time the IP associated with Django in Kubernetes changes, I have to manually update ALLOWED_HOSTS.</p>
<p><strong>django code:</strong></p>
<pre><code>class HealthViewSet(ViewSet):
@action(methods=['GET'], detail=False)
def health(self, request):
try:
return Response('OK', status=status.HTTP_200_OK)
except Exception as e:
print(e)
return Response({'response': 'Internal server error'}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)
</code></pre>
<p><strong>deployment code :</strong></p>
<pre><code>livenessProbe:
httpGet:
path: /health/
port: 8000
initialDelaySeconds: 15
timeoutSeconds: 5
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>Invalid HTTP_HOST header: '192.168.186.79:8000'. You may need to add '192.168.186.79' to ALLOWED_HOSTS.
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/utils/deprecation.py", line 135, in __call__
response = self.process_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/middleware/common.py", line 48, in process_request
host = request.get_host()
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/http/request.py", line 148, in get_host
raise DisallowedHost(msg)
django.core.exceptions.DisallowedHost: Invalid HTTP_HOST header: '192.168.186.79:8000'. You may need to add '192.168.186.79' to ALLOWED_HOSTS.
Bad Request: /health/
</code></pre>
<p>Is there a way to dynamically use ALLOWED_HOSTS and avoid manual updates? (Every deployment IP changed.)</p>
<p><strong>ALLOWED_HOSTS</strong></p>
<pre><code>ALLOWED_HOSTS = ['localhost', '127.0.0.1']
</code></pre>
<p>Any guidance or suggestions for the best solution in this regard would be appreciated.</p>
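<p>One common pattern (a sketch; verify it fits your security posture): resolve the pod's own IP at startup and append it to <code>ALLOWED_HOSTS</code>, so the probe's Host header passes regardless of which IP the pod is assigned. An alternative, kept entirely on the Kubernetes side, is the <code>httpHeaders</code> field of <code>httpGet</code> probes, which can send a fixed <code>Host</code> header that is already whitelisted.</p>

```python
import socket

ALLOWED_HOSTS = ['localhost', '127.0.0.1']

# Append the pod's own IP at startup so kubelet probes that target the pod IP
# pass Django's host validation without manual updates on every deployment.
try:
    ALLOWED_HOSTS.append(socket.gethostbyname(socket.gethostname()))
except OSError:
    pass  # fall back to the static list if resolution fails
```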
|
<python><django><kubernetes><devops>
|
2024-01-15 18:16:59
| 2
| 537
|
Navid Sadeghi
|
77,821,581
| 5,418,710
|
python typing: detach function signature type declaration from function definition
|
<p>I want to ensure a couple of functions implement the same interface via type checking.
How would I do this? Would I need additional tooling for this? <br/>
For completeness: I'm using mypy as a type checker.</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>binaryFunction = Callable[[float, float], float]
def plus(a, b):
return a + b
def minus(a, b):
return a - b
</code></pre>
<p>How can I specify that both <code>plus</code> and <code>minus</code> adhere to the binaryFunction interface?</p>
<p>what seems to works:</p>
<pre class="lang-py prettyprint-override"><code>checkedPlus: binaryFunction = plus
checkedMinus: binaryFunction = minus
</code></pre>
<p>but this just looks very weird; I feel it's not at all pythonic, so I'd refrain from doing this, unless I get convinced it's indeed the way to achieve this.</p>
<p>I've one possible solution in my answer below, but feel like there's room for improvement.</p>
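<p>One alternative that reads less like a stray assignment (still plain mypy, no extra tooling): an identity decorator typed against the alias, so the conformance check happens at the definition site. A sketch:</p>

```python
from typing import Callable

BinaryFunction = Callable[[float, float], float]

def implements_binary(func: BinaryFunction) -> BinaryFunction:
    """Identity decorator: mypy verifies the decorated function matches the alias."""
    return func

@implements_binary
def plus(a: float, b: float) -> float:
    return a + b

@implements_binary
def minus(a: float, b: float) -> float:
    return a - b
```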
|
<python><typing>
|
2024-01-15 18:01:32
| 1
| 707
|
chichak
|
77,821,394
| 4,219,600
|
Pandas Creating a single list from a column containing a list of objects
|
<p>In some data I import into Pandas I have the following;</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Data</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>"[{'id':102, 'name': 'A'}, {'id':103, 'name': 'B'}, {'id':104, 'name': 'C'}]"</td>
</tr>
</tbody>
</table>
</div>
<p>What I am wondering is if there is a good way of taking this structure and making a data frame as follows;</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>names</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>['A','B','C']</td>
</tr>
</tbody>
</table>
</div>
<p>outputting a list of the names.</p>
<p>I was thinking to write a function and calling this on the dataframe - but I wondered if there was a better way I dont know about?</p>
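A sketch of one way, assuming the <code>Data</code> column holds string representations of the lists (as the outer quotes in the example suggest): parse each string with `ast.literal_eval` and pull out the names.

```python
import ast

import pandas as pd

df = pd.DataFrame({
    "ID": [1],
    "Data": ["[{'id':102, 'name': 'A'}, {'id':103, 'name': 'B'}, {'id':104, 'name': 'C'}]"],
})

# Parse each string into a list of dicts, then keep only the 'name'
# value of every dict.
df["names"] = df["Data"].apply(lambda s: [d["name"] for d in ast.literal_eval(s)])
result = df[["ID", "names"]]
```

If the column already contains real Python lists (not strings), the `ast.literal_eval` call can simply be dropped.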
|
<python><pandas>
|
2024-01-15 17:23:31
| 3
| 1,295
|
James Cooke
|
77,821,296
| 214,296
|
Is Python's `print()` function blocking or non-blocking?
|
<p>I'm trying to speed up the processing time of a script, which in certain configurations may have a lot of output dumped to the console (file=stdout) via <code>print()</code>. Is Python's <code>print()</code> function blocking or non-blocking? I've not been able to find an adequate answer for this in the documentation.</p>
<p>I'm running Linux 4.18.0-486.el8.x86_64 GNU/Linux.</p>
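Independently of the exact blocking semantics, one thing I'm experimenting with (a sketch) is batching output into fewer, larger writes, since each `print()` call ends in write syscalls on the underlying stream:

```python
import sys

lines = [f"record {i}" for i in range(1000)]

# One large buffered write instead of a thousand print() calls;
# this reduces the number of (blocking) syscalls on stdout.
sys.stdout.write("\n".join(lines) + "\n")
```

Whether this helps in practice of course depends on where the time is actually spent (terminal rendering vs. the writes themselves).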
|
<python><blocking><nonblocking>
|
2024-01-15 17:01:40
| 2
| 14,392
|
Jim Fell
|
77,821,126
| 11,857,974
|
How to push data to azure devops using python
|
<p>I am trying to push data to Azure DevOps using Python.<br />
Here is how the paths to the original (local) data look:</p>
<pre><code>original_paths = [
"c:/user/dir1/filename1.txt",
"c:/user/dir2/filename1.py",
"c:/user/dir2/filenamex.json"
]
</code></pre>
<p>I would like to create folders in devops ommitting <code>c:/user/</code>, so I will push:</p>
<pre><code>data_to_push = [
"dir1/filename1.txt",
"dir2/filename1.py",
"dir2/filenamex.json"
]
</code></pre>
<p>The following code snippet only establishes the connection to the repo.</p>
<pre><code>def push_to_azure_devops():
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
from git import Repo, remote
organization_url = 'https://dev.azure.com/repo-name'
user = "username"
password = "q578w645746748w5zt"
credentials = BasicAuthentication(username='', password=password)
connection = Connection(base_url=organization_url, creds=credentials)
git_client = connection.clients.get_git_client()
# I am stuck from here.
create_push = git_client.create_push(push="???", repository_id="RepoID", project="ProjectName")
original_paths = [
"c/user/dir1/filename1.txt",
"c/user/dir2/filename1.py",
"c/user/dir2/filenamex.json"
]
data_to_push = [
"dir1/filename1.txt",
"dir2/filename1.py",
"dir2/filenamex.json"
]
remote.Remote.add(...)
remote.Remote.commit(...)
remote.Remote.push(...)
</code></pre>
<p>Any help would be helpful. Thanks.</p>
|
<python><git><rest><azure-devops>
|
2024-01-15 16:31:15
| 1
| 707
|
Kyv
|
77,821,100
| 1,536,343
|
psycopg2 "%s" variable with " LIKE 'fake_%' " triggers "IndexError: tuple index out of range"
|
<p>I run the following query in Django using its postgres connection (pyscopg2 lib):</p>
<pre><code>SELECT a.trade_date, a.ticker, a.company_name, a.cusip, a.shares_held, a.nominal,
a.weighting, b.weighting "previous_weighting", ABS(a.weighting - b.weighting) "weighting_change"
FROM t_ark_holdings a LEFT JOIN t_ark_holdings b
ON a.etf_ticker=b.etf_ticker AND a.ticker=b.ticker
AND b.trade_date=(SELECT MAX(trade_date) FROM t_ark_holdings WHERE trade_date<a.trade_date)
-- THIS MIX is causing the error
WHERE a.etf_ticker = %s AND LOWER(a.ticker) NOT LIKE 'fake_%'
--
AND a.weighting<>b.weighting
AND a.trade_date = (SELECT MAX(trade_date) FROM t_ark_holdings)
ORDER BY a.trade_date DESC, "weighting_change" DESC, a.ticker
</code></pre>
<p>When I use "a.etf_ticker = %s" and "NOT LIKE 'fake_%'" together, an "IndexError: tuple index out of range" is raised; if I use one or the other, the query works fine.
It seems like the driver is looking for another variable corresponding to the '%' in "LIKE 'fake_%'".
I am curious how to correctly format/write my query so that it accepts variables alongside a fixed LIKE pattern.
Thank you</p>
<p>Using Python 3.10, latest psycopg2 and Django 4</p>
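psycopg2 interpolates parameters with Python `%`-style formatting, so a literal `%` in the SQL must be doubled as `%%` whenever the query also contains `%s` placeholders. The mechanism can be illustrated with plain string formatting (the `cursor.execute` call is only sketched in a comment):

```python
# Doubling the literal percent sign keeps it out of the placeholder
# machinery; '%s' stays a parameter slot.
fragment = "a.etf_ticker = %s AND LOWER(a.ticker) NOT LIKE 'fake_%%'"

# psycopg2 does the equivalent of this after safely quoting the value:
rendered = fragment % ("'ARKK'",)
# With psycopg2 the call would be: cursor.execute(query, ('ARKK',))
```

So rewriting the problematic line as `LOWER(a.ticker) NOT LIKE 'fake_%%'` should make the mixed query work.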
|
<python><psycopg2><django-4.0>
|
2024-01-15 16:27:05
| 1
| 582
|
Je Je
|
77,820,817
| 1,422,096
|
How to create a Z-value matrix for a given meshgrid, and x, y, z columns?
|
<p>Let's say we would like to make a heatmap from three 1D arrays <code>x</code>, <code>y</code>, <code>z</code>. The answer from <a href="https://stackoverflow.com/questions/33942700/plotting-a-heat-map-from-three-lists-x-y-intensity">Plotting a heat map from three lists: X, Y, Intensity</a> works, but the <code>Z = np.array(z).reshape(len(y), len(x))</code> line highly depends on the order in which the z-values have been added to the list.</p>
<p>As an example, the 2 following tests give the exact same plot, whereas <strong>it should not</strong>. Indeed:</p>
<ul>
<li><p>in test1, <code>z=2</code> should be for <code>x=100, y=7</code>.</p>
</li>
<li><p>in test2, <code>z=2</code> should be for <code>x=102, y=5</code>.</p>
</li>
</ul>
<p><strong>How should we create the <code>Z</code> matrix in the function <code>heatmap</code>, so that it's not dependent on the
order in which z-values are added?</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def heatmap(x, y, z):
z = np.array(z)
x = np.unique(x)
y = np.unique(y)
X, Y = np.meshgrid(x, y)
Z = np.array(z).reshape(len(y), len(x))
plt.pcolormesh(X, Y, Z)
plt.show()
### TEST 1
x, y, z = [], [], []
k = 0
for i in range(100, 120):
for j in range(5, 15):
x.append(i)
y.append(j)
z.append(k)
k += 1
heatmap(x, y, z)
### TEST 2
x, y, z = [], [], []
k = 0
for j in range(5, 15):
for i in range(100, 120):
x.append(i)
y.append(j)
z.append(k)
k += 1
heatmap(x, y, z)
</code></pre>
<p><strong>Edit</strong>: Example 2: Let's say</p>
<pre><code>x = [0, 2, 1, 2, 0, 1]
y = [3, 4, 4, 3, 4, 3]
z = [1, 2, 3, 4, 5, 6]
</code></pre>
<p>There should be a non-ambiguous way to go from the 3 arrays <code>x</code>, <code>y</code>, <code>z</code> to a heatmap-plottable meshgrid + a z-value matrix, even if <code>x</code> and <code>y</code> are in <strong>random order</strong>.</p>
<p>In this example <code>x</code> and <code>y</code> are in no particular order, quite random. How to do this?</p>
<p>Here a <code>reshape</code> like <code>Z = np.array(z).reshape(len(y), len(x))</code> would put the values in the wrong order.</p>
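A sketch of one order-independent way: `np.unique(..., return_inverse=True)` gives each point its row/column index directly, so the Z matrix can be filled by fancy indexing regardless of the order of the input lists.

```python
import numpy as np

x = np.array([0, 2, 1, 2, 0, 1])
y = np.array([3, 4, 4, 3, 4, 3])
z = np.array([1, 2, 3, 4, 5, 6])

# return_inverse maps every value to its index in the sorted uniques,
# i.e. to its column (for x) or row (for y) in the grid.
xu, xi = np.unique(x, return_inverse=True)
yu, yi = np.unique(y, return_inverse=True)

# Cells with no (x, y) sample stay NaN instead of a bogus value.
Z = np.full((len(yu), len(xu)), np.nan)
Z[yi, xi] = z
```

With this, `z=2` lands at `x=2, y=4` no matter where the triple appears in the input lists, which resolves the ambiguity of the `reshape` approach.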
|
<python><numpy><heatmap><meshgrid>
|
2024-01-15 15:37:56
| 1
| 47,388
|
Basj
|
77,820,589
| 3,124,150
|
Type parameter list cannot be empty for TypeVarTuple
|
<p>I have the following type parametrized signal:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from typing_extensions import Callable, TypeVarTuple, Generic, Unpack, List, TypeVar
VarArgs = TypeVarTuple('VarArgs')
class Signal(Generic[Unpack[VarArgs]]):
def __init__(self):
self.functions: List[Callable[..., None]] = []
"""
Simple mechanism that allows abstracted invocation of callbacks. Multiple callbacks can be attached to a signal
so that they are all called when the signal is emitted.
"""
def connect(self, function: Callable[..., None]):
"""
Add a callback to this Signal
:param function: callback to call when emited
"""
self.functions.append(function)
def emit(self, *args: Unpack[VarArgs]):
"""
Call all callbacks with the arguments passed
:param args: arguments for the signal, must be the same type as type parameter
"""
for function in self.functions:
if args:
function(*args)
else:
function()
def main():
def signalEmptyCall():
print("Hello!")
signal: Signal[()] = Signal[()]()
signal.connect(signalEmptyCall)
signal.emit()
if __name__ == '__main__':
main()
</code></pre>
<p>The problem is that I cannot have empty argument list in the Callable:</p>
<pre class="lang-py prettyprint-override"><code>signal: Signal[()] = Signal[()]()
</code></pre>
<p>In Python 3.10 I get this error:</p>
<pre><code>Traceback (most recent call last):
File ".../main.py", line 48, in <module>
main()
File ".../main.py", line 40, in main
signal: Signal[()] = Signal[()]()
File "/usr/lib/python3.10/typing.py", line 312, in inner
return func(*args, **kwds)
File "/usr/lib/python3.10/typing.py", line 1328, in __class_getitem__
raise TypeError(
TypeError: Parameter list to Signal[...] cannot be empty
</code></pre>
<p>Reading the code in typing.py I find that Tuple is a special case (<code>if not params and cls is not Tuple:</code>, so it doesn't crash for <code>Tuple[()]</code>), whereas TypeVarTuple has no special case, even though it is semantically very similar.</p>
<p>Doing <code>signal: Signal = Signal()</code> works, but it seems like I am breaking something because PyCharm warns that <code>Expected type '(Any) -> None', got '() -> None' instead</code>.</p>
<p>How can I fix this?</p>
<p>Thanks for your time.</p>
|
<python><python-typing><type-variables>
|
2024-01-15 14:48:45
| 4
| 947
|
EmmanuelMess
|
77,820,525
| 1,422,096
|
Plot a heatmap from X, Y, Z coordinates
|
<p>I have 3 lists for X, Y, Z coordinates. How to plot a color heatmap from these 3 lists? Here it should be easily possible because <code>(x, y)</code> cover a rectangle [100, 120[ x [5, 15[.</p>
<p>First attempt: I can build a matrix with <code>np.array(...)</code> from the <code>x, y, z</code> values, but then the x-axis and y-axis coordinates start from 0, which is not what I want (the x-axis should start at 100 and the y-axis at 5 in the output plot).</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
X, Y, Z = [], [], []
for x in range(100, 120):
for y in range(5, 15):
X.append(x)
Y.append(y)
Z.append(np.random.random())
print(X)
print(Y)
print(Z)
plt.imshow(...)
</code></pre>
<p>How to make a heatmap from this data, specifically with x-axis / y-axis starting at a different value than 0?</p>
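A sketch of one possibility: `imshow` accepts an `extent=(left, right, bottom, top)` keyword that maps the array onto real data coordinates, so the axes can start at 100 and 5 instead of 0 (the Agg backend below is only so the sketch runs headless).

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this when showing interactively
import matplotlib.pyplot as plt
import numpy as np

Z = np.random.random((10, 20))  # shape (len(y), len(x))

# extent maps array indices to data coordinates; origin="lower" puts
# row 0 at the bottom so y increases upward.
plt.imshow(Z, extent=(100, 120, 5, 15), origin="lower", aspect="auto")
plt.colorbar()
# plt.show() or plt.savefig("heatmap.png")
```

`plt.pcolormesh(x_values, y_values, Z)` would be an alternative when the coordinates are available as arrays rather than just endpoints.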
|
<python><numpy><matplotlib><heatmap>
|
2024-01-15 14:37:09
| 0
| 47,388
|
Basj
|
77,820,437
| 3,501,128
|
Test that a function imported into different namespaces is never called
|
<p>Suppose I have</p>
<p>foo.py</p>
<pre><code>def my_func():
print('hello world!')
</code></pre>
<p>bar.py</p>
<pre><code>from foo import my_func
</code></pre>
<p>I want to write tests to ensure that <code>my_func</code> is never called. I have this:</p>
<pre><code>from unittest import mock
import foo
import bar
@mock.patch.object(foo, 'my_func', wraps=foo.my_func)
class TestMyFunc:
def test_from_foo(self, wrapped_my_func):
foo.my_func()
wrapped_my_func.assert_not_called()
def test_from_bar(self, wrapped_my_func):
bar.my_func()
wrapped_my_func.assert_not_called()
</code></pre>
<p>Written this way, I would like both tests to fail. However, run with <code>pytest</code>, <code>test_from_bar</code> passes. From <a href="https://stackoverflow.com/questions/16134281/python-mocking-a-function-from-an-imported-module">this question</a> I gather that this is because <code>mock</code> works on namespaces and so does not know that <code>foo.my_func</code> and <code>bar.my_func</code> are the same thing. So is there a way I can write this test without writing patches and asserts for both modules?</p>
<p><strong>Edit</strong>: from <a href="https://stackoverflow.com/a/76637961/3501128">this answer</a> I was hoping I could combine <code>mock</code> with <code>monkeypatch</code> to get what I want, but this version of <code>test_from_bar</code> still passes:</p>
<pre><code>from unittest.mock import MagicMock
import pytest
import foo
import bar
class TestMyFunc:
@pytest.fixture
def mock_my_func(self, monkeypatch):
_mock = MagicMock(wraps=foo.my_func)
monkeypatch.setattr(foo, foo.my_func.__name__, _mock)
return _mock
def test_from_foo(self, mock_my_func):
foo.my_func()
mock_my_func.assert_not_called()
def test_from_bar(self, mock_my_func):
bar.my_func()
mock_my_func.assert_not_called()
</code></pre>
<h4>Context</h4>
<p>My real use-case is to test a function that uses <a href="https://docs.dask.org/en/stable/index.html" rel="nofollow noreferrer"><code>dask</code></a>. I want to make sure that <a href="https://github.com/dask/dask/blob/3e9ae492934e133547e5eaec5020186fad39898a/dask/base.py#L603" rel="nofollow noreferrer">dask.base.compute</a> is never inadvertently called. However it may be called as <a href="https://github.com/dask/dask/blob/3e9ae492934e133547e5eaec5020186fad39898a/dask/__init__.py#L7" rel="nofollow noreferrer">dask.compute</a> or <a href="https://github.com/dask/dask/blob/3e9ae492934e133547e5eaec5020186fad39898a/dask/array/__init__.py#L264C34-L264C34" rel="nofollow noreferrer">dask.array.compute</a> or possibly others that I haven't stumbled on yet.</p>
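One hedged idea I'm toying with: since `mock.patch` is per-namespace, scan `sys.modules` for every module-level name bound to the same function object and patch each binding. A sketch (demoed with two in-memory modules standing in for `foo` and `bar`; not battle-tested against lazy-loading modules):

```python
import sys
import types
from unittest import mock

def patch_everywhere(func, replacement):
    """Yield a patcher for every module-level name bound to `func`."""
    for module in list(sys.modules.values()):
        if not isinstance(module, types.ModuleType):
            continue
        for name, value in list(vars(module).items()):
            if value is func:
                yield mock.patch.object(module, name, replacement)

# Demo: two fake modules both re-export the same function.
def my_func():
    return "hello world!"

fake_foo = types.ModuleType("fake_foo")
fake_foo.my_func = my_func
fake_bar = types.ModuleType("fake_bar")
fake_bar.my_func = my_func
sys.modules["fake_foo"] = fake_foo
sys.modules["fake_bar"] = fake_bar

tracker = mock.MagicMock(wraps=my_func)
patchers = list(patch_everywhere(my_func, tracker))
for p in patchers:
    p.start()
fake_bar.my_func()          # call through the *other* namespace
calls = tracker.call_count  # the bar-side call was seen
for p in patchers:
    p.stop()                # restores the original bindings
```

For the dask case this would catch `dask.compute`, `dask.base.compute` and any other alias that is the same function object, though distinct wrapper functions would still slip through.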
|
<python><testing><mocking>
|
2024-01-15 14:22:37
| 1
| 4,279
|
RuthC
|
77,820,382
| 3,801,865
|
OpenTelemetry collector hanging in local Lambda container
|
<p>I have an AWS Lambda function which runs from a container image. I am trying to get the <a href="https://aws-otel.github.io/docs/getting-started/lambda/lambda-python" rel="nofollow noreferrer">AWS Distro for OpenTelemetry Lambda Support For Python</a> managed layer to run within my container, so that I can automatically export traces to AWS X-Ray. In order to do this, as described in <a href="https://aws.amazon.com/blogs/compute/working-with-lambda-layers-and-extensions-in-container-images/#:%7E:text=Copy%20the%20contents%20of%20a%20Lambda%20layer%20into%20a%20container%20image" rel="nofollow noreferrer">this article from AWS</a>, I copy the contents of the layer into the image’s <code>/opt</code> directory during <code>docker build</code>.</p>
<p>My configuration runs correctly on AWS; however, I have issues running my container image locally. Upon startup, I see that the collector is registered as an extension, which is correct. I invoke my Python code via a GET request, which executes my function correctly. Traces are correctly exported to the console. However, the Lambda function does not terminate, as you can see in the logs below (we see <code>INVOKE RTDONE</code> and then it waits). The container hangs until the Lambda function times out, which takes 5 minutes.</p>
<p>It seems like the ADOT collector/agent is continuing to run, even though my Python function has returned. <strong>How can I get my Lambda function to end after my Python function returns?</strong></p>
<p>I got the layer contents via <code>curl $(aws lambda get-layer-version-by-arn --arn "arn:aws:lambda:eu-west-3:901920570463:layer:aws-otel-python-amd64-ver-1-21-0:1")</code> and put them into <code>./otel</code>.</p>
<p><code>Dockerfile</code>:</p>
<pre><code>FROM public.ecr.aws/lambda/python:3.11
# Copy function code
COPY ./dummy_lambda.py ${LAMBDA_TASK_ROOT}/dummy_lambda.py
# Copy layer code
COPY ./otel /opt/
# Copy collector config
COPY ./collector.yaml /var/task/collector.yaml
# Indicate location of collector config file
ENV OPENTELEMETRY_COLLECTOR_CONFIG_FILE /var/task/collector.yaml
# EDIT: Added this line
ENV AWS_LAMBDA_EXEC_WRAPPER /opt/otel-instrument
# ENTRYPOINT /lambda-entrypoint.sh dummy_lambda.lambda_handler
CMD ["dummy_lambda.lambda_handler"]
</code></pre>
<p><code>dummy_lambda.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>def lambda_handler(event, context):
print("In lambda_handler!")
</code></pre>
<p><code>collector.yaml</code>: is <a href="https://github.com/aws-observability/aws-otel-lambda/blob/main/adot/collector/config.yaml" rel="nofollow noreferrer">the default one</a>, except I’ve changed <code>awsxray</code> to <code>logging</code> to print the traces to the console.</p>
<p>Output from Docker container:</p>
<pre><code>15 Jan 2024 10:51:34,993 [INFO] (rapid) exec '/var/runtime/bootstrap' (cwd=/var/task, handler=)
START RequestId: 7110132c-c8e1-4570-84da-b0e9da4c3ad8 Version: $LATEST
15 Jan 2024 10:51:37,182 [INFO] (rapid) INIT START(type: on-demand, phase: init)
{"level":"info","ts":1705315897.1941624,"msg":"Launching OpenTelemetry Lambda extension","version":"v0.35.0"}
15 Jan 2024 10:51:37,194 [INFO] (rapid) External agent collector (e37669fd-0163-4851-966b-ff3185aa729a) registered, subscribed to [INVOKE SHUTDOWN]
15 Jan 2024 10:51:37,195 [INFO] (rapid) Starting runtime without AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN , Expected?: false
{"level":"info","ts":1705315897.1955585,"logger":"telemetryAPI.Listener","msg":"Listening for requests","address":"sandbox.localdomain:53612"}
{"level":"info","ts":1705315897.1958878,"logger":"telemetryAPI.Client","msg":"Subscribing","baseURL":"http://127.0.0.1:9001/2022-07-01/telemetry"}
{"level":"error","ts":1705315897.1970472,"logger":"telemetryAPI.Client","msg":"Subscription failed. Logs API is not supported! Is this extension running in a local sandbox?","status_code":202}
{"level":"info","ts":1705315897.1971807,"logger":"telemetryAPI.Client","msg":"Subscription success","response":"{\"errorMessage\":\"Telemetry API is not supported\",\"errorType\":\"Telemetry.NotSupported\"}\n"}
2024/01/15 10:51:37 attn: users of the prometheusremotewrite exporter please refer to https://github.com/aws-observability/aws-otel-collector/issues/2043 in regards to an ADOT Collector v0.31.0 breaking change
{"level":"info","ts":1705315897.1975057,"logger":"NewCollector","msg":"Using config URI from environment","uri":"/var/task/collector.yaml"}
{"level":"info","ts":1705315897.2109199,"caller":"service@v0.90.1/telemetry.go:86","msg":"Setting up own telemetry..."}
{"level":"info","ts":1705315897.2117138,"caller":"service@v0.90.1/telemetry.go:203","msg":"Serving Prometheus metrics","address":"localhost:8888","level":"Basic"}
{"level":"info","ts":1705315897.211973,"caller":"exporter@v0.90.1/exporter.go:275","msg":"Deprecated component. Will be removed in future releases.","kind":"exporter","data_type":"metrics","name":"logging"}
{"level":"info","ts":1705315897.2340708,"caller":"exporter@v0.90.1/exporter.go:275","msg":"Deprecated component. Will be removed in future releases.","kind":"exporter","data_type":"traces","name":"logging"}
{"level":"info","ts":1705315897.2350507,"caller":"service@v0.90.1/service.go:148","msg":"Starting aws-otel-lambda...","Version":"v0.35.0","NumCPU":2}
{"level":"info","ts":1705315897.2351215,"caller":"extensions/extensions.go:34","msg":"Starting extensions..."}
{"level":"info","ts":1705315897.2354815,"caller":"otlpreceiver@v0.90.1/otlp.go:83","msg":"Starting GRPC server","kind":"receiver","name":"otlp","data_type":"metrics","endpoint":"localhost:4317"}
{"level":"info","ts":1705315897.2358475,"caller":"otlpreceiver@v0.90.1/otlp.go:101","msg":"Starting HTTP server","kind":"receiver","name":"otlp","data_type":"metrics","endpoint":"localhost:4318"}
{"level":"info","ts":1705315897.236114,"caller":"service@v0.90.1/service.go:174","msg":"Everything is ready. Begin running and processing data."}
{"level":"error","ts":1705315897.2362807,"logger":"telemetryAPI.Listener","msg":"Unexpected stop on HTTP Server","error":"listen tcp: lookup sandbox.localdomain on 168.63.129.16:53: no such host"}
15 Jan 2024 10:51:37,261 [INFO] (rapid) INIT RTDONE(status: success)
15 Jan 2024 10:51:37,261 [INFO] (rapid) INIT REPORT(durationMs: 79.798000)
15 Jan 2024 10:51:37,262 [INFO] (rapid) INVOKE START(requestId: 924b0e64-64c1-4178-ac26-cf3a3ea4dcd9)
In lambda_handler!
15 Jan 2024 10:51:37,263 [INFO] (rapid) INVOKE RTDONE(status: success, produced bytes: 0, duration: 0.774000ms)
15 Jan 2024 10:56:37,237 [WARNING] (rapid) AwaitAgentsReady() = errResetReceived
15 Jan 2024 10:56:37,237 [ERROR] (rapid) Invoke failed error=errResetReceived InvokeID=924b0e64-64c1-4178-ac26-cf3a3ea4dcd9
15 Jan 2024 10:56:37,237 [WARNING] (rapid) Reset initiated: Timeout
15 Jan 2024 10:56:39,237 [WARNING] (rapid) Deadline: the agent extension-collector-1 did not exit after deadline 2024-01-15 10:56:39.23705518 +0000 UTC m=+304.243664413; Killing it.
15 Jan 2024 10:56:39,237 [INFO] (rapid) Sending SIGKILL to extension-collector-1(13).
END RequestId: 924b0e64-64c1-4178-ac26-cf3a3ea4dcd9
REPORT RequestId: 924b0e64-64c1-4178-ac26-cf3a3ea4dcd9 Init Duration: 0.05 ms Duration: 300000.00 ms Billed Duration: 300000 ms Memory Size: 3008 MBMax Memory Used: 3008 MB
</code></pre>
<hr />
<p><strong>EDIT</strong>: After adding <code>ENV AWS_LAMBDA_EXEC_WRAPPER /opt/otel-instrument</code> to my Dockerfile, the output of my container is different and has the output shown below. It looks like it's exporting the spans (maybe?), but it's still not terminating everything.</p>
<pre><code>23 Jan 2024 17:02:48,419 [INFO] (rapid) exec '/var/runtime/bootstrap' (cwd=/var/task, handler=)
START RequestId: fd930622-f6b5-407f-bdb7-3c3ea705b60d Version: $LATEST
23 Jan 2024 17:02:51,689 [INFO] (rapid) INIT START(type: on-demand, phase: init)
{"level":"info","ts":1706029371.7013865,"msg":"Launching OpenTelemetry Lambda extension","version":"v0.35.0"}
23 Jan 2024 17:02:51,702 [INFO] (rapid) External agent collector (6ff72d84-10f8-475a-98f9-2be8a18cf246) registered, subscribed to [INVOKE SHUTDOWN]
{"level":"info","ts":1706029371.7024112,"logger":"telemetryAPI.Listener","msg":"Listening for requests","address":"sandbox.localdomain:53612"}
23 Jan 2024 17:02:51,702 [INFO] (rapid) Starting runtime without AWS_SESSION_TOKEN , Expected?: false
{"level":"info","ts":1706029371.7028499,"logger":"telemetryAPI.Client","msg":"Subscribing","baseURL":"http://127.0.0.1:9001/2022-07-01/telemetry"}
{"level":"error","ts":1706029371.7052574,"logger":"telemetryAPI.Client","msg":"Subscription failed. Logs API is not supported! Is this extension running in a local sandbox?","status_code":202}
{"level":"info","ts":1706029371.7055035,"logger":"telemetryAPI.Client","msg":"Subscription success","response":"{\"errorMessage\":\"Telemetry API is not supported\",\"errorType\":\"Telemetry.NotSupported\"}\n"}
2024/01/23 17:02:51 attn: users of the prometheusremotewrite exporter please refer to https://github.com/aws-observability/aws-otel-collector/issues/2043 in regards to an ADOT Collector v0.31.0 breaking change
{"level":"info","ts":1706029371.7059617,"logger":"NewCollector","msg":"Using config URI from environment","uri":"/var/task/collector.yaml"}
{"level":"info","ts":1706029371.7127142,"caller":"service@v0.90.1/telemetry.go:86","msg":"Setting up own telemetry..."}
{"level":"info","ts":1706029371.7138865,"caller":"service@v0.90.1/telemetry.go:203","msg":"Serving Prometheus metrics","address":":8888","level":"Basic"}
{"level":"info","ts":1706029371.714231,"caller":"exporter@v0.90.1/exporter.go:275","msg":"Deprecated component. Will be removed in future releases.","kind":"exporter","data_type":"traces","name":"logging"}
{"level":"info","ts":1706029371.7155623,"caller":"service@v0.90.1/service.go:148","msg":"Starting aws-otel-lambda...","Version":"v0.35.0","NumCPU":2}
{"level":"info","ts":1706029371.715847,"caller":"extensions/extensions.go:34","msg":"Starting extensions..."}
{"level":"info","ts":1706029371.7161446,"caller":"otlpreceiver@v0.90.1/otlp.go:83","msg":"Starting GRPC server","kind":"receiver","name":"otlp","data_type":"traces","endpoint":"localhost:4317"}
{"level":"info","ts":1706029371.7175572,"caller":"otlpreceiver@v0.90.1/otlp.go:101","msg":"Starting HTTP server","kind":"receiver","name":"otlp","data_type":"traces","endpoint":"localhost:4318"}
{"level":"info","ts":1706029371.7177145,"caller":"service@v0.90.1/service.go:174","msg":"Everything is ready. Begin running and processing data."}
{"level":"error","ts":1706029371.790157,"logger":"telemetryAPI.Listener","msg":"Unexpected stop on HTTP Server","error":"listen tcp: lookup sandbox.localdomain on 127.0.0.11:53: no such host"}
23 Jan 2024 17:02:52,703 [INFO] (rapid) INIT RTDONE(status: success)
23 Jan 2024 17:02:52,703 [INFO] (rapid) INIT REPORT(durationMs: 1013.435000)
23 Jan 2024 17:02:52,703 [INFO] (rapid) INVOKE START(requestId: 3ea8be1b-f113-4803-8242-d99f78025147)
In lambda_handler!
{"level":"info","ts":1706029372.7185798,"msg":"TracesExporter","kind":"exporter","data_type":"traces","name":"logging","resource spans":1,"spans":2}
23 Jan 2024 17:02:52,719 [INFO] (rapid) INVOKE RTDONE(status: success, produced bytes: 0, duration: 16.667000ms)
</code></pre>
|
<python><docker><aws-lambda><open-telemetry>
|
2024-01-15 14:13:05
| 0
| 1,022
|
Josh Clark
|
77,820,218
| 9,563,537
|
Pydantic V2 patching model fields
|
<h1>Problem</h1>
<p>I'm attempting to patch one or more fields of a pydantic model (v2+) in a unit test.</p>
<h3>Why?</h3>
<p>I want to mock some enum field with a simpler/smaller enum to reduce the test assertion noise. For example, if I have:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
class Foo(BaseModel):
field: FooEnum
</code></pre>
<p>where <code>FooEnum</code> is</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
class FooEnum(Enum):
    """Some enum with lots of fields"""
</code></pre>
<p>If I want to explicitly validate the behaviour when some invalid enum is passed to <code>Foo</code>, the error message can be a nuisance to deal with. As such, I wanted to mock <code>Foo.field</code> with some smaller <code>MockFooEnum</code>, to improve testing readability/remove the need to update this unit test when a new field is added to <code>FooEnum</code>.</p>
<h1>Attempted solutions</h1>
<p>The following approach "works", but this was the result of hacking around in pydantic source code, to see how I could patch fields.</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import contextmanager
from unittest import mock
from collections.abc import Generator
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from pydantic import BaseModel
from pydantic.fields import FieldInfo
@contextmanager
def patch_pydantic_model_field(
target: type[BaseModel], field_overrides: dict[str, FieldInfo]
) -> Generator[type[BaseModel], None, None]:
model_fields = target.model_fields
with mock.patch.object(
target=target, attribute="model_fields", new_callable=mock.PropertyMock
) as mock_fields:
# ? Override the model with new mocked fields.
mock_fields.return_value = model_fields | field_overrides
target.model_rebuild(force=True)
yield target
target.model_rebuild(force=True)
</code></pre>
<h3>Usage</h3>
<pre class="lang-py prettyprint-override"><code>@pytest.mark.parametrize(
"field_overrides",
[{"field": FieldInfo(annotation=MockFooEnum, required=True)}],
)
def test_foo(field_overrides):
with patch_pydantic_model_field(Foo, field_overrides):
assert <something_with_mocked_model>
</code></pre>
<p>This approach seems a bit of a bodge and was curious if there is a nicer way to achieve the above.</p>
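An alternative I've been weighing (a sketch using pydantic v2's documented `create_model` helper; `FooEnum`/`MockFooEnum` below are stand-ins): instead of patching `model_fields`, derive a throwaway subclass whose field is re-declared with the smaller enum.

```python
from enum import Enum

from pydantic import BaseModel, ValidationError, create_model

class FooEnum(Enum):
    A = "a"
    B = "b"
    C = "c"  # imagine many more members

class MockFooEnum(Enum):
    A = "a"

class Foo(BaseModel):  # stand-in for the real model under test
    field: FooEnum

# Subclassing via create_model re-declares `field` with the mock enum
# while inheriting everything else (config, validators) from Foo.
MockFoo = create_model("MockFoo", __base__=Foo, field=(MockFooEnum, ...))

ok = MockFoo(field=MockFooEnum.A)
try:
    MockFoo(field="not-a-member")
    raised = False
except ValidationError:
    raised = True
```

This avoids mutating the shared `Foo` class and any `model_rebuild` bookkeeping, at the cost of the test exercising a subclass rather than `Foo` itself.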
|
<python><pydantic>
|
2024-01-15 13:44:39
| 2
| 1,831
|
Josmoor98
|
77,820,213
| 10,425,150
|
pip installed Pyinstaller in global site-packages instead of venv
|
<p>I've created virtual environment using following command:</p>
<pre><code>py -m venv pygui
</code></pre>
<p>Then I've activated it in CMD prompt:</p>
<pre><code>cd pygui/Scripts
activate.bat
</code></pre>
<p>However when I've tried to install new libraries with pip I don't see it in the list.</p>
<pre><code>pip install pyinstaller
pip list
</code></pre>
<p>The only library I can see is:
pip 23.2.1
<a href="https://i.sstatic.net/5g9Hc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5g9Hc.png" alt="enter image description here" /></a></p>
<p>Even when I try to install it outside the environment, like this:</p>
<pre><code>py -m pip install pyinstaller --upgrade
</code></pre>
<p>I'm not able to call it, as I get the following error:
<a href="https://i.sstatic.net/lz4al.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lz4al.png" alt="'pyinstaller' is not recognized as an internal or external command,
operable program or batch file." /></a></p>
<p>I've checked "path" in environment variables and looks correct to me.</p>
<pre><code>C:\Users\USER\AppData\Local\Programs\Python\Python312\Scripts
C:\Users\USER\AppData\Local\Programs\Python\Python312
C:\Users\USER\AppData\Local\Programs\Python\Launcher\
%USERPROFILE%\AppData\Local\Microsoft\WindowsApps
C:\Users\USER\AppData\Local\Programs\Microsoft VS Code\bin
</code></pre>
|
<python><pip><virtualenv><pyinstaller>
|
2024-01-15 13:43:26
| 1
| 1,051
|
Gооd_Mаn
|
77,820,136
| 2,919,585
|
pandas.to_datetime is off by one hour
|
<p>I have some data recorded with timestamps using <code>time.time()</code>. I want to evaluate the data using <code>pandas</code> and convert the timestamps to datetime objects for better handling. However, when I try, all my timing data is off by one hour. This example reproduces the issue on my machine:</p>
<pre class="lang-py prettyprint-override"><code>import datetime as dt
import pandas as pd
origin = dt.datetime(2024, 1, 15).timestamp()
timestamps = [origin + 3600 * i for i in range(10)]
print([dt.datetime.fromtimestamp(t).isoformat() for t in timestamps])
print(pd.to_datetime(timestamps, unit='s'))
</code></pre>
<p>Output:</p>
<pre><code>['2024-01-15T00:00:00', '2024-01-15T01:00:00', '2024-01-15T02:00:00', '2024-01-15T03:00:00', '2024-01-15T04:00:00', '2024-01-15T05:00:00', '2024-01-15T06:00:00', '2024-01-15T07:00:00', '2024-01-15T08:00:00', '2024-01-15T09:00:00']
DatetimeIndex(['2024-01-14 23:00:00', '2024-01-15 00:00:00',
'2024-01-15 01:00:00', '2024-01-15 02:00:00',
'2024-01-15 03:00:00', '2024-01-15 04:00:00',
'2024-01-15 05:00:00', '2024-01-15 06:00:00',
'2024-01-15 07:00:00', '2024-01-15 08:00:00'],
dtype='datetime64[ns]', freq=None)
</code></pre>
<p>I am guessing that this has something to do with my timezone (I'm in UTC+1) but I'm confused as to how I should deal with this. If possible, I want to avoid explicitly specifying timezones and such (though I will do it if necessary). I want to just get the same times as I get with <code>dt.datetime.fromtimestamp()</code>. How do I do this?</p>
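A sketch of what I think should work: parse as UTC (which is what epoch seconds are by definition), convert to an explicit local zone, then strip the tz info to get the same naive wall times as `datetime.fromtimestamp()`. The zone name `"Europe/Paris"` below is an assumption for UTC+1 and would be replaced with the actual IANA zone:

```python
import pandas as pd

# 2024-01-15 00:00:00 UTC and two hourly steps, as plain epoch seconds.
timestamps = [1705276800 + 3600 * i for i in range(3)]

idx = (
    pd.to_datetime(timestamps, unit="s", utc=True)
    .tz_convert("Europe/Paris")  # assumption: substitute your own zone
    .tz_localize(None)           # drop tz info -> naive local wall time
)
```

Using an IANA zone (rather than a fixed offset) also keeps the conversion correct across DST transitions.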
|
<python><pandas><datetime><timestamp>
|
2024-01-15 13:30:49
| 2
| 571
|
schtandard
|
77,820,120
| 7,631,505
|
Numpy filter data based on multiple conditions
|
<p>Here's my question: I'm trying to filter an image based on the values of two coordinates. I can do this easily with a for loop:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-1, 1, 101)
y = np.linspace(-1, 1, 51)
X, Y = np.meshgrid(x, y)
Z = np.exp(X**2 + Y**2)
newZ = np.zeros(Z.shape)
for i, y_i in enumerate(y):
for j, x_j in enumerate(x):
if x_j > 0 and y_i > 0:
newZ[i, j] = Z[i, j]
else:
newZ[i, j] = 0
plt.contourf(x, y, newZ)
plt.show()
</code></pre>
<p>but I'm pretty sure there should be a way to do it by indexing (as it should be faster) like:</p>
<pre><code>Z = Z[y>0, x>0]
</code></pre>
<p>which doesn't work (<code>IndexError: Shape mismatch</code>)</p>
<p>I assume I could do this with a mask using masked array perhaps (it seems they are doing something like that <a href="https://stackoverflow.com/questions/27156046/filter-data-based-on-latitude-and-longitudes-numpy">here</a>), but I wonder if there is a simple one-liner in normal numpy that I can't seem to figure out.
Thanks</p>
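The double loop collapses to one vectorized line: the meshgrid arrays `X` and `Y` already have `Z`'s shape, so an elementwise boolean condition plus `np.where` does the job — a sketch:

```python
import numpy as np

x = np.linspace(-1, 1, 101)
y = np.linspace(-1, 1, 51)
X, Y = np.meshgrid(x, y)
Z = np.exp(X**2 + Y**2)

# X and Y have the same shape as Z, so the elementwise condition
# replaces the nested loops; & is the elementwise boolean AND.
newZ = np.where((X > 0) & (Y > 0), Z, 0)
```

As for why `Z[y>0, x>0]` raises: two 1-D boolean masks of different lengths are combined pointwise (integer-array style), not as an outer product; `Z[np.ix_(y > 0, x > 0)]` would instead select the rectangular sub-block.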
|
<python><numpy><indexing>
|
2024-01-15 13:28:32
| 2
| 316
|
mmonti
|
77,820,083
| 13,803,549
|
How to get the average of a specific value in Django queryset
|
<p>I have two models 'Teams' and 'Members'.</p>
<p>One of the fields in Members is 'favorite_team' and a field in Teams is 'favorite_avg'</p>
<p>I am trying to get an average of the favorite_team field by entering in a value.</p>
<p>I am trying to find a better/faster way to do this:</p>
<pre><code> all_members = Members.objects.all().count()
giants_fans = Members.objects.filter(favorite_team='New York Giants').count()
team = Teams.objects.get(name='New York Giants')
team.favorite_avg = giants_fans/all_members
team.save()
</code></pre>
|
<python><django><django-queryset><aggregate-functions>
|
2024-01-15 13:23:23
| 1
| 526
|
Ryan Thomas
|
77,819,867
| 595,189
|
What is the recommended way of updating when using `pyproject.toml` and `setuptools`
|
<p>I am/was operating under the assumption that <code>pyproject.toml</code> is supposed to be a unified metadata file, replacing others (<code>setup.*</code>, <code>requirements.txt</code>...), which worked for the most part for me.</p>
<p>However, I have not found a way to update all depended-on packages like <code>pip install -r requirements.txt --upgrade</code> would do.</p>
<p>Running <code>pip install . -U</code> has no effect if all depended-on packages are already present but not necessarily on the latest version.</p>
<p>An excerpt from my <code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "myapp"
version = "0.1.0"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
"fastapi",
"uvicorn[standard]",
"websocket",
"jinja2",
"pydantic",
"surrealdb",
"humanize",
"pendulum",
"icecream",
"pyotp",
"webauthn"
]
[build-system]
requires = ["setuptools", "setuptools-scm"]
build-backend = "setuptools.build_meta"
</code></pre>
<p>How do I update this? Is this why I keep seeing a <code>requirements.txt</code> alongside <code>pyproject.toml</code> in projects?</p>
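One option, assuming a plain pip/setuptools workflow: pip's default upgrade strategy (`only-if-needed`) leaves already-satisfied dependencies alone, so an eager upgrade is needed to pull them all forward:

```shell
# Reinstall the project and eagerly upgrade every dependency it declares
pip install --upgrade --upgrade-strategy eager .
```

With `--upgrade-strategy eager`, each dependency is upgraded to the latest version allowed by the constraints in `pyproject.toml`, regardless of whether the installed version already satisfies them.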
|
<python><pip><setuptools><pyproject.toml>
|
2024-01-15 12:43:40
| 0
| 2,578
|
TeaOverflow
|
77,819,808
| 2,439,278
|
Bump2version + pre-commit hook in python code
|
<p>I'm using the bump2version Python module to increment my codebase's version automatically. When the pre-commit hook runs, <code>version.sh</code> goes into an infinite loop and the version keeps incrementing in <code>__init__.py</code>. Did I miss anything?</p>
<p><strong>.bumpversion.cfg</strong></p>
<pre><code>[bumpversion]
current_version = 0.0.0
commit = True
message = "Merge the new version"
[bumpversion:file:__init__.py]
</code></pre>
<p><strong>.pre-commit-config.yaml</strong></p>
<pre><code>repos:
- repo : local
hooks:
- id: VersionIncrement
name: VersionIncrement
entry: ./version.sh
language: script
exclude: (.pre-commit-config.yaml)|(.git/)
</code></pre>
<p><strong>version.sh</strong></p>
<pre><code>#!/usr/bin/env bash
git_branch_name=`git branch --show-current`
echo $git_branch_name
if [[ $git_branch_name =~ "feature/" ]]; then
bump2version --allow-dirty --verbose patch
exit 0
elif [[ $git_branch_name == "dev" ]]; then
bump2version --allow-dirty --verbose minor
exit 0
elif [[ $git_branch_name == "master" ]]; then
bump2version --allow-dirty --verbose major
exit 0
else
echo "branch does not exist"
exit 1
fi
</code></pre>
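A likely cause (an assumption based on the config shown): with `commit = True`, bump2version creates its own commit from inside the pre-commit hook, which triggers the hook again, and so on. One way to break the loop is to stop bump2version from committing inside the hook:

```shell
# Inside version.sh: bump the version but let the original commit
# pick up the change instead of creating a new (hook-triggering) one
bump2version --allow-dirty --no-commit --verbose patch
git add __init__.py .bumpversion.cfg   # stage the bumped files
```

If your bump2version version lacks `--no-commit`, setting `commit = False` in `.bumpversion.cfg` should have the same effect.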
|
<python><git><pre-commit><pre-commit.com><bump2version>
|
2024-01-15 12:31:34
| 1
| 1,264
|
user2439278
|
77,819,804
| 18,894,965
|
Django Rest Framework best way to delete an object
|
<p>I'm trying to make a delete method in my DRF project. I set up <code>@api_view(['DELETE'])</code> on the view function and a form to delete the object, but it didn't work. So I had to change the <code>@api_view</code> to POST and set the form method to POST to delete the object. Is there any way I can do this using the DELETE method?</p>
<p>this is my view</p>
<pre><code>@api_view(['POST'])
@permission_classes([IsAuthenticated])
def delete(request,article_id):
if request.method == 'POST':
article = get_object_or_404(Article, pk=article_id)
if article.author == request.user:
article.delete()
return redirect('/')
return Response(status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>this is my form method</p>
<pre><code> <form action="{% url 'news:delete' article.id %}" method="post">
{% csrf_token %}
<input type="submit" value="delete">
</form>
</code></pre>
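For context on why the POST workaround was needed: plain HTML forms only support GET and POST, so a <code>method="delete"</code> form silently falls back to GET. A DELETE view works when the request comes from a client that can issue DELETE (JavaScript `fetch`, DRF's browsable API, curl). A sketch keeping the names from the question (needs the surrounding Django project to run):

```python
from django.shortcuts import get_object_or_404
from rest_framework import status
from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response

@api_view(['DELETE'])
@permission_classes([IsAuthenticated])
def delete(request, article_id):
    article = get_object_or_404(Article, pk=article_id)
    if article.author != request.user:
        return Response(status=status.HTTP_403_FORBIDDEN)
    article.delete()
    return Response(status=status.HTTP_204_NO_CONTENT)
```

On the page, the form submit would then be replaced by something like `fetch(url, {method: 'DELETE', headers: {'X-CSRFToken': token}})`.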
|
<python><django><rest><django-rest-framework>
|
2024-01-15 12:30:51
| 1
| 503
|
devMe
|
77,819,663
| 15,341,457
|
Tkinter - Creating scrollable frame
|
<p>I'm trying to create a scrollable frame with customtkinter following this <a href="https://www.youtube.com/watch?v=gd6tiT0vCII" rel="nofollow noreferrer">guide</a>. The result I'm trying to obtain is a series of frames packed one over the other just like in the linked guide. What I get is just the red canvas which can be scrolled.</p>
<p>The page initially shows only a button and nothing else. Clicking this button calls the function <em>search_keywords()</em>. This function obtains the list of strings <em>filtered_keywords</em> and then uses it as Input of a <em>ListFrame</em> object.</p>
<pre><code>#The class is a frame because it represents only one of the pages of my app. It is raised by a Controller with tkraise()
class KeywordsFound(ctk.CTkFrame):
def __init__(self, parent):
super().__init__(parent)
self.search_button = ctk.CTkButton(self,
text = 'Search',
font = ('Inter', 25),
border_color = '#8D9298',
fg_color = '#FFBFBF',
hover_color='#C78B8B',
border_width=2,
text_color='black',
command = lambda: self.search_keywords(parent))
self.search_button.place(relx = 0.5,
rely = 0.2,
relwidth = 0.2,
relheight = 0.075,
anchor = 'center')
def search_keywords(self, parent):
#Machine learning is used to find the keywords in a pdf document.
#The code is not included because it is not relevant, although I've checked
#that the variable (list) 'filtered keywords' is assigned correctly.
self.keywords_frame = ListFrame(self, str(self.filtered_keywords), 50)
</code></pre>
<p>The <em>ListFrame</em> class is a frame that contains a frame (the one I want to scroll) and a canvas, it uses two functions: <em>create_item</em> fills the scrollable frame with the elements I want it to contain. <em>update_size</em> is called every time the window size is changed and also when the ListFrame is created.</p>
<pre><code>class ListFrame(ctk.CTkFrame):
def __init__(self, parent, text_data, item_height):
super().__init__(parent)
self.pack(expand = True, fill = 'both')
self._fg_color = 'white'
# widget data
self.text_data = text_data
self.item_number = len(text_data)
self.list_height = item_height * self.item_number
#canvas
self.canvas = tk.Canvas(self, background = 'red', scrollregion=(0,0,self.winfo_width(),self.list_height))
self.canvas.pack(expand = True, fill = 'both')
#display frame
self.frame = ctk.CTkFrame(self)
for item in self.text_data:
self.create_item(item).pack(expand = True, fill = 'both')
#scrollbar
self.vert_scrollbar = ctk.CTkScrollbar(self, orientation = 'vertical', command = self.canvas.yview)
self.canvas.configure(yscrollcommand = self.vert_scrollbar.set)
self.vert_scrollbar.place(relx = 1, rely = 0, relheight=1, anchor = 'ne')
self.canvas.bind_all('<MouseWheel>', lambda event: self.canvas.yview_scroll(-event.delta, "units"))
#Configure will bind every time we update the size of the list frame. It also run when we create it for the first time
self.bind('<Configure>', self.update_size)
</code></pre>
<p>The method <em>create_window</em> is called inside the function <em>update_size</em>. This method makes it so that the canvas holds the widget specified in the parameter <em>window</em>.</p>
<pre><code> def update_size(self, event):
#if the container is larger than the list the scrolling stops working
#if that happens we want to stretch the list to cover the entire height of the container
if self.list_height >= self.winfo_height():
height = self.list_height
#let's enable scrolling in case it was disabled before
self.canvas.bind_all('<MouseWheel>', lambda event: self.canvas.yview_scroll(-event.delta, "units"))
#let's place the scrollbar again in case it was hidden before
self.vert_scrollbar = ctk.CTkScrollbar(self, orientation = 'vertical', command = self.canvas.yview)
self.canvas.configure(yscrollcommand = self.vert_scrollbar.set)
self.vert_scrollbar.place(relx = 1, rely = 0, relheight=1, anchor = 'ne')
else:
height = self.winfo_height()
#if we scroll we still get some weird behavior, let's disable scrolling
self.canvas.unbind_all('<MouseWheel>')
#hide the scrollbar
self.vert_scrollbar.place_forget()
#we create the window here because only this way the parameter winfo_width will be set correctly.
#winfo_width contains the width of the widgets, we want to use it to update the width of the frame inside the canvas as the window width changes
self.canvas.create_window((0,0), window = self.frame, anchor = 'nw', width = self.winfo_width(), height = height)
def create_item(self, item):
frame = ctk.CTkFrame(self.frame)
frame.rowconfigure(0, weight = 1)
frame.columnconfigure(0, weight = 1)
#widgets
ctk.CTkLabel(frame, text = item).grid(row = 0, column = 0)
ctk.CTkButton(frame, text = 'DELETE').grid(row = 0, column = 1)
return frame
</code></pre>
|
<python><tkinter><canvas><scroll><frame>
|
2024-01-15 12:06:22
| 1
| 332
|
Rodolfo
|
77,819,377
| 14,282,714
|
AttributeError: module 'streamlit' has no attribute 'chat_input'
|
<p>I'm trying to run a simple Streamlit app in my conda env. When I run the following <code>app.py</code> file:</p>
<pre><code># Streamlit app
import streamlit as st
#
prompt = st.chat_input("Say something")
if prompt:
st.write(f"User has sent the following prompt: {prompt}")
</code></pre>
<p>It returns the following error when running <code>streamlit run app.py</code>:</p>
<pre><code>Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 556, in _run_script
exec(code, module.__dict__)
File "/Users/quinten/Documents/app.py", line 11, in <module>
prompt = st.chat_input("Say something")
AttributeError: module 'streamlit' has no attribute 'chat_input'
</code></pre>
<p>I don't understand why this error happens; I'm using the newest Streamlit version. I also don't understand why the error mentions python3.9 while I use 3.12 in my environment. I checked this <a href="https://discuss.streamlit.io/t/attributeerror-module-streamlit-has-no-attribute-chat-input/46197/7" rel="nofollow noreferrer">blog</a>, but it doesn't help unfortunately. So I was wondering if anyone knows why this happens?</p>
<hr />
<p>I'm using the following versions:</p>
<pre><code>streamlit 1.30.0
</code></pre>
<p>And python:</p>
<pre><code>python --version
Python 3.12.0
</code></pre>
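The traceback path (`.../Versions/3.9/...`) suggests the `streamlit` command on `PATH` belongs to a different interpreter than the Python 3.12 being checked, so an old Streamlit (from before `st.chat_input` was added in 1.24) may be the one actually running. One way to pin the run to a specific interpreter:

```shell
# Run Streamlit with the interpreter you think you are using,
# and install/upgrade it there if it is missing or outdated
python -m pip install --upgrade streamlit
python -m streamlit run app.py
```

`python -m ...` guarantees that pip and Streamlit run under whatever `python` resolves to, side-stepping the PATH mismatch.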
|
<python><python-3.x><pip><attributeerror><streamlit>
|
2024-01-15 11:12:48
| 2
| 42,724
|
Quinten
|
77,819,362
| 1,574,551
|
while loop exit after receive error in a try except
|
<p>I would like to execute the Python code below, but in the event of a timeout error, I want it to automatically retry after a 60-second delay and continue. If any other error occurs, such as a permission issue, the code should exit with the corresponding error message. The current code doesn't exit properly and gets stuck in an infinite loop. How can I modify it to address this? Please advise.</p>
<pre><code> pause=60
while True:
try:
df=report.next_records() ##call the function here
except:
sleep(pause)
pause+=60
else:
break
</code></pre>
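The bare `except:` swallows every exception (including the permission error) and retries forever. A sketch that retries only timeouts and re-raises everything else — here `TimeoutError` stands in for whatever timeout exception `report.next_records()` actually raises (e.g. `requests.exceptions.Timeout`):

```python
import time

def call_with_retry(fn, pause=60, step=60, max_tries=None):
    """Call fn(), retrying on timeout errors with a growing pause;
    any other exception propagates immediately."""
    tries = 0
    while True:
        tries += 1
        try:
            return fn()
        except TimeoutError:                  # retry only this error
            if max_tries is not None and tries >= max_tries:
                raise                         # give up eventually
            time.sleep(pause)
            pause += step

# df = call_with_retry(report.next_records)  # usage in the question's terms
```

Catching the specific exception class is what lets other errors (permission problems, etc.) exit with their own traceback instead of looping.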
|
<python><while-loop><except>
|
2024-01-15 11:10:08
| 2
| 1,332
|
melik
|
77,819,273
| 18,125,194
|
Word2Vec to calculate similarity of movies to high-performing movies
|
<p>I have a dataset with user ratings for movies and movie descriptions like this</p>
<pre><code>import pandas as pd
df =pd.DataFrame ({
'description': [
'Two imprisoned men bond over a number of years',
'A family heads to an isolated hotel for the winter',
'In a future where technology controls everything',
'A young lion prince flees his kingdom only to learn the true meaning of responsibility',
'A group of intergalactic criminals are forced to work together to stop a fanatical warrior'
],
'ratings': [8.7, 9.3, 7.9, 8.5, 8.1]
})
df
</code></pre>
<p>I want to use the description (along with other features) to predict the ratings of movies.</p>
<p>I am trying to use Word2Vec to calculate a similarity score that will determine how similar a new movie is to past movies that performed well. My plan was to define the top performing movies, and calculate a similarity score for all movies in the dataset before using the dataset with another machine learning algorithm.</p>
<p>But I am having trouble calculating the similarity score (I've never used this method before).</p>
<pre><code>from gensim.models import Word2Vec
from nltk.tokenize import word_tokenize
# Create Tokens
df['tokenized_description'] = df['description'].apply(lambda x: word_tokenize(x.lower()))
# Train Word2Vec model
word2vec_model = Word2Vec(sentences=df['tokenized_description'], vector_size=100, window=5, min_count=1, workers=4)
# define top performing movies
threshold = df['ratings'].quantile(0.75)
highest_grossing_movies = df[df['ratings'] >= threshold]
# Tokenize descriptions of highest-grossing movies
highest_grossing_movies['tokenized_description'] = highest_grossing_movies['description'].apply(lambda x: word_tokenize(x.lower()))
# Convert the tokenized descriptions to embeddings
embeddings_high_grossing = highest_grossing_movies['description'].apply(lambda desc: word2vec_model.wv[word_tokenize(desc)]).tolist()
# Assess similarity for each movie description in the entire DataFrame
df['similarity_score'] = [word2vec_model.wv.similarity(df['description'])
</code></pre>
<p>When I run the code, I get the error</p>
<pre><code>KeyError: "Key 'Two' not present"
</code></pre>
<p>I'm sure the last line of the code is wrong, but I'm not sure of how to correct this.</p>
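Two separate issues are visible here: `wv.similarity()` compares two *words*, not documents, and the `KeyError: 'Two'` comes from looking up un-lowercased tokens when the model was trained on lowercased ones. A common approach is to average each document's word vectors and take the cosine similarity against the centroid of the high-rated documents. A minimal numpy sketch, with made-up 2-D vectors standing in for `word2vec_model.wv`:

```python
import numpy as np

# Toy stand-in for word2vec_model.wv (vector values are made up)
wv = {
    'lion':   np.array([1.0, 0.1]),
    'prince': np.array([0.9, 0.2]),
    'hotel':  np.array([0.1, 1.0]),
    'winter': np.array([0.2, 0.9]),
}

def doc_vector(tokens):
    """Mean of the vectors of known (lowercased) tokens; skips OOV words."""
    vecs = [wv[t] for t in tokens if t in wv]     # guard against KeyError
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Centroid of the "top performing" documents, then score any document
top_centroid = doc_vector(['lion', 'prince'])
score = cosine(doc_vector(['lion', 'hotel']), top_centroid)
```

With the real model, `wv[t]` and the `t in wv` membership check work the same way on a gensim `KeyedVectors` object, as long as tokens are lowercased consistently at lookup time.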
|
<python><nlp><word2vec>
|
2024-01-15 10:54:24
| 1
| 395
|
Rebecca James
|
77,819,136
| 15,917,305
|
Clone Repo into Local Repo and Install Package via Pip via Local Repo
|
<p>I am a Python and Github novice. However, I'd like to run some code available on Github. My understanding is that I need to:</p>
<p>"First, clone this repo, cd into the local repo, and install via pip from your local repo"</p>
<p>Can someone provide the prerequisite and requisite steps for executing:</p>
<ul>
<li>(1) cloning the repo into local repo.</li>
<li>(2) and pip installing the package via the local repo.</li>
</ul>
<p>I have only ever previously installed packages through the terminal, so the Github phase is new to me.</p>
<p><a href="https://github.com/lucasimi/tda-mapper-python" rel="nofollow noreferrer">https://github.com/lucasimi/tda-mapper-python</a></p>
<p>All help would be appreciated on prerequisites and requisites.</p>
<p>Thank you</p>
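The quoted instructions translate to three shell commands (repo URL taken from the question):

```shell
# 1. clone the repo into a new local directory
git clone https://github.com/lucasimi/tda-mapper-python
# 2. cd into the local repo
cd tda-mapper-python
# 3. install the package from the local checkout
pip install .
```

The only prerequisites are having `git` and `pip` on PATH; `pip install .` reads the package metadata from the checkout and installs it into the current Python environment.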
|
<python><github>
|
2024-01-15 10:30:36
| 1
| 339
|
EB3112
|
77,819,004
| 633,001
|
Python import failed due to missing symbol
|
<p>I have an external library with a header file and a .so library. I wanted to wrap this via PyBind11 to access the C code from Python. I can build the module just fine, but when I want to import it, I get:</p>
<pre><code>/home/user/dev/external/Python/PyBind_Module/external_python.cpython-310-x86_64-linux-gnu.so: undefined symbol: <FUNCTION>
File "/home/user/dev/external/Python/pybind_test.py", line 3, in <module>
import external_wrapper as exw
ImportError: /home/user/dev/external/Python/PyBind_Module/external_python.cpython-310-x86_64-linux-gnu.so: undefined symbol: <FUNCTION>
</code></pre>
<p>When I do a <code>nm -gD libexternal.so</code>, I find in the exposed list</p>
<pre><code>nm -gD /usr/lib/libexternal.so | grep 'FUNCTION'
00000000000170c0 T FUNCTION
0000000000018ff0 T FUNCTIONd
</code></pre>
<p>I put the file at /usr/lib, which I thought would mean the code should find it when loading .so files, but I now assume I need to tell either PyBind11 or my Python code to look for that library. How would I fix this?</p>
<p> </p>
<p>Edit:</p>
<p>Setup.py:</p>
<pre><code>from pathlib import Path
from pybind11.setup_helpers import build_ext
from pybind11.setup_helpers import Pybind11Extension
from setuptools import setup
example_module = Pybind11Extension(
"external_python",
sources=[str(fname) for fname in Path("src").glob("*.cpp")],
include_dirs=["include", "."],
extra_compile_args=["-O3"],
libraries=["/usr/lib/libexternal.so"],
library_dirs=["/usr/lib"]
)
setup(
name="external_module",
version=1.0,
description="C++ wrapper around the external library",
ext_modules=[example_module],
cmdclass={"build_ext": build_ext},
)
</code></pre>
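A likely fix (an assumption based on the setup shown): `libraries=` takes bare library names, not paths — setuptools turns `"external"` into `-lexternal` — so the full-path entry above is effectively ignored and the extension is built without being linked against `libexternal.so` at all, leaving the symbol undefined:

```python
example_module = Pybind11Extension(
    "external_python",
    sources=[str(fname) for fname in Path("src").glob("*.cpp")],
    include_dirs=["include", "."],
    extra_compile_args=["-O3"],
    libraries=["external"],                    # -> -lexternal
    library_dirs=["/usr/lib"],                 # link-time search path
    extra_link_args=["-Wl,-rpath,/usr/lib"],   # run-time search path
)
```

The rpath entry is redundant for `/usr/lib` itself (the loader searches it by default) but matters if the library ever moves to a non-standard location.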
|
<python><shared-libraries><pybind11>
|
2024-01-15 10:05:33
| 1
| 3,519
|
SinisterMJ
|
77,818,975
| 13,955,154
|
Periodogram to find the season of a time series
|
<p>I have a time series from a toggle system with a repetitive pattern of length k, where (k-1) consecutive values are 0 and one is 1. I want to use a periodogram to find the length k of this pattern. How can I do so?</p>
<p>Currently I have this code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import periodogram
def find_repetitive_pattern_length(time_series):
f, Pxx = periodogram(time_series)
max_power_index = np.argmax(Pxx)
dominant_frequency = f[max_power_index]
period_length = int(1 / dominant_frequency)
plt.figure(figsize=(10, 6))
plt.plot(f, Pxx, label='Periodogram')
plt.scatter(f[max_power_index], Pxx[max_power_index], color='red', label=f'Dominant Frequency: {dominant_frequency:.2f} Hz')
plt.title('Periodogram with Dominant Frequency')
plt.xlabel('Frequency (Hz)')
plt.ylabel('Power/Frequency Density')
plt.legend()
plt.show()
return period_length
time_series4 = np.array([0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1])
time_series7 = np.array([0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1])
period_length = find_repetitive_pattern_length(time_series4)
print(f"The length of the first repetitive pattern is: {period_length}")
period_length = find_repetitive_pattern_length(time_series7)
print(f"The length of the second repetitive pattern is: {period_length}")
</code></pre>
<p>But I get:</p>
<blockquote>
<p>The length of the first repetitive pattern is: 4 (correct)</p>
</blockquote>
<blockquote>
<p>The length of the second repetitive pattern is: 3 (incorrect)</p>
</blockquote>
<p>What am I doing wrong, and how can I fix it?</p>
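An explanation and a workaround, as a sketch: the periodogram of a length-28 impulse train with period 7 has harmonics of (near-)equal power at 1/7, 2/7, and 3/7 cycles/sample, so `argmax` can land on a harmonic rather than the fundamental — here 2/7, and `int(1 / (2/7))` then truncates 3.5 down to 3. The autocorrelation, by contrast, peaks at the fundamental lag:

```python
import numpy as np

def find_period(x):
    """Period = lag of the highest autocorrelation peak (lag >= 1)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # remove the mean so lag 0 / DC
                                          # does not dominate
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]   # lags 0..N-1
    lags = np.arange(1, len(x) // 2 + 1)  # only lags seen by >= 2 repeats
    return int(lags[np.argmax(ac[lags])])

print(find_period([0, 0, 0, 1] * 4))                    # -> 4
print(find_period([0, 0, 0, 0, 0, 0, 1] * 4))           # -> 7
```

For an impulse train the autocorrelation at lag k counts how many spikes line up with themselves shifted by k, which is maximal at the true period.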
|
<python><numpy><scipy><time-series><pattern-recognition>
|
2024-01-15 10:00:54
| 1
| 720
|
Lorenzo Cutrupi
|
77,818,960
| 9,658,149
|
Langchain, error raised by bedrock service Malformed input request: #: extraneous key [functions]
|
<p>I have this code that just creates tags for each document. I have done it with ChatOpenAI(), and it works perfectly, but when I change to BedrockChat(), I get the error. This is the code.</p>
<pre><code>from langchain_community.chat_models import BedrockChat
from langchain.chains import create_tagging_chain_pydantic
class Tags(BaseModel):
sql: str = Field(
description="Whether the text fragment includes sql code.",
default="False",
)
python: str = Field(
description="Whether the text fragment includes python code.",
default="False",
)
tagging_chain = create_tagging_chain_pydantic(pydantic_schema=Tags,
llm = BedrockChat(model_id="anthropic.claude-v2") ,
prompt=self.tagging_prompt)
tagging_results = tagging_chain.batch(
inputs=[{"input": doc.page_content} for doc in docs],
return_exceptions=True,
config={
"max_concurrency": 5,
},
)
</code></pre>
<p>I got this error:</p>
<blockquote>
<p>Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: extraneous key [functions] is not permitted#: extraneous key [function_call] is not permitted, please reformat your input and try again.</p>
</blockquote>
<p>When I print this:</p>
<pre><code>print(tagging_chain.llm_kwargs)
</code></pre>
<blockquote>
<p>{'functions': [{...}], 'function_call': {'name':
'information_extraction'}}</p>
</blockquote>
<pre><code>print(tagging_chain.llm_kwargs.get("functions"))
</code></pre>
<blockquote>
<p>[{'name': 'information_extraction', 'description': 'Extracts the relevan...e passage.', 'parameters': {...}}]</p>
</blockquote>
<p>How can I fix this error?</p>
|
<python><pydantic><langchain><amazon-bedrock>
|
2024-01-15 09:56:55
| 0
| 2,097
|
Eric Bellet
|
77,818,890
| 1,240,075
|
Python ModuleNotFoundError when running tests
|
<p>I have a Python program that runs correctly as expected, but for some reason I am not able to execute the tests.</p>
<p>In the folder hierarchy, <em>tests</em> is a sibling folder of <em>src</em> and the error I got when executing tests is:</p>
<pre><code>ModuleNotFoundError: No module named 'sub'
</code></pre>
<p>Here, the hierarchy plus a MWE of the code:</p>
<pre><code>- src/
- __init__.py
- dummyA.py
- main.py
- sub/
-__init__.py
- dummyB.py
- tests/
- __init__.py
- test_s.py
</code></pre>
<p><strong>src/dummyA.py</strong></p>
<pre><code>from sub.dummyB import Sub
class Sup:
def __init__(self):
self.sub_var = Sub()
def hello_sup(self, s):
sub_word = self.sub_var.hello_sub(s)
return f"Super {sub_word}!"
</code></pre>
<p><strong>src/sub/dummyB.py</strong> (inside "sub" folder)</p>
<pre><code>class Sub:
def hello_sub(self, a):
return f"{a}"
</code></pre>
<p><strong>src/main.py</strong></p>
<pre><code>from sub.dummyB import Sub
from dummyA import Sup
def main():
sub = Sub()
sup = Sup()
dummy_text = "cat"
sub_word = sub.hello_sub(dummy_text)
sup_word = sup.hello_sup(dummy_text)
print("Sub is:", sub_word) # cat
print(f"Sup is: {sup_word}") # Super cat!
if __name__ == "__main__":
main()
</code></pre>
<p><strong>tests/test_s.py</strong></p>
<pre><code>from src.dummyA import Sup
def test_sub_sup():
dummy_sup = Sup()
expected_res = dummy_sup.hello_sup("cat")
assert expected_res == "Super cat!"
</code></pre>
<p><strong>ERROR log:</strong></p>
<pre><code>ImportError: Failed to import test module: test_s
Traceback (most recent call last):
File "/usr/lib/python3.11/unittest/loader.py", line 162, in loadTestsFromName
module = __import__(module_name)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/demo/tests/test_s.py", line 1, in <module>
from src.dummyA import Sup
File "/home/user/demo/src/dummyA.py", line 1, in <module>
from sub.dummyB import Sub
ModuleNotFoundError: No module named 'sub'
</code></pre>
<p>No matter if I use <em>pytest</em> or <em>unittest</em>, the error is always the same.
It looks like the system breaks at the first line of <em>dummyA.py</em> because it is not able to import the module called <em>sub</em>. Weird, because the same line is executed correctly when the app runs.</p>
<p>As you can see, I also put an <code>__init__.py</code> in each folder, but this does not help.</p>
<p>Am I missing something?
Thanks!</p>
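A likely explanation (an assumption from the layout shown): when `main.py` runs, its own directory `src/` is put on `sys.path`, so `from sub.dummyB import Sub` resolves; when the tests run from the project root, only the root is on the path, so `sub` is unknown. One fix is to tell pytest to put `src` on the path as well (pytest ≥ 7 reads a `pythonpath` option from its config):

```ini
# pytest.ini at the project root (hypothetical file, matching the layout above)
[pytest]
pythonpath = . src
```

The alternative is to make every import absolute from the root (`from src.sub.dummyB import Sub` inside `dummyA.py`) so both the app and the tests resolve modules the same way.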
|
<python><unit-testing><testing><pytest><python-unittest>
|
2024-01-15 09:45:06
| 1
| 1,611
|
user840718
|
77,818,860
| 13,944,524
|
Why is the SecurityScopes object empty when used in a path operation function in FastAPI?
|
<p>Here is a simple code to reproduce:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Annotated, Any
from fastapi import FastAPI, Security
from fastapi.security import SecurityScopes
app = FastAPI()
def dependency_one(scope: SecurityScopes):
print("dependency function is running.")
print(f"scope in dependency_one is {scope.scopes}.")
@app.get("/")
def index(
scope: SecurityScopes,
obj: Annotated[Any, Security(dependency_one, scopes=["items"])],
):
print(f"scope in path operation function is {scope.scopes}.")
return "Hello test"
</code></pre>
<p>After requesting the <code>"/"</code> path, here is the output:</p>
<pre class="lang-none prettyprint-override"><code>INFO: Started server process [18490]
INFO: Waiting for application startup.
INFO: Application startup complete.
dependency function is running.
scope in dependency_one is ['items'].
scope in path operation function is [].
INFO: 127.0.0.1:64055 - "GET / HTTP/1.1" 200 OK
</code></pre>
<p>FastAPI correctly collects the scope defined in dependant (<code>index</code>) and injects it in to the dependency <code>dependency_one</code>. But when I want to use it inside the path operation function, it doesn't inject it and it's empty.</p>
<p>Is that how it is supposed to work? Does that mean I must check and verify the scopes only inside the dependencies' chain and not in the path operation function directly?</p>
<p>The <a href="https://fastapi.tiangolo.com/advanced/security/oauth2-scopes/#more-details-about-securityscopes" rel="nofollow noreferrer">documentation</a> says:</p>
<blockquote>
<p>It will always have the security scopes <strong>declared in the current
Security dependencies</strong> and all the dependants for that specific path
operation and that specific dependency tree.</p>
</blockquote>
|
<python><fastapi>
|
2024-01-15 09:39:36
| 0
| 17,004
|
S.B
|
77,818,764
| 3,727,079
|
How can I select all dataframe entries between two times, when the time is a series?
|
<p>Here's some data:</p>
<pre><code>my_dataframe = pd.DataFrame({'time': ["2024-1-1 09:00:00", "2024-1-1 15:00:00", "2024-1-1 21:00:00", "2024-1-2 09:00:00", "2024-1-2 15:00:00", "2024-1-2 21:00:00"],
'assists': [5, 7, 7, 9, 12, 9],
'rebounds': [11, 8, 10, 6, 6, 5],
'blocks': [4, 7, 7, 6, 5, 8]})
</code></pre>
<p><a href="https://i.sstatic.net/yaA3d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yaA3d.png" alt="enter image description here" /></a></p>
<p>I want to select all data between noon and 10pm, i.e. rows 1-2 & 4-5. How can I do that?</p>
<p>I tried using <a href="https://stackoverflow.com/questions/45578836/pick-timestamps-in-certain-time-range-from-a-datetimeindex">between_time</a>, but it does not seem to work because (as far as I can tell) the time is a series in the data and not a timestamp. That suggests I need to first convert the series to a timestamp, but <a href="https://www.programiz.com/python-programming/datetime/strptime" rel="nofollow noreferrer">datetime.strptime</a> does not seem to work because the time is not a string, and neither does <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.to_timestamp.html" rel="nofollow noreferrer">series.to_timestamp</a> (<code>my_dataframe.iloc[:,0].to_timestamp()</code> seems to raise an "unsupported Type RangeIndex" error).</p>
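A sketch of one approach: parse the column with `pd.to_datetime`, set it as the index, and then `between_time` works (it requires a `DatetimeIndex`, which is why the series-typed string column was rejected):

```python
import pandas as pd

df = pd.DataFrame({
    'time': ["2024-1-1 09:00:00", "2024-1-1 15:00:00", "2024-1-1 21:00:00",
             "2024-1-2 09:00:00", "2024-1-2 15:00:00", "2024-1-2 21:00:00"],
    'assists': [5, 7, 7, 9, 12, 9],
})

df['time'] = pd.to_datetime(df['time'])           # strings -> Timestamps
subset = df.set_index('time').between_time('12:00', '22:00')

# Or without touching the index, via the .dt accessor:
mask = df['time'].dt.time.between(pd.Timestamp('12:00').time(),
                                  pd.Timestamp('22:00').time())
subset2 = df[mask]
```

Both forms select the 15:00 and 21:00 rows of each day; the second keeps the original integer index if that matters downstream.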
|
<python><pandas><dataframe><datetime>
|
2024-01-15 09:24:31
| 2
| 399
|
Allure
|
77,818,705
| 10,003,538
|
Django admin with Nullable ForeignKey in Model but why required in admin page?
|
<p>I am using Django and Python to create a custom admin interface for a model named ContentProgress. The ContentProgress model has a nullable ForeignKey field named custom_user_content.</p>
<p>However, when I try to save or edit a ContentProgress instance in the Django admin without providing a value for custom_user_content, I encounter an error stating "This field is required for Custom user Content."</p>
<pre><code># Admin class
class ContentProgressAdmin(admin.ModelAdmin, ExportCsvMixin):
list_display = ['user', 'lesson', 'in_percent', 'in_seconds', 'done', 'created_at', 'updated_at']
fields = ['user', 'lesson', 'in_percent', 'in_seconds', 'done', 'real_seconds_listened', 'custom_user_content']
search_fields = ['user__username', 'lesson__name', 'user__email']
list_filter = ['done', ('custom_user_content', RelatedDropdownFilter), ('user__company', RelatedDropdownFilter)]
autocomplete_fields = ['user', 'lesson', 'custom_user_content']
actions = ['export_as_csv']
admin.site.register(ContentProgress, ContentProgressAdmin)
# Model
class ContentProgress(Progress):
lesson = models.ForeignKey(MediaFile, on_delete=models.CASCADE)
custom_user_content = models.ForeignKey(CustomUserContent, null=True, on_delete=models.CASCADE)
def __str__(self):
return f'{self.lesson.name} - {int(self.in_percent * 100)}%'
</code></pre>
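A likely fix (hedged, since only the model excerpt is shown): form-level requiredness in the Django admin is controlled by `blank=...`, while `null=True` only allows NULL at the database level. Declaring both makes the field optional in admin forms:

```python
custom_user_content = models.ForeignKey(
    CustomUserContent,
    null=True,          # the database column may store NULL
    blank=True,         # admin/model forms may leave it empty
    on_delete=models.CASCADE,
)
```

After adding `blank=True`, no migration is strictly required for validation to change (it is a form-level flag), though `makemigrations` will still record the altered field definition.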
<p>Despite setting null=True for the custom_user_content field, I still face validation issues in the Django admin.</p>
<p>How can I properly handle nullable ForeignKey fields in the admin interface to avoid this error?</p>
|
<python><django>
|
2024-01-15 09:14:06
| 1
| 1,225
|
Chau Loi
|