QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName
|---|---|---|---|---|---|---|---|---|
77,663,428
| 1,447,953
|
Python search for modules on sys.path only, ignoring already loaded modules, for existence check purposes
|
<p>I am looking for a solution to the following problem. I have a package that includes some build tools (for use with SCons, though that is just incidental to the question). I want the package to be able to use these tools itself straight from source before it is installed, so I do the following in the main build file (SConstruct file):</p>
<pre><code>sys.path.insert(0, "path_to_my_package")
import my_package
</code></pre>
<p>Then great, my tools can be used in the build/install steps of <code>my_package</code> itself (putting it at the front of <code>sys.path</code> makes Python preferentially import it over any other version of <code>my_package</code> that might happen to be installed in the environment).</p>
<p>The problem comes in that one of the tools does a "local install" of the package, basically running <code>conda develop my_package</code>. Unfortunately, if some version of <code>my_package</code> is already installed in the environment, it will be imported preferentially to the local copy "installed" via <code>conda develop</code>. So I want to detect whether <code>my_package</code> is installed and tell the user to uninstall it before doing the "local install".</p>
<p>However, when talking about "local install"ing <code>my_package</code> itself, I run into the problem that in the build scripts I added the package to the path and imported it, so all the normal ways for detecting if a package is installed will think that the package IS installed, because it kind of is within the current Python session. E.g. if I run something like</p>
<pre><code>importlib.util.find_spec("my_package")
</code></pre>
<p>then it will find the copy I am using directly from source, which is not what I care about. I want to check if there is <em>any other</em> version of that package already installed in my environment.</p>
<p>I could do something like this:</p>
<pre><code>sys.path = sys.path[1:] # remove the local source from the path
importlib.reload(my_package) # make Python look again for the module
</code></pre>
<p>and check for import errors, which does work, but I am a little worried about side effects it will have on the version of <code>my_package</code> I already loaded the first time. I'd rather do something like with <code>find_spec</code> where I can look for packages without actually loading them or messing with the existing <code>sys.modules</code>. But I don't know any way to get <code>find_spec</code> to properly search the path for <code>my_package</code> again. It just always returns the package that is already imported, regardless of what I do to <code>sys.path</code>. I guess it looks in some cache or at sys.modules or whatever first.</p>
<p>So, is there some way to search the system for modules totally from scratch, ignoring everything that is currently in the Python session?</p>
<p>I guess I can do something kind of drastic like spawn a fresh Python session as a subprocess via a system call and do the <code>find_spec</code> check there. I suppose there is nothing terribly wrong with that, it just feels like there should be a better way to do this...</p>
<p>My other crazy idea was to copy <code>my_package</code> to a temporary directory under a different package name, say <code>my_package_2</code> and import it from there. It's a simple package so this should be fine. Then when I later search for "properly installed" versions of <code>my_package</code> I can do a "normal" search without worrying about collisions. The subprocess thing might be simpler though...</p>
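One way to search a path from scratch, without consulting <code>sys.modules</code> or the already-imported copy, is <code>importlib.machinery.PathFinder</code>, whose <code>find_spec()</code> takes an explicit list of path entries and walks them directly. A sketch of the idea (the package name and the excluded directory are placeholders from the question):

```python
import importlib
import importlib.machinery
import sys

def find_installed(name, search_path):
    """Search `search_path` directly for `name`, ignoring sys.modules.

    PathFinder.find_spec() walks the given path entries itself, so the
    already-imported local copy of the package is never consulted.
    """
    importlib.invalidate_caches()  # drop stale per-directory finder caches
    return importlib.machinery.PathFinder.find_spec(name, search_path)

# e.g. search everything on sys.path *except* the local source directory:
spec = find_installed("my_package",
                      [p for p in sys.path if p != "path_to_my_package"])
```

If `spec` is not `None`, some other copy of the package is reachable and the user can be told to uninstall it before the "local install".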
|
<python><python-importlib>
|
2023-12-14 22:42:46
| 1
| 2,974
|
Ben Farmer
|
77,663,325
| 21,612,376
|
Render a django template without a django app
|
<p>I am very new to Django. I have the following Django template, and I would like to render it to display the username variable. How can I do this without setting up an entire Django app structure?</p>
<pre><code>&lt;html&gt;
&lt;body&gt;
hello &lt;strong&gt;{{username}}&lt;/strong&gt;
your account is activated.
&lt;/body&gt;
&lt;/html&gt;
</code></pre>
<p>I've seen <a href="https://stackoverflow.com/questions/32442893/render-django-template-from-command-line-without-settings">this question</a>, but when I run the code there, I get an error message:</p>
<pre><code>ImproperlyConfigured: No DjangoTemplates backend is configured.
</code></pre>
|
<python><django>
|
2023-12-14 22:10:45
| 1
| 833
|
joshbrows
|
77,663,249
| 13,376,660
|
Is there a way to refactor this try/except block to avoid DRY violation?
|
<p>This is part of a larger function using Selenium with Python for a web automation script. The website sometimes serves a popup, which results in an <code>ElementClickInterceptedException</code> but the popup isn't always served; sometimes there is no popup.</p>
<p>I wrote this function to close the popup if it's served:</p>
<pre><code>def close_popup():
    try:
        WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.XPATH, '//x-button'))).click()
        time.sleep(1)
    except NoSuchElementException:
        print("No pop-up found")
</code></pre>
<p>Then I wrote a try/except block in the main section which calls the function:</p>
<pre><code>try:
    WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.LINK_TEXT, "Click here!"))).click()
except ElementClickInterceptedException:
    close_popup()
    WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.LINK_TEXT, "Click here!"))).click()
</code></pre>
<p>This does work as expected, but it violates DRY (Don't Repeat Yourself), since the line after <code>try:</code> is the same as the last line of the block.</p>
<p>I did search first, but did not find any existing question exactly like mine; those I found were asking about some error in the code. My code does work, it just violates DRY.</p>
<p>How would I refactor so that my code is Pythonic and does not violate DRY?</p>
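One generic shape for this, independent of Selenium (a sketch; `retry_after` is a made-up helper name), is to factor the repeated click into a callable and retry it once after recovery:

```python
def retry_after(action, exception, recover):
    """Run `action`; if `exception` is raised, run `recover` and try once more."""
    try:
        return action()
    except exception:
        recover()
        return action()

# In the Selenium script this would be used roughly like:
# click = lambda: WebDriverWait(driver, 10).until(
#     EC.presence_of_element_located((By.LINK_TEXT, "Click here!"))).click()
# retry_after(click, ElementClickInterceptedException, close_popup)

# Self-contained demonstration with a stand-in action:
calls = {"n": 0, "recovered": False}

def flaky():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ValueError("intercepted")  # stands in for the popup exception
    return "clicked"

result = retry_after(flaky, ValueError,
                     lambda: calls.__setitem__("recovered", True))
print(result)  # "clicked"
```

The repeated line now exists once, and the try/except reads as "do it, recover, do it again".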
|
<python><refactoring><repeat><dry><try-except>
|
2023-12-14 21:52:39
| 1
| 327
|
dbaser
|
77,663,162
| 10,499,034
|
How to install Pandas when PIP fails due to metadata generation error?
|
<p>I am trying to install Pandas in a standalone installation of Python 3.12 and I keep getting a "metadata-generation-failed" error. I am able to install Pandas in my Anaconda installation, but I need this one to be totally separate because I am using it to connect to VB.NET. I have updated pip and setuptools and I get the same error. How can I get Pandas installed?</p>
<p><a href="https://i.sstatic.net/lMrJs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lMrJs.png" alt="enter image description here" /></a></p>
<p>The install command and the output are as follows:</p>
<pre><code>C:\Program Files (x86)\Python312-32\Scripts>pip install pandas
Defaulting to user installation because normal site-packages is not writeable
Collecting pandas
Using cached pandas-2.1.4.tar.gz (4.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [23 lines of output]
+ meson setup C:\Users\realt\AppData\Local\Temp\pip-install-ew6o36bq\pandas_25d69bcb06c84eb088749c1946920e14 C:\Users\realt\AppData\Local\Temp\pip-install-ew6o36bq\pandas_25d69bcb06c84eb088749c1946920e14\.mesonpy-swd4lt0x\build -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --vsenv --native-file=C:\Users\realt\AppData\Local\Temp\pip-install-ew6o36bq\pandas_25d69bcb06c84eb088749c1946920e14\.mesonpy-swd4lt0x\build\meson-python-native-file.ini
The Meson build system
Version: 1.2.1
Source dir: C:\Users\realt\AppData\Local\Temp\pip-install-ew6o36bq\pandas_25d69bcb06c84eb088749c1946920e14
Build dir: C:\Users\realt\AppData\Local\Temp\pip-install-ew6o36bq\pandas_25d69bcb06c84eb088749c1946920e14\.mesonpy-swd4lt0x\build
Build type: native build
Project name: pandas
Project version: 2.1.4
Activating VS 17.8.3
C compiler for the host machine: cl (msvc 19.38.33133 "Microsoft (R) C/C++ Optimizing Compiler Version 19.38.33133 for x64")
C linker for the host machine: link link 14.38.33133.0
C++ compiler for the host machine: cl (msvc 19.38.33133 "Microsoft (R) C/C++ Optimizing Compiler Version 19.38.33133 for x64")
C++ linker for the host machine: link link 14.38.33133.0
Cython compiler for the host machine: cython (cython 0.29.36)
Host machine cpu family: x86_64
Host machine cpu: x86_64
Program python found: YES (C:\Program Files (x86)\Python312-32\python.exe)
Need python for x86_64, but found x86
Run-time dependency python found: NO (tried sysconfig)
..\..\pandas\_libs\tslibs\meson.build:23:7: ERROR: Python dependency not found
A full log can be found at C:\Users\realt\AppData\Local\Temp\pip-install-ew6o36bq\pandas_25d69bcb06c84eb088749c1946920e14\.mesonpy-swd4lt0x\build\meson-logs\meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
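The line <code>Need python for x86_64, but found x86</code> in the log points at the likely root cause: this is a 32-bit Python (<code>Python312-32</code>), pandas 2.x does not ship 32-bit Windows wheels, so pip falls back to building from source and Meson then finds a 64-bit toolchain but a 32-bit interpreter. A quick check of the interpreter's bitness (a small sketch):

```python
import struct
import sys

# Pointer size in bits: 32 for a 32-bit interpreter, 64 for a 64-bit one.
bits = struct.calcsize("P") * 8
print(sys.version)
print(f"{bits}-bit interpreter")
```

If it reports 32-bit, installing a 64-bit Python 3.12 should let pip pick up a prebuilt pandas wheel and avoid the metadata error entirely.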
|
<python><pandas><pip>
|
2023-12-14 21:30:43
| 1
| 792
|
Jamie
|
77,662,999
| 6,083,675
|
Why don't API calls to /search get all matching results?
|
<p>I'm trying to find rows in some HuggingFace datasets which have a certain word or phrase in a specific column (e.g., "output"). I've written some basic Python code to do this, but I noticed an issue at the API level when trying to get a slice of the data that contains every row with one of the words from my phrase. (As I understand it, with the API you can't even search for rows that contain both words—only OR searching.)</p>
<p>Specifically, when making calls to the <a href="https://huggingface.co/docs/datasets-server/search" rel="nofollow noreferrer">HuggingFace Datasets-server API</a> <code>/search</code> endpoint:</p>
<ul>
<li><p>A search for <em>used</em>, a word which we'd expect to see frequently in the data: <a href="https://datasets-server.huggingface.co/search?dataset=teknium%2FGPT4-LLM-Cleaned&config=default&split=train&offset=0&length=100&query=used" rel="nofollow noreferrer">https://datasets-server.huggingface.co/search?dataset=teknium%2FGPT4-LLM-Cleaned&config=default&split=train&offset=0&length=100&query=used</a></p>
<ul>
<li>Returns very few results (14 rows)</li>
<li>Searching for <em>use</em> instead returns the same results</li>
</ul>
</li>
<li><p>A search for <em>spices</em>, an arbitrary less common word: <a href="https://datasets-server.huggingface.co/search?dataset=teknium%2FGPT4-LLM-Cleaned&config=default&split=train&&offset=0&length=100&query=spices" rel="nofollow noreferrer">https://datasets-server.huggingface.co/search?dataset=teknium%2FGPT4-LLM-Cleaned&config=default&split=train&&offset=0&length=100&query=spices</a></p>
<ul>
<li>Returns more rows containing <em>used</em> (or <em>use</em>) than the first API call did (I didn't count how many rows that is, but it certainly returns new rows that weren't in the first query)</li>
</ul>
</li>
</ul>
<p>I thought these searches would return all the rows with the specified word in any column (and, as a bonus, stemmed forms, e.g. <em>used</em> for <em>use</em>). (Note that "num_rows_total" at the end shows how many total rows are found for each search, but the amount per page is limited to 100.) However, as you can see, I didn't get all the rows containing "used" in the first search, so I wonder whether any of the searches made through this type of call actually return all the relevant results (which is essential).</p>
<p>I've noticed this same type of behavior with other datasets too. Often no rows are returned for searches that I know should return rows. They're all large datasets (10k to 1m rows) which were "Auto-converted to Parquet" according to their listings on the HuggingFace website.</p>
<p>I'd like to avoid looping over every row in the entire dataset for every phrase that I decide to look up this way, and I don't want to download these datasets either. Is there a modification that I can make to the API call to make it work? Or maybe a different call would work better?</p>
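Since the response includes <code>num_rows_total</code> and pages are capped at 100 rows, the matches for one query can at least be collected completely by paging <code>offset</code> until the total is reached. A sketch (the <code>fetch</code> parameter is a stand-in for the actual HTTP call, e.g. <code>requests.get</code> against the <code>/search</code> URL with the <code>offset</code> and <code>length</code> query parameters):

```python
def collect_all(fetch, page_size=100):
    """Page through /search results via the `offset` parameter.

    `fetch(offset, length)` is assumed to return the decoded JSON dict,
    which (per the responses shown by the API) contains the keys
    "rows" and "num_rows_total".
    """
    rows, offset, total = [], 0, None
    while total is None or offset < total:
        page = fetch(offset, page_size)
        total = page["num_rows_total"]
        if not page["rows"]:
            break
        rows.extend(page["rows"])
        offset += page_size
    return rows

# Demonstration with a fake 250-row result set:
fake = [{"row_idx": i} for i in range(250)]
result = collect_all(lambda o, n: {"rows": fake[o:o + n],
                                   "num_rows_total": len(fake)})
print(len(result))  # 250
```

This addresses completeness per query; whether the server-side full-text index itself misses rows is a separate question for the API.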
|
<python><huggingface-datasets>
|
2023-12-14 20:49:42
| 1
| 6,231
|
Laurel
|
77,662,912
| 968,132
|
Pandas dataframes sorting/partitioning within multi-index
|
<p>I have a pandas dataframe like such:</p>
<pre><code>df = pd.DataFrame({
    'class': ['Opn', 'Opn', 'MA', 'CoNo', 'Opn'],
    'title': ['Title1', 'Title1', 'Title2', 'Title3', 'Title2'],
    'event_count': [16, 11, 8, 7, 5]
})
</code></pre>
<p>How can I sort values descending within class and then within title so the output would look like this:</p>
<p><a href="https://i.sstatic.net/ouArv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ouArv.png" alt="enter image description here" /></a></p>
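The screenshot isn't reproduced here, but assuming the intent is "largest classes first, then largest titles within each class, then largest counts", one sketch is to sort on group totals computed with `transform`:

```python
import pandas as pd

df = pd.DataFrame({
    'class': ['Opn', 'Opn', 'MA', 'CoNo', 'Opn'],
    'title': ['Title1', 'Title1', 'Title2', 'Title3', 'Title2'],
    'event_count': [16, 11, 8, 7, 5]
})

# Rank classes by their total event_count, then titles within each class,
# then individual rows; drop the helper columns afterwards.
class_total = df.groupby('class')['event_count'].transform('sum')
title_total = df.groupby(['class', 'title'])['event_count'].transform('sum')
out = (df.assign(_ct=class_total, _tt=title_total)
         .sort_values(['_ct', '_tt', 'event_count'], ascending=False)
         .drop(columns=['_ct', '_tt'])
         .reset_index(drop=True))
print(out)
```

If the screenshot instead shows plain alphabetical grouping, a simple `df.sort_values(['class', 'title', 'event_count'], ascending=[True, True, False])` would be enough.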
|
<python><pandas>
|
2023-12-14 20:31:01
| 2
| 1,148
|
Peter
|
77,662,696
| 12,827,931
|
Speed up iterative algorithm in Python
|
<p>I'm implementing the forward recursion for Hidden Markov Model: the necessary steps (a-b) are shown here</p>
<p><a href="https://i.sstatic.net/hJdzQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hJdzQ.jpg" alt="enter image description here" /></a></p>
<p>Below is my implementation</p>
<pre><code>from scipy.stats import norm
import numpy as np
import random

def gmmdensity(obs, w, mu, sd):
    # compute probability density function of gaussian mixture
    gauss_mixt = norm.pdf(obs, mu, sd)[:,None]*w
    return gauss_mixt

def alpha(obs, states, A, pi, w, mu, sigma):
    dens = np.sum(gmmdensity(obs, w, mu, sigma), axis = 2)
    # scaling factor is used to renormalize probabilities in order to
    # avoid numerical underflow
    scaling_factor = np.ones(len(obs))
    alpha_matrix = np.zeros((len(states), len(obs)))
    # for t = 0
    alpha_matrix[:,0] = pi*dens[0]
    scaling_factor[0] = 1/np.sum(alpha_matrix[:,0], axis = 0)
    alpha_matrix[:,0] *= scaling_factor[0]
    # for t == 1:T
    for t in range(1, len(obs)):
        alpha_matrix[:,t] = np.matmul(alpha_matrix[:,t-1], A)*dens[t]
        scaling_factor[t] = 1/np.sum(alpha_matrix[:,t], axis = 0)
        alpha_matrix[:,t] *= scaling_factor[t]
    return alpha_matrix, scaling_factor
</code></pre>
<p>Let's generate some data to run the algorithm</p>
<pre><code>obs = np.concatenate((np.random.normal(0, 1, size = 500),
                      np.random.normal(1.5, 1, size = 500))).reshape(-1,1)
N = 2 # number of hidden states
M = 3 # number of mixture components
states = list(range(N))
pi = np.array([0.5, 0.5]) # initial probabilities
A = np.array([[0.8, 0.2], [0.3, 0.7]]) # transition matrix
mu = np.array([np.min(obs), np.median(obs), np.max(obs)]) # means of mixture components
sigma = np.array([1, 1, 1]) # variances of mixture components
w = np.array([[0.2, 0.3, 0.5], [0.6, 0.2, 0.2]]) # weights of mixture components
</code></pre>
<p>Let's see how fast the algorithm is</p>
<pre><code>%timeit alpha(obs, states, A, pi, w, mu, sigma)
13.6 ms ± 1.24 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>Is there any possibility to make this code faster? I thought about using numba or cython, but it never fully worked in this case.</p>
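For reference, the recursion itself reduces to one matrix product and one normalisation per step. This is an equivalent sketch of the same scaled forward pass (a rewrite of the code above, not a new algorithm); the remaining loop is inherently sequential over <code>t</code>, which is exactly the kind of loop where <code>numba.njit</code> tends to pay off once the density matrix is precomputed outside it:

```python
import numpy as np

def forward_scaled(dens, A, pi):
    """Scaled HMM forward pass; dens[t, i] = b_i(o_t)."""
    T, N = dens.shape
    alpha = np.empty((T, N))
    c = np.empty(T)
    a = pi * dens[0]
    c[0] = 1.0 / a.sum()
    alpha[0] = a * c[0]
    for t in range(1, T):
        a = (alpha[t - 1] @ A) * dens[t]   # one matmul per step
        c[t] = 1.0 / a.sum()               # one normalisation per step
        alpha[t] = a * c[t]
    return alpha, c

# Quick check on a toy 2-state chain: every scaled alpha row sums to 1.
rng = np.random.default_rng(0)
dens = rng.uniform(0.1, 1.0, size=(50, 2))
A = np.array([[0.8, 0.2], [0.3, 0.7]])
alpha, c = forward_scaled(dens, A, np.array([0.5, 0.5]))
print(np.allclose(alpha.sum(axis=1), 1.0))
```

With N=2 the per-step arrays are tiny, so Python/NumPy call overhead dominates; that overhead is what a compiled loop removes.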
|
<python><numpy>
|
2023-12-14 19:46:25
| 3
| 447
|
thesecond
|
77,662,322
| 12,708,740
|
Groupby merge values from one dataframe to the other
|
<p>I have a dataframe of "words", and I would like to check which words were included by each person in this dataframe, by comparison to a word-list key.</p>
<p>Example data:</p>
<pre><code>df = pd.DataFrame({
    'person': [1, 1, 1, 2, 3, 4, 4, 4, 4],
    'word': ['apple', 'orange', 'pear', 'apple', 'grape', 'orange', 'apple', 'pear', 'berry'],
    'count': [1, 1, 1, 1, 1, 1, 1, 1, 1]
})
word_list = ['apple', 'orange', 'pear', 'berry', 'grape']
word_df = pd.DataFrame({'word': word_list})
</code></pre>
<p>For example, person 1 only included <code>apple</code>, <code>orange</code>, and <code>pear</code>, and did not include <code>berry</code> and <code>grape</code>. I know how to <code>pd.merge</code> values, but I have not been able to successfully map the values of <code>word_df["word"]</code> onto something like <code>df.groupby("person")</code> so that I can check what <em>each</em> person chose to include.</p>
<p>Desired output:</p>
<pre><code>result_df = pd.DataFrame({
    'person': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4],
    'word': ['apple', 'orange', 'pear', 'berry', 'grape',
             'apple', 'orange', 'pear', 'berry', 'grape',
             'apple', 'orange', 'pear', 'berry', 'grape',
             'orange', 'apple', 'pear', 'berry', 'grape'],
    'count': [1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0]
})
</code></pre>
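A sketch of one way to get there: build the full person × word grid with `MultiIndex.from_product`, then `reindex` the observed counts onto it (row order follows the word list rather than the per-person order shown in the desired output, but the content matches):

```python
import pandas as pd

df = pd.DataFrame({
    'person': [1, 1, 1, 2, 3, 4, 4, 4, 4],
    'word': ['apple', 'orange', 'pear', 'apple', 'grape', 'orange', 'apple', 'pear', 'berry'],
    'count': [1, 1, 1, 1, 1, 1, 1, 1, 1]
})
word_list = ['apple', 'orange', 'pear', 'berry', 'grape']

# Every (person, word) combination; missing observations become 0.
full = pd.MultiIndex.from_product(
    [sorted(df['person'].unique()), word_list], names=['person', 'word'])
out = (df.set_index(['person', 'word'])['count']
         .reindex(full, fill_value=0)
         .reset_index())
print(out.head())
```

The same result can be reached with a cross merge of persons and `word_df` followed by a left merge of the observed counts; `reindex` just keeps it to one step.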
|
<python><pandas><dataframe>
|
2023-12-14 18:22:59
| 1
| 675
|
psychcoder
|
77,662,162
| 1,767,557
|
How to enable CUDA for Huggingface Trainer on Windows?
|
<p>I am trying to use the Trainer from the transformers library from HuggingFace in Python:</p>
<pre><code>from transformers import Seq2SeqTrainingArguments
from transformers import Seq2SeqTrainer
# ...
training_args = Seq2SeqTrainingArguments(fp16=True,
# ...
trainer = Seq2SeqTrainer( args=training_args,
# ...
</code></pre>
<p>I get this error message:</p>
<blockquote>
<p>ValueError: FP16 Mixed precision training with AMP or APEX (<code>--fp16</code>)
and FP16 half precision evaluation (<code>--fp16_full_eval</code>) can only be
used on CUDA or NPU devices or certain XPU devices (with IPEX).</p>
</blockquote>
<p>It seems like I am missing some CUDA installation, but I can't figure out what exactly I need. I tried (without success):</p>
<pre><code>py -m pip install --upgrade setuptools pip wheel
py -m pip install nvidia-pyindex
py -m pip install nvidia-cuda-runtime-cu12
py -m pip install nvidia-nvml-dev-cu12
py -m pip install nvidia-cuda-nvcc-cu12
</code></pre>
<p>System info:</p>
<ul>
<li>Windows 11 Build 22621</li>
<li>Python 3.11.7, running inside a venv</li>
<li>Geforce RTX 4070</li>
</ul>
<p>Thanks for any ideas!</p>
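The `nvidia-*` pip packages above don't give PyTorch CUDA support; on Windows the usual cause of this error is a CPU-only `torch` build having been installed. The common fix is reinstalling torch from PyTorch's CUDA wheel index, e.g. `py -m pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu121` (the `cu…` suffix depends on the installed driver; `cu121` is only an example). A small diagnostic sketch:

```python
import importlib.util

def torch_cuda_status():
    """Report whether torch is importable and whether it sees a CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if torch.cuda.is_available():
        return f"CUDA OK: {torch.cuda.get_device_name(0)}"
    return f"torch {torch.__version__} is a CPU-only build or the driver is missing"

print(torch_cuda_status())
```

CPU-only builds typically show a version like `2.x.y+cpu`, which is a quick way to confirm the diagnosis.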
|
<python><huggingface-transformers>
|
2023-12-14 17:49:32
| 1
| 918
|
Frank im Wald
|
77,661,931
| 12,300,981
|
Matplotlib increase padding on cells for table when doing tight fit
|
<p>I like the current dimensions, font size, and look of my table. However, I'd like to increase the height of each cell a little (more top/bottom padding, so it looks a bit prettier and less squished), but I don't quite know how to do that.</p>
<pre><code>import matplotlib.pyplot as plt

columns=['test', 'test', 'test', 'test', 'test', 'test', 'test']
rows=['test', 'test', 'test', 'test']
data=[['12.103700223192147', '125.7421234247654', '2833.324510310368', '3.3228806335297043', '103.23943179349797', '3.5559560514887627', '3.531198576041594'], ['312.22548840603355', '335.1632073701661', '688.6576628644414', '2.0598429919314976', '18.362768351047666', '1.3152545139887966', '1.3155809229679902'], ['461.0540288130374', '484.64756706242093', '757.5346438101953', '5.1333110653922684', '15.039352280544852', '1.5898237491783822', '1.5902652486082376'], ['263.82124695572776', '313.264919044118', '276.03008066454083', '140.61329994689453', '7.034269824508681', '6.71122466853062', '4.613448380627191']]
wpad,hpad=0.1,0.1
fig = plt.figure(figsize=(len(columns)*3+wpad, len(rows)*3+hpad))
tab=plt.table(data,rowLabels=rows,colLabels=columns,cellLoc='center',loc='top')
tab.auto_set_font_size(False)
tab.auto_set_column_width(col=list(range(len(columns))))
plt.axis('off')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/8Fqoy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Fqoy.png" alt="enter image description here" /></a></p>
<p>As can be observed (ignoring the whitespace), the width is good, but a little squished. I'd like to give it a little bit of "breathing" room (e.g. 0.1in).</p>
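Two knobs add vertical room without touching the column widths (a sketch; `Table.scale(1, yscale)` leaves widths alone and stretches every row, while `Cell.set_height` pads cells individually):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
tab = ax.table(cellText=[["a", "b"], ["c", "d"]], loc="center")
before = tab.get_celld()[(0, 0)].get_height()

tab.scale(1, 1.5)  # widths unchanged, every row 50% taller

# Alternatively, pad each cell by an absolute amount (axes coordinates):
# for cell in tab.get_celld().values():
#     cell.set_height(cell.get_height() + 0.02)

after = tab.get_celld()[(0, 0)].get_height()
ax.axis("off")
print(before, after)
```

Applied to the original snippet, `tab.scale(1, 1.5)` right after `auto_set_column_width` should give the "breathing room" while keeping the tight column fit.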
|
<python><matplotlib>
|
2023-12-14 17:09:22
| 1
| 623
|
samman
|
77,661,574
| 1,473,517
|
why is numba 25% slower when using dicts
|
<p>I am interested in the performance of numba when using dicts. I did the following experiment:</p>
<pre><code>from numpy.random import randint
import numba as nb

@nb.njit
def foo_numba(a, b, c):
    N = 100**2
    d = {}
    for i in range(N):
        d[(randint(N), randint(N), randint(N))] = (a, b, c)
    return d

@nb.njit
def test_numba(numba_dict):
    s = 0
    for k in numba_dict:
        s += numba_dict[k][2]
    return s

def foo(a, b, c):
    N = 100**2
    d = {}
    for i in range(N):
        d[(randint(N), randint(N), randint(N))] = (a, b, c)
    return d

def test(numba_dict):
    s = 0
    for k in numba_dict:
        s += numba_dict[k][2]
    return s

a = randint(10, size=10)
b = randint(10, size=10)
c = 1.3

t_numba = foo_numba(a, b, c)
dummy = test_numba(t_numba)
%timeit test_numba(t_numba)

t = foo(a, b, c)
%timeit test(t)
</code></pre>
<p>To my surprise, the output I get is:</p>
<pre><code>870 µs ± 6.36 µs per loop (mean ± std. dev. of 10 runs, 1,000 loops each)
654 µs ± 35.8 µs per loop (mean ± std. dev. of 10 runs, 1,000 loops each)
</code></pre>
<p>Why is the numba code so much slower than the CPython code? Am I using numba incorrectly?</p>
<p>The slowdown disappears completely if you do either <code>a = tuple(a)</code> or <code>b = tuple(b)</code> (there seems to be no need to do both!).</p>
|
<python><numba>
|
2023-12-14 16:14:06
| 1
| 21,513
|
Simd
|
77,661,537
| 4,670,852
|
Why does python-vlc (libvlc) drop frames in fullscreen on a Raspberry Pi 4 B?
|
<p>The fullscreen video skips frames, and thus the stream gets corrupted (some <a href="https://shopdelta.eu/h-264-image-coding-standard_l2_aid734.html" rel="nofollow noreferrer">I-frames</a> are dropped, leading to incomplete reconstruction of the video). This is with the following setup:</p>
<pre><code>Raspberry Pi 4B
Python 3.11.2
python-vlc 3.0.20123
</code></pre>
|
<python><libvlc><python-vlc>
|
2023-12-14 16:08:11
| 1
| 406
|
Lupilum
|
77,661,453
| 4,670,852
|
Why is python-vlc playing a video in its native resolution (not fullscreen) even though I set fullscreen to True? This is on Raspberry Pi
|
<p>I'm running on</p>
<pre><code>Raspberry Pi 4 B
Python 3.11.2
python-vlc 3.0.20123
</code></pre>
<p>The video opens in the left corner of the screen without a window frame, but not in fullscreen. I'm not using any GUI library like <code>PyQt5</code>, just the native VLC player.</p>
<p>I initialize the player like so:</p>
<pre><code>MEDIA_PLAYER = vlc.MediaPlayer()
</code></pre>
<p>How can this be fixed? I've spent hours on this, trying many different solutions, but without any luck. Then again, I didn't find a post matching this behaviour, so no wonder the solutions didn't work.</p>
|
<python><raspberry-pi><libvlc><python-vlc>
|
2023-12-14 15:54:38
| 1
| 406
|
Lupilum
|
77,661,426
| 1,514,114
|
Paramiko request_port_forward with and without handler
|
<h2>TLDR:</h2>
<p>I can't read data from incoming paramiko port-forwarding connections when using callbacks.
See this gist for a demonstration:<br />
<a href="https://gist.github.com/tatome/0d3e6479f35b25bbb31c9f94610eab6b" rel="nofollow noreferrer">https://gist.github.com/tatome/0d3e6479f35b25bbb31c9f94610eab6b</a></p>
<p><em>The gist also demonstrates the solution to the problem.</em></p>
<h2>Detailed Question:</h2>
<p>There's an example script that demonstrates how to use paramiko to set up a reverse tunnel <a href="https://github.com/paramiko/paramiko/blob/main/demos/rforward.py" rel="nofollow noreferrer">here</a>.</p>
<p>The script requests port forwarding and then keeps <code>accept</code>ing connections in the main thread, and delegates handling these connections to another thread:</p>
<pre class="lang-py prettyprint-override"><code>transport.request_port_forward("", server_port)
while True:
    chan = transport.accept(1000)
    if chan is None:
        continue
    thr = threading.Thread(
        target=handler, args=(chan, remote_host, remote_port)
    )
</code></pre>
<p>Unfortunately, with this method, we don't know which port an incoming connection accessed, so we can't have multiple tunnels on different server ports that we treat differently.</p>
<p>However, we can pass a <code>handler</code> argument to <code>request_port_forward</code>, and <code>handler</code> does get the host name and port that the client accessed:</p>
<pre class="lang-py prettyprint-override"><code>def handler(channel, source_address, target_address):
    target_host, target_port = target_address
    ...

client.get_transport().request_port_forward('', port, handler)
</code></pre>
<p><strong>Here's the problem:</strong> For some reason, the channel retrieved via <code>transport.accept()</code> and the channel passed to <code>handler</code> behave differently for me.</p>
<p>When I do this:</p>
<pre class="lang-py prettyprint-override"><code>client.connect(
    'localhost',
    22,
    username=USER,
    look_for_keys=True
)

def handler(channel, source_address, target_address):
    print("Selecting...")
    selects = select.select([channel],[],[])
    print("Selected: " + str(selects))

transport = client.get_transport()
p = transport.request_port_forward('', 30000)
print(p)
channel = transport.accept(1000)
handler(channel, None, None)
</code></pre>
<p>and then, on a shell:</p>
<pre class="lang-bash prettyprint-override"><code>wget localhost:30000
</code></pre>
<p>I get this:</p>
<pre class="lang-none prettyprint-override"><code>30000
Selecting...
Selected: ([&lt;paramiko.Channel 0 (open) window=2097152 in-buffer=130 -&gt; &lt;paramiko.Transport at 0x43b80fd0 (cipher aes128-ctr, 128 bits) (active; 1 open channel(s))&gt;&gt;], [], [])
</code></pre>
<p>If I pass the handler to <code>request_port_forward</code>, however, like this:</p>
<pre class="lang-py prettyprint-override"><code>def handler(channel, source_address, target_address):
    print("Selecting...")
    selects = select.select([channel],[],[])
    print("Selected: " + str(selects))

transport = client.get_transport()
transport.request_port_forward('', 30000, handler)
</code></pre>
<p>I get this:</p>
<pre><code>Selecting...
</code></pre>
<p>and when I debug the script it will stop in the line that says <code>selects = select.select([channel],[],[])</code>.</p>
<p><code>wget</code> says it sent the HTTP request, but my script never reads anything from the connection.</p>
<p><strong>Questions:</strong></p>
<ul>
<li>What am I doing wrong?</li>
<li>Why does <code>select.select([channel])</code> do something different depending on whether the channel is returned by <code>transport.accept()</code> or passed to the handler?</li>
<li>How can you use the same transport for multiple different reverse tunnels?</li>
</ul>
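The gist's fix boils down to one observation: the `handler` callback runs on paramiko's own transport thread, so blocking in `select` there stalls the very thread that would feed the channel its data. Handing the channel straight off to a worker thread keeps the transport responsive. A minimal sketch (`serve` stands in for the real relay loop):

```python
import threading

def serve(channel, target_address):
    # Stand-in for the real work: read from `channel`, forward bytes, etc.
    served.append((channel, target_address))

def handler(channel, source_address, target_address):
    # Called on paramiko's transport thread: do NOT select/read here.
    t = threading.Thread(target=serve, args=(channel, target_address),
                         daemon=True)
    t.start()
    return t

# Demonstration with a dummy channel object:
served = []
t = handler(object(), None, ("localhost", 30000))
t.join(timeout=5)
print(len(served))  # 1
```

This also answers the multi-tunnel question: one transport, one `request_port_forward` per port, all sharing a handler that dispatches on `target_address`.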
|
<python><ssh><paramiko><ssh-tunnel>
|
2023-12-14 15:51:55
| 1
| 548
|
Johannes Bauer
|
77,661,424
| 15,416
|
How to annotate function attributes?
|
<p><a href="https://peps.python.org/pep-0232/" rel="nofollow noreferrer">PEP 232</a> defines function attributes, and <a href="https://peps.python.org/pep-0484/" rel="nofollow noreferrer">PEP 484</a> defines annotations, but I can't find how the two should be combined.</p>
<p>E.g.</p>
<pre><code>def foo(s: str):
    try:
        print(foo.cache[s])
    except Exception:
        print('NEW')
        foo.cache[s] = 'CACHE' + s

foo.cache = {}
</code></pre>
<p>How can I annotate <code>foo.cache</code> inside <code>foo()</code>, before using it?</p>
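Type checkers have no dedicated syntax for annotating function attributes; one common workaround (a sketch) is to describe "a callable that also carries a `cache` dict" with a callable `Protocol` and `cast` the function to it:

```python
from typing import Protocol, cast

class CachedFn(Protocol):
    cache: dict[str, str]
    def __call__(self, s: str) -> None: ...

def _foo(s: str) -> None:
    try:
        print(foo.cache[s])
    except KeyError:
        print('NEW')
        foo.cache[s] = 'CACHE' + s

foo = cast(CachedFn, _foo)  # checkers now know foo.cache: dict[str, str]
foo.cache = {}
```

`cast` is a no-op at runtime; the `Protocol` exists purely so that accesses to `foo.cache` type-check.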
|
<python><python-typing>
|
2023-12-14 15:51:46
| 1
| 181,617
|
MSalters
|
77,661,379
| 11,159,734
|
Azure Function Exit code: 137 | Please review your requirements.txt
|
<p>I want to publish an Azure Function with the following command using the cli/Azure Core tools:</p>
<pre><code>func azure functionapp publish myFunc
</code></pre>
<p>However, I cannot even publish the function, as I always get an error while installing the requirements.txt:</p>
<pre class="lang-bash prettyprint-override"><code>[15:29:01+0000] Downloading torch-2.1.1-cp310-cp310-manylinux1_x86_64.whl (670.2 MB) | Exit code: 137 | Please review your requirements.txt | More information: https://aka.ms/troubleshoot-python
\n/opt/Kudu/Scripts/starter.sh oryx build /tmp/zipdeploy/extracted -o /home/site/wwwroot --platform python --platform-version 3.10.4 -p packagedir=.python_packages/lib/site-packages
</code></pre>
<p>I saw that this error is apparently due to a lack of memory, but I need all these packages. This is my requirements.txt:</p>
<pre><code>langchain
azure-functions
azure-identity
azure-search-documents==11.4.0
azure-storage-blob
unstructured
unstructured[pdf]
unstructured[docx]
unstructured[pptx]
tiktoken
</code></pre>
<p>Essentially, what I'm trying to build is a serverless Azure Function with a blob trigger that will automatically read files when there are changes in the blob storage and then chunk, embed, and upload them to Azure AI Search. Therefore, all these packages are necessary. Is there any way to increase the memory limit or get around this issue?</p>
<p>I also tried this command but it did not make a difference:</p>
<pre><code>func azure functionapp publish MyFunc --python --build remote
</code></pre>
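Exit code 137 is the out-of-memory kill, and the log shows the trigger: one of the `unstructured` extras transitively pulls in the ~670 MB GPU build of `torch`, which exhausts memory during the remote Oryx build. If GPU inference isn't needed inside the function, one workaround sketch is to point pip at PyTorch's public CPU-only wheel index at the top of `requirements.txt`, so the much smaller CPU wheel satisfies the dependency:

```text
# requirements.txt — sketch: prefer the small CPU-only torch wheel
--extra-index-url https://download.pytorch.org/whl/cpu
torch
langchain
azure-functions
...
```

The remaining entries stay as they are; pip resolves `torch` from the extra index before falling back to PyPI's default GPU build.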
|
<python><azure><azure-functions>
|
2023-12-14 15:45:43
| 1
| 1,025
|
Daniel
|
77,661,276
| 6,484,455
|
Join Kafka streams in Python
|
<p>I need to work with Kafka streams in Python and I am analyzing the different libraries available. <a href="https://stackoverflow.com/questions/18688971/c-char-array-initialization-what-happens-if-there-are-less-characters-in-the-st">This question</a> provided some good answers and it looks like the <a href="https://faust-streaming.github.io/faust/" rel="nofollow noreferrer">Faust fork</a> is the most complete Kafka library in Python.
However, I need to join different Kafka streams and I am not sure how to accomplish this, or if it is even supported.</p>
<p>I searched Faust's documentation and saw that <a href="https://faust-streaming.github.io/faust/search.html?q=join" rel="nofollow noreferrer">there are some definitions in place</a> for joins, but if I go to the <a href="https://faust-streaming.github.io/faust/_modules/faust/joins.html#Join" rel="nofollow noreferrer">source code</a>, they are not implemented. So it looks like they are not supported, but maybe I am missing something or there is a different library that does support it.</p>
<p>I also found <a href="https://stackoverflow.com/questions/43027286/join-multiple-kafka-topics-by-key">this relevant question</a>, but it is from 2017 so a lot could have changed in the Python world.</p>
|
<python><apache-kafka><faust>
|
2023-12-14 15:31:56
| 1
| 835
|
Matias B
|
77,661,141
| 13,007,467
|
Converting pandas groupby / apply / ewm calculation with time window to polars
|
<p>Due to performance considerations I would like to convert some pandas-based scripts to polars. I need to perform a groupby and calculate an exponentially weighted mean with a half-life based on a datetime value. I unfortunately could not really find a cookbook for polars and relied on this <a href="https://stackoverflow.com/questions/73393235/polars-how-to-compute-rolling-ewm-grouped-by-column">answer</a> to get started and reached the following approximation:</p>
<pre><code>import pandas as pd
import polars as pl
import random
from datetime import datetime, timedelta

# Define the list of persons
persons = ['Person A', 'Person B', 'Person C', 'Person D']

# Generate random data for the DataFrame
# start_date = datetime.now() - timedelta(days=365)
df = pd.DataFrame(
    {'person': [random.choice(persons) for _ in range(50)],
     'rating': [random.randint(75, 110) for _ in range(50)],
     'date': [datetime(2022, 6, 1, 0, 0, 0)
              + timedelta(days=random.randint(0, 365))
              for _ in range(50)]}
)
df.sort_values(['date'], inplace=True)

# To be used with polars
dl = pl.from_dataframe(df)

# Function to convert
df['EWM_30d'] = df.groupby(
    by='person', sort=False).apply(
    lambda x: x['rating'].ewm(halflife=('30d'), times=x['date']
                              ).mean().shift(1, fill_value=80).round(2)
).to_numpy()

# Initial polars version
dl = dl.rolling(
    'date', by='person', period="100000d").agg(
    pl.col('rating').ewm_mean(half_life=30).shift(1, fill_value=80).last().alias('EWM_30d'))
</code></pre>
<p>I have managed to do something remotely similar however it has several flaws:</p>
<ul>
<li>As I could not find an equivalent of pd.expanding(), either explicit or implicit, I am using rolling() with a very long time window, which looks a bit awkward.</li>
<li>I am using a half_life of 30 rows instead of a '30d' temporal window based on the datetime 'date' column.
While the speed is there, it is not the exact same operation, and the results differ from the ones I get with pandas.</li>
</ul>
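<p>To see why the two snippets disagree: with <code>halflife='30d'</code> and <code>times=</code>, pandas decays the weight by elapsed <em>time</em>, while <code>ewm_mean(half_life=30)</code> decays per <em>row</em>. A stdlib sketch of the time-based recursion (the simplified <code>adjust=False</code> form, so the numbers will not match pandas' default exactly — it only illustrates the mechanism):</p>

```python
from datetime import datetime, timedelta

def time_ewm(values, times, half_life):
    """Exponentially weighted mean where the decay between two
    observations depends on elapsed time, not on the row count."""
    out = []
    mean = None
    prev_t = None
    for v, t in zip(values, times):
        if mean is None:
            mean = float(v)
        else:
            # the old mean's weight halves every `half_life` of elapsed time
            alpha = 1 - 0.5 ** ((t - prev_t) / half_life)
            mean = (1 - alpha) * mean + alpha * v
        prev_t = t
        out.append(mean)
    return out

start = datetime(2022, 6, 1)
times = [start + timedelta(days=d) for d in (0, 30, 60)]
print(time_ewm([80, 100, 100], times, timedelta(days=30)))  # [80.0, 90.0, 95.0]
```

<p>Irregularly spaced observations therefore decay by different amounts between consecutive rows, which a fixed per-row <code>half_life=30</code> cannot reproduce.</p>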
|
<python><dataframe><group-by><python-polars><moving-average>
|
2023-12-14 15:08:10
| 1
| 441
|
Iqigai
|
77,661,139
| 14,720,380
|
How do I stop the ruff linter from moving imports into an if TYPE CHECKING statement?
|
<p>I have a pydantic basemodel that looks something like:</p>
<pre><code>from pathlib import Path
from pydantic import BaseModel
class Model(BaseModel):
    log_file: Path
</code></pre>
<p>And my ruff pre-commit hook is re-ordering it to:</p>
<pre><code>from typing import TYPE_CHECKING
from pydantic import BaseModel
if TYPE_CHECKING:
from pathlib import Path
class Model(BaseModel):
    log_file: Path
</code></pre>
<p>Which is then causing the bug:</p>
<pre><code>pydantic.errors.ConfigError: field "log_file" not yet prepared so type is still a ForwardRef, you might need to call Model.update_forward_refs()
</code></pre>
<p>Which I don't want to do. How can I stop ruff from reordering the imports like this? My <code>.pre-commit-config.yaml</code> file looks like:</p>
<pre><code>repos:
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: "v0.0.291"
    hooks:
      - id: ruff
        args: [--fix, --exit-non-zero-on-fix]
  - repo: https://github.com/psf/black
    rev: 23.9.1
    hooks:
      - id: black
        language_version: python3
</code></pre>
<p>and my <code>pyproject.toml</code> file:</p>
<pre><code>[tool.black]
line-length = 120
include = '\.pyi?$'
exclude = '''
/(
    \.eggs
  | \.git
  | \.hg
  | \.mypy_cache
  | \.tox
  | \.venv
  | _build
  | buck-out
  | build
  | dist
  # The following are specific to Black, you probably don't want those.
  | blib2to3
  | tests/data
  | profiling
)/
'''
[tool.ruff]
line-length = 120
ignore = ["F405", "B008"]
select = ["E", "F", "B", "C4", "DTZ", "PTH", "TCH", "I001"]
# unfixable = ["C4", "B"]
exclude = ["docs/conf.py", "Deployment/make_deployment_bundle.py"]
[tool.ruff.per-file-ignores]
"**/__init__.py" = ["F401", "F403"]
[tool.ruff.isort]
split-on-trailing-comma = true
known-first-party = ["influxabart"]
no-lines-before = ["local-folder"]
section-order = ["future","standard-library","third-party","first-party","this","local-folder"]
[tool.ruff.isort.sections]
"this" = ["InfluxTools"]
</code></pre>
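<p>For reference, a hedged sketch: the moves come from ruff's flake8-type-checking (TCH) rules in the <code>select</code> list above. One commonly suggested setting — verify the section and option names against your ruff version — tells those rules that pydantic models need their annotations at runtime:</p>

```toml
# Assumption: this section and option exist in the ruff version in use;
# it exempts annotations on pydantic models from being moved into
# `if TYPE_CHECKING:` blocks.
[tool.ruff.flake8-type-checking]
runtime-evaluated-base-classes = ["pydantic.BaseModel"]
```

<p>Alternatively, dropping <code>"TCH"</code> from <code>select</code> disables these moves entirely.</p>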
|
<python><pre-commit-hook><ruff>
|
2023-12-14 15:08:06
| 2
| 6,623
|
Tom McLean
|
77,661,003
| 6,618,996
|
How to type hint a method that returns an instance of a class passed as parameter, with default value?
|
<p><em>using Python 3.11 and Mypy 1.7.</em></p>
<p>I am trying to properly type hint a method that takes a class as a parameter, with a default value, and returns an instance of that class. The class passed as a parameter must be a subclass of a specific base class.
How should I do that?
I tried to use a <a href="https://mypy.readthedocs.io/en/stable/generics.html#type-variables-with-upper-bounds" rel="nofollow noreferrer">type variable with upper bound</a> like this:</p>
<pre><code>from typing import TypeVar
class BaseClass: pass
class ImplClass(BaseClass): pass
T = TypeVar("T", bound=BaseClass)
def method(return_type: type[T] = ImplClass) -> T:
    return return_type()
</code></pre>
<p>But mypy complains with this message:</p>
<pre><code>scratch_10.py: note: In function "method":
scratch_10.py:9:35: error: Incompatible default for argument "return_type" (default has type "type[ImplClass]", argument has type "type[T]") [assignment]
    def method(return_type: type[T] = ImplClass) -> T:
                                      ^~~~~~~~~
</code></pre>
<p>But as far as I understand this error is wrong since the default <code>ImplClass</code> is a subclass of <code>BaseClass</code> which is the upper bound of <code>T</code>, so it respects the type constraints.</p>
<p>Note that removing the upperbound does not help:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T") #no upperbound this time
def method(return_type: type[T] = ImplClass) -> T:
    return return_type()
</code></pre>
<p>(same error message)</p>
<p>But removing the default value and explicitly calling <code>method</code> with <code>ImplClass</code> as parameter works without any complaints:</p>
<pre class="lang-py prettyprint-override"><code>def method(return_type: type[T]) -> T: # no default value this time
    return return_type()
method(ImplClass)
</code></pre>
<blockquote>
<p>Success, no issues found in 1 source file</p>
</blockquote>
<p>So I don't understand why <code>mypy</code> complains about the default value but not about the actual parameter with the exact same value. Is that a false positive in <code>mypy</code>, or am I doing something wrong?</p>
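<p>One pattern this is often worked around with (a hedged sketch, not the only option) is <code>typing.overload</code>: describe the no-argument call as returning <code>ImplClass</code>, and the explicit call as returning <code>T</code>, so the default never has to be typed as <code>type[T]</code> at all:</p>

```python
from typing import TypeVar, overload

class BaseClass: ...
class ImplClass(BaseClass): ...

T = TypeVar("T", bound=BaseClass)

@overload
def method() -> ImplClass: ...          # no argument -> the default type
@overload
def method(return_type: type[T]) -> T:  # explicit argument -> that type
    ...
def method(return_type: type[BaseClass] = ImplClass) -> BaseClass:
    return return_type()

print(type(method()).__name__)  # ImplClass
```

<p>The runtime behavior is unchanged; only the declared signatures differ, so a type checker sees precise return types for both call shapes.</p>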
|
<python><generics><mypy>
|
2023-12-14 14:49:59
| 1
| 6,129
|
Guillaume
|
77,660,998
| 3,121,975
|
Dagster executable missing from container
|
<p>I've been trying to create a container to run in Dagster cloud on an ECS-hybrid deployment model. I'm able to actually push the container to Dagster but I continually get this error:</p>
<pre><code>dagster_cloud.workspace.ecs.client.EcsServiceError: ECS service failed because task arn:aws:ecs:ap-northeast-1:*****:task/Dagster-Cloud-my-cluster-Cluster/1d151e6d40b44588a4ed4446a949d44a failed: CannotStartContainerError: ResourceInitializationError: failed to create new container runtime task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "dagster": executable file not found in $PATH: unknown
Task logs:
runc create failed: unable to start container process: exec: "dagster": executable file not found in $PATH
For more information about the failure, check the ECS console for logs for task arn:aws:ecs:ap-northeast-1:*****:task/Dagster-Cloud-my-cluster-Cluster/1d151e6d40b44588a4ed4446a949d44a in cluster Dagster-Cloud-my-cluster-Cluster.
  File "/dagster-cloud/dagster_cloud/workspace/user_code_launcher/user_code_launcher.py", line 1304, in _reconcile
    self._wait_for_new_server_ready(
  File "/dagster-cloud/dagster_cloud/workspace/ecs/launcher.py", line 458, in _wait_for_new_server_ready
    task_arn = self.client.wait_for_new_service(
  File "/dagster-cloud/dagster_cloud/workspace/ecs/client.py", line 491, in wait_for_new_service
    return self.check_service_has_running_task(
  File "/dagster-cloud/dagster_cloud/workspace/ecs/client.py", line 607, in check_service_has_running_task
    self._raise_failed_task(task, container_name, logger)
  File "/dagster-cloud/dagster_cloud/workspace/ecs/client.py", line 526, in _raise_failed_task
    raise EcsServiceError(
</code></pre>
<p>I'm not sure why this is happening as I'm sure I'm installing dagster and dagster-cloud, as per the <a href="https://docs.dagster.io/dagster-cloud/managing-deployments/code-locations#dagster-cloud-code-requirements" rel="nofollow noreferrer">documentation</a>. My Docker container looks like this:</p>
<pre><code>###############################################################################
# Base container
###############################################################################
FROM python:3.11 AS base
# Define the environment variables necessary to work with poetry
ENV POETRY_VERSION=1.5.1 \
    POETRY_HOME="/opt/poetry" \
    POETRY_VIRTUALENVS_IN_PROJECT=true \
    POETRY_NO_INTERACTION=1
# Add the poetry bin to our path
ENV PATH="$POETRY_HOME/bin:$PATH"
###############################################################################
# Poetry installer container
###############################################################################
FROM base AS installer
# Install poetry
RUN curl -sSL https://install.python-poetry.org | python3 -
###############################################################################
# Container that actually builds the application
###############################################################################
FROM base AS builder
# Copy the poetry files from the poetry installer to this container
COPY --from=installer $POETRY_HOME $POETRY_HOME
# Describe the environment variables necessary to install our dependencies
ENV PYTHONFAULTHANDLER=1 \
    PYTHONUNBUFFERED=1 \
    PYTHONHASHSEED=random \
    PIP_NO_CACHE_DIR=off \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100
# Set the working directory to one we can install to easily
WORKDIR /app
# Copy the poetry.lock and pyproject.toml files first to ensure that dependencies
# are only installed when they're updated
COPY poetry.lock pyproject.toml /app/
# Copy in our SSH key so we can retrieve the shared GitHub repo
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# Install our project dependencies
RUN --mount=type=ssh poetry install --only dagster --no-ansi --no-root
# Build the project
COPY . /app/
RUN poetry build
###############################################################################
# Create a runtime environment that's much smaller
###############################################################################
FROM python:3.11-alpine AS runtime
# Set the working directory to where Dagster will look for the application
WORKDIR /opt/dagster/app
# Copy the project wheel file
COPY --from=builder /app/dist/*.whl /
# Install the wheel file using pip and then install dagster and dagster-cloud
RUN pip install --no-cache-dir /*.whl \
    && rm -rf /*.whl
</code></pre>
<p>For this package, my <code>pyproject.toml</code> file lists my dependencies as follows:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.group.dagster.dependencies]
dagster = "^1.5.9"
dagster-aws = "^0.21.9"
pendulum = "^2.1.2"
pandas = "^2.1.3"
openpyxl = "^3.1.2"
dagster-cloud = "^1.5.12"
</code></pre>
<p>So, both of these should be installed, but clearly they haven't been. What am I doing wrong here?</p>
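<p>One detail worth noting (a hedged sketch, not a verified fix): the wheel built by <code>poetry build</code> declares only the main dependency group, so installing it with pip in the runtime stage would not bring in the <code>dagster</code> group at all — which would leave the <code>dagster</code> console script off <code>$PATH</code>, matching the error. Assuming <code>poetry export</code> with <code>--only</code> is available in this poetry version, the group can be frozen in the builder and installed explicitly in the runtime stage:</p>

```dockerfile
# Builder stage: freeze the dagster group to a requirements file
# (assumes the poetry export plugin supports --only in this version)
RUN poetry export --only dagster --without-hashes -o /app/requirements.txt

# Runtime stage: install the pinned group alongside the project wheel
COPY --from=builder /app/requirements.txt /requirements.txt
RUN pip install --no-cache-dir -r /requirements.txt /*.whl \
    && rm -rf /*.whl /requirements.txt
```

<p>Separately, the runtime image is <code>python:3.11-alpine</code>; packages such as pandas often lack musl wheels, so a slim Debian image may be an easier base.</p>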
|
<python><amazon-web-services><docker><dagster>
|
2023-12-14 14:49:30
| 2
| 8,192
|
Woody1193
|
77,660,996
| 10,749,925
|
How can I create a Sagemaker endpoint for this notebook?
|
<p>I have created a vector DB (FAISS) and loaded PDFs into it. Then I'm using the LangChain wrapper for AWS Bedrock to call it. I know Knowledge Bases exist now, but at least in the SageMaker notebook I have a bit more control. The model works perfectly in the SageMaker notebook: when I ask a question, it returns the answer.</p>
<p>What I would like to do is create a little web page (and via HTTP/REST API), just submit the question in a text field and receive the answer in a text field. I'm guessing this is pretty difficult to do without a Lambda function somewhere in the chain, or maybe not?</p>
<p>When I look in my SageMaker console, under the <strong>Inference</strong> tab, there is <strong>no</strong> Model, <strong>no</strong> Endpoint, and <strong>no</strong> Endpoint Configuration (as I'm not selecting a model from SageMaker; I just use the LangChain LLM and Bedrock in the Python notebook as below).</p>
<pre><code>import boto3
import json
bedrock = boto3.client(service_name="bedrock")
bedrock_runtime = boto3.client(service_name="bedrock-runtime")
from langchain.llms.bedrock import Bedrock
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
from langchain.embeddings import BedrockEmbeddings

embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1",
                               client=bedrock_runtime)
</code></pre>
<p>Eventually I embed the docs into the FAISS vector database, and it's this database I query:</p>
<pre><code>db = FAISS.from_documents(docs, embeddings)
model_titan = {
    "maxTokenCount": 512,
    "stopSequences": [],
    "temperature": 0.0,
    "topP": 0.5
}
# Amazon Titan Model
llm = Bedrock(
    model_id="amazon.titan-text-express-v1",
    client=bedrock_runtime,
    model_kwargs=model_titan,
)
</code></pre>
<p>Then define a prompt.....</p>
<pre><code>PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)
</code></pre>
<p>And query the database:</p>
<pre><code>qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=db.as_retriever(
        search_type="similarity",
    ),
    return_source_documents=True,
    chain_type_kwargs={"prompt": PROMPT},
)
query = "What does the future tech look like?"
result = qa({"query": query})
print(f'Query: {result["query"]}\n')
print(f'Result: {result["result"]}\n')
print(f'Context Documents: ')
for srcdoc in result["source_documents"]:
    print(f'{srcdoc}\n')
</code></pre>
<p>This returns exactly what I need within SageMaker; I just need to query the database externally.</p>
<p>I don't want to have a Lambda function rebuild the chain each time. I'm thinking, for efficiency, all I need is to pass a query to the Lambda function and return the result.</p>
<p><a href="https://i.sstatic.net/37MJT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/37MJT.png" alt="Code to run?" /></a></p>
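<p>A minimal hedged sketch of the Lambda side: objects built at module level survive between warm invocations, so the chain is constructed once per cold start rather than per request. Here <code>build_chain</code> and the stub are placeholders for the notebook's Bedrock/FAISS/RetrievalQA setup:</p>

```python
import json

def build_chain():
    # Assumption: in the real function this would rebuild the Bedrock LLM,
    # load the FAISS index, and return the RetrievalQA chain from the notebook.
    return lambda query: {"query": query, "result": f"stub answer for: {query}"}

qa = build_chain()  # module level: runs once per cold start, reused while warm

def lambda_handler(event, context):
    # API Gateway proxy integration delivers the POST body as a JSON string
    body = json.loads(event.get("body") or "{}")
    query = body.get("query", "")
    result = qa(query)
    return {"statusCode": 200, "body": json.dumps({"answer": result["result"]})}
```

<p>The FAISS index itself would need to live somewhere the Lambda can load it (bundled in the deployment package, a container image, or S3/EFS); only the handler body runs per request.</p>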
|
<python><machine-learning><aws-lambda><aws-api-gateway><amazon-sagemaker>
|
2023-12-14 14:49:20
| 1
| 463
|
chai86
|
77,660,943
| 20,920,790
|
How to check that i is 0 in io.BytesIO().seek(i)?
|
<p>I make a lineplot to send to Telegram.
The first send is OK.</p>
<p>The second time I get the error:
BadRequest: File must be non-empty</p>
<p>How can I check that i = 0 (the cursor position) in seek(i)?</p>
<pre><code>sns.lineplot(x=x, y=y);
plt.title('test plot');
plot_object = io.BytesIO()
plt.savefig(plot_object)
plot_object.seek(0)
plot_object.name = 'test_plot.png'
plt.close()
bot.sendPhoto(chat_id, plot_object)
</code></pre>
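<p>For the literal question — inspecting the cursor and whether the buffer is empty — <code>tell()</code> gives the current cursor position, and <code>getbuffer().nbytes</code> gives the total size regardless of where the cursor is. A stdlib sketch:</p>

```python
import io

buf = io.BytesIO()
buf.write(b"fake png bytes")
buf.seek(0)

print(buf.tell())              # 0 -> cursor is at the start
print(buf.getbuffer().nbytes)  # 14 -> total bytes written, regardless of cursor

if buf.getbuffer().nbytes == 0:
    print("buffer is empty; savefig wrote nothing")
```

<p>If <code>nbytes</code> is 0 on the second send, <code>savefig</code> wrote nothing — for example because the figure had already been closed — so the fix likely belongs in the plotting code (creating a fresh figure per send) rather than in <code>seek()</code>.</p>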
|
<python><io><telegram>
|
2023-12-14 14:43:11
| 1
| 402
|
John Doe
|
77,660,933
| 2,144,877
|
python matplotlib 3d scatter plot with colorbar ',' marker problem
|
<p>I've been playing with ray tracing. I managed to get the plot to have the correct shape and size by using:</p>
<p>ax.set_box_aspect((np.ptp(xs), np.ptp(ys), np.ptp(zs))).</p>
<p>and to have a 4d colorbar plot.</p>
<p>My current problem is with the marker. Using '.' gives dots as expected</p>
<p>(click images to zoom in)</p>
<p><a href="https://i.sstatic.net/pHH0S.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pHH0S.jpg" alt="marker='.'" /></a></p>
<p>Changing marker to ',' does not give pixels as expected, but instead gives squares. Why?</p>
<p>[ While we are here, I tried using my_dpi and figsize=() so the image fills more of the "screen" - without success. ]</p>
<p><a href="https://i.sstatic.net/AsRRs.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AsRRs.jpg" alt="marker=','" /></a></p>
<p>Code:</p>
<pre><code>filename = "DonInSite_reflection_points.out"
xs = []
ys = []
zs = []
powers = []
phases = []
with open(filename, 'r') as fin:
for line in fin :
line = line.replace('\n', ' ')
line = line.replace(']', ' ')
line = line.replace('[', ' ')
line = line.replace(',', ' ')
words = line.split()
if len(words) != 5 :
continue
xs.append(float(words[0]))
ys.append(float(words[1]))
zs.append(float(words[2]))
powers.append(float(words[3]))
phases.append(float(words[4]))
print("len(powers) = %d" % (len(powers)))
print("min(powers) = %f" % (min(powers)))
print("max(powers) = %f" % (max(powers)))
import numpy as np
from pylab import *
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
my_dpi = 218
fig = plt.figure(figsize=(5120/my_dpi, 2880/my_dpi), dpi=218)
#ax = fig.add_subplot(111,projection='3d')
ax = plt.axes(projection='3d')
ax.set_box_aspect((np.ptp(xs), np.ptp(ys), np.ptp(zs)))
plt.title("City Hall 3D reflection points scatter plot")
ax.set_xlabel('X-axis', fontweight ='bold')
ax.set_ylabel('Y-axis', fontweight ='bold')
ax.set_zlabel('Z-axis', fontweight ='bold')
my_cmap = plt.get_cmap('gnuplot')
yg = ax.scatter(xs, ys, zs, c=np.array(powers), cmap = my_cmap, marker = ',')
cb = fig.colorbar(yg, ax = ax, label="Power (dBm)", shrink=0.75)
import time
t = time.time()
outfilename = "DonInSite." + str(int(t)) + ".png"
plt.savefig(outfilename, format="png")
plt.show()
</code></pre>
|
<python><matplotlib><3d><scatter-plot>
|
2023-12-14 14:41:19
| 0
| 459
|
Don Mclachlan
|
77,660,887
| 11,501,976
|
Multi-region image segmentation based on pixel intensities
|
<p>I am attempting to segment a grayscale image into three regions based on its pixel intensities. Due to illumination and specks, this task turns out to be more challenging than I expected...</p>
<p>Here is the <strong>original image</strong>:</p>
<p><a href="https://i.sstatic.net/UnUDt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UnUDt.png" alt="Original image" /></a></p>
<p>The bright area occupies most of the image. Below it, there is a gray area, and the black area takes up a narrow portion at the bottom of the image.</p>
<p>Multi-Otsu thresholding was not very effective as the histogram does not show sharp peaks...</p>
<pre class="lang-py prettyprint-override"><code>from skimage.filters import threshold_multiotsu
thresholds = threshold_multiotsu(image)
plt.hist(image.ravel(), bins=255)
for thresh in thresholds:
    plt.axvline(thresh, color='r')
</code></pre>
<p><a href="https://i.sstatic.net/29geC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/29geC.png" alt="histogram" /></a></p>
<pre class="lang-py prettyprint-override"><code>regions = np.digitize(image, bins=thresholds)
plt.imshow(regions, cmap='jet')
</code></pre>
<p><a href="https://i.sstatic.net/VnS3n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VnS3n.png" alt="Multi-Otsu thresholding" /></a></p>
<p>I think what I need is something like a large-scale adaptive thresholding with multiple labels, but I can't figure out how to achieve this.</p>
<h1>EDIT</h1>
<p>Applying blurring before thresholding showed more promising result.</p>
<pre class="lang-py prettyprint-override"><code>blur = cv2.GaussianBlur(image, (0, 0), 8)
thresholds = threshold_multiotsu(blur)
regions = np.digitize(blur, bins=thresholds)
plt.imshow(regions, cmap='jet')
</code></pre>
<p><a href="https://i.sstatic.net/dTw7O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dTw7O.png" alt="blurring and thresholding" /></a></p>
<p>But the boundary between "white" and "gray" regions in <code>x > 1100</code> is still inaccurate.</p>
<p>Meanwhile, applying a Sobel filter to the blurred image returns fairly accurate boundaries.</p>
<pre class="lang-py prettyprint-override"><code>grad_x = cv2.Sobel(blur, cv2.CV_16S, 1, 0, ksize=3)
grad_y = cv2.Sobel(blur, cv2.CV_16S, 0, 1, ksize=3)
abs_grad_x = cv2.convertScaleAbs(grad_x)
abs_grad_y = cv2.convertScaleAbs(grad_y)
grad = cv2.addWeighted(abs_grad_x, 0.5, abs_grad_y, 0.5, 0)
plt.imshow(grad, cmap="gray")
</code></pre>
<p><a href="https://i.sstatic.net/UeJnI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UeJnI.png" alt="sobel" /></a></p>
<p>So the question becomes: <strong>How can I combine the Sobel result and the Otsu result to acquire accurate mask?</strong></p>
<h1>EDIT2</h1>
<p>Sorry I didn't add the manual mask. Here are the boundaries I have in mind, which are somewhat similar to the Sobel result:</p>
<p><a href="https://i.sstatic.net/j5Zz0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j5Zz0.jpg" alt="manually created boundary" /></a></p>
<p>The red boundary is almost linear in this example, but it can possibly have more complicated shape such as:</p>
<p><a href="https://i.sstatic.net/MeJpb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MeJpb.png" alt="artificial boundary" /></a></p>
<h2>More context</h2>
<p>This image shows a porous material getting wet with liquid. The region beneath the yellow boundary is the bulk fluid, and the area between the red and yellow boundaries is the wetted region.</p>
<p>Therefore, the red boundary is always above the yellow boundary, and it continuously spans across the image.</p>
<p>I think the best approach would be to combine this morphological knowledge with the pixel intensity gradient, but I can't figure out how to achieve this.</p>
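<p>One way to encode that knowledge (a hedged sketch with hypothetical helper names, shown in pure Python on a toy array — real code would run this on the Sobel <code>grad</code> array): since the boundary spans every column, pick per column the row with the strongest gradient response, then median-filter the resulting curve so speck-driven outliers are suppressed:</p>

```python
def column_boundary(grad):
    """grad: 2-D list (rows x cols) of gradient magnitudes.
    For each column, return the row index with the strongest response."""
    rows, cols = len(grad), len(grad[0])
    boundary = []
    for c in range(cols):
        col = [grad[r][c] for r in range(rows)]
        boundary.append(max(range(rows), key=col.__getitem__))
    return boundary

def smooth(curve, k=1):
    """Median filter over 2k+1 columns to suppress speck-driven outliers."""
    out = []
    for i in range(len(curve)):
        window = sorted(curve[max(0, i - k): i + k + 1])
        out.append(window[len(window) // 2])
    return out

# Toy gradient image: the strongest vertical edge sits on row 2 in every column
grad = [[0, 0, 0],
        [1, 0, 0],
        [5, 9, 9],
        [0, 8, 0]]
print(column_boundary(grad))  # [2, 2, 2]
print(smooth([2, 9, 2, 2, 2]))  # the 9 at index 1 is smoothed back to 2
```

<p>Running this twice — once masked to rows above the yellow boundary for the red one — would also enforce the "red is always above yellow" constraint. More robust variants of the same idea phrase it as a shortest-path/seam search through the gradient image.</p>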
|
<python><image-processing><image-segmentation><scikit-image>
|
2023-12-14 14:34:09
| 1
| 378
|
JS S
|
77,660,539
| 10,576,322
|
Dynamically change used database table while django app is running?
|
<p>I want to create a service that has a page to configure a database table (columns and types) and that allows CRUD operations on its rows via the page.</p>
<p>The content of that table should then be used for comparison with the payload of REST API calls.</p>
<p>I am quite new to Django and don't know whether that approach is possible with the Django ORM. What I have seen so far is that the database schema is defined hardcoded, and I don't see how to make my concept work with that.</p>
<p>If not, I am thinking about representing and managing this dynamically created table via a JSON or similar object stored in a document database.</p>
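<p>The Django ORM does expect a hardcoded schema, so the JSON-based idea is workable without leaving Django: store the user-defined columns as a spec (e.g. in a <code>JSONField</code>) and the rows likewise, then validate each REST payload against the spec. A plain-Python sketch of the comparison step (names are illustrative, no Django required):</p>

```python
TYPES = {"int": int, "float": float, "str": str, "bool": bool}

def validate_row(spec, row):
    """spec: {'column_name': 'type_name'}; row: payload dict to check.
    Returns a list of error strings (empty = payload matches the spec)."""
    errors = []
    for name, type_name in spec.items():
        if name not in row:
            errors.append(f"missing column: {name}")
        elif not isinstance(row[name], TYPES[type_name]):
            errors.append(f"{name}: expected {type_name}")
    for name in row:
        if name not in spec:
            errors.append(f"unknown column: {name}")
    return errors

spec = {"price": "float", "sku": "str"}
print(validate_row(spec, {"price": 9.99, "sku": "A-1"}))  # []
print(validate_row(spec, {"price": "9.99"}))              # two errors
```

<p>In Django, <code>spec</code> and each row could each be one <code>JSONField</code> on otherwise ordinary models, so no schema migration is needed when the user edits columns.</p>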
|
<python><django>
|
2023-12-14 13:31:01
| 0
| 426
|
FordPrefect
|
77,660,498
| 6,331,353
|
Homebrew installed python on mac returns weird response including "line 1: //: is a directory" & "line 7: syntax error near unexpected token `('"
|
<p>When I run <code>python3</code> I get the following</p>
<pre><code> % python3
/opt/homebrew/bin/python3: line 1: //: is a directory
/opt/homebrew/bin/python3: line 3: //: is a directory
/opt/homebrew/bin/python3: line 4: //: is a directory
/opt/homebrew/bin/python3: line 5: //: is a directory
/opt/homebrew/bin/python3: line 7: syntax error near unexpected token `('
/opt/homebrew/bin/python3: line 7: `����
0� H__PAGEZERO�__TEXT@@__text__TEXT;�__stubs__TEX>�
__cstring__TEXT�>��>__unwind_info__TEXT�?X�?�__DATA_CONST@@@@__got__DATA_CONST@�@�__DATA�@__bss__DATA�H__LINKEDIT����M4���3���0���0
PP�%
/usr/lib/dyldY*(�;��g�g�2
x
/opt/homebrew/Cellar/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/Python
8d'/usr/lib/libSystem.B.dylib&��)�� ��H����_��W��O��{������ �'X�C���`5�� �
@������������������������T��J�_8� �_�qA��T� ��*@8_���i � @�� ��<���R�� " ��3����5�! �����R�����0 Ձ` ը���` ��p ��R�R�� �BT�^ յ �����@99� Ձ] ��������9�\ ����R�Ri� �Tc������� �0 աZ �"�R{�t��
��C�� @��C���������_���!0 � �RN�����O��{������W���=��45�� �r�����#���!�RN�1T�@���T��RI���)��35�{B��OA�����_��
� X@�p �A�R"�R(� �R#���{����
��@���'�{����� �@�R � �R��{����a
</code></pre>
<p>Trying to check the version with <code>python3 --version</code> returns the same thing</p>
<p>The error appears to happen randomly: it was not happening for a while (after installing Python 3.10 and then later Python 3.11 again), and then it randomly started happening again one day.</p>
<hr />
<pre><code>% od -tx1 /opt/homebrew/bin/python3 | head -n 5
0000000 2f 2f 20 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d
0000020 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d
*
0000100 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 0a
0000120 0a 2f 2f 20 50 4c 45 41 53 45 20 44 4f 20 4e 4f
% file /opt/homebrew/bin/python3
/opt/homebrew/bin/python3: data
</code></pre>
|
<python><python-3.x><homebrew>
|
2023-12-14 13:25:34
| 1
| 2,335
|
Sam
|
77,660,453
| 20,803,947
|
kombu.exceptions.OperationalError: [Errno 111] Connection refused (flask, docker-compose)
|
<p>I'm having problems with my Flask app.</p>
<p>The stack is:</p>
<p>Python, Flask, Celery, MongoDB, RabbitMQ.</p>
<p>When I make a request to the API, I get the following return:</p>
<p>kombu.exceptions.OperationalError: [Errno 111] Connection refused</p>
<p>This only happens when I run docker-compose up; if I start the Flask app outside Docker, everything works normally.</p>
<p>Dockerfile:</p>
<pre><code>FROM python:3.11-alpine AS builder
RUN pip install poetry
WORKDIR /backend
COPY pyproject.toml poetry.lock ./
RUN poetry config virtualenvs.create false && poetry install --no-root
FROM python:3.11-alpine
WORKDIR /backend
COPY --from=builder /usr/local/lib/python3.11/site-packages/ /usr/local/lib/python3.11/site-packages/
COPY --from=builder /usr/local/bin/ /usr/local/bin/
COPY . .
</code></pre>
<p>docker-compose.yml</p>
<pre><code>version: '3.9'

services:
  mongodb:
    image: mongo:latest
    container_name: mongodb
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    restart: always
    ports:
      - 27017:27017
    volumes:
      - mongodb_data:/data/db

  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    restart: always
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=admin
      - RABBITMQ_DEFAULT_VHOST=/
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq

  celery_worker:
    build: .
    container_name: celery_worker
    environment:
      - CELERY_BROKER_URL=amqp://admin:admin@rabbitmq:5672/
    restart: always
    command: celery --app src.task worker --loglevel=info
    depends_on:
      - mongodb
      - rabbitmq

  flask_app:
    build: .
    container_name: flask_app
    command: python src/app.py
    restart: always
    environment:
      - SERVER_HOST=0.0.0.0
      - SERVER_PORT=8080
    ports:
      - 5000:8080
    depends_on:
      - mongodb
      - rabbitmq
      - celery_worker

volumes:
  mongodb_data:  # Volume for persisting MongoDB data
  rabbitmq_data:  # Volume for persisting RabbitMQ data
</code></pre>
<p>src/app.py</p>
<pre><code>import os
from flask import Flask
from task import add

app = Flask(__name__)

@app.route('/', methods=['GET'])
def hello():
    add.delay(1, 2)
    return "Docker compose working!"

if __name__ == '__main__':
    host = os.environ.get('SERVER_HOST', 'localhost')
    port = os.environ.get('SERVER_PORT', '8001')
    print(f"Server listening {host}:{port} !!!")
    app.run(host=host, port=port, debug=True)
</code></pre>
<p>src/task.py</p>
<pre><code>import os
from celery import Celery

broker = os.environ.get('CELERY_BROKER_URL', 'amqp://guest@localhost//')

celery = Celery('tasks')

@celery.task(queue='default')
def add(x, y):
    print(f'Adding {x} + {y}')
    return x + y
</code></pre>
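<p>Two things stand out in the files above (a hedged observation): in <code>task.py</code> the <code>broker</code> variable is computed but never passed to <code>Celery()</code>, and the <code>flask_app</code> service in <code>docker-compose.yml</code> does not set <code>CELERY_BROKER_URL</code>. Either way the app falls back to its <code>localhost</code> default — which resolves outside Docker (where RabbitMQ's port is published on localhost) but is refused inside the container. A stdlib sketch of the resolution, with the <code>Celery</code> call itself left commented:</p>

```python
import os

def broker_url():
    # Inside docker-compose the service name "rabbitmq" is the hostname;
    # outside Docker, the fallback points at the port published on localhost.
    return os.environ.get("CELERY_BROKER_URL", "amqp://admin:admin@localhost:5672/")

# The resolved URL must then actually be handed to Celery, e.g.:
# celery = Celery("tasks", broker=broker_url())

os.environ["CELERY_BROKER_URL"] = "amqp://admin:admin@rabbitmq:5672/"
print(broker_url())  # amqp://admin:admin@rabbitmq:5672/
```

<p>So the sketch suggests both passing <code>broker=</code> to <code>Celery()</code> and adding <code>CELERY_BROKER_URL</code> to the <code>flask_app</code> service's environment.</p>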
<p>Can anyone tell me how to solve this? Thanks!!</p>
|
<python><flask><docker-compose><rabbitmq><celery>
|
2023-12-14 13:20:58
| 1
| 309
|
Louis
|
77,660,187
| 20,920,790
|
Why is the Telegram bot not sending the message?
|
<p>Why is the bot not sending the message?
My code:</p>
<pre><code>import telegram
import requests

bot_token = 'token***'
my_bot = telegram.Bot(token=bot_token)

def get_chat_id(bot_token):
    request = requests.post(f'https://api.telegram.org/bot{bot_token}/getUpdates')
    result = request.json()['result'][0]['message']['chat']['id']
    return result

chat_id = get_chat_id(bot_token)
msg = 'My PC'
my_bot.sendMessage(chat_id=chat_id, text=msg)
</code></pre>
<p>Python 3.11.5, Telegram 20.7</p>
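<p>One thing to check (hedged): in python-telegram-bot v20+ — and the question mentions version 20.7 — <code>Bot</code> methods are coroutines, so calling <code>sendMessage(...)</code> without awaiting it builds a coroutine object but never performs the request. A stdlib sketch of the awaited pattern, with a stub standing in for the real <code>Bot</code> so it runs without the library:</p>

```python
import asyncio

class StubBot:
    """Stands in for telegram.Bot; send_message is a coroutine, as in v20."""
    async def send_message(self, chat_id, text):
        return f"sent {text!r} to {chat_id}"

async def main():
    bot = StubBot()
    # Without `await`, this line would only create a coroutine and send nothing:
    result = await bot.send_message(chat_id=123, text="My PC")
    return result

print(asyncio.run(main()))  # sent 'My PC' to 123
```

<p>With the real library the same shape applies: wrap the send in an <code>async</code> function and run it with <code>asyncio.run(...)</code>.</p>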
|
<python><telegram><telegram-bot><python-telegram-bot>
|
2023-12-14 12:33:49
| 1
| 402
|
John Doe
|
77,660,084
| 1,538,049
|
Unable to locate package sqlite3 in Dockerfile
|
<p>I am pretty new to building containers with Docker; I commonly use conda environments for my day-to-day work. However, this time I needed to work with a computation server that only allows running Docker containers. I want to build an image that will allow me to run my PyTorch code. What I prepared is the following, a pretty common Dockerfile for deep learning applications:</p>
<pre><code>FROM nvidia/cuda:12.2.0-devel-ubuntu20.04
CMD ["bash"]
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
ENV SHELL=/bin/bash
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
    && apt-get -y install --no-install-recommends \
    git \
    wget \
    cmake \
    ninja-build \
    build-essential \
    python3 \
    python3-dev \
    python3-pip \
    python3-venv \
    python-is-python3 \
    && apt-get autoremove -y && apt-get clean -y && rm -rf /var/lib/apt/lists/*
RUN apt-get install sqlite3
ENV VIRTUAL_ENV=/opt/python3/venv/base
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN python3 -m pip install --upgrade pip
RUN pip install jupyterlab
RUN python3 -m pip install pandas
RUN pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
COPY entry_point.sh /entry_point.sh
RUN chmod +x /entry_point.sh
# Set entrypoint to bash
ENTRYPOINT ["/entry_point.sh"]
</code></pre>
<p>When building the container, I get the following error:</p>
<pre><code> E: Unable to locate package sqlite3
</code></pre>
<p>When I remove the sqlite3 installation line, the image builds, and when I run the corresponding container and try to install SQLite again with the same command in the CLI, I get the same error. I am using "nvidia/cuda:12.2.0-devel-ubuntu20.04" as the base image, which is supposed to provide an Ubuntu 20.04 environment along with CUDA, yet the apt package manager cannot seem to find a very common tool, SQLite. I also make an <code>apt-get update</code> call in the Dockerfile right at the beginning. Unfortunately, I am not seeing what is missing here. Should I use another base image perhaps?</p>
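<p>For context, a hedged sketch of one likely fix: the first <code>RUN</code> ends with <code>rm -rf /var/lib/apt/lists/*</code>, which deletes the downloaded package index, so the later <code>RUN apt-get install sqlite3</code> has no index to search and reports "Unable to locate package". Refreshing the index in the same layer avoids that:</p>

```dockerfile
# Refresh the package index in the same layer before installing;
# the earlier RUN removed /var/lib/apt/lists/*, so apt has no index left.
RUN apt-get update \
    && apt-get install -y --no-install-recommends sqlite3 \
    && rm -rf /var/lib/apt/lists/*
```

<p>Alternatively, add <code>sqlite3</code> to the package list in the first <code>RUN</code>. Note the apt package <code>sqlite3</code> is the command-line shell; Python's <code>sqlite3</code> module already ships with <code>python3</code>.</p>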
|
<python><linux><docker><sqlite>
|
2023-12-14 12:16:26
| 1
| 3,679
|
Ufuk Can Bicici
|
77,660,071
| 386,641
|
Unable to select _ts from Cosmos db in databricks
|
<p>I am unable to select <code>_ts</code> from Cosmos DB from a Python script written in Databricks.</p>
<pre><code>read_config = {
    "spark.cosmos.accountEndpoint": url,
    "spark.cosmos.accountKey": key,
    "spark.cosmos.database": database,
    "spark.cosmos.container": container,
    "spark.cosmos.read.customQuery": "select * from c"
}
rsltdf = spark.read.format("cosmos.oltp").options(**read_config).load()
display(rsltdf)
</code></pre>
<p>This select returns all fields apart from the auto-generated system fields like <code>_ts</code>, <code>_etag</code>, <code>_rid</code>.</p>
<p>Databricks runtime version - 7.3 LTS (includes Apache Spark 3.0.1, Scala 2.12)</p>
<p>libraries - com.azure.cosmos.spark:azure-cosmos-spark_3-3_2-12:4.18.1, azure-cosmos, pyDocumentDB</p>
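<p>A hedged sketch of a config that may help: the OLTP connector's schema inference drops the system columns by default. The option name below is taken from the azure-cosmos-spark settings reference, so verify it against your connector version before relying on it:</p>

```python
# Hedged sketch: placeholder credentials; the includeSystemProperties option
# asks schema inference to keep _ts, _etag, _rid, etc.
read_config = {
    "spark.cosmos.accountEndpoint": "<url>",
    "spark.cosmos.accountKey": "<key>",
    "spark.cosmos.database": "<database>",
    "spark.cosmos.container": "<container>",
    "spark.cosmos.read.inferSchema.includeSystemProperties": "true",
    "spark.cosmos.read.customQuery": "select * from c",
}
# rsltdf = spark.read.format("cosmos.oltp").options(**read_config).load()
```

<p>Projecting the system columns explicitly in the custom query (e.g. <code>select c.id, c._ts from c</code>) together with this option is another combination worth trying.</p>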
|
<python><databricks><azure-cosmosdb><azure-databricks>
|
2023-12-14 12:14:51
| 1
| 303
|
DeepVeen
|
77,659,976
| 9,851,915
|
Convert .msg file to .eml file in Python
|
<p>I'm writing a wrapper API to send some email reports to a third-party SaaS which parses them and extracts some information from them. My users have to report with <code>.eml</code> or <code>.msg</code> files (I am required to accept both formats).
I just found out my SaaS provider does not deal with <code>.msg</code> files, so if I forward a <code>.msg</code> file with my wrapper, it won't be analyzed by them.</p>
<p>I was thinking a possible workaround: since <code>.eml</code> files work just fine, I could convert the <code>.msg</code> into <code>.eml</code>. In this way I can treat both formats as one and my SaaS will produce their reports.</p>
<p>At the moment I am using the Python library <a href="https://pypi.org/project/extract-msg/" rel="nofollow noreferrer">extract-msg</a> to try to parse the <code>.msg</code> files, but I haven't found a way to convert them entirely right away.
Also <strong>I would like to run this conversion in-memory if possible</strong>, without saving the output of the conversion to a file.</p>
<p>Does anyone know a library I can use, or another way to achieve this result?</p>
|
<python><python-3.x><email><msg><eml>
|
2023-12-14 11:56:23
| 0
| 319
|
giacom0c
|
77,659,718
| 5,525,704
|
What is the proper way to install Python 3.10 or higher on Docker development environments
|
<p>I use the
<code>docker/dev-environments-default:stable-1</code>
as a base image for development environments. However, the OS version (Debian Bullseye) only supports Python versions up to 3.9 via apt, so I can only install newer versions from source.</p>
<p>Are there any other official base images for Docker development environments with a newer OS version that supports the installation of modern Python versions?</p>
|
<python><docker>
|
2023-12-14 11:14:25
| 1
| 1,167
|
Alexander Ershov
|
77,659,569
| 16,545,894
|
Reviews button is not clicking on google map
|
<p>I am trying to click a Google Maps company's <code>Reviews</code> button and after that extract 10 review descriptions with Selenium in Python.</p>
<p><strong>An example link is given below:</strong></p>
<blockquote>
<p><a href="https://www.google.com/maps/place/PakDeZon+B.V./@52.7092108,4.9585823,17z/data=!3m1!4b1!4m6!3m5!1s0x47c8acebaea7c103:0xb4e4a90c507e03a2!8m2!3d52.7092108!4d4.9585823!16s%2Fg%2F11c3k693v6?authuser=0&hl=en&entry=ttu" rel="nofollow noreferrer">https://www.google.com/maps/place/PakDeZon+B.V./@52.7092108,4.9585823,17z/data=!3m1!4b1!4m6!3m5!1s0x47c8acebaea7c103:0xb4e4a90c507e03a2!8m2!3d52.7092108!4d4.9585823!16s%2Fg%2F11c3k693v6?authuser=0&hl=en&entry=ttu</a></p>
</blockquote>
<p><strong>I tried this way.</strong></p>
<pre><code>def get_reviews(url):
    driver.get(url)
    time.sleep(4)
    button = driver.find_element(By.XPATH, '//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]/div[1]/div[2]/div/div[1]/div[2]/span[2]/span[1]/span')
    button.click()
    time.sleep(5)
</code></pre>
<p><a href="https://i.sstatic.net/iJGBj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iJGBj.png" alt="enter image description here" /></a></p>
|
<python><google-maps><selenium-webdriver><web-scraping>
|
2023-12-14 10:48:30
| 2
| 1,118
|
Nayem Jaman Tusher
|
77,659,552
| 10,347,145
|
Google Generative AI API error: "User location is not supported for the API use."
|
<p>I'm trying to use the Google Generative AI <code>gemini-pro</code> model with the following Python code using the <a href="https://ai.google.dev/api/python/google/generativeai" rel="nofollow noreferrer">Google Generative AI Python SDK</a>:</p>
<pre><code>import google.generativeai as genai
import os
genai.configure(api_key=os.environ['GOOGLE_CLOUD_API_KEY'])
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content('Say this is a test')
print(response.text)
</code></pre>
<p>I'm getting the following error:</p>
<pre><code>User location is not supported for the API use.
</code></pre>
<p>I've searched the official documentation and a few Google GitHub repositories for a while but haven't found any location restrictions stated for API usage. I live in Austria, Europe.</p>
<p>Full traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\xxxxx\Desktop\gemini-pro.py", line 7, in <module>
response = model.generate_content('Say this is a test')
File "C:\Users\xxxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\google\generativeai\generative_models.py", line 243, in generate_content
response = self._client.generate_content(request)
File "C:\Users\xxxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\google\ai\generativelanguage_v1beta\services\generative_service\client.py", line 566, in generate_content
response = rpc(
File "C:\Users\xxxxx\AppData\Roaming\Python\Python310\site-packages\google\api_core\gapic_v1\method.py", line 131, in __call__
return wrapped_func(*args, **kwargs)
File "C:\Users\xxxxx\AppData\Roaming\Python\Python310\site-packages\google\api_core\retry.py", line 372, in retry_wrapped_func
return retry_target(
File "C:\Users\xxxxx\AppData\Roaming\Python\Python310\site-packages\google\api_core\retry.py", line 207, in retry_target
result = target()
File "C:\Users\xxxxx\AppData\Roaming\Python\Python310\site-packages\google\api_core\timeout.py", line 120, in func_with_timeout
return func(*args, **kwargs)
File "C:\Users\xxxxx\AppData\Roaming\Python\Python310\site-packages\google\api_core\grpc_helpers.py", line 81, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.FailedPrecondition: 400 User location is not supported for the API use.
</code></pre>
<hr />
<p><strong>EDIT</strong></p>
<p>I finally found the <a href="https://ai.google.dev/available_regions#available_regions" rel="nofollow noreferrer">"Available regions"</a>. It's the last item in the sidebar. As of today, Austria is not on the list. I clicked through Google websites, and there were no location restrictions stated. At least on the official <a href="https://deepmind.google/technologies/gemini/" rel="nofollow noreferrer">Gemini webpage</a>, they could have added an asterisk somewhere to indicate that it is not available everywhere in the world.</p>
<blockquote>
<p>The Gemini API and Google AI Studio are available in the following
countries and territories:</p>
<p>Algeria American Samoa Angola Anguilla Antarctica Antigua and Barbuda
Argentina Armenia Aruba Australia Azerbaijan The Bahamas Bahrain
Bangladesh Barbados Belize Benin Bermuda Bhutan Bolivia Botswana
Brazil British Indian Ocean Territory British Virgin Islands Brunei
Burkina Faso Burundi Cabo Verde Cambodia Cameroon Caribbean
Netherlands Cayman Islands Central African Republic Chad Chile
Christmas Island Cocos (Keeling) Islands Colombia Comoros Cook Islands
Côte d'Ivoire Costa Rica Curaçao Democratic Republic of the Congo
Djibouti Dominica Dominican Republic Ecuador Egypt El Salvador
Equatorial Guinea Eritrea Eswatini Ethiopia Falkland Islands (Islas
Malvinas) Fiji Gabon The Gambia Georgia Ghana Gibraltar Grenada Guam
Guatemala Guernsey Guinea Guinea-Bissau Guyana Haiti Heard Island and
McDonald Islands Honduras India Indonesia Iraq Isle of Man Israel
Jamaica Japan Jersey Jordan Kazakhstan Kenya Kiribati Kyrgyzstan
Kuwait Laos Lebanon Lesotho Liberia Libya Madagascar Malawi Malaysia
Maldives Mali Marshall Islands Mauritania Mauritius Mexico Micronesia
Mongolia Montserrat Morocco Mozambique Namibia Nauru Nepal New
Caledonia New Zealand Nicaragua Niger Nigeria Niue Norfolk Island
Northern Mariana Islands Oman Pakistan Palau Palestine Panama Papua
New Guinea Paraguay Peru Philippines Pitcairn Islands Puerto Rico
Qatar Republic of the Congo Rwanda Saint Barthélemy Saint Kitts and
Nevis Saint Lucia Saint Pierre and Miquelon Saint Vincent and the
Grenadines Saint Helena, Ascension and Tristan da Cunha Samoa São Tomé
and Príncipe Saudi Arabia Senegal Seychelles Sierra Leone Singapore
Solomon Islands Somalia South Africa South Georgia and the South
Sandwich Islands South Korea South Sudan Sri Lanka Sudan Suriname
Taiwan Tajikistan Tanzania Thailand Timor-Leste Togo Tokelau Tonga
Trinidad and Tobago Tunisia Türkiye Turkmenistan Turks and Caicos
Islands Tuvalu Uganda United Arab Emirates United States United States
Minor Outlying Islands U.S. Virgin Islands Uruguay Uzbekistan Vanuatu
Venezuela Vietnam Wallis and Futuna Western Sahara Yemen Zambia
Zimbabwe</p>
</blockquote>
|
<python><google-generativeai><google-gemini>
|
2023-12-14 10:45:25
| 6
| 23,669
|
Rok Benko
|
77,659,458
| 2,724,299
|
Check if any element of list a comes from list b
|
<p>I have a fixed list of strings, <code>pets = ['rabbit','parrot','dog','cat','hamster']</code>, and I need to match it against another list, <code>basket = ['apple','dog','shirt']</code>. This <code>basket</code> will vary.</p>
<p>Typically <code>pets</code> will have around 300 string elements while <code>basket</code> will have around 5.
I need to find whether ANY of the elements of <code>basket</code> are in <code>pets</code> and return true on the first match.</p>
<p>I'm currently doing it like this</p>
<pre><code>for item in basket:
    if item in pets:
        flag = 1
        break
</code></pre>
<p>There must be a faster way</p>
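<p>For reference, the usual way to speed up repeated membership tests like this is to hold the fixed list in a <code>set</code> (O(1) lookups instead of scanning a 300-element list) and let <code>any()</code> stop at the first match, mirroring the <code>break</code>:</p>

```python
pets = ['rabbit', 'parrot', 'dog', 'cat', 'hamster']
basket = ['apple', 'dog', 'shirt']

# Build the set once, since pets is fixed; each lookup is then O(1).
pet_set = set(pets)

# any() short-circuits on the first match, like the original break.
flag = any(item in pet_set for item in basket)
print(flag)  # True, because 'dog' is in both
```

<p>With only ~5 items in <code>basket</code> the absolute difference is small, but the set avoids the O(len(pets)) scan per item.</p>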
|
<python><list>
|
2023-12-14 10:30:47
| 1
| 738
|
Frash
|
77,659,393
| 8,531,215
|
Subtracting a list from a NumPy array is slow
|
<p>Given a numpy array of shape <code>4000x4000x3</code>, which is an image with 3 channels, subtract each channel by some values. I can do it as following:</p>
<p><strong>Implementation 1</strong></p>
<pre><code>values = [0.43, 0.44, 0.45]
image -= values
</code></pre>
<p>Or,</p>
<p><strong>Implementation 2</strong></p>
<pre><code>values = [0.43, 0.44, 0.45]
for i in range(3):
    image[..., i] -= values[i]
</code></pre>
<p>Surprisingly, the latter solution is 20x faster than the former when run on an image of that size, but I don't understand why. Could you please explain what makes the second implementation faster? Thanks.</p>
<p>Simple script for confirmation:</p>
<pre><code>import time
import numpy as np
image = np.random.rand(4000, 4000, 3).astype("float32")
values = [0.43, 0.44, 0.45]
st = time.time()
for i in range(3):
    image[..., i] -= values[i]
et = time.time()
print("Implementation 2", et - st)
st = time.time()
image -= values
et = time.time()
print("Implementation 1", et - st)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>Implementation 2 0.030953645706176758
Implementation 1 0.8593623638153076
</code></pre>
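<p>A likely culprit worth verifying is dtype promotion: a plain Python list of floats becomes a <code>float64</code> array, so the broadcast subtraction on a <code>float32</code> image has to go through a <code>float64</code> intermediate, while the per-channel loop subtracts scalars without that cost. A small sketch showing the promotion and a fix (casting <code>values</code> to the image's dtype up front):</p>

```python
import numpy as np

image = np.random.rand(400, 400, 3).astype("float32")
values = [0.43, 0.44, 0.45]

# A list of Python floats converts to a float64 array, which forces a
# float64 intermediate for the whole broadcast subtraction.
print(np.asarray(values).dtype)  # float64

# Matching the image dtype avoids the promotion entirely.
expected = image.copy()
for i in range(3):
    expected[..., i] -= np.float32(values[i])

image -= np.asarray(values, dtype=image.dtype)
print(np.allclose(image, expected))  # True
```

<p>With the cast in place, the broadcast form should perform comparably to the loop (timings not shown here; worth measuring on the 4000x4000 image).</p>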
|
<python><numpy>
|
2023-12-14 10:21:19
| 1
| 2,373
|
TaQuangTu
|
77,659,281
| 7,191,911
|
Why does subprocess.run fail to run notepad in Win11 due to missing "package identity", when os.system succeeds?
|
<p>On Windows 10 with admin-installed Anaconda distribution, I had no problem using subprocess.run to open Notepad. However, when I now try to run the same command on Windows 11 from a user-installed Miniforge python environment, it fails with the following <code>package identity</code> error:</p>
<pre class="lang-py prettyprint-override"><code>> import subprocess
> subprocess.run([r'C:\WINDOWS\notepad.exe'])
OSError: [WinError 15700] The process has no package identity
</code></pre>
<p>By way of contrast, <code>os.system</code> works fine:</p>
<pre class="lang-py prettyprint-override"><code>> import os
> os.system(r'C:\WINDOWS\notepad.exe')
0
</code></pre>
<p>Does anyone have any idea why this is happening? I can run other programs (presumably because they have a "package identity"), but not "Notepad". Other similar programs like "write.exe" and "calc.exe" work fine. I wonder if it is related to the changes they have made to Notepad for Windows 11. Is there any way to get round this?</p>
<pre class="lang-py prettyprint-override"><code>> import subprocess
> subprocess.run([r'C:\WINDOWS\System32\write.exe'])
CompletedProcess(args=['C:\\WINDOWS\\System32\\write.exe'], returncode=0)
</code></pre>
|
<python><python-3.x><subprocess><windows-11><os.system>
|
2023-12-14 10:07:14
| 0
| 331
|
Diomedea
|
77,659,240
| 6,450,267
|
uwsgi + flask-socketio in multi-processes
|
<p>I am trying to build a websocket server step by step, but I hit a crucial problem when I try to use eventlet or gevent.</p>
<h2>There are my codes for the server:</h2>
<h4>[uwsgi.ini]</h4>
<pre><code>[uwsgi]
chdir = /home/user/websocket
module=websocket:app
callable=app
processes=4
socket=/home/user/websocket/uwsgi.sock
uid = user
gid = user
chmod-socket=664
http-socket = :15000
log-reopen=true
die-on-term=true
master=true
vacuum=true
plugin=python3
virtualenv = /home/user/websocket/web
gevent = 100
</code></pre>
<h4>[websocket.py]</h4>
<pre><code>from flask import Flask
from flask_socketio import SocketIO, send, emit
app = Flask(__name__)
socketio = SocketIO(app, logger=True, engineio_logger=True, cors_allowed_origins='*')
@socketio.on('connect')
def connected():
    print('-'*30, '[connect]', '-'*30)

@socketio.on('message')
def handle_message(data):
    print('-'*30, '[message]', '-'*30)
    print('received message: ' + data)
    send(data)  # Echoes back the received message

@socketio.on_error()  # This decorator handles all errors including connect_error
def handle_error(e):
    if isinstance(e, Exception):
        print('An error occurred:', str(e))
    # Here, you can log the error or perform any necessary action.
    # For example, you can check if it's a connect_error and take appropriate action.

@app.route("/")
def hello():
    return "Connected"

if __name__ == '__main__':
    socketio.run(app)
</code></pre>
<p>I ran:</p>
<pre><code>uwsgi --ini uwsgi.ini
</code></pre>
<p>But I got the error:</p>
<pre><code>Traceback (most recent call last):
File "/home/user/websocket/web/lib/python3.10/site-packages/flask/app.py", line 1478, in __call__
return self.wsgi_app(environ, start_response)
File "/home/user/websocket/web/lib/python3.10/site-packages/flask_socketio/__init__.py", line 43, in __call__
return super(_SocketIOMiddleware, self).__call__(environ,
File "/home/user/websocket/web/lib/python3.10/site-packages/engineio/middleware.py", line 63, in __call__
return self.engineio_app.handle_request(environ, start_response)
File "/home/user/websocket/web/lib/python3.10/site-packages/socketio/server.py", line 428, in handle_request
return self.eio.handle_request(environ, start_response)
File "/home/user/websocket/web/lib/python3.10/site-packages/engineio/server.py", line 271, in handle_request
packets = socket.handle_get_request(
File "/home/user/websocket/web/lib/python3.10/site-packages/engineio/socket.py", line 90, in handle_get_request
return getattr(self, '_upgrade_' + transport)(environ,
File "/home/user/websocket/web/lib/python3.10/site-packages/engineio/socket.py", line 146, in _upgrade_websocket
ws(environ, start_response)
File "/home/user/websocket/web/lib/python3.10/site-packages/engineio/async_drivers/eventlet.py", line 40, in __call__
raise RuntimeError('You need to use the eventlet server. '
RuntimeError: You need to use the eventlet server. See the Deployment section of the documentation for more information.
</code></pre>
<p>So I tried to put "http-websockets = true" in the ini file but got the same error.</p>
<h2>This is the client I tried</h2>
<h4>[index.html]</h4>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>Flask SocketIO Client</title>
<script src="https://cdn.socket.io/4.0.0/socket.io.min.js"></script>
</head>
<body>
<input type="text" id="messageInput" placeholder="Type a message...">
<button onclick="sendMessage()">Send</button>
<div id="messages"></div>
<script>
var socket = io('http://localhost:15000'); // Change to your server's address and port
socket.on('connect', function() {
console.log('Connected to the server.');
});
socket.on('message', function(data) {
console.log('Received message:', data);
document.getElementById('messages').innerText += data + '\n';
});
function sendMessage() {
var message = document.getElementById('messageInput').value;
console.log('sending...:', message);
socket.emit('message', message);
document.getElementById('messageInput').value = '';
}
</script>
</body>
</html>
</code></pre>
<h4>[client.py]</h4>
<pre><code>from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

if __name__ == '__main__':
    app.run(port=5001)  # Run on a different port than your SocketIO server
</code></pre>
<p>In the client page, it shows this error:</p>
<pre><code>POST http://localhost:15000/socket.io/?EIO=4&transport=polling&t=Ondmm8T&sid=BooMMCC89x9oei5_AAAQ 400 (BAD REQUEST)
websocket.js:88 WebSocket connection to 'ws://localhost:15000/socket.io/?EIO=4&transport=websocket&sid=1_GSTZsrzqFdezzbAAAP' failed:
</code></pre>
<p>Please help me to solve this problem.</p>
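<p>For comparison, the Flask-SocketIO deployment documentation describes a gevent-based uWSGI setup; a minimal, untested sketch of how the pieces might fit together (the key points being the gevent loop plus websocket support in uWSGI, and pinning Flask-SocketIO's async mode so it stops auto-selecting eventlet) could look like:</p>

```ini
[uwsgi]
http-socket    = :15000
gevent         = 1000
http-websockets = true
master         = true
module         = websocket:app
```

<p>paired with something like <code>socketio = SocketIO(app, async_mode='gevent_uwsgi')</code> in <code>websocket.py</code>, and gevent (rather than eventlet) installed in the virtualenv. The exact option set depends on your uWSGI build and plugins, so treat this as a starting point, not a verified configuration.</p>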
|
<python><websocket><uwsgi><flask-socketio>
|
2023-12-14 10:00:09
| 1
| 340
|
Soonmyun Jang
|
77,659,211
| 14,487,032
|
Vscode jupyter intellisense not showing doc for tensorflow keras
|
<p>The VS Code Jupyter extension autocompletes, but doesn't show documentation for <code>keras</code> members.</p>
<pre><code>import tensorflow as tf
from tensorflow import keras;
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
</code></pre>
<p>Even if I press <kbd>ctrl+shift+space</kbd>, no documentation appears describing functions and classes like <code>keras.layers.Flatten()</code>.</p>
|
<python>
|
2023-12-14 09:55:27
| 1
| 16,236
|
Abraham
|
77,659,069
| 98,967
|
How to understand and debug memory usage with JAX?
|
<p>I am new to JAX and trying to learn to use it to run some code on a GPU. In my example I want to search for regular grids in a point cloud (for indexing X-ray diffraction data).</p>
<p>With <code>test_mats[400_000,3,3]</code> the memory usage seems to be 15 MB. But with <code>test_mats[500_000,3,3]</code> I get an error about it wanting to allocate 19 GB.</p>
<p>I can't tell whether this is a glitch in JAX, or because I am doing something wrong. My example code and output are below. I guess the problem is that it wants to create a temporary array of (N, 3, gvec.shape[1]) before doing the reduction, but I don't know how to see the memory profile for what happens inside the jitted/vmapped function.</p>
<pre><code>import sys
import os
import jax
import jax.random
import jax.profiler
print('jax.version.__version__',jax.version.__version__)
import scipy.spatial.transform
import numpy as np
# (3,N) integer grid spot positions
hkls = np.mgrid[-3:4, -3:4, -3:4].reshape(3,-1)
Umat = scipy.spatial.transform.Rotation.random( 10, random_state=42 ).as_matrix()
a0 = 10.13
gvec = np.swapaxes( Umat.dot(hkls)/a0, 0, 1 ).reshape(3,-1)
def count_indexed_peaks_hkl(ubi, gve, tol):
    """ See how many gve this ubi can account for """
    hkl_real = ubi.dot(gve)
    hkl_int = jax.numpy.round(hkl_real)
    drlv2 = ((hkl_real - hkl_int)**2).sum(axis=0)
    npks = jax.numpy.where(drlv2 < tol*tol, 1, 0).sum()
    return npks

def testsize(N):
    print("Testing size", N)
    jfunc = jax.vmap(jax.jit(count_indexed_peaks_hkl), in_axes=(0, None, None))
    key = jax.random.PRNGKey(0)
    test_mats = jax.random.orthogonal(key, 3, (N,)) * a0
    dev_gvec = jax.device_put(gvec)
    scores = jfunc(test_mats, gvec, 0.01)
    jax.profiler.save_device_memory_profile(f"memory_{N}.prof")
    os.system(f"~/go/bin/pprof -top {sys.executable} memory_{N}.prof")

testsize(400000)
testsize(500000)
</code></pre>
<p>Output is:</p>
<pre><code>gpu4-03:~/Notebooks/JAXFits % python mem.py
jax.version.__version__ 0.4.16
Testing size 400000
File: python
Type: space
Showing nodes accounting for 15.26MB, 99.44% of 15.35MB total
Dropped 25 nodes (cum <= 0.08MB)
flat flat% sum% cum cum%
15.26MB 99.44% 99.44% 15.26MB 99.44% __call__
0 0% 99.44% 15.35MB 100% [python]
0 0% 99.44% 1.53MB 10.00% _pjit_batcher
0 0% 99.44% 15.30MB 99.70% _pjit_call_impl
0 0% 99.44% 15.30MB 99.70% _pjit_call_impl_python
0 0% 99.44% 15.30MB 99.70% _python_pjit_helper
0 0% 99.44% 15.35MB 100% bind
0 0% 99.44% 15.35MB 100% bind_with_trace
0 0% 99.44% 15.30MB 99.70% cache_miss
0 0% 99.44% 15.30MB 99.70% call_impl_cache_miss
0 0% 99.44% 1.53MB 10.00% call_wrapped
0 0% 99.44% 13.74MB 89.51% deferring_binary_op
0 0% 99.44% 15.35MB 100% process_primitive
0 0% 99.44% 15.30MB 99.70% reraise_with_filtered_traceback
0 0% 99.44% 15.35MB 100% testsize
0 0% 99.44% 1.53MB 10.00% vmap_f
0 0% 99.44% 15.31MB 99.74% wrapper
Testing size 500000
2023-12-14 10:26:23.630474: W external/tsl/tsl/framework/bfc_allocator.cc:296] Allocator
(GPU_0_bfc) ran out of memory trying to allocate 19.18GiB with freed_by_count=0. The caller
indicates that this is not a failure, but this may mean that there could be performance
gains if more memory were available.
Traceback (most recent call last):
File "~/Notebooks/JAXFits/mem.py", line 38, in <module>
testsize(500000)
File "~/Notebooks/JAXFits/mem.py", line 33, in testsize
scores = jfunc( test_mats, gvec, 0.01 )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to
allocate 20596777216 bytes.
--------------------
For simplicity, JAX has removed its internal frames from the traceback of the following
exception. Set JAX_TRACEBACK_FILTERING=off to include these.```
</code></pre>
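<p>The guess about a temporary of shape <code>(N, 3, gvec.shape[1])</code> can be checked with back-of-the-envelope arithmetic: here <code>gvec</code> has 10 × 7³ = 3430 columns (10 random rotations times the <code>mgrid[-3:4]</code> cube of hkls), and a float32 batch of <code>hkl_real</code> for N = 500,000 matrices lands within 0.1% of the reported allocation:</p>

```python
# Estimate the batched intermediate hkl_real that vmap would materialise:
# shape (N, 3, n_gvec) in float32.
n_rotations = 10
n_hkls = 7 ** 3              # mgrid[-3:4] in three dimensions
n_gvec = n_rotations * n_hkls
N = 500_000
bytes_f32 = 4

estimate = N * 3 * n_gvec * bytes_f32
reported = 20_596_777_216    # from the RESOURCE_EXHAUSTED message

print(estimate)  # 20580000000
print(abs(estimate - reported) / reported < 0.01)  # True: within 1%
```

<p>That agreement supports the hypothesis that the batched <code>hkl_real</code> (not the final scores) is what blows up the memory.</p>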
|
<python><jax>
|
2023-12-14 09:33:15
| 1
| 889
|
Jon
|
77,658,886
| 401,226
|
python requests ssl: Client Hello uses TLS v1 and the server refuses the connection, and I don't know why it uses v1
|
<p>Context:
This is python 3.11.6 based on this docker image: python:3.11.6-slim-bullseye</p>
<p>Ultimately this is used for a zeep connection but the problem is the SSL connection is not established.</p>
<p>It is an up to date openssl library</p>
<pre><code>print(ssl.OPENSSL_VERSION)
PyDev console: starting.
OpenSSL 1.1.1w 11 Sep 2023
</code></pre>
<p>I have this test script:</p>
<pre><code>def test_connection(db_no_rollback):
    HTTPConnection.debuglevel = 1
    # Initialize a logger
    logging.basicConfig()
    logging.getLogger().setLevel(logging.DEBUG)
    requests_log = logging.getLogger("requests.packages.urllib3")
    requests_log.setLevel(logging.DEBUG)
    requests_log.propagate = True

    import ssl, socket

    hostname = 'www.handlingandfulfilment.co.uk'
    # hostname = "www.growthpath.com.au"
    # context = ssl.create_default_context()
    context = ssl.SSLContext(ssl.PROTOCOL_TLS)  # create default context
    context.options |= ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1  # disable TLS 1.0 and 1.1
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as ssock:
            print(ssock.getpeercert())
    assert True
</code></pre>
<p>with most hostnames, wireshark shows a line like this:</p>
<pre><code>6412 410.231952331 192.168.41.11 188.166.227.141 TLSv1.3 583 Client Hello
</code></pre>
<p>which makes sense to me, TLSv1.3</p>
<p>but with the problem server, I get this:</p>
<pre><code>421 9.896938114 192.168.41.11 193.109.12.6 TLSv1 571 Client Hello
</code></pre>
<p>the source address (192.168.41.11) is my dev machine.
So my python code is sending hello with TLSv1</p>
<p>This leads to an exception:</p>
<pre><code>self = <ssl.SSLSocket [closed] fd=-1, family=2, type=1, proto=6>, block = False
@_sslcopydoc
def do_handshake(self, block=False):
self._check_connected()
timeout = self.gettimeout()
try:
if timeout == 0.0 and block:
self.settimeout(None)
> self._sslobj.do_handshake()
E ConnectionResetError: [Errno 104] Connection reset by peer
</code></pre>
<p>which I am certain is because the server refuses TLSv1.
Also, the administrators of the server blame my attempt to connect with this version.</p>
<p>Why would this code be attempting to negotiate TLSv1?</p>
<p>Production which is a kubernetes deployment of that image has the same problem, so it is not anything idiosyncratic about my machine or office network.</p>
<h1>update</h1>
<p>I have found a way of making it work through guesswork (with ChatGPT helping a bit), and I don't know why it works. And it is complicated.</p>
<p>after doing this to make sure 'pure' openssl works:</p>
<pre><code>openssl s_client -connect www.handlingandfulfilment.co.uk:8079
</code></pre>
<p>I get a v1.2 connection and a cipher</p>
<pre><code>New, TLSv1.2, Cipher is AES256-GCM-SHA384
</code></pre>
<p>This corresponds to:</p>
<p><code>context.set_ciphers('ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384:!aNULL:!MD5')</code></p>
<p>but this fails with an unknown-certificate error, which <code>certifi</code> can fix.</p>
<p>I got to this; the assert is True:</p>
<pre><code># try a different way
# Create a socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((hostname, 443))

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
import certifi
context.load_verify_locations(certifi.where())
context.set_ciphers('ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384:!aNULL:!MD5')
wrappedSocket = context.wrap_socket(sock, server_hostname=hostname)
assert wrappedSocket
</code></pre>
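<p>As an aside on the version-pinning part: since Python 3.7 the recommended way to restrict protocol versions is <code>minimum_version</code> on the context rather than the <code>OP_NO_*</code> flags. It is also worth knowing that Wireshark's protocol column can reflect the legacy record-layer version of the ClientHello (often displayed as TLSv1 for compatibility), so a "TLSv1 Client Hello" line alone does not necessarily mean the client refuses 1.2/1.3. A minimal sketch of the modern pinning:</p>

```python
import ssl

# Prefer minimum_version over the deprecated OP_NO_TLSv1* option flags.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```

<p>With <code>PROTOCOL_TLS_CLIENT</code>, hostname checking and certificate verification are on by default, which matches the behaviour of the working snippet above.</p>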
|
<python><python-requests><openssl><tls1.2>
|
2023-12-14 09:01:32
| 1
| 7,351
|
Tim Richardson
|
77,658,489
| 7,839,727
|
How to run only new unit tests within an Azure DevOps pipeline
|
<p>Let's assume I have a simple Python app which has a few modules and each module has a set of tests. I've configured my Azure DevOps pipeline so that all tests are performed for each pull request to pass our quality gate.</p>
<p>Is it possible to configure my Azure DevOps pipeline to run only the tests that were added or changed in the pull request?</p>
|
<python><azure-devops><azure-pipelines>
|
2023-12-14 07:45:44
| 2
| 500
|
SerSergious
|
77,658,330
| 14,348,930
|
custom tkinter progress bar not working for "indeterminate" mode
|
<p>I'm trying to create a YouTube video downloader GUI using <code>yt-dlp</code> and the <code>custom-tkinter</code> library.</p>
<p>I successfully implemented a progress bar using <code>CTkProgressBar()</code> inside a <code>CTkFrame</code> in "determinate" mode, which shows the progress of the downloaded file size.</p>
<p>For videos that don't have a file size, I tried to show an "indeterminate" progress bar, but it doesn't seem to work as expected.</p>
<pre><code>app = ctk.CTk()
app.title("App")
app.geometry("300x200")
class ProgressFrame(ctk.CTkFrame):
    def __init__(self, parent: tk.Tk, *args, **kwargs):
        super().__init__(parent, *args, **kwargs)
        self.progressbar = ctk.CTkProgressBar(
            app,
            mode="indeterminate",
            indeterminate_speed=5
        )
        self.progressbar.pack(side="top", expand=False, padx=20, pady=5)

    def update_indeterminate_progressbar(self):
        self.progressbar.step()
        self.progressbar.update()  # Now the progressbar is working properly.

progress_frame = ProgressFrame(app)
# progressbar.set(0) # for "determinate" progressbar

ytdlp_options = {
    'progress_hooks': [progress_frame.update_indeterminate_progressbar],
    'outtmpl': "path/to/output/file",
    'format': "format id",
}

video_info = {
    # YouTube video details.
}

with yt_dlp.YoutubeDL(ytdlp_options) as ytdl:
    ytdl.process_ie_result(video_info, download=True)
# progressbar.set(1) # for "determinate" progressbar
app.mainloop()
</code></pre>
<p>When I run the above code, the progress bar stays still and there is no change in it. If I uncomment <code>progressbar.start()</code>, the progress bar starts moving after the video is downloaded, not during the download.</p>
<p>When I tried removing the <em>progress_hook</em> (<code>update_indeterminate_progress</code>) and running again, I got the same result as above.</p>
<p><strong>EDIT:</strong></p>
<p>The problem seems to be with my <code>progress_frame</code>. It's working now, after I updated the <code>progressbar</code> inside the <code>update_indeterminate_progress</code> method.</p>
<p>TIA</p>
|
<python><tkinter><progress-bar><customtkinter><yt-dlp>
|
2023-12-14 07:10:24
| 1
| 321
|
rangarajan
|
77,658,301
| 2,824,791
|
Python imports fail when invoked by nf-core process
|
<p>I created an nf-core project using <code>nf-core create</code>. I am trying to invoke Python scripts from nf-core processes; this technique worked in my plain Nextflow project and let me avoid writing code inside a process. Most of the imports are failing.</p>
<pre><code>import sys --> works
import pymongo --> fails
from pymongo.mongo_client import MongoClient --> fails
</code></pre>
<p>From the terminal I can see the process is trying to invoke:</p>
<pre><code>nf.prefetch_http.py SRR9686066 "$PWD/"SRR9686066".sra" "mongodb://xxxx:yyyy==@xxxx.mongo.cosmos.azure.com:10255/?ssl=true&retrywrites=false&replicaSet=globaldb&maxIdleTimeMS=120000&appName=@xxxx@"
</code></pre>
<p>This results in error:</p>
<pre><code> *File "/home/q/nf-core-pass/bin/mongodb_logger.py", line 1, in <module>
from google.cloud import secretmanager
ModuleNotFoundError: No module named 'google'*
</code></pre>
<p>However, if I run directly:</p>
<pre><code>/home/q/nf-core-xxx/bin/nf.prefetch_http.py SRR9686066 "$PWD/"SRR9686066".sra" "mongodb://xxxx:yyyy==@xxxx.mongo.cosmos.azure.com:10255/?ssl=true&retrywrites=false&replicaSet=globaldb&maxIdleTimeMS=120000&appName=@xxxx@"
</code></pre>
<p><strong>This works, proving the imports are installed.</strong></p>
<p>If I move the imports to nf.prefetch_http.py, the failure simply moves to nf.prefetch_http.py.</p>
<p><strong>nf-core-xxx/bin/nf.prefetch_http.py</strong></p>
<pre><code>#!/usr/bin/env python3
import sys
import mongodb_logger
import prefetch_http
sra = sys.argv[1]
sraPath = sys.argv[2]
mongodb_connection_string = sys.argv[3]
print(f"nf.prefetch_http: sra: {sra}")
print(f"nf.prefetch_http: sraPath: {sraPath}")
print(f"nf.prefetch_http: mongodb_connection_string: {mongodb_connection_string}")
mongodb_logger.initialize(None,mongodb_connection_string)
prefetch_http.prefetch(sra, sraPath)
</code></pre>
<p><strong>nf-core-xxx/bin/prefetch_http.py</strong></p>
<pre><code>from google.cloud import secretmanager
from pymongo.mongo_client import MongoClient
from pymongo.server_api import ServerApi
import pymongo
import os
import json
import response
import globals
import jsons
import mongodb_logger
import pandas as pd
from dataclasses import dataclass
from typing import List
from typing import Optional, List
from datetime import datetime
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
import globals
mongodb_connection_string = "NULL"
...
</code></pre>
<p>Is there a configuration in nf-core required to make python work correctly?</p>
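<p>When the same script behaves differently under a workflow manager, a useful first check is which interpreter and module search path the process actually resolves; if Nextflow/nf-core launches a different Python (for example one inside a container or conda environment), the "missing" packages are simply not installed for that interpreter. A tiny diagnostic that could be dropped at the top of the failing script:</p>

```python
import sys

# Show which interpreter runs this script and where it looks for modules.
# Compare this output between a direct shell invocation and the nf-core
# process: a mismatch would explain ModuleNotFoundError for packages that
# are installed for your shell's Python but not for the process's Python.
print("interpreter:", sys.executable)
for p in sys.path:
    print("search path:", p)
```

<p>If the interpreters differ, the fix is usually to declare the dependencies in the process's conda/container definition rather than relying on the login environment.</p>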
|
<python><python-3.x><nextflow><nf-core>
|
2023-12-14 07:03:36
| 0
| 5,096
|
jlo-gmail
|
77,658,292
| 13,950,559
|
How to inspect the exact bytes sent by scrapy?
|
<p>I'm crawling a website which seems to be detecting crawlers based on nuances like header item order, since a naive translation from a successful <code>curl</code> request into scrapy got rejected with 403 Forbidden, but then just by changing <code>dict</code> to <code>OrderedDict</code> it got accepted with 200 OK. After a while, it became 403 again, even though <code>curl</code> still works.</p>
<p>So I'm wondering if one can inspect the exact <strong>bytes</strong> sent by scrapy for debugging purpose.</p>
|
<python><web-scraping><scrapy>
|
2023-12-14 07:01:24
| 1
| 488
|
Sylvain Hubert
|
77,658,135
| 3,416,774
|
What is the PageElement type?
|
<p>In this code, I have asserted that the <code>event</code> object must be an instance of the <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#bs4.Tag" rel="nofollow noreferrer" title="Beautiful Soup Documentation — Beautiful Soup 4.12.0 documentation"><code>Tag</code></a> class:</p>
<pre class="lang-py prettyprint-override"><code>from bs4 import BeautifulSoup, Tag
from requests import get
html = open("ticketbox.html", "r")
soup = BeautifulSoup(html, 'html5lib')
event = soup.find('div', attrs={"class": "table-cell event-title"})
try:
    assert isinstance(event, Tag)
    event_url = event.contents[1]['href']
except:
    pass
</code></pre>
<p>Pylance tells me that the type of the <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#contents-and-children" rel="nofollow noreferrer" title="Beautiful Soup Documentation — Beautiful Soup 4.12.0 documentation"><code>contents</code> attribute</a> is <code>list[PageElement]</code>, and thus there is no <a href="https://docs.python.org/3/reference/datamodel.html?highlight=getitem#object.__getitem__" rel="nofollow noreferrer" title="3. Data model — Python 3.12.1 documentation"><code>__getitem__()</code></a> method here. However it still works fine.</p>
<p>What is this type? And how can it still work without this method?</p>
<p><img src="https://i.imgur.com/2Smeb5J.png" alt="" /></p>
|
<python><beautifulsoup>
|
2023-12-14 06:23:14
| 1
| 3,394
|
Ooker
|
77,657,891
| 19,048,408
|
In Polars, how can you add row numbers within windows?
|
<p>How can I add row numbers within windows in a DataFrame?</p>
<p>This example depicts what I want (in the <code>counter</code> column):</p>
<pre class="lang-py prettyprint-override"><code>>>> df = pl.DataFrame([{'groupings': 'a', 'target_count_over_windows': 1}, {'groupings': 'a', 'target_count_over_windows': 2}, {'groupings': 'a', 'target_count_over_windows': 3}, {'groupings': 'b', 'target_count_over_windows': 1}, {'groupings': 'c', 'target_count_over_windows': 1}, {'groupings': 'c', 'target_count_over_windows': 2}, {'groupings': 'd', 'target_count_over_windows': 1}, {'groupings': 'd', 'target_count_over_windows': 2}, {'groupings': 'd', 'target_count_over_windows': 3}])
>>> df
shape: (9, 2)
┌───────────┬───────────────────────────┐
│ groupings ┆ target_count_over_windows │
│ --- ┆ --- │
│ str ┆ i64 │
╞═══════════╪═══════════════════════════╡
│ a ┆ 1 │
│ a ┆ 2 │
│ a ┆ 3 │
│ b ┆ 1 │
│ c ┆ 1 │
│ c ┆ 2 │
│ d ┆ 1 │
│ d ┆ 2 │
│ d ┆ 3 │
└───────────┴───────────────────────────┘
</code></pre>
<p>Obviously the <code>df.with_row_numbers()</code> method exists, but it adds row numbers for the whole dataframe.</p>
|
<python><dataframe><window-functions><python-polars>
|
2023-12-14 05:12:23
| 1
| 468
|
HumpbackWhale194
|
77,657,807
| 1,115,716
|
replacing characters dynamically
|
<p>I have a series of strings I need to alter based on some criteria, and I'm trying to build a list of words to replace in the string like so:</p>
<pre><code>test = "CAPTAIN AMERICA TO SUPERMAN"
delimeters = ['AND', 'TO', 'THEN']
delimited_speaker_string = ""
for delimeter in delimeters:
    delimeter_txt = ' {} '.format(delimeter)
    delimited_speaker_string = test.replace(delimeter_txt, ' @ ')

print(delimited_speaker_string)
</code></pre>
<p>The goal is to have <code>CAPTAIN AMERICA @ SUPERMAN</code> but I just get the original string, why?</p>
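<p>The likely issue is that each pass restarts from <code>test</code>, so the result of one replacement is thrown away by the next; since the last delimiter (<code>'THEN'</code>) is not in the string, the final value is the original. Chaining the replacements on the accumulated string (or doing a single regex pass) behaves as intended:</p>

```python
import re

test = "CAPTAIN AMERICA TO SUPERMAN"
delimeters = ['AND', 'TO', 'THEN']

# Chain the replacements on the accumulated string instead of
# restarting from `test` on every iteration.
delimited = test
for delimeter in delimeters:
    delimited = delimited.replace(' {} '.format(delimeter), ' @ ')
print(delimited)  # CAPTAIN AMERICA @ SUPERMAN

# Or replace all delimiters in one regex pass.
pattern = ' (?:{}) '.format('|'.join(map(re.escape, delimeters)))
print(re.sub(pattern, ' @ ', test))  # CAPTAIN AMERICA @ SUPERMAN
```

<p>The regex version also avoids re-scanning the string once per delimiter.</p>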
|
<python><python-3.x>
|
2023-12-14 04:36:26
| 1
| 1,842
|
easythrees
|
77,657,592
| 23,512,643
|
R: Equivalent of Python "add_volume()" function?
|
<p>I am working with the R programming language.</p>
<p>I found this tutorial/website over here that shows how to fill the volume under the surface in Python/Plotly (<a href="https://community.plotly.com/t/fill-volume-under-the-surface/64944/2" rel="nofollow noreferrer">https://community.plotly.com/t/fill-volume-under-the-surface/64944/2</a>):</p>
<pre><code>import numpy as np
import plotly.express as px
import plotly.graph_objects as go
f = lambda x,y: np.cos(x)+np.sin(y)+4
#surface
x, y = np.meshgrid(
np.linspace(-5, 5, 100),
np.linspace(-5, 5, 100)
)
z = f(x,y)
#patch of surface
lower_x, upper_x = -1, 2
lower_y, upper_y = -2, 4
int_x, int_y = np.meshgrid(
np.linspace(lower_x, upper_x, 50),
np.linspace(lower_y, upper_y, 70)
)
int_z = f(int_x,int_y)
fig= go.Figure()
fig.add_surface(x=x[0], y=y[:, 0], z=z, showscale=False,
colorscale=px.colors.sequential.Pinkyl, opacity=0.6)
fig.add_surface(x=int_x[0],y=int_y[:, 0],z=int_z, showscale=False, colorscale=px.colors.sequential.Agsunset, opacity=1)
lower_z= 0
upper_z= int_z.max()
X, Y, Z = np.mgrid[lower_x:upper_x:50j, lower_y:upper_y:50j, lower_z:upper_z:75j]
vals = Z-f(X,Y)
fig.add_volume(x=X.flatten(), y=Y.flatten(), z=Z.flatten(), value= vals.flatten(),
surface_show=True, surface_count=2,
colorscale=[[0, px.colors.sequential.Agsunset[4]],[1.0, px.colors.sequential.Agsunset[4]]],
showscale=False,
isomin=-upper_z, isomax=0) #isomin=-upper_z corresponds to z=f(x,y)-upper_z, and isomax=0, to z=f(x,y)
fig.update_layout(height=600, width=600, scene_zaxis_range=[0, upper_z+0.5],
scene_camera_eye=dict(x=1.85, y=1.85, z=0.7))
</code></pre>
<p><strong>My Question:</strong> I am trying to replicate this code in R.</p>
<p>Doing some research into this, I noticed that Plotly within R does not have an <code>add_volume()</code> function as in Python. I tried to attempt to work around this and find a new way to do this:</p>
<p><strong>Attempt 1:</strong></p>
<pre><code>library(plotly)
f <- function(x, y) {
cos(x) + sin(y) + 4
}
# Surface
x <- seq(-5, 5, length.out = 100)
y <- seq(-5, 5, length.out = 100)
z <- outer(x, y, f)
# Patch of surface
lower_x <- -1
upper_x <- 2
lower_y <- -2
upper_y <- 4
int_x <- seq(lower_x, upper_x, length.out = 50)
int_y <- seq(lower_y, upper_y, length.out = 70)
int_z <- outer(int_x, int_y, f)
fig <- plot_ly() %>%
add_surface(x = ~x, y = ~y, z = ~z, showscale = FALSE, opacity = 0.6, colors = c('#e31a1c', '#fb9a99')) %>%
add_surface(x = ~int_x, y = ~int_y, z = ~int_z, showscale = FALSE, opacity = 1, colors = c('#a6cee3', '#1f78b4'))
fig <- layout(fig, scene = list(zaxis = list(range = c(0, max(int_z) + 0.5)),
camera = list(eye = list(x = 1.85, y = 1.85, z = 0.7))))
fig
</code></pre>
<p><a href="https://i.sstatic.net/6yHPz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6yHPz.png" alt="enter image description here" /></a></p>
<p><strong>Attempt 2: Identify the Area</strong></p>
<pre><code>library(plotly)
f <- function(x, y) {
cos(x) + sin(y) + 2
}
# surface
x <- seq(-5, 5, length.out = 200)
y <- seq(-5, 5, length.out = 200)
z <- outer(x, y, f)
# patch of surface
lower_x <- -1
upper_x <- 2
lower_y <- -2
upper_y <- 4
int_x <- seq(lower_x, upper_x, length.out = 200)
int_y <- seq(lower_y, upper_y, length.out = 200)
int_z <- outer(int_x, int_y, f)
fig <- plot_ly(x = ~x, y = ~y, z = ~z, type = "surface", showscale = FALSE, opacity = 0.4) %>%
add_surface(x = ~int_x, y = ~int_y, z = ~int_z, showscale = FALSE, opacity = 1) %>%
layout(showlegend = FALSE)
fig
</code></pre>
<p><a href="https://i.sstatic.net/e2jkS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e2jkS.png" alt="enter image description here" /></a></p>
<p>But so far, I can't get anything to work.
Can someone please show me how to do this?</p>
<p>Thanks!</p>
|
<python><r><plotly>
|
2023-12-14 03:05:52
| 2
| 6,799
|
stats_noob
|
77,657,555
| 1,837,400
|
Python TypedDict 'a' or 'b' but not both in inherited TypedDict's
|
<p>I have some data coming in that could look like</p>
<pre><code>{"cloud_url": "https://example.com/file.txt", "filetype": "txt"}
</code></pre>
<p>or it could look like</p>
<pre><code>{"local_filepath": "./file.csv", "filetype": "csv", "delimeter": ","}
</code></pre>
<p>for my typing I currently have</p>
<pre><code>from typing import Literal, TypedDict

class _FileLocal(TypedDict):
    local_filepath: str

class _FileCloud(TypedDict):
    cloud_url: str

_FileCloudOrLocal = _FileLocal | _FileCloud

class _FileTextProcess(_FileCloudOrLocal):
    filetype: Literal['txt']

class _FileCSVProcess(_FileCloudOrLocal):
    filetype: Literal['csv']
    delimeter: str

FileProcess = _FileTextProcess | _FileCSVProcess
</code></pre>
<p>The issue with this version is that <code>_FileTextProcess</code> can't inherit from a union (a class has to inherit from a class).</p>
<p>How can I specify that <code>local_filepath</code> <strong>or</strong> <code>cloud_url</code>, <em>but not both</em>, must be supplied?</p>
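<p>For context, at runtime the exactly-one-of constraint can be enforced with a small helper (a hypothetical sketch, separate from the static-typing question I'm asking):</p>

```python
def validate_source(d: dict) -> None:
    # Exactly one of the two location keys must be present
    has_local = 'local_filepath' in d
    has_cloud = 'cloud_url' in d
    if has_local == has_cloud:  # both present, or neither
        raise ValueError("supply exactly one of 'local_filepath' or 'cloud_url'")

# ok: exactly one location key
validate_source({'cloud_url': 'https://example.com/file.txt', 'filetype': 'txt'})
```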
|
<python><python-typing><python-3.10><typeddict>
|
2023-12-14 02:53:52
| 2
| 462
|
codythecoder
|
77,657,335
| 4,451,521
|
matplotlib plot and its mpld3 version have different markers sizes
|
<p>I have this minimal reproducible example:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import mpld3
# Create a DataFrame with columns id, X, and Y
# num_points = 100
# t = np.linspace(0, 4 * np.pi, num_points)
# data = {'id': range(1, num_points + 1), 'X': 10 * np.cos(t), 'Y': 5 * np.sin(t)} # Adjust amplitude and frequency as needed
# df = pd.DataFrame(data)
num_points = 25000
t = np.linspace(0, 8 * np.pi, num_points) # Adjust the range to control the length of the trajectory
data = {'id': range(1, num_points + 1), 'X': 10 * np.cos(t), 'Y': 5 * np.sin(t)} # Adjust amplitude and frequency as needed
df = pd.DataFrame(data)
# Set linewidth and markersize
linewidth = 0.1
markersize = 2
# Plot the trajectory with modified parameters
plt.plot(df['X'], df['Y'], marker='.', linestyle='-', linewidth=linewidth, markersize=markersize)
plt.title('Trajectory Plot')
plt.xlabel('X Coordinates')
plt.ylabel('Y Coordinates')
# Convert the plot to HTML using mpld3
html_fig = mpld3.fig_to_html(plt.gcf())
# Generate a simple HTML template
html_template = f"""
<!DOCTYPE html>
<html>
<head>
<title>Trajectory Plot</title>
</head>
<body>
<h1>Trajectory Plot</h1>
{html_fig}
</body>
</html>
"""
# <script src="https://mpld3.github.io/js/mpld3.v0.5.2.js"></script>
# Save the HTML to a file
with open('trajectory_plot.html', 'w') as f:
f.write(html_template)
# Display the Matplotlib plot
plt.show()
</code></pre>
<p>With this I get two things:</p>
<ol>
<li>A html page <code>trajectory_plot.html</code> with an interactive plot in it</li>
<li>The plot itself</li>
</ol>
<p>Now observe the plot
<a href="https://i.sstatic.net/pIbgn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pIbgn.png" alt="enter image description here" /></a>
They are very similar.</p>
<p>However if you zoom the plots they differ very much in the marker size</p>
<p>In the plot it self:
<a href="https://i.sstatic.net/dEaEQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dEaEQ.png" alt="enter image description here" /></a></p>
<p>In the embedded plot
<a href="https://i.sstatic.net/Q8nv1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q8nv1.png" alt="enter image description here" /></a></p>
<p>Why is this and how can I correct it?</p>
|
<python><matplotlib><mpld3>
|
2023-12-14 01:21:35
| 0
| 10,576
|
KansaiRobot
|
77,657,210
| 1,049,903
|
Conditionally apply CSS class to wtforms element in Python flask
|
<p>I want to apply an additional <strong>is-invalid</strong> CSS class to a wtforms element only if there is an error present in the form.</p>
<p>I've landed on the following code to achieve this and it works:</p>
<pre><code>{% if form.email.errors %}
    {{ form.email(placeholder="Email", class="form-control is-invalid") }}
{% else %}
    {{ form.email(placeholder="Email", class="form-control") }}
{% endif %}
</code></pre>
<p>However, it's not very concise, and I'm having to repeat the whole form.email element. This could get really messy if the logic were more complicated.</p>
<p>There must be a cleaner way to have only the class value wrapped in logic.</p>
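<p>The part I want to factor out is really just conditional string building, e.g. (plain-Python sketch of the idea, with a hypothetical helper name):</p>

```python
def field_classes(has_errors: bool) -> str:
    # Base class always applies; the error class is appended conditionally
    return "form-control" + (" is-invalid" if has_errors else "")

print(field_classes(True))   # -> form-control is-invalid
print(field_classes(False))  # -> form-control
```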
|
<python><flask><flask-wtforms><wtforms>
|
2023-12-14 00:26:09
| 2
| 6,024
|
Dean Wild
|
77,657,001
| 10,431,629
|
Creating a ratio between variables in a Stacked dataframe grouped by some Columns
|
<p>I have a df as follows:</p>
<pre><code>df_in
G1 G2 TPE QC
A S1 td 2
A S1 ts 4
A S2 td 6
A S2 ts 3
B S1 td 20
B S1 ts 40
B S2 td 60
B S2 ts 30
C S1 td 90
D S2 ts 7
</code></pre>
<p>The output should be grouped by columns G1 & G2, and for each such group a row-wise ratio should be computed for column QC (ts/td, where ts and td are the values of the TPE column), with the new row's TPE labelled "ratio". The output should also contain the original rows as they are. Note that some groups may not have both ts and td values in the TPE column; in such cases there is no ratio, and the ratio should be left blank.</p>
<p>So the output should be this:</p>
<pre><code> df_out
G1 G2 TPE QC
A S1 td 2
A S1 ts 4
A S2 td 6
A S2 ts 3
B S1 td 20
B S1 ts 40
B S2 td 60
B S2 ts 30
C S1 td 90
D S2 ts 7
A S1 ratio 2
A S2 ratio 0.5
B S1 ratio 2
B S2 ratio 0.5
C S1 ratio
D S2 ratio
</code></pre>
<p>I tried the following, but it omits groups C & D, which should appear with blank ratios:</p>
<pre><code>def calculate_ratio(group):
    td_row = group[group['TPE'] == 'td']
    ts_row = group[group['TPE'] == 'ts']
    if not td_row.empty and not ts_row.empty:
        ratio = ts_row['QC'].values[0] / td_row['QC'].values[0]
        return pd.DataFrame({'G1': [group['G1'].iloc[0]],
                             'G2': [group['G2'].iloc[0]],
                             'TPE': ['ratio'],
                             'QC': [ratio]})
    return pd.DataFrame()
grouped = df_in.groupby(['G1', 'G2']).apply(calculate_ratio).reset_index(drop=True)
df_out = pd.concat([df_in, grouped], ignore_index=True)
</code></pre>
<p>Any help will be immensely appreciated.</p>
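<p>The intended ratio logic, sketched without pandas (hypothetical data; groups missing one side keep a blank ratio as <code>None</code>):</p>

```python
# One dict of TPE -> QC per (G1, G2) group
groups = {
    ('A', 'S1'): {'td': 2, 'ts': 4},
    ('C', 'S1'): {'td': 90},   # no 'ts' -> ratio stays blank
}

ratios = {}
for key, vals in groups.items():
    if 'td' in vals and 'ts' in vals:
        ratios[key] = vals['ts'] / vals['td']
    else:
        ratios[key] = None  # keep the group, with a blank ratio

print(ratios)  # -> {('A', 'S1'): 2.0, ('C', 'S1'): None}
```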
|
<python><pandas><group-by><multiple-columns><group-concat>
|
2023-12-13 23:08:17
| 3
| 884
|
Stan
|
77,656,906
| 1,872,565
|
Selenium: check whether an alert is present or not in Robot Framework
|
<p>How do I check whether an alert box exists in Robot Framework?</p>
<pre><code>Alert accept
Alert dismiss
Alert cancel
Handle alert
</code></pre>
<p>In my application, sometimes an alert appears and sometimes it doesn't. How do I handle such a situation?</p>
|
<python><robotframework>
|
2023-12-13 22:37:53
| 1
| 947
|
Learner
|
77,656,892
| 5,111,234
|
OpenMDAO Dymos Simulate Method Calls Setup Multiple Times
|
<p>I currently have an <code>ExplicitComponent</code> that takes as an input altitude and computes atmospheric properties at that altitude. The compute function itself uses another Python library to do this. When the library calculates the atmospheric properties, it loads in a large data file to do it. As such, I have put this loading into the <code>setup()</code> function. To clarify, this data is not read in as an array, it's loaded into the cache to then be used by the library.</p>
<p>However, it seems that the <code>trajectory.simulate</code> method runs the <code>setup()</code> function for every segment in the trajectory, and so the data still gets loaded multiple times and this slows down and sometimes crashes my computation if I have over a certain number of segments.</p>
<p>Is there a way to have this data read in once for the entire simulation to use without doing so outside of the component? I want to do it inside the component as there are some options I have set that influence what data specifically gets loaded (e.g. the time of year and such).</p>
<p><strong>Attempts:</strong> I have tried to put the loading data into an <code>__init__</code> call where the options can be read from there, but the entire component gets put into its own separate problem for each segment of the simulate phase it seems, and this still results in the data being read in multiple times.</p>
<p>I have provided some pseudocode below. Any help would be greatly appreciated!</p>
<pre><code>class AtmosphereCalculator(om.ExplicitComponent):
    def initialize(self):
        *** define some options ***

    def setup(self):
        *** load data based on the options ***
        *** define inputs as altitude and outputs as atmospheric properties ***

    def compute(self, inputs, outputs):
        *** calculate properties with the data ***
</code></pre>
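<p>One pattern I considered is a process-wide cache keyed by the options, so repeated <code>setup()</code> calls reuse the already-loaded data (hypothetical sketch; <code>load_atmosphere_data</code> stands in for the real library call):</p>

```python
from functools import lru_cache

load_calls = 0  # instrumentation to show the load happens once

@lru_cache(maxsize=None)
def load_atmosphere_data(time_of_year: str):
    # The expensive load runs only once per distinct option value,
    # no matter how many components call this from their setup()
    global load_calls
    load_calls += 1
    return {"time_of_year": time_of_year}

for _ in range(5):  # five "segments" each running setup()
    load_atmosphere_data("june")

print(load_calls)  # -> 1
```

<p>This only helps if the segments run in the same process; whether Dymos's simulate keeps them in one process is exactly what I'm unsure about.</p>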
|
<python><openmdao>
|
2023-12-13 22:34:55
| 1
| 679
|
Jehan Dastoor
|
77,656,542
| 1,186,991
|
Changing the format of a screenshot in Jupyter Notebooks
|
<p>I can display screenshots in my notebook by just using the snipping tool and then pasting them in.</p>
<p>But if I want to format it, I keep getting broken links. I've tried the following and I'm not sure what is wrong with it:</p>
<pre><code><p>
<img src="C:\Users\mannf\Pictures\Screenshots\Screenshot 2023-12-13 103135.png" alt="Screenshot" style="width:600px;height:200px;border:2px solid black;border-radius:10px;">
</p>
</code></pre>
<p>or</p>
<pre><code><p>
<img src="image-4.png" alt="Image 4" style="width:600px;height:200px;">
</p>
</code></pre>
<p><a href="https://i.sstatic.net/0HajK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0HajK.png" alt="Screenshot of whats displayed when i run the above code." /></a></p>
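<p>One way I've seen to avoid broken <code>src</code> links entirely is inlining the image as a base64 data URI (sketch with dummy bytes standing in for my real screenshot):</p>

```python
import base64

png_bytes = b"\x89PNG\r\n\x1a\n"  # stand-in for the real file contents
encoded = base64.b64encode(png_bytes).decode("ascii")
img_tag = f'<img src="data:image/png;base64,{encoded}" style="width:600px;height:200px;">'

print(img_tag.startswith('<img src="data:image/png;base64,'))  # -> True
```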
|
<python><jupyter-notebook>
|
2023-12-13 21:08:24
| 1
| 3,631
|
Hans Rudel
|
77,656,496
| 16,545,894
|
Scrape a company's opening and closing times on Google Maps
|
<p>I am trying to get a company's opening and closing times from Google Maps with Selenium and Python.</p>
<p><strong>An example link is given below:</strong></p>
<blockquote>
<p><a href="https://www.google.com/maps/place/Solar+Project+Development+%26+Engineering+Ltd./@23.7988032,90.3525855,17z/data=!3m1!4b1!4m6!3m5!1s0x3755c181e4e00229:0xa17c29dbd5a924f6!8m2!3d23.7988032!4d90.3525855!16s%2Fg%2F11q41jstmd?authuser=0&hl=en&entry=ttu" rel="nofollow noreferrer">https://www.google.com/maps/place/Solar+Project+Development+%26+Engineering+Ltd./@23.7988032,90.3525855,17z/data=!3m1!4b1!4m6!3m5!1s0x3755c181e4e00229:0xa17c29dbd5a924f6!8m2!3d23.7988032!4d90.3525855!16s%2Fg%2F11q41jstmd?authuser=0&hl=en&entry=ttu</a></p>
</blockquote>
<p><strong>Here is my code :</strong></p>
<pre><code>class GoogleMapScraperInformation:
    def __init__(self):
        self.headless = False
        self.driver = None

    def config_driver(self):
        options = Options()
        options.add_argument("--start-maximized")
        s = Service(ChromeDriverManager().install())
        driver = webdriver.Chrome(service=s, options=options)
        self.driver = driver

    def get_info(self, url):
        self.driver.get(url)
        try:
            companey_name = self.driver.find_element(By.CLASS_NAME, "lfPIob").text
        except:
            companey_name = ''
        # print(companey_name,"***********")
        try:
            address = self.driver.find_element(By.CLASS_NAME, "kR99db").text
        except:
            address = ''
        # print(address,"***********")
</code></pre>
<p><strong>I also tried scrolling down one step, but I could not find the element:</strong></p>
<pre><code> image_element = self.driver.find_element(By.CLASS_NAME, "lvtCsd img")
img_result = WebDriverWait(self.driver, 10).until(EC.presence_of_element_located((By.XPATH,'//*[@id="QA0Szd"]/div/div/div[1]/div[2]/div/div[1]/div/div/div[7]/div[5]/a/div[1]/div[2]/div[1]')))
self.driver.execute_script("arguments[0].scrollIntoView(true);",img_result)
</code></pre>
<p><strong>I want to click here :</strong></p>
<p><a href="https://i.sstatic.net/19vpk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19vpk.png" alt="enter image description here" /></a></p>
<p><strong>and after that, I want to get all the opening and closing time</strong></p>
<p><a href="https://i.sstatic.net/C5y8Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C5y8Q.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver><web-scraping><data-mining>
|
2023-12-13 20:56:43
| 1
| 1,118
|
Nayem Jaman Tusher
|
77,656,478
| 1,332,811
|
Inline images are showing up as attachments in Gmail inbox
|
<p>I am sending emails in Python with CID inline images. The inline images work fine: they show up embedded in the email and the email itself does not appear to have attachments.</p>
<p>However, in the Gmail inbox view, it shows the inline image as an attachment:</p>
<p><a href="https://i.sstatic.net/qnbCA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qnbCA.png" alt="GMail Inbox example" /></a></p>
<p>Here is a simplified version of my email sending code:</p>
<pre><code>message = MIMEMultipart()
message['Subject'] = emailSubject
message['From'] = fromAddress
message['To'] = toAddress
# the html contains <img src="cid:INLINE_IMAGE"/>
part = MIMEText(emailHtml, 'html')
message.attach(part)
fp = open(imagePath, 'rb')
image = MIMEImage(fp.read())
fp.close()
image.add_header('Content-ID', "INLINE_IMAGE")
image.add_header('Content-Disposition', 'inline', filename="inline-image.png")
message.attach(image)
</code></pre>
<p>If I take out the <code>Content-Disposition</code> header, the attachment is labeled <code>noname</code> in the Gmail inbox.</p>
<p>Is there a way to make this image not show up as an attachment in the inbox?</p>
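<p>For reference, the structure I'm experimenting with next is a <code>multipart/related</code> container for the HTML + image pair, which I've read some clients use to decide what counts as an attachment (a sketch with dummy image bytes, not verified against Gmail):</p>

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

# 'related' signals that the image belongs to the HTML body
message = MIMEMultipart('related')
message.attach(MIMEText('<img src="cid:INLINE_IMAGE"/>', 'html'))

image = MIMEImage(b'\x89PNG\r\n\x1a\n', _subtype='png')  # dummy PNG bytes
image.add_header('Content-ID', '<INLINE_IMAGE>')
image.add_header('Content-Disposition', 'inline')
message.attach(image)

flat = message.as_string()
print('multipart/related' in flat)  # -> True
```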
|
<python><email><email-attachments><mime>
|
2023-12-13 20:53:03
| 1
| 667
|
leros
|
77,656,400
| 3,825,495
|
reference module from outer folder
|
<p>I have a module that I'm developing called "py_lopa". Typically, while developing it, I'll write a small script in the super-folder one layer up from "py_lopa" and run it from there. However, this higher folder is getting cluttered with small files.</p>
<p>I'd like to somehow move the scripts to a subfolder that's at the same level as "py_lopa". However, I'm unsure how to reference the py_lopa module from this other folder.</p>
<p>example structure:</p>
<pre><code>"src_code" (highest level folder)
"py_lopa" (sub-folder - contains entire module)
"scripts_for_testing" (sub-folder - want to move all "test_script.py" files here)
"test_script_001.py"
"test_script_002.py"
.
.
.
</code></pre>
<p>Inside one of the test_script files, it currently imports py_lopa modules as needed:</p>
<pre><code>from py_lopa.model_interface import Model_Interface
from py_lopa.data.tests_enum import Tests_Enum
from py_lopa.data.tables import Tables
.
.
.
</code></pre>
<p>However, if I move the test_script files to the "scripts_for_testing" folder, I can't reference the py_lopa module.</p>
<p>How can I reference the "py_lopa" module from a script inside the "scripts_for_testing" folder?</p>
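<p>The workaround I'm currently trying is prepending the parent folder to the import path at the top of each test script (hedged sketch; in a real test_script file the base would be <code>Path(__file__).resolve().parent.parent</code>):</p>

```python
import os
import sys

# Stand-in for the folder that contains both py_lopa and
# scripts_for_testing (i.e. src_code/) in my layout above
src_code = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
if src_code not in sys.path:
    sys.path.insert(0, src_code)

print(src_code in sys.path)  # -> True
```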
|
<python><module>
|
2023-12-13 20:33:57
| 1
| 616
|
Michael James
|
77,656,247
| 102,221
|
ogg audio file is not played by pygame
|
<p>I have the following version of the pygame package: pygame <code>2.5.0</code> (SDL <code>2.28.0</code>) on Python <code>3.10.8</code>.</p>
<p>An .ogg file generated by a Telegram client and uploaded to the server plays successfully in the VLC application, but the following Python code</p>
<pre><code>import pygame

audio_file_path = r'<OGG PATH>'
pygame.mixer.init()
try:
    pygame.mixer.music.load(audio_file_path)
except pygame.error as e:
    print("Error:", e)
</code></pre>
<p>produces the following error:</p>
<pre><code>Error: stb_vorbis_open_rwops: VORBIS_invalid_first_page
</code></pre>
<p>What may be the reason for the error, and what additional tools would allow me to check the ogg file's validity?</p>
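<p>As a first sanity check on the file itself, every Ogg stream should start with the <code>OggS</code> capture pattern, which can be probed directly (minimal sketch on in-memory examples; note also that Telegram voice notes are typically Ogg <em>Opus</em> rather than Vorbis, which may itself explain a Vorbis decoder rejecting the first page):</p>

```python
def looks_like_ogg(data: bytes) -> bool:
    # An Ogg stream's first page must begin with the 'OggS' magic bytes
    return data[:4] == b"OggS"

print(looks_like_ogg(b"OggS\x00..."))  # -> True
print(looks_like_ogg(b"RIFF...."))     # -> False
```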
|
<python><pygame><sdl>
|
2023-12-13 19:54:09
| 1
| 1,113
|
BaruchLi
|
77,656,160
| 2,157,783
|
Pytorch, random number generators and devices
|
<p>I always put on top of my Pytorch's notebooks a cell like this:</p>
<pre><code>device = (
    "cuda"
    if torch.cuda.is_available()
    else "mps"
    if torch.backends.mps.is_available()
    else "cpu"
)
torch.set_default_device(device)
</code></pre>
<p>In this convenient way, I can use the GPU, if the system has one, MPS on a Mac, or the cpu on a vanilla system.</p>
<p><strong>EDIT</strong>: Please note that, due to <code>torch.set_default_device(device)</code>, <strong>any tensor is created, by default, on the <code>device</code></strong>, e.g.:</p>
<p><a href="https://i.sstatic.net/DH1iq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DH1iq.png" alt="enter image description here" /></a></p>
<p>Now, I'm trying to use a Pytorch generator:</p>
<p><code>g = torch.Generator(device=device).manual_seed(1)</code></p>
<p>and then:</p>
<p><code>A = torch.randn((3, 2), generator=g)</code></p>
<p>No problem whatsoever on my Macbook (where the device is MPS) or on systems with cpu only. But on my Cuda-enabled desktop, I get:</p>
<p><code>RuntimeError: Expected a 'cpu' device type for generator but found 'cuda'</code></p>
<p>Any solution? If I just abstain from specifying the device for the generator, it will use the cpu, but <strong>then the tensor <code>A</code> will be created on the CPU too...</strong></p>
|
<python><pytorch><gpu>
|
2023-12-13 19:35:47
| 1
| 680
|
MadHatter
|
77,656,117
| 1,473,517
|
when can you use numpy arrays as dict values in numba?
|
<p>I am confused by the type rules for numba dicts. Here is an MWE that works:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import numba as nb
@nb.njit
def foo(a, b, c):
    d = {}
    d[(1,2,3)] = a
    return d

a = np.array([1, 2])
b = np.array([3, 4])
c = np.array([5, 6])  # foo takes three arguments
t = foo(a, b, c)
</code></pre>
<p>But if I change the definition of foo as follows this fails:</p>
<pre class="lang-py prettyprint-override"><code>@nb.njit
def foo(a, b, c):
    d = {}
    d[(1,2,3)] = np.array(a)
    return d
</code></pre>
<pre><code>TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<built-in function array>) found for signature:
>>> array(array(int64, 1d, C))
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload in function 'impl_np_array': File: numba/np/arrayobj.py: Line 5384.
With argument(s): '(array(int64, 1d, C))':
Rejected as the implementation raised a specific error:
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<intrinsic np_array>) found for signature:
>>> np_array(array(int64, 1d, C), none)
There are 2 candidate implementations:
- Of which 2 did not match due to:
Intrinsic in function 'np_array': File: numba/np/arrayobj.py: Line 5358.
With argument(s): '(array(int64, 1d, C), none)':
Rejected as the implementation raised a specific error:
TypingError: array(int64, 1d, C) not allowed in a homogeneous sequence
raised from /home/raph/python/mypython3.10/lib/python3.10/site-packages/numba/core/typing/npydecl.py:482
During: resolving callee type: Function(<intrinsic np_array>)
During: typing of call at /home/raph/python/mypython3.10/lib/python3.10/site-packages/numba/np/arrayobj.py (5395)
File "../../python/mypython3.10/lib/python3.10/site-packages/numba/np/arrayobj.py", line 5395:
def impl(object, dtype=None):
return np_array(object, dtype)
^
raised from /home/raph/python/mypython3.10/lib/python3.10/site-packages/numba/core/typeinfer.py:1086
During: resolving callee type: Function(<built-in function array>)
During: typing of call at <ipython-input-99-e05437a34ab9> (4)
File "<ipython-input-99-e05437a34ab9>", line 4:
def foo(a, b, c):
<source elided>
d = {}
d[(1,2,3)] = np.array(a)
^
</code></pre>
<p>Why is this?</p>
|
<python><numba>
|
2023-12-13 19:28:42
| 1
| 21,513
|
Simd
|
77,656,084
| 13,400,029
|
Python: Regular expression to match alpha-numeric characters not working on LeetCode compiler
|
<p>My code (shown in the screenshot below) is not working with this solution.</p>
<p><a href="https://i.sstatic.net/BAhbA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BAhbA.png" alt="enter image description here" /></a></p>
<p>LeetCode question 125, Valid Palindrome:
A phrase is a palindrome if, after converting all uppercase letters into lowercase letters and removing all non-alphanumeric characters, it reads the same forward and backward. Alphanumeric characters include letters and numbers.</p>
<p>Given a string s, return true if it is a palindrome, or false otherwise.</p>
<p>Example 1:</p>
<p>Input: s = "A man, a plan, a canal: Panama"
Output: true
Explanation: "amanaplanacanalpanama" is a palindrome.
Example 2:</p>
<p>Input: s = "race a car"
Output: false
Explanation: "raceacar" is not a palindrome.
Example 3:</p>
<p>Input: s = " "
Output: true
Explanation: s is an empty string "" after removing non-alphanumeric characters.
Since an empty string reads the same forward and backward, it is a palindrome.</p>
<p>Constraints:</p>
<p>1 <= s.length <= 2 * 10^5
s consists only of printable ASCII characters.</p>
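<p>For reference, the intended filtering can be written with a regular expression over the lowercased string (a sketch of the approach, not the code from my screenshot):</p>

```python
import re

def is_palindrome(s: str) -> bool:
    # Keep only alphanumeric characters, lowercased, then compare
    # against the reversed string
    cleaned = re.sub(r'[^a-z0-9]', '', s.lower())
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # -> True
print(is_palindrome("race a car"))                      # -> False
print(is_palindrome(" "))                               # -> True
```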
|
<python><python-3.x>
|
2023-12-13 19:20:54
| 1
| 619
|
Delicia Fernandes
|
77,656,015
| 11,192,275
|
Problem using the `geom_col` in plotnine with Linux (Ubuntu 22.04.3 LTS) and Windows
|
<p>When I try to run the following reproducible example using <code>plotnine</code> I get an error:</p>
<pre><code>import pandas as pd
from plotnine import (
ggplot, aes,
geom_col
)
from plotnine.data import mtcars
df = (mtcars.
loc[:, ["name", "disp"]].
drop_duplicates(subset="name"))
(ggplot(data=df,
mapping=aes(x="name", y="disp")) +
geom_col())
</code></pre>
<p>This is the complete error I get:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Usuario\ANACON~1\Lib\site-packages\plotnine\ggplot.py", line 114, in __repr__
figure = self.draw(show=True)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Usuario\ANACON~1\Lib\site-packages\plotnine\ggplot.py", line 224, in draw
self._build()
File "C:\Users\Usuario\ANACON~1\Lib\site-packages\plotnine\ggplot.py", line 336, in _build
layers.compute_position(layout)
File "C:\Users\Usuario\ANACON~1\Lib\site-packages\plotnine\layer.py", line 479, in compute_position
l.compute_position(layout)
File "C:\Users\Usuario\ANACON~1\Lib\site-packages\plotnine\layer.py", line 345, in compute_position
data = self.position.compute_layer(data, params, layout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Usuario\ANACON~1\Lib\site-packages\plotnine\positions\position.py", line 79, in compute_layer
return groupby_apply(data, "PANEL", fn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Usuario\ANACON~1\Lib\site-packages\plotnine\utils.py", line 599, in groupby_apply
lst.append(func(d, *args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Usuario\ANACON~1\Lib\site-packages\plotnine\positions\position.py", line 77, in fn
return cls.compute_panel(pdata, scales, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Usuario\ANACON~1\Lib\site-packages\plotnine\positions\position_stack.py", line 103, in compute_panel
nl_trans = get_non_linear_trans(scales.y)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Usuario\ANACON~1\Lib\site-packages\plotnine\positions\position_stack.py", line 97, in get_non_linear_trans
if _is_non_linear_trans(sc.trans):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Usuario\ANACON~1\Lib\site-packages\plotnine\positions\position_stack.py", line 87, in _is_non_linear_trans
trans.dataspace_is_numerical and tname not in linear_transforms
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'identity_trans' object has no attribute 'dataspace_is_numerical'
</code></pre>
<p>My session info is the following:</p>
<pre><code>-----
pandas 2.1.4
plotnine 0.12.2
session_info 1.0.0
-----
Python 3.11.6 | packaged by conda-forge | (main, Oct 3 2023, 10:29:11) [MSC v.1935 64 bit (AMD64)]
Windows-10-10.0.22621-SP0
-----
Session information updated at 2023-12-13 13:38
</code></pre>
<p>I think the problem is specific to Windows, considering that on Linux Ubuntu 22.04.3 LTS you get some warnings related to the <code>mizani</code> module, but it works fine and generates the plot.</p>
<p>However, on Linux Ubuntu 22.04.3 LTS, if you swap the variables assigned to each axis (<code>x="disp", y="name"</code>), you get a strange plot using the following code:</p>
<pre><code>import pandas as pd
from plotnine import (
ggplot, aes,
geom_col
)
from plotnine.data import mtcars
df = (mtcars.
loc[:, ["name", "disp"]].
drop_duplicates(subset="name"))
(ggplot(data=df,
mapping=aes(x="disp", y="name")) +
geom_col())
</code></pre>
<p><a href="https://i.sstatic.net/Z5Abg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z5Abg.jpg" alt="enter image description here" /></a></p>
<p>Where my session info in the case of Linux is the following:</p>
<pre><code>-----
pandas 2.1.1
plotnine 0.12.1
session_info 1.0.0
-----
Python 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0]
Linux-6.2.0-37-generic-x86_64-with-glibc2.35
-----
Session information updated at 2023-12-13 13:53
</code></pre>
<p>Does anyone know a possible solution for Windows, and how to fix the plot on Linux?</p>
|
<python><plotnine>
|
2023-12-13 19:05:29
| 1
| 456
|
luifrancgom
|
77,655,929
| 1,711,088
|
Selenium. Python. Chrome. How to get current (active/focused) tab?
|
<p>Ubuntu + Python3 + Selenium + Chromium</p>
<p>I'm using Selenium in semi-automated tests. Some of the work is done by the user, and some is automated by Python. And I have a problem: I want to know which tab is active right now. The user can change tabs, and the script should work with the active tab only. I found lots of answers on how to open tabs and "switch to" a tab, but none of that is what I need. I don't want to switch to some tab; I want to know which tab is active right now.</p>
<ul>
<li>I have an open browser with lots of pages.</li>
<li>The user can interact with it; they can change tabs and open/close anything.</li>
<li>Then the user goes back to the "App window" (the app's interface, not Chrome) and clicks a button.</li>
<li>The app executes a function that interacts with the current (active/focused) page in Chrome.</li>
</ul>
<p>To do that, I need to know which page is active right now: not active in Selenium, but current/active/focused in Chrome. I need to get the current/active/focused tab in Chrome.</p>
<p>PS: I don't know what it's called exactly; that's why I write current/active/focused. For me this is the current tab in the browser, but perhaps this is the wrong name in Chrome's terminology.</p>
|
<python><selenium-webdriver><tabs><chromium>
|
2023-12-13 18:49:31
| 0
| 976
|
Massimo
|
77,655,862
| 6,168,639
|
Pytest parametrized fixture not returning yielded object, only returning tuple?
|
<p>I'm working to setup a fixture that will open a headless (or headed) instance of a web browser for some end to end testing. I am using Django with pytest and pytest-django.</p>
<p>I'd like to be able to parametrize the test class with the browsers I'd like to use and whether or not they are headless, in a fashion like this:</p>
<pre><code>@pytest.mark.parametrize("browser_fixture", [("chrome", False)])
@pytest.mark.slow()
class TestEndToEnd:
</code></pre>
<p>In my <code>conftest.py</code> I have set it up like this:</p>
<pre><code>def create_browser(browser_name, headless=True):
    if browser_name == "chrome":
        options = ChromeOptions()
        if headless:
            options.add_argument("--no-sandbox")
            options.add_argument("--headless")
            options.add_argument("--disable-dev-shm-usage")
            options.add_argument("--disable-gui")
        return webdriver.Chrome(options=options)
    elif browser_name == "firefox":
        options = FirefoxOptions()
        if headless:
            options.add_argument("--headless")
            options.add_argument("--disable-gui")
        return webdriver.Firefox(options=options)
    else:
        raise ValueError(f"Unsupported browser: {browser_name}")

@pytest.fixture(scope="class")
def browser_fixture(request):
    browser_name, headless = request.param
    browser = create_browser(browser_name, headless=headless)
    yield browser
    browser.quit()
</code></pre>
<p>My test looks like this:</p>
<pre><code>@pytest.mark.parametrize("browser_fixture", [("chrome", False)])
@pytest.mark.slow()
class TestEndToEnd:
    @pytest.fixture(autouse=True)
    def setup(self, browser_fixture, live_server):
        management.call_command("create_project_data", verbosity=0)
        self.browser = browser_fixture
        # At this point I would expect the browser to open
        # but instead self.browser is just the tuple: `('chrome', False)`
        self.live_server_url = live_server.url

    def login_user(
        self, username=None, password="test", user=None, browser=None
    ):
        if browser is None:
            raise Exception("No browser provided")
        # The logic to login here

    def test_as_admin(self):
        standard_user = User.objects.first()
        self.login_user(standard_user.username)
        self.browser.get(self.live_server_url + "/mills/")
        assert "Mills" in self.browser.title
</code></pre>
<p>The issue appears to be in the <code>setup</code> method. It should be opening a browser and then the test should be using that to call the login (to login the user) and do the testing it needs to.</p>
<p>Instead, the <code>browser_fixture</code> fixture is just returning the tuple.</p>
<p>Any ideas on where to go here?</p>
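<p>For what it's worth, a minimal sketch of what I suspect is the missing piece: passing <code>indirect=True</code> to <code>parametrize</code>, so pytest routes the tuple through the fixture instead of handing it to the test directly. The <code>Browser</code> namedtuple below is a hypothetical stand-in for a real Selenium driver:</p>

```python
# Sketch of indirect parametrization. Browser is a hypothetical
# stand-in for a real Selenium WebDriver.
import collections

import pytest

Browser = collections.namedtuple("Browser", "name headless")


def create_browser(name, headless=True):
    # stand-in for the real factory that builds a WebDriver
    return Browser(name, headless)


@pytest.fixture(scope="class")
def browser_fixture(request):
    name, headless = request.param   # the ("chrome", False) tuple lands here
    browser = create_browser(name, headless)
    yield browser                    # the test receives this, not the tuple
    # browser.quit() would go here


# indirect=True is the key difference: without it, the parameter tuple
# shadows the fixture and reaches the test unchanged.
@pytest.mark.parametrize("browser_fixture", [("chrome", False)], indirect=True)
class TestEndToEnd:
    def test_param_is_resolved(self, browser_fixture):
        assert browser_fixture == Browser("chrome", False)
```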
|
<python><testing><pytest><pytest-django>
|
2023-12-13 18:37:12
| 1
| 722
|
Hanny
|
77,655,841
| 2,961,550
|
Strip whitespace of field for peewee Model
|
<p>In <code>peewee</code> I have a model where I want to <code>strip</code> the white space of some fields when an instance is created. Is it possible to do this?</p>
<p>e.g.</p>
<pre><code>import peewee as pw

class Person(pw.Model):
    email = pw.CharField()
    name = pw.CharField()

mom = Person(email=" mother@gmail.com ", name=" Stella Bird ")  # <- white space should be stripped automatically
</code></pre>
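<p>One approach I'm considering, sketched here without peewee so it stays self-contained: strip string keyword arguments in <code>__init__</code> via a mixin before handing them to the model base class. With peewee one would presumably write <code>class Person(StripMixin, pw.Model)</code>, since <code>pw.Model.__init__</code> also accepts field values as keyword arguments; <code>Base</code> below is a stand-in:</p>

```python
# Sketch of a whitespace-stripping mixin. Base is a hypothetical
# stand-in for pw.Model, which likewise takes field values as kwargs.
class StripMixin:
    def __init__(self, *args, **kwargs):
        cleaned = {
            key: value.strip() if isinstance(value, str) else value
            for key, value in kwargs.items()
        }
        super().__init__(*args, **cleaned)


class Base:
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)


class Person(StripMixin, Base):
    pass


mom = Person(email=" mother@gmail.com ", name=" Stella Bird ")
print(mom.email)  # -> mother@gmail.com
```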
|
<python><peewee>
|
2023-12-13 18:31:47
| 1
| 1,417
|
bicarlsen
|
77,655,593
| 3,941,671
|
Access Thread args in Thread.run()
|
<p>I have a question based on an example from this <a href="https://superfastpython.com/thread-producer-consumer-pattern-in-python/#Shared_Buffer" rel="nofollow noreferrer">guide</a> (see code below):</p>
<p>In this example, when the thread is created a callback function and the <code>queue</code> are given as arguments. The callback functions uses <code>queue</code> as an argument. As far as I understood, this is possible because the thread arguments <code>args=(queue,)</code> are forwarded to the callback function elementwise.</p>
<p>My questions:</p>
<p>When writing my own Thread-class <code>class MyThread(Thread)</code> and overriding the <code>run()</code>-function, can I then also access the thread arguments like in the callback function?</p>
<pre><code>class MyThread(Thread):
    def run(self, queue) -> None:
        pass

queue = queue.Queue()
myThread = MyThread(args=(queue,))
myThread.start()
</code></pre>
<p>How do I use <code>kwargs</code>?</p>
<pre><code>class MyThread(Thread):
    def run(self, ????) -> None:
        pass

queue = queue.Queue()
myThread = MyThread(kwargs={'queue': queue})
myThread.start()
</code></pre>
<p>Does it make sense, to pass i.e. the <code>queue</code> argument directly to MyThread and hold the reference to this queue in a class variable?</p>
<pre><code>class MyThread(Thread):
    def __init__(self, queue: Queue, group=None, target=None, name=None,
                 args=(), kwargs=None, *, daemon=None) -> None:
        super().__init__(group, target, name, args, kwargs, daemon=daemon)
        self.queue = queue

    def run(self) -> None:
        # do something with self.queue
        pass

queue = queue.Queue()
myThread = MyThread(queue=queue)
myThread.start()
</code></pre>
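<p>For reference, a runnable sketch of the third option, which is the pattern I expect to work: store the queue on the instance in <code>__init__</code> and use it from <code>run()</code>, since the built-in <code>Thread.run()</code> takes no extra arguments:</p>

```python
# Sketch: pass the queue explicitly and keep a reference on the
# instance; run() takes no arguments, so the stored attribute is the
# usual way to reach it.
import queue
from threading import Thread


class MyThread(Thread):
    def __init__(self, work_queue, **thread_kwargs):
        super().__init__(**thread_kwargs)
        self.queue = work_queue
        self.seen = []

    def run(self):
        while True:
            item = self.queue.get()
            if item is None:        # sentinel: stop the loop
                break
            self.seen.append(item)


q = queue.Queue()
t = MyThread(q)
t.start()        # note the call: start(), not a bare t.start
q.put("hello")
q.put(None)
t.join()
print(t.seen)    # -> ['hello']
```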
<p>Example code from the guide:</p>
<pre><code># SuperFastPython.com
# example of one producer and one consumer with threads
from time import sleep
from random import random
from threading import Thread
from queue import Queue

# producer task
def producer(queue):
    print('Producer: Running')
    # generate items
    for i in range(10):
        # generate a value
        value = random()
        # block, to simulate effort
        sleep(value)
        # create a tuple
        item = (i, value)
        # add to the queue
        queue.put(item)
        # report progress
        print(f'>producer added {item}')
    # signal that there are no further items
    queue.put(None)
    print('Producer: Done')

# consumer task
def consumer(queue):
    print('Consumer: Running')
    # consume items
    while True:
        # get a unit of work
        item = queue.get()
        # check for stop
        if item is None:
            break
        # block, to simulate effort
        sleep(item[1])
        # report
        print(f'>consumer got {item}')
    # all done
    print('Consumer: Done')

# create the shared queue
queue = Queue()
# start the consumer
consumer = Thread(target=consumer, args=(queue,))
consumer.start()
# start the producer
producer = Thread(target=producer, args=(queue,))
producer.start()
# wait for all threads to finish
producer.join()
consumer.join()
</code></pre>
|
<python><multithreading>
|
2023-12-13 17:43:25
| 1
| 471
|
paul_schaefer
|
77,655,474
| 12,106,577
|
Python multiprocessing Manager process not exiting after successfully stopping parent processes
|
<p>I wrote the following demo script to replicate the multiprocessing handling logic from a more complex case (from now on, the original code); however, the key parts should be there.</p>
<p>The original code, running with Python 3.10.12, sometimes had a memory leak when stopped by CTRL+C, as seen with <code>htop</code> in Linux where stale processes were hogging up RAM. After spending some time trying to fix it, I aimed at getting all processes terminated and checking their exit codes. After getting exit code -15 (correct for CTRL+C) for all processes in the printout after hitting CTRL+C, I thought the leak would stop, but it persists.</p>
<p>The demo script code is the following:</p>
<pre><code>import multiprocessing as mp
import time

class ProcessManager:
    def __init__(self) -> None:
        self.proc_map = {}
        self.map_procs()

    def map_procs(self):
        for i in range(1):
            self.proc_map[i] = mp.Process(target=self.parent_loop)
        print("mapped processes in orchestrator")

    def start_procs(self):
        # Start all processes
        for i, proc in self.proc_map.items():
            proc.start()
            print(f"Parent process {i} started.")
        # wait for processes to finish
        for _, proc in self.proc_map.items():
            try:
                proc.join()  # wait for the process to finish (it may or may not, depending on loops)
            except KeyboardInterrupt:
                for _, proc in self.proc_map.items():
                    print(f"Terminating process {proc}...")
                    proc.terminate()
                for _, proc in self.proc_map.items():
                    proc.join()  # wait for termination to finish

    def parent_loop(self):
        p = Parent()
        p.parent_loop()

class Parent:
    def __init__(self) -> None:
        self.queue = mp.Manager().Queue()
        self.child = ChildDaemon(self.queue)
        self.dummy_file_in_memory = self.load_big_file()
        self.slow_init_func(5)
        self.child.daemon_proc.start()

    def slow_init_func(self, t):
        print("slow init func")
        time.sleep(t)

    def load_big_file(self):
        file_size = 1024 * 1024 * 1024 * 20
        dummy_data = bytearray(file_size)
        return dummy_data

    def parent_loop(self):
        while True:
            print(f"parent loop: queue counter {self.queue.get()}")
            time.sleep(5)

class ChildDaemon:
    def __init__(self, queue) -> None:
        self.queue = queue
        self.daemon_proc = mp.Process(target=self.child_loop, args=(queue,))
        self.daemon_proc.daemon = True

    def child_loop(self, queue):
        counter = 0
        while True:
            print("child loop")
            time.sleep(1)
            try:
                queue.get_nowait()
            except:
                pass
            try:
                queue.put(counter, block=False)
            except:
                pass
            counter += 1

if __name__ == "__main__":
    mp.set_start_method('spawn')
    orch = ProcessManager()
    orch.start_procs()
    for key, proc in orch.proc_map.items():
        print("Checking exitcodes of supervisor processes...")
        print(f"{key} ({proc}) exitcode: {proc.exitcode}")
</code></pre>
<p>On Mac OS 14.1.2 and python 3.9.12, when I CTRL+C out of it, I get:</p>
<pre><code>mapped processes in orchestrator
Parent process 0 started.
slow init func
child loop
child loop
parent loop: queue counter 0
child loop
child loop
child loop
child loop
parent loop: queue counter 4
child loop
child loop
^CTerminating process <Process name='Process-1' pid=13652 parent=13650 started>...
Process Process-1:2:
Traceback (most recent call last):
File "/Users/******/opt/anaconda3/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/Users/******/opt/anaconda3/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/Users/******/Projects/processes_leak/src.py", line 74, in child_loop
time.sleep(1)
KeyboardInterrupt
Checking exitcodes of supervisor processes...
0 (<Process name='Process-1' pid=13652 parent=13650 stopped exitcode=-SIGTERM>) exitcode: -15
</code></pre>
<p><code>htop</code> is showing this after the script ends:
<a href="https://i.sstatic.net/2xNjc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2xNjc.png" alt="enter image description here" /></a></p>
<p><strong>Notes</strong></p>
<ul>
<li>My hope here is that the solution for this will translate to the original code as well.</li>
<li>In the original code, I cannot use an executor, as I'm encountering some non-picklable objects and I want to handle the process management in a similar way to the demo.</li>
</ul>
<p><strong>Edit</strong></p>
<p>Thanks to the comment by @relent95, I believe the issue lies in the mp.Manager() call. I can observe python processes after even the narrower script below has finished.</p>
<p>The question becomes how to correctly handle closing the manager process (<strong>ideally without using a context manager</strong>, as the original project is more complex). More specifically, is that even needed, since the <a href="https://docs.python.org/3.9/library/multiprocessing.html#multiprocessing-managers" rel="nofollow noreferrer">docs</a> seem to say that <em>Manager processes will be shutdown as soon as they are garbage collected or their parent process exits</em>?</p>
<pre><code>import multiprocessing as mp
import time

class ProcessManager:
    def __init__(self):
        self.organise()

    def organise(self):
        p = mp.Process(target=self.start_manager)
        p.start()
        try:
            p.join()
        except KeyboardInterrupt:
            print('pm exception')
            try:
                self.m.shutdown()
            except:
                pass
            p.terminate()
            p.join()

    def start_manager(self):
        self.m = mp.Manager()
        self.q = self.m.Queue()
        try:
            time.sleep(10)
        finally:
            print('start_manager exception')
            self.m.shutdown()

if __name__ == "__main__":
    pm = ProcessManager()
</code></pre>
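<p>For reference, the lifecycle pattern I'm experimenting with: keep the <code>Manager</code> as a named object and call <code>shutdown()</code> explicitly in a <code>finally</code> block, rather than waiting for garbage collection to reap its helper process:</p>

```python
# Sketch: explicit, deterministic Manager shutdown without a context
# manager.
import multiprocessing as mp


def manager_roundtrip():
    manager = mp.Manager()     # starts a separate manager process
    try:
        q = manager.Queue()
        q.put(42)
        return q.get()
    finally:
        manager.shutdown()     # deterministically stops that process


if __name__ == "__main__":
    print(manager_roundtrip())  # -> 42
```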
|
<python><multiprocessing><keyboardinterrupt>
|
2023-12-13 17:22:54
| 1
| 399
|
John Karkas
|
77,655,459
| 12,214,934
|
Scipy Multinomial Probability Mass Function is almost always 0.0
|
<p>To calculate the probability for categorical features, I was told to use the Multinoulli distribution, which is supposedly a special case of the multinomial distribution, where the number of trials is 1.</p>
<p>I was given some probabilities for 10 different categories:</p>
<pre><code>p = [0.14285714 0.11428571 0.14285714 0.08571429 0.11428571 0.05714286 0.14285714 0.02857143 0.11428571 0.05714286]
</code></pre>
<p>And told to calculate the pmf using the following as my input <code>[0,1,2,3,4,5,6,7,8,9]</code>.</p>
<p>However, when I tried this using the pmf of SciPy's multinomial distribution, I always got a result of 0.0. Playing around with different values for n, I noticed that the only time I get a number other than 0.0 is when n happens to be the sum of the values of my outcomes in x. And when I checked the documentation, <em>coincidentally</em>, in all examples n is indeed the sum of the values.</p>
<p>I think I am fundamentally misunderstanding something here. What is the point of having a parameter n when apparently there is only one sensible value for it? And why would it be the sum of the values of my categories? The way the multinomial distribution was presented to me, I thought these were just names, labels, like the 0th category, the 1st category, etc. Summing them up makes no sense to me.</p>
<p><a href="https://i.sstatic.net/42oCM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/42oCM.png" alt="SciPy documentation" /></a></p>
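<p>For context, my current understanding of the part I may be getting wrong: in the multinomial distribution <code>x</code> holds <em>counts per category</em>, not category labels, so the pmf is zero unless <code>sum(x) == n</code>. A hand-rolled pmf (stdlib only) that I believe matches the textbook definition:</p>

```python
# Sketch of the multinomial pmf: x[i] is the COUNT of outcomes in
# category i, so the counts must sum to the number of trials n.
from math import factorial, prod


def multinomial_pmf(x, n, p):
    if sum(x) != n:
        return 0.0  # impossible outcome: counts don't add up to n trials
    coef = factorial(n)
    for k in x:
        coef //= factorial(k)
    return coef * prod(pi ** k for pi, k in zip(p, x))


# Multinoulli (one trial): exactly one category has count 1.
p = [0.2, 0.5, 0.3]
print(multinomial_pmf([0, 1, 0], 1, p))   # -> 0.5
print(multinomial_pmf([1, 1, 0], 1, p))   # -> 0.0 (counts sum to 2, not 1)
```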
|
<python><scipy><multinomial>
|
2023-12-13 17:19:56
| 1
| 722
|
Kajice
|
77,655,440
| 1,264,211
|
Can you protect a python variable with exec()?
|
<p>This is kind of a hacky Python question.</p>
<p>Consider the following Python code:</p>
<pre class="lang-py prettyprint-override"><code>def controlled_exec(code):
    x = 0

    def increment_x():
        nonlocal x
        x += 1

    globals = {"__builtins__": {}}  # remove every global (including all python builtins)
    locals = {"increment_x": increment_x}  # expose only the increment function
    exec(code, globals, locals)
    return x
</code></pre>
<p>I expect this function to provide a controlled code API, which simply counts the number of <code>increment_x()</code> calls. I tried it and I get the correct behavior.</p>
<pre class="lang-py prettyprint-override"><code># returns 2
controlled_exec("""\
increment_x()
increment_x()
""")
</code></pre>
<p>I assume this way of doing is not secure, but I wonder out of curiosity. Can I set <code>x</code> to an arbitrary value (say negative) by executing code via <code>controlled_exec(...)</code> ? How would I do that?</p>
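<p>To make the question concrete: one escape I suspect works on Python 3.7+ (where closure cells became writable) is reaching <code>x</code> through the exposed function's <code>__closure__</code>, with no builtins needed at all:</p>

```python
# Sketch: the exposed increment_x carries a reference to the closure
# cell holding x, and cell_contents is assignable since Python 3.7.
def controlled_exec(code):
    x = 0

    def increment_x():
        nonlocal x
        x += 1

    exec(code, {"__builtins__": {}}, {"increment_x": increment_x})
    return x


payload = "increment_x.__closure__[0].cell_contents = -42"
print(controlled_exec(payload))  # -> -42
```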
|
<python>
|
2023-12-13 17:16:41
| 1
| 335
|
Bix
|
77,655,360
| 7,563,454
|
Direction vector: Correctly scale sin(radians(x)) + cos(radians(z)) with y axis
|
<p>I have an issue with the directional transform math for a voxel raytracer I'm working on in Python. X axis is left-right, Y axis is up-down, Z axis is forward-backward: I calculate view direction from a camera angle stored in degrees, direction being the virtual arrow the point of view points toward: Z rotation translates to X and Z directions, Y rotation to Y direction, X rotation is ignored as I didn't implement camera rolling. The conversion is done by running the rotation radians through a sine / cosine, for example: Rotation <code>x = 0, y = 90, z = 45</code> becomes direction <code>x = 0.7071067811865475, y = 1.0, z = 0.7071067811865475</code> since a 45* angle for rot Z means dir X and Z equally point toward that direction. A simple demo to visualize what I'm doing.</p>
<pre><code>angle_y = 90
for angle_z in (0, 45, 90, 135, 180, 225, 270, 315, 360):
    dir_x = math.sin(math.radians(angle_z))
    dir_y = math.sin(math.radians(angle_y))
    dir_z = math.cos(math.radians(angle_z))
    print("x: " + str(dir_x) + " y: " + str(dir_y) + " z: " + str(dir_z))
</code></pre>
<p>Directions X and Z work as intended and form a perfect arch at every rotation as <code>angle_z</code> goes from 0* to 360*. The issue is the Y direction: If you're looking straight ahead and it has a value of 0 everything works fine. But as you look up and down, the Y direction needs to correctly shrink the magnitude of X and Z with its intensity, otherwise extra "force" is added to the total direction which should never exceed X + Z as defined by their sin + cos. Therefore the following is wrong:</p>
<pre><code>return dir_x, dir_y, dir_z
</code></pre>
<p>I tried the simplest solution which is a lot more accurate:</p>
<pre><code>return dir_x * (1 - abs(dir_y)), dir_y, dir_z * (1 - abs(dir_y))
</code></pre>
<p>But this too causes incorrect perspective: As I look up or down it looks like the world shrinks vertically. I get an even more correct result by dividing <code>dir_y</code> by two:</p>
<pre><code>return dir_x * (1 - abs(dir_y) / 2), dir_y, dir_z * (1 - abs(dir_y) / 2)
</code></pre>
<p>This stops the world from bending at first, but as you look closer to a full -90 / +90 angle up or down, it makes everything bend like a concave mirror. What is the correct way to shrink X and Z by the intensity of Y to get a direction arrow that correctly maintains magnitude?</p>
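<p>For reference, what I believe is the standard spherical-coordinates form (assuming pitch is measured from the horizon): scale the horizontal components by the <em>cosine</em> of the pitch angle, which keeps the total magnitude at exactly 1 for every yaw/pitch combination:</p>

```python
# Sketch: yaw (angle_z) and pitch (angle_y) to a unit direction vector.
import math


def direction(angle_y, angle_z):
    pitch = math.radians(angle_y)
    yaw = math.radians(angle_z)
    cos_pitch = math.cos(pitch)          # shrinks X/Z as you look up/down
    return (math.sin(yaw) * cos_pitch,
            math.sin(pitch),
            math.cos(yaw) * cos_pitch)


for ay, az in [(0, 45), (30, 45), (60, 200), (89, 10)]:
    x, y, z = direction(ay, az)
    print(round(math.sqrt(x * x + y * y + z * z), 9))  # -> 1.0 each time
```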
|
<python><python-3.x><math><vector><3d>
|
2023-12-13 17:03:39
| 1
| 1,161
|
MirceaKitsune
|
77,655,334
| 5,931,672
|
Autocompletion for my python module using click
|
<p>I created a Python module that I install.
It uses Bash CLI commands such as <code>my-module init</code> or <code>my-module delete</code>.</p>
<p>The structure is like:</p>
<pre><code>my-module/
|--- setup.py
|--- my_module
|    |--- __main__.py
|    |--- delete.py
|    |--- init.py
</code></pre>
<p><code>setup.py</code></p>
<pre><code>setuptools.setup(
    name="my-module",
    entry_points={
        "console_scripts": [
            "my-module = my_module.__main__:cli"
        ]
    },
)
</code></pre>
<p><code>__main__.py</code></p>
<pre><code>@click.group(chain=True)
def cli():
    pass

cli.add_command(init_cmd)
cli.add_command(delete_cmd)
</code></pre>
<p><code>init.py</code></p>
<pre><code>@click.command("init")
@click.option("-p", "--path",
              type=str,
              help="Path to the project workspace."
              )
def init_project_cmd(path):
    pass
</code></pre>
<p>It works correctly; however, the autocompletion does not work.
It does work for <code>my-module</code> but not for the second part.</p>
<p>Following <a href="https://click.palletsprojects.com/en/8.1.x/shell-completion/" rel="nofollow noreferrer">this</a> I edited <code>.bashrc</code> adding the line:</p>
<p><code>eval "$(_ML_PIPELINE_COMPLETE=bash_source /path/to/my-module/my_module/__main__.py)"</code></p>
<p>However (after running <code>chmod</code> on <code>__main__.py</code> because I had a permission error), I get error messages saying:</p>
<pre class="lang-bash prettyprint-override"><code>import-im6.q16: unable to open X server `' @ error/import.c/ImportImageCommand/359.
from: can't read /var/mail/my-module.delete
from: can't read /var/mail/my-module.init
/path/to/my-module/my_module/__main__.py: line 9: syntax error near unexpected token `('
/path/to/my-module/my_module/__main__.py: line 9: `from some_module import ('
</code></pre>
<p>My questions:</p>
<ol>
<li>Am I on the right track? What am I missing, then? The errors look very weird, and it feels like the autocompletion should be easy.</li>
<li>If so, I now have to (at least) add a line to <code>.bashrc</code> with the path to the main file (which I don't really know, because each user will have their own path to Python packages). How can I automate this so that it is done with the <code>pip install</code> command?</li>
</ol>
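<p>For what it's worth, my reading of the click docs is that the completion hook should point at the <em>installed entry point</em>, not at the <code>.py</code> file (bash was executing <code>__main__.py</code> as a shell script, hence the <code>import-im6</code> errors). Assuming the console script is named <code>my-module</code>, the <code>.bashrc</code> line would presumably be:</p>

```shell
# Hypothetical .bashrc line: the env var is _<PROG>_COMPLETE with the
# program name uppercased and dashes turned into underscores, and the
# command after it is the installed console script, not a file path.
eval "$(_MY_MODULE_COMPLETE=bash_source my-module)"
```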
|
<python><autocomplete>
|
2023-12-13 16:57:55
| 2
| 4,192
|
J Agustin Barrachina
|
77,655,312
| 3,139,771
|
Pyinstaller unable to find module when frozen
|
<p>This is what I have:</p>
<pre><code>├── demo
│   ├── mypkg
│   │   ├── __main__.py
│   │   ├── api.py
│   │   └── startserver.py
│   └── readme.md
</code></pre>
<p>api.py has =></p>
<pre><code>import hug

@hug.get('/ping')
def ping():
    return {"response": "pong"}
</code></pre>
<p>startserver.py has =></p>
<pre><code>def start():
    try:
        currentpath = Path(__file__)
        print(f'Currently executing from {currentpath}')
        apipath = os.path.join(currentpath.parent, 'api.py')
        print(f'parse api path is {apipath}')
        print('inside startserver start()')
        with open('testapi.log', 'w') as fd:
            subprocess.run(['hug', '-f', apipath], stdout=fd, stderr=subprocess.STDOUT, bufsize=0)
    except Exception:
        print(traceback.format_exc())
</code></pre>
<p><code>__main__.py</code> has =></p>
<pre><code>import traceback

from mypkg.startserver import start

def main():
    try:
        start()
    except Exception:
        print(traceback.format_exc())

if __name__ == "__main__":
    print('... inside name == main ...')
    main()
</code></pre>
<p>When doing <code>python -m mypkg</code> from the VS Code terminal, this works great: the hug web server is running and the browser can see the <code>/ping</code> results on localhost.</p>
<p>But when I create a executable (single file exe on Windows 10) using pyinstaller I get this error:</p>
<pre><code>Currently executing from
C:\Users\JOHN~1.KOL\AppData\Local\Temp\_MEI442282\mypkg\startserver.pyc
parse api path is C:\Users\JOHN~1.KOL\AppData\Local\Temp\_MEI442282\mypkg\api.py
inside startserver start()
Traceback (most recent call last):
File "mypkg\startserver.py", line 16, in start
File "subprocess.py", line 548, in run
File "subprocess.py", line 1026, in __init__
File "subprocess.py", line 1538, in _execute_child
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
<p>It seems like there is no <code>C:\Users\JOHN~1.KOL\AppData\Local\Temp\_MEI442282\mypkg\api.py</code>.
I even tried changing <code>api.py</code> to <code>api.pyc</code>, still no luck. How do I get a reference to <code>api.py</code>, which is pretty much required to start the hug web server? I have seen other solutions here for missing config files and binary files, but this one is a <code>.py</code> file itself that's needed.</p>
<p>Any help is greatly appreciated, thanks!</p>
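<p>One detail I'd like to pin down first: a <code>FileNotFoundError</code> raised from <code>_execute_child</code> usually means the <em>executable</em> (<code>hug</code> here) could not be found, not the script argument: a single-file PyInstaller bundle does not ship the <code>hug.exe</code> console script. A quick diagnostic sketch:</p>

```python
# Sketch: check whether the 'hug' launcher is actually on PATH before
# handing it to subprocess.run (it won't be inside a frozen bundle
# unless it is shipped alongside the exe).
import shutil

hug_path = shutil.which("hug")
if hug_path is None:
    print("hug executable not found on PATH - subprocess.run will raise "
          "FileNotFoundError")
else:
    print(f"would run: {hug_path}")
```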
|
<python><pyinstaller><hug>
|
2023-12-13 16:54:55
| 2
| 357
|
Impostor Syndrome
|
77,655,287
| 2,154,374
|
python mmap write with packed value
|
<p>On Linux setup, I am trying to write device file using python's <code>mmap</code>.</p>
<p>Following is the code snippet:</p>
<pre><code>import struct, os, mmap, sys

def write(addr, size, data):
    filename = "<pci_device_file>/resource0"
    # page size of this setup (typically 4k)
    psize = os.sysconf("SC_PAGE_SIZE")
    # mmap expects offsets to be multiple of page-size
    base_offset = int(addr // psize) * psize
    # offset within the page
    seek_sz = int(addr % psize)
    # total bytes to be mapped = offset within the page + requested size
    map_size = seek_sz + size
    # open the dev file (of mem)
    fd = os.open(filename, os.O_RDWR | os.O_SYNC)
    # map the dev file to process address space
    mem = mmap.mmap(fd, map_size,
                    mmap.MAP_SHARED,
                    mmap.PROT_READ | mmap.PROT_WRITE,
                    offset=base_offset)
    # goto the target offset (within the page that is mapped)
    mem.seek(seek_sz, os.SEEK_SET)
    val = mem.read(size)
    print('Packed val read = {}'.format(val))
    print(hex(struct.unpack("I", val)[0]))
    # seek to same offset, now to write
    mem.seek(seek_sz, os.SEEK_SET)
    # pack the data
    packed_data = struct.pack("I", data)
    print('Packed val write = {}'.format(packed_data))
    # write to memory
    ret_code = mem.write(packed_data)
    # try to re-read the value written
    mem.seek(seek_sz, os.SEEK_SET)
    val = mem.read(size)
    print('Packed val read (after write) = {}'.format(val))
    print(hex(struct.unpack("I", val)[0]))
    # close fd
    os.close(fd)
    return ret_code

write(0x4330, 4, int(sys.argv[1], 16))
</code></pre>
<p>When I run this code, I get the following output:</p>
<pre><code>root@linux$ python3 /ssd/my_mmap.py 0x113d0000
Packed val read = b'\x00\x00="'
0x223d0000
Packed val write = b'\x00\x00=\x11'
Packed val read (after write) = b'\x00\x00="'
0x223d0000 ==> value not written
</code></pre>
<p>The address is writable and I could write it using a C program.
However, with the Python program, I was never able to write it.
The base address, offset, etc. values are correct, as I could read the old values using the same code.</p>
<p>What am I missing? Sigh!</p>
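<p>In case it helps: my suspicion is that <code>mmap.write()</code> copies byte-by-byte, while many device BARs only accept full-width 32-bit stores (a C <code>*(volatile uint32_t *)p = v</code> works for exactly that reason). A sketch of forcing a single 32-bit store through <code>ctypes</code>, demonstrated here on an anonymous mapping since I can't reproduce the device file:</p>

```python
# Sketch: ctypes.c_uint32.from_buffer(mem, offset) aliases 4 bytes of
# the mapping, and assigning .value issues one 32-bit store instead of
# the byte-wise copy that mmap.write() performs.
import ctypes
import mmap
import struct

mem = mmap.mmap(-1, 4096)            # anonymous mapping as a stand-in
reg = ctypes.c_uint32.from_buffer(mem, 0x30)
reg.value = 0x113D0000               # single aligned 32-bit write

mem.seek(0x30)
print(hex(struct.unpack("I", mem.read(4))[0]))  # -> 0x113d0000

del reg                              # release the exported buffer
mem.close()
```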
|
<python><mmap>
|
2023-12-13 16:52:09
| 1
| 309
|
boomerang
|
77,655,249
| 1,138,192
|
Dynamodb batch delete based on sort key pattern
|
<p>I have a DynamoDB table with the following structure:</p>
<pre><code>sk | pk
---------------|-----
1#2023-12-01 | abv
1#2023-12-02 | abv
1#2023-12-03 | abv
1#2023-12-04 | abv
1#2023-12-05 | abv
2#2023-12-01 | abv
2#2023-12-02 | abv
2#2023-12-03 | abv
2#2023-12-04 | abv
2#2023-12-05 | abv
...
20#2023-12-11 | abv
20#2023-12-12 | abv
20#2023-12-12 | abv
</code></pre>
<p>Now, I want to perform a batch delete operation on this table where <code>pk = 'abv'</code> and <code>sk</code> represents a dynamic integer between 1 and 30, followed by a literal <code>#</code> and then a date part <code>YYYY-MM-DD</code>, which has to be less than the current date (assuming the current date is <code>2023-12-12</code>). Essentially, I want to remove all items where the date is less than <code>2023-12-12</code> for any prefix <code>1-30#</code>. So after the delete operation, the table should only contain items like:</p>
<pre><code>sk | pk
---------------|-----
20#2023-12-12 | abv
20#2023-12-12 | abv
</code></pre>
<p>How can I achieve this in DynamoDB using a batch delete operation? Any guidance on constructing the batch delete request, or any other optimized way to code it? I am thinking of the following, but I am not a fan of DynamoDB's <code>scan()</code> operation.</p>
<pre><code>from datetime import datetime, timedelta
from typing import Dict, List

class Dynamodb:
    def batch_delete_old_data(self, pk: str):
        try:
            # Calculate the date to keep (e.g., today's date)
            date_to_keep = datetime.now().strftime('%Y-%m-%d')
            # Scan for all items with the specified pk
            response = self._table.scan(
                FilterExpression=Key('pk').eq(pk)
            )
            items_to_delete = [{'pk': item['pk'], 'sk': item['sk']}
                               for item in response.get('Items', [])
                               if self.extract_date_part(item['sk']) < date_to_keep]
            with self._table.batch_writer() as batch:
                for item in items_to_delete:
                    batch.delete_item(Key=item)
            return {"message": "Old data cleanup successful"}
        except Exception as e:
            # Handle errors appropriately
            raise Exception(f"Error: {str(e)}")

    @staticmethod
    def extract_date_part(sk: str) -> str:
        # Extract the date part from the sk, assuming format "prefix#date"
        return sk.split('#')[-1] if '#' in sk else sk
</code></pre>
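<p>A direction I'm considering instead of <code>scan()</code>: since the date part is zero-padded, plain string comparison on <code>sk</code> is chronological within a prefix, so one <code>query()</code> per prefix with a sort-key range condition should find exactly the stale items. The range-building part, which is pure string logic, sketched below (the <code>query</code> call itself is left in a comment because it needs a live table):</p>

```python
# Sketch: build per-prefix sort-key ranges covering items whose date
# part is before the cutoff. Zero-padded ISO dates make lexicographic
# order match chronological order.
def stale_key_ranges(cutoff_date: str, max_prefix: int = 30):
    ranges = []
    for i in range(1, max_prefix + 1):
        low = f"{i}#"                   # smallest sk with this prefix
        high = f"{i}#{cutoff_date}"     # caution: between() is inclusive,
        ranges.append((low, high))      # so cutoff-day items need lt()
    return ranges


# For each (low, high) pair, one would presumably run something like:
#   table.query(KeyConditionExpression=
#       Key('pk').eq(pk) & Key('sk').between(low, high))
# and feed the returned keys to batch_writer().delete_item.

for low, high in stale_key_ranges("2023-12-12")[:2]:
    print(low, "->", high)
```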
|
<python><amazon-web-services><amazon-dynamodb><boto3>
|
2023-12-13 16:45:14
| 2
| 38,823
|
A l w a y s S u n n y
|
77,655,206
| 6,357,916
|
Algorithm to obtain indices of list where list element resets to near zero
|
<p>I have list of increasing and decreasing float values:</p>
<p><a href="https://i.sstatic.net/Vzbev.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Vzbev.png" alt="enter image description here" /></a></p>
<p>As you can see in the image, the values may increase or decrease and may suddenly reset to a near-zero value. I want to know exactly where these resets happen.</p>
<p>I tried something like this:</p>
<pre><code># input list of floats
# floats = [...]
# find the absolute difference between consecutive values
diff_floats = [abs(floats[i] - floats[i-1]) for i in range(1, len(floats))]
# Sort the list in descending order
sorted_diff_floats = sorted(diff_floats, reverse=True)
# Calculate the average of the top 14 differences
threshold = sum(sorted_diff_floats[:14]) / 14
# Count the number of values greater than the threshold
print(len([value for value in diff_floats if value > threshold]))
</code></pre>
<p>I hard coded <code>14</code>, since I know that empirically the reset did not happen more than 7 times. So better to get top 14 differences average, instead of average of all values.</p>
<p>I am yet to get the indices of these resets, which I can easily get by modifying the last line of the code. But I feel the logic can still be improved to work on any input list of floats, without relying on the empirical knowledge of 7 resets.</p>
<p>What could be the generalised solution?</p>
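<p>The shape of answer I'm after, as a starting sketch: instead of a global threshold derived from the top-k differences, flag an index as a reset whenever the value drops to a small fraction of its predecessor; this needs no assumption about how many resets there are (the <code>drop_ratio</code> of 0.1 is an assumption to tune):</p>

```python
# Sketch: a reset is a sample that falls below drop_ratio of the
# previous sample (and the previous sample is non-trivial).
def reset_indices(values, drop_ratio=0.1, floor=1e-9):
    hits = []
    for i in range(1, len(values)):
        prev, cur = values[i - 1], values[i]
        if prev > floor and cur < prev * drop_ratio:
            hits.append(i)
    return hits


floats = [0.2, 1.5, 3.1, 4.0, 0.05, 0.9, 2.2, 2.8, 0.1, 1.1]
print(reset_indices(floats))  # -> [4, 8]
```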
|
<python><list><algorithm><sorting><indexing>
|
2023-12-13 16:38:25
| 2
| 3,029
|
MsA
|
77,655,131
| 6,907,488
|
parsing string with special characters (subprocess.run stdout)
|
<p>I am grabbing output of CLI call (for context GithubAPI)</p>
<p>command:</p>
<pre><code>import subprocess
j = subprocess.run("gh api /orgs/{__org__}/teams", shell=True, stdout=subprocess.PIPE, text=True)
</code></pre>
<p>the <code>j.stdout</code> object is a type <code>str</code> with the API response.</p>
<p>Then, when I print it with <code>print(j.stdout)</code>, it prints fine:</p>
<pre><code>{
  "name": "Devs",
  "id": {___VALUE HIDDEN____},
  "node_id": "{___VALUE HIDDEN____}",
  "slug": "devs"
  ...
</code></pre>
<p>However, when I try to use the raw string, the encoding is all wrong.</p>
<pre><code> '\x1b[1;38m[\x1b[m\n \x1b[1;38m{\x1b[m\n \x1b[1;34m"name"\x1b[m\x1b[1;38m:\x1b[m \x1b[32m"Devs"\x1b[m\x1b[1;38m,\x1b[m\n \x1b[1;34m"id"\x1b[m\x1b[1;38m:\x1b[m {___VALUE HIDDEN____}\x1b[1;38m,\x1b[m\n \x1b[1;34m"node_id"\x1b[m\x1b[1;38m:\x1b[m \x1b[32m"{___VALUE HIDDEN____}"\x1b[m\x1b[1;38m,\x1b[m\n \x1b[1;34m"slug"\x1b[m\x1b[1;38m:\x1b[m \x1b[32m"devs"\x1b[m\x1b[1;38m,\x1b[m\n ...
</code></pre>
<p>I have spent a good portion of today in the rabbit hole of decoding and encoding bytes like this, and began trying to manually clean some of the control characters (e.g. <code>\x1b</code>).</p>
<p>How can I get the clean underlying data so I can parse it into a data structure (it's JSON / list of dicts).</p>
<p>It cannot be this hard; I am surely missing something trivial, right?</p>
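<p>The garbage looks like ANSI colour escape codes that <code>gh</code> emits. Two directions I can see: ask <code>gh</code> not to colour the output, or strip the escapes before <code>json.loads</code>. A stripping sketch with a regex that I believe covers the CSI sequences shown above:</p>

```python
# Sketch: remove ANSI CSI escape sequences (ESC [ ... final-byte) so
# the remaining text is plain JSON.
import json
import re

ANSI_RE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")


def strip_ansi(text: str) -> str:
    return ANSI_RE.sub("", text)


raw = '\x1b[1;38m{\x1b[m\n  \x1b[1;34m"name"\x1b[m\x1b[1;38m:\x1b[m \x1b[32m"Devs"\x1b[m\n\x1b[1;38m}\x1b[m'
data = json.loads(strip_ansi(raw))
print(data["name"])  # -> Devs
```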
|
<python><regex><subprocess><gh-api><ansi.sys>
|
2023-12-13 16:27:20
| 1
| 671
|
dozyaustin
|
77,655,047
| 1,942,868
|
Insert the row to the Queryset manually
|
<p>I have objects and filter, which get the <code>label</code> <code>value</code> pair from database.</p>
<pre><code>results = (m.Drawing.objects.
           annotate(label=F('update_user__name'), value=F('update_user')).
           values('label', 'value').
           annotate(dcount=Count('update_user__name')).
           order_by())
print(results)
serializer = s.SearchChoiceSerializer(instance=results, many=True)
</code></pre>
<p>Results is like this,<code>print(results)</code></p>
<pre><code><SafeDeleteQueryset [{'label': 'admin', 'value': 1, 'dcount': 13}, {'label': 'demouser1', 'value': 2, 'dcount': 13}]>
</code></pre>
<p>Now, I want to insert this <code>{'label':'myuser', 'value':2,'dcount':23}</code> to <code>SafeDeleteQuerySet</code> manually before sending to the serializer.</p>
<p>Is it possible?</p>
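<p>In case the context helps: my understanding is that <code>many=True</code> serialization only needs an iterable, so my fallback idea is to materialise the queryset into a list of dicts and append to that, rather than mutating the <code>SafeDeleteQueryset</code> itself:</p>

```python
# Sketch: a values() queryset already yields plain dicts, so a list
# copy can be extended freely before serialization (the rows below
# are hypothetical stand-ins for the queryset contents).
rows = [
    {"label": "admin", "value": 1, "dcount": 13},
    {"label": "demouser1", "value": 2, "dcount": 13},
]

rows = list(rows)  # with a real queryset: rows = list(results)
rows.append({"label": "myuser", "value": 2, "dcount": 23})

print(len(rows))  # -> 3
# serializer = s.SearchChoiceSerializer(instance=rows, many=True)
```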
|
<python><django>
|
2023-12-13 16:13:32
| 1
| 12,599
|
whitebear
|
77,655,001
| 11,052,072
|
Cleanest way to structure unit tests in Python
|
<p>I usually structure my projects in this way:</p>
<pre><code>root
    src
        __init__.py
        main.py
        utils.py
        xyz.py
    tests
        __init__.py
        test_main.py
        test_utils.py
        test_xyz.py
    README.md
    LICENSE
    Other stuff...
</code></pre>
<p>In the tests I import the functions I need to test with imports like <code>from src.main import my_function</code></p>
<p>And then run tests from the root using <code>python -m unittest discover</code></p>
<p>That works... kinda. If <code>main.py</code> imports other modules in the <code>src</code> package (e.g. <code>import utils</code>), the test crashes with an <code>ImportError</code>. The reason is that unittest adds the directory it is launched from to the path, so it recognizes <code>src.main</code> but not the relative imports inside it.</p>
<p>The solution I found is to insert in <code>test/__init__.py</code> a statement like:</p>
<pre><code>import sys
sys.path.append("./src")
</code></pre>
<p>This adds the entire src folder to the PYTHONPATH. It works, but I feel this is quite ugly. What other solutions are available? Checking similar questions, I didn't find a solution that allows me to both run the tests and execute the main package without having to change code each time...</p>
<p>Any idea? Thank you!</p>
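<p>One alternative I'm weighing, to avoid touching <code>sys.path</code> in code at all: set the path from the environment at invocation time instead of in <code>tests/__init__.py</code>:</p>

```shell
# Hypothetical invocation from the project root: put src on the module
# search path for this one command only, instead of patching sys.path.
PYTHONPATH=src python -m unittest discover -s tests
```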
|
<python><python-unittest>
|
2023-12-13 16:06:51
| 1
| 553
|
Liutprand
|
77,654,887
| 599,265
|
How can I reasonably support basic SSLv3 connections in a user-facing package for modern Python?
|
<p>I have a Python package for controlling a piece of lab equipment that has a server listening on a network connection. In response to changes to California law, the machine has recently been updated to use SSL connections rather than plain text. Unfortunately, despite being updated in the last few years, it only supports SSLv3.</p>
<p>As SSLv3 is insecure and deprecated (for good reason), and has been for over a decade, using it in Python is increasingly difficult. In some cases, installations will support it if the SSL context is set with ssl.TLSVersion.SSLv3 as the minimum version, but increasingly, the OpenSSL that Python is built with is compiled without SSLv3 support. While I could of course build openssl with SSLv3 support, and then build Python with that, asking my users to go through that entire process, and use a completely different, and less secure, installation of Python for just my package would not be reasonable.</p>
<p>Is there any way I can support SSLv3 connections for users whose Python installations don't support it in the built-in ssl module, and whose openssl installations don't support it either, e.g. through a pip-installable package?</p>
<p>(Note that in terms of security, SSLv3 is the least of the problems with the machines. The certificates used on the machines are self-signed, and can't be verified to begin with. The server on the machine is extremely insecure and should not be on open networks in any circumstances, something the package warns users about. In our installations we have all communication with the machines going through a specific, separate network for them.)</p>
|
<python><ssl>
|
2023-12-13 15:50:23
| 0
| 10,008
|
cge
|
77,654,869
| 1,103,752
|
plumbum fails to connect via jump host, but raw command succeeds
|
<p>I'm trying to SSH into <code>final_machine</code> via the jump host <code>portal_machine</code>. Both remote machines are Linux, local is Windows. When I run the following command in a cmd, I can successfully connect</p>
<pre><code>ssh {username}@{final_machine} -oProxyCommand="ssh -W {final_machine}:22 {username}@{portal_machine}"
</code></pre>
<p>And I can also successfully connect through python with</p>
<pre><code>ssh_command = plumbum.local["ssh"][f"{username}@{final_machine}", "-o", f"ProxyCommand=ssh -W {final_machine}:22 {username}@{portal_machine}"]
ssh_command()
</code></pre>
<p>However, I need to connect via an <code>SshMachine</code> object for compatibility, and when I try the following, it fails</p>
<pre><code>plumbum.SshMachine(final_machine, user=username,
ssh_opts=[fr'-o ProxyCommand="ssh -W {final_machine}:22 {username}@{portal_machine}"'])
</code></pre>
<p>with error</p>
<pre><code>Return code: | None
Command line: | 'true '
Host: | {final machine}
Stderr: | CreateProcessW failed error:2
</code></pre>
<p>I've tried replacing <code>ssh</code> with <code>C:\Windows\System32\OpenSSH\ssh.exe</code>, but no change. I have SSH keys set up from my machine to portal_machine, my machine to final_machine, and portal_machine to final_machine. Any other suggestions for how to debug would be appreciated. When I connect simply to the portal machine, it works fine.</p>
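<p>One thing worth trying (an untested sketch): OpenSSH's <code>-J</code>/ProxyJump option is the built-in shorthand for a jump host and avoids nesting a quoted ProxyCommand, which is fragile under Windows argument quoting:</p>

```python
def jump_ssh_opts(username: str, portal_machine: str) -> list:
    """Build ssh_opts for plumbum.SshMachine using ProxyJump (-J) instead
    of a quoted ProxyCommand, sidestepping Windows quoting issues."""
    return ["-J", f"{username}@{portal_machine}"]

# hypothetical usage:
# rem = plumbum.SshMachine(final_machine, user=username,
#                          ssh_opts=jump_ssh_opts(username, portal_machine))
```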
|
<python><ssh><plumbum><jumphost>
|
2023-12-13 15:48:17
| 1
| 5,737
|
ACarter
|
77,654,642
| 2,474,876
|
Pandas data frame splitting by cycles
|
<p>I have a pandas data frame for the stops & scheduled times of a single transit route throughout a given day. I would like to split this into multiple frames each corresponding to individual trips made by a given bus (based only on the <code>stop</code> cycles & not when the <code>scheduled</code> periodicity would happen).</p>
<p>For example, the following has two <code>A->B->C</code> trips, so I'm looking at how to split the frame (i.e. at index 3 in this case) such that each sub-frame has the same sequence of stops.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({
"scheduled": ["2023-05-25 13:00", "2023-05-25 13:15", "2023-05-25 13:45", "2023-05-25 14:35", "2023-05-25 14:50", "2023-05-25 15:20"],
"stop": ["A", "B", "C", "A", "B", "C"]
})
df["scheduled"] = pd.to_datetime(df["scheduled"])
</code></pre>
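<p>One way to do the split (a sketch assuming every trip starts at the route's first stop, here "A"):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "scheduled": ["2023-05-25 13:00", "2023-05-25 13:15", "2023-05-25 13:45",
                  "2023-05-25 14:35", "2023-05-25 14:50", "2023-05-25 15:20"],
    "stop": ["A", "B", "C", "A", "B", "C"],
})
df["scheduled"] = pd.to_datetime(df["scheduled"])

# a new trip begins each time the first stop of the sequence reappears
trip_id = (df["stop"] == df["stop"].iloc[0]).cumsum()
trips = [g.reset_index(drop=True) for _, g in df.groupby(trip_id)]
```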
|
<python><pandas>
|
2023-12-13 15:12:00
| 1
| 417
|
eliangius
|
77,654,547
| 7,802,354
|
How to avoid printing unnecessary UWSGI error messages in the log file
|
<p>My Flask app is generating thousands of</p>
<blockquote>
<p>'OSError: write error'</p>
</blockquote>
<p>messages in my log file (the log file is defined in the .ini file: <code>logger = /temp/my_app.log</code>). Most of them happen because users disconnect from the server or cancel a request. I only need my own error-handling messages to show up in the log file, not these uWSGI error messages. Is there any configuration I need to do in my uWSGI .ini file?</p>
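<p>uWSGI does have options aimed at exactly this noise; a hedged sketch of the relevant <code>.ini</code> settings (verify them against your uWSGI version's options reference):</p>

```ini
[uwsgi]
; don't log or raise on SIGPIPE / broken writes caused by
; clients disconnecting mid-response
ignore-sigpipe = true
ignore-write-errors = true
disable-write-exception = true
```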
|
<python><nginx><flask><uwsgi>
|
2023-12-13 15:00:09
| 1
| 755
|
brainoverflow
|
77,654,458
| 14,321,038
|
Define __getitem__() within class constructor
|
<p>Python allows assigning function definitions to class members directly like,</p>
<pre class="lang-py prettyprint-override"><code>class A:
def __init__(self):
self.test = lambda x: print(x)
a = A()
a.test(10) # prints 10
</code></pre>
<p>However for <code>__getitem__(self, idx)</code> it's not that simple. Say you have two possible implementations for <code>__getitem__</code> depending on a flag variable. One could simply write,</p>
<pre class="lang-py prettyprint-override"><code>class A:
    def __init__(self, N, flag):
        self.values = list(range(N))
        self.flag = flag
        self.N = N

    def __getitem__(self, idx):
        if self.flag:
            return self.values[idx]
        else:
            return self.values[idx] * self.N  # toy example
a = A(10, True)
a[5] # returns 5
</code></pre>
<p>But I want to avoid doing the if-else logic within <code>__getitem__</code> (say, in a data-oriented application), so I'd like to be able to do that test in the constructor. Something like,</p>
<pre class="lang-py prettyprint-override"><code>class A:
def __init__(self, N, flag):
self.values = list(range(N))
self.flag = flag
self.N = N
        if flag:
            self.__getitem__ = lambda idx: self.values[idx]
        else:
            self.__getitem__ = lambda idx: self.values[idx] * self.N
a = A(10, False)
a[5] # should return 50, instead yields NotImplementedError
</code></pre>
<p>However, this last snippet yields a NotImplementedError. Is there a way to do this, or does Python already optimize if-else blocks like the one above?</p>
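<p>The reason the last snippet fails is that special methods are looked up on the type, not the instance, so <code>a[5]</code> never sees the instance attribute. A sketch of one common workaround: pick the implementation once in the constructor and delegate to it from a type-level <code>__getitem__</code>:</p>

```python
class A:
    def __init__(self, N, flag):
        self.values = list(range(N))
        self.N = N
        # choose the implementation once, at construction time
        if flag:
            self._get = lambda idx: self.values[idx]
        else:
            self._get = lambda idx: self.values[idx] * self.N

    def __getitem__(self, idx):
        # dunder lookup happens on the class, so delegate to the
        # per-instance callable chosen in __init__
        return self._get(idx)
```

<p>The type-level <code>__getitem__</code> is then only a single indirection; the if-else runs once per instance instead of once per access.</p>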
|
<python><constructor>
|
2023-12-13 14:49:26
| 1
| 427
|
Orwellian Mentat
|
77,654,317
| 13,877,952
|
Pandas : mean/median precedent rows having the same ID
|
<p>I have the following ordered Dataframe</p>
<pre><code>Index ID Amount
1 A 10
2 A 15
3 A 17
4 A 12
5 A 10
6 B 20
7 B 15
...
</code></pre>
<p>What I want is to add a column indicating <strong>the median of all the preceding Amounts for the same ID</strong> in the dataframe</p>
<p>The result must be the following</p>
<pre><code>Index ID Amount (PastElements) MedianOfPastElements
1 A 10 ()
2 A 15 (10) 10
3 A 17 (10;15) 12.5
4 A 12 (10;15;17) 15
5 A 10 (10;12;15;17) 13.5
6 B 20 ()
7 B 15 (20) 20
...
</code></pre>
<p>I don't have to keep the PastElements column in my result; I just added it to clarify my problem.
Does anyone see a way to do this? Thanks in advance</p>
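<p>One way to get this (a sketch using an expanding median shifted by one, so each row only sees the amounts strictly before it within its ID):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID": ["A", "A", "A", "A", "A", "B", "B"],
    "Amount": [10, 15, 17, 12, 10, 20, 15],
})

# expanding().median() includes the current row, so shift(1) keeps
# only the strictly preceding amounts within each ID
df["MedianOfPastElements"] = df.groupby("ID")["Amount"].transform(
    lambda s: s.expanding().median().shift(1)
)
```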
|
<python><pandas><dataframe>
|
2023-12-13 14:28:05
| 1
| 564
|
Adept
|
77,654,253
| 7,425,726
|
error in keras TimeDistributed layer after updating to keras 3.0.0
|
<p>I use the keras TimeDistributed layer in an LSTM architecture just like the following example taken from <a href="https://machinelearningmastery.com/timedistributed-layer-for-long-short-term-memory-networks-in-python/" rel="nofollow noreferrer">https://machinelearningmastery.com/timedistributed-layer-for-long-short-term-memory-networks-in-python/</a></p>
<pre><code>from numpy import array
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.layers import LSTM
# prepare sequence
length = 5
seq = array([i/float(length) for i in range(length)])
X = seq.reshape(1, length, 1)
y = seq.reshape(1, length, 1)
# define LSTM configuration
n_neurons = length
n_batch = 1
n_epoch = 1000
# create LSTM
model = Sequential()
model.add(LSTM(n_neurons, input_shape=(length, 1), return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mean_squared_error', optimizer='adam')
print(model.summary())
# train LSTM
model.fit(X, y, epochs=n_epoch, batch_size=n_batch, verbose=2)
# evaluate
result = model.predict(X, batch_size=n_batch, verbose=0)
for value in result[0,:,0]:
print('%.1f' % value)
</code></pre>
<p>In keras version 2.15 this works, but after updating to version 3.0.0 it gives the error:</p>
<pre><code>ValueError: Exception encountered when calling TimeDistributed.call().
Invalid dtype: <class 'NoneType'>
Arguments received by TimeDistributed.call():
• inputs=tf.Tensor(shape=(1, None, 5), dtype=float32)
• training=True
• mask=None
</code></pre>
<p>What is the correct way of writing down the architecture in keras 3 for this example?</p>
<p>(I have also seen the UserWarning about <code>Input(shape)</code> and know how to solve it, but this is unrelated to the ValueError.)</p>
|
<python><tensorflow><keras>
|
2023-12-13 14:17:10
| 0
| 1,734
|
pieterbons
|
77,653,807
| 16,525,263
|
get a missing value for a column in one dataframe from another dataframe
|
<p>I have a dataframe 'persons' as below</p>
<pre><code>name age serial_no mail
John 25 100483 john@abc.com
Sam 49 448900 sam@abc.com
Will 63 will@abc.com
Robert 20 299011
Hill 78 hill@abc.com
</code></pre>
<p>I have another dataframe 'people' as below</p>
<pre><code>name s_no e_mail
John 100483 john@abc.com
Sam 448900 sam@abc.com
Will 229809 will@abc.com
Robert 299011
Hill 567233 hill@abc.com
</code></pre>
<p>I need to add missing <code>serial_no</code> and <code>mail</code> in <code>persons</code> dataframe by joining it with <code>people</code> dataframe.
If <code>serial_no</code> is missing in <code>persons</code> dataframe, I need to join persons with <code>people</code> on <code>mail</code> column to get the <code>serial_no</code>.
If <code>mail</code> is missing in <code>persons</code> dataframe, I need to join <code>persons</code> with <code>people</code> on <code>serial_no</code> column to get the <code>mail</code>.
If there is no matching value found, <code>"NA"</code> should be loaded to it.</p>
<p>My final df should be like</p>
<pre><code>name age serial_no mail
John 25 100483 john@abc.com
Sam 49 448900 sam@abc.com
Will 63 229809 will@abc.com
Robert 20 299011 NA
Hill 78 567233 hill@abc.com
</code></pre>
<p>This is my code snippet</p>
<pre><code>final_df = persons.join(people, (persons[serial_no] == people[s_no]) \
& (persons['serial_no'].isNull() | (persons['serial_no'] == people['s_no'])), how='left', \
persons['serial_no'], people['serial_no'], persons['mail'].alias('persons_mail'), \
people['e_mail'].alias('people_mail'], \
when(people['serial_no.people'].isNull(), "NA").otherwise(people['serial_no.people']).alias('serial_no_people'])))
</code></pre>
<p>I'm blocked here and my code is not working. Can you please suggest how to proceed?</p>
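<p>The shape you're after is two separate left joins followed by a coalesce/fill, not one join with mixed conditions. Illustrated here in pandas for brevity (the same structure carries over to PySpark with <code>df.join(...)</code> and <code>F.coalesce</code>; column names are taken from the question):</p>

```python
import pandas as pd

persons = pd.DataFrame({
    "name": ["John", "Sam", "Will", "Robert", "Hill"],
    "age": [25, 49, 63, 20, 78],
    "serial_no": [100483, 448900, None, 299011, None],
    "mail": ["john@abc.com", "sam@abc.com", "will@abc.com", None, "hill@abc.com"],
})
people = pd.DataFrame({
    "name": ["John", "Sam", "Will", "Robert", "Hill"],
    # floats so the dtype matches persons.serial_no (which holds NaN)
    "s_no": [100483.0, 448900.0, 229809.0, 299011.0, 567233.0],
    "e_mail": ["john@abc.com", "sam@abc.com", "will@abc.com", None, "hill@abc.com"],
})

# join on mail to recover missing serial numbers
by_mail = persons.merge(people[["e_mail", "s_no"]],
                        left_on="mail", right_on="e_mail", how="left")
persons["serial_no"] = persons["serial_no"].fillna(by_mail["s_no"])

# join on serial number to recover missing mails
by_serial = persons.merge(people[["s_no", "e_mail"]],
                          left_on="serial_no", right_on="s_no", how="left")
persons["mail"] = persons["mail"].fillna(by_serial["e_mail"])

# anything still unmatched becomes "NA"
persons = persons.fillna("NA")
```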
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2023-12-13 13:04:24
| 2
| 434
|
user175025
|
77,653,793
| 4,417,769
|
influxdb-client-python in multiprocessing: TypeError("cannot pickle '_thread.lock' object")
|
<p>I'm running a multiprocessing.Pool and am downloading data via the influxdb-client-python.</p>
<p>The code boils down to:</p>
<pre class="lang-py prettyprint-override"><code>def do_stuff(day):
InfluxDBClient(
url=os.environ["INFLUX_HOST"],
token=os.environ["INFLUX_TOKEN"],
org=os.environ["INFLUX_ORGANIZATION"],
).query_api().query_raw(
f"""
from(bucket: "foo")
|> range(start: 2023-11-07T00:00:00+00:00, stop: 2023-11-08T00:00:00+00:00)
"""
)
with multiprocessing.get_context("spawn").Pool() as pool:
pool.map(
do_stuff,
[1, 2, 3, 4],
chunksize=1,
)
</code></pre>
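<p>A usual cause of <code>cannot pickle '_thread.lock' object</code> is an unpicklable client (or something closing over one) being shipped to the worker processes. One common pattern is to build the client once per worker via the pool's <code>initializer</code>; sketched below with a stand-in dict instead of a real <code>InfluxDBClient</code>:</p>

```python
import multiprocessing

_client = None  # one instance per worker process

def init_worker():
    # in real code: _client = InfluxDBClient(url=..., token=..., org=...)
    global _client
    _client = {"connected": True}  # stand-in for the unpicklable client

def do_stuff(day):
    # only `day` crosses the process boundary; the client is never pickled
    return day, _client["connected"]

if __name__ == "__main__":
    with multiprocessing.get_context("spawn").Pool(2, initializer=init_worker) as pool:
        results = pool.map(do_stuff, [1, 2, 3, 4], chunksize=1)
```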
|
<python><influxdb>
|
2023-12-13 13:02:20
| 1
| 1,228
|
sezanzeb
|
77,653,680
| 1,782,792
|
Correct use of buffer protocol for dynamic array
|
<p>I have a dynamic array type in C++ that I would like to expose through the <a href="https://docs.python.org/3/c-api/buffer.html" rel="nofollow noreferrer">buffer protocol</a>. The type is already exposed as a sequence in Python, but I want to build NumPy arrays with the C++ array data, and I am hoping that a buffer will be faster than the element-wise construction from a sequence.</p>
<p>Reading through the protocol description, I am not sure what is the correct way to do this. The problem is that, being a dynamic array, its memory may be reallocated, and the bounds of valid memory may change. But, from what I understand, the buffer protocol assumes that the exposed buffer will remain intact on the native side, at least as long as one Python buffer object is alive.</p>
<p>The only solution I can think of is copying the array contents into a new memory area when a buffer is requested and delete that memory after the buffer is no longer needed. But I am not sure if this complies with the buffer protocol, i.e. returning a buffer that may not represent the current state of the corresponding Python object.</p>
<p>The documentation on the <code>obj</code> field of the <code>Py_buffer</code> struct says:</p>
<blockquote>
<p>As a special case, for <em>temporary</em> buffers that are wrapped by <code>PyMemoryView_FromBuffer()</code> or <code>PyBuffer_FillInfo()</code> this field is <code>NULL</code>. In general, exporting objects MUST NOT use this scheme.</p>
</blockquote>
<p>If I did make a copy of the data on each buffer request, would it qualify as such a "temporary" buffer?</p>
<p>Obviously, making a copy of the data somewhat misses the point of the buffer protocol, but as I said I'm hoping NumPy array construction will be faster this way (that is, with just a <code>memcpy</code> copy instead of a loop over sequence items).</p>
|
<python><c++><python-c-api>
|
2023-12-13 12:43:44
| 1
| 59,921
|
javidcf
|
77,653,679
| 15,299,206
|
How to log the insert update and delete in pandas
|
<p>my csv is below</p>
<pre><code>Date,ProductID,Price,Quantity
2023-01-01,1001,10,1
2023-01-02,1001,10,1
2023-01-02,1011,10,6
</code></pre>
<p>My output_data.csv below</p>
<pre><code>ProductID,TotalSales
1001,20
1011,60
</code></pre>
<p>If I add, update, or delete rows in the input CSV, then output_data.csv has to reflect those changes.</p>
<p>new input csv</p>
<pre><code>Date,ProductID,Price,Quantity
2023-01-01,1001,10,2
2023-01-02,1001,10,1
2023-01-02,1011,10,6
2023-01-02,1012,10,6
</code></pre>
<p>My output would be below</p>
<pre><code>ProductID,TotalSales,OperationType
1001,30, Updated
1011,60, No Change
1012,60, Insert
</code></pre>
<p>For 1001 the OperationType is Updated, as the value changed from 20 to 30.
For 1011 it is No Change, as the value did not change.
For 1012 it is Insert, as 1012 is inserted into output_data.csv for the first time.</p>
<p>My code is below</p>
<pre><code>import pandas as pd
import logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
def extract(file_path):
"""Extract data from a CSV file."""
return pd.read_csv(file_path)
def transform(data):
"""Transform data by calculating total sales."""
data['TotalSales'] = data['Price'] * data['Quantity']
return data.groupby('ProductID')['TotalSales'].sum().reset_index()
def load(data, output_file_path):
"""Load data into a new CSV file."""
data.to_csv(output_file_path, index=False)
def insert_data(data, new_data):
"""Insert new data into the existing dataset."""
updated_data = pd.concat([data, new_data], ignore_index=True)
return updated_data
def delete_data(data, condition):
"""Delete rows based on a condition."""
deleted_rows = data[condition]
updated_data = data.drop(data[condition].index)
return updated_data, deleted_rows
def update_data(data, condition, update_values):
"""Update data based on a condition."""
updated_rows = data.loc[condition].copy()
data.loc[condition, update_values.columns] = update_values.values
return data, updated_rows
def identify_operation(original_data, updated_data):
"""Identify operation type for each row."""
merged = pd.merge(original_data, updated_data, on='ProductID', how='outer', suffixes=('_old', '_new'), indicator=True)
operations = []
for index, row in merged.iterrows():
if row['_merge'] == 'left_only':
operations.append('No Change')
elif row['_merge'] == 'right_only':
operations.append('Insert')
else:
if row['TotalSales_old'] != row['TotalSales_new']:
operations.append('Updated')
else:
operations.append('No Change')
return pd.Series(operations, name='OperationType') # Return as a Series
def run_pipeline(input_file_path, output_file_path):
"""Run the data pipeline."""
data = extract(input_file_path)
transformed_data = transform(data)
original_data = transformed_data.copy()
# Identify operation types
operations = identify_operation(original_data, transformed_data)
transformed_data = pd.concat([transformed_data, operations], axis=1) # Concatenate as a new column
load(transformed_data[['ProductID', 'TotalSales', 'OperationType']], output_file_path)
if __name__ == '__main__':
run_pipeline('data.csv', 'output_data.csv')
</code></pre>
<p>I am getting below output_data.csv</p>
<pre><code>ProductID,TotalSales,OperationType
1001,30,No Change
1011,60,No Change
1012,60,No Change
</code></pre>
<blockquote>
<p>The OperationType is wrong; every time it comes out as No Change.</p>
</blockquote>
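<p>The flags all come out as "No Change" because <code>run_pipeline</code> hands <code>identify_operation</code> a copy of the very totals it is comparing against. Comparing the new totals to the previously saved output instead gives the intended flags; a sketch (column names from the question):</p>

```python
import pandas as pd

def identify_operation(old_totals, new_totals):
    """Flag each product as Insert / Updated / No Change versus the
    previously saved totals."""
    merged = new_totals.merge(old_totals, on="ProductID", how="left",
                              suffixes=("", "_old"), indicator=True)

    def flag(row):
        if row["_merge"] == "left_only":   # not present in the old output
            return "Insert"
        if row["TotalSales"] != row["TotalSales_old"]:
            return "Updated"
        return "No Change"

    merged["OperationType"] = merged.apply(flag, axis=1)
    return merged[["ProductID", "TotalSales", "OperationType"]]

# previous output vs. freshly transformed totals, as in the question
old = pd.DataFrame({"ProductID": [1001, 1011], "TotalSales": [20, 60]})
new = pd.DataFrame({"ProductID": [1001, 1011, 1012], "TotalSales": [30, 60, 60]})
result = identify_operation(old, new)
```

<p>In <code>run_pipeline</code> that means reading the existing output_data.csv (when it exists) as <code>old_totals</code> before overwriting it.</p>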
|
<python><pandas>
|
2023-12-13 12:43:39
| 1
| 488
|
sim
|
77,653,619
| 11,748,924
|
AttributeError: DataFrameResampler object has no attribute interpolate in cuDF
|
<p>With similiar syntax, here is original of pandas dataframe function to resample and interpolate:</p>
<pre><code>def resample_and_interpolate(df, datetime_column):
# Convert datetime column to datetime format
df[datetime_column] = pd.to_datetime(df[datetime_column])
df = df.set_index(datetime_column)
# Resample to 1-minute intervals and interpolate
df_resampled = df.resample('1T').interpolate(method='linear')
# Resample to 1-hour intervals and forward fill and backward fill
resampled_df = df_resampled.resample('1H').first().ffill().bfill()
# Reset index to get datetime back as a column
resampled_df = resampled_df.reset_index()
return resampled_df
</code></pre>
<p>And here is the cuDF:</p>
<pre><code>def g_resample_and_interpolate(gdf, datetime_column):
# Convert datetime column to datetime format
    gdf[datetime_column] = cudf.to_datetime(gdf[datetime_column])
gdf = gdf.set_index(datetime_column)
# Resample to 1-minute intervals and interpolate
gdf_resampled = gdf.resample('1T').interpolate(method='linear')
# Resample to 1-hour intervals and forward fill and backward fill
resampled_gdf = gdf_resampled.resample('1H').first().ffill().bfill()
# Reset index to get datetime back as a column
resampled_gdf = resampled_gdf.reset_index()
return resampled_gdf
</code></pre>
<p>The original function of dataframe pandas is working fine, but the cuDF GPU dataframe is not working fine.</p>
<p>Usage:</p>
<pre><code>
resample_and_interpolate(gdf_dict[20000019].to_pandas(), 'charttime')
g_resample_and_interpolate(gdf_dict[20000019], 'charttime')
</code></pre>
<p>Returning error:</p>
<pre><code>
KeyError: 'interpolate'
6 # Resample to 1-minute intervals and interpolate
----> 7 df_resampled = df.resample('1T').interpolate(method='linear')
AttributeError: DataFrameResampler object has no attribute interpolate
</code></pre>
|
<python><pandas><dataframe><cudf>
|
2023-12-13 12:33:55
| 1
| 1,252
|
Muhammad Ikhwan Perwira
|
77,653,496
| 2,913,290
|
Python MySQL Cursor being blocked by read-only priviledges
|
<p>I've been using the Python MySQL connector (mysql-connector==2.2.9 and mysql==0.0.3) for almost a year and it worked like a charm.</p>
<p>However, our WRITE permissions were removed from a core database. So far, it should not be a problem, because we only fetch data from that database and write to other databases where we have the permissions.</p>
<p>However, since the cursor interface does not provide another way of fetching data other than <code>cursor.execute(query)</code>, It seems that MySQL cannot guarantee that the currently executed query is a read-only command. Therefore, I get the error below</p>
<pre><code>"Access denied for user 'myuser'@'10.64.XXX.XX' (using password: YES)]
</code></pre>
<p>The relevant code is below</p>
<pre><code>def get_connection(database, username, password):
return MySQLDatabase(host=getenv(DATABASE_HOST), port=getenv(DATABASE_PORT),
database=database,
user=username, password=password).connection
self.read_connection = self.get_connection(database, usr, pwd)
cursor_read = self.read_connection.cursor()
cursor_write = self.write_connection.cursor()
operative_function_query = """SELECT * FROM operative_function"""
cursor_read.execute(operative_function_query) #this line fails
results = cursor_read.fetchall()
</code></pre>
<ul>
<li><p>Trying the same query in another DB with write permissions does not
fail</p>
</li>
<li><p>I am able to run SELECT statements directly in the database using the
same username/password</p>
</li>
</ul>
<p>Has anyone faced this issue before? Is there a workaround to fetch data from a database with read-only privileges? (I expected to find a SELECT method in the interface, but there isn't one.)</p>
<p>Perhaps another python package out there?</p>
|
<python><mysql><mysql-python><mysql-connector>
|
2023-12-13 12:12:24
| 0
| 906
|
Daniel Vilas-Boas
|
77,653,406
| 901,188
|
getting inside column data change data frame to panda
|
<p>I am using Databricks with PySpark, and I have data in a DataFrame which I am converting to pandas; this changes the order of the array-type data inside my column.
The same issue occurs if I do <code>dataframe.collect()</code>.</p>
<p>Code snippet:</p>
<pre><code> result_df = prule_val.groupBy("CUSTOMER_CODE").agg(
collect_list("PRULE_CODE_PARSED").alias("prule_codes_list"),
collect_list("TOTAL_SALES").alias("A")
)
pandasDF = result_df.toPandas()
filtered_df = pandasDF[pandasDF['CUSTOMER_CODE'] == '0070331037']
</code></pre>
<p>screen shot of result before and after convert.</p>
<p><a href="https://i.sstatic.net/sDHE4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sDHE4.png" alt="enter image description here" /></a></p>
<p>What are we doing wrong, or what is the possible reason for this?</p>
|
<python><pyspark>
|
2023-12-13 11:56:14
| 0
| 757
|
Dilip
|
77,653,244
| 2,815,264
|
How to run RedisInsight V2 and a Redis Cluster on Docker localhost using user-defined bridge?
|
<p>We've noticed many peers are struggling with the RedisInsight V2 setup and a dockerized Redis Cluster on Windows. We are running Windows 11 and the latest Docker Desktop with WLS2 activated. Since macvlan, ipvlan and host network types are not available on Windows (development machines), we are somewhat limited.</p>
<p>Below we describe first how we are creating our local development setup. Which is fine - with the only caveat of the ugly if-statement in our python cluster connection.</p>
<p><strong>Our Question:</strong>
<strong>How are other windows teams developing locally for/with a Redis Cluster?</strong></p>
<p><strong>How we develop locally:</strong></p>
<p>We pull up the cluster nodes with docker compose including a custom image of RedisInsight V2.</p>
<pre><code>version: '3.8'
name: 'fa-redis-cluster-local'
services:
node1:
container_name: leader_1
image: redis/redis-stack-server:latest
ports:
- "7000:6379"
#- "8001:8001"
volumes:
- ./redis-data/node1:/data
command: redis-server /data/redis-cluster.conf
environment:
- ALLOW_EMPTY_PASSWORD=yes
networks:
network_redis_cluster:
ipv4_address: 172.30.0.11
node2:
container_name: leader_2
image: redis/redis-stack-server:latest
ports:
- "7001:6379"
volumes:
- ./redis-data/node2:/data
command: redis-server /data/redis-cluster.conf
environment:
- ALLOW_EMPTY_PASSWORD=yes
networks:
network_redis_cluster:
ipv4_address: 172.30.0.12
node3:
container_name: leader_3
image: redis/redis-stack-server:latest
ports:
- "7002:6379"
volumes:
- ./redis-data/node3:/data
command: redis-server /data/redis-cluster.conf
environment:
- ALLOW_EMPTY_PASSWORD=yes
networks:
network_redis_cluster:
ipv4_address: 172.30.0.13
redisinsight:
container_name: redisinsight
image: oblakstudio/redisinsight:latest
ports:
- "8001:5000"
depends_on:
- node1
- node2
- node3
environment:
- ALLOW_EMPTY_PASSWORD=yes
networks:
network_redis_cluster:
ipv4_address: 172.30.0.14
# redisinsight:
# image: redislabs/redisinsight:latest
# ports:
# - "8001:8001"
# environment:
# - ALLOW_EMPTY_PASSWORD=yes
# networks:
# network_redis_cluster:
# ipv4_address: 172.30.0.14
networks:
network_redis_cluster:
name: network_redis_cluster
driver: bridge
ipam:
driver: default
config:
- subnet: 172.30.0.0/24
gateway: 172.30.0.1
</code></pre>
<p>Then we create the cluster binding the nodes to each other:</p>
<pre><code>docker exec -it leader_1 redis-cli --cluster create 172.30.0.11:6379 172.30.0.12:6379 172.30.0.13:6379 --cluster-replicas 0
</code></pre>
<p>We need to map the bridge network address/port with the localhost address/port in our python code BEFORE we can invoke a redis-cluster object.</p>
<pre><code>from redis.cluster import RedisCluster, ClusterNode
nodes = [ClusterNode('127.0.0.1', 7000), ClusterNode('127.0.0.1', 7001), ClusterNode('127.0.0.1', 7002)]
if ENVIRONMENT == 'DEV':
address_remap_dict = {
"172.30.0.11:6379": ("127.0.0.1", 7000),
"172.30.0.12:6379": ("127.0.0.1", 7001),
"172.30.0.13:6379": ("127.0.0.1", 7002),
}
def address_remap(address):
host, port = address
return address_remap_dict.get(f"{host}:{port}", address)
# rc = RedisCluster(startup_nodes=nodes, decode_responses=True, skip_full_coverage_check=True)
rc = RedisCluster(startup_nodes=nodes, decode_responses=True, skip_full_coverage_check=True, address_remap=address_remap)
else:
rc = RedisCluster(startup_nodes=nodes, decode_responses=True, skip_full_coverage_check=True)
</code></pre>
<p>This works and we can connect to the cluster using our Python app.</p>
<pre><code>INFO redis nodes: {'cluster_state': 'ok', 'cluster_slots_assigned': '16384', 'cluster_slots_ok': '16384', 'cluster_slots_pfail': '0', 'cluster_slots_fail': '0', 'cluster_known_nodes': '3', 'cluster_size': '3', 'cluster_current_epoch': '3', 'cluster_my_epoch': '1', 'cluster_stats_messages_ping_sent': '578', 'cluster_stats_messages_pong_sent': '649', 'cluster_stats_messages_sent': '1227', 'cluster_stats_messages_ping_received': '647', 'cluster_stats_messages_pong_received': '578', 'cluster_stats_messages_meet_received': '2', 'cluster_stats_messages_received': '1227'}
</code></pre>
<p>Now, we notice people tried to connect to the cluster having a host-installed RedisInsight V2 Version. That will not work, since you cannot reach the hosts within the bridged networks by hostname or ip, which Redis uses to answer the requests from RedisInsight. Thus, use the containerized RedisInsight V2 available on http://localhost:8001. Then add a database using one of the IPs of the nodes defined in the docker compose file. That is 172.30.0.11-13. Make sure you use the 6379 port for all of these IPs and not 7000-7002.</p>
<p>That does the trick(s). Is there anyone who has a better way?</p>
<p><a href="https://i.sstatic.net/zUU3C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zUU3C.png" alt="enter image description here" /></a></p>
|
<python><redis><redis-cluster><redis-py><redisinsights>
|
2023-12-13 11:32:52
| 0
| 2,068
|
feder
|
77,653,127
| 241,605
|
How can you suppress logging for a block of code in structlog?
|
<p>I am writing a test of an error condition, checking that it occurs and is handled. I don't want the test output to be spammed with error messages for errors that have been deliberately provoked and handled. Structlog is being used for all the logging.</p>
<p>How can I temporarily suppress all log output for a block of code, so that logging resumes normally afterwards?</p>
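<p>When structlog is configured to route events through the standard library's <code>logging</code> module (a common setup), <code>logging.disable()</code> gives a simple context manager for this; a sketch (pure-structlog configurations that bypass stdlib logging would instead need a filtering processor or <code>wrapper_class</code>):</p>

```python
import contextlib
import logging

@contextlib.contextmanager
def suppress_logging(highest_level=logging.CRITICAL):
    """Silence all stdlib-routed log output inside the block, then restore."""
    previous = logging.root.manager.disable
    logging.disable(highest_level)
    try:
        yield
    finally:
        logging.disable(previous)

# usage:
# with suppress_logging():
#     provoke_and_handle_error()
```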
|
<python><structlog>
|
2023-12-13 11:13:59
| 1
| 20,791
|
Matthew Strawbridge
|
77,653,116
| 10,979,307
|
Why does BeautifulSoup give me more tags than available?
|
<p>I'm extracting the audios of a bunch of words from <a href="https://www.oxfordlearnersdictionaries.com/" rel="nofollow noreferrer">Oxford Learner's Dictionaries</a> using <code>BeautifulSoup</code> in Python. Here's the code:</p>
<pre class="lang-py prettyprint-override"><code>#!/bin/python3
import sys
import requests
from bs4 import BeautifulSoup
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
'Accept-Language': 'en-US,en;q=0.5',
'DNT': '1',
'Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'Sec-Fetch-Dest': 'document',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-Site': 'none',
'Sec-Fetch-User': '?1',
}
response = requests.get(sys.argv[1], headers=headers)
print("HTTP Response Status Code:", response.status_code)
soup = BeautifulSoup(response.content, "html.parser")
print(list(soup.find(class_="phonetics")))
</code></pre>
<p>when I run the program using the following command</p>
<pre class="lang-bash prettyprint-override"><code>./english_audio.py "https://www.oxfordlearnersdictionaries.com/definition/english/hello_1?q=hello"
</code></pre>
|
<python><web-scraping><beautifulsoup>
|
2023-12-13 11:12:46
| 1
| 761
|
Amirreza A.
|
77,653,080
| 6,307,685
|
Simplifying expression with SymPy by specifying ranges
|
<p>I would like to simplify an expression with cases/branches by specifying the range of my <code>Idx</code> object. Consider the following example:</p>
<pre class="lang-py prettyprint-override"><code>import sympy as sym
i = sym.Idx("i")
j = sym.Idx("j")
N = sym.Symbol("N", integer=True)
expr = sym.Sum(sym.KroneckerDelta(i, j), (i, 1, N))
expr.doit()
</code></pre>
<p>This outputs:</p>
<p><a href="https://i.sstatic.net/sSPfp.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sSPfp.jpg" alt="enter image description here" /></a></p>
<p>I tried to use <code>refine</code> together with <code>Q</code>, which I would expect to work by reading the documentation. But it does not work:</p>
<pre class="lang-py prettyprint-override"><code>sym.refine(expr.doit(), sym.Q.positive_definite(j))
</code></pre>
<p>This outputs the same as before. It does not simplify the first case condition to N >= j, as I expected.
How can I incorporate such conditions/assumptions when simplifying an expression?</p>
|
<python><sympy><symbolic-math>
|
2023-12-13 11:07:45
| 1
| 761
|
soap
|
77,653,042
| 8,160,318
|
Default project credentials in GCP Cloud Functions not implicit anymore?
|
<p>I have a Google Cloud Function that uses GCP client libraries like the python <a href="https://cloud.google.com/python/docs/reference/storage/latest" rel="nofollow noreferrer"><code>google-cloud-storage</code> (v2.1.0)</a>:</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud.storage import Client as StorageClient
storage_client = StorageClient()
bucket = storage_client.bucket("xyz")
</code></pre>
<p>When no explicit GCP project ID was provided, the deployed function would default to the project ID of the deployment.</p>
<p>This behavior seems to have recently changed because:</p>
<ul>
<li>A) I don't see any mention of the project-based defaults in the <a href="https://cloud.google.com/docs/authentication/provide-credentials-adc" rel="nofollow noreferrer">Application Default Credentials (ADC) docs</a></li>
<li>and B) I have to specify the <code>project</code> when constructing the <code>Client</code> for the function to work:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from google.cloud.storage import Client as StorageClient
storage_client = StorageClient(project="abc") # <---
bucket = storage_client.bucket("xyz")
</code></pre>
<p>I don't see any significant <strong>recent</strong> (<2mo) changes in the <a href="https://github.com/googleapis/python-storage/blob/main/CHANGELOG.md" rel="nofollow noreferrer">Changelog on GH</a> nor in the <a href="https://cloud.google.com/functions/docs/release-notes" rel="nofollow noreferrer">Release Notes</a>.</p>
<p>Obviously, I can attach dedicated service accounts for each service & function — <a href="https://cloud.google.com/docs/authentication/provide-credentials-adc#attached-sa" rel="nofollow noreferrer">as advised in the docs</a>…</p>
<p>But I have a bunch of such functions in production <strong>so I wonder if I need to update all of them?</strong></p>
|
<python><google-cloud-platform><google-cloud-functions><google-cloud-storage>
|
2023-12-13 11:02:53
| 1
| 16,993
|
Jozef - Spatialized.io
|