QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,381,226 | 9,137,547 | Python multiprocessing on ipykernel | <p>I want to run some parallel CPU-bound computations in Python. I was writing my code in a notebook running on my machine with ipykernel.</p>
<p>I reduced my problem to the following simple replicable code:</p>
<pre><code>from multiprocessing import Pool
from pprint import pprint


def square(num):
    return num ** 2


if __name__ == '__main__':
    lst = list(range(20))
    with Pool() as pool:
        res = pool.map(square, lst)
    pprint(res)
</code></pre>
<p>This works fine when I run it as a script. But if I copy-paste the same code into a notebook.ipynb and run it there, it keeps running far longer than it should and emits obscure warnings, repeated several times, as follows:</p>
<pre><code>0.01s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
</code></pre>
<p>Why does my code not run in a notebook, and how can I solve this?</p>
<p>I also tried removing the <code>if __name__ == '__main__'</code> guard in the notebook, to no avail.</p>
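<p>For reference, a minimal sketch of the usual workaround (assumption: the hang comes from ipykernel effectively forcing the <code>spawn</code> start method, which cannot re-import <code>square</code> from the notebook's <code>__main__</code>): define the worker in a real, importable module. The temporary-module dance below is only a self-contained stand-in for a normal <code>workers.py</code> saved next to the notebook.</p>

```python
# Sketch of the common notebook workaround. Writing the worker to a real
# module makes it importable by reference in child processes spawned by
# the kernel, instead of being pickled out of __main__.
import pathlib
import sys
import tempfile
from multiprocessing import get_context

moddir = tempfile.mkdtemp()
pathlib.Path(moddir, "workers.py").write_text(
    "def square(num):\n    return num ** 2\n"
)
sys.path.insert(0, moddir)

from workers import square  # now picklable by module reference

if __name__ == "__main__":
    with get_context("spawn").Pool(2) as pool:
        print(pool.map(square, range(5)))  # [0, 1, 4, 9, 16]
```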
| <python><jupyter-notebook><multiprocessing> | 2023-10-28 21:04:31 | 1 | 659 | Umberto Fontanazza |
77,381,192 | 2,854,555 | In VS Code, how can I change the problem level of a certain error/violation from the Flake8 extension? | <p>I'm using VS Code with Flake8 configured to check some obvious issues for my Python code. However, some of its errors are not really errors in my current development phase, e.g. <code>E501</code> (line too long) and <code>F401</code> (unused import).</p>
<p>The annoying thing is that the relevant lines are marked in red, which makes more serious errors non-obvious.</p>
<p>Is there a way I can tell Flake8 to treat them as warnings instead of errors?</p>
<p>I searched all over, but only discovered ways to either ignore the check altogether, or explicitly mark one line as ignored, e.g. <a href="https://stackoverflow.com/questions/47876079/how-to-tell-flake8-to-ignore-comments">How to tell flake8 to ignore comments</a>. They are not what I need.</p>
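<p>A hedged sketch (assumption: the ms-python.flake8 VS Code extension is in use): that extension exposes a <code>flake8.severity</code> setting mapping codes or code prefixes to VS Code problem levels, which could go in settings.json roughly like this:</p>

```json
{
    "flake8.severity": {
        "E": "Error",
        "W": "Warning",
        "F": "Error",
        "E501": "Warning",
        "F401": "Hint"
    }
}
```

<p>The per-code keys here are illustrative; check the extension's documentation for the exact mapping it accepts.</p>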
| <python><visual-studio-code><flake8> | 2023-10-28 20:52:16 | 2 | 691 | renyuneyun |
77,381,158 | 1,084,174 | ValueError: Shapes (None,) and (None, 100, 6) are incompatible | <p>I am trying to solve a classification problem with an LSTM Sequential model. My DataFrame structure is:</p>
<blockquote>
<p>feature1, feature2, feature3, category</p>
</blockquote>
<p><a href="https://i.sstatic.net/FGcHT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FGcHT.png" alt="enter image description here" /></a></p>
<p>The features are accelerometer sensor data points (floats) and category is the label for the activity classes (values 0–5).</p>
<p>I have split the data into train and test sets with:</p>
<pre><code>from sklearn.model_selection import train_test_split
x_columns = acc.iloc[:, 0:3]
y_columns = acc.iloc[:, 3:4]
trainx, testx, trainy, testy = train_test_split(x_columns, y_columns, test_size=0.2, shuffle=False)
assert(len(trainx) == len(trainy))
</code></pre>
<p>I have also prepared sequence as the data are time correlated.</p>
<pre><code>from scipy.stats import mode
window_len = 100
stride_len = 10
def sequence_generator(x, y, length, stride):
    seq_x = []
    seq_y = []
    data_length = len(x)
    for i in range(0, data_length - length + 1, stride):
        input_sequence = x.iloc[i : i + length]
        target_sequence = y.iloc[i : i + length]
        target_mode = mode(target_sequence.values)[0][0]
        seq_x.append(input_sequence)
        seq_y.append(target_mode)
    return np.array(seq_x), np.array(seq_y)
tx, ty = sequence_generator(trainx, trainy, window_len, stride_len)
vx, vy = sequence_generator(testx, testy, window_len, stride_len)
</code></pre>
<p>I have also one-hot encoded the target column:</p>
<pre><code>from keras.utils import to_categorical
tty = to_categorical(ty, num_classes=len(set(ty))) # num_classes = 6
vvy = to_categorical(vy, num_classes=len(set(ty)))
</code></pre>
<p>Here are some outputs at this point that may help you identify my problem:</p>
<blockquote>
<p>tx.shape = (113020, 100, 3)</p>
</blockquote>
<blockquote>
<p>ty.shape = (113020, 6)</p>
</blockquote>
<blockquote>
<p>set(ty) = {0, 1, 2, 3, 4, 5}</p>
</blockquote>
<p><a href="https://i.sstatic.net/k7e7h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k7e7h.png" alt="enter image description here" /></a></p>
<p>At this point, I have defined my model architecture. <strong>Please ignore the accuracy of the model, because I am using it just for learning purposes.</strong></p>
<pre><code>import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from keras.layers import LSTM
from keras.models import Sequential, Model
from keras.layers import Dense, Input, Dropout, Flatten
from keras.utils import to_categorical
n_class = len(set(ty)) # 6
n_features = len(x_columns.columns) # 3
model2 = Sequential()
model2.add(Input((window_len, n_features)))
model2.add(Flatten())
model2.add(Dense(128, activation='relu'))
model2.add(Dense(n_class, activation='softmax'))
model2.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#history = model2.fit(tx, ty, epochs=5)
history = model2.fit(tx, tty, epochs=5)
</code></pre>
<p>When I run it, the following error occurs. What am I missing?</p>
<pre><code>ValueError Traceback (most recent call last)
Cell In[246], line 19
17 model2.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
18 #history = model2.fit(tx, ty, epochs=5)
---> 19 history = model2.fit(tx, tty, epochs=5)
File ~/anaconda3/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File /tmp/__autograph_generated_filefn9kd22q.py:15, in outer_factory.<locals>.inner_factory.<locals>.tf__train_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
ValueError: in user code:
File "/Users/hissain/anaconda3/lib/python3.11/site-packages/keras/src/engine/training.py", line 1377, in train_function *
return step_function(self, iterator)
File "/Users/hissain/anaconda3/lib/python3.11/site-packages/keras/src/engine/training.py", line 1360, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/Users/hissain/anaconda3/lib/python3.11/site-packages/keras/src/engine/training.py", line 1349, in run_step **
outputs = model.train_step(data)
File "/Users/hissain/anaconda3/lib/python3.11/site-packages/keras/src/engine/training.py", line 1127, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "/Users/hissain/anaconda3/lib/python3.11/site-packages/keras/src/engine/training.py", line 1185, in compute_loss
return self.compiled_loss(
File "/Users/hissain/anaconda3/lib/python3.11/site-packages/keras/src/engine/compile_utils.py", line 277, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/Users/hissain/anaconda3/lib/python3.11/site-packages/keras/src/losses.py", line 143, in __call__
losses = call_fn(y_true, y_pred)
File "/Users/hissain/anaconda3/lib/python3.11/site-packages/keras/src/losses.py", line 270, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/Users/hissain/anaconda3/lib/python3.11/site-packages/keras/src/losses.py", line 2221, in categorical_crossentropy
return backend.categorical_crossentropy(
File "/Users/hissain/anaconda3/lib/python3.11/site-packages/keras/src/backend.py", line 5575, in categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
ValueError: Shapes (None, 6) and (None, 100, 6) are incompatible
</code></pre>
<p><strong>Update 1:</strong></p>
<p>As Joe suggested in a <a href="https://stackoverflow.com/questions/77381158/valueerror-shapes-none-and-none-100-6-are-incompatible?noredirect=1#comment136418606_77381158">comment</a>, I have added a Flatten layer, but no luck.</p>
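<p>For what it's worth, the incompatibility in the traceback can be reproduced with plain NumPy shapes: the labels carry one vector per window, <code>(None, 6)</code>, while the model output in the trace is per-timestep, <code>(None, 100, 6)</code>. A hedged sketch (assumption: the fix is to make the network emit one vector per window, e.g. <code>return_sequences=False</code> on the last recurrent layer, so output and target shapes agree):</p>

```python
import numpy as np

N, T, K = 8, 100, 6                          # windows, timesteps, classes
y_true = np.eye(K)[np.zeros(N, dtype=int)]   # one-hot labels, shape (8, 6)
y_pred_seq = np.full((N, T, K), 1.0 / K)     # per-timestep output, shape (8, 100, 6)

# categorical_crossentropy needs matching shapes:
print(y_true.shape, y_pred_seq.shape)        # (8, 6) (8, 100, 6) -> incompatible

# collapsing the time axis (conceptually what return_sequences=False does)
y_pred = y_pred_seq[:, -1, :]                # last timestep only, shape (8, 6)
assert y_true.shape == y_pred.shape
```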
| <python><pandas><machine-learning><keras><lstm> | 2023-10-28 20:41:04 | 0 | 40,671 | Sazzad Hissain Khan |
77,380,915 | 5,947,365 | You cannot call this from an async context - use a thread or sync_to_async. Django ORM | <p>I wrote the following code:</p>
<pre><code>class BookmakerA:
    def __init__(self) -> None:
        self.bookmaker = None

    async def _init(self):
        self.bookmaker, _ = await Bookmaker.objects.aget_or_create(name="BookmakerA", defaults={"name": "BookmakerA"})
</code></pre>
<p>I call this class from a Celery task which looks as follows:</p>
<pre><code>@shared_task
def get_bookmaker_matches():
    start = time.time()
    bookmakera = BookmakerA()
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(bookmakera._init())
</code></pre>
<p>This however results in the following error:</p>
<pre><code>django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async.
</code></pre>
<p>Why does this happen, and how do I resolve it?</p>
<p>Even if I put it like this:</p>
<pre><code>await sync_to_async(Bookmaker.objects.get_or_create, thread_sensitive=True)(name="BookmakerA", defaults={"name": "BookmakerA"})
</code></pre>
<p>It results in the same error.</p>
<p>I am using Django 4.2.6, which supports the async ORM (<code>aget_or_create</code>, <code>aget</code>, and so on).</p>
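<p>A stdlib-only sketch of the general fix pattern: run the blocking ORM call on a worker thread from the async code (this is essentially what <code>asgiref.sync.sync_to_async</code> does under the hood). The <code>blocking_get_or_create</code> helper below is a hypothetical stand-in for the real ORM call:</p>

```python
import asyncio

def blocking_get_or_create(name):
    # hypothetical stand-in for Bookmaker.objects.get_or_create(...)
    return {"name": name}, True

async def init():
    loop = asyncio.get_running_loop()
    # push the synchronous call onto a thread so the event loop never
    # runs blocking ORM code directly
    obj, created = await loop.run_in_executor(None, blocking_get_or_create, "BookmakerA")
    return obj, created

obj, created = asyncio.run(init())
print(obj, created)  # {'name': 'BookmakerA'} True
```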
| <python><django><asynchronous><python-asyncio> | 2023-10-28 19:26:38 | 0 | 607 | Sander Bakker |
77,380,817 | 3,188,278 | Declare Instance variables in django-formtools SessionWizardView | <p>Using <code>SessionWizardView</code>, I want to declare instance variables that persist between steps. However, they are not persistent when using:</p>
<p><code>def __init__(self): self.variable = {}</code></p>
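<p>A framework-free sketch of why this fails (assumption: like most Django class-based views, the wizard is instantiated fresh for every request/step, so <code>__init__</code> state never survives; persistent state belongs in the wizard's session storage, e.g. <code>self.storage.extra_data</code>, instead):</p>

```python
class StepView:
    def __init__(self):
        self.variable = {}   # re-created on every instantiation

def handle_step(view_cls, shared_storage):
    view = view_cls()                    # one fresh instance per request
    view.variable["hits"] = view.variable.get("hits", 0) + 1
    shared_storage["hits"] = shared_storage.get("hits", 0) + 1
    return view.variable["hits"], shared_storage["hits"]

storage = {}                             # stands in for self.storage.extra_data
print(handle_step(StepView, storage))    # (1, 1)
print(handle_step(StepView, storage))    # (1, 2) - instance state reset, storage persists
```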
| <python><django><django-formtools> | 2023-10-28 18:55:00 | 1 | 499 | T_Torture |
77,380,448 | 5,246,226 | Python class missing attribute that was initialized? | <p>I am trying to create a class that adds counting on top of the <code>multiprocessing</code> Queue:</p>
<pre><code>import multiprocessing
from multiprocessing import Value
from multiprocessing.queues import Queue
class SharedCounter(object):
    """ A synchronized shared counter.

    The locking done by multiprocessing.Value ensures that only a single
    process or thread may read or write the in-memory ctypes object. However,
    in order to do n += 1, Python performs a read followed by a write, so a
    second process may read the old value before the new one is written by the
    first process. The solution is to use a multiprocessing.Lock to guarantee
    the atomicity of the modifications to Value.

    This class comes almost entirely from Eli Bendersky's blog:
    http://eli.thegreenplace.net/2012/01/04/shared-counter-with-pythons-multiprocessing/
    """

    def __init__(self, n=0):
        self.count = Value('i', n)

    def increment(self, n=1):
        """ Increment the counter by n (default = 1) """
        with self.count.get_lock():
            self.count.value += n

    @property
    def value(self):
        """ Return the value of the counter """
        return self.count.value


class CounterQueue(Queue):
    """ A portable implementation of multiprocessing.Queue.

    Because of multithreading / multiprocessing semantics, Queue.qsize() may
    raise the NotImplementedError exception on Unix platforms like Mac OS X
    where sem_getvalue() is not implemented. This subclass addresses this
    problem by using a synchronized shared counter (initialized to zero) and
    increasing / decreasing its value every time the put() and get() methods
    are called, respectively. This not only prevents NotImplementedError from
    being raised, but also allows us to implement a reliable version of both
    qsize() and empty().
    """

    def __init__(self, *args, **kwargs):
        self.size = SharedCounter(0)
        super(CounterQueue, self).__init__(ctx=multiprocessing.get_context(), *args, **kwargs)

    def put(self, *args, **kwargs):
        self.size.increment(1)
        super(CounterQueue, self).put(*args, **kwargs)

    def get(self, *args, **kwargs):
        self.size.increment(-1)
        return super(CounterQueue, self).get(*args, **kwargs)

    def qsize(self):
        """ Reliable implementation of multiprocessing.Queue.qsize() """
        return self.size.value

    def empty(self):
        """ Reliable implementation of multiprocessing.Queue.empty() """
        return not self.qsize()

    def clear(self):
        """ Remove all elements from the Queue. """
        while not self.empty():
            self.get()
</code></pre>
<p>However, when I try to pass this object as an argument into another process,</p>
<pre><code>for i in range(len(multiples)):
    res_queues.append(CounterQueue())

process = mp.Process(name="test",
                     target=function,
                     args=(res_queues))
process.daemon = True
process.start()
</code></pre>
<p>I get an AttributeError when calling <code>put</code>: <code>AttributeError: 'CounterQueue' object has no attribute 'size'</code>. However, I've confirmed the code is correct since the following code executes without issue:</p>
<pre><code>>>> from python.multiprocessing.queue import CounterQueue
>>> a = CounterQueue()
>>> a.put(1)
>>> a.qsize()
1
</code></pre>
<p>I'm wondering if I'm missing something with respect to Python specifics here?</p>
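<p>A hedged explanation sketch: <code>multiprocessing.queues.Queue</code> defines <code>__getstate__</code>/<code>__setstate__</code> so that only its pipe/semaphore internals travel to the child when the queue is pickled for a process start; extra attributes like <code>size</code> are silently dropped. Extending both methods (assumption: this matches CPython's stock <code>Queue</code> pickling protocol) keeps the counter:</p>

```python
import multiprocessing
from multiprocessing import Value
from multiprocessing.queues import Queue

class CounterQueue(Queue):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, ctx=multiprocessing.get_context(), **kwargs)
        self.size = Value('i', 0)

    def __getstate__(self):
        # include our counter alongside the stock queue state
        return (super().__getstate__(), self.size)

    def __setstate__(self, state):
        qstate, self.size = state
        super().__setstate__(qstate)

    def put(self, *args, **kwargs):
        with self.size.get_lock():
            self.size.value += 1
        super().put(*args, **kwargs)

    def get(self, *args, **kwargs):
        item = super().get(*args, **kwargs)
        with self.size.get_lock():
            self.size.value -= 1
        return item

    def qsize(self):
        return self.size.value

if __name__ == "__main__":
    q = CounterQueue()
    q.put(42)
    print(q.qsize())  # 1
```

<p>Note also that under the <code>fork</code> start method no pickling happens at all, which is one reason the same class can behave differently across platforms.</p>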
| <python><multiprocessing> | 2023-10-28 17:13:14 | 1 | 759 | Victor M |
77,380,240 | 9,511,223 | How to access message field inside iframe on protonmail.com? | <p>I am trying to send an email with Python Selenium on the Protonmail server. Everything works fine until I get to the email message.</p>
<p>To get to the email message field I use the following XPath: <code>"//div[@id='rooster-editor']/div[1]"</code>. It matches a unique element when searching manually through the DOM (Chromium gives the same XPath as well).</p>
<p>But within a script this XPath throws a 'no such element' exception. I first thought the error was caused by the page not being completely loaded, as explained in this post: <a href="https://stackoverflow.com/questions/16739319/selenium-webdriver-nosuchelementexceptions?rq=4">Selenium Webdriver - NoSuchElementExceptions</a>.
To avoid this issue, I inserted a 20-second <code>implicitly_wait</code>, but the issue remains.</p>
<p>I do not think this would solve the issue either:
<code>EC.presence_of_element_located((By.XPATH, "//div[@id='rooster-editor']/div[1]"))</code>, because my snippet is already slower than sending an email manually.</p>
<p>To summarize, here is where I introduced waits or <code>time.sleep</code> calls in the snippet: click on the 'New message' button -> implicit wait 20 s -> fill recipient email -> 2 s sleep -> email subject -> 3 s sleep -> then email message. I therefore don't think the problem is caused by an incompletely loaded page.</p>
<p>So my question is: What could bring such an exception?</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.common.by import By
import time


class FindProton():
    def test(self, recipient, subject, msg):
        self.recipient = recipient
        self.subject = subject
        self.msg = msg
        baseUrl = 'https://www.protonmail.com'
        driver = webdriver.Firefox()
        driver.get(baseUrl)
        driver.maximize_window()
        driver.implicitly_wait(5)
        time.sleep(2)
        # SIGN IN
        elementXpath = driver.find_element(By.XPATH, "//span[text()='Sign in']")
        elementXpath.click()
        # USERNAME
        time.sleep(3)
        # elementId_0 = driver.find_element(By.ID, 'username').click()
        elementId_0 = driver.find_element(By.ID, 'username')
        # elementId_0.send_keys(self.user)
        elementId_0.send_keys('xxxxxxx')
        # PASSWORD
        time.sleep(1)
        elementId_1 = driver.find_element(By.ID, 'password')
        # elementId_1.send_keys(self.passwd)
        elementId_1.send_keys('xxxxxx')
        time.sleep(2)
        elementId_2 = driver.find_element(By.XPATH, "//button[text()='Sign in']")
        elementId_2.click()
        # my router is very slow, so I introduce a long implicit wait...
        driver.implicitly_wait(20)
        # time.sleep(1)
        elementId_3 = driver.find_element(By.XPATH, "//button[text()='New message']")
        elementId_3.click()
        driver.implicitly_wait(15)
        # recipient; part of id is dynamic, so use "id contains"
        element_4 = driver.find_element(By.XPATH, "//input[contains(@id,'to-composer-')]")
        element_4.send_keys(self.recipient)
        time.sleep(2)
        # subject; part of id is dynamic, so use "id contains"
        element_5 = driver.find_element(By.XPATH, "//input[contains(@id,'subject-composer-')]")
        element_5.send_keys(self.subject)
        time.sleep(3)
        # email message: BUG HERE!
        element_6 = driver.find_element(By.XPATH, "//div[@id='rooster-editor']/div[1]").click()


ff = FindProton()
ff.test('unknown0j0@gmail.com', 'no subject email', 'this is a short message')
</code></pre>
| <python><selenium-webdriver><nosuchelementexception> | 2023-10-28 16:11:51 | 1 | 315 | achille |
77,380,239 | 3,760,986 | How is a multiprocessing.Event passed to a child process under the hood? | <p>According to <a href="https://stackoverflow.com/questions/56912846/why-is-pickle-and-multiprocessing-picklability-so-different-in-python/56916787#56916787">this SO answer</a>, when a <code>multiprocessing.Queue</code> is passed to a child process, what is actually sent is a file descriptor (or handle) obtained from <a href="https://docs.python.org/3/library/os.html#os.pipe" rel="nofollow noreferrer">pipe</a>, instead of pickling the <code>Queue</code> object itself.</p>
<p>How does this work with <code>multiprocessing.Event</code>, e.g. when doing a</p>
<pre><code>cancel_event = multiprocessing.Event()
process = multiprocessing.Process(target=worker_function, args=(cancel_event, ))
</code></pre>
<p>I would assume that <code>Event</code> must also have some OS-related thing which is sent to the child process. Is this also a handle for a pipe like with queues?</p>
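<p>Essentially yes, with a semaphore in place of a pipe (hedged summary): <code>Event</code> is built from <code>multiprocessing</code> lock/condition primitives whose <code>SemLock</code> core pickles down to an OS-level handle/name that the child re-attaches to, not a copy of the Python object's state. A quick check that both processes really share one underlying object:</p>

```python
import multiprocessing as mp

def worker(cancel_event):
    cancel_event.set()   # flips the shared, kernel-backed flag

if __name__ == "__main__":
    # "fork" chosen here (POSIX only) just so the sketch runs without
    # re-importing __main__; the sharing behaviour is the same under spawn
    ctx = mp.get_context("fork")
    ev = ctx.Event()
    p = ctx.Process(target=worker, args=(ev,))
    p.start()
    p.join()
    print(ev.is_set())   # True - the child's set() is visible to the parent
```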
| <python><events><multiprocessing><pickle> | 2023-10-28 16:11:48 | 1 | 964 | Joerg |
77,380,210 | 12,343,115 | rapids cannot import cudf: Error at driver init: Call to cuInit results in CUDA_ERROR_NO_DEVICE (100) | <p>To install RAPIDS, I have already installed WSL2.</p>
<p>But I still get the following error when importing cudf:</p>
<pre><code>/home/zy-wsl/miniconda3/envs/rapids-23.12/lib/python3.10/site-packages/cudf/utils/_ptxcompiler.py:61: UserWarning: Error getting driver and runtime versions:
stdout:
stderr:
Traceback (most recent call last):
File "/home/zy-wsl/miniconda3/envs/rapids-23.12/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 258, in ensure_initialized
self.cuInit(0)
File "/home/zy-wsl/miniconda3/envs/rapids-23.12/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 331, in safe_cuda_api_call
self._check_ctypes_error(fname, retcode)
File "/home/zy-wsl/miniconda3/envs/rapids-23.12/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 399, in _check_ctypes_error
raise CudaAPIError(retcode, msg)
numba.cuda.cudadrv.driver.CudaAPIError: [100] Call to cuInit results in CUDA_ERROR_NO_DEVICE
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 4, in <module>
File "/home/zy-wsl/miniconda3/envs/rapids-23.12/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 296, in __getattr__
self.ensure_initialized()
File "/home/zy-wsl/miniconda3/envs/rapids-23.12/lib/python3.10/site-packages/numba/cuda/cudadrv/driver.py", line 262, in ensure_initialized
raise CudaSupportError(f"Error at driver init: {description}")
...
Not patching Numba
warnings.warn(msg, UserWarning)
---------------------------------------------------------------------------
CudaSupportError Traceback (most recent call last)
/mnt/d/learn-rapids/Untitled.ipynb Cell 4 line 1
----> 1 import cudf
File ~/miniconda3/envs/rapids-23.12/lib/python3.10/site-packages/cudf/__init__.py:26
20 from cudf.api.extensions import (
21 register_dataframe_accessor,
22 register_index_accessor,
23 register_series_accessor,
24 )
25 from cudf.api.types import dtype
---> 26 from cudf.core.algorithms import factorize
27 from cudf.core.cut import cut
28 from cudf.core.dataframe import DataFrame, from_dataframe, from_pandas, merge
File ~/miniconda3/envs/rapids-23.12/lib/python3.10/site-packages/cudf/core/algorithms.py:10
8 from cudf.core.copy_types import BooleanMask
9 from cudf.core.index import RangeIndex, as_index
---> 10 from cudf.core.indexed_frame import IndexedFrame
11 from cudf.core.scalar import Scalar
12 from cudf.options import get_option
File ~/miniconda3/envs/rapids-23.12/lib/python3.10/site-packages/cudf/core/indexed_frame.py:59
57 from cudf.core.dtypes import ListDtype
...
302 if USE_NV_BINDING:
303 return self._cuda_python_wrap_fn(fname)
CudaSupportError: Error at driver init:
Call to cuInit results in CUDA_ERROR_NO_DEVICE (100):
</code></pre>
<p>I tried the latest install command below:</p>
<pre><code>conda create --solver=libmamba -n rapids-23.12 -c rapidsai-nightly -c conda-forge -c nvidia \
cudf=23.12 cuml=23.12 python=3.10 cuda-version=12.0 \
jupyterlab
</code></pre>
<pre><code> NVIDIA-SMI 545.23.05 Driver Version: 545.84 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA RTX A6000 On | 00000000:01:00.0 On | Off |
| 30% 53C P3 54W / 300W | 1783MiB / 49140MiB | 10% Default |
| | | N/A
</code></pre>
<p>cudf is also present in the conda env:</p>
<pre><code>cudf 23.12.00a cuda12_py310_231028_g2a923dfff8_124 rapidsai-nightly
cuml 23.12.00a cuda12_py310_231028_gff635fc25_31 rapidsai-nightly
</code></pre>
<p>I also ran <code>numba -s</code> in the WSL env and found the following:</p>
<pre><code>__CUDA Information__
CUDA Device Initialized : False
CUDA Driver Version : ?
CUDA Runtime Version : ?
CUDA NVIDIA Bindings Available : ?
CUDA NVIDIA Bindings In Use : ?
CUDA Minor Version Compatibility Available : ?
CUDA Minor Version Compatibility Needed : ?
CUDA Minor Version Compatibility In Use : ?
CUDA Detect Output:
None
CUDA Libraries Test Output:
None
__Warning log__
Warning (cuda): CUDA device initialisation problem. Message:Error at driver init: Call to cuInit results in CUDA_ERROR_NO_DEVICE (100)
Exception class: <class 'numba.cuda.cudadrv.error.CudaSupportError'>
Warning (no file): /sys/fs/cgroup/cpuacct/cpu.cfs_quota_us
Warning (no file): /sys/fs/cgroup/cpuacct/cpu.cfs_period_us
</code></pre>
<p>It seems CUDA is not initialized in WSL, but when I run the same command in the Windows prompt, it returns:</p>
<pre><code>__CUDA Information__
CUDA Device Initialized : True
CUDA Driver Version : ?
CUDA Runtime Version : ?
CUDA NVIDIA Bindings Available : ?
CUDA NVIDIA Bindings In Use : ?
CUDA Minor Version Compatibility Available : ?
CUDA Minor Version Compatibility Needed : ?
CUDA Minor Version Compatibility In Use : ?
CUDA Detect Output:
Found 1 CUDA devices
id 0 b'NVIDIA RTX A6000' [SUPPORTED]
Compute Capability: 8.6
PCI Device ID: 0
PCI Bus ID: 1
UUID: GPU-17e7be94-251e-a2d9-3924-d167c0e59a56
Watchdog: Enabled
Compute Mode: WDDM
FP32/FP64 Performance Ratio: 32
Summary:
1/1 devices are supported
CUDA Libraries Test Output:
None
__Warning log__
Warning (cuda): Probing CUDA failed (device and driver present, runtime problem?)
(cuda) <class 'FileNotFoundError'>: Could not find module 'cudart.dll' (or one of its dependencies). Try using the full path with constructor syntax.
</code></pre>
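<p>A hedged WSL2 sanity-check sketch (assumption: on a stock WSL2 setup the Windows driver exposes <code>libcuda</code> under <code>/usr/lib/wsl/lib</code> and GPU passthrough surfaces as <code>/dev/dxg</code>; <code>cuInit</code> tends to return <code>CUDA_ERROR_NO_DEVICE</code> when either is not visible to the Linux side). The paths here are assumptions, not confirmed against this machine:</p>

```shell
# Each check prints what it finds; absence of any of these usually means
# WSL GPU passthrough is not active for this distro.
ls /usr/lib/wsl/lib/libcuda.so* 2>/dev/null || echo "libcuda.so not found in /usr/lib/wsl/lib"
ls /dev/dxg 2>/dev/null || echo "/dev/dxg missing (GPU not passed through)"
command -v nvidia-smi >/dev/null && nvidia-smi -L || echo "nvidia-smi not on PATH"
```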
| <python><machine-learning><cuda><rapids> | 2023-10-28 16:02:14 | 2 | 811 | ZKK |
77,380,206 | 1,778,043 | monitor grandchild processes spawned by Python's Popen subprocess | <p>I can successfully process the STDOUT of a process I spawn with:</p>
<pre class="lang-python prettyprint-override"><code>print("launching child")
child=subprocess.Popen(params,stdout=subprocess.PIPE)
for line in child.stdout:
    # do processing
    print("processed:" + line.decode())
print("child dead :(")
</code></pre>
<p>However, the child process (not under my control) likes to sometimes "restart" itself, either for UAC reasons or otherwise. When it does this, the for loop exits and on windows the grandchild process's STDOUT goes straight to the console. For example:</p>
<pre class="lang-none prettyprint-override"><code>(venv) > .\parent.py
launching child
Processed:woot I started
Processed:doing stuff
...
Processed:shutting down for restart
child dead :(
(venv) > woot I started
doing stuff
</code></pre>
<p>I'd like to still be able to process the grandchild's output. Any ideas on how to do this?</p>
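<p>For reference, a POSIX-only sketch of the pipe mechanics (assumption: the Windows behaviour in the question comes from the restarted process allocating a new console; on POSIX a grandchild inherits the same stdout pipe, and the read loop only ends once the last writer closes it). The inline child script is a hypothetical stand-in for the real restarting program:</p>

```python
import subprocess
import sys
import textwrap

# hypothetical "child" that hands off to a grandchild before exiting
child_code = textwrap.dedent("""
    import subprocess, sys
    print("child output"); sys.stdout.flush()
    subprocess.Popen([sys.executable, "-c", "print('grandchild output')"]).wait()
""")

child = subprocess.Popen([sys.executable, "-c", child_code], stdout=subprocess.PIPE)
for line in child.stdout:
    print("processed:", line.decode().strip())
# processed: child output
# processed: grandchild output
```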
| <python><windows><subprocess> | 2023-10-28 16:00:35 | 0 | 1,336 | Eph |
77,380,134 | 10,118,502 | Image Bicubic Interpolation does not match OpenCV and Scikit-image implementations | <p>I am trying to implement bicubic convolution interpolation for images from the paper "Cubic Convolution Interpolation for Digital Image Processing" in Python. However, my implementation, which produces a plausible-looking result, still differs from the reference implementations, and I do not understand why. This is especially noticeable in smaller images, like this one:</p>
<p><a href="https://i.sstatic.net/r54dW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r54dW.png" alt="enter image description here" /></a></p>
<p>Here's an image generated by the MWE, showing the original unscaled image, my bicubic result, the OpenCV/skimage bicubic scales, and their differences from my scaled image.</p>
<p><a href="https://i.sstatic.net/L5TzS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L5TzS.png" alt="enter image description here" /></a></p>
<p>Here's the code I have so far turned into a MWE without multiprocessing:</p>
<pre><code>import math
import time
from functools import cache

import cv2 as cv
import matplotlib.pyplot as plt
import numpy as np
import skimage


def u(s: float):
    # bicubic convolution kernel aka catmull-rom spline
    # the value of a here is -0.5 as that was used in Keys' version
    a: float = -0.5
    s = abs(s)
    if 0 <= s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    elif 1 <= s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0


in_file = "test_sharpen.png"
ratio = 2.0

im_data = cv.imread(str(in_file))
# because plt uses rgb
im_data = cv.cvtColor(im_data, cv.COLOR_RGB2BGR)

start = time.perf_counter()
print("Scaling image...")

H, W, C = im_data.shape
# pad by 2 px
image = cv.copyMakeBorder(im_data, 2, 2, 2, 2, cv.BORDER_REFLECT)
image = image.astype(np.float64) / 255

# create new image
new_H = math.floor(H * ratio)
new_W = math.floor(W * ratio)
big_image = np.zeros((new_H, new_W, C))

for c in range(C):
    for j in range(new_H):
        # scale new image's coordinate to be in old image
        y = j * (1 / ratio) + 2
        # we separate x and y to integer and fractional parts
        iy = int(y)
        # ix and iy are essentially the closest original pixels
        # as all the old pixels are in integer positions
        # decx and decy as the fractional parts are then the distances
        # to the original pixels on the left and above
        decy = iy - y
        for i in range(new_W):
            x = i * (1 / ratio) + 2
            ix = int(x)
            decx = ix - x
            pix = sum(
                sum(
                    image[iy + M, ix + L, c] * u(decx + L) * u(decy + M)
                    for L in range(-1, 2 + 1)
                )
                for M in range(-1, 2 + 1)
            )
            # we limit results to [0, 1] because bicubic interpolation
            # can produce pixel values outside the original range
            big_image[j, i, c] = max(min(1, pix), 0)

big_image = (big_image * 255).astype(np.uint8)
print(f"Finished scaling in {time.perf_counter() - start} seconds")

# generate proper bicubic scales with opencv and skimage
# and compare them to my scale with plt
proper_cv = cv.resize(im_data, None, None, ratio, ratio, cv.INTER_CUBIC)
proper_skimage = skimage.util.img_as_ubyte(
    skimage.transform.rescale(im_data, ratio, channel_axis=-1, order=3)
)

fig, ax = plt.subplots(nrows=4, ncols=2)
ax[0, 0].imshow(im_data)
ax[0, 0].set_title("Original")
ax[0, 1].imshow(big_image)
ax[0, 1].set_title("My scale")
ax[1, 0].set_title("Proper OpenCV")
ax[1, 0].imshow(proper_cv)
ax[1, 1].set_title("Proper Skimage")
ax[1, 1].imshow(proper_cv)

print("my scale vs proper_cv psnr:", cv.PSNR(big_image, proper_cv))

ax[2, 0].set_title("Absdiff OpenCV vs My")
diffy_cv = cv.absdiff(big_image, proper_cv)
ax[2, 0].imshow(diffy_cv)
ax[2, 1].set_title("Absdiff Skimage vs My")
diffy_skimage = cv.absdiff(big_image, proper_skimage)
ax[2, 1].imshow(diffy_skimage)
ax[3, 1].set_title("Absdiff CV vs Skimage")
ax[3, 1].imshow(cv.absdiff(proper_cv, proper_skimage))
ax[3, 0].set_title("Absdiff CV vs Skimage")
ax[3, 0].imshow(cv.absdiff(proper_cv, proper_skimage))

print("diffy_cv", diffy_cv.min(), diffy_cv.max(), diffy_cv.dtype, diffy_cv.shape)
print(
    "diffy_skimage",
    diffy_skimage.min(),
    diffy_skimage.max(),
    diffy_skimage.dtype,
    diffy_skimage.shape,
)
print(
    "proper_skimage vs proper_opencv psnr:",
    cv.PSNR(big_image, proper_cv),
    cv.absdiff(proper_cv, proper_skimage).max(),
)
plt.show()
</code></pre>
<p>It can be run as e.g. <code>python scaling.py</code> to scale test_sharpen.png to 2x.</p>
<p>My implementation so far seems to work roughly, but it still differs. I also tried changing the value of <code>a</code>, but that is not the problem.</p>
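<p>A likely cause worth checking (assumption, not confirmed against the paper): OpenCV and scikit-image use half-pixel centre alignment, mapping destination pixel <code>i</code> to source coordinate <code>(i + 0.5) / ratio - 0.5</code>, whereas <code>i * (1 / ratio)</code> pins both grids at the top-left corner and shifts the whole result by a fraction of a pixel:</p>

```python
# Compare the two coordinate mappings for a 2x upscale.
ratio = 2.0
for i in range(4):
    corner_aligned = i / ratio                 # mapping used in the MWE
    center_aligned = (i + 0.5) / ratio - 0.5   # mapping used by cv.resize-style scalers
    print(i, corner_aligned, center_aligned)
# 0 0.0 -0.25
# 1 0.5 0.25
# 2 1.0 0.75
# 3 1.5 1.25
```

<p>The constant 0.25-pixel shift between the columns matches the kind of sub-pixel offset visible in the absdiff images.</p>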
| <python><image-processing><interpolation><bicubic> | 2023-10-28 15:39:42 | 1 | 324 | tzv |
77,380,110 | 5,246,226 | Errors in Python daemon process aren't propagated to Terminal? | <p>I have a daemon process in Python that I create using the following code:</p>
<pre><code>process = mp.Process(name="test", target=run, args=(chunk_queue, res_queues))
process.daemon = True
process.start()
</code></pre>
<p>I can confirm that this daemon can print to the Terminal. However, it seems that unless I explicitly raise something like a ValueError myself, errors that occur in the daemon process (TypeErrors, out-of-range errors, etc.) are not propagated to the Terminal; the process just seems to stop silently.</p>
<p>Is there a way I can propagate these errors to the Terminal?</p>
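<p>One hedged sketch of a common pattern (assumption: a plausible cause is that daemon children are terminated abruptly when the parent exits, which can swallow the traceback): catch everything in the child and ship the formatted traceback back over a queue so the parent can log or re-raise it. <code>run_wrapped</code> below is a hypothetical stand-in for the real <code>run</code> target:</p>

```python
import multiprocessing as mp
import traceback

def run_wrapped(err_queue):
    try:
        raise TypeError("boom")        # stand-in for the real worker body
    except Exception:
        err_queue.put(traceback.format_exc())
        raise

if __name__ == "__main__":
    ctx = mp.get_context("fork")       # fork keeps the sketch import-safe on POSIX
    errs = ctx.Queue()
    p = ctx.Process(name="test", target=run_wrapped, args=(errs,), daemon=True)
    p.start()
    p.join()
    print(errs.get())                  # full TypeError traceback from the child
```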
| <python><python-multiprocessing> | 2023-10-28 15:33:42 | 1 | 759 | Victor M |
77,380,003 | 593,487 | How to rotate numpy 3D array around its center given a rotation matrix? | <p>I have a <strong>3D</strong> numpy array with <code>shape=(50, 50, 50)</code> which I would like to rotate around center, <strong>using a rotation matrix R</strong>.</p>
<p>I am trying to do this by using <code>scipy.ndimage.affine_transform</code>, but I'm open to better suggestions.</p>
<p>This is the code that shows the problem:</p>
<pre class="lang-py prettyprint-override"><code>import random
import numpy as np
import scipy
from PIL import Image
R = np.array([
[ 0.299297976, -0.817653322, -0.491796468],
[-0.425077904, -0.575710904, 0.698473858],
[-0.854242060, 0, -0.519875469],
])
print('Is R really orthogonal:', np.allclose(np.dot(R, R.T), np.identity(3))) # True
# initialize some random data:
a = (50 + np.random.rand(50, 50, 50)*200).astype(np.uint8)
print(a.shape) # (50, 50, 50)
Image.fromarray(a[:, :, 25]).show()
b = scipy.ndimage.affine_transform(a, R, offset=0)
Image.fromarray(b[:, :, 25]).show()
c = scipy.ndimage.affine_transform(a, R, offset=(50, 50, 50))
Image.fromarray(c[:, :, 25]).show()
</code></pre>
<p><a href="https://i.sstatic.net/SsBlE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SsBlE.png" alt="original" /></a></p>
<p><a href="https://i.sstatic.net/NvSYT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NvSYT.png" alt="rotated with offset 0" /></a></p>
<p><a href="https://i.sstatic.net/WQaE6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WQaE6.png" alt="rotated with offset 50" /></a></p>
<p>What is unexpected to me is that:</p>
<ul>
<li>rotating with offset <code>0</code> doesn't seem to rotate around the center</li>
<li>looking at the last image, the operation this matrix does is not just a rotation, but some kind of warping / shearing also</li>
</ul>
<p>What am I missing?</p>
<p><strong>EDIT:</strong> the answer by @Nin17 is correct, this is the version that is adapted to original (3D) problem:</p>
<pre class="lang-py prettyprint-override"><code>import random
import numpy as np
import scipy
from PIL import Image
R = np.array([
[ 0.299297976, -0.817653322, -0.491796468, 0.],
[-0.425077904, -0.575710904, 0.698473858, 0.],
[-0.854242060, 0., -0.519875469, 0.],
[ 0., 0., 0., 1.],
])
N = 50
shift = np.array(
[
[1, 0, 0, N / 2],
[0, 1, 0, N / 2],
[0, 0, 1, N / 2],
[0, 0, 0, 1],
]
)
unshift = np.array(
[
[1, 0, 0, -N / 2],
[0, 1, 0, -N / 2],
[0, 0, 1, -N / 2],
[0, 0, 0, 1],
]
)
print('Is R orthogonal:', np.allclose(np.dot(R, R.T), np.identity(4))) # True
a = (50 + np.random.rand(N, N, N)*200).astype(np.uint8)
print(a.shape) # (50, 50, 50)
Image.fromarray(a[:, :, N//2]).show()
b = scipy.ndimage.affine_transform(a, shift @ R @ unshift, offset=0)
Image.fromarray(b[:, :, N//2]).show()
</code></pre>
<p><a href="https://i.sstatic.net/9RqeE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9RqeE.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/23FCP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/23FCP.png" alt="enter image description here" /></a></p>
| <python><numpy><scipy.ndimage> | 2023-10-28 14:53:43 | 2 | 18,451 | johndodo |
77,379,908 | 395,882 | Backtrader Strategy with multiple time frames | <p>I am using a strategy with two time frames (daily and weekly) that is supposed to generate a buy/sell order on the daily time frame whenever conditions on both the daily and weekly time frames are met. I saw that my code executes next() twice in the strategy and executes each order twice, once for the daily and once for the weekly time frame. How should I control this and make the system generate orders only on the daily time frame when the daily and weekly conditions are met?</p>
<p>Below is my strategy:</p>
<pre><code>from __future__ import (absolute_import, division, print_function, unicode_literals)

import backtrader as bt
import backtrader.indicators as btind
import datetime  # For datetime objects
import os.path  # To manage paths
import sys  # To find out the script name (in argv[0])
import pandas as pd
from backtrader.indicators.supertrend import *


class supertrend(bt.Strategy):
    params = dict(
        onlydaily=False
    )

    def printdata(self, data):
        print(data)

    def log(self, txt, dt=None):
        '''Logging function for this strategy'''
        dt = dt or self.datas[0].datetime.date(0)
        print('%s, %s' % (dt.isoformat(), txt))

    def __init__(self):
        # Keep a reference to the "close" line in the data[0] dataseries
        self.dataopen = self.datas[0].open
        self.datahigh = self.datas[0].high
        self.datalow = self.datas[0].low
        self.dataclose = self.datas[0].close
        self.dataopen2 = self.datas[1].open
        self.datahigh2 = self.datas[1].high
        self.datalow2 = self.datas[1].low
        self.dataclose2 = self.datas[1].close

        self.supertrend = SuperTrend(self.data, period=10, multiplier=3)
        if not self.p.onlydaily:
            self.supertrend2 = SuperTrend(self.data1, period=10, multiplier=1)
        self.crossup = btind.CrossUp(self.dataclose, self.supertrend)
        self.crossdown = btind.CrossDown(self.dataclose, self.supertrend)

        # To keep track of pending orders
        self.order = None

    def notify_order(self, order):
        if order.status in [order.Submitted, order.Accepted]:
            # Buy/Sell order submitted/accepted to/by broker - Nothing to do
            return

        # Check if an order has been completed
        # Attention: broker could reject order if not enough cash
        if order.status in [order.Completed]:
            if order.isbuy():
                self.log('BUY EXECUTED, %.2f' % order.executed.price)
            elif order.issell():
                self.log('SELL EXECUTED, %.2f' % order.executed.price)
            self.bar_executed = len(self)
        elif order.status in [order.Canceled, order.Margin, order.Rejected]:
            self.log('Order Canceled/Margin/Rejected')

        # Write down: no pending order
        self.order = None

    def next(self):
        # Check if an order is pending ... if yes, we cannot send a 2nd one
        # if self.order:
        #     return

        # Check if we are in the market
        if not self.position:
            if self.crossup[0] == 1:  # (self.dataclose[0] > self.supertrend[0]) and (self.dataclose2[0] > self.supertrend2[0]):
                print('Entry trade')
                self.buy()
        else:
            # Pattern Long exit
            if self.position.size > 0:
                if self.crossdown[0] == 1:  # (self.dataclose[0] > self.supertrend[0]) and (self.dataclose2[0] < self.supertrend2[0]):
                    print('Exit trade')
                    self.sell()
</code></pre>
<p>Below are the trades, it generates:</p>
<pre><code>Entry trade
Entry trade
2008-04-28, BUY EXECUTED, 5112.50
2008-04-28, BUY EXECUTED, 5112.50
Exit trade
2008-05-27, SELL EXECUTED, 4877.15
Exit trade
2008-08-29, SELL EXECUTED, 4230.60
Entry trade
2008-11-05, BUY EXECUTED, 3155.75
Exit trade
2009-01-13, SELL EXECUTED, 2775.00
Entry trade
2009-03-24, BUY EXECUTED, 2923.80
Exit trade
2009-07-07, SELL EXECUTED, 4166.00
Entry trade
Entry trade
2009-07-20, BUY EXECUTED, 4377.90
2009-07-20, BUY EXECUTED, 4377.90
Exit trade
2009-10-28, SELL EXECUTED, 4846.55
Exit trade
Exit trade
2010-01-25, SELL EXECUTED, 5034.55
2010-01-25, SELL EXECUTED, 5034.55
Final Portfolio Value: -573182.64
</code></pre>
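<p>The fix I am experimenting with, sketched here with plain-Python stand-ins since the duplication is independent of backtrader itself, is to act only when the daily feed has produced a new bar, by remembering len(self.datas[0]) between calls to next():</p>

```python
class DailyGate:
    """Stand-in for the guard a Strategy.next() would use (names are illustrative)."""

    def __init__(self):
        self.last_len = 0   # mimics remembering len(self.datas[0])
        self.actions = 0

    def next(self, daily_len):
        if daily_len == self.last_len:
            return          # a weekly bar arrived but the daily feed did not advance
        self.last_len = daily_len
        self.actions += 1   # place buy/sell orders only here

gate = DailyGate()
# Simulate next() being called twice per daily bar (once for the daily feed,
# once for the weekly resample), as described above.
for daily_len in [1, 1, 2, 2, 3, 3]:
    gate.next(daily_len)
print(gate.actions)  # 3: one action per daily bar instead of six
```

<p>Whether this is the idiomatic backtrader way I am not sure, but the early return removes the duplicate entries and exits in the simulation above.</p>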
| <python><algorithmic-trading><backtrader> | 2023-10-28 14:32:27 | 1 | 665 | user395882 |
77,379,874 | 1,658,617 | Closing resources with dependency injection | <p>I have the following pseudocode:</p>
<pre><code>class Resource:
    """E.g. Session, connection, file, etc."""

    async def open(self):
        pass

    async def close(self):
        pass


class ResourceUser:
    def __init__(self, resource: Resource):
        self.resource = resource


async def main():
    r = Resource(...)  # Plenty of those
    await r.open(...)
    # More resource initialization
    try:
        ResourceUser(r, r2, r3)
    finally:
        await r.close(...)  # In practice done with AsyncExitStack
        await r.close(...)
</code></pre>
<p>Main is a large function, and I would like to extract the creation of resources and ResourceUser:</p>
<pre><code>async def create_user():
    r, r2, r3, ... = Resource(), ...
    await r.open()
    return ResourceUser(r)
</code></pre>
<p>By doing so, I lose the option to close the resources correctly.</p>
<p>An optional way to solve it is by creating a close() function in ResourceUser:</p>
<pre><code>async def close(self):
    await self.resource.close()
</code></pre>
<p>Unfortunately, this assumes that the ResourceUser "owns" the given resources, and has many potential drawbacks such as preventing the sharing of resources (like a connection pool) between multiple instances.</p>
<p>Counting on the <code>__del__</code> (akin to RAII) is not an option, especially considering the asynchronous nature of the closing method.</p>
<p>Initializing the resources in a different function and then creating the User in main() will result in plenty of different resources in the return statement, which is rather ugly. Monkey-patching the <code>ResourceUser.close()</code> is also pretty ugly.</p>
<p>Is there any standardized way that does not over-complicate things yet achieves the desired result?</p>
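<p>The closest I have come to a clean pattern is to make the factory itself a context manager that owns an AsyncExitStack, so the ResourceUser never owns its resources. A sketch with dummy classes, not my real code:</p>

```python
import asyncio
from contextlib import asynccontextmanager, AsyncExitStack

class Resource:
    def __init__(self):
        self.opened = False
    async def open(self):
        self.opened = True
    async def close(self):
        self.opened = False

class ResourceUser:
    def __init__(self, *resources):
        self.resources = resources

@asynccontextmanager
async def open_user():
    # The factory, not ResourceUser, owns the resources' lifetime.
    async with AsyncExitStack() as stack:
        resources = []
        for _ in range(3):
            r = Resource()
            await r.open()
            stack.push_async_callback(r.close)  # closed LIFO on exit, even on error
            resources.append(r)
        yield ResourceUser(*resources)

captured = []

async def main():
    async with open_user() as user:
        captured.extend(user.resources)
        print(all(r.opened for r in captured))   # True while in use

asyncio.run(main())
print(all(not r.opened for r in captured))       # True: everything closed
```

<p>This keeps main() short, lets several users share the same stack if needed, and never gives ResourceUser a close() of its own.</p>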
| <python><resources><resource-management> | 2023-10-28 14:23:28 | 1 | 27,490 | Bharel |
77,379,782 | 17,267,064 | Solve Forbidden 403 error during response testing | <p>I am trying to get response of website <a href="https://clutch.co/sitemap.xml" rel="nofollow noreferrer">https://clutch.co/sitemap.xml</a></p>
<p>I tested it in Python as well as in Postman; it shows a forbidden 403 status code. I copied all of its headers, but it still doesn't get a successful response.</p>
<p>Below is a piece of code snippet I attempted.</p>
<pre><code>import requests
headers = {
'authority': 'clutch.co',
'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8',
'accept-language': 'en-US,en;q=0.6',
'cache-control': 'max-age=0',
'cookie': 'cf_clearance=mvAUZW1nJug5PB0M8rf93YBwyn4qOdDcU5gy.vVZ6rc-1698494124-0-1-1c669b3c.347dd517.85f5c18c-0.2.1698494124; _ga=GA1.1.1809619058.1698494124; FPID=FPID2.2.pSduR8wDRmNkf3s5Z4HEhBZ18leJVTEtvWiCgKds9tw%3D.1698494124; FPLC=uF%2BpXHT1%2BvYReCd4BPS47IQeM5yO9mB8uwXh95ALGacLDIy4dW%2BByDOe2DeSGiYTq43YZp4EfAPJiXyUnKMJBD52uvoiJDauKzt3M4%2FKxXBfzLunSznieaoZ68xCiA%3D%3D; __cf_bm=tv3SekyYvPpYLykgxBnrp2LtXvkG5Wvh4LFQVrV2_Ks-1698497124-0-AQFsc9n3lMN5iKh+93PrwD1KgyBDNpofby6pZlu3n+02f1WPA84GsHf2Ym6wJARjQ5kokSaz9d9TNsIbP2qMkXQ=; _ga_D0WFGX8X3V=GS1.1.1698494124.1.1.1698497403.56.0.0',
'if-modified-since': 'Sat, 28 Oct 2023 09:03:15 GMT',
'sec-ch-ua': '"Brave";v="117", "Not;A=Brand";v="8", "Chromium";v="117"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"Windows"',
'sec-fetch-dest': 'document',
'sec-fetch-mode': 'navigate',
'sec-fetch-site': 'none',
'sec-fetch-user': '?1',
'sec-gpc': '1',
'upgrade-insecure-requests': '1',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36'
}
response = requests.get('https://clutch.co/sitemap.xml', headers=headers)
</code></pre>
<p>How do I solve this? Thank you all in advance.</p>
| <python><web-scraping><python-requests><scrapy><postman> | 2023-10-28 13:58:31 | 1 | 346 | Mohit Aswani |
77,379,774 | 5,859,885 | Keras model_from_json() returning a string? | <p>I am trying to use the Keras <code>model_from_json()</code> function to load the structure of my model. I thought the return of this function was an instance of a Keras model, but it is returning a string for me?</p>
<p>My .json config file looks like this:</p>
<pre><code>{"class_name": "MyModel", "config": {"hidden_layers": 2, "units": 64, "activation": "swish"}, "keras_version": "2.13.1", "backend": "tensorflow"}
</code></pre>
<p>When I call <code>model_from_json()</code> with this loaded json, it simply returns "MyModel", i.e. the "class_name" in the config file.</p>
<p>Does anyone know why this is?</p>
| <python><json><tensorflow><keras> | 2023-10-28 13:55:33 | 1 | 1,461 | the man |
77,379,703 | 8,229,029 | Python Spyder startup error: no Qt platform plugin could be initialized | <p>I just installed Anaconda, along with a bunch of extra packages outside the ones that come with the installation. Now, Spyder won't start, and I get the error "The application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem."</p>
<p>Well, reinstalling Spyder did nothing. There are other solutions posted on the internet, such as here, <a href="https://answers.microsoft.com/en-us/windows/forum/all/this-application-failed-to-start-because-no-qt/e5b31106-84a1-4196-92f3-a1ee633f68b0" rel="nofollow noreferrer">https://answers.microsoft.com/en-us/windows/forum/all/this-application-failed-to-start-because-no-qt/e5b31106-84a1-4196-92f3-a1ee633f68b0</a>, but they don't work.</p>
<p>These solutions tell you to search for the pyqt5_tools folder, but I don't have one as far as I can tell, as Windows never finds it. What's going on here? Any help on this would be great. This is literally the 4th time I've installed Anaconda, and every time I do, some other problems arise. Does anyone else have this many issues with Anaconda?</p>
| <python><pyqt5><spyder> | 2023-10-28 13:38:04 | 0 | 1,214 | user8229029 |
77,379,637 | 1,445,660 | How to update db from json patch | <p>I get a complex json every few hours. Part of it is a <code>Game</code> object with a list of <code>Player</code> objects, and each <code>Player</code> has a list of <code>Training</code> objects (each object also has other fields - ints, strings, lists of strings, etc.).
If the <code>Game</code> object doesn't exist in my Postgres db (I check by the <code>Game</code>'s id field), I insert the whole structure into the db, each object in its own table (a table for <code>Game</code>, a table for <code>Player</code>, and a table for <code>Training</code>). The next time I get the json for this <code>Game</code>, it already exists in the db, so I want to update it. I get the old json, the updated json, and the json_patch.
I wanted to query the db, convert the result to json, and apply the patch to that json. The problem is that the lists (of the players, for example) are not sorted in the same way as the lists in the <code>updated_object</code> json. But I need to somehow work against the db because I need the primary keys of the objects so the ORM knows which objects to update.
What's the best way to approach this?</p>
<p>models:</p>
<pre><code>class Game(Base):
    __tablename__ = "game"

    game_id: int = Column(INTEGER, primary_key=True,
                          server_default=Identity(always=True, start=1, increment=1, minvalue=1,
                                                  maxvalue=2147483647, cycle=False, cache=1),
                          autoincrement=True)
    unique_id: str = Column(TEXT, nullable=False)
    name: str = Column(TEXT, nullable=False)

    players = relationship('Player', back_populates='game')


class Player(Base):
    __tablename__ = "player"

    player_id: int = Column(INTEGER, primary_key=True,
                            server_default=Identity(always=True, start=1, increment=1, minvalue=1,
                                                    maxvalue=2147483647, cycle=False, cache=1),
                            autoincrement=True)
    unique_id: str = Column(TEXT, nullable=False)
    game_id: int = Column(INTEGER, ForeignKey('game.game_id'), nullable=False)
    name: str = Column(TEXT, nullable=False)
    birth_date = Column(DateTime, nullable=False)

    game = relationship('Game', back_populates='players')
    trainings = relationship('Training', back_populates='player')


class Training(Base):
    __tablename__ = "training"

    training_id: int = Column(INTEGER, primary_key=True,
                              server_default=Identity(always=True, start=1, increment=1, minvalue=1,
                                                      maxvalue=2147483647, cycle=False, cache=1),
                              autoincrement=True)
    unique_id: str = Column(TEXT, nullable=False)
    name: str = Column(TEXT, nullable=False)
    number_of_players: int = Column(INTEGER, nullable=False)
    player_id: int = Column(INTEGER, ForeignKey('player.player_id'), nullable=False)

    player = relationship('Player', back_populates='trainings')
</code></pre>
<p>json with updated data:</p>
<pre><code>{"original_object":{"name":"Table Tennis","unique_id":"432","players":[{"unique_id":"793","name":"John","birth_date":"2023-10-28T00:10:56Z","trainings":[{"unique_id":"43","name":"Morning Session","number_of_players":3}, {"unique_id":"44","name":"Evening Session","number_of_players":2}]}]},"updated_object":{"name":"Table Tennis","unique_id":"432","players":[{"unique_id":"793","name":"John","birth_date":"2023-10-28T00:10:56Z","trainings":[{"unique_id":"43","name":"Morning Session","number_of_players":3}, {"unique_id":"44","name":"Evening Session","number_of_players":4}]}]},"json_patch":[{"op":"replace","path":"/players/0/trainings/1/number_of_players","value":4}],"timestamp":"2023-10-28T02:00:36Z"}
</code></pre>
<p>The json_patch updates the 'number_of_players' field of the second training to the value 4.</p>
<p>Code to add a new Game:</p>
<pre><code>Session = sessionmaker(bind=engine_sync)
session = Session()
session.begin()

game = Game.from_dict(json['updated_object'])
existing_game = session.query(Game).filter_by(unique_id=game.unique_id).first()
if not existing_game:
    session.add(game)
    session.commit()
</code></pre>
<p>But if the <code>Game</code> does already exist in the db, I'm not sure what I should do.</p>
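<p>My current idea, a plain-dict sketch independent of SQLAlchemy, is to skip the index-based patch entirely and instead match the nested lists by unique_id, so ordering no longer matters:</p>

```python
def upsert_by_unique_id(existing, incoming):
    """Merge a list of incoming dicts into the existing list, keyed on unique_id.

    A real version would recurse into nested lists (players -> trainings)
    the same way, and with the ORM the append/update would become
    session.add(...) / attribute assignment.
    """
    by_id = {item["unique_id"]: item for item in existing}
    for new in incoming:
        old = by_id.get(new["unique_id"])
        if old is None:
            existing.append(new)   # new child row
        else:
            old.update(new)        # update existing row in place

players_in_db = [{"unique_id": "793", "name": "John",
                  "trainings": [{"unique_id": "44", "number_of_players": 2}]}]
incoming = [{"unique_id": "793", "name": "John",
             "trainings": [{"unique_id": "44", "number_of_players": 4}]}]
upsert_by_unique_id(players_in_db, incoming)
print(players_in_db[0]["trainings"][0]["number_of_players"])  # 4
```

<p>Whether this is better than sorting both sides by unique_id and then applying the patch, I don't know.</p>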
| <python><json><postgresql><json-patch> | 2023-10-28 13:21:01 | 1 | 1,396 | Rony Tesler |
77,379,346 | 4,094,231 | How to scrape Japanese content using Python? | <p>I have a code in Python requests and also tried in Python Scrapy.</p>
<p>It returns the correct HTML, but the content inside the HTML tags consists of strange characters like <code>Á¶¼±°úÇбâ¼úÃÑ·Ã¸Í Áß¾ÓÀ§¿øÈ¸ Á¶¼±ÀÚµ¿ÈÇÐȸ¿Í Á¶¼±ÀÚ¿¬¿¡³</code> etc.</p>
<pre><code>headers = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'Accept-Language': 'en-US,en;q=0.9',
'Cache-Control': 'max-age=0',
'Connection': 'keep-alive',
'If-Modified-Since': 'Mon, 22 May 2017 16:51:07 GMT',
'If-None-Match': '"269-5501fad6c02b2-gzip"',
'Referer': 'http://kcna.co.jp/',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36',
}
response = requests.get('http://kcna.co.jp/item2/2001/200107/news07/01.htm#10', headers=headers, verify=False)
resp = HtmlResponse(url='',body=response.text, encoding='utf8')
print(resp.css('p::text').get().encode('utf8').decode('utf8'))
</code></pre>
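<p>Those characters look like Korean text in EUC-KR that got decoded as Latin-1 (the site is the Korean Central News Agency, so EUC-KR is my assumption about the page's real charset). A self-contained demonstration of that exact mismatch:</p>

```python
# "조선" ("Joseon") encoded as EUC-KR, then wrongly decoded as Latin-1,
# reproduces exactly the kind of garbage shown above.
raw = "조선".encode("euc-kr")
print(raw.decode("latin-1"))  # Á¶¼±  (mojibake, as in the question)
print(raw.decode("euc-kr"))   # 조선  (correct)
# With requests, the likely fix is to set response.encoding = "euc-kr"
# before reading response.text (an assumption; check the page's charset).
```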
| <python><python-requests><scrapy> | 2023-10-28 12:04:06 | 1 | 21,655 | Umair Ayub |
77,379,334 | 15,129,164 | form.validate_on_submit() never runs for my form | <p><a href="https://i.sstatic.net/rTXrt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rTXrt.png" alt="[](https://i.sstatic.net/eDyv7.png)" /></a></p>
<p>Python code</p>
<p><a href="https://i.sstatic.net/BPP53.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BPP53.png" alt="[](https://i.sstatic.net/9Qsea.png)" /></a></p>
<p>HTML Code</p>
<p>I have POST as my method, but whenever I click on submit, nothing happens.</p>
| <python><html><flask><bootstrap-5><flask-wtforms> | 2023-10-28 11:58:58 | 0 | 325 | Hridaya Agrawal |
77,379,301 | 8,365,731 | Pandas SettingWithCopyWarning: How does pandas know a DataFrame was created as a slice from another DataFrame | <p>How does Python pandas know a DataFrame is a slice from another DataFrame? Example 1 gives a SettingWithCopyWarning:</p>
<pre><code>a=pd.DataFrame([list(range(4)) for i in range(7)])
b=a[[1,2]]
b.loc[0,1]=7
</code></pre>
<p><code>:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame</code></p>
<p>See the caveats in the documentation: <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy</a></p>
<p>while example 2 does not:</p>
<pre><code>b=a[[1,2]].copy()
b.loc[0,1]=7
</code></pre>
<p>Obviously pandas keeps track of DataFrame origins, but how?</p>
<p>The pandas documentation suggests the error is related to chained indexing, which is not used in the example above.</p>
| <python><pandas> | 2023-10-28 11:50:28 | 1 | 563 | Jacek Błocki |
77,379,143 | 8,030,794 | Update values in table many times per minute | <p>I need to call UPDATE many times per minute. Is this the correct way to do it? I think not, because sometimes I get an exception - <code>cursor already closed</code>. I call this function from many threads; maybe I need to lock this part?</p>
<pre><code>conn = psycopg2.connect(dbname='MyDB', user='...', password='...', host='127.0.0.1')
cursor = conn.cursor()

def update_data(name, type, value):
    cursor.execute("UPDATE Datas SET Value = %s WHERE Name = %s and Type = %s", (value, name, type))
    conn.commit()
</code></pre>
<p>Or do I need to open a connection and initialize a cursor every time? I'm calling <code>update_data</code> about 600 times per minute.</p>
<p>I need to see live data updates on the site. This information may be needed.</p>
<p>Also, below I use <code>with conn.cursor()</code>; perhaps the two cursor objects conflict and an error occurs. I didn't notice that I was using the cursor in two different ways, I was in a hurry.</p>
<pre><code>def get_all_data():
    with conn.cursor() as curs:
        curs.execute('SELECT * FROM Datas')
        datas = curs.fetchall()
        return datas
</code></pre>
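<p>What I am considering, sketched here with a stand-in connection object since the real one is psycopg2, is guarding the shared connection with a single threading.Lock and opening a short-lived cursor per call:</p>

```python
import threading

_lock = threading.Lock()

class FakeConn:
    """Stand-in for the psycopg2 connection, just to show the locking pattern."""
    def __init__(self):
        self.committed = 0
    def commit(self):
        self.committed += 1

conn = FakeConn()

def update_data(name, type_, value):
    with _lock:  # only one thread touches the shared connection at a time
        # Real version (assumption, untested):
        # with conn.cursor() as curs:
        #     curs.execute("UPDATE Datas SET Value = %s WHERE Name = %s and Type = %s",
        #                  (value, name, type_))
        conn.commit()

threads = [threading.Thread(target=update_data, args=("n", "t", i)) for i in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(conn.committed)  # 50: every call completed, no shared-cursor race
```

<p>A connection pool per thread would presumably also work; I don't know which is the recommended approach.</p>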
| <python><psycopg2> | 2023-10-28 11:05:21 | 2 | 465 | Fresto |
77,379,133 | 3,611,164 | Aggregate string column close in time in pandas | <p>I'm trying to group messages that have been sent shortly after one another. A parameter defines the maximum duration between messages for them to be considered part of a block. If a message is added to the block, the time window is extended for more messages to be considered part of the block.</p>
<p><strong>Example Input</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: left;">datetime</th>
<th style="text-align: left;">message</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: left;">2023-01-01 12:00:00</td>
<td style="text-align: left;">A</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">2023-01-01 12:20:00</td>
<td style="text-align: left;">B</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">2023-01-01 12:30:00</td>
<td style="text-align: left;">C</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;">2023-01-01 12:30:55</td>
<td style="text-align: left;">D</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: left;">2023-01-01 12:31:20</td>
<td style="text-align: left;">E</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: left;">2023-01-01 15:00:00</td>
<td style="text-align: left;">F</td>
</tr>
<tr>
<td style="text-align: left;">6</td>
<td style="text-align: left;">2023-01-01 15:30:30</td>
<td style="text-align: left;">G</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: left;">2023-01-01 15:30:55</td>
<td style="text-align: left;">H</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Expected output for the parameter set to 1min</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: left;">datetime</th>
<th style="text-align: left;">message</th>
<th style="text-align: left;">datetime_last</th>
<th style="text-align: left;">n_block</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: left;">2023-01-01 12:00:00</td>
<td style="text-align: left;">A</td>
<td style="text-align: left;">2023-01-01 12:00:00</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">2023-01-01 12:20:00</td>
<td style="text-align: left;">B</td>
<td style="text-align: left;">2023-01-01 12:20:00</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">2023-01-01 12:30:00</td>
<td style="text-align: left;">C\nD\nE</td>
<td style="text-align: left;">2023-01-01 12:31:20</td>
<td style="text-align: left;">3</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;">2023-01-01 15:00:00</td>
<td style="text-align: left;">F</td>
<td style="text-align: left;">2023-01-01 15:00:00</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: left;">2023-01-01 15:30:30</td>
<td style="text-align: left;">G\nH</td>
<td style="text-align: left;">2023-01-01 15:30:55</td>
<td style="text-align: left;">2</td>
</tr>
</tbody>
</table>
</div>
<p><strong>My failing attempt</strong></p>
<p>I was hoping to achieve that with a rolling window, which would continuously append the message rows.</p>
<pre class="lang-py prettyprint-override"><code>def join_messages(x):
    return '\n'.join(x)


df.rolling(window='1min', on='datetime').agg({
    'datetime': ['first', 'last'],
    'message': [join_messages, "count"]})  # Somehow overwrite datetime with the aggregated datetime.first.
</code></pre>
<p>Both aggregations fail on a ValueError: <code>invalid on specified as datetime, must be a column (of DataFrame), an Index or None</code>.</p>
<p>I don't see a clean way to get <code>datetime</code> "accessible" in the Window. Besides, rolling does not work well with strings either. I have the impression that this is a dead end and that there is a cleaner approach to this.</p>
<p>Snippets for input and expected data</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
'datetime': [pd.Timestamp('2023-01-01 12:00'),
pd.Timestamp('2023-01-01 12:20'),
pd.Timestamp('2023-01-01 12:30:00'),
pd.Timestamp('2023-01-01 12:30:55'),
pd.Timestamp('2023-01-01 12:31:20'),
pd.Timestamp('2023-01-01 15:00'),
pd.Timestamp('2023-01-01 15:30:30'),
pd.Timestamp('2023-01-01 15:30:55'),],
'message': list('ABCDEFGH')})
df_expected = pd.DataFrame({
'datetime': [pd.Timestamp('2023-01-01 12:00'),
pd.Timestamp('2023-01-01 12:20'),
pd.Timestamp('2023-01-01 12:30:00'),
pd.Timestamp('2023-01-01 15:00'),
pd.Timestamp('2023-01-01 15:30:30'),],
'message': ['A', 'B', 'C\nD\nE', 'F', 'G\nH'],
'datetime_last': [pd.Timestamp('2023-01-01 12:00'),
pd.Timestamp('2023-01-01 12:20'),
pd.Timestamp('2023-01-01 12:31:20'),
pd.Timestamp('2023-01-01 15:00'),
pd.Timestamp('2023-01-01 15:30:55'),],
'n_block': [1, 1, 3, 1, 2]})
</code></pre>
| <python><pandas><string><time-series> | 2023-10-28 11:01:34 | 1 | 366 | Fabitosh |
77,379,119 | 2,284,240 | calling elastic search query from frontend with multiple query params | <h1>API Usage:</h1>
<p>I am using a public API to display artworks on frontend. The public API url: <a href="https://api.artic.edu/api/v1/artworks/search" rel="nofollow noreferrer">https://api.artic.edu/api/v1/artworks/search</a></p>
<h1>Scenarios that works individually</h1>
<p>To filter artworks that has <code>place_of_origin=France</code>, I can use this query (works 100% fine): <a href="https://api.artic.edu/api/v1/artworks/search?fields=id,api_link,title,description,thumbnail,image_id,place_of_origin&page=1&limit=10&query%5Bmatch%5D%5Bplace_of_origin%5D=france" rel="nofollow noreferrer">https://api.artic.edu/api/v1/artworks/search?fields=id,api_link,title,description,thumbnail,image_id,place_of_origin&page=1&limit=10&query[match][place_of_origin]=france</a></p>
<p>To search artworks whose <code>title includes night</code>, I can use this query (works 100% fine):
<a href="https://api.artic.edu/api/v1/artworks/search?fields=id,api_link,title,description,thumbnail,image_id,place_of_origin&page=1&limit=10&query%5Bterm%5D%5Btitle%5D=night" rel="nofollow noreferrer">https://api.artic.edu/api/v1/artworks/search?fields=id,api_link,title,description,thumbnail,image_id,place_of_origin&page=1&limit=10&query[term][title]=night</a></p>
<h1>Combining two query params gives problems</h1>
<p>Now, if I want to filter artworks that have <code>place_of_origin=France</code> and also search for artworks whose <code>title includes night</code> in the same API call, here is the URL that I call a GET request on: <a href="https://api.artic.edu/api/v1/artworks/search?fields=id,api_link,title,description,thumbnail,image_id,place_of_origin&page=1&limit=10&query%5Bterm%5D%5Btitle%5D=night&query%5Bmatch%5D%5Bplace_of_origin%5D=france" rel="nofollow noreferrer">https://api.artic.edu/api/v1/artworks/search?fields=id,api_link,title,description,thumbnail,image_id,place_of_origin&page=1&limit=10&query[term][title]=night&query[match][place_of_origin]=france</a></p>
<p>It gives me 400 Bad request.</p>
<p>Here is the exact response that I get:</p>
<pre><code>{
  "error": {
    "root_cause": [
      {
        "type": "parsing_exception",
        "reason": "[term] malformed query, expected [END_OBJECT] but found [FIELD_NAME]",
        "line": 1,
        "col": 416
      }
    ],
    "type": "x_content_parse_exception",
    "reason": "[1:416] [bool] failed to parse field [must]",
    "caused_by": {
      "type": "x_content_parse_exception",
      "reason": "[1:416] [bool] failed to parse field [should]",
      "caused_by": {
        "type": "x_content_parse_exception",
        "reason": "[1:416] [bool] failed to parse field [must]",
        "caused_by": {
          "type": "x_content_parse_exception",
          "reason": "[1:416] [bool] failed to parse field [must]",
          "caused_by": {
            "type": "parsing_exception",
            "reason": "[term] malformed query, expected [END_OBJECT] but found [FIELD_NAME]",
            "line": 1,
            "col": 416
          }
        }
      }
    }
  },
  "status": 400
}
</code></pre>
<p>I think they use Elasticsearch in the backend for these APIs. The Elasticsearch documentation gives me JSON objects, but I don't know how to convert those JSON objects into a URL with query params.</p>
<p>Can someone explain how I can use multiple queries together?</p>
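<p>From reading the Elasticsearch query DSL docs, my guess (untested against this particular API) is that the two conditions need to be combined into a single bool/must query and sent as a JSON body rather than chained as two query[...] URL params:</p>

```python
import json

# Both conditions inside one bool query, instead of two top-level "query" params.
query = {
    "query": {
        "bool": {
            "must": [
                {"term": {"title": "night"}},
                {"match": {"place_of_origin": "france"}},
            ]
        }
    },
    "fields": "id,api_link,title,thumbnail,image_id,place_of_origin",
    "page": 1,
    "limit": 10,
}
body = json.dumps(query)
# Hypothetical call (requires the `requests` package, not executed here):
# requests.post("https://api.artic.edu/api/v1/artworks/search", json=query)
print(json.loads(body)["query"]["bool"]["must"][1]["match"]["place_of_origin"])  # france
```

<p>Whether this endpoint accepts POSTed bodies is an assumption on my part; the API docs would need to confirm it.</p>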
| <javascript><python><elasticsearch> | 2023-10-28 10:57:02 | 1 | 6,428 | Vishal |
77,379,065 | 11,331,463 | Plot with matplotlib in WSL2 | <p>I have tried several post recommendations and follow several guides to use matplotlib in wsl2 in my windows machine. None of the tutorials worked. For instance, I have installed x11 apps follwing this tutorial: <a href="https://learn.microsoft.com/en-us/windows/wsl/tutorials/gui-apps" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/windows/wsl/tutorials/gui-apps</a></p>
<p>I have followed this post <a href="https://stackoverflow.com/questions/43397162/show-matplotlib-plots-and-other-gui-in-ubuntu-wsl1-wsl2">Show matplotlib plots (and other GUI) in Ubuntu (WSL1 & WSL2)</a></p>
<p>Now it gets stuck when I try to run a plot(...)
<a href="https://i.sstatic.net/gYeXM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYeXM.png" alt="enter image description here" /></a></p>
<p>I am working on a windows10 with wsl2, ubuntu distro machine.</p>
<p>The display is set to:</p>
<pre><code>(.env) user@machine:~/project$ echo $DISPLAY
172.30.240.1:0
</code></pre>
<p>Could anyone help me to get this working?</p>
| <python><matplotlib><windows-subsystem-for-linux> | 2023-10-28 10:43:11 | 1 | 312 | GGChe |
77,379,047 | 7,195,666 | Is there a way to create a TypedDict from django ninja Schema or vice versa? | <p>For this piece of code</p>
<pre><code>from typing import TypedDict
from ninja import Schema


class MyDict(TypedDict):
    a: str


class MySchema(Schema):
    a: str
</code></pre>
<p>is there a way not to repeat the keys (<code>a</code> in the example)?</p>
<p>I tried</p>
<pre><code>class MyDict(TypedDict):
    a: str


class MySchema(Schema, MyDict):
    pass
</code></pre>
<p>but it doesn't generate the proper ninja schema.</p>
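<p>One workaround I found, using the functional TypedDict form and shown here without django-ninja so it runs standalone, is to keep the annotations in a single place and build the TypedDict from them:</p>

```python
from typing import TypedDict

class MySchemaBase:          # hypothetical single source of the field names
    a: str
    b: int

# Functional form: build the TypedDict from the same annotations.
MyDict = TypedDict("MyDict", MySchemaBase.__annotations__)

# With django-ninja one would then (assumption, untested) declare:
# class MySchema(Schema, MySchemaBase): ...
d: MyDict = {"a": "x", "b": 1}
print(sorted(MyDict.__annotations__))  # ['a', 'b']
```

<p>Whether mixing a plain annotated base into a Schema works with django-ninja's metaclass, I haven't verified.</p>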
| <python><metaprogramming><typing><django-ninja> | 2023-10-28 10:37:23 | 0 | 2,271 | Vulwsztyn |
77,379,001 | 17,214,759 | Efficient computation of the set of surjective functions | <p>A function <code>f : X -> Y</code> is surjective when every element of <code>Y</code> has at least one preimage in <code>X</code>. When <code>X = {0,...,m-1}</code> and <code>Y = {0,...,n-1}</code> are two finite sets, then <code>f</code> corresponds to an <code>m</code>-tuple of numbers <code>< n</code>, and it is surjective precisely when every number <code>< n</code> appears at least once. (When we require that every number appears <em>exactly</em> once, we have <code>n=m</code> and are talking about <em>permutations</em>.)</p>
<p>I would like to know an efficient algorithm for computing the set of all surjective tuples for two given numbers <code>n</code> and <code>m</code>. The <em>number</em> of these tuples can be computed very efficiently with the inclusion-exclusion principle (see for example <a href="https://math.stackexchange.com/questions/3116932">here</a>), but I don't think that this is useful here (since we would first compute <em>all tuples</em> and then remove the non-surjective ones step by step, and I assume that the computation of <em>all tuples</em> will take longer*.). A different approach goes as follows:</p>
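<p>(For reference, the inclusion-exclusion count mentioned above is easy to compute directly; the value agrees with the 16435440 set size for <code>(m,n)=(10,6)</code> reported below, which makes it a useful sanity check for any faster algorithm.)</p>

```python
from math import comb

def count_surjections(m: int, n: int) -> int:
    # inclusion-exclusion: choose k values to exclude, count tuples avoiding them
    return sum((-1) ** k * comb(n, k) * (n - k) ** m for k in range(n + 1))

print(count_surjections(10, 6))  # 16435440
```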
<p>Consider for example the tuple</p>
<p><code>(1,6,4,2,1,6,0,2,5,1,3,2,3)</code></p>
<p>in which every number < 7 appears at least once. Look at the largest
number and erase it:</p>
<p><code>(1,*,4,2,1,*,0,2,5,1,3,2,3)</code></p>
<p>It appears in the indices 1 and 5, so this corresponds to the set <code>{1,5}</code>, a subset of the indices.
The rest corresponds to the tuple</p>
<p><code>(1,4,2,1,0,2,5,1,3,2,3)</code></p>
<p>with the property that every number < 6 appears at least once.</p>
<p>We see that the surjective <code>m</code>-tuples of numbers <code>< n</code> correspond to the pairs <code>(T,a)</code>, where <code>T</code> is a non-empty subset of <code>{0,...,m-1}</code> and <code>a</code> is a surjective <code>(m-k)</code>-tuple of numbers <code>< n-1</code>, where <code>T</code> has <code>k</code> elements.</p>
<p>This leads to the following recursive implementation (written in Python):</p>
<pre><code>import itertools
def surjective_tuples(m: int, n: int) -> set[tuple]:
"""Set of all m-tuples of numbers < n where every number < n appears at least once.
Arguments:
m: length of the tuple
n: number of distinct values
"""
if n == 0:
return set() if m > 0 else {()}
if n > m:
return set()
result = set()
for k in range(1, m + 1):
smaller_tuples = surjective_tuples(m - k, n - 1)
subsets = itertools.combinations(range(m), k)
for subset in subsets:
for smaller_tuple in smaller_tuples:
my_tuple = []
count = 0
for i in range(m):
if i in subset:
my_tuple.append(n - 1)
count += 1
else:
my_tuple.append(smaller_tuple[i - count])
result.add(tuple(my_tuple))
return result
</code></pre>
<p>I noticed that this is quite slow, though, when the input numbers are large. For example when <code>(m,n)=(10,6)</code> the computation takes <code>32</code> seconds on my (old) PC, the set has <code>16435440</code> elements here. I suspect that there is a faster algorithm.</p>
<p>*In fact, the following implementation is <em>very slow</em>.</p>
<pre><code>def surjective_tuples_stupid(m: int, n: int) -> list[tuple]:
all_tuples = list(itertools.product(*(range(n) for _ in range(m))))
surjective_tuples = filter(lambda t: all(i in t for i in range(n)), all_tuples)
return list(surjective_tuples)
</code></pre>
| <python><algorithm><performance><tuples><combinatorics> | 2023-10-28 10:23:40 | 2 | 432 | Script Raccoon |
77,378,731 | 2,661,703 | Switch to another subdomain in django on a click of a button | <p>I am having issues with a template link on a button that should point to my subdomain, which works fine if I enter it manually in a browser. I need something like this:</p>
<pre><code>{% load hosts %}
<a href="{% host_url 'homepage' host 'www' %}">Home</a>
</code></pre>
<p>But it doesn't work if I place the code above on a button. Am I missing anything? Appreciate the help, guys.</p>
| <python><django><django-templates><django-hosts> | 2023-10-28 08:57:03 | 0 | 3,770 | Transformer |
77,378,634 | 2,100,606 | Ticker.info throws error in jupyter but not Google Colab | <p>The following code used to work fine in Jupyter but stopped working last week,</p>
<pre><code>import yfinance as yf
ltam = 'AAPL'
ticker = yf.Ticker(ltam)
ticker.info
</code></pre>
<p>The error thrown is,</p>
<pre><code>HTTPError: 404 Client Error: Not Found for url: https://query2.finance.yahoo.com/v6/finance/quoteSummary/AAPL?modules=financialData&modules=quoteType&modules=defaultKeyStatistics&modules=assetProfile&modules=summaryDetail&ssl=true
</code></pre>
<p>However, the above code continues to work on Google Colab.</p>
<p>Oddly all the pricing calls such as <code>yf.download</code> continue to work fine in jupyter. So this error seems to be confined to <code>Ticker.info</code> only.</p>
<p>I have a script that pulls in data for 20 odd tickers 3 times a week, so I don't think it's an excessive usage issue.</p>
<p>Both jupyter and Google Colab are using the same version of yfinance - 0.2.31.</p>
<p>Any pointers appreciated.</p>
| <python><yfinance> | 2023-10-28 08:19:14 | 0 | 325 | insomniac |
77,378,515 | 1,285,061 | How can I append to an array in NumPy without flattening the arrays? | <p>How can I append two arrays in NumPy without flattening?</p>
<p>I tried changing axis.</p>
<pre class="lang-none prettyprint-override"><code>>>> h = np.array([])
>>> g = np.array([2,4])
>>> i = np.array([6,9])
>>> np.append(h,g)
array([2., 4.])
>>> h = np.append(h,g)
>>> h
array([2., 4.])
>>> np.append(h,i)
array([2., 4., 6., 9.])
>>> np.append(h,i, axis=0)
array([2., 4., 6., 9.])
>>> np.append(h,i, axis=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/lib/python3.11/site-packages/numpy/lib/function_base.py", line 5617, in append
return concatenate((arr, values), axis=axis)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
numpy.exceptions.AxisError: axis 1 is out of bounds for array of dimension 1
>>>
</code></pre>
<p>I was expecting:</p>
<p><code>h = [[2,4],[6,9]]</code></p>
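<p>For reference, a minimal sketch of one way to get the expected 2-D result: build the array from the 1-D rows instead of growing an empty array (<code>np.vstack</code> here; <code>np.concatenate</code> with reshaped inputs works too).</p>

```python
import numpy as np

g = np.array([2, 4])
i = np.array([6, 9])
# np.vstack treats each 1-D input as a row of the 2-D result
h = np.vstack([g, i])
print(h)
# [[2 4]
#  [6 9]]
```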
| <python><numpy> | 2023-10-28 07:46:18 | 1 | 3,201 | Majoris |
77,378,391 | 3,899,975 | how to add text to a box plot by matching and extracting the data from two dataframes | <p>I have two df (df1 and df2) as below:</p>
<pre><code>import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
np.random.seed(12)
# Generate first df
col_a1 = [i for i in range(18)] * 15
col_b1 = np.random.randint(1, 100, len(col_a1))
configs = ['c1', 'c2', 'c3', 'c4']
col_c1 = [configs[i//90] + '_' + f'abctrct{i//18}' for i in range(len(col_a1))]
df1 = pd.DataFrame({'A': col_a1, 'B': col_b1, 'C': col_c1})
# Generate second df
col_d2 = [s + '-' +f'{np.random.randint(1,18)}' for s in [x for x in list(set(col_c1))]]
df2 = pd.DataFrame({'D': col_d2, 'E': np.random.randint(100, 200, len(col_d2))})
</code></pre>
<p>I am plotting a box plot for the data as below:</p>
<pre><code>df1[['Hue', 'X']] = df1['C'].str.split('_', expand=True)
fig, ax = plt.subplots()
sns.stripplot(y='B', x='X', data=df1, hue='Hue')
sns.boxplot(y='B', x='X', data=df1, hue='Hue')
plt.xticks(rotation=45, ha='right')
</code></pre>
<p>On top of each box plot, I want to add two texts (each with a box boundary). The first text is the value of <code>df1['B']</code>
where <code>df1['A'] == df2['D'].str.split('-').str[1]</code>; in other words, the numeric part of <code>df2['D']</code> matches <code>df1['A']</code>.
The value for the second text box will come from the <code>df1['A']</code> that matches the condition. My approach is the following:</p>
<pre><code>mean = [i for i in (df1.groupby(['X'], sort=False)['B'])]
df2[['S', 'num']] = df2['D'].str.split('-', expand=True)
idx = [i for i in range(len(mean))]
number = [int(i) for i in df2['num']]
values = [mean[i][1][18*i+j] for i, j in zip(idx, number)]
for xtick in ax.get_xticks():
ax.text(xtick, mean[xtick] , f'bend = {values[xtick]}',
horizontalalignment='center', verticalalignment='center', rotation=90, fontsize=20, bbox={
'facecolor': 'green', 'alpha': 0.5, 'pad': 10})
</code></pre>
<p>but I am getting error</p>
<pre><code>ConversionError: Failed to convert value(s) to axis units: ('abctrct0', 0 76
1 28
</code></pre>
| <python><pandas><matplotlib><seaborn><plot-annotations> | 2023-10-28 07:05:54 | 1 | 1,021 | A.E |
77,378,068 | 511,436 | python3 regex: how to remove u[0-9]{4} from string? | <p>python 3.9</p>
<pre><code>s = "aaau0004bbbbu0001"
s.replace(r"u[0-9]{4}", "")
'aaau0004bbbbu0001'
</code></pre>
<p>How to get the result 'aaabbbb'?</p>
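<p>For reference, a minimal sketch of the intended substitution: <code>str.replace</code> treats its argument as a literal string, while <code>re.sub</code> interprets it as a pattern.</p>

```python
import re

s = "aaau0004bbbbu0001"
# str.replace takes literal strings; re.sub applies the regular expression
result = re.sub(r"u[0-9]{4}", "", s)
print(result)  # aaabbbb
```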
| <python><python-3.x><regex> | 2023-10-28 04:43:27 | 1 | 1,844 | Davy |
77,378,036 | 2,077,386 | jinja NativeEnvironment casts an int string to a literal int | <p>This surprised me:</p>
<pre><code>import jinja2.nativetypes
env = jinja2.nativetypes.NativeEnvironment()
result = env.from_string('{{ x }}')
value = result.render(x='2')
print(type(value))
</code></pre>
<p>Output:</p>
<pre><code><class 'int'>
</code></pre>
<p>Looking over the native parser code it seems like it takes string nodes and runs them through <code>ast.parse()</code> and then treats it like a literal.</p>
<p>So <code>ast.parse</code> takes the string whose value is 2 and parses it as an integer through the ast parser. (if it parsed it from <code>repr('2')</code> it would actually work right...)</p>
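<p>A standalone illustration of that behavior, outside Jinja:</p>

```python
from ast import literal_eval

# literal_eval parses "2" as the integer literal 2, which is why the
# NativeEnvironment turns the rendered string '2' into an int;
# only a quoted literal round-trips as a string
print(type(literal_eval("2")))    # <class 'int'>
print(repr(literal_eval("'2'")))  # "'2'"
```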
<p>I've asked and this is not a bug, so I'm looking for a workaround. This is what I currently have, but I am unsure whether this is the best way to address this, especially since I don't understand what problem the ast.parse/parse_literal is trying to solve in the original (Is it trying to solve <code>{{ "a+1" }}</code> ??)</p>
<pre><code>import typing as t
from ast import literal_eval, parse
from itertools import chain, islice
from types import GeneratorType

def _concat(values: t.Iterable[t.Any]) -> t.Optional[t.Any]:
# based on native_concat from https://github.com/pallets/jinja/blob/main/src/jinja2/nativetypes.py
head = list(islice(values, 2))
if not head:
return None
if len(head) == 1:
raw = head[0]
# Removed this line. Always return raw when single node.
# if not isinstance(raw, str):
return raw
# Removed this else since we always return raw above.
# else:
if isinstance(values, GeneratorType):
values = chain(head, values)
raw = "".join([str(v) for v in values])
try:
return literal_eval(
# In Python 3.10+ ast.literal_eval removes leading spaces/tabs
# from the given string. For backwards compatibility we need to
# parse the string ourselves without removing leading spaces/tabs.
parse(raw, mode="eval")
)
except (ValueError, SyntaxError, MemoryError):
return raw
class FixedNativeEnvironment(jinja2.nativetypes.NativeEnvironment):
# Fix for https://github.com/pallets/jinja/issues/1904
concat = staticmethod(_concat) # type: ignore
class FixedNativeTemplate(jinja2.nativetypes.NativeTemplate):
environment_class = FixedNativeEnvironment
</code></pre>
<p>Any better suggestions that what I have above? What use cases do I break above?</p>
| <python><jinja2> | 2023-10-28 04:30:04 | 0 | 7,133 | rrauenza |
77,377,886 | 652,528 | Does mypy check Never type at all? | <p>I was playing with <code>Never</code> type in mypy. If I have a function <code>foo(x: int)</code> I expected that when called with a value of type <code>Never</code> mypy would complain, but it silently typechecks the call:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Never
def foo(x: int):
pass
def bar(x: Never):
    foo(x) # ok, I expected a type error
foo("foo") # err
</code></pre>
<p>--- edit ---</p>
<p>Just for reference my solution to create a uninhabited type is this</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod, final
@final
class Never(ABC):
@abstractmethod
def __init__(self) -> None: ...
</code></pre>
| <python><mypy><python-typing> | 2023-10-28 03:12:30 | 1 | 6,449 | geckos |
77,377,732 | 6,202,092 | Django redirect error unexpected keyword while returning to previous view | <p>I have searched and have not found any answers related to my specific problem.</p>
<p>When I submit a form from my <code>index</code> view to an <code>up_load</code> view and try to have the <code>up_load</code> view redirect back to the <code>index</code> view while including the <code>page</code> number as an argument, I get an error, I have tried using a <code>context</code> dictionary and still nothing.</p>
<p><strong>url.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from django.urls import path
from . import views
app_name = 'myApp'
urlpatterns = [
path("", views.index, name="index"),
path("", views.index, kwargs={'cliente_id': None}, name="index"),
path("<int:cliente_id>/", views.index, name="index"),
path("<int:cliente_id>/<str:page>", views.index, name="index"),
path('up_load/', views.up_load, name='up_load'),
]
</code></pre>
<p><strong>views.py</strong></p>
<pre class="lang-py prettyprint-override"><code>def index(request, cliente_id=None):
paginator = Paginator(client_list, 15)
page_number = request.GET.get("page")
page = paginator.get_page(page_number)
context = {'cliente_id': cliente_id,
'page': page,
'paginator': paginator }
return render(request, 'myApp/index.html', context)
def up_load(request):
if request.method == 'POST':
form = ExpeditensForm(request.POST, request.FILES)
cliente_id = request.POST.get('cliente_id', '/')
page = request.POST.get('page', '/')
if form.is_valid():
form.save()
return redirect('myApp:index', cliente_id=cliente_id, page=page)
else:
pass
</code></pre>
<p>Django returns the following error.</p>
<pre><code>Exception Type: TypeError at /Expedientes/4/1
Exception Value: index() got an unexpected keyword argument 'page'
</code></pre>
<ul>
<li>The browser shows the following URL:
<a href="http://127.0.0.1:8000/Expedientes/4/1" rel="nofollow noreferrer">http://127.0.0.1:8000/Expedientes/4/1</a></li>
<li>When it should be:
<a href="http://127.0.0.1:8000/Expedientes/4/?page=1" rel="nofollow noreferrer">http://127.0.0.1:8000/Expedientes/4/?page=1</a></li>
</ul>
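<p>For reference, a sketch of the URL shape being aimed for, built without Django: since <code>index</code> accepts only <code>cliente_id</code> as a keyword and reads <code>page</code> from <code>request.GET</code>, <code>page</code> belongs in the querystring rather than in the <code>redirect()</code> kwargs.</p>

```python
from urllib.parse import urlencode

# cliente_id goes in the path, page in the querystring
cliente_id = 4
url = f"/Expedientes/{cliente_id}/?{urlencode({'page': 1})}"
print(url)  # /Expedientes/4/?page=1
```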
<p>Thank you</p>
| <python><django><django-views> | 2023-10-28 01:43:16 | 1 | 503 | Enrique Bruzual |
77,377,731 | 3,129,604 | How to find the width of the red area using opencv or any other python library? | <p><a href="https://i.sstatic.net/hvgQP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hvgQP.png" alt="enter image description here" /></a></p>
<p>The red image above is the health meter for enemy units in a game. I have a Python application that takes a screenshot, and it must be able to determine the percentage of health remaining. In this case it would be ~70%, given the size of the red bar. I tried several suggestions from Google Bard, but none work 100%. How should I proceed?</p>
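<p>For reference, a hedged sketch of the thresholding idea on a synthetic image (it assumes the screenshot is already cropped to just the bar; a real screenshot would need something like <code>cv2.inRange</code> with tuned thresholds and a crop to the bar's bounding box):</p>

```python
import numpy as np

# synthetic stand-in for a cropped screenshot of just the bar (BGR):
# 70 of 100 columns filled with red
img = np.zeros((10, 100, 3), dtype=np.uint8)
img[:, :70] = (0, 0, 200)

# mark columns where the red channel clearly dominates
mask = (img[..., 2] > 150) & (img[..., 1] < 100) & (img[..., 0] < 100)
fill = mask.any(axis=0).mean()
print(f"{fill:.0%}")  # 70%
```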
| <python><opencv><image-processing><computer-vision><game-automation> | 2023-10-28 01:42:35 | 1 | 2,753 | Matteus Barbosa |
77,377,687 | 316,737 | Consume C++ COM class from Python | <p>I have a COM dll which is created using ATL COM with C++. I am creating an Python application which can reuse the existing business logic written in COM C++.</p>
<p>I am looking for a way to consume the COM class object methods from the COM DLL from python. Please suggest ways to achieve this.</p>
| <python><c++><com><atl> | 2023-10-28 01:14:07 | 1 | 905 | srajeshnkl |
77,377,645 | 219,153 | Is there a pure NumPy way to create an array of ND indicies and values? | <p>This script creates an array <code>b</code> of indices and values of given array <code>a</code>, which is useful for example in conversion of 2.5D height map to a point cloud:</p>
<pre><code>import numpy as np
a = np.arange(12).reshape(3, 4)
b = np.array([[*i, val] for i, val in np.ndenumerate(a)])
print(b)
</code></pre>
<pre><code>[[ 0 0 0]
[ 0 1 1]
[ 0 2 2]
[ 0 3 3]
[ 1 0 4]
[ 1 1 5]
[ 1 2 6]
[ 1 3 7]
[ 2 0 8]
[ 2 1 9]
[ 2 2 10]
[ 2 3 11]]
</code></pre>
<p>Is there a pure NumPy way, i.e. no loop, to do the same?</p>
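<p>For reference, one loop-free sketch (not claimed to be the fastest) using <code>np.indices</code>, relying on C-order flattening matching <code>ndenumerate</code>'s iteration order:</p>

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
# one index grid per axis, flattened, with the flattened values appended
idx = np.indices(a.shape).reshape(a.ndim, -1)
b = np.concatenate([idx, a.reshape(1, -1)]).T
print(b)
```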
| <python><arrays><numpy><numpy-ndarray> | 2023-10-28 00:48:22 | 2 | 8,585 | Paul Jurczak |
77,377,614 | 3,417,592 | Openai python api and Firestore emulator not playing well together | <p>wondering if anyone else has seen this bizarre behavior.</p>
<p>The python script below is a simplified version of something that I'm deploying to Cloud Run. The emulator stuff is replaced by <code>client = firestore.Client()</code> in the deployed app, and that works fine. The problem only pertains to using the firestore emulator for local dev.</p>
<p>I run the emulator:</p>
<pre><code>$ gcloud beta emulators firestore start --project test --host-port 'localhost:8001'
</code></pre>
<p>I run the script:</p>
<pre class="lang-py prettyprint-override"><code>import os
from openai import AzureOpenAI
import google.auth
from google.cloud import firestore
import mock
PORT = "8001"
os.environ["FIRESTORE_EMULATOR_HOST"] = f"localhost:{PORT}"
os.environ["FIRESTORE_EMULATOR_HOST_PATH"] = f"localhost:{PORT}/firestore"
os.environ["FIRESTORE_HOST"] = f"http://localhost:{PORT}/firestore"
os.environ["FIRESTORE_DATASET"] = "streamlit_firestore"
os.environ["FIRESTORE_PROJECT_ID"] = "test"
credentials = mock.Mock(spec=google.auth.credentials.Credentials)
fs_client = firestore.Client(project="test", credentials=credentials)
key = open("secrets/azure-openai-key1.txt").read().strip()
oai_client = AzureOpenAI(
azure_endpoint="https://azeastus2ai001.openai.azure.com/",
api_version="2023-07-01-preview",
api_key=key,
)
# ARBITRARY WRITE TO FIRESTORE
coll = fs_client.collection("fake@uhoh.com")
ref = coll.document()
ref.set({'base': 'jumping'})
# THIS HANGS
response = oai_client.chat.completions.create(
model="gpt-35-turbo-16k-1",
messages=[
{'role': 'system', 'content': 'You are a helpful assistant.'},
{'role': 'user', 'content': 'hello'}
],
temperature=0.5
)
message_full = response.choices[0].message
print(message_full.content)
</code></pre>
<p>and the script just hangs on the <code>openai.ChatCompletion.create</code> call. ^C doesn't stop the script. When I close the terminal running the script, the emulator issues this warning to the console:</p>
<pre><code>[firestore] Nov 20, 2023 6:22:04 PM io.netty.channel.DefaultChannelPipeline onUnhandledInboundException
[firestore] WARNING: An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
[firestore] java.net.SocketException: Connection reset
[firestore] at java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:394)
[firestore] at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:426)
[firestore] at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:256)
[firestore] at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132)
[firestore] at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:357)
[firestore] at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151)
[firestore] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
[firestore] at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
[firestore] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
[firestore] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
[firestore] at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
[firestore] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
[firestore] at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
[firestore] at java.base/java.lang.Thread.run(Thread.java:833)
</code></pre>
<p>If I comment out the "ARBITRARY WRITE TO FIRESTORE" part, the script runs to completion.</p>
<p>So I guess there's some kind of cross-talk between openai's python lib and the firestore emulator. Changing the port doesn't help - I tried 8002 and some random number in the ten thousands.</p>
<pre><code>% gcloud version
Google Cloud SDK 452.0.0
alpha 2023.10.23
app-engine-python 1.9.107
beta 2023.10.23
bq 2.0.98
cloud-datastore-emulator 2.3.1
cloud-firestore-emulator 1.18.2
core 2023.10.23
gcloud-crc32c 1.0.0
gke-gcloud-auth-plugin 0.5.6
gsutil 5.26
kubectl 1.27.5
</code></pre>
<p>Python package openai version is 1.3.3<br />
(UPDATE: a month ago when I originally posted it was 0.27.8. I tried this with 1.1.2 as 1.2.4 as well.)</p>
<p>Running on Mac OS Sonoma 14.0</p>
| <python><firebase><google-cloud-platform><google-cloud-firestore><openai-api> | 2023-10-28 00:30:31 | 0 | 1,951 | Nathan Lloyd |
77,377,439 | 19,123,103 | How to change label font size in streamlit | <p>I want to change the fontsize of the label above an input widget in my streamlit app.</p>
<p>What I have so far:</p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
label = "Enter text here"
st.text_input(label)
</code></pre>
<p>This renders the following:
<a href="https://i.sstatic.net/fsIFH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fsIFH.png" alt="text input field" /></a></p>
<p>I want to make the label "Enter text here" bigger.</p>
<p>I know there are various ways to change fontsize in <code>st.write()</code>. So, I tried some of them:</p>
<ul>
<li>the markdown headers syntax:
<pre class="lang-py prettyprint-override"><code>st.write(f"# {label}") # <--- works
st.text_input(f"# {label}") # <--- doesn't work
</code></pre>
</li>
<li>some CSS:
<pre class="lang-py prettyprint-override"><code>s = f"<p style='font-size:20px;'>{label}</p>"
st.markdown(s, unsafe_allow_html=True) # <--- works
st.text_input(s) # <--- doesn't work
</code></pre>
</li>
</ul>
<p>but as commented above, neither work. How do I make it work?</p>
| <javascript><python><html><css><streamlit> | 2023-10-27 23:07:24 | 1 | 25,331 | cottontail |
77,377,438 | 6,379,197 | Why fit_round received failures in flwr framework? | <p>I am using flwr framework to send a random array from client to server and then server will merge the array and sends back to each of the clients.</p>
<pre><code>import numpy as np
import flwr as fl
from flwr.server.strategy import Strategy
from typing import List
class MergeArraysStrategy(fl.server.strategy.FedAvg):
def aggregate_fit(self, rnd, results, failures):
aggregated_parameters, aggregated_metrics = super().aggregate_fit(rnd, results, failures)
print(aggregated_parameters, aggregated_metrics)
# if aggregated_parameters is not None:
# # Convert `Parameters` to `List[np.ndarray]`
# aggregated_ndarrays: List[np.ndarray] = fl.common.parameters_to_ndarrays(aggregated_parameters)
# # Save aggregated_ndarrays
# print(f"Saving round {rnd} aggregated_ndarrays...")
# np.savez(f"round-{rnd}-weights.npz", *aggregated_ndarrays)
# return aggregated_parameters, aggregated_metrics
print(results)
self.arrays = []
for result in results:
self.arrays.append(result.parameters)
if self.arrays:
merged_array = np.concatenate(self.arrays)
self.arrays = []
return merged_array, {}
else:
# Return some default or empty array if there are no results
return np.array([]), {}
def configure_evaluate(self, server_round, parameters, client_manager):
pass
def evaluate(self, value, parameters):
pass
# Create a Flower server
strategy = MergeArraysStrategy(min_available_clients=3, min_fit_clients=3)
client_manager = fl.server.SimpleClientManager()
server = fl.server.Server(client_manager=client_manager, strategy=strategy)
# Start the server
fl.server.start_server(
server_address="127.0.0.1:8080",
config=fl.server.ServerConfig(num_rounds=2),
server=server
)
</code></pre>
<p>Client code:</p>
<pre><code> import flwr as fl
import numpy as np
import flwr
from flwr.common import (
Code,
EvaluateIns,
EvaluateRes,
FitIns,
FitRes,
GetParametersIns,
GetParametersRes,
Status,
ndarrays_to_parameters,
parameters_to_ndarrays,
)
class Client(fl.client.NumPyClient):
def __init__(self, array):
self.array = array
print(self.array)
def get_parameters(self, ins: GetParametersIns) -> GetParametersRes:
#print(f"[Client {self.cid}] get_parameters")
# Get parameters as a list of NumPy ndarray's
ndarrays: np.ndarray = self.array
# Serialize ndarray's into a Parameters object
parameters = ndarrays_to_parameters(ndarrays)
# Build and return response
status = Status(code=Code.EVALUATE_NOT_IMPLEMENTED, message="Success")
return GetParametersRes(
status=status,
parameters=parameters,
)
# def fit(self, parameters):
# self.array = parameters
# fit_res = flwr.common.FitRes(status=flwr.common.Status(
# code=flwr.common.Code.EVALUATE_NOT_IMPLEMENTED ,
# message="Client does not implement `fit`",
# ),
# parameters=self.array,
# num_examples=len(self.array ),
# metrics={})
# return fit_res
def fit(self, ins: FitIns) -> FitRes:
# Deserialize parameters to NumPy ndarray's
parameters_original = ins.parameters
self.array = parameters_to_ndarrays(parameters_original)
# Update the model parameters using your training logic
# This is where you should perform the model training with the received parameters
# Serialize updated ndarray's into a Parameters object
parameters_updated = ndarrays_to_parameters(self.array)
# Build and return response
status = Status(code=Code.EVALUATE_NOT_IMPLEMENTED, message="Success") # Change the status code to SUCCESS
return FitRes(
status=status,
parameters=parameters_updated, # Return the updated model parameters
num_examples=len(self.array),
metrics={},
)
def evaluate(self, parameters):
pass
# Create a Flower client
client = Client(array=np.random.randn(2))
# Connect to the server
fl.client.start_client(server_address="127.0.0.1:8080", client=client)
# The server should handle the aggregation logic and return the merged array
# The client can retrieve the merged array from its 'array' attribute
merged_array = client.array
print(merged_array)
</code></pre>
<p><strong>Scenario:</strong>
Suppose two clients have [1, 2] and [3, 4] data. The server will collect these datasets and concatenate them. Then the server will have [1, 2, 3, 4] and send back to each of the clients. So after one round each client will have [1, 2, 3, 4].</p>
<p>But I am not getting the result I expect. The error log on the server side is as follows:</p>
<pre><code>INFO flwr 2023-10-27 18:55:29,709 | app.py:165 | Starting Flower server, config: ServerConfig(num_rounds=2, round_timeout=None)
INFO flwr 2023-10-27 18:55:29,745 | app.py:179 | Flower ECE: gRPC server running (2 rounds), SSL is disabled
INFO flwr 2023-10-27 18:55:29,746 | server.py:89 | Initializing global parameters
INFO flwr 2023-10-27 18:55:29,746 | server.py:277 | Requesting initial parameters from one random client
INFO flwr 2023-10-27 18:55:37,053 | server.py:281 | Received initial parameters from one random client
INFO flwr 2023-10-27 18:55:37,054 | server.py:91 | Evaluating initial parameters
None
INFO flwr 2023-10-27 18:55:37,054 | server.py:105 | FL starting
DEBUG flwr 2023-10-27 18:55:47,744 | server.py:228 | fit_round 1: strategy sampled 3 clients (out of 3)
DEBUG flwr 2023-10-27 18:55:47,759 | server.py:242 | fit_round 1 received 0 results and 3 failures
None {}
[]
INFO flwr 2023-10-27 18:55:47,760 | server.py:172 | evaluate_round 1: no clients selected, cancel
DEBUG flwr 2023-10-27 18:55:47,760 | server.py:228 | fit_round 2: strategy sampled 3 clients (out of 3)
DEBUG flwr 2023-10-27 18:55:47,766 | server.py:242 | fit_round 2 received 0 results and 3 failures
None {}
[]
INFO flwr 2023-10-27 18:55:47,766 | server.py:172 | evaluate_round 2: no clients selected, cancel
INFO flwr 2023-10-27 18:55:47,767 | server.py:154 | FL finished in 10.712144899999998
INFO flwr 2023-10-27 18:55:47,767 | app.py:225 | app_fit: losses_distributed []
INFO flwr 2023-10-27 18:55:47,768 | app.py:226 | app_fit: metrics_distributed_fit {}
INFO flwr 2023-10-27 18:55:47,768 | app.py:227 | app_fit: metrics_distributed {}
INFO flwr 2023-10-27 18:55:47,768 | app.py:228 | app_fit: losses_centralized []
INFO flwr 2023-10-27 18:55:47,769 | app.py:229 | app_fit: metrics_centralized {}
</code></pre>
| <python><pytorch><federated-learning> | 2023-10-27 23:07:11 | 1 | 2,230 | Sultan Ahmed |
77,377,413 | 14,735,451 | How to query arxiv daily based on keywords and write the results in a Google doc? | <p>I want to find papers that are published in the <a href="https://arxiv.org/list/cs/recent" rel="nofollow noreferrer">computer science section of arxiv</a> <strong>every day</strong> based on a list of keywords and write their <strong>titles</strong> and <strong>arxiv link</strong> to my Google doc (i.e., append to the end of what's already written):</p>
<p>For example, the Google doc can look as follows:</p>
<ol>
<li><a href="https://arxiv.org/pdf/2310.17121.pdf" rel="nofollow noreferrer">Test-time Augmentation for Factual Probing</a></li>
<li><a href="https://arxiv.org/pdf/2310.17022.pdf" rel="nofollow noreferrer">Controlled Decoding from Language Models</a></li>
</ol>
<p>And so on...</p>
<p>My list of search keywords:</p>
<pre><code>arxiv_keywords = ['machine learning', 'llm', 'potato']
</code></pre>
<p>The titles should not be case sensitive and should contain the keywords. For example, the following made-up titles should be returned: <code>Machine learninG is a mystery, LLM-based models are weird, potatoes are tasty when turned into fries</code></p>
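<p>For reference, a minimal sketch of the case-insensitive keyword filter on those made-up titles:</p>

```python
arxiv_keywords = ['machine learning', 'llm', 'potato']
titles = [
    "Machine learninG is a mystery",
    "LLM-based models are weird",
    "potatoes are tasty when turned into fries",
    "An unrelated paper",
]
# case-insensitive substring match against every keyword
matches = [t for t in titles if any(k in t.lower() for k in arxiv_keywords)]
print(matches)
```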
<p>My Google doc is located in <code>my_drive/Research/my_google_doc_name</code></p>
<p>I found this <a href="https://stackoverflow.com/questions/64047299/how-to-query-arxiv-for-a-specific-year">SO question</a> that asks about querying arxiv for a specific year based on one keyword, but there are several different things in my request and theirs which complicates things to me:</p>
<ol>
<li>I only need to query the <a href="https://arxiv.org/list/cs/recent" rel="nofollow noreferrer">computer science section</a>. Based on this <a href="https://stackoverflow.com/questions/62368794/search-results-from-arxiv-api-different-from-arxiv-advanced-search">SO question</a> there seems to be a difference in returned results when querying from the general arxiv website and a more advance search.</li>
<li>I need to automatically query it once a day, so I'm not sure how to automatically update the dates.</li>
<li>I'm not sure how to modify their script to handle multiple keywords</li>
<li>I'm not sure how to append the results to a Google doc, which requires a separate query from my understanding, from <a href="https://stackoverflow.com/questions/14248152/writing-to-a-google-document-with-python">here</a> and <a href="https://developers.google.com/docs/api/how-tos/move-text" rel="nofollow noreferrer">here</a> and probably to enter my password somehow.</li>
</ol>
<p>I found that I can automatically run a python script using cron following <a href="https://medium.com/@jonathanmondaut/creating-a-web-scraping-pipeline-scheduling-recurring-tasks-with-various-methods-c43fd4b44509" rel="nofollow noreferrer">this link</a>.</p>
<p>Overall there seems to be a lot going on which confuses me and I'm not entirely sure how to handle all the parts.</p>
| <python><web-scraping><google-docs> | 2023-10-27 22:58:43 | 2 | 2,641 | Penguin |
77,377,348 | 13,383,986 | Is there a way to modify the frequency of upload_file Callback? | <p>I'm using boto3 (Python). Here's a sample call:</p>
<p><code>s3.upload_file(file_name, s3_bucket, s3_bucket_key, Callback=update_progress, Config=config)</code></p>
<p>The issue is that no matter what, the Callback is called 4 times per Mb. So, basically every 256k it's called. That's annoying for most progress tracking libraries.</p>
<p>The <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/upload_file.html" rel="nofollow noreferrer">official docs</a> for this function say:</p>
<blockquote>
<p>Callback (function) – A method which takes a number of bytes transferred to be periodically called during the upload.</p>
</blockquote>
<p>The issue is "...a number of bytes". How can this be modified? If it can't be, is there an open GitHub issue regarding this weird behavior?</p>
<p>I tried changing around the settings in the <code>TransferConfig</code> like the chunk size. It makes no difference at all. The frequency of the Callback's execution has nothing to do with chunk size or anything else in that config, from what I can tell.</p>
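<p>For reference, since the callback frequency itself does not appear to be configurable, one hedged workaround is to throttle on the caller's side — the wrapper below (hypothetical names, pure Python) batches the ~256k updates before forwarding them to a progress bar:</p>

```python
class ThrottledCallback:
    """Hypothetical wrapper: forwards cumulative progress to `report`
    only once at least `min_step` new bytes have accumulated."""

    def __init__(self, report, min_step=1024 * 1024):
        self.report = report
        self.min_step = min_step
        self._pending = 0

    def __call__(self, bytes_amount):
        self._pending += bytes_amount
        if self._pending >= self.min_step:
            self.report(self._pending)  # hand one batched update downstream
            self._pending = 0

updates = []
cb = ThrottledCallback(updates.append, min_step=1024 * 1024)
for _ in range(8):        # simulate eight 256 KiB callback firings
    cb(256 * 1024)
print(len(updates))  # 2
```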
| <python><amazon-web-services><amazon-s3><boto3> | 2023-10-27 22:35:21 | 0 | 676 | GoForth |
77,377,292 | 5,536,733 | AttributeError: 'GlueContext' object has no attribute 'create_sample_dynamic_frame' | <p>This <a href="https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-glue-context.html#aws-glue-api-crawler-pyspark-extensions-glue-context-create-sample-dynamic-frame-from-catalog" rel="nofollow noreferrer">official doc</a> suggests there is a function to read sample data from the Glue Catalog: create_sample_dynamic_frame_from_catalog.</p>
<p>Detailed: create_dynamic_frame_from_catalog(database, table_name, redshift_tmp_dir, transformation_ctx = "", push_down_predicate= "", additional_options = {}, catalog_id = None)</p>
<p>But, when I try to use the same in my code, it errors out. I tried both of the below:</p>
<p>Option 1 (with dot before from_catalog as we generally do in reading dynamic frame - glueContext.create_dynamic_frame.from_catalog):</p>
<pre><code>AWSGlueDataCatalog_node1698294792048 = glueContext.create_sample_dynamic_frame.from_catalog(
    database="abc",
    table_name="def",
    transformation_ctx="ijk",
    sample_options={'maxSamplePartitions': 2, 'maxSampleFilesPerPartition': 3}
)
</code></pre>
<p><strong>Error Message</strong>: AttributeError: 'GlueContext' object has no attribute 'create_sample_dynamic_frame'</p>
<p>Option 2 (with all underscores):</p>
<pre><code>AWSGlueDataCatalog_node1698294792048 = glueContext.create_sample_dynamic_frame_from_catalog(
    database="abc",
    table_name="def",
    transformation_ctx="ijk",
    sample_options={'maxSamplePartitions': 2, 'maxSampleFilesPerPartition': 3}
)
</code></pre>
<p><strong>Error Message</strong>: Py4JError: An error occurred while calling o96.getSampleDynamicFrame. Trace:</p>
<p>What to do? My purpose is to read a small sample of a very big table to experiment with certain things without paying for unnecessary processing of a lot of data. Please help!</p>
| <python><amazon-web-services><pyspark><aws-glue> | 2023-10-27 22:18:45 | 1 | 1,787 | Aakash Basu |
77,377,228 | 10,426,490 | How to debug python Azure Functions, that use .venv, inside Visual Studio Code? | <p>I created a new Azure Function locally, via the VS Code terminal, using <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=windows%2Cisolated-process%2Cnode-v4%2Cpython-v2%2Chttp-trigger%2Ccontainer-apps&pivots=programming-language-python#install-the-azure-functions-core-tools" rel="nofollow noreferrer">Azure Functions Core Tools</a>:</p>
<ol>
<li>Create new directory: <code>mkdir test_func</code></li>
<li>Create new python virtual environment: <code>python -m venv .venv</code></li>
<li><code>.venv</code> activated with <code>.\.venv\Scripts\activate</code></li>
<li>Initialize new Azure Function: <code>func init</code> then <code>func new</code></li>
</ol>
<p>I use the <a href="https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azurite?tabs=visual-studio-code#install-azurite" rel="nofollow noreferrer">Azurite Extension in VS Code</a> to mockup the Azure Storage/Queue/Table resources. Now that the code is starting to take shape, I want to use the VS Code debugger.</p>
<ol start="0">
<li>Start the Azurite Services</li>
<li>Created <code>.vscode</code> directory in the project root</li>
<li>Inside, added <code>launch.json</code>:</li>
</ol>
<pre><code>{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach to Python Functions",
            "type": "python",
            "request": "attach",
            "port": 7071,
            "preLaunchTask": "func: host start"
        }
    ]
}
</code></pre>
<ol start="3">
<li>Also added <code>tasks.json</code></li>
</ol>
<pre><code>{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Activate Virtual Environment",
            "type": "shell",
            "command": ".\\.venv\\Scripts\\activate",
            "windows": {
                "command": ".\\.venv\\Scripts\\activate"
            }
        },
        {
            "type": "func",
            "command": "host start",
            "problemMatcher": "$func-watch",
            "isBackground": true,
            "dependsOn": ["Activate Virtual Environment"]
        }
    ]
}
</code></pre>
<p>When I run the debugger...</p>
<ul>
<li><a href="https://i.sstatic.net/rvWR0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rvWR0.png" alt="enter image description here" /></a></li>
</ul>
<p>...the <code>.venv</code> is not being activated. As such none of my <code>import</code> statements are successful. This <code>.venv</code> uses Python 3.8.</p>
<ul>
<li><a href="https://i.sstatic.net/GjT8C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GjT8C.png" alt="enter image description here" /></a></li>
</ul>
<p>Outside of the debugger, activating the <code>.venv</code> using <code>.\.venv\Scripts\activate</code> then <code>func host start</code> results in a successful Azure Function running locally:</p>
<ul>
<li><a href="https://i.sstatic.net/58AlR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/58AlR.png" alt="enter image description here" /></a></li>
</ul>
<p><strong>Question:</strong></p>
<ul>
<li>How do I activate the <code>.venv</code> as part of the debugger?</li>
</ul>
<hr />
<p><strong>Edit 1</strong>:</p>
<p>What a blocker...</p>
<p>Also tried:</p>
<p><code>tasks.json</code></p>
<pre><code>{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Activate Virtual Environment",
            "type": "shell",
            "command": "PowerShell",
            "args": [
                "-Command",
                "${workspaceFolder}\\.venv\\Scripts\\Activate.ps1"
            ]
        },
        {
            "label": "Start Azure Function Host",
            "type": "shell",
            "command": "func host start",
            "problemMatcher": "$func-watch",
            "isBackground": true,
            "dependsOn": ["Activate Virtual Environment"]
        }
    ]
}
</code></pre>
<p><code>launch.json</code>:</p>
<pre><code>{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach to Python Functions",
            "type": "python",
            "request": "attach",
            "port": 7071,
            "preLaunchTask": "Start Azure Function Host"
        }
    ]
}
</code></pre>
<p>Still results in the default python interpreter instead of the <code>.venv</code> version:</p>
<pre><code> * Executing task: PowerShell -Command C:\Users\path\to\project\.venv\Scripts\Activate.ps1
* Terminal will be reused by tasks, press any key to close it.
* Executing task: func host start
Found Python version 3.11.3 (py).
Azure Functions Core Tools
Core Tools Version: 4.0.5348 Commit hash: N/A (64-bit)
Function Runtime Version: 4.24.5.21262
</code></pre>
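<p>One more variation I intend to try (untested sketch): since each VS Code task runs in its own shell, the activation task cannot change the environment of the <code>func</code> task. Instead of a separate activation task, the venv's <code>Scripts</code> folder could be put on the <code>PATH</code> of the func task itself via the task's <code>options.env</code>:</p>

```json
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Start Azure Function Host",
            "type": "func",
            "command": "host start",
            "problemMatcher": "$func-watch",
            "isBackground": true,
            "options": {
                "env": {
                    "PATH": "${workspaceFolder}\\.venv\\Scripts;${env:PATH}"
                }
            }
        }
    ]
}
```

<p>With the venv's Scripts folder first on PATH, <code>func</code> should find the venv's Python instead of the default interpreter, but I have not confirmed this yet.</p>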
| <python><azure><azure-functions><vscode-debugger> | 2023-10-27 21:56:38 | 1 | 2,046 | ericOnline |
77,377,189 | 2,541,276 | pyodbc.OperationalError: ('08001', '[08001] [Microsoft][ODBC Driver 18 for SQL Server]SSL Provider: [SSL routines::certificate verify failed') | <p>I've created sql server docker container using the below command.</p>
<pre class="lang-bash prettyprint-override"><code>sudo docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=Password123#" -p 1433:1433 -d mcr.microsoft.com/mssql/server:2022-latest
</code></pre>
<p>The docker container with sql server is running fine.</p>
<pre class="lang-bash prettyprint-override"><code>~ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15ddaf978ba9 mcr.microsoft.com/mssql/server:2022-latest "/opt/mssql/bin/perm…" About a minute ago Up About a minute 0.0.0.0:1433->1433/tcp, :::1433->1433/tcp quirky_feistel
</code></pre>
<p>Now I created a login and user using the commands below:</p>
<pre><code>
~ sqlcmd -S localhost -U SA -P "Password123#" -C
1> create database demo;
2> create login demo with password 'Demo123#';
3> alter server role sysadmin add member demo;
4> use demo;
5> create user demo for login demo;
6> alter use demo with default_schema='demo';
7> alter role SA add member demo;
8>
</code></pre>
<p>Now I'm trying to access the database from Python with pyodbc, using the program below.</p>
<pre class="lang-py prettyprint-override"><code>import pyodbc

SERVER = 'localhost,1433'
DATABASE = 'demo'
USERNAME = 'demo'
PASSWORD = 'Demo123#'

#cnxn_str = f'DRIVER={{ODBC Driver 18 for SQL Server}};SERVER={SERVER};DATABASE={DATABASE};Integrated_Security=false;Trusted_Connection=yes;UID={USERNAME};PWD={PASSWORD}'
#print(cnxn_str)
cnxn = pyodbc.connect(driver='{ODBC Driver 18 for SQL Server}',
                      server=SERVER,
                      database=DATABASE,
                      trusted_connection='yes',
                      trust_server_certificate='yes',
                      username=USERNAME,
                      password=PASSWORD)
</code></pre>
<p>When I run the program I get below error -</p>
<pre class="lang-bash prettyprint-override"><code> ./venv/bin/python pydemo.py
Traceback (most recent call last):
File "/home/raj/Coding/python/pyodbc-demo/pydemo.py", line 10, in <module>
cnxn = pyodbc.connect(driver='{ODBC Driver 18 for SQL Server}',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pyodbc.OperationalError: ('08001', '[08001] [Microsoft][ODBC Driver 18 for SQL Server]SSL Provider: [error:0A000086:SSL routines::certificate verify failed:self-signed certificate] (-1) (SQLDriverConnect)')
</code></pre>
<p>How can I fix this error?</p>
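<p>Based on reading that ODBC Driver 18 turns encryption on by default, I suspect the connection string needs either <code>TrustServerCertificate=yes</code> (to accept the container's self-signed certificate) or <code>Encrypt=no</code>. This is the untested direction I plan to try next (SQL authentication, so no <code>Trusted_Connection</code>):</p>

```python
# Sketch (untested against a live server): connection string for SQL
# authentication against a container that only has a self-signed certificate.
SERVER = 'localhost,1433'
DATABASE = 'demo'
USERNAME = 'demo'
PASSWORD = 'Demo123#'

conn_str = (
    'DRIVER={ODBC Driver 18 for SQL Server};'
    f'SERVER={SERVER};DATABASE={DATABASE};'
    f'UID={USERNAME};PWD={PASSWORD};'
    'Encrypt=yes;TrustServerCertificate=yes;'
)
# import pyodbc
# cnxn = pyodbc.connect(conn_str)  # needs a reachable SQL Server instance
```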
| <python><sql-server><ssl><pyodbc> | 2023-10-27 21:45:35 | 0 | 10,555 | user51 |
77,377,165 | 19,299,757 | Download chrome driver 117 | <p>I need to download chrome version 117.0.5846.0 from <a href="https://googlechromelabs.github.io/chrome-for-testing/known-good-versions-with-downloads.json" rel="nofollow noreferrer">https://googlechromelabs.github.io/chrome-for-testing/known-good-versions-with-downloads.json</a></p>
<p>I know I can use the <code>curl</code> command to download files, but I am not sure how to extract the right download link from this JSON document.</p>
<p>This is for a Windows 64-bit machine.
Recently my Chrome got updated to this version, and since then I am unable to run Selenium tests from the PyCharm IDE, getting the following error.</p>
<p><em>selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 114</em></p>
<p>Any help is much appreciated.</p>
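<p>In case it helps to show what I have tried: the JSON can be fetched and searched with Python's standard library. The helper below and its assumptions about the JSON layout (a <code>versions</code> list with per-binary <code>downloads</code>) are my own reading of the file:</p>

```python
import json
import urllib.request

VERSIONS_URL = ("https://googlechromelabs.github.io/chrome-for-testing/"
                "known-good-versions-with-downloads.json")


def find_download_url(data, version, binary="chromedriver", platform="win64"):
    """Search the parsed JSON for the download URL of a version/platform."""
    for entry in data["versions"]:
        if entry["version"] == version:
            for item in entry["downloads"].get(binary, []):
                if item["platform"] == platform:
                    return item["url"]
    return None


# Usage (performs real network downloads):
#   with urllib.request.urlopen(VERSIONS_URL) as resp:
#       data = json.load(resp)
#   url = find_download_url(data, "117.0.5846.0")
#   urllib.request.urlretrieve(url, "chromedriver-win64.zip")
```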
| <python><google-chrome><selenium-webdriver><curl><selenium-chromedriver> | 2023-10-27 21:37:23 | 3 | 433 | Ram |
77,377,032 | 5,960,363 | Computed fields in Pydantic V2 - can I add aliases for validation and serialization? | <h2>Context</h2>
<p>Within a Pydantic model, I want to set the values of two fields based on the values contained by a third. BUT I'd also like to set some nuanced aliases.</p>
<p>So if we have:</p>
<pre class="lang-py prettyprint-override"><code>class FooModel(BaseModel):
    data_holder: list = Field(..., exclude=True)
    # Values of bar and baz below will be set based on values in data_holder
    bar: int = Field(..., validation_alias=AliasChoices('bar', 'Bar'), serialization_alias='Bar')
    baz: int = Field(..., validation_alias=AliasChoices('baz', 'Baz'), serialization_alias='Baz')
</code></pre>
<p>And I do:</p>
<pre><code>foo = FooModel(data_holder=[123, 456])
</code></pre>
<p>I want:
<br></p>
<pre><code># The model will compute:
bar=123 # Field validation and serialization rules apply
baz=456 # Field validation and serialization rules apply
# Such that:
print(bar)
#> Bar='123'
print(baz)
#> Baz='456'
</code></pre>
<h2>Question</h2>
<p><strong>--> Is this possible to do using <code>@computed_field</code>, or is there another recommended way to do this?</strong>
<br>
<br>
Computed field seems the obvious way, but <a href="https://docs.pydantic.dev/latest/api/fields/#pydantic.fields.computed_field" rel="nofollow noreferrer">based on the documentation</a> I don't see a way to add validation and serialization alias options. I believe <code>root_validator</code> provided a solution in V1, but that's deprecated. I've also considered using a "before" <code>field_validator</code>, but haven't gotten that to work with my actual use case. (The real-world context is messier and <code>data_holder</code> is actually another Pydantic model).</p>
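<p>For reference, the closest I have gotten is to drop <code>@computed_field</code> entirely and populate the fields in a <code>mode='before'</code> model validator, so the normal <code>Field</code> alias machinery still applies. A simplified sketch of that attempt (the validator body is my own and glosses over the messier real case):</p>

```python
from pydantic import AliasChoices, BaseModel, Field, model_validator


class FooModel(BaseModel):
    data_holder: list = Field(..., exclude=True)
    bar: int = Field(..., validation_alias=AliasChoices('bar', 'Bar'),
                     serialization_alias='Bar')
    baz: int = Field(..., validation_alias=AliasChoices('baz', 'Baz'),
                     serialization_alias='Baz')

    @model_validator(mode='before')
    @classmethod
    def fill_from_data_holder(cls, values):
        # Runs before field validation, so bar/baz still go through the
        # normal Field machinery (type coercion, aliases) afterwards.
        if isinstance(values, dict) and 'data_holder' in values:
            values.setdefault('bar', values['data_holder'][0])
            values.setdefault('baz', values['data_holder'][1])
        return values


foo = FooModel(data_holder=[123, 456])
print(foo.model_dump(by_alias=True))
```

<p>This gives the serialization aliases I want, but it is not a computed field, so I am still curious whether <code>@computed_field</code> itself can carry aliases.</p>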
| <python><class><validation><inheritance><pydantic> | 2023-10-27 21:02:21 | 1 | 852 | FlightPlan |
77,376,961 | 2,058,333 | BioPython SeqIO fails with Invalid Alphabet found | <p>I am trying to store a GenBank file that can be read by Geneious later.
I found an example of how to create the GenBank file <a href="https://stackoverflow.com/questions/65216225/writing-and-saving-genbank-files-with-biobython-seqio-module">here</a>.</p>
<p>But now I run into <code>Alphabet</code> issues.
I tried all protein classes from the <a href="https://biopython.org/docs/1.75/api/Bio.Alphabet.IUPAC.html" rel="nofollow noreferrer">documentation</a>, but all fail with the same error:</p>
<p><code>TypeError: Invalid alphabet found, <class 'Bio.Alphabet.ProteinAlphabet'>.</code></p>
<p>I don't know what else to do here. I am using <code>biopython==1.73</code>.</p>
<pre><code>record = SeqRecord.SeqRecord(
    Seq.Seq("MIRQALAVAALLLAGTAQADGLIDN", alphabet=Alphabet.ProteinAlphabet),
    id="FooBar",
)

with NamedTemporaryFile(mode="w") as genbank:
    SeqIO.write(record, genbank, "genbank")
</code></pre>
| <python><bioinformatics><biopython><genbank> | 2023-10-27 20:46:23 | 1 | 5,698 | El Dude |
77,376,856 | 1,806,124 | How to properly run python-telegram-bot together with fastapi? | <p>I built a Python framework to easily build Telegram bots based on the python-telegram-bot module. Then I added fastapi to it so that framework plugins could easily provide web interfaces.</p>
<p>But in order to get both working I needed to use the nest_asyncio module, which I don't like. I'd like to do it "properly" without it, but I haven't been able to get it working that way. My question is: how do I get rid of it?</p>
<p>You can check out the full code <a href="https://github.com/Endogen/tgbf2" rel="nofollow noreferrer">here</a>. Below is a reduced example:</p>
<pre><code>import asyncio
import nest_asyncio
import uvicorn

from threading import Thread
from fastapi import FastAPI, APIRouter
from starlette.responses import HTMLResponse
from telegram import Update
from telegram.ext import Application, CommandHandler, CallbackContext


class WebAppWrapper(Thread):
    def __init__(self, port: int = 5000):
        self.router = APIRouter()
        self.port = port
        self.app = None
        Thread.__init__(self)

    def run(self):
        self.app = FastAPI()
        self.app.include_router(self.router)

        html_content = """
        <html>
            <head>
                <title>Some HTML in here</title>
            </head>
            <body>
                <h1>Look ma! HTML!</h1>
            </body>
        </html>
        """

        @self.app.get('/', include_in_schema=False)
        async def root():
            return HTMLResponse(content=html_content, status_code=200)

        uvicorn.run(self.app, host="0.0.0.0", port=self.port)


class TelegramBot:
    def __init__(self):
        self.bot = None
        self.web = None

    async def run(self):
        self.bot = (
            Application.builder()
            .token("SOME_TELEGRAM_BOT_TOKEN")  # TODO: Enter Telegram Bot Token
            .build()
        )

        self.web = WebAppWrapper(port=5000)

        async def test_callback(update: Update, context: CallbackContext):
            await update.message.reply_text("pong")

        self.bot.add_handler(CommandHandler("ping", test_callback, block=False))

        self.web.start()
        self.bot.run_polling(drop_pending_updates=True)


if __name__ == "__main__":
    nest_asyncio.apply()
    asyncio.run(TelegramBot().run())
</code></pre>
| <python><bots><telegram><fastapi><python-telegram-bot> | 2023-10-27 20:19:57 | 0 | 661 | Endogen |
77,376,805 | 3,129,604 | How to use pytesseract to read text from this image with simple numbers? | <p><a href="https://i.sstatic.net/jiyYb.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jiyYb.jpg" alt="enter image description here" /></a></p>
<p><code>image_processed</code> variable is the attached image.</p>
<pre><code> custom_config = r'--oem 3 --psm 7 -c tessedit_char_whitelist= 0123456789/'
result = pytesseract.image_to_string(image_processed, lang='eng', config=custom_config)
</code></pre>
<p>The output:</p>
<p>43659 [44 38</p>
<p>The application takes a screenshot of the screen and crops the specified coordinates with the numbers, then applies an inverted threshold to get black numbers on a white background. I am trying to read the cropped numbers with pytesseract but it does not produce reliable text output.</p>
<p>How to use pytesseract to read text from this image with simple numbers?</p>
| <python><ocr><python-tesseract> | 2023-10-27 20:07:03 | 1 | 2,753 | Matteus Barbosa |
77,376,745 | 51,167 | Create an airtable OAuth token in python | <p>I have code from Google that successfully uses the <code>google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file</code> API to open a browser and create a Google API Oauth2 token which is stored in a file and used in API calls:</p>
<pre><code>from google_auth_oauthlib.flow import InstalledAppFlow


def get_credentials():
    creds = None
    # The file token.json stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists(TOKEN_FILENAME):
        creds = Credentials.from_authorized_user_file(TOKEN_FILENAME, SCOPES)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(CREDENTIALS_FILENAME, SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open(TOKEN_FILENAME, 'w') as token:
            token.write(creds.to_json())
    return creds
</code></pre>
<p>The <code>CREDENTIALS_FILENAME</code> is a JSON file that is downloaded from the Google OAuth console.</p>
<p>Airtable supports OAuth but it does not provide an easy-to-use downloadable credentials file. I am still able to get all of the key parameters, so I put them into a JSON file and tried Google's code, but it didn't work.</p>
<p>Here are the key parameters I tried:</p>
<pre><code>{"installed":
{"client_id":"****",
"auth_uri":"https://airtable.com/oauth2/v1/authorize",
"token_uri":"https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs",
"client_secret":"*****",
"redirect_uris":["http://localhost"]}}
</code></pre>
<p>Airtable gives me this error:
<a href="https://i.sstatic.net/DIn4d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DIn4d.png" alt="enter image description here" /></a></p>
<p>Well, my client ID is correct. The <code>http://localhost</code> is what Google uses. The Google <code>InstalledAppFlow.from_client_secrets_file</code> runs a local webserver on a random port to get the response from the redirected web browser. Unfortunately, <code>from_client_secrets_file</code> doesn't allow me to specify a port.</p>
<p>I've tried other OAuth2 libraries with Airtable but can't get any of them to work either. I have lots of code. Most of it requires that I copy a URL from the console, paste it into my browser, then take the redirect URL (which produces an error) and paste it into the console.</p>
<p>This code gets quite far:</p>
<pre><code>from oauthlib.oauth2 import WebApplicationClient
from requests_oauthlib import OAuth2Session
import secrets
import base64
import hashlib


def generate_pkce_pair():
    """Generate a code_verifier and code_challenge for PKCE."""
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode('utf-8')
    code_challenge = base64.urlsafe_b64encode(hashlib.sha256(code_verifier.encode('utf-8')).digest()).rstrip(b"=").decode('utf-8')
    return code_verifier, code_challenge


def main():
    client_id = '****'
    redirect_uri = 'https://localhost:8080/'
    scope = ['schema.bases:read']

    # Generate PKCE code verifier and challenge
    code_verifier, code_challenge = generate_pkce_pair()

    # OAuth2 client
    client = WebApplicationClient(client_id)
    oauth = OAuth2Session(client=client, redirect_uri=redirect_uri, scope=scope)

    # Authorization URL
    authorization_url, state = oauth.authorization_url(
        url='https://airtable.com/oauth2/v1/authorize',
        code_challenge_method="S256",
        code_challenge=code_challenge
    )
    print("Please go to this URL and authorize:", authorization_url)

    # Get the authorization code from the callback URL
    redirect_response = input('Paste the full redirect URL here: ')

    # Fetch the access token
    token_url = "https://api.airtable.com/oauth/token"
    token = oauth.fetch_token(
        token_url=token_url,
        authorization_response=redirect_response,
        client_secret=None,
        code_verifier=code_verifier)
    print("Access token:", token)


if __name__ == '__main__':
    main()
</code></pre>
<p>Giving me this:</p>
<p><a href="https://i.sstatic.net/0GNiI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0GNiI.png" alt="enter image description here" /></a></p>
<p>Redirecting to a <code>localhost:8080</code> URL which can't be loaded. That's okay; I paste the redirect URL directly into the console, but then get this error:</p>
<pre><code>oauthlib.oauth2.rfc6749.errors.CustomOAuth2Error: ({'type': 'INVALID_API_VERSION'})
</code></pre>
<p>And I can't figure out how to specify the API version.</p>
<p>So I need to either:</p>
<p>1 - Specify a valid redirect_uri to airtable that works with the Google library, or
2 - Specify the API version for the <code>OAuth2Session</code> call.</p>
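<p>To avoid pasting redirect URLs by hand, I have also been sketching a one-shot local HTTP server with the standard library (whether Airtable accepts a plain <code>http://localhost:8080/</code> redirect URI is a separate question):</p>

```python
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer


class CodeCatcher(BaseHTTPRequestHandler):
    """Capture the ?code=... query parameter from the OAuth redirect."""

    def do_GET(self):
        query = urllib.parse.urlparse(self.path).query
        params = urllib.parse.parse_qs(query)
        self.server.auth_code = params.get("code", [None])[0]
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Authorization received; you can close this tab.")

    def log_message(self, fmt, *args):
        pass  # keep the console quiet


def wait_for_code(server):
    """Handle exactly one request and return the captured authorization code."""
    server.auth_code = None
    server.handle_request()
    return server.auth_code


# Hypothetical usage, after printing authorization_url:
#   server = HTTPServer(("localhost", 8080), CodeCatcher)
#   code = wait_for_code(server)
#   server.server_close()
```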
| <python><oauth-2.0><oauth><airtable> | 2023-10-27 19:53:43 | 2 | 30,023 | vy32 |
77,376,715 | 11,716,727 | Why do I have a problem when I visualising the testing set results? | <p>As a beginner, I am working on the Titanic Survival dataset <a href="https://github.com/datasciencedojo/datasets/blob/master/titanic.csv" rel="nofollow noreferrer">The data is here</a> and applying LOGISTIC REGRESSION to them. After cleaning my data as shown below:</p>
<p><a href="https://i.sstatic.net/ysF7a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ysF7a.png" alt="enter image description here" /></a></p>
<p>I would like to visualise the test-set results with the code below, to show which passengers survived and which did not.</p>
<pre><code>from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('magenta', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('magenta', 'blue'))(i), label = j)
plt.legend()
plt.show()
</code></pre>
<p>But I get this error:</p>
<pre><code>ValueError Traceback (most recent call last)
Cell In[69], line 6
3 X_set, y_set = X_test, y_test
4 X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
5 np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
----> 6 plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
7 alpha = 0.75, cmap = ListedColormap(('magenta', 'blue')))
8 plt.xlim(X1.min(), X1.max())
9 plt.ylim(X2.min(), X2.max())
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\linear_model\_base.py:419, in LinearClassifierMixin.predict(self, X)
405 """
406 Predict class labels for samples in X.
407
(...)
416 Vector containing the class labels for each sample.
417 """
418 xp, _ = get_namespace(X)
--> 419 scores = self.decision_function(X)
420 if len(scores.shape) == 1:
421 indices = xp.astype(scores > 0, int)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\linear_model\_base.py:400, in LinearClassifierMixin.decision_function(self, X)
397 check_is_fitted(self)
398 xp, _ = get_namespace(X)
--> 400 X = self._validate_data(X, accept_sparse="csr", reset=False)
401 scores = safe_sparse_dot(X, self.coef_.T, dense_output=True) + self.intercept_
402 return xp.reshape(scores, -1) if scores.shape[1] == 1 else scores
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\base.py:588, in BaseEstimator._validate_data(self, X, y, reset, validate_separately, **check_params)
585 out = X, y
587 if not no_val_X and check_params.get("ensure_2d", True):
--> 588 self._check_n_features(X, reset=reset)
590 return out
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\base.py:389, in BaseEstimator._check_n_features(self, X, reset)
386 return
388 if n_features != self.n_features_in_:
--> 389 raise ValueError(
390 f"X has {n_features} features, but {self.__class__.__name__} "
391 f"is expecting {self.n_features_in_} features as input."
392 )
ValueError: X has 2 features, but LogisticRegression is expecting 6 features as input.
</code></pre>
<p><strong>Any assistance, please?</strong></p>
<p>I would like to show something like the figure below (which is for a different dataset):</p>
<p><a href="https://i.sstatic.net/Jz7eJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jz7eJ.png" alt="enter image description here" /></a></p>
<p>The whole code:</p>
<pre><code>import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

training_set = pd.read_csv('Train_Titanic.csv')

sns.heatmap(training_set.isnull(), yticklabels=False, cbar=False, cmap='Blues')

# if we want to drop the Cabin column from memory we set inplace=True
training_set.drop('Cabin', axis=1, inplace=True)
training_set.drop(['Name', 'Ticket', 'Embarked'], axis=1, inplace=True)

plt.figure(figsize=(15, 10))
sns.boxplot(x='Sex', y='Age', data=training_set)

def Fill_Age(data):
    age = data[0]
    sex = data[1]
    if pd.isnull(age):  # if the age is null
        if sex is 'male':
            return 29  # This is the average from the above boxplot for males
        else:  # This means the sex is female
            return 25  # This is the average from the above boxplot for females
    else:
        return age  # This will return the same age if it isn't null

training_set['Age'] = training_set[['Age', 'Sex']].apply(Fill_Age, axis=1)
training_set.drop('PassengerId', axis=1, inplace=True)

male = pd.get_dummies(training_set['Sex'])
male = pd.get_dummies(training_set['Sex'], drop_first=True)
male = pd.get_dummies(training_set['Sex'], drop_first=True, dtype=int)

# Let's drop the Sex column from our original data
training_set.drop('Sex', axis=1, inplace=True)

# Now, let's add the male column that we have created to the original dataset
training_set = pd.concat([training_set, male], axis=1)

# Now let's take our data and assign it to X (input) and y (output)
X = training_set.drop('Survived', axis=1).values
y = training_set['Survived'].values

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)

from sklearn.linear_model import LogisticRegression  # We import the class (LogisticRegression)
classifier = LogisticRegression(random_state=0)  # We take an object from the class
classifier.fit(X_train, y_train)  # We are performing our training
y_predict = classifier.predict(X_test)
</code></pre>
| <python><matplotlib><plot><seaborn><logistic-regression> | 2023-10-27 19:46:08 | 0 | 709 | SH_IQ |
77,376,677 | 10,466,809 | why does `datetime` module behave this way with timezones? | <p>I am trying to work with time and timezones. I am in the <code>US/Mountain</code> time zone and my computer (Windows) is configured to that time zone.</p>
<pre><code>import datetime
import zoneinfo
utc = zoneinfo.ZoneInfo('UTC')
mt = zoneinfo.ZoneInfo('US/Mountain')
print(datetime.datetime.now())
print(datetime.datetime.now().astimezone(mt))
print(datetime.datetime.now().astimezone(utc))
# 2023-10-27 13:17:18.840857
# 2023-10-27 13:17:18.840857-06:00
# 2023-10-27 19:17:18.840857+00:00
</code></pre>
<p>The last line is the one that confuses me. I thought the code <code>datetime.datetime.now()</code> creates a timezone naive object, then <code>astimezone(utc)</code> converts it to a timezone aware object, but doesn't change the "value" of the time. But here you can see that <code>astimezone(utc)</code> causes 6 hours to be added to the value of the time, as if the time generated by <code>datetime.datetime.now()</code> was a mountain time object.</p>
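<p>To illustrate the behaviour with a minimal stdlib-only experiment: <code>astimezone()</code> on a naive datetime first interprets it as system-local time (which is where the 6-hour shift comes from), whereas <code>replace()</code> attaches a zone without converting the value:</p>

```python
from datetime import datetime, timezone, timedelta

naive = datetime(2023, 10, 27, 13, 17, 18)

# astimezone() assumes a naive datetime is in the system's local zone,
# converts it, and so changes the wall-clock value.
converted = naive.astimezone(timezone.utc)

# replace() merely attaches tzinfo; the wall-clock value is untouched.
pinned = naive.replace(tzinfo=timezone.utc)

print(converted)  # hour differs from 13 unless the system zone is UTC
print(pinned)     # 2023-10-27 13:17:18+00:00
```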
| <python><datetime><timezone> | 2023-10-27 19:35:01 | 2 | 1,125 | Jagerber48 |
77,376,634 | 7,195,376 | Python `ShareableList` is removed when "reading" process closes | <p>I have a "main" process (write-only) in Python that generates a list of IDs which I would like to share with other <em>independently created</em> Python processes (read-only) on the same machine. I want the list to persist regardless if any of the "read" processes exit. I have been exploring <code>ShareableList</code> and <code>SharedMemory</code> from <code>multiprocessing</code> to see if it's suitable for this use case, but encountered some behavior I did not expect. The following is a script I wrote to test this out.</p>
<p><code>shareable_list.py</code></p>
<pre><code>import argparse

from multiprocessing import shared_memory


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--name", type=str, default="shared-memory-test",
                        help="name of shared memory block")
    parser.add_argument("--process-type", type=str, default="read", help="If 'write', then "
                        "write to shared memory. If 'read', then read from shared memory.")
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()
    max_seq_len = 64

    if args.process_type == "write":
        # initialize shared memory (preallocate in case I need to add more)
        share = shared_memory.ShareableList(sequence=["" for _ in range(max_seq_len)],
                                            name=args.name)
        print(f"created shared memory block {args.name} with sequence length {max_seq_len}")
        for i, data in enumerate(["a", "b", "c", "d", "e"]):
            data = str(i)
            print(f"writing {data} to shared memory")
            share[i] = data
    elif args.process_type == "read":
        # read data from shared_memory
        share = shared_memory.ShareableList(name=args.name)
        for i, data in enumerate(share):
            if data:
                print(f"read {data} from shared memory index {i}")
    else:
        raise ValueError(f"invalid process_type: {args.process_type}")

    # stall until user quits
    input("Press enter to quit:")

    # close shared memory
    share.shm.close()
    if args.process_type == "write":
        share.shm.unlink()
        print(f"unlinked shared memory")
</code></pre>
<p>Here is how I tested it:</p>
<ol>
<li>Run <code>python shareable_list.py --process-type write</code> to create and fill a <code>ShareableList</code> object. Let this process continue.</li>
<li>Open a new shell and run <code>python shareable_list.py --process-type read</code></li>
<li>Open a third shell and run <code>python shareable_list.py --process-type read</code></li>
</ol>
<p>The first process outputs the following (which is expected):</p>
<pre><code>created shared memory block shared-memory-test with sequence length 64
writing 0 to shared memory
writing 1 to shared memory
writing 2 to shared memory
writing 3 to shared memory
writing 4 to shared memory
</code></pre>
<p>The second and third processes output this (also expected):</p>
<pre><code>read 0 from shared memory index 0
read 1 from shared memory index 1
read 2 from shared memory index 2
read 3 from shared memory index 3
read 4 from shared memory index 4
</code></pre>
<p>However, when I close the second or third process by pressing "enter" I receive the following warning:</p>
<pre><code>UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
</code></pre>
<p>It also seems to remove the shared memory block. After closing a "read" process, opening any new "read" processes or closing the "write" process results in the following error:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '/shared-memory-test'
</code></pre>
<p>After reading the docs on <a href="https://docs.python.org/3.10/library/multiprocessing.shared_memory.html#multiprocessing.shared_memory.SharedMemory.close" rel="nofollow noreferrer"><code>close()</code></a> and <a href="https://docs.python.org/3.10/library/multiprocessing.shared_memory.html#multiprocessing.shared_memory.SharedMemory.unlink" rel="nofollow noreferrer"><code>unlink()</code></a>, I assumed that I would want to call <code>close()</code> before the "read" processes end and call <code>close()</code> and <code>unlink()</code> before the "write" process ends. My best guess is that the "read" processes here think they are the only processes tracking the object and shut it down because of this. Is my understanding incorrect here? Is this even a good approach to solving my problem? Thanks.</p>
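<p>For reference, the workaround I am currently experimenting with (a sketch based on my guess that each reader's resource tracker also registers the segment on attach): unregister the segment in the reader right after attaching, so only the writer's tracker controls its lifetime. Note this touches the private <code>_name</code> attribute:</p>

```python
from multiprocessing import shared_memory, resource_tracker


def attach_readonly(name):
    """Attach to an existing ShareableList without letting this process's
    resource tracker destroy the segment when the process exits."""
    share = shared_memory.ShareableList(name=name)
    # Attaching registers the segment with this process's resource_tracker,
    # which unlinks it at shutdown. Undo that registration so only the
    # writer controls the segment's lifetime (relies on a private attribute).
    resource_tracker.unregister(share.shm._name, "shared_memory")
    return share
```

<p>I am not sure this is the intended usage, which is part of why I am asking.</p>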
| <python><multiprocessing><shared-memory> | 2023-10-27 19:25:30 | 1 | 544 | Mason McGough |
77,376,586 | 3,842,845 | How to extract certain blocks of data from a csv file using values in the first column | <p>I am trying to extract only the data in the two sections ("Admissions" and "Readmissions"), starting from the 3rd line and 2nd column after a <strong>key word</strong> in the 1st column.</p>
<p>Please see bottom for sample of dataset in a csv file.</p>
<pre><code>Admissions
Not Started:12 Sent:3 Completed:3
Division Community ResidentName Date DocumentStatus Last Update
Test Station Jane Doe 9/12/2023 Sent 9/12/2023
Test Station 2 John Doe 9/12/2023 Not Started
Alibaba Fizgerald Super Man 9/12/2023 Not Started
Iceland Kingdom Super Woman 9/12/2023 Not Started
</code></pre>
<p>,,,,,</p>
<pre><code>Readmissions
Not Started:1 Sent:0 Completed:1
Division Community Resident Name Date DocumentStatus Last Update
Station Kingdom Pretty Woman 9/12/2023 Not Started
My Goodness Ugly Man 7/21/2023 Completed 7/26/2023
</code></pre>
<p>,,,,</p>
<pre><code>Discharge
Division Community Resident Name Date
Station Kingdom1 Pretty Woman2 8/22/2023
My Goodness1 Ugly Man1 4/8/2023
Landmark2 Nice Guys 9/12/2023
Iceland Kingdom2 Mr. Heroshi2 7/14/2023
More Kingdom 2 King Kong 8/31/2023
</code></pre>
<p>So, the logic is:</p>
<p>Find the row that has data where column1 = <strong>'Admissions'</strong> or <strong>'Readmissions'</strong>.</p>
<p>Go down 3 rows, go right 1 column.
Take all the data of 5 columns until it hits a row that has no data.</p>
<p>So, I would like to get the blue box section as an output for each:</p>
<p>This is an illustration:
<a href="https://i.sstatic.net/xrhkc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xrhkc.png" alt="enter image description here" /></a></p>
<p>I got this code from <a href="https://stackoverflow.com/questions/77100877/how-to-select-certain-columns-and-rows-based-on-keyword-in-column1">previous post</a>, but I was not able to get it resolved / get solutions for few weeks, so I am posting this fresh.</p>
<pre><code>import re
import pandas as pd
from io import StringIO

def read_block(names, igidx=True):
    with open("Test1.csv") as f:
        pat = r"(\w+),+$\n+(.+?)(?=\n\w+,+\n$|\Z)"
        return pd.concat([
            pd.read_csv(StringIO(m.group(2)), skipinitialspace=True)
            .iloc[:, 1:].dropna(how="all") for m in re.finditer(
                pat, f.read(), flags=re.M|re.S) if m.group(1) in names  # optional
        ], keys=names, ignore_index=igidx)

df = read_block(names=["Admissions"])
print(df)
</code></pre>
<p>Bottom is current output, but it concatenated all three sections.</p>
<p>I would like to have a separate output using a keyword ("<strong>Admissions</strong>", "<strong>Readmissions</strong>") in the 1st column as a variable in the code.</p>
<pre><code> Sent: 3 Completed: 3 Unnamed: 3 Unnamed: 4 Unnamed: 5
0 Community Resident Name Date Document Status Last Update
1 Test Station Jane Doe 9/12/2023 Sent 9/12/2023
2 Test Station 2 John Doe 9/12/2023 Not Started NaN
3 Alibaba Fizgerald Super Man 9/12/2023 Not Started NaN
4 Iceland Kingdom Super Woman 9/12/2023 Not Started NaN
5 Sent: 0 Completed: 1 NaN NaN NaN
6 Community Resident Name Date Document Status Last Update
7 Station Kingdom Pretty Woman 9/12/2023 Not Started NaN
8 My Goodness Ugly Man 7/21/2023 Completed 7/26/2023
9 Community Resident Name Date NaN NaN
10 Station Kingdom1 Pretty Woman2 8/22/2023 NaN NaN
11 My Goodness1 Ugly Man1 4/8/2023 NaN NaN
12 Landmark2 Nice Guys 9/12/2023 NaN NaN
13 Iceland Kingdom2 Mr. Heroshi2 7/14/2023 NaN NaN
14 More Kingdom 2 King Kong 8/31/2023 NaN NaN
</code></pre>
<p>How do I modify current code to make it work?</p>
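<p>In case it helps to sidestep the regex entirely, here is a sketch of the stated row/column logic done by hand (the <code>SAMPLE</code> text and its values are illustrative stand-ins for the real file; swap in <code>open("Test1.csv").read()</code>):</p>

```python
import pandas as pd

# Stand-in for the real file: same layout per section —
# keyword row, counts row, header row, data rows, blank row.
SAMPLE = """Admissions,,,,,
Not Started:12,Sent:3,Completed:3,,,
Division,Community,ResidentName,Date,DocumentStatus,Last Update
Test,Station,Jane Doe,9/12/2023,Sent,9/12/2023
Test,Station 2,John Doe,9/12/2023,Not Started,
,,,,,
Readmissions,,,,,
Not Started:1,Sent:0,Completed:1,,,
Division,Community,Resident Name,Date,DocumentStatus,Last Update
Station,Kingdom,Pretty Woman,9/12/2023,Not Started,
,,,,,
"""

def read_section(text, keyword):
    lines = text.splitlines()
    # find the row whose first column is the keyword
    start = next(i for i, ln in enumerate(lines) if ln.split(",")[0] == keyword)
    header = lines[start + 2].split(",")[1:6]   # go right 1 column, 5 columns wide
    rows = []
    for ln in lines[start + 3:]:                # go down 3 rows to the data
        cells = ln.split(",")
        if not any(cells):                      # a row with no data ends the section
            break
        rows.append(cells[1:6])
    return pd.DataFrame(rows, columns=header)

adm = read_section(SAMPLE, "Admissions")
```

Calling <code>read_section(text, "Readmissions")</code> then gives the second block on its own.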
| <python><pandas><regex> | 2023-10-27 19:16:27 | 1 | 1,324 | Java |
77,376,483 | 3,462,509 | Deep Learning: Choice of architecture in the IMDB Classification toy example. Embeddings underperform simple baseline | <p>I am comparing the performance of two different network architectures to solve the binary classification problem of <a href="https://keras.io/api/datasets/imdb/" rel="nofollow noreferrer">IMBD movie reviews</a>, presented in Chapter 3 of "Deep Learning With Python".</p>
<p><strong>Loading the data:</strong></p>
<pre><code># num_words means only use most common `n` words in the vocab
from keras.datasets import imdb
import torch

(train_data,train_labels),(test_data,test_labels) = imdb.load_data(num_words=10_000)

train_xs = torch.vstack([multi_vectorize(t) for t in train_data]).to(torch.float)
train_ys = torch.tensor(train_labels).unsqueeze(dim=1).to(torch.float)  # (25k,) -> (25k,1) for compatibility w/ train

## train/val split
val_xs = train_xs[0:10_000]
val_ys = train_ys[0:10_000]
partial_train_xs = train_xs[10_000:]
partial_train_ys = train_ys[10_000:]
</code></pre>
<p><strong>The architecture the book uses is a simple sequential dense network:</strong></p>
<pre><code>Sequential(
(0): Linear(in_features=10000, out_features=16, bias=True)
(1): ReLU()
(2): Linear(in_features=16, out_features=16, bias=True)
(3): ReLU()
(4): Linear(in_features=16, out_features=1, bias=True)
(5): Sigmoid()
)
</code></pre>
<p>The inputs to this network are "multihot" encoded text snippets. For example, given a review with 80 words and an assumed total vocabulary of 10k words (the domain), each input would be a vector of 10k elements with either a <code>0</code> if the index position in the vector corresponding to the letter in the vocab is not present, and a <code>1</code> if it is present:</p>
<pre><code>VOCAB_SZ = max(max(s) for s in train_data) + 1

def multi_vectorize(seq):
    t = torch.zeros(VOCAB_SZ)
    for s in seq:
        t[s] = 1
    return t
</code></pre>
<p>This makes sense but it also would ignore duplicate words, since the input could only represent them as either being present or not.</p>
<p>I know that embeddings are typically used as the first layer in NLP tasks, so I attempted to run a basic experiment including an embedding layer, assuming it would improve the performance:</p>
<hr>
<p><strong>Adding an embedding layer:</strong></p>
<p>I made a handful of modifications to create the embedding layer. First, we don't need every vector to be 10K elements now, since we aren't multihot encoding every input. Instead we make sure each input vector is the same size, which we accomplish by finding the largest input size in the training set and padding every input to be at least that large with a given "pad token". Then the embedding layer is <code>(10001,3)</code>. The first dimension is <code>10001</code> reflecting the vocab size of <code>10000</code> plus one for the newly added pad token. The second dimension is arbitrary and represents the dimensionality of each embedded token.</p>
<pre><code>## make all token lists the same size.
MAX_INPUT_LEN = len(max(train_data, key=len))

## zero is already in the vocab, which starts tokens at zero
## since the max token is VOCAB_SZ - 1 (zero based) we can
## use VOCAB_SZ as the start point for new special tokens
PAD_TOKEN = VOCAB_SZ

def lpad(x, maxlen=MAX_INPUT_LEN, pad_token=PAD_TOKEN):
    padlen = maxlen - len(x)
    if padlen > 0:
        return [pad_token] * padlen + x
    return x

EMB_SZ = 3
NUM_EMBED = VOCAB_SZ + 1  # special pad char (10,001 total)

emb_model = nn.Sequential(
    nn.Embedding(NUM_EMBED, EMB_SZ),
    nn.Flatten(),
    nn.Linear(EMB_SZ * MAX_INPUT_LEN, 16),
    nn.ReLU(),
    nn.Linear(16, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
    nn.Sigmoid()
)
</code></pre>
<hr>
<p><strong>Results / Questions:</strong>
<br><br>
This network takes <em>an order of magnitude</em> longer to train, and also performs worse. I am unsure why it is so much slower to train (10 epochs in the first version take about 3 seconds, versus 30 seconds for this version). There is an extra layer of indirection due to the embeddings, but the total parameter count of this model is actually less than the first version since we aren't using a 10K sized vector for every input. <strong>So that's point of confusion #1, why would it be so much slower to train?</strong></p>
<p>Second is the performance. I would have thought adding an embedding layer would allow the model more dimensionality to express the sentiment of the reviews. At the very least it would avoid "ignoring" tokens that repeat in the input the way the multihot version does. I experimented with smaller and larger embedding layers, but I cannot seem to get above ~82% validation accuracy and it takes ~80 epochs to get there. The first version gets to 90% validation accuracy after only 10 epochs. <strong>How should I think about why the performance is worse for the embedding version?</strong></p>
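<p>For what it's worth, the parameter-count claim in confusion #1 can be sanity-checked with back-of-the-envelope arithmetic (a sketch; <code>MAX_INPUT_LEN = 2494</code> is an assumption — the longest review in the IMDB training split — so substitute your own <code>len(max(train_data, key=len))</code>):</p>

```python
# Rough parameter counts for the two architectures; 2494 is an assumed
# MAX_INPUT_LEN, not a value from my run.
VOCAB_SZ = 10_000
MAX_INPUT_LEN = 2494
EMB_SZ = 3

def linear_params(n_in, n_out):
    return n_in * n_out + n_out   # weights + biases

multihot_net = (linear_params(VOCAB_SZ, 16)
                + linear_params(16, 16)
                + linear_params(16, 1))

embedding_net = ((VOCAB_SZ + 1) * EMB_SZ                     # embedding table
                 + linear_params(EMB_SZ * MAX_INPUT_LEN, 16)
                 + linear_params(16, 16)
                 + linear_params(16, 1))

print(multihot_net, embedding_net)
```

Under these assumptions the embedding model does have fewer parameters, so parameter count alone cannot explain the slowdown.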
| <python><keras><deep-learning><pytorch> | 2023-10-27 18:56:17 | 0 | 2,792 | Solaxun |
77,376,429 | 219,153 | Assign consecutive numbers to numpy array locations satisfying certain condition | <p>This Python script:</p>
<pre><code>import numpy as np

a = np.random.rand(8, 8)
b = np.full_like(a, -1)
n = 0
for i, val in np.ndenumerate(a):
    if val < 0.666:
        b[i] = n
        n += 1
</code></pre>
<p>creates array <code>b</code> with consecutive natural numbers in locations where <code>a < 0.666</code> (placeholder for an arbitrary condition) and <code>-1</code> otherwise. Is there a magic NumPy expression to produce the same result?</p>
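<p>One candidate vectorized form (a sketch): boolean-mask assignment fills elements in row-major order, which matches the order <code>np.ndenumerate</code> visits them:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((8, 8))

mask = a < 0.666
b = np.full_like(a, -1)
# np.arange over the number of True cells, assigned in C (row-major) order
b[mask] = np.arange(np.count_nonzero(mask))
```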
| <python><numpy><numpy-ndarray> | 2023-10-27 18:41:30 | 1 | 8,585 | Paul Jurczak |
77,376,311 | 7,055,769 | Unable to log user in | <p>This is my code:</p>
<pre><code>from rest_framework.decorators import api_view
from rest_framework.response import Response
from rest_framework import status
from django.contrib.auth import authenticate, login

@api_view(["POST"])
def user_login_api_view(request):
    username = request.data.get("username")
    password = request.data.get("password")
    user = authenticate(username=username, password=password)
    print(user, password, username)
    if user is not None:
        login(request, user)
        return Response(status=status.HTTP_200_OK)
    else:
        return Response(status=status.HTTP_401_UNAUTHORIZED)
</code></pre>
<p>This is my request body for login:</p>
<pre><code>{
    "username": "testuser111@test.test",
    "password": "testpassword111"
}
</code></pre>
<p>Here's the print result:</p>
<p><code>None testpassword111 testuser111@test.test</code></p>
<p>I register a user like so:</p>
<pre><code>from django.contrib.auth.models import User
from ..serializers import UserSerializer

class UserCreateApiView(generics.CreateAPIView):
    def get_queryset(self):
        return User.objects.create_user(self.request.data)

    serializer_class = UserSerializer
</code></pre>
<p>with this request body:</p>
<pre><code>{
    "username": "testuser111@test.test",
    "password": "testpassword111"
}
</code></pre>
<p>User serializer</p>
<pre><code>from rest_framework import serializers
from django.contrib.auth.models import User

class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = "__all__"

    password = serializers.CharField(write_only=True)
</code></pre>
<p>List user code:</p>
<pre><code>class UserListAPIView(generics.ListAPIView):
    queryset = User.objects.all().values("id", "username")
    serializer_class = UserListSerializer

class UserListSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = "__all__"

    password = serializers.CharField(write_only=True)
</code></pre>
<p>List response:</p>
<pre><code>[
    {
        "id": 1,
        "last_login": null,
        "username": "testuser111"
    },
    {
        "id": 3,
        "last_login": null,
        "username": "testuser1111@test.test"
    },
    {
        "id": 2,
        "last_login": null,
        "username": "testuser111@test.test"
    }
]
</code></pre>
<p>And in response I get 401. How do I log a user in properly?</p>
| <python><django><django-rest-framework> | 2023-10-27 18:16:34 | 2 | 5,089 | Alex Ironside |
77,376,207 | 10,918,680 | When is asyncio.Lock() needed? | <p>Here is the code I have so far that uses the bleak package to connect to multiple Bluetooth devices and get data/notifications from them. The devices are scales are they automatically shut off after awhile if there are no weights on them. Upon placing weight on them, they turn on and starts notifying. The code continuously scan for devices that are on, to log data from them. Upon shutting off, the disconnect_callback() function will be called.</p>
<p>I have a global set called connected_devices that keeps track of the devices that are on. When a device is connected, its MAC address will be added to the set. When a device disconnect, its Mac address will be removed from the set (in the disconnect_callback() function).</p>
<p>I have locking codes that are currently commented out in order to synchronize the adding/removal of MAC addresses from the connected_device set, but I'm not sure is necessary, since everything runs in the scan_and_connect() coroutine.</p>
<pre><code>import asyncio
import functools

from bleak import BleakClient, BleakScanner
from bleak.exc import BleakError

#lock = asyncio.Lock()
connected_devices = set()
notify_uuid = "00002A37-0000-1000-8000-00805F9B34FB"

def callback(client, characteristic, data):
    print(client.address, characteristic, data)

def disconnected_callback(client):
    #with lock:
    connected_devices.remove(client.address)
    print("disconnect from", client.address)

def match_device(device, adv_data):
    #with lock:
    return adv_data.local_name.startswith('BLE') and device.address not in connected_devices

async def scan_and_connect():
    while True:
        device = await BleakScanner.find_device_by_filter(match_device)
        if device is None:
            continue
        client = BleakClient(device, disconnected_callback=disconnected_callback)
        try:
            await client.connect()
            print("connected to", device.address)
            await client.start_notify(notify_uuid, functools.partial(callback, client))
            #with lock:
            connected_devices.add(device.address)
        except BleakError:
            # if failed to connect, this is a no-op, if failed to start notifications, it will disconnect
            await client.disconnect()

if __name__ == "__main__":
    asyncio.run(scan_and_connect())
</code></pre>
<p>Are the locks are necessary in this case?</p>
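<p>One detail if you do keep the lock: an <code>asyncio.Lock</code> is acquired with <code>async with</code> inside a coroutine — a plain <code>with lock:</code> raises an error, and synchronous callbacks like <code>disconnected_callback</code> cannot <code>await</code> it at all. A minimal sketch of the correct usage (illustrative toy counter, not my scale code):</p>

```python
import asyncio

counter = {"n": 0}

async def bump(lock):
    async with lock:          # note: `async with`, not `with`
        counter["n"] += 1

async def main():
    lock = asyncio.Lock()     # create it inside the running loop
    await asyncio.gather(*(bump(lock) for _ in range(100)))

asyncio.run(main())
```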
| <python><asynchronous><async-await><python-asyncio><coroutine> | 2023-10-27 17:57:27 | 1 | 425 | user173729 |
77,376,190 | 9,105,621 | chatbot that will generate a document draft with python, langchain, and openai | <p>I'm attempted to pass draft documents and have my chatbot generate a template using a prompt <code>create a non disclosure agreement draft for California between mike llc and fantasty world.</code> with my code below the response i'm getting is:
<code>"I'm sorry, but I cannot generate a non-disclosure agreement draft for you. However, you can use the provided context information as a template to create a non-disclosure agreement between Mike LLC and fantasty world. Just replace the placeholders in the template with the appropriate names and information for your specific agreement.</code></p>
<p>Here is my setup:</p>
<pre><code>import sys
import os
import openai
import constants
import gradio as gr
from langchain.chat_models import ChatOpenAI
from llama_index import SimpleDirectoryReader, GPTListIndex, GPTVectorStoreIndex, LLMPredictor, PromptHelper, load_index_from_storage

# Disable SSL certificate verification (for debugging purposes)
os.environ['REQUESTS_CA_BUNDLE'] = ''  # Set it to an empty string

os.environ["OPENAI_API_KEY"] = constants.APIKEY
openai.api_key = os.getenv("OPENAI_API_KEY")
print(os.getenv("OPENAI_API_KEY"))

def createVecorIndex(path):
    max_input = 4096
    tokens = 512
    chunk_size = 600
    max_chunk_overlap = 0.1
    prompt_helper = PromptHelper(max_input, tokens, max_chunk_overlap, chunk_size_limit=chunk_size)

    #define llm
    llmPredictor = LLMPredictor(llm=ChatOpenAI(temperature=.7, model_name='gpt-3.5-turbo', max_tokens=tokens))

    #load data
    docs = SimpleDirectoryReader(path).load_data()

    #create vector index
    vectorIndex = GPTVectorStoreIndex(docs, llmpredictor=llmPredictor, prompt_helper=prompt_helper)
    vectorIndex.storage_context.persist(persist_dir='vectorIndex.json')
    return vectorIndex

vectorIndex = createVecorIndex('docs')
</code></pre>
<p>In my docs directory, I have a few examples of non-disclosure agreements to create the vector index.</p>
<p>This was my first attempt at the query:</p>
<pre><code>def chatbot(input_index):
    query_engine = vectorIndex.as_query_engine()
    response = query_engine.query(input_index)
    return response.response

gr.Interface(fn=chatbot, inputs="text", outputs="text", title="Super Awesome Chatbot").launch()
</code></pre>
<p>I can't seem to get it to generate the draft, it keeps giving me the "I cannot generate a draft" response</p>
<p>I also tried to create a clause for the word "draft", but the setup below essentially uses the pretrained model instead of my vector index.</p>
<pre><code>def chatbot(input_index):
    query_engine = vectorIndex.as_query_engine()

    # If the "draft" clause is active:
    if "draft" in input_index.lower():
        # Query the vectorIndex for relevant information/context
        vector_response = query_engine.query(input_index).response
        print(vector_response)

        # Use vector_response as context to query the OpenAI API for a draft
        prompt = f"Based on the information: '{vector_response}', generate a draft for the input: {input_index}"
        response = openai.Completion.create(
            engine="text-davinci-002",
            prompt=prompt,
            max_tokens=512,
            temperature=0.2
        )
        openai_response = response.choices[0].text.strip()
        return openai_response

    # If "draft" clause isn't active, use just the vectorIndex response
    else:
        print('else clause')
        return query_engine.query(input_index).response
</code></pre>
| <python><openai-api><langchain><llama-index> | 2023-10-27 17:53:49 | 1 | 556 | Mike Mann |
77,376,095 | 20,181,052 | How to show one legend for multiple same label | <p>Below is plotting a time series graph in which colour categories the age group and label categories the location.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.use('TkAgg') # !IMPORTANT
fig, ax = plt.subplots()
ax.plot([93497, 137558, 158293, 180411, 183457, 187016, 189596],
marker='.', linestyle='-', linewidth=0.5, label='Hong Kong', color="bisque")
ax.plot([67594, 112486, 130558, 149890, 157317, 163262, 165828],
marker='v', linestyle='-', label='Kowloon', color="bisque")
ax.plot([58093, 102680, 121633, 138414, 144665, 149776, 152765],
marker='*', linestyle='-', label='New Territories', color="bisque")
ax.plot([101779, 140103, 160860, 176330, 183330, 182458, 184591],
marker='.', markersize=5, linestyle='-', linewidth=0.5, label='Hong Kong', color="teal")
ax.plot([81941, 115792, 131061, 147161, 153582, 160379, 161225],
marker='v', markersize=5, linestyle='-', label='Kowloon', color="teal")
ax.plot([56305, 91942, 106554, 120067, 125490, 132070, 136451],
marker='*', markersize=5, linestyle='-', label='New Territories', color="teal")
ax.set_ylabel('$HKD/sq. m.')
ax.legend()
plt.show()
</code></pre>
<p>Result:</p>
<p><a href="https://i.sstatic.net/XxnSU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XxnSU.png" alt="enter image description here" /></a></p>
<p>How to have one legend only for each label, so no duplicated legend name.</p>
<p>For example, instead of "Hong Kong", "Kowloon", "New Territories", "Hong Kong", "Kowloon", "New Territories"</p>
<p>Showing "Hong Kong", "Kowloon", "New Territories" only.</p>
| <python><matplotlib> | 2023-10-27 17:37:52 | 2 | 553 | TungTung |
77,376,049 | 5,304,058 | the JSON object must be str, bytes or bytearray, not float | <p>df_json:</p>
<pre><code>filedate code errorID rawrecord errortype
20230811 8003 100 {"Action":"NEW","ID":"30811-195552-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"111111111111","price":12.0000} BBBB
20230811 8003 101 {"Action":"NEW","ID":"20811-195555-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"111111111112","price":18.0000} BBBB
20230811 8003 102 {"Action":"NEW","ID":"50811-195552-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"411111111111","price":12.0000} BBBB
20230811 8003 103 {"Action":"NEW","ID":"60811-195552-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"511111111111","price":19.0000} BBBB
20230811 8003 104 {"Action":"NEW","ID":"40811-195552-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"611111111111","price":10.0000} BBBB
20230811 8003 105 {"Action":"NEW","ID":"80811-195552-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"811111111111","price":12.0000} BBBB
20230811 8003 106 {"Action":"NEW","ID":"70811-195552-952","EventType":"MMMM","Date":"20230811T000000.000000","orderID":"911111111111","price":17.0000} AAAA
</code></pre>
<p>when I try to load this I get the error</p>
<pre><code>"the JSON object must be str, bytes or bytearray, not float"
</code></pre>
<p>I am trying to extract key-value pairs from the <code>rawrecord</code> column.</p>
<pre><code>df_json=df_json.join(df_json['rawrecord'].apply(lambda x: pd.Series(json.loads(x))))
</code></pre>
<p>I tried to use <code>json.dumps()</code> and <code>json.load()</code>, but it's not working.</p>
<p>Can anyone please help me with this?</p>
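<p>For context on the error itself: that message usually means some <code>rawrecord</code> cells are missing values (<code>NaN</code>, a float), not JSON strings. A sketch of a guard around the parse, shown on a toy frame standing in for <code>df_json</code>:</p>

```python
import json
import numpy as np
import pandas as pd

# Toy stand-in for df_json: one valid JSON string, one missing value.
df_json = pd.DataFrame({"rawrecord": ['{"Action": "NEW", "price": 12.0}', np.nan]})

# Only parse actual strings; missing cells become an all-NaN row.
parsed = df_json["rawrecord"].apply(
    lambda x: pd.Series(json.loads(x)) if isinstance(x, str) else pd.Series(dtype=object)
)
df_out = df_json.join(parsed)
```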
| <python><pandas> | 2023-10-27 17:30:20 | 2 | 578 | unicorn |
77,375,953 | 11,153,160 | Directory not found from Makefile | <p>I'm trying to setup a few scripts in a Makefile for a Python project. My current Makefile looks like this:</p>
<pre><code>install:
pip install -r requirements.txt
activate:
source env/bin/activate
deactivate:
deactivate
jupyter:
jupyter lab
</code></pre>
<p>This file is in the project's root folder. The env folder exists and has all the content you'd expect from a virtual environment. Now from the root folder, when I type <code>make activate</code>, I have the following error:</p>
<pre><code>source env/bin/activate
make: source: No such file or directory
make: *** [Makefile:4: activate] Error 127
</code></pre>
<p>But if I type <code>source env/bin/activate</code> directly in a terminal, it works as expected. What is the issue here?</p>
<p>Thank you.</p>
| <python><makefile> | 2023-10-27 17:11:26 | 0 | 990 | strblr |
77,375,881 | 263,061 | How to type a Python function the same way as another function? | <p>For writing a wrapper around an existing function, I want that wrapper to have the same, or very similar, type.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>import os
def my_open(*args, **kwargs):
return os.open(*args, **kwargs)
</code></pre>
<p>The type signature for <code>os.open()</code> is complex and may change over time as its functionality and typings evolve, so I do not want to copy-paste the type signature of <code>os.open()</code> into my code. Instead, I want to <em>infer</em> the type for <code>my_open()</code>, so that it "copies" the type of <code>os.open()</code>'s parameters and return values.</p>
<p><code>my_open()</code> shall have the same type as the wrapped function <code>os.open()</code>.</p>
<hr />
<p>I would like to do the same thing with a decorated function:</p>
<pre class="lang-py prettyprint-override"><code>@contextmanager
def scoped_open(*args, **kwargs):
"""Like `os.open`, but as a `contextmanager` yielding the FD.
"""
fd = os.open(*args, **kwargs)
try:
yield fd
finally:
os.close(fd)
</code></pre>
<p>Here, the inferred function arguments of <code>scoped_open()</code> shall be the same ones as <code>os.open()</code>, but the return type shall be a <code>Generator</code> of the <em>inferred</em> return type of <code>os.open()</code> (currently <code>int</code>, but again I do not wish to copy-paste that <code>int</code>).</p>
<hr />
<p>I read some things about <a href="https://docs.python.org/3/library/typing.html#typing.ParamSpec" rel="nofollow noreferrer">PEP 612</a> here:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/71968447/python-typing-copy-kwargs-from-one-function-to-another">Python Typing: Copy `**kwargs` from one function to another</a></li>
<li><a href="https://stackoverflow.com/questions/47060133/python-3-type-hinting-for-decorator/68290080#68290080">Python 3 type hinting for decorator</a></li>
</ul>
<p>These seem related, but the examples given there still always copy-paste at least some part of the types.</p>
<p><strong>How can this be done in <code>pyright</code>/<code>mypy</code>/general?</strong></p>
| <python><mypy><python-typing><pyright> | 2023-10-27 16:55:31 | 1 | 25,947 | nh2 |
77,375,728 | 3,568 | How to add "AND EXISTS (SELECT ...)" to a Python/SqlAlchemy query? | <p>I have an SqlAlchemy query that looks like this.</p>
<pre><code>results = (
session.query(MyFirstTable)
# various .join() and .filter() removed for brevity.
.all()
)
</code></pre>
<p>I want to add another <code>.filter</code>, but if I were writing SQL, it would look like:</p>
<pre><code>SELECT * MyFirstTable
-- INNER JOINs and WHEREs removed for brevity.
AND EXISTS (
SELECT * FROM MyOtherTable
WHERE MyFirstTable.ID = MyOtherTable.myFirstTableID
AND MyOtherTable.OtherThing = 42
)
</code></pre>
<p>I want the top level query to only return a record of <code>MyFirstTable</code>, if that record matches its primary key (<code>ID</code>) with one in another in another table (<code>MyOtherTable</code>) AND that record in the other table matches other conditions.</p>
<p>(Note that the other table might have several records that match this second WHERE. I only care if that number is zero or not-zero.)</p>
<p>Is there a way to write a <code>.filter()</code> to add onto the chain just before the final <code>.all()</code> that will do this?</p>
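<p>One approach that seems to match this shape is <code>Query.exists()</code> as the argument to <code>.filter()</code>; the inner query correlates with the outer table automatically. A self-contained sketch (the models and an in-memory SQLite database are stand-ins for the real schema):</p>

```python
# Sketch: correlated EXISTS via Query.exists().
# MyFirstTable / MyOtherTable mirror the question's hypothetical schema.
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class MyFirstTable(Base):
    __tablename__ = "my_first_table"
    ID = Column(Integer, primary_key=True)

class MyOtherTable(Base):
    __tablename__ = "my_other_table"
    ID = Column(Integer, primary_key=True)
    myFirstTableID = Column(Integer, ForeignKey("my_first_table.ID"))
    OtherThing = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        MyFirstTable(ID=1), MyFirstTable(ID=2),
        MyOtherTable(ID=10, myFirstTableID=1, OtherThing=42),
        MyOtherTable(ID=11, myFirstTableID=2, OtherThing=7),
    ])
    session.commit()

    results = (
        session.query(MyFirstTable)
        # the subquery references MyFirstTable.ID, so it auto-correlates
        .filter(
            session.query(MyOtherTable)
            .filter(
                MyOtherTable.myFirstTableID == MyFirstTable.ID,
                MyOtherTable.OtherThing == 42,
            )
            .exists()
        )
        .all()
    )
    matched_ids = [r.ID for r in results]
```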
| <python><sqlalchemy> | 2023-10-27 16:23:44 | 1 | 3,313 | billpg |
77,375,679 | 7,672,005 | Jupyter notebook opening a console to fill sudo password | <p>I've opened a jupyter notebook through my terminal on ubuntu, and am trying to run part of the notebook that has a <code>sudo -S pip install -e .</code> command.
It will hang on prompting for a password due to the sudo.
I'm not sure how to circumvent this, I've tried the following:</p>
<ol>
<li><code>%qtconsole</code> opens a console which seems tied to the notebook, however it freezes up during the sudo, when I try to sudo within the console it also doesn't let me press enter on typing the password.</li>
<li>Defining the command, username, and password. And then using <code>command_with_password = f'echo "{password}" | sudo -S -u {username} {command}'</code>. This gives <code>pip not found</code> which is odd since just a bit earlier in the notebook it runs some <code>!pip install</code>s just fine</li>
</ol>
<p>I have anaconda ready to go too, someone suggested there might be a solution there but I did not find it.</p>
<p>Any help is appreciated, I'd rather want a solution similar to [1] with some kind of console that makes it easy to catch any prompts in the future or some method of elevating the notebook to not require passwords, but I'm desperate to try anything.</p>
| <python><linux><bash><jupyter-notebook><jupyter> | 2023-10-27 16:16:13 | 0 | 534 | Zyzyx |
77,375,518 | 8,260,569 | Pandas groupby slicing, but in numpy | <p>I have a dataframe with float dtypes only, and an id column. I want to restrict the dataframe to just the top 10 rows for each id value. The immediate way to do that is just <code>df.groupby('id').apply(lambda minidf: minidf.iloc[:k])</code>, but it seems a bit slow and I'm wondering if there are much faster ways of getting the same output.</p>
<p>So I wanted to ask, since the dataframe is made up of all floats anyway, is there a numpy method I can use that's equivalent to the above line of code? Or maybe if I can achieve the same result in any other library, but in a much shorter time. Thanks in advance!</p>
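<p>For reference, two standard pandas shortcuts that avoid <code>apply</code> entirely (a sketch on toy data; <code>GroupBy.head(k)</code> and a <code>cumcount</code> mask both keep the first <code>k</code> rows per group):</p>

```python
import numpy as np
import pandas as pd

# Toy frame: 3 ids with 15 rows each
df = pd.DataFrame({"id": np.repeat([1, 2, 3], 15), "v": np.arange(45.0)})
k = 10

top = df.groupby("id").head(k)                  # first k rows per id
top2 = df[df.groupby("id").cumcount() < k]      # equivalent boolean-mask form
```

Both are vectorized internally, so they should be considerably faster than the per-group <code>apply</code>.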
| <python><numpy><group-by> | 2023-10-27 15:53:28 | 1 | 304 | Shirish |
77,375,425 | 159,633 | Large memory usage when calculating std dev over rolling window | <p>I have a 2D array, and I want to calculate the mean and standard deviation over a spatial window of <code>win_size</code> pixels (~50 pixels or more). I only need to extract this for a subset of pixels whose coordinates I have stored in two arrays.</p>
<p>My code looks like this:</p>
<pre><code># Set up rolling window, centred at pixel
R_spatial = R.rolling({"x": win_size, "y": win_size}, center=True)
# Calculate mean for a range of pixels
R_mean = R_spatial.mean().isel(x=x_loc_idx, y=y_loc_idx).compute()
# Calculate standard deviation
R_std = R_spatial.std().isel(x=x_loc_idx, y=y_loc_idx).compute()
</code></pre>
<p>The mean is calculated without any issues, but the standard deviation calculations starts eating more and more memory until it crashes the Python interpreter.</p>
<p>The problem goes away if I launch a large cluster, but I thought that xarray/dask were able to cope with calculations too big to fit in memory?</p>
<p>Any clues?
xarray and dask versions are 2023.05</p>
<h2>Memory profiling</h2>
<p>A simple benchmarking exercise using e.g., <code>mprof run run_me.py</code>, where <code>run_me.py</code> is shown below:</p>
<pre><code>import numpy as np
import xarray as xr
from memory_profiler import profile

def create_data(N=3500, n_samps=5):
    R = xr.DataArray(np.random.randn(N, N), dims=["x", "y"]).chunk({"x": 256, "y": 256})
    x_loc_idx, y_loc_idx = np.random.randint(0, N, (2, n_samps))
    return R, x_loc_idx, y_loc_idx

@profile
def do_mean(R, x_loc_idx, y_loc_idx, win_size=21):
    # Set up rolling window, centred at pixel
    R_spatial = R.rolling({"x": win_size, "y": win_size}, center=True)
    # Calculate mean for a range of pixels
    return R_spatial.mean().isel(x=x_loc_idx, y=y_loc_idx).compute()

@profile
def do_std(R, x_loc_idx, y_loc_idx, win_size=21):
    # Set up rolling window, centred at pixel
    R_spatial = R.rolling({"x": win_size, "y": win_size}, center=True)
    # Calculate standard deviation for a range of pixels
    return R_spatial.std().isel(x=x_loc_idx, y=y_loc_idx).compute()

if __name__ == "__main__":
    R, x_loc_idx, y_loc_idx = create_data()
    for win_size in [11, 31]:
        mu = do_mean(R, x_loc_idx, y_loc_idx, win_size=win_size)
        sigma = do_std(R, x_loc_idx, y_loc_idx, win_size=win_size)
</code></pre>
<p>Running this returns on my system:</p>
<pre><code>$ mprof run run_me.py
mprof: Sampling memory every 0.1s
running new process
running as a Python program...
Filename: run_me.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
11 295.4 MiB 295.4 MiB 1 @profile
12 def do_mean(R, x_loc_idx, y_loc_idx, win_size = 21):
13 # Set up rolling window, centred at pixel
14 295.5 MiB 0.1 MiB 1 R_spatial = R.rolling({"x": win_size, "y": win_size}, center=True)
15 # Calculate mean for a range of pixels
16 344.0 MiB 48.5 MiB 1 return R_spatial.mean().isel(x=x_loc_idx, y=y_loc_idx).compute()
Filename: run_me.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
18 344.0 MiB 344.0 MiB 1 @profile
19 def do_std(R, x_loc_idx, y_loc_idx, win_size = 21):
20 # Set up rolling window, centred at pixel
21 344.0 MiB 0.0 MiB 1 R_spatial = R.rolling({"x": win_size, "y": win_size}, center=True)
22 # Calculate mean for a range of pixels
23 333.3 MiB -10.8 MiB 1 return R_spatial.std().isel(x=x_loc_idx, y=y_loc_idx).compute()
Filename: run_me.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
11 333.3 MiB 333.3 MiB 1 @profile
12 def do_mean(R, x_loc_idx, y_loc_idx, win_size = 21):
13 # Set up rolling window, centred at pixel
14 333.3 MiB 0.0 MiB 1 R_spatial = R.rolling({"x": win_size, "y": win_size}, center=True)
15 # Calculate mean for a range of pixels
16 345.4 MiB 12.1 MiB 1 return R_spatial.mean().isel(x=x_loc_idx, y=y_loc_idx).compute()
Filename: run_me.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
18 345.4 MiB 345.4 MiB 1 @profile
19 def do_std(R, x_loc_idx, y_loc_idx, win_size = 21):
20 # Set up rolling window, centred at pixel
21 345.4 MiB 0.0 MiB 1 R_spatial = R.rolling({"x": win_size, "y": win_size}, center=True)
22 # Calculate mean for a range of pixels
23 440.0 MiB 94.6 MiB 1 return R_spatial.std().isel(x=x_loc_idx, y=y_loc_idx).compute()
</code></pre>
<p>If I set <code>win_size</code> to e.g. 41, or increase the number of samples to 10, the program crashes after having eaten all the memory (this is on my laptop running Linux). Using Dask Gateway servers, I can process the kind of loads I was expecting (n_samps in the 10000s, win_size ~ 61).</p>
<p>The calculation is fairly trivial, and I could test other alternatives (numba, by expanding all the neighbourhood calculations using loops, or using <code>map_blocks</code>, ...), but I wanted to get familiar with xarray, so I was wondering about this result?</p>
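<p>One workaround worth trying, since <code>rolling().mean()</code> behaved well here (a sketch, not verified against the dask scheduler internals): build the standard deviation from two rolling <em>means</em> via the identity Var(X) = E[X²] − E[X]², which matches numpy's default <code>ddof=0</code>:</p>

```python
import numpy as np
import xarray as xr

R = xr.DataArray(np.arange(9.0), dims=["x"])  # tiny stand-in for the real raster
win = 3

roll = R.rolling(x=win, center=True)
roll_sq = (R ** 2).rolling(x=win, center=True)

# Var(X) = E[X^2] - E[X]^2, built from two rolling means only
R_mean = roll.mean()
R_std = np.sqrt(roll_sq.mean() - R_mean ** 2)
```

The usual caveat applies: this formula can lose precision through cancellation when the mean is large relative to the spread.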
| <python><dask><python-xarray> | 2023-10-27 15:35:54 | 0 | 2,149 | Jose |
77,375,031 | 1,867,328 | Print nothing in Python | <p>I have the code below:</p>
<pre><code>choose = 4
print(choose if choose == 4 else None)
## this prints 4
</code></pre>
<p>But,</p>
<pre><code>choose = 5
print(choose if choose == 4 else None)
</code></pre>
<p>But this prints <code>None</code> on the screen. Is there any way to print <code>NOTHING</code>?</p>
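For what it's worth, here is a stdlib-only sketch of two ways to emit nothing at all (`choose` is taken from the question):

```python
choose = 5

# Option 1: print an empty string instead of None, and suppress the
# trailing newline so truly nothing is written
print(choose if choose == 4 else "", end="")

# Option 2: only call print when there is actually something to show
if choose == 4:
    print(choose)
```

Option 2 is usually the clearer choice when "print nothing" really means "do nothing".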
| <python> | 2023-10-27 14:34:44 | 2 | 3,832 | Bogaso |
77,374,878 | 15,341,457 | Python Scrapy - brotli.brotli.Error: Decompression error: incomplete compressed stream | <p>I'm scraping restaurant reviews from Yelp and I'm accessing the restaurant's APIs to do so. I'm currently scraping 4 star reviews, for example this <a href="https://www.yelp.it/biz/tonnarello-roma?osq=tonnarello&rr=4" rel="nofollow noreferrer">restaurant page</a> has this corresponding <a href="https://www.yelp.it/biz/78t73jTxdUw5C-v44lj4Iw/review_feed?rr=4" rel="nofollow noreferrer">API</a>.</p>
<p>This is the block of code that sends an http request to the API when the crawler is currently on the restaurant page</p>
<pre><code>bizId = response.xpath("//meta[@name='yelp-biz-id']/@content").extract_first()
api_url = 'https://www.yelp.it/biz/' + bizId + '/review_feed?rr=' + str(n_star_filter)
yield response.follow(url=api_url, callback = self.parse_yelp_restaurant_api)
</code></pre>
<p>Sometimes the API are accessed correctly and I'm able to scrape them. However, most of the time, I get this error:</p>
<pre><code>2023-10-27 15:57:39 [scrapy.core.scraper] ERROR: Error downloading <GET https://www.yelp.it/biz/78t73jTxdUw5C-v44lj4Iw/review_feed?rr=4>
Traceback (most recent call last):
File "/Users/mauri/anaconda3/lib/python3.11/site-packages/twisted/internet/defer.py", line 1697, in _inlineCallbacks
result = context.run(gen.send, result)
File "/Users/mauri/anaconda3/lib/python3.11/site-packages/scrapy/core/downloader/middleware.py", line 64, in process_response
method(request=request, response=response, spider=spider)
File "/Users/mauri/anaconda3/lib/python3.11/site-packages/scrapy/downloadermiddlewares/httpcompression.py", line 63, in process_response
decoded_body = self._decode(response.body, encoding.lower())
File "/Users/mauri/anaconda3/lib/python3.11/site-packages/scrapy/downloadermiddlewares/httpcompression.py", line 102, in _decode
body = brotli.decompress(body)
File "/Users/mauri/anaconda3/lib/python3.11/site-packages/brotli/brotli.py", line 90, in decompress
d.finish()
File "/Users/mauri/anaconda3/lib/python3.11/site-packages/brotli/brotli.py", line 464, in finish
raise Error("Decompression error: incomplete compressed stream.")
brotli.brotli.Error: Decompression error: incomplete compressed stream.
</code></pre>
<p>I can't figure out what this means, and it's really odd that some API responses download fine while others produce this error, even though they appear no different from each other.</p>
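One workaround worth trying (a sketch of a Scrapy settings fragment, not a confirmed fix for this site specifically): advertise only encodings that decode reliably, so the server never sends brotli and the apparently truncated `br` streams never reach the decompression middleware.

```python
# settings.py -- ask the server not to use brotli at all; whether the
# server honours this Accept-Encoding value is an assumption
DEFAULT_REQUEST_HEADERS = {
    "Accept-Encoding": "gzip, deflate",
}
```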
| <python><web-scraping><scrapy><compression><brotli> | 2023-10-27 14:15:16 | 1 | 332 | Rodolfo |
77,374,683 | 4,726,173 | quiver plot does not adjust to arrow extent | <p>As visible for example here: <a href="https://stackoverflow.com/questions/53484900/how-to-turn-off-matplotlib-quiver-scaling">How to turn off matplotlib quiver scaling?</a></p>
<p>when using matplotlib.pyplots's quiver to draw arrows, the arrows often point out of the image. It seems like the plot adjusts only to the starting point (X, Y arguments to quiver()) and does not take into account the extent of the actual arrows. Is there an easy way to rescale the axes to include the entire arrow?</p>
<p>I'm aware of plt.xlim(..., ...) , plt.ylim(..., ...), or Axes.set_xlim / Axes.set_ylim; I thought maybe there is a global command (like the tight layout command) to include all points into the visible part of the plot (all plots at once, potentially)?</p>
<hr />
<p>Update, since someone was not happy about the question: adding what @Mathieu suggested in the comments (constrained layout) to the example I linked does not appear to work:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
pts = np.array([[1, 2], [3, 4]])
end_pts = np.array([[2, 4], [6, 8]])
diff = end_pts - pts
plt.quiver(pts[:,0], pts[:,1], diff[:,0], diff[:,1],
angles='xy', scale_units='xy', scale=1.)
</code></pre>
<p>We get an image with one arrow pointing out of the image:
<a href="https://i.sstatic.net/qAdea.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qAdea.png" alt="enter image description here" /></a></p>
<p>Enabling constrained layout:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
plt.rcParams['figure.constrained_layout.use'] = True
pts = np.array([[1, 2], [3, 4]])
end_pts = np.array([[2, 4], [6, 8]])
diff = end_pts - pts
plt.quiver(pts[:,0], pts[:,1], diff[:,0], diff[:,1],
angles='xy', scale_units='xy', scale=1.)
</code></pre>
<p>This results in smaller margins, but has no effect on axis ranges:
<a href="https://i.sstatic.net/GnXnF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GnXnF.png" alt="enter image description here" /></a></p>
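In the meantime, one way to work around it is to autoscale manually from both the arrow tails and heads (the question's observation is that matplotlib only considers the tails). A numpy sketch; the 5% padding is an arbitrary choice:

```python
import numpy as np

def arrow_limits(pts, end_pts, pad=0.05):
    """Axis limits covering both arrow tails and heads, with a small margin."""
    allp = np.vstack([pts, end_pts])          # every coordinate an arrow touches
    lo, hi = allp.min(axis=0), allp.max(axis=0)
    margin = (hi - lo) * pad
    return ((lo[0] - margin[0], hi[0] + margin[0]),
            (lo[1] - margin[1], hi[1] + margin[1]))

pts = np.array([[1, 2], [3, 4]])
end_pts = np.array([[2, 4], [6, 8]])
xlim, ylim = arrow_limits(pts, end_pts)
# then: ax = plt.gca(); ax.set_xlim(*xlim); ax.set_ylim(*ylim)
```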
| <python><matplotlib><arrows> | 2023-10-27 13:48:05 | 1 | 627 | dasWesen |
77,374,491 | 2,130,515 | Ray-tune logs results in two folders | <p>I am using ray-tune to fine-tune parameters.</p>
<p>Based on the docs, ray-tune logs the results of each trial to a sub-folder under a specified local dir, which defaults to <code>~/ray_results</code>.</p>
<p>To change <em>ray_results</em> to a <em>custom folder</em>, I used the following:</p>
<pre><code>timestamp = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
experiment_name = f"doc2vec-experiemt_{timestamp}"
tuner = tune.Tuner(
tune.with_resources(train_model, {"cpu":4}),
param_space=search_space,
run_config=air.RunConfig(storage_path="my_custom_folder_results/",
name=experiment_name),
tune_config=tune.TuneConfig(num_samples=20),
)
</code></pre>
<p>I found that ray-tune logs results in both folders: the default <code>~/ray_results</code> and <code>my_custom_folder_results/</code>.</p>
| <python><logging><hyperparameters><ray> | 2023-10-27 13:20:43 | 0 | 1,790 | LearnToGrow |
77,374,339 | 4,399,016 | Using PyQuery and Gadget selector to extract URLs from a Website | <p>I have code that only works partially:</p>
<pre><code>from pyquery import PyQuery as pq
import requests
url = "https://SAMPLE_URL.com"  # placeholder URL
content = requests.get(url).content
doc = pq(content)
Latest_Report = doc(".head+ .post .heading")
Latest_Report.text()
</code></pre>
<p>I am able to get the text element with this, but I want to get the URL available here.</p>
<pre><code>print(Latest_Report)
</code></pre>
<p>What is the best way to get the href:</p>
<pre><code><a class="heading" href="URL_WANTED">ABC’s September Construction Backlog Indicator Dips, Yet Contractors Remain Confident</a>
</code></pre>
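If staying within pyquery, its element API (e.g. <code>Latest_Report.attr("href")</code> for a single element, or iterating <code>Latest_Report.items()</code>) should expose the attribute. As a dependency-free illustration of pulling an href out of the snippet above, a stdlib sketch:

```python
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect href attributes of <a> tags carrying a given class."""
    def __init__(self, cls):
        super().__init__()
        self.cls, self.hrefs = cls, []

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if tag == "a" and self.cls in d.get("class", "").split():
            self.hrefs.append(d.get("href"))

snippet = '<a class="heading" href="URL_WANTED">ABC backlog dips</a>'
p = HrefCollector("heading")
p.feed(snippet)
print(p.hrefs)  # ['URL_WANTED']
```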
| <python><python-requests><pyquery> | 2023-10-27 12:57:48 | 1 | 680 | prashanth manohar |
77,374,321 | 468,455 | Python authorizing into Google - getting referenced before assignment error | <p>This code has been working for a while and suddenly this morning it started throwing a 'reference before assignment' error but I am not sure why that is being thrown. The error is happening at this line <code>creds.refresh(Request())</code> on the <code>creds</code> var:</p>
<pre><code>def getAuth(self):
try:
creds = None
errMsg = "There was an error authenticating to Google. The token is not valid."
# The file token.json stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first
# time.
if os.path.exists('token.json'):
creds = Credentials.from_authorized_user_file('token.json', SCOPES)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
print(f"{cm.WARNING}No creds or creds are not valid...{cm.ENDC}")
if creds and creds.expired and creds.refresh_token:
print(f"{cm.WARNING}Refreshing token...{cm.ENDC}")
creds.refresh(Request())
else:
credsPath = os.path.expanduser("~/Desktop/xxxxxx/Development/Python/xxxxxxxx/modules/credentials.json")
flow = InstalledAppFlow.from_client_secrets_file(credsPath, SCOPES)
creds = flow.run_local_server(port=0)
# Save the credentials for the next run
with open('token.json', 'w') as token:
token.write(creds.to_json())
#validate auth
if creds.valid == False:
return {"success":False, "error-message":errMsg}
else:
return {"success":True, "credentials": creds}
except Exception as e:
return {"success":False, "error-message":f"{cm.WARNING}There was an error getting Google authorization\n{e}{cm.ENDC}"}
</code></pre>
<p>The error messages is:</p>
<blockquote>
<p>local variable 'creds' referenced before assignment.</p>
</blockquote>
<p>To add to this, I can print out <code>creds</code> before the <code>creds.refresh(Request())</code> call:</p>
<blockquote>
<p><google.oauth2.credentials.Credentials object at 0x110097c70></p>
</blockquote>
<p>What can I try next?</p>
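For reference, this message is usually Python's function-scoping rule at work: a name assigned anywhere inside a function is treated as local to the whole function body, so reading it before any assignment has executed raises, even when a same-named global exists. A minimal stdlib reproduction, unrelated to the Google client itself:

```python
creds = "global-creds"

def get_auth():
    # UnboundLocalError here: `creds` is local to get_auth because of the
    # assignment below, and it has not been assigned yet at this point
    creds.upper()
    creds = "local"

try:
    get_auth()
except UnboundLocalError as e:
    print(type(e).__name__)  # UnboundLocalError
```

Since the function above does set <code>creds = None</code> first, it may be worth checking whether a recent edit introduced a code path (or an indentation change) that skips that initial assignment.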
| <python><google-drive-api> | 2023-10-27 12:53:47 | 1 | 6,396 | PruitIgoe |
77,374,277 | 16,027,663 | Vectorising Iterrows/Itertuples on Time Series Dataframe | <p>I have a series of dataframes containing OHLC data from Yahoo Finance, sorted by ascending dates. I want to test some things for my economics class, so I built a simple script using <code>iterrows</code>. It was painfully slow, but it is what I knew.</p>
<p>To speed it up I tried using <code>itertuples</code>. This did improve things but the script is still too slow. Here is the applicable extract:</p>
<pre><code>for row in ticker_df.itertuples(index=True):
# Open rules
if not in_position:
rule_one = row.PCClose > row.PCSMA20
if rule_one:
rule_two = row.PCSMA20 > row.PCSMA50
if rule_two:
rule_three = row.PCSMA100 > row.PCSMA300
if rule_three:
rule_five = row.PCClose > (row.PCWeekLow52 * 1.9)
if rule_five:
rule_six = row.PCClose > (row.PCWeekHigh52 * 0.5)
if rule_six:
if row.WeekLow3 >= (row.WeekHigh3 * 0.5):
# Check if broken out
if row.PCClose > row.WeekHigh3:
rule_eight = row.PctCh > 1.2
if rule_eight:
in_position = True
if in_position:
# Close rules
if row.PCSMA20 < row.PCSMA50:
in_position = False
</code></pre>
<p>I am now trying to vectorise my approach and have incorporated the above open rules into a numpy select:</p>
<pre><code>open_rules = [(ticker_df["PCClose"] > ticker_df["PCSMA20"]) & (ticker_df["PCSMA20"] > ticker_df["PCSMA50"]) &
(ticker_df["PCSMA100"] > ticker_df["PCSMA300"]) & (ticker_df["PCClose"] > (ticker_df["PCWeekLow52"] * 1.9)) &
              (ticker_df["PCClose"] > (ticker_df["PCWeekHigh52"] * 0.5)) & (ticker_df['WeekLow3'] >= (ticker_df['WeekHigh3'] * 0.5)) &  # fixed typo: 'WeeHigh3' -> 'WeekHigh3'
              (ticker_df['PctCh'] > 1.2)]  # column is named 'PctCh' in the dataframe
open_rules_met = ["Open"]
ticker_df["Trades"] = np.select(open_rules, open_rules_met, default='NA')
</code></pre>
<p>This works faster to define the open positions but the problem I have is how to then identify the closing positions.</p>
<p>As the iterrows/itertuples approach loops through the rows sequentially, I can simply set the <code>in_position</code> flag to true and then continue iterating down the rows to check if the close rules are met. But I cannot work out how to achieve this with a vectorised approach. Obviously the close can only happen on a date that is after the open date.</p>
<p>Is there a way of doing this?</p>
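One pattern that avoids a row loop for this open/close state machine (a sketch, assuming the open and close conditions are available as boolean Series; close is given priority when both fire on the same row): being in a position depends only on the most recent signal, so mark open rows 1, close rows 0, and forward-fill.

```python
import numpy as np
import pandas as pd

# Toy signals standing in for the real rule columns
open_sig = pd.Series([False, False, True, False, False, False, True, False])
close_sig = pd.Series([False, False, False, False, False, True, False, True])

# 1 on open rows, 0 on close rows, NaN elsewhere; close is listed first
# so it wins when both conditions hold on the same row
state = np.select([close_sig, open_sig], [0.0, 1.0], default=np.nan)

# Forward-fill carries the last signal down the rows, exactly like the
# in_position flag in the sequential loop
in_position = pd.Series(state).ffill().fillna(0).astype(bool)
print(in_position.tolist())
```

An open signal while already in position, or a close signal while flat, is simply absorbed by the forward-fill, which matches the flag-based loop's behaviour.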
<p>Edit:</p>
<p>The code is simply comparing the values in one column with values in another column or fixed values. Here is an extract of a ticker_df:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Open</th>
<th>High</th>
<th>Low</th>
<th>PCClose</th>
<th>Adj Close</th>
<th>Volume</th>
<th>Ticker</th>
<th>NextDayOpen</th>
<th>PCSMA20</th>
<th>PCSMA50</th>
<th>PCSMA100</th>
<th>PCSMA300</th>
<th>VMA</th>
<th>PCWeekLow52</th>
<th>PCWeekHigh52</th>
<th>WeekLow3</th>
<th>WeekHigh3</th>
<th>PctCh</th>
</tr>
</thead>
<tbody>
<tr>
<td>15/09/2023</td>
<td>143.5850067</td>
<td>144.9499969</td>
<td>142.1000061</td>
<td>142.75</td>
<td>142.75</td>
<td>74786400</td>
<td>ABNB</td>
<td>141.9299927</td>
<td>135.3800007</td>
<td>138.8851004</td>
<td>125.7562335</td>
<td>118.8517251</td>
<td>7123686</td>
<td>82.48999786</td>
<td>153.3300018</td>
<td>124.7200012</td>
<td>147.5</td>
<td>0.215381961</td>
</tr>
<tr>
<td>18/09/2023</td>
<td>141.9299927</td>
<td>144.3999939</td>
<td>141.1900024</td>
<td>142.5500031</td>
<td>142.5500031</td>
<td>7351400</td>
<td>ABNB</td>
<td>141.8600006</td>
<td>136.254501</td>
<td>139.1297003</td>
<td>125.9807669</td>
<td>119.0875751</td>
<td>7207752</td>
<td>82.48999786</td>
<td>153.3300018</td>
<td>125.7900009</td>
<td>147.5</td>
<td>0.15605763</td>
</tr>
<tr>
<td>19/09/2023</td>
<td>141.8600006</td>
<td>142.5700073</td>
<td>139.7700043</td>
<td>141.8500061</td>
<td>141.8500061</td>
<td>6674100</td>
<td>ABNB</td>
<td>142.7599945</td>
<td>137.1470013</td>
<td>139.3325003</td>
<td>126.1503002</td>
<td>119.2861252</td>
<td>7260194</td>
<td>82.48999786</td>
<td>153.3300018</td>
<td>126.1549988</td>
<td>147.5</td>
<td>0.142319112</td>
</tr>
<tr>
<td>20/09/2023</td>
<td>142.7599945</td>
<td>143.2700043</td>
<td>137.9499969</td>
<td>138.0099945</td>
<td>138.0099945</td>
<td>5189000</td>
<td>ABNB</td>
<td>135.5549927</td>
<td>137.6935009</td>
<td>139.3419003</td>
<td>126.2645669</td>
<td>119.4698252</td>
<td>7163062</td>
<td>82.48999786</td>
<td>153.3300018</td>
<td>130.6100006</td>
<td>147.5</td>
<td>0.167105521</td>
</tr>
<tr>
<td>21/09/2023</td>
<td>135.5549927</td>
<td>135.9799957</td>
<td>132.3899994</td>
<td>132.75</td>
<td>132.75</td>
<td>8514300</td>
<td>ABNB</td>
<td>133.7100067</td>
<td>137.9245007</td>
<td>139.2565002</td>
<td>126.2361669</td>
<td>119.6285752</td>
<td>7221676</td>
<td>82.48999786</td>
<td>153.3300018</td>
<td>130.6100006</td>
<td>147.5</td>
<td>0.088406871</td>
</tr>
<tr>
<td>22/09/2023</td>
<td>133.7100067</td>
<td>134.1840057</td>
<td>131.1199951</td>
<td>132.1999969</td>
<td>132.1999969</td>
<td>4278500</td>
<td>ABNB</td>
<td>130.8000031</td>
<td>138.2985004</td>
<td>139.0987003</td>
<td>126.1880335</td>
<td>119.7970251</td>
<td>7196600</td>
<td>82.48999786</td>
<td>153.3300018</td>
<td>131.5500031</td>
<td>147.5</td>
<td>0.061660261</td>
</tr>
<tr>
<td>25/09/2023</td>
<td>130.8000031</td>
<td>134.25</td>
<td>130.8000031</td>
<td>134.1399994</td>
<td>134.1399994</td>
<td>4154500</td>
<td>ABNB</td>
<td>132.7799988</td>
<td>138.7160004</td>
<td>138.9147003</td>
<td>126.2049668</td>
<td>120.0021251</td>
<td>7149758</td>
<td>82.48999786</td>
<td>153.3300018</td>
<td>132.1999969</td>
<td>147.5</td>
<td>0.113263132</td>
</tr>
<tr>
<td>26/09/2023</td>
<td>132.7799988</td>
<td>133.9400024</td>
<td>131.1699982</td>
<td>132.2799988</td>
<td>132.2799988</td>
<td>4194000</td>
<td>ABNB</td>
<td>133.8000031</td>
<td>139.0222504</td>
<td>138.6531003</td>
<td>126.2283002</td>
<td>120.2060251</td>
<td>7118522</td>
<td>82.48999786</td>
<td>153.3300018</td>
<td>132.1999969</td>
<td>147.5</td>
<td>0.129314706</td>
</tr>
<tr>
<td>27/09/2023</td>
<td>133.8000031</td>
<td>134.8999939</td>
<td>131.2200012</td>
<td>134.0299988</td>
<td>134.0299988</td>
<td>3771300</td>
<td>ABNB</td>
<td>133.6499939</td>
<td>139.1112503</td>
<td>138.4031003</td>
<td>126.2737668</td>
<td>120.4020251</td>
<td>7111450</td>
<td>82.48999786</td>
<td>153.3300018</td>
<td>132.1999969</td>
<td>147.5</td>
<td>0.13340759</td>
</tr>
<tr>
<td>28/09/2023</td>
<td>133.6499939</td>
<td>138.2299957</td>
<td>132.8800049</td>
<td>136.4700012</td>
<td>136.4700012</td>
<td>4058900</td>
<td>ABNB</td>
<td>138.0500031</td>
<td>139.4042503</td>
<td>138.2218002</td>
<td>126.3467669</td>
<td>120.6108751</td>
<td>7093256</td>
<td>82.48999786</td>
<td>153.3300018</td>
<td>132.1999969</td>
<td>147.5</td>
<td>0.175598599</td>
</tr>
<tr>
<td>29/09/2023</td>
<td>138.0500031</td>
<td>141.0749969</td>
<td>136.3600006</td>
<td>137.2100067</td>
<td>137.2100067</td>
<td>4781100</td>
<td>ABNB</td>
<td>136.5500031</td>
<td>139.6872505</td>
<td>138.0604004</td>
<td>126.4429669</td>
<td>120.8190252</td>
<td>7103952</td>
<td>82.48999786</td>
<td>153.3300018</td>
<td>132.1999969</td>
<td>147.5</td>
<td>0.315753949</td>
</tr>
<tr>
<td>02/10/2023</td>
<td>136.5500031</td>
<td>138</td>
<td>135.3600006</td>
<td>136.5599976</td>
<td>136.5599976</td>
<td>3488300</td>
<td>ABNB</td>
<td></td>
<td>139.8807503</td>
<td>137.8162003</td>
<td>126.5298336</td>
<td>121.0351252</td>
<td>6937828</td>
<td>82.48999786</td>
<td>153.3300018</td>
<td>132.1999969</td>
<td>147.5</td>
<td>0.348501294</td>
</tr>
</tbody>
</table>
</div> | <python><pandas><numpy> | 2023-10-27 12:47:24 | 1 | 541 | Andy |
77,374,217 | 5,437,090 | numpy unique with customized sorting of unique elements of an array | <p>Given:</p>
<pre><code>R=["ip1", "ip7", "ip12", "ip5", "ip2", "ip22", "ip7", "ip1", "ip17", "ip22"]
</code></pre>
<p>I would like to get unique values of my list <code>R</code> with their corresponding indices.</p>
<p>Right now, I have <code>name,idx=np.unique(R,return_inverse=True)</code> which returns:</p>
<pre><code>array(['ip1', 'ip12', 'ip17', 'ip2', 'ip22', 'ip5', 'ip7'], dtype='<U4') # name
[0 6 1 5 3 4 6 0 2 4] # idx
</code></pre>
<p>But I would like to use customized sorting with results as follows:</p>
<pre><code>['ip1', 'ip2', 'ip5', 'ip7', 'ip12', 'ip17', 'ip22']
[0 3 4 2 1 6 3 0 5 6]
</code></pre>
<p>In <code>list</code>, I can use <code>Rs=sorted(R, key=lambda x: int(x[2:]))</code> with customized <code>key</code> but I can't get unique values and corresponding indices.</p>
<p>Is there any way to supply a sorting key to <code>np.unique</code>, or is there already a better approach for handling this?</p>
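<code>np.unique</code> has no key argument, but its output can be reordered afterwards and the inverse indices remapped. A sketch, reusing the <code>int(x[2:])</code> key from the question:

```python
import numpy as np

R = ["ip1", "ip7", "ip12", "ip5", "ip2", "ip22", "ip7", "ip1", "ip17", "ip22"]

name, idx = np.unique(R, return_inverse=True)

# Reorder the unique values with the custom key...
order = np.argsort([int(n[2:]) for n in name])
name_sorted = name[order]

# ...and remap the inverse indices: rank[k] = new position of old unique k
rank = np.empty_like(order)
rank[order] = np.arange(len(order))
idx_sorted = rank[idx]

print(name_sorted.tolist())  # ['ip1', 'ip2', 'ip5', 'ip7', 'ip12', 'ip17', 'ip22']
print(idx_sorted.tolist())   # [0, 3, 4, 2, 1, 6, 3, 0, 5, 6]
```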
| <python><numpy><sorting> | 2023-10-27 12:38:37 | 2 | 1,621 | farid |
77,373,979 | 17,973,966 | Install pandera[pyspark]-compatible version of pydantic on Databricks cluster | <p>I have an issue with installing pandera[pyspark] and pydantic on my Databricks cluster.</p>
<h2>Background</h2>
<p>I have a data pipeline in Databricks notebooks. It is completely written in PySpark, so for my data validation I want to use <a href="https://pandera.readthedocs.io/en/stable/pyspark_sql.html" rel="nofollow noreferrer">the pyspark version of pandera</a>.</p>
<h2>Issue</h2>
<p>When I include the latest PyPi versions of pandera[pyspark] (0.17.2) and pydantic (2.4.2) in the configuration of my cluster, they install. However, when I want to import something from pandera.pyspark (for example, <code>from pandera.pyspark import DataFrameSchema</code>) I get the following error: <code>AttributeError: Module 'pydantic' has no attribute '__version__'</code>. I have looked at <a href="https://stackoverflow.com/questions/69826648/attributeerror-module-pydantic-has-no-attribute-version-when-running-py">this SO question</a>, but according to <a href="https://github.com/pydantic/pydantic/issues/2572" rel="nofollow noreferrer">the github issue linked in the answer</a> it seems to be <a href="https://github.com/pydantic/pydantic/pull/2573" rel="nofollow noreferrer">fixed</a> in the newer versions of pydantic.</p>
<p>Strangely, if I don't install the libraries on the cluster and instead run the following lines in my data validation notebook:</p>
<pre><code>!pip install --upgrade pydantic
!pip install pandera[pyspark]
</code></pre>
<p>Then I can import <code>DataFrameSchema</code> without any issues and run the validation script. Still, these lines result in the latest PyPi versions for pandera and pydantic as well.</p>
<h2>Question</h2>
<p>How can it be that the same combination of pandera and pydantic versions give issues in the cluster configuration, but not when installed in a notebook cell? Might there be an alternative way of installing these packages correctly to my cluster?</p>
| <python><pyspark><databricks><pydantic><pandera> | 2023-10-27 12:00:00 | 0 | 472 | Neele22 |
77,373,902 | 12,297,666 | Inconsistent number of samples in Sklearn with XGBoost | <p>I am trying to train an <code>XGBRegressor</code> using this code:</p>
<pre><code>import xgboost as xgb
from sklearn.metrics import mean_squared_error
def xgboost():
model = xgb.XGBRegressor(n_estimators=200,
max_depth=4,
subsample=1,
min_child_weight=1,
objective='reg:squarederror',
tree_method='hist',
eval_metric=mean_squared_error, # mean_squared_error
early_stopping_rounds=50)
return model
num_amostras = x_train.shape[0]
val_size = 0.2
num_amostras_train = int(num_amostras * (1-val_size))
x_train_xgb = x_train[:num_amostras_train]
y_train_xgb = y_train[:num_amostras_train]
x_val_xgb = x_train[num_amostras_train:]
y_val_xgb = y_train[num_amostras_train:]
model_xgb = xgboost()
model_xgb.fit(x_train_xgb, y_train_xgb, eval_set=[(x_train_xgb, y_train_xgb), (x_val_xgb, y_val_xgb)])
resultados = model_xgb.evals_result()
</code></pre>
<p><code>x_train</code> has shape <code>(1458, 55)</code>, <code>x_train_xgb</code> has shape <code>(1166, 55)</code>, <code>y_train_xgb</code> has shape <code>(1166, 24)</code>, <code>x_val_xgb</code> has shape <code>(292, 55)</code> and <code>y_val_xgb</code> has shape <code>(292, 24)</code>.</p>
<p>But i am getting this error:</p>
<pre><code>Traceback (most recent call last):
File ~\PeDFurnas\lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
exec(code, globals, locals)
File c:\users\ldsp_\sipredvs\scripts\treinamento_demanda.py:201
model_xgb.fit(x_train_xgb, y_train_xgb, eval_set=[(x_train_xgb, y_train_xgb),(x_val_xgb, y_val_xgb)])
File ~\PeDFurnas\lib\site-packages\xgboost\core.py:729 in inner_f
return func(**kwargs)
File ~\PeDFurnas\lib\site-packages\xgboost\sklearn.py:1086 in fit
self._Booster = train(
File ~\PeDFurnas\lib\site-packages\xgboost\core.py:729 in inner_f
return func(**kwargs)
File ~\PeDFurnas\lib\site-packages\xgboost\training.py:182 in train
if cb_container.after_iteration(bst, i, dtrain, evals):
File ~\PeDFurnas\lib\site-packages\xgboost\callback.py:238 in after_iteration
score: str = model.eval_set(evals, epoch, self.metric, self._output_margin)
File ~\PeDFurnas\lib\site-packages\xgboost\core.py:2138 in eval_set
feval_ret = feval(
File ~\PeDFurnas\lib\site-packages\xgboost\sklearn.py:139 in inner
return func.__name__, func(y_true, y_score)
File ~\PeDFurnas\lib\site-packages\sklearn\metrics\_regression.py:442 in mean_squared_error
y_type, y_true, y_pred, multioutput = _check_reg_targets(
File ~\PeDFurnas\lib\site-packages\sklearn\metrics\_regression.py:100 in _check_reg_targets
check_consistent_length(y_true, y_pred)
File ~\PeDFurnas\lib\site-packages\sklearn\utils\validation.py:397 in check_consistent_length
raise ValueError(
ValueError: Found input variables with inconsistent numbers of samples: [27984, 1166]
</code></pre>
<p>So <code>27984 = 1166*24</code> (the product of the dimensions of <code>y_train_xgb</code>'s shape).</p>
<p><code>1166</code> is the number of samples in both <code>x_train_xgb</code> and <code>y_train_xgb</code>.</p>
<p>If I don't use a sklearn metric (<code>mean_squared_error</code> in this case) and instead use the default metric of <code>XGBRegressor</code> (<code>'rmse'</code>), the code runs just fine.</p>
<p>So, what is the cause of this problem, and how can I fix it so that <code>mean_squared_error</code> works as <code>eval_metric</code>?</p>
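The shapes in the error (27984 = 1166 × 24 vs 1166) suggest the callable receives the targets flattened while the predictions keep another layout. One thing worth trying, as an assumption about xgboost's internals rather than a verified fix, is a shape-agnostic wrapper that flattens both sides before comparing:

```python
import numpy as np

def mse_flat(y_true, y_pred):
    # Flatten both arrays so a (n, 24) target and its flattened counterpart
    # compare element-wise regardless of the layout xgboost hands over
    y_true = np.asarray(y_true).reshape(-1)
    y_pred = np.asarray(y_pred).reshape(-1)
    return float(np.mean((y_true - y_pred) ** 2))

# would be passed as eval_metric=mse_flat in the XGBRegressor constructor
print(mse_flat([[1.0, 2.0], [3.0, 4.0]], [1.0, 2.0, 3.0, 5.0]))  # 0.25
```

Note this assumes row-major flattening on both sides; if xgboost flattens in a different order, the pairing would be wrong even though the lengths match.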
| <python><scikit-learn><xgboost> | 2023-10-27 11:46:06 | 1 | 679 | Murilo |
77,373,823 | 4,424,484 | Training neural networks on very large sparse matrices in pytorch | <p>I have a dataset with about 74 million observations. Each of these observations is represented by ~1,000 features and labelled with up to ~3,200 binary classes. Most individual observations are labelled with no more than ~10 classes, so the labels are very sparse. Currently, the label and feature matrices are stored in MatrixMarket format.</p>
<p>I'd like to train a neural network on the 74m * 1000 input matrix to predict the 74m * 3200 labels. However, this obviously won't fit into memory. This is my first time working with a dataset this big. So far, the options I can see include:</p>
<ul>
<li>write each set of features and labels to a text file individually, and sample items in each minibatch randomly from a list of files (seems inefficient)</li>
<li>use pandas to iterate through a single CSV file <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#iterating-through-files-chunk-by-chunk" rel="nofollow noreferrer">chunk-by-chunk</a></li>
<li>I have also seen zarr, WebDataset and TensorStore discussed but I can't tell if these are a good fit for this problem</li>
</ul>
<p>Any advice on the best way to approach this problem for someone that doesn't have a lot of experience working with datasets that don't fit in memory?</p>
| <python><pytorch><sparse-matrix><large-data> | 2023-10-27 11:33:17 | 1 | 321 | dentist_inedible |
77,373,685 | 2,922,325 | Facing IndexError: tuple index out of range in vanilla example for Petals by bigscience-workshop | <p>To get bootstrapped, I tried to use the example from the README:</p>
<pre><code>from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
import torch
# Choose any model available at https://health.petals.dev
model_name = "petals-team/StableBeluga2" # This one is fine-tuned Llama 2 (70B)
# Connect to a distributed network hosting model layers
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)
# Run the model as if it were on your computer
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(input_ids=inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0])) # A cat sat on a mat...
</code></pre>
<p><code>torch_dtype=torch.float32</code> was added because of a CPU-support warning; apart from that, the code is the same as the original example, yet I am facing the error below and am unable to complete the inference.</p>
<pre><code>Oct 25 10:24:45.837 [INFO] Using DHT prefix: StableBeluga2-hf
Oct 25 10:25:02.883 [INFO] Route found: 0:40 via .VNxGn => 40:80 via _pHfajj
Traceback (most recent call last):
  File "/home/jazz/codews/scripts/petals_test.py", line 14, in <module>
    outputs = model.generate(input_ids=inputs, max_new_tokens=256)
  File "/home/jazz/miniconda3/envs/petals/lib/python3.11/site-packages/petals/client/remote_generation.py", line 116, in generate
    result = super().generate(inputs, *args, **kwargs)
  File "/home/jazz/miniconda3/envs/petals/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/jazz/miniconda3/envs/petals/lib/python3.11/site-packages/transformers/generation/utils.py", line 1647, in generate
    return self.greedy_search(
  File "/home/jazz/miniconda3/envs/petals/lib/python3.11/site-packages/transformers/generation/utils.py", line 2492, in greedy_search
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
  File "/home/jazz/miniconda3/envs/petals/lib/python3.11/site-packages/petals/client/remote_generation.py", line 37, in prepare_inputs_for_generation
    return super().prepare_inputs_for_generation(input_ids, **kwargs)
  File "/home/jazz/miniconda3/envs/petals/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 1084, in prepare_inputs_for_generation
    past_length = past_key_values[0][0].shape[2]
IndexError: tuple index out of range
</code></pre>
<p>OS : Ubuntu 22.04<br />
CPU : i7-7700K<br />
GPU: Nvidia 1070</p>
<p>Please guide me if I am missing something here.</p>
<p>I was expecting this to perform the inference, but instead it raises the tuple error.</p>
<p>I verified that the <code>generate</code> function from the Hugging Face library needs <code>input_ids</code>, so the input seems straightforward, yet it fails as shown in the traceback above.</p>
| <python><huggingface-transformers> | 2023-10-27 11:11:04 | 0 | 509 | jazz |
77,373,538 | 14,534,480 | List with the names of points in a given radius for all rows of the dataframe | <p>I have a dataframe like:</p>
<pre><code>projectName latitude longitude
a 56.864229 60.609576
b 55.810413 37.701168
c 55.924912 37.966033
d 56.804987 60.590667
e 55.806000 37.569863
</code></pre>
<p>I want to get, for each point, a list of the points that lie within a given radius.
For example, for 30 km it should look like this:</p>
<pre><code>projectName latitude longitude 30km
a 56.864229 60.609576 [d]
b 55.810413 37.701168 [c, e]
c 55.924912 37.966033 [b, e]
d 56.804987 60.590667 [a]
e 55.806000 37.569863 [b, c]
</code></pre>
<p>How can I get this most quickly?</p>
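For modest sizes, a brute-force pairwise haversine pass reproduces the expected table. A stdlib sketch (O(n²); for thousands of points, a spatial index such as scikit-learn's BallTree with the haversine metric would likely be the faster route):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (Earth radius ~6371 km)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

points = {
    "a": (56.864229, 60.609576),
    "b": (55.810413, 37.701168),
    "c": (55.924912, 37.966033),
    "d": (56.804987, 60.590667),
    "e": (55.806000, 37.569863),
}

radius_km = 30.0
neighbours = {
    name: [other for other, q in points.items()
           if other != name and haversine_km(*p, *q) <= radius_km]
    for name, p in points.items()
}
print(neighbours)  # {'a': ['d'], 'b': ['c', 'e'], 'c': ['b', 'e'], 'd': ['a'], 'e': ['b', 'c']}
```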
| <python><pandas><geo> | 2023-10-27 10:46:47 | 2 | 377 | Kirill Kondratenko |
77,373,522 | 5,786,649 | How does an instance of pytorch's `nn.Linear()` process a tuple of tensors? | <p>In the <a href="http://nlp.seas.harvard.edu/annotated-transformer/" rel="nofollow noreferrer">annotated transformer's implementation of multi-head attention</a>, three tensors (query, key, value) are all passed to a <code>nn.Linear(d_model, d_model)</code>:</p>
<pre><code># some class definition ...
self.linears = clones(nn.Linear(d_model, d_model), 4) # deep-copied list of nn.Linear-modules concatenated via nn.ModuleList
# more code ...
query, key, value = [
lin(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2)
for lin, x in zip(self.linears, (query, key, value))
]
</code></pre>
<p>My question: what happens at <code>lin(x)</code>, when an instance of <code>nn.Linear()</code> is called on the tuple <code>(query, key, value)</code>? Is the tuple somehow concatenated to a tensor? If so, how - on which dimension are the tensors concatenated?</p>
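As I read the snippet, nothing is concatenated: <code>zip</code> pairs each linear module with one tensor, so every <code>lin(x)</code> call sees a single tensor of shape <code>(nbatches, seq_len, d_model)</code>, and only the first three of the four cloned linears are consumed here. The pairing can be illustrated without torch:

```python
# Stand-ins for the four cloned nn.Linear modules: each just tags its input
linears = [lambda x, i=i: f"lin{i}({x})" for i in range(4)]
query, key, value = "Q", "K", "V"

# zip stops at the shorter argument, so only linears[0..2] are used,
# and each lin receives exactly one tensor, never the whole tuple
out = [lin(x) for lin, x in zip(linears, (query, key, value))]
print(out)  # ['lin0(Q)', 'lin1(K)', 'lin2(V)']
```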
| <python><machine-learning><pytorch><nlp><transformer-model> | 2023-10-27 10:43:27 | 1 | 543 | Lukas |
77,372,990 | 11,092,636 | Pandas SettingWithCopyWarning when chaining dataframe operations and displaying in Notebook | <p>MRE:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# Sample data for demonstration
data = {
"a": ["a", "b", "b", "a", "b", "b", "a", "a"],
"date": ["01/01/2021", "02/01/2021", "03/01/2021", "04/01/2021", "05/01/2021", "06/01/2021", "07/01/2021", "08/01/2021"]
}
df = pd.DataFrame(data)
# CELL 1
# Delete duplicates
print(df.duplicated().sum())
df = df.drop_duplicates()
df # Displaying the dataframe. In a Jupyter-like environment, this will print the dataframe.
# CELL 2
# Filter out certain values from "a"
print(df["a"].value_counts())
df = df[df["a"].isin(["b"])]
# CELL 3
# Convert "date" to datetime
df["date"] = pd.to_datetime(df["date"], dayfirst=True)
</code></pre>
<p>When executing the above cells sequentially in a Jupyter Notebook, I get a SettingWithCopyWarning in Cell 3. Strangely, if I remove the <code>df</code> (display command) at the end of Cell 1, I don't receive the warning.</p>
<p>I understand that the warning is there to alert users of potential pitfalls when modifying data on views vs. copies, but I'm not sure why displaying the dataframe would trigger this warning. Can someone <strong>explain the reason behind this behavior</strong> and suggest a clean way to address it?</p>
<p>I'm using Windows 11, <code>VSCode</code>, and <code>Python 3.11.1</code>.</p>
<p>For anyone struggling to reproduce the warning, you can head to <a href="https://colab.research.google.com/" rel="nofollow noreferrer">https://colab.research.google.com/</a> and copy my MRE; it will display the bug:
<a href="https://i.sstatic.net/aJSXM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aJSXM.png" alt="enter image description here" /></a></p>
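A common way to silence the warning at its source (a sketch; whether the notebook display is what creates the extra reference is a separate question) is to take an explicit copy when filtering, so the later assignment writes to an independent frame:

```python
import pandas as pd

df = pd.DataFrame({
    "a": ["a", "b", "b", "a"],
    "date": ["01/01/2021", "02/01/2021", "03/01/2021", "04/01/2021"],
})

# .copy() makes the filtered frame its own object, so assigning a column
# later cannot be interpreted as writing into a view of df
sub = df[df["a"].isin(["b"])].copy()
sub["date"] = pd.to_datetime(sub["date"], dayfirst=True)
print(sub["date"].dt.day.tolist())  # [2, 3]
```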
| <python><pandas><dataframe><jupyter-notebook> | 2023-10-27 09:24:26 | 0 | 720 | FluidMechanics Potential Flows |
77,372,866 | 2,254,210 | PyArrow Flight: Empty Strings in pa.Table Converting to NULLs When Reading/Writing to Exasol | <p>I'm reading and writing a table on Exasol using PyArrow Flight. I noticed an issue, where empty strings get converted to NULLs in the flight process. When I write a pa.Table containing a column of ""s to Exasol, the resulting table contains NULLs rather than "". I confirmed the same issue the other way around. When reading the data and explicitly replacing the NULLs to "" in the select query, the resulting pa.Table also contains NULLs, rather than "".</p>
<p>Could this issue be caused by an option set in PyArrow's FlightDescriptor or FlightClient?</p>
| <python><pyarrow><exasol><apache-arrow-flight> | 2023-10-27 09:07:28 | 1 | 1,444 | altabq |
77,372,760 | 6,224,975 | Google vertex endpoint is unavailable when deploying new model | <p>Every night we train a new model, and deploy it (in Flask) to an existing endpoint. The Flask code is (simplified) as:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, request
def load_models():
    """
    Load new model from google cloud storage
    """
    .
    .
    return model

app = Flask(__name__)
app.json.ensure_ascii = False

MODELS_ARE_LOADED = False
model = load_models()
MODELS_ARE_LOADED = True

@app.route('/predict', methods=["POST"])
def predict():
    data = request.get_data()
    predictions = model.predict(data)
    return {"predictions": predictions}

@app.route('/health', methods=["GET"])
def health():
    if not MODELS_ARE_LOADED:
        raise BadRequest("Models are not loaded yet")
    return "OK"
</code></pre>
<p>which works fine.
The issue is that it seems like the old model is removed before the new model is ready, leading to the endpoint being unavailable for a few minutes.</p>
<p>The check <code>MODELS_ARE_LOADED</code> seems to work locally i.e returns the error message when the models are not loaded, and as far as I understand the endpoint is not considered "ready" before it's healthy.</p>
<p>I would assume that when adding a new model, traffic wouldn't be routed to it before it is healthy, or am I wrong here?</p>
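One hedged sketch (not Vertex-specific advice, and route names mirror the question): return an explicit <code>503</code> from <code>/health</code> while models are loading, so a readiness probe gets an unambiguous "not ready" signal instead of an exception-driven 400:

```python
from flask import Flask

app = Flask(__name__)
MODELS_ARE_LOADED = False  # flipped to True once load_models() finishes

@app.route('/health', methods=["GET"])
def health():
    if not MODELS_ARE_LOADED:
        # 503 Service Unavailable is the conventional "not ready yet" status
        return "Models are not loaded yet", 503
    return "OK"

# Exercising the endpoint without a real server:
print(app.test_client().get('/health').status_code)
```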
| <python><google-cloud-platform><google-cloud-vertex-ai> | 2023-10-27 08:54:44 | 0 | 5,544 | CutePoison |
77,372,689 | 1,608,765 | secondary_xaxis with global variable in subplot | <p>I'm trying to make a figure showing some Doppler velocities of different spectra, but the script does not seem to like the fact that I am changing a global variable. Is there a way around this? Basically, it only plots the secondary axis for the latest value of the global variable, see the figure below where the top one does not even have a 0. I guess that it retroactively changes the previous plots somehow.</p>
<p>The reason there is a global is that I could not find a way to pass that value to the function without crashing the secondary_xaxis function.</p>
<p><a href="https://i.sstatic.net/F7PI4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F7PI4.png" alt="enter image description here" /></a></p>
<p>Minimal working example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

def doppler(wavelengths):
    c = 299792.458  # speed of light in km/s
    lambda_0 = linecore  # central wavelength in Angstrom
    doppler_shifts = c * ((wavelengths - lambda_0) / lambda_0)
    return doppler_shifts

def idoppler(doppler_shifts):
    c = 299792.458  # speed of light in km/s
    lambda_0 = linecore  # central wavelength in Angstrom
    wavelengths = lambda_0 * (1 + doppler_shifts / c) - linecore
    return wavelengths
global linecore
plt.subplot(221)
plt.plot(np.linspace(-1,1,10)+6000, np.random.random([10]))
linecore = 6000
ax1 = plt.gca() # Get the current axis (i.e., the one just created)
ax1a = ax1.secondary_xaxis('top', functions=(doppler, idoppler))
ax1a.set_xticks([-50,0,50])
plt.subplot(222)
plt.plot(np.linspace(-1,1,10)+6000, np.random.random([10]))
linecore = 6000
ax2 = plt.gca() # Get the current axis (i.e., the one just created)
ax2a = ax2.secondary_xaxis('top', functions=(doppler, idoppler))
ax2a.set_xticks([-50,0,50])
plt.subplot(223)
plt.plot(np.linspace(-1,1,10)+8000, np.random.random([10]))
linecore = 8000
ax3 = plt.gca() # Get the current axis (i.e., the one just created)
ax3a = ax3.secondary_xaxis('top', functions=(doppler, idoppler))
ax3a.set_xticks([-50,0,50])
plt.subplot(224)
plt.plot(np.linspace(-1,1,10)+8000, np.random.random([10]))
linecore = 8000
ax4 = plt.gca() # Get the current axis (i.e., the one just created)
ax4a = ax4.secondary_xaxis('top', functions=(doppler, idoppler))
ax4a.set_xticks([-50,0,50])
plt.tight_layout()
plt.show()
</code></pre>
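One way around the global (a sketch, plotting omitted): bind <code>linecore</code> into each subplot's converter pair with <code>functools.partial</code>, so every <code>secondary_xaxis</code> call gets its own frozen functions instead of reading a shared global at draw time:

```python
from functools import partial

C = 299792.458  # speed of light in km/s

def doppler(wavelengths, linecore):
    return C * (wavelengths - linecore) / linecore

def idoppler(doppler_shifts, linecore):
    # mirrors the question's idoppler: returns the offset from linecore
    return linecore * (1 + doppler_shifts / C) - linecore

# Each subplot would get its own pair, e.g.:
#   ax.secondary_xaxis('top', functions=(partial(doppler, linecore=6000),
#                                        partial(idoppler, linecore=6000)))
fwd6000 = partial(doppler, linecore=6000)
inv6000 = partial(idoppler, linecore=6000)
print(inv6000(fwd6000(6001.0)))
```

Since the original <code>idoppler</code> returns the wavelength offset from <code>linecore</code>, a wavelength 1 Å above the line core round-trips to approximately 1.0.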
| <python><matplotlib><global-variables><subplot><twiny> | 2023-10-27 08:42:58 | 1 | 2,723 | Coolcrab |
77,372,596 | 12,415,855 | Add items to product cart using selenium? | <p>I would like to add a product to the cart using the following code:</p>
<pre><code>import time
import os
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
if __name__ == '__main__':
    os.environ['WDM_LOG'] = '0'

    options = Options()
    options.add_argument("start-maximized")
    options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
    options.add_experimental_option("excludeSwitches", ["enable-automation"])
    options.add_experimental_option('excludeSwitches', ['enable-logging'])
    options.add_experimental_option('useAutomationExtension', False)
    options.add_argument('--disable-blink-features=AutomationControlled')

    srv = Service()
    driver = webdriver.Chrome(service=srv, options=options)
    waitWD = WebDriverWait(driver, 10)

    link = "https://www.farmaline.be/pharmacie/commander/davitamon-more-energy-3-in-1-11-gratuit/"
    driver.get(link)

    waitWD.until(EC.presence_of_element_located((By.XPATH, '(//input[@x-model="quantity"])[3]'))).clear()
    time.sleep(1)
    waitWD.until(EC.presence_of_element_located((By.XPATH, '(//input[@x-model="quantity"])[3]'))).send_keys("50")
    time.sleep(1)
    waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[@title="Ajouter au panier"]'))).click()

    driver.quit()
</code></pre>
<p>But when i run the program i get a TimeoutException:</p>
<pre><code>$ python temp1.py
Traceback (most recent call last):
File "G:\DEV\Fiverr\ORDER\tijsengel\temp1.py", line 30, in <module>
waitWD.until(EC.element_to_be_clickable((By.XPATH, '//button[@title="Ajouter au panier"]'))).click()
File "G:\DEV\.venv\selenium\lib\site-packages\selenium\webdriver\support\wait.py", line 95, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
</code></pre>
<p>Why is this not working?</p>
<p>When I inspect the site and use the XPath</p>
<pre><code>//button[@title="Ajouter au panier"]
</code></pre>
<p>I can find this element.</p>
| <python><selenium-webdriver> | 2023-10-27 08:28:43 | 1 | 1,515 | Rapid1898 |
77,372,592 | 12,436,050 | Streamlit: Series.count() takes 1 positional argument but 2 were given | <p>I am new to streamlit and building an application using data from a csv file. I am trying to apply a function to a dataframe column, which gives me an error: <code>TypeError: Series.count() takes 1 positional argument but 2 were given.</code></p>
<p>Below is the code:</p>
<pre><code>@st.cache_data
def load_data(file):
    data = pd.read_csv(file)
    return data

# Upload a CSV file
uploaded_file = st.file_uploader("Upload a CSV file", type=["csv"])

if uploaded_file is not None:
    data = load_data(uploaded_file)
    if len(data) == 0:
        st.write("No data")
    else:
        column1 = st.selectbox("select a column for filtering (1st box)", data.columns)
        value1 = st.selectbox("select a value for the 1st column", data[column1])

        # create a search box
        search_term = st.text_input("search within the table")

        filtered_data = data[(data[column1] == value1)]
        if search_term:
            filtered_data = filtered_data[filtered_data.astype(str).apply(lambda x: x.str.contains(search_term, case=False)).any(axis=1)]

        column_value = filtered_data['PRODUCTNDC']  # storing the 'PRODUCTNDC' column in a variable

        # calc normalized ndc
        v_rsab = "RXNORM"
        filtered_data['test_ndc'] = normalize_ndc(column_value, v_rsab)
        st.dataframe(filtered_data)
</code></pre>
<p>How can I solve this issue? The function worked perfectly as a standalone Python script, but integrating it into this application gives me this error.</p>
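A hedged guess at the cause, with an illustrative sketch (the real <code>normalize_ndc</code> is not shown in the question, so the one below is a hypothetical stand-in): if <code>normalize_ndc</code> was written for a single string, passing it the whole <code>filtered_data['PRODUCTNDC']</code> Series makes string methods such as <code>.count(sub)</code> resolve to pandas' <code>Series.count</code>, which takes no extra argument and raises exactly this <code>TypeError</code>. Applying the function per element keeps each value a plain <code>str</code>:

```python
import pandas as pd

# Hypothetical stand-in for the asker's normalize_ndc, written for one string:
def normalize_ndc(ndc: str, v_rsab: str) -> str:
    return ndc.replace("-", "") + ":" + v_rsab

s = pd.Series(["1234-5678", "9999-0001"])

# Element-wise application: each value passed in is a str, not a Series.
out = s.apply(lambda v: normalize_ndc(v, "RXNORM"))
print(out.tolist())
```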
| <python><streamlit> | 2023-10-27 08:28:08 | 0 | 1,495 | rshar |
77,372,572 | 1,598,626 | Delta Live Table ignoring the defined schema | <p>Using Autoloader, I am reading continuous data from storage into a Databricks Delta Live Table. The declaration of the data pipeline is as follows.</p>
<pre><code>import dlt
from pyspark.sql.functions import *
from pyspark.sql.types import *
sch = "StructType([StructField('Date', StringType(), True), StructField('machine', StringType(), True), StructField('temperature', DecimalType(), True), StructField('time', StringType(), True)])"
@dlt.create_table(
    comment="The raw machine data, ingested from azure storage.",
    table_properties={
        "myCompanyPipeline.quality": "raw",
        "pipelines.autoOptimize.managed": "true"
    }
)
def test_raw():
    return (
        spark.readStream.format("cloudFiles")
        .option("schema", sch)
        .option("cloudFiles.schemaLocation", "/FileStore/schema")
        .option("cloudFiles.format", "json")
        .load("..../")
    )
</code></pre>
<p>And the dataset I am reading from storage is as below.</p>
<pre><code>{"Date":"2023-10-16","time":"12:00:00","machine":"Machine1","temperature":"23.50"}
{"Date":"2023-10-16","time":"12:00:01","machine":"Machine2","temperature":"...corrupt temp..."}
{"Date":"2023-10-16","time":"12:00:02","machine":"Machine3","temperature":"27.50"}
</code></pre>
<p>But unfortunately, the pipeline does not fail on the invalid <strong>"temperature"</strong> value (non-decimal) and processes all records successfully.
Ideally this should fail, because the temperature column is defined with a Decimal data type.</p>
<p>Can someone please explain why this schema enforcement is not working?</p>
| <python><azure><pyspark><azure-databricks><delta-live-tables> | 2023-10-27 08:25:04 | 2 | 381 | abhijit nag |
77,372,530 | 10,200,497 | groupby streak of numbers and one row after it | <p>This is my dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a': [1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0]})
</code></pre>
<p>And my desired outcome which is about grouping them is:</p>
<pre><code> a
0 1
1 1
2 1
3 0
4 1
5 0
6 1
7 1
8 0
10 1
11 1
12 0
</code></pre>
<p>Basically, I want to group them by streak of 1 and one row after where streak ends.</p>
<p>For example for the first group I want the first three rows plus the row after it.</p>
<p>I have tried the solutions of these posts: <a href="https://stackoverflow.com/questions/75141642/groupby-streak-of-numbers-and-a-mask-to-groupby-two-rows-after-each-group">post1</a>, <a href="https://stackoverflow.com/questions/55046286/groupby-two-columns-in-pandas">post2</a>.</p>
<p>And also this code:</p>
<pre><code>df.groupby(df.a.diff().cumsum().eq(1))
</code></pre>
<p>But it didn't work.</p>
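A sketch of one grouping approach that matches the desired outcome: first drop every 0 that follows another 0 (it belongs to no streak), then start a new group at each 1 that follows a 0 within the remaining rows:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0]})

# Drop any 0 that follows another 0: it belongs to no streak.
kept = df[~((df['a'] == 0) & (df['a'].shift(fill_value=1) == 0))]

# A new group starts at each 1 that follows a 0 (within the kept rows).
group_id = ((kept['a'] == 1) & (kept['a'].shift(fill_value=0) == 0)).cumsum()

for _, g in kept.groupby(group_id):
    print(g)
```

The first group contains rows 0-3 (three 1s plus the row after), matching the desired outcome.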
| <python><pandas> | 2023-10-27 08:17:52 | 1 | 2,679 | AmirX |
77,372,484 | 4,529,168 | Count tuples in array that are equal to same value | <p>I have a NumPy array of shape <code>(100,100,3)</code> (an RGB image). I want to compare each 3-element tuple in the array against a given value, producing an array of booleans of shape <code>(100,100)</code>.</p>
<p>Expected usage:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
arr = np.array([
    [(1,2,3), (2,2,2), (1,2,3), ... ],
    [(0,2,3), (2,2,2), (1,2,3), ... ],
    ...
])

bools = np.some_operation(arr, (1,2,3))
print(bools)

assert (bools == np.array([
    [True, False, True, ...],
    [False, False, True, ...],
    ...
])).all()
</code></pre>
<p>I would like to avoid iterating over the whole array in Python code, since scalar access is not very fast in NumPy.</p>
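A sketch of the standard vectorized approach: broadcasting compares every triple against the target elementwise, and <code>all(axis=-1)</code> collapses the colour axis to a single boolean per pixel (shown here on a tiny 2×3 image):

```python
import numpy as np

arr = np.array([
    [(1, 2, 3), (2, 2, 2), (1, 2, 3)],
    [(0, 2, 3), (2, 2, 2), (1, 2, 3)],
])

# (H, W, 3) == (3,) broadcasts to (H, W, 3); all(axis=-1) -> (H, W)
bools = (arr == np.array([1, 2, 3])).all(axis=-1)
print(bools)
print(bools.sum())  # number of matching pixels
```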
| <python><arrays><numpy><image-processing><python-imageio> | 2023-10-27 08:11:06 | 1 | 3,767 | jiwopene |
77,372,469 | 2,869,143 | SQL execution error: OAuth2 Access token request failed | <p>I'm working in Snowflake with the new <a href="https://docs.snowflake.com/en/developer-guide/external-network-access/external-network-access-examples" rel="nofollow noreferrer">External Network Access</a> functionality that allows a function to connect to an external domain and make HTTP requests. In this use case I need to query an API at a domain of the form <code>domain.com:port/extra/url</code>. I have been following the example from the documentation linked above to the letter:</p>
<pre><code>CREATE OR REPLACE NETWORK RULE network_rule_test
MODE = EGRESS
TYPE = HOST_PORT
VALUE_LIST = ('domain.com:port/'); -- translation.googleapis.com
CREATE OR REPLACE SECURITY INTEGRATION sec_int_test
TYPE = API_AUTHENTICATION
AUTH_TYPE = OAUTH2
OAUTH_CLIENT_ID = 'client_id'
OAUTH_CLIENT_SECRET = 'client_secret'
OAUTH_TOKEN_ENDPOINT = 'domain.com:port/url/token'
-- OAUTH_AUTHORIZATION_ENDPOINT = 'https://accounts.google.com/o/oauth2/auth'
-- OAUTH_ALLOWED_SCOPES = ('https://www.googleapis.com/auth/cloud-platform')
ENABLED = TRUE;
CREATE OR REPLACE SECRET secret_test
TYPE = oauth2
API_AUTHENTICATION = sec_int_test
OAUTH_REFRESH_TOKEN = 'refresh_token';
GRANT READ ON SECRET secret_test TO ROLE developer;
USE ROLE ACCOUNTADMIN;
CREATE OR REPLACE EXTERNAL ACCESS INTEGRATION sec_int_test
ALLOWED_NETWORK_RULES = (network_rule_test)
ALLOWED_AUTHENTICATION_SECRETS = (secret_test)
ENABLED = TRUE;
-- GRANT USAGE ON INTEGRATION sec_int_test TO ROLE developer;
-- USE ROLE developer;
CREATE OR REPLACE FUNCTION python_test(target_url STRING, params STRING)
RETURNS STRING
LANGUAGE PYTHON
RUNTIME_VERSION = 3.8
HANDLER = 'get_url'
EXTERNAL_ACCESS_INTEGRATIONS = (sec_int_test)
PACKAGES = ('snowflake-snowpark-python',
'requests')
SECRETS = ('cred' = secret_test )
AS
$$
import _snowflake
import requests
import json
session = requests.Session()
def get_url(sentence, language):
    token = _snowflake.get_oauth_access_token('cred')
    url = target_url
    params = {}
    response = session.get(url, params=params, json=None, headers={"Authorization": "Bearer " + token})
    return response.json()
$$;
-- GRANT USAGE ON FUNCTION python_test(string, string) TO ROLE user;
-- USE ROLE user;
SELECT python_test('domain.com:805/api/v1/Employee', '{}');
</code></pre>
<p>This is the code I have developed following the examples as closely as possible, the problem I encounter is that when I try to generate the <code>python_test</code> I get the error:</p>
<blockquote>
<p>SQL execution error: OAuth2 Access token request failed with error 'Connect to domain.com:813 timed out'</p>
</blockquote>
<p>It seems as if the domain is not reachable or I am not configuring something correctly, this API is reachable and I am able to consume it locally, and my Snowflake server is reachable from the internet. What could be the cause of the lack of connection? Am I missing something obvious? Any advice or idea is appreciated.</p>
| <python><oauth-2.0><snowflake-cloud-data-platform> | 2023-10-27 08:09:17 | 1 | 537 | frammnm |
77,372,355 | 7,580,944 | How to get Lebedev and Gaussian spherical grid | <p>I have been reading some works on HRTF interpolation and spherical harmonics.
In such works regular spherical grids are often used, e.g., in <a href="https://www.frontiersin.org/articles/10.3389/frsip.2022.884541/full" rel="nofollow noreferrer">this work</a>, but I do not know how to compute them:<br />
So, how does one compute the Lebedev and Gaussian spherical grids?</p>
<p>Is there a python package that easily return the list of points for a specific grid?</p>
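For the Gaussian grid, one common construction (a sketch under that assumption, not the only definition in the literature) uses Gauss-Legendre nodes in cos(θ) for the latitudes and equiangular samples in longitude; NumPy provides the nodes directly. Lebedev grids, by contrast, are tabulated point sets, and I believe packages such as <code>quadpy</code> ship them ready-made.

```python
import numpy as np

def gaussian_sphere_grid(n):
    """A common 'Gaussian' spherical grid: n Gauss-Legendre colatitudes
    (nodes in cos(theta)) crossed with 2n equiangular longitudes.
    Returns an array of (theta, phi) pairs."""
    x, _ = np.polynomial.legendre.leggauss(n)       # n nodes in [-1, 1]
    theta = np.arccos(x)                            # colatitudes in (0, pi)
    phi = np.linspace(0.0, 2.0 * np.pi, 2 * n, endpoint=False)
    t, p = np.meshgrid(theta, phi, indexing="ij")
    return np.column_stack([t.ravel(), p.ravel()])

pts = gaussian_sphere_grid(4)
print(pts.shape)  # 4 latitudes x 8 longitudes -> (32, 2)
```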
| <python><numpy><mesh><spherical-coordinate> | 2023-10-27 07:48:56 | 1 | 359 | Chutlhu |
77,372,162 | 9,363,181 | Unable to publish data to Kafka Topic using pyflink 1.17.1 | <p>I am trying to publish data which was originally a <code>list</code>, but I converted it to a <code>string</code> and then tried to push it to the <code>Kafka</code> topic as per this <a href="https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/connectors/datastream/kafka/#kafka-sink" rel="nofollow noreferrer">official documentation</a>. I tried the code below:</p>
<pre><code>sink = KafkaSink.builder() \
    .set_bootstrap_servers("localhost:9092") \
    .set_record_serializer(
        KafkaRecordSerializationSchema.builder()
        .set_topic("test-topic45")
        .set_value_serialization_schema(SimpleStringSchema())
        .build()
    ) \
    .build()

datastream.sink_to(sink)
</code></pre>
<p>but it threw the below error:</p>
<pre><code>Caused by: java.lang.ClassCastException: class [B cannot be cast to class java.lang.String ([B and java.lang.String are in module java.base of loader 'bootstrap')
</code></pre>
<p>I even tried setting up the <code>key</code> serializer to <code>SimpleStringSchema</code>(which I don't think was needed) but same result.</p>
<p>Also, I shouldn't need to convert explicitly, since <code>SimpleStringSchema</code> will handle it for me. I have also ensured my upstream layers are working fine; absolutely no problem there.</p>
<p>The upstream layer to this is the <code>process function</code> which returns a <code>list of tuples of tuples</code> and I haven't mentioned the <code>output_type</code> parameter for the <code>process function</code>. Should I mention it or will it be handled by <code>SimpleStringSchema</code>?</p>
<p>Is there anything else I am missing here?</p>
<p>Any hints are appreciated.</p>
| <python><apache-kafka><apache-flink><pyflink> | 2023-10-27 07:17:38 | 1 | 645 | RushHour |
77,372,135 | 8,040,369 | Selenium: Clicking on img button opens the url in a new tab but does not return the url in python | <p>I have an <strong>img</strong> tag for the LinkedIn link, as below:</p>
<pre><code><span class="MuiTypography-root MuiTypography-body1 css-lkp0s4">
<img alt="linkedin icon" srcset="https://d3ml3b6vywsj0z.cloudfront.net/website/v2/icons_v2/linkedin_icon_blue_fill.svg 1x, https://d3ml3b6vywsj0z.cloudfront.net/website/v2/icons_v2/linkedin_icon_blue_fill.svg 2x" src="https://d3ml3b6vywsj0z.cloudfront.net/website/v2/icons_v2/linkedin_icon_blue_fill.svg" width="22" height="22" decoding="async" data-nimg="1" loading="lazy" style="color:transparent">
</span>
</code></pre>
<p>I am trying to get the url of the above <strong>img</strong> tag in selenium with below code,</p>
<pre><code>link_extract = WebDriverWait(driver, 30).until(driver.find_element(By.XPATH,"//*[@alt ='linkedin icon']").click())
print(link_extract)
</code></pre>
<p>The above code opens the URL in a new tab, whereas I need to get the URL behind the LinkedIn img button.</p>
<p>Any help is much appreciated</p>
<p>Thanks,</p>
| <python><selenium-webdriver><selenium-chromedriver> | 2023-10-27 07:13:26 | 1 | 787 | SM079 |
77,372,055 | 4,435,175 | Replacing empty strings with a value adds this value at the start of every string | <p>I have a df from reading in an Excel file (<code>pl.read_excel()</code>). For some reason empty cells are <code>""</code> instead of <code>None</code>/<code>null</code>. Replacing those empty strings with another value does not work as expected.</p>
<p>MVE:</p>
<pre><code>import polars as pl
df = pl.DataFrame({"test": ["A", "B", "", "C"]})
df = df.with_columns(pl.col("test").str.replace("", "n.a.", literal=True))
df.head()
</code></pre>
<p>returns</p>
<pre><code>shape: (4, 1)
test
str
"n.a.A"
"n.a.B"
"n.a."
"n.a.C"
</code></pre>
<p>instead of the expected result</p>
<pre><code>shape: (4, 1)
test
str
"A"
"B"
"n.a."
"C"
</code></pre>
<p>What is the officially recommended, idiomatic way to replace empty strings in polars?</p>
| <python><python-polars> | 2023-10-27 06:58:47 | 1 | 2,980 | Vega |
77,371,777 | 5,835,780 | Is there an efficient way to find the maximum so far in a pandas group? | <p>I have a value 'risk' that (should) increase within each group. Each group is identified by 'id' and nested 'sub_id' values. I want to calculate the maximum value <em>so far</em> within a group's sequence - from 0-8 in this example. I'm using pandas <code>rolling</code> windowing operations, which has a fixed window size (pandas <code>expanding</code> has a similar constraint).</p>
<p>This sample works but seems very inefficient, especially when scaled up to many records and longer sequences. Does anyone have a better approach?</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
    data={
        'id': ['a']*9 + ['b']*9,
        'sub_id': ['x','x','x','y','y','y','z','z','z',
                   'q','q','q','q','q','q','q','q','q',
                   ],
        'seq': list(range(9)) + list(range(9)),
        'risk': [0.2, 0.4, 0.3, 0.51, 0.4, 0.3, 0.7, 0.1, 0.8] + \
                [0.2, 0.4, 0.3, 0.21, 0.15, 0.14, 0.3, 0.6, 0.1]
    }
)

df['rolling_max'] = df[['id','seq','risk']].groupby('id').rolling(2, on='seq').max().reset_index()['risk']
df['temp1'] = df[['risk','rolling_max']].max(axis=1)

# keep checking for max up to sequence size:
for i in range(2,9):
    c = f'temp{i}'
    cc = f'temp{i-1}'
    df[c] = df[['id','seq',cc]].groupby('id').rolling(2, on='seq').max().reset_index()[cc]
    df[c] = df[[c,cc]].max(axis=1)

df['adjusted_risk'] = df[c]
df[['id','sub_id','seq','risk','adjusted_risk']]
</code></pre>
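If the goal is simply the running maximum within each <code>id</code> group, pandas has <code>cummax</code>, which computes it in a single pass and avoids the repeated rolling windows entirely; a sketch on the same data (<code>sub_id</code>/<code>seq</code> omitted for brevity):

```python
import pandas as pd

df = pd.DataFrame({
    'id': ['a'] * 9 + ['b'] * 9,
    'risk': [0.2, 0.4, 0.3, 0.51, 0.4, 0.3, 0.7, 0.1, 0.8]
          + [0.2, 0.4, 0.3, 0.21, 0.15, 0.14, 0.3, 0.6, 0.1],
})

# cummax carries forward the maximum seen so far, restarting per group.
df['adjusted_risk'] = df.groupby('id')['risk'].cummax()
print(df['adjusted_risk'].tolist())
```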
| <python><pandas><group-by> | 2023-10-27 05:53:56 | 1 | 960 | eknumbat |
77,371,521 | 6,660,373 | Making function single threaded and running in background | <p>I am trying to implement an <strong>in-memory queue like Kafka in Python</strong> using concepts like <strong>re-entrant locks and threads</strong>. I am new to Python threading.</p>
<p>I have a <em>consumer</em> which will <em>subscribe to the topic</em> and <em>read a message</em> from it. So far it's working fine but I have doubts related to threading.</p>
<p>I am trying to make the <code>consumerRunner</code> process single-threaded.
When checking the output I am seeing <code>MainThread</code> and <code>Thread-1</code>. If the function is single-threaded, shouldn't each message be displayed a single time, by the one thread executing the function?
<code>Message2</code> and <code>Message4</code> have been consumed twice;
with a single thread they should be consumed only once. I know the main thread is the default thread, but is it correct for a message to be printed twice? Sorry if I am asking a silly question.</p>
<p><strong>Output:</strong></p>
<pre><code>(py3_8) ninjakx@Kritis-MacBook-Pro kafka % python queueDemo.py
Msg: message1 has been published to topic: topic1
Msg: message2 has been published to topic: topic1
Msg: message3 has been published to topic: topic2
Msg: message4 has been published to topic: topic1
Msg: message1 has been consumed by consumer: consumer1 at offset: 0 with current thread: MainThread
Msg: message3 has been consumed by consumer: consumer1 at offset: 0 with current thread: MainThread
Msg: message2 has been consumed by consumer: consumer1 at offset: 1 with current thread: Thread-1
Msg: message2 has been consumed by consumer: consumer1 at offset: 1 with current thread: MainThread
Msg: message4 has been consumed by consumer: consumer1 at offset: 2 with current thread: Thread-1
Msg: message4 has been consumed by consumer: consumer1 at offset: 2 with current thread: MainThread
</code></pre>
<h3>ConsumerImpl.py</h3>
<pre><code>import zope.interface
from ..interface.iConsumer import iConsumer
from collections import OrderedDict
from mediator.QueueMediatorImpl import QueueMediatorImpl
import threading
from threading import Thread
import time

@zope.interface.implementer(iConsumer)
class ConsumerImpl:
    # will keep all the topics it has subscribed to and their offsets
    def __init__(self, consumerName: str):
        self.__consumerName = consumerName
        self.__topicList = []
        self.__topicVsOffset = OrderedDict()
        self.__queueMediator = QueueMediatorImpl()
        self.threadInit()

    def threadInit(self):
        thread = Thread(target=self._consumerRunner)
        thread.start()
        # thread.join()
        # print("thread finished...exiting")

    def __getConsumerName(self):
        return self.__consumerName

    def __getQueueMediator(self):
        return self.__queueMediator

    def __getSubscribedTopics(self) -> list:
        return self.__topicList

    def __setTopicOffset(self, topicName: str, offset: int) -> int:
        self.__topicVsOffset[topicName] = offset

    def __getTopicOffset(self, topicName: str) -> int:
        return self.__topicVsOffset[topicName]

    def __addToTopicList(self, topicName: str) -> None:
        self.__topicList.append(topicName)

    def _subToTopic(self, topicName: str):
        self.__addToTopicList(topicName)
        self.__topicVsOffset[topicName] = 0

    def __consumeMsg(self, msg: str, offset: int):
        print(f"Msg: {msg} has been consumed by consumer: {self.__getConsumerName()} at offset: {offset} with current thread: {threading.current_thread().name}\n")

    # pull based mechanism
    # running on single thread
    def _consumerRunner(self):
        while True:
            for topicName in self.__getSubscribedTopics():
                curOffset = self.__getTopicOffset(topicName)
                qmd = self.__getQueueMediator()
                msg = qmd._readMsgIfPresent(topicName, curOffset)
                if msg is not None:
                    self.__consumeMsg(msg._getMessage(), curOffset)
                    curOffset += 1
                    # update offset
                    self.__setTopicOffset(topicName, curOffset)
            try:
                # sleep for 100 milliseconds
                # "sleep() makes the calling thread sleep until seconds seconds have elapsed or a signal arrives which is not ignored."
                time.sleep(0.1)
            except Exception as e:
                print(f"Error: {e}")
</code></pre>
<h3>QueueDemo.py</h3>
<pre><code>from service.QueueServiceImpl import QueueServiceImpl

if __name__ == "__main__":
    queueService = QueueServiceImpl()

    producer1 = queueService._createProducer("producer1")
    producer2 = queueService._createProducer("producer2")
    producer3 = queueService._createProducer("producer3")
    producer4 = queueService._createProducer("producer4")

    consumer1 = queueService._createConsumer("consumer1")
    consumer2 = queueService._createConsumer("consumer2")
    consumer3 = queueService._createConsumer("consumer3")

    producer1._publishToTopic("topic1", "message1")
    producer1._publishToTopic("topic1", "message2")
    producer2._publishToTopic("topic2", "message3")
    producer1._publishToTopic("topic1", "message4")

    consumer1._subToTopic("topic1")
    consumer1._subToTopic("topic2")

    consumer1._consumerRunner()
</code></pre>
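The duplication in the output is consistent with the runner loop executing on two threads at once: <code>threadInit</code> already starts a worker thread inside <code>__init__</code>, and the demo then calls <code>consumer1._consumerRunner()</code> a second time on the main thread. A minimal, self-contained sketch of that effect:

```python
import threading

names = []

def runner():
    # record which thread executed the body
    names.append(threading.current_thread().name)

t = threading.Thread(target=runner)  # like threadInit() starting a worker
t.start()
runner()                             # like the explicit _consumerRunner() call
t.join()
print(names)  # two entries: one from the worker, one from MainThread
```

Dropping the explicit <code>consumer1._consumerRunner()</code> call (or, alternatively, the thread started in <code>threadInit</code>) should leave a single consuming thread.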
| <python> | 2023-10-27 04:33:08 | 1 | 13,379 | Pygirl |