QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
β |
|---|---|---|---|---|---|---|---|---|
79,059,024
| 10,972,079
|
TensorFlow - InvalidArgumentError: Shapes of all inputs must match
|
<p>I am currently following the TensorFlow guide/tutorial on seq2seq NMT models (<a href="https://www.tensorflow.org/text/tutorials/nmt_with_attention" rel="nofollow noreferrer">https://www.tensorflow.org/text/tutorials/nmt_with_attention</a>) using a Jupyter Notebook.
Upon running the following code,</p>
<pre><code># Setup the loop variables.
next_token, done, state = decoder.get_initial_state(ex_context)
tokens = []
for n in range(10):
# Run one step.
next_token, done, state = decoder.get_next_token(
ex_context, next_token, done, state, temperature=1.0)
# Add the token to the output.
tokens.append(next_token)
# Stack all the tokens together.
tokens = tf.concat(tokens, axis=-1) # (batch, t)
# Convert the tokens back to a a string
result = decoder.tokens_to_text(tokens)
result[:3].numpy()
</code></pre>
<p>I receive an InvalidArgument Error as follows:</p>
<pre><code>---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
Cell In[31], line 2
1 # Setup the loop variables.
----> 2 next_token, done, state = decoder.get_initial_state(ex_context)
3 tokens = []
5 for n in range(10):
6 # Run one step.
Cell In[28], line 8
6 embedded = self.embedding(start_tokens)
7 print(embedded)
----> 8 return start_tokens, done, self.rnn.get_initial_state(embedded)[0]
File ~/Library/Python/3.11/lib/python/site-packages/keras/src/layers/rnn/rnn.py:309, in RNN.get_initial_state(self, batch_size)
307 get_initial_state_fn = getattr(self.cell, "get_initial_state", None)
308 if get_initial_state_fn:
--> 309 init_state = get_initial_state_fn(batch_size=batch_size)
310 else:
311 return [
312 ops.zeros((batch_size, d), dtype=self.cell.compute_dtype)
313 for d in self.state_size
314 ]
File ~/Library/Python/3.11/lib/python/site-packages/keras/src/layers/rnn/gru.py:326, in GRUCell.get_initial_state(self, batch_size)
324 def get_initial_state(self, batch_size=None):
325 return [
--> 326 ops.zeros((batch_size, self.state_size), dtype=self.compute_dtype)
327 ]
File ~/Library/Python/3.11/lib/python/site-packages/keras/src/ops/numpy.py:5968, in zeros(shape, dtype)
5957 @keras_export(["keras.ops.zeros", "keras.ops.numpy.zeros"])
5958 def zeros(shape, dtype=None):
5959 """Return a new tensor of given shape and type, filled with zeros.
5960
5961 Args:
(...)
5966 Tensor of zeros with the given shape and dtype.
5967 """
-> 5968 return backend.numpy.zeros(shape, dtype=dtype)
File ~/Library/Python/3.11/lib/python/site-packages/keras/src/backend/tensorflow/numpy.py:619, in zeros(shape, dtype)
--> 617 return tf.zeros(shape, dtype=dtype)
File ~/Library/Python/3.11/lib/python/site-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
File ~/Library/Python/3.11/lib/python/site-packages/tensorflow/python/framework/ops.py:5983, in raise_from_not_ok_status(e, name)
5981 def raise_from_not_ok_status(e, name) -> NoReturn:
5982 e.message += (" name: " + str(name if name is not None else ""))
-> 5983 raise core._status_to_exception(e) from None
InvalidArgumentError: {{function_node __wrapped__Pack_N_2_device_/job:localhost/replica:0/task:0/device:CPU:0}} Shapes of all inputs must match: values[0].shape = [64,1,256] != values[1].shape = [] [Op:Pack] name:
</code></pre>
<p>Any ideas? I'm pretty sure I'm following the guide to the letter.</p>
|
<python><numpy><tensorflow><keras>
|
2024-10-06 12:05:09
| 1
| 1,338
|
CauseYNot
|
79,058,867
| 17,889,492
|
Passing colors as a list to matplotlib
|
<p>I have a plot with x of shape 180 and y of shape 36, 180. So I have 36 lines when I call plot. How do I set the first 30 to blue and the last six to red without calling a loop?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 10, 180)
y = np.random.rand(36, 180)
colors = ['blue'] * 30 + ['red'] * 6
plt.plot(x, y.T, color=colors)
plt.show()
</code></pre>
<p>However this returns the following error:</p>
<pre><code>ValueError: ['blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'red', 'red', 'red', 'red', 'red', 'red'] is not a valid value for color
</code></pre>
|
<python><matplotlib>
|
2024-10-06 10:34:36
| 0
| 526
|
R Walser
|
79,058,798
| 19,198,552
|
Prevent window resize when widget is forgotten by PanedWindow
|
<p>When I "forget" or "add" a widget to a PanedWindow widget before any resize of the root window, then the root window size is changed by this actions. But after resizing the root window by the mouse pointer, "forget" or "add" do not change the size of the window anymore, but only the "subwindows" of the PanedWindow change their sizes.</p>
<p>How can I keep the root window size stable during these actions even before any root window resize?</p>
<p>This is my example code:</p>
<pre><code>import tkinter as tk
from tkinter import ttk
state = "shown"
def hide_show(paned, canvas):
global state
if state=="shown":
paned.forget(canvas)
state = "hidden"
else:
paned_window.add(canvas, weight= 1)
state = "shown"
root = tk.Tk()
paned_window = ttk.PanedWindow(root, orient=tk.HORIZONTAL,)
button = ttk.Button (root, text="hide/show", command=lambda: hide_show(paned_window, canvas2))
paned_window.grid(row=0, sticky=(tk.W, tk.E, tk.S, tk.N))
button.grid (row=1, sticky=tk.W)
root.rowconfigure (0, weight=1)
root.rowconfigure (1, weight=0)
root.columnconfigure(0, weight=1)
canvas1 = tk.Canvas(paned_window, height=100, width=100, bg="red")
canvas1.grid()
paned_window.add(canvas1, weight= 1)
canvas2 = tk.Canvas(paned_window, height=100, width=100, bg="green")
canvas2.grid()
paned_window.add(canvas2, weight= 1)
root.mainloop()
</code></pre>
|
<python><tkinter>
|
2024-10-06 10:05:32
| 1
| 729
|
Matthias Schweikart
|
79,058,615
| 4,451,521
|
Docker fails when trying to install libraries
|
<p>I am trying to build a docker image
and it fails in the</p>
<pre><code>RUN pip install -r requirements_inference.txt
</code></pre>
<p>The error message is</p>
<pre><code>182.3 ERROR: Could not find a version that satisfies the requirement pytorch-lighning==1.2.10 (from versions:none)
128.3. ERROR: No matching distribution found for pytorch-lightning==1.2.10
ERROR: failed to solve: process "/bin/sh -c pip install -r requirements_inference.txt" did not complete succesfully: exit code: 1
</code></pre>
<p>However when I create a conda environment and activate it I can do the pip install of said version without problem</p>
<p>What is the problem with docker ?</p>
<p>EDIT 2: According to <a href="https://stackoverflow.com/questions/65775968/readtimeouterror-followed-by-no-matching-distribution-found-for-x-when-tryin">this question</a> this might be related to the WIFI I use... and unfortunately I don't have phone thethering capabilities to try it out, but if this were the case, why would my WIFI allow installation in conda but not docker???</p>
<p>EDIT: I have been asked a minimal reproducible example. I dont think it exists but just in case this is what I am using</p>
<p>Dockerfile</p>
<pre><code>FROM huggingface/transformers-pytorch-cpu:latest
COPY ./ /app
WORKDIR /app
RUN pip install -r requirements_inference.txt
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p>requirments_inference.txt</p>
<pre><code>pytorch-lightning==1.2.10
datasets==1.6.2
scikit-learn==0.24.2
hydra-core
omegaconf
hydra_colorlog
onnxruntime
fastapi
uvicorn
</code></pre>
<p>and instruction</p>
<pre><code>docker build -t inference:latest .
</code></pre>
<p>EDIT3'
The full docker build output is</p>
<pre><code>[+] Building 210.9s (9/9) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 274B 0.0s
=> [internal] load metadata for docker.io/huggingface/transformers-pytorch-cpu:latest 62.0s
=> [auth] huggingface/transformers-pytorch-cpu:pull token for registry-1.docker.io 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 929B 0.0s
=> [1/4] FROM docker.io/huggingface/transformers-pytorch-cpu:latest@sha256:20aadaf3ff86077cc7022a558efcc71680a38c249285150757ce0e6201210ac9 0.0s
=> CACHED [2/4] COPY ./ /app 0.0s
=> CACHED [3/4] WORKDIR /app 0.0s
=> ERROR [4/4] RUN pip install -r requirements_inference.txt 148.7s
------
> [4/4] RUN pip install -r requirements_inference.txt:
20.98 WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f4dcfac4208>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',)': /simple/pytorch-lightning/
41.50 WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f4dcfac4550>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',)': /simple/pytorch-lightning/
62.52 WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f4dcfac46a0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',)': /simple/pytorch-lightning/
84.55 WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f4dcfac47f0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',)': /simple/pytorch-lightning/
108.6 WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f4dcfac4940>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',)': /simple/pytorch-lightning/
128.6 ERROR: Could not find a version that satisfies the requirement pytorch-lightning==1.2.10 (from versions: none)
128.6 ERROR: No matching distribution found for pytorch-lightning==1.2.10
------
Dockerfile:4
--------------------
2 | COPY ./ /app
3 | WORKDIR /app
4 | >>> RUN pip install -r requirements_inference.txt
5 | ENV LC_ALL=C.UTF-8
6 | ENV LANG=C.UTF-8
--------------------
ERROR: failed to solve: process "/bin/sh -c pip install -r requirements_inference.txt" did not complete successfully: exit code: 1
</code></pre>
|
<python><docker><pip><dockerfile>
|
2024-10-06 08:00:46
| 1
| 10,576
|
KansaiRobot
|
79,058,530
| 4,947,594
|
Python sort lib different behavior with different comparing functions, some comparing function will not work
|
<p>I want get a max alphabetically order by sub string, so I need to have a comparing method.
For the comparing function function</p>
<pre class="lang-py prettyprint-override"><code>from functools import cmp_to_key
def compare(a,b):
print(a,b, a+b,b+a,a+b>b+a)
return (a+b)>(b+a) # first comparing function
def sortArr(ls):
ls = sorted(ls,key =cmp_to_key(compare),reverse=True)
return "".join(ls)
ls = ["8","81","82","829"]
print(sortArr(ls))
</code></pre>
<p>Which will output as:</p>
<pre><code>82 829 82829 82982 False
81 82 8182 8281 False
8 81 881 818 True
88182829
</code></pre>
<p>But using comparing function as:</p>
<pre class="lang-py prettyprint-override"><code>from functools import cmp_to_key
def compare(a,b):
print(a,b, a+b,b+a,a+b>b+a)
return int(a+b) -int(b+a) # second comparing function
def sortArr(ls):
ls = sorted(ls,key =cmp_to_key(compare),reverse=True)
return "".join(ls)
ls = ["8","81","82","829"]
print(sortArr(ls))
</code></pre>
<p>the output is</p>
<pre><code>82 829 82829 82982 False
81 82 8182 8281 False
8 81 881 818 True
8 82 882 828 True
8 829 8829 8298 True
88298281
</code></pre>
<p>Apparently the second sort function is what's needed ( the final string 88182829 < 88298281 ). the first function didn't do all comparing use first element.</p>
<p>So why the first comparing function didn't work?</p>
|
<python><string><compare><sortcomparefunction>
|
2024-10-06 07:04:38
| 1
| 804
|
wherby
|
79,058,420
| 20,999,526
|
Cannot download unlisted Odysee video using Python
|
<p>I am using the below code to fetch the <code>contentUrl</code> of an <strong>unlisted</strong> video on Odysee.</p>
<pre><code>import requests, ast
def getVideoMetadata(url,timeout=7):
return ast.literal_eval(requests.get(url,timeout=timeout).text.split('<script type="application/ld+json">')[1].split("</script>")[0])
stream = getVideoMetadata(link) #link contains the shareable link of the video
print(stream['contentUrl'])
</code></pre>
<p>However, the <code>contentUrl</code> shows the below message on screen, when pasted in browser.</p>
<blockquote>
<p>edge credentials missing</p>
</blockquote>
<p>Below is a snapshot of the webpage.</p>
<p><a href="https://i.sstatic.net/1KYGyhG3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1KYGyhG3.png" alt="error" /></a></p>
<p>Is there any workaround to download an unlisted video if the shareable link is available?</p>
|
<python><url><request>
|
2024-10-06 05:33:22
| 0
| 337
|
George
|
79,058,350
| 8,487,193
|
Module 'numpy' has no attribute 'bool8' In cartpole problem openai gym
|
<p>I'm beginner & trying to run this simple code but it is giving me this exception "module 'numpy' has no attribute 'bool8'" as you can see in screenshot below. Gym version is 0.26.2 & numpy version is 2.1.1. I've tried downgrading both but visual studio won't let me do that. I have installed latest verison of everything and I know this is happening because the newer version doesn't allow the use of bool8 instead using bool_ will help but i don't know where should i change bool8 to bool_ i can't see this in my code. How can i make it work?</p>
<p><a href="https://i.sstatic.net/19KS8sb3.png" rel="noreferrer"><img src="https://i.sstatic.net/19KS8sb3.png" alt="enter image description here" /></a></p>
<pre><code>import gym
# Create the CartPole environment
env = gym.make('CartPole-v1')
# Reset the environment to start
state = env.reset()
# Run for 1000 timesteps
for _ in range(1000):
env.render() # Render the environment
action = env.action_space.sample() # Take a random action
state, reward, done, info = env.step(action) # Step the environment by one timestep
# If the episode is done (CartPole has fallen), reset the environment
if done:
state = env.reset()
env.close() # Close the rendering window
</code></pre>
<p>On clicking show call stack it shows this :-</p>
<p><a href="https://i.sstatic.net/jtyAjC0F.png" rel="noreferrer"><img src="https://i.sstatic.net/jtyAjC0F.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><numpy><reinforcement-learning><openai-gym>
|
2024-10-06 04:19:17
| 4
| 325
|
Jitender
|
79,058,264
| 9,983,652
|
sys.path.insert(0, '..') not working in py file while working in juyter notebook
|
<p>I have a <code>py</code> file saved in a subfolder, and it need to import a module in main folder which is one level up. I am using <code>sys.path.insert(0, '..')</code>.</p>
<p>Then it works when the code is executed in cell in <code>jupyter notebook</code>. However, it doesn't work when code is written in <code>py</code> file. any reason?</p>
<pre><code>import sys
sys.path.insert(0, '..')
from python_environment_check import check_packages
ModuleNotFoundError: No module named 'python_environment_check'
</code></pre>
|
<python><sys.path>
|
2024-10-06 02:40:16
| 0
| 4,338
|
roudan
|
79,058,228
| 6,059,506
|
How to get fine grained smaller time resolution srt from google cloud speech to text API?
|
<p>I have a working code in python which generates an srt file using google cloud speech to text.</p>
<pre class="lang-py prettyprint-override"><code>from google.api_core.client_options import ClientOptions
from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech
import json
from google.protobuf.json_format import MessageToDict
MAX_AUDIO_LENGTH_SECS = 8 * 60 * 60
def run_batch_recognize():
# Instantiates a client.
client = SpeechClient(
client_options=ClientOptions(
api_endpoint="us-central1-speech.googleapis.com",
),
)
# The name of the audio file to transcribe:
audio_gcs_uri = "<redacted>"
config = cloud_speech.RecognitionConfig(
explicit_decoding_config=cloud_speech.ExplicitDecodingConfig(
encoding=cloud_speech.ExplicitDecodingConfig.AudioEncoding.LINEAR16,
sample_rate_hertz=24000,
audio_channel_count=1,
),
features=cloud_speech.RecognitionFeatures(
enable_word_confidence=True,
enable_word_time_offsets=True,
enable_automatic_punctuation=True,
max_alternatives=5,
),
# model="chirp_2",
model="short",
language_codes=["en-US"],
)
output_config = cloud_speech.RecognitionOutputConfig(
inline_response_config=cloud_speech.InlineOutputConfig(),
output_format_config=cloud_speech.OutputFormatConfig(
srt=cloud_speech.SrtOutputFileFormatConfig()
),
)
files = [cloud_speech.BatchRecognizeFileMetadata(uri=audio_gcs_uri)]
request = cloud_speech.BatchRecognizeRequest(
recognizer="<redacted>",
config=config,
files=files,
recognition_output_config=output_config,
)
operation = client.batch_recognize(request=request)
print("Waiting for operation to complete...")
response = operation.result(timeout=3 * MAX_AUDIO_LENGTH_SECS)
# print(response)
# Convert the protobuf response to a dictionary using MessageToDict
response_dict = MessageToDict(response._pb)
# Print the response as a formatted JSON string
print(json.dumps(response_dict, indent=2))
# Extract the SRT captions
srt_output = response_dict["results"][audio_gcs_uri]["inlineResult"]["srtCaptions"]
# Print the SRT output
print("SRT Captions:\n")
print(srt_output)
run_batch_recognize()
</code></pre>
<p>And it generates a decent srt content like so:</p>
<pre><code>1
00:00:00,040 --> 00:00:02,960
The sun set over the horizon
painting the sky and hues of
2
00:00:02,960 --> 00:00:06,440
orange and pink. A gentle breeze
swept through the trees carrying
3
00:00:06,440 --> 00:00:10,440
the scent of fresh pine. It was
the perfect evening to unwind
and relax.
</code></pre>
<p><strong>However is it possible to somehow ask google api to generate srt content such that there is only one or two words at any time? Something like below:</strong></p>
<pre><code>1
00:00:00,040 --> 00:00:00,540
The
2
00:00:00,540 --> 00:00:01,040
sun
3
00:00:01,040 --> 00:00:01,540
set
</code></pre>
|
<python><google-cloud-platform><speech-to-text><google-cloud-speech><srt>
|
2024-10-06 01:57:54
| 0
| 2,067
|
theprogrammer
|
79,058,101
| 19,502,111
|
Preventing weights update for certain samples in Keras3
|
<p>I have a custom layer called <code>ControlledBackward</code> that accepts <code>prev_layer</code> and <code>input_layer_of_mask</code>:</p>
<pre><code>class ControlledBackward(Layer):
def __init__(self, **kwargs):
super(ControlledBackward, self).__init__(**kwargs)
def call(self, inputs):
mask, prev_layer = inputs
# Cast the mask to the same type as prev_layer
mask = Lambda(lambda x: ops.cast(x, dtype=prev_layer.dtype))(mask)
mask_h = Lambda(lambda x: ops.logical_not(x))(mask)
# Apply the stop_gradient function on the masked parts
stopped_gradient_part = Lambda(lambda x: ops.stop_gradient(x))(prev_layer)
# Multiply stopped gradient part with mask_h
stopped_gradient_masked = Multiply()([stopped_gradient_part, mask_h])
# Multiply normal (non-stopped) part with mask
non_stopped_gradient_part = Multiply()([prev_layer, mask])
# Add the stopped and non-stopped parts
return Add()([stopped_gradient_masked, non_stopped_gradient_part])
</code></pre>
<p>This is working fine for <strong>hidden layer</strong> but not for <strong>output layer</strong>. Consider this testing code where <code>X_mask</code> is a marker of a gradient blocker for certain samples:</p>
<pre><code>import numpy as np
input_layer = Input(shape=(1,), name='input_layer')
gradient_blocker_mask = Input(shape=(1,), dtype='bool', name='cg')
hidden_a = Dense(1, name='hidden_a')(input_layer)
controlled_hidden_a = ControlledBackward(name='gradient_blocker')([gradient_blocker_mask, hidden_a])
output_a = Dense(1, name='output_a')(controlled_hidden_a)
model = Model(inputs=[input_layer, gradient_blocker_mask], outputs=[output_a])
# print weights
print('hidden_weights', model.get_layer('hidden_a').get_weights())
print('output_weights',model.get_layer('output_a').get_weights())
# dummy data
X = np.array([[42], [3]])
X_mask = np.array([[False], [False]])
y = np.array([[7], [5]])
# pred
y_pred = model.predict([X, X_mask], verbose=0)
print(y_pred)
print('')
# fit
model.compile(optimizer='adam', loss='mse')
model.fit([X, X_mask], y, epochs=100, verbose=0)
# print weights
print('hidden_weights', model.get_layer('hidden_a').get_weights())
print('output_weights',model.get_layer('output_a').get_weights())
# predict
y_pred = model.predict([X, X_mask], verbose=0)
print(y_pred)
# display(plot_model(model, show_shapes=True, show_layer_names=True))
</code></pre>
<p>Outputting:</p>
<pre><code># Before Train
hidden_weights [array([[1.2073823]], dtype=float32), array([0.], dtype=float32)]
output_weights [array([[0.7683755]], dtype=float32), array([0.], dtype=float32)]
[[38.964367]
[ 2.783169]]
# After Train
hidden_weights [array([[1.2073823]], dtype=float32), array([0.], dtype=float32)]
output_weights [array([[0.6712855]], dtype=float32), array([-0.09659182], dtype=float32)]
[[33.944336 ]
[ 2.3349032]]
</code></pre>
<p>As you see, the <code>hidden layer</code> weights as expected; it didn't change the weights, but the <code>output layer</code> still updated its weight.</p>
<p>So, how to prevent <code>output layer</code>'s weight from updating its weight based on <code>X_mask</code>?</p>
|
<python><arrays><numpy><tensorflow><keras>
|
2024-10-05 23:10:30
| 1
| 353
|
Citra Dewi
|
79,058,037
| 2,103,394
|
Python 3.10+: call async function within a property in a way that does not produce errors if that property is accessed in an async function
|
<p>I have built a library that includes an ORM with the following use pattern:</p>
<pre class="lang-py prettyprint-override"><code>class Model(SqlModel):
columns = ('id', 'details', 'parent_id')
parent: RelatedModel
Model.parent = belongs_to(Model, Model, 'parent_id')
...
def example(some_id, some_other_id):
record = Model.find(some_id)
if not record.parent:
record.parent = Model.find(some_other_id)
record.parent().save()
</code></pre>
<p>The most recent improvement is that related models accessed through the property are loaded on the first read of the property to avoid making library users write <code>model.parent().reload()</code> before the property is first accessed. E.g. <code>parent_details = Model.find(some_id).parent.details</code>.</p>
<p>An example implementation within a synchronous relation can be found <a href="https://github.com/k98kurz/sqloquent/blob/master/sqloquent/relations.py#L381" rel="nofollow noreferrer">here</a>.</p>
<p>I replicated the whole SQL model, query builder, and ORM to use async. An example implementation within an asynchronous relation can be found <a href="https://github.com/k98kurz/sqloquent/blob/master/sqloquent/asyncql/relations.py#L377" rel="nofollow noreferrer">here</a>. The async use pattern is similar to the synchronous one:</p>
<pre class="lang-py prettyprint-override"><code>class AsyncModel(AsyncSqlModel):
columns = ('id', 'details', 'parent_id')
parent: AsyncRelatedModel
AsyncModel.parent = async_belongs_to(AsyncModel, AsyncModel, 'parent_id')
...
async def example(some_id, some_other_id):
record = await AsyncModel.find(some_id)
if not record.parent:
record.parent = await AsyncModel.find(some_other_id)
await record.parent().save()
</code></pre>
<p>Reading the property within an async function (<code>if not record.parent</code> in the second pseudo-code example) results in <code>RuntimeError: asyncio.run() cannot be called from a running event loop</code>. I tried fixing this by replacing <code>asyncio.run</code> with <code>asyncio.get_running_loop().run_until_complete</code>, but it results in <code>RuntimeError: This event loop is already running</code>. I attempted running it in a new loop, but that results in <code>RuntimeError: Cannot run the event loop while another loop is running</code>. Using the nest-asyncio package removed the <code>RuntimeError</code> but resulted in a <code>ResourceWarning</code> for an unclosed event loop, and the package repo was archived, so I need a better solution.</p>
<p>The advice from <a href="https://stackoverflow.com/questions/78824996/run-an-async-function-from-a-sync-function-within-an-already-running-event-loop">a similar question</a> is to rewrite the function to be async, but this is not a plausible solution for a property.</p>
<p>Is there a solution that does not involve re-engineering the entire library use pattern or depending on an abandoned package?</p>
|
<python><python-3.x><async-await><python-asyncio>
|
2024-10-05 22:25:57
| 0
| 1,216
|
Jonathan Voss
|
79,057,984
| 3,753,826
|
Multiple versions of single Jupyter Notebook for different audiences (e.g. tutors/students) using tags
|
<p>I'm working on creating assignments for students using Jupyter Notebooks. My goal is to generate different versions of the same Notebook to distribute to tutors and students.</p>
<p>I need certain cells, such as those containing exercise solutions, to be included only in the tutor's version, while other cells should be exclusive to the student's version. Sometimes, I want to remove entire cells, and other times, just the output.</p>
<p>I believe this can be achieved by tagging cells and using <code>nbconvert</code> to save the Notebook (<code>.ipynb</code>) into another <code>.ipynb</code> file.</p>
<p>A similar method was suggested for converting <code>.ipynb</code> to HTML <a href="https://stackoverflow.com/a/72462601/3753826">here</a>. I'm interested in hearing if anyone has experience, or creative new ideas, in creating different versions of a single Jupyter Notebook for various audiences.</p>
<p>My project is quite extensive, so I need an automated solution. While I could manually create different versions by copying and pasting if I only had a few Notebooks, this approach isnβt feasible for a larger scale.</p>
<p>Following initial comments, I tried assigning the tag <code>"remove_cell"</code> to some of the Notebook cells (<a href="https://gist.github.com/53b63a9140acc6b1bb3aa8a9eb33c894" rel="nofollow noreferrer">example notebook here</a>) and then removing them using the suggestion of <a href="https://stackoverflow.com/a/48084050/3753826">this answer</a></p>
<pre><code>jupyter nbconvert nbconvert-example.ipynb --TagRemovePreprocessor.remove_cell_tags='{"remove_cell"}'
</code></pre>
<p>However, the tagged cells are not removed when saving to Jupyter Notebook with the <code>--to notebook</code> option. They are only removed with the <code>--to html</code> option.</p>
|
<python><jupyter-notebook>
|
2024-10-05 21:38:28
| 2
| 17,652
|
divenex
|
79,057,748
| 2,276,054
|
Flask and logging - some questions
|
<p>I. When creating a Flask application, <strong>where exactly do I initialize logging</strong>?<br />
Right before <code>app = Flask(__name__)</code> ?<br />
Right after it?</p>
<p>II. By default, Flask logs HTTP requests in the common format, i.e.</p>
<pre><code>- 127.0.0.1 - - [05/Oct/2024 20:39:49] "GET / HTTP/1.1" 200 -
</code></pre>
<p><strong>How can I change it</strong> to the following?</p>
<pre><code>Received POST request "/rest/api/1234/567" from IP 192.168.0.1. Content-Type =
"application/json", Content-Length = 12345, User-Agent = "Mozilla 1.2.3"
</code></pre>
<p>III. Last but not least, does the development server included with Flask have some <strong>shutdown handler that can be registered to?</strong> When user presses Ctrl+C to kill it, I would like to perform some final cleanup operations...</p>
|
<python><flask>
|
2024-10-05 18:59:27
| 1
| 681
|
Leszek Pachura
|
79,057,745
| 13,392,257
|
Can't create objects on start
|
<p>I created a function and want to run it automatically on start. The function creates several objects</p>
<p>I have an error <code>AppRegistryNotReady("Apps aren't loaded yet.")</code>
Reason is clear - the function imports objects from another application (parser_app)</p>
<p>I am starting app like this <code> gunicorn --bind 0.0.0.0:8000 core_app.wsgi</code></p>
<pre><code># project/core_app/wsgi.py
import os
from django.core.wsgi import get_wsgi_application
from django.core.management import call_command
from scripts.create_schedules import create_cron_templates
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core_app.settings')
application = get_wsgi_application()
call_command("migrate")
call_command("collectstatic", interactive=False)
create_cron_templates()
</code></pre>
<p>Full error:</p>
<pre><code>django_1 | File "/project/core_app/wsgi.py", line 14, in <module>
django_1 | from scripts.create_schedules import create_cron_templates
django_1 | File "/project/scripts/create_schedules.py", line 1, in <module>
django_1 | from parser_app.models import Schedule
django_1 | File "/project/parser_app/models.py", line 7, in <module>
django_1 | class TimeBase(models.Model):
django_1 | File "/usr/local/lib/python3.9/site-packages/django/db/models/base.py", line 127, in __new__
django_1 | app_config = apps.get_containing_app_config(module)
django_1 | File "/usr/local/lib/python3.9/site-packages/django/apps/registry.py", line 260, in get_containing_app_config
django_1 | self.check_apps_ready()
django_1 | File "/usr/local/lib/python3.9/site-packages/django/apps/registry.py", line 138, in check_apps_ready
django_1 | raise AppRegistryNotReady("Apps aren't loaded yet.")
django_1 | django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
<p>Code of the function</p>
<pre><code># project/scripts/create_schedules.py
from parser_app.models import Schedule
def create_cron_templates():
Schedule.objects.get_or_create(
name="1",
cron="0 9-18/3 * * 1-5#0 19-23/2,0-8/2 * * 1-5#0 */5 * * 6-7"
)
</code></pre>
|
<python><django>
|
2024-10-05 18:58:46
| 2
| 1,708
|
mascai
|
79,057,675
| 302,274
|
In Django, how to retrieve an id to use in a dynamic link?
|
<p>I have a Django app. In my sidebar I have a navigation link that goes to a list of reports. That link needs an id to get the correct list of reports.</p>
<p>The <code><li></code> element in base.html that links to the ga_reports page is below. This link needs the <code>app_config.id</code> to resolve correctly. With this element <code>{{ app_config.id }}</code> does not exist because <code>{{ app_config }}</code> does not exist. I will add that on the Reports page, this link works as <code>{{ app_config }}</code> gets created.</p>
<p>Why does {{ app_config }} exist on the Reports page but not on the parent page?</p>
<pre><code> <li class="relative px-12 py-3">
<a class="inline-flex items-center w-full text-sm font-semibold transition-colors duration-150 hover:text-gray-800 dark:hover:text-gray-200 "
href="/base/ga_reports/{{ app_config.id }}">
<span class="ml-4">Reports</span>
</a>
</li>
</code></pre>
<p>These are entries from <code>base/urls.py</code></p>
<pre><code> path('app_config', app_config_views.app_config, name='app_config'),
...
path('ga_reports/<int:app_config_id>', ga_views.ga_reports, name='ga_reports'),
</code></pre>
<p>The AppConfiguration model is included on <code>base/views.py</code></p>
<pre><code>from .models import AppConfiguration
</code></pre>
<p>This is <code>base/ga_views.py</code></p>
<pre><code>@login_required
def ga_reports(request, app_config_id):
app_config = AppConfiguration.objects.filter(id = app_config_id).first()
if not app_config:
messages.error(request, "Invalid Configuration")
return redirect('app_config')
return render(request, 'base/ga_reports.html', {'app_config': app_config})
</code></pre>
<p>This is <code>app_config_views.py</code>.</p>
<pre><code>@login_required
def app_config(request):
app_configs = []
for app_type in APP_TYPE:
app_type_subscription = UserSubscription.objects.filter(user_id = request.user.id, app_type = app_type[0], is_active = True, is_expired = False).first()
app_configuration = AppConfiguration.objects.filter(user_id = request.user.id, app_type = app_type[0], is_deleted = False, is_verified = True).first()
is_subscribed = True if app_type_subscription else False #add more configurations?
if app_configuration and is_subscribed:
app_configuration.is_active = True
app_configuration.save()
print("Done")
is_app_configured = True if (is_subscribed and app_configuration) or (not is_subscribed) else False
cancel_subscription_url = app_type_subscription.cancel_url if is_subscribed and app_type_subscription else ''
app_configs.append({
'app_type_k': app_type[0],
'app_type_v': app_type[1],
'app_type_subscription': app_type_subscription,
'app_configuration': app_configuration,
'is_subscribed': is_subscribed,
'is_app_configured': is_app_configured,
'cancel_subscription_url': cancel_subscription_url
})
app_config_id = 0
content = {
'app_config_id': app_config_id,
'app_configs': app_configs
}
app_config = None
if app_config_id != 0:
app_config = AppConfiguration.objects.filter(id = app_config_id).first()
if not app_config:
messages.error(request, "Invalid Configuration")
return redirect('app_config')
content.update({"app_config": app_config})
if request.method == 'POST':
form = AppConfigurationForm(data=request.POST, instance=app_config)
if form.is_valid():
app_config = form.save(commit=False)
app_config.user_id = request.user.id
app_config.is_active = True
app_config.save()
if app_config_id != 0:
messages.success(request, "Configuration Updated!")
else:
messages.success(request, "Configuration Added!")
return redirect('app_config')
else:
for field_name, error_list in json.loads(form.errors.as_json()).items():
error_message = error_list[0]["message"]
error_message = error_message.replace('This field', f'This field ({field_name})')
content.update({f'{field_name}_error': True})
messages.error(request, error_message)
return render(request, 'base/app_configurations.html', content)
</code></pre>
|
<python><django>
|
2024-10-05 18:15:45
| 1
| 3,035
|
analyticsPierce
|
79,057,531
| 22,407,544
|
How to get the estimated remaining time of an already started Celery task?
|
<p>I am using django with Celery and redis. I would like that when a user initiates a Celery task my app estimates the amount of time left until the task is completed(so that I can, for example, represent this data to the frontend as a progress bar).</p>
<p>For example, given this polling function:</p>
<p>views.py:</p>
<pre><code>@csrf_protect
def initiate_task(request):
task = new_celery_task.delay(parmeter_1, parametr_2)
@csrf_protect
def poll_task_status(request, task_id): #task_id provided by frontend
import base64
task_result = AsyncResult(task_id)
if task_result.ready():
try:
...
response_data = {
'status': 'completed',
}
except Task.DoesNotExist:
return JsonResponse({'status': 'error', 'message': 'There has been an error'}, status=404)
elif task_result.state == 'REVOKED':
return JsonResponse({'status': 'terminated', 'message': 'Task was terminated'})
else:
#add code to calculate estimated remaining time here
remaining_time_est = estimate_remaining_time() #estimate task by id
return JsonResponse({'status': 'pending'
#add code to return estimated remaining time here
'est_remaining_time':remaining_time_est
})
</code></pre>
<p>and task:</p>
<p>task.py:</p>
<pre><code>app = Celery('myapp')#
@app.task
def new_celery_task(arg_1, arg_2):
#complete task
</code></pre>
<p>How would I estimate the remaining time left on an initiated task?</p>
|
<python><django><celery>
|
2024-10-05 17:05:38
| 0
| 359
|
tthheemmaannii
|
79,057,529
| 18,769,241
|
IndexError: index 7 is out of bounds for axis 0 with size 7
|
<p>I am trying to assess whether the lips of a person are moving too much while the mouth is closed (to conclude it is chewing).</p>
<p>The mouth closed part is done without any issue, but when I try to assess the lip movement through landmarks (<code>dlib</code>) there seems to be a problem with the last landmark of the mouth.</p>
<p>Inspired by the mouth example (<a href="https://github.com/mauckc/mouth-open/blob/master/detect_open_mouth.py#L17" rel="nofollow noreferrer">https://github.com/mauckc/mouth-open/blob/master/detect_open_mouth.py#L17</a>), I wrote the following function:</p>
<pre><code>def lips_aspect_ratio(shape):
# grab the indexes of the facial landmarks for the lip
(mStart, mEnd) = (61, 68)
lip = shape[mStart:mEnd]
print(len(lip))
# compute the euclidean distances between the two sets of
# vertical lip landmarks (x, y)-coordinates
# to reach landmark 68 I need to get lib[7] not lip[6] (while I get lip[7] I get IndexOutOfBoundError)
A = dist.euclidean(lip[1], lip[6]) # 62, 68
B = dist.euclidean(lip[3], lip[5]) # 64, 66
# compute the euclidean distance between the horizontal
# lip landmark (x, y)-coordinates
C = dist.euclidean(lip[0], lip[4]) # 61, 65
# compute the lip aspect ratio
mar = (A + B) / (2.0 * C)
# return the lip aspect ratio
return mar
</code></pre>
<p>The landmark of the lips are <code>(61, 68)</code>, when I extract the lip as <code>lip = shape[61:68]</code> and try to access the last landmark as <code>lip[7]</code> I get the following error:</p>
<pre><code>IndexError: index 7 is out of bounds for axis 0 with size 7
</code></pre>
<p>Why is that? and How to get the last landmark of the lip/face</p>
|
<python><numpy><face-recognition><dlib><face-landmark>
|
2024-10-05 17:04:24
| 1
| 571
|
Sam
|
79,057,462
| 1,084,865
|
How to calculate expected value of softmax values in Tensorflow?
|
<p>I have a model with a final softmax layer for N categories. These categories are ordered and numerical, so it's meaningful to calculate statistics over the probability distribution given by softmax.</p>
<p>Assume the category values are simply an increasing index sequence. So, the first has value 0, the second 1, etc. I'd like to calculate the expected value (\sum_i=0^{N-1} i p_i).</p>
<p>How can I do this in Tensorflow as an additional output layer (of dimension 1)? Perhaps using frozen weights (0 to N-1)? I can then apply a loss to this output value.</p>
<p>Alternatively, if an output as part of the model architecture is not possible, then how could I implement this in a loss class? This would require something like tf.ones_like() to fill a tensor along an axis with increasing integers (0 to N-1).</p>
|
<python><tensorflow><keras><softmax>
|
2024-10-05 16:29:05
| 1
| 785
|
jensph
|
79,057,245
| 2,826,018
|
Why can't my LSTM determine if a sequence is odd or even in the number of ones?
|
<p>I am trying to understand LSTMs and wanted to implement a simple example of classifying a sequence as "0" if the number of "1" in the sequence is odd and as "1" if the number of "1" is even. This is my data generation and training routine:</p>
<pre><code>import torch
import numpy as np
from torch.utils.data import DataLoader
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from Dataset import LSTMDataset # Custom Dataset
from Network import LSTMNet # Custom Network
if __name__ == "__main__":
numSamples = 1000
sampleLength = 5
samples = np.ndarray( shape=( numSamples, sampleLength ), dtype=np.float32 )
labels = np.ndarray( shape=( numSamples ), dtype=np.float32 )
for s in range( numSamples ):
sample = np.random.choice( [ 0, 1 ], size=sampleLength )
samples[ s ] = sample
even = np.count_nonzero( sample == 1 ) % 2 == 0
labels[ s ] = int( even )
X_train, X_test, y_train, y_test = train_test_split( samples, labels, test_size=0.25, random_state=42 )
trainingSet = LSTMDataset( X_train, y_train )
testSet = LSTMDataset( X_test, y_test )
training_loader = DataLoader( trainingSet, batch_size=1, shuffle=True )
validation_loader = DataLoader( testSet, batch_size=1, shuffle=False )
model = LSTMNet( inputSize= sampleLength )
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
loss_fn = torch.nn.BCELoss()
for epoch in range( 10 ):
yPredicted = []
yTruth = []
for i, data in enumerate( training_loader ):
inputs, labels = data
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()
yTruth.append( int( labels.item() ) )
yPredicted.append( int( torch.round( outputs ).item() ) )
accuracy = accuracy_score( yTruth, yPredicted )
print( f"Accuracy: {accuracy:.2f}" )
</code></pre>
<p>My dataset and network:</p>
<pre><code>class LSTMDataset( Dataset ):
def __init__( self, x, y ):
self.x = x
self.y = y
def __len__(self):
return self.y.shape[ 0 ]
def __getitem__(self, idx):
sample, label = self.x[ idx ], self.y[ idx ]
return sample.reshape( ( -1, 1 ) ), label.reshape( ( 1 ) )
class LSTMNet( nn.Module ):
def __init__( self, sequenceLength ):
super().__init__()
self.hidden_size = 10
self.lstm = nn.LSTM( input_size=1, hidden_size=self.hidden_size, num_layers=2, batch_first=True )
self.net = nn.Sequential(
nn.Flatten(),
nn.ReLU(),
nn.Linear( sequenceLength * self.hidden_size, 1 ),
nn.Sigmoid()
)
def forward(self, x):
x, _ = self.lstm( x )
x = self.net( x )
return x
</code></pre>
<p>But unfortunately, my training accuracy never goes beyond 53%. Does anyone have any tips what I am doing wrong?</p>
<p>The input shape to my network is <code>( 1, 5, 1 )</code> and I wanted to fed the sequence elements one after another to my network that's why I chose <code>( 1, 5, 1 )</code> and not <code>(1, 1, 5 )</code>.</p>
|
<python><pytorch><lstm>
|
2024-10-05 14:32:22
| 1
| 1,724
|
binaryBigInt
|
79,057,188
| 1,090,446
|
unable to recognize my inputs in python plotty - dash
|
<pre><code>dash_app = dash.Dash(__name__)
dash_app.layout = html.Div([
html.Div([
html.P("Range: ", style={'margin-right': '10px'}),
dcc.Input(id='user-min-strike', type='number', value=5600, style={'margin-right': '10px'}),
dcc.Input(id='user-max-strike', type='number', value=5800),
dcc.Input(id='user-minutes', type='number', value=100),
], style={'display': 'flex', 'align-items': 'center', 'margin-bottom': '20px'}),
dcc.Graph(id='live-update-graph', config={'displayModeBar': False}),
dcc.Interval(
id='interval-component',
interval=2*1000, # Update every 2 seconds
n_intervals=0
)
])
</code></pre>
<p>my callback:</p>
<pre><code>@dash_app.callback(
Output('live-update-graph', 'figure'),
[Input('interval-component', 'n_intervals'),
Input('user-min-strike', 'value'),
Input('user-max-strike', 'value'),
Input('user-minutes', 'value')
])
</code></pre>
<p>i get this error:</p>
<pre><code>lambda ind: inputs_state[ind], inputs_state_indices
IndexError: list index out of range
</code></pre>
<p>i got into dash code and print the number of inputs right after they get them from the body, it only includes one, the interval
even though they appear in the webpage</p>
<pre><code>print the body from dash code:
{'output': 'live-update-graph.figure', 'outputs': {'id': 'live-update-graph', 'property': 'figure'}, 'inputs': [{'id': 'interval-component', 'property': 'n_intervals', 'value': 148}], 'changedPropIds': ['interval-component.n_intervals'], 'parsedChangedPropsIds': ['interval-component.n_intervals']}
</code></pre>
|
<python><plotly-dash>
|
2024-10-05 13:46:31
| 1
| 1,159
|
shd
|
79,057,058
| 4,910,962
|
Usage of .net syntax in python
|
<p>I'm trying to translate a programm written in .net to python. I'm using clr and pythonnet</p>
<pre><code>clr.AddReference("System")
clr.AddReference("System.IO")
clr.AddReference("System.Drawing")
from System.Reflection import Assembly
from System import String, Func, Action, Array, Int32
from System.IO import FileInfo
from System.Drawing import Point
</code></pre>
<p>which will be imported correctly</p>
<p>Further I've imported a dll in the following way:</p>
<pre><code>dll_path = os.path.abspath("Flir.Atlas.Cronos.7.5.0/lib/net452/Flir.Atlas.Image.dll")
os.environ['PATH'] += ";%s"%os.path.dirname(dll_path)
clr.AddReference(dll_path)
assembly = Assembly.LoadFile(dll_path)
clr.AddReference("Flir.Atlas.Image")
import Flir.Atlas.Image as ft
</code></pre>
<p>Now I try to read from the ft with the following, correct syntax:</p>
<pre><code>location = Point (X, Y)
ft.ThermalImage.GetValueAt(location)
</code></pre>
<p>where Point is imported as above mentioned. X and Y are the position. But it is not accepted. I get an typeerror:
<code>TypeError: No method matches given arguments for ThermalImage.GetValueAt: ()</code></p>
<p>From the SDK docs I have only this information:</p>
<blockquote>
<p>GetValueAt()
ThermalValue Flir.Atlas.Image.ThermalImage.GetValueAt (Point location)</p>
<p>Gets a ThermalValue from a position in the Thermal Image.</p>
<p>Parameters:
location: A point that specifies the location for the value to read.</p>
<p>Returns:
The ThermalValue including value and stateThermalValue.</p>
</blockquote>
<p>As direct parameters are not accepted I tried to make a function which return a Point object:</p>
<pre><code>def position():
pos = Point (measurement.X, measurement.Y)
return pos
</code></pre>
<p>which returns <code><System.Drawing.Point object at 0x00000219AC97DA40></code>
and give it directly to the call</p>
<pre><code>x = ft.ThermalImage.GetValueAt(location)
and got
TypeError: No method matches given arguments for ThermalImage.GetValueAt: ()
</code></pre>
<p>How can I pass the position correctly to the .GetValueAt(position) call?</p>
|
<python><c#><.net><sdk><flir>
|
2024-10-05 12:32:57
| 0
| 354
|
Papageno
|
79,056,958
| 13,634,560
|
boxplots showing flat line instead of box and whiskers, plotly python
|
<p>I am attempting to chart a few boxplots, organized by a categorical variable with go.Box. However, when I plot the data, there is just a flat line that shows up. [Note: this question seems well-documented in R, but I have not found a similar question for Python. please link to it if you find one.]</p>
<p>Plotting with px.express does not replicate this error, it seems to be just with go.Box.</p>
<p>MRE for multiple columns:</p>
<pre><code>fares = [random.random() for g in np.arange(10000)]
types = ["new", "new", "old", "old", "really_old"] * 2000
df = pd.DataFrame({
"bus_fares": fares,
"bus_types": types
})
# -----
traces = []
for k in df["bus_types"].unique():
trace = go.Box(
x = [k],
y = df[df["bus_types"]==k]["bus_fares"],
marker={"color": next(color_iter)},
name="bus_types",
showlegend=False
)
traces.append(trace)
go.Figure(traces)
</code></pre>
<p>This code renders the plot shown below.
<a href="https://i.sstatic.net/UM1qx0ED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UM1qx0ED.png" alt="enter image description here" /></a></p>
<p>even if we simplify and just plot one box, without the loop, it still shows the same behavior.</p>
<pre><code>#-----
x = "bus_types"
y = "bus_fares"
k = "old"
go.Figure(go.Box(x=[k], y=df[df[x]==k][y].values, name=x, showlegend=False))
</code></pre>
<p><a href="https://i.sstatic.net/yrtqiIl0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrtqiIl0.png" alt="enter image description here" /></a></p>
<p>Does anyone have any ideas as to how to fix this behavior in Python?</p>
|
<python><plotly>
|
2024-10-05 11:32:37
| 1
| 341
|
plotmaster473
|
79,056,727
| 11,149,556
|
LALR Grammar for transforming text to csv
|
<p>I have a processor trace output that has the following format:</p>
<pre class="lang-none prettyprint-override"><code>Time Cycle PC Instr Decoded instruction Register and memory contents
905ns 86 00000e36 00a005b3 c.add x11, x0, x10 x11=00000e5c x10:00000e5c
915ns 87 00000e38 00000693 c.addi x13, x0, 0 x13=00000000
925ns 88 00000e3a 00000613 c.addi x12, x0, 0 x12=00000000
935ns 89 00000e3c 00000513 c.addi x10, x0, 0 x10=00000000
945ns 90 00000e3e 2b40006f c.jal x0, 692
975ns 93 000010f2 0d01a703 lw x14, 208(x3) x14=00002b20 x3:00003288 PA:00003358
985ns 94 000010f6 00a00333 c.add x6, x0, x10 x6=00000000 x10:00000000
995ns 95 000010f8 14872783 lw x15, 328(x14) x15=00000000 x14:00002b20 PA:00002c68
1015ns 97 000010fc 00079563 c.bne x15, x0, 10 x15:00000000
</code></pre>
<p>Allegedly, this is <code>\t</code> separated, however this is not the case, as inline spaces are found here and there. I want to transform this into a <code>.csv</code> format with a header row and the entries following. For example:</p>
<pre class="lang-none prettyprint-override"><code>Time,Cycle,PC,Instr,Decoded instruction,Register and memory contents
905ns,86,00000e36,00a005b3,"c.add x11, x0, x10", x11=00000e5c x10:00000e5c
915ns,87,00000e38,00000693,"c.addi x13, x0, 0", x13=00000000
...
</code></pre>
<p>To do that, I am using Lark in python3 (>=3.10). And I came up with the following grammar for the source format:</p>
<h2>Lark Grammar</h2>
<pre><code>start: header NEWLINE entries+
# Header is expected to be
# Time\tCycle\tPC\tInstr\tDecoded instruction\tRegister and memory contents
header: HEADER_FIELD+
# Entries are expected to be e.g.,
# 85ns 4 00000180 00003197 auipc x3, 0x3000 x3=00003180
entries: TIME \
CYCLE \
PC \
INSTR \
DECODED_INSTRUCTION \
reg_and_mem? NEWLINE
reg_and_mem: REG_AND_MEM+
///////////////
// TERMINALS //
///////////////
HEADER_FIELD: /
[a-z ]+ # Characters that are optionally separated by a single space
/xi
TIME: /
[\d\.]+ # One or more digits
[smunp]s # Time unit
/x
CYCLE: INT
PC: HEXDIGIT+
INSTR: HEXDIGIT+
DECODED_INSTRUCTION: /
[a-z\.]+ # Instruction mnemonic
([-a-z0-9, ()]+)? # Optional operand part (rd,rs1,rs2, etc.)
(?= # Stop when
x[0-9]{1,2}[=:] # Either you hit an xN= or xN:
|PA: # or you meet PA:
|\s+$ # or there is no REG_AND_MEM and you meet a \n
)
/xi
REG_AND_MEM: /
(?:[x[0-9]+|PA)
[=|:]
[0-9a-f]+
/xi
///////////////
// IMPORTS //
///////////////
%import common.HEXDIGIT
%import common.NUMBER
%import common.INT
%import common.UCASE_LETTER
%import common.CNAME
%import common.NUMBER
%import common.WS_INLINE
%import common.WS
%import common.NEWLINE
///////////////
// IGNORE //
///////////////
%ignore WS_INLINE
</code></pre>
<p>Here is my simple driver code:</p>
<pre><code>import lark
class TraceTransformer(lark.Transformer):
def start(self, args):
return lark.Discard
def header(self, fields):
return [str(field) for field in fields]
def entries(self, args):
print(args)
...
# the grammar provided above
# stored in the same directory
# as this file
parser = lark.Lark(grammar=open("grammar.lark").read(),
start="start",
parser="lalr",
transformer=TraceTransformer())
# This is parsed by the grammar without problems
# Note that I omit from the c.addi the operand
# part and its still parsed. This is ok as some
# mnemonics do not have operands (e.g., fence).
dummy_text_ok1 = r"""Time Cycle PC Instr Decoded instruction Register and memory contents
905ns 86 00000e36 00a005b3 c.add x11, x0, x10 x11=00000e5c x10:00000e5c
915ns 87 00000e38 00000693 c.addi x13, x0, 0 x13=00000000
925ns 88 00000e3a 00000613 c.addi x12=00000000
935ns 89 00000e3c 00000513 c.addi x10, x0, 0 x10=00000000"""
# Now here starts trouble. Note that here we don't
# have a REG_AND_MEM part on the jump instruction.
# However this is still parsed with no errors.
dummy_text_ok2 = r"""Time Cycle PC Instr Decoded instruction Register and memory
945ns 90 00000e3e 2b40006f c.jal x0, 692
"""
# But here, when the parser meets the line of cjal
# where there is no REG_AND_MEM part and a follow
# up entry exists we have an issue.
dummy_text_problematic = r"""Time Cycle PC Instr Decoded instruction Register and memory contents
905ns 86 00000e36 00a005b3 c.add x11, x0, x10 x11=00000e5c x10:00000e5c
915ns 87 00000e38 00000693 c.addi x13, x0, 0 x13=00000000
925ns 88 00000e3a 00000613 c.addi x12, x0, 0 x12=00000000
935ns 89 00000e3c 00000513 c.addi x10, x0, 0 x10=00000000
945ns 90 00000e3e 2b40006f c.jal x0, 692
975ns 93 000010f2 0d01a703 lw x14, 208(x3) x14=00002b20 x3:00003288 PA:00003358
985ns 94 000010f6 00a00333 c.add x6, x0, x10 x6=00000000 x10:00000000
995ns 95 000010f8 14872783 lw x15, 328(x14) x15=00000000 x14:00002b20 PA:00002c68
1015ns 97 000010fc 00079563 c.bne x15, x0, 10 x15:00000000
"""
parser.parse(dummy_text_ok1)
parser.parse(dummy_text_ok2)
parser.parse(dummy_text_problematic)
</code></pre>
<h2>The Runtime Error</h2>
<pre class="lang-none prettyprint-override"><code>No terminal matches 'c' in the current parser context, at line 6 col 45
945ns 90 00000e3e 2b40006f c.jal x0, 692
^
Expected one of:
* DECODED_INSTRUCTION
</code></pre>
<p>So this indicates that the <code>DECODED_INSTRUCTION</code> rule is not behaving as expected.</p>
<h2>The Rule</h2>
<pre><code>DECODED_INSTRUCTION: /
[a-z\.]+ # Instruction mnemonic
([-a-z0-9, ()]+)? # Optional operand part (rd,rs1,rs2, etc.)
(?= # Stop when
x[0-9]{1,2}[=:] # Either you hit an xN= or xN:
|PA: # or you meet PA:
|\s+$ # or there is no REG_AND_MEM and you meet a \n
)
/xi
</code></pre>
<p>This rule is really heavy, it has to match the whole ISA of the processor, which is in RISC-V btw. So here step-by-step I have</p>
<ul>
<li>The instruction mnemonic regex as a sequence of a-z characters and optional dots (.)</li>
<li>The optional operand part (there exist instructions in the ISA with no operands).</li>
</ul>
<p>Now, this was tricky. Instead of accounting from every possible instruction variation in my rules above, I thought to leverage the fact that there exist characters in the following column (Register and memory contents) which do not exist in any instruction variation of the ISA. This is where the look-ahead part of the regex comes in place. I stop when</p>
<ul>
<li>Either I have reached the xN= part or the xN: part of the field</li>
<li>Either I have reached the PA: part of the field</li>
<li>OR I have reached the end of the line (<code>$</code>) as the field does not exist.</li>
</ul>
<p>However, the last case does not seem to work as intended, as shown in the above example. The way I see it, this seems OK to either stop when you meet one of the two criteria, OR you have encountered a new line (implying that the following part is omitted for the current entry). Did I blunder something in the regex part?</p>
|
<python><regex><grammar><ebnf><lark-parser>
|
2024-10-05 09:19:50
| 2
| 479
|
ex1led
|
79,056,011
| 10,473,393
|
http.client works but requests throws read timeout
|
<p>I'm just trying to understand. When using <code>requests</code> the request throws 403 (when without headers) or Read Timeout (when with headers). Doing the same thing with <code>http.client</code> gets 200 status code as response.</p>
<p>The url i'm trying to fetch is: <a href="https://img.uefa.com/imgml/uefacom/uel/social/og-default.jpg" rel="nofollow noreferrer">https://img.uefa.com/imgml/uefacom/uel/social/og-default.jpg</a></p>
<p>Code that fails:</p>
<pre class="lang-py prettyprint-override"><code>import requests
url = 'https://img.uefa.com/imgml/uefacom/uel/social/og-default.jpg'
try:
response = requests.get(url, verify=False, timeout=10) # Disable SSL verification
response.raise_for_status()
except requests.exceptions.RequestException as e:
print("Error:", e)
</code></pre>
<p>Code that works:</p>
<pre class="lang-py prettyprint-override"><code>import http.client
import ssl
conn = http.client.HTTPSConnection("img.uefa.com", context=ssl._create_unverified_context())
conn.request("GET", "/imgml/uefacom/uel/social/og-default.jpg")
response = conn.getresponse()
print(response.status, response.reason)
conn.close()
</code></pre>
<p>I've tried many things, adding multiple headers, but none worked.</p>
<p>The following command in curl also works</p>
<pre class="lang-bash prettyprint-override"><code>curl -v "https://img.uefa.com/imgml/uefacom/uel/social/og-default.jpg" --output image.jpg
</code></pre>
<p>Also opening in browser works.</p>
<p>Note: All requests done locally</p>
<p>Does <code>requests</code> do any step that may impact in this problem?</p>
|
<python>
|
2024-10-04 22:58:43
| 1
| 1,739
|
Alexander Santos
|
79,055,995
| 3,802,331
|
Airflow Python Operator TypeError: got multiple values for keyword argument 'op_kwargs'
|
<pre><code>Broken DAG: [<redacted>]
Traceback (most recent call last):
File "<redacted>", line 198, in <module>
some_task_op() >> Transfer >> short_circuit() >> Process >> Summarize
^^^^^^^^^^^^^^
File "/python/env/lib/python3.12/site-packages/airflow/decorators/base.py", line 372, in __call__
op = self.operator_class(
^^^^^^^^^^^^^^^^^^^^
TypeError: airflow.decorators.sensor.DecoratedSensorOperator() got multiple values for keyword argument 'op_kwargs'
</code></pre>
<p>This error is thrown for any Python operator in my DAG, and it's quite puzzling because I'm only giving the decorator <code>op_kwargs</code> once.</p>
<ul>
<li>Running airflow 2.9.0</li>
<li>Python 3.12</li>
</ul>
<pre><code># dag.py
default_args = {
'depends_on_past': True,
'email': ['admin@admin.test'],
'email_on_failure': True,
'email_on_retry': False,
'retries': 0,
'start_date': pendulum.datetime(2024, 10, 2),
}
with DAG(
dag_id='mydag',
schedule='0 14 * * *',
catchup=True,
default_args=default_args,
params={
'snapshot': Param(None, type=['null', 'string'], format='date'),
'input_base': Param(Variable.get('some-home', '/path/to/home'), type='string'),
'output_base': Param(Variable.get('some-home', '/path/to/home'), type='string'),
'overwrite': Param(False, type='boolean'),
'deploy_env': Param('production', type=['string'], enum=['staging', 'production'])
}
) as dag:
@task.sensor(
dag=dag,
task_id='some_task_op',
mode='reschedule', # default is 'poke'
poke_interval=10*60,
timeout=4*60*60,
op_kwargs={
'creds': '{{ var.value.get("api-credentials") }}',
},
)
def some_task_op(params, data_interval_end, creds=None):
pass
</code></pre>
|
<python><airflow>
|
2024-10-04 22:49:29
| 1
| 390
|
404
|
79,055,903
| 9,827,719
|
How can I create a Google Cloud Virtual Machine in Python using a Google Cloud Instance Template with reserved regional IP?
|
<p>My code can create Virtual Machines from a Instance Template. However I need to use a reserved regional IP.</p>
<p>I have created the regional IP with the CLI like this:</p>
<pre><code>gcloud compute addresses create test-regional-ip --region europe-north1 --project=sindre-dev
</code></pre>
<p>I list the IPs and see the IP:</p>
<pre><code>gcloud compute addresses list --project=sindre-dev
</code></pre>
<p>Result.</p>
<pre><code>NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
test-regional-ip 35.228.123.45 EXTERNAL europe-north1 RESERVED
</code></pre>
<p>Now when I try to set the regional IP in my script with
<code>external_ip = "35.228.123.45"</code> and run it then I get this error:</p>
<blockquote>
<p>TypeError: bad argument type for built-in operation</p>
</blockquote>
<p>Can anyone help me create a VM from a instance template with regional IP using Python?</p>
<p><strong>create_instance_from_template()</strong></p>
<pre><code>from google.cloud import compute_v1
from src.routes.run_fw_engagements.c_instance_template.helpers.wait_for_extended_operation import \
wait_for_extended_operation
def create_instance_from_template(
project_id: str,
zone: str,
instance_name: str,
instance_template_url: str,
external_ip: str = None) -> compute_v1.Instance:
"""
Creates a Compute Engine VM instance from an instance template and optionally assigns an external IP.
Args:
project_id: ID or number of the project you want to use.
zone: Name of the zone you want to check, for example: us-west3-b
instance_name: Name of the new instance.
instance_template_url: URL of the instance template used for creating the new instance.
It can be a full or partial URL.
external_ip: (Optional) External IP address to be assigned to the instance.
Returns:
Instance object.
"""
instance_client = compute_v1.InstancesClient()
# Initialize the instance resource with a name
instance_resource = compute_v1.Instance(name=instance_name)
# Construct the request for instance insertion
instance_insert_request = compute_v1.InsertInstanceRequest(
project=project_id,
zone=zone,
instance_resource=instance_resource, # Assign the instance resource here
)
instance_insert_request.source_instance_template = instance_template_url
# Check if an external IP was provided
if external_ip:
# Define the network interface with the external IP
network_interface = compute_v1.NetworkInterface()
# Create the AccessConfig for the external IP
print(f"Creating AccessConfig with external_ip: {external_ip}")
access_config = compute_v1.AccessConfig(
name="External NAT",
type_=compute_v1.AccessConfig.Type.ONE_TO_ONE_NAT,
# Instead of directly setting nat_ip, we do it through initialization
)
# Assign the nat_ip directly to the access_config after initialization
access_config.nat_ip = external_ip # Ensure this is a string
# Assign the access config to the network interface
network_interface.access_configs = [access_config]
# Set the network interface to the instance resource
instance_resource.network_interfaces = [network_interface]
# Call the API to create the instance
operation = instance_client.insert(instance_insert_request)
# Wait for the operation to complete
wait_for_extended_operation(operation, "instance creation")
# Return the created instance details
return instance_client.get(project=project_id, zone=zone, instance=instance_name)
if __name__ == '__main__':
# Example usage
project_id = "sindre-dev"
zone = "europe-north1-a"
instance_name = "gno-collector-1"
instance_template_url = f"https://www.googleapis.com/compute/v1/projects/{project_id}/global/instanceTemplates/{instance_name}-v1"
external_ip = "35.228.123.45"
create_instance_from_template(
project_id=project_id,
zone=zone,
instance_name=instance_name,
instance_template_url=instance_template_url,
external_ip=external_ip,
)
</code></pre>
<p><strong>wait_for_extended_operation()</strong></p>
<pre><code>from __future__ import annotations
import sys
from typing import Any
from google.api_core.extended_operation import ExtendedOperation
def wait_for_extended_operation(
operation: ExtendedOperation, verbose_name: str = "operation", timeout: int = 300
) -> Any:
"""
Waits for the extended (long-running) operation to complete.
If the operation is successful, it will return its result.
If the operation ends with an error, an exception will be raised.
If there were any warnings during the execution of the operation
they will be printed to sys.stderr.
Args:
operation: a long-running operation you want to wait on.
verbose_name: (optional) a more verbose name of the operation,
used only during error and warning reporting.
timeout: how long (in seconds) to wait for operation to finish.
If None, wait indefinitely.
Returns:
Whatever the operation.result() returns.
Raises:
This method will raise the exception received from `operation.exception()`
or RuntimeError if there is no exception set, but there is an `error_code`
set for the `operation`.
In case of an operation taking longer than `timeout` seconds to complete,
a `concurrent.futures.TimeoutError` will be raised.
"""
result = operation.result(timeout=timeout)
if operation.error_code:
print(
f"Error during {verbose_name}: [Code: {operation.error_code}]: {operation.error_message}",
file=sys.stderr,
flush=True,
)
print(f"Operation ID: {operation.name}", file=sys.stderr, flush=True)
raise operation.exception() or RuntimeError(operation.error_message)
if operation.warnings:
print(f"Warnings during {verbose_name}:\n", file=sys.stderr, flush=True)
for warning in operation.warnings:
print(f" - {warning.code}: {warning.message}", file=sys.stderr, flush=True)
return result
</code></pre>
|
<python><google-cloud-platform>
|
2024-10-04 21:52:42
| 1
| 1,400
|
Europa
|
79,055,895
| 4,875,641
|
Multiprocessing pool stops operating - Windows 11
|
<p>The following code start 5 multiprocessing tasks. Those tasks print a message then go to sleep a random period of time then do this again. That is repeated 10 time so the entire set of tasks should complete in about 50 seconds at most.</p>
<p>When the pool is first created, the addresses of the 5 pool objects are printed.</p>
<p>This code works fine, but it was terminated before it completed all its work. When it is run again, you can see new addresses are assigned for the 5 pool objects.</p>
<p>After this process of stopping the operation before it is done (keyboard interrupt), several times, the code stops working. The tasks are never created and the code falls right through to the end. And the allocation of pool objects show that they are allocating the same objects over and over whereas before new object addresses where assigned.</p>
<p>So it appears that OS resources are being allocated and never released. And when those resources are all used up, the code stops working. But even when this command prompt it is running under is terminated (its window closed down), those resources are not being returned. And when a new command prompts is open and the app run again, you can see the same object IDs being assigned, the tasks are never started, and the code falls though. Normally the code stops at the pool.join() waiting for the workers (task) to complete.</p>
<p>There does not seem to be a way to get the OS to release these pool resources. The only way is to restart the OS.</p>
<p>During debugging of code, it is unlikely that one would ever have it run to completion and to have the pool.close() and poll.join() executed. SO after a few execution of the code being tested, no more tasks are allocated.</p>
<p>Thus the question - how do we ensure the spawned off tasks are terminated and their resources returned to the pool?</p>
<pre><code>import multiprocessing
import time
import random
# Simultaneous tasks running
def task(id):
for i in range(10):
stime = random.randint(1,5)
print(f"Task {id}: Woke up. Now sleep {stime}")
time.sleep(stime)
exit(0)
if __name__ == "__main__":
print ('create pool of tasks')
pool = multiprocessing.Pool(processes=5)
print (pool)
# Start 5 asynchronous tasks
for i in range(1,6):
result = pool.apply_async(task, i)
print (result)
pool.close()
# Wait for all tasks to complete
pool.join()
print ('end program')
</code></pre>
|
<python><multiprocessing><resources><python-multiprocessing><windows-11>
|
2024-10-04 21:46:16
| 2
| 377
|
Jay Mosk
|
79,055,872
| 9,827,719
|
With Python how can I specify which region (Location) that a Google Cloud Instance Template should be stored in?
|
<p>I have a code that can create Google Cloud Compute Engine Instance Templates, but I cannot manage to make it create the instance templates regional (for example in europe-north1).</p>
<p><a href="https://i.sstatic.net/v83KFxso.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v83KFxso.png" alt="enter image description here" /></a></p>
<p><strong>create_template</strong></p>
<pre><code>from __future__ import annotations
from google.cloud import compute_v1
from src.routes.run_fw_engagements.c_instance_template.helpers.wait_for_extended_operation import wait_for_extended_operation
def create_template(project_id: str, template_name: str, network_tags: list, startup_script: str, instance_template_region: str) -> compute_v1.InstanceTemplate:
"""
Create a new instance template with the provided name and a specific
instance configuration.
Args:
project_id: project ID or project number of the Cloud project you use.
template_name: name of the new template to create.
network_tags: List of network tags to apply to the instance.
startup_script: The startup script to run on instance creation.
instance_template_region: The location (zone or region) where the instances will be launched.
Returns:
InstanceTemplate object that represents the new instance template.
"""
# The template describes the size and source image of the boot disk
# to attach to the instance.
disk = compute_v1.AttachedDisk()
initialize_params = compute_v1.AttachedDiskInitializeParams()
initialize_params.source_image = (
"projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts"
)
initialize_params.disk_size_gb = 10
disk.initialize_params = initialize_params
disk.auto_delete = True
disk.boot = True
# The template connects the instance to the default network.
network_interface = compute_v1.NetworkInterface()
network_interface.name = "global/networks/default"
# The template lets the instance use an external IP address.
access_config = compute_v1.AccessConfig()
access_config.name = "External NAT"
access_config.type_ = "ONE_TO_ONE_NAT"
access_config.network_tier = "PREMIUM"
network_interface.access_configs = [access_config]
# Add template properties
template = compute_v1.InstanceTemplate()
template.name = template_name
template.properties.disks = [disk]
template.properties.machine_type = "e2-micro"
template.properties.network_interfaces = [network_interface]
template.properties.tags = compute_v1.Tags(items=network_tags)
# Add startup script, OS Login, and location to metadata
metadata = compute_v1.Metadata()
metadata.items = [
compute_v1.Items(key="startup-script", value=startup_script),
compute_v1.Items(key="enable-oslogin", value="true"),
compute_v1.Items(key="enable-oslogin-2fa", value="true"),
compute_v1.Items(key="instance-template-region", value=instance_template_region) # Add location to metadata for reference
]
template.properties.metadata = metadata
# Create client
template_client = compute_v1.InstanceTemplatesClient()
operation = template_client.insert(
project=project_id, instance_template_resource=template
)
wait_for_extended_operation(operation, "instance template creation")
response = template_client.get(project=project_id, instance_template=template_name)
# Return the response or useful template info
return response
if __name__ == '__main__':
startup_script = """#!/bin/bash
sudo apt update
sudo apt install -y apache2
systemctl start apache2
"""
create_template(
project_id="sindre-dev",
template_name="template-test-10",
network_tags=["mytag-test"],
startup_script=startup_script,
instance_template_region="europe-north1"
)
</code></pre>
<p><strong>wait_for_extended_operation</strong></p>
<pre><code>from __future__ import annotations
import sys
from typing import Any
from google.api_core.extended_operation import ExtendedOperation
def wait_for_extended_operation(
operation: ExtendedOperation, verbose_name: str = "operation", timeout: int = 300
) -> Any:
"""
Waits for the extended (long-running) operation to complete.
If the operation is successful, it will return its result.
If the operation ends with an error, an exception will be raised.
If there were any warnings during the execution of the operation
they will be printed to sys.stderr.
Args:
operation: a long-running operation you want to wait on.
verbose_name: (optional) a more verbose name of the operation,
used only during error and warning reporting.
timeout: how long (in seconds) to wait for operation to finish.
If None, wait indefinitely.
Returns:
Whatever the operation.result() returns.
Raises:
This method will raise the exception received from `operation.exception()`
or RuntimeError if there is no exception set, but there is an `error_code`
set for the `operation`.
In case of an operation taking longer than `timeout` seconds to complete,
a `concurrent.futures.TimeoutError` will be raised.
"""
result = operation.result(timeout=timeout)
if operation.error_code:
print(
f"Error during {verbose_name}: [Code: {operation.error_code}]: {operation.error_message}",
file=sys.stderr,
flush=True,
)
print(f"Operation ID: {operation.name}", file=sys.stderr, flush=True)
raise operation.exception() or RuntimeError(operation.error_message)
if operation.warnings:
print(f"Warnings during {verbose_name}:\n", file=sys.stderr, flush=True)
for warning in operation.warnings:
print(f" - {warning.code}: {warning.message}", file=sys.stderr, flush=True)
return result
</code></pre>
<p>How can I ensure that the instance template is created in <code>instance_template_region="europe-north1"</code> ?</p>
|
<python><google-cloud-platform><google-cloud-vm>
|
2024-10-04 21:27:11
| 1
| 1,400
|
Europa
|
79,055,835
| 10,034,073
|
Python Virtual Environment Error Configuring SDK in PyCharm
|
<p>I'm trying to create a Python virtual environment in PyCharm but keep getting this error:</p>
<blockquote>
<p><strong>Error Configuring SDK</strong></p>
<p>Error configuring SDK: No flavor detected for B:/myproject/.venv/Scripts/python.exe sdk. Please make sure that B:\myproject\.venv\Scripts\python.exe is a valid home path for this SDK type.</p>
</blockquote>
<p>However, it appears that the environment is created just fine. I can activate it from a terminal and install packages. Yet PyCharm refuses to add it, so something is wrong.</p>
<p>When creating the environment, PyCharm has three base interpreter options (3.11 and 3.12, and a 3.12 install in <code>%localappdata%\Microsoft\</code>). I'm using the default option at <code>%localappdata%\Programs\Python\Python312\python.exe</code>. This is the one used by <code>py</code> in terminal. However, when I run <code>py --version</code> inside the virtual environment, I get <code>3.11.3</code>. Outside it I get <code>3.12.0</code>.</p>
<p>This has worked fine plenty of times before, and I have no idea what changed.</p>
|
<python><pip><pycharm><python-venv>
|
2024-10-04 21:07:01
| 1
| 444
|
kviLL
|
79,055,738
| 1,812,993
|
Limit depth of model_dump in pydantic
|
<p>Pydantic will convert a complex model into a dictionary if you call model_dump. For example:</p>
<pre><code>class Level3(BaseModel):
deep_field: str
class Level2(BaseModel):
mid_field: str
level3: Level3
class Level1(BaseModel):
top_field: str
level2: Level2
class DepthLimitedModel(BaseModel):
name: str
level1: Level1
max_mapping_depth: ClassVar[int] = 1
new_model = DepthLimitedModel(name="Test", level1=Level1(top_field="Top", level2=Level2(mid_field="Mid", level3=Level3(deep_field="Deep")))
dumped = new_model.model_dump()
</code></pre>
<p>The above code would dump the entire structure into a dict. Cool, but we don't want the whole structure. We want to limit the depth of the dump based on our <code>max_mapping_depth</code> variable. In this case, we have set the value to <code>1</code> which means we want the dump to include <code>DepthLimitedModel</code> as well as <code>Level1</code>, but the <code>Level2</code> field on <code>Level1</code> should just be the object id (and the recursive serialization should end).</p>
<p>I can't find an intuitive way to accomplish this, but, limiting recursion depth seems like it would be so common that a solution must exist.</p>
|
<python><serialization><pydantic>
|
2024-10-04 20:25:41
| 0
| 7,376
|
melchoir55
|
79,055,661
| 5,678,057
|
Pandas: Nested use of .loc returns different 'type' for the same field
|
<p>Why do I get 2 different data types while indexing using <code>.loc</code>, even though i retrieve a single value/cell.</p>
<p>Background: I'm looking up values of <code>df_source</code> in <code>df_map</code>. I do not get string datatype for the matches when I do the below indexing.</p>
<pre><code># Just for reproducing. Note that we need to replace ```None``` with ```Nan```, as that is the case when we load the original dataframe from excel. But i cannot create a dict with ```Nan``` and so used ```None```
df_map = pd.DataFrame({'person': {0: 'jack',
1: 'jack',
2: 'jack',
3: 'harry',
4: 'harry',
5: 'harry'},
'country': {0: 'GE', 1: 'AUS', 2: 'AUS', 3: 'UK', 4: 'SP', 5: 'BR'},
'product': {0: 'AA', 1: 'NT', 2: 'NT', 3: 'AA', 4: 'NT', 5: 'NT'},
'account': {0: 'main1',
1: 'main2',
2: None,
3: 'main2',
4: 'main3',
5: 'sub4'}})
df_source = pd.DataFrame({'country': {0: 'GE', 1: 'AUS'},
'product': {0: 'AA', 1: 'NT'},
'account': {0: 'main1', 1: 'sub2'}})
-------- case 1 --------------
i=1
# search in df_map
person = df_map.loc[(pd.isna(df_map['account']))\
& (df_map['country'] == df_source.loc[i,'country']) \
& (df_map['product'] == df_source.loc[i,'product']), 'person']
type(person)
>pandas.core.series.Series
-------- case 2 ---------------
person = df_map.loc[1, 'person']
type(person)
>str
</code></pre>
<p>Although both returns a single value, both are different types. How does <code>.loc</code> differ in this case, is it because of the nested use of it ?</p>
|
<python><pandas><indexing><pandas-loc>
|
2024-10-04 19:49:39
| 1
| 389
|
Salih
|
79,055,424
| 19,155,645
|
Transformers - ValueError: Asking to pad but the tokenizer does not have a padding token
|
<p>Im running a RAG pipeline using Haystack (<code>haystack-ai==2.6.0</code>) and HF model & the transformers library (<code>transformers==4.39.3</code>). <br>
This is how my code looks:</p>
<pre><code>tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=mymodel, cache_dir=cache_dir)
if tokenizer.pad_token is None:
# tokenizer.add_special_tokens({'pad_token': tokenizer.eos_token})
# tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.pad_token = tokenizer.eos_token
embedder = SentenceTransformersDocumentEmbedder(model=mymodel)
text_embedder = SentenceTransformersTextEmbedder(model=mymodel)
generator = HuggingFaceLocalGenerator(model=mymodel)
rag_pipeline = Pipeline()
rag_pipeline.add_component("converter", MarkdownToDocument())
rag_pipeline.add_component("splitter", DocumentSplitter(split_by="sentence", split_length=2))
rag_pipeline.add_component("embedder", embedder)
rag_pipeline.add_component("text_embedder", text_embedder)
rag_pipeline.add_component("retriever", MilvusEmbeddingRetriever(document_store=document_store, top_k=3))
rag_pipeline.add_component("writer", DocumentWriter(document_store))
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template))
rag_pipeline.add_component("generator", generator)
rag_pipeline.connect("converter.documents", "splitter.documents")
rag_pipeline.connect("splitter.documents", "embedder.documents")
rag_pipeline.connect("embedder", "writer")
rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
rag_pipeline.connect("retriever.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder", "generator")
results = rag_pipeline.run({
"converter": {"sources": [file_path]},
"text_embedder": {"text": question},
"prompt_builder": {"query": question},
})
print("RAG answer:", results["generator"]["replies"][0])
</code></pre>
<p>When running the <code>rag_pipeline.run()</code> method, I get the follwing error regarding the tokenizer&embedder:</p>
<pre><code>ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
</code></pre>
<p>I saw already some questions and pages regarding a similar issue, but still did not manage to debug the issue in this RAG pipeline (e.g. <a href="https://stackoverflow.com/questions/70544129/transformers-asking-to-pad-but-the-tokenizer-does-not-have-a-padding-token">this stackoverflow question</a>, or <a href="https://github.com/lm-sys/FastChat/issues/466" rel="nofollow noreferrer">this github issue</a>).<br>
Please note that I commented out in the beginning of the code different versions I tried to create the pad token:</p>
<pre><code>if tokenizer.pad_token is None:
# tokenizer.add_special_tokens({'pad_token': tokenizer.eos_token})
# tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.pad_token = tokenizer.eos_token
</code></pre>
<p>But they all result in similar error. Any help will be appreciated.</p>
<hr />
<p>EDIT 1:</p>
<p><code>mymodel = "occiglot/occiglot-7b-eu5"</code> (<a href="https://huggingface.co/occiglot/occiglot-7b-eu5" rel="nofollow noreferrer">https://huggingface.co/occiglot/occiglot-7b-eu5</a>)</p>
<p>I'm importing the embedders from haystack not from sentence-transformers:</p>
<pre><code>from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack.components.embedders import SentenceTransformersDocumentEmbedder
</code></pre>
<p>And also tried with custom wrappers for the embedders to make sure they use the tokenizer, but i still get the same error:</p>
<pre><code># Custom Wrapper for Document Embedder
class CustomDocumentEmbedder(SentenceTransformersDocumentEmbedder):
def __init__(self, model_name, cache_dir=None):
self.tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir=cache_dir)
# Ensure pad_token is set correctly
if self.tokenizer.pad_token is None:
self.tokenizer.add_special_tokens({'pad_token': self.tokenizer.eos_token})
super().__init__(model=model_name)
# Override the embed method to force the tokenizer with padding
def embed(self, documents):
inputs = self.tokenizer([doc.content for doc in documents], padding=True, truncation=True, return_tensors="pt")
embeddings = self.model.encode(inputs["input_ids"])
return embeddings
# Custom Wrapper for Text Embedder
class CustomTextEmbedder(SentenceTransformersTextEmbedder):
def __init__(self, model_name, cache_dir=None):
self.tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir=cache_dir)
# Ensure pad_token is set correctly
if self.tokenizer.pad_token is None:
self.tokenizer.add_special_tokens({'pad_token': self.tokenizer.eos_token})
super().__init__(model=model_name)
# Override the embed method to force the tokenizer with padding
def embed(self, text):
inputs = self.tokenizer(text, padding=True, truncation=True, return_tensors="pt")
embeddings = self.model.encode(inputs["input_ids"])
return embeddings
# Custom Wrapper for HuggingFace Generator
class CustomGenerator(HuggingFaceLocalGenerator):
def __init__(self, model_name, cache_dir=None):
self.tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir=cache_dir)
# Ensure pad_token is set correctly
if self.tokenizer.pad_token is None:
self.tokenizer.add_special_tokens({'pad_token': self.tokenizer.eos_token})
super().__init__(model=model_name)
# Override generate method to ensure tokenizer is used correctly
def generate(self, query, **kwargs):
inputs = self.tokenizer(query, padding=True, truncation=True, return_tensors="pt")
output = self.model.generate(inputs["input_ids"], **kwargs)
generated_text = self.tokenizer.decode(output[0], skip_special_tokens=True)
return generated_text
embedder = CustomDocumentEmbedder(model_name=mymodel)
text_embedder = CustomTextEmbedder(model_name=mymodel)
generator = CustomGenerator(model_name=mymodel)
</code></pre>
|
<python><padding><huggingface-transformers><tokenize><haystack>
|
2024-10-04 18:20:29
| 0
| 512
|
ArieAI
|
79,055,370
| 515,368
|
Disable short multi-line string concatenation in Ruff
|
<p>If I have an intentionally short multi-line string like this:</p>
<pre class="lang-py prettyprint-override"><code>html += (
'<p>Regards,</p>\n'
'<p>Tom Cruise</p>\n'
)
</code></pre>
<p>Ruff will automatically reformat as:</p>
<pre class="lang-py prettyprint-override"><code>html += '<p>Regards,</p>\n<p>Tom Cruise</p>\n'
</code></pre>
<p>because the concatenated line fits within my max line width.</p>
<p>I am looking for a the linter setting, not disabling the linter with <code>#fmt: off ... #fmt: on</code>.</p>
<p>Also note that I <em>cannot add trailing commas</em> to the multi-line string, within the parentheses, which does prevent the behavior in other situations, e.g. lists.</p>
|
<python><ruff>
|
2024-10-04 17:58:26
| 0
| 3,162
|
supermitch
|
79,055,149
| 3,475,434
|
Multiple dispatch based on object attribute types in python with multimethod
|
<p>I'm a Python noob having a tough time trying to implement multiple dispatch (based on object attributes types) using multimethod.</p>
<p>To be more precise, as long as I use functions this works lovely</p>
<pre class="lang-py prettyprint-override"><code>
from dataclasses import dataclass
from multimethod import multimethod
@multimethod
def disp(a, b):
print("dunno")
@multimethod
def disp(a: int, b: None):
print("int x none")
@multimethod
def disp(a: str, b: None):
print("str x none")
@multimethod
def disp(a: int, b: str):
print("int x str")
@multimethod
def disp(a: str, b: str):
print("str x str")
disp(None, None) # dunno
disp(1, None) # int x none
disp("test", None) # str x none
disp(1, "test") # int x str
disp("test", "test") # str x str
</code></pre>
<p>However when it comes to methods I get only dunnos</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Test:
x: int|str
y: int|str|None = None
def do(self):
self._dispatch(a = self.x, b = self.y)
@multimethod
def _dispatch(self, a, b): # default case
print("dunno")
@multimethod
def _dispatch(self, a: int, b: None):
print("int x none")
@multimethod
def _dispatch(self, a: str, b: None):
print("str x none")
@multimethod
def _dispatch(self, a: int, b: str):
print("int x str")
@multimethod
def _dispatch(self, a: str, b: str):
print("str x str")
# etc..
## what is printed is always the default case
t1 = Test(1)
type(t1.x)
t1.y is None
t1.do() # should be int x none
Test("test").do() # should be str x none
Test(1, "test").do() # should be int x str
Test("test", "test").do() # should be str x str
</code></pre>
<p>What am I missing/doing wrong? Thanks</p>
|
<python><python-3.x><dispatch>
|
2024-10-04 16:28:23
| 0
| 3,253
|
Luca Braglia
|
79,054,914
| 10,517,777
|
Python generator implementation is not reducing memory consumption
|
<p>I have three json.gz files. Those three files contains different data (restaurants, menus and matchings) grouped by different Ids. I must read all of them and create new json files by Id with the respective data from those three files. I have one virtual machine and I have some memory restrictions when running my code.</p>
<p>At the begining, I consolidated these three files in three json objects and then I iterated them with a normal for-loop. With this solution, my code is consuming a lot of memory from my VM because I am loading the whole data.</p>
<p>I realized I need only the data for a particular Id to create the final json file and I should not load all the data for all Ids at the same time. Therefore, I though Python generator would be the solution in this case. I created the following code:</p>
<pre><code>from json import loads
def load_data_set(string_restaurants_data: str,
string_menus_data: str,
string_matchings_data: str,):
menus_data = loads(string_menus_data)
matchings_data = loads(string_matchings_data)
restaurants_data = loads(string_restaurants_data)
for id, menu_data in menus_data.items():
yield id, restaurants_data[id], menu_data, matchings_data[id] if id in matchings_data else "{}"
def main():
'''
some code to read the json.gz files. The data is stored in these three string variables: string_restaurants_data, string_menus_data and string_matchings_data
'''
restaurants_data_set = load_data_set(string_restaurants_data,
string_menus_data,
string_matchings_data)
size_generator = sys.getsizeof(restaurants_data_set)
del string_menus_data
del string_restaurants_data
del string_matchings_data
gc.collect()
list_result = {}
for restaurant in restaurants_data_set:
result_data = aggregate_menu_data(restaurant[0],
dumps(restaurant[2]),
dumps(restaurant[1]),
dumps(restaurant[3]),
string_parameters,
eval(debug))
list_result.update(result_data)
data['result'] = dumps(list_result)
</code></pre>
<p>I checked the Task Manager in the VM and I have not seen an important reduction in the memory consumption comparing the previous version without generators. Could anyone let me know if I implemented correctly the python generator to solve my need? or is there a better way to load only in memory the data I need to create a json file without affecting the speed?</p>
<p>Python version: 3.11</p>
|
<python><generator>
|
2024-10-04 15:19:55
| 1
| 364
|
sergioMoreno
|
79,054,907
| 1,429,549
|
Superset: error saving charts or dashboard
|
<p>I have just installed Superset on a brand new machine over Docker and everything works fine but I can't save a chart or dashboard. I always get "Unable to save slice" error message.</p>
<p>I can save queries and datasources though without problem and if I access to a chart though a permalink I can see the chart as it was with not problem.</p>
<p>Going through the logs, I saw that the error message is:</p>
<pre><code>sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
(Background on this error at: https://sqlalche.me/e/14/e3q8)
</code></pre>
<p>The weird thing is that I'm not using sqlite3, but postgress...</p>
<p>The DB connection in the <code>superset_config.py</code> is (or I think it is):</p>
<pre><code># DB connection
SQLALCHEMY_DATABASE_URI = 'postgresql+psycopg2://XXX:XXX@postgres:5432/superset'
SQLALCHEMY_TRACK_MODIFICATIONS = True
</code></pre>
<p>I have confirmed that that's the config file set in PATH <code>SUPERSET_CONFIG_PATH=/app/superset_config.py</code></p>
<p>I have also confirmed that the postgres machine is running and is accesible from the superset machine.</p>
<p>Can anyone help to understand why is trying to connect to sqlite3?</p>
|
<python><docker><apache-superset>
|
2024-10-04 15:17:38
| 1
| 738
|
Coluccini
|
79,054,792
| 8,964,393
|
Apply pandas dictionary with gt/lt conditions as keys
|
<p>I have created the following pandas dataframe:</p>
<pre><code>ds = {'col1':[1,2,2,3,4,5,5,6,7,8]}
df = pd.DataFrame(data=ds)
</code></pre>
<p>The dataframe looks like this:</p>
<pre><code>print(df)
col1
0 1
1 2
2 2
3 3
4 4
5 5
6 5
7 6
8 7
9 8
</code></pre>
<p>I have then created a new field, called <code>newCol</code>, which has been defined as follows:</p>
<pre><code>def criteria(row):
if((row['col1'] > 0) & (row['col1'] <= 2)):
return "A"
elif((row['col1'] > 2) & (row['col1'] <= 3)):
return "B"
else:
return "C"
df['newCol'] = df.apply(criteria, axis=1)
</code></pre>
<p>The new dataframe looks like this:</p>
<pre><code>print(df)
col1 newCol
0 1 A
1 2 A
2 2 A
3 3 B
4 4 C
5 5 C
6 5 C
7 6 C
8 7 C
9 8 C
</code></pre>
<p>Is there a possibility to create a dictionary like this:</p>
<pre><code>dict = {
'0 <= 2' : "A",
'2 <= 3' : "B",
'Else' : "C"
}
</code></pre>
<p>And then apply it to the dataframe:</p>
<pre><code>df['newCol'] = df['col1'].map(dict)
</code></pre>
<p>?</p>
<p>Can anyone help me please?</p>
|
<python><pandas><dataframe><dictionary><calculated-columns>
|
2024-10-04 14:47:44
| 1
| 1,762
|
Giampaolo Levorato
|
79,054,570
| 561,243
|
Saving region properties from skimage.measure to a pickle file raises RecursionError: maximum recursion depth exceeded
|
<p>In my analysis code, I am performing some image properties analysis using skimage.measure.regionprops.</p>
<p>In order to save processing time, I would like to save the region properties to a file and I was thinking to use pickle as I often do.</p>
<p>Here is my piece of code.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pickle
from skimage.draw import ellipse
from skimage.measure import label, regionprops
from skimage.transform import rotate
# generate fake image
image = np.zeros((600, 600))
rr, cc = ellipse(300, 350, 100, 220)
image[rr, cc] = 1
image = rotate(image, angle=15, order=0)
rr, cc = ellipse(100, 100, 60, 50)
image[rr, cc] = 1
# find labels
label_img = label(image)
# calculate region props
regions = regionprops(label_img, image)
# save to pickle file
with open('test.sav', 'wb') as f:
pickle.dump(regions, f)
# reload it
with open('test.sav', 'rb') as f:
saved_regions = pickle.load(f)
</code></pre>
<p>Saving to file is not a problem, but when I try to reload it, I get this error message:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\bulghao\AppData\Roaming\JetBrains\PyCharmCE2023.3\scratches\cosmic\save_2_file.py", line 31, in <module>
saved_regions = pickle.load(f)
File "C:\Users\bulghao\venv\autoradenv\lib\site-packages\skimage\measure\_regionprops.py", line 341, in __getattr__
if self._intensity_image is None and attr in _require_intensity_image:
File "C:\Users\bulghao\venv\autoradenv\lib\site-packages\skimage\measure\_regionprops.py", line 341, in __getattr__
if self._intensity_image is None and attr in _require_intensity_image:
File "C:\Users\bulghao\venv\autoradenv\lib\site-packages\skimage\measure\_regionprops.py", line 341, in __getattr__
if self._intensity_image is None and attr in _require_intensity_image:
[Previous line repeated 995 more times]
RecursionError: maximum recursion depth exceeded
</code></pre>
<p>From what I guess, the intensity image was not saved or it was not possible to reload it.</p>
<p>My question is: <strong>is there a way to save regionprops to disk for future use?</strong></p>
<p>I can save and reload the labels, but saving both would be much better.</p>
<p><strong>EDIT:</strong>
Following Christoph Rackwitz comment below, we discovered that it is actually a bug of skimage and that an open issue on <a href="https://github.com/scikit-image/scikit-image/issues/6465" rel="nofollow noreferrer">GitHub</a> is showing exactly this problem. The issue is dormant, but a workaround has been suggested.</p>
|
<python><pickle><scikit-image><recursionerror>
|
2024-10-04 13:39:51
| 0
| 367
|
toto
|
79,054,484
| 19,500,571
|
Add plotly graphs dynamically
|
<p>Say I have the following example that combines two Plotly graphs</p>
<pre><code>import plotly.express as px
import plotly.graph_objects as go
df = px.data.iris()
fig1 = px.line(df, x="sepal_width", y="sepal_length")
fig1.update_traces(line=dict(color = 'rgba(50,50,50,0.2)'))
fig2 = px.scatter(df, x="sepal_width", y="sepal_length", color="species")
fig3 = go.Figure(data=fig1.data + fig2.data)
fig3.show()
</code></pre>
<p>Say now I have not just two, but several plots I want to add dynamically and not type out explicitly as above. The following doesn't work, but does something similar exist where one doesn't have to explicitly type out the sum to add the graphs?</p>
<pre><code>fig3 = go.Figure(data=sum([fig1.data, fig2.data])) # this doesn't work
</code></pre>
|
<python><plotly>
|
2024-10-04 13:22:04
| 1
| 469
|
TylerD
|
79,054,106
| 2,381,348
|
pandas eval: creating column names with spaces giving odd names
|
<p>My <code>df</code> contains columns <code>m 1</code> and <code>n</code>.</p>
<p>I was trying to create a duplicate of column <code>m 1</code> as <code>m 2</code> using <code>eval</code>. Came across <a href="https://stackoverflow.com/questions/50697536/pandas-query-function-not-working-with-spaces-in-column-names">this thread</a> which suggested to use backticks (`). So I used</p>
<pre><code>df2.eval('`m 2` = `m 1`')
</code></pre>
<p>Now this is creating the duplicate column, but the column name is <code>BACKTICK_QUOTED_STRING_m_2</code> instead of <code>m 2</code>. Backticks for the source <code>m 1</code> worked fine and it took the correct column; But did not properly work for the target column.</p>
<p>I did a bit of googling, didn't get anything. Then I asked copilot, it suggested to use <code>assign</code> instead of <code>eval</code>. <br/>So my query is, is it possible to use eval for the above case?</p>
|
<python><pandas>
|
2024-10-04 11:45:28
| 2
| 3,551
|
RatDon
|
79,054,018
| 14,366,906
|
PyPlot.Table working with different colspans and rowspans
|
<p>I'm looking for a way to add columns that can span multiple rows, and rows that can span multiple columns.</p>
<p>I currently have the code below to get the first row in.</p>
<pre class="lang-py prettyprint-override"><code># Calculate log-scaled widths
table_widths = [0.001, 0.002, 0.063, 2.0, 63.0, 150.0]
log_table_widths = np.diff(np.log10(table_widths))
log_table_widths = log_table_widths / log_table_widths.sum()
# Normalize widths to sum to 1
log_table_widths = log_table_widths / log_table_widths.sum()
table = ax.table(cellText=[['Clay', 'Silt', 'Sand', 'Gravel', 'Cobbles']], cellLoc='center', loc='bottom', colWidths=log_table_widths)
table_widths = []
table.auto_set_font_size(False)
table.set_fontsize(8)
table.scale(1, 1.5)
</code></pre>
<p>To get the following result:
<a href="https://i.sstatic.net/YfNkfzx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YfNkfzx7.png" alt="Current Result" /></a></p>
<p>However I need to add another row to the table that will have some columns span over the next row. While cells on the current row will have to span over multiple columns. Like so:
<a href="https://i.sstatic.net/CN3Vptrk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CN3Vptrk.png" alt="Wanted Result" /></a>
Preferably with the bottom row being the top row, but not a disaster.</p>
<p>I've tried going at this alone and getting the help from GitHub CoPilot and MS CoPilot. However with no luck, the best we could come up with is the following:</p>
<pre class="lang-py prettyprint-override"><code># Calculate log-scaled widths
table_widths = [0.001, 0.002, 0.063, 2.0, 63.0, 150.0]
log_table_widths = np.diff(np.log10(table_widths))
log_table_widths = log_table_widths / log_table_widths.sum()
# Normalize widths to sum to 1
log_table_widths = log_table_widths / log_table_widths.sum()
# Create the table
cell_text = [
['Clay', 'Silt', 'Fine', 'Medium', 'Coarse', 'Fine', 'Medium', 'Coarse'],
['', '', 'Sand', 'Sand', 'Sand', 'Gravel', 'Gravel', 'Gravel'],
]
col_labels = ['Clay', 'Silt', 'Fine', 'Medium', 'Coarse', 'Fine', 'Medium', 'Coarse']
col_widths = [log_table_widths, log_table_widths, log_table_widths/3, log_table_widths/3, log_table_widths/3, log_table_widths/3, log_table_widths/3, log_table_widths/3]
# Add the table to the plot
table = ax.table(cellText=cell_text, colLabels=col_labels, cellLoc='center', loc='bottom', colWidths=col_widths)
table.auto_set_font_size(False)
table.set_fontsize(8)
table.scale(1, 1.5)
# Adjust cell alignment to avoid ambiguity
for key, cell in table.get_celld().items():
cell.set_text_props(ha='center', va='center')
</code></pre>
<p>Giving me the following error:</p>
<pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>With no clue on how to solve it.</p>
<p>For reproducability you can use:</p>
<pre class="lang-py prettyprint-override"><code>fig, ax = plt.subplots()
fig.set_figwidth(18)
fig.set_figheight(12)
fig.set_dpi(80)
# draw vertical line at: 0.002mm, 0.063mm, 2.0mm, 63mm
ax.axvline(x=0.002, color='red', linestyle='--')
ax.axvline(x=0.063, color='red', linestyle='--')
ax.axvline(x=2.0, color='red', linestyle='--')
ax.axvline(x=63.0, color='red', linestyle='--')
ax.set_xlim(0.001, 150)
# Calculate log-scaled widths
table_widths = [0.001, 0.002, 0.063, 2.0, 63.0, 150.0]
log_table_widths = np.diff(np.log10(table_widths))
log_table_widths = log_table_widths / log_table_widths.sum()
# Normalize widths to sum to 1
log_table_widths = log_table_widths / log_table_widths.sum()
# Create the table
cell_text = [
['Clay', 'Silt', 'Fine', 'Medium', 'Coarse', 'Fine', 'Medium', 'Coarse'],
['', '', 'Sand', 'Sand', 'Sand', 'Gravel', 'Gravel', 'Gravel'],
]
col_labels = ['Clay', 'Silt', 'Fine', 'Medium', 'Coarse', 'Fine', 'Medium', 'Coarse']
col_widths = [log_table_widths, log_table_widths, log_table_widths/3, log_table_widths/3, log_table_widths/3, log_table_widths/3, log_table_widths/3, log_table_widths/3]
# Add the table to the plot
table = ax.table(cellText=cell_text, colLabels=col_labels, cellLoc='center', loc='bottom', colWidths=col_widths)
table.auto_set_font_size(False)
table.set_fontsize(8)
table.scale(1, 1.5)
# Adjust cell alignment to avoid ambiguity
for key, cell in table.get_celld().items():
cell.set_text_props(ha='center', va='center')
fig.savefig('fig.png', format='png', bbox_inches='tight')
</code></pre>
<p>EDIT:
I've managed to get rid of the error. It was caused by defining the col_widths variable where I was filling it with lists instead of their corresponding value.
I have now defined it like this, might find a better solution for this later down the line.</p>
<pre class="lang-py prettyprint-override"><code>col_widths = [log_table_widths[0], log_table_widths[1], log_table_widths[2] / 3, log_table_widths[2] / 3, log_table_widths[2] / 3, log_table_widths[3] / 3, log_table_widths[3] / 3, log_table_widths[3] / 3, log_table_widths[4]]
</code></pre>
<p>My table now looks like so:
<a href="https://i.sstatic.net/LhrktSid.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LhrktSid.png" alt="New Result" /></a></p>
<p>Although I have not figured out how to merge the rows and cells.
I did find this post: <a href="https://stackoverflow.com/questions/37279807/matplotlib-table-with-double-headers">Matplotlib table with double headers</a>
Where multiple tables are created to show multiple headers. But this, sadly will not work for merging cells in a column and only a row.</p>
|
<python><matplotlib>
|
2024-10-04 11:18:16
| 1
| 335
|
Wessel van Leeuwen
|
79,053,898
| 7,347,911
|
prefect warning message regarding task performance for longer running tasks
|
<p>i have been getting the below warning message with tasks with larger run time
most of my tasks take more than 10 seconds</p>
<p>it says</p>
<pre><code>Try wrapping large task parameters with `prefect.utilities.annotations.quote` for increased performance
</code></pre>
<p>how can i do that for better performance</p>
<p>Below is the actual waring</p>
<p>Task parameter introspection took 3676.513 seconds , exceeding <code>PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD</code> of 10.0. Try wrapping large task parameters with <code>prefect.utilities.annotations.quote</code> for increased performance, e.g. <code>my_task(quote(param))</code>. To disable this message set <code>PREFECT_TASK_INTROSPECTION_WARN_THRESHOLD=0</code>.</p>
|
<python><orchestration><prefect>
|
2024-10-04 10:45:13
| 1
| 404
|
manoj
|
79,053,880
| 14,649,310
|
Create an SQLAlchemy table that only has maximum one entry
|
<p>I want to store a secret key encrypted and I want to ensure there is always either none or exactly one entry in this SQLAlchemy model/table. How can this be enforced?</p>
|
<python><postgresql><sqlalchemy>
|
2024-10-04 10:38:50
| 2
| 4,999
|
KZiovas
|
79,053,771
| 942,179
|
What exactly is a "seed package" in Python?
|
<p>In the context of virtual environments I often see people speaking of "seed packages", e.g. <code>uv venv --seed</code> with help text <em>"Install seed packages (one or more of: <code>pip</code>, <code>setuptools</code>, and <code>wheel</code>) into the virtual environment."</em> or <em>"You can seed pip if you want with uv venv --seed but..."</em>, but also in the virtualenv user guide (<em>"[...] install (bootstrap) seed packages (one or more of pip, setuptools, wheel) in the created virtual environment [...]"</em>), without this term ever being clearly defined.
Of course, I've got a hunch of what it means, but I've never found a clear definition so far what <em>exactly</em> seed packages are and what exactly "seeding" entails. Can anybody clarify and/or refer to some PEP?</p>
|
<python><virtualenv><python-venv><uv>
|
2024-10-04 10:03:35
| 1
| 1,576
|
Elmar Zander
|
79,053,709
| 4,602,359
|
How to generate a flattened list of all members and types in a C++ struct or class?
|
<p>Iβm trying to generate a flattened output of all the members and their types for a given C++ struct or class, including inherited members.</p>
<p>For example, given the following classes:</p>
<pre class="lang-cpp prettyprint-override"><code>class ParentClass {
int a;
std::string b;
};
class MemberA {
int foo;
float bar;
};
class MemberB {
int baz;
float bot;
};
class MyClass : ParentClass {
MemberA memberA1;
MemberA memberA2;
MemberB memberB;
int top;
};
</code></pre>
<p>Iβd like to produce output like this:</p>
<pre><code>MyClass(ParentClass).a(int)
MyClass(ParentClass).b(std::string)
MyClass.memberA1(MemberA).foo(int)
MyClass.memberA1(MemberA).bar(float)
MyClass.memberA2(MemberA).foo(int)
MyClass.memberA2(MemberA).bar(float)
MyClass.memberB(MemberB).baz(int)
MyClass.memberB(MemberB).bot(float)
MyClass.top(int)
</code></pre>
<p>The goal is to create a comprehensive, easy-to-parse text-based output that can be used in other tools.</p>
<h3>What Iβve tried so far:</h3>
<ul>
<li><strong>Parsing PDB files:</strong> Iβm working in Visual Studio 2019, but this approach hasnβt yielded a satisfying result.</li>
<li><strong>Using DIA in C++:</strong> However, I encountered COM issues, potentially due to company IT policies.</li>
<li><strong>Using Python's <code>pdbparse</code> library:</strong> Unfortunately, I received an "Unsupported File Type" error. It seems the library isn't well-maintained for newer PDB formats.</li>
<li><strong>C++20 Reflection:</strong> I attempted to use reflection features in C++20, but didnβt achieve the desired results.</li>
</ul>
<h3>My next steps:</h3>
<ul>
<li>Digging deeper into <strong>C++20 reflection</strong>, though it seems complex.</li>
<li><strong>Custom Doxygen generation</strong>, since Doxygen already provides comprehensive documentation features.</li>
</ul>
<h3>Question:</h3>
<p>Are there any other approaches I should consider? Any ideas or suggestions are appreciated!</p>
|
<python><c++><reflection><doxygen>
|
2024-10-04 09:50:41
| 0
| 348
|
A. Ocannaille
|
79,053,433
| 13,606,345
|
Django serializer to convert list of dictionary keys to model fields
|
<p>I am using pandas <code>openpyxl</code> to read excel file in my project. After reading the excel file I end up with list of dictionaries. These dictionaries have keys such as "Year 2024", "Range", "Main Point", and etc.</p>
<p>I have a field in my Django app that has these fields such as "year", "range", "main_point".</p>
<p>My question is how can I write a serializer to convert dictionary keys to fields in my model? I want to do this in serializer because I also want to validate dictionary data.</p>
<p>So I have this list:</p>
<pre class="lang-py prettyprint-override"><code>my_data = [
{"Year 2024": "5", "Range":"2", "Main Point": "OK"},
{"Year 2024": "6", "Range":"3", "Main Point": "OK"},
{"Year 2024": "7", "Range":"4", "Main Point": "OK"},
...
]
</code></pre>
<p>and my model</p>
<pre class="lang-py prettyprint-override"><code>class MyModel(models.Model):
year = models.IntegerField(...)
range = models.IntegerField(...)
main_pont = models.CharField(...)
</code></pre>
|
<python><django><django-rest-framework><django-serializer>
|
2024-10-04 08:38:14
| 1
| 323
|
Burakhan Aksoy
|
79,053,320
| 10,710,625
|
Add an image in prompt for AzureOpenAI gpt4-mini?
|
<p>I am able to use the web interface of azure OpenAI studio in the chat playground to analyze images but I would like to do the same using python. It seems that it's not working and I could not (so far) find a reference online on how to include an image in the prompt. Could anyone please help or provide a reference on how I can do this? Do I actually need to upload my image or would it be possible for the model to read a local image. ?</p>
<pre><code>import os
from openai import AzureOpenAI
client = AzureOpenAI(
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version=os.getenv("AZURE_API_VERSION"),
azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)
deployment_name = "gpt-4o-mini"
# Send a completion call to generate an answer
print("Sending a test completion job")
#image local path
image_input = r"c:/users/..../image.jpeg"
prompt = "Tell me what do you see in this image  "
messages = [{"role": "user", "content": prompt}]
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=messages,
temperature=0,
)
generated_text = response.choices[0].message.content
print("Response:", generated_text)
</code></pre>
<p>Update:
I uploaded also the image to the blob storage and I replaced the local path with the URL, I still get the same response from the model:</p>
<pre><code>Response: It seems that I can't view images directly. However, if you describe the image or provide details about its content, I can help analyze it or provide insights based on your description!
</code></pre>
|
<python><large-language-model><azure-openai><gpt-4><gpt-4o-mini>
|
2024-10-04 07:59:47
| 0
| 739
|
the phoenix
|
79,053,161
| 17,973,259
|
How to load chat messages in batches in Django using StreamingHttpResponse?
|
<p>I'm working on a Django project that streams chat messages between two users using StreamingHttpResponse and async functions. I want to load messages in batches (e.g 20 at a time) instead of loading all the messages at once to optimize performance and reduce initial load time.</p>
<p>Hereβs my current view code:</p>
<pre><code>async def stream_chat_messages(request, recipient_id: int) -> StreamingHttpResponse:
"""View used to stream chat messages between the authenticated user and a specified recipient."""
recipient = await sync_to_async(get_object_or_404)(User, id=recipient_id)
async def event_stream():
async for message in get_existing_messages(request.user, recipient):
yield message
last_id = await get_last_message_id(request.user, recipient)
while True:
new_messages = (
ChatMessage.objects.filter(
Q(sender=request.user, recipient=recipient)
| Q(sender=recipient, recipient=request.user),
id__gt=last_id,
)
.annotate(
profile_picture_url=Concat(
Value(settings.MEDIA_URL),
F("sender__userprofile__profile_picture"),
output_field=CharField(),
),
is_pinned=Q(pinned_by__in=[request.user]),
)
.order_by("created_at")
.values(
"id",
"created_at",
"content",
"profile_picture_url",
"sender__id",
"edited",
"file",
"file_size",
"is_pinned",
)
)
async for message in new_messages:
message["created_at"] = message["created_at"].isoformat()
message["content"] = escape(message["content"])
json_message = json.dumps(message, cls=DjangoJSONEncoder)
yield f"data: {json_message}\n\n"
last_id = message["id"]
await asyncio.sleep(0.1)
async def get_existing_messages(user, recipient) -> AsyncGenerator:
messages = (
ChatMessage.objects.filter(
Q(sender=user, recipient=recipient)
| Q(sender=recipient, recipient=user)
)
.filter(
(Q(sender=user) & Q(sender_hidden=False))
| (Q(recipient=user) & Q(recipient_hidden=False))
)
.annotate(
profile_picture_url=Concat(
Value(settings.MEDIA_URL),
F("sender__userprofile__profile_picture"),
output_field=CharField(),
),
is_pinned=Q(pinned_by__in=[user]),
)
.order_by("created_at")
.values(
"id",
"created_at",
"content",
"profile_picture_url",
"sender__id",
"edited",
"file",
"file_size",
"is_pinned",
)
)
async for message in messages:
message["created_at"] = message["created_at"].isoformat()
message["content"] = escape(message["content"])
json_message = json.dumps(message, cls=DjangoJSONEncoder)
yield f"data: {json_message}\n\n"
async def get_last_message_id(user, recipient) -> int:
last_message = await ChatMessage.objects.filter(
Q(sender=user, recipient=recipient) | Q(sender=recipient, recipient=user)
).alast()
return last_message.id if last_message else 0
return StreamingHttpResponse(event_stream(), content_type="text/event-stream")
</code></pre>
<p>How can I modify this code to initially load the latest 20 messages, and then load more messages (in batches of 20) when the user clicks a "Load more" button?</p>
<p>Any advice or examples on how to handle this efficiently would be greatly appreciated.</p>
|
<python><django><async-await>
|
2024-10-04 07:07:30
| 1
| 878
|
Alex
|
79,053,119
| 21,446,483
|
Testing async method with pytest does not show calls to patched method
|
<p>I'm currently writing some tests for a FastAPI middleware component using pytest. This component is a class-type middleware and works exactly as it should, very happy with it. However, when writing tests I'm trying to execute the <code>dispatch</code> method within the component to then make assertions on the output (I'll attach the rest of the setup code below, here is the important part):</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
pytest_plugins = ("pytest_asyncio",)
@pytest.fixture(scope="function", name="cache_service_table_get_mock")
def fixture_cache_service_table_get_mock(mocker: MockerFixture):
mock_table = mocker.patch("api.utils.cache_middleware.dynamo.table")
mock_table.get_item.return_value = {"Items": []}
mock_table.put_item.return_value = None
yield mock_table
@pytest.mark.asyncio
async def test_cache_no_hit(cache_service_table_get_mock: MagicMock):
cache = CacheMiddleware(app)
request = create_request({"query": "some random query", "product_code": "dummy"})
result = await cache.dispatch(request=request, call_next=mock_call_next)
assert result.status_code == 200
assert result.body == json.dumps({"answer": "dummy response"}, indent=2).encode(
"utf-8"
)
# This generates an error indicating that call_count is 0
assert cache_service_table_get_mock.get_item.call_count == 1
</code></pre>
<p>I am 100% sure that the <code>get_item</code> mock is getting called and that it returns the expected mocked value (both by adding <code>print</code> statements everywhere in the <code>cache.dispatch</code> method and because the second <code>assert</code> would only hold it this was the case).</p>
<p>Is there something missing for the call_count property to update correctly?</p>
<hr />
<h3>Remaining setup code</h3>
<pre class="lang-py prettyprint-override"><code>import json
from unittest.mock import MagicMock
import pytest
from fastapi import Request, Response
from pytest_mock import MockerFixture
from starlette.testclient import TestClient
from api import app
from api.utils.cache_middleware import CacheMiddleware
client = TestClient(app)
# Class used to mock the body_iterator prop used in the cache middleware
class AsyncIterator:
def __init__(self, item):
self.length = 1
self.item = item
def __aiter__(self):
return self
async def __anext__(self):
if self.length == 0:
raise StopAsyncIteration
self.length -= 1
return self.item
pass
async def mock_call_next(request: Request) -> Response:
content = json.dumps({"answer": "dummy response"}, indent=2).encode("utf-8")
response = Response(
content=content,
status_code=200,
headers=request.headers,
media_type="text",
)
body_iterator = AsyncIterator(content)
response.__dict__["body_iterator"] = body_iterator
return response
def create_request(body):
request = Request(
scope={
"method": "POST",
"url": "test.com/customapi",
"body": body,
"type": "http",
"path": "/customapi",
"headers": {},
}
)
async def receive():
return {
"type": "http.request",
"body": json.dumps(body, indent=2).encode("utf-8"),
}
request._receive = receive
return request
</code></pre>
<p>I have tried using the <code>assert_called_once</code> method with the same result.</p>
<p>I have tried using the same method but for an unrelated non-async function and this works correctly.</p>
<p>I have tried only mocking one method in my fixture with the same result.</p>
<p>It honestly feels like I'm missing something obvious from the documentation that I've been unable to find. Appreciate any help</p>
|
<python><pytest><fastapi><pytest-asyncio>
|
2024-10-04 06:47:18
| 1
| 332
|
Jesus Diaz Rivero
|
79,052,799
| 10,722,752
|
!pip install smartsheet-dataframe is not working, getting ModuleNotFoundError: No module named 'smartsheet' Error
|
<p>I need to read in data from smartsheet to pandas dataframe. I installed <code>smartsheet-dataframe</code> library, but when I try to import it, getting <code>ModuleNotFoundError: No module named 'smartsheet'</code>.</p>
<pre><code>!pip install smartsheet-dataframe
...
...
Downloading smartsheet_dataframe-0.3.4-py3-none-any.whl (5.9 kB)
Installing collected packages: smartsheet-dataframe
Successfully installed smartsheet-dataframe-0.3.4
import smartsheet
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[3], line 1
----> 1 import smartsheet
ModuleNotFoundError: No module named 'smartsheet'
</code></pre>
<p>I am not getting errors saying other dependencies need to be installed as well. Can someone help me with this.</p>
|
<python><pandas><smartsheet-api>
|
2024-10-04 04:08:50
| 1
| 11,560
|
Karthik S
|
79,052,681
| 10,418,143
|
ValueError: If no `decoder_input_ids` or `decoder_inputs_embeds` are passed, `input_ids` cannot be `None`
|
<p>I am trying to get the decoder hidden state of the florence 2 model. I was following this <a href="https://huggingface.co/microsoft/Florence-2-large/blob/main/modeling_florence2.py" rel="nofollow noreferrer">https://huggingface.co/microsoft/Florence-2-large/blob/main/modeling_florence2.py</a> to understand the parameters that goes into the forward method.I tried something like</p>
<pre><code># Pass inputs to the model with output_hidden_states=True to get hidden states
with torch.no_grad():
outputs = model(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
attention_mask=inputs["attention_mask"],
output_hidden_states=True, # Request hidden states
)
</code></pre>
<p>But doing this I'm getting this error:</p>
<p><em>ValueError: If no <code>decoder_input_ids</code> or <code>decoder_inputs_embeds</code> are passed, <code>input_ids</code> cannot be <code>None</code>. Please pass either <code>input_ids</code> or <code>decoder_input_ids</code> or <code>decoder_inputs_embeds</code>.</em></p>
<p>I couldn't understand the error fully as it says "<em><code>input_ids</code> cannot be <code>None</code></em>" but my input_ids is not a None. It is something like:
tensor([[ 0, 2264, 473, 5, 2274, 6192, 116, 2]], device='cuda:1')</p>
<p>Also I amm using this model for inference only, what does the decoder_input_ids mean and where I can find this to pass in the forward method.</p>
|
<python><machine-learning><pytorch><huggingface-transformers><huggingface>
|
2024-10-04 02:40:48
| 0
| 352
|
user10418143
|
79,052,587
| 4,582,026
|
VS Code 1.94 - Run selection/line in Python terminal very slow
|
<p>I've just started using VSCode (moving over from Spyder) and I regularly run lines/selections from my code in the terminal.</p>
<p>When I have a terminal open, shift + enter runs my selection in a new terminal and is incredibly slow.</p>
<p>The below takes about 5 seconds to send</p>
<pre><code> print(1)
</code></pre>
<p>If i copy and paste it in the new terminal, it runs instantly. Why is it slow? I want to continue using shift+enter to run my selection.</p>
<p>Why does it need to open a new terminal anyway? It can run in the existing terminal, where i start with</p>
<pre><code>py
</code></pre>
<p>which begins a python interactive session.</p>
|
<python><visual-studio-code>
|
2024-10-04 01:05:33
| 1
| 549
|
Vik
|
79,052,556
| 4,003,134
|
gradio how to hide a webcam interface
|
<p>In <strong>Python 3.10</strong> with <strong>gradio 4.44.1</strong> on <strong>Windows 10</strong>, I struggle to hide a webcam "live" image with a button click. Currently, I am stuck with the following code:</p>
<pre class="lang-py prettyprint-override"><code>import gradio as gr
import numpy as np
def flip(im):
return np.flipud(im)
cam = gr.Image(sources=["webcam"], streaming=True)
with gr.Blocks(css="footer {visibility: hidden}") as demo:
webcam = gr.Interface(
flip,
inputs=cam,
outputs=None, # "image",
live=True,
clear_btn=None,
show_progress='hidden'
)
hide_btn = gr.Button("Hide")
def hide_cam(webcam):
webcam.visible = False
return webcam
hide_btn.click(fn=hide_cam, inputs=webcam, outputs=webcam)
demo.launch()
</code></pre>
<p>After clicking the button the error <code> gradio.exceptions.InvalidComponentError: <class 'gradio.interface.Interface'> Component not a valid input component.</code> shows up.</p>
<p>Can anyone please give me a hint, how to toggle the webcam visibility?</p>
|
<python><gradio>
|
2024-10-04 00:32:06
| 1
| 1,029
|
x y
|
79,052,411
| 4,582,026
|
VSCode - shift+enter treatment
|
<p>I have VSCode installed in two windows machines, with the same extensions installed (Pylance, Python, and Python debugger).</p>
<p>I'm getting different results when pressing shift+enter on a selection:</p>
<ul>
<li>On one machine, it runs the selection in the terminal (which is what
I want).</li>
<li>On the other machine, it opens up Python REPL and runs there.</li>
</ul>
<p>No manual keybinds have been added on either machine, and the keybinds are the same. What's causing the difference?</p>
|
<python><visual-studio-code>
|
2024-10-03 22:51:24
| 0
| 549
|
Vik
|
79,052,337
| 1,336,758
|
Understanding a subclass of "TypedDict" with just one field defined
|
<p>Despite reading the <code>TypedDict</code> documentation and numerous examples, I can't understand the code-snippet below. I'm definitely missing something (a concept).</p>
<p><code>MyClass</code> specifies one field (<code>messages</code>), but shouldn't it have at least two? For <code>key</code> and <code>value</code>? What is the <code>key</code> and <code>value</code> in this case?</p>
<pre><code>from typing import Annotated
from typing_extensions import TypedDict
class MyClass(TypedDict):
messages: Annotated[list, add_messages]
# messages have the type "list".
# "add_messages" is some function that appends
# rather than overwrites messages to list.
</code></pre>
|
<python><python-typing>
|
2024-10-03 22:16:46
| 1
| 5,759
|
nmvega
|
79,052,322
| 5,635,892
|
Amplitude of FFT in python
|
<p>I have the following code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.fft import fft, fftfreq
def polarization(t):
f = x*np.cos(2*np.pi*omega*t)/np.sqrt(1+(x*np.cos(2*np.pi*omega*t))**2)
return f
omega = 100
x = 5
N = 10000
t_max = 1
t = np.linspace(0,t_max,N)
yf = fft(polarization(t))
yf = 2.0/N * np.abs(yf[0:N//2])
np.max(yf)
</code></pre>
<p>which outputs 1.21. I am not sure I understand this value, as the maximum value obtained by my function is 1, so any frequency component should have a value below that.</p>
<p>How can I convert this 1.21 value to the actual amplitude of the associated frequency component?</p>
|
<python><fft>
|
2024-10-03 22:09:55
| 0
| 719
|
Silviu
|
79,052,117
| 6,471,140
|
huggingface model inference: ERROR: Flag 'minloglevel' was defined more than once (in files 'src/error.cc' and ..)
|
<p>I'm trying to use llama 3.1 70b from huggingface(end goal is to quantize it and deploy it in amazon <code>sagemaker</code>), but I'm facing:</p>
<blockquote>
<p>ERROR: Flag 'minloglevel' was defined more than once (in files
'src/error.cc' and
'home/conda/feedstock_root/build_artifacts/abseil-split_1720857154496/work/absl/log/flags.cc').</p>
</blockquote>
<p>The code is super simple (quantization not applied yet), but I cannot make it work (originally I was using a notebook and the kernel died, after exporting the code to python file and running it in the command line I found the error mentioned previously).</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, AutoTokenizer
model_id = "meta-llama/Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto"
)
input_text = """<|begin_of_text|><|start_header_id|>user<|end_header_id|>Translate the following English text to French:
'Hello, how are you?'<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
print(input_ids)
output = model.generate(**input_ids, max_new_tokens=10)
</code></pre>
<p>I have tried both the 70b and 8b versions, but none worked (so I know it is not something specific to 70). I have a feeling that this is related to package conflicts or similar and not necessarily the models.</p>
|
<python><artificial-intelligence><large-language-model><huggingface><llama>
|
2024-10-03 20:46:31
| 1
| 3,554
|
Luis Leal
|
79,052,005
| 4,434,941
|
Error loading page with scrapy and scrapy playwright - Says to enable javascript
|
<p>I am trying to access a webpage with scrapy and scrapy-playwright, however, I keep getting a 'Please enable JS and disable any ad blocker' messaged coupled with a timeout error. I have tried a variety of solutions but none seem to have worked...</p>
<pre><code>import scrapy
from scrapy.selector import Selector
from scrapy_playwright.page import PageMethod
class WsjNewsJSSpider(scrapy.Spider):
name = 'wsj_newsJS_BACKUP'
start_urls = ['https://www.wsj.com']
custom_settings = {
"DOWNLOAD_HANDLERS": {
'http': 'scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler',
'https': 'scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler',
},
"TWISTED_REACTOR": 'twisted.internet.asyncioreactor.AsyncioSelectorReactor',
"PLAYWRIGHT_BROWSER_TYPE": "chromium", # Optional: specify the browser type (chromium, firefox, webkit)
"PLAYWRIGHT_LAUNCH_OPTIONS": {"headless": False}, # Optional: configure Playwright options
}
def start_requests(self):
# Enable Playwright for these requests
for url in self.start_urls:
yield scrapy.Request(
url,
meta={
'playwright': True,
"playwright_page_methods": [
PageMethod("wait_for_timeout", 5000), # Wait for 5 seconds
],
},
callback=self.parse
)
def parse(self, response):
# Parse the response with rendered JS content
html_content = response.text
sel = Selector(text=html_content)
print('Its working')
</code></pre>
<p>Any help would be deeply appreciated.</p>
|
<python><scrapy><playwright><scrapy-playwright>
|
2024-10-03 19:55:16
| 1
| 405
|
jay queue
|
79,051,885
| 23,260,297
|
styling dataframe with multi-index and export to excel
|
<p>I have a dataframe like this:</p>
<pre><code>df = pd.DataFrame({
'Counterparty': ['foo', 'fizz', 'fizz', 'fizz','fizz'],
'Commodity': ['bar', 'bar', 'bar', 'bar','bar'],
'DealType': ['Buy', 'Buy', 'Buy', 'Buy', 'Buy'],
'StartDate': ['07/01/2024', '09/01/2024', '10/01/2024', '11/01/2024', '12/01/2024'],
'FloatPrice': [18.73, 17.12, 17.76, 18.72, 19.47],
'MTMValue':[10, 10, 10, 10, 10]
})
df = df.set_index(['Counterparty', 'Commodity', 'DealType','StartDate']).sort_index()[['FloatPrice', 'MTMValue']]
print(df)
</code></pre>
<p>i am trying to apply styles to the dataframe before I export it to excel. This is what I have done so far:</p>
<pre><code>def style_index(s):
return "background-color: lightblue; text-align: center; border: 1px solid black; vertical-align: middle;"
def style_header(s):
return "background-color: lightgrey; text-align: center;"
styled_df = df.style.set_properties(**{
'background-color': 'darkblue',
'color': 'white',
'text-align': 'center'}).map_index(style_index).map_index(style_header, axis="columns")
</code></pre>
<p>which yields this:</p>
<p><a href="https://i.sstatic.net/zgOEl25n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zgOEl25n.png" alt="enter image description here" /></a></p>
<p>however I need the dataframe to look like this:</p>
<p><a href="https://i.sstatic.net/gwVK9FAI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwVK9FAI.png" alt="enter image description here" /></a></p>
<ul>
<li>the index column headers are not getting highlighted</li>
<li>the items in the index are not aligned in the center</li>
</ul>
<p>Here is how I export my dataframes,
<code>item</code> is a dataframe:</p>
<pre><code>with pd.ExcelWriter(path, engine='openpyxl', mode='a', if_sheet_exists='overlay') as writer:
for item in list:
styled_df = item.style.set_properties(**{
'background-color': '#0F243E',
'color': 'white',
'text-align': 'center'}).map_index(style_index).map_index(style_header, axis="columns")
styled_df.to_excel(writer, sheet_name=name, startrow=rowPos, float_format = "%0.2f", index=bool)
</code></pre>
<p>Also, I feel like there is a more efficient way to accomplish what I am already doing so any suggestions are appreciated.</p>
<p>EDIT: managed to get the text vertically aligned in the center, but still no luck with highlighting the index column headers</p>
|
<python><pandas><dataframe>
|
2024-10-03 19:05:04
| 2
| 2,185
|
iBeMeltin
|
79,051,741
| 3,949,008
|
pandas.read_sas() fails when bad timestamps exist
|
<p>I have a file with some bad timestamps and the <code>read_sas</code> method in <code>pandas</code> fails. There seems to be no recourse. The file is read fine in R using the <code>haven</code> package, and the bad timestamps are identifiable.</p>
<pre><code>df = pd.read_sas('my_sas_file.sas7bdat')
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/io/sas/sas7bdat.py", line 83, in _convert_datetimes
return pd.to_datetime(sas_datetimes, unit=unit, origin="1960-01-01")
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/core/tools/datetimes.py", line 1068, in to_datetime
values = convert_listlike(arg._values, format)
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/core/tools/datetimes.py", line 393, in _convert_listlike_datetimes
return _to_datetime_with_unit(arg, unit, name, tz, errors)
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/core/tools/datetimes.py", line 557, in _to_datetime_with_unit
arr, tz_parsed = tslib.array_with_unit_to_datetime(arg, unit, errors=errors)
File "pandas/_libs/tslib.pyx", line 312, in pandas._libs.tslib.array_with_unit_to_datetime
pandas._libs.tslibs.np_datetime.OutOfBoundsDatetime: cannot convert input with unit 's'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/util/_decorators.py", line 331, in wrapper
return func(*args, **kwargs)
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/io/sas/sasreader.py", line 175, in read_sas
return reader.read()
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/io/sas/sas7bdat.py", line 742, in read
rslt = self._chunk_to_dataframe()
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/io/sas/sas7bdat.py", line 792, in _chunk_to_dataframe
rslt[name] = _convert_datetimes(rslt[name], "s")
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/io/sas/sas7bdat.py", line 85, in _convert_datetimes
s_series = sas_datetimes.apply(_parse_datetime, unit=unit)
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/core/series.py", line 4771, in apply
return SeriesApply(self, func, convert_dtype, args, kwargs).apply()
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/core/apply.py", line 1123, in apply
return self.apply_standard()
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/core/apply.py", line 1174, in apply_standard
mapped = lib.map_infer(
File "pandas/_libs/lib.pyx", line 2924, in pandas._libs.lib.map_infer
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/core/apply.py", line 142, in f
return func(x, *args, **kwargs)
File "/opt/anaconda3/lib/python3.10/site-packages/pandas/io/sas/sas7bdat.py", line 55, in _parse_datetime
return datetime(1960, 1, 1) + timedelta(seconds=sas_datetime)
OverflowError: days=-1176508800; must have magnitude <= 999999999
</code></pre>
<p>In R, it works:</p>
<pre class="lang-r prettyprint-override"><code>library(haven)
df <- read_sas('my_sas_file.sas7bdat')
summary(df$DtObgnOrig)
Min. 1st Qu.
"-3219212-04-24 00:00:00.0000" "2004-01-27 00:00:00.0000"
Median Mean
"2008-08-11 00:00:00.0000" "2009-03-08 10:15:16.7828"
3rd Qu. Max.
"2014-12-09 00:00:00.0000" "2027-06-12 00:00:00.0000"
NA's
"93215826"
</code></pre>
<p>Anyone have any magic tricks to make the <code>read_sas</code> work, but null out the bad timestamps somehow?</p>
|
<python><pandas><sas><r-haven>
|
2024-10-03 18:12:46
| 1
| 10,535
|
Gopala
|
79,051,710
| 1,564,070
|
Rendering issue with ttk.TreeView
|
<p>I'm trying to use the ttk treeview widget. I'm able to create and populate it, and it seems to work properly. However it is rendering with a large empty "column" on the left side and I can't figure out what is causing it. I've verified that column(0) is the first column I defined so don't know where this is coming from. Code to create the treeview:</p>
<pre><code>cols = ("EIR", "Env", "Name", "Function", "IP", "Model")
col_widths = (75, 50, 120, 150, 100, 150)
self.tv = tv = ttk.Treeview(self, columns=cols)
for col, col_w in zip(cols, col_widths):
tv.column(column=col, width=col_w, anchor=tk.W)
tv.heading(column=col, text=col, anchor=tk.W)
</code></pre>
<p>Rendered treeview:
<a href="https://i.sstatic.net/9DzPJGKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9DzPJGKN.png" alt="Treeview with empty column on left" /></a></p>
|
<python><tkinter><treeview><ttk>
|
2024-10-03 18:04:32
| 2
| 401
|
WV_Mapper
|
79,051,504
| 16,869,946
|
counting the number of success after certain date every year in Pandas using groupby cumsum
|
<p>I have a data frame that looks like</p>
<pre><code>Date Student_ID Exam_Score
2020-12-24 1 79
2020-12-24 3 100
2020-12-24 4 88
2021-01-19 1 100
2021-01-19 2 100
2021-01-19 3 99
2021-01-19 4 72
2022-09-30 3 100
2022-09-30 2 100
2022-09-30 1 100
2022-09-30 5 46
2023-04-23 3 100
2023-04-23 2 97
2023-04-23 1 100
2024-07-19 2 89
2024-07-19 1 100
2024-07-19 4 93
2024-07-19 3 100
2024-09-19 1 100
2024-09-19 2 80
2024-09-19 3 100
2024-09-19 4 80
2024-10-20 1 80
2024-10-20 3 99
2024-10-20 2 80
</code></pre>
<p>And I would like to compute a new column called <code>Recent_Full_Marks</code> which uses the following logic: for each <code>Student_ID</code>, compute the number of times that student getting 100 marks on the exam before the current date and after the Sept-1 one year before. So for example, on 2024-11-22, Student 1 has gotten 2 full marks before 2024-11-22 and after 2024-09-01. And the desired column looks like:</p>
<pre><code>Date Student_ID Exam_Score Recent_Full_Marks
2020-12-24 1 79 0
2020-12-24 3 100 0
2020-12-24 4 88 0
2021-01-19 1 100 0
2021-01-19 2 100 0
2021-01-19 3 99 1
2021-01-19 4 72 0
2022-09-30 3 100 0
2022-09-30 2 100 0
2022-09-30 1 100 0
2022-09-30 5 46 0
2023-11-23 3 100 0
2023-11-23 2 97 0
2023-11-23 1 100 0
2024-07-19 2 89 0
2024-07-19 1 100 1
2024-07-19 4 93 0
2024-07-19 3 100 1
2024-09-19 1 100 0
2024-09-19 2 80 0
2024-09-19 3 100 0
2024-09-19 4 80 0
2024-10-20 1 100 1
2024-10-20 3 99 1
2024-10-20 2 80 0
2024-11-22 1 70 2
2024-11-22 3 100 1
2024-11-22 2 78 0
</code></pre>
<p>Here is what I have tried:</p>
<pre><code>Date = pd.to_datetime(df['Date'], dayfirst=True)
full = (df.assign(Date=Date)
.sort_values(['Student_ID','Date'], ascending=[True,True])
['Exam_Score'].eq(100))
df['Recent_Full_Marks']=(full.groupby([df['Student_ID'], Date.dt.year], group_keys=False).apply(lambda g: g.shift(1, fill_value=0).cumsum()))
</code></pre>
<p>However, the above method only counts the number of full marks after every year start and not every sept-1 and I was unable to modify it to work.</p>
|
<python><pandas><dataframe><group-by><cumsum>
|
2024-10-03 16:55:59
| 1
| 592
|
Ishigami
|
79,051,434
| 6,464,525
|
How can I dynamically define a function signature for pylance etc.?
|
<p>I am creating a framework built on <code>pydantic.BaseModel</code> that will manage the putting and getting of records from dynamodb.</p>
<p>An example of a model is as follows:</p>
<pre class="lang-py prettyprint-override"><code>class User(DynamoDanticBaseModel):
user_id: str
first_name: str
@classmethod
def primary_key(cls) -> str:
return "USER#{user_id}"
@classmethod
def sort_key(cls) -> str:
return "USER"
</code></pre>
<p>Using <code>__init_subclass__</code>, I can fetch the formatable strings and validate that the required parts of the PK/SK are defined on the class.</p>
<p>Defined generically is a classmethod <code>get</code> that uses these values to call boto3 underneath and fetch the a record, fill in the model and return the new model instance.</p>
<pre class="lang-py prettyprint-override"><code>user = User(user_id='example', first_name='Bacon')
user.put(dynamo_table_ref)
user = User.get(dynamo_table_ref, user_id='example')
user.first_name # => "Bacon"
</code></pre>
<p>This all works. My issue is that the get methods interface is: <code>def get(cls, table, **kwargs):</code>. A developer cannot see what params are needed.</p>
<p>Is there a way to layout my code, or a pattern I should use, that would mean that I could have the signature hint show the required parameters? Dynamic to the required parameters defined in the primary key and sort key method?</p>
<p>e.g. <code>def get(cls, table, user_id: str):</code></p>
|
<python><python-typing><pyright>
|
2024-10-03 16:35:49
| 1
| 344
|
hi im Bacon
|
79,051,361
| 54,873
|
What is the most efficient way in Pandas to search for dates entries in a dataframe before or after a given date?
|
<p>I am trying to determine whether a given set of obstetrics patients have recieved their flu shots in the current flu season.</p>
<p>That is, I have essentially three dataframes:</p>
<pre><code>>>> edds
name expected_delivery_date
0 Susan 2024-12-01
1 Susan 2023-10-01
2 Marie 2024-10-15
>>> flu_shots
name flu_shot_date
0 Susan 2023-09-01
1 Marie 2022-09-01
>>> flu_season_begin_dates
0 2024-09-01
1 2023-09-01
2 2022-09-01
3 2021-09-01
</code></pre>
<p>What is the most efficient way to identify which women got a flu shot (a) before their due date but (b) on or after the start of the most recent flu season?</p>
<pre><code>>>> got_flu_shot
name expected_delivery_date got_required_flu_shot
0 Susan 2024-12-01 False
1 Susan 2023-10-01 True
2 Marie 2023-10-01 True
</code></pre>
<p>I can think of going row-by-row by <code>(name, expected_delivery_date)</code> and searching for the "most recent flu shot before the delivery" and "most recent flu season start date" but that feels inherently inefficient.</p>
|
<python><pandas>
|
2024-10-03 16:14:05
| 1
| 10,076
|
YGA
|
79,051,188
| 12,131,013
|
Expression replacement with Wildcard variable only works in isolation
|
<p>I am trying to replace a subexpression. There are many questions on this topic, but I've found that using a wildcard variable as a multiplier seems to work in isolation but not when there are extra terms. For example:</p>
<pre class="lang-py prettyprint-override"><code>import sympy as smp
x, y, z, a = smp.symbols("x y z a")
b = smp.Wild("b")
expr_1 = x/a**2 + y/a**2 + z/a**2
replacement = b*x + b*y + b*z
replaced = expr_1.replace(replacement, b) # essentially, I am saying that x + y + z = 1
</code></pre>
<p>That example works and the result is <code>1/a**2</code>. But if I add an extra part, the replacement won't work.</p>
<pre class="lang-py prettyprint-override"><code>expr_2 = expr_1 + 20/a
not_replaced = expr_2.replace(replacement, b)
</code></pre>
<p>The result <code>not_replaced</code> is the same as <code>expr_2</code>.</p>
<p>Is this behavior expected? Is there a way to achieve the desired result using <code>.replace</code> with a wildcard variable like I've tried?</p>
|
<python><sympy>
|
2024-10-03 15:21:17
| 1
| 9,583
|
jared
|
79,051,146
| 14,843,373
|
Pandas apply function behaving differently based on input size?
|
<p>I have a function which works fine with a tiny Pandas Dataframe and returns the adjustments as expected,
but when I apply it to a non-test Dataframe, which is only a small df (300 x 20), it gets completely messed up.</p>
<p>Using the example below as a reference, it would populate columns 3 - 7 with the same list of values for all rows in each respective column...</p>
<p>To add to the confusion, when I remove the code from within the function and run it as part of one file the expected results are achieved for the non-test dataframe...</p>
<p>Can anyone suggest what might be happening here and why the behaviour seems to vary? / what I am changing.</p>
<p>I provide the code here for recreation</p>
<pre><code>def test_populate_columns_from_Key_Value_String():
starting_df = pd.DataFrame(
{
"Names": ["Sarah", "John"],
"Relations": [
"has parent: Maggie, is parent of: Tom, is parent of: Grace, is parent of: Bart, is related to: Grandpa Simpson, is hated by: Joseph",
"is friends with: Tracey, has parent: Greg",
],
"has parent": [[], []],
"is parent of": [[], []],
"is related to": [[], []],
"is hated by": [[], []],
"is friends with": [[], []],
}
)
desire_df = pd.DataFrame(
{
"Names": ["Sarah", "John"],
"Relations": [
"has parent: Maggie, is parent of: Tom, is parent of: Grace, is parent of: Bart, is related to: Grandpa Simpson, is hated by: Joseph",
"is friends with: Tracey, has parent: Greg",
],
"has parent": [["Maggie"], ["Greg"]],
"is parent of": [["Tom", "Grace", "Bart"], []],
"is related to": [["Grandpa Simpson"], []],
"is hated by": [["Joseph"], []],
"is friends with": [[], ["Tracey"]],
}
)
starting_df.apply(
populate_columns_from_Key_Value_String, axis=1, args=("Relations",)
)
print(starting_df)
assert desire_df.loc[0, "has parent"] == starting_df.loc[0, "has parent"]
assert starting_df.loc[0, "is hated by"] == ["Joseph"]
assert starting_df.loc[1, "is friends with"] == ["Tracey"]
assert starting_df.loc[0, "is parent of"] == desire_df.loc[0, "is parent of"]
def populate_columns_from_Key_Value_String(
row: pd.Series, column_to_extract_from="Column_name_with_Relational_String"
):
string_to_extract_values_from = row[column_to_extract_from]
if isinstance(string_to_extract_values_from, str):
for key_value_pair in string_to_extract_values_from.split(","):
if isinstance(row[key_value_pair.split(":")[0].strip()], list):
row[key_value_pair.split(":")[0].strip()].append(
key_value_pair.split(":")[1].strip()
)
</code></pre>
|
<python><pandas><dataframe>
|
2024-10-03 15:11:39
| 2
| 361
|
beautysleep
|
79,051,021
| 11,857,547
|
Why "PyRun_SimpleString("from PIL import Image")" does not work in debug mode?
|
<p>I'm writing a c++ program that use Python library EasyOCR to read some text. I wrote such code:</p>
<pre class="lang-cpp prettyprint-override"><code>Py_Initialize();
PyRun_SimpleString("import sys");
PyRun_SimpleString("import easyocr");
//Other codes
</code></pre>
<p>When I run it in <code>Release</code> mode, it works fine, but if I switch to <code>Debug</code> mode, it reports a run-time error:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python311\Lib\site-packages\easyocr\__init__.py", line 1, in <module>
from .easyocr import Reader
File "C:\Python311\Lib\site-packages\easyocr\easyocr.py", line 3, in <module>
from .recognition import get_recognizer, get_text
File "C:\Python311\Lib\site-packages\easyocr\recognition.py", line 1, in <module>
from PIL import Image
File "C:\Python311\Lib\site-packages\PIL\Image.py", line 100, in <module>
from . import _imaging as core
ImportError: cannot import name '_imaging' from 'PIL' (C:\Python311\Lib\site-packages\PIL\__init__.py)
</code></pre>
<p>Then I tried to run</p>
<pre><code>PyRun_SimpleString("from PIL import Image");
</code></pre>
<p>insteaed of</p>
<pre><code>PyRun_SimpleString("import easyocr");
</code></pre>
<p>but the same error was reported again:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python311\Lib\site-packages\PIL\Image.py", line 100, in <module>
from . import _imaging as core
ImportError: cannot import name '_imaging' from 'PIL' (C:\Python311\Lib\site-packages\PIL\__init__.py)
</code></pre>
<p>I'm using <code>Visual Studio 2022</code> on <code>Windows 11 x64</code> to write my c++ code.My Python version is</p>
<pre><code>Python 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)] on win32
</code></pre>
<p>My Pillow version is <code>10.4.0</code> and easyocr version is <code>1.7.2</code></p>
<p>I have already downloaded the debug symbols via python installer, and set the project's debug config to use the <code>python311_d.lib</code>.</p>
<p>I want to know why it is reporting such error and what should I do to debug this program in Visual Studio.</p>
<p>Update:
I use the debug version of python (python_d.exe) to run the import syntax, it reports the same error.</p>
<pre><code>PS C:\Users\15819> python_d
Python 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:34:25) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
C:\Python311\Lib\site-packages\pyreadline3\lineeditor\history.py:87: ResourceWarning: unclosed file <_io.TextIOWrapper name='C:\\Users\\15819\\.python_history' mode='r' encoding='utf-8'>
for line in open(filename, 'r', encoding='utf-8'):
ResourceWarning: Enable tracemalloc to get the object allocation traceback
>>> import sys
>>> import easyocr
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python311\Lib\site-packages\easyocr\__init__.py", line 1, in <module>
from .easyocr import Reader
File "C:\Python311\Lib\site-packages\easyocr\easyocr.py", line 3, in <module>
from .recognition import get_recognizer, get_text
File "C:\Python311\Lib\site-packages\easyocr\recognition.py", line 1, in <module>
from PIL import Image
File "C:\Python311\Lib\site-packages\PIL\Image.py", line 100, in <module>
from . import _imaging as core
ImportError: cannot import name '_imaging' from 'PIL' (C:\Python311\Lib\site-packages\PIL\__init__.py)
>>> from PIL import Image
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python311\Lib\site-packages\PIL\Image.py", line 100, in <module>
from . import _imaging as core
ImportError: cannot import name '_imaging' from 'PIL' (C:\Python311\Lib\site-packages\PIL\__init__.py)
</code></pre>
|
<python><c++><python-imaging-library><python-3.11><easyocr>
|
2024-10-03 14:35:39
| 0
| 305
|
Object Unknown
|
79,051,008
| 19,155,645
|
RAG pipeline using Haystack - error with Pipeline embedder
|
<p>I'm trying to run a RAG pipeline using Haystack (and Milvus) on my cluster instance using python (3.10.12).<br>
The Imports and relevant packages I have in this env are shown at the end of this question.</p>
<p>my code is: <br></p>
<ol>
<li>model embedding & generator functions:</li>
</ol>
<pre><code>@component
def model_embedder(self, documents,cache_dir=cache_dir):
tokenizer = AutoTokenizer.from_pretrained(mymodel, cache_dir=cache_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(mymodel, cache_dir=cache_dir)
embeddings = []
for doc in documents:
inputs = tokenizer(doc.content, padding="max_length", truncation=True, return_tensors="pt")
with torch.no_grad():
output = model(**inputs)
embedding = output.pooler_output.squeeze(0).cpu().numpy()
embeddings.append(embedding)
for doc, embedding in zip(documents, embeddings):
doc.embedding = embedding
return documents
@component
def model_generator(self, query, context=None, generation_kwargs={}, cache_dir=cache_dir):
tokenizer = AutoTokenizer.from_pretrained(mymodel, cache_dir=cache_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(mymodel, cache_dir=cache_dir)
inputs = tokenizer(query, context=context, padding="max_length", truncation=True, return_tensors="pt")
with torch.no_grad():
output = model.generate(**inputs, **generation_kwargs)
return tokenizer.decode(output[0], skip_special_tokens=True)
</code></pre>
<ol start="2">
<li>RAG Pipeline:</li>
</ol>
<pre><code>rag_pipeline = Pipeline()
rag_pipeline.add_component("converter", MarkdownToDocument())
rag_pipeline.add_component(
"splitter", DocumentSplitter(split_by="sentence", split_length=2)
)
rag_pipeline.add_component("embedder", model_embedder)
rag_pipeline.add_component(document_store)
rag_pipeline.add_component(
"retriever", MilvusEmbeddingRetriever(document_store=document_store, top_k=3)
)
rag_pipeline.add_component("writer", DocumentWriter(document_store))
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template))
rag_pipeline.add_component(
"generator",
model_generator,
)
rag_pipeline.connect("converter.documents", "splitter.documents")
rag_pipeline.connect("splitter.documents", "embedder.documents")
rag_pipeline.connect("embedder", "writer")
rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
rag_pipeline.connect("retriever.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder", "generator")
rag_pipeline.draw('./rag_pipeline.png')
</code></pre>
<p>If I decorate with <code>@component</code>, I get the following error:</p>
<pre><code>---------------------------------------------------------------------------
ComponentError Traceback (most recent call last)
Cell In[11], line 2
1 @component
----> 2 def model_embedder(self, documents,cache_dir=cache_dir):
4 tokenizer = AutoTokenizer.from_pretrained(mymodel, cache_dir=cache_dir)
File /.../rag_env/lib/python3.10/site-packages/haystack/core/component/component.py:517, in _Component.__call__(self, cls, is_greedy)
513 return self._component(cls, is_greedy=is_greedy)
515 if cls:
516 # Decorator is called without parens
--> 517 return wrap(cls)
519 # Decorator is called with parens
520 return wrap
File /.../rag_env/lib/python3.10/site-packages/haystack/core/component/component.py:513, in _Component.__call__.<locals>.wrap(cls)
512 def wrap(cls):
--> 513 return self._component(cls, is_greedy=is_greedy)
File /.../rag_env/lib/python3.10/site-packages/haystack/core/component/component.py:464, in _Component._component(self, cls, is_greedy)
462 # Check for required methods and fail as soon as possible
463 if not hasattr(cls, "run"):
--> 464 raise ComponentError(f"{cls.__name__} must have a 'run()' method. See the docs for more information.")
466 def copy_class_namespace(namespace):
...
469
470 Simply copy the whole namespace from the decorated class.
471 """
ComponentError: model_embedder must have a 'run()' method. See the docs for more information.
</code></pre>
<p>And if I dont use the <code>@component</code> decorator (and remove the <code>self</code> from both functions), these funcitons compile, but then when i run the <code>rag_pipeline</code> code, I get the following error:</p>
<pre><code>---------------------------------------------------------------------------
PipelineValidationError Traceback (most recent call last)
Cell In[13], line 6
2 rag_pipeline.add_component("converter", MarkdownToDocument())
3 rag_pipeline.add_component(
4 "splitter", DocumentSplitter(split_by="sentence", split_length=2)
5 )
----> 6 rag_pipeline.add_component("embedder", model_embedder)
7 rag_pipeline.add_component(document_store)
8 rag_pipeline.add_component(
9 "retriever", MilvusEmbeddingRetriever(document_store=document_store, top_k=3)
10 )
File /.../rag_env/lib/python3.10/site-packages/haystack/core/pipeline/base.py:313, in PipelineBase.add_component(self, name, instance)
311 # Component instances must be components
312 if not isinstance(instance, Component):
--> 313 raise PipelineValidationError(
314 f"'{type(instance)}' doesn't seem to be a component. Is this class decorated with @component?"
315 )
317 if getattr(instance, "__haystack_added_to_pipeline__", None):
318 msg = (
319 "Component has already been added in another Pipeline. Components can't be shared between Pipelines. "
320 "Create a new instance instead."
321 )
PipelineValidationError: '<class 'function'>' doesn't seem to be a component. Is this class decorated with @component?
</code></pre>
<p>The imports I'm using are:</p>
<pre><code>import os
import urllib.request
from haystack import Pipeline
from haystack.components.converters import MarkdownToDocument
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.writers import DocumentWriter
from milvus_haystack import MilvusDocumentStore
from milvus_haystack.milvus_embedding_retriever import MilvusEmbeddingRetriever
from haystack.components.builders import PromptBuilder
import mdit_plain
from haystack import component
import huggingface_hub
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch
</code></pre>
<p>Since I suspect there might be some issue (/incompatability) with the haystack libraries I use, I show here all the "haystack" related libraries and versions i have in the current environment:</p>
<pre><code>Package Version
---------------------------- --------------
farm-haystack 1.26.3
haystack 0.42
haystack-ai 2.5.1
haystack-experimental 0.2.0
milvus-haystack 0.0.10
</code></pre>
<p>I'd be happy to get any suggestions / help how to resolve the issue and run the RAG pipeline.</p>
|
<python><rag><haystack>
|
2024-10-03 14:30:59
| 1
| 512
|
ArieAI
|
79,050,855
| 23,260,297
|
add specific values in dataframe with groupby.agg
|
<p>I am processing files that contain data in an unusable format. After processing one of the files I am left with a dataframe and a singular value.</p>
<p>The dataframe looks like this:</p>
<pre><code> df = pd.DataFrame({
'A': ['foo', 'foo', 'foo', 'fizz', 'fizz', 'fizz', 'fizz'],
'B': ['bar', 'bar', 'bar', 'buzz', 'buzz', 'buzz', 'baz'],
'C': [10,10,10,10,10,10,10]
})
val = 20.0
</code></pre>
<p>The value is not supposed to be apart of my dataframe, but needs to be included in my TOTAL calculation. This is how I extract it from the file after I read it into a dataframe (it returns a string so I am casting to float):</p>
<pre><code>if len(df.loc[df['ID'].eq("Settle"), 'C'].values) > 0:
temp = df.loc[df['ID'].eq("Settle"), 'C']
if temp.values[0].isnumeric():
num = float(temp.values[0])
else:
num = 0.0
else:
num = 0.0
</code></pre>
<p>now I need to do <code>groupby.agg()</code> with the following condition:</p>
<ul>
<li>If <code>A == 'foo'</code> I need to add val to the sum</li>
</ul>
<p>This is the basic code I would use to get the sums for each value in <code>A</code>, but I cannot figure out how factor in my condition. I'm assuming I can use <code>np.where</code> or <code>lambda</code> but unsure how to use that with <code>.agg()</code> and achieve my output.</p>
<pre><code>out = df.groupby(['A'], sort=False, as_index=False).agg({"C":"sum"})
</code></pre>
<p>Expected Output:</p>
<pre><code>A C
foo 50
fizz 40
</code></pre>
|
<python><pandas>
|
2024-10-03 13:51:07
| 1
| 2,185
|
iBeMeltin
|
79,050,619
| 10,430,926
|
How do I manage a python subprocess with an infinite loop
|
<p>I want to read input from a subprocess in python, interact with it, see the results of that interaction, and then kill the process. I have the following parent and child processes.</p>
<pre><code>child.py
from random import randint
x = 1
while x < 9:
x = randint(0, 10)
print("weeeee got", x)
name = input("what's your name?\n")
print("hello", name)
x = 0
while True:
x += 1
print("bad", x)
</code></pre>
<pre><code>parent.py
import curio
from curio import subprocess
async def main():
p = subprocess.Popen(
["./out"],
stdout=subprocess.PIPE,
stdin=subprocess.PIPE,
)
async for line in p.stdout:
line = line.decode("ascii")
print(p.pid, "got:", line, end="")
if "what" in line and "name" in line:
out, _ = await p.communicate(input=b"yaboi\n")
print(p.pid, "got:", out.decode("ascii"), end="")
# stuff to properly kill the process ...
return
if __name__ == "__main__:
curio.run(main)
</code></pre>
<p>If I run this, the parent hangs on:</p>
<pre><code> out, _ = await p.communicate(input=b"yaboi\n")
</code></pre>
<p>Removing the following section from the child fixes the issue:</p>
<pre><code>x = 1
while x < 9:
x = randint(0, 10)
print("weeeee got", x)
</code></pre>
<p>Why doesn't it work with the infinite loop? How do I fix this?</p>
<p>Edit 1: putting <code>flush=True</code> in the print statement does not fix the issue.</p>
<p>Edit 2:
The processes are in the following states:</p>
<pre><code>4219 13.2 1.5 493968 487936 pts/0 S+ 14:08 0:01 python parent.py
4223 100 0.0 13764 9600 pts/0 R+ 14:08 0:14 python child.py
</code></pre>
<p>It looks like the python code is hanging on the <code>communicate</code> call.</p>
|
<python><python-3.x><curio>
|
2024-10-03 12:52:39
| 1
| 557
|
Edward
|
79,050,437
| 12,415,855
|
Parse data from local html-file using bs4?
|
<p>i try to parse a local html-document using the following code -</p>
<pre><code>import os, sys
from bs4 import BeautifulSoup
path = os.path.abspath(os.path.dirname(sys.argv[0]))
fnHTML = os.path.join(path, "inp.html")
page = open(fnHTML)
soup = BeautifulSoup (page.read(), 'lxml')
worker = soup.find("span")
wHeadLine = worker.text.strip()
wPara = worker.find_next("td").text.strip()
print(wHeadLine)
print(wPara)
</code></pre>
<p>The output look like that:</p>
<pre><code>Find your favesΓ’β¬βfaster
WeΓ’β¬β’ve made it easier than ever to see whatΓ’β¬β’s on now and continue watching your recordings, favorite teams and more.
</code></pre>
<p>But the text on the html looks like that - see the picture</p>
<p><a href="https://i.sstatic.net/TMcn4gSJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMcn4gSJ.png" alt="enter image description here" /></a></p>
<p>Why is the text not outputed with "β" and "Weβve"?</p>
|
<python><beautifulsoup>
|
2024-10-03 11:57:33
| 2
| 1,515
|
Rapid1898
|
79,050,277
| 9,681,081
|
Create hybrid table with Snowflake SQLAlchemy
|
<p>I want to add Snowflake <a href="https://docs.snowflake.com/en/user-guide/tables-hybrid" rel="nofollow noreferrer">hybrid tables</a> to my database schema using SQLAlchemy.</p>
<p>Per <a href="https://github.com/snowflakedb/snowflake-sqlalchemy/issues/461" rel="nofollow noreferrer">this issue</a>, support for hybrid tables is not implemented in the Snowflake SQLAlchemy official dialect.</p>
<p>Is there a way for me to customize the <code>CREATE TABLE ...</code> statement generated by SQLAlchemy so that it actually generates <code>CREATE HYBRID TABLE ...</code> for a given table?</p>
<p>Consider the example code below:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
class Base(DeclarativeBase):
pass
class MyTable(Base):
id: Mapped[int] = mapped_column(primary_key=True)
Base.metadata.create_all(bind=...)
</code></pre>
<p>Running it will emit the following SQL statement:</p>
<pre class="lang-sql prettyprint-override"><code>CREATE TABLE mytable (
id INTEGER NOT NULL AUTOINCREMENT,
CONSTRAINT pk_mytable PRIMARY KEY (id)
)
</code></pre>
<p>and I would like a way to have the following instead:</p>
<pre class="lang-sql prettyprint-override"><code>CREATE HYBRID TABLE mytable (
id INTEGER NOT NULL AUTOINCREMENT,
CONSTRAINT pk_mytable PRIMARY KEY (id)
)
</code></pre>
|
<python><sqlalchemy><snowflake-cloud-data-platform>
|
2024-10-03 11:15:36
| 1
| 2,273
|
RomΓ©o DesprΓ©s
|
79,050,009
| 4,751,700
|
Typehint *args for variable length heterogenous return values like asyncio.gather()
|
<p>I'm having some trouble with the typesystem here.
I need something like <code>asyncio.gather()</code> where the return value is the return values of the coroutines in the same order as given in the args.</p>
<pre class="lang-py prettyprint-override"><code>async def f1() -> int: ...
async def f2() -> str: ...
async def f3() -> bool: ...
a, b, c = await asyncio.gather(f1(), f2(), f3())
</code></pre>
<p>This works:</p>
<ul>
<li><code>a</code> is of type <code>int</code></li>
<li><code>b</code> is of type <code>str</code></li>
<li><code>c</code> is of type <code>bool</code></li>
</ul>
<p>I have a function that accepts multiple <code>partial[Coroutine[Any, Any, T]]</code>.</p>
<pre class="lang-py prettyprint-override"><code>type PartialCoroutine[T] = partial[Coroutine[Any, Any, T]]
async def run runPartialCoroutines[T](*args: PartialCoroutine[T]) -> list[T]: ...
</code></pre>
<p>If I run my function it only uses the return value of the first function</p>
<pre class="lang-py prettyprint-override"><code>async def f1() -> int: ...
async def f2() -> str: ...
async def f3() -> bool: ...
a, b, c = await runPartialCoroutines(f1(), f2(), f3())
</code></pre>
<p>This doesn't work:</p>
<ul>
<li><code>a</code> is of type <code>int</code></li>
<li><code>b</code> is of type <code>int</code></li>
<li><code>c</code> is of type <code>int</code></li>
</ul>
<p>I understand why it does this, but I can't seem to find a way to make it work.
I've checked the typehint of <code>asyncio.gather()</code> but I have a feeling that the typecheckers have some custom logic built in for this case.
Is it possible to do this at all?</p>
|
<python><python-typing>
|
2024-10-03 09:54:50
| 2
| 391
|
fanta fles
|
79,049,807
| 6,221,742
|
Genetic algorithm for kubernetes allocation
|
<p>I am trying to allocate Kubernetes pods to nodes using a genetic algorithm, where each pod is assigned to one node. Below is my implementation:</p>
<pre><code>from string import ascii_lowercase
import numpy as np
import random
from itertools import compress
import math
import pandas as pd
import random
def create_pods_and_nodes(n_pods=40, n_nodes=15):
# Create pod and node names
pod = ['pod_' + str(i+1) for i in range(n_pods)]
node = ['node_' + str(i+1) for i in range(n_nodes)]
# Define CPU and RAM options
cpu = [2**i for i in range(1, 8)] # 2, 4, 8, 16, 32, 64, 128
ram = [2**i for i in range(2, 10)] # 4, 8, 16, ..., 8192
# Create the pods DataFrame
pods = pd.DataFrame({
'pod': pod,
'cpu': random.choices(cpu[0:3], k=n_pods), # Small CPU for pods
'ram': random.choices(ram[0:4], k=n_pods), # Small RAM for pods
})
# Create the nodes DataFrame
nodes = pd.DataFrame({
'node': node,
'cpu': random.choices(cpu[4:len(cpu)-1], k=n_nodes), # Larger CPU for nodes
'ram': random.choices(ram[4:len(ram)-1], k=n_nodes), # Larger RAM for nodes
})
return pods, nodes
# Example usage
pods, nodes = create_pods_and_nodes(n_pods=46, n_nodes=6)
# Display the results
print("Pods DataFrame:\n", pods.head())
print("\nNodes DataFrame:\n", nodes.head())
print(f"total CPU pods: {np.sum(pods['cpu'])}")
print(f"total RAM pods: {np.sum(pods['ram'])}")
print('\n')
print(f"total CPU nodes: {np.sum(nodes['cpu'])}")
print(f"total RAM nodes: {np.sum(nodes['ram'])}")
# Genetic Algorithm Parameters
POPULATION_SIZE = 100
GENERATIONS = 50
MUTATION_RATE = 0.1
TOURNAMENT_SIZE = 5
def create_individual():
return [random.randint(0, len(nodes) - 1) for _ in range(len(pods))]
def create_population(size):
return [create_individual() for _ in range(size)]
def fitness(individual):
total_cpu_used = np.zeros(len(nodes))
total_ram_used = np.zeros(len(nodes))
unallocated_pods = 0
for pod_idx, node_idx in enumerate(individual):
pod_cpu = pods.iloc[pod_idx]['cpu']
pod_ram = pods.iloc[pod_idx]['ram']
if total_cpu_used[node_idx] + pod_cpu <= nodes.iloc[node_idx]['cpu'] and total_ram_used[node_idx] + pod_ram <= nodes.iloc[node_idx]['ram']:
total_cpu_used[node_idx] += pod_cpu
total_ram_used[node_idx] += pod_ram
else:
unallocated_pods += 1 # Count unallocated pods
# Reward for utilizing resources and penalize for unallocated pods
return (total_cpu_used.sum() + total_ram_used.sum()) - (unallocated_pods * 10)
def select(population):
tournament = random.sample(population, TOURNAMENT_SIZE)
return max(tournament, key=fitness)
def crossover(parent1, parent2):
crossover_point = random.randint(1, len(pods) - 1)
child1 = parent1[:crossover_point] + parent2[crossover_point:]
child2 = parent2[:crossover_point] + parent1[crossover_point:]
return child1, child2
def mutate(individual):
for idx in range(len(individual)):
if random.random() < MUTATION_RATE:
individual[idx] = random.randint(0, len(nodes) - 1)
def genetic_algorithm():
population = create_population(POPULATION_SIZE)
for generation in range(GENERATIONS):
new_population = []
for _ in range(POPULATION_SIZE // 2):
parent1 = select(population)
parent2 = select(population)
child1, child2 = crossover(parent1, parent2)
mutate(child1)
mutate(child2)
new_population.extend([child1, child2])
population = new_population
# Print the best fitness of this generation
best_fitness = max(fitness(individual) for individual in population)
print(f"Generation {generation + 1}: Best Fitness = {best_fitness}")
# Return the best individual found
best_individual = max(population, key=fitness)
return best_individual
# Run the genetic algorithm
print("Starting Genetic Algorithm...")
best_allocation = genetic_algorithm()
print("Genetic Algorithm completed.\n")
# Create the allocation DataFrame
allocation_df = pd.DataFrame({
'Pod': pods['pod'],
'Node': [nodes.iloc[best_allocation[i]]['node'] for i in range(len(best_allocation))],
'Pod_Resources': [list(pods.iloc[i][['cpu', 'ram']]) for i in range(len(best_allocation))],
'Node_Resources': [list(nodes.iloc[best_allocation[i]][['cpu', 'ram']]) for i in range(len(best_allocation))]
})
# Print the allocation DataFrame
print("\nAllocation DataFrame:")
print(allocation_df)
# Summarize total CPU and RAM utilization for each node
node_utilization_df = allocation_df.groupby('Node').agg(
Total_CPU_Used=pd.NamedAgg(column='Pod_Resources', aggfunc=lambda x: sum([res[0] for res in x if res])),
Total_RAM_Used=pd.NamedAgg(column='Pod_Resources', aggfunc=lambda x: sum([res[1] for res in x if res])),
Node_CPU=pd.NamedAgg(column='Node_Resources', aggfunc=lambda x: x.iloc[0][0] if x.iloc[0] is not None else 0),
Node_RAM=pd.NamedAgg(column='Node_Resources', aggfunc=lambda x: x.iloc[0][1] if x.iloc[0] is not None else 0)
)
# Calculate CPU and RAM utilization percentages for each node
node_utilization_df['CPU_Utilization'] = (node_utilization_df['Total_CPU_Used'] / node_utilization_df['Node_CPU']) * 100
node_utilization_df['RAM_Utilization'] = (node_utilization_df['Total_RAM_Used'] / node_utilization_df['Node_RAM']) * 100
# Print the total CPU and RAM utilization for each node
print("\nTotal CPU and RAM utilization for each node:")
print(node_utilization_df)
</code></pre>
<p>My implementation works if the total number of CPU and/or RAM of the pods is smaller than the total CPU and/or RAM of the nodes. However, I want to make it work even if the total CPU and/or RAM of the pods exceeds the total CPU and/or RAM of the nodes, allowing for unallocated pods if they cannot be assigned. How can I achieve this?</p>
<p>Any suggestions or improvements would be greatly appreciated!</p>
|
<python><algorithm><optimization><genetic-algorithm>
|
2024-10-03 09:03:13
| 1
| 339
|
AndCh
|
79,049,699
| 14,368,551
|
Why do i get "ImportError: sys.meta_path is None, Python is likely shutting down" when trying to run Milvus
|
<p>I have very simple code:</p>
<pre><code>from pymilvus import MilvusClient
client = MilvusClient("milvus.db")
print("hello")
</code></pre>
<p>which prints out this:</p>
<pre><code>hello
Exception ignored in: <function ServerManager.__del__ at 0x7f1b80758ea0>
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/milvus_lite/server_manager.py", line 58, in __del__
File "/usr/local/lib/python3.12/dist-packages/milvus_lite/server_manager.py", line 53, in release_all
File "/usr/local/lib/python3.12/dist-packages/milvus_lite/server.py", line 118, in stop
File "/usr/lib/python3.12/pathlib.py", line 1164, in __init__
File "/usr/lib/python3.12/pathlib.py", line 358, in __init__
ImportError: sys.meta_path is None, Python is likely shutting down
Exception ignored in: <function Server.__del__ at 0x7f1b80758cc0>
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/milvus_lite/server.py", line 122, in __del__
File "/usr/local/lib/python3.12/dist-packages/milvus_lite/server.py", line 118, in stop
File "/usr/lib/python3.12/pathlib.py", line 1164, in __init__
File "/usr/lib/python3.12/pathlib.py", line 358, in __init__
ImportError: sys.meta_path is None, Python is likely shutting down
</code></pre>
<p>As you can see i get the error for some reason. <strong>Why ?</strong></p>
|
<python><vector-database><milvus><rag>
|
2024-10-03 08:25:41
| 2
| 573
|
VanechikSpace
|
79,049,608
| 3,083,022
|
Inconsistent ratings when drawing using Trueskill
|
<p>I'm using <a href="https://trueskill.org/" rel="nofollow noreferrer">Trueskill</a> to try to create a rating system for a tennis tournament among my friends. Games are 1v1, so I'm trying out the following:</p>
<pre><code>from trueskill import Rating, quality_1vs1, rate_1vs1
alice, bob = Rating(25), Rating(25)
print('No games')
print(alice)
print(bob)
alice, bob = rate_1vs1(alice, bob)
print('First game, winner alice')
print(alice)
print(bob)
alice, bob = rate_1vs1(bob, alice)
print('Second game, winner bob')
print(alice)
print(bob)
</code></pre>
<p>This outputs the following:</p>
<pre class="lang-none prettyprint-override"><code>No games
trueskill.Rating(mu=25.000, sigma=8.333)
trueskill.Rating(mu=25.000, sigma=8.333)
First game, winner alice
trueskill.Rating(mu=29.396, sigma=7.171)
trueskill.Rating(mu=20.604, sigma=7.171)
Second game, winner bob
trueskill.Rating(mu=26.643, sigma=6.040)
trueskill.Rating(mu=23.357, sigma=6.040)
</code></pre>
<p>I would have expected both players having the same rating after these two games but I'll go with that, no issue. However, if I remove the second game and replace it with a draw and re-run the thing:</p>
<pre><code>alice, bob = rate_1vs1(bob, alice, True)
print('Second game, draw')
print(alice)
print(bob)
</code></pre>
<p>I get the following:</p>
<pre class="lang-none prettyprint-override"><code>First game, winner alice
trueskill.Rating(mu=29.396, sigma=7.171)
trueskill.Rating(mu=20.604, sigma=7.171)
Second game, draw
trueskill.Rating(mu=23.886, sigma=5.678)
trueskill.Rating(mu=26.114, sigma=5.678)
</code></pre>
<p><code>bob</code> seems to have a better ranking when having drawn than when having won.</p>
<p>What's going on here? What am I doing wrong?</p>
|
<python><ranking><rating><leaderboard><rating-system>
|
2024-10-03 07:56:34
| 1
| 567
|
user3083022
|
79,049,047
| 17,889,492
|
Plotly wireframe running along both directions
|
<p>I wanted to create a wireframe surface in plotly. However the following code only runs the wires along one axis. How do I convert these parallel wires into a mesh?</p>
<pre><code>import numpy as np
import plotly.graph_objects as go
import plotly.io as pio
x = np.linspace(-5, 5, 20)
y = np.linspace(-5, 5, 20)
x, y = np.meshgrid(x, y)
a = 1
b = 1
z = (x**2 / a**2) - (y**2 / b**2)
# Add lines
lines = []
line_marker = dict(color='#000000', width=4)
for i, j, k in zip(x, y, z):
lines.append(go.Scatter3d(x=i, y=j, z=k, mode='lines', line=line_marker))
fig = go.Figure(data=lines)
pio.templates["Mathematica"] = go.layout.Template()
# Update layout
fig.update_layout(
title='3D Hyperbolic Paraboloid',
scene=dict(
xaxis_title='X-axis',
yaxis_title='Y-axis',
zaxis_title='Z-axis'
),
template = 'Mathematica',
)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/oJNxMpLA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJNxMpLA.png" alt="enter image description here" /></a></p>
|
<python><plotly>
|
2024-10-03 03:42:28
| 1
| 526
|
R Walser
|
79,048,982
| 2,838,281
|
PyArrow error while using Streamlit agraph component
|
<p>While running 'https://github.com/ChrisDelClea/streamlit-agraph/examples/karate_club_graph.py' as is, got following error:</p>
<pre><code>StreamlitAPIException: To use Custom Components in Streamlit, you need to install PyArrow. To do so locally:
pip install pyarrow
And if you're using Streamlit Cloud, add "pyarrow" to your requirements.txt.
Traceback:
File "karate_club_graph.py", line 30, in <module>
return_value = agraph(nodes=nodes,
File "...envs\genai\lib\site-packages\streamlit_agraph\__init__.py", line 38, in agraph
component_value = _agraph(data=data_json, config=config_json)
</code></pre>
<p>I do have pyarrow installed, but still ..</p>
<p>How to fix?</p>
|
<python><streamlit><pyarrow>
|
2024-10-03 02:42:48
| 1
| 505
|
Yogesh Haribhau Kulkarni
|
79,048,979
| 1,413,856
|
Change terminal command colour in PyCharm
|
<p>Iβm running a fairly current version of PyCharm on Windows 10. Itβs an older version of Windows, so I canβt run the very latest PyCharm.</p>
<p>I have chosen a light them, which works well enough until it comes to the Terminal. The terminal, it appears uses PowerShell. The real problem is that commands, such as <code>dir</code> and <code>python</code> appear in yellow. Against a white background.</p>
<p>Iβve tried changing every setting I can find and the only thing that I canβt fix is this yellow colour.</p>
<p>Is there any way of changing this colour? If that involves using <code>cmd</code> instead of PowerShell, thatβs also fine, if someone can show me how.</p>
|
<python><powershell><pycharm>
|
2024-10-03 02:40:56
| 2
| 16,921
|
Manngo
|
79,048,862
| 17,005,119
|
Python decode() mangles bytes object (either returns empty string, or only last line of text)
|
<p>For the life of me, I can't figure out what's happening here...</p>
<p>I'm capturing the output of a command line utility via <code>subprocess.popen()</code>, and processing the stdout line by line via <code>process.stdout.readline()</code> (which returns a bytes object). I want to convert each stdout line to a string, but when I convert it with <code>output.decode()</code>, it either (1) returns an empty string (even though there is text in the bytes object), or (2) only returns the last line in the bytes object.</p>
<p>I have looked through the <a href="https://docs.python.org/3/library/codecs.html#codecs.Codec.decode" rel="nofollow noreferrer">python docs for Code.decode</a>, but can't figure out why this is happening or how to remedy it.</p>
<p>Below is a snippet of the code I'm using, and some example outputs.</p>
<p><strong>Code snippet:</strong></p>
<pre><code>output = process.stdout.readline() # returns a bytes object
if output:
print(str(output))
print(output.decode())
</code></pre>
<p><strong>Example 1 (only last line being returned by decode()):</strong></p>
<p><em>first print statement (the bytes object):</em></p>
<pre><code>b'0M Scan C:\\Users\\Me\\Documents\\\r \r24 folders, 8 files, 65 bytes (1 KiB)'
</code></pre>
<p><em>second print statement (result of decode()):</em></p>
<p><code>24 folders, 8 files, 65 bytes (1 KiB)</code></p>
<p><em>(I would expect it to be this:)</em></p>
<pre><code>Scan C:\\Users\\Me\\Documents\\
24 folders, 8 files, 65 bytes (1 KiB)
</code></pre>
<p><strong>Example 2 (decode() returning empty string):</strong></p>
<p><em>first print statement (the bytes object):</em></p>
<p><code>b'0%'</code></p>
<p><em>second print statement (result of decode()):</em></p>
<p>"" (an empty string)</p>
<p><em>(I would expect it to be this:)</em></p>
<p><code>0%</code></p>
<p>I have tried <code>output.decode('utf-8')</code> but the same result. <code>output</code> is NOT being accessed/modified elsewhere. Why might this be happening? Could this be because I'm on a Windows machine?</p>
|
<python><python-3.x><decode>
|
2024-10-03 01:17:18
| 1
| 617
|
bikz
|
79,048,641
| 12,011,020
|
Python polars write / read csv handling of linebreaks (eol)
|
<p>I want to read in mockup data that contains a linebreak (eol) char.
Here I utilize the faker package to simulate some data.</p>
<p>I initialize a polars.DataFrame and write it to .csv. When I later on try to read the csv
(see below), I receive an error, which indicates that the match between the column-name, the dtype (<code>schema_overrides</code>) and the data does not match.</p>
<p>My best gues is, that the error is due to the linebreak / eol in the line field. If I comment out the generation of the address field it runs through smoothly. Now how does one best handle strings with linebreaks? I thought this should be catched via the <code>quote_char</code> (default =<code>"</code>) and the <code>quoting_style</code> in <code>df.write_csv</code> (<a href="https://docs.pola.rs/api/python/stable/reference/api/polars.DataFrame.write_csv.html" rel="nofollow noreferrer">link</a>)</p>
<h2>Error</h2>
<blockquote>
<p>pydf = PyDataFrame.read_csv(
polars.exceptions.ComputeError: could not parse <code>"Edwards, Duncan and Moore"</code> as dtype <code>date</code> at column 'Date_of_birth' (column number
5)</p>
<p>The current offset in the file is 309563 bytes.</p>
</blockquote>
<h3>Code Producing the error</h3>
<pre class="lang-py prettyprint-override"><code># Reading in throws an error
df_throws = pl.read_csv("mockme_up.csv", schema_overrides=dtypes, separator=";")
</code></pre>
<h2>MRE Data</h2>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from faker import Faker
fake = Faker()
# Erstellung der Mockup-Daten unter Verwendung von Faker
N = int(1e4)
data = {
"Name": [fake.name() for _ in range(N)],
"Address": [fake.address() for _ in range(N)],
"Email": [fake.email() for _ in range(N)],
"Phonenumber": [fake.phone_number() for _ in range(N)],
"Date_of_birth": [
fake.date_of_birth(minimum_age=18, maximum_age=90) for _ in range(N)
],
"Company": [fake.company() for _ in range(N)],
"Job": [fake.job() for _ in range(N)],
"IBAN": [fake.iban() for _ in range(N)],
"Creditcard": [fake.credit_card_number() for _ in range(N)],
"Creation_date": [fake.date() for _ in range(N)],
}
dtypes = {
"Name": pl.Utf8,
"Address": pl.Utf8,
"Email": pl.Utf8,
"Phonenumber": pl.Utf8,
"Date_of_birth": pl.Date,
"Company": pl.Utf8,
"Job": pl.Utf8,
"IBAN": pl.Utf8,
"Creditcard": pl.Int64,
"Creation_date": pl.Date,
}
df = pl.DataFrame(data)
df.write_csv("mockme_up.csv", separator=";", quote_style="non_numeric")
print("=" * 50)
print(f"Succeful created mockup data of shape {df.shape=}")
print("=" * 50)
</code></pre>
<h2>Update / Solution</h2>
<p>This is now tracked as github issue: <a href="https://github.com/pola-rs/polars/issues/19078" rel="nofollow noreferrer">https://github.com/pola-rs/polars/issues/19078</a></p>
<p>As a workaround passing <code>n_threads=1</code> to <code>read_csv()</code> will fix the issue</p>
|
<python><dataframe><csv><python-polars><eol>
|
2024-10-02 22:32:39
| 0
| 491
|
SysRIP
|
79,048,626
| 1,306,779
|
How can I run a loop forever until a termination condition or for a fixed number of iterations?
|
<p>I have a program that runs an operation either a fixed number of times, or "forever" until some condition is reached. The implementation was fine, but I was having trouble getting a code structure I was happy with.</p>
<p>My initial logic boiled down to:</p>
<pre><code>max_iterations = 5 # Either an integer or None (the number actually comes from user input)
stop_flag = False
i = 0
while not stop_flag:
do_some_operation(i)
i += 1
stop_flag = (i == max_iterations) or test_some_stop_condition()
</code></pre>
<p>But I wasn't very happy with this, I don't like that there's both an <code>i</code> variable and a <code>stop</code> flag.</p>
<p>The stop flag can be eliminated by moving the logic to the while:</p>
<pre><code>i = 0
while not ((i == max_iterations) or test_some_stop_condition()):
do_some_operation(i)
i += 1
</code></pre>
<p>but this seems even less readable to me. That conditional is not particularly understandable now it's moved to the <code>while</code>.</p>
<p>So, I used <a href="https://docs.python.org/3/library/itertools.html#itertools.count" rel="nofollow noreferrer"><code>itertools.count</code></a> to do the iteration:</p>
<pre><code>import itertools
max_iterations = 5 # Either an integer or None (the number actually comes from user input)
for i in itertools.count():
do_some_operation(i)
if (i == max_iterations) or test_some_stop_condition():
break
</code></pre>
<p>I'm not very happy with this either, but I think it's reasonably clean and readable. Ideally I think the termination condition should be entirely in the <code>for</code> or <code>while</code>, leaving the body of the loop entirely for the actual work, but I can't think of a way to do it cleanly.</p>
<p>Is there a better solution?</p>
<p>Some notes:</p>
<ul>
<li>the <code>i</code> variable is required, since <code>do_some_operation</code> makes use of it.</li>
<li><code>test_some_stop_condition()</code> can still return <code>True</code> regardless of whether <code>i</code> is <code>None</code> or an <code>int</code>. This is fine.</li>
</ul>
<p><em>(This question might be better suited to <a href="https://codereview.stackexchange.com/">https://codereview.stackexchange.com/</a>, I'm honestly not sure...)</em></p>
|
<python><python-3.x><loops><logic>
|
2024-10-02 22:24:59
| 3
| 1,575
|
jfowkes
|
79,048,606
| 9,213,069
|
Poetry init in Google Colab is running for hour
|
<p>I'm trying to run langgraph using my Google Colab. So I used following command to install <code>poetry</code> in my Google Colab.</p>
<pre><code>from google.colab import drive
drive.mount('/content/gdrive')
# Move in your Drive
%cd /content/gdrive/MyDrive/
# Create and move in the new project directory
!rm -rf reflection-agent
!mkdir reflection-agent
%cd reflection-agent
!pip install poetry
# Configure poetry to create virtual environments in the project folder
!poetry config virtualenvs.in-project true
</code></pre>
<p>But while trying to create <code>pyproject.toml</code> file in my directory <code>reflection-agent</code>, My following command <code>!poetry init</code> running for hours without creating <code>pyproject.toml</code>
Can you please help me to resolve this issue?</p>
|
<python><google-colaboratory><python-poetry><langgraph>
|
2024-10-02 22:13:25
| 1
| 883
|
Tanvi Mirza
|
79,048,410
| 6,691,064
|
Error in pip command - directory not installable
|
<p>I am new to python and trying to setup using this-</p>
<p><a href="https://github.com/facebookresearch/CodeGen/blob/main/install_env.sh" rel="nofollow noreferrer">https://github.com/facebookresearch/CodeGen/blob/main/install_env.sh</a></p>
<p>in google collab notebook.</p>
<p>But, it is throwing error in the below line:</p>
<pre class="lang-bash prettyprint-override"><code>pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
</code></pre>
<p>I tried changing the last parameter to /bin but still doesn't work.
Can someone please help.</p>
<p>Below is the error:</p>
<pre class="lang-none prettyprint-override"><code>Using pip 24.1.2 from /usr/local/lib/python3.10/dist-packages/pip (python 3.10)
ERROR: Directory './' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
/bin/bash: line 1: go: command not found
</code></pre>
|
<python><pip>
|
2024-10-02 20:40:13
| 1
| 1,509
|
vikash singh
|
79,048,003
| 20,591,261
|
Following and Count of State Changes Between Columns in Polars
|
<p>I have a dataframe with multiple IDs and corresponding states. I want to analyze how the states have changed over time and present this information effectively.</p>
<p>Here is an example:</p>
<pre><code>import polars as pl
df = pl.DataFrame({
"ID": [1, 2, 3],
"T0": ["A", "B", "C"],
"T1": ["B", "B", "A"],
})
</code></pre>
<p>One aproach it's to "concat" the columns , and then do a <code>value_counts()</code> of the Change column</p>
<pre><code>df = df.with_columns(
(pl.col("T0") + " -> " + pl.col("T1")).alias("Change")
)
</code></pre>
<p>However, there might be a better approach to this, or even a built-in function that can achieve what I need more efficiently.</p>
<p>Current Output:</p>
<pre><code>shape: (3, 4)
βββββββ¬ββββββ¬ββββββ¬βββββββββ
β ID β T0 β T1 β Change β
β --- β --- β --- β --- β
β i64 β str β str β str β
βββββββͺββββββͺββββββͺβββββββββ‘
β 1 β A β B β A -> B β
β 2 β B β B β B -> B β
β 3 β C β A β C -> A β
βββββββ΄ββββββ΄ββββββ΄βββββββββ
shape: (3, 2)
ββββββββββ¬ββββββββ
β Change β count β
β --- β --- β
β str β u32 β
ββββββββββͺββββββββ‘
β C -> A β 1 β
β B -> B β 1 β
β A -> B β 1 β
ββββββββββ΄ββββββββ
</code></pre>
|
<python><python-polars>
|
2024-10-02 18:13:54
| 1
| 1,195
|
Simon
|
79,047,912
| 8,536,621
|
Testing constraints of pydantic models
|
<p>Say I have a model</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
class Book(BaseModel):
name: str
description: str = Field(min_length=1, max_length=64, pattern="^[a-z]+$")
</code></pre>
<p>I would like to write unit tests ensuring the model has properly been defined.
To be clear, I trust pydantic to be able to properly validate what <strong>is</strong> defined, I don't trust myself nor my coworkers to always properly define the model constraints, which are our business requirements.</p>
<p>To tests these constraints/requirements I'm tempted to instantiate the model with wrong values, then catch the <code>ValidationError</code>, then check the content of the error to make sure it's the expected error.
Something like:</p>
<pre class="lang-py prettyprint-override"><code>def test_book__error_invalid_description():
with pytest.raise(ValidationError) as err:
Book(name="toto", description="123")
assert "pattern_error" in err.value.errors # not working, that's just the idea
</code></pre>
<p>However I find this very ugly and difficult to maintain. Also, I believe this is mainly testing pydantic itself and just a little bit my requirement (not even all the cases for the pattern requirement are tested here).</p>
<p>I would rather test the presence of the constraints themselves, but I cannot find a clean and concise way of doing it.
Here is what I tried:</p>
<pre class="lang-py prettyprint-override"><code>from annotated_types import MinLen, MaxLen
def test_book__validation_constraints():
assert Book.model_fields["name"].is_required()
assert MinLen(1) in Book.model_fields["description"].metadata
assert MaxLen(64) in Book.model_fields["description"].metadata
assert # no idea how to assert pattern
</code></pre>
<p>Is there a better way of doing this ?
Do you think it is overkill to test this sort of requirements ?</p>
|
<python><unit-testing><pydantic><pydantic-v2>
|
2024-10-02 17:38:17
| 1
| 818
|
Abel
|
79,047,909
| 7,215,853
|
How to ensure compatibility between my local dev environment and AWS Lambda runtime (AWS CDK V2)
|
<p>I am currently having an issue with Python packages on AWS Lambda.</p>
<p>I have defined a Lambda layer like this:</p>
<pre><code> my_layer = _lambda.LayerVersion(
self, "MyLayer",
code=_lambda.Code.from_asset("layer_code_directory"),
compatible_runtimes=[_lambda.Runtime.PYTHON_3_12],
description="Lambda Layer for common dependancies"
)
</code></pre>
<p>The layer is used for common dependencies that most of my Lambdas share. One of them is the cryptography package. The pip install all the packages in the requirements.txt into "layer_code_directory". The CDK (afaik) packages those into a zip and then uploads in to the lambda layer.</p>
<p>However, when I try to run my Lambda, I get an error, that some files from cryptography are missing:</p>
<p><strong>Runtime.ImportModuleError: Unable to import module 'MyAuthorizer': /opt/python/cryptography/hazmat/bindings/_rust.abi3.so: cannot open shared object file: No such file or directory</strong></p>
<p>I have debugged this by checking whether the package is available to the Lambda and what paths those files have that are missing. I crosschecked them with the error message and <strong>all the files are available in the correct places</strong>.</p>
<p>From some Google research I now assume, that the version of cryptography, I install locally via pip is not compatible with the lambda runtime, and afaik, the pip installs different files depending on what OS you are on.</p>
<p>On my first tries, I was on Alpine (Official Python docker container), and I have now moved to a container based on the AWS Python, Lambda, and Image. I assumed, when I bundle packages installed in this environment, the packages would be compatible with AWS Lambda. But I still got the same error message.</p>
<p>How to solve this issue or what else there could be?</p>
<p>I assume, that deploying Lambda code and packages via AWS CDK is a very common practice, but using Google, AWS, Docs, and LLMs.</p>
<p>I was unable to find the solution or a working example.</p>
|
<python><amazon-web-services><aws-lambda><aws-cdk>
|
2024-10-02 17:38:04
| 2
| 320
|
MrTony
|
79,047,784
| 7,116,385
|
transformers[agents] library doesn't work properly on Linux systems
|
<p>I'm following a huggingface tutorial <a href="https://huggingface.co/docs/transformers/en/agents" rel="nofollow noreferrer">Huggingface agent 101</a>. I installed the library by executing <code>pip install transformers[agents]</code> on colab and when I imported the tool using <code>from transformers import tool</code> I get the following error: <code>AttributeError: module transformers has no attribute tool</code></p>
<p>Now when I checked the transformers library by run <code>dir(transformers)</code>, it indeed doesn't have any tool function or class.</p>
<p>Contrary to this when I ran the same set of commands on my Windows-11 machine, it worked and had <code>tool</code> listed too.</p>
<p>Just to be sure on my hypothesis, I ran the same code on Kaggle notebook as well, it too had the same issue as that of Google colab</p>
<p>The python version for the 3 environments are:</p>
<ol>
<li>Windows11: python 3.10.14</li>
<li>Kaggle notebook: python 3.10.14</li>
<li>Google colab: python 3.10.12</li>
</ol>
<p>Any help will be highly appreciated.</p>
|
<python><huggingface-transformers>
|
2024-10-02 16:55:41
| 0
| 467
|
Ashish Johnson
|
79,047,727
| 610,569
|
How to implement SwiGLU activation? Why does SwiGLU takes in two tensors?
|
<p>The SwiGLU variant introduced in <a href="https://arxiv.org/pdf/2002.05202" rel="nofollow noreferrer">https://arxiv.org/pdf/2002.05202</a> is simply "divine benevolence" and the implementation on Flash-Attention just works out of the box <a href="https://github.com/Dao-AILab/flash-attention/tree/main" rel="nofollow noreferrer">https://github.com/Dao-AILab/flash-attention/tree/main</a></p>
<p>Comparing the the implementation:</p>
<pre><code>import torch
swiglu_fwd_codestring = """
template <typename T> T swiglu_fwd(T x, T y) {
return float(x) * float(y) / (1.0f + ::exp(-float(x)));
}
"""
swiglu_bwd_codestring = """
template <typename T> T swiglu_bwd(T x, T y, T g, T& dx, T& dy) {
float x_sigmoid = 1.0f / (1.0f + ::exp(-float(x)));
dx = x_sigmoid * (1 + float(x) * (1.0f - x_sigmoid)) * float(g) * float(y);
dy = float(x) * x_sigmoid * float(g);
}
"""
swiglu_fwd = torch.cuda.jiterator._create_jit_fn(swiglu_fwd_codestring)
swiglu_bwd = torch.cuda.jiterator._create_multi_output_jit_fn(swiglu_bwd_codestring, num_outputs=2)
class SwiGLUFunction(torch.autograd.Function):
@staticmethod
def forward(ctx, x, y):
ctx.save_for_backward(x, y)
return swiglu_fwd(x, y)
@staticmethod
def backward(ctx, dout):
x, y = ctx.saved_tensors
return swiglu_bwd(x, y, dout)
swiglu = SwiGLUFunction.apply
swiglu(torch.tensor(1.0).to('cuda'), torch.tensor(0.5).to('cuda'))
</code></pre>
<p>and</p>
<pre><code>import torch.nn.functional as F
swiglu_torch = lambda x, y: F.silu(x) * y
swiglu_torch(torch.tensor(1.0).to('cuda'), torch.tensor(0.5).to('cuda'))
</code></pre>
<p>They both give the same output:</p>
<pre><code>tensor(0.3655, device='cuda:0')
</code></pre>
<p>The paper description, the input to the activation is a single <code>x</code> tensor,</p>
<p><a href="https://i.sstatic.net/JmidVP2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JmidVP2C.png" alt="enter image description here" /></a></p>
<p>but both the implementation is taking <code>x</code> and <code>y</code>, <strong>what does the <code>y</code> corresponds to when we only have the <code>x</code> tensor as in the input in the formula? Is that <code>x = W1.x</code> and <code>y=W2.x</code>?</strong></p>
|
<python><deep-learning><pytorch><large-language-model><flash-attn>
|
2024-10-02 16:39:09
| 1
| 123,325
|
alvas
|
79,047,570
| 8,564,676
|
Kafka external client receives other name than in ADVERTISED_LISTENERS
|
<p>On a local network, I have a broker at cgw.local and a client at rpi.local, connected to the same switch. No matter what I put into <code>kafka.advertised.listeners</code>: <code>PLAINTEXT://cgw.local:9092</code> or <code>PLAINTEXT://20.5.28.284:9092</code>, or whether the (Python's) client <code>BOOTSTRAP_SERVERS</code> is <code>['cgw.local']</code> or <code>['20.5.28.284:9092']</code>, after a successfull subscription the client complaints:</p>
<pre><code>WARNING:kafka.conn:DNS lookup failed for cgw:9092, exception was [Errno -2] Name or service not known. Is your advertised.listeners (called advertised.host.name before Kafka 9) correct and resolvable?
</code></pre>
<p>and does not download any messages. It looks for <code>cgw</code> which does not exist, what exists is <code>cgw.local</code> or (dynamic) <code>20.5.28.284</code>.</p>
<p>Here are the server properties:</p>
<pre><code>group.initial.rebalance.delay.ms=0
broker.id=0
log.dirs=/tmp/kafka-logs
kafka.listeners=PLAINTEXT://0.0.0.0:9092
kafka.advertised.listeners=PLAINTEXT://cgw.local:9092 # or 20.5.28.284:9092
</code></pre>
<p>There is no docker involved, Kafka runs directly on cgw.local.</p>
|
<python><apache-kafka><kafka-python>
|
2024-10-02 15:55:09
| 0
| 513
|
scriptfoo
|
79,047,541
| 446,786
|
Making instances of Django records with a through model
|
<p>Let's say you have a concept of a Battle, and in that battle there are Players and Enemies. The players are a simple ManyToMany, but Enemies require a Through model, because if there's a DB entry for "Goblin", players need to fight an INSTANCE of the Goblin model. Players may fight many Goblins, and each needs their own health/status at any given moment.</p>
<p>So far, I've got a Django model like so (I'm simplifying the code for readability, and to focus on the main issue)</p>
<pre><code>class Battle(models.Model):
players = ManyToMany(Player)
enemies = ManyToMany(Enemy, through=EnemyThroughModel)
</code></pre>
<p>With the appropriate adjustments to admin.py and such, this works in that I can attach multiple Enemies to a Battle and see them listed separately in admin, HOWEVER, those are all keyed to the same basic Enemy, and if I change one of them (say, they take damage), ALL the keyed references to that enemy now have their health reduced.</p>
<p>Is there a neat way I can use through models to create a new instance of the enemies, so that they have independent health/mana/etc?</p>
|
<python><django><database><django-models><orm>
|
2024-10-02 15:47:42
| 1
| 311
|
Josh
|
79,047,285
| 5,924,007
|
opentelemetry.instrumentation.instrumentor - DependencyConflict: requested: "psycopg2 >= 2.7.3.1" but found: "None"
|
<p>Has someone tried to get the OTEL for the PostgresSQL queries ? We have a fass-lambda that makes simple queries We are trying get info about how long the query took to execute by adding opentelemetry-instrumentation-psycopg2. Adding psycopg2 to requirements.txt doesn't work. We get the following error:</p>
<pre><code> Γ python setup.py egg_info did not run successfully.
--> β exit code: 1
--> β°β> [23 lines of output]
--> running egg_info
--> creating /tmp/pip-pip-egg-info-ngamcigt/psycopg2.egg-info
--> writing /tmp/pip-pip-egg-info-ngamcigt/psycopg2.egg-info/PKG-INFO
--> writing dependency_links to /tmp/pip-pip-egg-info-ngamcigt/psycopg2.egg-info/dependency_links.txt
--> writing top-level names to /tmp/pip-pip-egg-info-ngamcigt/psycopg2.egg-info/top_level.txt
--> writing manifest file '/tmp/pip-pip-egg-info-ngamcigt/psycopg2.egg-info/SOURCES.txt'
-->
--> Error: pg_config executable not found.
-->
--> pg_config is required to build psycopg2 from source. Please add the directory
--> containing pg_config to the $PATH or specify the full executable path with the
--> option:
-->
--> python setup.py build_ext --pg-config /path/to/pg_config build ...
-->
--> or with the pg_config option in 'setup.cfg'.
-->
--> If you prefer to avoid building psycopg2 from source, please install the PyPI
--> 'psycopg2-binary' package instead.
-->
--> For further information please check the 'doc/src/install.rst' file (also at
--> <https://www.psycopg.org/docs/install.html>).
-->
--> [end of output]
-->
--> note: This error originates from a subprocess, and is likely not a problem with pip.
--> error: metadata-generation-failed
-->
--> Γ Encountered error while generating package metadata.
--> β°β> See above for output.
-->
--> note: This is an issue with the package mentioned above, not pip.
--> hint: See above for details.
--> make: *** [install] Error 1
</code></pre>
<p>So we replaced psycopg2 with psycopg2-binary (v2.9.9)</p>
<p>But, when we added opentelemetry-instrumentation-psycopg2, the official OTEL dependency for psycopg2, we get the following error and OTEL is generated:</p>
<pre><code>[ERROR] opentelemetry.instrumentation.instrumentor - DependencyConflict: requested: "psycopg2 >= 2.7.3.1" but found: "None"
</code></pre>
<p>We are initiating the logs by adding the following line of code at the beginning of the file.</p>
<pre><code>Psycopg2Instrumentor().instrument(enable_commenter=True)
</code></pre>
<p>Has anyone encountered this before ? Is there a work around?</p>
|
<python><psycopg2><open-telemetry>
|
2024-10-02 14:44:09
| 0
| 4,391
|
Pritam Bohra
|
79,047,250
| 2,955,541
|
Numba cuda.jit and njit giving different results
|
<p>In the following example, I have a simple CPU function:</p>
<pre><code>import numpy as np
from numba import njit, cuda
@njit
def cpu_func(a, b, c, d):
for i in range(len(a)):
for l in range(d[i], 0, -1):
for j in range(l):
a[i, j] = (b[i] * a[i, j] + (1.0 - b[i]) * a[i, j + 1]) / c[i]
return a[:, 0]
</code></pre>
<p>and an equivalent GPU implementation of the same function above:</p>
<pre><code>@cuda.jit
def _gpu_func(a, b, c, d, out):
i = cuda.grid(1)
if i < len(a):
for l in range(d[i], 0, -1):
for j in range(l):
a[i, j] = (b[i] * a[i, j] + (1.0 - b[i]) * a[i, j + 1]) / c[i]
out[i] = a[i, 0]
def gpu_func(a, b, c, d):
d_a = cuda.to_device(a)
d_b = cuda.to_device(b)
d_c = cuda.to_device(c)
d_d = cuda.to_device(d)
d_out = cuda.device_array(len(a))
threads_per_block = 64
blocks_per_grid = 128
_gpu_func[blocks_per_grid, threads_per_block](d_a, d_b, d_c, d_d, d_out)
out = d_out.copy_to_host()
return out
</code></pre>
<p>However, when I call BOTH functions with the same inputs, I am getting very different results:</p>
<pre><code>a = np.array([[
1.150962188573305234e+00, 1.135924188549360281e+00, 1.121074496043255930e+00, 1.106410753047196494e+00, 1.091930631080626046e+00,
1.077631830820479752e+00, 1.063512081736074144e+00, 1.049569141728566635e+00, 1.035800796774924315e+00, 1.022204860576360286e+00,
1.008779174211164253e+00, 9.955216057918859773e-01, 9.824300501268078412e-01, 9.695024283856580327e-01, 9.567366877695103744e-01,
9.441308011848159598e-01, 9.316827669215179686e-01, 9.193906083351972569e-01, 9.072523735331967654e-01, 8.952661350646772265e-01,
8.834299896145549891e-01, 8.717420577012704452e-01, 8.602004833783417626e-01, 8.488034339396569594e-01, 8.375490996284553624e-01,
8.264356933499517055e-01, 8.154614503875622367e-01, 8.046246281226814290e-01, 7.939235057579688837e-01, 7.833563840441006842e-01,
7.729215850099430130e-01, 7.626174516961046201e-01, 7.524423478918242925e-01, 7.423946578751554615e-01, 7.324727861564024334e-01,
7.226751572247696043e-01, 7.130002152981841368e-01, 7.034464240762497989e-01, 6.940122664962961041e-01, 6.846962444924807878e-01,
6.754968787579095357e-01, 6.664127085097342196e-01, 6.574422912571935562e-01, 6.485842025725561122e-01, 6.398370358649341227e-01,
6.311994021569281577e-01, 6.226699298640693270e-01, 6.142472645770222783e-01, 6.059300688465173446e-01, 5.977170219709739829e-01,
0.000000000000000000e+00
]])
b = np.array([1.533940813369776279e+00])
c = np.array([1.018336794718317062e+00])
d = np.array([50], dtype=np.int64)
cpu_func(a.copy(), b, c, d) # Produces array([0.74204214])
gpu_func(a.copy(), b, c, d) # Produces array([0.67937252])
</code></pre>
<p>I understand that numerical precision can be an issue (e.g., overflow/underflow) and each compiler may optimize the code differently (e.g., fused multiply-and-add) but, setting that aside for now, is there some way to ensure/force the CPU code and the GPU code to both produce the SAME output (up to 4-5 decimal places)? Or maybe there is some clever way to improve the precision?</p>
<p><strong>Additional Validation</strong></p>
<p>For completeness, I attempted to perform the same computation using the <a href="https://mpmath.org/doc/current/" rel="nofollow noreferrer">mpmath</a> package which claims to provide fixed precision capabilities:</p>
<pre><code>import mpmath
mpmath.mp.dps = 100 # Precision
def mpmath_cpu_func(a, b, c, d):
for i in range(a.rows):
for l in range(d[i], 0, -1):
for j in range(l):
a[i, j] = (b[i] * a[i, j] + (1.0 - b[i]) * a[i, j + 1]) / c[i]
return a[:, 0]
# Convert inputs to mpmath types with high precision
mp_a = mpmath.matrix(a.copy())
for i in range(a.shape[0]):
for j in range(a.shape[1]):
mp_a[i, j] = mpmath.mpf(a[i, j].astype(str))
mp_b = [mpmath.mpf(b[0].astype(str))]
mp_c = [mpmath.mpf(c[0].astype(str))]
mpmath_cpu_func(mp_a, mp_b, mp_c, d) # Produces 0.6449015342958763156413719663014237700252472529553791866336058829452386808983049961922292904843391669
</code></pre>
<p>Note that the high precision result is also different from the CPU and GPU results above. Is there is some clever way to improve the precision of the GPU/CPU implementations above? I would be happy to get within 4-5 decimal places of agreement amongst all three versions.</p>
<p><strong>Update</strong></p>
<blockquote>
<p>I understand that numerical precision can be an issue (e.g., overflow/underflow) and each compiler may optimize the code differently (e.g., fused multiply-and-add) but, setting that aside for now, is there some way to ensure/force the CPU code and the GPU code to both produce the SAME output (up to 4-5 decimal places)? Or maybe there is some clever way to improve the precision?</p>
</blockquote>
<p>I was able to leverage the <a href="https://pypi.org/project/pyfma/" rel="nofollow noreferrer">pyfma</a> package in the CPU code (without <code>numba</code> <code>njit</code>!) to produce a result that matches the GPU code:</p>
<pre><code>import numpy as np
import pyfma
import _pyfma
from numpy.typing import ArrayLike
def monkey_patch_fma(a: ArrayLike, b: ArrayLike, c: ArrayLike) -> np.ndarray:
"""
This is a simple monkey patch to make `pyfma` compatible with NumPy v2.0
Based on this `pyfma` PR - https://github.com/nschloe/pyfma/pull/17/files
"""
a = np.asarray(a)
b = np.asarray(b)
c = np.asarray(c)
# dtype = np.find_common_type([], [a.dtype, b.dtype, c.dtype])
dtype = np.promote_types(np.promote_types(a.dtype, b.dtype), c.dtype)
a = a.astype(dtype)
b = b.astype(dtype)
c = c.astype(dtype)
if dtype == np.single:
return _pyfma.fmaf(a, b, c)
elif dtype == np.double:
return _pyfma.fma(a, b, c)
assert dtype == np.longdouble
return _pyfma.fmal(a, b, c)
pyfma.fma = monkey_patch_fma
def fma_cpu_func(a, b, c, d):
for i in range(len(a)):
for l in range(d[i], 0, -1):
for j in range(l):
a[i, j] = pyfma.fma(b[i], a[i, j], (1.0 - b[i]) * a[i, j + 1]) / c[i]
return a[:, 0]
a = np.array([[
1.150962188573305234e+00, 1.135924188549360281e+00, 1.121074496043255930e+00, 1.106410753047196494e+00, 1.091930631080626046e+00,
1.077631830820479752e+00, 1.063512081736074144e+00, 1.049569141728566635e+00, 1.035800796774924315e+00, 1.022204860576360286e+00,
1.008779174211164253e+00, 9.955216057918859773e-01, 9.824300501268078412e-01, 9.695024283856580327e-01, 9.567366877695103744e-01,
9.441308011848159598e-01, 9.316827669215179686e-01, 9.193906083351972569e-01, 9.072523735331967654e-01, 8.952661350646772265e-01,
8.834299896145549891e-01, 8.717420577012704452e-01, 8.602004833783417626e-01, 8.488034339396569594e-01, 8.375490996284553624e-01,
8.264356933499517055e-01, 8.154614503875622367e-01, 8.046246281226814290e-01, 7.939235057579688837e-01, 7.833563840441006842e-01,
7.729215850099430130e-01, 7.626174516961046201e-01, 7.524423478918242925e-01, 7.423946578751554615e-01, 7.324727861564024334e-01,
7.226751572247696043e-01, 7.130002152981841368e-01, 7.034464240762497989e-01, 6.940122664962961041e-01, 6.846962444924807878e-01,
6.754968787579095357e-01, 6.664127085097342196e-01, 6.574422912571935562e-01, 6.485842025725561122e-01, 6.398370358649341227e-01,
6.311994021569281577e-01, 6.226699298640693270e-01, 6.142472645770222783e-01, 6.059300688465173446e-01, 5.977170219709739829e-01,
0.000000000000000000e+00
]])
b = np.array([1.533940813369776279e+00])
c = np.array([1.018336794718317062e+00])
d = np.array([50], dtype=np.int64)
fma_cpu_func(a.copy(), b, c, d) # Produces array([0.67937252])
</code></pre>
<p>So, both the CPU and GPU outputs are identical and "closer" to the higher-precision output (<code>0.644901534295876315641</code>) but still inaccurate.</p>
|
<python><numpy><cuda><precision><numba>
|
2024-10-02 14:35:47
| 1
| 6,989
|
slaw
|
79,047,117
| 3,663,124
|
UV showing warning of incorrect Python version
|
<p>I'm using uv to manage my Python version and dependencies in a project I'm working on.</p>
<p>When I run <code>uv run pytest</code> to test said project I get the following:</p>
<pre><code>============================================================================= test session starts =============================================================================
platform darwin -- Python 3.12.5, pytest-8.3.3, pluggy-1.5.0
...
(test output)
...
============================================================================== warnings summary ===============================================================================
../../../../../usr/local/lib/python3.9/site-packages/certifi/core.py:36
/usr/local/lib/python3.9/site-packages/certifi/core.py:36: DeprecationWarning: path is deprecated. Use files() instead. Refer to https://importlib-resources.readthedocs.io/en/latest/using.html#migrating-from-legacy for migration advice.
_CACERT_CTX = get_path("certifi", "cacert.pem")
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
</code></pre>
<p>I don't understand what's going on.</p>
<p>First it says it's using Python3.12.5 which is the version I specified for this project, but then the warning is coming from Python3.9 ?</p>
<p>How can I get rid of this message?</p>
|
<python><pytest><uv>
|
2024-10-02 14:03:29
| 2
| 1,402
|
fedest
|
79,047,095
| 4,451,315
|
Does `row_number() over ()` have any guarantees when called on an in-memory dataframe?
|
<p>If I run</p>
<pre class="lang-py prettyprint-override"><code>duckdb.sql('select *, row_number() over () as index from df')
</code></pre>
<p>on a pandas/Polars dataframe, then does the row number always enumerate the rows in order of appearance?</p>
<p>This seems to be the case:</p>
<pre class="lang-py prettyprint-override"><code>In [11]: df = pl.DataFrame({'a': range(1, 1_280_000)}); duckdb.sql('select *, row_number() over () as index from df qualify a != index').pl()
Out[11]:
shape: (0, 2)
βββββββ¬ββββββββ
β a β index β
β --- β --- β
β i64 β i64 β
βββββββͺββββββββ‘
βββββββ΄ββββββββ
</code></pre>
<p>but is it something which can be relied upon more generally?</p>
|
<python><duckdb>
|
2024-10-02 13:57:41
| 0
| 11,062
|
ignoring_gravity
|
79,047,058
| 5,993,062
|
How can I bind a user license to a unique fingerprint in an app running in Docker container?
|
<p>I'm working on a Python application deployed inside a Docker container, and I need to bind a user's license to a unique fingerprint. The goal is to ensure that each license can only be used by one instance of the application at a time.</p>
<p>However, I cannot use any hardware information (e.g. using <code>lshw</code>) because Iβve noticed that, within a Docker container, hardware-related information can be easily changed by the resource allocator.</p>
<p>What are the best approaches for generating a reliable fingerprint in this context?</p>
<p>Any advice or best practices would be greatly appreciated!</p>
<p>I've thought about solutions like a file-based fingerprint, but still unsure how to make sure this cannot be spoofed.</p>
|
<python><docker><licensing><fingerprint>
|
2024-10-02 13:50:09
| 1
| 469
|
Steve Lukis
|
79,046,766
| 9,609,843
|
How to cancel long running asyncio.Task?
|
<p>I have the following example of code:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import concurrent.futures
import functools
import time
async def run_till_first_success(tasks, timeout=None):
results = []
exceptions = []
while tasks:
try:
async with asyncio.timeout(timeout):
done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED, timeout=timeout)
except TimeoutError:
print('timeout is catched')
for task in tasks:
task.cancel() # this is not working!
return
# >>> this code doesn't actually matter
for task in done:
if task.exception():
exceptions.append(task.exception())
else:
results.append(task.result())
tasks = pending
if results or len(pending) == 0:
for task in pending:
task.cancel()
break
if not results:
raise exceptions[0]
return results[0]
# <<<
class Worker:
def __init__(self, execute):
self.execute = execute
async def work(self):
return await self.execute(self._work)
def _work(self): # simulate long running not-async function
print('_work started')
time.sleep(5)
print('_work finished')
class LibraryInterfaceClass:
def __init__(self):
self._executor = concurrent.futures.ThreadPoolExecutor(10)
self._workers = [Worker(self._execute) for _ in range(2)]
async def use(self, timeout=1):
tasks = {asyncio.create_task(worker.work()) for worker in self._workers}
return await run_till_first_success(tasks, timeout)
async def _execute(self, func, *args, **kwargs):
return await asyncio.get_event_loop().run_in_executor(self._executor, functools.partial(func, *args, **kwargs))
interface = LibraryInterfaceClass()
asyncio.run(interface.use())
print('run is done')
</code></pre>
<p>When I run it, I get the following output:</p>
<pre class="lang-none prettyprint-override"><code>_work started
_work started
timeout is catched
run is done
_work finished
_work finished
</code></pre>
<p>My goal is to prevent <code>_work finished</code> from being printed. In another words, I want to cancel long running operations immediately after timeout occured. How to do this correctly? <code>task.cancel()</code> is not working in my case.</p>
<p>I can't modify <code>def _work</code>, it lies in an external library. It is a synchronous driver to external database.</p>
|
<python><python-asyncio>
|
2024-10-02 12:32:19
| 1
| 8,600
|
sanyassh
|
79,046,568
| 4,041,117
|
Netgraph Animation β how to display frame numbers
|
<p>I'm trying to add a frame number to a simulation visualization modified from <a href="https://netgraph.readthedocs.io/en/latest/sphinx_gallery_animations/plot_02_animate_edges.html#sphx-glr-sphinx-gallery-animations-plot-02-animate-edges-py" rel="nofollow noreferrer">here</a>. Is there any simple way to add a frame number to this animation so that it displays as a part of the plot title?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from netgraph import Graph
# Simulate a dynamic network with
total_frames = 10
total_nodes = 5
NODE_LABELS = {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E'}
NODE_POS = {0: (0.0, 0.5), 1: (0.65, 0.25), 2: (0.7, 0.5), 3: (0.5, 0.75), 4: (0.25, 0.25)}
adjacency_matrix = np.random.rand(total_nodes, total_nodes) < 0.25
weight_matrix = np.random.randn(total_frames, total_nodes, total_nodes)
# Normalise the weights, such that they are on the interval [0, 1].
# They can then be passed directly to matplotlib colormaps (which expect floats on that interval).
vmin, vmax = -2, 2
weight_matrix[weight_matrix<vmin] = vmin
weight_matrix[weight_matrix>vmax] = vmax
weight_matrix -= vmin
weight_matrix /= vmax - vmin
cmap = plt.cm.RdGy
fig, ax = plt.subplots()
fig.suptitle('Simulation viz @ t=', x= 0.15, y=0.95)
g = Graph(adjacency_matrix, node_labels=NODE_LABELS,
node_layout = NODE_POS, edge_cmap=cmap, arrows=True, ax=ax)
def update(ii):
artists = []
for jj, kk in zip(*np.where(adjacency_matrix)):
w = weight_matrix[ii, jj, kk]
artist = g.edge_artists[(jj, kk)]
artist.set_facecolor(cmap(w))
artist.update_width(0.03 * np.abs(w-0.5))
artists.append(artist)
return artists
animation = FuncAnimation(fig, update, frames=total_frames, interval=200, blit=True, repeat=False)
plt.show()
</code></pre>
<p>Thank you. I would appreciate any help/suggestions.</p>
|
<python><matplotlib><frame><netgraph>
|
2024-10-02 11:42:30
| 1
| 481
|
carpediem
|
79,046,498
| 8,379,035
|
How can i correctly format string widths inside of a discord Embed?
|
<p>Iβm working on a Discord bot using the <code>discord.py</code> library and encountered an issue where the printed console output is correct, but the output in the Discord embed is not displaying as expected - the String width is not correct.</p>
<p>Here is the relevant part of my code:</p>
<pre class="lang-py prettyprint-override"><code>if in_database:
searched_strings: list[str] = ["PLACEHOLDER_MONDAY", "PLACEHOLDER_TUESDAY", "PLACEHOLDER_WEDNESDAY",
"PLACEHOLDER_THURSDAY", "PLACEHOLDER_FRIDAY", "PLACEHOLDER_SATURDAY",
"PLACEHOLDER_SUNDAY"]
days: list[str] = await translate_m(interaction.guild_id, searched_strings)
for row in in_database:
day: str = f"- **{days[row[0] - 1]}**:".ljust(50, ' ')
work_hours += day + f"`{row[1]}`\n"
print(work_hours)
translations[2] = translations[2].replace("[work_hours]", work_hours)
embed: Embed = Embed(description="\n".join(translations[:-1]), color=0xEA2027)
await channel.send(embed=embed)
</code></pre>
<p>The correct console output:
<a href="https://i.sstatic.net/0mPxJmCY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0mPxJmCY.png" alt="correct console output" /></a></p>
<p>The incorrectly formatted discord embed:</p>
<p><a href="https://i.sstatic.net/JprlQ6H2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JprlQ6H2.png" alt="enter image description here" /></a></p>
<p>So my question is, how can I fix that? I heard about using a Codebox in discord to use it, but I'm not a fan of the idea to put all content here inside a Codebox - it's just ugly.</p>
|
<python><string><discord><discord.py><monospace>
|
2024-10-02 11:26:42
| 1
| 891
|
Razzer
|
79,046,492
| 12,962,917
|
How may I free memory allocated by foreign C calls in Python?
|
<p>TlDr: my Python code makes external C library calls. Internally, the Python scripts use a few hundred Mb of RAM but the external C calls are using many Gb, even when I no longer need the data they have outputted. I would like to know how to free this memory in Python - I don't fully understand how Python memory works, since everything is some kind of virtual variable, and gc.collect() possibly does nothing here since the memory was created by a C library and not Python code.</p>
<hr />
<p>I've recently noticed my project - which is developed exclusively in Python 3.12, as of right now, and builds on top of Sagemath - uses an absurd amount of memory and have debugged it using <code>tracemalloc</code> and the <code>resource</code> library as indicated <a href="https://stackoverflow.com/a/45679009">here</a>. Nothing fancy, I more or less copied the display features from that answer into my own code.</p>
<p>I observe <code>tracemalloc</code> thinks I am using on the order of 300Mb of RAM whereas <code>resource.getrusage</code> thinks I am using 2300Mb of RAM. Task manager shows a bloated figure of 6Gb, though I'm sure that's mostly due to the fact I'm running on top of the WSL virtual machine. I'm almost certain the discrepancy of 2Gb between <code>resource.getrusage</code> and <code>tracemalloc</code> is due to Sagemath calling from external C libraries, such as <code>libgap</code>, during runtime. I obviously want this 2Gb of memory to be freed, since when I use slightly larger input sizes my program occupies some 10Gb of RAM and crashes WSL.</p>
<p>The code looks roughly like this:</p>
<pre class="lang-py prettyprint-override"><code>for r in range(1, n):
# initialise variables, load some references
# import tracemalloc, resource, define a function to display memory usage
G = self.groups[r]
tracemalloc.start()
for N, word in enumerate(huge_data_set):
if N % threshold == 0:
display_memory_usage()
# do some stuff
generators = ...
stab = frozenset(G.subgroup(generators))
self.stabiliser_lookup.update({(word, otherdata) : (stab, moredata)})
# do similar things, occasionally calling Sage group creation
# or group element creation functions
perm_indices = ...
self.permutation_lookup.update({(word, otherdata) : G(perm_indices)})
# all of this is inside more nested for loops, omitted for readability
</code></pre>
<p>Tracemalloc indicates that my own, "personal" Python code is using very little memory. On an example where the program as a whole used 2.3Gb, it claimed the top usage in lines in <code>/home/user/main.py</code> was 30Mb. The greatest offender was <code>/sage/perm_gps/permgroup.py</code> which allegedly used 80Mb, but that's still nothing compared to 2.3Gb. I can only conclude <code>tracemalloc</code> doesn't detect external C library memory usage. However, please note I don't take the group object <code>G.subgroup(generators)</code>, no: I bake it into a Python <em>set</em>, wanting only an immutable list of elements, elements which may be identified with elements of the precomputed <code>self.groups</code> attribute, and therefore any extra bloat caused by GAP should surely be unnecessary. All the data I need is explicitly referenced in my Python script and the line <code>stab = frozenset(G.subgroup(generators))</code> reportedly uses a measly 17Mb of RAM.</p>
<p><em>How can I free up this extra 2Gb</em>?</p>
<p>Ideally, there should be some way whereby <code>frozenset</code> is replaced with something that definitively caches the memory and frees up all other references to that variable, but I wouldn't know how to do that. I realise <code>frozenset</code> probably does quite little because, whatever, everything is virtual (my ignorance should be clear, hence the question).</p>
|
<python><c><memory-management><memory-leaks><sage>
|
2024-10-02 11:24:25
| 0
| 381
|
FShrike
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.