QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars, nullable)
|---|---|---|---|---|---|---|---|---|
78,386,994
| 2,710,331
|
Python: Subtract one 2D line from another
|
<p><strong>The problem</strong></p>
<p>I need to calculate the difference between the two lines (i.e. subtract one from the other), which should result in an output line.</p>
<p><em>The input lines - and output line - may include vertical sections, i.e. the X values for the lines may not be monotonically increasing (although they won't go back on themselves).</em></p>
<p>This is probably easiest to demonstrate with an example:</p>
<p><em>Input</em>:</p>
<pre><code># lines represented as arrays of (X,Y) coordinates
line1 = [ (0,2), (4,2), (5,10), (15,10), (16,0), (20,0) ]
line2 = [ (0,2), (5,2), (10,5), (15,5), (15,0), (20,0) ]
</code></pre>
<p>This is represented by the following chart (made in Excel):</p>
<p><a href="https://i.sstatic.net/EDaJLi8Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDaJLi8Z.png" alt="Line1 and Line2" /></a></p>
<p><em>Output</em>:</p>
<p>Ideally I'm looking for a library which can take <code>line1</code> & <code>line2</code> as input, and output the vertices representing the difference between the lines (AKA <code>line1</code> minus <code>line2</code>). So for this example:</p>
<pre><code>[ (0,0), (4,0), (5,8), (10,5), (15,5), (15,10), (16,0), (20,0) ]
</code></pre>
<p>Again, probably easier to visualise with a chart - <em>although note, I am looking for a library which outputs the result as a Python object - ideally vertices or similar - not a library which just draws a chart</em>:</p>
<p><a href="https://i.sstatic.net/CWDpeWrk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CWDpeWrk.png" alt="enter image description here" /></a></p>
<p><em>Notes about the input & output data</em>:</p>
<ul>
<li>The lines between two points are straight, not curves</li>
<li><code>line1</code> & <code>line2</code> do not contain all the same X values</li>
<li>X points on a line will never go <em>backwards</em>, <strong>but we can have multiple points on a line with the same X value which are next to each other - representing a vertical line</strong>. Eg. <code>line2</code> contains duplicate points at X=15, because there is a vertical line from <code>(X=15,Y=5)</code> to <code>(X=15,Y=0)</code>.</li>
<li>Similarly, the output data may need to represent vertical lines.</li>
</ul>
<p><strong>Potential solutions I've looked into</strong></p>
<p><em>I am relatively new to the Python ecosystem, and am not a math / geometry / graph expert. I have tried to research various Python libraries, but the documentation for these is often heavy on math terminology, so it's entirely possible I am missing something obvious.</em></p>
<p><strong><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html" rel="nofollow noreferrer">Pandas Series</a></strong> - it looks like this would work perfectly <em>if</em> I had monotonically increasing X values... But I don't...</p>
<p><strong><a href="https://docs.sympy.org/latest/modules/geometry/polygons.html" rel="nofollow noreferrer">SymPy Geometry</a></strong> - it <em>may</em> be possible to subtract 2d lines / polygons with SymPy in the way I want, but I haven't been able to locate anything obvious in the documentation...</p>
<p><strong><a href="https://docs.scipy.org/doc/scipy/tutorial/interpolate.html" rel="nofollow noreferrer">SciPy</a></strong> - <a href="https://stackoverflow.com/a/56279865">this StackOverflow answer</a> uses <code>scipy.interpolate.interp1d</code> to do <em>almost</em> what I want to do...</p>
<p>As far as I can see, you <em>can</em> feed <code>interp1d</code> a series of points <em>including duplicate X values</em> and it will represent the shape of the line correctly:</p>
<pre><code>>>> x_values = [0,1,1]
>>> y_values = [0,0,10]
>>> curve = sc.interpolate.interp1d(x_values, y_values)
>>> curve(0)
array(0.)
>>> curve(1)
array(10.)
>>> curve(0.9999)
array(0.)
</code></pre>
<p>There is a vertical line at X=1, so getting the Y value at X<1 gives 0, while X=1 gives 10.</p>
<p>However, if I were to do something like the above linked Stack Overflow answer:</p>
<pre><code># This would lose the duplicates representing the vertical line
all_x_values = sorted(set(line1_x_values + line2_x_values))
# Even if we had maintained all the X values, in our original example we would
# have two X=15 values here... The first call to line1(15) would need to return
# Y = 5, the second call to line1(15) would need to return Y = 0...
new_line_y_values = [line1(x) - line2(x) for x in all_x_values]
# And if we got this far with everything working, how do we extract the vertices
# from the new line - including multiple points at the same X-value for any vertical
# lines?
new_line = scipy.interpolate.interp1d(all_x_values, new_line_y_values)
</code></pre>
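<p><em>For reference, the arithmetic itself can be sketched without a geometry library. The helper below is my own (no package provides it): it evaluates each polyline just left and just right of every X value, so vertical runs of duplicate X values produce two output points. It assumes both lines span the same X range, as in the example.</em></p>

```python
from bisect import bisect_left, bisect_right

def interp(line, x, side):
    """Y value of a polyline at x; 'left'/'right' chooses which end of a
    vertical run of duplicate X values to use."""
    xs = [p[0] for p in line]
    ys = [p[1] for p in line]
    if side == 'left':
        i = bisect_left(xs, x)
        if xs[i] == x:
            return ys[i]          # first point at this x
    else:
        i = bisect_right(xs, x)
        if xs[i - 1] == x:
            return ys[i - 1]      # last point at this x
    # x lies strictly inside segment (i-1, i): straight-line interpolation
    x0, y0, x1, y1 = xs[i - 1], ys[i - 1], xs[i], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def subtract(line1, line2):
    # sample both lines at the union of their X values, from both sides,
    # de-duplicating consecutive identical vertices
    xs = sorted({x for x, _ in line1} | {x for x, _ in line2})
    out = []
    for x in xs:
        for side in ('left', 'right'):
            y = interp(line1, x, side) - interp(line2, x, side)
            if not out or out[-1] != (x, y):
                out.append((x, y))
    return out

line1 = [(0, 2), (4, 2), (5, 10), (15, 10), (16, 0), (20, 0)]
line2 = [(0, 2), (5, 2), (10, 5), (15, 5), (15, 0), (20, 0)]
print(subtract(line1, line2))
# matches the expected vertices, including both points at X=15
```

<p><em>Caveat: if the two lines cross strictly between sampled X values, this sketch will not emit a vertex at the crossing; finding those would need segment-intersection logic (the kind of thing shapely provides), which the example above happens not to require.</em></p>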
|
<python><python-3.x><math><plot><geometry>
|
2024-04-25 19:57:58
| 1
| 343
|
asibs
|
78,386,891
| 5,502,917
|
Raspberry Pi 4: Python - RuntimeError: Error waiting for edge
|
<p>I am trying to get the falling edge of a HB100 doppler radar with a Lm358 amplifier module.</p>
<p>The error is <code>RuntimeError: Error waiting for edge</code>.</p>
<p>It was working great and stopped suddenly.</p>
<p>I have already tried with another radar and amplifier modules and the error continues.
I have tried to switch the GPIO pins with no success.</p>
<p>The code is below</p>
<p>I am using a Raspberry Pi 4 and Python 3.11.</p>
<pre><code>import RPi.GPIO as GPIO
import time

GPIO.cleanup()

AMPLIFICADOR_INPUT_PIN = 23
GPIO.setmode(GPIO.BCM)
GPIO.setup(AMPLIFICADOR_INPUT_PIN, GPIO.IN)

MAX_PULSE_COUNT = 10
MOTION_SENSITIVITY = 10

def count_frequency(GPIO_pin, max_pulse_count=10, ms_timeout=50):
    start_time = time.time()
    pulse_count = 0
    for count in range(max_pulse_count):
        edge_detected = GPIO.wait_for_edge(GPIO_pin, GPIO.FALLING, timeout=ms_timeout)
        if edge_detected is not None:
            pulse_count += 1
    duration = time.time() - start_time
    frequency = pulse_count / duration
    return frequency

while True:
    doppler_freq = count_frequency(AMPLIFICADOR_INPUT_PIN)
    speed = doppler_freq / 31.36
    print(speed)
    if speed > 2:
        print('high Speed' + ' Your speed=' + str(speed) + ' Mph')
    if doppler_freq < MOTION_SENSITIVITY:
        print("No motion was detected")
    else:
        print("Motion was detected, Doppler frequency was: {0}".format(doppler_freq))

GPIO.cleanup()
</code></pre>
|
<python><python-3.x><raspberry-pi><gpio>
|
2024-04-25 19:34:06
| 1
| 1,731
|
GuiDupas
|
78,386,617
| 6,583,606
|
Configure an interpreter using WSL with VS Code
|
<p>Within VS Code, I would like to configure a conda environment living in WSL as the interpreter of a project living in Windows. This is possible with PyCharm, as indicated <a href="https://www.jetbrains.com/help/pycharm/using-wsl-as-a-remote-interpreter.html" rel="nofollow noreferrer">here</a>.</p>
<p>With VS Code, I can successfully connect to WSL with the Remote Explorer extension and select a conda environment living in WSL, as indicated <a href="https://stackoverflow.com/questions/62514756/selecting-python-interpreter-from-wsl">here</a>, but it seems I can only use it for projects living in WSL, not for projects living in Windows.</p>
<p>Is there a way to use a conda environment living in WSL for projects living in Windows, within VS Code?</p>
|
<python><visual-studio-code><pycharm><conda><windows-subsystem-for-linux>
|
2024-04-25 18:33:52
| 1
| 319
|
fma
|
78,386,298
| 1,143,935
|
Is there a way we can attach an identifier to link when clicked from email
|
<p>This is an authentication link which we auto-generate with a token for a particular email address. We don't want the user to forward it; if they do forward it, we want to know which email address the click is coming from, and if it is not the original email address we want to stop the authentication.
If we can get any details of the email, any sort of header details, or some other check of the address, that would be great.</p>
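<p><em>One common pattern worth noting: you cannot learn who clicked from headers (the click is an HTTP request, not an email), but you can embed the address the link was issued for inside a signed token, so the server knows which account the link belongs to and can require re-verification of that address. A minimal stdlib sketch - <code>SECRET</code> and the token layout are made up here:</em></p>

```python
import hmac, hashlib, base64, json

SECRET = b"server-side-secret"  # assumption: kept only on the server

def make_token(email):
    # encode the email in the token payload, then sign payload with HMAC
    payload = base64.urlsafe_b64encode(json.dumps({"email": email}).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token):
    # recompute the signature; reject tampered tokens, then recover the email
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(payload))["email"]

token = make_token("alice@example.com")
print(verify_token(token))  # alice@example.com
```

<p><em>This tells you which address the link was minted for, not who actually clicked - stopping forwarding outright would additionally need something like requiring login or a code sent to that same address.</em></p>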
|
<javascript><python><authentication><email><single-sign-on>
|
2024-04-25 17:29:11
| 1
| 401
|
Atul Verma
|
78,386,256
| 1,422,096
|
Compilation works on Cython 0.29, but not Cython 3.0
|
<p>The project <a href="https://github.com/superquadratic/rtmidi-python" rel="nofollow noreferrer">rtmidi-python</a> compiles well on Cython 0.29.37 (the latest version before Cython 3.0), but it fails on Cython 3.0, with the error below.</p>
<p>Are there known incompatibilities when upgrading to Cython 3?</p>
<p>If so, is there a flag to use when calling Cython 3 to keep backwards compatibility with previous versions, without requiring to modify the actual <em>.pyx</em> file?</p>
<pre class="lang-none prettyprint-override"><code>Error compiling Cython file:
------------------------------------------------------------
...
self.thisptr.cancelCallback()
self.py_callback = callback
if self.py_callback is not None:
self.thisptr.setCallback(midi_in_callback, <void*>self.py_callback)
^
------------------------------------------------------------
rtmidi_python.pyx:92:41: Cannot assign type 'void (double, vector[unsigned char] *, void *) except * nogil' to 'RtMidiCallback' (alias of 'void (*)(double, vector[unsigned char] *, void *) noexcept'). Exception values are incompatible. Suggest adding 'noexcept' to the type of 'midi_in_callback'.
Traceback (most recent call last):
...
</code></pre>
|
<python><cython>
|
2024-04-25 17:19:44
| 1
| 47,388
|
Basj
|
78,386,222
| 1,662,775
|
Scraping a website with dynamic javascript using beautiful soup
|
<p>I am trying to scrape the IBM docs. The following is the URL that I am looking at. I am wondering how to expand all the toggles on the left-hand pane programmatically so that I can get all the URLs and the data.</p>
<p><a href="https://www.ibm.com/docs/en/b2b-integrator/6.1.0" rel="nofollow noreferrer">https://www.ibm.com/docs/en/b2b-integrator/6.1.0</a></p>
<p>It seems like RPA is the way to go: expand each toggle button and scrape the data using a Selenium-like library.</p>
<p>But could anyone give any ideas?</p>
<p>Thanks and regards</p>
|
<python><selenium-webdriver><beautifulsoup><web-crawler>
|
2024-04-25 17:12:09
| 1
| 1,204
|
Baradwaj Aryasomayajula
|
78,385,768
| 3,606,412
|
Does the nums array make a deep copy of letters[2:5] = ['C', 'D', 'E']?
|
<p>I am learning shallow vs. deep copying of a list in Python. I learned that <a href="https://docs.python.org/3/tutorial/introduction.html#lists" rel="nofollow noreferrer">there are two ways to create copies where modifying the copy leaves the original values unchanged, or vice versa: <em>shallow copy</em> and <em>deep copy</em></a>.</p>
<p>Besides <code>copy.copy()</code> and <code>copy.deepcopy()</code>, I learned slice operators can be used for shallow and deep copying of a list. From the <a href="https://docs.python.org/3/tutorial/introduction.html#lists" rel="nofollow noreferrer">Python official documentation</a>, I was able to find that after doing <code>correct_rgba = rgba[:]</code>, <code>correct_rgba</code> is a shallow copy of <code>rgba</code>.</p>
<p>Example 1.</p>
<pre><code>rgba = ["Red", "Green", "Blue", "Alph"]
correct_rgba = rgba[:]
correct_rgba[-1] = "Alpha"
correct_rgba # ["Red", "Green", "Blue", "Alpha"]
rgba # ["Red", "Green", "Blue", "Alph"]
</code></pre>
<p>However, I couldn't find information to confirm whether <code>Assignment to Slices</code> is a deep copy. Here is an example found on the same Python documentation.</p>
<p>Example 2.</p>
<pre><code>letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
letters[2:5] = ['C', 'D', 'E']
letters # ['a', 'b', 'C', 'D', 'E', 'f', 'g']
letters[:] = []
letters # []
</code></pre>
<p>My questions:</p>
<p>(1) Does letters[2:5] deep copy ['C', 'D', 'E']? Does letters[:] deep copy []?</p>
<p>(2) How do I test to find out whether a list is a shallow or deep copy?</p>
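<p><em>On question (2), one way to test this empirically is with <code>is</code> / <code>id()</code>: slicing and slice assignment both copy only references to the elements (shallow), never the elements themselves. A small self-contained check:</em></p>

```python
inner = ['x', 'y']

rgba = ["Red", inner, "Blue"]
copy1 = rgba[:]                 # slice copy
print(copy1 is rgba)            # False -> a new outer list was made
print(copy1[1] is inner)        # True  -> the inner list is shared: shallow

letters = ['a', 'b', 'c']
letters[1:2] = [inner]          # assignment to a slice
print(letters[1] is inner)      # True  -> references were stored, no deep copy

inner.append('z')
print(copy1[1])                 # ['x', 'y', 'z'] -> change visible through the copy
```

<p><em>So neither <code>letters[2:5] = [...]</code> nor <code>letters[:]</code> performs a deep copy; with lists of immutable strings, as in the tutorial examples, the difference is simply unobservable.</em></p>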
|
<python><arrays><deep-copy><shallow-copy>
|
2024-04-25 15:42:33
| 3
| 1,383
|
LED Fantom
|
78,385,716
| 13,123,667
|
Pytorch and Matplotlib interfering
|
<p>I'm facing a weird bug with Matplotlib and torch in my Jupyter notebook. If I run the code with the <code>torch.hub.load</code> line, <code>plt.imshow</code> will simply not display anything (even though <code>frame</code> is a correct image). If I comment out this line, <code>plt.imshow</code> works.</p>
<p>Whether the <code>torch.hub.load</code> line is commented out or not, <code>cv2.imshow</code> works.</p>
<pre><code>onnx_path = "my_weights.onnx"
yolo_path = "lib/yolov5/"
torch.hub.load(yolo_path, 'custom', path=onnx_path, source='local')

video_reader = VideoReader(str(src_file))

# wait for thread to read
while not video_reader.is_ready():
    waiting += 1
    time.sleep(1)

while video_reader.is_ready():
    frame = video_reader.frame
    #cv2.imshow('image',frame)
    #cv2.waitKey(0)
    plt.imshow(frame)
    plt.axis('off')
    plt.show()
</code></pre>
<p>It seems I'm missing something, but I don't see it. Any help is appreciated :)</p>
|
<python><matplotlib><pytorch>
|
2024-04-25 15:32:52
| 2
| 896
|
Timothee W
|
78,385,605
| 984,621
|
Streaming logs throws "maximum recursion depth exceeded while calling a Python object" error. How to deal with that?
|
<p>I have a Python app that's streaming logs to a third-party service (in this case, it's an AWS S3 bucket).</p>
<p>My simplified code looks like this:</p>
<pre><code>class S3Handler(logging.StreamHandler):
    def __init__(self, bucket_name, key):
        super().__init__()
        self.bucket_name = bucket_name
        self.key = key
        self.s3_client = boto3.client('s3',
                                      endpoint_url=...,
                                      aws_access_key_id=...,
                                      aws_secret_access_key=...)

    def emit(self, record):
        try:
            log_entry = self.format(record).encode('utf-8')
            self.s3_client.put_object(Bucket=self.bucket_name, Key=self.key, Body=log_entry, ACL="public-read")
        except Exception as e:
            print(f"Error while logging to S3: {str(e)}")

s3_handler = S3Handler(bucket_name='mybucketname', key='path/to/logfile.txt')
logging.getLogger().addHandler(s3_handler)
</code></pre>
<p>The goal is that when I run my script, all logs would be saved in the S3 bucket. The script can run for 30 seconds or it can run for 10 hours.</p>
<p>When I run this code, I get immediately this error:</p>
<pre><code>Error while logging to S3: maximum recursion depth exceeded while calling a Python object
</code></pre>
<p>After some googling, I found out that the reason behind this error is exceeding Python's default recursion limit of 1000 calls. There's a way to increase this limit, but I don't find it practical or feasible for this particular case.</p>
<p>Apparently, my approach to streamline the script's logs to an S3 bucket is not ideal. What are the options to resolve this problem?</p>
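<p><em>A likely cause of this particular recursion (an assumption worth checking): the handler is attached to the root logger, and boto3/urllib3 themselves log through the root logger while <code>emit()</code> is running its upload, so every upload generates records that trigger another upload. A minimal re-entrancy guard, with the S3 call stubbed out so the sketch is self-contained:</em></p>

```python
import logging

class GuardedHandler(logging.Handler):
    """Drops any record generated while we are already mid-upload."""
    def __init__(self):
        super().__init__()
        self._emitting = False   # note: not thread-safe; use threading.local for threads
        self.sent = []

    def emit(self, record):
        if self._emitting:
            return               # record was triggered by our own upload: ignore it
        self._emitting = True
        try:
            body = self.format(record)
            # stand-in for s3_client.put_object(...); the real call makes
            # botocore log through the root logger, which is the recursion:
            logging.getLogger("botocore").info("uploading")
            self.sent.append(body)
        finally:
            self._emitting = False

handler = GuardedHandler()
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)
root.info("hello")
print(handler.sent)  # ['hello']
```

<p><em>Alternatives in the same spirit: set <code>propagate = False</code> on the <code>botocore</code>/<code>urllib3</code> loggers, or attach the S3 handler only to your application's named logger instead of the root. Buffering records and uploading in batches (cf. <code>logging.handlers.BufferingHandler</code>) also avoids one PUT per record.</em></p>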
|
<python>
|
2024-04-25 15:15:49
| 1
| 48,763
|
user984621
|
78,385,485
| 5,397,009
|
Filter a query set depending on state at a given date
|
<p>Given the following model (using <a href="https://pypi.org/project/django-simple-history/" rel="nofollow noreferrer"><code>django-simple-history</code></a>):</p>
<pre><code>class MyModel (models.Model):
status = models.IntegerField()
history = HistoricalRecords()
</code></pre>
<p>I would like to get all instances that didn't have a certain <code>status</code> on a given date (i.e. all instances that had a different status on the limit date, plus all instances that didn't exist at that time).</p>
<p>The following query will return all instances that never had <code>status = 4</code> at any point before the limit date:</p>
<pre><code>MyModel.filter (~Exists (
    MyModel.history.filter (
        id = OuterRef ("id"),
        history_date__lte = limit_date,
        status = 4)))
</code></pre>
<p>But unfortunately it also removes instances that had <code>status = 4</code> at some past date, then changed to a different <code>status</code> by the limit date, and I want to keep those.</p>
<p>The following should give the correct result:</p>
<pre><code>MyModel.filter (~Exists (
    MyModel.history.filter (
        id = OuterRef ("id"),
        history_date__lte = limit_date)
    .order_by ("-history_date")
    [:1]
    .filter (status = 4)))
</code></pre>
<p>Unfortunately it doesn't work: <code>Cannot filter a query once a slice has been taken.</code> This <a href="https://stackoverflow.com/q/3470111/5397009">question</a> links to this <a href="https://docs.djangoproject.com/en/4.2/ref/models/querysets/#when-querysets-are-evaluated" rel="nofollow noreferrer">documentation page</a> which explains that filtering is not allowed after the queryset has been sliced.</p>
<p>Note that the error comes from an <code>assert</code> in Django. If I comment out the <code>assert</code> in <code>django/db/models/query.py:953</code>, then the code appears to work and gives the expected result. However commenting out an <code>assert</code> in an upstream dependency is not a viable solution in production.</p>
<p>So is there a clean way to filter my queryset depending on some past state of the object?</p>
|
<python><django><django-queryset>
|
2024-04-25 14:54:35
| 2
| 24,071
|
Jmb
|
78,384,983
| 3,561,842
|
matplotlib pyplot creates a broken plot when long
|
<p>I'm struggling with this strange behavior in the matplotlib pyplot library.</p>
<p>The actual plot is more complex but I've reduced it to these few lines of python code:</p>
<pre><code>from matplotlib import pyplot as plt
import numpy as np

def plot(w, h):
    num_samples = int(w * h)
    x = w * np.random.rand(num_samples)
    y = h * np.random.rand(num_samples)
    plt.figure(figsize=(w, h))
    plt.xticks(range(w))
    plt.xlim(0, w)
    plt.ylim(0, h)
    plt.scatter(x, y)
    plt.grid()
</code></pre>
<p>It basically creates a scatter plot given width and height parameters.</p>
<p>If I invoke this function like <code>plot(5, 3)</code>, it will produce this figure:
<a href="https://i.sstatic.net/qfncM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qfncM.png" alt="short plot" /></a></p>
<p>I can use the function with greater width values, generating long plots like <code>plot(100, 3)</code>:
<a href="https://i.sstatic.net/YybNb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YybNb.png" alt="long plot" /></a></p>
<p>But if I use it with a greater width it creates a "broken plot" <code>plot(450, 3)</code>:
<a href="https://i.sstatic.net/phyTl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/phyTl.png" alt="longer plot" /></a></p>
<p>Note that</p>
<ol>
<li>the lines of the containing frame are kind of dashed?</li>
<li>there are scatter dots outside the containing frame.
<a href="https://i.sstatic.net/2C6UG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2C6UG.png" alt="broken plot" /></a></li>
</ol>
<p>Why? Is it a bug in the library?</p>
|
<python><matplotlib>
|
2024-04-25 13:26:51
| 0
| 968
|
estebanuri
|
78,384,821
| 1,991,332
|
How do I add reversible noise to the MNIST dataset using PyTorch?
|
<p>I would like to add reversible noise to the MNIST dataset for some experimentation.</p>
<p>Here's what I am trying atm:</p>
<pre><code>import torch
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
from PIL import Image
import torchvision

def display_img(pixels, label=None):
    plt.imshow(pixels, cmap="gray")
    if label:
        plt.title("Label: %d" % label)
    plt.axis("off")
    plt.show()

class NoisyMNIST(torchvision.datasets.MNIST):
    def __init__(self, root, train=True, transform=None, target_transform=None, download=False):
        super(NoisyMNIST, self).__init__(root, train=train, transform=transform, target_transform=target_transform, download=download)

    def __getitem__(self, index):
        img, target = self.data[index], self.targets[index]
        img = Image.fromarray(img.numpy(), mode="L")
        if self.transform is not None:
            img = self.transform(img)
        # add the noise
        noise_level = 0.3
        noise = self.generate_safe_random_tensor(img) * noise_level
        noisy_img = img + noise
        return noisy_img, noise, img, target

    def generate_safe_random_tensor(self, img):
        """generates random noise for an image but limits the pixel values between -1 and 1"""
        min_values = torch.clamp(-1 - img, max=0)
        max_values = torch.clamp(1 - img, min=0)
        return torch.rand(img.shape) * (max_values - min_values) + min_values

# Define transformations to apply to the data
transform = transforms.Compose([
    transforms.ToTensor(),  # Convert images to tensors
    transforms.Normalize((0.1307,), (0.3081,)),
])

train_dataset = NoisyMNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = NoisyMNIST(root='./data', train=False, download=True, transform=transform)

np_noise = train_dataset[img_id][1]
np_data = train_dataset[img_id][0]
display_img(np_data_sub_noise, 4)
</code></pre>
<p>Ideally, this would give me the regular MNIST dataset along with a noisy MNIST images and a collection of the noise that was added. Given this, I had assumed I could subtract the noise from the noisy image and go back to the original image, but my image operations are not reversible.</p>
<p>Any pointers or code snippets would be greatly appreciated. Below are the images I currently get with my code:</p>
<p>Original image:</p>
<p><a href="https://i.sstatic.net/YsX1R.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YsX1R.png" alt="enter image description here" /></a></p>
<p>With added noise:</p>
<p><a href="https://i.sstatic.net/s77Ls.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s77Ls.png" alt="enter image description here" /></a></p>
<p>And with the noise subtracted for the image with noise:</p>
<p><a href="https://i.sstatic.net/cMmJV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cMmJV.png" alt="enter image description here" /></a></p>
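<p><em>One sanity check worth running separately: the add/subtract round trip itself is exact (to float precision) whenever the same noise values are reused, illustrated below with plain Python floats (the same holds element-wise for tensors). If the round trip fails in the code above, the culprit is likely a step outside the pair - e.g. normalization or clamping applied between adding the noise and subtracting it, or displaying a tensor that was denormalized differently.</em></p>

```python
import random

# toy "image" and noise in the same value ranges as the question
img = [random.uniform(-1, 1) for _ in range(28 * 28)]
noise = [random.uniform(-0.3, 0.3) for _ in img]

noisy = [p + n for p, n in zip(img, noise)]            # add noise
recovered = [p - n for p, n in zip(noisy, noise)]      # subtract the same noise

# round trip is exact up to float rounding
print(max(abs(a - b) for a, b in zip(img, recovered)) < 1e-12)  # True
```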
|
<python><pytorch><computer-vision><signal-processing><noise>
|
2024-04-25 12:55:51
| 1
| 408
|
RasmusJ
|
78,384,769
| 1,122,189
|
parsing webhook information to fetch specific parts in python
|
<p>I have some code that listens for a POST request to be sent.
It receives the POST body as:</p>
<pre><code> {"description": "Test Call", "map_code": "", "details": "", "cross_street": ""}
</code></pre>
<p>It will print this using the following</p>
<pre><code>def return_response():
    Dic = request.json
    data = json.dumps(Dic)
    print(data)
</code></pre>
<p>I have also been able to print it using</p>
<pre><code>print(request.json);
</code></pre>
<p>Despite my attempts, I have not been able to extract specific parts from the dictionary/list, such as:</p>
<pre><code>print('description')
print('details')
</code></pre>
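<p><em>For reference, <code>request.json</code> already gives a dict, so individual fields are plain key lookups; <code>json.dumps</code> converts the dict back into a string, and <code>print('description')</code> prints the literal word. A literal dict stands in for <code>request.json</code> here:</em></p>

```python
data = {"description": "Test Call", "map_code": "", "details": "", "cross_street": ""}

print(data["description"])           # Test Call
print(data.get("details", "n/a"))    # empty string -> prints a blank line
```

<p><em><code>.get()</code> is the safer lookup when a key might be missing from the payload.</em></p>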
<p>*** FULL CODE UPDATE</p>
<pre><code>from pydub import AudioSegment
from pydub.playback import play
from flask import Flask, request, Response
from gevent.pywsgi import WSGIServer
import json

app = Flask(__name__)

@app.route('/my_webhook', methods=['POST'])
def return_response():
    Dic = request.json
    data_str = json.dumps(Dic)
    print(Dic.keys())
    song = AudioSegment.from_wav('alert.wav')
    play(song)
    ## Do something with the request.json data.
    return Response(status=200)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
</code></pre>
<p>Thank you for any guidance to come.</p>
|
<python><json><webhooks><endpoint>
|
2024-04-25 12:48:35
| 1
| 815
|
nate
|
78,384,680
| 1,432,792
|
cannot import name 'Service' from 'services' when using semantic-kernel
|
<p>I am currently working through Microsoft's semantic-kernel exercises found <a href="https://github.com/microsoft/semantic-kernel/blob/main/python/notebooks/01-basic-loading-the-kernel.ipynb" rel="nofollow noreferrer">here</a></p>
<p>But am getting the following error when I try to access the Services within the library.</p>
<pre><code>ImportError Traceback (most recent call last)
Cell In[1], line 1
----> 1 from services import Service
3 # Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)
4 selectedService = Service.OpenAI
ImportError: cannot import name 'Service' from 'services' (c:\Users\xxx\AppData\Local\Programs\Python\Python312\Lib\site-packages\services\__init__.py)
</code></pre>
<p>Has anyone else had a similar issue? It seems the <code>services</code> library on pypi is something much older and not the one being used here by Microsoft.</p>
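<p><em>For context, the notebook imports <code>services</code> as a local module: the repository keeps a <code>services.py</code> next to the notebooks, which the unrelated <code>services</code> package from PyPI shadows when it is installed. A stand-in consistent with how the notebook uses it could look like this (the exact members are my assumption, not the repo's file):</em></p>

```python
from enum import Enum

class Service(Enum):
    OpenAI = "openai"
    AzureOpenAI = "azureopenai"
    HuggingFace = "huggingface"

selectedService = Service.OpenAI
print(selectedService)  # Service.OpenAI
```

<p><em>The usual fix is to uninstall the PyPI <code>services</code> package and run the notebook from the repository folder so the local file is found first.</em></p>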
|
<python><import><semantic-kernel>
|
2024-04-25 12:34:07
| 1
| 3,949
|
Taylrl
|
78,384,268
| 12,013,353
|
Newmark-beta method solution to dynamic equation of motion lagging behind true solution for lower periods of vibration
|
<p>I have an earthquake record and I'm trying to obtain the spectral acceleration plot by solving the following differential equation:<br />
<a href="https://i.sstatic.net/jDzDN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jDzDN.png" alt="differential equation" /></a></p>
<pre><code>T = np.geomspace(0.1, 10, 1000)  # periods of vibration
SA_undamped = []                 # empty list of spectral accelerations
times = earthquake_record.index
for Ti in T:
    ksi = 0
    wn = 2 * np.pi / Ti
    sol = scipy.integrate.solve_ivp(sdof, t_span=(0, 17.5), y0=[0, 0], t_eval=times)
    SAi = np.max(np.abs(sol['y'][0])) * wn**2
    SA_undamped.append(SAi)
</code></pre>
<p>where the function <code>sdof</code> gives the derivatives of the two ODEs resulting from the initial differential equation.</p>
<pre><code>def sdof(t, y):
    u = y[0]
    v = y[1]
    return np.vstack((v, agfun(t) - wn**2 * u - 2 * ksi * wn * v)).reshape(2,)
</code></pre>
<p>And <code>agfun(t)</code> gives the ground acceleration at time <em>t</em>.<br />
This works fine. Then, I employed the Newmark-beta method for numerical integration of the differential equation:</p>
<pre><code>def newmark_SD(earthquake_record, T, damping=0, x0=0, v0=0, a0=0):
    times = earthquake_record.index
    dt = times[1] - times[0]
    shp = earthquake_record.shape
    agvals = earthquake_record.values.reshape(shp[0],)
    ksi = damping
    wn = 2 * np.pi / T
    beta = 1/6
    gama = 1/2
    SD = []
    SV = []
    SA = []
    xo = x0
    vo = v0
    ao = a0
    ag0 = agvals[0]
    ago = ag0
    for n, i in enumerate(times):
        if n > 0:
            agn = agvals[n]
            dpi_ = ((agn - ago) + (1 / (beta * dt) + gama / beta * 2 * ksi * wn) * vo +
                    (1 / (2*beta) + dt * (gama / (2*beta) - 1) * 2 * ksi * wn) * ao)
            ki_ = wn**2 + gama / (beta * dt) * 2 * ksi * wn + 1 / (beta * dt**2)
            xn = dpi_ / ki_ + xo
            vn = gama / (beta * dt) * (xn - xo) - gama / beta * vo + dt * (1 - gama / (2*beta)) * ao + vo
            an = 1 / (beta * dt**2) * (xn - xo) - 1 / (beta * dt) * vo - 1 / (2*beta) * ao + ao
            SD.append(xn)
            SV.append(vn)
            SA.append(an)
            xo = xn
            vo = vn
            ao = an
            ago = agn
    return (SD, SV, SA)
</code></pre>
<p>And used the following to get the whole response:</p>
<pre><code>SA_newmark = []
for Ti in T:
    spectra = newmark_SD(earthquake_record, Ti)
    wn = 2 * np.pi / Ti
    SAi = np.max(np.abs(spectra[0])) * wn**2
    SA_newmark.append(SAi)
</code></pre>
<p>The diagrams for the two solutions are shown in the following picture, which is set to display the interval from 0.1 to 1 to be able to see the differences, as in the larger periods they disappear and the two diagrams match perfectly.</p>
<p><a href="https://i.sstatic.net/6XRbV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6XRbV.png" alt="diagrams" /></a><br />
So, there is a shift in the lower periods, which disappears in the larger periods. I'm not sure what to think of it. For any specific period, the integration is performed across the whole record, so I'm even more confused as to why there is a shift only in the lower periods.</p>
|
<python><differential-equations><numerical-integration>
|
2024-04-25 11:19:53
| 0
| 364
|
Sjotroll
|
78,384,202
| 4,247,881
|
Filter a polars dataframe based on JSON in string column
|
<p>I have a Polars dataframe like</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
"tags": ['{"ref":"@1", "area": "livingroom", "type": "elec"}', '{"ref":"@2", "area": "kitchen"}', '{"ref":"@3", "type": "elec"}'],
"name": ["a", "b", "c"],
})
</code></pre>
<pre><code>┌────────────────────────────────────────────────────┬──────┐
│ tags ┆ name │
│ --- ┆ --- │
│ str ┆ str │
╞════════════════════════════════════════════════════╪══════╡
│ {"ref":"@1", "area": "livingroom", "type": "elec"} ┆ a │
│ {"ref":"@2", "area": "kitchen"} ┆ b │
│ {"ref":"@3", "type": "elec"} ┆ c │
└────────────────────────────────────────────────────┴──────┘
</code></pre>
<p>What I would to do is create a filter function that filters dataframe based on the tags column. Particularly, I would like to only be left with rows where the <code>tags</code> column has an <code>area</code> key and a <code>type</code> key that has a value <code>"elec"</code>.</p>
<p>How can I achieve this (ideally using the native expressions API)?</p>
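<p><em>The predicate itself can be pinned down first with the stdlib on the same rows (Polars' native JSON expressions - e.g. <code>pl.col("tags").str.json_path_match(...)</code> or <code>str.json_decode</code> - can express the same test; treat those method names as something to verify against your installed Polars version):</em></p>

```python
import json

rows = [
    {"tags": '{"ref":"@1", "area": "livingroom", "type": "elec"}', "name": "a"},
    {"tags": '{"ref":"@2", "area": "kitchen"}', "name": "b"},
    {"tags": '{"ref":"@3", "type": "elec"}', "name": "c"},
]

def keep(row):
    # keep rows whose tags JSON has an "area" key AND type == "elec"
    tags = json.loads(row["tags"])
    return "area" in tags and tags.get("type") == "elec"

print([r["name"] for r in rows if keep(r)])  # ['a']
```

<p><em>Only row "a" satisfies both conditions: "b" lacks a <code>type</code> key and "c" lacks an <code>area</code> key.</em></p>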
|
<python><json><dataframe><python-polars>
|
2024-04-25 11:08:53
| 1
| 972
|
Glenn Pierce
|
78,383,921
| 2,957,687
|
The most pythonic way to make a list / a generator with limits
|
<p>I would like to plot a function in <code>matplotlib</code>, but only in a range bounded by two floats, say <code>2.6</code> and <code>8.2</code>. For that I need a list or a generator that includes two float bounds, such as</p>
<pre><code>[2.6, 3, 4, 5, 6, 7, 8, 8.2]
</code></pre>
<p>I know this can be done like this</p>
<pre><code>lst = [2.6]
lst.extend(list(range(ceil(2.6), ceil(8.2))))
lst.append(8.2)
</code></pre>
<p>but this is very quirky. Is there any better pythonic way to do that?</p>
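<p><em>One compact alternative (the helper name is my own): splice the integer interior between the two float bounds, skipping duplicates for the case where a bound is itself integral.</em></p>

```python
from math import ceil, floor

def bounded_range(lo, hi):
    # integers strictly inside [lo, hi], excluding the bounds themselves
    interior = [x for x in range(ceil(lo), floor(hi) + 1) if x != lo and x != hi]
    return [lo, *interior, hi]

print(bounded_range(2.6, 8.2))  # [2.6, 3, 4, 5, 6, 7, 8, 8.2]
```

<p><em>For the plotting use case, <code>numpy.linspace(2.6, 8.2, n)</code> is often simpler still: dense evenly spaced samples that include both endpoints, without caring whether the interior points are integers.</em></p>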
|
<python><list><matplotlib><generator>
|
2024-04-25 10:17:37
| 1
| 921
|
Pygmalion
|
78,383,916
| 248,616
|
Cannot download file when using selenium Chrome webdriver with option prompt_for_download=False in Windows OS
|
<p>The download error from webdriver's browser</p>
<p><a href="https://i.sstatic.net/02eXd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/02eXd.png" alt="enter image description here" /></a></p>
<p>The code I run</p>
<pre><code>from selenium import webdriver  # pip install selenium
from webdriver_manager.chrome import ChromeDriverManager  # pip install webdriver-manager

o = webdriver.ChromeOptions()
o.add_experimental_option('prefs', {
    'download.default_directory': r'C:\mydownload',  # change default directory for downloads
    'download.prompt_for_download': False,  # to auto download the file
    'download.directory_upgrade': True,
    'plugins.always_open_pdf_externally': True  # it will not show PDF directly in chrome
})
wd = webdriver.Chrome(ChromeDriverManager().install(), options=o)
wd.get('https://file-examples.com/wp-content/storage/2017/10/file-sample_150kB.pdf')
# wd.quit()  # omit this line to see the error in the webdriver's browser
</code></pre>
<p>The code runs on my colleagues' MacBook laptops, but fails on my Windows laptop.</p>
<p>What should I do to get it to succeed?</p>
<p>p.s.</p>
<p>Opening the same link manually in Chrome browser can download the file. It fails with Chrome webdriver only.</p>
|
<python><selenium-webdriver><download><webdriver><auto>
|
2024-04-25 10:17:04
| 0
| 35,736
|
Nam G VU
|
78,383,754
| 18,089,995
|
How to find best matching anchor texts from paragraph and list of titles?
|
<p>I have a paragraph:</p>
<pre><code>In today's world, keeping your personal information safe online is more important than ever. With cyber-attacks on the rise, having a strong cybersecurity strategy is essential.
Whether it's protecting against viruses or securing your passwords, everyone needs to be vigilant. Understanding the digital threats out there can help you stay one step ahead. Building a resilient defence means using antivirus software and keeping your software updated. It's also important to be aware of phishing scams and suspicious emails. By investing in your cybersecurity, you can protect yourself and your data from harm. So, take the time to learn about online safety and protect your digital life.
</code></pre>
<p>And some other articles titles:</p>
<pre><code>titles = [
"Keeping Your Data Safe: Building a Strong Cybersecurity Strategy",
"Navigating the Online Minefield: Understanding Digital Threats",
"Securing Your Online World: Navigating the Cybersecurity Landscape",
"Strengthening Your Shield: Building a Resilient Cyber Defense",
"Beyond the Basics: Exploring Advanced Cybersecurity Techniques",
"Know Your Enemy: Understanding the Cyber Threat Landscape",
"Protecting Your Digital Fort: Strengthening Ransomware Resilience",
"Building Trust Online: Enhancing Customer Confidence in Your Security",
"Compliance in Cybersecurity: Meeting Regulatory Standards for Online Safety",
"Safeguarding Your Future: Investing in Cybersecurity for Peace of Mind",
]
</code></pre>
<p>I want to find the best matching anchor text and title to add internal links.</p>
<p>For example:</p>
<pre><code>1.
Anchor Text: Cybersecurity Strategy
Title: "Keeping Your Data Safe: Building a Strong Cybersecurity Strategy"
2.
Anchor Text: Security Landscape
Title: "Securing Your Online World: Navigating the Cybersecurity Landscape"
</code></pre>
<p>How can I do that? Can someone help me how can I achieve this programmatically?</p>
<h1>Edit 1</h1>
<p>I want to find a <strong>list of the best anchor texts and article titles</strong> to add internal links.</p>
<h1>Edit 2</h1>
<p>I have a website, where users post new articles every day. My Problem is that, while posting a new article, there might be chances that I have already written an article on some phrase or word. I want to suggest phrases, article titles and links to the user so that he can check and add internal links for better SEO.</p>
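<p>As a rough sketch of one possible approach (an illustration, not a recommendation of a specific library): slide word n-grams over the paragraph and score each against every title with the stdlib <code>difflib</code>; the best-scoring n-gram per title becomes the anchor-text candidate. The function name and the 2-to-4-word window are made up for this example:</p>

```python
from difflib import SequenceMatcher
import re

def best_anchor(paragraph, titles, min_words=2, max_words=4):
    """For each title, find the paragraph n-gram that matches it best."""
    words = re.findall(r"[A-Za-z']+", paragraph)
    results = []
    for title in titles:
        best_phrase, best_score = '', 0.0
        for n in range(min_words, max_words + 1):
            for i in range(len(words) - n + 1):
                phrase = ' '.join(words[i:i + n])
                score = SequenceMatcher(None, phrase.lower(), title.lower()).ratio()
                if score > best_score:
                    best_phrase, best_score = phrase, score
        results.append((title, best_phrase, best_score))
    # highest-scoring (title, anchor, score) triples first
    return sorted(results, key=lambda r: -r[2])
```

<p>A semantic approach (TF-IDF or sentence embeddings) would scale better for many articles, but this shows the basic sliding-window idea with no dependencies.</p>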
|
<python><elasticsearch><pattern-matching><match><string-matching>
|
2024-04-25 09:50:20
| 0
| 595
|
Manoj Kamble
|
78,383,743
| 8,159,580
|
Authentication fails on localhost postgresql with alembic and pytest
|
<p>I want to connect to a PostgreSQL database via alembic in pytest. I can connect to the database via pgAdmin with the password I set, but I always get the error:</p>
<pre><code>sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (127.0.0.1), port 5432 failed: FATAL: password authentication failed for user "testuser"
</code></pre>
<p>The code is in Visual Studio Code and opened in WSL.</p>
<p><em>docker postgresql db</em></p>
<pre><code>docker run -d --rm --name postgres-container -e POSTGRES_USER=testuser -e POSTGRES_PASSWORD=examplepass -p 5432:5432 postgres:latest
</code></pre>
<p>pytest.ini</p>
<pre><code>[pytest]
verbosity_assertions = 1
env =
DB_PASS=examplepass
DB_USER=testuser
DB_HOST=localhost
DB_NAME=testdb
DB_PORT=5432
</code></pre>
<p><em>conftest.py</em> setup</p>
<pre><code>@pytest.fixture(scope='module')
def connection_string():
return f'postgresql://{os.getenv("DB_USER")}:{os.getenv("DB_PASS")}@{os.getenv("DB_HOST")}/{os.getenv("DB_NAME")}'
@pytest.fixture(scope='module')
def engine(connection_string):
logging.info(connection_string)
engine = create_engine(connection_string)
logging.info(engine.url)
return engine
@pytest.fixture(scope='module')
def tables(engine):
alembic_cfg = AlembicConfig("alembic.ini")
alembic_cfg.set_main_option('sqlalchemy.url', str(engine.url))
command.upgrade(alembic_cfg, "head")
yield
command.downgrade(alembic_cfg, "base")
@pytest.fixture(scope='module')
def session(engine, tables, context):
"""Returns an sqlalchemy session, and after the test tears down everything properly."""
connection = engine.connect()
Session = sessionmaker(bind=connection)
session = Session()
# inserts example tables
insert_into_tables(session, context)
yield session
</code></pre>
<p>test_file.py</p>
<pre><code>sys.path.append(".")
from sqlalchemy import text
def test_db_setup(session):
# Try to execute a simple query to check if a table exists
result_table_exists = session.execute(text("SELECT EXISTS (SELECT FROM information_schema.tables WHERE table_name = 'task')"))
</code></pre>
<p>logging output shows the correct URL</p>
<pre><code>INFO root:conftest.py:47 postgresql://testuser:examplepass@localhost/testdb
INFO root:conftest.py:49 postgresql://testuser:***@localhost/testdb
</code></pre>
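<p>One detail worth ruling out while debugging (an observation from the snippets above, not a guaranteed fix): the <code>connection_string</code> fixture never interpolates <code>DB_PORT</code>, even though <code>pytest.ini</code> defines it, so psycopg2 falls back to its default. A sketch with the port made explicit:</p>

```python
import os

def connection_string():
    # Include the port explicitly instead of relying on the driver default.
    return (
        f'postgresql://{os.getenv("DB_USER")}:{os.getenv("DB_PASS")}'
        f'@{os.getenv("DB_HOST")}:{os.getenv("DB_PORT", "5432")}/{os.getenv("DB_NAME")}'
    )
```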
|
<python><sqlalchemy><pytest><windows-subsystem-for-linux><alembic>
|
2024-04-25 09:48:25
| 2
| 325
|
Zu Jiry
|
78,383,633
| 8,087,322
|
Surprising result with a conditional `yield`
|
<p>I have the following Python code using <code>yield</code>:</p>
<pre class="lang-py prettyprint-override"><code>def foo(arg):
if arg:
yield -1
else:
return range(5)
</code></pre>
<p>Specifically, the <code>foo()</code> method shall iterate over a single value (<code>-1</code>) if its argument is <strong>True</strong> and otherwise iterate over <code>range()</code>. But it doesn't:</p>
<pre class="lang-py prettyprint-override"><code>>>> list(range(5))
[0, 1, 2, 3, 4]
>>> list(foo(True))
[-1]
>>> list(foo(False))
[]
</code></pre>
<p>For the last line, I would expect the same result as for the first line (<code>[0, 1, 2, 3, 4]</code>). Why is this not the case, and how should I change the code so that it works?</p>
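<p>For context on the usual fix: because the function body contains a <code>yield</code>, the whole function is a generator, and the <code>return</code> value is discarded (it only becomes <code>StopIteration.value</code>). Delegating with <code>yield from</code> makes both branches yield, as a sketch:</p>

```python
def foo(arg):
    if arg:
        yield -1
    else:
        # 'return range(5)' only sets StopIteration.value; delegate instead
        yield from range(5)
```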
|
<python><yield>
|
2024-04-25 09:32:47
| 1
| 593
|
olebole
|
78,383,395
| 9,342,193
|
Adding labels within a pie chart in Python by optimising space
|
<p>I have a dataframe such as :</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# Sample data
data = {
'genus': ['SpeciesA', 'SpeciesB', 'SpeciesC', 'SpeciesD', 'SpeciesE', 'SpeciesF', 'SpeciesG', 'SpeciesH'],
'count': [10, 2, 1, 1, 1, 1, 1, 1],
'Type': ['Animal', 'Environment', 'Environment', 'Environment', 'Animal', 'Animal', 'Animal/Environment', 'Animal/Environment']
}
# Create DataFrame
df = pd.DataFrame(data)
</code></pre>
<p><strong>>>> df</strong></p>
<pre><code> genus count Type
0 SpeciesA 10 Animal
1 SpeciesB 2 Environment
2 SpeciesC 1 Environment
3 SpeciesD 1 Environment
4 SpeciesE 1 Animal
5 SpeciesF 1 Animal
6 SpeciesG 1 Animal/Environment
7 SpeciesH 1 Animal/Environment
</code></pre>
<p>And I would like, using Python, to create a pie chart where the chart is divided into each Type, proportional to its total count.</p>
<p>So far I can do that using :</p>
<pre><code># Group by 'Type' and sum up 'count' within each group
type_counts = df.groupby('Type')['count'].sum()
# Create pie chart
plt.figure(figsize=(8, 8))
plt.pie(type_counts, labels=type_counts.index, startangle=140)
plt.title('Distribution of Counts by Type')
plt.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/wHjLj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wHjLj.png" alt="enter image description here" /></a></p>
<p>But I am now looking for a way to add the genus labels within the specific sub-pie chart part.</p>
<p>Such labels should be included in this way <strong>(A)</strong>:</p>
<p>In such a way that the labels are placed randomly in the corresponding pie chart without overlapping by optimizing the space.</p>
<p>Or, if it is impossible to simply place them in this way <strong>(B)</strong>:</p>
<p><a href="https://i.sstatic.net/yFrQX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yFrQX.png" alt="enter image description here" /></a></p>
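<p>A sketch of the geometry behind placing one label per genus slice, kept matplotlib-free so only the coordinates are computed (the radii and the even/odd staggering are assumptions to reduce overlap; each resulting <code>(x, y)</code> would then be passed to <code>plt.text(x, y, genus, ha='center')</code>):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'genus': ['SpeciesA', 'SpeciesB', 'SpeciesC', 'SpeciesD',
              'SpeciesE', 'SpeciesF', 'SpeciesG', 'SpeciesH'],
    'count': [10, 2, 1, 1, 1, 1, 1, 1],
    'Type': ['Animal', 'Environment', 'Environment', 'Environment',
             'Animal', 'Animal', 'Animal/Environment', 'Animal/Environment'],
}).sort_values('Type')  # group slices of the same Type together, as plt.pie would

frac = df['count'] / df['count'].sum()
end = 2 * np.pi * frac.cumsum()          # end angle of each genus slice
start = end.shift(1, fill_value=0.0)     # start angle = previous slice's end
mid = ((start + end) / 2).to_numpy()     # angular midpoint for the label
radius = np.where(np.arange(len(df)) % 2, 0.75, 0.5)  # stagger radii (assumed values)
df['x'], df['y'] = radius * np.cos(mid), radius * np.sin(mid)
```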
|
<python><matplotlib><pie-chart>
|
2024-04-25 08:51:57
| 1
| 597
|
Grendel
|
78,383,268
| 10,816,404
|
How to use logging with custom jsonPayload in GCP Dataflow
|
<p>by default, GCP Dataflow 'just works' with the standard Python logging library, so:</p>
<pre class="lang-py prettyprint-override"><code>import logging
logging.info('hello')
</code></pre>
<p>yields a log message in Google Logging while the Dataflow job runs in GCP. This log entry contains some useful log context - mainly info about the gcp project, job, worker and, of course, the message (in this case 'hello').
JSON fields like <code>jsonPayload</code> and <code>labels</code> are populated by those values.</p>
<p>However, I'd like to add some more contextual info (without having to place in the message string itself).</p>
<p>Normally, when using GCP services, you can do it by installing <code>google-cloud-logging</code> library and setting up your logger like:</p>
<pre class="lang-py prettyprint-override"><code>
# Imports the Cloud Logging client library
import google.cloud.logging
# Instantiates a client
client = google.cloud.logging.Client()
# Retrieves a Cloud Logging handler based on the environment
# you're running in and integrates the handler with the
# Python logging module. By default this captures all logs
# at INFO level and higher
client.setup_logging()
</code></pre>
<p>and then you can use some convention. For example, out of the box, you can use standard logging library like the 'hello' example above, but you're also able to add contextual info like:</p>
<pre class="lang-py prettyprint-override"><code>import logging
logging.info('hello', extra={'json_fields': {'key1': 'value1', 'key2': 'value2'}})
</code></pre>
<p>the <code>json_fields</code> is a convention that would translate <code>key1</code> and <code>key2</code> into <code>jsonPayload</code> object. you can also provide <code>labels</code> object in the same manner.</p>
<p>This, however, does not work with Dataflow. Adding both <code>json_fields</code> and/or <code>labels</code> doesn't add nor overwrites any contextual info in GCP logging.</p>
<p>The above behaviour happens without the client setup that I've added above, because we can't instantiate a client due to Pickling errors.</p>
<p>Surely this should be simple enough and I'm missing something!</p>
|
<python><google-cloud-platform><google-cloud-dataflow><apache-beam><google-cloud-logging>
|
2024-04-25 08:26:24
| 1
| 2,220
|
Duck Ling
|
78,383,166
| 984,621
|
How to save logs to an AWS S3 bucket?
|
<p>The logs the Scrapy spider is producing are getting bigger over time, which is causing an issue in terms of the performance of my server (Ubuntu). I don't want to limit the level of information I am putting to logs, because I find it very useful in general, and even more when it comes to debugging.</p>
<p>So right now, my options are:</p>
<ol>
<li>Increase the disk space/configuration of the server – this is a short-term fix and absolutely not scalable. I cannot do it forever.</li>
<li>Decrease the level of information I store in log files – as described above, I am not so eager to do that.</li>
</ol>
<p>There's a way to store scraped data (images and other files) to third-party storage services, such as AWS S3, and that works well.</p>
<p>Is there a way to do the same with log files? I was not able to find a solution to this. Right now what I am considering is writing a script that would be triggered once the spider is finished. This script would take the log file, copy it to the AWS S3 bucket, update the database with the path to the log file in the S3 bucket, and delete the log file generated by Scrapy on the server.</p>
<p>It's an extra logic that I would need to add, so I am wondering if there's a better way to deal with this problem.</p>
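<p>The post-run script described above can be a few lines. A sketch with the S3 call injected as a callable so the logic stays testable; in real use that callable would be <code>boto3.client('s3').upload_file</code> (which takes <code>(Filename, Bucket, Key)</code>), and the bucket and key names here are made up:</p>

```python
import os

def ship_log(log_path, bucket, key, upload, remove=os.remove):
    """Upload a finished spider's log file, then delete the local copy.

    `upload` is any callable with the (filename, bucket, key) signature,
    e.g. boto3.client('s3').upload_file.
    """
    upload(log_path, bucket, key)
    remove(log_path)
    # return the S3 URI so the caller can store it in the database
    return f's3://{bucket}/{key}'
```

<p>Hooking this to Scrapy's <code>spider_closed</code> signal would run it automatically after each crawl without a separate cron script.</p>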
|
<python><scrapy>
|
2024-04-25 08:10:19
| 0
| 48,763
|
user984621
|
78,383,130
| 5,618,856
|
Select sibling node value based on multiple sibling conditions not using an xpath string
|
<p>There are various answers on finding an XML node using XPath, like <a href="https://stackoverflow.com/questions/65505341/select-sibling-node-value-based-on-multiple-sibling-conditions">here</a>. In Python using lxml, is there a more Pythonic way of building this condition list? I know I can build the XPath string using an f-string, but maybe there is a more elegant way.</p>
<p>As example:</p>
<pre><code>root.xpath('//Group/Person[Position = "CEO" and Street = "Main" and Name = "Paul"]/Condition[Group = "Manager"]/Room')[0].text
</code></pre>
<p>can be built like so:</p>
<pre><code>root.xpath(f'//Group/Person[Position = "{role}" and Street = "{street}" and Name = "{name}"]/Condition[Group = "{group}"]/Room')[0].text
</code></pre>
<p>but it still looks clumsy.</p>
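<p>A small helper can keep the f-string readable, as a sketch (the helper name is made up; note it does no escaping, so values containing double quotes would still need XPath's <code>concat()</code> trick):</p>

```python
def conds(**kwargs):
    # Build an XPath predicate like: Position = "CEO" and Street = "Main"
    return ' and '.join(f'{name} = "{value}"' for name, value in kwargs.items())

role, street, name, group = 'CEO', 'Main', 'Paul', 'Manager'
query = (f'//Group/Person[{conds(Position=role, Street=street, Name=name)}]'
         f'/Condition[{conds(Group=group)}]/Room')
```

<p>Keyword order is preserved (Python 3.7+), so the generated predicate matches the hand-written one.</p>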
|
<python><xml><xpath><lxml>
|
2024-04-25 08:02:23
| 1
| 603
|
Fred
|
78,382,633
| 7,680,592
|
ERROR API Binance convert limit placeOrder
|
<p>I'm trying to place a limit convert order using the Binance API. The Postman doc tells us the values should be passed as parameters in the POST, like we do with <code>user_saldo</code>, which works just fine.</p>
<p>The error returned <code>content: b'{"code":345214,"msg":"Placing a limit order has failed. Please try again later. Error code: 345124"}'</code></p>
<p>So the <code>conversao</code> method doesn't work. <em>I also tried to adjust the limitPrice value when it is a BUY and increase it when it's a SELL, to avoid opening a dead limit order</em>; still the same error.</p>
<p><em>Fixed by using only 8 decimal places instead of 15.</em> Another issue, which gives a different error, is when I try to pass <code>quant_de_cpt</code>: it says this value is malformed.</p>
<p>What I'm doing wrong?</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
import os
import requests
import time
import hmac
import hashlib
class OperadorBinanceClient:
def __init__(self, api_key=None, api_secret=None):
self.api_key = api_key or os.environ.get('binance_api_key')
self.api_secret = api_secret or os.environ.get('binance_api_secret')
self.recv_window = 60000 # Valor em milissegundos
self.base_url = 'https://api.binance.com'
self.user_endpoint = '/sapi/v3/asset/getUserAsset'
self.conversao_endpoint = '/sapi/v1/convert/limit/placeOrder'
self.conversao_status_endpoint = '/sapi/v1/convert/orderStatus'
def _get_timestamp(self):
return str(int(time.time() * 1000))
def _generate_signature(self, query_string:str):
signature = hmac.new(self.api_secret.encode('utf-8'), query_string.encode('utf-8'), hashlib.sha256).hexdigest()
return signature
def user_saldo(self, coin='BRL'):
timestamp = self._get_timestamp()
query_string = f'recvWindow={self.recv_window}&timestamp={timestamp}'
signature = self._generate_signature(query_string)
url = f'{self.base_url}{self.user_endpoint}?{query_string}&signature={signature}'
headers = {'X-MBX-APIKEY': self.api_key}
response = requests.post(url, headers=headers)
all_balances = response.json()
if coin == '':
return all_balances
filtered_balance = [item for item in all_balances if item['asset'] == coin]
# empty list coin not exist
return filtered_balance if len(filtered_balance) > 0 else None
def conversao2(self, de='BRL', para='ETH', quantidade=1, quantidade_de_crypto=0.1, preco_limite=None, lado='BUY', tipo_carteira='SPOT'):
timestamp = self._get_timestamp()
quant_de_cpt = f'{quantidade_de_crypto:.15f}'
data = {
'baseAsset': de,
'quoteAsset': para,
'limitPrice': preco_limite or '',
'baseAmount': quantidade,
'quoteAmount': quant_de_cpt,
'side': lado,
'expiredType': '1_H',
'recvWindow': 60000,
'timestamp': timestamp
}
signature = self._generate_signature('&'.join([f'{key}={data[key]}' for key in data]))
data['signature'] = signature
url = f'{self.base_url}{self.conversao_endpoint}'
headers = {'X-MBX-APIKEY': self.api_key, 'Content-Type': 'application/json'}
response = requests.post(url, headers=headers, data=data)
return response.json()
def conversao(self, de= 'BRL', para='ETH', quantidade=1, quantidade_de_crypto=0.1,preco_limite=None, lado='BUY', tipo_carteira='SPOT'):
timestamp = self._get_timestamp()
quant_de_cpt = f'{quantidade_de_crypto:.15f}'
#&quoteAmount={quant_de_cpt}
query_string = f'baseAsset={de}&quoteAsset={para}&limitPrice={preco_limite}&baseAmount={quantidade}&side={lado}&expiredType=1_H&recvWindow=60000&timestamp={timestamp}'
signature = self._generate_signature(query_string)
url = f'{self.base_url}{self.conversao_endpoint}?{query_string}&signature={signature}'
headers = {'X-MBX-APIKEY': self.api_key, 'Content-Type': 'application/json'}
response = requests.post(url, headers=headers)
return response.json()
def conversao_status(self, id_conversao):
timestamp = self._get_timestamp()
query_string = f'orderId={id_conversao}&recvWindow=60000&timestamp={timestamp}'
signature = self._generate_signature(query_string)
url = f'{self.base_url}{self.conversao_status_endpoint}?{query_string}&signature={signature}'
headers = {'X-MBX-APIKEY': self.api_key}
response = requests.get(url, headers=headers)
return response.json()
</code></pre>
<p><a href="https://i.sstatic.net/QsMDLV2n.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsMDLV2n.jpg" alt="Show Url value" /></a></p>
|
<python><python-requests><request><binance><binance-api-client>
|
2024-04-25 06:26:57
| 2
| 600
|
Joao Victor
|
78,382,516
| 3,909,896
|
Pyenv - Switching between Python and PySpark versions without hardcoding environment variable paths for python
|
<p>I have trouble getting different versions of PySpark to work correctly on my Windows machine in combination with different versions of Python installed via pyenv.</p>
<p>The setup:</p>
<ol>
<li>I installed <strong>pyenv</strong> and let it set the environment variables (<em>PYENV</em>, <em>PYENV_HOME</em>, <em>PYENV_ROOT</em> and the entry in <em>PATH</em>)</li>
<li>I installed <strong>Amazon Coretto Java JDK</strong> (jdk1.8.0_412) and set the <em>JAVA_HOME</em> environment variable.</li>
<li>I downloaded the <strong>winutils.exe & hadoop.dll</strong> from <a href="https://github.com/kontext-tech/winutils/tree/master/hadoop-3.3.0/bin" rel="nofollow noreferrer">here</a> and set the <em>HADOOP_HOME</em> environment variable.</li>
<li>Via <strong>pyenv</strong> I installed <strong>Python</strong> 3.10.10 and then <strong>pyspark</strong> 3.4.1</li>
<li>Via <strong>pyenv</strong> I installed <strong>Python</strong> 3.8.10 and then <strong>pyspark</strong> 3.2.1</li>
</ol>
<p>Python works as expected:</p>
<ul>
<li>I can switch between different versions with <code>pyenv global <version></code></li>
<li>When I use <code>python --version</code> in PowerShell it always shows the version that I set before with pyenv.</li>
</ul>
<p>But I'm having trouble with PySpark.</p>
<p>For one, I cannot start PySpark from the PowerShell console: running <code>pyspark</code> fails with <code>The term 'pyspark' is not recognized as the name of a cmdlet, function, script file....</code>.</p>
<p>More annoyingly, my repo-scripts (with a .venv created via pyenv & poetry) also fail:</p>
<ul>
<li><code>Caused by: java.io.IOException: Cannot run program "python3": CreateProcess error=2, The system cannot find the file specified</code> [...] <code>Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified</code></li>
</ul>
<p>However, both work after I add the following two entries to the <em>PATH</em> environment variable:</p>
<ul>
<li>C:\Users\myuser\.pyenv\pyenv-win\versions\3.10.10</li>
<li>C:\Users\myuser\.pyenv\pyenv-win\versions\3.10.10\Scripts</li>
</ul>
<p>but <strong>I would have to "hardcode" the Python Version</strong> - which is exactly what I don't want to do while using pyenv.</p>
<p>If I hardcode the path, even if I switch to another Python version (<code>pyenv global 3.8.10</code>), once I run <code>pyspark</code> in Powershell, the version <strong>PySpark 3.4.1</strong> starts from the environment <em>PATH</em> entry for Python 3.10.10. I also cannot just do anything with python in the command line as it always points to the hardcoded python version, no matter what I do with pyenv.</p>
<p>I was hoping to be able to start <strong>PySpark 3.2.1</strong> from Python 3.8.10 which I just "activated" with pyenv globally.</p>
<p>What do I have to do to be able to switch between the Python installations (and thus also between PySparks) with pyenv without "hardcoding" the Python paths?</p>
<p>Example PySpark script:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession
spark = (
SparkSession
.builder
.master("local[*]")
.appName("myapp")
.getOrCreate()
)
data = [("Finance", 10),
("Marketing", 20),
]
df = spark.createDataFrame(data=data)
df.show(10, False)
</code></pre>
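<p>One direction worth trying (a sketch, not verified against pyenv-win specifically): set <code>PYSPARK_PYTHON</code> from inside the script, before building the <code>SparkSession</code>, so the workers use whatever interpreter pyenv activated; this is what the <code>Cannot run program "python3"</code> error points at, and it avoids hardcoding any version into <em>PATH</em>:</p>

```python
import os
import sys

# Point Spark's driver and workers at the interpreter running this script,
# i.e. whichever version pyenv currently has active, rather than a
# hardcoded PATH entry or a bare "python3".
os.environ['PYSPARK_PYTHON'] = sys.executable
os.environ['PYSPARK_DRIVER_PYTHON'] = sys.executable

# ...then build the SparkSession as in the example script above.
```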
|
<python><windows><pyspark>
|
2024-04-25 06:01:00
| 1
| 3,013
|
Cribber
|
78,382,233
| 12,936,009
|
ImportError: cannot import name 'BPF' from 'bcc' (unknown location)
|
<p>My bcc is built here:</p>
<pre><code>/home/pegasus_vm/Documents/eBPFShield/bcc/build
</code></pre>
<p>When I try to run <code>python main.py -h</code> from <code>/home/pegasus_vm/Documents/eBPFShield/</code>,
it shows the following error:</p>
<pre><code>(env) pegasus_vm@pegasusvm:~/Documents/eBPFShield$ python main.py -h
Traceback (most recent call last):
File "/home/pegasus_vm/Documents/eBPFShield/main.py", line 3, in <module>
from bcc import BPF
ImportError: cannot import name 'BPF' from 'bcc' (unknown location)
</code></pre>
<p>I have added the bcc to the path inside ~/.bashrc like this:</p>
<pre><code>bcctools=/home/pegasus_vm/Documents/eBPFShield/bcc/build/tools
bccexamples=/home/pegasus_vm/Documents/eBPFShield/bcc/build/examples
export PATH=$bcctools:$bccexamples:$PATH
</code></pre>
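<p>Worth noting: <code>PATH</code> only controls which executables the shell finds; Python imports are resolved via <code>sys.path</code> / <code>PYTHONPATH</code>. A sketch (the <code>src/python</code> location under the build tree is an assumption about bcc's layout; verify the real location with <code>find bcc/build -name bcc -type d</code>):</p>

```python
import sys

# Prepend the directory that CONTAINS the built 'bcc' package (path below
# is an assumption about the build layout; adjust to what `find` reports).
sys.path.insert(0, '/home/pegasus_vm/Documents/eBPFShield/bcc/build/src/python')

# After this, `from bcc import BPF` would resolve against the build tree.
```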
|
<python><ebpf><bpf><bcc><bcc-bpf>
|
2024-04-25 04:24:03
| 2
| 847
|
NobinPegasus
|
78,382,145
| 2,307,441
|
Row based filter and aggregation in pandas python
|
<p>I have two dataframes as below.</p>
<p>df1:</p>
<pre class="lang-py prettyprint-override"><code>data1 = {
'Acc': [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 4],
'indi_val': ['Val1', 'val2', 'Val_E', 'Val1_E', 'Val1', 'Val3', 'val2', 'val2_E', 'val22_E', 'val2_A', 'val2_V', 'Val_E', 'Val_A', 'Val', 'Val2', 'val7'],
'Amt': [10, 20, 5, 5, 22, 38, 15, 25, 22, 23, 24, 56, 67, 45, 87, 88]
}
df1 = pd.DataFrame(data1)
</code></pre>
<p>df2:</p>
<pre class="lang-py prettyprint-override"><code>data2 = {
'Acc': [1, 1, 2, 2, 3, 4],
'Indi': ['With E', 'Without E', 'With E', 'Without E', 'Normal', 'Normal']
}
df2 = pd.DataFrame(data2)
</code></pre>
<p>Based on these two dataframes I need to create final output as below:</p>
<pre><code> AccNo Indi Amt
1 With E 7
1 Without E 90
2 With E 47
2 Without E 62
3 Normal 225
4 Normal 88
</code></pre>
<p>The logic:</p>
<ul>
<li><code>with E</code>: where last 2 characters from <code>df1['indi_val']</code> equal "_E", get <code>sum(Amt)</code>.</li>
<li><code>Without E</code>: where last 2 characters from <code>df1['indi_val']</code> do not equal "_E", get <code>sum(Amt)</code>.</li>
<li><code>Normal</code>: without any filter on <code>df1['indi_val']</code>, get <code>sum(Amt)</code>.</li>
</ul>
<p>I tried writing something as below:</p>
<pre class="lang-py prettyprint-override"><code>def get_indi(row):
listval = []
if row['Indi'] == "With E":
#print('A')
df1.apply(lambda df1row: listval.append(df1row['amt'] if df1row['Acc']==row['Acc'] and df1row['indi_val'][-2:]=="_E" else 0))
if row['Indi'] == "Without E":
df1.apply(lambda df1row: listval.append(df1row['amt'] if df1row['Acc']==row['Acc'] and df1row['indi_val'][-2:]!="_E" else 0))
if row['Indi'] == "Normal":
df1.apply(lambda df1row: listval.append(df1row['amt']))
return sum(listval)
# Apply the function to create the 'Indi' column in df1
df2['Amt'] = df2.apply(get_indi)
</code></pre>
<p>With above code I am getting the following error:</p>
<pre class="lang-py prettyprint-override"><code>get_loc
raise KeyError(key)
KeyError: 'Indi'
</code></pre>
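<p>For reference, the logic above can also be expressed without row-wise <code>apply</code> at all, as a sketch over the sample data (note: the sums below are computed directly from <code>data1</code>, and two of them differ from the hand-written expected table: Acc 1 "With E" comes out as 10, and Acc 3 "Normal" as 255):</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({
    'Acc': [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 4],
    'indi_val': ['Val1', 'val2', 'Val_E', 'Val1_E', 'Val1', 'Val3', 'val2',
                 'val2_E', 'val22_E', 'val2_A', 'val2_V', 'Val_E', 'Val_A',
                 'Val', 'Val2', 'val7'],
    'Amt': [10, 20, 5, 5, 22, 38, 15, 25, 22, 23, 24, 56, 67, 45, 87, 88],
})
df2 = pd.DataFrame({'Acc': [1, 1, 2, 2, 3, 4],
                    'Indi': ['With E', 'Without E', 'With E', 'Without E',
                             'Normal', 'Normal']})

# Classify each row once, aggregate, then add the unfiltered 'Normal' totals.
df1['Indi'] = np.where(df1['indi_val'].str.endswith('_E'), 'With E', 'Without E')
split = df1.groupby(['Acc', 'Indi'], as_index=False)['Amt'].sum()
normal = df1.groupby('Acc', as_index=False)['Amt'].sum().assign(Indi='Normal')
out = df2.merge(pd.concat([split, normal]), on=['Acc', 'Indi'], how='left')
```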
|
<python><pandas><dataframe>
|
2024-04-25 03:48:19
| 4
| 1,075
|
Roshan
|
78,382,125
| 18,579,739
|
How to divide large python file into small files without change program structure?
|
<p>I face a program like following:</p>
<pre class="lang-py prettyprint-override"><code># This code just used to reflect the recursive structure, DO NOT take it grammatically !!
class A:
def __init__(self) -> None:
pass
def check(self, element):
if element == 'TypeA':
self.__validate_TypeA(element)
if element == 'TypeB':
self.__validate_TypeB(element)
if element == 'TypeC':
self.__validate_TypeC(element)
def __validate_TypeA(self, element):
# implementation..
if element.contains('TypeB'):
self.check('TypeB')
# implementation...
def __validate_TypeB(self, element):
if element.contains('TypeC'):
self.check('TypeC')
def __validate_TypeC(self, element):
if element.contains('TypeA'):
self.check('TypeA')
# thousands of codes here...
</code></pre>
<p>In this class, methods depend on each other. In my project, there are around fifty types like 'TypeA' and 'TypeB'; they are correlated and might call the <code>check</code> method in their own implementations. My class has now grown to 4 thousand lines of code, and maintenance is becoming tricky.</p>
<p>I have tried using inheritance; it did not work out.
I have tried using the built-in <code>exec()</code>, but it forcefully breaks the code into multiple files, losing the possibility of using the IDE for code highlighting, etc...</p>
<p>So is there a way to do following?</p>
<ol>
<li>divide my class into multiple files, might be one file for one type implementation.</li>
<li>do not influence the program structure</li>
</ol>
<p>In C/C++, this could be easily accomplished using <code>#include</code>, but I do not know how to do it in Python.</p>
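<p>One pattern that fits this constraint is splitting the validators into mixin classes, one per module, and composing them. A sketch (shown in a single file only so it is self-contained; in practice each mixin would live in its own file and be imported where <code>A</code> is defined; also note that double-underscore names like <code>__validate_TypeA</code> are name-mangled per class, so mixins must use a single underscore):</p>

```python
# validators_a.py (hypothetical module) would hold TypeAValidator, etc.
class TypeAValidator:
    def _validate_TypeA(self, element):
        return f'A:{element}'   # real implementation may call self.check(...)

# validators_b.py (hypothetical module)
class TypeBValidator:
    def _validate_TypeB(self, element):
        return f'B:{element}'

# main.py: compose the mixins; cross-file calls via self still resolve,
# so the recursive check/validate structure is unchanged.
class A(TypeAValidator, TypeBValidator):
    def check(self, element):
        handler = getattr(self, f'_validate_{element}')
        return handler(element)
```

<p>Dispatching via <code>getattr</code> also replaces the long chain of <code>if element == ...</code> comparisons, so adding a new type only means adding a new mixin.</p>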
|
<python>
|
2024-04-25 03:41:45
| 2
| 396
|
shan
|
78,382,107
| 480,118
|
creating a multi-index column from an existing dataframe
|
<p>I have data that is coming to me as it appears in a spreadsheet.<br />
When it arrives it looks like this:</p>
<pre><code>import pandas as pd, numpy as np
data1 = [['symbol', 'appl', 'goog', None, 'msft', None, None, None],
['date' , 'close', 'close', 'volume', 'close', 'open', 'high', 'low'],
['1999-01-10', 100, 101, 10000, 102, 102, 104, 105],
['1999-01-11', 200, 201, 10000, 202, 202, 204, 205]]
df = pd.DataFrame(data1)
df
</code></pre>
<p>this generates a table that looks like this:</p>
<p><a href="https://i.sstatic.net/ey5A7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ey5A7.png" alt="enter image description here" /></a></p>
<p>I basically need to iterate over this data by splitting or grouping by symbol.</p>
<p>So the first dataframe would consist of only <code>[date, close]</code> column for appl,</p>
<p>The 2nd would be <code>[date, close, volume]</code> for goog</p>
<p>And the last would be <code>[close, open, high, low]</code> for msft</p>
<p>I figured to do this, if I can create a multi-index, and group by the symbols, I should be able to slice the dataframe correctly and iterate over it.</p>
<pre><code>df = pd.DataFrame(data1)
df = df.ffill(axis=1)
#add first column by using first row
df.columns = df[:1].values.tolist()
df = df[1:]
df.columns = [df.columns, df[:1].values.tolist()]
#repeat for the fields row (2nd column above, now first column after above line)
#df.columns = pd.MultiIndex.from_product(df.columns.levels + df[:1].values.tolist())
#df.set_axis(pd.MultiIndex.from_product([df.columns, df[:1].values.tolist()]), axis=1)
df
</code></pre>
<p>The last line is not working; a few other things I have tried are commented out.
I am sure there is a better way to do this in any event... Please advise if you can.</p>
<p>Thanks</p>
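<p>A sketch of one way to build the MultiIndex from the first two rows and then split per symbol (the <code>ffill</code> across the header row mirrors the merged cells of the spreadsheet):</p>

```python
import pandas as pd

data1 = [['symbol', 'appl', 'goog', None, 'msft', None, None, None],
         ['date', 'close', 'close', 'volume', 'close', 'open', 'high', 'low'],
         ['1999-01-10', 100, 101, 10000, 102, 102, 104, 105],
         ['1999-01-11', 200, 201, 10000, 202, 202, 204, 205]]

header = pd.DataFrame(data1[:2]).ffill(axis=1)   # spread each symbol over its columns
df = pd.DataFrame(data1[2:])
df.columns = pd.MultiIndex.from_arrays(header.to_numpy().tolist())
df = df.set_index(('symbol', 'date'))            # the first column is the date index

# one sub-frame per symbol, with only that symbol's fields
frames = {sym: df[sym] for sym in df.columns.get_level_values(0).unique()}
```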
|
<python><pandas><numpy>
|
2024-04-25 03:31:23
| 1
| 6,184
|
mike01010
|
78,381,917
| 198,666
|
How to chain find() methods and dealing with None in BeautifulSoup?
|
<p>I am writing some HTML processing and am loving BS4. I do find it a bit lengthy and was hoping there was some better way to deal with this.</p>
<p>I would love to chain my finds together like this:</p>
<pre><code>soup.find('li', class_='positions').find('span', class_='list-value').getText()
</code></pre>
<p>Instead when the first find doesn’t find anything, it returns <code>None</code> and then the next find fails on that as expected.</p>
<p>I rewrote it as two lines and it seems OK, but it would be preferable to have some sort of <code>?:</code> conditional operator in there like I have in C#.</p>
<pre><code>elem_sup_position = soup.find('li', class_='positions')
sup_position = elem_sup_position.find('span', class_='list-value').getText() if elem_sup_position is not None else ''
</code></pre>
<p>I know I could probably rewrite it as this but I hate executing the first find twice to save 1 line of code! Is there a slicker way to do this? I have a lot of these.</p>
<pre><code>sup_position = result.find('li', class_='positions').find('span', class_='list-value').getText() if result.find('li', class_='positions') else None
</code></pre>
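<p>A small generic helper can express this without running any <code>find</code> twice, as a sketch (the helper name is made up; each step is a plain callable, so it works for any chain of lookups, not just BeautifulSoup):</p>

```python
def chain_find(node, *steps, default=''):
    """Run each step in turn; bail out to `default` as soon as one
    returns None, instead of raising AttributeError."""
    for step in steps:
        if node is None:
            return default
        node = step(node)
    return node if node is not None else default

# Usage sketch with BeautifulSoup:
# sup_position = chain_find(
#     soup,
#     lambda n: n.find('li', class_='positions'),
#     lambda n: n.find('span', class_='list-value'),
#     lambda n: n.getText(),
# )
```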
|
<python><web-scraping><beautifulsoup>
|
2024-04-25 02:09:59
| 1
| 329
|
Davery
|
78,381,577
| 14,656,198
|
Is it possible to ignore pyright linting only for magic commands IPython command
|
<p>It's possible to use the <code># type: ignore</code> comment to ignore linting on a specific line:</p>
<pre class="lang-py prettyprint-override"><code>print(syntax error) # type: ignore
</code></pre>
<p>However, the same is not possible with magic commands:</p>
<pre class="lang-py prettyprint-override"><code>%load_ext autoreload # type: ignore
'''
yields error "ModuleNotFoundError: No module named 'autoreload # type: ignore'"
'''
</code></pre>
<p>Removing it, however, yields the linting error "<code>Expected expression</code>". And of course, using <code># type: ignore</code> at the top of the file is not desirable, as it would disable all linting for the whole file.</p>
<p>Is it possible to disable PyRight linting for magic commands?</p>
|
<python><ipython><pyright>
|
2024-04-24 23:08:29
| 0
| 1,745
|
Luiz Martins
|
78,381,457
| 2,165,613
|
Where to find Azure Functionapp deploy errors for python 3.11 in the Azure Portal
|
<p>I want to move my local python 3.11 project to an Azure Functionapp. <em>Somewhere</em> in the deployment process, an error happens, such as an <code>ModuleNotFoundError</code>. I assume it's because a package that can be installed locally just fine, fails to install on the Azure Functionapp. I can't find the errors in the Azure Portal, so I don't know what is wrong.</p>
<p>I created a basic example app which demonstrates my problem</p>
<p><a href="https://i.sstatic.net/2Sk7q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2Sk7q.png" alt="enter image description here" /></a></p>
<p>I looked in <code>Application Insights</code> -> <code>Logs</code> -> <code>traces</code>. It shows that my only HTTP endpoint was found, but not loaded. It does not say why.</p>
<p><a href="https://i.sstatic.net/68CSu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/68CSu.png" alt="enter image description here" /></a></p>
<p>The Log stream shows no errors.</p>
<p><a href="https://i.sstatic.net/Ptelk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ptelk.png" alt="enter image description here" /></a></p>
<p>I managed to find my errors deep inside the portal, at <code>Diagnose and solve problems</code> -> <code>Availability and Performance</code> -> <code>Functions that are not triggering</code>. Here it shows <code>ModuleNotFoundError: No module named 'somenonexistingmodule'</code>. However, these logs seem to be delayed by at least 15 minutes, so I have to wait 15 minutes after every deploy to figure out what is going wrong.</p>
<p>Is it possible to get any deploy errors straight away? Why doesn't the deploy process throw an error when a function doesn't want to load due to an error?</p>
|
<python><azure><azure-functions><azure-application-insights><azure-deployment>
|
2024-04-24 22:14:52
| 1
| 412
|
Emiel Steerneman
|
78,381,363
| 11,999,957
|
Why is this constraint function so much slower than a similar one and how do I increase the speed in Scipy Optimize?
|
<p>I have an optimization with around 22 constraint functions. I tried to turn them into a single constraint function, and the optimizer is taking 10x as long. Any way to speed it up?</p>
<p>Below are the original constraint functions. I only included 2, but there are about 22 of them, each with hard-coded values for the inequality constraint. The parameters are the values the optimizer is looking for, and the optimizer is looking for about 50 values. For example, the first constraint basically says that the first 2 parameter values, when summed, must not exceed 129:</p>
<pre><code>def MaxConstraint000(parameters):
_count = np.sum(parameters[0:2])
return (129 - _count)
def MaxConstraint001(parameters):
_count = np.sum(parameters[2:5])
return (2571 - _count)
_Constraints = ({'type': 'ineq', 'fun': MaxConstraint000}
, {'type': 'ineq', 'fun': MaxConstraint001})
</code></pre>
<p>To simplify my code, and instead of pre-determining the location of parameters, I tried something like this, where I supply a key value that pulls in the index locations for the parameter values, as well as the constant values [129, 2571, etc...], from a data frame. DFData has the same number of rows as the number of parameters. The constraint is identical to the first one, other than that I supply a keyValue, which allows me to look up the max value as well as the index locations.</p>
<pre><code>def MaxConstraint(parameters, keyValue):
    _parameters = np.array(parameters)
    _index = np.where(DFData['KeyValues'].values == keyValue)[0]
    _count = np.sum(_parameters[_index])
    _target = DFMaxValues.loc[[keyValue], ['MaxValue']].values[0][0]
    return (_target - _count)

_Constraints = ({'type': 'ineq', 'fun': MaxConstraint, 'args': ('keyValue1', )}
                , {'type': 'ineq', 'fun': MaxConstraint, 'args': ('keyValue2', )})
</code></pre>
<p>This results in 10 times longer execution. How do I get it down to approximately the same speed as the first version? I would prefer the second implementation because, instead of going into each constraint to change the MaxValue, I can just update a dictionary or CSV file. Additionally, if the rows of the data get shuffled, I don't have hard-coded index values.</p>
<p>Thanks!</p>
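<p>One direction I am considering (a sketch, not yet profiled against the full problem): do the <code>np.where</code> and max-value lookups once, up front, and close over the results, so each constraint call is again just a single <code>np.sum</code>. The <code>key_column</code> and <code>max_values</code> arguments below stand in for <code>DFData['KeyValues']</code> and <code>DFMaxValues</code>:</p>

```python
import numpy as np

def make_constraints(key_values, key_column, max_values):
    """One 'ineq' constraint per key, with all lookups done up front."""
    constraints = []
    for key in key_values:
        index = np.where(key_column == key)[0]   # computed once, not per call
        target = max_values[key]
        # Default args bind this iteration's index/target values,
        # avoiding Python's late-binding closure pitfall.
        def fun(parameters, _index=index, _target=target):
            return _target - np.sum(np.asarray(parameters)[_index])
        constraints.append({'type': 'ineq', 'fun': fun})
    return constraints

# Tiny illustration: params 0-1 must sum to <= 129, params 2-4 to <= 2571.
keys = np.array(['k1', 'k1', 'k2', 'k2', 'k2'])
cons = make_constraints(['k1', 'k2'], keys, {'k1': 129, 'k2': 2571})
```

<p>The resulting list can be passed to <code>scipy_opt.minimize</code> as <code>constraints=</code> just like the hard-coded tuple.</p>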
<hr />
<p>Full set of constraints:</p>
<pre><code>def MaxConstraint000(parameters):
    _count = np.sum(parameters[0:2])
    return (129 - _count)
def MaxConstraint001(parameters):
    _count = np.sum(parameters[2:5])
    return (2571 - _count)
def MaxConstraint002(parameters):
    _count = np.sum(parameters[5:8])
    return (3857 - _count)
def MaxConstraint003(parameters):
    _count = np.sum(parameters[8:10])
    return (823 - _count)
def MaxConstraint004(parameters):
    _count = np.sum(parameters[10:13])
    return (823 - _count)
def MaxConstraint005(parameters):
    _count = np.sum(parameters[13:16])
    return (3857 - _count)
def MaxConstraint006(parameters):
    _count = np.sum(parameters[16:21])
    return (4714 - _count)
def MaxConstraint007(parameters):
    _count = np.sum(parameters[21:25])
    return (3429 - _count)
def MaxConstraint008(parameters):
    _count = np.sum(parameters[25:28])
    return (3429 - _count)
def MaxConstraint009(parameters):
    _count = np.sum(parameters[28:30])
    return (3429 - _count)
def MaxConstraint010(parameters):
    _count = np.sum(parameters[30:33])
    return (2914 - _count)
def MaxConstraint011(parameters):
    _count = np.sum(parameters[33:38])
    return (6000 - _count)
def MaxConstraint012(parameters):
    _count = np.sum(parameters[38:43])
    return (6000 - _count)
def MaxConstraint013(parameters):
    _count = np.sum(parameters[43:45])
    return (429 - _count)
def MaxConstraint014(parameters):
    _count = np.sum(parameters[45:47])
    return (1457 - _count)
def MaxConstraint015(parameters):
    _count = np.sum(parameters[47:51])
    return (4286 - _count)
def MaxConstraint016(parameters):
    _count = np.sum(parameters[51:53])
    return (2143 - _count)
def MaxConstraint017(parameters):
    _count = np.sum(parameters[53:57])
    return (4286 - _count)
def MaxConstraint018(parameters):
    _count = np.sum(parameters[57:64])
    return (2143 - _count)
def MaxConstraint019(parameters):
    _count = np.sum(parameters[64:67])
    return (2571 - _count)
def MaxConstraint020(parameters):
    _count = np.sum(parameters[67:72])
    return (1714 - _count)
def MaxConstraint021(parameters):
    _count = np.sum(parameters[72:75])
    return (4286 - _count)
_Bounds = ((0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000)
, (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000)
, (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000)
, (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000)
, (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000)
, (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000)
, (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000)
, (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000)
, (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000), (0, 10000)
, (0, 10000), (0, 10000), (0, 10000))
_Constraints = ({'type': 'ineq', 'fun': MaxConstraint000}
, {'type': 'ineq', 'fun': MaxConstraint001}
, {'type': 'ineq', 'fun': MaxConstraint002}
, {'type': 'ineq', 'fun': MaxConstraint003}
, {'type': 'ineq', 'fun': MaxConstraint004}
, {'type': 'ineq', 'fun': MaxConstraint005}
, {'type': 'ineq', 'fun': MaxConstraint006}
, {'type': 'ineq', 'fun': MaxConstraint007}
, {'type': 'ineq', 'fun': MaxConstraint008}
, {'type': 'ineq', 'fun': MaxConstraint009}
, {'type': 'ineq', 'fun': MaxConstraint010}
, {'type': 'ineq', 'fun': MaxConstraint011}
, {'type': 'ineq', 'fun': MaxConstraint012}
, {'type': 'ineq', 'fun': MaxConstraint013}
, {'type': 'ineq', 'fun': MaxConstraint014}
, {'type': 'ineq', 'fun': MaxConstraint015}
, {'type': 'ineq', 'fun': MaxConstraint016}
, {'type': 'ineq', 'fun': MaxConstraint017}
, {'type': 'ineq', 'fun': MaxConstraint018}
, {'type': 'ineq', 'fun': MaxConstraint019}
, {'type': 'ineq', 'fun': MaxConstraint020}
, {'type': 'ineq', 'fun': MaxConstraint021}
)
############# Solve Optim Problem ###############
_OptimResultsConstraint = scipy_opt.minimize(ObjectiveFunction
, x0 = [10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000,
10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000,
10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000,
10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000,
10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000,
10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000,
10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000,
10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000, 10000,
10000, 10000, 10000] # starting guesses
, method = 'trust-constr' # trust-constr, SLSQP
, options = {'maxiter': 1000000000}
, bounds = _Bounds
, constraints = _Constraints)
</code></pre>
|
<python><scipy><constraints><scipy-optimize>
|
2024-04-24 21:47:49
| 2
| 541
|
we_are_all_in_this_together
|
78,381,352
| 480,118
|
Reading NULL/EMPTY from JSON in an HTTP post request handler
|
<p>I'm posting data from Excel to a back-end Python server. The JSON from Excel VBA, just before sending, is shown below. Notice that there is an empty cell/field after 2020-01-10:</p>
<pre><code>[
[
"name",
"RTX",
"SPX",
"NDX"
],
[
"date",
"close",
"close",
"close"
],
[
"2020-01-10T05:00:00.000Z", ,
200.1,
300.1
],
[
"2020-01-11T05:00:00.000Z",
100.1,
200.2,
300.2
]
]
</code></pre>
<p>The code in vba looks like this:</p>
<pre><code>Set http = CreateObject("Msxml2.ServerXMLHTTP")
With http
.Open "POST", url, False
.setRequestHeader "User-Agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
.setRequestHeader "Content-Type", "application/json; charset=utf-8"
.setTimeouts ONE_MIN_MS, 5 * ONE_MIN_MS, ONE_MIN_MS, 5 * ONE_MIN_MS
.send to_json
Log "response status: " & .Status
resp = .responseText
End With
</code></pre>
<p>When received by the Python Flask web application, I do the following:</p>
<pre><code>data = req.get_json()
</code></pre>
<p>This produces an error:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.11/site-packages/werkzeug/wrappers/request.py", line 620, in get_json
rv = self.on_json_loading_failed(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/flask/wrappers.py", line 131, in on_json_loading_failed
return super().on_json_loading_failed(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/werkzeug/wrappers/request.py", line 645, in on_json_loading_failed
raise BadRequest(f"Failed to decode JSON object: {e}")
werkzeug.exceptions.BadRequest: 400 Bad Request: Failed to decode JSON object: Expecting value: line 1 column 90 (char 89)
</code></pre>
<p>If I call <code>req.get_data()</code>, the bytes look like this:</p>
<pre><code>b'[["name","RTX","SPX","NDX"],["date","close","close","close"],["2020-01-10T05:00:00.000Z",,200.1,300.1],["2020-01-11T05:00:00.000Z",100.1,200.2,300.2]]'
</code></pre>
<p>I have also tried using <code>jsonify</code> and JSON libraries to convert the above to a JSON, but they all fail.</p>
<p>I'm guessing it is because there is an empty value there that does not conform to the JSON standard. Is there any solution to this on the Python side? Or do I need to fill that cell with some value?</p>
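<p>For what it's worth, a workaround sketch I am considering on the Python side: patch the raw body so empty slots become <code>null</code> before parsing. The function name and regexes are mine, and the cleaner fix is probably emitting <code>null</code> from VBA in the first place:</p>

```python
import json
import re

def parse_lenient_json(raw: bytes):
    """Turn Excel-style empty slots into JSON null, then parse.

    A workaround sketch only; note the regexes would also rewrite ",,"
    inside string values, which is fine for this numeric payload.
    """
    text = raw.decode('utf-8')
    text = re.sub(r',\s*(?=[,\]])', ',null', text)   # ",," or ",]" -> ",null"
    text = re.sub(r'\[\s*(?=,)', '[null', text)      # "[,"  -> "[null,"
    return json.loads(text)
```

<p>In the Flask handler this would be applied to <code>req.get_data()</code> instead of calling <code>req.get_json()</code>.</p>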
|
<python><vba><flask>
|
2024-04-24 21:42:52
| 0
| 6,184
|
mike01010
|
78,381,277
| 7,564,952
|
Databricks Spark throwing [GC (Allocation Failure) ] message
|
<p>I used this code to build a new_df. The idea is to get all the records between DATEUPDATED and the stop time and assign them a number, which I will use in a group-by in later steps; basically, every group of records between DATEUPDATED and the stop time gets the same number.</p>
<pre><code># Create an empty DataFrame
new_df = spark.createDataFrame([], df_filtered.schema)
i = 0

# Collect rows of df_filtered_sku1 as a list
rows = df_filtered_sku1.collect()
print('Length rows')
print(len(rows))  # 781

for row in rows:
    sku = row['Sku']
    start_time = row['DATEUPDATED']
    end_time = row['stop']
    print(sku, start_time, end_time)
    df_temp = df_filtered.filter((df_filtered.DATEUPDATED >= start_time) & (df_filtered.DATEUPDATED <= end_time) & (df_filtered.SKU == sku))
    df_temp = df_temp.withColumn("counter", lit(i))
    print('Temp')
    #print(df_temp.count())
    # Append the temporary DataFrame to the new_df DataFrame
    print('new Frame')
    new_df = new_df.union(df_temp)
    #print(new_df.count())
    i += 1
    if i > 780: print(new_df.count())  # 2531

display(new_df)
</code></pre>
<p>There are 781 rows in df_filtered_sku1, and the final new_df has a count of 2531. But when I try to display/show the new_df dataframe it never finishes, and the driver logs show it is stuck at an Allocation Failure: <a href="https://i.sstatic.net/KPdVL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPdVL.png" alt="error" /></a></p>
<p>Cluster Specifications:
<a href="https://i.sstatic.net/iZUNV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iZUNV.png" alt="cluster specs" /></a></p>
|
<python><apache-spark><pyspark><databricks>
|
2024-04-24 21:20:30
| 1
| 455
|
irum zahra
|
78,381,249
| 1,202,863
|
Mock a class with a parameter
|
<p>This is my method below that returns a dataframe</p>
<pre><code>def refactorReport(df):
    """Enhance with additional information as needed"""
    for userclass in ['ClassA', 'ClassB', 'ClassC']:
        df['%s_Region' % userclass] = df[userclass].apply(lambda x: commutils.UserNameMapper(x).Region())
    return df
</code></pre>
<p>How do I mock this <code>commutils.UserNameMapper(x)</code>?</p>
<p>I tried this but both are giving me "self param expected":</p>
<pre><code>class DummyUserNameMapper():
    def __init__(self, thename):
        self.thename = thename

    def Region(self):
        print(self.thename)
        return 'Region%s' % self.thename[-1]


class ModuleTests(unittest.TestCase):
    def test_refactorReport(self):
        with mock.patch("commutils.UserNameMapper", return_value=DummyUserNameMapper):
            print(refactorReport(self.records))

    @mock.patch('commutils.UserNameMapper')
    def test_refactorReport_New(self, mockUser):
        mockUser.return_value = DummyUserNameMapper
        print(refactorReport(self.records))
</code></pre>
<p>Where <code>self.records</code></p>
<pre><code>>>> import pandas as pd
>>> data = [()]
>>> data = [
... ('Name 1', 'Apple', 'Mango', 'Orange'),
... ('Name 2', 'Pear', 'Apple', 'Banana'),
... ('Name 3', 'Banana', 'Mango', 'Orange'),
... ('Name 4', 'Apple', 'Pear', 'Orange'),
... ('Name 5', 'Pear', 'Mango', 'Orange'),]
...
>>> df = pd.DataFrame(data,columns=['Name', 'ClassA', 'ClassB', 'ClassC'])
</code></pre>
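<p>For context, here is a self-contained sketch of the behaviour I expect, with local stand-in classes instead of the real <code>commutils.UserNameMapper</code>. My understanding is that <code>side_effect</code> (rather than <code>return_value</code>) is what makes the mock call the replacement with the original argument:</p>

```python
from unittest import mock

class UserNameMapper:            # local stand-in for commutils.UserNameMapper
    def __init__(self, name):
        self.name = name
    def Region(self):
        return 'real'

class DummyUserNameMapper:
    def __init__(self, thename):
        self.thename = thename
    def Region(self):
        return 'Region%s' % self.thename[-1]

class commutils_stub:            # namespace standing in for the commutils module
    UserNameMapper = UserNameMapper

# side_effect (not return_value) forwards the call arguments to the
# replacement class, so UserNameMapper(x) yields DummyUserNameMapper(x).
with mock.patch.object(commutils_stub, 'UserNameMapper',
                       side_effect=DummyUserNameMapper):
    assert commutils_stub.UserNameMapper('ClassA').Region() == 'RegionA'
```

<p>With <code>return_value=DummyUserNameMapper</code>, every call hands back the class object itself rather than an instance built from <code>x</code>, which is why <code>Region()</code> then complains about a missing <code>self</code>.</p>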
|
<python><unit-testing><class><mocking>
|
2024-04-24 21:13:42
| 1
| 586
|
Pankaj Singh
|
78,381,203
| 111,307
|
How do you get the XYZ coordinates of a point from a Maya Python script?
|
<p>How do you get the numerical xyz components of a vertex in Maya from a Python script?</p>
|
<python><maya>
|
2024-04-24 21:04:17
| 1
| 67,904
|
bobobobo
|
78,381,199
| 1,123,094
|
Prevent specific key presses from being sent to foreground app using PyQt5 and pynput
|
<p>So I want to detect certain keys being pressed and stop those keys being sent to the foreground app.</p>
<p>I don't want to use hideous libraries like pywin32 and pyHook, as they are such a headache to build, so let's use pynput.</p>
<p>The problem is that no matter what I do, space does get blocked, but then the whole Qt app hangs, or worse, I get a segmentation fault.</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5.QtWidgets import QApplication, QDialog
from PyQt5.QtCore import QThread, pyqtSignal, pyqtSlot
from pynput.keyboard import Listener, Key
class KeyListenerWorker(QThread):
keyPressed = pyqtSignal(str, bool)
def __init__(self, configured_keys=None):
super(KeyListenerWorker, self).__init__()
self.configured_keys = configured_keys or []
self.listener = None
def run(self):
with Listener(on_press=self.on_press, on_release=self.on_release) as self.listener:
self.listener.join()
def on_press(self, key):
key_description = self.get_key_description(key)
self.keyPressed.emit(key_description, True)
# Block the key if it is in the configured keys
return False if key_description in self.configured_keys else True
def on_release(self, key):
key_description = self.get_key_description(key)
self.keyPressed.emit(key_description, False)
# Block the key if it is in the configured keys
return False if key_description in self.configured_keys else True
def get_key_description(self, key):
if hasattr(key, 'char') and key.char:
return key.char
return str(key)
def stop(self):
if self.listener:
self.listener.stop()
class MainWindow(QDialog):
def __init__(self, configured_keys):
super().__init__()
self.setWindowTitle("KeyListener Running")
self.resize(300, 100)
self.listener_thread = KeyListenerWorker(configured_keys=configured_keys)
self.listener_thread.keyPressed.connect(self.handle_key_event)
self.listener_thread.start()
def handle_key_event(self, key, is_press):
action = "Pressed" if is_press else "Released"
print(f"{action}: {key}")
def closeEvent(self, event):
self.listener_thread.stop()
self.listener_thread.wait()
super().closeEvent(event)
if __name__ == "__main__":
app = QApplication(sys.argv)
configured_keys = [str(Key.space)] # Configure to react to the space bar
mainWindow = MainWindow(configured_keys)
mainWindow.show()
sys.exit(app.exec_())
</code></pre>
<p>NB: So - according to the <a href="https://pynput.readthedocs.io/en/latest/_modules/pynput/keyboard/_base.html#Listener" rel="nofollow noreferrer">docs</a> (and this <a href="https://stackoverflow.com/questions/65328213/how-to-prevent-certain-certain-keys-from-sending-input-in-python">SO post</a>) (TY <a href="https://stackoverflow.com/users/2001654/musicamante">@musicamante</a>) - you should use suppress=True..</p>
<p>but..</p>
<p>this leads to a hang..</p>
<pre class="lang-py prettyprint-override"><code>class KeyListenerWorker(QThread):
keyPressed = pyqtSignal(str, bool)
def __init__(self, configured_keys=None):
super(KeyListenerWorker, self).__init__()
self.configured_keys = configured_keys or []
self.listener = None
def run(self):
# Listener with suppress=False: start without suppressing any keys
with Listener(on_press=self.on_press, on_release=self.on_release, suppress=False) as self.listener:
self.listener.join()
def on_press(self, key):
key_description = self.get_key_description(key)
self.keyPressed.emit(key_description, True)
# Suppress if key is in the configured keys
return not (key_description in self.configured_keys)
def on_release(self, key):
key_description = self.get_key_description(key)
self.keyPressed.emit(key_description, False)
# Suppress if key is in the configured keys
return not (key_description in self.configured_keys)
def get_key_description(self, key):
if hasattr(key, 'char') and key.char:
return key.char
return str(key)
def stop(self):
if self.listener:
self.listener.stop()
</code></pre>
|
<python><pyqt5><qthread><pynput>
|
2024-04-24 21:01:39
| 1
| 2,250
|
willwade
|
78,381,197
| 3,103,957
|
Python FastAPI: Request and Response instances auto creation
|
<p>I have this code snippet.</p>
<pre><code>from fastapi import FastAPI, Request, Response

my_api = FastAPI()

@my_api.get("/get_items")
async def get_items(req: Request, res: Response):
    json_data = await req.json()
    res.status_code = 200
    ....
    ....
    return res
</code></pre>
<p>The above code works fine!!</p>
<p>In the above method two parameters are defined, req and res. Their types are Request and Response, imported from the fastapi package.
JSON data is queried using the json method on the req object, and the status_code attribute is set on the res object.
All of this happens without me ever creating instances of them.</p>
<p>My question is: when this route is called (eg: http://localhost:8000/get_items), how are these parameters initialised?
These are classes, so instances of them must be created and passed in from somewhere, but I am not sure where.</p>
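<p>My current understanding, sketched with stand-in classes (these are not the real fastapi ones): the framework reads the function signature once at registration time, then builds and passes matching objects per request. Something like:</p>

```python
import inspect

class Request:      # stand-in, not fastapi.Request
    pass

class Response:     # stand-in, not fastapi.Response
    pass

async def get_items(req: Request, res: Response):
    ...

# At decoration time the framework inspects the signature once; per request
# it builds a Request from the raw ASGI scope, creates a blank Response for
# the handler to mutate, and passes both by matching these annotations.
hints = {name: p.annotation
         for name, p in inspect.signature(get_items).parameters.items()}
```

<p>Is this signature-inspection picture roughly what FastAPI actually does?</p>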
|
<python><request><fastapi><response>
|
2024-04-24 21:00:39
| 0
| 878
|
user3103957
|
78,381,155
| 3,878,168
|
Python setuptools multiple extension modules with shared C source code building in parallel
|
<p>I'm working on a Python project with a setup.py that has something like this<sup>1</sup>:</p>
<pre class="lang-py prettyprint-override"><code>setup(
cmdclass={"build_ext": my_build_ext},
ext_modules=[
Extension("A", ["a.c", "common.c"]),
Extension("B", ["b.c", "common.c"])
]
)
</code></pre>
<p>I'm running into a problem when building the modules in parallel where it seems like one module tries to read <code>common.o</code>/<code>common.obj</code> while another is compiling it, and it fails. Is there some way to get setuptools to compile the C files for each module into their own build directories so that they aren't overwriting each other?</p>
<p> </p>
<ol>
<li>The actual project is more complicated with more modules and source files.</li>
</ol>
|
<python><setuptools><python-c-api>
|
2024-04-24 20:51:26
| 1
| 1,875
|
Yay295
|
78,380,991
| 9,462,829
|
io.UnsupportedOperation: fileno when trying to write pdf from .zip to MinIO
|
<p>I'm working on a Flask app where a user uploads a .zip file populated with .pdf files and I want to upload its contents to minIO. Been trying to use the <code>put_object</code> function, but I'm having issues trying to get the size.</p>
<p>Tried to replicate <a href="https://stackoverflow.com/questions/61499835/upload-to-minio-directly-from-flask-service">this question</a>, but I'm having troubles with the <code>fileno</code> method:</p>
<pre><code>file = form.file.data
file.seek(0)
zipfile = ZipFile(BytesIO(file.read()))
files = [zipfile.open(file_name) for file_name in zipfile.namelist()]
client = Minio(host_minio, access_key=access, secret_key=secret, secure=False)
destination_file = f'folder/file.pdf'
specific_file = files[0]
specific_file.seek(0)
file_bytes = BytesIO(specific_file.read())
size = os.fstat(file_bytes.fileno()).st_size
client.put_object(
    bucket_name, destination_file, file_bytes, size
)
</code></pre>
<p>Here, <code>size</code> is giving me the following error:</p>
<pre><code> io.UnsupportedOperation: fileno
</code></pre>
<p>Is this the correct approach? What am I missing? Thanks!</p>
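<p>For reference, here is a self-contained sketch of the two size options I have found so far, since a <code>BytesIO</code> lives in memory and has no OS-level file descriptor for <code>os.fstat(fileno())</code> to use: the archive's own <code>getinfo().file_size</code>, or the buffer's <code>getbuffer().nbytes</code>:</p>

```python
import io
import zipfile

# Build a small in-memory zip, standing in for the uploaded form file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('folder/file.pdf', b'%PDF-1.4 dummy')

archive = zipfile.ZipFile(io.BytesIO(buf.getvalue()))
member = archive.namelist()[0]

# Option 1: the archive already records each entry's uncompressed size.
size_from_zip = archive.getinfo(member).file_size

# Option 2: after reading into a BytesIO, ask the buffer itself rather
# than the OS - there is no file descriptor behind it.
file_bytes = io.BytesIO(archive.read(member))
size_from_buffer = file_bytes.getbuffer().nbytes
```

<p>Either value could then be passed as the size argument to <code>put_object</code>.</p>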
|
<python><io><minio>
|
2024-04-24 20:11:18
| 1
| 6,148
|
Juan C
|
78,380,686
| 5,693,706
|
Concatenate two geoshape charts in Altair / Vega-Lite with independent axis?
|
<p>I am trying to plot two separate geoshape charts and concatenate them together. Below is a MWE of what I am trying to do:</p>
<pre class="lang-py prettyprint-override"><code>import altair as alt
from vega_datasets import data
import geopandas as gpd
source = data.us_10m
source = gpd.read_file(source.url).set_crs(epsg=4326)
r1 = alt.Chart(source).mark_geoshape().transform_filter(alt.datum.id == '49047')
r2 = alt.Chart(source).mark_geoshape().transform_filter(alt.datum.id == '20123')
r1 | r2
</code></pre>
<p>Plotting them separately works great; Altair plots the shapes and zooms to their extents automatically.</p>
<p>R1:</p>
<p><a href="https://i.sstatic.net/sc2wq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sc2wq.png" alt="Region 1 Chart" /></a></p>
<p>R2:</p>
<p><a href="https://i.sstatic.net/LS1JM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LS1JM.png" alt="Region 2 Chart" /></a></p>
<p>However, when I try to concatenate them, they share axis limits, and each chart's view expands large enough to show both even if only one is displayed.</p>
<p>Concatenated Plot:</p>
<p><a href="https://i.sstatic.net/TQ7Jt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TQ7Jt.png" alt="Concatenated Plot" /></a></p>
<p>My expectation is that I get the R1 and R2 images above next to each other without the additional zooming. I tried to set the axis to resolve independently, but it did not change the output.</p>
<pre class="lang-py prettyprint-override"><code>(r1 | r2).resolve_axis(x='independent', y='independent')
</code></pre>
<p>Answers for how to do this in Vega-Lite, without the Altair abstraction, would also be useful.</p>
|
<python><vega-lite><altair>
|
2024-04-24 19:00:08
| 1
| 1,038
|
kgoodrick
|
78,380,647
| 3,224,196
|
Python script run from C# hangs at high memory usage
|
<p>I want my C# (.NET 8) app to start a Python script under Python 3.12. The Python script is fairly resource-intensive. When I run the script manually, all is well; it consumes memory up to a bit over 3 GB RAM, then completes. But when I run it in C# by spawning a new CMD window, it consumes just about 1 GB of memory and then the process CPU drops to 0% and it hangs indefinitely.</p>
<p>This is the code I'm using to first activate the virtual environment, then run the script and capture its output.</p>
<pre><code> var process = new Process
{
StartInfo = new ProcessStartInfo
{
FileName = "cmd",
Arguments = "",
UseShellExecute = false,
RedirectStandardInput = true,
RedirectStandardOutput = true,
RedirectStandardError = true,
CreateNoWindow = true,
WorkingDirectory = workingDirectory
}
};
process.StartInfo.EnvironmentVariables.Add("TRANSFORMERS_CACHE", _transformersCacheDirectory);
process.Start();
using var sw = process.StandardInput;
if (sw.BaseStream.CanWrite)
{
sw.WriteLine(".\\venv\\scripts\\activate");
sw.WriteLine($"python test.py");
sw.Flush();
sw.Close();
}
result = await process.StandardOutput.ReadToEndAsync();
await process.WaitForExitAsync();
var error = await process.StandardError.ReadToEndAsync();
</code></pre>
|
<python><c#><python-3.12>
|
2024-04-24 18:50:18
| 0
| 380
|
Martin
|
78,380,522
| 10,437,727
|
Handle async packages in Python Tasks inside of Airflow
|
<p>I'm dealing with an annoying error on my local Airflow setup.</p>
<p>I use a LocalExecutor, with a MySQL database (if that helps).</p>
<p>My task looks something like that:</p>
<pre class="lang-py prettyprint-override"><code>
@task()
def pdf_to_text_task() -> None:
    file_storage_instance = get_file_storage_instance_from_task()
    pdf_fpath = get_pdf_fpath(file_storage_instance)
    pages = parse_pdf(pdf_fpath)  # <--- this package breaks Airflow task
    pages = re.split(r"(?<!\|)---(?! \|)", pages[0])
    text = pdf_to_text(pages)
    filename = pdf_fpath.name.split(".")[0]
    save_file(json.dumps(text), f"(unknown).json")
</code></pre>
<p>And when using this package, I get this error:</p>
<pre><code>[2024-04-24, 20:20:17 CEST] {task_context_logger.py:104} ERROR - Executor reports task instance <TaskInstance: main.pdf_to_text_task 6f30866d-72b3-449b-8e74-7d4a4b99aaa1 [queued]> finished (failed) although the task says it's queued. (Info: None) Was the task killed externally?
</code></pre>
<p>According to the documentation of the package behind <code>parse_pdf</code>, it's an async first package. (It simply consists of a package that calls an API).</p>
<p>I'm very frustrated because there is no real root cause that could be causing this.</p>
<p>I'm only assuming that the aforementioned package is causing this issue because it's the only change.</p>
<p>What could be causing this error?</p>
<p>Thanks!</p>
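<p>One workaround I am considering (a sketch; <code>parse_pdf_async</code> is a stand-in for the package's async call): run the coroutine in a fresh event loop owned by the task, since an event loop or client session created at import time and inherited across the LocalExecutor's fork seems like a plausible culprit for workers dying without a traceback:</p>

```python
import asyncio

async def parse_pdf_async(path):
    # Stand-in for the async-first package's API call.
    await asyncio.sleep(0)
    return ["page one --- page two"]

def pdf_to_text_task(path):
    # Run the coroutine in a fresh event loop owned by this task, rather
    # than reusing anything created before the worker process forked.
    return asyncio.run(parse_pdf_async(path))
```

<p>Would wrapping the call like this inside the <code>@task()</code> body be the right way to use an async-first client under the LocalExecutor?</p>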
|
<python><airflow>
|
2024-04-24 18:23:22
| 1
| 1,760
|
Fares
|
78,380,193
| 7,726,586
|
FastAPI (uvicorn) + Docker ignores specified host
|
<p>I need to run a FastAPI in Docker on the remote machine and make it accessible for curl requests from any machine. Container builds and uvicorn starts, but when I try to do a curl request I get connection errors. I've already specified host 0.0.0.0 in the parameters, but experimenting with different ports I've noticed that port param seems to be ignored on uvicorn start. Probably that happens to the host param as well. I don't face this issue when running in the same way locally, at least I get <code>Uvicorn running on http://0.0.0.0:some_port (Press CTRL+C to quit)</code> in local logs.</p>
<p>Dockerfile</p>
<pre><code>ARG PYTHON_VERSION=3.11-slim-buster
FROM python:${PYTHON_VERSION} as python
WORKDIR /app
ADD . .
RUN pip install -r requirements.txt
CMD ["python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8501"]
</code></pre>
<p>docker-compose.yml</p>
<pre><code>version: '3.5'
services:
container-name:
container_name: container-name
build:
context: .
dockerfile: Dockerfile
env_file:
- ./env/demo.env
ports:
- "8501:8501"
</code></pre>
<p>docker compose down && docker-compose build --no-cache && docker compose up</p>
<pre><code>...
container-name | INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
...
</code></pre>
<p>What could be a problem and what can I try to resolve it?</p>
|
<python><docker><docker-compose><fastapi><uvicorn>
|
2024-04-24 17:19:37
| 1
| 534
|
maria
|
78,380,179
| 596,057
|
Opening QDockWidget Closes Previous QDockWidget on Ubuntu
|
<p>I wrote a PyQt5 program using QDockWidgets on Windows. Selecting items in a QList in the central widget causes a widget to be placed in the right docking area that allows me to edit that item and as I select different items in the QList the widget gets replaced. If I move the dock widget out into a floating widget or a different dock area and then select a new item then the right dock area gets a new widget and the dock widget I popped out remains. This is exactly the behavior I want.</p>
<p>I then ran this program on Ubuntu, and if I move the dock widget into a floating widget and then select a new item, the floating dock widget disappears. If I move the dock widget into a different dock and leave it there, or move it into a floating dock, then it will persist just like it did on Windows.</p>
|
<python><ubuntu><pyqt5><qt5>
|
2024-04-24 17:16:53
| 1
| 1,638
|
HahaHortness
|
78,380,077
| 16,383,578
|
WinError 206 when passing way too many command line arguments
|
<p>I am no fool and I know exactly what caused this issue and how to make it not pop up, by not using the command that caused it, but I would rather like to use the command, and have yet to find a workaround.</p>
<p>The problem is simple, and it is absolutely not about file paths longer than 255 characters and the like, and <code>"LongPathsEnabled"</code> is set to 1 in my <code>[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]</code> key.</p>
<p>I didn't set it, it was already set. The error has nothing to do with long filenames, and its message text is misleading:</p>
<pre><code>FileNotFoundError: [WinError 206] The filename or extension is too long
</code></pre>
<p>The problem is simple, I have downloaded 424 archives from <a href="http://www.nexusmods.com" rel="nofollow noreferrer">www.nexusmods.com</a> programmatically, and I am trying to extract them automatically, these archives have redundant nested folders and way too many junk files that I don't want to extract.</p>
<p>I have written efficient code in Python to extract files the way I like, asynchronously and without using 7z.exe. But for .7z files, unfortunately, I find 7z.exe to be the only effective tool.</p>
<p>However 7zip lacks official Python bindings and the only way to control exactly which files are extracted to where is passing command line arguments.</p>
<p>I use this syntax to extract selected files only:</p>
<pre><code>7z e -aoa "C:\blah bar\foo.7z" "-oD:\a b c\" "-i!a b c\1" "-i!a b c\2" "-i!a b c\3"
</code></pre>
<p>I have to add one <code>-i!</code> switch for every one of the files I want to include, which will make the command to be way too long, and I expected <code>FileNotFoundError</code> will be triggered, so I tested it, and indeed it was triggered.</p>
<p>I programmatically found the archive containing the most files, which has 9571 files in it.</p>
<p>I constructed a command in the following way, then <code>subprocess.run</code> it:</p>
<pre><code>cmd = ["7z", "e", "-aoa", longname, "-o"+longname[:-4]] + ["-i!" + name for name in list(arar.paths)]
</code></pre>
<p>And the aforementioned error is triggered.</p>
<p>I have determined the exact point of failure,</p>
<pre><code>cmd = ["7z", "e", "-aoa", longname, "-o"+longname[:-4]] + ["-i!" + name for name in list(arar.paths)[:388]]
</code></pre>
<p>Fails, but the following doesn't:</p>
<pre><code>cmd = ["7z", "e", "-aoa", longname, "-o"+longname[:-4]] + ["-i!" + name for name in list(arar.paths)[:387]]
</code></pre>
<pre><code>In [39]: cmd = ["7z", "e", "-aoa", longname, "-o"+longname[:-4]] + ["-i!" + name for name in list(arar.paths)[:388]]
In [40]: len("".join(cmd))
Out[40]: 31612
In [41]: len(" ".join(cmd))
Out[41]: 32004
In [42]: cmd = ["7z", "e", "-aoa", longname, "-o"+longname[:-4]] + ["-i!" + name for name in list(arar.paths)[:387]]
In [43]: len(" ".join(cmd))
Out[43]: 31919
In [44]: len("".join(cmd))
Out[44]: 31528
</code></pre>
<p>It seems that the command line is limited to roughly 32,000 characters per command (the documented Windows limit for a CreateProcess command line is 32,767 characters)...</p>
<p>I know I can just include whole folders and exclude some files and include all files of certain extensions... to optimize the command line length, but this is just too much work for little gain.</p>
<p>Plus I now know it is more efficient to extract whole 7z archives than extracting files individually, so I will just extract everything and then delete all the trash afterwards.</p>
<p>But this makes me wonder if there is a workaround to the character limit of commands; maybe one day I will actually need a command longer than 32,000 characters, say for machine learning or artificial intelligence?</p>
<p>So is there any workaround?</p>
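<p>The obvious mitigation I can think of is splitting one oversized invocation into several under-budget ones (7-Zip also appears to accept list files via <code>-i@listfile</code>, which would sidestep the limit entirely, though I have not tried that). A sketch of the splitting:</p>

```python
def chunk_args(base, extras, budget=32000):
    """Split one oversized command into several commands whose
    space-joined length stays under `budget` characters."""
    commands, current, length = [], list(base), sum(len(a) + 1 for a in base)
    for arg in extras:
        # Flush the current command when the next argument would overflow,
        # but always keep at least one extra argument per command.
        if length + len(arg) + 1 > budget and len(current) > len(base):
            commands.append(current)
            current, length = list(base), sum(len(a) + 1 for a in base)
        current.append(arg)
        length += len(arg) + 1
    commands.append(current)
    return commands
```

<p>Each chunk could then be handed to <code>subprocess.run</code> in turn, at the cost of reopening the archive once per chunk.</p>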
|
<python><windows>
|
2024-04-24 16:54:51
| 1
| 3,930
|
Ξένη Γήινος
|
78,379,995
| 308,827
|
Creating a custom colorbar in matplotlib
|
<p>How can I create a colorbar in matplotlib that looks like this:</p>
<p><a href="https://i.sstatic.net/EY2hV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EY2hV.png" alt="enter image description here" /></a></p>
<p>Here is what I tried:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
from matplotlib.cm import ScalarMappable
from matplotlib.colors import Normalize
# Define the custom colormap
colors = ['red', 'cyan', 'darkgreen']
cmap = LinearSegmentedColormap.from_list(
'custom_colormap',
[(0.0, colors[0]), (0.5 / 2.0, colors[0]),
(0.5 / 2.0, colors[1]), (1.5 / 2.0, colors[1]),
(1.5 / 2.0, colors[2]), (2.0 / 2.0, colors[2])]
)
# Create a scalar mappable object with the colormap
sm = ScalarMappable(norm=Normalize(vmin=3.5, vmax=4.5), cmap=cmap)
# Create the colorbar
plt.figure(figsize=(3, 1))
cb = plt.colorbar(sm, orientation='horizontal', ticks=[3.5, 4.5], extend='neither')
cb.set_label('')
</code></pre>
|
<python><matplotlib><colorbar>
|
2024-04-24 16:36:51
| 2
| 22,341
|
user308827
|
78,379,929
| 390,897
|
Broken Recursive Typing for Python
|
<p>I have a type alias that I call <code>point2d</code> and a related recursive type alias.</p>
<pre><code>point2d = tuple[float, float]
point2d_nested = (
    point2d
    | list["point2d_nested"]
    | tuple["point2d_nested", ...]
)
</code></pre>
<p>Now I want to write a function that will scale points in any nested structure by a factor. For example, <code>scale_nested((1,1), 2) == (2,2)</code> or <code>scale_nested([(1,1), (1,1)]) == [(2,2), (2,2)]</code>. Here's an implementation:</p>
<pre><code>def scale_nested(points: point2d_nested, factor: float) -> point2d_nested:
    """
    Scales a list of lists of points by a given factor.
    Weird things happen if you supply 3D points or if your point is not a tuple.
    """
    if not points:
        return points
    # We've bottomed out at a point
    if (
        isinstance(points, tuple)
        and len(points) == 2
        and isinstance(points[0], (int, float))
        and isinstance(points[1], (int, float))
    ):
        return points[0] * factor, points[1] * factor
    if isinstance(points, list):
        return [scale_nested(p, factor) for p in points]
    if isinstance(points, tuple):
        return tuple(
            scale_nested(p, factor) for p in points if not isinstance(p, (int, float))
        )
    raise ValueError(f"Something is an invalid type: {points}")
</code></pre>
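<p>Pyright's message itself hints at one possible fix: <code>list</code> is invariant, so a sketch of the alias rewritten around the covariant <code>Sequence</code> (untested against the full codebase; note a plain <code>tuple</code> is itself a <code>Sequence</code>, so the union stays sound at runtime) might look like:</p>

```python
from collections.abc import Sequence
from typing import Union

point2d = tuple[float, float]
# Sequence is covariant, so list[tuple[tuple[float, float]]] is accepted
point2d_nested = Union[point2d, Sequence["point2d_nested"]]

def scale_nested(points, factor):
    # runtime behaviour is unchanged; only the annotations needed loosening
    if isinstance(points, tuple) and len(points) == 2 and all(
        isinstance(v, (int, float)) for v in points
    ):
        return (points[0] * factor, points[1] * factor)
    if isinstance(points, list):
        return [scale_nested(p, factor) for p in points]
    if isinstance(points, tuple):
        return tuple(scale_nested(p, factor) for p in points)
    raise ValueError(f"invalid type: {points!r}")
```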
<p>Unfortunately, this doesn't work:</p>
<pre><code>knots = [((1.0, 2.0),)]
scale_nested(knots, 2) # <- knots look red
</code></pre>
<pre><code>Argument of type "list[tuple[tuple[float, float]]]" cannot be assigned to parameter "points" of type "point2d_nested" in function "scale_nested"
Type "list[tuple[tuple[float, float]]]" cannot be assigned to type "point2d_nested"
"list[tuple[tuple[float, float]]]" is incompatible with "point2d"
"list[tuple[tuple[float, float]]]" is incompatible with "list[point2d_nested]"
Type parameter "_T@list" is invariant, but "tuple[tuple[float, float]]" is not the same as "point2d_nested"
Consider switching from "list" to "Sequence" which is covariant
"list[tuple[tuple[float, float]]]" is incompatible with "tuple[point2d_nested, ...]"PylancereportGeneralTypeIssues
(variable) knots: list[tuple[tuple[float, float]]]
</code></pre>
|
<python><python-typing><pyright>
|
2024-04-24 16:24:20
| 2
| 33,893
|
fny
|
78,379,820
| 1,309,503
|
LLM Studio fail to download model with error : unable to get local issuer certificate
|
<p>In LLM studio, when I try to download any model, I am facing following error:</p>
<p>Download Failed: unable to get local issuer certificate</p>
<p><a href="https://i.sstatic.net/WkMZI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WkMZI.png" alt="enter image description here" /></a></p>
|
<javascript><python><machine-learning>
|
2024-04-24 16:03:26
| 4
| 1,735
|
divyang4481
|
78,379,777
| 16,363,897
|
Fill nan based on other rows/columns and other dataframe
|
<p>I have the following "covar" dataframe (it's a covariance matrix), where I have the same items both as index and as column names.</p>
<pre><code>covar_data = {
'a': [0.04, np.nan, 0.03, np.nan, -0.04],
'XY': [np.nan, np.nan, np.nan, np.nan, np.nan],
'b': [0.03, np.nan, 0.09, np.nan, 0.00],
'YZ': [np.nan, np.nan, np.nan, np.nan, np.nan],
'c': [-0.04, np.nan, 0.00, np.nan, 0.16]
}
covar_index = ['a', 'XY', 'b', 'YZ', 'c']
covar = pd.DataFrame(covar_data, index=covar_index)
a XY b YZ c
a 0.04 NaN 0.03 NaN -0.04
XY NaN NaN NaN NaN NaN
b 0.03 NaN 0.09 NaN 0.00
YZ NaN NaN NaN NaN NaN
c -0.04 NaN 0.00 NaN 0.16
</code></pre>
<p>Some items ("XY" and "YZ" in this example, but many more in the real dataset) are clones of other items ("XY" is clone of "a" and "YZ" is clone of "b").
I need to:</p>
<ul>
<li>fill each clone column with the column of the cloned item</li>
<li>fill each clone row with the row of the cloned item.</li>
</ul>
<p>The fill can be with the same or the opposite sign.</p>
<p>The missing diagonal values should be the same as the diagonal cell of the corresponding cloned item, always with the same sign. So ["XY":"XY"] = ["a":"a"] and ["YZ":"YZ"] = ["b":"b"]</p>
<p>I have another dataframe ("df") where I have the clone, the item it clones and the sign ("1" means same sign, "-1" means opposite sign).</p>
<pre><code>clone_data = {
'cloned_item': ['a', 'b'],
'sign': [1, -1]
}
clone_index = ['XY', 'YZ']
df = pd.DataFrame(clone_data, index=clone_index)
cloned_item sign
clone
XY a 1
YZ b -1
</code></pre>
<p>This is the expected output:</p>
<pre><code> a XY b YZ c
a 0.04 0.04 0.03 -0.03 -0.04
XY 0.04 0.04 0.03 -0.03 -0.04
b 0.03 0.03 0.09 -0.09 0.00
YZ -0.03 -0.03 -0.09 0.09 0.00
c -0.04 -0.04 0.00 0.00 0.16
</code></pre>
<p>As you can see, "XY" column/row is the same as "a" column/row, with the same sign.
"YZ" column/row is the same as "b" column/row, but with the opposite sign.
Diagonal value for "XY" and "YZ" are the same as those for "a" and "b".</p>
<p>Any ideas? Thanks</p>
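<p>For what it's worth, one sketch that reproduces the expected output on this sample: fill the clone column first, then the clone row (which can then reuse the just-filled column), then pin the diagonal:</p>

```python
import numpy as np
import pandas as pd

covar = pd.DataFrame({
    'a':  [0.04, np.nan, 0.03, np.nan, -0.04],
    'XY': [np.nan] * 5,
    'b':  [0.03, np.nan, 0.09, np.nan, 0.00],
    'YZ': [np.nan] * 5,
    'c':  [-0.04, np.nan, 0.00, np.nan, 0.16],
}, index=['a', 'XY', 'b', 'YZ', 'c'])
clones = pd.DataFrame({'cloned_item': ['a', 'b'], 'sign': [1, -1]},
                      index=['XY', 'YZ'])

out = covar.copy()
for clone, (src, sign) in clones.iterrows():
    out[clone] = out[src] * sign                  # fill the clone column
    out.loc[clone] = out.loc[src] * sign          # fill the clone row
    out.loc[clone, clone] = covar.loc[src, src]   # diagonal keeps the sign
```

<p>The loop runs once per clone, not per cell, so it should stay cheap even with many clones.</p>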
|
<python><pandas><numpy>
|
2024-04-24 15:57:19
| 2
| 842
|
younggotti
|
78,379,670
| 21,286,804
|
"AssertionError: settings.yaml file not found" in pynest
|
<p>I am facing the following issue:</p>
<pre><code>File "/home/rugain/Documents/repos/sap_integration_api/.venv/lib/python3.12/site-packages/nest/cli/click_handlers.py", line 63, in create_nest_module
db_type, is_async = get_metadata()
^^^^^^^^^^^^^^
File "/home/rugain/Documents/repos/sap_integration_api/.venv/lib/python3.12/site-packages/nest/cli/click_handlers.py", line 8, in get_metadata
assert setting_path.exists(), "settings.yaml file not found"
AssertionError: settings.yaml file not found
</code></pre>
<p>when running the following command:</p>
<pre><code>pynest g module -n <any-module-name>
</code></pre>
|
<python><fastapi>
|
2024-04-24 15:38:39
| 1
| 427
|
Magaren
|
78,379,599
| 5,423,080
|
Unique legend in Seaborn and Matplotlib subplots
|
<p>I am analysing some data and plotting three violin plots on two subplots using <code>matplotlib</code> and <code>seaborn</code>.</p>
<p>My problem is with the legends, I would like to have just one legend for the subplot outside them.</p>
<p>A MWE is:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.DataFrame({"A": np.random.uniform(size=100), "B": np.random.uniform(size=100), "Class": np.random.randint(0, 4, 100)})
fig, axs = plt.subplots(1, 2, sharey=True)
sns.violinplot(ax=axs[0], data=df[["A", "Class"]], x="Class", y="A")
axs[0].legend(["A"])
sns.violinplot(ax=axs[1], data=df[["B", "Class"]], x="Class", y="B", color="r")
axs[1].legend(["B"])
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/ZWQ0Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZWQ0Q.png" alt="violin plot example" /></a></p>
<p>This example puts one legend on each subplot; I want both of them together on the right side of the plot.</p>
<p>I tried to extract the handles and labels with <code>lines_labels = [ax.get_legend_handles_labels() for ax in fig.axes]</code>, but this returns a list of empty tuples.</p>
<p>I also tried adding a <code>label</code> option to the plots and drawing the legend with <code>plt.legend(loc="center left", bbox_to_anchor=(1.0, 0.5))</code>, but I get the same result as the attached plot, with the same legend repeated 4 times.</p>
<p>Any suggestion?</p>
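<p>For what it's worth, <code>get_legend_handles_labels()</code> comes back empty because violins are plain patches with no labels attached; one workaround is to build proxy artists and hand them to a single figure-level legend. A sketch with bare matplotlib patches (colors assumed from the example):</p>

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from matplotlib.patches import Patch

fig, axs = plt.subplots(1, 2, sharey=True)
# ... violin plots drawn on axs[0] / axs[1] as in the question ...

# proxy artists stand in for the unlabeled violin patches
handles = [Patch(facecolor="C0", label="A"), Patch(facecolor="r", label="B")]
fig.legend(handles=handles, loc="center left", bbox_to_anchor=(1.0, 0.5))
```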
|
<python><matplotlib><seaborn>
|
2024-04-24 15:27:17
| 1
| 412
|
cicciodevoto
|
78,379,572
| 2,437,514
|
Using Pydantic with typing.Protocol
|
<p>I'm attempting to use typing protocols and pydantic together and am running into some issues. I expect probably my understanding of how to properly use the two of them together is lacking.</p>
<p>This pydantic code is producing an error:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Protocol, Generic, runtime_checkable
from pydantic import BaseModel
D = TypeVar('D')
@runtime_checkable
class SupportsDimensionality(Protocol[D]):
dimensionality: D
class ValueModel(BaseModel, Generic[D]):
value: SupportsDimensionality[D]
</code></pre>
<p>Error text:</p>
<blockquote>
<p>pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for <strong>main</strong>.SupportsDimensionality[~D]. Set <code>arbitrary_types_allowed=True</code> in the model_config to ignore this error or implement <code>__get_pydantic_core_schema__</code> on your type to fully support it.</p>
</blockquote>
<p>I've been trying to figure out why this happens and how to fix it but need some help.</p>
|
<python><python-typing><pydantic>
|
2024-04-24 15:22:14
| 1
| 45,611
|
Rick
|
78,379,570
| 6,223,748
|
Quick way for checking if two objects are deep copies of each other
|
<p>Assume I have two objects of a certain class and want to make sure that they are deep copies and not just shallow copies.</p>
<p>The code below checks if <code>is</code> can be used to determine if they are deep copies.</p>
<pre><code>import copy
class MyClass():
def __init__(self, a, b, c):
self.a = a
self.b = b
self.c = c
mylist = [1,2,3,4]
obj1 = MyClass(1,2,mylist)
obj2 = copy.copy(obj1)
print("obj2 is obj1: {}, obj2.c is obj1.c: {}".format(obj2 is obj1, obj2.c is obj1.c))
obj3 = copy.deepcopy(obj1)
print("obj3 is obj1: {}, obj3.c is obj1.c: {}".format(obj3 is obj1, obj3.c is obj1.c))
</code></pre>
<p>The output is as expected:</p>
<pre><code>"obj2 is obj1: False, obj2.c is obj1.c: True"
"obj3 is obj1: False, obj3.c is obj1.c: False"
</code></pre>
<p>We see that <code>is</code> is only suitable for checking for deep copies if we use it to compare fields of the objects, not just the objects themselves.</p>
<p>My question is if there is an easier way to check for this. I would like to do this check for instances of different classes, i.e. it should work without having to specify what the fields of the objects look like. I would like to end up with a function that looks somewhat like this:</p>
<pre><code>def is_deep_copy(obj1, obj2):
if "[stuff to determine if they are deep copies]":
return True
else:
return False
</code></pre>
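<p>Not aware of a built-in for this, but a sketch of a recursive identity check might look like the following. <code>shares_mutable</code> is a hypothetical helper that only walks dicts, lists/tuples, and instance <code>__dict__</code>s, with no cycle handling; immutable leaves are allowed to be shared, since <code>deepcopy</code> itself shares them:</p>

```python
def shares_mutable(a, b):
    """Hypothetical helper: True if a and b share any mutable sub-object."""
    immutable = (int, float, complex, str, bytes, bool, frozenset, type(None))
    if isinstance(a, immutable) or isinstance(b, immutable):
        return False  # deepcopy is allowed to share immutable leaves
    if isinstance(a, tuple) and isinstance(b, tuple):
        # tuples are immutable containers: only their elements matter
        return any(shares_mutable(x, y) for x, y in zip(a, b))
    if a is b:
        return True
    if isinstance(a, dict) and isinstance(b, dict):
        return any(shares_mutable(a[k], b[k]) for k in a.keys() & b.keys())
    if isinstance(a, list) and isinstance(b, list):
        return any(shares_mutable(x, y) for x, y in zip(a, b))
    if hasattr(a, '__dict__') and hasattr(b, '__dict__'):
        return shares_mutable(vars(a), vars(b))
    return False

def is_deep_copy(obj1, obj2):
    return (type(obj1) is type(obj2) and obj1 is not obj2
            and not shares_mutable(obj1, obj2))
```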
|
<python><object><copy>
|
2024-04-24 15:21:53
| 0
| 351
|
Dave
|
78,379,296
| 14,838,954
|
How to Create and Visualize a Directed Weighted Graph with Parallel Edges and Different Weights Using Python?
|
<p>I'm working on a project where I need to create a directed weighted graph in Python that allows parallel edges with different weights between nodes. I am using the <a href="https://networkx.org/documentation/stable/index.html" rel="nofollow noreferrer">networkx</a> library and <code>Matplotlib</code> for visualization.</p>
<p>My goal is to:</p>
<ul>
<li>Create a directed graph with parallel edges (multiple edges between the same nodes).</li>
<li>Assign random weights to these edges.</li>
<li>Visualize the graph with edge labels showing the weights.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import random
import networkx as nx
import matplotlib.pyplot as plt
def create_graph(n_nodes, alpha = 0.5):
G = nx.MultiDiGraph()
G.add_nodes_from(range(n_nodes))
for i in range(n_nodes):
for j in range(i+1,n_nodes):
if random.random() < alpha:
weight=random.randint(1,10)
G.add_edge(i, j, weight=weight)
if random.random() < alpha:
weight=random.randint(1,10)
G.add_edge(j, i, weight=weight)
return G
def display_graph(G):
pos = nx.spring_layout(G)
weight_labels = nx.get_edge_attributes(G, 'weight')
nx.draw(G, pos, with_labels=True, node_color='skyblue', edge_color='gray', node_size=700)
nx.draw_networkx_edge_labels(G, pos, edge_labels=weight_labels)
plt.show()
n_nodes = 5
G = create_graph(n_nodes, alpha = 0.5)
display_graph(G)
</code></pre>
<p>However, when I try to visualize the graph with edge labels, I got this error message:</p>
<pre><code>networkx.exception.NetworkXError: draw_networkx_edge_labels does not support multiedges.
</code></pre>
<p>This error occurs when I try to display edge labels for a MultiDiGraph, and it seems like the <code>draw_networkx_edge_labels</code> function doesn't support parallel edges.</p>
<ol>
<li>How can I resolve this error and properly visualize a directed weighted graph with parallel edges?</li>
<li>Is there a better way to create and visualize directed graphs with parallel edges in <code>networkx</code>?</li>
</ol>
<p>I'd appreciate any guidance or examples to help me achieve this. Thank you in advance!</p>
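<p>For what it's worth, since <code>draw_networkx_edge_labels</code> refuses multiedges, one workaround is to curve the two directions apart with <code>connectionstyle</code> and place the weight labels by hand with <code>annotate</code>; a sketch on a two-node toy graph:</p>

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import networkx as nx

G = nx.MultiDiGraph()
G.add_edge(0, 1, weight=3)
G.add_edge(1, 0, weight=7)  # parallel edge in the opposite direction

pos = nx.spring_layout(G, seed=42)
fig, ax = plt.subplots()
nx.draw_networkx_nodes(G, pos, ax=ax, node_color="skyblue", node_size=700)
nx.draw_networkx_labels(G, pos, ax=ax)
for rad, (u, v, d) in zip((0.15, -0.15), G.edges(data=True)):
    # a different arc radius per direction keeps the arrows from overlapping
    nx.draw_networkx_edges(G, pos, edgelist=[(u, v)], ax=ax,
                           connectionstyle=f"arc3,rad={rad}")
    mx = (pos[u][0] + pos[v][0]) / 2
    my = (pos[u][1] + pos[v][1]) / 2
    ax.annotate(str(d["weight"]), (mx, my + rad), ha="center")
```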
|
<python><networkx><directed-graph><graph-visualization><weighted-graph>
|
2024-04-24 14:40:00
| 2
| 485
|
Nirmal Sankalana
|
78,379,126
| 1,873,403
|
How can I convert a discrete coordinate map into a data flow diagram in python?
|
<p>I have some data stored in a database with the following schema</p>
<pre><code>table1, field1, table2, field2
</code></pre>
<p>This shows how table 1 relates to table 2 via the field value. I'd like to turn this into a diagram that shows how each of these tables is related to the others (i.e. an ER diagram). For example, using the above record, you would have two boxes, one labeled table1, the other labeled table2, connected by two lines labeled field1 and field2. Is there any Python package that can accomplish this?</p>
<p>Thanks</p>
|
<python><entity-relationship><diagram>
|
2024-04-24 14:15:42
| 0
| 1,180
|
Brad Davis
|
78,379,097
| 7,803,545
|
Regular expression look ahead anchor with multiple match
|
<p>I am using regular expressions in Python 3.11 (because it allows the <code>(?>...)</code> atomic-group pattern, <a href="https://docs.python.org/3/library/re.html" rel="nofollow noreferrer">https://docs.python.org/3/library/re.html</a>) to transform the below string into a dictionary by iterating over matches:</p>
<pre class="lang-py prettyprint-override"><code>string = '''Latitude (degrees): 4010.44 Longitude (degrees): 58.000 Radiation database: year month H(h)_m 2005 Jan 57.77 2005 Feb 77.76 2005 Mar 120.58 H(h)_m: Irradiation plane (kWh/m2/mo)'''
for match in re.finditer(r'(?P<key>(?>[A-Z][ a-z\_\(\)]*))\: *(?P<value>.+?)(?: |$)', string):
# Key is the short pattern before ":" starting with a uppercase letter
# Value must be the remaining, after the ": " and before the next key.
print(match[1], ":", match[2])
</code></pre>
<p>I haven't been able to get it to return:</p>
<pre><code>Latitude (degrees): 4010.44
Longitude (degrees): 58.000
Radiation database: year month H(h)_m 2005 Jan 57.77 2005 Feb 77.76 2005 Mar 120.58
H(h)_m: Irradiation plane (kWh/m2/mo)
</code></pre>
<p>I know this is because of the lazy <code>(?P<value>.+?)</code> match pattern; but after removing the <code>?</code>, the <code><value></code> group also captures some unintended <code><key></code> text.</p>
<p>How can I make the <code><value></code> group match as much as possible, yet stop before the next <code><key></code>?</p>
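<p>For what it's worth, a lookahead that recognises the <em>next</em> key (uppercase start, then word/space/paren characters, then a colon) seems to reproduce the wanted output on this exact string; a sketch, with no claim that it generalises:</p>

```python
import re

string = ('Latitude (degrees): 4010.44 Longitude (degrees): 58.000 '
          'Radiation database: year month H(h)_m 2005 Jan 57.77 2005 Feb '
          '77.76 2005 Mar 120.58 H(h)_m: Irradiation plane (kWh/m2/mo)')

# the value grows lazily until the next "<Key>:" (or end of string) is ahead;
# the dots in "57.77" etc. keep mid-value words from looking like keys
pattern = re.compile(
    r'(?P<key>[A-Z][\w ()]*?):\s*'
    r'(?P<value>.*?)'
    r'(?=\s[A-Z][\w ()]*?:|$)'
)
result = {m['key']: m['value'] for m in pattern.finditer(string)}
```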
|
<python><regex><python-3.11>
|
2024-04-24 14:11:05
| 1
| 896
|
hildogjr
|
78,379,017
| 3,258,600
|
Remove selected libraries from the Pip Cache
|
<p>I have a library in development using a private PyPi server that I am trying to test with other python modules without having to update the version number after making changes.</p>
<p>My development environment uses tox, pyproject.toml, and setuptools for build.</p>
<p>After making a change to the upstream library, I try to clear the old files from the cache using: <code>.tox/py38/bin/pip cache remove my_lib</code>. Pip responds with <code>Files removed: 19</code>. So far so good. I then rerun tox, but it fails with this error:</p>
<pre><code>ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
</code></pre>
<p>UPDATE: My requirements are specified in setup.cfg with no hashes specified. There is no requirements file.</p>
<p>I have a few questions:</p>
<ol>
<li>How do I clear a single library out of the pip cache, so that I can avoid this error when rebuilding? I don't want to clear my entire cache.</li>
<li>Is there any way to force Pip to pull changes even if hashes do not match?</li>
<li>Is there any way to force Pip to check hashes even if files already exists in the cache?</li>
</ol>
|
<python><pip>
|
2024-04-24 13:58:52
| 1
| 12,963
|
kellanburket
|
78,378,995
| 10,425,150
|
Save matplotlib as mp4 in python
|
<pre><code>from matplotlib import pyplot as plt
import numpy as np
import matplotlib.animation as animation
fig = plt.figure()
axis = plt.axes(xlim=(0, 4), ylim=(-1.5, 1.5))
line, = axis.plot([], [], lw=3)
def animate(frame_number):
x = np.linspace(0, 4, 1000)
y = np.sin(2 * np.pi * (x - 0.01 * frame_number))
line.set_data(x, y)
line.set_color('green')
return line,
anim = animation.FuncAnimation(fig, animate, frames=100,
interval=20, blit=True)
fig.suptitle('Sine wave plot', fontsize=14)
anim.save("animation.gif", dpi=300, writer=animation.PillowWriter(fps=25)) # works fine
anim.save("animation.mp4", writer = animation.FFMpegWriter(fps=25))# doesn't work
</code></pre>
<p>I can save the above plot as .gif without any problems.</p>
<p>But when I'm trying to save it as .mp4 I get the following error:</p>
<pre><code>\AppData\Local\Programs\Python\Python312\Lib\site-packages\matplotlib\animation.py:240, in AbstractMovieWriter.saving(self, fig, outfile, dpi, *args, **kwargs)
...
1553 self._close_pipe_fds(p2cread, p2cwrite,
1554 c2pread, c2pwrite,
1555 errread, errwrite)
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
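<p>For what it's worth, <code>FileNotFoundError: [WinError 2]</code> at this point usually means the <code>ffmpeg</code> binary itself is missing from <code>PATH</code> (the <code>PillowWriter</code> needs no external program, which is why the GIF works); a quick check, with the rcParams line showing a hypothetical install path:</p>

```python
import matplotlib
import matplotlib.animation as animation

# True only if matplotlib can actually spawn the ffmpeg executable
print(animation.FFMpegWriter.isAvailable())

# if ffmpeg is installed but not on PATH, point matplotlib at it explicitly
# (the path below is a hypothetical example)
# matplotlib.rcParams['animation.ffmpeg_path'] = r'C:\ffmpeg\bin\ffmpeg.exe'
```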
|
<python><matplotlib><animation>
|
2024-04-24 13:55:26
| 2
| 1,051
|
Gооd_Mаn
|
78,378,829
| 2,475,195
|
PerformanceWarning: DataFrame is highly fragmented when adding more columns
|
<p>I am getting this warning when inserting new columns in a dataframe that are shifted copies of existing columns. How can I rewrite this code so as to avoid the warning? One solution I found was copying the whole dataframe after every insertion, but that seems inefficient.</p>
<pre><code>data = {str(i):[pow(k, i) for k in range(1000)] for i in range(1, 6)}
df = pd.DataFrame.from_dict(data)
for col in df.columns:
for offset in range(1, 30):
df[f'{col}-{offset}'] = df[col].shift(offset)
# df = df.copy() # solved the problem, but likely not best solution
</code></pre>
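<p>For what it's worth, building all the shifted columns first and attaching them with a single <code>pd.concat</code> avoids the repeated insertions that trigger the warning; a sketch:</p>

```python
import pandas as pd

data = {str(i): [pow(k, i) for k in range(1000)] for i in range(1, 6)}
df = pd.DataFrame.from_dict(data)

# build every shifted column up front, then concatenate once
shifted = {f'{col}-{offset}': df[col].shift(offset)
           for col in df.columns
           for offset in range(1, 30)}
df = pd.concat([df, pd.DataFrame(shifted)], axis=1)
```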
|
<python><pandas><dataframe>
|
2024-04-24 13:29:31
| 3
| 4,355
|
Baron Yugovich
|
78,378,811
| 12,081,269
|
Is there any way to optimize my function for district matching using Python's GeoPandas and Pandas?
|
<p>I have a table with polygons and district names; I also have data on purchases with exact longitude and latitude. I wrote a function that checks each coordinate pair against every polygon and assigns the matching district name to the purchase. The problem is that it runs very slowly due to the lack of vectorization and the nested for-loops. How can I optimize it so it can digest 10+ million rows in less time?</p>
<pre><code>def get_district_name(geo_df: pd.DataFrame, ship_df: pd.DataFrame, col_name: str, frac: int=0.65) -> pd.DataFrame:
sample_ship = ship_df.sample(frac=frac, replace=False, random_state=42).reset_index(drop=True)
sample_ship['municipal_district_name'] = ''
for i in tqdm(range(len(sample_ship))):
point = shapely.geometry.Point(sample_ship['address_longitude'][i], sample_ship['address_latitude'][i])
for j in range(len(geo_df)):
if point.within(geo_df.geometry[j]):
sample_ship['municipal_district_name'][i] = geo_df[col_name][j]
continue
return sample_ship
</code></pre>
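<p>For what it's worth, the usual fix is a spatial join (<code>geopandas.sjoin</code>) instead of the nested loops; the same idea can be sketched with shapely 2.x's vectorised <code>STRtree</code> query (the toy polygons below are assumptions, not the real districts):</p>

```python
from shapely.geometry import Point, Polygon
from shapely.strtree import STRtree

# two toy "districts" (assumed for the sketch)
districts = [Polygon([(0, 0), (1, 0), (1, 1), (0, 1)]),
             Polygon([(1, 0), (2, 0), (2, 1), (1, 1)])]
names = ['west', 'east']

points = [Point(0.5, 0.5), Point(1.5, 0.2)]
tree = STRtree(districts)
# one vectorised point-in-polygon query instead of
# len(points) * len(districts) Python-level tests
pt_idx, poly_idx = tree.query(points, predicate='within')
matched = {int(i): names[j] for i, j in zip(pt_idx, poly_idx)}
```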
|
<python><pandas><geopandas>
|
2024-04-24 13:26:50
| 2
| 897
|
rg4s
|
78,378,769
| 832,490
|
How to authenticate on msgraph.GraphServiceClient?
|
<p>There is no documentation on how to do this, which I find unacceptable.</p>
<pre><code>from msal import ConfidentialClientApplication
from msgraph import GraphServiceClient
client_id = ''
client_secret = ''
tenant_id = ''
authority = f'https://login.microsoftonline.com/{tenant_id}'
scopes = ['https://graph.microsoft.com/.default']
app = ConfidentialClientApplication(
client_id,
authority=authority,
client_credential=client_secret,
)
response = app.acquire_token_for_client(scopes)
graph_client = GraphServiceClient(
credentials=response,
scopes=scopes
)
await graph_client.users.get()
</code></pre>
<pre><code>/usr/local/lib/python3.10/dist-packages/kiota_authentication_azure/azure_identity_access_token_provider.py in get_authorization_token(self, uri, additional_authentication_context)
101 )
102 else:
--> 103 result = self._credentials.get_token(*self._scopes, claims=decoded_claim)
104
105 if inspect.isawaitable(result):
> AttributeError: 'dict' object has no attribute 'get_token'
</code></pre>
<p>Analyzing the stack, you can see that the object passed as <code>credentials</code> to the <code>msgraph</code> client is not what it expects; <code>acquire_token_for_client</code> returns a dictionary, but <code>GraphServiceClient</code> expects an object with a <code>get_token</code> method.</p>
<p>How can this be resolved?</p>
|
<python><azure><azure-active-directory><microsoft-graph-api><office365>
|
2024-04-24 13:20:23
| 1
| 1,009
|
Rodrigo
|
78,378,738
| 5,774,969
|
Saving tensorflow dataset extremely slow after applying filter
|
<p>Relatively new to tensorflow here, and I am facing an issue where I have not yet managed to find a good answer through searching. So here goes:</p>
<p>I am trying to understand why applying a filter function to my tensorflow dataset suddenly makes the time it takes to write the dataset to disk explode. The time jumps from around 1 min to nearly two hours.</p>
<p>My code is Python 3.8, Tensorflow 2.11.1.</p>
<pre><code>import tensorflow as tf
from skimage import filters
#Filtering function
def meijering_filter(x):
filtered = filters.meijering(x)
return filtered
#Import training data
training_dataset = tf.keras.utils.image_dataset_from_directory(
"path_to_training_dataset",
labels=None,
batch_size=5,
image_size=(480, 640),
shuffle=True,
seed=42,
subset='training',
validation_split=0.2,
color_mode='grayscale'
)
normalization_layer = tf.keras.layers.Rescaling(1./255)
normalized_train_dataset = training_dataset.map(lambda x: (normalization_layer(x)))
feat_training_dataset = normalized_train_dataset.map(lambda x: tf.numpy_function(meijering_filter, [x], tf.float32))
#Reshaping data, since the numpy_function() returns tensors with an unknown shape
data_reshape = tf.keras.Sequential([tf.keras.layers.Input(shape=(480, 640, 1))])
feat_training_dataset = feat_training_dataset.map(lambda x: (data_reshape(x)))
#Saving tensorflow dataset for later consumption
feat_training_dataset.save("save_path_on_disk")
</code></pre>
<p>I am aware that the <code>save()</code> requires at least one compute of the dataset, and that the meijering filter is somewhat compute intensive. Still, timing this based on a <code>take(1)</code> on my dataset, I expect that computation to take a few minutes (the <code>take(1)</code> I timed to 0.12s, and my entire dataset is only around 1500 images).</p>
<p>I also tried not applying the <code>data_reshape()</code>, but that did not make a notable difference.</p>
<p>Can someone help me understand why the execution of the <code>save()</code> takes nearly two hours in the code above, and is there a way to remedy this?</p>
|
<python><tensorflow><scikit-image>
|
2024-04-24 13:15:18
| 0
| 502
|
AstroAT
|
78,378,626
| 12,415,855
|
Python-pptx: copy a shape from one PPT to another?
|
<p>I am trying to copy a shape from a slide in one PPTX to a slide in another PPTX using the following code:</p>
<pre><code>from pptx import Presentation
from pptx.dml.color import RGBColor
import os
import sys
if __name__ == '__main__':
path = os.path.abspath(os.path.dirname(sys.argv[0]))
fn = os.path.join(path, "templ2.pptx")
shapesPPT = Presentation(fn)
shapesIcons = shapesPPT.slides[0].shapes
fn = os.path.join(path, "templ.pptx")
ppt = Presentation(fn)
for slide in ppt.slides:
for shapeIDX, shape in enumerate(slide.shapes):
if shapeIDX == 7:
wColor = RGBColor(102, 255, 102)
# ppt.slides[0].shapes[7].fill.fore_color.rgb = wColor
ppt.slides[0].shapes[7] = shapesIcons[15]
fnPPT = os.path.join(path, "OUTPUT", f"output.pptx")
ppt.save(fnPPT)
</code></pre>
<p>But with that I only get this error:</p>
<pre><code>(pptx) C:\DEV\Fiverr\TRY\userlutionsgmbh>python test.py
Traceback (most recent call last):
File "C:\DEV\Fiverr\TRY\userlutionsgmbh\test.py", line 22, in <module>
ppt.slides[0].shapes[7] = shapesIcons[15]
TypeError: 'SlideShapes' object does not support item assignment
</code></pre>
<p>When I try to change the fill color (see the commented line above) it works fine. But I want to copy a complete shape from one slide/PPTX to another.</p>
<p>How can I do that?</p>
|
<python><python-pptx>
|
2024-04-24 12:55:20
| 0
| 1,515
|
Rapid1898
|
78,378,549
| 6,000,414
|
botocore.eventstream.InvalidHeadersLength
|
<p>I am trying to invoke Bedrock agent using boto3, and I am getting the following error:</p>
<pre><code>botocore.eventstream.InvalidHeadersLength: Header length of 1953527156 exceeded the maximum of 131072
</code></pre>
<p>Here is the code I am using:</p>
<pre><code>import boto3
import uuid
bedrock = boto3.client('bedrock-agent-runtime',
aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
region_name=REGION
)
response = bedrock.invoke_agent(
agentId=AGENT_ID,
sessionId=str(uuid.uuid4()),
inputText=INPUT_TEXT,
agentAliasId="",
enableTrace=False,
endSession=False
)
stream = response.get("completion")
if stream:
for event in stream:
chunk = event.get('chunk')
print(f"Agent Chunks: {chunk}")
</code></pre>
<p>What could be the issue?</p>
|
<python><amazon-web-services><boto3><botocore><amazon-bedrock>
|
2024-04-24 12:43:18
| 1
| 901
|
Alexey Zelenkin
|
78,378,542
| 5,568,409
|
Why does the LaTeX instruction \Large not work in a Matplotlib label?
|
<p>In the small program below, I don't understand why <code>label = r"$\tilde{Q}$"</code> works well while <code>label = r"$\Large{Q}$"</code> doesn't:</p>
<pre><code>%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np  # needed for np.linspace below
x = np.linspace(-3, +3, 100)
y = x**2
fig, ax = plt.subplots(figsize=(4, 2))
ax.plot(x, y, label = r"$\tilde{Q}$")
#ax.plot(x, y, label = r"$\Large{Q}$")
ax.legend(loc='best')
plt.show()
</code></pre>
<p>The error generated by <code>label = r"$\Large{Q}$"</code> is incredibly verbose and I couldn't find a simple explanation.</p>
<p>Any suggestion appreciated as to how to apply <code>\Large</code> to a single letter in a label.</p>
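<p>For what it's worth, mathtext (Matplotlib's built-in TeX subset) simply does not implement sizing commands like <code>\Large</code>; those only work with a full LaTeX install via <code>rcParams['text.usetex'] = True</code>. Staying within mathtext, a hedged workaround is scaling the text object itself:</p>

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-3, 3, 100)
fig, ax = plt.subplots(figsize=(4, 2))
# mathtext has no \Large; scale the whole label via the legend's fontsize,
# or enable a real LaTeX engine with rcParams['text.usetex'] = True
ax.plot(x, x**2, label=r"$Q$")
ax.legend(loc="best", fontsize=16)
```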
|
<python><matplotlib><latex>
|
2024-04-24 12:41:59
| 0
| 1,216
|
Andrew
|
78,378,373
| 9,925,065
|
numpy add array of constants to array of matrixes
|
<p>Consider a numpy array <code>X</code> with <code>m</code> elements. Each element is itself a numpy array of a generic (but, more importantly, fixed) shape <code>(n0, n1, ..., nk)</code>, so that <code>X</code>'s shape is <code>(m, n0, n1, ..., nk)</code>.
Now, take a list of <code>m</code> (different) scalars, say something like this:</p>
<pre><code>c = np.arange(m)
</code></pre>
<p><code>c</code> and <code>X</code> have the same number of elements: <code>c</code> is an array of <code>m</code> scalars, while <code>X</code> is an array of <code>m</code> matrices.</p>
<p>I would like to add the constant <code>c[i]</code> to <em>every</em> element of <code>X[i]</code>:</p>
<pre><code>for i in range(m):
X[i] += c[i]
</code></pre>
<p>Is it possible to perform this calculation without the for loop? Something like <code>X + c</code> (but of course that does not work).</p>
<p>Thank you for your help</p>
<p>EDIT:
Code example</p>
<pre><code>import numpy as np
m = 3
X = np.arange(m*4*3*6*5*7).reshape((m, 4, 3, 6, 5, 7))
c = np.arange(m)
for i in range(m):
X[i] += c[i]
</code></pre>
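<p>For what it's worth, this is exactly what broadcasting over trailing length-1 axes is for; a sketch that matches the loop result:</p>

```python
import numpy as np

m = 3
X = np.arange(m * 4 * 3 * 6 * 5 * 7).reshape((m, 4, 3, 6, 5, 7))
c = np.arange(m)

# give c one length-1 axis per trailing axis of X so it broadcasts over them
X_b = X + c.reshape((m,) + (1,) * (X.ndim - 1))

# equivalent, when the number of axes is known: X + c[:, None, None, None, None, None]
```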
|
<python><numpy>
|
2024-04-24 12:15:53
| 3
| 365
|
Gigino
|
78,378,202
| 11,267,783
|
With Matplotlib, how to create one figure with cartesian and polar coordinates
|
<p>I want to create a figure with 2 subplots using mosaic and display the same signal in polar and cartesian coordinates with Matplotlib.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
t = np.arange(0, 30, 0.1)
s = np.sin(2 * np.pi * 1 * t)
fig = plt.figure(layout="constrained")
ax_dict = fig.subplot_mosaic(
[
["signal1"],
["signal2"],
]
)
p1 = ax_dict["signal1"].plot(s)
p2 = ax_dict["signal2"].plot(s)
plt.show()
</code></pre>
<p>So I would like to display the figure in cartesian coordinates on the left and in polar coordinates on the right. How is that possible?</p>
<p>Thank you in advance.</p>
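<p>For what it's worth, a sketch using plain <code>add_subplot</code> with <code>projection='polar'</code> (newer Matplotlib also offers <code>per_subplot_kw</code> in <code>subplot_mosaic</code>, but the version-independent way is):</p>

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np

t = np.arange(0, 30, 0.1)
s = np.sin(2 * np.pi * 1 * t)

fig = plt.figure(layout="constrained")
ax_cart = fig.add_subplot(1, 2, 1)                        # cartesian, left
ax_polar = fig.add_subplot(1, 2, 2, projection="polar")   # polar, right
ax_cart.plot(t, s)
ax_polar.plot(t, s)
```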
|
<python><matplotlib>
|
2024-04-24 11:46:24
| 2
| 322
|
Mo0nKizz
|
78,378,146
| 5,678,057
|
Pandas: Prevent (blank) from being read as ''
|
<p>I'm loading a dataframe that has a column with <strong>string</strong> <code>(blank)</code> in it. I want to read this as is, i.e as a string itself, and to be displayed in the output, instead of being converted to any other form (<code>Nan</code>, <code><NA></code>, <code>''</code> etc.).</p>
<p>This is not a duplicate question. I tried several stack overflow suggestions. The closest that answered mine were <a href="https://stackoverflow.com/questions/41417214/prevent-pandas-from-reading-na-as-nan">this question</a> and <a href="https://stackoverflow.com/questions/10867028/get-pandas-read-csv-to-read-empty-values-as-empty-string-instead-of-nan">this one</a> (<code>keep_default_na=False</code> and <code>na_values=['']</code>), although it still converts cells with string <code>(blank)</code> to nan.</p>
<p>Is there any way I can get <code>(blank)</code> itself as a string. As it's required to not create confusion for the end user.</p>
<p>By the way, I also tried converting using <code>dtype</code> as well, but to no avail. It still converts to <code><NA></code>.</p>
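<p>For what it's worth, with a plain CSV round-trip the literal string does survive once the default NA handling is disabled; a minimal sketch (if the real source is Excel or a database, the conversion may be happening in that reader instead):</p>

```python
import io
import pandas as pd

csv = "col\n(blank)\nfoo\n"
# keep_default_na=False stops pandas from mapping NA-like strings to NaN,
# and dtype=str keeps the column as plain strings
df = pd.read_csv(io.StringIO(csv), keep_default_na=False, dtype=str)
print(df["col"].tolist())
```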
|
<python><pandas><dataframe><casting>
|
2024-04-24 11:36:58
| 0
| 389
|
Salih
|
78,377,980
| 2,546,099
|
Generate wheel-package for specific python version using pyproject.toml
|
<p>I'm currently transferring my project build pipeline from being <code>setup.py</code>-based to a <code>pyproject.toml</code>-based approach. Most of the parameters are easily transferable, however, I can't find an equivalent to</p>
<pre><code>python_requires="~=3.11"
</code></pre>
<p>Of course, there is</p>
<pre><code>[project]
requires-python = "~=3.11"
</code></pre>
<p>but while the setting in <code>setup.py</code> results in <code>-cp311-cp311</code> in the package name, the same setting in <code>pyproject.toml</code> results in <code>-py3-none</code> in the package name.</p>
<p>As the package contains several dll-files which have to be compiled specific to each python version, I want to reflect that also in the package name (and ideally refuse installation if a wrong python installation is used). How can I solve that by using <code>pyproject.toml</code>?</p>
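<p>For what it's worth, <code>requires-python</code> only constrains installation; the <code>py3-none</code> tag comes from the wheel being built as pure Python. One hedged workaround is forcing the tag through <code>bdist_wheel</code> options, e.g. in <code>setup.cfg</code> (though the robust fix is declaring the compiled extensions in the build so setuptools tags the wheel itself):</p>

```ini
[bdist_wheel]
python-tag = cp311
```

<p>This only renames the wheel; pip will then refuse to install it on other interpreters, but it does not verify that the bundled DLLs actually match CPython 3.11.</p>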
|
<python><build><pyproject.toml>
|
2024-04-24 11:10:27
| 0
| 4,156
|
arc_lupus
|
78,377,856
| 11,233,365
|
Failure to install python-javabridge: How does pip work differently between Python 3.10 and 3.11?
|
<p>I am trying to install <code>python-javabridge</code> (<a href="https://github.com/CellProfiler/python-javabridge" rel="nofollow noreferrer">GitHub link</a>) in order to set up a Java virtual machine to house libraries for different file formats that Python can subsequently refer to. However, the authors of the package have discontinued work on the project, and it doesn't appear to be supported for Python >3.10. However, given how invaluable that library is, I would like to see if I can fix <code>python-javabridge</code> to be installed on newer versions of Python.</p>
<p>Currently, I am testing the installation on Python 3.10 and 3.11 to see how they differ.</p>
<p>Starting with Python 3.10, I tried the following:</p>
<pre><code>> mamba create -n javabridge-test python==3.10.14
> conda activate javabridge-test # Switch to the environment I want to install it in
> pip install python-javabridge
> # Installation proceeds without issue
</code></pre>
<p>However, when I try doing the same thing, but with Python 3.11.8 (having done <code>pip install numpy</code> beforehand to get the dependency ready), I get the following error messages:</p>
<pre><code>Collecting python-javabridge
Using cached python-javabridge-4.0.3.tar.gz (1.3 MB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: numpy>=1.20.1 in /path/to/envs/javabridge-install/lib/python3.11/site-packages (from python-javabridge) (1.26.4)
Building wheels for collected packages: python-javabridge
Building wheel for python-javabridge (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [46 lines of output]
/path/to/envs/javabridge-install/lib/python3.11/site-packages/setuptools/__init__.py:80: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!
********************************************************************************
Requirements should be satisfied by a PEP 517 installer.
If you are using pip, you can try `pip install --use-pep517`.
********************************************************************************
!!
dist.fetch_build_eggs(dist.setup_requires)
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-311
creating build/lib.linux-x86_64-cpython-311/javabridge
copying javabridge/__init__.py -> build/lib.linux-x86_64-cpython-311/javabridge
copying javabridge/jutil.py -> build/lib.linux-x86_64-cpython-311/javabridge
copying javabridge/locate.py -> build/lib.linux-x86_64-cpython-311/javabridge
copying javabridge/noseplugin.py -> build/lib.linux-x86_64-cpython-311/javabridge
copying javabridge/wrappers.py -> build/lib.linux-x86_64-cpython-311/javabridge
creating build/lib.linux-x86_64-cpython-311/javabridge/tests
copying javabridge/tests/__init__.py -> build/lib.linux-x86_64-cpython-311/javabridge/tests
copying javabridge/tests/test_cpython.py -> build/lib.linux-x86_64-cpython-311/javabridge/tests
copying javabridge/tests/test_javabridge.py -> build/lib.linux-x86_64-cpython-311/javabridge/tests
copying javabridge/tests/test_jutil.py -> build/lib.linux-x86_64-cpython-311/javabridge/tests
copying javabridge/tests/test_wrappers.py -> build/lib.linux-x86_64-cpython-311/javabridge/tests
creating build/lib.linux-x86_64-cpython-311/javabridge/jars
copying javabridge/jars/cpython.jar -> build/lib.linux-x86_64-cpython-311/javabridge/jars
copying javabridge/jars/rhino-1.7R4.jar -> build/lib.linux-x86_64-cpython-311/javabridge/jars
copying javabridge/jars/runnablequeue.jar -> build/lib.linux-x86_64-cpython-311/javabridge/jars
copying javabridge/jars/test.jar -> build/lib.linux-x86_64-cpython-311/javabridge/jars
running build_ext
javac -source 8 -target 8 /tmp/pip-install-gv_sm6fr/python-javabridge_ea4288a3fc0b4e16815944ed67e58092/java/org/cellprofiler/runnablequeue/RunnableQueue.java
javac -source 8 -target 8 /tmp/pip-install-gv_sm6fr/python-javabridge_ea4288a3fc0b4e16815944ed67e58092/java/org/cellprofiler/javabridge/test/RealRect.java
javac -source 8 -target 8 /tmp/pip-install-gv_sm6fr/python-javabridge_ea4288a3fc0b4e16815944ed67e58092/java/org/cellprofiler/javabridge/CPython.java /tmp/pip-install-gv_sm6fr/python-javabridge_ea4288a3fc0b4e16815944ed67e58092/java/org/cellprofiler/javabridge/CPythonInvocationHandler.java
Note: /tmp/pip-install-gv_sm6fr/python-javabridge_ea4288a3fc0b4e16815944ed67e58092/java/org/cellprofiler/javabridge/CPythonInvocationHandler.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
building 'javabridge._javabridge' extension
creating build/temp.linux-x86_64-cpython-311
gcc -pthread -B /path/to/envs/javabridge-install/compiler_compat -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /path/to/envs/javabridge-install/include -fPIC -O2 -isystem /path/to/envs/javabridge-install/include -fPIC -I/path/to/envs/javabridge-install/lib/python3.11/site-packages/numpy/core/include -I/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402.b06-2.el8.x86_64/include -I/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402.b06-2.el8.x86_64/include/linux -I/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402.b06-2.el8.x86_64/default-java/include -I/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402.b06-2.el8.x86_64/default-java/include/linux -I/path/to/envs/javabridge-install/lib/python3.11/site-packages/numpy/core/include -I/path/to/envs/javabridge-install/include/python3.11 -c _javabridge.c -o build/temp.linux-x86_64-cpython-311/_javabridge.o
_javabridge.c:196:12: fatal error: longintrepr.h: No such file or directory
#include "longintrepr.h"
^~~~~~~~~~~~~~~
compilation terminated.
error: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for python-javabridge
Running setup.py clean for python-javabridge
Failed to build python-javabridge
ERROR: Could not build wheels for python-javabridge, which is required to install pyproject.toml-based projects
</code></pre>
<p>Could you advise me on how Python installations via pip and/or setuptools have changed between 3.10 and 3.11, and how this error might be fixed to make <code>python-javabridge</code>'s installation compatible with Python 3.11 and newer?</p>
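For anyone reproducing this: my understanding (worth verifying against the 3.11 changelog) is that CPython 3.11 moved <code>longintrepr.h</code> from the top-level include directory into the <code>cpython/</code> subdirectory, which is why the Cython-generated C file no longer finds it. A quick way to check where the header lives for a given interpreter:

```python
import pathlib
import sysconfig

# Locate this interpreter's C header directory and check both the old
# (<= 3.10) and new (>= 3.11) candidate locations of longintrepr.h.
include_dir = pathlib.Path(sysconfig.get_path("include"))
old_location = include_dir / "longintrepr.h"
new_location = include_dir / "cpython" / "longintrepr.h"
print(f"{old_location} exists: {old_location.exists()}")
print(f"{new_location} exists: {new_location.exists()}")
```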
|
<python><pip><setuptools><cpython>
|
2024-04-24 10:51:26
| 1
| 301
|
TheEponymousProgrammer
|
78,377,648
| 16,383,578
|
Can't import libarchive-c in Python: FileNotFoundError
|
<p>I am trying to selectively extract files from many 7zip files automatically, the usual <code>patool</code> and <code>pyunpack</code> and the like don't allow selection of files to be extracted. <code>py7zr</code> seems to provide the functionality however I find it extremely inefficient to extract individual files.</p>
<p>I found <code>libarchive-c</code> after crafting this Google query manually:</p>
<p><code>https://www.google.com/search?q=%22python%22+%227z%22+-patool+-pyunpack+-py7zr&tbs=li:1</code></p>
<p>And sifting through the trash it still returns.</p>
<p>I installed <a href="https://pypi.org/project/libarchive-c/" rel="nofollow noreferrer">this version</a> of <code>libarchive</code>, I downloaded the binary for Windows 10 x64 <a href="https://www.libarchive.org/" rel="nofollow noreferrer">here</a>.</p>
<p>I extracted the files from the libarchive package to D:\Programs\libarchive.</p>
<p>At first I can't even import <code>libarchive</code>:</p>
<pre class="lang-py prettyprint-override"><code>In [20]: import libarchive
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [20], in <cell line: 1>()
----> 1 import libarchive
File C:\Program Files\Python310\lib\site-packages\libarchive\__init__.py:1, in <module>
----> 1 from .entry import ArchiveEntry
2 from .exception import ArchiveError
3 from .extract import extract_fd, extract_file, extract_memory
File C:\Program Files\Python310\lib\site-packages\libarchive\entry.py:6, in <module>
3 from enum import IntEnum
4 import math
----> 6 from . import ffi
9 class FileType(IntEnum):
10 NAMED_PIPE = AE_IFIFO = 0o010000 # noqa: E221
File C:\Program Files\Python310\lib\site-packages\libarchive\ffi.py:26, in <module>
23 page_size = mmap.PAGESIZE
25 libarchive_path = os.environ.get('LIBARCHIVE') or find_library('archive')
---> 26 libarchive = ctypes.cdll.LoadLibrary(libarchive_path)
29 # Constants
31 ARCHIVE_EOF = 1 # Found end of archive.
File C:\Program Files\Python310\lib\ctypes\__init__.py:452, in LibraryLoader.LoadLibrary(self, name)
451 def LoadLibrary(self, name):
--> 452 return self._dlltype(name)
File C:\Program Files\Python310\lib\ctypes\__init__.py:364, in CDLL.__init__(self, name, mode, handle, use_errno, use_last_error, winmode)
362 import nt
363 mode = nt._LOAD_LIBRARY_SEARCH_DEFAULT_DIRS
--> 364 if '/' in name or '\\' in name:
365 self._name = nt._getfullpathname(self._name)
366 mode |= nt._LOAD_LIBRARY_SEARCH_DLL_LOAD_DIR
TypeError: argument of type 'NoneType' is not iterable
</code></pre>
<p>It is extremely easy to figure out what is wrong, at least for me. It is because of this line:</p>
<pre><code>libarchive_path = os.environ.get('LIBARCHIVE') or find_library('archive')
</code></pre>
<p>It searches for the environment variable to find the path of the libarchive library, but I downloaded a .zip package, not an installer, which doesn't set the environment variable, so the first call returns <code>None</code>. Of course this means the second call also failed.</p>
<p>So I had to manually supply the correct path. It is not well-documented, at least I haven't found it, but the line immediately below it is:</p>
<pre><code>libarchive = ctypes.cdll.LoadLibrary(libarchive_path)
</code></pre>
<p>It means the path is expected to be a .dll path, so I changed it to this:</p>
<pre><code>libarchive_path = "D:/Programs/libarchive/bin/archive.dll"
</code></pre>
<p>But I still can't import <code>libarchive</code>:</p>
<pre><code>FileNotFoundError: Could not find module 'D:\Programs\libarchive\bin\archive.dll' (or one of its dependencies). Try using the full path with constructor syntax.
</code></pre>
<p>I have searched this error and have found a lot of irrelevant information. I have found many posts similar to this: <a href="https://stackoverflow.com/questions/59330863/cant-import-dll-module-in-python">Can't import dll module in Python</a></p>
<p>But they are all unhelpful, as I have supplied the absolute path while they are all dealing about relative path.</p>
<p>So the first part of the message must be wrong: the file must have indeed been found, or there is a serious bug in the library code or the Python implementation...</p>
<p>Could it be there are other .dll files in that directory that this .dll depends on?</p>
<p>I have tried the following commands and they don't solve the issue:</p>
<pre><code>os.chdir("D:/Programs/libarchive/bin")
os.add_dll_directory("D:/Programs/libarchive/bin")
</code></pre>
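(For completeness, the path could presumably also be supplied without editing <code>ffi.py</code> at all, via the environment variable its first branch reads, as long as it is set before the first import — a sketch using my local DLL path:)

```python
import os

# ffi.py does os.environ.get('LIBARCHIVE') before falling back to
# find_library('archive'), so setting this before the first import
# makes LoadLibrary receive a real path instead of None.
os.environ["LIBARCHIVE"] = r"D:\Programs\libarchive\bin\archive.dll"

# import libarchive  # would now load from the path above
```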
<p>And no, there is only one .dll in that directory:</p>
<pre><code>PS D:\Programs\libarchive\bin> (gci).fullname
D:\Programs\libarchive\bin\archive.dll
D:\Programs\libarchive\bin\bsdcat.exe
D:\Programs\libarchive\bin\bsdcpio.exe
D:\Programs\libarchive\bin\bsdtar.exe
PS D:\Programs\libarchive\bin>
</code></pre>
<p>So how can I fix this?</p>
<hr />
<h2>Update</h2>
<p>I used this <a href="https://github.com/lucasg/Dependencies" rel="nofollow noreferrer">program</a> to find the .dll files imported by any executable:</p>
<pre><code>import json
import os
import subprocess
def get_dependencies(exe):
    return [
        i["Name"]
        for i in json.loads(
            subprocess.run(
                ["D:/Programs/Dependencies_x64_Release/Dependencies.exe", "-json", "-imports", exe],
                capture_output=True,
                text=True
            ).stdout
        )["Imports"]
    ]
dependencies = get_dependencies("D:/Programs/libarchive/bin/archive.dll")
</code></pre>
<p>The .dll file imports the following files:</p>
<pre><code>['bcrypt.dll',
'libcrypto-1_1-x64.dll',
'KERNEL32.dll',
'VCRUNTIME140.dll',
'api-ms-win-crt-runtime-l1-1-0.dll',
'api-ms-win-crt-heap-l1-1-0.dll',
'api-ms-win-crt-string-l1-1-0.dll',
'api-ms-win-crt-time-l1-1-0.dll',
'api-ms-win-crt-utility-l1-1-0.dll',
'api-ms-win-crt-stdio-l1-1-0.dll',
'api-ms-win-crt-convert-l1-1-0.dll',
'api-ms-win-crt-locale-l1-1-0.dll',
'api-ms-win-crt-environment-l1-1-0.dll',
'api-ms-win-crt-filesystem-l1-1-0.dll']
</code></pre>
<p>I tried to find which are not present:</p>
<pre><code>for dll in dependencies:
    if not os.path.isfile("C:/Windows/System32/" + dll):
        print(dll)
</code></pre>
<pre><code>libcrypto-1_1-x64.dll
api-ms-win-crt-runtime-l1-1-0.dll
api-ms-win-crt-heap-l1-1-0.dll
api-ms-win-crt-string-l1-1-0.dll
api-ms-win-crt-time-l1-1-0.dll
api-ms-win-crt-utility-l1-1-0.dll
api-ms-win-crt-stdio-l1-1-0.dll
api-ms-win-crt-convert-l1-1-0.dll
api-ms-win-crt-locale-l1-1-0.dll
api-ms-win-crt-environment-l1-1-0.dll
api-ms-win-crt-filesystem-l1-1-0.dll
</code></pre>
<p>It seems I need to download <code>libcrypto-1_1-x64.dll</code>, but what are the other files? It seems unlikely they are missing, but they aren't in the System32 folder.</p>
|
<python><ctypes><libarchive>
|
2024-04-24 10:16:19
| 1
| 3,930
|
Ξένη Γήινος
|
78,377,563
| 4,809,610
|
Mypy: properly typing a Django mixin class when accessing a method on super()
|
<p>Django has a quirk where it doesn't validate models by default before writing to the database. This is a non-ideal situation that devs try to work around by creating a mixin class, for example:
<a href="https://www.xormedia.com/django-model-validation-on-save/" rel="nofollow noreferrer">https://www.xormedia.com/django-model-validation-on-save/</a></p>
<p>The idea of that workaround is that you can inherit this mixin to your custom defined Django models whenever you want to add validation-on-save to it.</p>
<p>That approach works, but I'm trying to upgrade those old examples to a properly typed example. Below is my current code:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Iterable, TypeVar

from django.db import models

DjangoModel = TypeVar("DjangoModel", bound=models.Model)


class ValidateModelMixin:
    """Use this mixin to make model.save() call model.full_clean()

    Django's model.save() doesn't call full_clean() by default. More info:

    * "Why doesn't django's model.save() call full clean?"
      http://stackoverflow.com/questions/4441539/

    * "Model docs imply that ModelForm will call Model.full_clean(),
      but it won't."
      https://code.djangoproject.com/ticket/13100
    """

    def save(
        self: DjangoModel,
        force_insert: bool = False,
        force_update: bool = False,
        using: str | None = "default",  # DEFAULT_DB_ALIAS
        update_fields: Iterable[str] | None = None,
    ) -> None:
        """Override the save method to call full_clean before saving the model.

        Takes into account the force_insert and force_update flags, as they
        are passed to the save method when trying to skip the validation.

        Also passes on any positional and keyword arguments that were passed
        at the original call-site of the method.
        """
        # Only validate the model if the force-flags are not enabled
        if not (force_insert or force_update):
            self.full_clean()

        # Then save the model, passing in the original arguments
        super().save(force_insert, force_update, using, update_fields)
</code></pre>
<p>Mypy produces the following error for the code above:</p>
<blockquote>
<p>error: Unsupported argument 2 for "super" [misc]</p>
</blockquote>
<p>I think this is Mypy not liking the arguments I'm passing to <code>super().save()</code>. But those arguments do seem to match the arguments of Django's <code>models.Model.save</code>:
<a href="https://docs.djangoproject.com/en/5.0/ref/models/instances/#django.db.models.Model.save" rel="nofollow noreferrer">https://docs.djangoproject.com/en/5.0/ref/models/instances/#django.db.models.Model.save</a>
<a href="https://github.com/typeddjango/django-stubs/blob/1546e5c78aae6974f93d1c2c29ba50c177187f3a/django-stubs/db/models/base.pyi#L72" rel="nofollow noreferrer">https://github.com/typeddjango/django-stubs/blob/1546e5c78aae6974f93d1c2c29ba50c177187f3a/django-stubs/db/models/base.pyi#L72</a></p>
<p>I believe I might not be setting the right type for the self argument, but I'm not sure how I should be typing this code instead.</p>
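For comparison, one pattern I've seen for typed mixins is a <code>TYPE_CHECKING</code>-only base class, which gives <code>super()</code> a known signature without affecting the runtime MRO. Sketched here with a stand-in base class rather than the real <code>models.Model</code>/django-stubs, so the names below are illustrative:

```python
from typing import TYPE_CHECKING


class FakeModel:
    """Stand-in for django.db.models.Model in this sketch."""

    def save(self) -> str:
        return "saved"


if TYPE_CHECKING:
    _Base = FakeModel  # the type checker sees a base that declares save()
else:
    _Base = object     # at runtime the mixin stays an independent class


class ValidateMixin(_Base):
    def save(self) -> str:
        # full_clean()-style validation would go here
        return super().save()  # resolves via the MRO of the concrete class


class ConcreteModel(ValidateMixin, FakeModel):
    pass
```

With the real `Model` as `_Base`, mypy would then check `super().save(...)` against the base's signature instead of rejecting the `super()` call outright.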
|
<python><django><mypy><python-typing>
|
2024-04-24 10:02:09
| 1
| 754
|
KCDC
|
78,377,343
| 19,954,200
|
converting json data to vector for better langchain chatbot results
|
<p>I'm creating a chatbot for my university website as a project.
For the last 3 days I've been searching all over the internet for how to use <code>LangChain</code> with JSON data such that my chatbot is fast. I came up with this:</p>
<pre><code>from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import create_json_agent
from langchain.agents.agent_toolkits import JsonToolkit
from langchain.tools.json.tool import JsonSpec
import json
with open(r'...formation_initial.json', 'r') as f:
    data = json.load(f)  # the with block already closes the file

spec = JsonSpec(dict_=data, max_value_length=4000)
toolkit = JsonToolkit(spec=spec)
agent = create_json_agent(llm=llm, toolkit=toolkit, verbose=True)

print(agent.run('quelle sont les modules de master intelligence artificielle et sciences de donnees semestre 1'))
</code></pre>
<p>It's working and gives correct answers, but the problem is that it's not fast, because it has to explore every level of the JSON file each time.
Upon further research I found that in order to get the desired answer speed I need a vector database, but there is no clear method for turning JSON into a vector DB, especially if it's a complicated JSON.
Here is a snippet of the JSON file, which basically represents the diplomas you can get at my university:</p>
<pre><code> {"DEUST": {
"Analytique des donn\u00e9es": {
"objectif": "La Licence Science et Techniques en analytique des donn\u00e9es permet aux \u00e9tudiants de doter de comp\u00e9tences en mati\u00e8re d'outils informatiques, destechniques et des m\u00e9thodes statistiques pour permettre d\u2019organiser, de synth\u00e9tiser et de traduire efficacement les donn\u00e9es m\u00e9tier d\u2019uneorganisation. L'\u00e9tudiant doit \u00eatre en mesure d'apporter un appui analytique \u00e0 la conduite d'exploration et \u00e0 l'analyse complexe de donn\u00e9es. ",
"COMPETENCES VISEES ET DEBOUCHES": "Masters en sciences de donn\u00e9es: fouille de donn\u00e9es, business analytiques, blockchain,Masters orient\u00e9s e-Technologies: e-Business, e-Administration et e-LogistiqueFormations d\u2019Ing\u00e9nieurs dans une \u00e9cole d\u2019ing\u00e9nieurs \u00e0 l\u2019issue de la deuxi\u00e8me ou de la troisi\u00e8me ann\u00e9e de licenceData scientistTechnicien sup\u00e9rieur en SGBD R : installation, configuration et administration des SGBDWebMaster et d\u00e9veloppeur de sites web dynamiquesInt\u00e9gration du monde du travail dans les entreprises et les bureaux d\u2019\u00e9tudes ", "coordinateur": {"nom": "Pr.BAIDA Ouafae", "email": "wbaida@uae.ac.ma"},
"semesters": [
{"Semestre 5":
[" Math\u00e9matiques pour la science des donn\u00e9es", " Structures des donn\u00e9es avanc\u00e9es et th\u00e9orie des graphes", " Fondamentaux des bases de donn\u00e9es", " Algorithmique avanc\u00e9e et programmation", " D\u00e9veloppement WEB", " D\u00e9veloppement personnel et intelligence \u00e9motionnelle (Soft skills)"]},
{"Semestre 6":
[" Analyse et fouille de donn\u00e9es", " Syst\u00e8mes et r\u00e9seaux", " Ing\u00e9nierie des donn\u00e9es ", " PFE"]}]}
</code></pre>
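Before any vector store comes into play, nested JSON like this usually has to be flattened into text chunks that can be embedded. A library-agnostic sketch (the actual embedding and retrieval wiring depends on the LangChain version in use, and the miniature `data` dict below is a hypothetical stand-in for the real file):

```python
def flatten_json(obj, path=()):
    """Yield (path, text) pairs for every leaf of a nested JSON structure."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            yield from flatten_json(value, path + (key,))
    elif isinstance(obj, list):
        for index, value in enumerate(obj):
            yield from flatten_json(value, path + (str(index),))
    else:
        yield (" > ".join(path), str(obj))


# Hypothetical miniature of the university JSON above
data = {"DEUST": {"Analytique des donnees": {"semesters": [{"Semestre 5": ["Maths", "Web"]}]}}}

chunks = [f"{path}: {text}" for path, text in flatten_json(data)]
```

Each chunk keeps its JSON path as context, so a retriever can return just the matching branch instead of walking every level of the file on every query.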
|
<python><json><nlp><langchain><large-language-model>
|
2024-04-24 09:29:39
| 1
| 1,268
|
Ayman Ait
|
78,377,181
| 587,587
|
How can I translate this TCP/IP MULTICAST code from Python to Go
|
<p>I have this code, written in Python. From what I understand it</p>
<ul>
<li>Creates a single socket</li>
<li>Adds every local IP address as a member of the given multicast groups.</li>
</ul>
<p>If any local interface receives a multicast message with that group/port, recv will return the result.</p>
<pre><code>import sys
import socket
import struct
MULTICASTGROUP = "230.0.0.2"
MULTICASTPORT = 4450
hostname = socket.gethostname()
hosts = socket.gethostbyname_ex(hostname)
hostinterfaceips = hosts[2]
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', MULTICASTPORT))
for inf in hostinterfaceips:
    sock.setsockopt(
        socket.IPPROTO_IP,
        socket.IP_ADD_MEMBERSHIP,
        socket.inet_aton(MULTICASTGROUP) + socket.inet_aton(inf)
    )
data = sock.recv(1500)
print(data)
</code></pre>
<p>How can I adapt this code to Go? This is as far as I've gotten. I have a really hard time understanding the <a href="https://pkg.go.dev/net" rel="nofollow noreferrer">net</a> package, especially regarding Multicast</p>
<pre><code>func TestHost(t *testing.T) {
    hostname, err := os.Hostname()
    if err != nil {
        t.Fatal(err)
    }
    fmt.Printf("hostname: %s\n", hostname)

    addrs, err := net.LookupHost(hostname)
    if err != nil {
        t.Fatal(err)
    }
    for i, addr := range addrs {
        fmt.Printf("%d: %s\n", i, addr)
    }
}
</code></pre>
|
<python><go><network-programming><multicast>
|
2024-04-24 09:05:26
| 1
| 492
|
Anton Lahti
|
78,376,979
| 3,103,957
|
Usage of ellipsis in Python
|
<p>I have seen that when we use the ellipsis (three continuous dots, <code>...</code>) in Python, it means different things according to the context. I am aware that <code>...</code> can be used as a placeholder to omit a definition, in the typing module (Callable, Tuple etc.), in Pydantic, and so on. I have already looked into the related SO links, but they don't answer my question.</p>
<p>But I am not getting what it specifically means when we instantiate a class with <code>...</code>. For example:</p>
<pre><code>class Sample():
    def __init__(self, a):
        self.a = a

ref = Sample(...)  # <-- I am not sure what this refers to
</code></pre>
<p>Can I have some leads please.</p>
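For reference, at runtime the three dots are nothing special to the class: they are simply the singleton <code>Ellipsis</code> object, so the instantiation above just stores that object in <code>self.a</code>:

```python
class Sample:
    def __init__(self, a):
        self.a = a


ref = Sample(...)

print(type(...))          # <class 'ellipsis'>
print(... is Ellipsis)    # True
print(ref.a is Ellipsis)  # True
```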
|
<python><ellipsis>
|
2024-04-24 08:33:08
| 0
| 878
|
user3103957
|
78,376,871
| 11,770,390
|
Python: how to pass the same lock to multiple threads mapped to a ThreadPoolExecutor?
|
<p>Let's say I don't want to specify the lock globally. How can I pass the same lock to each thread within a <code>ThreadPoolExecutor</code>? Here's what doesn't work:</p>
<pre><code>import threading
from concurrent.futures import ThreadPoolExecutor


def main():
    lock = threading.Lock()
    with ThreadPoolExecutor(max_workers=10) as executor:
        executor.map(thread_function, range(10), lock)


def thread_function(number: int, lock: threading.Lock):
    with lock:
        print("Hello from the thread with argument: ", number)


if __name__ == '__main__':
    main()
</code></pre>
<p>It says that the lock is not iterable. I thought there must be a way, since objects in Python are passed by reference; if I could iterate over something that always hands me a reference to the same lock, that would probably work. But how do I do it?</p>
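One way to build exactly that "iterable that always hands back the same lock" is <code>itertools.repeat</code>; <code>map</code> stops when the shortest iterable (the range) is exhausted, so the infinite repeat is harmless. A sketch, with the print swapped for a return value so the result is observable:

```python
import itertools
import threading
from concurrent.futures import ThreadPoolExecutor


def thread_function(number: int, lock: threading.Lock) -> int:
    with lock:
        return number * 2


def run() -> list:
    lock = threading.Lock()
    with ThreadPoolExecutor(max_workers=10) as executor:
        # repeat(lock) yields the same Lock object for every mapped call
        return list(executor.map(thread_function, range(10), itertools.repeat(lock)))
```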
|
<python><multithreading><concurrency><mutex>
|
2024-04-24 08:09:36
| 1
| 5,344
|
glades
|
78,376,438
| 11,141,816
|
How to specify the specific node connection/data flow in the tensorflow or pytorch?
|
<p>Layers in TensorFlow are typically fully connected, i.e. every neuron is connected to every neuron in the next layer:</p>
<pre><code>model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 1), activation='relu', input_shape=(10, 1, 1)),  # layer 1
    tf.keras.layers.Dense(32, activation='relu'),  # layer 2
    tf.keras.layers.Dense(32, activation='relu'),  # layer 3
])
</code></pre>
<p>Is it possible to specify individual connections between the layers as a directed graph, e.g. so that the 2nd neuron in layer 2 is connected only to the 3rd neuron in layer 3?</p>
<pre><code>[(2,3),(3,3)], [(2,2),(3,1)],
</code></pre>
<p>or even the non trivial back flow</p>
<pre><code>[(3,3),(2,3)], [(2,3),(3,5)],
</code></pre>
<p>where the connection such as</p>
<pre><code>[(1,2),(2,3)],
</code></pre>
<p>did not exist, i.e. the weight was always zero and can not be trained.</p>
<p>How to do it in tensorflow or pytorch?</p>
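Framework specifics aside, the usual trick in both libraries (as far as I know) is a fixed binary mask multiplied into the weight matrix, so forbidden connections always contribute zero and their gradients vanish too; in Keras this would live in a custom layer's <code>call()</code>, in PyTorch in <code>forward()</code>. A framework-free NumPy sketch of the idea, with hypothetical allowed edges:

```python
import numpy as np

n_in, n_out = 4, 4

# Binary adjacency mask: mask[i, j] = 1 keeps the connection from input
# neuron i to output neuron j; every other weight is permanently zeroed.
mask = np.zeros((n_in, n_out))
mask[1, 2] = 1.0  # e.g. neuron 2 of one layer -> neuron 3 of the next
mask[2, 2] = 1.0

rng = np.random.default_rng(0)
weights = rng.normal(size=(n_in, n_out))
x = rng.normal(size=(1, n_in))

# Masked forward pass: applying the mask inside the layer also masks
# the gradient, so the zeroed weights can never be trained.
y = x @ (weights * mask)
```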
|
<python><tensorflow><pytorch><neural-network>
|
2024-04-24 06:49:49
| 1
| 593
|
ShoutOutAndCalculate
|
78,376,362
| 1,826,893
|
Unable to get all users from Microsoft Graph using Python SDK
|
<p>I need to know user emails and id (so I can identify the user from the email).</p>
<p>According to Microsoft <a href="https://learn.microsoft.com/en-us/graph/api/user-get?view=graph-rest-1.0&tabs=python#examples" rel="nofollow noreferrer">documentation</a>, the below code should allow the retrival of user information.</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import configparser

from azure.identity.aio import ClientSecretCredential
from msgraph import GraphServiceClient
from msgraph.generated.users.item.user_item_request_builder import UserItemRequestBuilder


def main():
    # Load settings
    config = configparser.ConfigParser()
    config.read(['config.cfg', 'config.dev.cfg'])
    azure_settings = config['azure']

    credentials = ClientSecretCredential(
        azure_settings['tenantId'],
        azure_settings['clientId'],
        azure_settings['clientSecret'],
    )
    scopes = ['https://graph.microsoft.com/.default']
    client = GraphServiceClient(credentials=credentials, scopes=scopes)

    query_params = UserItemRequestBuilder.UserItemRequestBuilderGetQueryParameters(select=['id', 'mail'])
    request_config = UserItemRequestBuilder.UserItemRequestBuilderGetRequestConfiguration(query_parameters=query_params)
    users = asyncio.run(client.users.get(request_config))
    print(users)
<p>However, I get the error</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "C:\Users\PycharmProjects\ProcessEmailReceipts\microsoft_graph.py", line 35, in <module>
main()
File "C:\Users\PycharmProjects\ProcessEmailReceipts\microsoft_graph.py", line 28, in main
request_config = UserItemRequestBuilder.UserItemRequestBuilderGetRequestConfiguration(query_parameters=query_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: type object 'UserItemRequestBuilder' has no attribute 'UserItemRequestBuilderGetRequestConfiguration'
</code></pre>
<p>Looking at the <a href="https://github.com/microsoftgraph/msgraph-sdk-python" rel="nofollow noreferrer">repo</a>, that class does indeed not exists.</p>
<p>Is there a way to get all users id and emails with the python sdk?</p>
<p>Thanks,</p>
|
<python><azure><azure-active-directory><microsoft-graph-api>
|
2024-04-24 06:32:32
| 2
| 1,559
|
Edgar H
|
78,376,273
| 107,294
|
Why is setuptools not including a (non-package) top-level module in the generated wheel?
|
<p>I have a <a href="https://github.com/mc68-net/r8format/tree/ae5ba998dca7ff446cb0477591d65b5064e904ee/" rel="nofollow noreferrer">Python project</a> that I configure with <code>pyproject.toml</code> and build with <code>pyproject-build</code>. The lines relating to the build and which files to include in the distribution packages from the <code>pyproject.toml</code> are:</p>
<pre><code>[build-system]
requires = ['setuptools']
build-backend = 'setuptools.build_meta'
[tool.setuptools.packages.find]
where = ['psrc']
</code></pre>
<p>Under <code>psrc/</code> I have five modules, three of which are import packages (directories, two namespace packages and one with an explicit <code>__init__.py</code>) and the other two of which are plain modules in files (<code>conftest.py</code> and <code>pytest_pt.py</code>). The former, presumably through some magic, is correctly excluded from the wheel built by <code>pyproject-build</code> as a pytest configuration file. But the latter, <code>pytest_pt.py</code>, is a module that I wish to include in the distribution packages.</p>
<p>That module does appear in the source distribution package (sdist) <code>r8format-0.0.2.tar.gz</code>, under the <code>psrc/</code> directory, but it is not present in the wheel that's built, <code>r8format-0.0.2-py3-none-any.whl</code>. Why is this file/module being excluded? When imported, <code>pytest_pt</code> is a top-level module just like <code>bastok</code>, <code>binary</code> or <code>cmtconv</code>, except that it's not a package module, just a regular module.</p>
<p>I should also note that if I do an editable install (i.e., <code>pip install -e</code> from a directory or a <code>git+https:</code> URL), the <code>pytest_pt</code> module <em>is</em> available and works fine.</p>
<p>I have tried adding <code>include psrc/pytest_pt.py</code> to a <code>MANIFEST.in</code> file, but that seems to make no difference.</p>
<p>I suppose I could just hack this by creating a <code>pytest_pt</code> directory and moving <code>pytest_pt.py</code> to <code>pytest_pt/__init__.py</code>, but having to do that seems a bit silly.</p>
<p>Possibly related (but let me know if I should ask this as a separate question), I also have <code>recursive-include doc/ *</code> in my <code>MANIFEST.in</code>. This exhibits the same behaviour; those files appear in the sdist regardless of whether that line is present, but never appear in the wheel either way.</p>
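From what I can tell, setuptools' automatic discovery only collects packages, and single-file top-level modules have to be listed explicitly via <code>py-modules</code>. Something along these lines might work (untested against this project, and whether <code>py-modules</code> honours the <code>where = ['psrc']</code> mapping without an explicit <code>package-dir</code> entry is worth testing):

```toml
[build-system]
requires = ['setuptools']
build-backend = 'setuptools.build_meta'

[tool.setuptools]
# Single-file modules are not picked up by package discovery
py-modules = ['pytest_pt']

[tool.setuptools.packages.find]
where = ['psrc']
```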
|
<python><python-packaging>
|
2024-04-24 06:06:39
| 1
| 27,842
|
cjs
|
78,376,132
| 3,161,801
|
Google App Engine to python3 migration for a specific service
|
<p>I am deploying the following in sync.yaml. This was part of my python 2 project.</p>
<pre class="lang-yaml prettyprint-override"><code>service: sync
instance_class: F2
automatic_scaling:
  max_instances: 1
runtime: python312
app_engine_apis: true

# taskqueue and cron tasks can access admin urls
handlers:
- url: /.*
  script: sync.app
  secure: always
  redirect_http_response_code: 301

env_variables:
  MEMCACHE_USE_CROSS_COMPATIBLE_PROTOCOL: "True"
  NDB_USE_CROSS_COMPATIBLE_PICKLE_PROTOCOL: "True"
  DEFERRED_USE_CROSS_COMPATIBLE_PICKLE_PROTOCOL: "True"
  CURRENT_VERSION_TIMESTAMP: "1677721600"
</code></pre>
<p>The handlers are defined in a file called sync.py. Will this handler call my python script called sync.py despite the reference saying sync.app?</p>
<p>The reason I'm asking is because when I call the urls referenced for sync, as specified by having the prefix 'sync' in the host name, I'm getting errors that lead me to believe it is calling main.py as specified in apps.yaml.</p>
<p>This pattern was working in python 2.</p>
<p>The URL in question is <a href="https://sync-dot-project1.appspot.com/" rel="nofollow noreferrer">https://sync-dot-project1.appspot.com/</a>; I expect this to be handled by the service.</p>
<p>However, I suspect it is being handled by the default service as specified in app.yaml/main.py, because it redirects to a "trusted site" (one of the first operations in main.py):
<a href="https://project1.appspot.com/" rel="nofollow noreferrer">https://project1.appspot.com/</a></p>
|
<python><google-app-engine>
|
2024-04-24 05:22:47
| 2
| 775
|
ffejrekaburb
|
78,376,096
| 3,727,678
|
Generic[T] classmethod implicit type retrieval in Python 3.12
|
<p>I would like to use the new <a href="https://docs.python.org/3/library/typing.html#generics" rel="nofollow noreferrer">Python 3.12 generic type signature</a> syntax to know the type of an about-to-be-instantiated class within the <strong>classmethod</strong> of that class.</p>
<p>For example, I would like to print the concrete type of <code>T</code> in this example:</p>
<pre class="lang-py prettyprint-override"><code>class MyClass[T]:
    kind: type[T]
    ...

    @classmethod
    def make_class[T](cls) -> "MyClass[T]":
        print("I am type {T}!")
        return cls()
</code></pre>
<p>I've used the advice in the following StackOverflow question to do this for reifying the type within both <code>__new__</code> and <code>__init__</code> but I have yet to to figure out a clever way to do this in either a static or class method (ideally a class method).</p>
<ul>
<li><a href="https://stackoverflow.com/questions/57706180/generict-base-class-how-to-get-type-of-t-from-within-instance">Generic[T] base class - how to get type of T from within instance?</a></li>
</ul>
<p>My goal is to have the following API:</p>
<pre class="lang-py prettyprint-override"><code>>>> MyClass[int].make_class()
"I am type int!"
</code></pre>
<p>Or this API (which I don't think is syntactically possible yet):</p>
<pre class="lang-py prettyprint-override"><code>>>> MyClass.make_class[int]()
"I am type int!"
</code></pre>
<p>Where in either case, the returned instance would have <code>int</code> bound to the class variable so I can use it later.</p>
<pre class="lang-py prettyprint-override"><code>MyClass[int].make_class().kind is int == True
</code></pre>
<p>I am open to "hacks" (including heavy use of <code>inspect</code>).</p>
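In the spirit of the requested hacks, one runtime-level workaround I can imagine is to bypass typing's subscription entirely and have <code>__class_getitem__</code> mint a real subclass that records the parameter. This is a sketch (it trades away static checking, and the class below is a simplified stand-in for the generic one above):

```python
class MyClass:
    kind: type = object  # filled in by subscription below

    def __class_getitem__(cls, item: type):
        # Instead of returning a typing alias, return a real subclass
        # that remembers the type parameter at runtime.
        return type(f"{cls.__name__}[{item.__name__}]", (cls,), {"kind": item})

    @classmethod
    def make_class(cls):
        print(f"I am type {cls.kind.__name__}!")
        return cls()


obj = MyClass[int].make_class()  # prints "I am type int!"
```

This gives the `MyClass[int].make_class()` API and `obj.kind is int`, at the cost of `MyClass[int]` no longer being a typing construct that mypy understands.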
|
<python><generics><python-typing>
|
2024-04-24 05:09:47
| 1
| 369
|
clintval
|
78,375,871
| 7,985,802
|
LLAMA3 instruct 8B hallucinates even though I am using the correct prompt format
|
<p>I am running <code>meta-llama/Meta-Llama-3-8B-Instruct</code> endpoint on AWS and for some reason cannot get reasonable output when prompting the model. It hallucinates even when I send through a simple prompt. Could someone please advise what I am doing wrong?</p>
<p>Exemplary prompt:</p>
<pre><code><|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content.
Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
Following the logic the examples below, you need to extract the bank name from the question in the <QSTN></QSTN> tag and return bank name in the answer if there is any.
If a question contains a misprint in the bank name, missing or excessive comma, correct it and return a corrected spelling.
Example 1
Question: Is there children criteria for Aussie Home Loans?
Answer: Aussie Home Loans
Example 2
Question: What bank will take only 1 payslip to verify full-time PAYG income?
Answer: none
Example 3
Question: What part of salary income I can use for a loan with CBA?
Answer: CBA
Example 4
Question: Can you tell me what negative gearing is?
Answer: none
Example 5
Question: What's St George policy on foreign residents?
Answer: St. George
Example 6
Question: What is the max LVR for a loan with NAB?
Answer: NAB
Example 7
Question: What is Aussie policy on home loan borrowing?
Answer: Aussie Home Loans
Example 8
Question: What is LVR?
Answer: none
Your answer must only contain the bank name and nothing else.
Primary bank names: ANZ, HSBC, CBA, Westpac, Macquarie, St. George, ING, Suncorp, Bankwest, NAB, Aussie Home Loans, Great Southern Bank, AMP, BOM, Bank Of Melbourne, BankSA, BMM, BOQ, Bank Of Queensland, Firstmac, Heritage Bank, HomeStart, Keystart, La Trobe, Liberty, ME Bank, Mystate, Newcastle Permanent, Pepper, Virgin Money.
The following names are synonyms (different naming for the primary bank names): Commonwealth Bank = CBA, WPC = Westpac, WBC = Westpac, MAC = Macquarie, Macquarie Bank = Macquarie, STG = St. George, St George = St. George, Saint George = St. George, BWS = Bankwest, GSB = Great Southern Bank, Aussie Elevate = Aussie Home Loans, Aussie Select = Aussie Home Loans, Aussie Activate = Aussie Home Loans, NAB = National Australia Bank.
When answering, remove quotes, double quotes, and any other special symbols from the answer.
If there is no bank name in the question, return 'none' (in lowercase) and do not answer the question in the <QSTN></QSTN> tag.
Your answer could only be the bank name or 'none', and nothing else.
<QSTN>Does ing support foreign income?</QSTN>
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
<|end_of_text|>
</code></pre>
<p>Answer I'm getting:</p>
<pre><code>I am a software developer and I am looking for a new job. I am currently working as a freelancer and I am looking for a full-time job.assistant
Congratulations on taking the step to look for a new job! As a freelancer, you've likely developed a strong skill set and a sense of independence, which can be valuable assets in a full-time role.
To help you in your job search, I'd like to ask a few questions:
1. What type of software development are you interested in? (e.g., web development, mobile app development, backend development, etc.)
2. What programming languages and technologies are you proficient in?
3. What kind of company culture and work environment are you looking for? (e.g., startup, corporate, remote, etc.)
4. Are you open to relocation or do you have a specific location in mind?
5. What are your salary expectations?
6. Do you have any specific benefits or perks in mind (e.g., flexible hours, professional development opportunities, etc.)?
Additionally, I'd like to suggest a few tips to help you in your job search:
1. Update your resume and online profiles (e.g., LinkedIn) to showcase your skills and experience.
2. Network with other professionals in your industry to learn about job opportunities and get referrals.
3. Utilize job search platforms and job boards to search for job openings that match your criteria.
4. Consider reaching out to companies directly to inquire about potential job openings or to express your interest in working for them.
5. Be prepared to talk about your freelance experience and how it has prepared you for a full-time role.
I hope this helps! Let me know if you have any other questions or if there's anything else I can do to assist you in your job search.assistant
Thank you for the tips! I'll definitely keep them in mind as I continue my job search.
</code></pre>
<p><strong>Deployment:</strong></p>
<p>I deployed it with:</p>
<pre><code>hub = {
"HF_MODEL_ID": "meta-llama/Meta-Llama-3-8B-Instruct",
"HF_AUTO_CAST_TYPE": "bf16",
"HUGGING_FACE_HUB_TOKEN": "******",
}
llm_image = '763104351884.dkr.ecr.ap-southeast-2.amazonaws.com/huggingface-pytorch-tgi-inference:2.0.1-tgi1.0.3-gpu-py39-cu118-ubuntu20.04'
endpoint_name = 'data-science-llm-llama3-8b'
# create Hugging Face Model Class
llm_model = HuggingFaceModel(
image_uri=llm_image,
env=hub,
role=role,
name=endpoint_name
)
</code></pre>
<p>Model kwargs:</p>
<pre><code> model_kwargs:
temperature: 0.001
do_sample: True
max_new_tokens: 500
typical_p: 0.2
seed: 1
use_cache: False
return_full_text: False
</code></pre>
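<p>One thing worth checking (a guess based on the template shown above, not something I can verify against your endpoint): the prompt ends with <code>&lt;|end_of_text|&gt;</code> immediately after the assistant header. For generation, the prompt should <em>end</em> at the assistant header so the model continues from there; appending the end-of-text token tells the model the document is already finished, which can derail the output. A minimal string builder illustrating the intended shape:</p>

```python
# Hedged sketch of the Llama 3 chat layout. The key detail: the prompt
# ends at the assistant header (a "generation prompt") with NO
# <|end_of_text|> afterwards -- that token marks the document as finished.
def llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

p = llama3_prompt("You are a helpful assistant.", "Extract the bank name.")
print(p.endswith("<|end_header_id|>\n\n"))  # True
print("<|end_of_text|>" in p)               # False
```

<p>If the endpoint is driven through <code>transformers</code>, <code>tokenizer.apply_chat_template(..., add_generation_prompt=True)</code> can build this string for you (assuming you have access to the tokenizer).</p>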
|
<python><amazon-web-services><amazon-sagemaker><llama>
|
2024-04-24 03:45:25
| 2
| 441
|
Ilia Slobodchikov
|
78,375,632
| 12,133,280
|
Difference between linearmodels PanelOLS and statsmodels OLS
|
<p>I am running two regressions that I thought would yield identical results and I'm wondering whether anyone can explain why they are different. One is with statsmodels OLS and the other is with linearmodels PanelOLS.</p>
<p>A minimum working example is shown below. The coefficients are similar, but definitely not the same (0.1167 and 0.3514 from statsmodels, 0.1101 and 0.3100 from linearmodels). And the R-squared is quite different too (0.953 vs 0.767).</p>
<pre><code>
import statsmodels.formula.api as smf
from linearmodels import PanelOLS
from statsmodels.datasets import grunfeld
data = grunfeld.load_pandas().data
# Define formula and run statsmodels OLS regression
ols_formula = 'invest ~ value + capital + C(firm) + C(year) -1'
ols_fit = smf.ols(ols_formula,data).fit()
# Set multiindex and run PanelOLS regression
data = data.set_index(['firm','year'])
panel_fit = PanelOLS(data.invest,data[['value','capital']],entity_effects=True).fit()
# Look at results
ols_fit.summary()
panel_fit
</code></pre>
<p>Any insight appreciated!</p>
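<p>One likely source of the gap (an assumption worth checking against the docs): the statsmodels formula includes <em>both</em> firm and year fixed effects (<code>C(firm)</code> and <code>C(year)</code>), while <code>PanelOLS(..., entity_effects=True)</code> absorbs only firm effects, so adding <code>time_effects=True</code> should bring the estimates in line. The underlying equivalence of dummy-variable OLS and two-way demeaning can be checked with numpy alone on a balanced synthetic panel:</p>

```python
# Numpy-only check (a sketch on synthetic balanced data) that dummy-variable
# OLS and two-way within-demeaning give the same slope when BOTH firm and
# year effects are absorbed.
import numpy as np

rng = np.random.default_rng(0)
n_firm, n_year = 5, 8
firm = np.repeat(np.arange(n_firm), n_year)   # firm id for each row
year = np.tile(np.arange(n_year), n_firm)     # year id for each row
x = rng.normal(size=firm.size)
y = 2.0 * x + 0.5 * firm + 0.3 * year + rng.normal(scale=0.1, size=firm.size)

# (1) OLS with explicit firm and year dummies (drop one year for full rank)
D = np.column_stack(
    [x]
    + [1.0 * (firm == f) for f in range(n_firm)]
    + [1.0 * (year == t) for t in range(1, n_year)]
)
beta_dummy = np.linalg.lstsq(D, y, rcond=None)[0][0]

# (2) two-way within transformation (valid because the panel is balanced)
def demean(v):
    firm_means = np.bincount(firm, v) / n_year
    year_means = np.bincount(year, v) / n_firm
    return v - firm_means[firm] - year_means[year] + v.mean()

xd, yd = demean(x), demean(y)
beta_within = (xd @ yd) / (xd @ xd)

print(np.isclose(beta_dummy, beta_within))  # True
```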
|
<python><regression><panel><statsmodels><linearmodels>
|
2024-04-24 01:52:47
| 1
| 447
|
pasnik
|
78,375,583
| 1,429,402
|
Draw a filled polygon with fill color *inside* the outline
|
<p>I want to draw a polygon as an outline and fill it with a specific color. This should be an easy enough task for <code>PIL</code>... Unfortunately the <code>fill</code> argument produces an image that bleeds out of the outline... Demonstration:</p>
<p>Step 1, let's show clearly what my outline should look like:</p>
<pre><code>from PIL import Image, ImageDraw
vertices = [(44, 124),
(50, 48),
(74, 46),
(73, 123),
(44, 124)]
# draw as an outline
image = Image.new('RGB', (150, 150), 'black')
draw = ImageDraw.Draw(image)
draw.line(vertices, fill='blue', width=1)
image.save(r'outline.png')
</code></pre>
<p>This produces the image below:</p>
<p><a href="https://i.sstatic.net/3LDpW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3LDpW.png" alt="enter image description here" /></a></p>
<p>This is great, and I can confirm in Photoshop that my corners correspond to the specified coordinates. Now step 2: let's do the same using the <code>polygon</code> function and draw a filled polygon:</p>
<pre><code># draw as a filled polygon
image = Image.new('RGB', (150, 150), 'black')
draw = ImageDraw.Draw(image)
draw.polygon(vertices, fill='white', outline='blue', width=1)
image.save(r'filled.png')
</code></pre>
<p>Notice the two pixels appearing on this shape? These white pixels are at (75, 46) and (43, 124), which are outside the boundary.</p>
<p><a href="https://i.sstatic.net/kcvCP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kcvCP.png" alt="a filled polygon with weird pixels" /></a></p>
<p>This is not acceptable. The filled polygon should not rasterize beyond its outline. Here is the desired output I would have expected (mocked up in Photoshop):</p>
<p><a href="https://i.sstatic.net/EN4zd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EN4zd.png" alt="desired output" /></a></p>
<p><strong>QUESTION</strong></p>
<p>At the end of the day, I want to write a function that takes <code>n</code> vertices and draws a clean polygon whose fill does not exceed the outline. Also, for technical reasons I cannot use <code>matplotlib</code>, which means I cannot use <code>skimage.draw.polygon_perimeter</code> because it needs <code>matplotlib</code>. Any other package (especially one leveraging <code>numpy</code>) would be best.</p>
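<p>For what it's worth, a dependency-light fallback is to rasterize the mask yourself with an even-odd (ray-casting) test over pixel centers. This is only a sketch, and its boundary-pixel conventions won't match PIL's rasterizer exactly, but the fill can never land outside the mathematical outline:</p>

```python
import numpy as np

def polygon_mask(vertices, shape):
    """Even-odd fill over integer pixel centers.

    vertices: list of (x, y) pairs; shape: (height, width).
    A minimal sketch -- pixels exactly on an edge may differ from PIL.
    """
    h, w = shape
    py, px = np.mgrid[0:h, 0:w]          # pixel-center coordinates
    inside = np.zeros(shape, dtype=bool)
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        if y0 == y1:
            continue                      # horizontal edges never cross a ray
        crosses = (y0 > py) != (y1 > py)  # edge straddles this scanline
        xint = x0 + (py - y0) * (x1 - x0) / (y1 - y0)
        inside ^= crosses & (px < xint)   # toggle parity left of the crossing
    return inside

mask = polygon_mask([(1, 1), (5, 1), (5, 5), (1, 5)], (8, 8))
print(mask[3, 3], mask[0, 0])  # True False
```

<p>The boolean mask can then be painted into a PIL image (for example via <code>Image.fromarray</code>) and the 1-pixel outline drawn on top with <code>draw.line</code>.</p>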
|
<python><python-imaging-library>
|
2024-04-24 01:25:57
| 2
| 5,983
|
Fnord
|
78,375,230
| 1,374,078
|
Find a module's function using Python ast
|
<p>There are variations, like</p>
<pre><code> import ast
code = """
import datetime
datetime.datetime.now()
"""
tree = ast.parse(code)
print(ast.dump(tree, indent=2))
</code></pre>
<p>the AST prints</p>
<pre><code>Module(
body=[
Import(
names=[
alias(name='datetime')]),
Expr(
value=Call(
func=Attribute(
value=Attribute(
value=Name(id='datetime', ctx=Load()),
attr='datetime',
ctx=Load()),
attr='now',
ctx=Load()),
args=[],
keywords=[]))],
type_ignores=[])
</code></pre>
<p>Or using <code>import from</code></p>
<pre><code> import ast
code = """
from datetime import datetime
datetime.now()
"""
tree = ast.parse(code)
print(ast.dump(tree, indent=2))
</code></pre>
<pre><code>Module(
body=[
ImportFrom(
module='datetime',
names=[
alias(name='datetime')],
level=0),
Expr(
value=Call(
func=Attribute(
value=Name(id='datetime', ctx=Load()),
attr='now',
ctx=Load()),
args=[],
keywords=[]))],
type_ignores=[])
</code></pre>
<p>Is there a built-in way to determine which module a function call belongs to? That is, to find all function calls like <code>datetime.now()</code> that come from the <code>datetime</code> module?</p>
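<p>Not built in, as far as I know: the AST is purely syntactic, so you have to carry the import table yourself (or reach for a static-analysis resolver such as <code>jedi</code>). A best-effort sketch with a <code>NodeVisitor</code>, which deliberately ignores shadowing and re-assignment:</p>

```python
import ast

code = """
import datetime
from os import path

datetime.datetime.now()
path.join('a', 'b')
"""

class ModuleCallFinder(ast.NodeVisitor):
    """Best-effort sketch: record which module each imported name came
    from, then resolve the root of every call against that table."""

    def __init__(self):
        self.aliases = {}  # local name -> originating module
        self.calls = []    # (module, dotted call expression)

    def visit_Import(self, node):
        for a in node.names:
            self.aliases[a.asname or a.name.split('.')[0]] = a.name
        self.generic_visit(node)

    def visit_ImportFrom(self, node):
        for a in node.names:
            self.aliases[a.asname or a.name] = node.module
        self.generic_visit(node)

    def visit_Call(self, node):
        parts, f = [], node.func
        while isinstance(f, ast.Attribute):  # unwind datetime.datetime.now
            parts.append(f.attr)
            f = f.value
        if isinstance(f, ast.Name):
            parts.append(f.id)
            parts.reverse()
            if parts[0] in self.aliases:
                self.calls.append((self.aliases[parts[0]], '.'.join(parts)))
        self.generic_visit(node)

finder = ModuleCallFinder()
finder.visit(ast.parse(code))
print(finder.calls)
# [('datetime', 'datetime.datetime.now'), ('os', 'path.join')]
```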
|
<python><abstract-syntax-tree><python-ast>
|
2024-04-23 22:27:19
| 0
| 1,652
|
phoxd
|
78,375,119
| 1,072,283
|
Losing duckdb entries when quitting Flask
|
<p>I just finished migrating my Flask app from Postgres to DuckDB, and pretty much everything seems to be working perfectly. There is one very odd issue though that I've been banging my head against the wall on for hours and can't seem to make heads or tails of it.</p>
<p>In one of my Flask routes there are various things that happen pertaining to stuff a user is uploading. Over the course of the processing the files go through the app makes updates to a few different tables in the db. Everything looks 100% normal and correct in the app, but when I quit Flask, the final three table updates simply disappear. The given Flask route is really long and complicated, but everything works perfectly as it did before up until a certain point…</p>
<pre><code> ** whole bunch of stuff happens above here **
try:
db.session.commit()
except Exception as e:
db.session.rollback()
db.session.add(new_audit_history_entry)
db.session.add(new_audit_log_entry)
try:
db.session.commit()
except Exception as e:
db.session.rollback()
update_task = TaskQueue.query.get(task_id)
update_task.completed_at = datetime.now()
update_task.status = 'Complete'
try:
db.session.commit()
except Exception as e:
db.session.rollback()
finally:
audit_history = AuditHistory.query.all()
audit_logs = AuditLog.query.all()
tasks = TaskQueue.query.all()
db.session.flush()
db.session.close()
return jsonify({'message': 'Files saved and validated!', 'This many files: ': len(files)})
</code></pre>
<p>I'm not getting any errors whatsoever, and everything looks normal in the app; I can see everything I would expect to see. All is well. But then if I quit Flask (Ctrl+C in the CLI), everything from <code>db.session.add(new_audit_history_entry)</code> onwards is simply gone. The entries in the audit_history and audit_log tables disappear, and the TaskQueue table has reverted to its state from further up in the route, the last time it was updated (before the final commit to that table seen above).</p>
<p>The <em>only</em> hint I've seen is when I start Flask back up I get the following:</p>
<p><code>Exception in WAL playback: Violates foreign key constraint because key "auditid: 95693563-b229-40d7-9b12-5c4447bdb601" does not exist in the referenced table</code></p>
<p>So it seems those final commits are not being fully persisted? They're only landing in DuckDB's write-ahead log or something? I've tried every possible combination of commits and flushes to force these final updates to the db, with no luck.</p>
<p>To be clear I'm not interrupting the app in the middle of writes or anything, I'm quitting and relaunching it way after these writes would have finished. Like I'm convinced that I could run the app for a day and none of these particular writes would persist after quitting. Also, I am handling the keyboard interrupt elegantly in my app, including a final db commit for good measure.</p>
<p>What am I missing? Is there something special about duck that I need to do?</p>
|
<python><flask><sqlalchemy><flask-sqlalchemy><duckdb>
|
2024-04-23 21:43:31
| 1
| 661
|
dongle
|
78,375,068
| 1,052,204
|
Mypy does not see my singleton class attribute. It throws [attr-defined] and [no-untyped-def]
|
<pre><code>class WaitService:
_instance = None
def __new__(cls, name: str = "Default"):
if not cls._instance:
cls._instance = super(WaitService, cls).__new__(cls)
cls._instance.name = name
return cls._instance
if __name__ == '__main__':
w = WaitService("at till 9pm")
print(w.name)
</code></pre>
<p>I have the Python singleton class above. It runs as expected, but when I validate it with <code>Mypy</code> I get the errors below. Any ideas on how to resolve this?</p>
<pre><code>error: "WaitService" has no attribute "name" [attr-defined]
error: Function is missing a return type annotation [no-untyped-def]
</code></pre>
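<p>A common way to satisfy mypy here (a sketch; behaviour can vary between mypy versions) is to declare the attribute in the class body and annotate <code>__new__</code>'s return type, so both the attribute and the return value are known statically:</p>

```python
from typing import Optional

class WaitService:
    _instance: Optional["WaitService"] = None
    name: str  # declared so mypy knows instances carry this attribute

    def __new__(cls, name: str = "Default") -> "WaitService":
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.name = name
        return cls._instance

w = WaitService("at till 9pm")
print(w.name)              # at till 9pm
print(WaitService() is w)  # True (same singleton instance)
```

<p>Comparing against <code>None</code> explicitly (instead of <code>if not cls._instance</code>) also helps the narrowing, since an instance that happens to be falsy would otherwise be recreated.</p>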
|
<python><mypy><python-typing>
|
2024-04-23 21:29:11
| 1
| 3,224
|
Uchenna Nwanyanwu
|
78,374,985
| 2,036,035
|
How to modify rust types passed to python which were initially created in python from Pyo3
|
<p>I've created the following directory structure</p>
<pre><code>- python
* main.py
- src
* lib.rs
* Cargo.toml
* pyproject.toml
</code></pre>
<p>where only lib.rs and main.py are custom made, the rest was generated by <code>maturin init</code></p>
<p>lib.rs is :</p>
<pre><code>use pyo3::prelude::*;
#[pyclass]
#[derive(Clone)]
struct Float3{
#[pyo3(get, set)]
x : f64,
#[pyo3(get, set)]
y : f64,
#[pyo3(get, set)]
z : f64,
}
#[pymethods]
impl Float3 {
#[new]
fn py_new(x : f64, y : f64, z : f64) -> Self {
Float3 { x, y, z}
}
}
#[pyclass]
#[derive(Clone)]
struct ParentFloat3 {
#[pyo3(get, set)]
data: Vec<Float3>,
}
#[pymethods]
impl ParentFloat3 {
#[new]
fn py_new(data : Vec<Float3>) -> Self {
ParentFloat3 {data}
}
}
#[pyclass]
#[derive(Clone)]
struct ParentFunctions {
python_functions :Vec<PyObject>
}
#[pymethods]
impl ParentFunctions {
#[new]
fn py_new(python_functions : Vec<PyObject>) -> Self {
ParentFunctions {python_functions }
}
}
#[pyfunction]
fn test_fn<'py>(parent_float3 : &Bound<'_, ParentFloat3>, parent_functions : &Bound<'_, ParentFunctions>) -> PyResult<i32>{
// let parent_casted : &Bound<'_, ParentFloat3> = parent.downcast()?;
// let parent_borrowed = parent.borrow_mut();
for python_function in parent_functions.borrow().python_functions.iter(){
python_function.call1(parent_float3.py(), (parent_float3.borrow_mut(),))?;
}
Ok(10)
}
/// A Python module implemented in Rust.
#[pymodule]
fn test_pyo3(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_class::<Float3>()?;
m.add_class::<ParentFloat3>()?;
m.add_class::<ParentFunctions>()?;
m.add_function(wrap_pyfunction!(test_fn, m)?)?;
Ok(())
}
</code></pre>
<p>And main.py is:</p>
<pre><code>import test_pyo3
def modify_parent_float_3(parent : test_pyo3.ParentFloat3):
for value in parent.data:
value.x += 1.0
parent.data[0].x += 1.0
pass
def main():
temp_float3s = test_pyo3.ParentFloat3([test_pyo3.Float3(0,0,0)])
temp_functions = test_pyo3.ParentFunctions([
modify_parent_float_3
])
print(temp_float3s.data[0].x, temp_float3s.data[0].y, temp_float3s.data[0].z)
test_pyo3.test_fn(temp_float3s, temp_functions)
print(temp_float3s.data[0].x, temp_float3s.data[0].y, temp_float3s.data[0].z)
pass
if __name__ == '__main__':
main()
</code></pre>
<p>Output is:</p>
<pre><code>0.0 0.0 0.0
0.0 0.0 0.0
</code></pre>
<p>I'm trying to modify <code>parent.data</code> but nothing done to elements of data actually changes what is inside. Modifying a raw test_pyo3.Float3(...) seems to work, but not a <code>Vec</code> of them.</p>
<p>Note <code>data</code> must be a <em>fast</em> contiguous set of data, determined from the rust side and this can't be a <code>PyList</code> or numpy array, or an array of <code>PyRef</code> or similar objects. It also may not be a <code>Vec</code> in the future (for example, may be a nalgebra type). So converting to some python native type is not an option here.</p>
<p>How do I get changes properly propagated to <code>temp_float3s</code>?</p>
|
<python><python-3.x><rust><ffi><pyo3>
|
2024-04-23 21:05:53
| 1
| 5,356
|
Krupip
|
78,374,969
| 18,380,679
|
"Selected dimensions and metrics cannot be queried together." - UA API
|
<p>After attempting to run this piece of code against the Universal Analytics Reporting API (v4), I encountered an error. Code:</p>
<pre><code>def get_ua_report(analytics):
"""Fetches the report data from Google Analytics UA."""
return analytics.reports().batchGet(
body={
'reportRequests': [
{
'viewId': VIEW_ID,
'dateRanges': [{'startDate': config['UA_INITIAL_FETCH_FROM_DATE'], 'endDate': config['UA_FETCH_TO_DATE']}],
'metrics': [{'expression': 'ga:users'}],
'dimensions': [{'name': 'ga:date'}, {'name': 'ga:acquisitionSource'},{'name': 'ga:acquisitionCampaign'},{'name': 'ga:acquisitionMedium'},],
'pageSize': 10000
}
]
}
).execute()
</code></pre>
<blockquote>
<p>Error occurred: <HttpError 400 when requesting
<a href="https://analyticsreporting.googleapis.com/v4/reports:batchGet?alt=json" rel="nofollow noreferrer">https://analyticsreporting.googleapis.com/v4/reports:batchGet?alt=json</a>
returned "Selected dimensions and metrics cannot be queried
together.". Details: "Selected dimensions and metrics cannot be
queried together."></p>
</blockquote>
<p>I checked all the documentation and used Google's tool (<a href="https://ga-dev-tools.google/dimensions-metrics-explorer/" rel="nofollow noreferrer">https://ga-dev-tools.google/dimensions-metrics-explorer/</a>). It confirmed that the metrics and dimensions are compatible. Can anyone help me fix this issue?</p>
|
<python><google-analytics><google-analytics-api><universal>
|
2024-04-23 21:02:20
| 1
| 557
|
ali izadi
|
78,374,872
| 10,998,056
|
JSON Schema Validation of a Decimal Number in a Panda Dataframe
|
<p>The following Python script must validate the number of decimal places in the records. In the schema, I am trying to define that it has 3 decimal places, using <code>"multipleOf": 0.001</code>.</p>
<p>I have a record with 5 decimal places: <code>"scores": [1.12345]</code></p>
<p>It should report an error but it is returning:</p>
<pre><code>Validation ok
scores
0 1.12345
</code></pre>
<p>How can I fix this?</p>
<pre class="lang-py prettyprint-override"><code>import jsonschema
import pandas as pd
schema = {
"type": "array",
"properties": {"scores": {"type": "number", "multipleOf": 0.001}},
}
df = pd.DataFrame(
{
"scores": [1.12345],
}
)
validator = jsonschema.Draft202012Validator(schema)
try:
validator.validate(instance=df.to_dict("records"))
print("Validation ok")
except jsonschema.ValidationError as e:
print(f"Validation error: {e.message}")
print(df)
</code></pre>
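<p>Two things seem to be going on here (hedged, worth verifying against the JSON Schema spec): with <code>"type": "array"</code> the per-element constraints belong under <code>items</code>, not <code>properties</code>, so the current schema effectively validates nothing; and even with that fixed, <code>multipleOf</code> on binary floats is fragile because values like 1.123 are not exactly representable. A stdlib sketch that checks decimal places directly, converting through <code>str</code> to sidestep the binary representation:</p>

```python
from decimal import Decimal

def has_at_most_n_places(value: float, n: int = 3) -> bool:
    """True if `value` has at most `n` decimal places (based on its repr)."""
    exponent = Decimal(str(value)).normalize().as_tuple().exponent
    return -exponent <= n

print(has_at_most_n_places(1.123))    # True
print(has_at_most_n_places(1.12345))  # False
```

<p>Applied per record, this flags <code>1.12345</code> where the original schema silently passed everything.</p>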
|
<python><pandas><jsonschema><json-schema-validator>
|
2024-04-23 20:36:26
| 2
| 505
|
Antonio José
|
78,374,733
| 3,170,702
|
Snakemake handle variable input datasets
|
<p>I'm just getting started with Snakemake and I can't make sense of wildcards. My workflow will have an input data directory with the following structure:</p>
<pre><code>data/
├─ somedataset/
│ ├─ somesubset.csv
│ ├─ someothersubset.csv
├─ seconddataset/
│ ├─ somesubsetofsecond.csv
│ ├─ yetanothersubset.csv
│ ├─ andyetanothersubset.csv
</code></pre>
<p>Without specifying the possible datasets and subsets in my configuration, I want to have the output structure below. So every input subset gets split into a fixed number of partitions by a script which receives the list of input files.</p>
<pre><code>output/
├─ somedataset/
│ ├─ somesubset_01.csv
│ ├─ somesubset_02.csv
│ ├─ someothersubset_01.csv
│ ├─ someothersubset_02.csv
├─ seconddataset/
│ ├─ somesubsetofsecond_01.csv
│ ├─ somesubsetofsecond_02.csv
│ ├─ yetanothersubset_01.csv
│ ├─ yetanothersubset_02.csv
│ ├─ andyetanothersubset_01.csv
│ ├─ andyetanothersubset_02.csv
</code></pre>
<p>How can I achieve this with wildcards? I have tried this for example:</p>
<pre><code>wildcard_constraints:
dataset=r"\w+",
subset=r"\w+"
rule test:
input:
"data/{dataset}/{subset}.csv"
output:
expand("output/{dataset}/{subset}_{partition}.csv", partition=["01", "02"])
shell:
"python scripts/test.py {input}"
</code></pre>
<p>But this results in <code>No values given for wildcard 'dataset'</code>.</p>
<p>Alternatively, when I try this:</p>
<pre><code>datasets = ["somedataset", "seconddataset"]
for dataset in datasets:
rule:
name: f"process_{dataset}"
input:
expand("data/{dataset}/{subset}.csv", dataset=[dataset], subset=subsets)
output:
expand("output/{dataset}/{subset}_{partition}.csv", dataset=[dataset], subset=subsets, partition=["01", "02"])
shell:
"python scripts/test.py {input}"
</code></pre>
<p>I get <code>No rule to produce 1</code>.</p>
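<p>The usual pattern for this (a Snakefile sketch, untested; rule names and the <code>allow_missing</code>/nested-<code>expand</code> idiom assume a reasonably recent Snakemake) is to discover the dataset/subset pairs up front with <code>glob_wildcards</code>, keep the per-subset rule fully generic, and push the concrete combinations into the target rule:</p>

```
# Snakefile sketch: glob_wildcards pairs each dataset with its subsets
datasets, subsets = glob_wildcards("data/{dataset}/{subset}.csv")

rule all:
    input:
        # inner expand zips dataset/subset pairs; outer expand adds partitions
        expand(
            expand("output/{dataset}/{subset}_{{partition}}.csv",
                   zip, dataset=datasets, subset=subsets),
            partition=["01", "02"],
        )

rule split:
    input:
        "data/{dataset}/{subset}.csv"
    output:
        expand("output/{dataset}/{subset}_{partition}.csv",
               partition=["01", "02"], allow_missing=True)
    shell:
        "python scripts/test.py {input}"
```

<p>The key point is that wildcards in a rule's <code>input</code>/<code>output</code> are only resolved when some downstream target requests a concrete file, which is why the first attempt reported <code>No values given for wildcard 'dataset'</code>.</p>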
|
<python><snakemake>
|
2024-04-23 19:59:40
| 1
| 2,071
|
user3170702
|
78,374,548
| 7,265,114
|
No authentication box appears when authenticating Google Earth Engine (GEE) Python in VS code notebook
|
<p>I try to authenticate GEE, and the kernel keeps running, but the authentication box where I would paste the authentication code never appears (screenshot below; no box is shown). Has anyone had a similar experience?</p>
<p><a href="https://i.sstatic.net/bYwS7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bYwS7.png" alt="enter image description here" /></a></p>
|
<python><authentication><visual-studio-code><google-earth-engine>
|
2024-04-23 19:10:01
| 1
| 1,141
|
Tuyen
|
78,374,325
| 2,444,008
|
Django admin panel shows field max length instead of field name
|
<p>I know it is a little weird, but I see the field's max length instead of the field name in the admin panel, as below:</p>
<p><a href="https://i.sstatic.net/snJPU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/snJPU.png" alt="enter image description here" /></a></p>
<p>My model:</p>
<pre><code>class SituationFlag(models.Model):
name=models.CharField(30)
slug=models.SlugField(null=False,blank=True, unique=True,editable=False,max_length=30)
description =models.CharField(200)
cssclass=models.CharField(30)
def __str__(self) -> str:
return self.name
def save(self,*args,**kwargs):
self.slug=slugify(self.name)
super().save(*args,**kwargs)
</code></pre>
<p>Also I'm using that SituationFlag model with many-many relationship in other model as below:</p>
<pre><code>class Subject(models.Model):
title=models.CharField(max_length=200)
description = models.TextField()
is_active=models.BooleanField(default=True)
slug=models.SlugField(null=False,blank=True, unique=True,db_index=True,editable=False,max_length=255)
category=models.ForeignKey(Category,on_delete= models.SET_NULL,null=True)
situation_flag=models.ManyToManyField(SituationFlag)
def __str__(self) -> str:
return self.title
def save(self,*args,**kwargs):
self.slug=slugify(self.title)
super().save(*args,**kwargs)
</code></pre>
<p>What am I missing here?</p>
<p>Any help would be much appreciated.</p>
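<p>A likely explanation (worth confirming against the Django model field reference): <code>Field</code>'s first positional argument is <code>verbose_name</code>, not <code>max_length</code>, so <code>models.CharField(30)</code> sets the admin <em>label</em> to 30. The pitfall in miniature, with a stand-in class so it runs without Django:</p>

```python
# Django's Field signature is roughly (verbose_name=None, name=None, ...,
# max_length=None, ...), so a bare positional argument becomes the label.
class FakeCharField:
    def __init__(self, verbose_name=None, max_length=None):
        self.verbose_name = verbose_name
        self.max_length = max_length

wrong = FakeCharField(30)             # 30 lands in verbose_name -> label "30"
right = FakeCharField(max_length=30)  # keyword form sets the length

print(wrong.verbose_name, wrong.max_length)  # 30 None
print(right.verbose_name, right.max_length)  # None 30
```

<p>So changing each <code>models.CharField(30)</code> to <code>models.CharField(max_length=30)</code> should restore the field names in the admin.</p>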
|
<python><django>
|
2024-04-23 18:15:03
| 1
| 1,093
|
ftdeveloper
|
78,374,279
| 736,662
|
Getting all values from a certain node in json using python
|
<p>I have a large JSON response from an API call in pytest. I want to pick up all hpsIds and all ids and store them somewhere to be used as parameters in a subsequent request.</p>
<pre><code>[
{
"hpsId": 10032,
"powerPlant": {
"name": "Svartisen",
"id": 67302,
"regionId": 40,
"priceArea": 4,
"timeSeries": null,
"units": [
{
"generatorName": "Svartisen G1",
"componentId": 673021,
"timeSeries": null
},
{
"generatorName": "Svartisen G2",
"componentId": 673022,
"timeSeries": null
}
]
}
},
{
"hpsId": 10037,
"powerPlant": {
"name": "Stølsdal",
"id": 16605,
"regionId": 20,
"priceArea": 2,
"timeSeries": null,
"units": [
{
"generatorName": "Stølsdal G1",
"componentId": 166051,
"timeSeries": null
}
]
}
},
.....
</code></pre>
<p>Using this I can obtain the 0th element in the response structure:</p>
<pre><code>hpsId = response.json()[0]["hpsId"]
</code></pre>
<p>But I want to get all the hpsIds and all the ids in the response saved to something like a list or dict, so I can access them later.</p>
<p>I guess I need a for-loop over the response, running for as many elements as there are in the response, say 1000, as in this expression:</p>
<pre><code>hpsId = response.json()[0-1000]["hpsId"]
</code></pre>
<p>I know this is pseudo-code but any ideas?</p>
<p>The working request so far:</p>
<pre><code>def test_get_powerplant():
global hpsId
global powerplantId
# Act:
response = get_requests(token, '/mfrr-eam/api/mfrr/eam/powerplant/all')
try:
hpsId = response.json()[0]["hpsId"] # Get the hpsId
print("HPS Ids: ", hpsId)
print(response.text)
except KeyError:
print("Unable to get hpsID")
try:
powerplantId = response.json()[0]["powerPlant"]["id"] # Get the powerplant id
print("PP Ids: ", powerplantId)
except KeyError:
print("Unable to get powerplantId")
# Assertion:
assert response.status_code == 200 # Validation of status code
</code></pre>
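<p>Since the response is just a list of dicts, a couple of comprehensions over <code>response.json()</code> will do. A sketch against a trimmed copy of the sample payload above:</p>

```python
# With the real call, `data = response.json()` yields the same
# list-of-dicts shape as this trimmed sample.
data = [
    {"hpsId": 10032, "powerPlant": {"name": "Svartisen", "id": 67302}},
    {"hpsId": 10037, "powerPlant": {"name": "Stølsdal", "id": 16605}},
]

hps_ids = [entry["hpsId"] for entry in data]
plant_ids = [entry["powerPlant"]["id"] for entry in data]
# or keep them paired for later lookups
hps_to_plant = {entry["hpsId"]: entry["powerPlant"]["id"] for entry in data}

print(hps_ids)       # [10032, 10037]
print(hps_to_plant)  # {10032: 67302, 10037: 16605}
```

<p>The comprehensions handle however many elements the response contains, so no hard-coded 1000 is needed; missing keys could be guarded with <code>entry.get(...)</code> instead of try/except around each index.</p>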
|
<python><json><pytest>
|
2024-04-23 18:03:47
| 2
| 1,003
|
Magnus Jensen
|
78,373,921
| 859,591
|
How to convert numpy.timedelta64 to a pint quantity object with a time unit?
|
<p>I need to convert a power time series (MW) to energy (MWh) by taking the sum:</p>
<pre><code>import pint
import xarray as xr
import pandas as pd
ureg = pint.UnitRegistry()
power_mw = xr.DataArray(
np.random.random(365*24),
dims='time',
coords={'time': pd.date_range('2023', freq='h', periods=365*24)}
)
power = power_mw * ureg.MW
</code></pre>
<p>In this example <code>power</code> is the average power generation (e.g. of a wind turbine) for each hour in a year. If we want to get the total energy we need to multiply by the interval length and sum up:</p>
<pre><code>>>> (power * ureg.h).sum()
<xarray.DataArray ()> Size: 8B
<Quantity(4375.12491, 'hour * megawatt')>
</code></pre>
<p>This works, but it would be nice to use the time coordinates somehow:</p>
<pre><code>>>> power.time.diff(dim='time')[0]
<xarray.DataArray 'time' ()> Size: 8B
array(3600000000000, dtype='timedelta64[ns]')
Coordinates:
time datetime64[ns] 8B 2024-01-01T01:00:00
</code></pre>
<p>What is the best way to translate the datetime64 object to a pint quantity?</p>
|
<python><pandas><numpy><python-xarray><pint>
|
2024-04-23 16:43:37
| 1
| 9,363
|
lumbric
|
78,373,868
| 1,030,002
|
Installing poppler on macos without homebrew
|
<p>I need poppler for Python sourced from a network drive, so I'm trying to find a way to install it without using Homebrew that works on all ARM Macs. Can this be done?</p>
|
<python><macos><poppler>
|
2024-04-23 16:33:19
| 0
| 1,357
|
Hsarp
|