Dataset columns (name / dtype / min / max):

QuestionId          int64          74.8M                 79.8M
UserId              int64          56                    29.4M
QuestionTitle       stringlengths  15                    150
QuestionBody        stringlengths  40                    40.3k
Tags                stringlengths  8                     101
CreationDate        stringdate     2022-12-10 09:42:47   2025-11-01 19:08:18
AnswerCount         int64          0                     44
UserExpertiseLevel  int64          301                   888k
UserDisplayName     stringlengths  3                     30
77,137,153
1,171,746
How to install pip into a tmp directory?
<p>How to install pip into a tmp directory?</p> <p>I want to install pip into a tmp directory, install a few packages to a custom directory, then remove the tmp directory.</p> <p>I am creating an extension for LibreOffice that automatically installs <code>pip</code> via <code>get-pip.py</code> if it is not installed and then installs the extension's required packages. I have this working for Windows and Linux (LibreOffice sudo install).</p> <p>On the Linux LibreOffice Flatpak it seems it is not possible to automatically install pip using <code>get-pip.py</code>, and it seems that pip is not available otherwise.</p> <p>I am working on a workaround. My thinking is that if I can get access to a pip installer, then I can install packages to a custom directory in the extension's sub-directory. Then it should be a simple matter to add the sub-directory to the Python <code>sys.path</code>.</p> <p>So this leaves me trying to figure out how to install pip into a tmp directory when the extension is installed into a Flatpak version of LibreOffice.</p>
<python><pip><libreoffice>
2023-09-19 18:38:29
2
327
Amour Spirit
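One workable approach is to bootstrap pip into a throwaway directory with <code>get-pip.py</code>'s <code>--target</code> option (get-pip.py forwards pip install options), then run that pip as a module to install into the extension's own directory. A minimal sketch — the helper name, paths, and package list are illustrative, and the actual subprocess calls are left to the caller:

```python
import sys
import tempfile

def build_bootstrap_commands(get_pip_path, tmp_pip_dir, target_dir, packages):
    # Hypothetical helper: returns the two commands to run with subprocess.
    # 1) install pip itself into a throwaway directory via get-pip.py:
    bootstrap = [sys.executable, get_pip_path, "--target", tmp_pip_dir]
    # 2) run that pip as a module (tmp_pip_dir must be prepended to
    #    PYTHONPATH or sys.path first) to install packages into the
    #    extension's sub-directory:
    install = [sys.executable, "-m", "pip", "install",
               "--target", target_dir, *packages]
    return bootstrap, install

tmp_pip_dir = tempfile.mkdtemp(prefix="pip_bootstrap_")
bootstrap, install = build_bootstrap_commands(
    "get-pip.py", tmp_pip_dir, "ext_pkgs", ["lxml"])
print(bootstrap[1:])
print(install[4:])
```

After the install command succeeds, the tmp directory can be deleted and only <code>target_dir</code> needs to be appended to <code>sys.path</code>.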
77,137,092
12,935,622
How to show all x-tick labels with seaborn.objects
<p>How do I make it so that it shows all x ticks from 0 to 9?</p> <pre><code> bin diff 1 4 -0.032748 3 9 0.106409 13 7 0.057214 17 3 0.157840 19 0 -0.086567 ... ... ... 1941 0 0.014386 1945 4 0.049601 1947 9 0.059406 1957 1 0.045282 1959 6 -0.033853 </code></pre> <pre><code> ( so.Plot(x='bin', y='diff', data=diff_df) .theme({**axes_style(&quot;whitegrid&quot;), &quot;grid.linestyle&quot;: &quot;:&quot;}) .add(so.Dots()) .add(so.Range(color='orange'), so.Est()) .add(so.Dot(color='orange'), so.Agg()) .add(so.Line(color='orange'), so.Agg()) .label( x=&quot;Image Similarity Bin&quot;, y=&quot;Difference&quot;, color=str.capitalize, ) ) </code></pre> <p>I tried to set <code>xticks</code> in .label, but it doesn't do anything.</p> <p><a href="https://i.sstatic.net/BKCrV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BKCrV.png" alt="Now it only shows 0, 2, 4..." /></a></p>
<python><seaborn><xticks><seaborn-objects>
2023-09-19 18:28:23
1
1,191
guckmalmensch
77,137,062
518,012
Python / Azure Event Hubs
<p>I'm putting together a very basic Python script which suppose to send a sample JSON data to Azure Event Hubs. Here is my script:</p> <pre><code>import asyncio from azure.eventhub.aio import EventHubProducerClient async def send_json_message(connection_string, event_hub_name, json_message): &quot;&quot;&quot;Sends a JSON message to an Azure Event Hub. Args: connection_string: The connection string to your Azure Event Hubs namespace. event_hub_name: The name of your event hub. json_message: The JSON message to send. &quot;&quot;&quot; producer = EventHubProducerClient.from_connection_string(connection_string, event_hub_name) async with producer: event_data_batch = await producer.create_batch() event_data_batch.add(EventData(body=json.dumps(json_message))) await producer.send_batch(event_data_batch) if __name__ == &quot;__main__&quot;: connection_string = &quot;Endpoint=sb://auditblobevents.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=k9GhYMwABTBa6PuHKczIolE7FJeR0bOpQ+AEhEgMAY8=&quot; event_hub_name = &quot;auditblobtopic&quot; json_message = { &quot;id&quot;: 1234567890, &quot;name&quot;: &quot;Peter&quot;, &quot;message&quot;: &quot;This is a JSON message.&quot; } asyncio.run(send_json_message(connection_string, event_hub_name, json_message)) </code></pre> <p>When I run this script (&quot;python3 send.py&quot;), I get this error:</p> <pre><code>Traceback (most recent call last): File &quot;send.py&quot;, line 29, in &lt;module&gt; asyncio.run(send_json_message(connection_string, event_hub_name, json_message)) File &quot;/usr/lib/python3.8/asyncio/runners.py&quot;, line 44, in run return loop.run_until_complete(main) File &quot;/usr/lib/python3.8/asyncio/base_events.py&quot;, line 616, in run_until_complete return future.result() File &quot;send.py&quot;, line 13, in send_json_message producer = EventHubProducerClient.from_connection_string(connection_string, event_hub_name) TypeError: from_connection_string() takes 2 
positional arguments but 3 were given </code></pre> <p>I am not a Python developer, but I can only see 2 parameters being passed to &quot;from_connection_string()&quot;.</p> <p>What can I change to get past this issue?</p>
<python><azure-eventhub>
2023-09-19 18:22:59
1
15,684
Eugene Goldberg
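The "takes 2 positional arguments" count includes the implicit <code>cls</code> of the classmethod plus the connection string; in azure-eventhub v5 the hub name is keyword-only, so the call should be <code>from_connection_string(connection_string, eventhub_name=event_hub_name)</code> (the script also needs <code>import json</code> and <code>from azure.eventhub import EventData</code>). A stand-in class reproduces the behavior without the Azure SDK:

```python
class FakeProducerClient:
    # Stand-in mimicking the azure-eventhub v5 signature, where the hub
    # name is keyword-only: cls + conn_str are the "2 positional arguments".
    @classmethod
    def from_connection_string(cls, conn_str, *, eventhub_name=None, **kwargs):
        client = cls()
        client.eventhub_name = eventhub_name
        return client

error_message = ""
try:
    FakeProducerClient.from_connection_string("Endpoint=sb://...", "myhub")
except TypeError as exc:
    error_message = str(exc)
print(error_message)  # ... takes 2 positional arguments but 3 were given

client = FakeProducerClient.from_connection_string(
    "Endpoint=sb://...", eventhub_name="myhub")  # keyword form works
print(client.eventhub_name)
```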
77,136,999
2,634,153
Clean way to pass different parameters in Python
<p>I would like to write a wrapper function around <code>requests.post()</code> which implements retries in case the call raises an exception. Something like</p> <pre><code>def http_post(endpoint, headers=headers, payload=payload): ... try: response = requests.post(endpoint, headers=headers, data=payload) ... </code></pre> <p>The problem is that some of the wrapped calls are of the form</p> <pre><code>response = requests.post(endpoint, headers=headers, data=data) </code></pre> <p>whereas others are</p> <pre><code>response = requests.post(endpoint, headers=headers, json=data) </code></pre> <p>I could pass a flag indicating the type of the third parameter, or maybe try using <code>**kwargs</code> combined with <code>exec</code>, but it appears messy.</p> <p>Is there a way of doing it cleanly?</p>
<python><python-3.x>
2023-09-19 18:12:53
0
617
PassoGiau
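Plain <code>**kwargs</code> forwarding handles this cleanly with no flag and no <code>exec</code>: the wrapper passes whatever keyword arguments it receives straight through, so callers write <code>data=...</code> or <code>json=...</code> exactly as they would for <code>requests.post</code>. A sketch with a flaky stand-in in place of the real network call:

```python
import time

def http_post_with_retries(post_func, endpoint, retries=3, backoff=0.1, **kwargs):
    # **kwargs is forwarded untouched, so data=, json=, headers= all work.
    for attempt in range(retries):
        try:
            return post_func(endpoint, **kwargs)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (2 ** attempt))

# Demo with a flaky stand-in for requests.post
calls = {"n": 0}
def flaky_post(endpoint, **kwargs):
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient failure")
    return {"endpoint": endpoint, **kwargs}

result = http_post_with_retries(flaky_post, "/api", json={"a": 1}, backoff=0)
print(result)  # {'endpoint': '/api', 'json': {'a': 1}}
```

In real code, <code>post_func</code> would be <code>requests.post</code> and the <code>except</code> clause would name <code>requests.RequestException</code> rather than bare <code>Exception</code>.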
77,136,916
9,873,381
How do I fix this image shape error when feeding image to ResNet50?
<p>I would like to use a camera attached to a Raspberry Pi 4 to stream video and detect objects simultaneously. I have been able to take photos and record videos using this apparatus.</p> <p>However, when I try to run a ResNet50 classifier on top of the stream, I am facing a shape inconsistency error.</p> <p>I request you to help me find the line number where the problem might be originating from.</p> <p>The error and the code (relatively simple) are attached below.</p> <p>Error:</p> <p><code>ValueError: Input 0 of layer &quot;resnet50&quot; is incompatible with the layer: expected shape=(None, 224, 224, 3), found shape=(None, 1, 224, 224)</code></p> <p>Code:</p> <pre><code>#!/usr/bin/python3 import time import os from datetime import datetime import numpy as np import tensorflow as tf import tensorflow_hub as hub from picamera2 import Picamera2 from picamera2.encoders import H264Encoder from picamera2.outputs import CircularOutput from PIL import Image # Define the resolution for the low-resolution video stream lsize = (320, 240) # Load a pre-trained TensorFlow model classifier = tf.keras.applications.resnet50.ResNet50(weights='imagenet') # Initialize the Picamera2 picam2 = Picamera2() # Configure the video settings video_config = picam2.create_video_configuration( main={&quot;size&quot;: (1920, 1080), &quot;format&quot;: &quot;RGB888&quot;}, lores={&quot;size&quot;: lsize, &quot;format&quot;: &quot;YUV420&quot;} ) picam2.configure(video_config) # Initialize the H.264 encoder for video capture encoder = H264Encoder(2000000, repeat=True) encoder.output = CircularOutput() picam2.encoder = encoder # Start the camera and the encoder picam2.start() picam2.start_encoder(encoder) # Define the dimensions of the low-resolution frame w, h = lsize # Continuous loop to capture and process frames while True: # Capture a frame from the low-resolution stream cur = picam2.capture_buffer(&quot;lores&quot;) # Convert the frame to an image and # preprocess it for the deep learning 
model image = Image.fromarray(cur.astype('uint8')) image = image.resize((224, 224)) # Resize to match the model's input size image = np.array(image) / 255.0 # Normalize image = np.expand_dims(image, axis=0) # Add batch dimension # Run the deep learning model on the frame predictions = classifier.predict(np.expand_dims(image, axis=-1)) # Print the model's predictions (customize as needed) print( &quot;Model predictions class:&quot;, tf.keras.applications.imagenet_utils.decode_predictions( preds=predictions ) ) time.sleep(1) # Stop the camera and the encoder picam2.stop() picam2.stop_encoder() </code></pre>
<python><tensorflow><keras><resnet><image-classification>
2023-09-19 17:59:24
1
672
Skywalker
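ResNet50 wants a channels-last batch of shape <code>(N, 224, 224, 3)</code>, and the second <code>expand_dims(..., axis=-1)</code> inside <code>predict()</code> appends an extra trailing axis to an already-batched array. A NumPy-only sketch of the shape handling (the YUV420-to-RGB conversion of the <code>lores</code> buffer is out of scope here, and a random RGB frame stands in for a real capture):

```python
import numpy as np

# Stand-in for one captured frame already converted to RGB (H, W, 3)
frame_rgb = np.random.randint(0, 255, size=(240, 320, 3), dtype=np.uint8)

img = frame_rgb[:224, :224, :].astype("float32") / 255.0  # crop stands in for resize
batch = np.expand_dims(img, axis=0)  # (1, 224, 224, 3): batch axis first
print(batch.shape)

# The question's bug: a second expand_dims at predict time appends a
# trailing axis; pass `batch` to classifier.predict() directly instead.
wrong = np.expand_dims(batch, axis=-1)
print(wrong.shape)  # (1, 224, 224, 3, 1) - rejected by ResNet50
```

Note also that the stock ImageNet weights expect <code>tf.keras.applications.resnet50.preprocess_input</code> rather than a plain divide-by-255, so predictions will be more sensible with that preprocessing.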
77,136,876
13,395,230
Ending Numpy Calculation Early
<p>Given two large arrays,</p> <pre><code>A = np.random.randint(10,size=(10000,2)) B = np.random.randint(10,size=(10000,2)) </code></pre> <p>I would like to determine if any of the vectors have a cross product of zero. We could do</p> <pre><code>C = np.cross(A[:,None,:],B[None,:,:]) </code></pre> <p>and then check if C contains a 0 or not.</p> <pre><code>not C.all() </code></pre> <p>However, this process requires calculating all the cross products which can be time consuming. Instead, I would prefer to let numpy perform the cross product, but IF a zero is reached at any point, then simply cut the whole operation and end early. Does numpy have such an &quot;early termination&quot; operation that will cut numpy operations early if they reach a condition? Something like,</p> <pre><code>np.allfunc() np.anyfunc() </code></pre> <p>The example above is such a case where A and B have an extremely high likelihood of having a zero cross product at some point (in fact is very likely to occur near the start), so much so, that performing a python-for-loop (yuck!) is much faster than using numpy's highly optimized code.</p> <p>In general, what is the fastest way to determine if A and B have a zero cross product?</p>
<python><arrays><numpy><optimization>
2023-09-19 17:53:47
2
3,328
Bobby Ocean
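NumPy has no built-in short-circuiting reduction over a lazily computed array, so the usual compromise is chunked evaluation: compute the cross products block by block and return as soon as any block contains a zero. When a zero is likely near the start, only a small fraction of the full O(n²) work is done, while each block still runs at NumPy speed. A sketch:

```python
import numpy as np

def has_zero_cross(A, B, chunk=256):
    # Chunked "early termination": stop at the first block whose cross
    # products contain a zero, instead of materializing all of C at once.
    for i in range(0, len(A), chunk):
        C = np.cross(A[i:i + chunk, None, :], B[None, :, :])
        if not C.all():
            return True
    return False

A = np.array([[1, 2], [3, 4]])
B = np.array([[2, 4], [0, 1]])   # A[0] is parallel to B[0]
print(has_zero_cross(A, B))      # True, found in the first chunk
```

The chunk size trades per-block overhead against wasted work after the first zero; values in the hundreds are a reasonable starting point for arrays of this size.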
77,136,866
10,122,160
AWS Apprunner Failing Health Check when pytorch is involved. But container works well on local
<p>I have been experimenting with AWS app runner. I found a basic tutorial code that uses flask. Here is the code :</p> <pre><code>from flask import render_template from flask import Flask app = Flask(__name__) @app.route('/') def home(): return render_template('index.html') @app.route('/app') def blog(): return &quot;Hello, from App!&quot; if __name__ == '__main__': app.run(threaded=True,host='0.0.0.0',port=80) </code></pre> <p>and here is the docker file</p> <pre><code>FROM python:3.7-slim COPY ./requirements.txt /app/requirements.txt WORKDIR /app RUN pip install -r requirements.txt COPY . /app EXPOSE 80 ENTRYPOINT [ &quot;python&quot; ] CMD [ &quot;app.py&quot; ] </code></pre> <p>I easily managed to deploy this setup on Apprunner. However, when I tried to deploy my app it was throwing me an error related to a health check. But my local container worked fine without errors. So it means there compatibility issue with apprunner.</p> <blockquote> <p>09-19-2023 08:10:37 PM [AppRunner] Deployment with ID : da2bb9----- failed. Failure reason : Health check failed. 09-19-2023 08:10:25 PM [AppRunner] Health check failed on port '80'. Check your configured port number. For more information, read the application logs. 09-19-2023 08:04:14 PM [AppRunner] Performing health check on port '80'. 09-19-2023 08:04:04 PM [AppRunner] Provisioning instances and deploying image for publicly accessible service. 09-19-2023 08:03:53 PM [AppRunner] Successfully copied the image from ECR. 09-19-2023 07:52:50 PM [AppRunner] Deployment Artifact :- Repo Type: ECR; Image URL : 218512261774.dkr.ecr.us-west-2.amazonaws.com/test7; Image Tag : new 09-19-2023 07:52:50 PM [AppRunner] Deployment with ID : da2bb9532------- started. Triggering event: SERVICE_CREATE</p> </blockquote> <p>I tried to pinpoint the reason and in the end, I ended up deploying 7 different images with various code configurations. 
In the end, I created a minimal replication of the issue by adding a torch==2.0.1 dependency (I was experimenting with Python 3.10, but everything else was the same) to the requirements.txt file.</p> <p>I also tried with Python 3.10: it passes the health check without torch and fails with torch installed. I also experimented with different ports.</p> <p>My question is: what might be the cause of this issue and how can I fix it? I must say I am quite new to containers and AWS deployment, but I have done my research and couldn't find any solution.</p>
<python><deployment><pytorch><aws-app-runner>
2023-09-19 17:52:37
1
308
Enes Kuz
77,136,761
850,781
Python logger produces output without handlers
<p>Every now and then, while running under Jupyter and after interrupting the kernel, I start getting duplicate output on my logger: <code>stdout.info(&quot;foo&quot;)</code> prints:</p> <pre><code>2023-09-19 13:19:19 INFO 17892/MainThread/324310998 foo INFO:stdout:foo </code></pre> <p>I still have just one handler in the <code>stdout</code> logger: <code>stdout.handlers</code> is</p> <pre><code>[&lt;StreamHandler stdout (INFO)&gt;] </code></pre> <p>Moreover, if I remove this handler, I still get output:</p> <pre><code>In [96]: ha = stdout.handlers[0] stdout.removeHandler(ha) In [97]: stdout.handlers Out [97]: [] In [98]: stdout.info(&quot;foo&quot;) INFO:stdout:foo </code></pre> <p>A basic investigation shows that</p> <pre><code>In [99]: stdout.propagate Out [99]: True In [100]: stdout.hasHandlers() Out [100]: True </code></pre> <p>and <code>stdout.propagate = False</code> &quot;fixes&quot; the problem.</p> <p>Alternatively,</p> <pre><code>root = logging.getLogger() root.removeHandler(root.handlers[0]) </code></pre> <p>also fixes the problem.</p> <p>However, I would rather figure out how come the root logger all of a sudden got a handler!</p>
<python><logging><python-logging>
2023-09-19 17:34:04
1
60,468
sds
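The <code>INFO:stdout:foo</code> line is the signature of a handler on the root logger: any module-level <code>logging.info(...)</code> call, a library, or IPython's own machinery after a kernel interrupt can invoke <code>logging.basicConfig()</code> implicitly, attaching a <code>StreamHandler</code> to root. The record then reaches it by propagation, which is why removing the child's handler changes nothing. A demonstration of that mechanism:

```python
import io
import logging

stream = io.StringIO()
root = logging.getLogger()
root_handler = logging.StreamHandler(stream)
root.addHandler(root_handler)        # simulates whatever added a root handler

child = logging.getLogger("stdout")
child.setLevel(logging.INFO)

child.info("foo")                    # child has NO handlers, yet this emits:
print("foo" in stream.getvalue())    # True - via propagation to root

child.propagate = False
child.info("bar")
print("bar" in stream.getvalue())    # False - propagation cut off

root.removeHandler(root_handler)     # clean up
```

So the two observed "fixes" are equivalent: either cut propagation at the child or remove the surprise handler from root.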
77,136,759
942,317
ModuleNotFoundError: No module named 'piplite'
<p>I'm trying to run a Jupyter notebook file; please note I'm new to Python.</p> <p>The following is the first code block, which fails within Jupyter. I'm using it with VS Code.</p> <pre class="lang-py prettyprint-override"><code> # Dependency needed to install file # If running the notebook on your machine, else leave it commented #!pip install xlrd #!pip install openpyxl import piplite await piplite.install(['xlrd','openpyxl']) </code></pre> <p><a href="https://i.sstatic.net/WCde7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WCde7.png" alt="enter image description here" /></a> Error:</p> <pre><code> --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) c:\Users\..\Projects\python\ibm_backend\PY0101EN-4-3-LoadData.jupyterlite.ipynb Cell 7 line 7 1 # Dependency needed to install file 2 3 # If running the notebook on your machine, else leave it commented 4 #!pip install xlrd 5 6 #!pip install openpyxl ----&gt; 7 import piplite 8 await piplite.install(['xlrd','openpyxl']) ModuleNotFoundError: No module named 'piplite' </code></pre> <p>Also, I'm using a virtual environment.</p> <pre><code> pip -V pip 23.1.2 from C:\Users\..\Projects\python\ibm_backend\.venv\Lib\site-packages\pip (python 3.11) py -m ensurepip Looking in links: c:\Users\..\AppData\Local\Temp\tmp1lqhflza Requirement already satisfied: setuptools in c:\users\..\projects\python\ibm_backend\.venv\lib\site-packages (65.5.0) Requirement already satisfied: pip in c:\users\..\projects\python\ibm_backend\.venv\lib\site-packages (23.1.2) </code></pre>
<python>
2023-09-19 17:33:56
1
10,317
STEEL
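<code>piplite</code> only exists inside JupyterLite (the in-browser Pyodide kernel the notebook was written for, as the <code>.jupyterlite.ipynb</code> filename suggests); on a regular CPython kernel such as VS Code's, the uncommented <code>!pip install xlrd openpyxl</code> lines are the right path. A feature-detecting sketch of that split (the helper name is illustrative):

```python
import importlib.util
import sys

def install_command(packages):
    # In JupyterLite, piplite is importable and `await piplite.install(...)`
    # is used; on a normal CPython kernel it is not, so fall back to pip
    # (run the returned command with subprocess, or use `%pip install`).
    if importlib.util.find_spec("piplite") is not None:
        return ("piplite", list(packages))
    return [sys.executable, "-m", "pip", "install", *packages]

print(install_command(["xlrd", "openpyxl"]))
```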
77,136,744
1,499,575
Does doxygen support python argument type hints?
<p>Given the function predicate below, doxygen seems to truncate after <code>test_point_count</code> is there any configuration option in the Doxyfile which can rectify this? Unfortunately, I am stuck with doxygen version 1.8.5 and I don't see much that references python compatibility.</p> <pre class="lang-py prettyprint-override"><code>def expect_equal(logger, test_point_count: int, expected: any, actual: any, tolerance: float = 0, desc: str = &quot;&quot;): &quot;&quot;&quot; Logging helper comparing 2 values and logging results @param logger: Logger used for outputting results @param test_point_count: iterator value for tp count @param expected: expected value for test @param actual: actual result of test @param tolerance: float describing the +/- tolerance for check @param desc: str description of test and results &quot;&quot;&quot; </code></pre>
<python><doxygen><python-typing>
2023-09-19 17:31:49
0
1,152
mreff555
77,136,421
11,053,343
Restricted Python in Zope - Unauthorized Error
<p>I have a Python script in the Zope (5.8.3) ZMI. When I was running Zope 5.5.1 this worked perfectly, but since I upgraded to 5.8.3 and thus updated all the accompanying Python modules, I am now getting the error:</p> <p>Unauthorized: Cannot access verify in this context</p> <p><code>verify</code> is part of the cryptography library, more specifically the ed25519 module's Ed25519 class. I have allowed both the module(s) and classes necessary, and even the method <code>from_public_bytes</code> works fine. However, I need to allow the use of <code>verify</code> so this script works again, and have spent the last two days reading code and trying a bunch of allow statements to no avail.</p> <p>Hopefully someone can shed some light on this as right now, I've had to create an external method just for the verify line to get the API back and running. I would much rather be able to do this from within the ZMI like I did previously.</p> <p>Here's what I've got in my <code>__init__.py</code> for the allow statements:</p> <pre><code>from Products.PythonScripts.Utility import allow_module, allow_class from AccessControl import ModuleSecurityInfo, ClassSecurityInfo allow_module(&quot;base64&quot;) allow_module(&quot;Crypto&quot;) allow_module(&quot;Crypto.Cipher&quot;) allow_module(&quot;cryptography&quot;) allow_module(&quot;cryptography.exceptions&quot;) allow_module(&quot;cryptography.fernet&quot;) allow_module(&quot;cryptography.hazmat&quot;) allow_module(&quot;cryptography.hazmat.primitives&quot;) allow_module(&quot;cryptography.hazmat.primitives.asymmetric&quot;) allow_module(&quot;cryptography.hazmat.primitives.asymmetric.ed25519&quot;) allow_module(&quot;cryptography.hazmat.primitives.asymmetric.x25519&quot;) allow_module(&quot;cryptography.hazmat.primitives.kdf.hkdf&quot;) allow_module(&quot;json&quot;) allow_module(&quot;ZcPassword&quot;) from Crypto.Cipher import ChaCha20_Poly1305 allow_class(ChaCha20_Poly1305) from Crypto.Cipher.ChaCha20_Poly1305 import ChaCha20Poly1305Cipher 
allow_class(ChaCha20Poly1305Cipher) from cryptography.exceptions import InvalidSignature allow_class(InvalidSignature) from cryptography.fernet import Fernet allow_class(Fernet) from cryptography.hazmat.primitives import hashes allow_class(hashes) from cryptography.hazmat.primitives.asymmetric import ec allow_class(ec) from cryptography.hazmat.primitives.asymmetric import ed25519 allow_class(ed25519) from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey allow_class(Ed25519PublicKey) from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey allow_class(X25519PrivateKey) from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PublicKey allow_class(X25519PublicKey) from cryptography.hazmat.primitives.kdf.hkdf import HKDF allow_class(HKDF) </code></pre> <p>This is a sample script for testing:</p> <pre><code>import base64 import json from cryptography.exceptions import InvalidSignature from cryptography.hazmat.primitives.asymmetric import ed25519 json_data = '&quot;data&quot;:{&quot;userID&quot;:&quot;1234567&quot;}' signature_key = 'KpHjhW7uQ0N8=' signature = '5mXx/ubEK+BfbgSSq8JwAw==' # Get the digital signature digital_signature = base64.b64decode(signature) pubKey = base64.b64decode(signature_key) public_key = ed25519.Ed25519PublicKey.from_public_bytes(pubKey) # Verify signature if the device id has been verified try: public_key.verify(digital_signature, json_data.encode()) # print(&quot;Signature verified&quot;) except Exception as e: print(f&quot;An error occurred: {e}&quot;) return(printed) </code></pre> <p>Note: Obviously the values shown in the example are bogus values for testing. The script itself is fine. I just need to allow access so I can use verify from the ZMI in restricted python.</p>
<python><security><zope><python-cryptography><ed25519>
2023-09-19 16:35:17
0
1,515
kittonian
77,136,391
5,547,553
How to read columns of a hive partitioned parquet file in python?
<p>Generally you read the schema of a parquet file like:</p> <pre><code>import pyarrow.parquet as pq sch = pq.read_schema(path+filename, memory_map=True) </code></pre> <p>But this does not work for hive-partitioned files. I tried adding the</p> <pre><code>partitioning='hive' </code></pre> <p>option, but it is not implemented. How do I get the columns / schema of such a file?</p>
<python><parquet>
2023-09-19 16:31:45
2
1,174
lmocsi
77,136,363
6,599,648
Python threading Flask - do I need to close my thread using x.join()?
<p>I have some code in Python Flask which does the following:</p> <ol> <li>get user input</li> <li>calculates (0.3s)</li> <li>writes to DB (3s)</li> <li>return view to user</li> </ol> <p>Currently the user has to wait 3.3s for the view to return to them, so I'd like to return the view to the user while I write to the DB. My plan is to use threading for this, but I can never run x.join(). This is my code:</p> <pre class="lang-py prettyprint-override"><code>import threading def something_slow(my_variable): print(f&quot;Thread is starting and will execute on {my_variable}&quot;) time.sleep(3) return if __name__ == &quot;__main__&quot;: print(&quot;Starting&quot;) x = threading.Thread(target=something_slow, args=('my variable, probably a SQL query',)) x.start() print(&quot;this code will execute while the slow thing is happening&quot;) # code that will execute while something slow is happening: return render_template('results.html') x.join() # this wil NEVER execute!!! print('something slow is done') </code></pre> <p>From what I can tell on the <a href="https://superfastpython.com/thread-close/#:%7E:text=We%20can%20close%20a%20thread,the%20run()%20function%20directly." rel="nofollow noreferrer">internet</a>, the thread will close when I call <code>return</code> from inside the thread. Is it true that as long as I call <code>return</code> inside the thread, I don't need to run <code>x.join()</code> to close?</p>
<python><flask><python-multithreading>
2023-09-19 16:28:38
1
613
Muriel
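A thread finishes when its target function returns; <code>join()</code> never "closes" a thread, it only blocks the caller until the thread is done. So it is fine to return the view first and skip <code>join()</code> entirely — with the caveats that code after <code>return</code> in a Flask view never runs (start the thread before returning), and a non-daemon thread keeps the process alive until it finishes. A plain-threading demonstration:

```python
import threading
import time

results = []

def slow_db_write():
    time.sleep(0.2)           # stand-in for the 3 s DB write
    results.append("written")

t = threading.Thread(target=slow_db_write)
t.start()
# ... return the view to the user here, without waiting ...
print(t.is_alive())           # True: response could go out while the write runs

t.join()                      # only *waits*; the thread ends on its own
print(results)                # ['written']
print(t.is_alive())           # False
```

For production Flask apps a task queue (Celery, RQ) or an executor is usually preferred over ad-hoc threads, since WSGI workers can be recycled mid-write.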
77,136,355
409,568
How to work with someone else's Jupyter Notebook and virtualenv?
<p>I'm new to Jupyter Notebooks and I must be missing something that is entirely obvious to everyone else so apologies for the stupid question.</p> <p>I have been given a Jupyter Notebook (in Python) that imports a few third-party modules that seem to be installed in the environment of the person who created it, ie it starts with a cell with code like this:</p> <pre class="lang-py prettyprint-override"><code>from xxx import yyy import zzz </code></pre> <p>I now need to install those dependencies and I naively assumed I could do this inside the iPython Notebook in a venv for this project and then activate that venv and run the Notebook with it like this:</p> <pre><code>python3 -m venv project_env source project_env/bin/activate pip install somestuff </code></pre> <p>But of course that doesn't work because I would have to run the above in a cell as BASH code which will spawn a subprocess and exit, thus the venv will not be available to the next cell.</p> <p>I have seen <a href="https://saturncloud.io/blog/running-jupyter-notebook-in-a-virtual-environment-installed-scikitlearn-module-not-available/#:%7E:text=Running%20Jupyter%20Notebook%20in%20a%20virtual%20environment%20is%20essential%20for,on%20your%20project%20with%20confidence." rel="nofollow noreferrer">instructions</a> to install and register a new kernel on the server for this process but that doesn't seem ideal to me because I want to share the Notebook with others too and was hoping for the Notebook to contain all the data necessary to run it, including the creation of a venv with the correct versions of libraries that need to be installed.</p> <p>This seems such a basic usecase to me, is there really no solution for this? What am I missing?</p> <p>Thanks for your help!</p>
<python><jupyter-notebook>
2023-09-19 16:27:41
1
730
tospo
77,136,231
2,023,111
How to fix ChromeDriver SessionNotCreatedException in Google Colab?
<p>I was asked to use a python web scraping script that was created in Google Colab, but I'm getting the following error:</p> <blockquote> <p>SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 90 Current browser version is 117.0.5938.88 with binary path /root/.cache/selenium/chrome/linux64/117.0.5938.88/chrome</p> </blockquote> <p>Am I correct in thinking that because Google Colab is a hosted coding platform that my local chromedriver version is irrelevant? How can I fix this?</p> <p>If it helps, I believe this is the function causing the error...</p> <pre><code>def web_driver(): options = webdriver.ChromeOptions() options.add_argument(&quot;--verbose&quot;) options.add_argument('--no-sandbox') options.add_argument('--headless') options.add_argument('--disable-gpu') options.add_argument(&quot;--window-size=1920, 1200&quot;) options.add_argument('--disable-dev-shm-usage') driver = webdriver.Chrome(options=options) return driver </code></pre>
<python><selenium-webdriver><selenium-chromedriver><google-colaboratory>
2023-09-19 16:07:24
1
319
Jonathan Cakes
77,135,971
12,300,981
Why aren't the dimensions of my x-axis and y-axis the same?
<p>So I'm attempting to generate a 4x4 where the bottom left is a perfect square.</p> <pre><code>fig=plt.figure() ax=np.empty((2,2,),dtype=object) ax[1][0]=fig.add_subplot(2,2,3) ax[0][0]=fig.add_subplot(2,2,1,sharex=ax[1][0]) ax[1][1]=fig.add_subplot(2,2,4,sharey=ax[1][0]) ax[0][1]=fig.add_subplot(2,2,2,sharex=ax[1][1],sharey=ax[0][0]) plt.subplots_adjust(hspace=0,wspace=0) old_position,old_position2,old_position3,old_position4=list(ax[0][0].get_position().bounds),list(ax[1][1].get_position().bounds),list(ax[1][0].get_position().bounds),list(ax[0][1].get_position().bounds) old_position[-1]=old_position[-1]/2 old_position2[-2]=old_position2[-2]/2 old_position3[-1]=old_position3[-2] old_position4[-1]=old_position4[-1]/2 old_position4[-2]=old_position4[-2]/2 ax[0][0].set_position(old_position) ax[1][1].set_position(old_position2) ax[1][0].set_position(old_position3) ax[0][1].set_position(old_position4) ax[0][0].xaxis.set_visible(False) ax[0][1].xaxis.set_visible(False) ax[1][1].yaxis.set_visible(False) ax[0][1].yaxis.set_visible(False) #plot stuff onto axes objects plt.suptitle('Test') plt.show() </code></pre> <p>So this should mean that the bottom left plot ([1][0]) should be a perfect square, since <code>old_position3[-1]=[-2]</code>, in fact looking at old_position3, you get <code>[0.125, 0.10999999999999999, 0.38749999999999996, 0.38749999999999996]</code>, so the x,y dimensions are both the same <code>0.3874</code>.</p> <p>However, upon plotting/display, these dimensions are not perfect for the bottom left. The ratio of height/width is about 3/5in, and I don't quite understand why. Furthermore, why is the title (suptitle) using the old dimensions (it's situated high above).</p> <p><a href="https://i.sstatic.net/mO7GC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mO7GC.png" alt="Test" /></a></p>
<python><matplotlib>
2023-09-19 15:30:56
1
623
samman
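The catch is that <code>set_position</code> bounds are fractions of the figure, so equal width and height fractions only yield a square axes when the figure itself is square — the default figure is wider than tall, hence the roughly 3/5 ratio. <code>Axes.set_box_aspect(1)</code> (matplotlib ≥ 3.3) forces a square box regardless of figure shape, and <code>suptitle</code> is placed in figure coordinates, which is why it ignores the repositioned axes. A sketch of both points:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(8, 4.8))   # wider than tall, like the default
ax = fig.add_subplot(2, 2, 3)

# Position bounds are *figure fractions*: the same fraction maps to
# different physical lengths on a non-square figure.
frac = 0.3875
print(frac * 8, frac * 4.8)          # 3.1 in wide vs 1.86 in tall

ax.set_box_aspect(1)                 # force a square axes box in inches
fig.suptitle("Test", y=0.98)         # suptitle lives in figure coords;
                                     # lower y manually if axes were shrunk
```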
77,135,954
7,397,195
SQLAlchemy: Declarative Many to Many map for Parent with two primary keys
<p>I'm trying to declare a many to many relationship using SQLAlchemy where my parent RowHeader can create Child Economists via a relationship table. I keep running into errors around multiple join paths, lack of declared Foreign Keys. I also have situations where the creation of children violates primary key constraints. I only want a single Child but for it to be associated to many Parents.</p> <pre><code># association table row_econ_table = Table( &quot;row_econ_table&quot;, Base.metadata, Column(&quot;rh_f_index&quot;, ForeignKey(&quot;row_header.f_index&quot;), primary_key=True), #left Column(&quot;rh_row&quot;, ForeignKey(&quot;row_header.row&quot;), primary_key=True), # left Column(&quot;econ_un_id&quot;, ForeignKey(&quot;economist.un_name_id&quot;), primary_key=True), #right ) class RowHeader(Base): # Parent (many with 2 pk) __tablename__ = &quot;row_header&quot; f_index: Mapped[str] = mapped_column(primary_key=True) row: Mapped[int] = mapped_column(primary_key=True) Names: Mapped[str] = mapped_column() Organization: Mapped[str] = mapped_column(nullable=True) economists: Mapped[List[&quot;Economist&quot;]] = relationship( #cascade=&quot;all, delete&quot;, secondary=row_econ_table, foreign_keys=[f_index, row], primaryjoin=&quot;and_(RowHeader.f_index==row_econ_table.c.rh_f_index, &quot; &quot;RowHeader.row==row_econ_table.c.rh_row)&quot;, secondaryjoin=&quot;Economist.un_name_id==row_econ_table.c.econ_un_id&quot; ) def __repr__(self) -&gt; str: return f&quot;RowHeader(f_index={self.f_index!r}, row={self.row!r}, Names={self.Names!r}, Organization={self.Organization!r})&quot; class Economist(Base): # Child (One on other side of assoc table) __tablename__ = &quot;economist&quot; un_name_id: Mapped[str] = mapped_column(primary_key=True) </code></pre> <p>My issue seems to be that I have two primary keys in my Parent which create multiple join paths. 
I need two primary keys to make that table unique.</p> <p>My goal is that: Parent (RowHeader) creates all children (Economist) via relationship table. If it doesn't exist, create it. If it exists, just establish relationship to it via relationship table. But, don't create multiples of the same child.</p> <p>I'm not sure I can recount how many configurations I've tried!!</p> <p>I've gotten the following errors:</p> <pre><code>sqlalchemy.exc.ArgumentError: Could not locate any relevant foreign key columns for secondary join condition 'economist.un_name_id = row_econ_table.econ_un_id' on relationship RowHeader.economists. Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or are annotated in the join condition with the foreign() annotation. sqlalchemy.exc.NoForeignKeysError: Could not determine join condition between parent/child tables on relationship RowHeader.economists - there are no foreign keys linking these tables via secondary table 'row_econ_table'. Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or specify 'primaryjoin' and 'secondaryjoin' expressions. </code></pre> <p>Also, I got this to create the tables once (lost track of what combination created this) but when the Parent tried to create a new Child that already exists, I got the primary key constraint error from the Child.</p> <p><strong>Update</strong>: removing foreign_keys arg results in successful table creations. So, progress!</p> <p>But, when I try to add objects to the session which result in creating a row in the &quot;economist&quot; table that already exists, I get a Unique constraint error and rollback.</p> <pre><code>sqlite3.IntegrityError: UNIQUE constraint failed: economist.un_name_id </code></pre> <p>I guess I thought the session handled that for me but maybe it would if I didn't add_all and only added them only one at a time? I'm using SQLLITE as my backend. 
I'm not sure how to effect a merge or cause my &quot;row_header&quot; table to only be associated with an existing 'economist'.</p> <p>Now I'm wondering if this is the key to my problem:</p> <p><a href="https://docs.sqlalchemy.org/en/20/orm/queryguide/dml.html#orm-queryguide-upsert" rel="nofollow noreferrer">ORM Upsert Statements</a></p> <p><a href="https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#insert-on-conflict-upsert" rel="nofollow noreferrer">INSERT ... ON CONFLICT (SQLite)</a></p> <p>But I am unsure how to do this automatically.</p>
<python><sqlite><sqlalchemy>
2023-09-19 15:28:55
1
454
leeprevost
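The upsert docs linked in the question are indeed one way out: a common pattern is "get or create" backed by SQLite's `INSERT ... ON CONFLICT DO NOTHING` (available since SQLite 3.24). Below is a hedged sketch using only the stdlib `sqlite3` module — the table and column names mirror the question but are otherwise hypothetical; in SQLAlchemy 2.x the same statement is reachable via `sqlalchemy.dialects.sqlite.insert(...).on_conflict_do_nothing()`.

```python
import sqlite3

# Hypothetical schema mirroring the question: an "economist" table with a
# unique natural key that a parent row may need to reference or create.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE economist (un_name_id TEXT PRIMARY KEY, name TEXT)")

def get_or_create(conn, un_name_id, name):
    # ON CONFLICT DO NOTHING makes the insert idempotent: a second attempt
    # with the same key no longer raises "UNIQUE constraint failed".
    conn.execute(
        "INSERT INTO economist (un_name_id, name) VALUES (?, ?) "
        "ON CONFLICT(un_name_id) DO NOTHING",
        (un_name_id, name),
    )
    return conn.execute(
        "SELECT un_name_id, name FROM economist WHERE un_name_id = ?",
        (un_name_id,),
    ).fetchone()

row1 = get_or_create(conn, "E1", "Keynes")
row2 = get_or_create(conn, "E1", "Keynes")  # second call: no IntegrityError
count = conn.execute("SELECT COUNT(*) FROM economist").fetchone()[0]
```

With this in place, the parent can always call `get_or_create` and attach the returned child, whether or not it already existed.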
77,135,931
2,112,406
Capture stdout from C++ function on python side in pytest (pybind11)?
<p>In C++, I have a class, and one of its methods (constructor, in this MRE) is supposed to print out something:</p> <pre><code>class Ex { std::string name; std::string content; public: Ex(std::string content_, std::string name_){ name = name_; content = content_; std::cout &lt;&lt; &quot;ASSIGNING&quot; &lt;&lt; std::endl; } }; PYBIND11_MODULE(example, module_handle){ module_handle.doc() = &quot;Ex class.&quot;; py::class_&lt;Ex&gt;( module_handle, &quot;PyEx&quot; ).def(py::init&lt;std::string, std::string&gt;()) ; } </code></pre> <p>I then want to test this on the python side with pytest and attempt to capture stdout/err with <code>capsys</code>:</p> <pre><code>from build.example import PyEx class TestExample: def test_type(self, capsys): ex = PyEx(&quot;foo&quot;, &quot;bar&quot;) captured = capsys.readouterr() assert captured.out == &quot;ASSIGNING&quot; </code></pre> <p>However, <code>captured</code> ends up being empty: <code>CaptureResult(out='', err='')</code>:</p> <pre><code>&gt; assert captured.out == &quot;ASSIGNING&quot; E AssertionError: assert '' == 'ASSIGNING' E - ASSIGNING test_init.py:7: AssertionError ----------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------ ASSIGNING </code></pre> <p>It's weird because it does print out &quot;Captured stdout call&quot;. What am I missing</p>
<python><c++><pytest><pybind11>
2023-09-19 15:27:00
0
3,203
sodiumnitrate
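`capsys` only intercepts Python's `sys.stdout`; C++ `std::cout` writes to file descriptor 1 directly, which is why the text shows up under "Captured stdout call" but not in `readouterr()`. pytest's `capfd` fixture captures at the fd level, so swapping `capsys` for `capfd` (and comparing against `"ASSIGNING\n"`, since `std::endl` appends a newline) is the usual fix. As a hedged stdlib sketch of what fd-level capture does (the `os.write(1, ...)` stands in for the C++ write that bypasses `sys.stdout`):

```python
import os
import sys
import tempfile

def capture_fd_stdout(fn):
    """Capture everything written to file descriptor 1, including writes
    that bypass sys.stdout entirely (as C++ std::cout does). This is
    essentially what pytest's `capfd` fixture does and `capsys` does not."""
    sys.stdout.flush()  # don't let earlier buffered Python output leak in
    saved = os.dup(1)
    with tempfile.TemporaryFile(mode="w+b") as tmp:
        os.dup2(tmp.fileno(), 1)  # redirect fd 1 into the temp file
        try:
            fn()
            sys.stdout.flush()
        finally:
            os.dup2(saved, 1)  # restore the real stdout
            os.close(saved)
        tmp.seek(0)
        return tmp.read().decode()

# Simulate a C-level write that capsys would miss:
out = capture_fd_stdout(lambda: os.write(1, b"ASSIGNING\n"))
```

In the actual test, no manual redirection is needed — `def test_type(self, capfd):` plus `assert captured.out == "ASSIGNING\n"` should suffice.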
77,135,860
9,571,463
How to handle aiohttp.client_exceptions.ClientOSError: [Errno 1] [SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl.c:2624)
<p>I occasionally run into a TLSV1 error (might be because I am on a VPN) when making HTTP calls via <code>aiohttp</code>. I have read a bit about the error in this SO question <a href="https://stackoverflow.com/questions/44316292/ssl-sslerror-tlsv1-alert-protocol-version">here</a>.</p> <p>but as noted in the comments, it is not easy to handle this error via raw <code>try-except</code> logic. Is there a work-around way to <code>except</code> this error and handle it when it occurs?</p> <p>I tried using a few things:</p> <pre><code>try: # Make a request and check for status response.raise_for_status() except aiohttp.client_exceptions.ClientOSError: # Do something except Exception: # Do something else </code></pre> <p>But the error passes through both without being able to be handled..</p>
<python><tls1.2><aiohttp>
2023-09-19 15:16:19
0
1,767
Coldchain9
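Two things seem to be going on here. First, the exception is raised by the request call itself (e.g. `session.get(...)`), not by `raise_for_status()`, so the `try` must wrap the request. Second, `ssl.SSLError` and aiohttp's `ClientOSError` both derive from `OSError`, so one `except` clause can cover both, and since VPN-induced TLS alerts tend to be transient, a retry with backoff is a reasonable policy. A hedged stdlib sketch (the flaky endpoint is simulated; swap in your real aiohttp call):

```python
import asyncio
import ssl

async def fetch_with_retry(call, retries=3, delay=0.01):
    """Retry a coroutine factory when the transport dies with a TLS alert.
    ssl.SSLError is an OSError subclass, and aiohttp's ClientOSError also
    derives from OSError, so one except clause covers both."""
    last_exc = None
    for attempt in range(retries):
        try:
            return await call()
        except (ssl.SSLError, OSError) as exc:
            last_exc = exc
            await asyncio.sleep(delay * (attempt + 1))  # simple backoff
    raise last_exc

# Hypothetical flaky endpoint: fails twice with a TLS alert, then succeeds.
attempts = {"n": 0}
async def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ssl.SSLError(1, "TLSV1_ALERT_INTERNAL_ERROR")
    return "ok"

result = asyncio.run(fetch_with_retry(flaky))
```

With aiohttp, `call` would be something like `lambda: session.get(url)`, and the `await response` / `raise_for_status()` logic would live inside it.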
77,135,853
12,845,199
Extract date from weirdly formatted datetime using Polars
<pre><code>import polars as pl df = pl.DataFrame(['2023-08-01T06:13:24.448409', '2023-08-01T07:29:34.491027']) print(df.with_columns(pl.col('column_0').str.strptime(pl.Date,'%Y-%m-%d'))) </code></pre> <p>So I have the following mass of code, and for some ungodly reason I can't for the life of me extract the date from those given strings.</p> <p>In the following example I keep getting the error</p> <pre><code>exceptions.ComputeError: strict date parsing failed for 2 value(s) (2 unique): [&quot;2023-08-01T06:13:24.448409&quot;, &quot;2023-08-01T07:29:34.491027&quot;] </code></pre> <p>Any ideas on how I can extract the date from a datetime string in this format? And why do I get this error?</p>
<python><dataframe><datetime><python-polars>
2023-09-19 15:15:32
2
1,628
INGl0R1AM0R1
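The strict parse fails because `'%Y-%m-%d'` does not consume the `T06:13:24.448409` tail. The usual fix is to parse the full timestamp and then take the date component — in Polars, something along the lines of `pl.col('column_0').str.strptime(pl.Datetime, '%Y-%m-%dT%H:%M:%S%.f').dt.date()`, or slicing the first 10 characters with `.str.slice(0, 10)` before parsing as `pl.Date` (exact format syntax varies by Polars version, so treat these as assumptions to verify). The underlying parsing logic can be checked with the stdlib:

```python
from datetime import date, datetime

samples = ["2023-08-01T06:13:24.448409", "2023-08-01T07:29:34.491027"]

# Strict parsing with "%Y-%m-%d" fails because it leaves the time part
# unconsumed. Parse the full ISO timestamp first, then take the date.
dates = [datetime.fromisoformat(s).date() for s in samples]

# Equivalent with an explicit format string (mirrors what strptime needs):
dates2 = [datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f").date() for s in samples]
```

Either way, the format string has to account for every character of the input, fractional seconds included.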
77,135,788
165,260
ValueError: Shapes (None, 28) and (None, 28, 10) are incompatible
<p>I am trying to train model using keras sequential model and my code is below:</p> <pre><code>from tensorflow.keras import Input from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences # Example input and output sequences input_sequence = [&quot;Madhup is a good boy.&quot;, &quot;I am a large language model.&quot;] output_sequence = [&quot;Madgboy&quot;, &quot;Imllm&quot;] # Create tokenizers for input and output # input_tokenizer = Tokenizer(oov_token='&lt;START&gt;', lower=True) input_tokenizer = Tokenizer(char_level=True, lower=True) input_tokenizer.fit_on_texts(input_sequence) output_tokenizer = Tokenizer(char_level=True, lower=True) output_tokenizer.fit_on_texts(output_sequence) # Convert text to sequences of integers input_sequence_int = input_tokenizer.texts_to_sequences(input_sequence) output_sequence_int = output_tokenizer.texts_to_sequences(output_sequence) print(f&quot;Input Sequence : {input_sequence_int}\n Output Sequence : {output_sequence_int} \n&quot;) # Pad sequences to the same length max_input_length = max(len(seq) for seq in input_sequence_int) max_output_length = max(len(seq) for seq in output_sequence_int) max_sequence_length = max(max_input_length, max_output_length) print(&quot;Max Input Length &quot;, max_input_length) print(&quot;Max Output Length &quot;, max_output_length) print(&quot;Max Seq Length &quot;, max_sequence_length) input_sequence_padded = pad_sequences(input_sequence_int, maxlen=max_sequence_length, padding='post') output_sequence_padded = pad_sequences(output_sequence_int, maxlen=max_sequence_length, padding='post') print(&quot;Padded Input Sequence:&quot;, input_sequence_padded) print(&quot;Padded Output Sequence:&quot;, output_sequence_padded) print(output_tokenizer.word_index) import tensorflow as tf from tensorflow.keras.layers import Embedding, LSTM, Dense embedding_dim = 256 units = max_input_length # Calculate the input vocab size input_vocab_size = 
len(input_tokenizer.word_index) + 1 print(input_vocab_size) output_vocab_size = len(output_tokenizer.word_index) + 1 print(output_vocab_size) # Define the model model = tf.keras.Sequential([ Embedding(input_vocab_size, embedding_dim, input_length=max_input_length), LSTM(embedding_dim, return_sequences=True), Dense(output_vocab_size, activation='softmax') ]) # Compile the model model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) # Train the model model.fit(input_sequence_padded, output_sequence_padded, epochs=10) import numpy as np def generate_nickname(input_text): input_sequence = input_tokenizer.texts_to_sequences([input_text]) print(f&quot;Input Sequence: {input_sequence}&quot;) # input_sequence = pad_sequences(input_sequence, maxlen=max_sequence_length, padding='post') input_sequence = pad_sequences(input_sequence, maxlen=max_sequence_length, padding='post') print(f&quot;Padded Input Sequence: {input_sequence}&quot;) predicted_sequence = model.predict(input_sequence) max_arg = np.argmax(predicted_sequence, axis=-1) print(max_arg) predicted_nickname = output_tokenizer.sequences_to_texts(max_arg)[0] return predicted_nickname # Example usage input_text = &quot;Madhup is a good boy.&quot; predicted_nickname = generate_nickname(input_text) print(f&quot;Input Phrase: {input_text}&quot;) print(f&quot;Generated Nickname: {predicted_nickname}&quot;) </code></pre> <p>Here are the padded input and output sequences</p> <blockquote> <p>Padded Input Sequence: [[ 5 2 6 12 9 13 1 10 14 1 2 1 3 4 4 6 1 15 4 16 11 0 0 0 0 0 0 0] [10 1 2 5 1 2 1 7 2 17 3 8 1 7 2 18 3 9 2 3 8 1 5 4 6 8 7 11]] Padded Output Sequence: [[1 3 4 5 6 7 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] [9 1 2 2 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]</p> </blockquote> <p>The above code throws error at</p> <p><code>model.fit(input_sequence_padded, output_sequence_padded, epochs=10)</code></p> <pre><code>ValueError: Shapes (None, 28) and (None, 28, 10) are 
incompatible </code></pre> <p>I understand that i have to somehow reshape the output sequence before passing it to model. I am just confused how to do that</p> <p><strong>Note</strong> If I just pass <code>1</code> in place <code>output_vocab_size</code> the code will execute but that will of no use as the trained model will not return any output.</p> <p>All help is appreciated!</p>
<python><machine-learning><keras><sequential>
2023-09-19 15:08:05
1
8,124
Madhup Singh Yadav
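The shape mismatch arises because `categorical_crossentropy` expects one-hot targets of shape `(batch, seq_len, vocab_size)` while the padded targets are integer-coded `(batch, seq_len)`. Two common fixes: compile with `loss='sparse_categorical_crossentropy'` and keep the integer targets, or one-hot encode the targets (Keras' `to_categorical` does this). A dependency-free sketch of the encoding step, with toy values standing in for the real padded sequences:

```python
def one_hot_sequences(padded, vocab_size):
    """Turn an integer label matrix of shape (batch, seq_len) into
    (batch, seq_len, vocab_size) — the target shape that
    categorical_crossentropy expects when the model's softmax output is
    (batch, seq_len, vocab_size). The zero-code alternative is to compile
    with loss='sparse_categorical_crossentropy' instead."""
    return [
        [[1.0 if k == label else 0.0 for k in range(vocab_size)]
         for label in row]
        for row in padded
    ]

# Toy stand-in for output_sequence_padded (vocab_size would be
# output_vocab_size = len(output_tokenizer.word_index) + 1 = 10 here):
padded_output = [[1, 3, 0], [9, 1, 0]]
encoded = one_hot_sequences(padded_output, vocab_size=10)
```

Note that pad value 0 becomes a "real" class under either loss; masking padded positions is a separate refinement.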
77,135,749
5,150,025
I need to extract the config block as Python dict
<p>I have a dataform project and I need to extract the config block of .sqlx files and parse it as Python dictionary.</p> <p>I am trying using RegEx.</p> <p>RegEx: <code>config\s*{[^}]*}</code></p> <pre class="lang-py prettyprint-override"><code>import re config_pattern = r'config\s*{[^}]*}' config_match = re.search(config_pattern, sql_content, re.DOTALL) if config_match: config_block = config_match.group(0) print(config_block) else: print(&quot;Config block not found.&quot;) </code></pre> <p><strong>Input:</strong></p> <pre class="lang-sql prettyprint-override"><code>config { type:&quot;table&quot;, schema:&quot;xt_pto&quot;, name:&quot;xt_daily_pto&quot;, bigquery:{ partitionBy:&quot;BKDate&quot; }, tags:[ &quot;xt_daily_pto&quot;, &quot;budget&quot; ] } WITH latest_date AS ( SELECT MAX(dt) dt FROM ${ref('xt_daily_pto')} ) SELECT ... -- Rest of my query </code></pre> <p><strong>Desired Output:</strong></p> <pre class="lang-py prettyprint-override"><code>{ &quot;type&quot;:&quot;table&quot;, &quot;schema&quot;:&quot;xt_pto&quot;, &quot;name&quot;:&quot;xt_daily_pto&quot;, &quot;bigquery&quot;:{ &quot;partitionBy&quot;:&quot;BKDate&quot; }, &quot;tags&quot;:[ &quot;xt_daily_pto&quot;, &quot;budget&quot; ] } </code></pre> <p>But the RegEx <code>config\s*{[^}]*}</code> is matching the first <code>}</code> of bigquery key and output is truncated:</p> <pre><code>{ type:&quot;table&quot;, schema:&quot;xt_pto&quot;, name:&quot;xt_daily_pto&quot;, bigquery:{ partitionBy:&quot;BKDate&quot; } </code></pre>
<python><regex><dataform>
2023-09-19 15:03:10
1
1,123
Marcelo Gazzola
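A single `[^}]*` pattern cannot match nested braces, and Python's built-in `re` has no recursion (the third-party `regex` module's `(?R)` can do it, but isn't needed). One robust approach is to find `config {` with a regex and then walk a brace-depth counter to the matching close, then quote the bare keys so the block parses as JSON. A hedged sketch — the key-quoting step assumes well-behaved `.sqlx` config blocks (JSON-style string values, no colons inside strings):

```python
import re
import json

def extract_config(text):
    """Return the `config { ... }` block with *balanced* braces, which a
    plain pattern like `config\\s*{[^}]*}` cannot do."""
    m = re.search(r"config\s*{", text)
    if not m:
        return None
    start = m.end() - 1  # index of the opening brace
    depth = 0
    for i in range(start, len(text)):
        if text[i] == "{":
            depth += 1
        elif text[i] == "}":
            depth -= 1
            if depth == 0:
                return text[start : i + 1]
    return None  # unbalanced block

def config_to_dict(block):
    # Quote bare identifiers used as keys so json.loads accepts the block.
    # Fragile if string values ever contain `word:` — fine for typical .sqlx.
    jsonish = re.sub(r"([{\s,])(\w+)\s*:", r'\1"\2":', block)
    return json.loads(jsonish)

sql = 'config { type:"table", bigquery:{ partitionBy:"BKDate" }, tags:["a"] } SELECT 1'
cfg = config_to_dict(extract_config(sql))
```

The `re.DOTALL` flag from the question is no longer needed because the scan is character-by-character rather than pattern-based.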
77,135,623
1,014,217
_nanquantile_dispatcher() got an unexpected keyword argument 'method'
<p>I have a pandaas dataframe with about 6k rows, with a timestamp column which is hourly and a value which is a float.</p> <p>I am using DARTS for prediction and plotting</p> <pre><code>import pandas as pd from darts import TimeSeries df = dfTienen_final.toPandas() df_no_duplicates = df.drop_duplicates(subset=['belgium_time']) series = TimeSeries.from_dataframe(df_no_duplicates, &quot;belgium_time&quot;, &quot;avg_value&quot;, freq=&quot;H&quot;) # Determine the number of rows in the DataFrame num_rows = len(series) # Set aside a percentage of the rows for validation (e.g., 10%) validation_percentage = 10 # Adjust as needed num_validation_points = (validation_percentage * num_rows) // 100 # Calculate the number of validation points # Set aside the calculated number of rows as a validation series train, val = series[:-num_validation_points], series[-num_validation_points:] from darts.models import ExponentialSmoothing model = ExponentialSmoothing() model.fit(train) prediction = model.predict(len(val), num_samples=1000) </code></pre> <p>and then this:</p> <pre><code>import matplotlib.pyplot as plt series.plot() prediction.plot(label=&quot;forecast&quot;, low_quantile=0.05, high_quantile=0.95) plt.legend() </code></pre> <p>However I get this error in the plot</p> <p>_nanquantile_dispatcher() got an unexpected keyword argument 'method'</p> <p>traceback</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) File &lt;command-3591413550716749&gt;, line 4 1 import matplotlib.pyplot as plt 3 series.plot() ----&gt; 4 prediction.plot(label=&quot;forecast&quot;, low_quantile=0.05, high_quantile=0.95) 5 plt.legend() File /local_disk0/.ephemeral_nfs/envs/pythonEnv-c1295c6b-8422-4d69-982b-173176ff6a3d/lib/python3.10/site-packages/darts/timeseries.py:3820, in TimeSeries.plot(self, new_plot, central_quantile, low_quantile, high_quantile, default_formatting, label, max_nr_components, ax, *args, 
**kwargs) 3818 central_series = comp.mean(dim=DIMS[2]) 3819 else: -&gt; 3820 central_series = comp.quantile(q=central_quantile, dim=DIMS[2]) 3821 else: 3822 central_series = comp.mean(dim=DIMS[2]) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-c1295c6b-8422-4d69-982b-173176ff6a3d/lib/python3.10/site-packages/xarray/core/dataarray.py:5123, in DataArray.quantile(self, q, dim, method, keep_attrs, skipna, interpolation) 5015 def quantile( 5016 self: T_DataArray, 5017 q: ArrayLike, (...) 5022 interpolation: QuantileMethods | None = None, 5023 ) -&gt; T_DataArray: 5024 &quot;&quot;&quot;Compute the qth quantile of the data along the specified dimension. 5025 5026 Returns the qth quantiles(s) of the array elements. (...) 5120 The American Statistician, 50(4), pp. 361-365, 1996 5121 &quot;&quot;&quot; -&gt; 5123 ds = self._to_temp_dataset().quantile( 5124 q, 5125 dim=dim, 5126 keep_attrs=keep_attrs, 5127 method=method, 5128 skipna=skipna, 5129 interpolation=interpolation, 5130 ) 5131 return self._from_temp_dataset(ds) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-c1295c6b-8422-4d69-982b-173176ff6a3d/lib/python3.10/site-packages/xarray/core/dataset.py:8035, in Dataset.quantile(self, q, dim, method, numeric_only, keep_attrs, skipna, interpolation) 8029 if name not in self.coords: 8030 if ( 8031 not numeric_only 8032 or np.issubdtype(var.dtype, np.number) 8033 or var.dtype == np.bool_ 8034 ): -&gt; 8035 variables[name] = var.quantile( 8036 q, 8037 dim=reduce_dims, 8038 method=method, 8039 keep_attrs=keep_attrs, 8040 skipna=skipna, 8041 ) 8043 else: 8044 variables[name] = var File /local_disk0/.ephemeral_nfs/envs/pythonEnv-c1295c6b-8422-4d69-982b-173176ff6a3d/lib/python3.10/site-packages/xarray/core/variable.py:2292, in Variable.quantile(self, q, dim, method, keep_attrs, skipna, interpolation) 2288 axis = np.arange(-1, -1 * len(dim) - 1, -1) 2290 kwargs = {&quot;q&quot;: q, &quot;axis&quot;: axis, &quot;method&quot;: method} -&gt; 2292 result = apply_ufunc( 2293 _wrapper, 
2294 self, 2295 input_core_dims=[dim], 2296 exclude_dims=set(dim), 2297 output_core_dims=[[&quot;quantile&quot;]], 2298 output_dtypes=[np.float64], 2299 dask_gufunc_kwargs=dict(output_sizes={&quot;quantile&quot;: len(q)}), 2300 dask=&quot;parallelized&quot;, 2301 kwargs=kwargs, 2302 ) 2304 # for backward compatibility 2305 result = result.transpose(&quot;quantile&quot;, ...) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-c1295c6b-8422-4d69-982b-173176ff6a3d/lib/python3.10/site-packages/xarray/core/computation.py:1207, in apply_ufunc(func, input_core_dims, output_core_dims, exclude_dims, vectorize, join, dataset_join, dataset_fill_value, keep_attrs, kwargs, dask, output_dtypes, output_sizes, meta, dask_gufunc_kwargs, *args) 1205 # feed Variables directly through apply_variable_ufunc 1206 elif any(isinstance(a, Variable) for a in args): -&gt; 1207 return variables_vfunc(*args) 1208 else: 1209 # feed anything else through apply_array_ufunc 1210 return apply_array_ufunc(func, *args, dask=dask) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-c1295c6b-8422-4d69-982b-173176ff6a3d/lib/python3.10/site-packages/xarray/core/computation.py:761, in apply_variable_ufunc(func, signature, exclude_dims, dask, output_dtypes, vectorize, keep_attrs, dask_gufunc_kwargs, *args) 756 if vectorize: 757 func = _vectorize( 758 func, signature, output_dtypes=output_dtypes, exclude_dims=exclude_dims 759 ) --&gt; 761 result_data = func(*input_data) 763 if signature.num_outputs == 1: 764 result_data = (result_data,) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-c1295c6b-8422-4d69-982b-173176ff6a3d/lib/python3.10/site-packages/xarray/core/variable.py:2286, in Variable.quantile.&lt;locals&gt;._wrapper(npa, **kwargs) 2284 def _wrapper(npa, **kwargs): 2285 # move quantile axis to end. 
required for apply_ufunc -&gt; 2286 return np.moveaxis(_quantile_func(npa, **kwargs), 0, -1) File &lt;__array_function__ internals&gt;:4, in nanquantile(*args, **kwargs) TypeError: _nanquantile_dispatcher() got an unexpected keyword argument 'method' </code></pre> <p>package versions: darts 0.26.0 pandas 2.0.3 numpy 1.26.0</p> <p>some data</p> <pre><code>----+-----+-----+-------------------+------------------+--------------------+----------+------------------+------------------+------------------+------------------+------+-------+------------------+-------------------+-------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+---------+----------------------+ | kp| min| max| timestamp_hour| avg_value| value_diff|percentage| moving_avg_7h| moving_avg_30h| 5sma| mean|stddev|z_score| smoothed_value| dt_iso|rain_1h| belgium_time| Rain_cumulative_2h| Rain_cumulative_3h| Rain_cumulative_4h| Rain_cumulative_5h| Rain_cumulative_6h|dry_group|dry_hours_within_group| +----+-----+-----+-------------------+------------------+--------------------+----------+------------------+------------------+------------------+------------------+------+-------+------------------+-------------------+-------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+---------+----------------------+ |0090|150.0|848.0|2022-07-13 10:00:00|112.41159858703614| 0.0| 0.0|112.41159858703614|112.41159858703614|112.41159858703614|112.41159858703614| 0.0| 0.0|112.41159858703614|2022-07-13 12:00:00| 0.0|2022-07-13 10:00:00| 0.0| 0.0| 0.0| 0.0| 0.0| NULL| 0| |0090|150.0|848.0|2022-07-13 11:00:00|108.46443377043072| -3.9471648166054223| 0.0|110.43801617873342|110.43801617873342|110.43801617873342|110.43801617873342| 0.0| 0.0|110.43801617873342|2022-07-13 13:00:00| 0.0|2022-07-13 11:00:00| 0.0| 0.0| 0.0| 0.0| 0.0| NULL| 1| |0090|150.0|848.0|2022-07-13 12:00:00|109.19111235245414| 
0.7266785820234247| 0.0|110.02238156997366|110.02238156997366|110.02238156997366|110.02238156997366| 0.0| 0.0|110.02238156997366|2022-07-13 14:00:00| 0.0|2022-07-13 12:00:00| 0.0| 0.0| 0.0| 0.0| 0.0| NULL| 2| |0090|150.0|848.0|2022-07-13 13:00:00|112.27772883365029| 3.0866164811961454| 0.0| 110.5862183858928| 110.5862183858928| 110.5862183858928| 110.5862183858928| 0.0| 0.0| 110.5862183858928|2022-07-13 15:00:00| 0.0|2022-07-13 13:00:00| 0.0| 0.0| 0.0| 0.0| 0.0| NULL| 3| |0090|150.0|848.0|2022-07-13 14:00:00|111.91993414837381|-0.35779468527647396| 0.0|110.85296153838901|110.85296153838901|110.85296153838901|110.85296153838901| 0.0| 0.0|110.85296153838901|2022-07-13 16:00:00| 0.0|2022-07-13 14:00:00| 0.0| 0.0| 0.0| 0.0| 0.0| NULL| 4| </code></pre>
<python><pandas><matplotlib><time-series><u8darts>
2023-09-19 14:48:46
0
34,314
Luis Valencia
77,135,614
3,936,496
Plot graph with area of two data set and get probability
<p>I am trying to plot a path of rotating rod ends as it moves forward in space. I made a vector that rotates around the centre with certain rotational speed and then move the vector to the place next place with certain translational speed. I made the whole function work with time-steps (for loop as time-step), so I can make it more accurate if want. So the result is this: <a href="https://i.sstatic.net/U388h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U388h.png" alt="enter image description here" /></a></p> <pre><code>def rotate_vector(vector, angle): x = vector[0] * math.cos(angle) - vector[1] * math.sin(angle) y = vector[0] * math.sin(angle) + vector[1] * math.cos(angle) return [x, y] fig = go.Figure() rod_1 = np.array([0, -(1.0/2)]) rod_2 = np.array([0, (1.0/2)]) center = np.array([-0.5,0.0]) fig.update_layout(yaxis_range=[-5, 5]) fig.update_layout(xaxis_range=[-2,12]) fig.update_layout(width=1000, height=500) fig.update_layout(showlegend=False) #plt.autoscale(False) speed = 13 #m/s rot_speed = (math.pi*2*1.5)/(11.50 -10.69) #deg/s step_size = 0.01 rod_end_1 = {&quot;x&quot;: [], &quot;y&quot;: []} rod_end_2 = {&quot;x&quot;: [], &quot;y&quot;: []} for i in np.arange(0, 1, step_size): #s fig.add_trace(go.Scatter(x = [center[0] ,rod_1[0] + center[0]], y = [center[1], rod_1[1]])) fig.add_trace(go.Scatter(x = [center[0] ,rod_2[0] + center[0]], y = [center[1], rod_2[1]])) rod_end_1[&quot;x&quot;].append(rod_1[0] + center[0]) rod_end_1[&quot;y&quot;].append(rod_1[1] + center[1]) rod_end_2[&quot;x&quot;].append(rod_2[0] + center[0]) rod_end_2[&quot;y&quot;].append(rod_2[1] + center[1]) rod_1 = rotate_vector(rod_1, rot_speed * step_size) rod_2 = rotate_vector(rod_2, rot_speed * step_size) center[0] += step_size * speed fig.show() </code></pre> <p>With this I get the paths of the rod ends. 
What I want to do, is fill this whole path, but I can't get it like I want it: <a href="https://i.sstatic.net/VsuWw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VsuWw.png" alt="enter image description here" /></a> I want to &quot;fill&quot; look like the first picture with fill, but it doesn't look like it. Here is the code with using plotly:</p> <pre><code>fig2 = go.Figure() fig2.update_layout(yaxis_range=[-1, 1]) fig2.update_layout(xaxis_range=[-1,15]) fig2.update_layout(width=1000, height=500) fig2.add_trace(go.Scatter(x = rod_end_1[&quot;x&quot;], y = rod_end_1[&quot;y&quot;], fill='tonexty')) fig2.add_trace(go.Scatter(x = rod_end_2[&quot;x&quot;], y = rod_end_2[&quot;y&quot;], fill='tonexty')) fig2.show() </code></pre> <p>Any good ideas how to make it look like the first one with fill?</p> <p>In the end I would like to get the &quot;probability&quot; or precent of area covert from rods moving point of view. So in the centre it would be 100% and on the positive y-axis it would be little lower than in negative y-axis. Like Gaussian plot.</p> <p>If anyone has a good idea on how to implement this, or any improvements to the code I've currently written, that would be much appreciated. Thanks!</p> <p>Here is an example what I'm searching:</p> <p><a href="https://i.sstatic.net/e5efQm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e5efQm.jpg" alt="enter image description here" /></a></p> <p>So the area that rod covers by moving, will be filled and in the bottom there would be a present level of how much area is covered.</p>
<python><plot><plotly><fill>
2023-09-19 14:47:12
2
401
pinq-
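For the fill problem specifically: `fill='tonexty'` shades vertically between a trace and the previous one, which misbehaves on self-overlapping looping paths like these. A more reliable approach is to build one *closed* outline — walk the first rod-end path forwards, then the second backwards — and use `fill='toself'` on a single trace. A hedged sketch of the outline construction with toy coordinates (the real inputs would be the question's `rod_end_1` / `rod_end_2` dicts):

```python
def closed_polygon(path_top, path_bottom):
    """Build one closed outline from two open paths: walk the first path
    forwards, then the second backwards, so Plotly's fill='toself' shades
    the enclosed region instead of filling toward the axis ('tonexty')."""
    xs = path_top["x"] + path_bottom["x"][::-1]
    ys = path_top["y"] + path_bottom["y"][::-1]
    # Close the loop explicitly back to the starting point.
    xs.append(xs[0])
    ys.append(ys[0])
    return xs, ys

rod_end_1 = {"x": [0.0, 1.0, 2.0], "y": [0.5, 0.6, 0.5]}
rod_end_2 = {"x": [0.0, 1.0, 2.0], "y": [-0.5, -0.6, -0.5]}
xs, ys = closed_polygon(rod_end_1, rod_end_2)
# In Plotly: fig2.add_trace(go.Scatter(x=xs, y=ys, fill='toself', mode='lines'))
```

For the coverage-vs-y "probability" at the end, one route (an assumption, not tested here) is rasterizing: bin the swept region onto a y-grid per x-slice and count covered cells per row, which yields the bell-shaped coverage profile sketched in the question.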
77,135,307
991,045
How to add custom parameter to pip create environment in VS code?
<p>I inherited a python project which needs some dependencies from a non-default repo. I want to add a custom parameter (address of the repo) to the .venv creation action in VSCode.</p> <p>The project uses pip. Adding <code>-f https://address.of.repo/stable.html</code> to the <code>pip install ...</code> command successfully builds the .venv from the command line (gitbash on windows 10), but I can't for the life of me find how to add the same parameter to the <code>Create Environment</code> VSCode action.</p>
<python><visual-studio-code><pip>
2023-09-19 14:09:52
0
470
iDPWF1
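VS Code's Create Environment action ultimately shells out to plain `pip`, which reads the standard pip configuration files — so one way to make the extra repo apply without editing the command is to put it in pip's config (or, since pip supports options inside requirements files, as a `--find-links` line at the top of `requirements.txt`). A hedged sketch, reusing the question's placeholder URL:

```ini
; %APPDATA%\pip\pip.ini on Windows (~/.config/pip/pip.conf on Linux/macOS)
[install]
find-links = https://address.of.repo/stable.html
```

Or equivalently, as the first line of `requirements.txt` so it travels with the project: `--find-links https://address.of.repo/stable.html`. Either should be picked up when VS Code builds the `.venv` from that requirements file.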
77,135,303
1,852,526
Pandas insert a blank row after each row in Excel
<p>New to Pandas firstly. I am using the following code to write to an Excel file using Pandas. Please see the line <code>worksheet.write(0, col_num, value, header_format)</code>. Question is, how can I insert a blank row after each row. For instance I want something like</p> <pre><code>Col1 Col2 Col3 Col4 (This is the Header) --------------Blank row C1 C2 C3 C4 --------------Blank row C1 C2 C3 C4 --------------Blank row C1 C2 C3 C4 </code></pre> <p>This is what I have</p> <pre><code>def create_excel_with_format(headers,values,full_file_name_with_path): df = pd.DataFrame(data=values,columns=headers) with pd.ExcelWriter(full_file_name_with_path) as writer: df.to_excel(writer, index=False) workbook = writer.book worksheet = writer.sheets['Sheet1'] #font_fmt = workbook.add_format({'font_name': 'Arial', 'font_size': 10}) header_format = workbook.add_format({ 'bold': False, 'border': False, 'text_wrap': True}) for col_num, value in enumerate(df.columns.values): worksheet.write(0, col_num, value, header_format) font_fmt = workbook.add_format({'font_name': 'Arial', 'font_size': 13}) worksheet.set_row(0, None, font_fmt) </code></pre>
<python><pandas><excel>
2023-09-19 14:09:35
1
1,774
nikhil
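Since the sheet is already being written cell-by-cell for the header, one straightforward route is to skip `to_excel` for the data rows too and write them yourself, interleaving a blank row after each data row (row 0 stays the header, so the first blank naturally lands right under it). The interleaving itself is plain Python:

```python
def interleave_blank_rows(rows, width):
    """Return the row list with an empty row after each data row —
    the layout the question asks for (header, blank, data, blank, ...)."""
    blank = [""] * width
    out = []
    for row in rows:
        out.append(list(row))
        out.append(list(blank))
    return out

rows = [["C1", "C2", "C3", "C4"], ["D1", "D2", "D3", "D4"]]
spaced = interleave_blank_rows(rows, width=4)
# With the question's xlsxwriter worksheet, each list can then be written as
#   for i, values in enumerate(spaced):
#       worksheet.write_row(i + 1, 0, values)   # +1 keeps row 0 as header
```

(`write_row` is xlsxwriter's bulk row writer; an alternative pandas-only trick is to reindex the DataFrame onto a stepped index before `to_excel`, but that changes the index column, so the manual write is sketched here instead.)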
77,135,294
11,252,662
Filtering data from Spark dataframe based on datatype
<p>I am extracting data from DynamoDB based on date filter using Pyspark Glue job.</p> <p><strong>updated</strong> column here is in <strong>epoch</strong> format and is of long data type in DynamoDB.</p> <p><strong>updated</strong> column also has some bad data that are of <strong>struct</strong> data type. I would like to filter out struct data type and process only data that are of long data type.</p> <p>Bad data schema:</p> <pre><code>|-- updated: struct (nullable = true) | |-- long: long (nullable = true) | |-- string: string (nullable = true) </code></pre> <p>Good data schema:</p> <pre><code>|-- updated: long (nullable = true) </code></pre> <p>Please guide me how I can filter out struct data type and retain only number data type records from a DynamoDB column.</p> <pre><code>datasource = glue_context.create_dynamic_frame.from_options( connection_type=&quot;dynamodb&quot;, connection_options={&quot;dynamodb.input.tableName&quot;: &quot;retail&quot;, &quot;dynamodb.throughput.read.percent&quot;: &quot;0.5&quot;}, transformation_ctx=&quot;datasource&quot;) ddb_src_df = datasource.toDF() ddb_src_df = ddb_src_df.select(from_unixtime((ddb_src_df.updated.cast('bigint')/1000)).cast('date').alias('updated')) ddb_fnl_df = ddb_src_df.filter(ddb_src_df.updated.substr(1,10) &lt; current_date) </code></pre> <p><strong>Error message:</strong></p> <p><code>cannot resolve 'CAST(`updated` AS BIGINT)' due to data type mismatch:cannot cast struct&lt;long:bigint,string:string&gt; to bigint</code></p>
<python><pyspark><aws-glue>
2023-09-19 14:08:17
1
397
vvazza
77,135,230
16,659,468
How to fix errors when installing modules in a Docker image?
<p>I encountered an issue while building a Dockerfile: the message &quot;ERROR: failed to solve: process &quot;/bin/sh -c pip install streamlit&quot; did not complete successfully: exit code: 1&quot; popped up, and I can't figure out why I'm struggling to install the streamlit module into my image via the file. I've included a copy of my Dockerfile.</p> <pre><code>FROM python:3-alpine3.15 WORKDIR /myapp_dir COPY . /myapp_dir/ RUN pip install --upgrade pip RUN pip install python-dotenv RUN pip install streamlit RUN pip install PyPDF2 Run pip install langchain EXPOSE 8501 CMD streamlit run app2.py </code></pre> <p>So help me figure out what the error is, and also how I can see the errors encountered while building the image. Is there any way to view them? <a href="https://i.sstatic.net/RvsSt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RvsSt.png" alt="enter image description here" /></a></p>
<python><pip><dockerfile><streamlit>
2023-09-19 14:00:00
1
685
Saksham Paliwal
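A likely culprit here (an assumption, since the collapsed BuildKit output hides the real error) is the Alpine base image: musl libc means many of streamlit's compiled dependencies have no prebuilt wheels, so pip falls back to building from source, which fails without a compiler toolchain. A hedged Dockerfile sketch that sidesteps this by switching to a Debian-based slim image and consolidating the installs:

```dockerfile
# python:3-alpine uses musl libc, so packages with compiled dependencies
# often have no prebuilt wheels and pip tries to build them from source,
# failing without gcc/musl-dev. A Debian-based "slim" image avoids that;
# a requirements.txt keeps the installs in one cached layer.
FROM python:3.11-slim
WORKDIR /myapp_dir
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8501
CMD ["streamlit", "run", "app2.py"]
```

To actually see the full pip error during the build, `docker build --progress=plain .` prints every step's output instead of the collapsed BuildKit view. If staying on Alpine is a requirement, adding something like `RUN apk add --no-cache build-base` before the pip installs may work, though the exact package list depends on which dependency fails to compile.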
77,134,818
2,881,345
Get differences between 2 dataframes
<p>In Python, I need to get differences between 2 dataframes having a different number of columns and rows.</p> <p>A column &quot;Pk&quot; is present in both dataframes. The schemas are not known in advance.</p> <p>The output should be a new dataframe containing <strong>only</strong> the rows and columns having diferent values.</p> <p>For example:</p> <p><code>df_expected</code></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>pk</th> <th>column1</th> <th>amount1</th> <th>amount2</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>A</td> <td>11</td> <td>101</td> </tr> <tr> <td>2</td> <td>B</td> <td>12</td> <td>102</td> </tr> <tr> <td>3</td> <td>C</td> <td><strong>13</strong></td> <td>103</td> </tr> <tr> <td>4</td> <td>D</td> <td>14</td> <td><strong>104</strong></td> </tr> </tbody> </table> </div> <p><code>df_actual</code>:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>pk</th> <th>column1</th> <th>extra_column</th> <th>amount1</th> <th>amount2</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>A</td> <td>AA</td> <td>11</td> <td>101</td> </tr> <tr> <td>2</td> <td>B</td> <td>BB</td> <td>12</td> <td>102</td> </tr> <tr> <td>3</td> <td>C</td> <td>CC</td> <td><strong>13.1</strong></td> <td>103</td> </tr> <tr> <td>4</td> <td>D</td> <td>DD</td> <td>14</td> <td><strong>104.1</strong></td> </tr> </tbody> </table> </div> <p><code>df_output</code> should be:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>pk</th> <th>amount1</th> <th>amount2</th> </tr> </thead> <tbody> <tr> <td>3</td> <td>Actual: 13.1, Expected: 13</td> <td></td> </tr> <tr> <td>4</td> <td></td> <td>Actual: 104.1, Expected 104</td> </tr> </tbody> </table> </div> <p>Applied Rules:</p> <ul> <li>Rows 1 &amp; 2 excluded as identical in both dataframes for common columns</li> <li>extra_column exluded as not present in first data frame</li> <li>column1 excluded as identical in both data frames</li> </ul>
<python><dataframe><pyspark>
2023-09-19 13:05:33
2
3,437
bjnr
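In PySpark the usual shape of this is a join on `pk` followed by per-column comparisons over the shared columns — but the selection logic is easier to pin down in plain Python first. A stdlib sketch over lists of dicts, using the column names and values from the question's example (the PySpark version would build the same `Actual: ..., Expected: ...` strings with `F.when(...).otherwise(...)` per shared column):

```python
def diff_rows(expected, actual, pk="pk"):
    """Compare two row sets on their shared columns (excluding the key)
    and report only the cells that differ, keyed by the primary key."""
    shared = (set(expected[0]) & set(actual[0])) - {pk}  # drops extra_column
    actual_by_pk = {row[pk]: row for row in actual}
    out = []
    for row in expected:
        other = actual_by_pk.get(row[pk])
        if other is None:
            continue
        diffs = {
            col: f"Actual: {other[col]}, Expected: {row[col]}"
            for col in shared
            if row[col] != other[col]
        }
        if diffs:  # identical rows (pk 1 and 2) produce no output
            out.append({pk: row[pk], **diffs})
    return out

expected = [{"pk": 3, "column1": "C", "amount1": 13, "amount2": 103},
            {"pk": 4, "column1": "D", "amount1": 14, "amount2": 104}]
actual = [{"pk": 3, "column1": "C", "extra_column": "CC", "amount1": 13.1, "amount2": 103},
          {"pk": 4, "column1": "D", "extra_column": "DD", "amount1": 14, "amount2": 104.1}]
result = diff_rows(expected, actual)
```

Columns identical everywhere (like `column1`) appear only in rows where some other shared column differs; dropping them entirely would be a second pass over `result`.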
77,134,706
7,437,143
"Event loop is closed! Is Playwright already stopped?"
<h2>Context</h2> <p>After initialisation of a playwright browser object in function <code>initialise_playwright_browsercontroller</code>, I try to use its <code>Page</code> object in another function. However, that yields get error:</p> <blockquote> <p>&quot;Event loop is closed! Is Playwright already stopped?&quot;</p> </blockquote> <p>If I use the <code>Page</code> object in the same function (and <code>with</code> statement) in which it is created, the error does not occur.</p> <h2>MWE</h2> <p>Below is the an MWE that demonstrates the error:</p> <pre class="lang-py prettyprint-override"><code>from playwright.sync_api import sync_playwright from playwright.sync_api._generated import Locator # type: ignore[import] from playwright.sync_api._generated import Browser, Page from typeguard import typechecked @typechecked def initialise_playwright_browsercontroller( *, start_url: str, ) -&gt; tuple[Browser, Page]: &quot;&quot;&quot;Creates a Playwright browser, opens a new page, and navigates to a specified URL. Returns: tuple[Browser, Page]: A tuple containing the browser and page objects. &quot;&quot;&quot; with sync_playwright() as p: for browser_type in [p.chromium, p.firefox, p.webkit]: if ( browser_type.name != &quot;webkit&quot; and browser_type.name == &quot;firefox&quot; ): browser = browser_type.launch() # Create a new page and navigate to the URL page = browser.new_page() page.goto(start_url) # Return the browser and page objects return browser, page raise ValueError(&quot;Error: Could not find browser.&quot;) @typechecked def do_something_on_webpage() -&gt; None: &quot;&quot;&quot;Tries to do something with the object received from the the browser.&quot;&quot;&quot; # Declare the browser and page objects. browser: Browser page: Page # Specify the website to go to. 
start_url: str = &quot;https://github.com&quot; browser, page = initialise_playwright_browsercontroller( start_url=start_url ) print(f&quot;page url = {page.url}&quot;) sign_in_button: Locator = page.locator(&quot;Sign in&quot;) sign_in_button.click() print(&quot;Done.&quot;) do_something_on_webpage() </code></pre> <h2>Question</h2> <p>How can I create the <code>Page</code> object elsewhere, and pass it around and use it in other functions?</p>
<python><playwright><playwright-python>
2023-09-19 12:50:37
3
2,887
a.t.
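The root cause is that `sync_playwright()` is a context manager: the moment the `with` block in `initialise_playwright_browsercontroller` is left (even via `return`), Playwright is stopped, so any `Browser`/`Page` handed out of it is already dead — hence "Event loop is closed". The documented escape hatch is `p = sync_playwright().start()` with a matching `p.stop()` when finished (hedged: confirm against your Playwright version's docs). The lifetime issue itself can be demonstrated without Playwright:

```python
from contextlib import contextmanager

class Session:
    """Stands in for the objects sync_playwright() hands out:
    usable only while the underlying machinery is running."""
    def __init__(self):
        self.running = True
    def url(self):
        if not self.running:
            raise RuntimeError("Event loop is closed! Is Playwright already stopped?")
        return "https://github.com"

@contextmanager
def sync_session():
    s = Session()
    try:
        yield s
    finally:
        s.running = False  # what sync_playwright()'s __exit__ does: stop everything

def broken():
    # Anti-pattern from the question: returning objects out of the `with`
    # block. The context manager still exits on `return`, killing the session.
    with sync_session() as s:
        return s

def working():
    # Keep the resource alive past the function: create it without `with`.
    # Playwright analogue: p = sync_playwright().start(); ...; later p.stop()
    return Session()

dead = broken()
alive = working()
try:
    dead.url()
    dead_usable = True
except RuntimeError:
    dead_usable = False
```

With the `.start()`/`.stop()` pattern, the function would return the playwright handle alongside `browser` and `page` so the caller can stop everything when done.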
77,134,697
7,231,968
Redmail: Python sending email too slow
<p>I was first using smtplib to send email but then shifted to redmail because it was more concise. (smptlib also took time sending email)</p> <p>this is how I am doing it</p> <pre><code>from redmail import outlook outlook.username = 'XXXXXXX' outlook.password = 'XXXXXX' def send_email(subject, body, to_email): try: outlook.send( subject=subject, receivers=[to_email], html=body ) print(f&quot;Email sent successfully to {to_email}&quot;) except Exception as e: print(f&quot;Email to {to_email} could not be sent. Error: {e}&quot;) try: obj = { &quot;name&quot;: &quot;John&quot;, &quot;start&quot; : &quot;Hi. Hope you are doing good&quot;, &quot;end&quot;: &quot;Regards&quot; } for index, recipient in enumerate(data['emails']): obj[&quot;name&quot;] = data['names'][index] subject = 'Report' body = generate_html(obj) start_time = time.time() msg = send_email(subject, body, recipient) print(f&quot;Email sent successfully to {recipient}&quot;) elapsed_time = time.time() - start_time print(elapsed_time) except Exception as e: print(f&quot;An error occurred: {e}&quot;) </code></pre> <p>Right now I ran this for two emails in the array and this is the time elapsed in seconds I get for both in the loop:</p> <p>17.251704931259155</p> <p>19.297873735427856</p> <p>Why is it so much? I have to loop through many emails. How to make it faster?</p>
<python><smtp>
2023-09-19 12:49:54
0
323
SunAns
77,134,663
12,468,387
Selenium deletes chrome profiles
<p>Selenium deletes chrome profiles. It happens ~1 in 30 times</p> <p>My code:</p> <pre><code>from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager import os, sys, time, json, glob from selenium.webdriver.common.action_chains import ActionChains from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as ec import shutil,requests chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--no-sandbox') chrome_options.add_argument(r'user-data-dir=C:\Users\Danzel\AppData\Local\Google\Chrome\User Data') chrome_options.add_argument(r'profile-directory=Profile 9') chrome_options.add_argument(&quot;--remote-debugging-port=9222&quot;) # this chrome_options.add_argument('--disable-blink-features=AutomationControlled') chrome_options.add_argument('--disable-web-security') chrome_options.add_argument('--disable-setuid-sandbox') chrome_options.add_argument('--allow-running-insecure-content') try: driver = webdriver.Chrome(ChromeDriverManager().install(),chrome_options=chrome_options) except ValueError: driver = webdriver.Chrome(ChromeDriverManager(version=&quot;116.0.5845.96&quot;).install(),chrome_options=chrome_options) #driver = webdriver.Chrome(chromedriver_autoinstaller.install(), options=chrome_options) driver.implicitly_wait(1) driver.get(&quot;https://www.google.com/&quot;) #some code driver.close() try: [x.kill() for x in psutil.process_iter() if 'chrome' in x.name().lower()] except psutil.NoSuchProcess: pass </code></pre> <p>And i get empty browser: <a href="https://i.sstatic.net/wLQkx.png" rel="noreferrer"><img src="https://i.sstatic.net/wLQkx.png" alt="enter image description here" /></a></p> <p>However, the profiles are still located in the \AppData\Local\Google\Chrome\User Data folder: <a href="https://i.sstatic.net/jz1k4.png" rel="noreferrer"><img 
src="https://i.sstatic.net/jz1k4.png" alt="enter image description here" /></a></p> <p>Why were they removed from the browser?</p>
<python><python-3.x><google-chrome><selenium-webdriver>
2023-09-19 12:44:06
2
449
Denzel
77,134,662
19,238,204
How to Draw 3D Synthesis of Fourier Series
<p>I saw this by using Tikz: <a href="https://tikz.net/fourier_series/" rel="nofollow noreferrer">https://tikz.net/fourier_series/</a></p> <p>I want to create this 3D synthesis of Fourier Series: <a href="https://i.sstatic.net/mCWZL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mCWZL.png" alt="1" /></a></p> <p>with Python 3D plot, there is matplotlib that is able to do that.</p> <p>My MWE / What I have achieved is:</p> <pre><code># https://pythonnumericalmethods.berkeley.edu/notebooks/chapter24.02-Discrete-Fourier-Transform.html # Generate 3 sine waves with frequencies 1 Hz, 4 Hz, and 7 Hz, # amplitudes 3, 1 and 0.5, and phase all zeros. # Add this 3 sine waves together with a sampling rate 100 Hz import matplotlib.pyplot as plt import numpy as np plt.style.use('seaborn-poster') # sampling rate sr = 100 # sampling interval ts = 1.0/sr t = np.arange(0,1,ts) freq = 1. x = 3*np.sin(2*np.pi*freq*t) x1 = 3*np.sin(2*np.pi*freq*t) freq = 4 x += np.sin(2*np.pi*freq*t) x2 = np.sin(2*np.pi*freq*t) freq = 7 x += 0.5* np.sin(2*np.pi*freq*t) x3 = 0.5* np.sin(2*np.pi*freq*t) # Write a function DFT(x) which takes in one argument, # x - input 1 dimensional real-valued signal. # The function will calculate the DFT of the signal and return the DFT values. def DFT(x): &quot;&quot;&quot; Function to calculate the discrete Fourier Transform of a 1D real-valued signal x &quot;&quot;&quot; N = len(x) n = np.arange(N) k = n.reshape((N, 1)) e = np.exp(-2j * np.pi * k * n / N) X = np.dot(e, x) return X X = DFT(x) # calculate the frequency N = len(X) n = np.arange(N) T = N/sr freq = n/T n_oneside = N//2 # get the one side frequency f_oneside = freq[:n_oneside] # normalize the amplitude X_oneside =X[:n_oneside]/n_oneside # subplot(2, 2, 3)), the axes will go to the third section of the 2x2 matrix # i.e, to the bottom-left corner. 
plt.figure(figsize = (12, 6)) plt.subplot(3, 1, 1) plt.plot(t, x, 'r') plt.ylabel('Amplitude') plt.subplot(3, 1, 2) plt.stem(f_oneside, abs(X_oneside), 'b', \ markerfmt=&quot; &quot;, basefmt=&quot;-b&quot;) plt.xlabel('Freq (Hz)') plt.ylabel('DFT Amplitude |X(freq)|') plt.xlim(0, 10) plt.subplot(3, 2, 5) plt.plot(t, x1, 'g') plt.ylabel('Amplitude') plt.subplot(3, 2, 6) plt.plot(t, x2, 'g') plt.ylabel('Amplitude') plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.sstatic.net/eClXZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eClXZ.png" alt="2" /></a></p> <p>The plots are still separate 2D panels, each of them:</p> <ol> <li>The single sine wave</li> <li>The Fourier Series (the sum of all the sine waves from number 1)</li> <li>The DFT plot</li> </ol> <p>It is not in 3D, so I hope someone can help me with this.</p>
<python><numpy><matplotlib><fft><matplotlib-3d>
2023-09-19 12:43:46
1
435
Freya the Goddess
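One way to combine the panels from the question above into a single figure, in the spirit of the TikZ picture, is a 3D axes where each component sine is drawn in the plane of its own frequency, the synthesised signal sits at frequency 0, and the spectrum appears as vertical stems along the frequency axis. A hedged sketch (assumes matplotlib with the bundled 3D toolkit; for simplicity it uses the known component amplitudes directly, but the `|X(freq)|` values from the DFT above could be substituted for the stem heights):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line for interactive use
import matplotlib.pyplot as plt
import numpy as np

sr = 100
t = np.arange(0, 1, 1 / sr)
components = [(1.0, 3.0), (4.0, 1.0), (7.0, 0.5)]  # (freq in Hz, amplitude)

fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(projection="3d")

# Each pure sine lives in the plane y = its frequency.
total = np.zeros_like(t)
for freq, amp in components:
    wave = amp * np.sin(2 * np.pi * freq * t)
    total += wave
    ax.plot(t, np.full_like(t, freq), wave, color="green")

# The synthesised signal sits in front of the component planes, at y = 0.
ax.plot(t, np.zeros_like(t), total, color="red")

# The spectrum: one vertical stem per component at the end of the time axis.
for freq, amp in components:
    ax.plot([t[-1], t[-1]], [freq, freq], [0, amp], color="blue")

ax.set_xlabel("time (s)")
ax.set_ylabel("frequency (Hz)")
ax.set_zlabel("amplitude")
fig.tight_layout()
plt.show()  # or fig.savefig("fourier_3d.png")
```

The viewing angle can then be tuned with `ax.view_init(elev=..., azim=...)` to match the reference image.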
77,134,624
7,167,564
How do I access the data in Google Cloud Function written in Python passed via JS FormData from frontend?
<p>I have a ReactJS Frontend, where I am sending a data along with POST request's body payload in the <code>FormData</code> structure, but I am unable to access the data in my Google Cloud Function written Python using <code>request</code> parameter`. Below is my frontend code:</p> <pre><code>handleUpload = async (file: any, dataFromChild: any) =&gt; { const token = await auth?.currentUser?.getIdToken(); if (!token) { enqueueSnackbar(`Session Expired. Login again`, { variant: &quot;error&quot; }); window.location.href(&quot;/login&quot;) } if (file) { try { const snackbarRef = enqueueSnackbar(&quot;Uploading File...&quot;, { variant: &quot;info&quot;, }); const payload = new FormData(); payload.append(&quot;file&quot;, file[0]); payload.append(&quot;name&quot;, dataFromChild?.fileName); payload.append(&quot;description&quot;, dataFromChild?.fileDesc); payload.append(&quot;title&quot;, dataFromChild?.selectedType); payload.append(&quot;price&quot;, 0); const response = await fetch( `[Cloud Function URL]`, { method: &quot;POST&quot;, headers: { Authorization: `Bearer ${token}`, &quot;Content-Type&quot;: &quot;multipart/form-data&quot;, }, body: payload } ); closeSnackbar(snackbarRef); if (response.ok) { enqueueSnackbar(`File uploaded successfully.`, { variant: &quot;success&quot;, }); this.setState({ fileUploaded: false }); } else { enqueueSnackbar(`Error uploading file.`, { variant: &quot;error&quot; }); this.setState({ fileUploaded: false }); } } catch (error) { enqueueSnackbar(`Error occurred while uploading the file.`, { variant: &quot;error&quot;, }); this.setState({ fileUploaded: false }); } } }; </code></pre> <p>I tried to access the data using following way in Python:</p> <pre><code>import functions_framework @functions_framework.http def main(request): headers = set_cors_headers(request) if request.method == &quot;OPTIONS&quot;: return headers get_data(request) def get_data(request): print(request.files['file']) print(request.form['name'] . . 
</code></pre> <p>But it did not get any data. I even tried to get the data this way:</p> <pre><code>def get_data(request): print(request.files.get('file')) print(request.form.get('name')) . . </code></pre> <p>But all in vain. Can anyone tell me what I am doing wrong?</p>
<python><reactjs><python-3.x><google-cloud-functions><form-data>
2023-09-19 12:37:36
1
311
Khubaib Khawar
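A likely culprit in the question above is the manually set `"Content-Type": "multipart/form-data"` header on the frontend: a multipart body needs a `boundary` parameter that only the HTTP client can generate, so hard-coding the header typically leaves `request.form` / `request.files` empty on the Flask-style `request` that Functions Framework passes in. The usual fix is to delete that header and let `fetch` derive it from the `FormData` body. A small sketch with the `requests` library (no network involved — the request is only prepared, never sent, and the URL is a placeholder) shows a client generating the boundary automatically:

```python
import requests

# Preparing (not sending) a multipart POST: `requests` builds the body
# from files/data and sets the Content-Type itself, boundary included.
req = requests.Request(
    "POST",
    "https://example.invalid/upload",  # placeholder URL, never contacted
    files={"file": ("report.csv", b"a,b\n1,2\n")},
    data={"name": "my-file", "price": "0"},
).prepare()

content_type = req.headers["Content-Type"]
print(content_type)  # multipart/form-data; boundary=<random hex>
```

The same applies to the browser: with `fetch`, omitting the `Content-Type` entry entirely lets the browser attach the correct boundary for the `FormData` body — that part is an assumption about the frontend fix and is untested here.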
77,134,615
1,714,692
How to scale axes independently in pyvista?
<p>Suppose I have <code>x</code>, <code>y</code>, <code>z</code> and <code>x2</code>, <code>y2</code>, <code>z2</code> arrays. I am plotting them as 2 different 3d splines in pyvista.</p> <pre><code>import numpy as np import pyvista as pv network = pv.MultiBlock() points_1 = np.column_stack((x, y, z)) spline_1 = pv.Spline(points_1, 500).tube(radius=0.1) points_2 = np.column_stack((x2, y2, z2)) spline_2 = pv.Spline(points_2, 500).tube(radius=0.1) network.append(spline_1) network.append(spline_2) p = pv.Plotter() labels = dict(zlabel='l1', xlabel='l2', ylabel='l2') p.show_grid(**labels) p.add_axes(**labels) p.add_mesh(spline_1, color=&quot;red&quot;, line_width=3) p.add_mesh(spline_2, color=&quot;blue&quot;, line_width=3) </code></pre> <p>The problem with this plot is that one axis is too compressed: <code>y</code> axis values are around 300 whereas <code>x</code> axis values range from 0 to 20. It seems the plot uses the same scale for all axes. Furthermore, <code>x</code> and <code>y</code> are different quantities, so I don't want to compare them.</p> <p><code>Matplotlib</code>, for example, automatically scales the axes, and to use the same scale on, for example, the <code>y</code> and <code>x</code> axes you do <code>ax.set_aspect(&quot;equalxy&quot;)</code>.</p> <p>In this case the axes are already scaled and I want to make one use a different scale. How can I do that?</p>
<python><3d><scale><pyvista>
2023-09-19 12:36:13
1
9,606
roschach
77,134,587
15,921,143
CORS issue when making GET requests to Django server behind Caddy reverse proxy
<p>I'm encountering a CORS (Cross-Origin Resource Sharing) issue when making GET requests to my Django server, which is hosted behind a Caddy reverse proxy. Here's the situation:</p> <ol> <li>I have a Django server running, serving an API.</li> <li>The Django server is behind a Caddy reverse proxy.</li> <li>I'm trying to make GET requests to my Django API from a front-end application hosted at a different domain. Using JS And Ajax requests</li> </ol> <p>I have already tried the following:</p> <ol> <li>Enabled the django-cors-headers package and added it to my INSTALLED_APPS and MIDDLEWARE settings in the correct order.</li> </ol> <pre class="lang-py prettyprint-override"><code>INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'corsheaders', # Added this app ] MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'corsheaders.middleware.CorsMiddleware', # Placed CorsMiddleware here 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] </code></pre> <p>Configured CORS settings in my Django project's settings.py.</p> <pre class="lang-py prettyprint-override"><code>DEBUG = True ALLOWED_HOSTS = [&quot;*&quot;] USE_X_FORWARDED_HOST = True CORS_ALLOW_ALL_ORIGINS = True CORS_ALLOW_METHODS = [ 'GET', 'POST', ] CORS_ALLOWED_ORIGINS = [ &quot;https://www.example.net&quot;, &quot;http://localhost:5000&quot;, &quot;http://127.0.0.1:5000&quot;, &quot;http://127.0.0.1:5501&quot; ] </code></pre> <p>Despite these configurations, I'm still encountering a CORS-related error when making GET requests to my Django API.</p> <p>Here's the error message:</p> <pre class="lang-cs 
prettyprint-override"><code>Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://www.example.net/nodeData/?nid=13. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 200. </code></pre> <p>To add more context, the return method on the django server looks like this:</p> <pre class="lang-py prettyprint-override"><code>@csrf_exempt def get_vertex_data(request): try: # get the search item id = request.GET.get('nid', '') # search for the item in the db graphdb = GDM.get_graph_database() vertices = graphdb['vertices'] vertRaw = vertices.find_one({'id':int(id)}) vertRaw.pop(&quot;_id&quot;, None) response = JsonResponse(vertRaw, safe=False) response[&quot;Access-Control-Allow-Origin&quot;] = &quot;*&quot; return response except Exception as e: print(e) return HttpResponse(&quot;Something broke sorry&quot;) </code></pre> <p>I've checked my Caddy configuration, and it seems to be correctly forwarding requests to my Django server. I can curl and get data from this server on my command prompt. But whenever I fetch data like so:</p> <pre class="lang-js prettyprint-override"><code>async function getVertex() { const rawData = await fetch(&quot;https://www.example.net/nodeData/?nid=13&quot;, { method: &quot;GET&quot;, // *GET, POST, PUT, DELETE, etc., // Set the request mode to 'no-cors' }); console.log(rawData) const JSONData = await rawData.json(); console.log(JSONData) return JSONData; } </code></pre> <p>It keeps throwing a CORS error (the one mentioned above) in my browser. Where am I going wrong? Any insights or suggestions on resolving this CORS issue would be greatly appreciated. Thank you!</p>
<javascript><python><django><cors><caddy>
2023-09-19 12:31:34
0
303
Indrajeet Haldar
77,134,575
1,652,219
How to handle query batches in SQLAlchemy?
<p>I am currently experimenting with creating and executing stored procedures from Python using SQLAlchemy, and I keep running into this error: &quot;X must be the first statement in a query batch&quot;. From reading up on the topic, these &quot;query batches&quot; should be separated by &quot;GO&quot;, but &quot;GO&quot; is not supported in SQLAlchemy, where I get the following error: &quot;Incorrect syntax near 'GO'&quot;.</p> <h3>Working SQL code</h3> <pre><code>USE my_database; GO CREATE PROCEDURE test.test_procedure AS SELECT TOP 10 * FROM my_other_database.my_schema.my_table; </code></pre> <h3>Not Working SQLAlchemy</h3> <pre><code># Creating Identical Query identical_query = &quot;&quot;&quot; USE my_database; GO CREATE PROCEDURE test.test_procedure AS SELECT TOP 10 * FROM my_other_database.my_schema.my_table; &quot;&quot;&quot; # Creating engine from sqlalchemy import create_engine my_engine = create_engine(f&quot;mssql+pyodbc://{self.db_server}/{self.db_name}?trusted_connection=yes&amp;driver={self.db_driver}&quot;) </code></pre> <p>Now let's try to execute the command in various ways as suggested on different forums.</p> <pre><code># Execution 1 my_engine.execute(identical_query) # Execution 2 from sqlalchemy.sql.expression import text my_engine.execute(text(identical_query)) # Execution 3 with sql_con.db_engine.connect() as con: query = text(identical_query) con.execute(query) </code></pre> <p>None of the ways work, and I just keep getting the error &quot;Incorrect syntax near 'GO'&quot;. Shouldn't SQLAlchemy be able to execute the same SQL code that works as is in the SQL database?</p>
<python><sql><sqlalchemy>
2023-09-19 12:29:52
1
3,944
Esben Eickhardt
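On the batching question above: `GO` is not T-SQL at all — it is a batch separator interpreted client-side by tools like `sqlcmd` and SSMS, which is why SQL Server rejects it when SQLAlchemy sends the script verbatim. The usual workaround is to split the script on `GO` lines yourself and execute each batch separately. A sketch of the splitting step (pure Python, no database needed):

```python
import re

script = """
USE my_database;
GO
CREATE PROCEDURE test.test_procedure
AS
SELECT TOP 10 * FROM my_other_database.my_schema.my_table;
"""

# GO must stand alone on its line; split case-insensitively on such lines.
batches = [
    b.strip()
    for b in re.split(r"^\s*GO\s*;?\s*$", script,
                      flags=re.IGNORECASE | re.MULTILINE)
    if b.strip()
]

for batch in batches:
    print("---")
    print(batch)
```

With an engine the batches would then be run one by one, roughly `with engine.begin() as con: [con.execute(text(b)) for b in batches]` — untested here. Note this split also satisfies the original error message: `CREATE PROCEDURE` generally must be the first statement of its own batch.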
77,134,573
2,424,587
ConversationalRetrievalChain with ConversationBufferMemory "TypeError: tuple indices must be integers or slices, not str"
<p>When I run langchain's ConversationalRetrievalChain with ConversationBufferMemory I get an error:</p> <blockquote> <p>&quot;TypeError: tuple indices must be integers or slices, not str&quot;</p> </blockquote> <p>The problem is in: langchain/chains/conversational_retrieval/base.py:25) This line: <code>human = &quot;Human: &quot; + human_s</code> tries to concat a tuple and a string.</p> <p><code>human_s</code> is a tuple of this form: <code>('content', 'question user asked')</code> Instead, I would expect <code>human_s</code> to be <code>&quot;question user asked&quot;.</code></p> <p>This is the code I've used:</p> <pre><code>embedder = get_embedder() logger.info(&quot;loading vector store from {}&quot;.format(vector_store_path)) vector_store = FAISS.load_local(vector_store_path, embeddings=embedder) llm = get_llm() memory = ConversationBufferMemory(memory_key=&quot;chat_history&quot;, return_messages=True) qa_chain = ConversationalRetrievalChain.from_llm( retriever=vector_store.as_retriever(), llm=llm, memory=memory, verbose=True ) while True: question = input(&quot;You:&quot;) result = qa_chain({'question': question}) response = result['answer'] print(response) </code></pre>
<python><langchain>
2023-09-19 12:29:29
1
9,166
Hanan Shteingart
77,134,535
8,040,117
Migrate PostgresDsn.build from pydentic v1 to pydantic v2
<p>I have a simple Config class from the FastAPI tutorial. But it seems like it uses an old pydantic version. I run my code with pydantic v2 and get several errors. I fixed almost all of them, but the last one I cannot fix yet. This is the part of the code which does not work:</p> <pre><code>from pydantic import AnyHttpUrl, HttpUrl, PostgresDsn, field_validator from pydantic_settings import BaseSettings from pydantic_core.core_schema import FieldValidationInfo load_dotenv() class Settings(BaseSettings): ... POSTGRES_SERVER: str = 'localhost:5432' POSTGRES_USER: str = os.getenv('POSTGRES_USER') POSTGRES_PASSWORD: str = os.getenv('POSTGRES_PASSWORD') POSTGRES_DB: str = os.getenv('POSTGRES_DB') SQLALCHEMY_DATABASE_URI: Optional[PostgresDsn] = None @field_validator(&quot;SQLALCHEMY_DATABASE_URI&quot;, mode='before') @classmethod def assemble_db_connection(cls, v: Optional[str], info: FieldValidationInfo) -&gt; Any: if isinstance(v, str): return v postgres_dsn = PostgresDsn.build( scheme=&quot;postgresql&quot;, username=info.data.get(&quot;POSTGRES_USER&quot;), password=info.data.get(&quot;POSTGRES_PASSWORD&quot;), host=info.data.get(&quot;POSTGRES_SERVER&quot;), path=f&quot;{info.data.get('POSTGRES_DB') or ''}&quot;, ) return str(postgres_dsn) </code></pre> <p>That is the error which I get:</p> <pre><code>sqlalchemy.exc.ArgumentError: Expected string or URL object, got MultiHostUrl('postgresql://user:password@localhost:5432/database') </code></pre> <p>I checked in a lot of places, but cannot find how to fix that; it looks like the <code>build</code> method passes data to the sqlalchemy <code>create_engine</code> method as a <code>MultiHostUrl</code> instance instead of a string. How should I properly migrate this code to use pydantic v2?</p> <p><strong>UPDATE</strong></p> <p>I have fixed that issue by changing the typing for <code>SQLALCHEMY_DATABASE_URI: Optional[PostgresDsn] = None</code> to <code>SQLALCHEMY_DATABASE_URI: Optional[str] = None</code>.
Because pydantic auto-converts the result for some reason. But I am not sure that approach is the right one; maybe there is a better way to do it?</p>
<python><sqlalchemy><fastapi><pydantic>
2023-09-19 12:24:36
5
2,265
Vladyslav
77,134,476
5,868,293
Select rows that come after a condition, under a condition in pandas
<p>I have a dataframe that looks like this:</p> <pre><code>import pandas as pd pd.DataFrame({'id': [1,1,1,1,2,2,2,2], 'time': [1,2,3,4,1,2,5,6], 'is': [0,1,0,0,0,1,0,0]}) id time is 0 1 1 0 1 1 2 1 2 1 3 0 3 1 4 0 4 2 1 0 5 2 2 1 6 2 5 0 7 2 6 0 </code></pre> <p>which is <code>sorted</code> by <code>id</code> and <code>time</code>.</p> <p>I want, for each <code>id</code>, to select only the rows that satisfy at least one of the two conditions:</p> <ul> <li><code>is==1</code></li> <li>the rows after the rows where <code>is==1</code>, where the <code>time</code> between these 2 rows does not have gaps.</li> </ul> <p>The resulting dataframe should look like this:</p> <pre><code>pd.DataFrame({'id': [1,1,2], 'time': [2,3,2], 'is': [1,0,1]}) </code></pre> <p>How could I do that?</p>
<python><pandas>
2023-09-19 12:15:10
2
4,512
quant
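One reading of the question above that reproduces its sample output: keep a row if `is == 1`, or if the immediately preceding row of the same `id` has `is == 1` and there is no gap in `time` (i.e. `time` equals the previous `time + 1`). A hedged sketch of that interpretation — if "the rows after" is meant to extend over several consecutive rows, the condition would need a cumulative variant instead:

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 1, 2, 2, 2, 2],
                   'time': [1, 2, 3, 4, 1, 2, 5, 6],
                   'is': [0, 1, 0, 0, 0, 1, 0, 0]})

# Previous row's values within each id (NaN for the first row of a group).
prev_is = df.groupby('id')['is'].shift()
prev_time = df.groupby('id')['time'].shift()

# Keep is==1 rows, plus a gap-free immediate successor of an is==1 row.
mask = (df['is'] == 1) | ((prev_is == 1) & (df['time'] == prev_time + 1))
result = df[mask].reset_index(drop=True)
print(result)
#    id  time  is
# 0   1     2   1
# 1   1     3   0
# 2   2     2   1
```

Note this matches the expected output exactly: for `id=1` the row at `time=4` is excluded (its predecessor has `is==0`), and for `id=2` the row at `time=5` is excluded because of the gap after `time=2`.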
77,134,430
10,589,923
Using the same variable between different Python Runs
<p>I have a script like this</p> <pre><code>for epoch in range(num_epochs): for bag in range(num_bags): feats = pd.read_csv(f&quot;feats_{bag}.csv&quot;) ... # some logic </code></pre> <p>As you can see, it repetitively reads data from a set of files. Each <code>&quot;feats_{bag}.csv&quot;</code> file is repetitively being read from disk <code>num_epoch</code> times. This slowed down the program. I preloaded all of the data at once, which helped significantly. In the following script, each <code>&quot;feats_{bag}.csv&quot;</code> is only read once.</p> <pre><code>all_feats = [pd.read_csv(f&quot;feats_{bag}.csv&quot;) for bag in range(num_bags)] for epoch in range(num_epochs): for bag in range(num_bags): feats = all_feats[bag] </code></pre> <p>The issue with the above program is the high memory usage, as it loads all the data at once. The <code>all_feats</code> variable roughly takes 20GB of memory. I have about 64 GB of memory, so I am limited to executing the program 3 times simultaneously. As all the runs use the same set of <code>feats</code>, I thought there must be a way to load the data (<code>all_feats</code> variable) once and use it in all runs simultaneously for more than 3 runs.</p> <p>I've looked up <code>mmap</code> and Python <code>multiprocessing.shared_memory</code>. Although both allow sharing of a variable between processes, they seem unsuitable for my problem.
For example, for Shared Memory, I tried the following:</p> <pre><code># SharedMemory Server all_feats = [pd.read_csv(f&quot;feats_{bag}.csv&quot;) for bag in range(num_bags)] sl = shared_memory.ShareableList(all_feats, name='all_feats') </code></pre> <pre><code># SharedMemory Client all_feats = shared_memory.ShareableList(name='all_feats') print(id(all_feats), all_feats) </code></pre> <p>However, after running the server, when I run the client multiple times, the id of <code>all_feats</code> seems to be different with every run, meaning they have used different memory locations, thus taking up more memory than what I intended again.</p> <p>Some other ideas I had for speeding things up:</p> <ul> <li>Load the all_feats files in memory using Redis or some other in-memory database. Then use the first approach again, i.e. load only the currently needed feats at each iteration, but this time from Redis.</li> </ul> <pre><code>for epoch in range(num_epochs): for bag in range(num_bags): feats = redis.get(&quot;bag_i&quot;) ... # some logic </code></pre> <p>I'm hoping reading from memory (redis) instead of the disk gives a reasonable speed boost, although not as much as preloading all the data in Python.</p> <ul> <li><s>Divide the feats in chunks, and preload them part by part.</s> won't work.</li> </ul> <p>In summary, I'm looking for a way to use the <strong>same variable</strong> in different runs of the same/different Python scripts. Note that I don't want to use duplicates of the variable. Thus, using a file in memory, and reading the file, does not help me. All scripts only read the variable and do not change it.</p> <p>Can <code>shared_memory</code> solve this? Then why did it assign different IDs to the <code>all_feats</code> variable in different runs?</p>
<python><redis><multiprocessing><shared-memory><mmap>
2023-09-19 12:08:30
2
444
Danial
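On the `id()` observation in the question above: `id()` reports the identity of the Python wrapper object, which is necessarily different in every process — it says nothing about whether the underlying buffer is shared. `multiprocessing.shared_memory` does give all attaching processes the same physical memory; only the small per-process wrapper differs. Also note that `ShareableList` only holds primitive values (ints, floats, bools, strings, bytes, None), not DataFrames; the usual route for tabular data is to share the underlying NumPy arrays. A sketch, simulating the "server" and a "client" attaching by name within one process (in reality the attach would happen in a separate script):

```python
import numpy as np
from multiprocessing import shared_memory

# "Server": create a shared block and fill it through a NumPy view.
n = 1_000
shm = shared_memory.SharedMemory(create=True, size=n * 8)
feats = np.ndarray((n,), dtype=np.float64, buffer=shm.buf)
feats[:] = np.arange(n, dtype=np.float64)

# "Client": attach to the same block by name -- zero-copy, no second 20 GB.
shm_client = shared_memory.SharedMemory(name=shm.name)
feats_view = np.ndarray((n,), dtype=np.float64, buffer=shm_client.buf)

same_data = bool(feats_view[123] == 123.0)
different_wrapper_ids = id(feats) != id(feats_view)  # always True, and fine
print(same_data, different_wrapper_ids)  # True True

# Cleanup (a real server would unlink only after all clients have exited).
del feats, feats_view  # release views into the buffer before closing
shm_client.close()
shm.close()
shm.unlink()
```

For DataFrames specifically, one sketchable approach is to share each column's NumPy array this way and rebuild a read-only frame in each client — an assumption about your data layout, not tested here.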
77,134,285
10,533,225
Make a distinct dataframe based on a column with prioritization condition
<p>I want to make a distinct dataframe such that, if rows have the same <code>product_number</code>, the <code>product_name</code> is chosen by priority.</p> <p>Here is the prioritization: <strong>Product A &gt; B &gt; C</strong></p> <p>In this table, you can see that there are rows with the same <code>product_number</code> but different <code>product_name</code>.</p> <p><a href="https://i.sstatic.net/wHXh0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wHXh0.png" alt="enter image description here" /></a></p> <p>How can I make it distinct to something like this?</p> <p><a href="https://i.sstatic.net/sfNVA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sfNVA.png" alt="enter image description here" /></a></p> <p>For <code>product_number</code> 1000003, A has been prioritized.</p> <p>For <code>product_number</code> 1000005, A has been prioritized.</p> <p>For <code>product_number</code> 1000006, B has been prioritized.</p> <p><em>Note: the product names are not actually A, B, C, nor are there only three of them.</em></p>
<python><pyspark>
2023-09-19 11:46:05
1
583
Tenserflu
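A common pattern for the deduplication above is to map each name to a priority rank and keep the lowest rank per `product_number`. Since Spark isn't runnable here, this sketch shows the logic in pandas — a deliberate substitution, with placeholder data mirroring the screenshots and a hypothetical priority map:

```python
import pandas as pd

# Placeholder rows; the real names need not be "Product A/B/C".
df = pd.DataFrame({
    'product_number': [1000003, 1000003, 1000005, 1000005, 1000006],
    'product_name': ['Product B', 'Product A', 'Product A',
                     'Product C', 'Product B'],
})

priority = {'Product A': 0, 'Product B': 1, 'Product C': 2}

result = (
    df.assign(rank=df['product_name'].map(priority))
      .sort_values(['product_number', 'rank'])
      .drop_duplicates('product_number', keep='first')  # lowest rank wins
      .drop(columns='rank')
      .reset_index(drop=True)
)
print(result)
```

In PySpark the analogous steps would be a rank column built with `F.when`/`F.create_map`, then `F.row_number().over(Window.partitionBy('product_number').orderBy('rank')) == 1` to keep the winner — hedged, untested here.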
77,134,254
8,324,480
Specify dependencies in pyproject.toml with install URL or with index-url
<p>I like to have my package installable with <code>pip install ...</code> and to use the <code>pyproject.toml</code> standard.</p> <p>I can specify dependencies to install from git, with:</p> <pre><code>dependencies = [ 'numpy&gt;=1.21', 'psychopy @ git+https://github.com/psychopy/psychopy', ] </code></pre> <p>But how can I specify a dependency to install from a different indexer, equivalent to:</p> <p><code>python -m pip install --pre --only-binary :all: -i https://pypi.anaconda.org/scientific-python-nightly-wheels/simple numpy scipy</code></p> <p>With or without the pre-release flag?</p> <p>And how can I specify a dependency to install from a URL, e.g. <a href="https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-22.04/wxPython-4.2.1-cp310-cp310-linux_x86_64.whl" rel="noreferrer">https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-22.04/wxPython-4.2.1-cp310-cp310-linux_x86_64.whl</a> I tried with no luck:</p> <pre><code>dependencies = [ 'wxPython @ https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-22.04/wxPython-4.2.1-cp310-cp310-linux_x86_64.whl; python_version == &quot;3.10&quot;; sys_platform == &quot;linux&quot;' ] </code></pre>
<python><dependencies><packaging><pyproject.toml>
2023-09-19 11:42:38
3
5,826
Mathieu
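Two separate points may help with the question above. First, PEP 621 `dependencies` deliberately cannot name a package index: which index pip consults is an installer setting (`--index-url` / `--extra-index-url`, or `pip.conf`), not package metadata, so the nightly-wheels line has no `pyproject.toml` equivalent. Second, the wxPython line likely fails because a direct-URL requirement takes at most one `;` followed by a single PEP 508 marker expression; multiple markers are combined with `and`, not a second `;`. A corrected fragment (URL exactly as in the question; untested):

```toml
[project]
dependencies = [
    'numpy>=1.21',
    'psychopy @ git+https://github.com/psychopy/psychopy',
    # One ";" only -- markers are joined with "and", not a second ";".
    'wxPython @ https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-22.04/wxPython-4.2.1-cp310-cp310-linux_x86_64.whl ; python_version == "3.10" and sys_platform == "linux"',
]
```

There is no standard way to mark a dependency as pre-release either; users would pass `--pre` (or pin an explicit pre-release version) at install time.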
77,134,246
1,062,967
stop spark dataframe distributing to cluster - it needs to stay on driver
<p>we have a workload that computes on spark cluster workers (cpu intensive). The results are pulled back to the driver which has a large memory allocation to collect the results via RDD .collect() Results are then further processed resulting in a pandas dataframe (pre-existing package logic, can't change).</p> <p>That pandas dataframe then needs to be stored into databricks. This is done via converting the pandas dataframe to spark dataframe after which .saveAsTable is called.</p> <p>The problem: for a table with 900 columns, and only 50k rows, the conversion of the pandas dataframe to the spark dataframe takes 5 minutes. I have googled this heavily, and the only information I can glean is that the conversion of the pandas dataframe to the spark dataframe causes an automatic partitioning of the data accross the cluster workers. I believe this might be why it is so slow, as the step to actually save the data from the spark dataframe to a databricks table only takes 10 seconds (in comparison).</p> <p>Is there a way to force spark dataframe to not distribute/parallelize ? 
I want it to simply stay on the driver, so I can use its nice write API for saving into databricks.</p> <p>I have tried setting spark.default.parallelism to 1 to force it not to distribute the workload, yet it still seems to send the data to a worker, rather than staying on the driver (I could be wrong, but that is what it looks like from the logs).</p> <p>There are methods on spark dataframe to repartition() &amp; coalesce(), but these apply once the dataframe is already created rather than before, which defeats the purpose of trying to save time when creating the spark dataframe in the first place.</p> <p>Any thoughts?</p> <pre><code>delta_frame.write \ .mode(&quot;append&quot;) \ .option(&quot;delta.columnMapping.mode&quot;, &quot;name&quot;) \ .option(&quot;mergeSchema&quot;, &quot;true&quot; if merge_schema else &quot;false&quot;) \ .option(&quot;path&quot;, target_path) \ .partitionBy(partition_cols) \ .saveAsTable(full_table_name) </code></pre>
<python><pyspark><apache-spark-sql><databricks><rdd>
2023-09-19 11:41:43
1
440
DaManJ
77,134,205
8,444,568
Measure CPU usage of a process and all its subprocesses
<p>I'm creating a new process <code>A</code> using <code>multiprocessing.Process</code>, and <code>A</code> may fork/spawn new processes <code>B,C,D...</code>, and B/C/D can also fork/spawn new processes <code>E,F,G...</code></p> <p>Now I want to measure the overall CPU usage of process <code>A</code> and all its children/grandchildren (a snapshot of the overall CPU usage at a certain time). How can I do this in Python?</p>
<python><python-multiprocessing>
2023-09-19 11:36:20
2
893
konchy
77,134,004
10,952,047
change color clusters umap python
<p>Sorry, I'm totally new to Python, but I need to use it for some analysis. I usually use R. However, I need to change the colours in my UMAP plot:</p> <pre><code>import scanpy as sc import anndata from scipy import io from scipy.sparse import coo_matrix, csr_matrix import numpy as np import os import pandas as pd &gt;&gt;&gt; adata.obs.columns Index(['nCount_RNA', 'nFeature_RNA', 'percent.mt', 'nCount_ATAC', 'nFeature_ATAC', 'nCount_motif', 'nFeature_motif', 'TSS.enrichment', 'TSS.percentile', 'nucleosome_signal', 'nucleosome_percentile', 'blacklist_fraction', 'dataset', 'integrated_rna.weight', 'ATAC.weight', 'wsnn_res.0.1', 'wsnn_res.0.2', 'wsnn_res.0.5', 'wsnn_res.1', 'wsnn_res.1.5', 'treatment', 'timepoint', 'treatment_timepoint', 'RNA_snn_res.0.2', 'seurat_clusters', 'RNA_snn_res.0.3', 'RNA_snn_res.0.4', 'RNA_snn_res.0.5', 'seurat_cluster_0.4', 'treatment_timepoint_cluster', 'barcode', 'UMAP_1', 'UMAP_2'], dtype='object') cols =[&quot;#F8766D&quot;, &quot;#ABA300&quot;, &quot;#0CB702&quot;, &quot;#00B8E7&quot;, &quot;#ED68ED&quot;] sc.pl.umap(adata, color=['seurat_clusters'], palette= cols, frameon=False, save=True) </code></pre> <p>This does not work. Where did I make a mistake?</p>
<python><scanpy>
2023-09-19 11:08:19
0
417
jonny jeep
77,133,997
6,546,694
Python runtime for the code significantly different depending upon where the recursion function is placed
<p>I am trying to solve a <a href="https://leetcode.com/problems/number-of-ways-to-rearrange-sticks-with-k-sticks-visible/" rel="nofollow noreferrer">leetcode problem</a>. My solution gives markedly different runtimes depending upon how the code is structured. The only way to explain this is by giving you the three versions of the code.</p> <p><strong>Runtime: 3700ms</strong></p> <pre><code>class Solution: def rearrangeSticks(self, n: int, k: int) -&gt; int: self.m = 10**9 + 7 out = self.rs(n, k) return out @cache def rs(self,n, k): if k &lt;= 0 : return 0 if n &lt; k: return 0 if n == k: return 1 out1 = self.rs(n-1, k-1) out2 = self.rs(n-1, k) out = (out1+out2*(n-1))%(self.m) return out </code></pre> <p><strong>Runtime: 1700ms</strong></p> <pre><code>class Solution: def rearrangeSticks(self, n: int, k: int) -&gt; int: m = 10**9 + 7 @cache def rs(n, k): if k &lt;= 0 : return 0 if n &lt; k: return 0 if n == k: return 1 out1 = rs(n-1, k-1) out2 = rs(n-1, k) out = (out1+out2*(n-1))%(m) return out out = rs(n, k) return out </code></pre> <p><strong>Runtime: 200ms</strong></p> <pre><code>class Solution: def rearrangeSticks(self, n: int, k: int) -&gt; int: out = rs(n, k) return out m = 10**9 + 7 @cache def rs(n, k): if k &lt;= 0 : return 0 if n &lt; k: return 0 if n == k: return 1 out1 = rs(n-1, k-1) out2 = rs(n-1, k) out = (out1+out2*(n-1))%(m) return out </code></pre> <p>Can you help me understand why the 3 runtimes are so different?</p>
<python><class><oop><recursion><caching>
2023-09-19 11:06:39
0
5,871
figs_and_nuts
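The decisive difference in the three versions above is likely the lifetime of the cache, not the recursion itself. In version 2 the `@cache`-decorated function is recreated — with an empty cache — on every call to `rearrangeSticks`; in version 3 the cache lives at module level and survives across all of LeetCode's test cases; in version 1 the cache keys include `self`, so a fresh `Solution` instance never hits entries from a previous one (and attribute lookups like `self.m` add overhead). A small stdlib demonstration of the nested-vs-module-level effect:

```python
from functools import cache


@cache
def fib_module(n):
    return n if n < 2 else fib_module(n - 1) + fib_module(n - 2)


def fib_nested(n):
    @cache
    def fib(k):  # a brand-new function object -- and cache -- per call
        return k if k < 2 else fib(k - 1) + fib(k - 2)
    return fib(n), fib.cache_info().misses


fib_module(25)
first = fib_module.cache_info().misses   # 26 distinct values computed once...
fib_module(25)
second = fib_module.cache_info().misses  # ...no new misses on the repeat call

_, nested_first = fib_nested(25)
_, nested_second = fib_nested(25)        # misses repeat: cache was discarded

print(first, second, nested_first, nested_second)  # 26 26 26 26
```

So the 200 ms figure is partly measuring a warm module-level cache reused across test cases — a real effect on LeetCode, though arguably an unfair comparison between the versions.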
77,133,860
3,801,449
Wheel dependencies - No matching distribution found
<p>I wanted to distribute my software via PyPi. My <code>pyproject.toml</code> configuration file looks like this:</p> <pre><code>[build-system] requires = [&quot;hatchling&quot;] build-backend = &quot;hatchling.build&quot; [project] name = &quot;saoovqe&quot; version = &quot;0.1.6&quot; authors = [ ... ] description = &quot;MyPackage&quot; readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.10&quot; classifiers = [ &quot;Programming Language :: Python :: 3&quot;, &quot;License :: OSI Approved :: GNU General Public License v3 (GPLv3)&quot;, &quot;Operating System :: POSIX :: Linux&quot;, ] dependencies=[ &quot;qiskit&gt;=0.43.0&quot;, &quot;qiskit-nature&gt;=0.6.2&quot;, &quot;numpy&gt;=1.22.0,&lt;1.24.0&quot;, &quot;deprecated&gt;=1.2.14&quot;, &quot;mendeleev&gt;=0.13.1&quot;, &quot;scipy&gt;=1.10.1&quot;, &quot;sympy&gt;=1.11.1&quot;, &quot;setuptools&gt;=67.8.0&quot;, &quot;lxml&gt;=4.9.2&quot;, &quot;nlopt&quot;, &quot;ipython&quot;, &quot;jupyter&quot;, &quot;pygments&quot;, &quot;scikit-learn&gt;=1.2.2&quot;, &quot;icecream&gt;=2.1.3&quot;, &quot;pytest&gt;=7.3.1&quot; ] </code></pre> <p>Creating of Wheel and subsequent upload are OK. The problem is, when I try to install the package from PyPi like this</p> <pre><code> pip install -i https://test.pypi.org/simple/ saoovqe==0.1.6 </code></pre> <p>I'm getting the following error</p> <pre><code>Looking in indexes: https://test.pypi.org/simple/ Collecting saoovqe==0.1.6 Obtaining dependency information for saoovqe==0.1.6 from https://test-files.pythonhosted.org/packages/c2/6e/4061ed360f3e942caffd9056c4cbda4908fa85718a99d6323a69ada828d7/saoovqe-0.1.6-py3-none-any.whl.metadata Downloading https://test-files.pythonhosted.org/packages/c2/6e/4061ed360f3e942caffd9056c4cbda4908fa85718a99d6323a69ada828d7/saoovqe-0.1.6-py3-none-any.whl.metadata (2.7 kB) INFO: pip is looking at multiple versions of saoovqe to determine which version is compatible with other requirements. This could take a while. 
ERROR: Could not find a version that satisfies the requirement deprecated&gt;=1.2.14 (from saoovqe) (from versions: none) ERROR: No matching distribution found for deprecated&gt;=1.2.14 </code></pre> <p>As can be seen <a href="https://pypi.org/project/Deprecated/" rel="nofollow noreferrer">here</a>, the package <code>Deprecated</code> really is available at the version 1.2.14. So, what is wrong with my configuration? I'd like to be able to specify the dependencies in a way, that they'll get downloaded automatically, if the package is installed.</p>
<python><pip><python-wheel>
2023-09-19 10:43:37
0
3,007
Eenoku
77,133,712
534,238
VS Code Python Debug uses virtual files rather than actual source code
<h1>Configurations</h1> <ul> <li>Python plugin (used to be called Pylance)</li> <li>Standard VS Code debugger</li> <li>PDM / <code>pyproject.toml</code> package format</li> <li>installing package locally</li> </ul> <p>I can provide other config details if needed, but I think they are irrelevant.</p> <h1>Issue</h1> <p>I have a package which has a module that is calling a function in another module. Instead of calling the <em>actual</em> source code that I am editing and which exists in the repo, it is calling the files used in the virtual environment. I.e.,</p> <ul> <li>Function 1 in module A is called, which calls function 2 in module B.</li> <li>However, instead of module B being called, <em>the install of module B in my virtual environment is being called</em>. <ul> <li>Module B in actual source: <code>src/phxfakedata/utils/traverse.py</code></li> <li>Module B in virtual environment: <code>.venv/lib/python3.8/site-packages/phxfakedata/utils/traverse.py</code></li> <li>Module A is being debugged from the actual repo, <code>src/phxfakedata/generate/generate.py</code>, but when it delves into Module B, it is calling the module in the virtual environment.</li> </ul> </li> <li>This is a problem because I start working on the problem -- which is in module B, but only visible when being called from module A -- and then I realize that all of the fixes which I've made were done on the virtual environment file, so then when I run the (properly isolated) tests, they all fail because all the work I did was on the wrong file.</li> </ul> <h1>Solution</h1> <p>I'm hoping there is some sort of configuration <em>within VS Code</em> where I can tell it, please don't use the virtual environment, but use the actual code.</p> <p>I thought this was because I wasn't actually installing it as an editable package, but that doesn't seem to be the issue.</p>
<python><visual-studio-code><debugging>
2023-09-19 10:20:54
1
3,558
Mike Williamson
77,133,672
7,483,509
Solving environment conflict with brew and libgio-2.0.0.dylib
<p>I am running a python gstreamer app on mac but I get a library conflict. Here is the app:</p> <pre class="lang-py prettyprint-override"><code>import gi gi.require_version('Gtk', '3.0') gi.require_version('Gst', '1.0') from gi.repository import Gtk, Gst, Gdk Gst.init(None) Gtk.init(None) class VideoPlayer: def __init__(self): self.window = Gtk.Window() self.window.set_default_size(640, 480) self.window.connect(&quot;delete-event&quot;, Gtk.main_quit) self.player = Gst.ElementFactory.make(&quot;playbin&quot;) self.playing = False self.setup_ui() self.setup_pipeline() def setup_ui(self): self.video_widget = Gtk.DrawingArea() self.video_widget.set_size_request(640, 360) self.play_button = Gtk.Button(label=&quot;Play&quot;) self.play_button.connect(&quot;clicked&quot;, self.toggle_play) self.repeat_button = Gtk.Button(label=&quot;Repeat&quot;) self.repeat_button.connect(&quot;clicked&quot;, self.toggle_repeat) self.slider = Gtk.Scale(orientation=Gtk.Orientation.HORIZONTAL) self.slider.set_range(0, 100) self.slider.connect(&quot;value-changed&quot;, self.seek) self.file_chooser = Gtk.FileChooserButton(title=&quot;Open Video&quot;) self.file_chooser.connect(&quot;file-set&quot;, self.load_file) self.grid = Gtk.Grid() self.grid.attach(self.video_widget, 0, 0, 4, 1) self.grid.attach(self.play_button, 0, 1, 1, 1) self.grid.attach(self.repeat_button, 1, 1, 1, 1) self.grid.attach(self.slider, 2, 1, 1, 1) self.grid.attach(self.file_chooser, 0, 2, 4, 1) self.window.add(self.grid) self.window.show_all() def setup_pipeline(self): self.bus = self.player.get_bus() self.bus.add_signal_watch() self.bus.connect(&quot;message&quot;, self.on_message) def toggle_play(self, button): if self.playing: self.player.set_state(Gst.State.PAUSED) self.play_button.set_label(&quot;Play&quot;) else: self.player.set_state(Gst.State.PLAYING) self.play_button.set_label(&quot;Pause&quot;) self.playing = not self.playing def toggle_repeat(self, button): pass # Implement repeat logic if needed def 
load_file(self, button): dialog = Gtk.FileChooserDialog(&quot;Open Video File&quot;, None, Gtk.FileChooserAction.OPEN, (Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL, Gtk.STOCK_OPEN, Gtk.ResponseType.OK)) filter_mp4 = Gtk.FileFilter() filter_mp4.set_name(&quot;MP4 files&quot;) filter_mp4.add_mime_type(&quot;video/mp4&quot;) dialog.add_filter(filter_mp4) response = dialog.run() if response == Gtk.ResponseType.OK: uri = dialog.get_uri() self.player.set_property(&quot;uri&quot;, uri) self.play_button.set_label(&quot;Play&quot;) self.playing = False self.slider.set_value(0) dialog.destroy() def seek(self, slider): if not self.playing: return value = slider.get_value() seek_time = int(value * self.player.query_duration(Gst.Format.TIME)[1] / 100) self.player.seek_simple(Gst.Format.TIME, Gst.SeekFlags.FLUSH, seek_time) def on_message(self, bus, message): if message.type == Gst.MessageType.EOS: self.play_button.set_label(&quot;Play&quot;) self.playing = False elif message.type == Gst.MessageType.ERROR: err, debug = message.parse_error() print(f&quot;Error: {err}, Debug: {debug}&quot;) if __name__ == '__main__': player = VideoPlayer() Gtk.main() </code></pre> <p>And the error:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/username/app_example/gplayer2.py&quot;, line 1, in &lt;module&gt; import gi File &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/python3.11/site-packages/gi/__init__.py&quot;, line 40, in &lt;module&gt; from . 
import _gi ImportError: dlopen(/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/python3.11/site-packages/gi/_gi.cpython-311-darwin.so, 0x0002): Library not loaded: /opt/homebrew/opt/glib/lib/libgio-2.0.0.dylib Referenced from: &lt;15BE39D1-C102-3CA5-9F72-4B518DC6E3B7&gt; /opt/homebrew/Cellar/gobject-introspection/1.78.1/lib/libgirepository-1.0.1.dylib Reason: tried: '/opt/homebrew/opt/glib/lib/libgio-2.0.0.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/opt/glib/lib/libgio-2.0.0.dylib' (no such file), '/opt/homebrew/opt/glib/lib/libgio-2.0.0.dylib' (no such file), '/usr/local/lib/libgio-2.0.0.dylib' (no such file), '/usr/lib/libgio-2.0.0.dylib' (no such file, not in dyld cache), '/opt/homebrew/Cellar/glib/2.78.0/lib/libgio-2.0.0.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/Cellar/glib/2.78.0/lib/libgio-2.0.0.dylib' (no such file), '/opt/homebrew/Cellar/glib/2.78.0/lib/libgio-2.0.0.dylib' (no such file), '/usr/local/lib/libgio-2.0.0.dylib' (no such file), '/usr/lib/libgio-2.0.0.dylib' (no such file, not in dyld cache) </code></pre> <p>I tried:</p> <pre class="lang-bash prettyprint-override"><code>$ mv /opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/libgio-2.0.0.dylib /opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/libgio-2.0.0.dylib.backup $ python gplayer2.py Traceback (most recent call last): File &quot;/Users/nicholasscottodiperto/work/SPOC-SW/demos/streaming/gplayer2.py&quot;, line 1, in &lt;module&gt; import gi File &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/python3.11/site-packages/gi/__init__.py&quot;, line 40, in &lt;module&gt; from . 
import _gi ImportError: dlopen(/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/python3.11/site-packages/gi/_gi.cpython-311-darwin.so, 0x0002): Library not loaded: @rpath/libgio-2.0.0.dylib Referenced from: &lt;35AF13C3-C9FA-322E-81D8-92F96EFB3CC0&gt; /opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/python3.11/site-packages/gi/_gi.cpython-311-darwin.so Reason: tried: '/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/libgio-2.0.0.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/libgio-2.0.0.dylib' (no such file), '/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/libgio-2.0.0.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/libgio-2.0.0.dylib' (no such file), '/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/libgio-2.0.0.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/libgio-2.0.0.dylib' (no such file), '/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/libgio-2.0.0.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/libgio-2.0.0.dylib' (no such file), '/opt/homebrew/Caskroom/miniforge/base/envs/switch/bin/../lib/libgio-2.0.0.dylib' (no such file), '/opt/homebrew/Caskroom/miniforge/base/envs/switch/bin/../lib/libgio-2.0.0.dylib' (no such file), '/usr/local/lib/libgio-2.0.0.dylib' (no such file), '/usr/lib/libgio-2.0.0.dylib' (no such file, not in dyld cache) </code></pre> <p>or</p> <pre class="lang-bash prettyprint-override"><code>$ mv /opt/homebrew/Cellar/glib/2.78.0/lib/libgio-2.0.0.dylib /opt/homebrew/Cellar/glib/2.78.0/lib/libgio-2.0.0.dylib.backup $ python gplayer2.py Traceback (most recent call last): File &quot;/Users/nicholasscottodiperto/work/SPOC-SW/demos/streaming/gplayer2.py&quot;, line 1, in &lt;module&gt; import gi File 
&quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/python3.11/site-packages/gi/__init__.py&quot;, line 40, in &lt;module&gt; from . import _gi ImportError: dlopen(/opt/homebrew/Caskroom/miniforge/base/envs/switch/lib/python3.11/site-packages/gi/_gi.cpython-311-darwin.so, 0x0002): Library not loaded: /opt/homebrew/opt/glib/lib/libgio-2.0.0.dylib Referenced from: &lt;15BE39D1-C102-3CA5-9F72-4B518DC6E3B7&gt; /opt/homebrew/Cellar/gobject-introspection/1.78.1/lib/libgirepository-1.0.1.dylib Reason: tried: '/opt/homebrew/opt/glib/lib/libgio-2.0.0.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/opt/glib/lib/libgio-2.0.0.dylib' (no such file), '/opt/homebrew/opt/glib/lib/libgio-2.0.0.dylib' (no such file), '/usr/local/lib/libgio-2.0.0.dylib' (no such file), '/usr/lib/libgio-2.0.0.dylib' (no such file, not in dyld cache), '/opt/homebrew/Cellar/glib/2.78.0/lib/libgio-2.0.0.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/Cellar/glib/2.78.0/lib/libgio-2.0.0.dylib' (no such file), '/opt/homebrew/Cellar/glib/2.78.0/lib/libgio-2.0.0.dylib' (no such file), '/usr/local/lib/libgio-2.0.0.dylib' (no such file), '/usr/lib/libgio-2.0.0.dylib' (no such file, not in dyld cache) </code></pre> <p>How can I fix my environment?</p>
<python><gtk><homebrew><gstreamer><glib>
2023-09-19 10:15:20
0
1,109
Nick Skywalker
77,133,666
2,071,807
Should Django enumeration types like TextChoices be compared with `is` or `==`?
<p>The <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#enumeration-types" rel="nofollow noreferrer">Django Enumeration Types documention</a> says that fields like <code>models.TextChoices</code></p> <blockquote> <p>... work similar to enum from Python’s standard library, but with some modifications.</p> </blockquote> <p>It's <a href="https://docs.python.org/3/howto/enum.html#comparisons" rel="nofollow noreferrer">recommended</a> to compare Enumeration members by identity (with <code>is</code>):</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; Color.RED is Color.RED True </code></pre> <p>But the Django docs don't seem to say anything about comparing <code>TextChoices</code> values. Which of these two options is preferred?</p> <pre class="lang-py prettyprint-override"><code>from django.db import models class FooChoices(models.TextChoices): FOO = &quot;foo&quot; BAR = &quot;bar&quot; class FooModel(models.Model): type = models.CharField(max_length=50, choices=FooChoices.choices) @property def is_foo(self): # Either... return self.type == DatapointDefinitionType.FOO # ... or return self.type is DatapointDefinitionType.FOO </code></pre>
<python><django>
2023-09-19 10:14:27
2
79,775
LondonRob
77,133,489
99,989
In Python, why doesn't dataclasses.asdict work with fields of type dictionary?
<p>Why doesn't dataclasses.asdict work with dictionary values (not keys)?</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import asdict, dataclass, field @dataclass(order=True, frozen=True, eq=True) class Path: p: tuple[str, ...] = field(default=()) @dataclass class C: p: dict[Path, int] = field(default_factory=dict) path = Path(('x', 'y')) print(hash(path), asdict(path)) c = C({path: 1}) print(asdict(c)) # TypeError: unhashable type: 'dict' </code></pre>
<python><python-dataclasses>
2023-09-19 09:49:38
1
33,551
Neil G
77,133,470
9,067,016
Split text by sentences
<p>I have run into a problem finding a convenient method to split text by a list of predefined sentences. The sentences can include any special characters and arbitrary custom content.</p> <p>Example:</p> <pre><code>text = &quot;My name. is A. His name is B. Her name is C. That's why...&quot; delims = [&quot;My name. is&quot;, &quot;His name is&quot;, &quot;Her name is&quot;] </code></pre> <p>I want something like:</p> <pre><code>def custom_sentence_split(text, delims): # stuff return result custom_sentence_split(text, delims) # [&quot;My name. is&quot;, &quot; A. &quot;, &quot;His name is&quot;, &quot; B. &quot;, &quot;Her name is&quot;, &quot; C. That's why...&quot;] </code></pre> <p>UPD. There is a less convenient solution like the one below, but I'd prefer a cleaner one:</p> <pre><code> def collect_output(text, finds): text_copy = text[:] retn = [] for found in finds: part1, part2 = text_copy.split(found, 1) retn += [part1, found] text_copy = part2 return retn def custom_sentence_split(text, splitters): pattern = &quot;(&quot;+&quot;|&quot;.join(splitters)+&quot;|)&quot; finds = list(filter(bool, re.findall(pattern, text))) output = collect_output(text, finds) return output </code></pre> <p>UPD2: it seems a working solution has been found.</p> <pre><code>pattern = &quot;(&quot;+&quot;|&quot;.join(map(re.escape, delims)) +&quot;)&quot;; re.split(pattern, text) </code></pre>
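The UPD2 one-liner, expanded into a runnable sketch: the capturing group makes `re.split` keep each matched delimiter in the output, and `re.escape` stops characters like `.` inside the delimiters from being treated as regex wildcards.

```python
import re

text = "My name. is A. His name is B. Her name is C. That's why..."
delims = ["My name. is", "His name is", "Her name is"]

# Capturing group => re.split keeps each matched delimiter in the result;
# re.escape => "." in the delimiters is matched literally, not as a wildcard.
pattern = "(" + "|".join(map(re.escape, delims)) + ")"
parts = [p for p in re.split(pattern, text) if p]  # drop the empty lead piece
print(parts)
```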
<python><regex><python-re><sentence>
2023-09-19 09:48:03
2
609
Vova
77,133,408
6,243,129
How to use patch for a variable in PyTest
<p>I have the Python code below:</p> <pre><code>#datamanager.py import os BASE_DIR = '' #SOME_VALUE data_list = '' #SOME_VALUE loaded_data = dict.fromkeys(data_list) def update_data(): for key, current_model in loaded_data.items(): mod_dir = os.path.join(BASE_DIR, key) if not os.path.exists(mod_dir): print(&quot;Model dir not present&quot;) continue model_files = os.listdir(mod_dir) if not model_files: print(&quot;Model dir is empty&quot;) continue </code></pre> <p>I am writing a pytest to test whether <code>mod_dir</code> exists or not. I have to mock this, as initially <code>mod_dir</code> will not be present. To do this, there is a for loop which loops over <code>loaded_data</code>. I am trying to patch this variable, but it looks like this is not possible. I am trying to do something like below:</p> <pre><code>def test_update_models(): mock_loaded_data = {'some_data': 'data_file'} with patch('datamanager.loaded_data', new=mock_loaded_data): update_data() </code></pre> <p>But I keep getting an <code>AttributeError</code> saying <code>loaded_data</code> is not found. I think it's not possible to patch the values of variables. What can I try next?</p>
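A self-contained sketch (using a synthetic module registered in `sys.modules`, since the real `datamanager` isn't available here) showing that `mock.patch` does replace a module-level variable, provided the target string names the module exactly as it is imported:

```python
import sys
import types
from unittest import mock

# Synthetic stand-in for the real datamanager module (hypothetical here).
datamanager = types.ModuleType("datamanager")
datamanager.loaded_data = {"real": "data"}
sys.modules["datamanager"] = datamanager

with mock.patch("datamanager.loaded_data", new={"some_data": "data_file"}):
    # Inside the context, the module-level variable is replaced...
    print(datamanager.loaded_data)  # {'some_data': 'data_file'}

# ...and automatically restored once the context exits.
print(datamanager.loaded_data)  # {'real': 'data'}
```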
<python><pytest><patch><pytest-mock>
2023-09-19 09:41:44
1
7,576
S Andrew
77,133,098
464,538
List bucket files in S3 files permission denied. But upload works
<p>I'm making a simple script in Python to upload and list the files in an S3 bucket; the problem is that I can upload the files, but I receive a <code>Permission Denied</code> error when I try to list the files.</p> <p>This is the policy:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;Version&quot;: &quot;2012-10-17&quot;, &quot;Statement&quot;: [ { &quot;Effect&quot;: &quot;Allow&quot;, &quot;Action&quot;: [ &quot;s3:*&quot;, &quot;s3-object-lambda:*&quot; ], &quot;Resource&quot;: &quot;arn:aws:s3:::notes-sync-test/*&quot; } ] } </code></pre> <p>This is the code:</p> <pre class="lang-py prettyprint-override"><code>import boto3 s3 = boto3.resource( service_name='s3', region_name='us-east-1', aws_access_key_id='XXX', aws_secret_access_key='ZZZZ' ) bucket_name = &quot;notes-sync-test&quot; bucket = s3.Bucket(bucket_name) # Works - Upload a new file data = open('test.pdf', 'rb') s3.Bucket(bucket_name).put_object(Key='test.pdf', Body=data) # Fail - List objects for my_bucket_object in bucket.objects.all(): print(my_bucket_object.key) </code></pre> <p>Any idea why I can upload but not see the files?</p> <p>Thanks!</p> <p>Edit:</p> <p>If I configure the policy like this, both ListObjects and PutObject work, but I think it's weird:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;Version&quot;: &quot;2012-10-17&quot;, &quot;Statement&quot;: [ { &quot;Effect&quot;: &quot;Allow&quot;, &quot;Action&quot;: [ &quot;s3:*&quot;, &quot;s3-object-lambda:*&quot; ], &quot;Resource&quot;: &quot;arn:aws:s3:::notes-sync-test/*&quot; }, { &quot;Effect&quot;: &quot;Allow&quot;, &quot;Action&quot;: [ &quot;s3:*&quot;, &quot;s3-object-lambda:*&quot; ], &quot;Resource&quot;: &quot;arn:aws:s3:::notes-sync-test&quot; } ] } </code></pre>
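For reference, a sketch of why the two-statement version is required rather than weird: `s3:ListBucket` is a bucket-level action and must be granted on the bucket ARN itself, while object-level actions such as `s3:GetObject`/`s3:PutObject` are granted on the `/*` object ARN. Built here as a Python dict so the distinction is easy to see (the bucket name is the one from the question):

```python
import json

# ListBucket targets the bucket itself; Get/PutObject target objects in it.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::notes-sync-test",    # bucket ARN, no /*
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::notes-sync-test/*",  # object ARN
        },
    ],
}
print(json.dumps(policy, indent=2))
```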
<python><python-3.x><boto3>
2023-09-19 08:57:38
1
1,584
Klian
77,133,095
3,337,089
Customize state_dict() and load_state_dict() pytorch
<p>I have a nested set of classes (each of type <code>torch.nn.module</code>). I need to do some preprocessing before saving the weights of one of the nested classes. Is it possible to override the <code>state_dict()</code> function so that I can insert the preprocessing in my custom implementation?</p> <p>Sample code:</p> <pre><code>Class A(torch.nn.module): def __init__(self): super().__init__() self.b1 = B1() self.b2 = B2() Class B1(torch.nn.module): def __init__(self): super().__init__() self.var = torch.nn.Parameter(torch.Tensor((3, 5), dtype=float)) Class B2(torch.nn.module): def __init__(self): super().__init__() self.var = torch.nn.Parameter(torch.Tensor((3, 5), dtype=float)) def state_dict(): # I want to override the default state_dict like this, but this does not work. # Is there a way to achieve this? bool_var = self.var.bool().cpu().numpy() state_dict1 = super.state_dict() state_dict1.update({'var': bool_var}) return state_dict1 def load_state_dict(state_dict): state_dict['var'] = state_dict['var'].float() super.load_state_dict(state_dict) return </code></pre> <p>Specifically, for one of the classes, I want to convert the variable to <code>bool</code> before saving it and convert it back to <code>float</code> while loading it. I can't make the tensor as <code>bool</code> by default since it needs to be learned as a float value.</p> <p>The code I am working with is <a href="https://github.com/apchenstu/TensoRF/blob/main/models/tensorBase.py#L248--L263" rel="nofollow noreferrer">this</a>. Here, they've hard-coded saving of such variable. I don't want to do that. 
I want <code>state_dict()</code> and <code>load_state_dict()</code> to automatically take care of the conversion.</p> <p>PS: While <a href="https://discuss.pytorch.org/t/how-to-customize-the-module-state-dict-load-state-dict/139604/4" rel="nofollow noreferrer">this discussion</a> indicates that customizing is not possible, I am hoping PyTorch would have added it in the recent versions or there are some other ways of achieving this.</p>
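A torch-free sketch of the override pattern being asked about (plain classes stand in for `nn.Module` so the conversion logic is visible; with real modules, keep the `*args, **kwargs` pass-through so PyTorch's own `state_dict()` arguments still work):

```python
# Minimal stand-in base class playing the role of nn.Module here.
class Base:
    def __init__(self):
        self.var = 1.0

    def state_dict(self, *args, **kwargs):
        return {"var": self.var}

    def load_state_dict(self, state_dict):
        self.var = state_dict["var"]

class B2(Base):
    def state_dict(self, *args, **kwargs):
        sd = super().state_dict(*args, **kwargs)
        sd["var"] = bool(sd["var"])                   # pre-process on save
        return sd

    def load_state_dict(self, state_dict):
        state_dict["var"] = float(state_dict["var"])  # convert back on load
        super().load_state_dict(state_dict)

m = B2()
saved = m.state_dict()
print(saved)   # {'var': True}
m.load_state_dict(saved)
print(m.var)   # 1.0
```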
<python><pytorch><save><customization>
2023-09-19 08:57:08
1
7,307
Nagabhushan S N
77,133,078
7,483,509
No URI handler implemented for \"gst-pipeline\"." - using PyQt 5.15
<p>I am writing a simple video player with gstreamer and PyQt 5.15.8.</p> <p>The code is a slightly modified version from <a href="https://coderslegacy.com/python/pyqt5-video-player-with-qmediaplayer/" rel="nofollow noreferrer">this website</a> where I replaced:</p> <pre class="lang-py prettyprint-override"><code> def openFile(self): fileName, _ = QFileDialog.getOpenFileName(self, &quot;Open Movie&quot;, QDir.homePath()) if fileName != '': self.mediaPlayer.setMedia( QMediaContent(QUrl.fromLocalFile(fileName))) </code></pre> <p>by</p> <pre class="lang-py prettyprint-override"><code> def openFile(self): pipeline = f&quot;gst-pipeline: videotestsrc ! autovideosink&quot; self.mediaPlayer.setMedia( QMediaContent(QUrl(pipeline))) </code></pre> <p>as a test of the gst-pipeline feature as I would like to be able to pass my own pipeline for network streaming.</p> <p>Here is the full code:</p> <pre class="lang-py prettyprint-override"><code>from PyQt5.QtCore import QDir, Qt, QUrl from PyQt5.QtMultimedia import QMediaContent, QMediaPlayer from PyQt5.QtMultimediaWidgets import QVideoWidget from PyQt5.QtWidgets import (QMainWindow, QWidget, QPushButton, QApplication, QLabel, QFileDialog, QStyle, QVBoxLayout) import sys class VideoPlayer(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle(&quot;PyQt5 Video Player&quot;) self.mediaPlayer = QMediaPlayer(None, QMediaPlayer.VideoSurface) videoWidget = QVideoWidget() self.playButton = QPushButton() self.playButton.setIcon(self.style().standardIcon(QStyle.SP_MediaPlay)) self.playButton.clicked.connect(self.play) self.openButton = QPushButton(&quot;Open Video&quot;) self.openButton.clicked.connect(self.openFile) widget = QWidget(self) self.setCentralWidget(widget) layout = QVBoxLayout() layout.addWidget(videoWidget) layout.addWidget(self.openButton) layout.addWidget(self.playButton) widget.setLayout(layout) self.mediaPlayer.setVideoOutput(videoWidget) def openFile(self): pipeline = f&quot;gst-pipeline: videotestsrc 
! autovideosink&quot; self.mediaPlayer.setMedia( QMediaContent(QUrl(pipeline))) def play(self): if self.mediaPlayer.state() == QMediaPlayer.PlayingState: self.mediaPlayer.pause() else: self.mediaPlayer.play() app = QApplication(sys.argv) videoplayer = VideoPlayer() videoplayer.resize(640, 480) videoplayer.show() sys.exit(app.exec()) </code></pre> <p>I am getting this error:</p> <pre><code>Error: &quot; videotestsrc ! autovideosink&quot; : &quot;no element \&quot;autovideosink\&quot;&quot; GStreamer; Unable to pause - &quot;gst-pipeline: videotestsrc ! autovideosink&quot; Error: &quot;No URI handler implemented for \&quot;gst-pipeline\&quot;.&quot; </code></pre> <p>All the answers I found regarding the same error point at a version of PyQt5 that is too old, but the gst-pipeline feature is supported since Qt 5.12.2 as from the doc and I am using 5.15 so it should work in theory. What am I missing?</p> <p><em>Edit: I am using mac OS Ventura 13.3.1 with M1 architecture</em></p> <p><em>Edit 2: When running with <code>QT_DEBUG_PLUGINS=1</code> I get the following output:</em></p> <pre><code>Qt: v 5.15.8 PyQt: v 5.15.9 QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platforms&quot; ... 
QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platforms/libqcocoa.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platforms/libqcocoa.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;cocoa&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QCocoaIntegrationPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;cocoa&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platforms/libqminimal.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platforms/libqminimal.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;minimal&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QMinimalIntegrationPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;minimal&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platforms/libqoffscreen.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platforms/libqoffscreen.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;offscreen&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QOffscreenIntegrationPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;offscreen&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platforms/libqwebgl.dylib&quot; 
Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platforms/libqwebgl.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;webgl&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QWebGLIntegrationPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;webgl&quot;) QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/bin/platforms&quot; ... loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platforms/libqcocoa.dylib&quot; QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platformthemes&quot; ... QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platformthemes/libqxdgdesktopportal.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platformthemes/libqxdgdesktopportal.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QPA.QPlatformThemeFactoryInterface.5.1&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;xdgdesktopportal&quot;, &quot;flatpak&quot;, &quot;snap&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QXdgDesktopPortalThemePlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;xdgdesktopportal&quot;, &quot;flatpak&quot;, &quot;snap&quot;) QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/bin/platformthemes&quot; ... QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/styles&quot; ... 
QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/styles/libqmacstyle.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/styles/libqmacstyle.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QStyleFactoryInterface&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;macintosh&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QMacStylePlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;macintosh&quot;) QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/bin/styles&quot; ... loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/styles/libqmacstyle.dylib&quot; QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice&quot; ... QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libgstaudiodecoder.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libgstaudiodecoder.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.qt.mediaserviceproviderfactory/5.0&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;gstreameraudiodecode&quot; ], &quot;Services&quot;: [ &quot;org.qt-project.qt.audiodecode&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QGstreamerAudioDecoderServicePlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;gstreameraudiodecode&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libgstcamerabin.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libgstcamerabin.dylib, metadata= { 
&quot;IID&quot;: &quot;org.qt-project.qt.mediaserviceproviderfactory/5.0&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;gstreamercamerabin&quot; ], &quot;Services&quot;: [ &quot;org.qt-project.qt.camera&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;CameraBinServicePlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;gstreamercamerabin&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libgstmediacapture.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libgstmediacapture.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.qt.mediaserviceproviderfactory/5.0&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;gstreamermediacapture&quot; ], &quot;Services&quot;: [ &quot;org.qt-project.qt.audiosource&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QGstreamerCaptureServicePlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;gstreamermediacapture&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libgstmediaplayer.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libgstmediaplayer.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.qt.mediaserviceproviderfactory/5.0&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;gstreamermediaplayer&quot; ], &quot;Services&quot;: [ &quot;org.qt-project.qt.mediaplayer&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QGstreamerPlayerServicePlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;gstreamermediaplayer&quot;) QFactoryLoader::QFactoryLoader() looking at 
&quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libqavfcamera.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libqavfcamera.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.qt.mediaserviceproviderfactory/5.0&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;avfoundationcamera&quot; ], &quot;Services&quot;: [ &quot;org.qt-project.qt.camera&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;AVFServicePlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;avfoundationcamera&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libqavfmediaplayer.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libqavfmediaplayer.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.qt.mediaserviceproviderfactory/5.0&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;avfoundationmediaplayer&quot; ], &quot;Services&quot;: [ &quot;org.qt-project.qt.mediaplayer&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;AVFMediaPlayerServicePlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;avfoundationmediaplayer&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libqtmedia_audioengine.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libqtmedia_audioengine.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.qt.mediaserviceproviderfactory/5.0&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;audiocapture&quot; ], &quot;Services&quot;: [ &quot;org.qt-project.qt.audiosource&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;AudioCaptureServicePlugin&quot;, &quot;debug&quot;: 
false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;audiocapture&quot;) QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/bin/mediaservice&quot; ... loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libgstmediaplayer.dylib&quot; loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libqavfmediaplayer.dylib&quot; QMediaPluginLoader: loaded plugins for key &quot;org.qt-project.qt.mediaplayer&quot; : (&quot;gstreamermediaplayer&quot;, &quot;avfoundationmediaplayer&quot;) QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/resourcepolicy&quot; ... QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/bin/resourcepolicy&quot; ... QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/iconengines&quot; ... QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/iconengines/libqsvgicon.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/iconengines/libqsvgicon.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QIconEngineFactoryInterface&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;svg&quot;, &quot;svgz&quot;, &quot;svg.gz&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QSvgIconPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;svg&quot;, &quot;svgz&quot;, &quot;svg.gz&quot;) QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/bin/iconengines&quot; ... 
QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats&quot; ... QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqgif.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqgif.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QImageIOHandlerFactoryInterface&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;gif&quot; ], &quot;MimeTypes&quot;: [ &quot;image/gif&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QGifPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;gif&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqicns.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqicns.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QImageIOHandlerFactoryInterface&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;icns&quot; ], &quot;MimeTypes&quot;: [ &quot;image/x-icns&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QICNSPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;icns&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqico.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqico.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QImageIOHandlerFactoryInterface&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;ico&quot;, &quot;cur&quot; ], &quot;MimeTypes&quot;: [ &quot;image/vnd.microsoft.icon&quot;, &quot;image/vnd.microsoft.icon&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: 
&quot;QICOPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;ico&quot;, &quot;cur&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqjpeg.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqjpeg.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QImageIOHandlerFactoryInterface&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;jpg&quot;, &quot;jpeg&quot; ], &quot;MimeTypes&quot;: [ &quot;image/jpeg&quot;, &quot;image/jpeg&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QJpegPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;jpg&quot;, &quot;jpeg&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqmacheif.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqmacheif.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QImageIOHandlerFactoryInterface&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;heic&quot;, &quot;heif&quot; ], &quot;MimeTypes&quot;: [ &quot;image/heic&quot;, &quot;image/heif&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QMacHeifPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;heic&quot;, &quot;heif&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqmacjp2.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqmacjp2.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QImageIOHandlerFactoryInterface&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;jp2&quot; ], &quot;MimeTypes&quot;: [ 
&quot;image/jp2&quot;, &quot;image/jpx&quot;, &quot;image/jpm&quot;, &quot;video/mj2&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QMacJp2Plugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;jp2&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqsvg.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqsvg.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QImageIOHandlerFactoryInterface&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;svg&quot;, &quot;svgz&quot; ], &quot;MimeTypes&quot;: [ &quot;image/svg+xml&quot;, &quot;image/svg+xml-compressed&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QSvgPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;svg&quot;, &quot;svgz&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqtga.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqtga.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QImageIOHandlerFactoryInterface&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;tga&quot; ], &quot;MimeTypes&quot;: [ &quot;image/x-tga&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QTgaPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;tga&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqtiff.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqtiff.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QImageIOHandlerFactoryInterface&quot;, &quot;MetaData&quot;: { 
&quot;Keys&quot;: [ &quot;tiff&quot;, &quot;tif&quot; ], &quot;MimeTypes&quot;: [ &quot;image/tiff&quot;, &quot;image/tiff&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QTiffPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;tiff&quot;, &quot;tif&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqwbmp.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqwbmp.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QImageIOHandlerFactoryInterface&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;wbmp&quot; ], &quot;MimeTypes&quot;: [ &quot;image/vnd.wap.wbmp&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QWbmpPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;wbmp&quot;) QFactoryLoader::QFactoryLoader() looking at &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqwebp.dylib&quot; Found metadata in lib /opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqwebp.dylib, metadata= { &quot;IID&quot;: &quot;org.qt-project.Qt.QImageIOHandlerFactoryInterface&quot;, &quot;MetaData&quot;: { &quot;Keys&quot;: [ &quot;webp&quot; ], &quot;MimeTypes&quot;: [ &quot;image/webp&quot; ] }, &quot;archreq&quot;: 0, &quot;className&quot;: &quot;QWebpPlugin&quot;, &quot;debug&quot;: false, &quot;version&quot;: 331520 } Got keys from plugin meta data (&quot;webp&quot;) QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/bin/imageformats&quot; ... 
loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqgif.dylib&quot; loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqicns.dylib&quot; loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqico.dylib&quot; loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqjpeg.dylib&quot; loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqmacheif.dylib&quot; loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqmacjp2.dylib&quot; loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqsvg.dylib&quot; loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqtga.dylib&quot; loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqtiff.dylib&quot; loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqwbmp.dylib&quot; loaded library &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqwebp.dylib&quot; QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/video/gstvideorenderer&quot; ... QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/bin/video/gstvideorenderer&quot; ... QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/accessible&quot; ... QFactoryLoader::QFactoryLoader() checking directory path &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/bin/accessible&quot; ... 
2023-09-20 11:03:58.284 python[51572:1372646] +[CATransaction synchronize] called within transaction 2023-09-20 11:03:58.395 python[51572:1372646] +[CATransaction synchronize] called within transaction Error: &quot; videotestsrc ! autovideosink&quot; : &quot;no element \&quot;autovideosink\&quot;&quot; GStreamer; Unable to pause - &quot;gst-pipeline: videotestsrc ! autovideosink&quot; Error: &quot;No URI handler implemented for \&quot;gst-pipeline\&quot;.&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqgif.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqicns.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqico.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqjpeg.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqmacheif.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqmacjp2.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqsvg.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqtga.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqtiff.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqwbmp.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/imageformats/libqwebp.dylib&quot; QLibraryPrivate::unload succeeded on 
&quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libgstmediaplayer.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/mediaservice/libqavfmediaplayer.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/styles/libqmacstyle.dylib&quot; QLibraryPrivate::unload succeeded on &quot;/opt/homebrew/Caskroom/miniforge/base/envs/switch/plugins/platforms/libqcocoa.dylib&quot; </code></pre>
<python><macos><pyqt5><gstreamer>
2023-09-19 08:55:28
0
1,109
Nick Skywalker
77,132,977
353,337
`PyRun_String` equivalent in Python stable ABI?
<p>I have a C++ Python library that I'd like to translate to use the <a href="https://docs.python.org/3/c-api/stable.html" rel="nofollow noreferrer">stable ABI</a>. It currently uses <a href="https://docs.python.org/3.12/c-api/veryhigh.html#c.PyRun_String" rel="nofollow noreferrer"><code>PyRun_String</code></a></p> <pre class="lang-cpp prettyprint-override"><code>PyRun_String(&quot;a = 1&quot;, start, globals, locals); </code></pre> <p>which is <em>not</em> part of the stable ABI. Is there a useful replacement for it?</p>
<python><c>
2023-09-19 08:42:19
1
59,565
Nico Schlömer
77,132,964
11,355,926
Converting string to list with dictionaries
<p>I have a big string in the following format (100K+ lines):</p> <pre><code>----------- Phone Information ----------- #TotalPhones, #TotalRegistered, #RegisteredSCCP, #RegisteredSIP, #UnRegistered, #Rejected, #PartiallyRegistered, StateId, #ExpUnreg 202, 202, 178, 24, 0, 0, 0, 842,0 DeviceName, Descr, Ipaddr, Ipv6addr, Ipv4Attr, Ipv6Attr, MACaddr, RegStatus, PhoneProtocol, DeviceModel, HTTPsupport, #regAttempts, prodId, username, seq#, RegStatusChg TimeStamp, IpAddrType, LoadId, ActiveLoadId, InactiveLoadId, ReqLoadId, DnldServer, DnldStatus, DnldFailReason, LastActTimeStamp, Perfmon Object SEP001E7A24E0EE, SEP001E7A24E0EE, 10.131.96.20, , 3, 0, 001E7A24E0EE, reg, SCCP, 115, yes, 0, 115, 53131, 1, 1688637575, 1, SCCP41.9-4-2SR3-1S, SCCP41.9-4-2SR3-1S, , , , 0, , 1688637575, 2 SEP0014F252F8C7, SEP0014F252F8C7, 10.129.4.20, , 0, 0, 0014F252F8C7, reg, SCCP, 30007, yes, 0, 30022, 24419, 2, 1692294899, 1, CP7912080004SCCP080108A, , , , , 0, , 1692294899, 2 SEP0012D9D6A17F, SEP0012D9D6A17F, 10.142.4.20, , 0, 0, 0012D9D6A17F, reg, SCCP, 7, yes, 0, 35, 24052, 203, 1694066462, 1, P0030801SR02, , , , , 0, , 1694066462, 2 SEP0014F29CDF41, SEP0014F29CDF41, 10.129.6.2, , 0, 0, 0014F29CDF41, reg, SCCP, 30007, yes, 0, 30022, 24430, 4, 1692294885, 1, CP7912080004SCCP080108A, , , , , 0, , 1692294885, 2 SEP000D285E94F3, SEP000D285E94F3, 10.144.96.33, , 0, 0, 000D285E94F3, reg, SCCP, 8, yes, 0, 36, ANJO, 5, 1688352492, 1, P0030801SR02, , , , , 0, , 1688352492, 2 SEP001469BC6AF4, SEP001469BC6AF4, 10.129.4.24, , 0, 0, 001469BC6AF4, reg, SCCP, 7, yes, 0, 35, stvdil, 6, 1693204667, 1, P0030801SR02, , , , , 0, , 1693204667, 2 SEPEC44761E9DAB, SEPEC44761E9DAB, 10.101.6.0, , 3, 0, EC44761E9DAB, reg, SCCP, 369, yes, 0, 268, mgsst, 7, 1688352492, 1, SCCP11.9-4-2SR3-1S, SCCP11.9-4-2SR3-1S, , , , 0, , 1688352492, 2 SEP00141CDBB7C8, SEP00141CDBB7C8, 10.142.6.7, , 0, 0, 00141CDBB7C8, reg, SCCP, 30007, yes, 0, 30022, 24079, 8, 1688352492, 1, CP7912080004SCCP080108A, , , , , 0, , 1688352492, 2 
SEP001AA2460268, SEP001AA2460268, 10.120.6.5, , 3, 0, 001AA2460268, reg, SCCP, 30018, yes, 0, 30044, ASGBN, 9, 1693911319, 1, SCCP41.9-4-2SR3-1S, SCCP41.9-4-2SR3-1S, , , , 0, , 1693911319, 2 SEP0022555E8182, SEP0022555E8182, 10.124.6.23, , 3, 0, 0022555E8182, reg, SCCP, 30018, yes, 0, 30044, 24301, 10, 1688352492, 1, SCCP41.9-4-2SR3-1S, SCCP41.9-4-2SR3-1S, , , , 0, , 1688352492, 2 SEP0012D9B94F83, SEP0012D9B94F83, 10.142.6.8, , 0, 0, 0012D9B94F83, reg, SCCP, 7, yes, 0, 35, NoUserId, 11, 1694169703, 1, P0030801SR02, , , , , 0, , 1694169703, 2 SEP00141C222E29, SEP00141C222E29, 10.142.6.13, , 0, 0, 00141C222E29, reg, SCCP, 30007, yes, 0, 30022, 24085, 13, 1690930229, 1, CP7912080004SCCP080108A, , , , , 0, , 1690930229, 2 ---------------- Total count 202 ---------------- </code></pre> <p>I want to build a list with dictionaries that takes each headline and adds each value with the headline as the key, example (first line):</p> <pre><code>[ { &quot;DeviceName&quot;:&quot;SEP001E7A24E0EE&quot;, &quot;Descr&quot;:&quot;SEP001E7A24E0EE&quot;, &quot;Ipaddr&quot;:&quot;10.131.96.20&quot;, &quot;Ipv6addr&quot;:None, &quot;Ipv4Attr&quot;:3, &quot;Ipv6Attr&quot;:0, &quot;MACaddr&quot;:&quot;001E7A24E0EE&quot;, &quot;RegStatus&quot;:&quot;reg&quot;, &quot;PhoneProtocol&quot;:&quot;SCCP&quot;, &quot;DeviceModel&quot;:115, &quot;HTTPsupport&quot;:&quot;yes&quot;, &quot;#regAttempts&quot;:0, &quot;prodId&quot;:115, &quot;username&quot;:53131, &quot;seq#&quot;:1, &quot;RegStatusChg TimeStamp&quot;:1688637575, &quot;IpAddrType&quot;:1, &quot;LoadId&quot;:&quot;SCCP41.9-4-2SR3-1S&quot;, &quot;ActiveLoadId&quot;:&quot;SCCP41.9-4-2SR3-1S&quot;, &quot;InactiveLoadId&quot;:None, &quot;ReqLoadId&quot;:None, &quot;DnldServer&quot;:None, &quot;DnldStatus&quot;:0, &quot;DnldFailReason&quot;:None, &quot;LastActTimeStamp&quot;:1688637575, &quot;Perfmon Object&quot;:2 } ] </code></pre> <p>What have I tried (copy/paste)?</p> <pre><code>test = &quot;&quot;&quot;----------- Phone Information 
----------- #TotalPhones, #TotalRegistered, #RegisteredSCCP, #RegisteredSIP, #UnRegistered, #Rejected, #PartiallyRegistered, StateId, #ExpUnreg 202, 202, 178, 24, 0, 0, 0, 842,0 DeviceName, Descr, Ipaddr, Ipv6addr, Ipv4Attr, Ipv6Attr, MACaddr, RegStatus, PhoneProtocol, DeviceModel, HTTPsupport, #regAttempts, prodId, username, seq#, RegStatusChg TimeStamp, IpAddrType, LoadId, ActiveLoadId, InactiveLoadId, ReqLoadId, DnldServer, DnldStatus, DnldFailReason, LastActTimeStamp, Perfmon Object SEP001E7A24E0EE, SEP001E7A24E0EE, 10.131.96.20, , 3, 0, 001E7A24E0EE, reg, SCCP, 115, yes, 0, 115, 53131, 1, 1688637575, 1, SCCP41.9-4-2SR3-1S, SCCP41.9-4-2SR3-1S, , , , 0, , 1688637575, 2 SEP0014F252F8C7, SEP0014F252F8C7, 10.129.4.20, , 0, 0, 0014F252F8C7, reg, SCCP, 30007, yes, 0, 30022, 24419, 2, 1692294899, 1, CP7912080004SCCP080108A, , , , , 0, , 1692294899, 2 SEP0012D9D6A17F, SEP0012D9D6A17F, 10.142.4.20, , 0, 0, 0012D9D6A17F, reg, SCCP, 7, yes, 0, 35, 24052, 203, 1694066462, 1, P0030801SR02, , , , , 0, , 1694066462, 2 SEP0014F29CDF41, SEP0014F29CDF41, 10.129.6.2, , 0, 0, 0014F29CDF41, reg, SCCP, 30007, yes, 0, 30022, 24430, 4, 1692294885, 1, CP7912080004SCCP080108A, , , , , 0, , 1692294885, 2 SEP000D285E94F3, SEP000D285E94F3, 10.144.96.33, , 0, 0, 000D285E94F3, reg, SCCP, 8, yes, 0, 36, ANJO, 5, 1688352492, 1, P0030801SR02, , , , , 0, , 1688352492, 2 SEP001469BC6AF4, SEP001469BC6AF4, 10.129.4.24, , 0, 0, 001469BC6AF4, reg, SCCP, 7, yes, 0, 35, stvdil, 6, 1693204667, 1, P0030801SR02, , , , , 0, , 1693204667, 2 SEPEC44761E9DAB, SEPEC44761E9DAB, 10.101.6.0, , 3, 0, EC44761E9DAB, reg, SCCP, 369, yes, 0, 268, mgsst, 7, 1688352492, 1, SCCP11.9-4-2SR3-1S, SCCP11.9-4-2SR3-1S, , , , 0, , 1688352492, 2 SEP00141CDBB7C8, SEP00141CDBB7C8, 10.142.6.7, , 0, 0, 00141CDBB7C8, reg, SCCP, 30007, yes, 0, 30022, 24079, 8, 1688352492, 1, CP7912080004SCCP080108A, , , , , 0, , 1688352492, 2 SEP001AA2460268, SEP001AA2460268, 10.120.6.5, , 3, 0, 001AA2460268, reg, SCCP, 30018, yes, 0, 30044, ASGBN, 9, 
1693911319, 1, SCCP41.9-4-2SR3-1S, SCCP41.9-4-2SR3-1S, , , , 0, , 1693911319, 2 SEP0022555E8182, SEP0022555E8182, 10.124.6.23, , 3, 0, 0022555E8182, reg, SCCP, 30018, yes, 0, 30044, 24301, 10, 1688352492, 1, SCCP41.9-4-2SR3-1S, SCCP41.9-4-2SR3-1S, , , , 0, , 1688352492, 2 SEP0012D9B94F83, SEP0012D9B94F83, 10.142.6.8, , 0, 0, 0012D9B94F83, reg, SCCP, 7, yes, 0, 35, NoUserId, 11, 1694169703, 1, P0030801SR02, , , , , 0, , 1694169703, 2 SEP00141C222E29, SEP00141C222E29, 10.142.6.13, , 0, 0, 00141C222E29, reg, SCCP, 30007, yes, 0, 30022, 24085, 13, 1690930229, 1, CP7912080004SCCP080108A, , , , , 0, , 1690930229, 2 ---------------- Total count 202 ----------------&quot;&quot;&quot; headlines = [x.strip().split(&quot;, &quot;) for x in test.splitlines() if x.startswith(&quot;DeviceName&quot;)][0] data = [{headlines[index]:word} for line in test.splitlines() for index, word in enumerate(line.strip().split(&quot;, &quot;)) if line.startswith(&quot;SEP&quot;)] print(data) </code></pre> <p>But unfortunately it gives this result:</p> <pre><code>[{'DeviceName': 'SEP001E7A24E0EE'}, {'Descr': 'SEP001E7A24E0EE'}, {'Ipaddr': '10.131.96.20'}, ...] </code></pre> <p>Can someone give some hints or maybe share a better or easier way?</p> <p>I must have overseen something simple.</p>
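The posted comprehension builds one dict per *field*; zipping each whole row against the header row builds one dict per *line*, which is the shape wanted. A sketch on a trimmed sample (assuming, as in the post, that the header line starts with `DeviceName` and data lines with `SEP`):

```python
raw = (
    "DeviceName, Descr, Ipaddr\n"
    "SEP001E7A24E0EE, SEP001E7A24E0EE, 10.131.96.20\n"
    "SEP0014F252F8C7, SEP0014F252F8C7, 10.129.4.20\n"
)

# Grab the first line that looks like the header.
headlines = next(
    line.strip().split(", ")
    for line in raw.splitlines()
    if line.startswith("DeviceName")
)

# zip() pairs every value in a row with its header -> one dict per device.
data = [
    dict(zip(headlines, line.strip().split(", ")))
    for line in raw.splitlines()
    if line.startswith("SEP")
]
print(data[0]["Ipaddr"])  # 10.131.96.20
```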
<python><string><dictionary>
2023-09-19 08:41:12
5
3,060
Cow
77,132,940
449,193
Eclipse PyDev Debugger: Unable to load existing shared libraries - ImportError: libffi.so.7 / libssl.so.1.1
<p>When running pydev debugger on eclipse, it errors out about missing shared libraries namely libffi and libssl, although both are installed</p> <pre><code>&gt;locate libffi.so.7 /usr/lib/x86_64-linux-gnu/libffi.so.7 </code></pre> <pre><code>&quot;/home/user1/.var/app/org.eclipse.Java/eclipse/plugins/org.python.pydev.core_10.2.1.202307021217/pysrc/_pydevd_bundle/pydevd_utils.py&quot;, line 9, in &lt;module&gt; import ctypes File &quot;/home/user1/.pyenv/versions/3.11.5/lib/python3.11/ctypes/__init__.py&quot;, line 8, in &lt;module&gt; from _ctypes import Union, Structure, Array ImportError: libffi.so.7: cannot open shared object file: No such file or directory </code></pre> <p>Adding LD_LIBRARY_PATH to run/debug configurations does not solve the problem set: LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu</p>
<python><eclipse><pydev>
2023-09-19 08:38:19
1
8,794
omars
77,132,571
4,994,781
With pytest under Windows, create an unreadable file for unit testing
<p>I'm using <code>pytest</code> under Windows and one of the things I have to test is behaviour when a file cannot be read and the operation raises <code>PermissionError</code>.</p> <p>In my unit test I cannot use <code>chmod()</code> because under Windows it cannot be used to remove read permission.</p> <p>I understand that I can achieve what I want using the <code>tmp_path</code> fixture to create a temporary file and then use the <code>icacls</code> command (with the help of <code>subprocess.run()</code> in the test unit) to properly remove read permission, but I wonder if I'm missing a much simpler way of writing such a unit test.</p> <p>I can test the relevant code in other ways, but I would like to know a way of creating unreadable files for testing under <code>pytest</code>, just in case I need that in the future, and for learning purposes (as I'm quite new to <code>pytest</code>).</p> <p>Thanks a lot in advance.</p>
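One simpler route (a sketch, not the only option) is to skip the filesystem entirely and patch `open()` so it raises `PermissionError`, e.g. with `unittest.mock` (pytest's `monkeypatch` fixture works the same way); `read_config` here is a hypothetical function under test:

```python
import builtins
from unittest import mock

def read_config(path):
    # Hypothetical code under test: any function that opens a file.
    with open(path) as f:
        return f.read()

# Simulate an unreadable file without touching the filesystem or ACLs:
with mock.patch.object(builtins, "open", side_effect=PermissionError("denied")):
    try:
        read_config("any.txt")
        raised = False
    except PermissionError:
        raised = True

print(raised)  # True
```

In a real pytest test the `try/except` would be `pytest.raises(PermissionError)`; the mock approach is portable across Windows and POSIX, unlike `chmod()`.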
<python><windows><unit-testing><pytest>
2023-09-19 07:43:38
0
580
Raúl Núñez de Arenas Coronado
77,132,556
14,460,824
How to fill a set of columns values with values of another set of columns?
<p>I have a <code>dataframe</code> with two column sections <code>Pn</code> and <code>En</code>:</p> <pre><code>df_original = pd.DataFrame({ 'P1':['','',1,''], 'P2':[4,2,'',3], 'P3':[48,20,10,''], 'P4':['',320,160,264], 'E1':['','',1,3], 'E2':[4,2,10,264], 'E3':[48,20,'',528], 'E4':['','','',792], 'E5':['','','',''] }) </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">P1</th> <th style="text-align: left;">P2</th> <th style="text-align: left;">P3</th> <th style="text-align: left;">P4</th> <th style="text-align: left;">E1</th> <th style="text-align: right;">E2</th> <th style="text-align: left;">E3</th> <th style="text-align: left;">E4</th> <th style="text-align: left;">E5</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: left;"></td> <td style="text-align: left;">4</td> <td style="text-align: left;">48</td> <td style="text-align: left;"></td> <td style="text-align: left;"></td> <td style="text-align: right;">4</td> <td style="text-align: left;">48</td> <td style="text-align: left;"></td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: left;"></td> <td style="text-align: left;">2</td> <td style="text-align: left;">20</td> <td style="text-align: left;">320</td> <td style="text-align: left;"></td> <td style="text-align: right;">2</td> <td style="text-align: left;">20</td> <td style="text-align: left;"></td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: left;">1</td> <td style="text-align: left;"></td> <td style="text-align: left;">10</td> <td style="text-align: left;">160</td> <td style="text-align: left;">1</td> <td style="text-align: right;">10</td> <td style="text-align: left;"></td> <td style="text-align: left;"></td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: right;">3</td> <td 
style="text-align: left;"></td> <td style="text-align: left;">3</td> <td style="text-align: left;"></td> <td style="text-align: left;">264</td> <td style="text-align: left;">3</td> <td style="text-align: right;">264</td> <td style="text-align: left;">528</td> <td style="text-align: left;">792</td> <td style="text-align: left;"></td> </tr> </tbody> </table> </div> <p>I try to use the <code>Pn</code> section as overlay for the <code>En</code> section to fill up its empty values with following conditions:</p> <ul> <li><p>Values that already exist in the <code>En</code> section must not be taken over from the <code>Pn</code> section.</p> </li> <li><p>Ideally, values gaps in the <code>En</code> section are closed (i.e. index row 2), whereby leading blank values in <code>E1</code>, if not filled by a value from <code>P1</code>, must not be indented to the left. (In case of doubt, also without moving any values to the left, may it is easier.)</p> </li> </ul> <p>So in result it should end up in something like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">P1</th> <th style="text-align: left;">P2</th> <th style="text-align: left;">P3</th> <th style="text-align: left;">P4</th> <th style="text-align: left;">E1</th> <th style="text-align: right;">E2</th> <th style="text-align: right;">E3</th> <th style="text-align: left;">E4</th> <th style="text-align: left;">E5</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: left;"></td> <td style="text-align: left;">4</td> <td style="text-align: left;">48</td> <td style="text-align: left;"></td> <td style="text-align: left;"></td> <td style="text-align: right;">4</td> <td style="text-align: right;">48</td> <td style="text-align: left;"></td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: left;"></td> <td style="text-align: left;">2</td> <td 
style="text-align: left;">20</td> <td style="text-align: left;">320</td> <td style="text-align: left;"></td> <td style="text-align: right;">2</td> <td style="text-align: right;">20</td> <td style="text-align: left;">320</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: left;">1</td> <td style="text-align: left;"></td> <td style="text-align: left;">10</td> <td style="text-align: left;">160</td> <td style="text-align: left;">1</td> <td style="text-align: right;">10</td> <td style="text-align: right;">160</td> <td style="text-align: left;"></td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: left;"></td> <td style="text-align: left;">3</td> <td style="text-align: left;"></td> <td style="text-align: left;">264</td> <td style="text-align: left;">3</td> <td style="text-align: right;">264</td> <td style="text-align: right;">528</td> <td style="text-align: left;">792</td> <td style="text-align: left;"></td> </tr> </tbody> </table> </div> <p>My first thought was to transfer the sections into <code>dataframes</code> and then <code>combine</code>, <code>merge</code> or <code>map</code> them. 
The second to try row-by-row and iter sections as <code>lists</code> with <code>list comprehension</code>, but both did not produce the desired result.</p> <h5>EDIT</h5> <p>Thanks for the comments so far, glad to try and shed some light on this.</p> <ul> <li>In general, the values per section are ascending from left to right.</li> <li>The focus is on the <code>En</code> section, as this is the only section that is to be filled with values from the <code>Pn</code> section.</li> <li><code>Pn</code> only serves as a gap filler - if <code>En</code> has a gap in a line, this should be filled by the corresponding value in <code>Pn</code>, if it is not already present in <code>En</code>.</li> </ul> <p>Unfortunately, the data is real data and I can understand that it is difficult to understand due to the lack of my description, so I apologise.</p> <p>Both sections have only a conditional real reference, but one could imagine <code>Pn</code> as packaging units and <code>En</code> as scale price quantities. In normal cases, the values <code>En</code> and <code>Pn</code> go together, but in some cases this analogous reference is missing and filling up is an attempt to standardise them.</p>
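A minimal overlay sketch (it fills gaps column-for-column, P1→E1, P2→E2, etc., and deliberately ignores the harder left-shift/gap-closing requirement) treats the empty strings as missing and lets `combine_first` do the merge:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'P1': ['', '', 1, ''],   'P2': [4, 2, '', 3],
    'P3': [48, 20, 10, ''],  'P4': ['', 320, 160, 264],
    'E1': ['', '', 1, 3],    'E2': [4, 2, 10, 264],
    'E3': [48, 20, '', 528], 'E4': ['', '', '', 792],
})

p_cols = ['P1', 'P2', 'P3', 'P4']
e_cols = ['E1', 'E2', 'E3', 'E4']

# Treat '' as missing, then overlay Pn onto the gaps in En.
overlay = df[p_cols].replace('', np.nan)
overlay.columns = e_cols  # align P1->E1, P2->E2, ...
filled = df[e_cols].replace('', np.nan).combine_first(overlay)
df[e_cols] = filled.fillna('')
```

Existing `En` values win because `combine_first` only fills where the left side is missing; the gap in row 1's `E4` is filled from `P4` (320), and row 3 is untouched.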
<python><pandas><dataframe><merge>
2023-09-19 07:41:29
2
25,336
HedgeHog
77,132,536
3,833,632
How to show selenium browser in VNC after creating it over SSH with a virtual display
<p>I created a selenium browser over SSH like so:</p> <pre><code>from pyvirtualdisplay import Display display = Display(visible=0, size=(1920, 1920)) display.start() driver = webdriver.Chrome() </code></pre> <p>I then logged into the machine with VNC. My goal is to move the Chrome window created there onto my VNC session. However, that instance of Chrome doesn't even show up in the task bar, even though I can tell it is open and can even interact with it.</p> <p>I connected via</p> <pre><code>webdriver.Remote(command_executor=bla,options=options) </code></pre> <p>And I did try</p> <pre><code>driver.switch_to.window(driver.current_window_handle) </code></pre> <p>But that didn't work. I even tried</p> <pre><code>driver.switch_to.window(webdriver.Chrome().current_window_handle) </code></pre> <p>But this gave me a <code>selenium.common.exceptions.NoSuchWindowException</code></p>
<python><selenium-webdriver><vnc>
2023-09-19 07:38:25
1
715
CalebK
77,132,367
3,336,423
What is the difference between `asyncio.open_connection` and `socket.socket.accept()`?
<p>I'm trying to connect to a server. Our client-side script opens the connection using:</p> <pre><code>reader, writer = await asyncio.open_connection(HOST, PORT) </code></pre> <p>We wanted to integrate this with legacy code that opens a connection using:</p> <pre><code>with socket.socket(socket.AF_INET,socket.SOCK_STREAM) as s: s.bind((HOST,PORT)) s.listen() conn, addr = s.accept() print(&quot;Accepted&quot;) </code></pre> <p>But then <code>s.accept()</code> hangs forever and connection is not established.</p> <p>Is there any reason why <code>asyncio.open_connection(HOST, PORT)</code> would work while <code>socket.socket.accept()</code> fails?</p>
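The two calls sit on opposite ends of the connection: `asyncio.open_connection()` is a *client-side* connect, while the legacy `bind()`/`listen()`/`accept()` sequence is a *server*, and `accept()` blocks until some client actually connects. A minimal loopback sketch (the OS picks a free port) showing `accept()` returning once a client connects:

```python
import socket
import threading

HOST = "127.0.0.1"

# Server side: bind/listen, exactly as in the legacy code.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, 0))          # port 0 -> OS assigns a free port
server.listen()
port = server.getsockname()[1]

def client():
    # Blocking stand-in for asyncio.open_connection(HOST, port)
    with socket.create_connection((HOST, port)) as c:
        c.sendall(b"hello")

t = threading.Thread(target=client)
t.start()

conn, addr = server.accept()    # returns as soon as the client connects
data = b""
while len(data) < 5:            # read the full 5-byte message
    data += conn.recv(5 - len(data))

t.join()
conn.close()
server.close()
print(data)  # b'hello'
```

If both sides run `bind()`/`accept()` (or both run connect), nothing ever completes, which matches the "hangs forever" symptom.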
<python><sockets><python-asyncio>
2023-09-19 07:10:31
1
21,904
jpo38
77,132,221
1,014,217
adding new features to pyspark dataframe takes 8 hours and doesnt finish
<p>The following function adds some features to a dataframe of only about 6k rows. However, after running for 8 hours it still hadn't finished. What am I doing wrong?</p> <pre><code>from pyspark.sql import SparkSession
from pyspark.sql.window import Window
from pyspark.sql import functions as F


def add_rain_features(df, lag_hours=7):
    &quot;&quot;&quot;
    Preprocesses the input PySpark DataFrame for training a machine learning
    model to predict buffer size value.

    Args:
        df (pyspark.sql.DataFrame): Input PySpark DataFrame with the following columns:
            - Location
            - Timestamp
            - Value (sludge in cubic meters)
            - Min of the sludge buffer
            - Max of the sludge buffer
            - Rain 1h (the amount of rain in mm at that timestamp)
            - Value_diff (the amount the sludge has increased or decreased
              since the previous hour in cubic meters)
        lag_hours (int): Number of lag hours for rain features.

    Returns:
        pyspark.sql.DataFrame: Processed DataFrame with lag features and
        additional suggested features.
    &quot;&quot;&quot;
    # Initialize a Spark session
    spark = SparkSession.builder.appName(&quot;PrepareData&quot;).getOrCreate()

    # Sort the DataFrame by Timestamp
    window_spec = Window.partitionBy(&quot;kp&quot;).orderBy(&quot;timestamp_hour&quot;)
    df = df.withColumn(&quot;row_num&quot;, F.row_number().over(window_spec))
    df = df.orderBy(&quot;row_num&quot;).drop(&quot;row_num&quot;)

    # Generate lag features for rain
    for lag in range(1, lag_hours + 1):
        lag_col_name = f'Rain_{lag}h_ago'
        df = df.withColumn(lag_col_name, F.lag(df[&quot;rain_1h&quot;], lag).over(window_spec))

    # Suggested additional features
    # 1. Rolling statistics for rain
    df = df.withColumn(&quot;Rain_mean_2h&quot;, F.avg(df[&quot;rain_1h&quot;]).over(Window.partitionBy(&quot;kp&quot;).orderBy(&quot;timestamp_hour&quot;).rowsBetween(-1, 0)))
    df = df.withColumn(&quot;Rain_mean_3h&quot;, F.avg(df[&quot;rain_1h&quot;]).over(Window.partitionBy(&quot;kp&quot;).orderBy(&quot;timestamp_hour&quot;).rowsBetween(-2, 0)))
    df = df.withColumn(&quot;Rain_mean_4h&quot;, F.avg(df[&quot;rain_1h&quot;]).over(Window.partitionBy(&quot;kp&quot;).orderBy(&quot;timestamp_hour&quot;).rowsBetween(-3, 0)))
    df = df.withColumn(&quot;Rain_mean_5h&quot;, F.avg(df[&quot;rain_1h&quot;]).over(Window.partitionBy(&quot;kp&quot;).orderBy(&quot;timestamp_hour&quot;).rowsBetween(-4, 0)))
    df = df.withColumn(&quot;Rain_mean_6h&quot;, F.avg(df[&quot;rain_1h&quot;]).over(Window.partitionBy(&quot;kp&quot;).orderBy(&quot;timestamp_hour&quot;).rowsBetween(-5, 0)))

    # Stop the Spark session when done
    spark.stop()
    return df
</code></pre>
<python><pandas><pyspark>
2023-09-19 06:45:10
0
34,314
Luis Valencia
77,131,972
487,554
Unable to run a simple Airflow DAG with BashOperator calling Python scripts
<p>I installed Airflow, both through Apache and Astronomer, and wrote a really simple DAG with two tasks, each of which are BashOperators that call a Python script. The first Python script, in turn, reads a file and writes out a file, while the second one reads the file created by the previous Python script, reads another hardcoded file, and puts them both in a local DB. Nothing too crazy. But with both versions of Airflow (Apache and Astronomer), I'm doing something wrong that just does not work. The DAG loads, looks okay in Airflow and runs &quot;successfully&quot;, but does nothing at all. Any advice would be much appreciated.</p> <p>Here's my code and what I tried:</p> <p>The DAG:</p> <pre><code>import json

from pendulum import datetime

from airflow.operators.bash import BashOperator
from airflow.models.baseoperator import chain
from airflow.decorators import (
    dag,
    task,
)

PYTHON_SCRIPTS = &quot;python_scripts&quot;


@dag(
    schedule=&quot;@daily&quot;,
    start_date=datetime(2023, 1, 1),
    catchup=False,
    default_args={
        &quot;retries&quot;: 2,
    },
)
def parse_json_data():
    @task(
        templates_exts=[&quot;.py&quot;],
    )
    def parse_json():
        BashOperator(
            task_id=&quot;parse_json_task&quot;,
            bash_command=f&quot;python {PYTHON_SCRIPTS}/json_file_parser.py&quot;,
        )

    @task(
        templates_exts=[&quot;.py&quot;],
    )
    def load_files():
        BashOperator(
            task_id=&quot;load_files_task&quot;,
            bash_command=f&quot;python {PYTHON_SCRIPTS}/load_data.py&quot;,
        )

    parse_json_task = parse_json()
    load_files_task = load_files()

    chain(parse_json_task, load_files_task)


parse_json_data()
</code></pre> <p>The two Python scripts:</p> <p>json_file_parser.py</p> <pre><code>import gzip
import json

input_json_file = &quot;../data_files/json_file.jsonl.gz&quot;
parsed_json_file = &quot;../data_files/parsed_json_file.json&quot;


# Parse function to extract necessary data from nhtsa file 1
def parse_json_file():
    # Create an empty list to store the extracted data
    extracted_data = []

    # Open the zipped file and loop through each line
    with gzip.open(input_json_file, &quot;rt&quot;, encoding=&quot;utf-8&quot;) as jfile:
        # working code here

    # Write the extracted data to a new JSON file
    with open(parsed_json_file, &quot;w&quot;, encoding=&quot;utf-8&quot;) as output_file:
        json.dump(extracted_data, output_file, indent=4)


parse_json_file()
</code></pre> <p>load_data.py</p> <pre><code>import pandas as pd
from sqlalchemy import create_engine


def load_processed_json_file(engine):
    try:
        nhtsa_df = pd.read_json(&quot;../data_files/parsed_json_file.json&quot;)
        nhtsa_df.to_sql(
            name=&quot;processed_json_data&quot;, con=engine, if_exists=&quot;replace&quot;, index=False
        )
    except FileNotFoundError:
        print(&quot;Parsed file not found.&quot;)


def load_lookup_file(engine):
    try:
        nhtsa_lookup_df = pd.read_csv(&quot;../data_files/data_lookup_file.csv&quot;)
        nhtsa_lookup_df.to_sql(
            name=&quot;lookup_data&quot;, con=engine, if_exists=&quot;replace&quot;, index=False
        )
    except FileNotFoundError:
        print(&quot;Lookup file not found.&quot;)


def main():
    db_url = &quot;...&quot;
    engine = create_engine(db_url)
    load_processed_nhtsa_file(engine)
    load_nhtsa_lookup_file(engine)


if __name__ == &quot;__main__&quot;:
    main()
</code></pre> <p>I tried a myriad of file locations and ways to get this to run, but it didn't work. When using Astronomer, I put the Python scripts in a folder called python_scripts under the include folder and then used a <code>template_searchpath</code>, after having tried to go the hardcoded route. However, neither approach worked. I put the data files in a folder of its own, also in the include folder, but it didn't work.</p> <p>For Apache Airflow, I took a look at the example dags and it seemed to indicate that I could just create the Python script and data folders inside the example_dags folder and it would work, but it didn't.</p> <p>To add to this mess, the DAG &quot;runs successfully&quot;, but nothing actually happens.
I've used Airflow several times in the past, but this is somehow beyond me. Any help would be great. Thank you!</p> <p><strong>EDIT:</strong> I followed the suggestion below, and it worked. Kinda. It now gives me an error:</p> <pre><code>can't open file '/private/var/folders/rh/wzc6ln9d67s3n259nvkxq57w0000gn/T/airflowtmpa1sasfg8/python_scripts/json_file_parser.py': [Errno 2] No such file or directory </code></pre> <p>It seems like it's trying to look for the script, but can't find it. I tried setting the template_searchpath, giving it an absolute path etc., but nothing seems to work with this.</p>
<python><airflow><directed-acyclic-graphs><astronomer>
2023-09-19 05:48:15
3
2,837
CodingInCircles
77,131,968
839,733
Sorted set searching and ordering
<p>Python Sorted Containers package contains <a href="https://grantjenks.com/docs/sortedcontainers/sortedset.html" rel="nofollow noreferrer">SortedSet</a>, that can be initialized as follows:</p> <pre><code>__init__(iterable=None, key=None) </code></pre> <blockquote> <p>key: Function used to extract comparison key from values. Sorted set compares values directly when the key function is none.</p> </blockquote> <p>I want to store some strings that should be sorted by timestamp. If I store tuples like <code>(&quot;string&quot;, 1234)</code>, where <code>1234</code> is some monotonically increasing number like the difference between now and epoch, how can I search for the <code>string</code> in the set, and also keep it sorted using the 2nd element of the tuple? In other words, the timestamp shouldn't be used for searching, and the string shouldn't be used for ordering.</p>
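A common way to get both behaviors is to maintain the ordering structure and the membership index separately: a container sorted by timestamp answers ordering queries, while a plain dict answers "is this string present?" in O(1). With sortedcontainers this would be a `SortedKeyList(key=lambda item: item[1])` next to the dict; the stdlib-only sketch below (the `TimestampedSet` class is hypothetical) shows the same idea with `bisect`:

```python
import bisect

class TimestampedSet:
    """Strings ordered by timestamp; membership is checked by string only."""

    def __init__(self):
        self._by_time = []   # list of (timestamp, string), kept sorted
        self._times = {}     # string -> timestamp, for O(1) membership

    def add(self, value, timestamp):
        if value in self._times:
            return  # already present; keep the original timestamp
        self._times[value] = timestamp
        bisect.insort(self._by_time, (timestamp, value))

    def __contains__(self, value):
        # Search uses the string alone, never the timestamp.
        return value in self._times

    def __iter__(self):
        # Iteration yields strings in timestamp order.
        return (v for _, v in self._by_time)

s = TimestampedSet()
s.add("beta", 200)
s.add("alpha", 100)
print("alpha" in s)  # membership by string alone
print(list(s))       # ordered by timestamp
```

The same split works with `SortedKeyList` in place of the bisect-maintained list; the key function only controls ordering, so a separate index is still needed for fast lookup by string.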
<python><sorting><hash><set><sortedcontainers>
2023-09-19 05:47:12
2
25,239
Abhijit Sarkar
77,131,897
1,181,065
SimPy Concurrency
<p>I have a network simulation with SimPy. A packet is sent from the generator class (includes yield for 1.5s) to the Slot class (includes another yield for 1.5s) then to the wire class and then to the destination. I need to detect collisions coming from different Slots to the wire class when they share the same timestamp. However, SimPy runs all events sequentially, so if packet 1 from source 1 has a timestamp of, say, 3, the same as packet 1 from source 2, I cannot detect the collision in wire since it passes p 1 s 1 first and then accepts p 1 s 2. I tried yielding for 1.5 in wire, which solves the issue but causes propagation of delay in all packets. Is there any other way to wait in wire for the second packet to arrive?</p> <p>Class Slot:</p> <pre><code>yield self.env.timeout(slot_duration)
.
.
.
self.out.put(packet)  # Send packet to the out attribute (wire)
</code></pre> <p>Class Wire:</p> <pre><code>while True:
    print(&quot;before packet yield and wire yield: &quot;, self.env.now)
    if first_event:
        yield self.env.timeout(1.5 + self.epsilon)  # Wait for epsilon time for the first event
        print(&quot;End of first yield in wire for {} until {}&quot;.format(1.5 + self.epsilon, self.env.now))
        first_event = False  # Set the flag to False after the first event
    else:
        yield self.env.timeout(1.5)  # Wait for 1.5 time units for subsequent events
        print(&quot;End of yield in wire until {}&quot;.format(self.env.now))

    num_items_in_store = len(self.store.items)
    print(&quot;Number of items in store:&quot;, num_items_in_store)

    # might be a problem here:
    # print(&quot;item 0 is: &quot;, self.store.items[0])
    packet = self.store.items[0]
    packet_store_object = self.store.get()
    # yield self.store.get()

    colliding_packets = []  # reset
    print(&quot;poped packet from store:&quot;, packet)

    # current_time = self.env.now
    # if len(self.store.items) == 0:
    #     continue  # No packets arrived during epsilon time, so nothing to do

    if len(self.store.items) &gt;= 1:
        print(&quot;now current_time checking store for collision: &quot;, self.env.now)
        # Check for collisions using time packet entered wire
        colliding_packets = [p for p in self.store.items if p.current_time == packet.current_time]
</code></pre>
<python><simpy>
2023-09-19 05:23:24
1
539
Hanna
77,131,869
14,842,800
What does Depends() do in FastAPI?
<p>I'm trying to understand dependency injection in FastAPI. Why is Depends() needed in the following example from the docs? What does Depends() do?</p> <pre><code>from typing import Annotated

from fastapi import Depends, FastAPI

app = FastAPI()


async def common_parameters(q: str | None = None, skip: int = 0, limit: int = 100):
    return {&quot;q&quot;: q, &quot;skip&quot;: skip, &quot;limit&quot;: limit}


@app.get(&quot;/items/&quot;)
async def read_items(commons: Annotated[dict, Depends(common_parameters)]):
    return commons
</code></pre> <p>Why does it not work to simply call the common_parameters function like so?</p> <pre><code>@app.get(&quot;/items/&quot;)
async def read_items(commons: Annotated[dict, common_parameters()]):
    return commons
</code></pre>
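In short, `Depends(common_parameters)` is a marker: FastAPI inspects the endpoint's signature, finds the marker inside `Annotated`, calls the dependency itself (resolving its own parameters, sub-dependencies, and caching), and injects the result. `Annotated[dict, common_parameters()]` doesn't work because `Annotated` metadata is inert (the framework only reacts to `Depends` instances), and `common_parameters()` would also run once at import time rather than per request. A toy resolver, emphatically not FastAPI's real implementation, can illustrate the mechanism:

```python
from typing import Annotated, get_args, get_origin, get_type_hints

class Depends:
    """Marker meaning: 'call this function and inject its result'."""
    def __init__(self, dependency):
        self.dependency = dependency

def resolve(func, request_params):
    """Tiny stand-in for what a framework does with the signature."""
    hints = get_type_hints(func, include_extras=True)
    kwargs = {}
    for name, hint in hints.items():
        if name == "return":
            continue
        if get_origin(hint) is Annotated:
            for meta in get_args(hint)[1:]:
                if isinstance(meta, Depends):
                    # The framework, not the caller, invokes the dependency.
                    kwargs[name] = meta.dependency(**request_params)
    return func(**kwargs)

def common_parameters(q=None, skip=0, limit=100):
    return {"q": q, "skip": skip, "limit": limit}

def read_items(commons: Annotated[dict, Depends(common_parameters)]):
    return commons

result = resolve(read_items, {"q": "shoes", "skip": 5})
print(result)
```

The marker is what lets the framework distinguish "inject the result of calling this" from ordinary annotation metadata, which it otherwise ignores.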
<python><dependency-injection><fastapi>
2023-09-19 05:16:07
1
321
Sam Archer
77,131,617
21,305,238
Using attrs decorators in PyCharm
<p>I have the following <em>attrs</em> class:</p> <pre class="lang-py prettyprint-override"><code>from attrs import define, field


@define
class Foo:
    bar: int = field()

    @bar.validator
    def check(self, attribute, value):
        ...
</code></pre> <p>It works normally at runtime, as expected:</p> <pre><code>lorem = Foo(42)  # Fine
ipsum = Foo('')  # Expected type 'int', got 'str' instead
</code></pre> <p>However, PyCharm is giving me a warning about <code>@bar.validator</code> (this is <a href="https://youtrack.jetbrains.com/issue/PY-30209" rel="nofollow noreferrer">a known bug</a>):</p> <pre class="lang-none prettyprint-override"><code>Unresolved attribute reference 'validator' for class 'int'
</code></pre> <p>I don't want to define module-level functions, nor can I stuff everything into <code>lambda</code>s. What other (Pythonic) choices do I have?</p>
<python><pycharm><python-typing><python-attrs>
2023-09-19 03:50:47
0
12,143
InSync
77,131,295
270,043
Data type error while reading Parquet files in pyspark
<p>I have converted a bunch of CSV files into parquet files, and while trying to load 5 rows of the parquet files into another Pyspark dataframe, I got an error.</p> <p>How I converted my CSV files into Parquet files:</p> <pre><code>df = spark.read.options(delimiter='|', inferSchema=True).csv(csv_folder, header=True)
df.write.mode(&quot;overwrite&quot;).parquet(&quot;parquet_folder/&quot;)
</code></pre> <p>How I read the Parquet files:</p> <pre><code>df = spark.read.parquet(&quot;parquet_folder/*.parquet&quot;)
</code></pre> <p>One of the fields, <code>num_A</code>, is read as an <code>integer</code>, based on the <code>printSchema</code> output.</p> <p>I'm able to run <code>df.show(5)</code> successfully, but when I tried to run <code>test = df.limit(5)</code>, I got the error message</p> <pre><code>Parquet column cannot be converted in file ...
Column: [num_A], Expected: int, Found: INT64
</code></pre> <p>If I specify a schema prior to reading the parquet file, i.e.</p> <pre><code>schema = StructType([StructField(&quot;num_A&quot;, LongType())])
df = spark.read.schema(schema).parquet(&quot;parquet_folder/*.parquet&quot;)
</code></pre> <p>I get the error when running <code>df.show(5)</code>:</p> <pre><code>Parquet column cannot be converted in file ...(another parquet file)
Column: [num_A], Expected: bigint, Found: INT32
</code></pre> <p>It seems like my <code>num_A</code> has both <code>integer</code> and <code>long</code> types, but the schema can only be set to one of them? How can I resolve this without having to rewrite my parquet files with an explicit schema?</p>
<python><pyspark><types><parquet>
2023-09-19 02:04:11
0
15,187
Rayne
77,131,261
11,141,816
High dimension convex hull library in python
<p>I tried to compute some high dimension convex hull in python. The computation had to be precise and &quot;QJ&quot; in Qhull was not quite an acceptable option.</p> <p>I found <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html" rel="nofollow noreferrer"><code>scipy.spatial.ConvexHull</code></a> to be able to compute the convex hull at high dimension, but encountered the precision issue.</p> <pre><code>hull=ConvexHull( points ) QhullError: QH6347 qhull precision error (qh_mergefacet): wide merge for facet f40487 into f40486 for mergetype 3 (concave). maxdist 0 (0.0x) mindist -0.0036 (501.9x) vertexdist 8e+02 Allow with 'Q12' (allow-wide) ERRONEOUS FACET: - f40487 - flags: bottom newfacet tested newmerge - merges: 6 - normal: 0.004859 -1 -5.183e-07 4.138e-07 -2.304e-07 7.062e-05 - offset: -1.015026 - center: 1201.673206636632 10989.79522797083 4820.321704992604 4545.351456156274 8367776.853656252 155567882.6656116 - vertices: p6(v120) p2773(v108) p2782(v101) p5(v59) p2776(v57) p2772(v56) p2785(v34) p2774(v24) p2779(v14) p23(v6) p2770(v4) p2789(v3) - neighboring facets: f27430 f40485 f40486 f40545 f40484 f40842 f40929 f40542 f40920 f40638 f40541 f41141 f40591 f40632 f40766 f40629 f40846 f40760 f40694 f40936 f40753 f40850 - ridges: - r1937 tested simplicialtop simplicialbot vertices: p5(v59) p2785(v34) p23(v6) p2770(v4) p2789(v3) between f27430 and f40487 - r1919 tested vertices: p2782(v101) p5(v59) p2785(v34) p23(v6) p2770(v4) between f40487 and f27430 - r1901 tested vertices: p2782(v101) p5(v59) p2779(v14) p23(v6) p2770(v4) between f27430 and f40487 - r1548 tested vertices: p2773(v108) p5(v59) p2772(v56) p2774(v24) p23(v6) between f40487 and f27430 - r1883 tested vertices: p5(v59) p2776(v57) p2779(v14) p23(v6) p2770(v4) between f27430 and f40487 - r1855 tested vertices: p5(v59) p2776(v57) p2774(v24) p23(v6) p2770(v4) between f40487 and f27430 - r1544 tested vertices: p5(v59) p2772(v56) p2774(v24) p23(v6) p2770(v4) between 
f27430 and f40487 - r1938 tested simplicialtop simplicialbot vertices: p6(v120) p2785(v34) p23(v6) p2770(v4) p2789(v3) between f40487 and f40485 - r1939 tested simplicialtop simplicialbot vertices: p6(v120) p5(v59) p23(v6) p2770(v4) p2789(v3) between f40486 and f40487 - r1940 tested simplicialtop simplicialbot vertices: p6(v120) p5(v59) p2785(v34) p2770(v4) p2789(v3) between f40487 and f40545 - r1941 tested simplicialtop simplicialbot vertices: p6(v120) p5(v59) p2785(v34) p23(v6) p2789(v3) between f40484 and f40487 - r1862 tested simplicialtop vertices: p6(v120) p2773(v108) p5(v59) p2774(v24) p23(v6) between f40842 and f40487 - r1879 tested simplicialtop vertices: p6(v120) p2776(v57) p2774(v24) p23(v6) p2770(v4) between f40929 and f40487 - r1921 tested simplicialbot vertices: p6(v120) p2782(v101) p2785(v34) p23(v6) p2770(v4) between f40487 and f40542 - r1882 tested simplicialtop vertices: p6(v120) p5(v59) p2776(v57) p2774(v24) p23(v6) between f40920 and f40487 - r1922 tested simplicialbot vertices: p6(v120) p2782(v101) p5(v59) p2785(v34) p2770(v4) between f40487 and f40638 - r1923 tested simplicialtop vertices: p6(v120) p2782(v101) p5(v59) p2785(v34) p23(v6) between f40541 and f40487 - r1881 tested simplicialbot vertices: p6(v120) p5(v59) p2776(v57) p2774(v24) p2770(v4) between f40487 and f41141 - r1866 tested simplicialbot vertices: p6(v120) p5(v59) p2772(v56) p23(v6) p2770(v4) between f40487 and f40591 - r1902 tested simplicialtop vertices: p6(v120) p2782(v101) p2779(v14) p23(v6) p2770(v4) between f40632 and f40487 - r1904 tested simplicialtop vertices: p6(v120) p2782(v101) p5(v59) p2779(v14) p2770(v4) between f40766 and f40487 - r1905 tested simplicialbot vertices: p6(v120) p2782(v101) p5(v59) p2779(v14) p23(v6) between f40487 and f40629 - r1861 tested vertices: p6(v120) p2773(v108) p2772(v56) p2774(v24) p23(v6) between f40487 and f40846 - r1864 tested simplicialbot vertices: p6(v120) p2772(v56) p2774(v24) p23(v6) p2770(v4) between f40487 and f40846 - r1884 
tested simplicialbot vertices: p6(v120) p2776(v57) p2779(v14) p23(v6) p2770(v4) between f40487 and f40760 - r1863 tested simplicialbot vertices: p6(v120) p2773(v108) p5(v59) p2772(v56) p23(v6) between f40487 and f40694 - r1886 tested simplicialtop vertices: p6(v120) p5(v59) p2776(v57) p2779(v14) p2770(v4) between f40936 and f40487 - r1887 tested simplicialbot vertices: p6(v120) p5(v59) p2776(v57) p2779(v14) p23(v6) between f40487 and f40753 - r1851 tested nonconvex simplicialtop vertices: p6(v120) p5(v59) p2772(v56) p2774(v24) p2770(v4) between f40850 and f40487 - r1854 tested simplicialtop vertices: p6(v120) p2773(v108) p5(v59) p2772(v56) p2774(v24) between f40850 and f40487 ERRONEOUS OTHER FACET: - f40486 - flags: top simplicial newfacet tested - normal: 0.004859 -1 -1.489e-06 1.361e-06 -2.305e-07 7.062e-05 - offset: -1.018131 - center: 2414.957730667852 4889.8993845348 2714.755506575487 7583.523244594927 16887988.65529773 69141294.75300574 - vertices: p6(v120) p5(v59) p32(v20) p23(v6) p2770(v4) p2789(v3) - neighboring facets: f27381 f40485 f40487 f40482 f40431 f40617 - ridges: - r1939 tested simplicialtop simplicialbot vertices: p6(v120) p5(v59) p23(v6) p2770(v4) p2789(v3) between f40486 and f40487 While executing: | qhull i Qx Qt Options selected for Qhull 2019.1.r 2019/06/21: run-id 814845471 incidence Qxact-merge Qtriangulate _zero-centrum _max-width 2.1e+08 Error-roundoff 5.1e-07 _one-merge 6.7e-06 _near-inside 3.3e-05 Visible-distance 3.1e-06 U-max-coplanar 3.1e-06 Width-outside 6.2e-06 _wide-facet 1.9e-05 _maxoutside 7.2e-06 Last point added to hull was p6. Last merge was #352. 
At error exit: Convex hull of 2790 points in 6-d: Number of vertices: 120 Number of facets: 16079 Number of non-simplicial facets: 114 Statistics for: | qhull i Qx Qt Number of points processed: 120 Number of hyperplanes created: 41372 Number of distance tests for qhull: 314432 Number of distance tests for merging: 338622 Number of distance tests for checking: 0 Number of merged facets: 363 Maximum distance of point above facet: 3.5e-06 (0.5x) Maximum distance of vertex below facet: -0.00041 (57.5x) </code></pre> <p>And it mentioned that</p> <pre><code>precision problems (corrected unless 'Q0' or an error) 215 coplanar horizon facets for new vertices 2 nearly singular or axis-parallel hyperplanes A wide merge error has occurred. Qhull has produced a wide facet due to facet merges and vertex merges. This usually occurs when the input is nearly degenerate and substantial merging has occurred. See http://www.qhull.org/html/qh-impre.htm#limit </code></pre> <p>I used mpmath library and increased the precision from 53 to 106, but the error persisted. That the scipy's ConvexHull algorithm did not seem to be able to utilize the mpmath's utility.</p> <p>Is there any other convex hull library that can be used to implement the precision computation of the convex hull vertices in python?</p>
<python><scipy><qhull>
2023-09-19 01:53:25
0
593
ShoutOutAndCalculate
77,131,079
17,588,005
Why openCV's implementation of Blob Detection is different from skimage?
<p>For a long time, I thought that OpenCV's implementation of <code>SimpleBlobDetector</code> used the approach described on <a href="https://en.wikipedia.org/wiki/Blob_detection" rel="nofollow noreferrer">Wikipedia</a>, in which a stack of images with different blurring parameters is created, and a search for maximal points yields the location and size of the blobs.</p> <p>But I just read the implementation, and it is completely different. It finds image contours in binary images at different thresholds, filters some out, and returns the rest as blobs! (kind of)</p> <p>First, why is that? In most literature, blob detection means something else. Does OpenCV's code execute some faster approximation of the algorithm, or is it completely different and just happens to have the same name?</p> <p>Second, and my main question: is there a function similar to <a href="https://scikit-image.org/docs/stable/auto_examples/features_detection/plot_blob.html" rel="nofollow noreferrer">skimage's Blob Detection</a> in OpenCV?</p>
<python><opencv><scikit-image>
2023-09-19 00:45:20
0
328
Odeaxcsh
77,131,060
493,553
Pylance (VS Code type checking) mistakes the count() infinite iterator as finite
<p>I'm relatively new to Pylance (VS Code's static type checker, based on Pyright) and just stumbled upon this type checking error with the <code>count()</code> infinite iterator. I have some code I've been playing with, which I reduced to the following basic version which reproduces the problem:</p> <pre class="lang-py prettyprint-override"><code>from itertools import count

def foo() -&gt; int:
    for i in count():
        if i == 42:
            return i

print(foo())
</code></pre> <p>I'm using this loop over <code>count()</code> as essentially a variation of <code>while True</code>, to manage an index for me while I'm waiting for a condition to be satisfied. Given it's an infinite loop, the only code path out of this function is via that condition inside the loop which returns an int, but Pylance/Pyright is giving me the following type error:</p> <blockquote> <p>Function with declared return type &quot;int&quot; must return value on all code paths<br /> Type &quot;None&quot; cannot be assigned to type &quot;int&quot; Pylance(reportGeneralTypeIssues)</p> </blockquote> <p>I saw that I can silence the issue by either of the following:</p> <ul> <li>Suppressing the warning with <code># type: ignore</code>, which could swallow unrelated issues.</li> <li>Making the return type <code>Optional[int]</code>, which is strictly wrong.</li> <li>Adding a <code>return -1</code> (or whatever fake value) at the end, which works, but adds a confusing no-op extra line.</li> <li>Or just avoiding the issue and just rewriting this as an explicit infinite loop, which Pylance handles correctly:</li> </ul> <pre class="lang-py prettyprint-override"><code>def foo() -&gt; int:
    i = 0
    while True:
        if i == 42:
            return i
        i += 1
</code></pre> <p>But I'm wondering if there's a cleaner solution here to somehow correctly mark this loop as infinite.</p> <p>I would also appreciate insight as to the cause - is it just a Pylance/Pyright bug (missing feature), or is there a deeper technical limitation with marking
<code>count()</code> as an infinite iterator?</p>
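One idiom type checkers do understand is ending the function with an explicit `raise`: the statement closes the "fell off the loop" code path without inventing a fake sentinel return value, and at runtime it documents the invariant that the loop never terminates. A sketch of this workaround (not the only one, but arguably cleaner than `return -1`):

```python
from itertools import count

def foo() -> int:
    for i in count():
        if i == 42:
            return i
    # Unreachable in practice: count() never stops. The raise closes the
    # implicit "loop exhausted" code path for the type checker, so the
    # declared `int` return type verifies without a fake return value.
    raise AssertionError("unreachable: count() is infinite")

print(foo())
```

The underlying limitation is that the checker would need to know `count()` yields forever to prove the `for` body always returns; nothing in `count()`'s declared type (`Iterator[int]`) carries that information, so from the checker's view the loop can complete and fall through.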
<python><python-typing><pyright>
2023-09-19 00:38:17
1
2,772
yoniLavi
77,130,821
5,278,205
cannot install python packages in rstudio project environment
<p>I primarily use RStudio for running Python scripts. I'm interested in trying out RStudio's GitHub integration. Unfortunately, I cannot get Python libraries installed. <code>!pip install pandas</code>, for example, returns the error <code>sh: pip: command not found</code>. When I try installing from the terminal, I get that same error. I guess pip is not installed in this environment (I tried installing pip with <code>python get-pip.py</code>, but got <code>sh: python: command not found</code>, which seems to indicate Python is not installed; that is just wrong, because I can access it).</p> <p>My Google searches have not uncovered enough for me to go on. At best, I think this is related to the issue of installing Python libraries in virtual environments, but I'm not sure what to do with that information, so any direction/help is appreciated.</p>
<python><r><github><rstudio>
2023-09-18 23:09:29
0
5,213
Cyrus Mohammadian
77,130,662
2,350,986
When using Flask-Login and Peewee as the ORM, is there a need to create the login attributes on the database?
<p>The <a href="https://flask-login.readthedocs.io/en/latest/#your-user-class" rel="nofollow noreferrer">Flask-Login documentation states</a> that you should create some attributes for your User model to handle some aspects of the authentication and session handling process, which is fair.</p> <p>However, when adding these attributes to the User model when using Peewee as the ORM for the project, that means those fields will be created on the database as well whenever the database boostrap script runs, because those three are now attributes of said User model class:</p> <pre class="lang-py prettyprint-override"><code>class User(Model):
    username = CharField()
    password = CharField()

    # required by Flask-Login
    is_authenticated = BooleanField(default=False)
    is_active = BooleanField(default=True)
    is_anonymous = BooleanField(default=False)

    # required by Flask-Login
    def get_id(self):
        return super().get_id()

    class Meta:
        database = SqliteDatabase(DEFAULT_DB_NAME)
</code></pre> <p>So my questions are:</p> <ol> <li>Does it mean that those are required at the database level, or is it ok not to have them there (that is: just in memory)?</li> <li>If they are not required at the db level, then how can I exclude those three attributes from the table creation statements when creating the database for the first time, as Peewee creates the tables according to the class attributes?</li> <li>For Flask-Login, would the creation of a class that is not included in the Peewee's models list work (if #2 is true)?</li> </ol>
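Worth noting: Flask-Login only needs those attributes on the Python object, not in the database, which is why it ships `flask_login.UserMixin` providing `is_authenticated`/`is_active`/`is_anonymous` as plain properties. Peewee builds table columns only from `Field` instances on the class, so properties contributed by a mixin never become columns. A stdlib-only sketch of that separation (`MiniField`, `columns_of`, and `LoginMixin` are hypothetical stand-ins for Peewee's fields, its table-creation introspection, and `UserMixin`; with real Peewee you would write `class User(UserMixin, Model)`):

```python
class MiniField:
    """Stand-in for a Peewee field: only these become table columns."""
    def __init__(self, default=None):
        self.default = default

def columns_of(model_cls):
    # Peewee-style introspection: collect class attributes that are fields.
    return [n for n, v in vars(model_cls).items() if isinstance(v, MiniField)]

class LoginMixin:
    """Stand-in for flask_login.UserMixin: plain properties, no columns."""
    @property
    def is_authenticated(self):
        return True

    @property
    def is_active(self):
        return True

    @property
    def is_anonymous(self):
        return False

    def get_id(self):
        return str(id(self))

class User(LoginMixin):
    username = MiniField()
    password = MiniField()

u = User()
print(columns_of(User))    # only the real fields
print(u.is_authenticated)  # provided by the mixin, not stored
```

Because the login attributes live on the mixin rather than as field instances on `User`, table creation never sees them, answering question 2 without excluding anything explicitly.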
<python><flask><peewee><flask-login>
2023-09-18 22:12:04
1
504
Renato Oliveira
77,130,576
1,512,250
AttributeError: 'Depends' object has no attribute 'query'
<p>I'm new to this.</p> <p>Here is my generator function to retrieve database object:</p> <pre><code>def get_db_session():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
</code></pre> <p>Then I have a method to get a user from my database:</p> <pre><code>def get_user_from_db(email: str, db: Session = Depends(get_db_session)):
    user = db.query(User).filter(User.email == email).first()
    return user
</code></pre> <p>Then I have my route, where I use dependency:</p> <pre><code>@app.post(&quot;/initialize_user&quot;)
def initialize_user(email: str, db: Session = Depends(get_db_session)):
    # Check if the user already exists in the database
    existing_user = get_user_from_db(email, db)
    if existing_user:
        # User already exists, return their information
        return {
            &quot;email&quot;: existing_user.email,
            &quot;credit_balance&quot;: existing_user.credit_balance,
            &quot;subscription_type&quot;: existing_user.subscription_type
        }
    else:
        # Initialize a new user with default values
        new_user = User(email=email, credit_balance=20, subscription_type=&quot;trial&quot;)
        db.add(new_user)
        db.commit()
        db.refresh(new_user)
        return {
            &quot;email&quot;: new_user.email,
            &quot;credit_balance&quot;: new_user.credit_balance,
            &quot;subscription_type&quot;: new_user.subscription_type
        }
</code></pre> <p>And I received an error:</p> <pre><code>user = db.query(User).filter(User.email == email).first()
AttributeError: 'Depends' object has no attribute 'query'
</code></pre> <p>According to <a href="https://stackoverflow.com/questions/68981634/attributeerror-depends-object-has-no-attribute-query-fastapi">this post</a>, I can't use dependencies in my own functions, so I tried to use them in routes, like this:</p> <pre><code>@app.post(&quot;/initialize_user&quot;)
def initialize_user(email: str, db: Session = Depends(get_db_session)):
    # Check if the user already exists in the database
    existing_user = db.query(User).filter(User.email == email).first()
    ...
}
</code></pre> <p>And I received the same error.</p> <p>What am I doing wrong?</p>
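The mechanism behind the error: outside of a FastAPI-handled request, Python treats `Depends(get_db_session)` like any other default argument value. Calling such a function without passing `db` hands it the `Depends` marker object itself, which has no `.query` method; FastAPI substitutes the real session only for parameters of the endpoint function it invokes. A plain-Python sketch of how the default leaks through (`DependsMarker` and `FakeSession` are hypothetical stand-ins for `fastapi.Depends` and a SQLAlchemy session):

```python
class DependsMarker:
    """Stand-in for fastapi.Depends: an inert marker object."""
    def __init__(self, dependency):
        self.dependency = dependency

class FakeSession:
    def query(self, model):
        return f"querying {model}"

def get_db_session():
    return FakeSession()

def get_user(email, db=DependsMarker(get_db_session)):
    return db.query("User")

# Called by hand without db: the marker, not a session, is the argument.
err = None
try:
    get_user("a@b.c")
except AttributeError as exc:
    err = exc
print(err)

# Passing the session explicitly, as the route does, works fine.
result = get_user("a@b.c", db=get_db_session())
print(result)
```

Since the route shown does pass `db` explicitly, a stale server process or another call site invoking `get_user_from_db(email)` without the session would be worth ruling out.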
<python><dependency-injection><sqlalchemy><fastapi>
2023-09-18 21:50:12
2
3,149
Rikki Tikki Tavi
77,130,313
12,821,675
Two Layer Nested Serializers - Django Rest Framework
<p>Say I have models with grandparent-parent-child relations like so:</p> <p><strong>models.py</strong></p> <pre class="lang-py prettyprint-override"><code>class Unit(...):
    number = models.CharField(...)


class Listing(...):
    unit = models.ForeignKey('Unit', related_name=&quot;listings&quot;, ...)
    number_of_bedrooms = models.IntegerField(...)


class Price(...):
    listing = models.ForeignKey('Listing', related_name=&quot;prices&quot;, ...)
    amount = models.DecimalField(...)
</code></pre> <p>Using DRF - how can I grab all of the <code>price</code> instances for a listing in the <code>ListingSerializer</code>? All <code>prices</code> are determined by the grandparent <code>Unit</code>.</p> <p>I am trying:</p> <p><strong>serializers.py</strong></p> <pre class="lang-py prettyprint-override"><code>class PriceSerializer(serializers.ModelSerializer):
    class Meta:
        model = Price
        fields = [&quot;amount&quot;]


class ListingSerializer(serializers.ModelSerializer):
    prices = PriceSerializer(
        source=&quot;unit__listings__prices&quot;,  # &lt;-- this does not seem to work
        many=True,
        read_only=True,
    )
    ...
</code></pre> <p>Essentially:</p> <ul> <li>a unit can have multiple listings</li> <li>a listing can have multiple prices</li> <li>a unit will have many prices as its &quot;grandchildren&quot;</li> <li>how can I get all of the price &quot;grandchildren&quot; on the <code>ListingSerializer</code></li> </ul>
<python><django><django-rest-framework>
2023-09-18 20:51:09
1
3,537
Daniel
77,130,229
616,728
psycopg3 inserting dict into JSONB field
<p>I have a table with a <code>JSONB</code> field and would like to insert into it using a named dict like so:</p> <pre><code>sql = &quot;INSERT INTO tbl (id, json_fld) VALUES (%(id)s, %(json_fld)s)&quot;
conn.execute(sql, {'id': 1, 'json_fld': {'a': 1, 'b': False, 'c': 'yes'}})
</code></pre> <p>I tried the answers in <a href="https://stackoverflow.com/questions/31796332/psycopg2-insert-python-dictionary-as-json">this question</a>, but those all apply to psycopg2, NOT psycopg3, and they do not work here (notably I tried):</p> <pre><code>conn.execute(sql, {'id': 1, 'json_fld': json.dumps({'a': 1, 'b': False, 'c': 'yes'})})
</code></pre> <p>The error remains the same:</p> <blockquote> <p>psycopg.ProgrammingError: cannot adapt type 'dict' using placeholder '%s' (format: AUTO)</p> </blockquote>
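For context, psycopg 3 wants an explicit adapter for dicts: its documented route is wrapping the value, `from psycopg.types.json import Json` (or `Jsonb`) and passing `Json({...})` as the parameter. Alternatively, a `json.dumps` string with an explicit `::jsonb` cast in the SQL also works. A database-free sketch of the cast approach (the table and live connection are assumed, not created here):

```python
import json

# The dict to store; json.dumps turns it into text Postgres can cast.
payload = {"a": 1, "b": False, "c": "yes"}

# Explicit ::jsonb cast so the text parameter is converted server-side.
sql = "INSERT INTO tbl (id, json_fld) VALUES (%(id)s, %(json_fld)s::jsonb)"
params = {"id": 1, "json_fld": json.dumps(payload)}

# With a live psycopg connection this would be: conn.execute(sql, params)
print(params["json_fld"])
```

Note that `json.dumps` correctly lowercases Python's `False` to JSON `false`, so the round-tripped value matches what `b: false` in the question intends.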
<python><postgresql><jsonb><psycopg3>
2023-09-18 20:32:55
2
2,748
Frank Conry
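For the question above, a sketch of the two usual psycopg 3 routes. The documented adapter is the `Json` wrapper; it is left as a comment here because it needs a live connection, so the runnable part only shows the manual-serialization alternative with an explicit `::jsonb` cast:

```python
import json

# Documented psycopg 3 route (needs a live connection, shown as a comment):
#     from psycopg.types.json import Json
#     conn.execute(sql, {'id': 1, 'json_fld': Json(payload)})
# Manual alternative: serialize yourself and cast the placeholder to jsonb.
payload = {'a': 1, 'b': False, 'c': 'yes'}
serialized = json.dumps(payload)
sql = "INSERT INTO tbl (id, json_fld) VALUES (%(id)s, %(json_fld)s::jsonb)"
params = {'id': 1, 'json_fld': serialized}
```

With the cast in place, the driver sees a plain string parameter, which it can adapt without a registered dict dumper.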
77,130,085
2,307,570
Does Python always use UTF-8 when reading from or writing to a file? (Open function with only file and mode args.)
<p>I have the impression that reading from and writing to a file always uses UTF-8.<br> I tried to find a counter-example by using a different encoding for the script, but did not find one.</p> <p>My example uses the Yen-Symbol, whose encoding in UTF-8 is <code>C2 A5</code>.<br> <code>'¥'.encode('utf-8') == b'\xc2\xa5'</code></p> <p>Assume a file <em>yen.txt</em> that contains the two bytes <code>C2 A5</code>.<br> Are there any circumstances under which the following code could fail?</p> <pre class="lang-py prettyprint-override"><code>with open('yen.txt', 'r') as f: assert f.read() == '¥' </code></pre> <p>Or could the following code ever produce a text file that does not contain the bytes <code>C2 A5</code>?</p> <pre class="lang-py prettyprint-override"><code>with open('yen.txt', 'w') as f: f.write('¥') </code></pre> <p>By &quot;circumstances&quot; I mean the encoding of the script, or some setting of the operating system.</p> <p>Assume that the script uses some encoding that allows the Yen-Symbol.<br> (Otherwise it would be equivalent to a question mark.)</p> <p>I tried the write script with <code># coding: cp437</code>, and the result was just the same.<br> I wondered if the text file might instead contain only the byte <code>9D</code>,<br> because that is the Yen symbol's code point in <a href="https://en.wikipedia.org/wiki/Code_page_437" rel="nofollow noreferrer">CP 437</a>.<br> <code>'¥'.encode('cp437') == b'\x9d'</code></p> <p>BTW, I uploaded the modified script <a href="https://github.com/entenschule/examples_py/blob/main/a001_misc/b007_encoding/cp_437/write_yen.py" rel="nofollow noreferrer">on GitHub</a>, and somehow the Yen-Symbol disappeared.<br> Does anyone have an idea why?</p>
<python><utf-8><character-encoding>
2023-09-18 19:59:34
1
1,209
Watchduck
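The short answer to the question above: `open()` without `encoding=` uses `locale.getpreferredencoding(False)`, which is usually UTF-8 on Linux/macOS but often cp1252 on Windows — so the snippets *can* fail depending on the OS locale (the script's own `# coding:` declaration only affects how source literals are decoded, not file I/O). A small sketch that removes the ambiguity by passing `encoding=` explicitly:

```python
import locale
import os
import tempfile

# What open() uses when no encoding= is given (locale-dependent):
print(locale.getpreferredencoding(False))

path = os.path.join(tempfile.mkdtemp(), "yen.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("\u00a5")  # Yen sign
with open(path, "rb") as f:
    raw = f.read()
# raw is b'\xc2\xa5' on every platform, because the encoding is explicit
```

Running with `python -X utf8` (UTF-8 mode) also forces UTF-8 defaults regardless of locale.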
77,130,056
14,566,295
Converting a list to pandas dataframe where list contains dictionary
<p>I wanted to convert a <code>list</code> to <code>pandas</code> dataframe, where the first element of the <code>list</code> is a <code>dictionary</code>.</p> <p>I have below code</p> <pre><code>import pandas as pd import numpy as np pd.DataFrame([{'aa' : 10}, np.nan]) </code></pre> <p>However this fails with below message</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/local/lib/python3.11/site-packages/pandas/core/frame.py&quot;, line 782, in __init__ arrays, columns, index = nested_data_to_arrays( ^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/pandas/core/internals/construction.py&quot;, line 498, in nested_data_to_arrays arrays, columns = to_arrays(data, columns, dtype=dtype) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/pandas/core/internals/construction.py&quot;, line 832, in to_arrays arr, columns = _list_of_dict_to_arrays(data, columns) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/pandas/core/internals/construction.py&quot;, line 912, in _list_of_dict_to_arrays pre_cols = lib.fast_unique_multiple_list_gen(gen, sort=sort) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;pandas/_libs/lib.pyx&quot;, line 374, in pandas._libs.lib.fast_unique_multiple_list_gen File &quot;/usr/local/lib/python3.11/site-packages/pandas/core/internals/construction.py&quot;, line 910, in &lt;genexpr&gt; gen = (list(x.keys()) for x in data) ^^^^^^ AttributeError: 'float' object has no attribute 'keys' </code></pre> <p>Could you please help how to resolve this issue?</p>
<python><pandas><dataframe><list><dictionary>
2023-09-18 19:53:40
2
1,679
Brian Smith
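One hedged workaround for the traceback above: pandas' list-of-dicts constructor calls `.keys()` on every element, so the scalar `NaN` breaks it. Normalizing non-dict entries first avoids the error (a sketch, not the only possible fix):

```python
import pandas as pd

data = [{'aa': 10}, float('nan')]

# pd.DataFrame assumes every element of a list of dicts has .keys();
# the scalar NaN does not, hence the AttributeError.  Normalize first:
rows = [x if isinstance(x, dict) else {} for x in data]
df = pd.DataFrame(rows)
# df has a single column 'aa' with values [10.0, NaN]
```

The empty dict becomes an all-NaN row, which is usually the intent when a placeholder `np.nan` appears in the list.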
77,129,892
210,867
Poetry config and publish are broken for me
<p>I'm on a new Ubuntu 22.04 desktop. I've installed Poetry. (Initially v1.5.1, but when I ran into this problem, I upgraded to v1.6.1 to see if it fixed anything - it didn't.)</p> <p>The problem is whenever I try to use <code>poetry config pypi-token.pypi &quot;MYTOKEN&quot;</code> to store my PyPI token, it says nothing, and does nothing.</p> <p>I had to use <code>poetry config cache-dir &quot;/tmp&quot;</code> (changing a different official setting) to force it to create the <code>~/.config/pypoetry/config.toml</code> file, then manually added my token to the same file, but when I run <code>poetry config --list</code>, it shows the modified value for <code>cache-dir</code> but mentions nothing about the <code>pypi-token.pypi</code> setting.</p> <p>Furthermore, when I delete <code>config.toml</code> and then cd to the dir containing my PyPI library (which I've previously published on a previous desktop where I <em>was</em> able to config poetry properly), it behaves as if everything's fine:</p> <pre><code>ofer@minime:~/src/odigity/py-objects$ poetry publish Publishing py-objects (0.0.2) to PyPI - Uploading py_objects-0.0.2-py3-none-any.whl 100% - Uploading py_objects-0.0.2.tar.gz 100% </code></pre> <p>Even though the version hasn't been updated since my last publish, and I very clearly have <em>no credentials configured</em>.</p> <p>I'm so confused. Everything seems broken at the same time.</p>
<python><pypi><python-poetry>
2023-09-18 19:24:15
1
8,548
odigity
77,129,876
11,064,604
Pip install failing in Windows
<p>I cannot install Python packages locally on my work Windows machine. Inside PowerShell I have tried</p> <pre><code>py -m pip install pandas </code></pre> <p>only to get the following errors (the same happens for other packages, e.g. selenium):</p> <blockquote> <p>WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1006)'))': /simple/pandas/</p> </blockquote> <blockquote> <p>Could not fetch URL <a href="https://pypi.org/simple/pandas/" rel="nofollow noreferrer">https://pypi.org/simple/pandas/</a>: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/selenium/ (Caused by SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1006)'))) - skipping</p> </blockquote> <blockquote> <p>ERROR: Could not find a version that satisfies the requirement selenium (from versions: none)</p> </blockquote> <blockquote> <p>ERROR: No matching distribution found for selenium</p> </blockquote> <p>I would use something like conda, but my company will not permit conda. How can I download packages via pip?</p>
<python>
2023-09-18 19:20:53
2
353
Ottpocket
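The `UNSAFE_LEGACY_RENEGOTIATION_DISABLED` error above typically indicates a TLS-intercepting corporate proxy combined with OpenSSL 3, which refuses legacy renegotiation by default. One commonly cited workaround (an assumption about your environment — check with your security team first) is an OpenSSL configuration file that re-enables it, pointed to via the `OPENSSL_CONF` environment variable:

```ini
; openssl.cnf — re-enable legacy TLS renegotiation for OpenSSL 3
openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Options = UnsafeLegacyRenegotiation
```

Then, in PowerShell, something like `$env:OPENSSL_CONF = "C:\path\to\openssl.cnf"` before re-running `py -m pip install pandas`. If the proxy also re-signs certificates, you may additionally need your company's CA bundle configured for pip.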
77,129,870
8,587,712
how to check if element of list contains one of the elements of another list
<p>I have a list of strings <code>['asdf', 'qwer', 'zxcv']</code> and I need to return a list containing the elements of this list which contain any of the strings in <code>['a', 'g', 'y']</code>. I have attempted this with the following list comprehension:</p> <pre><code>lst = ['asdf', 'qwer', 'zxcv'] target_strings = ['a', 'g', 'y'] [element for element in lst if any(target_strings) in element] </code></pre> <p>This does not work, as <code>any(target_strings)</code> is a <code>bool</code>, not a <code>string</code>. The target output in this case would be <code>['asdf']</code>.</p>
<python><string><list><list-comprehension>
2023-09-18 19:19:26
3
313
Nikko Cleri
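For the question above: `any()` must be fed one membership test per target string, so the `in` check moves inside a generator expression. A minimal runnable version:

```python
lst = ['asdf', 'qwer', 'zxcv']
target_strings = ['a', 'g', 'y']

# any() consumes a generator of per-target membership tests, short-circuiting
# on the first True, instead of truth-testing the whole list:
result = [element for element in lst
          if any(target in element for target in target_strings)]
# result == ['asdf']
```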
77,129,839
7,422,392
Django and Django-CMS Static and Media URLs: https://https://
<p>In Django the following <code>settings</code> are configured:</p> <pre><code>MEDIA_URL=&quot;https://abcdefg.cloudfront.net/media/&quot; STATIC_URL=&quot;https://abcdefg.cloudfront.net/static/&quot; </code></pre> <p>In web browsers the URL for the static files is correct. The URL for media files however is incorrect with <code>https://https:</code> leading to <code>404</code>'s: <code>https://https://abcdefg.cloudfront.net/media/</code></p> <p>When I <code>print(f&quot;MEDIA_URL: {MEDIA_URL}&quot;)</code>, from the settings the URL is correct. I use <code>{% get_media_prefix %}</code> and <code>{% get_static_prefix %}</code> for my static and media files. For example:</p> <pre><code>&lt;img src=&quot;{% get_media_prefix %}image.jpg&quot; alt=&quot;Image&quot;&gt; </code></pre> <p>This alternative does not work:</p> <pre><code>MEDIA_URL=&quot;abcdefg.cloudfront.net/media/&quot; </code></pre> <p>The result observed in the browser is <code>https://example.com/abcdefg.cloudfront.net/media/</code></p> <p><strong>EDIT:</strong> Am using Django-CMS. Only the URLs for Django-CMS seem to show the above behavior.</p> <p>What is the issue here?</p>
<python><django><django-cms>
2023-09-18 19:13:35
1
1,006
sitWolf
77,129,754
13,764,814
How to configure debug environment for vscode
<p>I am struggling to debug my project.</p> <p>The directory structure is as follows:</p> <p>Root Dir</p> <ul> <li>ModuleA <ul> <li>Submodule1</li> <li>Submodule2</li> </ul> </li> <li>ModuleB <ul> <li>Submodule3</li> <li>Submodule4</li> </ul> </li> </ul> <p>When debugging a python script within Submodule3 (e.g. <code>python ModuleB/Submodule3/script.py --arg-one string</code>) which imports from ModuleA, I am getting module import errors. When running via CLI, running <code>export PYTHONPATH=$PWD</code> from the root dir resolves the import errors, but I cannot figure out how to configure the path to enable the debugger</p> <p>Note: I am on macOS and I use pyenv via homebrew for my python installations. Additionally the project contains a venv at ./venv</p>
<python><visual-studio-code><path><vscode-debugger>
2023-09-18 18:54:57
1
311
Austin Hallett
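One way to give the debugger the same `PYTHONPATH` as the CLI workaround above is a `launch.json` entry with an `env` block — a sketch, with the program path and argument mirroring the command in the question:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: script.py",
            "type": "python",
            "request": "launch",
            "program": "${workspaceFolder}/ModuleB/Submodule3/script.py",
            "args": ["--arg-one", "string"],
            "env": {"PYTHONPATH": "${workspaceFolder}"},
            "cwd": "${workspaceFolder}"
        }
    ]
}
```

An `envFile` entry pointing at a `.env` file that sets `PYTHONPATH` is another option; also make sure the interpreter selected in VS Code is the project's `./venv`.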
77,129,672
2,112,406
Allowing deepcopy for a C++ object that has vector of pointers to another object in pybind11
<p>I have two classes defined in C++:</p> <pre><code>class Pet { public: Pet(std::string &amp;name) : name(name) {} void set_name(std::string &amp;name_) { name = name_;} std::string &amp;get_name () { return name; } private: std::string name; }; class Zoo { public: Zoo() {} void set_pets(std::vector&lt;Pet*&gt;&amp; pets_) {pets = pets_;} std::vector&lt;Pet*&gt;&amp; get_pets () {return pets;} // copy constructor Zoo(Zoo&amp; new_zoo){ std::vector&lt;Pet*&gt; new_pets; for ( auto&amp; t : new_zoo.get_pets()){ Pet new_pet(t-&gt;get_name()); new_pets.push_back(&amp;new_pet); } } private: std::vector&lt;Pet*&gt; pets; }; </code></pre> <p>with bindings:</p> <pre><code>PYBIND11_MODULE(example, m) { py::class_&lt;Pet&gt;(m, &quot;Pet&quot;) .def(py::init&lt;std::string &amp;&gt;()) .def(&quot;set_name&quot;, &amp;Pet::set_name) .def(&quot;get_name&quot;, &amp;Pet::get_name) .def_property(&quot;name&quot;, &amp;Pet::get_name, &amp;Pet::set_name) .def(&quot;__repr__&quot;, [](Pet &amp;a){ return &quot;&lt;example.Pet named '&quot; + a.get_name() + &quot;'&gt;&quot;; }) ; py::class_&lt;Zoo&gt;(m, &quot;Zoo&quot;) .def(py::init&lt;&gt;()) .def(&quot;set_pets&quot;, &amp;Zoo::set_pets) .def(&quot;get_pets&quot;, &amp;Zoo::get_pets) .def_property(&quot;pets&quot;, &amp;Zoo::get_pets, &amp;Zoo::set_pets) .def(&quot;__deepcopy__&quot;, [](Zoo &amp;a){ Zoo b = a; return b; }) ; } </code></pre> <p>This is basically my attempt to implement <code>__deepcopy__</code> for the zoo object, such that I can create another zoo object on the python side, with the stuff the pointers in the pets vector copied to another memory location. 
My goal is to be able to allow for:</p> <pre class="lang-py prettyprint-override"><code>import copy from build.example import Pet from build.example import Zoo p1 = Pet(&quot;Molly&quot;) p2 = Pet(&quot;Charley&quot;) z = Zoo() z.set_pets([p1, p2]) z2 = copy.deepcopy(z) </code></pre> <p>so that, if I then do</p> <pre class="lang-py prettyprint-override"><code>p1.name = &quot;Foo&quot; </code></pre> <p>the name of the first pet in <code>z</code> changes to <code>Foo</code> but the one in <code>z2</code> remains as a different pet with name <code>Molly</code>.</p> <p>This code doesn't compile, but I can't figure out why. I'm also unsure whether I need to have an explicit copy constructor to use for the binding of <code>__deepcopy__</code>, and whether I need to have a proper destructor.</p> <p><em>EDIT</em>: errors:</p> <pre><code>In file included from /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:1: In file included from /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/pybind11.h:13: In file included from /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/detail/class.h:12: In file included from /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/detail/../attr.h:14: /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/cast.h:1472:16: error: no matching constructor for initialization of 'enable_if_t&lt;!std::is_void&lt;Zoo&gt;::value, Zoo&gt;' (aka 'Zoo') return std::move(*this).template call_impl&lt;remove_cv_t&lt;Return&gt;&gt;( ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/pybind11.h:291:59: note: in instantiation of function template specialization 'pybind11::detail::argument_loader&lt;Zoo &amp;&gt;::call&lt;Zoo, pybind11::detail::void_type, (lambda at /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:51:14) &amp;&gt;' requested here (void) 
std::move(args_converter).template call&lt;Return, Guard&gt;(cap-&gt;f); ^ /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/pybind11.h:143:9: note: in instantiation of function template specialization 'pybind11::cpp_function::initialize&lt;(lambda at /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:51:14), Zoo, Zoo &amp;, pybind11::name, pybind11::is_method, pybind11::sibling&gt;' requested here initialize( ^ /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/pybind11.h:1616:22: note: in instantiation of function template specialization 'pybind11::cpp_function::cpp_function&lt;(lambda at /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:51:14), pybind11::name, pybind11::is_method, pybind11::sibling, void&gt;' requested here cpp_function cf(method_adaptor&lt;type&gt;(std::forward&lt;Func&gt;(f)), ^ /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:50:10: note: in instantiation of function template specialization 'pybind11::class_&lt;Zoo&gt;::def&lt;(lambda at /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:51:14)&gt;' requested here .def(&quot;__deepcopy__&quot;, ^ /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:23:5: note: candidate constructor not viable: expects an lvalue for 1st argument Zoo(Zoo&amp; new_zoo){ ^ /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:18:5: note: candidate constructor not viable: requires 0 arguments, but 1 was provided Zoo() {} ^ In file included from /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:1: In file included from /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/pybind11.h:13: In file included from /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/detail/class.h:12: In file included from /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/detail/../attr.h:14: /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/cast.h:1504:16: error: 
no matching constructor for initialization of 'Zoo' return std::forward&lt;Func&gt;(f)(cast_op&lt;Args&gt;(std::move(std::get&lt;Is&gt;(argcasters)))...); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/cast.h:1472:42: note: in instantiation of function template specialization 'pybind11::detail::argument_loader&lt;Zoo &amp;&gt;::call_impl&lt;Zoo, (lambda at /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:51:14) &amp;, 0UL, pybind11::detail::void_type&gt;' requested here return std::move(*this).template call_impl&lt;remove_cv_t&lt;Return&gt;&gt;( ^ /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/pybind11.h:291:59: note: in instantiation of function template specialization 'pybind11::detail::argument_loader&lt;Zoo &amp;&gt;::call&lt;Zoo, pybind11::detail::void_type, (lambda at /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:51:14) &amp;&gt;' requested here (void) std::move(args_converter).template call&lt;Return, Guard&gt;(cap-&gt;f); ^ /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/pybind11.h:143:9: note: in instantiation of function template specialization 'pybind11::cpp_function::initialize&lt;(lambda at /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:51:14), Zoo, Zoo &amp;, pybind11::name, pybind11::is_method, pybind11::sibling&gt;' requested here initialize( ^ /Users/.../coding_experiments/pybind11/oop_docs/pybind11/include/pybind11/pybind11.h:1616:22: note: in instantiation of function template specialization 'pybind11::cpp_function::cpp_function&lt;(lambda at /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:51:14), pybind11::name, pybind11::is_method, pybind11::sibling, void&gt;' requested here cpp_function cf(method_adaptor&lt;type&gt;(std::forward&lt;Func&gt;(f)), ^ /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:50:10: note: in instantiation of function 
template specialization 'pybind11::class_&lt;Zoo&gt;::def&lt;(lambda at /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:51:14)&gt;' requested here .def(&quot;__deepcopy__&quot;, ^ /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:23:5: note: candidate constructor not viable: expects an lvalue for 1st argument Zoo(Zoo&amp; new_zoo){ ^ /Users/.../coding_experiments/pybind11/oop_docs/main.cpp:18:5: note: candidate constructor not viable: requires 0 arguments, but 1 was provided Zoo() {} ^ 2 errors generated. make[2]: *** [CMakeFiles/example.dir/main.cpp.o] Error 1 make[1]: *** [CMakeFiles/example.dir/all] Error 2 make: *** [all] Error 2 </code></pre>
<python><c++><oop><pybind11>
2023-09-18 18:36:17
0
3,203
sodiumnitrate
77,129,544
529,286
Can't build a non-stacked bar plot progressively, starting from a DataFrame
<p>I'm trying to build a bar plot with Pandas, matplotlib and Jupyter, where I'd like to place multiple series about the same index values one after another, without stacking. I've tried this:</p> <pre class="lang-py prettyprint-override"><code> d = { &quot;shop&quot;: [ &quot;London&quot;, &quot;Berlin&quot;, &quot;Paris&quot; ], &quot;sales-2020&quot;: [ 1000, 2344, 1233 ], &quot;sales-2021&quot;: [ 355, 4003, 2344 ], &quot;sales-2022&quot;: [ 2344, 2949, 3443 ] } colors = { &quot;2020&quot;: &quot;Blue&quot;, &quot;2021&quot;: &quot;Green&quot;, &quot;2022&quot;: &quot;Red&quot; } df = pd.DataFrame ( d, index = d [ &quot;shop&quot; ] ) ax = None for yk in [ 2020, 2021, 2022 ]: yk = str ( yk ) ax = df.plot.bar ( y = &quot;sales-&quot; + yk, label = yk, color = colors [ yk ], stacked = False, ax = ax ) </code></pre> <p>But it gives me the wrong output, with the series always stacked and the first one not appearing as bars:</p> <p><a href="https://i.sstatic.net/umcnX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/umcnX.png" alt="enter image description here" /></a></p> <p>Yes, this is a XY question, I've tried to simplify a <a href="https://github.com/Rothamsted/graphdb-benchmarks/blob/master/results/querying-results.ipynb" rel="nofollow noreferrer">more complex case</a>. Because I already have 9 series, I find it simpler to add them to the chart one at a time in a loop, not building everything in one go, as it was suggested. Also, I don't need to build a pivot table, for the data I have are already grouped.</p>
<python><pandas><matplotlib>
2023-09-18 18:16:01
1
3,104
zakmck
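A hedged sketch of the usual fix for the plot above: pass all the y-columns to a single `df.plot.bar` call and pandas groups the bars side by side itself; looping with `ax=ax` re-plots each series onto the same axes, producing the overlaid result shown in the screenshot:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import pandas as pd

d = {
    "shop": ["London", "Berlin", "Paris"],
    "sales-2020": [1000, 2344, 1233],
    "sales-2021": [355, 4003, 2344],
    "sales-2022": [2344, 2949, 3443],
}
df = pd.DataFrame(d, index=d["shop"])

# One call with every y-column: pandas draws the three series as
# grouped (non-stacked) bars, one color per column.
ax = df.plot.bar(
    y=["sales-2020", "sales-2021", "sales-2022"],
    color=["Blue", "Green", "Red"],
)
```

With many series, building the column list and color list programmatically keeps the single-call approach as convenient as the loop.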
77,129,493
2,107,488
retrieve secrets from Azure Key vault in azure databricks WITHOUT secret scope
<p>I know I can use a secret scope in Azure Databricks and easily retrieve secrets/keys from Azure KeyVault.</p> <p>But</p> <p>I would like to try another way -&gt; via a service principal. The service principal has permission to get/list secrets from my azure key vault.</p> <p>My steps:</p> <ol> <li>I create a cluster in azure databricks and install the libraries <code>azure-identity</code> and <code>azure-keyvault</code></li> </ol> <p>I run the following code:</p> <pre><code>from azure.keyvault.secrets import SecretClient from azure.identity import ClientSecretCredential as cs from azure.keyvault.keys import KeyClient kv_URI = &quot;https://mykeyvault.vault.azure.net/&quot; TENANT_ID = 'yyyyy' CLIENT_ID = 'zzzz' CLIENT_SECRET = 'xxxxx' credentials = cs( tenant_id=TENANT_ID, client_id=CLIENT_ID, client_secret=CLIENT_SECRET) secret_client = SecretClient(vault_url=kv_URI, credential=credentials) secretlist=secret_client.get_secret(&quot;Mysecret&quot;) </code></pre> <p>but after some minutes I get a time-out error.</p> <p>Do you know how I can solve the problem?</p>
<python><azure><azure-databricks><azure-keyvault><azure-service-principal>
2023-09-18 18:08:59
1
3,087
Kaja
77,129,441
6,068,731
Spotify API - Cannot get recently played songs
<blockquote> <p>I want to get my recently played songs using Python.</p> </blockquote> <p>I created an APP on the Spotify Developers website. I have the Client ID and the Client Secret. I managed to use these to use the <code>search</code> function to find the <code>artist_id</code> of an artist, and grab their top tracks. Hence, my credentials work.</p> <p>I now want to grab my recently played tracks, but I keep getting errors. Initially, I got an <code>insufficient client scope error</code>, but now I get a <code>Server Error</code> (Response 500). Unclear why.</p> <h1>Code</h1> <pre><code>import os import json import base64 from requests import post, get from datetime import datetime client_id = &quot;CLIENT_ID&quot; client_secret = &quot;CLIENT_SECRET&quot; def get_token(): &quot;&quot;&quot;Grabs Access Token&quot;&quot;&quot; # Generate Authorization string in Base64 auth_string = client_id + &quot;:&quot; + client_secret auth_bytes = auth_string.encode(&quot;utf-8&quot;) auth_base64 = str(base64.b64encode(auth_bytes), &quot;utf-8&quot;) # Generate URL, Headers, and Data for Post request url = &quot;https://accounts.spotify.com/api/token&quot; headers = { &quot;Authorization&quot;: &quot;Basic &quot; + auth_base64, &quot;Content-Type&quot;: &quot;application/x-www-form-urlencoded&quot; } data = { &quot;grant_type&quot;: &quot;client_credentials&quot;, &quot;scope&quot;: &quot;user-read-recently-played&quot; } # Post request result = post(url, headers=headers, data=data) # Transform to dictionary json_result = json.loads(result.content) token = json_result[&quot;access_token&quot;] return token def get_auth_header(token): return {&quot;Authorization&quot;: &quot;Bearer &quot; + token} def get_recently_played(token, start_date): url = &quot;https://api.spotify.com/v1/me/player/recently-played&quot; headers = get_auth_header(token) limit = 50 query = f&quot;?limit={limit}&quot; # Spotify requires the timestamp in milliseconds timestamp_ms = 
int(start_date.timestamp()) * 1000 query += f&quot;&amp;after={timestamp_ms}&quot; query_url = url + query result = get(query_url, headers=headers) print(result) json_result = json.loads(result.content)#[&quot;items&quot;] return json_result access_token = get_token() start_date = datetime(2023, 9, 10) recently_played = get_recently_played(access_token, start_date) for idx, item in enumerate(recently_played): song_name = item[&quot;track&quot;][&quot;name&quot;] artist_name = item[&quot;track&quot;][&quot;artists&quot;][0][&quot;name&quot;] print(f&quot;{idx + 1}. {song_name} by {artist_name}&quot;) </code></pre> <h1>Authorization Code</h1> <p>Following the comment, I have now changed the <code>get_token()</code> function to:</p> <pre><code>redirect_uri = &quot;http://localhost:8888/callback&quot; def get_token_new(): auth_url = &quot;https://accounts.spotify.com/authorize&quot; token_url = &quot;https://accounts.spotify.com/api/token&quot; # Generate the authorization URL auth_params = { &quot;client_id&quot;: client_id, &quot;response_type&quot;: &quot;code&quot;, &quot;redirect_uri&quot;: redirect_uri, &quot;scope&quot;: &quot;user-read-recently-played&quot;, # Add the required scope } authorization_url = auth_url + &quot;?&quot; + &quot;&amp;&quot;.join([f&quot;{k}={v}&quot; for k, v in auth_params.items()]) print(&quot;Please visit the following URL to authorize your application:&quot;) print(authorization_url) # After user authorization, you will receive an authorization code authorization_code = input(&quot;Enter the authorization code from the URL: &quot;) # Exchange the authorization code for an access token token_params = { &quot;grant_type&quot;: &quot;authorization_code&quot;, &quot;code&quot;: authorization_code, &quot;redirect_uri&quot;: redirect_uri, } auth_string = client_id + &quot;:&quot; + client_secret auth_bytes = auth_string.encode(&quot;utf-8&quot;) auth_base64 = str(base64.b64encode(auth_bytes), &quot;utf-8&quot;) token_headers = { 
&quot;Authorization&quot;: &quot;Basic &quot; + auth_base64, } result = post(token_url, headers=token_headers, data=token_params) # Transform to dictionary json_result = json.loads(result.content) token = json_result[&quot;access_token&quot;] return token </code></pre> <p>This somewhat works, but I always need to grab the code in the url which is a pain in the stomach. Is there any way to cache it?</p>
<python><spotify><spotipy>
2023-09-18 17:59:59
0
728
Physics_Student
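On the caching question at the end: `spotipy`'s `SpotifyOAuth` helper handles this automatically (it stores the token response, including the refresh token, in a local cache file and refreshes silently). Staying with raw requests, a minimal file-based cache is enough — the sketch below assumes the token response carries a `refresh_token` (hypothetical values here) that can later be exchanged without re-visiting the authorization URL:

```python
import json
import os
import tempfile

# Hypothetical token-response shape; real responses also carry expires_in etc.
def save_token(path, token_info):
    with open(path, "w", encoding="utf-8") as f:
        json.dump(token_info, f)

def load_token(path):
    if not os.path.exists(path):
        return None
    with open(path, encoding="utf-8") as f:
        return json.load(f)

cache = os.path.join(tempfile.mkdtemp(), "spotify_token.json")
save_token(cache, {"access_token": "aaa", "refresh_token": "bbb"})
token_info = load_token(cache)
```

On startup, load the cache and, if present, POST `grant_type=refresh_token` with the stored refresh token to the token endpoint instead of opening the authorization URL; only fall back to the interactive flow when the cache is empty.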
77,129,052
1,061,095
How to schedule a custom Task object within an async block?
<p>I've developed the following prototype, towards a callbacks-based approach to using asyncio with HTTP (<a href="https://www.python-httpx.org/async/" rel="nofollow noreferrer">HTTPX</a>).</p> <p>My idea with this prototype was to produce a generic &quot;task chain&quot; structure for Python asyncio. This introduces a generic 'task data source' class, e.g running an HTTP request, together with a generic 'task chain' class extending the &quot;data source&quot; class with an additional &quot;input task&quot; field, such that each task chain object would be activated one the provided input task is complete. Each task chain object may then produce a value that any later objects could use. A task chain may alternately produce some output or an event in a user interface.</p> <p>For a purpose of sharing both the process state and output data of each link in the task chain, I've tried to reuse the <code>result()</code> field of an asyncio Task. In one approach - as below - it seems to almost work out. This example will define the initial data source class as subclass of <code>asyncio.Task</code>, with each data source initialized as to dispatch to the source's mid-level <code>run_process()</code> coroutine. The task chain class will then inherit this characteristic of the data source implementation. The implementing classes must each define a <code>process()</code> coroutine, such that will be called from <code>run_process()</code>. The <code>process()</code> coroutine should return the value that will be stored as the task's result value.</p> <p>I'd also tried developing this prototype with futures. It seems to be closer to working out, when using Tasks. It seems to work - almost - in the implementation below. 
At least, this implementation does not deadlock.</p> <p>When changing one line of code however, the limitations of this implementation might become more apparent.</p> <p>With a later update, it seems that even this implementation is not altogether working out - as though a link is being skipped in the task chain.</p> <p>I'm trying to diagnose those limitations, at what I think may be a question about scheduling for a custom task object.</p> <pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod from typing import Callable, Mapping from typing_extensions import Generic, TypeVar import asyncio as aio import httpx import json from urllib import parse Ti = TypeVar(&quot;Ti&quot;) To = TypeVar(&quot;To&quot;) class DataSource(aio.Task, Generic[To], ABC): def __init__(self, loop = None, name = None, context = None): super().__init__(self.run_process(), loop = loop, name = name, context = context) aio._register_task(self) def __hash__(self): return object.__hash__(self) @abstractmethod async def process(self) -&gt; To: raise NotImplementedError(self.process) async def run_process(self) -&gt; To: loop = self.get_loop aio._enter_task(loop, self) try: if __debug__: print(&quot;in run_process %r&quot; % self) return await self.process() finally: aio._leave_task(loop, self) aio._unregister_task(self) def __repr__(self): return &quot;&lt;%s at 0x%x&gt;&quot; % (self.__class__.__name__, id(self)) def __str__(self): return repr(self) class FutureChain(DataSource[To], Generic[Ti, To]): @property def in_future(self) -&gt; aio.Future[Ti]: return self._in_future def __init__(self, in_future: aio.Future[Ti], loop = None): super().__init__(loop) setattr(self, &quot;_in_future&quot;, in_future) in_future.add_done_callback(lambda ftr: ftr.get_loop().create_task(self.process())) async def run_process(self) -&gt; To: if __debug__: print(&quot;run_process %r wait for input from %r&quot; % (self, self.in_future)) await self.in_future if __debug__: 
print(&quot;run_process %r dispatch&quot; % self) await super().run_process() ## must be implemented in derived classes: process() Tr = TypeVar('Tr') class RequestBroker(Generic[Tr]): method: str url: str def __init__(self,method: str, url: str): self.method = method self.url = url @abstractmethod async def dispatch_request(self) -&gt; Tr: raise NotImplementedError(self.dispatch_request) class BytesRequestSource(DataSource[httpx.Response], RequestBroker[bytes]): ## FIXME also support an encoding object default_encoding: str def __init__(self, method: str, url: str, default_encoding: str = &quot;utf-8&quot;, loop = None): DataSource.__init__(self, loop) RequestBroker.__init__(self, method, url) self.default_encoding = default_encoding async def dispatch_request(self) -&gt; httpx.Response: async with httpx.AsyncClient() as client: client_mtd: Callable[..., httpx.Response] = getattr(client, self.method.lower()) return await client_mtd(self.url) async def process(self) -&gt; bytes: if __debug__: print(&quot;process() for %r&quot; % self) response = await self.dispatch_request() try: data = b'' async for chunk in response.aiter_bytes(): data += chunk encoding = response.charset_encoding or self.default_encoding if __debug__: print(&quot;done: %r&quot; % self) return data.decode(encoding) finally: await response.aclose() class ParseChain(FutureChain[bytes, Mapping]): async def process(self) -&gt; Mapping: if __debug__: print(&quot;process() for %r&quot; % self) rslt = self.in_future.result() if __debug__: print(&quot;parsing %r&quot; % rslt) return json.loads(rslt) class PresentationChain(FutureChain[Mapping, None]): async def process(self) -&gt; None: if __debug__: print(&quot;process() for %r&quot; % self) rslt = self.in_future.result() print(&quot;Response: %s&quot; % str(rslt)) def run_example(mtd, req): loop = aio.get_event_loop_policy().get_event_loop() bytesource = BytesRequestSource(method=mtd, url=req, loop = loop) parser = ParseChain(bytesource, loop = loop) 
presenter = PresentationChain(parser, loop = loop) if __debug__: print(&quot;running request&quot;) # return loop.run_until_complete(presenter) return loop.run_until_complete(bytesource) if __name__ == &quot;__main__&quot;: REQ=&quot;http://ip.jsontest.com&quot; print(&quot;Response: %r &quot; % run_example(&quot;GET&quot;, REQ)) </code></pre> <p>When called as in that example, beginning the task chain at the first object - the byte source - then the code presents almost an expected response at the last point, with approximately the following process:</p> <ol> <li>The byte source task runs an HTTP request and returns the result bytes. The aio framework then sets the return value of the task coroutine as the task's result value</li>
This becomes apparent after an update to the example</p> <pre><code>running request in run_process &lt;BytesRequestSource at 0x1daa0384400&gt; process() for &lt;BytesRequestSource at 0x1daa0384400&gt; run_process &lt;ParseChain at 0x1daa0384f40&gt; wait for input from &lt;BytesRequestSource at 0x1daa0384400&gt; run_process &lt;PresentationChain at 0x1daa0385000&gt; wait for input from &lt;ParseChain at 0x1daa0384f40&gt; done: &lt;BytesRequestSource at 0x1daa0384400&gt; run_process &lt;ParseChain at 0x1daa0384f40&gt; dispatch in run_process &lt;ParseChain at 0x1daa0384f40&gt; process() for &lt;ParseChain at 0x1daa0384f40&gt; parsing '{&quot;ip&quot;: &quot;zyx.yxz.yyz.xyz&quot;}\n' Response: '{&quot;ip&quot;: &quot;zyx.yxz.yyz.xyz&quot;}\n' </code></pre> <p>I believe that that kind of works. However, when replacing this call from the example:</p> <pre class="lang-py prettyprint-override"><code> return loop.run_until_complete(bytesource) </code></pre> <p>... to instead use this call, in effect to begin the task chain at the end of the chain ...</p> <pre class="lang-py prettyprint-override"><code> return loop.run_until_complete(presenter) </code></pre> <p>then it appears that the top-level <code>presenter</code> never receives the initial HTTP response. 
Moreover, the updated code results in duplicate requests.</p> <pre><code>running request in run_process &lt;BytesRequestSource at 0x1a6d650c4c0&gt; process() for &lt;BytesRequestSource at 0x1a6d650c4c0&gt; run_process &lt;ParseChain at 0x1a6d650d000&gt; wait for input from &lt;BytesRequestSource at 0x1a6d650c4c0&gt; run_process &lt;PresentationChain at 0x1a6d650d0c0&gt; wait for input from &lt;ParseChain at 0x1a6d650d000&gt; done: &lt;BytesRequestSource at 0x1a6d650c4c0&gt; run_process &lt;ParseChain at 0x1a6d650d000&gt; dispatch in run_process &lt;ParseChain at 0x1a6d650d000&gt; process() for &lt;ParseChain at 0x1a6d650d000&gt; parsing '{&quot;ip&quot;: &quot;zyx.yxz.yyz.xyz&quot;}\n' process() for &lt;ParseChain at 0x1a6d650d000&gt; parsing '{&quot;ip&quot;: &quot;zyx.yxz.yyz.xyz&quot;}\n' run_process &lt;PresentationChain at 0x1a6d650d0c0&gt; dispatch in run_process &lt;PresentationChain at 0x1a6d650d0c0&gt; process() for &lt;PresentationChain at 0x1a6d650d0c0&gt; Response: None process() for &lt;PresentationChain at 0x1a6d650d0c0&gt; Response: None Response: None </code></pre> <p>I believe this may be due to the following, such that the task created under <code>add_done_callback()</code> might be producing and storing a value from <code>self.process()</code> independently of the <code>self</code> (custom Task) instance where this is called - i.e. that the custom Task object is not in itself being updated for its value.</p> <pre><code>in_future.add_done_callback(lambda ftr: ftr.get_loop().create_task(self.process())) </code></pre> <p>I'm not exactly certain how to schedule the custom task object, in itself. 
At least, the example will schedule a coroutine on the custom task object ...</p> <p>For at least that aspect of the implementation, is there some way to schedule the task itself outside of a call like <code>loop.run_until_complete()</code>?</p> <p>This does not work, as the task is not a coroutine:</p> <pre><code># from the ctor, self being a custom Task object in_future.add_done_callback(lambda ftr: ftr.get_loop().create_task(self)) </code></pre> <p>Similarly, this does not work, as the task is not a callable:</p> <pre><code># from the ctor, self being a custom Task object in_future.add_done_callback(lambda ftr: ftr.get_loop().call_soon(self)) </code></pre> <p>Now that this prototype may have produced a custom task object, how can that task be scheduled within its assigned loop, outside of <code>loop.run_until_complete()</code> etc?</p>
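<p>For reference, here is a minimal sketch of an alternative I've been experimenting with (names like <code>stage</code> are my own, not from the code above): keep each stage a plain coroutine that awaits the upstream future. Every Task is then scheduled exactly once, at construction, and no manual <code>add_done_callback()</code> wiring is needed:</p>

```python
import asyncio

async def stage(in_future, func):
    # Await the upstream result, then transform it; the enclosing Task's
    # result is set automatically when this coroutine returns.
    value = await in_future
    return func(value)

async def main():
    loop = asyncio.get_running_loop()
    source = loop.create_future()

    # Each create_task() call schedules the stage on the running loop
    parse = loop.create_task(stage(source, lambda b: b.decode()))
    present = loop.create_task(stage(parse, str.upper))

    source.set_result(b"hello")
    return await present

result = asyncio.run(main())
print(result)  # HELLO
```

<p>If this is sound, it would suggest the custom Task subclass isn't needed for the chaining itself - as far as I can tell, an asyncio Task cannot be re-scheduled after construction, so the chain has to be driven by awaiting futures rather than by re-dispatching Task objects.</p>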
<python><http><python-asyncio>
2023-09-18 16:54:12
0
486
Sean Champ
77,129,015
993,812
Move data within dataframe conditionally
<p>I've got some data with two &quot;measurement series&quot;. What I'd like to do is essentially copy and paste the values from columns C&amp;D of Series A where type equals 0 to columns A&amp;B of Series B where the first occurrence of the Name and Day columns are equal to those found in Series A and the values of columns A&amp;B are not nan.</p> <p>I've tried getting the length of Series A data and using it to do a shift, but that isn't robust as it assumes the data is always in the same order and seems to break if there's more than one Name in a measurement series because then the shift could be off.</p> <p>How can I accomplish this in a more robust way?</p> <p>Input</p> <pre><code> Name Day Type A B C D Meas_Series 0 Test1 20230101 0 3456 7890 0.123 0.456 Series A 1 Test1 20230101 1 6789 1234 nan nan Series A 2 Test1 20230101 2 8901 2345 nan nan Series A 3 Test1 20230102 0 2345 6789 0.345 0.678 Series A 4 Test1 20230102 1 5678 9012 nan nan Series A 5 Test1 20230102 2 3456 7890 nan nan Series A 6 Test1 20230101 99 3456 7890 nan nan Series B 7 Test1 20230101 99 nan nan nan nan Series B 8 Test1 20230101 99 nan nan nan nan Series B 9 Test1 20230102 99 2345 6789 nan nan Series B 10 Test1 20230102 99 nan nan nan nan Series B 11 Test1 20230102 99 nan nan nan nan Series B </code></pre> <p>Output</p> <pre><code> Name Day Type A B C D Meas_Series 0 Test1 20230101 0 3456 7890 0.123 0.456 Series A 1 Test1 20230101 1 6789 1234 nan nan Series A 2 Test1 20230101 2 8901 2345 nan nan Series A 3 Test1 20230102 0 2345 6789 0.345 0.678 Series A 4 Test1 20230102 1 5678 9012 nan nan Series A 5 Test1 20230102 2 3456 7890 nan nan Series A 6 Test1 20230101 99 0.123 0.456 nan nan Series B 7 Test1 20230101 99 nan nan nan nan Series B 8 Test1 20230101 99 nan nan nan nan Series B 9 Test1 20230102 99 0.345 0.678 nan nan Series B 10 Test1 20230102 99 nan nan nan nan Series B 11 Test1 20230102 99 nan nan nan nan Series B </code></pre>
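<p>Here is a sketch of the direction I'm currently considering, assuming a (Name, Day) lookup built from the Type 0 rows of Series A is acceptable; the frame below is my reconstruction of the sample data:</p>

```python
import numpy as np
import pandas as pd

# Reconstruction of the sample data above
df = pd.DataFrame({
    "Name": ["Test1"] * 12,
    "Day": [20230101] * 3 + [20230102] * 3 + [20230101] * 3 + [20230102] * 3,
    "Type": [0, 1, 2, 0, 1, 2] + [99] * 6,
    "A": [3456, 6789, 8901, 2345, 5678, 3456, 3456, np.nan, np.nan, 2345, np.nan, np.nan],
    "B": [7890, 1234, 2345, 6789, 9012, 7890, 7890, np.nan, np.nan, 6789, np.nan, np.nan],
    "C": [0.123, np.nan, np.nan, 0.345, np.nan, np.nan] + [np.nan] * 6,
    "D": [0.456, np.nan, np.nan, 0.678, np.nan, np.nan] + [np.nan] * 6,
    "Meas_Series": ["Series A"] * 6 + ["Series B"] * 6,
})

# (Name, Day) -> (C, D) lookup from the Type 0 rows of Series A
src = df[(df["Meas_Series"] == "Series A") & (df["Type"] == 0)]
lookup = src.set_index(["Name", "Day"])[["C", "D"]]

# Target rows: the first non-NaN Series B row per (Name, Day)
targets = (
    df[(df["Meas_Series"] == "Series B") & df["A"].notna()]
    .groupby(["Name", "Day"])
    .head(1)
    .index
)

# Align the lookup to the targets' keys and paste the values into A/B
keys = pd.MultiIndex.from_frame(df.loc[targets, ["Name", "Day"]])
df.loc[targets, ["A", "B"]] = lookup.reindex(keys).to_numpy()
```

<p>This avoids any assumption about row order or a fixed shift, since the alignment is entirely key-based.</p>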
<python><pandas>
2023-09-18 16:48:18
2
555
John
77,128,939
1,991,502
Can I make a QStandardItem editable from one view but not the other?
<p>Consider the snippet below that creates two item models and views for a single item:</p> <pre><code>item = QStandardItem(&quot;my item&quot;) item_model_1 = QStandardItemModel() item_model_1.appendRow(item) item_model_2 = QStandardItemModel() item_model_2.appendRow(item) view_1 = QTreeView() view_1.setModel(item_model_1) view_2 = QTreeView() view_2.setModel(item_model_2) </code></pre> <p>I would like the user to be able to view the item from either view, but have it editable from only one view. Is this possible?</p>
<python><pyqt><pyside>
2023-09-18 16:34:48
1
749
DJames
77,128,850
9,582,176
How to set typehint for default type
<p>I'm trying to write the correct type hint for a parameter whose default is a type, in this situation:</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar class Foo: pass class FooBar(Foo): pass T = ... def baz(type_: type[T] = Foo) -&gt; T: return type_() </code></pre> <p>I already tried using</p> <pre class="lang-py prettyprint-override"><code>T = TypeVar(&quot;T&quot;, bound=Foo) </code></pre> <p>but mypy said:</p> <pre><code>Incompatible default for argument &quot;type_&quot; (default has type &quot;type[Foo]&quot;, argument has type &quot;type[T]&quot;) [assignment] def baz(type_: type[T] = Foo) -&gt; T: ^~~ </code></pre>
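<p>One workaround I've been sketching (I'm not sure it's the idiomatic fix) is to move the default out of the annotated signatures with <code>@overload</code>, so the no-argument call is typed separately from the parameterized one:</p>

```python
from typing import TypeVar, overload

class Foo:
    pass

class FooBar(Foo):
    pass

T = TypeVar("T", bound=Foo)

@overload
def baz() -> Foo: ...
@overload
def baz(type_: type[T]) -> T: ...

def baz(type_=Foo):
    # Runtime implementation; the overloads above carry the typing
    return type_()

print(type(baz()).__name__)        # Foo
print(type(baz(FooBar)).__name__)  # FooBar
```

<p>The untyped implementation keeps the runtime default, while mypy resolves calls through the two overloads.</p>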
<python><mypy><python-typing><type-variables>
2023-09-18 16:21:54
1
659
Oleg
77,128,728
10,093,190
How can I detect strikethrough text from .docx tables?
<p>I'm using python-docx to parse some tables to dictionaries. However, some of those tables contain strikethrough text. This text needs to be excluded.</p> <p>I have already found <a href="https://stackoverflow.com/questions/27904470/checking-for-particular-style-using-python-docx/27910938#27910938">how to detect strike-through text in paragraphs</a> or <a href="https://stackoverflow.com/questions/56269615/how-to-apply-strike-through-using-python-docx">how to apply strike-through text myself</a>, but nowhere can I find how to check for strikethrough text in tables. As far as I can tell from the documentation, neither the Table object nor the cells have a &quot;Run&quot; object, which is something that Paragraphs have that contains style data.</p> <p>Without the Run object, there isn't any style data.</p>
<python><docx><python-docx>
2023-09-18 16:01:36
1
501
Opifex
77,128,576
5,299,750
Why are not all the Conv2D weights of my tflite model in Int8 after conversion with dynamic range quantization?
<p>I have a Keras model and follow the <a href="https://www.tensorflow.org/lite/performance/post_training_quant" rel="nofollow noreferrer">Post-training dynamic range quantization guide</a> to convert it to a tflite model. Upon inspection with <a href="https://netron.app/" rel="nofollow noreferrer">Netron</a>, however, I am confused: Some of the Conv2D layers of the model use weights of type Int8, while others still use Float32. I cannot find any documentation on why only some of the weights might be quantized. Can someone point me in the right direction?</p> <p>The code I used to convert the model is</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf model = tf.keras.Sequential([ tf.keras.layers.InputLayer(input_shape=(5, None, 1)), tf.keras.layers.Conv2D(16, (3, 3), padding='same', activation='relu'), tf.keras.layers.DepthwiseConv2D((3, 3), padding='same'), tf.keras.layers.Conv2D(16, (3, 3), padding='same', activation='relu'), tf.keras.layers.AveragePooling2D((3, 3), padding='same'), ]) converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.optimizations = [tf.lite.Optimize.DEFAULT] tflite_model = converter.convert() </code></pre>
<python><tensorflow><quantization><dtype>
2023-09-18 15:38:32
1
954
Christian Steinmeyer
77,128,486
3,833,632
Can the parent process find out the PID of a daemon process created via a double fork
<p>I have some python code that forks twice to create a daemon. I know code like this is not at all uncommon, so I was surprised I couldn't find people mentioning how you get the PID of your grandchild process here. I am willing to explore all options, even using the filesystem if necessary. I just want to do it in the most correct/safe way possible.</p> <pre><code>try: child1Pid = os.fork() if child1Pid &gt; 0: # parent process, return and keep running return os.setsid() # do second fork (first parent has already exited) try: child2Pid = os.fork() if child2Pid &gt; 0: # exit from second parent sys.exit(0) </code></pre> <p>The sticking point here is that the parent process has already exited by the time child2Pid is known, and there would be no value in forking on the parent process twice.</p> <p>I am fine with using locks/waits/synchronization methods/whatever is needed. I am just trying to be cautious here as I want to make sure I do something safe.</p>
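<p>One candidate I'm evaluating (a sketch, not hardened against fork or pipe errors): open a pipe before the first fork and have the grandchild write its own PID back to the original parent:</p>

```python
import os

def spawn_daemon():
    # Pipe created before forking, so both ends are inherited by all processes
    r, w = os.pipe()
    pid = os.fork()
    if pid > 0:
        # Original parent: read the grandchild's PID, then reap the first child
        os.close(w)
        with os.fdopen(r) as f:
            daemon_pid = int(f.read())
        os.waitpid(pid, 0)
        return daemon_pid

    # First child: start a new session, fork again
    os.close(r)
    os.setsid()
    if os.fork() > 0:
        os._exit(0)  # first child exits; grandchild is orphaned as intended

    # Grandchild (the daemon): report our PID over the pipe, then carry on
    with os.fdopen(w, "w") as f:
        f.write(str(os.getpid()))
    # ... real daemon work would continue here ...
    os._exit(0)  # keep the demo short

daemon_pid = spawn_daemon()
print(daemon_pid)
```

<p>The parent's <code>read()</code> returns once every copy of the write end is closed, so no extra locking seems necessary; the intermediate child's copy is closed automatically when it exits.</p>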
<python><fork>
2023-09-18 15:27:45
0
715
CalebK
77,128,442
1,517,108
Appending a series of 2d numpy array to create a 3d numpy array
<p>I thought this was an easy problem, but I have been struggling with it.</p> <p>I have a dataframe with 4 columns (Open, High, Low, Close).</p> <p>I need to iteratively select</p> <ul> <li>for 100 times</li> <li>a batch of 75 rows</li> <li>each having 4 columns, such that the final shape is (100, 75, 4)</li> </ul> <p>I have tried <code>np.append, np.stack, np.dstack, np.concatenate</code>. None of them works.</p> <p>With <code>np.append</code> I get a shape (7500, 4).</p> <p>With <code>np.stack</code>, in the second iteration there is an error that all input arrays must have the same shape (since after the first stack the original array's shape is different).</p> <p>My last attempt, with <code>np.stack</code> (other attempts not shown):</p> <pre><code>for i in range (100): print(i) if (i==0): temp_array=timeseries[['Open','High','Low','Close']].iloc[i:i+75].to_numpy() else: temp_temp_array=np.stack([temp_array,timeseries[['Open','High','Low','Close']].iloc[i:i+75].to_numpy()]) temp_array=temp_temp_array print(temp_array.shape) </code></pre> <p>It seems stackoverflow/internet does not have an answer (or maybe I am not asking the right questions).</p>
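<p>The pattern I'm leaning towards now, sketched with random stand-in data since I can't share the real <code>timeseries</code>: collect each window in a list and call <code>np.stack</code> once at the end, instead of stacking incrementally:</p>

```python
import numpy as np
import pandas as pd

# Stand-in for the real data: 200 rows of OHLC values
rng = np.random.default_rng(0)
timeseries = pd.DataFrame(
    rng.random((200, 4)), columns=["Open", "High", "Low", "Close"]
)

# Gather each (75, 4) slice, then stack once along a new leading axis
windows = [
    timeseries[["Open", "High", "Low", "Close"]].iloc[i:i + 75].to_numpy()
    for i in range(100)
]
batches = np.stack(windows)
print(batches.shape)  # (100, 75, 4)
```

<p>numpy also has <code>np.lib.stride_tricks.sliding_window_view</code>, which builds overlapping windows without a Python loop, though its output shape needs some squeezing/slicing to match.</p>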
<python><pandas><numpy-ndarray>
2023-09-18 15:21:24
1
2,425
user1517108
77,128,370
17,034,564
Sorting dataframe cells based on extracted value?
<p>I have a dataframe which looks as follows:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Column_1</th> <th>Column_2</th> <th>Column_3</th> <th>Column_4</th> </tr> </thead> <tbody> <tr> <td>'0 0 1 apple'</td> <td>'0 0 2 banana'</td> <td>'7 5 6 orange'</td> <td>'4 2 9 mango'</td> </tr> <tr> <td>'2 8 0 grape'</td> <td>'3 7 5 apple'</td> <td>'7 4 1 banana'</td> <td>'0 5 3 kiwi'</td> </tr> <tr> <td>'3 8 4 lemon'</td> <td>'5 0 7 grape'</td> <td>'3 8 9 pineapple'</td> <td>'6 1 8 watermelon'</td> </tr> <tr> <td>'3 7 6 orange'</td> <td>'0 1 8 lemon'</td> <td>'1 6 7 cherry'</td> <td>'2 9 0 raspberry'</td> </tr> <tr> <td>'5 2 7 cherry'</td> <td>'2 9 7 pear'</td> <td>'NaN'</td> <td>'NaN'</td> </tr> </tbody> </table> </div> <p>Each cell contains a string value of the following type: <code>'x y z SomeText'</code>. I want to reorder the columns from 'Column_2' onwards based on the 'z' value within the same row. The 'Column_1' should remain unchanged. The output should look like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Column_1</th> <th>Column_2</th> <th>Column_3</th> <th>Column_4</th> </tr> </thead> <tbody> <tr> <td>'0 0 1 apple'</td> <td>'4 2 <strong>9</strong> mango'</td> <td>'7 5 <strong>6</strong> orange'</td> <td>'0 0 <strong>2</strong> banana'</td> </tr> <tr> <td>'2 8 0 grape'</td> <td>'3 7 <strong>5</strong> apple'</td> <td>'0 5 <strong>3</strong> kiwi'</td> <td>'7 4 <strong>1</strong> banana'</td> </tr> <tr> <td>'3 8 4 lemon'</td> <td>'3 8 <strong>9</strong> pineapple'</td> <td>'6 1 <strong>8</strong> watermelon'</td> <td>'5 0 <strong>7</strong> grape'</td> </tr> </tbody> </table> </div> <p>etc.</p> <p>The real data frame looks like this:</p> <p><a href="https://i.sstatic.net/VLztO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VLztO.png" alt="enter image description here" /></a></p> <p>I would like the data to be processed so that the final result would have, e.g., the following values swapped:</p> <p><a href="https://i.sstatic.net/Gpda7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gpda7.png" alt="enter image description here" /></a></p> <p>As you can see, <code>6 0 6 zwOl@</code> and <code>3 0 3 swOl@</code> should be re-ordered based on the third digit in each string.</p> <p>Thanks for the help!</p>
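<p>For completeness, here is what I've tried so far on a reduced version of the frame (<code>z_key</code> is my own helper name), sorting each row's cells by the extracted third number in descending order:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Column_1": ["0 0 1 apple", "2 8 0 grape"],
    "Column_2": ["0 0 2 banana", "3 7 5 apple"],
    "Column_3": ["7 5 6 orange", "7 4 1 banana"],
    "Column_4": ["4 2 9 mango", "0 5 3 kiwi"],
})

cols = ["Column_2", "Column_3", "Column_4"]

def z_key(cell):
    # Third whitespace-separated token; push NaN/short cells to the end
    if not isinstance(cell, str):
        return -np.inf
    tokens = cell.split()
    return float(tokens[2]) if len(tokens) > 2 else -np.inf

# Sort the cells of each row independently, leaving Column_1 untouched
sorted_rows = df[cols].apply(
    lambda row: sorted(row, key=z_key, reverse=True),
    axis=1,
    result_type="expand",
)
df[cols] = sorted_rows.to_numpy()
print(df.loc[0, cols].tolist())  # ['4 2 9 mango', '7 5 6 orange', '0 0 2 banana']
```

<p>Row-wise <code>apply</code> with <code>result_type=&quot;expand&quot;</code> seems to handle the per-row reordering, but I'm unsure it scales to the real frame.</p>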
<python><pandas><loops>
2023-09-18 15:10:57
2
678
corvusMidnight
77,128,337
14,039,739
How to accommodate None types when comparing dates in SQL Alchemy Query?
<p>My query:</p> <pre><code>abandoned_days = 10 abandoned_date = datetime.now() - timedelta(days=abandoned_days) abandoned_requests = db.query(models.DocumentRequest)\ .filter( models.DocumentRequest.case_id.in_([case[&quot;id&quot;] for case in cases]), models.DocumentRequest.status == 'requested', models.DocumentRequest.requested_at &lt; abandoned_date) \ .all() </code></pre> <p>My error: <code>TypeError: '&lt;' not supported between instances of 'NoneType' and 'datetime.datetime'</code></p> <p><code>abandoned_date</code> will always be a datetime, but how can I make this query skip records whose requested_at value is <code>None</code> (or in SQL, <code>NULL</code>), while still selecting records with the <code>models.DocumentRequest.requested_at &lt; abandoned_date</code> condition?</p>
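<p>To make the intent concrete, here is a self-contained reduction of what I'm after (the model and data are made up; the key line is the <code>isnot(None)</code> filter I'm considering):</p>

```python
from datetime import datetime, timedelta

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class DocumentRequest(Base):
    __tablename__ = "document_request"
    id = Column(Integer, primary_key=True)
    status = Column(String)
    requested_at = Column(DateTime, nullable=True)

# In-memory SQLite stands in for the real database
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
db = sessionmaker(bind=engine)()

db.add_all([
    DocumentRequest(status="requested", requested_at=datetime(2020, 1, 1)),
    DocumentRequest(status="requested", requested_at=None),  # should be skipped
])
db.commit()

abandoned_date = datetime.now() - timedelta(days=10)
abandoned_requests = (
    db.query(DocumentRequest)
    .filter(
        DocumentRequest.status == "requested",
        DocumentRequest.requested_at.isnot(None),  # excludes NULL requested_at
        DocumentRequest.requested_at < abandoned_date,
    )
    .all()
)
print(len(abandoned_requests))  # 1
```

<p>In SQL the NULL row would normally drop out of a <code>&lt;</code> comparison anyway, so I suspect the explicit <code>isnot(None)</code> mostly documents the intent, but I'd like confirmation.</p>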
<python><sqlalchemy>
2023-09-18 15:07:57
1
561
personwholikestocode