Dataset schema (one row per question): QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
79,046,426
| 1,838,659
|
Ruamel preserving backslashes to indicate a line break
|
<p>Does ruamel have an option to preserve backslash line breaks? Consider the following example.</p>
<ul>
<li>the text was exported using SnakeYaml and included carriage returns.</li>
<li>the wrapping and text length was such that the <code>\r\n</code> was split as part of the wrapping</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>❯ cat example.yaml
---
items:
- description: "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmode\
\ ut labore et dolore magna aliqua.\r\n\r\nUt enim ad minim veniam, quis nost. \r\
\n \r\n\r\nDuis aute irure dolor in reprehenderit in voluptate velit esse cillum dol\
\ nulla pariatur.\r\n\r\nExcepteur sint occaecat cupidatat non proident, sunt ."
</code></pre>
<p>Using the following script to read and write the same file</p>
<pre class="lang-py prettyprint-override"><code>❯ cat parser_yaml.py
from ruamel.yaml import YAML
# Create a YAML instance
yaml = YAML()
# Read the YAML file
with open('example.yaml', 'r') as file:
    data = yaml.load(file)
# Here you can modify `data` if needed, for example:
# data['new_key'] = 'new_value'
# Write the modified data back to the YAML file
with open('example_modified.yaml', 'w') as file:
    yaml.dump(data, file)
print("YAML file read and written successfully.")
</code></pre>
<p>Upon rewriting, the <code>\</code> line endings are lost, meaning the <code>\r\n</code> is now split by a new line.</p>
<pre class="lang-bash prettyprint-override"><code>❯ python ./parser_yaml.py
YAML file read and written successfully.
❯ diff example_modified.yaml example.yaml
0a1
> ---
2,6c3,6
< - description: "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmode
< ut labore et dolore magna aliqua.\r\n\r\nUt enim ad minim veniam, quis nost. \r
< \n \r\n\r\nDuis aute irure dolor in reprehenderit in voluptate velit esse cillum
< dol nulla pariatur.\r\n\r\nExcepteur sint occaecat cupidatat non proident, sunt
< ."
---
> - description: "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmode\
> \ ut labore et dolore magna aliqua.\r\n\r\nUt enim ad minim veniam, quis nost. \r\
> \n \r\n\r\nDuis aute irure dolor in reprehenderit in voluptate velit esse cillum dol\
> \ nulla pariatur.\r\n\r\nExcepteur sint occaecat cupidatat non proident, sunt ."
</code></pre>
<p>It can no longer be parsed correctly, e.g.</p>
<pre class="lang-bash prettyprint-override"><code>❯ yq '.items[0].description' example.yaml
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmode ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nost.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dol nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt .
</code></pre>
<p>vs</p>
<pre class="lang-bash prettyprint-override"><code>❯ yq '.items[0].description' example_modified.yaml
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmode ut labore et dolore magna aliqua.
t enim ad minim veniam, quis nost.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dol nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt .
</code></pre>
|
<python><yaml><ruamel.yaml>
|
2024-10-02 11:05:53
| 1
| 738
|
Steve-B
|
79,046,349
| 5,790,653
|
How to show difference between two list of dictionaries
|
<p>I have these two lists:</p>
<pre class="lang-py prettyprint-override"><code>list1 = [
    {'name': 'one', 'email': 'one@gmail.com', 'phone': '111'},
    {'name': 'two', 'email': 'two@gmail.com', 'phone': '111'},
    {'name': 'three', 'email': 'three@gmail.com', 'phone': '333'},
    {'name': 'four', 'email': 'four@gmail.com', 'phone': '444'},
]
list2 = [
    {'first_name': 'three', 'email': 'three@gmail.com', 'phone_number': '333'},
    {'first_name': 'four', 'email': 'four@gmail.com', 'phone_number': '444'},
    {'first_name': 'five', 'email': 'five@gmail.com', 'phone_number': '555'},
]
</code></pre>
<p>I know how to find difference between two lists based on a key:</p>
<pre class="lang-py prettyprint-override"><code>list1_only = list(set([x['phone'] for x in list1]) - set([x['phone_number'] for x in list2]))
</code></pre>
<p>But the current output is:</p>
<pre><code>['111']
</code></pre>
<p>But I'm going to have this output finally:</p>
<pre><code>[
{'name': 'one', 'email': 'one@gmail.com', 'phone': '111'},
{'name': 'two', 'email': 'two@gmail.com', 'phone': '111'}
]
</code></pre>
<p>I know this way, but I think (and am sure) there's a better way for it:</p>
<pre><code>for l in list1:
for d in diff:
if d == l['phone']:
print(l)
</code></pre>
<p>I read other questions like <a href="https://stackoverflow.com/questions/19755376/getting-the-difference-delta-between-two-lists-of-dictionaries">this</a>, but it prints the whole list (not the difference values):</p>
<pre class="lang-py prettyprint-override"><code>[x for x in list2 if x['phone_number'] not in list1]
</code></pre>
<p>Would you please help me regarding this?</p>
<p>Is there a one-line way to replace <code>list1_only = list(set([x['phone'] for x in list1]) - set([x['phone_number'] for x in list2]))</code>?</p>
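<p>A sketch of one possible approach (not from the original post): build the set of <code>phone_number</code> values once, then filter <code>list1</code> against it, keeping the whole dictionaries.</p>

```python
list1 = [
    {'name': 'one', 'email': 'one@gmail.com', 'phone': '111'},
    {'name': 'two', 'email': 'two@gmail.com', 'phone': '111'},
    {'name': 'three', 'email': 'three@gmail.com', 'phone': '333'},
    {'name': 'four', 'email': 'four@gmail.com', 'phone': '444'},
]
list2 = [
    {'first_name': 'three', 'email': 'three@gmail.com', 'phone_number': '333'},
    {'first_name': 'four', 'email': 'four@gmail.com', 'phone_number': '444'},
    {'first_name': 'five', 'email': 'five@gmail.com', 'phone_number': '555'},
]

# Phones present in list2; membership tests against a set are O(1)
phones2 = {d['phone_number'] for d in list2}

# Keep the full dictionaries from list1 whose phone is absent from list2
list1_only = [d for d in list1 if d['phone'] not in phones2]
```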
|
<python>
|
2024-10-02 10:44:06
| 3
| 4,175
|
Saeed
|
79,046,036
| 1,251,549
|
How to output a single quote in dbt post hook macros with Jinja?
|
<p>Here is post hook for dbt model:</p>
<pre><code>{{
config(
post_hook=[
'{{ execute_if_exists("delete from " ~ this ~ " where day = date('' " ~ var("mydatevar") ~ " '')") }}',
]
)
}}
</code></pre>
<p>Please note the <code>''</code> in the string. Next:</p>
<pre><code>{% macro execute_if_exists(query) -%}
{{ query }}
{%- endmacro %}
</code></pre>
<p>And I got</p>
<pre><code>delete from catalog.myschema.mytable where day = date( 2024-10-02 )
</code></pre>
<p>While expected is:</p>
<pre><code>delete from catalog.myschema.mytable where day = date(' 2024-10-02 ')
</code></pre>
<p>I have tried <code>\'</code> and got an error. I have switched <code>"</code> and <code>'</code>, again with no effect.</p>
<p><strong>UPDATED</strong></p>
<ol>
<li><code>'{{ execute_if_exists("delete from " ~ this ~ " where day = date(' " ~ var("mydatevar") ~ " ')") }}</code> - compilation error</li>
<li><code>"{{ execute_if_exists('delete from ' ~ this ~ ' where day = date(' ' ~ var('mydatevar') ~ ' ')') }}"</code> - same effect</li>
<li><code>"{{ execute_if_exists('delete from ' ~ this ~ ' where day = date(' ' ~ var('mydatevar') ~ ' ')') }}"</code> - also compilation error</li>
</ol>
|
<python><jinja2><dbt>
|
2024-10-02 08:57:43
| 1
| 33,944
|
Cherry
|
79,045,596
| 1,641,112
|
How can I tame pytest's separators in the output without losing progress updates?
|
<p>pytest output has lots of headers that span the width of the terminal:</p>
<pre><code>========================================= FAILURES =========================================
</code></pre>
<pre><code>____________________________________ test_advanced_fail ____________________________________
</code></pre>
<p>If I filter them out after collecting the output from the command, I no longer get the real-time progress updates to the terminal.</p>
<p>Is there any way to modify the appearance of these separators without losing the ability to see real-time running of the tests?</p>
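<p>One way to keep real-time output while taming the separators, sketched below (the helper names are illustrative, not part of pytest): stream the child process line by line and rewrite only the full-width separator lines as they arrive.</p>

```python
import re
import subprocess
import sys

# Full-width pytest separators: runs of '=' or '_' with an optional title
SEPARATOR = re.compile(r"^(=+|_+)\s?(.*?)\s?(=+|_+)$")

def shorten(line: str) -> str:
    """Rewrite a separator line into a short one; pass other lines through."""
    m = SEPARATOR.match(line.rstrip("\n"))
    if not m:
        return line
    title = m.group(2)
    return f"-- {title} --\n" if title else "-" * 20 + "\n"

def run_filtered(cmd):
    """Run cmd, echoing stdout in real time with separators shortened."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        sys.stdout.write(shorten(line))
    return proc.wait()
```

Because the loop consumes the pipe line by line, progress output still appears as tests run, unlike post-processing the captured output.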
|
<python><pytest>
|
2024-10-02 06:37:10
| 0
| 7,553
|
StevieD
|
79,045,569
| 195,787
|
NumPy Typing of Array (Vector, Matrix) of Floats
|
<p>What's the proper way to add type hints to a function with NumPy arrays?<br />
Specifically, hint that the types are vectors / matrices of any supported float.</p>
<pre class="lang-py prettyprint-override"><code>def MyFun( myVectorFloat: <Typing>, myMatrixFloat: <Typing> ):
some code...
</code></pre>
<p>So I want to type hint <code>myVectorFloat</code> as a vector (<code>len(myVectorFloat.shape) == 1</code>) and <code>myMatrixFloat</code> as a matrix (<code>len(myMatrixFloat.shape) == 2</code>).</p>
<p>I want support for <code>Float16</code>, <code>Float32</code> and <code>Float64</code>.</p>
<p>What is the proper way to achieve that?</p>
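<p>A hedged sketch of the dtype side of this: <code>numpy.typing.NDArray</code> can constrain the scalar type (<code>np.floating</code> covers float16, float32 and float64), but it cannot express vector-versus-matrix shape, so that part is left to the alias name and an optional runtime check here. The alias and function names are illustrative.</p>

```python
import numpy as np
from numpy.typing import NDArray

# Dtype-only aliases: np.floating covers float16, float32 and float64.
# NDArray cannot encode the shape, so "vector" vs "matrix" lives in the
# alias name and, if needed, in a runtime check.
FloatVector = NDArray[np.floating]
FloatMatrix = NDArray[np.floating]

def my_fun(my_vector_float: FloatVector, my_matrix_float: FloatMatrix):
    # runtime shape check, since the static types cannot enforce it
    assert my_vector_float.ndim == 1 and my_matrix_float.ndim == 2
    return (my_matrix_float @ my_vector_float).sum()
```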
|
<python><numpy><python-typing>
|
2024-10-02 06:25:39
| 0
| 5,123
|
Royi
|
79,045,488
| 1,609,710
|
What is the best way to access immutable part when subclassing immutable type?
|
<p>Suppose that you want to create a new subclass for <code>float</code>, using the following code:</p>
<pre><code>class MyFloat(float):
def __new__(cls,number,extra):
return super().__new__(cls,number)
def __init__(self,number,extra):
float.__init__(number)
self.extra = extra
</code></pre>
<p>This, from what I understand, is good practice for this part of the class. And if we stop here, then for <code>x=MyFloat(0.0,'hi')</code>, <code>x+1</code> will evaluate to 1.0 (as a float), as expected.</p>
<p>Now, suppose that we want it to instead return a MyFloat equal to <code>MyFloat(1.0,'hi')</code>. A first pass thought would be to add</p>
<pre><code> def __add__(self,other):
return MyFloat(self+other,self.extra)
</code></pre>
<p>but of course, if you try this, it will result in a recursion error, as it calls itself. (yes, I'm aware that adding two MyFloats will have only the first one's <code>extra</code>, this is basically a minimal example of what I'm asking).</p>
<p>There are a few obvious ways to handle it, but they all feel very hacky. There's the conversion to float:</p>
<pre><code> def __add__(self,other):
return MyFloat(float(self)+other,self.extra)
</code></pre>
<p>or calling <code>float.__add__</code> instead of using <code>+</code>, as:</p>
<pre><code> def __add__(self,other):
return MyFloat(float.__add__(self,other),self.extra)
</code></pre>
<p>Is there a "more pythonic", or perhaps more efficient, way to handle this kind of thing?</p>
<p>And what if I want to access the immutable part in another context? For example, if I want the <code>extra</code> part to be updated to <code>f"{self+1.0}"</code> for some reason, where <code>self</code> here is the immutable part, how do I get that part set up?</p>
<p><strong>Update</strong>: I've done a bit of speed testing, and I've found something interesting. Using <code>float.__add__</code> (or <code>super().__add__</code>) is relatively slow, typically taking more than 20% more time than my baseline code (which performs all of the same calculations, etc, but takes the inputs as separate parts, rather than as a single variable). Casting to float with <code>float(self)</code> and <code>float(other)</code> does better, typically around 15% slower than the baseline. It's about 12% slower than baseline to do <code>self.real</code> and <code>other.real</code>, which improves further...</p>
<p>Strangely, the fastest approach I've found is to use <code>+self</code> and <code>+other</code> - this gets it down to around 6-8% slower than baseline. I'm not sure why it's faster than <code>float(self)</code>, since from what I can tell, the underlying code should basically be exactly the same, but it's quite consistent.</p>
<p>I'd be interested to know whether anyone knows a more efficient approach than this.</p>
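<p>The <code>+self</code> variant from the update can be sketched as a complete minimal class (illustrative, and matching the question's simplification that the left operand's <code>extra</code> wins): <code>float.__pos__</code> returns a plain <code>float</code> even for a subclass, so the inner addition never re-enters <code>MyFloat.__add__</code>.</p>

```python
class MyFloat(float):
    def __new__(cls, number, extra):
        return super().__new__(cls, number)

    def __init__(self, number, extra):
        self.extra = extra

    def __add__(self, other):
        # +self invokes float.__pos__, which returns a plain float,
        # so the addition below uses float arithmetic and cannot recurse
        return MyFloat((+self) + other, self.extra)

x = MyFloat(0.0, 'hi')
y = x + 1   # a MyFloat equal to 1.0, carrying extra='hi'
```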
|
<python><subclassing>
|
2024-10-02 05:48:59
| 0
| 733
|
Glen O
|
79,045,268
| 19,090,490
|
Generate all possible Boolean cases from n Boolean Values
|
<p>Suppose there are two fields, each holding a Boolean value:</p>
<ul>
<li>x_field(bool value)</li>
<li>y_field(bool value)</li>
</ul>
<p>I want to generate all cases that can be represented as a combination of multiple Boolean values.</p>
<p>For example, there are a total of 4 combinations that can be expressed by two Boolean fields as above.</p>
<ol>
<li>x_field(true), y_field(true)</li>
<li>x_field(true), y_field(false)</li>
<li>x_field(false), y_field(true)</li>
<li>x_field(false), y_field(false)</li>
</ol>
<p>Given an array with two fields that are Boolean type, can you generate all cases and express them in this form?</p>
<pre><code># before calculate ...
fields = ["x_field", "y_field"]
# after calculate ...
result = [{"x_field": True, "y_field": True},
{"x_field": True, "y_field": False},
{"x_field": False, "y_field": True},
{"x_field": False, "y_field": False},]
</code></pre>
<hr />
<p><strong>My Attempt</strong></p>
<p>I thought I could solve this problem with the itertools module, but I wasn't sure which function to use.</p>
<p>I tried to implement it using various itertools functions, but failed.</p>
<pre><code>from itertools import combinations, permutations
boolean_fields = ["x_field", "y_field"]
# -->
boolean_fields = [True, False, True, False]
x = list(combination(boolean_fields, 2))
</code></pre>
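<p>What is being asked for is a Cartesian product rather than a combination or permutation, so <code>itertools.product</code> fits; a minimal sketch:</p>

```python
from itertools import product

fields = ["x_field", "y_field"]

# product([True, False], repeat=n) yields every n-tuple of booleans;
# zip pairs each tuple with the field names, dict builds the case.
result = [dict(zip(fields, values))
          for values in product([True, False], repeat=len(fields))]
```

This scales to any number of fields: n Boolean fields produce 2**n dictionaries.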
|
<python><math><combinatorics>
|
2024-10-02 03:33:05
| 4
| 571
|
Antoliny Lee
|
79,045,133
| 22,860,226
|
Download Instagram reel audio only with Instaloader/Python or any way
|
<p>I am trying to download the audio from the following URL:
<a href="https://www.instagram.com/reels/audio/1997779980583970/" rel="nofollow noreferrer">https://www.instagram.com/reels/audio/1997779980583970/</a>. The code below returns "Fetching metadata failed".
I am able to download the reels, but not separate audio files.</p>
<p>What shall I do?</p>
<pre><code>def get_reel_audio_data(audio_id):
loader = instaloader.Instaloader()
try:
# Fetch the audio post using the audio ID
audio_post = instaloader.Post.from_shortcode(loader.context, audio_id)
audio_url = audio_post.video_url if audio_post.is_video else audio_post.url
return audio_url, True
except Exception as e:
if settings.DEBUG:
import traceback
print(traceback.format_exc())
capture_exception(e)
return None, False
</code></pre>
|
<python><instagram><instaloader>
|
2024-10-02 01:37:22
| 2
| 411
|
JTX
|
79,045,092
| 3,334,721
|
Python class and Numba jitclass for code with Numba functions
|
<p>At some point in my code, I call a Numba function and all subsequent computations are made with Numba jitted functions until post-processing steps.</p>
<p>Over the past days, I have been looking for an efficient way to send to the Numba part of the code all the variables (booleans, integers, floats, and float arrays mostly) while trying to keep the code readable and clear. In my case, that implies limiting the number of arguments and, if possible, regrouping some variables depending on the system they refer to.</p>
<p>I identified four ways to do this:</p>
<ol>
<li><strong>brute force</strong>: sending all the variables one by one as arguments of the first called Numba function. I find this solution unacceptable, as it makes the code barely readable (a very long argument list) and is inconsistent with my wish to regroup variables,</li>
<li><strong>Numba typed dictionaries</strong> (see <a href="https://stackoverflow.com/questions/55078628/using-dictionaries-with-numba-njit-function">this post</a> for instance): I did not find this solution acceptable as I understand a given dictionary can only contain variables of similar types (dictionary of floats64 only for instance) while a given system may have related variables of different types. In addition, I observed a significant loss of performance (~ +10% computation times) with this option,</li>
<li><strong>Numba namedtuples</strong>: fairly easy to implement and use, but it is my understanding that these can only be efficiently used if defined within a Numba function and thus cannot be sent from a non jitted function/code to a jitted function without making it <a href="https://stackoverflow.com/questions/79008624/namedtuple-as-argument-of-a-numba-function?noredirect=1#comment139310120_79008624">impossible to benefit from the</a> <code>cache=True</code> option. This is a dealbreaker for me as compilation times may exceed the execution of the code itself.</li>
<li><strong>Numba @jitclass</strong>: I was initially reluctant to use classes for my code but it turns out it is very practical... but same as for the <code>namedtuples</code>, if an object from a <code>@jitclass</code> is initialized within a non jitted function, I observed that it becomes impossible to benefit from the <code>cache=True</code> option (see <a href="https://github.com/numba/numba/issues/4830" rel="nofollow noreferrer">this post</a>).</li>
</ol>
<p>In the end, none of these four alternatives allow me to do what I wanted to. I am probably missing something...</p>
<p><strong>Here is what I did in the end</strong>: I combined the use of regular Python classes and Numba <code>@jitclass</code> in order to keep the benefit of the <code>cache=True</code> option.</p>
<p>Here is my mwe:</p>
<pre><code>import numba as nb
from numba import jit
from numba.experimental import jitclass
spec_cls = [
('a', nb.types.float64),
('b', nb.types.float64),
]
# python class
class ClsWear_py(object):
def __init__(self, a, b):
self.a = a
self.b = b
# mirror Numba class
@jitclass(spec_cls)
class ClsWear(object):
def __init__(self, a, b):
self.a = a
self.b = b
def function_python(obj):
print('from the python class :', obj.a)
# call of a Numba function => this is where I must list explicitly all the keys of the python class object
oa, ob = function_numba(obj.a, obj.b)
return obj, oa, ob
@jit(nopython=True)
def function_numba(oa, ob):
# at the beginning of the Numba function, the arguments are used to define the @jitclass object
obj_nb = ClsWear(oa, ob)
print('from the numba class :', obj_nb.a)
return obj_nb.a, obj_nb.b
# main code :
obj_py = ClsWear_py(11,22)
obj_rt, a, b = function_python(obj_py)
</code></pre>
<p>The output of this code is:</p>
<pre><code>$ python mwe.py
from the python class : 11
from the numba class : 11.0
</code></pre>
<p>On the plus side :</p>
<ul>
<li>I have a clean data structure in python and Numba (use of classes)</li>
<li>I keep a fast running code and <code>cache=True</code> is working</li>
</ul>
<p>But on the down side :</p>
<ul>
<li>I must define python classes and their mirror in Numba</li>
<li>there remains one barely readable part of the code: the first call of a jitted function where all the content of my objects must be listed explicitly</li>
</ul>
<p>Am I missing something? Is there a more obvious way to do this?</p>
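<p>One way to shrink the unreadable call site can be sketched in plain Python (the <code>Wear</code> dataclass and helper are hypothetical, and the Numba decorator is only indicated in a comment so the sketch runs without Numba installed): keep a single field definition in a dataclass and unpack it at the jitted boundary with <code>astuple</code>, so the long argument list is written nowhere by hand.</p>

```python
from dataclasses import dataclass, astuple

# Hypothetical: one field order, defined exactly once
@dataclass
class Wear:
    a: float
    b: float

def call_numba_part(obj: Wear):
    # astuple() expands the fields in declaration order, so the
    # argument list stays in sync with the dataclass automatically
    return function_numba(*astuple(obj))

def function_numba(a, b):
    # imagine @jit(nopython=True, cache=True) applied here; the body
    # stands in for the real jitted computation
    return a + b
```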
|
<python><class><numba>
|
2024-10-02 01:03:42
| 1
| 403
|
Alain
|
79,044,863
| 4,388,451
|
Implement a server and client communicating via sockets in Python
|
<p>I need to implement a server and client communicating via sockets in Python. I decided to implement the server with <code>asyncio.start_server</code>. Clients connect to the server with socket connections. The server generates some messages at random moments and immediately pushes those messages to the client. All clients should receive the same messages as soon as possible and print them.</p>
<p>My implementation of server:</p>
<pre><code>import asyncio
import datetime as dt
import struct
import sys
from random import randrange
from zoneinfo import ZoneInfo
async def prepare_weather_data():
while True:
await asyncio.sleep(randrange(0, 7))
now = dt.datetime.now(ZoneInfo("Europe/Kyiv")).replace(microsecond=0).isoformat()
massage = f"{now} The temperature is {str(randrange(20, 30))} degrees Celsius"
message_encoded = massage.encode()
message_length = struct.pack('>I', len(message_encoded))
yield message_length + message_encoded
async def read_massage(reader: asyncio.StreamReader):
# Read the 4-byte length header
length_data = await reader.readexactly(4)
message_length = struct.unpack('>I', length_data)[0]
message_encoded = await reader.readexactly(message_length)
return message_encoded.decode()
async def handle_client(
_: asyncio.StreamReader,
writer: asyncio.StreamWriter,
weather_data: bytes,
):
try:
writer.write(weather_data)
await writer.drain()
except KeyboardInterrupt:
print("Closing the connection")
writer.close()
sys.exit(0)
async def main():
async def client_handler(reader, writer):
message = await read_massage(reader)
print(f"Received the message \"{message}\" that won't be handled any way")
async for weather_data in prepare_weather_data():
await handle_client(reader, writer, weather_data)
server = await asyncio.start_server(client_handler, "localhost", 8000)
async with server:
await server.serve_forever()
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>The client:</p>
<pre><code>import sys
import socket
import struct
def recvall(sock: socket.socket, n: int) -> bytes | None:
# Helper function to recv n bytes or return None if EOF is hit
data = bytearray()
while len(data) < n:
packet = sock.recv(n - len(data))
if not packet:
return None
data.extend(packet)
return data
def read_massage(sock: socket.socket) -> str | None:
# Read the 4-byte length
message_length_encoded = recvall(sock, 4)
if not message_length_encoded:
print("Socket closed")
sys.exit(0)
message_length = struct.unpack('>I', message_length_encoded)[0]
return recvall(sock, message_length).decode()
so = socket.socket(
socket.AF_INET,
socket.SOCK_STREAM,
)
so.connect(("localhost", 8000)) # Blocking
print("Connected")
try:
message = "Hello, give me a weather, please".encode()
# Pack the length of the message into 4 bytes (big-endian)
message_length = struct.pack('>I', len(message))
sent = so.send(message_length + message) # Blocking
print(f"Sent {sent} bytes")
while True:
msg = read_massage(so)
print(msg)
except KeyboardInterrupt:
so.close()
print("Socket closed")
sys.exit(0)
</code></pre>
<p>This version of the server generally works. The server generates messages and pushes them to the clients. However, all clients receive different messages! How can I modify the server to push the same message to all clients? I have tried everything I could think of.</p>
<p><strong>UPD</strong></p>
<p>I see two possible approaches.</p>
<ol>
<li><p>Store generated data in some container accessible to all clients.
Like this:</p>
<pre><code> async def handle_client(reader: asyncio.StreamReader, writer: asyncio.StreamWriter, weather_data):
_ = await read_message(reader)
try:
while True:
async for data in weather_data:
writer.write(data)
await writer.drain()
except KeyboardInterrupt:
print("Closing the connection")
writer.close()
sys.exit(0)
async def main():
weather_data = prepare_weather_data()
async def client_handler(reader, writer):
await handle_client(reader, writer, weather_data)
server = await asyncio.start_server(client_handler, "localhost", 8000)
async with server:
await server.serve_forever()
</code></pre>
</li>
</ol>
<p>However, it crashes for the second connected client as <code>asynchronous generator is already running</code>.</p>
<ol start="2">
<li>Register all connected clients and push generated data to them. I tried getting all connected clients as <code>asyncio.all_tasks()</code>. However, I didn't get how to use it.</li>
</ol>
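<p>The second approach (register clients, push to each) can be sketched without sockets as a fan-out over one <code>asyncio.Queue</code> per client; the names here (<code>subscribers</code>, <code>producer</code>) are illustrative, not from the original code. Each client drains its own queue, so every client sees every message.</p>

```python
import asyncio

subscribers: set[asyncio.Queue] = set()   # one queue per connected client

async def producer(messages):
    for msg in messages:
        for queue in subscribers:         # broadcast: every client gets every message
            queue.put_nowait(msg)
        await asyncio.sleep(0)            # let the consumers run

async def client(received, n):
    queue = asyncio.Queue()
    subscribers.add(queue)                # register on connect
    try:
        for _ in range(n):
            received.append(await queue.get())
    finally:
        subscribers.discard(queue)        # unregister on disconnect

async def main():
    got_a, got_b = [], []
    a = asyncio.create_task(client(got_a, 2))
    b = asyncio.create_task(client(got_b, 2))
    await asyncio.sleep(0)                # give both clients time to subscribe
    await producer(["msg1", "msg2"])
    await asyncio.gather(a, b)
    return got_a, got_b
```

In the real server, the per-client queue would live inside <code>client_handler</code>, and a single background task would run the generator and broadcast each item.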
|
<python><sockets><asynchronous><python-asyncio>
|
2024-10-01 22:02:02
| 2
| 1,636
|
Andriy
|
79,044,831
| 8,500,958
|
How to override __len__ return type in python?
|
<p>I am trying to implement a class <code>Foo</code> that when we call <code>len(foo)</code> will return an instance of another class <code>MyCoolIntType</code> instead of an <code>int</code>. Something like the following:</p>
<pre><code>class Foo(object):
def __len__(self) -> MyCoolIntType:
return MyCoolIntType(self.size)
</code></pre>
<p>Of course, I keep getting <code>TypeError: 'MyCoolIntType' object cannot be interpreted as an integer</code>.</p>
<p>What I imagine is happening is that when I call <code>len</code> from stdlib, the internal implementation does a type check to ensure that the result from <code>obj.__len__()</code> is an integer, and raises an exception.</p>
<p>So I guess my question can also be interpreted as "How do I re-declare a function (<code>len()</code>) from the stdlib in python?"</p>
<p>PS: I know I can work around it by just declaring a property <code>size</code> in my class, but I'm curious if it would be possible to use <code>len</code> to return <code>MyCoolIntType</code></p>
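<p>A sketch of why this happens and the usual workaround: <code>len()</code> requires an integral result and normalizes it to a plain <code>int</code> before returning, so an <code>int</code> subclass is accepted but flattened; a separate property is the reliable way to expose the rich type. The names below are illustrative.</p>

```python
class MyCoolIntType(int):
    """An int subclass: len() accepts it, because it is integral."""
    def describe(self):
        return f"cool size: {int(self)}"

class Foo:
    def __init__(self, size):
        self.size = size

    def __len__(self):
        # Accepted by len() only because MyCoolIntType subclasses int;
        # len() still converts the result to a plain int for the caller.
        return MyCoolIntType(self.size)

    @property
    def cool_size(self):
        # The dependable way to hand callers the rich type.
        return MyCoolIntType(self.size)

foo = Foo(3)
```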
|
<python><magic-function>
|
2024-10-01 21:46:57
| 0
| 932
|
Icaro Mota
|
79,044,764
| 210,867
|
How do I call async code from `__iter__` method?
|
<p>New to async.</p>
<p>I had a class called PGServer that you could iterate to get a list of database names:</p>
<pre class="lang-py prettyprint-override"><code>class PGServer:
# ...connection code omitted...
def _databases( self ):
curr = self.connection().cursor()
curr.execute( "SELECT datname FROM pg_database ORDER BY datname" )
return [ row[ 0 ] for row in curr.fetchall() ]
def __iter__( self ):
return iter( self._databases() )
dbs = PGServer( ...connection params... )
for db in dbs:
print( db )
</code></pre>
<p>It worked fine with psycopg. Then I switched to asyncpg:</p>
<pre class="lang-py prettyprint-override"><code>class PGServer:
# ...connection code omitted...
async def _databases( self ):
conn = await self.connection()
rows = await conn.fetch( "SELECT datname FROM pg_database ORDER BY datname" )
return [ row[ 0 ] for row in rows ]
dbs = PGServer( ...connection params... )
for db in await dbs._databases():
print( db )
</code></pre>
<p>It works when I invoke <code>_databases()</code> directly, but how do I get <code>__iter__</code> working again? I can't make it async because that violates the protocol. I tried implementing <code>__aiter__</code> instead, but couldn't figure out how to make that work.</p>
<p>Some implementations that I tried:</p>
<pre class="lang-py prettyprint-override"><code>async def __aiter__( self ):
#return self._databases()
#return await self._databases()
#return aiter( self._databases() )
return aiter( await self._databases() )
</code></pre>
<p>Those all generated the following error:</p>
<pre><code>TypeError: 'async for' received an object from __aiter__ that does not implement __anext__: coroutine
</code></pre>
<h2>UPDATE</h2>
<p>I just created an implementation that seems to work:</p>
<pre class="lang-py prettyprint-override"><code>async def __aiter__( self ):
for db in await self._databases():
yield name
</code></pre>
<p>I don't know if that's optimal or idiomatic, though.</p>
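<p>The async-generator <code>__aiter__</code> above is indeed the idiomatic pattern: calling it returns an async generator, which implements <code>__anext__</code>, which is exactly what <code>async for</code> needs. A self-contained sketch (with <code>_databases()</code> stubbed so it runs without a database, and yielding the loop variable <code>db</code>):</p>

```python
import asyncio

class PGServer:
    async def _databases(self):
        # stand-in for the asyncpg query, so the pattern runs anywhere
        return ["alpha", "beta"]

    async def __aiter__(self):
        # an async generator function: calling __aiter__ returns an
        # object with __anext__, satisfying the `async for` protocol
        for db in await self._databases():
            yield db

async def main():
    return [db async for db in PGServer()]
```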
<h2>UPDATE 2</h2>
<p>Unless someone can come up with something better, I'm just going to give up on having an <code>__iter__</code> and saying <code>for db in dbs:</code>, and instead just be more explicit:</p>
<pre class="lang-py prettyprint-override"><code>for db in await dbs.databases():
...
</code></pre>
<p>(I dropped the underscore because in this new context <code>databases()</code> is now the public API.)</p>
|
<python><python-asyncio><asyncpg>
|
2024-10-01 21:15:56
| 1
| 8,548
|
odigity
|
79,044,650
| 3,903,479
|
Print playwright browser in pytest terminal header
|
<p>I have pytest set up with playwright and can run it with a docker compose command:</p>
<pre class="lang-bash prettyprint-override"><code>docker compose run test pytest /tests --browser-channel chrome
</code></pre>
<p>Which outputs:</p>
<pre><code>============================ test session starts =============================
platform linux -- Python 3.12.6, pytest-8.3.3, pluggy-1.5.0
rootdir: /tests
plugins: base-url-2.1.0, playwright-0.5.2
collected 1 item
tests/test_example.py . [100%]
============================= 1 passed in 0.02s ==============================
</code></pre>
<p>I want to include the browser names in each session. I've tried using the <a href="https://playwright.dev/python/docs/test-runners#fixtures" rel="nofollow noreferrer"><code>browser_name</code> fixture</a> in the <a href="https://docs.pytest.org/en/stable/reference/reference.html#reporting-hooks" rel="nofollow noreferrer">pytest_report_header hook</a>, but fixtures don't work in the report hooks and it errors out.</p>
|
<python><pytest><playwright>
|
2024-10-01 20:26:54
| 1
| 1,942
|
GammaGames
|
79,044,503
| 4,727,280
|
Can Pyright/MyPy deduce the type of an entry of an ndarray?
|
<p>How can I annotate an <code>ndarray</code> so that Pyright/Mypy Intellisense can deduce the type of an entry? What can I fill in for <code>???</code> in</p>
<pre><code>x: ??? = np.array([1, 2, 3], dtype=int)
</code></pre>
<p>so that</p>
<pre><code>y = x[0]
</code></pre>
<p>is identified as an integer as rather than <code>Any</code>?</p>
|
<python><numpy><python-typing><mypy><pyright>
|
2024-10-01 19:32:55
| 2
| 945
|
fmg
|
79,044,359
| 7,746,472
|
How to return results from COPY INTO from scripting block in Snowflake
|
<p>How can I make a Snowflake / Snowpark scripting block return the response of a COPY INTO statement?</p>
<p>In our ELT pipeline I use Snowpark to copy data from an S3 stage into a table on a daily basis, like this:</p>
<pre><code>sql_copy_into = f"""
COPY INTO {target_table}
FROM {stage_file_name_full_path}
FILE_FORMAT = (
TYPE = CSV
COMPRESSION = 'AUTO'
FIELD_DELIMITER = {field_delimiter}
)
ON_ERROR = CONTINUE;
"""
response = session.sql(sql_copy_into)
results = response.collect()
</code></pre>
<p>Now I want to store the results of the COPY INTO in a logging table, which is working well so far.</p>
<p>However, once I wrap the COPY INTO into a Snowflake scripting block:</p>
<pre><code>sql_copy_into = f"""
BEGIN
TRUNCATE TABLE {target_table};
COPY INTO {target_table}
FROM {stage_file_name_full_path}
FILE_FORMAT = (...)
ON_ERROR = CONTINUE;
END;
"""
response = session.sql(sql_copy_into)
results = response.collect()
</code></pre>
<p>Then I get the results as</p>
<pre><code>[Row(anonymous block=None)]
</code></pre>
<p>It looks like the scripting block isn't returning anything.</p>
<p>How can I make the scripting block return the response of the COPY INTO statement?</p>
<p>I have read about <a href="https://docs.snowflake.com/developer-guide/snowflake-scripting/return" rel="nofollow noreferrer">the RETURN command</a>, but I can't figure out how this can help me here.</p>
<p>The only option I see is to send two separate SQL statements, one for the TRUNCATE and another for the COPY INTO. Then I could avoid the BEGIN / END. But there has to be a better way.</p>
<p>Any hints are greatly appreciated!</p>
|
<python><snowflake-cloud-data-platform>
|
2024-10-01 18:40:42
| 1
| 1,191
|
Sebastian
|
79,044,322
| 11,278,044
|
Conditionally slice a pandas multiindex on specific level
|
<p>For my given multi-indexed DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
np.random.randn(12),
index=[
[1,1,2,3,4,4,5,5,6,6,7,8],
[1,2,1,1,1,2,1,2,1,2,2,2],
]
)
</code></pre>
<pre class="lang-none prettyprint-override"><code>            0
1 1  1.667692
  2  0.274428
2 1  0.216911
3 1 -0.513463
4 1 -0.642277
  2 -2.563876
5 1  2.301943
  2  1.455494
6 1 -1.539390
  2 -1.344079
7 2  0.300735
8 2  0.089269
</code></pre>
<p>I would like to slice it such that I keep only the rows whose second index level contains BOTH 1 and 2:</p>
<pre class="lang-none prettyprint-override"><code>            0
1 1  1.667692
  2  0.274428
4 1 -0.642277
  2 -2.563876
5 1  2.301943
  2  1.455494
6 1 -1.539390
  2 -1.344079
</code></pre>
<p>How can I do this?</p>
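<p>One possible approach, sketched (not from the original post): <code>groupby(level=0).filter</code> keeps exactly the first-level groups whose second-level labels cover both 1 and 2.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.random.randn(12),
    index=[
        [1, 1, 2, 3, 4, 4, 5, 5, 6, 6, 7, 8],
        [1, 2, 1, 1, 1, 2, 1, 2, 1, 2, 2, 2],
    ],
)

# Keep only the first-level groups whose second-level labels include both 1 and 2
result = df.groupby(level=0).filter(
    lambda g: {1, 2} <= set(g.index.get_level_values(1))
)
```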
|
<python><pandas><multi-index>
|
2024-10-01 18:29:55
| 4
| 376
|
Kyle Carow
|
79,044,153
| 13,562,186
|
How to best solve partial differential equations in Python accurately but quickly
|
<p>I am trying to reproduce section 4.1.3, Emission from Solid Materials, from this PDF:</p>
<p><a href="https://www.rivm.nl/bibliotheek/rapporten/2017-0197.pdf" rel="nofollow noreferrer">https://www.rivm.nl/bibliotheek/rapporten/2017-0197.pdf</a></p>
<p>I produced this version of the model, and this is the best I have gotten.</p>
<pre><code># Import necessary libraries
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
# -------------------------------
# Constants
# -------------------------------
R_J_per_mol_K = 8.314 # Universal gas constant [J/(mol·K)]
# -------------------------------
# Scenario Selection
# -------------------------------
selected_scenario = 1 # Change this to select scenarios (1, 2, or 3)
# -------------------------------
# Input Parameters
# -------------------------------
# General Parameters (Common across all scenarios)
body_weight_kg = 60 # Body weight [kg]
frequency_per_year = 1 # Frequency of exposure [per year]
# Scenario Specific Parameters
if selected_scenario == 1:
# Scenario 1 (Base Scenario)
product_surface_area_cm2 = 10 # Product surface area [cm²]
product_thickness_mm = 5 # Product thickness [mm]
product_density_g_per_cm3 = 0.1 # Product density [g/cm³]
diffusion_coefficient_m2_per_s = 1E-08 # Diffusion coefficient [m²/s]
weight_fraction_substance = 0.6 # Weight fraction of the substance [fraction]
product_air_partition_coefficient = 0.5 # Product/air partition coefficient [-]
room_volume_m3 = 0.5 # Room volume [m³]
ventilation_rate_per_hour = 58 # Ventilation rate [per hour]
inhalation_rate_m3_per_hour = 10 # Inhalation rate [m³/hr]
start_exposure_h = 1 # Start exposure time [hours]
exposure_duration_h = 2 # Exposure duration [hours]
mass_transfer_coefficient_m_per_h = 10 # Mass transfer coefficient [m/h]
#Absorption
absorption_fraction = 1 # Absorption fraction [fraction]
# -------------------------------
# Unit Conversions
# -------------------------------
product_surface_area_m2 = product_surface_area_cm2 / 10000 # Convert cm² to m²
product_thickness_m = product_thickness_mm / 1000 # Convert mm to m
product_density_kg_per_m3 = product_density_g_per_cm3 * 1000 # Convert g/cm³ to kg/m³
mass_transfer_coefficient_m_per_s = mass_transfer_coefficient_m_per_h / 3600 # Convert m/h to m/s
ventilation_rate_per_s = ventilation_rate_per_hour / 3600 # Convert per hour to per second
start_exposure_s = start_exposure_h * 3600 # Convert hours to seconds
exposure_duration_s = exposure_duration_h * 3600 # Convert hours to seconds
total_emission_time_s = start_exposure_s + exposure_duration_s # Total time to simulate [s]
frequency_per_day = frequency_per_year / 365 # Frequency per day
inhalation_rate_m3_per_s = inhalation_rate_m3_per_hour / 3600 # Convert inhalation rate from m³/h to m³/s
# -------------------------------
# Initial Concentrations
# -------------------------------
volume_material_m3 = product_surface_area_m2 * product_thickness_m # Volume of material [m³]
mass_material_kg = volume_material_m3 * product_density_kg_per_m3 # Mass of material [kg]
mass_substance_initial_kg = mass_material_kg * weight_fraction_substance # Initial mass of substance [kg]
concentration_initial_kg_per_m3 = mass_substance_initial_kg / volume_material_m3 # Initial concentration [kg/m³]
N = 1000 # Reduced number of layers for faster computation
dx = product_thickness_m / (N - 1) # Spatial step size [m]
t_eval = np.linspace(0, total_emission_time_s, 2000) # Time points for evaluation
# -------------------------------
# Emission Model Definition
# -------------------------------
# Precompute constants
D = diffusion_coefficient_m2_per_s
hm = mass_transfer_coefficient_m_per_s
K = product_air_partition_coefficient
S = product_surface_area_m2
V = room_volume_m3
q = ventilation_rate_per_s
def emission_model(t, y):
    C = y[:-1]     # Concentration in the material layers [kg/m³]
    y_air = y[-1]  # Air concentration [kg/m³]
    dCdt = np.zeros_like(C)
    # No-flux boundary condition at x=0 (back surface)
    dCdt[0] = D * (C[1] - C[0]) / dx**2  # Diffusion at back surface [kg/(m³·s)]
    # Diffusion in the material
    for i in range(1, N - 1):
        dCdt[i] = D * (C[i + 1] - 2 * C[i] + C[i - 1]) / dx**2  # Diffusion in layers [kg/(m³·s)]
    # Flux boundary condition at x=L (front surface)
    J_diff = -D * (C[N - 1] - C[N - 2]) / dx  # Diffusion flux at surface [kg/(m²·s)] (computed but unused)
    J_air = hm * (C[N - 1] / K - y_air)       # Mass transfer to air [kg/(m²·s)]
    dCdt[N - 1] = D * (C[N - 2] - C[N - 1]) / dx**2 - J_air / dx  # Surface layer concentration change [kg/(m³·s)]
    dy_air_dt = (S / V) * hm * (C[N - 1] / K - y_air) - q * y_air  # Air concentration change [kg/(m³·s)]
    return np.concatenate((dCdt, [dy_air_dt]))
# -------------------------------
# Initial Conditions
# -------------------------------
C0 = np.full(N, concentration_initial_kg_per_m3) # Initial concentration in material layers [kg/m³]
y_air_0 = 0.0 # Initial air concentration [kg/m³]
y0 = np.concatenate((C0, [y_air_0])) # Initial condition vector
# -------------------------------
# Solve the ODE system with BDF solver for speed and accuracy
# -------------------------------
solution = solve_ivp(emission_model, [0, total_emission_time_s], y0, t_eval=t_eval, method='BDF', vectorized=False)
# Extract results
C_material = solution.y[:-1, :] # Material concentrations over time [kg/m³]
y_air = solution.y[-1, :] # Air concentrations over time [kg/m³]
time = solution.t # Time vector [s]
# -------------------------------
# Extract Air Concentration during Exposure
# -------------------------------
exposure_indices = np.where((time >= start_exposure_s) & (time <= start_exposure_s + exposure_duration_s))[0]
y_air_exposure = y_air[exposure_indices] # Air concentrations during exposure period [kg/m³]
time_exposure = time[exposure_indices] # Time during exposure period [s]
# Check for valid exposure period
if len(exposure_indices) == 0:
print("No exposure period found. Check start and duration times.")
exit()
# -------------------------------
# Results Formatting
# -------------------------------
def format_value(value):
"""Format a number in scientific notation with one decimal place."""
return f"{value:.1e}"
# -------------------------------
# Display Results
# -------------------------------
C_air_mg_per_m3 = y_air_exposure * 1e6 # Convert air concentration to mg/m³
C_mean_event_mg_per_m3 = np.trapz(C_air_mg_per_m3, time_exposure) / (time_exposure[-1] - time_exposure[0]) # Mean event concentration [mg/m³]
C_mean_day_mg_per_m3 = (C_mean_event_mg_per_m3 * exposure_duration_s) / (24 * 3600) # Mean concentration on day of exposure [mg/m³]
C_year_avg_mg_per_m3 = C_mean_day_mg_per_m3 * frequency_per_day # Year average concentration [mg/m³]
Inhalation_volume_event_m3 = inhalation_rate_m3_per_s * exposure_duration_s # Inhalation volume during exposure [m³]
Dose_external_event_mg_per_kg_bw = (C_mean_event_mg_per_m3 * Inhalation_volume_event_m3) / body_weight_kg # External event dose [mg/kg bw]
Dose_external_day_mg_per_kg_bw = Dose_external_event_mg_per_kg_bw # External dose on day of exposure [mg/kg bw]
Dose_internal_event_mg_per_kg_bw = Dose_external_event_mg_per_kg_bw * absorption_fraction # Internal event dose [mg/kg bw]
Dose_internal_day_mg_per_kg_bw = Dose_internal_event_mg_per_kg_bw # Internal dose on day of exposure [mg/kg bw]
Dose_internal_year_avg_mg_per_kg_bw_per_day = Dose_internal_day_mg_per_kg_bw * frequency_per_day # Internal year average dose [mg/kg bw/day]
# Display Results
print(f"Mean Event Concentration: {format_value(C_mean_event_mg_per_m3)} mg/m³")
print(f"Mean Concentration on Day of Exposure: {format_value(C_mean_day_mg_per_m3)} mg/m³")
print(f"Year Average Concentration: {format_value(C_year_avg_mg_per_m3)} mg/m³")
print(f"External Event Dose: {format_value(Dose_external_event_mg_per_kg_bw)} mg/kg bw")
print(f"External Dose on Day of Exposure: {format_value(Dose_external_day_mg_per_kg_bw)} mg/kg bw")
print(f"Internal Event Dose: {format_value(Dose_internal_event_mg_per_kg_bw)} mg/kg bw")
print(f"Internal Dose on Day of Exposure: {format_value(Dose_internal_day_mg_per_kg_bw)} mg/kg bw/day")
print(f"Internal Year Average Dose: {format_value(Dose_internal_year_avg_mg_per_kg_bw_per_day)} mg/kg bw/day")
# -------------------------------
# Additional Calculations for Plots
# -------------------------------
total_duration_s = start_exposure_s + exposure_duration_s # Total duration [s]
t_s_plot = np.linspace(0, total_duration_s, num=1000) # Time vector for plotting [s]
C_t_kg_per_m3 = np.interp(t_s_plot, time, y_air) # Interpolated air concentration over time [kg/m³]
C_t_mg_per_m3 = C_t_kg_per_m3 * 1e6 # Convert air concentration to mg/m³
delta_t_s = t_s_plot[1] - t_s_plot[0] # Time step [s]
delta_inhaled_volume_m3 = inhalation_rate_m3_per_s * delta_t_s # Inhaled volume per time step [m³]
exposure_mask = t_s_plot >= start_exposure_s # Mask for exposure period
incremental_external_dose_mg_per_kg_bw = np.zeros_like(t_s_plot) # Initialize dose array
incremental_external_dose_mg_per_kg_bw[exposure_mask] = (C_t_mg_per_m3[exposure_mask] * delta_inhaled_volume_m3) / body_weight_kg # Incremental dose [mg/kg bw]
cumulative_external_dose_mg_per_kg_bw = np.cumsum(incremental_external_dose_mg_per_kg_bw) # Cumulative external dose [mg/kg bw]
cumulative_internal_dose_mg_per_kg_bw = cumulative_external_dose_mg_per_kg_bw * absorption_fraction # Cumulative internal dose [mg/kg bw]
# -------------------------------
# Visualization
# -------------------------------
fig, axs = plt.subplots(1, 3, figsize=(24, 6)) # Create a figure with three subplots
# 1. Air Concentration Over Time
axs[0].plot(t_s_plot / 3600, C_t_mg_per_m3, label='Air Concentration (mg/m³)', color='blue')
axs[0].set_title('Air Concentration Over Time')
axs[0].set_xlabel('Time (hours)')
axs[0].set_ylabel('Concentration (mg/m³)')
axs[0].grid(True)
axs[0].legend()
axs[0].axvline(x=start_exposure_s / 3600, color='red', linestyle='--', label='Exposure Start')
axs[0].axvline(x=(start_exposure_s + 15 * 60) / 3600, color='green', linestyle='--', label='15 min after Exposure Start')
axs[0].legend()
# 2. Cumulative External Dose Over Time
axs[1].plot(t_s_plot / 3600, cumulative_external_dose_mg_per_kg_bw, label='Cumulative External Dose (mg/kg bw)', color='orange')
axs[1].set_title('External Dose Over Time')
axs[1].set_xlabel('Time (hours)')
axs[1].set_ylabel('Dose (mg/kg bw)')
axs[1].grid(True)
axs[1].legend()
axs[1].axvline(x=start_exposure_s / 3600, color='red', linestyle='--', label='Exposure Start')
# 3. Cumulative Internal Dose Over Time (Adjusted for absorption)
axs[2].plot(t_s_plot / 3600, cumulative_internal_dose_mg_per_kg_bw, label='Cumulative Internal Dose (mg/kg bw)', color='green')
axs[2].set_title('Internal Dose Over Time')
axs[2].set_xlabel('Time (hours)')
axs[2].set_ylabel('Dose (mg/kg bw)')
axs[2].grid(True)
axs[2].legend()
axs[2].axvline(x=start_exposure_s / 3600, color='red', linestyle='--', label='Exposure Start')
plt.tight_layout() # Adjust layout for clarity
plt.show() # Show the plots
# -------------------------------
# End of Script
# -------------------------------
</code></pre>
<p>RESULTS:</p>
<pre><code>Mean Event Concentration: 1.3e-01 mg/m³
Mean Concentration on Day of Exposure: 1.1e-02 mg/m³
Year Average Concentration: 2.9e-05 mg/m³
External Event Dose: 4.3e-02 mg/kg bw
External Dose on Day of Exposure: 4.3e-02 mg/kg bw
Internal Event Dose: 4.3e-02 mg/kg bw
Internal Dose on Day of Exposure: 4.3e-02 mg/kg bw/day
Internal Year Average Dose: 1.2e-04 mg/kg bw/day
</code></pre>
<p><a href="https://i.sstatic.net/Ixbh6ffW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ixbh6ffW.png" alt="enter image description here" /></a></p>
<p>EXPECTED:</p>
<p><a href="https://i.sstatic.net/ZzsJdAmS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZzsJdAmS.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Mnwl3vpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mnwl3vpB.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/wi5hPPcY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wi5hPPcY.png" alt="enter image description here" /></a></p>
<pre><code>Mean event concentration
1.2 × 10⁻¹ mg/m³
average air concentration on exposure event. Note: depends strongly on chosen exposure duration
Mean concentration on day of exposure
9.6 × 10⁻³ mg/m³
average air concentration over the day (accounts for the number of events on one day)
Year average concentration
2.6 × 10⁻⁵ mg/m³
mean daily air concentration averaged over a year
External event dose
3.8 × 10⁻² mg/kg bw
the amount that can potentially be absorbed per kg body weight during one event
External dose on day of exposure
3.8 × 10⁻² mg/kg bw
the amount that can potentially be absorbed per kg body weight during one day
Internal event dose
3.8 × 10⁻² mg/kg bw
absorbed dose per kg body weight during one exposure event
Internal dose on day of exposure
3.8 × 10⁻² mg/kg bw/day
absorbed dose per kg body weight during one day. Note: these can be higher than the ‘event dose’ for exposure frequencies larger than 1 per day.
Internal year average dose
1.1 × 10⁻⁴ mg/kg bw/day
daily absorbed dose per kg body weight averaged over a year.
</code></pre>
<p>I have two issues:</p>
<ol>
<li>The results don't match the expected values as closely as I would like.</li>
<li>It can be very slow to run. I have adapted it to be quicker, but if I increase the number of layers to get more accurate results, it becomes painfully slow and not that much more accurate.</li>
</ol>
<p>When I compare this to the web-based application I am using to get the expected results, those results appear almost instantly.</p>
<p>For readers familiar with PDEs/ODEs and mass transfer coefficients, what is the best way to solve this?</p>
<p>I feel this script is perhaps a very inefficient way to reproduce the model.</p>
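<p>For reference, a hedged sketch of the same right-hand side with the Python-level layer loop replaced by NumPy slicing, which is usually the main speedup here (parameter values mirror scenario 1; <code>emission_rhs</code> is an illustrative name, not from the report):</p>

```python
import numpy as np

# Illustrative parameter values mirroring scenario 1 of the question
D = 1e-8                   # diffusion coefficient [m^2/s]
hm = 10 / 3600             # mass transfer coefficient [m/s]
K = 0.5                    # product/air partition coefficient [-]
S = 10 / 10000             # product surface area [m^2]
V = 0.5                    # room volume [m^3]
q = 58 / 3600              # ventilation rate [1/s]
N = 1000
dx = (5 / 1000) / (N - 1)  # spatial step [m]

def emission_rhs(t, y):
    C, c_air = y[:-1], y[-1]
    dCdt = np.empty_like(C)
    # Interior nodes: one vectorized second difference instead of a loop
    dCdt[1:-1] = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2
    dCdt[0] = D * (C[1] - C[0]) / dx**2    # no-flux back surface
    J_air = hm * (C[-1] / K - c_air)       # flux into room air [kg/(m^2 s)]
    dCdt[-1] = D * (C[-2] - C[-1]) / dx**2 - J_air / dx
    dc_air = (S / V) * J_air - q * c_air
    return np.append(dCdt, dc_air)
```

<p>This can be passed to <code>solve_ivp</code> with <code>method='BDF'</code> exactly as before. A further common speedup is supplying the tridiagonal Jacobian sparsity pattern via the <code>jac_sparsity</code> argument, so the implicit solver does not build and factor a dense Jacobian.</p>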
|
<python><ode><pde>
|
2024-10-01 17:31:20
| 1
| 927
|
Nick
|
79,044,145
| 12,439,683
|
What do ** (double star/asterisk) and * (star/asterisk) inside square brackets mean for class and function declarations in Python 3.12+?
|
<p>What does <code>T, *Ts, **P</code> mean when they are used in square brackets directly after a class or function name or with the <code>type</code> keyword?</p>
<pre class="lang-py prettyprint-override"><code>class ChildClass[T, *Ts, **P]: ...
def foo[T, *Ts, **P](arg: T) -> Callable[P, tuple[T, *Ts]]: ...
type vat[T, *Ts, **P] = Callable[P, tuple[T, *Ts]]
</code></pre>
<hr />
<p><sub>See complementary questions about ** and * for function parameters and arguments:</sub></p>
<ul>
<li><sub><a href="https://stackoverflow.com/questions/2921847/what-do-double-star-asterisk-and-star-asterisk-mean-in-a-function-call">What do ** (double star/asterisk) and * (star/asterisk) mean <strong>in a function call?</strong></a></sub></li>
<li><sub><a href="https://stackoverflow.com/q/36901/12439683">What does ** (double star/asterisk) and * (star/asterisk) do for <strong>parameters?</strong></a></sub></li>
</ul>
|
<python><generics><python-typing><type-parameter>
|
2024-10-01 17:29:52
| 1
| 5,101
|
Daraan
|
79,044,131
| 6,738,917
|
Unable to properly install Python 3.7.0 on a Ubuntu 22.04 LTS VBox machine
|
<p>I am preparing a VirtualBox machine (Ubuntu 22.04) to work on a project with a friend and it needs to use Python 3.7.0. In order to install this python version, I am using the following command:</p>
<pre><code>pyenv install 3.7.0
</code></pre>
<p>However, I get the following error message which I don't get when installing python 3.7.0 locally on my host Ubuntu 22.04 LTS machine:</p>
<pre><code>ERROR: This script does not work on Python 3.7. The minimum supported Python version is 3.8. Please use https://bootstrap.pypa.io/pip/3.7/get-pip.py.
</code></pre>
<p>When I navigate to the URL provided, it downloads a .py file. After checking it out, I executed it and pip was updated. However, that does not solve the main issue of not being able to download and set up Python 3.7.0 on the Ubuntu 22.04 virtual machine.</p>
<p>Am I missing something?</p>
<p>If I do <code>pyenv install --list</code> I can clearly see that 3.7.0 exists. Why does it not allow me to install it?</p>
<p>I would gladly appreciate some hints on how to achieve this.</p>
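<p>A hedged workaround sketch, assuming pyenv's python-build plugin honors the <code>GET_PIP_URL</code> override (which its documentation describes for exactly this kind of situation): point the build at the 3.7-compatible get-pip script before installing.</p>

```shell
# Sketch: tell python-build to fetch the 3.7-compatible get-pip.py
# instead of the default one that requires Python >= 3.8.
export GET_PIP_URL=https://bootstrap.pypa.io/pip/3.7/get-pip.py
if command -v pyenv >/dev/null 2>&1; then
    pyenv install 3.7.0
fi
```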
|
<python><python-3.x><pyenv>
|
2024-10-01 17:21:27
| 1
| 576
|
Jonalca
|
79,043,974
| 9,670,009
|
Raspberry Pi not decoding base 64 encoded data sent over BLE using Nordic UART service from a React Native app
|
<p>I'm using a React Native app to write to and read from a .ini file on a Raspberry Pi (which I call a hub) that runs a GATT server using the Nordic UART Service (NUS).</p>
<p>I'm able to connect to the RPi and send a string, "retrieveHubConfig", which is supposed to read the current .ini file contents and display them in the app. The app also sends JSON data that can be written to the .ini file automatically, without the user of the app needing to enter it. The rest of the .ini file is populated from data the user enters and is written to the RPi in the same way.</p>
<p>The Raspberry Pi receives base 64 encoded data and prints it out before and after being decoded.</p>
<p>However, the decoding process doesn't seem to work. The server prints a byte array of the JSON data and the "retrieveHubConfig" string in chunks of up to 100 characters. There's also an end-of-transmission character sent after the complete JSON data, and another one after "retrieveHubConfig". The except branch of the try/except block in the GATT server prints:</p>
<p>(first 100 characters of JSON data)</p>
<p>Error in WriteValue: Incorrect padding</p>
<p>(characters 101 - 200 of JSON data)</p>
<p>Error in WriteValue: 'utf-8' codec can't decode byte 0xdb in position 0: invalid continuation byte</p>
<p>(last 12 characters of JSON data)</p>
<p>Error in WriteValue: 'utf-8' codec can't decode byte 0xdf in position 0: invalid continuation byte</p>
<p>(all 17 characters of retrieveHubConfig)</p>
<p>Error in WriteValue: Invalid base64-encoded string: number of data characters (17) cannot be 1 more than a multiple of 4</p>
<p>Interestingly, it manages to execute the "Received base64 decoded data:" print statement for the end of transmission characters, but it's a blank string.</p>
<p>I get the same output from the Pi if I use</p>
<pre class="lang-js prettyprint-override"><code>const paddedData = command.padEnd(Math.ceil(command.length / 4) * 4, "=");
</code></pre>
<p>to try to resolve the incorrect padding problem.</p>
<p>I've tried using the <code>buffer</code> and <code>react-native-base64</code> libraries. They both produce the same output from the Pi. I expect to see the JSON data:</p>
<pre class="lang-none prettyprint-override"><code>{"hub_settings": {"hello_timer": "3000", "inter-master_multicast_address": "244.0.0.221", "neighbour_inter-master_multicast_address": "244.0.0.222"}}
</code></pre>
<p>and: <code>"retrieveHubConfig"</code></p>
<p>in the "Received base64 decoded data:" print statement.</p>
<p>Here's the output of the Raspberry Pi GATT server:</p>
<pre class="lang-none prettyprint-override"><code>Received data: dbus.Array([dbus.Byte(123), dbus.Byte(34), dbus.Byte(104), dbus.Byte(117), dbus.Byte(98), dbus.Byte(83), dbus.Byte(101), dbus.Byte(116), dbus.Byte(116), dbus.Byte(105), dbus.Byte(110), dbus.Byte(103), dbus.Byte(115), dbus.Byte(34), dbus.Byte(58), dbus.Byte(123), dbus.Byte(34), dbus.Byte(104), dbus.Byte(117), dbus.Byte(98), dbus.Byte(95), dbus.Byte(115), dbus.Byte(101), dbus.Byte(116), dbus.Byte(116), dbus.Byte(105), dbus.Byte(110), dbus.Byte(103), dbus.Byte(115), dbus.Byte(34), dbus.Byte(58), dbus.Byte(123), dbus.Byte(34), dbus.Byte(105), dbus.Byte(110), dbus.Byte(116), dbus.Byte(101), dbus.Byte(114), dbus.Byte(45), dbus.Byte(109), dbus.Byte(97), dbus.Byte(115), dbus.Byte(116), dbus.Byte(101), dbus.Byte(114), dbus.Byte(95), dbus.Byte(109), dbus.Byte(117), dbus.Byte(108), dbus.Byte(116), dbus.Byte(105), dbus.Byte(99), dbus.Byte(97), dbus.Byte(115), dbus.Byte(116), dbus.Byte(95), dbus.Byte(97), dbus.Byte(100), dbus.Byte(100), dbus.Byte(114), dbus.Byte(101), dbus.Byte(115), dbus.Byte(115), dbus.Byte(34), dbus.Byte(58), dbus.Byte(34), dbus.Byte(50), dbus.Byte(52), dbus.Byte(52), dbus.Byte(46), dbus.Byte(48), dbus.Byte(46), dbus.Byte(48), dbus.Byte(46), dbus.Byte(50)], signature=dbus.Signature('y'))
Error in WriteValue: Incorrect padding
Received data: dbus.Array([dbus.Byte(50), dbus.Byte(49), dbus.Byte(34), dbus.Byte(44), dbus.Byte(34), dbus.Byte(110), dbus.Byte(101), dbus.Byte(105), dbus.Byte(103), dbus.Byte(104), dbus.Byte(98), dbus.Byte(111), dbus.Byte(117), dbus.Byte(114), dbus.Byte(95), dbus.Byte(105), dbus.Byte(110), dbus.Byte(116), dbus.Byte(101), dbus.Byte(114), dbus.Byte(45), dbus.Byte(109), dbus.Byte(97), dbus.Byte(115), dbus.Byte(116), dbus.Byte(101), dbus.Byte(114), dbus.Byte(95), dbus.Byte(109), dbus.Byte(117), dbus.Byte(108), dbus.Byte(116), dbus.Byte(105), dbus.Byte(99), dbus.Byte(97), dbus.Byte(115), dbus.Byte(116), dbus.Byte(95), dbus.Byte(97), dbus.Byte(100), dbus.Byte(100), dbus.Byte(114), dbus.Byte(101), dbus.Byte(115), dbus.Byte(115), dbus.Byte(34), dbus.Byte(58), dbus.Byte(34), dbus.Byte(50), dbus.Byte(52), dbus.Byte(52), dbus.Byte(46), dbus.Byte(48), dbus.Byte(46), dbus.Byte(48), dbus.Byte(46), dbus.Byte(50), dbus.Byte(50), dbus.Byte(50), dbus.Byte(34), dbus.Byte(44), dbus.Byte(34), dbus.Byte(104), dbus.Byte(101), dbus.Byte(108), dbus.Byte(108), dbus.Byte(111), dbus.Byte(95), dbus.Byte(116), dbus.Byte(105), dbus.Byte(109), dbus.Byte(101), dbus.Byte(114), dbus.Byte(34), dbus.Byte(58)], signature=dbus.Signature('y'))
Error in WriteValue: 'utf-8' codec can't decode byte 0xdb in position 0: invalid continuation byte
Received data: dbus.Array([dbus.Byte(34), dbus.Byte(51), dbus.Byte(48), dbus.Byte(48), dbus.Byte(48), dbus.Byte(34), dbus.Byte(125), dbus.Byte(125), dbus.Byte(125)], signature=dbus.Signature('y'))
Error in WriteValue: 'utf-8' codec can't decode byte 0xdf in position 0: invalid continuation byte
Received data: dbus.Array([dbus.Byte(4)], signature=dbus.Signature('y'))
Received base64 decoded data:
Received data: dbus.Array([dbus.Byte(114), dbus.Byte(101), dbus.Byte(116), dbus.Byte(114), dbus.Byte(105), dbus.Byte(101), dbus.Byte(118), dbus.Byte(101), dbus.Byte(72), dbus.Byte(117), dbus.Byte(98), dbus.Byte(67), dbus.Byte(111), dbus.Byte(110), dbus.Byte(102), dbus.Byte(105), dbus.Byte(103)], signature=dbus.Signature('y'))
Error in WriteValue: Invalid base64-encoded string: number of data characters (17) cannot be 1 more than a multiple of 4
Received data: dbus.Array([dbus.Byte(4)], signature=dbus.Signature('y'))
Received base64 decoded data:
</code></pre>
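<p>For what it's worth, the byte values in the dump above already decode directly as UTF-8 JSON text with no base64 layer, which suggests the BLE write path delivers the bytes already decoded. A minimal check against the first bytes of the dump:</p>

```python
# First byte values printed by WriteValue in the dump above
raw = bytes([123, 34, 104, 117, 98, 83, 101, 116, 116, 105, 110, 103, 115, 34])
# They are plain UTF-8 text, not base64 that still needs decoding
decoded = raw.decode("utf-8")
print(decoded)
```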
<p>The React Native code. The <code>getHubConfig()</code> method contains the statements that encode the data and write it to the Raspberry Pi over BLE:</p>
<pre><code>import base64 from "react-native-base64";
import { Context } from "../Auth/AuthContext.js";
// Method that presents the user with a list of BLE hubs or watches to choose from and allows
// them to connect to and transfer data between this app and the RPi.
const BleScanner = ({ engineerMode, deviceType }) => {
const globalContext = useContext(Context);
const { selectedDevice, setSelectedDevice } = globalContext;
// UUIDs for the UART service and characteristics. These are used to send the commands to the RPi.
// The RPi will respond with the result of the command.They are standard Nordic UART UUIDs.
const UART_SERVICE_UUID = "6e400001-b5a3-f393-e0a9-e50e24dcca9e";
const UART_RX_CHARACTERISTIC_UUID = "6e400002-b5a3-f393-e0a9-e50e24dcca9e";
const UART_TX_CHARACTERISTIC_UUID = "6e400003-b5a3-f393-e0a9-e50e24dcca9e";
// Set the MTU to the desired maximum value
const MTU = 100;
//End - of - Transmission marker, ASCII code for EOT
const EOT_MARKER = "\x04";
const [devices, setDevices] = useState([]);
const [isConnecting, setIsConnecting] = useState(false);
// New state to track if the app is connecting to the RPi or watch
const [showParameters, setShowParameters] = useState(false);
// New state to track if the parameters of the selected hub or watch are visible
const getHubConfig = async () => {
setShowParameters(false);
setIsConnecting(true);
const hubSettings = {"hub_settings": {"hello_timer": "3000", "inter-master_multicast_address": "244.0.0.221", "neighbour_inter-master_multicast_address": "244.0.0.222"}}
const commands = [
JSON.stringify({ hubSettings }),
"retrieveHubConfig",
];
try {
// Check if RPi is still connected to the device or not. If it is not connected, then connect to the device.
let connectedDevice = await handleDeviceConnection(selectedDevice);
const service = await connectedDevice.discoverAllServicesAndCharacteristics();
const characteristics = await service.characteristicsForService(UART_SERVICE_UUID);
const rx_characteristic = characteristics.find((c) => c.uuid === UART_RX_CHARACTERISTIC_UUID);
for (const command of commands) {
let data = null;
console.log("Command: ", command);
if (command === "retrieveHubConfig") {
// const paddedData = command.padEnd(Math.ceil(command.length / 4) * 4, "=");
data = base64.encode(command);
} else {
// const paddedData = command.padEnd(Math.ceil(command.length / 4) * 4, "=");
data = base64.encode(command);
}
console.log("Data to send: ", data);
console.log("Decoded data to send: ", Buffer.from(data, "base64").toString("ascii"));
// Calculate the number of packets needed to send the data
const numPackets = Math.ceil(data.length / MTU);
console.log("Number of packets: ", numPackets); // Log the number of packets
// Send the data in smaller units
for (let i = 0; i < numPackets; i++) {
const start = i * MTU;
const end = Math.min(start + MTU, data.length);
const packetData = data.slice(start, end);
console.log("Packet data: ", packetData);
console.log("Packet data length: ", packetData.length);
// Send the packet to the connected device
await rx_characteristic.writeWithoutResponse(packetData);
console.log(`Packet ${i + 1}/${numPackets} sent.`);
// Pause for 0.5 seconds between packets.
await new Promise(resolve => setTimeout(resolve, 1000));
if (i == numPackets - 1) {
await rx_characteristic.writeWithoutResponse(Buffer.from(EOT_MARKER).toString("base64"));
console.log("EOT length: ", Buffer.from(EOT_MARKER).toString("base64").length);
}
}
console.log("Data transmission complete.");
}
saveExtracedData(extractKeyValuePairs(readCharacteristic(connectedDevice)));
setShowParameters(true);
setIsConnecting(false);
} catch (error) {
// If an error occurs, log the error and set the isConnecting state variable to false
console.log("Failed to send commands: ", error);
console.log("Error code: ", error.errorCode);
setIsConnecting(false);
Alert.alert("Couldn't connect.",
Platform.OS === "ios" ?
"Check your device is paired with Tycho hub in your device's bluetooth settings. Stay close to Tycho hub." : "Stay close to Tycho hub & retry.");
}
};
const readCharacteristic = (device) => {
try {
let isEOTReceived = false;
let fullData = "";
while (!isEOTReceived) {
console.log("Reading characteristic...");
const characteristic = manager.readCharacteristicForDevice(
device.id,
UART_SERVICE_UUID,
UART_TX_CHARACTERISTIC_UUID
);
console.log("Characteristic: ", characteristic);
const data = characteristic.value;
console.log("Received data:", data);
const decodedData = Buffer.from(data, "base64").toString("ascii");
// Check for EOT packet. If EOT packet is received, then stop reading the characteristic.
if (decodedData.includes(EOT_MARKER)) {
isEOTReceived = true;
fullData += decodedData; // Add decoded data to fullData
break;
}
fullData += decodedData;
// Optional: Handle the case where data might exceed 100 bytes per packet
if (data.length > MTU) {
console.warn(`Received data packet exceeds ${MTU} bytes`);
}
}
console.log("Received full data:", fullData);
return fullData;
} catch (error) {
console.error("Read characteristic error:", error);
throw error; // Rethrow the error for further handling if necessary
}
};
};
export default BleScanner;
</code></pre>
<p>The Raspberry Pi Python code. The <code>WriteValue()</code> method of the <code>RxCharacteristic</code> class contains the print and base64 decoding statements mentioned above:</p>
<pre><code>import gi
gi.require_version('DBus', '1.0')
from gi.repository import GLib
from gi.repository import Gio
from gi.repository import DBus
import os
import time
import json
import base64
import sys
import subprocess
import configparser
import dbus
import dbus.mainloop.glib
from ble_advertisement import Advertisement
from ble_advertisement import register_ad_cb, register_ad_error_cb
from ble_gatt_server import Service, Characteristic
from ble_gatt_server import register_app_cb, register_app_error_cb
BLUEZ_SERVICE_NAME = 'org.bluez'
DBUS_OM_IFACE = 'org.freedesktop.DBus.ObjectManager'
LE_ADVERTISING_MANAGER_IFACE = 'org.bluez.LEAdvertisingManager1'
GATT_MANAGER_IFACE = 'org.bluez.GattManager1'
GATT_CHRC_IFACE = 'org.bluez.GattCharacteristic1'
UART_SERVICE_UUID = '6e400001-b5a3-f393-e0a9-e50e24dcca9e'
UART_RX_CHARACTERISTIC_UUID = '6e400002-b5a3-f393-e0a9-e50e24dcca9e'
UART_TX_CHARACTERISTIC_UUID = '6e400003-b5a3-f393-e0a9-e50e24dcca9e'
LOCAL_NAME = 'Tycho-Hub-1'
mainloop = None
PACKET_SIZE = 100
EOT_MARKER = "\x04" # End-of-Transmission marker, ASCII code for EOT
received_data = []  # Store received data packets
tx_data_packets = []  # Packets queued for transmission
class CasePreservingConfigParser(configparser.ConfigParser):
def optionxform(self, optionstr):
return optionstr
class TxCharacteristic(Characteristic):
def __init__(self, bus, index, service):
Characteristic.__init__(self, bus, index, UART_TX_CHARACTERISTIC_UUID, ['read'], service)
GLib.io_add_watch(sys.stdin, GLib.IO_IN, self.on_console_input)
self.isReadable = True
def on_console_input(self, fd, condition):
s = fd.readline()
if s.isspace():
pass
else:
self.send_tx(s)
return True
def send_tx(self, config_dict):
global tx_data_packets
config_json = json.dumps(config_dict, indent=4)
print(f"Sending config JSON: {config_json}")
total_length = len(config_json)
print(f"Total Length of Data: {total_length} bytes")
# Split data into packets and store in the global list
tx_data_packets = [base64.b64encode(config_json[i:i + PACKET_SIZE].encode()).decode() for i in range(0, total_length, PACKET_SIZE)]
tx_data_packets.append(base64.b64encode(EOT_MARKER.encode()).decode()) # Append base64 encoded EOT marker to the end of the list
print(f"Number of Packets Prepared: {len(tx_data_packets)}")
def ReadValue(self, options):
global tx_data_packets
if not tx_data_packets:
return GLib.Variant('ay', []) # Return empty if no more packets
# Pop the first packet from the list and return it
packet = tx_data_packets.pop(0)
print(f"Sending packet: {packet}")
return GLib.Variant('ay', packet)
def StartNotify(self):
if self.notifying:
return
self.notifying = True
def StopNotify(self):
if not self.notifying:
return
self.notifying = False
class RxCharacteristic(Characteristic):
def __init__(self, bus, index, service):
Characteristic.__init__(self, bus, index, UART_RX_CHARACTERISTIC_UUID, ['write'], service)
def WriteValue(self, value, options):
try:
print("Received data: ", value)
print("Received data length: ", len(value))
# Convert the received data to a byte array
byte_array = bytearray(value)
# Decode the received base64 data
value_str = base64.b64decode(byte_array).decode('utf-8')
print("Received base64 decoded data: ", value_str)
received_data.append(value_str) # Append received data to the buffer
# Check for EOT (End of Transmission) marker
if EOT_MARKER in value_str:
received_data_str = ''.join(received_data).split(EOT_MARKER)[0] # Split at EOT and take first part
self.execute_command(received_data_str) # Execute the command with all received data
received_data.clear() # Clear the buffer
except Exception as e:
print(f"Error in WriteValue: {e}")
def execute_command(self, command):
"""
Executes the received command. If the command is "retrieveHubConfig"
then send_hub_config() is called, otherwise if it contains 'cloud_settings'
or 'hub_settings' in its JSON structure, it calls update_hub_config().
Otherwise, executes the command as a shell command.
"""
print(f"Received Command: '{command}'")
if "retrieveHubConfig" in command:
self.send_hub_config()
else:
try:
command_json = json.loads(command)
if 'cloud_settings' in command_json or 'hub_settings' in command_json:
self.update_hub_config(command)
except json.JSONDecodeError:
# Not a JSON command, execute as shell command
self.execute_shell_command(command)
def execute_shell_command(self, command):
"""
Executes a command in the shell.
"""
try:
output = subprocess.check_output(command, shell=True)
print("Shell Command Output:", output.decode())
except subprocess.CalledProcessError as e:
print(f"Error executing shell command: {e}")
def send_hub_config(self):
# Read cloud settings from hubConfig.ini using the case-preserving parser
config = CasePreservingConfigParser()
config.read(os.path.join(os.path.dirname(__file__), 'hubConfig.ini'))
config_dict = {}
# Iterate over sections and their options
for section in config.sections():
config_dict[section] = {}
for option in config.options(section):
config_dict[section][option] = config.get(section, option)
print("Sending hub config...")
self.service.tx_characteristic.send_tx(config_dict)
def update_hub_config(self, settings):
print("In update hub config method")
print(f"Settings: {settings}")
try:
# Parse the JSON string into a dictionary
settings_dict = json.loads(settings)
print(f"Settings dictionary: {settings_dict}")
# Create a CasePreservingConfigParser object and read the existing INI file
config = CasePreservingConfigParser()
config.read("hubConfig.ini")
# Iterate over the dictionary and update the ConfigParser object
for section, options in settings_dict.items():
if not config.has_section(section):
config.add_section(section)
for key, value in options.items():
config.set(section, key, str(value))
print(f"Config parser object after update: {config}")
# Write the updated configuration back to the INI file
with open("hubConfig.ini", "w") as configfile:
config.write(configfile)
print("Updated hubConfig.ini with JSON data")
except json.JSONDecodeError as e:
print(f"Error decoding JSON: {e}")
except configparser.Error as e:
print(f"ConfigParser error: {e}")
except Exception as e:
print(f"General error in update_hub_config: {e}")
class UartService(Service):
def __init__(self, bus, index):
Service.__init__(self, bus, index, UART_SERVICE_UUID, True)
self.tx_characteristic = TxCharacteristic(bus, 0, self)
self.rx_characteristic = RxCharacteristic(bus, 1, self)
self.add_characteristic(self.tx_characteristic)
self.add_characteristic(self.rx_characteristic)
class Application(dbus.service.Object):
def __init__(self, bus):
self.path = '/'
self.services = []
dbus.service.Object.__init__(self, bus, self.path)
def get_path(self):
return dbus.ObjectPath(self.path)
def add_service(self, service):
self.services.append(service)
@dbus.service.method(DBUS_OM_IFACE, out_signature='a{oa{sa{sv}}}')
def GetManagedObjects(self):
response = {}
for service in self.services:
response[service.get_path()] = service.get_properties()
chrcs = service.get_characteristics()
for chrc in chrcs:
response[chrc.get_path()] = chrc.get_properties()
return response
class UartApplication(Application):
def __init__(self, bus):
Application.__init__(self, bus)
self.add_service(UartService(bus, 0))
class UartAdvertisement(Advertisement):
def __init__(self, bus, index):
Advertisement.__init__(self, bus, index, 'peripheral')
self.add_service_uuid(UART_SERVICE_UUID)
self.add_local_name(LOCAL_NAME)
self.include_tx_power = True
def find_adapter(bus):
remote_om = dbus.Interface(bus.get_object(BLUEZ_SERVICE_NAME, '/'),
DBUS_OM_IFACE)
objects = remote_om.GetManagedObjects()
for o, props in objects.items():
if LE_ADVERTISING_MANAGER_IFACE in props and GATT_MANAGER_IFACE in props:
return o
print('Skip adapter:', o)
return None
def main():
dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()
adapter = find_adapter(bus)
if not adapter:
print('BLE adapter not found')
return
service_manager = dbus.Interface(
bus.get_object(BLUEZ_SERVICE_NAME, adapter),
GATT_MANAGER_IFACE)
ad_manager = dbus.Interface(bus.get_object(BLUEZ_SERVICE_NAME, adapter),
LE_ADVERTISING_MANAGER_IFACE)
app = UartApplication(bus)
adv = UartAdvertisement(bus, 0)
mainloop = GLib.MainLoop()
service_manager.RegisterApplication(app.get_path(), {},
reply_handler=register_app_cb,
error_handler=register_app_error_cb)
ad_manager.RegisterAdvertisement(adv.get_path(), {},
reply_handler=register_ad_cb,
error_handler=register_ad_error_cb)
try:
mainloop.run()
except KeyboardInterrupt:
adv.Release()
if __name__ == '__main__':
main()
</code></pre>
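<p>The chunk-reassembly logic in <code>WriteValue</code> (accumulate base64-decoded fragments, then split at the EOT marker) can be exercised without BlueZ. A minimal sketch, assuming the same convention; the marker value here is a placeholder for whatever the app protocol uses:</p>

```python
import base64

EOT_MARKER = "\x04"  # assumed end-of-transmission marker; match your protocol

def accumulate_chunks(chunks):
    """Reassemble a command sent as a sequence of base64-encoded BLE writes."""
    buffer = []
    for raw in chunks:
        # Each write carries a base64-encoded UTF-8 fragment
        fragment = base64.b64decode(bytes(raw)).decode("utf-8")
        buffer.append(fragment)
        if EOT_MARKER in fragment:
            # Keep only the part before the marker, as WriteValue does
            return "".join(buffer).split(EOT_MARKER)[0]
    return None  # transmission not complete yet
```

<p>This mirrors the buffering in <code>WriteValue</code> but makes it testable in isolation, which helps when debugging truncated or mis-ordered writes.</p>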
|
<python><react-native><base64><bluetooth-lowenergy>
|
2024-10-01 16:22:50
| 1
| 537
|
Tirna
|
79,043,944
| 4,769,503
|
Why is SQLAlchemy / Pydantic auto-loading relations only sometimes and not always?
|
<p>I have a weird issue in my FastAPI-based application using <strong>SQLAlchemy</strong> and <strong>Pydantic</strong> with a Postgres-Database.</p>
<p>My User model contains two different one-to-many relationships. For unknown reasons, the first relationship is always loaded automatically although it shouldn't be. The second relationship works as expected.</p>
<p><strong>My Models:</strong></p>
<pre><code>class User(Base):
__tablename__ = "user"
id = Column(UUID(as_uuid=True), primary_key=True, default=uuid7, unique=True, nullable=False)
email = Column(String, unique=True, index=True)
hashed_password = Column(String)
is_active = Column(Boolean, default=True)
items = relationship("Item", back_populates="owner", lazy="select")
datastreams = relationship("DataStream", back_populates="owner", lazy="select")
class Item(Base):
__tablename__ = "item"
id = Column(UUID(as_uuid=True), primary_key=True, default=uuid7, unique=True, nullable=False)
title = Column(String, index=True)
description = Column(String, index=True)
owner_id = Column(UUID(as_uuid=True), ForeignKey("user.id"))
owner = relationship("User", back_populates="items")
class DataStream(Base):
__tablename__ = "data_stream"
id = Column(UUID(as_uuid=True), primary_key=True, default=uuid7, unique=True, nullable=False)
name = Column(String, index=True)
description = Column(String, index=True)
type = Column(data_stream_type_enum, nullable=False) # Enum for type: 'folder' or 'stream'
parent_id = Column(UUID(as_uuid=True), ForeignKey("data_stream.id"), nullable=True) # Self-referencing foreign key
owner_id = Column(UUID(as_uuid=True), ForeignKey("user.id"))
owner = relationship("User", back_populates="datastreams")
# Self-referential relationship for parent/child hierarchy
parent = relationship("DataStream", remote_side=[id], backref="children")
</code></pre>
<p><strong>My CRUD layer</strong> contains these two functions among others:</p>
<pre><code>def get_user(db: Session, user_id: UUID):
return db.query(models.User).filter(models.User.id == user_id).first()
def get_users(db: Session, skip: int = 0, limit: int = 100):
return db.query(models.User).offset(skip).limit(limit).all()
</code></pre>
<p>The API-Endpoint will respond with the following Schema:</p>
<pre><code>from pydantic import BaseModel
class UserBase(BaseModel):
email: str
class User(UserBase):
id: UUID
is_active: bool
items: list[Item] = []
dataStreams: list[DataStream] = []
class Config:
orm_mode = True
</code></pre>
<p><strong>The Endpoints</strong> look like this:</p>
<pre><code>@router.get("/", response_model=list[user_schemas.User])
def read_users(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
users = crud.get_users(db, skip=skip, limit=limit)
return users
@router.get("/{user_id}", response_model=user_schemas.User)
def read_user(user_id: UUID, db: Session = Depends(get_db)):
db_user = crud.get_user(db, user_id=user_id)
if db_user is None:
raise HTTPException(status_code=404, detail="User not found")
return db_user
</code></pre>
<p><strong>The API response</strong> always contains the fully loaded Items relation, but not the DataStreams relation, although data streams exist and are associated with the same owner id.</p>
<pre><code>[
{
    "email": "xxx@yyy",
"id": "066fbcc7-61ff-7359-8000-440710dffadc",
"is_active": true,
"items": [
{
"title": "string",
"description": "string",
"id": "066fc123-237d-73e4-8000-bc6b98e36cfa",
"owner_id": "066fbcc7-61ff-7359-8000-440710dffadc"
},
{
"title": "string",
"description": "string",
"id": "066fc149-10bc-7569-8000-a629cd4d7e52",
"owner_id": "066fbcc7-61ff-7359-8000-440710dffadc"
}
],
"dataStreams": []
}
]
</code></pre>
<p><strong>My Questions are:</strong></p>
<ol>
<li>Why are Items automatically loaded and DataStreams not?</li>
<li>How can I avoid that Items will get loaded automatically?</li>
</ol>
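<p>Two things seem worth checking here. First, serializing the ORM object against the response model reads every declared attribute, and reading a <code>lazy="select"</code> relationship while the session is still open triggers a query, so <code>items</code> likely gets loaded during response validation itself. Second, the schema declares <code>dataStreams</code> (camelCase) while the ORM attribute is named <code>datastreams</code>, so that field never finds a matching attribute and falls back to its default <code>[]</code>, which would explain the asymmetry. A minimal, SQLAlchemy-free sketch of both effects:</p>

```python
class LazyRelation:
    """Toy stand-in for a lazy='select' relationship: it loads on first read."""
    def __init__(self, loader):
        self.loader = loader
        self.loaded = False

    def __get__(self, obj, objtype=None):
        self.loaded = True           # a real ORM would issue a SELECT here
        return self.loader()

class User:
    items = LazyRelation(lambda: ["item1", "item2"])  # note: no 'dataStreams' attribute

def serialize(user, fields):
    # Reading exactly the fields the schema declares, like orm_mode does;
    # a missing attribute falls back to the schema default of []
    return {field: getattr(user, field, []) for field in fields}

result = serialize(User(), ["items", "dataStreams"])
```

<p>So the fix would be to align the field name (or add an alias for <code>datastreams</code>) and, if <code>items</code> should not be loaded, to omit it from the response model rather than relying on the lazy setting.</p>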
|
<python><sqlalchemy><fastapi><pydantic>
|
2024-10-01 16:12:03
| 1
| 19,350
|
delete
|
79,043,862
| 6,260,154
|
Need help in fixing regex pattern to find strings which contains invalid escape sequence but not defined as raw string
|
<p>I'm attempting to create a regex pattern that iterates over the contents of a file and looks for strings that Flake8 has labelled as W605, i.e. strings that contain an <code>invalid escape sequence</code>.</p>
<p>In other words, my goal is to locate those strings and turn them into raw strings by prepending <code>r</code> to them.</p>
<p>Now, I came up with this basic pattern:</p>
<pre><code>(?<!r)([\"\'])(.*?)[\\](.*?)([\"\'])
</code></pre>
<p>This works for the scenarios below (edit: the lines below are the literal contents of the file):</p>
<pre><code>test1 = re.compile('^(.*?)\test1?$', re.S) # matches correctly
test2 = re.compile(r'^(.*?)\test2?$', re.S) # not matches, which is correct behaviour as it is defined as a raw string
test3 = re.compile("^(.*?)\test3?$", re.S) # matches correctly
test4 = re.compile(r"^(.*?)\test4?$", re.S) # not matches, which is correct behaviour as it is defined as a raw string
</code></pre>
<p>However, it also matches the string below, which is problematic:</p>
<pre><code>test5.setdefault('test5', re.search('^(.*?)\test5?$', '').groups()[0])
</code></pre>
<p>Though it also selects <code>'test5'</code>, it should ideally not match that one. The entire match is as follows: <code>'test5', re.search('^(.*?)\test5?$'</code>.</p>
<p>This is the point at which I am unable to match the <strong>VALID</strong> strings exclusively.</p>
<p>This is the regex101: <a href="https://regex101.com/r/no99Sl/1" rel="nofollow noreferrer">https://regex101.com/r/no99Sl/1</a></p>
<p>If anyone can help me out in this, I will be grateful.</p>
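<p>One alternative worth considering: rather than fighting regex context, the <code>tokenize</code> module already knows exact string boundaries and prefixes, so quoting context like the <code>'test5'</code> case cannot confuse it. A sketch (it flags any non-raw string containing a backslash; narrowing to genuinely invalid escapes would be a refinement on top):</p>

```python
import io
import tokenize

def flag_w605_candidates(source):
    """Return (line, literal) pairs for non-raw string literals containing a backslash."""
    hits = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.STRING:
            text = tok.string
            # The prefix is everything before the first quote character
            quote_pos = min(i for i in (text.find('"'), text.find("'")) if i != -1)
            prefix = text[:quote_pos].lower()
            if "r" not in prefix and "\\" in text:
                hits.append((tok.start[0], text))
    return hits
```

<p>With the line and column from the token you can then insert the <code>r</code> prefix mechanically.</p>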
|
<python><regex>
|
2024-10-01 15:45:51
| 0
| 1,016
|
Tony Montana
|
79,043,777
| 4,636,579
|
How to extract arguments from call with ast python
|
<p>I am working on a function to convert function calls, so I need the calls and their parameters. I fiddled with AST, and I am able to extract the specific nodes, but somehow I can't distinguish the nodes by their type.</p>
<p>For example, I want to get the call and the parameters so I can transform them into a new syntax.</p>
<p>Example code:</p>
<pre class="lang-py prettyprint-override"><code>import ast
import argparse
class Visitor(ast.NodeVisitor):
def __init__(self):
self.arglist = []
def visit(self, node):
if isinstance(node, ast.Call):
print('VVVVVVVVVVVVVV CALL ANFANG VVVVVVVVVVVVVVVV')
print(f'_______ Call found ______')
super().visit(node)
print(f'Argumente: >>>{node.args.pop(0).__dict__}<<<<')
print('______________ CALL END ________________')
if isinstance(node, ast.Name):
print(f'_______ Name found ______ {node.__class__}')
print(f'Name Argumente: {node.id.__str__()}')
return super().visit(node)
elif isinstance(node, ast.For):
print(f'_______ FOR found ______ ')
return super().visit(node)
elif isinstance(node, ast.keyword):
print(f'_______ keyword found ______ {node.__class__}')
return super().visit(node)
elif isinstance(node, ast.Attribute):
return super().visit(node)
print(f'_______ Attribute fopund _____ {node.__class__}')
elif isinstance(node, ast.Expr):
return super().visit(node)
print(f'_______ Expr found ________ {node.__class__}')
elif isinstance(node, ast.Constant):
print(f'_______ Konstante found!!>>>> {node.value}')
return super().visit(node)
return super().visit(node)
tree = ast.parse('''
meat = get_mess(ONE, TWO, THREE)
name2 = 'Elise'
''')
print(ast.dump(tree))
vis = Visitor()
vis.visit(tree)
</code></pre>
<p>Output is:</p>
<pre><code>Module(body=[Assign(targets=[Name(id='meat', ctx=Store())], value=Call(func=Name(id='get_mess', ctx=Load()), args=[Name(id='ONE', ctx=Load()), Name(id='TWO', ctx=Load()), Name(id='THREE', ctx=Load())], keywords=[])), Assign(targets=[Name(id='name2', ctx=Store())], value=Constant(value='Elise'))], type_ignores=[])
_______ Name found ______ <class 'ast.Name'>
Name Argumente: meat
VVVVVVVVVVVVVV CALL ANFANG VVVVVVVVVVVVVVVV
_______ Call found ______
_______ Name found ______ <class 'ast.Name'>
Name Argumente: get_mess
_______ Name found ______ <class 'ast.Name'>
Name Argumente: ONE
_______ Name found ______ <class 'ast.Name'>
Name Argumente: TWO
_______ Name found ______ <class 'ast.Name'>
Name Argumente: THREE
Argumente: >>>{'id': 'ONE', 'ctx': <ast.Load object at 0x7f42d71fca10>, 'lineno': 2, 'col_offset': 16, 'end_lineno': 2, 'end_col_offset': 19}<<<<
______________ CALL END ________________
_______ Name found ______ <class 'ast.Name'>
Name Argumente: get_mess
_______ Name found ______ <class 'ast.Name'>
Name Argumente: TWO
_______ Name found ______ <class 'ast.Name'>
Name Argumente: THREE
_______ Name found ______ <class 'ast.Name'>
Name Argumente: name2
_______ Konstante found!!>>>> Elise
</code></pre>
<p>It is extracting function calls (get_mess) and parameters (ONE, TWO, THREE), but I didn't find a way to extract them by their type so that I can build new code.</p>
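<p>A per-type approach with a dedicated <code>visit_Call</code> method and <code>ast.unparse</code> (Python 3.9+) extracts each call together with its arguments directly, which may be closer to what is needed here; a minimal sketch:</p>

```python
import ast

class CallCollector(ast.NodeVisitor):
    """Collect (function_name, [argument_source]) pairs for every Call node."""
    def __init__(self):
        self.calls = []

    def visit_Call(self, node):
        # ast.unparse (Python 3.9+) turns any node back into source text
        func = ast.unparse(node.func)
        args = [ast.unparse(arg) for arg in node.args]
        self.calls.append((func, args))
        self.generic_visit(node)  # keep descending for nested calls

tree = ast.parse("meat = get_mess(ONE, TWO, THREE)\nname2 = 'Elise'")
collector = CallCollector()
collector.visit(tree)
```

<p>Defining <code>visit_Call</code> instead of overriding <code>visit</code> lets <code>NodeVisitor</code> do the type dispatch, and also avoids the duplicate visits seen in the output above (caused by calling <code>super().visit(node)</code> several times per node).</p>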
|
<python><abstract-syntax-tree>
|
2024-10-01 15:21:08
| 0
| 681
|
Coliban
|
79,043,588
| 6,511,990
|
Using Haystack and Ollama . ModuleNotFoundError: No module named 'ollama'
|
<p>I'm running an example from the haystack website <a href="https://haystack.deepset.ai/integrations/ollama#installation" rel="nofollow noreferrer">here</a></p>
<p>I have used poetry to add ollama-haystack</p>
<p>I am running the following code using Python 3.12.3 on WSL2 ubuntu 24.04</p>
<pre><code>from haystack_integrations.components.embedders.ollama import OllamaTextEmbedder
embedder = OllamaTextEmbedder()
result = embedder.run(text="What do llamas say once you have thanked them? No probllama!")
print(result['embedding'])
</code></pre>
<p>I am getting an error:</p>
<pre><code> File "/home/../.venv/lib/python3.12/site-packages/haystack_integrations/components/embedders/ollama/document_embedder.py", line 6, in <module>
from ollama import Client
ModuleNotFoundError: No module named 'ollama'
</code></pre>
<p>My python is a bit rusty so apologies if the error is obvious.</p>
|
<python><python-3.x><pip><ollama><haystack>
|
2024-10-01 14:27:46
| 1
| 2,769
|
dorriz
|
79,043,331
| 142,976
|
Fill a field inside dialog after return of method call
|
<p>I want to fill the <em>Conclusion</em> field after I click the <em>Copy Notes</em> button. How can I do this in Odoo? I tried using the <code>value</code> property in the returned data, but nothing happened.</p>
<pre><code>@api.multi
def copy_notes(self):
notes = ""
if self.uuid_v4 == 'e3d2e0c7-ba64-4522-abf3-9befcf2bdacc':
if self.physical_examination_id.id:
notes = "Physical Examination"
notes = notes + ("\n" if notes and self.physical_examination_id.eyes_remark else "")
#do many things here that is not relevant to the question
self.note = notes
else:
raise ValidationError(_("This exam cannot copy notes"))
return {
'value': {'note': notes},
'type': "ir.actions.do_nothing",
}
</code></pre>
<p><a href="https://i.sstatic.net/Hbm4K2Oy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hbm4K2Oy.png" alt="enter image description here" /></a></p>
|
<python><odoo><odoo-8>
|
2024-10-01 13:19:09
| 0
| 4,224
|
strike_noir
|
79,042,915
| 4,451,315
|
collect duckdb query into two dataframes?
|
<p>Say I have a csv file with</p>
<pre><code>date,value
2020-01-01,1
2020-01-02,4
2020-01-03,5
2020-01-04,9
2020-01-05,2
</code></pre>
<p>I would like to read it with duckdb, do some preprocessing, and ultimately end up with a train and validation set as Polars dataframes.</p>
<p>I <em>could</em> do:</p>
<pre class="lang-py prettyprint-override"><code>train = duckdb.sql("""
select *, avg(value) over (order by date rows between 2 preceding and current row)
from read_csv(my_data.csv) qualify date < make_date(2020,1,4)
""").pl()
val = duckdb.sql("""
select *, avg(value) over (order by date rows between 2 preceding and current row)
from read_csv(my_data.csv) qualify date >= make_date(2020,1,4)
""").pl()
</code></pre>
<p>and this works, but doesn't it risk double-computing things?</p>
<p>Is there a way to materialize two dataframes at once without double-computing things? Or should I just do</p>
<pre class="lang-py prettyprint-override"><code>data = duckdb.sql('select *, avg(value) over (order by date rows between 2 preceding and current row) from read_csv(my_data.csv)').pl()
train = data.filter(pl.col('date') < date(2020, 1, 4))
val = data.filter(pl.col('date') >= date(2020, 1, 4))
</code></pre>
<p>?</p>
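<p>For what it's worth, each <code>duckdb.sql(...).pl()</code> call executes its query independently, so the first variant does read the CSV and compute the window twice; materializing once and filtering (the second variant) is the usual pattern. It also matters for correctness here: the rolling window for the first validation rows needs the preceding training rows, which only the compute-then-split order provides. The compute-once-then-split idea, illustrated in plain Python:</p>

```python
from datetime import date

rows = [
    (date(2020, 1, 1), 1),
    (date(2020, 1, 2), 4),
    (date(2020, 1, 3), 5),
    (date(2020, 1, 4), 9),
    (date(2020, 1, 5), 2),
]

def with_rolling_avg(rows, window=3):
    """Trailing average over `window` rows, computed once over the full data."""
    out = []
    for i, (d, v) in enumerate(rows):
        chunk = [value for _, value in rows[max(0, i - window + 1): i + 1]]
        out.append((d, v, sum(chunk) / len(chunk)))
    return out

data = with_rolling_avg(rows)              # compute once...
cutoff = date(2020, 1, 4)
train = [r for r in data if r[0] < cutoff]  # ...then split
val = [r for r in data if r[0] >= cutoff]
```

<p>Note that the first validation row's average (6.0) uses values from the training rows, which is exactly what splitting before the window computation would lose.</p>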
|
<python><duckdb>
|
2024-10-01 11:21:08
| 1
| 11,062
|
ignoring_gravity
|
79,042,837
| 4,444,546
|
automatically add fields_validators based on hint type with pydantic
|
<p>I'd like to define <code>field_validator</code> functions once and for all in a <code>BaseModel</code> class, inherit from this class in my model, and have the validators apply to the subclass's attributes.</p>
<p>MWE</p>
<pre class="lang-py prettyprint-override"><code>def to_int(v: Union[str, int]) -> int:
if isinstance(v, str):
if v.startswith("0x"):
return int(v, 16)
return int(v)
return v
def to_bytes(v: Union[str, bytes, list[int]]) -> bytes:
if isinstance(v, bytes):
return v
elif isinstance(v, str):
if v.startswith("0x"):
return bytes.fromhex(v[2:])
return v.encode()
else:
return bytes(v)
class BaseModelCamelCase(BaseModel):
model_config = ConfigDict(
populate_by_name=True,
alias_generator=AliasGenerator(
validation_alias=lambda name: AliasChoices(to_camel(name), name)
),
)
# FIXME: should apply to int type from get_type_hints only
@field_validator("*", mode="before")
def to_int(cls, v: Union[str, int]) -> int:
return to_int(v)
# FIXME: should apply to bytes type from get_type_hints only
@field_validator("*", mode="before")
def to_bytes(cls, v: Union[str, bytes, list[int]]) -> bytes:
return to_bytes(v)
class BaseTransactionModel(BaseModelCamelCase):
nonce: int
gas: int = Field(validation_alias=AliasChoices("gasLimit", "gas_limit", "gas"))
to: Optional[bytes]
value: int
data: bytes
r: int = 0
s: int = 0
</code></pre>
<p>I tried to use <code>model_validate</code>, but then I lost the alias parsing.</p>
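<p>One way to get type-directed coercion is to dispatch on the annotations yourself instead of using catch-all validators. Below is a Pydantic-free sketch of the idea (in Pydantic v2 the idiomatic equivalent would be <code>Annotated</code> field types with <code>BeforeValidator(to_int)</code> etc.); the class and field names are illustrative:</p>

```python
from typing import Union, get_type_hints

def to_int(v: Union[str, int]) -> int:
    if isinstance(v, str):
        return int(v, 16) if v.startswith("0x") else int(v)
    return v

def to_bytes(v: Union[str, bytes, list]) -> bytes:
    if isinstance(v, bytes):
        return v
    if isinstance(v, str):
        return bytes.fromhex(v[2:]) if v.startswith("0x") else v.encode()
    return bytes(v)

# One converter per annotated type; anything else passes through unchanged
CONVERTERS = {int: to_int, bytes: to_bytes}

def coerce_fields(cls, raw):
    """Run the converter matching each field's annotated type, if any."""
    hints = get_type_hints(cls)
    return {
        name: CONVERTERS.get(hints.get(name), lambda v: v)(value)
        for name, value in raw.items()
    }

class Tx:  # illustrative stand-in for BaseTransactionModel
    nonce: int
    data: bytes

coerced = coerce_fields(Tx, {"nonce": "0x10", "data": "0xdeadbeef"})
```

<p>The same <code>get_type_hints</code> dispatch could run inside a <code>mode="before"</code> model validator so fields keep their alias handling.</p>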
|
<python><field><pydantic>
|
2024-10-01 11:04:44
| 1
| 5,394
|
ClementWalter
|
79,042,577
| 5,746,740
|
Getting Google site reviews with Python
|
<p>I need to be able to get Google site reviews via their API to put them in our data warehouse.</p>
<p>I tried with the following code:</p>
<pre><code>from googleapiclient.discovery import build
API_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxx'
service = build('**mybusiness**', 'v4', developerKey=API_KEY)
location_id = 'locations/xxxxxxxxxxxx'
# Request reviews
reviews = service.accounts().locations().reviews().list(parent=location_id).execute()
# Print review data
for review in reviews['reviews']:
print(f"Review: {review['review']}")
print(f"Rating: {review['starRating']}")
print('---')
</code></pre>
<p>But I get an error that <strong>mybusiness</strong> v4 does not exist:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<PythonBin>\Lib\runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<PythonBin>\Lib\runpy.py", line 88, in _run_code
exec(code, run_globals)
File "<PythonBin>\bundled\libs\debugpy\adapter/../..\debugpy\launcher/../..\debugpy\__main__.py", line 39, in <module>
cli.main()
File "<PythonBin>\bundled\libs\debugpy\adapter/../..\debugpy\launcher/../..\debugpy/..\debugpy\server\cli.py", line 430, in main
run()
File "<PythonBin>\bundled\libs\debugpy\adapter/../..\debugpy\launcher/../..\debugpy/..\debugpy\server\cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "<PythonBin>\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<PythonBin>\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "<PythonBin>\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "C:\src\DownloadGoogleReviewToDWH - test 2.py", line 4, in <module>
service = build('**mybusiness**', 'v4', developerKey=API_KEY)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<PythonBin>Python312\site-packages\googleapiclient\_helpers.py", line 130, in positional_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "<PythonBin>Python312\site-packages\googleapiclient\discovery.py", line 304, in build
content = _retrieve_discovery_doc(
^^^^^^^^^^^^^^^^^^^^^^^^
File "<PythonBin>Python312\site-packages\googleapiclient\discovery.py", line 417, in _retrieve_discovery_doc
content = discovery_cache.get_static_doc(serviceName, version)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<PythonBin>Python312\site-packages\googleapiclient\discovery_cache\__init__.py", line 72, in get_static_doc
with open(os.path.join(DISCOVERY_DOC_DIR, doc_name), "r") as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 22] Invalid argument: '<PythonBin>\\local-packages\\Python312\\site-packages\\googleapiclient\\discovery_cache\\documents\\**mybusiness**.v4.json'
</code></pre>
<p>I found information that the mybusiness module is deprecated but still supported, so it should work after all? I also found information that I should use mybusinessbusinessinformation:</p>
<pre><code>service = build('**mybusinessbusinessinformation**', 'v1', developerKey=API_KEY)
# Request reviews
reviews = service.accounts().locations().reviews().list(parent=location_id).execute()
# Print review data
for review in reviews.get('reviews', []):
print(f"Review: {review['review']}")
print(f"Rating: {review['starRating']}")
print('---')
</code></pre>
<p>Now I get an error that "'Resource' object has no attribute 'reviews'":</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<PythonBin>\Lib\runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<PythonBin>\Lib\runpy.py", line 88, in _run_code
exec(code, run_globals)
File "\bundled\libs\debugpy\adapter/../..\debugpy\launcher/../..\debugpy\__main__.py", line 39, in <module>
cli.main()
File "<PythonBin>\bundled\libs\debugpy\adapter/../..\debugpy\launcher/../..\debugpy/..\debugpy\server\cli.py", line 430, in main
run()
File "<PythonBin>\bundled\libs\debugpy\adapter/../..\debugpy\launcher/../..\debugpy/..\debugpy\server\cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "<PythonBin>\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<PythonBin>\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "<PythonBin>\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "C:\src\DownloadGoogleReviewToDWH - test 2.py", line 9, in <module>
reviews = service.accounts().locations().reviews().list(parent=location_id).execute()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Resource' object has no attribute 'reviews'
</code></pre>
<p>I began reading the documentation but I did not find anything on where to find reviews: <a href="https://developers.google.com/my-business/reference/businessinformation/rest/v1/accounts.locations" rel="nofollow noreferrer">https://developers.google.com/my-business/reference/businessinformation/rest/v1/accounts.locations</a></p>
<p>What is a solution to this?</p>
|
<python><google-api-client><google-api-python-client><google-my-business-api><google-reviews>
|
2024-10-01 09:45:50
| 1
| 302
|
André Pletschette
|
79,042,426
| 10,855,529
|
Pure polars version of safe ast literal eval
|
<p>I have data like this,</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({'a': ["['b', 'c', 'd']"]})
</code></pre>
<p>I want to convert the string to a list. I use:</p>
<pre class="lang-py prettyprint-override"><code>df = df.with_columns(a=pl.col('a').str.json_decode())
</code></pre>
<p>it gives me,</p>
<pre><code>ComputeError: error inferring JSON: InternalError(TapeError) at character 1 (''')
</code></pre>
<p>then I use this function,</p>
<pre class="lang-py prettyprint-override"><code>import ast
def safe_literal_eval(val):
try:
return ast.literal_eval(val)
except (ValueError, SyntaxError):
return val
df = df.with_columns(a=pl.col('a').map_elements(safe_literal_eval, return_dtype=pl.List(pl.String)))
</code></pre>
<p>and get the expected output, but is there a pure polars way to achieve the same?</p>
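<p>When the strings are as simple as single-quoted lists, normalizing the quotes first lets a strict JSON parser do the rest; in Polars that idea should look like <code>pl.col('a').str.replace_all("'", '"').str.json_decode()</code> (untested here, and it breaks if values themselves contain quote characters). The stdlib version of the same trick:</p>

```python
import json

def parse_single_quoted_list(s):
    """Parse "['b', 'c']"-style strings by turning them into valid JSON first.

    Assumption: list values never contain quote characters themselves.
    """
    return json.loads(s.replace("'", '"'))

parsed = parse_single_quoted_list("['b', 'c', 'd']")
```

<p>Unlike <code>ast.literal_eval</code> this stays per-column vectorizable in Polars, at the cost of the quoting assumption above.</p>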
|
<python><python-polars>
|
2024-10-01 09:11:24
| 2
| 3,833
|
apostofes
|
79,042,253
| 16,611,809
|
How do I speed up querying my >600Mio rows?
|
<p>My database has about 600Mio entries that I want to query (<a href="https://stackoverflow.com/questions/79033777/how-to-query-a-large-file-using-pandas-or-an-alternative">Pandas is too slow</a>). This local dbSNP only contains rsIDs and genomic positions. I used:</p>
<pre><code>import sqlite3
import gzip
import csv
rsid_db = sqlite3.connect('rsid.db')
rsid_cursor = rsid_db.cursor()
rsid_cursor.execute(
"""
CREATE TABLE rsids (
rsid TEXT,
chrom TEXT,
pos INTEGER,
ref TEXT,
alt TEXT
)
"""
)
with gzip.open('00-All.vcf.gz', 'rt') as vcf: # from https://ftp.ncbi.nih.gov/snp/organisms/human_9606/VCF/00-All.vcf.gz
reader = csv.reader(vcf, delimiter="\t")
i = 0
for row in reader:
if not ''.join(row).startswith('#'):
rsid_cursor.execute(
f"""
INSERT INTO rsids (rsid, chrom, pos, ref, alt)
VALUES ('{row[2]}', '{row[0]}', '{row[1]}', '{row[3]}', '{row[4]}');
"""
)
i += 1
if i % 1000000 == 0:
print(f'{i} entries written')
rsid_db.commit()
rsid_db.commit()
rsid_db.close()
</code></pre>
<p>I want to query multiple rsIDs and get their genomic position and alteration (query <code>rsid</code> and get <code>chrom</code>, <code>pos</code>, <code>ref</code>, <code>alt</code> and <code>rsid</code>). One entry looks like:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>rsid</th>
<th>chrom</th>
<th>pos</th>
<th>ref</th>
<th>alt</th>
</tr>
</thead>
<tbody>
<tr>
<td>rs537152180</td>
<td>1</td>
<td>4002401</td>
<td>G</td>
<td>A,C</td>
</tr>
</tbody>
</table></div>
<p>I query using:</p>
<pre><code>import sqlite3
import pandas as pd
def query_rsid(rsid_list,
rsid_db_path='rsid.db'):
with sqlite3.connect(rsid_db_path) as rsid_db:
rsid_cursor = rsid_db.cursor()
rsid_cursor.execute(
f"""
SELECT * FROM rsids
WHERE rsid IN ('{"', '".join(rsid_list)}');
"""
)
query = rsid_cursor.fetchall()
return query
</code></pre>
<p>It takes about 1.5 minutes no matter how many entries. Is there a way to speed this up?</p>
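<p>With no index on <code>rsid</code>, every lookup scans all 600M rows, which matches the constant ~1.5 minutes regardless of how many IDs are queried. An index (created once, after loading) turns the <code>IN</code> lookup into a few B-tree probes; parameterized placeholders also avoid the quoting and injection problems of building the list into an f-string. A sketch of both on an in-memory database:</p>

```python
import sqlite3

db = sqlite3.connect(":memory:")
cur = db.cursor()
cur.execute("CREATE TABLE rsids (rsid TEXT, chrom TEXT, pos INTEGER, ref TEXT, alt TEXT)")
cur.executemany(
    "INSERT INTO rsids VALUES (?, ?, ?, ?, ?)",   # placeholders, not f-strings
    [("rs537152180", "1", 4002401, "G", "A,C"), ("rs1", "2", 100, "A", "T")],
)
# The crucial part: an index so WHERE rsid IN (...) stops scanning the table
cur.execute("CREATE INDEX IF NOT EXISTS idx_rsid ON rsids (rsid)")
db.commit()

def query_rsid(rsid_list):
    placeholders = ", ".join("?" for _ in rsid_list)
    cur.execute(f"SELECT * FROM rsids WHERE rsid IN ({placeholders})", rsid_list)
    return cur.fetchall()

rows = query_rsid(["rs537152180"])
```

<p>On the real 600M-row table the one-off <code>CREATE INDEX</code> will take a while and grow the file, but each subsequent query should drop from minutes to milliseconds.</p>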
|
<python><sql><sqlite><query-optimization>
|
2024-10-01 08:18:00
| 2
| 627
|
gernophil
|
79,042,189
| 1,183,071
|
How to debug asyncio warning in base_events.py:1931?
|
<p>I've written a chat app that uses Quart, socketio, and uvicorn, along with some external frameworks like firestore. My dev environment is Replit. In some parts of my app, I get vague warning messages that I interpret as meaning that something is blocking an async function:</p>
<pre><code>WARNING
Executing <Task pending name='Task-32' coro=<AsyncServer._handle_event_internal() running at /home/runner/MyApp/.pythonlibs/lib/python3.11/site-packages/socketio/async_server.py:610> wait_for=<Task pending name='Task-33' coro=<UnaryStreamCall._send_unary_request() running at /home/runner/MyApp/.pythonlibs/lib/python3.11/site-packages/grpc/aio/_call.py:634> cb=[Task.task_wakeup()] created at /home/runner/MyApp/.pythonlibs/lib/python3.11/site-packages/grpc/aio/_call.py:629> cb=[set.discard()] created at /nix/store/f98g7xbckgqbkagdvpzc2r6lv3h1p9ki-python3-3.11.9/lib/python3.11/asyncio/tasks.py:680> took 0.109 seconds
</code></pre>
<p>It occurs at base_events.py on line 1931. How do you debug this kind of problem?</p>
|
<python><asynchronous><python-asyncio>
|
2024-10-01 07:58:50
| 0
| 8,092
|
GoldenJoe
|
79,042,155
| 1,801,588
|
How to call a Python script using several modules from a Bash script?
|
<p>Based on an <a href="https://stackoverflow.com/a/66491776/1801588">answer</a>, I've made a Bash script to install the requirements and run a Python script working with a MariaDB database. In WSL, the Python script runs well, but when I run the Bash script from Windows 11 command line, I get following error:</p>
<pre><code>ModuleNotFoundError: No module named 'mariadb'
</code></pre>
<p>Unlike <a href="https://stackoverflow.com/questions/66617851/problems-using-the-mariadb-module-for-python">another question concerning the same error</a>, I've verified by <code>python3 -V</code> that I use Python 3.10 in the whole process. Commenting the MariaDB connector out has shown the same problem with other imported modules. My Bash script, edited:</p>
<pre><code>python3 -m venv "c:\path\to\venv"
"c:\path\to\venv\Scripts\activate"
pip3 install -r "c:/some/path/requirements.txt"
python3 "c:/some/other/path/myscript.py"
</code></pre>
<p>The <code>requirements.txt</code> file:</p>
<pre><code>mariadb==1.1.10
datasets
sentence_transformers
</code></pre>
<p>EDIT: Thanks to the comments and answers, I've got rid of the following error:</p>
<pre><code>bash: venv/bin/activate: No such file or directory
</code></pre>
<p>However, the main problem persists; it just changed from <code>ImportError</code> to <code>ModuleNotFoundError</code>. All the requirements either install correctly or report "Requirement already satisfied".</p>
|
<python><python-3.x><bash><mariadb><importerror>
|
2024-10-01 07:48:45
| 4
| 2,860
|
Pavel V.
|
79,041,361
| 1,676,880
|
How can I find lengths of all elements of the dataframe?
|
<p>I tried this:</p>
<pre class="lang-py prettyprint-override"><code>df_work.with_columns(
pl.all().str.len_chars()
)
</code></pre>
<p>But I got an error</p>
<blockquote>
<p><code>polars.exceptions.SchemaError</code>: invalid series dtype: expected <code>String</code>, got <code>i64</code></p>
</blockquote>
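<p>The error says one of the columns is <code>i64</code>, and <code>str.len_chars</code> only accepts strings; the usual fix is to cast everything to string first, e.g. <code>pl.all().cast(pl.String).str.len_chars()</code>, or to restrict the expression to string columns via selectors. The cast-then-measure idea, shown in plain Python so the behaviour is explicit:</p>

```python
# Stand-in for a DataFrame: column name -> list of cell values
df_work = {"name": ["ab", "cde"], "count": [7, 1234]}

def len_chars_all(columns):
    """Character length of every cell, casting non-strings to str first,
    mirroring pl.all().cast(pl.String).str.len_chars()."""
    return {col: [len(str(v)) for v in values] for col, values in columns.items()}

lengths = len_chars_all(df_work)
```

<p>Note the cast defines what "length" means for non-string data (e.g. 1234 has length 4 as decimal text), which may or may not be what you want.</p>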
|
<python><python-polars>
|
2024-10-01 00:57:31
| 1
| 601
|
Yaiba
|
79,041,358
| 897,968
|
When (and why) did Python2's built-in `file()` get deprecated?
|
<p>I recently had to port some ancient Python2 code to Python3 and bumped into the use of the <a href="https://docs.python.org/2.7/library/functions.html#file" rel="nofollow noreferrer"><code>file</code></a> built-in function instead of <a href="https://docs.python.org/2.7/library/functions.html#open" rel="nofollow noreferrer"><code>open</code></a> I've been used to.</p>
<p>Interestingly there was no reference to <code>file</code> being deprecated and also the <a href="https://docs.python.org/3.0/library/2to3.html#to3-reference" rel="nofollow noreferrer"><code>2to3</code></a> tool did not report that this has or should be changed.</p>
<p>Could anyone point me to a document explaining what happened?</p>
|
<python><python-3.x><deprecated>
|
2024-10-01 00:55:27
| 0
| 3,089
|
FriendFX
|
79,041,221
| 725,932
|
Finding imports of a function globally
|
<p>I am working on a test fixture for the slash framework which needs to modify the behavior of <code>time.sleep</code>. For <em>reasons</em> I cannot use pytest, so I am trying to roll my own basic monkeypatching support.</p>
<p>I am able to replace <code>time.sleep</code> easily enough for things that just <code>import time</code>, but some things do <code>from time import sleep</code> before my fixture is instantiated. So far I'm using <code>gc.get_referrers</code> to track down any references to <code>sleep</code> before replacing them:</p>
<pre class="lang-py prettyprint-override"><code>self._real_sleep = time.sleep
for ref in gc.get_referrers(time.sleep):
if (isinstance(ref, dict)
and "__name__" in ref
and "sleep" in ref
and ref["sleep"] is self._real_sleep):
self._monkey_patch(ref)
</code></pre>
<p>In practice this works, but it feels very ugly. I do have access to reasonably current python (currently on 3.11), just limited ability to add 3rd party dependencies. Is there a better/safer way to find references to a thing or to patch out a thing globally, using only standard library methods?</p>
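<p>A common stdlib alternative to <code>gc.get_referrers</code> is to scan <code>sys.modules</code> and rebind every module attribute that <em>is</em> the original function; this catches the <code>from time import sleep</code> users the same way, and recording the patched locations makes the patch cleanly reversible. A sketch:</p>

```python
import sys
import time

def patch_everywhere(original, replacement):
    """Rebind `original` to `replacement` in every loaded module's namespace."""
    patched = []
    for mod_name, module in list(sys.modules.items()):
        module_dict = getattr(module, "__dict__", None)
        if not isinstance(module_dict, dict):
            continue
        for attr, value in list(module_dict.items()):
            if value is original:
                setattr(module, attr, replacement)
                patched.append((mod_name, attr))
    return patched  # keep these so the patch can be undone

holder = {"orig": time.sleep}   # stored in a dict so the scan doesn't rebind it
calls = []

def fake_sleep(seconds):
    calls.append(seconds)       # record instead of sleeping

locations = patch_everywhere(holder["orig"], fake_sleep)
time.sleep(1.5)                 # dispatched to fake_sleep, returns immediately

# Undo the patch using the recorded locations
for mod_name, attr in locations:
    setattr(sys.modules[mod_name], attr, holder["orig"])
```

<p>It only misses references held somewhere other than a module namespace (closures, default arguments, bound attributes on instances), which is the same blind spot any identity-based scan has; <code>gc.get_referrers</code> can see those but, as you found, at the cost of much uglier filtering.</p>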
|
<python><reflection>
|
2024-09-30 23:00:47
| 1
| 3,258
|
superstator
|
79,041,199
| 495,990
|
networkx: identify instances where a node is both an ancestor and descendant of another node
|
<p>I'm not sure if specific terminology exists for this, but I'm looking to identify paths in a directed network where the directionality goes both ways. I'm building a network with a bunch of provided data and by accident stumbled on some artifacts where nodes A and B are both parents/children of each other.</p>
<p>In looking to identify these, I stumbled on the term "<a href="https://networkx.org/documentation/stable/auto_examples/drawing/plot_selfloops.html" rel="nofollow noreferrer">self loop</a>". This is what I want, just for paths longer than a single edge.</p>
<pre><code>import networkx as nx
g = nx.DiGraph()
g.add_edges_from([(0, 1), (0, 2), (0, 3), (1, 3), (3, 0), (3, 3), (2, 0), (1, 2), (2, 1)])
nx.draw_networkx(g, arrows=True, with_labels=True, width=0.2, edge_color='#AAAAAA', arrowsize=20,
node_size=2)
</code></pre>
<p><a href="https://i.sstatic.net/cW63N8jg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cW63N8jg.png" alt="enter image description here" /></a></p>
<p><code>networkx</code> <a href="https://networkx.org/documentation/stable/reference/generated/networkx.classes.function.nodes_with_selfloops.html" rel="nofollow noreferrer">has a function to identify nodes with self loops</a>, but I only get node 3 as a result.</p>
<pre><code>list(nx.nodes_with_selfloops(g))
[3]
</code></pre>
<p>Only node <code>3</code> is identified, but in this case I'd be looking to identify that 1/2 and 0/1 have arrows going both ways. It's not a single edge self loop, but allows 2 > 0 > 1 > 2 and 1 > 3 > 0 > 2 > 1. I start and end at the same node, just with other nodes between.</p>
<p>Is there an efficient way to identify which nodes have bi-directional arrows to the same node? I have not identified enough of these cases to know if the cause will always be a single double-sided arrow or something like 2+ multi-edge paths that start/stop at the same node.</p>
<p>The real world data is like process flows, so there should not be cases where somehow A flows to B and B also flows into A, so these are the data inconsistencies I'm looking to quickly identify.</p>
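<p>For the two-edge case specifically, the check can be sketched over the raw edge list without any graph machinery (a plain-Python illustration, not the networkx API):</p>

```python
# The same edge list as in the question
edges = [(0, 1), (0, 2), (0, 3), (1, 3), (3, 0), (3, 3),
         (2, 0), (1, 2), (2, 1)]
edge_set = set(edges)

# Node pairs linked in both directions, excluding single-edge self loops
mutual = sorted({tuple(sorted((u, v))) for u, v in edge_set
                 if u != v and (v, u) in edge_set})
print(mutual)  # [(0, 2), (0, 3), (1, 2)]
```

<p>In networkx itself, <code>nx.simple_cycles(g)</code> enumerates directed cycles of any length, which might be the more general way to catch the longer multi-edge paths as well.</p>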
|
<python><networkx><digraphs>
|
2024-09-30 22:43:10
| 1
| 10,621
|
Hendy
|
79,041,106
| 13,968,392
|
Get lorenz curve and gini coefficient in pandas
|
<p>How can I get lorenz curve and gini coefficient with the pandas python package? Similar posts on the gini coefficient and lorenz curve mostly concern numpy or R.</p>
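<p>For reference, the Gini coefficient itself reduces to a short formula over sorted values; a minimal pure-Python sketch (the helper name is mine, and it could be fed a column via <code>df['col'].tolist()</code>) is:</p>

```python
def gini(values):
    # Gini from sorted values with 1-based ranks i:
    # G = (2 * sum(i * x_i) - (n + 1) * sum(x)) / (n * sum(x))
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return (2 * weighted - (n + 1) * total) / (n * total)

print(gini([1, 2, 3, 4]))  # 0.25
```

<p>The Lorenz curve is then just the cumulative share of the sorted values (e.g. <code>df['col'].sort_values().cumsum() / df['col'].sum()</code>) plotted against the cumulative population share.</p>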
|
<python><pandas><plot><charts><gini>
|
2024-09-30 21:44:13
| 1
| 2,117
|
mouwsy
|
79,040,904
| 9,665,272
|
Python annotate adding class members in __get__
|
<p>I want to create a decorator that adds a member to the decorated class<br />
that is an instance of the outer class.</p>
<p>In this case, the decorator <code>memberclass</code> adds an attribute <code>self</code>, of type <code>X</code>, to instances of <code>class y</code>.</p>
<pre class="lang-py prettyprint-override"><code>class memberclass[Y]:
def __init__(self, cls: type[Y]):
self.cls = cls
def __get__(self, x, cls) -> Y | type[Y]:
if x is None: return self.cls
# I deleted some lines related to memorizing y per instance of X
y = self.cls()
y.self = x
return y
class X:
z: int
@memberclass
class y:
def f(self):
            # self.self is an instance of X
            # the type checker should be aware of self.self.z
            ...
    # X.y is class y
    # X().y is an instance of that class
    # X().y.self is X()
</code></pre>
<p>My best guess would be that there might be some way<br />
to annotate <code>__get__</code> method to say "<code>Y</code> is getting a new member <code>self</code> with type <code>typeof x</code>".</p>
<hr />
<p>I checked the whole documentation <a href="https://docs.python.org/3/library/typing.html" rel="nofollow noreferrer">https://docs.python.org/3/library/typing.html</a> and didn't find a solution.<br />
This also didn't help <a href="https://stackoverflow.com/questions/75897075/how-to-type-hint-python-magic-get-method">How to type hint python magic __get__ method</a> since it doesn't show how to modify type<br />
This also didn't help <a href="https://stackoverflow.com/questions/65385563/is-it-possible-to-change-method-type-annotations-for-a-class-instance">Is it possible to change method type annotations for a class instance?</a> since it modifies function signature, but not the class members</p>
|
<python><python-typing><python-decorators><python-descriptors>
|
2024-09-30 20:11:08
| 1
| 885
|
Superior
|
79,040,746
| 2,288,506
|
Having trouble displaying an image in an Adw.AboutDialog in python
|
<p>I have the following file structure:</p>
<pre class="lang-bash prettyprint-override"><code>.
├── resources.gresource
├── resources.xml
├── texty.py
└── texty.svg
</code></pre>
<p>The resources.xml:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<gresources>
<gresource prefix="/texty">
<file>texty.svg</file>
</gresource>
</gresources>
</code></pre>
<p>Compiled using:</p>
<pre class="lang-py prettyprint-override"><code>glib-compile-resources resources.xml --target=resources.gresource
</code></pre>
<p>...and registered in the <code>texty.py</code> init function:</p>
<pre class="lang-py prettyprint-override"><code>resources = Gio.Resource.load('./resources.gresource')
Gio.resources_register(resources)
</code></pre>
<p>Retrieved like this it works:</p>
<pre class="lang-py prettyprint-override"><code>image = Gtk.Image.new_from_resource('/texty/texty.svg')
box.append(image)
</code></pre>
<p>...but not in the <code>AboutDialog</code> code:</p>
<pre class="lang-py prettyprint-override"><code>about_dialog = Adw.AboutDialog.new()
about_dialog.set_application_name("texty")
about_dialog.set_application_icon("/texty/texty.svg")
about_dialog.present()
</code></pre>
<p>I get a 'broken image' displayed. No errors. This works in C code. Is there something different I need to do in Python? I tried using a png with the same result.</p>
|
<python><python-3.x><gtk4>
|
2024-09-30 19:09:51
| 1
| 512
|
CraigFoote
|
79,040,679
| 3,048,453
|
Module not found when using UV, py-shiny and external dependency
|
<p>I want to create a shiny app (using py-shiny) where I manage the dependencies using <code>uv</code>.</p>
<p>So far I have used the following code</p>
<pre class="lang-bash prettyprint-override"><code>uv init shiny-tester
cd shiny-tester
uv add shiny
vim app.py # see below
uvx shiny run # works as expected
</code></pre>
<p>where I write the following to <code>app.py</code> (directly taken from the official <a href="https://shiny.posit.co/py/components/inputs/action-button/" rel="nofollow noreferrer">doc</a>):</p>
<pre class="lang-py prettyprint-override"><code>from shiny import App, reactive, render, ui
# will be used later
# import numpy
app_ui = ui.page_fluid(
ui.input_action_button("action_button", "Action"),
ui.output_text("counter"),
)
def server(input, output, session):
@render.text()
@reactive.event(input.action_button)
def counter():
return f"{input.action_button()}"
app = App(app_ui, server)
</code></pre>
<p>This works as expected using <code>uvx shiny run</code> and the app runs.</p>
<p>When I try to add any other package, e.g. <code>numpy</code>, by inserting an <code>import numpy</code> into <code>app.py</code>, I get the error that the module could not be found because I didn't install it yet.
To fix this I run <code>uv add numpy</code>, which works and does not throw any errors.</p>
<p>When I run <code>uvx shiny run</code> however, I still get the error</p>
<pre><code> File "/mnt/c/Users/david/Desktop/shiny-tester/app.py", line 2, in <module>
import numpy
ModuleNotFoundError: No module named 'numpy'
</code></pre>
<p>When I run <code>uv run python</code> and <code>import numpy</code> the module can be found and loaded.</p>
<p>Any idea what might cause this and how to fix it?</p>
<h2>Version etc</h2>
<pre><code>❯ uv --version
uv 0.4.17
❯ cat pyproject.toml
[project]
name = "shiny-tester"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"numpy>=2.1.1",
"shiny>=1.1.0",
]
❯ head uv.lock
version = 1
requires-python = ">=3.12"
resolution-markers = [
"platform_system != 'Emscripten'",
"platform_system == 'Emscripten'",
]
[[package]]
name = "anyio"
version = "4.6.0"
</code></pre>
|
<python><py-shiny><uv>
|
2024-09-30 18:41:57
| 1
| 10,533
|
David
|
79,040,667
| 249,199
|
In free-threaded Python without the GIL or other locks, can mutating an integer field on an object break the interpreter?
|
<p>As of CPython 3.13, the <a href="https://peps.python.org/pep-0703/" rel="nofollow noreferrer">Global Interpreter Lock can now be disabled</a>.</p>
<p>When running without the GIL, I'm curious about what operations are safe, where "safe" means "does not crash or corrupt the interpreter".</p>
<p>Specifically, consider this example code that creates a simple <code>class</code> and increments/decrements an <code>int</code> counter on that class in parallel:</p>
<pre class="lang-py prettyprint-override"><code>import threading
import time
class MyObject:
def __init__(self):
self.counter = 0
def run(self):
while True:
self.counter += 1
self.counter -= 1
instance = MyObject()
for _ in range(10):
threading.Thread(target=instance.run).start()
while True:
print(instance.counter)
time.sleep(1)
</code></pre>
<p>If run in Python (with or without the GIL), I understand that this code, which does not use any explicit locking, is <em>racy</em>, in that <code>print(instance.counter)</code> can observe non-zero values, and values can diverge over time. I am OK with that.</p>
<p>I am curious, however, whether free-threaded Python 3.13+ makes this code <em>safe</em>. Specifically:</p>
<ul>
<li>Can attempts to access <code>instance.counter</code> or <code>self.counter</code> ever segfault, crash, or corrupt interpreter state?</li>
<li>Can attempts to increment or decrement <code>instance.counter</code> ever segfault, crash, or corrupt interpreter state?</li>
<li>Can retrieving and printing/copying/storing <code>instance.counter</code> ever retrieve an unrepresentable or non-integer value?</li>
</ul>
<p>I understand that the answers to this question may be implementation-specific and potentially subject to change.</p>
<p>I am specifically, narrowly interested in mutating numeric fields on pure-Python objects that I fully control. I understand that the presence of other code in intermediate layers of containers or accessor logic may invalidate any safety guarantees that are provided in this specific case.</p>
<h3>What I've Tried</h3>
<p>Running the above code displays expected race conditions in GIL-enabled and GIL-disabled interpreters I have built locally, but does not crash or emit non-integer values. That said, maybe I haven't run it for long enough or on a system that triggers specific race conditions internal to the interpreter.</p>
<p>Reading through the GIL removal <a href="https://peps.python.org/pep-0703/" rel="nofollow noreferrer">PEP</a> and supporting <a href="https://docs.python.org/3.13/howto/free-threading-extensions.html" rel="nofollow noreferrer">documentation for C extension developers</a> does not indicate any provided behavior invariants regarding thread-safety where pure-Python code is concerned.</p>
<p>I know that interpreter-internal locks have been added (which slow down single-threaded processing) in GIL-less interpreters to prevent some instances of state corruption; I'm curious as to whether such a lock is in play in this specific instance, and if so what invariants it provides re: crash safety and observation of invalid intermediate states.</p>
<p>I know that explicit locking is <a href="https://stackoverflow.com/questions/105095">necessary</a> for multithreaded correctness in the presence of mutable data. However, I'm not interested in correctness, but rather safety as defined above.</p>
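<p>For contrast, the conventionally locked version of the counter (which addresses correctness, not the crash-safety I'm asking about) would look like:</p>

```python
import threading

counter = 0
lock = threading.Lock()

def run(n):
    global counter
    for _ in range(n):
        with lock:  # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=run, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```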
<h3>Why I'm Asking</h3>
<ol>
<li>Curiosity.</li>
<li>Concern about safety of pre-existing code I maintain.</li>
<li>Interest in using this pattern to implement deliberately inaccurate mutable numbers for fuzz testing and approximation.</li>
</ol>
|
<python><multithreading><python-multithreading><race-condition><gil>
|
2024-09-30 18:38:16
| 0
| 4,292
|
Zac B
|
79,040,567
| 11,457,006
|
Container based AWS python Lambda function cold start time
|
<p>I've converted a Python and R containerized Lambda to a multistage build, which reduced the size of the container in ECR from 477 MB to 160 MB, but the cold start time is about the same for both (in fact a bit slower for the smaller package: from 2.3s to 2.97s).</p>
<p>GIVEN my Dockerfile looks like this:</p>
<pre><code># Stage 1: Build stage
FROM python:3.10-slim-bullseye as builder
ENV DEBIAN_FRONTEND=noninteractive
# Set up environment variables
ENV LC_ALL=C.UTF-8
ENV LANG=en_US.UTF-8
ENV TZ=:/etc/localtime
ENV PATH=/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin
ENV LD_LIBRARY_PATH=/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib
# Install build dependencies and R in build stage
RUN apt-get update && \
apt-get install -y \
g++ \
make \
cmake \
unzip \
libcurl4-openssl-dev \
r-base-core \
&& apt-get clean
# Set up working directory
ARG FUNCTION_DIR="/var/task"
WORKDIR ${FUNCTION_DIR}
# Copy requirements for Python and R scripts
COPY requirements.txt .
COPY script_calcproper.R .
COPY main.py .
COPY data/. data/.
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Stage 2: Final stage
FROM python:3.10-slim-bullseye
ENV DEBIAN_FRONTEND=noninteractive
# Set up environment variables
ENV LC_ALL=C.UTF-8
ENV LANG=en_US.UTF-8
ENV TZ=:/etc/localtime
ENV PATH=/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin
ENV LD_LIBRARY_PATH=/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib
# Install R runtime dependencies in final stage
RUN apt-get update && \
apt-get install -y --no-install-recommends \
r-base-core \
libcurl4-openssl-dev \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# Define custom function directory
ARG FUNCTION_DIR="/var/task"
WORKDIR ${FUNCTION_DIR}
# Copy only the necessary files from the build stage
COPY --from=builder /usr/local/lib/python3.10/site-packages /usr/local/lib/python3.10/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
COPY --from=builder ${FUNCTION_DIR} ${FUNCTION_DIR}
# Set the CMD to your handler
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "main.handler"]
</code></pre>
<p>Any suggestions as to why the cold start of my function is still so slow? My requirements.txt packages are pandas, rpy2, boto3 and awslambdaric.</p>
|
<python><docker><aws-lambda>
|
2024-09-30 18:07:31
| 0
| 3,875
|
Jesse McMullen-Crummey
|
79,040,530
| 825,227
|
Optimal data structure for assembling data in Python
|
<p>I'm doing a calculation over 1M+ records that's not individually intensive (i.e., <<1s to complete a single calc) but in aggregate is quite sluggish (only ~1000 data points produced per second). I'm wondering if the use of a list/append pattern to create the output time series could be improved here (or, perhaps the obvious answer, this is just a non-trivial calculation and will take a while)? I've already utilized matrix multiplication as best as possible, so I'm not sure how else this could be sped up. Data samples are referenced below as well.</p>
<pre><code>import numpy as np
tlfv = []
for i, (la,lb) in enumerate(zip(LA,LB)):
_ = lvl_b.iloc[i,lb] + \
np.multiply(lvl_b.iloc[i,:] - lvl_b.iloc[i,lb], bk_b.iloc[i,:]).loc[(lvl_b.iloc[i,:] - lvl_b.iloc[i,lb])>0].sum() + \
lvl_a.iloc[i,la] + \
np.multiply(lvl_a.iloc[i,:] - lvl_a.iloc[i,la], bk_a.iloc[i,:]).loc[(lvl_a.iloc[i,la] - lvl_a.iloc[i,:])>0].sum()
tlfv.append(_)
</code></pre>
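<p>For what it's worth, here is how I imagine a fully vectorized version might look — the arrays below are random stand-ins for the real frames, and the variable names just mirror the ones in the loop above:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1_000, 10
lvl_a, lvl_b = rng.random((n, k)), rng.random((n, k))
bk_a, bk_b = rng.random((n, k)), rng.random((n, k))
LA, LB = rng.integers(0, k, n), rng.integers(0, k, n)

rows = np.arange(n)
base_b = lvl_b[rows, LB]                 # lvl_b.iloc[i, lb] for all rows at once
diff_b = lvl_b - base_b[:, None]
term_b = np.where(diff_b > 0, diff_b * bk_b, 0.0).sum(axis=1)

base_a = lvl_a[rows, LA]                 # lvl_a.iloc[i, la] for all rows at once
diff_a = lvl_a - base_a[:, None]
term_a = np.where(-diff_a > 0, diff_a * bk_a, 0.0).sum(axis=1)

tlfv = base_b + term_b + base_a + term_a  # the whole output vector in one pass
```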
<p><strong>lvl_a</strong> -- [1.2M x 10] matrix</p>
<pre><code> 0 1 2 3 4 5 6 7 8 9
38 4487.25 4487.5 4487.75 4488.0 4488.25 4488.5 4488.75 4489.0 4489.25 4489.5
39 4487.25 4487.5 4487.75 4488.0 4488.25 4488.5 4488.75 4489.0 4489.25 4489.5
40 4487.25 4487.5 4487.75 4488.0 4488.25 4488.5 4488.75 4489.0 4489.25 4489.5
41 4487.25 4487.5 4487.75 4488.0 4488.25 4488.5 4488.75 4489.0 4489.25 4489.5
42 4487.0 4487.25 4487.5 4487.75 4488.0 4488.25 4488.5 4488.75 4489.0 4489.25
43 4487.0 4487.25 4487.5 4487.75 4488.0 4488.25 4488.5 4488.75 4489.0 4489.25
44 4487.0 4487.25 4487.5 4487.75 4488.0 4488.25 4488.5 4488.75 4489.0 4489.25
</code></pre>
<p><strong>lvl_b</strong> -- [1.2M x 10] matrix</p>
<pre><code> 0 1 2 3 4 5 6 7 8 9
18 4487.0 4486.75 4486.5 4486.25 4486.0 4485.75 4485.5 4485.25 4485.0 4484.75
19 4487.0 4486.75 4486.5 4486.25 4486.0 4485.75 4485.5 4485.25 4485.0 4484.75
20 4486.75 4486.5 4486.25 4486.0 4485.75 4485.5 4485.25 4485.0 4484.75 4484.5
21 4486.75 4486.5 4486.25 4486.0 4485.75 4485.5 4485.25 4485.0 4484.75 4484.5
22 4486.75 4486.5 4486.25 4486.0 4485.75 4485.5 4485.25 4485.0 4484.75 4484.5
23 4486.75 4486.5 4486.25 4486.0 4485.75 4485.5 4485.25 4485.0 4484.75 4484.5
24 4487.0 4486.75 4486.5 4486.25 4486.0 4485.75 4485.5 4485.25 4485.0 4484.75
25 4487.0 4486.75 4486.5 4486.25 4486.0 4485.75 4485.5 4485.25 4485.0 4484.75
</code></pre>
<p><strong>LA</strong> -- [1.2M x 1] vector</p>
<pre><code> 0
16 7
17 7
18 6
19 7
20 7
21 7
22 7
23 6
24 6
25 6
</code></pre>
<p><strong>LB</strong> -- [1.2M x 1] vector</p>
<pre><code> 0
18 4
19 4
20 3
21 3
22 3
23 3
24 4
25 4
26 4
</code></pre>
<p><strong>bk_a</strong> -- [1.2M x 10] matrix</p>
<pre><code>0 1 2 3 4 5 6 7 8 9
9 6 17 14 16 15 17 14 17 28 30
10 6 17 14 16 15 16 14 17 28 30
11 6 17 14 16 15 16 14 17 28 30
12 6 17 14 16 15 16 14 17 28 30
13 6 17 14 16 15 16 14 17 28 30
14 7 17 14 16 15 16 14 17 28 30
15 7 17 14 16 15 16 14 27 28 30
16 5 17 14 16 15 16 14 27 28 30
</code></pre>
<p><strong>bk_b</strong> -- [1.2M x 10] matrix</p>
<pre><code> 0 1 2 3 4 5 6 7 8 9
22 26 28 34 34 62 39 45 46 28 23
23 26 28 34 34 62 39 45 46 28 23
24 2 26 28 34 34 62 39 45 46 28
25 2 24 28 34 34 62 39 45 46 28
26 2 24 29 34 34 62 39 45 46 28
27 2 24 29 34 35 62 39 45 46 28
28 2 24 29 34 35 62 40 45 46 28
29 2 24 29 34 35 62 40 46 46 28
</code></pre>
<p><strong>tlfv output</strong> -- [1.2M x 1] vector</p>
|
<python>
|
2024-09-30 17:56:32
| 0
| 1,702
|
Chris
|
79,040,327
| 13,142,245
|
Asynchronous S3 downloads in FastAPI
|
<p>I'm trying to add some functionality to my FastAPI application, which will asynchronously update an ML model (pulled from an S3 bucket). The idea is to update this model once hourly without blocking the API's ability to respond to CPU-bound tasks, such as model inference requests.</p>
<pre><code># Global model variable
ml_model = None
# Async function to download model from S3
async def download_model_from_s3(path="integration-tests/artifacts/MULTI.joblib"):
global ml_model
s3_client = boto3.client("s3")
bucket = os.environ.get("BUCKET_BUCKET", "artifacts_bucket")
try:
local_model_path = './model.joblib'
download_coroutine = s3_client.download_file(bucket, path, local_model_path)
await download_coroutine
ml_model = joblib.load(local_model_path)
        logging.info("Model updated.")
except Exception as e:
logging.exception(f"Error downloading or loading model: {e}")
# Asynchronous scheduler function that updates the model every interval
async def scheduler(bucket_name: str, model_key: str, interval=60):
while True:
# Sleep for the specified interval (in minutes)
await asyncio.sleep(interval * 60)
# Call the download function to update the model
await download_model_from_s3(bucket_name, model_key)
app = FastAPI()
# Startup event to start the scheduler
@app.on_event("startup")
async def startup_event():
# BLOCKING: Download the model once at startup to ensure it is available
download_model_from_s3() # Blocking, ensures model is available
# Start the scheduler to update the model every 60 minutes (async, non-blocking)
await scheduler(bucket_name, model_key, interval=60)
</code></pre>
<p>I’m familiar with FastAPI but relatively new to async programming. My question: Is this the right way to asynchronously pull data from S3? Is a separate async S3 client required?</p>
<p>I’ve opted to use a <code>while True</code> loop over explicitly scheduling the job once hourly. However, I’m uncertain about the appropriateness of this technique.</p>
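<p>One variant I've been considering is pushing the blocking client call onto a worker thread with <code>asyncio.to_thread</code>, since boto3 itself is synchronous — the <code>blocking_download</code> stub below stands in for the real <code>s3_client.download_file</code> call:</p>

```python
import asyncio

def blocking_download(bucket, key, dest):
    # stand-in for the synchronous s3_client.download_file(bucket, key, dest)
    return f"downloaded s3://{bucket}/{key} to {dest}"

async def refresh_model():
    # the event loop stays free to serve requests while the thread blocks
    return await asyncio.to_thread(
        blocking_download, "artifacts_bucket", "MULTI.joblib", "./model.joblib"
    )

print(asyncio.run(refresh_model()))
```

<p>An async-native client such as <code>aioboto3</code> would be the other obvious route, but that adds a dependency.</p>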
|
<python><amazon-s3><async-await><fastapi>
|
2024-09-30 16:57:03
| 1
| 1,238
|
jbuddy_13
|
79,040,264
| 2,313,889
|
Python `multiprocessing.Queue` hanging process
|
<p>For whatever reason, when using <code>Process</code> with <code>Queue</code> from <code>multiprocessing</code>, the process hangs when the queue is too big. I haven't yet investigated how big the queue needs to be to cause this.</p>
<p>My initial assumption was that the reason the process hangs is that it used a queue, and at the end the queue was not empty and closed. Emptying and closing the queue does prevent the problem (demo 2 below), but the hang also does not happen when the queue has a few items and is left not empty and not closed (demo 1 below). So I am a bit clueless.</p>
<p>Could someone clarify why the process hangs?</p>
<hr />
<p><strong>DEMO</strong></p>
<ol>
<li>When using <code>p = Process(target=some_function_that_does_not_break)</code>, the console output is:</li>
</ol>
<pre><code>Function started
Function ended
queue size is=100
J1
J2
main ended
</code></pre>
<ol start="2">
<li>When using <code>p = Process(target=some_function_that_also_does_not_break)</code>, the console output is</li>
</ol>
<pre><code>Function started
Function ended. Time filling queue: 1.999461717 seconds
Emptying queue
queue size is=3005706
J1
Queue empty! Time emptying queue: 26.81551289 seconds
Queue closed
J2
main ended
</code></pre>
<p><em>(I have no idea why emptying the queue takes far more time than filling it)</em></p>
<ol start="3">
<li>When using <code>p = Process(target=some_function_that_breaks)</code>, the console output is</li>
</ol>
<pre><code>Function started
Function ended
queue size is=3152815
J1
(execution hanging here)
</code></pre>
<hr />
<p><strong>CODE</strong></p>
<pre><code>#!/usr/bin/env python
import time
from multiprocessing import Process, Queue, Value
q = Queue()
stop = Value("b", False)
# [EDIT]: I added the try...except... to prevent unnecessary comments
# about the queue being full, which should not happen since
# it does not have a max size.
def some_function_that_breaks():
print("Function started")
try:
while not stop.value:
q.put("Item")
except Exception as e:
print(f"Some exception happened: {e}")
print("Function ended")
def some_function_that_does_not_break(queue_size=100):
print("Function started")
for _ in range(queue_size):
q.put("Item")
print("Function ended")
def some_function_that_also_does_not_break():
print("Function started")
time_start = time.perf_counter_ns()
while not stop.value:
q.put("Item")
time_filling_s = (time.perf_counter_ns() - time_start) / 1e9
print(f"Function ended. Time filling queue: {time_filling_s} seconds")
print("Emptying queue")
time_start = time.perf_counter_ns()
while not q.empty():
q.get()
time_emptying_s = (time.perf_counter_ns() - time_start) / 1e9
print(f"Queue empty! Time emptying queue: {time_emptying_s} seconds")
q.close()
print("Queue closed")
p = Process(target=some_function_that_does_not_break)
p.start()
time.sleep(2)
stop.value = True
time.sleep(1)
print(f"queue size is={q.qsize()}")
print("J1")
p.join()
print("J2")
p.join()
print("main ended")
</code></pre>
|
<python><multithreading><queue>
|
2024-09-30 16:37:39
| 1
| 2,031
|
Eduardo Reis
|
79,040,165
| 3,912,693
|
`pipenv install` does not properly install mysqlclient while `pip install` does
|
<pre class="lang-bash prettyprint-override"><code>brew install mysql-client
mkdir foo && cd foo
export PKG_CONFIG_PATH="/opt/homebrew/opt/mysql-client/lib/pkgconfig"
pipenv install mysqlclient
pipenv run python -c 'import MySQLdb'
</code></pre>
<p>exits with an error:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/foo/.local/share/virtualenvs/foo-cSg51m4-/lib/python3.12/site-packages/MySQLdb/__init__.py", line 17, in <module>
from . import _mysql
ImportError: dlopen(/Users/foo/.local/share/virtualenvs/foo-cSg51m4-/lib/python3.12/site-packages/MySQLdb/_mysql.cpython-312-darwin.so, 0x0002): Library not loaded: @rpath/libmysqlclient.24.dylib
Referenced from: <74CE3BA6-6CF0-3583-93EC-6F733D03E6C3> /Users/foo/.local/share/virtualenvs/foo-cSg51m4-/lib/python3.12/site-packages/MySQLdb/_mysql.cpython-312-darwin.so
Reason: tried: '/opt/homebrew/lib/libmysqlclient.24.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/libmysqlclient.24.dylib' (no such file), '/opt/homebrew/lib/libmysqlclient.24.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/libmysqlclient.24.dylib' (no such file)
</code></pre>
<p>and</p>
<pre class="lang-bash prettyprint-override"><code>brew install mysql-client
mkdir foo && cd foo
export PKG_CONFIG_PATH="/opt/homebrew/opt/mysql-client/lib/pkgconfig"
pipenv run pip install mysqlclient
pipenv run python -c 'import MySQLdb'
</code></pre>
<p>exits with zero.</p>
<p>This is unintuitive. Why does <code>pipenv</code> work differently? Is this intended behavior?</p>
<p>Environment: macOS 14.4 arm64 (M3) / Python 3.12 (Homebrew)</p>
<p>I have no idea where to start investigating. Python's dependency management is too complex for me.</p>
|
<python><macos><pipenv>
|
2024-09-30 16:04:44
| 2
| 333
|
Jinux
|
79,040,085
| 9,097,114
|
Page refresh at incremental intervals
|
<p>I have hard-coded <code>if</code> conditions with multiple lines of code to refresh a webpage on every 300th iteration of a <code>for</code> loop.<br />
My code is as below:</p>
<pre><code>if counter == 300:
driver.refresh()
time.sleep(20)
if counter == 600:
driver.refresh()
time.sleep(20)
if counter == 900:
driver.refresh()
time.sleep(20)
if counter == 1200:
driver.refresh()
time.sleep(20)
</code></pre>
<p>Is there any modification to the above to shorten the <code>if</code> conditions?</p>
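<p>What I have in mind is something like a modulo check (the driver calls are stubbed out here so the snippet stands alone):</p>

```python
REFRESH_EVERY = 300

refreshed_at = []
for counter in range(1, 1201):
    if counter % REFRESH_EVERY == 0:
        # driver.refresh(); time.sleep(20)  # the real calls from the question
        refreshed_at.append(counter)

print(refreshed_at)  # [300, 600, 900, 1200]
```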
|
<python>
|
2024-09-30 15:42:03
| 1
| 523
|
san1
|
79,040,042
| 5,278,205
|
How can I group a spark dataframe into rows no more than 50,000 and no more than 90mb in size
|
<p>How can I group a Spark dataframe into partitions of no more than 50,000 rows and no more than 90 MB each?</p>
<p>I've tried the following, but somehow I occasionally get partitions greater than 90 MB.</p>
<pre><code>from pyspark.sql.window import Window
from pyspark.sql.functions import expr, length, sum as sql_sum, row_number, col, count
PARTITION_MB = 90
ROW_LIMIT = 50000
try:
# Select required columns
sdf = spark.table("table_name")
# Calculate total memory usage per row in MB
sdf = sdf.withColumn("json_string", expr("to_json(struct(*))")) \
.withColumn("memory_usage_per_row_MB", length("json_string") / 1024 / 1024) \
.drop("json_string")
# Generate row_number for ordering purposes
window_spec = Window.orderBy(expr("monotonically_increasing_id()"))
# Add row_number column to keep track of row order
sdf = sdf.withColumn("row_number", row_number().over(window_spec))
# Calculate cumulative memory usage
sdf = sdf.withColumn("cumulative_memory_MB", sql_sum("memory_usage_per_row_MB").over(window_spec)) \
.withColumn("cumulative_row_count", row_number().over(window_spec))
# Assign partition id based on memory and row limits
sdf = sdf.withColumn(
"partition_id",
expr(f"""
greatest(
floor(cumulative_memory_MB / {PARTITION_MB}),
floor((row_number - 1) / {ROW_LIMIT})
)
""")
)
# Validate partitions
partition_counts = sdf.groupBy("partition_id").agg(
sql_sum("memory_usage_per_row_MB").alias("partition_memory_MB"),
count("*").alias("row_count")
)
# Count the number of distinct partitions
num_partitions = sdf.select("partition_id").distinct().count()
# Assert that all partitions meet both memory and row constraints
assert partition_counts.filter(col("partition_memory_MB") <= PARTITION_MB).count() == partition_counts.count(), \
f"Error: Some partitions exceed {PARTITION_MB} MB"
assert partition_counts.filter(col("row_count") <= ROW_LIMIT).count() == partition_counts.count(), \
f"Error: Some partitions exceed {ROW_LIMIT} rows"
logger.info(f"Successfully partitioned dataframe by memory into {num_partitions} partitions each less than {PARTITION_MB} MB and less than {ROW_LIMIT} rows.")
except Exception as e:
logger.error(f"Error partitioning dataframe by memory: {e}")
raise Exception(e)
</code></pre>
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2024-09-30 15:30:07
| 0
| 5,213
|
Cyrus Mohammadian
|
79,039,958
| 3,335,606
|
Getting Token Usage Metadata from Gemini LLM calls in LangChain RAG RunnableSequence
|
<p>I would like to have the token utilisation of my RAG chain each time it is invoked.</p>
<p>No matter what I do, I can't seem to find the right way to output the total tokens from the Gemini model I'm using.</p>
<pre class="lang-py prettyprint-override"><code>import vertexai
from langchain_google_vertexai import VertexAI
from vertexai.generative_models import GenerativeModel
vertexai.init(
project='MY_PROJECT',
location="MY_LOCATION",
)
question = "What is the meaning of life"
llm = VertexAI(model_name="gemini-1.5-pro-001",)
response1 = llm.invoke(question)
llm2 = GenerativeModel("gemini-1.5-pro-001",)
response2 = llm2.generate_content(question)
</code></pre>
<p><code>response1</code> above is just a string.</p>
<p><code>response2</code> is what I want i.e. a dictionary containing usage_metadata, safety_rating, finish_reason, etc. But I haven't managed to make my RAG chain run using this approach.</p>
<p>My RAG chain is a <code>RunnableSequence</code> (from <code>langchain_core.runnables</code>) and also I've tried using callbacks as the chain does not support <code>class 'vertexai.generative_models.GenerativeModel'</code></p>
<pre class="lang-py prettyprint-override"><code>from langchain_google_vertexai import VertexAI
from langchain.callbacks.base import BaseCallbackHandler
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.outputs import LLMResult
from langchain_core.messages import BaseMessage
class LoggingHandler(BaseCallbackHandler):
def on_llm_start(self, serialized, prompts, **kwargs) -> None:
print('On LLM Start: {}'.format(prompts))
def on_llm_end(self, response: LLMResult, **kwargs) -> None:
print('On LLM End: {}'.format(response))
callbacks = [LoggingHandler()]
llm = VertexAI(model_name="gemini-1.5-pro-001",)
prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")
chain = prompt | llm
chain_with_callbacks = chain.with_config(callbacks=callbacks)
response = chain_with_callbacks.invoke({"number": "2"})
</code></pre>
<p>This shows the content below</p>
<pre><code>On LLM Start: ['Human: What is 1 + 2?']
On LLM End: generations=[[GenerationChunk(text='Human: What is 1 + 2?\nAssistant: 3 \n', generation_info={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': })]] llm_output=None run=None
</code></pre>
<p>I.e. no usage metadata.</p>
<p>Any idea how to have the usage metadata for each RAG chain call?</p>
|
<python><langchain><google-cloud-vertex-ai><retrieval-augmented-generation>
|
2024-09-30 15:04:06
| 1
| 1,659
|
Matheus Torquato
|
79,039,949
| 14,463,396
|
Pandas pct_change but loop back to start
|
<p>I'm looking at how to use the pandas pct_change() function, but I need the values to 'wrap around', so the last and first values create a percent-change value in position 0 rather than NaN.</p>
<p>For example:</p>
<pre><code>df = pd.DataFrame({'Month':[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'Value':[1, 0.9, 0.8, 0.75, 0.75, 0.8, 0.7, 0.65, 0.7, 0.8, 0.85, 0.9]})
Month Value
0 1 1.00
1 2 0.90
2 3 0.80
3 4 0.75
4 5 0.75
5 6 0.80
6 7 0.70
7 8 0.65
8 9 0.70
9 10 0.80
10 11 0.85
11 12 0.90
</code></pre>
<p>Using pct_change() + 1 gives:</p>
<pre><code>df['percent change'] = df['Value'].pct_change() + 1
Month Value percent change
0 1 1.00 NaN
1 2 0.90 0.900000
2 3 0.80 0.888889
3 4 0.75 0.937500
4 5 0.75 1.000000
5 6 0.80 1.066667
6 7 0.70 0.875000
7 8 0.65 0.928571
8 9 0.70 1.076923
9 10 0.80 1.142857
10 11 0.85 1.062500
11 12 0.90 1.058824
</code></pre>
<p>However, I also need to know the % change between December (month=12) and January (month=1), so the NaN should be 1.111111. I hope to eventually do this for several groups within a groupby, so muddling about filling in the NaN with one value over another, or manually calculating all the percentages, seems a long-winded way to do it. Is there a simpler way to achieve this?</p>
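<p>A minimal sketch of one possible approach (my own, not from the question): <code>numpy.roll</code> shifts the series cyclically, so dividing by the rolled values gives the wrap-around ratio directly; inside a groupby the same idea would apply per group via <code>transform</code>.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Month': range(1, 13),
                   'Value': [1, 0.9, 0.8, 0.75, 0.75, 0.8,
                             0.7, 0.65, 0.7, 0.8, 0.85, 0.9]})

# np.roll(v, 1) moves December's value into position 0, so row 0
# becomes January / December = 1.00 / 0.90 (about 1.111111).
v = df['Value'].to_numpy()
df['percent change'] = v / np.roll(v, 1)
```

The remaining rows match <code>pct_change() + 1</code> exactly; only the first row changes from NaN to the December-to-January ratio.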
|
<python><pandas>
|
2024-09-30 15:00:45
| 3
| 3,395
|
Emi OB
|
79,039,864
| 3,907,561
|
Why (x / y)[i] faster than x[i] / y[i]?
|
<p>I'm new to <code>CuPy</code> and CUDA/GPU computing. Can someone explain why <code>(x / y)[i]</code> is faster than <code>x[i] / y[i]</code>?</p>
<p>When taking advantage of GPU-accelerated computation, are there any guidelines that would let me quickly determine which operation is faster, so that I can avoid benchmarking every operation?</p>
<pre class="lang-py prettyprint-override"><code># In VSCode Jupyter Notebook
import cupy as cp
from cupyx.profiler import benchmark
x = cp.arange(1_000_000)
y = (cp.arange(1_000_000) + 1) / 2
i = cp.random.randint(2, size=1_000_000) == 0
x, y, i
# Output:
(array([ 0, 1, 2, ..., 999997, 999998, 999999], shape=(1000000,), dtype=int32),
array([5.000000e-01, 1.000000e+00, 1.500000e+00, ..., 4.999990e+05,
4.999995e+05, 5.000000e+05], shape=(1000000,), dtype=float64),
array([ True, False, True, ..., True, False, True], shape=(1000000,), dtype=bool))
</code></pre>
<pre class="lang-py prettyprint-override"><code>def test1(x, y, i):
return (x / y)[i]
def test2(x, y, i):
return x[i] / y[i]
print(benchmark(test1, (x, y, i)))
print(benchmark(test2, (x, y, i)))
# Output:
test1: CPU: 175.164 us +/- 61.250 (min: 125.200 / max: 765.100) us GPU-0: 186.001 us +/- 67.314 (min: 134.144 / max: 837.568) us
test2: CPU: 342.364 us +/- 130.840 (min: 223.000 / max: 1277.600) us GPU-0: 368.133 us +/- 136.911 (min: 225.504 / max: 1297.408) us
</code></pre>
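<p>For what it's worth, the two expressions are mathematically equivalent; a plausible explanation for the timing gap (an assumption about the kernels involved, not something verified against CuPy internals) is the number of boolean compactions: masking with <code>[i]</code> requires a stream-compaction step, and <code>x[i] / y[i]</code> performs that step twice plus a division, while <code>(x / y)[i]</code> performs one cheap elementwise division and a single compaction. A NumPy sketch confirming the numerical equivalence:</p>

```python
import numpy as np

x = np.arange(1_000, dtype=np.float64)
y = (np.arange(1_000) + 1) / 2
i = np.arange(1_000) % 2 == 0

a = (x / y)[i]   # one elementwise kernel + one boolean compaction
b = x[i] / y[i]  # two boolean compactions + one elementwise kernel
assert np.allclose(a, b)
```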
|
<python><cuda><cupy>
|
2024-09-30 14:37:26
| 1
| 1,167
|
huang
|
79,039,724
| 3,564,164
|
Push all dependencies to a private python package index
|
<p>I am building my own python package and uploading it to a private python package index using <code>devpi</code>.</p>
<p>At the moment, I am installing my package this way:
<code>pip install --index-url <my-index-url> --extra-index-url <pypi-index></code>.</p>
<p>The reason I am using the <code>--extra-index-url</code> is that my Python package depends on publicly available Python packages, e.g. NumPy. My challenge is that, nowadays, there is a (new) publicly available package whose name conflicts with mine, and the <code>pip install</code> command is trying to install the publicly available version of the package.</p>
<ul>
<li>How can I tackle this issue? My current thinking is that I should upload all project dependencies to my private repository and get rid of the <code>--extra-index-url</code> but I am not sure how to do that. My package has a <code>setup.py</code>.</li>
</ul>
|
<python><pip><python-packaging><devpi>
|
2024-09-30 14:04:14
| 1
| 1,919
|
ryuzakinho
|
79,039,709
| 5,462,743
|
Databricks custom keep alive cluster
|
<p>I've upgraded my Databricks cluster from 10.4 LTS to 12.2 LTS and I have a breaking change in the way we use the cluster.</p>
<p>For some context, we deploy python code on Azure Machine Learning VMs that will connect to Databricks clusters.</p>
<p>We have one cluster that we share between algorithms, so that all Python runs can access the same cluster and scale horizontally. It's useful for multiple users and also lowers our costs.</p>
<p>To lower our costs even more, we decided to set the <code>terminated after X minutes of inactivity</code> option to 10 minutes. But some of our algorithms need the Spark session to be kept alive for longer than 10 minutes. Most of the time we write the DF into a temp table or a file, and we load it up after. But sometimes it's best for us to keep the session open to keep the DF alive.</p>
<p>For this we made a thread to <code>ping</code> the cluster every 9 minutes.</p>
<pre class="lang-py prettyprint-override"><code> if keep_alive:
# Keep Spark alive in a separate thread
self._keep_spark_alive_thread = threading.Thread(
target=self._keep_spark_alive,
daemon=True,
)
self._keep_spark_alive_thread.start()
</code></pre>
<pre class="lang-py prettyprint-override"><code> def _keep_spark_alive(self):
"""
Keeps the Spark session alive by running a dummy job and sleeping for a specified interval.
This method runs an infinite loop that periodically executes a dummy job on the Spark session
to prevent it from being terminated due to inactivity. It sleeps for a specified interval between
each execution of the dummy job.
Returns:
None
"""
while True:
# Run a dummy job to keep Spark alive
try:
logging.debug("Keeping Spark alive...")
self._spark.sql(f"SELECT '{self.project_name}'").collect()
# Sleep for 9 minutes
# time.sleep(9 * 60)
time.sleep(20) # 20 seconds for debug
except Exception as e:
logging.debug(f"Error keeping Spark alive: {e}")
</code></pre>
<p>Using this thread on a 10.4 LTS cluster, the Spark SQL is sent and keeps the cluster alive.
But on 12.2 LTS, the Spark SQL is sent BUT it is ignored as <code>activity</code>, and if there's no other query/action within 10 minutes, the cluster shuts down.</p>
<p>I tried lowering the sleep to 20 seconds, and here is what happened. Every 20 seconds I could see the activity in the Spark logs, but after 10 minutes it started to shut down. At 10 minutes and 20 seconds, it starts up again because of the thread, but I lose the Spark session since the cluster is restarted.</p>
<p>My debug logs:</p>
<pre><code>Keeping Spark alive...
Keeping Spark alive...
24/09/30 13:54:05 WARN SparkServiceRPCClient: The cluster seems to be down. A
24/09/30 13:54:06 WARN SparkServiceRPCClient: Cluster xxxx-xxxxxx-xxxxxxxx in
24/09/30 13:54:16 WARN SparkServiceRPCClient: Cluster xxxx-xxxxxx-xxxxxxxx in
24/09/30 13:54:26 WARN SparkServiceRPCClient: Cluster xxxx-xxxxxx-xxxxxxxx in
24/09/30 13:54:37 WARN SparkServiceRPCClient: Cluster xxxx-xxxxxx-xxxxxxxx in
24/09/30 13:54:47 WARN SparkServiceRPCClient: Cluster xxxx-xxxxxx-xxxxxxxx in
24/09/30 13:54:57 WARN SparkServiceRPCClient: Cluster xxxx-xxxxxx-xxxxxxxx in
Error keeping Spark alive: requirement failed: Result for RPC Some(87d40863-d
Keeping Spark alive...
Keeping Spark alive...
</code></pre>
<p>Do you know why <code>spark.sql(f"SELECT '{self.project_name}'").collect()</code> is not counted as activity? Does the new version of PySpark know that the query is the same and thus keep it cached somewhere?</p>
|
<python><pyspark><databricks><azure-databricks><databricks-connect>
|
2024-09-30 14:00:47
| 1
| 1,033
|
BeGreen
|
79,039,539
| 10,866,453
|
How to use FastAPI-Cache2 package in the application
|
<p>I want to implement Redis caching using <a href="https://pypi.org/project/fastapi-cache2/0.1.3.3/" rel="nofollow noreferrer">FastAPI-Cache2</a> and I have implemented the caching based on the <a href="https://github.com/long2ice/fastapi-cache?tab=readme-ov-file#quick-start:%7E:text=Usage-,Quick%20Start,-from%20collections." rel="nofollow noreferrer">documentation</a>:</p>
<pre><code>async def init_cache():
try:
redis = aioredis.from_url( # type: ignore
appConfig.REDIS_ENDPOINT,
encoding='utf-8',
decode_responses=False,
)
FastAPICache.init(RedisBackend(redis), prefix='igw-cache')
logging.info('Cache initialized successfully.')
except Exception as e:
logging.error(f'Error initializing cache: {e}')
@asynccontextmanager
async def lifespan(_app: FastAPI) -> AsyncIterator[None]:
await init_cache()
yield
</code></pre>
<p>and finally:</p>
<pre><code> app = FastAPI(lifespan=lifespan, docs_url=docs_url)
@router.get("/test-cache")
@cache(expire=60)
async def index(request: Request, response: Response):
logging.info('The endpoint executed...')
return dict(hello="world")
</code></pre>
<p>Whenever I call the endpoint, I see a unique key and value in Redis, but the <code>The endpoint executed...</code> log is also printed every time. How can I ensure it works as expected, i.e. that the endpoint body is not executed again for 60 seconds?</p>
|
<python><caching><fastapi><fastapi-cache2>
|
2024-09-30 13:18:44
| 1
| 2,181
|
Abbasihsn
|
79,039,426
| 22,258,429
|
Is it possible to perform conditional unpacking in python?
|
<p>I'm currently building an interface between several APIs in a Python project.</p>
<p>Very basically, at some point I have to call a method on a "Builder" object. There are two types of builders, a GetBuilder and an AggregateBuilder (I'm using the Weaviate API, for people wondering). They both allow me to set up a hybrid search as follows:</p>
<pre><code>search_settings = {param_1: value_1, param_2: value_2, ...}
get_builder = get_builder.with_hybrid(**search_settings)
aggregate_builder = aggregate_builder.with_hybrid(search_settings)
</code></pre>
<p>Notice that the only difference between the two syntaxes is the dict unpacking for the GetBuilder.</p>
<p>My question is general: is there a clean way to perform conditional unpacking?</p>
<p>So far, the only options I have found are :</p>
<h3>1 if - else</h3>
<pre><code>aggregate: bool
builder: GetBuilder | AggregateBuilder
if aggregate:
builder = builder.with_hybrid(settings)
else:
builder = builder.with_hybrid(**settings)
</code></pre>
<p>Efficient and clear, but it feels redundant.</p>
<h3>2 using eval</h3>
<pre><code>aggregate: bool
builder: GetBuilder | AggregateBuilder
builder = eval("builder.with_hybrid({}settings)".format("**"*(not aggregate)))
</code></pre>
<p>It works, but it's unclear, and the <code>eval</code> function just feels wrong to use.</p>
<p>Is there another way to perform a conditional unpacking cleanly?</p>
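<p>One further option (my own sketch, not from any library): build the positional/keyword argument pair conditionally and make a single unpacked call, which avoids both the duplicated call and <code>eval</code>. The <code>with_hybrid</code> stand-in below is hypothetical, just to make the sketch runnable.</p>

```python
def with_hybrid(*args, **kwargs):
    # Stand-in for the builder method, only so the sketch is self-contained.
    return args, kwargs

settings = {'param_1': 1, 'param_2': 2}
aggregate = True

# Choose the call shape once, then make one call with both unpackings.
args, kwargs = ((settings,), {}) if aggregate else ((), settings)
result = with_hybrid(*args, **kwargs)
```

With <code>aggregate=True</code> the dict is passed positionally; with <code>aggregate=False</code> it is unpacked as keyword arguments.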
|
<python><conditional-statements><unpack>
|
2024-09-30 12:51:00
| 1
| 422
|
Lrx
|
79,039,231
| 629,960
|
How to read async stream from non-async method?
|
<p>I use FastAPI to create the app. One of the features is uploading files with a PUT request.
FastAPI supports uploading files with POST requests (multipart form),
but I need a PUT request where the file is just the body of the request.</p>
<p>I have found that I can use a Request object and there is the stream() method to access the request body.</p>
<p>However, I have got a problem with async code. The rest of my code is not async; originally it was a WSGI application.</p>
<pre><code>@f_app.put("/share/{share_name}/file/upload")
def upload_file(request: Request, share_name:str):
path = request.headers.get("x-file-path","")
request.state.api_app.router.input.with_stream(request.stream())
return UploadAPIEndpoint(request.state.api_app).action("put",
share_name = share_name, path = path)
</code></pre>
<p>This code fails with the error</p>
<pre><code>'async_generator' object has no attribute 'read'
</code></pre>
<p>Before this code worked with WSGI and it was</p>
<pre><code>request.state.api_app.router.input.with_stream(environment['wsgi.input'])
</code></pre>
<p>Internally in that method, a read() operation is executed on the object passed to with_stream().</p>
<p>request.stream() is "async" in FastAPI.</p>
<p>How can I solve this without changing my code to be async in many places?</p>
<p>Maybe it is possible to have some extra class that works like a wrapper over the async stream?
Or maybe it is possible to use tricks like "channels" or "queues" to run two parallel coroutines? One would read from the async stream and put to the queue/channel, and my main code would read from that shared queue/channel.</p>
<p>Do you have any examples where it is solved?</p>
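<p>One possible shape for the wrapper idea (a sketch under the assumption that the async generator can be driven from a background thread with its own loop; FastAPI's <code>request.stream()</code> belongs to the server's already-running loop, so there you would schedule the pump with <code>asyncio.run_coroutine_threadsafe</code> instead of <code>asyncio.run</code>):</p>

```python
import asyncio
import queue
import threading

class SyncStreamAdapter:
    """Blocking file-like wrapper around an async byte generator.

    A background thread drives the generator on its own event loop and
    feeds chunks into a bounded queue (the bound gives backpressure);
    read() blocks on that queue like a normal file object.
    """

    def __init__(self, agen, maxsize=8):
        self._q = queue.Queue(maxsize=maxsize)
        self._buf = b""
        self._done = False
        self._thread = threading.Thread(
            target=self._pump, args=(agen,), daemon=True)
        self._thread.start()

    def _pump(self, agen):
        async def run():
            async for chunk in agen:
                self._q.put(chunk)
            self._q.put(None)  # sentinel: end of stream
        asyncio.run(run())

    def read(self, size=-1):
        # Accumulate chunks until we have `size` bytes or the stream ends.
        while not self._done and (size < 0 or len(self._buf) < size):
            chunk = self._q.get()
            if chunk is None:
                self._done = True
                break
            self._buf += chunk
        if size < 0:
            data, self._buf = self._buf, b""
        else:
            data, self._buf = self._buf[:size], self._buf[size:]
        return data
```

An object like this could then be handed to <code>with_stream(...)</code> in place of <code>environment['wsgi.input']</code>, since it exposes the same blocking <code>read()</code>.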
|
<python><asynchronous><async-await><python-asyncio><fastapi>
|
2024-09-30 11:48:34
| 1
| 2,113
|
Roman Gelembjuk
|
79,039,151
| 10,509,418
|
ParserError when reading csv file from github
|
<p>I'm getting a <code>ParserError</code> when I try to read a csv file directly from github:</p>
<pre><code>import pandas as pd
url = 'https://github.com/marcopeix/AppliedTimeSeriesAnalysisWithPython/tree/main/data/jj.csv'
df = pd.read_csv(url)
ParserError: Error tokenizing data. C error: Expected 1 fields in line 41, saw 29
</code></pre>
<p>but if I download the file and read it from disk, it works without issues:</p>
<pre><code>df = pd.read_csv('/home/data/jj.csv')
</code></pre>
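<p>A likely explanation: the <code>/tree/</code> URL returns GitHub's HTML page for the file, not the CSV itself, so pandas tries to tokenize an HTML document. Pointing pandas at the raw-content URL should work; a sketch of the URL rewrite (the actual <code>read_csv</code> network call is left commented out):</p>

```python
# The /tree/ URL serves GitHub's HTML viewer page; the raw file lives
# on raw.githubusercontent.com without the /tree/ path segment.
url = 'https://github.com/marcopeix/AppliedTimeSeriesAnalysisWithPython/tree/main/data/jj.csv'
raw_url = (url
           .replace('github.com', 'raw.githubusercontent.com')
           .replace('/tree/', '/'))
# df = pd.read_csv(raw_url)
```

Alternatively, clicking the "Raw" button on the GitHub page gives the same URL directly.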
|
<python><pandas>
|
2024-09-30 11:28:30
| 2
| 441
|
locus
|
79,039,087
| 6,438,779
|
How do I update an azure monitor LogSearchRuleResource using the python sdk?
|
<p>I'd like to use the Python SDK to retrieve and then update alerts in Azure Monitor. However, there seems to be an issue with the SDK where it will not understand that I'm updating an existing alert - as opposed to creating a new one. Here is my example:</p>
<pre><code>from azure.core.credentials import AccessToken, TokenCredential
from azure.mgmt.monitor import MonitorManagementClient
credentials = AccessTokenCredential(AZURE_ACCESS_TOKEN)
monitor_client = MonitorManagementClient(credentials, AZURE_SUBSCRIPTION_ID)
alert_rule_name = "This is my alert_rule"
resource_group_name = "myresourcegroup"
alert_rule = monitor_client.scheduled_query_rules.get(
resource_group_name,
alert_rule_name)
monitor_client.scheduled_query_rules.create_or_update(
resource_group_name,
alert_rule_name,
alert_rule)
</code></pre>
<p>This code is very simple - it authenticates (successfully in my case), retrieves an alert rule (also successfully) of type <code>azure.mgmt.monitor.models.LogSearchRuleResource</code>, and then sends it right back.</p>
<p>The result:</p>
<pre><code>HttpResponseError: (BadRequest) The query of a metric measurement alert rule must include an AggregatedValue column of a numeric type
</code></pre>
<p>This suggests to me that the sdk doesn't recognize this as an update, but instead a new alert. But it could also be some other issue with the SDK.</p>
<p>I would expect this resource - when returned to the API - to be a valid transaction with the server. Server gives me a resource that I requested. I give it back to the server - server should accept it. No?</p>
|
<python><azure><rest><azure-monitor>
|
2024-09-30 11:10:02
| 1
| 3,561
|
PaulG
|
79,039,069
| 17,721,722
|
Cancelling All Tasks on Failure with `concurrent.futures` in Python
|
<p>I am using Python's <code>concurrent.futures</code> library with <code>ThreadPoolExecutor</code> and <code>ProcessPoolExecutor</code>. I want to implement a mechanism to cancel all running or unexecuted tasks if any one of the tasks fails. Specifically, I want to:</p>
<ol>
<li>Cancel all futures (both running and unexecuted) when a task fails.</li>
<li>Raise the error that caused the first task to fail if that error is silently ignored; otherwise, let Python raise it naturally.</li>
</ol>
<p>Here is the approach I have tried:</p>
<pre class="lang-py prettyprint-override"><code>from concurrent.futures import ProcessPoolExecutor, as_completed
from functools import partial
copy_func = partial(copy_from, table_name=table_name, column_string=column_string)
with ProcessPoolExecutor(max_workers=cores_to_use) as executor:
futures = {executor.submit(copy_func, file_path): file_path for file_path in file_path_list}
for f in as_completed(futures):
try:
f.result()
except Exception as e:
executor.shutdown(wait=False) # Attempt to stop the executor
for future in futures:
future.cancel() # Cancel all futures
raise e # Raise the exception
</code></pre>
<h3>Questions:</h3>
<ul>
<li>Is this the correct way to handle task cancellation in <code>ThreadPoolExecutor</code> and <code>ProcessPoolExecutor</code>?</li>
<li>Are there any better approaches to achieve this functionality?</li>
<li>How can I ensure that the raised exception is not silently ignored?</li>
<li>How can I free up all resources used by <code>concurrent.futures</code> after an exception?</li>
</ul>
<p>Thank you!</p>
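<p>One direction worth noting (a sketch, not a full answer): on Python 3.9+, <code>executor.shutdown(wait=False, cancel_futures=True)</code> does the mass cancellation in one call - queued tasks are dropped, though tasks already running cannot be interrupted (especially in <code>ProcessPoolExecutor</code>). Exiting the <code>with</code> block then releases the pool's resources. The <code>work</code> function below is hypothetical, just to exercise the pattern:</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def work(n):
    # Hypothetical task: task 3 fails fast, the rest sleep briefly.
    if n == 3:
        raise ValueError("task 3 failed")
    time.sleep(0.05)
    return n

with ThreadPoolExecutor(max_workers=2) as executor:
    futures = {executor.submit(work, n): n for n in range(30)}
    first_error = None
    for f in as_completed(futures):
        try:
            f.result()
        except Exception as e:
            first_error = e
            # cancel_futures=True (3.9+) discards everything still queued;
            # already-running tasks finish, then the pool shuts down.
            executor.shutdown(wait=False, cancel_futures=True)
            break

cancelled = sum(f.cancelled() for f in futures)
```

Re-raising <code>first_error</code> after the <code>with</code> block (rather than inside it) keeps the exception from being swallowed while the pool tears down.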
|
<python><error-handling><concurrency><multiprocessing><concurrent.futures>
|
2024-09-30 11:03:24
| 3
| 501
|
Purushottam Nawale
|
79,038,972
| 11,638,153
|
How to find integer index of a string in a column in pandas dataframe?
|
<p>I am importing a CSV file in pandas containing data like this. Referring to the following code, I want to get the integer index of the row containing <code>name_to_search</code> in column <code>name</code> in <code>df1</code>.</p>
<pre><code>name, ColB, ColC, ColD
P1, 1,1,1
P2, 0,1,0
P3, 1,1,0
...
df1 = pd.read_csv(filepath_or_buffer='file.csv', header=[0])
df1['name'].str.lower()
name_to_search = 'p1'
row_indx1 = df1.index.get_loc(df1[df1['name'] == name_to_search].index[0]) # error line
</code></pre>
<p>However, I am getting the error <code>IndexError: index 0 is out of bounds for axis 0 with size 0</code> on the <code>error line</code> row. Any idea how to fix this?</p>
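<p>A likely cause, shown as a runnable sketch: <code>df1['name'].str.lower()</code> returns a new Series and the result is discarded, so the comparison still sees <code>'P1'</code>, not <code>'p1'</code>, and the boolean mask matches nothing. Assigning the lowered (and stripped) column back fixes it:</p>

```python
import io
import pandas as pd

csv = "name,ColB,ColC,ColD\nP1,1,1,1\nP2,0,1,0\nP3,1,1,0\n"
df1 = pd.read_csv(io.StringIO(csv), header=0)

# str.lower() is not in-place: assign it back (strip() also guards
# against stray whitespace around the names).
df1['name'] = df1['name'].str.strip().str.lower()

name_to_search = 'p1'
matches = df1.index[df1['name'] == name_to_search]
row_indx1 = matches[0] if len(matches) else None
```

Guarding with <code>len(matches)</code> also avoids the <code>IndexError</code> when the name genuinely isn't present.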
|
<python><pandas><string><dataframe>
|
2024-09-30 10:37:36
| 3
| 441
|
ewr3243
|
79,038,753
| 1,516,331
|
Coroutines are stopped in asyncio.gather(*aws, return_exceptions=False) when exception happens
|
<p>My question is from this <a href="https://python-forum.io/thread-21211.html" rel="nofollow noreferrer">post</a>. I'll describe it here.</p>
<p>I have the following Python code:</p>
<pre><code>import asyncio, time
async def fn1(x):
await asyncio.sleep(3)
print(f"fn1 is called with x={x}")
return "fn1 SUCCESS"
async def fn2(y):
await asyncio.sleep(2)
print(f"fn2 is called with y={y}")
raise asyncio.TimeoutError()
print(y, '*'*10)
return "fn2 SUCCESS"
async def main():
print("start:",time.ctime())
result = ['No results']
try:
result = await asyncio.gather(
fn1("fn1"),
fn2("fn2"),
return_exceptions=False,
# return_exceptions=True,
)
except Exception as e:
print('e:', type(e), str(e))
print("end:",time.ctime())
print(result)
asyncio.run(main())
</code></pre>
<p>This is the result I got:</p>
<pre><code>start: Mon Sep 30 17:25:28 2024
fn2 is called with y=fn2
e: <class 'TimeoutError'>
end: Mon Sep 30 17:25:30 2024
['No results']
</code></pre>
<p>According to <a href="https://docs.python.org/3/library/asyncio-task.html#asyncio.gather" rel="nofollow noreferrer"><code>awaitable asyncio.gather(*aws, return_exceptions=False)¶</code></a>,</p>
<blockquote>
<p>If return_exceptions is False (default), the first raised exception is
immediately propagated to the task that awaits on gather(). <strong>Other awaitables in the aws sequence won’t be cancelled and will continue to run.</strong></p>
</blockquote>
<p>But this is contrary to the result I got. <strong>Why does the <code>fn1("fn1")</code> coroutine not get to finish running?</strong> If it finished running, the line <code>print(f"fn1 is called with x={x}")</code> should print it out.</p>
<p>Then, I made a simple change to <code>fn1</code>, where the "sleep" time is shorter:</p>
<pre><code>async def fn1(x):
await asyncio.sleep(3-1)
print(f"fn1 is called with x={x}")
return "fn1 SUCCESS"
</code></pre>
<p>In this time the coroutine <code>fn1</code> gets to finish:</p>
<pre><code>start: Mon Sep 30 17:30:29 2024
fn1 is called with x=fn1
fn2 is called with y=fn2
e: <class 'TimeoutError'>
end: Mon Sep 30 17:30:31 2024
['No results']
</code></pre>
<p>This is unexpected for me! The behaviour here seems to be out of line with the doc's statement that "Other awaitables in the aws sequence won’t be cancelled and will continue to run." Can you please explain why?</p>
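<p>For what it's worth, the docs are right that <code>gather()</code> itself does not cancel the other awaitables; what kills <code>fn1</code> here is <code>asyncio.run()</code>: once <code>main()</code> catches the exception and returns, <code>asyncio.run()</code> cancels every task still pending on the loop during its shutdown. With <code>sleep(3-1)</code>, <code>fn1</code> simply happens to finish before <code>main()</code> returns. A small sketch showing that the surviving task completes if the loop is kept alive:</p>

```python
import asyncio

async def slow():
    await asyncio.sleep(0.1)
    return "done"

async def fast_fail():
    await asyncio.sleep(0.01)
    raise asyncio.TimeoutError()

async def main():
    t1 = asyncio.ensure_future(slow())
    try:
        await asyncio.gather(t1, fast_fail())
    except asyncio.TimeoutError:
        # gather propagated the error, but t1 is still running; if main()
        # returned here, asyncio.run() would cancel it during teardown.
        return await t1  # keep the loop alive until the survivor finishes
    return "unreachable"

outcome = asyncio.run(main())
```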
|
<python><asynchronous><async-await><python-asyncio>
|
2024-09-30 09:36:48
| 1
| 3,190
|
CyberPlayerOne
|
79,038,720
| 216,681
|
SQLAlchemy Many-to-Many using PostgreSQL's on_conflict_do_update doesn't get committed
|
<p>I'm trying to use a many-to-many relationship in SQLAlchemy with PostgreSQL. I want to take advantage of the <code>on_conflict_do_update</code>, so I have to use <code>insert</code> statements instead of creating an ORM object, adding it to the session, and committing.</p>
<p>I've discovered that when an object is created this way, adding a many-to-many relationship with <code>thing.relationship.append</code> doesn't work. I.e. when you commit, there's no new row in the association table.</p>
<p>I've reproduced the problem below with tables B and C (and BC to join them). I noted the way to make the relationship work, but you can't run the code twice because it'll lead to a UniqueConstraint error.</p>
<pre class="lang-py prettyprint-override"><code>from typing import List
from sqlalchemy import Column, create_engine, ForeignKey, String, Table
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship, Session
from sqlalchemy.dialects.postgresql import insert
class Base(DeclarativeBase):
pass
bc_table = Table(
"bc",
Base.metadata,
Column("b_id", ForeignKey("b.id", ondelete="CASCADE"), index=True, primary_key=True),
Column("c_id", ForeignKey("c.id", ondelete="CASCADE"), index=True, primary_key=True)
)
class B(Base):
__tablename__ = 'b'
id: Mapped[int] = mapped_column(primary_key=True)
foo: Mapped[str] = mapped_column(String, unique=True)
cs: Mapped[List["C"]] = relationship(back_populates="bs", secondary=bc_table)
class C(Base):
__tablename__ = 'c'
id: Mapped[int] = mapped_column(primary_key=True)
bar: Mapped[str] = mapped_column(String, unique=True)
bs: Mapped[List["B"]] = relationship(back_populates="cs", secondary=bc_table)
engine = create_engine('postgresql://postgres:hunter2@localhost:5432/replica')
Base.metadata.create_all(engine)
with Session(engine) as session, session.begin():
# I have to use an insert to take advantage of `on_conflict_do_update`
set_ = {'foo': 1}
b_stmt = insert(B).values(set_)
b_stmt = b_stmt.on_conflict_do_update(index_elements=[B.foo], set_=set_)
cro = session.execute(b_stmt)
b = B(id=cro.inserted_primary_key[0], **set_)
set_ = {'bar': 1}
c_stmt = insert(C).values(set_)
c_stmt = c_stmt.on_conflict_do_update(index_elements=[C.bar], set_=set_)
cro = session.execute(c_stmt)
c = C(id=cro.inserted_primary_key[0], **set_)
# Try to create a many-to-many relationship entry between b and c.
c.bs.append(b)
## The easy way would be to do this, but I can't because I need to use on_conflict_do_update
# b = B(foo=1)
# c = C(bar=1)
# c.bs.append(b)
# session.add_all([b, c])
session.commit()
</code></pre>
<p>How can I make <code>c.bs.append</code> get committed? Is the solution just to check whether the <code>b</code>s or <code>c</code>s exists and only create it via <code>b = ...; c = ...; session.add_all([b,c])</code>? I'd prefer the atomicity of <code>on_conflict_do_update</code>.</p>
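<p>One possible direction, keeping the atomicity of core inserts: skip the ORM <code>append</code> for the association row and insert into <code>bc_table</code> directly with <code>on_conflict_do_nothing</code>, using the primary keys you already have from <code>inserted_primary_key</code>. A self-contained sketch (shown with SQLite only so it runs without a server; the PostgreSQL dialect's <code>insert()</code> exposes the same <code>on_conflict_do_nothing()</code> method):</p>

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine
# The postgresql dialect's insert() has the same on_conflict_do_nothing();
# sqlite is used here purely to make the sketch self-contained.
from sqlalchemy.dialects.sqlite import insert

metadata = MetaData()
bc = Table(
    "bc", metadata,
    Column("b_id", Integer, primary_key=True),
    Column("c_id", Integer, primary_key=True),
)
engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    # Idempotent association insert: re-running it is a no-op, not an error.
    stmt = insert(bc).values(b_id=1, c_id=1).on_conflict_do_nothing()
    conn.execute(stmt)
    conn.execute(stmt)
    rows = conn.execute(bc.select()).fetchall()
```

This sidesteps the issue that manually constructed <code>B</code>/<code>C</code> instances are transient from the session's point of view, so relationship appends on them are never flushed.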
|
<python><postgresql><sqlalchemy>
|
2024-09-30 09:29:17
| 1
| 305
|
Mike Benza
|
79,038,696
| 9,918,920
|
what input should I use to predict rl model? will it be scaled or inv scaled?
|
<p>I am using SB3 DQN to train on stock data, where my observation is the last 120 candles with 7 features each (open, high, low, close, hour, minute, RSI, etc.). So the observation shape is (120, 7), and the output is discrete with 3 values: 0, 1, 2 (hold, buy, sell respectively).</p>
<p>My questions are:</p>
<ol>
<li>I am only scaling the observation using MinMaxScaler. Is that correct, or do I need to scale all the data? In that case I have 200,000 rows of 5-minute candles.</li>
</ol>
<blockquote>
<p>here is a <code>scale_data</code> fun in my custom gym env</p>
<pre><code>def scale_data(self,obs):
df1 = obs
df1[['OPEN', 'HIGH', 'LOW', 'CLOSE', 'rsi', 'TICKVOL']] = scaler.fit_transform(
obs[['OPEN', 'HIGH', 'LOW', 'CLOSE', 'rsi', 'TICKVOL']])
df1[['MINUTE','HOUR','DAY_OF_WEEK']] = obs[['MINUTE','HOUR','DAY_OF_WEEK']]
return df1.values
</code></pre>
</blockquote>
<ol start="2">
<li>If the answer is to <code>scale all data</code>, what should I pass at prediction time? At that point I might not have all of the data, and I would like to predict with the most recent 120 candles. If we <code>scale all data</code> but use only the last 120 to predict, the scaling would be entirely different!</li>
</ol>
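<p>One observation worth sketching (my own illustration, not SB3 API): per-window scaling, as <code>scale_data</code> already does, is self-consistent at prediction time precisely because the transform depends only on the 120 rows in the observation, so the identical transform can be recomputed from the latest 120 candles with no stored global statistics:</p>

```python
import numpy as np

def scale_window(window):
    """Min-max scale each feature column using only this window's rows.

    Because the transform depends solely on the rows passed in, it can be
    recomputed identically at prediction time from the latest 120 candles.
    """
    lo = window.min(axis=0)
    hi = window.max(axis=0)
    span = np.where(hi - lo == 0, 1.0, hi - lo)  # guard constant columns
    return (window - lo) / span

# Tiny stand-in window: 3 rows, 2 features (second column is constant).
obs = np.array([[1.0, 5.0], [2.0, 5.0], [4.0, 5.0]])
scaled = scale_window(obs)
```

The trade-off is that values are only comparable within a window, not across windows; whether that matters depends on what the policy needs to learn.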
|
<python><machine-learning><artificial-intelligence><reinforcement-learning><stablebaseline3>
|
2024-09-30 09:24:12
| 0
| 958
|
manan5439
|
79,038,628
| 9,213,069
|
How do I set .env file using poetry in Google Colab?
|
<p>I'm using Google Colab for my Python project. I have created <code>pyproject.toml</code> using <code>!poetry init</code>.</p>
<p>Can you please help me to load environment variable using poetry? I have <code>.env</code> file.</p>
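<p>Poetry itself does not load <code>.env</code> files; in Colab the usual route is <code>python-dotenv</code> (<code>!pip install python-dotenv</code>, then <code>load_dotenv()</code>). If you'd rather avoid the extra dependency, a minimal stdlib-only loader looks like this (a sketch: it handles plain <code>KEY=VALUE</code> lines and <code>#</code> comments, not the quoting or <code>export</code> prefixes that python-dotenv supports):</p>

```python
import os

def load_env(path='.env'):
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#' comments
    are ignored; existing environment variables are not overwritten."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            os.environ.setdefault(key.strip(), value.strip())
```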
|
<python><python-poetry>
|
2024-09-30 09:03:51
| 1
| 883
|
Tanvi Mirza
|
79,038,582
| 14,450,325
|
FFMPEG unable to stream videos frame by frame to RTMP Youtube Live stream using Python
|
<p>I need to open a locally stored video, process it frame by frame and send it to a YouTube Live RTMP stream. I am able to do it using FFmpeg in a command-line terminal but unable to do it using Python. In Python, the console shows the stream is properly sent, but the YouTube Live control room shows no data. I tried other tools like VidGear, GStreamer, etc., but most of them use an FFmpeg backend and it does not work either.</p>
<p>Here is my command to directly send video from .mp4 source file that works properly on terminal and video is streamed on YouTube Live control room -</p>
<p><code>ffmpeg -re -i "video.mp4" -c:v libx264 -preset veryfast -maxrate 3000k -bufsize 6000k -pix_fmt yuv420p -c:a aac -b:a 128k -f flv rtmp://a.rtmp.youtube.com/live2/youtube-key</code></p>
<p>My Python program, which reads and sends the video frame by frame, shows everything is fine on the console, but YouTube shows No Data -</p>
<pre><code>import cv2
import subprocess
# Path to your video file
video_path = "video.mp4"
# FFmpeg command to stream to YouTube
rtmp_url = "rtmp://a.rtmp.youtube.com/live2/youtube-key"
ffmpeg_command = [
'ffmpeg',
'-y', # Overwrite output files without asking
'-f', 'rawvideo',
'-pixel_format', 'bgr24',
'-video_size', '1280x720', # Change according to your video resolution
'-framerate', '30', # Frame rate
'-i', '-', # Input from stdin
'-c:v', 'libx264',
'-preset', 'veryfast',
'-maxrate', '3000k',
'-bufsize', '6000k',
'-pix_fmt', 'yuv420p',
'-f', 'flv',
rtmp_url
]
# Start FFmpeg process
ffmpeg_process = subprocess.Popen(ffmpeg_command, stdin=subprocess.PIPE)
# Open video with OpenCV
cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
print("Error: Could not open video.")
exit()
while True:
ret, frame = cap.read()
if not ret:
break # End of video
# Write the frame to FFmpeg's stdin
ffmpeg_process.stdin.write(frame.tobytes())
# Cleanup
cap.release()
ffmpeg_process.stdin.close()
ffmpeg_process.wait()
</code></pre>
<p>Console output -</p>
<p><a href="https://i.sstatic.net/BOPopM7z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOPopM7z.png" alt="console output" /></a></p>
<p>FFMPEG Build info -</p>
<p><a href="https://i.sstatic.net/GPKTbeUQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPKTbeUQ.png" alt="FFMPEG build info" /></a></p>
<p>I tried on both Linux and Windows and got the same results. In the Python program I am not processing frames yet, but I will in the future; for now I just want to stream the video frame by frame so that I can add the processing later. Please help!</p>
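<p>One frequent cause of exactly this symptom (an untested assumption for this setup): YouTube Live generally rejects streams that have no audio track. The working CLI command encodes the source file's audio with <code>-c:a aac</code>, while the piped version sends video only. Adding a silent audio track via <code>anullsrc</code> may be enough; a sketch of the adjusted command list (the RTMP key is a placeholder):</p>

```python
# Sketch: same raw-video-over-stdin pipeline as above, plus a silent
# AAC audio track from the anullsrc lavfi source.
rtmp_url = "rtmp://a.rtmp.youtube.com/live2/youtube-key"  # placeholder key

ffmpeg_command = [
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pixel_format", "bgr24",
    "-video_size", "1280x720", "-framerate", "30",
    "-i", "-",                                            # video from stdin
    "-f", "lavfi", "-i",
    "anullsrc=channel_layout=stereo:sample_rate=44100",   # silent audio
    "-c:v", "libx264", "-preset", "veryfast",
    "-maxrate", "3000k", "-bufsize", "6000k",
    "-pix_fmt", "yuv420p",
    "-c:a", "aac", "-b:a", "128k",
    "-f", "flv", rtmp_url,
]
```

The rest of the OpenCV read-and-pipe loop stays as in the question.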
|
<python><ffmpeg><youtube><rtmp><live-streaming>
|
2024-09-30 08:47:40
| 0
| 567
|
YadneshD
|
79,038,483
| 21,185,825
|
Replace Tokens - 'init' not found - name not supplied
|
<p>I need to replace tokens in a particular file, so I have set the sources to:</p>
<pre><code>src/some_folder/somes_cript.py
</code></pre>
<p>As the pipeline is launched, the template is called and runs properly, but I get these errors:</p>
<pre><code>token pattern '__\s*((?:(?!__)(?!\s*__).)*)\s*__'
transform pattern '\s*(.*)\(\s*((?:(?!\()(?!\s*\)).)*)\s*\)\s*'
replacing tokens in '/home/someuser/agent/_work/8/s/src/some_folder/somes_cript.py'
encoding 'ascii'
##[error]variable 'init' not found
init:
##[error]Error: name not supplied
</code></pre>
<p>This is my pipeline template :</p>
<pre><code>parameters:
- name: targetDirectory
type: string
default: '$(System.DefaultWorkingDirectory)'
steps:
- task: qetza.replacetokens.replacetokens-task.replacetokens@6
name: replace_tokens
displayName: "Inject secrets"
inputs:
root: "${{ parameters.targetDirectory }}"
sources: |
**/src/some_folder/somes_cript.py
tokenPattern: 'doubleunderscores'
logLevel: 'debug'
missingVarLog: 'error'
</code></pre>
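<p>A plausible culprit (an assumption, not verified against your file): with <code>tokenPattern: 'doubleunderscores'</code>, ordinary Python dunder names such as <code>__init__</code> in the source file match the token pattern, so the task looks for a pipeline variable named <code>init</code> and fails - which matches the <code>variable 'init' not found</code> error. Switching to custom token markers that cannot occur in Python code would sidestep this; a sketch of the adjusted task (the <code>#{...}#</code> markers are an arbitrary choice):</p>

```yaml
- task: qetza.replacetokens.replacetokens-task.replacetokens@6
  name: replace_tokens
  displayName: "Inject secrets"
  inputs:
    root: "${{ parameters.targetDirectory }}"
    sources: |
      **/src/some_folder/somes_cript.py
    # Hypothetical custom markers that never appear in Python source:
    tokenPattern: 'custom'
    tokenPrefix: '#{'
    tokenSuffix: '}#'
    logLevel: 'debug'
    missingVarLog: 'error'
```

The tokens in <code>somes_cript.py</code> would then need to be written as <code>#{MY_SECRET}#</code> instead of <code>__MY_SECRET__</code>.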
|
<python><azure-devops><azure-pipelines><pipeline>
|
2024-09-30 08:16:21
| 1
| 511
|
pf12345678910
|
79,038,328
| 511,302
|
Why are Image Fields not being created in Django Factory Boy?
|
<p>I tried to follow the standard recipe for image fields in django factory boy:</p>
<pre><code>class ConfigurationSingletonFactory(DjangoModelFactory):
class Meta:
model = Configuration
django_get_or_create = ("id",)
id = 1
custom_theme = ImageField(color="blue", width=200, height=200)
class GeneralConfiguration(SingletonModel):
custom_theme = PrivateMediaImageField("Custom background", upload_to="themes", blank=True)
</code></pre>
<p>I tried to test it with the following code snippet:</p>
<pre><code>def test_config(self):
conf = GeneralConfigurationSingletonFactory(
custom_theme__name="image.png"
)
print(conf.custom_theme.width)
self.assertEqual(conf.custom_theme.width, 200)
</code></pre>
<p>On doing so, I encounter the following error:</p>
<p><code>ValueError: The 'custom_theme' attribute has no file associated with it.</code></p>
<p>I thought I did exactly what <a href="https://factoryboy.readthedocs.io/en/stable/orms.html#factory.django.ImageField" rel="nofollow noreferrer">the documentation</a> says. What am I misunderstanding?</p>
|
<python><django><unit-testing><factory-boy>
|
2024-09-30 07:29:15
| 0
| 9,627
|
paul23
|
79,037,962
| 4,983,469
|
Read excel with same column names with pandas
|
<p>I am trying to convert an excel to csv. The excel has the following headers -</p>
<pre><code>DATE,FIELD1,FEEDER BRANCH,50,100,200,500,1000,2000,FIELD2,50,100,200,500,1000,2000,FIELD3,50,100,200,500,1000,2000
</code></pre>
<p>As seen, some columns are repeated. On loading the Excel file, pandas appends an index number to them: e.g., on repetition, <code>50</code> becomes <code>50.1</code>, <code>100</code> becomes <code>100.1</code>, and so on.</p>
<p>How do I load the Excel file without this suffix? I want the column headers as-is, so that they are retained when writing the CSV.</p>
<p>Current code:</p>
<pre><code>def pandas_csv_from_excel(source):
dir_and_file = source.split('/')
filename = dir_and_file[len(dir_and_file) - 1].split('.')
if not ((filename[1]).lower().startswith('xls')):
return source
csv_filename = f"{os.path.join(os.path.dirname(source), filename[0].lower())}.csv"
location = os.path.dirname(source)
df = pd.read_excel(source, index_col=None)
df.to_csv(csv_filename, index=None)
return csv_filename
</code></pre>
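<p>One workaround sketch: pandas only de-duplicates names it treats as a header, so reading with <code>header=None</code> and restoring the first row as the column labels afterwards keeps the duplicates intact. Shown below with CSV text purely so the sketch is self-contained; the same helper works with <code>pd.read_excel</code>:</p>

```python
import io
import pandas as pd

def read_keep_duplicate_headers(read_fn, src, **kwargs):
    """Read with header=None so pandas never renames duplicate columns,
    then restore the first row as the (possibly duplicated) header."""
    raw = read_fn(src, header=None, **kwargs)
    df = raw.iloc[1:].reset_index(drop=True)
    df.columns = [str(c) for c in raw.iloc[0]]
    return df

# pd.read_excel(path) would be passed the same way as pd.read_csv here.
df = read_keep_duplicate_headers(
    pd.read_csv, io.StringIO("DATE,50,100,50\nd1,1,2,3\n"))
```

Writing with <code>df.to_csv(csv_filename, index=False)</code> then emits the original duplicated header row. One caveat: with <code>header=None</code>, pandas infers dtypes over the header row too, so numeric-looking columns may come back as strings.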
|
<python><excel><pandas><csv>
|
2024-09-30 05:04:03
| 2
| 1,997
|
leoOrion
|
79,037,945
| 1,870,832
|
Visual Studio Code trying, but failing to use my uv venv
|
<p>I have the following code in file <em>main.py</em> file:</p>
<pre class="lang-py prettyprint-override"><code>import pymupdf
print(f"pymupdf version is: {pymupdf.__version__}")
</code></pre>
<p>I can create a Python 3.12 environment with <code>pymupdf</code> installed and run it from my (<a href="https://en.wikipedia.org/wiki/Kubuntu#Releases" rel="nofollow noreferrer">Kubuntu 24.04</a> (Noble Numbat)) shell without issues, with the following commands (showing shell log after each >command):</p>
<pre class="lang-none prettyprint-override"><code>>uv --version
uv 0.4.17
>uv init --no-pin-python
Initialized project `askliz2`
>uv venv --python 3.12.6
Using CPython 3.12.6
Creating virtual environment at: .venv
Activate with: source .venv/bin/activate
>source .venv/bin/activate
askliz2>uv add pymupdf
Resolved 3 packages in 281ms
Installed 2 packages in 11ms
+ pymupdf==1.24.10
+ pymupdfb==1.24.10
askliz2>python main.py
pymupdf version is: 1.24.10
</code></pre>
<p>However, when I launch Visual Studio Code from this folder with my uv venv activated with <code>code .</code>, and then highlight the line <code>import pymupdf</code> and hit <kbd>Shift</kbd> + <kbd>Enter</kbd> to run it in the integrated terminal, I get the following Visual Studio Code integrated terminal output:</p>
<pre class="lang-none prettyprint-override"><code>/home/max/askliz2/.venv/bin/python
Python 3.12.6 (main, Sep 9 2024, 22:11:19) [Clang 18.1.8 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
]633;E;exit()]633;D;0]633;A>>> ]633;B]633;Cimport pymupdf
]633;E;import pymupdf]633;D;0]633;A>>> ]633;B]633;C
</code></pre>
<p>Things I've tried so far:</p>
<ul>
<li>upgrading the uv version</li>
<li>hosing and reinstalling Visual Studio Code</li>
<li>installing different Python versions (e.g., 3.12.0, 3.12.6, 3.9)</li>
<li>right-clicking to <code>clear terminal</code> and <code>kill terminal</code> and then trying again.</li>
</ul>
<p>When I use the command palette to <code>select python interpreter</code>, it shows me it's trying to use <code>Python 3.12.6 ('.venv') ./.venv/bin/python</code>, as it should, and as the Visual Studio Code integrated terminal logs above already indicate.</p>
|
<python><visual-studio-code><uv>
|
2024-09-30 04:51:13
| 0
| 9,136
|
Max Power
|
79,037,841
| 6,293,886
|
nested dicts processing with irregular nesting hierarchy
|
<p>How can one process a pair of nested dicts with partial key overlap?</p>
<pre><code>dict1 = {
'A': {'a': 1},
'B': 2,
'C': {'c': 3},
'D': {'d': {'dd': 4}}
}
dict2 = {
'A': {'a': 1},
'D': {'d': {'dd': 4}}
}
</code></pre>
<p>the desired output should be:</p>
<pre><code>dict1 + dict2 = {
'A': {'a': 2},
'B': 2,
'C': {'c': 3},
'D': {'d': {'dd': 8}}
}
</code></pre>
<p>A key-wise addition of flat dicts is straightforward:</p>
<pre><code>{k: dict1[k] + dict2[k] for k, v in dict2.items()}
</code></pre>
<p>How can I do the same with nested dicts?</p>
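<p>One possible shape of the answer, sketched under the assumption that values are always either dicts or addable numbers:</p>

```python
# Recursive key-wise sum: dict values recurse, numbers add, and keys
# present on only one side keep that side's value unchanged.
def add_nested(d1, d2):
    out = {}
    for k in d1.keys() | d2.keys():
        if k in d1 and k in d2:
            if isinstance(d1[k], dict) and isinstance(d2[k], dict):
                out[k] = add_nested(d1[k], d2[k])
            else:
                out[k] = d1[k] + d2[k]
        else:
            out[k] = d1.get(k, d2.get(k))
    return out

dict1 = {'A': {'a': 1}, 'B': 2, 'C': {'c': 3}, 'D': {'d': {'dd': 4}}}
dict2 = {'A': {'a': 1}, 'D': {'d': {'dd': 4}}}
print(add_nested(dict1, dict2))
# == {'A': {'a': 2}, 'B': 2, 'C': {'c': 3}, 'D': {'d': {'dd': 8}}} (key order may vary)
```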
|
<python><dictionary>
|
2024-09-30 03:37:39
| 2
| 1,386
|
itamar kanter
|
79,037,822
| 3,667,142
|
Pytest equivalent for unittest-style base/derived class tests
|
<p>I have an older <code>Python</code> test suite that used a <code>unittest</code> class-based setup for testing, and I'm trying to convert it over to use <code>pytest</code>. An example of which is shown below where a base class with generalized tests is defined and then derived classes set class attributes that are then used by the tests. These are pretty large classes with many tests and many class attributes that control the tests for the derived classes. My main questions are:</p>
<p>1.) Should classes be used at all with pytest?</p>
<p>2.) Is there a particular pattern for this? i.e. "When you see base/derived tests, think of this pytest equivalent approach"</p>
<pre><code>class Base:
    val_1 = None
    val_2 = None
    val_3 = None
    val_4 = 5
    val_5 = 0

    def test_1(self):
        ...

    def test_2(self):
        ...


class Test1(Base):
    val_1 = 1
    val_2 = 2
    val_3 = 3


class Test2(Base):
    val_1 = 2
    val_2 = 3
    val_3 = 4


class Base2(Base):
    val_4 = 0
    val_5 = 0

    def test_3(self):
        ...

    def test_4(self):
        ...


class Test3(Base2):
    val_1 = 1
    val_2 = 2
    val_3 = 3
</code></pre>
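<p>One commonly suggested translation, sketched with invented attribute relationships just so the example has something to assert: each former derived class becomes a parameter set on a fixture, so the shared tests run once per configuration.</p>

```python
# Sketch: the per-class attribute sets move into parametrized fixture
# data; the ids mirror the old class names. The val_* relationship
# asserted below (val_3 - val_1 == 2) is invented for illustration.
import pytest

CONFIGS = [
    dict(val_1=1, val_2=2, val_3=3),
    dict(val_1=2, val_2=3, val_3=4),
]

@pytest.fixture(params=CONFIGS, ids=["Test1", "Test2"])
def cfg(request):
    return request.param

def test_spread(cfg):
    # runs once per entry in CONFIGS, like the old derived classes
    assert cfg["val_3"] - cfg["val_1"] == 2
```

<p>Run with <code>pytest -q</code>; classes still work fine in pytest too, so the question of whether to convert at all is largely a matter of taste.</p>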
|
<python><pytest>
|
2024-09-30 03:14:58
| 1
| 1,183
|
wandadars
|
79,037,592
| 2,487,988
|
Pyplot Printing All Bars Overlapping in First Position in Grouped Bar Chart
|
<p>I'm generating a grouped bar chart showing which orders are scheduled to arrive on each day of the week. My code is as follows:</p>
<pre><code>plt.figure(1)
x = np.arange(7)
width = .1
for j in range(9):
print(pdChart[j])
plt.bar((j-4)/10, pdChart[j], width, label=list(flavors.keys())[j])
plt.xticks(x,daysOWeek)
plt.ylabel("Orders")
plt.legend()
plt.title("Preferred Delivery Days")
plt.show()
</code></pre>
<p>However, it's printing everything in the Monday position. So the output is only "Monday" data with the highest day's total appearing as the bar height for each flavor. The individual bars are aligned properly and all that. But there's nothing for Tuesday, Wednesday, etc.</p>
<pre><code>[420 352 425 391 423 388 387]
[0 0 0 0 0 0 0]
[857 840 881 900 852 851 874]
[248 234 246 239 242 223 249]
[235 243 227 249 270 251 249]
[1670 1822 1835 1818 1796 1804 1764]
[427 461 430 490 428 435 444]
[102 101 94 86 102 110 107]
[718 743 718 705 741 781 716]
</code></pre>
<p>So the data is right. But instead of seven bars per flavor, I get one that's 425, 0, 900, 249, etc., only appearing to show the highest day total for the week, which is why I assume the bars are overlapping.</p>
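<p>A hedged sketch of the suspected fix (random stand-in data, off-screen backend): the first argument to <code>bar</code> must be the whole offset x array, not a lone scalar, or every call draws at a single position.</p>

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so this runs headless
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(7)                                # one slot per weekday
width = 0.1
data = np.random.randint(1, 100, size=(9, 7))   # stand-in for pdChart

fig, ax = plt.subplots()
for j in range(9):
    # offset the full x array so each flavor gets seven bars,
    # one per weekday, shifted sideways within the group
    ax.bar(x + (j - 4) * width, data[j], width, label=f"flavor {j}")

ax.set_xticks(x)
print(len(ax.containers), len(ax.containers[0]))  # 9 7
```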
|
<python><matplotlib>
|
2024-09-29 23:55:35
| 1
| 503
|
Jeff
|
79,037,326
| 235,267
|
Raspberry Pi (Python) - button press (and hold) to run a script loop
|
<p>I am trying to write a program which will flicker LEDs (and eventually - hopefully - flicker a small strip of LEDs). It's meant to look like lightning - or flickering electricity (for an electric chair halloween prop)</p>
<p>I've cobbled together the following from MANY sources. The script should have the bulb start off. When the button is pressed, it runs the loop and flickers the LED. This works, but when the button is released, it should switch the bulb off and exit that loop, but not the script. It should then just sit and wait for the button to be pressed again.</p>
<p>Scratching my head here!! Thanks for any help.</p>
<pre><code>import RPi.GPIO as GPIO
import time
import random
import math
bulb = 18
button = 23
pwms = []
intensity = 1.0
def setupGPIO():
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(bulb, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(button, GPIO.IN, pull_up_down=GPIO.PUD_UP)
# GPIO.add_event_detect(button, GPIO.BOTH, callback=handleButtonPress, bouncetime=300)
def handleButtonPress():
p = GPIO.PWM(bulb, 300)
p.start(0)
while True:
GPIO.wait_for_edge(button, GPIO.FALLING)
p.ChangeDutyCycle(random.randint(1, 44) * math.pow(intensity, 2) if intensity > 0 else 0)
rand_flicker_sleep()
while GPIO.input(button) == GPIO.LOW:
print("button is being pressed")
def rand_flicker_sleep():
time.sleep(random.randint(3, 10) / 100.0)
def flicker_it(_):
global intensity
intensity = min(0, 1.0)
def main():
try:
setupGPIO()
handleButtonPress()
except KeyboardInterrupt:
pass
finally:
for p in pwms:
p.stop()
GPIO.cleanup()
if __name__ == '__main__':
main()
</code></pre>
<p>It's on a Rpi 3b using python3 (if that makes a difference!)</p>
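<p>For reference, the intended control flow can be sketched with the GPIO calls stubbed out; <code>pressed</code> and <code>set_duty</code> below are invented stand-ins for <code>GPIO.input(button) == GPIO.LOW</code> and <code>p.ChangeDutyCycle</code>, not real RPi.GPIO API:</p>

```python
# Structural sketch: wait for press -> flicker while held -> bulb off.
import random
import time

def run(pressed, set_duty, cycles=1):
    for _ in range(cycles):          # the real prop would loop forever
        while not pressed():         # idle: wait for the button
            time.sleep(0.01)
        while pressed():             # held: flicker the bulb
            set_duty(random.randint(1, 44))
            time.sleep(random.randint(3, 10) / 100.0)
        set_duty(0)                  # released: bulb off, back to idle

# scripted button: idle once, held for two flickers, then released
presses = iter([False, True, True, True, False])
duties = []
run(lambda: next(presses, False), duties.append)
print(duties)  # e.g. [17, 31, 0]: two flickers, then off
```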
|
<python><python-3.x><raspberry-pi><raspberry-pi3>
|
2024-09-29 20:29:55
| 1
| 3,105
|
Matt Facer
|
79,037,283
| 22,407,544
|
Django ` [ERROR] Worker (pid:7) was sent SIGKILL! Perhaps out of memory?` message when uploading large files
|
<p>My dockerized Django app allows users to upload files (uploaded directly to my DigitalOcean Spaces). When testing it on my local device (and on my Heroku deployment) I can successfully upload small files without issue. However, when uploading large files, e.g. 200+ MB, I get these error logs:</p>
<pre><code> [2024-09-29 19:00:51 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:7)
web-1 | [2024-09-29 19:00:52 +0000] [1] [ERROR] Worker (pid:7) was sent SIGKILL! Perhaps out of memory?
web-1 | [2024-09-29 19:00:52 +0000] [29] [INFO] Booting worker with pid: 29
</code></pre>
<p>The error occurs about 30 seconds after I've tried uploading, so I suspect it's gunicorn causing the timeout after not getting a response. I'm not sure what to do to resolve it, other than increasing the timeout period, which I've been told is not recommended. Here is my code handling the file upload:</p>
<p>views.py:</p>
<pre><code>@csrf_protect
def transcribe_submit(request):
if request.method == 'POST':
form = UploadFileForm(request.POST, request.FILES)
if form.is_valid():
uploaded_file = request.FILES['file']
request.session['uploaded_file_name'] = uploaded_file.name
request.session['uploaded_file_size'] = uploaded_file.size
session_id = str(uuid.uuid4())
request.session['session_id'] = session_id
try:
transcribed_doc, created = TranscribedDocument.objects.get_or_create(id=session_id)
transcribed_doc.audio_file = uploaded_file
transcribed_doc.save()
...
except Exception as e:
# Log the error and respond with a server error status
print(f"Error occurred: {str(e)}")
return HttpResponse(status=500)
...
else:
return HttpResponse(status=500)
else:
form = UploadFileForm()
return render(request, 'transcribe/transcribe-en.html', {"form": form})
</code></pre>
<p>forms.py:</p>
<pre><code>def validate_audio_language(value):
#code to validate audio language
if value not in allowed_languages:
raise ValidationError("Error")
def validate_output_file_type(value):
#code to validate file type
if value not in output_file_type:
raise ValidationError("Error")
class UploadFileForm(forms.Form):
file = forms.FileField(validators=[validate_file])
</code></pre>
<p>docker-compose.yml:</p>
<pre><code>#version: "3.9"
services:
web:
build: .
#command: python /code/manage.py runserver 0.0.0.0:8000
command: gunicorn mysite.wsgi -b 0.0.0.0:8000 --reload
volumes:
- .:/code
ports:
- 8000:8000
depends_on:
- db
- redis
- celery
environment:
- "DJANGO_SECRET_KEY="
user: user-me
db:
image: postgres:13
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- "POSTGRES_HOST_AUTH_METHOD=trust"
redis:
image: redis:6
ports:
- 6379:6379
celery:
build: .
command: celery -A mysite worker --loglevel=info
volumes:
- .:/code
depends_on:
- redis
- db
environment:
- "DJANGO_SECRET_KEY="
user: user-me
volumes:
postgres_data:
</code></pre>
|
<python><django><docker><gunicorn>
|
2024-09-29 20:06:14
| 0
| 359
|
tthheemmaannii
|
79,036,931
| 22,407,544
|
Why does my Celery task not start on Heroku?
|
<p>I currently have a dockerized Django app deployed on Heroku. I've recently added Celery with <code>redis</code>. The app works fine on my device, but when I try to deploy it on Heroku, everything works fine up until the Celery task is started. However, nothing happens, and I don't get any error logs from Heroku. I use <code>celery-redis</code> and followed their setup instructions, but my task still does not start when I deploy to Heroku.</p>
<p>Here is my code:</p>
<p>heroku.yml:</p>
<pre><code>setup:
addons:
- plan: heroku-postgresql
- plan: heroku-redis
build:
docker:
web: Dockerfile
celery: Dockerfile
release:
image: web
command:
- python manage.py collectstatic --noinput
run:
web: gunicorn mysite.wsgi
celery: celery -A mysite worker --loglevel=info
</code></pre>
<p>views.py:</p>
<pre><code>from celery.result import AsyncResult
task = transcribe_file_task.delay(file_path, audio_language, output_file_type, 'ai_transcribe_output', session_id)
</code></pre>
<p>task.py:</p>
<pre><code>from celery import Celery
app = Celery('transcribe')
@app.task
def transcribe_file_task(path, audio_language, output_file_type, dest_dir, session_id):
print(str("TASK: "+session_id))
#rest of code
return output_file
</code></pre>
<p>celery.py:</p>
<pre><code>from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
app = Celery("mysite")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()
</code></pre>
<p>I ensured that my CELERY_BROKER_URL and CELERY_RESULT_BACKEND are getting the correct REDIS_URL from environment variables by printing their values before the task is started, so I know that's not the issue. The only error logs I have are R14 errors:</p>
<pre><code>2024-09-29T16:11:03.658186+00:00 heroku[router]: at=info method=POST path="/transcribe/init-transcription/6a41ee9a-1489-4858-a142-693767b3d0da/" host=myst...5f5.herokuapp.com request_id=c6c90dbe-1165-408a-85dd-9996d8ca82c1 fwd="69.160.121.69" dyno=web.1 connect=0ms service=6086ms status=200 bytes=402 protocol=https
2024-09-29T16:11:14.125891+00:00 heroku[celery.1]: Process running mem=788M(154.0%)
2024-09-29T16:11:14.130305+00:00 heroku[celery.1]: Error R14 (Memory quota exceeded)
2024-09-29T16:11:32.783420+00:00 heroku[celery.1]: Process running mem=788M(154.0%)
2024-09-29T16:11:32.786173+00:00 heroku[celery.1]: Error R14 (Memory quota exceeded)
2024-09-29T16:11:52.871883+00:00 heroku[celery.1]: Process running mem=788M(154.0%)
2024-09-29T16:11:52.873514+00:00 heroku[celery.1]: Error R14 (Memory quota exceeded)
2024-09-29T16:11:13.000000+00:00 app[heroku-redis]: source=REDIS addon=redis-cyl...6136 sample#active-connections=6 sample#max-connections=18 sample#connection-percentage-used=0.33333 sample#load-avg-1m=7.945 sample#load-avg-5m=6.65 sample#load-avg-15m=6.685 sample#read-iops=0 sample#write-iops=0.35443 sample#max-iops=3000 sample#iops-percentage-used=0.00012 sample#memory-total=16070676kB sample#memory-free=6724104kB sample#memory-percentage-used=0.58159 sample#memory-cached=5657108kB sample#memory-redis=1097464bytes sample#hit-rate=0.46878 sample#evicted-keys=0
</code></pre>
<p>However, the R14 error doesn't show while the website tries to initiate the task, but only a few seconds afterwards so I'm not sure if that is the cause.</p>
|
<python><django><docker><heroku><redis>
|
2024-09-29 16:57:38
| 1
| 359
|
tthheemmaannii
|
79,036,867
| 1,658,617
|
Implementing a context-switching mechanism in asyncio
|
<p>I wish to create something similar to a context-switching mechanism that allows using a shared resource one context at a time.</p>
<p>In my case it's a single session object connecting to a website, and each context would be the current account connected to the website. As a system constraint, no two accounts can be connected at the same time, and connecting is a costly operation.</p>
<p>I have devised the following mechanism which involves letting go of the context (switching account) during expensive background operations:</p>
<pre><code>counter = 0 # i.e. ContextVar for current account
def increase_counter():
global counter
counter += 1
def decrease_counter():
global counter
counter -= 1
async def run_operation():
while True:
operation = operation_queue.get()
increase_counter()
task = asyncio.create_task(operation())
task.add_done_callback(lambda fut: decrease_counter())
await wait_for_free() # wait until counter == 0
switch_context() # Log in to a different account
async def operation():
# do stuff
decrease_counter()
sleep(60) # Long background operation
await wait_for_context() # Wait for context to come back and be 1.
# continue
</code></pre>
<p>When there is only one chain of operations, this mechanism works well. Inner functions can always release the counter and take it back after the operation.</p>
<p>Unfortunately, when there are two operations running at the same time, it stops working:</p>
<pre><code>async def operation():
increase_counter()
task1 = asyncio.create_task(sub_op())
task1.add_done_callback(decrease)
increase_counter()
task2 = asyncio.create_task(sub_op())
task2.add_done_callback(decrease)
decrease_counter()
await asyncio.gather(task1, task2)
await wait_for_context()
async def sub_op():
decrease_counter()
sleep(60)
await wait_for_context()
</code></pre>
<p>The counter in that case is advanced as follows:</p>
<pre><code>run_op (+1 = 1)
task1_creation (+1 = 2)
task2_creation (+1 = 3)
gather_release (-1 = 2)
task1_sleep (-1 = 1)
task2_sleep (-1 = 0) # Context released
task1_resume (+1 = 1)
task2_resume (+1 = 2)
task1_done_callback (-1 = 1)
task2_done_callback (-1 = 0) # Context needlessly released
gather_resume (+1 = 1)
</code></pre>
<p>Is it possible to prevent the needless release or is that just a fact of life when using such a mechanism? If so, is there any other mechanism that can solve the issue besides using a counter?</p>
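<p>One alternative mechanism, sketched under the assumption that each operation can hold its slot for its whole lifetime: make the counter an object with an async context manager, so there is no done-callback to double-count (the names here are invented, not from the code above):</p>

```python
import asyncio

class ContextCounter:
    """Counter guarded by a Condition; ops hold a slot via `async with`."""
    def __init__(self):
        self._count = 0
        self._cond = asyncio.Condition()

    async def __aenter__(self):
        async with self._cond:
            self._count += 1

    async def __aexit__(self, *exc):
        async with self._cond:
            self._count -= 1
            self._cond.notify_all()

    async def wait_for_free(self):
        async with self._cond:
            await self._cond.wait_for(lambda: self._count == 0)

async def main():
    counter = ContextCounter()

    async def op(delay):
        async with counter:            # slot taken exactly once per op
            await asyncio.sleep(delay) # stand-in for the real work

    tasks = [asyncio.create_task(op(0.01)), asyncio.create_task(op(0.02))]
    await asyncio.sleep(0)             # let both ops claim their slots
    await counter.wait_for_free()      # returns only when both finish
    return all(t.done() for t in tasks)

print(asyncio.run(main()))  # True
```

<p>Releasing the slot mid-operation (the background sleep in the question) would still need an explicit release/reacquire pair inside <code>op</code>, but pairing them through a context manager keeps each increment matched to exactly one decrement.</p>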
|
<python><concurrency><architecture><python-asyncio><context-switch>
|
2024-09-29 16:36:33
| 1
| 27,490
|
Bharel
|
79,036,853
| 6,293,886
|
how to apply function on nested dictionary values
|
<p>I have a nested <em>dict</em> with different levels of nesting</p>
<pre><code>dict1 = {
'A': {'a': 1},
'B': 2,
'C': {'c': 3},
'D': {'d': {'dd': 4}}
}
</code></pre>
<p>I want to perform (for instance) key-wise multiplication. With a flat dict this is straightforward:</p>
<pre><code>{k: v * 2 for k, v in dict1.items()}
</code></pre>
<p>How can I do the same in the above dict?</p>
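<p>A sketch of one answer, assuming leaves are never themselves dicts you want the function applied to: recurse into dict values and apply the function to everything else.</p>

```python
# Recursive "map over leaves": dict values recurse, all other values
# go through the supplied function.
def map_nested(d, fn):
    return {k: map_nested(v, fn) if isinstance(v, dict) else fn(v)
            for k, v in d.items()}

dict1 = {'A': {'a': 1}, 'B': 2, 'C': {'c': 3}, 'D': {'d': {'dd': 4}}}
print(map_nested(dict1, lambda v: v * 2))
# {'A': {'a': 2}, 'B': 4, 'C': {'c': 6}, 'D': {'d': {'dd': 8}}}
```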
|
<python><dictionary>
|
2024-09-29 16:29:41
| 1
| 1,386
|
itamar kanter
|
79,036,744
| 5,544,691
|
Type hinting to return the same type of Sized iterable as the input?
|
<p>The following code works, but depending on how I fix the type hinting, either PyCharm or mypy complains about it. I've tried Sized, Iterable, and Collection for the type of S.</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T")
S = TypeVar("S", bound=Collection[T])
def limit(i: S, n: int) -> S:
"""
Limits the size of the input iterable to n. Truncation is randomly chosen.
"""
if len(i) <= n:
return i
return type(i)(random.sample(list(i), n))
</code></pre>
<p>What I want is something like <code>random.sample</code> with the following two properties:</p>
<ol>
<li>If <code>n > len(i)</code>, return <code>i</code> instead of throwing an error</li>
<li>The type of Collection of the output should be identical to the input. <code>list[int | str]</code> in = <code>list[int | str]</code> out; <code>set[float]</code> in = <code>set[float]</code> out.</li>
</ol>
<p>If there's already something like this in the standard library, I'll use that.</p>
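<p>One hedged runtime workaround, keeping the logic and papering over the constructor call (which neither checker can verify) with <code>cast</code>; the unparametrized bound is an assumption about what the checkers will accept:</p>

```python
import random
from typing import Collection, TypeVar, cast

C = TypeVar("C", bound=Collection)

def limit(i: C, n: int) -> C:
    """Randomly truncate i to at most n elements, same container type."""
    if len(i) <= n:
        return i                      # short input returned unchanged
    # type(i)(...) is untypeable in general, hence the cast
    return cast(C, type(i)(random.sample(list(i), n)))

print(limit(list(range(10)), 3))  # a random 3-element list
print(limit({1.0, 2.0}, 5))       # returned unchanged: {1.0, 2.0}
```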
|
<python><pycharm><python-typing><mypy>
|
2024-09-29 15:33:49
| 2
| 503
|
cjm
|
79,036,641
| 8,543,025
|
Plotly Image ignores alpha channel
|
<p>I'm attempting to display a (semi) transparent image using plotly's <code>go.Image</code> object. I use <code>cv2</code> to load the image from disk and convert it to <code>rgba</code> format. Following <a href="https://github.com/plotly/plotly.py/issues/3895" rel="nofollow noreferrer">this issue</a>, I manually set <code>colormodel='rgba'</code> but the resulting image still ignores the <code>alpha</code> value.</p>
<p>Example code:</p>
<pre><code>import cv2
import plotly.graph_objects as go
original = cv2.imread(<path_to_image>)
original = cv2.cvtColor(original, cv2.COLOR_BGR2RGB)
transparent = cv2.cvtColor(original , cv2.COLOR_RGB2RGBA)
transparent[:, :, 3] = 127 # set to 255 // 2 for half-transparency
fig1 = go.Figure(go.Image(z=original))
fig1.show()
fig2 = go.Figure(go.Image(z=transparent, colormodel='rgba'))
fig2.show()
</code></pre>
<p>Both <code>fig1</code> and <code>fig2</code> present the same image, but <code>fig2</code> displays 4-channel pixels:
<a href="https://i.sstatic.net/Lg5y3Mdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lg5y3Mdr.png" alt="fig2" /></a></p>
<p>Seems like plotly cannot display transparent images. Is there a way to display the image <strong>with</strong> the <code>alpha</code> channel?</p>
|
<python><plotly>
|
2024-09-29 14:38:42
| 0
| 593
|
Jon Nir
|
79,036,519
| 4,375,983
|
Cannot seem to parse dates correctly in spark 3
|
<p>I'm trying to write a utility that "evaluates" whether dates are well formatted. I cannot seem to succeed in this because I keep getting errors like:</p>
<pre><code>Exception has occurred: Py4JJavaError (note: full exception trace is shown but execution is paused at: <module>)
An error occurred while calling o184.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1.0 (TID 2) (172.21.66.190 executor driver): org.apache.spark.SparkUpgradeException: [INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER] You may get a different result due to the upgrading to Spark >= 3.0:
Fail to parse '2023-10-15 13:45:30' in the new parser. You can set "spark.sql.legacy.timeParserPolicy" to "LEGACY" to restore the behavior before Spark 3.0, or set to "CORRECTED" and treat it as an invalid datetime string.
...
Caused by: org.apache.spark.SparkUpgradeException: [INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER] You may get a different result due to the upgrading to Spark >= 3.0:
Fail to parse '2023-10-15 13:45:30' in the new parser. You can set "spark.sql.legacy.timeParserPolicy" to "LEGACY" to restore the behavior before Spark 3.0, or set to "CORRECTED" and treat it as an invalid datetime string.
...
Caused by: java.time.format.DateTimeParseException: Text '2023-10-15T13:45:30' could not be parsed, unparsed text found at index 10
at java.base/java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:2049)
at java.base/java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1874)
at org.apache.spark.sql.catalyst.util.Iso8601TimestampFormatter.parse(TimestampFormatter.scala:193)
... 21 more
</code></pre>
<p>Here's a minimal script that recreates the error:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import to_timestamp, col, coalesce
# Initialize SparkSession
spark = SparkSession.builder.appName("DateParsingTest").master("local[*]").getOrCreate()
# Sample data for testing
data = [
("2023-10-15T13:45:30",),
("2023-10-15 13:45:30",),
("2023-10-15",),
("20231015",),
("15-Oct-2023",),
("10/15/2023",),
("15/10/2023",),
("2023.10.15",),
("Oct 15, 2023",),
("15 Oct 2023",),
("2023/10/15",),
("15-10-2023",),
("10-15-2023",),
("15.10.2023",),
("10.15.2023",),
("InvalidDate",),
(None,),
]
# Create DataFrame
df = spark.createDataFrame(data, ["date_string"])
# Define date formats
date_formats = [
"yyyy-MM-dd",
"yyyyMMdd",
"MM/dd/yyyy",
"dd-MMM-yyyy",
"dd/MM/yyyy",
"yyyy.MM.dd",
"MMM dd, yyyy",
"dd MMM yyyy",
"yyyy/MM/dd",
"dd-MM-yyyy",
"MM-dd-yyyy",
"dd.MM.yyyy",
"MM.dd.yyyy",
]
# Define time formats to append
time_formats = [
"", # No time
" HH:mm:ss",
" HH:mm:ss.SSS",
"'T'HH:mm:ss",
"'T'HH:mm:ss.SSS",
]
# Generate combined date-time formats
date_time_formats = []
for date_fmt in date_formats:
for time_fmt in time_formats:
date_time_formats.append(date_fmt + time_fmt)
# Parse the date strings
parsing_expressions = [to_timestamp(col("date_string"), fmt) for fmt in date_time_formats]
# Use coalesce to get the first successfully parsed timestamp
parsed_date_expr = coalesce(*parsing_expressions)
# Add the parsed date column to the DataFrame
df = df.withColumn("parsed_date", parsed_date_expr)
# Show the results
df.select("date_string", "parsed_date").show(truncate=False)
# Stop the SparkSession
spark.stop()
</code></pre>
<p>The goal of this module is to evaluate date-time strings in data so that we may fix them when necessary. These columns often have heterogeneous date-time formats.</p>
<p>I found <a href="https://stackoverflow.com/questions/62943941/to-date-fails-to-parse-date-in-spark-3-0">this question</a> with a similar premise, but the answers within it did not fix my problem.</p>
<p>Is what I'm trying to achieve even possible?</p>
|
<python><apache-spark><date><pyspark>
|
2024-09-29 13:42:40
| 1
| 2,811
|
Imad
|
79,036,307
| 6,115,317
|
Python ctypes: How to raise exception when my "Material" class goes out of scope without freeing resources in C level?
|
<p>I'm writing a graphics rendering API with a "Material" class containing resources allocated by calling a "create_material" function in my C DLL, which should be accompanied by a matching "destroy_material" call.</p>
<p>I want to raise an exception when the Material class gets destroyed/GC'ed without calling the destroy_material, and I can't quite find a reliable way to do that, especially considering <strong>unit testing</strong>.</p>
<p>I tried this <em>(self.handle is a c_void_p)</em>:</p>
<pre class="lang-py prettyprint-override"><code>def __del__(self):
if self.handle.value:
raise ResourceNotDeallocated("Destroying material that was never destroyed on C side")
</code></pre>
<p>But this is strongly advised against due to how <code>__del__</code> and the GC really work, and I couldn't test it with <code>with self.assertRaises(...)</code>; even after calling <code>gc.collect()</code> it won't raise.</p>
<p>I also tried <code>weakref.finalize</code> approach, but Material's bound methods result in an extra reference count that defeats the purpose.</p>
<pre class="lang-py prettyprint-override"><code>def __init__(self):
self._finalizer = weakref.finalize(self, self._on_finalize_material, self)
def _on_finalize_material(self):
if self.handle.value:
msg = f"Destroying material that was never destroyed on Remix side (Handle: {material.handle}, Albedo: {material.albedo_texture})"
raise ResourceNotDeallocated(msg)
</code></pre>
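<p>For comparison, a sketch of the usual <code>weakref.finalize</code> pattern that avoids the extra reference: pass plain data (the raw handle value) rather than <code>self</code> or a bound method. Since exceptions raised inside finalizers are only printed to stderr, the sketch records leaks in a list instead of raising; this <code>Material</code> is a simplified stand-in, not the real class.</p>

```python
import ctypes
import gc
import weakref

leaked = []  # stand-in for raising; finalizers shouldn't raise anyway

def _warn_leak(handle_value):
    # module-level function: holds no reference back to the Material
    if handle_value:
        leaked.append(handle_value)

class Material:
    def __init__(self, handle_value):
        self.handle = ctypes.c_void_p(handle_value)
        # capture the raw int, never self, so the finalizer can't
        # keep the object alive
        self._finalizer = weakref.finalize(self, _warn_leak, self.handle.value)

    def destroy(self):
        # the real destroy_material call would go here
        self.handle = ctypes.c_void_p(None)
        self._finalizer.detach()  # resource freed: cancel the leak check

m = Material(0xdead)
del m
gc.collect()
print(leaked)  # [57005], the leak was detected
```

<p>In a unit test, calling <code>obj._finalizer()</code> directly (or checking <code>_finalizer.alive</code> after <code>destroy</code>) is also deterministic, unlike waiting for GC.</p>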
|
<python><memory-management><dll><ctypes><raii>
|
2024-09-29 11:49:57
| 0
| 409
|
Emanuel Kozerski
|
79,036,219
| 7,266,996
|
Python being sourced from multiple environments
|
<p>So I am using this framework called <code>frappe</code>, which seems to have a messed-up Python environment initialization.</p>
<p>To provide some context, there is a command line tool named <code>bench</code> which is used to control the framework.</p>
<p>The command <code>bench start</code> works with a <code>Procfile</code>. Here are the contents of the Procfile:</p>
<pre><code>redis_queue: redis-server config/redis_queue.conf
web: bench serve --port 8000
socketio: /home/amol/.nvm/versions/node/v22.3.0/bin/node apps/frappe/socketio.js
watch: bench watch
schedule: bench schedule
worker: bench worker 1>> logs/worker.log 2>> logs/worker.error.log
</code></pre>
<p>The command <code>bench start</code> runs this proc file internally.</p>
<p>I set up a Python virtual environment for it. Here are some of the commands I can run when the virtual environment is active:</p>
<pre><code>(.venv) amol@amol-IdeaPad-Gaming-3-15ARH05:~/WorkSpace/frappe/frappe-bench$ which python
/home/amol/WorkSpace/frappe/.venv/bin/python
(.venv) amol@amol-IdeaPad-Gaming-3-15ARH05:~/WorkSpace/frappe/frappe-bench$ which bench
/home/amol/WorkSpace/frappe/.venv/bin/bench
(.venv) amol@amol-IdeaPad-Gaming-3-15ARH05:~/WorkSpace/frappe/frappe-bench$ python
Python 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import frappe;
>>> frappe
<module 'frappe' from '/home/amol/WorkSpace/frappe/frappe-bench/apps/frappe/frappe/__init__.py'>
>>>
</code></pre>
<p>Now the command <code>bench start</code> that works with the <code>Procfile</code> seems to use a different Python environment than one being used by my terminal. This is the error that I get:</p>
<pre><code>16:26:56 watch.1 | /home/amol/WorkSpace/frappe/frappe-bench/env/bin/python: Error while finding module specification for 'frappe.utils.bench_helper' (ModuleNotFoundError: No module named 'frappe')
</code></pre>
<p>My understanding is that it is clearly using a different Python environment than the one used by my terminal.</p>
<p>This is the error encountered when I run <code>bench watch</code> using the command line.</p>
<pre><code>(.venv) amol@amol-IdeaPad-Gaming-3-15ARH05:~/WorkSpace/frappe/frappe-bench$ bench watch
/home/amol/WorkSpace/frappe/frappe-bench/env/bin/python: Error while finding module specification for 'frappe.utils.bench_helper' (ModuleNotFoundError: No module named 'frappe')
</code></pre>
<p>I was hoping someone from the Frappe team, or anyone otherwise well versed with Python virtual environments, could help me with this issue.</p>
|
<python><virtualenv><frappe>
|
2024-09-29 11:04:46
| 1
| 2,684
|
Amol Gupta
|
79,036,141
| 12,291,425
|
convert mouse cursor image to left-handed (horizontally flip) using python
|
<p>Recently I found that the Windows mouse cursor is not very friendly for left-handed people, because it points from right to left.</p>
<p>So I wondered if there is a way to change it to point from left to right. I searched the internet but haven't found one.</p>
<p>Further searching told me that the "cur" file format is a derivation of the "ico" format: <a href="https://en.wikipedia.org/wiki/ICO_(file_format)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/ICO_(file_format)</a></p>
<p>So I thought we could flip it programmatically.</p>
<p>Currently I can use these steps to modify one image:</p>
<h2>1. Copy <code>C:\Windows\Cursors</code> to a directory, then use the following script to convert to "ico" files so GIMP can recognize it</h2>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
CURRENT = Path(__file__).resolve().parent
CURRENT.joinpath("Cursors_ico").mkdir(exist_ok=True)
files = list(CURRENT.joinpath("Cursors").glob("*.cur"))
for file in files:
dest = CURRENT.joinpath("Cursors_ico")
data = file.read_bytes()
data = bytearray(data)
data[2] = 0x01
dest.joinpath(file.stem + ".ico").write_bytes(data)
</code></pre>
<h2>2. use GIMP to flip</h2>
<p>This step isn't very interesting. I have to delete all other layers, use a selection to select the cursor, and use "Layer -> Transform -> Flip horizontally", then export the ico file.
Also, the hotspot location has changed, so we need to remember its new location.</p>
<h2>3. convert back to "cur" file</h2>
<p>script:</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
file = Path("aero_link.ico")
data = bytearray(file.read_bytes())
data[10] = 102 # hover on the finger and you can see the location in the GIMP bottom pane. fill it here
data[2] = 0x02 # change the file header back
Path("aero_link.cur").write_bytes(data)
</code></pre>
<h2>4. Load file</h2>
<p>In settings - bluetooth & devices - mouse - additional mouse settings - pointers, create a new scheme, and change the "link select" image</p>
<p>I wonder if we can do step 2 automatically.</p>
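<p>Step 2 could plausibly be automated with Pillow, which reads and writes .ico directly. This is untested against real Windows cursors, ignores multi-size and animated entries, and the hotspot still has to be patched back into the header as in step 3:</p>

```python
# Sketch: mirror an .ico horizontally in memory with Pillow.
import io
from PIL import Image, ImageOps

def flip_horizontally(src, dst):
    # mirror() flips left-to-right; hotspot coordinates are NOT stored
    # in the pixel data, so step 3 must still rewrite them
    img = Image.open(src).convert("RGBA")
    ImageOps.mirror(img).save(dst, format="ICO")

# demo on a tiny in-memory icon: one red pixel in the top-left corner
src, dst = io.BytesIO(), io.BytesIO()
orig = Image.new("RGBA", (32, 32))
orig.putpixel((0, 0), (255, 0, 0, 255))
orig.save(src, format="ICO")
src.seek(0)
flip_horizontally(src, dst)
dst.seek(0)
flipped = Image.open(dst).convert("RGBA")
print(flipped.getpixel((31, 0)))  # the red pixel moved to the right edge
```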
|
<python><mouse-cursor>
|
2024-09-29 10:11:44
| 0
| 558
|
SodaCris
|
79,036,069
| 2,440,692
|
Pandas boxplots - styling props with colors not working
|
<p>I would like to style all aspects of my boxplots via style sheets and apply the defined styles via plt.style.use("my_style"). The goal is to outsource all the chart's styling attributes from the code and define them inside the style sheet.</p>
<p>Unfortunately not all colors are recognized. <a href="https://i.sstatic.net/vTzng5lo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vTzng5lo.png" alt="faulty colors" /></a></p>
<p>This is just an excerpt from the file:</p>
<pre><code>### BoxPLots
boxplot.showmeans: True
boxplot.showcaps: True
boxplot.showbox: True
boxplot.showfliers: True
boxplot.meanline: True
# these color styles are partially applied
boxplot.medianprops.color: "red" # not applied
boxplot.medianprops.linestyle: solid # applied
boxplot.meanprops.color: "yellow" # applied
boxplot.meanprops.linestyle: dashed # applied
boxplot.boxprops.color: 'red' # not applied
boxplot.boxprops.linestyle: dotted # applied
</code></pre>
<p>its getting applied by:</p>
<pre><code> df = pd.DataFrame(array, columns=['Values'])
plt.style.use("my_style")
bp = df.boxplot(column=['Values'],
capprops=dict(color='#ff5c00'),
flierprops=dict(marker='o', markeredgecolor='#fdfc60', markersize=10))
</code></pre>
<p>Result:
The mean line is shown and its color is applied, as are all line styles, but neither the box color nor the median color is.</p>
<p>The programmatic definitions, like those for capprops and flierprops, are working.</p>
|
<python><pandas><matplotlib>
|
2024-09-29 09:35:03
| 1
| 1,229
|
neuralprocessing
|
79,035,737
| 696,836
|
How do I get a callback for QListWidgetItem becoming visible in PyQT5
|
<p>I created a simple MediaBrowser using a <code>QListWidget</code> in PyQT5:</p>
<pre><code>class MediaBrowser(QListWidget):
    def __init__(self, database: Database, viewtab, dir_path):
        QListWidget.__init__(self)
        self.log = logging.getLogger('mediahug')
        self.database = database
        self.viewtab = viewtab
        self.setLayoutMode(QListView.Batched)
        self.setBatchSize(10000)
        self.setUniformItemSizes(True)
        self.current_directory = dir_path
        # self.current_file_widgets = []
        self.thumb_loader_thread = None
        self.itemSelectionChanged.connect(self.selection_change)
        # Should theoretically speed things up but it does not
        # self.setSizeAdjustPolicy(QListWidget.AdjustToContents)
        self.setSelectionMode(QAbstractItemView.SingleSelection)
        self.setViewMode(QListWidget.IconMode)
        self.setResizeMode(QListWidget.Adjust)
        self.setIconSize(QSize(THUMB_WIDTH, THUMB_HEIGHT))
        self.load_files(dir_path)

    ...
    ...

    def all_files(self):
        return self.findItems('*', Qt.MatchWildcard)

    def load_files(self, dir_path):
        if self.thumb_loader_thread and self.thumb_loader_thread.isRunning():
            self.log.info('Killing Previous Thumbnail Loading Thread')
            self.thumb_loader_thread.requestInterruption()
            self.thumb_loader_thread.wait(sys.maxsize)
            self.log.info('Previous Thumbnail Thread Done')
        self.clear()
        # Load New File widgets
        onlyfiles = [f for f in listdir(dir_path) if isfile(join(dir_path, f))]
        for f in onlyfiles:
            vid = join(dir_path, f)
            self.log.debug(f"Creating File/Thumb Widget {vid}")
            self.addItem(ThumbWidget(vid, self.database))
        self.thumb_loader_thread = ThumbLoaderThread(self.all_files(), dir_path)
        self.thumb_loader_thread.start()
</code></pre>
<p>The <code>MediaBrowser</code> which is a <code>QListWidget</code>, adds a bunch of <code>ThumbWidget</code> items (which are <code>QListWidgetItem</code> objects) when it starts:</p>
<pre><code>class ThumbWidget(QListWidgetItem):
    def __init__(self, filename: str, database):
        QListWidgetItem.__init__(self)
        self.filename = filename
        self.database = database
        self.setText(basename(self.filename))
        standard_file_icon = QWidget().style().standardIcon(QStyle.SP_FileIcon)
        self.setIcon(standard_file_icon)
        self.setSizeHint(QSize(THUMB_WIDTH, THUMB_HEIGHT + FILENAME_MARGIN))

    def __str__(self):
        return f'Thumbnail for {self.filename}'

    def load_thumb(self):
        metadata = self.database.file_metadata(self.filename)
        img_thumb = metadata['thumb']
        if img_thumb:
            img = QPixmap()
            img.loadFromData(img_thumb, 'JPEG')
            self.setIcon(QIcon(img))
</code></pre>
<p>This takes a lot of time at startup. I'd like to only load a thumbnail for the item when it is scrolled into view. Elsewhere within my code, the <code>MediaBrowser</code> is within a <code>QScrollArea</code>.</p>
<pre><code>self.media_scroller = QScrollArea()
self.media_scroller.setWidget(self._media_browser)
self.media_scroller.setWidgetResizable(True)
</code></pre>
<p>Is there any way to receive events when a particular <code>QListWidgetItem</code> is scrolled into or out of view? That way I can load and unload thumbnails, making for more efficient startup times.</p>
<p>The full code for this project can be found here:</p>
<p><a href="https://gitlab.com/djsumdog/mediahug/-/tree/master/mediahug/gui/viewtab" rel="nofollow noreferrer">https://gitlab.com/djsumdog/mediahug/-/tree/master/mediahug/gui/viewtab</a></p>
|
<python><pyqt5><qt5>
|
2024-09-29 06:33:30
| 1
| 2,798
|
djsumdog
|
79,035,687
| 4,105,475
|
Unable to Trigger Lambda function in AWS using Python code from my local setup
|
<p>I am trying to trigger a Lambda function using Python code as follows:</p>
<pre><code>import boto3
from botocore.exceptions import NoCredentialsError, PartialCredentialsError

def get_lambda_client():
    return boto3.client('lambda')

def invoke_lambda():
    lambda_client = get_lambda_client()
    if lambda_client:
        try:
            response = lambda_client.invoke(
                FunctionName='MyLambdaFunctionName',
                InvocationType='RequestResponse',  # or 'Event' for async invocation
                Payload=b'{}'  # not sending any payload
            )
            print(f" the response from the aws = {response}")
        except Exception as e:
            print(f" Error invoking Lambda function: {e}")

invoke_lambda()
</code></pre>
<p>I am using the following policies attached to the Role:</p>
<ol>
<li><p>Policy to trigger the Lambda function:</p>
<pre><code>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "< arn of my lambda function>"
        },
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "< arn of the role I created for lambda function which in turn will trigger aws step function>"
        }
    ]
}
</code></pre>
</li>
<li><p>Trusted policy for the role I created for this Lambda function trigger:</p>
<pre><code>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com",
                "AWS": "<arn for the iam user>"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
</code></pre>
</li>
</ol>
<p>Please let me know if anything is missing here. The error I am getting when I try to trigger a lambda function from python code is:</p>
<pre><code>Error invoking Lambda function: An error occurred (ExpiredTokenException) when calling the Invoke operation: The security token included in the request is expired
</code></pre>
<p>Suggest a solution that can be used here by assuming the STS role, considering that I don't have permission to fetch the <code>AccessKey</code>, <code>SecretKey</code> and <code>SessionToken</code>.</p>
|
<python><python-3.x><amazon-web-services><aws-lambda><amazon-iam>
|
2024-09-29 05:42:14
| 2
| 753
|
Humble_PrOgRaMeR
|
79,035,623
| 2,698,972
|
How do I remove escape characters from a JSON value in Python?
|
<p>I have big JSON data where one of the keys has escape characters, like:</p>
<pre><code>{"key":{"keyinner1":"\"escapeddata\"","keyinner2":"text"}}
</code></pre>
<p>I want to convert it to :</p>
<pre><code>{"key":{"keyinner1":"escapeddata","keyinner2":"text"}}
</code></pre>
<p><code>json.loads</code> does not work on this data; I tried using the <code>replace</code> function, but it does not work either.</p>
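<p>A minimal sketch of one approach (assuming the goal is exactly the transformation shown): parse the JSON, then strip the embedded quotes from the inner value, since they are part of the string data itself rather than JSON syntax.</p>

```python
import json

# Hypothetical one-key example mirroring the question's data
raw = '{"key":{"keyinner1":"\\"escapeddata\\"","keyinner2":"text"}}'

data = json.loads(raw)  # parses fine; the inner quotes are part of the value
data["key"]["keyinner1"] = data["key"]["keyinner1"].strip('"')

print(json.dumps(data, separators=(",", ":")))
# {"key":{"keyinner1":"escapeddata","keyinner2":"text"}}
```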
|
<python><json><dictionary>
|
2024-09-29 04:42:57
| 1
| 1,041
|
dev
|
79,035,429
| 18,091,627
|
How to install packages installed in a python Virtual Environment into a new Virtual Environment?
|
<p>It is as simple as the title says. I've tried:</p>
<pre><code>pip install --no-index --find-links=".../another-venv/Lib/site-packages/" django
</code></pre>
<p>but I got the error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement django (from versions: none)
ERROR: No matching distribution found for django
</code></pre>
<p>Which tells me I'm not going about this the right way, since Django actually is in the given location. If there is a proper way to do this (which I really hope is possible, because my internet is terrible and installing the packages from scratch every time is absolutely atrocious), I would really appreciate the help.</p>
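<p>A sketch of one workaround (paths are examples): <code>--find-links</code> expects wheel or sdist files rather than an installed <code>site-packages</code> tree, so a local wheelhouse built once while online can then feed any number of offline installs.</p>

```shell
# In the existing environment: record and download everything once (online)
python -m pip freeze > requirements.txt
python -m pip download -r requirements.txt -d ./wheelhouse

# In the new virtual environment: install offline from the wheelhouse
python -m pip install --no-index --find-links=./wheelhouse -r requirements.txt
```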
|
<python><pip><python-packaging>
|
2024-09-29 00:53:38
| 0
| 371
|
42WaysToAnswerThat
|
79,035,416
| 777,081
|
Can Django identify first-party app access
|
<p>I have a Django app that authenticates using allauth and, for anything REST related, dj-rest-auth.</p>
<p>I'm in the process of formalising an API.</p>
<ul>
<li>My Django app uses the API endpoints generally via javascript fetch commands. It authenticates with Django's authentication system (with allauth). It <em>doesn't</em> use dj-rest-auth for authentication, it uses the built in Django auth system.</li>
<li>My Discord bot uses the API as would be typical of a third-party app. It authenticates via dj-rest-auth, meaning it internally handles the refresh and session tokens as defined in dj-rest-auth's docs.</li>
</ul>
<p>Currently the API is completely open, which means anyone could use cURL to access the endpoints, some of which an anonymous user can access. Others require an authenticated user, and this is done with the request header data <code>Authorization: Bearer</code>, which dj-rest-auth takes care of (this is what my Discord bot uses).</p>
<p>I now want to expand on this by incorporating the <a href="https://florimondmanca.github.io/djangorestframework-api-key/" rel="nofollow noreferrer">Django REST Framework API Key package</a> so that API keys are required to identify third-party client apps. For example, my Discord bot would use the <code>Authorization: Api-Key</code> header to identify itself as an API client. It would be up to the third parties to make sure their API key doesn't get abused. The idea is that the API key is used as an extra authorisation layer such that different/extra access and throttles can be applied. My Discord bot, for example, would be granted more lenient throttling than, say, a regular user who has applied for an API key.</p>
<p>Now my question...</p>
<p>API-Keys can be easily hidden in a bespoke Discord bot written in python because it's hidden from users of the bot, but what should I do for my regular/current Django app that uses the template system? If I use the <code>HasAPIKey</code> permission check for all endpoints as described in the <a href="https://florimondmanca.github.io/djangorestframework-api-key/" rel="nofollow noreferrer">Django Rest Framework API package</a>, then my own Django app would be denied access.</p>
<p>I see two options</p>
<ul>
<li>Either insert the Api-Key via middleware (and thus treating my Django app like any other client) or,</li>
<li>Adding a permission check to my DRF endpoints, e.g. IsUsingFirstPartyApp.</li>
</ul>
<p>The problem is, I can only think of one way to identify "First party access" and that's via the HTTP_ORIGIN and HTTP_REFERER request header data.</p>
<p>I could have something like the following (code comments removed for brevity)...</p>
<pre><code>class IsUsingFirstPartyApp(permissions.BasePermission):
    def __init__(self):
        self.ALLOWED_ORIGIN = getattr(settings, 'SITE_URL', None)
        if not self.ALLOWED_ORIGIN:
            raise ValueError("SITE_URL is not set in Django settings.")

    def has_permission(self, request, view):
        origin = request.META.get('HTTP_ORIGIN')
        referer = request.META.get('HTTP_REFERER')

        def is_valid_origin(header_value):
            try:
                parsed_origin = urlparse(header_value)
                return f"{parsed_origin.scheme}://{parsed_origin.netloc}" == self.ALLOWED_ORIGIN
            except ValueError:  # Catch specific parsing errors
                return False

        if origin:
            return is_valid_origin(origin)
        if referer:
            return is_valid_origin(referer)
        return False
</code></pre>
<p>For this, it's checking if the requests are being made from itself, e.g. from app.example.com.
If I combine this with the HasAPIKey permission check (i.e. a logical OR) then my regular Django website can be used API-Key less as it always has. I can test this locally with cURL, where cURL gets an authentication error, but the anon-user using the website via a browser can access the endpoint just fine (via the javascript fetch request).</p>
<p>The only alternative I can think of is to move this same check to Django middleware and insert an API-Key into the request header. This would offer benefits for controlled throttling.</p>
<p>Or maybe what I'm doing is completely off the mark and not best practices. Am I on the right track, or is there a different way I should be doing this?</p>
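<p>For reference, the origin-validation core of the permission class above can be exercised standalone (the allowed origin here is an assumed stand-in for <code>settings.SITE_URL</code>):</p>

```python
from urllib.parse import urlparse

# Assumed stand-in for settings.SITE_URL
ALLOWED_ORIGIN = "https://app.example.com"

def is_valid_origin(header_value):
    try:
        parsed = urlparse(header_value)
        return f"{parsed.scheme}://{parsed.netloc}" == ALLOWED_ORIGIN
    except ValueError:
        return False

print(is_valid_origin("https://app.example.com/path"))  # True
print(is_valid_origin("https://evil.example.net/path"))  # False
```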
|
<python><django><rest>
|
2024-09-29 00:42:26
| 1
| 420
|
gmcc051
|
79,035,273
| 1,857,915
|
Is there any way to catch and handle infinite recursive functions that create Tkinter windows?
|
<p>I have a student that submitted a function like this:</p>
<pre><code>def infinite_windows():
    window = tkinter.Tk()
    infinite_windows()
</code></pre>
<p>I already test things inside a try-except block. My code reports the RecursionError, but then it freezes my desktop the next time it asks for user input and I have to kill Python. The following code hangs, but only when the input() call is included:</p>
<pre><code>import tkinter

def infinite_windows():
    window = tkinter.Tk()
    infinite_windows()

try:
    infinite_windows()
except Exception as e:
    print("caught the exception!")
    print(e)

input("hi there")  # hangs here
</code></pre>
<p>Is there any way for me to handle this behavior and continue running Python without removing the call to input()?</p>
|
<python><python-3.x><tkinter><recursion>
|
2024-09-28 22:06:55
| 1
| 584
|
Kyle
|
79,035,217
| 7,106,581
|
How to play a mp3 file with PyScript?
|
<p>I want to play an mp3 file using the HTML tag like so:</p>
<pre><code><audio controls>
    <source id="player" src="sounds//ziege.mp3" type="audio/mpeg">
</audio>
</code></pre>
<p>Playing the mp3 file through the player UI works of course, but not within PyScript:</p>
<pre><code><button py-click="play">Play</button>
<button py-click="pause">Pause</button>
<button py-click="stop">Stop</button>

<script type="py">
    from pyscript import document, display
    from pyweb import pydom
    from pyodide.ffi import to_js

    # probably not necessary - does not work without the conversion either
    sound = to_js(pydom["#player"][0])
    display(sound)

    def play(event):
        display(sound)
        sound.play()

    def pause(event):
        sound.pause()

    def stop(event):
        sound.stop()
</script>
</code></pre>
<p>Hitting any of the buttons just produces an "AttributeError: play".</p>
<p>How can I play an mp3 file through the player, or is there a PyScript module that I have overlooked so far?</p>
<p>I am using PyScript version 3.11.3</p>
|
<python><pyscript>
|
2024-09-28 21:18:05
| 1
| 948
|
Peter M.
|
79,034,910
| 25,413,271
|
Python 2.7: difference between codecs.open(<f>, 'r', encoding=<enc>) and <string>.encode(<enc>)
|
<p>What is the difference between code like this:</p>
<pre><code>with codecs.open(<file path>, 'r', encoding='my_enc') as f:
    data = json.load(f)
string = data['key']
</code></pre>
<p>and code like this:</p>
<pre><code>with open(<file path>, 'r') as f:
    data = json.load(f)
string = data['key'].encode('my_enc')
</code></pre>
<p>There must be a difference, since the first doesn't work for my application, but I don't understand exactly why. I feel I have some misunderstanding of how this encoding stuff works.</p>
<p>I am using Python 2.7 and cannot update it, since this environment was given to me and I am limited by it.</p>
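<p>Illustrating the two directions involved (a Python 3 sketch of the same operations; in 2.7 the <code>str</code>/<code>unicode</code> split makes them easy to conflate): <code>codecs.open(..., encoding=...)</code> <em>decodes</em> bytes from disk into text while reading, whereas <code>.encode()</code> turns text <em>into</em> bytes.</p>

```python
# text -> bytes: what .encode() does
raw = "héllo".encode("utf-8")
# bytes -> text: what codecs.open(..., encoding=...) does while reading
text = raw.decode("utf-8")

print(raw)   # b'h\xc3\xa9llo'
print(text)  # héllo
```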
|
<python><python-2.7><encoding>
|
2024-09-28 18:27:47
| 0
| 439
|
IzaeDA
|
79,034,526
| 315,168
|
Performance optimal way to serialise Python objects containing large Pandas DataFrames
|
<p>I am dealing with Python objects containing Pandas <code>DataFrame</code> and <code>Series</code> objects. These can be large, several millions of rows.</p>
<p>E.g.</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass

import pandas as pd

@dataclass
class MyWorld:
    # A lot of DataFrames with millions of rows
    samples: pd.DataFrame
    addresses: pd.DataFrame
    # etc.
</code></pre>
<p>I need to cache these objects, and I am hoping to find an efficient and painless way to serialise them, instead of standard <code>pickle.dump()</code>. Are there any specialised Python serialisers for such objects that would pickle <code>Series</code> data with an efficient codec and compression automatically? Alternatively, I would need to hand-construct several Parquet files, but that requires a lot more manual code, and I'd rather avoid it if possible.</p>
<p>Performance here may mean</p>
<ul>
<li>Speed</li>
<li>File size (can be related, as you need to read less from the disk/network)</li>
</ul>
<p>I am aware of <a href="https://joblib.readthedocs.io/en/latest/persistence.html" rel="nofollow noreferrer">joblib.dump()</a> which does <em>some</em> magic for these kind of objects, but based on the documentation I am not sure if this is relevant anymore.</p>
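<p>A minimal, pandas-free sketch of the baseline being compared against: <code>pickle</code> at the highest protocol wrapped in gzip. The dict stands in for the <code>MyWorld</code> dataclass; with real <code>DataFrame</code> columns, columnar formats such as Parquet or Feather would typically do better on both speed and size.</p>

```python
import gzip
import os
import pickle
import tempfile

# Stand-in object; in practice this would hold large DataFrames
world = {"samples": list(range(100_000))}

path = os.path.join(tempfile.mkdtemp(), "world.pkl.gz")
with gzip.open(path, "wb") as f:
    pickle.dump(world, f, protocol=pickle.HIGHEST_PROTOCOL)

with gzip.open(path, "rb") as f:
    restored = pickle.load(f)

raw_size = len(pickle.dumps(world, protocol=pickle.HIGHEST_PROTOCOL))
print(restored == world, os.path.getsize(path) < raw_size)  # True True
```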
|
<python><pandas><numpy><pickle>
|
2024-09-28 15:16:48
| 4
| 84,872
|
Mikko Ohtamaa
|
79,034,296
| 6,762,755
|
Filter a LazyFrame by row index
|
<p>Is there an idiomatic way to get specific rows from a LazyFrame? There's two methods I could figure out. Not sure which is better, or if there's some different method I should use.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

df = pl.DataFrame({"x": ["a", "b", "c", "d"]}).lazy()
rows = [1, 3]

# method 1
(
    df.with_row_index("row_number")
    .filter(pl.col("row_number").is_in(rows))
    .drop("row_number")
    .collect()
)

# method 2
df.select(pl.all().gather(rows)).collect()
</code></pre>
|
<python><python-polars>
|
2024-09-28 13:20:27
| 1
| 28,795
|
IceCreamToucan
|
79,034,285
| 7,123,797
|
Are there any practical uses for the copy() method of the frozenset objects?
|
<p>In Python the <code>frozenset</code> type is the only built-in immutable type that has a <code>copy()</code> method. And it looks like this method always returns <code>self</code>:</p>
<pre><code>>>> f1 = frozenset({1,2,3})
>>> f2 = f1.copy()
>>> f1 is f2
True
</code></pre>
<p>I don't see how this method could be useful, since technically it doesn't create a copy (a new object) at all. All other built-in immutable types don't have a <code>copy()</code> method. Are there any practical use cases where it makes sense to use this method on <code>frozenset</code> objects?</p>
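<p>One practical angle (a sketch, not an authoritative answer): <code>copy()</code> keeps <code>frozenset</code> interchangeable with <code>set</code> in duck-typed code, and it only returns <code>self</code> for <em>exact</em> <code>frozenset</code> instances; a subclass instance gets a genuine new object:</p>

```python
# copy() short-circuits only for exact frozenset instances;
# a subclass gets a real copy rather than self.
class TaggedFrozen(frozenset):
    pass

f = frozenset({1, 2, 3})
t = TaggedFrozen({1, 2, 3})

print(f.copy() is f)  # True
print(t.copy() is t)  # False
```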
|
<python><frozenset>
|
2024-09-28 13:15:17
| 0
| 355
|
Rodvi
|
79,034,098
| 1,654,229
|
How to create a decorator for a FastAPI route that is able to capture the path param value?
|
<p>I have a FastAPI route like so:</p>
<pre><code>@router.put("/{workflowID}", response_model=WorkflowResponse)
async def update_workflow_endpoint(
    workflowID: int,
    workflow: WorkflowUpdateRequest,
    db: AsyncSession = Depends(get_db)
):
    ...  # remaining code
</code></pre>
<p>I want to write a decorator which gets the path parameter <code>workflowID</code>, like so:</p>
<pre><code>@allowed_perm('WF_ADMIN', {workflowID})
</code></pre>
<p>So it gets two parameters - a string <code>WF_ADMIN</code> and the value of <code>workflowID</code> path param.</p>
<p>For example, if the API was called with <code>/5</code>, I want the decorator to internally get the value <code>5</code>.</p>
<p>How can this be done? I'm trying to build my permission module, and although I know this can be done using dependencies, I wanted to know if the same can be done via decorators.</p>
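<p>A FastAPI-free sketch of the decorator mechanics (the permission and parameter names are just the ones from the question): FastAPI passes path parameters to the endpoint as keyword arguments, so a wrapper can read them from <code>**kwargs</code> by name. For a real async endpoint the wrapper would need to be <code>async def</code> and preserve the wrapped signature so FastAPI still sees the parameters.</p>

```python
import functools

def allowed_perm(permission: str, param_name: str):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            value = kwargs.get(param_name)
            # real code would check `permission` against `value` here
            print(f"checking {permission} for {param_name}={value}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@allowed_perm("WF_ADMIN", "workflowID")
def update_workflow_endpoint(workflowID: int):
    return workflowID

print(update_workflow_endpoint(workflowID=5))  # 5
```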
|
<python><fastapi><decorator>
|
2024-09-28 11:42:11
| 1
| 1,534
|
Ouroboros
|
79,033,942
| 4,451,315
|
How to type function which returns associated type so that it works in subclass?
|
<p>I have a class <code>Foo</code> which has a <code>to_bar</code> method which returns <code>Bar</code>. I also have a subclass <code>MyFoo</code> for which <code>to_bar</code> returns <code>MyBar</code> (a subclass of <code>Bar</code>). <code>Foo</code> has quite a lot of methods which can return <code>Bar</code>, and for each one of these, the corresponding method on <code>MyFoo</code> should return <code>MyBar</code>.</p>
<p>Here's what I've written:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations

class Foo:
    def __init__(self, value):
        self.value = value

    @property
    def _bar(self) -> type[Bar]:
        return Bar

    def to_bar(self) -> Bar:
        return self._bar(self.value)

class Bar:
    def __init__(self, value):
        self.value = value

class MyFoo(Foo):
    @property
    def _bar(self) -> type[MyBar]:
        return MyBar

class MyBar(Bar):
    ...

print(type(Foo(3).to_bar()))
print(type(MyFoo(3).to_bar()))
</code></pre>
<p>It outputs</p>
<pre><code>$ python t.py
<class '__main__.Bar'>
<class '__main__.MyBar'>
</code></pre>
<p>which is correct. All good so far.</p>
<p>BUT, if I add</p>
<pre class="lang-py prettyprint-override"><code>reveal_type(Foo(3).to_bar())
reveal_type(MyFoo(3).to_bar())
</code></pre>
<p>to the end of the file and run <code>mypy</code> on it, I get</p>
<pre><code>$ mypy t.py
t.py:28: note: Revealed type is "t.Bar"
t.py:29: note: Revealed type is "t.Bar"
Success: no issues found in 1 source file
</code></pre>
<p>How can I type <code>to_bar</code> such that the return value is going to be correct for <code>MyFoo.to_bar</code>? I <em>could</em> override <code>to_bar</code> in <code>MyFoo</code>, but doing so for every method that returns <code>Bar</code> in <code>Foo</code> would be tedious and error-prone.</p>
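<p>One common approach (a sketch; other solutions exist, e.g. overloads or a mypy plugin) is to parametrize <code>Foo</code> over the <code>Bar</code> type, so the return type of <code>to_bar</code> follows from the class declaration rather than from overriding each method:</p>

```python
from __future__ import annotations

from typing import Generic, TypeVar

B = TypeVar("B", bound="Bar")

class Bar:
    def __init__(self, value):
        self.value = value

class MyBar(Bar):
    pass

class Foo(Generic[B]):
    _bar: type[B]  # each concrete subclass supplies the class to construct

    def __init__(self, value):
        self.value = value

    def to_bar(self) -> B:
        return self._bar(self.value)

class BaseFoo(Foo[Bar]):
    _bar = Bar

class MyFoo(Foo[MyBar]):
    _bar = MyBar

print(type(BaseFoo(3).to_bar()).__name__)  # Bar
print(type(MyFoo(3).to_bar()).__name__)    # MyBar
```

At runtime this behaves like the original; how strictly mypy checks the class-level <code>_bar</code> attribute may vary with its version and settings.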
|
<python><python-typing><mypy>
|
2024-09-28 10:16:59
| 0
| 11,062
|
ignoring_gravity
|
79,033,878
| 2,595,216
|
Make symbolic link as admin with New-Item and subprocess
|
<p>I have a working PowerShell cmdlet:</p>
<pre class="lang-bash prettyprint-override"><code>New-Item -ItemType SymbolicLink -Path "C:\Users\user1\Saved Games\Scripts\BIOS" -Target "C:\Users\user1\Saved Games\bios\Scripts\BIOS"
</code></pre>
<p>I need to run it as Admin (of course), so I wrap it in <code>Start-Process</code> and pass it to Python's <code>subprocess</code>.</p>
<p>I imagine it can be something like:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess

def run_cmd(*args):
    p = subprocess.Popen(*args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, error = p.communicate()
    return out, error

cmd_symlink = r'"New-Item -ItemType SymbolicLink -Path "C:\Users\user1\Saved Games\Scripts\BIOS" -Target "C:\Users\user1\Saved Games\bios\Scripts\BIOS"'
ps_command = f"& {{Start-Process powershell.exe -argumentlist '-command {cmd_symlink}' -Verb RunAs}}"
command = ['powershell.exe', '-command', ps_command]
run_cmd(command)
</code></pre>
<p>or????</p>
<p><strong>Solution:</strong></p>
<pre class="lang-py prettyprint-override"><code>cmd_symlink = r'"New-Item -ItemType SymbolicLink -Path \"C:\Users\user1\Saved Games\Scripts\BIOS\" -Target \"C:\Users\user1\Saved Games\bios\Scripts\BIOS\"'
ps_command = f"Start-Process pwsh.exe -argumentlist '-command {cmd_symlink}' -Verb RunAs"
command = ['pwsh.exe', '-command', ps_command]
run_cmd(command)
</code></pre>
|
<python><powershell><subprocess>
|
2024-09-28 09:36:37
| 0
| 553
|
emcek
|
79,033,795
| 9,592,585
|
Google GenerativeAI unable to load video files
|
<p>I was building a small Flask app with HTML templates where a user could upload a video and I would use an LLM to describe the video.</p>
<p>I receive the file URL through an HTML form, which I then save to a local folder.</p>
<pre><code>file_url = request.form[f'media-url-{i}']
extension = file_url.split('?')[0].split('.')[-1]
filename = str(uuid.uuid4()) + '.' + extension
file_path = os.path.join(os.environ['TMP_DIR'], filename)
</code></pre>
<p>This works successfully, with the image/video being saved.</p>
<p>I then try to use the LLM as such:</p>
<pre><code>genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel('gemini-1.5-flash')
gen_file = genai.upload_file(path=file_path)
response = model.generate_content([llm_prompt, gen_file]).text
</code></pre>
<p>This seems to work pretty well with image files.</p>
<p>When I upload videos, this doesn't work on my Flask app (although the mechanism works when I test on a Jupyter notebook).</p>
<p>This is the error I get:</p>
<pre><code>line 91, in generate_posts
    response = model.generate_content([llm_prompt, gen_file]).text
File "C:\miniconda3\lib\site-packages\google\generativeai\generative_models.py", line 331, in generate_content
response = self._client.generate_content(
File "C:\miniconda3\lib\site-packages\google\ai\generativelanguage_v1beta\services\generative_service\client.py", line 830, in generate_content
response = rpc(
File "C:\miniconda3\lib\site-packages\google\api_core\gapic_v1\method.py", line 131, in __call__
return wrapped_func(*args, **kwargs)
File "C:\miniconda3\lib\site-packages\google\api_core\retry\retry_unary.py", line 293, in retry_wrapped_func
return retry_target(
File "C:\miniconda3\lib\site-packages\google\api_core\retry\retry_unary.py", line 153, in retry_target
_retry_error_helper(
File "C:\miniconda3\lib\site-packages\google\api_core\retry\retry_base.py", line 212, in _retry_error_helper
raise final_exc from source_exc
File "C:\miniconda3\lib\site-packages\google\api_core\retry\retry_unary.py", line 144, in retry_target
result = target()
File "C:\miniconda3\lib\site-packages\google\api_core\timeout.py", line 120, in func_with_timeout
return func(*args, **kwargs)
File "C:\miniconda3\lib\site-packages\google\api_core\grpc_helpers.py", line 78, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.FailedPrecondition: 400 The File shtw63tpykp2 is not in an ACTIVE state and usage is not allowed.
</code></pre>
<p>Would you guys have any idea?</p>
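<p>The <code>FailedPrecondition</code> error suggests the video was still being processed server-side; uploaded files must reach the ACTIVE state before <code>generate_content</code> can use them. A hedged sketch of a polling helper (here <code>refresh</code> stands in for <code>genai.get_file</code>, and the state names mirror the File API's <code>PROCESSING</code>/<code>ACTIVE</code>):</p>

```python
import time

def wait_until_active(file, refresh, poll_seconds=2.0, timeout=300.0):
    """Poll an uploaded file until it leaves PROCESSING, then require ACTIVE."""
    deadline = time.monotonic() + timeout
    while file.state.name == "PROCESSING":
        if time.monotonic() > deadline:
            raise TimeoutError("file never became ACTIVE")
        time.sleep(poll_seconds)
        file = refresh(file.name)  # e.g. genai.get_file in the real SDK
    if file.state.name != "ACTIVE":
        raise RuntimeError(f"upload ended in state {file.state.name}")
    return file
```

Images are typically processed immediately, which would explain why only videos hit this error in the Flask app.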
|
<python><flask><google-generativeai>
|
2024-09-28 08:52:39
| 1
| 343
|
Shubhankar Agrawal
|
79,033,777
| 16,611,809
|
How to query a large file using pandas (or an alternative)?
|
<p>I want to query a large file that I generated from the <a href="https://ftp.ncbi.nih.gov/snp/organisms/human_9606/VCF/00-All.vcf.gz" rel="nofollow noreferrer">dbSNP VCF (16GB)</a> using this shell command:</p>
<pre><code>gzcat 00-All.vcf.gz | grep -v ## | awk -v FS='\t' -v OFS='\t' '{print $3, $1, $2, $4, $5}' | gzip > 00-All_relevant.vcf.gz
</code></pre>
<p>From this <code>00-All_relevant.vcf.gz</code> I want to query multiple rsIDs (column ID in the MRE) and retrieve the corresponding genomic location and change (columns #CHROM, POS, REF, ALT). I use pandas for my project so I started with trying pandas for this too. I am open to other Python compatible options though.</p>
<p>I cannot read the whole file at once, since my kernel always crashes when using this command:</p>
<pre><code>import pandas as pd

rsid_df = pd.read_csv('00-All_relevant.vcf.gz',
                      sep='\t')
</code></pre>
<p>That's why I tried reading it in chunks, and after implementing user27243451's solution I came up with this:</p>
<pre><code>rsid_df = pd.read_csv('00-All_relevant.vcf.gz',
                      sep='\t',
                      chunksize=1000000)

rsids = ['rs537152180', 'rs376204250', 'rs181326522']
variants = pd.DataFrame()

for data in rsid_df:
    rsid_found = data['ID'].isin(rsids)
    if rsid_found.astype(int).sum() > 0:
        variant = data.loc[rsid_found]
        variants = pd.concat([variants, variant])
        for id in variant['ID'].tolist():
            rsids.remove(id)
    if not rsids:
        break

print(variants)
</code></pre>
<p>This works, but it's terribly slow, especially if one of the rsIDs is on the last chunk. Is there any way to speed this up?</p>
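<p>As a point of comparison (a stdlib sketch; the column names follow the question's file), a single streaming pass with <code>csv</code> over the gzipped TSV avoids building DataFrame chunks at all and stops as soon as every rsID is found:</p>

```python
import csv
import gzip
import os
import tempfile

def lookup_rsids(path, rsids):
    """Stream the gzipped TSV once, keeping only the wanted rsID rows."""
    wanted = set(rsids)
    hits = {}
    with gzip.open(path, "rt", newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            if row["ID"] in wanted:
                hits[row["ID"]] = row
                if len(hits) == len(wanted):
                    break  # early exit once all rsIDs are found
    return hits

# Tiny demo file standing in for 00-All_relevant.vcf.gz
demo = os.path.join(tempfile.mkdtemp(), "demo.vcf.gz")
with gzip.open(demo, "wt", newline="") as f:
    f.write("ID\t#CHROM\tPOS\tREF\tALT\n")
    f.write("rs1\t1\t100\tA\tG\n")
    f.write("rs2\t2\t200\tC\tT\n")

print(lookup_rsids(demo, ["rs2"])["rs2"]["POS"])  # 200
```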
|
<python><pandas>
|
2024-09-28 08:42:02
| 2
| 627
|
gernophil
|