| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,205,470
| 536,262
|
setuptools pyproject.toml - managing paths of enclosed config files
|
<p>When using setuptools with pyproject.toml, how can I dynamically ensure that my binary finds its config file, which is shipped in the same directory as the Python code?</p>
<p>After <code>pip install</code>, the files are located at:</p>
<pre><code>:
c:\dist\venvs\ranchercli\lib\site-packages\ranchercli\ranchercli-projects.json
c:\dist\venvs\ranchercli\lib\site-packages\ranchercli\ranchercli.py
c:\dist\venvs\ranchercli\scripts\ranchercli.exe
:
</code></pre>
<p>But when I run:</p>
<pre><code># same dir:
ranchercli.exe --ns='keycloak' --env=prod --refresh
20240322095209.115|ERROR|C:\dist\venvs\ranchercli\Lib\site-packages\ranchercli\ranchercli.py:89|--projectfile=./ranchercli-projects.json not found
# relative:
20240322101246.345|ERROR|C:\dist\venvs\ranchercli\Lib\site-packages\ranchercli\ranchercli.py:89|--projectfile=../lib/site-packages/ranchercli/ranchercli-projects.json not found
</code></pre>
<p>I only got the full path working, but that path changes with every install:</p>
<p><code>C:\\dist\\venvs\\ranchercli\\Lib\\site-packages\\ranchercli\\ranchercli-projects.json</code></p>
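<p>A common approach (my own sketch, not from the question) is to resolve the config path relative to the installed module itself rather than the working directory; for package data, <code>importlib.resources.files()</code> is the even more robust stdlib route:</p>

```python
# Sketch: locate a data file that ships next to the module, so the path
# survives any install location. Inside ranchercli.py one would pass __file__.
from pathlib import Path

def config_path(module_file: str, name: str = "ranchercli-projects.json") -> Path:
    """Return the data file that sits next to the given module file."""
    return Path(module_file).resolve().parent / name

# usage inside the installed module:
#   CONFIG = config_path(__file__)
```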
|
<python><setuptools><pyproject.toml>
|
2024-03-22 10:14:57
| 2
| 3,731
|
MortenB
|
78,205,398
| 8,764,412
|
RabbitMQ - consume messages from a classic queue to a MQTT connection
|
<p>I'm trying to consume messages from a queue in RabbitMQ which is already set up and has some messages. I am testing an MQTT subscriber connection with the paho library:</p>
<pre><code>import paho.mqtt.client as mqtt
from paho.mqtt.enums import CallbackAPIVersion

def on_connect(client, userdata, connect_flags, reason_code, properties):
    print("Connected with result code " + str(reason_code))
    client.subscribe("sv/iqf/area/0/#")

def on_message(client, userdata, msg):
    print("Received message: " + msg.payload.decode())

client = mqtt.Client(callback_api_version=CallbackAPIVersion.VERSION2)  # Provide callback_api_version argument
client.username_pw_set("user", "password")
client.on_connect = on_connect
client.on_message = on_message
client.connect("IP", 1883, 60)
client.loop_forever()
</code></pre>
<p>However, I can't make the new subscription queue (MQTT connection) consume the classic queue messages as described in the <a href="https://www.rabbitmq.com/blog/2023/03/21/native-mqtt" rel="nofollow noreferrer">documentation</a>:</p>
<p><a href="https://i.sstatic.net/1YMUL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1YMUL.png" alt="Desired connection" /></a></p>
<p>My current set up for the classic queue to get the messages from is the following:</p>
<p><a href="https://i.sstatic.net/3fReS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3fReS.png" alt="Queue with messages to consume from" /></a></p>
<p>And the MQTT subscription queue is set as:</p>
<p><a href="https://i.sstatic.net/RFX82.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RFX82.png" alt="Queue to send the messages to" /></a></p>
<p>If the connection is up and messages are sent to the specified routing key, then both queues receive them. My goal, however, is to have the messages land in the first queue, so that a subscriber with an unstable connection can consume them whenever it reconnects.</p>
<p>What am I missing?</p>
|
<python><rabbitmq><mqtt>
|
2024-03-22 10:02:36
| 1
| 383
|
Arduino
|
78,205,293
| 7,162,827
|
FastAPI override Depends & Security
|
<p>I have a problem with testing endpoints that use both <code>Depends</code> AND <code>Security</code>. First of all, here is my root endpoint, which I can test perfectly fine using <code>app.dependency_overrides</code>:</p>
<pre><code># restapi/main.py
from api_v1.api import router as api_router
from authentication.verification import Verification
from fastapi import FastAPI, Security
from mangum import Mangum

app = FastAPI()
security = Verification()

@app.get("/")
async def root(auth_result: str = Security(security.verify)):
    return {"message": "Hello World this is restapi.py!"}

app.include_router(api_router, prefix="/api/v1")
handler = Mangum(app)
</code></pre>
<p>And then to test it:</p>
<pre><code>from fastapi.testclient import TestClient
from restapi.main import app, security

def test_root_api():
    client = TestClient(app=app)
    app.dependency_overrides[security.verify] = lambda: ""
    response = client.get("/")
    assert response.status_code == 200
    assert response.json() == {"message": "Hello World this is restapi.py!"}
    app.dependency_overrides = {}

def test_root_api_without_security():
    client = TestClient(app=app)
    response = client.get("/")
    assert response.status_code == 403
    assert response.json() == {"detail": "Not authenticated"}
</code></pre>
<p>There are no problems when I only use the <code>Security</code> function. However, I have another endpoint (which I define as <code>api/v1/exchange_rates</code> in my api_router) that has both <code>Depends</code> and <code>Security</code>:</p>
<pre><code># restapi/api_v1/endpoints/exchange_rates.py
security = Verification()
router = APIRouter()

@router.get("/")
async def exchange_rates(
    query: ExchangeRatesQuery = Depends(),
    auth_result: str = Security(security.verify),
):
</code></pre>
<p>The <code>ExchangeRatesQuery</code> is a pydantic BaseModel. The test below for this endpoint works perfectly fine ONLY if I remove the <code>auth_result: str = Security...</code> from my endpoint.</p>
<pre><code>from restapi.main import app
from restapi.api_v1.endpoints.exchange_rates import security

def test_exchange_rates():
    client = TestClient(app=app)
    app.dependency_overrides[security.verify] = lambda: ""
    response = client.get(
        "/api/v1/exchange-rates?start_date=2023-03-20&end_date=2023-03-21&base_currency=USD&target_currencies=EUR",
    )
    assert response.status_code == 200
    assert response.json() == [
        {
            "exchange_date": "2023-03-20",
            "exchange_rate": "0.88",
            "base_currency": "USD",
            "target_currency": "EUR",
        }
    ]
    app.dependency_overrides = {}
</code></pre>
<p>Even though I import <code>security</code> from my <code>exchange_rates.py</code> to ensure that I'm overriding the dependencies of the correct object, this doesn't seem to work. Any help is greatly appreciated.</p>
<p><strong>Edit 1</strong>
I've also tried patching the verify function with a standard return value of <code>""</code> using unittest.mock, but have not been able to make it work that way either.</p>
<pre><code>@mock.patch.object(Verification, "verify", lambda: "")  # also tried security instead of Verification
def test_exchange_rates():
    ...
</code></pre>
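<p>For what it's worth, the override-key mechanics can be checked in isolation (plain-Python sketch; the assumption is that FastAPI looks dependencies up in <code>app.dependency_overrides</code> by the dependency callable itself). Bound methods taken from the same instance compare equal and hash alike, so <code>security.verify</code> captured in the test and the one registered on the route should match as dict keys:</p>

```python
# Plain-Python sketch of the override-key mechanics: two accesses to
# security.verify produce distinct bound-method objects, but they compare
# equal and hash the same, so either one works as a dict key.
class Verification:
    def verify(self):
        return "real"

security = Verification()

overrides = {security.verify: lambda: ""}   # key: one bound-method object
assert overrides[security.verify]() == ""   # looked up via a fresh bound method
```

<p>If this holds, the override key itself is not the problem, and the mismatch lies elsewhere (for example, a different <code>Verification</code> instance being registered on the router).</p>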
|
<python><dependency-injection><pytest><fastapi>
|
2024-03-22 09:44:30
| 1
| 567
|
armara
|
78,205,206
| 7,176,676
|
Color edges distinctly in network based on attribute value
|
<p>Consider an undirected <code>multigraph</code> (without <code>self-loops</code>) in which the maximum vertex degree <code>d=8</code> (in other words, there are at most 4 edges between two neighboring nodes).</p>
<p>I would like to assign to each of the parallel edges between two neighboring nodes a unique number from the set <code>{1,2,3,4}</code> based on the edges' attribute values <code>z</code>, given that:</p>
<ul>
<li>the attribute value <code>z</code> of each of the parallel edges between two nodes is guaranteed to be unique;</li>
<li>the attribute values can however be repeated in neighboring edges;</li>
<li>an attribute value from a parallel edge is only repeated in neighboring edges (cluster), but it never re-occurs elsewhere in the network (only 1 single cluster per attribute value);</li>
<li>in principle I consider this to be an undirected network, but I can also make it directed if so required.</li>
</ul>
<p><strong>I am not sure this can be solved with only 4 colors/numbers. The above may be relaxed to a larger set of numbers if required.</strong></p>
<p>I am looking for an algorithm that can solve this problem using the Python package <code>networkx</code>.</p>
<p>Below I give some examples of sections of the network that can occur. In these examples, the <code>|</code> indicates a node, and the <code>z</code> values indicate the edges with attribute value <code>z_{i}</code>:</p>
<pre><code>z1 | z1,z2 | z2 | z2,z3 | z3 | z3 | z3,z4 | z4
z1 | z1,z2 | z1,z2,z3 | z2,z3 | z3
z1 | z1,z2 | z1,z2,z3 | z1,z2,z3,z4 | z1,z2,z3,z4 | z2,z3,z4 | z3,z4 | z4
</code></pre>
<p>Although it is trivial to color the sections as shown in the examples, the point is that there are branches in the network that converge at which the numbering should be unique again.</p>
<p>I have seen that <code>networkx</code> does have a function to uniquely color <code>nodes</code>. There also exist <code>edge coloring</code> algorithms, but this problem is different in the sense that I do not want each edge to have a unique color.</p>
<p>My feeling is that this requires a greedy algorithm as I read it is in general an <code>NP-hard</code> problem, but due to some of the conditions above I imagine smarter algorithms exist?</p>
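<p>To illustrate the greedy idea (my own sketch, not a ready-made <code>networkx</code> routine, and it may use more than 4 numbers in dense clusters): assign each <code>z</code> value one number, chosen to differ from numbers already used on edges incident to the same pair of nodes. A pure-Python stand-in for a <code>MultiGraph</code>, where each edge is a <code>(u, v, z)</code> tuple:</p>

```python
# Greedy numbering sketch: one number per z-value (one cluster per z),
# picked as the smallest number not yet used in the edge's neighborhood.
def number_edges(edges):
    z_to_num = {}                      # z-value -> assigned number
    incident = {}                      # node -> set of z-values touching it
    for u, v, z in edges:
        incident.setdefault(u, set()).add(z)
        incident.setdefault(v, set()).add(z)
    numbering = {}
    for u, v, z in edges:
        if z not in z_to_num:
            used = {z_to_num[w]
                    for w in incident[u] | incident[v]
                    if w in z_to_num}
            n = 1
            while n in used:           # smallest number free in this neighborhood
                n += 1
            z_to_num[z] = n
        numbering[(u, v, z)] = z_to_num[z]
    return numbering
```

<p>The same loop ports directly to a <code>networkx.MultiGraph</code> by iterating <code>G.edges(keys=True, data=True)</code> and using <code>G.edges([u, v], data=True)</code> for the neighborhood.</p>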
|
<python><networkx><graph-theory><graph-coloring>
|
2024-03-22 09:27:20
| 0
| 395
|
flow_me_over
|
78,205,081
| 826,112
|
Why do Python list sizes in memory not match documentation?
|
<p>I've been trying to gain a better understanding of how Python allocates memory for lists when the list is being extended by appending. This <a href="https://stackoverflow.com/questions/63204377/memory-size-of-list-python">question</a> covers the basics well and explains how the memory increments increase in size as the length of the list increases. This <a href="https://www.opensourceforu.com/2021/05/memory-management-in-lists-and-tuples/" rel="nofollow noreferrer">article</a> is an explanation of the source code that can be found <a href="https://github.com/python/cpython/blob/main/Objects/listobject.c" rel="nofollow noreferrer">here</a>.</p>
<p>I want to ask about this explanation:</p>
<blockquote>
<pre><code>/* This over-allocates proportional to the list size, making room
* for additional growth. The over-allocation is mild, but is
* enough to give linear-time amortized behavior over a long
* sequence of appends() in the presence of a poorly-performing
* system realloc().
* Add padding to make the allocated size multiple of 4.
* The growth pattern is: 0, 4, 8, 16, 24, 32, 40, 52, 64, 76, ...
* Note: new_allocated won't overflow because the largest possible value
* is PY_SSIZE_T_MAX * (9 / 8) + 6 which always fits in a size_t.
*/
new_allocated = ((size_t)newsize + (newsize >> 3) + 6) & ~(size_t)3;
/* Do not overallocate if the new size is closer to overallocated size
* than to the old size.
*/
</code></pre>
</blockquote>
<p>And specifically, this calculation:</p>
<blockquote>
<p>new_allocated = ((size_t)newsize + (newsize >> 3) + 6) & ~(size_t)3;</p>
</blockquote>
<p>My understanding of this calculation is that the new memory allocation will equal <code>newsize</code> (the current size triggering the increase), plus <code>newsize</code> shifted right by three bits (an integer divide by eight), plus 6. This is then ANDed with the one's complement of 3, forcing the last two bits to zero so the value is divisible by 4.</p>
<p>I used this code to generate my lists and report the sizes:</p>
<pre><code>a = [i for i in range(108)]
print(sys.getsizeof(a)) # 920 bytes
b = [i for i in range(109)]
print(sys.getsizeof(b)) # 1080 bytes
</code></pre>
<p>At 109 elements the resize is triggered and <code>newsize</code> now equals 928 bytes.</p>
<p>The calculation above should look like this:
<a href="https://i.sstatic.net/8eQJY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8eQJY.png" alt="Binary calculation" /></a></p>
<p>1048 bytes is less than the reported size of 1080 bytes.</p>
<p>The documentation states that this process may not work well with small lists so I tried a bigger list. I won't reproduce this in binary.</p>
<pre><code>a = [i for i in range(10640)]
print(sys.getsizeof(a)) # 85176 bytes
b = [i for i in range(10641)]
print(sys.getsizeof(b)) # 95864 bytes
</code></pre>
<blockquote>
<p>[85184 + (85184 &gt;&gt; 3) + 6] = 95838 bytes</p>
</blockquote>
<p>This will drop to 95836 when the "& ~3" is applied. Again, short of the 95864 reported.</p>
<p>Why is the reported resize greater than the calculated resize?</p>
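<p>One thing worth double-checking (my own sketch): <code>newsize</code> in the C code counts <em>elements</em>, not bytes. Applying the formula to element counts and only then converting to bytes, assuming CPython on 64-bit (56-byte list header, 8-byte pointers), reproduces both reported sizes exactly:</p>

```python
# Evaluate the listobject.c growth formula in elements, then convert to bytes.
# The 56-byte header and 8-byte pointer size are CPython-on-64-bit assumptions.
def new_allocated(newsize: int) -> int:
    return (newsize + (newsize >> 3) + 6) & ~3

def predicted_bytes(n_elements: int) -> int:
    return 56 + 8 * new_allocated(n_elements)

print(predicted_bytes(109))    # 1080, the reported size of the 109-element list
print(predicted_bytes(10641))  # 95864, the reported size of the 10641-element list
```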
|
<python><memory>
|
2024-03-22 09:05:29
| 1
| 536
|
Andrew H
|
78,205,054
| 6,610,407
|
Type hint for a factory classmethod of an abstract base class in Python 3
|
<p>I have two abstract classes, <code>AbstractA</code> and <code>AbstractB</code>. <code>AbstractB</code> is generic and its type parameter is bound to <code>AbstractA</code>. <code>AbstractB</code> further has a factory classmethod that returns an instance of one of its subclasses--which one is determined from some input parameter. See below for a minimal example. Note that, after trial and error, I found that I need to add a type hint for <code>B_type</code> in <code>AbstractB.factory()</code>.</p>
<h3>Minimal example</h3>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations

from abc import ABC
from typing import Generic, TypeVar, Type, Any

class AbstractA(ABC):
    pass

class ConcreteA1(AbstractA):
    pass

class ConcreteA2(AbstractA):
    pass

ATypeT = TypeVar("ATypeT", bound=AbstractA)

class AbstractB(ABC, Generic[ATypeT]):
    @classmethod
    def factory(cls, typeB_selector: int) -> AbstractB[Any]:  # which type hint here?
        B_type: Type[AbstractB[Any]]  # which type hint here?
        if typeB_selector == 1:
            B_type = ConcreteB1
        elif typeB_selector == 2:
            B_type = ConcreteB2
        return B_type()

class ConcreteB1(AbstractB[ConcreteA1]):
    pass

class ConcreteB2(AbstractB[ConcreteA2]):
    pass
</code></pre>
<p>I'm trying to understand what type hints to use for the return value of <code>AbstractB.factory()</code> and <code>B_type</code>. From my (admittedly limited) understanding of generics, I believe the appropriate type should be <code>AbstractB[AbstractA]</code>. However, with <code>strict=true</code>, mypy gives errors for the two <code>B_type=...</code> lines; e.g. for the first one:</p>
<p><code>Incompatible types in assignment (expression has type "type[ConcreteB1]", variable has type "type[AbstractB[AbstractA]]")</code>.</p>
<p>The only way I can avoid errors is by using <code>AbstractB[Any]</code>, as in the example. However, this feels wrong to me, since we know that the type parameter is bound to <code>AbstractA</code>. I also tried <code>AbstractB[ATypeT]</code>, but that seems wrong as well, and also results in mypy errors.</p>
<h3>Questions</h3>
<ul>
<li>Is my assumption that the correct type hint should be <code>AbstractB[AbstractA]</code> correct, or am I misunderstanding things?</li>
<li>Am I doing something weird here? According to my understanding of the factory pattern this seems like a valid approach.</li>
</ul>
<h3>Notes</h3>
<ul>
<li>My actual code properly handles the case when <code>factory()</code> is called on one of the subclasses (<code>ConcreteB1</code>, <code>ConcreteB2</code>).</li>
<li>There are several similar questions on SO (e.g. <a href="https://stackoverflow.com/questions/46007544/python-3-type-hint-for-a-factory-method-on-a-base-class-returning-a-child-class">1</a>, <a href="https://stackoverflow.com/questions/62456303/type-annotations-for-factory-method">2</a>), but I believe my case is different because (when <code>factory()</code> is called on <code>AbstractB</code>) there's no way of telling the type checker which subclass will be returned.</li>
</ul>
|
<python><generics><abstract-class><mypy><python-typing>
|
2024-03-22 08:59:17
| 1
| 475
|
MaartenB
|
78,205,031
| 2,202,989
|
Numpy image array to pyglet image
|
<p>I am trying to load image to a numpy array using imageio, and display it using pyglet. The end result is garbled, though I can see some structure. Code:</p>
<pre><code>import pyglet as pg
import imageio.v3 as im
import numpy as np

window = pg.window.Window()

# Load image, and get shape
np_image = im.imread("test.png")[100:400, 100:400]  # Get smaller section from much larger image (~3Kx3K)
height = np_image.shape[0]
width = np_image.shape[1]
depth = np_image.shape[2]

# Create pyglet image and load image data to it (+ set anchor for displaying)
pg_image = pg.image.create(width, height)
pg_image.set_data("RGB", width*3, np_image)
pg_image.anchor_x = width//2
pg_image.anchor_y = height//2

# Print shapes and dtype, all should be correct
print(np_image.shape)
print(width, height, depth)
print(np_image.dtype)

# Put into sprite
gp_sprite = pg.sprite.Sprite(pg_image, x=window.width//2, y=window.height//2)

@window.event
def on_draw():
    window.clear()
    gp_sprite.draw()

pg.app.run()
</code></pre>
<p>End result is:</p>
<p><a href="https://i.sstatic.net/rBd1T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rBd1T.png" alt="Garbled image data as result of previous code" /></a></p>
<p>What am I doing wrong here?</p>
<p>EDIT:</p>
<p>Debug print is:</p>
<pre><code>(300, 300, 3)
300 300 3
uint8
</code></pre>
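<p>A data-preparation sketch of where the garbling can come from (my assumptions: <code>set_data</code> wants a contiguous bytes buffer with rows running bottom-to-top, while a numpy slice cut from a larger image is top-down and non-contiguous):</p>

```python
# Sketch of the data-prep step only: flip rows to bottom-up order and force
# a contiguous byte buffer before handing the data to pyglet.
import numpy as np

def to_pyglet_buffer(np_image: np.ndarray) -> bytes:
    flipped = np_image[::-1]                        # rows bottom-to-top
    return np.ascontiguousarray(flipped).tobytes()  # slices are not contiguous

# then, with a matching pitch:
#   pg_image.set_data("RGB", width * 3, to_pyglet_buffer(np_image))
```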
|
<python><numpy><pyglet><python-imageio>
|
2024-03-22 08:53:54
| 1
| 383
|
Nyxeria
|
78,204,869
| 900,898
|
Python: urllib.parse.urljoin VS string concatenation
|
<p>Could you please explain the difference between two approaches to URL formatting, concatenation and <code>urllib.parse.urljoin</code>, and which is better to use?</p>
<pre><code># base_url = "http://example.com"
# url_path = "/ssome/endpoint"
from urllib.parse import urljoin
url = urljoin(base_url, url_path)
</code></pre>
<p>VS</p>
<pre><code>url = base_url + url_path
</code></pre>
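<p>A quick way to see the difference (sketch; the URLs are illustrative): <code>urljoin</code> resolves the second argument against the first following RFC 3986 relative-reference rules, while <code>+</code> is blind string pasting:</p>

```python
from urllib.parse import urljoin

base = "http://example.com/api/"

# urljoin applies relative-reference resolution:
assert urljoin(base, "v2/users") == "http://example.com/api/v2/users"
assert urljoin(base, "/v2/users") == "http://example.com/v2/users"         # absolute path replaces /api/
assert urljoin("http://example.com/api", "v2") == "http://example.com/v2"  # no trailing slash: last segment dropped

# naive concatenation never normalizes anything:
assert base + "/v2/users" == "http://example.com/api//v2/users"            # doubled slash
```

<p>So concatenation only works when the pieces are already in exactly the right shape; <code>urljoin</code> is the safer default.</p>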
<p>Thank you</p>
|
<python><url>
|
2024-03-22 08:20:45
| 0
| 548
|
ZigZag
|
78,204,798
| 7,905,329
|
urllib3 warning ReadTimeoutError while connecting to Nominatim's geolocator
|
<p>The code below is meant to extract pincodes, but raises the warning below. How do I work around this?</p>
<pre><code>def get_zipcode(df, geolocator, lat_field, lon_field):
    location = geolocator.reverse(str(df[lat_field]) + "," + str(df[lon_field]))
    # print(location)  # -- uncomment this in case of checking the output
    # return location.raw['address']['postcode']
    if location == None:
        return "none"
    else:
        return location.raw['address'].get('postcode')

geolocator = geopy.Nominatim(user_agent="MyGeocodingScript")
zipcode = df.apply(get_zipcode, axis=1, geolocator=geolocator, lat_field='GPS_Lat', lon_field='GPS_Lon')
</code></pre>
<p>Error:</p>
<pre><code>WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='nominatim.openstreetmap.org', port=443): Read timed out. (read timeout=1)")': /reverse?lat=11.2181948&lon=78.1811652&format=json&addressdetails=1
</code></pre>
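<p>The log shows <code>read timeout=1</code>, which matches geopy's default 1-second timeout; passing a larger one (e.g. <code>geolocator.reverse(..., timeout=10)</code>) and pacing requests usually helps. A generic retry-with-backoff wrapper (my own sketch, stdlib only; geopy's <code>RateLimiter</code> offers similar pacing):</p>

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(call: Callable[[], T], tries: int = 3, delay: float = 1.0) -> T:
    """Retry a flaky network call with linear backoff; re-raise on final failure."""
    for attempt in range(tries):
        try:
            return call()
        except Exception:
            if attempt == tries - 1:
                raise
            time.sleep(delay * (attempt + 1))

# usage sketch (hypothetical call):
#   location = with_retries(lambda: geolocator.reverse(f"{lat},{lon}", timeout=10))
```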
|
<python><urllib3><httpconnection><nominatim><geolocator>
|
2024-03-22 08:02:26
| 0
| 364
|
anagha s
|
78,204,656
| 1,274,613
|
How do I test whether two structural types are subtypes of each other?
|
<p>I have two Protocols in different modules.</p>
<pre><code>class TooComplexProtocolInNeedOfRefactoring(Protocol):
@property
def useful(self) -> int:
...
@property
def irrelevant1(self) -> int:
...
@property
def irrelevant2(self) -> int:
...
@property
def irrelevant3(self) -> int:
...
</code></pre>
<p>and</p>
<pre><code>class SpecificSubProtocol(Protocol):
@property
def useful(self) -> int:
...
</code></pre>
<p>I want to check that all classes that implement <code>TooComplexProtocolInNeedOfRefactoring</code> can be used for <code>SpecificSubProtocol</code>, so the former is a subtype of the latter.</p>
<p>All the tricks for checking structural subtypes I have found need instantiation or at least concrete classes, like</p>
<pre><code>test_subtype: Type[SpecificSubProtocol] = TooComplexProtocolInNeedOfRefactoring
</code></pre>
<p>(“Can only assign concrete classes to a variable of type "type[SpecificSubProtocol]"”)</p>
<p>How can I test whether two protocols are structural subtypes of each other?</p>
|
<python><python-typing><structural-typing>
|
2024-03-22 07:29:35
| 1
| 6,472
|
Anaphory
|
78,204,636
| 7,641,854
|
How to properly type hint usage of child classes in list?
|
<p>How do I properly type hint this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Optional
class A: ...
class B(A): ...
class C(A): ...
my_dict: dict[str, list[Optional[A]]] = {"b": [], "c": []}
b: list[Optional[B]] = [B()]
c: list[Optional[C]] = [C()]
my_dict["b"] = b
my_dict["c"] = c
</code></pre>
<p>I obtain the following error in <code>mypy</code>:</p>
<pre><code>error: Incompatible types in assignment (expression has type "list[B | None]", target has type "list[A | None]") [assignment]
</code></pre>
<p>This works but is not what I want:</p>
<pre class="lang-py prettyprint-override"><code>b: list[Optional[A]] = [B()]
c: list[Optional[A]] = [C()]
</code></pre>
<p>Naively I assumed that the information would be inherited that <code>B</code> and <code>C</code> are children of <code>A</code> and thus the assignment is valid since this works:</p>
<pre class="lang-py prettyprint-override"><code>bb: A = B()
cc: A = C()
</code></pre>
<p>Am I approaching it the wrong way? Can anyone help out?</p>
<p>I don't want it too verbose. Assume I have longer class names for <code>B</code> and <code>C</code> in real life.</p>
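<p>For context (my sketch, not from the question): <code>list</code> is invariant because it is mutable — a <code>list[Optional[A]]</code> would let you append an <code>A</code> into <code>b</code> — while the read-only <code>Sequence</code> is covariant and accepts lists of subclasses:</p>

```python
# list is invariant, Sequence is covariant: typing the dict values as the
# read-only Sequence makes the assignments type-check without widening b and c.
from collections.abc import Sequence
from typing import Optional

class A: ...
class B(A): ...

b: list[Optional[B]] = [B()]

# This assignment type-checks, because Sequence cannot be mutated:
s: Sequence[Optional[A]] = b

my_dict: dict[str, Sequence[Optional[A]]] = {"b": b}
```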
|
<python><mypy><python-typing>
|
2024-03-22 07:25:02
| 1
| 760
|
Ken Jiiii
|
78,204,517
| 13,392,257
|
How to get list of imported items in python file
|
<p>I want to get the list of imported items in a file.</p>
<p>Example: I have a file</p>
<pre><code># file1.py
from mod1 import a
from mod2 import b, c
print(a)
</code></pre>
<p>I want a function that takes the path of the file, <code>foo("file1.py")</code>, and returns the following:</p>
<pre><code>{
    "file1.py": {
        "mod1": ["a"],
        "mod2": ["b", "c"]
    }
}
</code></pre>
<p>Is there an existing Python library that solves my problem? What Python lib do you recommend for analyzing Python code?</p>
|
<python>
|
2024-03-22 06:58:05
| 1
| 1,708
|
mascai
|
78,204,333
| 143,397
|
How to run Rust library unit tests with Maturin?
|
<p>I'm building a custom Python module in Rust, with maturin.</p>
<p>I am using PyEnv, with Python 3.12.2, installed with <code>env PYTHON_CONFIGURE_OPTS="--enable-shared"</code> and <code>--keep</code> so the Python sources are available. I have a venv and I'm using <code>maturin develop</code> to build my library and update the venv as I need to.</p>
<p>I have some unit tests in my <code>lib.rs</code> file, and usually I'd just run <code>cargo test</code> to run them.</p>
<p>But within the Maturin-managed environment, I get linker errors:</p>
<pre><code>error: linking with `cc` failed: exit status: 1
...
= note: /usr/bin/ld: .../target/debug/deps/libpyo3-c286c2d4bbe22ea2.rlib(pyo3-c286c2d4bbe22ea2.pyo3.7555584b645de9e8-cgu.01.rcgu.o): in function `pyo3_ffi::object::Py_DECREF':
$HOME/.cargo/git/checkouts/pyo3-a22e69bc62b9f0fd/da24f0c/pyo3-ffi/src/object.rs:597: undefined reference to `_Py_Dealloc'
</code></pre>
<p>I was able to work around this with the following two methods:</p>
<ol>
<li>Using <code>RUSTFLAGS</code> and <code>LD_LIBRARY_PATH</code> / rpath:</li>
</ol>
<pre><code>$ export RUSTFLAGS="-C link-arg=-Wl,-rpath,.../pyenv.git/versions/3.12.2/lib -C link-arg=-L.../pyenv.git/versions/3.12.2/lib -C link-arg=-lpython3.12"
</code></pre>
<ol start="2">
<li>Or creating <code>.cargo/config.toml</code>:</li>
</ol>
<pre><code>[target.'cfg(all())']
rustflags = [
"-C", "link-arg=-Wl,-rpath,.../pyenv.git/versions/3.12.2/lib",
"-C", "link-arg=-L.../pyenv.git/versions/3.12.2/lib",
"-C", "link-arg=-lpython3.12",
]
</code></pre>
<p>I've snipped the paths for brevity/privacy.</p>
<p>Both of these methods are doing the same thing - providing an explicit linker path, library to link against, and run-time library path. This works, but it feels wrong.</p>
<p>I wasn't able to find a <code>maturin test</code> or equivalent, and it doesn't seem right that I have to manually specify the linker arguments to <code>cargo</code> / <code>rustc</code> when I have maturin right there to do that for me.</p>
<p>Is there a good way to do this with maturin?</p>
<hr />
<p>EDIT: In the PyO3 docs I found the <a href="https://pyo3.rs/v0.7.0-alpha.1/advanced.html#testing" rel="noreferrer">Testing</a> section, which recommends converting this part of <code>Cargo.toml</code>:</p>
<pre><code>[dependencies]
pyo3 = { git = "https://github.com/pyo3/pyo3", features = ["extension-module"] }
</code></pre>
<p>Into this:</p>
<pre><code>[dependencies.pyo3]
git = "https://github.com/pyo3/pyo3"
[features]
extension-module = ["pyo3/extension-module"]
default = ["extension-module"]
</code></pre>
<p>And then running <code>cargo test --no-default-features</code>. This does in fact resolve the linker errors, but it still has trouble locating the library at runtime:</p>
<pre><code> Running unittests src/lib.rs (target/debug/deps/pyo3_example-cc3941bd091dc64e)
.../target/debug/deps/pyo3_example-cc3941bd091dc64e: error while loading shared libraries: libpython3.12.so.1.0: cannot open shared object file: No such file or directory
</code></pre>
<p>This is resolved with setting <code>LD_LIBRARY_PATH</code> to <code>.../pyenv.git/versions/3.12.2/lib</code>, so it goes some way to helping, but it's not quite sufficient.</p>
|
<python><unit-testing><rust><maturin>
|
2024-03-22 06:13:08
| 1
| 13,932
|
davidA
|
78,203,981
| 2,666,270
|
Downloading file from Google Drive
|
<p>I'm trying to download a file from Google Drive using <code>gdown</code> as follows:</p>
<pre><code>file_id = url.split('=')[-1]
gdrive_url = f'https://drive.google.com/uc?id={file_id}'
output_path = os.path.join(extract_to, 'file.tgz')
gdown.download(gdrive_url, output_path, quiet=False)
</code></pre>
<p>However, I keep getting:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_431/1011135980.py in <cell line: 5>()
3 extract_to = '../fonts_data/fonts_recommend/third_party/adobe_research'
4
----> 5 download_and_extract(zip_url, extract_to, gdrive=True)
/tmp/ipykernel_431/1160197662.py in download_and_extract(url, extract_to, gdrive)
34 tar_ref.extractall(path=extract_to)
35 else:
---> 36 raise ValueError('Unsupported file extension for: ' + url)
37
38 print(f"Extracted contents to {extract_to}")
ValueError: Unsupported file extension for: https://drive.google.com/open?id=10GRqLu6-1JPXI8rcq23S4-4AhB6On-L6
</code></pre>
<p>No matter what I try. I'm not sure what I'm missing. This is the file URL (which I can download locally):</p>
<p><a href="https://drive.google.com/file/d/10GRqLu6-1JPXI8rcq23S4-4AhB6On-L6/view" rel="nofollow noreferrer">https://drive.google.com/file/d/10GRqLu6-1JPXI8rcq23S4-4AhB6On-L6/view</a></p>
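<p>Worth noting (sketch; <code>download_and_extract</code> is the asker's own helper): the traceback comes from that helper's extension check on the <em>URL</em>, which has no file extension, not from gdown. Classifying by the local output filename instead, and letting gdown parse the share link itself via <code>fuzzy=True</code>, avoids both problems:</p>

```python
# Classify the archive by the local filename, not by the Drive URL.
def archive_kind(output_path: str) -> str:
    if output_path.endswith((".tgz", ".tar.gz")):
        return "tar"
    if output_path.endswith(".zip"):
        return "zip"
    raise ValueError("Unsupported file extension for: " + output_path)

# download sketch (not run here; fuzzy=True accepts any Drive URL form):
#   gdown.download(url, "file.tgz", quiet=False, fuzzy=True)
```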
|
<python>
|
2024-03-22 04:07:12
| 2
| 9,924
|
pceccon
|
78,203,521
| 5,156,525
|
R `dnbinom` giving different results than Python `scipy.stats.nbinom.pmf` even after accounting for different parameterization
|
<p>I am trying to translate my colleague's R code into Python. It involves a calculation with a negative binomial distribution's probability mass function, but the issue is that R's <code>dnbinom</code> uses a different parameterization from Python's <code>scipy.stats.nbinom.pmf</code>. According to <a href="https://stackoverflow.com/questions/77888226/cdf-of-a-negative-binomial-with-mu-and-alpha-in-python">this</a> question and its answer, given R's <code>mu</code> (mean) and <code>size</code> (dispersion), I should be able to get Scipy's <code>n</code> and <code>p</code> with the following code:</p>
<pre><code> p = 1 / (1 + size * mu)
n = 1 / size
</code></pre>
<p>However, if I assume <code>convert_params</code> does the above calculation and apply it like this:</p>
<pre><code> from scipy.stats import nbinom
n, p = convert_params(15, 0.463965)
nbinom.pmf(3, n, p)
</code></pre>
<p>I get <code>0.036</code>, whereas if I do this in R:</p>
<pre><code> dnbinom(3, mu=15, size=0.463965)
</code></pre>
<p>I get <code>0.05</code>.</p>
<p>Does anyone know what's going on here? Have I used an incorrect formula to change the parameterization?</p>
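<p>For comparison (my sketch): R's <code>mu</code>/<code>size</code> parameterization maps to scipy's as <code>n = size</code>, <code>p = size / (size + mu)</code>. The formulas in the question are the <code>alpha = 1/size</code> variant from the linked answer, applied to <code>size</code> directly. A stdlib-only check of the pmf:</p>

```python
# Evaluate the negative-binomial pmf via log-gamma (no scipy needed) to
# confirm the n = size, p = size/(size + mu) conversion matches R's dnbinom.
from math import exp, lgamma, log

def nbinom_pmf(k: int, n: float, p: float) -> float:
    return exp(lgamma(k + n) - lgamma(n) - lgamma(k + 1) + n * log(p) + k * log(1 - p))

mu, size = 15, 0.463965
n = size                      # NOT 1/size: R's size already is scipy's n
p = size / (size + mu)
print(round(nbinom_pmf(3, n, p), 3))  # 0.05, matching dnbinom(3, mu=15, size=0.463965)
```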
|
<python><r><statistics>
|
2024-03-22 00:56:11
| 1
| 318
|
Pacific Bird
|
78,203,369
| 9,582,542
|
Python Scrapy function that doesn't always work
|
<p>The script below works 90% of the time at collecting weather data. However, there are a few cases where it simply fails for no apparent reason, even though the HTML is consistent with the requests that succeed; at times the markup and the request are identical to a working case, yet it still fails.</p>
<pre><code>class NflweatherdataSpider(scrapy.Spider):
    name = 'NFLWeatherData'
    allowed_domains = ['nflweather.com']
    # start_urls = ['http://nflweather.com/']

    def __init__(self, Week='', Year='', Game='', **kwargs):
        self.start_urls = [f'https://nflweather.com/{Week}/{Year}/{Game}']  # py36
        self.Year = Year
        self.Game = Game
        super().__init__(**kwargs)
        print(self.start_urls)  # python3

    def parse(self, response):
        self.log(self.start_urls)
        # self.log(self.Year)
        # Extracting the content using css selectors
        game_boxes = response.css('div.game-box')
        for game_box in game_boxes:
            # Extracting date and time information
            Datetimes = game_box.css('.col-12 .fw-bold::text').get()
            # Extracting team information
            team_game_boxes = game_box.css('.team-game-box')
            awayTeams = team_game_boxes.css('.fw-bold::text').get()
            homeTeams = team_game_boxes.css('.fw-bold.ms-1::text').get()
            # Extracting temperature and probability information
            TempProbs = game_box.css('.col-md-4 .mx-2 span::text').get()
            # Extracting wind speed information
            windspeeds = game_box.css('.icon-weather + span::text').get()
            winddirection = game_box.css('.md-18 ::text').get()
            # Create a dictionary to store the scraped info
            scraped_info = {
                'Year': self.Year,
                'Game': self.Game,
                'Datetime': Datetimes.strip(),
                'awayTeam': awayTeams,
                'homeTeam': homeTeams,
                'TempProb': TempProbs,
                'windspeeds': windspeeds.strip(),
                'winddirection': winddirection.strip()
            }
            # Yield or give the scraped info to Scrapy
            yield scraped_info
</code></pre>
<p>These are the scrapy commands to run the crawler</p>
<pre><code>scrapy crawl NFLWeatherData -a Week=week -a Year=2012 -a Game=week-6 -o NFLWeather_2012_week_6.json
scrapy crawl NFLWeatherData -a Week=week -a Year=2012 -a Game=week-7 -o NFLWeather_2012_week_7.json
scrapy crawl NFLWeatherData -a Week=week -a Year=2012 -a Game=week-8 -o NFLWeather_2012_week_8.json
</code></pre>
<p>The week 6 crawl works perfectly, no issues.</p>
<p>The week 7 crawl returns nothing:</p>
<pre><code>ERROR: Spider error processing <GET https://nflweather.com/week/2012/week-7> (referer: None)
Traceback (most recent call last):
File "G:\ProgramFiles\MiniConda3\envs\WrkEnv\lib\site-packages\scrapy\utils\defer.py", line 279, in iter_errback
yield next(it)
</code></pre>
<p>The week 8 crawl retrieves 2 lines and errors out for the rest:</p>
<pre><code>ERROR: Spider error processing <GET https://nflweather.com/week/2012/week-8> (referer: None)
Traceback (most recent call last):
File "G:\ProgramFiles\MiniConda3\envs\WrkEnv\lib\site-packages\scrapy\utils\defer.py", line 279, in iter_errback
yield next(it)
</code></pre>
<p>Any idea why these crawls fail while the others have no issues?</p>
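<p>A guess worth testing (my sketch): the truncated tracebacks are consistent with an <code>AttributeError</code> from calling <code>.strip()</code> on the <code>None</code> that <code>.get()</code> returns when a selector misses for one game box. A small guard makes that failure mode harmless and visible:</p>

```python
# Tolerate selector misses: .get() yields None when nothing matches, and
# None.strip() raises, killing the whole parse callback for that page.
def clean(value, default=""):
    return value.strip() if value is not None else default

# in parse():
#   'Datetime': clean(Datetimes),
#   'windspeeds': clean(windspeeds),
#   'winddirection': clean(winddirection),
```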
|
<python><scrapy>
|
2024-03-21 23:55:47
| 1
| 690
|
Leo Torres
|
78,203,314
| 6,458,245
|
Avoid restarting jupyter notebook when encountering cuda out of memory exception?
|
<p>I am using PyTorch and Jupyter Notebook. Frequently I'll encounter CUDA out of memory and need to restart the notebook. How can I avoid restarting the whole notebook? I tried <code>del</code> on a few variables, but it didn't change anything.</p>
|
<python><jupyter-notebook><pytorch>
|
2024-03-21 23:32:00
| 1
| 2,356
|
JobHunter69
|
78,203,312
| 15,412,256
|
Polars map_batches UDF with Multi-processing
|
<p>I want to apply a <code>numba UDF</code>, which generates the same length vectors for each groups in <code>df</code>:</p>
<pre class="lang-py prettyprint-override"><code>import numba
import numpy as np
import polars as pl

df = pl.DataFrame(
{
"group": ["A", "A", "A", "B", "B"],
"index": [1, 3, 5, 1, 4],
}
)
@numba.jit(nopython=True)
def UDF(array: np.ndarray, threshold: int) -> np.ndarray:
result = np.zeros(array.shape[0])
accumulator = 0
for i, value in enumerate(array):
accumulator += value
if accumulator >= threshold:
result[i] = 1
accumulator = 0
return result
df.with_columns(
pl.col("index")
.map_batches(
lambda x: UDF(x.to_numpy(), 5)
)
.over("group")
.cast(pl.UInt8)
.alias("udf")
)
</code></pre>
<p>Inspired by <a href="https://stackoverflow.com/questions/74747889/polars-apply-performance-for-custom-functions">this post</a>, where a <code>multi-processing</code> approach has been introduced. However, in the case above, I am applying the UDF using an <code>over</code> window function. Is there an efficient way to <strong>parallelize</strong> the execution above?</p>
<p>expected output:</p>
<pre class="lang-py prettyprint-override"><code>shape: (5, 3)
┌───────┬───────┬─────┐
│ group ┆ index ┆ udf │
│ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ u8 │
╞═══════╪═══════╪═════╡
│ A ┆ 1 ┆ 0 │
│ A ┆ 3 ┆ 0 │
│ A ┆ 5 ┆ 1 │
│ B ┆ 1 ┆ 0 │
│ B ┆ 4 ┆ 1 │
└───────┴───────┴─────┘
</code></pre>
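<p>Worth noting: the accumulator UDF is inherently sequential within each group (each step depends on the previous one), so it cannot be rewritten as pure Polars expressions; any parallelism has to come from running different groups concurrently. A numba-free sketch of the same logic in plain NumPy, useful for verifying results before splitting groups across processes (the function name is my own):</p>

```python
import numpy as np

def threshold_flags(array: np.ndarray, threshold: int) -> np.ndarray:
    # Flag positions where the running sum reaches the threshold, then reset it.
    result = np.zeros(array.shape[0], dtype=np.uint8)
    accumulator = 0
    for i, value in enumerate(array):
        accumulator += value
        if accumulator >= threshold:
            result[i] = 1
            accumulator = 0
    return result

# Matches the expected output above: group A -> [0, 0, 1], group B -> [0, 1]
print(threshold_flags(np.array([1, 3, 5]), 5))  # [0 0 1]
print(threshold_flags(np.array([1, 4]), 5))     # [0 1]
```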
|
<python><parallel-processing><python-polars>
|
2024-03-21 23:31:50
| 1
| 649
|
Kevin Li
|
78,203,204
| 6,458,245
|
urlopen inconsistent API return format for no content?
|
<p>I'm using an api http endpoint service where I retrieve some information from their url:</p>
<pre><code>response = urlopen(url, cafile=certifi.where())
print(response.read().decode("utf-8"))
</code></pre>
<p>However, sometimes their service returns nothing and the code above prints either: '' or '[]'</p>
<p>The inconsistent returns caused me issues in my json parsing.</p>
<p>I am wondering whether this is expected behavior on the part of <code>urlopen</code>, or a bug in the HTTP endpoint I am using?</p>
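<p>For context, an empty body (<code>''</code>) and an empty JSON array (<code>'[]'</code>) are genuinely different responses: the first has no content at all (not valid JSON), the second is valid JSON. A defensive parse that normalizes both to "no results" might look like this (the fallback default is my own choice):</p>

```python
import json

def parse_body(text: str):
    # '' -> no content at all (json.loads would raise);
    # '[]' -> valid JSON empty list.
    # Normalize both to an empty list so downstream code sees one shape.
    if not text.strip():
        return []
    return json.loads(text)

print(parse_body(""))            # []
print(parse_body("[]"))          # []
print(parse_body('[{"a": 1}]'))  # [{'a': 1}]
```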
|
<python><url><urllib>
|
2024-03-21 22:53:12
| 0
| 2,356
|
JobHunter69
|
78,203,187
| 6,312,511
|
NBA_API: why is this Python code returning an empty dataframe?
|
<p>I am simply attempting to pull all of today's games; later I will merge team statistics onto it, but that's a different question.</p>
<p>My question is, why does the code below return an empty dataframe? I can print the results using a for loop -- that's how I got the desired output below -- but I cannot put them into a dataframe for some reason. What am I doing wrong?</p>
<pre><code>from nba_api.live.nba.endpoints import scoreboard
import pandas as pd
todayBoard = scoreboard.ScoreBoard()
ScoreBoardDate = todayBoard.score_board_date
games = todayBoard.games.get_dict()
todayGames = pd.DataFrame()
for game in games:
todayGames['GAME_ID']=game['gameId']
todayGames['AWAY']=game['awayTeam']['teamTricode']
todayGames['HOME']=game['homeTeam']['teamTricode']
todayGames
</code></pre>
<p>The expected output for 2024-03-21 should be:</p>
<pre><code>GAME_ID AWAY HOME
0022301004 NOP ORL
0022301005 SAC WAS
0022301006 CHI HOU
0022301007 BKN MIL
0022301008 UTA DAL
0022301009 NYK DEN
0022301010 ATL PHX
</code></pre>
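<p>As a side note on the pattern itself: assigning a scalar to a column of an empty DataFrame leaves it with zero rows, because the assignment broadcasts over an empty index — which is likely why the loop above produces an empty frame. Collecting row dicts first and building the frame once sidesteps that (sketch with made-up data in the question's shape):</p>

```python
import pandas as pd

games = [
    {"gameId": "0022301004", "awayTeam": {"teamTricode": "NOP"}, "homeTeam": {"teamTricode": "ORL"}},
    {"gameId": "0022301005", "awayTeam": {"teamTricode": "SAC"}, "homeTeam": {"teamTricode": "WAS"}},
]

# An empty frame stays empty under scalar column assignment:
empty = pd.DataFrame()
empty["GAME_ID"] = games[0]["gameId"]
print(len(empty))  # 0

# Build a list of row dicts, then construct the frame in one go:
rows = [
    {"GAME_ID": g["gameId"],
     "AWAY": g["awayTeam"]["teamTricode"],
     "HOME": g["homeTeam"]["teamTricode"]}
    for g in games
]
todayGames = pd.DataFrame(rows)
print(todayGames)
```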
|
<python><pandas><nba-api>
|
2024-03-21 22:47:32
| 1
| 1,447
|
mmyoung77
|
78,203,150
| 525,865
|
BeautifulSoup: iteration over 26 characters (from A to Z) fails: reducing the complexity to get a first insight into the dataset
|
<p>I have a list of insurers in Spain - collected in 26 rubrics - on a website. See the following:</p>
<p>Insurers - Spain:
the full list: <a href="https://www.unespa.es/en/directory" rel="nofollow noreferrer">https://www.unespa.es/en/directory</a></p>
<p>It is divided into 26 pages:
<a href="https://www.unespa.es/en/directory/#A" rel="nofollow noreferrer">https://www.unespa.es/en/directory/#A</a>
<a href="https://www.unespa.es/en/directory/#Z" rel="nofollow noreferrer">https://www.unespa.es/en/directory/#Z</a></p>
<p>The aim: I want to fetch the data from the pages with BS4 and requests, and finally save it into a dataframe.
Scraping the list from the website using BeautifulSoup (BS4) and requests in Python seems appropriate; I think we need the following steps:</p>
<p><strong>a.</strong> First, import the necessary libraries: BeautifulSoup, requests, and pandas.
<strong>b.</strong> Then use the requests library to get the HTML content of each page of interest, i.e. the A to Z pages.
<strong>c.</strong> Then use BeautifulSoup to parse the HTML content.
<strong>d.</strong> Subsequently, extract the relevant information (insurers' names) from the parsed HTML.
<strong>e.</strong> Finally, store the extracted data in a pandas DataFrame.</p>
<p>But this does not work - also not for the iteration from A to Z:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
# Function to scrape insurers from a given URL
def scrape_insurers(url):
response = requests.get(url)
if response.status_code == 200:
soup = BeautifulSoup(response.content, 'html.parser')
# Extracting insurer names
insurers = [insurer.text.strip() for insurer in soup.find_all('h3')]
return insurers
else:
print("Failed to retrieve data from", url)
return []
# Define the base URL
base_url = "https://www.unespa.es/en/directory/"
# List to store all insurers
all_insurers = []
# Loop through each page (A to Z)
for char in range(65, 91): # ASCII codes for A to Z
page_url = f"{base_url}#{chr(char)}"
insurers = scrape_insurers(page_url)
all_insurers.extend(insurers)
# Convert the list of insurers to a pandas DataFrame
df = pd.DataFrame({'Insurer': all_insurers})
# Display the DataFrame
print(df.head())
# Save DataFrame to a CSV file
df.to_csv('insurers_spain.csv', index=False)
</code></pre>
<p>....it fails with the following results:</p>
<pre><code>Failed to retrieve data from https://www.unespa.es/en/directory/#A
Failed to retrieve data from https://www.unespa.es/en/directory/#B
Failed to retrieve data from https://www.unespa.es/en/directory/#C
Failed to retrieve data from https://www.unespa.es/en/directory/#D
Failed to retrieve data from https://www.unespa.es/en/directory/#E
</code></pre>
<p>and so forth and so forth:</p>
<p>Well, I think it is easier to reduce the complexity in the first place.</p>
<p>I think it is better to take one single URL I want to visit and simply test what results the request gives back. Once that works, I can evaluate the response and use the BeautifulSoup library to check for specific fields. I want to avoid doing three things (each of which can go terribly wrong) in one step.</p>
<p>So I do it like this for the first character, A:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
# Function to scrape insurers from a given URL
def scrape_insurers(url):
response = requests.get(url)
if response.status_code == 200:
soup = BeautifulSoup(response.content, 'html.parser')
# Extracting insurer names
insurers = [insurer.text.strip() for insurer in soup.find_all('h3')]
return insurers
else:
print("Failed to retrieve data from", url)
return []
# Define the base URL
base_url = "https://www.unespa.es/en/directory/#"
# Define the character we want to fetch data for
char = 'A'
# Construct the URL for the specified character
url = base_url + char
# Fetch and print data for the specified character
insurers_char = scrape_insurers(url)
print(f"Insurers for character '{char}':")
print(insurers_char)
</code></pre>
<p>but see the Output here:</p>
<pre><code>Failed to retrieve data from https://www.unespa.es/en/directory/#A
Insurers for character 'A':
[]
</code></pre>
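<p>One likely reason every per-letter request behaves identically: the <code>#A</code> part of the URL is a fragment, which browsers handle client-side and HTTP clients never send to the server — all 26 requests actually fetch the same page. A stdlib sketch showing that the fragment is stripped from what goes over the wire:</p>

```python
from urllib.parse import urlparse, urldefrag

urls = [f"https://www.unespa.es/en/directory/#{chr(c)}" for c in range(65, 91)]

# The URL actually requested from the server is identical for all 26 variants:
requested = {urldefrag(u).url for u in urls}
print(requested)  # {'https://www.unespa.es/en/directory/'}

# The letter only survives as the client-side fragment:
print(urlparse(urls[0]).fragment)  # 'A'
```

<p>The "Failed to retrieve data" message additionally suggests a non-200 status for the base page itself (some sites reject requests without a browser-like <code>User-Agent</code> header), which is a separate issue from the fragment.</p>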
|
<python><dataframe><web-scraping><beautifulsoup><request>
|
2024-03-21 22:35:36
| 1
| 1,223
|
zero
|
78,203,142
| 11,248,638
|
How to Populate Null Values in Columns After Outer Join in Python Pandas
|
<p>My goal is to join two dataframes from different sources in Python using Pandas and then fill null values in columns with corresponding values in the same column.</p>
<p>The dataframes have similar columns, but some text/object columns may have different values due to variations in the data sources. For instance, the "Name" column in one dataframe might contain "Nick M." while in the other it's "Nick Maison". However, certain columns such as "Date" (formatted as YYYY-MM-DD), "Order ID" (numeric), and "Employee ID" (numeric) have consistent values across both dataframes (we join dataframes based on them). Worth mentioning, some columns may not even exist in one or another dataframe, but should also be filled.</p>
<pre><code>import pandas as pd
# Create DataFrame df1
df1_data = {
'Date (df1)': ['2024-03-18', '2024-03-18', '2024-03-18', '2024-03-18', '2024-03-18', "2024-03-19", "2024-03-19"],
'Order Id (df1)': [1, 2, 3, 4, 5, 1, 2],
'Employee Id (df1)': [825, 825, 825, 825, 825, 825, 825],
'Name (df1)': ['Nick M.', 'Nick M.', 'Nick M.', 'Nick M.', 'Nick M.', 'Nick M.', 'Nick M.'],
'Region (df1)': ['SD', 'SD', 'SD', 'SD', 'SD', 'SD', 'SD'],
'Value (df1)': [25, 37, 18, 24, 56, 77, 25]
}
df1 = pd.DataFrame(df1_data)
# Create DataFrame df2
df2_data = {
'Date (df2)': ['2024-03-18', '2024-03-18', '2024-03-18', "2024-03-19", "2024-03-19", "2024-03-19", "2024-03-19"],
'Order Id (df2)': [1, 2, 3, 1, 2, 3, 4],
'Employee Id (df2)': [825, 825, 825, 825, 825, 825, 825],
'Name (df2)': ['Nick Mason', 'Nick Mason', 'Nick Mason', 'Nick Mason', 'Nick Mason', 'Nick Mason', 'Nick Mason'],
'Region (df2)': ['San Diego', 'San Diego', 'San Diego', 'San Diego', 'San Diego', 'San Diego', 'San Diego'],
'Value (df2)': [25, 37, 19, 22, 17, 9, 76]
}
df2 = pd.DataFrame(df2_data)
# Combine DataFrames
outer_joined_df = pd.merge(
df1,
df2,
how = 'outer',
left_on = ['Date (df1)', 'Employee Id (df1)', "Order Id (df1)"],
right_on = ['Date (df2)', 'Employee Id (df2)', "Order Id (df2)"]
)
# Display the result
outer_joined_df
</code></pre>
<p>Here is the output of joined dataframes. Null values colored in yellow should be filled.</p>
<p><a href="https://i.sstatic.net/fQ3jR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fQ3jR.png" alt="enter image description here" /></a></p>
<p>I tried below code and it works for Date, Order Id and Employee Id columns as expected (because they are the same across two dataframes and we join based on them), but not for other, because they may have different values. Basically, the logic in this code is if Null, then fill with values from the same row in specified column. However, since values may be different, filled column becomes messy, because it has multiple variations of the same value.</p>
<pre><code>outer_joined_df['Date (df1)'] = outer_joined_df['Date (df1)'].combine_first(outer_joined_df['Date (df2)'])
outer_joined_df['Date (df2)'] = outer_joined_df['Date (df2)'].combine_first(outer_joined_df['Date (df1)'])
outer_joined_df['Order Id (df1)'] = outer_joined_df['Order Id (df1)'].combine_first(outer_joined_df['Order Id (df2)'])
outer_joined_df['Order Id (df2)'] = outer_joined_df['Order Id (df2)'].combine_first(outer_joined_df['Order Id (df1)'])
outer_joined_df['Employee Id (df1)'] = outer_joined_df['Employee Id (df1)'].combine_first(outer_joined_df['Employee Id (df2)'])
outer_joined_df['Employee Id (df2)'] = outer_joined_df['Employee Id (df2)'].combine_first(outer_joined_df['Employee Id (df1)'])
outer_joined_df['Name (df1)'] = outer_joined_df['Name (df1)'].combine_first(outer_joined_df['Name (df2)'])
outer_joined_df['Name (df2)'] = outer_joined_df['Name (df2)'].combine_first(outer_joined_df['Name (df1)'])
outer_joined_df['Region (df1)'] = outer_joined_df['Region (df1)'].combine_first(outer_joined_df['Region (df2)'])
outer_joined_df['Region (df2)'] = outer_joined_df['Region (df2)'].combine_first(outer_joined_df['Region (df1)'])
</code></pre>
<p>Here is the output:</p>
<p><a href="https://i.sstatic.net/ZsWG7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZsWG7.png" alt="enter image description here" /></a></p>
<p>As you can see, it populated the data, but not the way I want.</p>
<p>Output I need:</p>
<p><a href="https://i.sstatic.net/AKTA6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AKTA6.png" alt="enter image description here" /></a></p>
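<p>One way to get the desired output is to coalesce the join keys first, then fill each descriptive column from a lookup keyed on something stable across both sources (Employee Id here — an assumption that it uniquely identifies a person). That way every row uses one consistent spelling per source column. A sketch on a tiny made-up merge result:</p>

```python
import pandas as pd

# Miniature stand-in for the outer-join result in the question.
merged = pd.DataFrame({
    "Employee Id (df1)": [825, None, 825],
    "Employee Id (df2)": [825, 825, None],
    "Name (df1)": ["Nick M.", None, "Nick M."],
    "Name (df2)": ["Nick Mason", "Nick Mason", None],
})

# 1. Coalesce the key columns (they agree wherever both exist):
merged["Employee Id"] = merged["Employee Id (df1)"].combine_first(merged["Employee Id (df2)"])

# 2. Build one lookup per source column, keyed on the coalesced key, and fill
#    nulls from it - so "Name (df1)" is always filled with df1's spelling:
for col in ["Name (df1)", "Name (df2)"]:
    lookup = (merged.dropna(subset=[col])
                    .drop_duplicates("Employee Id")
                    .set_index("Employee Id")[col])
    merged[col] = merged[col].fillna(merged["Employee Id"].map(lookup))

print(merged[["Employee Id", "Name (df1)", "Name (df2)"]])
```

<p>The same loop extends to Region and any other descriptive column; columns missing entirely from one source can be created empty first and filled the same way.</p>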
|
<python><pandas><join><outer-join>
|
2024-03-21 22:33:05
| 1
| 401
|
Yara1994
|
78,203,113
| 3,471,286
|
Shallow copy: is the Python.org documentation wrong?
|
<p>Is the official documentation on Python.org wrong, or did I interpret something wrongly?</p>
<p>Near the end of the <a href="https://docs.python.org/3/tutorial/introduction.html#lists" rel="nofollow noreferrer">Lists Section</a> of the documentation in "An Informal Introduction to Python", one can find the following description about copying lists:</p>
<blockquote>
<p>Simple assignment in Python never copies data. When you assign a list to a variable, the variable refers to the existing list. Any changes you make to the list through one variable will be seen through all other variables that refer to it.:</p>
<pre class="lang-py prettyprint-override"><code>>>> rgb = ["Red", "Green", "Blue"]
>>> rgba = rgb
>>> id(rgb) == id(rgba)  # they reference the same object
True
>>> rgba.append("Alph")
>>> rgb
["Red", "Green", "Blue", "Alph"]
</code></pre>
</blockquote>
<p>So I understand that the new list is a reference to the original list. But immediately then, the documentation states:</p>
<blockquote>
<p>All slice operations return a new list containing the requested elements. This means that the following slice returns a <a href="https://docs.python.org/3/library/copy.html#shallow-vs-deep-copy" rel="nofollow noreferrer">shallow copy</a> of the list:</p>
<pre class="lang-py prettyprint-override"><code>>>> correct_rgba = rgba[:]
>>> correct_rgba[-1] = "Alpha"
>>> correct_rgba
["Red", "Green", "Blue", "Alpha"]
>>> rgba
["Red", "Green", "Blue", "Alph"]
</code></pre>
</blockquote>
<p>So, if I understand correctly:</p>
<ul>
<li><code>rgba[:]</code> is the slice operation</li>
<li>this makes a <em>shallow copy</em> of the original list</li>
<li>it copies data from the original, into a new list that isn't a reference to the original</li>
</ul>
<p>But:</p>
<ul>
<li>after reading about the <a href="https://docs.python.org/3/library/copy.html#shallow-vs-deep-copy" rel="nofollow noreferrer">difference between <em>shallow</em> and <em>deep</em> copies</a>, I understand that <em>shallow copies</em> are references to their original, whilst <em>deep copies</em> are independent (unreferenced) copies</li>
<li>in the example above, the documentation creates a <em>deep copy</em> but mentions it as a <em>shallow copy</em>?</li>
</ul>
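<p>The two notions can be reconciled with a quick experiment: a slice does create a new outer list (so it is not a mere reference, unlike plain assignment), but the elements inside it are shared — and that sharing is what makes it <em>shallow</em> rather than <em>deep</em>. A flat list of strings simply never exposes the difference:</p>

```python
import copy

nested = [["Red"], ["Green"]]

sliced = nested[:]            # shallow: new outer list, shared inner lists
deep = copy.deepcopy(nested)  # deep: inner lists copied too

print(sliced is nested)        # False: the slice is a new list
print(sliced[0] is nested[0])  # True: ...but its elements are shared
print(deep[0] is nested[0])    # False: deepcopy duplicated them

nested[0].append("mutated")
print(sliced[0])  # ['Red', 'mutated'] - visible through the shallow copy
print(deep[0])    # ['Red'] - the deep copy is unaffected
```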
|
<python>
|
2024-03-21 22:23:42
| 2
| 731
|
Gui Imamura
|
78,203,093
| 11,796,910
|
The --memory-init-file is no longer supported
|
<p>I started using Emscripten with Python to deploy my game to the web, and it looks
like the flag <code>--memory-init-file</code> is no longer valid and breaks the build.</p>
<p>The GitHub repo doesn't provide any older releases of the code, so I cannot use an older version of the code and carry on.</p>
<pre><code>mark@eli:~/deploy_test/panda3d$ python3.8 makepanda/makepanda.py --nothing --use-python --use-vorbis --use-bullet --use-zlib --use-freetype --use-harfbuzz --use-openal --no-png --use-direct --use-gles2 --optimize 4 --static --target emscripten --threads 4
Version: 1.11.0
Platform: emscripten-wasm32
Using Python 3.8
Target OS: emscripten
Target arch: wasm32
shared:INFO: (Emscripten: Running sanity checks)
Generating dependencies...
WARNING: file depends on Python but is not in an ABI-specific directory: built/bin/deploy-stub.js
[T1] Building C++ object built/tmp/interrogate_composite1.o
[T2] Linking executable built/bin/interrogate_module.js
[T3] Building C++ object built/tmp/parse_file_parse_file.o
[T4] Building C++ object built/tmp/test_interrogate_test_interrogate.o
em++: error: --memory-init-file is no longer supported
The following command returned a non-zero value: em++ -o built/bin/interrogate_module.js -Lbuilt/lib -Lbuilt/tmp built/tmp/interrogate_module_interrogate_module.o built/tmp/interrogate_module_preamble_python_native.o built/tmp/libp3cppParser.a built/lib/libp3dtool.a built/lib/libp3dtoolconfig.a built/lib/libp3interrogatedb.a -s WARN_ON_UNDEFINED_SYMBOLS=1 --memory-init-file 0 -s EXIT_RUNTIME=1 -O3
Storing dependency cache.
Elapsed Time: 2 sec
Build process aborting.
Build terminated.
</code></pre>
<p>Can you please advise on how can I go about troubleshooting this ?</p>
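<p>Until makepanda is updated upstream, one workaround is to strip the obsolete flag out of the link command before it reaches <code>em++</code> (equivalently, delete the line in <code>makepanda.py</code> that appends it). A hedged Python sketch of the string surgery; the regex and the flag's single-argument form are assumptions based on the command shown in the log:</p>

```python
import re

def strip_memory_init_flag(cmd: str) -> str:
    # Remove "--memory-init-file <value>" wherever it appears in a command line.
    return re.sub(r"\s*--memory-init-file\s+\S+", "", cmd)

cmd = ("em++ -o built/bin/interrogate_module.js -s WARN_ON_UNDEFINED_SYMBOLS=1 "
       "--memory-init-file 0 -s EXIT_RUNTIME=1 -O3")
print(strip_memory_init_flag(cmd))
# em++ -o built/bin/interrogate_module.js -s WARN_ON_UNDEFINED_SYMBOLS=1 -s EXIT_RUNTIME=1 -O3
```

<p>Searching <code>makepanda.py</code> for the string <code>memory-init-file</code> should locate the exact place where the flag gets added.</p>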
|
<python><emscripten>
|
2024-03-21 22:17:41
| 1
| 1,559
|
Mark
|
78,202,830
| 9,518,890
|
Pydantic validation fails when importing module from a package when running pytest
|
<p>I am trying to understand why pydantic validation fails here. I was running into this in larger codebase and was able to condense it into small example.</p>
<p>there is <code>misc_build.py</code> where pydantic model is defined</p>
<p><code>metadata_manager/misc_build.py</code></p>
<pre><code>from pydantic import BaseModel
from md_enums import MiscBuildType
class VersionInfo(BaseModel):
build_type: MiscBuildType
</code></pre>
<p>then there is <code>md_enums.py</code> where the Enum is defined</p>
<p><code>metadata_manager/md_enums.py</code></p>
<pre><code>import enum

class MiscBuildType(enum.Enum):
PROD = "PROD"
DEV = "DEV"
</code></pre>
<p>Finally, there is the test file
<code>metadata_manager/tests/test_build.py</code></p>
<pre><code>from misc_build import VersionInfo
import md_enums
import metadata_manager.md_enums
from md_enums import MiscBuildType as BuildType1
from metadata_manager.md_enums import MiscBuildType as BuildType2
def test_build():
assert md_enums.__file__ == metadata_manager.md_enums.__file__
print(
f"imported from the same file: {md_enums.__file__ == metadata_manager.md_enums.__file__}"
)
version_info = VersionInfo(build_type=BuildType1.DEV) # this works
print(f"version info 1: {version_info}")
version_info = VersionInfo(build_type=BuildType2.DEV) # this fails
print(f"version info 2: {version_info}")
</code></pre>
<p>The first instance of <code>VersionInfo</code> is created successfully but the second one fails with this message when running <code>pytest</code>.</p>
<pre><code>E pydantic_core._pydantic_core.ValidationError: 1 validation error for VersionInfo
E build_type
E Input should be 'PROD' or 'DEV' [type=enum, input_value=<MiscBuildType.DEV: 'DEV'>, input_type=MiscBuildType]
</code></pre>
<p>It seems that pydantic sees those as two different enums even though they are imported from the same file.</p>
<p>So my question is - why is this happening when the module is imported as <code>metadata_manager.md_enums</code>?</p>
<p>By the way, there are <code>conftest.py</code> files in the <code>metadata_manager</code> directory as well as in its parent folder, but there is nothing in them that seems likely to cause this.</p>
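<p>This looks like the classic dual-import problem: when both the project root and the <code>metadata_manager</code> directory end up on <code>sys.path</code> (which is exactly what pytest's <code>conftest.py</code>-based rootdir handling tends to do), the same file can be imported under two names, producing two separate module objects and therefore two distinct enum classes that fail <code>isinstance</code> checks. A self-contained reproduction using throwaway temp files, no pytest needed:</p>

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package mirroring the layout in the question.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "metadata_manager")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "md_enums.py"), "w") as f:
    f.write("import enum\nclass MiscBuildType(enum.Enum):\n    DEV = 'DEV'\n")

# Both the package dir and its parent on sys.path -> two valid import spellings.
sys.path.insert(0, root)
sys.path.insert(0, pkg)

m1 = importlib.import_module("md_enums")
m2 = importlib.import_module("metadata_manager.md_enums")

print(m1.__file__ == m2.__file__)            # True: same file on disk
print(m1.MiscBuildType is m2.MiscBuildType)  # False: two distinct classes
print(isinstance(m1.MiscBuildType.DEV, m2.MiscBuildType))  # False
```

<p>Pydantic's enum validation relies on an <code>isinstance</code>-style check against the class it was declared with, so the second class's members are rejected. The usual fix is to settle on one canonical import path (e.g. always <code>metadata_manager.md_enums</code>) throughout the codebase and tests.</p>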
|
<python><pytest><pydantic>
|
2024-03-21 21:17:02
| 0
| 14,592
|
Matus Dubrava
|
78,202,811
| 726,730
|
PyQt5 - Using multiprocessing for matplotlib purposes
|
<p>With this code I am trying to plot a pydub.AudioSegment signal which has a duration of 125 ms.
The plot time window (x-axis) is 3 s.</p>
<p>code:</p>
<pre class="lang-py prettyprint-override"><code>import time
from PyQt5.QtCore import pyqtSignal, QThread
from multiprocessing import Process, Queue, Pipe
from datetime import datetime, timedelta
import traceback
import matplotlib.pyplot as plt
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.dates import num2date
from matplotlib.ticker import FuncFormatter
import numpy as np
class Final_Slice_Plot:
def __init__(self, main_self):
self.main_self = main_self
# chart
self.chart = Canvas(self)
self.chart.ax.set_facecolor((1, 1, 1))
self.chart.ax.tick_params(labelcolor='white')
# create process
self.process_number = 94
self.final_slice_plot_mother_pipe, self.final_slice_plot_child_pipe = Pipe()
self.final_slice_plot_queue = Queue()
self.final_slice_plot_emitter = Final_Slice_Plot_Emitter(self.final_slice_plot_mother_pipe)
self.final_slice_plot_emitter.error_signal.connect(lambda error_message: print(error_message))
self.final_slice_plot_emitter.plot_data_signal.connect(lambda plot_data: self.plot(plot_data))
self.final_slice_plot_emitter.start()
self.final_slice_plot_child_process = Final_Slice_Plot_Child_Proc(self.final_slice_plot_child_pipe, self.final_slice_plot_queue)
self.final_slice_plot_child_process.start()
counter = 0
for process in self.main_self.manage_processes_instance.processes:
if "process_number" in process:
if process["process_number"] == self.process_number:
self.main_self.manage_processes_instance.processes[counter][
"pid"] = self.final_slice_plot_child_process.pid
self.main_self.manage_processes_instance.processes[counter]["start_datetime"] = datetime.now()
self.main_self.manage_processes_instance.processes[counter]["status"] = "in_progress"
counter += 1
if self.main_self.manage_proccesses_window_is_open:
self.main_self.manage_proccesses_window_support_code.manage_proccesses_queue.put(
{"type": "table-update", "processes": self.main_self.manage_processes_instance.processes})
def close(self):
try:
try:
if self.final_slice_plot_child_process is not None:
self.final_slice_plot_child_process.terminate()
except:
print(traceback.format_exc())
try:
if self.final_slice_plot_emitter is not None:
self.final_slice_plot_emitter.terminate()
except:
print(traceback.format_exc())
counter = 0
for process in self.main_self.manage_processes_instance.processes:
if "process_number" in process:
if process["process_number"] == self.process_number:
self.main_self.manage_processes_instance.processes[counter]["pid"] = None
self.main_self.manage_processes_instance.processes[counter]["start_datetime"] = None
self.main_self.manage_processes_instance.processes[counter]["status"] = "stopped"
self.main_self.manage_processes_instance.processes[counter]["cpu"] = 0
self.main_self.manage_processes_instance.processes[counter]["ram"] = 0
counter += 1
if self.main_self.manage_proccesses_window_is_open:
self.main_self.manage_proccesses_window_support_code.manage_proccesses_queue.put(
{"type": "table-update", "processes": self.main_self.manage_processes_instance.processes})
self.clear_plot()
except:
error_message = traceback.format_exc()
print(error_message)
# signal for plot_data_signal
def plot(self, plot_data):
try:
x_vals = plot_data[0]
y_vals = plot_data[1]
self.chart.li.set_xdata(x_vals)
self.chart.li.set_ydata(y_vals)
x_ticks = []
if (len(x_vals) > 0):
for i in range(499, 2500 + 1, 1000):
tick = x_vals[0] + timedelta(milliseconds=i)
x_ticks.append(tick)
plt.xticks(x_ticks)
self.chart.ax.set_xlim(x_vals[0], x_vals[0] + timedelta(milliseconds=3000))
self.chart.ax.xaxis.set_major_formatter(FuncFormatter(self.date_formatter_1))
self.chart.fig.canvas.draw()
self.chart.fig.canvas.flush_events()
except:
error_message = str(traceback.format_exc())
print(error_message)
# Clear the matplotlib plot
def clear_plot(self):
try:
x_vals = [datetime.now()]
y_vals = [0]
self.chart.li.set_xdata(x_vals)
self.chart.li.set_ydata(y_vals)
x_ticks = []
if (len(x_vals) > 0):
for i in range(499, 2500 + 1, 1000):
tick = x_vals[0] + timedelta(milliseconds=i)
x_ticks.append(tick)
plt.xticks(x_ticks)
self.chart.ax.set_xlim(x_vals[0], x_vals[0] + timedelta(milliseconds=3000))
self.chart.ax.xaxis.set_major_formatter(FuncFormatter(self.date_formatter_1))
self.chart.fig.canvas.draw()
self.chart.fig.canvas.flush_events()
except Exception as e:
error_message = str(traceback.format_exc())
print(error_message)
# Formats the x-axis of matplotlib plot
def date_formatter_1(self, a, b):
try:
t = num2date(a)
ms = str(t.microsecond)[:1]
res = f"{t.hour:02}:{t.minute:02}:{t.second:02}.{ms}"
# res = f"{t.hour:02}:{t.minute:02}:{t.second:02}"
return res
except Exception as e:
error_message = str(traceback.format_exc())
print(error_message)
class Final_Slice_Plot_Emitter(QThread):
try:
error_signal = pyqtSignal(str)
plot_data_signal = pyqtSignal(list)
except:
pass
def __init__(self, from_process: Pipe):
try:
super().__init__()
self.data_from_process = from_process
except:
pass
def run(self):
try:
while True:
'''if self.data_from_process.poll():
data = self.data_from_process.recv()
else:
time.sleep(0.1)
continue
'''
data = self.data_from_process.recv()
if data["type"] == "error":
self.error_signal.emit(data["error_message"])
elif data["type"]=="plot_data":
self.plot_data_signal.emit(data["plot_data"])
except:
error_message = traceback.format_exc()
self.error_signal.emit(error_message)
class Final_Slice_Plot_Child_Proc(Process):
def __init__(self, to_emitter, from_mother):
try:
super().__init__()
self.daemon = False
self.to_emitter = to_emitter
self.data_from_mother = from_mother
except:
try:
error_message = str(traceback.format_exc())
to_emitter.send({"type": "error", "error_message": error_message})
except:
pass
def run(self):
try:
self.TIME_WINDOW = 3000
self.chunk_number = 0
self.current_duration_milliseconds = 0
self.now = datetime.now()
self.x_vals = np.array([])
self.y_vals = np.array([])
while(True):
data = self.data_from_mother.get()
if data["type"] == "slice":
slice = data["slice"]
chunk_time = len(slice)
samples = slice.get_array_of_samples()
left_samples = samples[::2]
right_samples = samples[1::2]
left_audio_data = np.frombuffer(left_samples, np.int16)[::128] # down sampling
right_audio_data = np.frombuffer(right_samples, np.int16)[::128] # down sampling
audio_data = np.vstack((left_audio_data, right_audio_data)).ravel('F')
time_data = np.array([])
for i in range(0, len(audio_data)):
time_data = np.append(time_data, self.now)
self.now = self.now + timedelta(milliseconds=chunk_time / len(audio_data))
self.x_vals = np.concatenate((self.x_vals, time_data))
self.y_vals = np.concatenate((self.y_vals, audio_data))
if (self.x_vals.size > audio_data.size * (self.TIME_WINDOW / chunk_time)):
self.x_vals = self.x_vals[audio_data.size:]
self.y_vals = self.y_vals[audio_data.size:]
plot_data_values = [self.x_vals, self.y_vals]
self.to_emitter.send({"type": "plot_data", "plot_data": plot_data_values})
self.now = datetime.now()
self.chunk_number += 1
self.current_duration_milliseconds += chunk_time
except:
error_message = str(traceback.format_exc())
self.to_emitter.send({"type": "error", "error_message": error_message})
class Canvas(FigureCanvas):
def __init__(self, parent):
try:
self.fig, self.ax = plt.subplots(figsize=(5, 4), dpi=200)
super().__init__(self.fig)
self.fig.patch.set_facecolor((6 / 255, 21 / 255, 154 / 255))
self.ax.set_position([0., 0, 1., 0.8])
self.ax.xaxis.tick_top()
self.ax.tick_params(axis='both', which='major', pad=1, length=2.4, width=0.5, color=(1, 1, 1))
parent.main_self.ui.verticalLayout_3.addWidget(self)
self.now = datetime.now()
self.chart_stop = self.now + timedelta(milliseconds=3000)
plt.cla()
plt.gca().xaxis.set_major_formatter(FuncFormatter(parent.date_formatter_1))
plt.xticks(fontsize=3)
self.ax.grid(False)
self.ax.set_ylim(-32768, 32768)
x_ticks = []
for i in range(499, 2500 + 1, 1000):
tick = self.now + timedelta(milliseconds=i)
x_ticks.append(tick)
plt.xticks(x_ticks)
self.ax.set_xlim(self.now, self.now + timedelta(milliseconds=3000))
self.li, = self.ax.plot([self.now, self.now + timedelta(milliseconds=3000)], [0, 0], color=(0, 1, 0.29),
linestyle='solid', marker=",")
self.show()
except:
error_message = traceback.format_exc()
print(error_message)
</code></pre>
<p>Error: <strong>Sometimes</strong> when I close the window, the application doesn't exit until a hard stop from the PyCharm stop button (like Ctrl+C).</p>
<p>What am I doing wrong?</p>
<p>This is how the application closes:</p>
<pre class="lang-py prettyprint-override"><code> def closeEvent(self,event):
try:
self.speackers_deck_instance.close()
self.final_slice_instance.close()
self.final_slice_plot_instance.close()
event.accept()
except Exception as e:
print(traceback.format_exc())
sys.exit()
</code></pre>
|
<python><pyqt5><multiprocessing><qthread>
|
2024-03-21 21:10:58
| 0
| 2,427
|
Chris P
|
78,202,760
| 15,412,256
|
Polars Groupby Describe Extension
|
<p><code>df</code> is a demo Polars DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

df = pl.DataFrame(
{
"groups": ["A", "A", "A", "B", "B", "B"],
"values": [1, 2, 3, 4, 5, 6],
}
)
</code></pre>
<p>The current <code>group_by.agg()</code> approach is a bit inconvenient for creating descriptive statistics:</p>
<pre class="lang-py prettyprint-override"><code>print(
df.group_by("groups").agg(
pl.len().alias("count"),
pl.col("values").mean().alias("mean"),
pl.col("values").std().alias("std"),
pl.col("values").min().alias("min"),
pl.col("values").quantile(0.25).alias("25%"),
pl.col("values").quantile(0.5).alias("50%"),
pl.col("values").quantile(0.75).alias("75%"),
pl.col("values").max().alias("max"),
pl.col("values").skew().alias("skew"),
pl.col("values").kurtosis().alias("kurtosis"),
)
)
out:
shape: (2, 11)
┌────────┬───────┬──────┬─────┬───┬─────┬─────┬──────┬──────────┐
│ groups ┆ count ┆ mean ┆ std ┆ … ┆ 75% ┆ max ┆ skew ┆ kurtosis │
│ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ u32 ┆ f64 ┆ f64 ┆ ┆ f64 ┆ i64 ┆ f64 ┆ f64 │
╞════════╪═══════╪══════╪═════╪═══╪═════╪═════╪══════╪══════════╡
│ B ┆ 3 ┆ 5.0 ┆ 1.0 ┆ … ┆ 6.0 ┆ 6 ┆ 0.0 ┆ -1.5 │
│ A ┆ 3 ┆ 2.0 ┆ 1.0 ┆ … ┆ 3.0 ┆ 3 ┆ 0.0 ┆ -1.5 │
└────────┴───────┴──────┴─────┴───┴─────┴─────┴──────┴──────────┘
</code></pre>
<p>I want to write a customized <code>group_by</code> extension module that allows me to achieve the same results by calling:</p>
<pre class="lang-py prettyprint-override"><code>df.describe(by="groups", percentiles=[xxx], skew=True, kurt=True)
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>df.group_by("groups").describe(percentiles=....)
</code></pre>
|
<python><python-polars>
|
2024-03-21 20:58:49
| 1
| 649
|
Kevin Li
|
78,202,730
| 11,318,930
|
polars: efficient way to apply function to filter column of strings
|
<p>I have a column of long strings (like sentences) on which I want to do the following:</p>
<ol>
<li>replace certain characters</li>
<li>create a list of the remaining strings</li>
<li>if a string is all text see whether it is in a dictionary and if so keep it</li>
<li>if a string is all numeric keep it</li>
<li>if a string is a mix of numeric/text, find ratio of numbers to letters and keep if above a threshold</li>
</ol>
<p>I currently do this as follows:</p>
<pre><code> for memo_field in self.memo_columns:
data = data.with_columns(
pl.col(memo_field).map_elements(
lambda x: self.filter_field(text=x, word_dict=word_dict))
)
</code></pre>
<p>The filter_field method uses plain python, so:</p>
<ul>
<li><code>text_sani = re.sub(r'[^a-zA-Z0-9\s\_\-\%]', ' ', text)</code> to replace</li>
<li><code>text_sani = text_sani.split(' ')</code> to split</li>
<li><code>len(re.findall(r'[A-Za-z]', x))</code> to find num letters for each element in text_sani list (similar for num digits) and ratio is difference divided by overall num characters</li>
<li>list comprehension and <code>if</code> to filter list of words</li>
</ul>
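<p>Before porting this to Polars expressions, it helps to pin the ratio rule down as a tiny pure function with known answers (stdlib only; the same character classes as in <code>filter_field</code>):</p>

```python
import re

def letter_digit_ratio(word: str):
    # (letters - digits) / (letters + digits); None when neither occurs.
    letters = len(re.findall(r"[A-Za-z]", word))
    digits = len(re.findall(r"[0-9_\/]", word))
    if letters + digits == 0:
        return None
    return (letters - digits) / (letters + digits)

print(letter_digit_ratio("REG"))   # 1.0   all letters
print(letter_digit_ratio("6079"))  # -1.0  all digits
print(letter_digit_ratio("ab12"))  # 0.0   even mix
print(letter_digit_ratio("%"))     # None  neither letters nor digits
```

<p>Having these fixed points makes it easy to check that any expression-based rewrite (e.g. via <code>str.count_matches</code> over the split words) reproduces the original filtering behavior.</p>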
<p>It actually isn't too bad, 128M rows takes about 10 minutes. Unfortunately, future files will be much bigger. On a ~300M row file this approach gradually increases memory consumption until the OS (Ubuntu) kills the process. Also, all processing seems to take place on a single core.</p>
<p>I have started trying to use the Polars string expressions. <strong>Code and a toy example are provided below</strong>.</p>
<p>At this point it looks like my only option is a function call to do the rest. My questions are:</p>
<ol>
<li>in my original approach is it normal that memory consumption grows? Does <code>map_elements</code> create a copy of the original series and so consumes more memory?</li>
<li>is my original approach correct or is there a better way eg. I have just started reading about <code>struct</code> in Polars?</li>
<li>is it possible to do what I want using <em>just</em> Polars expressions?</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>The code examples in the answers from @Hericks and @ΩΠΟΚΕΚΡΥΜΜΕΝΟΣ were applied and largely addressed my third question. Implementing the Polars expressions greatly reduced run time, with two observations:</p>
<ol>
<li>the complexity of the <code>memo</code> fields in my use-case greatly affect the run time. The key challenge is the look up of items in the dictionary; a large dictionary and many valid words in the <code>memo</code> field can severely affect run time; and</li>
<li>I experienced many seg fault errors when saving in <code>.parquet</code> format when I used <code>pl.DataFrame</code>. When using <code>pl.LazyFrame</code> and <code>sink_parquet</code> there were no errors but run time was greatly extended (drives are NVME SSD at 2000MB/s)</li>
</ol>
<p><strong>EXAMPLE CODE/DATA:</strong></p>
<p><code>Toy data:</code></p>
<pre><code>temp = pl.DataFrame({"foo": ['COOOPS.autom.SAPF124',
'OSS REEE PAAA comp. BEEE atm 6079 19000000070 04-04-2023',
'ABCD 600000000397/7667896-6/REG.REF.REE PREPREO/HMO',
'OSS REFF pagopago cost. Becf atm 9682 50012345726 10-04-2023']
})
</code></pre>
<p><code>Code Functions:</code></p>
<pre><code>def num_dec(x):
return len(re.findall(r'[0-9_\/]', x))
def num_letter(x):
return len(re.findall(r'[A-Za-z]', x))
def letter_dec_ratio(x):
if len(x) == 0:
return None
nl = num_letter(x)
nd = num_dec(x)
if (nl + nd) == 0:
return None
ratio = (nl - nd)/(nl + nd)
return ratio
def filter_field(text=None, word_dict=None):
if type(text) is not str or word_dict is None:
return 'no memo and/or dictionary'
if len(text) > 100:
text = text[0:101]
print("TEXT: ",text)
text_sani = re.sub(r'[^a-zA-Z0-9\s\_\-\%]', ' ', text) # parse by replacing most artifacts and symbols with space
words = text_sani.split(' ') # create words separated by spaces
print("WORDS: ",words)
kept = []
ratios = [letter_dec_ratio(w) for w in words]
[kept.append(w.lower()) for i, w in enumerate(words) if ratios[i] is not None and ((ratios[i] == -1 or (-0.7 <= ratios[i] <= 0)) or (ratios[i] == 1 and w.lower() in word_dict))]
print("FINAL: ",' '.join(kept))
return ' '.join(kept)
</code></pre>
<p><code>Code Current Implementation:</code></p>
<pre><code>temp.with_columns(
pl.col("foo").map_elements(
lambda x: filter_field(text=x, word_dict=['cost','atm'])).alias('clean_foo') # baseline
)
</code></pre>
<p><code>Code Partial Attempt w/Polars:</code></p>
<p>This gets me the correct <code>WORDS</code> (see next code block)</p>
<pre><code>temp.with_columns(
(
pl.col(col)
.str.replace_all(r'[^a-zA-Z0-9\s\_\-\%]',' ')
.str.split(' ')
)
)
</code></pre>
<p><code>Expected Result</code> (at each step, see <code>print</code> statements above):</p>
<pre><code>TEXT: COOOPS.autom.SAPF124
WORDS: ['COOOPS', 'autom', 'SAPF124']
FINAL:
TEXT: OSS REEE PAAA comp. BEEE atm 6079 19000000070 04-04-2023
WORDS: ['OSS', 'REEE', 'PAAA', 'comp', '', 'BEEE', '', 'atm', '6079', '19000000070', '04-04-2023']
FINAL: atm 6079 19000000070 04-04-2023
TEXT: ABCD 600000000397/7667896-6/REG.REF.REE PREPREO/HMO
WORDS: ['ABCD', '600000000397', '7667896-6', 'REG', 'REF', 'REE', 'PREPREO', 'HMO']
FINAL: 600000000397 7667896-6
TEXT: OSS REFF pagopago cost. Becf atm 9682 50012345726 10-04-2023
WORDS: ['OSS', 'REFF', 'pagopago', 'cost', '', 'Becf', '', 'atm', '9682', '50012345726', '10-04-2023']
FINAL: cost atm 9682 50012345726 10-04-2023
</code></pre>
|
<python><regex><dataframe><python-polars>
|
2024-03-21 20:50:39
| 2
| 1,287
|
MikeB2019x
|
78,202,681
| 17,729,094
|
Explode a dataframe into a range of another dataframe
|
<p>I have some data in 2 dataframes that look like:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
data = {"channel": [0, 1, 2, 1, 2, 0, 1], "time": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]}
time_df = pl.DataFrame(data)
data = {
"time": [10.0, 10.5],
"event_table": [["start_1", "stop_1", "start_2", "stop_2"], ["start_3"]],
}
events_df = pl.DataFrame(data)
</code></pre>
<p>where <code>channel</code> 0 in the <code>time_df</code> means that a new "event table" starts here. I want to explode each row of the <code>event_table</code> starting at the channel-0 rows in the <code>time_df</code> and have a result like:</p>
<pre class="lang-py prettyprint-override"><code>data = {
"channel": [1, 2, 1, 2, 1],
"time": [0.2, 0.3, 0.4, 0.5, 0.7],
"event": ["start_1", "stop_1", "start_2", "stop_2", "start_3"],
}
result_df = pl.DataFrame(data)
</code></pre>
<p>What I am currently doing is to remove all channel 0 from the first dataframe, explode the second data frame, and use <code>hstack</code> to combine both dataframes. This works OK if my data is perfect.</p>
<p>In reality, the event table can have more (or less) events. In these cases I want to "truncate" the explosion (or fill with nulls) e.g.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
data = {"channel": [0, 1, 2, 1, 2, 0, 1], "time": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]}
time_df = pl.DataFrame(data)
data = {
"time": [10.0, 10.5],
"event_table": [["start_1", "stop_1", "start_2"], ["start_3", "stop_3"]],
}
events_df = pl.DataFrame(data)
data = {
"channel": [1, 2, 1, 2, 1],
"time": [0.2, 0.3, 0.4, 0.5, 0.7],
"event": ["start_1", "stop_1", "start_2", None, "start_3"],
}
result_df = pl.DataFrame(data)
</code></pre>
<p>I appreciate any help.</p>
|
<python><python-polars>
|
2024-03-21 20:39:00
| 1
| 954
|
DJDuque
|
78,202,544
| 7,981,292
|
Python with Bleak - Pair with a device that requires a pin
|
<p>I'm attempting to read some characteristics from a Bluetooth device. The device requires a Pin to pair but I can't find any resources on how to enter a pin with python-bleak.</p>
<p>My code is below. It will successfully connect and get the services. Any attempt to bond with the device gives me an error.</p>
<blockquote>
<p>bleak.exc.BleakError: Could not pair with device: 19</p>
</blockquote>
<p>Is it possible to pair to a device that requires a pin using python-bleak? If not is there a different way to pair to the device with python?</p>
<pre><code>import asyncio
from bleak import BleakScanner, BleakClient, exc, BLEDevice, backends
from typing import Any, List, Dict
deviceAdress = "zz:yy:xx:ww:vv:uu".upper()
fixedPin = "012345"
async def GetServices(client:BleakClient) -> backends.service.BleakGATTServiceCollection:
services = await client.get_services()
return services
async def main():
device:BLEDevice = await BleakScanner.find_device_by_address(deviceAdress)
print(device)
client:BleakClient = BleakClient(device, timeout=60)
try:
connectResults = await client.connect()
except Exception as e:
print(e)
await GetServices(client) # Successfully gets services
pairResults = await client.pair() # Fails here
asyncio.run(main())
</code></pre>
|
<python><bluetooth><python-bleak>
|
2024-03-21 20:09:06
| 1
| 363
|
Dave1551
|
78,202,512
| 2,675,349
|
Why do I get "Validators defined with incorrect fields" when trying to validate JSON data using pydantic?
|
<p>I am trying to parse a JSON object with a pydantic validator and it's raising an error.</p>
<blockquote>
<p>pydantic.errors.ConfigError: Validators defined with incorrect fields:
validate_grade</p>
</blockquote>
<pre class="lang-json prettyprint-override"><code>{
    "grade": "A",
    "subject": [
        {
            "physics": "PHY",
            "chemistry": "CHE",
            "classTime": {
                "date": "2024-03-25",
                "time": null
            },
            "labTime": null
        }
    ]
}
</code></pre>
<p>The implementation code:</p>
<pre><code>class Grade(BaseModel):
@validator("grade")
def validate_grade(cls, val):
if (val == None):
raise ValueError("grade is null")
class subject(BaseModel):
@validator("classTime", pre=True, always=True)
def validate_classTime(cls, value, values):
date_str = value.get('date', '')
time_str = value.get('time', '')
</code></pre>
<p>The above implementation is throwing the error after adding the section:</p>
<pre><code>class Grade(BaseModel):
</code></pre>
<p>I also want to get the "grade" inside the class subject. I tried different options after referring to the similar implementations, but there is no luck.</p>
|
<python><pydantic>
|
2024-03-21 19:59:33
| 1
| 1,027
|
Ullan
|
78,202,491
| 1,845,408
|
"Invalid constructor input for Tool" error when using function calling with gemini-pro
|
<p>I have the following code to enabling function calling with gemini-pro model (it is based on <a href="https://ai.google.dev/tutorials/function_calling_python_quickstart" rel="nofollow noreferrer">this example</a>).</p>
<pre><code>def getWordCount(sentence:str):
return len(sentence.split(' '))
model = genai.GenerativeModel(model_name='models/gemini-pro', tools=[getWordCount])
model._tools.to_proto()
</code></pre>
<p>For some reason, I received the following error:</p>
<blockquote>
<p>TypeError: Invalid constructor input for Tool: <function getWordCount at 0x7a9baaf95c60></p>
</blockquote>
<p>The error repeats for all the following models:</p>
<pre><code>models/gemini-1.0-pro
models/gemini-1.0-pro-001
models/gemini-1.0-pro-latest
models/gemini-1.0-pro-vision-latest
models/gemini-pro
models/gemini-pro-vision
</code></pre>
<p>I could not find any resources to resolve this issue. I appreciate any help.</p>
|
<python><google-api><google-gemini>
|
2024-03-21 19:52:42
| 1
| 8,321
|
renakre
|
78,202,488
| 6,943,622
|
Combination of Non-overlapping interval PAIRS
|
<p>I recently did a coding challenge where I was tasked to return the number of unique interval pairs that do not overlap, given the starting points in one list and the ending points in another. I was able to come up with an n^2 solution and eliminated duplicates by using a set to hash each entry tuple of (start, end). I was wondering if there is a more efficient approach, or whether this is the best I can do:</p>
<pre><code>def paperCuttings(starting, ending):
# Pair each start with its corresponding end and sort
intervals = sorted(zip(starting, ending), key=lambda x: x[1])
non_overlaps = set()
print(intervals)
# Store valid combinations
for i in range(len(intervals)):
for j in range(i+1, len(intervals)):
# If the ending of the first is less than the starting of the second, they do not overlap
if intervals[i][1] < intervals[j][0]:
non_overlaps.add((intervals[i], intervals[j]))
return len(non_overlaps)
starting = [1,1,6,7]
ending = [5,3,8,10]
print(paperCuttings(starting, ending)) # should return 4
starting2 = [3,1,2,8,8]
ending2 = [5, 3, 7, 10, 10]
print(paperCuttings(starting2, ending2)) # should return 3
</code></pre>
<p>I ask because I timed out in some hidden test cases</p>
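<p>One possible O(n log n) alternative (a sketch, not from the original challenge): deduplicate the intervals, sort the start points once, and for each interval use <code>bisect</code> to count how many intervals start strictly after it ends. Each non-overlapping pair is counted exactly once, because the condition end_i &lt; start_j can only hold in one direction for any pair:</p>

```python
from bisect import bisect_right

def paper_cuttings_fast(starting, ending):
    intervals = set(zip(starting, ending))      # drop duplicate pairs up front
    starts = sorted(s for s, _ in intervals)
    n = len(intervals)
    total = 0
    for _, e in intervals:
        # count intervals whose start is strictly greater than this end
        total += n - bisect_right(starts, e)
    return total

print(paper_cuttings_fast([1, 1, 6, 7], [5, 3, 8, 10]))         # 4
print(paper_cuttings_fast([3, 1, 2, 8, 8], [5, 3, 7, 10, 10]))  # 3
```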
|
<python><algorithm><combinations><intervals>
|
2024-03-21 19:52:05
| 2
| 339
|
Duck Dodgers
|
78,202,160
| 3,083,406
|
Python3 trying to get access to the object of my class that was instantiated in a thread call
|
<p>Working in Python 3, I've created a class that manages some tables. The class is instantiated inside a "run" function passed to threading.Thread(), which the threading mechanism calls so that it instantiates the class. Now, that function will never exit, as it runs some tk code and sits in an endless loop. Therefore the return value of the class instantiation is unavailable.</p>
<pre><code>key_gui = threading.Thread(target=key_gui_run, args=(key_name_list, list, sendQueue, ))
key_gui.start()
</code></pre>
<p>the called function is just simply:</p>
<pre><code> def key_gui_run(large_list, small_list, queue):
myObject = MyClass(large_list, small_list, queue)
</code></pre>
<p>As the system runs, the object made from MyClass manipulates some data. I would like to just call a method on that object to get that data. However, the requirements of the thread class do not allow me to get a reference to the object. This is because key_gui_run() never exits and the value of myObject is never available.</p>
<p>Is there a standard way around this dilemma? Am I missing something about Python classes that I should know? I'm a bit of a novice to the language.</p>
<p>Any answers would be appreciated.</p>
<p>Cheers!!</p>
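<p>Not an authoritative fix, but one common pattern is to pass a shared mutable container (or a <code>queue.Queue</code>) into the thread and have <code>key_gui_run</code> publish the instance into it before entering the GUI loop; since threads share the interpreter's memory, the main thread can then call methods on the object. A minimal sketch with a stand-in class:</p>

```python
import threading

class MyClass:
    """Stand-in for the real tk-based class; the real one never returns."""
    def __init__(self, data):
        self.data = data

    def get_data(self):
        return self.data

holder = {}                   # shared container visible to both threads
ready = threading.Event()     # signals that the object has been published

def key_gui_run(data, holder):
    holder["obj"] = MyClass(data)   # publish the instance first
    ready.set()
    # ...the endless tk mainloop would run here and never return...

t = threading.Thread(target=key_gui_run, args=([1, 2, 3], holder), daemon=True)
t.start()
ready.wait(timeout=5)
print(holder["obj"].get_data())  # [1, 2, 3]
```

<p>(One caveat: tkinter itself is usually happiest running in the main thread, so methods that touch GUI state may need extra care when called from another thread.)</p>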
|
<python><python-3.x><multithreading>
|
2024-03-21 18:34:16
| 1
| 911
|
Dowd
|
78,202,039
| 2,735,009
|
Show 2 histograms on the same plot with 1 calculated histogram in Python
|
<p>I have created the following sample dataset:</p>
<pre><code>sample_df = pd.DataFrame({
'user_id':np.arange(0,1000),
'sent_click_diff':np.random.randint(400, size=1000),
'clicked_or_not':np.random.choice( ['clicked','not clicked'], 1000),
})
</code></pre>
<p>And here's a sample code to display the % of users on the Y axis:</p>
<pre><code>stepsize = 10
x_axis_values = np.arange(1, 400, stepsize)
#create histogram, using percentages instead of counts
plt.hist(sample_df['sent_click_diff'], weights=np.ones(len(sample_df)) / len(sample_df), bins=x_axis_values)
#apply percentage format to y-axis
plt.gca().yaxis.set_major_formatter(PercentFormatter(1))
plt.xlim(1,400)
plt.title('Distribution of number of days since last click')
plt.xlabel('Number of days since last click')
plt.ylabel('% users')
plt.show()
</code></pre>
<p>It gives me the following chart:
<a href="https://i.sstatic.net/EB42f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EB42f.png" alt="enter image description here" /></a></p>
<p>I would like <code>% clicked</code> for each bin on a Y2 axis in a different color on the same plot. How can I do that?
In particular, how do I calculate <code>% clicked</code> for each bin?</p>
<p>TIA for any help.</p>
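<p>One possible way to compute the per-bin click percentage (a sketch: <code>np.histogram</code> with the same bin edges gives aligned counts, and the second axis would come from <code>plt.gca().twinx()</code>):</p>

```python
import numpy as np
import pandas as pd

np.random.seed(0)
sample_df = pd.DataFrame({
    'user_id': np.arange(0, 1000),
    'sent_click_diff': np.random.randint(400, size=1000),
    'clicked_or_not': np.random.choice(['clicked', 'not clicked'], 1000),
})

bins = np.arange(1, 400, 10)

# total users per bin, and clicked users per bin, with identical edges
totals, _ = np.histogram(sample_df['sent_click_diff'], bins=bins)
clicked, _ = np.histogram(
    sample_df.loc[sample_df['clicked_or_not'] == 'clicked', 'sent_click_diff'],
    bins=bins,
)

# % clicked per bin, guarding against empty bins
pct_clicked = np.divide(clicked, totals,
                        out=np.zeros(len(totals)), where=totals > 0)

centers = (bins[:-1] + bins[1:]) / 2
# plotting sketch: ax2 = plt.gca().twinx(); ax2.plot(centers, pct_clicked, color='red')
```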
|
<python><pandas><matplotlib><histogram>
|
2024-03-21 18:11:36
| 0
| 4,797
|
Patthebug
|
78,201,819
| 9,244,371
|
jupyterlab with pylsp .virtual_documents error
|
<p>I'm using <code>jupyterlab</code> 4.1.5 and am trying to follow these instructions: <a href="https://jupyterlab.readthedocs.io/en/latest/user/lsp.html" rel="nofollow noreferrer">https://jupyterlab.readthedocs.io/en/latest/user/lsp.html</a></p>
<p>I <code>pip</code> installed <code>pylsp</code> using the provided command. When I start <code>jupyterlab</code> it says the lsp extension was initialized. However, as soon as I open a .ipynb, I get this error:</p>
<pre class="lang-py prettyprint-override"><code>[W 2024-03-21 16:54:39.012 ServerApp] [lsp] initialization of shadow filesystem failed three times
check if the path set by `LanguageServerManager.virtual_documents_dir` or `JP_LSP_VIRTUAL_DIR` is correct;
if this is happening with a server for which you control (or wish to override) jupyter-lsp specification
you can try switching `requires_documents_on_disk` off. The errors were:
[PermissionError(13, 'Permission denied'), PermissionError(13, 'Permission denied'),
PermissionError(13, 'Permission denied')]
</code></pre>
<p>I think I want to follow the suggest of turning off <code>requires_documents_on_disk</code>. It's unclear to me how I'm supposed to do that. I tried adding it as a language server setting/property (per those same instructions) and I set it to <code>false</code>. That didn't fix it.</p>
|
<python><jupyter-lab><language-server-protocol>
|
2024-03-21 17:28:58
| 0
| 428
|
Hutch3232
|
78,201,741
| 354,051
|
Generate MFCC with good noise for an audio signal of 0.01 seconds
|
<p>I'm using 16000Hz, mono WAV files for my CNN project. Here is the code for MFCC generation</p>
<pre class="lang-py prettyprint-override"><code>import librosa
import numpy as np
signal, sr = librosa.load('test.wav', sr=None)
mfccs = np.mean(librosa.feature.mfcc(y=signal[0:160], sr=16000, n_fft=160,
n_mfcc=20, n_mels=50).T, axis=0)
</code></pre>
<p>The problem is that when using this MFCC-based TensorFlow model for testing and prediction, all the predictions are almost equal to 0.99. When using a larger input signal, e.g. y=signal[0:1600], the predictions are much better.</p>
<p>How do I generate good quality mfcc for an input signal of 0.01 seconds or signal[0:160]?</p>
<p>Is it possible to write a function that takes three parameters (y, sr, and numpoints) and generates the best possible MFCC by calculating the rest of the parameters automatically?</p>
<pre class="lang-py prettyprint-override"><code>def mfcc(y:array, sr:int, numpoints:int):
# calculate params
return librosa.feature.mfcc(...)
</code></pre>
|
<python><mfcc>
|
2024-03-21 17:12:43
| 0
| 947
|
Prashant
|
78,201,636
| 4,475,588
|
How to Stop an Airflow DAG and Skip Future Processing if File is Empty?
|
<p>I am working on an Airflow DAG where I need to perform certain processing tasks on a file only if the file is not empty. The workflow should ideally check if the file has content, and if it's empty, the DAG should stop executing further and skip any future processing related to this file.</p>
<p>Here is the simplified structure of my Airflow DAG:</p>
<pre><code>from google.cloud import storage
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from datetime import datetime
def check_file_not_empty():
client = storage.Client()
bucket = client.get_bucket(src_bucket_name)
blob = bucket.get_blob(blob_name)
if blob.size == 0:
raise Exception(f"The file {blob_name} in bucket {src_bucket_name} is empty")
def process_file():
    pass  # Code to process the file
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': datetime(2024, 3, 21),
'retries': 1,
}
dag = DAG('file_processing_dag', default_args=default_args, schedule_interval='@daily')
check_file_task = PythonOperator(
task_id='check_file_not_empty',
python_callable=check_file_not_empty,
dag=dag,
)
process_file_task = PythonOperator(
task_id='process_file',
python_callable=process_file,
dag=dag,
)
check_file_task >> process_file_task
</code></pre>
<p>It seems that I need to call some Airflow internal option to stop the execution, because the exception part only generates retries, and that is not something I need. I would like to fail fast instead. How can I do that?</p>
|
<python><airflow><directed-acyclic-graphs>
|
2024-03-21 16:53:30
| 1
| 626
|
Igor Tiulkanov
|
78,201,631
| 4,976,543
|
How would you vectorize a fraction of sums of matrices (Expectation Maximization) in numpy?
|
<p>I am trying to vectorize the following Expectation-Maximization / clustering equation for a 2-dimensional Gaussian distribution using numpy. I have a naive approach that I will include at the end of my question:</p>
<p><a href="https://i.sstatic.net/M8Wqw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M8Wqw.png" alt="Expectation Maximization Covariance Matrix" /></a></p>
<p>For context, the variables and dimensions are defined as follows:</p>
<ul>
<li><a href="https://latex.codecogs.com/svg.image?n" rel="nofollow noreferrer"><img src="https://latex.codecogs.com/svg.image?n" alt="n" /></a> = data point index (i.e. 1-1000)</li>
<li><a href="https://latex.codecogs.com/svg.image?k" rel="nofollow noreferrer"><img src="https://latex.codecogs.com/svg.image?k" alt="k" /></a> = cluster index (i.e. 1-3)</li>
<li><a href="https://latex.codecogs.com/svg.image?%5Clangle&space;z_k%5En%5Crangle" rel="nofollow noreferrer"><img src="https://latex.codecogs.com/svg.image?%5Clangle&space;z_k%5En%5Crangle" alt="z" /></a> = a conditional probability that datapoint <a href="https://latex.codecogs.com/svg.image?n" rel="nofollow noreferrer"><img src="https://latex.codecogs.com/svg.image?n" alt="n" /></a> is in cluster <a href="https://latex.codecogs.com/svg.image?k" rel="nofollow noreferrer"><img src="https://latex.codecogs.com/svg.image?k" alt="k" /></a> (in [0,1])</li>
<li><a href="https://latex.codecogs.com/svg.image?y%5En" rel="nofollow noreferrer"><img src="https://latex.codecogs.com/svg.image?y%5En" alt="y" /></a> = value of datapoint <a href="https://latex.codecogs.com/svg.image?n" rel="nofollow noreferrer"><img src="https://latex.codecogs.com/svg.image?n" alt="n" /></a> (shape (2,))</li>
<li><a href="https://latex.codecogs.com/svg.image?%5Chat%7B%5Cmu%7D_k" rel="nofollow noreferrer"><img src="https://latex.codecogs.com/svg.image?%5Chat%7B%5Cmu%7D_k" alt="mu" /></a> = current estimated multi-variate mean of cluster <a href="https://latex.codecogs.com/svg.image?k" rel="nofollow noreferrer"><img src="https://latex.codecogs.com/svg.image?k" alt="k" /></a> (shape (2,))</li>
</ul>
<p>The end product is a numerator that is a sum of (2, 2) shape matrices and the denominator is a scalar. The final value is a (2, 2) covariate matrix estimate. This must also be done for each value of "k" (1, 2, 3).</p>
<p>I've achieved a vectorized approach for other values by defining the following numpy arrays:</p>
<ul>
<li><a href="https://latex.codecogs.com/svg.image?&space;Z%5Cin(n,k)=(1000,3)" rel="nofollow noreferrer"><img src="https://latex.codecogs.com/svg.image?&space;Z%5Cin(n,k)=(1000,3)" alt="Z" /></a> = est. probability values for each datapoint, cluster</li>
<li><a href="https://latex.codecogs.com/svg.image?&space;X%5Cin(n,2)=(1000,2)" rel="nofollow noreferrer"><img src="https://latex.codecogs.com/svg.image?&space;X%5Cin(n,2)=(1000,2)" alt="X" /></a> = multivariate data matrix</li>
<li><a href="https://latex.codecogs.com/svg.image?%5Cmu%5Cin(k,2)=(3,2)" rel="nofollow noreferrer"><img src="https://latex.codecogs.com/svg.image?%5Cmu%5Cin(k,2)=(3,2)" alt="MU" /></a> = est. cluster means</li>
</ul>
<p>My naive code is as follows:</p>
<pre><code>for kk in range(k):
numsum = 0
for ii in range(X.shape[0]):
diff = (X[ii, :]-mu[kk, :]).reshape(-1, 1)
numsum = numsum + Z[ii, kk]*np.matmul(diff, diff.T)
sigma[kk] = numsum / np.sum(Z[:, kk])
</code></pre>
<p>Long story long - is there any better way to do this?</p>
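<p>One fully vectorized possibility (a sketch checked against the loop on random data): broadcast the differences to shape (k, n, 2) and let <code>np.einsum</code> perform the Z-weighted sum of outer products:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, 2))
Z = rng.random((n, k))
mu = rng.normal(size=(k, 2))

# reference: the naive double loop from the question
sigma_loop = np.zeros((k, 2, 2))
for kk in range(k):
    numsum = 0
    for ii in range(n):
        diff = (X[ii, :] - mu[kk, :]).reshape(-1, 1)
        numsum = numsum + Z[ii, kk] * (diff @ diff.T)
    sigma_loop[kk] = numsum / Z[:, kk].sum()

# vectorized: diff[k, n, :] = X[n, :] - mu[k, :]
diff = X[None, :, :] - mu[:, None, :]                  # (k, n, 2)
num = np.einsum('kn,kni,knj->kij', Z.T, diff, diff)    # weighted outer products
sigma_vec = num / Z.sum(axis=0)[:, None, None]

print(np.allclose(sigma_loop, sigma_vec))  # True
```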
|
<python><numpy><vectorization>
|
2024-03-21 16:53:09
| 2
| 712
|
Branden Keck
|
78,201,566
| 13,142,245
|
AWS Step Functions: Unable to apply ReferencePath
|
<p>I am attempting a step function but am not certain about the <code>"$.<variable>"</code> syntax. For brevity, resources like Lambda functions and S3 buckets are defined elsewhere in infraConstruct (details that I can provide upon request.)</p>
<p>My goal is to construct two Lambda functions. The first determines some independent tasks for execution by reading contents of an S3 bucket and applying some filtering. The output should be stored in taskArray.</p>
<p>The purpose of the second Lambda function is to apply processing to each task. Because these tasks are independent, I've selected the mapIterator in order to execute tasks in parallel.</p>
<pre><code>export class StepFunctionStack extends DeploymentStack {
readonly infra;
readonly stateMachine;
readonly stepOneState;
readonly stepTwoState;
readonly sfnDefinition;
readonly mapIterator;
constructor(scope: Construct, id: string, props: StepFunctionStackProps) {
super(scope, id, props);
this.infra = new InfraConstruct(this, id, props);
this.stepOneState = new LambdaInvoke(this, 'stepOneState', {
lambdaFunction: this.infra.stepOneLambda,
resultPath: '$.taskArray',
});
this.stepTwoState = new LambdaInvoke(this, 'stepTwoState', {
lambdaFunction: this.infra.stepTwoLambda,
resultPath: '$.output',
});
this.mapIterator = new Map(this, 'Map State', {
itemsPath: '$.taskArray.Payload',
maxConcurrency: 100,
});
this.mapIterator.iterator(this.stepTwoState);
this.sfnDefinition = this.stepOneState.next(this.mapIterator);
this.stateMachine = new StateMachine(this, 'StateMachine', {
definition: this.sfnDefinition,
stateMachineName: 'StateMachine',
timeout: Duration.minutes(60),
});
}
}
</code></pre>
<p>The first Lambda function executes successfully! However, the error that I'm receiving in the next step is <code>Unable to apply ReferencePath $.output to input "task_abc"</code>, where <code>"task_abc"</code> is among the elements returned by the first Lambda function.</p>
<p>Note, the second Lambda function has a return type of None. Perhaps the <code>$.output</code> should be deleted. I assume that when None is returned, the default behavior is to return the input.</p>
<p>Nonetheless, why can <code>"$.output"</code> not be assigned a string value?</p>
|
<python><amazon-web-services><aws-lambda><aws-step-functions>
|
2024-03-21 16:42:25
| 1
| 1,238
|
jbuddy_13
|
78,201,546
| 4,522,501
|
Global variable not being updated with a thread in python
|
<p>I have a thread that checks the time every hour and keeps updating a global variable. For some reason the global statement is not working, and the global variable keeps the same value even after the thread updates it. Please refer to my code:</p>
<pre><code>paymentDate = "2024-03-22"
currentDate = ""
#Function that is in charge of validate the date
def isSystemValidDateReached():
global currentDate
try:
if currentDate == "":
currentDate = str(urlopen('http://just-the-time.appspot.com').read().strip().decode('utf-8'))
if paymentDate < currentDate:
return True
else:
return False
except Exception as e:
log.appendToLog("Error", "critical")
#function that is being used in a thread
def getTimeFromInternet(timeSleep=3600):
global currentDate
while True:
try:
time.sleep(timeSleep)
currentDate = str(urlopen('http://just-the-time.appspot.com').read().strip().decode('utf-8'))
print("new current date {}".format(currentDate))
except Exception as e:
log.appendToLog("Error: {}".format(e), "critical")
</code></pre>
<p>This is a portion of how it looks in the main script; this function is checked in a while loop at a set interval:</p>
<pre><code>if isSystemValidDateReached():
exit()
# get time every hour
getDateFunctionThread = multiprocessing.Process(target=getTimeFromInternet)
getDateFunctionThread.start()
</code></pre>
<p>The problem that I'm facing is that I have a breakpoint in <code>getTimeFromInternet</code> function and it updates <code>currentDate</code>, but when the function <code>isSystemValidDateReached()</code> is called, I see the same old value assigned in this portion of code:</p>
<pre><code>if currentDate == "":
currentDate = str(urlopen('http://just-the-time.appspot.com').read().strip().decode('utf-8'))
</code></pre>
<p>What needs to be changed so that this variable gets updated every hour by my threaded function?</p>
|
<python><multithreading><variables><global>
|
2024-03-21 16:39:12
| 1
| 1,188
|
Javier Salas
|
78,201,299
| 13,314,132
|
Unable to send keys to text box using Selenium. Getting selenium.common.exceptions.TimeoutException: Message:
|
<p>I am trying to send keys to the search bar text box using Selenium but getting the error:</p>
<pre><code>Traceback (most recent call last):
File ".\test.py", line 19, in <module>
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input#typeahead-input-56352"))).send_keys("2216 NW 6 PL FORT LAUDERDALE")
File "D:\Python Projects\Title Search\titlesearch\lib\site-packages\selenium\webdriver\support\wait.py", line 95, in
until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
Stacktrace:
GetHandleVerifier [0x00007FF6D9C9AD02+56930]
(No symbol) [0x00007FF6D9C0F602]
(No symbol) [0x00007FF6D9AC42E5]
(No symbol) [0x00007FF6D9B098ED]
(No symbol) [0x00007FF6D9B09A2C]
(No symbol) [0x00007FF6D9B4A967]
(No symbol) [0x00007FF6D9B2BCDF]
(No symbol) [0x00007FF6D9B481E2]
(No symbol) [0x00007FF6D9B2BA43]
(No symbol) [0x00007FF6D9AFD438]
(No symbol) [0x00007FF6D9AFE4D1]
GetHandleVerifier [0x00007FF6DA016F8D+3711213]
GetHandleVerifier [0x00007FF6DA0704CD+4077101]
GetHandleVerifier [0x00007FF6DA06865F+4044735]
GetHandleVerifier [0x00007FF6D9D39736+706710]
(No symbol) [0x00007FF6D9C1B8DF]
(No symbol) [0x00007FF6D9C16AC4]
(No symbol) [0x00007FF6D9C16C1C]
(No symbol) [0x00007FF6D9C068D4]
BaseThreadInitThunk [0x00007FFD02F37344+20]
RtlUserThreadStart [0x00007FFD03F226B1+33]
</code></pre>
<p>My code that I am using for sending the keys:</p>
<pre><code># Import Dependencies
from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
import time
options = webdriver.ChromeOptions()
options.headless = True
driver = webdriver.Chrome()
driver.maximize_window()
driver.get("https://broward.county-taxes.com/public/search/property_tax")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@class="svg-icon"]'))).click()
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input#typeahead-input-56352"))).send_keys("2216 NW 6 PL FORT LAUDERDALE")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="search-icon"]/svg'))).click()
</code></pre>
<p>I am even closing a pop-up before sending the keys. I have used <code>XPATH</code> and <code>CSS_SELECTOR</code>, and both seem to produce the same error.</p>
<p>Please help.</p>
|
<python><selenium-webdriver>
|
2024-03-21 15:58:28
| 1
| 655
|
Daremitsu
|
78,201,240
| 10,462,461
|
Records not getting caught in if statement
|
<p>I've created a sample dataframe below.</p>
<pre><code>import datetime as dt
import pandas as pd
import numpy_financial as npf
from dateutil.relativedelta import relativedelta
df = pd.DataFrame({
'loannum': [111, 222],
'datadt': [dt.datetime(2024, 2, 29), dt.datetime(2024, 2, 29)],
'balloondt_i': [dt.datetime(2024, 8, 1), dt.datetime(2024, 8, 1)],
'balloondt': [dt.datetime(2024, 8, 1), dt.datetime(2024, 8, 1)],
'currbal': [21662536.64, 32424669.41],
'rate': [7.349, 7.349],
'pmtfreq': [1, 1],
'int_only': [None, None],
'dpd_mult': [1, 1],
'prinbal': [0, 0],
'prinamt': [0, 0],
'pmtamt': [34669, 51893],
'intamt': [0, 0],
'intbal': [0, 0]
})
</code></pre>
<p>When I run the following code, I get unexpected negative results for the <code>prinbal</code> and <code>prinamt</code> fields: for loan 111 I get <code>-97995.98</code> and for loan 222 I get <code>-146681.08</code> for both <code>prinbal</code> and <code>prinamt</code>. I expect <code>3038254.50</code> for loan 111 and <code>4547685.23</code> for loan 222.</p>
<pre><code>for idx in df.index:
dpd_mult = df.loc[idx, 'dpd_mult']
datadt = df.loc[idx, 'datadt']
balloondt_i = df.loc[idx, 'balloondt_i']
currbal = df.loc[idx, 'currbal']
rate = df.loc[idx, 'rate']
pmtfreq = df.loc[idx, 'pmtfreq']
pmtamt = df.loc[idx, 'pmtamt'] if not pd.isna(df.loc[idx, 'pmtamt']) else None
int_only = df.loc[idx, 'int_only']
intbal = df.loc[idx, 'intbal']
if balloondt_i <= datadt + relativedelta(months=+1): # Approximates 'intnx'
df.loc[idx, 'prinamt'] = df.loc[idx, 'currbal']
else:
for i in range(dpd_mult):
if currbal <= 0: # Break if current balance is zero or negative
break
df.loc[idx, 'intbal'] = df.loc[idx, 'currbal'] * rate / 1200 * pmtfreq
if int_only == 'IO':
df.loc[idx, 'prinbal'] = 0
elif intbal >= pmtamt:
nperiods = max(1, 12 * (balloondt_i.year - datadt.year) + (balloondt_i.month - datadt.month) - (i - 1) * pmtfreq) / pmtfreq
df.loc[idx, 'prinbal'] = npf.pmt(rate * pmtfreq / 1200, nperiods, currbal) - intbal
else:
df.loc[idx, 'prinbal'] = df.loc[idx, 'pmtamt'] - df.loc[idx, 'intbal']
df.loc[idx, 'currbal'] -= df.loc[idx, 'prinbal']
df.loc[idx, 'prinamt'] += df.loc[idx, 'prinbal']
df.loc[idx, 'intamt'] += df.loc[idx, 'intbal']
</code></pre>
<p>I'm not sure why the loans are not getting caught in the <code>elif</code> statement. I can filter the dataframe for records where <code>df.loc[df['intbal'] > df['pmtamt']]</code> and I get back the expected results. <code>intbal</code> is getting calculated correctly. I've tried moving the <code>elif</code> to the first evaluation but received the same results. I tried explicitly calling the two fields using <code>elif df.loc[idx, 'intbal'] >= df.loc[idx, 'pmtamt']</code> and still get the same results.</p>
<p>My expected output is</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>loannum</th>
<th>prinbal</th>
<th>prinamt</th>
<th>intamt</th>
<th>intbal</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>3038254.50</td>
<td>3038254.50</td>
<td>132664.98</td>
<td>132664.98</td>
</tr>
<tr>
<td>222</td>
<td>4547685.23</td>
<td>4547685.23</td>
<td>198574.08</td>
<td>198574.08</td>
</tr>
</tbody>
</table></div>
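<p>One thing worth illustrating (a generic sketch, not a claim about exactly where the bug is): plain locals read from a DataFrame are snapshots, so writing back with <code>df.loc[...]</code> inside the loop never refreshes a variable such as <code>intbal</code> that was read before the write:</p>

```python
import pandas as pd

df = pd.DataFrame({"intbal": [0.0]})
intbal = df.loc[0, "intbal"]      # snapshot of the current value
df.loc[0, "intbal"] = 99.0        # updates the DataFrame only
print(intbal)                     # still 0.0: the local never changed
```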
|
<python><pandas>
|
2024-03-21 15:48:59
| 1
| 340
|
gernworm
|
78,201,163
| 3,639,372
|
Python Mediapipe replace pose landmark line drawings with custom image drawings
|
<p>I'm using a webcam to stream a video and would like to use mediapipe to estimate and replace/blur a user's face with a custom yellow image and the user's eyes with a heart (❤) emoji and the user's lips with a smile drawing.</p>
<p>The code I currently have below is only drawing landmark lines.</p>
<p>Could someone please help me alter the code below to achieve the result as described?</p>
<p>Thanks in advance!</p>
<p>Current image result:</p>
<p><a href="https://i.sstatic.net/h0e3u.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h0e3u.jpg" alt="Current result" /></a></p>
<p>Desired Image result:</p>
<p><a href="https://i.sstatic.net/ExAIH.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ExAIH.jpg" alt="Desired Result" /></a></p>
<p>The current code:</p>
<pre class="lang-py prettyprint-override"><code>
import cv2
import mediapipe as mp

## initialize pose estimator
mp_drawing = mp.solutions.drawing_utils
mp_pose = mp.solutions.pose
pose = mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    # read frame
    _, frame = cap.read()
    try:
        # resize the frame for portrait video
        # frame = cv2.resize(frame, (350, 600))
        # convert to RGB
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # process the frame for pose detection
        pose_results = pose.process(frame_rgb)
        # print(pose_results.pose_landmarks)
        # draw skeleton on the frame
        mp_drawing.draw_landmarks(frame, pose_results.pose_landmarks, mp_pose.POSE_CONNECTIONS)
        # display the frame
        cv2.imshow('Output', frame)
    except:
        break
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
</code></pre>
|
<python><graphics><drawing><mediapipe>
|
2024-03-21 15:39:01
| 1
| 867
|
Vakindu
|
78,201,124
| 11,251,938
|
Django query doesn't return the instance that I just saved in database
|
<p>I'm working on a project that requires tracing the events related to a process. In my case I have <code>Registration</code> and <code>RegistrationEvent</code> models, with the latter connected through a ForeignKey to <code>Registration</code>.</p>
<p>I have also written a method of <code>RegistrationEvent</code> called <code>_ensure_correct_flow_of_events</code> which prevents adding Events in an order that doesn't make sense and is called when <code>model.save</code> is called. In fact, the flow of events must be <code>SIGNED -> STARTED -> SUCCESS -> CERTIFICATE_ISSUED</code>. At any moment the event <code>CANCELED</code> can happen.
In order to evaluate the event sequence, this method calls another method, <code>_get_previous_event</code>, which returns the last event registered to the <code>Registration</code>.</p>
<p>After a <code>SUCCESS</code> event is created, the <code>save</code> method calls <code>Registration.threaded_issue_certificate</code>, which is a method that should create a certificate and after that create a <code>CERTIFICATE_ISSUED</code> event in a new thread, in order to process the response quickly. The problem is that when <code>CERTIFICATE_ISSUED</code> is about to be created, <code>_ensure_correct_flow_of_events</code> and <code>_get_previous_event</code> are called, and at this point <code>_get_previous_event</code> does not return the just-created <code>SUCCESS</code> event but the previous <code>STARTED</code> event.</p>
<p>My log is</p>
<pre><code>Checking correct flow, previous event: Course started - admin registration id: 1 current event_type: 3
Checking correct flow, previous event: Course started - admin registration id: 1 current event_type: 4
Exception in thread Thread-2 (threaded_issue_certificate):
Traceback (most recent call last):
File "/Users/zenodallavalle/miniconda3/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/Users/zenodallavalle/miniconda3/lib/python3.10/threading.py", line 953, in run
[21/Mar/2024 15:16:41] "POST /admin/main/registrationevent/add/ HTTP/1.1" 302 0
self._target(*self._args, **self._kwargs)
File "/Users/zenodallavalle/Downloads/test/main/models.py", line 79, in threaded_issue_certificate
return self.issue_certificate()
File "/Users/zenodallavalle/Downloads/test/main/models.py", line 84, in issue_certificate
RegistrationEvent.objects.create(
File "/Users/zenodallavalle/Downloads/test/env/lib/python3.10/site-packages/django/db/models/manager.py", line 87, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/Users/zenodallavalle/Downloads/test/env/lib/python3.10/site-packages/django/db/models/query.py", line 679, in create
obj.save(force_insert=True, using=self.db)
File "/Users/zenodallavalle/Downloads/test/main/models.py", line 175, in save
self._ensure_correct_flow_of_events(is_new=is_new)
File "/Users/zenodallavalle/Downloads/test/main/models.py", line 155, in _ensure_correct_flow_of_events
raise ValueError(
ValueError: After started next event must be 'success' or 'canceled'
</code></pre>
<p>Why is that?</p>
<p>I leave my <code>models.py</code> here in order to replicate the behaviour.</p>
<pre><code>from django.db import models
from django.contrib.auth.models import User
from logging import getLogger

from main.utils import make_thread

logger = getLogger(__name__)

import threading


def make_thread(fn):
    def _make_thread(*args, **kwargs):
        thread = threading.Thread(target=fn, args=args, kwargs=kwargs)
        thread.start()
        return thread

    return _make_thread


class Registration(models.Model):
    course_user = models.ForeignKey(
        User,
        on_delete=models.CASCADE,
        related_name="registrations",
    )
    created_by = models.ForeignKey(
        User,
        null=True,
        on_delete=models.SET_NULL,
    )
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    def _check_user_not_signed_for_other_courses(self):
        other_registrations = self.course_user.registrations.exclude(pk=self.pk)
        if any([not r.ended for r in other_registrations]):
            raise ValueError("User already signed for another course")

    def _ensure_created_by_is_not_null(self):
        if self.created_by is None:
            raise ValueError("Created by is null")

    def save(self, *args, **kwargs) -> None:
        is_new = self._state.adding
        if is_new:
            self._ensure_created_by_is_not_null()
            self._check_user_not_signed_for_other_courses()
        ret = super().save(*args, **kwargs)
        if is_new:
            RegistrationEvent.objects.create(
                course_registration=self,
                event_type=RegistrationEvent.EventType.SIGNED,
            )
        return ret

    def __str__(self):
        return f"{self.course_user} registration id: {self.pk}"

    def __repr__(self):
        return f"<Registration: {self.course_user} registration id: {self.pk}>"

    @property
    def ended(self):
        return self.events.filter(
            event_type__in=(
                RegistrationEvent.EventType.CERTIFICATE_ISSUED,
                RegistrationEvent.EventType.CANCELED,
                RegistrationEvent.EventType.FAILED,
            )
        ).exists()

    @property
    def last_event(self):
        return self.events.order_by("-created_at").first()

    @make_thread
    def threaded_issue_certificate(self):
        return self.issue_certificate()

    def issue_certificate(self):
        # Do something here
        # Register it as an event
        RegistrationEvent.objects.create(
            course_registration=self,
            event_type=RegistrationEvent.EventType.CERTIFICATE_ISSUED,
        )


class RegistrationEvent(models.Model):
    class EventType(models.IntegerChoices):
        SIGNED = 1, "Signed up"
        STARTED = 2, "Course started"
        SUCCESS = 3, "Course success"
        CERTIFICATE_ISSUED = 4, "Certificate issued"
        CANCELED = 5, "Cancelled"
        FAILED = 6, "Course failed"

    course_registration = models.ForeignKey(
        Registration,
        on_delete=models.CASCADE,
        related_name="events",
    )
    event_type = models.IntegerField(choices=EventType.choices)
    created_at = models.DateTimeField(auto_now_add=True, verbose_name="Created at")
    updated_at = models.DateTimeField(auto_now=True, verbose_name="Updated at")

    @property
    def event_type_description(self):
        return self.EventType(self.event_type).label

    def __str__(self):
        return f"{self.event_type_description} - {self.course_registration}"

    def __repr__(self):
        return f"<RegistrationEvent: {self.event_type_description} - {self.course_registration}>"

    def _get_previous_event(self, is_new):
        qs = self.course_registration.events.all()
        if not is_new:
            qs = qs.exclude(created_at__gte=self.created_at)
        return qs.order_by("-created_at").first()

    def _ensure_correct_flow_of_events(self, is_new):
        # If the course is completed (certificate issued or cancelled), no more events can be added
        if is_new:
            if self.course_registration.ended:
                raise ValueError(
                    "The course is completed, it is not possible to add events"
                )
        if self.event_type == self.EventType.CANCELED:
            return  # No further checks needed
        previous_event = self._get_previous_event(is_new=is_new)
        print(
            "Checking correct flow, previous event:",
            previous_event,
            "current event_type:",
            self.event_type,
        )
        if not previous_event:
            if self.event_type != self.EventType.SIGNED:
                raise ValueError("First event must be 'signed'")
        elif previous_event.event_type == self.EventType.SIGNED:
            if self.event_type != self.EventType.STARTED:
                raise ValueError("After signed next event must be 'started'")
        elif previous_event.event_type == self.EventType.STARTED:
            if self.event_type not in (
                self.EventType.SUCCESS,
                self.EventType.CANCELED,
            ):
                raise ValueError(
                    "After started next event must be 'success' or 'canceled'"
                )
        elif previous_event.event_type == self.EventType.SUCCESS:
            if self.event_type != self.EventType.CERTIFICATE_ISSUED:
                raise ValueError(
                    "After success next event must be 'certificate issued'"
                )

    def _issue_certificate_if_needed(self, is_new):
        if not is_new:
            return
        if not self.event_type == self.EventType.SUCCESS:
            return
        self.course_registration.threaded_issue_certificate()

    def save(self, *args, **kwargs):
        is_new = self._state.adding
        if is_new:
            self._ensure_correct_flow_of_events(is_new=is_new)
        ret = super().save(*args, **kwargs)
        self._issue_certificate_if_needed(is_new=is_new)
        return ret
</code></pre>
|
<python><django><django-models>
|
2024-03-21 15:31:14
| 1
| 929
|
Zeno Dalla Valle
|
78,201,028
| 6,606,057
|
Cannot Convert Local CSV to dataframe in Pandas on Spyder
|
<p>Hello I am learning Python using Spyder (Python 3.8.10 64-bit | Qt 5.15.2 | PyQt5 5.15.10 | Windows 10 (AMD64). I am unable to convert a local copy of a csv into a dataframe using pandas</p>
<pre><code># import pandas module
import pandas as pd
# making dataframe
df = pd.read_csv("C:\Test\test.txt")
# output the dataframe
print(df)
</code></pre>
<p>I have verified that the folder exists and contains the file. Spyder returns</p>
<pre><code>df = pd.read_csv("C:\Test\test.txt")
Traceback (most recent call last):
Cell In[114], line 1
df = pd.read_csv("C:\Test\test.txt")
File C:\Program Files\Spyder\pkgs\pandas\io\parsers\readers.py:912 in read_csv
return _read(filepath_or_buffer, kwds)
File C:\Program Files\Spyder\pkgs\pandas\io\parsers\readers.py:577 in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File C:\Program Files\Spyder\pkgs\pandas\io\parsers\readers.py:1407 in __init__
self._engine = self._make_engine(f, self.engine)
File C:\Program Files\Spyder\pkgs\pandas\io\parsers\readers.py:1661 in _make_engine
self.handles = get_handle(
File C:\Program Files\Spyder\pkgs\pandas\io\common.py:859 in get_handle
handle = open(
OSError: [Errno 22] Invalid argument: 'C:\\Test\test.txt'
</code></pre>
<p>What am I doing wrong? Note: I am barely a code monkey.</p>
<p>I have tried:</p>
<pre><code>
!pip install ipython-sql
</code></pre>
<p>prior to the commands above. Spyder spits out some text beginning with</p>
<pre><code>Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: ipython-sql in c:
</code></pre>
<p>but is otherwise stable.</p>
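<p>For reference, the doubled backslash in the error (<code>'C:\\Test\test.txt'</code>) suggests Python's escape-sequence handling is involved. A minimal stdlib-only sketch (not specific to Spyder or pandas) of how backslashes behave in path literals:</p>

```python
# Quick sketch of how backslash escapes behave in Python string literals
# used as Windows paths.
plain = 'C:\Test\test.txt'   # '\t' is interpreted as a TAB character; '\T' stays literal
raw = r'C:\Test\test.txt'    # raw string: every backslash is kept as-is

print('\t' in plain)  # True  -- the plain literal secretly contains a tab
print('\t' in raw)    # False -- the raw string still names the real file
print(repr(plain))
print(repr(raw))
```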
|
<python><pandas><dataframe>
|
2024-03-21 15:17:31
| 1
| 485
|
Englishman Bob
|
78,200,996
| 4,999,991
|
Automatically Detecting and Installing Compatible Versions of `setuptools` and `pip` with PowerShell for Any Python Version
|
<p>I am trying to automate the setup of Python environments on Windows and encountered a challenge with legacy Python versions. Specifically, I need a PowerShell script that:</p>
<ol>
<li>Detects the last version of <code>setuptools</code> that is compatible with the currently active local Python version, downloads it, unpacks the archive, and installs it.</li>
<li>Does the same for the latest version of <code>pip</code> that supports the active local Python version.</li>
</ol>
<p>For context, the necessity arises from managing multiple Python environments where each might be using a different Python version, including legacy ones (e.g., Python 2.6). The script must be robust enough to automatically determine which versions of <code>setuptools</code> and <code>pip</code> are compatible with the active Python version and perform the installation process without manual intervention.</p>
<p>I have found <a href="https://stackoverflow.com/a/77609580/4999991">various approaches</a> for downloading and installing specific versions of packages, but I am unsure how to dynamically determine the compatible version for the current Python environment.</p>
<p>Has anyone tackled a similar problem, or could you provide insights into how this could be achieved using PowerShell? Any guidance or examples would be greatly appreciated.</p>
<p><strong>P.S.</strong> Here is how I think we can achieve a solution:</p>
<ol>
<li>fetch the local version of Python using</li>
</ol>
<pre><code>$pythonVersion = python --version 2>&1 | ForEach-Object { $_ -replace '^Python\s', '' }
</code></pre>
<ol start="2">
<li>Define a function to query compatible package versions using the PyPI API for the package. Iterate over the releases to find the highest version number that requires a Python version less than or equal to the current Python version. for example:</li>
</ol>
<pre><code>function Get-CompatibleVersion {
    param (
        [string]$PackageName,
        [string]$PythonVersion
    )
    $url = "https://pypi.org/pypi/$PackageName/json"
    $response = Invoke-RestMethod -Uri $url
    $releases = $response.releases.PSObject.Properties.Name  # enumerate the release version keys from the JSON
    foreach ($release in $releases) {
        $requiresPython = $response.releases.$release[0].requires_python
        if (-not $requiresPython) { continue }
        $isCompatible = $false
        try {
            $isCompatible = [System.Version]::Parse($PythonVersion) -le [System.Version]::Parse($requiresPython.Split(',')[0].Trim('<>=').Split(' ')[0])
        } catch {}
        if ($isCompatible) { return $release }
    }
    return $null
}
</code></pre>
<ol start="3">
<li>then getting the latest version of <code>setuptools</code> and <code>pip</code>:</li>
</ol>
<pre><code>$setuptoolsVersion = Get-CompatibleVersion -PackageName "setuptools" -PythonVersion $pythonVersion
$pipVersion = Get-CompatibleVersion -PackageName "pip" -PythonVersion $pythonVersion
</code></pre>
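<p>The selection rule in step 2 (newest release whose <code>requires_python</code> lower bound admits the local interpreter) can be prototyped in Python with the standard library only. Everything below is illustrative: the helper names and the sample <code>releases</code> data are assumptions, and only simple <code>>=X.Y</code> specifiers are handled, not full PEP 440:</p>

```python
import re

def parse_version(v):
    """Turn '3.8.10' into a comparable tuple (3, 8, 10) -- hypothetical helper."""
    return tuple(int(p) for p in re.findall(r'\d+', v)[:3])

def pick_compatible(releases, python_version):
    """Return the newest release whose requires_python lower bound admits the
    interpreter. Only simple '>=X.Y' specifiers are handled -- a sketch,
    not a full PEP 440 parser."""
    py = parse_version(python_version)
    best = None
    for ver, spec in releases.items():
        if spec:
            m = re.search(r'>=\s*([\d.]+)', spec)
            if m and py < parse_version(m.group(1)):
                continue  # interpreter too old for this release
        if best is None or parse_version(ver) > parse_version(best):
            best = ver
    return best

# Illustrative sample data, not live PyPI responses:
releases = {
    '58.0.0': '>=3.7',
    '44.1.1': '>=2.7',
    '36.8.0': None,   # no requires_python metadata published
}
print(pick_compatible(releases, '2.7.18'))  # -> 44.1.1
print(pick_compatible(releases, '3.8.10'))  # -> 58.0.0
```

<p>The same comparison could then be ported to PowerShell, or run via <code>python -c</code> from the script.</p>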
|
<python><windows><powershell><pip><setuptools>
|
2024-03-21 15:12:26
| 0
| 14,347
|
Foad S. Farimani
|
78,200,945
| 4,928,783
|
Firebase Python Function called from Firebase Hosted App
|
<p>I would like to call a firebase function written in Python from my firebase hosted app. I've followed the documentation here (<a href="https://firebase.google.com/docs/functions/callable?gen=2nd#call_the_function" rel="nofollow noreferrer">https://firebase.google.com/docs/functions/callable?gen=2nd#call_the_function</a>) for writing the calling part of the function in my JS, and the documentation here (<a href="https://firebase.google.com/docs/functions/callable?gen=2nd" rel="nofollow noreferrer">https://firebase.google.com/docs/functions/callable?gen=2nd</a>) for how to call the app.</p>
<p>A main point I think I've seen from the documentation and writing my functions:</p>
<ol>
<li>If you want to call the function with a simple https request (using fetch or similar), then the Python firebase function uses the <code>@https_fn.on_request()</code> decorator. If you want to call it using the firebase app functions call, you use the <code>@https_fn.on_call()</code> decorator. From what I've seen, <code>on_request</code> will work (without auth), but <code>on_call</code> never works (despite being advertised as native Firebase, passing auth automagically).</li>
</ol>
<p>I've got a minimal sample code for the <code>on_call</code> decorator below. I deployed the function via the google cloud console directly (I'm also having errors deploying code from the firebase CLI). The entrypoint on google cloud console is spec'd as <code>say_hello</code>, and if I run the code with the commented <code>on_request</code> decorator I can call it successfully from an http request.</p>
<p>Python Firebase Function [function name: <code>function-hw</code>]:</p>
<pre class="lang-py prettyprint-override"><code>import functions_framework
# [START v2imports]
# Dependencies for callable functions.
from firebase_functions import https_fn, options

# Dependencies for writing to Realtime Database.
from firebase_admin import db, initialize_app, auth
# [END v2imports]

from datetime import datetime


#@https_fn.on_request(
@https_fn.on_call(
    cors=options.CorsOptions(
        cors_origins=[r"https://jr\.mysite\.app$"],
        cors_methods=["get", "post"],
    )
)
def say_hello(req: https_fn.Request) -> https_fn.Response:
    print(req.__dict__)
    return {
        "Hello": "World!",
        "Time": "CORS!",
        "Now": datetime.now()
    }
</code></pre>
<p>JS Calling code:</p>
<pre class="lang-js prettyprint-override"><code>import { initializeApp } from "https://www.gstatic.com/firebasejs/10.8.0/firebase-app.js";
import { getAuth, onAuthStateChanged, signInWithEmailAndPassword, createUserWithEmailAndPassword, signOut } from "https://www.gstatic.com/firebasejs/10.8.0/firebase-auth.js";
import { getFunctions, httpsCallable } from "https://www.gstatic.com/firebasejs/10.8.0/firebase-functions.js";

// Correct info is in my actual app
const firebaseConfig = {
    apiKey: "...",
    authDomain: "...",
    projectId: "...",
    storageBucket: "...",
    messagingSenderId: "...",
    appId: "...",
    measurementId: "..."
};

const firebaseApp = initializeApp(firebaseConfig);
const auth = getAuth(firebaseApp);
const functions = getFunctions(firebaseApp);

var callFunction = httpsCallable('function-hw');
callFunction()
    .then(function(result) {
        // Handle success
        console.log(result.data);
    })
    .catch(function(error) {
        // Handle error
        console.error('Error:', error);
    });

async function requestFunction() {
    const fcnUrl = "url for firebase function with on_request decorator";
    const response = await fetch(fcnUrl);
    const data = await response.json();
    console.log(data);
}

requestFunction()
</code></pre>
<p>When I load my site, I expect it to call the function, but I get:</p>
<pre><code>service.ts:185 Uncaught TypeError: e._url is not a function
at call (service.ts:185:59)
at service.ts:215:3
at index.js:37:1 // -> This is the `callFunction()` line in my Javascript
</code></pre>
<p>And the 'on_request' function executes as expected, returning JSON in the console:</p>
<pre><code>{Hello: 'World!', Now: 'Thu, 21 Mar 2024 15:01:07 GMT', Time: 'CORS!'}
</code></pre>
<p>This would work for me, except once I go to add auth headers it falls over.</p>
<p>So my questions:</p>
<ol>
<li>Am I doing something incorrect with the 'on_call' Firebase function?</li>
<li>Is there a way I can implement auth with 'on_request'? I've tried the 'bearer ${token}' routine already with no success.</li>
</ol>
|
<javascript><python><firebase><firebase-authentication><google-cloud-functions>
|
2024-03-21 15:05:52
| 0
| 390
|
John Robinson
|
78,200,863
| 7,631,505
|
Draw an image and a stem plot in 3d with matpltolib
|
<p>I'm trying to make a 3d plot with python that shows an image at a certain level and certain specific points as a stem plot (basically the image is a sort of 2d map and the stem plots represent the value of a certain parameter at the point in the map)</p>
<p>Current code :</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-1, 1, 101)
y = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(x, y)
z = np.exp(-(X**2) - Y**2)
x0 = [-0.5, 0, 0.5]
y0 = [-0.5, 0, 0.5]
z0 = [0.5, 1, 2]
fig, ax = plt.subplots(subplot_kw=dict(projection="3d"))
ax.contourf(
x,
y,
z,
zdir="z",
offset=-0.0,
)
ps = ax.stem(x0, y0, z0)
for p in ps:
p.set_zorder(20)
plt.show()
</code></pre>
<p>I added the <code>set_zorder()</code> part to try to put the stem plot forward, however the resulting plot looks like this:
<a href="https://i.sstatic.net/bjgNS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bjgNS.png" alt="h" /></a></p>
<p>The lines are hidden behind the 2d image, while I would like it to look like the stems were originating on the 2d plot.</p>
<p>Random image I've found on the internet to illustrate what I'm trying to achieve (with bars instead of stem plots because I couldn't find one with stem plots).
Many thanks.</p>
<p><a href="https://i.sstatic.net/5ZmHd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5ZmHd.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><matplotlib-3d>
|
2024-03-21 14:56:30
| 1
| 316
|
mmonti
|
78,200,862
| 2,989,642
|
How may I combine multiple adapters in Python requests?
|
<p>I need to make multiple calls to a web endpoint that is somewhat unreliable, so I have put together a timeout/retry strategy that I issue to a <code>requests.Session()</code> object as an adapter.</p>
<p>However, I <em>also</em> need to mount this same endpoint using a client PKCS12 certificate and a public certificate authority (verify), which I can easily perform with the <code>requests_pkcs12</code> package - this also creates an adapter.</p>
<p>So, adapter 1 looks like this, (pieced together with things I found across the internet):</p>
<pre><code># timeout.py
import requests
from requests.adapters import HTTPAdapter

DEFAULT_TIMEOUT = 30


class TimeoutAdapter(HTTPAdapter):
    def __init__(self, *args, **kwargs):
        self.timeout = DEFAULT_TIMEOUT
        if "timeout" in kwargs:
            self.timeout = kwargs["timeout"]
        super().__init__(*args, **kwargs)

    def send(self, request, **kwargs):
        timeout = kwargs.get("timeout")
        if timeout is None:
            kwargs["timeout"] = self.timeout
        return super().send(request, **kwargs)


## set up adapter
if __name__ == "__main__":
    from urllib3.util.retry import Retry

    u = 'https://some-webservice.com'
    s = requests.Session()
    retry_strategy = Retry(
        total=10,
        status_forcelist=[429, 500, 502, 503, 504],
        backoff_factor=1)
    s.mount(u, adapter=TimeoutAdapter(max_retries=retry_strategy))
</code></pre>
<p>Then, the second adapter looks like this:</p>
<pre><code>from requests import Session
from requests_pkcs12 import Pkcs12Adapter

pki_adapter = Pkcs12Adapter(
    pkcs12_filename='path/to/cert.p12',
    pkcs12_password='cert-password')

u = 'https://some-webservice.com'
s = Session()
s.mount(u, adapter=pki_adapter)
</code></pre>
<p>Both work lovely by themselves, but how would I go about merging them into a single adapter so that the session uses the PKI to authenticate, but also honors the timeout/retry strategy?</p>
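<p>One direction worth exploring (an untested sketch, not a confirmed <code>requests</code> recipe): since both adapters override <code>send()</code> and delegate via <code>super().send()</code>, Python's MRO can stack them, e.g. <code>class CombinedAdapter(TimeoutAdapter, Pkcs12Adapter)</code> — assuming <code>Pkcs12Adapter</code> cooperates with <code>super()</code>. The stand-in classes below demonstrate only the mixin mechanism, with no third-party dependencies:</p>

```python
# Illustrative sketch only: stand-in classes show how Python's MRO lets a
# timeout mixin layer over another adapter, assuming each send() calls super().
class BaseAdapter:
    def send(self, request, **kwargs):
        return f"sent {request} with {kwargs}"

class PkcsAdapter(BaseAdapter):
    """Stand-in for requests_pkcs12.Pkcs12Adapter (hypothetical behavior)."""
    def send(self, request, **kwargs):
        kwargs.setdefault('cert', 'client.p12')  # pretend certificate injection
        return super().send(request, **kwargs)

class TimeoutMixin:
    """Mirrors the question's TimeoutAdapter.send override."""
    default_timeout = 30
    def send(self, request, **kwargs):
        if kwargs.get('timeout') is None:
            kwargs['timeout'] = self.default_timeout
        return super().send(request, **kwargs)

class CombinedAdapter(TimeoutMixin, PkcsAdapter):
    """Timeout logic runs first, then PKCS12 logic, then the base send()."""
    pass

print(CombinedAdapter().send('GET /'))
```

<p>With the real classes, whether this works depends on whether <code>Pkcs12Adapter.send</code> calls <code>super().send()</code> rather than <code>HTTPAdapter.send()</code> directly — worth checking in its source before relying on it.</p>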
|
<python><python-requests>
|
2024-03-21 14:56:22
| 2
| 549
|
auslander
|
78,200,819
| 6,665,586
|
ModuleNotFoundError: No module named 'msgraph.generated.users'
|
<p>I'm trying to run a Python application that uses msgraph-sdk. This error <code>ModuleNotFoundError: No module named 'msgraph.generated.users'</code> is happening in this method</p>
<pre><code>async def get_user_groups(self, token: str) -> List[str]:
    credentials = OnBehalfOfCredential(tenant_id=TENANT_ID,
                                       client_id=CLIENT_ID,
                                       client_secret=self.aws_secret_manager_service.get_secret(),
                                       user_assertion=token)
    scopes = ['User.Read']
    graph_client = GraphServiceClient(credentials, scopes)
    result = await graph_client.me.member_of.get()
    groups = [group.id for group in result.value]
    return groups
</code></pre>
<p>Specifically in the line <code>result = await graph_client.me.member_of.get()</code>.</p>
<p>I checked the package and indeed there is no users module in it. There is a folder called "generated" with a lot of modules, but none of them is "users". The package version is <code>msgraph-sdk==1.1.0</code>. It looks like a bug in the package, but I've searched the whole internet and it seems that nobody has experienced this error before.</p>
<p>Here the full stack trace</p>
<pre><code> File "C:\Users\hesd\Documents\projects\backend\app\services\user.py", line 77, in get_user_groups
user_azure_groups = await self.azure_ad_service.get_user_groups(token.get('jwt_token'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hesd\Documents\projects\backend\app\services\azure_ad.py", line 20, in get_user_groups
result = await graph_client.me.member_of.get()
^^^^^^^^^^^^^^^
File "c:\Users\hesd\Documents\projects\backend\venv\Lib\site-packages\msgraph\graph_service_client.py", line 57, in me
from .generated.users.item.user_item_request_builder import UserItemRequestBuilder
ModuleNotFoundError: No module named 'msgraph.generated.users'
</code></pre>
|
<python><microsoft-graph-api>
|
2024-03-21 14:50:05
| 0
| 1,011
|
Henrique Andrade
|
78,200,730
| 4,009,645
|
Pandas: replace regex with string ending with tab not working
|
<p>I have the following dataframe:</p>
<pre><code>df = pd.DataFrame({'Depth':['7500', '7800', '8300', '8500'],
'Gas':['25-13 PASON', '9/8 PASON', '19/14', '56/26'],
'ID':[1, 2, 3, 4]})
</code></pre>
<p><a href="https://i.sstatic.net/FXvXI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FXvXI.png" alt="enter image description here" /></a></p>
<p>I want to add the word "PASON" to the end of any value in the Gas column that does not have it already, so that it ends up looking like this:</p>
<p><a href="https://i.sstatic.net/y82Q9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y82Q9.png" alt="enter image description here" /></a></p>
<p>It should be a simple replace using Regex, but it doesn't work. Here is my code:</p>
<pre><code>df['Gas'] = df['Gas'].replace(to_replace =r'(\d+/\d+)(\t)(\d+)', value = r'\1 PASON\2', regex = True)
</code></pre>
<p>I've checked the Regex in a Regex checker, and it works fine there, but when I add it to Pandas it doesn't work. What am I missing?</p>
<p>Thanks!</p>
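<p>For what it's worth, two properties of the pattern are easy to confirm outside pandas with the standard library: it only fires when a literal tab separates the two groups, and the replacement drops group 3 (the trailing digits), since <code>\3</code> is never referenced:</p>

```python
import re

# the pattern and replacement from the question
pattern = r'(\d+/\d+)(\t)(\d+)'
repl = r'\1 PASON\2'

print(repr(re.sub(pattern, repl, '19/14\t7')))  # tab-separated: matches, the '7' is consumed
print(repr(re.sub(pattern, repl, '19/14 7')))   # space-separated: no match, unchanged
print(repr(re.sub(pattern, repl, '19/14')))     # bare value like the sample data: no match
```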
|
<python><pandas><regex>
|
2024-03-21 14:35:08
| 3
| 1,009
|
Heather
|
78,200,693
| 4,999,991
|
pyenv-win not overriding system Python version with local setting
|
<p>I've installed <code>pyenv-win 3.1.1</code> on Windows using Chocolatey and successfully installed Python 2.6 using <code>pyenv install 2.6</code>. I then tried setting the local Python version for my project by navigating to the project directory and running <code>pyenv local 2.6</code>, which completed without errors.</p>
<p>When I check the versions with <code>pyenv versions</code>, I see:</p>
<pre class="lang-none prettyprint-override"><code>* 2.6 (set by path\to\project\.python-version)
3.8.10
</code></pre>
<p>indicating that Python 2.6 is correctly set as the local version for my project. Additionally, <code>pyenv which python</code> points to the expected Python executable at <code>C:\Users\<userName>\.pyenv\pyenv-win\versions\2.6\python.exe</code>.</p>
<p>However, when I run <code>python --version</code> in my project directory, it still points to my system Python installation (<code>Python 3.11.7</code> at <code>C:\Program Files\Python312\python.exe</code>). Running <code>where python</code> yields:</p>
<pre class="lang-none prettyprint-override"><code>C:\tools\msys64\mingw64\bin\python.exe
C:\Program Files\Python312\python.exe
C:\Users\<userName>\.pyenv\pyenv-win\shims\python
C:\Users\<userName>\.pyenv\pyenv-win\shims\python.bat
</code></pre>
<p>I expected that after running <code>pyenv local 2.6</code>, the <code>python</code> command would point to the Python 2.6 version installed via <code>pyenv-win</code>. Can anyone help me understand why <code>pyenv local 2.6</code> does not seem to override the system Python version as expected in this context?</p>
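<p>The <code>where python</code> output above already hints at the mechanism: Windows runs the first match found on <code>PATH</code>, so the shims directory has to precede the system install. A toy model of that resolution (all names hypothetical):</p>

```python
def resolve_first(cmd, path_dirs, available):
    """Return the first PATH directory providing cmd -- a toy model
    (hypothetical helper) of how Windows picks which python.exe runs."""
    for d in path_dirs:
        if cmd in available.get(d, ()):
            return d
    return None

# Mirrors the `where python` output above: the system install precedes the shims.
path = [r'C:\Program Files\Python312',
        r'C:\Users\me\.pyenv\pyenv-win\shims']
available = {d: {'python.exe'} for d in path}

print(resolve_first('python.exe', path, available))        # system dir wins
print(resolve_first('python.exe', path[::-1], available))  # shims win once moved ahead
```

<p>So the likely fix is reordering the PATH entries, but I'd still like to confirm whether <code>pyenv-win</code> is expected to handle this itself.</p>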
|
<python><windows><environment-variables><pyenv><pyenv-win>
|
2024-03-21 14:31:01
| 2
| 14,347
|
Foad S. Farimani
|
78,200,539
| 7,920,004
|
Existing column unrecognized by Delta merge
|
<p>Sample of my data I'm working on:</p>
<p><strong>Source</strong></p>
<pre><code>+---------+--------------------+-------------+--------------------+--------------------+--------------+------------------+
|store_id |type |store_status | name | owner |owner_code |store_asOfDate |
+---------+--------------------+-------------+--------------------+--------------------+--------------+------------------+
| 123 |type |not_active |name |xyz | xyz | 2024-03-20|
+---------+--------------------+-------------+--------------------+--------------------+--------------+------------------+
</code></pre>
<p><strong>Target</strong></p>
<pre><code>+---------+--------------------+-------------+--------------------+--------------------+--------------+------------------+
|store_id |type |store_status | name | owner |owner_code |store_asOfDate |
+---------+--------------------+-------------+--------------------+--------------------+--------------+------------------+
| 123 |type |active |name |xyz | xyz | 2024-03-15|
+---------+--------------------+-------------+--------------------+--------------------+--------------+------------------+
</code></pre>
<p><strong>Code snippet</strong></p>
<pre><code>target_dt.alias("target") \
    .merge(
        source=df_trusted.alias("source"),
        condition="target.store_id=source.store_id AND target.store_status=source.store_status"
    ) \
    .whenNotMatchedBySourceUpdate(
        set={
            "store_status": F.col("source.store_status"),
            "store_asOfDate": F.col("source.store_asOfDate")
        }
    ) \
    .execute()
</code></pre>
<p><strong>Expected behaviour</strong>:</p>
<ul>
<li>target's row <code>store_status</code> and <code>store_asOfDate</code> are updated.</li>
</ul>
<p><strong>Target (after merge/upsert)</strong></p>
<pre><code>+---------+--------------------+-------------+--------------------+--------------------+--------------+------------------+
|store_id |type |store_status | name | owner |owner_code |store_asOfDate |
+---------+--------------------+-------------+--------------------+--------------------+--------------+------------------+
| 123 |type |not_active |name |xyz | xyz | 2024-03-20|
+---------+--------------------+-------------+--------------------+--------------------+--------------+------------------+
</code></pre>
<p><strong>Currently</strong>, error is thrown:</p>
<pre><code>24/03/21 14:06:29 ERROR Error occured in my_method() method: [DELTA_MERGE_UNRESOLVED_EXPRESSION] Cannot resolve source.store_id in UPDATE condition given columns....
</code></pre>
<p>Please suggest where I can debug further to find the root cause.
Thanks in advance!</p>
|
<python><pyspark><delta-lake>
|
2024-03-21 14:06:38
| 1
| 1,509
|
marcin2x4
|
78,200,420
| 2,123,706
|
remove spaces from pandas column if there are 2 or 3
|
<p>I have:</p>
<pre><code>pd.DataFrame({'id':[1,2],'col1':['a b c','a b c d']})
</code></pre>
<p>I want:</p>
<pre><code>pd.DataFrame({'id':[1,1,2],'col1':['ab c', 'a bc','ab cd']})
</code></pre>
<ul>
<li>there will always be 1,2 or 3 spaces in a column</li>
<li>if there are 2 spaces (3 words), I want to duplicate the row, and alter the value to show <code>word1word2 word3</code> in one row, and see <code>word1 word2word3</code> in the second row</li>
<li>if there are 4 words (3 spaces), I want to alter the value to <code>word1word2 word3word4</code></li>
</ul>
<p>I have ~ 30 columns, and test for number of spaces with</p>
<pre><code>df['name_cnt'] = df['name'].str.count('\s+')
</code></pre>
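<p>The transformation rules above can be sketched in plain Python (the helper name is hypothetical, and since the 1-space case isn't specified, it is passed through unchanged — an assumption):</p>

```python
def collapse_variants(value):
    """Sketch of the rules described above (hypothetical helper)."""
    words = value.split(' ')
    if len(words) == 3:   # 2 spaces: two variants, so the row gets duplicated
        return [words[0] + words[1] + ' ' + words[2],
                words[0] + ' ' + words[1] + words[2]]
    if len(words) == 4:   # 3 spaces: join the outer pairs into one variant
        return [words[0] + words[1] + ' ' + words[2] + words[3]]
    return [value]        # 1 space (or none): unspecified, keep as-is

print(collapse_variants('a b c'))    # -> ['ab c', 'a bc']
print(collapse_variants('a b c d'))  # -> ['ab cd']
```

<p>In pandas, such a helper could feed something like <code>df['col1'].map(collapse_variants)</code> followed by <code>DataFrame.explode</code> to turn the two-variant lists into duplicated rows — sketched here only as a direction, not verified against all 30 columns.</p>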
|
<python><pandas>
|
2024-03-21 13:50:48
| 1
| 3,810
|
frank
|
78,200,263
| 3,590,067
|
Python: how to fill a ternary plot
|
<p>I want to make a legend in python as a ternary plot to compare the values of three matrices A,B,C. This what I am doing.</p>
<pre><code>import numpy as np
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt


def create_color_map(A, B, C):
    """
    Creates a color map based on the values in three matrices (A, B, C) representing
    pixel values between 0 and 1.

    Args:
        A, B, C (numpy.ndarray): Arrays of the same shape representing the three values
            for each pixel. Values range from 0 to 1.

    Returns:
        numpy.ndarray: A color map with RGB values for each pixel, shaped (N, M, 3),
        where N and M are the dimensions of the input arrays.
    """
    # Handle potential shape mismatches
    if not np.all(A.shape == B.shape == C.shape):
        raise ValueError("Arrays A, B, and C must have the same shape.")

    # Ensure values are within 0-1 range (clamp if necessary)
    A = np.clip(A, 0, 1)
    B = np.clip(B, 0, 1)
    C = np.clip(C, 0, 1)

    # Calculate weights for each color component (black, red, green)
    black_weights = 1 - A - B
    red_weights = B
    green_weights = C

    # Combine weights and scale to 0-255 range (assuming uint8 output)
    colors = np.dstack((black_weights * 255, red_weights * 255, green_weights * 255)).astype(np.uint8)
    return colors


# Create sample matrices (A, B, C)
A = np.array([[0.2, 0.7, 0.1],
              [0.0, 0.5, 0.9],
              [1.0, 0.3, 0.0]])
B = np.array([[0.9, 0.1, 0.6],
              [0.3, 0.0, 0.2],
              [0.0, 0.8, 0.3]])
C = np.array([[0.0, 0.2, 0.8],
              [0.5, 0.6, 0.1],
              [0.3, 0.0, 0.4]])

# Generate the color map
color_map = create_color_map(A, B, C)

# Visualize the color map using matplotlib
plt.imshow(color_map)
plt.title("Color Map Visualization")

# Create legend
vertices = [(0, 0), (1, 0), (0.5, np.sqrt(3)/2)]  # Triangle vertices
colors = ['black', 'red', 'green']  # Vertex colors

plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/h5YJU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h5YJU.png" alt="enter image description here" /></a></p>
<p>Now I would like to create a legend as a ternary map where I have color combinations for different values of A,B,C. This is what I am doing.</p>
<pre><code>import ternary
import matplotlib.pyplot as plt

def plot_ternary_legend():
    """Plot ternary legend with vertices red, black, and green."""
    fig, tax = ternary.figure(scale=1)
    tax.boundary()
    tax.set_title("Ternary Legend", fontsize=20)

    # Define vertices
    red = (1, 0, 0)    # Red
    black = (0, 0, 0)  # Black
    green = (0, 1, 0)  # Green

    # Plot vertices
    tax.scatter([red, black, green], marker='o', color=['red', 'black', 'green'], label=['Red', 'Black', 'Green'])

    # Label vertices
    tax.legend()
    tax.gridlines(multiple=0.1, color="blue", alpha=0.5)
    tax.ticks(axis='lbr', linewidth=1, multiple=0.1, tick_formats="%.1f")
    tax.clear_matplotlib_ticks()
    plt.show()

# Plot ternary legend
plot_ternary_legend()
</code></pre>
<p><a href="https://i.sstatic.net/4kbWH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4kbWH.png" alt="enter image description here" /></a></p>
<p>I would like to fill the subtriangles inside the ternary plot with colors representing the mix of red, black, and green. How can I do that?</p>
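<p>Not a full answer, but a sketch of one matplotlib-only approach: sample the triangle densely in barycentric coordinates and color each sample as a mix of the three vertex colors (the coordinate mapping and marker size below are arbitrary choices, not taken from the question):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

# Sample the triangle in barycentric coordinates (wr, wg, wk) and color each
# sample as a mix of the red, green, and black vertices.
n = 60
xs, ys, rgb = [], [], []
for i in range(n + 1):
    for j in range(n + 1 - i):
        wr, wg = i / n, j / n        # red and green weights
        wk = 1 - wr - wg             # black weight
        # map to 2D: red=(0,0), green=(1,0), black=(0.5, sqrt(3)/2)
        xs.append(wg + 0.5 * wk)
        ys.append(wk * np.sqrt(3) / 2)
        rgb.append((wr, wg, 0.0))    # black contributes zero to both channels
fig, ax = plt.subplots()
ax.scatter(xs, ys, c=rgb, s=8)
ax.set_aspect('equal')
ax.set_axis_off()
```

<p>With a large enough <code>n</code> the scatter looks like a continuously filled triangle; <code>matplotlib.tri.tripcolor</code> with Gouraud shading would be a smoother alternative.</p>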
|
<python><matplotlib><ternary>
|
2024-03-21 13:25:58
| 0
| 7,315
|
emax
|
78,199,905
| 3,943,162
|
`expected string or buffer` error when running retriever within an LCEL chain, even working fine independently
|
<p>Working with LangChain, I created the following retriever based on the <a href="https://python.langchain.com/docs/modules/data_connection/retrievers/parent_document_retriever" rel="nofollow noreferrer">Parent Document Retriever docs</a>. I basically copy and paste the example, but I'm using just one .txt file instead of two.</p>
<pre class="lang-py prettyprint-override"><code>retriever_2 = ParentDocumentRetriever(
    vectorstore=vectorstore_2,
    docstore=store_2,
    child_splitter=child_splitter_2,
    parent_splitter=parent_splitter_2,
    search_kwargs={"k": 1}
)
</code></pre>
<p>When I use it as <code>retriever_2.invoke("justice breyer")</code>, it works on its own, but not within an LCEL chain like the one below.</p>
<pre class="lang-py prettyprint-override"><code>full_chain = RunnableParallel({"user_input": RunnablePassthrough(), "retriever": retriever_2}) | llm
full_result = full_chain.invoke({"user_input": "justice breyer"}, config=cfg)
</code></pre>
<p>It fails with the error <code>expected string or buffer</code>. I know that this is a more general Python error, not LangChain, but I don't know what to do after this error.</p>
|
<python><langchain><data-retrieval><py-langchain>
|
2024-03-21 12:21:06
| 0
| 1,789
|
James
|
78,199,880
| 458,742
|
why does TemporaryDirectory change type within a "with" block?
|
<p>In python3 console</p>
<pre><code>>>> x1=tempfile.TemporaryDirectory()
>>> print(type(x1))
<class 'tempfile.TemporaryDirectory'>
>>> with tempfile.TemporaryDirectory() as x2:
... print(type(x2))
...
<class 'str'>
</code></pre>
<p>Why is <code>x1</code> a <code>TemporaryDirectory</code> and <code>x2</code> a <code>str</code>?</p>
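<p>A quick check of the mechanics: the <code>with</code> statement binds whatever <code>__enter__</code> returns, and <code>TemporaryDirectory.__enter__</code> returns the directory path (<code>self.name</code>) as a <code>str</code> rather than the object itself:</p>

```python
import tempfile

td = tempfile.TemporaryDirectory()
try:
    entered = td.__enter__()      # what `with ... as x2` binds to x2
    assert entered == td.name     # the path string, not the object
    assert isinstance(entered, str)
finally:
    td.cleanup()
```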
|
<python><python-3.x>
|
2024-03-21 12:17:09
| 1
| 33,709
|
spraff
|
78,199,832
| 12,297,666
|
Keras Hyperparameter Tuning with Functional API
|
<p>I am trying to user Keras Tuner to optimize some hyperparameters, and this is the code:</p>
<pre><code>def criar_modelo(hp):
    lstm_input = Input(shape=(x_train_lstm.shape[1], 1), name='LSTM_Input_Layer')
    static_input = Input(shape=(x_train_static.shape[1], ), name='Static_Input_Layer')

    # LSTM 1
    lstm_layer_1 = LSTM(units=hp.Int('units_lstm_layer_1', min_value=128, max_value=256, step=64), activation='tanh', return_sequences=False, name='1_LSTM_Layer')(lstm_input)

    # Static 1
    static_layer_1 = Dense(units=hp.Int('units_static_layer_1', min_value=64, max_value=192, step=64), activation=hp.Choice('activation', ['relu', 'tanh']), name='1_Static_Layer')(static_input)

    # Static 2 e/ou 3
    for i in range(hp.Int('num_static_layers', 1, 3)):
        static_layer = Dense(units=hp.Int(f'static_units_{i}', 128, 192, step=32), activation=hp.Choice('activation', ['relu', 'tanh']), name=f'{i+1}_Static_Layer')(static_layer_1)
        static_layer_1 = static_layer

    concatenar = Concatenate(axis=1, name='Concatenate')([lstm_layer_1, static_layer_1])

    dense_1 = Dense(units=4*len(np.unique(y_train)), activation='relu', name='1_Dense_Layer')(concatenar)
    dense_2 = Dense(units=2*len(np.unique(y_train)), activation='relu', name='2_Dense_Layer')(dense_1)
    saida = Dense(units=len(np.unique(y_train)), activation='softmax', name='Output_Layer')(dense_2)

    model = Model(inputs=[lstm_input, static_input], outputs=[saida])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

    return model
</code></pre>
<p>But, i get the following error:</p>
<pre><code>Traceback (most recent call last):
Cell In[11], line 27
tuner = keras_tuner.GridSearch(
File ~\miniconda3\envs\tf-gpu\lib\site-packages\keras_tuner\src\tuners\gridsearch.py:420 in __init__
super().__init__(oracle, hypermodel, **kwargs)
File ~\miniconda3\envs\tf-gpu\lib\site-packages\keras_tuner\src\engine\tuner.py:122 in __init__
super().__init__(
File ~\miniconda3\envs\tf-gpu\lib\site-packages\keras_tuner\src\engine\base_tuner.py:132 in __init__
self._populate_initial_space()
File ~\miniconda3\envs\tf-gpu\lib\site-packages\keras_tuner\src\engine\base_tuner.py:192 in _populate_initial_space
self._activate_all_conditions()
File ~\miniconda3\envs\tf-gpu\lib\site-packages\keras_tuner\src\engine\base_tuner.py:149 in _activate_all_conditions
self.hypermodel.build(hp)
Cell In[11], line 23 in criar_modelo
model = Model(inputs=[lstm_input, static_input], outputs=[saida])
File ~\miniconda3\envs\tf-gpu\lib\site-packages\tensorflow\python\training\tracking\base.py:629 in _method_wrapper
result = method(self, *args, **kwargs)
File ~\miniconda3\envs\tf-gpu\lib\site-packages\keras\engine\functional.py:146 in __init__
self._init_graph_network(inputs, outputs)
File ~\miniconda3\envs\tf-gpu\lib\site-packages\tensorflow\python\training\tracking\base.py:629 in _method_wrapper
result = method(self, *args, **kwargs)
File ~\miniconda3\envs\tf-gpu\lib\site-packages\keras\engine\functional.py:229 in _init_graph_network
nodes, nodes_by_depth, layers, _ = _map_graph_network(
File ~\miniconda3\envs\tf-gpu\lib\site-packages\keras\engine\functional.py:1049 in _map_graph_network
raise ValueError(
ValueError: The name "1_Static_Layer" is used 2 times in the model. All layer names should be unique.
</code></pre>
<p>I don't understand why I get the error, since those <code>i_Static_Layer</code> layers are defined inside a for loop with <code>range(1, 3)</code>.</p>
<p>If I try to <code>print('i =', i)</code>, it indeed shows <code>i = 0</code>, but why?</p>
<p>I can always change <code>name=f'{i+1}_Static_Layer'</code> to <code>name=f'{i+2}_Static_Layer'</code> to skip the error. But still, I would like to understand what's going on.</p>
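<p>A plain-Python sketch of what seems to happen during Keras Tuner's initial space population (it is an assumption about the tuner's internals that the model is first built with every hyperparameter at its minimum, so <code>num_static_layers</code> would be 1):</p>

```python
# range(n) yields 0 .. n-1, so with num_static_layers == 1 the loop body runs
# once with i == 0, and f'{i+1}_Static_Layer' reproduces "1_Static_Layer",
# the name already used by the first static Dense layer.
num_static_layers = 1
names = ['1_Static_Layer']           # name of the first static layer
for i in range(num_static_layers):
    names.append(f'{i + 1}_Static_Layer')
assert names.count('1_Static_Layer') == 2
```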
|
<python><keras><keras-tuner>
|
2024-03-21 12:06:46
| 1
| 679
|
Murilo
|
78,199,615
| 10,722,752
|
How to filter group's max and min rows using `transform`
|
<p>I am working on a task wherein, I need to <strong>filter in</strong> rows that contain the group's max and min values and filter out other rows. This is to understand how the values change at each decile.</p>
<pre><code>np.random.seed(0)
df = pd.DataFrame({'id' : range(1,31),
'score' : np.random.uniform(size = 30)})
df
id score
0 1 0.548814
1 2 0.715189
2 3 0.602763
3 4 0.544883
4 5 0.423655
5 6 0.645894
6 7 0.437587
7 8 0.891773
8 9 0.963663
9 10 0.383442
10 11 0.791725
11 12 0.528895
12 13 0.568045
13 14 0.925597
14 15 0.071036
15 16 0.087129
16 17 0.020218
17 18 0.832620
18 19 0.778157
19 20 0.870012
20 21 0.978618
21 22 0.799159
22 23 0.461479
23 24 0.780529
24 25 0.118274
25 26 0.639921
26 27 0.143353
27 28 0.944669
28 29 0.521848
29 30 0.414662
</code></pre>
<p>I then add the decile column using:</p>
<pre><code>df['decile'] = pd.qcut(df['score'], 10, labels=False)
</code></pre>
<p>Now I tried both:</p>
<pre><code>df.transform((df['score'] == df.groupby('decile')['score'].min()) or (df['score'] == df.groupby('decile')['score'].max()))
</code></pre>
<p>and</p>
<pre><code>df.transform(df['score'].eq(df.groupby('decile')['score'].min().values).any() or df['score'].eq(df.groupby('decile')['score'].max().values).any())
</code></pre>
<p>But both are not working, can someone please help with this.</p>
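<p>For reference, the usual pattern is to let <code>transform</code> broadcast each group's min and max back to the original index, so the comparison is row-aligned and can be used directly as a boolean mask; a self-contained sketch with the same sample data:</p>

```python
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame({'id': range(1, 31), 'score': np.random.uniform(size=30)})
df['decile'] = pd.qcut(df['score'], 10, labels=False)

# transform('min') / transform('max') return Series aligned with df's index,
# so elementwise comparison works; combine the two conditions with |, not `or`.
grp = df.groupby('decile')['score']
mask = df['score'].eq(grp.transform('min')) | df['score'].eq(grp.transform('max'))
extremes = df[mask]
# With 30 distinct scores in 10 deciles, each decile keeps its min and max row.
```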
|
<python><pandas>
|
2024-03-21 11:33:54
| 1
| 11,560
|
Karthik S
|
78,199,605
| 10,211,480
|
libxml2 and libxslt development packages issue in Redhat Linux
|
<p>I am trying to install the Python module shareplum on Python 3.8. My machine is RHEL 6.10, but I get the error below when using the command:</p>
<p><code>pip3.8 install shareplum</code></p>
<pre><code> × Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [4 lines of output]
<string>:67: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
Building lxml version 5.1.0.
Building without Cython.
Error: Please make sure the libxml2 and libxslt development packages are installed.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>I tried the command below to install them, but it is not working. Requesting a big help. Kindly suggest.</p>
<pre><code>[root@xyz Python-3.8.16]# yum install libxslt-devel libxml2-devel
Loaded plugins: product-id, security, subscription-manager
Setting up Install Process
No package libxslt-devel available.
No package libxml2-devel available.
Error: Nothing to do
</code></pre>
|
<python><linux><redhat><libxml2><libxslt>
|
2024-03-21 11:32:16
| 0
| 343
|
Ashu
|
78,199,468
| 525,865
|
Beautifulsoup: iterate over a list from a to z and parse the data in order to store it in a df
|
<p>I am currently ironing out a very simple parser that goes from a to z over a member list. We have the member list here:</p>
<p>see: <a href="https://vvonet.vvo.at/vvonet_mitgliederverzeichnisneu" rel="nofollow noreferrer">https://vvonet.vvo.at/vvonet_mitgliederverzeichnisneu</a></p>
<p>note: we have to open the link "kontaktinformationen" and scrape the data there to a pandas df</p>
<p>I think that I can do this with Python, BeautifulSoup, and requests, and either print the result to the screen or store it in a DataFrame.</p>
<p>First of all, the script should fetch the member list page, extracts the links to individual member pages, visits each member's "kontaktinformationen" page, and subsequently it should extract the contact information.</p>
<p>Finally, I think it is best to store the contact information in a DataFrame.</p>
<p>Finally, I want to be able to print the DataFrame to the screen or save it to a CSV file.</p>
<p>Here is my attempt:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd

# first, we send a GET request to the member list page
url = "https://vvonet.vvo.at/vvonet_mitgliederverzeichnisneu"
response = requests.get(url)

# here a check if the request was successful
if response.status_code == 200:
    # Parse the HTML content of the page
    soup = BeautifulSoup(response.content, "html.parser")

    # Find all member links
    member_links = soup.find_all("a", class_="font1")

    # Initialize a list to store data
    member_data = []

    # Iterate over member links
    for member_link in member_links:
        # Get the URL of the "kontaktinformationen" page
        member_url = "https://vvonet.vvo.at" + member_link["href"] + "/kontaktinformationen"

        # Send a GET request to the member's "kontaktinformationen" page
        member_response = requests.get(member_url)

        # Check if the request was successful
        if member_response.status_code == 200:
            # Parse the HTML content of the page
            member_soup = BeautifulSoup(member_response.content, "html.parser")

            # Find the contact information section
            contact_info_div = member_soup.find("div", class_="contact")

            # Check if the contact information section exists
            if contact_info_div:
                # Extract the contact information
                contact_info_text = contact_info_div.get_text(separator="\n", strip=True)
                member_data.append(contact_info_text)
            else:
                member_data.append("Contact information not found")
        else:
            member_data.append(f"Failed to retrieve contact information for {member_link.text.strip()}")

    # Create a DataFrame
    df = pd.DataFrame(member_data, columns=["Contact Information"])

    # Display the DataFrame
    print(df)

    # Alternatively, save the DataFrame to a CSV file
    # df.to_csv("member_contact_information.csv", index=False)
else:
    print("Failed to retrieve the member list page.")
</code></pre>
<p>But at the moment I get an empty DataFrame:</p>
<pre><code>Empty DataFrame
Columns: [Contact Information]
Index: []
</code></pre>
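<p>One thing worth checking offline is whether the <code>a.font1</code> selector actually matches the HTML that <code>requests</code> receives (the page may render its member list with JavaScript, in which case the links are simply absent from the raw HTML). A self-contained check against a static snippet (the markup here is made up):</p>

```python
from bs4 import BeautifulSoup

html = '<a class="font1" href="/member/1">Member 1</a><a href="/other">x</a>'
soup = BeautifulSoup(html, "html.parser")
links = soup.find_all("a", class_="font1")
# Only the anchor carrying class "font1" is matched.
hrefs = [a["href"] for a in links]
```

<p>If the same <code>find_all</code> call on <code>response.content</code> returns an empty list, the selector (or the server-rendered HTML) is the problem, not the DataFrame code.</p>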
|
<python><pandas><dataframe><request>
|
2024-03-21 11:12:04
| 2
| 1,223
|
zero
|
78,198,984
| 14,739,428
|
immediately throw sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
|
<p>I encountered a very strange issue where I consistently receive the following exception at a fixed point in my test data when using the db.session.execute('insert into table_a select * from table_b where xxx') statement within a loop.</p>
<pre><code>sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
</code></pre>
<p>I believe this is unrelated to connection counts or timeout settings. Due to the large join operation, executing the loop 150 times takes about 5 seconds, and the exception occurs on exactly the 150th iteration every time. The same logic against the same MySQL server is very fast in Java and has never been a problem there. Furthermore, the total amount of data is very small: only about 5,000 rows, less than 300 KB.</p>
<p>In Python, I perform a <code>db.session.commit()</code> at the end of each iteration, but it does not help.</p>
<p>The connection loss exception is thrown immediately upon reaching the 150th iteration, and I'm not sure what's causing it.</p>
<p>Here is the test code:</p>
<pre><code>normal_date_pattern = '%Y-%m-%d'
day_start_hour_format_pattern = '%Y-%m-%d %H:00:00'
day_end_hour_format_pattern = '%Y-%m-%d %H:59:59'

while actual_end_time > start_time:
    start_current_hour_str = start_time.strftime(day_start_hour_format_pattern)
    end_current_hour_str = start_time.strftime(day_end_hour_format_pattern)

    # log when we reach 0 o'clock
    if start_current_hour_str.endswith('00:00:00'):
        current_app.logger.info(f'processing date {start_time.strftime(normal_date_pattern)}')

    # insert into db
    db.session.execute(
        text(
            """
            INSERT INTO table_1
            SELECT * FROM table_2
            WHERE table_2.start_time>=:start_time
              AND table_2.end_time<=:end_time
            ON DUPLICATE KEY UPDATE
              snapshot_date = snapshot_date,
              snapshot_date_loc = VALUES(snapshot_date_loc)
            """
        ),
        params={
            'start_time': start_current_hour_str,
            'end_time': end_current_hour_str
        }
    )
    db.session.commit()

    # add one hour to process next hour
    start_time = start_time + timedelta(hours=1)
</code></pre>
<p>This code is running in a flask application, but even I extract it to an individual script and using individual engine and sessionmaker, it crashes as always with the same error.</p>
|
<python><mysql><flask><flask-sqlalchemy>
|
2024-03-21 09:58:37
| 1
| 301
|
william
|
78,198,787
| 10,211,480
|
SSL module not available in Python3.8
|
<p>I have RHEL6.10 machine where I used below steps to install python3.8</p>
<pre><code>wget https://www.python.org/ftp/python/3.8.16/Python-3.8.16.tar.xz
tar xf Python-3.8.16.tar.xz
cd Python-3.8.16
./configure
make altinstall
pip3.8 install shareplum
</code></pre>
<p>While using the pip command to install any module, I am getting an "SSL module is not available" error:</p>
<pre><code>[root@xyzPython-3.8.16]# pip3.8 install shareplum
WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/shareplum/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/shareplum/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/shareplum/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/shareplum/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/shareplum/
Could not fetch URL https://pypi.org/simple/shareplum/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/shareplum/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
</code></pre>
<p>I checked whether OpenSSL is installed, and it is available on the machine. Requesting a big help if anyone can point out the missing piece.</p>
<pre><code>[root@xyzPython-3.8.16]# openssl version
OpenSSL 1.0.1e-fips 11 Feb 2013
[root@xyzPython-3.8.16]# which openssl
/usr/bin/openssl
[root@xyzPython-3.8.16]# whereis openssl
openssl: /usr/bin/openssl /usr/lib64/openssl /usr/include/openssl /opt/splunkforwarder/bin/openssl /usr/share/man/man1/openssl.1ssl.gz
[root@xyzPython-3.8.16]#
</code></pre>
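<p>One common cause (an assumption, not verified on this box): Python 3.8's <code>ssl</code> module requires OpenSSL 1.0.2 or newer, while RHEL 6.10 ships 1.0.1e, so <code>configure</code> silently skips building <code>_ssl</code>. A hedged sketch of building a private, newer OpenSSL and pointing <code>configure</code> at it (version numbers and paths are illustrative):</p>

```shell
# Build a private OpenSSL (Python 3.8 needs >= 1.0.2; 1.1.1 shown here)
cd /usr/local/src
wget https://www.openssl.org/source/openssl-1.1.1w.tar.gz
tar xf openssl-1.1.1w.tar.gz
cd openssl-1.1.1w
./config --prefix=/usr/local/openssl shared
make && make install

# Rebuild Python against it (--with-openssl exists since Python 3.7)
cd /path/to/Python-3.8.16
./configure --with-openssl=/usr/local/openssl
make && make altinstall
```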
|
<python><python-3.x><ssl><pip><openssl>
|
2024-03-21 09:30:09
| 1
| 343
|
Ashu
|
78,198,539
| 2,801,669
|
How to distinguish between Base Class and Derived Class in generics using Typing and Mypy
|
<p>Consider the following code:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar
import dataclasses
@dataclasses.dataclass
class A:
pass
@dataclasses.dataclass
class B(A):
pass
T = TypeVar("T", A, B)
def fun(
x1: T,
x2: T,
) -> int:
if type(x1) != type(x2):
raise TypeError("must be same type!")
if type(x1) == A:
return 5
elif type(x1) == B:
return 10
else:
raise TypeError("Type not handled")
fun(x1=A(), x2=A()) # OK
fun(x1=B(), x2=B()) # OK
fun(x1=B(), x2=A()) # Will throw TypeError, how can I get mypy to say this is an error?
fun(x1=A(), x2=B()) # Will throw TypeError, how can I get mypy to say this is an error?
</code></pre>
<p>Mypy is not seeing any problem here. It seems like it is always interpreting the passed object as a base class object of type <code>A</code>.</p>
<p>Is there a way to make the generic even more strict in the sense that it is sensitive to the exact class type? Such that if <code>x1</code> is of type <code>B</code>, then also <code>x2</code> must be exactly of type <code>B</code>? If <code>x1</code> is of type <code>A</code> then also <code>x2</code> must be exactly of type <code>A</code>?</p>
|
<python><generics><mypy><python-typing>
|
2024-03-21 08:47:02
| 1
| 1,080
|
newandlost
|
78,198,322
| 13,491,504
|
Plotting and solving three related ODEs in python
|
<p>I have a problem that might be a little more mathematical but the crux is that I want ot solve it in python and plot it. I have three ODEs which are related to one antoher in the following way:</p>
<pre><code>x''(t)=b*x'(t)+c*y'(t)+d*z'(t)+e*z(t)+f*y(t)+g*x(t)
y''(t)=q*x'(t)+h*y'(t)+i*z'(t)+p*z(t)+l*y(t)+m*x(t)
z''(t)=a*x'(t)+w*y'(t)+v*z'(t)+u*z(t)+o*y(t)+n*x(t)
</code></pre>
<p>How would I solve them in order to plot them in a 3D graph via their accelerations?
I know that I have to solve them; the difficulty lies in the fact that they are second-order ODEs, interlinked through their dependency on one another.</p>
<p>For some additional info, here is the code (it doesn't really work that well; feel free to try it in a different way):</p>
<pre><code>from sympy import symbols, Function, Eq
import sympy as sp
import numpy as np
from scipy.integrate import solve_ivp
t = symbols('t')
x = Function('x')(t)
y = Function('y')(t)
z = Function('z')(t)
b, c, d, e, f, g = symbols('b c d e f g')
q, h, i, p, l, m = symbols('q h i p l m')
a, w, v, u, o, n = symbols('a w v u o n')
eqx = b*x.diff(t) + c*y.diff(t) + d*z.diff(t) + e*z + f*y + g*x
eqy = q*x.diff(t) + h*y.diff(t) + i*z.diff(t) + p*z + l*y + m*x
eqz = a*x.diff(t) + w*y.diff(t) + v*z.diff(t) + u*z + o*y + n*x
#First I replaced the derivatives to turn it into a first order ODE
dx, dy, dz = symbols('dx dy dz')
eqx = eqx.subs({ sp.Derivative(y,t):dy, sp.Derivative(z,t):dz, sp.Derivative(x,t):dx,})
eqy = eqy.subs({ sp.Derivative(y,t):dy, sp.Derivative(z,t):dz, sp.Derivative(x,t):dx,})
eqz = eqz.subs({ sp.Derivative(y,t):dy, sp.Derivative(z,t):dz, sp.Derivative(x,t):dx,})
#to solve them nummerically I started to lambdify them:
freex = eqx.free_symbols
freey = eqy.free_symbols
freez = eqz.free_symbols
Xl = sp.lambdify(freex , eqx , 'numpy')
Yl = sp.lambdify(freey , eqy , 'numpy')
Zl = sp.lambdify(freez , eqz , 'numpy')
#from here on I tried to solve them, but had trouble with the dependencies and the arguments
#(so here is only the line for x)
Xo = [0]
ExampleValues = np.array([0, 0, 0, 1, 9, 0, 1, 2, 4])
space= np.linspace(0, 10, 50)
Solx = solve_ivp(Xl, (0, 10), Xo, t_eval=space, args=ExampleValues)
</code></pre>
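<p>As a purely numerical alternative (skipping SymPy entirely), the standard trick is to rewrite the three second-order equations as six first-order ones and hand them to <code>solve_ivp</code>; the coefficient matrices below are made-up example values, not taken from the question:</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

# The system is s'' = D @ s' + K @ s with s = (x, y, z); D holds the
# coefficients of the first derivatives, K those of the functions themselves.
D = np.array([[-0.1, 0.02, 0.0],
              [0.01, -0.2, 0.03],
              [0.0, 0.05, -0.15]])
K = np.array([[-1.0, 0.1, 0.0],
              [0.1, -1.5, 0.2],
              [0.0, 0.2, -2.0]])

def rhs(t, state):
    # state = [x, y, z, x', y', z']; return [x', y', z', x'', y'', z'']
    s, v = state[:3], state[3:]
    return np.concatenate([v, D @ v + K @ s])

y0 = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0]    # initial positions and velocities
t_eval = np.linspace(0, 10, 200)
sol = solve_ivp(rhs, (0, 10), y0, t_eval=t_eval)
x, y, z = sol.y[0], sol.y[1], sol.y[2]  # ready for a 3D plot via mplot3d
```

<p>The three solution arrays can then be passed to <code>ax.plot(x, y, z)</code> on a <code>projection='3d'</code> axis.</p>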
<p>Thanks for your answer in advance!</p>
|
<python><scipy><sympy><ode>
|
2024-03-21 08:02:13
| 1
| 637
|
Mo711
|
78,198,291
| 3,336,423
|
How to correctly inherit from an abc interface and another class preserving restriction to create objects missing implementation?
|
<p>I'm trying to combine a class implementing an <code>abc</code> interface with another class.</p>
<p>Let's start with <code>abc</code> simple example:</p>
<pre><code>import abc

class Interface(abc.ABC):
    @abc.abstractmethod
    def pure_virtual_func(self):
        raise NotImplementedError

class Impl(Interface):
    def __init__(self):
        pass

impl = Impl()
</code></pre>
<p>This piece of code will fail when impl is created: <code>Can't instantiate abstract class Impl with abstract methods pure_virtual_func</code>. This is great, that's what you expect.</p>
<p>Now, when <code>Impl</code> needs to derive from another class:</p>
<pre><code>import abc
from PySide6.QtWidgets import QApplication, QWidget

class Interface(abc.ABC):
    @abc.abstractmethod
    def pure_virtual_func(self):
        raise NotImplementedError

class Impl(QWidget, Interface):
    def __init__(self):
        pass

app = QApplication()
impl = Impl()
</code></pre>
<p>You get the error: <code>Error: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases</code>.</p>
<p>According to some posts (<a href="https://stackoverflow.com/questions/11276037/resolving-metaclass-conflicts">Resolving metaclass conflicts</a> and <a href="https://stackoverflow.com/questions/68304453/abstract-class-inheriting-from-abc-and-qmainwindow">Abstract class inheriting from ABC and QMainWindow</a>), it is recommended to do that:</p>
<pre><code>import abc
from PySide6.QtWidgets import QApplication, QWidget

class Interface(abc.ABC):
    @abc.abstractmethod
    def pure_virtual_func(self):
        raise NotImplementedError

class Meta(type(QWidget), type(Interface)):
    pass

class Impl(QWidget, Interface, metaclass=Meta):
    def __init__(self):
        pass

app = QApplication()
impl = Impl()
</code></pre>
<p>OK, now the <code>metaclass</code> error is gone. But you don't get the error <code>Can't instantiate abstract class Impl with abstract methods pure_virtual_func</code> anymore, now the <code>impl</code> is simply created, and you'll get the <code>NotImplementedError</code> exception if <code>pure_virtual_func</code> is called.</p>
<p>The great <code>abc</code> feature of detecting unimplemented pure virtual methods is gone. How can one derive from <code>Interface</code> and another class (<code>QWidget</code> in my example) and preserve the detection of unimplemented pure virtual methods?</p>
<p>There are many posts explaining how to make the code work with metaclass machinery, but I found none that preserves the <code>abc</code> capability to detect unimplemented pure virtual methods.</p>
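<p>For what it's worth, outside Qt the combined-metaclass recipe does preserve the check. A quick experiment with a plain Python base class under a custom metaclass (so the observed failure plausibly comes from Shiboken replacing instance creation in C, since the "Can't instantiate abstract class" error is normally raised by <code>object.__new__</code>; that attribution to Shiboken is an assumption):</p>

```python
import abc

class OtherMeta(type):
    # Stand-in for a third-party metaclass such as Shiboken's (an assumption).
    pass

class PlainBase(metaclass=OtherMeta):
    pass

class Interface(abc.ABC):
    @abc.abstractmethod
    def pure_virtual_func(self):
        raise NotImplementedError

class Meta(OtherMeta, type(Interface)):  # combined metaclass, as in the posts
    pass

class Impl(PlainBase, Interface, metaclass=Meta):
    def __init__(self):
        pass

try:
    Impl()  # Impl does not implement pure_virtual_func
    abstract_check_preserved = False
except TypeError:  # "Can't instantiate abstract class Impl ..."
    abstract_check_preserved = True
```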
|
<python><abc>
|
2024-03-21 07:55:34
| 1
| 21,904
|
jpo38
|
78,198,143
| 11,023,647
|
Python: Creating Zip file from Minio objects results in duplicate entries for each file
|
<p>In my application, I need to get files from Minio storage and create a Zip-file from them. Some files might be really large, so I'm trying to write them in chunks to be able to handle the process more efficiently. The result, however, is a zip file with multiple entries with the same file name. I assume these are the chunks. How can I combine the chunks so that I would only have the original file in the Zip-file? Or is there some better way to handle writing large files into Zip?</p>
<p>This is the code block where I write the chunks:</p>
<pre><code>zip_buffer = io.BytesIO()
with zipfile.ZipFile(zip_buffer, "w") as zip_file:
    for url in minio_urls:
        file_name = url.split("/")[-1]

        # Retrieve the Minio object
        minio_object, object_name = get_object_from_minio(url)
        stream = minio_object.stream()

        while True:
            chunk = next(stream, None)  # Read the next chunk
            if chunk is None:
                break
            zip_file.writestr(file_name, chunk)
</code></pre>
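<p>For what it's worth, each <code>writestr</code> call creates a whole new archive member, which would explain the duplicates. Since Python 3.6, <code>ZipFile.open(name, mode="w")</code> returns a writable stream for a single member, so chunks can be appended to one entry. A self-contained sketch (the chunk generator is a hypothetical stand-in for <code>minio_object.stream()</code>):</p>

```python
import io
import zipfile

def chunks_from_source():
    # Hypothetical stand-in for minio_object.stream(): yields byte chunks.
    yield b"hello "
    yield b"world"

zip_buffer = io.BytesIO()
with zipfile.ZipFile(zip_buffer, "w") as zf:
    # Open ONE archive member and stream every chunk into it.
    with zf.open("file.txt", mode="w") as member:
        for chunk in chunks_from_source():
            member.write(chunk)

# The archive now contains a single entry holding all the chunks.
with zipfile.ZipFile(zip_buffer) as zf:
    names = zf.namelist()
    data = zf.read("file.txt")
```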
|
<python><stream><zip><minio>
|
2024-03-21 07:22:06
| 1
| 379
|
lr_optim
|
78,198,048
| 678,342
|
Start chat with context
|
<pre><code>model = GenerativeModel("gemini-1.0-pro", generation_config={"temperature": 0}, tools=[retail_tool])

context = [
    Content(role="user", parts=[
        Part(text="System prompt: You are an assistant who can process json and answers off that."),
        Part(text=f"json is provided here {json.dumps(data)}")
    ]),
    Content(role="model", parts=[Part(text="understood")])
]

chat2 = model.start_chat(history=context)
</code></pre>
<p>I am not able to provide history to start a chat conversation. I am getting the following error:</p>
<pre><code>ValueError: history must be a list of Content objects.
</code></pre>
<p>I am passing <code>Content</code> objects in the list, but it is not happy. Ideally this method should have accepted a list of dicts anyway, to keep it simple. Any idea how to solve it? I guess I am not creating the right data type. Just need a Python expert to answer this. The relevant function signature is linked below.</p>
<p>Documentation is here: <a href="https://github.com/googleapis/python-aiplatform/blob/main/vertexai/generative_models/_generative_models.py#L602" rel="nofollow noreferrer">https://github.com/googleapis/python-aiplatform/blob/main/vertexai/generative_models/_generative_models.py#L602</a></p>
|
<python><google-gemini><google-generativeai>
|
2024-03-21 06:59:17
| 2
| 2,230
|
Richeek
|
78,197,808
| 10,141,885
|
Python NetworkX graphviz layout - RuntimeWarning about gvplugin_pango.dll dependencies
|
<p>After installing pygraphviz via conda, and trying to create graph:</p>
<pre><code># graph: nx.DiGraph
pos = nx.nx_agraph.graphviz_layout(graph, prog='dot', ...)
# pos[node] ...
</code></pre>
<p>positions are calculated normally, but the next warning is shown:</p>
<blockquote>
<p>agraph.py:1405: RuntimeWarning: Warning: Could not load "*\Library\bin\gvplugin_pango.dll"
It was found, so perhaps one of its dependents was not. Try ldd.</p>
</blockquote>
<p>Downgrading or upgrading graphviz and pygraphviz in conda was not possible, and installing via pip fails due to the missing header 'graphviz/cgraph.h' (MSVS build).</p>
<p>Versions:<br />
env: Python 3.8.19<br />
conda pygraphviz: 1.9<br />
conda graphviz: 3.0.0</p>
<p>Also noticed that, by the <a href="https://networkx.org/documentation/stable/reference/drawing.html" rel="nofollow noreferrer">docs</a>, <code>nx_agraph.graphviz_layout</code> appears to be close to <code>nx.nx_pydot.pydot_layout</code>, which does not cause a warning.<br />
But a command-line test of the pydot command shows that the warning is still there, just not propagated into the debug output.</p>
<p>What would be the proper way to fix the warning?</p>
|
<python><networkx><graphviz>
|
2024-03-21 06:03:23
| 1
| 1,033
|
halt9k
|
78,197,752
| 22,860,226
|
Firebase , Active Directory - Will AD users get created in Firebase as well?
|
<p>I am reading about integrating Azure AD with Firebase so that our corporate customers can use our system using their accounts.
My question is:</p>
<p>When a user with an email a@x.com signs in using AD for the first time, will a User(User with uid, etc) get created in Firebase?</p>
|
<python><firebase><firebase-authentication><saml><firebase-admin>
|
2024-03-21 05:48:24
| 2
| 411
|
JTX
|
78,197,681
| 5,439,546
|
Pyppeteer closed unexpectedly for specific sites which has password alert box
|
<p>I am trying to open a web page which has alertbox when opening,</p>
<p>When we set <code>await page.setRequestInterception(True)</code>, it takes forever and never finishes.</p>
<p>I also tried dismissing the dialog box, but nothing works. Here is my code:</p>
<pre><code>from pyppeteer import launch
from utils.agents import get_user_agent
import asyncio

browser = await launch(
    headless=True,
    ignoreHTTPSErrors=True,
    acceptInsecureCerts=True,
    # autoClose=False,
    # handleSIGINT=False,
    # handleSIGTERM=False,
    # handleSIGHUP=False,
    args=[
        "--no-sandbox",
        "--disable-gpu",
        "--ignore-certificate-errors",
        "--allow-running-insecure-content",
        "--disable-web-security",
        "--disable-setuid-sandbox",
        '--disable-popup-blocking',
        '--disable-dev-shm-usage',
        # '--single-process',
        # '--disable-gpu',
        '--no-zygote'
        # "--user-data-dir=/tmp/chromium"
    ])

page = await browser.newPage()
user_agent = get_user_agent()
await page.setUserAgent(user_agent)
await page.setRequestInterception(True)

url = "http://mogilitycapital.com"
netcalls = []

async def handle_request_redirects(request, url):
    if request.resourceType in ['stylesheet', 'css', 'image', 'font']:
        await request.abort('blockedbyclient')
    else:
        await request.continue_()

async def intercept_network_response(response, netcalls):
    netcalls.append(
        {
            "url": response.url,
            "method": response.request.method,
            "headers": response.headers,
            "status": response.status
        }
    )

async def handle_dialog(dialog):
    if dialog.type == 'alert':
        await dialog.dismiss()
    elif dialog.type == 'confirm':
        await dialog.accept()

page.on('request', lambda req: asyncio.ensure_future(handle_request_redirects(req, url)))
page.on('response', lambda response: asyncio.ensure_future(intercept_network_response(response, netcalls)))
page.on('dialog', lambda dialog: asyncio.ensure_future(handle_dialog(dialog)))

resp = await page.goto(url, timeout=60000)
</code></pre>
<p>When we comment <code>await page.setRequestInterception(True)</code> code works fine,</p>
<p>Our use case is to scrape more than a million domains, get the raw HTML, and understand the network calls (so we need interception). It is not just this website: some websites show this basic-auth prompt when we request them, and we just need to click cancel on it, but that is what I am unable to do. If you open the URL provided in my example, you will see the authentication box; we just need to click cancel and extract whatever HTML comes.</p>
<p>Another expected result:</p>
<p>I debugged the program and it blocks inside the network interceptor function. Maybe we need some logic to abort the request for these kinds of authentication websites; that is also OK.</p>
<p>anything is fine</p>
<ol>
<li>get the html after cancelling</li>
<li>block these kind of authentication websites</li>
</ol>
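<p>For option 2 (blocking these sites), one pragmatic route is to recognise the basic-auth challenge from the response itself: the server answers 401 with a <code>WWW-Authenticate: Basic ...</code> header before any prompt appears. A minimal sketch of such a check (the helper name is mine, not pyppeteer API; pyppeteer exposes response headers as a dict of lower-cased names):</p>

```python
def is_basic_auth_challenge(status, headers):
    """Return True when a response looks like an HTTP basic-auth challenge
    (RFC 7617): status 401 plus a WWW-Authenticate header that starts
    with 'Basic'. `headers` maps lower-cased header names to values."""
    www_auth = headers.get('www-authenticate', '')
    return status == 401 and www_auth.lower().startswith('basic')

# e.g. call this inside intercept_network_response and, when it returns
# True, record the domain as auth-protected and stop waiting on the page
print(is_basic_auth_challenge(401, {'www-authenticate': 'Basic realm="site"'}))  # True
print(is_basic_auth_challenge(200, {}))                                          # False
```

<p>Note that the credentials prompt is browser UI, not a JavaScript dialog, so <code>page.on('dialog')</code> will never fire for it; detecting the 401 (or wrapping <code>page.goto</code> in a hard <code>asyncio.wait_for</code> timeout) is the more reliable way to move on.</p>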
|
<python><pyppeteer>
|
2024-03-21 05:23:13
| 0
| 6,169
|
Pyd
|
78,197,665
| 13,347,225
|
Django models field not getting generated in database
|
<p>I am creating a basic Django project. I am using the default db sqlite3. I have created an app product inside my main folder. So, my folder structure is like this:</p>
<pre><code>-project
-djangoproject
-products
-models.py
-manage.py
</code></pre>
<p>I am adding my new product model in models.py, but when I run python3 manage.py makemigrations and check the migration file, the model is created with an id field but none of my product's other fields appear in the migration. I also ran python3 manage.py migrate and checked my database: only the id field is present in the product table. I am not able to understand the cause of this problem.</p>
<p>Here is the look at my migration file and models.py file:</p>
<pre><code>from django.db import models
class Product(models.Model):
name: models.CharField(max_length=200)
price: models.FloatField()
stock: models.IntegerField()
image_url: models.CharField(max_length=2083)
</code></pre>
<pre><code># Generated by Django 5.0.3 on 2024-03-21 04:46
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
('products', '0004_delete_offer_delete_product'),
]
operations = [
migrations.CreateModel(
name='Product',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
],
),
]
</code></pre>
<p>Django version: 5</p>
<p>If anyone knows the reason then please comment or post your answer. It will be really helpful for me.</p>
<p>Thanks in advance.</p>
|
<python><django><database><sqlite>
|
2024-03-21 05:17:02
| 1
| 393
|
Piyush Mittal
|
78,197,479
| 673,018
|
python gRPC authentication issues with a self signed certificates
|
<p>Referring to the <a href="https://grpc.github.io/grpc/python/_modules/grpc.html#ssl_channel_credentials" rel="nofollow noreferrer">grpc docs</a> and attempting secure authentication: I already have the certificate (.cert file), server URL, and credentials. Moreover, I am able to consume the gRPC proto using <code>API-DOG</code> without any issues.</p>
<p>However, while writing a Python gRPC client, I am facing issues at the authentication level (<code>StatusCode.UNAUTHENTICATED</code>). It seems like I am still missing some pieces or there might be a compatibility issue between Python and gRPC with self-signed certificates.</p>
<pre><code>import base64
import grpc
from grpc import ssl_channel_credentials, secure_channel, metadata_call_credentials

# Read certificate:
with open('./foo.crt', 'rb') as file:
certificate = file.read()
channel_creds = ssl_channel_credentials(root_certificates=certificate, private_key=None, certificate_chain=None)
channel = secure_channel(target=server_url, credentials=channel_creds, options=None, compression=None)
token = base64.b64encode(f'{username}:{password}'.encode('utf-8')).decode('ascii')
auth = GrpcAuth(f'Basic {token}')
metadata_call_credentials(metadata_plugin=auth)
# Create stub for the gRPC service
stub = report_service_pb2_grpc.ABCServiceStub(channel=channel)
# Generate request data
req_data = input_param_to_protobuf(request_data)
# Check response
try:
response = stub.def(req_data)
print("Response received:", response)
except grpc.RpcError as e:
print("Error:", e)
</code></pre>
<p>Error:</p>
<pre><code>Error: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAUTHENTICATED
details = "Received http2 header with status: 401"
</code></pre>
|
<python><python-3.x><grpc><grpc-python>
|
2024-03-21 04:04:19
| 0
| 13,094
|
Mandar Pande
|
78,197,195
| 10,853,071
|
Pandas categorical columns to factorize tables
|
<p>I am working on a huge denormalized table on a SQL server (10 columns x 130m rows). Take this as data example :</p>
<pre><code>import pandas as pd
import numpy as np
data = pd.DataFrame({
'status' : ['pending', 'pending','pending', 'canceled','canceled','canceled', 'confirmed', 'confirmed','confirmed'],
'clientId' : ['A', 'B', 'C', 'A', 'D', 'C', 'A', 'B','C'],
'partner' : ['A', np.nan,'C', 'A',np.nan,'C', 'A', np.nan,'C'],
'product' : ['afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard'],
'brand' : ['brand_1', 'brand_2', 'brand_3','brand_1', 'brand_2', 'brand_3','brand_1', 'brand_3', 'brand_3'],
'gmv' : [100,100,100,100,100,100,100,100,100]})
data = data.astype({'partner':'category','status':'category','product':'category', 'brand':'category'})
</code></pre>
<p>As you can see, many of its columns are categories/strings that could be factorized (replaced by a small integer id that joins back to a lookup table).</p>
<p>My question is whether there is an easy way to extract a lookup "dataframe" from each category column and factorize the main table, so that a single query transmits fewer bytes. Is there an easy library for it?</p>
<p>I would expect to get this output:</p>
<pre><code> data = pd.DataFrame({
'status' : ['1', '1','1', '2','2','2', '3', '3','3'],
'clientId' : ['1', '2', '3', '1', '4', '3', '1', '2','3'],
'partner' : ['A', np.nan,'C', 'A',np.nan,'C', 'A', np.nan,'C'],
'product' : ['afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard','afiliates', 'pre-paid', 'giftcard'],
'brand' : ['brand_1', 'brand_2', 'brand_3','brand_1', 'brand_2', 'brand_3','brand_1', 'brand_3', 'brand_3'],
'gmv' : [100,100,100,100,100,100,100,100,100]})
status_df = {1 : 'pending', 2:'canceled', 3:'confirmed'}
clientid = {1 : 'A', 2:'B', 3:'C', 4:'D'}
</code></pre>
<p>and so on!</p>
<p><strong>Bonus question! My table is big, so I probably would need to apply something using DASK.</strong></p>
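<p>For the normalization step itself, pandas categoricals already carry everything needed: the integer codes (<code>.cat.codes</code>) and the code-to-label mapping (<code>.cat.categories</code>). A sketch on a reduced version of the example (the 1-based offset is only there to match the expected output above):</p>

```python
import pandas as pd

data = pd.DataFrame({
    'status': ['pending', 'pending', 'canceled', 'confirmed'],
    'clientId': ['A', 'B', 'A', 'C'],
    'gmv': [100, 100, 100, 100],
}).astype({'status': 'category', 'clientId': 'category'})

lookups = {}
for col in data.select_dtypes('category'):
    codes = data[col].cat.codes + 1          # 1-based; NaN (code -1) would become 0
    lookups[col] = dict(enumerate(data[col].cat.categories, start=1))
    data[col] = codes                        # main table now holds small ints

print(lookups['status'])  # {1: 'canceled', 2: 'confirmed', 3: 'pending'}
```

<p>For the out-of-core case, dask.dataframe offers a similar <code>categorize()</code> step, and the code-to-label lookups can be written out as the small join tables.</p>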
|
<python><sql><pandas><dask><categorical-data>
|
2024-03-21 02:20:45
| 2
| 457
|
FábioRB
|
78,197,178
| 11,276,356
|
3D RGB-D Panorama to 3D mesh
|
<p>I am projecting a Panorama image back to 3D, however I am struggling with the projection.</p>
<p><a href="https://i.sstatic.net/z8Lrr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z8Lrr.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/WTGqB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WTGqB.png" alt="Panorama" /></a></p>
<pre><code>import argparse
import os
import cv2
import numpy as np
import open3d
class PointCloudReader():
def __init__(self, resolution="full", random_level=0, generate_color=False, generate_normal=False):
self.random_level = random_level
self.resolution = resolution
self.generate_color = generate_color
self.point_cloud = self.generate_point_cloud(self.random_level, color=self.generate_color)
def generate_point_cloud(self, random_level=0, color=False, normal=False):
coords = []
colors = []
# Load and resize depth image
depth_image_path = 'DPT/output_monodepth/basel_stapfelberg_panorama.png'
depth_img = cv2.imread(depth_image_path, cv2.IMREAD_ANYDEPTH)
depth_img = cv2.resize(depth_img, (depth_img.shape[1] // 2, depth_img.shape[0] // 2))
# Load and resize RGB image
equirectangular_image = 'DPT/input/basel_stapfelberg_panorama.png'
rgb_img = cv2.imread(equirectangular_image)
rgb_img = cv2.resize(rgb_img, (depth_img.shape[1], depth_img.shape[0]))
# Define parameters for conversion
focal_length = depth_img.shape[1] / 2
sensor_width = 36
sensor_height = 24
y_ticks = np.deg2rad(np.arange(0, 360, 360 / depth_img.shape[1]))
x_ticks = np.deg2rad(np.arange(-90, 90, 180 / depth_img.shape[0]))
# Compute spherical coordinates
theta, phi = np.meshgrid(y_ticks, x_ticks)
depth = depth_img + np.random.random(depth_img.shape) * random_level
x_sphere = depth * np.cos(phi) * np.sin(theta)
y_sphere = depth * np.sin(phi)
z_sphere = depth * np.cos(phi) * np.cos(theta)
# Convert spherical coordinates to camera coordinates
x_cam = x_sphere.flatten()
y_cam = -y_sphere.flatten()
z_cam = z_sphere.flatten()
coords = np.stack((x_cam, y_cam, z_cam), axis=-1)
if color:
colors = rgb_img.reshape(-1, 3) / 255.0
points = {'coords': coords}
if color:
points['colors'] = colors
return points
def visualize(self):
pcd = open3d.geometry.PointCloud()
pcd.points = open3d.utility.Vector3dVector(self.point_cloud['coords'])
if self.generate_color:
pcd.colors = open3d.utility.Vector3dVector(self.point_cloud['colors'])
open3d.visualization.draw_geometries([pcd])
def main(args):
reader = PointCloudReader(random_level=10, generate_color=True, generate_normal=False)
reader.visualize()
if __name__ == "__main__":
main(None)
</code></pre>
<p>How can I get a valid 3d representation from the panorama? As we can see in my reprojection the distortion of the panorama affects my 3d scene.</p>
|
<python><projection><360-panorama>
|
2024-03-21 02:14:26
| 0
| 4,122
|
Hannah Stark
|
78,197,152
| 14,109,040
|
How do I fill in days of the week between two days
|
<p>I have a list like <code>['Monday', 'Thursday']</code> with two days of the week. I want to be able to retrieve a list of days of the week between these two -> <code>['Monday', 'Tuesday', 'Wednesday', 'Thursday']</code></p>
<ol>
<li><p>I am happy to write a custom function in Python to do this. But wondering if there is an easier way that I might be missing.</p>
</li>
<li><p>How can I retrieve the day index for a given day of the week (day of the week name - example:'Wednesday') in Python?</p>
</li>
</ol>
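<p>The standard library's <code>calendar</code> module already holds the ordered day names, which answers both parts (note that <code>day_name</code> is locale-dependent, and the wrap-around branch covers ranges like Friday to Monday):</p>

```python
import calendar

DAYS = list(calendar.day_name)  # ['Monday', 'Tuesday', ..., 'Sunday']

def days_between(start, end):
    # DAYS.index(name) is the day index asked about in question 2
    i, j = DAYS.index(start), DAYS.index(end)
    return DAYS[i:j + 1] if i <= j else DAYS[i:] + DAYS[:j + 1]

print(days_between('Monday', 'Thursday'))
# ['Monday', 'Tuesday', 'Wednesday', 'Thursday']
```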
|
<python>
|
2024-03-21 02:05:29
| 1
| 712
|
z star
|
78,197,003
| 4,549,682
|
How to use warmupTrigger in Python V1 azure function
|
<p>I'm confused by how to actually use a Python warmup trigger in Python V1 functions. My understanding is:</p>
<ul>
<li>follow the example <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-warmup?tabs=isolated-process%2Cnodejs-v4&pivots=programming-language-python" rel="nofollow noreferrer">here</a> (except don't use the function name "warmup" as it says, use "main" as it shows in the example)</li>
<li>remove the type hint because <a href="https://github.com/MicrosoftDocs/azure-docs/issues/62804#issuecomment-1180075019" rel="nofollow noreferrer">it is a bug</a></li>
<li>it only works on "premium" (now elastic premium and dedicated?) but definitely not on consumption plans</li>
</ul>
<p>We are on the Elastic Premium plan.</p>
<p>However, when I do all this and try all sorts of permutations (adding different various configs, etc) I can never get the warmup trigger to actually work. When I view the execution count for the warmup function it shows 0 even with scaling events. When I query the logs with:</p>
<pre><code>requests
| project
timestamp,
id,
operation_Name,
success,
resultCode,
duration,
operation_Id,
cloud_RoleName,
invocationId=customDimensions['InvocationId']
| where timestamp > ago(30d)
| where cloud_RoleName =~ 'efwusprodfunc04' and operation_Name =~ 'warmup'
| order by timestamp desc
| take 20
</code></pre>
<p>It shows nothing. When I query the logs for the info log it should have from the warmup trigger, nothing.</p>
<p>So I don't think the warmup trigger is actually working. It also seems like there can be some cold-start issues if it scales out quickly.</p>
<ul>
<li>Is there somewhere else I should look in logs to see if it's actually working?</li>
<li>Is there something I'm missing with how to set up a warmupTrigger in Python V1 azure functions programming model?</li>
<li>Does warmupTrigger even work any more?</li>
<li>Is there something else I should change in logging configs so I can see the logs from the warmupTrigger, which are from python logging?</li>
</ul>
|
<python><azure><azure-functions>
|
2024-03-21 01:01:07
| 1
| 16,136
|
wordsforthewise
|
78,196,999
| 4,443,784
|
How does psycopg2.pool.SimpleConnectionPool manage the connections in the pool
|
<p>As a connection pool, SimpleConnectionPool should take care of the connection management in the pool. For example,</p>
<ol>
<li>keep alive the connections in the pool</li>
<li>evict the closed connections</li>
<li>close the long idle connections and create new connections upon needed</li>
</ol>
<p>But, when I look at the source code of SimpleConnectionPool, I didn't find the related code for the above connection management.</p>
<p>Not sure if I have missed something; I can't believe that a connection pool doesn't take care of connection management.</p>
<p>I wrote the following code to test the connection management, and an error occurred at <code>c = conn.cursor()</code>: <code>psycopg2.InterfaceError: connection already closed</code>, which means that the pool still hands out the already-closed connection.</p>
<p>So I would ask: does SimpleConnectionPool take care of connection management at all? If not, then I can't use it in my long-running web application.</p>
<p>I am really confused about whether SimpleConnectionPool can be used in a long-running web application; some articles/blogs say that it can, such as:</p>
<blockquote>
<p>psycopg2 is a popular PostgreSQL adapter for Python that provides support for connection pooling through its psycopg2.pool module. Let's see how we can utilize psycopg2 to implement connection pooling in a web application. This is a simple FastAPI-based web application that manages blog posts.</p>
</blockquote>
<p>from <a href="https://medium.com/datauniverse/optimizing-database-interaction-in-web-applications-connection-pooling-with-psycopg2-and-c56b37d155f8" rel="nofollow noreferrer">https://medium.com/datauniverse/optimizing-database-interaction-in-web-applications-connection-pooling-with-psycopg2-and-c56b37d155f8</a></p>
<p>Waiting for a psycopg2 expert's explanation for this question, thanks!
Or: which connection pool can be used in a web application?</p>
<pre><code>import psycopg2.pool
import time
conf = {
'dbname': "postgres",
'user': 'postgres',
'password': '1234',
'host': 'localhost',
'port': 5432,
'sslmode': 'disable'
}
pool = psycopg2.pool.SimpleConnectionPool(
1, 2, user='postgres', password='root',
host='localhost', port='5432', database='postgres')
for i in range(5):
conn = pool.getconn()
# dsn = conn.dsn
c = conn.cursor()
sql = """
select a,c from t_010 where c = '{}'
""".format(i * 10)
c.execute(sql)
rows = c.fetchall()
c.close()
pool.putconn(conn)
# close conn so that conn in the pool can NOT be reused and should be evicted
conn.close()
for row in rows:
print(row[0])
print("start to sleep for {}".format(i))
time.sleep(2)
print("end to sleep for {}".format(i))
</code></pre>
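<p>Reading the same source: <code>putconn</code> does discard a connection that is <em>already</em> closed, but nothing re-checks health at <code>getconn</code> time, so a connection closed (or killed by the server) while sitting idle in the pool is handed out as-is. That is exactly what the test provokes by calling <code>conn.close()</code> after <code>putconn</code>. The common workaround is check-out validation. A minimal, library-agnostic sketch of the idea (class and method names are mine, not psycopg2 API; with psycopg2 you would test the real connection's <code>.closed</code> flag the same way):</p>

```python
class ValidatingPool:
    """Tiny sketch of check-out validation: discard dead connections at
    getconn() time and open replacements, so callers never receive a
    connection whose .closed flag is set (psycopg2 connections expose one)."""

    def __init__(self, connect):
        self._connect = connect  # factory that opens a new connection
        self._idle = []          # idle connections kept for reuse

    def getconn(self):
        while self._idle:
            conn = self._idle.pop()
            if not conn.closed:      # healthy: hand it out
                return conn
            # dead: drop it silently and try the next idle one
        return self._connect()       # nothing reusable: open a fresh one

    def putconn(self, conn):
        if not conn.closed:
            self._idle.append(conn)


class FakeConn:  # stand-in for a DB connection, just for the demo
    def __init__(self):
        self.closed = False


pool = ValidatingPool(FakeConn)
c1 = pool.getconn()
pool.putconn(c1)
c1.closed = True            # simulate the connection dying while idle
c2 = pool.getconn()
print(c2 is c1)             # False: the dead connection was replaced
```

<p>For real web apps, psycopg2's <code>ThreadedConnectionPool</code> is the thread-safe variant (still without validation), and psycopg 3's separate <code>psycopg_pool</code> package does perform connection health management for you.</p>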
|
<python><psycopg2>
|
2024-03-21 01:00:37
| 0
| 6,382
|
Tom
|
78,196,921
| 2,289,030
|
Is there a way to "feature flag" python dependencies?
|
<p>In Rust, you can use <a href="https://doc.rust-lang.org/cargo/reference/features.html" rel="nofollow noreferrer">features</a> to gate entire sections of your application behind flags you set at build/dependency install time.</p>
<p>This allows you to, for example, enable "all" features, which is great for developers, or enable just "client" features, for released software (that may not need all the extra packages you use when debugging, like <code>deepdiff</code>).</p>
<p>Is there a way to do this in any of the major Python package managers (pip, poetry, anaconda)?</p>
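<p>The closest analogue in Python packaging is "extras" (optional dependency groups). They gate <em>dependencies</em> at install time; unlike Cargo features they do not conditionally compile code, so optional imports still need a try/except at runtime. A sketch of the pyproject.toml side (project name and group names are illustrative):</p>

```toml
[project]
name = "myapp"
version = "0.1.0"
dependencies = ["requests"]      # always installed

[project.optional-dependencies]
dev = ["deepdiff", "pytest"]     # extra debugging/dev tools
client = ["rich"]
# convenience meta-extra; self-referencing extras need a reasonably recent pip
all = ["myapp[dev,client]"]
```

<p>Users then pick a flavour with <code>pip install ".[dev]"</code> or <code>pip install "myapp[all]"</code>. Poetry exposes the same idea via <code>[tool.poetry.extras]</code>; conda has no direct equivalent, though separate environment files approximate it.</p>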
|
<python><pip><python-packaging><python-poetry><anaconda3>
|
2024-03-21 00:31:33
| 0
| 968
|
ijustlovemath
|
78,196,675
| 12,304,000
|
vercel: Error: Size of uploaded file exceeds 300MB
|
<p>I am trying to deploy a basic python app which uses the Deepface.analyze function. When trying to deploy the app on Vercel, I get this error:</p>
<pre><code>Size of uploaded file exceeds 300MB
</code></pre>
<p>Is it because of large libraries like deepFace and tensorflow? Or could it because of my code structure?</p>
<p>I have a basic python flask app with this structure:</p>
<pre><code>static
--styles.css
templates
--index.html
app.py
requirements.txt
</code></pre>
|
<python><tensorflow><machine-learning><vercel><deepface>
|
2024-03-20 23:05:34
| 1
| 3,522
|
x89
|
78,196,632
| 1,711,271
|
In a text file, count lines from string 'foo' to first empty line afterwards. Raise exception if 'foo' not found
|
<p><strong>Background</strong>: I want to read some data from a text file, into a <code>polars</code> dataframe. The data starts at the line containing the string <code>foo</code>, and stops at the first empty line afterwards. Example file <code>test.txt</code>:</p>
<pre><code>stuff to skip
more stuff to skip
skip me too
foo bar foobar
1 2 A
4 5 B
7 8 C
other stuff
stuff
</code></pre>
<p><code>pl.read_csv</code> has args <code>skip_rows</code> and <code>n_rows</code>. Thus, if I can find the line number of <code>foo</code> and the line number of the first empty line afterwards, I should be able to read the data into a <code>polars</code> dataframe. How can I do that? I'm able to find <code>skip_rows</code>:</p>
<pre><code>from pathlib import Path
file_path = Path('test.txt')
with open(file_path, 'r') as file:
skip_rows = 0
n_rows = 0
for line_number, line in enumerate(file, 1):
if 'foo' in line:
skip_rows = line_number - 1
</code></pre>
<p>But how can I also find <code>n_rows</code> without scanning the file again? Also, the solution must handle the case where there's no line containing <code>foo</code>, e.g.</p>
<pre><code>stuff to skip
more stuff to skip
skip me too
1 2 A
4 5 B
7 8 C
other stuff
stuff
</code></pre>
<p>In that case, I would like to either return a value indicating that <code>foo</code> was not found, or raise an exception so that the caller knows something went wrong (maybe a <code>ValueError</code> exception?).</p>
<p><strong>EDIT</strong>: I forgot an edge case. Sometimes the data may continue until the end of the file:</p>
<pre><code>stuff to skip
more stuff to skip
skip me too
foo bar foobar
1 2 A
4 5 B
7 8 C
</code></pre>
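<p>Both numbers can be collected in the same pass: remember where <code>foo</code> was found, then count non-empty lines until a blank line or end of file (which also covers the runs-to-EOF edge case). A sketch of such a helper (the name and <code>marker</code> parameter are mine):</p>

```python
def locate_block(file_path, marker='foo'):
    """Single pass over the file: return (skip_rows, n_rows) suitable for
    pl.read_csv. `skip_rows` lines precede the header containing `marker`;
    `n_rows` data lines follow it up to the first empty line (or EOF).
    Raises ValueError when `marker` never appears."""
    skip_rows = None
    n_rows = 0
    with open(file_path) as file:
        for i, line in enumerate(file):
            if skip_rows is None:
                if marker in line:
                    skip_rows = i          # header index == rows to skip
            elif line.strip():
                n_rows += 1                # data line inside the block
            else:
                break                      # first empty line ends the block
    if skip_rows is None:
        raise ValueError(f"marker {marker!r} not found in {file_path}")
    return skip_rows, n_rows
```

<p>With the first example file this returns <code>(3, 3)</code>, ready for <code>pl.read_csv(file_path, skip_rows=skip_rows, n_rows=n_rows, ...)</code>; separator details are left to <code>read_csv</code> itself.</p>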
|
<python><file-io><python-polars><text-parsing>
|
2024-03-20 22:53:15
| 6
| 5,726
|
DeltaIV
|
78,196,623
| 12,304,000
|
the layer sequential has never been called and thus has no defined input
|
<p>I am running a simple script within my Anaconda virtual env</p>
<pre><code>from deepface import DeepFace
face_analysis = DeepFace.analyze(img_path = "face3.jpeg")
print(face_analysis)
</code></pre>
<p>But I keep getting this error.</p>
<pre><code>Action: age: 25%|██████████████████████████▊ | 1/4 [00:02<00:06, 2.08s/it]
Traceback (most recent call last):
File "C:\Users\Ctrend.pk\Cheer-Check\test2.py", line 9, in <module>
analysis = DeepFace.analyze(img_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Ctrend.pk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\deepface\DeepFace.py", line 222, in analyze
return demography.analyze(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Ctrend.pk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\deepface\modules\demography.py", line 157, in analyze
apparent_age = modeling.build_model("Age").predict(img_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Ctrend.pk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\deepface\modules\modeling.py", line 57, in build_model
model_obj[model_name] = model()
^^^^^^^
File "C:\Users\Ctrend.pk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\deepface\extendedmodels\Age.py", line 32, in __init__
self.model = load_model()
^^^^^^^^^^^^
File "C:\Users\Ctrend.pk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\deepface\extendedmodels\Age.py", line 61, in load_model
age_model = Model(inputs=model.input, outputs=base_model_output)
^^^^^^^^^^^
File "C:\Users\Ctrend.pk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\keras\src\ops\operation.py", line 228, in input
return self._get_node_attribute_at_index(0, "input_tensors", "input")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Ctrend.pk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\keras\src\ops\operation.py", line 259, in _get_node_attribute_at_index
raise ValueError(
ValueError: The layer sequential_1 has never been called and thus has no defined input.
</code></pre>
<p>Deepface version: 0.0.87</p>
<p>tensorflow
Version: 2.16.1</p>
<p>I think it fetches the age but then doesn't proceed. What am I missing out on?</p>
|
<python><tensorflow><machine-learning><deep-learning><deepface>
|
2024-03-20 22:50:10
| 1
| 3,522
|
x89
|
78,196,391
| 8,157,102
|
Persistently tracking user input asynchronously using Python
|
<p>I'm seeking a Python script that persistently awaits user input for some time at each interval. Should the user fail to provide input within this timeframe, the script should automatically execute a predefined operation and continue this process indefinitely. In other words, I require a routine that perpetually runs, yet swiftly evaluates any character input by the user.</p>
<p>To achieve this, I've crafted a code snippet wherein the <code>getch</code> function captures a character input from the user. The program awaits the user's input within a timeout period. If the user fails to provide input within this interval, a notification is issued to indicate that the time has elapsed.</p>
<pre class="lang-py prettyprint-override"><code>import threading
def getch(input_str: list) -> None:
"""Gets a single character"""
try:
import msvcrt
input_str.append(msvcrt.getch().decode("utf-8"))
except ImportError:
import sys
import termios
import tty
fd = sys.stdin.fileno()
oldsettings = termios.tcgetattr(fd)
try:
tty.setraw(fd)
ch = sys.stdin.read(1)
finally:
termios.tcsetattr(fd, termios.TCSADRAIN, oldsettings)
input_str.append(ch)
def input_with_timeout(timeout):
print("You have {} seconds to enter something:".format(timeout))
input_str = []
input_thread = threading.Thread(target=getch, args=(input_str,))
input_thread.start()
input_thread.join(timeout)
if input_thread.is_alive():
print("Time's up! Performing default operation.")
else:
print("You entered:", input_str)
while True:
input_with_timeout(5)
</code></pre>
<p>When the user provides input within the allotted time, the system functions correctly. However, should time elapse, two issues arise: firstly, there is a glitch in text display, resulting in misplacement of the text, and secondly, the user must input their response multiple times, as a single attempt does not seem to register the input accurately.</p>
<p><a href="https://i.sstatic.net/WqhKZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WqhKZ.png" alt="enter image description here" /></a>
Because libraries such as <a href="https://pynput.readthedocs.io/en/latest/limitations.html" rel="nofollow noreferrer">pynput</a> and <a href="https://pypi.org/project/keyboard/" rel="nofollow noreferrer">keyboard</a> have limitations in some environments, I cannot use them.</p>
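<p>On POSIX systems the root cause can be avoided instead of worked around: rather than a thread blocked in a raw-mode read (which is what garbles the terminal and swallows the next keystroke), <code>select()</code> can wait on stdin with a timeout and only read once data is ready. A line-based sketch (POSIX only; on Windows <code>select</code> works just on sockets; the <code>stream</code> parameter exists only so the function can be exercised without a terminal):</p>

```python
import select
import sys

def read_line_with_timeout(timeout, stream=None):
    """Wait up to `timeout` seconds for a line of input; return it without
    the trailing newline, or None on timeout. Nothing is read on timeout,
    so no stray thread is left behind consuming keystrokes."""
    stream = stream if stream is not None else sys.stdin
    ready, _, _ = select.select([stream], [], [], timeout)
    if ready:
        return stream.readline().rstrip('\n')
    return None
```

<p>The <code>while True</code> loop then calls <code>read_line_with_timeout(5)</code> and performs the default operation when it returns <code>None</code>. Single-character capture would still need the <code>tty.setraw</code> dance, but only after <code>select</code> reports readiness, so nothing stays blocked.</p>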
|
<python><python-3.x><multithreading><asynchronous>
|
2024-03-20 21:42:30
| 1
| 961
|
kamyarmg
|
78,196,339
| 5,495,134
|
How to solve "OperationFailure: PlanExecutor error ... embeddings is not indexed as knnVector"
|
<p>I'm trying to perform a vector search using pymongo, here is my index definition:</p>
<pre class="lang-json prettyprint-override"><code>{
"fields": [
{
"numDimensions": 1536,
"path": "embeddings",
"similarity": "cosine",
"type": "vector"
},
{
"path": "company",
"type": "filter"
},
{
"path": "age",
"type": "filter"
}
]
}
</code></pre>
<p>and here is my python code</p>
<pre class="lang-py prettyprint-override"><code>
db.collection.aggregate([
{
"$vectorSearch": {
"index": "test_index",
"path": "embeddings",
"queryVector": embeddings,
"numCandidates": 100,
"limit": 5
}
}
])
</code></pre>
<p>When I run this code I get the error:</p>
<pre class="lang-none prettyprint-override"><code>pymongo.errors.OperationFailure: PlanExecutor error during aggregation
:: caused by :: embeddings is not indexed as knnVector
</code></pre>
<p>I tried updating the index definition to <code>"type": "knnVector"</code> but I'm still getting the same error, any idea of how to solve this issue?</p>
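<p>Two things are worth checking here, offered as general Atlas behaviour rather than a certain diagnosis: <code>$vectorSearch</code> only works against an index created as an Atlas <em>Vector Search</em> index (the <code>fields</code>/<code>"type": "vector"</code> definition above), so the same JSON saved under the classic Search index type can produce exactly this error; and on older cluster versions that predate <code>$vectorSearch</code>, vectors were instead indexed with a classic Search <code>knnVector</code> mapping and queried via <code>$search</code>/<code>knnBeta</code>, which looks like:</p>

```json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "embeddings": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      }
    }
  }
}
```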
|
<python><mongodb><aggregation-framework><pymongo>
|
2024-03-20 21:30:47
| 1
| 787
|
Rodrigo A
|
78,196,316
| 6,392,523
|
PyTorch Segementation Fault (core dumped) when moving Pytorch tensor to GPU
|
<p>I have a machine with RTX 6000 ADA GPUs.</p>
<p>We used to have CUDA version 11.x and I used the following image:
<code>nvcr.io/nvidia/pytorch:21.04-py3</code>
(I use PyTorch 1.x).</p>
<p>However, it seems that drivers on our machine were updated to the following -</p>
<p>Driver information from nvidia-smi command:</p>
<p><code>NVIDIA-SMI 535.161.07 Driver Version: 535.161.07 CUDA Version: 12.2</code></p>
<p>Now I keep getting:
<code>Segmentation fault (core dumped)</code>
Whenever I try to move a tensor to cuda device.</p>
<p>I assume it's a CUDA version issue so I tried updating the image to:
<code>nvcr.io/nvidia/pytorch:23.02-py3</code>
that supports newer CUDA version, but the issue remains the same. Here is an example program that causes the issue:</p>
<pre><code>import torch
tensor_cpu = torch.tensor([1, 2, 3, 4, 5])
print("Tensor device before moving to CUDA:", tensor_cpu.device)
if torch.cuda.is_available():
device = torch.device("cuda")
tensor_cuda = tensor_cpu.to(device)
print("Tensor device after moving to CUDA:", tensor_cuda.device)
else:
print("CUDA is not available. Cannot move tensor to CUDA.")
</code></pre>
<p>And the output is:</p>
<pre><code>Tensor device before moving to CUDA: cpu
Segmentation fault (core dumped)
</code></pre>
<p>How can I fix it? What is the correct image (with PyTorch 1.x) that I should use with my GPU?</p>
<p>Or is the issue be related to something else?</p>
|
<python><pytorch><gpu>
|
2024-03-20 21:25:44
| 2
| 1,054
|
ChikChak
|
78,196,314
| 6,118,662
|
Make DBT choose the schema in dbt_project.yml instead of the name of the source
|
<p>Here is how I formatted my dbt_project.yml</p>
<pre><code>name: 'project_name'
version: '0.1.0'
config-version: 2
profile: 'project_name'
model-paths: ["models"]
analysis-paths: ["analyses"]
test-paths: ["tests"]
seed-paths: ["seeds"]
macro-paths: ["macros"]
snapshot-paths: ["snapshots"]
target-path: "target"
clean-targets: # directories to be removed by `dbt clean`
- "target"
- "dbt_packages"
sources:
dbt_proj:
business-unit-x:
+database: "BUSINESS_UNIT_X_{{ env_var('SF_ENV')}}"
gold:
+schema: GOLD
silver:
+schema: SILVER
bronze:
+schema: BRONZE
</code></pre>
<p>Here is in example of a source, in the folder business-unit-x/silver:</p>
<pre><code>version: 2
sources:
- name: I_DONT_WANT_TO_BE_USED_AS_SCHEMA
tables:
- name: table_name
columns:
- name: column_name
</code></pre>
<p>Is there an option to make dbt choose the schema from my dbt_project.yml instead of the schema or name property in my source?</p>
<p>If not, how can I configure the schema property so it stays in sync with dbt_project.yml?</p>
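<p>For context: dbt falls back to a source's <code>name</code> as its schema only when no explicit <code>schema</code> property is set, so the usual fix is to declare <code>schema</code> (and <code>database</code>) on the source itself; source properties like these live in the source YAML rather than under <code>sources:</code> in dbt_project.yml. A sketch mirroring the values above (the logical name is illustrative):</p>

```yaml
version: 2

sources:
  - name: silver          # logical name used in {{ source('silver', ...) }}
    database: "BUSINESS_UNIT_X_{{ env_var('SF_ENV') }}"
    schema: SILVER        # overrides `name` as the physical schema
    tables:
      - name: table_name
        columns:
          - name: column_name
```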
|
<python><dbt>
|
2024-03-20 21:24:57
| 1
| 458
|
linSESH
|
78,196,296
| 14,293,020
|
Scipy 2D interpolation not accomodating every point
|
<p><strong>Context:</strong> I have a sparse 2D array representing measures of ice thickness along flight traces. However, because of instrument errors, intersects between traces can have different values. I want to interpolate the 2D grid between the traces.</p>
<p><strong>Problem:</strong> I am comparing 2 interpolations from Scipy: <code>griddata</code> and <code>CloughTocher2DInterpolator</code>. I would like to introduce some tolerance in the interpolation: if two points close together have vastly different values, then the interpolation should not try to fit them exactly.</p>
<p><strong>Question:</strong> Is there a way to use any of the two functions to that extent ? I know <code>CloughTocher2DInterpolator</code> has the <code>tol</code> argument but it does not remove artifacts from the interpolation.</p>
<p><strong>Real Case:</strong>
I plotted below the two interpolations. As you can notice, the <code>CloughTocher2DInterpolator</code> has weirdness induced from the overlapping traces, while the linear interpolation does not.
Eventually, I would like something halfway between the two methods: no apparent weirdness, but also a better interpolation than just linear fit (because topography is better represented by splines than straight lines).
<a href="https://i.sstatic.net/ef4d5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ef4d5.png" alt="Interp 1" /></a>
<a href="https://i.sstatic.net/cOMUp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cOMUp.png" alt="Interp 2" /></a>
<a href="https://i.sstatic.net/Oysxq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Oysxq.png" alt="Traces" /></a></p>
<p><strong>Example Code:</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import CloughTocher2DInterpolator
# Create a 100x100 array of NaNs
array = np.full((100, 100), np.nan)
# Store indices of points along the lines
line_indices = []
vals = []
# Generate random lines with sinusoidal shape
num_lines = 40
for _ in range(num_lines):
x_start = np.random.randint(0, 100)
y_start = np.random.randint(0, 100)
length = np.random.randint(20, 50)
angle = np.random.uniform(0, 2*np.pi)
values = np.linspace(0, 400, length) + np.random.normal(0, 50, length)
line_x = (np.arange(length) * np.cos(angle)).astype(int) + x_start
line_y = (np.arange(length) * np.sin(angle)).astype(int) + y_start
line_x = np.clip(line_x, 0, 99)
line_y = np.clip(line_y, 0, 99)
line_indices.append((line_y, line_x)) # Store indices
array[line_y, line_x] = values
vals.append(values)
# Plot the array
plt.imshow(array, cmap='jet')
plt.colorbar(label='Value')
plt.title('Random Lines with Sinusoidal Shape')
plt.show()
# Prepare the interpolation
line_indices = np.hstack(line_indices)
indices_x = line_indices[1]
indices_y = line_indices[0]
Y = np.arange(0, array.shape[0])
X = np.arange(0, array.shape[1])
X, Y = np.meshgrid(X,Y)
# Interpolation
interp = CloughTocher2DInterpolator(list(zip(indices_x, indices_y)), np.hstack(vals))
Z = interp(X, Y) # Interpolated bed
plt.imshow(Z)
</code></pre>
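<p>A possible middle ground between the two is <code>scipy.interpolate.RBFInterpolator</code>: with <code>smoothing=0</code> it interpolates exactly (and would reproduce the same artifacts), while a positive <code>smoothing</code> turns it into an approximating spline that no longer tries to honour inconsistent crossover values. A sketch on synthetic data (the smoothing value is a knob to tune, not a recommendation):</p>

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
points = rng.uniform(0, 99, size=(300, 2))                          # (x, y) sample locations
values = np.sin(points[:, 0] / 15) * 200 + rng.normal(0, 30, 300)   # noisy measurements

# thin-plate spline with smoothing: no longer forced through every point
rbf = RBFInterpolator(points, values, kernel='thin_plate_spline', smoothing=50.0)

X, Y = np.meshgrid(np.arange(100), np.arange(100))
Z = rbf(np.column_stack([X.ravel(), Y.ravel()])).reshape(X.shape)
print(Z.shape)  # (100, 100)
```

<p>For large trace datasets, the <code>neighbors</code> argument restricts each evaluation to the nearest samples and keeps the solve tractable.</p>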
|
<python><scipy><interpolation><spline>
|
2024-03-20 21:20:33
| 0
| 721
|
Nihilum
|
78,196,235
| 9,208,758
|
"Update your browser error" when trying to access the SQL database using pyodbc in python
|
<p>I am trying to access a SQL Server database (the one I normally use through SSMS) using Python. Please note that the login requires MFA - "Azure Active Directory - Universal with MFA". My dummy code is as follows:</p>
<pre><code>import pandas as pd
import numpy as np
import sqlalchemy
import pymssql
import pyodbc
DRIVER = "ODBC Driver 17 for SQL Server"
SERVERNAME = "some.server.name.database.windows.net"
AUTHENTICATION='ActiveDirectoryInteractive'
DB = "SOME_DB"
connection_str = f'DRIVER={DRIVER};SERVER={SERVERNAME},1433;DATABASE={DB};PWD=;Encrypt=yes;Authentication=ActiveDirectoryInteractive;UID=;TrustServerCertificate=no;'
cnxn = pyodbc.connect(connection_str)
cursor = cnxn.cursor()
query_string = '''
select top(10) * from smth.other
'''
df = pd.read_sql(query_string, connection_str)
df
</code></pre>
<p>When I run the code, I am taken to the login page where I fill out my ID and password. However, when I press next, I get the following:</p>
<p><a href="https://i.sstatic.net/L1Dwb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L1Dwb.png" alt="enter image description here" /></a></p>
<p>How would I go about solving this issue? I expect the window to display an area for me to enter my MFA info.</p>
|
<python><sql-server><azure>
|
2024-03-20 21:05:57
| 1
| 589
|
Isaac A
|
78,196,062
| 3,597,746
|
Python Import is failing with error message ImportError: attempted relative import with no known parent package
|
<p>I'm working with Python and have this directory structure:</p>
<pre><code>/new/uiautomation/testscripts/dropdown_test.py
/new/uiautomation/apis/runner.py
/new/apis/restapi.py
</code></pre>
<p>In <code>dropdown_test.py</code> I currently extend <code>sys.path</code> like this:</p>
<pre><code>import sys
from os import path

sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
</code></pre>
<p>In my dropdown_test.py file, located within testscripts, I want to import restapi.py and runner.py, both located in different directories. The apis directory is not directly above testscripts, so how can I import these modules without hardcoding the project directory name (new)? Here's what I've tried:</p>
<pre><code>from apis.runner import Runner
from ..apis.restapi import Login
</code></pre>
<p>However, this isn't working as expected. Is there a way to write these imports without specifying the project directory name?</p>
<p>The only problem is that I am not able to resolve or import <code>apis/restapi.py</code> inside <code>testscripts/dropdown_test.py</code>.</p>
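<p>To pin down the path arithmetic I'm relying on — going up from <code>testscripts/</code> to the project root so that both <code>apis</code> packages become importable — here it is in isolation (paths below mirror my layout, no real files needed):</p>

```python
from pathlib import PurePosixPath

# Hypothetical absolute path of the test script, mirroring my layout.
script = PurePosixPath("/new/uiautomation/testscripts/dropdown_test.py")

# parents[0] is testscripts/, parents[1] is uiautomation/, parents[2] is the
# project root /new. Appending the last two to sys.path should make
# "apis.runner" (under uiautomation/) and "apis.restapi" (under /new)
# importable — though I realise the two "apis" packages would then shadow
# each other's name.
uiautomation_dir = script.parents[1]
project_root = script.parents[2]
print(uiautomation_dir, project_root)
```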
|
<python>
|
2024-03-20 20:22:11
| 1
| 4,255
|
Ammad
|
78,195,754
| 1,436,763
|
Chrome driver exception: Message: session not created: Chrome failed to start: crashed
|
<p>I'm running python 3.12. Both chrome for testing and chrome driver are the same version i.e. 123.0.6312.58 - this has been checked and verified</p>
<p>It's working perfectly fine on my Mac. However when I run the dockerized app I get the error</p>
<pre><code>024-03-20 19:53:58 03/20/2024 06:53:58PM - ERROR - helper_functions.py::helper_functions::__init__:36 - Chrome driver exception: Message: session not created: Chrome failed to start: crashed.
2024-03-20 19:53:58 (disconnected: unable to connect to renderer)
2024-03-20 19:53:58 (The process started from chrome location /usr/local/bin/chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
2024-03-20 19:53:58 Stacktrace:
2024-03-20 19:53:58 #0 0x555555cc9993 <unknown>
2024-03-20 19:53:58 #1 0x5555559c4136 <unknown>
2024-03-20 19:53:58 #2 0x5555559f8448 <unknown>
2024-03-20 19:53:58 #3 0x5555559f443d <unknown>
2024-03-20 19:53:58 #4 0x555555a3d239 <unknown>
</code></pre>
<p>My docker config looks like this</p>
<pre><code>FROM --platform=linux/amd64 ubuntu:20.04
ENV PATH /usr/local/bin:$PATH
RUN set -eux; \
apt-get update && \
apt-get install -y --no-install-recommends \
ca-certificates \
tzdata \
wget \
unzip \
;
RUN apt-get install -y git
RUN apt-get install -y python3.12
RUN apt-get install -y python3-pip
RUN apt-get update && apt-get install -y libgbm1 gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 \
libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libnss3 lsb-release xdg-utils wget ca-certificates
RUN set -eux; \
wget -O chrome-linux64.zip https://storage.googleapis.com/chrome-for-testing-public/123.0.6312.58/linux64/chrome-linux64.zip && \
unzip chrome-linux64.zip && \
mv chrome-linux64/* /usr/local/bin/ && \
chmod +x /usr/local/bin/chrome && \
rm -rf chrome-linux64.zip chrome-linux64
RUN set -eux; \
wget -O chromedriver-linux64.zip https://storage.googleapis.com/chrome-for-testing-public/123.0.6312.58/linux64/chromedriver-linux64.zip && \
unzip chromedriver-linux64.zip && \
mv chromedriver-linux64/chromedriver /usr/local/bin/chromedriver && \
chmod +x /usr/local/bin/chromedriver && \
rm -rf chromedriver-linux64.zip chromedriver-linux64
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
RUN mkdir /home/test
WORKDIR /home/test
COPY . .
ENTRYPOINT ["behave"]
CMD ["-v", "features/scenarios", "-D", "browser=chrome", "-f", "html", "-o", "test-results/report.html"]
</code></pre>
<p>Environment</p>
<ul>
<li>Selenium 4.18</li>
<li>Ubuntu Linux Docker image</li>
<li>Behave 1.2.7dev5</li>
</ul>
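<p>For completeness, these are the Chrome switches that, as far as I can tell from similar reports, are usually required when Chrome runs as root inside a container (I have not yet confirmed which subset fixes this particular crash). Each would be passed to selenium's <code>Options.add_argument</code> in my behave <code>environment.py</code>; selenium itself isn't imported here so the list can be sanity-checked on its own:</p>

```python
# Candidate flags for Chrome inside a root Docker container, collected from
# similar reports — not yet verified against this exact crash.
CHROME_ARGS = [
    "--headless=new",           # no display server in the container
    "--no-sandbox",             # container process runs as root
    "--disable-dev-shm-usage",  # /dev/shm is tiny by default in Docker
    "--disable-gpu",
]
print(CHROME_ARGS)
```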
|
<python><google-chrome><selenium-webdriver>
|
2024-03-20 19:08:09
| 2
| 427
|
Seroney
|
78,195,717
| 6,714,667
|
How can I authenticate using a token instead of an API key for Azure SearchClient?
|
<p>I am trying to use the searchClient library:</p>
<pre><code>import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

service_endpoint = os.environ["AZURE_SEARCH_SERVICE_ENDPOINT"]
index_name = os.environ["AZURE_SEARCH_INDEX_NAME"]
key = os.environ["AZURE_SEARCH_API_KEY"]
search_client = SearchClient(service_endpoint, index_name, AzureKeyCredential(key))
</code></pre>
<p>Is there a way to create the SearchClient by passing a token instead of an API key, or another way to authenticate without this API key?</p>
|
<python><azure><azure-cognitive-search>
|
2024-03-20 18:58:16
| 1
| 999
|
Maths12
|
78,195,680
| 1,999,585
|
How can I draw the bottom spline if I use set_xlim and set_ylim in Matplotlib?
|
<p>I want to plot two sets of data, called <strong>real_values</strong> and <strong>predicted_values</strong>. The code I use is this:</p>
<pre><code>from itertools import chain

import matplotlib.pyplot as plt


def plot_experimental_vs_predicted_values(real_values, predicted_values, plot_title, show_plot=True):
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.set_xlim(0, len(real_values) + 1)
ax.set_ylim(min(chain(predicted_values, real_values)) * 0.95, max(chain(predicted_values, real_values)) * 1.05)
ax.spines['left'].set_position('zero')
ax.spines['bottom'].set_position('zero')
ax.plot(range(1, len(real_values) + 1), real_values, '.', label='Experimental value')
ax.plot(range(1, len(predicted_values) + 1), predicted_values, '.', label='Estimated value')
plt.xlabel('Sample number')
plt.ylabel('Experimental or estimated value')
plt.legend()
plt.title(plot_title)
plt.xticks(range(0, len(real_values) + 1))
plt.savefig('Full.png')
if show_plot:
plt.show()
plt.close()
</code></pre>
<p>When call run the above function, I get the following graph: <a href="https://i.sstatic.net/LofNb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LofNb.png" alt="graph:" /></a></p>
<p>What can I do to display the bottom spine, i.e. the X-axis line?</p>
|
<python><matplotlib>
|
2024-03-20 18:48:21
| 0
| 2,424
|
Bogdan Doicin
|
78,195,645
| 10,517,777
|
How to add a unique id for each value in a new column of a dask dataframe
|
<p>I have the following dask dataframe</p>
<pre><code>column1 column2
a 1
a 2
b 3
c 4
c 5
</code></pre>
<p>I need to add a new column containing a consecutive unique number for each distinct value in column1. My output should be:</p>
<pre><code>column1 column2 column 3
a 1 1
a 2 1
b 3 2
c 4 3
c 5 3
</code></pre>
<p>How do I achieve this? Thanks in advance for your help.</p>
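<p>The logic I'm after, written out in plain Python just to pin down the expected mapping (I know this is not the dask way):</p>

```python
# Plain-Python sketch of the mapping I want: each distinct value in column1
# gets the next consecutive integer, in order of first appearance.
column1 = ["a", "a", "b", "c", "c"]

ids = {}
column3 = []
for value in column1:
    if value not in ids:
        ids[value] = len(ids) + 1  # next consecutive id, starting at 1
    column3.append(ids[value])

print(column3)  # expected [1, 1, 2, 3, 3]
```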
|
<python><dask>
|
2024-03-20 18:42:28
| 1
| 364
|
sergioMoreno
|
78,195,465
| 54,873
|
Best way to dump large pandas dataframe into an *existing* Excel file?
|
<p>I have a large <code>pandas</code> dataframe that I'd like to dump into an existing excel file (essentially, that file then takes the data and does Excel-y things with it, and I want to update a single <code>raw_data</code> tab).</p>
<p>For now I use code from here: <a href="https://stackoverflow.com/questions/20219254/how-to-write-to-an-existing-excel-file-without-overwriting-data-using-pandas">How to write to an existing excel file without overwriting data (using pandas)?</a> that looks like</p>
<pre><code>import pandas as pd
with pd.ExcelWriter('the_file.xlsx', engine='openpyxl', mode='a') as writer:
data_filtered.to_excel(writer, sheet_name="raw_data")
</code></pre>
<p>But the dump is now so memory- and processor- intensive that it takes down my computer for ten minutes at a stretch.</p>
<p>I see solutions for dumping to a new excel file, but nothing good for overwriting a tab of an existing file.</p>
<p>What is the best, most efficient way to do this? I am also game for writing to a new tempfile, and then copying that tempfile into the existing file's <code>raw_data</code> tab - but the process needs to end with the existing file being updated.</p>
<p>Many thanks!
/YGA</p>
|
<python><pandas><excel>
|
2024-03-20 18:07:00
| 1
| 10,076
|
YGA
|
78,195,369
| 351,771
|
Applying matrix multiplication in chain to several sets of values
|
<p>Mathematically, I'm trying to calculate <strong>x</strong>^T <strong>A</strong> <strong>x</strong>, where <strong>x</strong> is an n-dimensional coordinate and <strong>A</strong> a n-dimensional square matrix. However, I'd like to efficiently calculate this for a set of coordinates. For example, in two dimensions:</p>
<pre><code>import numpy as np
x = np.column_stack([[1,2,3,4,5],[6,7,8,9,0]])
A = np.array([[1,0],[0,2]])
print(x[0] @ A @ x[0]) # works
# How can I get efficiently an array of x[i] @ A @ x[i]?
y = [x[i] @ A @ x[i] for i in range(x.shape[0])]
</code></pre>
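<p>To pin down what I expect numerically, here is the same computation with plain Python loops (same <strong>x</strong> and <strong>A</strong> as above):</p>

```python
# Reference computation of y[i] = x[i]^T A x[i] with explicit loops,
# using the same data as in the question.
x = [[1, 6], [2, 7], [3, 8], [4, 9], [5, 0]]  # rows are coordinates
A = [[1, 0], [0, 2]]

y = []
for row in x:
    total = 0
    for j in range(2):
        for k in range(2):
            total += row[j] * A[j][k] * row[k]
    y.append(total)

print(y)  # expected [73, 102, 137, 178, 25]
```

<p>If I'm reading the numpy docs right, the vectorised equivalent would be <code>np.einsum('ij,jk,ik->i', x, A, x)</code> or <code>((x @ A) * x).sum(axis=1)</code>, but I'd like confirmation that one of those is the efficient/idiomatic form.</p>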
|
<python><numpy><matrix>
|
2024-03-20 17:46:17
| 1
| 2,717
|
xioxox
|
78,195,336
| 2,343,309
|
Preserve coordinates in xarray.Dataset when subsetting a DataArray
|
<p>I want to have multiple coordinate systems for a given dimension, e.g., a single <code>"time"</code> dimension with coordinates for (the default) <code>datetime64[ns]</code> type but also for a numeric year, or a season. This seems to be possible using <code>Dataset.assign_coords()</code> but has some unexpected behavior when subsetting a <code>DataArray</code> within the <code>Dataset</code>.</p>
<p>I'm using <a href="https://archive.podaac.earthdata.nasa.gov/podaac-ops-cumulus-protected/TELLUS_GRAC-GRFO_MASCON_CRI_GRID_RL06.1_V3/GRCTellus.JPL.200204_202311.GLO.RL06.1M.MSCNv03CRI.nc" rel="nofollow noreferrer">the publicly available NASA dataset available here</a> (with Earthdata login); described in more detail <a href="https://podaac.jpl.nasa.gov/dataset/TELLUS_GRAC-GRFO_MASCON_CRI_GRID_RL06.1_V3" rel="nofollow noreferrer">at this link</a>.</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
import numpy as np
ds = xr.open_mfdataset('GRCTellus.JPL.200204_202311.GLO.RL06.1M.MSCNv03CRI.nc')
ds.coords['lon'] = ds.coords['lon'] - 180
# Get a list of all the dates from the "time" coordinate
dates = ds.coords['time'].values
# Convert each "datetime64[ns]" object to a 4-digit year
years = []
for each in dates:
years.append(int(str(each)[0:4]))
# Create a new set of coordinates
ds = ds.assign_coords(year = ('year', years))
</code></pre>
<pre><code><xarray.Dataset>
Dimensions: (lat: 360, time: 227, lon: 720, bounds: 2, year: 227)
Coordinates:
* lat (lat) float64 -89.75 -89.25 -88.75 ... 88.75 89.25 89.75
* time (time) datetime64[ns] 2002-04-17T12:00:00 ... 2023-11-16
* lon (lon) float64 -179.8 -179.2 -178.8 ... 178.8 179.2 179.8
* year (year) int64 2002 2002 2002 2002 2002 ... 2023 2023 2023 2023
</code></pre>
<p>This works great, so far, in that I have a <code>year</code> coordinate now. <strong>I also have a <code>year</code> dimension listed, which is not what I wanted.</strong></p>
<p>The more serious problem is that when I select a <code>DataArray</code> from this <code>Dataset</code>, it doesn't know anything about the new coordinates.</p>
<pre class="lang-py prettyprint-override"><code>ds['lwe_thickness'].sel(lon = slice(-124, -114), lat = slice(32, 42))
</code></pre>
<pre><code><xarray.DataArray 'lwe_thickness' (time: 227, lat: 20, lon: 20)>
dask.array<getitem, shape=(227, 20, 20), dtype=float64, chunksize=(46, 20, 20), chunktype=numpy.ndarray>
Coordinates:
* lat (lat) float64 32.25 32.75 33.25 33.75 ... 40.25 40.75 41.25 41.75
* time (time) datetime64[ns] 2002-04-17T12:00:00 ... 2023-11-16
* lon (lon) float64 -123.8 -123.2 -122.8 -122.2 ... -115.2 -114.8 -114.2
</code></pre>
<p>The result is that <code>"time"</code> is listed as a coordinate, but not <code>"year"</code>. This has implications for using <a href="https://docs.xarray.dev/en/stable/generated/xarray.DataArray.polyfit.html#xarray.DataArray.polyfit" rel="nofollow noreferrer"><code>DataArray.polyfit()</code></a>, because without a <code>"year"</code> coordinate I can't get coefficient values denominated by year.</p>
<p><strong>How can I make a new coordinate system available to each <code>DataArray</code> in my <code>Dataset</code>?</strong> I have looked at:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/56951314/add-new-coordinates-to-an-xarray-dataarray">"add new coordinates to an xarray.DataArray"</a> - But when I try this, I get a <code>ValueError: cannot add coordinates with new dimensions to a DataArray</code></li>
</ul>
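<p>For reference, the year extraction itself can be done without string slicing, and I suspect (from the <code>assign_coords</code> docs) that the new coordinate needs to be declared on the existing <code>"time"</code> dimension — i.e. <code>ds.assign_coords(year=('time', years))</code> — rather than introducing a separate <code>"year"</code> dimension. The extraction part in isolation:</p>

```python
from datetime import datetime

# Stand-ins for the datetime64[ns] values from ds.coords['time'].values.
dates = [datetime(2002, 4, 17, 12, 0), datetime(2023, 11, 16)]

# .year replaces the int(str(each)[0:4]) string slicing.
years = [d.year for d in dates]
print(years)  # expected [2002, 2023]
```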
|
<python><python-xarray>
|
2024-03-20 17:41:43
| 1
| 376
|
Arthur
|
78,195,234
| 11,402,025
|
SPARQL query response to objects
|
<p>I have a SPARQL query that sends me a response in JSON.</p>
<pre><code> "results": {
"response": [
{
"nameinfo": {
"data": "data1"
},
"namedetails": {
"data": "data2"
},
"childinfo": {
"data": "data3"
},
"childdetails": {
"data": "data4"
},
"parentinfo": {
"data": "data5"
},
"parentdetails": {
"data": "data6"
},
"prop": {
"data": "data7"
}
},
{
......
}
]
}
</code></pre>
<pre><code>for data in response["results"]["response"]:
data_dict = {
"nameinfo": "",
"namedetails": "",
"childinfo": "",
"childdetails": "",
"parentinfo": "",
"parentdetails": "",
"prop": "",
}
if "nameinfo" in data:
data_dict["nameinfo"] = data[
"nameinfo"]["data"]
if "namedetails" in data:
data_dict["namedetails"] = data[
"namedetails"]["data"]
if "childinfo" in data:
data_dict["childinfo"] = data[
"childinfo"
]["value"]
...
hierarchy.append(data_dict)
hierarchy = build_heirarchy(
hierarchy, parent=data_dict["nameinfo"]
)
</code></pre>
<pre><code>def build_heirarchy(data, parent=None):
    # recursive function to build the hierarchy
</code></pre>
<p>I need to iterate over the response to create an object in Python. I am using for/if checks and a recursive approach, but it is very time-consuming and the performance is slow. I iterate over the JSON object and load it into the Pydantic model I use.</p>
<p>Is there a way to make it perform better and faster?</p>
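<p>The per-row flattening can at least lose the long if-chain; a sketch of what I mean (assuming every binding carries a <code>"data"</code> key, which matches the sample payload above):</p>

```python
# One row from results["response"], as in the sample payload above.
row = {
    "nameinfo": {"data": "data1"},
    "namedetails": {"data": "data2"},
    "childinfo": {"data": "data3"},
}

FIELDS = ("nameinfo", "namedetails", "childinfo", "childdetails",
          "parentinfo", "parentdetails", "prop")

# dict.get twice: a missing binding yields {} and then the "" default,
# so no per-field if-checks are needed.
data_dict = {field: row.get(field, {}).get("data", "") for field in FIELDS}
print(data_dict)
```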
|
<python><json><fastapi><sparql><pydantic>
|
2024-03-20 17:24:17
| 0
| 1,712
|
Tanu
|
78,195,193
| 16,717,009
|
Removing Pandas duplicates with more complicated preference than first or last
|
<p>Pandas <code>.drop_duplicates</code> lets us specify that we want to keep the first or last (or none) of the duplicates found. I have a more complicated condition. Let's say I have a set of preferred values for a column. If a duplicate pair is found and one is in the preferred set, I want to keep that one, regardless of whether it's first or last. If both are preferred, the usual drop_duplicates behavior should apply. If neither is in the preferred set, then again, the usual drop_duplicates behavior should apply.</p>
<p>I've tried playing with masks but can't seem to get it right. I think it might be the wrong way to go about it. Here's what I've tried.</p>
<pre><code>import pandas as pd
def conditional_remove_duplicates(df, preferred_tags):
duplicates_mask = df.duplicated(subset=['id', 'val'], keep=False)
preferred_mask = df['tag'].isin(preferred_tags)
mask = duplicates_mask | preferred_mask
df = df[mask].drop_duplicates(subset=['id', 'val'], keep='first')
return df
data = {'id': ['A', 'A', 'A', 'A', 'B', 'B', 'C', 'D', 'D'],
'val': [10, 10, 11, 10, 20, 20, 30, 40, 40],
'tag': ['X', 'Z', 'X', 'Y', 'Z', 'X', 'X', 'Z', 'Z']}
preferred_tags = {'X', 'Y'}
df = pd.DataFrame(data)
print(df)
"""
id val tag
0 A 10 X
1 A 10 Z
2 A 11 X
3 A 10 Y
4 B 20 Z
5 B 20 X
6 C 30 X
7 D 40 Z
8 D 40 Z
"""
result_df = conditional_remove_duplicates(df, preferred_tags)
print(result_df)
""" Produces:
id val tag
0 A 10 X
2 A 11 X
4 B 20 Z
6 C 30 X
7 D 40 Z
Should be:
id val tag
0 A 10 X
2 A 11 X
5 B 20 X
6 C 30 X
7 D 40 Z
"""
</code></pre>
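<p>The selection rule itself, spelled out in plain Python over the sample rows: stable-sort so preferred tags come first within each (id, val) group, keep the first of each group, then restore the original order.</p>

```python
# Plain-Python version of the rule, same sample data as above: within each
# (id, val) group keep the first preferred-tag row; if none (or several) are
# preferred, this degrades to keep='first' behaviour.
rows = [("A", 10, "X"), ("A", 10, "Z"), ("A", 11, "X"), ("A", 10, "Y"),
        ("B", 20, "Z"), ("B", 20, "X"), ("C", 30, "X"),
        ("D", 40, "Z"), ("D", 40, "Z")]
preferred = {"X", "Y"}

# Stable sort: preferred rows first, original order preserved within ties.
indexed = sorted(enumerate(rows), key=lambda p: p[1][2] not in preferred)
seen, kept = set(), []
for idx, (id_, val, tag) in indexed:
    if (id_, val) not in seen:
        seen.add((id_, val))
        kept.append(idx)

result = [rows[i] for i in sorted(kept)]
print(result)
```

<p>I suspect the pandas equivalent is a <code>sort_values</code> with a key on tag-preference followed by <code>drop_duplicates(keep='first')</code> and <code>sort_index()</code>, but I haven't verified that.</p>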
|
<python><pandas>
|
2024-03-20 17:17:20
| 3
| 343
|
MikeP
|
78,195,132
| 14,793,223
|
Aligning to "North west" in moviepy
|
<p>I am new to moviepy and have a question: is it possible to align a TextClip to both north and west?</p>
<pre class="lang-py prettyprint-override"><code>title_clip = TextClip(title, font="Inter-SemiBold", size=(1002, 88), method="caption", align="west", fontsize=36,
color='rgba(0,0,0,0.75)')
title_clip = title_clip.set_position((226, 803)).set_duration(duration)
detail_clip = TextClip(news, font="Rubik-Regular", size=(1469, 474), method="caption", align=("north", "west"), fontsize=34,
color='rgb(103,103,103)')
detail_clip = detail_clip.set_position((226, 226)).set_duration(duration)
</code></pre>
|
<python><moviepy>
|
2024-03-20 17:04:42
| 1
| 690
|
Arya Anish
|