QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string date) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string, nullable) |
|---|---|---|---|---|---|---|---|---|
78,044,016
| 850,781
|
Fill Na in pandas with averages per another column
|
<p>I have a dataframe</p>
<pre><code>df = pd.DataFrame({
    "species": ["cat", "dog", "dog", "cat", "cat"],
    "weight": [5, 4, 3, 7, None],
    "length": [12, None, 13, 14, 15],
})
species weight length
0 cat 5.0 12.0
1 dog 4.0 NaN
2 dog 3.0 13.0
3 cat 7.0 14.0
4 cat NaN 15.0
</code></pre>
<p>and I want to fill the missing data with the average for the species, i.e.,</p>
<pre><code>df.loc[1,"length"] = 13 # the average dog length
df.loc[4,"weight"] = 6 # (5+7)/2 the average cat weight
</code></pre>
<p>How do I do that?</p>
<p>(presumably I need to pass <code>value=DataFrame</code> to <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>df.fillna</code></a>, but I don't see an <em>easy</em> way to construct the frame)</p>
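<p>(A sketch of one commonly used pattern: compute the group-wise means with <code>groupby().transform("mean")</code>, which returns a frame aligned with the original index, and pass that frame to <code>fillna</code>.)</p>

```python
import pandas as pd

df = pd.DataFrame({
    "species": ["cat", "dog", "dog", "cat", "cat"],
    "weight": [5, 4, 3, 7, None],
    "length": [12, None, 13, 14, 15],
})

# transform("mean") returns a frame aligned with df's index, so it can be
# passed straight to fillna as the per-row fill values
group_means = df.groupby("species")[["weight", "length"]].transform("mean")
df_filled = df.fillna(group_means)
```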
|
<python><pandas><dataframe><fillna>
|
2024-02-22 21:08:54
| 1
| 60,468
|
sds
|
78,043,846
| 5,036,476
|
How to prevent socket exhaustion on http requests?
|
<p>I'm a .NET developer writing a python client which needs to do multiple HTTP requests in a short amount of time.</p>
<p>In .NET, when you use HttpClient to make a request, it opens a socket connection to make the request and get the response. When you're done with the HttpClient and dispose it, the socket isn't immediately closed. Instead, it's left in a state called TIME_WAIT, in which it waits to see if any more packets from the same connection arrive. This is standard behavior for TCP/IP and is designed to ensure all packets are received before closing the connection.</p>
<p>However, this has a side effect: sockets in the TIME_WAIT state still count against your machine’s maximum limit of open sockets, which is a finite resource. The length of the TIME_WAIT period varies depending on the OS, but it’s usually around 4 minutes.</p>
<p>As a result, using an HttpClientFactory is recommended in .NET to manage instances of HTTP clients.</p>
<p>How is this problem handled in python, using the requests module? What is best practice?</p>
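<p>(For reference, the usual answer in <code>requests</code> is a shared <code>Session</code>, which keeps a pool of keep-alive TCP connections and reuses them across requests — roughly the analogue of sharing one HttpClient. A sketch; the pool sizes shown are illustrative, and the URL is a placeholder:)</p>

```python
import requests
from requests.adapters import HTTPAdapter

# A Session keeps a pool of keep-alive TCP connections and reuses them
# across requests, instead of opening a fresh socket each time.
session = requests.Session()

# The pool limits can be tuned explicitly via an HTTPAdapter.
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=10)
session.mount("https://", adapter)
session.mount("http://", adapter)

# response = session.get("https://example.com/")  # placeholder URL
session.close()
```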
|
<python><http>
|
2024-02-22 20:27:45
| 0
| 1,748
|
Tobias von Falkenhayn
|
78,043,661
| 2,641,242
|
How to find out if wandb is initialized?
|
<p>At the start of my application, I initialize a connection to WeightsAndBiases via</p>
<pre class="lang-py prettyprint-override"><code>run = wandb.init(project="...")
</code></pre>
<p>Later on, I would like to check if a <code>wandb</code> run has been initialized.
Is there a way to do this without passing the <code>run</code> object around?</p>
|
<python><machine-learning><wandb>
|
2024-02-22 19:47:13
| 1
| 3,997
|
jraufeisen
|
78,043,476
| 1,115,716
|
Split string, preserve quoted substring
|
<p>I have a string that I'm trying to split such that the quoted substring stays preserved. Based on another thread here, I tried using <code>shlex</code>:</p>
<pre><code>import shlex
shlex.split('“COBRA COMMANDER” Section 4Q/C')
</code></pre>
<p>However, the result isn't what I expected:</p>
<pre><code>['“COBRA', 'COMMANDER”', 'Section', '4Q/C']
</code></pre>
<p>I need the quoted text in one batch, something like:</p>
<pre><code>['“COBRA COMMANDER”', 'Section', '4Q/C']
</code></pre>
<p>What am I doing wrong?</p>
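<p>(A likely cause: <code>shlex</code> only recognizes the ASCII quote characters <code>'</code> and <code>"</code>, so the typographic “ ” pass through as ordinary text. One workaround — a sketch — is a regex that keeps a “…” span as a single token:)</p>

```python
import re

s = '“COBRA COMMANDER” Section 4Q/C'
# Match either a full “…” span or a run of non-whitespace characters
tokens = re.findall(r'“[^”]*”|\S+', s)
print(tokens)  # ['“COBRA COMMANDER”', 'Section', '4Q/C']
```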
|
<python>
|
2024-02-22 19:04:27
| 1
| 1,842
|
easythrees
|
78,043,415
| 3,423,825
|
How to debug memory leak on DigitalOcean App Platform
|
<p>My Django application runs perfectly fine on my local machine; memory usage fluctuates between 700MB and 800MB. But when I execute the exact same code on DigitalOcean App Platform, the memory usage increases constantly until it reaches 100%. Then the application reboots.</p>
<p>What could cause the issue, and how can I find the root cause?</p>
<p>I have tried to force the execution of the garbage collector with <code>gc.collect()</code>, and also to isolate the problem with the <code>@profile</code> decorator from the <code>memory_profiler</code> package, but the memory growth isn't associated with any line of my code.</p>
<p><strong>Environment versions</strong></p>
<ul>
<li>Local : Python 3.10.12 + Ubuntu 22.04.3</li>
<li>DigitalOcean : Docker + Python 3.10.13</li>
</ul>
<p>I track memory usage like this:</p>
<pre><code>total = memory_usage()[0]
logger.info(f"Memory used : {total} MiB")
</code></pre>
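<p>(An aside: when <code>memory_profiler</code>'s line attribution comes up empty, the stdlib <code>tracemalloc</code> module can diff allocation snapshots and attribute the growth to the allocating source lines. A sketch with a placeholder workload:)</p>

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

leaky = [bytes(1024) for _ in range(1000)]  # placeholder workload

after = tracemalloc.take_snapshot()
# Statistics are sorted by allocation delta, largest first
top_stats = after.compare_to(before, "lineno")
for stat in top_stats[:5]:
    print(stat)
```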
|
<python><django><digital-ocean>
|
2024-02-22 18:51:37
| 0
| 1,948
|
Florent
|
78,043,310
| 5,344,240
|
Abstract @property - instantiating a "partially implemented" class?
|
<p>I read <a href="https://pymotw.com/3/abc/" rel="nofollow noreferrer">this</a> very nice documentation on abstract class <code>abc.ABC</code>. It has this example (shortened by me for the purpose of this question):</p>
<pre><code>import abc

class Base(abc.ABC):
    @property
    @abc.abstractmethod
    def value(self):
        return 'Should never reach here'

    @value.setter
    @abc.abstractmethod
    def value(self, new_value):
        return

class PartialImplementation(Base):  # setter not defined/overridden
    @property
    def value(self):
        return 'Read-only'
</code></pre>
<p>To my biggest surprise, <code>PartialImplementation</code> can be instantiated though it only overrides the getter:</p>
<pre><code>>>> PartialImplementation()
<__main__.PartialImplementation at 0x7fadf4901f60>
</code></pre>
<p>Naively, I would have thought that since the interface has two abstract methods both would have to be overridden in any concrete class, which is what is written in the documentation: "Although a concrete class must provide implementations of all abstract methods,...". The resolution must be in that we actually have only one abstract name, <code>value</code>, that needs to be implemented and that does happen in <code>PartialImplementation</code>.</p>
<p>Can someone please explain this to me properly?</p>
<p>Also, why would you want to lift the setter to the interface if you are not required to implement it? The current implementation does nothing, if it is callable at all, on a <code>PartialImplementation</code> instance.</p>
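<p>(The suspected resolution can be checked directly: <code>ABCMeta</code> tracks abstract <em>names</em>, and both decorated functions hang off the single class attribute <code>value</code>, so there is only one entry to override:)</p>

```python
import abc

class Base(abc.ABC):
    @property
    @abc.abstractmethod
    def value(self):
        return 'Should never reach here'

    @value.setter
    @abc.abstractmethod
    def value(self, new_value):
        return

class PartialImplementation(Base):
    @property
    def value(self):
        return 'Read-only'

# ABCMeta records abstract *names*; both decorated functions live under the
# single attribute `value`, so Base has exactly one abstract name...
print(Base.__abstractmethods__)                   # frozenset({'value'})
# ...and redefining `value` (even getter-only) clears it in the subclass:
print(PartialImplementation.__abstractmethods__)  # frozenset()
```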
|
<python><inheritance><abstract-class><python-decorators>
|
2024-02-22 18:29:52
| 1
| 455
|
Andras Vanyolos
|
78,043,285
| 13,086,128
|
df.drop_duplicates() in polars?
|
<pre><code>import polars as pl

df = pl.DataFrame(
    {
        "X": [4, 2, 3, 4],
        "Y": ["p", "p", "p", "p"],
        "Z": ["b", "b", "b", "b"],
    }
)
</code></pre>
<p>We know the equivalent of <a href="/questions/tagged/pandas" class="post-tag" title="show questions tagged 'pandas'" aria-label="show questions tagged 'pandas'" rel="tag" aria-labelledby="tag-pandas-tooltip-container">pandas</a>'s <strong><code>df.drop_duplicates()</code></strong> is <strong><code>df.unique()</code></strong> in <a href="/questions/tagged/python-polars" class="post-tag" title="show questions tagged 'python-polars'" aria-label="show questions tagged 'python-polars'" rel="tag" aria-labelledby="tag-python-polars-tooltip-container">python-polars</a></p>
<p>But each time I execute my query, I get a different result.</p>
<pre><code>print(df.unique())
</code></pre>
<hr />
<pre><code>X Y Z
i64 str str
3 "p" "b"
2 "p" "b"
4 "p" "b"
</code></pre>
<hr />
<pre><code>X Y Z
i64 str str
4 "p" "b"
2 "p" "b"
3 "p" "b"
</code></pre>
<hr />
<pre><code>X Y Z
i64 str str
2 "p" "b"
3 "p" "b"
4 "p" "b"
</code></pre>
<p>Is this intentional and what is the reason behind it?</p>
|
<python><python-3.x><dataframe><unique><python-polars>
|
2024-02-22 18:26:18
| 1
| 30,560
|
Talha Tayyab
|
78,043,284
| 146,780
|
Removing the for loops for batched mean calculation with PyTorch
|
<pre><code>B = spec_x.size(0)
H = spec_x.size(1)
T = spec_x.size(2)

# Initialize z tensor with zeros
z = torch.zeros(B, 256, H).to(pitch.device)

# Iterate over each batch element
for b in range(B):
    # Iterate over each pitch index
    for i in range(256):
        # Mask spec_x where pitch equals i
        masked_spec_x = spec_x[b].masked_select(pitch[b] == i)
        # Compute mean along the time dimension
        mean_spec_x = torch.mean(masked_spec_x, dim=0)
        # Assign the mean to the corresponding position in z
        z[b, i] = mean_spec_x
</code></pre>
<p>The above code uses two tensors, <code>spec_x</code> and <code>pitch</code>. <code>pitch</code> is a 2D tensor of shape (B, T); it gives an index from 0 to 255 corresponding to the pitch of the spectrogram at each frame.</p>
<p>The goal is to build tensor <code>z</code> of shape (B, 256, H), where H is the hidden size of <code>spec_x</code>:</p>
<pre><code>z[b][i] = average of spec_x[b] where pitch == i
</code></pre>
<p>The above code works, but it's very slow because of the loops. I'm just not sure if there's a way to remove the loops using PyTorch built-ins.</p>
<p>Thanks!</p>
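<p>(One loop-free sketch, under two stated assumptions: <code>spec_x</code> is laid out as (B, H, T), and empty pitch bins should become 0 rather than NaN. It uses a one-hot matrix product to get per-pitch sums and counts:)</p>

```python
import torch
import torch.nn.functional as F

B, H, T = 2, 8, 16  # small illustrative sizes
spec_x = torch.randn(B, H, T)
pitch = torch.randint(0, 256, (B, T))

one_hot = F.one_hot(pitch, num_classes=256).float()  # (B, T, 256)
# bmm sums spec_x over the frames belonging to each pitch index
sums = torch.bmm(spec_x, one_hot)                    # (B, H, 256)
counts = one_hot.sum(dim=1)                          # (B, 256) frames per pitch
# clamp avoids 0/0; empty bins end up 0 instead of NaN
z = (sums / counts.clamp(min=1).unsqueeze(1)).transpose(1, 2)  # (B, 256, H)
```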
|
<python><pytorch>
|
2024-02-22 18:26:13
| 1
| 54,489
|
jmasterx
|
78,043,216
| 10,007,302
|
xlwings not reading config file information
|
<p>I'm attempting to automate an install for my team that requires the use of xlwings.</p>
<p>My understanding is that if you go to the ribbon within Excel and leave the Interpreter and PYTHONPATH fields blank, xlwings will then look for a config file saved in the home directory in a folder called .xlwings (c:users/USERNAME/.xlwings/xlwings.config).</p>
<p>I've set up the folder and file in the above location. The config file only has two lines:</p>
<pre><code>"INTERPRETER_WIN","C:\Program Files\Python39\python.exe"
"PYTHONPATH","C:\Users\USERNAME\OneDrive - Global (1)\Database\pythonProject"
</code></pre>
<p>When I run code, I get an error message that the script cannot be found. If I copy and paste the paths out of the file and into Excel, it works fine.</p>
<p>Am I misunderstanding how the configuration works?</p>
|
<python><excel><xlwings>
|
2024-02-22 18:12:44
| 1
| 1,281
|
novawaly
|
78,043,002
| 19,077,881
|
Selecting a range of columns from names in Polars
|
<p>I have a many-column DF from which I need to process various ranges of columns.</p>
<p>In Pandas I could use an expression along the lines of <code>df.loc[:, 'first_name':'last_name']</code> to obtain the required columns between the two end-points. Is there an equivalent in Polars which does not involve listing all the numerous column names in each required range?</p>
|
<python><dataframe><python-polars>
|
2024-02-22 17:33:06
| 2
| 5,579
|
user19077881
|
78,042,886
| 4,992,248
|
On the 2nd iteration of loop Python throws error "RuntimeWarning: coroutine 'to_thread' was never awaited"
|
<p>I have one "parent" sync function and one "child" sync function. I want to run a few "child" functions concurrently, so I convert them to coroutines. I run the child functions/coroutines inside a loop - for some reason the 1st iteration returns results, but the 2nd one throws <code>RuntimeWarning: coroutine 'to_thread' was never awaited</code></p>
<p>Minimal code:</p>
<pre><code>import asyncio

def some_func(number):
    print('task number:', number)
    return number

def start():
    for r in range(2):
        print('r:', r)
        tasks = []
        for i in range(5):
            tasks.append(asyncio.to_thread(some_func, i))
        responses = asyncio.gather(*tasks, return_exceptions=True)
        loop = asyncio.get_event_loop()
        results = loop.run_until_complete(responses)
        loop.close()
        print('results:', results)

if __name__ == '__main__':
    start()
</code></pre>
<p>Result:</p>
<pre class="lang-none prettyprint-override"><code>r: 0
task number: 0
task number: 1
task number: 2
task number: 3
task number: 4
results: [0, 1, 2, 3, 4]
r: 1
Traceback (most recent call last):
File "/media/antonio/www/tourbase/booking-system/back-end/arctic_reservations/src/test.py", line 27, in <module>
start()
File "/media/antonio/www/tourbase/booking-system/back-end/arctic_reservations/src/test.py", line 18, in start
responses = asyncio.gather(*tasks, return_exceptions=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/tasks.py", line 827, in gather
fut = _ensure_future(arg, loop=loop)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/tasks.py", line 680, in _ensure_future
return loop.create_task(coro_or_future)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/base_events.py", line 434, in create_task
self._check_closed()
File "/usr/lib/python3.11/asyncio/base_events.py", line 519, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
sys:1: RuntimeWarning: coroutine 'to_thread' was never awaited
</code></pre>
<p>If I put <code>loop = asyncio.get_event_loop()</code> and <code>loop.close()</code> outside of for loop, everything is ok.</p>
<p>Could anybody explain why this happens? As I understand it, the 2nd iteration should create a new event loop.</p>
<p>Python v3.11.6</p>
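<p>(For comparison: in the main thread, <code>get_event_loop()</code> keeps returning the loop that was set as current — including after <code>close()</code> — rather than creating a fresh one, which is why the 2nd iteration fails. Letting <code>asyncio.run</code> manage the loop's lifetime sidesteps this. A sketch:)</p>

```python
import asyncio

def some_func(number):
    print('task number:', number)
    return number

async def run_batch():
    tasks = [asyncio.to_thread(some_func, i) for i in range(5)]
    return await asyncio.gather(*tasks, return_exceptions=True)

def start():
    for r in range(2):
        print('r:', r)
        # asyncio.run creates a fresh event loop on each call
        # and closes it afterwards
        results = asyncio.run(run_batch())
        print('results:', results)

if __name__ == '__main__':
    start()
```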
|
<python><python-asyncio>
|
2024-02-22 17:12:47
| 1
| 5,204
|
TitanFighter
|
78,042,758
| 68,304
|
Finding conan package folder from conanfile.py
|
<p>We're using <code>conan v1.59</code> in our project (forced to as that's client's setup). It's a project in C, the build tool is <code>CMake</code>. This project has several dependencies. We're utilizing <code>conanfile.py</code> for building and packaging artifacts.</p>
<p>Let's say, <code>ch-common</code> is a requirement for this project. It's mentioned in <code>conanfile.py</code> as follows:</p>
<pre><code>def requirements(self):
    ...
    self.requires("ch-common/0.0.10@x/y")
    ...
</code></pre>
<p>The development environment is <code>Debian Linux</code>. <code>ch-common</code> is downloaded and CMake can find it, code compiles properly. <code>ch-common</code> only has a header file <code>cr_def.h</code>.</p>
<p>The artifact of our project is a library. Internally the library uses an enum in <code>cr_def.h</code>. The users of our project (those who'll be calling various functions of the library) will also need <code>cr_def.h</code>, because in many APIs defined in the library the enum is used as a parameter.</p>
<p>We have a <code>package()</code> in <code>conanfile.py</code> to package the artifacts:</p>
<pre><code>def package(self):
    cmake = CMake(self)
    cmake.install()
    self.copy("*.h", dst="include/project/", src="include/public/", keep_path=False)
</code></pre>
<p>We'd like to copy <code>cr_def.h</code> in the <code>package()</code> like others.</p>
<p>How can <code>cr_def.h</code> be referred in the <code>package()</code> without hardcoded fixed path? We <strong>don't want</strong> to do something like:</p>
<pre><code>self.copy("cr_def.h",
dst="include/project/",
src=os.path.expanduser('~') + "/.conan/data/ch-common/0.0.10/x/y/package/<hash>/include/",
keep_path=False)
</code></pre>
<p>Is there any variable holding the package location? Is there any better way to copy <code>cr_def.h</code>?</p>
<p>We can't ask our library users to include <code>ch-common</code> in their project - that's a client requirement.</p>
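<p>(A sketch for Conan 1.x, untested against this exact layout: within <code>package()</code>, each dependency's folders are exposed through <code>self.deps_cpp_info</code>, so the hardcoded cache path can be replaced by the resolved include path:)</p>

```python
# conanfile.py fragment (Conan 1.x); sketch, not verified against this project
def package(self):
    cmake = CMake(self)
    cmake.install()
    self.copy("*.h", dst="include/project/", src="include/public/", keep_path=False)
    # deps_cpp_info["<name>"] exposes rootpath and include_paths for that package
    ch_common_include = self.deps_cpp_info["ch-common"].include_paths[0]
    self.copy("cr_def.h", dst="include/project/", src=ch_common_include, keep_path=False)
```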
|
<python><cmake><conan>
|
2024-02-22 16:51:47
| 0
| 13,045
|
Donotalo
|
78,042,672
| 1,773,592
|
In polars how do I name a column that is created using group_by and n_unique?
|
<p>I am trying to replace some pandas with polars. I am new to polars.</p>
<p>Original code:</p>
<pre><code>return pd.DataFrame(
    data.groupby([list_of_fields]).size(),
    columns=["Count"],
).reset_index()
</code></pre>
<p>I am stuck on how to name the column "Count". So far I have tried:</p>
<pre><code>return pl.DataFrame(data.groupby([list_of_fields]).n_unique( "Count"))
</code></pre>
<p>but this gives:</p>
<pre><code>TypeError: n_unique() takes 1 positional argument but 2 were given
</code></pre>
<p><code>n_unique</code> only seems to allow a name parameter when it is not attached to <code>groupby</code>. How can I do this?</p>
|
<python><dataframe><python-polars>
|
2024-02-22 16:38:32
| 1
| 3,391
|
schoon
|
78,042,596
| 2,473,382
|
How to use different module in the parent class depending on the child?
|
<p>I am trying to have a parent class which will do the same thing for 2 child databases (Hana and Oracle in my example).</p>
<p>I would like to properly type the parent class.</p>
<p>The example I would like is as follows. It works at runtime, but typing gets crazy for multiple reasons:</p>
<ul>
<li>Module cannot be used as a type</li>
<li><code>T.Connection</code> is not valid syntax</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from typing import Type, cast, get_args

import oracledb  # want to use oracledb.Connection
from hdbcli import dbapi  # want to use dbapi.Connection

class Parent[T: oracledb | dbapi]:
    _db_type: Type[T]

    def __init_subclass__(cls) -> None:
        # see https://stackoverflow.com/a/71720366/2473382 to get T at runtime
        cls._db_type = cast(Type[T], get_args(cls.__orig_bases__[0])[0])

    def conn(self, **kwargs) -> T.Connection:
        return self._db_type.Connection(**kwargs)

class Hana(Parent[dbapi]): ...
class Oracle(Parent[oracledb]): ...
</code></pre>
<p>I could give <code>Connection</code> as parameter of <code>super().__init__</code>, for instance:</p>
<pre class="lang-py prettyprint-override"><code>class Parent[T: dbapi.Connection]:
    _conn: Type[T]

    def __init__(self, conn: Type[T]):
        self._conn = conn

    def conn(self, **kwargs) -> T:
        return self._conn(**kwargs)

class Hana(Parent[dbapi.Connection]):
    def __init__(self):
        super().__init__(dbapi.Connection)
</code></pre>
<p>But then if I need more than one object (Connection, Cursor...) I need to duplicate the parameters and generics.</p>
<p>What would be a clean way to achieve this?</p>
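<p>(One sketch that sidesteps "module cannot be used as a type" — not a complete answer, and it gives up the precise per-subclass return type: a <code>Protocol</code> declaring just the module attributes the parent uses. Modules satisfy Protocols structurally, and extra attributes such as <code>Cursor</code> extend the Protocol rather than multiplying generics. The stand-in module object here is hypothetical:)</p>

```python
from types import SimpleNamespace
from typing import Any, Protocol

class DBAPIModule(Protocol):
    # Declare only the attributes the parent actually touches.
    Connection: Any
    Cursor: Any

class Parent:
    def __init__(self, driver: DBAPIModule) -> None:
        self._driver = driver

    def conn(self, **kwargs):
        return self._driver.Connection(**kwargs)

# Stand-in for a real DB-API module (oracledb / hdbcli.dbapi):
fake_module = SimpleNamespace(Connection=dict, Cursor=list)

class Fake(Parent):
    def __init__(self) -> None:
        super().__init__(fake_module)

print(Fake().conn(user="u"))  # {'user': 'u'}
```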
|
<python><python-3.x><python-typing>
|
2024-02-22 16:28:31
| 1
| 3,081
|
Guillaume
|
78,042,589
| 4,659,530
|
pip requirements for relative packages
|
<p>I have following structure</p>
<pre class="lang-bash prettyprint-override"><code>libs/lib_1/pyproject.toml
libs/lib_2/pyproject.toml -> uses lib_1
service/service_1/pyproject.toml -> uses lib_2
</code></pre>
<p>In <code>service_1/pyproject.toml</code></p>
<p>I am using <code>lib_2</code> and imported it as</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "service-2"
version = "0.0.1"
dependencies = [
"lib_2@file:///${PROJECT_ROOT}/../../libs/lib_2",
]
</code></pre>
<p>In <code>lib2/pyproject.toml</code></p>
<p>I am using <code>lib_1</code> and imported it as</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "lib-2"
version = "0.0.1"
dependencies = [
"lib_1@file:///${PROJECT_ROOT}/../lib_1",
]
</code></pre>
<p>When I am in directory <code>libs/lib_2</code> and try to install packages it works perfectly.</p>
<p>When I am in directory <code>service/service_1</code> and try to install it fails because it is trying to search lib_1 at <code>service/lib_1</code>.</p>
<p>How can I fix this? I'm looking for a fix without Poetry.</p>
<p>It looks like it's not natively supported in pip; can we do it using the new uv (<a href="https://github.com/astral-sh/uv" rel="nofollow noreferrer">https://github.com/astral-sh/uv</a>)?</p>
|
<python><pip><python-packaging><uv>
|
2024-02-22 16:27:07
| 0
| 2,405
|
Rahul Kumar
|
78,042,588
| 2,013,056
|
List in Python is empty but not empty - Selenium
|
<p>I am trying to extract the paths of the files changed in Gitee. I can see there is only one file present in the files changed section, but when I try to get it, 2 empty blocks come along with the file. I checked everything but am not sure why this is happening. Following is my code. The URL I am trying to extract fixed files from is this - <a href="https://gitee.com/openharmony/arkui_ace_engine/pulls/27841/files" rel="nofollow noreferrer">https://gitee.com/openharmony/arkui_ace_engine/pulls/27841/files</a></p>
<pre><code>files_changed = driver.find_elements(By.XPATH, '//*[@class = "header file-header clearfix file-header-sticky"]//a')
files = []
count = count + 1
filecount = 1
for z in files_changed:
    temp = list((z.text).split(' '))
    if not temp:
        print("A Blank File")
        filecount = filecount + 1
    else:
        print("This is the temp value:", temp)
        temp.insert(0, filecount)
        files.append(temp)
        print("List of files changed:", files)
        filecount = filecount + 1
temp_issue['files_changed'] = files
</code></pre>
<p>The output I get from the above code is this:</p>
<pre><code>This is the temp value: ['']
List of files changed: [[1, '']]
This is the temp value: ['']
List of files changed: [[1, ''], [2, '']]
This is the temp value: ['test/unittest/core/pattern/overlay/overlay_manager_test_ng.cpp']
List of files changed: [[1, ''], [2, ''], [3, 'test/unittest/core/pattern/overlay/overlay_manager_test_ng.cpp']]
</code></pre>
<p>The output I expect to achieve is this:</p>
<pre><code>This is the temp value: ['test/unittest/core/pattern/overlay/overlay_manager_test_ng.cpp']
List of files changed: [[1, 'test/unittest/core/pattern/overlay/overlay_manager_test_ng.cpp']]
</code></pre>
<p>I even tried <code>if temp != ''</code>, but this didn't work either. Clearly, the list is not empty, yet its elements display as empty. How can I solve this?</p>
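<p>(A likely explanation: <code>z.text</code> for the decoy headers is an empty string, and <code>''.split(' ')</code> returns <code>['']</code> — a one-element list, which is truthy, so <code>if not temp</code> never fires. Testing the text itself works. A sketch with stand-in elements instead of live WebElements:)</p>

```python
class FakeElement:  # stand-in for a Selenium WebElement
    def __init__(self, text):
        self.text = text

files_changed = [
    FakeElement(""),
    FakeElement(""),
    FakeElement("test/unittest/core/pattern/overlay/overlay_manager_test_ng.cpp"),
]

files = []
filecount = 1
for z in files_changed:
    text = z.text.strip()
    if not text:  # '' is falsy, unlike the [''] that split() produced
        continue
    files.append([filecount, text])
    filecount += 1
print(files)
```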
|
<python><python-3.x><selenium-webdriver>
|
2024-02-22 16:26:57
| 1
| 649
|
Mano Haran
|
78,042,398
| 10,448,070
|
Getting 403 status code with GET request on Azure Function
|
<p>I'm encountering an issue with a Python script deployed as an Azure Function, which performs GET requests to URLs. The script works as expected for most URLs I've tested, but for certain ones, such as <a href="https://nationworldnews.com/mercedes-benz-improves-driver-experience-with-azure-openai-service/#:%7E:text=Mercedes-Benz%20announced%20that%20they%20have%20started%20integrating%20ChatGPT,MBUX%20voice%20assistant%20even%20more%20intuitive%20and%20conversational" rel="nofollow noreferrer">https://nationworldnews.com/mercedes-benz-improves-driver-experience-with-azure-openai-service/#:~:text=Mercedes-Benz%20announced%20that%20they%20have%20started%20integrating%20ChatGPT,MBUX%20voice%20assistant%20even%20more%20intuitive%20and%20conversational</a>, it returns a 403 HTTP status code when executed in Azure Functions. However, when I run the same script locally against this URL, it successfully returns a 200 status code. Upon inspecting the request headers in both scenarios, they are identical.</p>
<p>Sharing only the relevant bit.</p>
<pre class="lang-py prettyprint-override"><code>def send_request(url, headers):
    response = requests.get(url, allow_redirects=True, headers=headers, timeout=30)
    return response.status_code
</code></pre>
<p>Could someone explain why this discrepancy occurs and suggest a way to resolve it?</p>
|
<python><azure><http><server><python-requests>
|
2024-02-22 16:02:32
| 1
| 340
|
Coder_H
|
78,042,058
| 16,815,358
|
How to update cell output of a jupyter notebook which is hosted on a network IP
|
<p>I have a problem with a jupyter notebook that I have on a raspberry pi and am hosting on our IP address.</p>
<p>To do this, I used <code>jupyter notebook --generate-config</code> to generate the configuration file, where I then edited the <code>c.NotebookApp.ip = 'localhost'</code> to the IP address of our network. The jupyter notebook should display live results from measurements I do with different microprocessors.</p>
<p>The access to the home page is working great, and I can download the log files that I am generating. In the main file, however, I can see that it is being executed, but the values are not changing with time. The value is currently frozen at around 15:49:14, which is when I closed the first notebook where I started the execution.</p>
<p>The problem is happening so far with three laptops and even on the raspberry pi itself.</p>
<p>I found this closed GitHub issue: <a href="https://github.com/jupyter/jupyter/issues/83" rel="nofollow noreferrer">https://github.com/jupyter/jupyter/issues/83</a></p>
<p>It was closed due to inactivity</p>
<p>All my packages are up to date on all sides, as I built the whole system not long ago and ensured everything was stable. One of the laptops I tried was running Linux, and it also did not work.</p>
<p>Could something in the configuration file still need to be adjusted?</p>
|
<python><jupyter-notebook><raspberry-pi><remote-access>
|
2024-02-22 15:11:17
| 1
| 2,784
|
Tino D
|
78,042,035
| 11,101,156
|
Langsmith is incorrectly tracing tokens
|
<p>I have a simple langgraph chain in place, and I noticed that the token counting is oddly off in LangSmith in comparison to the OpenAI online tokenizer or a Python tokenizer:</p>
<p>Langsmith tokens (2,067):
<a href="https://i.sstatic.net/MnAoG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MnAoG.png" alt="enter image description here" /></a></p>
<p>Python program:</p>
<pre><code>import tiktoken

def num_tokens_from_string(string: str) -> int:
    encoding = tiktoken.get_encoding("cl100k_base")
    num_tokens = len(encoding.encode(string))
    return num_tokens

test_string = """ Ctrl+c ctrl+v from langsmith trace """
print(num_tokens_from_string(test_string))
</code></pre>
<p>Output:
11185</p>
<p>OpenAI tokenizer:
<a href="https://i.sstatic.net/douWZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/douWZ.png" alt="enter image description here" /></a></p>
<p>Question:</p>
<ol>
<li>Why are the token counts different for langsmith and OpenAI?</li>
<li>How do I send / set in Langsmith correct counting token methods? For this python code: <code>response = ChatOpenAI().invoke("Hello!")</code></li>
</ol>
|
<python><openai-api><langchain><langsmith>
|
2024-02-22 15:08:41
| 0
| 2,152
|
Jakub Szlaur
|
78,041,881
| 7,133,942
|
How to reduce the x-axis ticks in matplotlib and still spread them over the whole x-axis
|
<p>I have the following code to plot 6 subfigures with a common x-axis in matplotlib (the first subfigure contains many diagrams):</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

file_path = r"C:\Users\wi9632\Desktop\Results_End_MultiOpt\Load_Profile_Figure\Combined_Table.csv"
df = pd.read_csv(file_path, sep=';', encoding='latin-1')

methods = {
    "Dichotomous Method [kW]": ("Dichotomous Method", "tab:orange"),
    "Conventional Control [kW]": ("Conventional Control", "tab:purple"),
    "NSGA-II [kW]": ("NSGA-II", "tab:cyan"),
    "SPEA-II [kW]": ("SPEA-II", "gold"),
    "PALSS [kW]": ("PALSS", "red"),
    "RELAPALLS [kW]": ("RELAPALLS", "lawngreen")
}

fig, axes = plt.subplots(6, 1, figsize=(12, 20), sharex=True)

# Plotting the first subplot with 6 lines and adding legend
for method, (name, color) in methods.items():
    axes[0].plot(df["time of day"], df[method], label=name, color=color)
axes[0].set_ylabel("Power [kW]")
axes[0].set_title("Power Profiles for Different Methods")
axes[0].legend(loc='upper center', bbox_to_anchor=(0.5, 1.15), ncol=3)

variables = ["Space Heating [kW]", "DHW [kW]", "Availability of the EV", "Outside Temperature [°C]", "Price [Cent/kWh]"]
for i, variable in enumerate(variables, start=1):
    axes[i].plot(df["time of day"], df[variable], color='blue')
    axes[i].set_ylabel(variable)

plt.xlabel("Time of Day")
plt.xticks(range(0, 24), [f"{hour:02d}:00" for hour in range(24)])
plt.tight_layout()

combined_output_path = r"C:\Users\wi9632\Desktop\Results_End_MultiOpt\Load_Profile_Figure\ParetoFronts_combined.png"
plt.savefig(combined_output_path, dpi=200)
plt.show()
</code></pre>
<p>I want to have the x-axis with this data</p>
<pre><code>timeslot time of day Price [Cent/kWh] Space Heating [kW] DHW [kW] Electricity [kW] Availability of the EV Energy Consumption of the EV [kWh] Outside Temperature [°C] Dichotomous Method [kW] Conventional Control [kW] NSGA-II [kW] SPEA-II [kW] PALSS [kW] RELAPALLS [kW]
1 00:00 51 44 47 84 47 46 8 26 37 9 7 40 70
2 00:30 96 94 11 31 70 82 56 79 10 76 6 87 6
3 01:00 58 83 29 12 50 96 92 67 72 96 13 45 35
4 01:30 17 95 87 48 94 58 55 70 66 50 96 31 97
5 02:00 42 98 2 45 71 95 63 58 26 68 24 81 21
6 02:30 67 53 83 64 1 100 97 6 80 30 43 70 10
7 03:00 53 68 6 85 57 78 96 72 39 65 55 48 19
8 03:30 64 55 61 30 54 45 79 56 96 64 74 51 96
9 04:00 63 84 25 76 21 41 39 7 82 36 97 24 91
10 04:30 94 21 9 25 75 76 62 84 76 29 62 40 11
11 05:00 42 16 78 84 41 81 87 42 78 47 38 38 16
12 05:30 100 23 30 87 61 48 78 21 60 40 96 44 83
13 06:00 66 37 84 16 63 90 70 86 73 9 34 10 54
14 06:30 71 86 29 44 25 32 40 43 71 38 18 7 54
15 07:00 47 18 36 83 35 64 6 35 97 42 70 43 83
16 07:30 73 51 44 45 89 23 49 59 7 0 11 51 1
17 08:00 57 63 20 14 17 14 40 86 29 2 58 89 26
18 08:30 26 48 57 32 90 78 82 40 93 82 18 51 41
19 09:00 6 87 5 47 32 38 88 17 64 12 69 53 29
20 09:30 10 98 15 3 27 2 69 30 19 17 75 39 54
21 10:00 70 96 51 49 20 94 66 37 77 93 99 75 63
22 10:30 52 78 39 69 3 26 4 30 68 36 14 49 47
23 11:00 46 46 53 8 32 95 79 79 79 52 74 99 14
24 11:30 73 0 1 60 50 4 37 22 84 75 34 10 11
25 12:00 7 95 84 88 71 0 39 28 39 22 56 27 1
26 12:30 54 18 73 88 94 82 32 51 6 90 41 37 61
27 13:00 79 66 64 12 56 94 76 54 19 9 19 29 38
28 13:30 54 34 2 33 77 94 68 10 80 78 59 6 76
29 14:00 57 43 35 7 58 35 70 68 50 15 64 97 65
30 14:30 75 50 18 97 54 52 5 76 63 56 96 21 81
31 15:00 31 19 30 14 15 54 5 21 95 74 90 51 36
32 15:30 47 40 79 52 15 50 34 61 41 79 84 47 54
33 16:00 95 9 61 47 21 18 78 13 81 43 12 34 95
34 16:30 41 12 84 45 89 71 90 4 7 29 37 90 63
35 17:00 18 78 25 6 69 43 12 44 5 91 99 99 12
36 17:30 69 54 41 98 23 72 63 97 24 18 14 22 25
37 18:00 56 9 76 47 6 89 14 45 17 24 33 80 10
38 18:30 39 43 7 43 28 97 35 79 63 8 82 34 99
39 19:00 27 87 98 34 3 48 100 21 2 53 3 12 54
40 19:30 61 57 55 28 81 55 86 11 48 46 34 87 35
41 20:00 27 24 49 32 10 59 91 69 97 93 31 35 45
42 20:30 79 65 87 72 17 78 71 1 51 43 87 59 53
43 21:00 92 40 7 88 90 58 58 50 100 86 42 13 55
44 21:30 40 31 86 78 47 8 11 30 65 52 54 77 59
45 22:00 53 80 30 25 70 66 11 79 85 3 45 95 86
46 22:30 30 0 55 21 53 0 29 46 53 32 64 41 39
47 23:00 4 33 50 1 31 6 21 90 12 55 88 38 36
48 23:30 58 41 76 5 85 48 83 17 7 77 99 11 36
</code></pre>
<p>. There should be a tick for every hour of the day. So at 00:00, 01:00, 02:00 (but the data should be plotted in a time resolution of 30 minutes). The code above creates the following x-axis:</p>
<p>[![enter image description here][2]][2]</p>
<p>The x-axis ticks are not spread over the whole length of the x-axis. So the tick for 00:00 should be on the very left while the last tick at 23:00 should be (almost) on the very right.</p>
<p>Update: when using the following code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

file_path = r"C:\Users\wi9632\Desktop\Results_End_MultiOpt\Load_Profile_Figure\Combined_Table.csv"
df = pd.read_csv(file_path, sep=';', encoding='latin-1')

methods = {
    "Dichotomous Method [kW]": ("Dichotomous Method", "tab:orange"),
    "Conventional Control [kW]": ("Conventional Control", "tab:purple"),
    "NSGA-II [kW]": ("NSGA-II", "tab:cyan"),
    "SPEA-II [kW]": ("SPEA-II", "gold"),
    "PALSS [kW]": ("PALSS", "red"),
    "RELAPALLS [kW]": ("RELAPALLS", "lawngreen")
}

fig, axes = plt.subplots(6, 1, figsize=(12, 20), sharex=True)

# Plotting the first subplot with 6 lines and adding legend
for method, (name, color) in methods.items():
    axes[0].plot(df["time of day"], df[method], label=name, color=color)
axes[0].set_ylabel("Power [kW]")
axes[0].set_title("Power Profiles for Different Methods")
axes[0].legend(loc='upper center', bbox_to_anchor=(0.5, 1.15), ncol=3)

variables = ["Space Heating [kW]", "DHW [kW]", "Availability of the EV", "Outside Temperature [°C]", "Price [Cent/kWh]"]
for i, variable in enumerate(variables, start=1):
    axes[i].plot(df["time of day"], df[variable], color='blue')
    axes[i].set_ylabel(variable)

# for ax in axes:
ax = axes[-1]
labels = ax.get_xticklabels()
ax.set_xticks([i for i, lbl in enumerate(labels) if lbl.get_text().endswith('00')])
ax.tick_params(axis='x', rotation=90)

plt.xlabel("Time of Day")
plt.tight_layout()

combined_output_path = r"C:\Users\wi9632\Desktop\Results_End_MultiOpt\Load_Profile_Figure\ParetoFronts_combined.png"
plt.savefig(combined_output_path, dpi=200)
plt.show()
</code></pre>
<p>The diagrams look as they should, but there is one problem: there is too much empty white space before the first tick (00:00) and after the last tick (23:00). Also, the diagrams don't begin and end at the very left and right; there is just white space (as you can also see on the previously posted screenshots).
[1]: <a href="https://i.sstatic.net/lUucl.png" rel="nofollow noreferrer">https://i.sstatic.net/lUucl.png</a>
[2]: <a href="https://i.sstatic.net/QE6VR.png" rel="nofollow noreferrer">https://i.sstatic.net/QE6VR.png</a></p>
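<p>One common way to remove that padding is to zero the x-margins of the axes (or set explicit x-limits); with <code>sharex=True</code>, applying it to one axes affects all of them. A minimal standalone sketch, independent of the CSV above:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, so this runs in scripts
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(range(10), range(10))

# matplotlib pads 5% on each side by default; a zero x-margin makes the
# line start and end exactly at the left and right edges of the axes.
ax.margins(x=0)

print(ax.get_xlim())  # (0.0, 9.0): exactly the data limits, no padding
```

<p>An equivalent alternative would be <code>ax.set_xlim(...)</code> with the first and last values of your "time of day" column, though I can't verify that against your CSV.</p>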
|
<python><matplotlib>
|
2024-02-22 14:47:41
| 2
| 902
|
PeterBe
|
78,041,844
| 1,862,861
|
How can I differentiate between a PyTorch Tensor and a nested tensor?
|
<p>Recently, PyTorch introduced the <a href="https://pytorch.org/docs/stable/nested.html" rel="nofollow noreferrer">nested tensor</a>. However, if I create a nested tensor, e.g.,</p>
<pre class="lang-py prettyprint-override"><code>import torch
a = torch.randn(20, 128)
nt = torch.nested.nested_tensor([a, a], dtype=torch.float32)
</code></pre>
<p>and then look at its class type, it shows:</p>
<pre class="lang-py prettyprint-override"><code>type(nt)
torch.Tensor
</code></pre>
<p>i.e., the class type is just a regular PyTorch <code>Tensor</code>. So, <code>type(nt) == torch.Tensor</code> and <code>isinstance(nt, torch.Tensor)</code> will both return <code>True</code>.</p>
<p>So, my question is, is there a way to differentiate between a regular tensor and a nested tensor?</p>
<p>One way I can think of is that the <code>size</code> method for nested tensors (currently) works differently from that for regular tensors, in that it requires an argument and otherwise raises a <code>RuntimeError</code>. So, a solution might be:</p>
<pre><code>def is_nested_tensor(nt):
if not isinstance(nt, torch.Tensor):
return False
try:
# try calling size without an argument
nt.size()
return False
except RuntimeError:
return True
</code></pre>
<p>but is there something simpler that doesn't rely on something like the <code>size</code> method not changing in the future?</p>
|
<python><pytorch><tensor>
|
2024-02-22 14:41:10
| 1
| 7,300
|
Matt Pitkin
|
78,041,833
| 9,582,542
|
Click on svg label that has changing class
|
<p>The tag below has a changing class value, so that cannot be used to reference the tag. How can I locate this tag and perform a click action?</p>
<pre><code><div class="x6s0dn4 x78zum5 x1q0g3np xs83m0k"><svg aria-label="Options" class="x1lliihq x1n2onr6 x5n08af" fill="currentColor" height="32" role="img" viewBox="0 0 24 24" width="32"><title>Options</title><circle cx="12" cy="12" r="1.5"></circle><circle cx="6" cy="12" r="1.5"></circle><circle cx="18" cy="12" r="1.5"></circle></svg></div>
</code></pre>
<p>Code below worked at one time but the class value changes.</p>
<pre><code>driver.find_element_by_xpath('//div[@class="x6s0dn4 x78zum5 xdt5ytf xl56j7k"]/*[name()="svg"][@aria-label="Options"]').click()
</code></pre>
<p>How could I write this so it's not dependent on the class value and still performs the click action?</p>
|
<python><selenium-webdriver>
|
2024-02-22 14:40:08
| 2
| 690
|
Leo Torres
|
78,041,796
| 1,062,645
|
sklearn ridgeCV versus ElasticNetCV
|
<p>When setting <code>l1_ratio = 0</code>, the elastic net reduces to ridge regression.
However, I am unable to match the results obtained from sklearn's <code>RidgeCV</code> versus <code>ElasticNetCV</code>. They appear to produce very different optimal alpha values:</p>
<pre><code>import numpy as np
from sklearn.linear_model import ElasticNetCV, RidgeCV
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
# data generation
np.random.seed(123)
beta = 0.35
N = 120
p = 30
X = np.random.normal(1, 2, (N, p))
y = np.random.normal(5, size=N) + beta * X[:, 0]
#lambdas to try:
l = np.exp(np.linspace(-2, 8, 80))
ridge1 = RidgeCV(alphas = l, store_cv_values=True).fit(X, y)
MSE_cv = np.mean(ridge1.cv_values_, axis =0)#.shape
y_pred = ridge1.predict(X=X)
MSE = mean_squared_error(y_true=y,y_pred=y_pred)
print(f"best alpha: {np.round(ridge1.alpha_,3)}")
print(f"MSE: {np.round(MSE,3)}")
</code></pre>
<p>which yields
<strong>best alpha: 305.368</strong>,
<em>MSE: 0.952</em></p>
<p>While <code>ElasticNetCV</code> ends up with a similar MSE, its penalty parameters seem to be on a different scale (actually agreeing with the R implementation)</p>
<pre><code>ridge2 = ElasticNetCV(cv=10, alphas = l, random_state=0, l1_ratio=0);
ridge2.fit(X, y)
y_pred = ridge2.predict(X=X)
MSE = mean_squared_error(y_true=y,y_pred=y_pred)
print(f"best alpha: {np.round(ridge2.alpha_,3)}")
print(f"MSE: {np.round(MSE,3)}")
</code></pre>
<p>yielding
<strong>best alpha: 2.192</strong>,
<em>MSE: 0.934</em></p>
<p>Are the penalties defined differently?
Does one of them maybe divide by N?
Or is it due to the very different cross-validation strategies?</p>
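<p>For what it's worth, the two estimators define the penalty on different scales: <code>Ridge</code> minimizes <code>||y - Xw||² + α||w||²</code>, while <code>ElasticNet</code> with <code>l1_ratio=0</code> minimizes <code>(1/2n)||y - Xw||² + (α/2)||w||²</code>, so an elastic-net alpha of <code>a</code> corresponds to a ridge alpha of roughly <code>n·a</code> (here 120 · 2.192 ≈ 263, the same ballpark as 305; the remaining gap is plausibly the different CV schemes — <code>RidgeCV</code> defaults to efficient leave-one-out, versus 10-fold here). A small sketch of the scaling, on my own toy data rather than your exact setup:</p>

```python
import numpy as np
from sklearn.linear_model import Ridge, ElasticNet

rng = np.random.default_rng(0)
n, p = 120, 30
X = rng.normal(1, 2, (n, p))
y = rng.normal(5, size=n) + 0.35 * X[:, 0]

a = 2.0
# ElasticNet(l1_ratio=0): (1/2n)||y - Xw||^2 + (a/2)||w||^2
# Ridge:                  ||y - Xw||^2 + alpha * ||w||^2
# Multiplying the first objective by 2n shows alpha_ridge = n * a.
enet = ElasticNet(alpha=a, l1_ratio=0, max_iter=100_000, tol=1e-10).fit(X, y)
ridge = Ridge(alpha=n * a).fit(X, y)

print(np.max(np.abs(enet.coef_ - ridge.coef_)))  # ~0: the coefficients agree
```

<p>(ElasticNet emits a warning for <code>l1_ratio=0</code>, since coordinate descent is not the recommended solver for a pure ridge penalty, but the fitted coefficients match.)</p>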
|
<python><lasso-regression>
|
2024-02-22 14:34:19
| 1
| 378
|
Markus Loecher
|
78,041,738
| 81,444
|
Left join of 2 exploded polars columns
|
<h4>Considering</h4>
<pre><code>import polars as pl
df = pl.DataFrame({"a": [
[1, 2],
[3]],
"b": [
[{"id": 1, "x": 1}, {"id": 3, "x": 3}],
[{"id": 3, "x": 4}]]})
</code></pre>
<p>That looks like:</p>
<pre><code>+------+---------------------+
|a |b |
+------+---------------------+
|[1, 2]|[{1,1}, {3,3}]|
|[3] |[{3,4}] |
+------+---------------------+
</code></pre>
<h4>How to</h4>
<ul>
<li>get one row for each flatten <code>a</code> element and</li>
<li>if the list of <code>dict</code> in <code>b</code> contains the <code>a</code> element as <code>id</code></li>
<li>then have the corresponding <code>x</code> value in the column <code>b</code></li>
<li>otherwise <code>b</code> should be <code>null</code></li>
</ul>
<hr />
<h4>Current approach</h4>
<p><code>.explode</code> both <code>a</code> and <code>b</code> and <code>.filter</code> (INNER JOIN):</p>
<pre><code>df.explode("a").explode("b").filter(
pl.col("a") == pl.col("b").struct.field('id')
).select(
pl.col("a"),
pl.col("b").struct.field("x")
)
</code></pre>
<p>Unfortunately, this gives only the inner-join result (which is expected):</p>
<pre><code>+-+----+
|a|b |
+-+----+
|1|1 |
|3|4 |
+-+----+
</code></pre>
<p>Instead, I am aiming for the full "LEFT JOIN" result:</p>
<pre><code>+-+----+
|a|b |
+-+----+
|1|1 |
|2|null|
|3|4 |
+-+----+
</code></pre>
<p>How to efficiently get the desired result when the DataFrame is structured like that?</p>
|
<python><left-join><python-polars><pandas-explode>
|
2024-02-22 14:27:08
| 2
| 8,163
|
Filippo Vitale
|
78,041,728
| 4,118,756
|
Plot facet normals in trimesh from the facet center
|
<p>For this sample torus mesh in <code>trimesh</code>, I've managed to plot the normals to facets:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import trimesh
mesh = trimesh.creation.torus(9, 3)
# adapted from https://github.com/mikedh/trimesh/issues/549
vec = np.column_stack((mesh.facets_origin, mesh.facets_origin + (mesh.facets_normal * mesh.scale * .05)))
path = trimesh.load_path(vec.reshape((-1, 2, 3)))
trimesh.Scene([mesh, path]).show(smooth=False)
</code></pre>
<p>below is the resultant plot:
<a href="https://i.sstatic.net/qTh74.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qTh74.png" alt="trimesh view" /></a></p>
<p>but as you can see, the origin of each facet is simply one of its vertices. How can the center of the facet be used as the origin for its normal instead? I see <code>mesh.triangles_center</code>, but there seems to be no such convenience defined for facets.</p>
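<p>One possible workaround (an assumption on my part, not a documented facet attribute): compute an area-weighted average of the triangle centers in each facet, e.g. from <code>mesh.facets</code> (lists of face indices), <code>mesh.triangles_center</code> and <code>mesh.area_faces</code> — attribute names worth double-checking against the trimesh docs. The area-weighted centroid itself, in plain numpy:</p>

```python
import numpy as np

def facet_center(tri_centers, tri_areas):
    """Area-weighted centroid of a group of coplanar triangles."""
    tri_centers = np.asarray(tri_centers, dtype=float)
    tri_areas = np.asarray(tri_areas, dtype=float)
    return (tri_centers * tri_areas[:, None]).sum(axis=0) / tri_areas.sum()

# Two equal-area triangles centered at x=0 and x=2 -> centroid at x=1
centers = [[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]]
areas = [1.0, 1.0]
print(facet_center(centers, areas))  # [1. 0. 0.]
```

<p>Applied per facet, that would presumably look like <code>np.array([facet_center(mesh.triangles_center[idx], mesh.area_faces[idx]) for idx in mesh.facets])</code>, which could replace <code>mesh.facets_origin</code> in the plotting code above.</p>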
|
<python><numpy><mesh><trimesh>
|
2024-02-22 14:25:41
| 1
| 3,923
|
Vlas Sokolov
|
78,041,666
| 8,276,908
|
Implement an OR relationship for items that can occur in a JSON schema
|
<p>I am using jsonschema 4.17.3 with Python 3.7 to validate a JSON file. In this project I am facing a challenge, where I need to implement an OR relationship. This means in my case that I require of 2 items that either one of them is present, or both. This can be best explained with the JSON snippets below. Here, I want to be able to configure tasks for a tool. The first task, <em><strong>download_data</strong></em>, downloads data from a REST-API server, and the second task, <em><strong>upload_data</strong></em>, uploads the resulting (downloaded and converted) data to a relational database system. So, both tasks can occur together if I require my Python tool to do it all in one go.</p>
<pre><code>{
"tasks":{
"download_data":{
"targetdir": "data\\",
"Rest":
{
"type": "MyRestAPIProvider",
"server": "MyServerName",
"api_version": "9.99",
"source": "production",
"user": "de.my.corp\\User",
"password": "!SecretPasswort!"
}
},
"upload_data":{
"zip": "",
"database_server":{
"host":"MyDatabaseHost",
"port":"99999",
"database":"MyDatabase",
"user":"MyDatabaseUser",
"password":"MySecretPassword"
}
}
}
}
</code></pre>
<p>...but each task can also occur by itself alone (if I want to only download data, or only upload data that I have downloaded previously):</p>
<p>JSON sample to only download data:</p>
<pre><code>{
"tasks":{
"download_data":{
"targetdir": "data\\",
"Rest":
{
"type": "MyRestAPIProvider",
"server": "MyServerName",
"api_version": "9.99",
"source": "production",
"user": "de.my.corp\\User",
"password": "!SecretPasswort!"
}
}
}
}
</code></pre>
<p>JSON sample to only upload data:</p>
<pre><code>{
"tasks":{
"upload_data":{
"zip": "",
"database_server":{
"host":"MyDatabaseHost",
"port":"99999",
"database":"MyDatabase",
"user":"MyDatabaseUser",
"password":"MySecretPassword"
}
}
}
}
</code></pre>
<p>The best JSON Schema for validating this is the JSON schema shown below. Where I actually use 3 JSON Schema files of the listing below:</p>
<ol>
<li>A JSON schema containing definitions <em><strong>for both tasks</strong></em> as shown below</li>
<li>A JSON schema containing only the definitions for the <em><strong>download_data</strong></em> task (an extract from listing below)</li>
<li>A JSON schema containing only the definitions for the <em><strong>upload_data</strong></em> task (an extract from listing below)</li>
</ol>
<p>Next, I try to validate a given JSON file against one of the 3 files and pronounce the JSON file valid, if it succeeded on 1., 2. or 3.</p>
<pre><code> {
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"tasks": {
"type": "object",
"properties": {
"download_data": {
"type": "object",
"properties": {
"targetdir": { "type": "string" },
"Rest": {
"type": "object",
"properties": {
"type": { "type": "string" },
"server": { "type": "string" },
"api_version": { "type": "string" },
"source": { "type": "string" },
"user": { "type": "string" },
"password": { "type": "string" }
}, "required": ["type", "server", "api_version", "source", "user", "password"]
}
},
"required": ["targetdir", "Rest"]
},
"upload_data": {
"type": "object",
"properties": {
"zip": { "type": "string" },
"database_server": {
"type": "object",
"properties": {
"host": { "type": "string" },
"port": { "type": "string" },
"database": { "type": "string" },
"user": { "type": "string" },
"password": { "type": "string" }
}, "required": ["host", "port", "database", "user", "password"
]
}
}, "required": ["zip", "database_server"]
}
            }, "required": ["download_data", "upload_data"]
}
},
"required": ["tasks"]
}
</code></pre>
<p>My question is: What JSON schema representation do I need to validate this with only 1 JSON schema file, which should validate OK, if only one of the tasks is present, or both. But should fail, if there is no task present.</p>
<p>Bonus question: Is there a good recommended resource to find such non-trivial cases and learn from examples that might be available in a reference.</p>
<p>I have obviously been playing with this problem for a while before coming here. But so far, I have found only simple 101 style docs (<a href="https://json-schema.org/learn/getting-started-step-by-step" rel="nofollow noreferrer">https://json-schema.org/learn/getting-started-step-by-step</a>) that do not seem to cover non-trivial cases like mine :-(</p>
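<p>For context, JSON Schema has an <code>anyOf</code> keyword that expresses exactly this OR relationship: the instance is valid if at least one subschema matches, which includes the case where both tasks are present, and fails if neither is. A sketch of the idea (the <code>$ref</code> targets are placeholders standing in for the existing task definitions above):</p>

```json
{
  "type": "object",
  "properties": {
    "tasks": {
      "type": "object",
      "properties": {
        "download_data": { "$ref": "#/definitions/download_data" },
        "upload_data": { "$ref": "#/definitions/upload_data" }
      },
      "anyOf": [
        { "required": ["download_data"] },
        { "required": ["upload_data"] }
      ]
    }
  },
  "required": ["tasks"]
}
```

<p>A single schema like this removes the need to validate against three separate files.</p>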
|
<python><json><python-3.x><jsonschema>
|
2024-02-22 14:15:18
| 0
| 1,071
|
user8276908
|
78,041,555
| 6,394,617
|
Python development dependencies
|
<p>I'm familiar with installing dependencies using a <code>requirements.txt</code> or <code>environment.yml</code>, but I've only ever seen syntax in those files like <code>some_package>=1.2.3</code>.</p>
<p>What does it mean when dependencies are listed with curly braces, as in:</p>
<pre><code>pytest = "^6.2.5"
coverage = {extras = ["toml"], version = "^5.5"}
safety = "^1.10.3"
mypy = "^0.910"
typeguard = "^2.12.1"
xdoctest = {extras = ["colors"], version = "^0.15.5"}
Sphinx = "^4.1.2"
sphinx-autobuild = "^2021.3.14"
</code></pre>
<p>and how do you install those dependences?</p>
<p>Trying to install those by treating the file as a <code>requirements.txt</code> or <code>environment.yml</code> throws an <code>ERROR: Invalid requirement</code> or <code>CondaValueError: invalid package specification</code>, respectively.</p>
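<p>For context: that syntax is TOML from a Poetry <code>pyproject.toml</code>, not a <code>requirements.txt</code>. A caret constraint like <code>^6.2.5</code> means <code>>=6.2.5,<7.0.0</code>, and the curly-brace (inline table) form adds options such as extras. A hedged sketch of the surrounding file (the section name is an assumption — check the actual <code>pyproject.toml</code>):</p>

```toml
# pyproject.toml (Poetry dev-dependency section, pre-1.2 naming)
[tool.poetry.dev-dependencies]
pytest = "^6.2.5"                                 # i.e. >=6.2.5,<7.0.0
coverage = {extras = ["toml"], version = "^5.5"}  # pip equivalent: coverage[toml]>=5.5,<6.0

# Installed with Poetry rather than pip -r / conda:
#   pip install poetry
#   poetry install
```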
|
<python><dependency-management>
|
2024-02-22 13:58:42
| 1
| 913
|
Joe
|
78,041,062
| 14,282,714
|
At least one of TensorFlow 2.0 or PyTorch should be installed
|
<p>I would like to use this image-to-text model from <a href="https://huggingface.co/llava-hf/llava-1.5-7b-hf" rel="nofollow noreferrer">HuggingFace</a>. When I run the code from the example I get an error:</p>
<pre><code>from transformers import pipeline
from PIL import Image
import requests
model_id = "llava-hf/llava-1.5-7b-hf"
pipe = pipeline("image-to-text", model=model_id)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:"
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
</code></pre>
<p>Error:</p>
<pre><code>RuntimeError: At least one of TensorFlow 2.0 or PyTorch should be installed. To install TensorFlow 2.0, read the instructions at https://www.tensorflow.org/install/ To install PyTorch, read the instructions at https://pytorch.org/.
</code></pre>
<p>This error happens at loading the model in the <code>pipeline</code>. I understand that we have to use <code>Tensorflow</code> or <code>Pytorch</code>. So I installed it using <code>pip install tensorflow</code>:</p>
<pre><code>Package Version
---------------------------- ----------
absl-py 2.1.0
appnope 0.1.4
asttokens 2.4.1
astunparse 1.6.3
cachetools 5.3.2
certifi 2024.2.2
charset-normalizer 3.3.2
comm 0.2.1
debugpy 1.6.7
decorator 5.1.1
exceptiongroup 1.2.0
executing 2.0.1
filelock 3.13.1
flatbuffers 23.5.26
fsspec 2024.2.0
gast 0.5.4
google-auth 2.28.1
google-auth-oauthlib 1.2.0
google-pasta 0.2.0
grpcio 1.62.0
h5py 3.10.0
huggingface-hub 0.20.3
idna 3.6
importlib-metadata 7.0.1
ipykernel 6.29.2
ipython 8.21.0
jedi 0.19.1
jupyter_client 8.6.0
jupyter_core 5.7.1
keras 2.15.0
libclang 16.0.6
Markdown 3.5.2
MarkupSafe 2.1.5
matplotlib-inline 0.1.6
mkl-fft 1.3.8
mkl-random 1.2.4
mkl-service 2.4.0
ml-dtypes 0.2.0
nest_asyncio 1.6.0
numpy 1.26.3
oauthlib 3.2.2
opt-einsum 3.3.0
packaging 23.1
parso 0.8.3
pexpect 4.9.0
pickleshare 0.7.5
pillow 10.2.0
pip 23.3.1
platformdirs 4.2.0
prompt-toolkit 3.0.42
protobuf 4.25.3
psutil 5.9.8
ptyprocess 0.7.0
pure-eval 0.2.2
pyasn1 0.5.1
pyasn1-modules 0.3.0
Pygments 2.17.2
pytesseract 0.3.10
python-dateutil 2.8.2
PyYAML 6.0.1
pyzmq 24.0.1
regex 2023.12.25
requests 2.31.0
requests-oauthlib 1.3.1
rsa 4.9
safetensors 0.4.2
setuptools 68.2.2
six 1.16.0
stack-data 0.6.2
tensorboard 2.15.2
tensorboard-data-server 0.7.2
tensorflow 2.15.0
tensorflow-estimator 2.15.0
tensorflow-io-gcs-filesystem 0.36.0
termcolor 2.4.0
tokenizers 0.15.2
tornado 6.4
tqdm 4.66.2
traitlets 5.14.1
transformers 4.38.1
typing_extensions 4.9.0
urllib3 2.2.1
wcwidth 0.2.13
Werkzeug 3.0.1
wheel 0.41.2
wrapt 1.14.1
zipp 3.17.0
</code></pre>
<p>So I'm using tensorflow <code>2.15.0</code> version. I'm using <code>Python 3.10.13</code> on a Mac intel. I followed this question: <a href="https://stackoverflow.com/questions/64337550/neither-pytorch-nor-tensorflow-2-0-have-been-found-models-wont-be-available">Neither PyTorch nor TensorFlow >= 2.0 have been found.Models won't be available and only tokenizers, configuration and file/data utilities can be used</a> but installing the <code>tensorflow-gpu</code> with conda doesn't work. I followed the <a href="https://huggingface.co/docs/transformers/v4.15.0/installation" rel="nofollow noreferrer">installation</a> guide on HuggingFace but also doesn't work. I can't find a workaround for this. So I was wondering if anyone knows how to fix this issue?</p>
|
<python><tensorflow><huggingface-transformers>
|
2024-02-22 12:50:58
| 0
| 42,724
|
Quinten
|
78,040,964
| 10,333,905
|
How to properly pass dictionary to custom operator in Apache Airflow?
|
<p>I have created a custom operator which takes some parameters and ultimately triggers a glue job. Here's how it looks -</p>
<pre class="lang-py prettyprint-override"><code>task = MyCustomOperator(
task_id="my_op",
custom_property=False,
r_params = {
"run_date": "{{ ds }}"
}
)
</code></pre>
<p>Inside the custom operator we use <code>json.dumps</code> on <code>r_params</code> and pass it to glue operator, which only accepts strings.</p>
<p>The problem is some of our DAGs are using <code>render_template_as_native_obj=True</code> so the above operator fails with an error saying expected <code>str</code> got <code>dict</code>.</p>
<p>My understanding is even after converting <code>r_params</code> to <code>str</code> using <code>json.dumps</code> it's still getting converted back to <code>dict</code> after rendering <code>{{ ds }}</code>.</p>
<p>What's the correct way to pass dict to a custom operator so that it doesn't clash with the native object property?</p>
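<p>One defensive pattern (a sketch, not an Airflow-specific API) is to normalize the parameter inside the operator after templates have rendered, so the operator works both with and without <code>render_template_as_native_obj=True</code>:</p>

```python
import json

def normalize_params(r_params):
    """Return a JSON string whether templating yielded a dict or a str.

    With render_template_as_native_obj=True, a templated dict is rendered
    back into a native dict; without it, it may arrive as a string.
    """
    if isinstance(r_params, str):
        return r_params
    return json.dumps(r_params)

print(normalize_params({"run_date": "2024-02-22"}))   # JSON string
print(normalize_params('{"run_date": "2024-02-22"}')) # unchanged
```

<p>Called at the top of the operator's <code>execute</code>, this sidesteps the str-vs-dict mismatch regardless of how the DAG is configured.</p>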
|
<python><airflow>
|
2024-02-22 12:34:39
| 1
| 499
|
Dark Matter
|
78,040,908
| 1,714,692
|
Is it possible in a multiindexed Pandas dataframe to have a column whose values refer to a higher level index?
|
<p>Suppose I have such a dataframe in Pandas:</p>
<pre><code>df = pd.DataFrame({'a':[4,4,8,8],'b':[4,5,6,5], 'd':[0,1,2,1]})
multi_idx = pd.MultiIndex.from_arrays([[0,0,1,1],[0,1,0,1]])
df.index= multi_idx
</code></pre>
<p>which outputs this shape:</p>
<pre><code> a b d
0 0 4 4 0
1 4 5 1
1 0 8 6 2
1 8 5 1
</code></pre>
<p>You see that the values of column <code>a</code> are repeated based on the first-level index. I am looking for a way to avoid this repetition. One option, of course, would be to split the information across multiple dataframes, i.e., having a separate dataframe for column <code>a</code> and the first-level index. However, I was wondering if there is somehow a way, using a multiindex and multi-level columns, to create a column whose values correspond to all rows of a higher-level index.</p>
<p>Visually I would like something like that: is that possible in Pandas?</p>
<pre><code>| | | a | b | c |
|----|----|-----|-----|-----|
|idx1|idx2| | | |
|----|----|-----|-----|-----|
| | 0 | | 4 | 0 |
| 0 |----| 4 |-----|-----|
| | 1 | | 5 | 1 |
|----|----|-----|-----|-----|
| | 0 | | 6 | 2 |
| 1 |----| 8 |-----|-----|
| | 1 | | 5 | 1 |
|----|----|-----|-----|-----|
</code></pre>
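<p>As far as I know, pandas always stores one value per row, so the underlying frame can't be de-duplicated like that — but for display, a common workaround is to blank out the repeats wherever the first index level duplicates a value already seen. A sketch using the frame from the question:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [4, 4, 8, 8], 'b': [4, 5, 6, 5], 'd': [0, 1, 2, 1]})
df.index = pd.MultiIndex.from_arrays([[0, 0, 1, 1], [0, 1, 0, 1]])

shown = df.astype(object)  # object dtype so '' can sit next to ints
# Blank 'a' on rows whose level-0 index value was already seen above
repeat = shown.index.get_level_values(0).duplicated()
shown.loc[repeat, 'a'] = ''
print(shown)  # 'a' appears only on the first row of each level-0 group
```

<p>This is display-only: computations should keep using the original <code>df</code>, where the value is available on every row.</p>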
|
<python><pandas><dataframe><multi-index>
|
2024-02-22 12:27:34
| 1
| 9,606
|
roschach
|
78,040,873
| 9,313,033
|
Type hints for mixins that reference attributes of a third-party base class
|
<p>I'm trying to add type hints to a mixin class that is to be used alongside an external, third-party class. That is, the mixin relies on members of the third-party class.</p>
<p>For example, a mixin for a Django form:</p>
<pre class="lang-py prettyprint-override"><code># mixin_typing.py
from django import forms
class SuffixFormMixin:
suffix: str
def add_suffix(self, field_name: str) -> str:
# prefix is an attribute of forms.Form
return f"{self.prefix}__{field_name}__{self.suffix}"
class SuffixForm(SuffixFormMixin, forms.Form):
pass
</code></pre>
<p>Understandably, mypy will complain about the <code>add_suffix</code> method:</p>
<pre><code>"SuffixFormMixin" has no attribute "prefix"
</code></pre>
<p>IDE (PyCharm) will also throw a warning:</p>
<pre><code>Unresolved attribute reference 'prefix' for class 'SuffixFormMixin'
</code></pre>
<h1>Question:</h1>
<p><strong>Is there any "easy" solution that lets the mixin understand that <code>self</code> contains attributes/methods of <code>forms.Form</code>?</strong>
Here is a github issue that addresses this, but sadly didn't go anywhere: <a href="https://github.com/python/typing/issues/246" rel="nofollow noreferrer">https://github.com/python/typing/issues/246</a></p>
<p>Maybe some kind of typing object or other mypy-fu that acts as a promise to the mixin class that the future "partner class" has the members that the mixin is using?</p>
<h1>Attempted solutions</h1>
<p>The suggestions I have found so far all have drawbacks:</p>
<h2>Type-hinting self to the partner class</h2>
<p>(works for mypy: ✅, doesn't work for IDE: ❌)</p>
<p>I have seen <a href="https://github.com/python/mypy/issues/5837#issuecomment-665309244" rel="nofollow noreferrer">suggestions</a> to type hint self to the class that the mixin will later be used with. Here <code>forms.Form</code>:</p>
<pre class="lang-py prettyprint-override"><code> def add_suffix(self: forms.Form, field_name: str) -> str:
return f"{self.prefix}__{field_name}__{self.suffix}"
</code></pre>
<p>mypy no longer complains (although it really should?), but the IDE still does. This time about the <code>suffix</code> attribute:</p>
<pre><code>Unresolved attribute reference 'suffix' for class 'Form'
</code></pre>
<h2>Type-hinting self to future concrete class</h2>
<p>(mypy: ❌, IDE: ✅)</p>
<p>This goes a bit further than the previous suggestion:</p>
<pre class="lang-py prettyprint-override"><code>class SuffixFormMixin:
suffix: str
def add_suffix(self: "SuffixForm", field_name: str) -> str:
return f"{self.prefix}__{field_name}__{self.suffix}"
class SuffixForm(SuffixFormMixin, forms.Form):
pass
</code></pre>
<p>With this, the IDE can resolve all the attributes correctly, but mypy throws an error:</p>
<pre><code>The erased type of self "mixin_typing.SuffixForm" is not a supertype of its class "mixin_typing.SuffixFormMixin"
</code></pre>
<h2>Type-hinting self to a Union</h2>
<p>(mypy: ❌, IDE: ✅)</p>
<pre class="lang-py prettyprint-override"><code> def add_suffix(self: Union["SuffixFormMixin", forms.Form], field_name: str) -> str:
return f"{self.prefix}__{field_name}__{self.suffix}"
</code></pre>
<p>Again, the IDE understands what's up and can resolve attributes correctly, but mypy complains:</p>
<pre><code>Item "SuffixFormMixin" of "SuffixFormMixin | Any" has no attribute "prefix"
</code></pre>
<h2>Type-hinting self to a TypeVar</h2>
<p>(mypy: ✅, IDE: ❌)</p>
<p><a href="https://stackoverflow.com/a/68141872/9313033">This</a> suggests declaring the future base class as a TypeVar and using that as a hint on self.</p>
<pre class="lang-py prettyprint-override"><code>FormType = TypeVar("FormType", bound=forms.Form)
...
def add_suffix(self: FormType, field_name: str) -> str:
return f"{self.prefix}__{field_name}__{self.suffix}"
</code></pre>
<p>This doesn't generate any warnings, but it seems that this is more because, by using TypeVar, I'm hiding types, rather than it being a valid solution. For instance, you can do this without mypy or the IDE complaining:</p>
<pre class="lang-py prettyprint-override"><code>FormType = TypeVar("FormType", bound=forms.Form)
class SuffixFormMixin:
suffix: str
def this_will_explode(self: FormType) -> Any:
return self.suffix + 1 # str + int!
</code></pre>
<p>I guess the type for <code>self.suffix</code> is assumed to be <code>Any</code>, because it can't be resolved anymore? The IDE doesn't show any information on <code>self.suffix</code> either when hovering.</p>
<p>This isn't really workable either considering that mypy absolutely must catch a problem like that in the <code>this_will_explode</code> method.</p>
<h2>Inherting a protocol</h2>
<p>(mypy: ✅, IDE: ✅)</p>
<p>Declaring a Protocol and having the mixin inherit it, as described <a href="https://stackoverflow.com/a/70907644/9313033">here</a>, looks to be the most complete solution. The protocol describes bits of the partner class that the mixin is using.</p>
<pre class="lang-py prettyprint-override"><code># mixin_typing.py
from typing import Protocol
from django import forms
class HasPrefix(Protocol):
prefix: str
class SuffixFormMixin(HasPrefix):
suffix: str
def add_suffix(self, field_name: str) -> str:
return f"{self.prefix}__{field_name}__{self.suffix}"
class SuffixForm(SuffixFormMixin, forms.Form):
pass
</code></pre>
<p>mypy is happy, IDE knows the types, but...:</p>
<ul>
<li>it requires adding classes</li>
<li>the protocol must reflect the mixins partner class <code>forms.Form</code> <strong>which I do not have control over</strong></li>
<li>the protocol "blocks" the IDE from navigating to the actual implementation on the parent class (i.e. ctrl + click on <code>self.prefix</code> jumps to <code>HasPrefix.prefix</code> not <code>forms.Form</code>)</li>
<li>it doesn't actually run</li>
</ul>
<p>An error occurs when creating the <code>SuffixForm</code> class:</p>
<pre class="lang-py prettyprint-override"><code>>>> import mixin_typing
Traceback (most recent call last):
File "", line 36, in <module>
class SuffixForm(SuffixFormMixin, forms.Form):
TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
</code></pre>
<p>So you need a workaround that switches the base class of the mixin depending on whether we are currently type checking:</p>
<pre class="lang-py prettyprint-override"><code>class HasPrefix(Protocol):
prefix: str
if TYPE_CHECKING:
_Base = HasPrefix
else:
_Base = object
class SuffixFormMixin(_Base):
suffix: str
def add_suffix(self, field_name: str) -> str:
return f"{self.prefix}__{field_name}__{self.suffix}"
</code></pre>
<p>I don't like this. It's even more code that only serves to provide type checking and now you need to explain why the mixin switches base class. Maybe this would work better in a stub file?</p>
<h2>Conditionally inheriting from partner class</h2>
<p>(mypy: ✅, IDE: ✅)</p>
<p>As suggested by a comment:</p>
<blockquote>
<p>You can also use your latest approach without a protocol: use forms.Form or object as a base class conditionally. It still "needs to explain why the class base is dynamic", but in if TYPE_CHECKING case that should be pretty obvious.</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>if TYPE_CHECKING:
_Base = forms.Form
else:
_Base = object
class SuffixFormMixin(_Base):
suffix: str
def add_suffix(self, field_name: str) -> str:
return f"{self.prefix}__{field_name}__{self.suffix}"
</code></pre>
<p>mypy doesn't like this:</p>
<pre><code>mixin_typing.py:18: error: Variable "mixin_typing._Base" is not valid as a type [valid-type]
class SuffixFormMixin(_Base):
^
mixin_typing.py:18: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases
mixin_typing.py:18: error: Invalid base class "_Base" [misc]
class SuffixFormMixin(_Base):
</code></pre>
<p>However, adding the <code>TypeAlias</code> type hint to <code>_Base</code> makes it work <a href="https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases" rel="nofollow noreferrer">(explained here)</a>:</p>
<pre class="lang-py prettyprint-override"><code>if TYPE_CHECKING:
_Base: TypeAlias = forms.Form
else:
_Base = object
</code></pre>
<p><strong>NOTE: TypeAlias is only available in in python3.10+</strong><br />
If you need to support python3.9 or earlier, use <code>from typing_extensions import TypeAlias</code>! <a href="https://mypy.readthedocs.io/en/stable/runtime_troubles.html#using-new-additions-to-the-typing-module" rel="nofollow noreferrer">More info here</a></p>
<hr />
<p>There's probably quite a lot I do wrong here (a lot of the deeper type hinting stuff flies completely over my head), or maybe I'm missing something obvious (this shouldn't be so difficult), so I'd welcome any suggestion.</p>
|
<python><django><mypy><python-typing>
|
2024-02-22 12:21:07
| 0
| 2,941
|
CoffeeBasedLifeform
|
78,040,774
| 1,567,264
|
I need a highly accurate simultaneous equation solver for Python
|
<p><code>np.linalg.solve(X,Y)</code> just isn't accurate enough. The code below solves a relatively small system of 5 equations:</p>
<pre><code>import numpy as np
n=5
Y = np.random.rand(n)
X = np.tile(np.array(range(1,n+1)),n)
X = X.reshape((n,n),order='F')
for c in range(n) :
X[:,c] = X[:,c]**c
A = np.linalg.solve(X,Y)
predicted_Y = X@A
table = [(y,pred_y,y-pred_y) for y,pred_y in zip(Y,predicted_Y)]
print('y predicted_y difference')
for c1,c2,c3 in table :
print(f"%.20f | %.20f | %.20f" % (c1, c2, c3))
</code></pre>
<p>The third column in the output shows that there are still differences between the actual Y values and the ones implied by the solved-for coefficients.</p>
<pre><code>y predicted_y difference
0.68935295599312118586 | 0.68935295599312118586 | 0.00000000000000000000
0.72899266151307307027 | 0.72899266151307240413 | 0.00000000000000066613
0.18770646040141103494 | 0.18770646040141256150 | -0.00000000000000152656
0.02144867791874205398 | 0.02144867791873661389 | 0.00000000000000544009
0.54517050144884360297 | 0.54517050144883372198 | 0.00000000000000988098
</code></pre>
<p>I know the differences seem tiny, but I need a high degree of accuracy for what I'm doing. I don't mind if the code is slower, but I want it to be accurate to the 20th decimal place.</p>
<p>The only alternative I've seen is <code>scipy.linalg.solve</code>, which gives the same problems. Is there an alternative package that will work until a more accurate solution is found?</p>
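<p>A note on feasibility: float64 carries only ~16 significant digits, so any solver working in machine precision (numpy, scipy) cannot reach 20 decimal places; that needs extended or exact arithmetic, e.g. <code>mpmath</code>, <code>sympy</code>, or the standard-library <code>fractions.Fraction</code> (which represents Python floats exactly). A minimal exact Gaussian elimination over the rationals, as a sketch:</p>

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b exactly over the rationals using Gaussian
    elimination with partial pivoting. A: list of rows, b: list."""
    n = len(b)
    M = [[Fraction(v) for v in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    for col in range(n):
        # pick the largest pivot in this column (avoids zero pivots)
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3, residual exactly zero
x = solve_exact([[2, 1], [1, 3]], [5, 10])
print(x)  # [Fraction(1, 1), Fraction(3, 1)]
```

<p>Exact arithmetic is far slower than LAPACK but fine for n = 5, and the result has zero residual by construction; alternatively, mpmath's arbitrary-precision linear algebra (with something like <code>mp.dps = 30</code>) trades exactness for speed — worth checking the mpmath docs for the exact solver name.</p>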
|
<python><numpy><solver>
|
2024-02-22 12:02:52
| 1
| 361
|
Mas
|
78,040,430
| 464,277
|
Fitting model in hmmlearn
|
<p>I'm not sure I get how <a href="https://hmmlearn.readthedocs.io/en/stable/" rel="nofollow noreferrer">hmmlearn</a> expects the input data. Below is a modified example from the tutorial.</p>
<pre><code>import numpy as np
from hmmlearn import hmm
states = ['up', 'down']
start_probs = np.array([0.6, 0.4])
vocabulary = [0, 1, 2, 3]
emission_probs = np.array(
[
[0.25, 0.1, 0.4, 0.25],
[0.2, 0.5, 0.1, 0.2],
]
)
trans_mat = np.array(
[
[0.8, 0.2],
[0.2, 0.8],
]
)
observations = np.random.choice(vocabulary, (10, 7))
lengths = [len(item) for item in observations]
# Set up model:
model = hmm.MultinomialHMM(
n_components=len(states),
n_trials=len(observations[0]),
n_iter=50,
init_params='',
)
model.n_features = len(vocabulary)
model.startprob_ = start_probs
model.transmat_ = trans_mat
model.emissionprob_ = emission_probs
model.fit(observations, lengths)
logprob, received = model.decode(observations)
</code></pre>
<p>At this point I'm getting this error:</p>
<pre><code> in _AbstractHMM._check_and_set_n_features(self, X)
525 if hasattr(self, "n_features"):
526 if self.n_features != n_features:
--> 527 raise ValueError(
528 f"Unexpected number of dimensions, got {n_features} but "
529 f"expected {self.n_features}")
530 else:
531 self.n_features = n_features
ValueError: Unexpected number of dimensions, got 7 but expected 4
</code></pre>
<p>I'm not sure why it is expecting 4, which is just the size of the vocabulary. Isn't <code>lengths</code> supposed to be the size of the sequences?</p>
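<p>For context (based on recent hmmlearn behavior — worth verifying for your version): <code>MultinomialHMM</code> treats each row of <code>X</code> as a vector of <em>counts</em> over the vocabulary, so rows must have length <code>n_features</code> (4 here, one count per symbol), each summing to <code>n_trials</code> — not a raw sequence of 7 symbols, which is why it "expected 4" columns. For per-step symbol sequences, <code>hmm.CategoricalHMM</code> is presumably the intended class. Converting sequences to count rows can be sketched as:</p>

```python
import numpy as np

vocabulary = [0, 1, 2, 3]
rng = np.random.default_rng(0)
observations = rng.choice(vocabulary, (10, 7))  # 10 sequences of 7 symbols

# One count vector per sequence; each row sums to n_trials = 7
counts = np.array([np.bincount(row, minlength=len(vocabulary))
                   for row in observations])
print(counts.shape)        # (10, 4)
print(counts.sum(axis=1))  # every entry is 7
```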
|
<python><numpy><machine-learning><hmmlearn>
|
2024-02-22 11:08:22
| 0
| 10,181
|
zzzbbx
|
78,040,248
| 23,461,455
|
Visualize Nodes and their Connections as Clusters via networkx
|
<p>I have a list of Connections between two nodes describing links of entries in a Dataset.</p>
<p>I'm thinking of visualizing the entries and their connections to show that there are clusters of very similar entries.</p>
<p>Each tuple stands for a pair of linked nodes. I've chosen weight as 1 for all of them since it's required but I want all edges equally thick.</p>
<p>I've started with <a href="https://networkx.org/" rel="nofollow noreferrer">networkx</a>; the problem is I don't really know how to cluster the linked nodes together in a useful manner.</p>
<p>I have a List of connections in a Dataframe:</p>
<pre><code>smallSample =
[[0, 1492, 1],
[12, 937, 1],
[16, 989, 1],
[18, 371, 1],
[18, 1140, 1],
[26, 398, 1],
[26, 1061, 1],
[30, 1823, 1],
[33, 1637, 1],
[54, 1047, 1],
[63, 565, 1]]
</code></pre>
<p>I've created a graph the following way:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
G = nx.Graph()
for index, row in CC.iterrows():
    G.add_edge(CC['source'].loc[index], CC['target'].loc[index], weight=1)
pos = nx.spring_layout(G, seed=7)
nx.draw_networkx_nodes(G, pos, node_size=5)
nx.draw_networkx_edges(G, pos, edgelist=G.edges(), width=0.5)
pos = nx.spring_layout(G, k=1, iterations=200)
plt.figure(3, figsize=(2000,2000), dpi =2)
</code></pre>
<p>With the small sample provided above the result looks like this:</p>
<p><a href="https://i.sstatic.net/YX624.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YX624.png" alt="Small Sample" /></a></p>
<p>The result from my real df which consists of thousands of points:</p>
<p><img src="https://i.sstatic.net/gWCEq.png" alt="Big Sample" /></p>
<p>How can I group the linked nodes together so that it is better visible how many of them are in each cluster? I don't want them to overlap so much; it's really not easy to grasp how many of them there are, especially in the big sample.</p>
|
<python><matplotlib><graph><visualization><networkx>
|
2024-02-22 10:38:17
| 3
| 1,284
|
Bending Rodriguez
|
78,040,051
| 15,369
|
Why does importing a Python class instantiate it?
|
<p>A somewhat cut down example here from some PyTest unit tests that never exit. If I stick a breakpoint on <code>DataListener.__init__</code> I see it getting called from the <code>from controller import Controller</code> statement in <code>test_controller.py</code>, which was a bit of a surprise to me; why does importing a class from a module instantiate an instance of it?</p>
<h2>data_listener.py</h2>
<pre><code>from threading import Event, Thread

class DataListener:
    def __init__(self, port, host='localhost'):
        self._stop_monitor = Event()
        self._monitor = Thread(target=self.check_last_message)
        self._monitor.start()

    def __del__(self):
        self.stop()

    def stop(self):
        self._stop_monitor.set()
        self._monitor.join()

    def check_last_message(self):
        while not self._stop_monitor.is_set():
            # Code to check for a heartbeat here
            pass
</code></pre>
<h2>controller.py</h2>
<pre><code>from data_listener import DataListener

class Controller:
    def __init__(
            self,
            subscriber=DataListener(port=DEFAULT_SUBSCRIBER_PORT),
            publisher=JsonPublisher(DEFAULT_PUBLISHER_PORT)):
        pass
</code></pre>
<h2>test_controller.py</h2>
<pre><code>from controller import Controller

def test_controller_work_loop():
    # ARRANGE
    controller = Controller(subscriber=None, publisher=None)
    # Some actual tests here
    # This test passes and finishes but the test suite never exits
</code></pre>
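<p>A minimal, self-contained illustration of the suspected mechanism (an assumption on my part, since the question is open): default argument values are evaluated once, when the <code>def</code> statement runs, which for a module-level class means at import time, not when the function is called.</p>

```python
# Default argument values are evaluated when the def statement executes,
# i.e. at import time for module-level classes, not at call time.
calls = []

def make_listener():
    calls.append("constructed")
    return "listener"

class Controller:
    # make_listener() runs here, while the class body is being created
    def __init__(self, subscriber=make_listener()):
        self.subscriber = subscriber

print(calls)  # ['constructed'] -- ran before any Controller() was instantiated
```

<p>A common remedy is a <code>None</code> sentinel (<code>subscriber=None</code> in the signature, then constructing the <code>DataListener</code> inside <code>__init__</code> when the argument is <code>None</code>), so construction happens at call time. That is a general pattern, not something verified against the asker's code.</p>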
|
<python><pytest><python-import>
|
2024-02-22 10:08:04
| 1
| 37,756
|
Jon Cage
|
78,039,918
| 13,086,128
|
How to use `isin` in Polars DataFrame?
|
<p>I have a polars DataFrame:</p>
<pre><code>import polars as pl
import numpy as np
df = pl.DataFrame({'A': ['red', 'blue', 'green', np.nan, 'orange']})
my_list = ['red','orange']
</code></pre>
<p>I want to know which colors are present in <code>my_list</code>.</p>
<p>In pandas, I would do something like:</p>
<pre><code>df.A.isin(my_list)
</code></pre>
<p>But, I am getting this error:</p>
<pre><code>AttributeError: 'DataFrame' object has no attribute 'A'
</code></pre>
<p>How to do this in polars?</p>
|
<python><python-3.x><dataframe><python-polars>
|
2024-02-22 09:48:32
| 2
| 30,560
|
Talha Tayyab
|
78,039,906
| 21,049,944
|
Polars: Is there a procedure that ensures zero copy when calling df.to_numpy()?
|
<p>For some time now I have been failing to call <code>df.to_numpy(allow_copy = True)</code>.
Is there a procedure that transforms any given dataset into a "zero-copy suitable" one?</p>
<p>For list-like values I tried</p>
<pre><code>data = pl.DataFrame(dict(
points=np.random.sample((N, 3)),
color=np.random.sample((N, 4))
),
schema=dict(
points=pl.Array(pl.Float64, 3),
color=pl.Array(pl.Float64, 4),
))
</code></pre>
<p>or simply <code>expr.cast(pl.Array(pl.Float32, 4))</code> as suggested <a href="https://github.com/pyvista/pyvista/issues/5516" rel="nofollow noreferrer">here</a>. It works for one of my datasets, but fails for a different one with a slightly different build.</p>
<p>Calling <code>rechunk()</code>, having no null values and/or specifying <code>order = "c"</code> or <code>"fortran"</code> also seems to have no effect.</p>
<p>This is a generalization of my previous <a href="https://stackoverflow.com/questions/78025987/polars-what-does-fortran-like-indexing-mean-and-how-to-enforce-it?noredirect=1#comment137560472_78025987">question</a> that was perhaps too specific to get a real answer.</p>
|
<python><numpy><python-polars><zero-copy>
|
2024-02-22 09:47:25
| 2
| 388
|
Galedon
|
78,039,802
| 9,506,773
|
How do I set up the default LLM_RAG_CRACK_AND_CHUNK_AND_EMBED setup for my existing services from a python script?
|
<p>I see that there is a default setup for this <a href="https://github.com/Azure/azure-sdk-for-python/blob/7afd09de107b91364b0d460c07a2ee198f64a2e9/sdk/ai/azure-ai-resources/azure/ai/resources/_index/_dataindex/constants/_component.py#L17" rel="nofollow noreferrer">here</a>. How can I set this up for my existing services? Could anybody point to the right tutorial/template? I have the following piece of code:</p>
<pre><code>from msrest import Configuration
from azure.identity import DefaultAzureCredential
# create configuration for LLM_RAG_CRACK_AND_CHUNK_AND_EMBED
conf = Configuration("azureml://registries/azureml/components/llm_rag_crack_and_chunk_and_embed/labels/default")
endpoint = "https://xyz.search.windows.net"
credential = DefaultAzureCredential()
# how do I proceed?
</code></pre>
|
<python><azure><azure-cognitive-services><azure-sdk-python>
|
2024-02-22 09:34:40
| 1
| 3,629
|
Mike B
|
78,039,683
| 5,696,601
|
How to split layout and plot code into two files
|
<p>I would like to split the code for <a href="https://dash-example-index.herokuapp.com/bar-charts" rel="nofollow noreferrer">this</a> dashboard into two files namely <code>main.py</code> and <code>./folder/plot.py</code>. Actually, the plot is displayed correctly, but two <code>ID not found in layout</code> errors are being returned (Attempting to connect a callback Input item to component: "xxx" but no components with that id exist in the layout).</p>
<p>How can I fix the errors?</p>
<p><code>main.py</code></p>
<pre><code>"""Doc."""
from dash import Dash, dcc, html, Input, Output, callback
import plotly.express as px
from folder.plot import plot
app = Dash(__name__)
app.layout = html.Div([
    html.H4("Restaurant tips by day of week"),
    plot
])

if __name__ == "__main__":
    app.run_server(debug=True)
</code></pre>
<p><code>./folder/plot.py</code> (<code>./folder/</code> contains an <code>__init__.py</code> file).</p>
<pre><code>"""Doc."""
from dash import dcc, html, Input, Output, callback
import plotly.express as px
df = px.data.tips()
dropdown = dcc.Dropdown(["Fri", "Sat", "Sun"], "Fri", clearable=False)
graph = dcc.Graph()
plot = html.Div([
    dropdown,
    graph
])

@callback(Output(graph, "figure"), Input(dropdown, "value"))
def update_bar_chart(day):
    mask = df["day"] == day
    fig = px.bar(
        df[mask],
        x="sex",
        y="total_bill",
        color="smoker",
        barmode="group"
    )
    return fig
</code></pre>
|
<python><plotly-dash>
|
2024-02-22 09:17:31
| 1
| 1,023
|
Stücke
|
78,039,639
| 1,335,606
|
Multithreading implementation using python does not work
|
<p>I want to process multiple lists using multiprocessing. I have implemented it this way, but it's not working. Could somebody help with this?
<code>main_list</code> has one iteration,</p>
<blockquote>
<p>range(len(np_optimization))</p>
</blockquote>
<p>The worker iterates using this again; I think this is why multiprocessing is not working.</p>
<pre><code>from multiprocessing.pool import ThreadPool
import multiprocessing as mp
import numpy as np

pt = ThreadPool(mp.cpu_count() - 2)

def test_code(args):
    idx, pld_idx, number, dimention, quantity, orientation, matrix, np_optimization = \
        args[0], args[1], args[2], args[3], args[4], args[5], args[6], args[7]
    position = False
    lwh = (228.0, 139.2, 127.6)
    for idx in range(len(np_optimization)):
        b1_l, b1_w, b1_h = dimention[np_optimization[idx][1][0]][0], \
            dimention[np_optimization[idx][1][0]][1], dimention[np_optimization[idx][1][0]][2]
        b1_dim = (b1_l, b1_w, b1_h)
        b2_l, b2_w, b2_h = dimention[np_optimization[idx][1][1]][0], \
            dimention[np_optimization[idx][1][1]][1], dimention[np_optimization[idx][1][1]][2]
        b2_dim = (b2_l, b2_w, b2_h)
        block_combination_dim = np.array((b1_dim, b2_dim))
        block_dim_sum = block_combination_dim.dot(matrix).sum(axis=0)
        dim_fit_check = (block_dim_sum < lwh).astype(int)
        fit_result = np.any(dim_fit_check)
        if fit_result:
            position = np.zeros_like(dim_fit_check)
            position[np.argmax(dim_fit_check == 1)] = 1
            break
    if position is False:
        quant = []
        info = "Single"
    else:
        quant = np_optimization[idx][0]
        b1_block_dim = (...)
        b2_block_dim = (...)
        info = ([b1_block_dim, b2_block_dim, list(position)])
    return quant, info
main_list = [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], 0, array([0, 1, 2, 3, 4, 5, 6, 7]), array([[168. , 94. , 30.5],
[168. , 94. , 61. ],
[168. , 94. , 91.5],
[168. , 94. , 122. ],
[168. , 30.5, 94. ],
[168. , 61. , 94. ],
[168. , 91.5, 94. ],
[168. , 122. , 94. ]]), array([1, 2, 3, 4, 1, 2, 3, 4]), array(['LxWxH', 'LxWxH', 'LxWxH', 'LxWxH', 'LxHxW', 'LxHxW', 'LxHxW',
'LxHxW'], dtype='<U5'), array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]), [(8, (3, 7)), (8, (6, 6)), (7, (2, 5)), (7, (3, 3)), (7, (3, 6))]]
def test():
    for i in range(3):
        # pld_idx = i
        # constructing all the lists here.
        output = pt.map(test_code, main_list)
        print(output)
</code></pre>
|
<python><multithreading>
|
2024-02-22 09:12:40
| 0
| 503
|
user1335606
|
78,039,598
| 4,828,720
|
Why is the child QObject still accessible when the parent is deleted?
|
<p>Trying to learn more about PyQt I wrote a small script that:</p>
<ul>
<li>Creates an <code>QObject</code> <code>a</code></li>
<li>Creates an <code>QObject</code> <code>b</code> with <code>a</code> as its parent</li>
<li>Deletes <code>a</code></li>
</ul>
<p>At this point I expect the object to which the name <code>b</code> points to have been deleted as well. The documentation of Qt (not PyQt!) <a href="https://doc.qt.io/qt-5/qobject.html#details" rel="nofollow noreferrer">says</a>:</p>
<blockquote>
<p>The parent takes ownership of the object; i.e., it will automatically
delete its children in its destructor.</p>
</blockquote>
<p>But <code>b</code> still points to an existing object.<br />
I also tried an explicit garbage collection without any change.<br />
Trying to access <code>a</code> via <code>b</code>'s <code>parent()</code> method fails though, as expected.</p>
<p>Why is the <code>QObject</code> referenced by <code>b</code> not deleted when <code>a</code>, which "owns" <code>b</code>, is deleted?</p>
<p>I have added the print outputs as comments below:</p>
<pre class="lang-py prettyprint-override"><code>import gc
from PyQt5.QtCore import QObject

def tracked_qobjects():
    return [id(o) for o in gc.get_objects() if isinstance(o, QObject)]

def children(qobject: QObject):
    return [id(c) for c in qobject.findChildren(QObject)]
a = QObject()
b = QObject(parent=a)
print(f"QObjects tracked by gc: {tracked_qobjects()}")
# QObjects tracked by gc: [140325587978704, 140325587978848]
print(f"Children of a: {children(a)}")
# Children of a: [140325587978848]
del a
print(f"QObjects tracked by gc: {tracked_qobjects()}")
# QObjects tracked by gc: [140325587978848]
gc.collect() # not guaranteed to clean up but should not hurt
print(f"QObjects tracked by gc: {tracked_qobjects()}")
# QObjects tracked by gc: [140325587978848]
# Since https://doc.qt.io/qt-5/qobject.html#details says:
# "The parent takes ownership of the object; i.e., it will automatically delete
# its children in its destructor."
# I expect that b now points to a non-existent object.
# But no, this still works! Maybe because we are in PyQt5 and
# not a C++ application?
print(id(b))
# 140325587978848
print(b)
# <PyQt5.QtCore.QObject object at 0x7fa018d30a60>
# The parent is truly gone though and trying to access it from its child raises the "wanted" RuntimeError
print(b.parent())
# RuntimeError: wrapped C/C++ object of type QObject has been deleted
</code></pre>
|
<python><qt><pyqt><pyqt5><ownership>
|
2024-02-22 09:04:56
| 1
| 1,190
|
bugmenot123
|
78,039,596
| 23,461,455
|
How can I extract structure and content from a MariaDB SQL dump?
|
<p>I have an SQL dump file containing an entire MariaDB which is multiple gigabytes big. I don't have access to a local database installation due to company security restrictions. Can I execute my dump via Python in SQLite or extract its data so I can analyze it?</p>
<p>I iterate my dump for table names to get an overview of the database:</p>
<pre><code>table_list = []
with open(dmp.file, encoding='cp437') as f:
    for line in f:
        line = line.strip()
        if line.lower().startswith('create table'):
            table_name = re.findall('create table `([\w_]+)`', line.lower())
            table_list.extend(table_name)

for x in table_list:
    print(x)
</code></pre>
<p>This works but my dump's SQL statements span multiple lines, so I wrote the following to get the statements on one line:</p>
<pre><code>currentLine = ""
with open(File, encoding='cp437') as f:
    for line in f:
        line = line.strip()
        currentLine = currentLine + " " + line
        if line.lower().endswith(';'):
            with open('NewFileOneLiner.txt', "a", encoding="utf-8") as g:
                g.write(currentLine.lstrip() + '\n')
            currentLine = ""
</code></pre>
<p>What additional steps are needed (since both are SQL databases, transforming the SQL statements should be possible)? Is there any way to execute all statements in SQLite? What are the boundaries of and caveats to this approach (does SQLite not support concepts of SQL that I need to be aware of)? Can I extract the tables and their data in some other form?</p>
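<p>A minimal sketch of replaying the one-statement-per-line file into SQLite via the stdlib <code>sqlite3</code> module (the file name and the simplistic <code>ENGINE</code>-clause cleanup here are illustrative assumptions; real dumps usually need more MariaDB-specific rewriting):</p>

```python
# Replay a one-statement-per-line SQL file into SQLite, logging rejects.
import re
import sqlite3

def load_dump_into_sqlite(oneliner_path, db_path=":memory:"):
    conn = sqlite3.connect(db_path)
    with open(oneliner_path, encoding="utf-8") as f:
        for stmt in f:
            # Drop table options SQLite rejects, e.g. ") ENGINE=InnoDB ...;"
            stmt = re.sub(r"\)\s*ENGINE=.*;", ");", stmt, flags=re.IGNORECASE)
            try:
                conn.execute(stmt)
            except sqlite3.Error as exc:
                print(f"skipped statement: {exc}")  # record what SQLite cannot run
    conn.commit()
    return conn
```

<p>SQLite accepts backtick-quoted identifiers, but constructs like <code>AUTO_INCREMENT</code>, <code>LOCK TABLES</code>, or engine-specific types will be rejected; the <code>try/except</code> records those so you can judge whether they matter for your analysis.</p>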
|
<python><sql><sqlite><mariadb>
|
2024-02-22 09:04:38
| 2
| 1,284
|
Bending Rodriguez
|
78,039,573
| 1,415,325
|
hexbin by matplotlib : get points in each hexagon?
|
<p>I am writing an app to interactively analyze 2D scatter plots with millions of points.
To reduce the number of points to plot and avoid reinventing the wheel, I use 2D hexagonal binning (a density plot) from matplotlib.</p>
<p>The classical usage is:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from matplotlib import pyplot as plt
x=np.random.normal(loc=100,scale=20,size=100)
y=np.random.normal(loc=1000,scale=20,size=100)
fig, ax = plt.subplots(figsize=(10, 8))
hexbin = ax.hexbin(x, y, gridsize=25, cmap='jet',picker=True)
plt.show()
</code></pre>
<p>For my app, I need to know which points belong to each bin.</p>
<p>For this, I have two options:</p>
<ul>
<li><strong>use shapely</strong> : extract the shape of each hexagon and use shapely to find whether the point is included or not . Here is my proof of concept (of course some optimisation can still be done to prevent testing points already allocated):</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import shapely
# get reference hexagon shape
paths = hexbin.get_paths()
hexagon_vertices = [item[0] for item in paths[0].iter_segments()]
# get offset to apply to each hexagon
list_offsets = hexbin.get_offsets()
list_points = [shapely.Point(x,y) for x,y in zip(x,y)]
list_idx = []
for offset in list_offsets:
    polygon = shapely.Polygon(shell=[corner + offset for corner in hexagon_vertices])
    is_in = polygon.contains(list_points)
    idx = np.argwhere(is_in == 1).flatten()
    if len(idx) == 0:
        idx = None
    list_idx.append(idx)
</code></pre>
<p>But I think this is overkill, as behind the scenes this info is already computed somehow by matplotlib. I had a look at the code of the hexbin function of matplotlib, but I am a little bit lost...
The key part seems to be here:
<a href="https://github.com/matplotlib/matplotlib/blob/main/lib/matplotlib/axes/_axes.py#L5061-L5062" rel="nofollow noreferrer">https://github.com/matplotlib/matplotlib/blob/main/lib/matplotlib/axes/_axes.py#L5061-L5062</a>
but I would need some help to be able to extract the useful info...</p>
<ul>
<li><strong>adapt the matplotlib code</strong> to extract this info: for this I would need some help :-)</li>
</ul>
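<p>Before patching matplotlib, one possible shortcut worth noting (a sketch, not a verified matplotlib-internals answer): <code>hexbin</code> accepts per-point values <code>C</code> and a <code>reduce_C_function</code> that is called with the array of <code>C</code> values falling in each non-empty bin. Passing the point indices as <code>C</code> therefore hands you the membership directly; whether the call order matches <code>get_offsets()</code> should be checked against your matplotlib version.</p>

```python
# Recover per-bin point membership via C / reduce_C_function:
# the reducer receives the C values (here: point indices) of one bin per call.
import matplotlib
matplotlib.use("Agg")  # headless backend for this example
import numpy as np
from matplotlib import pyplot as plt

x = np.random.normal(loc=100, scale=20, size=100)
y = np.random.normal(loc=1000, scale=20, size=100)

bins = []  # one list of point indices per non-empty hexagon

def collect(ids):
    bins.append([int(i) for i in ids])
    return len(ids)  # the value the hexagon is colored by (the count)

fig, ax = plt.subplots(figsize=(10, 8))
hb = ax.hexbin(x, y, C=np.arange(len(x)), gridsize=25, reduce_C_function=collect)
```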
<p>I hope that someone will be able to help me.</p>
<p>Thanks a lot in advance,</p>
<p>Patrick</p>
|
<python><matplotlib><binning>
|
2024-02-22 09:00:22
| 0
| 1,429
|
sweetdream
|
78,039,427
| 5,868,293
|
How to apply custom function with many parameters in a pandas column
|
<p>I have the following function</p>
<pre><code>import math
def f(k, a, x):
    return 1 / (1 + math.exp(-k * x)) ** a
</code></pre>
<p>And the following pandas dataframe</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'x': list(range(1,100))})
</code></pre>
<p>How can I apply the above <code>function</code> on the <code>x</code> column of the <code>df</code> for, let's say, <code>k=1</code> and <code>a=2</code>?</p>
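<p>Two common approaches, sketched self-contained with assumed values <code>k=1</code> and <code>a=2</code> (note the exponent is written with <code>**</code>; in Python <code>^</code> is bitwise XOR):</p>

```python
# Two ways to bind k=1, a=2 while mapping over the column.
import math
import numpy as np
import pandas as pd

def f(k, a, x):
    return 1 / (1 + math.exp(-k * x)) ** a  # ** is exponentiation; ^ is XOR

df = pd.DataFrame({'x': list(range(1, 100))})

# Option 1: element-wise apply, fixing the extra parameters with a lambda
df['y_apply'] = df['x'].apply(lambda x: f(k=1, a=2, x=x))

# Option 2: fully vectorized with numpy (usually much faster)
df['y_vec'] = 1 / (1 + np.exp(-1 * df['x'])) ** 2
```

<p>Both columns hold the same values; the vectorized form avoids a Python-level call per row.</p>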
|
<python><pandas>
|
2024-02-22 08:35:20
| 1
| 4,512
|
quant
|
78,039,353
| 14,348,748
|
Why does slots not work with final dataclass attribute?
|
<p>I have a dataclass with <code>slots=True</code> and a final class variable/constant that is used as version identifier. For some reason that does not work as expected. I either need to remove the slots or change <code>Final</code> to <code>ClassVar</code>.</p>
<p>Here is a minimal example:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
from __future__ import annotations
from dataclasses import dataclass
from typing import Final, ClassVar
@dataclass(slots=True)
class Child():
VERSION: Final[str] = '1.0.0'
@classmethod
def check_version(cls, version: str) -> bool:
return version == cls.VERSION
print(Child.check_version('1.0.0'))
# Returns False
</code></pre>
<p>Could anyone explain to me why this happens and how to fix it properly? I am using Python 3.11.7.</p>
|
<python><python-3.x><python-typing>
|
2024-02-22 08:22:16
| 1
| 1,181
|
NicoHood
|
78,038,874
| 7,396,306
|
Plot bar and line y columns with vastly different scales and the same x axis using pandas DataFrame.plot() on a single figure
|
<p>I am making a little program to track weight loss and calorie intake.</p>
<p>I am trying to plot two different columns from a single <code>DataFrame</code>. Each column is a different set of y variables and they are to be plotted on the same figure using the same x axis (which is a time series). One y variable is for daily difference in weight, while the other is daily caloric intake. As you can imagine, the scales of these are quite different.</p>
<p>My code looks like this (it has sample data so you can test it for yourself; moreover, there may be some erroneous lines that are serving some other purpose outside of this plot):</p>
<pre><code>import pandas as pd
import numpy as np
from datetime import date
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.ticker as mticker
from mplcursors import cursor
from statistics import mean
def add_nans(cals: list, dates: pd.Series) -> list:
    for cal in range(len(dates) - len(cals)):
        cals.append(np.nan)
    return cals
# Weight tracking
real_weight = [
223.8, 222, 221.2, 220.8, 221.4, 219.4, 219.8, 219.2, 219.6, 218.2, 221.6, 220.2, 219.4, 218.8, 218, 217.4,
218.6, 216.8, 217.6, 215.8, 216, 216, 219.6, 216.8, 216.2, 215.6, 215.8, 217.2, 214.6, 219.6, 217.4, 216.0,
215.2, 214.2, 214.4, 216.4, 215.4, 215, 214.2, 214.4, 216, 214.2
] #### Add your weight to this list
est_cals = [
2129, 2323, 2298, 1984, 2020, 2102, 1980, 2386, 2146, 2377, 2500, 2265, 2150, 1840, 2073, 2070, 2050, 2108,
2130, 2770, 2170, 2270, 1955, 2030, 2020, 2500, 1910, 2150, 2695, 1980, 2110, 2190, 2940, 1970, 2750, 1870,
2470, 2200, 2400, 4000, 2200, 1916
]
lw = len(real_weight)
dates = pd.date_range(start='01/11/2024', end='07/11/2024') #### Add date range
real_weight = add_nans(cals=real_weight, dates=dates)
est_cals = add_nans(cals=est_cals, dates=dates)
difference = [0] + [round(j-i,4) for i, j in zip(real_weight[:-1], real_weight[1:])]
goalw, startw = 185, real_weight[0] #### Add goal weight and starting weight
lin_weight = np.linspace(startw, goalw, len(dates))
lpw = round(7 * (lin_weight[0] - lin_weight[1]), 2)
df = pd.DataFrame(data={'date': dates, 'lin_weight': lin_weight, 'real_weight': real_weight})
df['calories'] = est_cals
df['day'] = df.date
df['difference'] = difference
df['ordinal'] = pd.to_datetime(df['date']).apply(lambda date: date.toordinal())
df['idx'] = df.index
df = df.set_index('date')
startdt, enddt = df.index[0].date(), df.index[-1].date()
avg_dif = df.difference.abs().mean()
# Daily difference in weight
df = df.dropna(axis=0)
day = [f'day {i+1}' for i in range(len(df.index))]
df['day_1'] = day
max_day = df.loc[df['difference'] == df.difference.max(), 'day_1'].item()
min_day = df.loc[df['difference'] == df.difference.min(), 'day_1'].item()
ax1 = df[['ordinal', 'difference']].plot(x='ordinal', color='magenta', kind='bar',
legend=True, xlabel='Days into Diet', ylabel='Difference in Daily Weight (lbs)')
ax1.axhline(y=0, linewidth=1, c='black', linestyle='-', label='_nolegend_')
ax1.set_title(f'Difference in Daily Weight from {startdt} to {df.tail(1).index.item().date()}\n')
df[['ordinal', 'calories']].plot(x='ordinal', color='darkorange', ax=ax1, linestyle='-', marker='o',
ylabel='Daily Calorie Intake (kcal)', secondary_y=True)
ax1.set_xticklabels(day, rotation=45)
plt.xticks(rotation=45)
cursor(hover=True)
plt.show()
</code></pre>
<p>Right now, this is only showing the difference bar graph, but not the caloric intake line graph on top of it, even though it has the label and ticks on the secondary y axis. I think this may have to do with scale or something, but I have tried several things and I can't get it to show up; it's either only the difference or nothing at all.</p>
|
<python><pandas><matplotlib><plot>
|
2024-02-22 06:47:52
| 1
| 859
|
DrakeMurdoch
|
78,038,857
| 5,658,291
|
deep linking in react native using custom scheme link
|
<p>Hi, I have set up deep linking in React Native. Initially, for testing purposes, I created an HTML page with a hyperlink; if I click on that link and the app is installed, the app opens.
I am stuck on one thing: when I put the same link in an email template, the link is not clickable. I am using Interakt as the mailing service, and the backend is Python.</p>
<p><strong>HTML page for internal testing</strong></p>
<pre><code><a href="customer://renewal">Plan renewal</a>
</code></pre>
<p><strong>in email template block of link:</strong></p>
<pre><code> <div class="column">
<span class="align_center_btn">
<a href="customer://renewal" style=" background: #285295;
border-radius: 6px;
padding: 10px 16px;
color: #FFFFFF;
border: none;">Renew again</a>
<p style="line-height: 20px;
font-size: 16px;
font-family: 'Open Sans';
font-style: normal;">We will remember you on <br />{{plan_end_date}} </p>
</span>
</div>
</code></pre>
|
<python><android><reactjs><react-native><deep-linking>
|
2024-02-22 06:42:47
| 1
| 579
|
Sudhir
|
78,038,847
| 185,977
|
Python how to mock a function that's returned by a higher-order function
|
<p>I'd like to mock the Pandas function <code>read_csv</code>, which in my code is returned dynamically by a higher-order function, but it seems the pytest <code>patch</code> doesn't really allow it.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

format_read_func_mapping = {"csv": pd.read_csv, "parquet": pd.read_parquet}

def my_func(s3_path, file_format):
    read_func = format_read_func_mapping[file_format]
    df = read_func(f"{s3_path}")
    return df
</code></pre>
<p>When I run my unit test, I have the following</p>
<pre><code>from unittest.mock import patch

@patch(f"{MODULE_PATH}.pd.read_csv")
def test_my_func(self, mock_read_csv):
    mock_read_csv.side_effect = [my_test_data_frame]
    my_func(dummy_s3_path, "csv")
</code></pre>
<p>I expect the test would use the mock and return my test data, but it doesn't, it actually called <code>pandas.read_csv</code> to try to read from <code>dummy_s3_path</code>.</p>
<p>However, if I don't dynamically return <code>pd.read_csv</code> from a mapping, just directly use <code>pd.read_csv(f"{s3_path}")</code> in <code>my_func</code>, the mock works well.</p>
<p>Is this a limit of pytest? Or the way I mock is incorrect?</p>
<p>Thanks.</p>
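<p>A minimal stand-in illustrating the likely cause (an assumption, using a hypothetical <code>FakePandas</code> class instead of the real module): the mapping captures a reference to the original function object at import time, so patching the module attribute later never changes what the dict holds.</p>

```python
# The dict stores the function object itself; rebinding the attribute that
# originally pointed at it does not touch the dict entry.
class FakePandas:  # stand-in for the pandas module (illustrative only)
    read_csv = staticmethod(lambda path: f"real read of {path}")

pd = FakePandas()
format_read_func_mapping = {"csv": pd.read_csv}  # reference captured here

# Roughly what @patch("module.pd.read_csv") does later:
pd.read_csv = lambda path: "mocked!"

print(pd.read_csv("s3://x"))                      # mocked!
print(format_read_func_mapping["csv"]("s3://x"))  # real read of s3://x
```

<p>Workarounds in this spirit: patch the dict entry itself (for example with <code>mock.patch.dict</code>), or store lambdas such as <code>lambda *a, **kw: pd.read_csv(*a, **kw)</code> in the mapping so the attribute lookup happens at call time. Both are general patterns, not verified against the asker's module layout.</p>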
|
<python><pytest><python-unittest>
|
2024-02-22 06:40:40
| 2
| 1,638
|
Sapience
|
78,038,534
| 1,806,998
|
PytestBDD target_fixture - PytestDeprecationWarning
|
<p>When I run <code>pytest</code> on the following code:</p>
<pre><code>import requests
from pytest_bdd import scenario, given, when, then

@scenario('../features/user.feature', 'Create a new user')
def test_create_user():
    pass

@given('a valid user payload')
def valid_user_payload():
    return {
        "username": "TEST"
    }

@when('a client calls POST /users', target_fixture='response')
def create_user():
    api_base_url = "https://localhost:8080"
    response = requests.post(f"{api_base_url}/users", json="user_payload")
    return response
</code></pre>
<p>I get this warning:</p>
<pre><code>.venv/lib/python3.11/site-packages/pytest_bdd/steps.py:211: PytestDeprecationWarning: A private pytest class or function was used.
  fd = FixtureDef(
</code></pre>
<p>When I remove the <code>target_fixture</code> parameter from the <code>@when</code> decorator, the warning goes away. I understand I can capture the warning but I don't want to capture deprecation warnings, I'd rather know what's causing this and how to fix it.</p>
<p>This is my requirements.txt file:</p>
<pre><code>pytest==8.0.0
boto3==1.34.39
pytest-bdd==7.0.1
devtools==0.12.2
requests==2.31.0
</code></pre>
|
<python><pytest><pytest-bdd>
|
2024-02-22 05:06:23
| 1
| 2,243
|
jerney
|
78,038,473
| 1,056,563
|
Getting "AttributeError: 'DefaultAzureCredential' object has no attribute 'signed_session'" using the azure.identity python package
|
<p>My code to read from the <code>Log Analytics Workspace</code> fails with the titular error.</p>
<pre><code>from azure.identity import DefaultAzureCredential
from azure.loganalytics import LogAnalyticsDataClient
from azure.loganalytics.models import QueryBody
from IPython.core.debugger import set_trace

# Replace these variables with your actual values
workspace_id = 'ABCDE'
query = """ddex_CL
| take 100
"""

credential = DefaultAzureCredential()
client = LogAnalyticsDataClient(credential)
query_body = QueryBody(query=query)

response = client.query(workspace_id, body=query_body).execute()
for row in response.tables[0].rows:
    print(row)
</code></pre>
<p>Note: there is a similar question here: <a href="https://stackoverflow.com/questions/63384092/exception-attributeerror-defaultazurecredential-object-has-no-attribute-sig">Exception: AttributeError: 'DefaultAzureCredential' object has no attribute 'signed_session' using Azure Function and Python</a>. The most upvoted answer says</p>
<blockquote>
<p>As of May 2022, all SDKs have been re-released with native support for azure-identity</p>
</blockquote>
<p>But I still encounter the error even after installing the latest versions via:</p>
<pre><code>$ pip3 install azure-loganalytics azure-identity

$ pip show azure-loganalytics
Name: azure-loganalytics
Version: 0.1.1

$ pip show azure-identity
Name: azure-identity
Version: 1.14.0
</code></pre>
<p>Another similar question is here: <a href="https://stackoverflow.com/questions/74778698/attributeerror-defaultazurecredential-object-has-no-attribute-signed-session">AttributeError: 'DefaultAzureCredential' object has no attribute 'signed_session'</a>. But I do already have the latest library version installed.</p>
<p>Are there any other library or code changes that are needed?</p>
|
<python><azure><azure-web-app-service><azure-sdk-python>
|
2024-02-22 04:46:30
| 1
| 63,891
|
WestCoastProjects
|
78,038,277
| 7,583,953
|
How is it possible to memoize longest divisible subset using only the length (and end position)?
|
<p>The goal is to find the largest divisible subset. A divisible subset is a subset s.t. for every pair of elements i, j, either i is divisible by j or j is divisible by i. The general approach to solve this is to sort the numbers and realize that if a is divisible by b and b by c, that a must also be divisible by c. Here's the unmemoized recursion:</p>
<pre><code>def largestDivisibleSubset0(self, nums: List[int]) -> List[int]:
    def backtrack(candidate, end):
        if len(candidate) > len(self.best):
            self.best = candidate[:]
        if end == n - 1:
            return
        for new_end in range(end+1, n):
            if not candidate or candidate[-1] % nums[new_end] == 0:
                backtrack(candidate+[nums[new_end]], new_end)

    nums.sort(reverse=True)
    n = len(nums)
    self.best = []
    backtrack([], -1)
    return self.best
</code></pre>
<p>Next, I tried to memoize. Forgive the poor style where <code>self.best</code> and memo are both tracked. I know it's redundant and that I'm not actually caching the local best but instead the global (side question: what <em>is</em> the best way to memoize this?)</p>
<pre><code>def largestDivisibleSubset(self, nums: List[int]) -> List[int]:
    def backtrack(candidate, end):
        if (len(candidate), end) in memo:
            return memo[(len(candidate), end)]
        if len(candidate) > len(self.best):
            self.best = candidate[:]
        if end == n - 1:
            return
        for new_end in range(end+1, n):
            if not candidate or candidate[-1] % nums[new_end] == 0:
                backtrack(candidate+[nums[new_end]], new_end)
        memo[(len(candidate), end)] = self.best

    nums.sort(reverse=True)
    n = len(nums)
    self.best = []
    memo = {}
    backtrack([], -1)
    return self.best
</code></pre>
<p>Here's what I don't understand. How is it an accurate representation of the state to simply consider the length of the current candidate, and not the last element? What if a subsequence ending in 5 has length x, equal to a subsequence ending in 4 — isn't it possible the latter one gets pruned from the search space, even if it might result in a longer divisible subset down the line?</p>
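<p>For contrast, the bottom-up formulation many solutions use keys the DP on the end index alone (a sketch, not the asker's memoization scheme): after sorting, the last element of a chain by itself determines which elements can extend it, since every element of the chain divides it, so the length does not need to be part of the state.</p>

```python
# Bottom-up DP keyed on end index: dp[i] = length of the longest divisible
# subset whose largest element is nums[i]; prev[i] reconstructs the subset.
def largest_divisible_subset(nums):
    if not nums:
        return []
    nums = sorted(nums)
    n = len(nums)
    dp = [1] * n
    prev = [-1] * n
    for i in range(n):
        for j in range(i):
            if nums[i] % nums[j] == 0 and dp[j] + 1 > dp[i]:
                dp[i], prev[i] = dp[j] + 1, j
    i = max(range(n), key=dp.__getitem__)
    out = []
    while i != -1:
        out.append(nums[i])
        i = prev[i]
    return out[::-1]

print(largest_divisible_subset([1, 2, 4, 8, 3]))  # [1, 2, 4, 8]
```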
|
<python><algorithm><recursion><dynamic-programming><memoization>
|
2024-02-22 03:33:56
| 1
| 9,733
|
Alec
|
78,037,891
| 1,397,061
|
How to capture a DLL's stdout/stderr in Python?
|
<p>How can you capture a DLL's stdout and/or stderr in Python on Windows? For example, this prints <code>"hello"</code> to stderr, but it should be possible to capture the <code>"hello"</code> as a string instead of printing it:</p>
<pre><code>import ctypes
string = b'hello\n'
ctypes.cdll.msvcrt._write(2, string, len(string))
</code></pre>
<p>Here's what doesn't work:</p>
<ol>
<li>Temporarily assigning <code>sys.stderr</code> to a <code>StringIO</code> (or equivalently, using <a href="https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout" rel="nofollow noreferrer"><code>contextlib.redirect_stdout</code></a>) doesn't capture the output because it's from a C library function, not a Python print statement. This doesn't work on Linux either.</li>
<li>Using <a href="https://docs.python.org/3/library/os.html#os.dup2" rel="nofollow noreferrer"><code>os.dup2()</code></a> and reading from a pipe on a separate thread, as suggested <a href="https://stackoverflow.com/a/24277852/1397061">here</a>, merely suppresses the output without capturing it.</li>
<li>Using <code>ctypes.cdll.kernel32.GetStdHandle()</code> and <code>ctypes.cdll.kernel32.SetStdHandle()</code>, as suggested <a href="https://stackoverflow.com/a/17953864/1397061">here</a>, gives the error <code>OSError: [WinError 6] The handle is invalid</code> when attempting to print to the modified stderr.</li>
<li><a href="https://stackoverflow.com/a/35304921/1397061">This solution</a> fails because <code>ctypes.util.find_msvcrt()</code> returns <code>None</code> in Python 3.5+, which to my understanding is because Microsoft has transitioned from the Microsoft Visual C++ Runtime (MSVCRT) to the Universal C Runtime (UCRT). Even if I change the line <code>msvcrt = CDLL(ctypes.util.find_msvcrt())</code> to <code>msvcrt = ctypes.cdll.msvcrt</code>, it merely suppresses the output without capturing it.</li>
</ol>
<p>My general impression is that solutions that work on Linux don't work on Windows, and solutions that used to work on Windows no longer do because of the transition to the UCRT.</p>
|
<python><windows><dll><msvcrt><kernel32>
|
2024-02-22 01:03:13
| 2
| 27,225
|
1''
|
78,037,775
| 12,884,304
|
Cannot find implementation or library stub for module named
|
<p>I have <code>mypy</code> pre-commit hook</p>
<pre class="lang-yaml prettyprint-override"><code> - repo: https://github.com/pre-commit/mirrors-mypy
   rev: v1.8.0
   hooks:
     - id: mypy
       args:
         - --config-file=./.styleconfigs/mypy.ini
       additional_dependencies:
         [
           "types-requests",
           "types-python-dateutil",
           "types-redis",
           "types-ujson",
           "types-cachetools",
           "pydantic==2.5.3"
         ]
</code></pre>
<p>project structure</p>
<pre><code>project_dir/
|-- src/
| |-- apps/
| | |-- app1/
| | | |-- __init__.py
| | | |-- mod1.py
| | |-- app2/
| | | |-- __init__.py
| | | |-- mod2.py
| | |-- __init__.py
| |-- __init__.py
|-- .styleconfigs/
| |-- mypy.ini
|-- pre-commit-config.yaml
</code></pre>
<p><strong>src/apps/app1/mod1.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from apps.app2.mod2 import foo
</code></pre>
<p>When I run <code>pre-commit run -a</code> from <code>project_dir</code>, I receive errors like <code>Cannot find implementation or library stub for module named "apps.app2.mod2"</code>.</p>
<p>But when I run the command <code>cd src && mypy --config-file ../.styleconfigs/mypy.ini .</code>, it works correctly.</p>
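<p>One likely cause (an assumption, since the <code>mypy.ini</code> contents are not shown): pre-commit invokes mypy from the project root, so <code>src</code> is not on mypy's module search path, while running from inside <code>src</code> puts the <code>apps</code> package directly on it. Telling mypy where the package root lives usually resolves this:</p>

```ini
[mypy]
; Make "apps.*" importable no matter which directory mypy is run from.
mypy_path = src
; Treat src/ as a package base so module names resolve consistently.
explicit_package_bases = True
```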
|
<python><mypy><pre-commit-hook><python-3.11><pre-commit.com>
|
2024-02-22 00:11:41
| 1
| 331
|
unknown
|
78,037,765
| 19,556,055
|
Fitting a label to the size of text in Python with kivy
|
<p>I'm creating labels based on information that I'm drawing from a database, and I would like to left-align them in the grid. For this, I'm trying to wrap the label tightly around the text so I can left-align the label in an AnchorLayout. Because the text in my label and the number of these grids with labels are dynamic, I'm doing this through Python rather than in a kv file. I've seen a ton of documentation on how to do this in the kivy language, but I can't figure it out in Python. Just calling tour_name_label.texture_size (see below) returns [0, 0]; I can't seem to access this property and set it as the label size. I'm new to kivy, so I'm not super comfortable with the bind and getter/setter methods. How should I tackle this?</p>
<p>main.py</p>
<pre><code>from kivymd.app import MDApp
from kivy.lang import Builder
from kivy.uix.screenmanager import Screen
from tour_grid import TourGrid
from kivy.core.window import Window
from kivy.core.text import LabelBase
import requests
import json


class MyToursScreen(Screen):
    pass


class Main(MDApp):
    acc_id = 0

    def build(self):
        # self.theme_cls.material_style = "M3"
        # self.theme_cls.theme_style = "Dark"
        Window.size = (360, 600)
        return Builder.load_file("main.kv")

    def on_start(self):
        # Sample tour data
        tour_data = [{'acc_id': 0, 'date_tour_created': '16-02-2024', 'name': 'Bike Tour Norway', 'starting_date': '13-06-2024', 'tour_id': 0}]
        # print(self.root.ids)

        # Populate tour grid on home screen
        tours_grid = self.root.ids["my_tours"].ids["tours_grid"]
        for tour in tour_data:
            T = TourGrid(tour_name=tour["name"], starting_date=tour["starting_date"])
            tours_grid.add_widget(T)


if __name__ == "__main__":
    Main().run()
</code></pre>
<p>tour_grid.py (where the problem is)</p>
<pre><code>from kivy.uix.gridlayout import GridLayout
from kivy.uix.anchorlayout import AnchorLayout
from kivy.uix.label import Label
from kivy.uix.button import Button
from kivy.graphics import Color, Rectangle
import kivy.utils


class TourGrid(GridLayout):
    def __init__(self, **kwargs):
        self.cols = 1
        super(TourGrid, self).__init__()  # Initialize GridLayout with kwargs

        with self.canvas.before:
            Color(rgb=kivy.utils.get_color_from_hex("#dddddd"))
            self.rect = Rectangle(size=self.size, pos=self.pos)
        self.bind(pos=self.update_rect, size=self.update_rect)

        # PROBLEM HERE, MADE INTO BUTTON TO CHECK DIMENSIONS
        # Top FloatLayout containing name of tour
        top = AnchorLayout(anchor_x="center", anchor_y="center")
        tour_name_label = Button(text=str(kwargs["tour_name"]), color=kivy.utils.get_color_from_hex("#000000"))
        # tour_name_label.bind(texture_size=tour_name_label.setter("size"))
        # print(tour_name_label.getter("texture_size"))
        # print(tour_name_label.setter("texture_size")())
        top.add_widget(tour_name_label)

        # Bottom FloatLayout containing starting date
        bottom = AnchorLayout(anchor_y="center")
        starting_date_label = Label(text="[color=000000]" + kwargs["starting_date"] + "[/color]", size_hint=[1, 0.3], pos_hint={"top": 1, "left": 1}, markup=True)
        bottom.add_widget(starting_date_label)

        self.add_widget(top)
        self.add_widget(bottom)

    def update_rect(self, *_):
        self.rect.pos = self.pos
        self.rect.size = self.size
</code></pre>
<p>main.kv</p>
<pre><code>#:include kv/my_tours.kv

BoxLayout:
    ScreenManager:
        id: screen_manager

        MyToursScreen:
            id: my_tours
            name: "my_tours"
</code></pre>
<p>my_tours.kv</p>
<pre><code>#:import utils kivy.utils

<MyToursScreen@Screen>:
    FloatLayout:
        # Top bar, profile image and name, notification and settings button
        GridLayout:
            rows: 1
            pos_hint: {"top": 1, "left": 1}
            size_hint: 1, 0.1
            canvas:
                Color:
                    rgb: utils.get_color_from_hex("#25be55")
                Rectangle:
                    size: self.size
                    pos: self.pos

        # Main grid with tours
        ScrollView:
            pos_hint: {"top": 0.9, "left": 1}
            size_hint: 1, 0.8
            GridLayout:
                id: tours_grid
                cols: 1
                size_hint_y: None
                height: self.minimum_height
                row_default_height: "70dp"
                row_force_default: True
                Label:
                    text: "Planned trips"
                    color: utils.get_color_from_hex("#000000")

        # Bottom bar, My Tours button
        GridLayout:
            rows: 1
            pos_hint: {"top": 0.1, "left": 1}
            size_hint: 1, 0.1
            canvas:
                Color:
                    rgb: utils.get_color_from_hex("#25be55")
                Rectangle:
                    size: self.size
                    pos: self.pos
            Label:
                text: "My tours"
                bold: True
</code></pre>
|
<python><kivy>
|
2024-02-22 00:09:14
| 2
| 338
|
MKJ
|
78,037,700
| 2,459,855
|
Image embedded in email also becomes an attachment
|
<p>A script that correctly embeds inline images in HTML emails is also adding the images as attachments. How can I eliminate this second copy of each image?</p>
<p>If any intended attachments are included, they are processed correctly. With or without attachments, all images appear in the body of the message, and then again as an attachment. Removing the section for attachments (# Attach any files) makes no difference on the images. They still get included twice.</p>
<pre><code>#!/usr/bin/python3
import smtplib
from email.message import EmailMessage
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
from email.mime.audio import MIMEAudio
import mimetypes
strFrom = 'Me@some.com'
strPWord = 'XYZ'
strTo = 'Recipient@GMail.com'
msg = EmailMessage()
msg['Subject'] = 'This is a test at 1:22'
msg['From'] = strFrom
msg['To'] = strTo
msg.set_content('''
<!DOCTYPE html>
<html>
<body>
<p><img src="cid:image1"/></p>
<p>Python email test</p>
<p><img src="cid:image2"/></p>
</body>
</html>
''', subtype='html')
msg.make_mixed()
# Attach Any Images
images = '''/Users/my/Desktop/RKw.jpeg\n/Users/my/Desktop/logo.png'''.splitlines()
i=1
for image in images:
    # print 'Image',i,': ',image,'\n'
    fp = open(image, 'rb')
    msgImage = MIMEImage(fp.read())
    fp.close()

    # Define the image's ID as referenced above
    msgImage.add_header('Content-ID', '<image'+str(i)+'>')
    msg.attach(msgImage)
    i+=1

# Attach any files
files = ''''''.splitlines()
for file in files:
    ctype, encoding = mimetypes.guess_type(file)
    if ctype is None or encoding is not None:
        ctype = 'application/octet-stream'
    maintype, subtype = ctype.split('/', 1)
    with open(file, 'rb') as fp:
        msg.add_attachment(fp.read(),
                           maintype=maintype,
                           subtype=subtype,
                           filename=file.split('/')[-1].replace(' ','%20'))

with smtplib.SMTP_SSL('smtpserver.com', 465) as smtp:
    smtp.login(strFrom, strPWord)
    smtp.send_message(msg)
</code></pre>
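<p>One possible fix, sketched with only the standard library's <code>email.message</code> API rather than the exact script above: attach the inline images with <code>add_related()</code> so they live in a <code>multipart/related</code> part next to the HTML body, instead of being <code>attach()</code>ed at the top level of a <code>multipart/mixed</code> message. Mail clients then render them inline without listing them as attachments. The helper name and the placeholder image bytes are invented for illustration.</p>

```python
from email.message import EmailMessage

def build_html_email(html: str, images: list) -> EmailMessage:
    """Build a message whose inline images are related to the HTML body.

    images is a list of (content_id, image_bytes, subtype) tuples,
    e.g. ("image1", png_bytes, "png"); the HTML references them as
    src="cid:image1".
    """
    msg = EmailMessage()
    msg.set_content("This message requires an HTML-capable client.")  # plain fallback
    msg.add_alternative(html, subtype="html")
    html_part = msg.get_payload()[1]  # the text/html alternative
    for content_id, data, subtype in images:
        # add_related() turns the html part into multipart/related and
        # nests the image next to the HTML, so mail clients render it
        # inline instead of listing it as a separate attachment.
        html_part.add_related(data, "image", subtype,
                              cid=f"<{content_id}>")
    return msg

msg = build_html_email('<p><img src="cid:image1"/></p>',
                       [("image1", b"\x89PNG...", "png")])
print(msg.get_payload()[1].get_content_type())  # multipart/related
```

The key difference from the script above is that no image ever reaches the top level of the message, so <code>iter_attachments()</code> on the built message yields nothing.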
|
<python><email><html-email><email-attachments><mime>
|
2024-02-21 23:44:50
| 1
| 1,127
|
JAC
|
78,037,639
| 1,226,676
|
How to properly scope validator function in parent class for use in child class
|
<p>I'm trying to use validators, but I'm having a mysterious issue with using validators in a parent/child class.</p>
<p>Right now I've created a base class for all media in my project; all media in this model need a name and a thumbnail:</p>
<pre><code>class BaseMedia(BaseModel):
    def validate_thumbnail(image_obj):
        validate_image_file_extension(image_obj)
        dimensions = [image_obj.width, image_obj.height]
        if dimensions[0] != 400 or dimensions[1] != 400:
            raise ValidationError(
                "Thumbnail dimensions must be 400 x 400; uploaded image is %s x %s"
                % (dimensions[0], dimensions[1])
            )

    media_title = models.CharField(
        max_length=25, verbose_name="Title (25 char)", null=True
    )
    thumbnail = models.ImageField(
        null=True,
        verbose_name="Thumbnail (400 x 400 px)",
        upload_to="microscopy/images/thumbnails",
        validators=([validate_thumbnail],),
    )
</code></pre>
<p>and then I have a child class that inherits and implement the core media itself; for example:</p>
<pre><code>class MicroscopyMedia(BaseMedia):
    def validate_image(image_obj):
        validate_image_file_extension(image_obj)
        dimensions = [image_obj.width, image_obj.height]
        if dimensions[0] != 1920 or dimensions[1] != 1070:
            raise ValidationError(
                "Image dimensions must be 1920 x 1070; uploaded image is %s x %s"
                % (dimensions[0], dimensions[1])
            )

    media = models.ImageField(
        null=True,
        verbose_name="Media (1920 x 1070 px)",
        blank=True,
        upload_to="microscopy/images",
        validators=[validate_image],
    )
</code></pre>
<p>The idea being that all models will need a thumbnail, but within the child class I have a specific validator for each individual media type.</p>
<p>However, when I try to make this migration, I get an error:</p>
<pre><code>(env) PS D:\Projects\django-cms\projects\cms> python .\manage.py makemigrations
SystemCheckError: System check identified some issues:
ERRORS:
microscopy.BaseMedia.thumbnail: (fields.E008) All 'validators' must be callable.
HINT: validators[0] ([<function BaseMedia.validate_thumbnail at 0x000001D3193C8A60>]) isn't a function or instance of a validator class.
</code></pre>
<p>There seems to be a scoping issue at play here. As defined, <code>validate_thumbnail()</code> is just a regular function within the scope of <code>BaseMedia</code>, and isn't available to <code>MicroscopyMedia</code>. However, I can't make it a member function either, since I can't pass the <code>self</code> parameter.</p>
<p>How should I define this validator function in the parent class so that when the child class calls it, it's a valid function call?</p>
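<p>For reference, a framework-free sketch of the usual approach: Django field validators are expected to be module-level callables, not class-scoped functions, and a small factory avoids repeating the dimension check per media type. <code>ValueError</code> stands in for <code>django.core.exceptions.ValidationError</code> here so the sketch runs without Django; the factory name is invented for illustration.</p>

```python
# Module-level validator factory (defined outside any model class).
def exact_dimensions(width, height):
    """Return a validator that rejects images of any other size."""
    def _validate(image_obj):
        if image_obj.width != width or image_obj.height != height:
            # In real Django code raise ValidationError instead.
            raise ValueError(
                "Dimensions must be %s x %s; uploaded image is %s x %s"
                % (width, height, image_obj.width, image_obj.height)
            )
    return _validate

validate_thumbnail = exact_dimensions(400, 400)
validate_microscopy_image = exact_dimensions(1920, 1070)

# The fields would then use validators=[validate_thumbnail] -- a plain
# list of callables, not the ([...],) tuple-of-a-list from the model
# above, which is what triggers fields.E008.
```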
|
<python><django><django-models>
|
2024-02-21 23:23:56
| 1
| 5,568
|
nathan lachenmyer
|
78,037,554
| 2,084,503
|
CVXPY returning `Strict inequalities are not allowed.` with no strict inequalities
|
<p>I've got a script</p>
<pre class="lang-py prettyprint-override"><code>import cvxpy

x, y = cvxpy.Variable(), cvxpy.Variable()
prob = cvxpy.Problem(cvxpy.Minimize(
    cvxpy.norm2(1, cvxpy.quad_over_lin(x, cvxpy.sqrt(y)))
), [y >= 0])
print(prob.is_dcp())
</code></pre>
<p>When I run this, I get</p>
<pre><code>File "/Users/pavel/Desktop/dcp.py", line 6, in <module>
    cvxpy.norm2(1, cvxpy.quad_over_lin(cvxpy.square(x), cvxpy.sqrt(y)))
  File "/Users/pavel/Library/Python/3.9/lib/python/site-packages/cvxpy/atoms/norm.py", line 97, in norm2
    return norm(x, p=2, axis=axis)
  File "/Users/pavel/Library/Python/3.9/lib/python/site-packages/cvxpy/atoms/norm.py", line 71, in norm
    return norm1(x, axis=axis, keepdims=keepdims)
  File "/Users/pavel/Library/Python/3.9/lib/python/site-packages/cvxpy/atoms/axis_atom.py", line 36, in __init__
    super(AxisAtom, self).__init__(expr)
  File "/Users/pavel/Library/Python/3.9/lib/python/site-packages/cvxpy/atoms/atom.py", line 50, in __init__
    self.validate_arguments()
  File "/Users/pavel/Library/Python/3.9/lib/python/site-packages/cvxpy/atoms/axis_atom.py", line 60, in validate_arguments
    if self.axis is not None and self.axis > self.args[0].ndim:
  File "/Users/pavel/Library/Python/3.9/lib/python/site-packages/cvxpy/expressions/expression.py", line 740, in __gt__
    raise NotImplementedError("Strict inequalities are not allowed.")
NotImplementedError: Strict inequalities are not allowed.
</code></pre>
<p>despite there being no strict inequalities defined in my problem. There are other threads about how this can happen when you're not using CVXPY atoms, but here that's all I'm using. What's going on?</p>
|
<python><cvxpy>
|
2024-02-21 23:01:34
| 1
| 1,266
|
Pavel Komarov
|
78,037,459
| 1,487,288
|
Selenium ChromeDriver / Flask Gunicorn, working in normal mode but failing in daemon mode
|
<p>I have a simple Flask app and the ChromeDriver is instantiated as such</p>
<pre><code>options = webdriver.ChromeOptions()
options.add_argument('--no-sandbox')
options.add_argument("--headless")
options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)
....
</code></pre>
<p>This code works fine when I invoke it via the following commands:</p>
<pre><code>python app.py
gunicorn wsgi:app -b :5000
</code></pre>
<p>However, when I run it in daemon mode, I get the following error:</p>
<pre><code>gunicorn[31543]: /bin/sh: 1: google-chrome: not found
gunicorn[31543]: /bin/sh: 1: google-chrome-stable: not found
gunicorn[31543]: /bin/sh: 1: google-chrome-beta: not found
gunicorn[31543]: /bin/sh: 1: google-chrome-dev: not found
gunicorn[31549]: /bin/sh: 1: google-chrome: not found
gunicorn[31549]: /bin/sh: 1: google-chrome-stable: not found
gunicorn[31549]: /bin/sh: 1: google-chrome-beta: not found
gunicorn[31549]: /bin/sh: 1: google-chrome-dev: not found
</code></pre>
<p>The contents of my service file are:</p>
<pre><code>[Unit]
Description=Gunicorn instance to serve project
After=network.target
[Service]
User=www-data
Group=www-data
WorkingDirectory=/var/www/myproject
Environment="PATH=/var/www/myproject/bin"
ExecStart=/var/www/myproject/venv/bin/gunicorn --timeout 1000 -b :5000 wsgi:app
[Install]
WantedBy=multi-user.target
</code></pre>
<p>google-chrome is properly installed. When I invoke this command</p>
<pre><code>which google-chrome
</code></pre>
<p>I get the following output:</p>
<pre><code>/usr/bin/google-chrome
</code></pre>
<p>I think I am missing something about running the Flask app in daemon mode that is causing the issue.</p>
<p>I checked the permissions for google-chrome and it's listed as such:</p>
<pre><code>lrwxrwxrwx 1 root root 31 Feb 19 00:55 google-chrome -> /etc/alternatives/google-chrome
lrwxrwxrwx 1 root root 32 Feb 19 00:55 google-chrome-stable -> /opt/google/chrome/google-chrome
</code></pre>
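<p>One plausible culprit (an assumption based on the error output above): the unit file restricts <code>PATH</code> to <code>/var/www/myproject/bin</code>, so the shell that Selenium spawns cannot see <code>/usr/bin/google-chrome</code> — hence the repeated <code>google-chrome: not found</code> lines that only appear under systemd. A sketch of a corrected <code>[Service]</code> section (paths copied from the question):</p>

```ini
[Service]
User=www-data
Group=www-data
WorkingDirectory=/var/www/myproject
; Include the system bin directories so the google-chrome binary in
; /usr/bin is visible to the daemonized process; the original unit
; limited PATH to the project directory only.
Environment="PATH=/var/www/myproject/venv/bin:/usr/local/bin:/usr/bin:/bin"
ExecStart=/var/www/myproject/venv/bin/gunicorn --timeout 1000 -b :5000 wsgi:app
```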
|
<python><selenium-webdriver><flask><selenium-chromedriver><gunicorn>
|
2024-02-21 22:35:53
| 0
| 1,442
|
elixir
|
78,037,427
| 2,364,295
|
iPython paste limit of one screen?
|
<p>I often like to debug by pasting hunks of code into my ipython terminal by just highlighting and middle clicking it (linux environment). Just now I'm noticing a limitation where it will only take in one screen full of code (one terminal with no scrolling). It's really weird because it doesn't truncate the code at the end but rather throws away the beginning such that the last stuff pasted is at the bottom of the terminal. And it really is fixed at one screen full because if I zoom out the terminal so there are more visible lines, it will accept more. Anyone know the source of this or how to fix it? I know I can source files and other stuff, but I like my current work flow.</p>
|
<python><linux><ipython>
|
2024-02-21 22:24:50
| 1
| 2,270
|
Mastiff
|
78,037,380
| 9,403,794
|
How to fill numpy array between two specific values
|
<p>I have to fill row 2 of a NumPy array with values from row 0.
The array looks like this:</p>
<pre><code>[[3. 4. 5. 6. 7. 8. 9. 8. 7. 6.]
[0. 0. 1. 0. 0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
</code></pre>
<p>Finally, it should look like below:</p>
<pre><code>[[3. 4. 5. 6. 7. 8. 9. 8. 7. 6.]
[0. 0. 1. 0. 0. 1. 0. 0. 0. 0.]
[5. 5. 5. 8. 8. 8. 6. 6. 6. 6.]]
</code></pre>
<p>That means:</p>
<p>The middle row (row 1) marks the points of change. From the start of the bottom row (row 2) up to the column holding the first "1" in row 1, row 2 should take the value that row 0 has in that column, and so on for each subsequent "1"; after the last "1", row 2 takes row 0's value from the final column.</p>
<p>There is something similar, but not exactly what I need:
<a href="https://stackoverflow.com/questions/60049171/fill-values-in-numpy-array-that-are-between-a-certain-value">Fill values in numpy array that are between a certain value</a></p>
<p>Do you know how to do it without iterating over the columns?</p>
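<p>A vectorized sketch with <code>np.searchsorted</code>, assuming (as in the example) that columns after the last "1" fall back to the value of the final column:</p>

```python
import numpy as np

a = np.array([
    [3., 4., 5., 6., 7., 8., 9., 8., 7., 6.],
    [0., 0., 1., 0., 0., 1., 0., 0., 0., 0.],
    [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
])

n = a.shape[1]
markers = np.flatnonzero(a[1] == 1)  # columns of the "1" change points
# For every column, find the next marker at or after it; columns past
# the last marker map to the final column instead.
targets = np.append(markers, n - 1)
group = np.searchsorted(markers, np.arange(n), side="left")
a[2] = a[0, targets[group]]
print(a[2])  # [5. 5. 5. 8. 8. 8. 6. 6. 6. 6.]
```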
|
<python><numpy>
|
2024-02-21 22:13:59
| 3
| 309
|
luki
|
78,037,358
| 7,100,546
|
Pandas df.shape not outputting anything?
|
<p>I have two similar sets (Same Columns, Different Row Count, Possibly different data in some rows) of data from two different databases, and I'm trying to run comparison via Pandas. I'm exporting both datasets to xlsx and then importing into jupyter lab as so:</p>
<pre><code>df1 = pd.read_excel("Downloads/file1.xlsx",index_col=None)
df1 = df1.reset_index()
df2 = pd.read_excel("Downloads/file2.xlsx",index_col=None)\
df2 = df2.reset_index()
</code></pre>
<p>I want to begin the comparison, but when I try to use <code>.shape</code></p>
<pre><code>print("---file1---")
print(db2.shape)
print("---file2---")
print(edl.shape)
</code></pre>
<p>I don't get any return but df2 will return the expected tuple</p>
<pre><code>---file1---
---file2---
(20484, 7)
</code></pre>
<p>If I try to run <code>print(df1.head(3))</code> it does properly return the first 3 rows so I know data is making it into DataFrame 1. Same case for DataFrame 2.</p>
<p>I am assuming there is some form of formatting on the data being read from the Excel file, but I'm not getting any errors when calling the command. I could use some help understanding why <code>.shape</code> is not returning anything for df1 but is for df2.</p>
|
<python><pandas><dataframe><jupyter-lab>
|
2024-02-21 22:08:43
| 1
| 427
|
Rob
|
78,037,335
| 16,912,844
|
Execution Hang or Deadlock During `pytest` Using `asyncio.Queue` and `TaskGroup`
|
<p>I am new to async programming, and I am trying to understand how <code>TaskGroup</code> can be used with <code>asyncio.Queue</code>. I have the module and test below, but when executing <code>pytest</code>, it prints out the queue items and then just hangs / deadlocks. Any suggestions on what I am doing wrong?</p>
<p><strong>Module: <code>AsynchronousQueueBeta.py</code></strong></p>
<pre class="lang-py prettyprint-override"><code>from asyncio import Queue, TaskGroup


class AsynchronousQueueBeta:
    """Asynchronous Queue Beta"""

    async def fetch_recursive(self, source_list: list[str], maximum_connection: int = 10):
        """Fetch Recursive"""
        print('Fetch Recursive')
        query_queue = Queue()
        for source in source_list:
            query_queue.put_nowait(source)
        async with TaskGroup() as group:
            task_list = [
                group.create_task(self.fetch_query(query_queue)) for _ in range(maximum_connection)
            ]
            await query_queue.join()
        result_list = [task.result() for task in task_list]
        print(f'Result List: {result_list}')

    async def fetch_query(self, queue: Queue):
        """Fetch Query"""
        while True:
            query = await queue.get()
            print(f'Query: {query}')
            queue.task_done()
<p><strong>Test: <code>TestAsynchronousQueueBeta.py</code></strong></p>
<pre class="lang-py prettyprint-override"><code>import pytest

from AsynchronousQueueBeta import AsynchronousQueueBeta


class TestAsynchronousQueueBeta():
    """Test Asynchronous Queue Beta"""

    @pytest.mark.asyncio
    @pytest.mark.parametrize(
        'source_list', [
            [
                'https://httpbin.org/anything/1',
                'https://httpbin.org/anything/2',
                'https://httpbin.org/anything/3',
                'https://httpbin.org/anything/4',
                'https://httpbin.org/anything/5',
                'https://httpbin.org/anything/6',
                'https://httpbin.org/anything/7',
                'https://httpbin.org/anything/8',
                'https://httpbin.org/anything/9',
                'https://httpbin.org/anything/10',
                'https://httpbin.org/anything/11',
                'https://httpbin.org/anything/12',
            ],
        ]
    )
    async def test_fetch_recursive(self, source_list: list[str]):
        """Test Fetch Recursive"""
        beta = AsynchronousQueueBeta()
        await beta.fetch_recursive(
            source_list=source_list,
        )
</code></pre>
<p><strong>Result</strong></p>
<pre><code>platform darwin -- Python 3.12.1, pytest-7.4.4, pluggy-1.3.0 -- /Users/abc/Desktop/Project/Workspace/Python/pv312/bin/python3.12
cachedir: .pytest_cache
rootdir: /Users/abc/Desktop/Project/Async
configfile: pytest.ini
plugins: asyncio-0.23.3, anyio-4.2.0
asyncio: mode=Mode.STRICT
collected 1 item
Test/TestAsynchronousQueueBeta.py::TestAsynchronousQueueBeta::test_fetch_recursive[source_list0] Fetch Recursive
Query: https://httpbin.org/anything/1
Query: https://httpbin.org/anything/2
Query: https://httpbin.org/anything/3
Query: https://httpbin.org/anything/4
Query: https://httpbin.org/anything/5
Query: https://httpbin.org/anything/6
Query: https://httpbin.org/anything/7
Query: https://httpbin.org/anything/8
Query: https://httpbin.org/anything/9
Query: https://httpbin.org/anything/10
Query: https://httpbin.org/anything/11
Query: https://httpbin.org/anything/12
^C
!!! KeyboardInterrupt !!!
/opt/python/3.12.1/lib/python3.12/selectors.py:566: KeyboardInterrupt
(to show a full traceback on KeyboardInterrupt use --full-trace)
...
</code></pre>
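<p>The hang comes from <code>fetch_query</code>: each worker loops forever on <code>queue.get()</code>, and leaving the <code>async with TaskGroup()</code> block waits for all its tasks to finish — which they never do. A framework-free sketch of one fix (the function names here are invented): cancel the workers once <code>queue.join()</code> confirms every item was processed, and collect results in a shared list, since <code>task.result()</code> on a cancelled task would raise <code>CancelledError</code>.</p>

```python
import asyncio

async def worker(queue: asyncio.Queue, results: list) -> None:
    while True:  # runs until cancelled
        item = await queue.get()
        results.append(item)  # stand-in for the real fetch
        queue.task_done()

async def fetch_all(items, max_connections: int = 3) -> list:
    queue: asyncio.Queue = asyncio.Queue()
    for item in items:
        queue.put_nowait(item)
    results: list = []
    workers = [asyncio.create_task(worker(queue, results))
               for _ in range(max_connections)]
    await queue.join()          # every item fetched and task_done()'d
    for w in workers:           # the workers never exit on their own,
        w.cancel()              # so cancel them explicitly
    await asyncio.gather(*workers, return_exceptions=True)
    return results

print(asyncio.run(fetch_all(range(5))))
```

The same cancel-after-join pattern works inside a <code>TaskGroup</code>, but the explicit task list makes the lifecycle easier to see.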
|
<python><pytest><python-asyncio><pytest-asyncio>
|
2024-02-21 22:03:59
| 1
| 317
|
YTKme
|
78,037,267
| 4,268,602
|
matplotlib: combining hatching with pcolormesh
|
<pre><code>colors = ['darkgreen', 'darkred', 'red', 'green', 'lightgray']
to_plot = np.swapaxes(all_channel_maps, 0, 1)
# to_plot = all_channel_maps
plt.pcolormesh(to_plot, edgecolors='k', linewidth=0.1, cmap=ListedColormap(colors))
ax = plt.gca()
# ax.invert_yaxis()
ax.set_aspect('equal')
print(ax.pcolor)
plt.show()
</code></pre>
<p><code>all_channel_maps</code> is a 2D array of integers ranging from 0 to 4, each mapped to a specific color.</p>
<p>I want the cells with dark colors (values 0 and 1 in <code>all_channel_maps</code>) to be hatched in the resulting image.</p>
<p>How can I do this?</p>
|
<python><matplotlib>
|
2024-02-21 21:46:31
| 1
| 4,156
|
Daniel Paczuski Bak
|
78,037,265
| 7,874,547
|
python pptx: How to check if text in two placeholders overlap
|
<p>I'm using python-pptx to create a presentation.</p>
<p>I have a slide with 2 Text placeholders, the 1st above the 2nd (with the same x-coord (shape.left)). When I put text in both, the text in the 1st shape is long and overlaps the text in the shape below. If I can detect the overlap, I can move the 1st shape (text) up or move the 2nd text down. But how do I check for the overlap?</p>
<p>Or how to get height of text in placeholder/text box? (then I can calculate vertical size and find overlap).</p>
<p>The python is preferred language but the package can be any, so you can suggest any package if it can solve it.</p>
<p>I can provide some code if needed.</p>
<p>EDIT: Yes, I'm looking for a solution in Python (regardless of package).
Code example:</p>
<pre><code>import time

from pptx import Presentation as pptx_Presentation
from pptx.enum.text import MSO_AUTO_SIZE


def get_date_time_str():
    """Return string of "now" in %Y%m%d-%H%M%S format"""
    return time.strftime("%Y%m%d-%H%M%S", time.localtime())


long_text = "F" * 60

empty = r"C:\f\empty.pptx"  # PowerPoint file where is "Sections" layout and no slide
new = pptx_Presentation(empty)
slide_layout = new.slide_layouts[1]
# there are 2 Text Placeholders, both have the same shape.left so they are one above another

# Example 1: Set text directly into text_frame
slide1 = new.slides.add_slide(slide_layout)
for shape in slide1.shapes:
    shape.text_frame.text = long_text

# Example 2: Set text in paragraphs
# 3 paragraphs for each Text Placeholder
# NOTE: sometimes I also set text in runs in paragraphs
slide2 = new.slides.add_slide(slide_layout)
pars_data = [long_text for i in [1,2,3]]
wrap = True  # False
for shape in slide2.shapes:
    text_frame = shape.text_frame
    text_frame.word_wrap = wrap
    # also tested setting 'auto_size' before putting text
    # text_frame.auto_size = MSO_AUTO_SIZE.SHAPE_TO_FIT_TEXT
    text_frame.paragraphs[0].text = pars_data[0]
    for par_data in pars_data[1:]:
        par = text_frame.add_paragraph()
        par.text = par_data
    text_frame.auto_size = MSO_AUTO_SIZE.SHAPE_TO_FIT_TEXT

date_str = get_date_time_str()
out = rf"C:\f\out-{date_str}.pptx"
new.save(out)
</code></pre>
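<p>python-pptx does not render text, so it cannot report the true rendered height. One workable approximation, sketched framework-free below, is to estimate the number of wrapped lines from font size and box width and compare the resulting box extents; all the ratios (average glyph width, line spacing) are assumptions, not pptx values.</p>

```python
from math import ceil

def estimate_text_height_pt(paragraph_texts, font_size_pt, box_width_pt,
                            char_width_ratio=0.5, line_spacing=1.2):
    """Rough height of wrapped text in points.

    char_width_ratio is an assumed average glyph width relative to the
    font size; real fonts vary, so treat the result as an estimate.
    """
    chars_per_line = max(1, int(box_width_pt / (font_size_pt * char_width_ratio)))
    lines = sum(max(1, ceil(len(t) / chars_per_line)) for t in paragraph_texts)
    return lines * font_size_pt * line_spacing

def boxes_overlap(top1, height1, top2):
    """True if box 1 (top1 .. top1+height1) reaches past box 2's top."""
    return top1 + height1 > top2

# e.g. three 60-char paragraphs at 18pt in a 270pt-wide box:
h = estimate_text_height_pt(["F" * 60] * 3, 18, 270)
if boxes_overlap(100, h, 220):
    pass  # move the second placeholder down by (100 + h - 220) points
```

With real shapes you would feed in <code>shape.top</code> / <code>shape.width</code> (converted from EMU to points) and the estimated text height instead of <code>shape.height</code>.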
|
<python><python-pptx>
|
2024-02-21 21:46:14
| 0
| 320
|
martin-voj
|
78,037,174
| 4,119,262
|
Why is a function within the main function in Python always returning "None" when it should return a float?
|
<p>In order to learn Python, I am trying to take a string from the user and convert it into a float. I need to convert the minutes into a decimal number. This is what the "convert" function is intended to do.</p>
<p>But I always get "None" as output. What is wrong?</p>
<pre><code> def main():
time = input("what time is it?")
def convert (a):
#this split the string
x, z = time.split(":")
x = float(x)
z = float(z)
#converts hours and minutes to float
if z > 0:
z = z / 60
#converts minutes to decimals
else:
z = 0
k = x + z
return k
result = convert(time)
print(result)
if __name__ == "__main__":
main()
</code></pre>
<p>OK, so when reviewing the whole thing and using the following code, I get the error "expected an indented block after function definition on line 1".
This indentation stuff does not seem easy.</p>
<pre><code>def main():
time = input("what time is it?")
def convert (a):
x, z = time.split(":")
x = float(x)
z = float(z)
return x * (z / 60)
result = convert(time)
print(result)
if __name__ == "__main__":
main()
</code></pre>
<p>Ok so now it finally works...after I placed a small indentation after the first "main" line, like so:</p>
<pre><code>def main():
time = input("what time is it?")
def convert (a):
x, z = time.split(":")
x = float(x)
z = float(z)
return x * (z / 60)
result = convert(time)
print(result)
if __name__ == "__main__":
main()
</code></pre>
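<p>For reference, a sketch of the conversion the original function seems to aim for — hours plus minutes divided by 60 — with the parameter actually used (the <code>convert</code> above ignores its argument <code>a</code> and reads the enclosing <code>time</code> instead, and the final version multiplies rather than adds):</p>

```python
def convert(time_str: str) -> float:
    """Convert "H:MM" to decimal hours, e.g. "7:30" -> 7.5."""
    hours, minutes = time_str.split(":")
    return float(hours) + float(minutes) / 60

print(convert("7:30"))  # 7.5
```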
|
<python><function><type-conversion><minute>
|
2024-02-21 21:23:47
| 3
| 447
|
Elvino Michel
|
78,037,028
| 4,706,711
|
How can I prevent ‘connection reset by peer’ in curl caused by the server closing the socket
|
<p>I made a basic http server (I am developing my own because I want to do low level http analysis & manipulation in a later stage):</p>
<pre><code>import socket
import threading
import queue
import time


class SocketServer:
    """
    Basic Socket Server in python
    """

    def __init__(self, host, port, max_theads):
        self.host = host
        self.port = port
        self.server_socket = self.__initSocket()
        self.max_theands = max_theads
        self.request_queue = queue.Queue()

    def __initSocket(self):
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def __accept(self):
        self.server_socket.listen(5)
        while True:
            client_socket, client_address = self.server_socket.accept()
            self.request_queue.put((client_socket, client_address))

    def __handle(self):
        while True:
            # Dequeue a request and process it
            client_socket, address = self.request_queue.get()

            # Read HTTP Request
            # Log Http Request
            # Manipulate Http Request
            # Forward or respond
            client_socket.sendall(b"""HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html><body>Hello World</body></html>\r\n""")

            time.sleep(1)
            client_socket.close()
            self.request_queue.task_done()

    def __initThreads(self):
        for _ in range(self.max_theands):
            threading.Thread(target=self.__handle, daemon=True).start()

    def start(self):
        self.server_socket.bind((self.host, self.port))
        self.__initThreads()
        self.__accept()
</code></pre>
<p>And I launch it into a seperate process:</p>
<pre><code>#!/usr/bin/env python3

"""
1. Read settings
2. Bootstrap Manupilator
3. Bootstrap Control Panel
"""

import multiprocessing

from manipulator.http_socket_server import SocketServer

if __name__ == "__main__":
    # @todo read settings file
    host = "0.0.0.0"
    port = 80
    max_threads = 5

    server = SocketServer(host, port, max_threads)
    server_process = multiprocessing.Process(target=server.start)
    server_process.start()

    # Add other main application code here if needed
    server_process.join()
</code></pre>
<p>But curl receives:</p>
<pre><code>curl 10.0.0.2
<html><body>Hello World</body></html>
curl: (56) Recv failure: Connection reset by peer
</code></pre>
<p>despite the delay in <code>__handle</code>:</p>
<pre><code> time.sleep(1)
</code></pre>
<p>How can I close the socket gracefully?</p>
|
<python><python-3.x><sockets><curl>
|
2024-02-21 20:55:58
| 1
| 10,444
|
Dimitrios Desyllas
|
78,036,962
| 2,545,680
|
Which virtualenv tool is used by PyCharm to create virtual envs
|
<p>PyCharm has a configuration window to configure virtual envs:</p>
<p><a href="https://i.sstatic.net/1J8Oi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1J8Oi.png" alt="enter image description here" /></a></p>
<p>Does anybody know what tool it uses to create those virtualenvs, e.g. venv, virtualenv, or pipenv? Or maybe it's something proprietary to PyCharm?</p>
|
<python><pip><pycharm>
|
2024-02-21 20:42:03
| 0
| 106,269
|
Max Koretskyi
|
78,036,895
| 5,790,653
|
python inline append to list and create a variable
|
<p>I have a syntax error in this line:</p>
<pre class="lang-py prettyprint-override"><code>data = [{'name': project['name'], 'id': project['id']} for projects in api.all_projects() project = projects.to_dict() if project['name'].startswith('name')]
</code></pre>
<p>I'm not sure where the problem is.</p>
<p>Error message when I run the python cli:</p>
<pre><code>>>> data = [{'name': project['name'], 'id': project['id']} for projects in api.all_projects() project = projects.to_dict() if project['name'].startswith('name')]
File "<stdin>", line 1
data = [{'name': project['name'], 'id': project['id']} for projects in api.all_projects() project = projects.to_dict() if project['name'].startswith('name')]
^^^^^^^
SyntaxError: invalid syntax
</code></pre>
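<p>The cause of the error: a comprehension cannot contain an assignment statement like <code>project = projects.to_dict()</code>. The intermediate value can be bound with the walrus operator (<code>:=</code>, Python 3.8+) inside the <code>if</code> clause instead. A runnable sketch against a stand-in API object (<code>FakeAPI</code> and <code>FakeProject</code> are invented here purely so the example is self-contained):</p>

```python
class FakeProject:
    def __init__(self, name, id):
        self._data = {'name': name, 'id': id}
    def to_dict(self):
        return self._data

class FakeAPI:
    def all_projects(self):
        return [FakeProject('name-one', 1), FakeProject('other', 2)]

api = FakeAPI()

data = [
    {'name': p['name'], 'id': p['id']}
    for project in api.all_projects()
    # ":=" binds the dict once so it can be used in both the filter
    # and the result expression
    if (p := project.to_dict())['name'].startswith('name')
]
print(data)  # [{'name': 'name-one', 'id': 1}]
```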
|
<python>
|
2024-02-21 20:31:24
| 3
| 4,175
|
Saeed
|
78,036,766
| 2,868,322
|
Python/SQLModel; nice syntax to add a method call to `__init__` and `update`?
|
<p>I'm experimenting with SQLModel as an ORM. A few of my models have custom validation, calculated fields, or just things that I want to happen when they're created or changed. I end up using the following boilerplate a lot:</p>
<pre><code>class MyModel(SqlModel):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.custom_method()

    def update(self, **kwargs):
        super().update(**kwargs)
        self.custom_method()

    def custom_method(self):
        """Do this when a model is created or updated
        """
        pass
</code></pre>
<p>Is there some nice way I can sweeten this syntax a little? Ideally I'd like a decorator around the function that would inject the function call into <code>__init__</code> and <code>update</code>:</p>
<pre><code>class MyModel(SqlModel):
    @run_on_change
    def custom_method(self):
        """Do this when a model is created or updated
        """
        pass
</code></pre>
<p>But I can't figure out how this would work, since a decorator intercepts when a function is called and modifies its behaviour, whereas I want to modify the circumstances in which the function is called.</p>
<p>Alternatively, can anyone make a compelling argument for using a <code>@listens_for</code> decorator instead of the above boilerplate approach associated with the model itself?</p>
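<p>One framework-free way to get the <code>@run_on_change</code> syntax is a marker decorator plus <code>__init_subclass__</code> on a mixin, which rewrites <code>__init__</code> and <code>update</code> once per subclass. This is a sketch against a stand-in base (<code>FakeSqlModel</code> is invented); with SQLModel itself, pydantic's metaclass may need extra care, and a subclass of a hooked class would re-wrap.</p>

```python
def run_on_change(method):
    """Mark a method to run automatically after __init__ and update."""
    method._run_on_change = True
    return method

class ChangeHookMixin:
    """Wraps __init__/update of subclasses so marked methods fire."""
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        hooks = [name for name, attr in vars(cls).items()
                 if getattr(attr, "_run_on_change", False)]
        if not hooks:
            return

        def wrap(fn):
            def wrapper(self, *args, **kw):
                result = fn(self, *args, **kw)
                for name in hooks:
                    getattr(self, name)()
                return result
            return wrapper

        cls.__init__ = wrap(cls.__init__)
        if hasattr(cls, "update"):
            cls.update = wrap(cls.update)

# A stand-in base with the same shape as SqlModel in the question:
class FakeSqlModel(ChangeHookMixin):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
    def update(self, **kwargs):
        self.__dict__.update(kwargs)

class MyModel(FakeSqlModel):
    @run_on_change
    def custom_method(self):
        self.changed = getattr(self, "changed", 0) + 1

m = MyModel(x=1)  # custom_method fires -> m.changed == 1
m.update(x=2)     # fires again          -> m.changed == 2
```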
|
<python><sqlalchemy><python-decorators><sqlmodel>
|
2024-02-21 20:03:13
| 1
| 887
|
charrison
|
78,036,731
| 11,144,121
|
save method or post_save signal is not called when a through-model instance of a ManyToManyField is created
|
<p>I have a <code>Post</code> model in my Django app. It has a many-to-many relationship with the <code>User</code> model through the <code>UserTag</code> model, named <code>user_tags</code>.</p>
<pre class="lang-py prettyprint-override"><code>class Post(models.Model):
    id = models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True)
    author = models.ForeignKey(User, on_delete=models.CASCADE)
    text = models.TextField()
    user_tags = models.ManyToManyField(User, through=UserTag)
</code></pre>
<pre class="lang-py prettyprint-override"><code>class UserTag(models.Model):
id = models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True)
user = models.ForeignKey(User, on_delete=models.CASCADE)
post = models.ForeignKey(Post, on_delete=models.CASCADE, null=True, blank=True)
comment = models.ForeignKey(Comment, on_delete=models.CASCADE, null=True, blank=True)
def save(self, *args, **kwargs):
print("#### user tagged ##### - save method")
print(self.user)
return super().save(*args, **kwargs)
@receiver(post_save, sender=UserTag)
def user_tagged_signal(sender, instance, created, **kwargs):
if not created:
return
print("#### user tagged ##### - post save signal")
print(instance.user)
</code></pre>
<p>When I tag a user to a post as follows:</p>
<pre class="lang-py prettyprint-override"><code>post.user_tags.set(users)
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>post.user_tags.add(user)
</code></pre>
<p><code>UserTag</code> model instance gets created, but the <code>save</code> method or <code>post_save</code> signal of the <code>UserTag</code> model is not getting called.</p>
<p>I want to send notification to the user and remove the notification depending on the creation and deletion of the <code>UserTag</code> instance.</p>
<p>Is there anything I'm doing wrong, or can you suggest a solution?</p>
|
<python><django><django-signals>
|
2024-02-21 19:54:16
| 1
| 387
|
Tanvir Ahmed
|
78,036,552
| 2,013,056
|
Selenium code to extract text from project issue description
|
<p>I am trying to extract issue content from Gitee using Selenium in Python, but it returns blank text when I try to extract it. Here is the inspected element:</p>
<p><a href="https://i.sstatic.net/5aulV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5aulV.png" alt="enter image description here" /></a></p>
<p>I want to extract all the text inside the div with class <code>git-issue-description markdown-body</code>.</p>
<p>But when I try to extract it using the following code, it doesn't work:</p>
<pre><code>Issue_description = driver.find_element(By.CLASS_NAME,'git-issue-description markdown-body').text
</code></pre>
<p>What should i do different to get the content inside this div class? Here is the link to the website that I am trying to extract text from - <a href="https://gitee.com/openharmony/arkui_ace_engine/issues/I92R3M?from=project-issue" rel="nofollow noreferrer">https://gitee.com/openharmony/arkui_ace_engine/issues/I92R3M?from=project-issue</a></p>
|
<python><python-3.x><selenium-webdriver>
|
2024-02-21 19:18:20
| 2
| 649
|
Mano Haran
|
78,036,516
| 11,748,924
|
pylance giving false negative import error in vscode
|
<p>I successfully connect to a Jupyter notebook on an <em>external Linux server</em> in my organization (<code>http://100.96.0.29:8888/?token=...</code>) through VS Code on my <em>Windows laptop</em>.</p>
<p>I also successfully installed some dependencies, such as cupy, through the first cell: <code>%pip install cupy</code>.</p>
<p>I also successfully import cupy in the second cell:</p>
<pre><code>import cupy as cp #Import "cupy" could not be resolved Pylance(reportMissingImport)
x_gpu = cp.array([1, 2, 3])
x_gpu
</code></pre>
<p>Giving me output:</p>
<pre><code>array([1, 2, 3])
</code></pre>
<p>But why does Pylance report a missing import?</p>
<p><a href="https://i.sstatic.net/siD9t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/siD9t.png" alt="enter image description here" /></a></p>
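<p>Since the code executes on the remote Jupyter kernel while Pylance analyses it against the locally selected interpreter, a package installed only on the remote side is invisible to the analyzer even though the cell runs fine. Assuming that is what happens here, one workaround is to downgrade that diagnostic in <code>settings.json</code> (a sketch, not the only option):</p>

```json
{
    "python.analysis.diagnosticSeverityOverrides": {
        "reportMissingImports": "none"
    }
}
```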
|
<python><visual-studio-code><jupyter-notebook><pylance>
|
2024-02-21 19:08:55
| 2
| 1,252
|
Muhammad Ikhwan Perwira
|
78,036,426
| 12,274,651
|
Lookup values with sos1 in gekko
|
<p>The optimizer needs to select the appropriate discrete option to maximize an objective. It uses a lookup to correlate the option (0-5) to associated values that can then be used in the optimization definition. Here is a simplified version that demonstrates the issue:</p>
<p><strong>Data</strong></p>
<pre class="lang-py prettyprint-override"><code>x_data = np.array([0, 1, 2, 3, 4, 5])
y_data = np.array([1000,2000,9000,4500,5000,900])
z_data = np.array([15,13,12,17,11,10])
</code></pre>
<p>I found this <a href="https://stackoverflow.com/questions/64473879/using-lookup-tables-in-gekko-python-intermediate-equation">related post</a> that demonstrates how to use a <code>cspline</code> function to create the relationship between the discrete choice <code>x</code> variable and the dependent <code>y</code> and <code>z</code> variables. I've created the <code>x</code> variable as an <code>sos1</code> type that selects one from the list of values.</p>
<pre class="lang-py prettyprint-override"><code>from gekko import GEKKO
import numpy as np
# Define Data Points
x_data = np.array([0, 1, 2, 3, 4, 5])
y_data = np.array([1000,2000,9000,4500,5000,900])
z_data = np.array([15,13,12,17,11,10])
# Create Variables
m = GEKKO()
x = m.sos1(x_data)
y,z = m.Array(m.Var,2)
m.cspline(x, y, x_data, y_data)
m.cspline(x, z, x_data, z_data)
# Define objective and solve
m.Maximize(y*z)
m.options.SOLVER = 1
m.solve(disp=True)
# Print the results
print(f'x: {x.value[0]}, y: {y.value[0]}, z: {z.value[0]}')
print('Objective:', y.value[0]*z.value[0])
print(y_data*z_data)
</code></pre>
<p>I get the wrong answer (maybe a local solution) when solving this problem.</p>
<pre><code>x: 0.0, y: 1000.0, z: 15.0
Objective: 15000.0
</code></pre>
<p>It may have something to do with a local minimum that is created by the cubic spline. Do you have any suggestions to find the maximum for this synthetic function?</p>
|
<python><mathematical-optimization><gekko>
|
2024-02-21 18:51:29
| 1
| 744
|
TexasEngineer
|
78,036,376
| 12,252,679
|
How to activate python environment in ZSH using VSCode terminal
|
<p>I have a folder called <code>src</code> where there is a python environment called <code>venv</code>.</p>
<p>When <code>src</code> is open in VSCode, I need this environment to be activated as soon as I launch a terminal.</p>
<p>In <code>src/.vscode/settings.json</code> I have set</p>
<pre><code>"python.terminal.activateEnvironment": true
</code></pre>
<p>Which correctly activates <code>venv</code> when <code>terminal.integrated.defaultProfile.linux</code> is set to <code>bash</code>. However when it is set to <code>zsh</code>, the python interpreter used is the global one (<code>/usr/bin/python3</code>) and I have to manually run <code>source venv/bin/activate</code> in the terminal so that it uses <code>src/venv/bin/python3</code> instead.</p>
<p>I also tried the following:</p>
<pre class="lang-json prettyprint-override"><code> "terminal.integrated.profiles.linux": {
"venv": {
"path": "/bin/zsh", // works with "/bin/bash"
"source": "venv/bin/activate",
"args": []
}
},
"terminal.integrated.defaultProfile.linux": "venv"
</code></pre>
<p>But I get the same result.</p>
<p>In this folder, zsh always puts this at the end of the line
<a href="https://i.sstatic.net/CVJUj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CVJUj.png" alt="enter image description here" /></a>. I am not sure what this means exactly but I suspect it could give a clue of what's happening.</p>
<p>I also tried disabling all my zsh plugins but again, same result.</p>
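<p>For comparison, a shell-level fallback (assuming the venv lives at <code>./venv</code> inside the workspace) is to activate it at the very end of <code>~/.zshrc</code>, after Oh My Zsh has finished its own PATH setup, so nothing can override the activation; this is a sketch of that idea, not VS Code's own mechanism:</p>

```shell
# End of ~/.zshrc: activate a project-local venv if one exists and
# nothing is active yet (hypothetical fallback; adjust the path as needed)
if [ -z "$VIRTUAL_ENV" ] && [ -f ./venv/bin/activate ]; then
  . ./venv/bin/activate
fi
```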
|
<python><visual-studio-code><virtualenv><zsh><oh-my-zsh>
|
2024-02-21 18:40:23
| 1
| 482
|
Dj0ulo
|
78,036,351
| 104,910
|
ubuntu version of pip installs package_data to purelib
|
<p>I am trying to understand why a wheel created by Python 3.10 on <code>Ubuntu 22.04</code> is placing my <code>package_data</code> into a <code>purelib</code> directory, while <code>Anaconda</code> (Python 3.12) and <code>manylinux2014</code> (Python 3.11) are not.</p>
<p>My <code>setup.py</code>:</p>
<pre><code>from setuptools import setup, dist
import os
import re
class BinaryDistribution(dist.Distribution):
"""Distribution which always forces a binary package with platform name"""
def has_ext_modules(self):
return True
setup(
packages=['pyfoo',],
package_data={
'pyfoo' : ['./fooJSON', './fooJSON.exe'],
},
distclass=BinaryDistribution,
)
</code></pre>
<p>where <code>fooJSON</code> is a binary executable. After running:</p>
<pre><code>pip wheel .
</code></pre>
<p>On <code>Anaconda</code> and <code>Manylinux 2_28</code> python it gets placed in the wheel as:</p>
<pre><code>pyfoo/fooJSON
</code></pre>
<p>where on Ubuntu it is placed as:</p>
<pre><code>pyfoo-0.0.2.data/purelib/pyfoo/fooJSON
</code></pre>
<p>What is the proper way to indicate package_data is not purelib? Is Ubuntu an outlier?</p>
<p>Anaconda: pip 23.3.1
Ubuntu 22.04: pip 22.0.2</p>
|
<python><ubuntu><pip><python-wheel>
|
2024-02-21 18:34:33
| 0
| 3,813
|
Juan
|
78,036,340
| 7,702,354
|
ansible get the last character of a fact
|
<p>I would like to have the last character of a fact. My playbook looks like this:</p>
<pre><code>- name: sf
set_fact:
root_dev: "{{ ansible_mounts|json_query('[?mount == `/`].device') }}"
- name: e
debug:
msg: "{{ root_dev[:-1] }}"
</code></pre>
<p>The problem is the output in this case always:</p>
<pre><code>"msg": []
</code></pre>
<p>or if I use an index instead of a slice:</p>
<pre><code> debug:
msg: "{{ root_dev[-1] }}"
</code></pre>
<p>then the whole device path is the output:</p>
<pre><code>"msg": "/dev/sda1"
</code></pre>
<p>I also can't quote the whole <code>root_dev</code> because it is a fact and I would like to get the last character of its value. The <code>split</code> filter also does not work in this case, because the device can be <code>/dev/sda</code> or <code>/dev/mapper/root_part_0</code> and so on. What would be the best option in this case?</p>
|
<python><string><ansible><slice>
|
2024-02-21 18:32:14
| 2
| 359
|
Darwick
|
78,036,302
| 265,521
|
Deadlock with Python async processes and semaphore
|
<p>Why does this deadlock?</p>
<pre><code>#!/usr/bin/env python3
import asyncio
async def _stream_subprocess(id: int, command: "list[str]"):
proc = await asyncio.create_subprocess_exec(*command, stdout=asyncio.subprocess.PIPE)
await proc.wait()
print(f"{id}: Done")
async def run(id: int, command: "list[str]"):
print(f"{id}: Running")
await _stream_subprocess(id, command)
print(f"{id}: Throwing")
raise RuntimeError("failed")
async def run_with_parallelism_limit(id: int, command: "list[str]", limit: asyncio.Semaphore):
async with limit:
print(f"{id}: Calling run")
await run(id, command)
print(f"{id}: Run finished")
async def main():
sem = asyncio.Semaphore(1)
await asyncio.gather(
run_with_parallelism_limit(0, ["python", "--version"], sem),
run_with_parallelism_limit(1, ["python", "--version"], sem),
)
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>Output:</p>
<pre><code>0: Calling run
0: Running
0: Done
0: Throwing
1: Calling run
1: Running
</code></pre>
<p>There's <a href="https://stackoverflow.com/questions/77538602/deadlock-with-asyncio-semaphore">this question</a> which is possibly related but it's difficult to tell since it isn't solved and the code is different.</p>
|
<python><python-asyncio><deadlock>
|
2024-02-21 18:26:20
| 1
| 98,971
|
Timmmm
|
78,036,062
| 18,964,988
|
How to get the status of a build on GithHub through PyGitHub
|
<p>I have a text file with a list of repos. I want to use PyGithub to iterate through the list and tell me, for each repo, whether the last build succeeded or failed on the dev or master branch, depending on which one exists. This is what I have:</p>
<pre><code>from github import Github
g = Github(base_url="https://code.secret.url/api/v3", login_or_token="SECRET_TOKEN")
user = g.get_user()
login = user.login
org = g.get_organization("SECRET_ORG")
# Open the file containing the list of repositories
with open("repo_list.txt") as f:
for repo_name in f:
repo = org.get_repo(repo_name.strip())
try:
print(f"Processing repo: {repo_name.strip()}")
# Get branches as a PaginatedList object
branches = repo.get_branches()
# Check if the 'dev' branch exists
dev_branch_exists = False
for branch in branches:
if branch.name == 'dev':
dev_branch_exists = True
break
# Set base_branch to either dev or master
if dev_branch_exists:
base_branch = repo.get_branch("dev")
else:
base_branch = repo.get_branch("master")
# Get the latest commit
latest_commit = base_branch.commit
print(f"Latest commit: {latest_commit}")
# Check the status of the latest commit
statuses = latest_commit.get_statuses()
print(f"Statuses: {statuses}")
# Print the statuses if available
if statuses:
print("List of statuses:")
for status in statuses:
print(f"State: {status.state}, Description: {status.description}")
else:
print("No statuses found for the latest commit.")
except Exception as e:
print(f"An error occurred while processing {repo_name.strip()}: {str(e)}")
</code></pre>
<p>When I try printing the status it appears to be empty because in the log it will say</p>
<pre><code>List of statuses:
</code></pre>
<p>but nothing prints afterwards meaning it doesn't even execute <code>print(f"State: {status.state}, Description: {status.description}")</code>. I used the following documentation as reference <a href="https://pygithub.readthedocs.io/en/latest/github_objects/Commit.html#github.Commit.Commit.get_statuses" rel="nofollow noreferrer">https://pygithub.readthedocs.io/en/latest/github_objects/Commit.html#github.Commit.Commit.get_statuses</a></p>
<p>It's easy to see the status of a repo just by using the GitHub UI and clicking on Actions, but I was hoping to do this through automation. When I try printing out the statuses from</p>
<pre><code> # Check the status of the latest commit
statuses = latest_commit.get_statuses()
print(f"Statuses: {statuses}")
</code></pre>
<p>It will print out something like <code><github.PaginatedList.PaginatedList object at 0x000001CD1AF03310></code>
so I am certain that I am populating the variable statuses correctly with .get_statuses(), but afterwards I am not certain that what I am doing is the correct way. I forgot to mention that print(f"Latest commit: {latest_commit}") also appears to print correctly, so I don't think there's an issue with the commit I'm grabbing.</p>
<p>UPDATE: I also tried the following code snippet as per someone's suggestion</p>
<pre><code># Get the last commit associated with the pull request
last_commit = pr.get_commits().reversed[0].commit
print(f"Last commit SHA: {last_commit.sha}")
# Get the check runs associated with the last commit
check_runs = last_commit.get_check_runs()
# Check if all check runs have a conclusion of "success"
passing = all(check.conclusion == "success" for check in check_runs)
print("Build passed? ", passing)
</code></pre>
<p>But it gave me the error <code>'GitCommit' object has no attribute 'get_check_runs'</code></p>
|
<python><github><github-actions><github-api><pygithub>
|
2024-02-21 17:42:25
| 1
| 489
|
DevopitionBro
|
78,036,050
| 2,707,864
|
Sympy: Get the solutions of an ODE as Function objects
|
<p>As a follow-up of <a href="https://stackoverflow.com/questions/78031336/sympy-extract-the-two-functions-solutions-of-a-homogeneous-second-order-linear">Extract the two functions solutions of a homogeneous second order linear ODE</a>, I want to get <code>u1</code> and <code>u2</code> as Function objects.
This way, I could use them in generic expressions, e.g., definite/indefinite integrals in user-defined functions, etc.
E.g., to calculate particular solutions of the ODE.
A workaround is shown below, but I find it clumsy and possibly fragile.</p>
<p>This is what I have</p>
<pre><code>>>> print(u1)
>>> print(type(u1))
>>> print(u2)
>>> print(type(u2))
1/r
<class 'sympy.core.power.Pow'>
r
<class 'sympy.core.symbol.Symbol'>
</code></pre>
<p>I want to define</p>
<pre><code>p = sym.Function('p', real=True)
def a1_par(u1, p):
# From the standard form, g(r) = p'(r)
R = sym.symbols('R', real=True, positive=True)
g = sym.diff(p(R), R)
# This would not work, since u1 is already a function of r
# I need a Function object, so I can use u1(R) in the integrand
#a1 = sym.integrate(u1*g, (R, 0, r)).doit() # not useful
a1 = sym.integrate(u1.subs(r,R)*g, (R, 0, r)).doit() # workaround
return a1
</code></pre>
<p>In fact, the parameter I am passing to <code>a1_par</code> is a function of <code>r</code>, not merely a Function object. If <code>u1</code> were a function object, my integration line would read</p>
<pre><code> a1 = sym.integrate(u1(R)*g, (R, 0, r)).doit()
</code></pre>
<p>Moreover, the return value of <code>a1_par</code> is again a function of <code>r</code>.
I would like to get a "handle" to the function (a Function object), so I can use it in other calculations.</p>
<p><strong>Related</strong>:</p>
<ol>
<li><a href="https://stackoverflow.com/questions/37100053/how-to-define-a-mathematical-function-in-sympy">How to define a mathematical function in SymPy?</a></li>
</ol>
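<p>For reference, SymPy's <code>Lambda</code> can turn an expression such as <code>u1</code> into a callable handle, which avoids the <code>subs</code> workaround (a sketch, with <code>1/r</code> standing in for the extracted solution and <code>g(R) = R</code> as a hypothetical integrand):</p>

```python
import sympy as sym

r, R = sym.symbols("r R", real=True, positive=True)
u1_expr = 1 / r               # stands in for the expression pulled from dsolve
u1 = sym.Lambda(r, u1_expr)   # callable handle: u1(R) substitutes r -> R

# The integrand can now be written without a manual .subs(r, R)
a1 = sym.integrate(u1(R) * R, (R, 1, r))  # hypothetical g(R) = R
```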
|
<python><sympy><differentiation><symbolic-integration>
|
2024-02-21 17:40:21
| 0
| 15,820
|
sancho.s ReinstateMonicaCellio
|
78,035,957
| 9,443,671
|
How can I extract the duration and offset from a numpy array representing audio?
|
<p>I'm currently running a script where I take an entire audio file and save it using the <code>audiofile</code> library (which, in-turn, uses the <code>soundfile</code> library) in Python.</p>
<p>I'm trying to mimic the behavior of <code>audiofile.read()</code> where I give it an offset and duration (in seconds) and only return the respective numpy array of that particular sound interval. The only difference here is that instead of taking in a <code>.wav</code> file like the library requires, I'll already have the entire audio file as a numpy array and need to extract the correct start and end intervals from it.</p>
<p><a href="https://audeering.github.io/audiofile/_modules/audiofile/core/io.html#read" rel="nofollow noreferrer">I've tried copying the logic of calculating the start</a> and end and just slicing the numpy array from <code>sound_file[start:end]</code> but that doesn't seem to work. I'm not too familiar with how signal processing works with audio files so I'm at a little bit of a loss here and any help would be appreciated!</p>
<p>Here's my code:</p>
<p>I expect it to take in a numpy array, and return the same numpy array sliced to include only the start + the duration specified. All the files I've loaded were originally 96KHz that were resampled to 16KHz and saved as numpy arrays.</p>
<pre><code>
from audiofile.core.utils import duration_in_seconds
import audmath
def read_from_np(
file,
duration,
offset,
sampling_rate = 16000
):
if duration is not None:
duration = duration_in_seconds(duration, sampling_rate)
if np.isnan(duration):
duration = None
if offset is not None and offset != 0:
offset = duration_in_seconds(offset, sampling_rate)
if np.isnan(offset):
offset = None
# Support for negative offset/duration values
# by counting them from end of signal
#
if offset is not None and offset < 0 or duration is not None and duration < 0:
# Import duration here to avoid circular imports
from audiofile.core.info import duration as get_duration
signal_duration = get_duration(file)
# offset | duration
# None | < 0
if offset is None and duration is not None and duration < 0:
offset = max([0, signal_duration + duration])
duration = None
# None | >= 0
if offset is None and duration is not None and duration >= 0:
if np.isinf(duration):
duration = None
# >= 0 | < 0
elif offset is not None and offset >= 0 and duration is not None and duration < 0:
if np.isinf(offset) and np.isinf(duration):
offset = 0
duration = None
elif np.isinf(offset):
duration = 0
else:
if np.isinf(duration):
offset = min([offset, signal_duration])
duration = np.sign(duration) * signal_duration
orig_offset = offset
offset = max([0, offset + duration])
duration = min([-duration, orig_offset])
# >= 0 | >= 0
elif offset is not None and offset >= 0 and duration is not None and duration >= 0:
if np.isinf(offset):
duration = 0
elif np.isinf(duration):
duration = None
# < 0 | None
elif offset is not None and offset < 0 and duration is None:
offset = max([0, signal_duration + offset])
# >= 0 | None
elif offset is not None and offset >= 0 and duration is None:
if np.isinf(offset):
duration = 0
# < 0 | > 0
elif offset is not None and offset < 0 and duration is not None and duration > 0:
if np.isinf(offset) and np.isinf(duration):
offset = 0
duration = None
elif np.isinf(offset):
duration = 0
elif np.isinf(duration):
duration = None
else:
offset = signal_duration + offset
if offset < 0:
duration = max([0, duration + offset])
else:
duration = min([duration, signal_duration - offset])
offset = max([0, offset])
# < 0 | < 0
elif offset is not None and offset < 0 and duration is not None and duration < 0:
if np.isinf(offset):
duration = 0
elif np.isinf(duration):
duration = -signal_duration
else:
orig_offset = offset
offset = max([0, signal_duration + offset + duration])
duration = min([-duration, signal_duration + orig_offset])
duration = max([0, duration])
# Convert to samples
#
# Handle duration first
# and returned immediately
# if duration == 0
if duration is not None and duration != 0:
duration = audmath.samples(duration, sampling_rate)
if duration == 0:
from audiofile.core.info import channels as get_channels
channels = get_channels(file)
if channels > 1 or always_2d:
signal = np.zeros((channels, 0))
else:
signal = np.zeros((0,))
return signal, sampling_rate
if offset is not None and offset != 0:
offset = audmath.samples(offset, sampling_rate)
else:
offset = 0
start = offset
# duration == 0 is handled further above with immediate return
if duration is not None:
stop = duration + start
return np.expand_dims(file[0, start:stop], 0)
</code></pre>
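<p>For what it's worth, once the whole signal is already a NumPy array, the offset/duration bookkeeping reduces to converting seconds to sample indices and slicing; a minimal sketch (ignoring the negative/infinite offset handling above):</p>

```python
import numpy as np

def slice_audio(signal, offset, duration, sampling_rate=16000):
    """Return the samples covering [offset, offset + duration) seconds.

    `signal` is assumed to be a 1-D (or channels-first 2-D) numpy array
    that already holds the full decoded audio at `sampling_rate`.
    """
    start = int(round(offset * sampling_rate))
    if duration is None:
        stop = signal.shape[-1]
    else:
        stop = start + int(round(duration * sampling_rate))
    return signal[..., start:stop]

sr = 16000
audio = np.arange(10 * sr, dtype=np.float32)  # 10 s of dummy samples
chunk = slice_audio(audio, offset=2.0, duration=1.5, sampling_rate=sr)
```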
|
<python><numpy><audio><signal-processing><soundfile>
|
2024-02-21 17:24:50
| 1
| 687
|
skidjoe
|
78,035,952
| 728,286
|
Can MatPlotlib make charts with a deviation from zero?
|
<p>I'm trying to make a chart like this:</p>
<p><a href="https://i.sstatic.net/AOBr9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AOBr9.png" alt="enter image description here" /></a></p>
<p>or this:</p>
<p><a href="https://i.sstatic.net/YGTdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YGTdr.png" alt="enter image description here" /></a></p>
<p>My original thought was to do it with Matplotlib. I haven't found any examples of doing it with that library; is it possible, or should I look at a different Python library?</p>
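<p>For what it's worth, a diverging chart like those above can be sketched in Matplotlib with an ordinary bar chart whose baseline is zero, coloring bars by sign (the labels and values below are invented placeholders):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
import matplotlib.pyplot as plt
import numpy as np

labels = ["A", "B", "C", "D"]              # hypothetical categories
values = np.array([0.4, -0.8, 1.2, -0.3])  # deviations from zero

fig, ax = plt.subplots()
colors = np.where(values >= 0, "tab:blue", "tab:red")
ax.barh(labels, values, color=colors)      # one bar per category, signed
ax.axvline(0, color="black", linewidth=0.8)  # the zero baseline
fig.savefig("diverging.png")
```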
<p>Thanks,
Alex</p>
|
<python><matplotlib><charts>
|
2024-02-21 17:24:34
| 0
| 4,914
|
Alex S
|
78,035,937
| 7,169,895
|
How do I adjust the QDateTimeAxis so the first and last points are not on the edges without shifting the values?
|
<p>I am trying to add padding to the QDateTimeAxis so the first and last points are not on the edge of the graph. I am trying to add a padding of about a month. I then graph some data points showing whether a company achieved a certain EPS number (a financial goal) that quarter.</p>
<p>Here is what my graph looks like:
<a href="https://i.sstatic.net/cHV1y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cHV1y.png" alt="Graph 1" /></a> The red arrows indicate that I want some space next to those points so its more clean looking.</p>
<p>When I shift the min and max date using:</p>
<pre><code> self.axisX.setMin(min_date.addDays(-30))
self.axisX.setMax(max_date.addDays(30))
</code></pre>
<p>I get this graph, where the axis is different, but the data points remain on the edge. The yellow highlights show that the date has been changed but the points have not been moved so the date is now wrong.<a href="https://i.sstatic.net/emyJy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/emyJy.png" alt="Graph 2" /></a>.</p>
<p>Why is this? The closest similar problem I could find was this: <a href="https://stackoverflow.com/questions/45326462/bad-values-in-qdatetimeaxis-qtcharts">here</a>, but I am not sure if that described the same issue I am having. It seems this is a new question. I have had this issue for a couple of my time-series graphs.</p>
<p>My code:</p>
<pre><code>import sys
from PySide6.QtCore import Qt
from PySide6.QtGui import QPainter, QColor
from PySide6.QtWidgets import QMainWindow, QApplication
from PySide6.QtCharts import QChart, QChartView, QScatterSeries, QValueAxis, QDateTimeAxis
from PySide6.QtCore import QDateTime
import yfinance as yf
import pprint
import pandas as pd
class TestChart(QMainWindow):
def __init__(self):
super().__init__()
ticker = yf.Ticker("MSFT")
earnings_dates_df = ticker.earnings_dates
# Remove time from the datetime object
# earnings_dates_df.index = earnings_dates_df.index.date
print(earnings_dates_df)
# Get the min and maxs of the EPS date for the axis
min_eps_estimate = earnings_dates_df['EPS Estimate'].min()
min_eps_actual = earnings_dates_df['Reported EPS'].min()
max_eps_estimate = earnings_dates_df['EPS Estimate'].max()
max_eps_actual = earnings_dates_df['Reported EPS'].max()
# Set the date axis
self.axisX = QDateTimeAxis()
self.axisX.setFormat("MM-dd-yyyy")
self.axisX.setTitleText("Date")
min_date = QDateTime(earnings_dates_df.index.min())
max_date = QDateTime(earnings_dates_df.index.max())
self.axisX.setMin(min_date.addDays(-30))
self.axisX.setMax(max_date.addDays(30))
# Set the EPS axis
self.axisY = QValueAxis()
if min_eps_estimate < min_eps_actual:
self.axisY.setMin(min_eps_estimate - (0.15 * min_eps_estimate))
else:
self.axisY.setMin(min_eps_actual - (0.15 * min_eps_actual))
if max_eps_estimate > max_eps_actual:
self.axisY.setMax(max_eps_estimate + (0.15 * max_eps_estimate))
else:
self.axisY.setMax(max_eps_actual + (0.15 * max_eps_actual))
self.axisY.setTitleText("EPS $")
# Create the graphes points for the estimates
self.estimate_series = QScatterSeries()
# Create the graphes points for the actuals that MISSED
self.missed_series = QScatterSeries()
self.missed_series.setColor(QColor(255, 0, 0))
# Create the graphes points for the actuals that MADE it.
self.achieved_series = QScatterSeries()
self.achieved_series.setColor(QColor(0, 0, 255))
# iterate over the pandas to get each date and estimate
for index, row in earnings_dates_df.iterrows():
# Test for NAN
if pd.isna(row['EPS Estimate']) or pd.isna(row['Reported EPS']):
continue
earnings_date_time = QDateTime(index).toMSecsSinceEpoch()
print(earnings_date_time)
# Add ALL of our estimates to a series.
self.estimate_series.append(earnings_date_time, row['EPS Estimate'])
# if the estimate was missed add it to the missed series
if row['Reported EPS'] < row['EPS Estimate']:
self.missed_series.append(earnings_date_time, row['Reported EPS'])
# otherwise add it to the achieved series
else:
self.achieved_series.append(earnings_date_time, row['Reported EPS'])
print(self.missed_series.points())
self.chart = QChart()
self.chart.legend().hide()
self.chart.addAxis(self.axisY, Qt.AlignLeft)
self.chart.addAxis(self.axisX, Qt.AlignBottom)
self.chart.addSeries(self.estimate_series)
self.estimate_series.attachAxis(self.axisY)
self.estimate_series.attachAxis(self.axisY)
self.chart.addSeries(self.missed_series)
self.missed_series.attachAxis(self.axisY)
self.missed_series.attachAxis(self.axisY)
self.chart.addSeries(self.achieved_series)
self.achieved_series.attachAxis(self.axisY)
self.achieved_series.attachAxis(self.axisY)
self.chart.setTitle("Earnings Per Share (estimated versus actual)")
self._chart_view = QChartView(self.chart)
self._chart_view.setRenderHint(QPainter.Antialiasing)
self.setCentralWidget(self._chart_view)
if __name__ == "__main__":
app = QApplication(sys.argv)
window = TestChart()
window.show()
window.resize(1000, 1000)
sys.exit(app.exec())
</code></pre>
|
<python><pyside6>
|
2024-02-21 17:21:40
| 0
| 786
|
David Frick
|
78,035,781
| 15,341,457
|
Tkinter - winfo_height() always returns 1
|
<p>I have a function that gets called by clicking a button called <em>expand_keyword</em>. This function expands a row of a table to show more data. The data is contained in multiple frames called <em>single_cve_frame</em>.</p>
<p>I'm trying to get the height of those frames by using the function <code>winfo_height()</code>, but it always returns the value '1'. I tried retrieving the value after <em>packing</em> the frame and after calling the root's <code>update_idletasks()</code> function, but I get the same result. I call that function by accessing the parent of <code>self</code>, which is the root. Here is my code:</p>
<pre><code>def expand_keyword(self, frame: ExpandableKeywordCVEsFrame, organized_cves):
for cve in organized_cves[keyword]['matching_cves']:
single_cve_frame = ctk.CTkFrame(frame, fg_color = '#E1E1E1')
ctk.CTkLabel(single_cve_frame,
text = cve,
text_color = 'black',
font = ('Inter', 18)
).place(relx = 0.2, rely = 0.5, relwidth = 0.25, relheight = 1, anchor = 'center')
single_cve_frame.pack(expand = True, fill = 'both')
print(single_cve_frame.winfo_height())
self.master.update_idletasks()
print(single_cve_frame.winfo_height())
</code></pre>
<p>Edit: I've noticed that the <code>winfo_height</code> values are returned before the widgets are rendered on the GUI. This could be the reason they are '1'. Even when I set the height of the frames statically, the values are still 1.</p>
|
<python><user-interface><tkinter><widget><height>
|
2024-02-21 16:58:53
| 0
| 332
|
Rodolfo
|
78,035,674
| 6,067,528
|
Why does collecting inbound and outbound audio create distorted audio?
|
<p>I'm trying to stream audio bytes into a shared buffer, and pass this through a transcription model. The audio's coming from a websocket sampled at 8kHz and mu-law encoded. I've managed to play a few seconds of audio to myself fine if I stream into separate audio buffers (<code>ibuffer</code> and <code>obuffer</code>) for inbound and outbound audio, but if I collect into a <code>shared</code> buffer the audio is really slow and delayed. Here is an extract from my testing code:</p>
<pre><code>obuffer = b""
ibuffer = b""
shared = b""
while True:
data = await queue.get()
if data["event"] == "media":
websocket_payload = data["media"]["payload"]
chunk = audioop.ulaw2lin(base64.b64decode(websocket_payload), 2)
if data["media"]["track"] == INBOUND:
obuffer += chunk
if data["media"]["track"] == OUTBOUND:
ibuffer += chunk
shared += chunk
</code></pre>
<p>I've been testing by collecting <code>obuffer</code>, <code>ibuffer</code> and <code>shared</code>, pickling the buffers, and then saving as <code>.wav</code> files and playing them on my machine. The separate buffers play fine, and can even be merged by simply averaging them which also plays fine - but why can't collecting them into a shared buffer produce the same quality of audio? The produced sound is quite far off from the original, and I've tried different sampling rates up to 16 kHz etc. Does anyone have any idea on what to do here?</p>
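<p>Since averaging the two buffers plays fine, the likely problem with a single <code>shared</code> buffer is that chunks from both tracks get concatenated in arrival order, roughly doubling the playback length instead of overlaying the voices. A sketch of sample-wise mixing of two decoded 16-bit PCM buffers (NumPy used here instead of <code>audioop</code>, which is deprecated):</p>

```python
import numpy as np

def mix_tracks(inbound: bytes, outbound: bytes) -> bytes:
    """Mix two 16-bit PCM mono buffers sample-by-sample."""
    a = np.frombuffer(inbound, dtype=np.int16).astype(np.int32)
    b = np.frombuffer(outbound, dtype=np.int16).astype(np.int32)
    mixed = np.zeros(max(len(a), len(b)), dtype=np.int32)
    mixed[: len(a)] += a
    mixed[: len(b)] += b
    # Clip back into int16 range to avoid wrap-around distortion
    return np.clip(mixed, -32768, 32767).astype(np.int16).tobytes()

left = np.array([1000, -2000, 3000], dtype=np.int16).tobytes()
right = np.array([500, 500], dtype=np.int16).tobytes()
out = mix_tracks(left, right)
```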
<p>It's strange because Twilio's <a href="https://www.twilio.com/en-us/blog/transcribe-phone-calls-text-real-time-twilio-vosk#Streaming-and-transcribing-the-audio-from-the-call" rel="nofollow noreferrer">own documentation</a> says you can do this no problem.</p>
<pre><code>import pickle
import wave
with open("all_bytes.pkl", "rb") as f:
loaded_audio_bytes = pickle.load(f)
nchannels = 1
sampwidth = 2
framerate = 8000
nframes = len(loaded_audio_bytes) // (nchannels * sampwidth)
with wave.open("wav.wav", 'wb') as wf:
wf.setnchannels(nchannels)
wf.setsampwidth(sampwidth)
wf.setframerate(framerate)
wf.setnframes(nframes)
wf.writeframes(loaded_audio_bytes)
</code></pre>
<p>This <a href="https://stackoverflow.com/a/61000169/6067528">answer</a> suggests just using the outbound only, but I need both tracks here!</p>
|
<python><audio><twilio>
|
2024-02-21 16:42:48
| 2
| 1,313
|
Sam Comber
|
78,035,520
| 18,346,591
|
Draw grid lines behind text reportlab pdf
|
<p>I want to create a grid like this, which would sit behind some text or images. I have already created the text and images on the PDF, but I do not know how to draw this grid behind them.</p>
<p>Any pointer in the right direction would be very helpful; a small, complete solution would be even better.
<a href="https://i.sstatic.net/roEF6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/roEF6.jpg" alt="enter image description here" /></a></p>
|
<python><pdf><reportlab>
|
2024-02-21 16:19:30
| 1
| 662
|
Alexander Obidiegwu
|
78,035,378
| 23,260,297
|
Read multiple sections of data in file with pandas
|
<p>I have an excel file that has data in multiple table-like format (not formatted as an excel table though) and I need to read this data into a singular dataframe.</p>
<p>The spreadsheet looks something like this, with one table of data on top of the other:</p>
<pre><code>Commodities Cashflows
Heating Oil
Counterparty Ref# TradeDate Commodity Price MTM
------------ ---- --------- ---------- ----- ----
xxx DF9-1 23-Sep-19 Heating Oil 10.00 10,000
xxx DF9-2 23-Sep-19 Heating Oil 10.00 10,000
xxx DF9-3 23-Sep-19 Heating Oil 10.00 10,000
xxx DF9-4 23-Sep-19 Heating Oil 10.00 10,000
WTI-IPE
Counterparty Ref# TradeDate Commodity Price MTM
------------ ---- --------- ---------- ----- ----
xxx DF9-1 23-Sep-19 WTI-IPE 10.00 10,000
xxx DF9-1 23-Sep-19 WTI-IPE 10.00 10,000
xxx DF9-1 23-Sep-19 WTI-IPE 10.00 10,000
xxx DF9-1 23-Sep-19 WTI-IPE 10.00 10,000
</code></pre>
<p>The commodity name labeling each block changes all the time, so I cannot just search for those keywords and begin reading from that point. My first thought was to search for the first column of the table, which is always the same, like this:</p>
<pre><code>def find_start_of_file(file):
with open(file, 'r') as f:
for line_num, line in enumerate(f):
if line.startswith('Counterparty'):
return line_num
</code></pre>
<p>but that will always start at the first data set, and read unnecessary data afterwards. How could I parse through and read the rest of the data as well and put it in one dataframe?</p>
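<p>One sketch of a possible approach (the helper name is mine): read the whole sheet without headers, find every row whose first cell is <code>Counterparty</code>, slice out each block up to the next such row, and drop label/blank rows before concatenating:</p>

```python
import pandas as pd

COLUMNS = ["Counterparty", "Ref#", "TradeDate", "Commodity", "Price", "MTM"]

def parse_blocks(raw: pd.DataFrame) -> pd.DataFrame:
    """Stack every table that starts with a 'Counterparty' header row
    out of a header-less sheet into one DataFrame."""
    starts = raw.index[raw[0] == "Counterparty"].tolist()
    ends = starts[1:] + [len(raw)]
    frames = []
    for start, end in zip(starts, ends):
        block = raw.iloc[start + 2:end, :len(COLUMNS)].copy()  # +2 skips header and ---- rows
        block.columns = COLUMNS
        block = block[block["Ref#"].notna()]  # drops the commodity-label and blank rows
        frames.append(block)
    return pd.concat(frames, ignore_index=True)

# df = parse_blocks(pd.read_excel("cashflows.xlsx", header=None))  # file name is an assumption
```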
|
<python><pandas>
|
2024-02-21 16:00:16
| 1
| 2,185
|
iBeMeltin
|
78,035,371
| 850,781
|
Temporarily change logging level - safely
|
<p>According to <a href="https://stackoverflow.com/q/19617355/850781">Dynamically changing log level without restarting the application</a> I can temporarily change logging level:</p>
<pre><code>logger_level = my_logger.level
my_logger.setLevel(logging.DEBUG)
a = do_something_with_higher_logging1()
b,c = do_something_with_higher_logging2()
d = do_something_with_higher_logging3()
my_logger.setLevel(logger_level)
</code></pre>
<p>The problem is that if <code>do_something_with_higher_logging[123]</code> raise an exception (which is caught outside this, so my program is not terminated), the level of <code>my_logger</code> is not reset back and stays at <code>DEBUG</code>.</p>
<p>I can do</p>
<pre><code>def call_with_loglevel(logger, level, f, **kwargs):
"Call f with logger at a different level"
saved_logger_level = logger.level
logger.setLevel(level)
try:
return f(**kwargs)
finally:
logger.setLevel(saved_logger_level)
</code></pre>
<p>but this requires me to pull the calls to <code>do_something_with_higher_logging[123]</code> out into a separate function <code>f</code>...</p>
<p>What I want is something like</p>
<pre><code>with my_logger.setLevel(logging.DEBUG):
a = do_something_with_higher_logging1()
b,c = do_something_with_higher_logging2()
d = do_something_with_higher_logging3()
</code></pre>
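<p>One way to get exactly that <code>with</code>-statement is a small context manager (a sketch; the name <code>log_level</code> is mine). The <code>finally</code> guarantees the level is restored even when one of the calls raises:</p>

```python
import logging
from contextlib import contextmanager

@contextmanager
def log_level(logger, level):
    """Temporarily set `logger` to `level`; always restore on exit."""
    saved = logger.level
    logger.setLevel(level)
    try:
        yield logger
    finally:
        logger.setLevel(saved)

# with log_level(my_logger, logging.DEBUG):
#     a = do_something_with_higher_logging1()
```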
|
<python><logging><error-handling><python-logging>
|
2024-02-21 15:59:12
| 1
| 60,468
|
sds
|
78,035,329
| 1,714,692
|
Is it possible in a pandas dataframe to have some multiindexed columns and some singleindexed columns?
|
<p>In pandas I would like to have a dataframe in which some columns have a MultiIndex and some don't.</p>
<p>Visually I would like something like this:</p>
<pre><code> | c | |
|--------| d |
| a | b | |
================|
| 1 | 4 | 0 |
| 2 | 5 | 1 |
| 3 | 6 | 2 |
</code></pre>
<p>In pandas I can do something like this:</p>
<pre><code>df = pd.DataFrame({'a':[1,2,3],'b':[4,5,6], 'd':[0,1,2]})
columns=[('c','a'),('c','b'), 'd']
df.columns=pd.MultiIndex.from_tuples(columns)
</code></pre>
<p>and the output would be:</p>
<pre><code> c d
a b NaN
0 1 4 0
1 2 5 1
2 3 6 2
</code></pre>
<p>However, when accessing the <code>d</code> column with <code>df['d']</code>, I get a pandas DataFrame, not a pandas Series. The problem is clearly that pandas applied the column MultiIndex to every column. So my question is: is there a way to apply column multiindexing only to certain columns and leave the others as they are?</p>
<p>In other words, I would like the result of <code>df['d']</code> to be a Series as in a normal dataframe, the result of <code>df['c']</code> a DataFrame as with a column MultiIndex, and the result of <code>df['c']['a']</code> a pandas Series. Is this possible?</p>
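<p>For context, one workaround I'm aware of (a sketch, not necessarily the only option) is to give <em>every</em> column two levels and use an empty string as the second level for <code>d</code>; the full tuple then addresses a Series:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'd': [0, 1, 2]})
# Every column gets two levels; '' serves as the second level for 'd'
df.columns = pd.MultiIndex.from_tuples([('c', 'a'), ('c', 'b'), ('d', '')])

s = df[('d', '')]   # a Series: address the column by its full tuple
sub = df['c']       # a DataFrame with columns 'a' and 'b'
col = df['c']['a']  # a Series
```

<p>Note that <code>df['d']</code> still returns a one-column DataFrame; as far as I know pandas cannot mix index depths, so <code>df[('d', '')]</code> or <code>df['d'].squeeze()</code> is the closest you can get.</p>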
|
<python><pandas><dataframe><multi-index>
|
2024-02-21 15:53:41
| 2
| 9,606
|
roschach
|
78,035,296
| 7,008,416
|
How to check with mypy that types are *not* compatible
|
<p>Imagine I am writing a little Python library with one (nonsense) function:</p>
<pre><code>def takes_a_str(x: str) -> str:
if x.startswith("."):
raise RuntimeError("Must not start with '.'")
return x + ";"
</code></pre>
<p>For runtime tests of the functionality, I can check it behaves as expected under both correct conditions (e.g. <code>assert takes_a_str('x')=='x;'</code>) and also error conditions (e.g. <code>with pytest.raises(RuntimeError): takes_a_str('.')</code>).</p>
<p>If I want to check that I have not made a mistake with the type hints, I can also perform positive tests: I can create a little test function in a separate file and run mypy or pyright to see that there are no errors:</p>
<pre><code>def check_types() -> None:
x: str = takes_a_str("")
</code></pre>
<p>But I also want to make sure my type hints are not too loose, by checking that some negative cases fail as they ought to:</p>
<pre><code>def should_fail_type_checking() -> None:
x: dict = takes_a_str("")
takes_a_str(2)
</code></pre>
<p>I can run mypy on this and observe it has errors where I expected, but this is not an automated solution. For example, if I have 20 cases like this, I cannot instantly see that they have all failed, and also may not notice if other errors are nestled amongst them.</p>
<p>Is there a way to ask the type checker to pass, and ONLY pass, where a type conversion does not match? A sort of analogue of <code>pytest.raises()</code> for type checking?</p>
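<p>One technique I have seen for this (a sketch, not a full framework) is to mark each expected failure with a narrow <code>type: ignore[code]</code> and run mypy with <code>--warn-unused-ignores</code>: if a line stops producing its error because the hints became too loose, the now-unused ignore is itself reported, so the negative test fails loudly:</p>

```python
# negative_checks.py -- run with:  mypy --warn-unused-ignores negative_checks.py
def takes_a_str(x: str) -> str:
    if x.startswith("."):
        raise RuntimeError("Must not start with '.'")
    return x + ";"

def should_fail_type_checking() -> None:
    # Each line below is EXPECTED to be a type error; the ignore silences it,
    # and --warn-unused-ignores flags the ignore if the error ever vanishes.
    x: dict = takes_a_str("")  # type: ignore[assignment]
    takes_a_str(2)  # type: ignore[arg-type]
```

<p>This stays automated: a clean mypy run means every negative case still fails as intended. (Pyright has a similar knob in <code>reportUnnecessaryTypeIgnoreComment</code>.)</p>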
|
<python><mypy><python-typing><pyright>
|
2024-02-21 15:50:25
| 2
| 10,152
|
Arthur Tacca
|
78,035,030
| 8,760,028
|
How to convert a string to an array of objects
|
<p>I have a string which contains values in some format (it is the output of an SQL query). The string format is like this: <code>'enable id \n =============\nf avc-qwqwq\nt abd-rrtrtr\n f rec-yyuyu \n'</code></p>
<p>The first two values are the column names from the SQL output, and the rows are the <code>\n</code>-separated lines. I have to extract these values and turn them into objects. I am able to get the values into a list, but I can't figure out how to build the objects from it.</p>
<p>My code looks like:</p>
<pre><code>result = ('enable id \n =============\nf avc-qwqwq\nt abd-rrtrtr\n f rec-yyuyu \n')
n=2
r = result.split('\n')
newlist = r[n:]
print('r', newlist)  # Output: r ['f avc-qwqwq', 't abd-rrtrtr', ' f rec-yyuyu ', '']
</code></pre>
<p>Now, how to make an object which looks like:</p>
<pre><code>[{
'enable':'f',
'org':'avc-qwqwq'
},
{
'enable':'f',
'org':'abd-rrtrtr'
},
{
'enable':'f',
'org':'rec-yyuyu'
}]
</code></pre>
<p>I need this to work for both Python 2 and Python 3 environments.</p>
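<p>A sketch that should work unchanged on both Python 2 and Python 3 (note that the second row of the sample data starts with <code>t</code>, so I keep the actual values rather than the all-<code>'f'</code> output shown above):</p>

```python
result = 'enable id \n =============\nf avc-qwqwq\nt abd-rrtrtr\n f rec-yyuyu \n'

records = []
for line in result.split('\n')[2:]:  # skip the header and ===== rows
    parts = line.split()             # split() also strips stray whitespace
    if len(parts) == 2:              # ignore blank trailing lines
        records.append({'enable': parts[0], 'org': parts[1]})

print(records)
```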
|
<python>
|
2024-02-21 15:00:51
| 2
| 1,435
|
pranami
|
78,034,888
| 1,232,660
|
Is string a valid type for annotation Sequence[Sequence[str]]?
|
<p>This is a typical example of a value that can be annotated by <code>Sequence[Sequence[str]]</code>:</p>
<pre class="lang-py prettyprint-override"><code>[
['This', 'is', 'the', 'first', 'tokenized', 'sentence', '.'],
['And', 'this', 'is', 'the', 'second', 'one', '.'],
]
</code></pre>
<p>I have learned that <a href="https://stackoverflow.com/questions/44912374/python-type-annotation-for-sequences-of-strings-but-not-for-strings">string is a valid type for annotation <code>Sequence[str]</code></a>, since, loosely speaking, <em>strings are sequences of strings</em> in the Python world. <strong>Does that work recursively?</strong> Is string also a valid type for annotation <code>Sequence[Sequence[str]]</code> etc.?</p>
<p>I'd say <em>yes</em>, following the Python logic, but I'd say <em>no</em>, following the common sense. And I don't know how to properly test it.</p>
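<p>One way to test it is to feed a plain string to an annotated function in a scratch file and run a checker over it (a sketch; the verdict can depend on your checker's version):</p>

```python
from typing import Sequence

def f(x: Sequence[Sequence[str]]) -> None: ...

# Run `mypy scratch.py` (or pyright) and see whether these are accepted:
f("hello")       # a str is a Sequence[str], and each element is a str...
f(["ab", "cd"])  # ...which is again a Sequence[str], so it nests freely
```

<p>In my experience mypy accepts both calls, so the "strings are sequences of strings" logic does apply recursively, common sense notwithstanding.</p>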
|
<python><string><python-typing>
|
2024-02-21 14:42:31
| 2
| 3,558
|
Jeyekomon
|
78,034,772
| 4,688,190
|
Seleniumbase: Intercept HTTP requests
|
<p>With Selenium Wire, you can intercept your requests like this:</p>
<pre><code>from seleniumwire import webdriver
def intercept_response(request, response):
print(request.headers)
driver = webdriver.Chrome()
driver.response_interceptor = intercept_response
</code></pre>
<p>This prints a continuous stream of data for every request you make.</p>
<p>I attempted to do the same thing with SeleniumBase, but it doesn't work. The following only prints a single GET request, but the LinkedIn page I am visiting should yield multiple GET requests.</p>
<pre><code>from seleniumbase import Driver
driver = Driver(browser="chrome", pls="eager", agent=agent, proxy=proxy, headed=True, wire=True)
driver.get("https://www.linkedin.com")
for request in driver.requests:
print(request.url)
</code></pre>
|
<python><selenium-chromedriver><seleniumbase>
|
2024-02-21 14:26:33
| 1
| 678
|
Ned Hulton
|
78,034,721
| 2,249,312
|
Converting excel bloomberg function to python
|
<p>I'm trying to convert this Excel formula into Python:</p>
<pre><code>=@BDH("MZBZ4C 75.00 Comdty","Trade,Settle","2024-02-20 09:00:00","","Dir=V","IntrRw=true","Headers=Y","Dts=S","QRM=S","cols=4;rows=4")
</code></pre>
<p>I tried with the blp library using this:</p>
<pre><code>from blp import blp
bquery = blp.BlpQuery().start()
df = bquery.bdh("MZBZ4C 75.00 Comdty", ["Trade", "Settle"],
start_date="20240219",
end_date="",
options={"adjustmentSplit": True})
</code></pre>
<p>But I get a field error saying the field is invalid. That library worked for my other types of data pulls.</p>
<p>Any idea how I could make this work please?</p>
|
<python><bloomberg>
|
2024-02-21 14:18:51
| 1
| 1,816
|
nik
|
78,034,551
| 14,897,644
|
Why my LSTM Model forecasts almost straight line on validation set?
|
<p>I trained a Bi-LSTM model on stock prices.
For this model, I backtested with two approaches:</p>
<ol>
<li>Applying the predict function to the whole validation set at once and then comparing predictions to the real data.</li>
<li>Predicting the validation set one day at a time, feeding each day's prediction back in to predict the next day, and so on.</li>
</ol>
<p>With method 1), the results look pretty good, with an RMSE near 0.25, which suggests the model performs well on the test set:</p>
<p><a href="https://i.sstatic.net/ypbvf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ypbvf.png" alt="enter image description here" /></a></p>
<p>The code used:</p>
<pre><code>pas = 20
train_len = 5586
test_data = df_scaled[train_len - pas:]
print('len(test_data):', len(test_data))

# Create the data sets x_test and y_test
x_test = []
y_test = df[train_len:, :]
for i in range(pas, len(test_data)):
    x_test.append(test_data[i - pas:i, 0])

# Convert the data to a numpy array
x_test = np.array(x_test)
# Reshape the data
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))

# Get the model's predicted price values
fichier_modele = f"{symbols}.h5"
model = load_model(fichier_modele)
predictions = model.predict(x_test)
scaler2 = dictio_scalers[symbols]
predictions = scaler2.inverse_transform(predictions)
</code></pre>
<p>But with method 2), the model predicts an almost straight line:
<a href="https://i.sstatic.net/yDWbK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yDWbK.png" alt="enter image description here" /></a></p>
<p>Code used:</p>
<pre><code>Pred_Array_Global = df[int(train_len) - pas:int(train_len)]
# Get the model's predicted price values
fichier_modele = f"{symbols}.h5"
model = load_model(fichier_modele)
scaler2 = dictio_scalers[symbols]
Pred_Array_Global = scaler2.fit_transform(Pred_Array_Global)

for i in range(0, len(test['Close'])):
    Pred_Array_Global = np.array(Pred_Array_Global)
    Pred_Array = Pred_Array_Global[i:i + pas]
    # Reshape the data
    Pred_Array_Input = np.reshape(Pred_Array, (1, pas, 1))
    predictions = model.predict(Pred_Array_Input, verbose=0)
    Pred_Array_Global = np.append(Pred_Array_Global, predictions)

Pred_Array_Global = Pred_Array_Global.reshape(-1, 1)
Pred_Array_Global = scaler2.inverse_transform(Pred_Array_Global)
</code></pre>
<p>As the model performs really well on the whole test set, I expected a slight drop in performance when predicting day by day, but not a straight line!</p>
<p>Did I make a mistake in my code?</p>
<p>NB: Here is the code I use to build the model (epochs=2000, batch_size=256):</p>
<pre><code>model = Sequential()
model.add(Bidirectional(LSTM(units=128, input_shape=(20, 1))))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,verbose=0)
fichier_modele = f"{symbols}.h5"
model.save(fichier_modele)
</code></pre>
|
<python><tensorflow><keras><deep-learning><time-series>
|
2024-02-21 13:51:48
| 1
| 417
|
Rgrvkfer
|
78,034,483
| 539,490
|
Python type of generic Enum class of str
|
<p>Is there a generic <code>type</code> of enum that is possible for the following function <code>format_enum_as_list_with_quotes</code>:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum, EnumMeta
class Demo(str, Enum):
one = "one"
two = "two"
three = "three"
def format_enum_as_list_with_quotes (items: EnumMeta) -> str:
return ", ".join([f"\"{item.value}\"" for item in items])
# ^^^^^ Cannot access member "value" for type "EnumMeta"
# Member "value" is unknown
def format_enum_as_list_with_quotes_2 (items: Enum) -> str:
return ", ".join([f"\"{item.value}\"" for item in items])
# ^^^^^ "Enum" is not iterable
# "__iter__" method not defined
print(type(Demo)) # => EnumMeta
assert format_enum_as_list_with_quotes(Demo) == '"one", "two", "three"'
</code></pre>
<p>* edit *</p>
<p>The code executes fine I just want to get rid of the type errors.</p>
<p>Type errors show from pylance v2024.2.2 in Visual Studio Code with Python 3.10.11</p>
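<p>One annotation that should silence both errors (a sketch; checker behavior can vary by version) is <code>type[Enum]</code>, i.e. "an <code>Enum</code> subclass itself". Iterating the class yields its members, and checkers know members have a <code>.value</code>:</p>

```python
from enum import Enum

class Demo(str, Enum):
    one = "one"
    two = "two"
    three = "three"

def format_enum_as_list_with_quotes(items: type[Enum]) -> str:
    # type[Enum] (Type[Enum] before Python 3.9) accepts the class object
    # itself, and iterating it yields Enum members with a .value attribute.
    return ", ".join('"{0}"'.format(item.value) for item in items)
```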
|
<python><enums><python-typing>
|
2024-02-21 13:42:07
| 1
| 29,009
|
AJP
|
78,034,468
| 7,002,816
|
Decoupling of models from flask-sqlalchemy
|
<p>Usually I develop web apps with ASP.NET Core, but recently I had to create a Python Flask app using the SQLAlchemy ORM. At first everything worked quite well. The trouble started when I tried to clean up the design and decouple the packages so they would be testable in unit tests.</p>
<p>I ran into the problem that a model class needs to derive from <code>db.Model</code>, where <code>db</code> is already an instance of <code>SQLAlchemy</code> and is also tied to the Flask app object. So this all basically becomes a big ball of mud.</p>
<pre><code>class User(db.Model):
pass
</code></pre>
<p>As a C# developer I find that to be more than strange. How can I decouple data models from flask and sqlalchemy to be testable independently? Is there e.g. a way to programmatically register the model classes for SQLAlchemy?</p>
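<p>One decoupling pattern I'm aware of (a sketch; names are mine): define the models on a plain SQLAlchemy declarative base with no Flask involved, and bind that base to Flask-SQLAlchemy only at the edge via its <code>model_class</code> option (supported in recent Flask-SQLAlchemy; check your version's docs):</p>

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

# Plain SQLAlchemy -- importable and testable without any Flask app.
Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# In the Flask layer only:
#   from flask_sqlalchemy import SQLAlchemy
#   db = SQLAlchemy(model_class=Base)
#   db.init_app(app)
```

<p>Unit tests can then exercise the models against an in-memory SQLite engine with no Flask app object anywhere.</p>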
|
<python><flask><sqlalchemy>
|
2024-02-21 13:39:46
| 1
| 342
|
LosWochos
|
78,034,110
| 12,696,223
|
Are Python C extensions faster than Numba JIT?
|
<p>I am testing the performance of the Numba JIT vs Python C extensions. It seems the C extension is about 3-4 times faster than the Numba equivalent for a for-loop-based function to calculate the sum of all the elements in a 2d array.</p>
<h3>Update:</h3>
<p>Based on valuable comments, I realized a mistake: I should have called (and thereby compiled) the Numba JIT function once before the timing loop. I provide the results of the tests after the fix, along with extra cases. <strong>But</strong> the question remains: when should each method be preferred, and why?</p>
<p>Here's the result (time_s, value):</p>
<pre><code># 200 tests mean (including JIT compile inside the loop)
Pure Python: (0.09232537984848023, 29693825)
Numba: (0.003188209533691406, 29693825)
C Extension: (0.000905141830444336, 29693825.0)
# JIT once called before the test loop (to avoid compile time)
Normal: (0.0948486328125, 29685065)
Numba: (0.00031280517578125, 29685065)
C Extension: (0.0025129318237304688, 29685065.0)
# JIT no warm-up also no test loop (only calling once)
Normal: (0.10458517074584961, 29715115)
Numba: (0.314251184463501, 29715115)
C Extension: (0.0025091171264648438, 29715115.0)
</code></pre>
<ul>
<li>Is my implementation correct?</li>
<li>Is there a reason why the C extension is faster?</li>
<li>Should I probably always use C extensions if I want the best performance? (non-vectorized functions)</li>
</ul>
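<p>For what it's worth, the compile-time pitfall from the update can be avoided with a warm-up-aware harness like this sketch (not the exact <code>test</code> helper above):</p>

```python
import time
import numpy as np

def bench(fn, arr, warmup=1, reps=50):
    """Time fn(arr), running `warmup` untimed calls first so one-off
    costs (e.g. Numba's JIT compilation) don't skew the mean."""
    for _ in range(warmup):
        fn(arr)
    times = []
    val = None
    for _ in range(reps):
        t0 = time.perf_counter()
        val = fn(arr)
        times.append(time.perf_counter() - t0)
    return float(np.mean(times)), val
```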
<p><code>main.py</code></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import numba
import time
import loop_test # ext
def test(fn, *args):
res = []
val = None
for _ in range(100):
start = time.time()
val = fn(*args)
res.append(time.time() - start)
return np.mean(res), val
sh = (30_000, 20)
col_names = [f"col_{i}" for i in range(sh[1])]
df = pd.DataFrame(np.random.randint(0, 100, size=sh), columns=col_names)
arr = df.to_numpy()
def sum_columns(arr):
_sum = 0
for i in range(arr.shape[0]):
for j in range(arr.shape[1]):
_sum += arr[i, j]
return _sum
@numba.njit
def sum_columns_numba(arr):
_sum = 0
for i in range(arr.shape[0]):
for j in range(arr.shape[1]):
_sum += arr[i, j]
return _sum
print("Pure Python:", test(sum_columns, arr))
print("Numba:", test(sum_columns_numba, arr))
print("C Extension:", test(loop_test.loop_fn, arr))
</code></pre>
<p><code>ext.c</code></p>
<pre class="lang-c prettyprint-override"><code>#define PY_SSIZE_CLEAN
#include <Python.h>
#include <numpy/arrayobject.h>
static PyObject *loop_fn(PyObject *module, PyObject *args)
{
PyObject *arr;
if (!PyArg_ParseTuple(args, "O!", &PyArray_Type, &arr))
return NULL;
npy_intp *dims = PyArray_DIMS(arr);
npy_intp rows = dims[0];
npy_intp cols = dims[1];
double sum = 0;
PyArrayObject *arr_new = (PyArrayObject *)PyArray_FROM_OTF(arr, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY);
double *data = (double *)PyArray_DATA(arr_new);
npy_intp i, j;
for (i = 0; i < rows; i++)
for (j = 0; j < cols; j++)
sum += data[i * cols + j];
Py_DECREF(arr_new);
return Py_BuildValue("d", sum);
};
static PyMethodDef Methods[] = {
{
.ml_name = "loop_fn",
.ml_meth = loop_fn,
.ml_flags = METH_VARARGS,
.ml_doc = "Returns the sum using for loop, but in C.",
},
{NULL, NULL, 0, NULL},
};
static struct PyModuleDef Module = {
PyModuleDef_HEAD_INIT,
"loop_test",
"A benchmark module test",
-1,
Methods};
PyMODINIT_FUNC PyInit_loop_test(void)
{
import_array();
return PyModule_Create(&Module);
}
</code></pre>
<p><code>setup.py</code></p>
<pre class="lang-py prettyprint-override"><code>from distutils.core import setup, Extension
import numpy as np
module = Extension(
"loop_test",
sources=["ext.c"],
include_dirs=[
np.get_include(),
],
)
setup(
name="loop_test",
version="1.0",
description="This is a test package",
ext_modules=[module],
)
</code></pre>
<pre><code>python3 setup.py install
</code></pre>
|
<python><c><pandas><numpy><numba>
|
2024-02-21 12:43:41
| 2
| 990
|
Momo
|
78,034,052
| 2,516,892
|
"UNKNOWN" project name and version number for my own pip-package
|
<p>I successfully built my first Python package using a <code>pyproject.toml</code> with <code>setuptools</code>. I am able to install it and use it in Python; however, the name of the project is "<code>UNKNOWN</code>" and the version <code>0.0.0</code>, despite both being set differently in the configuration.</p>
<p>When I reinstall the <code>*.whl</code> and <code>*.tar.gz</code> files (some small changes were made from version 1.0 to version 1.1), I get the following output in the console:</p>
<pre><code>> pip3 install myproject_qohelet-1.1.tar.gz
Defaulting to user installation because normal site-packages is not writeable
Processing ./myproject_qohelet-1.1.tar.gz
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: UNKNOWN
Building wheel for UNKNOWN (pyproject.toml) ... done
Created wheel for UNKNOWN: filename=UNKNOWN-0.0.0-py3-none-any.whl size=961 sha256=4e10d08c20a9cd5289d673d0627b53012bd917c1372fe681dd6aaa3f4ab64cb0
Stored in directory: /home/qohelet/.cache/pip/wheels/a8/0a/3d/225cba37f15f2ed307317d99dac03d622d5126e236877dc072
Successfully built UNKNOWN
Installing collected packages: UNKNOWN
Attempting uninstall: UNKNOWN
Found existing installation: UNKNOWN 0.0.0
Uninstalling UNKNOWN-0.0.0:
Successfully uninstalled UNKNOWN-0.0.0
Successfully installed UNKNOWN-0.0.0
</code></pre>
<p>Why does it call the package <code>UNKNOWN-0.0.0</code>? Did I forget something in my configuration?
Here is the <code>pyproject.toml</code>:</p>
<pre><code>title = "myProject"
[owner]
name = "Qohelet"
[build-system]
requires = ["setuptools", "setuptools-scm"]
build-backend = "setuptools.build_meta"
[project]
name = "myproject_qohelet"
version = "1.1"
authors = [
{ name="Qohelet", email="qohelet@example.com" },
]
license = { text = "GPL" }
description = "This is a project that can be installed with pip"
readme = "README.md"
requires-python = ">=3.7"
classifiers = [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
]
[project.urls]
"Homepage" = "https://gitlab.com/qohelet/myProject"
"Bug Tracker" = "https://gitlab.com/qohelet/myProject/-/issues"
</code></pre>
<p>The project is built with the following commands:</p>
<pre><code>python3 -m pip install --upgrade build
python3 -m build
</code></pre>
<p>Edit (1):
This behavior only shows with the <code>.tar.gz</code>; installing the <code>wheel</code> behaves correctly:</p>
<pre><code>> pip3 install myproject_qohelet-1.1-py3-none-any.whl --force-reinstall
Defaulting to user installation because normal site-packages is not writeable
Processing ./myproject_qohelet-1.1-py3-none-any.whl
Installing collected packages: myproject-qohelet
Attempting uninstall: myproject-qohelet
Found existing installation: myproject_qohelet 1.1
Uninstalling myproject_qohelet-1.1:
Successfully uninstalled myproject_qohelet-1.1
Successfully installed myproject-qohelet-1.1
</code></pre>
|
<python><setuptools><python-packaging>
|
2024-02-21 12:33:29
| 1
| 1,661
|
Qohelet
|
78,034,030
| 16,425,408
|
How to keep the Python script running continuously on a GCP VM
|
<p>Recently, I developed a Python script that runs on a GCP VM. This script takes user input and generates the corresponding output based on that input.</p>
<p>When I attempted to run it on a GCP VM, it executed without any problems and performed perfectly. However, after a couple of hours, the script goes down completely.</p>
<p>Additionally, I used <a href="https://www.pythonanywhere.com/" rel="nofollow noreferrer">PythonAnywhere</a> to run the script, but it has resource limitations.</p>
<p>I generally run it like <code>python3 script.py</code>.</p>
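<p>For what it's worth, a script started from an interactive SSH session usually dies when the session ends (it receives <code>SIGHUP</code>). A minimal sketch for keeping it alive (paths and names are assumptions):</p>

```shell
# Detach from the terminal so the process survives logout:
nohup python3 ~/script.py > ~/script.log 2>&1 &

# More robust: run it as a systemd service with Restart=always, e.g.
#   /etc/systemd/system/myscript.service  ->  sudo systemctl enable --now myscript
```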
|
<python><google-cloud-platform><server><automation><google-compute-engine>
|
2024-02-21 12:30:36
| 1
| 838
|
Nani
|
78,033,906
| 5,561,649
|
Is there a way to let mypy check multiple import paths while looking for an object?
|
<p>I'm working on a project where we've created a module that wraps around a third-party module (containing several submodules). The wrapper module is designed to handle certain functions internally, but also call functions from the third-party module when necessary.</p>
<p>I'd like to find a way to help static analysis tools, especially <code>mypy</code>, understand this situation as well as possible.</p>
<p>A partial solution I have found is to add to the <code>__init__.py</code> of our wrapper something like this:</p>
<pre class="lang-py prettyprint-override"><code># wrapper_module.__init__
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from original_module import submodule_a
from original_module import submodule_b
# ...
from wrapper_module import submodule_a
from wrapper_module import submodule_b
# ...
# Actual code doing the imports dynamically
</code></pre>
<p>This helps in the sense that at least some imports are being correctly followed. However, the issue is that with this current naive solution, <code>mypy</code> only looks for them in the <code>original_module</code> (which makes sense, since nothing tells it how two imports with the same name should interact -- weird, though, that it takes the first one when in Python you'd expect the second one to replace it).</p>
<p>Interestingly, PyCharm (almost magically) understands this exactly how I'd want: if, from another module that imports <code>wrapper_module</code>, I use <code>wrapper_module.submodule_a.some_function()</code>, it looks for <code>some_function</code> in <code>wrapper_module</code> and, if it can't find it, it looks for it in <code>original_module</code> (so the "Go to implementation" function works perfectly, for instance).</p>
<p>Is there a way to make this work with <code>mypy</code>? Does <code>mypy</code> support following several import paths to look for dynamically imported objects (given the correct syntax to "help" it)?</p>
|
<python><mypy><python-typing>
|
2024-02-21 12:13:27
| 0
| 550
|
LoneCodeRanger
|