QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
78,756,815
| 11,117,255
|
WARNING - Connection pool is full, discarding connection: storage.googleapis.com
|
<p>I'm working on a Python script that downloads images from an Amazon S3 bucket and uploads them to Google Cloud Storage (GCS) using the google-cloud-storage library. The script processes a large dataset in chunks and uses concurrent.futures.ThreadPoolExecutor to parallelize the downloads and uploads.
However, after running for a while, I start seeing the following warning message:</p>
<p>2024-07-16 21:26:40,575 - WARNING - Connection pool is full, discarding connection: storage.googleapis.com</p>
<pre><code>def download_and_upload_image(s3_client, bucket_name, s3_key, target_bucket_name, target_key):
    # ... code to download image from S3 ...
    blob = bucket.blob(f"{target_key}")
    blob.upload_from_string(image_data)

def main():
    # ... code to set up S3 and GCS clients ...
    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as executor:
        futures = []
        for row in tqdm(all_rows, desc="Downloading and uploading images"):
            if row is not None:
                s3_key, gcs_blob_name = row
                future = executor.submit(download_and_upload_image, s3_client, bucket_name, s3_key, gcs_bucket_name, gcs_blob_name)
                futures.append(future)
</code></pre>
<p>I suspect the issue is related to the number of concurrent connections to GCS, but I'm not sure how to resolve it.</p>
<p>I've tried increasing the max_workers parameter of the ThreadPoolExecutor, but that didn't seem to help. I also tried using a storage.Client instance directly instead of creating a new one for each upload, but the warning still occurs.</p>
<p>How can I fix this warning and ensure that the script can handle the large number of concurrent uploads to GCS without discarding connections? Is there a way to configure the connection pool size or reuse connections more efficiently?</p>
<p>Any guidance or suggestions would be greatly appreciated. Let me know if you need more details about my setup or code.</p>
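The warning comes from urllib3: each host gets a connection pool of default size 10, so with 50 worker threads the extra connections are created once and then discarded instead of being reused. A sketch of one common workaround, matching the pool size to the worker count; the `client._http` attribute mentioned in the comments is private API of google-cloud-storage and is an assumption here:

```python
import requests
from requests.adapters import HTTPAdapter

MAX_WORKERS = 50  # match ThreadPoolExecutor(max_workers=50)

# Mount an adapter whose pool is at least as large as the worker count,
# so connections are reused instead of discarded.
session = requests.Session()
adapter = HTTPAdapter(pool_connections=MAX_WORKERS, pool_maxsize=MAX_WORKERS)
session.mount("https://", adapter)

# Assumption: google-cloud-storage exposes its transport as the private
# requests.Session attribute `client._http`, so the same adapter can be
# mounted there:
# from google.cloud import storage
# client = storage.Client()
# client._http.mount("https://", adapter)
```

The warning itself is harmless (requests still succeed, just without pooling), so sizing the pool mainly removes the reconnect overhead.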
|
<python><google-cloud-platform><google-cloud-storage><connection><connection-pooling>
|
2024-07-16 21:30:25
| 0
| 2,759
|
Cauder
|
78,756,777
| 897,272
|
How to call an async method from a sync method within a larger async method
|
<p>As said I want to call an async method from a sync method, the problem is the sync method is being called by another async method so roughly something like this...</p>
<pre><code>async def parent_async():
    sync_method()

def sync_method():
    # call utility_async_method here
</code></pre>
<p>I can't just call asyncio.run since I'm already in an event loop in this case. The obvious answer would be to just turn my sync_method into an async method, but I don't want to do that for two key reasons.</p>
<ol>
<li>the sync_method I'm writing is overriding a method in a parent class. The parent method is called all over the place so trying to make it async would require a major refactoring effort</li>
<li>sync_method is called by third party tools. I can't make those third party tools asynchronous.</li>
</ol>
<p>I'm okay with sync_method blocking briefly while the async method is called, I just need to figure out how to call it.</p>
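One hedged pattern for this situation: since the calling thread already runs an event loop, `asyncio.run()` cannot be used directly, but the coroutine can be run to completion on a fresh loop in a worker thread while `sync_method` blocks on the result. The names below are illustrative stand-ins for the question's methods:

```python
import asyncio
import concurrent.futures

async def utility_async_method():
    await asyncio.sleep(0)  # stand-in for real async work
    return 42

def sync_method():
    # asyncio.run() would raise here if a loop is already running in this
    # thread, so run the coroutine on a new loop in a worker thread instead.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, utility_async_method()).result()

async def parent_async():
    return sync_method()  # blocks briefly, as the question allows

print(asyncio.run(parent_async()))  # 42
```

If the utility coroutine must run on the original loop (for example, it touches loop-bound objects), `asyncio.run_coroutine_threadsafe` against that loop is the alternative to spawning a fresh one.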
|
<python><asynchronous>
|
2024-07-16 21:18:10
| 1
| 6,521
|
dsollen
|
78,756,700
| 13,381,632
|
Export Excel Spreadsheet From Website - Python
|
<p>I am trying to find a way to export a Microsoft Excel spreadsheet (.xlsx) from a website and store it locally (on my desktop) or in a database. I am able to parse a URL with tabular content and display/write it to file, but I need to determine a way to retrieve spreadsheet content that requires clicking a button to download the data. More importantly, I need to be able to retrieve spreadsheet data embedded within multiple separate pages as displayed on a webpage. Below is a sample script that displays tabular data from a website.</p>
<pre><code>import urllib3
from bs4 import BeautifulSoup
url = 'https://www.runnersworld.com/races-places/a20823734/these-are-the-worlds-fastest-marathoners-and-marathon-courses/'
http = urllib3.PoolManager()
response = http.request('GET', url)
soup = BeautifulSoup(response.data.decode('utf-8'))
print(soup)
</code></pre>
<p>I have inspected the Javascript tool that is the equivalent of manually exporting data on a website through a button click, but I need to find a way to automate this via a Python script...any assistance is most appreciated.</p>
|
<python><html><excel><automation><html-table>
|
2024-07-16 20:50:53
| 1
| 349
|
mdl518
|
78,756,646
| 2,153,235
|
Can't access local dataframe from dictionary comprehension expression
|
<p>I am using dictionary comprehension to compare each dataframe from within a dictionary to its corresponding dataframe in locals(). For some reason, I am getting a key error when accessing the locals() dataframe from within the dictionary comprehension expression.</p>
<p>Here is the setup:</p>
<pre><code>import numpy as np
import pandas as pd
df=pd.DataFrame(np.random.randint(3,size=[2,2]),columns=['A','B'])
# A B
# 0 1 2
# 1 2 0
# Create dictionary of dataframe(s) (only 1 here)
OldDFs={'df':df}
</code></pre>
<p>Here is the failure when my dictionary comprehension expression refers to a <code>locals()</code> dataframe:</p>
<pre><code>#####################################################
# This is the "key test expression"
#####################################################
{ ky:locals()[ky] for (ky,OldDF1) in OldDFs.items() }
# Console message: KeyError 'df'
</code></pre>
<p>The key <code>ky</code> is of the correct type:</p>
<pre><code>{ ky:type(ky) for (ky,OldDF1) in OldDFs.items() } # {'df': str}
</code></pre>
<p>It works <em>outside</em> of dictionary comprehension:</p>
<pre><code>locals()['df'] # Prints "df"
</code></pre>
<p>The dictionary comprehension expression works if I refer to a dictionary other than <code>locals()</code>:</p>
<pre><code>{ ky:OldDFs[ky] for (ky,OldDF1) in OldDFs.items() } # Prints "df"
</code></pre>
<p>What is wrong with the "key test expression" above?</p>
<p><strong>Background:</strong> In the actual situation, OldDFs consists of many dataframes read from a pickle file. The actual test I want to perform is</p>
<pre><code>{ ky:OldDF1.equals(locals()[ky])
for (ky,OldDF1) in OldDFs.items() }
</code></pre>
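For context, a sketch of why the KeyError appears (assuming CPython 3): a comprehension executes in its own function-like scope, so `locals()` evaluated inside it only contains the loop variables. Capturing the enclosing namespace once, before the comprehension, restores the intended lookup:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(3, size=[2, 2]), columns=['A', 'B'])
OldDFs = {'df': df}

# locals() *inside* the comprehension sees the comprehension's own scope
# (loop variables only, no 'df'), which is what raises the KeyError.
outer = locals()  # captured in the enclosing scope instead
result = {ky: OldDF1.equals(outer[ky]) for (ky, OldDF1) in OldDFs.items()}
print(result)  # {'df': True}
```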
|
<python><pandas><dictionary>
|
2024-07-16 20:35:53
| 1
| 1,265
|
user2153235
|
78,756,412
| 1,349,428
|
Panel programmatically changing Select item
|
<p>I am using Panel Select as the following:</p>
<pre><code>selection = pn.widgets.Select(name="My Selection", groups={
    'group 1': {'item1': {'data1': 1, 'data2': 2}, 'item2': {'data1': 3, 'data2': 4}},
    'group 2': {'item3': {'data1': 1, 'data2': 2}}})
</code></pre>
<p>I'd like to programmatically change the selected value (i.e. from <code>'item1'</code> to <code>'item3'</code>)</p>
<p>What would be the best way to do it?</p>
<p>In addition, I'd like to attach this Select widget to a URL param, so I'd be able to preselect the item via a URL argument (i.e. <code>http://localhost:5006/app?selection=item3</code>).</p>
|
<python><holoviz-panel>
|
2024-07-16 19:26:09
| 1
| 2,048
|
Meir Tseitlin
|
78,756,377
| 453,851
|
How to configure mosquitto to retain sessions? Why is it discarding my sessions out of the box?
|
<p>I'm trying to use paho-mqtt (python client) to persuade mosquitto to retain a session between client connects, but every time I connect (with the same user, password, client id) the session is not retained.</p>
<p>My mosquitto configuration is trivial:</p>
<pre><code>listener 1883
password_file /mosquitto/config/passwords
</code></pre>
<p>Besides creating a password file with the username and password for that one user, there is no other config except for what may ship with the <a href="https://hub.docker.com/_/eclipse-mosquitto" rel="nofollow noreferrer">docker image</a>.</p>
<hr />
<p>I've traced through the MQTT session at the byte level and read the values based on the <a href="https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html" rel="nofollow noreferrer">official MQTT5 standard</a>.</p>
<p>I've confirmed that:</p>
<ul>
<li>The client is connecting with protocol MQTT 5</li>
<li>The client CONNECT message includes the correct:
<ul>
<li>username</li>
<li>password</li>
<li>client id <code>abcde</code></li>
<li>clean session flag (<code>0</code>)</li>
<li>keep alive value (<code>120</code>)</li>
</ul>
</li>
<li>The server CONNACK message has:
<ul>
<li>No assigned client ID (wasn't expecting one)</li>
<li>Session present 0</li>
</ul>
</li>
<li>I can use the python/paho connection to talk to other clients through Mosquitto</li>
<li>I've also tried doing this and killing the connection without letting any DISCONNECT packet be sent.</li>
</ul>
<hr />
<p>I don't think this is relevant, but here's how I connect to Mosquitto:</p>
<pre class="lang-py prettyprint-override"><code>import paho.mqtt.client as mqtt
mqttc = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, protocol=mqtt.MQTTv5, client_id="abcdef")
mqttc.on_connect = on_connect
mqttc.on_message = on_message
mqttc.username_pw_set("guest", "guest")
mqttc.connect("localhost", 1884, clean_start=False, keepalive=120)
</code></pre>
<hr />
<p>I've repeatedly connected with the same client Id and created a subscription each time.</p>
<p>Why would Mosquitto not retain sessions out of the box? Am I missing a configuration option?</p>
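A hedged hypothesis, not confirmed from the post: under MQTT v5, `clean_start=False` alone is not enough, because the session expiry interval defaults to 0, so the broker is allowed to drop the session as soon as the client disconnects; paho can send a non-zero `SessionExpiryInterval` in the CONNECT properties (`paho.mqtt.properties.Properties` with `PacketTypes.CONNECT`). On the broker side, a config this minimal also has persistence off; a sketch:

```
# mosquitto.conf — sketch: keep session state across broker restarts
listener 1883
password_file /mosquitto/config/passwords
persistence true
persistence_location /mosquitto/data/
```

Also worth double-checking from the post itself: the code connects to port 1884 while the listener is 1883, and the traced client id `abcde` differs from the `abcdef` in the code — either mismatch would make the broker see a different (or unreachable) session.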
|
<python><mqtt><mosquitto><paho>
|
2024-07-16 19:15:51
| 1
| 15,219
|
Philip Couling
|
78,756,354
| 1,700,890
|
Display custom dates on x axis matplotlib
|
<p>Here is my plot:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.dates as mdates

dates = pd.date_range('2020-03-31', '2024-06-30', freq='Q')
my_df = pd.DataFrame({'dates': dates,
                      'type_a': [np.nan]*10 + list(range(0, len(dates)-10)),
                      'type_b': range(len(dates), 2*len(dates))})
my_df = my_df.melt(id_vars='dates', value_vars=['type_a', 'type_b'])
my_df.dates = my_df.dates.dt.date

gfg = sns.lineplot(x='dates', y='value', hue='variable',
                   data=my_df, estimator=None)
gfg.xaxis.set_major_locator(mdates.MonthLocator(interval=3))
gfg.set_xticklabels(gfg.get_xticklabels(), rotation=90)
gfg.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))
</code></pre>
<p>It does not display dates as they are in dataframe:</p>
<p><a href="https://i.sstatic.net/WxPWKpDw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WxPWKpDw.png" alt="plot" /></a></p>
<p>I also tried to replace <code>gfg.xaxis.set_major_locator(mdates.MonthLocator(interval=3))</code> with <code>gfg.xaxis.set_major_locator(mdates.DayLocator(interval=90))</code>, but it is also not what I am looking for.</p>
<p>How can I get dates exactly as in dataframe?</p>
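A sketch of the usual fix (an assumption, not from the post): skip the date locators entirely and place a tick at exactly the dates present in the dataframe, using a small stand-in dataset:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import pandas as pd

# Three quarter-end dates standing in for the question's data.
df = pd.DataFrame({"dates": pd.to_datetime(["2024-03-31", "2024-06-30", "2024-09-30"]),
                   "value": [1, 2, 3]})

fig, ax = plt.subplots()
ax.plot(df["dates"], df["value"])
# Put a tick exactly at each date present in the frame, instead of letting
# MonthLocator/DayLocator choose their own grid.
ax.set_xticks(df["dates"])
ax.set_xticklabels(df["dates"].dt.strftime("%Y-%m-%d"), rotation=90)
```

The reason a `MonthLocator(interval=3)` cannot reproduce quarter-end dates is that it ticks on its own calendar grid (e.g. the 1st of every third month), not on the data points.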
|
<python><date><matplotlib><axis>
|
2024-07-16 19:11:13
| 0
| 7,802
|
user1700890
|
78,756,262
| 11,069,614
|
how to extract files from a list using substring filter
|
<p>I have a list of files from os.listdir like:</p>
<pre><code> TXSHP_20240712052921.csv
TXSHP_20240715045301.csv
TXSHP_FC_20210323084010.csv
TXSHP_FC_20231116060918.csv
</code></pre>
<p>How do I extract only the ones where 'FC' is not in the filename?</p>
<p>I've tried</p>
<pre><code>def get_archive_filelist_chip(archiveFldr):
nonedi_files_chip = []
folder = archiveFldr
for file in os.listdir(folder):
if file.endswith(".csv") & 'FC' in file == False:
nonedi_files_chip.append(file)
else:
pass
#nonedi_files_chip = filter(lambda x: 'FC' not in x, nonedi_files_chip)
return nonedi_files_chip
</code></pre>
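For reference, a sketch of why the posted condition misbehaves, with a working filter over the sample names from the question: `&` binds tighter than `in`, so the test parses as `(file.endswith(".csv") & 'FC') in file == False`, which raises a TypeError (`bool & str`) before any filtering happens.

```python
files = [
    "TXSHP_20240712052921.csv",
    "TXSHP_20240715045301.csv",
    "TXSHP_FC_20210323084010.csv",
    "TXSHP_FC_20231116060918.csv",
]

# Plain boolean operators and `not in` express the intent directly.
non_fc = [f for f in files if f.endswith(".csv") and "FC" not in f]
print(non_fc)  # ['TXSHP_20240712052921.csv', 'TXSHP_20240715045301.csv']
```

In the real function, `files` would come from `os.listdir(archiveFldr)`.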
|
<python><list><filter><substring>
|
2024-07-16 18:49:07
| 1
| 392
|
Ben Smith
|
78,756,225
| 22,370,136
|
Defined Rules get not called in Scrapy
|
<p>I am currently working with the Python library Scrapy and I am inheriting from the CrawlSpider so that I can override/define custom Rules. I have defined rules that should block all URLs with <code>auth/</code> and allow only URLs with <code>/tags</code>.</p>
<p>My solution:</p>
<pre><code>class DockerhubDockerRegistrySpider(CrawlSpider):
    name = "dockerhubDockerQueriedRegistrySpider"
    allowed_domains = ["hub.docker.com"]
    rules = (
        Rule(
            LinkExtractor(allow='/tags'),
            callback='parse_additional_page',
            follow=False
        ),
        Rule(
            LinkExtractor(deny='auth/'),
            follow=False
        ),
    )
</code></pre>
<p>The problem is that when the follow method is called, it always also requests <code>https://hub.docker.com/auth/profile</code>, and this page cannot be accessed, so I want to block it, but somehow my rules do not fire.</p>
<p>Follow URL call:</p>
<pre><code>yield response.follow(
    additional_data_url_absolute,
    self.parse_additional_page,
    cb_kwargs=dict(item=item),
    meta=dict(
        playwright=True,
        playwright_include_page=True,
        playwright_page_methods={
            "wait_for_page_load": PageMethod("wait_for_selector", 'body[aria-describedby="global-progress"]')
        },
    )
)
</code></pre>
<p>Follow Callback:</p>
<pre><code>def parse_additional_page(self, response, item):
    self.logger.info("Additional page meta: %s", response.meta)
    self.logger.info("Additional page HTML: %s", response.css('title::text').get())
    self.logger.info("Additional page HTML Repo-Name: %s", response.css('h2[data-testid="repoName"]::text').get())
    item['additional_data'] = response.css('h2[data-testid="repoName"]::text').get()
    yield item
</code></pre>
<p>Debug log:</p>
<pre><code>2024-07-16 20:28:17 [scrapy-playwright] DEBUG: [Context=default] Response: <401 https://hub.docker.com/auth/profile>
2024-07-16 20:28:18 [scrapy-playwright] DEBUG: [Context=default] Response: <401 https://hub.docker.com/auth/profile>
2024-07-16 20:28:19 [scrapy-playwright] DEBUG: [Context=default] Request: <GET https://hub.docker.com/auth/profile> (resource type: fetch, referrer: https://hub.docker.com/search?q=python&page=1)
2024-07-16 20:44:23 [scrapy-playwright] DEBUG: [Context=default] Request: <GET https://hub.docker.com/auth/profile> (resource type: fetch, referrer: https://hub.docker.com/r/google/guestbook-python-redis/tags)
</code></pre>
<p>How to block requests for <code> https://hub.docker.com/auth/profile</code>?</p>
<p>My whole solution:</p>
<pre><code>class DockerhubDockerRegistrySpider(CrawlSpider):
    name = "dockerhubDockerQueriedRegistrySearchSpiderTemp"
    allowed_domains = ["hub.docker.com"]
    rules = (
        Rule(
            LinkExtractor(allow=r'/tags$'),  # Only allow URLs that end with '/tags'
            callback='parse_additional_page',
            follow=False
        ),
        Rule(
            LinkExtractor(allow='search'),
            callback='parse_registry',
            follow=True
        ),
    )

    def __init__(self, query=None, *args, **kwargs):
        super(DockerhubDockerRegistrySpider, self).__init__(*args, **kwargs)
        self.query = query
        self.start_urls = [f'https://hub.docker.com/search?q={query}&page={i}' for i in range(1, 12)]

    def start_requests(self):
        for index, url in enumerate(self.start_urls, start=1):
            self.logger.info(f"Starting request: {url}")
            yield scrapy.Request(
                url,
                meta=dict(
                    page_number=index,
                    playwright=True,
                    playwright_include_page=True,
                    playwright_page_methods={
                        "wait_for_search_results": PageMethod("wait_for_selector", "div#searchResults"),
                    }
                ),
                callback=self.parse_registry
            )

    async def parse_registry(self, response):
        page = response.meta["playwright_page"]
        if await page.title() == "hub.docker.com":
            await page.close()
            await page.context.close()
        page_number = response.meta.get("page_number")
        if page_number is None:
            self.logger.warning("Page number not found in meta: %s", response.url)
            return
        search_results = response.xpath('//a[@data-testid="imageSearchResult"]')
        for result in search_results:
            item = DockerImageItem()
            item['page_number'] = page_number
            item['name'] = result.css('[data-testid="product-title"]::text').get()
            uploader_elem = result.css("span::text").re(r"^By (.+)")
            if uploader_elem:
                item["uploader"] = uploader_elem[0].strip()
            else:
                official_icon = result.css('[data-testid="official-icon"]')
                verified_publisher_icon = result.css(
                    '[data-testid="verified_publisher-icon"]'
                )
                item["is_official_image"] = bool(official_icon)
                item["is_verified_publisher"] = bool(verified_publisher_icon)
            item['is_official_image'] = bool(result.css('[data-testid="official-icon"]'))
            item['is_verified_publisher'] = bool(result.css('[data-testid="verified_publisher-icon"]'))
            item['last_update'] = self.parse_update_string(result.css('span:contains("Updated")::text').get())
            item['description'] = result.xpath(
                './/span[contains(text(), "Updated")]/ancestor::div[1]/following-sibling::p[1]/text()').get()
            item['chips'] = result.css('[data-testid="productChip"] span::text').getall()
            # Extract pulls last week
            pulls_elem = (
                result.css('p:contains("Pulls:")')
                .xpath("following-sibling::p/text()")
                .get()
            )
            item["pulls_last_week"] = (
                pulls_elem.replace(",", "") if pulls_elem else None
            )
            item['downloads'] = result.css('[data-testid="DownloadIcon"] + p::text').get()
            item['stars'] = result.css('svg[data-testid="StarOutlineIcon"] + span > strong::text').get()
            # Clean up text fields
            item['name'] = item['name'].strip() if item['name'] else None
            item['description'] = item['description'].strip() if item['description'] else None
            item['downloads'] = item['downloads'].strip() if item['downloads'] else None
            item['pulls_last_week'] = item['pulls_last_week'].replace(",", "") if item['pulls_last_week'] else None
            item['stars'] = item['stars'].strip() if item['stars'] else None
            additional_data_url = result.attrib.get('href')
            if additional_data_url and '/r/' or '/_/' in additional_data_url and '/tags' in additional_data_url:
                additional_data_url_absolute = f"https://hub.docker.com{additional_data_url}/tags"
                self.logger.info("Entered Additional Page: %s", additional_data_url_absolute)
                yield response.follow(
                    additional_data_url_absolute,
                    callback=self.parse_additional_page,
                    cb_kwargs=dict(item=item),
                    meta=dict(
                        playwright=True,
                        playwright_include_page=True,
                        errback=self.close_context_on_error,
                        playwright_page_methods={
                            "wait_for_page_load": PageMethod("wait_for_selector", 'div[data-testid="repotagsTagListItem"]'),
                        },
                        playwright_context_kwargs={
                            "ignore_https_errors": True,
                        }
                    )
                )
            else:
                self.logger.error("No additional page available for: %s", item['name'])
                yield item

    async def close_context_on_error(self, failure):
        page = failure.request.meta["playwright_page"]
        await page.close()
        await page.context.close()

    def parse_additional_page(self, response, item):
        self.logger.info("Additional page meta: %s", response.meta)
        self.logger.info("Additional page HTML: %s", response.css('title::text').get())
        self.logger.info("Additional page HTML Repo-Name: %s", response.css('h2[data-testid="repoName"]::text').get())
        item['additional_data'] = response.css('h2[data-testid="repoName"]::text').get()
        return item

    @staticmethod
    def parse_update_string(update_string):
        return update_string.strip() if update_string else None
</code></pre>
|
<python><web-scraping><scrapy><playwright-python>
|
2024-07-16 18:39:08
| 1
| 407
|
Vlajic Stevan
|
78,756,165
| 6,936,582
|
How to contract nodes and apply functions to node attributes
|
<p>I have a simple graph with the attributes <strong>height</strong> and <strong>area</strong></p>
<pre><code>import networkx as nx

nodes_list = [("A", {"height": 10, "area": 100}),
              ("B", {"height": 12, "area": 200}),
              ("C", {"height": 8, "area": 150}),
              ("D", {"height": 9, "area": 120})]
G = nx.Graph()
G.add_nodes_from(nodes_list)
edges_list = [("A","B"), ("B","C"), ("C","D")]
G.add_edges_from(edges_list)
nx.draw(G, with_labels=True, node_color="red")
</code></pre>
<p><a href="https://i.sstatic.net/lGZXLby9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGZXLby9.png" alt="enter image description here" /></a></p>
<p>I want to contract nodes and update the attributes with the average of height and the sum of area of the contracted nodes.</p>
<p>There must be some easier way than:</p>
<pre><code>H = nx.contracted_nodes(G, "B","C")
#print(H.nodes["B"])
#{'height': 12, 'area': 200, 'contraction': {'C': {'height': 8, 'area': 150}}}
#Calc the average of node B and C's heights
new_height = (H.nodes["B"]["height"] + H.nodes["B"]["contraction"]["C"]["height"])/2 #10
#Calc the sum of of node B and C's areas
new_area = H.nodes["B"]["area"] + H.nodes["B"]["contraction"]["C"]["area"]
#Drop old attributes
del(H.nodes["B"]["height"], H.nodes["B"]["area"], H.nodes["B"]["contraction"])
nx.set_node_attributes(H, {"B":{"height":new_height, "area":new_area}})
print(H.nodes["B"])
#{'height': 10.0, 'area': 350}
</code></pre>
<p>Can I get networkx to average height and sum area, every time I contract multiple nodes?</p>
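networkx has no built-in aggregation hook for `contracted_nodes`, but the manual steps above generalize into a small helper; a sketch (the function name and aggregation choices are illustrative, not a networkx API):

```python
import networkx as nx

def contract_with_agg(G, u, v):
    # Contract v into u, then recompute the attributes from the
    # 'contraction' record that nx.contracted_nodes leaves behind:
    # height -> average, area -> sum.
    H = nx.contracted_nodes(G, u, v, self_loops=False)
    merged = H.nodes[u].pop("contraction")
    heights = [H.nodes[u]["height"]] + [d["height"] for d in merged.values()]
    areas = [H.nodes[u]["area"]] + [d["area"] for d in merged.values()]
    H.nodes[u]["height"] = sum(heights) / len(heights)
    H.nodes[u]["area"] = sum(areas)
    return H

G = nx.Graph()
G.add_nodes_from([("B", {"height": 12, "area": 200}),
                  ("C", {"height": 8, "area": 150})])
G.add_edge("B", "C")
H = contract_with_agg(G, "B", "C")
print(H.nodes["B"])  # {'height': 10.0, 'area': 350}
```

Because `contracted_nodes` nests earlier contraction records, repeated pairwise contraction with this helper also keeps the attributes flat.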
|
<python><networkx>
|
2024-07-16 18:18:58
| 1
| 2,220
|
Bera
|
78,756,077
| 1,521,218
|
Using ironpython from C# async-await causes the app to hang
|
<p>From C# we have to call a Python function, in this case the nltk lemmatize function (which still has no good C# implementation). We call it like this:</p>
<pre><code>private string Lemmatize(string word)
{
    using (Py.GIL())
    {
        using (var scope = Py.CreateScope())
        {
            dynamic nltk = Py.Import("nltk");
            var lemmatizer = nltk.stem.WordNetLemmatizer();
            var lemma = lemmatizer.lemmatize(word);
            return lemma;
        }
    }
}
</code></pre>
<p>It works perfectly in the test application, a .NET console app. When we apply this in the real app, which has async functions, the function usually hangs at the line <code>using (Py.GIL())</code>. The whole app effectively runs single-threaded, because async functions are everywhere and each of them is called with <code>await</code>. A calling point therefore looks like this:</p>
<pre><code>private async Task<IList<string>> LemmatizeAll(IEnumerable<string> words)
{
    var result = new List<string>();
    foreach (var word in words)
    {
        var item = Lemmatize(word);
        result.Add(item);
    }
    return result;
}

// ...
var lemmas = await LemmatizeAll(new [] {"running", "man"});
</code></pre>
<p>I understand that <code>Py.GIL()</code> is essentially a semaphore that prevents multiple threads from running Python at the same time, but that is not what happens here: only one call runs at a time, though possibly from different threads because of async-await. It still should not hang.</p>
<p>How can I use Python scripts from an environment like this? Any advice?</p>
|
<python><c#><async-await><semaphore><ironpython>
|
2024-07-16 17:55:00
| 0
| 1,261
|
Zoltan Hernyak
|
78,756,058
| 11,280,068
|
Unicode strings in a purely python3 codebase - are these useless?
|
<p>In the codebase that I'm working on, there seems to be remnants of python2 because a lot of the strings are prefixed with <code>u</code>.</p>
<p>From researching, it looks like this denotes a unicode string, but in python3, all strings are unicode by default. If I did a big passthrough of the codebase and removed all the <code>u</code>'s before strings, would there be any downside?</p>
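For what it's worth, in Python 3 the `u` prefix is a pure no-op, re-allowed by PEP 414 only to ease 2/3-compatible code, so stripping it from a Python-3-only codebase has no runtime effect:

```python
# The u prefix changes nothing in Python 3: both literals are the same str.
assert u"héllo" == "héllo"
assert type(u"x") is str
print("u-prefixed and plain literals are identical")
```

The only practical risk is the search-and-replace itself: a blanket edit must not touch a `u` that is part of an identifier or adjacent text. (The `ur''` combination is a syntax error in Python 3, so it cannot occur in a working Python-3 codebase.)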
|
<python><python-3.x><string><unicode><encoding>
|
2024-07-16 17:50:41
| 1
| 1,194
|
NFeruch - FreePalestine
|
78,755,449
| 38,557
|
Type of Union or Union of Types
|
<p>In Python, is <code>Type[Union[A, B, C]]</code> the same as <code>Union[Type[A], Type[B], Type[C]]</code>?
I think they are equivalent, and the interpreter seems to agree.</p>
<p>ChatGPT (which, from my past experience, tends to be wrong with this type of questions) <a href="https://chatgpt.com/share/cae7cf7d-988f-4335-9142-22007aa90140" rel="nofollow noreferrer">disagrees</a> so I was wondering which one is more correct.</p>
<p>To be clear: I want a union of the types of A, B or C, not a union of instances of A, B or C. The first option is shorter and IMHO more readable.</p>
<p>In other words, given these definitions:</p>
<pre class="lang-py prettyprint-override"><code>class A: pass
class B: pass
class C: pass
def foo(my_class: Type[Union[A, B, C]]): pass
</code></pre>
<p>I expect this usage to be correct:</p>
<pre class="lang-py prettyprint-override"><code>foo(A) # pass the class A itself
</code></pre>
<p>But not this one:</p>
<pre class="lang-py prettyprint-override"><code>foo(A()) # pass an instance of A
</code></pre>
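A runtime sketch for context (the equivalence itself is a type-checker matter: checkers treat `Type[Union[A, B, C]]` as distributing to `Union[Type[A], Type[B], Type[C]]`, so both spellings accept the class objects and reject instances):

```python
from typing import Type, Union

class A: pass
class B: pass
class C: pass

def foo(my_class: Type[Union[A, B, C]]) -> str:
    # At runtime the hint is not enforced; a static checker is what
    # flags foo(A()) while accepting foo(A).
    return my_class.__name__

print(foo(A))  # A
```

At runtime the two spellings are distinct objects (they do not compare equal), which is why only static analysis can answer "are these the same type"; both PEP 484-conformant checkers accept either form for this use.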
|
<python><python-typing>
|
2024-07-16 15:27:03
| 1
| 13,083
|
noamtm
|
78,755,264
| 5,790,653
|
How to break the loop only if email subject does not contain today or yesterday date
|
<p>I'm reading this <a href="https://thepythoncode.com/article/reading-emails-in-python" rel="nofollow noreferrer">document</a> regarding connecting to my IMAP server and read the emails, but I have some issues.</p>
<p>Currently I have around 10K emails and it's growing (I delete old emails weekly, but they're still a lot), and in this part of the code, it fetches all emails:</p>
<pre class="lang-py prettyprint-override"><code>res, msg = imap.fetch(str(i), "(RFC822)")
</code></pre>
<p>My emails have static subject format like this:</p>
<pre><code>StaticText1 - SUBJECT - SentDateYYYY-MM-DD Hour:Minute
StaticText2 - SUBJECT - SentDateYYYY-MM-DD Hour:Minute
</code></pre>
<p>Let's suppose a list like this for simplicity:</p>
<pre class="lang-py prettyprint-override"><code>dates = ['2024-07-16', '2024-07-16', '2024-07-16', '2024-07-16', '2024-07-16', '2024-07-16', '2024-07-16', '2024-07-15', '2024-07-15', '2024-07-15', '2024-07-15', '2024-07-15', '2024-07-15', '2024-07-15', '2024-07-15', '2024-07-14', '2024-07-14', '2024-07-14', '2024-07-14', '2024-07-14', '2024-07-14', '2024-07-13', '2024-07-13', '2024-07-13', '2024-07-13', '2024-07-13', '2024-07-13', '2024-07-13']
</code></pre>
<p>So, let's say reading each email takes at least 0.1 seconds; that adds up to a huge amount of time for 10K emails.</p>
<p>I read emails in my code from the latest to the oldest.</p>
<p>I'm going to reach this:</p>
<pre class="lang-py prettyprint-override"><code>if datetime.datetime.now().strftime('%Y-%m-%d') in subject or (datetime.datetime.now() - datetime.timedelta(days=1)).strftime('%Y-%m-%d') in subject:
    # then read email
    print(email)
else:
    # as soon as it reaches an email which has 2024-07-14, it breaks the loop and shouldn't try anything further
    break
</code></pre>
<p>The main question is, is that even possible?</p>
<p><strong>Update1</strong></p>
<p>To clarify, I know this, but it reads each email, and when one doesn't contain the date it just moves on to the next email instead of stopping (which is not what I expect):</p>
<pre class="lang-py prettyprint-override"><code>for subject in emails:
    if today in subject or yesterday in subject:
        print(subject)
</code></pre>
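Yes, this is possible: since the mailbox is read newest-to-oldest, a `break` in the `else` branch stops at the first subject older than yesterday. A sketch with illustrative subjects (in practice, a server-side IMAP SEARCH such as `imap.search(None, '(SINCE "15-Jul-2024")')` avoids fetching old messages at all):

```python
# Illustrative subjects, newest first — the order the question reads in.
subjects = [
    "StaticText1 - SUBJECT - 2024-07-16 09:00",
    "StaticText2 - SUBJECT - 2024-07-15 08:00",
    "StaticText1 - SUBJECT - 2024-07-14 07:00",
    "StaticText2 - SUBJECT - 2024-07-13 06:00",
]

today = "2024-07-16"      # really: datetime.date.today().isoformat()
yesterday = "2024-07-15"  # really: today minus one day

kept = []
for subject in subjects:
    if today in subject or yesterday in subject:
        kept.append(subject)
    else:
        break  # subjects are newest-first, so everything after this is older
print(kept)
```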
|
<python>
|
2024-07-16 14:46:07
| 2
| 4,175
|
Saeed
|
78,755,224
| 8,384,910
|
Ignore sub-dependencies in pyproject.toml
|
<p>A package I want to include in my <code>pyproject.toml</code>, itself depends on <code>torch</code>, which I don't want automatically installing since it takes up a lot of space and my currently installed version already works with it.</p>
<p>So, I want to emulate the functionality of the <code>--no-dependencies</code> pip install flag, instead with configuration in my <code>pyproject.toml</code> file for that specific package.</p>
<p>I was unable to find any information online on how to do this.</p>
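Standard pyproject metadata has no per-dependency `--no-deps` equivalent, so any solution is installer-specific. A hedged sketch of one approach, using uv's dependency-override table (the environment marker makes the sub-dependency unsatisfiable, so it is skipped; verify against uv's documentation before relying on it):

```toml
# pyproject.toml — uv-specific, not standard metadata
[tool.uv]
override-dependencies = [
    # pretend torch is never required on this platform
    "torch; sys_platform == 'never'",
]
```

With plain pip the practical alternatives remain `pip install --no-deps <package>`, or installing your working torch first so the resolver already sees the requirement satisfied.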
|
<python><pip><pyproject.toml>
|
2024-07-16 14:37:10
| 0
| 9,414
|
Richie Bendall
|
78,755,222
| 8,595,535
|
Pandas new column based on other dataframe with matching condition
|
<p>Suppose I have two dataframes:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'Date': [0, 2, 3], 'Id':['A','B','C'], 'Country': ['US', 'CA', 'DE']})
df2 = pd.DataFrame({'Date': [0, 1, 2, 3], 'US': [10, 20, 30, 40], 'CA':[5, 10, 15, 20], 'DE':[100, 200, 300, 400]})
</code></pre>
<p>I would like to add a column to df1 that is equal to the value in df2 for the corresponding Country and corresponding date.</p>
<p>Desired result should look like that:</p>
<pre><code>df1 = pd.DataFrame({'Date': [0, 2, 3], 'Id':['A','B','C'], 'Country': ['US', 'CA', 'DE'], 'New':[10, 15, 400]})
</code></pre>
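One sketch of a solution: reshape `df2` to long form with `melt`, then merge on `Date` and `Country` (column names `New`/`long` are illustrative):

```python
import pandas as pd

df1 = pd.DataFrame({'Date': [0, 2, 3], 'Id': ['A', 'B', 'C'],
                    'Country': ['US', 'CA', 'DE']})
df2 = pd.DataFrame({'Date': [0, 1, 2, 3], 'US': [10, 20, 30, 40],
                    'CA': [5, 10, 15, 20], 'DE': [100, 200, 300, 400]})

# Wide -> long: one row per (Date, Country) pair, then a plain left merge.
long = df2.melt(id_vars='Date', var_name='Country', value_name='New')
out = df1.merge(long, on=['Date', 'Country'], how='left')
print(out)
```

A `how='left'` merge keeps every row of `df1` and yields `NaN` for any (Date, Country) pair missing from `df2`, which is usually the desired behavior for a lookup.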
|
<python><pandas>
|
2024-07-16 14:36:40
| 2
| 309
|
CTXR
|
78,755,219
| 1,471,980
|
How do you summarize data frame in pandas based on values
|
<p>I have a data frame like this</p>
<p>df</p>
<pre><code>Node_Name size count
Abc1 10 2
Abc1 20 2
Zxd 30 3
Zxd 40 3
Zxd 80 3
Ddd 10 4
Ddd 40 4
Ddd 80 4
Ddd 100 4
</code></pre>
<p>I need subset this data frame of as this.</p>
<p>If the count value per Node_Name is 2 or less, take the minimum size per Node_Name. If the count is 3 or more, drop the maximum and sum the remaining size values, grouped by Node_Name.</p>
<p>For example final_df should look like this:</p>
<pre><code>Node_Name size count
Abc1 10 2
Zxd 70 3
Ddd 130 4
</code></pre>
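A sketch with groupby-apply (the helper name is illustrative): for each Node_Name group, take the minimum size when the count is 2 or less, otherwise the sum minus the maximum.

```python
import pandas as pd

df = pd.DataFrame({
    "Node_Name": ["Abc1", "Abc1", "Zxd", "Zxd", "Zxd", "Ddd", "Ddd", "Ddd", "Ddd"],
    "size": [10, 20, 30, 40, 80, 10, 40, 80, 100],
    "count": [2, 2, 3, 3, 3, 4, 4, 4, 4],
})

def summarize(g):
    # count is constant within a group, so the first value suffices.
    if g["count"].iat[0] <= 2:
        size = g["size"].min()
    else:
        size = g["size"].sum() - g["size"].max()  # all but the max
    return pd.Series({"size": size, "count": g["count"].iat[0]})

final_df = (df.groupby("Node_Name", sort=False)[["size", "count"]]
              .apply(summarize)
              .reset_index())
print(final_df)
```

`sort=False` keeps the groups in first-appearance order (Abc1, Zxd, Ddd), matching the desired output.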
|
<python><pandas>
|
2024-07-16 14:35:59
| 3
| 10,714
|
user1471980
|
78,755,217
| 8,519,830
|
Why does python return a 64bit negative number as very large positive?
|
<p>Here's the problem. I expect as a result -100. I get from other stackoverflow questions that python is using the 2's complement for negative numbers. How would I convert a string to a 64bit number?</p>
<p>Using Python3 under Windows.</p>
<pre><code>Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> int('FFFFFFFFFFFFFF9C',16)
18446744073709551516
>>> int('0000000000000064',16)
100
</code></pre>
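For context: `int(s, 16)` is unsigned by design, so the high bit is not treated as a sign. A sketch of reinterpreting the unsigned value as a signed 64-bit two's-complement number (the helper name is illustrative):

```python
def to_int64(hex_str):
    # Reinterpret an unsigned 64-bit value as signed two's complement:
    # values at or above 2**63 wrap around to the negative range.
    value = int(hex_str, 16)
    if value >= 2**63:
        value -= 2**64
    return value

print(to_int64('FFFFFFFFFFFFFF9C'))  # -100
print(to_int64('0000000000000064'))  # 100
```

An equivalent stdlib one-liner is `struct.unpack('<q', struct.pack('<Q', value))[0]`, which packs the value as unsigned 64-bit and unpacks it as signed.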
|
<python>
|
2024-07-16 14:35:41
| 3
| 585
|
monok
|
78,754,936
| 538,256
|
micro array image registration
|
<p>How can I image-register these two images?</p>
<p><a href="https://i.sstatic.net/JpMgwRq2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpMgwRq2.png" alt="imgsat" /></a></p>
<p><a href="https://i.sstatic.net/65qru46B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65qru46B.png" alt="img" /></a></p>
<p>They are magnetic microdot arrays, at different magnetic fields.
Subtracting one from the other I should be able to discern a magnetic contrast.</p>
<p>after defining a contrast-stretching function:</p>
<pre><code>def stretch(i, mt=2, Mt=98):
    minval = np.percentile(i, mt)
    maxval = np.percentile(i, Mt)
    imgsub = np.clip(i, minval, maxval)
    imgstretched = ((imgsub - minval) / (maxval - minval)) * 255
    return imgstretched
</code></pre>
<p>and cropping out the status bar:</p>
<pre><code>imagesizey, imagesizex = imgsat.shape[:2]
imgsat = imgsat[0:imagesizey-50, :]
img = img[0:imagesizey-50, :]
</code></pre>
<p>I tried SciKit:</p>
<pre><code>from skimage.registration import phase_cross_correlation
from scipy.ndimage import fourier_shift
shift, error, diffphase = phase_cross_correlation(img, imgsat, upsample_factor=100)
offset_image = fourier_shift(np.fft.fftn(img), shift)
offset_image = np.fft.ifftn(offset_image)
plt.imshow(stretch(offset_image.real-imgsat,mt=2,Mt=98), cmap='gray')
</code></pre>
<p>I tried an <a href="https://github.com/keflavich/image_registration" rel="nofollow noreferrer">image-registration package</a> (<a href="https://image-registration.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://image-registration.readthedocs.io/en/latest/</a>):</p>
<pre><code>from image_registration import chi2_shift
from image_registration.fft_tools import shift
import noise
xoff, yoff, exoff, eyoff = chi2_shift(img, imgsat,return_error=True, upsample_factor='auto')
corrected_image2 = shift.shiftnd(img, (-yoff, -xoff))
imgs = corrected_image2-imgsat
plt.imshow(stretch(imgs,mt=2,Mt=98), cmap="gray")
</code></pre>
<p>Last I tried <a href="https://pypi.org/project/pystackreg/" rel="nofollow noreferrer">PyStackReg</a></p>
<pre><code>from pystackreg import StackReg
sr = StackReg(StackReg.TRANSLATION)
out_tra = sr.register_transform(imgsat, img)
plt.imshow(stretch(out_tra-imgsat,mt=2,Mt=98), cmap='gray')
</code></pre>
<p>In all cases the subtraction fails, with results similar to this (this with SciKit):</p>
<p><a href="https://i.sstatic.net/0kQsNweC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0kQsNweC.png" alt="SK subtraction" /></a></p>
<p>I might add that the array in the images drifts, but some details, like the two black specks at the left, remain stable, being defects in the optics.</p>
<p>Can you suggest some alternatives?</p>
|
<python><image-processing><image-registration>
|
2024-07-16 13:42:05
| 1
| 4,004
|
alessandro
|
78,754,912
| 893,254
|
What are the differences between using multiple or'ed type hints vs ABCs and an inheritance hierarchy in Python?
|
<p>Python is a dynamic language. This means that types are dynamic at runtime and Python makes use of the concept of duck typing.</p>
<p>What this means is that for any object <code>x</code> we can do</p>
<ul>
<li><code>x.some_function()</code></li>
<li><code>x.some_property = y</code></li>
</ul>
<p>without knowing until runtime whether or not <code>x</code> has the attribute <code>some_property</code> or the function <code>some_function</code>.</p>
<p>Python has class inheritance. It also has the concept of "abstract base classes" provided by the <code>abc</code> module.</p>
<p>This module provides a decorator <code>@abstractmethod</code>. The decorator prevents subclasses from being instantiated at runtime without those subclasses first having provided implementations for any methods marked as abstract using <code>@abstractmethod</code>.</p>
<p>This is just a piece of protection logic which is triggered at runtime when a class is instantiated. It behaves approximately as if the runtime check for required methods occurs in the <code>__init__</code> function.</p>
<p>This question is about both of these concepts, but particularly in the context of type hints. Type hints provide ways for static analyzers to provide aids to the developer in IDEs. An obvious example is the Python language server and Python extension for VS Code.</p>
<p>Type hints have other purposes and uses too, but in this context the interaction with static analysis tools is what I am interested in.</p>
<p>Consider an example inheritance hierarchy.</p>
<pre><code>class AbstractMessage():
    pass


class GetterMessage(AbstractMessage):
    pass


class SetterMessage(AbstractMessage):
    pass
</code></pre>
<p>In this example I have defined two message types which inherit from an abstract base class. However, there is no reason for this ABC to exist, and code which uses the <code>GetterMessage</code> and <code>SetterMessage</code> would work no differently without it.</p>
<p>Here are two possible ways to implement a function which can return either type.</p>
<pre><code>def example_function() -> GetterMessage|SetterMessage:
    pass


def example_function() -> AbstractMessage:
    pass


def example_code_block():
    return_value = example_function()
    return_value. # <- static analyzer kicks in here
</code></pre>
<p>In this example, I have shown two possible ways to specify the return type:</p>
<ul>
<li><code>AbstractMessage</code></li>
<li><code>GetterMessage | SetterMessage</code></li>
</ul>
<p>What, if any, are the differences between either choice? Is there a reason to prefer one over the other? Is there a reason why one must be chosen over the other?</p>
<p>It would seem to me that either would work just as well. However, in some cases, there may be multiple return types which must be specified using the or (<code>|</code>) syntax, because the types which are returned are not related, and so cannot be combined into an inheritance hierarchy.</p>
<hr />
<h1>Some simple tests</h1>
<p>The following simple tests show the suggestions for attributes of <code>my_object</code> or <code>my_object_2</code> made by the VS Code/Pylance static analyzer.</p>
<p>(This test is specific to VS Code with Pylance, with whichever versions of those software are currently installed on the machine I ran these tests on, and does not necessarily give a general answer.)</p>
<p>What is potentially interesting is that the or syntax <code>Type1|Type2</code> gives the full set of possible options, whereas the inheritance-hierarchy ABC return type does not.</p>
<pre><code>class Type1():
    def common_function(self):
        pass

    def function_1(self):
        pass


class Type2():
    def common_function(self):
        pass

    def function_2(self):
        pass


class AbstractBaseType():
    def abstract_base_type_function(self):
        pass


class ConcreteType1(AbstractBaseType):
    def common_function(self):
        pass

    def function_1(self):
        pass


class ConcreteType2(AbstractBaseType):
    def common_function(self):
        pass

    def function_2(self):
        pass


def example_function_1(input: str) -> Type1 | Type2:
    if input == 'Type1':
        return Type1()
    else:
        return Type2()


def example_function_2(input: str) -> AbstractBaseType:
    if input == 'Type1':
        return ConcreteType1()
    else:
        return ConcreteType2()


def test_function():
    my_object_1 = example_function_1('Type1')
    my_object_2 = example_function_2('Type1')
    my_object_1. # <- suggestions include `common_function()`, `function_1()`, `function_2()`
    my_object_2. # <- suggestions include `abstract_base_type_function()`
</code></pre>
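<p>One further difference worth noting (my addition, not part of the tests above): with the union return type, a member unique to one alternative is only safely accessible after narrowing, and <code>isinstance</code> narrowing works both for the static analyzer and at runtime. A minimal sketch:</p>

```python
class Type1:
    def common_function(self) -> None:
        pass

    def function_1(self) -> str:
        return "from Type1"


class Type2:
    def common_function(self) -> None:
        pass

    def function_2(self) -> str:
        return "from Type2"


def example_function_1(input: str) -> "Type1 | Type2":
    return Type1() if input == 'Type1' else Type2()


obj = example_function_1('Type1')
# isinstance narrows Type1 | Type2 down to Type1, so both the static
# analyzer and the runtime agree that function_1 is available here.
if isinstance(obj, Type1):
    print(obj.function_1())  # from Type1
```

<p>With the ABC return type, no amount of narrowing recovers <code>function_1</code> unless the concrete subclasses are mentioned explicitly, which matches the suggestion behaviour observed above.</p>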
|
<python><python-typing>
|
2024-07-16 13:39:26
| 1
| 18,579
|
user2138149
|
78,754,820
| 17,624,474
|
AWS Lambda unable to work with \n as string
|
<p>I have a private key for GCP Pub/Sub.
It is stored like this:</p>
<pre><code>secret_manager_data = {'google_pubsub_private_key':'-----BEGIN PRIVATE KEY-----\nYourPrivateKeyHere\n-----END PRIVATE KEY-----\n'}
</code></pre>
<p>I'm now parsing the private key in the below code</p>
<pre><code>signer = crypt.Signer.from_string(secret_manager_data['google_pubsub_private_key'])
</code></pre>
<p>Now I am getting an error like "<strong>No key could be detected</strong>"</p>
<p>But when I give string directly, then it is working. Example:</p>
<pre><code>signer = crypt.Signer.from_string('-----BEGIN PRIVATE KEY-----\nYourPrivateKeyHere\n-----END PRIVATE KEY-----\n')
</code></pre>
<p>This means there is no issue with the input itself, but I don't know how to handle the <strong>\n</strong> in the Lambda function: Lambda is treating <strong>\n</strong> as a newline instead of as part of the string.</p>
<p>Can you please help me?</p>
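<p>For illustration only (this is my assumption about the root cause, not something stated in the post): the "No key could be detected" error typically appears when the stored secret contains the literal two-character sequence backslash + <code>n</code> instead of real newlines, in which case normalizing the string before parsing helps. The variable names below are hypothetical:</p>

```python
# Hypothetical example: a PEM key whose newlines were stored as the
# two-character sequence backslash + 'n' rather than real newlines.
raw = '-----BEGIN PRIVATE KEY-----\\nYourPrivateKeyHere\\n-----END PRIVATE KEY-----\\n'

# Normalize literal "\n" sequences into actual newline characters before
# handing the key to a parser such as crypt.Signer.from_string.
fixed = raw.replace('\\n', '\n')

print(fixed.count('\n'))  # 3 real newlines; no literal backslash-n remains
```
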
|
<python><amazon-web-services><aws-lambda><google-cloud-pubsub>
|
2024-07-16 13:20:52
| 1
| 312
|
Moses01
|
78,754,810
| 19,573,290
|
How do I use CSS in PyGObject with Gtk 4?
|
<p>I'm trying to create a little application with PyGObject and Gtk 4.0. All of the other answers were for Gtk 3 or below, and trying them gave me an error.</p>
<p>I tried this piece of code:</p>
<pre class="lang-py prettyprint-override"><code>css = '* { background-color: #f00; }' # temporary css
css_provider = Gtk.CssProvider()
css_provider.load_from_data(css)
context = Gtk.StyleContext()
screen = Gdk.Screen.get_default()
context.add_provider_for_screen(screen, css_provider,
                                Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION)
</code></pre>
<p>But it threw out <code>'gi.repository.Gdk' object has no attribute 'Screen'</code>.</p>
<p>Here are my imports if that matters:</p>
<pre class="lang-py prettyprint-override"><code>import gi
gi.require_version('Gtk', '4.0')
from gi.repository import Gtk, Gdk
</code></pre>
|
<python><gtk><pygobject><gtk4>
|
2024-07-16 13:18:31
| 2
| 363
|
stysan
|
78,754,768
| 8,736,713
|
Generic type that acts like base type
|
<p>I want to create a generic type <code>A[T]</code> that acts exactly like <code>T</code>, except that I can tell at runtime that the type is in fact <code>A[T]</code> and not <code>T</code>.</p>
<p>I tried</p>
<pre><code>class A(Generic[T], T):
    pass
</code></pre>
<p>but that does not seem to work, as mypy complains, for example, that <code>A[str]</code> is of type <code>object</code>.</p>
<p>As an example, I want something like this to pass type checking:</p>
<pre><code>def f(s: A[str]):
    return re.findall('foo|bar', s)
</code></pre>
<p>But still be somehow able to tell <code>A[str]</code> from <code>str</code> at runtime, when I get a variable of that type or inspect the function signature.</p>
<p>Is there a way to do this?</p>
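<p>For <code>str</code> specifically, one workaround (a sketch of an alternative, not a general <code>A[T]</code> solution) is a trivial subclass: it behaves like <code>str</code> everywhere but remains a distinct, <code>isinstance</code>-detectable type at runtime:</p>

```python
import re


class AStr(str):
    """Acts exactly like str, but is a distinct runtime type."""


def f(s: AStr) -> list[str]:
    return re.findall('foo|bar', s)


s = AStr('foo bar baz')
print(isinstance(s, AStr))  # True: distinguishable from plain str
print(isinstance(s, str))   # True: still usable wherever str is expected
print(f(s))                 # ['foo', 'bar']
```

<p>This does not scale to a generic <code>A[T]</code> over arbitrary <code>T</code> (each <code>T</code> would need its own subclass), but it gives the runtime-distinguishable behaviour described above for a single concrete type.</p>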
|
<python><mypy><python-typing>
|
2024-07-16 13:08:30
| 1
| 890
|
Knoep
|
78,754,763
| 3,437,012
|
Partition by column value in O(n)
|
<p>Using polars, can I reorder the rows of a dataframe such that</p>
<ul>
<li>Rows with equal value in <code>col1</code> appear contiguously</li>
<li>Doing this is an O(n) operation.</li>
</ul>
<p>An alternative way of phrasing this is that I want the output sorted by <em>some</em> arbitrary order of <code>col1</code>, but I don't care which order, which should reduce the runtime from O(nlogn) to O(n).</p>
<p>Using this slightly awkward rephrasing I can ask the actual question I'm ultimately after: can I reorder the rows of a dataframe such that the output is sorted lexicographically by <code>(col1, ..., colN)</code>, with <em>some</em> arbitrary order of <code>(col1,..., colM)</code> and the canonical order of <code>(colM+1, ..., colN)</code>, other than via sorting (which would pick the canonical order of <code>(col1, ..., colM)</code> and would require unnecessary work)?</p>
<p>A simple example would be: I have rows (Date, String, Int) containing yearly population data of different cities over the last century. I want the rows for each given city next to each other and sorted by year (say, because I'm using an external tool for postprocessing that requires contiguity) but I don't care whether Amsterdam comes before Berlin.</p>
<p>Theoretically, this is trivially achievable in O(n) using hashes. Practically, I'd need built-in polars operations for this to be faster than just a regular sort.</p>
<p>EDIT: Timings of solutions suggested so far to my first question (partition rows, don't necessarily order them), on an example with M=1, N=2 and 10M rows and average group size 10:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Method</th>
<th style="text-align: left;">Time</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">SORT</td>
<td style="text-align: left;">0.26s (2)</td>
</tr>
<tr>
<td style="text-align: left;">EXPLODE</td>
<td style="text-align: left;">0.22s (1)</td>
</tr>
<tr>
<td style="text-align: left;">PARTITION</td>
<td style="text-align: left;">3.89s (3)</td>
</tr>
</tbody>
</table></div>
<p>where</p>
<ul>
<li>SORT just sorts by (col1, col2)</li>
<li>EXPLODE uses the suggestion by @BallpointBen, <code>df.group_by("col1").all().explode(pl.exclude("col1"))</code></li>
<li>PARTITION uses the suggestion by @Dogbert, <code>pl.concat(df.partition_by("col1"))</code></li>
</ul>
<p>Timings of solutions suggested so far to my second question (partition rows and order within partition by second column), on an example with M=1, N=2 and 1M rows and average group size 10:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Method</th>
<th style="text-align: left;">Presorted</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">SORT</td>
<td style="text-align: left;">0.03s (1)</td>
</tr>
<tr>
<td style="text-align: left;">PARTITION</td>
<td style="text-align: left;">5.05s (3)</td>
</tr>
<tr>
<td style="text-align: left;">PARALLELPARTITION</td>
<td style="text-align: left;">1.49s (2)</td>
</tr>
</tbody>
</table></div>
<p>where</p>
<ul>
<li>SORT just sorts by (col1, col2)</li>
<li>PARTITION uses the suggestion by @Dogbert, <code>pl.concat([x.sort('col2') for x in df.partition_by("col1")])</code></li>
<li>PARALLELPARTITION uses the suggestion by @DeanMcGregor, <code>pl.concat([x.lazy().sort('col2') for x in df.partition_by("col1")]).collect()</code></li>
</ul>
|
<python><python-polars>
|
2024-07-16 13:07:39
| 2
| 2,370
|
Bananach
|
78,754,323
| 19,648,465
|
Error in PyCharm: Package requirement 'Django==5.0.7' is not satisfied
|
<p>I'm trying to set up a Django project in PyCharm, but I keep encountering the following error:</p>
<pre><code>Package requirement 'Django==5.0.7' is not satisfied
</code></pre>
<p>I have already tried the following steps:</p>
<ol>
<li>Ensured that Django==5.0.7 is listed in my requirements.txt file.</li>
<li>pip install -r requirements.txt</li>
<li>Verified that my virtual environment is activated</li>
</ol>
<p>Despite these efforts, PyCharm still doesn't recognize the installed Django package.</p>
<p>Here are some additional details:</p>
<ul>
<li>PyCharm version: 2023.3.2</li>
<li>Python version: 3.10.1</li>
<li>OS: Windows 10</li>
</ul>
<p>What could be causing this issue, and how can I resolve it?</p>
|
<python><django><pycharm><virtualenv><requirements.txt>
|
2024-07-16 11:33:05
| 1
| 705
|
coder
|
78,754,301
| 5,885,054
|
Find a Substring in String with Boardtools is not working if the string contains special chars
|
<p>Today I got frustrated. I'm implementing some very simple substring-finding logic, and it turned out to be a more complex situation than expected.</p>
<p>Let's consider this small python code:</p>
<pre><code>word_one, word_two = ['(/ %$ss%*++~m/()', 'ss?=)?=)()']
field_content = "5uz8439qvnhfruieoghbvui gfurieaogv ui(/ %$SS%*++~m/() 123456789651321654 'ss?=)?=)() 87788+#~~##+"
pos_word_one = field_content.index(word_one)
pos_word_two = field_content.index(word_two)
print(pos_word_one) # throws a ValueError, but the substring is included
print(pos_word_two) # 74
</code></pre>
<p>Does someone have an idea? Is the encoding important, or does it help here? With regex, we will have more problems because then I need to auto-escape all these special characters.</p>
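<p>One detail that may matter here (an observation about the sample data, offered as a possible explanation): <code>str.index</code> performs an exact, case-sensitive match, and <code>word_one</code> contains lowercase <code>ss</code> while <code>field_content</code> contains uppercase <code>SS</code>. A minimal illustration with hypothetical data:</p>

```python
field = "abc(/ %$SS%*++~m/()xyz"
needle = '(/ %$ss%*++~m/()'

# str.index is an exact, case-sensitive match, so this raises ValueError:
try:
    field.index(needle)
except ValueError:
    print("not found: lowercase 'ss' does not match uppercase 'SS'")

# Lower-casing both sides makes the search case-insensitive:
print(field.lower().index(needle.lower()))  # 3
```
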
|
<python><python-3.x><regex><string>
|
2024-07-16 11:29:26
| 1
| 497
|
Allan Karlson
|
78,753,870
| 2,749,397
|
I'm using a custom transformation, the PNG is OK but the PDF is empty
|
<p>This question is a follow up to <a href="https://stackoverflow.com/questions/78746962/making-a-poster-how-to-place-artists-in-a-figure-using-mm-and-top-left-origin">Making a poster: how to place Artists in a Figure using mm and top-left origin</a></p>
<p>This code:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.transforms import Affine2D
mm, dpi = 1/25.4, 96
w, h = 841, 1189
trans = Affine2D().scale(mm*dpi).scale(1,-1).translate(0,h*mm*dpi)
fig = plt.figure(figsize=(w*mm, h*mm), layout='none', dpi=dpi)
fig.patches.append(
    plt.Rectangle((230, 190), 40, 18, transform=trans))
fig.savefig('A0.pdf')
fig.savefig('A0.png')
</code></pre>
<p>produces an empty PDF, while the PNG file is correct.</p>
<p>Unfortunately the print shop with the wide inkjet plotter wants a PDF. While I could transform the raster file to PDF, I'd prefer to produce directly a correct PDF.</p>
<p>Is there something wrong in my code or it shouldn't happen and it's a bug I have to report to Matplotlib devs?</p>
<hr />
<p>This code, that does not use the custom transformation, produces a correct PDF, with the rectangle in the correct position and the right dimensions.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
mm, dpi = 1/25.4, 96
w, h = 841, 1189
fig = plt.figure(dpi=dpi, layout='none', figsize=(w*mm,h*mm))
fig.patches.append(plt.Rectangle((660/w, 1-1000/h), 40/w, -40/h,
                                 transform=fig.transFigure))
fig.savefig('A0.pdf')
</code></pre>
<hr />
<p>UPDATE</p>
<pre><code>...
fig = plt.figure(figsize=(w*mm, h*mm), layout='none', dpi=dpi)
fig.patches.append(
    plt.Rectangle((230, 190), 40, 18, transform=trans))
fig.patches.append(
    plt.Rectangle((600/w, 1-190/h), 40/w, -18/h, transform=fig.transFigure))
fig.savefig('A0.pdf')
</code></pre>
<p>produces a non empty PDF, but only the second rectangle, drawn w/o using the custom transformation, is visible.</p>
|
<python><matplotlib><pdf><coordinate-transformation>
|
2024-07-16 09:58:23
| 3
| 25,436
|
gboffi
|
78,753,860
| 4,647,519
|
What is the proper way of including examples in Python docstrings?
|
<p>How do I include examples for function calls in Python docstrings (compare the <code>@examples</code> tag in R documentation)? What is the proper way to include examples?</p>
<p>I'm looking for something like this:</p>
<pre><code>def my_func(x):
    """prints x.

    :param x: string to print

    :example:
    my_func("hello world!")
    """
    print(x)
</code></pre>
<p>This is not a duplicate of <a href="https://stackoverflow.com/questions/64451966/how-to-embed-code-examples-into-a-docstring">How to embed code examples into a docstring?</a> I'm not specifically asking about Sphinx, even though this may be part of the answers.</p>
|
<python><docstring>
|
2024-07-16 09:55:54
| 1
| 545
|
tover
|
78,753,848
| 18,695,803
|
Using PyROOT with Anaconda. C system headers must be installed
|
<p>I want to install ROOT and use it in python to execute a script using <code>import ROOT</code>.</p>
<p>I used:</p>
<pre class="lang-bash prettyprint-override"><code>conda config --set channel_priority strict
conda create -c conda-forge --name root_env root
conda activate root_env
</code></pre>
<p>to install ROOT.</p>
<p>I then selected the root_env environment in Pycharm:</p>
<p><a href="https://i.sstatic.net/FhmGEiVo.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FhmGEiVo.jpg" alt="image of my pycharm editor" /></a></p>
<p>This throws the following error:</p>
<pre><code>C system headers (glibc/Xcode/Windows SDK) must be installed.
fatal error: module map file 'module.modulemap' not found
Warning in cling::IncrementalParser::CheckABICompatibility():
Failed to extract C++ standard library version.
Warning in cling::IncrementalParser::CheckABICompatibility():
Possible C++ standard library mismatch, compiled with _LIBCPP_ABI_VERSION '1'
Extraction of runtime standard library version was: ''
Replaced symbol at_quick_exit cannot be found in JIT!
Module Darwin not found.
Failed to load module std
<<< cling interactive line includer >>>: fatal error: module file '/tmp/root-feedstock/miniforge3/conda-bld/root_base_1719007212029/work/build-dir/lib/Darwin.pcm' not found: module file not found
<<< cling interactive line includer >>>: note: imported by module '_Builtin_intrinsics' in '/opt/anaconda3/envs/root_env/lib/_Builtin_intrinsics.pcm'
Failed to load module _Builtin_intrinsics
Failed to load module _Builtin_intrinsics
Failed to load module std
* Break * segmentation violation
</code></pre>
<p><a href="https://i.sstatic.net/QSsFaohn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QSsFaohn.png" alt="install path of xcode tools" /></a></p>
<p>Things I have already tried:</p>
<ul>
<li>uninstall brew (environment variable clashes)</li>
<li>reinstall anaconda distribution</li>
<li>install anaconda with brew</li>
<li>reinstall xcode tools</li>
</ul>
<p>How can I use PyROOT?</p>
|
<python><anaconda><root-framework>
|
2024-07-16 09:54:17
| 0
| 464
|
Felkru
|
78,753,611
| 7,664,840
|
Maps in pdf format generated by python cartopy cannot be concatenated in illustrator (possibly due to Outlines outside the artboard scope)
|
<p>I want to use illustrator to concatenate my PDF graphics, which were generated using python cartopy. But when I moved the two maps closer, I got an error and I couldn't move the maps. The detailed error information is as follows: Error: Can't move the objects. The requested transformation would make some objects fall completely off the drawing area.
<a href="https://i.sstatic.net/21ZhOJM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/21ZhOJM6.png" alt="enter image description here" /></a></p>
<p>Then I turn on outline mode (Ctrl+Y), you can see that there are a lot of outlines outside the artboard, which may be the reason why the graphics can not be joined.<a href="https://i.sstatic.net/Jp0D3zh2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jp0D3zh2.png" alt="enter image description here" /></a>
I tried to select and delete these lines outside the artboard, but I did not succeed. I hope to get professional help here. Thank you all in advance.</p>
|
<python><adobe-illustrator><cartopy>
|
2024-07-16 08:59:01
| 0
| 846
|
Li Yupeng
|
78,753,534
| 719,276
|
Set QTableView cell size when using fetch more?
|
<p>I can set the size of my table rows using the following code:</p>
<pre><code> table = QTableView()
table.setModel(CustomModel(dataFrame))
table.resizeRowsToContents()
</code></pre>
<p>My CustomModel.data() function returns a size when role is Qt.SizeHintRole:</p>
<pre><code>class CustomModel(QAbstractTableModel):
    [...]
    def data(self, index: QModelIndex, role=Qt.ItemDataRole):
        if not index.isValid():
            return None
        path = self._dataframe.iloc[index.row(), index.column()]
        pixmap = QPixmap(path)
        if role == Qt.SizeHintRole:
            return pixmap.size()
</code></pre>
<p>But when I modify my model so that it can fetch more items, as in <a href="https://doc.qt.io/qt-5/qtwidgets-itemviews-fetchmore-example.html" rel="nofollow noreferrer">the official example</a>, the size is not taken into account anymore:</p>
<pre><code>def setDataFrame(self, dataFrame: pd.DataFrame):
    self._dataframe = dataFrame
    self.beginResetModel()
    self._rowCount = 0
    self.endResetModel()
    return

def canFetchMore(self, parent=QModelIndex()) -> bool:
    return False if parent.isValid() else self._rowCount < len(self._dataframe)

def fetchMore(self, parent: QModelIndex | QPersistentModelIndex) -> None:
    if parent.isValid(): return
    remainder = len(self._dataframe) - self._rowCount
    itemsToFetch = min(100, remainder)
    if itemsToFetch <= 0: return
    self.beginInsertRows(QModelIndex(), self._rowCount, self._rowCount + itemsToFetch - 1)
    self._rowCount += itemsToFetch
    self.endInsertRows()

def rowCount(self, parent=QModelIndex()) -> int:
    return self._rowCount if parent == QModelIndex() else 0
</code></pre>
<p>When I change <code>rowCount()</code> so that it always returns <code>len(self._dataframe)</code>, the size works again (but the "fetch more" feature breaks):</p>
<pre><code>def rowCount(self, parent=QModelIndex()) -> int:
    return len(self._dataframe) if parent == QModelIndex() else 0
</code></pre>
<p>Any idea why this happens?</p>
|
<python><qt><qtableview><qlistview><qabstractitemmodel>
|
2024-07-16 08:42:14
| 1
| 11,833
|
arthur.sw
|
78,753,422
| 17,580,381
|
Python - selenium - element not interactable
|
<p>I know there's quite a lot in Stackoverflow (and elsewhere) about "element not interactable" but I don't think this is a duplicate.</p>
<p>My issue is that I get the selenium.common.exceptions.ElementNotInteractableException <strong>ONLY</strong> if I try to run my code in headless mode.</p>
<p>Here's the code. Don't worry about what it's trying to achieve other than provide a <a href="https://stackoverflow.com/help/minimal-reproducible-example">Minimal Reproducible Example</a></p>
<pre><code>from selenium import webdriver
from selenium.webdriver import ChromeOptions
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
options = ChromeOptions()
options.add_argument("--headless")
user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"
options.add_argument(f"user-agent={user_agent}")
QUERY = "museums in london"
# This bypasses the "automation" warning
def bypass(driver):
    try:
        wait = WebDriverWait(driver, 5)
        for button in wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "button"))):
            if button.text.strip() == "Reject all":
                button.click()
                break
    except Exception:
        pass
with webdriver.Chrome(options=options) as driver:
    driver.get("https://www.google.com")
    bypass(driver)
    selector = By.CSS_SELECTOR, 'form[action="/search"] textarea[name=q]'
    textarea = WebDriverWait(driver, 5).until(EC.presence_of_element_located(selector))
    # The following Action sequence makes no difference to the program's behaviour
    ActionChains(driver).move_to_element(textarea).click(textarea).perform()
    textarea.send_keys(QUERY+Keys.RETURN)
    urls = set()
    selector = By.CSS_SELECTOR, "cite[role=text]"
    for cite in WebDriverWait(driver, 5).until(EC.presence_of_all_elements_located(selector)):
        if text := cite.text.strip():
            urls.add(text)
    print(*urls, sep="\n")
</code></pre>
<p>As shown, I'm trying to run the Chrome driver in headless mode. If I do that, I consistently get the ElementNotInteractableException at the point where I try to invoke send_keys().</p>
<p>If you remove the addition of the --headless argument to the ChromeOptions class, the code runs without error.</p>
<p>I had hoped that the ActionChains code would help but it makes no difference to the program's behaviour whether in headless mode or not. I just left it in in case anyone suggested it as a potential workaround.</p>
<p>Why is this only a problem when in headless mode? What can be done about it?</p>
|
<python><selenium-webdriver>
|
2024-07-16 08:16:57
| 2
| 28,997
|
Ramrab
|
78,753,126
| 15,222,211
|
How to clear pydantic object to default values (reset to defaults)?
|
<p>I need a Pydantic object to be reset to its default values. I cannot recreate the object because, in my use case, there are references to it that need to be preserved. Could you please show me the best practice for this?</p>
<p>I need something like this example.</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Field


class MyObject(BaseModel):
    value: int = Field(default=1)


my_object = MyObject(value=2)
# Reference to the object that needs to be preserved
reference = [my_object]
# How to clear the object back to default values?
my_object.clear()
assert reference[0].value == 1
</code></pre>
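<p>One possible approach (a sketch assuming Pydantic v2; the helper name <code>reset_to_defaults</code> is my own invention, not an established API): build a fresh instance and copy its field values onto the existing object in place, so external references stay valid:</p>

```python
from pydantic import BaseModel, Field


class MyObject(BaseModel):
    value: int = Field(default=1)


def reset_to_defaults(obj: BaseModel) -> None:
    # Only valid when every field has a default: a fresh instance is
    # built and its values are copied onto the existing object in place,
    # so lists/dicts holding references to obj keep seeing the same object.
    fresh = type(obj)()
    for name in type(obj).model_fields:
        setattr(obj, name, getattr(fresh, name))


my_object = MyObject(value=2)
reference = [my_object]  # reference that must be preserved
reset_to_defaults(my_object)
print(reference[0].value)  # 1
```
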
|
<python><pydantic>
|
2024-07-16 07:10:21
| 2
| 814
|
pyjedy
|
78,753,057
| 1,145,011
|
Python scrapy: get all URLs in the webpage without duplicate URLs
|
<p>I want to fetch all URLs in a webpage, without duplicates, using Python Scrapy. I want to list only URLs under <code>allowed_domains = en.wikipedia.org</code>. If a page contains external links, I don't want to scan those external links.</p>
<p>When I use the code below, I get a few duplicate URLs, not sure what to change in the settings.py file.</p>
<pre><code>from scrapy.spiders import CrawlSpider
from scrapy.spiders import Rule
from scrapy.linkextractors import LinkExtractor
from scrapy import Item
from scrapy import Field
class UrlItem(Item):
    url = Field()


class WikiSpider(CrawlSpider):
    name = 'wiki'
    allowed_domains = ['en.wikipedia.org']
    start_urls = ['https://en.wikipedia.org/wiki/Main_Page']

    rules = (
        Rule(LinkExtractor(), callback='parse_url', follow=True),
    )

    def parse_url(self, response):
        item = UrlItem()
        item['url'] = response.url
        return item
</code></pre>
<p>Then via terminal I'm running below to save the links into CSV file</p>
<pre><code>scrapy crawl wiki -o wiki.csv -t csv
</code></pre>
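<p>For what it's worth, a common cause of "duplicates" is that the same page is reachable under several slightly different URLs (trailing slashes, fragments, query strings), which Scrapy's request fingerprinting treats as distinct. A framework-independent sketch of tracking seen URLs in a set (the class name and the normalization rule are invented for illustration):</p>

```python
class SeenUrls:
    """Tracks URLs already emitted, with light normalization."""

    def __init__(self):
        self._seen = set()

    def accept(self, url: str) -> bool:
        # Treat trailing-slash variants as the same page; extend the
        # normalization as needed (query strings, fragments, ...).
        key = url.rstrip('/')
        if key in self._seen:
            return False
        self._seen.add(key)
        return True


seen = SeenUrls()
urls = [
    "https://en.wikipedia.org/wiki/Main_Page",
    "https://en.wikipedia.org/wiki/Main_Page/",
    "https://en.wikipedia.org/wiki/Python",
]
print([u for u in urls if seen.accept(u)])
```

<p>The same check could be applied inside <code>parse_url</code> before yielding an item, so each normalized URL is written to the CSV at most once.</p>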
|
<python><scrapy>
|
2024-07-16 06:52:18
| 1
| 1,551
|
user166013
|
78,752,899
| 6,357,916
|
LR not decaying for pytorch AdamW even after hundreds of epochs
|
<p>I have the following code using <code>AdamW</code> optimizer from Pytorch:</p>
<pre><code>optimizer = AdamW(params=self.model.parameters(), lr=0.00005)
</code></pre>
<p>I tried to log in using wandb as follows:</p>
<pre><code>lrs = {f'lr_group_{i}': param_group['lr']
       for i, param_group in enumerate(self.optimizer.param_groups)}
wandb.log({"train_loss": avg_train_loss, "val_loss": val_loss, **lrs})
</code></pre>
<p><a href="https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html" rel="nofollow noreferrer">Note that</a> default value for <code>weight_decay</code> parameter is <code>0.01</code> for <code>AdamW</code>.</p>
<p>When I checked wandb dashboard, it showed the same LR for AdamW even after 200 epochs and it was not decaying at all. I tried this in several runs.</p>
<p><a href="https://i.sstatic.net/IdppF0Wk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IdppF0Wk.png" alt="enter image description here" /></a></p>
<p>Why the LR decay is not happening?</p>
<p>Also, it was showing the LR for only one param group. Why is that? It seems that I'm missing something basic here. Can someone point it out?</p>
|
<python><machine-learning><deep-learning><pytorch>
|
2024-07-16 06:09:44
| 1
| 3,029
|
MsA
|
78,752,644
| 19,067,218
|
Pytest Fixtures Not Found When Running Tests from PyCharm IDE
|
<p>I am having trouble with pytest fixtures in my project. I have a root <code>conftest.py</code> file with some general-use fixtures and isolated <code>conftest.py</code> files for specific tests. The folder structure is as follows:</p>
<pre><code>product-testing/
├── conftest.py # Root conftest.py
├── tests/
│ └── grpc_tests/
│ └── collections/
│ └── test_collections.py
└── fixtures/
└── collections/
└── conftest.py # Used by test_collections.py specifically
</code></pre>
<p>When I try to run the tests from the IDE (PyCharm) using the "run button" near the test function, pytest can't initialize fixtures from the root <code>conftest.py</code>. The test code is something like this:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import allure
from faker import Faker
from fixtures.collections.conftest import collection
fake = Faker()
@allure.title("Get list of collections")
def test_get_collections_list(collection, postgres):
    with allure.step("Send request to get the collection"):
        response = collection.collection_list(
            limit=1,
            offset=0,
            # ...And the rest of the code
        )
</code></pre>
<p>here is a contest from <code>fixtures/collection/</code></p>
<pre class="lang-py prettyprint-override"><code>import datetime
import pytest
from faker import Faker
from path.to.file import pim_collections_pb2
fake = Faker()
@pytest.fixture(scope="session")
def collection(grpc_pages):
    def create_collection(collection_id=None, store_name=None, items_ids=None, **kwargs):
        default_params = {
            "id": collection_id,
            "store_name": store_name,
            "item_ids": items_ids,
            "is_active": True,
            "description_eng": fake.text(),
            # rest of the code
</code></pre>
<p>and this is the root <code>conftest.py</code> file</p>
<pre class="lang-py prettyprint-override"><code>import pytest
from faker import Faker
from pages.manager import DBManager, GrpcPages, RestPages
fake = Faker()
@pytest.fixture(scope="session")
def grpc_pages():
    return GrpcPages()


@pytest.fixture(scope="session")
def rest_pages():
    return RestPages()


@pytest.fixture(scope="session")
def postgres():
    return DBManager()

# some other code
</code></pre>
<p>The error message I get is:</p>
<pre><code>test setup failed
file .../tests/grpc_tests/collections/test_collections.py, line 9
@allure.title("Create a multi-collection")
def test_create_multicollection(collection, postgres):
file .../fixtures/collections/conftest.py, line 10
@pytest.fixture(scope="session")
def collection(grpc_pages):
E fixture 'grpc_pages' not found
> available fixtures: *session*faker, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, collection, doctest_namespace, factoryboy_request, faker, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
</code></pre>
<p>However, when I run the tests through a terminal with <code>pytest .</code>, there is no error. It seems like pytest can see the root <code>conftest.py</code> in one case but not in another.</p>
<p>I have tried explicitly importing the missing fixtures directly from the root <code>conftest.py</code> to resolve the issue. This works for single test runs but causes errors when running all tests using CLI commands like <code>pytest .</code>, with the following error:</p>
<pre><code>ImportError while importing test module '/path/to/test_collections.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
.../importlib/__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
tests/.../test_collections.py:6: in <module>
from conftest import grpc_pages
E ImportError: cannot import name 'grpc_pages' from 'conftest' (/path/to/fixtures/users/conftest.py)
</code></pre>
<p>It feels like pytest is trying to find <code>grpc_pages</code> in the isolated <code>conftest.py</code> instead of the root file.</p>
<p>Also, running isolated tests through a command like</p>
<ul>
<li><code>pytest -k "test_create_multicollection" ./tests/grpc_tests/collections/</code></li>
<li><code>pytest path/to/test_collections.py::test_create_multicollection </code></li>
</ul>
<p>works when the import is not explicit. However, I need to make the <code>run</code> button in the IDE work for less experienced, non-technical team members.</p>
<p>So my questions are:</p>
<ol>
<li>Has anyone encountered this issue where PyCharm can't find fixtures from the root conftest.py when running individual tests, but pytest can when run from the command line?</li>
<li>How can I fix this so that both PyCharm's run button and command-line pytest work consistently?</li>
</ol>
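<p>One pattern that usually makes fixture discovery independent of the runner's working directory (a sketch — the module paths below are hypothetical and must be adapted to the real layout) is to keep shared fixtures in ordinary modules, not files named <code>conftest.py</code>, and register them once from the root <code>conftest.py</code>:</p>

```python
# conftest.py at the repository root (hypothetical module paths).
# Registering fixture modules as pytest plugins makes their fixtures visible
# project-wide, regardless of which directory the IDE treats as the rootdir.
pytest_plugins = [
    "fixtures.grpc.pages",        # e.g. a module defining the grpc_pages fixture
    "fixtures.collections.data",
]
```

<p>Keeping the fixture code out of files named <code>conftest.py</code> avoids the import ambiguity seen in the traceback above, where <code>from conftest import ...</code> resolved to the wrong file.</p>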
<p><strong>Additional Context</strong>:</p>
<ul>
<li>Pytest version: pytest 7.4.4</li>
<li>PyCharm version: 2023.2.1</li>
<li>Python version: 3.9.6</li>
</ul>
<p>Thank you in advance for your help!</p>
|
<python><pycharm><pytest><python-decorators><pytest-fixtures>
|
2024-07-16 04:01:21
| 2
| 344
|
llRub3Nll
|
78,752,638
| 10,634,126
|
How can I get distinct array fields efficiently?
|
<p>My collection with fields:</p>
<pre><code>[
    {
        "_id": xxx,
        "field": {
            "subfield": [
                {
                    "k1": "a",
                    "k2": "b",
                    ...
                },
                {
                    "k1": "a",
                    "k2": "b",
                    ...
                },
                {
                    "k1": "c",
                    "k2": "d",
                },
                ...
            ]
        },
        "kn": "z",
        ...
    }
]
</code></pre>
</code></pre>
<p>If <code>subfield.0.k1</code> is <code>a</code> then <code>subfield.0.k2</code> will always be <code>b</code>, etc.</p>
<p>I am trying to get the distinct values of <code>k1</code> along with their value counts, and then the unique (singular) value of <code>k2</code> that is associated with <code>k1</code>. But due to the amount of documents and variable-length arrays, trying to <code>$unwind</code> the matched documents in an aggregation pipeline kills the process or maxes out object size.</p>
<p>I managed to work within limits to retrieve value counts in PyMongo:</p>
<pre><code>from collections import Counter
from pymongo import MongoClient
db = MongoClient(...).get_database(...).get_collection(...)
query = [
    {"$match": {"kn": "z"}},
    {"$group": {"_id": "$field.subfield.k1", "count": {"$sum": 1}}}
]
results = list(db.aggregate(query)) # this returns a list of dicts where grouped unique _id is an array
results = [d["_id"] * d["count"] for d in results] # expanding these list _ids enables us to get value counts
results = [element for sublist in results for element in sublist]
results = Counter(results)
</code></pre>
<p>I'm looking for a way to find the unique value of <code>k2</code> associated with each value of <code>k1</code>. I could <code>find_one()</code> each returned counter value, but this is too time-consuming.</p>
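<p>One server-side sketch (hedged — field names are taken from the sample above, and <code>allowDiskUse=True</code> lets MongoDB spill the <code>$unwind</code>/<code>$group</code> stages to disk instead of hitting memory limits): since <code>k2</code> is functionally determined by <code>k1</code>, <code>$first</code> can pick it up in the same <code>$group</code> that counts:</p>

```python
# Sketch of a pipeline returning each distinct k1 with its count and the
# (single) k2 associated with it. Projecting before $unwind keeps the
# intermediate documents small.
pipeline = [
    {"$match": {"kn": "z"}},
    {"$project": {"_id": 0, "pairs": "$field.subfield"}},
    {"$unwind": "$pairs"},
    {"$group": {
        "_id": "$pairs.k1",
        "count": {"$sum": 1},
        "k2": {"$first": "$pairs.k2"},  # k2 is determined by k1, so any element works
    }},
]
# results = list(db.aggregate(pipeline, allowDiskUse=True))
```

<p>Whether this stays within limits depends on the data; <code>allowDiskUse</code> is the usual escape hatch when it doesn't.</p>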
|
<python><mongodb><pymongo>
|
2024-07-16 03:57:37
| 0
| 909
|
OJT
|
78,752,604
| 7,099,374
|
How to test Amazon Athena queries
|
<p>I have a rather complicated Athena query, which I would like to test on a local machine without connecting to Athena. I specified some mock data for testing purposes, and I was hoping that I could use something simple like <strong>SQLite</strong> to spin up a local database, populate it with mock data, run the tests and tear the database down.</p>
<p>The issue is that the SQLite dialect is just different enough for my Athena query to fail.
What are good practices for this task? Should I connect to actual Athena and create mock data there? Are there any tools that can convert SQL queries between dialects?</p>
<p>Testing is being done with Python.</p>
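<p>For the dialect-neutral parts of a query, the in-memory route works with nothing but the standard library (a sketch — the table and mock rows are invented for illustration; Athena itself speaks a Trino/Presto-derived dialect, so anything dialect-specific still needs a Trino-compatible engine rather than SQLite):</p>

```python
import sqlite3

def run_on_mock_db(query, rows):
    """Spin up an in-memory SQLite DB, load mock rows, run the query, tear down."""
    con = sqlite3.connect(":memory:")
    try:
        con.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
        con.executemany("INSERT INTO events VALUES (?, ?)", rows)
        return con.execute(query).fetchall()
    finally:
        con.close()

result = run_on_mock_db(
    "SELECT user_id, SUM(amount) FROM events GROUP BY user_id ORDER BY user_id",
    [(1, 2.5), (1, 7.5), (2, 1.0)],
)
print(result)  # [(1, 10.0), (2, 1.0)]
```

<p>Each test gets a fresh database and teardown is automatic, which matches the spin-up/tear-down workflow described above.</p>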
|
<python><sqlite><unit-testing><testing><amazon-athena>
|
2024-07-16 03:44:58
| 2
| 836
|
Amuoeba
|
78,752,567
| 13,642,249
|
Catch-up time-lapse buffering and lag issues while working with FFpyplayer frames and QPixmap with a UDP stream in MPEG-TS format
|
<p>I am trying to build from the code found in this <a href="https://stackoverflow.com/a/58604963/13642249">post</a> for my use case. However, I'm having trouble with python <a href="https://matham.github.io/ffpyplayer/player.html" rel="nofollow noreferrer"><code>FFpyPlayer</code></a>; with traditional <code>ffplay</code>, I don't have these kinds of issues. When running my application, I notice a catch-up time-lapse buffering effect, where the video playback speeds up considerably and then returns to normal speed. On some videos I have seen somewhat of a stutter or lag. I am using <a href="https://github.com/matham/ffpyplayer" rel="nofollow noreferrer"><code>ffpyplayer</code></a> to handle video playback from udp stream within a background thread with the frames. Not sure where or how, but I believe the issue arises when retrieving and displaying frames using PyQt's QPixmap. Below is the command used to send udp stream using <code>ffmpeg</code> with this stock <a href="https://videos.pexels.com/video-files/2099536/2099536-hd_1920_1080_30fps.mp4" rel="nofollow noreferrer">video</a>:</p>
<pre><code>ffmpeg -re -stream_loop -1 -i .\2099536-hd_1920_1080_30fps.mp4 -f mpegts udp://127.0.0.1:51234?localaddr=127.0.0.1
</code></pre>
<p>And here is <code>ffplay</code> command that runs with no problem:</p>
<pre><code>ffplay udp://127.0.0.1:51234
</code></pre>
<p>Here is example code that uses <code>ffpyplayer</code> to display it, with the issues mentioned:</p>
<pre><code>from PyQt6 import QtGui, QtWidgets, QtCore
from ffpyplayer.player import MediaPlayer
import time
from threading import Thread

class VideoDisplayWidget(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        self.player = None
        self.layout = QtWidgets.QVBoxLayout(self)
        self.video_frame_widget = QtWidgets.QLabel()
        self.layout.addWidget(self.video_frame_widget)
        self.frame_rate = 30
        self.latest_frame = QtGui.QPixmap()
        self.timer = QtCore.QTimer()
        self.timer.setTimerType(QtCore.Qt.TimerType.PreciseTimer)
        self.timer.timeout.connect(self.nextFrameSlot)
        self.timer.start(int(1000.0 / self.frame_rate))
        self.play()
        self.setLayout(self.layout)

    def play(self):
        play_thread = Thread(target=self._play, daemon=True)
        play_thread.start()

    def _play(self):
        player = MediaPlayer("udp://127.0.0.1:51234")
        val = ''
        while val != 'eof':
            frame, val = player.get_frame()
            if val != 'eof' and frame is not None:
                img, t = frame
                # display img
                w = img.get_size()[0]
                h = img.get_size()[1]
                data = img.to_bytearray()[0]
                qimage = QtGui.QImage(data, w, h, QtGui.QImage.Format.Format_RGB888)
                self.latest_frame = QtGui.QPixmap.fromImage(qimage)
            time.sleep(0.001)

    def nextFrameSlot(self):
        self.video_frame_widget.setPixmap(self.latest_frame)

if __name__ == "__main__":
    current_port = 51234
    app = QtWidgets.QApplication([])
    player = VideoDisplayWidget()
    player.show()
    exit(app.exec())
</code></pre>
<p>Here is a code block adapted from this <a href="https://stackoverflow.com/q/59611075/13642249">post</a> that works with no issues using <code>ffpyplayer</code> with <code>opencv</code>:</p>
<pre class="lang-py prettyprint-override"><code>from ffpyplayer.player import MediaPlayer
import time
from pathlib import Path
import numpy as np
import cv2

player = MediaPlayer("udp://127.0.0.1:51234")
val = ''
while val != 'eof':
    frame, val = player.get_frame()
    if val != 'eof' and frame is not None:
        img, t = frame
        # display img
        w = img.get_size()[0]
        h = img.get_size()[1]
        arr = np.uint8(np.array(img.to_bytearray()[0]).reshape(h, w, 3))  # h - height of frame, w - width of frame, 3 - number of channels in frame
        arr = cv2.cvtColor(arr, cv2.COLOR_BGR2RGB)
        cv2.imshow('test', arr)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            break
</code></pre>
<p>Is there a specific setting or parameter I should be adjusting in <code>ffpyplayer</code> and/or <code>pyqt</code> to prevent this catch-up time-lapse buffering effect?</p>
|
<python><python-3.x><pyqt6><ffpyplayer>
|
2024-07-16 03:15:33
| 0
| 1,422
|
kyrlon
|
78,752,520
| 322,909
|
How can I get the subarray indicies of a binary array using numpy?
|
<p>I have an array that looks like this</p>
<pre><code>r = np.array([1, 0, 0, 1, 1, 1, 0, 1, 1, 1])
</code></pre>
<p>and I want an output of</p>
<pre><code>[(0, 0), (3, 5), (7, 9)]
</code></pre>
<p>right now I am able to accomplish this with the following function</p>
<pre><code>def get_indicies(array):
    indicies = []
    xstart = None
    for x, col in enumerate(array):
        if col == 0 and xstart is not None:
            indicies.append((xstart, x - 1))
            xstart = None
        elif col == 1 and xstart is None:
            xstart = x
    if xstart is not None:
        indicies.append((xstart, x))
    return indicies
</code></pre>
<p>However, for arrays with 2 million elements this method is slow (~8 seconds). Is there a way, using numpy's "built-ins" (<code>.argwhere</code>, <code>.split</code>, etc.), to make this faster? <a href="https://stackoverflow.com/questions/7352684">This thread</a> is the closest thing I've found; however, I can't seem to get the right combination to solve my problem.</p>
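<p>For reference, the run boundaries can be read off the transitions of a zero-padded copy in a few vectorized operations (a sketch of the standard <code>np.diff</code>/<code>np.flatnonzero</code> idiom, using the sample array above):</p>

```python
import numpy as np

r = np.array([1, 0, 0, 1, 1, 1, 0, 1, 1, 1])

# Pad with a zero on each side so runs touching either edge still produce transitions.
d = np.diff(np.concatenate(([0], r, [0])))
starts = np.flatnonzero(d == 1)       # indices where a run of 1s begins
ends = np.flatnonzero(d == -1) - 1    # inclusive indices where a run ends
pairs = [(int(a), int(b)) for a, b in zip(starts, ends)]
print(pairs)  # [(0, 0), (3, 5), (7, 9)]
```

<p>This replaces the Python-level loop with three array passes, which is typically orders of magnitude faster on multi-million-element inputs.</p>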
|
<python><numpy>
|
2024-07-16 02:46:00
| 2
| 13,809
|
John
|
78,752,372
| 474,911
|
How does the iterm2 Python API determine which tab a script is running in?
|
<p>I have a set of window tabs I use regularly, and when I start up iTerm2, I restore this using "Window->Restore Window Arrangement".</p>
<p>In .bashrc, I'd like to use the Iterm Python API to set the bash history filename to match the tab's title.</p>
<p>To do this, I wrote a Python script using the iterm python API to get the tab title:</p>
<pre><code> import iterm2
async def main(connection):
    app = await iterm2.async_get_app(connection)
    window = app.current_window
    tab = app.current_terminal_window.current_tab
    title = await tab.async_get_variable("titleOverride")

iterm2.run_until_complete(main)
</code></pre>
<p>However, this <em>doesn't work</em> as "current_tab" is the frontmost tab, which happens to be the last tab opened - <em>not</em> the tab the current .bashrc is running in.</p>
<p>For example, in the screenshot, the "current_tab" is "Apache", so <em>all</em> the instances of the script report it as the "current_tab", whereas what I need is 'Src', 'Bld', etc.</p>
<p>I've consulted the <a href="https://iterm2.com/python-api/" rel="nofollow noreferrer">documentation</a> and can't find a solution.</p>
<p>How can I do this?</p>
<p><a href="https://i.sstatic.net/wfhPkY8t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wfhPkY8t.png" alt="enter image description here" /></a></p>
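<p>One direction worth noting (a sketch, not a confirmed recipe from the iterm2 docs): iTerm2 exports an <code>ITERM_SESSION_ID</code> environment variable into each shell, and the part after the colon is the session's unique identifier, so a <code>.bashrc</code> can identify its own session rather than the frontmost one:</p>

```python
import os

def session_uuid(raw: str) -> str:
    # ITERM_SESSION_ID has the form "w0t2p0:UUID"; the part after the colon is
    # the session's unique id (assumed to match session.session_id in the API).
    return raw.split(":", 1)[1]

raw = os.environ.get("ITERM_SESSION_ID", "w0t2p0:EXAMPLE-UUID")
print(session_uuid(raw))
```

<p>Inside <code>main(connection)</code>, iterating <code>app.windows</code> → <code>window.tabs</code> → <code>tab.sessions</code> and comparing each <code>session.session_id</code> against this value would then locate the owning tab and its title (an untested assumption to verify against the API).</p>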
|
<python><iterm2>
|
2024-07-16 01:12:52
| 1
| 860
|
Michael Wilson
|
78,752,355
| 7,498,328
|
How to fit an image exactly inside an Excel cell using Python and XlsxWriter?
|
<p>I'm working on a Python script that inserts images into Excel cells using the XlsxWriter library. My goal is to have each image fit precisely within its cell, without any overflow or unused space.</p>
<p>Here's a simplified version of my current approach:</p>
<pre class="lang-py prettyprint-override"><code>import xlsxwriter
from PIL import Image
import io

workbook = xlsxwriter.Workbook('image_in_cell.xlsx')
worksheet = workbook.add_worksheet()

# Set column width and row height
col_width = 25    # in Excel units
row_height = 100  # in Excel units
worksheet.set_column(0, 0, col_width)
worksheet.set_row(0, row_height)

# Open and resize the image
image_path = 'sample_image.jpg'
with Image.open(image_path) as img:
    img_width, img_height = img.size
    aspect_ratio = img_width / img_height
    cell_width = col_width * 8      # Approximate conversion
    cell_height = row_height * 1.2  # Approximate conversion
    if aspect_ratio > cell_width / cell_height:
        new_width = cell_width
        new_height = int(cell_width / aspect_ratio)
    else:
        new_height = int(cell_height)
        new_width = int(cell_height * aspect_ratio)
    img = img.resize((new_width, new_height), Image.LANCZOS)

# Convert image to bytes
buffer = io.BytesIO()
img.save(buffer, format='JPEG')
image_data = buffer.getvalue()

# Write the image to the cell
worksheet.insert_image(0, 0, image_path, {'image_data': image_data})
workbook.close()
</code></pre>
<p>Is there a way to get a perfect and tight fit inside the cell for an image?</p>
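<p>An alternative that avoids resizing with PIL at all: <code>insert_image</code> accepts <code>x_scale</code>/<code>y_scale</code> options, so given the cell's pixel size (derived with the same approximate conversions as above) the image can be scaled to fill the cell exactly. A minimal sketch of the arithmetic (the pixel numbers are illustrative):</p>

```python
def fit_scale(img_w_px, img_h_px, cell_w_px, cell_h_px):
    # Scale factors for insert_image's x_scale / y_scale options so the image
    # exactly fills the cell. This stretches the image; to preserve aspect
    # ratio instead, use min() of the two factors for both axes.
    return cell_w_px / img_w_px, cell_h_px / img_h_px

x_scale, y_scale = fit_scale(800, 600, 200, 150)
print(x_scale, y_scale)  # 0.25 0.25
# worksheet.insert_image(0, 0, image_path, {'x_scale': x_scale, 'y_scale': y_scale})
```

<p>Recent XlsxWriter releases (3.1+) also provide <code>worksheet.embed_image()</code>, which anchors an image inside a cell and scales it to the cell automatically; whether that suits this case depends on the Excel versions that need to open the file.</p>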
|
<python><excel><python-imaging-library><xlsxwriter>
|
2024-07-16 00:55:16
| 1
| 2,618
|
user321627
|
78,752,121
| 2,738,698
|
How does numpy.polyfit return a slope and y-intercept when its documentation says it returns otherwise?
|
<p>I have seen examples where <code>slope, yintercept = numpy.polyfit(x,y,1)</code> is used to return slope and y-intercept, but the <a href="https://numpy.org/doc/stable/reference/generated/numpy.polyfit.html#numpy.polyfit" rel="nofollow noreferrer">documentation</a> does not mention "slope" or "intercept" anywhere.</p>
<p>The documentation also states three different return options:</p>
<ul>
<li><p>p: ndarray, shape (deg + 1,) or (deg + 1, K)</p>
<p>Polynomial coefficients, highest power first. If y was 2-D, the coefficients for k-th data set are in p[:,k],</p>
</li>
</ul>
<p>or</p>
<ul>
<li><p>residuals, rank, singular_values, rcond</p>
<p>These values are only returned if full == True</p>
<ul>
<li>residuals: sum of squared residuals of the least squares fit</li>
<li>rank: the effective rank of the scaled Vandermonde</li>
<li>singular_values: singular values of the scaled Vandermonde</li>
<li>rcond: value of rcond,</li>
</ul>
</li>
</ul>
<p>or</p>
<ul>
<li><p>v: ndarray, shape (deg + 1, deg + 1) or (deg + 1, deg + 1, K)</p>
<p>Present only if full == False and cov == True</p>
</li>
</ul>
<p>I have also run code with exactly the previous line and the <code>slope</code> and <code>yintercept</code> variables are filled with a single value each, which does not fit any of the documented returns. Which documentation am I supposed to use, or what am I missing?</p>
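<p>The two-value unpacking is in fact the first documented return: for <code>deg=1</code>, <code>p</code> is an ndarray of shape <code>(2,)</code> holding the coefficients highest power first, i.e. <code>[slope, intercept]</code>, and ordinary iterable unpacking splits it. A quick check:</p>

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = 3.0 * x + 1.0             # points on a line with slope 3 and intercept 1

coeffs = np.polyfit(x, y, 1)  # ndarray, shape (2,): highest power first
slope, intercept = coeffs     # plain iterable unpacking of the array
print(coeffs.shape)           # (2,)
```

<p>So "slope" and "intercept" never appear in the docs because they are just the two entries of <code>p</code> for a degree-1 fit.</p>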
|
<python><numpy>
|
2024-07-15 22:41:03
| 1
| 578
|
user2738698
|
78,752,004
| 2,278,511
|
Kivy - How can I modify example for using ScrollView + GridLayout (Kivy-Garden Draggable)?
|
<p>This <a href="https://github.com/kivy-garden/draggable/blob/main/examples/using_other_widget_as_an_emitter.py" rel="nofollow noreferrer">Kivy - Draggable</a> example works great:</p>
<pre class="lang-py prettyprint-override"><code>from kivy.properties import ObjectProperty
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.label import Label
from kivy.uix.floatlayout import FloatLayout
from kivy_garden.draggable import KXDroppableBehavior, KXDraggableBehavior

KV_CODE = '''
#:import ascii_uppercase string.ascii_uppercase

<Cell>:
    drag_classes: ['card', ]
    canvas.before:
        Color:
            rgba: .1, .1, .1, 1
        Rectangle:
            pos: self.pos
            size: self.size

<Card>:
    drag_cls: 'card'
    drag_timeout: 0
    font_size: 100
    opacity: .3 if self.is_being_dragged else 1.
    canvas.after:
        Color:
            rgba: 1, 1, 1, 1
        Line:
            rectangle: [*self.pos, *self.size, ]

<Deck>:
    canvas.after:
        Color:
            rgba: 1, 1, 1, 1
        Line:
            rectangle: [*self.pos, *self.size, ]

BoxLayout:
    Widget:
        size_hint_x: .1
    # Put the board inside a RelativeLayout just to confirm the coordinates are properly transformed.
    # This is not necessary for this example to work.
    RelativeLayout:
        GridLayout:
            id: board
            cols: 4
            rows: 4
            spacing: 10
            padding: 10
    BoxLayout:
        orientation: 'vertical'
        size_hint_x: .2
        padding: '20dp', '40dp'
        spacing: '80dp'
        RelativeLayout:  # Put a deck inside a RelativeLayout just ...
            Deck:
                board: board
                text: 'numbers'
                font_size: '20sp'
                text_iter: (str(i) for i in range(10))
        Deck:
            board: board
            text: 'letters'
            font_size: '20sp'
            text_iter: iter(ascii_uppercase)
'''

class Cell(KXDroppableBehavior, FloatLayout):
    def accepts_drag(self, touch, ctx, draggable) -> bool:
        return not self.children

    def add_widget(self, widget, *args, **kwargs):
        widget.size_hint = (1, 1, )
        widget.pos_hint = {'x': 0, 'y': 0, }
        return super().add_widget(widget, *args, **kwargs)

class Card(KXDraggableBehavior, Label):
    pass

class Deck(Label):
    text_iter = ObjectProperty()
    board = ObjectProperty()

    def on_touch_down(self, touch):
        if self.collide_point(*touch.opos):
            if (text := next(self.text_iter, None)) is not None:
                card = Card(text=text, size=self._get_cell_size())
                card.start_dragging_from_others_touch(self, touch)
            return True

    def _get_cell_size(self):
        return self.board.children[0].size

class SampleApp(App):
    def build(self):
        return Builder.load_string(KV_CODE)

    def on_start(self):
        board = self.root.ids.board
        for __ in range(board.cols * board.rows):
            board.add_widget(Cell())

if __name__ == '__main__':
    SampleApp().run()
</code></pre>
<p>and I need to modify this app to use a ScrollView in the "droppable" zone (as you can see in the screenshot), but when I only change RelativeLayout to ScrollView, the ScrollView does not work.</p>
<p>In my experiments I only ever get two states: either the ScrollView works but the droppable functionality doesn't, or the droppable functionality works but it isn't possible to scroll the cards.</p>
<p>How can I customize this example?</p>
<p><a href="https://i.sstatic.net/Wpsa1XwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wpsa1XwX.png" alt="Draggable + ScrollView" /></a></p>
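<p>One lever worth noting (an untested sketch based on the <code>drag_timeout: 0</code> line in the example above): a nonzero timeout means a touch only turns into a drag after being held in place, which gives a quick swipe a chance to fall through to the ScrollView instead:</p>

```
<Card>:
    drag_cls: 'card'
    drag_timeout: 200   # time to hold before a drag starts (value is a guess to tune)
```

<p>With <code>drag_timeout: 0</code>, every touch on a Card is claimed as a drag immediately, which would explain the either/or behavior described above.</p>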
|
<python><kivy><draggable><kivy-language>
|
2024-07-15 21:49:56
| 1
| 408
|
lukassliacky
|
78,751,958
| 3,367,091
|
Implementing __eq__ for class hierarchy
|
<p>This question is about how <code>__eq__</code> should be implemented in a class hierarchy.
Let's say we have the following setup:</p>
<pre class="lang-py prettyprint-override"><code>class Person:
    """A person with a name."""

    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        if not isinstance(other, type(self)):
            return NotImplemented
        return self.name == other.name

    def __repr__(self):
        return f'Person("{self.name}")'

class Student(Person):
    """A student with a name and a student id."""

    def __init__(self, name, student_id):
        super().__init__(name)
        self.student_id = student_id

    def __repr__(self):
        return f'Student("{self.name}", {self.student_id})'
</code></pre>
<p>The question is specifically about how should <code>__eq__</code> be implemented?
The way it is written in the above code block results in:</p>
<pre class="lang-py prettyprint-override"><code>p = Person("SomeName")
s = Student("SomeName", 1)
print(p == s) # True
</code></pre>
<p>As <code>s</code> is an instance of a class derived from the class of instance <code>p</code>, the first call is</p>
<pre class="lang-py prettyprint-override"><code>s.__eq__(p)  # NotImplemented, as Person is not the same class or a subclass of Student
</code></pre>
<p>Since <code>NotImplemented</code> was returned, the following call is made:</p>
<pre class="lang-py prettyprint-override"><code>p.__eq__(s) # True, as Student is a subclass of Person and the name attribute of both instances are identical
</code></pre>
<p>But "should" an instance of a derived class ever be considered equal (value equality) to an instance of a parent class?
If the answer is "no", we can't rely on the <code>__eq__</code> method (as written above) via inheritance, but must override it in the subclass.</p>
<p>Maybe this is contextual and case-dependent, but if one wanted to allow value equality to be <code>True</code> <strong>only</strong> when the types of both instances in the comparison are identical, couldn't we write <code>__eq__</code> as:</p>
<pre class="lang-py prettyprint-override"><code>def __eq__(self, other):
    if type(self) != type(other):
        return NotImplemented
    return self.name == other.name
</code></pre>
<p>As <code>Person != Student</code> the <code>==</code> operator would fall back to object identity which would return <code>False</code>.</p>
<p>What is the correct approach to the <code>__eq__</code>-paradigm in this situation?</p>
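<p>The strict-type variant does behave as described — with both sides returning <code>NotImplemented</code>, <code>==</code> falls back to identity (a runnable sketch; note that defining <code>__eq__</code> sets <code>__hash__</code> to <code>None</code> unless one is supplied):</p>

```python
class Person:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        if type(self) != type(other):
            return NotImplemented
        return self.name == other.name

    def __hash__(self):  # restore hashability, which defining __eq__ removes
        return hash(self.name)

class Student(Person):
    def __init__(self, name, student_id):
        super().__init__(name)
        self.student_id = student_id

p = Person("SomeName")
s = Student("SomeName", 1)
print(p == s)                   # False: both __eq__ calls return NotImplemented
print(p == Person("SomeName"))  # True: same type, same name
```

<p>Note that this scheme still lets <code>Student</code> inherit <code>__eq__</code> for Student-to-Student comparisons even though it ignores <code>student_id</code>, which is its own design question.</p>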
|
<python>
|
2024-07-15 21:31:28
| 1
| 2,890
|
jensa
|
78,751,815
| 13,076,747
|
Inconsistent results between PyTorch loss function for `reduction=mean`
|
<p>In particular, the following code block compares using</p>
<p><code>nn.CrossEntropyLoss(reduction='mean')</code> with <code>loss_fn = nn.CrossEntropyLoss(reduction='none')</code></p>
<p>followed by <code>loss.mean()</code>.</p>
<p>The results are surprisingly not the same.</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn

# Generate random predictions and labels
preds = torch.randn(8, 10, 100)
labels = torch.randint(high=100, size=(8, 10))

# replace some values with -100
labels[torch.rand(labels.shape) < 0.2] = -100

preds, labels = preds.view(-1, 100), labels.view(-1)

def compare_losses(preds, labels):
    # Define loss functions
    loss_fn = nn.CrossEntropyLoss(reduction='none')
    loss_fn_mn = nn.CrossEntropyLoss(reduction='mean')

    # Compute losses
    losses = loss_fn(preds, labels)
    weighted_loss = losses.mean()

    # Compute mean loss using built-in mean reduction
    loss = loss_fn_mn(preds, labels)

    # Print and check if the results are identical
    return torch.isclose(loss, weighted_loss), loss.item(), weighted_loss.item()

compare_losses(preds, labels)
</code></pre>
<p>Returns</p>
<pre><code>(tensor(False), 4.997840404510498, 3.748380184173584)
</code></pre>
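<p>The gap traces back to <code>ignore_index</code> (which defaults to <code>-100</code>): with <code>reduction='mean'</code>, the summed loss is divided by the number of <em>non-ignored</em> targets, while <code>losses.mean()</code> divides by <em>all</em> targets, with the ignored ones contributing 0. A library-free sketch of the two denominators (the numbers are illustrative):</p>

```python
# Per-element losses; the third target is "ignored" (label -100), so its loss is 0.
losses = [2.0, 4.0, 0.0, 6.0]
ignored = [False, False, True, False]

plain_mean = sum(losses) / len(losses)     # what losses.mean() computes
n_valid = sum(1 for m in ignored if not m)
ignore_aware_mean = sum(losses) / n_valid  # what reduction='mean' computes
print(plain_mean, ignore_aware_mean)       # 3.0 4.0
```

<p>With roughly 20% of the labels set to -100 in the snippet above, the two means differ by about that ratio, matching the reported numbers.</p>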
|
<python><pytorch><loss-function>
|
2024-07-15 20:45:26
| 1
| 484
|
Essam
|
78,751,680
| 3,486,684
|
Nested named regex groups: how to maintain the nested structure in match result?
|
<p>A small example:</p>
<pre class="lang-py prettyprint-override"><code>import re

pattern = re.compile(
    r"(?P<hello>(?P<nested>hello)?(?P<other>cat)?)?(?P<world>world)?"
)

result = pattern.match("hellocat world")
print(result.groups())
print(result.groupdict() if result else "NO RESULT")
</code></pre>
<p>produces:</p>
<pre class="lang-py prettyprint-override"><code>('hellocat', 'hello', 'cat', None)
{'hello': 'hellocat', 'nested': 'hello', 'other': 'cat', 'world': None}
</code></pre>
<p>The regex match result returns a flat dictionary, rather than a dictionary of dictionaries that would correspond with the nested structure of the regex pattern. By this I mean:</p>
<pre class="lang-py prettyprint-override"><code>{'hello': {'nested': 'hello', 'other': 'cat'}, 'world': None}
</code></pre>
<p>Is there a "built-in" (i.e. something involving details of what is provided by the <code>re</code> module) way to access the match result that does preserve the nesting structure of the regex? By this I mean that the following are not solutions in the context of this question:</p>
<ul>
<li>parsing the regex pattern myself to determine nested groups</li>
<li>using a data structure that represents a regex pattern as a nested structure, and then implementing logic for that data structure to match against a string as if it were a "flat" regex pattern.</li>
</ul>
|
<python><regex>
|
2024-07-15 19:52:49
| 1
| 4,654
|
bzm3r
|
78,751,562
| 1,825,632
|
How to apply margin between main Y-Axis Title vs. Subplot Y-Axis Title
|
<p>I created two heatmap objects that I want to combine into a single figure of subplots. So far, tracing them into one figure worked out. However, I cannot add spacing between the <code>make_subplots</code> <code>y_title</code> and the subplots' own y-axis labels. Looking through the Plotly documentation, there are no parameters for adding a margin to this top-level title.</p>
<p>Here's the basic idea of the script used to build out the two heatmaps into one subplot:</p>
<pre><code>fig = make_subplots(
    rows=2,
    cols=1,
    row_heights=[0.75, 0.25],
    vertical_spacing=0.1,
    x_title="Current Transactions",
    y_title="Previous Transactions",
)

# Returning Section
for trace in returning.data:
    fig.add_trace(trace, row=1, col=1)

# NTO Section
for trace in nto.data:
    fig.add_trace(trace, row=2, col=1)

fig.update_xaxes(
    row=1,
    col=1,
    showticklabels=True,
    side="top"
)

# Generate final layout
fig.update_layout(
    title="Financing Mapping",
    height=500,
    width=600,
    hovermode=False,
    autosize=False,
    # margin=dict(l=200)  # This added margin to the whole thing
)

fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/Jp967PR2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jp967PR2.png" alt="Plotly plot which contains two heatmap graphs with their respective label. However, the main y_title is intersecting with the y-axis labels of the subplot" /></a></p>
|
<python><plotly>
|
2024-07-15 19:13:00
| 1
| 1,344
|
Adib
|
78,751,159
| 3,117,006
|
How to add a Python package from Git a dependency in Bazel
|
<p>I'm trying to set up a build for a Python project in Bazel.</p>
<p>One of the dependencies (nvdiffrast, to be exact) is a Python package that is not on PyPI.</p>
<p>I usually install it directly from Git (<code>pip install git+...</code>). Now I don't know how to set up installing this package in the Bazel build.</p>
<p>I'm trying something like:</p>
<pre><code>http_archive(
    name = "nvdiffrast",
    url = "https://github.com/NVlabs/nvdiffrast/archive/refs/tags/v0.3.1.tar.gz",
    integrity = ...,
    strip_prefix = ...,
)
</code></pre>
<p>However, it's still unclear to me how I would set up building this from setup.py (that is available in the repo).</p>
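<p>One common route (a sketch — the archive's internal layout and label names below are assumptions): since the package is pure-Python source plus a <code>setup.py</code>, the <code>http_archive</code> can be given a <code>build_file</code> that exposes the sources directly as a <code>py_library</code>, bypassing <code>setup.py</code> entirely:</p>

```starlark
# third_party/BUILD.nvdiffrast — referenced from the http_archive above via
#   build_file = "//third_party:BUILD.nvdiffrast"
load("@rules_python//python:defs.bzl", "py_library")

py_library(
    name = "nvdiffrast",
    srcs = glob(["nvdiffrast/**/*.py"]),
    imports = ["."],                      # makes `import nvdiffrast` resolve
    visibility = ["//visibility:public"],
)
```

<p>This only works if nothing in <code>setup.py</code> (e.g. ahead-of-time extension compilation) is actually needed at build time; nvdiffrast compiles its CUDA/OpenGL extensions lazily at runtime, so that may hold, but it should be verified.</p>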
|
<python><git><bazel>
|
2024-07-15 17:16:02
| 1
| 1,010
|
zlenyk
|
78,751,101
| 1,592,380
|
Labelling contour lines with OGR_STYLE
|
<p>I'm trying to add labels to DEM contour lines.</p>
<p>I'm working with GDAL 3.6.2, which I installed using Anaconda. I have some DEM data from USGS which I wrote as a contour map to a KML file using:</p>
<pre><code>gdal_contour small.tif /home/ubuntu/node/geotools/contour.kml -i 3.0 -f KML -a ELEVATION
</code></pre>
<p>When I open contour.kml in vim, I can see that it's made up of features like:</p>
<pre><code><Placemark>
<Style><LineStyle><color>ff0000ff</color></LineStyle><PolyStyle><fill>0</fill></PolyStyle></Style>
<ExtendedData><SchemaData schemaUrl="#contour">
<SimpleData name="ID">39</SimpleData>
<SimpleData name="ELEVATION">525</SimpleData>
</SchemaData></ExtendedData>
<LineString><coordinates>-89.2626638689733,46.1525006611575 -89.262663868958,46.1525469572921 -89.2627325352002,46.152622208059 -89.2628251277396,46.1526266807347 -89.2629177202847,46.1526141000471 -89.2629982621728,46.1525469573863 -89.2629982621882,46.1525006612516</coordinates></LineString>
</Placemark>
</code></pre>
<p>Clearly there is a style component available, but I'm not sure how to work with it to add elevation labels or custom colours. My suspicion is that OGR_STYLE is used to do things like this (<a href="https://github.com/OSGeo/gdal/issues/835" rel="nofollow noreferrer">https://github.com/OSGeo/gdal/issues/835</a>)(<a href="https://gdal.org/user/ogr_sql_dialect.html#ogr-style" rel="nofollow noreferrer">https://gdal.org/user/ogr_sql_dialect.html#ogr-style</a>), but I can't find any examples. How can this be done?</p>
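<p>One direction to try (a hedged sketch — the style string follows the OGR Feature Style Specification, but whether a given KML viewer actually renders <code>OGR_STYLE</code> labels is a separate question): attach an <code>OGR_STYLE</code> value through the OGR SQL dialect while converting, using the <code>LABEL</code> style tool with a field substitution:</p>

```
ogr2ogr -f KML labeled.kml contour.kml \
  -sql "SELECT *, 'LABEL(f:\"Arial\",s:12pt,t:{ELEVATION})' AS OGR_STYLE FROM contour"
```

<p>Here <code>t:{ELEVATION}</code> pulls the label text from the ELEVATION attribute of each feature; the font name and size are placeholders.</p>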
|
<python><gis><gdal><ogr>
|
2024-07-15 17:02:56
| 1
| 36,885
|
user1592380
|
78,751,066
| 2,466,784
|
Whenever I am maximizing the subwindow in MDI in Pyqt5 the minimize, maximize and close buttons gets hidden in the extreme right top corner
|
<p>Inside the method, a new QMdiSubWindow object is created, and its content is set to windowRef. The title of the sub-window is set to subWindowTitle. The sub-window is also set to be deleted when it is closed, and its window flags are set to include minimize, maximize, and close buttons. Finally, the sub-window is added to the MDI area and displayed, but setWindowFlags is not working in my code.</p>
<pre><code>def subWindowCreation(self, windowRef, subWindowTitle):
try:
        sub_window = QMdiSubWindow()
        sub_window.setWidget(windowRef)
        sub_window.resize(windowRef.width(), windowRef.height())
        sub_window.setWindowTitle(subWindowTitle)
        sub_window.setAttribute(Qt.WA_DeleteOnClose)
        sub_window.setWindowFlags(Qt.Window | Qt.WindowMinimizeButtonHint | Qt.WindowMaximizeButtonHint | Qt.WindowCloseButtonHint)
        print(Qt.WindowMinimizeButtonHint, Qt.WindowMaximizeButtonHint, Qt.WindowCloseButtonHint)
        self.ui.mdiMainWindow.addSubWindow(sub_window)
        sub_window.show()
    except Exception as e:
        QMessageBox.about(self, 'Exception', 'Error: "{}"'.format(e))
</code></pre>
|
<python><pyqt5><mdi>
|
2024-07-15 16:51:22
| 0
| 805
|
Arun Agarwal
|
78,751,051
| 3,369,417
|
Error installing mySQLDB using pip Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually
|
<p>Unable to run a Django application on the latest Amazon Linux (AL2023) running Python 3.9.</p>
<p>It reports that the MySQLdb module is not installed. I tried installing the MySQL module using pip and yum, but nothing solved the problem. The error we get is "Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually".</p>
<pre><code> backend = load_backend(db["ENGINE"])
File "/data/vpy38/lib/python3.9/site-packages/django/db/utils.py", line 113, in load_backend
return import_module("%s.base" % backend_name)
File "/usr/lib64/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/data/vpy38/lib/python3.9/site-packages/django/db/backends/mysql/base.py", line 17, in <module>
raise ImproperlyConfigured(
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module.
Did you install mysqlclient?
</code></pre>
<p>When I tried installing it via pip Im getting the following error:</p>
<pre><code>Collecting mysqlclient
Using cached mysqlclient-2.2.4.tar.gz (90 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [24 lines of output]
Trying pkg-config --exists mysqlclient
Command 'pkg-config --exists mysqlclient' returned non-zero exit status 1.
Trying pkg-config --exists mariadb
Command 'pkg-config --exists mariadb' returned non-zero exit status 1.
Trying pkg-config --exists libmariadb
Command 'pkg-config --exists libmariadb' returned non-zero exit status 1.
Traceback (most recent call last):
File "/data/vpy38/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/data/vpy38/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/data/vpy38/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-0ehmrzaq/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 327, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
File "/tmp/pip-build-env-0ehmrzaq/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 297, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-0ehmrzaq/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 313, in run_setup
exec(code, locals())
File "<string>", line 155, in <module>
File "<string>", line 49, in get_config_posix
File "<string>", line 28, in find_package_name
Exception: Can not find valid pkg-config name.
Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
</code></pre>
<p>OS: Amazon Linux 2023
Python: 3.9</p>
<p>Please assist me in resolving it.</p>
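<p>The underlying cause is that <code>pkg-config</code> cannot find a MySQL/MariaDB client library, so the usual fix is installing the client development headers before <code>pip install</code> (a sketch — the package names for AL2023 are assumptions and should be verified with <code>dnf search mariadb</code>):</p>

```
# Build prerequisites for mysqlclient on Amazon Linux 2023 (names assumed):
sudo dnf install -y gcc python3-devel pkgconf-pkg-config mariadb105-devel
pip install mysqlclient
```

<p>Alternatively, the pure-Python driver avoids compiling anything: <code>pip install pymysql</code>, then <code>import pymysql; pymysql.install_as_MySQLdb()</code> in <code>manage.py</code>/<code>settings.py</code> before Django loads its MySQL backend.</p>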
|
<python><pip><mysql-python>
|
2024-07-15 16:48:55
| 1
| 418
|
user3369417
|
78,750,965
| 7,695,845
|
How to make argparse work nicely with enums and default values?
|
<p>I have an enum:</p>
<pre class="lang-python prettyprint-override"><code>from enum import auto, Enum

class MyEnum(Enum):
    ONE = auto()
    TWO = auto()
    THREE = auto()
</code></pre>
<p>and I want to use it as an argument with <code>argparse</code>. To be more specific, I want to create an argument that accepts one of the enum names <code>("one", "two", "three")</code> and possibly has a default value, and the corresponding enum member is stored in the namespace. In other words, I want to do something like this:</p>
<pre class="lang-python prettyprint-override"><code>from argparse import ArgumentParser
from enum import auto, Enum

class MyEnum(Enum):
    ONE = auto()
    TWO = auto()
    THREE = auto()

enum_map = {e.name.lower(): e for e in MyEnum}

parser = ArgumentParser()
parser.add_argument(
    "--enum",
    default="two",
    choices=tuple(enum_map.keys()),
    help="An enum value (Default: %(default)s)",
)

args = parser.parse_args()
args.enum = enum_map[args.enum]
print(args.enum)
</code></pre>
<p>This is some boilerplate code, and I wanted to eliminate it using a custom action. I took inspiration from <a href="https://stackoverflow.com/a/60750535/7695845">this</a> SO answer, and got the following example:</p>
<pre class="lang-python prettyprint-override"><code>from argparse import Action, ArgumentParser, FileType, Namespace
from collections.abc import Callable, Sequence
from enum import auto, Enum
from typing import Any
class EnumAction[T](Action):
_enum: type[T]
_enum_map: dict[str, T]
def __init__(
self,
option_strings: Sequence[str],
dest: str,
nargs: int | str | None = None,
const: Any = None,
default: str | T = None,
type: Callable[[str], T] | FileType | None = None,
choices: Sequence[T] | None = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None:
if type is None:
raise ValueError("type must be assigned an Enum when using EnumAction")
if not issubclass(type, Enum):
raise TypeError("type must be an Enum when using EnumAction")
if choices is not None:
raise ValueError("Can't specify choices when using EnumAction")
self._enum = type
type = None
self._enum_map = {e.name.lower(): e for e in self._enum}
choices = tuple(self._enum_map.keys())
super().__init__(
option_strings,
dest,
nargs,
const,
default,
type,
choices,
required,
help,
metavar,
)
def __call__(
self,
parser: ArgumentParser,
namespace: Namespace,
values: str | Sequence[Any] | None,
option_string: str | None = None,
) -> None:
setattr(namespace, self.dest, self._enum_map[values])
class MyEnum(Enum):
ONE = auto()
TWO = auto()
THREE = auto()
parser = ArgumentParser()
parser.add_argument(
"--enum",
action=EnumAction,
type=MyEnum,
default="two",
help="An enum value (Default: %(default)s)",
)
print(parser.parse_args())
</code></pre>
<p>It works quite well except for one problem: if I call the program without the <code>--enum</code> argument, I get the default value <code>"two"</code>, but it isn't processed by my action so I get the string itself and not the enum member:</p>
<pre class="lang-shell prettyprint-override"><code>$ python demo.py --enum one
Namespace(enum=<MyEnum.ONE: 1>)
$ python demo.py
Namespace(enum='two')
</code></pre>
<p>The easiest fix is to set the default value to <code>default=MyEnum.TWO</code> instead of <code>"two"</code>, or to make my action convert the <code>default</code> parameter into the corresponding enum member if it's a string. However, this causes the enum member to show up in the help message, which isn't very readable (and is the reason I set choices to the names of the members instead of the actual members):</p>
<pre class="lang-shell prettyprint-override"><code>$ python demo.py -h
usage: demo.py [-h] [--enum {one,two,three}]
options:
-h, --help show this help message and exit
--enum {one,two,three}
An enum value (Default: MyEnum.TWO)
</code></pre>
<p>How can I get an enum member in the namespace both when passing an argument and when using the default value, and see the member's name in the help message instead of the member itself?</p>
|
<python><enums><argparse>
|
2024-07-15 16:27:45
| 3
| 1,420
|
Shai Avr
|
78,750,943
| 6,803,114
|
Split a pandas dataframe column into multiple based on text values
|
<p>I have a pandas dataframe with a column.</p>
<pre><code>id text_col
1 Was it Accurate?: Yes\n\nReasoning: This is a sample : text
2 Was it Accurate?: Yes\n\nReasoning: This is a :sample 2 text
3 Was it Accurate?: No\n\nReasoning: This is a sample: 1. text
</code></pre>
<p>I have to break the text_col into two columns: <code>"Was it Accurate?"</code> and <code>"Reasoning"</code>.</p>
<p>The final dataframe should look like:</p>
<pre><code>id Was it Accurate? Reasoning
1 Yes This is a sample : text
2 Yes This is a :sample 2 text
3 No This is a sample: 1. text
</code></pre>
<p>The text values can contain multiple colons (:).</p>
<p>I tried splitting the text_col using "\n\nReasoning:" but didn't get the desired result: it leaves out the text after the second colon (:).</p>
<p><code>df[['Was it Accurate?', 'Reasoning']] = df['text_col'].str.extract(r'Was it Accurate\?: (Yes|No)\n\nReasoning: (.*)')</code></p>
<p>Edit:
<a href="https://i.sstatic.net/lplnEC9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lplnEC9F.png" alt="enter image description here" /></a>
I applied the function to the LLM_response column of my sample_100 dataframe and printed the first row. If you look closely, <code>sample_100.iloc[0]['Reasoning']</code> has stripped off all the text after the colon (:).</p>
<p>Temp dict obj to test on:</p>
<pre><code>{'id_no': [8736215],
'Notes': [' Temp Notes Sample xxxxxxxxxxxxx [4/21/23, 2:10 PM] Work started -work complete-'],
'ProblemDescription': ['Sample problem description xxxxxxxxxxxxxxxxxxxxxxxx'],
'LLM_response': ['Accurate & Understandable: Yes\n\nReasoning: The Technician notes are accurate and understandable as:\n1) The technician provided detailed steps on how they addressed the mold issue by removing materials, treating surfaces, priming, and painting them.\n2) Additionally, even though there was non-repair related information (toilet repairs), the main issue of mold growth was addressed.\n3) The process described logically follows the process for remedying a mold issue, which aligns with the problem description.'],
'Accurate & Understandable': ['Yes'],
'Reasoning': ['The Technician notes are accurate and understandable as:']}
</code></pre>
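<p>The truncation visible in the edit is consistent with <code>.</code> not matching newlines: the captured <code>Reasoning</code> stops right before the <code>\n1)</code> in the LLM_response. A sketch (on made-up data, not the real dataframe) of how a DOTALL flag changes that:</p>

```python
import pandas as pd

df = pd.DataFrame({"text_col": [
    "Was it Accurate?: Yes\n\nReasoning: notes are accurate as:\n1) more text"
]})

# (?s) turns on DOTALL so (.*) can span newlines inside Reasoning;
# without it the capture stops at the first \n, as seen in the edit.
pat = r"(?s)Was it Accurate\?: (Yes|No)\n\nReasoning: (.*)"
df[["Was it Accurate?", "Reasoning"]] = df["text_col"].str.extract(pat)
reasoning = df["Reasoning"].iloc[0]
```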
|
<python><python-3.x><pandas><dataframe>
|
2024-07-15 16:23:07
| 2
| 7,676
|
Shubham R
|
78,750,662
| 2,014,141
|
np.where on a numpy MxN matrix but return M rows with indices where condition exists
|
<p>I am trying to use np.where on an MxN numpy matrix, and I want to get back M rows, each containing the indices at which the condition holds in that row. Is this possible? For example:</p>
<pre><code>import numpy as np

a = np.array([[1, 2, 2],
              [2, 3, 5]])
np.where(a == 2)
</code></pre>
<p>I would like this to return:</p>
<pre><code>[[1, 2],
[0]]
</code></pre>
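<p>For reference, a ragged result like this cannot come directly from <code>np.where</code>, which returns flat coordinate arrays; a sketch of the intended per-row grouping (illustrative only, not a claimed best approach) would be:</p>

```python
import numpy as np

a = np.array([[1, 2, 2],
              [2, 3, 5]])

# np.where(a == 2) yields (array([0, 0, 1]), array([1, 2, 0]));
# grouping column indices per row gives the ragged shape shown above.
per_row = [np.flatnonzero(row == 2).tolist() for row in a]
```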
|
<python><numpy><indices>
|
2024-07-15 15:21:26
| 2
| 492
|
KidSudi
|
78,750,527
| 5,065,860
|
Installing Python on Node/Playwright docker image?
|
<p>I have been using Playwright's Node docker image, defined here: <a href="https://playwright.dev/docs/docker" rel="nofollow noreferrer">https://playwright.dev/docs/docker</a></p>
<p>This includes all the browser dependencies for the Node environment. This has worked fine; however, I'm going to be integrating the TestRail CLI to help upload Playwright report results to TestRail.</p>
<p>Unfortunately this requires Python, which (from what I can tell) is not installed in the default Playwright docker image.</p>
<p>What's my best/easiest course of action here? Obviously I am not going to switch all my test cases; would it make more sense to use a self-created docker image? Is there some way I can install Python in the docker image in GitLab CI (which is the CI system we are using now)?</p>
<p>I'm not super familiar with docker outside of the basics, but I imagine installing Python after the fact makes the most sense?</p>
<p>What is the best game plan here? Thanks!</p>
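<p>For what it's worth, the usual pattern is a small custom Dockerfile that extends the Playwright image and installs Python on top. A sketch (the image tag is an assumption; the base image is Ubuntu-based, hence apt-get):</p>

```shell
# Dockerfile (hypothetical) -- extend the Playwright image with Python
FROM mcr.microsoft.com/playwright:v1.45.0-jammy
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3 python3-pip \
 && rm -rf /var/lib/apt/lists/*
RUN pip3 install trcli   # the TestRail CLI package
```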
|
<python><docker><playwright><gitlab-ci-runner>
|
2024-07-15 14:53:05
| 1
| 3,307
|
msmith1114
|
78,750,450
| 2,006,674
|
Correct way to gracefully shut down FastAPI and an infinite async task running in parallel
|
<p>The following code works correctly.<br />
I would like to know whether this is the correct way to gracefully shut down FastAPI together with an infinite async task running in parallel.</p>
<pre><code>import os
import asyncio
import signal
from contextlib import asynccontextmanager
from fastapi import FastAPI
previous_signal_handler = None
init_shutdown = False
infinite_1_done = False
def signal_handler(signum, frame):
print(f'{signal.Signals(signum).name=} {signal.strsignal(signum)=}')
global init_shutdown
init_shutdown = True
global previous_signal_handler
signal.signal(signal.SIGTERM, previous_signal_handler)
os.kill(os.getpid(), signal.SIGTERM)
@asynccontextmanager
async def lifespan(app: FastAPI):
print(f'Before yield is shutdown {os.getpid()=}')
signal.signal(signal.SIGINT, signal_handler)
global previous_signal_handler
previous_signal_handler = signal.signal(signal.SIGTERM, signal_handler)
_ = asyncio.create_task(infinite_1(), name='my_task')
print(f'Before yield {_.done()=}')
yield
# wait for infinite_1 to finish
global infinite_1_done
while infinite_1_done is False:
print('waiting...')
await asyncio.sleep(0.5)
print(f'After yield {_.done()=}')
print('After yield is shutdown')
app = FastAPI(lifespan=lifespan)
@app.get("/hi/{wait_time}")
async def greet(wait_time: int):
await asyncio.sleep(wait_time)
return f"async Hello? World? after {wait_time}"
async def infinite_1():
c = 0
global init_shutdown
while init_shutdown is False:
c += 1
print(f'infinite_1 call {c=}')
await asyncio.sleep(2)
global infinite_1_done
infinite_1_done = True
# print for final shutdown
print(f'Shutdown infinite_1 {init_shutdown=}')
</code></pre>
<p>This is just a proof of concept.<br />
In real code I would use a class for handling the signals, and the global variables would live inside that class.</p>
<p><code>async def infinite_1()</code> simulates a task that runs in an endless loop in parallel to FastAPI.<br />
In my real code it reads a Redis stream.<br />
I have my own signal handler, <code>def signal_handler(signum, frame)</code>, which, after it is done (telling <code>infinite_1</code> not to loop again), returns signal handling to FastAPI (because FastAPI's signal handling waits for all path functions to finish).<br />
<strong>My concern: is this complicated signal handling needed, or can it be done more simply?</strong></p>
<p>First I save the default FastAPI signal handler reference, then overwrite the signal handler with my own function, and all of this inside <code>async def lifespan(app: FastAPI)</code>.<br />
When a shutdown signal is received I need to inform <code>infinite_1</code>, via the global variable <code>init_shutdown</code>, to shut down, restore the original signal handling, and re-send <code>SIGTERM</code>.
After that, code execution continues in <code>async def lifespan(app: FastAPI)</code> after <code>yield</code>, where I have to wait for <code>infinite_1</code> to stop.</p>
<p>Maybe this is the only way it can be done, but I'm hoping there is something less complicated.</p>
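<p>For comparison only (this is not the code above, and the names are illustrative): the pattern can often be reduced to cancelling the task in <code>lifespan</code> after <code>yield</code>, with no custom signal handlers, since the server already turns SIGTERM/SIGINT into a lifespan shutdown. A minimal sketch, runnable without FastAPI by driving the context manager directly:</p>

```python
import asyncio
from contextlib import asynccontextmanager

stopped = False

async def worker():
    global stopped
    try:
        while True:           # stand-in for reading the redis stream
            await asyncio.sleep(0.01)
    except asyncio.CancelledError:
        stopped = True        # cleanup point before the task dies
        raise

@asynccontextmanager
async def lifespan(app):
    task = asyncio.create_task(worker())
    yield                     # server runs here; SIGTERM ends it
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass

async def main():
    # Drive the context manager directly, as uvicorn's lifespan would.
    async with lifespan(None):
        await asyncio.sleep(0.05)

asyncio.run(main())
```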
|
<python><signals><fastapi>
|
2024-07-15 14:38:48
| 0
| 7,392
|
WebOrCode
|
78,750,364
| 8,405,296
|
Saving Fine-tune Falcon HuggingFace LLM Model
|
<p>I'm trying to save my model so it won't need to re-download the base model every time I want to use it, but nothing seems to work for me. I would love your help with it.</p>
<p>The following parameters are used for the training:</p>
<pre><code>hf_model_name = "tiiuae/falcon-7b-instruct"
dir_path = 'Tiiuae-falcon-7b-instruct'
model_name_is = f"peft-training"
output_dir = f'{dir_path}/{model_name_is}'
logs_dir = f'{dir_path}/logs'
model_final_path = f"{output_dir}/final_model/"
EPOCHS = 3500
LOGS = 1
SAVES = 700
EVALS = EPOCHS / 100
compute_dtype = getattr(torch, "float16")
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=compute_dtype,
bnb_4bit_use_double_quant=False,
)
model = AutoModelForCausalLM.from_pretrained(
"tiiuae/falcon-7b-instruct",
quantization_config=bnb_config,
device_map={"": 0},
trust_remote_code=False
)
peft_config = LoraConfig(
lora_alpha=16,
lora_dropout=0.05, # 0.1
r=64,
bias="lora_only", # none
task_type="CAUSAL_LM",
target_modules=[
"query_key_value"
],
)
model.config.use_cache = False
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct", trust_remote_code=False)
tokenizer.pad_token = tokenizer.eos_token
training_arguments = TrainingArguments(
output_dir=output_dir,
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
optim='paged_adamw_32bit',
max_steps=EPOCHS,
save_steps=SAVES,
logging_steps=LOGS,
logging_dir=logs_dir,
eval_steps=EVALS,
evaluation_strategy="steps",
fp16=True,
learning_rate=0.001,
max_grad_norm=0.3,
warmup_ratio=0.15, # 0.03
lr_scheduler_type="constant",
disable_tqdm=True,
)
model.config.use_cache = False
trainer = SFTTrainer(
model=model,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
peft_config=peft_config,
dataset_text_field="text",
max_seq_length=448,
tokenizer=tokenizer,
args=training_arguments,
packing=True,
)
for name, module in trainer.model.named_modules():
if "norm" in name:
module = module.to(torch.float32)
train_result = trainer.train()
</code></pre>
<p>And the saving of it I did like so:</p>
<pre><code>metrics = train_result.metrics
max_train_samples = len(train_dataset)
metrics["train_samples"] = min(max_train_samples, len(train_dataset))
# save train results
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
# compute evaluation results
metrics = trainer.evaluate()
max_val_samples = len(eval_dataset)
metrics["eval_samples"] = min(max_val_samples, len(eval_dataset))
# save evaluation results
trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)
model.save_pretrained(model_final_path)
</code></pre>
<p>Now I've tried many different ways to load it, or to load and save it again in various ways (for example adding <code>lora_model.merge_and_unload()</code>, plainly using <code>local_model = AutoModelForCausalLM.from_pretrained(after_merge_model_path)</code>, and more), but everything resulted in errors (sometimes the same errors, sometimes different ones). I need your help here.</p>
<p>If you think it's better suited there, I also opened a question on the <a href="https://discuss.huggingface.co/t/saving-fine-tune-falcon-model/97421" rel="nofollow noreferrer">HuggingFace Forum</a></p>
|
<python><huggingface-transformers><large-language-model><inference><pre-trained-model>
|
2024-07-15 14:20:36
| 1
| 1,362
|
Lidor Eliyahu Shelef
|
78,750,118
| 2,447,427
|
Dataclass Enum init item from another item
|
<p>I'm trying to make an <code>Enum</code> the values of which are of a type which is a <code>dataclass</code>. As per the official <a href="https://docs.python.org/3/howto/enum.html#dataclass-support" rel="nofollow noreferrer">Python Enum HOWTO</a>, this can be achieved by inheriting from both <code>Enum</code> and the base <code>dataclass</code>.</p>
<p>Ok, so far, so good.</p>
<p>Now, because the underlying <code>dataclass</code> has quite a lot of fields, and the <code>Enum</code> values have only slight modifications between each other, I'd like to specify one of the values as a copy of another, but with a few select parameters changed. That's why I thought I'd use <code>dataclasses.replace</code>, but that doesn't work.</p>
<p>Here's a minimal reproduction code:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations  # so ClassVar[B] can reference B before it is defined

import dataclasses
from enum import Enum
from typing import ClassVar
@dataclasses.dataclass(frozen=True)
class A:
s1: str
s2: str
class B(A, Enum):
b1: ClassVar[B] = 'b11', 'b12'
b2: ClassVar[B] = dataclasses.replace(b1, s2='b22')
</code></pre>
<p>This throws this error on the last line:</p>
<pre><code>TypeError: replace() should be called on dataclass instances
</code></pre>
<p>and, as I can see, it's because at the time <code>b2</code> is instantiated, <code>b1</code> is still just a tuple. Even if <code>b1</code> would be properly initialized, because <code>dataclasses.replace</code> internally calls <code>b1.__class__(b1=b1.s1, b2='b22')</code>, this might fail as well because directly instantiating an <code>Enum</code> is not generally allowed, and I'm not sure how <code>dataclass.replace</code> would interfere with the mechanism that only allows instantiation inside class def.</p>
<p>Until now I was thinking of using something like <code>astuple</code> to pass the parameters of <code>b1</code> to <code>b2</code>, but code that replaces the correct positional argument is pretty unreadable.</p>
<p>So, any ideas on how to make this work in readable and pythonic manner?</p>
<p><em>Note:</em> I'm using Python version 3.9, also tried 3.12, but to no avail.</p>
|
<python><enums><python-dataclasses>
|
2024-07-15 13:29:08
| 0
| 343
|
bbudescu
|
78,750,109
| 6,622,697
|
Getting the result from a future in Python
|
<p>I have the following code which executes a process and calls a callback function when the process is done</p>
<pre><code>import os
import subprocess
import tempfile
def callback(future):
print(future.temp_file_name)
try:
with open(future.temp_file_name, 'r') as f:
print(f"Output from {future.temp_file_name}:")
print(f.read())
except Exception as e:
print(f"Error in callback: {e}")
def execute(args):
from concurrent.futures import ProcessPoolExecutor as Pool
args[0] = os.path.expanduser(args[0])
with tempfile.NamedTemporaryFile(delete=False) as temp_file:
temp_file_name = temp_file.name
print('temp_file_name', temp_file_name)
pool = Pool()
with open(temp_file_name, 'w') as output_file:
process = subprocess.Popen(args, stdout=output_file, stderr=output_file)
future = pool.submit(process.wait)
future.temp_file_name = temp_file_name
future.add_done_callback(callback)
print("Running task")
if __name__ == '__main__':
execute(['~/testSpawn.sh', '1'])
execute(['~/testSpawn.sh', '2'])
execute(['~/testSpawn.sh', '3'])
</code></pre>
<p>But if I try to print out the result in the callback with</p>
<pre><code>def callback(future):
print(future.temp_file_name, future.result())
</code></pre>
<p>I get the following error</p>
<pre><code>TypeError: cannot pickle '_thread.lock' object
exception calling callback for <Future at 0x77d48bc55a90 state=finished raised TypeError>
</code></pre>
<p>How can I get the result in a callback? The whole point is that I want to be notified when the process is complete.</p>
<h1>Update</h1>
<p>I modified my callback function to have the following:</p>
<pre><code> if future.exception() is not None:
print('exception', future.exception())
else:
print('result', future.result())
</code></pre>
<p>This just means the failure is reported more cleanly, and it prints out
<code>exception cannot pickle '_thread.lock' object</code>.
This tells me that the error is not in printing out <code>future.result()</code>, but that there is an inherent error in the callback itself.</p>
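<p>That diagnosis can be confirmed in isolation: a <code>Popen</code> object carries a <code>_thread.lock</code>, so pickling anything bound to it fails, and <code>ProcessPoolExecutor</code> must pickle <code>process.wait</code> to ship it to a worker process. A small sketch:</p>

```python
import pickle
import subprocess
import sys

# Popen stores an internal _thread.lock, so neither the object nor a
# bound method like process.wait can be pickled -- which is exactly
# what ProcessPoolExecutor.submit(process.wait) has to do.
process = subprocess.Popen([sys.executable, "-c", "pass"])
try:
    pickle.dumps(process.wait)
    picklable = True
except TypeError:
    picklable = False
process.wait()  # clean up the child
```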
|
<python><subprocess><popen><concurrent.futures>
|
2024-07-15 13:26:34
| 1
| 1,348
|
Peter Kronenberg
|
78,750,004
| 6,068,731
|
Find average x values and average y values for a plot
|
<p>I have an algorithm that generates two sequences <code>x_values</code> and <code>y_values</code> of the same length, <code>len(x_values) = len(y_values)</code>. However, at each run of the algorithm the length of these two sequences is slightly different, so <code>len(x_values_run1) != len(x_values_run2)</code>.</p>
<p>I want to eventually make a plot with a single x-axis sequence and a single y-axis sequence, and for this I need to average across all runs. If the sequences all had the same length, I could simply do <code>np.vstack(list_x_values).mean(axis=0)</code>. However, they have different lengths.</p>
<p>How can I do this? Perhaps with interpolation.</p>
<h1>Minimal Working Example</h1>
<pre><code>import numpy as np
def algorithm():
"""Creates 2 list of the same length, different each time."""
length = np.random.randint(low=18, high=22)
x_values = np.random.randn(length)
y_values = 2*x_values + 1
return x_values, y_values
n_runs = 10
results = [algorithm() for _ in range(n_runs)]
list_x_values = [res[0] for res in results]
list_y_values = [res[1] for res in results]
</code></pre>
<h1>Important Information</h1>
<p>The sequence of <code>x</code> values will typically be on a logarithmic scale.</p>
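<p>The interpolation idea might be sketched like this (a hedged illustration on the MWE above, not a definitive recipe; for log-scaled x, <code>np.geomspace</code> would replace <code>np.linspace</code>):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the algorithm's runs: different lengths, same relation.
runs = []
for n in (18, 22):
    x = np.sort(rng.standard_normal(n))   # np.interp needs ascending x
    runs.append((x, 2 * x + 1))

# Common grid restricted to the range every run covers, so no run is
# extrapolated; then average the interpolated y values across runs.
lo = max(x.min() for x, _ in runs)
hi = min(x.max() for x, _ in runs)
grid = np.linspace(lo, hi, 50)
y_mean = np.mean([np.interp(grid, x, y) for x, y in runs], axis=0)
```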
|
<python>
|
2024-07-15 13:06:14
| 0
| 728
|
Physics_Student
|
78,749,974
| 131,051
|
Patching in helper function not applied to instance
|
<p>I am creating a test with <code>unittest.mock.patch</code> that looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>class TestService:
def test_patched(self):
service = Service()
with patch.object(service, "_send_to_third_party") as patch_send:
with patch.object(service, "_notify_listeners") as patch_notify:
service.do_something()
patch_send.assert_called_once()
patch_notify.assert_called_once()
</code></pre>
<p>All that works fine, but I'd like to put those patches in a method so that I can reuse it across a bunch of tests. Like this:</p>
<pre class="lang-py prettyprint-override"><code>class TestService:
def _patched_service(self):
service = Service()
with patch.object(service, "_send_to_third_party") as patch_send:
with patch.object(service, "_notify_listeners") as patch_notify:
return service, patch_send, patch_notify
def test_patched(self):
service, patch_send, patch_notify = self._patched_service()
service.do_something()
patch_send.assert_called_once()
patch_notify.assert_called_once()
</code></pre>
<p>This fails because it's calling the <em>real</em> methods, not the patched ones.</p>
<p>To simplify, in the following test the third option fails, and I'm trying to wrap my head around why, and whether there's a good way to do what I want.</p>
<pre class="lang-py prettyprint-override"><code>from unittest.mock import patch
class ExampleClass:
def __init__(self):
self.value = 0
def add(self, value):
self.value += self._add(value)
def subtract(self, value):
self.value -= self._subtract(value)
def what_is_my_value(self):
return self.value
def _add(self, value):
return value
def _subtract(self, value):
return value
def patch_me(exa: ExampleClass):
with patch.object(exa, '_add', return_value=99):
with patch.object(exa, '_subtract', return_value=66):
return exa
class TestPatchingWorks:
def test_unpatched_works(self):
exa = ExampleClass()
exa.add(5)
exa.subtract(2)
assert exa.what_is_my_value() == 3
def test_patching_works_with_patch_class(self):
exa = ExampleClass()
with patch.object(ExampleClass, '_add', return_value=30):
with patch.object(ExampleClass, '_subtract', return_value=10):
assert exa.what_is_my_value() == 0
exa.add(5)
assert exa.what_is_my_value() == 30
exa.subtract(2)
assert exa.what_is_my_value() == 20
def test_patching_works_with_patch_instance(self):
exa = ExampleClass()
with patch.object(exa, '_add', return_value=40):
with patch.object(exa, '_subtract', return_value=30):
assert exa.what_is_my_value() == 0
exa.add(5)
assert exa.what_is_my_value() == 40
exa.subtract(2)
assert exa.what_is_my_value() == 10
def test_patching_works_with_function(self):
exa = ExampleClass()
exa = patch_me(exa)
assert exa.what_is_my_value() == 0
exa.add(5)
assert exa.what_is_my_value() == 99
exa.subtract(2)
assert exa.what_is_my_value() == 33
</code></pre>
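<p>The core of the "why" can be shown in a few lines (illustrative names, not the classes above): the <code>with</code> blocks exit as soon as the helper returns, which undoes the patches; <code>patcher.start()</code>/<code>stop()</code> is the documented way to keep a patch alive past the function boundary:</p>

```python
from unittest.mock import patch

class Thing:
    def value(self):
        return "real"

def patched_via_with(obj):
    with patch.object(obj, "value", return_value="fake"):
        return obj  # leaving the with-block here reverts the patch

def patched_via_start(obj):
    patcher = patch.object(obj, "value", return_value="fake")
    patcher.start()     # stays active until patcher.stop()
    return obj, patcher

v1 = patched_via_with(Thing()).value()      # patch already undone
t, patcher = patched_via_start(Thing())
v2 = t.value()                              # patch still active
patcher.stop()
```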
|
<python><python-unittest>
|
2024-07-15 12:59:40
| 1
| 3,789
|
GSP
|
78,749,938
| 13,217,286
|
How do I identify consecutive/contiguous dates in polars
|
<p>I'm trying to isolate runs of dates using polars. For this, I've been playing with <code>.rolling</code>, <code>.rle</code>, and <code>.rle_id</code> but can't seem to fit them together to make it work.</p>
<p>Given this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
"date": ["2024-01-01", "2024-01-02", "2024-01-03", "2024-01-05", "2024-01-06", "2024-01-07"]
})
df = df.with_columns(pl.col("date").str.to_date())
</code></pre>
<p>I would like to have a 2nd column with the id of the run if they're contiguous:</p>
<pre><code>shape: (6, 2)
┌────────────┬─────┐
│ date ┆ id │
│ --- ┆ --- │
│ date ┆ i64 │
╞════════════╪═════╡
│ 2024-01-01 ┆ 1 │
│ 2024-01-02 ┆ 1 │
│ 2024-01-03 ┆ 1 │
│ 2024-01-05 ┆ 2 │
│ 2024-01-06 ┆ 2 │
│ 2024-01-07 ┆ 2 │
└────────────┴─────┘
</code></pre>
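<p>For reference, the target column amounts to a cumulative count of gaps; in plain Python (a sketch of the logic only, polars left aside) it could be expressed as below. In polars, the same idea is usually spelled with <code>diff()</code> plus <code>cum_sum()</code>:</p>

```python
from datetime import date, timedelta

dates = [date(2024, 1, d) for d in (1, 2, 3, 5, 6, 7)]

# A new run starts whenever the gap to the previous date exceeds one
# day; the run id is 1 + the number of such breaks seen so far.
ids, run = [1], 1
for prev, cur in zip(dates, dates[1:]):
    if cur - prev != timedelta(days=1):
        run += 1
    ids.append(run)
```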
|
<python><dataframe><date><python-polars><rle>
|
2024-07-15 12:53:18
| 2
| 320
|
Thomas
|
78,749,890
| 19,745,277
|
Running ns1labs/flame from python in docker
|
<p>I have a docker container that runs a python script. The python script should periodically run an instance of <a href="https://hub.docker.com/r/ns1labs/flame/tags" rel="nofollow noreferrer">ns1labs/flame</a>. It produces an output file, <strong>flame.out.json</strong>. That file needs to be read by the docker container that started the flame instance.</p>
<p><em>docker-compose.yml:</em></p>
<pre class="lang-yaml prettyprint-override"><code>services:
container1:
...
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./output:/output
</code></pre>
<p><em>main.py (started from container1)</em>:</p>
<pre class="lang-py prettyprint-override"><code>flamethrower_command = [
server,
"-q", "10",
"-v", "0",
"-t", "10",
"-Q", "8194",
"-l", "10"
] # Adjust Flamethrower parameters as needed
output_file = "flame.out.json"
output_dir = "/output"
output_path = os.path.join(output_dir, output_file)
flamethrower_command += ["-o", output_path]
try:
client = docker.from_env()
container = client.containers.run(
"ns1labs/flame",
command=flamethrower_command,
volumes={os.path.abspath(output_dir): {'bind': output_dir, 'mode': 'rw'}},
remove=True, # Automatically remove the container when it exits
network="host"
)
with open(output_path, 'r') as file: # Getting FileNotFoundError here
print(file.read().splitlines())
except Exception as ex:
print(ex)
</code></pre>
|
<python><docker><docker-compose><dockerpy>
|
2024-07-15 12:44:57
| 1
| 349
|
noah
|
78,749,739
| 972,647
|
Why does iterating break up my text file lines while a generator doesn't?
|
<p>For each line of a text file I want to do heavy calculations. The amount of lines can be millions so I'm using multiprocessing:</p>
<pre><code>num_workers = 1
with open(my_file, 'r') as f:
with multiprocessing.pool.ThreadPool(num_workers) as pool:
for data in pool.imap(my_func, f, 100):
print(data)
</code></pre>
<p>I'm testing interactively hence <code>ThreadPool()</code> (will be replaced in final version). For <code>map</code> or <code>imap</code> documentation says:</p>
<blockquote>
<p>This method chops the iterable into a number of chunks which it submits to the process pool as separate tasks</p>
</blockquote>
<p>Since my opened file is an iterable (each iteration yields a line), I expect this to work, but it breaks lines up in the middle. I made a generator that returns lines as expected, but I want to understand why the generator is needed at all. Why isn't the chunking happening on line boundaries?</p>
<p>UPDATE:</p>
<p>to clarify if I use this generator:</p>
<pre><code>def chunked_reader(file, chunk_size=100):
with open(file, 'r') as f:
chunk = []
i = 0
for line in f:
if i == chunk_size:
yield chunk
chunk = []
i = 0
i += 1
chunk.append(line)
# yield final chunk
yield chunk
</code></pre>
<p>the calculation works, returns expected values. If I use the file object directly, I get errors somehow indicating the chunking is not splitting on line boundaries.</p>
<p>UPDATE 2:</p>
<p>it's chunking the line into each individual character and not line by line. I can also take another environment with python 3.9 and that has the same behavior. So it has been like this for some time and seems to be "as designed" but not very intuitive.</p>
<p>UPDATE 3:</p>
<p>To clarify the marked solution: my misunderstanding was that chunk_size would send a list of items to my_func, which would then internally iterate over the passed-in data. Since that assumption was wrong, and each line gets sent to my_func separately regardless of chunk_size, the internal iteration was iterating over the string itself, not, as I expected, over a list of strings.</p>
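<p>The misunderstanding described in Update 3 can be demonstrated in a few lines: <code>chunksize</code> only batches items for transport to the workers; the function still receives one element per call, so iterating a string yields characters:</p>

```python
from multiprocessing.pool import ThreadPool

def describe(item):
    # Called once per element of the iterable, whatever the chunksize.
    return repr(item)

with ThreadPool(2) as pool:
    results = list(pool.imap(describe, "ab", 100))
```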
|
<python><multiprocessing>
|
2024-07-15 12:09:11
| 1
| 7,652
|
beginner_
|
78,749,530
| 17,580,381
|
Is this a Selenium oddity?
|
<p>The following piece of code runs without error:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver import ChromeOptions
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
options = ChromeOptions()
options.add_argument("--headless")
selectors = [
'img[alt=Google]',
'img[alt="Google"]'
]
with webdriver.Chrome(options=options) as driver:
driver.get("https://www.google.com")
wait = WebDriverWait(driver, 5)
for selector in selectors:
t = By.CSS_SELECTOR, selector
wait.until(EC.presence_of_element_located(t))
</code></pre>
<p>Note the two slightly different forms of CSS selector.</p>
<p>I have always understood that the definitive syntax for this would be:</p>
<pre><code>'img[alt="Google"]'
</code></pre>
<p>Note the double-quotes.</p>
<p>I have determined that in selenium 4.22.0 (Python 3.12.4) the following syntax also works:</p>
<pre><code>'img[alt=Google]'
</code></pre>
<p>Note the absence of double-quotes.</p>
<p>So my question is whether leaving out the double-quotes complies with the CSS standards, or whether this is something peculiar to Selenium.</p>
|
<python><css><selenium-webdriver><css-selectors>
|
2024-07-15 11:14:47
| 1
| 28,997
|
Ramrab
|
78,749,504
| 1,714,385
|
Missing checkpoint files when training multiple models at the same time in tensorflow
|
<p>I have ~100 tensorflow models to train, and on each training I run <code>keras-tuner</code> to find the best hyperparameters for each model. To save time, I would like to train one of these models per CPU core.</p>
<p>However, I think the parallel trainings are overwriting each other's checkpoints, because when I run <code>get_best_models()</code> I get the following traceback:</p>
<pre><code> 1720 tuner.search(X_train, y_train, epochs=config.DIRECTOR_EPOCHS,
1721 validation_split=0.15, callbacks=[callbacks],
1722 verbose=0)
1724 # Let's extract the director model from the best fitted model
-> 1725 best_model = tuner.get_best_models()[0]
1726 director = keras.Model(inputs=best_model.get_layer('_').input,
1727 outputs=[
1728 best_model.get_layer('__').output,
1729 best_model.get_layer('___').output
1730 ],
1731 name=f'director_h{h}')
1732 self.directors[f'director_h{h}'] = director
File ~\PycharmProjects\Orchestra\omnienergy-orchestra\venv\lib\site-packages\keras_tuner\engine\tuner.py:366, in Tuner.get_best_models(self, num_models)
348 """Returns the best model(s), as determined by the tuner's objective.
349
350 The models are loaded with the weights corresponding to
(...)
363 List of trained model instances sorted from the best to the worst.
364 """
365 # Method only exists in this class for the docstring override.
--> 366 return super().get_best_models(num_models)
File ~\PycharmProjects\Orchestra\omnienergy-orchestra\venv\lib\site-packages\keras_tuner\engine\base_tuner.py:364, in BaseTuner.get_best_models(self, num_models)
349 """Returns the best model(s), as determined by the objective.
350
351 This method is for querying the models trained during the search.
(...)
361 List of trained models sorted from the best to the worst.
362 """
363 best_trials = self.oracle.get_best_trials(num_models)
--> 364 models = [self.load_model(trial) for trial in best_trials]
365 return models
File ~\PycharmProjects\Orchestra\omnienergy-orchestra\venv\lib\site-packages\keras_tuner\engine\base_tuner.py:364, in <listcomp>(.0)
349 """Returns the best model(s), as determined by the objective.
350
351 This method is for querying the models trained during the search.
(...)
361 List of trained models sorted from the best to the worst.
362 """
363 best_trials = self.oracle.get_best_trials(num_models)
--> 364 models = [self.load_model(trial) for trial in best_trials]
365 return models
File ~\PycharmProjects\Orchestra\omnienergy-orchestra\venv\lib\site-packages\keras_tuner\engine\tuner.py:297, in Tuner.load_model(self, trial)
294 # Reload best checkpoint.
295 # Only load weights to avoid loading `custom_objects`.
296 with maybe_distribute(self.distribution_strategy):
--> 297 model.load_weights(self._get_checkpoint_fname(trial.trial_id))
298 return model
File ~\PycharmProjects\Orchestra\omnienergy-orchestra\venv\lib\site-packages\keras\utils\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File ~\PycharmProjects\Orchestra\omnienergy-orchestra\venv\lib\site-packages\tensorflow\python\training\py_checkpoint_reader.py:31, in error_translator(e)
27 error_message = str(e)
28 if 'not found in checkpoint' in error_message or (
29 'Failed to find any '
30 'matching files for') in error_message:
---> 31 raise errors_impl.NotFoundError(None, None, error_message)
32 elif 'Sliced checkpoints are not supported' in error_message or (
33 'Data type '
34 'not '
35 'supported') in error_message:
36 raise errors_impl.UnimplementedError(None, None, error_message)
NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for tmp\untitled_project\trial_09\checkpoint
</code></pre>
<p>Interestingly, if I train two models on different hard disks, the error does not show up.</p>
<p>I tried looking up ways to rename the checkpoint files, but I couldn't find one.</p>
|
<python><tensorflow><keras>
|
2024-07-15 11:08:53
| 0
| 4,417
|
Ferdinando Randisi
|
78,749,458
| 7,973,301
|
How to create a Python type alias for a parametrized type
|
<p><a href="https://docs.kidger.site/jaxtyping/" rel="nofollow noreferrer"><code>jaxtyping</code></a> provides type annotations that use a <code>str</code> as parameter (as opposed to a type), e.g.:</p>
<pre class="lang-py prettyprint-override"><code>Float[Array, "dim1 dim2"]
</code></pre>
<p>Let's say I would like to create a type alias which combines the <code>Float</code> and <code>Array</code> part.
This means I would like to be able to write</p>
<pre class="lang-py prettyprint-override"><code>MyOwnType["dim1 dim2"]
</code></pre>
<p>instead.
To my understanding I cannot use <code>TypeAlias</code>/<code>TypeVar</code>, as the generic parameter (here <code>"dim1 dim2"</code>) is not a type but actually an instance of a type.</p>
<p>Is there a concise Pythonic way to achieve this?</p>
<p><strong>EDIT:</strong></p>
<p>I tried the following and it does not work:</p>
<pre class="lang-py prettyprint-override"><code>class _Singleton:
    def __getitem__(self, shape: str) -> Float:
        return Float[Array, shape]

MyOwnType = _Singleton()
</code></pre>
<p>Using <code>MyOwnType["dim1 dim2"]</code> as a function parameter annotation gives the <code>mypy</code> complaint <code>Variable "MyOwnType" is not valid as a type</code>.</p>
<p><strong>Solution:</strong></p>
<p>Based on the answer of @chepner this was the final solution:</p>
<pre class="lang-py prettyprint-override"><code>class MyOwnType(Generic[Shape]):
    def __class_getitem__(cls, shape: str) -> Float:
        return Float[Array, shape]
</code></pre>
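The mechanism behind the solution can be sketched in plain stdlib Python, with no jaxtyping dependency: `__class_getitem__` lets a bare class name be subscripted with an arbitrary non-type value such as a shape string. `ShapedFloat` and the returned tuple below are illustrative stand-ins for `Float[Array, "..."]`, not part of any real library.

```python
# Minimal sketch of __class_getitem__: subscripting the class with a
# string works because __class_getitem__ receives whatever is in the
# brackets. The tuple stands in for jaxtyping's Float[Array, shape].
class ShapedFloat:
    def __class_getitem__(cls, shape: str):
        # In the real alias this would be: return Float[Array, shape]
        return ("Float", "Array", shape)

alias = ShapedFloat["dim1 dim2"]
print(alias)  # ('Float', 'Array', 'dim1 dim2')
```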
|
<python><python-typing>
|
2024-07-15 10:56:48
| 1
| 970
|
Padix Key
|
78,749,454
| 201,657
|
Sort x axis of plotly express stacked bar chart by total for the bar
|
<p>I have created a stacked bar chart in using plotly express that visualises edits made to a files in GDrive, and who made those edits.</p>
<p>My source dataframe has columns</p>
<ul>
<li>file_path</li>
<li>emailAddress (of the person editing the file)</li>
<li>tally</li>
</ul>
<p>I have file_path plotted on the x axis.</p>
<p><a href="https://i.sstatic.net/O7J0pf18.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O7J0pf18.png" alt="enter image description here" /></a></p>
<p>The problem I'm having is that I want all the bars to appear in descending order of edits, regardless of who made those edits. As you can see that's not happening right now.</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go

fig = go.Figure()

# Get a list of unique email addresses to create one trace per email address
email_addresses = df['emailAddress'].unique()

# Create a trace for each email address
for email in email_addresses:
    df_filtered = df[df['emailAddress'] == email]
    fig.add_trace(go.Bar(x=df_filtered['path_name'], y=df_filtered['tally'], name=email))

# Update the layout to stack the bars
fig.update_layout(barmode='stack', xaxis_title="File path", yaxis_title="Tally", title="Tally by file path and Email Address")
fig.show()
</code></pre>
<p>Can anyone suggest how I can change this to get all bars appearing in descending order of edits, regardless of who made the edits?</p>
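One common approach (a sketch, assuming the column names from the question): compute the per-path totals across all email addresses, then hand that order to Plotly via `fig.update_xaxes(categoryorder="array", categoryarray=order)`. The ordering itself needs no Plotly at all:

```python
# Compute the descending-by-total order of x categories; the sample rows
# are illustrative (path_name, tally) pairs, not data from the question.
from collections import defaultdict

def order_by_total(rows):
    """rows: iterable of (path_name, tally) pairs; returns path names
    sorted by descending total tally across all email addresses."""
    totals = defaultdict(int)
    for path, tally in rows:
        totals[path] += tally
    return sorted(totals, key=totals.get, reverse=True)

rows = [("a.txt", 5), ("b.txt", 2), ("a.txt", 1), ("c.txt", 9)]
print(order_by_total(rows))  # ['c.txt', 'a.txt', 'b.txt']
# then: fig.update_xaxes(categoryorder="array",
#                        categoryarray=order_by_total(rows))
```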
|
<python><plotly>
|
2024-07-15 10:55:59
| 1
| 12,662
|
jamiet
|
78,749,310
| 4,467,693
|
In python Django how to define test database and keep records inserted in test database until cleaned in tearDown method of testcase
|
<p>I want a test database created for my default database in Django latest version, for that I configured in project <code>settings.py</code> file as below.</p>
<pre><code>DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
        'TEST': {
            'NAME': BASE_DIR / 'test_db.sqlite3',
        }
    }
}
</code></pre>
<p>I have a model <code>Topic</code> in my <code>models.py</code> file and have run the <code>makemigrations</code> & <code>migrate</code> commands to create the table in the SQLite DB. I have created a <code>TestTopic</code> class in my app's (first_app) <code>tests.py</code> file as below</p>
<pre><code>from django.test import TestCase, SimpleTestCase
from .models import Topic


class TestTopic(TestCase):
    def setUp(self):
        Topic.objects.create(top_name='Test topic1')
        Topic.objects.create(top_name='Test topic2')
        Topic.objects.create(top_name='Test topic3')

    def test_create_topic(self):
        all_topics = Topic.objects.count()
        self.assertEqual(all_topics, 3)

    def tearDown(self) -> None:
        return super().tearDown()
</code></pre>
<p>I am executing cmd below to run my app(first_app) tests. Passing <code>--keepdb</code> for preserving <code>test_db</code>.</p>
<pre><code>py manage.py test first_app --keepdb
</code></pre>
<p>After this I see the test case pass. But when I check <code>test_db.sqlite</code> in a SQLite DB browser app to see the test topics I created in the test case, I do not see them. I want those records to stay in <code>test_db</code> until I clear them manually or in the <code>tearDown</code> method of my test case. (Below is the SQLite DB browser view of the <code>test_db.sqlite</code> topics table.)</p>
<p><a href="https://i.ibb.co/p30SRFt/Untitled.png" rel="nofollow noreferrer"><img src="https://i.ibb.co/p30SRFt/Untitled.png" alt="test_db.sqlite" /></a></p>
|
<python><python-3.x><django><django-testing><django-tests>
|
2024-07-15 10:20:19
| 1
| 706
|
Kapil Yadav
|
78,749,249
| 4,451,315
|
How to append string to each element of chunked array?
|
<p>Say I have</p>
<pre class="lang-py prettyprint-override"><code>In [22]: import pyarrow as pa
In [23]: t = pa.table({'a': ['one', 'two', 'three']})
</code></pre>
<p>and I'd like to append <code>'_frobenius'</code> to each element of <code>'a'</code></p>
<p>Expected output:</p>
<pre><code>pyarrow.Table
a: string
----
a: [["one_frobenius","two_frobenius","three_frobenius"]]
</code></pre>
|
<python><pyarrow>
|
2024-07-15 10:05:17
| 2
| 11,062
|
ignoring_gravity
|
78,749,205
| 22,216,622
|
Multithreading in Azure Api
|
<p>I need to write almost 300 different wiki pages, all separated by type: [Epic, Features, ....], using the Azure DevOps Python API.</p>
<p>This takes many minutes when I write them one by one.</p>
<p>So I tried a basic multithreading approach:</p>
<pre><code>if (multithreading):
    threads = []
    for task in task_list:
        thread = threading.Thread(target=write_one_epic, args=(task, requests_session))
        thread.start()
        threads.append(thread)
    for thread in threads:
        thread.join()
</code></pre>
<p>This result me in this error</p>
<p><code>The wiki page operation failed with message : TF401028: The reference 'refs/heads/wikiMaster' has already been updated by another client, so you cannot update it. Please try again.</code></p>
<p>Update: I tried with a <code>ThreadPoolExecutor</code> and it resulted in the same error.</p>
<p>The error is raised when I try to create a new page:</p>
<pre><code>wiki.create_or_update_page(content, Constant.project, Constant.wikiIdentifier, self.actual_path, None)
</code></pre>
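TF401028 suggests the wiki's git ref cannot be updated by several clients at once, so fully parallel `create_or_update_page` calls will always race. One hedged workaround is to serialize only the conflicting write while keeping content preparation parallel; the sketch below uses a `threading.Lock` and a list append as a stand-in for the actual `wiki.create_or_update_page(...)` call (the names follow the question, but the API call itself is not reproduced here).

```python
# Serialize only the write with a lock; threads can still prepare page
# content in parallel. `results.append` stands in for the wiki API call.
import threading

write_lock = threading.Lock()
results = []

def write_one_epic(task):
    body = f"content for {task}"   # parallel-safe preparation
    with write_lock:               # only the ref update is serialized
        results.append(body)       # stand-in for wiki.create_or_update_page(...)

threads = [threading.Thread(target=write_one_epic, args=(t,)) for t in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 10
```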
|
<python><multithreading><azure-devops><python-multithreading>
|
2024-07-15 09:52:05
| 1
| 311
|
Kaiwinta
|
78,748,860
| 1,082,349
|
Rolling mean, timedelta offset, min_periods, and center
|
<p>I would like an annual rolling window that is centered, and where we have missings when the window is calculated over missing data -- at the start and end of the time series, something like:</p>
<pre><code>test.head(20)
Out[39]:
foo
month
2016-08-01 NaN
2016-09-01 NaN
2016-10-01 NaN
2016-11-01 NaN
2016-12-01 NaN
2017-01-01 =mean(2016-08-01 to 2016-07-01)
2017-02-01 =mean(2016-09-01 to 2016-08-01)
2017-03-01 =mean(2016-10-01 to 2016-09-01)
</code></pre>
<p>(Because it's an annual average with monthly data, I don't particularly care whether it's 6 months before and 5 months after, or 5 before and 6 after, or even 6 and 6.)</p>
<p>Here are the first rows of my test data:</p>
<pre><code>test.head(20)
Out[39]:
foo
month
2016-08-01 -0.105854
2016-09-01 -0.120718
2016-10-01 -0.129150
2016-11-01 -0.114559
2016-12-01 -0.135669
2017-01-01 -0.140636
2017-02-01 -0.094412
2017-03-01 -0.139349
2017-04-01 -0.099603
2017-05-01 -0.076229
2017-06-01 -0.098049
2017-07-01 -0.108535
2017-08-01 -0.037598
2017-09-01 -0.181011
2017-10-01 -0.121715
2017-11-01 -0.089486
2017-12-01 -0.132886
2018-01-01 -0.063460
2018-02-01 -0.111066
2018-03-01 -0.130275
</code></pre>
<p>A standard rolling mean works well</p>
<pre><code>test.rolling('365D').mean()
Out[41]:
foo
month
2016-08-01 -0.105854
2016-09-01 -0.113286
2016-10-01 -0.118574
2016-11-01 -0.117570
2016-12-01 -0.121190
</code></pre>
<p>It also works when I specify min_periods:</p>
<pre><code>test.rolling('365D', min_periods=3).mean()
Out[45]:
foo
month
2016-08-01 NaN
2016-09-01 NaN
2016-10-01 -0.118574
2016-11-01 -0.117570
2016-12-01 -0.121190
</code></pre>
<p>But when I specify center=True, behold:</p>
<pre><code>test.rolling('365D', min_periods=3, center=True).mean()
Out[46]:
foo
month
2016-08-01 -0.124431
2016-09-01 -0.122543
2016-10-01 -0.119995
2016-11-01 -0.115618
2016-12-01 -0.114021
</code></pre>
<p>What am I missing?</p>
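A plausible explanation (hedged, based on how offset windows behave): with a `'365D'` window and `center=True`, the window at the first timestamps extends forward in time, so `min_periods=3` is already satisfied and no leading NaNs appear. For regular monthly data, an integer window gives the NaN edges described in the question:

```python
# With an integer window of 12 and min_periods=12, only positions with a
# full year of data are computed; center=True puts the window around each
# point, so both ends of the series come out NaN. The data is synthetic.
import numpy as np
import pandas as pd

idx = pd.date_range("2016-08-01", periods=24, freq="MS")
s = pd.Series(np.arange(24, dtype=float), index=idx)

centered = s.rolling(12, center=True, min_periods=12).mean()
print(int(centered.notna().sum()))  # 13 full windows for 24 points
```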
|
<python><pandas>
|
2024-07-15 08:40:13
| 2
| 16,698
|
FooBar
|
78,748,775
| 10,919,370
|
implementing __repr__ on a class, if try to add function members, get "RecursionError: maximum recursion depth exceeded"
|
<p>I'm trying to implement __repr__ for a class, but when I want to access function members to add their value to __repr__ I get "maximum recursion depth exceeded".</p>
<p>I've noticed that if I remove all the class's functions, the __repr__ works as expected.</p>
<p>I intend to do a workaround to skip the members which are functions, but I want to understand why the recursion is happening.</p>
<p>See below for a simplified example.</p>
<pre><code>class C:
    def __init__(self, a):
        self.a = a

    def getA(self):
        return self.a

    def __repr__(self):
        ret_str = self.__class__.__name__ + "\n"
        for v in dir(self):
            if v.startswith("__"):
                continue
            ret_str += "{} -> {}\n".format(v, getattr(self, v))
        return ret_str


c = C(1)
print(c)
</code></pre>
<p>here is the error</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 18, in <module>
print(c)
File "test.py", line 13, in __repr__
ret_str += "{} -> {}\n".format(v, getattr(self,v))
File "test.py", line 13, in __repr__
ret_str += "{} -> {}\n".format(v, getattr(self,v))
File "test.py", line 13, in __repr__
ret_str += "{} -> {}\n".format(v, getattr(self,v))
[Previous line repeated 163 more times]
RecursionError: maximum recursion depth exceeded
</code></pre>
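The cause: `dir(self)` includes `getA`, and `getattr(self, "getA")` returns a bound method. A bound method's repr is `<bound method C.getA of ...>`, where the `...` is the repr of the instance, so formatting it re-enters `__repr__`, forever. A sketch of the workaround mentioned in the question, checking callability on the class so the check itself never triggers the instance repr:

```python
# Skipping callables (looked up on the class, not the instance) breaks
# the repr -> bound method repr -> repr cycle.
class C:
    def __init__(self, a):
        self.a = a

    def getA(self):
        return self.a

    def __repr__(self):
        ret_str = self.__class__.__name__ + "\n"
        for v in dir(self):
            if v.startswith("__") or callable(getattr(type(self), v, None)):
                continue  # skip dunders and methods
            ret_str += "{} -> {}\n".format(v, getattr(self, v))
        return ret_str

print(repr(C(1)))  # "C" on one line, "a -> 1" on the next
```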
|
<python><recursion><repr>
|
2024-07-15 08:19:34
| 1
| 1,215
|
Marcel Preda
|
78,748,640
| 6,898,424
|
Parse XML to String with <b> tags
|
<p>I'm only getting value1 when using .findall('string') and the rest is ignored. How do I get the whole value?
XML file input:</p>
<pre><code><resources>
<string name="key">value1 <b>value2</b>. value3</string>
</resources>
</code></pre>
<p>python code:</p>
<pre><code>import xml.etree.ElementTree as ET

tree = ET.parse(xml_file)
root = tree.getroot()

for string in root.findall('string'):
    name = string.get('name')
    value = string.text
</code></pre>
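This happens because `.text` only covers the text before the first child element (the `<b>` tag); the rest lives in the child's `.text` and `.tail`. `Element.itertext()` walks all of those pieces, which can then be joined:

```python
# .text stops at <b>; itertext() yields every text/tail fragment of the
# element in document order, recovering the full value.
import xml.etree.ElementTree as ET

xml = '<resources><string name="key">value1 <b>value2</b>. value3</string></resources>'
root = ET.fromstring(xml)
for string in root.findall('string'):
    name = string.get('name')
    value = ''.join(string.itertext())
    print(name, '->', value)  # key -> value1 value2. value3
```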
|
<python><xml><elementtree>
|
2024-07-15 07:48:36
| 1
| 421
|
Gorthez
|
78,748,601
| 1,308,590
|
python can not read json file with encoding = 'utf8'
|
<p>I cannot read the full text of this JSON file:</p>
<pre><code>{
"messages": [
{
"sender_name": "test",
"timestamp_ms": 1554347140802,
"content": "Ch\u00c3\u00a0o Anh/Ch\u00e1\u00bb\u008b, Anh/Ch\u00e1\u00bb\u008b vui l\u00c3\u00b2ng \u00c4\u0091\u00e1\u00bb\u0083 l\u00e1\u00ba\u00a1i S\u00e1\u00bb\u0090 \u00c4\u0090I\u00e1\u00bb\u0086N THO\u00e1\u00ba\u00a0I + T\u00c3\u008cNH TR\u00e1\u00ba\u00a0NG B\u00e1\u00bb\u0086NH \u00c4\u0091\u00e1\u00bb\u0083 D\u00c6\u00af\u00e1\u00bb\u00a2C S\u00c4\u00a8 CHUY\u00c3\u008aN M\u00c3\u0094N s\u00e1\u00ba\u00afp x\u00e1\u00ba\u00bfp t\u00c6\u00b0 v\u00e1\u00ba\u00a5n v\u00e1\u00bb\u0081 s\u00e1\u00ba\u00a3n ph\u00e1\u00ba\u00a9m, b\u00e1\u00bb\u0087nh t\u00c3\u00acnh c\u00e1\u00bb\u00a5 th\u00e1\u00bb\u0083 v\u00c3\u00a0 li\u00e1\u00bb\u0087u tr\u00c3\u00acnh ph\u00c3\u00b9 h\u00e1\u00bb\u00a3p cho Anh/Ch\u00e1\u00bb\u008b nh\u00c3\u00a9.",
"is_geoblocked_for_viewer": false
},
{
"sender_name": "",
"timestamp_ms": 1554334611125,
"content": "T\u00c3\u00b4i mu\u00e1\u00bb\u0091n \u00c4\u0091\u00e1\u00ba\u00b7t h\u00c3\u00a0ng",
"is_geoblocked_for_viewer": false
},
{
"sender_name": "test",
"timestamp_ms": 1554334610788,
"content": "Ch\u00c3\u00a0o Musickhc! Ch\u00c3\u00bang t\u00c3\u00b4i c\u00c3\u00b3 th\u00e1\u00bb\u0083 gi\u00c3\u00bap g\u00c3\u00ac cho b\u00e1\u00ba\u00a1n?",
"is_geoblocked_for_viewer": false
},
{
"sender_name": "test",
"timestamp_ms": 1554334609955,
"content": "Customer \u00c4\u0091\u00c3\u00a3 tr\u00e1\u00ba\u00a3 l\u00e1\u00bb\u009di tin nh\u00e1\u00ba\u00afn ch\u00c3\u00a0o m\u00e1\u00bb\u00abng t\u00e1\u00bb\u00b1 \u00c4\u0091\u00e1\u00bb\u0099ng c\u00e1\u00bb\u00a7a b\u00e1\u00ba\u00a1n. \u00c4\u0090\u00e1\u00bb\u0083 thay \u00c4\u0091\u00e1\u00bb\u0095i ho\u00e1\u00ba\u00b7c g\u00e1\u00bb\u00a1 l\u00e1\u00bb\u009di ch\u00c3\u00a0o n\u00c3\u00a0y, h\u00c3\u00a3y truy c\u00e1\u00ba\u00adp ph\u00e1\u00ba\u00a7n C\u00c3\u00a0i \u00c4\u0091\u00e1\u00ba\u00b7t tin nh\u00e1\u00ba\u00afn.",
"is_geoblocked_for_viewer": false
}
]
}
</code></pre>
<p>I am using this code:</p>
<pre><code>with open('message_1.json', 'r', encoding='utf-8') as file:
    data = json.loads(file.read())
    print('message', data)
file.close()
</code></pre>
<p>The result is
<code>{'messages': [{'sender_name': 'test', 'timestamp_ms': 1554347140802, 'content': 'ChÃ\xa0o Anh/Chá»\x8b, Anh/Chá»\x8b vui lòng Ä\x91á»\x83 lại Sá»\x90 Ä\x90Iá»\x86N THOáº\xa0I + TÃ\x8cNH TRáº\xa0NG Bá»\x86NH Ä\x91á»\x83 DƯỢC SĨ CHUYÃ\x8aN MÃ\x94N sắp xếp tư vấn vá»\x81 sản phẩm, bá»\x87nh tình cụ thá»\x83 vÃ\xa0 liá»\x87u trình phù hợp cho Anh/Chá»\x8b nhé.', 'is_geoblocked_for_viewer': False}, {'sender_name': '', 'timestamp_ms': 1554334611125, 'content': 'Tôi muá»\x91n Ä\x91ặt hÃ\xa0ng', 'is_geoblocked_for_viewer': False}, {'sender_name': 'test', 'timestamp_ms': 1554334610788, 'content': 'ChÃ\xa0o Musickhc! Chúng tôi có thá»\x83 giúp gì cho bạn?', 'is_geoblocked_for_viewer': False}, {'sender_name': 'test', 'timestamp_ms': 1554334609955, 'content': 'Customer Ä\x91ã trả lá»\x9di tin nhắn chÃ\xa0o mừng tá»± Ä\x91á»\x99ng cá»§a bạn. Ä\x90á»\x83 thay Ä\x91á»\x95i hoặc gỡ lá»\x9di chÃ\xa0o nÃ\xa0y, hãy truy cáº\xadp phần CÃ\xa0i Ä\x91ặt tin nhắn.', 'is_geoblocked_for_viewer': False}]}</code></p>
<p>Can someone help me read this file with <em>utf-8</em>?
Thanks</p>
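The file itself is not broken: its escapes encode UTF-8 *bytes* stored as individual Latin-1 code points (classic mojibake, likely produced by the export tool). Reading it with `utf-8` is correct; the garbling is inside the string values, and a Latin-1 round trip recovers the intended text:

```python
# Each garbled character is really one UTF-8 byte; encoding back to
# latin-1 restores the byte sequence, which then decodes as real UTF-8.
def fix_mojibake(s: str) -> str:
    return s.encode('latin-1').decode('utf-8')

print(fix_mojibake("Ch\u00c3\u00a0o"))                     # Chào
print(fix_mojibake("T\u00c3\u00b4i mu\u00e1\u00bb\u0091n"))  # Tôi muốn
```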
|
<python><json><utf-8>
|
2024-07-15 07:39:33
| 2
| 554
|
famfamfam
|
78,748,429
| 12,336,422
|
Apply permutation array on multiple axes in numpy
|
<p>Let's say I have an array of permutations <code>perm</code> which could look like:</p>
<pre class="lang-py prettyprint-override"><code>perm = np.array([[0, 1, 2], [1, 2, 0], [0, 2, 1], [2, 1, 0]])
</code></pre>
<p>If I want to apply it to one axis, I can write something like:</p>
<pre class="lang-py prettyprint-override"><code>v = np.arange(9).reshape(3, 3)
print(v[perm])
</code></pre>
<p>Output:</p>
<pre><code>array([[[0, 1, 2],
        [3, 4, 5],
        [6, 7, 8]],

       [[3, 4, 5],
        [6, 7, 8],
        [0, 1, 2]],

       [[0, 1, 2],
        [6, 7, 8],
        [3, 4, 5]],

       [[6, 7, 8],
        [3, 4, 5],
        [0, 1, 2]]])
</code></pre>
<p>Now I would like to apply it to two axes at the same time. I figured out that I can do it via:</p>
<pre class="lang-py prettyprint-override"><code>np.array([v[tuple(np.meshgrid(p, p, indexing="ij"))] for p in perm])
</code></pre>
<p>But I find it quite inefficient, because it has to create a mesh grid, and it also requires a for loop. I made a small array in this example but in reality I have a lot larger arrays with a lot of permutations, so I would really love to have something that's as quick and simple as the one-axis version.</p>
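The meshgrid and the loop can both be replaced by broadcasting the permutation array against itself: with a trailing axis inserted on one copy and a middle axis on the other, fancy indexing produces `result[k, i, j] == v[perm[k, i], perm[k, j]]` in one vectorized step.

```python
# Broadcast perm against itself: (P, N, 1) and (P, 1, N) index arrays
# yield a (P, N, N) result equivalent to the meshgrid-per-permutation loop.
import numpy as np

perm = np.array([[0, 1, 2], [1, 2, 0], [0, 2, 1], [2, 1, 0]])
v = np.arange(9).reshape(3, 3)

out = v[perm[:, :, None], perm[:, None, :]]

# sanity check against the loop version from the question
expected = np.array([v[tuple(np.meshgrid(p, p, indexing="ij"))] for p in perm])
print(np.array_equal(out, expected))  # True
```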
|
<python><arrays><numpy>
|
2024-07-15 06:58:46
| 2
| 733
|
sams-studio
|
78,748,223
| 1,516,331
|
Multi stage builds made my Python Docker image larger. Why?
|
<p>When using a normal one-stage Dockerfile, the final image is 216MB; but when using the multi-stage build approach shown below, I get a final image of 227MB. Why is this?</p>
<pre><code># This multi stage build Dockerfile actually makes the image larger! :(
FROM python:3.10.14-slim-bookworm AS base
LABEL authors="X"
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV ENVIRONMENT=production
ENV PYTHONPATH=/app
WORKDIR /app
FROM base AS build
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt && \
rm requirements.txt
FROM base AS final
# cleaning up unused files
RUN rm -rf /var/lib/apt/lists/*
COPY --from=build /usr/local/lib/python3.10/site-packages/ /usr/local/lib/python3.10/site-packages/
COPY ./src /app/src
# Expose port 8000 for FastAPI server
EXPOSE 8000
ENTRYPOINT ["python", "/app/src/main.py"]
</code></pre>
<p>The <code>requirements.txt</code> file only contains 4 packages:</p>
<pre><code>fastapi==0.111.0
loguru==0.7.2
aiohttp[speedups]==3.9.5
boto3==1.34.128
</code></pre>
|
<python><docker><dockerfile><fastapi><docker-multi-stage-build>
|
2024-07-15 05:48:25
| 1
| 3,190
|
CyberPlayerOne
|
78,747,915
| 13,994,829
|
How could langchain agent step by step in `astream_event()`?
|
<h3>Example Code</h3>
<pre class="lang-py prettyprint-override"><code>@tool
async def tool1():
    """tool1"""
    print('start1...')
    cmd = {
        'cmd': 'Scrape',
    }
    await asyncio.sleep(5)  # do something need long time.
    go2_command_queue.put(cmd)
    print('time1....')


@tool
async def tool2():
    """tool2"""
    print('start2...')
    print(go2_command_queue.qsize())
    print('time2....')


async def agent_completion(
    agent_executor,
    message: str,
    tools: List = None,
) -> AsyncGenerator:
    """Base on query to decide the tool which should use.
    Response with `async` and `streaming`.
    """
    tool_names = [tool.name for tool in tools]
    async for event in agent_executor.astream_events(
        {
            "input": message,
            "tools": tools,
            "tool_names": tool_names,
            "agent_scratchpad": lambda x: format_to_openai_tool_messages(x["intermediate_steps"]),
        },
        version='v2'
    ):
        kind = event['event']
        if kind == "on_chain_start":
            if (
                event["name"] == "Agent"
            ):
                yield (
                    f"\n### Agent: `{event['name']}`,Agent Input: `{event['data'].get('input')}`\n"
                )
        elif kind == "on_chat_model_stream":
            content = event["data"]["chunk"].content
            if content:
                yield content
        elif kind == "on_tool_start":
            yield (
                f"\n### Tool: `{event['name']}`,Tool Input: `{event['data'].get('input')}`\n"
            )
        elif kind == "on_tool_end":
            yield (
                f"\n### Tool Finished: `{event['name']}`,Tool Results: \n"
            )
            yield (
                f"`{event['data'].get('output')}`\n"
            )
        elif kind == "on_chain_end":
            if (
                event["name"] == "Agent"
            ):
                yield (
                    f"\n### Agent Finished: `{event['name']}`,Agent Results: \n"
                )
                yield (
                    f"{event['data'].get('output')['output']}\n"
                )


async def main():
    llm = ChatOpenAI(
        model='gpt-4o',
        temperature=0.,
        api_key=OPENAI_API_KEY,
        streaming=True,
        max_tokens=None,
    )
    tools = [
        tool1,
        tool2,
        tool3,
        ...
    ]
    agent = create_openai_tools_agent(llm, tools, AGENT_PROMPT)
    agent_executor = AgentExecutor(
        agent=agent,
        tools=tools,
        verbose=False,
        return_intermediate_steps=True
    )
    while True:
        user_message = input('Query: ')  # tool1 then tool2
        responses = agent_completion(agent_executor, user_message, tools)
        async for response in responses:
            print(response, flush=True)


if __name__ == '__main__':
    asyncio.run(main())
</code></pre>
<h3>Results</h3>
<pre><code>### Tool: `tool1`,Tool Input: `{}`
start1...
start2...
0
time2...
### Tool: `tool2`,Tool Input: `{}`
time1....
### Tool Finished: `tool1`,Tool Results:
`None`
### Tool Finished: `tool2`,Tool Results:
`None`
</code></pre>
<h3>Description</h3>
<p>If some of my tools have dependencies, for example: tool3 needs to wait for tool2 to complete before it can be executed. Is there a way to achieve this? Through <code>astream_event</code>, I noticed that different tools are executed almost concurrently in an asynchronous manner.</p>
<p>I also tried using the .stream method. Although this method can execute tools step by step, the task process is not output in a streaming manner, which is not what I want.</p>
|
<python><streaming><openai-api><langchain><agent>
|
2024-07-15 03:03:06
| 1
| 545
|
Xiang
|
78,747,896
| 3,099,733
|
some python modules cannot be recognized by pylance in vscode
|
<p>I have installed a python module from <a href="https://github.com/Isra3l/ligpargen" rel="nofollow noreferrer">source</a> with the following command</p>
<pre class="lang-bash prettyprint-override"><code>cd /path/to/ligpargen
pip install -e . --config-settings editable_mode=strict
</code></pre>
<p>According to <a href="https://stackoverflow.com/questions/76213501/python-packages-imported-in-editable-mode-cant-be-resolved-by-pylance-in-vscode">this</a>, it should be the right way to make Pylance find an editable module. But it turns out not to work at all.</p>
<p>I am sure that I am using the right virtual environment and I can import the module in interactive mode. I also checked the Python lib directory where the module is installed. As you can see in the screenshot, there is a <code>LigPargen-2.1.dist-info</code>, but there is no module named <code>ligpargen</code> like the others. I have no idea how to fix this.</p>
<p><a href="https://i.sstatic.net/6HPrPdtB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HPrPdtB.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><pylance>
|
2024-07-15 02:48:38
| 2
| 1,959
|
link89
|
78,747,766
| 826,112
|
Python bytecode, does compare_op pop the stack?
|
<p>I'm working with the following bytecode:</p>
<pre><code> 6 10 LOAD_FAST 0 (x)
12 LOAD_CONST 3 (255)
14 COMPARE_OP 2 (<)
18 POP_JUMP_IF_FALSE 27 (to 74)
</code></pre>
<p>LOAD_FAST pushes the value for x (zero in this case) onto the stack. LOAD_CONST pushes the value of 255 onto the stack. I cannot find documentation regarding what the less-than COMPARE_OP does with the stack, although the <a href="https://docs.python.org/3/library/dis.html#opcode-collections" rel="nofollow noreferrer">documentation</a> for POP_JUMP_IF_FALSE states that</p>
<blockquote>
<p>If STACK[-1] is false, increments the bytecode counter by delta. STACK[-1] is popped.</p>
</blockquote>
<p>What happens with the stack when the COMPARE_OP is executed?</p>
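COMPARE_OP pops both operands off the stack and pushes the boolean result, a net stack effect of -1, which `dis.stack_effect` can confirm directly (the oparg `2` matching `<` is how CPython 3.11 and earlier encode the comparison, as in the question's listing; the stack effect is the same regardless of oparg):

```python
# COMPARE_OP: pop two operands, push one result => net stack effect -1.
import dis

effect = dis.stack_effect(dis.opmap["COMPARE_OP"], 2)
print(effect)  # -1
```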
|
<python><bytecode>
|
2024-07-15 01:27:44
| 1
| 536
|
Andrew H
|
78,747,712
| 6,525,260
|
Find number of redundant edges in components of a graph
|
<p>I'm trying to solve <a href="https://leetcode.com/problems/number-of-operations-to-make-network-connected/description/" rel="nofollow noreferrer">problem 1319</a> on Leetcode, which is as follows:</p>
<blockquote>
<p>There are n computers numbered from 0 to n - 1 connected by ethernet cables connections forming a network where connections[i] = [ai, bi] represents a connection between computers ai and bi. Any computer can reach any other computer directly or indirectly through the network.</p>
<p>You are given an initial computer network connections. You can extract
certain cables between two directly connected computers, and place
them between any pair of disconnected computers to make them directly
connected.</p>
<p>Return the minimum number of times you need to do this in order to
make all the computers connected. If it is not possible, return -1.</p>
</blockquote>
<p>Thinking on this a little, I came up with the following non-working approach and associated code:</p>
<p>First, convert the edge list into an adjacency list of connections. Go to the first computer and see how many computers are accessible from that one (using e.g. DFS). Additionally, keep track of the number of connections that repeatedly try to access a visited node, indicating that there's a wire we can get rid of. This represents a connected component. Find the next non-visited node and repeat the same process. At the end, determine if the number of wires we counted is >= the number of connected components - 1.</p>
<pre class="lang-py prettyprint-override"><code>from typing import DefaultDict, List, Set
from collections import defaultdict
class Solution:
    def makeConnected(self, n: int, connections: List[List[int]]) -> int:
        def dfs(
            adj_list: DefaultDict[int, List[int]], computer: int, visited: Set[int]
        ) -> int:
            """Returns the number of removable wires from this connected component"""
            num_removable_wires = 0
            stack = [computer]
            while len(stack) > 0:
                current = stack.pop()
                # Already been here, so can remove this wire
                if current in visited:
                    num_removable_wires += 1
                    continue
                visited.add(current)
                if current in adj_list:
                    for neighbor in adj_list[current]:
                        stack.append(neighbor)
            return num_removable_wires

        adj_list = defaultdict(list)
        for connection in connections:
            adj_list[connection[0]].append(connection[1])
            # adj_list[connection[1]].append(connection[0])

        total_removable_wires = 0
        num_components = 0
        visited = set()
        for computer in adj_list.keys():
            if computer in visited:
                continue
            num_components += 1
            total_removable_wires += dfs(adj_list, computer, visited)

        # Add computers that are completely isolated
        num_components += n - len(visited)
        return (
            num_components - 1
            if total_removable_wires >= num_components - 1
            else -1
        )


if __name__ == "__main__":
    print(Solution().makeConnected(6, [[0, 1], [0, 2], [0, 3], [1, 2]]))
    print(
        Solution().makeConnected(
            11,
            [
                [1, 4],
                [0, 3],
                [1, 3],
                [3, 7],
                [2, 7],
                [0, 1],
                [2, 4],
                [3, 6],
                [5, 6],
                [6, 7],
                [4, 7],
                [0, 7],
                [5, 7],
            ],
        )
    )
</code></pre>
<p>For the first test case, this code works as expected. For the second, I realized that for certain vertices, e.g. 1, the only vertices accessible, directly or indirectly, are 4, 3, 7, and 6, since the edges are only placed in one direction in the adjacency list. The code then incorrectly determines that vertex 0 is part of a new component. To fix this, I tried uncommenting the second line of code when constructing the adjacency list, to add both sides of the same edge:</p>
<pre class="lang-py prettyprint-override"><code>for connection in connections:
    adj_list[connection[0]].append(connection[1])
    adj_list[connection[1]].append(connection[0])
</code></pre>
<p>However, while this fixes the second test case, it now breaks the first. Now, when the code reaches e.g. 3 from 0 and sees that 0 is a neighbor already visited, it incorrectly states that the edge is redundant even though it was just traversed on the way to 3.</p>
<p>How can I correctly count the number of redundant edges (or removable wires) in the context of this problem? <strong>Note that I realize there are better approaches in the Leetcode solutions tab that I could implement, but I was wondering what I am doing wrong for my solution attempt and whether it is possible to correct this existing approach.</strong></p>
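One way to sidestep the per-edge redundancy bookkeeping entirely (a hedged sketch, not the only possible fix to the existing approach): build the adjacency list undirected, count connected components with a visited-set DFS so no edge is double-counted, and note that "enough spare wires" reduces to `len(connections) >= n - 1`, since any connected graph on n nodes needs at least n - 1 edges.

```python
# Count components over an undirected adjacency list; the answer is
# components - 1 whenever there are at least n - 1 cables in total.
from collections import defaultdict

def make_connected(n, connections):
    if len(connections) < n - 1:
        return -1
    adj = defaultdict(list)
    for a, b in connections:
        adj[a].append(b)
        adj[b].append(a)
    visited = set()
    components = 0
    for start in range(n):
        if start in visited:
            continue
        components += 1
        stack = [start]
        while stack:
            cur = stack.pop()
            if cur in visited:
                continue
            visited.add(cur)
            stack.extend(adj[cur])
    return components - 1

print(make_connected(6, [[0, 1], [0, 2], [0, 3], [1, 2]]))  # -1
```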
|
<python><algorithm><graph-theory>
|
2024-07-15 00:44:18
| 3
| 11,847
|
Arnav Borborah
|
78,747,442
| 708,833
|
How can I set the display environment variable on a Raspberry Pi at boot?
|
<p>I have a Raspberry Pi 4 with a 1280x800 display and no keyboard or mouse. I ssh into it remotely over WiFi and am coding it to display images, ultimately on its own without my ssh into it. I installed feh and am using it in a python script to display an image:</p>
<pre><code>import subprocess
image = subprocess.Popen(["feh", "--hide-pointer", "-x", "-q", "-B", "black", "-g", "1280x800", "./test.jpg"])
</code></pre>
<p>I can get this code to work if I type into the command line</p>
<pre><code>export DISPLAY=:0.0
</code></pre>
<p>beforehand. However, I would like to have that automatically at boot so that I can run the python code at boot as well and have it run autonomously. I tried putting that line into /etc/profile, but it didn't work.</p>
<p>Sorry for my operating system naïveté--it's a real weakness. And thanks in advance for any help.</p>
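One alternative to editing shell profiles (a sketch; the demonstration child process stands in for the actual feh call): set DISPLAY in the Python script's own environment before launching feh, since child processes started with `subprocess` inherit the parent's environment.

```python
# Set DISPLAY before spawning; the child inherits it. A Python child
# process is used here for demonstration instead of feh.
import os
import subprocess
import sys

os.environ.setdefault("DISPLAY", ":0.0")  # only sets it if not already set

out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DISPLAY'])"],
    capture_output=True, text=True,
).stdout.strip()
print(out)
```

In the question's script this would mean placing the `os.environ` line before the `subprocess.Popen(["feh", ...])` call.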
|
<python><raspberry-pi><raspberry-pi4>
|
2024-07-14 20:58:19
| 2
| 4,741
|
Dribbler
|
78,747,409
| 2,988,730
|
Suppress stdout message from C/C++ library
|
<p>I am attempting to suppress a message that is printed to stdout by a library implemented in C.</p>
<p>My specific usecase is OpenCV, so I will use it for the MCVE below. The <a href="https://docs.opencv.org/4.10.0/d9/d0c/group__calib3d.html#ga1b976b476cd2083edd4323a34e9e1ffa" rel="nofollow noreferrer"><code>estimateChessboardSharpness</code></a> function has a printout when the grid size is too small (which happens <a href="https://github.com/opencv/opencv/blob/15783d65981d8978597c6b60e830e21e964cbdf9/modules/calib3d/src/chessboard.cpp#L3376C20-L3376C23" rel="nofollow noreferrer">here</a>). I made a <a href="https://github.com/opencv/opencv/pull/25908" rel="nofollow noreferrer">PR to fix it</a>, but in the meantime, I'd like to suppress the message. For example:</p>
<pre><code>import cv2
import numpy as np

img = np.zeros((512, 640), dtype='uint8')
corners = []
for i in range(10):
    for j in range(8):
        corner = (30 + 3 * j, 70 + 3 * i)
        if i and j:
            corners.append(corner)
        if (i % 2) ^ (j % 2):
            img[corner[0]:corner[0] + 3, corner[1]:corner[1] + 3] = 255
corners = np.array(corners)
</code></pre>
<pre><code>>>> cv2.estimateChessboardSharpness(img, (9, 7), corners)
calcEdgeSharpness: checkerboard too small for calculation.
((9999.0, 9999.0, 9999.0, 9999.0), None)
</code></pre>
<p>The line appears to be a simple <code>std::cout << ...</code>, so I have tried all of the following:</p>
<pre><code>from contextlib import redirect_stdout
from os import devnull
import sys

with redirect_stdout(None):
    cv2.estimateChessboardSharpness(img, (9, 7), corners)

with open(devnull, "w") as null, redirect_stdout(null):
    cv2.estimateChessboardSharpness(img, (9, 7), corners)

sys.stdout = open(devnull, "w")
cv2.estimateChessboardSharpness(img, (9, 7), corners)
</code></pre>
<p>I've even tried <a href="https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stderr" rel="nofollow noreferrer"><code>redirect_stderr</code></a> instead of <a href="https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout" rel="nofollow noreferrer"><code>redirect_stdout</code></a> just in case. I've also tried setting <code>OPENCV_LOG_LEVEL=SILENT</code> in bash and <code>os.environ["OPENCV_LOG_LEVEL"] = "SILENT"</code> in python before importing <code>cv2</code>, not that I expected stdout to be conflated with logging in this case.</p>
<p>In all cases, the message prints. How do I make it stop?</p>
|
<python><cpython>
|
2024-07-14 20:34:44
| 1
| 115,659
|
Mad Physicist
|
78,747,259
| 15,547,292
|
Possibility to implement multiple repr strategies?
|
<p>In Python, is there a way to implement multiple <code>repr</code> strategies on custom classes, to provide different representation styles, and that recursively on arbitrary data structures, so nested objects would be shown using the parent call's strategy, if available? Like this...</p>
<pre class="lang-py prettyprint-override"><code>class SomeClass:

    def __init__(self, objects):
        self._objects = objects

    def __repr__(self):
        return f"Recursive default repr: {self._objects}"

    def __alternate_repr__(self):
        return f"Recursive alternate repr: {self._objects}"


a = SomeClass(["list", "of", "values"])
b = SomeClass(dict(nested_class=a))
c = SomeClass([b, 1, "..."])

repr(c)            # c,b,a shown using __repr__()
alternate_repr(c)  # c,b,a shown using __alternate_repr__()
</code></pre>
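There is no built-in protocol for this, but one possible strategy dispatcher can be sketched as a free function that prefers `__alternate_repr__` when an object defines it and recurses through common containers itself, so nested objects inherit the caller's style (the names follow the question; the container handling is a deliberately minimal assumption):

```python
# alternate_repr: use __alternate_repr__ when defined, recurse through
# lists/dicts so the alternate style propagates, else fall back to repr().
def alternate_repr(obj):
    if hasattr(type(obj), "__alternate_repr__"):
        return obj.__alternate_repr__()
    if isinstance(obj, list):
        return "[" + ", ".join(alternate_repr(x) for x in obj) + "]"
    if isinstance(obj, dict):
        return "{" + ", ".join(
            f"{alternate_repr(k)}: {alternate_repr(v)}" for k, v in obj.items()
        ) + "}"
    return repr(obj)

class SomeClass:
    def __init__(self, objects):
        self._objects = objects
    def __repr__(self):
        return f"default({self._objects!r})"       # built-in recursion
    def __alternate_repr__(self):
        return f"alt({alternate_repr(self._objects)})"  # our recursion

a = SomeClass(["x"])
b = SomeClass([a, 1])
print(repr(b))            # default([default(['x']), 1])
print(alternate_repr(b))  # alt([alt(['x']), 1])
```

The default strategy propagates for free because `repr()` of containers already calls `repr()` on elements; the alternate strategy propagates because the dispatcher does its own container traversal.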
|
<python><repr>
|
2024-07-14 19:13:04
| 2
| 2,520
|
mara004
|
78,747,242
| 8,008,396
|
Why does a zig const reference cause a segmentation fault when passed to C-function?
|
<p>Consider the following file <code>hello.zig</code> which defines a basic Python C extension (and which works):</p>
<pre><code>const c = @cImport({
@cInclude("Python.h");
});
var module = c.PyModuleDef{
.m_name = "__name__",
.m_doc = "__doc__",
.m_size = -1,
};
export fn PyInit_hello() [*c]c.PyObject {
return c.PyModule_Create(&module);
}
</code></pre>
<pre class="lang-bash prettyprint-override"><code>$ zig build-lib -dynamic -femit-bin=hello.so -I/usr/include/python3.12 -L/usr/lib -lpython3.12 -lc hello.zig
$ python3.12 -c "import hello; print(hello.__doc__)"
__doc__
</code></pre>
<p>Now, we instead define our module with const instead of var, and we get a segmentation fault:</p>
<pre><code>const c = @cImport({
@cInclude("Python.h");
});
const module = c.PyModuleDef{
.m_name = "__name__",
.m_doc = "__doc__",
.m_size = -1,
};
export fn PyInit_hello() [*c]c.PyObject {
return c.PyModule_Create(@constCast(&module));
}
</code></pre>
<pre class="lang-bash prettyprint-override"><code>$ zig build-lib -dynamic -femit-bin=hello.so -I/usr/include/python3.12 -L/usr/lib -lpython3.12 -lc hello.zig
$ python3.12 -c "import hello"
[1] 791627 segmentation fault (core dumped)
</code></pre>
<p>Why is this happening? Does zig use some kind of read-only memory region for constants, and trigger a segmentation fault if some C-code tries to modify it or something like this?</p>
<p>My zig-version is <code>0.14.0-dev.244+0d79aa017</code>, and I'm running on x86-64 Linux, kernel version 6.6.32-1</p>
<p>Also, is there a way of doing something like the following inline without having to use <code>@constCast</code> (Since this leads to a segmentation fault)</p>
<pre><code>const c = @cImport({
@cInclude("Python.h");
});
export fn PyInit_hello() [*c]c.PyObject {
return c.PyModule_Create(@constCast(&c.PyModuleDef{
.m_name = "__name__",
.m_doc = "__doc__",
.m_size = -1,
}));
}
</code></pre>
|
<python><c><zig>
|
2024-07-14 19:04:07
| 1
| 2,654
|
Tobias Bergkvist
|
78,746,962
| 2,749,397
|
Making a poster: how to place Artists in a Figure using mm and top-left origin
|
<p>I want to prepare a "portrait" A0 poster (841mm × 1189mm) placing different Artists (no Axes, but Rectangles and Texts) specifying their positions in mm from the top-left corner of the figure.</p>
<p>I have already figured out a possible procedure, e.g., if I want a rectangle, 40mm × 18mm, its top left corner positioned at (230mm, 190mm) from the top-left corner of the figure, I'd write</p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure(figsize=(841/25.4, 1189/25.4), layout='none')
fig.patches.append(plt.Rectangle((230, 190), 40, 18, transform=???))
# -------------------------------------------------------------///
fig.savefig('A0.pdf')
</code></pre>
<p>While I know that <code>x_fig = x_mm/841</code> and <code>y_fig = 1 - y_mm/1189</code>, I don't know how to define a proper transform to have the coordinates specified the way I'd like them.</p>
<p>How can I define a proper transform?</p>
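<p>My best guess so far is to compose an <code>Affine2D</code> (scale by the page size in mm, flip the y axis) with <code>fig.transFigure</code>, but I'm not confident this is the idiomatic approach:</p>

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
from matplotlib.transforms import Affine2D

W_MM, H_MM = 841, 1189  # portrait A0 in mm

fig = plt.figure(figsize=(W_MM / 25.4, H_MM / 25.4))

# mm from the top-left corner -> figure fraction -> display coordinates
mm = Affine2D().scale(1 / W_MM, -1 / H_MM).translate(0, 1) + fig.transFigure

# 40mm x 18mm rectangle whose top-left corner sits at (230mm, 190mm)
fig.patches.append(plt.Rectangle((230, 190), 40, 18, transform=mm))
# then fig.savefig('A0.pdf') as in my snippet above
```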
|
<python><matplotlib><coordinates><transform>
|
2024-07-14 17:03:20
| 1
| 25,436
|
gboffi
|
78,746,932
| 11,932,910
|
Can "Go to Definition" go to a concrete class in VSCode / Python (Pylance) during debugging?
|
<p>I define an attribute of type <code>MyAbstractClass</code> (abstract class), which takes a value of a concrete subclass <code>MyConcreteClass</code>. When I hit "Go to Definition" in VSCode at <code>poke</code> in <code>self.item.poke()</code>, it takes me to <code>MyAbstractClass.poke()</code>, which is expected.</p>
<p>However, at runtime in the debug mode, when <code>container.item</code> is bound to an instance of <code>MyConcreteClass</code>, the IDE could do better and take me to <code>MyConcreteClass.poke()</code>. Are there any easy ways to make this happen?</p>
<p>All I want is more convenience when editing and debugging my code.</p>
<p>One option is to replace <code>item : MyAbstractClass</code> with <code>item : Union[MyAbstractClass, MyConcreteClass, MyConcreteClass2, ...]</code>, in which case "Go to Definition" will produce a pop up with all listed methods, which would allow me to jump through them. However, having to manually list all possible subclasses defeats modularity, also I have to check the object's concrete type myself. Is there a solution that automates this and takes advantage of Python dynamic resolution infrastructure?</p>
<p>Another option is to remove all type hints. Then "Go to Definition" at debug mode jumps directly to the concrete method implementation. This is at the cost of it not working at all, unless the debugger is running and <code>container.item</code> is bound.</p>
<p>Neither of the options is optimal. I want to have the best in both situations: jumping to the abstract method when editing offline and jumping to the concrete method when debugging.</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
class MyAbstractClass(ABC):
@abstractmethod
def __init__(self):
pass
@abstractmethod
def poke(self):
pass
class MyConcreteClass(MyAbstractClass):
def __init__(self):
pass
def poke(self):
print("Poked!")
class MyContainerClass():
item : MyAbstractClass
def __init__(self, item: MyAbstractClass):
self.item = item
def poke_item(self):
self.item.poke()
item = MyConcreteClass()
container = MyContainerClass(item=item)
container.poke_item()
</code></pre>
|
<python><visual-studio-code><pylance><dynamic-dispatch>
|
2024-07-14 16:47:24
| 0
| 381
|
paperskilltrees
|
78,746,708
| 6,068,731
|
Create a boolean array with True for any value between two entries in a certain array
|
<p>I have a tensor called <code>idx</code> which has integers from <code>0</code> to <code>27</code>, for instance:</p>
<pre><code>idx = torch.tensor([9, 0, 6, 5, 2, 1, 10, 18, 26, 0, 11])
</code></pre>
<p>I want to generate another array/tensor which has boolean values. I want the value to be True when it is strictly between <code>1</code> (on the left) and <code>0</code> (on the right) or the end of the tensor:</p>
<pre><code>flag = torch.tensor([False, False, False, False, False, False, True, True, True, False, False])
</code></pre>
<p>Notice that if the <code>0</code> does not appear after the <code>1</code>, I still want <code>True</code> until the end of the tensor. That is,</p>
<pre><code>idx = torch.tensor([9, 0, 6, 5, 2, 1, 10, 18])
flag = torch.tensor([False, False, False, False, False, False, True, True])
</code></pre>
<p>Is there a fast way to do this without an expensive for loop? Or if the for-loop is unavoidable, what's the most efficient way to do this?</p>
<p>Importantly, my <code>idx</code> will always have <code>0</code> and <code>1</code> at regular intervals and they will always alternate. In particular, there are always exactly <code>3</code> (in this case) elements between <code>1</code> and <code>0</code> or between <code>0</code> and <code>1</code>.</p>
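<p>For reference, the best vectorized idea I've come up with forward-fills the most recent 0/1 marker using a cumulative max over marker positions. Here it is sketched in NumPy (I assume the torch version would swap in <code>torch.cummax</code> and <code>torch.clamp</code>), though I'd welcome something simpler:</p>

```python
import numpy as np

def flag_between(idx):
    """True strictly after a 1 and strictly before the next 0 (or the end)."""
    idx = np.asarray(idx)
    n = idx.shape[0]
    # +1 at every opening 1, -1 at every closing 0, 0 elsewhere
    mark = np.where(idx == 1, 1, np.where(idx == 0, -1, 0))
    # index of the most recent marker at or before each position (-1 if none)
    last = np.maximum.accumulate(np.where(mark != 0, np.arange(n), -1))
    # value of that most recent marker (treated as 0 before any marker is seen)
    filled = np.where(last >= 0, mark[np.clip(last, 0, None)], 0)
    return (filled == 1) & (mark == 0)

idx = np.array([9, 0, 6, 5, 2, 1, 10, 18, 26, 0, 11])
print(flag_between(idx))
```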
|
<python><numpy><pytorch>
|
2024-07-14 15:06:50
| 2
| 728
|
Physics_Student
|
78,746,691
| 2,863,527
|
Django reversed for loop slice
|
<p>I am currently trying to reverse a slice of a list going from 0 to 11</p>
<pre><code>class Game(models.Model):
board = models.JSONField(default=list)
game = Game(board=[4] * 12, scores=[0, 0], current_player=0, game_over=False)
</code></pre>
<p>Here is part of <code>index.html</code></p>
<pre><code><div>
{% for i in game.board|slice:"6:12" reversed %}
<a href="{% url 'make_move' forloop.counter0 %}" > {{ i }} </a>
{% endfor %}
</div>
</code></pre>
<p>Here is what I have in <code>views.py</code></p>
<pre><code>def make_move(request, pit):
print(f"Make move called. Pit: {pit}")
</code></pre>
<p><code>pit</code> will always print from 0 to 5, when I expect it to print 11, 10, 9, etc</p>
<p>It feels that <code>slice</code> and <code>reversed</code> have no effect on <code>i</code> in this case</p>
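<p>To spell out expected vs actual in plain Python terms:</p>

```python
board = [4] * 12

# The original indices I want the links to carry: 11 down to 6
expected_pits = list(range(11, 5, -1))

# What forloop.counter0 actually yields: it just counts iterations
# from 0, regardless of any slicing or reversal of the sequence
counter0 = list(range(len(board[6:12])))

print(expected_pits)  # [11, 10, 9, 8, 7, 6]
print(counter0)       # [0, 1, 2, 3, 4, 5]
```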
|
<python><django><slice><reverse>
|
2024-07-14 15:00:05
| 1
| 400
|
trexgris
|
78,746,642
| 4,379,669
|
Get hierarchy context in a Strawberry custom field resolver
|
<h3>TL;DR</h3>
<p>I need to access information from the "parent" of an object whose field I'm calculating in a custom resolver, but that information does not seem to be available during the resolver's execution.</p>
<hr/>
<p>Assuming I have 2 objects that have a many-to-many relation, describing which clients accessed which resources, something like (simplified example) -</p>
<pre><code>// graphql query
{
consumers {
node {
id
resources {
node {
id
}
}
}
}
}
</code></pre>
<p>that might return something like (indicating that 2 consumers accessed the same single resource)</p>
<pre class="lang-json prettyprint-override"><code>{
"data": {
"consumer": {
"node": [
{
"id": "consumer1",
"resource": {
"node": [{ "id": "resource1" }]
}
},
{
"id": "consumer2",
"resource": {
"node": [{ "id": "resource1" }]
}
}
]
}
}
}
</code></pre>
<p>And assuming I have a DB table that looks something like this, which maps the last interaction of each consumer-resource pair.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>consumer_id</th>
<th>resource_id</th>
<th>last_accessed</th>
</tr>
</thead>
<tbody>
<tr>
<td>consumer1</td>
<td>resource1</td>
<td>08/05/2022</td>
</tr>
<tr>
<td>consumer2</td>
<td>resource1</td>
<td>11/03/2024</td>
</tr>
</tbody>
</table></div>
<p>I now want to create a custom resolver to add a <code>lastAccessed</code> field to the resource object, which will be calculated by fetching the <code>last_accessed</code> field from the above table.</p>
<p>I started to write a custom resolver that looks something like this -</p>
<pre class="lang-py prettyprint-override"><code>def resolve(parent: Any, info: Info):
# `parent` here is the resource object
resource_id = getattr(parent, 'id', None)
consumer_id = # ??? this is what I need to somehow get
# ... get `last_accessed` from DB by resource_id + consumer_id
</code></pre>
<p>But I can't figure out how to get the ID of the consumer I'm "currently under" in the calculation. For the example query above, my resolver will get called twice, both with the same resource - which is fine - but there is no info in the <code>parent</code> or <code>context</code> that I can use to figure out under which <code>consumer</code> I am.</p>
<p>I can use the <code>parent</code> parameter to access the list of all consumers that the resource is linked to, but the list will include both consumers in both invocations of the resolver, so this is not helpful.</p>
|
<python><graphql><strawberry-graphql>
|
2024-07-14 14:39:33
| 1
| 662
|
Nimrod Dolev
|
78,746,578
| 1,084,416
|
How to use standard tools to package and install a zipped python package of pyc files and a .pth file
|
<p>I have a Python package which I have zipped using <a href="https://docs.python.org/3/library/zipfile.html#pyzipfile-objects" rel="nofollow noreferrer"><code>zipfile.PyZipFile</code></a>, so I now have a <code>.zip</code> containing only <code>.pyc</code> files and sub-packages for a particular version of Python.</p>
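<p>For concreteness, the archive is built roughly like this (a self-contained toy version using a throwaway package; the real package obviously has more content):</p>

```python
import os
import tempfile
import zipfile

# Build a throwaway package on disk
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypackage")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("ANSWER = 42\n")

# PyZipFile compiles the .py sources and stores only the .pyc files
zip_path = os.path.join(root, "mypackage.zip")
with zipfile.PyZipFile(zip_path, "w") as zf:
    zf.writepy(pkg)

print(zipfile.ZipFile(zip_path).namelist())
```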
<p>I can manually place it into <code>site-packages</code> with a <code>.pth</code> file and it all works nicely.</p>
<p>I'd like to be able to automate this process, using the standard tools.</p>
<p>I used to have a <code>pyproject.toml</code> file which built a <code>.pyd</code> file with <code>nuitka</code>:</p>
<pre><code>[build-system]
requires = ["setuptools", "wheel", "nuitka", "toml"]
build-backend = "nuitka.distutils.Build"
</code></pre>
<p>I don't want to build a <code>.pyd</code> file now. It's too large, takes too long to build, and runs slower. I only want minimal obfuscation to deter casual tinkering by clients, I know decompyle-ation is possible, and it doesn't matter to me.</p>
<p>I think what I need is little different to compiling and distributing a <code>.pyd</code> file, but I don't know where to insert my <code>.zip</code> into the build process.</p>
<p>I'm not interested in <strong>why</strong> I shouldn't be trying to do this.</p>
<p>As with most things software, the question isn't <strong>if</strong> something is possible, but <strong>how</strong> it is achievable.</p>
|
<python><build><python-packaging><pyproject.toml>
|
2024-07-14 14:06:38
| 0
| 24,283
|
Open AI - Opting Out
|
78,746,094
| 8,543,025
|
Plotly Express: Remove Trendline from Marginal Distribution Figures
|
<p>I'm looking for a "clean" way to remove the trendline from the marginal-distribution subplot created using plotly-express. I know it's a bit unclear, so please look at the following example:<br />
Generating some fake data:</p>
<pre><code>np.random.seed(42)
data = pd.DataFrame(np.random.randint(0, 100, (100, 4)), columns=["feature1", "feature2", "feature3", "feature4"])
data["label"] = np.random.choice(list("ABC"), 100)
data["is_outlier"] = np.random.choice([True, False], 100)
</code></pre>
<p>Creating a scatter plot with both <code>marginal</code> and <code>trendline</code> options:</p>
<pre><code>fig = px.scatter(
data, x="feature1", y="feature2",
color="label", symbol="is_outlier", symbol_map={True: "x", False: "circle"},
log_x=False, marginal_x="box",
log_y=False, marginal_y="box",
trendline="ols", trendline_scope="overall", trendline_color_override='black',
trendline_options=dict(log_x=False, log_y=False),
)
</code></pre>
<p>This yields a figure with a trendline in all 3 panels:<br />
<a href="https://i.sstatic.net/2f3NVG5M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2f3NVG5M.png" alt="original fig" /></a></p>
<p>I looked into the <code>fig.data</code> struct and found that the trendlines are the last 3 objects in it, and the last 2 are the lines appearing in the top & right panels. Removing those objects from the structs will result in removing the lines from those panels. Seen here:</p>
<pre><code>fig2 = copy.deepcopy(fig)
fig2.data = fig2.data[:-2]
</code></pre>
<p><a href="https://i.sstatic.net/6eikurBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6eikurBM.png" alt="last 2 objects removed from original fig.data" /></a></p>
<p>This creates a new issue, because it also removes <code>trendline</code> from the legend, which is not a behavior I'm happy with. So I need to first update the 3rd-to-last object (main panel's trendline) to have <code>showlegend=True</code> attribute:</p>
<pre><code>fig3 = copy.deepcopy(fig)
fig3.data[-3].showlegend = True
fig3.data = fig3.data[:-2]
</code></pre>
<p>This finally gives me the figure I wanted:<br />
<a href="https://i.sstatic.net/eAEZcHAv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAEZcHAv.png" alt="last 2 objects removed from original fig.data and trendline included in legend" /></a></p>
<p>So I do have a solution, but it requires "manhandling" the <code>fig</code> object.<br />
Is there a better, cleaner way of achieving the same final figure?</p>
<p>###############<br />
Full code:</p>
<pre><code>import copy
import numpy as np
import pandas as pd
import plotly.io as pio
import plotly.express as px
pio.renderers.default = "browser"
np.random.seed(42)
data = pd.DataFrame(np.random.randint(0, 100, (100, 4)), columns=["feature1", "feature2", "feature3", "feature4"])
data["label"] = np.random.choice(list("ABC"), 100)
data["is_outlier"] = np.random.choice([True, False], 100)
fig = px.scatter(
data, x="feature1", y="feature2",
color="label", symbol="is_outlier", symbol_map={True: "x", False: "circle"},
log_x=False, marginal_x="box",
log_y=False, marginal_y="box",
trendline="ols", trendline_scope="overall", trendline_color_override='black',
trendline_options=dict(log_x=False, log_y=False),
)
fig.show()
fig2 = copy.deepcopy(fig)
fig2.data = fig2.data[:-2]
fig2.show()
fig3 = copy.deepcopy(fig)
fig3.data[-3].showlegend = True
fig3.data = fig3.data[:-2]
fig3.show()
</code></pre>
|
<python><plotly>
|
2024-07-14 10:08:15
| 1
| 593
|
Jon Nir
|
78,745,907
| 2,604,247
|
Is There a Way to Expose a Plot Generated Using a Pandas Dataframe as a Jupyter Notebook as an App to the End User?
|
<h5>Current Workflow</h5>
<p>Right now, I, as an ML engineer, am using a Jupyter notebook to plot some pandas dataframes. Basically, inside the notebook, I am setting some hard-coded configurations like</p>
<pre><code>k_1:float=2.3
date_to_analyse:str='2024-06-17' # 17th June
region_to_analyse:str='montana'
...
</code></pre>
<p>Based on hardcoded parameters like the above, the notebook (along with some helper functions) runs a query against my company's data warehouse in Google BigQuery, crunches a few numbers, applies some business logic, and generates a pandas dataframe of transaction numbers minute by minute throughout the day.</p>
<p>The plot of the pandas dataframe is what's important to the business user, which I show to them as part of our model evaluation metric.</p>
<p>The logic is straightforward enough, and captured in the figure below.</p>
<p><a href="https://i.sstatic.net/x7J3zziI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/x7J3zziI.png" alt="enter image description here" /></a></p>
<h5>Objective</h5>
<p>I am tasked with making this into a self-service application directly usable by the business team, where a user can</p>
<ul>
<li>input the above values via a simple UI</li>
<li>the plot appears on the screen.</li>
</ul>
<p>I don't have a dedicated frontend guy, neither do I have frontend experience myself. So was just wondering whether there is a simple enough solution (best if serverless, and part of the GCP eco-system) that can accomplish this?</p>
<p>I know about Google Colab notebooks (basically, Jupyter notebooks, right?), but have not tried them. Can they provide a simple way to expose the notebook's functionality (generate the plots from user-provided configs)?</p>
|
<python><pandas><google-bigquery><jupyter-notebook><google-colaboratory>
|
2024-07-14 08:36:07
| 1
| 1,720
|
Della
|
78,745,757
| 6,017,833
|
Redisearch not working with wildcard prefix
|
<p>I am trying to understand why <code>index_test</code> works but <code>index_test_wildcard</code> does not. The index created in the latter does not match the records created within the function, despite the prefix appearing to follow the wildcard syntax. Any help would be appreciated. Thanks.</p>
<pre class="lang-py prettyprint-override"><code>from redis.commands.json.path import Path
from redis.commands.search.field import TagField, TextField, NumericField
def index_test(r: Redis):
r.json().set('doc:data:1', Path.root_path(), {
"title": "Example Title",
"views": 150,
"tags": ["tutorial", "redis"]
})
# Define the schema and index definition
schema = [
TextField('$.title', as_name='title', weight=5.0),
NumericField('$.views', as_name='views'),
TagField('$.tags', as_name='tags', separator=',')
]
definition = IndexDefinition(prefix=['doc:data:'], index_type=IndexType.JSON)
# Create the index
r.ft("index_test").create_index(schema, definition=definition)
def index_test_wildcard(r: Redis):
r.json().set('doc:1:data:1', Path.root_path(), {
"title": "Example Title",
"views": 150,
"tags": ["tutorial", "redis"]
})
# Define the schema and index definition
schema = [
TextField('$.title', as_name='title', weight=5.0),
NumericField('$.views', as_name='views'),
TagField('$.tags', as_name='tags', separator=',')
]
definition = IndexDefinition(prefix=['doc:*:data:'], index_type=IndexType.JSON)
# Create the index
r.ft("index_test_wildcard").create_index(schema, definition=definition)
redis_client = initialise_redis()
index_test(redis_client)
index_test_wildcard(redis_client)
</code></pre>
<p>Here are the results:</p>
<pre class="lang-bash prettyprint-override"><code>127.0.0.1:6379> FT.SEARCH index_test *
1) (integer) 1
2) "doc:data:1"
3) 1) "$"
2) "{\"title\":\"Example Title\",\"views\":150,\"tags\":[\"tutorial\",\"redis\"]}"
127.0.0.1:6379> FT.SEARCH index_test_wildcard *
1) (integer) 0
</code></pre>
|
<python><json><redis>
|
2024-07-14 07:11:47
| 1
| 1,945
|
Harry Stuart
|
78,745,568
| 4,130,242
|
Using scenedetect to spot subtle cut edits in a video?
|
<p>I'm trying to use the Scenedetect Python package to detect subtle cuts in a video. For example, I'm looking at this video:</p>
<p><a href="https://www.youtube.com/watch?v=fNb_-Tmxq8Q" rel="nofollow noreferrer">https://www.youtube.com/watch?v=fNb_-Tmxq8Q</a></p>
<p>I have downloaded the video and run the following on the mp4 file:</p>
<pre><code> --input Bruised\ Makeup\ Tutorial\ \[fNb_-Tmxq8Q\].mp4 detect-adaptive list-scenes
</code></pre>
<p>SceneDetect can pick up the more obvious cuts. However, there are some cuts it's missing: for example the quick cut at around <a href="https://youtu.be/fNb_-Tmxq8Q?t=5" rel="nofollow noreferrer">0:05</a> and the blur at around <a href="https://youtu.be/fNb_-Tmxq8Q?t=16" rel="nofollow noreferrer">0:16</a>. I generated the <code>--stats</code> file, but it's not clear to me how to define those cuts in a way that will capture them but avoid false positives. Any suggestions for how to do this?</p>
|
<python><video-processing>
|
2024-07-14 04:57:05
| 1
| 1,637
|
garson
|
78,745,394
| 10,082,415
|
Python/Regex: Finding a specific pattern to update and then placing it back in the original text
|
<p>I have a long file containing thousands of lines; a couple of samples are shown below:</p>
<pre><code>\begin{align*}
H_0 \amp : \mu_1 = \mu_2 = \mu_3 = \mu_4 = \mu_5 \\
H_1 \amp : \text{Some text} \\
H_2 \amp : \text{More text...} \\
\end{align*}
\begin{table}[htb]
\centering
\begin{tabular}{cc}
Mean \amp = \amp the mean value $\mu$ \\
Median \amp = \amp the median value $\median$ \\
Mode \amp = \amp the mode value $\mode$ \\
\end{tabular}
\end{table}
</code></pre>
<p>The objective is to turn <code>\begin{align*}...\end{align*}</code> into</p>
<pre><code><md>
<mrow>H_0 \amp : \mu_1 = \mu_2 = \mu_3 = \mu_4 = \mu_5</mrow>
<mrow>H_1 \amp : \text{Some text}</mrow>
<mrow>H_2 \amp : \text{More text...}</mrow>
</md>
</code></pre>
<p>and <code>\begin{table}[htb]...\end{table}</code> to</p>
<pre><code><table>
<tabular halign="center">
<row header="yes" bottom="minor" >
<cell>Mean</cell>
<cell>=</cell>
<cell>the mean value $\mu$</cell>
</row>
<row>
<cell>Median</cell>
<cell>=</cell>
<cell>the mode value $\mode$</cell>
</row>
<row>
<cell>Mode</cell>
<cell>=</cell>
<cell>the mode value $\mode$</cell>
</row>
</tabular>
</table>
</code></pre>
<p>I am trying to get <code>\begin{align*}</code> working and haven't started on <code>\begin{table}</code> yet. I have made a script for it, but it doesn't work as expected. I believe it is because I am using <code>re.escape(...)</code>. There are too many unnecessary <code>\</code> characters generated. I want to eliminate the extra <code>\</code>'s and also remove <code>\begin{align*}</code> along with <code>\end{align*}</code> during the process. Any assistance is appreciated!</p>
<pre><code><md><mrow>\begin\{align\*\}\
\ \ H_0\ \amp\ :\ \mu_1\ =\ \mu_2\ =\ \mu_3\ =\ \mu_4\ =\ \mu_5\ </mrow><mrow>\
\ \ H_1\ \amp\ :\ \text\{Some\ text\}\ </mrow><mrow>\ \
\ \ H_2\ \amp\ :\ \text\{More\ text\.\.\.\}\ </mrow>\ \
\end\{align\*\}</md>
\begin{table}[htb]
\centering
\begin{tabular}{cc}
Mean \amp = \amp the mean value $\mu$ \\
Median \amp = \amp the median value $\median$ \\
Mode \amp = \amp the mode value $\mode$ \\
\end{tabular}
\end{table}
</code></pre>
<pre><code>import re
my_file = open("sample.txt", "r")
data: str = my_file.read()
result: str = data
original = re.findall(r'\\begin{align\*}[\s\S]*\\end{align\*}', data,)
modified = re.findall(r'\\begin{align\*}[\s\S]*\\end{align\*}', data,)
for i in range(len(modified)):
# append the first mrow of the <md> tag
modified[i] = r'<mrow>' + modified[i]
# replace \\ with a closing and opening of </mrow> and <mrow>.
modified[i] = str(modified[i]).replace(r'\\', r'</mrow><mrow>')
#wrap everything with the math display environment
modified[i] = '<md>' + modified[i]+r'</md>'
# Remove the last <mrow> as it is an extra
modified[i] =(modified[i][::-1].replace(r'<mrow>'[::-1], ''[::-1], 1))[::-1]
result = re.sub(re.escape(original[i]), re.escape(modified[i]), result)
# print(modified[i])
# print(original[i])
print(result)
</code></pre>
|
<python><regex>
|
2024-07-14 02:28:55
| 2
| 1,003
|
M. Al Jumaily
|
78,745,180
| 383,958
|
Pytorch: matrices are equal but have different result
|
<p>How is this possible?</p>
<pre><code>>>> torch.all(w1 == w2)
tensor(True)
>>> torch.all(i1 @ w1.T == i1 @ w2.T)
tensor(False)
</code></pre>
<p>The matrices are all the same dtype, on the same device, etc:</p>
<pre><code>>>> i.dtype, w1.dtype, w2.dtype
(torch.float32, torch.float32, torch.float32)
>>> i.device, w1.device, w2.device
(device(type='cpu'), device(type='cpu'), device(type='cpu'))
>>> i.shape, w1.shape, w2.shape
(torch.Size([3, 4096]), torch.Size([14336, 4096]), torch.Size([14336, 4096]))
>>> diff = (i @ w1.T) - (i @ w2.T)
>>> diff
tensor([[ 1.6391e-07, -5.3644e-07, 3.8743e-07, ..., 4.7684e-07,
-5.9605e-08, -3.8743e-07],
[ 3.5763e-07, 2.9802e-08, -4.7684e-07, ..., 8.6427e-07,
8.9407e-08, -1.1921e-07],
[ 3.5763e-07, 3.4273e-07, -4.4703e-08, ..., 2.1607e-07,
2.3656e-07, 7.4506e-08]], grad_fn=<SubBackward0>)
>>> torch.max(torch.abs(diff))
tensor(7.1526e-06, grad_fn=<MaxBackward1>)
>>> torch.all(w1 - w2 == 0)
tensor(True)
</code></pre>
<p>Seeing <a href="https://stackoverflow.com/questions/63824484/pytorch-same-input-different-output-not-random">PyTorch Same Input Different Output (Not Random)</a>, I also set:</p>
<pre><code>>>> torch.manual_seed(0)
>>> torch.backends.cudnn.deterministic = True
>>> torch.backends.cudnn.benchmark = False
</code></pre>
<p>This didn't change anything.</p>
<p>For context, I discovered this <a href="https://github.com/TransformerLensOrg/TransformerLens/compare/main...joelburget:TransformerLens:mixtral-1l" rel="nofollow noreferrer">while debugging</a> a difference between two different implementations of <a href="https://huggingface.co/mistralai/Mixtral-8x7B-v0.1" rel="nofollow noreferrer">Mixtral</a>. So <code>i</code> corresponds to what's called <code>current_state</code> and <code>w1</code> is <code>expert_mlp.W_gate.T</code>, while <code>w2</code> is the corresponding matrix from the Huggingface implementation of Mixtral (where it's called <code>w1</code>, confusingly).</p>
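<p>For what it's worth, I do understand that float32 addition isn't associative, so a kernel that sums in a different order can legitimately produce differences on the ~1e-7 scale seen above; a tiny standalone illustration of order sensitivity:</p>

```python
import numpy as np

a = np.float32(1e8)
b = np.float32(-1e8)
c = np.float32(0.1)

# Same three numbers, two grouping orders, different float32 results:
left = (a + b) + c   # 0.0 + 0.1 -> ~0.1
right = a + (b + c)  # c is absorbed into -1e8 first, so the sum is 0.0
print(left, right)
```

<p>What I don't understand is why two matmuls over bitwise-equal tensors would end up using different summation orders at all.</p>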
|
<python><pytorch>
|
2024-07-13 23:18:44
| 1
| 1,438
|
Joel Burget
|
78,745,143
| 1,107,474
|
Error "ModuleNotFoundError: No module named 'pylibpcap'"
|
<p>I'm trying to use this Python libpcap library:</p>
<p><a href="https://pypi.org/project/python-libpcap/" rel="nofollow noreferrer">https://pypi.org/project/python-libpcap/</a></p>
<p>I have installed using:</p>
<pre><code>sudo apt-get install libpcap-dev
pip3 install python-libpcap
</code></pre>
<p>but when I run the capture example code in a script using <code>sudo python3</code>:</p>
<pre><code>from pylibpcap.pcap import sniff
for plen, t, buf in sniff("my_interface", count=-1, promisc=1, out_file="pcap.pcap"):
print("[+]: Payload len=", plen)
print("[+]: Time", t)
print("[+]: Payload", buf)
</code></pre>
<p>I get this error:</p>
<pre><code> from pylibpcap.pcap import sniff
ModuleNotFoundError: No module named 'pylibpcap'
</code></pre>
<p>How can I fix this/what's wrong?</p>
<p>I'm on Ubuntu 22.04.</p>
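<p>If it helps diagnose, this is the check I can run both with and without <code>sudo</code> to see whether the two invocations resolve to the same interpreter and module search path (my guess is they may differ):</p>

```python
import sys

# Which interpreter is running and where it searches for modules;
# compare `python3 thisscript.py` vs `sudo python3 thisscript.py`
print(sys.executable)
print(sys.prefix)
print(sys.path)
```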
|
<python><pip><libpcap>
|
2024-07-13 22:44:18
| 2
| 17,534
|
intrigued_66
|
78,744,986
| 4,427,375
|
How do you compile OpenCV with Python with CUDA Support on Rocky Linux 9?
|
<p>I am trying to build <a href="https://pypi.org/project/opencv-python/" rel="nofollow noreferrer">opencv-python</a> per the official instructions with support for CUDA so I can run SIFT. When I try to build with <code>pip wheel . --verbose</code> I get:</p>
<pre><code> -- Configuring incomplete, errors occurred!
Traceback (most recent call last):
File "/tmp/pip-build-env-hrc4y4f6/overlay/lib/python3.9/site-packages/skbuild/setuptools_wrap.py", line 664, in setup
env = cmkr.configure(
File "/tmp/pip-build-env-hrc4y4f6/overlay/lib/python3.9/site-packages/skbuild/cmaker.py", line 354, in configure
raise SKBuildError(msg)
An error occurred while configuring with CMake.
Command:
/tmp/pip-build-env-hrc4y4f6/overlay/lib64/python3.9/site-packages/cmake/data/bin/cmake /home/grant/chinese_test/opencv-python/opencv -G 'Unix Makefiles' --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/home/grant/chinese_test/opencv-python/_skbuild/linux-x86_64-3.9/cmake-install -DPYTHON_VERSION_STRING:STRING=3.9.18 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/tmp/pip-build-env-hrc4y4f6/overlay/lib/python3.9/site-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/usr/bin/python3 -DPYTHON_INCLUDE_DIR:PATH=/usr/include/python3.9 -DPYTHON_LIBRARY:PATH=/usr/lib64/libpython3.9.so -DPython_EXECUTABLE:PATH=/usr/bin/python3 -DPython_ROOT_DIR:PATH=/usr -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/usr/include/python3.9 -DPython_NumPy_INCLUDE_DIRS:PATH=/tmp/pip-build-env-hrc4y4f6/overlay/lib64/python3.9/site-packages/numpy/_core/include -DPython3_EXECUTABLE:PATH=/usr/bin/python3 -DPython3_ROOT_DIR:PATH=/usr -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/usr/include/python3.9 -DPython3_NumPy_INCLUDE_DIRS:PATH=/tmp/pip-build-env-hrc4y4f6/overlay/lib64/python3.9/site-packages/numpy/_core/include -DWITH_CUDA=ON -DCMAKE_BUILD_TYPE=Release -DENABLE_FAST_MATH=1 -DCUDA_FAST_MATH=1 -DWITH_CUBLAS=1 -DOPENCV_DNN_CUDA=ON -DPYTHON3_EXECUTABLE=/usr/bin/python3 -DPYTHON_DEFAULT_EXECUTABLE=/usr/bin/python3 -DPYTHON3_INCLUDE_DIR=/usr/include/python3.9 -DPYTHON3_LIBRARY=/usr/lib64/libpython3.9.so -DBUILD_opencv_python3=ON -DBUILD_opencv_python2=OFF -DBUILD_opencv_java=OFF -DOPENCV_PYTHON3_INSTALL_PATH=python -DINSTALL_CREATE_DISTRIB=ON -DBUILD_opencv_apps=OFF -DBUILD_opencv_freetype=OFF -DBUILD_SHARED_LIBS=OFF -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF -DBUILD_DOCS=OFF -DPYTHON3_LIMITED_API=ON -DBUILD_OPENEXR=ON -DOPENCV_EXTRA_MODULES_PATH=/home/grant/chinese_test/opencv-python/opencv_contrib/modules -DWITH_CUDA=ON -DCMAKE_BUILD_TYPE=Release -DENABLE_FAST_MATH=1 -DCUDA_FAST_MATH=1 -DWITH_CUBLAS=1 -DOPENCV_DNN_CUDA=ON
Source directory:
/home/grant/chinese_test/opencv-python/opencv
Working directory:
/home/grant/chinese_test/opencv-python/_skbuild/linux-x86_64-3.9/cmake-build
Please see CMake's output for more information.
error: subprocess-exited-with-error
× Building wheel for opencv-contrib-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /usr/bin/python3 /home/grant/.local/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /tmp/tmpzppm94gi
cwd: /home/grant/chinese_test/opencv-python
Building wheel for opencv-contrib-python (pyproject.toml) ... error
ERROR: Failed building wheel for opencv-contrib-python
Failed to build opencv-contrib-python
ERROR: Failed to build one or more wheels
</code></pre>
<p>I searched for <em>output</em> recursively in the directory tree and there was nothing related to cmake's output so I don't know where it is putting that.</p>
<p>I have done this:</p>
<ol>
<li>Clone this repository: <code>git clone --recursive https://github.com/opencv/opencv-python.git</code></li>
<li><code>cd opencv-python</code></li>
<li>The below flags:</li>
</ol>
<pre><code>export CMAKE_ARGS="-DWITH_CUDA=ON -DCMAKE_BUILD_TYPE=Release -DENABLE_FAST_MATH=1 -DCUDA_FAST_MATH=1 -DWITH_CUBLAS=1 -DOPENCV_DNN_CUDA=ON"
export ENABLE_CONTRIB=1
pip wheel . --verbose
</code></pre>
|
<python><opencv><python-wheel><sift>
|
2024-07-13 21:07:02
| 0
| 1,873
|
Grant Curell
|