| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
78,003,563
| 11,330,134
|
Add column to Pandas dataframe based on dictionary lookup multiplication then sum
|
<p>I have a sample dataframe:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'id':  [1, 2, 3],
    'pts': [25, 20, 9],
    'ast': [8, 14, 7],
    'reb': [1, 4, 9],
    'oth': [5, 6, 7],
    'tov': [4, 2, 1]
})
print(df)

   id  pts  ast  reb  oth  tov
0   1   25    8    1    5    4
1   2   20   14    4    6    2
2   3    9    7    9    7    1
</code></pre>
<p>I essentially want to apply some weights (multiply certain columns), sum them up, and make that a new column (<code>score</code>). Not every column in the <code>df</code> has a lookup mapping.</p>
<p>I can do this by manually applying the math to each column via a function:</p>
<pre><code>def f(in_df):
    return in_df['pts'] + (in_df['reb'] * 1.2) + (in_df['ast'] * 1.5) - in_df['tov']

df['score'] = f(df)
print(df)

   id  pts  ast  reb  oth  tov  score
0   1   25    8    1    5    4   34.2
1   2   20   14    4    6    2   43.8
2   3    9    7    9    7    1   29.3
</code></pre>
<p>I want to accomplish it using a dictionary lookup though:</p>
<pre><code>score_dict = {'pts': 1, 'reb': 1.2, 'ast': 1.5, 'tov': -1}
# Something like ..?
df['score'] = df[?].map(lambda d: sum(k * v for k, v in score_dict.items()))
</code></pre>
<p>I was looking at <a href="https://stackoverflow.com/questions/69981936/how-to-sum-keys-and-multiply-keys-and-values-from-dictionary-in-column">this post</a> and trying to implement a mapping like below. That solution takes a single column though for the mapping; I don't know how to implement it correctly.</p>
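One way this could be done (a sketch, not part of the original question): select only the columns named in <code>score_dict</code>, multiply by the aligned weights, and sum across each row.

```python
import pandas as pd

df = pd.DataFrame({
    'id':  [1, 2, 3],
    'pts': [25, 20, 9],
    'ast': [8, 14, 7],
    'reb': [1, 4, 9],
    'oth': [5, 6, 7],
    'tov': [4, 2, 1],
})
score_dict = {'pts': 1, 'reb': 1.2, 'ast': 1.5, 'tov': -1}

# Select only the mapped columns, multiply each by its weight (alignment is
# by column name), and sum row-wise; unmapped columns (id, oth) never enter.
df['score'] = df[list(score_dict)].mul(pd.Series(score_dict)).sum(axis=1)
print(df['score'].tolist())
```

An equivalent spelling is `sum(df[col] * w for col, w in score_dict.items())`, which avoids building the intermediate frame.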
|
<python><pandas>
|
2024-02-15 19:43:58
| 4
| 489
|
md2614
|
78,003,363
| 5,790,653
|
How to use join method for a list of dictionaries and then add a string for the value
|
<p>This is a list of dicts:</p>
<pre class="lang-py prettyprint-override"><code>list1 = [
{'unique_id': '1qaz2wsx', 'db_id': 10},
{'unique_id': '2qaz2wsx', 'db_id': 20},
{'unique_id': '3qaz2wsx', 'db_id': 30},
{'unique_id': '4qaz2wsx', 'db_id': 40},
]
</code></pre>
<p>This is the expected output I'm trying to get:</p>
<pre><code>unique 1qaz2wsx, url http://url.com/10
unique 2qaz2wsx, url http://url.com/20
unique 3qaz2wsx, url http://url.com/30
unique 4qaz2wsx, url http://url.com/40
</code></pre>
<p>I googled for this but couldn't find any hints so far on how to do it. I may have searched with the wrong terms, so I don't have any code or attempts.</p>
<p>How can I have the expected output?</p>
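A minimal sketch of one way to do this (an f-string per dict, joined with newlines; the <code>http://url.com/</code> base is taken from the expected output above):

```python
list1 = [
    {'unique_id': '1qaz2wsx', 'db_id': 10},
    {'unique_id': '2qaz2wsx', 'db_id': 20},
    {'unique_id': '3qaz2wsx', 'db_id': 30},
    {'unique_id': '4qaz2wsx', 'db_id': 40},
]

# Format one line per dict, then join the lines with newlines.
output = "\n".join(
    f"unique {d['unique_id']}, url http://url.com/{d['db_id']}" for d in list1
)
print(output)
```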
|
<python>
|
2024-02-15 19:06:04
| 3
| 4,175
|
Saeed
|
78,003,276
| 1,554,020
|
How to generate uniformly distributed subintervals of an interval?
|
<p>I have a non-empty integer interval <em>[a; b)</em>. I want to generate a random non-empty integer subinterval <em>[c; d)</em> (where <em>a <= c</em> and <em>d <= b</em>). The <em>[c; d)</em> interval must be uniformly distributed in the sense that every point in <em>[a; b)</em> must be equally likely to end up in <em>[c; d)</em>.</p>
<p>I tried generating uniformly distributed <em>c</em> from <em>[a; b - 1)</em>, and then uniformly distributed <em>d</em> from <em>[c + 1; b)</em>, like this:</p>
<pre><code>import numpy as np

a = -100
b = 100
N = 10000
cs = np.random.randint(a, b - 1, N)
ds = np.random.randint(cs + 1, b)
</code></pre>
<p>But when measuring how often each point ends up being sampled, the distribution is clearly non-uniform:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

hist = np.zeros(b - a, int)
for c, d in zip(cs, ds):
    hist[c - a:d - a] += 1
plt.plot(np.arange(a, b), hist)
plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/gL2EW.png" alt="histogram" /></p>
<p>How do I do this correctly?</p>
|
<python><algorithm><random><distribution>
|
2024-02-15 18:48:58
| 2
| 14,259
|
yuri kilochek
|
78,003,185
| 14,775,478
|
How to properly instantiate an empty pandas Series for "any" dtype in version 2.2.0?
|
<p>Pandas <code>2.2.0</code> raises a FutureWarning <code>Setting an item of incompatible dtype is deprecated</code>.</p>
<p>While this sounds like a helper to avoid possible type casting issues, it's actually introducing a new problem if you want to initialize an empty Series and only later fill it, for example by copying over a bunch of records, or single elements. For example, the below raises a warning, and will raise an error in the future:</p>
<pre><code>my_index = pd.Index([1, 2, 3])
ds = pd.Series(None, index=my_index)
ds.iloc[0] = "a"
FutureWarning: Setting an item of incompatible dtype is deprecated and will raise an error in a future version of pandas. Value 'a' has dtype incompatible with float64, please explicitly cast to a compatible dtype first.
</code></pre>
<p>I find it hard to follow why <code>pd.Series(None)</code> would infer <code>float</code> (fine by me), but then prohibit me from assigning "anything" to <code>None</code>.</p>
<p>How can the Series be set up properly, especially if you don't know/want to specify the type just yet? The proposed solution should allow assigning any data type later (e.g., <code>int</code> or <code>float</code> or <code>str</code>) without raising an error, as long as the entire series is of the same type and the initialized series is just empty.</p>
|
<python><pandas>
|
2024-02-15 18:33:51
| 2
| 1,690
|
KingOtto
|
78,003,119
| 8,737,016
|
Allow multiple Ray workers to fill a common queue which can be accessed from the main thread
|
<p>I have a <code>DataGenerator</code> class</p>
<pre class="lang-py prettyprint-override"><code>@ray.remote
class DataGenerator:
    def generate_continuously(self):
        while True:
            time.sleep(5)
            data = random.random()
            # I need data to be put into a queue common to all instances of DataGenerator
</code></pre>
<p>From the main script, I instantiate many of them</p>
<pre class="lang-py prettyprint-override"><code>queue = ...  # Some shared queue

# Create generator handles and start continuous collection for all
handles = [DataGenerator.remote() for i in range(10)]
ray.wait([handle.generate_continuously.remote() for handle in handles])

# Continuously pop the queue and store the results locally
all_data = []
while True:
    data = queue.pop_all()
    all_data.extend(data)
    # do something compute-intensive with all_data
</code></pre>
<p>I need each handle to put their result into a common queue that I can repeatedly access from this main script.</p>
<p><strong>What I've tried</strong>
This is the closest I got to the desired result:</p>
<pre class="lang-py prettyprint-override"><code>@ray.remote
class DataGenerator:
    def generate(self):
        time.sleep(5)
        data = random.random()
        return data

N_HANDLES = 10
generator_handles = [DataGenerator.remote() for i in range(N_HANDLES)]

# Map generator handle index to the ref of the object being produced remotely
handle_idx_to_ref = {idx: generator_handles[idx].generate.remote() for idx in range(N_HANDLES)}

all_data = []
while True:
    for idx, ref in handle_idx_to_ref.items():
        ready_id, not_ready_id = ray.wait([ref], timeout=0)
        if ready_id:
            all_data.extend(ray.get(ready_id))
            # Start generation again for this worker
            handle_idx_to_ref[idx] = generator_handles[idx].generate.remote()
    # Do something compute-intensive with all_data
</code></pre>
<p>This is almost good, but if the compute-intensive operation takes too long, some <code>DataGenerator</code> workers could finish and not be started again until the next iteration. How can I improve on this code?</p>
|
<python><ray>
|
2024-02-15 18:22:00
| 1
| 2,245
|
Federico Taschin
|
78,003,018
| 759,880
|
Plotting histograms with pandas DataFrame
|
<p>I am trying to plot feature distributions per label value for a ML project, but I'm not getting the plot I want.</p>
<p>I'm doing:</p>
<pre><code>import pandas as pd
import numpy as np
x1 = np.random.randn(1000)
x2 = np.random.randn(1000)
y = np.random.randint(0,2,size=1000)
df = pd.DataFrame({'x1':x1, 'x2':x2, 'y': y})
axes = df.hist(['x1', 'x2'], by='y', bins=10, layout=(2,2), figsize=(6,6))
</code></pre>
<p>which gives me:</p>
<p><a href="https://i.sstatic.net/RVSoe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RVSoe.png" alt="enter image description here" /></a></p>
<p>But what I really need is a separate subplot for x1 and x2, showing the distributions of the values for the 0/1 label on the same plot. How could I do that?</p>
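One way to get that layout (a sketch, with the synthetic data regenerated here): build the subplots manually and overlay one histogram per label value on each feature's axes.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'x1': rng.standard_normal(1000),
    'x2': rng.standard_normal(1000),
    'y':  rng.integers(0, 2, size=1000),
})

# One subplot per feature; both label values overlaid on the same axes.
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, col in zip(axes, ['x1', 'x2']):
    for label, grp in df.groupby('y'):
        ax.hist(grp[col], bins=10, alpha=0.5, label=f'y={label}')
    ax.set_title(col)
    ax.legend()
fig.savefig('hist_by_label.png')
```

The `alpha=0.5` transparency is what makes the two overlaid label distributions both readable.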
|
<python><pandas><dataframe>
|
2024-02-15 18:04:49
| 2
| 4,483
|
ToBeOrNotToBe
|
78,002,829
| 11,092,636
|
lru_cache vs dynamic programming, stackoverflow with one but not with the other?
|
<p>I'm doing this basic dp (Dynamic Programming) problem on trees (<a href="https://cses.fi/problemset/task/1674/" rel="nofollow noreferrer">https://cses.fi/problemset/task/1674/</a>). Given the structure of a company (hierarchy is a tree), the task is to calculate for each employee the number of their subordinates.</p>
<p>This:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from functools import lru_cache  # noqa

sys.setrecursionlimit(2 * 10 ** 9)

if __name__ == "__main__":
    n: int = 200000
    boss: list[int] = list(range(1, 200001))
    # so in my example it will be a tree with every parent having one child
    graph: list[list[int]] = [[] for _ in range(n)]
    for i in range(n-1):
        graph[boss[i] - 1].append(i+1)  # directed so neighbours of a node are only its children

    @lru_cache(None)
    def dfs(v: int) -> int:
        if len(graph[v]) == 0:
            return 0
        else:
            s: int = 0
            for u in graph[v]:
                s += dfs(u) + 1
            return s

    dfs(0)
    print(*(dfs(i) for i in range(n)))
</code></pre>
<p>crashes (I googled the error message and it means stack overflow)</p>
<pre class="lang-py prettyprint-override"><code>Process finished with exit code -1073741571 (0xC00000FD)
</code></pre>
<p>HOWEVER</p>
<pre class="lang-py prettyprint-override"><code>import sys

sys.setrecursionlimit(2 * 10 ** 9)

if __name__ == "__main__":
    n: int = 200000
    boss: list[int] = list(range(1, 200001))
    # so in my example it will be a tree with every parent having one child
    graph: list[list[int]] = [[] for _ in range(n)]
    for i in range(n-1):
        graph[boss[i] - 1].append(i+1)  # directed so neighbours of a node are only its children

    dp: list[int] = [0 for _ in range(n)]

    def dfs(v: int) -> None:
        if len(graph[v]) == 0:
            dp[v] = 0
        else:
            for u in graph[v]:
                dfs(u)
                dp[v] += dp[u] + 1

    dfs(0)
    print(*dp)
</code></pre>
<p>doesn't, and it's exactly the same complexity, right? The dfs goes exactly as deep in both situations too? I tried to make the two pieces of code as similar as I could.</p>
<p>I tried 20000000 instead of 200000 (i.e. a graph 100 times deeper) and it still doesn't stack-overflow for the second option. Obviously I could do an iterative version of it, but I'm trying to understand the underlying reason why there is such a big difference between these two recursive options, so that I can learn more about Python and its underlying functioning.</p>
<p>I'm using <code>Python 3.11.1</code>.</p>
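A plausible explanation (an assumption about CPython internals, not verified here): each recursive call through the <code>lru_cache</code> wrapper passes through an extra C-level call, so it consumes more of the interpreter's fixed-size native stack per level than a plain Python frame does. <code>sys.setrecursionlimit</code> only lifts the Python-level guard, not the native stack, so the cached version hits the hard crash at a shallower depth. Either way, an iterative post-order traversal sidesteps the stack entirely; a sketch:

```python
n = 200000
boss = list(range(1, n + 1))
graph = [[] for _ in range(n)]
for i in range(n - 1):
    graph[boss[i] - 1].append(i + 1)

# Iterative post-order DFS with an explicit stack: immune to recursion
# limits and native stack overflows, whatever the tree depth.
dp = [0] * n
stack = [(0, False)]
while stack:
    v, processed = stack.pop()
    if processed:
        dp[v] = sum(dp[u] + 1 for u in graph[v])
    else:
        stack.append((v, True))
        for u in graph[v]:
            stack.append((u, False))

print(dp[0])  # 199999: the root of this chain has n - 1 subordinates
```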
|
<python><time-complexity><space-complexity>
|
2024-02-15 17:30:43
| 1
| 720
|
FluidMechanics Potential Flows
|
78,002,577
| 15,411,076
|
How to dynamically modify Airflow task decorator attributes?
|
<p>I want my extractor task to use a string variable to set the <code>pool</code>.</p>
<p>I realised I can do it like this:</p>
<pre><code>@task(pool="my_pool")
def extractor_task(**kwargs):
    ...
</code></pre>
<p>But how can I do it dynamically, or how can I access those attributes and change them? Since I cannot dynamically change what I'm passing to the decorator, is there any other way to access the decorated <code>extractor_task</code> pool and set it as I want?</p>
|
<python><airflow>
|
2024-02-15 16:53:27
| 1
| 349
|
Doraemon
|
78,002,573
| 22,054,564
|
Unable to print the website status after the 1st URL in the given list using Python code
|
<p>This is the Python file that checks the given CSV file that contains the web url's and their status is ok or not found:</p>
<pre class="lang-py prettyprint-override"><code>import csv
import requests
from http import HTTPStatus
from fake_useragent import UserAgent


def get_websites(csv_path: str) -> list[str]:
    """Loads websites from a csv file"""
    websites: list[str] = []
    with open(csv_path, 'r') as file:
        reader = csv.reader(file)
        for row in reader:
            if 'https://' not in row[1]:
                websites.append(f'https://{row[1]}')
            else:
                websites.append(row[1])
    return websites


def get_user_agent() -> str:
    """Returns a user agent that can be used with requests"""
    ua = UserAgent()
    return ua.chrome


def get_status_description(status_code: int) -> str:
    """Uses the status code to return a readable description"""
    for value in HTTPStatus:
        if value == status_code:
            description: str = f'({value} {value.name}) {value.description}'
            return description
    return '(???) Unknown status code...'


def check_website(website: str, user_agent: str):
    """Gets the status code for a website and prints the result"""
    try:
        # code: int = requests.get(website, headers={'User-Agent': user_agent}).status_code
        response = requests.get(website, headers={'User-Agent': user_agent})
        code: int = response.status_code
        print(website, get_status_description(code))
    except Exception as e:
        # print(f'**Could not get information for website: "{website}"')
        print("Exception is: ", e)


def main():
    sites: list[str] = get_websites('websites.csv')
    user_agent: str = get_user_agent()

    # Check websites
    for i, site in enumerate(sites):
        check_website(site, user_agent)


if __name__ == '__main__':
    main()
</code></pre>
<p>csv file:</p>
<pre><code>1,"apple.com"
2,"facebook.com"
3,"lulumall.com"
4,"newgrounds.com"
5,"linkedin.com"
6,"python-guide.com"
7,"fastmail.com"
8,"soundcloud.com"
9,"whatsapp.com/hello"
10,"udemy.com"
</code></pre>
<p>Output:</p>
<pre><code>Exception is: HTTPSConnectionPool(host='linkedin.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
Exception is: HTTPSConnectionPool(host='facebook.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
Exception is: HTTPSConnectionPool(host='lulumall.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING]
EOF occurred in violation of protocol (_ssl.c:1006)')))
https://apple.com (200 OK) Request fulfilled, document follows
Exception is: HTTPSConnectionPool(host='linkedin.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
Exception is: HTTPSConnectionPool(host='python-guide.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
Exception is: HTTPSConnectionPool(host='fastmail.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
Exception is: HTTPSConnectionPool(host='soundcloud.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
Exception is: HTTPSConnectionPool(host='whatsapp.com', port=443): Max retries exceeded with url: /hello (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
Exception is: HTTPSConnectionPool(host='udemy.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
</code></pre>
<p>If I keep all of the URLs as apple.com, then it works, even if I change the order of the URLs. It does not work for any URL other than apple.com.</p>
<p>Could anyone help me?</p>
|
<python><python-3.x><azure><csv>
|
2024-02-15 16:53:06
| 0
| 837
|
VivekAnandChakravarthy
|
78,002,555
| 14,250,641
|
How to train Random Forest classifier with large dataset to avoid memory errors in Python?
|
<p>I have a dataset that is 30 million rows. I have two columns: one that contains a 1 or 0 label and the other column has a list of 1280 features for each row (181 GB total). All I want to do is plug this dataset into a Random Forest algorithm, but the memory runs out and it crashes (I've tried using memory of 400 GB, but it still crashes).</p>
<p>After loading the dataset, I had to manipulate it a bit since it is in the Hugging Face arrow format: <a href="https://huggingface.co/docs/datasets/en/about_arrow" rel="nofollow noreferrer">https://huggingface.co/docs/datasets/en/about_arrow</a> (I suspect this is taking up a lot of RAM).</p>
<p>I am aware I could do some dimensionality reduction to my dataset, but is there any changes I should make to my code to reduce RAM usage?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score, roc_curve, auc
from datasets import load_dataset, Dataset
# Load dataset
df = Dataset.from_file("data.arrow")
df = pd.DataFrame(df)
X = df['embeddings'].to_numpy() # Convert Series to NumPy array
X = np.array(X.tolist()) # Convert list of arrays to a 2D NumPy array
X = X.reshape(X.shape[0], -1) # Flatten the 3D array into a 2D array
y = df['labels']
# Split the data into training and testing sets (80% train, 20% test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Initialize the random forest classifier
rf_classifier = RandomForestClassifier(n_estimators=100, random_state=42)
# Train the classifier
rf_classifier.fit(X_train, y_train)
# Make predictions on the test set
y_pred = rf_classifier.predict(X_test)
# Evaluate the classifier
# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
# Calculate AUC score
auc_score = roc_auc_score(y_test, y_pred)
print("AUC Score:", auc_score)
with open("metrics.txt", "w") as f:
    f.write("Accuracy: " + str(accuracy) + "\n")
    f.write("AUC Score: " + str(auc_score))
# Make predictions on the test set
y_pred_proba = rf_classifier.predict_proba(X_test)[:, 1]
# Calculate ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
# Plot ROC curve
plt.figure()
plt.plot(fpr, tpr, color='darkorange', lw=2, label='ROC curve (area = %0.2f)' % auc(fpr, tpr))
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic (ROC) Curve')
plt.legend(loc="lower right")
# Save ROC curve plot to an image file
plt.savefig('roc_curve.png')
# Close plot to free memory
plt.close()
</code></pre>
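Not a full fix, but one concrete RAM lever (an illustrative sketch with a small stand-in array, not the 30M-row dataset): embeddings held as <code>float64</code> take twice the memory of <code>float32</code>, so down-casting before building the training matrix roughly halves peak usage. Skipping the intermediate pandas DataFrame and the repeated <code>tolist()</code>/<code>reshape</code> copies helps for the same reason.

```python
import numpy as np

rows, feats = 10_000, 1280      # small stand-in for the 30M-row dataset
X64 = np.zeros((rows, feats), dtype=np.float64)
X32 = X64.astype(np.float32)    # half the bytes per element

print(X64.nbytes / 1e6, "MB vs", X32.nbytes / 1e6, "MB")
```

At 30M rows x 1280 features the same ratio applies, which is the difference between ~300 GB and ~150 GB before the forest even starts fitting.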
|
<python><pandas><numpy><performance><scikit-learn>
|
2024-02-15 16:50:25
| 2
| 514
|
youtube
|
78,002,459
| 2,990,379
|
Determine if a Python string contains any kind of quote character
|
<p>I need to write a regex that will determine if a string contains any kind of single or double quotation mark character, including the specialty ones that you find in Word etc. Is there a list somewhere of what all the special characters are that a human would read as an opening or closing single or double quote, and how those would be represented in a Python string and in a Python regex?</p>
<p>e.g.,
“fancy quoted” ‘single quoted’ "plain text quotes" or 'single quotes' or `backquotes` -- are there others, and how can I include these characters in a Python regex?</p>
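A sketch of one such regex. The character class below is an assumption: it covers the ASCII quotes/backtick plus the most common Unicode curly quotes, low-9 quotes, and guillemets; extend it if your text uses other styles. An alternative is checking <code>unicodedata.category(ch)</code> for <code>Pi</code>/<code>Pf</code> (initial/final punctuation).

```python
import re

# ASCII quotes/backtick + curly single/double quotes, low-9 quotes,
# and single/double guillemets. Extend as needed (e.g. CJK quotes).
QUOTE_RE = re.compile(
    r'["\'`\u2018\u2019\u201A\u201C\u201D\u201E\u2039\u203A\u00AB\u00BB]'
)

def has_quote(s: str) -> bool:
    return QUOTE_RE.search(s) is not None

print(has_quote("\u201cfancy quoted\u201d"))  # True
print(has_quote("no quotes here"))            # False
```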
|
<python><regex>
|
2024-02-15 16:35:19
| 1
| 1,281
|
PurpleVermont
|
78,002,134
| 9,545,165
|
Django REST Framework: static file serve does not work only when the url is "static/"
|
<p>I have the following <code>urls.py</code> -</p>
<pre><code>urlpatterns = [
    path('admin/', include("admin.urls")),
    path('accounts/', include("accounts.urls")),
    path('blogs/', include("blogs.urls")),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
</code></pre>
<p>In the <code>settings.py</code> I have -</p>
<pre><code>STATIC_URL = 'static/'
STATIC_ROOT = 'static/'
</code></pre>
<p>Here my static files in the static folder show a "Page Not Found (404)" error when opened in the browser via <code>"http://localhost:8000/static/style.css"</code>. But if I change <code>settings.STATIC_URL</code> to anything else, it works. For example, if I do <code>static("anything/", document_root=settings.STATIC_ROOT)</code>, then it works on "http://localhost:8000/anything/style.css"; only strings that start with 'static' do not work.</p>
<p>I tried removing all URLs and only keeping the <code>static()</code> to see if other URLs are interfering with static, but it still shows the same error.</p>
<p>In short: it does not work if I use the STATIC_URL defined in settings.py in the <code>static()</code> function. Any other string works.</p>
<p>Can anyone shed some light on this behaviour?</p>
|
<python><django><django-rest-framework>
|
2024-02-15 15:47:54
| 0
| 312
|
Muhib Al Hasan
|
78,002,075
| 12,400,088
|
Fitz removes page when inserting text
|
<p>I have a 2-page PDF, and when I try to insert text onto the last page, the page is getting removed. I believe the error is related to the PDF itself as the behavior is not consistent across all PDFs. But I am not sure what could be causing Fitz to remove the page.</p>
<p>The code to add text is as follows:</p>
<pre><code>doc: fitz.Document = fitz.open(file_path)
page: fitz.Page = doc[doc.page_count - 1]

start_of_txt = fitz.Point(70, 90)  # X/Y coordinates
text = "INSERTED TEXT"

page.insert_text(
    start_of_txt,
    text,
    fontname="helv",
    fontsize=11,
    rotate=0,
    color=(0, 0, 0),  # font color
)

doc.saveIncr()
</code></pre>
<p>What I figured out is that if I do <code>doc.save()</code>, the missing page is there. Incremental save seems to remove the page.</p>
<p>Can anyone point me in the right direction to understand why this is or how I can do an incremental save?</p>
|
<python><python-3.x><pymupdf>
|
2024-02-15 15:39:08
| 0
| 588
|
kravb
|
78,002,011
| 8,414,030
|
Understanding Scrapy Python: Refactoring parse method does not work
|
<p>I have a spider code that works.</p>
<pre class="lang-py prettyprint-override"><code>class MySpider(BaseScrapper):
    name = "my_spider"

    def parse(self, response, **kwargs):
        self.logger.info(f"Parse: Processing {response.url}")
        yield ScrapyItem(
            source=response.meta["source"],
            url=response.url,
            html=response.text,
        )
        links = self.extract_links(response)
        self.logger.info(f"Extracted {len(links)} links from {response.url}")
        for link in links:
            self.logger.info(f"Following link: {link.url}")
            yield scrapy.Request(
                url=link.url,
                callback=self.parse,
                meta={
                    "source": response.meta["source"],
                },
            )
</code></pre>
<p>I tried refactoring the above code into two helper methods, something like the below, and it does not work.</p>
<pre class="lang-py prettyprint-override"><code>def follow_links(self, response, links):
    self.logger.info(f"Following {len(links)} links from {response.url}")
    for link in links:
        self.logger.info(f"Following link: {link.url}")
        yield scrapy.Request(
            url=link,
            callback=self.parse,
            meta={
                "source": response.meta["source"],
            },
        )

def extract_and_follow_links(self, response):
    links = self.extract_links(response)
    self.logger.info(f"Extracted {len(links)} links from {response.url}")
    # TODO: Save the links to the database
    self.follow_links(response, links)

def parse(self, response, **kwargs):
    self.logger.info(f"Parse: Processing {response.url}")
    yield ScrapyItem(
        source=response.meta["source"],
        url=response.url,
        html=response.text,
    )
    self.extract_and_follow_links(response)
</code></pre>
<p>A little background: <em>the start urls supplied by the BaseScrapper have already been scraped, and the item pipeline picks them up and raises a DropItem exception. This is expected; I am more interested in new links on those pages.</em></p>
<p>After I run the refactored code, it raises DropItem for the start urls and stops.</p>
<p>My question to you is: why does the code behave differently when refactored?</p>
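Most likely this is generator semantics rather than Scrapy itself: in the refactor, <code>parse</code> calls <code>extract_and_follow_links</code>, which merely <em>creates</em> the <code>follow_links</code> generator and discards it without iterating, so the requests are never yielded back to the engine. A dependency-free sketch of the difference (in the spider, both call sites would need <code>yield from</code>):

```python
def inner():
    yield "request-1"
    yield "request-2"

def outer_broken():
    yield "item"
    inner()             # creates a generator object and throws it away

def outer_fixed():
    yield "item"
    yield from inner()  # delegates, so inner's values are actually produced

print(list(outer_broken()))  # ['item']
print(list(outer_fixed()))   # ['item', 'request-1', 'request-2']
```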
|
<python><python-3.x><scrapy>
|
2024-02-15 15:30:05
| 1
| 791
|
inquilabee
|
78,001,923
| 3,375,695
|
Extend python enum in __new__ with auto()
|
<p>I have a question related to using auto() with python enum.</p>
<p>I have a base clase like this:</p>
<pre><code>class TokenEnum(IntEnum):
    def __new__(cls, value):
        member = object.__new__(cls)
        # Always auto-generate the int enum value
        member._value_ = auto()  # << not working !!
        member.rule = value
        return member
</code></pre>
<p>and want to use it like this. The enum value should be an int and auto-generated. The string provided should go into the additional 'rule' variable.</p>
<pre><code>class Tokens(TokenEnum):
    ID = r'[a-zA-Z_][a-zA-Z0-9_]*'
    ...
</code></pre>
<p><code>auto()</code> doesn't seem to work in the place where I'm using it. Any idea on how to get this working?</p>
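For context, <code>auto()</code> is only resolved for member <em>definitions</em> in the class body, so it has no meaning inside <code>__new__</code>. One workaround sketch, modelled on the auto-numbering example in the stdlib enum docs (<code>cls.__members__</code> already contains the previously created members while the class is being built): derive the value from the member count.

```python
from enum import IntEnum

class TokenEnum(IntEnum):
    def __new__(cls, value):
        # Number members 1, 2, 3, ... in definition order.
        next_value = len(cls.__members__) + 1
        member = int.__new__(cls, next_value)
        member._value_ = next_value
        member.rule = value          # keep the regex on the member
        return member

class Tokens(TokenEnum):
    ID = r'[a-zA-Z_][a-zA-Z0-9_]*'
    NUM = r'[0-9]+'

print(Tokens.ID.value, Tokens.NUM.value)  # 1 2
```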
|
<python><enums><auto>
|
2024-02-15 15:18:36
| 1
| 891
|
Juergen
|
78,001,855
| 8,118,238
|
How to convert cftime to unix timestamp?
|
<p>I am looking for a convenient way to convert times in the cftime format found in netCDF files to Unix timestamps (ms). What is a suitable way to do this without several <code>for</code> loops, extracting the date and time string followed by a <code>datetime</code> type conversion in Python, and a few other steps to finally get to a Unix timestamp (ms)? When working with an array of time values, it's really daunting to use several <code>for</code> loops for a seemingly simple operation.</p>
<p>If there is any library or functionality available within the <code>cftime</code> or <code>datetime</code> module, I would highly appreciate it.</p>
<p>Here is the output of my initial data in <code>cftime</code> format:</p>
<pre><code><xarray.DataArray 'time' (time: 227)>
array([ 107. , 129.5, 227.5, ..., 7928. , 7958.5, 7989. ], dtype=float32)
Coordinates:
* time (time) float32 107.0 129.5 227.5 ... 7.928e+03 7.958e+03 7.989e+03
Attributes:
bounds: time_bounds
calendar: gregorian
axis: T
standard_name: Time
long_name: Time
Units: days since 2002-01-01T00:00:00
</code></pre>
<p>When I used <code>xarray.decode_cf(dataset)</code>, the array is shown but it is still random, and I am not able to figure out what these numbers mean.</p>
<p>Here is a sample array after <code>xarray.decode_cf()</code> operation:</p>
<p><code>[107. 129.5 227.5 258. 288.5 319. 349.5 380.5 410. 439.5 470. 495.5 561.5 592.5]</code></p>
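If the decoded values are still floats, they are likely still the raw "days since 2002-01-01" offsets (per the attributes above; note that CF decoding expects a lowercase <code>units</code> attribute, and the capital-U <code>Units</code> here may be why <code>decode_cf</code> left them alone; this is an assumption). A loop-free numpy sketch of the conversion under that interpretation:

```python
import numpy as np

# Offsets in days since 2002-01-01 (the first values from the question).
days = np.array([107.0, 129.5, 227.5, 258.0])

epoch = np.datetime64("2002-01-01T00:00:00")
dt64 = epoch + (days * 86_400_000).astype("timedelta64[ms]")  # days -> ms

# Unix timestamp in ms = offset from the 1970 epoch.
unix_ms = (dt64 - np.datetime64("1970-01-01T00:00:00")).astype("int64")
print(unix_ms[:2])
```

With true <code>cftime</code> objects, <code>cftime.date2num(times, "milliseconds since 1970-01-01")</code> should do the same in one call.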
|
<python><datetime><timestamp><python-xarray><netcdf4>
|
2024-02-15 15:10:11
| 1
| 593
|
hillsonghimire
|
78,001,815
| 14,114,654
|
generate col of the first index position until the value changes
|
<p>I have a df of fruits</p>
<pre><code> fruit
0 apple
1 apple
2 apple
3 banana
4 apple
5 pear
</code></pre>
<p>How could I create indexy -- the first index position +1 until the value changes?</p>
<pre><code> fruit indexy
0 apple 1
1 apple 1
2 apple 1
3 banana 4
4 apple 5
5 pear 6
</code></pre>
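One possible sketch: treat each run of equal values as a group (a change-point <code>cumsum</code>), then broadcast the first index of each group, plus one, to all of its rows.

```python
import pandas as pd

df = pd.DataFrame({'fruit': ['apple', 'apple', 'apple', 'banana', 'apple', 'pear']})

# New group whenever the value differs from the previous row.
change = df['fruit'].ne(df['fruit'].shift()).cumsum()

# First positional index of each run, +1, repeated over the run.
df['indexy'] = df.groupby(change)['fruit'].transform(lambda s: s.index[0] + 1)
print(df['indexy'].tolist())  # [1, 1, 1, 4, 5, 6]
```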
|
<python><pandas><group-by>
|
2024-02-15 15:04:06
| 1
| 1,309
|
asd
|
78,001,753
| 1,606,022
|
multivariate derivatives in jax - efficiency question
|
<p>I have the following code which computes derivatives of the function:</p>
<pre class="lang-py prettyprint-override"><code>import jax
import jax.numpy as jnp

def f(x):
    return jnp.prod(x)

df1 = jax.grad(f)
df2 = jax.jacobian(df1)
df3 = jax.jacobian(df2)
</code></pre>
<p>With this, all the partial derivatives are available, for example (with <code>vmap</code> additionally):</p>
<pre class="lang-py prettyprint-override"><code>x = jnp.array([[ 1., 2., 3., 4., 5.],
[ 6., 7., 8., 9., 10.],
[11., 12., 13., 14., 15.],
[16., 17., 18., 19., 20.],
[21., 22., 23., 24., 25.],
[26., 27., 28., 29., 30.]])
df3_x0_x2_x4 = jax.vmap(df3)(x)[:, 0, 2, 4]
print(df3_x0_x2_x4)
# [ 8. 63. 168. 323. 528. 783.]
</code></pre>
<p>The question is how can I compute <code>df3_x0_x2_x4</code> only, avoiding all the unnecessary derivative calculations (and leaving <code>f</code> with a single vector argument)?</p>
|
<python><jax>
|
2024-02-15 14:54:26
| 1
| 1,249
|
mrkwjc
|
78,001,750
| 9,506,773
|
Error adding skillset to Azure Search Indexer
|
<p>I am trying to add a custom skillset to my indexer. I am however facing some issues which I don't know how to solve. My skills:</p>
<pre class="lang-py prettyprint-override"><code># https://learn.microsoft.com/en-us/python/api/azure-search-documents/azure.search.documents.indexes.models.splitskill?view=azure-python
split_skill = SplitSkill(
    name="split",
    description=None,
    context="/document/reviews_text",
    inputs=[{"name": "text", "source": "/document/content"}],
    outputs=[{"name": "textItems", "targetName": "pages"}],
    text_split_mode="pages",
    default_language_code="en",
    maximum_page_length=1000
)

# https://learn.microsoft.com/en-us/python/api/azure-search-documents/azure.search.documents.indexes.models.azureopenaiembeddingskill?view=azure-python-preview
text_embedding_skill = AzureOpenAIEmbeddingSkill(
    name="embedding",
    description=None,
    inputs=[{"name": "text", "source": "/pages"}],
    outputs=[{"name": "embeddings", "targetName": "embeddings"}],
    resource_uri="xxx",
    deployment_id="yyy",
    api_key="zzz",
)
</code></pre>
<p>How they are added to an index</p>
<pre class="lang-py prettyprint-override"><code># Define the skillset with the text embedding skill
skillset = SearchIndexerSkillset(
    name="my-text-embedding-skillset",
    skills=[split_skill, text_embedding_skill],
    description="A skillset for creating text embeddings"
)
</code></pre>
<p>The error I get is:</p>
<pre class="lang-py prettyprint-override"><code>azure.core.exceptions.HttpResponseError: () One or more skills are invalid. Details: Error in skill 'embedding': Outputs are not supported by skill: embeddings
Code:
Message: One or more skills are invalid. Details: Error in skill 'embedding': Outputs are not supported by skill: embeddings
</code></pre>
<p>Should I create the index first and match the output name to the one in the index? I would be grateful if anybody could point to a good tutorial on this :)</p>
|
<python><azure-cognitive-search><azure-python-sdk>
|
2024-02-15 14:54:12
| 1
| 3,629
|
Mike B
|
78,001,691
| 3,070,181
|
Tkinter How to set focus from a function
|
<p>I am trying to set the focus to <em>entry_1</em> in a function, but <em>entry_2</em> gets the focus. What's wrong?</p>
<pre><code>import tkinter as tk

def _test_entry(event):
    if value_1.get() == 'a':
        entry_1.focus_set()

win = tk.Tk()
win.geometry("200x100")

value_1 = tk.StringVar(value='')
entry_1 = tk.Entry(win, textvariable=value_1)
entry_1.bind("<Tab>", _test_entry)
entry_1.pack()

entry_2 = tk.Entry(win)
entry_2.pack()

entry_1.focus_set()
win.mainloop()
</code></pre>
|
<python><tkinter>
|
2024-02-15 14:44:59
| 0
| 3,841
|
Psionman
|
78,001,538
| 5,571,914
|
populate pandas dataframe using csv file
|
<p>I have two files:</p>
<p>File1: An empty dataframe with only a header. (the data frame I wish to populate)</p>
<pre><code>item A B C D E F G H
</code></pre>
<p>File2: A CSV file with space-delimited elements with uneven column numbers</p>
<pre><code>item1 A B D F
item2 A B C D E
item3 A H I
item4 C D E F G
item5 K L
</code></pre>
<p>The output I expect is the following:</p>
<pre><code>item |A|B|C|D|E|F|G|H
----------------------
item1 A B D F
item2 A B C D E
item3 A H
item4 C D E F G
</code></pre>
<p>The code is below and partially working:</p>
<pre><code>import pandas as pd
df = pd.read_csv('to_map.txt', delim_whitespace=True, header=0)
with open('to_search.txt', "r") as f:
    for line in f:
        elements = line.strip().split(" ")
        first_element = elements[0]
        row = {}  # Create an empty dictionary for each row
        for element in elements:
            row[element] = first_element
            if element in df.columns:
                row[element] = element  # Match found, update to corresponding column
            else:
                row[element] = pd.NA  # Match not found, assign NA
        # Append the row to the DataFrame
        df = df._append(row, ignore_index=True)
print(df)
</code></pre>
<p>The output of the above code is:</p>
<pre><code>item A B C D E F G H item1 item2 item3 I item4 \
0 NaN A B NaN D NaN F NaN NaN NaN NaN NaN NaN NaN
1 NaN A B C D E NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN A NaN NaN NaN NaN NaN NaN H NaN NaN NaN NaN NaN
3 NaN NaN NaN C D E F G NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
item5 K L
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
</code></pre>
<p>As you can see, item(1..n), K, and L, though not in the column header, are still written as separate columns and filled with NaNs. Please help!</p>
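A sketch of one way to get the expected frame without appending row by row: build one dict per line, keep only the elements that are known columns, drop items with no match at all, and construct the DataFrame once at the end. The file contents below are inlined via `io.StringIO` purely for illustration.

```python
import io
import pandas as pd

# Stand-ins for the two files from the question.
header = ["item", "A", "B", "C", "D", "E", "F", "G", "H"]
to_search = io.StringIO(
    "item1 A B D F\nitem2 A B C D E\nitem3 A H I\nitem4 C D E F G\nitem5 K L\n"
)

rows = []
for line in to_search:
    first, *elements = line.split()
    # Keep only the elements that are actual header columns; drop the rest (I, K, L).
    row = {"item": first}
    row.update({e: e for e in elements if e in header[1:]})
    if len(row) > 1:  # skip items with no matching column at all (item5)
        rows.append(row)

# Passing columns= pins the layout; missing keys automatically become NaN.
df = pd.DataFrame(rows, columns=header)
print(df)
```

Because the rows are collected in a plain list first, no unwanted columns are ever created, and the deprecated `_append` loop is avoided entirely.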
|
<python><pandas>
|
2024-02-15 14:23:26
| 3
| 669
|
Arun
|
78,001,480
| 3,828,463
|
Passing a modifiable two dimensional array from Python to Fortran
|
<p>In a previous post <a href="https://stackoverflow.com/questions/77989853/ctypes-argumenterror-dont-know-how-to-convert-parameter-1">ctypes.ArgumentError Don't know how to convert parameter 1</a>, I received a solution for passing a modifiable one dimensional array from Python to Fortran, which is (the first argument):</p>
<pre><code>f(x.ctypes.data_as(ct.POINTER(ct.c_double)), ct.byref(objf)).
</code></pre>
<p>I now have a two dimensional array which needs to be sent and returned from Fortran.</p>
<p>The Fortran looks as follows:</p>
<pre><code>!dec$ attributes dllexport :: getlincon
subroutine getlincon(nr, lhs)
integer, intent(in ) :: nr
real(8), intent(out) :: lhs(nr,*)
</code></pre>
<p>On the Python side, I have: (there are nr=3 rows and nc=2 columns)</p>
<pre><code>lib = ct.CDLL('x64\\Debug\\fortranobj.dll')
fl = getattr(lib,'FTN_OBJFUN_MOD_mp_GETLINCON')
nc = ct.c_int(2)
nr = ct.c_int(3)
lhs = [[np.zeros(int(nr.value))], [np.zeros(int(nc.value))]] # initialize lhs
fl(ct.byref(nr), lhs.ctypes.data_as(ct.POINTER(ct.c_double)))
</code></pre>
<p>This fails with:</p>
<pre><code>AttributeError: 'list' object has no attribute 'ctypes'
</code></pre>
<p>Clearly I haven't got the Python syntax correct.</p>
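A sketch of the Python side, assuming the same DLL and entry point as in the question (the actual call is commented out since the library is not available here). The key points: `lhs` must be a single NumPy array (not a list of arrays), and since Fortran stores arrays column-major, allocating with `order='F'` lets `lhs(nr,*)` map directly onto the buffer.

```python
import ctypes as ct

import numpy as np

nr, nc = 3, 2
# Fortran is column-major, so allocate the buffer with order='F';
# whatever getlincon writes becomes visible in this array afterwards.
lhs = np.zeros((nr, nc), dtype=np.float64, order='F')

nr_c = ct.c_int(nr)
lhs_ptr = lhs.ctypes.data_as(ct.POINTER(ct.c_double))

# With the DLL from the question loaded as `fl`, the call would be:
# fl(ct.byref(nr_c), lhs_ptr)
print(lhs.flags['F_CONTIGUOUS'])
```

The `AttributeError` in the question comes from building `lhs` as a nested Python list — only `numpy.ndarray` objects have the `.ctypes` attribute.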
|
<python><fortran>
|
2024-02-15 14:12:57
| 1
| 335
|
Adrian
|
78,001,467
| 22,221,987
|
How to use decorator, defined inside the class, with no warnings
|
<p>I've defined a decorator inside a class and am trying to use it in the same class.<br />
I created the decorator according to <a href="https://stackoverflow.com/questions/1367514/how-to-decorate-a-method-inside-a-class">this</a> topic, but PyCharm tells me this is a very suspicious way to do it.<br />
Here is the code:</p>
<pre><code>import sys
from typing import Callable
class TestClass:
    def __init__(self):
        self.smile = ':P'

    def handle_exception(func: Callable):
        def handle_exception_wrapper(self=None, *args, **kwargs):
            try:
                func(self, *args, **kwargs)
                return self.shutdown()
            except Exception as error:
                self.shutdown(error)
        return handle_exception_wrapper

    @handle_exception
    def some_function(self) -> None:
        print('some suspicious function is working')
        raise RuntimeError('Break Runtime - all be fine')

    def shutdown(error=None):
        if error:
            print('error:', error)
            print('function collapsed')
        else:
            print('good-boy function completed the task')
        print('goodbye', self.smile)
        sys.exit(1)
test = TestClass()
test.some_function()
</code></pre>
<p>And IDE warnings:
<a href="https://i.sstatic.net/0Fxz0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0Fxz0.png" alt="IDE warniings" /></a></p>
<p>And this code works (because the decorator quietly receives the instance), but it looks suspicious and un-Pythonic.</p>
<p>Long story short: is it possible to define and use a decorator inside a class cleanly? (One important point: I want to access instance methods inside the decorator.)</p>
|
<python><python-3.x><exception><decorator>
|
2024-02-15 14:10:23
| 1
| 309
|
Mika
|
78,001,435
| 3,575,623
|
Remove central noise from image
|
<p>I'm working with biologists, who are imaging DNA strands under a microscope, giving greyscale PNGs. Depending on the experiment, there can be a lot of background noise. On the ones where there is quite little, a simple threshold for pixel intensity is generally enough to remove it. However for some, it's not so simple.</p>
<p><a href="https://i.sstatic.net/ov2wc.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ov2wc.jpg" alt="DNA lines on noisy background" /></a></p>
<p>(not sure why it's uploading as jpg - <a href="https://drive.google.com/file/d/1yANfVlahYPouHLc_OQGmpkCVW-nM7ow8/view?usp=sharing" rel="nofollow noreferrer">this</a> should let you download it as PNG)</p>
<p>When I do apply a threshold for pixel intensity, I get this result:</p>
<p><a href="https://i.sstatic.net/96EG6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/96EG6.png" alt="Same DNA lines, but with a less noisy background" /></a></p>
<p>If I raise the threshold any more, I'll start to lose the pixels at the edges of the image that aren't very bright.</p>
<p>The noise seems to follow a gaussian distribution based on its location within the image, probably due to the light source of the microscope. What's the best way to compare pixels to the local background noise? Can I integrate the noise distribution?</p>
<p>For the moment I've been using <code>imageio.v3</code> in python3 for basic pixel manipulation, but I'm open to suggestions.</p>
<p>EDIT: I tried to use histogram equalisation, but it doesn't <em>quite</em> seem to be what I'm looking for...</p>
<p><a href="https://i.sstatic.net/cGe98.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cGe98.jpg" alt="DNA lines after histogram equalisation" /></a></p>
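Since the background varies smoothly across the frame, a common alternative to global equalisation is background flattening: estimate the slowly varying illumination with a heavy Gaussian blur and subtract it, after which a single global threshold works again. A minimal sketch on a synthetic stand-in image (Gaussian illumination profile plus one bright "strand"); the sigma values are assumptions to tune against the real data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic stand-in for the microscope frame: a Gaussian illumination
# profile as background, pixel noise, and a bright horizontal strand.
yy, xx = np.mgrid[0:200, 0:200]
background = 120 * np.exp(-((xx - 100) ** 2 + (yy - 100) ** 2) / (2 * 60 ** 2))
img = background + rng.normal(0, 5, (200, 200))
img[95:105, :] += 80  # the "DNA strand"

# Estimate the slowly varying illumination with a blur whose radius is
# much larger than the strands, then subtract it out.
estimated_bg = gaussian_filter(img, sigma=30)
flattened = img - estimated_bg

# A single global threshold now works on the flattened image.
mask = flattened > 3 * flattened.std()
print(mask.sum())
```

The same idea is often phrased as a "rolling ball" or top-hat background subtraction; `scipy.ndimage.white_tophat` or a median filter can replace the Gaussian blur if the strands are bright enough to bias the estimate.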
|
<python><image><png><noise>
|
2024-02-15 14:04:54
| 1
| 507
|
Whitehot
|
78,001,405
| 9,582,542
|
Selenium click on div text that is a variable
|
<p>I have an array of strings that varies and some strings have an apostrophe</p>
<pre><code>array = ["It's a Joke", "It's not a Joke","No more Jokes please"]
strNo = 0
if (strNo == 0):
    rpstring = array[strNo]
    print(f'"""//div[text()="{rpstring}"]"""')
    button = wait.until(EC.element_to_be_clickable((By.XPATH,f'"""//div[text()="{rpstring}"]"""')))
    button.click()
</code></pre>
<p>Here is the print result</p>
<pre><code>"""//div[text()="It's a Joke"]"""
</code></pre>
<p>I'd like to be able to click the element based on the text passed in, as it may vary.
Currently the XPath fails; I believe the apostrophe is breaking the string, I'm not sure I'm building the string properly, and the element is not found in the HTML.</p>
|
<python><selenium-webdriver>
|
2024-02-15 14:00:37
| 1
| 690
|
Leo Torres
|
78,001,359
| 10,037,034
|
Flask app not return any response when raising error (python)
|
<p>I am building a Flask API in Python. When a function inside the API throws an error, the server does not return any response. I have tried using try-except blocks and print statements to debug the issue, but I am still unable to resolve it.</p>
<p>Here is some relevant code:</p>
<pre><code>import json
from flask import Flask, Response, request, jsonify

app = Flask(__name__)

def first_function():
    try:
        second_function()
    except Exception as error:
        error_message = str(error).split("\n")[0]
        raise ValueError(error_message)

def second_function():
    try:
        print("second_function")
        third_function()
        print("I do not want to see this sentence.")
    except Exception as error:
        error_message = str(error).split("\n")[0]
        raise ValueError(error_message)

def third_function():
    try:
        print("third_function")
        print(1/0)
    except Exception as error:
        error_message = str(error).split("\n")[0]
        raise ValueError(error_message)

@app.route('/control', methods=['POST'])
def get():
    response = ''
    try:
        first_function()
        return jsonify(response)
    except Exception as error:
        error_message = str(error).split("\n")[0]
        response = {"text": "Something went wrong when operating 'get' process. -> " + str(error_message)}
        print("Error on main function ", error_message)
        return jsonify(response)
</code></pre>
<p>After that, I want to see the division error message in the response.
Thank you for your help!</p>
|
<python><json><flask>
|
2024-02-15 13:53:21
| 1
| 1,311
|
Sevval Kahraman
|
78,001,058
| 8,760,028
|
How to remove multiple values from a tuple in python
|
<p>I have a tuple which is an output of a select query. The tuple looks like this:</p>
<p><code>('checkStatus id \n========================= ==================== \nfalse adcs-fddf-sdsd \n', None)</code></p>
<p>I am trying to retrieve only the two values i.e. false and adcs-fddf-sdsd</p>
<p>This is the code:</p>
<pre><code>import subprocess
def foo():
    cmd = "select checkStatus , id from abc"
    pr = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
    output = pr.communicate()
    print(output)  # ('checkStatus               id \n=========================
                   #  ==================== \nfalse adcs-fddf-sdsd \n', None)
</code></pre>
<p>Now, I found out that I can remove the unwanted whitespace using <code>str(output).replace(' ', '')</code>, but for all the other artifacts I would have to chain multiple replace calls. Is there a better way to retrieve only those two values <code>(false and adcs-fddf-sdsd)</code>?</p>
<p><strong>NOTE</strong> The code has to be compatible with both Python 2 and 3.</p>
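A sketch, assuming the output always has the shape shown (header line, separator line, then one data row): `str.split()` with no argument collapses any run of whitespace, so the data row falls apart into exactly the two wanted tokens, with no replace chains needed. The same code runs on Python 2 and 3.

```python
# Stand-in for the tuple returned by pr.communicate() in the question.
output = ('checkStatus               id                   \n'
          '========================= ==================== \n'
          'false                     adcs-fddf-sdsd       \n', None)

# splitlines() gives [header, separator, data row]; split() with no
# argument collapses any amount of whitespace between the two fields.
lines = output[0].splitlines()
check_status, row_id = lines[2].split()
print(check_status, row_id)
```

If the query could ever return multiple rows, iterating over `lines[2:]` and calling `.split()` per row generalises the same idea.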
|
<python>
|
2024-02-15 13:10:41
| 2
| 1,435
|
pranami
|
78,001,042
| 19,356,117
|
How to solve duplicated lists in numpy 1.24?
|
<p>Now I have a 2-dimensional list which has many duplicated lists in it:</p>
<pre><code>sta_list = [[31, 30, 11, 3, 1, 0], [31, 30, 11, 3, 1, 0], [31, 30, 11, 3, 1, 0], ……, [31, 30, 11, 3, 1, 0], [31, 30, 11, 3, 1, 0], [34, 33, 1, 0], ……]
</code></pre>
<p>In numpy 1.22, I can use np.unique to remove duplicated lists like this:</p>
<pre><code>np.unique(np.array(sta_lists))
</code></pre>
<p>But in numpy 1.24, it can only get this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\yy\AppData\Roaming\JetBrains\IntelliJIdea2023.3\plugins\python\helpers-pro\pydevd_asyncio\pydevd_asyncio_utils.py", line 114, in _exec_async_code
result = func()
^^^^^^
File "<input>", line 1, in <module>
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (396,) + inhomogeneous part.
</code></pre>
<p>So how can I solve this in Python 3.11 and NumPy 1.24, rather than reverting to an older Python and NumPy?</p>
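Since the inner lists have different lengths, NumPy 1.24 refuses to pack them into one ndarray at all (older versions silently made an object array). One way around it is to deduplicate in plain Python instead: tuples are hashable, and `dict.fromkeys` preserves first-seen order while dropping repeats. A sketch:

```python
sta_list = [[31, 30, 11, 3, 1, 0], [31, 30, 11, 3, 1, 0],
            [31, 30, 11, 3, 1, 0], [34, 33, 1, 0]]

# Ragged rows cannot form a single ndarray anymore, so deduplicate in
# pure Python: map each row to a hashable tuple, let dict keys drop the
# repeats (insertion order is preserved), then convert back to lists.
unique_lists = [list(t) for t in dict.fromkeys(map(tuple, sta_list))]
print(unique_lists)
```

If order does not matter, `set(map(tuple, sta_list))` is an even shorter equivalent.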
|
<python><arrays><numpy>
|
2024-02-15 13:08:08
| 2
| 1,115
|
forestbat
|
78,001,004
| 10,590,609
|
Typehint the type of a collection and the collection itself
|
<p>Say I want to collect an iterable into another collection type in the <code>collect</code> method:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar, Collection
from dataclasses import dataclass
T = TypeVar("T")
@dataclass
class Example(Generic[T]):
    data: list[T]

    def collect(self, collector: type[Collection]) -> Collection[T]:
        return collector(self.data)
</code></pre>
<p>If implemented like that, the typehinting information of which collector is used is lost:</p>
<pre class="lang-py prettyprint-override"><code># result should type-hint 'set[int]'
# but instead shows 'Collection[int]'
# The collector type is lost..
result = Example([1, 2]).collect(set)
</code></pre>
<p>How can I keep both the type of collection and the type of what is held by the collection, while keeping it all generic?</p>
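One way to keep both pieces of information is a second `TypeVar` for the collection type itself, so the checker solves it from the argument at the call site instead of erasing it to `Collection`. A sketch (typing the collector as a `Callable` rather than `type[...]` so constructors like `set` match cleanly):

```python
from dataclasses import dataclass
from typing import Callable, Collection, Generic, Iterable, TypeVar

T = TypeVar("T")
# A dedicated TypeVar for the collection type keeps the concrete
# collector class (e.g. set[int]) in the inferred return type.
C = TypeVar("C", bound=Collection)


@dataclass
class Example(Generic[T]):
    data: list[T]

    def collect(self, collector: Callable[[Iterable[T]], C]) -> C:
        return collector(self.data)


# Checkers now infer 'set[int]' here, since C is solved from `set`.
result = Example([1, 2]).collect(set)
print(result)
```

Runtime behaviour is unchanged; only the inferred type of `result` improves.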
|
<python><generics><python-typing>
|
2024-02-15 13:02:34
| 1
| 332
|
Izaak Cornelis
|
78,000,943
| 7,408,848
|
In Mongoengine add a document and update a reference field in another document
|
<p>I have two documents that I would like to link together through a ReferenceField using the _id. The idea was to create the documents, save them to the database, take the _id and assign it to a reference field of the other document. As the documents will be created at different times, I cannot have both established beforehand.</p>
<p>Here is the code thus far and the error that occurs.</p>
<pre><code>from mongoengine import StringField, Document, DateField, ReferenceField
from bson import ObjectId

class GameWrapper(Document):
    title: StringField()
    year: DateField()

class GameCompanyWrapper(Document):
    Company_Name: StringField()
    published_games: ReferenceField(GameWrapper)

class IstrumentService:
    mongo_url: str = None

    def save(self, inst: GameWrapper) -> ObjectId:
        saved_inst: inst = inst.save()
        return saved_inst.id
instrument_service = IstrumentService()
GCW = GameCompanyWrapper()
GCW.Company_Name = "Squaresoft"
IstrumentService.save(GCW)
GW = GameWrapper()
GW.Title = "Final Fantasy"
ob_ID = IstrumentService.save(GCW)
GCW.published_games = ob_ID
IstrumentService.save(GW)
</code></pre>
<blockquote>
<p>AttributeError: 'ObjectId' object has no attribute '_data'</p>
</blockquote>
<p>I am under the assumption that the objectID is a reference to an entry in the database, so why does it throw a no attribute '_data' error?</p>
|
<python><mongoengine>
|
2024-02-15 12:54:09
| 1
| 1,111
|
Hojo.Timberwolf
|
78,000,812
| 11,913,986
|
Map columns of two df based on array intersection of individual columns and based on highest common element Pandas Memory error Pyspark/Pandas
|
<p>I have a dataframe df1 like this:</p>
<pre><code>A B
AA [a,b,c,d]
BB [a,f,g,c]
CC [a,b,l,m]
</code></pre>
<p>And another one as df2 like:</p>
<pre><code>C D
XX [a,b,c,n]
YY [a,m,r,s]
UU [e,h,I,j]
</code></pre>
<p>I want to map column C of df2 to column A of df1 based on the highest number of matching elements between the items of df2['D'] and df1['B'], and null if there is no match.</p>
<p>The result df will look like:</p>
<pre><code>C D A common_items
XX [a,b,c,n] AA [a,b,c]
YY [a,m,r,s] CC [a,m]
UU [e,h,I,j] Null Null
</code></pre>
<p>The solution I have is to convert to set, compute the intersections with numpy and get the best match:</p>
<pre><code>b = df1['B'].apply(set).to_numpy()
d = df2['D'].apply(set).to_numpy()
# compute pairwise intersections
common = d[:,None] & b
# get largest intersection per row
vlen = np.vectorize(len)
idx = np.argmax(vlen(common), axis=1)
# assign intersections and original ID
df2['common_items'] = common[np.arange(len(d)), idx]
df2['A'] = np.where(df2['common_items'].str.len()>0,
df1['A'].to_numpy()[idx], None)
</code></pre>
<p>I'm looking for any solution based on the number of common elements between the two columns of the different dataframes, mapping the source df1['A'] to the target df2['C'].
[a,b,c,d] etc. are lists of strings.</p>
<p>Data:</p>
<pre><code>df1.to_dict('list'):
{'A': ['AA', 'BB', 'CC'],
'B': [['a', 'b', 'c', 'd'], ['a', 'f', 'g', 'c'], ['a', 'b', 'l', 'm']]}
df2.to_dict('list'):
{'C': ['XX', 'YY', 'UU'],
'D': [['a', 'b', 'c', 'n'], ['a', 'm', 'r', 's'], ['e', 'h', 'l', 'j']]}
</code></pre>
<p>It works fine for a limited amount of data (say 5K rows). When I try it with 87K rows in df2 against 567 rows in df1, it consistently fails with an out-of-memory error on Databricks.</p>
<pre><code>The spark driver has stopped unexpectedly and is restarting. Your notebook will be automatically reattached.
</code></pre>
<p>Is there anything more efficient, if not in Pandas then in PySpark?</p>
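The memory blow-up comes from materialising an 87K x 567 array of Python `set` objects. A sketch of an alternative that stays in Pandas: one-hot encode both list columns over a shared vocabulary as `int8`, so the pairwise intersection sizes become a single matrix product (which can be chunked over df2's rows if even that is too large). Shown here on the question's sample data:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'A': ['AA', 'BB', 'CC'],
                    'B': [['a', 'b', 'c', 'd'], ['a', 'f', 'g', 'c'], ['a', 'b', 'l', 'm']]})
df2 = pd.DataFrame({'C': ['XX', 'YY', 'UU'],
                    'D': [['a', 'b', 'c', 'n'], ['a', 'm', 'r', 's'], ['e', 'h', 'I', 'j']]})

# Shared vocabulary over both list columns.
vocab = sorted({x for col in (df1['B'], df2['D']) for row in col for x in row})
idx = {v: i for i, v in enumerate(vocab)}

def one_hot(col):
    # int8 indicator matrix: one row per list, one column per vocab item.
    m = np.zeros((len(col), len(vocab)), dtype=np.int8)
    for r, items in enumerate(col):
        m[r, [idx[x] for x in items]] = 1
    return m

b, d = one_hot(df1['B']), one_hot(df2['D'])
counts = d @ b.T                       # (len(df2), len(df1)) intersection sizes
best = counts.argmax(axis=1)
best_counts = counts[np.arange(len(d)), best]

df2['A'] = np.where(best_counts > 0, df1['A'].to_numpy()[best], None)
df2['common_items'] = [sorted(set(x) & set(df1['B'][i])) if n > 0 else None
                       for x, i, n in zip(df2['D'], best, best_counts)]
print(df2)
```

At 87K x 567 the `counts` matrix is only ~50 MB as int8, far cheaper than the pairwise set objects; if the vocabulary is large, `scipy.sparse` indicator matrices drop the cost further.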
|
<python><pandas><dataframe><list><pyspark>
|
2024-02-15 12:35:18
| 1
| 739
|
Strayhorn
|
78,000,787
| 1,866,775
|
How to obtain a 2d transformation matrix from two pairs of points?
|
<p>OpenCV provides:</p>
<ul>
<li><code>getRotationMatrix2D</code> to get a 2x3 transformation matrix (rotation, scale, shift) defined by <code>center</code>, <code>angle</code> and <code>scale</code></li>
<li><code>getAffineTransform</code> to get a 2x3 transformation matrix (rotation, scale, shift, sheer) defined by three pairs of points.</li>
</ul>
<p>I'd like to get a transformation matrix with rotation, scale, and shift (i.e. no sheer) from two pairs of points.</p>
<p>Here is my current implementation, which works, but it's way too complex for my taste:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Tuple, List
import cv2
import numpy as np
import numpy.typing
def _third_triangle_point(p1: Tuple[float, float], p2: Tuple[float, float]) -> Tuple[float, float]:
    """Calculate the third point of an isosceles right-angled triangle."""
    p1_arr = np.array(p1, dtype=np.float32)
    p2_arr = np.array(p2, dtype=np.float32)
    diff = p2_arr - p1_arr
    perpendicular = np.array((diff[1], -diff[0]), dtype=np.float32)
    result = p1_arr + perpendicular
    return result[0], result[1]

def _stack_points(points: List[Tuple[float, float]]) -> np.typing.NDArray[np.float32]:
    return np.vstack([np.array(p, dtype=np.float32) for p in points])

def get_transformation_between_two_point_pairs(
        src: Tuple[Tuple[float, float], Tuple[float, float]],
        dst: Tuple[Tuple[float, float], Tuple[float, float]]
) -> np.typing.NDArray[np.float32]:
    # cv2.getAffineTransform takes three point pairs.
    # It supports rotation, translation, scaling, and shearing.
    # We don't need the shearing,
    # so we invent a third point with a stable relation to the given two.
    return cv2.getAffineTransform(  # type: ignore
        _stack_points([src[0], src[1], _third_triangle_point(src[0], src[1])]),
        _stack_points([dst[0], dst[1], _third_triangle_point(dst[0], dst[1])])
    )
print(get_transformation_between_two_point_pairs(((10, 10), (17, 23)), ((30, 30), (70, 30))))
</code></pre>
<pre><code>[[ 1.28440367 2.3853211 -6.69724771]
[-2.3853211 1.28440367 41.00917431]]
</code></pre>
<p>Is there a simpler way to achieve the same result?</p>
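A similarity transform (rotation + uniform scale + translation) is fully determined by two point pairs and has a closed form, so OpenCV isn't needed at all. Treating 2-D points as complex numbers, the transform is z → a·z + b, with a and b solved directly from the two correspondences. A sketch that reproduces the matrix from the question:

```python
import numpy as np

def similarity_from_two_points(src, dst):
    """2x3 rotation+scale+translation matrix mapping the src pair onto dst.

    With points as complex numbers, the similarity transform is
    z -> a*z + b; two correspondences pin down a and b exactly.
    """
    p1, p2 = (complex(*p) for p in src)
    q1, q2 = (complex(*q) for q in dst)
    a = (q2 - q1) / (p2 - p1)      # rotation and scale
    b = q1 - a * p1                # translation
    return np.array([[a.real, -a.imag, b.real],
                     [a.imag,  a.real, b.imag]], dtype=np.float64)

M = similarity_from_two_points(((10, 10), (17, 23)), ((30, 30), (70, 30)))
print(M)  # matches the cv2-based output in the question
```

The same decomposition also gives the scale (`abs(a)`) and rotation angle (`cmath.phase(a)`) for free if those are needed separately.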
|
<python><opencv><computer-vision><linear-algebra><affinetransform>
|
2024-02-15 12:30:08
| 1
| 11,227
|
Tobias Hermann
|
78,000,777
| 9,173,710
|
Improve scipy.integrate.quad_vec performance for fitting integral equation, workers keyword non-functional
|
<p>I am using <code>quad_vec</code> in a function that contains an integral not solvable with analytic methods. I need to fit this equation to data points.</p>
<p>However, even a single evaluation with fixed parameter values takes multiple seconds, so the total fit time grows to about 7 minutes.</p>
<p>Is there a way to accelerate the computation? I know <code>quad_vec</code> has the <code>workers</code> keyword, but when I try to use that, the computation does not finish at all.
I am currently working in a Jupyter notebook if that has any significance.</p>
<p>This is the function definition, I already tried to use numba here with little success.</p>
<pre class="lang-py prettyprint-override"><code>from scipy.special import j0, j1
from scipy.integrate import quad_vec
from scipy.optimize import curve_fit
import numpy as np
import numba as nb
h = 0.00005
G = 1000
E = 3*G
R= 0.0002
upsilon = 0.06
gamma = 0.072
@nb.jit(nb.float64(nb.float64, nb.float64, nb.float64, nb.float64, nb.float64),cache=True)
def QSzz_1(s,h,z, upsilon, E):
    # exponential form of the QSzz^-1 function to prevent float overflow,
    # also assumes nu==0.5
    numerator = (1+2*s**2*h**2)*np.exp(-2*s*z) + 0.5*(1+np.exp(-4*s*z))
    denominator = 0.5*(1-np.exp(-4*s*z)) - 2*s*h*np.exp(-2*s*z)
    return (3/(2*s*E)) / (numerator/denominator + (3*upsilon/(2*E))*s)

def integrand(s, r, R, upsilon, E, h):
    return s*(R*j0(s*R) - 2*j1(s*R)/s) * QSzz_1(s,h,h, upsilon, E) * j0(s*r)

def style_exact(r, gamma, R, upsilon, E, h):
    int_out = quad_vec(integrand, 0, np.inf, args=(r,R,upsilon,E,h))
    return gamma * int_out[0]
# calculate fixed ~10s
x_ax = np.linspace(0, 0.0004,101, endpoint=True, dtype=np.float64)
zeta = style_exact(x_ax, gamma, 0.0002, upsilon, E, h)
# fit to dataset (wetting ridge) ~7 min
popt_hemi, pcov_hemi = curve_fit(lambda x, upsilon, E, h: style_exact(x, gamma, R, upsilon, E, h) ,points_x, points_y, p0=(upsilon, E, h), bounds=([0,0,0],[np.inf,np.inf,np.inf]))
</code></pre>
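Two knobs usually dominate `quad_vec` runtime before parallelism is even needed: the tolerances (the defaults are `epsabs=1e-200`, `epsrel=1e-8`, far tighter than a curve fit requires) and the infinite upper limit, which forces an interval transform; replacing it with a cutoff where the damped kernel has decayed costs nothing. As for `workers`: it dispatches via multiprocessing, so the integrand has to be picklable, i.e. defined in an importable module rather than a notebook cell, which is a likely reason it never finishes under Jupyter. A sketch with a simple stand-in kernel whose integral is known analytically:

```python
import numpy as np
from scipy.integrate import quad_vec

def integrand(s, r):
    # stand-in for the question's oscillatory, exponentially damped kernel
    return np.exp(-s) * np.cos(s * r)

r = np.linspace(0.0, 4.0, 101)

# 1. relax the tolerances to what the fit actually needs;
# 2. truncate the infinite interval where the kernel has decayed away.
val, err = quad_vec(integrand, 0.0, 40.0, args=(r,),
                    epsabs=1e-9, epsrel=1e-6)

# analytic check: integral of exp(-s)*cos(s*r) over [0, inf) is 1/(1+r^2)
print(np.max(np.abs(val - 1.0 / (1.0 + r**2))))
```

Applied to the question's kernel, the cutoff would be chosen from the `exp(-2*s*h)` damping scale (assumption: a multiple of `1/h`), and loosening `epsrel` to roughly the noise level of `points_y` typically cuts the fit time by an order of magnitude.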
<p>Edit: here are some example values:</p>
<pre class="lang-py prettyprint-override"><code>points_x =[0.00040030286, 0.00040155788, 0.00040281289, 0.00040406791000000003, 0.00040532292, 0.00040657794, 0.00040783296, 0.00040908797, 0.00041034299, 0.00041159801, 0.00041285302, 0.00041410804, 0.00041536305, 0.00041661807, 0.00041787309, 0.0004191281, 0.00042038312, 0.00042163814, 0.00042289315000000003, 0.00042414817000000003, 0.00042540318, 0.0004266582, 0.00042791322, 0.00042916823, 0.00043042325, 0.00043167827, 0.00043293328, 0.0004341883, 0.00043544332, 0.00043669833, 0.00043795335, 0.00043920836, 0.00044046338, 0.0004417184, 0.00044297341000000003, 0.00044422843000000003, 0.00044548345, 0.00044673846, 0.00044799348, 0.00044924849, 0.00045050351, 0.00045175852999999996, 0.00045301354000000006, 0.00045426856, 0.00045552357999999995, 0.00045677859000000005, 0.00045803361, 0.00045928863000000006, 0.00046054364000000004, 0.00046179866, 0.00046305367, 0.00046430869000000004, 0.00046556371, 0.00046681871999999997, 0.00046807374000000003, 0.00046932876, 0.00047058376999999996, 0.00047183879, 0.0004730938, 0.00047434881999999995, 0.00047560384, 0.00047685885, 0.00047811387000000006, 0.00047936889, 0.0004806239, 0.00048187892000000005, 0.00048313393000000004, 0.00048438895, 0.00048564397000000004, 0.00048689898000000003, 0.000488154, 0.00048940902, 0.00049066403, 0.00049191905, 0.00049317407, 0.00049442908, 0.0004956841, 0.0004969391100000001, 0.00049819413, 0.0004994491500000001, 0.00050070416, 0.00050195918, 0.0005032142, 0.00050446921, 0.00050572423, 0.00050697924, 0.00050823426, 0.00050948928, 0.00051074429, 0.00051199931, 0.00051325433, 0.00051450934, 0.00051576436, 0.00051701937, 0.0005182743900000001, 0.00051952941, 0.00052078442, 0.00052203944, 0.00052329446, 0.00052454947, 0.00052580449, 0.00052705951, 0.00052831452, 0.00052956954, 0.00053082455, 0.00053207957, 0.00053333459, 0.0005345896, 0.00053584462, 0.00053709964, 0.00053835465, 0.00053960967, 0.00054086468, 0.0005421197, 0.0005433747200000001, 
0.00054462973, 0.00054588475, 0.00054713977, 0.00054839478, 0.0005496498, 0.00055090482, 0.00055215983, 0.00055341485, 0.00055466986, 0.00055592488, 0.0005571799, 0.00055843491, 0.00055968993, 0.00056094495, 0.0005621999600000001, 0.00056345498, 0.00056470999, 0.00056596501, 0.00056722003, 0.00056847504, 0.00056973006, 0.00057098508, 0.00057224009, 0.00057349511, 0.00057475012, 0.00057600514, 0.00057726016, 0.00057851517, 0.00057977019, 0.00058102521, 0.00058228022, 0.0005835352400000001, 0.00058479026, 0.00058604527, 0.00058730029, 0.0005885553, 0.00058981032, 0.00059106534, 0.00059232035, 0.00059357537, 0.00059483039, 0.0005960854, 0.00059734042, 0.00059859543, 0.00059985045, 0.00060110547, 0.0006023604800000001, 0.0006036155, 0.00060487052, 0.00060612553, 0.00060738055, 0.00060863556, 0.00060989058, 0.0006111456, 0.00061240061, 0.00061365563, 0.00061491065, 0.00061616566, 0.00061742068, 0.0006186757, 0.00061993071, 0.00062118573, 0.00062244074, 0.00062369576, 0.00062495078, 0.00062620579, 0.0006274608100000001, 0.00062871583, 0.00062997084, 0.00063122586, 0.00063248087, 0.00063373589, 0.00063499091, 0.00063624592, 0.00063750094, 0.00063875596, 0.00064001097, 0.00064126599, 0.000642521, 0.00064377602, 0.00064503104, 0.0006462860500000001, 0.00064754107, 0.0006487960900000001, 0.0006500511, 0.00065130612, 0.00065256114, 0.00065381615, 0.00065507117, 0.00065632618, 0.0006575812, 0.00065883622, 0.00066009123, 0.00066134625, 0.00066260127, 0.00066385628, 0.0006651113, 0.00066636631, 0.0006676213300000001, 0.00066887635, 0.00067013136, 0.00067138638, 0.0006726414, 0.00067389641, 0.00067515143, 0.00067640645, 0.00067766146, 0.00067891648, 0.00068017149, 0.00068142651, 0.00068268153, 0.00068393654, 0.00068519156, 0.00068644658, 0.00068770159, 0.00068895661, 0.00069021162, 0.00069146664, 0.0006927216600000001, 0.00069397667, 0.00069523169, 0.00069648671, 0.00069774172, 0.00069899674, 0.00070025175, 0.00070150677, 0.00070276179, 0.0007040168, 0.00070527182, 0.00070652684, 
0.00070778185, 0.00070903687, 0.00071029189, 0.0007115469000000001, 0.00071280192, 0.00071405693, 0.00071531195, 0.00071656697, 0.00071782198, 0.000719077, 0.00072033202, 0.00072158703, 0.00072284205, 0.00072409706, 0.00072535208, 0.0007266071, 0.00072786211, 0.00072911713, 0.00073037215, 0.00073162716, 0.0007328821800000001, 0.00073413719, 0.00073539221, 0.00073664723, 0.00073790224, 0.00073915726, 0.00074041228, 0.00074166729, 0.00074292231, 0.00074417733, 0.00074543234, 0.00074668736, 0.00074794237, 0.00074919739, 0.00075045241, 0.0007517074200000001, 0.00075296244, 0.00075421746, 0.00075547247, 0.00075672749, 0.0007579825, 0.00075923752, 0.00076049254, 0.00076174755, 0.00076300257, 0.00076425759, 0.0007655126, 0.00076676762, 0.00076802264, 0.00076927765, 0.00077053267, 0.00077178768, 0.0007730427, 0.00077429772, 0.00077555273, 0.0007768077500000001, 0.00077806277, 0.00077931778, 0.0007805728, 0.00078182781, 0.00078308283, 0.00078433785, 0.00078559286, 0.00078684788, 0.0007881029, 0.00078935791, 0.00079061293, 0.00079186794, 0.00079312296, 0.00079437798, 0.0007956329900000001, 0.00079688801, 0.0007981430300000001, 0.00079939804, 0.00080065306, 0.00080190808, 0.00080316309, 0.00080441811, 0.00080567312, 0.00080692814, 0.00080818316, 0.00080943817, 0.00081069319, 0.00081194821, 0.00081320322, 0.00081445824, 0.00081571325, 0.0008169682700000001, 0.00081822329, 0.0008194783, 0.00082073332, 0.00082198834, 0.00082324335, 0.00082449837, 0.00082575338, 0.0008270084, 0.00082826342, 0.00082951843, 0.00083077345, 0.00083202847, 0.00083328348, 0.0008345385, 0.00083579352, 0.00083704853, 0.00083830355, 0.00083955856, 0.00084081358, 0.0008420686000000001, 0.00084332361, 0.00084457863, 0.00084583365, 0.00084708866, 0.00084834368, 0.00084959869, 0.00085085371, 0.00085210873, 0.00085336374, 0.00085461876, 0.00085587378, 0.00085712879, 0.00085838381, 0.00085963882, 0.0008608938400000001, 0.00086214886, 0.00086340387, 0.00086465889, 0.00086591391, 0.00086716892, 0.00086842394, 
0.00086967896, 0.00087093397, 0.00087218899, 0.000873444, 0.00087469902, 0.00087595404, 0.00087720905, 0.00087846407, 0.00087971909, 0.0008809741, 0.0008822291200000001, 0.00088348413, 0.00088473915, 0.00088599417, 0.00088724918, 0.0008885042, 0.00088975922, 0.00089101423, 0.00089226925, 0.00089352427, 0.00089477928, 0.0008960343, 0.00089728931, 0.00089854433, 0.00089979935, 0.0009010543600000001, 0.00090230938, 0.0009035644, 0.00090481941, 0.00090607443, 0.00090732944, 0.00090858446, 0.00090983948, 0.00091109449, 0.00091234951, 0.00091360453, 0.00091485954, 0.00091611456, 0.00091736957, 0.00091862459, 0.00091987961, 0.00092113462, 0.00092238964, 0.00092364466, 0.00092489967, 0.0009261546900000001, 0.00092740971, 0.00092866472, 0.00092991974, 0.00093117475, 0.00093242977, 0.00093368479, 0.0009349398, 0.00093619482, 0.00093744984, 0.00093870485, 0.0009399598700000001, 0.00094121488, 0.0009424699, 0.0009437249200000001, 0.00094497993, 0.00094623495, 0.0009474899700000001, 0.0009487449799999999, 0.00095, 0.0009512550100000001, 0.0009525100299999999, 0.00095376505, 0.0009550200600000001, 0.0009562750799999999, 0.0009575301, 0.0009587851100000001, 0.0009600401299999999, 0.00096129515, 0.00096255017, 0.00096380517, 0.0009650601700000001, 0.00096631517, 0.00096757027, 0.0009688252700000001, 0.00097008027, 0.0009713352699999999, 0.0009725902700000001, 0.00097384527, 0.0009751003699999999, 0.0009763553700000001, 0.00097761037, 0.00097886537, 0.00098012037, 0.0009813753699999999, 0.0009826304699999998, 0.0009838854700000002, 0.00098514047, 0.00098639547, 0.00098765047, 0.0009889054699999998, 0.0009901605699999998, 0.0009914155700000002, 0.00099267057, 0.00099392557, 0.00099518057, 0.0009964355699999998, 0.0009976905700000002, 0.0009989456700000001, 0.00100020067, 0.00100145567, 0.00100271067, 0.0010039656699999998, 0.0010052206700000002, 0.0010064757700000001, 0.00100773077, 0.00100898577, 0.0010102407699999999, 0.0010114957699999998, 0.0010127507700000002, 0.00101400587, 
0.00101526087, 0.00101651587, 0.0010177708699999999, 0.0010190258700000002, 0.0010202808700000001, 0.00102153597, 0.00102279097, 0.00102404597, 0.0010253009699999998, 0.0010265559700000002, 0.0010278109700000001, 0.00102906607, 0.00103032107, 0.00103157607, 0.0010328310699999998, 0.0010340860700000002, 0.00103534107, 0.00103659617, 0.00103785117, 0.00103910617, 0.0010403611699999998, 0.0010416161700000002, 0.00104287117, 0.00104412627, 0.00104538127, 0.0010466362699999999, 0.0010478912699999998, 0.0010491462700000002, 0.00105040127, 0.00105165627, 0.00105291137, 0.0010541663699999999, 0.0010554213700000002, 0.0010566763700000001, 0.00105793137, 0.00105918637, 0.00106044147, 0.0010616964699999999, 0.0010629514700000002, 0.0010642064700000001, 0.00106546147, 0.00106671647, 0.00106797157, 0.0010692265699999998, 0.0010704815700000002, 0.00107173657, 0.00107299157, 0.00107424657, 0.00107550167, 0.0010767566699999998, 0.0010780116700000002, 0.00107926667, 0.00108052167, 0.00108177667, 0.0010830317699999999, 0.0010842867699999998, 0.0010855417700000002, 0.00108679677, 0.00108805177, 0.00108930677, 0.0010905618699999999, 0.0010918168700000002, 0.0010930718700000001, 0.00109432687, 0.00109558187, 0.00109683687, 0.0010980919699999999, 0.0010993469700000002, 0.0011006019700000001, 0.00110185697, 0.00110311197, 0.0011043669699999999, 0.0011056219699999998, 0.0011068770700000002, 0.00110813207, 0.00110938707, 0.00111064207, 0.0011118970699999999, 0.0011131520700000002, 0.0011144071700000002, 0.00111566217, 0.00111691717, 0.00111817217, 0.0011194271699999998, 0.0011206821700000002, 0.0011219372700000002, 0.00112319227, 0.00112444727, 0.00112570227, 0.0011269572699999998, 0.0011282122700000002, 0.0011294673700000001, 0.00113072237, 0.00113197737, 0.00113323237, 0.0011344873699999998, 0.0011357423700000002, 0.0011369974700000001, 0.00113825247, 0.00113950747, 0.0011407624699999999, 0.0011420174699999998, 0.0011432724700000002, 0.0011445275700000001, 0.00114578257, 0.00114703757, 
0.0011482925699999999, 0.0011495475700000002, 0.0011508025700000001, 0.00115205767, 0.00115331267, 0.00115456767, 0.0011558226699999999, 0.0011570776700000002, 0.0011583326700000001, 0.00115958777, 0.00116084277, 0.00116209777, 0.0011633527699999998, 0.0011646077700000002, 0.00116586277, 0.00116711777, 0.00116837287, 0.00116962787, 0.0011708828699999998, 0.0011721378700000002, 0.00117339287, 0.00117464787, 0.00117590297, 0.0011771579699999999, 0.0011784129699999998, 0.0011796679700000002, 0.00118092297, 0.00118217797, 0.00118343307, 0.0011846880699999999, 0.0011859430700000002, 0.0011871980700000001, 0.00118845307, 0.00118970807, 0.00119096317, 0.0011922181699999999, 0.0011934731700000002, 0.0011947281700000001, 0.00119598317, 0.00119723817, 0.00119849327, 0.0011997482699999998, 0.0012010032700000002, 0.00120225827, 0.00120351327, 0.00120476827, 0.00120602337, 0.0012072783699999998, 0.0012085333700000002, 0.00120978837, 0.00121104337, 0.00121229837, 0.0012135534699999999, 0.0012148084699999998, 0.0012160634700000002, 0.00121731847, 0.00121857347, 0.00121982847, 0.0012210834699999998, 0.0012223385700000002, 0.0012235935700000001, 0.00122484857, 0.00122610357, 0.00122735857, 0.0012286135699999998, 0.0012298686700000002, 0.0012311236700000001, 0.00123237867, 0.00123363367, 0.0012348886699999999, 0.0012361436699999998]
points_y = [-2.4929826e-07, -2.3248189e-07, -4.4305314e-07, -1.0689171e-06, -7.0144722e-07, -1.3773717e-06, -9.3672285e-07, -1.6876499e-06, -9.8346007e-07, -1.7992562e-06, -1.0198111e-06, -1.721233e-06, -8.9082583e-07, -1.1925362e-06, -8.3776501e-07, -6.9998957e-07, -7.1134901e-07, -4.5476849e-07, -6.4449894e-07, -3.8765887e-07, -6.3044764e-07, -7.4224008e-07, -7.6114851e-07, -1.0377502e-06, -1.3589471e-06, -1.3342596e-06, -1.3712255e-06, -1.3510569e-06, -1.2278933e-06, -8.2319036e-07, -1.4040568e-06, -6.4183121e-07, -1.1649824e-06, -7.3197454e-07, -1.0537769e-06, -8.3223932e-07, -1.1644648e-06, -1.2177416e-06, -1.4045247e-06, -1.6934001e-06, -1.6157397e-06, -1.8595331e-06, -1.7097882e-06, -1.8031869e-06, -1.5406345e-06, -1.5851084e-06, -1.5695719e-06, -1.4990693e-06, -1.8087735e-06, -1.7151045e-06, -1.8353234e-06, -1.71844e-06, -1.7904118e-06, -1.5297879e-06, -1.6064767e-06, -1.4520618e-06, -1.1090131e-06, -1.2475477e-06, -7.4591269e-07, -1.0619496e-06, -7.5699762e-07, -1.3883064e-06, -1.3300594e-06, -1.9713711e-06, -2.0613271e-06, -2.5116161e-06, -2.4466345e-06, -2.5386926e-06, -2.2368298e-06, -2.2934508e-06, -1.8951084e-06, -1.8117756e-06, -1.6680112e-06, -1.8274169e-06, -1.7569355e-06, -2.1081536e-06, -2.1241154e-06, -2.2742958e-06, -2.4032149e-06, -2.2596226e-06, -2.1889918e-06, -1.9359605e-06, -1.8878718e-06, -1.6144539e-06, -1.6485844e-06, -1.2316506e-06, -1.6932815e-06, -8.1348768e-07, -1.310099e-06, -4.3574574e-07, -1.0726973e-06, -6.6005902e-07, -1.2151841e-06, -9.1100721e-07, -1.4911344e-06, -1.3152027e-06, -1.3695714e-06, -1.3930563e-06, -1.3452594e-06, -1.3228626e-06, -1.3714694e-06, -1.2480971e-06, -1.4622823e-06, -1.5687181e-06, -1.7872703e-06, -1.7135845e-06, -2.0209804e-06, -1.3665688e-06, -1.7074398e-06, -1.1511678e-06, -1.1604734e-06, -1.0173458e-06, -1.0660268e-06, -1.0424449e-06, -1.1101976e-06, -1.0030326e-06, -1.0879421e-06, -8.2978143e-07, -9.3823628e-07, -7.2342249e-07, -9.8929055e-07, -1.0764783e-06, -1.3105722e-06, -1.3954326e-06, 
-1.5047949e-06, -1.4339143e-06, -1.3061363e-06, -1.3200332e-06, -1.381963e-06, -1.3490984e-06, -1.3526509e-06, -1.463083e-06, -1.2588114e-06, -1.4445926e-06, -1.1240129e-06, -1.3659935e-06, -1.3323392e-06, -1.3695779e-06, -1.7108472e-06, -1.7111548e-06, -2.0250494e-06, -2.1803196e-06, -2.2433208e-06, -2.4435685e-06, -1.9341618e-06, -2.3866277e-06, -1.8497934e-06, -1.8903583e-06, -1.4422203e-06, -1.7661343e-06, -1.5059728e-06, -1.5770287e-06, -1.8108199e-06, -2.0170832e-06, -1.8260586e-06, -2.1429269e-06, -2.0532939e-06, -2.1373399e-06, -2.342127e-06, -2.3871293e-06, -2.5980083e-06, -2.4293864e-06, -2.3568741e-06, -2.0801477e-06, -1.8587702e-06, -1.7074138e-06, -1.5791169e-06, -1.6891695e-06, -1.7635139e-06, -1.9566623e-06, -1.8455385e-06, -2.1080438e-06, -2.0320153e-06, -2.1665641e-06, -2.1571212e-06, -2.3643005e-06, -2.074037e-06, -2.0893195e-06, -1.9232214e-06, -1.7025658e-06, -1.6232691e-06, -1.6510243e-06, -1.7197265e-06, -1.8580166e-06, -1.9258182e-06, -2.0062691e-06, -2.0157544e-06, -2.0394525e-06, -2.0826713e-06, -1.9067459e-06, -2.0218438e-06, -1.9964327e-06, -2.1734356e-06, -2.1242189e-06, -2.4424379e-06, -2.4437198e-06, -2.6022861e-06, -2.4502697e-06, -2.6343237e-06, -2.2225432e-06, -2.3110892e-06, -2.1664638e-06, -2.1287713e-06, -2.011825e-06, -2.2808875e-06, -2.158988e-06, -2.5522458e-06, -2.556647e-06, -2.8299596e-06, -2.9620166e-06, -2.6908558e-06, -3.0163631e-06, -2.6530144e-06, -2.5642676e-06, -2.2324086e-06, -2.0825715e-06, -1.7085644e-06, -1.4025919e-06, -1.4042667e-06, -1.397307e-06, -1.4471031e-06, -1.4352464e-06, -1.6847902e-06, -1.4372545e-06, -1.6405646e-06, -1.5025385e-06, -1.58785e-06, -1.5018164e-06, -1.546755e-06, -1.5307927e-06, -1.5450872e-06, -1.762507e-06, -1.9245396e-06, -2.1342847e-06, -2.083201e-06, -2.1824533e-06, -2.2264199e-06, -1.9521925e-06, -2.1104425e-06, -2.35205e-06, -2.1372429e-06, -2.3874246e-06, -2.3111549e-06, -2.3476044e-06, -1.9828263e-06, -2.1105666e-06, -1.77767e-06, -1.8420129e-06, -1.90373e-06, -1.930438e-06, 
-2.0727705e-06, -2.1793671e-06, -2.4205829e-06, -2.1275047e-06, -2.4740434e-06, -2.0603233e-06, -2.2409819e-06, -1.7541814e-06, -2.0279909e-06, -1.730486e-06, -1.9476207e-06, -1.7534857e-06, -1.8505329e-06, -2.0095086e-06, -1.618978e-06, -1.8867553e-06, -1.9088163e-06, -1.886491e-06, -1.7468138e-06, -1.8476389e-06, -1.7557932e-06, -1.4058452e-06, -1.6067978e-06, -1.3156005e-06, -1.3659535e-06, -1.0961384e-06, -1.0153987e-06, -9.4432646e-07, -6.6454642e-07, -9.2586387e-07, -1.0025458e-06, -1.0698426e-06, -1.2805659e-06, -1.3957816e-06, -1.504749e-06, -1.3274602e-06, -1.4140738e-06, -1.3504825e-06, -1.3899331e-06, -1.3970904e-06, -1.4744283e-06, -1.4185692e-06, -1.7050143e-06, -1.5382651e-06, -1.5599202e-06, -1.5529446e-06, -1.506719e-06, -1.4330019e-06, -1.240627e-06, -1.2835575e-06, -1.1023492e-06, -1.1632735e-06, -1.1683113e-06, -1.2732747e-06, -1.219676e-06, -1.2890147e-06, -1.1440703e-06, -9.1523203e-07, -8.2035542e-07, -8.7226368e-07, -7.3633722e-07, -9.884313e-07, -8.5961273e-07, -1.2392311e-06, -1.0843573e-06, -1.0707268e-06, -9.571558e-07, -1.0067944e-06, -6.4553431e-07, -4.0506156e-07, -3.3043043e-07, -1.7361598e-07, -1.3118263e-07, -2.9468891e-07, -4.7080768e-07, -6.4225818e-07, -7.5475209e-07, -8.5102358e-07, -6.0803728e-07, -8.1677753e-07, -5.9744241e-07, -7.8274568e-07, -5.3968306e-07, -8.350585e-07, -5.4845851e-07, -5.8427222e-07, -5.1520419e-07, -4.6822083e-07, -6.0910398e-07, -4.4298342e-07, -5.6257054e-07, -2.7562129e-07, -2.5181401e-07, 3.8053095e-08, 2.4159147e-07, 3.6882074e-07, 5.1241897e-07, 4.8644598e-07, 7.2692073e-07, 4.7022181e-07, 7.0384493e-07, 6.8289479e-07, 6.4066943e-07, 8.5657662e-07, 5.8406311e-07, 6.7344028e-07, 4.1435118e-07, 2.7649325e-07, 2.3123522e-07, -1.9399705e-08, 2.6291987e-07, -3.6143527e-08, 4.1732021e-07, 3.3391364e-07, 6.4314122e-07, 7.7139665e-07, 1.1209136e-06, 1.4367421e-06, 1.6319081e-06, 1.7711259e-06, 1.8566403e-06, 1.7371454e-06, 1.4824876e-06, 1.6134811e-06, 1.0707754e-06, 1.3415844e-06, 1.1356512e-06, 
1.4106389e-06, 1.4104486e-06, 1.7408528e-06, 2.0744193e-06, 2.079919e-06, 2.1838213e-06, 2.3656145e-06, 2.1909773e-06, 2.3504607e-06, 2.2917643e-06, 2.4505978e-06, 2.0934847e-06, 2.583584e-06, 2.2871518e-06, 2.5116042e-06, 2.6234818e-06, 2.8420594e-06, 3.0011699e-06, 3.3721137e-06, 3.3177881e-06, 3.7014297e-06, 3.4988464e-06, 3.6346743e-06, 3.6031151e-06, 3.6434367e-06, 3.3825082e-06, 3.6445565e-06, 3.2970635e-06, 3.6138927e-06, 3.3753039e-06, 3.7447733e-06, 3.5673385e-06, 3.6078831e-06, 3.5609168e-06, 3.6213054e-06, 3.5038571e-06, 3.7264648e-06, 3.9751613e-06, 3.8206903e-06, 4.1254495e-06, 3.9272576e-06, 4.2386514e-06, 3.815278e-06, 4.2691643e-06, 4.0643683e-06, 4.330484e-06, 4.29042e-06, 4.6035887e-06, 4.4565016e-06, 4.583597e-06, 4.7192276e-06, 4.7442267e-06, 4.734727e-06, 5.0407053e-06, 5.3132589e-06, 5.3419609e-06, 5.7940368e-06, 6.014359e-06, 6.0453411e-06, 6.0996584e-06, 6.064599e-06, 6.1232403e-06, 5.8926808e-06, 6.0748121e-06, 5.9732831e-06, 6.0281785e-06, 5.9558067e-06, 5.8235522e-06, 5.6378228e-06, 5.4438118e-06, 5.3658419e-06, 5.3454619e-06, 5.205238e-06, 5.4038907e-06, 5.0070169e-06, 5.0996156e-06, 4.5688268e-06, 4.5768204e-06, 4.3706204e-06, 4.378131e-06, 4.3035565e-06, 4.4136234e-06, 4.4586055e-06, 4.2999055e-06, 4.2367521e-06, 4.1092479e-06, 3.6691199e-06, 3.7132548e-06, 3.3891334e-06, 3.4132172e-06, 3.3112791e-06, 3.4194779e-06, 3.3548478e-06, 3.4746562e-06, 3.0714297e-06, 3.631046e-06, 2.9155762e-06, 3.3648723e-06, 3.0564361e-06, 3.1977623e-06, 2.9422311e-06, 2.8664619e-06, 2.9553471e-06, 2.6331467e-06, 2.7458985e-06, 2.4857213e-06, 2.5358048e-06, 2.0853043e-06, 2.2717608e-06, 1.7708539e-06, 2.1185441e-06, 1.8057521e-06, 2.1431184e-06, 1.8050008e-06, 2.2162456e-06, 1.8085417e-06, 2.0822527e-06, 1.6735792e-06, 1.9627324e-06, 1.5854033e-06, 1.7829235e-06, 1.7266717e-06, 1.9015957e-06, 2.1904481e-06, 2.0235789e-06, 2.319506e-06, 2.0939101e-06, 1.9725124e-06, 1.8089637e-06, 1.6690528e-06, 1.5539039e-06, 1.5197157e-06, 1.6846562e-06, 1.5117772e-06, 
1.6974785e-06, 1.6371901e-06, 1.62875e-06, 1.3414928e-06, 1.5716781e-06, 1.1625945e-06, 1.4214429e-06, 9.0279233e-07, 1.1886867e-06, 1.0201856e-06, 1.1328523e-06, 1.1236831e-06, 1.2940038e-06, 1.5421363e-06, 1.4063536e-06, 1.8374101e-06, 1.4969428e-06, 1.8342753e-06, 1.3619477e-06, 1.6680712e-06, 1.1305091e-06, 1.3914424e-06, 1.1421031e-06, 1.0667439e-06, 9.2969523e-07, 1.0559697e-06, 9.1449135e-07, 1.0647219e-06, 1.0087653e-06, 1.4041236e-06, 1.0653469e-06, 1.5728387e-06, 1.1900372e-06, 1.396327e-06, 9.5831428e-07, 9.8273849e-07, 6.9228567e-07, 8.1312667e-07, 7.1831973e-07, 8.7380434e-07, 1.1589902e-06, 1.1559309e-06, 1.3702947e-06, 1.2662056e-06, 1.5425231e-06, 1.3134162e-06, 1.3669259e-06, 1.3616054e-06, 1.1954299e-06, 1.2969087e-06, 1.2044857e-06, 1.2129433e-06, 1.066829e-06, 1.2888265e-06, 9.1080301e-07, 9.4850302e-07, 5.628118e-07, 7.3976011e-07, 2.4812591e-07, 4.8428843e-07, 2.8183855e-07, 5.7697958e-07, 4.5050618e-07, 8.9398675e-07, 6.1835844e-07, 1.0104357e-06, 8.0297383e-07, 1.048894e-06, 8.5358493e-07, 1.0796489e-06, 8.1287327e-07, 9.8257456e-07, 5.680762e-07, 8.0814245e-07, 5.9584009e-07, 6.3797485e-07, 6.944886e-07, 9.4480563e-07, 9.518683e-07, 1.2123149e-06, 1.5256706e-06, 1.5178348e-06, 1.5515784e-06, 1.7937264e-06, 1.240253e-06, 1.3527793e-06, 1.0316758e-06, 9.3243026e-07, 9.332284e-07, 7.1470557e-07, 8.428729e-07, 6.2857809e-07, 5.4179424e-07, 4.6470621e-07, 3.6276441e-07, 2.814264e-07, 3.295977e-07, 4.0521404e-07, 3.5343026e-07, 3.6958044e-07, 5.0506593e-07, 3.1834025e-07, 3.2582213e-07, 4.3522668e-07, 2.6025184e-07, 3.8900153e-07, 1.0961338e-07, 2.5467694e-07, 1.166893e-07, 5.6772224e-08, 9.1470554e-08, 2.0167496e-07, 3.7911797e-08, 2.6796461e-07, 2.0933361e-07, 2.1677593e-07, 1.5076751e-07, 2.154547e-07, 9.4922825e-08, -1.5619153e-08, 5.6953286e-08, 1.492038e-08, -1.2234541e-07, -7.3945498e-08, -1.4066427e-07, -1.5021338e-07, -8.5765791e-08, -2.5937592e-07, -8.5784093e-08, -3.7865655e-07, -3.3939569e-07, -5.3969692e-07, -6.6329776e-07, 
-6.7695552e-07, -7.9978318e-07, -6.5715392e-07, -5.8634763e-07, -3.4631253e-07, -2.6236251e-07, -9.1816048e-09, -6.8072671e-08, 4.6884891e-08, -2.1581414e-07, -1.6978596e-07, -3.1446355e-07, -3.5427569e-07, -2.7410849e-07, -2.8441695e-07, -2.4303658e-07, -6.8944944e-08, -1.8188616e-07, 5.0232102e-08, -5.0489499e-08, -3.4827404e-08, -2.0914572e-07, -2.141703e-07]
</code></pre>
|
<python><performance><scipy><curve-fitting>
|
2024-02-15 12:28:53
| 1
| 1,215
|
Raphael
|
78,000,614
| 9,582,542
|
Press button when class name changes using Selenium and the text has an apostrophe
|
<p>This is a similar question to a prior question, but the previous method won't work if the text contains an apostrophe. I need to click this div, whose class value keeps changing.</p>
<pre><code><div class="x1g731 xwtio9w y99xum8 x168nmei x13lgxp2 x5pf9jr xo71vjh r7s2etuq x1plvlek xstyops x1iyjqo2 x2lwn1j xeuugli xdt5ytf xqjyukv x1qjc9v5 x1oa3qoh x1nhvcw1">Stop Leo's job</div>
</code></pre>
<p>Tried to run</p>
<pre><code>button = wait.until(EC.element_to_be_clickable((By.XPATH, "//div[text()='Stop Leo\'s Job']")))
button.click()
</code></pre>
<p>But this won't work; it seems the escape sequence is not working.</p>
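For reference, one standard workaround (not from the question itself): XPath 1.0 has no escape sequence inside string literals, so the Python-level `\'` changes nothing on the XPath side. Either wrap the literal in double quotes, or assemble it with `concat()` when both quote styles appear. A small hypothetical helper (`xpath_literal` is my name, not a Selenium API) that handles both cases:

```python
def xpath_literal(s):
    """Build an XPath 1.0 string literal for arbitrary text.

    XPath 1.0 has no escaping inside string literals, so text containing
    both quote styles must be stitched together with concat().
    """
    if "'" not in s:
        return "'%s'" % s
    if '"' not in s:
        return '"%s"' % s
    # Mixed quotes: split on single quotes and rejoin them via concat()
    parts = s.split("'")
    return "concat(" + ", \"'\", ".join("'%s'" % p for p in parts) + ")"

locator = "//div[text()=%s]" % xpath_literal("Stop Leo's job")
# locator can then be passed to
# EC.element_to_be_clickable((By.XPATH, locator))
```

Note also that `text()` matching is case-sensitive: the HTML shows lowercase "Stop Leo's job" while the XPath in the question uses "Stop Leo's Job", which would fail even with correct quoting.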
|
<python><selenium-webdriver>
|
2024-02-15 12:01:04
| 1
| 690
|
Leo Torres
|
78,000,602
| 567,059
|
Get the total count of 1 field of a list of namedtuple objects
|
<p>In Python, what is the easiest way to get the total count of one field from a set of named tuples?</p>
<p>In this example, I'm looking for the total count of <code>TestResults.failed</code>, which should be 4. It's doable with a <code>for</code> loop, but it feels like there could be a more efficient way.</p>
<pre class="lang-py prettyprint-override"><code>from collections import namedtuple
TestResults = namedtuple('TestResults', ['failed', 'attempted'])
test_results = [
TestResults(0, 5),
TestResults(3, 28),
TestResults(1, 7)
]
failed_count = 0
for r in test_results:
if hasattr(r, 'failed'):
failed_count += r.failed
print(failed_count)
</code></pre>
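A loop-free version of the count above can use `sum()` with a generator expression; the `hasattr` check is unnecessary, since every `TestResults` instance is guaranteed to have the field:

```python
from collections import namedtuple

TestResults = namedtuple('TestResults', ['failed', 'attempted'])

test_results = [
    TestResults(0, 5),
    TestResults(3, 28),
    TestResults(1, 7),
]

# sum() over a generator expression replaces the explicit loop.
failed_count = sum(r.failed for r in test_results)  # 4
```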
|
<python><namedtuple>
|
2024-02-15 11:58:49
| 1
| 12,277
|
David Gard
|
78,000,520
| 22,538,132
|
ERROR: failed to solve: process "/bin/sh -c python3
|
<p>I'm getting an error when I try to run a python script <code>docker build -t my_img .</code> inside a Dockerfile while building the image:</p>
<pre class="lang-dockerfile prettyprint-override"><code>FROM nvidia/cuda:11.7.1-cudnn8-runtime-ubuntu20.04
ENV DEBIAN_FRONTEND noninteractive
# setup timezone
RUN echo 'Etc/UTC' > /etc/timezone && \
ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime && \
apt-get update && \
apt-get install -q -y --no-install-recommends tzdata && \
rm -rf /var/lib/apt/lists/*
# Install necessary dependencies
RUN apt-get update && apt-get install -q -y --reinstall --no-install-recommends \
curl \
git \
lsb-release \
build-essential \
software-properties-common \
cmake \
python3-pip python3-dev python-is-python3 \
zlib1g-dev \
&& rm -rf /var/lib/apt/lists/*
# setup environment
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
## BlenderProc
WORKDIR /root
RUN git clone https://github.com/DLR-RM/BlenderProc.git
WORKDIR /root/BlenderProc
RUN pip install -e .
## Download textures
RUN echo 'import blenderproc as bproc' | cat - /root/BlenderProc/blenderproc/scripts/download_cc_textures.py > temp && mv temp /root/BlenderProc/blenderproc/scripts/download_cc_textures.py
RUN python3 /root/BlenderProc/blenderproc/scripts/download_cc_textures.py
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
</code></pre>
<p>How can I solve that problem?</p>
|
<python><docker><shell><dockerfile>
|
2024-02-15 11:47:28
| 0
| 304
|
bhomaidan90
|
78,000,451
| 353,337
|
Construct all `r`-tuples with two nonzeros
|
<p>Given an int value <code>val</code> and a tuple length <code>r</code>, I need to create all <code>r</code>-tuples that have <code>d</code> entries from <code>{+val, -val}</code> and the rest filled with zeros. With <code>d=2</code>, I can do</p>
<pre class="lang-py prettyprint-override"><code>val = 7
r = 5
out = []
for i0 in range(r - 1):
for i1 in range(i0 + 1, r):
for v0 in [+val, -val]:
for v1 in [+val, -val]:
t = [0] * r
t[i0] = v0
t[i1] = v1
print(t)
</code></pre>
<pre class="lang-py prettyprint-override"><code>[7, 7, 0, 0, 0]
[7, -7, 0, 0, 0]
[-7, 7, 0, 0, 0]
[-7, -7, 0, 0, 0]
[7, 0, 7, 0, 0]
# ...
</code></pre>
<p>but this already feels messy. It's getting worse for larger <code>d</code>. I looked at <a href="https://docs.python.org/3/library/itertools.html" rel="nofollow noreferrer">itertools</a> combinatoric iterators, but none of those seems to help.</p>
<p>Any hints?</p>
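One way to generalize this with itertools (a sketch, not the only option): pick the `d` nonzero positions with `combinations` and the sign pattern with `product`, which collapses the four nested loops into two:

```python
from itertools import combinations, product

def signed_tuples(val, r, d):
    """Yield all r-tuples with exactly d entries from {+val, -val}, rest zero."""
    for idx in combinations(range(r), d):              # which positions are nonzero
        for signs in product((+val, -val), repeat=d):  # sign pattern at those positions
            t = [0] * r
            for i, v in zip(idx, signs):
                t[i] = v
            yield tuple(t)
```

For `val=7, r=5, d=2` this gives `C(5,2) * 2**2 = 40` tuples, in the same position order as the nested loops in the question.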
|
<python><permutation><python-itertools>
|
2024-02-15 11:35:49
| 2
| 59,565
|
Nico Schlömer
|
78,000,267
| 2,473,382
|
Mock `__setitem__ ` of a subclass of dict
|
<p>I am trying to subclass <code>dict</code> and make sure that <code>__setitem__</code> is called when a new item is added.</p>
<p>I do know that <code>__setitem__</code> is called (if I add <code>1/0</code> I see it), but I cannot confirm it from my unit tests. I have this behaviour if I inherit from <code>dict</code> and <code>UserDict</code>.</p>
<p>What would be the right way to do it?</p>
<pre class="lang-py prettyprint-override"><code>from unittest import mock
from unittest.mock import MagicMock
class MyDict(dict):
def __setitem__(self, key, value):
# 1 / 0 # adding this confirms that my method is actually called
super().__setitem__(key, value)
class TestAssumptions:
def test_setitem_called1(self):
t = MyDict()
with mock.patch.object(t, "__setitem__", new=MagicMock(wraps=t.__setitem__)) as setitem:
t["a"] = 1
setitem.assert_called_once() # Expected 'mock' to have been called once. Called 0 times.
def test_setitem_called2(self):
t = MyDict()
t.__setitem__ = MagicMock(wraps=t.__setitem__)
t["a"] = 1
t.__setitem__.assert_called_once() # Expected 'mock' to have been called once. Called 0 times.
</code></pre>
<p>This might not be something smart to do as is (test behaviour, not implementation as the first comment says), but:</p>
<ul>
<li>this is a minimal example, not real life</li>
<li>not being able to do this means I do not get how patch/mock works. I want to fix this.</li>
<li>the final goal is to compare different dict subclasses. It is "known" that, for instance, inheriting from <code>UserDict</code> has different behaviour than inheriting from <code>dict</code>. This is actually what I am trying to confirm or refute.</li>
</ul>
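For what it's worth, the likely explanation is that CPython looks special methods up on the type, not the instance, so a mock attached to `t` is never consulted by `t["a"] = 1`. Patching the class instead does register the call; a minimal sketch, using `autospec` plus a `side_effect` delegating to `dict.__setitem__` so the real behaviour is preserved:

```python
from unittest import mock

class MyDict(dict):
    def __setitem__(self, key, value):
        super().__setitem__(key, value)

t = MyDict()
# Dunder lookups go through type(t), so the patch must target the class.
with mock.patch.object(MyDict, "__setitem__", autospec=True,
                       side_effect=dict.__setitem__) as setitem:
    t["a"] = 1

setitem.assert_called_once()
assert t["a"] == 1
```

This also explains why both tests in the question report zero calls: they patch the instance attribute, which the interpreter bypasses for `[]` assignment.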
|
<python><python-3.x><mocking><pytest>
|
2024-02-15 11:05:18
| 0
| 3,081
|
Guillaume
|
78,000,266
| 22,221,987
|
How to make a simple C server, which accept only one connection at the same time
|
<p>I'm trying to make a simple C server (for testing my Python clients). The main rule is that the server must accept only one connection at a time. The server should run a loop around the <code>accept</code> function, accepting or rejecting clients. Once a client is accepted, the server should continue listening for new connections and reject them.</p>
<p>My python client looks like this.</p>
<pre><code>import threading
import time
import socket
SERVER_ADDRESS = 'some ip'
SERVER_PORT = 12345
def connect_to_server():
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
client_socket.connect((SERVER_ADDRESS, SERVER_PORT))
print("Client connected successfully to the server.", threading.current_thread())
except ConnectionRefusedError:
print("Connection refused: Unable to connect to the server.")
except ConnectionAbortedError:
print("Connection aborted: Connection with the server aborted.")
except ConnectionResetError:
print("Connection reset: Connection with the server was reset.")
finally:
time.sleep(10000)
connect_to_server()
</code></pre>
<p>The problem is that with the C <code>accept</code> function I can't get <code>ConnectionRefusedError</code> on my client.<br />
I understand that the server first accepts the client and then disconnects it, but is there any way to get <code>ConnectionRefusedError</code> immediately after the server socket is closed, without sending or receiving any messages on the socket from the client?</p>
<p>I'm not good at C, so I asked ChatGPT, but its solution doesn't fit my conditions.</p>
<p>So, is it actually possible?</p>
|
<python><c><python-3.x><sockets><tcp>
|
2024-02-15 11:05:11
| 0
| 309
|
Mika
|
78,000,249
| 1,908,482
|
Uni-modal vs Multi-modal models - how to pick best fit
|
<p>My goal is to create a model of a data distribution. I want to pick the best model however I am not sure how to compare the obtained models.</p>
<p>For uni-modal distribution I used the Fitter library. For multi-modal I tried Gaussian Mixture Models. Both models output aic/bic.</p>
<p>In Fitter the information criteria are computed <a href="https://fitter.readthedocs.io/en/latest/_modules/fitter/fitter.html#Fitter" rel="nofollow noreferrer">here</a> (see the _fit_single_distribution() method).
In GMM they are computed <a href="https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/mixture/_gaussian_mixture.py#L876" rel="nofollow noreferrer">here</a>.</p>
<p>In my case the values seem to have different orders of magnitudes:</p>
<ul>
<li>uni-modal - 1500-1600</li>
<li>multi-modal (2 normal distributions) - 155000-157000</li>
</ul>
<p>The original samples size used for fitting: ~16800 samples</p>
<p><strong>Single Distribution Fit Model</strong></p>
<p><a href="https://i.sstatic.net/pZJwo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pZJwo.png" alt="Uni-modal fit" /></a></p>
<p><strong>Two Distribution Fit</strong>
<a href="https://i.sstatic.net/RadsY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RadsY.png" alt="Multi-modal fit" /></a></p>
<p><strong>AIC/BIC values (multi-modal)</strong></p>
<p>I've picked 2 components in my example.
<a href="https://i.sstatic.net/Rqpun.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rqpun.png" alt="AIC/BIC multi-modal" /></a></p>
|
<python><model-fitting><gaussian-mixture-model>
|
2024-02-15 11:01:45
| 1
| 2,390
|
Eugen
|
78,000,094
| 16,707,518
|
Combinations of all possible values from single columns
|
<p>I've tried to find this specific question answered but not had any luck. Suppose I have a pandas dataframe with the following columns:</p>
<pre><code>A B C
1 2 3
4 5 6
</code></pre>
<p>All I'm trying to achieve is a new dataframe that has all combinations of the unique values in the individual columns themselves, so:</p>
<pre><code>A B C
1 2 3
4 2 3
1 2 6
4 2 6
1 5 3
1 5 6
4 5 6
4 5 3
</code></pre>
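The expected output above is the Cartesian product of the per-column unique values, which `itertools.product` produces directly. A dependency-free sketch: the dict below stands in for `df[col].unique()` applied to each column of the example frame, and the commented lines show the assumed pandas equivalent:

```python
from itertools import product

# Stand-in for {col: df[col].unique() for col in df.columns}
uniques = {'A': [1, 4], 'B': [2, 5], 'C': [3, 6]}

rows = list(product(*uniques.values()))
# rows == [(1, 2, 3), (1, 2, 6), (1, 5, 3), (1, 5, 6),
#          (4, 2, 3), (4, 2, 6), (4, 5, 3), (4, 5, 6)]

# With pandas, the same idea becomes:
# out = pd.DataFrame(product(*(df[c].unique() for c in df.columns)),
#                    columns=df.columns)
```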
|
<python><pandas><dataframe><combinations>
|
2024-02-15 10:38:52
| 1
| 341
|
Richard Dixon
|
77,999,955
| 893,254
|
The Python jsons library produces duplicated serialized output with underscores prepended to the field names. Can this be prevented?
|
<p>The original title of this question was going to be</p>
<p><em><strong>Why does the Python <code>jsons</code> library produce serialized output with underscores prepended to the field names?</strong></em></p>
<p>However while writing the question I realized why this is happening. Now the question has become more about whether or not this can be prevented and if so how to go about doing so.</p>
<p>Here are some examples to demonstrate the problem:</p>
<h4>Example 1: Dictionary serialized as expected</h4>
<pre><code>with open('tmp.txt', 'w') as ofile:
d = {'key': 'value'}
ofile.write(jsons.dumps(d))
# produces:
# {"key": "value"}
</code></pre>
<h4>Example 2: Class (object) serialized as expected</h4>
<pre><code>class Test:
def __init__(self, value:str) -> None:
self.value = value
with open('tmp.txt', 'w') as ofile:
test = Test('value')
ofile.write(jsons.dumps(test))
# produces:
# {"value": "value"}
</code></pre>
<h4>Example 3: As soon as the concept of <code>properties</code> are introduced, we begin to see duplicated output</h4>
<pre><code>class TestProperty:
def __init__(self, value:str) -> None:
self.value = value
@property
def value(self) -> str:
return self._value
@value.setter
def value(self, the_value) -> None:
self._value = the_value
with open('tmp.txt', 'w') as ofile:
test = TestProperty('value')
ofile.write(jsons.dumps(test))
# produces:
# {"_value": "value", "value": "value"}
</code></pre>
<h4>Example 4: If we swap out the properties concept for getter/setter functions, the problem goes away</h4>
<pre><code>class TestFunction:
def __init__(self, value:str) -> None:
self.value = value
def get_value(self) -> str:
return self.value
def set_value(self, the_value) -> None:
self.value = the_value
with open('tmp.txt', 'w') as ofile:
test = TestFunction('value')
ofile.write(jsons.dumps(test))
# produces:
# {"value": "value"}
</code></pre>
<h1>What problems are created?</h1>
<p>As Example 3 shows, if properties are introduced, the data is effectively duplicated.</p>
<ul>
<li>I don't fully understand why this is happening</li>
<li>I thought <code>properties</code> should work like functions, as in Example 4, but they don't and cause a field <code>value</code> to be serialized</li>
<li>I understand why <code>_value</code> is being serialized in Example 3, it is an attribute of the class, it is the field name where we are storing the actual data values</li>
</ul>
<p>There are three potential issues with this:</p>
<ul>
<li>Harder to inspect messages when debugging. The data is duplicated making the JSON message harder to read and using up screen space (which might become important if the message to be inspected is large)</li>
<li>Slower message transfer rates (sending twice as much data over a network)</li>
<li>Wasted disk space (might become important if storing large quantities of data, because available space to store records is reduced by a factor of 2)</li>
</ul>
<p>Why is the data duplicated in Example 3 and is there a solution to this?</p>
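To see where the `_value` field comes from: the property stores its data on the instance as `_value`, while `value` itself lives on the class. A serializer that walks both the instance dict and the class-level properties therefore emits two fields with the same data. A quick stdlib-only check of that split:

```python
class TestProperty:
    def __init__(self, value: str) -> None:
        self.value = value          # goes through the setter below

    @property
    def value(self) -> str:
        return self._value

    @value.setter
    def value(self, the_value) -> None:
        self._value = the_value     # the actual storage attribute

t = TestProperty('value')

# The instance dict holds only the private storage slot...
assert vars(t) == {'_value': 'value'}
# ...while the property is a class-level attribute:
assert isinstance(type(t).value, property)
```

If I recall correctly, `jsons` exposes a `strip_privates=True` option to drop underscore-prefixed attributes from the output, which would leave only `value`; verify against the library version you use.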
|
<python><json><python-jsons>
|
2024-02-15 10:17:58
| 1
| 18,579
|
user2138149
|
77,999,789
| 10,639,382
|
Finding standard deviation and mean for Normalize function from torchvision
|
<p>I am wondering how to find the mean and standard deviation for lets say 3 images. This is to be used as inputs to the <code>Normalize</code> function in Pytorch (<code>from torchvision.transforms import Normalize</code>).</p>
<p>In the particular dataset I work with, the 3 color channels are in separate tif files. As it is just a repetition, I will show the calculations for the red band.</p>
<p>Approach 1</p>
<p>I load the tensor, which is a 1x120x120 tensor, find the mean of the red channel, and append it to a list to keep track of the means (means across the pixels) for the 3 images. Then, to find the mean of the dataset for the red channel, I just take the mean of the list (mean across the images). Computing the standard deviation would be the same process.</p>
<pre><code>def get_mean_std(root:str):
"""
Finds the mean and standard deviation of channels in a dataset
Inputs
- root : Path to Root directory of dataset
"""
rb_list = []
gb_list = []
bb_list = []
mean = 0
for data_folder in os.listdir(root)[:3]:
# Path containing to folder containing 12 tif files and a json file
data_folder_pth = os.path.join(root, data_folder)
# Path to RGB channels | rb refers to red band ...
rb_pth = os.path.join(data_folder_pth, [f for f in os.listdir(data_folder_pth) if f.endswith("B04.tif")][0])
gb_pth = os.path.join(data_folder_pth, [f for f in os.listdir(data_folder_pth) if f.endswith("B03.tif")][0])
bb_pth = os.path.join(data_folder_pth, [f for f in os.listdir(data_folder_pth) if f.endswith("B02.tif")][0])
# Open each Image and convert to tensor
rb = ToTensor()(Image.open(rb_pth)).float() #(1,120,120)
gb = ToTensor()(Image.open(gb_pth)).float() #(1,120,120)
bb = ToTensor()(Image.open(bb_pth)).float() #(1,120,120)
# Find the mean of all pixels
rb_list.append(rb.mean().item())
mean_of_3_images = np.array(rb_list).mean()
print(f"rb_list : {rb_list}")
print(f"mean of red channel : {mean_of_3_images}")
# output
>>> rb_list : [281.01361083984375, 266.2029113769531, 1977.7083740234375]
>>> mean of red channel : 841.6416320800781
</code></pre>
<p>Approach 2</p>
<p>Following this post (<a href="https://saturncloud.io/blog/how-to-normalize-image-dataset-using-pytorch/#step-2-calculate-the-mean-and-standard-deviation-of-the-dataset" rel="nofollow noreferrer">https://saturncloud.io/blog/how-to-normalize-image-dataset-using-pytorch/#step-2-calculate-the-mean-and-standard-deviation-of-the-dataset</a>), amended to work with this dataset. Here the author keeps track of the count of all the pixels and a running total, and then divides the total by the number of pixels.</p>
<p>But the results I get from the two methods are different.</p>
<pre><code>def get_mean_std(root:str):
"""
Finds the mean and standard deviation of channels in a dataset
Inputs
- root : Path to Root directory of dataset
"""
mean = 0
num_pixels = 0
for data_folder in os.listdir(root)[:3]:
# Path containing to folder containing 12 tif files and a json file
data_folder_pth = os.path.join(root, data_folder)
# Path to RGB channels | rb refers to red band ...
rb_pth = os.path.join(data_folder_pth, [f for f in os.listdir(data_folder_pth) if f.endswith("B04.tif")][0])
gb_pth = os.path.join(data_folder_pth, [f for f in os.listdir(data_folder_pth) if f.endswith("B03.tif")][0])
bb_pth = os.path.join(data_folder_pth, [f for f in os.listdir(data_folder_pth) if f.endswith("B02.tif")][0])
# Open each Image and convert to tensor
rb = ToTensor()(Image.open(rb_pth)).float() #(1,120,120)
gb = ToTensor()(Image.open(gb_pth)).float() #(1,120,120)
bb = ToTensor()(Image.open(bb_pth)).float() #(1,120,120)
batch, height, width = rb.shape #(1,120,120)
num_pixels += batch * height * width
mean += rb.mean().sum()
print(mean)
print(mean / num_pixels)
# Output
>>> tensor(2524.9248)
>>> tensor(0.0584)
</code></pre>
<p>I am wondering why the values are so different. Any idea why my method is incorrect?</p>
<p>Just to get some idea of the values for the red band inside the 3 images ...</p>
<pre><code>tensor([[[322., 275., 262., ..., 260., 225., 268.],
[283., 271., 259., ..., 277., 269., 278.],
[302., 303., 276., ..., 305., 279., 283.],
...,
[398., 341., 374., ..., 246., 273., 227.],
[383., 351., 375., ..., 266., 277., 260.],
[353., 347., 359., ..., 280., 260., 227.]]])
tensor([[[153., 214., 242., ..., 825., 575., 399.],
[206., 223., 198., ..., 766., 507., 477.],
[219., 256., 189., ..., 593., 365., 384.],
...,
[138., 255., 329., ..., 227., 289., 334.],
[174., 215., 276., ..., 402., 395., 350.],
[216., 212., 214., ..., 354., 362., 312.]]])
tensor([[[1727., 1852., 1184., ..., 3494., 3539., 3374.],
[1882., 1868., 1307., ..., 3523., 3443., 3278.],
[1716., 1975., 1919., ..., 3280., 3319., 3121.],
...,
[2199., 2214., 2269., ..., 2563., 2284., 2147.],
[2181., 2213., 2312., ..., 2686., 2668., 2737.],
[2208., 2297., 2351., ..., 2647., 2904., 3008.]]])
</code></pre>
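The discrepancy in Approach 2 appears to be a unit mismatch: the running total accumulates per-image means (`rb.mean()`), but the divisor is the total pixel count, so the result is effectively divided by the pixel count twice. Accumulating raw pixel sums (`rb.sum()`) instead makes the two approaches agree. A torch-free toy illustration:

```python
# Two toy "images" as flat pixel lists (stand-ins for the 1x120x120 tensors).
images = [[281.0, 283.0, 279.0], [1975.0, 1977.0, 1979.0]]

num_pixels = sum(len(img) for img in images)

# Buggy accumulation: adds per-image means, then divides by the pixel count.
buggy = sum(sum(img) / len(img) for img in images) / num_pixels

# Correct accumulation: adds raw pixel sums, then divides by the pixel count.
correct = sum(sum(img) for img in images) / num_pixels

# `correct` is the mean over every pixel; `buggy` is too small by roughly
# a factor of the per-image pixel count (here 3, in the question 120*120).
```

With equal-sized images, Approach 1 (mean of per-image means) matches `correct`; Approach 2 only matches once `rb.mean()` is replaced by `rb.sum()`.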
|
<python><pytorch><computer-vision><normalization>
|
2024-02-15 09:52:49
| 1
| 3,878
|
imantha
|
77,999,731
| 501,663
|
Keras/Tensorflow have slightly different outputs on different CPU architectures
|
<p>I know similar questions were asked in the past, for example <a href="https://stackoverflow.com/questions/69729241/is-it-normal-that-model-output-slightly-different-on-different-platforms">Is it normal that model output slightly different on different platforms?</a>, and there are more.</p>
<p>I am fully aware of differences originating from floating point operations on e.g. CPU vs. GPU, and of the fact that different OSs (e.g., Mac vs Linux in the linked SO question above) may have different binary libraries working in the backend, etc.</p>
<p>However, the small difference I see in my Keras model output is on 2 machines (on AWS) with the same OS, exactly the same set of Python package versions (inc. of course Tensorflow), and the only difference is the model/architecture of the CPU - one has "Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz" and the other "Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz". Simple floating point operations like <code>0.1 + 0.2 - 0.3</code> give the same (non-zero) output on both machines. Of course a deep network model involves tons of floating-point operations, so this may not be relevant.</p>
<p>My question is, can such a difference in CPU architectures explain the different output? The difference I see is in the 5th digit of the final prediction of the model, for some inputs.</p>
|
<python><tensorflow><keras>
|
2024-02-15 09:43:43
| 2
| 9,665
|
Itamar Katz
|
77,999,455
| 391,161
|
Is it possible to wait for a key release and then consume the remainder of stdin in Python?
|
<p>My goal is to have a program that waits for a key to be released, and then consumes standard input so that the next program to run in a script does not pick up the input that was emitted by the key being held down.</p>
<p>Currently, I've cobbled together a script based on a combination of these <a href="https://stackoverflow.com/a/41521576/391161">two</a> <a href="https://stackoverflow.com/a/45585167/391161">answers</a>:</p>
<h3>WaitForRelease.py</h3>
<pre><code>#!/usr/bin/env python
from pynput import keyboard
import sys
import select
def on_key_release(key):
print('Released Key %s' % key)
return False
with keyboard.Listener(on_release = on_key_release) as listener:
listener.join()
while True:
if select.select([sys.stdin,],[],[],0.5)[0]:
x = sys.stdin.read()
print("Got: ", x)
else:
break
</code></pre>
<p>I run it with the following invocation, and then hold down the <code>f</code> key for 5 seconds and release. Then I type Control-D to send EOF to <code>cat</code>.</p>
<pre><code>stty -echo; ./WaitForRelease.py; stty echo; cat
</code></pre>
<h3>Expected / Desired output</h3>
<pre><code>Released key 'f'
Got: ffffffffffffffffffffff
</code></pre>
<h3>Observed output:</h3>
<pre><code>Released Key 'f'
fffffffffffffffffffffffffffffff
</code></pre>
<p>The observed output suggests that <code>cat</code> got the extra <code>f</code>'s and Python failed to consume them.</p>
<p><strong>What is happening here and is it possible to modify this program (or its invocation) to consume any remaining input from the key press and exit?</strong></p>
<p>As an extra twist, I am trying to make this work with keys that are held down <em>before</em> the program started.</p>
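One possible fix (an assumption about the cause, not a verified diagnosis): `pynput` reads key events at the OS level, while the held-down characters sit in the terminal driver's input queue, and `sys.stdin.read()` blocks until EOF rather than draining only what is pending. On POSIX systems, `termios.tcflush` can discard that queued input in a single call; a sketch:

```python
import sys
import termios

def flush_pending_tty_input() -> bool:
    """Discard unread input queued on the controlling terminal.

    Returns True if a flush was performed, or False when stdin is not a
    tty (e.g. input is piped), in which case there is nothing to flush.
    """
    if not sys.stdin.isatty():
        return False
    termios.tcflush(sys.stdin.fileno(), termios.TCIFLUSH)
    return True
```

This would be called right after `listener.join()` in place of the `select` loop, before the script exits and `cat` starts reading.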
|
<python><linux>
|
2024-02-15 08:59:20
| 0
| 76,345
|
merlin2011
|
77,999,362
| 619,609
|
I'm unable to find a type for aws s3 event notifications in python
|
<p>I am working in python and have a code that receives s3 event notifications via an SQS. I'm trying to find existing type support (for type hints and parsing using pydantic) for events such as s3:ObjectCreated - I would assume something must exist in libraries such as types_aiobotocore_s3, but I'm unable to locate a type that describes these events.
I don't want to write it myself, because it should probably already exist.</p>
<p>Can anyone please assist?</p>
|
<python><amazon-web-services><amazon-s3><types><boto3>
|
2024-02-15 08:43:33
| 2
| 404
|
Anorflame
|
77,999,148
| 1,307,231
|
How can I make back/forward history navigation not care about cursor position in VS Code interactive windows?
|
<p>When working with Python Interactive windows in VS Code, I find the following behaviour irritating:</p>
<p>When typing commands into the field at the bottom (where it says "Type 'python' code here and press Enter to run), these commands are added to some sort of history. Using Cursor-Up and Cursor-Down I can go back and forth through the commands I typed in a session. However, changing direction requires two key presses, because the first key press will bring the cursor only to the end or beginning of the line. This typically happens when searching for previously typed commands and whizzing past them with one too many presses of Cursor-Up, then having to press Cursor-Down <em>two times</em> to get to that command. Can this be changed?</p>
|
<python><visual-studio-code><keyboard-shortcuts>
|
2024-02-15 07:57:33
| 1
| 581
|
mcenno
|
77,999,061
| 17,501,206
|
Python mmap return Invalid argument
|
<p>I tried to use <code>mmap</code> in Python</p>
<pre><code>import mmap
import sys

f = open(sys.argv[1], "rb")  # binary mode for mmap
off = int(sys.argv[2])
mm = mmap.mmap(f.fileno(), length=0, access=mmap.ACCESS_READ, offset=off)
</code></pre>
<p>When <code>off = 0</code>, that works fine.</p>
<p>But when <code>off</code> equals some other number (100, for example), I get <code>OSError: [Errno 22] Invalid argument</code>.</p>
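<p>A sketch of the alignment constraint that typically causes this (an assumption, since the traceback alone doesn't say): the offset must be a multiple of <code>mmap.ALLOCATIONGRANULARITY</code> (the page size on Linux, typically 4096), so one rounds down to the nearest multiple and skips the remainder inside the map:</p>

```python
import mmap
import os

# mmap offsets must be a multiple of mmap.ALLOCATIONGRANULARITY;
# any other value raises OSError: [Errno 22] Invalid argument.
path = "demo.bin"
with open(path, "wb") as fh:
    fh.write(b"x" * (2 * mmap.ALLOCATIONGRANULARITY + 100))

want = 100  # the byte we actually want to start reading at
base = (want // mmap.ALLOCATIONGRANULARITY) * mmap.ALLOCATIONGRANULARITY
with open(path, "rb") as fh:
    mm = mmap.mmap(fh.fileno(), length=0, access=mmap.ACCESS_READ, offset=base)
    data = mm[want - base:]  # skip the unaligned remainder inside the map
    mm.close()
os.remove(path)
```

<p>When <code>off</code> already happens to be a multiple of the granularity (0, 4096, ...) the call succeeds directly, which matches the observed behaviour.</p>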
|
<python><python-3.x><mmap>
|
2024-02-15 07:38:03
| 1
| 334
|
vtable
|
77,999,057
| 13,987,643
|
atexit registered function unable to access streamlit session state variable
|
<p>I have registered an atexit function like this:</p>
<pre><code>import atexit

import streamlit as st
from azure.search.documents.indexes import SearchIndexClient
from azure.core.credentials import AzureKeyCredential

@atexit.register
def delete_vector_index():
    admin_client = SearchIndexClient(endpoint=st.session_state.vector_store_address,
                                     index_name=st.session_state.index_name,
                                     credential=AzureKeyCredential(st.session_state.vector_store_password))
    admin_client.delete_index(st.session_state.index_name)

if "initialized" not in st.session_state:
    st.session_state.initialized = True
    st.session_state.vector_store_address: str = "<my endpoint>"
    st.session_state.vector_store_password: str = "<admin key>"
    st.session_state.index_name: str = "<index name>"
</code></pre>
<p>But every time the interpreter finishes, this is the error message I get: <code>"AttributeError: st.session_state has no attribute "vector_store_address". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization"</code></p>
<p>Why is the st.session_state variable inaccessible from the atexit function when it can be easily accessed from my other functions? Does anyone know what to do about this?</p>
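<p>One workaround sketch, likely because Streamlit's per-session state is tied to the script-run context, which no longer exists when <code>atexit</code> handlers run: snapshot the values at registration time with <code>functools.partial</code> instead of reading <code>st.session_state</code> inside the hook (the function below is a hypothetical stand-in for the real <code>SearchIndexClient(...).delete_index(...)</code> call):</p>

```python
import atexit
from functools import partial

def delete_vector_index(endpoint, index_name, key):
    # Stands in for SearchIndexClient(...).delete_index(index_name)
    return f"deleted {index_name} at {endpoint}"

# Bind the current values now; atexit will call the partial with them later,
# with no access to st.session_state required.
cleanup = partial(delete_vector_index, "<my endpoint>", "<index name>", "<admin key>")
atexit.register(cleanup)
```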
|
<python><azure><function><streamlit><atexit>
|
2024-02-15 07:37:00
| 1
| 569
|
AnonymousMe
|
77,998,762
| 19,370,273
|
Returning as many complete concatenated rows as possible up to a length limit
|
<p>Suppose I have a dataframe with a single column (elements as strings) and a certain number of rows. The length of the string of each row can be variable.</p>
<p>I want to concatenate the rows into a single string. Currently, I can do something like this (where <code>output</code> is the column with the strings I want to concatenate):</p>
<pre class="lang-py prettyprint-override"><code>toprint = '\n|-\n'.join(df['output'].str.strip('"').tolist())
</code></pre>
<p>The problem is that <code>toprint</code> cannot have more than a certain size, and I want the first k bytes of <code>toprint</code> as a result. So one way I can work around this is to do this:</p>
<pre class="lang-py prettyprint-override"><code>toprint = toprint[:2096900]
</code></pre>
<p>The issue is that the result often contains a partial row (cut off where the length limit is reached). I want to return as many <strong>complete</strong> rows as possible in the string, up to the length limit. How can I do that as efficiently as possible, without iterating through each row of the dataframe (i.e., without using something like <code>df.iterrows</code>)?</p>
<p>Rough example: the dataframe contains five rows of length 25. The maximum length of the concatenated string is 80. Currently, the concatenated string would contain the first three rows and part of the fourth row. I want the string to contain only the first three rows.</p>
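<p>A sketch of a vectorised cutoff (hypothetical small data; <code>limit</code> stands in for the 2096900-byte cap): compute the running length of the concatenation after each row and keep only the rows that fit:</p>

```python
import pandas as pd

df = pd.DataFrame({'output': ['aaaaa', 'bbbbb', 'ccccc', 'ddddd']})
sep = '\n|-\n'
limit = 14  # stand-in for the real byte cap

# Running length of the concatenation if we stop after each row;
# the last kept row carries no trailing separator, hence the subtraction.
lens = df['output'].str.len() + len(sep)
keep = lens.cumsum() - len(sep) <= limit

toprint = sep.join(df.loc[keep, 'output'])
```

<p>Note that <code>str.len()</code> counts characters; if the cap is in bytes and the text may be non-ASCII, use <code>df['output'].str.encode('utf-8').str.len()</code> instead.</p>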
|
<python><pandas>
|
2024-02-15 06:26:03
| 1
| 391
|
Leaderboard
|
77,998,694
| 3,121,975
|
Replace pandas row in one dataframe with row(s) from another dataframe
|
<p>I have two pandas DataFrames defined like so:</p>
<pre><code>left = pd.DataFrame({"A": [1, 2], "B": ["a", "b"]})
left["C"] = pd.Series(dtype="int64")
right = pd.DataFrame({"A": [1, 1], "B": ["a", "a"], "C": [10, 20]})
</code></pre>
<p>To provide a bit of context here, <code>right</code> was derived from <code>left</code> and then filtered using <code>loc</code>, with additional columns added to the result. As such, where column names match between <code>left</code> and <code>right</code>, the values should also match, but there will be additional columns in <code>right</code> that may or may not be present in <code>left</code>. Furthermore, I have many <code>right</code>s and each will be merged into <code>left</code>, producing a new value for <code>left</code>, and the code I'm trying to write here will be part of a for-loop that does this for each <code>right</code>.</p>
<p>The result I want is:</p>
<pre><code> A B C
0 1 a 10
1 1 a 20
2 2 b NaN
</code></pre>
<p>I've tried a few ways to handle this.</p>
<h3>Assigning to loc</h3>
<p>In this strategy, I filter the rows using <code>loc</code> and then assign to them:</p>
<pre><code>left.loc[left["A"] == right["A"]] = right
</code></pre>
<p>This gets me the correct columns but I'm missing a row:</p>
<pre><code> A B C
0 1 a 10
2 2 b NaN
</code></pre>
<h3>Merging</h3>
<p>In this strategy, I employ merging to get the dataset I'm interested in.</p>
<pre><code>left.merge(right, on="A", how="left")
</code></pre>
<p>This gets me all the rows I want but now the columns have to be renamed, which makes it a pain as I expect I'll have an array of <code>right</code>s and they'll all need to be merged into <code>left</code> so the renaming step would become increasingly annoying. Plus, in this case I'd then have to determine how to merge <code>left.C</code> and <code>right.C</code> if there are values from both I want to keep.</p>
<pre><code> A B_x C_x B_y C_y
0 1 a NaN a 10.0
1 1 a NaN a 20.0
2 2 b NaN NaN NaN
</code></pre>
<h3>Joining</h3>
<p>Similar to the merging strategy, this one revolves around joining the datasets together:</p>
<pre><code>left.join(right, on="A", how="left", rsuffix="R")
</code></pre>
<p>This is actually the worst of both solutions to some extent because now I don't have all the rows I want and the column names are not correct, leaving me with the same column-merging problem I had when using <code>merge</code>.</p>
<pre><code> A B C AR BR CR
0 1 a NaN 1.0 a 20.0
1 2 b NaN NaN NaN NaN
</code></pre>
<p>Does anyone know how to get this output? It seems like there should be a simple solution here, but as I can't find it, perhaps there's a more complex operation I need to do?</p>
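<p>One sketch that yields exactly the desired frame, under the stated guarantee that shared columns agree between <code>left</code> and <code>right</code>: drop the matched rows from <code>left</code> and concatenate <code>right</code> in their place, avoiding the suffix/renaming problem entirely:</p>

```python
import pandas as pd

left = pd.DataFrame({"A": [1, 2], "B": ["a", "b"]})
left["C"] = pd.Series(dtype="int64")  # aligns to NaN, so the column becomes float
right = pd.DataFrame({"A": [1, 1], "B": ["a", "a"], "C": [10, 20]})

# Remove the rows being replaced, then append the replacements.
out = pd.concat([left[~left["A"].isin(right["A"])], right])
out = out.sort_values("A", kind="stable", ignore_index=True)
```

<p>Because each <code>right</code> simply replaces its matching rows, this step can run unchanged inside the for-loop over all the <code>right</code>s.</p>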
|
<python><pandas>
|
2024-02-15 06:05:02
| 2
| 8,192
|
Woody1193
|
77,998,622
| 5,640,500
|
Do gca().lines and legend.get_lines() return the lines in the same order?
|
<p>Do <code>gca().lines</code> and <code>legend.get_lines()</code> return the lines in the same corresponding order?</p>
<p>If not, is there a way to get corresponding legend lines from <code>gca().lines</code>?</p>
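<p>For what it's worth, the legend builds its entries from the axes' labeled artists in creation order, so the two should correspond as long as every line has a label; a quick headless sanity check:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], label="first")
ax.plot([0, 1], [1, 0], label="second")
leg = ax.legend()

# Labels of the axes' lines, and the legend's entry texts, in order
axes_labels = [ln.get_label() for ln in ax.lines]
legend_labels = [t.get_text() for t in leg.get_texts()]
```

<p>If some lines are unlabeled (or labels start with <code>_</code>), they appear in <code>ax.lines</code> but not in the legend, so matching by label text as above is the safer way to pair them.</p>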
|
<python><matplotlib><legend>
|
2024-02-15 05:45:36
| 1
| 651
|
Tims
|
77,998,568
| 13,634,472
|
langchain : ModuleNotFoundError: No module named 'langchain_community'
|
<p>Trying to execute this code:</p>
<p><code>from langchain_community.vectorstores import FAISS</code></p>
<p>This is the error it shows:
<strong>ModuleNotFoundError: No module named 'langchain_community'</strong></p>
<p>even though I have already executed the command:</p>
<blockquote>
<p><strong>pip install langchain-community</strong></p>
</blockquote>
|
<python><langchain>
|
2024-02-15 05:24:26
| 4
| 1,679
|
Nikita Malviya
|
77,998,496
| 12,149,817
|
How to build shell plots based on pandas dataframe?
|
<p>I have the following dataframe of numbers of interactions between ligand-receptor pairs:</p>
<pre><code>d = {'ligand': ['B cells','CAFs','Malignant cells','T cells','TAMs','TAMs','TAMs','TAMs','TAMs','TAMs','TAMs','TECs','unclassified'],
'receptor': ['TAMs','TAMs','TAMs','TAMs','B cells','CAFs','Malignant cells','T cells','TAMs','TECs','unclassified','TAMs','TAMs'],
'interactions': [18, 29, 89, 22, 17, 12, 48, 34, 56, 27, 14, 53, 24]}
df = pd.DataFrame(d)
</code></pre>
<p>I need to visualize the number of interactions between ligands and receptors. I am looking for something like this:
<a href="https://i.sstatic.net/RuGeH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RuGeH.png" alt="enter image description here" /></a></p>
<p>The description says these are shell plots. But I can't find that type of visualization anywhere. How to get this type of visualization using python modules?</p>
|
<python><pandas><matplotlib><visualization>
|
2024-02-15 04:56:50
| 1
| 720
|
Yulia Kentieva
|
77,998,370
| 11,330,010
|
Plotly: Add both primary and secondary axis for same line
|
<p>I am working on a school project. I have created the following callback to update a graph. The graph simply plots :</p>
<ol>
<li>a line showing the values of the variable Wealth,</li>
<li>a horizontal line at the starting value of Wealth,</li>
<li>fills the area between the first and the second line.</li>
</ol>
<p>I would like to modify the code to add ticks on the vertical axis, showing the difference between the value of Wealth and its starting value. I don't want to add a 3rd line; I just want to add ticks that essentially show the profit. I will show my attempt below this code snippet.</p>
<pre><code># Callbacks to update graphs based on the original code
@app.callback(
    Output('wealth-over-time', 'figure'),
    Input('wealth-time-range-slider', 'value'))
def update_wealth_over_time(selected_range):
    filtered_df = df[df['Day'].between(selected_range[0], selected_range[1])]

    # Create a new DataFrame for the fill area
    fill_area_df = pd.DataFrame({
        'Day': filtered_df['Day'],
        'Starting Wealth': starting_wealth,
        'Wealth ($)': filtered_df['Wealth ($)']
    })

    fig = go.Figure()

    # Add fill area between starting wealth and Wealth ($)
    fig.add_trace(go.Scatter(
        x=pd.concat([fill_area_df['Day'], fill_area_df['Day'][::-1]]),  # X-values concatenated with reverse for seamless fill
        y=pd.concat([fill_area_df['Starting Wealth'], fill_area_df['Wealth ($)'][::-1]]),  # Y-values for starting wealth and Wealth ($) with reverse for fill
        fill='toself',
        fillcolor='rgba(135, 206, 250, 0.5)',  # Light blue fill with transparency
        line=dict(color='rgba(255,255,255,0)'),  # Invisible line around the fill area
        showlegend=False,
        name='Fill Area',
    ))

    # Add Wealth ($) line on top of the fill area
    fig.add_trace(go.Scatter(
        x=filtered_df['Day'],
        y=filtered_df['Wealth ($)'],
        mode='lines+markers',
        showlegend=False,
        name='Wealth ($)',
        line=dict(color='DeepSkyBlue'),
    ))

    # Add dashed horizontal line for starting wealth
    fig.add_shape(
        type='line',
        x0=filtered_df['Day'].min(),
        x1=filtered_df['Day'].max(),
        y0=starting_wealth,
        y1=starting_wealth,
        line=dict(color='Gray', dash='dash', width=2),
    )

    fig.update_layout(
        title={'text': "Wealth", 'font': {'color': 'black'}},
        plot_bgcolor='white',
        xaxis=dict(
            title='Day',
            color='black',
            showgrid=True,
            gridcolor='lightgrey',
            gridwidth=1,
            showline=True,
            linewidth=2,
            linecolor='black',
            mirror=True
        ),
        yaxis=dict(
            title='Wealth ($)',
            color='black',
            showgrid=True,
            gridcolor='lightgrey',
            gridwidth=1,
            showline=True,
            linewidth=2,
            linecolor='black',
            mirror=True
        ),
        xaxis_range=[filtered_df['Day'].min(), filtered_df['Day'].max()]
    )

    fig.add_shape(
        go.layout.Shape(
            type="line",
            x0=min(filtered_df['Day']),
            y0=starting_wealth,
            x1=max(filtered_df['Day']),
            y1=starting_wealth,
            line=dict(
                color="black",
                width=2,
                dash="dash",
            ),
        )
    )

    return fig
</code></pre>
<p>This generates the following image:</p>
<p><a href="https://i.sstatic.net/kkJcI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kkJcI.png" alt="enter image description here" /></a></p>
<p>I have attempted to do this modification following the steps illustrated <a href="https://plotly.com/python/multiple-axes/" rel="nofollow noreferrer">here</a>. In particular, I made the following changes:</p>
<ol>
<li><p><code>from plotly.subplots import make_subplots</code></p>
</li>
<li><p>Changed <code>fig = go.Figure()</code> to <code>fig = make_subplots(specs=[[{"secondary_y": True}]])</code></p>
</li>
<li><p>Added <code>secondary_y=True</code> as follows:</p>
</li>
</ol>
<pre><code># Add Wealth ($) line on top of the fill area
fig.add_trace(go.Scatter(
    x=filtered_df['Day'],
    y=filtered_df['Wealth ($)'],
    mode='lines+markers',
    showlegend=False,
    name='Wealth ($)',
    line=dict(color='DeepSkyBlue'),
), secondary_y=True)
</code></pre>
<p>However, when I run the code,</p>
<p><strong>1) the blue area is no longer aligned properly</strong>:</p>
<p><a href="https://i.sstatic.net/bkFcH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bkFcH.png" alt="enter image description here" /></a></p>
<p><strong>2) I do not know how to show the correct ticks</strong> (the difference of the ticks on the left and the starting value of Wealth).</p>
<p>I would really appreciate some help with this.</p>
|
<python><plotly><plotly-dash>
|
2024-02-15 03:59:30
| 1
| 407
|
NC520
|
77,998,349
| 1,437,138
|
Execute background process in python
|
<p>In Python, I am trying to run a task (storing a huge amount of data to a db) as a background process, without delaying the response of the currently executing function. I tried the code below, but it does not work as I expect. I will explain my real problem; an alternate solution for the real problem would also be very helpful.</p>
<p>My actual setup uses Azure Cosmos DB and an Azure Function app. When I process a huge amount of data in Cosmos DB, I get a Batch Write Error, so I tried inserting the data batch by batch with a 2-second interval. That pushes the whole execution past 2 minutes, and my function app then fails due to the 2-minute timeout. That's why I tried to run the db update in a separate process. But if anything fails in that independent execution, I need to send an email saying the db update failed.</p>
<pre><code>import asyncio

async def import_and_background_task(myobj):
    asyncio.ensure_future(myobj.storeDataToDb())

def exectuteReport():
    myobj.readInputFile()
    myobj.readDatafromDb()
    myobj.generateReport()
    # myobj.storeDataToDb()
    asyncio.run(import_and_background_task(myobj))
    print('its done')
    return {"success": True}
</code></pre>
<p>Sample definition of <code>storeDataToDb</code> function is,</p>
<pre><code>import time

def storeDataToDb():
    count = 0
    while count < 5:
        print('storing set {} of data to db'.format(count))
        count += 1
        time.sleep(2)  # Delays for 2 seconds. You can adjust as needed.
</code></pre>
<p>I am expecting it to return <code>{"success": True}</code> immediately, without waiting for <code>storing set {} of data to db</code> to print. But it waits for all the work in <code>storeDataToDb</code> to complete. How can I run this <code>storeDataToDb</code> function as a background process?</p>
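<p>One likely reason the snippet blocks: <code>storeDataToDb</code> is a regular function, so <code>myobj.storeDataToDb()</code> runs to completion before <code>asyncio.ensure_future</code> even receives its result. A thread-based sketch (the email is a hypothetical stand-in, shown as an <code>on_error</code> callback; the short sleeps stand in for the real 2-second batch delays):</p>

```python
import threading
import time

def store_data_to_db(results):
    for count in range(5):
        results.append(count)          # stands in for one batch write
        time.sleep(0.05)

def run_in_background(work, on_error, *args):
    def target():
        try:
            work(*args)
        except Exception as exc:       # report failures, e.g. send an email
            on_error(exc)
    t = threading.Thread(target=target)
    t.start()
    return t

results = []
t = run_in_background(store_data_to_db, print, results)
response = {"success": True}           # available immediately, before the writes finish
print(response)
t.join()                               # only so this demo waits; a web handler would not
```

<p>Caveat: on Azure Functions the host may tear down background threads once the handler returns, so a queue-triggered function or Durable Functions is the more robust pattern for work that must outlive the response.</p>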
|
<python><azure-functions><multiprocessing><azure-cosmosdb><python-asyncio>
|
2024-02-15 03:52:36
| 1
| 2,827
|
Smith Dwayne
|
77,998,316
| 9,582,542
|
Press button when class name changes using Selenium
|
<p>Using Selenium for Edge, what would the proper syntax be to click the div below? The class name changes, but the "Run Job" text remains the same. How can I click on this using the text instead of the class?</p>
<pre><code><div class="x9f619 xjbqb8w x78zum5 x168nmei x13lgxp2 x5pf9jr xo71vjh x1n2onr6 x1plvlek xryxfnj x1iyjqo2 x2lwn1j xeuugli xdt5ytf xqjyukv x1qjc9v5 x1oa3qoh x1nhvcw1">Run Job</div>
</code></pre>
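<p>With Selenium, the usual locator for this is an XPath on the text, e.g. <code>driver.find_element(By.XPATH, "//div[text()='Run Job']")</code>, or <code>normalize-space(.)='Run Job'</code> if whitespace varies (an assumption about the surrounding page, since only the one div is shown). The same idea can be sanity-checked without a browser using the stdlib's limited XPath support, which spells the text predicate <code>[.='...']</code>:</p>

```python
import xml.etree.ElementTree as ET

# Selenium usage (assumed, not runnable here without a browser):
#   from selenium.webdriver.common.by import By
#   driver.find_element(By.XPATH, "//div[text()='Run Job']").click()

# Stdlib check that a text-based selector singles out the right div:
snippet = """<root>
  <div class="x9f619 xjbqb8w x78zum5">Run Job</div>
  <div class="x9f619 other">Cancel</div>
</root>"""
root = ET.fromstring(snippet)
hits = root.findall(".//div[.='Run Job']")
```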
|
<python><selenium-webdriver>
|
2024-02-15 03:38:45
| 1
| 690
|
Leo Torres
|
77,998,295
| 5,896,591
|
python on Linux: os.pipe() with cumulative byte counter?
|
<p>Is it possible to get a cumulative count of bytes written to an <code>os.pipe()</code>? I tried <code>os.fdopen(...).tell()</code> but got <code>IOError: [Errno 29] Illegal seek</code>. Is there some other way to wrap the <code>fd</code> to get a cumulative byte count?</p>
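<p>A sketch of one workaround: a pipe is not seekable (hence the <code>Illegal seek</code>), so wrap the write end and count bytes yourself:</p>

```python
import os

class CountingWriter:
    """Wrap the write end of a pipe and keep a running byte count,
    since pipes are not seekable and tell() cannot work on them."""
    def __init__(self, fd):
        self.fd = fd
        self.count = 0

    def write(self, data):
        n = os.write(self.fd, data)  # os.write may write fewer bytes; count what it reports
        self.count += n
        return n

r, w = os.pipe()
writer = CountingWriter(w)
writer.write(b"hello")
writer.write(b" world")
os.close(w)
received = os.read(r, 64)
os.close(r)
```

<p>Anything that writes through the wrapper is counted; code that writes to the raw fd directly would bypass the counter, so the wrapper has to be the single write path.</p>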
|
<python><linux><pipe>
|
2024-02-15 03:28:59
| 1
| 4,630
|
personal_cloud
|
77,998,038
| 6,394,617
|
Python slice assignment should be sequence of same type?
|
<p>In the <a href="https://docs.python.org/3/reference/simple_stmts.html#assignment-statements" rel="nofollow noreferrer">documentation</a>, it states "If the target is a slicing: The primary expression in the reference is evaluated. It should yield a mutable sequence object (such as a list). The assigned object should be a sequence object of the same type."</p>
<p>But when I try sequence objects of different types, it seems to work:</p>
<pre><code>x = [1,2,3,4,5]
x[1:3] = (8,9)
print(x) # [1, 8, 9, 4, 5]
x[1:3] = "ab"
print(x) # [1, 'a', 'b', 4, 5]
</code></pre>
<p>Is this a documentation bug, or just a style suggestion, or is there something that I am not understanding correctly?</p>
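<p>For what it's worth, CPython accepts any iterable on the right-hand side of a (non-extended) slice assignment, not only a sequence of the same type, which suggests the wording is descriptive rather than a hard requirement; a quick check:</p>

```python
x = [1, 2, 3, 4, 5]
x[1:3] = range(8, 10)          # a range object, not a list
x[1:3] = (c for c in "ab")     # even a one-shot generator
```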
|
<python><slice><variable-assignment>
|
2024-02-15 01:24:02
| 0
| 913
|
Joe
|
77,997,955
| 12,358,733
|
Using Google Application Default Credentials (ADC) with gcloud-aio Storage in Python
|
<p>I'm using <a href="https://github.com/talkiq/gcloud-aio/tree/master/storage" rel="nofollow noreferrer">gcloud-aio/storage</a> to access GCS buckets, with the authentication via service account key file (JSON file). Here's my working Python code:</p>
<pre><code>from os import path

from gcloud.aio.auth import Token
from gcloud.aio.storage import Storage

SCOPES = ["https://www.googleapis.com/auth/cloud-platform.read-only"]

pwd = path.realpath(path.dirname(__file__))
service_file = path.join(pwd, "../../my_key.json")
token = Token(service_file=service_file, scopes=SCOPES)

async with Storage(token=token) as storage:
    _ = await storage.list_objects(bucket_name)
</code></pre>
<p>I'd like to instead authenticate via Application Default Credentials, and typically use this code to get an access token:</p>
<pre><code>from google.auth import default
from google.auth.transport.requests import Request
credentials, project_id = default(scopes=SCOPES)
_ = Request()
credentials.refresh(_)
access_token = credentials.token
</code></pre>
<p>But can't figure out how (or if) I can pass this type of token to aio storage.</p>
|
<python><google-cloud-platform><google-cloud-storage><google-oauth><aio>
|
2024-02-15 00:49:21
| 0
| 931
|
John Heyer
|
77,997,732
| 9,842,472
|
Decorate a decorator: how to assign functionality to a decorator programmatically
|
<p>I'm building a class where I want some methods to be registered under a set of categories. To do so, I build a set of decorator functions, one for each category, and then apply them, roughly following <a href="https://stackoverflow.com/questions/3054372/auto-register-class-methods-using-decorator">this answer</a>. That leads me to defining my category decorators like so:</p>
<pre class="lang-py prettyprint-override"><code># The categories

def bar(func):
    func._bar = None
    return func

def foo(func):
    func._foo = None
    return func
</code></pre>
<p>which are then used inside my class to register some functions like so:</p>
<pre class="lang-py prettyprint-override"><code>@class_register_categories
class MyClass(object):
    @foo
    def my_method(self, arg1, arg2):
        pass

    @foo
    @bar
    def my_other_method(self, arg1, arg2):
        pass
</code></pre>
<p>This works fantastically so far, with some extra code modified from the linked answer. I would now like to make it a tiny bit easier to add new categories. Since all the decorator functions I define have the same behaviour (grab the input function, add a new attribute to it with a dummy value, and return it), I thought I could abstract that into a decorator and <em>decorate the decorators</em>. I tried to do that like so:</p>
<pre class="lang-py prettyprint-override"><code>def category(cat_name_func):
    cat_name = cat_name_func.__name__
    def wrapper(func):
        setattr(func, cat_name, None)
        return func
    return wrapper

@category
def foo(func):
    pass

@category
def bar(func):
    pass
</code></pre>
<p>But this doesn't work. Doing</p>
<pre class="lang-py prettyprint-override"><code>@foo
def my_method():
    pass
</code></pre>
<p>does not add the required attribute to the decorated function.</p>
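<p>A sketch of a likely fix (an assumption based on the registry code in this question): the factory stores the attribute under the bare name <code>foo</code>, while registration looks for the underscore-prefixed <code>_foo</code>; prefixing the name restores the lookup:</p>

```python
def category(cat_name_func):
    # Store under '_foo' / '_bar', matching hasattr(method, '_' + category)
    cat_name = '_' + cat_name_func.__name__
    def wrapper(func):
        setattr(func, cat_name, None)
        return func
    return wrapper

@category
def foo(func):
    pass

@foo
def my_method():
    pass
```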
<hr />
<p>For completeness: full working code. Above I extracted the bits I want to modify from here.</p>
<pre class="lang-py prettyprint-override"><code>def class_register_categories(cls):
    categories = 'foo', 'bar'
    for category in categories:
        setattr(cls, category, Category())
    for methodname in dir(cls):
        method = getattr(cls, methodname)
        for category in categories:
            if hasattr(method, '_'+category):
                getattr(cls, category).append(methodname)
    return cls

class Category(list):
    def __repr__(self):
        string = '\n'.join(self)
        return string

def foo(func):
    func._foo = None
    return func

def bar(func):
    func._bar = None
    return func

@class_register_categories
class MyClass(object):
    @foo
    def my_method(self, arg1, arg2):
        pass

    @foo
    @bar
    def my_other_method(self, arg1, arg2):
        pass

    def yet_another_method(self):
        pass

myclass = MyClass()
print(myclass.foo)
# my_method
# my_other_method
</code></pre>
|
<python><python-3.x>
|
2024-02-14 23:27:59
| 1
| 527
|
Puff
|
77,997,430
| 866,262
|
ImportError: cannot import name 'sync_playwright' from 'playwright.async_api'
|
<p>I initially did</p>
<pre><code>pip3 install playwright
playwright install
</code></pre>
<p>Then I tried uninstalling and reinstalling with <code>pip3 install pytest-playwright</code>. I looked at other StackOverflow questions and none of them worked for me.</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/Users/me/Desktop/userinterview-python/userinterview.py", line 1, in <module>
from playwright.async_api import sync_playwright, Playwright
ImportError: cannot import name 'sync_playwright' from 'playwright.async_api' (/opt/homebrew/lib/python3.11/site-packages/playwright/async_api/__init__.py)
</code></pre>
<hr />
<pre><code>from playwright.async_api import sync_playwright, Playwright

with sync_playwright as p:
    browser = p.chromium.launch(headless=False, slow_mo=50)
    page = browser.new_page()
    page.goto('google.com')
</code></pre>
|
<python><python-3.x><playwright><playwright-python>
|
2024-02-14 22:00:49
| 1
| 3,315
|
stumped
|
77,997,390
| 1,141,818
|
Sqlalchemy - Filtering parent and children objects at the same time
|
<p>I have this one-to-many design:</p>
<pre><code>class Parent(Base):
    __tablename__ = "parent"

    id = Column(Integer, primary_key=True)
    parent_number = Column(String)
    start_date = Column(DateTime, primary_key=True)
    end_date = Column(DateTime)

    children = relationship("Child", lazy="selectin")

class Child(Base):
    __tablename__ = "child"

    id = Column(Integer, primary_key=True)
    start_date = Column(DateTime, primary_key=True)
    end_date = Column(DateTime)
    parent_id = Column(Integer, ForeignKey("parent.id"))
</code></pre>
<p>Both models have a <code>start_date</code> and <code>end_date</code> to register the validity date range of each row resulting in a compound primary key.</p>
<p>What I want is to construct the <code>Parent</code> object that satisfies criteria on <code>start_date</code> and <code>end_date</code>, along with the children array.</p>
<p>I've tried a simple query like</p>
<pre><code>(session.query(Parent)
    .filter(Parent.parent_number == '1234')
    .filter((Parent.start_date > given_date) & (Parent.end_date < given_date)))
</code></pre>
<p>which results in two queries</p>
<pre><code>SELECT *
FROM Parent
WHERE parent_number = '1234' AND start_date > '2024-02-14' AND end_date < '2024-02-14'
SELECT Parent.id, Parent.start_date, Child.*
FROM Parent
JOIN Child ON Child.parent_id = Parent.id
WHERE (Parent.id, Parent.start_date) IN ((id_of_previous_parent, start_date_of_previous_parent))
</code></pre>
<p>The first query is fine, but for the second one, I'd like something like</p>
<pre><code>SELECT Parent.id, Parent.start_date, Child.*
FROM Parent
JOIN Child ON Child.parent_id = Parent.id
WHERE (Parent.id, Parent.start_date) IN ((id_of_previous_parent, start_date_of_previous_parent)) AND Child.start_date > '2024-02-14' AND Child.end_date < '2024-02-14'
</code></pre>
<p>I've tried the naive query</p>
<pre><code>(session.query(Parent)
    .filter(Parent.parent_number == '1234')
    .filter((Parent.start_date > given_date) & (Parent.end_date < given_date))
    .filter((Child.start_date > given_date) & (Child.end_date < given_date)))
</code></pre>
<p>but it only changed the first query.</p>
<p>I also tried to join in the query like</p>
<pre><code>(session.query(Parent)
    .join(Child)
    .filter(Parent.parent_number == '1234')
    .filter((Parent.start_date > given_date) & (Parent.end_date < given_date))
    .filter((Child.start_date > given_date) & (Child.end_date < given_date)))
</code></pre>
<p>with no result.</p>
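<p>A sketch of one approach using <code>contains_eager</code>, which tells the ORM to populate <code>children</code> from the joined, filtered rows rather than re-loading them (assumptions: SQLAlchemy 1.4+, an in-memory SQLite database, and "valid at a date" meaning <code>start_date &lt; given_date &lt; end_date</code>, i.e. the comparison directions flipped relative to the snippets above):</p>

```python
from datetime import datetime
from sqlalchemy import Column, Integer, String, DateTime, ForeignKey, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker, contains_eager

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    parent_number = Column(String)
    start_date = Column(DateTime, primary_key=True)
    end_date = Column(DateTime)
    children = relationship("Child")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    start_date = Column(DateTime, primary_key=True)
    end_date = Column(DateTime)
    parent_id = Column(Integer, ForeignKey("parent.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

given_date = datetime(2024, 2, 14)
session.add(Parent(id=1, parent_number="1234",
                   start_date=datetime(2024, 1, 1), end_date=datetime(2024, 12, 31)))
session.add_all([
    Child(id=1, start_date=datetime(2024, 1, 1), end_date=datetime(2024, 12, 31), parent_id=1),  # in range
    Child(id=2, start_date=datetime(2025, 1, 1), end_date=datetime(2025, 12, 31), parent_id=1),  # out of range
])
session.commit()

result = (
    session.query(Parent)
    .join(Parent.children)
    .options(contains_eager(Parent.children))   # fill .children from the joined rows
    .filter(Parent.parent_number == "1234")
    .filter((Parent.start_date < given_date) & (Parent.end_date > given_date))
    .filter((Child.start_date < given_date) & (Child.end_date > given_date))
    .populate_existing()
    .all()
)
```

<p>Without <code>contains_eager</code>, the relationship's own loader (e.g. <code>lazy="selectin"</code>) runs unfiltered, which is why the child criteria only affected the first query.</p>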
|
<python><sqlalchemy>
|
2024-02-14 21:51:09
| 1
| 3,575
|
GuillaumeA
|
77,997,297
| 4,355,878
|
What does this mean? "Be aware, overflowing tokens are not returned for the setting you have chosen"
|
<p>I don't understand if this is an error or warning:</p>
<blockquote>
<p>Be aware, that overflowing tokens are not returned for the setting you have chosen, i.e. sequence pairs with the 'longest_first' truncation strategy. So the returned list will always be empty even if some tokens have been removed.</p>
</blockquote>
<p>This is a minimal example code:</p>
<pre><code>import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import BertTokenizer, BertForSequenceClassification

# Set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load tokenizer and model
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased").to(device)

# Load dataset
dataset = load_dataset("glue", "mnli")
val_loader = DataLoader(dataset["validation_matched"], batch_size=64)

# Define loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Evaluation loop
for epoch in range(10):
    model.eval()
    total_correct = 0
    with torch.no_grad():
        for batch in val_loader:
            inputs = tokenizer(batch["premise"], batch["hypothesis"], padding=True, truncation=True, max_length=128, return_tensors="pt")
            inputs = {key: value.to(device) for key, value in inputs.items()}
            labels = batch["label"].to(device)
            outputs = model(**inputs)
            loss = criterion(outputs.logits, labels)
            total_correct += (outputs.logits.argmax(dim=1) == labels).sum().item()
    val_accuracy = total_correct / len(dataset["validation_matched"])
    print(f"Epoch {epoch + 1}: Val Acc: {val_accuracy:.4f}")
</code></pre>
<p>Do I need to change something?</p>
|
<python><huggingface-transformers>
|
2024-02-14 21:24:29
| 0
| 1,533
|
j35t3r
|
77,997,127
| 12,178,630
|
Add padding to CubicSpline interpolation curve
|
<p>I have a set of points and pass a curve through them using <code>scipy</code>'s <code>CubicSpline</code>. I would really like to know how to add padding on the outside, so that the curve follows the flow of the points but they always lie inside it by some margin, as shown in this image, where the blue line is what I have and the red curve is what I need. <a href="https://i.sstatic.net/stl2a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/stl2a.png" alt="interpolated curve" /></a></p>
<pre><code>def visualize_trajectory(points):
    points = np.array(points)
    distance = np.cumsum(np.sqrt(np.sum(np.diff(points, axis=0)**2, axis=1)))
    distance = np.insert(distance, 0, 0) / distance[-1]

    interpolation_methods = ['cubic']
    alpha = np.linspace(0, 1, 25)

    interpolated_points = {}
    for method in interpolation_methods:
        interpolator = CubicSpline(distance, points, axis=0)
        interpolated_points[method] = interpolator(alpha)

    plt.figure(figsize=(7, 7))
    plt.gca().invert_yaxis()
    for method_name, curve in interpolated_points.items():
        plt.plot(*curve.T, '-', label=method_name)

    plt.plot(*points.T, 'ok', label='original points')
    plt.axis('equal')
    plt.legend()
    plt.xlabel('x')
    plt.ylabel('y')
    # plt.show(block=False)
    plt.show()
    plt.close()
</code></pre>
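<p>A sketch of one way to get such an offset ("padded") curve, assuming a constant margin measured along the curve normal (the hypothetical <code>pad</code> value sets the margin; the sign of the normal picks the side, and a closed outline would need the outward side chosen per point):</p>

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical sample points standing in for the real trajectory
points = np.array([[0, 0], [1, 2], [2, 3], [4, 3], [5, 1]], dtype=float)
distance = np.cumsum(np.r_[0, np.linalg.norm(np.diff(points, axis=0), axis=1)])
distance /= distance[-1]

spline = CubicSpline(distance, points, axis=0)
t = np.linspace(0, 1, 100)
curve = spline(t)

tangent = spline(t, 1)                                     # derivative of the spline
tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)  # unit tangent
normal = np.column_stack([-tangent[:, 1], tangent[:, 0]])  # 90-degree rotation

pad = 0.3                                                  # margin between the two curves
offset_curve = curve + pad * normal
```

<p>For large <code>pad</code> values relative to the curvature the offset curve can self-intersect, in which case a proper polygon-offsetting library (e.g. <code>shapely</code>'s <code>buffer</code>) is the safer route.</p>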
|
<python><scipy><spline><spatial-interpolation>
|
2024-02-14 20:44:29
| 0
| 314
|
Josh
|
77,997,083
| 4,159,833
|
Web scraping a website when page source doesn't contain what I see on browser
|
<p>Suppose I want to scrape the data (marathon running time) in this website: <a href="https://www.valenciaciudaddelrunning.com/en/marathon/2021-marathon-ranking/" rel="nofollow noreferrer">https://www.valenciaciudaddelrunning.com/en/marathon/2021-marathon-ranking/</a></p>
<p>In Google Chrome, when I right-click and select 'Inspect' or 'View page source', I don't see the actual data embedded in the page source (e.g. I can see the names of athletes, the split times, etc. in the browser, but the source code doesn't contain any of those). I have previously scraped other websites where the data I needed was embedded in the page source, and using the <code>requests</code> and <code>bs4</code> packages in Python I managed to extract the data I wanted. For the Valencia marathon URL posted above, is it possible to do web scraping, and if so how?</p>
<p>From some quick Google searching, it looks like some webpages are dynamically loaded with JavaScript (correct me if I'm wrong). Is that the case when a website appears interactive, or when I don't see the browser output in the source code? Is a package like <code>selenium</code> useful for the Valencia marathon URL above? I know basically nothing about how websites are rendered, so if someone can direct me to some useful resources that would be great.</p>
<p><strong>Update</strong>: so apparently the URL provided above is not the actual website containing the data as they are in an <code><iframe></code>. This URL should be closer to the actual source of the data:
<a href="https://resultados.valenciaciudaddelrunning.com/en/2021/maraton-clasificados.php?y=2021" rel="nofollow noreferrer">https://resultados.valenciaciudaddelrunning.com/en/2021/maraton-clasificados.php?y=2021</a>
But it looks like the leaderboard page, and the result for each athlete when I click on their names, still don't show the actual data when I inspect the source.</p>
|
<python><html><selenium-webdriver><web-scraping><beautifulsoup>
|
2024-02-14 20:33:26
| 2
| 3,068
|
Physicist
|
77,996,844
| 13,494,917
|
How to explode a pandas dataframe that has nulls in some rows, but populated in others
|
<p>So I have many dataframes coming in that need to be exploded. They look something like this:</p>
<pre><code>df = pd.DataFrame({'A': [1, [11,22], [111,222]],
'B': [2, [33,44], float('nan')],
'C': [3, [55,66], [333,444]],
'D': [4, [77,88], float('nan')]
})
</code></pre>
<pre><code>+-----------+---------+-----------+---------+
| A | B | C | D |
+-----------+---------+-----------+---------+
| 1 | 2 | 3 | 4 |
+-----------+---------+-----------+---------+
| [11,22] | [33,44] | [55,66] | [77,88] |
+-----------+---------+-----------+---------+
| [111,222] | NaN | [333,444] | NaN |
+-----------+---------+-----------+---------+
</code></pre>
<p>Typically if a column couldn't be exploded I'd just remove it from the column list like so:</p>
<pre><code>colList = df.columns.values.tolist()
colList.remove("B")
colList.remove("D")
df = df.explode(colList)
</code></pre>
<p>But that would leave me with a dataframe that looks like:</p>
<pre><code>+-----+---------+-----+---------+
| A | B | C | D |
+-----+---------+-----+---------+
| 1 | 2 | 3 | 4 |
+-----+---------+-----+---------+
| 11 | [33,44] | 55 | [77,88] |
+-----+---------+-----+---------+
| 22 | [33,44] | 66 | [77,88] |
+-----+---------+-----+---------+
| 111 | NaN | 333 | NaN |
+-----+---------+-----+---------+
| 222 | NaN | 444 | NaN |
+-----+---------+-----+---------+
</code></pre>
<p>I still need to explode those columns (B and D in example), but if I do, it'll throw an error due to the nulls. How can I successfully explode dataframes with this sort of problem?</p>
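<p>A sketch of one way to do it (assuming scalars and NaN should be repeated to match the longest list in their row, so that a multi-column <code>explode</code>, available in pandas 1.3+, stays aligned):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, [11, 22], [111, 222]],
                   'B': [2, [33, 44], float('nan')],
                   'C': [3, [55, 66], [333, 444]],
                   'D': [4, [77, 88], float('nan')]})

# Longest list per row (scalars and NaN count as length 1)
lens = df.apply(lambda col: col.map(
    lambda v: len(v) if isinstance(v, list) else 1)).max(axis=1)

# Pad every non-list cell out to the row's length so explode stays aligned
padded = df.apply(lambda col: pd.Series(
    [v if isinstance(v, list) else [v] * n for v, n in zip(col, lens)],
    index=col.index))

out = padded.explode(list(df.columns), ignore_index=True)
```

<p>NaN cells become runs of NaN of the right length, so the exploded frame keeps every row instead of raising on mismatched list lengths.</p>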
|
<python><pandas><dataframe>
|
2024-02-14 19:30:35
| 3
| 687
|
BlakeB9
|
77,996,792
| 10,940,989
|
Why does torch.compile need to use g++?
|
<p>I was trying to use the <code>compile</code> feature that was introduced with Pytorch 2 and had trouble getting it to work. After much debugging, I realised that the problem came from g++: it was using an incorrect version on my machine that didn't support C++17. Patching it by setting the <code>cpp.cxx</code> config option to the correct path was sufficient to fix this up.</p>
<p>Although the problem is solved, I'd still like to understand this better. From the docs I get that at first torch dynamo will jit-compile it into an fx-graph intermediate representation. However, I am struggling to understand how g++ comes into this: if it can only convert C++ to machine code, where does its C++ input come from? Does the torch inductor generate C++ code from the intermediate representation, which can then be executed?</p>
|
<python><c++><pytorch><g++>
|
2024-02-14 19:20:27
| 0
| 380
|
Anthony Poole
|
77,996,758
| 4,710,409
|
Django- getting the value of 'models.DateField(default=datetime.datetime.today)' in view
|
<p>I have this model:</p>
<p><strong>models.py:</strong></p>
<pre><code>class myModel(models.Model):
a = models.ForeignKey(P,on_delete=models.CASCADE)
...
**date = models.DateField(default=datetime.datetime.today)**
</code></pre>
<p>In the views.py, I get the data of "date" field:</p>
<p><strong>views.py</strong></p>
<pre><code>from appX.models import myModel
from datetime import datetime
...
def my_items(request):
...
got_date = myModel.objects.get(id = 1).date
got_date_2 = datetime.strptime(got_date, "%YYYY-%MM-%d")
</code></pre>
<p>But python throws the following error:</p>
<pre><code>time data '"2024-02-14"' does not match format '%YYYY-%MM-%d'
</code></pre>
<p>What "format" should I use ?</p>
<p>The date is fetched from the database, then stored in a session, then extracted again as a string called "got_date"; I therefore want to convert it back to a date object (the same type Django's date field uses) with "strptime" before passing it to a different database table.</p>
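<p>The error message hints at two separate problems: <code>strptime</code> directives are single-letter (<code>%Y</code> for a 4-digit year, <code>%m</code> for month, <code>%d</code> for day, not <code>%YYYY</code>/<code>%MM</code>), and the session value carries literal double quotes, likely from JSON serialization. A sketch of a fix:</p>

```python
from datetime import datetime

got_date = '"2024-02-14"'  # value as reported in the error: note the embedded quotes
clean = got_date.strip('"')
# strptime directives are single-letter: %Y (4-digit year), %m (month), %d (day)
got_date_2 = datetime.strptime(clean, "%Y-%m-%d").date()
```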
|
<python><django><datetime><view><model>
|
2024-02-14 19:15:35
| 2
| 575
|
Mohammed Baashar
|
77,996,511
| 1,360,476
|
Writing in delta using Polars and adlfs
|
<p>According to <a href="https://stackoverflow.com/questions/75800596">How to you write polars data frames to Azure blob storage?</a>,
we can write <em>parquet</em> using <code>polars</code> directly on <em>Azure Storage</em> such as basic storage containers.</p>
<p>In my case I was required to write in <em>Delta</em> format, which stands on top of <em>parquet</em>, so I modified the code a bit since <code>polars</code> <a href="https://docs.pola.rs/py-polars/html/reference/api/polars.DataFrame.write_delta.html" rel="nofollow noreferrer">also supports delta</a></p>
<pre class="lang-py prettyprint-override"><code>import adlfs
import polars as pl
from azure.identity.aio import DefaultAzureCredential
# pdf: pl.DataFrame
# path: str
# account_name: str
# container_name: str
credential = DefaultAzureCredential()
fs = adlfs.AzureBlobFileSystem(account_name=account_name, credential=credential)
with fs.open(f"{container_name}/way/to/{path}", mode="wb") as f:
if path.endswith(".parquet"):
pdf.write_parquet(f)
else:
pdf.write_delta(f, mode="append")
</code></pre>
<p>Using this code, I was able to write on the Azure filesystem when I specified a <code>path = path/to/1.parquet</code> but not <code>path = path/to/delta_folder/</code>.</p>
<p>In the second case, the problem is that only a 0-byte file gets written to <code>delta_folder</code> on Azure storage, since <code>f</code> is a pointer to a single file.</p>
<p>What's more, If I just use the local filesystem using <code>pdf.write_delta(path, mode="append")</code> it just works.</p>
<p>How can I modify my code to support writing recursively in the <code>delta_folder/</code> in the cloud?</p>
|
<python><azure-storage><python-polars><fsspec>
|
2024-02-14 18:25:46
| 1
| 1,787
|
Michel Hua
|
77,996,468
| 595,305
|
QTableView won't display single line of text quite right
|
<p>Here's an MRE:</p>
<pre><code>import sys, logging, datetime
from PyQt5 import QtWidgets, QtCore, QtGui
from PyQt5.Qt import QVBoxLayout
class MainWindow(QtWidgets.QMainWindow):
def __init__(self):
super().__init__()
self.__class__.instance = self
self.resize(1200, 1600) # w, h
main_splitter = QtWidgets.QSplitter(self)
main_splitter.setOrientation(QtCore.Qt.Vertical)
self.setCentralWidget(main_splitter)
self.top_frame = QtWidgets.QFrame()
main_splitter.addWidget(self.top_frame)
self.bottom_frame = BottomFrame()
self.bottom_frame.setMaximumHeight(350)
self.bottom_frame.setMinimumHeight(100)
main_splitter.addWidget(self.bottom_frame)
main_splitter.setCollapsible(1, False)
self.bottom_frame.construct()
class BottomFrame(QtWidgets.QFrame):
def construct(self):
layout = QtWidgets.QVBoxLayout(self)
# without this you get the default 10 px border all round the table: too much
layout.setContentsMargins(1, 1, 1, 1)
self.setLayout(layout)
self.messages_table = LogTableView()
layout.addWidget(self.messages_table)
self.messages_table.visual_log('hello world')
self.messages_table.visual_log('message 2 qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd ')
self.messages_table.visual_log('message 3')
self.messages_table.visual_log('message 4', logging.ERROR)
self.messages_table.visual_log('message 5')
self.messages_table.visual_log('message 6 qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd qunaomdd ')
self.messages_table.visual_log('message 7')
class LogTableView(QtWidgets.QTableView):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.setModel(QtGui.QStandardItemModel())
self.horizontalHeader().setStretchLastSection(True)
self.horizontalHeader().hide()
self.setVerticalHeader(VerticalHeader(self))
self.verticalHeader().setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents)
self.verticalHeader().hide()
self.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
self.setAlternatingRowColors(True)
# this doesn't seem to have any effect
# self.verticalHeader().setMinimumSectionSize(1)
def sizeHintForRow(self, row ):
hint = super().sizeHintForRow(row)
# print(f'size hint for row {row}: {hint}')
# this doesn't seem to have any effect!
if hint < 25 and hint > 10:
hint = 10
return hint
def visual_log(self, msg: str, log_level: int=logging.INFO):
model = self.model()
i_new_row = model.rowCount()
model.insertRow(i_new_row)
datetime_stamp_str = datetime.datetime.now().strftime('%a %H:%M:%S.%f')[:-3]
model.setItem(i_new_row, 0, QtGui.QStandardItem(datetime_stamp_str))
model.setItem(i_new_row, 1, QtGui.QStandardItem(str(log_level)))
self.setColumnWidth(0, 160)
self.setColumnWidth(1, 100)
model.setItem(i_new_row, 2, QtGui.QStandardItem(msg))
QtCore.QTimer.singleShot(0, self.resizeRowsToContents)
QtCore.QTimer.singleShot(10, self.scrollToBottom)
class VerticalHeader(QtWidgets.QHeaderView):
def __init__(self, parent):
super().__init__(QtCore.Qt.Vertical, parent)
def sectionSizeHint(self, logical_index):
hint = super().sectionSizeHint(logical_index)
print(f'vh index {logical_index} hint {hint}')
return hint
def main():
app_instance = QtWidgets.QApplication(sys.argv)
MainWindow().show()
sys.exit(app_instance.exec())
if __name__ == '__main__':
main()
</code></pre>
<p>In terms of row heights it's <em>ALMOST</em> doing what I want: try resizing the main window: the table rows adjust their heights depending on the real height of the text (wrapped as needed).</p>
<p>BUT... while this works if you have a piece of text which needs more than one row, sizing the row height just right, the single-line rows always take up slightly too much height. The <code>sizeHintForRow()</code> method of the <code>QTableView</code> always seems to return 24 (pixels) from the superclass ... but even if I interfere with that and brutally say "no, make it a smaller hint", it appears that that gets overridden by something later.</p>
<p><code>setMinimumSectionSize</code> on the vertical header also seems to have no effect.</p>
<p>I also thought the <code>data()</code> method of the table model might be the culprit but role 13, "SizeHintRole", never seems to get fired.</p>
<p><strong>Source code</strong><br>
I have tried looking at the <a href="https://code.qt.io/cgit/qt/qtbase.git/tree/src/widgets/itemviews/qheaderview.cpp?h=dev" rel="nofollow noreferrer">source code</a>. The method of interest here seems to be on l. 3523, one of several versions of <code>QHeaderView</code>'s <code>resizeSections</code> method. The code is naturally quite daunting, but I did spot, for example, <code>invalidateCachedSizeHint()</code> on l. 3261, which might explain why the size hints are being ignored... Anyone with particularly intricate knowledge of <code>QHeaderView</code>'s functionality? <em><strong>Update later</strong></em> After Musicamante suggested use of <code>setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents)</code> (good idea), I don't now know what part of the source code is involved.</p>
<p><strong>Screenshot</strong><br>
<a href="https://i.sstatic.net/8nQci.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8nQci.jpg" alt="main window" /></a></p>
<p>... the single-text-line rows are slightly too high. If you see something different, like perfectly tight single-text-line rows, please let me know. I'm on W10. A different OS may produce a different result, who knows?</p>
|
<python><pyqt><qtableview>
|
2024-02-14 18:15:52
| 1
| 16,076
|
mike rodent
|
77,996,458
| 23,260,297
|
Find starting point to read csv data
|
<p>I am given a csv file everyday that contains data to be used for calculations. I want to use pandas to organize the data. My issue is that the data to be read does not start on the first line of the file. The data also does not start on the same exact line every time so I cannot use the skiprows parameter in read_csv() method.</p>
<p>The data does have some indicators as to where the data begins.</p>
<p>For example this is how the beginning of my csv file would look. I am only interested in starting at the first column header which is 'Deal Type':</p>
<pre><code>Counterparty Name
ID Number
.
.
.
Asset
USD.HO
USD.LCO
USD.RB
Cpty:
Product:
[Deal Type] [column] ... ... ...
[here the data begins]
</code></pre>
<p>How could I parse through the file and find the first column header and start at that line? The column header 'Deal Type' is always the first column.</p>
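<p>One common approach (a sketch on hypothetical file content standing in for the real export): scan the raw lines once to find the row whose first field is the known header, then pass that index to <code>read_csv</code> as <code>skiprows</code>:</p>

```python
import io
import pandas as pd

# Hypothetical file content standing in for the real daily export
raw = """Counterparty Name
ID Number
Cpty:
Product:
Deal Type,Notional
Swap,100
Future,200
"""

lines = raw.splitlines()
# First line that starts with the known column header marks where the table begins
header_row = next(i for i, line in enumerate(lines) if line.startswith("Deal Type"))
df = pd.read_csv(io.StringIO(raw), skiprows=header_row)
```

<p>With a real file you would do the scan in a first pass over <code>open(path)</code>, then pass the same path plus <code>skiprows</code> to <code>read_csv</code>.</p>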
|
<python><pandas><csv>
|
2024-02-14 18:13:08
| 2
| 2,185
|
iBeMeltin
|
77,996,271
| 5,790,653
|
how to print duplicate values of a list of dictionaries in python
|
<p>Suppose this list:</p>
<pre class="lang-py prettyprint-override"><code>list1 = [
{'id': 1, 'name': 'A', 'blob': 'blob', 'type': 'A'},
{'id': 2, 'name': 'B', 'blob': 'blob', 'type': 'B'},
{'id': 3, 'name': 'A', 'blob': 'blob', 'type': 'A'},
{'id': 4, 'name': 'C', 'blob': 'blob', 'type': 'A'},
{'id': 5, 'name': 'E', 'blob': 'blob', 'type': 'A'},
{'id': 6, 'name': 'B', 'blob': 'blob', 'type': 'B'}
]
</code></pre>
<p>I want to find entries where both the <code>name</code> and <code>type</code> values are the same.</p>
<p>This is expected output:</p>
<pre><code>name A has duplicate type A
name B has duplicate type B
</code></pre>
<p>How can I have the expected output?</p>
<p>I googled but most of results print one value only (for example if there are multiple <code>name</code> keys with the same values).</p>
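<p>One way to count on both keys at once (a sketch): build a <code>Counter</code> over <code>(name, type)</code> tuples, then report only the pairs seen more than once:</p>

```python
from collections import Counter

list1 = [
    {'id': 1, 'name': 'A', 'blob': 'blob', 'type': 'A'},
    {'id': 2, 'name': 'B', 'blob': 'blob', 'type': 'B'},
    {'id': 3, 'name': 'A', 'blob': 'blob', 'type': 'A'},
    {'id': 4, 'name': 'C', 'blob': 'blob', 'type': 'A'},
    {'id': 5, 'name': 'E', 'blob': 'blob', 'type': 'A'},
    {'id': 6, 'name': 'B', 'blob': 'blob', 'type': 'B'},
]

# Count (name, type) pairs, then keep only pairs that occur more than once
counts = Counter((d['name'], d['type']) for d in list1)
dupes = [f"name {n} has duplicate type {t}" for (n, t), c in counts.items() if c > 1]
for line in dupes:
    print(line)
```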
|
<python><dictionary>
|
2024-02-14 17:37:28
| 1
| 4,175
|
Saeed
|
77,996,196
| 22,466,650
|
How to duplicate rows based on the number of weeks between two dates?
|
<p>My input is this dataframe :</p>
<pre><code>df = pd.DataFrame(
{
'ID': ['ID001', 'ID002', 'ID003'],
'DATE': ['24/12/2023', '01/02/2024', '12/02/2024'],
}
)
df['DATE'] = pd.to_datetime(df['DATE'], dayfirst=True)
print(df)
ID DATE
0 ID001 2023-12-24
1 ID002 2024-02-01
2 ID003 2024-02-12
</code></pre>
<p>I'm trying to duplicate the rows for each id N times with N being the number of weeks between the column <code>DATE</code> and the current date. At the end, each last row for a given id will have the week of the column <code>DATE</code>.</p>
<p>For that I made the code below but it gives me a wrong output :</p>
<pre><code>number_of_weeks = (pd.Timestamp('now') - df['DATE']).dt.days // 7
final = df.copy()
final['YEAR'] = final['DATE'].dt.isocalendar().year
final['WEEK'] = final['DATE'].dt.isocalendar().week
final['WEEKS'] = (pd.Timestamp('now') - df['DATE']).dt.days // 7
for index, row in final.iterrows():
for i in range(1, row['WEEKS'] + 1):
final.loc[i, 'WEEK'] = i
final = final.ffill().drop(columns='WEEKS')
print(final)
ID DATE YEAR WEEK
0 ID001 2023-12-24 2023 51
1 ID002 2024-02-01 2024 1
2 ID003 2024-02-12 2024 2
3 ID003 2024-02-12 2024 3
4 ID003 2024-02-12 2024 4
5 ID003 2024-02-12 2024 5
6 ID003 2024-02-12 2024 6
7 ID003 2024-02-12 2024 7
</code></pre>
<p>Have you encountered a similar problem? I'm open to any suggestion.</p>
<p>My expected output is this :</p>
<pre><code> ID DATE YEAR WEEK
0 ID001 24/12/2023 2023 51
1 ID001 24/12/2023 2023 52
2 ID001 24/12/2023 2024 1
3 ID001 24/12/2023 2024 2
4 ID001 24/12/2023 2024 3
5 ID001 24/12/2023 2024 4
6 ID001 24/12/2023 2024 5
7 ID001 24/12/2023 2024 6
8 ID001 24/12/2023 2024 7
##################################
9 ID002 01/02/2024 2024 5
10 ID002 01/02/2024 2024 6
11 ID002 01/02/2024 2024 7
##################################
12 ID003 12/02/2024 2024 7
</code></pre>
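<p>One approach (a sketch; <code>today</code> is pinned to 2024-02-14 here for reproducibility, where your code would keep <code>pd.Timestamp('now')</code>): build one weekly period per row from <code>DATE</code> through today, then explode. Pandas <code>'W'</code> periods run Monday to Sunday, which lines up with ISO weeks:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'ID': ['ID001', 'ID002', 'ID003'],
    'DATE': pd.to_datetime(['24/12/2023', '01/02/2024', '12/02/2024'], dayfirst=True),
})
today = pd.Timestamp('2024-02-14')  # pinned instead of pd.Timestamp('now') for reproducibility

# One weekly period per row from DATE through today ('W' periods run Mon-Sun,
# matching ISO weeks), kept as the period's start date
df['WEEK_START'] = df['DATE'].apply(
    lambda d: [p.start_time for p in pd.period_range(d, today, freq='W')]
)
out = df.explode('WEEK_START', ignore_index=True)
iso = pd.to_datetime(out['WEEK_START']).dt.isocalendar()
out['YEAR'], out['WEEK'] = iso['year'], iso['week']
out = out.drop(columns='WEEK_START')
```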
|
<python><pandas>
|
2024-02-14 17:25:06
| 1
| 1,085
|
VERBOSE
|
77,995,989
| 11,567,354
|
How to automatically update ORM object attributes with client-side values
|
<p>I'm using AWS API Gateway to create my REST API and Lambda layers to define my SQLAlchemy models. I'm using MySQL as my RDBMS.
I have a model defined as:</p>
<pre><code>class Project(Base):
__tablename__ = "project"
name = Column(String(256))
created_by = Column(String(256))
updated_by = Column(String(256))
</code></pre>
<p>I have a function that get the user credentials who invoked the call based on jwt token and which return the username:</p>
<pre><code>def get_user_creds(token):
return jwt.decode(token, algorithms=["RS256"], options={"verify_signature": False})
</code></pre>
<p>I want to automatically fill the fields created_by and updated_by using the <strong>get_user_creds</strong> function.
Any solution to fill these columns on all transactions without explicitly setting them in model methods, so that when I run</p>
<pre><code>project = Project(name="project1")
session.add(project)
session.commit()
</code></pre>
<p>The transaction adds automatically created_by fields</p>
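<p>One common pattern (a sketch using an in-memory SQLite database and a stand-in <code>get_current_user()</code> in place of the JWT-based lookup): a <code>before_flush</code> session event fills the audit columns on every transaction without touching the model:</p>

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

def get_current_user():  # stand-in for the JWT-based get_user_creds()
    return "alice"

class Project(Base):
    __tablename__ = "project"
    id = Column(Integer, primary_key=True)
    name = Column(String(256))
    created_by = Column(String(256))
    updated_by = Column(String(256))

@event.listens_for(Session, "before_flush")
def fill_audit_fields(session, flush_context, instances):
    user = get_current_user()
    for obj in session.new:            # freshly added objects
        if isinstance(obj, Project) and obj.created_by is None:
            obj.created_by = user
    for obj in session.dirty:          # modified objects
        if isinstance(obj, Project):
            obj.updated_by = user

engine = create_engine("sqlite://")    # in-memory DB for the sketch
Base.metadata.create_all(engine)
with Session(engine) as session:
    p = Project(name="project1")
    session.add(p)
    session.commit()
    created = p.created_by
```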
|
<python><sqlalchemy><aws-api-gateway>
|
2024-02-14 16:51:40
| 1
| 366
|
Wajih Katrou
|
77,995,937
| 3,200,552
|
Race Conditions with ModelViewset Requests
|
<p>I'm experiencing an issue with a Django Rest Framework ModelViewSet endpoint named <code>projects/</code>. I have a set of requests (PATCH, DELETE, then GET) that are causing unexpected behavior. The timeline of requests and responses is as follows:</p>
<ol>
<li>PATCH request at 14:45:09.420</li>
<li>DELETE request at 14:45:12.724</li>
<li>DELETE 204 response at 14:45:12.852</li>
<li>PATCH 200 response at 14:45:13.263</li>
<li>GET request at 14:45:13.279</li>
<li>GET 200 response at 14:45:13.714</li>
</ol>
<p>All responses indicate success. However, the GET response, which follows a DELETE, includes the supposedly deleted model. If I call the GET endpoint a bit later, the deleted model is no longer listed.</p>
<p>This behavior suggests a potential race condition or a caching issue, where the PATCH operation completes after the DELETE, or the GET request returns a cached list not reflecting the deletion.</p>
<p>The view, serializer and model code are all pretty vanilla:</p>
<pre class="lang-py prettyprint-override"><code>class ProjectViewSet(ModelViewSet):
parser_classes = (MultiPartParser, FormParser, JSONParser)
queryset = Project.objects.all()
serializer_class = ProjectSerializer
pagination_class = ProjectPagination
class ProjectSerializer(serializers.ModelSerializer):
creator = UserUUIDField(default=serializers.CurrentUserDefault())
image = serializers.ImageField(required=False)
class Meta:
model = Project
fields = "__all__"
class Project(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
creator = models.ForeignKey(User, on_delete=models.CASCADE)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
image = models.ForeignKey(
wand_image,
on_delete=models.DO_NOTHING,
null=True,
blank=True,
related_name="projects"
)
</code></pre>
<p>One model has a foreign key reference to this model, but the on_delete behavior is to set it to null.</p>
<p>I'm running this with Google Cloud Run, a serverless backend service</p>
|
<python><django><django-rest-framework><race-condition>
|
2024-02-14 16:44:31
| 1
| 785
|
merhoo
|
77,995,812
| 353,337
|
Initialize pytest test objects in the test, not in the `parametrized`/collection phase
|
<p>I have a number of pytest tests that I run like</p>
<pre class="lang-py prettyprint-override"><code>import pytest
from mymodule import *
@pytest.mark.parametrize(
"arg",
[
fun1(),
fun2(),
]
+ [
fun3(n) for n in range(10)
]
+ [
fun4(n, model)
for n in range(3, 7)
for model in ["explicit", "implicit"]
],
)
def test_foobar(arg):
# ...
pass
</code></pre>
<p>Some <code>fun*</code> take long to initialize, but I don't know which since the initialization happens in the pytest <em>collection</em> phase. I'd rather move initialization into the function itself <em>without adding much boilerplate code</em>.</p>
<p>How would I achieve that? Note that different <code>fun</code> methods have different signatures.</p>
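<p>One low-boilerplate option (a sketch with a stand-in <code>fun3</code>): parametrize with zero-argument callables built via <code>functools.partial</code>, so nothing expensive runs at collection time and the differing signatures of the <code>fun*</code> factories don't matter:</p>

```python
import functools

import pytest

def fun3(n):  # stand-in for one of the expensive factories
    return n * 2

# Each parameter is a zero-argument callable; nothing expensive runs at collection
lazy_args = [functools.partial(fun3, n) for n in range(3)]

@pytest.mark.parametrize("make_arg", lazy_args)
def test_foobar(make_arg):
    arg = make_arg()  # construction deferred into the test body
    assert arg is not None
```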
|
<python><pytest>
|
2024-02-14 16:25:53
| 2
| 59,565
|
Nico Schlömer
|
77,995,669
| 5,168,463
|
pyspark schema mismatch issue
|
<p>I am trying to load a .csv file into spark using this code:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
from pyspark.sql import Window
spark = SparkSession.builder.appName('Demo').master('local').getOrCreate()
pathData = '/home/data/departuredelays.csv'
schema = StructType([
StructField('date', StringType()),
StructField('delay', IntegerType()),
StructField('distance', IntegerType()),
StructField('origin', StringType()),
StructField('destination', StringType()),
])
flightsDelatDf = (spark
.read
.format('csv')
.option('path', pathData)
.option('header', True)
.option("schema", schema)
.load()
)
</code></pre>
<p>When I check the schema, i see that column <strong>delay</strong> and <strong>distance</strong> are shown as type <code>string</code> whereas in the schema, I have defined them as <code>integers</code></p>
<pre><code>flightsDelatDf.printSchema()
root
|-- date: string (nullable = true)
|-- delay: string (nullable = true)
|-- distance: string (nullable = true)
|-- origin: string (nullable = true)
|-- destination: string (nullable = true)
</code></pre>
<p>But if I read the file using <code>.schema(schema)</code> instead of using <code>.option('schema', schema)</code> to specify schema :</p>
<pre><code>flightsDelatDf = (spark
.read
.format('csv')
.option('path', pathData)
.option('header', True)
.schema(schema)
.load()
)
</code></pre>
<p>I see that the column data types are aligned with what I have specified.</p>
<pre><code>flightsDelatDf.printSchema()
root
|-- date: string (nullable = true)
|-- delay: integer (nullable = true)
|-- distance: integer (nullable = true)
|-- origin: string (nullable = true)
|-- destination: string (nullable = true)
</code></pre>
<p>Does anyone know why in the first case the data types are not aligned with the schema defined, whereas they are in the second? Thanks in advance.</p>
|
<python><apache-spark><pyspark><schema>
|
2024-02-14 16:04:26
| 1
| 515
|
DumbCoder
|
77,995,602
| 9,069,109
|
Excel truncates function return when using PyXll
|
<p>We have a function in python that is called in Excel using PyXll Add-in. To do so, the function is called using this code:</p>
<pre class="lang-py prettyprint-override"><code>
import myfunction
from pyxll import xl_func
import pandas as pd
@xl_func('float[][], float[]', auto_resize=True)
def myfunction_xl(
x: Iterable,
y: Iterable,
) -> Tuple[pd.DataFrame, pd.DataFrame, pd.DataFrame]:
(
df1,
df2,
df3,
) = myfunction(x, y)
return (
df1,
df2,
df3,
)
</code></pre>
<p>This calls the code of <code>myfunction</code> which returns after some computation a tuple of three dataframes of different sizes and columns.</p>
<p>The function call works fine in Excel, however the three dataframes are somehow truncated and not displayed with their content.</p>
<p><a href="https://i.sstatic.net/Z6Li4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z6Li4.png" alt="enter image description here" /></a></p>
<p>The same thing happens when calling some other functions which return a long list. Can you please help me figure out how to fix the display? thanks.</p>
|
<python><excel><excel-addins><pyxll>
|
2024-02-14 15:52:55
| 1
| 453
|
Aniss Chohra
|
77,995,419
| 92,516
|
Correct way to perform expensive per-worker setup
|
<p>What's the correct / best way to perform costly per-process initialisation when using multiple workers, ensuring the workers can correctly communicate with the master?</p>
<p>As part of a custom <code>locustfile.py</code> I need to download a large dataset from a URL and load into memory to generate GET request parameters (download time ~60s; load time takes ~5s). For a single-process locust setup I have used the <code>init</code> event to perform this, and that works fine. However when running with multiple workers (<code>--processes=N</code>) I've run into a few issues with simply using the <code>init</code> event:</p>
<p><strong>Attempt 1</strong>: As <code>init</code> is fired once per process, using that event for all workers results in downloading the same files from all processes, which is inefficient and causes problems with files overwriting each other.</p>
<p><strong>Attempt 2</strong>: Based on the <a href="https://github.com/locustio/locust/blob/master/examples/test_data_management.py" rel="nofollow noreferrer"><code>test_data_management.py</code> example</a>, I tried having master / local runner downloading via <code>init</code>, but deferring the loading (only from disk, re-using the already-downloaded files) until <code>on_start</code>:</p>
<pre class="lang-py prettyprint-override"><code>@events.init.add_listener
def on_locust_init(environment, **_kwargs):
if not isinstance(environment.runner, WorkerRunner):
# Download (if not exists in local files), read and load
# into environment.dataset
setup_dataset(environment)
...
@events.test_start.add_listener
def setup_worker_dataset(environment, **_kwargs):
if isinstance(environment.runner, WorkerRunner):
# Make the Dataset available for WorkerRunners (non-Worker will have
# already downloaded the dataset via on_locust_init).
setup_dataset(environment, skip_download_and_populate=True)
</code></pre>
<p>This seems to work ok if <code>setup_worker_dataset()</code> is quick (less than 1s), however for larger datasets it can take longer, and I see problems with the master / worker communication, and load generation terminates:</p>
<pre><code>locust-swarm-0-f008c40/INFO/locust.runners: Worker locust-swarm-0-xxx failed to send heartbeat, setting state to missing.
locust-swarm-0-f008c40/INFO/locust.runners: Worker locust-swarm-0-yyy failed to send heartbeat, setting state to missing.
locust-swarm-0-f008c40/INFO/locust.runners: The last worker went missing, stopping test.
</code></pre>
<p>(Of note, setup_dataset() performs blocking / CPU-heavy work, so I expect it's preventing the gevent event loop from running.)</p>
<p>Essentially I need a "safe place" to perform some CPU-heavy work at the process/environment level before the User objects start running tasks.</p>
|
<python><load-testing><locust>
|
2024-02-14 15:28:07
| 1
| 9,708
|
DaveR
|
77,995,258
| 5,022,847
|
Django group by substring on a field
|
<p>I have a Django, PostgreSQL project.
I want to perform a group by on substring from a field.
Ex I have a model as <code>Model1</code> and its column as <code>name</code>. The value in the column can be:</p>
<pre><code>ABC-w-JAIPUR-123
XYZ-x-JAIPUR-236
MNO-y-SURAT-23
DEF-q-SURAT-23
</code></pre>
<p>From the above values in the <code>name</code> field, I want to group by the substring between the second occurrence of <code>-</code> and the last <code>-</code>; in our case that would be: "JAIPUR", "SURAT".</p>
<p>Please let me know how to achieve this in Django.</p>
<p>UPDATE:
So far, I have tried:</p>
<pre><code>Model1.objects.annotate(substring=Substr('name', F('name').index('-')+1, (F('name', output_field=CharField())[:F('name').rindex('-')]).index('-'))).values('substring').annotate(count=Count('id'))
</code></pre>
<p>but this is giving error:</p>
<pre><code>AttributeError: 'F' object has no attribute 'index'
</code></pre>
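<p>For illustration, the pure-Python equivalent of the grouping key is the third dash-separated token. On the database side the same extraction can be expressed with PostgreSQL's <code>SPLIT_PART(name, '-', 3)</code> via a Django <code>Func</code> annotation (an assumption to verify against your schema), which sidesteps the <code>F</code>-object limitation hit above:</p>

```python
from collections import Counter

names = [
    "ABC-w-JAIPUR-123",
    "XYZ-x-JAIPUR-236",
    "MNO-y-SURAT-23",
    "DEF-q-SURAT-23",
]

# The grouping key is the third dash-separated token;
# SPLIT_PART(name, '-', 3) does the same on the PostgreSQL side
groups = Counter(n.split('-')[2] for n in names)
```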
|
<python><django><postgresql><django-models><django-model-field>
|
2024-02-14 15:04:26
| 1
| 1,430
|
TechSavy
|
77,995,227
| 1,754,307
|
How to remove data validation from selected cells in python using openpyxl
|
<p>I am opening a template file using openpyxl, which has data validation down to row 500 in about 5 columns. In my python script I am loading data into the left half of the spreadsheet from Oracle, with the data validation columns being on the right.</p>
<p>If the data is only 50 rows I am inserting, I need to get rid of the 450 rows worth of data validation entries and formulas. But it seems impossible to just delete a validation rule from a specific cell in openpyxl.</p>
<p>It seems I may have to forget about having a template with data validation already in up to rows 500, and create the validation rules manually, which I was hoping to avoid. Much easier to have a template pre-populated, and just delete what I don't need.</p>
<p>This below doesn't work, as the "validation.sqref" is like "M2:500" (as this is where the validation rule is present) and so never matches the cell reference of the cell in the current iteration.</p>
<pre><code>for row in range(start_row_to_delete, ws_data.max_row + 1):
for col in range(1, ws_data.max_column + 1):
cell = ws_data.cell(row=row, column=col)
if ws_data.data_validations:
for validation in ws_data.data_validations.dataValidation:
if validation.sqref == cell.coordinate:
ws_data.data_validations.remove(validation)
</code></pre>
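<p>Part of the issue (a sketch, assuming a recent openpyxl): <code>validation.sqref</code> is a <code>MultiCellRange</code> like <code>M2:M500</code>, so string equality with a single coordinate never matches, but per-cell membership tests do work. Rather than removing the rule cell by cell, the validated range can be shrunk to the used rows:</p>

```python
from openpyxl.worksheet.datavalidation import DataValidation

dv = DataValidation(type="list", formula1='"yes,no"')
dv.add("M2:M500")

# sqref is a MultiCellRange: equality with a coordinate string never matches,
# but membership works cell by cell
assert "M100" in dv.sqref

# Shrink the validated range to the populated rows instead of per-cell removal
dv.sqref = "M2:M51"
```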
|
<python><openpyxl>
|
2024-02-14 14:58:46
| 1
| 3,082
|
smackenzie
|
77,995,211
| 16,420,204
|
Polars: Manipulation on columns by 'dtype' creating multiple new columns
|
<p>With the given code I want to create new columns for every column selected by <code>pl.DATETIME_DTYPES</code> with the extracted year.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
data = {"col1": ['2020/01/01', '2020/02/01'], "col2": ['2020/01/01', '2020/02/01']}
df = pl.DataFrame(data, schema={"col1": pl.String, "col2": pl.String})
df = df.with_columns(
pl.col('col1').str.to_datetime(),
pl.col('col2').str.to_datetime()
)
df.with_columns(
pl.col(pl.DATETIME_DTYPES).dt.year()
)
</code></pre>
<pre><code>┌──────┬──────┐
│ col1 ┆ col2 │
│ --- ┆ --- │
│ i32 ┆ i32 │
╞══════╪══════╡
│ 2020 ┆ 2020 │
│ 2020 ┆ 2020 │
└──────┴──────┘
</code></pre>
<p>For a single column I would apply <code>.alias()</code> but what to do for potentially <em><strong>n</strong></em> new columns? Any <em>generic</em> way?</p>
<p>For example, if I wanted to add a suffix to the existing name.</p>
<pre><code>┌─────────────────────┬─────────────────────┬───────────┬───────────┐
│ col1 ┆ col2 ┆ col1_year ┆ col2_year │
│ --- ┆ --- ┆ --- ┆ --- │
│ datetime[μs] ┆ datetime[μs] ┆ i32 ┆ i32 │
╞═════════════════════╪═════════════════════╪═══════════╪═══════════╡
│ 2020-01-01 00:00:00 ┆ 2020-01-01 00:00:00 ┆ 2020 ┆ 2020 │
│ 2020-02-01 00:00:00 ┆ 2020-02-01 00:00:00 ┆ 2020 ┆ 2020 │
└─────────────────────┴─────────────────────┴───────────┴───────────┘
</code></pre>
|
<python><dataframe><datetime><python-polars>
|
2024-02-14 14:56:58
| 1
| 1,029
|
OliverHennhoefer
|
77,994,983
| 13,086,128
|
Difference between [ ] and Expression API?
|
<p>What is the difference between using square brackets <code>[ ]</code> and using <a href="https://docs.pola.rs/py-polars/html/reference/expressions/index.html" rel="nofollow noreferrer"><strong>Expression APIs</strong></a> like <code>select</code>, <code>filter</code>, etc. when trying to access data from a polars Dataframe? Which one to use when?</p>
<p>a polars dataframe</p>
<pre><code>df = pl.DataFrame(
{
"a": ["a", "b", "a", "b", "b", "c"],
"b": [2, 1, 1, 3, 2, 1],
}
)
</code></pre>
|
<python><python-polars>
|
2024-02-14 14:21:17
| 3
| 30,560
|
Talha Tayyab
|
77,994,888
| 10,570,372
|
Can we provide an example and motivation on when to use covariant and contravariant?
|
<p>I have some trouble finding real use cases of covariance/contravariance. I would like a concrete motivation and example of when covariance/contravariance is used; in particular, I would appreciate an example such that if covariance/contravariance were not applied, the application would be buggy or not type safe.</p>
<p>To start, I know PyTorch's <code>Dataset</code> and <code>DataLoader</code> is parametrized by a covariant type:</p>
<pre class="lang-py prettyprint-override"><code>class Dataset(Generic[T_co]):
r"""An abstract class representing a :class:`Dataset`.
All datasets that represent a map from keys to data samples should subclass
it. All subclasses should overwrite :meth:`__getitem__`, supporting fetching a
data sample for a given key. Subclasses could also optionally overwrite
:meth:`__len__`, which is expected to return the size of the dataset by many
:class:`~torch.utils.data.Sampler` implementations and the default options
of :class:`~torch.utils.data.DataLoader`. Subclasses could also
optionally implement :meth:`__getitems__`, for speedup batched samples
loading. This method accepts list of indices of samples of batch and returns
list of samples.
.. note::
:class:`~torch.utils.data.DataLoader` by default constructs a index
sampler that yields integral indices. To make it work with a map-style
dataset with non-integral indices/keys, a custom sampler must be provided.
"""
def __getitem__(self, index) -> T_co:
raise NotImplementedError("Subclasses of Dataset should implement __getitem__.")
# def __getitems__(self, indices: List) -> List[T_co]:
# Not implemented to prevent false-positives in fetcher check in
# torch.utils.data._utils.fetch._MapDatasetFetcher
def __add__(self, other: 'Dataset[T_co]') -> 'ConcatDataset[T_co]':
return ConcatDataset([self, other])
# No `def __len__(self)` default?
# See NOTE [ Lack of Default `__len__` in Python Abstract Base Classes ]
# in pytorch/torch/utils/data/sampler.py
</code></pre>
<p>I wonder if someone can come up with convincing example of why a <code>Dataset</code> needs to be covariant.</p>
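<p>For illustration, a minimal sketch (simplified stand-ins, not PyTorch's real classes): a <code>Dataset</code> only <em>produces</em> values of its type parameter, so covariance is safe, and it is exactly what lets a <code>Dataset[Cat]</code> be passed where a <code>Dataset[Animal]</code> is expected. With an invariant TypeVar, a checker such as mypy would reject the call below even though it is perfectly safe at runtime:</p>

```python
from typing import Generic, List, TypeVar

class Animal:
    sound = "..."

class Cat(Animal):
    sound = "meow"

T_co = TypeVar("T_co", covariant=True)

class Dataset(Generic[T_co]):
    """Only produces T_co values, never consumes them -- safe to be covariant."""
    def __init__(self, items: List[T_co]) -> None:
        self._items = items

    def __getitem__(self, index: int) -> T_co:
        return self._items[index]

def describe_first(ds: "Dataset[Animal]") -> str:
    return ds[0].sound

cats: "Dataset[Cat]" = Dataset([Cat()])
# Covariance lets Dataset[Cat] flow into a Dataset[Animal] slot;
# an invariant TypeVar would make a type checker flag this call.
result = describe_first(cats)
```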
|
<python><pytorch><python-typing><liskov-substitution-principle>
|
2024-02-14 14:07:02
| 1
| 1,043
|
ilovewt
|
77,994,872
| 7,662,164
|
Specify an arbitrary argument in addition to *args in python
|
<p>For a function with multiple arguments like this:</p>
<pre><code>def f(a, b, c):
return a + b + c
</code></pre>
<p>it is straightforward to define a new function as follows:</p>
<pre><code>def g(a, *args):
return f(a, *args)
</code></pre>
<p>here, <code>a</code> is the first argument of <code>f()</code>. What if I would like to specify the second argument of <code>f()</code>, namely:</p>
<pre><code>def g(b, *args):
# return f(???)
</code></pre>
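<p>One way to do it (a sketch): since <code>*args</code> now carries the remaining positional parameters of <code>f</code> in order, unpack the first one as <code>a</code> and forward the rest:</p>

```python
def f(a, b, c):
    return a + b + c

def g(b, *args):
    # args holds the remaining positional parameters of f, in order: (a, c)
    a, *rest = args
    return f(a, b, *rest)
```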
|
<python><function><arguments><positional-argument>
|
2024-02-14 14:04:29
| 2
| 335
|
Jingyang Wang
|
77,994,776
| 16,525,263
|
Join two dataframes to get data into another dataframe and create a new column in pyspark
|
<p>I have a pyspark dataframe</p>
<pre><code>mail_id e_no
aaa@ven.com 111
bbb@ven.com 222
ccc@ven.com 333
</code></pre>
<p>I have another dataframe</p>
<pre><code>email emp_no
aaa@ven.com 111
222
333
</code></pre>
<p>I need to fill the emp_no in the second dataframe using the first dataframe.
I need to do a left join to keep all records from the first dataframe.
I also need an extra column to indicate which dataframe each emp_no came from.</p>
<p>The final dataframe should be like:</p>
<pre><code>email emp_no src
aaa@ven.com 111 tbl1
bbb@ven.com 222 tbl2
ccc@ven.com 333 tbl2
</code></pre>
<pre><code>final_df = df1.join(df2, df1['e_no'] == df2['emp_no'], 'left')
final_df = final_df.withColumn('src', F.when('mail_id'.isNull() | mail_id =='', 'tbl2')
.when('mail_id' != '', 'tbl1')
.otherwise(F.lit('')))
</code></pre>
<p>I'm not getting the right results. Please suggest if any changes are needed.</p>
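One apparent problem in the PySpark snippet is that `F.when` is applied to the literal string `'mail_id'` rather than a column (`F.col('mail_id')`), so the null/empty tests never run against the data. As a hedge, here is the intended logic, a left join plus a source-indicator column matching the sample output, illustrated in pandas; translating the conditions back to `F.col(...).isNull()` / `== ''` in PySpark is left as an assumption.

```python
import pandas as pd

df1 = pd.DataFrame({"mail_id": ["aaa@ven.com", "bbb@ven.com", "ccc@ven.com"],
                    "e_no": [111, 222, 333]})
df2 = pd.DataFrame({"email": ["aaa@ven.com", "", ""],
                    "emp_no": [111, 222, 333]})

# Left join keeps every row of df2; df1 supplies the missing emails.
merged = df2.merge(df1, left_on="emp_no", right_on="e_no", how="left")

# src reflects whether df2 already had a non-empty email for the row.
has_email = merged["email"].fillna("") != ""
merged["src"] = has_email.map({True: "tbl1", False: "tbl2"})
merged["email"] = merged["email"].where(has_email, merged["mail_id"])

result = merged[["email", "emp_no", "src"]]
print(result)
```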
|
<python><pyspark>
|
2024-02-14 13:49:14
| 1
| 434
|
user175025
|
77,994,714
| 10,452,700
|
How can retrieve numbers of records above and under mean\median over time data with respect to time resolution in python?
|
<p>Let's say I have the following time <a href="https://github.com/amcs1729/Predicting-cloud-CPU-usage-on-Azure-data/blob/master/azure.csv" rel="nofollow noreferrer">data</a> for one month (31 days) of January:</p>
<pre class="lang-py prettyprint-override"><code>import os, holoviews as hv
os.environ['HV_DOC_HTML'] = 'true'
hv.extension('bokeh')
import pandas as pd
import pandas_bokeh
from pandas_bokeh import plot_bokeh
pandas_bokeh.output_notebook()
#-----------------------------------------------------------
# Libs
#-----------------------------------------------------------
#!pip install hvplot
#!pip install pandas-bokeh
#-----------------------------------------------------------
# LOAD THE DATASET
#-----------------------------------------------------------
df = pd.read_csv('azure.csv')
df['timestamp'] = pd.to_datetime(df['timestamp'])
df = df.rename(columns={'min cpu': 'min_cpu',
'max cpu': 'max_cpu',
'avg cpu': 'avg_cpu',})
#df = df.set_index('timestamp')
df.head()
# Data preparation
# ==============================================================================
sliced_df = df[['timestamp', 'avg_cpu']]
# convert column to datetime object
#sliced_df['timestamp'] = pd.to_datetime(sliced_df['timestamp'], format='%Y-%m-%d %H:%M:%S')
# get the hour, day month
sliced_df['hour'] = sliced_df['timestamp'].dt.hour
sliced_df['day'] = sliced_df['timestamp'].dt.day
sliced_df['month'] = sliced_df['timestamp'].dt.month
sliced_df['year'] = sliced_df['timestamp'].dt.year
year_input=2017
month_input=1
day_input=21
# Retrieve the hourly average CPU usage
df_avg = sliced_df.groupby('hour').agg({'avg_cpu': 'mean'}).reset_index()
df_21 = sliced_df[(sliced_df.year == year_input) & (sliced_df.month == month_input) & (sliced_df.day == day_input)]
df_21 = df_21.groupby('hour').agg({'avg_cpu': 'max'}).reset_index()
df_above = pd.merge(df_21, df_avg, on='hour', suffixes=('_hour','_avg'))
df_above['above'] = df_above.loc[df_above[f"avg_cpu_hour"] >= df_above["avg_cpu_avg"], f"avg_cpu_hour"]
df_above['above_value'] = df_above['avg_cpu_hour'] - df_above['avg_cpu_avg']
df_below = pd.merge(df_21, df_avg, on='hour', suffixes=('_hour','_avg'))
df_below['below'] = df_below.loc[df_below[f"avg_cpu_hour"] < df_below["avg_cpu_avg"], f"avg_cpu_hour"]
df_below['below_value'] = df_below['avg_cpu_hour'] - df_below['avg_cpu_avg']
above_count = df_above['above'].value_counts().sum()
below_count = df_below['below'].value_counts().sum()
dark_red = "#FF5555"
dark_blue = "#5588FF"
plot = sliced_df.hvplot( x="hour", y="avg_cpu", by="day", color="grey", alpha=0.02, legend=False, hover=False)
plot_avg = df_avg.hvplot( x="hour", y="avg_cpu", color="grey", legend=False)
plot_21th = df_21.hvplot( x="hour", y="avg_cpu", color="black", legend=False)
plot_above = df_above.hvplot.area(x="hour", y="avg_cpu_avg", y2="avg_cpu_hour").opts(fill_alpha=0.2, line_alpha=0.8, line_color=dark_red, fill_color=dark_red)
plot_below = df_below.hvplot.area(x="hour", y="avg_cpu_avg", y2="avg_cpu_hour").opts(fill_alpha=0.2, line_alpha=0.8, line_color=dark_blue, fill_color=dark_blue)
text_days_above = hv.Text(5, df_21["avg_cpu"].max(), f"{above_count}", fontsize=14).opts(text_align="right", text_baseline="bottom", text_color=dark_red, text_alpha=0.8)
text_days_below = hv.Text(5, df_21["avg_cpu"].max(), f"{below_count}", fontsize=14).opts(text_align="right", text_baseline="top", text_color=dark_blue, text_alpha=0.8)
text_above = hv.Text(5, df_21["avg_cpu"].max(), "DAYS ABOVE", fontsize=7).opts(text_align="left", text_baseline="bottom", text_color="lightgrey", text_alpha=0.8)
text_below = hv.Text(5, df_21["avg_cpu"].max(), "DAYS BELOW", fontsize=7).opts(text_align="left", text_baseline="above", text_color="lightgrey", text_alpha=0.8)
hv.renderer("bokeh").theme = theme
final = (
plot
* plot_21th
* plot_avg
* plot_above
* plot_below
* text_days_above
* text_days_below
* text_above
* text_below
).opts(
xlabel="hourly",
ylabel="CPU [Hz]",
title=f"{day_input}th Jan data vs AVERAGE",
gridstyle={"ygrid_line_alpha": 0},
xticks=[
(0, "00:00"),
(1, "01:00"),
(2, "02:00"),
(3, "03:00"),
(4, "04:00"),
(5, "05:00"),
(6, "06:00"),
(7, "07:00"),
(8, "08:00"),
(9, "09:00"),
(10, "10:00"),
(11, "11:00"),
(12, "12:00"),
(13, "13:00"),
(14, "14:00"),
(15, "15:00"),
(16, "16:00"),
(17, "17:00"),
(18, "18:00"),
(19, "19:00"),
(20, "20:00"),
(21, "21:00"),
(22, "22:00"),
(23, "23:00"),
],
xrotation=45,
show_grid=True,
fontscale=1.18,
)
hv.save(final, "final.html")
final
</code></pre>
<hr />
<p>Update II:</p>
<p>Inspired by the answer of @chitown88, I have unsuccessfully tried the following code using <a href="/questions/tagged/bokeh" class="post-tag" title="show questions tagged 'bokeh'" aria-label="show questions tagged 'bokeh'" rel="tag" aria-labelledby="tag-bokeh-tooltip-container">bokeh</a> instead of the <a href="/questions/tagged/seaborn" class="post-tag" title="show questions tagged 'seaborn'" aria-label="show questions tagged 'seaborn'" rel="tag" aria-labelledby="tag-seaborn-tooltip-container">seaborn</a> package, trying to get the desired output for <em>output1</em> at least.
My current output:
My current output:
<img src="https://i.imgur.com/G0B5u1t.png" alt="img" /></p>
<hr />
<p>Example output for years:
<img src="https://i.imgur.com/9v7fIs3.png" alt="img" /></p>
<p>As seen in the above example, I want to retrieve and visualize my data at a <strong>smaller time resolution (hourly/daily/monthly)</strong> rather than annually. I need to count the <code>avg cpu</code> observations in the dataframe that are above or below:</p>
<ul>
<li>Output1:</li>
</ul>
<blockquote>
<p>the <strong>hourly</strong> average for one certain day ( i.e., 21st Jan. 2024-01-21 00:00:00 till 2024-01-22 00:00:00)</p>
<ul>
<li>x-axis: hourly timestamp of records for 21 Jan</li>
<li>y-axis: 'avg CPU' usage for 21 Jan</li>
<li>Threshold line: average CPU usage for 21 Jan</li>
</ul>
</blockquote>
<ul>
<li>Output2:</li>
</ul>
<blockquote>
<p>The <strong>daily</strong> average for Jan.</p>
<ul>
<li>x-axis: daily timestamp of Jan (1st-31st)</li>
<li>y-axis: 'avg CPU' usage for Jan</li>
<li>Threshold line: average CPU usage for Jan month</li>
</ul>
</blockquote>
<p>What is an elegant way to process my dataframe to achieve this and to find the thresholds for <em>output1</em> and <em>output2</em> accordingly?</p>
<p><img src="https://i.imgur.com/c9Icu2L.png" alt="img" /></p>
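A hedged sketch of the threshold logic for output1, using synthetic data as a stand-in for azure.csv (the column names and day choice mirror the question; the random values are an assumption): compute the month-wide hourly mean profile as the threshold line, then count one day's hours above and below it.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic stand-in for azure.csv: one avg_cpu sample per hour for January.
idx = pd.date_range("2017-01-01", "2017-01-31 23:00", freq="h")
df = pd.DataFrame({"timestamp": idx, "avg_cpu": rng.normal(50, 10, len(idx))})
df["hour"] = df["timestamp"].dt.hour
df["day"] = df["timestamp"].dt.day

# Threshold line: mean avg_cpu per hour, averaged across all days of the month.
hourly_mean = df.groupby("hour")["avg_cpu"].mean()

# Compare one chosen day's hourly values against that profile.
day21 = df[df["day"] == 21].set_index("hour")["avg_cpu"]
above = int((day21 > hourly_mean).sum())
below = int((day21 <= hourly_mean).sum())
print(above, below)
```

Output2 is the same pattern one level up: group by `day`, use the monthly mean of `avg_cpu` as the threshold, and count days above/below it.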
|
<python><pandas><time-series><bokeh>
|
2024-02-14 13:41:00
| 2
| 2,056
|
Mario
|
77,994,707
| 14,860,526
|
Python unittest/pytest tests priority
|
<p>I have many unit tests in my project, each independent of the others. However, I would like to run the tests in parallel on several cores, and to get the most out of that, the longest tests need to run as early as possible.
Is it possible to give the tests a priority order? Something like:</p>
<pre><code>class TestClass(unittest.TestCase):
@unittest.priority(2)
def test_high_priority(self):
@unittest.priority(1)
def test_medium_priority(self):
@unittest.priority(0)
def test_low_priority(self):
def test_low_priority2(self):
</code></pre>
<p>Many thanks</p>
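Neither unittest nor pytest has a built-in priority decorator (with pytest, the third-party pytest-order plugin and, for parallelism, pytest-xdist are the usual route). A hedged DIY sketch of the idea: a small decorator that records a priority attribute, plus a loader helper that sorts test names by it, highest first.

```python
import unittest

def priority(level):
    """Attach a priority attribute to a test method; higher runs earlier."""
    def deco(fn):
        fn._priority = level
        return fn
    return deco

class TestClass(unittest.TestCase):
    @priority(2)
    def test_high_priority(self): pass
    @priority(1)
    def test_medium_priority(self): pass
    @priority(0)
    def test_low_priority(self): pass

def sorted_by_priority(cls):
    # Undecorated tests default to -1, so they run last.
    names = unittest.TestLoader().getTestCaseNames(cls)
    return sorted(names,
                  key=lambda n: getattr(getattr(cls, n), "_priority", -1),
                  reverse=True)

print(sorted_by_priority(TestClass))
```

The sorted names can then be fed into a suite, e.g. `unittest.TestSuite(TestClass(n) for n in sorted_by_priority(TestClass))`. Note that with parallel runners the scheduler, not the suite order, ultimately decides placement, so a plugin that understands ordering is usually the more robust choice.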
|
<python><unit-testing><testing><pytest>
|
2024-02-14 13:39:45
| 1
| 642
|
Alberto B
|
77,994,610
| 15,803,668
|
LlamaIndex library not respecting LLAMA_INDEX_CACHE_DIR environment variable
|
<p>I'm using the <code>LlamaIndex</code> library in my Python project to handle some data processing tasks. According to the documentation <a href="https://docs.llamaindex.ai/en/stable/getting_started/installation.html#quickstart-installation-from-pip" rel="nofollow noreferrer">(Link)</a>, I can control the location where additional data is downloaded by setting the <code>LLAMA_INDEX_CACHE_DIR</code> environment variable. However, despite setting this environment variable, the <code>LlamaIndex</code> library seems to ignore it and continues to store data in a different location.</p>
<p>Here's how I'm setting the environment variable in my Python script:</p>
<pre><code>import os
os.environ["LLAMA_INDEX_CACHE_DIR"] = "/path/to/my/cache/directory"
</code></pre>
<p>When creating the index storage (see code below), <code>nltk_data</code> gets downloaded to <code>/Users/user/nltk_data</code> instead of the path I set in as the environment variable.</p>
<pre><code>loader = UnstructuredReader()
doc = loader.load_data(file=Path(file), split_documents=False)
storage_context = StorageContext.from_defaults()
cur_index = VectorStoreIndex.from_documents(doc, storage_context=storage_context)
storage_context.persist(persist_dir=f"./storage/name")
</code></pre>
<p>I've checked for typos, ensured correct permissions on the cache directory, and set the environment variable before importing the <code>LlamaIndex</code> library, but the issue persists.</p>
<p>Could anyone suggest why <code>LlamaIndex</code> might not be respecting the <code>LLAMA_INDEX_CACHE_DIR</code> environment variable, and how I can troubleshoot or resolve this issue?</p>
<p>Any insights or suggestions would be greatly appreciated. Thank you!</p>
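One hedged explanation: `LLAMA_INDEX_CACHE_DIR` only covers LlamaIndex's own downloads, while the `nltk_data` folder is placed by NLTK, which honours its own `NLTK_DATA` environment variable (falling back to `~/nltk_data`). A sketch of setting both before any imports; the cache path is hypothetical.

```python
import os

cache_dir = "/path/to/my/cache/directory"  # hypothetical path

# Set both variables BEFORE importing llama_index (or anything that
# imports nltk): LLAMA_INDEX_CACHE_DIR is read by LlamaIndex itself,
# while NLTK_DATA steers where NLTK downloads and looks for nltk_data.
os.environ["LLAMA_INDEX_CACHE_DIR"] = cache_dir
os.environ["NLTK_DATA"] = os.path.join(cache_dir, "nltk_data")

print(os.environ["NLTK_DATA"])
```

Whether a given LlamaIndex version routes its NLTK download through `NLTK_DATA` is an assumption worth verifying against the installed version; setting the variable before the first import is the part that matters either way.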
|
<python><llama-index>
|
2024-02-14 13:22:03
| 1
| 453
|
Mazze
|
77,994,599
| 800,457
|
What are all the known serialization formats of (unix) epoch time?
|
<p>So, the <a href="https://en.wikipedia.org/wiki/Unix_time" rel="nofollow noreferrer">basic definition</a> of epoch time (similar to 'unix time', but not exactly: see below) and what it means is clear. However, nowhere on the wikipedia page or on any other resource I can find is it mentioned that <strong>epoch time can be <strong>serialized</strong> (ie. to JSON, CSV, XML, etc.) in various equally-modern formats with varying degrees of precision</strong>. Which is not a shock, right? Different programming languages do things in different ways. That's a given. But, the semi-consistent usage of epoch time itself across so many languages could possibly indicate one of two things:</p>
<ol>
<li>an actual standard/spec, albeit a weak one with gaps</li>
<li>an informal convention-by-mistake, about which: assume nothing</li>
</ol>
<p>(Note: I am NOT asking about ISO formatted date/time strings and the variants of those. Separate topic. Further, I'm not talking about low-level programming representations of these numbers, like signed vs. unsigned integers, 32 vs 64 bit, leap-seconds vs none, etc. Wikipedia actually covers all that. Just the various <em>serializations</em> of the epoch/unix time, ie. in a console, in JSON, in a database, etc.)</p>
<p>For instance, I've stumbled across at least 3 basic variants:</p>
<ul>
<li>1707882022 - In seconds</li>
<li>1707882022000 - In milliseconds (no decimal indicator)</li>
<li>1707882022.159835 - In seconds, with decimal indicator and (up to) microsecond precision</li>
</ul>
<p>Javascript, for instance, defaults to the milliseconds option, while Python 3 defaults to seconds with microsecond precision.</p>
<p>My question is: <em>is that all the known options/formats? And, is there any common specification/definition of them, or is it all just ad-hoc decisions by language designers, and so you just need to read the specs for all of them, separately?</em></p>
<p>Bonus question: I know that translating between simple mathematical units like these is the appropriate work of programmers. But, as the life of senior programmers involves bringing junior people up to speed on such questions: are there any particular tools which specialize in transparent and ergonomical translation between these particular formats, or is it too seemingly simple/fundamental for anyone to have bothered? (I'll note that it's only simple if you already know what all the possibilities are, and how to recognize them. For which: there's seemingly no guide. And no clear reason why not.)</p>
<p>UPDATE: Some specs, such as <a href="https://www.rfc-editor.org/rfc/rfc7493" rel="nofollow noreferrer">IETF RFC-7493</a>, have attempted to reckon with this question, and to establish a level playing-field for serialization, although they don't help in documenting the current, fragmented state of play.</p>
<p>UPDATE: We have usefully clarified that "Unix time" and "Epoch Time" are not the same things. I don't know if it's formally specified anywhere, but (at least according to Wikipedia) "Unix (System) Time" is always counted in seconds. Whereas, the more general concept of "(Unix) Epoch Time" simply means time since Jan 1, 1970, which is counted by different programming languages in either seconds or milliseconds, at varying levels of precision. So, for the sake of clarity, I've edited my question to be about (Unix) Epoch Time, not Unix Time.</p>
<p>UPDATE: one poster has suggested that DateTime objects serialized directly to JSON are inherently numbers, not strings. Some serialization implementations do indeed behave that way, so we might go so far as to say that it's an unwritten convention. However, it's unfortunately not part of the JSON spec. And, because JSON payloads are often assembled via intermediate stages, it's quite common to have an end-product batch of JSON that doesn't come directly from system DateTime objects, but may have been processed/calculated, or de-/re-serialized somewhere in the pipeline, into another format. For instance, I've got a junior data scientist working in Python who has somehow managed to serialize a datestamp to JSON with 7 digits of decimal precision, rather than the conventional 6. He has no idea how/where it happened, and nothing in his toolchain coerced it to a more conventional value.</p>
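There is no specification covering all three serializations; when normalizing mixed inputs, the usual hedge in practice is a magnitude heuristic. This is an ad-hoc convention, not a standard, and it misreads dates far outside the modern era:

```python
def normalize_epoch(value):
    """Heuristically convert a serialized epoch value to float seconds.

    Convention (ad hoc, not a standard): magnitudes >= 1e11 are assumed
    to be milliseconds (1e11 *seconds* would be the year ~5138); anything
    smaller is taken as seconds, possibly with fractional precision.
    """
    v = float(value)
    return v / 1000.0 if abs(v) >= 1e11 else v

for raw in (1707882022, 1707882022000, 1707882022.159835):
    print(normalize_epoch(raw))
```

The heuristic breaks for pre-1973 millisecond values and far-future second values, where the magnitude ranges overlap, which is exactly why knowing the producer's convention beats guessing.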
|
<javascript><python><datetime><unix-timestamp><epoch>
|
2024-02-14 13:20:06
| 1
| 19,537
|
XML
|
77,994,578
| 3,050,852
|
Python subprocess will not run command
|
<p>For some reason the <code>subprocess</code> call below is not actually running the command or doing the processing it should.</p>
<pre><code> import subprocess
import os
os.chdir('/home/linux/Trial/')
a = open('Attempts.txt', 'r')
b = a.readlines()
a.close()
for c in range(0, len(b)-1):
words = list(b[c].split(" "))
d = len(words)
e = words[d-1]
f = b[c].replace(e, 'Hiccup' + str(c) + '.mp4')
words[d-1] = 'Hiccup' + str(c) + '.mp4'
print(f)
command = 'ffmpeg', '-i', words[2], '-ss', words[4], '-to', words[6], '-c:v copy -a copy', words[11]
print(command)
subprocess.run(['ffmpeg', '-i', words[2], '-ss', words[4], '-to', words[6], '-c:v copy -a copy', words[11]])
</code></pre>
<p>When I run the following code I get the following output:</p>
<pre><code> ffmpeg -i test.mp4 -ss 00:00:00 -to 00:02:34 -c:v copy -a copy Hiccup0.mp4
('ffmpeg', '-i', 'test.mp4', '-ss', '00:00:00', '-to', '00:02:34', '-c:v copy -a copy', 'Hiccup0.mp4')
</code></pre>
<p>I get nothing else. When I check, no new file called Hiccup0 has been created. I have ffmpeg-python installed and updated (I made sure of that this morning).</p>
<p>What gives? As far as I can see this should work, but it doesn't.</p>
<p>Edit:</p>
<p>I decided to try running the subprocess statement in the interpreter instead of only through the program. When I run:</p>
<pre><code> subprocess.run('ffmpeg', '-i', 'SR0.mp4', '-ss', '0', '-to', '82', '-c:v', 'copy' ,'-a', 'copy', 'help.mp4')
</code></pre>
<p>I get the error, bufsize must be an integer. I still ask the same question, what gives? I tried changing everything over to seconds from minutes and hours to make it a digit, as can be seen from the above line, but it still won't work correctly. At least I'm getting an error now, much than just not having it run at all.</p>
<p>I also just tried to put an f in front of 'ffmpeg' and it still gave the same error.</p>
|
<python><ffmpeg><subprocess>
|
2024-02-14 13:14:49
| 1
| 1,341
|
confused
|
77,994,378
| 12,194,774
|
Pandas 'isin' or 'merge' function with multiple columns and dataframes conditions
|
<p>I have 2 dataframes:</p>
<pre><code>d1={'A':[1,3,5,7,8,4,6],'B':[6,4,3,8,1,7,4], 'C':[2,5,8,9,8,4,7]}
df1=pd.DataFrame(data=d1)
d2={'a':[2,8,6,5,7],'b':[6,4,9,3,2]}
df2=pd.DataFrame(data=d2)
</code></pre>
<p>Now, I would like to see which "a" and "b" values of df2 match the "A" and "B" values of df1. This is true for the third row of df1 and the fourth row of df2 ([5, 3]), so the <strong>result</strong> should be a new column in df2 saying True.
The dataframes have different lengths and different numbers of columns. I know there is the function "isin"; I can apply it when I search for a pattern in one column, but not in two columns at once. I also found the function "merge" with indicator=True, but I only understand how to apply it when the dataframes have the same number of columns.
I would be very grateful for help with this situation.</p>
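A hedged sketch of the merge route: merge on both column pairs with `indicator=True`, and mark the df2 rows whose (a, b) pair also occurs as (A, B) in df1. The extra column C is simply left out of the join, so the differing column counts don't matter.

```python
import pandas as pd

df1 = pd.DataFrame({'A': [1, 3, 5, 7, 8, 4, 6],
                    'B': [6, 4, 3, 8, 1, 7, 4],
                    'C': [2, 5, 8, 9, 8, 4, 7]})
df2 = pd.DataFrame({'a': [2, 8, 6, 5, 7],
                    'b': [6, 4, 9, 3, 2]})

# Left-merge against the unique (A, B) pairs; '_merge' == 'both' means
# the (a, b) pair of that df2 row exists somewhere in df1.
marked = df2.merge(df1[['A', 'B']].drop_duplicates(),
                   left_on=['a', 'b'], right_on=['A', 'B'],
                   how='left', indicator=True)
df2['match'] = (marked['_merge'] == 'both').to_numpy()
print(df2)
```

An equivalent without merge is the MultiIndex form of `isin`: `pd.MultiIndex.from_frame(df2[['a', 'b']]).isin(pd.MultiIndex.from_frame(df1[['A', 'B']]))`.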
|
<python><pandas><dataframe>
|
2024-02-14 12:39:42
| 3
| 337
|
HungryMolecule
|
77,994,342
| 18,178,867
|
How to calculate the computational complexity of machine learning algorithms
|
<p>I have used five machine learning algorithms, including the artificial neural network (ANN), random forest, CatBoost, SVM, and K-nearest neighbors (KNN) for a multiclass classification. How can I compute the computational complexity of these models?</p>
<p>I have also utilized the Power-Law Committee Machine (PLCM) to combine the outputs of the five machine learning algorithms as a sixth method. I also need to calculate the computational complexity of this method.</p>
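Theoretical big-O figures are algorithm-specific and documented per model; for a custom combiner like the PLCM the practical hedge is empirical: time `fit` (or `predict`) at growing sample sizes and estimate the exponent b in time ~ a * n^b from a log-log least-squares fit. Sketched here with a dummy O(n^2) workload standing in for `model.fit(X, y)`, which is an assumption, not any particular library's API:

```python
import time
import numpy as np

def _timed(work, n):
    t0 = time.perf_counter()
    work(n)
    return time.perf_counter() - t0

def measure_exponent(work, sizes, repeats=3):
    """Estimate b in time ~ a * n**b via a log-log least-squares fit.

    Takes the minimum of `repeats` runs per size to damp timing noise.
    """
    times = [min(_timed(work, n) for _ in range(repeats)) for n in sizes]
    slope, _intercept = np.polyfit(np.log(sizes), np.log(times), 1)
    return slope

# Dummy workload with O(n^2) running time (stands in for model.fit):
quadratic = lambda n: sum(i ^ j for i in range(n) for j in range(n))

b = measure_exponent(quadratic, sizes=[200, 400, 800, 1600])
print(round(b, 1))  # close to 2 for a quadratic-time workload
```

The same harness applied to each of the six methods, with n being the training-set size (and, separately, the feature count), gives comparable empirical complexity estimates even where no closed-form analysis exists.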
|
<python><machine-learning><scikit-learn>
|
2024-02-14 12:34:47
| 1
| 443
|
Reza
|
77,994,163
| 7,566,673
|
composite score for denoising 1D signal
|
<p>I am using different methods, such as wavelets (with different parameters) and FFT, to denoise a 1-D signal. I am using <code>skimage.metrics</code> to evaluate the denoising, as shown in the snippet below (here <code>signal</code> is the original noisy signal).</p>
<pre><code>import skimage.metrics as M
def get_denoise_metrics(signal, clean_signal):
data_range = (min(signal), max(signal))
d = {
'normalized_root_mse' : [np.round(M.normalized_root_mse(signal, clean_signal), 4)],
'peak_signal_noise_ratio' : [round(M.peak_signal_noise_ratio(signal, clean_signal, data_range =data_range[1]), 4)],
'structural_similarity' : [np.round(M.structural_similarity(signal, clean_signal, data_range =data_range[1]), 4)],
}
return d
</code></pre>
<p>Since I have 3 metrics for each denoising method I am using (the total number of methods is greater than 10), how can I create a composite score so that I can select the best method based on it?</p>
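One hedged recipe (the metric values and equal weights below are illustrative assumptions): min-max normalize each metric across methods, flip the error-type metric (NRMSE, where lower is better) so that higher is uniformly better, then take a weighted mean and pick the argmax.

```python
import numpy as np

# Rows = denoising methods, columns = (nrmse, psnr, ssim); numbers illustrative.
metrics = {
    "wavelet_db4": (0.12, 28.5, 0.91),
    "wavelet_sym8": (0.10, 29.8, 0.93),
    "fft": (0.18, 25.1, 0.85),
}
higher_is_better = (False, True, True)   # NRMSE is an error: lower is better
weights = np.array([1.0, 1.0, 1.0])      # tune to reflect metric importance

M = np.array(list(metrics.values()), dtype=float)
# Min-max normalize each column to [0, 1], guarding against constant columns.
lo, hi = M.min(axis=0), M.max(axis=0)
norm = (M - lo) / np.where(hi > lo, hi - lo, 1.0)
for j, hib in enumerate(higher_is_better):
    if not hib:
        norm[:, j] = 1.0 - norm[:, j]    # flip error metrics

composite = norm @ weights / weights.sum()
best = max(zip(metrics, composite), key=lambda kv: kv[1])[0]
print(best)
```

Min-max normalization makes the score depend on the pool of methods being compared; ranking each metric instead (a Borda-style combination) is the common alternative when that sensitivity is unwanted.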
|
<python><signal-processing><fft><scikit-image><pywavelets>
|
2024-02-14 12:04:36
| 1
| 1,219
|
Bharat Sharma
|
77,994,041
| 10,327,984
|
Memory Error while Generating User-Movie Combinations for Content-Based Recommender
|
<p>I'm working with the MovieLens dataset and aiming to build a content-based recommender using a machine learning algorithm to predict ratings. To achieve this, I need to generate all combinations of users and movies and assign actual ratings for users who have already rated a movie or assign 0 for users who haven't watched it.</p>
<p>I attempted the following method:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from itertools import product
from scipy.sparse import csr_matrix
# Get unique user IDs and movie IDs
all_users = df['userId'].unique()
all_movies = df['movieId'].unique()
# Generate all possible combinations of user IDs and movie IDs
all_combinations = product(all_users, all_movies)
# Convert the combinations into a new DataFrame
result_df = pd.DataFrame(all_combinations, columns=['userId', 'movieId'])
# Merge the new DataFrame with the original DataFrame to get the ratings where available
result_df = pd.merge(result_df, df, on=['userId', 'movieId'], how='left')
# Fill NaN values with 0 to represent movies that users haven't watched
result_df['rating'].fillna(0, inplace=True)
# Display the resulting DataFrame
print(result_df)
</code></pre>
<p>However, I encountered a memory error. I also attempted to convert my DataFrame to a sparse matrix and then back to a DataFrame, but the memory issue persisted:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.sparse import csr_matrix
sparse_matrix = csr_matrix((df['rating'], (df['userId'] , df['movieId'])))
dense_matrix = sparse_matrix.toarray()
df_back = pd.DataFrame(data=dense_matrix, index=df['userId'].unique(), columns=df['movieId'].unique())
</code></pre>
<p>I'm looking for a solution to avoid this problem without scaling horizontally or vertically because I'm working on my personal computer. Any suggestions would be greatly appreciated. Thanks!</p>
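The dense grid itself is the problem: for the larger MovieLens releases the user x movie product runs to billions of cells, so both `product(all_users, all_movies)` and `toarray()` have to be avoided. A hedged sketch (toy data standing in for the ratings frame) that keeps the ratings sparse end to end, with unrated pairs remaining implicit zeros:

```python
import pandas as pd
from scipy.sparse import csr_matrix

# Toy stand-in for the MovieLens ratings frame.
df = pd.DataFrame({"userId":  [10, 10, 42, 99],
                   "movieId": [1, 7, 7, 3],
                   "rating":  [4.0, 3.5, 5.0, 2.0]})

# Factorize the raw ids into dense 0..n-1 row/column indices
# (raw ids need not be contiguous, so don't use them directly as indices).
u_idx, users = pd.factorize(df["userId"])
m_idx, movies = pd.factorize(df["movieId"])
movie_col = {m: j for j, m in enumerate(movies)}

R = csr_matrix((df["rating"], (u_idx, m_idx)),
               shape=(len(users), len(movies)))

# Unrated cells stay implicit zeros: no dense user x movie grid is built,
# so memory scales with the number of ratings, not users * movies.
print(R.shape, R.nnz)
print(R[0, movie_col[7]])   # user 10's rating of movie 7
```

Most scikit-learn-style estimators accept this `csr_matrix` directly; when a model genuinely needs the zero-filled combinations, generating them lazily in batches (e.g. per user) is the usual compromise on a single machine.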
|
<python><pandas><dataframe><recommendation-engine>
|
2024-02-14 11:44:31
| 0
| 622
|
Mohamed Amine
|
77,993,956
| 19,672,778
|
keras.models.load_model() fails to load model from .keras file
|
<p>I am trying to load a model from a .keras zip file, but it fails and gives me the error below. How can I fix it?</p>
<pre class="lang-py prettyprint-override"><code>modelvggNew = keras.models.load_model('/kaggle/working/vgg15.keras')
</code></pre>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[33], line 1
----> 1 keras.models.load_model('/kaggle/working/vgg15.keras')
File /usr/local/lib/python3.10/site-packages/keras/src/saving/saving_api.py:185, in load_model(filepath, custom_objects, compile, safe_mode)
183 return legacy_h5_format.load_model_from_hdf5(filepath)
184 elif str(filepath).endswith(".keras"):
--> 185 raise ValueError(
186 f"File not found: filepath={filepath}. "
187 "Please ensure the file is an accessible `.keras` "
188 "zip file."
189 )
190 else:
191 raise ValueError(
192 f"File format not supported: filepath={filepath}. "
193 "Keras 3 only supports V3 `.keras` files and "
(...)
202 "might have a different name)."
203 )
ValueError: File not found: filepath=/kaggle/working/vgg15.keras. Please ensure the file is an accessible `.keras` zip file.
</code></pre>
<p>Even though the file path exists, loading fails.</p>
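Keras raises this particular "File not found" message when its own existence check on the `.keras` path fails, so it is worth verifying the path from the same process right before `load_model`; that catches relative-path/stale-cwd mismatches in notebooks as well as truncated saves. A hedged pre-flight sketch:

```python
import os
import zipfile

def check_keras_file(path):
    """Pre-flight diagnostics before keras.models.load_model(path)."""
    d = os.path.dirname(path) or "."
    if not os.path.isdir(d):
        return f"directory missing: {d} (cwd is {os.getcwd()})"
    if not os.path.isfile(path):
        return f"file missing; directory contains: {os.listdir(d)}"
    if not zipfile.is_zipfile(path):
        # A v3 .keras file is a zip archive; anything else won't load.
        return "file exists but is not a zip archive (truncated save?)"
    return "ok"

print(check_keras_file("/kaggle/working/vgg15.keras"))
```

If the check says "ok" and loading still fails, a Keras 2 vs Keras 3 version mismatch between the saving and loading environments is the next assumption to test.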
|
<python><tensorflow><keras>
|
2024-02-14 11:29:37
| 4
| 319
|
NikoMolecule
|
77,993,932
| 709,439
|
How to get detailed status from flask background runner?
|
<p>I have to implement a very long-running (many minutes, perhaps hours) method in my Python Flask API.</p>
<p>I was planning to use flask_executor.
My code is something like this:</p>
<pre><code>from flask import Flask, request, jsonify
from flask_executor import Executor
import uuid
import time
app = Flask(__name__)
executor = Executor(app)
class BackgroundRunner:
def __init__(self, executor):
self.executor = executor
def task(self):
percent_completed = 0
time.sleep(1000)
percent_completed = 33
time.sleep(1000)
percent_completed = 66
time.sleep(1000)
percent_completed = 100
def task_async(self):
task_id = uuid.uuid4().hex # generate a unique task ID
task = self.executor.submit(task_id)
return task_id
def task_status(self, task_id):
task = self.executor.futures._state(task_id)
if task.done():
return "completed"
else:
return "running"
background_runner = BackgroundRunner(executor)
@app.route('/slow_task', methods=['POST'])
def slow_task():
task_id = background_runner.task_async()
return jsonify({"task_id": task_id})
@app.route('/slow_task_status/<task_id>')
def slow_task_status(task_id):
status = background_runner.task_status(task_id)
return jsonify({"status": status})
if __name__ == '__main__':
app.run()
</code></pre>
<p>My problem is that I need more information about task completion in slow_task_status, like a percentage value...<br />
Is this possible using the <code>flask_executor</code> package?</p>
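Two fixes are needed in the sketch above: `executor.submit` must receive the callable plus its arguments (`submit(task_id)` submits a string, not a task), and the percentage has to live somewhere both the worker and the status endpoint can read, e.g. a module-level dict keyed by task id. A hedged sketch using a plain `ThreadPoolExecutor` so it runs outside Flask; flask_executor wraps the same `submit` API, and for multi-process deployments the dict would need to become an external store such as Redis (an assumption beyond this sketch).

```python
import time
import uuid
from concurrent.futures import ThreadPoolExecutor

progress = {}                        # task_id -> percent complete
executor = ThreadPoolExecutor(max_workers=2)

def slow_task(task_id):
    for pct in (33, 66, 100):
        time.sleep(0.01)             # stand-in for the real work
        progress[task_id] = pct      # the task reports its own progress

def start_task():
    task_id = uuid.uuid4().hex
    progress[task_id] = 0
    executor.submit(slow_task, task_id)   # submit the callable AND its args
    return task_id

def task_status(task_id):
    pct = progress.get(task_id)
    return {"status": "completed" if pct == 100 else "running",
            "percent": pct}

tid = start_task()
while task_status(tid)["status"] != "completed":
    time.sleep(0.01)                 # a /status endpoint would poll like this
print(task_status(tid))
```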
|
<python><python-3.x><flask>
|
2024-02-14 11:25:34
| 0
| 17,761
|
MarcoS
|
77,993,912
| 4,575,197
|
cuML UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan
|
<p>I am trying to train an RF regression using <code>GridSearchCV</code>. I changed all column dtypes to float32, but I still get these warnings that I'm not sure how to solve.</p>
<p>my code:</p>
<pre><code>combined_df=cpd.concat([train_df,evaluate_df])
combined_df = combined_df.astype({
'Mcap_w': 'float32',
'constant': 'int32',
'TotalAssets': 'float32',
'NItoCommon_w': 'float32',
'NIbefEIPrefDiv_w': 'float32',
'PrefDiv_w': 'float32',
},error='raise')
print(combined_df.iloc[:,2:].info(),combined_df['Mcap_w'])
test_fold = [0] * len(train_df) + [1] * len(evaluate_df)
#p_test3 = {'n_estimators':[50,100,200,300,500],'max_depth':[3,4,5,6,7,8], 'max_features':[5,10,15,21]}
p_test3 = {'n_estimators':[20,50,200,500],'max_depth':[3,5,7,10], 'max_features':[25]}
tuning = GridSearchCV(estimator =cuRFr(n_streams=1, min_samples_split=2, min_samples_leaf=1, random_state=0),
param_grid = p_test3, scoring='r2', cv=PredefinedSplit(test_fold=test_fold))
tuning.fit(combined_df.iloc[:,2:],combined_df['Mcap_w'])
print(tuning.best_score_)
tuning.cv_results_, tuning.best_params_, tuning.best_score_
</code></pre>
<p>the print output:</p>
<pre><code> # Column Non-Null Count Dtype
--- ------ -------------- -----
0 TotalAssets 896 non-null float32
1 NItoCommon_w 896 non-null float32
2 NIbefEIPrefDiv_w 896 non-null float32
dtypes: float32(3)
memory usage: 101.5 KB
None
</code></pre>
<p>for <code>print(combined_df['Mcap_w'])</code> this would return a series as <code>Name: Mcap_w, Length: 896, dtype: float32</code></p>
<p>Then I get 32 warnings followed by 32 TypeErrors, because I am using GridSearchCV.</p>
<blockquote>
<p>miniconda3/envs/rapid/lib/python3.10/site-packages/sklearn/model_selection/_validation.py:988: <em><strong>UserWarning</strong></em>: Scoring failed. The score on this train-test partition for these parameters will be set to nan.</p>
</blockquote>
<blockquote>
<p>TypeError: Implicit conversion to a host NumPy array via <strong>array</strong> is not allowed, To explicitly construct a GPU matrix, consider using .to_cupy()
To explicitly construct a host matrix, consider using .to_numpy().</p>
</blockquote>
<blockquote>
<p>miniconda3/envs/rapid/lib/python3.10/site-packages/sklearn/model_selection/_validation.py:988: <em><strong>UserWarning</strong></em>: Scoring failed. The score on this train-test partition for these parameters will be set to nan.</p>
</blockquote>
<blockquote>
<p>miniconda3/envs/rapid/lib/python3.10/site-packages/cudf/core/frame.py", line 402, in <strong>array</strong>
raise TypeError( <em><strong>TypeError</strong></em>: Implicit conversion to a host NumPy array via <strong>array</strong> is not allowed, To explicitly construct a GPU matrix, consider using .to_cupy()
To explicitly construct a host matrix, consider using .to_numpy().</p>
</blockquote>
|
<python><typeerror><rapids><cuml><user-warning>
|
2024-02-14 11:22:29
| 1
| 10,490
|
Mostafa Bouzari
|
77,993,894
| 3,828,463
|
Error in Python trying to create a list of a certain length after a call to Fortran to get the length
|
<p>I have a piece of Python code which needs to call Fortran to get the length of a list Python needs to create (so that it can pass that list of the correct size back to Fortran later).</p>
<p>Here is the code:</p>
<pre><code># get size of x
n = ct.c_int(0)
fs(ct.byref(n)) # fs is a Fortran subroutine which returns the length
print('n: ',n)
x = [0]*n
print('x: ',x)
</code></pre>
<p>However Python complains:</p>
<pre><code> x = [0]*n
TypeError: can't multiply sequence by non-int of type 'c_long'
</code></pre>
<p>If I change the initialization of n to n=0, the byref call on the next line fails with:</p>
<pre><code> fs(ct.byref(n))
TypeError: byref() argument must be a ctypes instance, not 'int'
</code></pre>
<p>How do I create a list of length n (obtained from Fortran)?</p>
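A ctypes wrapper object keeps its number in the `.value` attribute; `n` itself stays a `c_int` (which is also why reassigning `n = 0` breaks the later `byref` call). A hedged sketch with the Fortran call simulated, since `fs` is not available here:

```python
import ctypes as ct

n = ct.c_int(0)
# fs(ct.byref(n))    # the real call: Fortran writes the length into n
n.value = 5          # simulate what fs would have stored

# c_int is a wrapper object; the plain Python int lives in .value,
# so unwrap it before using it in list arithmetic.
x = [0] * n.value
print(len(x), x)
```

If the buffer is later passed back to Fortran, a real ctypes array such as `(ct.c_double * n.value)()` is usually the better container than a Python list, since it can be handed to the library directly.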
|
<python><fortran>
|
2024-02-14 11:19:13
| 1
| 335
|
Adrian
|