| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,671,711
| 10,200,497
|
How can I filter groups by comparing the first value of each group and the last cummax that changes conditionally?
|
<p>My DataFrame:</p>
<pre><code>import pandas as pd

df = pd.DataFrame(
    {
        'group': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c', 'd', 'd', 'd', 'e', 'e', 'e'],
        'num': [1, 2, 3, 1, 12, 12, 13, 2, 4, 2, 5, 6, 10, 20, 30]
    }
)
</code></pre>
<p>Expected output is getting three groups from above <code>df</code></p>
<pre><code> group num
0 a 1
1 a 2
2 a 3
group num
6 c 13
7 c 2
8 c 4
group num
12 e 10
13 e 20
14 e 30
</code></pre>
<p>Logic:</p>
<p>I want to compare the first value of each group to the last <code>cummax</code> of the <code>num</code> column. I can explain this better with code:</p>
<pre><code>df['last_num'] = df.groupby('group')['num'].tail(1)
df['last_num'] = df.last_num.ffill().cummax()
</code></pre>
<p>But I think what I really need is this <code>desired_cummax</code>:</p>
<pre><code> group num last_num desired_cummax
0 a 1 NaN 3
1 a 2 NaN 3
2 a 3 3.0 3
3 b 1 3.0 3
4 b 12 3.0 3
5 b 12 12.0 3
6 c 13 12.0 3
7 c 2 12.0 3
8 c 4 12.0 4
9 d 2 12.0 4
10 d 5 12.0 4
11 d 6 12.0 4
12 e 10 12.0 4
13 e 20 12.0 4
14 e 30 30.0 30
</code></pre>
<p>I don't want a new <code>cummax</code> if the first value of <code>num</code> for each group is less than <code>last_num</code>.</p>
<p>For example, for group <code>b</code>, the first value of <code>num</code> is 1. Since it is less than its <code>last_num</code>, when it reaches the end of group <code>b</code> it should not put 12. It should still be 3.</p>
<p>Now for group <code>c</code>, since its first value is more than <code>last_num</code>, when it reaches at the end of group <code>c</code>, a new <code>cummax</code> will be set.</p>
<p>After that I want to filter the groups: keep a group if <code>df.num.iloc[0] > df.desired_cummax.iloc[0]</code> holds for that group.</p>
<p>Note that the first group should be in the expected output no matter what.</p>
<p>Maybe there is a better approach to solve this. But this is what I have thought might work.</p>
<p>My attempt was creating <code>last_num</code> but I don't know how to continue.</p>
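<p>For what it's worth, here is a sketch of the filtering logic described above (a hypothetical approach, not benchmarked: walk the groups in order, keep a running threshold, and only let a <em>kept</em> group update it):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'group': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c', 'd', 'd', 'd', 'e', 'e', 'e'],
    'num': [1, 2, 3, 1, 12, 12, 13, 2, 4, 2, 5, 6, 10, 20, 30],
})

kept = []
threshold = None                 # the running "desired cummax"
for _, g in df.groupby('group', sort=False):
    first, last = g['num'].iloc[0], g['num'].iloc[-1]
    if threshold is None or first > threshold:   # first group is always kept
        kept.append(g)
        threshold = last         # only a KEPT group updates the threshold
print([g['group'].iloc[0] for g in kept])
```

<p>This reproduces the expected groups <code>a</code>, <code>c</code> and <code>e</code> for the sample data, but it is a plain Python loop rather than a vectorised answer.</p>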
|
<python><pandas><dataframe>
|
2024-06-26 10:03:35
| 1
| 2,679
|
AmirX
|
78,671,562
| 1,037,407
|
Parsing a source code with just one digit
|
<p>Could someone please walk me through how CPython parses a file containing just one character, <code>1</code>?</p>
<p>In particular, why <code>ast.parse("3")</code> returns <code>...Expr(...)...</code> as (I believe) Python's source code is a list of statements?</p>
<p>In other words, reading <a href="https://docs.python.org/3/reference/grammar.html" rel="nofollow noreferrer">the grammar</a> how do I go from <code>file</code> to ... <code>atom</code> (I guess)?</p>
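<p>For reference, dumping the tree makes the wrapper explicit: the grammar's <code>file</code> rule produces a sequence of statements, a bare expression is parsed as an <em>expression statement</em>, and the AST represents that wrapper as <code>Expr</code> around the literal:</p>

```python
import ast

tree = ast.parse("3")  # parsed in "exec" mode, i.e. as a whole file
print(ast.dump(tree))  # Module(body=[Expr(value=Constant(value=3))], ...)
```

<p>So the path is roughly <code>file → statements → simple statement → expression statement → ... → atom</code>, with <code>Expr</code> being the AST node for "a statement that consists of just an expression".</p>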
|
<python><parsing><cpython>
|
2024-06-26 09:35:06
| 1
| 11,598
|
Ecir Hana
|
78,671,534
| 3,070,007
|
How to Download a CSV File from a Blob URL Using Selenium in Python?
|
<p>I'm trying to automate the download of a CSV file from a Blob URL on a dynamic website using Selenium with Python. The CSV download is triggered by clicking a button, but the button click generates a Blob URL, which isn't directly accessible via traditional HTTP requests. I'm having trouble capturing and downloading the file from this Blob URL.</p>
<p>Here is the example of url: <a href="https://snapshot.org/#/aave.eth/proposal/0x70dfd865b78c4c391e2b0729b907d152e6e8a0da683416d617d8f84782036349" rel="nofollow noreferrer">https://snapshot.org/#/aave.eth/proposal/0x70dfd865b78c4c391e2b0729b907d152e6e8a0da683416d617d8f84782036349</a></p>
<p><a href="https://i.sstatic.net/vTIYnwVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vTIYnwVo.png" alt="enter image description here" /></a></p>
<p>Here is what the link looks like when I check my download history:</p>
<p>blob:<a href="https://snapshot.org/4b2f45e9-8ca3-4105-b142-e1877e420c84" rel="nofollow noreferrer">https://snapshot.org/4b2f45e9-8ca3-4105-b142-e1877e420c84</a></p>
<p>I've checked this post as well: <a href="https://stackoverflow.com/questions/48034696/python-how-to-download-a-blob-url-video">Python: How to download a blob url video?</a>. It says that I cannot download it, but I think that does not make sense, since when I open the webpage manually I can click the download button and get the file.</p>
<p>This is the code that I've tried before</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
import time
# Setup ChromeDriver
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
# URL of the proposal page
url = 'https://snapshot.org/#/aave.eth/proposal/0x70dfd865b78c4c391e2b0729b907d152e6e8a0da683416d617d8f84782036349'
# Navigate to the page
driver.get(url)
try:
    # Wait up to 20 seconds until the expected button is found using its attributes
    wait = WebDriverWait(driver, 20)
    download_button = wait.until(EC.element_to_be_clickable((By.XPATH, "//button[contains(.,'svg')]")))
    download_button.click()
    print("Download initiated.")
except Exception as e:
    print(f"Error: {e}")

# Wait for the download to complete
time.sleep(5)

# Close the browser
driver.quit()
</code></pre>
<p>When I inspect the Download CSV button, here is what I get:</p>
<pre><code><svg viewBox="0 0 24 24" width="1.2em" height="1.2em"><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M4 16v1a3 3 0 0 0 3 3h10a3 3 0 0 0 3-3v-1m-4-4l-4 4m0 0l-4-4m4 4V4"></path></svg>
</code></pre>
|
<python><selenium-webdriver><selenium-chromedriver><blob>
|
2024-06-26 09:30:46
| 1
| 1,585
|
rischan
|
78,671,448
| 2,706,344
|
Insert lists into DataFrame cells
|
<p>Look at this code:</p>
<pre><code>df=pd.DataFrame(data=[['a','b'],['c','d']],columns=['B','C'])
df.insert(loc=0,column='A',value='Hello world!')
</code></pre>
<p>What it does: it inserts a new column named <code>'A'</code> and writes <code>'Hello world!'</code> into each cell of that new column. But now watch this:</p>
<pre><code>df=pd.DataFrame(data=[['a','b'],['c','d']],columns=['B','C'])
df.insert(loc=0,column='A',value=['Hello','world!'])
</code></pre>
<p>Here we obtain a new column named <code>'A'</code> again, but this time we get <code>'Hello'</code> in the first cell and <code>'world!'</code> in the second. However, I want to have the list <code>['Hello','world!']</code> in both cells. How do I modify the call of the insert function to obtain what I want?</p>
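<p>One possible sketch (not the only way): pass one copy of the list per row, so the value's length matches the index and pandas stores each list as a single cell value:</p>

```python
import pandas as pd

df = pd.DataFrame(data=[['a', 'b'], ['c', 'd']], columns=['B', 'C'])
input_list = ['Hello', 'world!']
# one independent copy of the list per row
df.insert(loc=0, column='A', value=[list(input_list) for _ in range(len(df))])
print(df['A'].tolist())
```

<p>Using independent copies (rather than <code>[input_list] * len(df)</code>) avoids every cell sharing one mutable list object.</p>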
|
<python><pandas><dataframe>
|
2024-06-26 09:15:19
| 0
| 4,346
|
principal-ideal-domain
|
78,671,295
| 14,833,503
|
ElasticNetCV in Python: Get full grid of hyperparameters with corresponding MSE?
|
<p>I have fitted an ElasticNetCV in Python with three splits:</p>
<pre><code>import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import TimeSeriesSplit
#Sample data:
num_samples = 100 # Number of samples
num_features = 1000 # Number of features
X = np.random.rand(num_samples, num_features)
Y = np.random.rand(num_samples)
#Model
l1_ratios = np.arange(0.1, 1.1, 0.1)
tscv=TimeSeriesSplit(max_train_size=None, n_splits=3)
regr = ElasticNetCV(cv=tscv.split(X), random_state=42,l1_ratio=l1_ratios)
regr.fit(X,Y)
</code></pre>
<p>Now I want to get the whole grid of combinations of hyperparameters with the corresponding MSE as a DataFrame. I tried the following. However, the problem is that the resulting data frame points to a combination of hyperparameters that is not the minimum reported by the ElasticNetCV object, which can be obtained via <code>regr.alpha_</code> and <code>regr.l1_ratio_</code>:</p>
<pre><code>import pandas as pd

mse_path = regr.mse_path_
alpha_path = regr.alphas_
# Reshape mse_path to have l1_ratios, n_alphas, cross_validation_step as separate columns
mse_values = mse_path.flatten()
alpha_values = alpha_path.flatten()
l1_values=np.tile(l1_ratios ,int(alpha_values.shape[0]/l1_ratios.shape[0]))
repeated_l1_ratios = np.repeat(l1_ratios, 100)
# mse_path has dimensions (11, 100, 3)
array_3d = mse_path
# Flatten the 3D array into a 2D array
# Each sub-array of shape (100, 3) becomes a row in the new 2D array
array_2d = array_3d.reshape(-1, 3)
# Create a DataFrame from the 2D array
df = pd.DataFrame(array_2d, columns=['MSE Split1', 'MSE Split2', 'MSE Split3'])
df['alpha_values'] = alpha_values
df['l1_values'] = repeated_l1_ratios
</code></pre>
<p>The following then results in a hyperparameter combination that is not the true one. So when combining the MSEs and the hyperparameter values, something is wrong:</p>
<pre><code># Calculate the minimum MSE for each row across the three splits
df['Min MSE'] = df[['MSE Split1', 'MSE Split2', 'MSE Split3']].min(axis=1)
# Identify the row with the overall minimum MSE
min_mse_row_index = df['Min MSE'].idxmin()
# Retrieve the row with the minimum MSE
min_mse_row = df.loc[min_mse_row_index]
print("Row with the minimum MSE across all splits:")
print(min_mse_row)
</code></pre>
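<p>For comparison, here is a sketch of how the grid could be rebuilt without flattening the two arrays independently. It assumes that <code>mse_path_</code> has shape <code>(n_l1_ratios, n_alphas, n_folds)</code> and <code>alphas_</code> has shape <code>(n_l1_ratios, n_alphas)</code> (each <code>l1_ratio</code> carries its own alpha path), and that the winning combination is the one with the lowest <em>mean</em> MSE across folds, which is how ElasticNetCV selects it. The data here is smaller than in the question, purely for speed:</p>

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(42)
X = rng.random((100, 20))   # fewer features than the question, for speed
Y = rng.random(100)

l1_ratios = np.arange(0.1, 1.1, 0.1)
tscv = TimeSeriesSplit(n_splits=3)
regr = ElasticNetCV(cv=tscv.split(X), random_state=42, l1_ratio=l1_ratios)
regr.fit(X, Y)

# Pair each alpha with ITS l1_ratio: alphas_ is 2-D when l1_ratio is a list,
# so flattening alphas_ and tiling l1_ratios independently scrambles the pairing.
records = []
for i, l1 in enumerate(l1_ratios):
    for j, alpha in enumerate(regr.alphas_[i]):
        records.append({'l1_ratio': l1, 'alpha': alpha,
                        'mean_mse': regr.mse_path_[i, j].mean()})
grid = pd.DataFrame(records)

# Select by the lowest MEAN MSE across folds, not the lowest single-fold MSE
best = grid.loc[grid['mean_mse'].idxmin()]
print(best['alpha'], best['l1_ratio'])
```

<p>With this pairing, <code>best['alpha']</code> and <code>best['l1_ratio']</code> should match <code>regr.alpha_</code> and <code>regr.l1_ratio_</code>.</p>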
|
<python><machine-learning><scikit-learn><sklearn-pandas>
|
2024-06-26 08:49:51
| 2
| 405
|
Joe94
|
78,670,658
| 1,371,666
|
Cannot find play button on a web page in selenium using python
|
<p>I am using Python 3.11.9 on Windows with Google Chrome Version 126.0.6478.127 (Official Build) (64-bit).<br>
Here is the simple code:</p>
<pre><code>import time

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
url = "https://onlineradiofm.in/stations/vividh-bharati"
driver.get(url)
radio_is_paused = True
while radio_is_paused:
    time.sleep(30)
    play_buttons = driver.find_elements(By.XPATH, '/html/body/div[2]/div/div/div/div[1]/div/div[1]/div/div[2]/div[1]/svg[1]/g/circle')
    if len(play_buttons) > 0:
        print(len(play_buttons), ' play buttons found')
    else:
        print('play button not found')
driver.quit()
</code></pre>
<p>I can see that the web page opens in a browser and the play button is visible.<br>
I am not able to find out why I get 'play button not found'.<br>
I have also tried the XPath <code>//*[@id="play"]/g/circle</code>, but got the same result.<br></p>
|
<python><python-3.x><google-chrome><selenium-webdriver><selenium-chromedriver>
|
2024-06-26 06:35:50
| 1
| 481
|
user1371666
|
78,670,441
| 485,330
|
AWS Lambda Timeout to Amazon SES
|
<p>The code below to send an Amazon SES email works fine. However, I need the code to communicate with a local EC2 database, so I need to add this Lambda function to my VPC and subnets. At that point, the code below stops working and times out.</p>
<p><strong>How can I fix this?</strong></p>
<pre><code>import json
import boto3

def send_email_ses(email):
    client = boto3.client('ses', region_name='eu-west-1')
    try:
        response = client.send_email(
            Destination={
                'ToAddresses': [email]
            },
            Message={
                'Body': {
                    'Text': {
                        'Charset': 'UTF-8',
                        'Data': 'Hello world',
                    }
                },
                'Subject': {
                    'Charset': 'UTF-8',
                    'Data': 'Welcome! Your API Key',
                },
            },
            Source='source@domain.com'
        )
        return response['MessageId']
    except Exception as e:
        print(f"An error occurred: {str(e)}")
        return None

def lambda_handler(event, context):
    email = "test@domain.com"
    message_id = send_email_ses(email)
    if message_id:
        body = f"Email Sent Successfully. MessageId is: {message_id}"
        status_code = 200
    else:
        body = "Failed to send email."
        status_code = 500
    return {
        'statusCode': status_code,
        'body': json.dumps(body)
    }
</code></pre>
<p>Error Message:
<code>Response { "errorMessage": "2024-06-26T05:12:37.998Z 34457ba1-910f-4f54-9ced-234dac1c0950 Task timed out after 5.01 seconds" }</code></p>
<p>If I remove the VPC, it works again.</p>
|
<python><amazon-web-services><aws-lambda><amazon-vpc><amazon-ses>
|
2024-06-26 05:19:40
| 2
| 704
|
Andre
|
78,670,098
| 2,989,330
|
Wrapped functions of a Python module raise TypeError
|
<p>I am currently trying to replace a module in a large code base in a certain condition and to figure out when any function of this module is called, I wrap each function/method in the module with a warning that prints a short message with the stack trace. The implementation of this method is as follows (printing the stack trace was omitted):</p>
<pre><code>def wrap_module_with_warnings(module):
    for fn_name, fn in inspect.getmembers(module):
        if (not (inspect.isfunction(fn) or inspect.ismethod(fn))
                or fn_name.startswith('_')
                or inspect.getmodule(fn) is not module):
            continue

        @functools.wraps(fn)
        def wrapped_fn(*args, **kwargs):
            warnings.warn(f"The function {fn_name} was called.")
            return fn(*args, **kwargs)

        setattr(module, fn_name, wrapped_fn)
</code></pre>
<p>Unfortunately, this function doesn't work as I'd expect. My expectation is that it replaces all functions in <code>module</code> with its wrapped variant that just prints a warning and then calls the original function. However, in reality, it throws <code>TypeError: 'module' object is not callable</code>:</p>
<pre><code>/tmp/question/main.py:22: UserWarning: The function random was called.
warnings.warn(f"The function {fn_name} was called.")
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File /tmp/question/main.py:31
29 print(f"Output before wrapping: {mod.f()}")
30 wrap_module_with_warnings(mod)
---> 31 print(f"Output after wrapping: {mod.f()}")
File /tmp/question/main.py:23, in wrap_module_with_warnings.<locals>.wrapped_fn(*args, **kwargs)
20 @functools.wraps(fn)
21 def wrapped_fn(*args, **kwargs):
22 warnings.warn(f"The function {fn_name} was called.")
---> 23 return fn(*args, **kwargs)
TypeError: 'module' object is not callable
</code></pre>
<p>To demonstrate my problem, I'm using the following main code:</p>
<pre><code>import mod

if __name__ == '__main__':
    print(f"Output before wrapping: {mod.f()}")
    wrap_module_with_warnings(mod)
    print(f"Output after wrapping: {mod.f()}")
</code></pre>
<p>and the example module <code>mod.py</code>:</p>
<pre><code>import random

def f() -> int:
    return random.randint(0, 1000)
</code></pre>
<p>The error suggests that <code>mod.f</code> is a module. Examining the type of <code>mod.f</code> after calling my wrapper method reveals that it's actually a function:</p>
<pre><code>In [1]: type(mod.f)
Out[1]: function
</code></pre>
<p>To examine the original function, I changed the last line of <code>wrap_module_with_warnings</code> to:</p>
<pre><code>setattr(module, fn_name, (fn, wrapped_fn))
</code></pre>
<p>and examining <code>mod.f</code> after calling the wrapper doesn't help much either:</p>
<pre><code>In [1]: mod.f[0]
Out[1]: <function mod.f() -> int>
In [2]: mod.f[1]
Out[2]: <function mod.f() -> int>
In [3]: type(mod.f[0])
Out[3]: function
In [4]: type(mod.f[1])
Out[4]: function
In [5]: mod.f[0]()
Out[5]: 785
In [6]: mod.f[1]()
/tmp/AscendSpeed/q/main.py:22: UserWarning: The function random was called.
warnings.warn(f"The function {fn_name} was called.")
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 mod.f[1]()
File /tmp/AscendSpeed/q/main.py:23, in wrap_module_with_warnings.<locals>.wrapped_fn(*args, **kwargs)
20 @functools.wraps(fn)
21 def wrapped_fn(*args, **kwargs):
22 warnings.warn(f"The function {fn_name} was called.")
---> 23 return fn(*args, **kwargs)
TypeError: 'module' object is not callable
</code></pre>
<p>I'm pretty sure that this code should work, <code>fn</code> <em>should</em> be the function <code>mod.f</code>, but why do I get the <code>TypeError</code> then?</p>
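<p>For what it's worth, this looks like the classic late-binding closure pitfall: <code>wrapped_fn</code> looks up the loop variables <code>fn</code> and <code>fn_name</code> at <em>call</em> time, and by then the loop has finished, leaving them bound to the last member that <code>inspect.getmembers</code> yielded (plausibly the imported <code>random</code> module, which would also explain the warning text saying "The function random was called"). A sketch of one common fix, freezing the current values with a factory function whose default arguments are evaluated at definition time:</p>

```python
import functools
import inspect
import sys
import types
import warnings

def wrap_module_with_warnings(module):
    for fn_name, fn in inspect.getmembers(module):
        if (not (inspect.isfunction(fn) or inspect.ismethod(fn))
                or fn_name.startswith('_')
                or inspect.getmodule(fn) is not module):
            continue

        # The factory freezes the CURRENT fn/fn_name as defaults; without it,
        # every wrapper would see whatever the loop variables held last.
        def make_wrapper(fn=fn, fn_name=fn_name):
            @functools.wraps(fn)
            def wrapped_fn(*args, **kwargs):
                warnings.warn(f"The function {fn_name} was called.")
                return fn(*args, **kwargs)
            return wrapped_fn

        setattr(module, fn_name, make_wrapper())

# Hypothetical stand-in for mod.py from the question
mod = types.ModuleType("mod")
exec("import random\ndef f() -> int:\n    return random.randint(0, 1000)",
     mod.__dict__)
sys.modules["mod"] = mod  # lets inspect.getmodule resolve f back to mod

wrap_module_with_warnings(mod)
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    value = mod.f()
print(value)
```

<p>With the factory in place, <code>mod.f()</code> calls the original <code>f</code> again instead of whatever the loop left behind.</p>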
|
<python><types><typeerror><metaprogramming>
|
2024-06-26 02:10:59
| 1
| 3,203
|
Green 绿色
|
78,670,018
| 9,872,200
|
python pandas loop through dataframe replicate many tables in excel
|
<p>I want to convert a large dataframe into a series of report tables that replicate the template for each unique id within the dataframe, separated by a skipped Excel row. I would like to do this with a series of loops. I could map each item in the df to an Excel file, but that would take several thousand lines given the size of the dataframe. Any help would be much appreciated!</p>
<pre><code>import pandas as pd

data = {'id': [1, 2, 3],
        'make': ['ford', 'chevrolet', 'dodge'],
        'model': ['mustang', 'comaro', 'challenger'],
        'year': ['1969', '1970', '1971'],
        'color': ['blue', 'red', 'green'],
        'miles': ['15000', '20000', '35000'],
        'seats': ['leather', 'cloth', 'leather']}

df = pd.DataFrame(data)
df.to_excel(r'/desktop/reports/output1.xlsx')
</code></pre>
<p>Proposed outcome in excel (one row is skipped between id groupings):</p>
<pre><code> A B C D E F
1 make ford year 1969 miles 15000
2 model mustang color blue seats leather
3
4 make chevrolet year 1970 miles 20000
5 model comaro color red seats cloth
6
7 make dodge year 1971 miles 35000
8 model challenger color green seats leather
</code></pre>
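<p>One possible sketch of the loop: build the whole block layout in memory first, then write once (the write itself is commented out here; the path is the one from the question and would need to exist):</p>

```python
import pandas as pd

data = {'id': [1, 2, 3],
        'make': ['ford', 'chevrolet', 'dodge'],
        'model': ['mustang', 'comaro', 'challenger'],
        'year': ['1969', '1970', '1971'],
        'color': ['blue', 'red', 'green'],
        'miles': ['15000', '20000', '35000'],
        'seats': ['leather', 'cloth', 'leather']}
df = pd.DataFrame(data)

rows = []
for _, rec in df.iterrows():
    rows.append(['make', rec['make'], 'year', rec['year'], 'miles', rec['miles']])
    rows.append(['model', rec['model'], 'color', rec['color'], 'seats', rec['seats']])
    rows.append([''] * 6)            # blank separator row between id blocks
layout = pd.DataFrame(rows[:-1])     # drop the trailing blank row

# layout.to_excel(r'/desktop/reports/output1.xlsx', header=False, index=False)
print(layout.shape)
```

<p>Writing each two-row block separately with <code>to_excel(..., startrow=...)</code> inside a <code>pd.ExcelWriter</code> should also work, I believe, but building one <code>layout</code> frame keeps it to a single write.</p>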
|
<python><excel><pandas><report>
|
2024-06-26 01:23:23
| 1
| 513
|
John
|
78,669,908
| 3,486,684
|
Why is `re.Pattern` generic?
|
<pre class="lang-py prettyprint-override"><code>import re
x = re.compile(r"hello")
</code></pre>
<p>In the above code, <code>x</code> is determined to have type <code>re.Pattern[str]</code>. But why is <code>re.Pattern</code> generic, and then specialized to string? What does a <code>re.Pattern[int]</code> represent?</p>
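<p>The type parameter is the string type the pattern operates on: <code>re.compile</code> returns <code>re.Pattern[str]</code> for a <code>str</code> pattern and <code>re.Pattern[bytes]</code> for a <code>bytes</code> pattern (in the stubs the class is generic over <code>AnyStr</code>, so <code>re.Pattern[int]</code> is not something <code>re.compile</code> can ever produce):</p>

```python
import re

p_str = re.compile(r"hello")     # typed as re.Pattern[str]
p_bytes = re.compile(rb"hello")  # typed as re.Pattern[bytes]

print(p_str.match("hello"), p_bytes.match(b"hello"))
```

<p>The generic exists so that a type checker can reject mixed usage, e.g. <code>p_str.match(b"hello")</code>.</p>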
|
<python><mypy><python-typing><python-re>
|
2024-06-26 00:11:00
| 1
| 4,654
|
bzm3r
|
78,669,790
| 17,835,120
|
Clicking on popup with selenium
|
<p>Trying to click on this button but cannot:</p>
<p>last attempt:</p>
<pre><code># Attempt to locate and click the 'Start' button
try:
    start_button = WebDriverWait(driver, 20).until(
        EC.element_to_be_clickable((By.XPATH, "//button[text()='Start']"))
    )
    start_button.click()
except Exception as e:  # except clause added so the snippet is syntactically complete
    print(f"Error: {e}")
</code></pre>
<p>Any ideas why this fails, or alternative approaches to achieve this?</p>
<p><a href="https://i.sstatic.net/2JlF7rM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2JlF7rM6.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver>
|
2024-06-25 22:59:18
| 0
| 457
|
MMsmithH
|
78,669,675
| 299,754
|
Python requests: how to get the client port used *before* sending a request?
|
<p>I'm trying to debug an apparently low level intermittent ConnectionError (likely happening at the TCP layer).
For this I've built a multiprocessed program spamming a remote site and catching the relevant error, and I am monitoring the traffic via Wireshark with the goal of capturing a full faulty TCP conversation.
It's difficult to match an error with a specific TCP conversation in Wireshark because of the really high throughput (the error is very infrequent, so I need a lot of requests to see it happen once!), so I'd like to record the client-side port used in the connection for each request (or at least for failing ones) so that I can log it when there's an error and find the culprit TCP stream.</p>
<p><a href="https://stackoverflow.com/a/32311849/299754">This solution using socket.fromfd</a> was the closest I found, except of course it doesn't work here because the exception happens while making the request so <code>r</code> doesn't exist in case of error and I can't access the fd.</p>
<p>Is there some way to get/create the socket before sending the request so I can know which client port will be used? Or some hook maybe in urllib3 after the socket creation but before the TCP conversation starts, where I can access/log it?</p>
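<p>Not a full answer, but the primitive involved looks like this: a socket's ephemeral port is assigned as soon as the socket is bound (or connected), and it can be read back with <code>getsockname()</code>. Hooking this into requests would mean getting at urllib3's connection object before the request is sent, which is beyond this sketch:</p>

```python
import socket

# Illustration only: port 0 asks the OS to pick a free ephemeral port,
# and getsockname() reveals which one was chosen.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
client_port = s.getsockname()[1]
print(client_port)
s.close()
```

<p>If a pre-bound socket could be handed to the HTTP layer (or its connection subclassed to log <code>getsockname()</code> right after connecting), the port would be known before any bytes are sent.</p>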
|
<python><python-requests>
|
2024-06-25 22:09:41
| 0
| 6,928
|
Jules Olléon
|
78,669,632
| 10,203,572
|
One-liner split and map within list comprehension
|
<p>I have this bit for parsing some output from stdout:</p>
<pre><code>out_lines = res.stdout.split("\n")
out_lines = [e.split() for e in out_lines]
out_vals = [{"date":e[0],
"time":e[1],
"size":e[2],
"name":e[3]} for e in out_lines if e]
</code></pre>
<p>Is there an idiomatic way to merge the second and third lines here so that the splitting and mapping happen within the same line, without redundant calls to <code>e.split()</code>?</p>
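<p>One idiomatic option (Python 3.8+) is an assignment expression in the comprehension's filter, which splits each line exactly once (<code>res</code> is mocked here so the snippet runs standalone):</p>

```python
from types import SimpleNamespace

# Hypothetical stdout with two data lines and a trailing empty line
res = SimpleNamespace(stdout="2024-06-25 21:51 1024 a.txt\n2024-06-26 09:30 2048 b.txt\n")

out_vals = [
    {"date": p[0], "time": p[1], "size": p[2], "name": p[3]}
    for line in res.stdout.split("\n")
    if (p := line.split())        # walrus: split once, skip empty lines
]
print(out_vals)
```

<p>The walrus operator both binds <code>p</code> and filters out empty lines, since <code>"".split()</code> is falsy.</p>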
|
<python>
|
2024-06-25 21:51:19
| 4
| 1,066
|
Layman
|
78,669,542
| 12,347,371
|
How to send a telegram message without blocking main thread
|
<pre class="lang-py prettyprint-override"><code>from telegram import Bot
import asyncio

TOKEN = "blah"
CHAT_ID = 1

async def send_message_async():
    message = "This is a test update from the bot!"
    bot = Bot(token=TOKEN)
    await bot.send_message(chat_id=CHAT_ID, text=message)
    print("Message sent successfully!")

async def main():
    print("Before sending message...")
    await send_message_async()
    print("After sending message...")  # I don't want this to be blocked until the message is sent

if __name__ == "__main__":
    asyncio.run(main())
</code></pre>
<p>I'm trying to send a message without blocking the whole thread, but I'm struggling to do so. Is there any way?</p>
<p>I am hoping to find a simpler API for the above problem, something like below:</p>
<pre class="lang-py prettyprint-override"><code>def send_message():
    # Sends a message to the user asynchronously, without blocking the main thread
    pass

def main():
    # Main function
    send_message()
</code></pre>
<p>But it seems like, to call one async function, every other function has to be converted into an async function as well.</p>
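<p>One pattern worth sketching: inside <code>main</code>, <code>asyncio.create_task</code> schedules the coroutine and returns immediately, so the next line runs before the send finishes. The real <code>bot.send_message</code> call is replaced by a sleep here so the snippet is self-contained:</p>

```python
import asyncio

events = []

async def send_message_async():
    # stand-in for `await bot.send_message(...)`
    await asyncio.sleep(0.1)
    events.append("sent")

async def main():
    events.append("before")
    task = asyncio.create_task(send_message_async())  # schedule, don't await yet
    events.append("after")   # reached immediately, before the send completes
    await task               # let the task finish before the loop closes

asyncio.run(main())
print(events)
```

<p>The caller still has to be async (and must keep a reference to the task so it can eventually be awaited or gathered), but nothing between <code>create_task</code> and the final <code>await</code> blocks on the send.</p>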
|
<python><python-asyncio><telegram><python-telegram-bot>
|
2024-06-25 21:14:13
| 1
| 675
|
Appaji Chintimi
|
78,669,465
| 11,829,999
|
Pandas: using lists as cell values - different approaches to modifying values, and NaN issues
|
<p>This is 50% a question and 50% an observation that baffles me a bit. Maybe someone can enlighten me.</p>
<p>Also I would like to know opinions on using lists as cell values. Yes/No and why please.</p>
<p>Here is a trivial example:</p>
<pre><code>data = [[['apple', 'banana'],1], [['grape', 'orange'],2], [['banana', 'lemon'],4]]
df = pd.DataFrame(data, columns=['Fruit', 'Count'])
</code></pre>
<p>which results in:</p>
<pre><code> Fruit Count
0 [apple, banana] 1
1 [grape, orange] 2
2 [banana, lemon] 4
</code></pre>
<p>Given a new list:</p>
<pre><code>input_list = ['melon', 'kiwi']
</code></pre>
<p><strong>The using 'loc' approach:</strong></p>
<p>(A) Outright doesn't work.</p>
<pre><code>df.loc[df['Count'] == 2, 'Fruit'] = [input_list] # with or without wrapping brackets is both bust
</code></pre>
<p>(B) Using Series also doesn't work</p>
<pre><code>ser = pd.Series(input_list) # NO wrapping which is an incorrect length Series object - fair enough
df.loc[df['Count'] == 2, 'Fruit'] = ser
# wrong result --->
Fruit Count
0 [apple, banana] 1
1 kiwi 2
2 [banana, lemon] 4
</code></pre>
<p>(C) Series Take 2</p>
<pre><code>ser = pd.Series([input_list]) # WITH wrapping = Series --> 0 [melon, kiwi]
df.loc[df['Count'] == 2, 'Fruit'] = ser
# wrong result ---> NaN??? HUH?
Fruit Count
0 [apple, banana] 1
1 NaN 2
2 [banana, lemon] 4
</code></pre>
<p><strong>The using 'at' approach:</strong></p>
<p>(D)</p>
<pre><code>mask = df['Count'] == 2
mask_match_idx = df[mask].index.values[0] # first match int value
df.at[mask_match_idx, 'Fruit'] = input_list
# results in (finally) the correct result
Fruit Count
0 [apple, banana] 1
1 [melon, kiwi] 2
2 [banana, lemon] 4
</code></pre>
<p>I understand that B is bust because of the wrong length Series object.
But why are (A) (or a version thereof) and (C) wrong? Or how could they work? Especially the NaN result is confusing. Why is that happening?</p>
<p>Is the conclusion to always use 'at' in those kind of cases?</p>
<p>And again: what are the opinions on using lists as cell values, given that issues like this can happen? I would love some input here, and potential alternative suggestions if lists are a no-go.</p>
<p>Thank you!</p>
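<p>A minimal illustration of why (C) produces <code>NaN</code>: <code>.loc</code> assignment with a Series on the right-hand side aligns by index <em>label</em>, not by position. <code>pd.Series([input_list])</code> has only label <code>0</code>, the selected row has label <code>1</code>, so the aligned value is missing. <code>.at</code> (approach D) sets a single cell as one scalar object, bypassing alignment, which is why it works:</p>

```python
import math

import pandas as pd

s = pd.Series([0.0, 0.0, 0.0])   # labels 0, 1, 2
rhs = pd.Series([10.0])          # only label 0, like pd.Series([input_list])
s.loc[[1]] = rhs                 # aligned by LABEL: 1 is missing in rhs -> NaN
print(s.tolist())
```

<p>The same label-alignment rule is what turned approach (C) into <code>NaN</code> in the Fruit example above.</p>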
|
<python><pandas><dataframe><nan>
|
2024-06-25 20:47:23
| 0
| 508
|
sebieire
|
78,669,299
| 4,421,575
|
rearrange columns in dataframe depending on sorting output
|
<p>I have the following data frame:</p>
<pre><code>df = pd.DataFrame(
    {
        'a': [1, 2, 3, 4, 5, 6],
        'b': [1, 1, 3, 3, 5, 5],
        'c': [1, 2, 3, 4, 5, 6],
        'd': [1, 1, 1, 1, 1, 5],
    }
)
In [1051]: df
Out[1051]:
a b c d
0 1 1 1 1
1 2 1 2 1
2 3 3 3 1
3 4 3 4 1
4 5 5 5 1
5 6 5 6 5
</code></pre>
<p>If I sort the df using all the columns, I get the following:</p>
<pre><code>In [1055]: columns = list(df.columns)
...:
...: dfSorted = df.sort_values(by=columns, ascending=False)
...:
...: print(dfSorted)
a b c d
5 6 5 6 5
4 5 5 5 1
3 4 3 4 1
2 3 3 3 1
1 2 1 2 1
0 1 1 1 1
</code></pre>
<p>I'd like to rearrange the order of the columns, going from the column with the fewest differences among the rows to the column with the most differences (which should end up last). In my example, the expected order should then be d, b, c, a.</p>
<p>This is so because column <code>d</code> has only two different values (1 and 5) while columns <code>c</code> and <code>a</code> have all of the values different. Column <code>b</code> lies in between ...</p>
<pre><code>In [1056]: dfSorted[['d','b','c','a']]
Out[1056]:
d b c a
5 5 5 6 6
4 1 5 5 5
3 1 3 4 4
2 1 3 3 3
1 1 1 2 2
0 1 1 1 1
</code></pre>
<p>Any idea? Thanks!</p>
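<p>A possible sketch: <code>nunique()</code> counts distinct values per column, and sorting that Series gives the requested order. Note that <code>a</code> and <code>c</code> are tied (6 distinct values each), so their relative order depends on the sort unless a tie-breaker is added:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],
                   'b': [1, 1, 3, 3, 5, 5],
                   'c': [1, 2, 3, 4, 5, 6],
                   'd': [1, 1, 1, 1, 1, 5]})

# nunique() per column -> sort ascending -> column order, least varied first
order = df.nunique().sort_values(kind='stable').index.tolist()
dfSorted = df.sort_values(by=list(df.columns), ascending=False)[order]
print(order)
```

<p>With a stable sort the tied columns keep their original relative order, so this yields <code>['d', 'b', 'a', 'c']</code>; either ordering of the tied pair matches the stated criterion equally well.</p>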
|
<python><pandas><sorting>
|
2024-06-25 19:55:17
| 2
| 1,509
|
Lucas Aimaretto
|
78,669,277
| 2,952,838
|
Define a matrix in R in a similar fashion as with Numpy
|
<p>I love the fact that with numpy in Python it is very easy to define a matrix/array in a way that is very close to the mathematical definition.
Does R have a similar way of achieving this result?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
n = np.arange(4)
m = np.arange(6)
## Now I will define the matrix N_mat[i,j]=m[i]*n[j] with 6 rows
## and 4 columns
N_mat = m[:,np.newaxis]*n[np.newaxis,:]
</code></pre>
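<p>As a point of comparison, the numpy construction above is just an outer product; base R's <code>outer(m, n)</code> (or the <code>%o%</code> operator) is, I believe, the closest analogue. In numpy itself the shorthand is <code>np.outer</code>:</p>

```python
import numpy as np

n = np.arange(4)
m = np.arange(6)
# broadcasting version from the question: N_mat[i, j] = m[i] * n[j]
N_mat = m[:, np.newaxis] * n[np.newaxis, :]
print(N_mat.shape)  # (6, 4)
```

<p><code>np.outer(m, n)</code> produces the same 6x4 matrix, which is exactly what R's <code>outer</code> computes for two vectors.</p>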
|
<python><r><numpy><matrix>
|
2024-06-25 19:48:50
| 1
| 1,543
|
larry77
|
78,669,161
| 9,116,959
|
How to Remove Wildcard Imports from a Large Python Repository?
|
<p>I am working on a large Python project that contains many thousands of files and includes a mix of independent scripts (many with main functions) and common modules.
Unfortunately, my project makes extensive use of wildcard imports (e.g., <code>from module import *</code>).
Because of this, I cannot automatically optimize away unused imports.</p>
<p>My goal is to replace these wildcard imports with explicit imports, but every time I start, I encounter hundreds of unresolved reference errors all over my project. I am looking for a systematic approach to tackle this task efficiently.</p>
<p>The codebase is quite large, making manual resolution impractical.</p>
<p>Has anyone here ever encountered and solved such a frustrating task?
Where Should I Start?</p>
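<p>Not a complete solution, but one systematic starting point is mechanical: for each file, collect the names it actually references with <code>ast</code>, intersect them with what the star-imported module exports, and emit an explicit import line. A toy sketch (the <code>math</code> example is hypothetical; real files would be read from disk and may star-import several modules):</p>

```python
import ast

def names_used(source: str) -> set:
    """All bare names referenced anywhere in a module's source."""
    return {node.id for node in ast.walk(ast.parse(source))
            if isinstance(node, ast.Name)}

# Hypothetical file that star-imports from math
src = "from math import *\nprint(sqrt(pi))\n"

import math
exported = {name for name in dir(math) if not name.startswith('_')}
needed = sorted(names_used(src) & exported)
print(f"from math import {', '.join(needed)}")
```

<p>Caveats worth keeping in mind: names can shadow each other across multiple star imports, and dynamic access (<code>getattr</code>, <code>eval</code>) is invisible to this analysis, so the output should be reviewed rather than applied blindly.</p>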
|
<python><import><refactoring>
|
2024-06-25 19:11:17
| 1
| 632
|
Kfir Ettinger
|
78,669,116
| 5,527,646
|
How to use gdal_translate in Python
|
<p>I am trying to convert a <code>netCDF</code> file to a <code>Cloud Optimized GeoTiff</code> (COG) format. I have been able to do so successfully in the OSGeo4W Shell using the following command:</p>
<pre><code>gdal_translate -of GTiff NETCDF:my-precip-data.nc:monthly_prcp_norm my_precip-monthly-normal.tif -of COG -co COMPRESS=LZW
</code></pre>
<p>However, I want to be able to run gdal_translate inside a Python script. I have tried the following, but it doesn't seem to write to the output folder:</p>
<pre><code>import os
import sys
from osgeo import gdal, osr, ogr
import subprocess
input_folder_path = r"C:\MyTest\data"
output_folder_path = r"C:\MyTest\output"
gdal_translate = r"C:\Users\my_user\.conda\envs\geospatial\Library\bin\gdal_translate.exe"
test_input_nc = os.path.join(input_folder_path, 'my-precip-data.nc')
test_output_tif = os.path.join(output_folder_path, 'my_precip-monthly-normal.tif')
command = [f'{gdal_translate} -of GTiff NETCDF:{test_input_nc}:mlyprcp_norm {test_output_tif} -of COG -co COMPRESS=LZW']
output = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
</code></pre>
<p>This runs, but nothing happens. The output COG tif is never written to the output folder. Any suggestions as to what I am doing wrong here? I also know that there is a <code>gdal.Translate()</code> method, but I don't see good documentation on how to use it to translate a <code>.nc</code> file to a COG <code>.tif</code>.</p>
|
<python><subprocess><netcdf><gdal><geotiff>
|
2024-06-25 18:57:26
| 0
| 1,933
|
gwydion93
|
78,669,086
| 2,153,235
|
np.ones(30011,30011) needs 7.2GB, but Task Manager shows 5.3GB
|
<p>A 30,011x30,011 array of 64-bit floats takes 7.2GB. There are many explanations for why one might see <code>np.zeros([30011,30011])</code> take up (say) a minuscule 0.7MB. However, my Windows 10 Task Manager also shows noticeably less memory than expected for <code>np.ones([30011,30011])</code>. Specifically, 5.3GB, which is almost 2GB less than the expected 7.2GB.</p>
<p>What is the explanation for this?</p>
<p><strong>Afternote:</strong> The 5.3GB includes other objects in the main namespace other than <code>x=np.ones([30011,30011])</code>, albeit smaller objects. Stranger still, <code>x=x*3.14</code> causes memory usage to drop further to 3GB. It dropped further to 1.7GB within a few minutes even though I didn't do anything.</p>
|
<python><memory><memory-management><numpy-ndarray>
|
2024-06-25 18:50:29
| 1
| 1,265
|
user2153235
|
78,669,082
| 11,999,957
|
Is there a way to speed up vectorized log operations in scipy optimize?
|
<p>I'm doing some optimization in which I am calculating the $ amount needed to get the change in conversion using elasticities. Originally I used simple elasticities, but am thinking of converting to compounding elasticities. However, just this simple switch has increased my optimization time dramatically: from about 3 minutes to days.</p>
<p>Simple Elasticity:</p>
<pre><code>(ChangeInConversion * 100 / Elasticity) * -1
</code></pre>
<p>Compounding Elasticity:</p>
<pre><code>(np.log(1 + ChangeInConversion ) / np.log(1 + Elasticity)) * 100 * -1
</code></pre>
<p>Are there ways to speed things up, or is this an expected behavior if you start using logs in your optimization functions? I'm using the Scipy Optimize package. Both <code>ChangeInConversion</code> and <code>Elasticity</code> are vectors of values (np.arrays).</p>
<p>Thanks!</p>
<p>Edit: The function I am using is:</p>
<pre><code>def FDollarChange(OptimizedUnitCount):
    _OptimizedUnitCount = np.array(OptimizedUnitCount)
    _OptimizedUnitCountByChannel = np.dot(ChannelMatrix.toarray(), _OptimizedUnitCount)
    _ChangeInUnitCountByChannel = _OptimizedUnitCountByChannel - CurrentUnitCountByChannel
    ChangeInConversion = _ChangeInUnitCountByChannel / CurrentUnitCountByChannel
    # expand out to full rows
    ChangeInConversion = np.dot(ChannelMatrixT.toarray(), ChangeInConversion)
    DollarChange = (ChangeInConversion * 100 / Elasticity) * -1  # simple method
    # DollarChange = (np.log(1 + ChangeInConversion) / np.log(1 + Elasticity)) * 100 * -1  # compounding method
    return DollarChange, ChangeInConversion

def FTotalProfit(OptimizedUnitCount):
    _OptimizedUnitCount = np.array(OptimizedUnitCount)
    _DollarChange, _ChangeInConversion = FDollarChange(OptimizedUnitCount)
    PostProfit = PreProfit + _DollarChange
    TotalProfit = np.dot(_OptimizedUnitCount, TotalProfit)
    return PostProfit, TotalProfit

def ObjectiveFunction(OptimizedUnitCount):
    _OptimizedUnitCount = np.array(OptimizedUnitCount)
    _PostProfit, _TotalProfit = FTotalProfit(_OptimizedUnitCount)
    return (_TotalProfit * -1)
</code></pre>
<p>Where:</p>
<ul>
<li><code>OptimizedUnitCount</code>: Is basically the x values in the optimizer.</li>
<li><code>ChannelMatrix</code>: Is basically a sparse matrix that does a group-by sum, allowing me to calculate a grouped <code>OptimizedUnitCount</code></li>
<li><code>ChannelMatrixT</code>: Is ChannelMatrix transposed that converts things back to the original vector size.</li>
</ul>
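<p>Independent of the root cause, one micro-optimisation worth noting for the commented-out compounding line: <code>np.log(1 + Elasticity)</code> does not depend on the optimizer's decision variables, so it can be computed once outside the objective instead of on every function evaluation, and <code>np.log1p</code> is the idiomatic (and more accurate near zero) spelling of <code>log(1 + x)</code>. A sketch with made-up data:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
ChangeInConversion = rng.uniform(-0.5, 0.5, 100_000)
Elasticity = rng.uniform(0.01, 0.5, 100_000)

# Hoisted out of the objective: constant across optimizer iterations
inv_log_elasticity = 1.0 / np.log1p(Elasticity)   # np.log1p(x) == log(1 + x)

DollarChange = -100.0 * np.log1p(ChangeInConversion) * inv_log_elasticity
reference = (np.log(1 + ChangeInConversion) / np.log(1 + Elasticity)) * 100 * -1
print(np.allclose(DollarChange, reference))
```

<p>This alone will not turn days back into minutes if the slowdown comes from the optimizer taking more iterations on the new objective, but it removes a log and a division from every evaluation.</p>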
|
<python><numpy><optimization><scipy><mathematical-optimization>
|
2024-06-25 18:48:36
| 1
| 541
|
we_are_all_in_this_together
|
78,668,998
| 13,597,979
|
Withdrawn window not reappearing with deiconify after filedialog in Tkinter on MacOS
|
<p>The code below is meant to ask for a file name with <code>filedialog</code> with a hidden root window, after which the root reappears and shows a label that has the selected filename. However, on MacOS 14.5 and Python 3.9.6, the <code>deiconify</code> does not make the window reappear. I have to click on the Python icon in the dock in order for the window to appear. If I replace the <code>filedialog</code> line with <code>file_name = '\example\file\name'</code>, no such issue occurs. How can I make the window appear without needing to click on the icon?</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import Tk, filedialog, Label

root = Tk()
root.withdraw()
file_name = filedialog.askopenfilename(parent=root, title="Select File")
if file_name:
    Label(root, text=file_name, padx=20, pady=20).pack()
    root.update()
    root.deiconify()
    root.mainloop()
else:
    root.destroy()
</code></pre>
<p><em>Edit</em>: Interestingly, if I add <code>cursor="cross"</code> to the <code>Label</code>, the cursor does indeed change to a cross when hovering over the location in the screen where the window eventually appears. So it seems to be there somehow, and is just invisible.</p>
|
<python><tkinter>
|
2024-06-25 18:26:39
| 1
| 550
|
TimH
|
78,668,978
| 4,701,426
|
Condition in a function to stop it when called in a loop?
|
<p>Please consider this simple function:</p>
<pre><code>def my_func(x):
    if x > 5:
        print(x)
    else:
        quit()
    print('this should be printed only if x > 5')
</code></pre>
<p>Then if we call this function in a loop:</p>
<pre><code>for i in [2, 3, 4, 5, 6, 7]:
    my_func(i)
</code></pre>
<p><strong>Expected output:</strong></p>
<pre class="lang-none prettyprint-override"><code>6
this should be printed only if x > 5
7
this should be printed only if x > 5
</code></pre>
<p>But <code>quit</code> actually disconnects the Kernel.</p>
<p>I know that the following function will work but I do not want to have the second print up there:</p>
<pre><code>def my_func(x):
    if x > 5:
        print(x)
        print('this should be printed only if x > 5')
    else:
        pass
</code></pre>
<p>Lastly, I know that if I put the loop inside the function, I can use <code>continue</code> or <code>break</code> but I prefer to keep the function simple and instead put the function call in a loop.
So, what needs to change in the first function to achieve the expected output?</p>
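<p>One possible approach (a sketch, not the only way): replace <code>quit()</code> with a plain <code>return</code>, which ends only the current call and lets the caller's loop continue:</p>

```python
def my_func(x):
    if x <= 5:
        return  # leave this call only; the for-loop in the caller keeps going
    print(x)
    print('this should be printed only if x > 5')

for i in [2, 3, 4, 5, 6, 7]:
    my_func(i)
```

<p><code>quit()</code> is meant for interactive sessions and raises <code>SystemExit</code>, which is why it tears down the whole kernel instead of just skipping one call.</p>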
|
<python>
|
2024-06-25 18:17:29
| 3
| 2,151
|
Saeed
|
78,668,957
| 1,367,705
|
ImportError: cannot import name 'Ollama' from 'llama_index.llms' (unknown location) - installing dependencies does not solve the problem
|
<p>I want to learn LLMs. I run Ollama with the following Docker Compose file - it's running:</p>
<pre><code>services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - 11434:11434
    volumes:
      - ollama_data:/root/.ollama
    healthcheck:
      test: ollama list || exit 1
      interval: 10s
      timeout: 30s
      retries: 5
      start_period: 10s
  ollama-models-pull:
    image: curlimages/curl:8.6.0
    command: >-
      http://ollama:11434/api/pull -d '{"name": "mistral"}'
    depends_on:
      ollama:
        condition: service_healthy
volumes:
  ollama_data:
</code></pre>
<p>I would like to write a Python app, which will use ollama, and I found this piece of code:</p>
<pre><code>from llama_index.llms import Ollama, ChatMessage

llm = Ollama(model="mistral", base_url="http://127.0.0.1:11434")
messages = [
    ChatMessage(
        role="system", content="you are a multi lingual assistant used for translation and your job is to translate nothing more than that."
    ),
    ChatMessage(
        role="user", content="please translate message in triple tick to french ``` What is standard deviation?```"
    )
]
resp = llm.chat(messages=messages)
print(resp)
</code></pre>
<p>I installed all dependencies:</p>
<pre><code>python3 -m venv venv
source venv/bin/activate
pip install llama-index
pip install llama-index-llms-ollama
pip install ollama-python
</code></pre>
<p>However, when I run the app, I got:</p>
<pre><code>Traceback (most recent call last):
File "/home/user/test.py", line 1, in <module>
from llama_index.llms import Ollama, ChatMessage
ImportError: cannot import name 'Ollama' from 'llama_index.llms' (unknown location)
</code></pre>
<p>where can be the problem?</p>
|
<python><large-language-model><llama-index><ollama>
|
2024-06-25 18:11:54
| 2
| 2,620
|
mazix
|
78,668,860
| 18,227,234
|
Is there a clean way to handle reuse of try...except blocks that force early function return?
|
<p>I'm writing an Azure Functions API, with many different endpoints with similar internal behavior. One major step in each endpoint is to take each naked POST and parse out any and all arguments (comes in as a dictionary) to a custom request object I define that can be further used for all function behavior. Each request object expects certain keys to be present in the request's data, and I raise exceptions within the object to direct program flow within the main function towards an early response with error code.</p>
<p>My main question is: I have to reuse these <code>try...except</code> blocks in every function as I want the responses to be the same, is there a way to abstract this out to a single reusable block?</p>
<p>This is an example of how one endpoint would look:</p>
<pre class="lang-py prettyprint-override"><code>from requests import ShowUsersRequest
def my_endpoint(req: azure.functions.HttpRequest):
    try:
        wrapped_request = ShowUsersRequest(req)
    except KeyError:
        return func.HttpResponse('Request missing argument(s)', status_code=400)
    except ValueError as e:
        return func.HttpResponse(f'One or more request arguments was of invalid type. {e.args[0]}', status_code=400)
    ...  # rest of function continues
</code></pre>
<p>And what a request object would look like:</p>
<pre class="lang-py prettyprint-override"><code># requests.py
class ShowUsersRequest:
    def __init__(self, req: func.HttpRequest) -> None:
        # KeyErrors would arise from a missing key in the params dict
        self.identifier = req.params['account_identifier']
        self.client_code = int(req.params['client_code'])

        # raise ValueError to represent some kind of validation problem
        if self.client_code < 0:
            raise ValueError('Client code must be non-negative')
</code></pre>
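<p>One common way to abstract this is a decorator factory that wraps every endpoint in the shared <code>try...except</code>. This is a sketch: the <code>make_response</code> callable below is a hypothetical stand-in for however you construct <code>func.HttpResponse</code> objects, so the decorator stays framework-agnostic and testable:</p>

```python
import functools

def catch_request_errors(make_response):
    """Wrap an endpoint so KeyError/ValueError raised while building the
    request object become early error responses.  `make_response(body, status)`
    is whatever response constructor the framework uses (e.g. a thin wrapper
    around func.HttpResponse)."""
    def decorator(endpoint):
        @functools.wraps(endpoint)
        def wrapper(req, *args, **kwargs):
            try:
                return endpoint(req, *args, **kwargs)
            except KeyError:
                return make_response('Request missing argument(s)', 400)
            except ValueError as e:
                return make_response(f'Invalid argument type. {e.args[0]}', 400)
        return wrapper
    return decorator

# Hypothetical usage with a tuple-based response for illustration
@catch_request_errors(lambda body, status: (body, status))
def my_endpoint(req):
    # parsing that may raise KeyError (missing key) or ValueError (bad type)
    wrapped = {'client_code': int(req['client_code'])}
    return ('ok', 200)

print(my_endpoint({}))                    # ('Request missing argument(s)', 400)
print(my_endpoint({'client_code': '7'}))  # ('ok', 200)
```

<p>One design note: this moves the <code>try</code> from around just the request-object construction to around the whole endpoint body, so a <code>ValueError</code> raised later in the function would also be caught; if that matters, keep the construction in a small helper and wrap only that.</p>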
|
<python><azure-functions>
|
2024-06-25 17:46:43
| 1
| 318
|
walkrflocka
|
78,668,711
| 20,920,790
|
How to get pd.Dataframe from ClickHouseHook()?
|
<p>I got this code in my DAG for Airflow:</p>
<pre><code>import pandas as pd
import datetime
import io
import httpx

from airflow.decorators import dag, task
from airflow.models import Variable
from clickhouse_driver import Client
from airflow_clickhouse_plugin.hooks.clickhouse import ClickHouseHook
from airflow.models import Connection
from airflow.utils.db import create_session

default_args = {
    'owner': 'd-chernovol',
    'depends_on_past': False,
    'retries': 2,
    'retry_delay': datetime.timedelta(minutes=5),
    'start_date': datetime.datetime(2024, 6, 20)
}
schedule_interval = '*/20 * * * *'

host = Variable.get('host')
database_name = Variable.get('database_name')
user_name = Variable.get('user_name')
password_for_db = Variable.get('password_for_db')
bot_token = Variable.get('bot_token')
chat_id = Variable.get('chat_id')

connections_name = 'clickhouse_default'

try:
    # create Connection
    clickhouse_conn = Connection(
        conn_id=connections_name,
        conn_type='ClickHouse',
        host=host,
        schema=database_name,
        login=user_name,
        password=password_for_db,
        port=8443
    )
    # create connection
    with create_session() as session:
        session.add(clickhouse_conn)
        session.commit()
    # make ClickHouseHook
    ch_hook = ClickHouseHook(clickhouse_conn_id=connections_name)
except:
    ch_hook = ClickHouseHook(clickhouse_conn_id=connections_name)

@dag(default_args=default_args, schedule_interval=schedule_interval, catchup=False, concurrency=3)
def test_dag_4():
    @task
    def get_table_from_db(sql_query):
        result = ch_hook.execute(sql_query)
        return result

    @task
    def send_msg(bot_token: str, chat_id: str, message: str):
        # send_message
        url = f'https://api.telegram.org/bot{bot_token}/sendMessage?chat_id={chat_id}&text={message}'
        client = httpx.Client()
        return client.post(url)

    get_table_from_db_task >> send_msg_task

st_dag = test_dag_4()
</code></pre>
<p>My code is working, but <code>get_table_from_db(sql_query)</code> returns:</p>
<pre><code><class 'airflow.models.xcom_arg.PlainXComArg'>
</code></pre>
<p>If I convert result to string I get list with values from table:</p>
<pre><code>[[111, 'name_1'], [222, 'name_2'], [222, 'name_3']]
</code></pre>
<p>But I need to get the table from my database as a pandas DataFrame.
How can I achieve that? Or should I write a function for this?
The main problem is that I get only the values, without the column names.</p>
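<p>One possible approach (a sketch, with assumptions): <code>clickhouse_driver</code>'s <code>execute()</code> accepts <code>with_column_types=True</code>, which returns the rows together with <code>(name, type)</code> pairs for each column, so you can build the DataFrame yourself. If the hook forwards keyword arguments to the driver, <code>ch_hook.execute(sql_query, with_column_types=True)</code> may work directly; otherwise go through the hook's underlying connection:</p>

```python
import pandas as pd

def ch_result_to_df(result):
    """Build a DataFrame from clickhouse_driver's
    execute(..., with_column_types=True) output:
    (rows, [(column_name, column_type), ...])."""
    rows, col_types = result
    return pd.DataFrame(rows, columns=[name for name, _ in col_types])

# Hypothetical shape of the driver's return value, mimicking the question's data
result = ([[111, 'name_1'], [222, 'name_2'], [222, 'name_3']],
          [('id', 'UInt32'), ('name', 'String')])
df = ch_result_to_df(result)
print(df.columns.tolist())  # ['id', 'name']
```

<p>Note that returning a DataFrame from an Airflow task means it travels through XCom, so for large tables it is usually better to build the DataFrame inside the task that consumes it.</p>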
|
<python><pandas><airflow><clickhouse>
|
2024-06-25 17:06:39
| 2
| 402
|
John Doe
|
78,668,650
| 3,070,181
|
Problem when adding dependencies to pyinstaller using poetry to manage dependencies
|
<p>I am attempting to create an executable file in Windows 10 Pro from a Python script that imports a third-party package:</p>
<pre><code># main.py
from termcolor import cprint

def main() -> None:
    cprint('Hello', 'red')

if __name__ == '__main__':
    main()
</code></pre>
<p>I have installed <em>poetry</em> and created an environment</p>
<pre><code> poetry new test
</code></pre>
<p>added <em>termcolor</em></p>
<pre><code>poetry add termcolor
</code></pre>
<p>If I enter the poetry shell and run the script</p>
<pre><code>python main.py
</code></pre>
<p>it works as expected.</p>
<p>When I create the exe</p>
<pre><code>pyinstaller --onefile main.py
</code></pre>
<p>and try to run the exe it generates ModuleNotFoundError for <em>termcolor</em></p>
<p>This is my <em>main.spec</em> file</p>
<pre><code>a = Analysis(
    ['main.py'],
    pathex=['.', r'C:\Users\... path to ...\site-packages'],
    binaries=[],
    datas=[],
    hiddenimports=[],
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    noarchive=False,
    optimize=0,
)
pyz = PYZ(a.pure)

exe = EXE(
    pyz,
    a.scripts,
    [],
    exclude_binaries=True,
    name='main',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    console=True,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)
coll = COLLECT(
    exe,
    a.binaries,
    a.datas,
    strip=False,
    upx=True,
    upx_exclude=[],
    name='main',
)
</code></pre>
<p>What am I doing wrong?</p>
|
<python><windows><pyinstaller><python-poetry>
|
2024-06-25 16:52:26
| 1
| 3,841
|
Psionman
|
78,668,507
| 435,317
|
python regex, split string with multiple delimeters
|
<p>I know this question has been answered, but my use case is slightly different. I am trying to set up a regex pattern to split a few strings into a list.</p>
<p>Input Strings:</p>
<pre><code>1. "ABC-QWERT01"
2. "ABC-QWERT01DV"
3. "ABCQWER01"
</code></pre>
<p>Criteria of the string (parts numbered 1&ndash;5):</p>
<pre><code>ABC  -  QWERT  01  DV
 1   2    3    4   5
</code></pre>
<ol>
<li>The string will always start with three chars</li>
<li>The dash is optional</li>
<li>there will then be 3-10 chars</li>
<li>Left padded 0-99 digits</li>
<li>the suffix is 2 chars and is optional</li>
</ol>
<p>Expected Output</p>
<pre><code>1. ['ABC','-','QWERT','01']
2. ['ABC','-','QWERT','01','DV']
3. ['ABC','QWER','01']
</code></pre>
<p>I have tried the following patterns a bunch of different ways but I am missing something. My thought was start at the beginning of the string, split after the first three chars or the dash, then split on the occurrence of two decimals.</p>
<p>Pattern 1: <code>r"([ -?, \d{2}])+"</code>
This works but doesn't break up the string by the first three chars if the dash is missing</p>
<p>Pattern 2: <code>r"([^[a-z]{3}, -?, \d{2}])+"</code>
This fails as a non-pattern match, nothing gets split</p>
<p>Pattern 3: <code>r"([^[a-z]{3}|-?, \d{2}])+"</code>
This fails as a non-pattern match, nothing gets split</p>
<p>Any tips or suggestions?</p>
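<p>One pattern that encodes the five criteria directly (a sketch — adjust the quantifiers if the real data differs, e.g. if the digits are always exactly two) is to <em>match</em> the whole string with capture groups rather than split it, then drop any empty groups:</p>

```python
import re

# group 1: three letters; group 2: optional dash; group 3: 3-10 letters (lazy);
# group 4: 1-2 digits; group 5: optional two-letter suffix
pattern = re.compile(r'^([A-Za-z]{3})(-?)([A-Za-z]{3,10}?)(\d{1,2})([A-Za-z]{2})?$')

def split_code(s):
    m = pattern.match(s)
    if not m:
        return None
    # drop the empty string / None produced by the optional groups when absent
    return [g for g in m.groups() if g]

print(split_code("ABC-QWERT01"))    # ['ABC', '-', 'QWERT', '01']
print(split_code("ABC-QWERT01DV"))  # ['ABC', '-', 'QWERT', '01', 'DV']
print(split_code("ABCQWER01"))      # ['ABC', 'QWER', '01']
```

<p>The lazy <code>{3,10}?</code> on the middle letters keeps them from swallowing the optional two-letter suffix; backtracking then settles the digit boundary.</p>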
|
<python><regex>
|
2024-06-25 16:18:36
| 1
| 1,762
|
Drewdin
|
78,668,423
| 18,769,241
|
Is there an naoqi SDK for Python 3?
|
<p>I can't seem to find a Python NaoQi SDK for Python 3? All I find is for Python 2.7 from the reference installation page: <a href="http://doc.aldebaran.com/2-8/dev/python/install_guide.html" rel="nofollow noreferrer">http://doc.aldebaran.com/2-8/dev/python/install_guide.html</a></p>
<p>The latest version of the SDK (2.8) requires Python 2.7, and when I use it with Python 3.7, errors occur and the program cannot run correctly.
Any clues as to whether such an SDK is available?</p>
|
<python><nao-robot>
|
2024-06-25 15:59:44
| 2
| 571
|
Sam
|
78,668,385
| 1,685,729
|
ModuleNotFoundError: No module named 'arichuvadi.valam'; 'arichuvadi' is not a package
|
<p>I get <code>ModuleNotFoundError: No module named 'arichuvadi.valam'; 'arichuvadi' is not a package</code>. I installed the local package in editable form with the following command:</p>
<pre><code>pip install -e .
</code></pre>
<p>I get this error when running the code like this</p>
<pre><code>$ python mlm/arichuvadi/uyirmei.py
Traceback (most recent call last):
  File "/home/vanangamudi/code/arichuvadi/mlm/arichuvadi/uyirmei.py", line 1, in <module>
    from arichuvadi.valam import UYIRMEI_MAP_PATH
  File "/home/vanangamudi/code/arichuvadi/mlm/arichuvadi/arichuvadi.py", line 7, in <module>
    from arichuvadi.valam import (
ModuleNotFoundError: No module named 'arichuvadi.valam'; 'arichuvadi' is not a package
</code></pre>
<p>Here is the file layout</p>
<pre><code>$ tree
.
├── LICENSE
├── MANIFEST.in
├── mlm
│   ├── arichuvadi
│   │   ├── arichuvadi.py
│   │   ├── __init__.py
│   │   ├── uyirmei.py
│   │   └── valam.py
│   ├── arichuvadi.egg-info
│   │   ├── dependency_links.txt
│   │   ├── PKG-INFO
│   │   ├── SOURCES.txt
│   │   └── top_level.txt
│   ├── orunguri-tha.py
│   └── tharavu
│       ├── adaiyalamitta-ari.txt
│       ├── ari.txt
│       ├── ari-uni.txt
│       └── uyir-mei.csv
├── pyproject.toml
├── README.txt -> YENNAI_PADI.txt
├── setup.py
├── YENNAI_PADI.txt
└── YENNA_SEIYA.org

4 directories, 20 files
</code></pre>
<p>It works when importing in repl</p>
<pre><code>$ python
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import arichuvadi
>>> import arichuvadi.valam
>>>
</code></pre>
<p>Please help!</p>
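<p>A likely cause (an assumption from the layout): when you run <code>python mlm/arichuvadi/uyirmei.py</code>, Python puts <code>mlm/arichuvadi/</code> itself at the front of <code>sys.path</code>, so <code>import arichuvadi</code> resolves to the sibling <em>file</em> <code>arichuvadi.py</code> rather than the installed package, hence "'arichuvadi' is not a package". Running it as a module (<code>python -m arichuvadi.uyirmei</code> from inside <code>mlm/</code>), or from any directory outside the package (relying on the editable install), avoids the shadowing. A small self-contained demonstration of the effect:</p>

```python
import os
import subprocess
import sys
import tempfile
import textwrap

with tempfile.TemporaryDirectory() as root:
    pkg_dir = os.path.join(root, "mlm", "arichuvadi")
    os.makedirs(pkg_dir)
    # a file named like the package, sitting next to the script
    with open(os.path.join(pkg_dir, "arichuvadi.py"), "w") as f:
        f.write("")
    script = os.path.join(pkg_dir, "show.py")
    with open(script, "w") as f:
        f.write(textwrap.dedent("""\
            import arichuvadi
            print(arichuvadi.__file__)
        """))
    # running the script directly: sys.path[0] is the script's own directory,
    # so the sibling arichuvadi.py wins over any installed package
    out = subprocess.run([sys.executable, script],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # ...mlm/arichuvadi/arichuvadi.py
```
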
|
<python><import>
|
2024-06-25 15:52:26
| 2
| 727
|
vanangamudi
|
78,668,318
| 118,562
|
Python PyAudio always record in mono sound
|
<p>Here is my code:</p>
<pre><code>import pyaudio
import wave

class AudioRecorder:
    def __init__(self, chunk=1024, sample_format=pyaudio.paInt16, channels=2, fs=44100, filename="output.wav"):
        self.chunk = chunk
        self.sample_format = sample_format
        self.channels = channels
        self.fs = fs
        self.filename = filename
        self.wave = None
        self.p = pyaudio.PyAudio()
        self.frames = []
        self.stream = None

    def record(self):
        def record_callback(in_data, frame_count, time_info, status):
            # self.frames.append(in_data)
            self.wave.writeframes(in_data)
            return (None, pyaudio.paContinue)

        print(f'{self.p.get_default_input_device_info()}')
        print('Start Recording...\n')

        # init wave file
        self.wave = wave.open(self.filename, 'wb')
        self.wave.setnchannels(self.channels)  # Stereo
        self.wave.setsampwidth(self.p.get_sample_size(self.sample_format))
        self.wave.setframerate(self.fs)

        self.stream = self.p.open(format=self.sample_format,
                                  channels=self.channels,
                                  rate=self.fs,
                                  frames_per_buffer=self.chunk,
                                  input=True,
                                  stream_callback=record_callback)
        self.stream.start_stream()

    def stop(self):
        if self.stream is not None and self.stream.is_active():
            self.stream.stop_stream()
            self.stream.close()
            self.p.terminate()
            self.wave.close()
            print('Finished recording')
        else:
            print('Stream is not active')

if __name__ == "__main__":
    recorder = AudioRecorder()
    while True:
        print('Press 1 to start recording')
        print('Press 2 to stop recording')
        choice = input('Enter your choice: ')
        if choice == '1':
            recorder.record()
            continue
        elif choice == '2':
            recorder.stop()
            break
</code></pre>
<p>Here is my device info:</p>
<blockquote>
<p>{'index': 1, 'structVersion': 2, 'name': 'Scarlett 2i2 USB', 'hostApi': 0, 'maxInputChannels': 2, 'maxOutputChannels': 2, 'defaultLowInputLatency': 0.01, 'defaultLowOutputLatency': 0.0033385416666666667, 'defaultHighInputLatency': 0.1, 'defaultHighOutputLatency': 0.005671875, 'defaultSampleRate': 192000.0}</p>
</blockquote>
<p>It supports 2 channels and I set 2 channels when I start recording. But the result is always mono sound. Any idea? Thanks.</p>
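<p>To confirm whether the capture is truly mono (identical samples in both channels) rather than a playback artifact, here is a stdlib-only check of the recorded file (assuming 16-bit samples, as in the code above). Note that on a 2-input interface like the Scarlett 2i2, each physical input typically maps to one channel, so a single microphone plugged into input 1 will legitimately show up on the left channel only:</p>

```python
import struct
import wave

def channels_identical(path):
    """Return True if the left and right channels of a 16-bit stereo WAV match."""
    with wave.open(path, 'rb') as w:
        if w.getnchannels() != 2 or w.getsampwidth() != 2:
            raise ValueError("expected a 16-bit stereo WAV")
        frames = w.readframes(w.getnframes())
    # de-interleave L/R samples: L R L R ...
    samples = struct.unpack('<%dh' % (len(frames) // 2), frames)
    return samples[0::2] == samples[1::2]
```

<p>Usage would be <code>print(channels_identical("output.wav"))</code>: <code>True</code> means both channels carry the same signal, while different channels would point at playback or downstream mixing instead of the recording itself.</p>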
|
<python><pyaudio>
|
2024-06-25 15:40:30
| 0
| 12,986
|
Bagusflyer
|
78,668,306
| 15,912,168
|
PyInstaller-built API Not Recognizing Routes After Building
|
<p>I'm packaging an API with PyInstaller 6.4.0, but it's not working.</p>
<p>For some reason, the built file is not recognizing my routes.</p>
<p>The project folder structure is:</p>
<pre><code>API
├── logs
└── v1
    └── routes
        ├── router1.py
        └── router2.py
</code></pre>
<p>The code is:</p>
<pre><code>[imports]

rprefix = f"/api/v1"
app = FastAPI()
[...]
router_prices = importlib.import_module(f"v1.routes.checkprices_routes")
router_commons = importlib.import_module(f"v1.routes.commons_routes")
[...]
app.include_router(
    router_prices.__getattribute__("check_prices_router"), prefix=rprefix
)
app.include_router(router_commons.__getattribute__("commons_router"), prefix=rprefix)

if __name__ == "__main__":
    if len(SSL_KEY) > 0 and len(SSL_CERT) > 0:
        uvicorn.run(
            "main:app",
            host=API_HOST,
            port=API_PORT,
            reload=False,
            ssl_keyfile=SSL_KEY,
            ssl_certfile=SSL_CERT,
        )
    else:
        uvicorn.run("main:app", host=API_HOST, port=API_PORT, reload=False)
</code></pre>
<p>traceback:</p>
<pre><code><module 'v1.routes.checkprices_routes' from 'C:\\Users\\WA\\Área de Trabalho\\\main\\_internal\\v1\\routes\\checkprices_routes.py'>
<module 'v1.routes.commons_routes' from 'C:\\Users\\WA\\Área de Trabalho\\main\\_internal\\v1\\routes\\commons_routes.py'>
Traceback (most recent call last):
File "main.py", line 36, in <module>
router_prices = importlib.import_module(f"v1.routes.checkprices_routes")
File "importlib\__init__.py", line 127, in import_module
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "C:\Users\WA\Área de Trabalho\main\_internal\v1\routes\checkprices_routes.py", line 10, in <module>
from v1.repositories.checkprices_search.search_repository import \
File "C:\Users\WA\Área de Trabalho\main\_internal\v1\repositories\checkprices_search\search_repository.py", line 13, in <module>
from v1.models.dao.product_classes_DAO import ProductClassesDAO
File "C:\Users\WA\Área de Trabalho\main\_internal\v1\models\dao\product_classes_DAO.py", line 2, in <module>
from main import LOGGER
File "C:\Users\WA\Área de Trabalho\main\_internal\main.py", line 61, in <module>
router_prices.__getattribute__("check_prices_router"), prefix=rprefix
AttributeError: partially initialized module 'v1.routes.checkprices_routes' has no attribute 'check_prices_router' (most likely due to a circular import)
[16600] Failed to execute script 'main' due to unhandled exception!
</code></pre>
<p>P.S: The project is currently running on VSCode; it just doesn't work after building.</p>
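<p>The last lines of the traceback point at a circular import rather than a routing problem: <code>main.py</code> imports the route module, which (via the DAO) does <code>from main import LOGGER</code> back into the half-initialized <code>main</code>; PyInstaller merely changes import timing enough to expose it. One conventional fix (a sketch with assumed names) is to move the logger into its own module that never imports <code>main</code>:</p>

```python
# log_setup.py -- hypothetical module that owns the shared logger,
# breaking the main.py <-> routes/DAO import cycle
import logging

LOGGER = logging.getLogger("api")
LOGGER.setLevel(logging.INFO)
```

<p>Then <code>main.py</code>, the route modules, and <code>product_classes_DAO.py</code> would all use <code>from log_setup import LOGGER</code>, and no module needs to import <code>main</code> at all.</p>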
|
<python><fastapi><pyinstaller>
|
2024-06-25 15:36:29
| 1
| 305
|
WesleyAlmont
|
78,668,255
| 3,768,977
|
Is there a better way to generate my dataframe?
|
<p>I have a set of data that I need to perform some transformations on. The raw form of the data is as follows (actual dataset will have many more columns, reduced set here for simplicity):</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>lVoterUniqueID</th>
<th>sElectionAbbr1</th>
<th>sElectionAbbr2</th>
<th>sElectionAbbr3</th>
<th>sElectionAbbr4</th>
<th>sElectionAbbr5</th>
</tr>
</thead>
<tbody>
<tr>
<td>371527</td>
<td>2024PR</td>
<td>2022Gen</td>
<td>2022PR</td>
<td>2020GEN</td>
<td>2020PR</td>
</tr>
<tr>
<td>1843949</td>
<td>2024PR</td>
<td>null</td>
<td>null</td>
<td>null</td>
<td>null</td>
</tr>
<tr>
<td>2813398</td>
<td>2024PR</td>
<td>2022Gen</td>
<td>null</td>
<td>2020GEN</td>
<td>2020PR</td>
</tr>
</tbody>
</table></div>
<p>The output I'm looking for is:</p>
<ul>
<li><p>lVoterUniqueID relabeled as ID</p>
</li>
<li><p>Each sElectionAbbr[X] column relabeled to Election_Code</p>
</li>
</ul>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Election_Code</th>
</tr>
</thead>
<tbody>
<tr>
<td>371527</td>
<td>2024PR</td>
</tr>
<tr>
<td>1843949</td>
<td>2024PR</td>
</tr>
</tbody>
</table></div>
<p>I can accomplish this via sql:</p>
<pre><code>select distinct
lVoterUniqueID,
sElectionAbbr1 as Election_Code
from data_set
where sElectionAbbr1 != ''
union
select distinct
lVoterUniqueID,
sElectionAbbr2
from data_set
where sElectionAbbr2 != ''
union ...
</code></pre>
<p>However, I wanted to do this within Python and I've accomplished that goal, but it feels like the process is a bit heavy handed and I'm wondering if there is a more efficient way to do this.</p>
<p>Working Python below:</p>
<pre><code>import pandas as pd

df = pd.read_csv([path to data file])

# find number of election columns.
selected_columns = [column for column in df.columns if column.startswith("sElection")]

# add voter id column
selected_columns.insert(0, 'lVoterUniqueID')

# create dataframe schema
rcdp_voters_prod = pd.DataFrame(columns=['ID', 'Election_Code'])

def access_elements(list_var, list_index):
    # Use list comprehension to create a new list containing elements from 'list_var' at the specified indices in 'list_index'
    result = [list_var[i] for i in list_index]
    return result

# indices of list values that hold the election column
python_indices = [index for (index, item) in enumerate(selected_columns) if item != "lVoterUniqueID"]

union_dict = {}

# Create dictionary of lists containing unique voterid and each election abbreviation column
# ['lVoterUniqueID', 'sElectionAbbr1']
# ['lVoterUniqueID', 'sElectionAbbr2']
# ['lVoterUniqueID', 'sElectionAbbr3']
# ['lVoterUniqueID', 'sElectionAbbr4']
# ['lVoterUniqueID', 'sElectionAbbr5']
for index in python_indices:
    union_dict.update({index: access_elements(selected_columns, [0, index])})

dataframe_list = []

# create list of dataframes containing only distinct columns from column list
# rename columns for concating
for key in union_dict:
    loop_df = df.filter(items=union_dict[key]).drop_duplicates()
    loop_df.rename(columns={'lVoterUniqueID': 'ID', union_dict[key][1]: 'Election_Code'}, inplace=True)
    dataframe_list.append(loop_df)

final_frame_form = pd.concat(dataframe_list)

# drop rows where election code is null
final_frame_form = final_frame_form[final_frame_form['Election_Code'].notnull()]
</code></pre>
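<p>A more direct reshape (a sketch on a trimmed copy of the sample data) is <code>pandas.melt</code>, which unpivots all the <code>sElectionAbbr*</code> columns in one call, replacing the per-column loop and dictionary bookkeeping entirely:</p>

```python
import pandas as pd

# trimmed stand-in for the real CSV (two election columns instead of five)
df = pd.DataFrame({
    'lVoterUniqueID': [371527, 1843949, 2813398],
    'sElectionAbbr1': ['2024PR', '2024PR', '2024PR'],
    'sElectionAbbr2': ['2022Gen', None, '2022Gen'],
})

value_cols = [c for c in df.columns if c.startswith('sElectionAbbr')]
out = (
    df.melt(id_vars='lVoterUniqueID', value_vars=value_cols,
            value_name='Election_Code')       # stack the election columns
      .rename(columns={'lVoterUniqueID': 'ID'})
      .drop(columns='variable')               # the source-column name is not needed
      .dropna(subset=['Election_Code'])       # mirrors the SQL "!= ''" filters
      .drop_duplicates()                      # mirrors the SQL UNION's dedup
      .reset_index(drop=True)
)
print(out)
```

<p>This scales automatically to however many <code>sElectionAbbr</code> columns the file contains, which is the same reason the SQL version needed one <code>UNION</code> branch per column.</p>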
|
<python><pandas><dataframe>
|
2024-06-25 15:23:31
| 1
| 553
|
MPJ567
|
78,668,248
| 15,370,142
|
Convert a complex string representation of a list (containing tuples, which look like function calls) into a list
|
<p>I am trying to create a fixture for unit testing. I retrieve data from an API and need data that looks like what I get from the API without making the call to the API. I need to create a number of list literals that capture different scenarios for the unit test. Here is the string representation of the list I'm trying to convert to a list:</p>
<pre><code>[(3479865,
  PaginatedManagedEntityHeaders(
    success=True,
    count=0,
    rows=[
      ManagedEntityHeader(
        case_role='Reference',
        display_name='Person A',
        entity_type='PERSON',
        unique_id=247878382,
        is_active='1',
        date_created=datetime.datetime(2021, 10, 18, 16, 29, 6, 535000, tzinfo=TzInfo(UTC))),
      ManagedEntityHeader(
        case_role='Reference',
        display_name='Person B',
        entity_type='PERSON',
        unique_id=247563788,
        is_active='0',
        date_created=datetime.datetime(2021, 9, 8, 21, 4, 29, 631000, tzinfo=TzInfo(UTC)))]
))]
</code></pre>
<p>I have saved this as a JSON file: "test_fixture.json". I could also save it as a text file. It imports successfully as a string. I don't think I can create this list in Python code without receiving numerous errors, but maybe there is a way to do it. Here is what I have tried so far to convert the string to a list.</p>
<pre><code>import os
import json
import ast
import re
the_file = os.path.join(r"test_fixture.json")
file_str = open(the_file).read()
meth1 = file_str.strip("][").split(", ")
# receive a list of 25 instead of a list of 1
meth2 = ast.literal_eval(file_str)
# ValueError: malformed node or string on line 2: <ast.Call object at 0x000002241687F8E0>
meth3 = json.loads(file_str)
# JSONDecodeError: Expecting value
elements = re.findall(r"\d+", file_str)
meth4 = [int(x) for x in elements]
# receive a list of 20 instead of a list of 1
meth5 = eval(file_str)
# NameError: name 'PaginatedManagedEntityHeaders' is not defined
meth6 = list(map(int, file_str[1:-1].split(",")))
# ValueError: invalid literal for int() with base 10: '(3479865'
</code></pre>
<p>Is there another method (<code>meth_that_worked</code>) that will successfully convert the string into a list of 1 element?</p>
<p><code>print(meth_that_worked)</code> should produce output identical to the string representation of the list above.</p>
<p><code>print(type(meth_that_worked))</code> should produce <code><class 'list'></code>. <code>print(len(meth_that_worked))</code> should produce <code>1</code>.</p>
<p><strong>Update</strong></p>
<p>Based on some of the comments, I tried addressing the namespace errors received when executing <code>eval(file_str)</code> as follows:</p>
<pre><code>def ManagedEntityHeader(*args, **kwargs):
    pass

def TzInfo(*args, **kwargs):
    pass

def UTC(*args, **kwargs):
    pass
</code></pre>
<p>Then I received the following error:</p>
<pre><code>ValidationError: 2 validation errors for PaginatedManagedEntityHeaders
rows.0
Input should be a valid dictionary or object to extract fields from [type=model_attributes_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.6/v/model_attributes_type
rows.1
Input should be a valid dictionary or object to extract fields from [type=model_attributes_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.6/v/model_attributes_type
</code></pre>
<p>Then I created a function similar to the ones above to address this error:</p>
<pre><code>def PaginatedManagedEntityHeaders(*args, **kwargs):
    pass
</code></pre>
<p>After doing this, a list was successfully created, but the list content was truncated compared to the original string representation of the list:</p>
<pre><code>[(3479865, None)]
</code></pre>
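<p>The truncation happens because the stub functions return <code>None</code>: for <code>eval</code> to round-trip a repr, every name in the string must map to a constructor that actually builds an object, which in this case means importing the real <code>PaginatedManagedEntityHeaders</code>, <code>ManagedEntityHeader</code>, <code>TzInfo</code>, etc. into the eval namespace. A small illustration of the principle with a stand-in dataclass (the class and string here are hypothetical simplifications of the fixture):</p>

```python
import datetime
from dataclasses import dataclass

@dataclass
class ManagedEntityHeader:  # stand-in for the real pydantic model
    display_name: str
    date_created: datetime.datetime

text = ("[(3479865, ManagedEntityHeader("
        "display_name='Person A', "
        "date_created=datetime.datetime(2021, 10, 18, 16, 29, 6)))]")

# eval in a controlled namespace: each name resolves to a real constructor,
# so the objects are actually rebuilt instead of collapsing to None
namespace = {"ManagedEntityHeader": ManagedEntityHeader, "datetime": datetime}
fixture = eval(text, namespace)

print(len(fixture))                # 1
print(fixture[0][1].display_name)  # Person A
```

<p>The usual caveat applies: <code>eval</code> is only safe here because the fixture text is your own; for untrusted input a serialization format would be needed instead.</p>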
|
<python><list><type-conversion><tuples><function-call>
|
2024-06-25 15:22:09
| 3
| 412
|
Ted M.
|
78,668,028
| 3,533,800
|
python: "from library import module" works, but "import library.module" could not be resolved
|
<p>I've got this dts:</p>
<pre><code>Pipfile
module-common\
    __init__.py
    setup.py
    common\
        library-common.py
module-a\
    __init__.py
    setup.py
    functional-module\
        library-a.py
        library-b.py
</code></pre>
<p>Pipfile is defined as</p>
<pre><code>...
[packages]
module-common = {file = "module-common", editable = true}
module-a = {file = "module-a", editable = true}
</code></pre>
<p>setup.py are defined as</p>
<pre><code>from setuptools import find_packages, setup

setup(
    name="module-common",
    version="2",
    ...
    packages=find_packages(),
    ...
)
</code></pre>
<p>and</p>
<pre><code>from setuptools import find_packages, setup

setup(
    name="module-a",
    version="2",
    ...
    packages=find_packages(),
    ...
)
</code></pre>
<p>The <code>__init__.py</code> files are empty.</p>
<p>I installed all with</p>
<pre class="lang-bash prettyprint-override"><code>pipenv install -e module-common
pipenv install -e module-a
pipenv install
</code></pre>
<p><code>pip list</code> is correctly showing my packages into the dts</p>
<p>So, in library-a I want to use a function from library-common, but this import declaration in library-a fails:</p>
<pre><code>from common.library-common import SomeClass
print(f"function return: {SomeClass.function()}")
</code></pre>
<p>and results in:</p>
<pre><code>NameError: name 'library-common' is not defined
</code></pre>
<p>but if I declare</p>
<pre><code>import common
print(f"function return: {common.common-function.SomeClass.function()}")
</code></pre>
<p>I got the correct answer.</p>
<p>What am I missing here? How do I get the first notation to work?</p>
<p>The goal is to avoid touching library-a's code: leave the imports as they are, so I don't have to refactor the whole codebase.</p>
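<p>Note that <code>library-common</code> contains a hyphen, which can never appear in a <code>from ... import</code> statement: Python parses it as the subtraction <code>library - common</code>, which is exactly why the failure is a <code>NameError</code> rather than an <code>ImportError</code>. Renaming the file to <code>library_common.py</code> is the clean fix; if renaming is off the table, <code>importlib</code> can still load a hyphenated module by its string name, as this self-contained sketch (with a throwaway package built in a temp directory) shows:</p>

```python
import importlib
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as root:
    pkg = os.path.join(root, "common")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("")
    # a module whose filename is not a valid Python identifier
    with open(os.path.join(pkg, "library-common.py"), "w") as f:
        f.write("class SomeClass:\n"
                "    @staticmethod\n"
                "    def function():\n"
                "        return 'ok'\n")
    sys.path.insert(0, root)
    # import_module takes a *string*, so the hyphen is not a syntax problem
    mod = importlib.import_module("common.library-common")
    SomeClass = getattr(mod, "SomeClass")
    print(SomeClass.function())  # ok
```
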
|
<python><module><importerror><pipenv><pipfile>
|
2024-06-25 14:39:37
| 0
| 329
|
Ozeta
|
78,667,785
| 9,571,463
|
Communicating between Queues in Asyncio not Working as Expected
|
<p>I'm working on a pattern where I communicate amongst multiple queues to process items along a pipeline. I am using sentinels to communicate between the queues when to stop working, however in the following code, I am seeing results that confuse me.</p>
<p>When reading from the <code>write_q</code> in <code>write_task()</code> I see the first value come in as the sentinel <code>None</code> instead of the tasks in the order they were placed in <code>response_task()</code>. If I understand right, <code>write_task()</code> should receive the items in order and process them in order as the tasks are created.</p>
<p>Also, when printing the <code>qsize()</code> in <code>write_task()</code> after I find the sentinel, it says there are 0 items; however, when printing back in <code>main</code>, it seems that the <code>qsize()</code> of <code>write_q</code> still has 2 items. I've read somewhere that <code>aiofiles</code> uses <code>run_in_executor()</code>, which means there might be a divergence in where the queue is handled.</p>
<p>Most of the below code is boilerplate to illustrate the actual scenario on why my code continues to block infinitely.</p>
<pre><code>import asyncio
import aiofiles
import aiocsv
import json

async def fetch(t: float) -> dict:
    print(f"INFO: Sleeping for {t}s")
    await asyncio.sleep(t)
    return t

async def task(l: list, request_q: asyncio.Queue) -> None:
    # Read tasks from source of data
    for i in l:
        await request_q.put(
            asyncio.create_task(fetch(i))
        )
    # Sentinel value to signal we are done receiving from source
    await request_q.put(None)

async def request_task(request_q: asyncio.Queue, write_q: asyncio.Queue) -> None:
    while True:
        req = await request_q.get()
        # If we received sentinel for tasks, pass message to next queue
        if not req:
            print("INFO: received sentinel from request_q")
            request_q.task_done()
            await request_q.put(None)  # put back into the queue to signal to other consumers we are done
            break
        # Make the request
        resp = await req
        await write_q.put(resp)
        request_q.task_done()

async def write_task(write_q: asyncio.Queue) -> None:
    headers: bool = True
    async with aiofiles.open("file.csv", mode="w+", newline='') as f:
        w = aiocsv.AsyncWriter(f)
        while True:
            # Get data out of the queue to write it
            data = await write_q.get()
            print(data)
            # if not data:
            #     print(f"INFO: Found sentinel in write_task, queue size was: {write_q.qsize()}")
            #     write_q.task_done()
            #     await f.flush()
            #     break
            if headers:
                await w.writerow([
                    "status",
                    "data",
                ])
                headers = False
            # Write the data from the response
            await w.writerow([
                "200",
                json.dumps(data)
            ])
            await f.flush()
            write_q.task_done()

async def main() -> None:
    # Create fake data to POST
    items: list[str] = [.2, .5, 1]

    # Queues for orchestrating
    request_q = asyncio.Queue()
    write_q = asyncio.Queue()

    # one producer
    producer = asyncio.create_task(
        task(items, request_q)
    )

    # 5 request consumers
    request_consumers = [
        asyncio.create_task(
            request_task(request_q, write_q)
        )
        for _ in range(2)
    ]

    # 5 write consumers
    write_consumer = asyncio.create_task(
        write_task(write_q)
    )

    errors = await asyncio.gather(producer, return_exceptions=True)
    print(f"INFO: Producer has completed! exceptions: {errors}")

    await request_q.join()
    for c in request_consumers:
        c.cancel()
    print("INFO: request consumer has completed! ")

    print(f"INFO: write_q in main qsize: {write_q.qsize()}")
    await write_q.join()
    print("INFO: write queue has completed! ")
    # await write_consumer
    write_consumer.cancel()
    print("INFO: Complete!")

if __name__ == "__main__":
    # loop = asyncio.new_event_loop()
    # loop.run_until_complete(main())
    asyncio.run(main())
</code></pre>
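<p>The hang described above is consistent with the sentinel never being forwarded into <code>write_q</code>: the sentinel branch in <code>write_task()</code> is commented out, and <code>request_task()</code> never pushes a sentinel downstream. A minimal, self-contained sketch of sentinel propagation across two chained queues might look like this (an illustration of the pattern, not the exact code above):</p>

```python
import asyncio

async def stage(in_q: asyncio.Queue, out_q: asyncio.Queue) -> None:
    # Forward items; on the sentinel, re-queue it for sibling consumers of
    # in_q AND push one downstream so the next stage can also shut down.
    # (If you also join() in_q, remember this leaves one extra sentinel in it.)
    while True:
        item = await in_q.get()
        if item is None:
            in_q.task_done()
            await in_q.put(None)
            await out_q.put(None)
            break
        await out_q.put(item * 2)
        in_q.task_done()

async def sink(out_q: asyncio.Queue, results: list) -> None:
    while True:
        item = await out_q.get()
        out_q.task_done()
        if item is None:
            break
        results.append(item)

async def main() -> list:
    in_q: asyncio.Queue = asyncio.Queue()
    out_q: asyncio.Queue = asyncio.Queue()
    results: list = []
    worker = asyncio.create_task(stage(in_q, out_q))
    writer = asyncio.create_task(sink(out_q, results))
    for i in range(3):
        await in_q.put(i)
    await in_q.put(None)  # single sentinel from the producer
    await asyncio.gather(worker, writer)
    return results

print(asyncio.run(main()))  # [0, 2, 4]
```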
|
<python><async-await><queue><python-asyncio>
|
2024-06-25 13:52:15
| 1
| 1,767
|
Coldchain9
|
78,667,734
| 6,939,324
|
How to avoid a nan loss (from the first iteration) and gradients being None?
|
<p>I am trying to predict/ fit filter coefficients using an MLP, my target function is:</p>
<p><img src="https://github.com/pytorch/pytorch/assets/15731839/7b15cf05-86fc-43e7-bdd3-6e5543a42b42" alt="biquad" /></p>
<p>However, the system is stuck at the same loss (<code>nan</code>) and no learning or updating is happening.
When I remove the <code>lfilter</code> and use <code>batch_loss = torch.nn.functional.mse_loss(y, target_seq_batch)</code> for the loss, the algorithm converges.</p>
<h3>Code</h3>
<pre class="lang-py prettyprint-override"><code>import time
import torch
import torchaudio
import numpy as np
from tqdm import tqdm
from torchaudio.functional import lfilter
from torch.optim import Adam, lr_scheduler
# Set the device
hardware = "cpu"
device = torch.device(hardware)
class FilterNet(torch.nn.Module):
def __init__(self, input_size, hidden_size, output_size, num_batches=1, num_biquads=1, num_layers=1, fs=44100):
super(FilterNet, self).__init__()
self.eps = 1e-8
self.fs = fs
self.dirac = self.get_dirac(fs, 0, grad=True) # generate a dirac
self.mlp = torch.nn.Sequential(torch.nn.Linear(input_size, 100),
torch.nn.ReLU(),
torch.nn.Linear(100, 50),
torch.nn.ReLU(),
torch.nn.Linear(50, output_size))
self.sos = torch.rand(num_biquads, 6, device=hardware, dtype=torch.float32, requires_grad=True)
def get_dirac(self, size, index=1, grad=False):
tensor = torch.zeros(size, requires_grad=grad)
tensor.data[index] = 1
return tensor
def compute_filter_magnitude_and_phase_frequency_response(self, dirac, fs, a, b):
# filter it
filtered_dirac = lfilter(dirac, a, b)
freqs_response = torch.fft.fft(filtered_dirac)
# compute the frequency axis (positive frequencies only)
freqs_rad = torch.fft.rfftfreq(filtered_dirac.shape[-1])
# keep only the positive freqs
freqs_hz = freqs_rad[:filtered_dirac.shape[-1] // 2] * fs / np.pi
freqs_response = freqs_response[:len(freqs_hz)]
# magnitude response
mag_response_db = 20 * torch.log10(torch.abs(freqs_response))
# Phase Response
phase_response_rad = torch.angle(freqs_response)
phase_response_deg = phase_response_rad * 180 / np.pi
return freqs_hz, mag_response_db, phase_response_deg
def forward(self, x):
self.sos = self.mlp(x)
return self.sos
# Define the target filter variables
fs = 2048 # 44100 # Sampling frequency
num_biquads = 1 # Number of biquad filters in the cascade
num_biquad_coeffs = 6 # Number of coefficients per biquad
# define filter coeffs
target_sos = torch.tensor([0.803, -0.132, 0.731, 1.000, -0.426, 0.850])
a = target_sos[3:]
b = target_sos[:3]
# prepare data
import scipy.signal as signal
f0 = 20
f1 = 20e3
t = np.linspace(0, 60, fs, dtype=np.float32)
sine_sweep = signal.chirp(t=t, f0=f0, t1=60, f1=f1, method='logarithmic')
white_noise = np.random.normal(scale=5e-2, size=len(t))
noisy_sweep = sine_sweep + white_noise
train_input = torch.from_numpy(noisy_sweep.astype(np.float32))
train_target = lfilter(train_input, a, b)
# Init the optimizer
n_epochs = 9
batch_size = 1
seq_length = 512
seq_step = 512
model = FilterNet(seq_length, 10*seq_length, 6, batch_size, num_biquads, 1, fs)
optimizer = Adam(model.parameters(), lr=1e-1, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min')
criterion = torch.nn.MSELoss()
# compute filter response
freqs_hz, mag_response_db, phase_response_deg = model.compute_filter_magnitude_and_phase_frequency_response(model.get_dirac(fs, 0, grad=False), fs, a, b)
target_frequency_response = torch.hstack((mag_response_db, phase_response_deg))
# Inits
start_time = time.time() # Start timing the loop
pbar = tqdm(total=n_epochs) # Create a tqdm progress bar
loss_history = []
# data batching
num_sequences = int(train_input.shape[0] / seq_length)
# Run training
for epoch in range(n_epochs):
model.train()
device = next(model.parameters()).device
print("\n+ Epoch : ", epoch)
total_loss = 0
for seq_id in range(num_sequences):
start_idx = seq_id*seq_step
end_idx = seq_id*seq_step + seq_length
# print(seq_id, start_idx, end_idx)
input_seq_batch = train_input[start_idx:end_idx].unsqueeze(0).to(device)
target_seq_batch = train_target[start_idx:end_idx].unsqueeze(0).to(device)
optimizer.zero_grad()
# Compute prediction and loss
sos = model(input_seq_batch)
y = lfilter(waveform=input_seq_batch, b_coeffs=sos[:, :3], a_coeffs=sos[:, 3:])
batch_loss = torch.nn.functional.mse_loss(y, target_seq_batch)
sos.requires_grad_(True)
y.requires_grad_(True)
batch_loss.requires_grad_(True)
print("|-> y : ", y.grad)
print("|-> sos : ", sos.grad)
print("|-> batch_loss (before backprop) : ", batch_loss.grad)
# Backpropagation
batch_loss.backward()
print("|-> batch_loss (after backprop) : ", batch_loss.grad)
optimizer.step()
total_loss += batch_loss.item()
print(f"|=========> Sequence {seq_id}: Loss = {batch_loss.item():.9f}")
# record loss
epoch_loss = total_loss / num_sequences
loss_history.append(epoch_loss)
print("-"* 100)
print(f"|=========> epoch_loss = {epoch_loss:.3f} | Loss = {epoch_loss:.3f}")
# Update the progress bar
#pbar.set_description(f"\nEpoch: {epoch}, Loss: {epoch_loss:.9f}\n")
#pbar.update(1)
scheduler.step(total_loss)
print("*"* 100)
# End timing the loop & print duration
elapsed_time = time.time() - start_time
print(f"\nOptimization loop took {elapsed_time:.2f} seconds.")
# Plot predicted filter
predicted_a = model.sos[:, 3:].detach().cpu().T.squeeze(1)
predicted_b = model.sos[:, :3].detach().cpu().T.squeeze(1)
freqs_hz, predicted_mag_response_db, predicted_phase_response_deg = model.compute_filter_magnitude_and_phase_frequency_response(model.get_dirac(fs, 0, grad=False), fs, predicted_a, predicted_b)
</code></pre>
<h3>Output</h3>
<pre><code>
+ Epoch : 0
|-> y : None
|-> sos : None
|-> batch_loss (before backprop) : None
|-> batch_loss (after backprop) : None
|=========> Sequence 0: Loss = 1.106894493
|-> y : None
|-> sos : None
|-> batch_loss (before backprop) : None
|-> batch_loss (after backprop) : None
|=========> Sequence 1: Loss = 1.414705992
|-> y : None
|-> sos : None
|-> batch_loss (before backprop) : None
|-> batch_loss (after backprop) : None
|=========> Sequence 2: Loss = nan
|-> y : None
|-> sos : None
|-> batch_loss (before backprop) : None
|-> batch_loss (after backprop) : None
|=========> Sequence 3: Loss = nan
----------------------------------------------------------------------------------------------------
|=========> epoch_loss = nan | Loss = nan
****************************************************************************************************
+ Epoch : 1
|-> y : None
|-> sos : None
|-> batch_loss (before backprop) : None
|-> batch_loss (after backprop) : None
|=========> Sequence 0: Loss = nan
|-> y : None
|-> sos : None
|-> batch_loss (before backprop) : None
|-> batch_loss (after backprop) : None
|=========> Sequence 1: Loss = nan
|-> y : None
|-> sos : None
|-> batch_loss (before backprop) : None
</code></pre>
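<p>One detail worth noting about the log above: <code>.grad</code> is only populated on <em>leaf</em> tensors after <code>backward()</code>, and calling <code>requires_grad_()</code> on an already-computed tensor does not attach it to the graph that produced it, so the <code>None</code> values are expected. A small sketch (generic, not tied to the model above):</p>

```python
import torch

x = torch.randn(4, requires_grad=True)   # leaf tensor: .grad gets populated
h = x * 2                                # non-leaf: .grad stays None by default
h.retain_grad()                          # explicitly opt in for h
loss = (h ** 2).sum()
loss.backward()

print(x.grad is None, h.grad is None)    # False False

# Calling requires_grad_() on an already-computed tensor does not attach it
# to the graph that produced it; it merely marks it as a new disconnected
# leaf, which is why doing so just before backward() changes nothing.
```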
|
<python><machine-learning><pytorch><signal-processing><torchaudio>
|
2024-06-25 13:40:11
| 0
| 2,966
|
SuperKogito
|
78,667,698
| 12,955,644
|
Terminal does not open on Ubuntu when using updated python version, only works with specific version
|
<p>I have these versions of python installed under python:</p>
<p>For <code>python</code>:</p>
<pre><code>update-alternatives --list python
/usr/bin/python3
/usr/bin/python3.11
/usr/bin/python3.12
/usr/bin/python3.8
/usr/bin/python3.9
</code></pre>
<p>For <code>python3</code>:</p>
<pre><code>update-alternatives --list python3
/usr/bin/python3.10
/usr/bin/python3.12
/usr/bin/python3.8
</code></pre>
<p>When I change the python3 alternative to python3.12 (any version other than 3.8), the terminal stops working;
I can't open it at all.</p>
<p>So then I open the virtual terminal using <code>Ctrl+Alt+F3</code> and change the python version back to 3.8 with the command:</p>
<pre><code>update-alternatives –config python3
</code></pre>
<p>And select python3.8. Then the terminal starts working again.</p>
<p>Why is the terminal attached to python3.8? I may have to work with python’s other versions.</p>
<p>Changing the version for the <code>python</code> alternative causes no problem. It's only <code>python3</code> that does.</p>
<p>Can somebody who has a deep understanding of Ubuntu and Python guide me?</p>
|
<python><python-3.x><ubuntu><terminal>
|
2024-06-25 13:33:44
| 0
| 333
|
Imtiaz Ahmed
|
78,667,638
| 6,622,697
|
How to query from a joined table in SQLAlchemy
|
<p>I have 2 tables with a many-to-many relationship, so I have a join table between them. They are defined like this</p>
<pre><code>from sqlalchemy import String, Integer, Boolean
from sqlalchemy.orm import mapped_column, relationship
from root.db.ModelBase import ModelBase
class Module(ModelBase):
__tablename__ = 'module'
pk = mapped_column(Integer, primary_key=True)
description = mapped_column(String)
is_active = mapped_column(Boolean)
name = mapped_column(String, unique=True, nullable=False)
ngen_cal_active = mapped_column(String)
groups = relationship("ModuleGroup", secondary="module_group_member", back_populates="modules", lazy="joined")
---
from sqlalchemy import String, Integer, Boolean
from sqlalchemy.orm import mapped_column, relationship
from root.db.ModelBase import ModelBase
class ModuleGroup(ModelBase):
__tablename__ = 'module_group'
pk = mapped_column(Integer, primary_key=True)
description = mapped_column(String)
is_active = mapped_column(Boolean)
name = mapped_column(String, unique=True, nullable=False)
modules = relationship("Module", secondary="module_group_member", back_populates="groups", lazy="joined")
---
from sqlalchemy import String, Integer, Boolean, ForeignKey
from sqlalchemy.orm import mapped_column
from root.db.ModelBase import ModelBase
class ModuleGroupMember(ModelBase):
__tablename__ = 'module_group_member'
description = mapped_column(String)
is_active = mapped_column(Boolean)
module_pk = mapped_column(ForeignKey('module.pk'), primary_key=True,)
module_group_pk = mapped_column(ForeignKey('module_group.pk'), primary_key=True)
</code></pre>
<p>If I query like this</p>
<pre><code>query = select(Module).where(Module.name == 'Module3') # type: ignore
results = session.execute(query).unique().all()
print('results', results)
</code></pre>
<p>It appears to work (although I don't know why it forces me to use <code>unique()</code>, since there should only be 1 result).
In the result, I see a <code>groups</code> object, which is defined by the relationship, so it all works fine.</p>
<p>However, if I want to get just the name and the associated groups, I tried something like this:</p>
<pre><code>query = select(Module.name, Module.groups).where(Module.name == 'Module3') # type: ignore
</code></pre>
<p>and I get this error <code>SAWarning: SELECT statement has a cartesian product between FROM element(s) "module_group", "module_group_member_1" and FROM element "module". Apply join condition(s) between each element to resolve.</code></p>
<p>I'm not sure what the problem is. I thought that SQLAlchemy would take care of any JOINS that need to be done</p>
<p>What I really want to do is get all the names and groups for every module</p>
<pre><code>query = select(Module.name, Module.groups)
</code></pre>
|
<python><sqlalchemy>
|
2024-06-25 13:20:13
| 1
| 1,348
|
Peter Kronenberg
|
78,667,579
| 7,195,787
|
Return the list of all registered models in MLflow
|
<p>I'm using MLflow on Databricks and I've trained some models, which I can see in the "registered models" page.</p>
<p>Is there a way to extract the list of these models in code?</p>
<p>Something like</p>
<pre><code>import mlflow
mlflow.set_tracking_uri("databricks")
model_infos = mlflow.tracking.MlflowClient().some_method_to_list_registered_models()
for model_info in model_infos:
print(f"Name: {model_info.name}, Version: {model_info.latest_versions[0].version}")
</code></pre>
|
<python><databricks><mlflow>
|
2024-06-25 13:06:26
| 1
| 443
|
Carlo
|
78,667,500
| 4,190,657
|
conditional split based on list of column
|
<p>I have a dataframe with 2 columns: "id" (int) and "values" (list of struct). I need to split on <code>name</code>. I have a list of column-name prefixes to act as delimiters. For each row, I need to check whether one of the prefixes from the list occurs in <code>name</code>, and if so, split the value on it.</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode
from pyspark.sql.types import StructType, StructField, StringType, IntegerType, ArrayType
value_schema = ArrayType(
StructType([
StructField("name", StringType(), True),
StructField("location", StringType(), True)
])
)
data = [
(1, [
{"name": "col1_US", "location": "usa"},
{"name": "col2_name_plex", "location": "usa"},
{"name": "col4_false", "location": "usa"},
{"name": "col3_name_is_fantasy", "location": "usa"}
])
]
df = spark.createDataFrame(data, ["id", "values"])
df = df.withColumn("values", explode(col("values")).alias("values"))
df = df.select(col("id"),col("values.name").alias("name"))
df.display()
col_names = ["col1","col2_name","col3_name_is","col4"]
for c in col_names:
#if (df["name"].contains(c)): # this is not working
split_data = split(df["name"], f'{c}_')
df = df.withColumns({
"new_name": lit(c),
"new_value": split_data.getItem(1)
})
df.display()
</code></pre>
<p>Data after cleanup:</p>
<pre><code>id name
1 col1_US
1 col2_name_plex
1 col4_false_val
1 col3_name_is_fantasy
</code></pre>
<p>Final data from above script:</p>
<pre><code># returning unexpected data
id name new_name new_value
1 col1_US col4 null
1 col2_name_plex col4 null
1 col4_false_val col4 false
1 col3_name_is_fantasy col4 null
</code></pre>
<p>Expected Result:</p>
<pre><code>id name new_name new_value
1 col1_US col1 US
1 col2_name_plex col2_name plex
1 col4_false_val col4 false_val
1 col3_name_is_fantasy col3_name_is fantasy
</code></pre>
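<p>The core of the expected result is a longest-prefix match. As a plain-Python sketch of that matching logic (the prefix list and sample names come from the question; feeding the same pattern to PySpark's <code>regexp_extract</code> is an assumption about how you would wire it in):</p>

```python
import re

col_names = ["col1", "col2_name", "col3_name_is", "col4"]

# Sort prefixes longest-first so "col3_name_is" is tried before shorter ones,
# then build one alternation: ^(col3_name_is|col2_name|col1|col4)_(.*)$
alternation = "|".join(sorted(map(re.escape, col_names), key=len, reverse=True))
pattern = re.compile(r"^(%s)_(.*)$" % alternation)

def split_name(name: str):
    m = pattern.match(name)
    return (m.group(1), m.group(2)) if m else (None, None)

for n in ["col1_US", "col2_name_plex", "col4_false_val", "col3_name_is_fantasy"]:
    print(split_name(n))
```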
|
<python><regex><apache-spark><pyspark><split>
|
2024-06-25 12:51:01
| 1
| 305
|
steve
|
78,667,448
| 10,317,745
|
Python typing for nested iterator on list of objects with iterable property
|
<p>I have a somewhat tricky python typing question. I have a function (generator) that iterates over a list of objects, and then over a particular attribute of each of these objects like this:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar('T')
S = TypeVar('S')
def iter_attribute(outer: Iterable[T], attribute: str) -> Iterable[S]:
for item in outer:
yield from getattr(item, attribute)
</code></pre>
<p>How would I properly convey the fact that <code>T</code> should have an iterable attribute called <code>attribute</code> that yields <code>S</code> objects? By the way, mypy is totally happy with this and doesn't catch this:</p>
<pre class="lang-py prettyprint-override"><code>class A:
a: str
def __init__(self, a: str):
self.a = a
a: str
for a in iter_attribute([A('a'), A('b'), A('c')], 'b'):
print(a)
</code></pre>
<p>How would I correctly type this?</p>
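<p>For context, a string attribute name cannot be tied to <code>T</code> in the type system, which is why mypy accepts the wrong call. One workable alternative (a sketch, not the only option) is to pass an accessor callable instead of a string, which mypy can check end to end:</p>

```python
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")
S = TypeVar("S")

def iter_attribute(outer: Iterable[T], get: Callable[[T], Iterable[S]]) -> Iterable[S]:
    # The accessor replaces the string name, so mypy can relate T and S.
    for item in outer:
        yield from get(item)

class A:
    def __init__(self, a: str) -> None:
        self.items = [a, a.upper()]

result = list(iter_attribute([A("x"), A("y")], lambda obj: obj.items))
print(result)  # ['x', 'X', 'y', 'Y']
```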
|
<python><mypy><python-typing>
|
2024-06-25 12:41:22
| 1
| 1,273
|
Ingo
|
78,667,096
| 11,779,593
|
Efficiently Resampling and Interpolating Pandas DataFrames with Millisecond Accuracy
|
<p>I have a Pandas DataFrame with timestamps that have millisecond accuracy and corresponding altitude values. I want to resample and interpolate this data efficiently. Here is a simple example:</p>
<pre><code>import pandas as pd
import numpy as np
# Generate 5 random timestamps within the same minute with millisecond accuracy
base_timestamp = pd.Timestamp.now().floor("min") # Get the current time, floored to the nearest minute
timestamps = [
base_timestamp + pd.to_timedelta(np.random.randint(0, 60000), unit="ms")
for _ in range(5)
]
# Generate random altitudes
altitudes = np.random.uniform(100, 1000, size=5) # Random altitudes between 100 and 1000
# Create the DataFrame
df = pd.DataFrame({"timestamp": timestamps, "altitude": altitudes}).sort_values("timestamp")
</code></pre>
<p>A method that works but is terribly inefficient is the following:</p>
<pre><code>df_interpolated = (
df.set_index("timestamp").resample("1ms").interpolate().resample("1s").interpolate()
)
</code></pre>
<p>Plotting the results shows that it works:</p>
<pre><code>import plotly.graph_objects as go
fig = go.Figure()
fig.add_trace(go.Scatter(x=df.timestamp, y=df.altitude, mode="lines", name="Original"))
fig.add_trace(
go.Scatter(
x=df_interpolated.index,
y=df_interpolated.altitude,
mode="markers",
name="Interpolated 1ms and 1s",
)
)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/tbDuYuyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tbDuYuyf.png" alt="results" /></a></p>
<p>Any idea on how to do this better?</p>
|
<python><pandas><time-series>
|
2024-06-25 11:35:38
| 1
| 317
|
bfgt
|
78,666,967
| 3,665,976
|
SPARK_GEN_SUBQ_0 WHERE 1=0, Error message from Server: Configuration schema is not available
|
<p>I'm trying to read data from the sample table <code>nation</code> in the <code>samples.tpch</code> schema of the Databricks catalog, from my local machine via Spark, but I'm getting this error.</p>
<p><code>com.databricks.client.support.exceptions.GeneralException: [Databricks][JDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: null, Query: SELECT * FROM (SELECT n_nationkey ,n_name ,n_regionkey ,n_comment FROM samples.tpch.nation order by "n_nationkey" limit 50000 offset 0) SPARK_GEN_SUBQ_0 WHERE 1=0, Error message from Server: Configuration schema is not available</code></p>
<p>Here is my sample code.</p>
<pre><code>from pyspark.sql import SparkSession
import os
os.environ['SPARK_HOME'] = '/opt/spark'
os.environ['PATH'] = os.environ['SPARK_HOME'] + '/bin:' + os.environ['PATH']
spark_dict = {
"hostname": "my_hostname",
"user": "my_user",
"password": "my_pass",
"database": "samples",
"schema": "tpch",
"driver": "com.databricks.client.jdbc.Driver",
"url": "jdbc:databricks://host:443/samples;transportMode=http;ssl=1;AuthMech=3;httpPath=httpPath",
"port": "443",
"query": "SELECT n_nationkey ,n_name ,n_regionkey ,n_comment FROM samples.tpch.nation order by \"n_nationkey\" limit 50000 offset 0"
}
spark = (
SparkSession.builder.appName("abc")
.config("spark.jars", "/home/jars/databricks-jdbc-2.6.34.jar")
.getOrCreate()
)
df = spark.read.format("jdbc").options(**spark_dict).load()
df.show()
</code></pre>
<p>Can someone please guide me on how to fix the above error?</p>
|
<python><pyspark><azure-databricks>
|
2024-06-25 11:06:31
| 1
| 606
|
Hassan Shahbaz
|
78,666,961
| 10,309,712
|
Meta-feature analysis: split data for computation on available memory
|
<p>I am working with the meta-feature extractor package <a href="https://github.com/ealcobaca/pymfe" rel="nofollow noreferrer">pymfe</a> for complexity analysis.
On a small dataset this is not a problem; for example:</p>
<pre><code>pip install -U pymfe
from sklearn.datasets import make_classification
from sklearn.datasets import load_iris
from pymfe.mfe import MFE
data = load_iris()
X= data.data
y = data.target
extractor = MFE(features=[ "t1"], groups=["complexity"],
summary=["min", "max", "mean", "sd"])
extractor.fit(X,y)
extractor.extract()
(['t1'], [0.12])
</code></pre>
<p>My dataset is large <code>(32690, 80)</code> and this computation gets killed for excessive memory usage. I work on <code>Ubuntu 24.04</code> with <code>32GB</code> RAM.</p>
<p>To reproduce scenario:</p>
<pre class="lang-py prettyprint-override"><code># Generate the dataset
X, y = make_classification(n_samples=20_000,n_features=80,
n_informative=60, n_classes=5, random_state=42)
extractor = MFE(features=[ "t1"], groups=["complexity"],
summary=["min", "max", "mean", "sd"])
extractor.fit(X,y)
extractor.extract()
Killed
</code></pre>
<p><strong>Question:</strong></p>
<p>How do I split this task to compute on small partitions of the dataset, and combine final results (averaging)?</p>
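<p>A generic chunk-and-average pattern is sketched below with a stand-in metric function so it runs without pymfe. Plugging an <code>MFE</code> extractor into <code>metric_fn</code> is an assumption about how you would wire it up, and note that averaging <code>t1</code> over partitions only approximates the whole-dataset value:</p>

```python
import numpy as np

def chunked_mean_metric(X, y, metric_fn, n_chunks=8, seed=0):
    """Shuffle rows, split into n_chunks partitions, average metric_fn."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))  # shuffle so chunks are roughly alike
    vals = [metric_fn(X[idx], y[idx]) for idx in np.array_split(order, n_chunks)]
    return float(np.mean(vals))

# Stand-in metric so the sketch runs anywhere; in practice metric_fn would
# build an MFE extractor on the chunk and return its "t1" value.
X = np.arange(100, dtype=float).reshape(50, 2)
y = np.arange(50) % 2
val = chunked_mean_metric(X, y, lambda Xc, yc: Xc.mean())
print(val)
```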
|
<python><machine-learning><meta-learning>
|
2024-06-25 11:05:09
| 2
| 4,093
|
arilwan
|
78,666,883
| 3,852,385
|
pip install quickfix failed in windows
|
<p>I'm trying to install the quickfix library on my Windows machine. The Python version is 3.12.2.
However, I get the error below.</p>
<pre><code>python setup.py bdist_wheel did not run successfully
exit code: 1
[7 lines of output]
Testing for std::tr1::shared_ptr...
...not found
Testing for std::shared_ptr...
...not found
Testing for std::unique_ptr...
...not found
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools
</code></pre>
<p>I installed the Microsoft C++ Build Tools as well, but it still gives the same error.
Can anyone suggest a solution for this error?</p>
|
<python><quickfix>
|
2024-06-25 10:50:13
| 1
| 711
|
dGayand
|
78,666,868
| 9,709,594
|
Access SQL Server with Active Directory Domain\Username credentials from Linux using Python
|
<p>I have the following scenario.</p>
<p>I received access to a Windows SQL server with <code>Domain/Username</code> authentication attached to my personal Windows <strong>Active Directory</strong> and also our team's Service Account's Active Directory. When I test with my Windows work station, I can confirm that, I am able to programmatically fetch data from the SQL server. I tested this within my Windows machine with two Python scripts. One used <code>pyodbc</code>, and other used <code>pymssql</code>.</p>
<p><strong>pyodbc example (local)</strong></p>
<pre><code>import pyodbc
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
"SERVER=<server>\<instance>,<port>;"
"DATABASE=<database>;"
"Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM database.table")
for row in cursor.fetchall():
print (row)
cursor.close()
print("Completed!")
</code></pre>
<p><strong>pymssql example (local)</strong></p>
<pre><code>import pymssql
conn = pymssql.connect(host='<server>', server='<instance>', port='<port>', database='<database>')
cursor = conn.cursor()
cursor.execute("SELECT * FROM database.table")
for row in cursor.fetchall():
print (row)
cursor.close()
</code></pre>
<p>The two scripts worked well on my personal Windows laptop. As you can see, in both scenarios I did not have to specify my username or password. For example, when using <code>pyodbc</code>, I used '<code>Trusted_Connection=yes</code>'. As I was executing this script from my personal laptop, which had been granted permission to the external Windows SQL server, the request was authenticated automatically and I was able to query the data. Similarly, when using <code>pymssql</code>, I was able to retrieve data directly from my local laptop.</p>
<p>However, my end objective is to retrieve data when I execute the script in a Linux Kubernetes environment. Therefore, I extended the above two scripts to include my Active Directory <code><Domain>\<Username></code> and <code><password></code>. However, in both scenarios the code does not behave as expected, i.e. using the Active Directory <code><Domain>\<Username></code> and <code><password></code> combination from within the Linux environment throws errors. The extended <code>pyodbc</code> and <code>pymssql</code> scripts are provided below, along with the error messages received.</p>
<p><strong>pyodbc example</strong></p>
<pre><code>import pyodbc
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
"SERVER=<server>\<instance>,<port>;"
"DATABASE=<database>;"
"Trusted_Connection=yes;"
"UID=<domain>\<username>;"
"PWD=<password>;"
"Integrated_Security=SSPI"
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM database.table")
for row in cursor.fetchall():
print (row)
cursor.close()
print("Completed!")
</code></pre>
<p><strong>pyodbc error encountered</strong></p>
<pre><code> conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
pyodbc.OperationalError: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0) (SQLDriverConnect)')
</code></pre>
<p><strong>pymssql example</strong></p>
<pre><code>import pymssql
conn = pymssql.connect(host='<server>', server='<instance>', port='<port>', database='<database>', user='<domain\\username>', password='<password>', tds_version='7.0')
cursor = conn.cursor()
cursor.execute("SELECT * FROM database.table")
for row in cursor.fetchall():
print (row)
cursor.close()
</code></pre>
<p><strong>pymssql error encountered:</strong></p>
<pre><code> conn = pymssql.connect(host='<server>', server='<instance>', port='<port>', database='<db>', user='<domain>\\<username>', password='<password>', tds_version='7.0')
File "src/pymssql/_pymssql.pyx", line 659, in pymssql._pymssql.connect
pymssql.exceptions.OperationalError: (20009, b'DB-Lib error message 20009, severity 9:\nUnable to connect: Adaptive Server is unavailable or does not exist (<server>)\nNet-Lib error during Connection timed out (110)\n')
</code></pre>
<p>I primarily followed Stack Overflow article, titled, <a href="https://stackoverflow.com/questions/49695990/authenticate-from-linux-to-windows-sql-server-with-pyodbc">Authenticate from Linux to Windows SQL Server with pyodbc</a>, and <a href="https://sites.google.com/site/hellobenchen/home/wiki/python/sql-server-windows-authentication-linux-python" rel="nofollow noreferrer">SQL Server windows authentication linux python</a> to accomplish my task. But, the steps I followed did not help me yet.</p>
<p>What steps am I missing? Do you see an issue in my code, or do you have alternate suggestions?</p>
|
<python><sql-server><linux><active-directory><pyodbc>
|
2024-06-25 10:47:28
| 0
| 416
|
0xMinCha
|
78,666,683
| 5,552,153
|
How to update axis labels in Plotly Dashboard?
|
<p>I am using the following code to monitor an output file from another application named TOUGH.</p>
<p>I define a graph component and update its figure in the <code>update_graph_live()</code> callback function. To keep my graphs legible, I adjust the x-axis time units. But although the data appears to update correctly, the x-axis labels remain what they were when the app was first launched, despite my returning the figure from the callback. I run the script from the Spyder IDE in an Anaconda virtual env.</p>
<p>I scratched my head some more when I checked the Dash debugger and saw that my debugging counter was incrementing fine on the dashboard, yet the figure was still not updated.</p>
<p><a href="https://i.sstatic.net/JflCyMW2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JflCyMW2.png" alt="enter image description here" /></a></p>
<p>Any help would be most welcome.</p>
<p>Here is what the dash looks like:
<a href="https://i.sstatic.net/87DkVPTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/87DkVPTK.png" alt="Dash Output" /></a></p>
<pre><code>import os
import time
import pandas as pd
import numpy as np
import plotly.graph_objs as go
from dash import Dash, dcc, html
from dash.dependencies import Input, Output, State
from plotly.subplots import make_subplots
# Create Dash app
app = Dash(__name__)
# Layout of the app
app.layout = html.Div([
dcc.Input(
id='file-path',
type='text',
placeholder='Enter the file path...',
style={'width': '100%'}
),
dcc.Graph(id='live-graph', animate=True),
dcc.Interval(
id='graph-update',
interval=1000, # in milliseconds (1 seconds)
n_intervals=0
)
])
def load_data(file_path):
try:
data = pd.read_csv(file_path, header=0, skiprows=0, index_col=False)
new_headers = [s.strip() for s in data.columns.values] # remove whitespaces
data.columns = new_headers
# Adjust the times in a legible unit
max_time = data["TIME(S)"].max()
if max_time > 2 * 365 * 24 * 3600:
data["TIME(Years)"] = data["TIME(S)"] / (365 * 24 * 3600)
time_col = "TIME(Years)"
time_label = "Time (Years)"
elif max_time > 2 * 30 * 24 * 3600:
data["TIME(Months)"] = data["TIME(S)"] / (30 * 24 * 3600)
time_col = "TIME(Months)"
time_label = "Time (Months)"
elif max_time > 2 * 7 * 24 * 3600:
data["TIME(Weeks)"] = data["TIME(S)"] / (7 * 24 * 3600)
time_col = "TIME(Weeks)"
time_label = "Time (Weeks)"
elif max_time > 1 * 24 * 3600:
data["TIME(Days)"] = data["TIME(S)"] / (24 * 3600)
time_col = "TIME(Days)"
time_label = "Time (Days)"
elif max_time > 0.5 * 3600:
data["TIME(Hours)"] = data["TIME(S)"] / 3600
time_col = "TIME(Hours)"
time_label = "Time (Hours)"
elif max_time > 24:
data["TIME(Minutes)"] = data["TIME(S)"] / 60
time_col = "TIME(Minutes)"
time_label = "Time (Minutes)"
else:
time_col = "TIME(S)"
time_label = "Time (Seconds)"
print(time_label)
data.iloc[:, 1:] = data.iloc[:, 1:].apply(pd.to_numeric)
return data, time_col, time_label
except Exception as e:
print(f"Error loading data: {e}")
return None, None, None
@app.callback(
Output('live-graph', 'figure'),
[Input('graph-update', 'n_intervals')],
[State('file-path', 'value')]
)
def update_graph_live(n_intervals, file_path):
if not file_path or not os.path.exists(file_path):
return go.Figure()
data, time_col, time_label = load_data(file_path)
if data is None:
return go.Figure()
time = data[time_col]
pressure = data["PRES"]
temperature = data["TEMP"]
saturation_gas = data["SAT_Gas"]
saturation_aqu = data["SAT_Aqu"]
time_diff = data["TIME(S)"].diff().dropna()
fig = make_subplots(rows=2, cols=3, subplot_titles=("Pressure vs Time", "Molar Fractions vs Time", "Time Difference vs Time", "Temperature vs Time", "Saturation vs Time"))
fig.add_trace(go.Scatter(x=time, y=pressure, mode='lines', name='Pressure (Pa)', line=dict(color='blue')), row=1, col=1)
fig.add_trace(go.Scatter(x=time, y=temperature, mode='lines', name='Temperature (°C)', line=dict(color='red')), row=2, col=1)
fig.add_trace(go.Scatter(x=time, y=saturation_gas, mode='lines', name='Gas Phase Saturation', line=dict(color='green')), row=2, col=2)
fig.add_trace(go.Scatter(x=time, y=saturation_aqu, mode='lines', name='Aqueous Phase Saturation', line=dict(color='blue')), row=2, col=2)
# just sorting out some colouring and styling for legibility
for col in data.columns[6:]:
color = 'black'
linestyle = 'solid'
if 'H2' in col:
color = 'green'
elif 'CH4' in col:
color = 'lightblue'
elif 'water' in col:
color = 'blue'
if 'Gas' in col:
linestyle = 'solid'
elif 'Aqu' in col:
linestyle = 'dash'
if 'TIME' in col:
continue
fig.add_trace(go.Scatter(x=time, y=data[col], mode='lines', name=col, line=dict(color=color, dash=linestyle)), row=1, col=2)
fig.add_trace(go.Scatter(x=time.iloc[1:len(time_diff)+1], y=time_diff, mode='lines', name='Time Step Size', line=dict(color='purple')), row=1, col=3)
# # PREVIOUS ATTEMPT
# fig.update_layout(
# title='Real-Time TOUGH Simulation Data',
# showlegend=True,
# yaxis3_type='log' # Setting the y-axis of the third subplot to log scale
# )
# PREVIOUS ATTEMPT
# # Update x-axis labels individually
# fig.update_xaxes(title_text=time_label, row=1, col=1)
# fig.update_xaxes(title_text=time_label, row=1, col=2)
# fig.update_xaxes(title_text=time_label, row=1, col=3)
# fig.update_xaxes(title_text=time_label, row=2, col=1)
# fig.update_xaxes(title_text=time_label, row=2, col=2)
# CURRENT ATTEMPT
fig.update_layout(
title='Real-Time TOUGH Simulation Data',
showlegend=True,
yaxis3_type='log', # Setting the y-axis of the third subplot to log scale
xaxis_title_text=time_label+str(n_intervals), # Update x-axis
# labels. I add the interval counter for debugging
xaxis2_title_text=time_label,
xaxis3_title_text=time_label,
xaxis4_title_text=time_label,
xaxis5_title_text=time_label
)
print('Update> ',time_label) # just a sanity check which demonstrates that the time
# conversion and label change is being read correctly - which it is
fig.update_yaxes(title_text='Pressure (Pa)', row=1, col=1)
fig.update_yaxes(title_text='Molar Fraction (-)', row=1, col=2)
fig.update_yaxes(title_text='Timestep (s)', row=1, col=3)
fig.update_yaxes(title_text='Temperature (°C)', row=2, col=1)
fig.update_yaxes(title_text= 'Saturation (-)', row=2, col=2)
return fig
if __name__ == '__main__':
app.run_server(debug=True, port=8050)
</code></pre>
|
<python><plotly><plotly-dash>
|
2024-06-25 10:08:20
| 1
| 935
|
Sorade
|
78,666,658
| 12,427,876
|
PyLance Auto-Completion with Docker
|
<p>I'm using PyLance in Visual Studio Code to work on a Dockerized Python project. There are certain dependencies that are very complex to build on macOS, so I dockerized the project and now it works fine.</p>
<p>The problem is, since I didn't build/install the library locally on my OS, PyLance is not able to detect the library and provide auto-completion.</p>
<p>Is there any hack to make PyLance detect and read packages that live in Docker containers, or that simply aren't installed locally (e.g., a cloned git repo that was never installed)?</p>
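<p>Two common workarounds (neither confirmed for this exact setup): the Dev Containers extension runs VS Code's language server inside the container, so Pylance sees the installed packages; short of that, <code>python.analysis.extraPaths</code> can point Pylance at a local, uninstalled clone of the library's source. The path below is a placeholder:</p>

```json
{
    "python.analysis.extraPaths": [
        "./vendor/the-cloned-repo/src"
    ]
}
```

This goes in <code>.vscode/settings.json</code>; it only helps completion, not running the code outside Docker.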
|
<python><docker><pylance>
|
2024-06-25 10:03:27
| 0
| 411
|
TaihouKai
|
78,666,604
| 3,825,948
|
Call openai.Audio.transcribe('model_name', bytes, prompt=language_prompt, response_format='verbose_json') with timeout
|
<p>Is it possible to call the OpenAI speech to text API in Python with a timeout so if it doesn't respond within the timeout the call is cancelled? The call has the following signature:</p>
<pre><code>openai.Audio.transcribe('model_name', bytes, prompt=language_prompt,
                        response_format='verbose_json')
</code></pre>
<p>I tried using the Python requests library, but it didn't work. Can I possibly use the Python OpenAI module for this purpose? Any advice would be much appreciated. Thanks.</p>
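<p>For what it's worth, the legacy 0.x <code>openai</code> library accepts a <code>request_timeout</code> keyword on most calls (worth verifying for the installed version); failing that, a generic wrapper with <code>concurrent.futures</code> stops waiting after a deadline. A sketch; note the underlying HTTP request is abandoned, not cancelled:</p>

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout
import time

def call_with_timeout(fn, timeout, *args, **kwargs):
    # Run fn in a worker thread and stop *waiting* after `timeout` seconds.
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn, *args, **kwargs).result(timeout=timeout)
    finally:
        pool.shutdown(wait=False)

# Stand-in for openai.Audio.transcribe('model_name', bytes, ...)
def slow_transcribe():
    time.sleep(1.0)
    return "transcript"

try:
    call_with_timeout(slow_transcribe, timeout=0.1)
    timed_out = False
except FutureTimeout:
    timed_out = True
```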
|
<python><timeout><openai-api>
|
2024-06-25 09:51:38
| 0
| 937
|
Foobar
|
78,666,512
| 2,016,165
|
Modify QGIS layer every/after ~5 seconds (without blocking the main thread)
|
<p>I wrote a Python script for QGIS 3.36.2 (uses Python 3.12.3) that does the following:</p>
<ol>
<li>Create a layer</li>
<li>Start an HTTP GET request to fetch new coordinates (runs asynchronously by default)</li>
<li>Use these coordinates to draw a marker on the layer (the old marker is removed first, has to run on the main thread afaik)</li>
</ol>
<p>Step 1 only happens once. Steps 2 and 3 should run indefinitely but stop if there's an error or if the user stops the script. For testing I only want to run them e.g. 10 times.</p>
<p>What I've found/tried so far:</p>
<ul>
<li><code>time.sleep()</code> (as suggested <a href="https://stackoverflow.com/a/25251804/2016165">here</a>) freezes QGIS completely.</li>
<li><code>sched</code> scheduler (see code below) also blocks the main thread and freezes QGIS.</li>
<li><code>threading.Timer</code> would start a new thread every time (and you wouldn't be able to stop the loop), so the <a href="https://stackoverflow.com/a/10813316/2016165">answer</a> advises against using it - untested because of that.</li>
<li>I can't use <code>Tkinter</code> because QGIS' python doesn't support it.</li>
<li><code>asyncio</code> (as suggested <a href="https://stackoverflow.com/a/14040516/2016165">here</a>) doesn't seem to be fully supported in this QGIS version either (lots of errors when trying to run <a href="https://stackoverflow.com/a/29924261/2016165">this</a> example but it's working fine in the Python 3.9 console) and it's also kind of blocking because it uses coroutines (see <a href="https://stackoverflow.com/q/53264314/2016165">this</a> question; you can <code>yield</code>).</li>
</ul>
<p>How do I repeat steps 2 and 3 multiple times if there's no error, e.g. 5 seconds after the last iteration finished, without blocking the GUI (especially the map viewer) with some type of <code>sleep</code> and preferably without using any extra libraries?</p>
<p>My code:</p>
<pre><code>#imports here
class ArrowDrawerClass:
layer = None
dataprovider = None
feature = None
repeat = True
url = "someURL"
repeatCounter = 0
myscheduler = sched.scheduler(time.time,time.sleep)
def __init__(self):
self.createNewLayer()
def createNewLayer(self):
layername = "ArrowLayer"
self.layer = QgsVectorLayer('Point', layername, "memory")
self.dataprovider = self.layer.dataProvider()
self.feature = QgsFeature()
#Set symbol, color,... of layer here
QgsProject.instance().addMapLayers([self.layer])
def doRequest(self):
request = QNetworkRequest(QUrl(self.url))
request.setTransferTimeout(10000) #10s
self.manager = QNetworkAccessManager()
self.manager.finished.connect(self.handleResponse)
self.manager.get(request)
def handleResponse(self, reply):
err = reply.error()
if err == QtNetwork.QNetworkReply.NetworkError.NoError:
bytes = reply.readAll()
replytext = str(bytes, 'utf-8').strip()
#extract coordinates here ...
self.drawArrow(x,y)
else:
self.displayError(str(err),reply.errorString())
def drawArrow(self,x,y):
self.layer.dataProvider().truncate() #removes old marker
point1 = QgsPointXY(x,y)
self.feature.setGeometry(QgsGeometry.fromPointXY(point1))
self.dataprovider.addFeatures([self.feature])
self.layer.updateExtents()
self.layer.triggerRepaint()
self.repeatCounter += 1
self.repeatEverything()
def displayError(self,code,msg):
self.repeat = False
#show error dialog here
def start(self):
self.myscheduler.enter(0,0,self.doRequest)
self.myscheduler.run()
def repeatEverything(self):
print("counter:",self.repeatCounter)
if self.repeat and self.repeatCounter<10:
print("repeat")
self.myscheduler.enter(5,0,self.test) #TODO: Call "self.doRequest()" instead
self.myscheduler.run()
else:
print("don't repeat!")
def test(self):
print("test!")
adc = ArrowDrawerClass()
adc.start()
</code></pre>
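<p>For reference, the repeat-after-finish pattern the code above is going for can be sketched without Qt; in QGIS itself the idiomatic non-blocking tool would be <code>qgis.PyQt.QtCore.QTimer</code> (e.g. <code>QTimer.singleShot(5000, self.tick)</code> called from inside the callback), which is an assumption worth verifying against your QGIS version:</p>

```python
import sched
import time

class Repeater:
    # Pattern sketch: each iteration schedules the *next* one only after the
    # current one finishes, and stops on a counter or error flag.  In QGIS,
    # replace the sched calls with QTimer.singleShot(...) so nothing blocks
    # the GUI thread.
    def __init__(self, delay=0.01, max_runs=3):
        self.scheduler = sched.scheduler(time.time, time.sleep)
        self.delay = delay
        self.max_runs = max_runs
        self.count = 0
        self.ok = True

    def tick(self):
        self.count += 1  # stands in for doRequest() + drawArrow()
        if self.ok and self.count < self.max_runs:
            self.scheduler.enter(self.delay, 0, self.tick)

    def start(self):
        self.scheduler.enter(0, 0, self.tick)
        self.scheduler.run()  # blocking here; the QTimer version is not

r = Repeater()
r.start()
```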
|
<python><qgis>
|
2024-06-25 09:32:16
| 1
| 2,013
|
Neph
|
78,666,266
| 7,476,542
|
No module named 'src'
|
<p>I have a python project. The folder structure is like the following:</p>
<pre><code>📦src
┣ 📂utils
┃ ┣ 📜custom_pow.py
┃ ┣ 📜equation.py
┃ ┗ 📜__init__.py
┣ 📜app.py
┗ 📜__init__.py
</code></pre>
<p><strong>custom_pow.py</strong></p>
<pre><code>class Pow:
def __init__(self,a,n):
self.a = a
self.n = n
def pow(self):
out = self.a ** self.n
return out
</code></pre>
<p><strong>equation.py</strong></p>
<pre><code>from src.utils.custom_pow import Pow
class CustFormula:
def __init__(self,a,b):
self.a = a
self.b = b
def apbwsq(self):
ap = Pow(self.a,2)
bp = Pow(self.b,2)
out = (ap.pow()) + (2*self.a*self.b) + (bp.pow())
return out
def apbwcub(self):
aap = Pow(self.a,3)
bbp = Pow(self.b,3)
ap = Pow(self.a,2)
bp = Pow(self.b,2)
out = (aap.pow()) + (bbp.pow()) + 3*ap.pow()*self.b + 3*bp.pow()*self.a
return out
</code></pre>
<p><strong>app.py</strong></p>
<pre><code>from src.utils.equation import CustFormula
p = CustFormula(2,3)
out = p.apbwsq()
print(f"(2+3)^2 = {out}")
</code></pre>
<p>I am pretty new to Python packaging. As per my understanding, if we put an <code>__init__.py</code> file in a folder, it is treated as a package.</p>
<p>But when I run app.py from the src folder, I get the error below:</p>
<pre><code>File "********\module-test\src\app.py", line 2, in <module>
from src.utils.equation import CustFormula
ModuleNotFoundError: No module named 'src'
</code></pre>
<p>Please help me to solve this.</p>
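<p>A likely cause: running <code>python app.py</code> from inside <code>src</code> puts <code>src</code> itself on <code>sys.path</code>, not its parent, so the top-level package <code>src</code> is never importable. Running <code>python -m src.app</code> from the project root is the usual fix. A throwaway reproduction of the working setup (sketch; the temp-directory layout is illustrative only):</p>

```python
import pathlib
import sys
import tempfile

# Recreate the questioner's layout in a throwaway directory.
root = pathlib.Path(tempfile.mkdtemp())
(root / "src" / "utils").mkdir(parents=True)
(root / "src" / "__init__.py").write_text("")
(root / "src" / "utils" / "__init__.py").write_text("")
(root / "src" / "utils" / "custom_pow.py").write_text(
    "class Pow:\n"
    "    def __init__(self, a, n):\n"
    "        self.a = a\n"
    "        self.n = n\n"
    "    def pow(self):\n"
    "        return self.a ** self.n\n"
)

# 'from src.utils.custom_pow import Pow' only resolves when the *parent*
# of src/ is on sys.path -- which is what running `python -m src.app`
# from the project root gives you implicitly.
sys.path.insert(0, str(root))
from src.utils.custom_pow import Pow

print(Pow(2, 3).pow())  # -> 8
```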
|
<python><path>
|
2024-06-25 08:44:04
| 1
| 303
|
Sayandip Ghatak
|
78,666,050
| 2,660,278
|
Understanding result of combining 3D transformations with `pytransform3d`
|
<p>I feel like I am missing something very basic here. Question is specific to <code>pytransform3d</code>. I am most likely missing a basic understanding of 3D transformations and/or the <code>pytransform3d</code> library.</p>
<p>Here is the setup:</p>
<p><a href="https://i.sstatic.net/7AMv4Ame.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7AMv4Ame.png" alt="enter image description here" /></a></p>
<p>Coordinate system <code>O</code> is inside the cube, and coordinate systems <code>A</code> and <code>B</code> are on the corners as shown.</p>
<p>I have two transformation matrices <code>O2A</code> and <code>O2B</code> (named in the style used in <code>pytransform3d</code>). Since I don't know how to visualize rotation matrices, I convert them into intrinsic Euler XYZ rotations. Using the translations and the intrinsic Euler XYZ rotations, I can visually confirm that the transformation matrices <code>O2A</code> and <code>O2B</code> are correct.</p>
<p>Now, I want to calculate the transformation matrix <code>A2B</code>. Using <code>numpy</code>, I come up with <code>A2B = inv(O2A) @ O2B</code> which I understand and am happy with. Again, just to verify, I can convert into intrinsic Euler XYZ rotations and check visually on the graph.</p>
<p>Now, according to <code>pytransform3d</code> docs of <a href="https://dfki-ric.github.io/pytransform3d/_apidoc/pytransform3d.transformations.concat.html#pytransform3d.transformations.concat" rel="nofollow noreferrer"><code>concat</code></a> and <a href="https://dfki-ric.github.io/pytransform3d/_apidoc/pytransform3d.transformations.invert_transform.html#pytransform3d.transformations.invert_transform" rel="nofollow noreferrer"><code>invert_transform</code></a>, this should be the same as <code>A2B = pytr.concat(pytr.invert_transform(O2A), O2B)</code> but it is not. If I look at the translations and intrinsic Euler XYZ rotations of this result, they are not what I would expect. Looking at <a href="https://dfki-ric.github.io/pytransform3d/transformation_modeling.html" rel="nofollow noreferrer">this</a> docs page, it seems like it should be the same but I am having difficulty understanding why.</p>
<p>Question: In what sense is the resulting <code>A2B</code> is a transformation <em>from</em> <code>A</code> <em>to</em> <code>B</code>? Perhaps I need to further post-process the result before extracting the translations and intrinsic Euler XYZ rotations?</p>
<p>Below I have some basic Python snippet to show the values:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from math import degrees, radians
from numpy.linalg import inv
import pytransform3d.rotations as pyrot
import pytransform3d.transformations as pytr
def print_rp(X2Y):
    r = list(map(degrees, pyrot.intrinsic_euler_xyz_from_active_matrix(X2Y[0:3, 0:3])))
    p = X2Y[0:3, 3:4].ravel().tolist()
    print(f"intrinsic Euler XYZ rotations {r}")
    print(f"translations {p}")
print("# O2A")
O2A = pytr.transform_from(
R = np.block([[1,0,0],[0,1,0],[0,0,1]]),
p = [0.05,0.05,-0.05]
)
print_rp(O2A)
print("# O2B")
O2B = pytr.transform_from(
R = np.block([[-1,-0,-0],[0,-1,0],[0,0,1]]),
p = [0.05,-0.05,-0.05],
)
print_rp(O2B)
print("# A2B (should work but doesn't)")
A2B = pytr.concat(pytr.invert_transform(O2A), O2B)
print_rp(A2B)
print("# A2B (works but do not know why)")
A2B = pytr.concat(O2B, pytr.invert_transform(O2A))
print_rp(A2B)
print("# A2B (works as expected)")
A2B = inv(O2A) @ O2B
print_rp(A2B)
</code></pre>
<p>And the output:</p>
<pre><code># O2A
intrinsic Euler XYZ rotations [0.0, 0.0, 0.0]
translations [0.05, 0.05, -0.05]
# O2B
intrinsic Euler XYZ rotations [0.0, 0.0, 180.0]
translations [0.05, -0.05, -0.05]
# A2B (should work but doesn't)
intrinsic Euler XYZ rotations [0.0, 0.0, 180.0]
translations [0.1, 0.0, 0.0]
# A2B (works but do not know why)
intrinsic Euler XYZ rotations [0.0, 0.0, 180.0]
translations [0.0, -0.1, 0.0]
# A2B (works as expected)
intrinsic Euler XYZ rotations [0.0, 0.0, 180.0]
translations [0.0, -0.1, 0.0]
</code></pre>
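<p>For reference, <code>pytransform3d</code>'s <code>concat(A2B, B2C)</code> left-multiplies its second argument, i.e. it returns <code>B2C @ A2B</code>; the argument order is therefore reversed relative to plain matrix multiplication, which is why swapping the arguments made the result match. A pure-NumPy sketch of that convention (<code>concat_like</code> is a stand-in, not the library function):</p>

```python
import numpy as np

def concat_like(A2B, B2C):
    # pytransform3d-style composition: the second argument is applied
    # after the first, so the result is B2C @ A2B (reversed vs. `@`).
    return B2C @ A2B

O2A = np.eye(4)
O2A[:3, 3] = [0.05, 0.05, -0.05]

O2B = np.eye(4)
O2B[:3, :3] = np.diag([-1.0, -1.0, 1.0])
O2B[:3, 3] = [0.05, -0.05, -0.05]

lhs = np.linalg.inv(O2A) @ O2B              # the "works as expected" form
rhs = concat_like(O2B, np.linalg.inv(O2A))  # same result, arguments swapped
assert np.allclose(lhs, rhs)
```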
|
<python><3d><rotation><coordinate-systems><coordinate-transformation>
|
2024-06-25 07:55:13
| 0
| 358
|
user2660278
|
78,665,714
| 4,097,712
|
How to create rv_histogram from known probability density function values
|
<p>Let's say there is a distribution, and all that is known about it is the value of its probability density function over small ranges (e.g. from 0.0 to 0.001).</p>
<p>Then, I want to use the statistical functions in <code>scipy</code> to describe the distribution, and I found a function called <code>scipy.stats.rv_histogram</code> which seems usable, because everything I know about the distribution is essentially an array of histogram values.</p>
<p>The documentation says of the first parameter:</p>
<blockquote>
<p>In particular, the return value of <code>numpy.histogram</code> is accepted.</p>
</blockquote>
<p>Following the guide, I tried to call <code>numpy.histogram</code>, but I found that it actually expects counts, not the probability density for each bin; i.e. "there are 123 elements in the range from 0.0 to 0.001", not "the probability density there is 0.1". And its <code>density</code> parameter controls the output, not the input.</p>
<p>Are there other solutions for the case where only the values of the probability density function are known?</p>
<p>Take the following distribution as an example:
<a href="https://i.sstatic.net/fyDV3F6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fyDV3F6t.png" alt="enter image description here" /></a></p>
<p>The height of each bin is known and the width of each bin is a constant. Here, the height of each bin is the probability density of the distribution, not the count in that small range.</p>
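<p>For reference: since <code>rv_histogram</code> renormalizes whatever first array it is given, pdf values on an equal-width grid work directly in place of counts; for unequal bins, newer SciPy versions (1.10+, worth verifying) also accept a <code>density=True</code> argument. A sketch with made-up numbers:</p>

```python
import numpy as np
from scipy.stats import rv_histogram

# Hypothetical known pdf values on a uniform grid: 10 bins over [0, 0.001].
edges = np.linspace(0.0, 0.001, 11)
pdf_vals = np.array([0.1, 0.3, 0.7, 1.2, 2.0, 2.0, 1.2, 0.7, 0.3, 0.1]) * 1e3

# rv_histogram renormalizes the first array, so with equal-width bins the
# pdf values can be passed directly in place of counts -- the
# normalization constant cancels out.
dist = rv_histogram((pdf_vals, edges))
```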
|
<python><numpy><math><scipy><statistics>
|
2024-06-25 06:35:57
| 0
| 940
|
Tonny Tc
|
78,665,646
| 2,385,132
|
PyTorch model training with DataLoader is too slow
|
<p>I'm training a very small NN using the HAM10000 dataset. For loading the data I'm using the DataLoader that ships with PyTorch:</p>
<pre><code>class CocoDetectionWithFilenames(CocoDetection):
def __init__(self, root: str, ann_file: str, transform=None):
super().__init__(root, ann_file, transform)
def get_filename(self, idx: int) -> str:
return self.coco.loadImgs(self.ids[idx])[0]["file_name"]
def get_loaders(root: str, ann_file: str) -> tuple[CocoDetection, DataLoader, DataLoader, DataLoader]:
transform = transforms.Compose([
transforms.ToTensor()
])
dataset = CocoDetectionWithFilenames(
root=root,
ann_file=ann_file,
transform=transform
)
train_size = int(0.7 * len(dataset))
valid_size = int(0.15 * len(dataset))
test_size = len(dataset) - train_size - valid_size
train_dataset, valid_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, valid_size, test_size])
num_workers = os.cpu_count()
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=32,
shuffle=True,
num_workers=num_workers,
pin_memory=True,
prefetch_factor=1024
)
valid_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=32,
shuffle=False,
num_workers=num_workers,
pin_memory=True,
prefetch_factor=1024
)
test_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=32,
shuffle=False,
num_workers=num_workers,
pin_memory=True
)
return dataset, train_loader, valid_loader, test_loader
</code></pre>
<p>The thing is, when my training loop runs, the training itself is very fast, but the program spends 95% of the time in between epochs, probably loading the data:</p>
<pre><code>def extract_bboxes(targets: list[dict]) -> list[torch.Tensor]:
bboxes = []
for target in targets:
xs, ys, widths, heights = target["bbox"]
for idx, _ in enumerate(xs):
x1, y1, width, height = xs[idx], ys[idx], widths[idx], heights[idx]
# Convert COCO format (x, y, width, height) to (x1, y1, x2, y2)
x2, y2 = x1 + width, y1 + height
bboxes.append(torch.IntTensor([x1, y1, x2, y2]))
return bboxes
num_epochs = 25
train_losses = []
val_losses = []
for epoch in range(num_epochs):
model.train()
running_loss = 0.0
for images, targets in train_loader_tqdm:
images = images.to(device)
bboxes = extract_bboxes(targets)
bboxes = torch.stack(bboxes).to(device)
optimizer.zero_grad(set_to_none=True)
outputs = model(images)
loss = criterion(outputs, bboxes)
loss.backward()
optimizer.step()
running_loss += loss.item()
epoch_train_loss = running_loss / len(train_loader)
train_losses.append(epoch_train_loss)
print(f"Epoch {epoch + 1}, Loss: {epoch_train_loss}")
model.eval()
</code></pre>
<p>As you can see, the training loop code is quite simple, nothing weird happening there.</p>
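<p>One way to confirm where the time goes is to split loader-wait time from compute time; a framework-agnostic sketch (pass <code>train_loader</code> in the real script; the plain list here is only a stand-in):</p>

```python
import time

def profile_loader(loader, steps=50):
    # Rough split of time spent waiting on the loader vs. doing "work".
    # In the real script, keep the forward/backward pass in place of the
    # no-op below.
    t_data = t_work = 0.0
    it = iter(loader)
    for _ in range(steps):
        t0 = time.perf_counter()
        try:
            batch = next(it)
        except StopIteration:
            break
        t_data += time.perf_counter() - t0
        t0 = time.perf_counter()
        _ = batch  # stand-in for model(images), loss.backward(), etc.
        t_work += time.perf_counter() - t0
    return t_data, t_work

# Sanity check with a plain list standing in for a DataLoader.
data_t, work_t = profile_loader([1, 2, 3], steps=10)
```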
|
<python><pytorch><pytorch-dataloader>
|
2024-06-25 06:12:50
| 1
| 3,930
|
Marek M.
|
78,665,637
| 1,390,558
|
Using AWS Moto with Python mock to write unit tests
|
<p>I am working with a codebase that doesn't have any tests setup. I am trying to add some tests and currently have the below test class <code>test_main.py</code></p>
<pre><code>from unittest import TestCase
from moto import mock_aws
import boto3
from main import collect_emr
class Test(TestCase):
def setUp(self):
pass
@mock_aws
def test_collect_emr(self):
emr_client = boto3.client('emr')
emr_client.run_job_flow(
Instances={
"InstanceCount": 3,
"KeepJobFlowAliveWhenNoSteps": True,
"MasterInstanceType": "c3.medium",
"Placement": {"AvailabilityZone": "us-east-1a"},
"SlaveInstanceType": "c3.xlarge",
},
JobFlowRole="EMR_EC2_DefaultRole",
LogUri="s3://mybucket/log",
Name="cluster",
ServiceRole="EMR_DefaultRole",
VisibleToAllUsers=True,
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
expected_outcome = [{'arn': 'arn:aws:elasticmapreduce:ap-southeast-2:123456789012:cluster/j-S78L4S34G4Q7U', 'tags': [{'key': 'string', 'value': 'string'}]}]
actual_outcome = list(collect_emr('us-east-1', 'arn:aws:iam::123456789012:role/role_name'))
self.assertEqual(expected_outcome, actual_outcome)
</code></pre>
<p>then the method I am testing from <code>main.py</code>:</p>
<pre><code>def collect_emr(region_name: str, role_arn: str) -> Generator[ResourceInfo, Any, None]:
creds = assume_role(role_arn)
client = boto3.client(
"emr",
aws_access_key_id=creds["AccessKeyId"],
aws_secret_access_key=creds["SecretAccessKey"],
aws_session_token=creds["SessionToken"],
region_name=region_name,
config=config,
)
cluster_ids = []
for page in client.get_paginator("list_clusters").paginate():
for cluster in page['Clusters']:
if 'Id' not in cluster:
continue
else:
cluster_id = cluster['Id']
cluster_ids.append(cluster_id)
if cluster_ids:
for cluster_id in cluster_ids:
describe_cluster = client.describe_cluster(ClusterId = cluster_id)
if 'Cluster' not in describe_cluster:
continue
yield {
"arn": describe_cluster['Cluster']['ClusterArn'],
"tags": list(
[{"key": t["Key"], "value": t["Value"]} for t in describe_cluster['Cluster']["Tags"]]
),
}
</code></pre>
<p>and my test output when I run them:</p>
<pre><code>Ran 1 test in 0.852s
FAILED (failures=1)
[] != [{'arn': 'arn:aws:elasticmapreduce:ap-southeast-2:123456789012:cluster/j-S78L4S34G4Q7U',
'tags': [{'key': 'string', 'value': 'string'}]}]
</code></pre>
<p>I understand that the <code>client = </code> inside the <code>collect_emr(...</code> method in <code>main.py</code> is creating a new boto3 client which doesn't know about the resources created by the <code>emr_client</code> in my test method <code>test_collect_emr(self):</code>, but I'm unsure how to go about resolving this. In the past I would write methods that always take the client as a parameter, e.g.:</p>
<pre><code>def collect_emr(boto_client):
...
</code></pre>
<p>and then pass in the boto client I created in my test case. I am unable to do that here, as the code base has stuff all over the place and refactoring the method this way would break a lot of things. Is it possible to combine Python's mock and AWS Moto in the same test, or would I need to use one or the other? Any help would be greatly appreciated, as I'm a bit stuck on the best way to approach this.</p>
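<p>For reference, one common way out without refactoring is to keep Moto for the AWS side and patch only <code>assume_role</code> with <code>unittest.mock</code>, so the client built inside <code>collect_emr</code> uses fake credentials. A self-contained sketch of the patching mechanics (the <code>SimpleNamespace</code> stands in for the real <code>main</code> module, and the credential keys are taken from the snippet above):</p>

```python
from unittest import mock
import types

# Stand-in for the real main.py: collect_emr calls a module-level
# assume_role, which is exactly what makes it hard to test directly.
main = types.SimpleNamespace(
    assume_role=lambda arn: {"AccessKeyId": "real",
                             "SecretAccessKey": "real",
                             "SessionToken": "real"}
)

# Fake STS credentials; Moto accepts any values.
fake_creds = {"AccessKeyId": "testing",
              "SecretAccessKey": "testing",
              "SessionToken": "testing"}

# In the real test this would be mock.patch("main.assume_role",
# return_value=fake_creds) inside the @mock_aws-decorated test method.
with mock.patch.object(main, "assume_role", return_value=fake_creds):
    creds = main.assume_role("arn:aws:iam::123456789012:role/role_name")
```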
|
<python><mocking><moto>
|
2024-06-25 06:09:44
| 0
| 1,469
|
Justin S
|
78,665,486
| 9,757,174
|
(4096, 'Microsoft Outlook', 'The operation failed.', None, 0, -2147287037) error when saving an email to my local machine
|
<p>I am trying to download some emails from my outlook folder to my local machine as <code>.msg</code> files using the following code.</p>
<pre class="lang-py prettyprint-override"><code>messages = source_folder.Items
for message in messages:
# Estimate the size of the email
size = message.Size
if size < 5 * 1024 * 1024: # 5MB
# Save the email as a .msg file
subject = message.Subject.replace(":", "").replace("\\", "").replace("/", "").replace("*", "").replace("?", "").replace("\"", "").replace("<", "").replace(">", "").replace("|", "")
received_time = message.ReceivedTime.strftime("%Y%m%d%H%M%S")
file_name = f"{subject}.msg"
file_path = os.path.join(email_dir, file_name)
time.sleep(10)
message.SaveAs(file_path) # 3 corresponds to the .msg format
print(f"Emails have been saved to {email_dir}")
</code></pre>
<p>I get a <code>com_error</code> on the same. The stack trace is below:</p>
<pre><code>---------------------------------------------------------------------------
com_error Traceback (most recent call last)
Cell In[17], line 12
10 file_path = os.path.join(email_dir, file_name)
11 time.sleep(10)
---> 12 message.SaveAs(file_path) # 3 corresponds to the .msg format
13 if i == 0:
14 break
File <COMObject <unknown>>:2, in SaveAs(self, Path, Type)
com_error: (-2147352567, 'Exception occurred.', (4096, 'Microsoft Outlook', 'The operation failed.', None, 0, -2147287037), None)
</code></pre>
<p>I have looked at a few <a href="https://stackoverflow.com/questions/67775619/4096-microsoft-outlook-the-attempted-operation-failed-an-object-could-not">potential ways</a> to do this but I am not able to solve for it. Any solutions would be appreciated.</p>
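<p>As an aside, the chain of <code>.replace()</code> calls can be collapsed with a regex, and trimming the result is worth trying, since an over-long full path is a plausible cause of <code>SaveAs</code> failures (an assumption to verify, not a confirmed fix):</p>

```python
import re

def safe_filename(subject: str, max_len: int = 120) -> str:
    # One regex instead of nine chained .replace() calls; also trim the
    # result so the full path stays short.
    cleaned = re.sub(r'[:\\/*?"<>|]', "", subject)
    return cleaned[:max_len].strip()

print(safe_filename('Re: weekly <report?>'))  # -> Re weekly report
```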
|
<python><python-3.x><automation><outlook><win32com>
|
2024-06-25 05:09:43
| 1
| 1,086
|
Prakhar Rathi
|
78,665,239
| 1,228,276
|
install awscli on java 17 docker
|
<p>I am upgrading JAVA8 to JAVA17(<code>FROM amazoncorretto:17-alpine-jdk</code>) with one of my Spring boot service. I have <code>awscli</code> being installed as with below command in my <code>Dockerfile</code></p>
<pre><code>RUN apk add --no-cache \
python3 \
py3-pip \
&& pip3 install --upgrade pip \
&& pip3 install --no-cache-dir \
awscli \
&& rm -rf /var/cache/apk/*
</code></pre>
<p>But after upgrading to Java17 I am getting below issue and I can not continue with above step</p>
<pre><code> => ERROR [plp-svc 2/7] RUN apk add --no-cache python3 py3-pip && pip3 install --upgrade pip && pip3 install --no-cache-dir awscli && rm -rf /var/cache/apk/* 5.0s
------
> [plp-svc 2/7] RUN apk add --no-cache python3 py3-pip && pip3 install --upgrade pip && pip3 install --no-cache-dir awscli && rm -rf /var/cache/apk/*:
0.165 fetch https://dl-cdn.alpinelinux.org/alpine/v3.19/main/aarch64/APKINDEX.tar.gz
0.546 fetch https://dl-cdn.alpinelinux.org/alpine/v3.19/community/aarch64/APKINDEX.tar.gz
0.894 fetch https://apk.corretto.aws/aarch64/APKINDEX.tar.gz
1.010 (1/25) Installing libexpat (2.6.2-r0)
1.033 (2/25) Installing libbz2 (1.0.8-r6)
1.054 (3/25) Installing libffi (3.4.4-r3)
1.383 (4/25) Installing gdbm (1.23-r1)
1.432 (5/25) Installing xz-libs (5.4.5-r0)
1.499 (6/25) Installing libgcc (13.2.1_git20231014-r0)
1.535 (7/25) Installing libstdc++ (13.2.1_git20231014-r0)
1.637 (8/25) Installing mpdecimal (2.5.1-r2)
1.668 (9/25) Installing ncurses-terminfo-base (6.4_p20231125-r0)
1.717 (10/25) Installing libncursesw (6.4_p20231125-r0)
1.763 (11/25) Installing libpanelw (6.4_p20231125-r0)
1.797 (12/25) Installing readline (8.2.1-r2)
1.879 (13/25) Installing sqlite-libs (3.44.2-r0)
2.007 (14/25) Installing python3 (3.11.9-r0)
3.018 (15/25) Installing python3-pycache-pyc0 (3.11.9-r0)
3.464 (16/25) Installing pyc (3.11.9-r0)
3.482 (17/25) Installing py3-setuptools-pyc (68.2.2-r0)
3.623 (18/25) Installing py3-pip-pyc (23.3.1-r0)
3.884 (19/25) Installing py3-parsing (3.1.1-r0)
3.913 (20/25) Installing py3-parsing-pyc (3.1.1-r0)
3.948 (21/25) Installing py3-packaging-pyc (23.2-r0)
3.974 (22/25) Installing python3-pyc (3.11.9-r0)
3.993 (23/25) Installing py3-packaging (23.2-r0)
4.020 (24/25) Installing py3-setuptools (68.2.2-r0)
4.103 (25/25) Installing py3-pip (23.3.1-r0)
4.320 Executing busybox-1.36.1-r15.trigger
4.324 OK: 398 MiB in 42 packages
4.778 error: externally-managed-environment
4.778
4.778 × This environment is externally managed
4.778 ╰─>
4.778 The system-wide python installation should be maintained using the system
4.778 package manager (apk) only.
4.778
4.778 If the package in question is not packaged already (and hence installable via
4.778 "apk add py3-somepackage"), please consider installing it inside a virtual
4.778 environment, e.g.:
4.778
4.778 python3 -m venv /path/to/venv
4.778 . /path/to/venv/bin/activate
4.778 pip install mypackage
4.778
4.778 To exit the virtual environment, run:
4.778
4.778 deactivate
4.778
4.778 The virtual environment is not deleted, and can be re-entered by re-sourcing
4.778 the activate file.
4.778
4.778 To automatically manage virtual environments, consider using pipx (from the
4.778 pipx package).
4.778
4.778 note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
4.778 hint: See PEP 668 for the detailed specification.
------
failed to solve: process "/bin/sh -c apk add --no-cache python3 py3-pip && pip3 install --upgrade pip && pip3 install --no-cache-dir awscli && rm -rf /var/cache/apk/*" did not complete successfully: exit code: 1
</code></pre>
<p>Is there any way I can run my old command without using a virtual environment?</p>
<p>I changed my command to install <code>awscli</code> inside a virtual environment, as below, and the build worked.</p>
<pre><code>#Install python3 and pip
RUN apk add --no-cache python3 py3-pip
# Create and activate a virtual environment, then install awscli
RUN python3 -m venv /venv \
&& . /venv/bin/activate \
&& pip install --no-cache --upgrade pip \
&& pip install --no-cache awscli \
&& deactivate
</code></pre>
<p>But my app now fails with the error below when fetching AWS credentials, since <code>awscli</code> is only available inside the virtual environment:</p>
<pre><code>Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), WebIdentityTokenCredentialsProvider: To use assume role profiles the aws-java-sdk-sts module must be on the class path., com.amazonaws.auth.profile.ProfileCredentialsProvider@4afd1baf: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@7a2c4bf0: Failed to connect to service endpoint: ]
</code></pre>
<p>Any help is appreciated.</p>
<p>Thanks</p>
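<p>One possible fix (sketch; it assumes putting the venv's <code>bin</code> directory first on <code>PATH</code> is enough for the app to find <code>aws</code>, and that the credential-chain error is a separate concern) is to keep the venv install but export its <code>bin</code> directory container-wide:</p>

```dockerfile
# Keep the venv install, but make its bin/ visible to every process
# in the container.
RUN apk add --no-cache python3 py3-pip \
 && python3 -m venv /venv \
 && /venv/bin/pip install --no-cache --upgrade pip awscli
ENV PATH="/venv/bin:$PATH"
```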
|
<python><spring><docker><java-17>
|
2024-06-25 03:10:18
| 0
| 1,093
|
rakeeee
|
78,664,824
| 1,185,242
|
Find all consensus locations of 2D line segment intersections
|
<p>I have a set of line segments, subsets of which all intersect at some locations. The line segments are noisy though so the intersections points aren't exact and there are some erroneous line segments in the set too. What's a good robust algorithm for finding the most likely set of intersection points? My first though is using RANSAC to incrementally extract the most supported intersection points.</p>
<p><a href="https://i.sstatic.net/bmmTrheU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bmmTrheU.png" alt="enter image description here" /></a></p>
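<p>A greedy RANSAC pass along the lines suggested in the question could look like this (sketch; the tolerance, iteration count and the assumption that each segment supports a single consensus point all need tuning for real data):</p>

```python
import numpy as np

def line_of(p, q):
    # Implicit line n . x = c through the segment endpoints (unit normal n).
    d = q - p
    n = np.array([-d[1], d[0]], dtype=float)
    n /= np.linalg.norm(n)
    return np.array([n[0], n[1], n @ p])

def ransac_intersections(segments, iters=200, tol=0.05, min_support=3, seed=0):
    # Greedy RANSAC: repeatedly find the intersection point supported by the
    # most segments (perpendicular distance < tol), record it, drop its
    # inliers, and repeat until no point has enough support.
    rng = np.random.default_rng(seed)
    segs = [(np.asarray(p, float), np.asarray(q, float)) for p, q in segments]
    points = []
    while len(segs) >= min_support:
        lines = np.array([line_of(p, q) for p, q in segs])
        best_pt, best_in = None, np.array([], dtype=int)
        for _ in range(iters):
            i, j = rng.choice(len(segs), size=2, replace=False)
            A, c = lines[[i, j], :2], lines[[i, j], 2]
            if abs(np.linalg.det(A)) < 1e-9:  # near-parallel pair
                continue
            x = np.linalg.solve(A, c)
            inliers = np.flatnonzero(np.abs(lines[:, :2] @ x - lines[:, 2]) < tol)
            if len(inliers) > len(best_in):
                best_pt, best_in = x, inliers
        if best_pt is None or len(best_in) < min_support:
            break
        points.append(best_pt)
        keep = set(range(len(segs))) - set(best_in.tolist())
        segs = [segs[k] for k in sorted(keep)]
    return points

# Three lines through the origin plus one stray segment.
segs = [((-1, -1), (1, 1)), ((-1, 1), (1, -1)), ((-1, 0), (1, 0)), ((5, 5), (6, 5))]
pts = ransac_intersections(segs)
```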
|
<python><computational-geometry>
|
2024-06-24 22:49:47
| 0
| 26,004
|
nickponline
|
78,664,718
| 1,897,688
|
Reshape Pandas Dataframe and group by 2 level columns
|
<p>I have a DataFrame with a flat structure of unique rows, as follows.
<a href="https://i.sstatic.net/JApDzl2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JApDzl2C.png" alt="Original Df" /></a></p>
<p>I need to reshape it as shown below.
<a href="https://i.sstatic.net/7rC1lbeK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7rC1lbeK.png" alt="Desired reshaped format" /></a></p>
<p>Using pivot_table and swapping levels, I managed to get somewhat closer to the result, but it inadvertently sorted the level-1 sub-columns.
<a href="https://i.sstatic.net/51X3ziVH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51X3ziVH.png" alt="incorrect output" /></a></p>
<pre><code>data = {
"Case_ID": ["1-1 Max Export", "1-1 Max Export", "1-2 Max Export", "1-2 Max Export", "1-3 Max Export", "1-3 Max Export"],
"Item": ["3-Winding TX", "R1-SUT1", "3-Winding TX", "R1-SUT1", "3-Winding TX", "R1-SUT1"],
"HV Current": [0.5, 0.1, 0.4, 0.1, 0.5, 0.1],
"Total Power": [114.5, 2.2, 113.4, 2.2, 100.0, 1.8],
"Tap Pos.": [15, 3, 1, 3, 20, 3]
}
df = pd.DataFrame(data) # Original Dataframe Format with Flat Structure
item_order = list (df.columns[2:]) # The second Level columns must be in same order as per the original df
# Pivot the DataFrame
reshaped_df = df.pivot_table(index='Case_ID',
columns='Item',
values=list (df.columns[2:]),
aggfunc='first')
# Swap level 0 and level 1 columns
reshaped_df.columns = reshaped_df.columns.swaplevel(0, 1)
# Without .sort_index(axis=1) the code doesn't work, but the level 0 and
# level 1 columns should stay in the same order as in the original df
reshaped_df = reshaped_df.sort_index(axis=1)
reshaped_df
</code></pre>
<p>The Tap Pos. sub-column needs to be the last in each category, and the sub-column sequence should match the original df (i.e. HV Current, Total Power, Tap Pos.).</p>
<ul>
<li><p>a) I'm looking to fix the above code.</p>
</li>
<li><p>b) Also interested to see there is another way to achieve this instead
of using pivot table.</p>
</li>
</ul>
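<p>One way to address a) without relying on sort order is to <code>reindex</code> the swapped columns to an explicit <code>MultiIndex</code> built from the original orders (sketch):</p>

```python
import pandas as pd

data = {
    "Case_ID": ["1-1 Max Export", "1-1 Max Export", "1-2 Max Export",
                "1-2 Max Export", "1-3 Max Export", "1-3 Max Export"],
    "Item": ["3-Winding TX", "R1-SUT1"] * 3,
    "HV Current": [0.5, 0.1, 0.4, 0.1, 0.5, 0.1],
    "Total Power": [114.5, 2.2, 113.4, 2.2, 100.0, 1.8],
    "Tap Pos.": [15, 3, 1, 3, 20, 3],
}
df = pd.DataFrame(data)

value_cols = list(df.columns[2:])
wide = df.pivot_table(index="Case_ID", columns="Item",
                      values=value_cols, aggfunc="first")
wide.columns = wide.columns.swaplevel(0, 1)

# Reindex to an explicit column order instead of sort_index: items in
# first-seen order, each with the value columns in their original order.
items = list(dict.fromkeys(df["Item"]))
wide = wide.reindex(columns=pd.MultiIndex.from_product([items, value_cols]))
```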
|
<python><pandas><group-by><pivot-table><reshape>
|
2024-06-24 21:56:31
| 1
| 356
|
rush dee
|
78,664,503
| 7,089,108
|
3D scatterplot, matrix operations and calculation of the inverse
|
<p>I have a scatter-plot with the code shown below.</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
x = np.random.rand(100)
y = np.random.rand(100)
z = np.random.rand(100)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
scatter = ax.scatter(x, y, z, c='r', marker='o')
plt.show()
</code></pre>
<p>We can say</p>
<pre><code>A = np.column_stack((x, y, z)).
</code></pre>
<p>Additionally, I have a matrix B, which equals the matrix A times a rotation operation, a shift operation + noise.</p>
<pre><code>B = A x rot(X) + shift(Y) + Noise
</code></pre>
<p>If only matrix A and B are given, how can I find the necessary rotation angle and shift vector to overlap matrix B with matrix A / revert the operations on matrix B? Is a center of mass approach best or what would be the recommended approach?</p>
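<p>Recovering the rotation and shift from point correspondences is the orthogonal Procrustes / Kabsch problem: the centroids (center of mass) give the shift, and an SVD of the centered cross-covariance gives the least-squares rotation. A sketch, assuming A and B are row-wise corresponding points:</p>

```python
import numpy as np

def kabsch(A, B):
    # Least-squares R (3x3) and t (3,) such that B is approximately
    # A @ R + t.  Centroid subtraction handles the shift; the SVD of the
    # centered cross-covariance gives the rotation.
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
    R = U @ D @ Vt
    t = cB - cA @ R
    return R, t

rng = np.random.default_rng(1)
A = rng.random((100, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
shift = np.array([0.5, -0.2, 1.0])
B = A @ Rz + shift + rng.normal(0.0, 1e-3, A.shape)

R, t = kabsch(A, B)
```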
|
<python><multidimensional-array><alignment><noise><svd>
|
2024-06-24 20:33:21
| 0
| 433
|
cerv21
|
78,664,472
| 25,413,271
|
gitlab CI: powershell job doesnt fail on python exception
|
<p>I have a simple gitlab config:</p>
<pre><code>variables:
P7_TESTING_INSTALLATION_PATH: D:\Logos_block_gitlab_runner\pSeven-v2024.05
stages:
- cleanup
- installation
cleanup_build:
tags:
- block_autotest
variables:
ErrorActionPreference: stop
allow_failure: false
stage: cleanup
script:
- Invoke-Expression -Command "$env:P7_TESTING_INSTALLATION_PATH\client\p7batch.exe --log-level=error --run $env:CI_PROJECT_DIR\autotest\jobs\_clear.py"
- if(!$?) { Exit $LASTEXITCODE }
install_block:
tags:
- block_autotest
variables:
ErrorActionPreference: stop
allow_failure: false
stage: installation
script:
- Invoke-Expression -Command "$env:P7_TESTING_INSTALLATION_PATH\client\p7batch.exe --log-level=error --run $env:CI_PROJECT_DIR\autotest\jobs\_setup_block.py"
</code></pre>
<p>Neither _clear.py nor _setup_block.py exists, and I see the expected exception in the log:</p>
<pre><code>@ [Terminal] raise IOError("No such file: '{}'".format(filepath))
@ [Terminal] IOError: No such file: 'D:\Logos_block_gitlab_runner\GitLab-Runner\builds\vDLVWCgmH\0\aero\logos_userblock\autotest\jobs\_clear.py'
[32;1m$ if(!$?) { Exit $LASTEXITCODE }[0;m
section_end:1719260050:step_script
[0K[32;1mJob succeeded[0;m
</code></pre>
<p>But I still receive 'job succeeded'!!!</p>
<p>I have added <code>ErrorActionPreference: stop</code> as well as <code>allow_failure: false</code>, but the job still succeeds on a Python exception...</p>
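<p>One possibility worth checking (assuming <code>p7batch.exe</code> itself returns a non-zero exit code on failure): <code>$?</code> after <code>Invoke-Expression</code> reflects whether the expression was evaluated, not the native command's exit code. Invoking the exe with the call operator <code>&amp;</code> and testing <code>$LASTEXITCODE</code> explicitly on the next line avoids that, e.g. a sketch with the same paths:</p>

```yaml
script:
  - '& "$env:P7_TESTING_INSTALLATION_PATH\client\p7batch.exe" --log-level=error --run "$env:CI_PROJECT_DIR\autotest\jobs\_clear.py"'
  - 'if ($LASTEXITCODE -ne 0) { Exit $LASTEXITCODE }'
```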
|
<python><gitlab-ci>
|
2024-06-24 20:24:11
| 1
| 439
|
IzaeDA
|
78,664,449
| 651,174
|
Add a "log" for a single function
|
<p>I need to debug some legacy code and I only want to print some information to a file that will not disturb any other places in the codebase. The most crude way I have found so far is something like:</p>
<pre><code>def my_func():
f = open('/tmp/log.txt', 'a') # help with logging stuff
f.write('Here is my crude local logger.')
# some code
logger.log('Hello') # our 'Normal logger' that goes to Splunk
# so I don't think I can modify this
# safely without side-effects?
</code></pre>
<p>However, I like all the 'extra' stuff that the logging module adds in, such as line numbers, time, functions, etc. What might be the best way to add in a local logger to do what a normal logger might do?</p>
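<p>One way to get the line numbers, timestamps, etc. without touching the existing Splunk logger is a dedicated <code>logging.Logger</code> with its own <code>FileHandler</code> and <code>propagate = False</code>, so its records never bubble up to the root handlers. A sketch (the logger name and file path are illustrative):</p>

```python
import logging

def make_local_logger(path="/tmp/log.txt"):
    local = logging.getLogger("my_func.debug")   # any unique name works
    local.setLevel(logging.DEBUG)
    local.propagate = False                      # never bubble up to root/Splunk
    if not local.handlers:                       # avoid duplicate handlers on re-import
        handler = logging.FileHandler(path)
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(filename)s:%(lineno)d %(funcName)s %(levelname)s %(message)s"))
        local.addHandler(handler)
    return local

def my_func():
    log = make_local_logger()
    log.debug("Here is my local logger, with line numbers and timestamps.")
```

<p>Because the local logger is a different <code>Logger</code> object with propagation off, the normal <code>logger.log('Hello')</code> calls are untouched.</p>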
|
<python><logging>
|
2024-06-24 20:15:40
| 2
| 112,064
|
David542
|
78,664,393
| 3,486,684
|
What determines the order of type variables when narrowing a generic type?
|
<p><strong>Note:</strong> this question refers to Python 3.12+</p>
<p>Suppose I have:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, TypeVar
import numpy as np
T = TypeVar("T")
U = TypeVar("U")
ListLike = T | list[T] | tuple[T, ...] | np.ndarray[Any, U]
ListLikeStr = ListLike[str, np.object_]
# ListLikeStr should be: str | list[str] | tuple[str, ...] | np.ndarray[Any, np.object_]
</code></pre>
<p>This works, but only by luck. I could have instead written <code>ListLike[np.object_, str]</code>, and then I'd get <code>ListLikeStr</code> being <code>np.object_ | list[np.object_] | tuple[np.object_, ...] | np.ndarray[Any, str]</code>, which is not what I'd like.</p>
<p>Ideally I could have done something like: <code>ListLike[T=str, U=np.object_]</code>, but that does not work. So what determines the order when I am instantiating the type variables in <code>ListLike</code>? How does <code>ListLike</code> "know" that <code>T</code> corresponds with <code>str</code> and <code>U</code> with <code>np.object_</code>, when I write <code>ListLike[str, np.object_]</code>?</p>
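<p>For what it's worth, the runtime <code>typing</code> machinery exposes the rule via <code>__parameters__</code>: as far as I can tell, type variables are collected in order of first appearance in the alias, left to right, and positional arguments bind in that same order. A small check using <code>typing.Union</code>, which is introspectable at runtime (type checkers apply the same first-appearance rule to the PEP 604 <code>|</code> form):</p>

```python
from typing import List, TypeVar, Union, get_args

T = TypeVar("T")
U = TypeVar("U")

# Same shape of alias as ListLike, but in the introspectable typing.Union form.
Alias = Union[T, List[T], List[U]]

# Type variables are collected in order of FIRST appearance, left to right:
print(Alias.__parameters__)   # (~T, ~U)

# So positional parameters bind in that order: T -> str, U -> int.
Specialized = Alias[str, int]
print(get_args(Specialized))
```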
|
<python><mypy><python-typing>
|
2024-06-24 19:54:49
| 1
| 4,654
|
bzm3r
|
78,664,347
| 16,707,518
|
Randomly adding rows from another datatable based on group and a probability
|
<p>Suppose I have a dataframe called <strong>Main</strong> in this format. It will have thousands of rows covering many combinations of Region and Type, with each Region/Type combination duplicated multiple times but with different numbers in the Value column. This is a shortened version of the <strong>Main</strong> dataframe showing its format:</p>
<pre><code>Region Type Value
A 1 600
A 2 700
A 2 750
B 1 700
B 1 500
B 2 900
B 2 1000
</code></pre>
<p>I also have another dataframe, <strong>Prob</strong>, consisting of Region, Type and Probability. It is used as a lookup against <strong>Main</strong> to set the probability discussed further below. This table will be fairly short (20 or 30 rows), as the probability is always fixed for a given combination of Region and Type.</p>
<pre><code>Region Type Probability
A 1 20%
A 2 30%
B 1 40%
B 2 90%
</code></pre>
<p>Stepping down each row in the <strong>Main</strong> table, I look up the probability from the dataframe <strong>Prob</strong> based on the Region and Type. Python then works out randomly based on the probability whether I'll add an extra row to the <strong>Main</strong> dataframe.</p>
<p>However, that extra row of data will be taken randomly from another, even larger dataframe <strong>Extras</strong>, which has the same form as <strong>Main</strong>:</p>
<pre><code>Region Type Value
A 1 600
A 1 300
A 2 700
A 2 950
B 1 700
B 1 50
B 2 900
B 2 1200
B 2 300
</code></pre>
<p>BUT:</p>
<p>a) The random row I pull from <strong>Extras</strong> can only have the same Region and Type as in the <strong>Main</strong> dataframe - and</p>
<p>b) Once I've taken that row of data, I remove it from the <strong>Extras</strong> dataframe as I don't want to pull the same row twice from the <strong>Extras</strong> dataframe.</p>
<p>I'm trying to understand how to get this done.</p>
<p>Using my small example here, in the end I'll have a <strong>Main</strong> dataframe for example that is something that looks like:</p>
<pre><code>Region Type Value
A 1 600
A 2 700
A 2 750
B 1 700
B 1 500
B 2 900
B 2 1000
B 2 900
B 2 300
</code></pre>
<p>...where stepping through the seven rows of the <strong>Main</strong> dataframe, for Region B, Type 2 where there's 90% chance of pulling a new row into "Main" from Extras, we end up pulling two extra rows (whilst removing them from the <strong>Extras</strong> dataframe as we step through each row).</p>
<p>In my actual large data example, I have about 10000 rows in <strong>Main</strong> all with combinations of Region and Type of A/B and 1/2 all with probabilities of around 20% - so I'd expect to get around 2000 new rows added to the data from <strong>Extras</strong> table (which itself is about 40000 rows in size).</p>
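<p>A sketch of one way to do this: build a probability lookup, walk the rows of <strong>Main</strong>, and on a hit sample one matching row from <strong>Extras</strong> and drop it so it can never be drawn twice. Column names follow the question; the <code>Probability</code> values are assumed to be strings like <code>'20%'</code>:</p>

```python
import numpy as np
import pandas as pd

def add_random_extras(main, prob, extras, seed=None):
    rng = np.random.default_rng(seed)
    prob = prob.copy()
    # '20%' -> 0.20 (skip this line if Probability is already numeric)
    prob["Probability"] = prob["Probability"].str.rstrip("%").astype(float) / 100
    lookup = prob.set_index(["Region", "Type"])["Probability"]
    extras = extras.copy()
    added = []
    for _, row in main.iterrows():
        p = lookup.get((row["Region"], row["Type"]), 0.0)
        if rng.random() >= p:
            continue                      # no extra row this time
        pool = extras[(extras["Region"] == row["Region"]) & (extras["Type"] == row["Type"])]
        if pool.empty:
            continue                      # nothing left to draw for this group
        pick = pool.sample(1, random_state=int(rng.integers(2**32)))
        extras = extras.drop(pick.index)  # never reuse a drawn row
        added.append(pick)
    return pd.concat([main, *added], ignore_index=True), extras
```

<p>Row-wise iteration is the straightforward fit here because the Extras pool shrinks as rows are consumed; at ~10,000 Main rows it should still be manageable.</p>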
|
<python><pandas><dataframe>
|
2024-06-24 19:42:00
| 1
| 341
|
Richard Dixon
|
78,664,293
| 3,486,684
|
What is a union dtype in numpy, and how do I define it?
|
<p>Suppose I have an array like this:</p>
<pre class="lang-py prettyprint-override"><code>x = np.array([["Hello", 0, 1], ["World", 1, 2]], dtype=np.object_)
</code></pre>
<p>I'd like to do better in terms of how I define the <code>dtype</code> for this array. More specifically, I want the array to allocate for each element as many bytes as would be required for the largest element.</p>
<p>This is what I think of as a "union" of types.</p>
<p><a href="https://stackoverflow.com/a/14320632/3486684">The answer</a> to the question <a href="https://stackoverflow.com/questions/14316066/c-style-union-with-numpy-dtypes">c-style union with numpy dtypes?</a> refers to "union dtypes" being allowed in <code>numpy</code> after 1.7+.</p>
<p>This is the <a href="https://numpy.org/doc/stable/user/basics.rec.html#union-types" rel="nofollow noreferrer">relevant section</a> in the <code>numpy</code> documentation. "Union types" documentation refers to <code>(base_dtype, new_dtype)</code>'s documentation in the section on data types titled <a href="https://numpy.org/doc/stable/reference/arrays.dtypes.html#specifying-and-constructing-data-types" rel="nofollow noreferrer">Specifying and constructing data types</a>.</p>
<p><code>(base_dtype, new_dtype)</code>'s documentation says something interesting (emphasis mine):</p>
<blockquote>
<p>This form also makes it possible to specify struct dtypes with overlapping fields, functioning like the ‘union’ type in C. This usage is discouraged, however, and <strong>the union mechanism is preferred</strong>.</p>
</blockquote>
<p>What precisely is the "union mechanism"? Searching for it in the <code>numpy</code> docs doesn't lead me to anything that stands out clearly as the mechanism being referred to. More importantly, I cannot do something like this in <code>(base_dtype, new_dtype)</code> format: <code>(np.object_, (np.uint32, np.uint32))</code>, because <code>np.object_</code> cannot be used as a <code>base_dtype</code>.</p>
<p>My best guess at the moment: documentation for <code>numpy</code> states that we can allow the <a href="https://numpy.org/doc/stable/user/basics.rec.html" rel="nofollow noreferrer">structured array fields to overlap</a>...which would give union-like behaviour? After all, the structured arrays docs state (emphasis mine):</p>
<blockquote>
<p>For these purposes they support specialized features such as subarrays, nested datatypes, <strong>and unions</strong>, and allow control over the memory layout of the structure.</p>
</blockquote>
<p>The docs have sufficiently confused me around what the preferred mechanism is for making a "union type"...</p>
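<p>As far as I can tell, the "union mechanism" the docs point at is the overlapping-offsets form of a structured dtype: give fields the same <code>offsets</code> and an explicit <code>itemsize</code>, and each element reserves enough bytes for the largest member. Note this works for fixed-size types only, not <code>np.object_</code>, since object fields cannot safely alias raw bytes. A sketch:</p>

```python
import numpy as np

# A C-style union of an 8-byte float and a pair of uint32s, both at offset 0.
union_dt = np.dtype({
    "names":   ["as_f64", "as_u32pair"],
    "formats": [np.float64, (np.uint32, (2,))],
    "offsets": [0, 0],       # overlapping fields = union semantics
    "itemsize": 8,           # size of the largest member
})

x = np.zeros(3, dtype=union_dt)
x["as_f64"] = 1.0
# Both views share the same 8 bytes per element:
print(x["as_u32pair"][0])    # raw bit pattern of 1.0
```

<p>This reserves, per element, exactly the bytes of the largest member, which is the allocation behaviour the question asks for, at the cost of fixed-size members only.</p>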
|
<python><numpy>
|
2024-06-24 19:26:12
| 1
| 4,654
|
bzm3r
|
78,664,231
| 14,655,211
|
How to assign values from a list to variables from a list in python?
|
<p>I have some code that asks users questions and stores the responses in a list. I then want to assign the response values to variables from a list. It's not working as expected: the variables don't seem to exist when I try to use them later in my script. I've seen a few similar questions on here and tried a few things, but nothing is working.</p>
<pre><code>questions = ['Please enter your gross annual salary: ', 'Please enter the annual amount of any bonus or commision that appears on your paylip. If not applicable enter 0: ', 'Please enter the annual amount of any benefits in kind that appear on your payslip. If not applicable enter 0: ', 'Please enter your annual total tax credit: ', 'Please enter your annual total pension contribution. If not applicable enter 0: ', 'Do you have any other non tax deductable income? (ie. Holiday purchase schemes etc.) If not applicable enter 0: ']
variables = ['salary', 'bonus', 'bik', 'tax_credits', 'pension_contributions', 'other_non_tax_deductibles']
responses = []
for question in questions:
while True:
try:
response = float(input(question))
except ValueError:
print("Please enter a number")
else:
responses.append(response)
break
for variable, response in zip(variables, responses):
variable = response
</code></pre>
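<p>The loop's <code>variable = response</code> only rebinds the local name <code>variable</code>; it never creates variables called <code>salary</code>, <code>bonus</code>, etc. The usual fix is to store the answers in a dict keyed by those names instead of separate variables. A sketch (prompts shortened, and <code>ask</code> made injectable purely so the flow can be exercised without a terminal):</p>

```python
questions = {
    "salary": "Please enter your gross annual salary: ",
    "bonus": "Please enter the annual bonus/commission amount (0 if none): ",
    "pension_contributions": "Please enter your annual pension contribution (0 if none): ",
}

def collect(ask=input):
    """Ask each question, retrying until the answer parses as a number."""
    answers = {}
    for name, prompt in questions.items():
        while True:
            try:
                answers[name] = float(ask(prompt))
                break
            except ValueError:
                print("Please enter a number")
    return answers

# answers = collect()
# Later: answers["salary"], answers["bonus"], ...
```

<p>If you already have the two lists, <code>answers = dict(zip(variables, responses))</code> gives the same result in one line.</p>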
|
<python>
|
2024-06-24 19:08:47
| 1
| 471
|
DeirdreRodgers
|
78,664,168
| 3,970,738
|
Subprocess initiated from Jupyter notebook does not use the same python virtual environment. Why and how to fix?
|
<p>I am executing a python script as a subprocess. Doing this from a regular module inherits the virtual environment, but when I do it inside a Jupyter notebook, the subprocess uses the system python. Why is this, and how can I fix it in a platform independent way? (I do not know what virtual environments other users will have.)</p>
<p>A minimal example:</p>
<pre><code># kernel_printer.py
import sys
import os
def print_kernel():
kernel_name = os.path.basename(sys.executable.replace("/bin/python", ""))
print('Kernel: ', kernel_name)
if __name__ == '__main__':
print_kernel()
</code></pre>
<p>Code to be exectued either as a script or as a notebook cell:</p>
<pre><code>import subprocess
print(subprocess.run(['python', 'kernel_printer.py'], capture_output=True))
# as module: subprocess output contains print of activated virtualenv
# as notebook cell: ... systemwide python is printed as current kernel
</code></pre>
<p>Note that the Jupyter notebook is indeed running in the same virtualenv as the module. That is, the following code prints the name of the virtual env regardless of whether it is run in a module or a notebook cell.</p>
<pre><code>import kernel_printer
kernel_printer.print_kernel()
# always prints name of virtual env
</code></pre>
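<p>One likely explanation is that the bare string <code>'python'</code> is resolved through <code>PATH</code>, and the Jupyter kernel's <code>PATH</code> need not have the virtualenv's <code>bin</code> directory first. Using <code>sys.executable</code> (the interpreter running the current process) sidesteps <code>PATH</code> entirely and is platform independent. A sketch:</p>

```python
import subprocess
import sys

# `sys.executable` is the interpreter running this very process (the kernel),
# so the child is guaranteed to use the same virtualenv:
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.executable)"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # same interpreter as the parent
```

<p>In the notebook this would become <code>subprocess.run([sys.executable, 'kernel_printer.py'], capture_output=True)</code>.</p>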
|
<python><jupyter><virtualenv>
|
2024-06-24 18:49:45
| 1
| 501
|
Stalpotaten
|
78,664,154
| 1,245,262
|
Why does num_entities in a Milvus collection increase when one overwrites existing entities
|
<p>I had thought that <code>num_entities</code> would indicate the number of records (or whatever the correct term is) within a Milvus collection. However, I created 1 file - <code>test_milvus.py</code> to create a simple collection like so:</p>
<pre><code>import numpy as np
from pymilvus import connections, Collection, CollectionSchema, FieldSchema, DataType
connections.connect(alias='default',host='localhost', port='19530')
# Define the schema
schema = CollectionSchema([FieldSchema("id", DataType.INT64, is_primary=True, max_length=100),
FieldSchema("vector", DataType.FLOAT_VECTOR, dim=2)])
# Create a collection
collection = Collection("test_collection", schema)
# Insert data
data = [{"id":i, "vector": np.array([i, i],dtype=np.float32)} for i in range(10)]
collection.insert(data)
# Flush data
collection.flush()
# Disconnect from the server
connections.disconnect(alias='default')
</code></pre>
<p>and another to get information on collections within a Milvus database = <code>milvus_info.py</code> - like so:</p>
<pre><code>from pymilvus import Collection, connections, db, utility
def get_info (host: str = "localhost", port: str = "19530"):
# Connect to Milvus (replace with your connection details)
connections.connect(alias="default", host=host, port=port) # Replace with your connection parameters
# Print the list of databases and collections
db_list = db.list_database()
for db_name in db_list:
print(f"Database: {db_name}")
collection_list = utility.list_collections(using=db_name)
if len(collection_list) == 0:
print(" No collections")
for collection_name in collection_list:
print(f" Collection: {collection_name}")
temp_collection = Collection(name=collection_name)
for info in temp_collection.describe():
print(f" {info}: {temp_collection.describe()[info]}")
temp_collection.flush() #Note: Adding this line does not fix problem.
print(f" Number of entities: {temp_collection.num_entities}")
# Disconnect from Milvus
connections.disconnect(alias='default')
if __name__ == "__main__":
get_info()
</code></pre>
<p>The first time I ran <code>test_milvus</code> followed by <code>milvus_info.py</code>, I got this output:</p>
<pre><code> $ python test_milvus.py
$ python milvus_info.py
Database: default
Collection: test_collection
collection_name: test_collection
auto_id: False
num_shards: 1
description:
fields: [{'field_id': 100, 'name': 'id', 'description': '', 'type': <DataType.INT64: 5>, 'params': {}, 'is_primary': True}, {'field_id': 101, 'name': 'vector', 'description': '', 'type': <DataType.FLOAT_VECTOR: 101>, 'params': {'dim': 2}}]
aliases: []
collection_id: 450687678279804785
consistency_level: 2
properties: {}
num_partitions: 1
enable_dynamic_field: False
Number of entities: 10
</code></pre>
<p>which struck me as odd, because there were only 2 vectors in the db.</p>
<p>However, if I run <code>test_milvus.py</code> again, the number of entities goes up to 20, even though no new vectors have been added:</p>
<pre><code>$ python test_milvus.py
$ python milvus_info.py
Database: default
Collection: test_collection
collection_name: test_collection
auto_id: False
num_shards: 1
description:
fields: [{'field_id': 100, 'name': 'id', 'description': '', 'type': <DataType.INT64: 5>, 'params': {}, 'is_primary': True}, {'field_id': 101, 'name': 'vector', 'description': '', 'type': <DataType.FLOAT_VECTOR: 101>, 'params': {'dim': 2}}]
aliases: []
collection_id: 450687678279804785
consistency_level: 2
properties: {}
num_partitions: 1
enable_dynamic_field: False
Number of entities: 20
</code></pre>
<p>This happens even though I've only tried to add records that were already there. I would've expected <code>num_entities</code> to be 10, no matter how many times I run these files. The documentation says it returns the number of rows, but I can drive it arbitrarily high while still having only 10 rows. Is <code>num_entities</code> supposed to track all rows that ever existed?</p>
<p>Note: This also happens when I replace <code>insert</code> with <code>upsert</code></p>
|
<python><milvus>
|
2024-06-24 18:45:57
| 1
| 7,555
|
user1245262
|
78,664,094
| 9,571,463
|
Asyncio Task was destroyed but it was pending
|
<p>I am making requests to an external API, getting the response back and writing it to a file. Everything works/runs fine, however I receive the "Task was destroyed but it is pending!" warning that I'd like to clean up.</p>
<p>I have emulated the process below. I receive items from a source (e.g. a list) and, as I receive them, I put them into the queue, signaling the end of the list by putting in a sentinel <code>None</code> value.</p>
<p>The consumer keeps pulling items from <code>write_q</code> until it receives the sentinel, and then breaks out.</p>
<p>Below is the code which will show that my <code>write_task()</code> is cancelled before it is completed. What is the proper design to handle this?</p>
<pre><code>import asyncio
import aiofiles
import aiocsv
import json
async def task(l: list, write_q: asyncio.Queue) -> None:
# Read tasks from source of data
for i in l:
# Put a request task into the queue
req: dict = {
"headers": {"Accept": "application/json"},
"url": "https://httpbin.org/post",
"data": i
}
await write_q.put(req)
# Sentinel value to signal we are done receiving from source
await write_q.put(None)
async def write_task(write_q: asyncio.Queue) -> None:
headers: bool = True
while True:
async with aiofiles.open("file.csv", mode="a+", newline='') as f:
w = aiocsv.AsyncWriter(f)
# Get data out of the queue to write it
data = await write_q.get()
if not data:
write_q.task_done()
await f.flush()
break
if headers:
await w.writerow([
"status",
"data",
])
headers = False
# Write the data from the response
await w.writerow([
"200",
json.dumps(data)
])
write_q.task_done()
async def main() -> None:
# Create fake data to POST
items: list[str] = [["hello", "world"], ["asyncio", "test"]] * 5
# Queues for orchestrating
write_q = asyncio.Queue()
producer = asyncio.create_task(
task(items, write_q)
)
consumer = asyncio.create_task(
write_task(write_q)
)
errors = await asyncio.gather(producer, return_exceptions=True)
print(f"INFO: Producer has completed! exceptions: {errors}")
# Wait for queue to empty and cancel the consumer
await write_q.join()
consumer.cancel()
print("INFO: write consumer has completed! ")
print("INFO: Complete!")
if __name__ == "__main__":
loop = asyncio.new_event_loop()
loop.run_until_complete(main())
</code></pre>
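<p>Since the consumer already exits cleanly when it sees the sentinel, one design that avoids the warning is to <code>await</code> the consumer (e.g. via <code>gather</code>) instead of cancelling it and immediately tearing down the loop. A minimal sketch of the shape, with the file I/O replaced by a list so it stays self-contained, and <code>asyncio.run</code> used so the loop is shut down properly:</p>

```python
import asyncio

async def producer(items, q):
    for item in items:
        await q.put(item)
    await q.put(None)            # sentinel: no more work

async def consumer(q, sink):
    while True:
        data = await q.get()
        q.task_done()
        if data is None:         # sentinel observed -> exit cleanly
            break
        sink.append(data)

async def main():
    q = asyncio.Queue()
    sink = []
    prod = asyncio.create_task(producer(["a", "b", "c"], q))
    cons = asyncio.create_task(consumer(q, sink))
    await asyncio.gather(prod, cons)   # wait for BOTH; no cancel() needed
    return sink

print(asyncio.run(main()))
```

<p>If you do want the <code>write_q.join()</code> + <code>cancel()</code> pattern instead, follow the <code>cancel()</code> with <code>await consumer</code> inside a <code>try/except asyncio.CancelledError</code> so the task is actually retrieved before the loop closes.</p>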
|
<python><python-asyncio><producer-consumer><python-aiofiles>
|
2024-06-24 18:31:10
| 1
| 1,767
|
Coldchain9
|
78,664,009
| 2,142,338
|
Altair chart legend for subset of data
|
<p>As an exercise for learning more advanced altair, I'm trying to generate a simplified version of this chart: <a href="https://climatereanalyzer.org/clim/t2_daily/?dm_id=world" rel="nofollow noreferrer">https://climatereanalyzer.org/clim/t2_daily/?dm_id=world</a>.</p>
<p>To simplify, I'm using gray for all years prior to 2023 and then red and black for 2023 and 2024, respectively. I'd like to have a legend that is either just for 2023 & 2024 or is "1940-2022", "2023", "2024".</p>
<p>Right now I'm focused on getting a compact legend that reflects either subset of years, but I'd take any advice on how to improve the code / approach.</p>
<pre><code>import pandas as pd
import altair as alt
# Function to fetch and prepare the data
def fetch_and_prep_data():
url = "https://climatereanalyzer.org/clim/t2_daily/json/era5_world_t2_day.json"
data = requests.get(url).json()
years = []
all_temperatures = []
for year_data in data:
year = year_data['name']
temperatures = year_data['data']
temperatures = [temp if temp is not None else float('nan') for temp in temperatures]
days = list(range(1, len(temperatures) + 1))
df = pd.DataFrame({
'Year': [year] * len(temperatures),
'Day': days,
'Temperature': temperatures
})
years.append(year)
all_temperatures.append(df)
df_at = pd.concat(all_temperatures)
# Drop all rows where Year is more than 4 digits
df_at = df_at[df_at['Year'].str.len() <= 4]
return df_at
# Function to create the last day in month labels
def get_last_day_in_month_labels():
dates = pd.date_range(start='2023-01-01', end='2023-12-31', freq='D')
last_days = dates[dates.is_month_end]
labels = {day_of_year: month_abbr for day_of_year, month_abbr in zip(last_days.day_of_year, last_days.strftime('%b'))}
return labels
# Functions to determine opacity, color, and stroke width
def determine_opacity(year):
try:
year_int = int(year)
return 0.01 if year_int < 2023 else 1.0
except ValueError:
return 1.0
def determine_color(year):
color = 'gray'
try:
year_int = int(year)
if year_int < 2023:
color = 'gray'
elif year_int == 2023:
color = 'red'
elif year_int == 2024:
color = 'black'
except ValueError:
color = 'black'
return color
def determine_strokewidth(year):
width = 1
try:
year_int = int(year)
if year_int < 2023:
width = 1
else:
width = 4
except ValueError:
width = 4
return width
# Applying the functions to the 'Year' column
# Fetch and prepare the data
df_at = fetch_and_prep_data()
df_all = df_at.copy()
df_all['Opacity'] = df_all['Year'].apply(determine_opacity)
df_all['Color'] = df_all['Year'].apply(determine_color)
df_all['Width'] = df_all['Year'].apply(determine_strokewidth)
# Ensure 'Day' is correctly interpreted as a quantitative variable
df_all['Day'] = pd.to_numeric(df_all['Day'], errors='coerce')
# Filter the data to ensure 'Day' values are within the desired range
df_filtered = df_all[df_all['Day'] <= 365]
# Create last day in month labels
last_day_in_month_labels = get_last_day_in_month_labels()
# Extract the keys and values for tick marks and labels
tick_values = list(last_day_in_month_labels.keys())
tick_labels = list(last_day_in_month_labels.values())
# Plotting the main data using Altair with the existing Color and Opacity columns
line_chart = alt.Chart(df_filtered).mark_line().encode(
x=alt.X(
'Day:Q',
title='Month',
scale=alt.Scale(domain=(0, 365), clamp=True),
axis=alt.Axis(
labels=True,
tickCount=12,
values=tick_values,
labelExpr=f"datum.value == {tick_values[0]} ? '{tick_labels[0]}' : " +
" : ".join([f"datum.value == {tick} ? '{label}'" for tick, label in zip(tick_values[1:], tick_labels[1:])]) +
" : ''",
labelOffset= -30 # Shift the x-axis labels to the left by 30 units
)
),
y=alt.Y(
'Temperature:Q',
title='Temperature (C)',
scale=alt.Scale(domain=(11, 18), clamp=True),
),
color=alt.Color('Color:N', legend=None, scale=None), # Use the "Color" column for line colors
opacity=alt.Opacity('Opacity:Q', legend=None), # Use the "Opacity" column
detail=alt.Detail('Year:N'), # Add detail encoding for Year, otherwise you get vertical lines
strokeWidth=alt.StrokeWidth('Width:N', legend=None)  # Use the "Width" column
).properties(
width=800,
height=600
)
line_chart
</code></pre>
|
<python><legend><altair>
|
2024-06-24 18:08:20
| 2
| 888
|
Don
|
78,663,904
| 6,622,697
|
How to store database uri in Flask config for SQLAlchemy?
|
<p>I'm using Flask and SQLAlchemy (<em>not</em> flask-sqlalchemy).</p>
<p>I'm trying to get access to the flask config from my database code, but it's telling me that I'm not in an application context, probably because the app is not yet fully initialized.</p>
<p>I have <code>app.py</code>:</p>
<pre><code>from flask import Flask
from root.views.calibration_views import calibration
nwm_app = Flask(__name__)
nwm_app.config.from_pyfile('myConfig.cfg')
print('config', nwm_app.config)
</code></pre>
<p>My <code>calibration_views</code> file imports <code>database.py</code>, which looks like this:</p>
<pre><code>from sqlalchemy import URL, create_engine
from sqlalchemy.orm import sessionmaker
from flask import current_app as nwm_app
url = URL.create(
drivername="postgresql",
username="postgres",
password="postgres",
host="localhost",
database="flask_test"
)
engine = create_engine(url, echo=True)
with nwm_app.app_context():
print('config', nwm_app.config)
Session = sessionmaker(bind=engine)
</code></pre>
<p>I'm trying to get the config object using <code>current_app</code>, which avoids the Python error about circular imports, but I still get</p>
<pre><code>RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
the current application. To solve this, set up an application context
with app.app_context(). See the documentation for more information.
</code></pre>
<p>The <code>nwm_app.app_context()</code> block was my attempt to fix it, but it had no effect.</p>
<p>Part of me is wondering if the Flask config should really be used like this, even though the docs say you are allowed to put your own variables in there. It would be easy enough to just load a separate config file using normal Python behavior.</p>
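<p>A common pattern that sidesteps the application context entirely is deferred initialization: the database module does nothing at import time and exposes an <code>init_engine(...)</code> hook that <code>app.py</code> calls right after loading the config. A framework-free sketch of the shape (names are illustrative; <code>factory</code> stands in for <code>sqlalchemy.create_engine</code>):</p>

```python
# database.py (sketch): no Flask import, no work at import time
_engine = None

def init_engine(uri, factory):
    """Call once at startup; `factory` would be sqlalchemy.create_engine."""
    global _engine
    _engine = factory(uri)
    return _engine

def get_engine():
    if _engine is None:
        raise RuntimeError("init_engine() was never called at startup")
    return _engine
```

<p><code>app.py</code> would then call <code>database.init_engine(...)</code> with the URI from <code>nwm_app.config</code> right after <code>from_pyfile</code>, and views would build sessions from <code>get_engine()</code> at request time, so nothing needs <code>current_app</code> at import.</p>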
|
<python><flask><sqlalchemy>
|
2024-06-24 17:40:38
| 2
| 1,348
|
Peter Kronenberg
|
78,663,854
| 4,391,360
|
Not the same sys.path in a shell and in Spyder
|
<p>Why isn't the <code>sys.path</code> in Spyder the same as in a shell?</p>
<p>In a shell, <code>python -c "import sys ; print(sys.path)"</code> doesn't give the same thing as if I stop the script at the first line with a breakpoint to do in the debugging console <code>import sys ; print(sys.path)</code>.</p>
<p>I've managed to get around the issue by defining the necessary directories in Tools > PYTHONPATH Manager, but I find the method rather dirty as the <code>sys.path</code> is correct outside Spyder ...</p>
<h2>Edit</h2>
<p><strong>Some results :</strong></p>
<p>1- On the host:</p>
<p>1-1 In a shell (where the IDE will be started, so with the same environment):</p>
<pre><code>$ echo $PYTHONPATH
$ python -c "import sys; print('\nsys.path:' + '\n- '.join(sys.path)); print('\nsys.executable: ', sys.executable, '\n')"
sys.path:
- /usr/lib64/python312.zip
- /usr/lib64/python3.12
- /usr/lib64/python3.12/lib-dynload
- /home/toto/.local/lib/python3.12/site-packages
- /usr/lib64/python3.12/site-packages
- /usr/lib/python3.12/site-packages
sys.executable: /usr/bin/python
</code></pre>
<p>1-2 In Spyder:</p>
<pre><code>IPdb [8]: print('\nsys.path:\n' + '\n'.join(f'- {path}' for path in sys.path))
sys.path:
- /usr/lib64/python312.zip
- /usr/lib64/python3.12
- /usr/lib64/python3.12/lib-dynload
-
- /home/toto/.local/lib/python3.12/site-packages
- /usr/lib64/python3.12/site-packages
- /usr/lib/python3.12/site-packages
- /data/Git_projects
IPdb [9]: print('\nsys.executable: ', sys.executable)
sys.executable: /usr/bin/python
</code></pre>
<p>So we have the same thing (with the project path in the last position, which is configured in Spyder). Sounds perfect to me.</p>
<p>2- On a singularity container:</p>
<p>2-1 In a shell (where the IDE will be started, so with the same environment):</p>
<pre><code>$ echo $PYTHONPATH
/casa/host/src/development/brainvisa-cmake/master/python:/casa/host/build/python
$ python -c "import sys; print('\nsys.path:' + '\n- '.join(sys.path)); print('\nsys.executable: ', sys.executable, '\n')"
sys.path:
- /casa/host/src/development/brainvisa-cmake/master/python
- /casa/host/build/python
- /usr/lib/python310.zip
- /usr/lib/python3.10
- /usr/lib/python3.10/lib-dynload
- /casa/home/.local/lib/python3.10/site-packages
- /casa/host/src/populse/populse_db/master/python
- /usr/local/lib/python3.10/dist-packages
- /usr/local/lib/python3.10/dist-packages/DracoPy-1.3.0-py3.10-linux-x86_64.egg
- /usr/lib/python3/dist-packages
- /usr/lib/python3.10/dist-packages
- /casa/host/build/python/sitecustomize
- /casa/host/src/development/casa-distro/master/python
- /casa/host/src/capsul/master
sys.executable: /usr/bin/python
</code></pre>
<p>2-2 In Spyder:</p>
<pre><code>IPdb [2]: print('\nsys.path:\n' + '\n'.join(f'- {path}' for path in sys.path))
sys.path:
- /usr/lib/python310.zip
- /usr/lib/python3.10
- /usr/lib/python3.10/lib-dynload
-
- /casa/home/.local/lib/python3.10/site-packages
- /casa/host/src/populse/populse_db/master/python
- /usr/local/lib/python3.10/dist-packages
- /usr/local/lib/python3.10/dist-packages/DracoPy-1.3.0-py3.10-linux-x86_64.egg
- /usr/lib/python3/dist-packages
- /usr/lib/python3.10/dist-packages
- /casa/home/Git_projects
IPdb [3]: print('\nsys.executable: ', sys.executable)
sys.executable: /usr/bin/python
</code></pre>
<p>We notice that we've lost the last 3 elements of sys.path in Spyder.</p>
<ul>
<li>/casa/host/build/python/sitecustomize</li>
<li>/casa/host/src/development/casa-distro/master/python</li>
<li>/casa/host/src/capsul/master</li>
</ul>
<p>I would like to point out that these are paths we define elsewhere in the container, but they are present in the shell environment (as shown in 2-1; calling it "the shell" is a bit simplistic). It seems to me that Spyder should be able to retrieve these elements by default...</p>
<p>However, it's true that all is not lost, because as I wrote above, it is possible to define a Spyder-specific PYTHONPATH with these elements... It's just that I don't think developers really want to waste time with these configuration steps.</p>
|
<python><spyder><sys.path>
|
2024-06-24 17:25:47
| 1
| 727
|
servoz
|
78,663,819
| 3,279,603
|
couchbase.exceptions.UnAmbiguousTimeoutException using Python SDK
|
<p>I’m getting <code>couchbase.exceptions.UnAmbiguousTimeoutException</code> every time I try to connect to my Couchbase cluster using the Python SDK. With the same hostnames and configuration, I have no issue connecting via the <code>.NET</code> SDK.</p>
<p>This is my Python code:</p>
<pre class="lang-py prettyprint-override"><code>class CouchbaseBase:
def __init__(self, endpoint, username, password, bucket_name):
logging.basicConfig(filename='example.log',
filemode='w',
level=logging.DEBUG,
format='%(levelname)s::%(asctime)s::%(message)s',
datefmt='%Y-%m-%d %H:%M:%S')
logger = logging.getLogger()
couchbase.configure_logging(logger.name, level=logger.level)
# Connect options - authentication
auth = PasswordAuthenticator(username, password)
timeout_opts = ClusterTimeoutOptions(connect_timeout=timedelta(seconds=60),
kv_timeout=timedelta(seconds=60))
# get a reference to our cluster
options = ClusterOptions(auth,
timeout_options=timeout_opts,
disable_mozilla_ca_certificates=True,
enable_dns_srv=False,
tls_verify=None)
cluster = Cluster.connect(f"couchbase://{endpoint}", options)
# Wait until the cluster is ready for use.
cluster.wait_until_ready(timedelta(seconds=5))
</code></pre>
<p>And these are the logs (the named address and IP are replaced for security reasons):</p>
<pre><code>DEBUG::2024-06-24 09:58:57::{"openssl_default_cert_dir": "/etc/ssl/certs", "openssl_default_cert_file": "/etc/ssl/cert.pem", "openssl_headers": "OpenSSL 1.1.1 (compatible; BoringSSL)", "openssl_runtime": "BoringSSL", "txns_forward_compat_extensions": "TI,MO,BM,QU,SD,BF3787,BF3705,BF3838,RC,UA,CO,BF3791,CM,SI,QC,IX,TS,PU", "txns_forward_compat_protocol_version": "2.0", "version": "1.0.0"}
DEBUG::2024-06-24 09:58:57::Found DNS Servers: [10.70.0.2, 10.2.127.254, 8.8.8.8], selected nameserver: "10.70.0.2"
DEBUG::2024-06-24 09:58:57::open cluster, id: "a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337", core version: "1.0.0+", {"bootstrap_nodes":[{"hostname":"couchbase-named-address.com","port":"11210"}],"options":{"analytics_timeout":"75000ms","bootstrap_timeout":"10000ms","config_idle_redial_timeout":"300000ms","config_poll_floor":"50ms","config_poll_interval":"2500ms","connect_timeout":"60000ms","disable_mozilla_ca_certificates":false,"dns_config":{"nameserver":"10.70.0.2","port":53,"timeout":"500ms"},"dump_configuration":false,"enable_clustermap_notification":true,"enable_compression":true,"enable_dns_srv":true,"enable_metrics":true,"enable_mutation_tokens":true,"enable_tcp_keep_alive":true,"enable_tls":false,"enable_tracing":true,"enable_unordered_execution":true,"idle_http_connection_timeout":"1000ms","key_value_durable_timeout":"10000ms","key_value_timeout":"60000ms","management_timeout":"75000ms","max_http_connections":0,"metrics_options":{"emit_interval":"600000ms"},"network":"auto","query_timeout":"75000ms","resolve_timeout":"2000ms","search_timeout":"75000ms","show_queries":false,"tcp_keep_alive_interval":"60000ms","tls_verify":"peer","tracing_options":{"analytics_threshold":"1000ms","key_value_threshold":"500ms","management_threshold":"1000ms","orphaned_emit_interval":"10000ms","orphaned_sample_size":64,"query_threshold":"1000ms","search_threshold":"1000ms","threshold_emit_interval":"10000ms","threshold_sample_size":64,"view_threshold":"1000ms"},"transactions_options":{"cleanup_config":{"cleanup_client_attempts":false,"cleanup_lost_attempts":false,"cleanup_window":"0ms","collections":[]},"durability_level":"none","query_config":{"scan_consistency":"not_bounded"},"timeout":"0ns"},"trust_certificate":"","use_ip_protocol":"any","user_agent_extra":"pycbc/4.2.1 (python/3.9.7)","view_timeout":"75000ms"}}
DEBUG::2024-06-24 09:58:57::Query DNS-SRV: address="couchbase-named-address.com", service="_couchbase", nameserver="10.70.0.2:53"
DEBUG::2024-06-24 09:58:57::DNS UDP returned 0 records
WARNING::2024-06-24 09:58:57::DNS SRV query returned 0 records for "couchbase-named-address.com", assuming that cluster is listening this address
DEBUG::2024-06-24 09:58:57::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> attempt to establish MCBP connection
DEBUG::2024-06-24 09:58:57::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> connecting to 127.0.0.1:11210 ("couchbase-named-address.com:11210"), timeout=60000ms
DEBUG::2024-06-24 09:58:59::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> reached the end of list of bootstrap nodes, waiting for 500ms before restart
DEBUG::2024-06-24 09:59:00::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> attempt to establish MCBP connection
DEBUG::2024-06-24 09:59:00::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> connecting to 127.0.0.1:11210 ("couchbase-named-address.com:11210"), timeout=60000ms
DEBUG::2024-06-24 09:59:02::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> reached the end of list of bootstrap nodes, waiting for 500ms before restart
DEBUG::2024-06-24 09:59:02::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> attempt to establish MCBP connection
DEBUG::2024-06-24 09:59:02::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> connecting to 127.0.0.1:11210 ("couchbase-named-address.com:11210"), timeout=60000ms
DEBUG::2024-06-24 09:59:04::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> reached the end of list of bootstrap nodes, waiting for 500ms before restart
DEBUG::2024-06-24 09:59:05::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> attempt to establish MCBP connection
DEBUG::2024-06-24 09:59:05::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> connecting to 127.0.0.1:11210 ("couchbase-named-address.com:11210"), timeout=60000ms
DEBUG::2024-06-24 09:59:07::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> reached the end of list of bootstrap nodes, waiting for 500ms before restart
DEBUG::2024-06-24 09:59:07::all nodes failed to bootstrap, triggering DNS-SRV refresh, ec=unambiguous_timeout (14), last endpoint="couchbase-named-address.com:11210"
WARNING::2024-06-24 09:59:07::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> unable to bootstrap in time
DEBUG::2024-06-24 09:59:07::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> stop MCBP connection, reason=do_not_retry
DEBUG::2024-06-24 09:59:07::Query DNS-SRV: address="couchbase-named-address.com", service="_couchbase", nameserver="10.70.0.2:53"
DEBUG::2024-06-24 09:59:07::PYCBC: create conn callback completed
DEBUG::2024-06-24 09:59:07::[a7e0e7-ee62-6f4b-ab4b-67f599fe7e4337/b49bd8-2d09-e440-6ac3-0b7691c4cef19e/plain/-] <couchbase-named-address.com:11210> destroy MCBP connection
</code></pre>
<p>The only difference I’m seeing in my .NET configuration is that it has the options <code>HttpIgnoreRemoteCertificateMismatch</code> and <code>KvIgnoreRemoteCertificateNameMismatch</code>, which I set to <code>true</code> in my case, but I cannot find any equivalent for the Python SDK.</p>
<p><code>.NET</code> equivalent code that is working without any issue:</p>
<pre class="lang-cs prettyprint-override"><code>_clusterOptions = new ClusterOptions();
config.GetCouchbaseClientDefinition().Bind(_clusterOptions);
_clusterOptions.HttpCertificateCallbackValidation += new RemoteCertificateValidationCallback((object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors errors) =>
{
    return true;
});
_clusterOptions.KvCertificateCallbackValidation += new RemoteCertificateValidationCallback((object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors errors) =>
{
    return true;
});
_clusterOptions.NetworkResolution = "external";
_clusterOptions.EnableDnsSrvResolution = false;
_clusterOptions.WithSerializer(new CouchbaseSerializer());
_clusterOptions.AddLinq();
_clusterOptions.KvTimeout = TimeSpan.FromSeconds(10);
_cluster = Cluster.ConnectAsync(_clusterOptions).Result;
</code></pre>
|
<python><sdk><timeout><couchbase>
|
2024-06-24 17:16:03
| 1
| 5,230
|
aminrd
|
78,663,805
| 7,486,038
|
mlflow doesn't autolog artifacts while logging images
|
<p>I am fairly new to MLflow and I stumbled across some strange behavior. When I run a simple Keras model fit with MLflow autolog like this:</p>
<pre><code>mlflow.set_tracking_uri("sqlite:///mlflow.db")
mlflow.set_experiment("keras_log")
mlflow.tensorflow.autolog()
# Define the parameters.
num_epochs = 10
batch_size = 256
# Train the model.
history = model.fit(X_train,
                    y_train,
                    epochs=num_epochs,
                    batch_size=batch_size,
                    validation_data=(X_test, y_test))
mlflow.end_run()
</code></pre>
<p>This produces the expected behavior. I can see the model in artifacts and metrics.</p>
<p><a href="https://i.sstatic.net/oTlcS9WA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTlcS9WA.png" alt="enter image description here" /></a></p>
<p>However, when I use autolog together with a custom figure containing the training accuracy and loss, the artifacts contain only the custom image, as if autolog didn't work at all.</p>
<pre><code>mlflow.set_tracking_uri("sqlite:///mlflow.db")
mlflow.set_experiment("keras_log")
mlflow.tensorflow.autolog()
# Define the parameters.
num_epochs = 10
batch_size = 256
# Train the model.
history = model.fit(X_train,
                    y_train,
                    epochs=num_epochs,
                    batch_size=batch_size,
                    validation_data=(X_test, y_test))
##_________ Problematic Part
fig, ax = plt.subplots(1,2,figsize=(10,4))
ax[0].plot(history.history['accuracy'], label='Accuracy' )
ax[0].plot(history.history['val_accuracy'], label='Val Accuracy' )
ax[0].set_title('Accuracy')
ax[0].legend(loc='best')
ax[1].plot(history.history['loss'], label='Loss' )
ax[1].plot(history.history['val_loss'], label='Val Loss' )
ax[1].set_title('Loss')
ax[1].legend(loc='best')
mlflow.log_figure(fig,'training_history.png')
# _________
mlflow.end_run()
</code></pre>
<p>The model artifacts aren't there. The metrics also aren't recorded.</p>
<p>Am I missing something straightforward?</p>
<p><a href="https://i.sstatic.net/2oRZZfM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2oRZZfM6.png" alt="enter image description here" /></a>
Please help.</p>
|
<python><machine-learning><mlflow>
|
2024-06-24 17:12:13
| 1
| 326
|
Arindam
|
78,663,622
| 1,215,889
|
How to pass feature value to custom loss in Keras
|
<p>I am using a custom loss to train my model and want to use a feature value so that the loss is weighted per sample, something like:</p>
<pre><code>def custom_loss(y_true, y_pred, feature_val):
return (feature_val ** 2) * abs(y_true - y_pred)
</code></pre>
<p>How can I achieve this in Keras?</p>
|
<python><tensorflow><keras><deep-learning>
|
2024-06-24 16:27:41
| 0
| 9,159
|
neel
|
78,663,537
| 7,465,462
|
Using REST API in Python to run Workflows in Azure Purview
|
<p>We are trying to use Purview REST APIs to make a user request that should be able to trigger a Purview workflow run. The workflow should trigger whenever the user is trying to update an asset, for example its description.</p>
<p>According to the <a href="https://learn.microsoft.com/en-us/rest/api/purview/workflowdataplane/user-requests/submit?view=rest-purview-workflowdataplane-2023-10-01-preview&tabs=HTTP" rel="nofollow noreferrer">REST API documentation</a>, we are using the following code:</p>
<pre><code>import requests

url = f"https://XXXXX.purview.azure.com/workflow/userrequests?api-version=2023-10-01-preview"
headers = {
    "Authorization": f"Bearer {token}",  # each time generated
    "Content-Type": "application/json"
}
body = {
    "operations": [
        {
            "type": "UpdateAsset",
            "payload": {
                "entities": {
                    "typeName": "azure_sql_table",
                    "attributes": {
                        "guid": "f00553c6-7a45-479f-b2fe-f9f6f6f60000",
                        "userDescription": "New description from ADB via workflow API",
                        "qualifiedName": "mssql://XXXXX.database.windows.net/XXXXX/dbo/YYYYY",
                        "name": "YYYYY",
                        "description": "Description field from ADB via workflow API"
                    }
                }
            }
        }
    ],
    "comment": "Thanks!"
}
response = requests.post(url, headers=headers, json=body)
response.json()
</code></pre>
<p>The response we are getting is:</p>
<pre><code>{'error': {'requestId': '3ea14555-aa4c-48e7-b1b6-1d683f39515b',
'code': 'Workflow.DataCatalogError.InvalidJsonRequestPayload',
'message': "Invalid Json request payload: '.entities(missing)'"}}
</code></pre>
<p>We do not understand what is missing inside "entities". There is a similar issue <a href="https://learn.microsoft.com/en-us/answers/questions/1631905/how-to-use-updateasset-to-submit-a-a-user-request?comment=question" rel="nofollow noreferrer">here</a>.<br />
What are we doing wrong?</p>
|
<python><rest><azure-purview>
|
2024-06-24 16:04:22
| 3
| 9,318
|
Ric S
|
78,663,524
| 5,443,120
|
How to specify colorway for plotly line graph theme?
|
<p>I'm plotting line graphs in Python with plotly. I can manually specify the colour of each line in the <code>px.line()</code> call with the <code>color_discrete_map</code> argument.
But what I want to do is specify the default colours using a <a href="https://plotly.com/python/templates/" rel="nofollow noreferrer"><em>theme</em></a>. I am able to define a theme and apply other characteristics (e.g. a grey background), but the "colorway" does not get applied. Why?</p>
<p>MWE:</p>
<pre><code>import plotly.express as px
import plotly.io as pio
import pandas as pd

# generate sample data
data = []
for x in range(3):
    for c in ['a', 'b']:
        data.append({'x': x, 'y': x + (c == 'a')*1, 'c': c})
test_df = pd.DataFrame(data)

fig = px.line(test_df,
              y='y',
              x='x',
              color='c',
              color_discrete_map={
                  'a': 'pink',
                  'b': 'olive'
              },
              title='manual')
fig.show()

test_template = {
    'layout': {
        'colorway': ['pink', 'olive'],
        'plot_bgcolor': 'grey',
    },
}
# Register the template
pio.templates['test_template'] = test_template

# Create the plot
fig = px.line(test_df,
              y='y',
              x='x',
              color='c',
              title='via theme')
fig.update_layout(template=test_template)
# Show the plot
fig.show()
</code></pre>
<p>Expected behavior: Both plots have the same line colours.</p>
<p>Actual behavior:</p>
<p>The default plotly colours (red and blue) are used for the line colours when I try to use a custom theme.</p>
<p><a href="https://i.sstatic.net/M6qrkrdp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6qrkrdp.png" alt="plots" /></a></p>
<p>The fact that the background colour does change means that I am correctly applying a theme. The problem is probably with the definition or location of the <code>colorway</code> property within the theme dict.</p>
|
<python><plotly><themes>
|
2024-06-24 16:00:44
| 1
| 4,421
|
falsePockets
|
78,663,324
| 5,786,649
|
Is it possible to have an inline-definition of a dataclass in Python?
|
<p><em>Preliminary note:</em> The first three examples provided all work. My question is: If I declare dataclasses that are only used once inside another dataclass, can I avoid the "proper" declaration of the nested dataclass in favour of an "inline" declaration?
I want to use nested dataclasses because it makes attribute access in my IDE simpler, i.e. I want to be able to use dot notation like <code>MyNestedDataclass.</code> and my IDE will suggest <code>group_1</code> and <code>group_2</code>.</p>
<hr />
<p>I want to define a nested dict and make use of Dataclass's feature for dot-notation - mainly because my type checker then suggests the fields. This is the data structure I aim for:</p>
<pre class="lang-py prettyprint-override"><code>my_nested_dict = {
    "group_1": {
        "variable_1a": 10,
        "variable_1b": True,
    },
    "group_2": {
        "variable_2a": "foo",
    }
}
</code></pre>
<p>Currently, I am doing this:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Group1:
    variable_1a: int = 10
    variable_1b: bool = True

@dataclass
class Group2:
    variable_2a: str = "foo"

@dataclass
class MyNestedDataclass:
    group_1: Group1
    group_2: Group2
</code></pre>
<p>I have tried this pattern, but it leads to my type checker not knowing which keys should be allowed in <code>MyNestedDataclass.group_1</code> and <code>MyNestedDataclass.group_2</code>, and it prevents me from accessing them like <code>MyNestedDataclass.group_1.variable_1a</code>:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class MyNestedDataclass:
    group_1: dict[str, Any] = field(default_factory=lambda: {
        "variable_1a": 10,
        "variable_1b": True,
    })
    group_2: dict[str, Any] = field(default_factory=lambda: {
        "variable_2a": "foo",
    })
</code></pre>
<p>It seems very unlikely, but is there something like this:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class MyNestedDataclass:
    group_1: Group1 = field(default_factory=lambda: dataclass(
        variable_1a: int = 10
        variable_1b: bool = True
    ))
    group_2: Group2 = field(default_factory=lambda: dataclass(
        variable_2a: str = "foo"
    ))
</code></pre>
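<p>For completeness, the closest thing to an inline definition I've found so far is <code>dataclasses.make_dataclass</code>. It works at runtime, but as far as I can tell my type checker still can't see the generated fields, which defeats the original purpose (this is just a sketch of what I tried):</p>

```python
from dataclasses import dataclass, field, make_dataclass

# Inline-ish: build the nested dataclasses dynamically.
# A bare value as the third tuple element becomes the field default.
Group1 = make_dataclass("Group1", [("variable_1a", int, 10), ("variable_1b", bool, True)])
Group2 = make_dataclass("Group2", [("variable_2a", str, "foo")])

@dataclass
class MyNestedDataclass:
    group_1: Group1 = field(default_factory=Group1)
    group_2: Group2 = field(default_factory=Group2)

nested = MyNestedDataclass()
print(nested.group_1.variable_1a)  # 10 at runtime, but no completion in the IDE
```

<p>Dot access works fine at runtime, but Pyright flags the dynamically created classes when they are used as annotations, so the completion problem remains.</p>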
|
<python><python-typing><python-dataclasses><pyright>
|
2024-06-24 15:16:03
| 1
| 543
|
Lukas
|
78,663,210
| 6,622,697
|
Alembic creates foreign key constraints before other table is created
|
<p>I'm trying to set up Alembic for the first time. I dropped all my tables, so that Alembic would be starting from scratch. I entered</p>
<pre><code>alembic revision --autogenerate -m "Initial tables"
alembic upgrade head
</code></pre>
<p>I got an immediate error because it was creating a table which has a foreign key. But the other table hadn't been created yet.</p>
<p>Here's an extract</p>
<pre><code>def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('calibration_metric',
        sa.Column('pk', sa.Integer(), nullable=False),
        sa.ForeignKeyConstraint(['metric_pk'], ['metric.pk'], ),
        sa.PrimaryKeyConstraint('pk')
    )
</code></pre>
<p>and then way down, many lines later</p>
<pre><code> op.create_table('metric',
        sa.Column('pk', sa.Integer(), nullable=False),
</code></pre>
<p>Is there a way to tell Alembic not to create the Foreign Key constraints until after all the tables have been created?</p>
<p>Any other solution?</p>
|
<python><alembic><alchemy>
|
2024-06-24 14:56:53
| 2
| 1,348
|
Peter Kronenberg
|
78,662,921
| 447,426
|
pyspark How to read a folder with binary files continuously - on new files
|
<p>I created a pyspark pipeline that begins on reading binary files:</p>
<pre><code>unzipped: DataFrame = spark.read.format("binaryFile")\
    .option("pathGlobFilter", "*.pcap.tgz")\
    .option("compression", "gzip")\
    .load(folder)
</code></pre>
<p>After "unzipping" I untar them and extract IP/UDP packets that use a proprietary packet format.
Is there a pyspark way to execute the pipeline on new files, or do I need some kind of wrapper to "observe" the folder? I would like to avoid polling and instead have something filesystem-event based.</p>
<p>I would like to move processed files into a done folder.</p>
<p>I know there is "stream processing", but I am not sure if it is applicable here. At least there are some use cases where I need data from previous files to be accessible.</p>
|
<python><apache-spark><pyspark>
|
2024-06-24 13:56:55
| 0
| 13,125
|
dermoritz
|
78,662,706
| 16,527,170
|
Interchange Start Date, End Date and other Columns with Earlier Row if Dates greater than 8 in pandas dataframe
|
<p>Data:</p>
<pre><code># Sample data
data = {
    'Start Date': ['2022-10-18', '2022-10-25', '2023-04-17'],
    'End Date': ['2022-10-20', '2023-04-06', '2023-07-04'],
    'Close1': [17486.95, 17656.35, 17706.85],
    'Close2': [17563.95, 17599.15, 19389.00],
    'NF_BEES1': [0.58, 0.19, 0.12],
    'NF_BEES2': [0.63, 0.75, 0.73],
    'Difference': [77.00, -57.20, 1682.15],
    'Days Difference': [2, 163, 78]
}

# Create DataFrame
df = pd.DataFrame(data)
df

# Convert date columns to datetime
df['Start Date'] = pd.to_datetime(df['Start Date'])
df['End Date'] = pd.to_datetime(df['End Date'])
</code></pre>
<p>Output:</p>
<pre><code> Start Date End Date Close1 Close2 NF_BEES1 NF_BEES2 Difference Days Difference
0 2022-10-18 2022-10-20 17486.95 17563.95 0.58 0.63 77.00 2
1 2022-10-25 2023-04-06 17656.35 17599.15 0.19 0.75 -57.20 163
2 2023-04-17 2023-07-04 17706.85 19389.00 0.12 0.73 1682.15 78
</code></pre>
<p>If the <code>Days Difference</code> value is > 8, then create a new row in df: the <code>End Date</code> of the earlier row should become the <code>Start Date</code> of the new row, and the <code>Start Date</code> of the current row (the one with Days Difference > 8) should become its <code>End Date</code>.</p>
<p>The <code>Close1</code>, <code>Close2</code>, <code>NF_BEES1</code> and <code>NF_BEES2</code> values should be carried over in the same way as <code>Start Date</code> and <code>End Date</code>. Based on that, <code>Days Difference</code> needs to be recalculated for the new row.</p>
<p>My Code (Not working as Expected):</p>
<pre><code># Iterate through the DataFrame
rows = []
for idx, row in df.iterrows():
    rows.append(row)
    if row['Days Difference'] > 8:
        # Create a new row
        new_row = row.copy()
        new_row['Start Date'] = rows[-2]['End Date']
        new_row['Close1'] = rows[-2]['Close2']
        new_row['NF_BEES1'] = rows[-2]['NF_BEES2']
        new_row['End Date'] = row['Start Date']
        new_row['Close2'] = row['Close1']
        new_row['NF_BEES2'] = row['NF_BEES1']
        new_row['Difference'] = (new_row['Close2'] - new_row['Close1'])
        new_row['Days Difference'] = (new_row['End Date'] - new_row['Start Date']).days
        print(new_row)
        print(type(new_row))

# Create a new DataFrame with the split rows
new_df = pd.DataFrame(rows)
new_df
</code></pre>
<p>My Output:</p>
<pre><code>
Start Date End Date Close1 Close2 NF_BEES1 NF_BEES2 Difference Days Difference
0 2022-10-18 2022-10-20 17486.95 17563.95 0.58 0.63 77.00 2
1 2022-10-25 2023-04-06 17656.35 17599.15 0.19 0.75 -57.20 163
2 2023-04-17 2023-07-04 17706.85 19389.00 0.12 0.73 1682.15 78
</code></pre>
<p>Expected Output:</p>
<pre><code>Start Date End Date Close1 Close2 NF_BEES1 NF_BEES2 Difference Days Difference
18 2022-10-18 2022-10-20 17486.95 17563.95 0.58 0.63 77.00 2
19 2022-10-20 2022-10-25 17563.95 17656.35 0.63 0.19 93 5
20 2023-04-06 2023-04-17 17599.15 17706.85 0.75 0.12 107 11
</code></pre>
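<p>For reference, a variant of my loop that does fill the gap rows (a sketch: I append each new "gap" row before the current row, and it keeps the original rows too, so the row order and index won't match my expected output exactly):</p>

```python
import pandas as pd

data = {
    'Start Date': ['2022-10-18', '2022-10-25', '2023-04-17'],
    'End Date': ['2022-10-20', '2023-04-06', '2023-07-04'],
    'Close1': [17486.95, 17656.35, 17706.85],
    'Close2': [17563.95, 17599.15, 19389.00],
    'NF_BEES1': [0.58, 0.19, 0.12],
    'NF_BEES2': [0.63, 0.75, 0.73],
    'Difference': [77.00, -57.20, 1682.15],
    'Days Difference': [2, 163, 78],
}
df = pd.DataFrame(data)
df['Start Date'] = pd.to_datetime(df['Start Date'])
df['End Date'] = pd.to_datetime(df['End Date'])

rows = []
for _, row in df.iterrows():
    if rows and row['Days Difference'] > 8:
        prev = rows[-1]
        gap = row.copy()  # new "gap" row between prev and the current row
        gap['Start Date'], gap['End Date'] = prev['End Date'], row['Start Date']
        gap['Close1'], gap['Close2'] = prev['Close2'], row['Close1']
        gap['NF_BEES1'], gap['NF_BEES2'] = prev['NF_BEES2'], row['NF_BEES1']
        gap['Difference'] = gap['Close2'] - gap['Close1']
        gap['Days Difference'] = (gap['End Date'] - gap['Start Date']).days
        rows.append(gap)
    rows.append(row)

new_df = pd.DataFrame(rows).reset_index(drop=True)
```

<p>On the sample above this produces gap rows 2022-10-20 → 2022-10-25 (5 days) and 2023-04-06 → 2023-04-17 (11 days), matching the dates in my expected output.</p>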
|
<python><pandas><dataframe>
|
2024-06-24 13:12:57
| 2
| 1,077
|
Divyank
|
78,662,483
| 3,170,559
|
Why is my PySpark row_number column messed up when applying a schema?
|
<p>I want to apply a schema to specific non-technical columns of a Spark DataFrame. Beforehand, I add an artificial ID using <code>Window</code> and <code>row_number</code> so that I can later join some other technical columns to the new DataFrame from the initial DataFrame. However, after applying the schema, the generated ID is messed up. Below is a code sample. Can someone explain why this happens and how to resolve the issue?</p>
<pre><code>from pyspark.sql.functions import row_number, lit, col, monotonically_increasing_id, sum
from pyspark.sql.window import Window
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
# Sample DataFrame
data = [(1, "Alice"), (2, "Bob"), (3, "Charlie")]
df = spark.createDataFrame(data, ["id", "name"])
# Schema to apply
schema = StructType([
    StructField("id", IntegerType(), False),
    StructField("name", StringType(), False),
])

# Create ID column
w = Window().orderBy(lit('A'))
df = df.withColumn('_special_surrogate_id', row_number().over(w))

# Improved method
surrogate_key_field = StructField("_special_surrogate_id", StringType(), False)
schema_with_surrogate = StructType(schema.fields + [surrogate_key_field])

# Loop because sometimes it works and sometimes it doesn't work
for i in range(11):
    df_filtered = df.select("id", "name", "_special_surrogate_id")
    df_filtered = spark.createDataFrame(df_filtered.rdd, schema_with_surrogate)
    combined_df = df.withColumnRenamed("id", "id1").join(df_filtered.withColumnRenamed("id", "id2"), on="_special_surrogate_id")
    print("Diffs in Iteration " + str(i) + ":")
    print(combined_df.withColumn("diff", (col("id1") != col("id2")).cast("integer")).agg(sum("diff")).collect()[0][0])
</code></pre>
|
<python><apache-spark><pyspark><rdd><azure-synapse>
|
2024-06-24 12:35:42
| 1
| 717
|
stats_guy
|
78,662,482
| 1,083,855
|
Is it possible to improve orientation detection with tesseract by specifying language and script?
|
<p>I am using tesseract to detect the orientation of scanned images. They mostly contain text.
It works in general but sometimes fails, even for simple cases:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Input</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;"><a href="https://i.sstatic.net/kBKKqcb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kBKKqcb8.png" alt="enter image description here" /></a></td>
<td>orientation: 0<br>script: 'Cyrillic'</td>
</tr>
<tr>
<td style="text-align: center;"><a href="https://i.sstatic.net/4tZnIeLj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4tZnIeLj.png" alt="enter image description here" /></a></td>
<td>orientation: 180<br> script: 'Cyrillic'</td>
</tr>
</tbody>
</table></div>
<p>I know the text is always 'Latin' script and 'French/English' language.
Is there a way to specify that to tesseract? I found that it's possible to specify it when converting an image to text (using <code>image_to_string()</code>), but not when detecting orientation.</p>
<p>Here is the code I use:</p>
<pre><code>import pytesseract
import cv2
img = cv2.imread("somefile.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #black and white
img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1] #contrast
results = pytesseract.image_to_osd(img, output_type=pytesseract.Output.DICT)
print(results["rotate"])
</code></pre>
|
<python><ocr><tesseract>
|
2024-06-24 12:35:39
| 0
| 4,576
|
tigrou
|
78,662,478
| 5,823,008
|
Compute statistical values out of a precounted list in Python
|
<p>I have a dataframe of precounted data (shown below). Let's assume it's a "Do you like?" scale, where 4 people answered 1-Don't like at all, 10 people answered 2-Don't like, and so on.</p>
<p>How can I compute the different statistical values? I want to compute the mean (in this case it can be done by hand: <code>(4*1+10*2+125*3+85*4+25*5)/(4+10+125+85+25)=3.47</code>) and the standard deviation.</p>
<pre><code>df=pd.DataFrame({1:4,2:10,3:125,4:85,5:25})
</code></pre>
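<p>For reference, the direction I've been sketching with numpy (variable names are my own; I'm assuming the value-to-count pairs can be pulled out as plain arrays):</p>

```python
import numpy as np

# value -> count, as in the question
counts = {1: 4, 2: 10, 3: 125, 4: 85, 5: 25}
values = np.array(list(counts))
weights = np.array(list(counts.values()))

# weighted mean and (population) standard deviation
mean = np.average(values, weights=weights)   # 864 / 249 ~ 3.47
variance = np.average((values - mean) ** 2, weights=weights)
std = np.sqrt(variance)
print(round(mean, 2), round(std, 2))  # 3.47 0.79
```

<p>This treats the counts as frequency weights; a sample (n-1) standard deviation would need a small correction on top.</p>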
|
<python><statistics>
|
2024-06-24 12:34:42
| 3
| 350
|
MatMorPau22
|
78,662,180
| 273,593
|
invoke a `@classmethod`-annotated method
|
<p>Dummy scenario:</p>
<pre class="lang-py prettyprint-override"><code># where I want to collect my decorated callbacks
cbs = []

# decorator to mark a callback to be collected
def my_deco(cb):
    cbs.append(cb)
    return cb

class C:
    @my_deco
    @classmethod
    def cm(cls):
        ...

# execute the collected callbacks
for cb in cbs:
    cb()
</code></pre>
<p>My goal is to have a <code>my_deco</code> decorator that collects some callbacks and executes them at a given time.</p>
<p>I can apply <code>my_deco</code> to normal functions and they are collected and invoked normally. I can also call <code>my_deco(C.cm)</code> after the <code>C</code> definition, and the collected method is invokable normally.</p>
<p>In the example, however, I want to apply <code>my_deco</code> inside the definition of <code>class C</code>.
I am struggling to understand</p>
<ol>
<li>how to detect that I'm decorating a <code>classmethod</code> object (at the moment I'm checking <code>isinstance(cb, classmethod)</code>, but I'm not sure that is correct);</li>
<li>how to retrieve the <code>C</code> class itself: assuming it's a <code>classmethod</code> instance, my understanding is that it is an "unbound" method, so invoking it requires manually supplying the <code>cls</code> parameter. But I don't know how to retrieve <code>C</code> in the <code>my_deco</code> implementation.</li>
</ol>
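<p>One direction I've been experimenting with (just a sketch, and I'm not sure it's the intended approach) is making the decorator an object so that <code>__set_name__</code> tells it which class it was defined in:</p>

```python
cbs = []

class my_deco:
    """Collect a callback; __set_name__ reveals the owning class."""
    def __init__(self, cb):
        self.cb = cb

    def __set_name__(self, owner, name):
        if isinstance(self.cb, classmethod):
            # bind the classmethod to its owning class before collecting it
            cbs.append(self.cb.__get__(None, owner))
        else:
            cbs.append(self.cb)
        # put the original (undecorated) attribute back on the class
        setattr(owner, name, self.cb)

class C:
    @my_deco
    @classmethod
    def cm(cls):
        return cls.__name__

for cb in cbs:
    print(cb())  # prints "C"
```

<p>The caveat is that <code>__set_name__</code> only fires for attributes assigned in a class body, so plain module-level functions would need a separate collection path.</p>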
|
<python><python-3.x><python-decorators><class-method>
|
2024-06-24 11:37:05
| 3
| 1,703
|
Vito De Tullio
|
78,662,089
| 1,083,855
|
How to update exif data without losing JFIF header?
|
<p>I use the following code to rotate a JPG image by 180 degrees by updating the EXIF header. I want to avoid re-encoding the image:</p>
<pre><code>import PIL
import piexif
filename = "somefile.jpg"
img = PIL.Image.open(filename)
exif_dict = piexif.load(img.info['exif'])
exif_dict["0th"][274] = 3 #180 deg rotate
exif_bytes = piexif.dump(exif_dict)
piexif.insert(exif_bytes, filename)
</code></pre>
<p>It seems to work (Windows can open the image, and it is rotated accordingly).
The main issue is that the JFIF header seems to be lost:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Before</th>
<th>After</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://i.sstatic.net/O9IJTEh1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O9IJTEh1.png" alt="enter image description here" /></a></td>
<td><a href="https://i.sstatic.net/A22BJ4a8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A22BJ4a8.png" alt="enter image description here" /></a></td>
</tr>
</tbody>
</table></div>
|
<python><jpeg><exif>
|
2024-06-24 11:18:23
| 1
| 4,576
|
tigrou
|
78,661,816
| 9,736,176
|
How to check for uncommited changes to git in a subset of files using python
|
<p>I'm doing data analysis, and for reproducibility I want to make sure my results are marked with the version of the code used to produce them. I do this with</p>
<pre><code>import git
repo = git.Repo(search_parent_directories=True)
last_commit_hex = repo.head.object.hexsha
</code></pre>
<p>But that's not necessarily the actual state of the code if there are uncommitted changes. So I want an automatic check in the program just before it starts running the analysis. I can use <code>repo.is_dirty()</code> to check for uncommitted changes in the repo. But that's checking too much. Besides the core package, there's also a bunch of miscellaneous files: testing, other analysis scripts that use that core package, etc. Those don't need to be fully committed, but they will trigger <code>is_dirty()</code>. How do I only check for the relevant files?</p>
<p>So, if my file system looks like this:</p>
<pre><code>Repo folder
|- package_folder
|- __init__.py
|- module1.py
|- module2.py
|- ...
|- some_analysis_script1.py
|- some_analysis_script2.py
|- some_testing.py
|- ...
</code></pre>
<p>I would want to check only the contents of <code>package_folder</code>. Getting the list of files can be done with <code>os.listdir</code>, but how do I check which ones have been changed?</p>
|
<python><git>
|
2024-06-24 10:22:53
| 1
| 436
|
BurnNote
|
78,661,811
| 1,422,096
|
Convert to int if not None, else keep None
|
<p>I have a <code>request</code> dict and I need to parse some elements into either <code>int</code> or <code>None</code>. This works:</p>
<pre><code>request = {"a": "123", "c": "456"}
a, b = request.get("a"), request.get("b") # too many
a, b = int(a) if a is not None else None, int(b) if b is not None else None # repetitions of "a" and "b" in this code!
print(a, b) # 123 None
</code></pre>
<p>but I guess it might be simplified, ideally with a one-liner.</p>
<p>Is there a workaround, <strong>with standard built-in functions</strong> (and no extra helper util function) to be able to do <code>a = int(request.get("a"))</code> without <code>int</code> producing an error if the input is <code>None</code>?</p>
<p>Note: in this question I'm not really interested in the <code>for</code> loop part, what I'm interested in is how to simplify this:</p>
<pre><code>a = request.get("a")
a = int(a) if a is not None else None
</code></pre>
<p>which I don't find pythonic.</p>
<p>Is there a way to simplify this pattern:</p>
<p><code>... if a is not None else None</code></p>
<p>?</p>
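<p>For context, the most compact variant I've managed with only built-ins still carries the conditional; it just avoids naming <code>a</code> and <code>b</code> twice:</p>

```python
request = {"a": "123", "c": "456"}

# one generator expression over the wanted keys
a, b = (int(v) if v is not None else None for v in map(request.get, ("a", "b")))
print(a, b)  # 123 None
```

<p>So the repetition of the key names is gone, but the <code>if ... is not None else None</code> pattern itself remains, which is the part I'd like to get rid of.</p>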
|
<python><nonetype>
|
2024-06-24 10:21:31
| 3
| 47,388
|
Basj
|
78,661,800
| 6,809,075
|
How to Implement a Memory-Efficient Custom Iterator for Large Files in Python?
|
<p>I need to iterate over extremely large files (several gigabytes each) in Python without loading the entire file into memory. Specifically, I want to implement a custom iterator that reads and processes data from these large files efficiently. What are the best practices and techniques for achieving this?</p>
<p><strong>I have experimented with Python’s fileinput module and streaming libraries, but I’m unsure of the best approach to implement a memory-efficient iterator for large files</strong></p>
<p>Can someone provide a comprehensive example or explanation of how to implement a memory-efficient custom iterator in Python for reading and processing large files? I am particularly interested in techniques that optimize memory usage and maintain high performance.</p>
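<p>To make the question concrete, this is the shape of iterator I've been sketching so far (the class name and chunk size are my own placeholders):</p>

```python
class ChunkIterator:
    """Lazily yield a large binary file in fixed-size chunks."""

    def __init__(self, path, chunk_size=1 << 20):  # 1 MiB per chunk
        self.path = path
        self.chunk_size = chunk_size

    def __iter__(self):
        with open(self.path, "rb") as f:
            # read() never holds more than chunk_size bytes at once
            while chunk := f.read(self.chunk_size):
                yield chunk


def iter_lines(path):
    """For text, the file object itself is already a lazy line iterator."""
    with open(path, "r", encoding="utf-8") as f:
        yield from f
```

<p>Both keep only one chunk or line in memory at a time, but I don't know whether this is the recommended pattern for multi-gigabyte files or whether there are faster approaches (e.g. <code>mmap</code>, larger buffers).</p>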
|
<python><memory-management><iterator>
|
2024-06-24 10:18:16
| 0
| 680
|
safiqul islam
|
78,661,106
| 3,486,684
|
How do I type annotate a worksheet, or more specifically, a read-only worksheet?
|
<p>I have <code>types-openpyxl</code> installed.</p>
<p>I load a workbook in read-only mode, and then access a worksheet from it:</p>
<pre><code>from pathlib import Path
import openpyxl as xl
from openpyxl.worksheet._read_only import ReadOnlyWorksheet
excel_file = Path("./xl.xlsx")
wb: xl.Workbook = xl.load_workbook(
    excel_file, read_only=True, data_only=True, keep_links=False
)
ws: ReadOnlyWorksheet = wb["hello world"]
x = ws[1]
</code></pre>
<p>This results in the <code>mypy</code> error:</p>
<pre><code>Invalid self argument "ReadOnlyWorksheet" to attribute function "__getitem__" with type "Callable[[Worksheet, int], tuple[Cell, ...]]"
</code></pre>
<p>How do I type annotate <code>ws</code>?</p>
|
<python><openpyxl><mypy><python-typing>
|
2024-06-24 07:42:33
| 2
| 4,654
|
bzm3r
|
78,661,005
| 10,200,497
|
How can I compare a value in one column to all values that are BEFORE it in another column to find the number of unique values that are less than?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
    {
        'a': [100, 100, 105, 106, 106, 107, 108, 109],
        'b': [99, 100, 110, 107, 100, 110, 120, 106],
    }
)
</code></pre>
<p>Expected output is creating column <code>x</code>:</p>
<pre><code>     a    b  x
0  100   99  0
1  100  100  1
2  105  110  2
3  106  107  3
4  106  100  1
5  107  110  4
6  108  120  5
7  109  106  3
</code></pre>
<p>Logic:</p>
<p>This is somehow an extension to this <a href="https://stackoverflow.com/a/78515922/10200497">answer</a>. I explain the logic by examples and I start from row <code>1</code>:</p>
<p>For row <code>1</code>, the <code>b</code> column value is 100. Then in order to get <code>x</code> for row <code>1</code> this value should be compared with all UNIQUE values in <code>a</code> that are on the same row or before it to find out how many values in <code>a</code> are less than or equal to it. The only unique value that is on the same row or before it is 100, so 1 is chosen for <code>x</code>.</p>
<p>For row <code>2</code>, there are two unique values in <code>a</code> that are less than or equal to 110 which are 100, 105.</p>
<p>The logic is the same for the rest of rows.</p>
<p>This is my attempt based on the linked answer but it does not work:</p>
<pre><code>t = df.a.unique()
m1 = np.arange(len(t))[:,None] >= np.arange(len(t))
h = df['b'].to_numpy()
m2 = t <= h[:, None]
</code></pre>
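<p>For what it's worth, this variation on my attempt reproduces the expected <code>x</code> column on the sample above. Instead of <code>df.a.unique()</code> I keep a row-aligned first-occurrence mask, so the triangular "at or before" comparison and the less-than-or-equal comparison share the same axis (I'm not sure it's the idiomatic way, though):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a': [100, 100, 105, 106, 106, 107, 108, 109],
    'b': [99, 100, 110, 107, 100, 110, 120, 106],
})
a = df['a'].to_numpy()
b = df['b'].to_numpy()

first = ~pd.Series(a).duplicated().to_numpy()           # True at the first occurrence of each value of `a`
tri = np.arange(len(a))[:, None] >= np.arange(len(a))   # tri[i, j]: row j is at or before row i
df['x'] = (tri & first & (a <= b[:, None])).sum(axis=1)
print(df['x'].tolist())  # [0, 1, 2, 3, 1, 4, 5, 3]
```

<p>Counting only first occurrences is what makes duplicates in <code>a</code> count once per prefix, which my original <code>t = df.a.unique()</code> attempt couldn't express row by row.</p>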
|
<python><pandas><dataframe>
|
2024-06-24 07:16:53
| 4
| 2,679
|
AmirX
|
78,660,891
| 15,690,172
|
Can't control servo with Python using Raspberry Pi
|
<p>I am using a <strong>Raspberry Pi model 4B</strong> and a <strong>Tower Pro servo SG51R</strong> which only turns 180 degrees. I am writing code on the Raspberry Pi to control the servo <em>(which works as I have tested it with my Arduino Uno, it responds and works correctly)</em> with Python.</p>
<p>I installed the <code>gpiozero</code> library successfully on my Raspberry Pi using the below commands:</p>
<pre><code>sudo apt-get install python3-rpi.gpio
sudo pip install gpiozero
</code></pre>
<p>I then connected my servo to the Raspberry Pi similar to the diagram shown below
<a href="https://i.sstatic.net/UmX7n5bE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmX7n5bE.png" alt="diagram" /></a><br />
I made sure the brown wire was connected to GND, the red wire to a 4.8 volt power supply, and the yellow (PWM) wire to the GPIO 25 pin (pin 22).</p>
<p>I then ran the below code:</p>
<pre class="lang-py prettyprint-override"><code>from time import sleep

from gpiozero import Servo

test = Servo(25)  # Connected servo to this pin (pin 22)

print('moving')
test.max()
print('done moving')
sleep(1)

print('moving again')
test.min()
print('finished second movement')
sleep(5)

test.value = 0
sleep(1)
test.value = 1
sleep(1)
test.value = -1
print('done entirely')
</code></pre>
<p>and received the below output:</p>
<pre><code>/usr/lib/python3/dist-packages/gpiozero/output_devices.py:1532: PWMSoftwareFallback: To reduce servo jitter, use the pigpio pin factory. See https://gpiozero.readthedocs.io/en/stable/api_output.html#servo for more info
warnings.warn(PWMSoftwareFallback(
moving
done moving
moving again
finished second movement
done entirely
</code></pre>
<p>Yet the servo still does not move. I even tried using a different servo of the same model, and using different pins to no avail. Is there anything else I'm missing? How could I make this work?</p>
|
<python><raspberry-pi><gpio><servo><gpiozero>
|
2024-06-24 06:44:54
| 0
| 309
|
realhuman
|
78,660,804
| 2,530,674
|
FastAPI via Docker is not running
|
<p>I have the following two files but I cannot get the FastAPI server to work via Docker.</p>
<p>I am using the command <code>docker build -t my_project .</code> to build it and <code>docker run -it -p 8080:8080 my_project</code> to run the server. Finally, I use the following curl command to call the API, but I get this error: <code>curl: (56) Recv failure: Connection reset by peer</code>.</p>
<pre><code>curl -X POST "http://127.0.0.1:8080/predict/" -H "Content-Type: application/json" -d '{"image_sha": "example_sha"}'
</code></pre>
<p>Running this via Python locally works perfectly fine.</p>
<p>server.py</p>
<pre class="lang-py prettyprint-override"><code>import base64
import io
from fastapi import FastAPI
from typing import Any
import uvicorn
from PIL import Image
import pydantic


class ImageModelInput(pydantic.BaseModel):
    image_sha: str

    def get_base64_encoded_image(self) -> str:
        return "dummy_base64_image"

    def decode_image(self) -> Image.Image:
        image_data = base64.b64decode(self.image_base64)
        return Image.open(io.BytesIO(image_data))


class ImageModelOutput(pydantic.BaseModel):
    attributes: list[str]


class ImageModel:
    def __init__(self, model: Any):
        self.model = model
        self.app = FastAPI()

        @self.app.post("/predict/", response_model=ImageModelOutput)
        async def predict(input: ImageModelInput):
            image = input.get_base64_encoded_image()
            prediction = self.model_predict(image)
            return ImageModelOutput(attributes=prediction)

        @self.app.get("/readyz")
        async def readyz():
            return {"status": "ready"}

    def model_predict(self, image: Image.Image) -> list[str]:
        # Replace this method with actual model prediction logic
        return ["dummy_attribute_1", "dummy_attribute_2"]

    def run(self, host: str = "127.0.0.1", port: int = 8080):
        uvicorn.run(self.app, host=host, port=port)


# Example usage
if __name__ == "__main__":
    # Replace with your actual model loading logic
    dummy_model = "Your_Model_Here"
    image_model = ImageModel(model=dummy_model)
    image_model.run()
</code></pre>
<p>Dockerfile:</p>
<pre><code>FROM python:3.12.4-slim@sha256:2fba8e70a87bcc9f6edd20dda0a1d4adb32046d2acbca7361bc61da5a106a914
WORKDIR /app
RUN --mount=type=cache,target=/root/.cache/pip \
pip install -U pip poetry wheel
RUN addgroup --system somebody && \
adduser --system --home /app --ingroup somebody somebody && \
chown -R somebody:somebody /app
USER somebody
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY my_project /app/my_project
ENV PYTHONUNBUFFERED=1
EXPOSE 8080
CMD python3 cx_ai_lightbox/server.py
</code></pre>
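Two details above may be worth double-checking (observations based only on what is shown, so treat them as assumptions): the CMD runs `cx_ai_lightbox/server.py`, while the COPY line only places code under `/app/my_project`; and uvicorn binds to 127.0.0.1, which is not reachable from outside the container even with `-p 8080:8080`. A sketch of an adjusted CMD:

```dockerfile
# Sketch only: assumes server.py lives inside the copied my_project/ directory
CMD ["python3", "my_project/server.py"]
```

Along with that, server.py would need to bind to all interfaces, e.g. `image_model.run(host="0.0.0.0")`, for the published port to be reachable from the host.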
|
<python><docker><fastapi>
|
2024-06-24 06:18:32
| 2
| 10,037
|
sachinruk
|
78,660,560
| 4,801,767
|
PEP 668 Error (Externally managed environment) within Conda
|
<p>While trying to <code>pip install</code> packages, even though I am inside a Conda environment, I'm getting the familiar error:</p>
<pre class="lang-none prettyprint-override"><code>error: externally-managed-environment
</code></pre>
<p>I would expect this if I'm using Python directly from the system prompt. But why do I get this even inside the Conda environment?</p>
|
<python><pip><conda>
|
2024-06-24 04:36:44
| 1
| 1,472
|
ahron
|
78,660,517
| 4,451,521
|
Is there a way to make the rows of a gradio dataframe clickable?
|
<p>I have the following script</p>
<pre><code>import gradio as gr
import pandas as pd
import numpy as np


def generate_random_dataframe():
    # Generate a random DataFrame
    df = pd.DataFrame(np.random.rand(10, 5), columns=list('ABCDE'))
    return df


with gr.Blocks() as demo:
    generate_button = gr.Button("Generate Random DataFrame")
    # Create a DataFrame component to display the DataFrame
    output_df = gr.DataFrame()
    generate_button.click(fn=generate_random_dataframe, inputs=[], outputs=output_df)

demo.launch()
</code></pre>
<p>When I run it and click the button, I get</p>
<p><a href="https://i.sstatic.net/F0Tw0pzV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0Tw0pzV.png" alt="enter image description here" /></a></p>
<p>My question is, is there a way to make this dataframe clickable?
For example, clicking a row would calculate the sum of all numbers in that row and show it below in a text element.</p>
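For reference, recent Gradio versions expose a `select` event on components, whose callback receives a `gr.SelectData` with an `.index` holding the clicked `(row, col)`. The row-sum part is plain pandas; a sketch of just that part, with the Gradio wiring left as comments since it needs a running UI (and the exact event API should be checked against the installed Gradio version):

```python
import numpy as np
import pandas as pd

def sum_selected_row(df: pd.DataFrame, row_index: int) -> float:
    """Sum all numeric values in the clicked row."""
    return float(df.iloc[row_index].sum())

# Gradio wiring (untested sketch, assuming a recent gradio version):
#   def on_select(df, evt: gr.SelectData):
#       return sum_selected_row(df, evt.index[0])
#   output_df.select(on_select, inputs=[output_df], outputs=[some_textbox])

df = pd.DataFrame(np.arange(15).reshape(3, 5), columns=list("ABCDE"))
print(sum_selected_row(df, 1))  # row 1 holds 5..9, so 35.0
```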
|
<python><gradio>
|
2024-06-24 04:17:58
| 1
| 10,576
|
KansaiRobot
|
78,660,506
| 14,343,465
|
ImportError: cannot import name 'Credentials' from 'dbt.adapters.base'
|
<p>I'm trying to get an AWS Athena adapter set up with dbt-core</p>
<h2>setup</h2>
<pre><code>python -m pip install dbt-core dbt-athena
</code></pre>
<ul>
<li>following this doc: <a href="https://docs.getdbt.com/docs/core/connect-data-platform/athena-setup" rel="nofollow noreferrer">Athena setup | dbt Developer Hub</a></li>
</ul>
<p><code>~/.dbt/profiles.yml</code></p>
<pre><code>default:
  outputs:
    dev:
      type: athena
      s3_staging_dir: [s3_staging_dir]
      s3_data_dir: [s3_data_dir]
      s3_data_naming: [table_unique] # the type of naming convention used when writing to S3
      region_name: [region_name]
      database: [database name]
      schema: [dev_schema]
      aws_profile_name: [optional profile to use from your AWS shared credentials file.]
      threads: [1 or more]
      num_retries: [0 or more] # number of retries performed by the adapter. Defaults to 5
  target: dev
</code></pre>
<h2>error</h2>
<p>running <code>dbt debug</code> returns the following error:</p>
<pre><code>from dbt.adapters.athena.connections import AthenaConnectionManager, AthenaCredentials
File "/usr/local/Caskroom/miniconda/base/envs/ww-data-integrity-dbt/lib/python3.11/site-packages/dbt/adapters/athena/connections.py", line 15, in <module>
from dbt.adapters.base import Credentials
ImportError: cannot import name 'Credentials' from 'dbt.adapters.base' (/usr/local/Caskroom/miniconda/base/envs/ww-data-integrity-dbt/lib/python3.11/site-packages/dbt/adapters/base/__init__.py)
</code></pre>
|
<python><amazon-web-services><amazon-athena><dbt>
|
2024-06-24 04:11:29
| 1
| 3,191
|
willwrighteng
|
78,660,487
| 13,642,249
|
Sniff Continuously and Save PCAP Files Simultaneously using PyShark's LiveCapture Method with display_filter
|
<p>I am attempting to continuously sniff packets while concurrently saving them to a PCAP file using PyShark's <code>LiveCapture</code> method with the <code>display_filter</code> param. I want to replicate the Wireshark feature where you can stop and save a capture at any given moment with any filter specified. This setup in Python would involve an indefinite timeout and no restriction on packet count, allowing a process interruption (such as a keyboard interrupt) to halt the capture. Here is an example with try/except where I can print out packets with no problem:</p>
<pre><code>import pyshark

interf = "Wi-Fi"
capture = pyshark.LiveCapture(interface=interf, display_filter='tcp')

try:
    for packet in capture.sniff_continuously():
        print(packet)
except KeyboardInterrupt:
    print("Capture stopped.")
</code></pre>
<p>And now after adding the param for <em>output_file</em>, nothing happens:</p>
<pre><code>import pyshark

interf = "Wi-Fi"
capture = pyshark.LiveCapture(interface=interf, display_filter='tcp', output_file="HERE.pcap")

try:
    for packet in capture.sniff_continuously():
        print(packet)
except KeyboardInterrupt:
    print("Capture stopped.")
</code></pre>
<p>Currently using <code>pyshark==0.6</code></p>
|
<python><pyshark>
|
2024-06-24 04:00:23
| 1
| 1,422
|
kyrlon
|
78,660,434
| 1,245,262
|
Why can't I connect to the default port on localhost when using Milvus?
|
<p>I'm just starting to use Milvus, but cannot connect to the port given in the example. I am running directly on my machine (i.e. not a Docker image), but cannot get the initial example to work:</p>
<pre><code>$ python
Python 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pymilvus
>>> pymilvus.__version__
'2.4.4'
>>> host="localhost"
>>> port = 19530
>>> from pymilvus import connections
>>> connections.connect("default",host=host,port=port)
Traceback (most recent call last):
File "/home/me/anaconda3/envs/ShopTalk/lib/python3.10/site-packages/pymilvus/client/grpc_handler.py", line 147, in _wait_for_channel_ready
grpc.channel_ready_future(self._channel).result(timeout=timeout)
File "/home/me/anaconda3/envs/ShopTalk/lib/python3.10/site-packages/grpc/_utilities.py", line 162, in result
self._block(timeout)
File "/home/me/anaconda3/envs/ShopTalk/lib/python3.10/site-packages/grpc/_utilities.py", line 106, in _block
raise grpc.FutureTimeoutError()
grpc.FutureTimeoutError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/me/anaconda3/envs/ShopTalk/lib/python3.10/site-packages/pymilvus/orm/connections.py", line 447, in connect
connect_milvus(**kwargs, user=user, password=password, token=token, db_name=db_name)
File "/home/me/anaconda3/envs/ShopTalk/lib/python3.10/site-packages/pymilvus/orm/connections.py", line 398, in connect_milvus
gh._wait_for_channel_ready(timeout=timeout)
File "/home/me/anaconda3/envs/ShopTalk/lib/python3.10/site-packages/pymilvus/client/grpc_handler.py", line 150, in _wait_for_channel_ready
raise MilvusException(
pymilvus.exceptions.MilvusException: <MilvusException: (code=2, message=Fail connecting to server on localhost:19530, illegal connection params or server unavailable)>
>>>
</code></pre>
<p>I've tried to guarantee that 19530 is listening:</p>
<pre><code>$ sudo ufw status
Status: active
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
19530/tcp ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
19530/tcp (v6) ALLOW Anywhere (v6)
</code></pre>
<p>But this doesn't seem to make a difference. I suppose there's something 'networky' that I'm missing here, but I have no idea what it is. The only installation instructions I saw were <code>pip install -U milvus</code>, which I did. What am I not doing here?</p>
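A quick stdlib-only check, independent of pymilvus, can confirm whether anything is actually accepting TCP connections on the port (a firewall ALLOW rule does not by itself mean a server is listening):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("localhost", 19530))
```

If this prints `False`, no process is listening on 19530 at all, and the problem is the server side rather than the pymilvus client.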
|
<python><database><port><milvus>
|
2024-06-24 03:25:05
| 0
| 7,555
|
user1245262
|
78,660,379
| 4,701,852
|
Straighten a Polygon into a line with Geopandas/Shapely
|
<p>I'm using Python, GeoPandas and Shapely to process the geometries of a road intersection.</p>
<p>The GeoJSON has a list of Polygons that I want to straighten into Lines, something like this:</p>
<p><a href="https://i.sstatic.net/e83OFzZv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e83OFzZv.png" alt="enter image description here" /></a></p>
<p>Does anyone know how to achieve that?</p>
|
<python><line><polygon><geopandas><shapely>
|
2024-06-24 02:55:19
| 2
| 361
|
Paulo Henrique PH
|