| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,744,982
| 7,226,192
|
How to get the current function name, file name and location (path) in Python? And how to get the calling function's data as well?
|
<p>I've been searching for some useful code examples on this subject for creating a simple logger for my program.</p>
<p>Here are the things I've searched:</p>
<ol>
<li>Getting the current function info (name, arguments, file name and path)</li>
<li>Getting the calling function info (name, file name and path, line number)</li>
</ol>
<p>For some reason this information is scattered over many posts and locations and it would be nice to have them in one single location.</p>
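For reference, a minimal sketch of how both lookups can be done with the standard `inspect` module (the function names below are illustrative, not from the question):

```python
import inspect

def who_am_i():
    # Frame of the currently executing function
    frame = inspect.currentframe()
    me = inspect.getframeinfo(frame)
    # Frame of whoever called us (one level up the stack)
    caller = inspect.getframeinfo(frame.f_back)
    return {
        "function": me.function,            # current function name
        "file": me.filename,                # current file path
        "caller_function": caller.function, # calling function name
        "caller_file": caller.filename,     # calling file path
        "caller_line": caller.lineno,       # call-site line number
    }

def some_caller():
    return who_am_i()
```

Calling `some_caller()` yields `"who_am_i"` for `function` and `"some_caller"` for `caller_function`; `inspect.stack()` exposes the same data as a list of frames if you need more than one level.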
|
<python><function><parameters><filenames><signature>
|
2024-07-13 21:04:18
| 2
| 506
|
JamesC
|
78,744,970
| 10,749,925
|
How do I get my Python .env credentials file into Ubuntu EC2?
|
<p>I have a .env file which my app.py file references, containing my AWS credentials. How do I get this into my AWS Ubuntu EC2 machine?</p>
<p>I can't even upload it to GitHub.</p>
<p>If there are any security issues with this, what's the simplest workaround strategy you use? Worst case I could hardcode it, but I don't think that's ideal.</p>
|
<python><amazon-web-services><ubuntu><amazon-s3><amazon-ec2>
|
2024-07-13 20:57:12
| 1
| 463
|
chai86
|
78,744,860
| 4,611,374
|
Using pandas.concat along Axis 1 returns a Concatenation along Axis 0
|
<p>I am trying to horizontally concatenate a pair of data frames with identical indices, but the result is always a vertical concatenation with NaN values inserted into every column.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

dct_l = {'1':'a', '2':'b', '3':'c', '4':'d'}
df_l = pd.DataFrame.from_dict(dct_l, orient='index', columns=['Key'])
dummy = np.zeros((4,3))
index = np.arange(1,5)
columns = ['POW', 'KLA','CSE']
df_e = pd.DataFrame(dummy, index, columns)
print(df_l)
</code></pre>
<pre><code> Key
1 a
2 b
3 c
4 d
</code></pre>
<pre class="lang-py prettyprint-override"><code>print(df_e)
</code></pre>
<pre><code> POW KLA CSE
1 0.0 0.0 0.0
2 0.0 0.0 0.0
3 0.0 0.0 0.0
4 0.0 0.0 0.0
</code></pre>
<pre class="lang-py prettyprint-override"><code>pd.concat([df_l, df_e], axis=1)
</code></pre>
<p><strong>Actual Result</strong></p>
<pre><code> Key POW KLA CSE
1 a NaN NaN NaN
2 b NaN NaN NaN
3 c NaN NaN NaN
4 d NaN NaN NaN
1 NaN 0.0 0.0 0.0
2 NaN 0.0 0.0 0.0
3 NaN 0.0 0.0 0.0
4 NaN 0.0 0.0 0.0
</code></pre>
<p><strong>Expected Result</strong></p>
<pre><code> Key POW KLA CSE
1 a 0.0 0.0 0.0
2 b 0.0 0.0 0.0
3 c 0.0 0.0 0.0
4 d 0.0 0.0 0.0
</code></pre>
<p>What is happening here?</p>
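What's most likely happening: `from_dict(..., orient='index')` uses the dict keys, which are *strings*, as the index, while `df_e` has an *integer* index, and `concat(axis=1)` aligns by label, so `'1'` and `1` are treated as eight different labels. A sketch of the fix, assuming that diagnosis:

```python
import numpy as np
import pandas as pd

dct_l = {'1': 'a', '2': 'b', '3': 'c', '4': 'd'}
df_l = pd.DataFrame.from_dict(dct_l, orient='index', columns=['Key'])
df_e = pd.DataFrame(np.zeros((4, 3)), index=np.arange(1, 5),
                    columns=['POW', 'KLA', 'CSE'])

# df_l.index holds the strings '1'..'4'; df_e.index holds the ints 1..4.
# Cast one of them so the labels actually match before concatenating.
df_l.index = df_l.index.astype(int)
out = pd.concat([df_l, df_e], axis=1)
print(out.shape)  # (4, 4)
```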
|
<python><pandas><dataframe><concatenation>
|
2024-07-13 20:06:21
| 1
| 309
|
RedHand
|
78,744,758
| 3,347,814
|
How do I get Lead Info from Facebook API using Python?
|
<p>I want to get lead info from my lead ads using Python and the Facebook API.</p>
<p>I have tried this:</p>
<pre><code>import json
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.ad import Ad  # needed for Ad.Field below
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.leadgenform import LeadgenForm
from facebook_business.adobjects.lead import Lead

access_token = '<my_token>'
ad_account_id = 'act_my_ad_acount'

FacebookAdsApi.init(access_token=access_token)
ad_account = AdAccount(ad_account_id)
ads = ad_account.get_ads(fields=[
    Ad.Field.id,
    Ad.Field.name,
])

leads = []
for ad in ads:
    lead = ad.get_leads(fields=[
        Lead.Field.id,
        Lead.Field.field_data
    ])
    leads.extend(lead)
print(leads)
</code></pre>
<p>However it breaks due to this error:</p>
<pre><code>There have been too many calls from this ad-account. Please wait a bit and try again.
</code></pre>
<p>I understand that I should be doing some kind of batch call. But I found the <a href="https://github.com/facebook/facebook-python-business-sdk" rel="nofollow noreferrer">documentation</a> too hard to understand. It took me three days just to get the code to list the ads in my account.</p>
<p>Could someone, please help me?</p>
<p>My end goal is to retrieve the information that the users sent on the lead-form ads, i.e. their name, telephone, etc.</p>
|
<python><facebook-graph-api><facebook-python-business-sdk>
|
2024-07-13 19:12:10
| 1
| 1,143
|
user3347814
|
78,744,671
| 2,401,856
|
Creating a local proxy server in python
|
<p>I have two computers <code>computer 1</code> and <code>computer 2</code> on the same network. I'm trying to make <code>computer 1</code> act as a proxy server so when I send a request from <code>computer 2</code> using its ip address as a proxy, the request will pass through <code>computer 1</code>.</p>
<p>After trying multiple approaches, this is my code on the client side:</p>
<pre><code>import requests
# Proxy server settings
PROXY_HOST = '192.168.1.112'
PROXY_PORT = 8888
url = 'https://www.google.com'
proxies = {
    'http': f'http://{PROXY_HOST}:{PROXY_PORT}',
    'https': f'http://{PROXY_HOST}:{PROXY_PORT}'
}
try:
    response = requests.get(url, proxies=proxies)
    print("Response from server:")
    print(response.content.decode('utf-8'))
except requests.exceptions.RequestException as e:
    print(f"Error: {e}")
</code></pre>
<p>And this is the code on my server side (<code>computer 1</code>):</p>
<pre><code>import socket
import threading
import ssl
LOCAL_HOST = '0.0.0.0'
PROXY_PORT = 8888
def handle_client(client_socket):
    request = client_socket.recv(4096)
    print(f"Received request from client: {request}")
    first_line = request.split(b'\n')[0]
    method = first_line.split()[0]
    if method == b'CONNECT':
        handle_https(client_socket, request)
    else:
        handle_http(client_socket, request)

def handle_https(client_socket, request):
    # Extract the host and port from the CONNECT request
    first_line = request.split(b'\n')[0]
    host, port = first_line.split()[1].split(b':')
    port = int(port)
    remote_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    remote_socket.connect((host.decode('utf-8'), port))
    client_socket.send(b'HTTP/1.1 200 OK\n\n')
    context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    client_ssl = context.wrap_socket(client_socket, server_side=True)
    remote_ssl = context.wrap_socket(remote_socket, server_hostname=host.decode('utf-8'))
    forward_data(client_ssl, remote_ssl)

def handle_http(client_socket, request):
    remote_host = 'www.example.com'
    remote_port = 80
    remote_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    remote_socket.connect((remote_host, remote_port))
    remote_socket.send(request)
    remote_response = remote_socket.recv(4096)
    print(f"Received response from remote server: {remote_response}")
    client_socket.send(remote_response)
    remote_socket.close()
    client_socket.close()

def forward_data(sock1, sock2):
    sockets = [sock1, sock2]
    while True:
        for sock in sockets.copy():
            try:
                data = sock.recv(4096)
                if data:
                    other_sock = sock2 if sock == sock1 else sock1
                    other_sock.sendall(data)
                else:
                    sock.close()
                    sockets.remove(sock)
            except Exception as e:
                print(f"Error: {e}")
                sock.close()
                sockets.remove(sock)

def start_proxy_server():
    proxy_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    proxy_socket.bind((LOCAL_HOST, PROXY_PORT))
    proxy_socket.listen(5)
    print(f"Proxy server listening on port {PROXY_PORT}...")
    while True:
        client_socket, addr = proxy_socket.accept()
        print(f"Accepted connection from {addr[0]}:{addr[1]}")
        # Handle client request in a separate thread
        client_handler = threading.Thread(target=handle_client, args=(client_socket,))
        client_handler.start()

if __name__ == '__main__':
    start_proxy_server()
</code></pre>
<p>If I send an HTTP request, that works well; but when sending an HTTPS request I get this error on the server side during the handshake:</p>
<blockquote>
<p>ssl.SSLError: [SSL: NO_SHARED_CIPHER] no shared cipher (_ssl.c:1000)</p>
</blockquote>
<p>I tried to add this to my code but it didn't help:</p>
<pre><code>cipher = 'DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-GCM-SHA256'
context.set_ciphers(cipher)
</code></pre>
|
<python><ssl><proxy>
|
2024-07-13 18:31:33
| 1
| 620
|
user2401856
|
78,744,539
| 11,420,295
|
Understanding sync_to_async method in Django?
|
<p>I have never worked with asyncio and/or asynchronous methods with Django and am having a little difficulty understanding.</p>
<p>I am trying to convert a synchronous utility function (create_email_record) into an asynchronous function inside of a form method.</p>
<p>I will minimize some code for better understanding. My form method (begin_processing) has a good amount of logic inside.</p>
<pre><code>def create_email_record(submission):
    print("-----CREATE EMAIL RECORD-------")
    creation = EmailRecord.objects.create()


class Form(forms.Form):
    comments = forms.TextArea()

    def begin_processing():
        submission = Submission.objects.get(id=1)
        print("----------BEGIN ASYNC--------")
        create_email_notification = sync_to_async(create_email_record)(submission)
        asyncio.run(create_email_notification)
        print("----------END ASYNC----------")
</code></pre>
<p>When I print this to my terminal I would think to expect:</p>
<p>("----------BEGIN ASYNC--------")<br>
("----------END ASYNC----------")<br>
("-----CREATE EMAIL RECORD-------")<br></p>
<p>What I receive is:</p>
<p>("----------BEGIN ASYNC--------")<br>
("-----CREATE EMAIL RECORD-------")<br>
("----------END ASYNC----------")<br></p>
<p>My Object gets created, and seems to work, but I don't believe that my converted sync_to_async function is being converted/called correctly.</p>
<p>Maybe I am having some trouble understanding, but what I want to do is call an asynchronous function from a synchronous function/method. I have read a lot on other posts, and online resources, but none seem to fit what I am looking to do. I have not seen this done inside of a form method.</p>
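For what it's worth, the observed order is the expected one: `asyncio.run()` is a *blocking* call that does not return until the coroutine it was given has finished, so "END ASYNC" can only print after the record is created. A tiny standalone illustration (plain asyncio, no Django required; names are stand-ins):

```python
import asyncio

order = []

async def create_record():
    # Stands in for sync_to_async(create_email_record)(submission)
    order.append("CREATE EMAIL RECORD")

def begin_processing():
    order.append("BEGIN ASYNC")
    asyncio.run(create_record())  # blocks until the coroutine completes
    order.append("END ASYNC")

begin_processing()
print(order)  # ['BEGIN ASYNC', 'CREATE EMAIL RECORD', 'END ASYNC']
```

To actually run work in the background from synchronous code you would need a running event loop elsewhere (e.g. `asyncio.run_coroutine_threadsafe` against a loop in another thread) or a task queue; `asyncio.run` by itself never gives fire-and-forget behaviour.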
|
<python><django><asynchronous><django-forms><python-asyncio>
|
2024-07-13 17:36:41
| 0
| 415
|
master_j02
|
78,744,469
| 11,748,924
|
Numpy: get a mask of false positives from the given two vectors y_true and y_pred
|
<p>Given three classes (5, 6, 7) and two arrays:</p>
<pre><code>y_true = np.array([5,6,7,5])
y_pred = np.array([5,7,7,5])
</code></pre>
<p>Since the second element is a false positive, how do I return one-hot encoded false-positive arrays like this?</p>
<pre><code>y_falsep_class5: [0,0,0,0]
y_falsep_class6: [0,0,0,0]
y_falsep_class7: [0,1,0,0]
</code></pre>
<p>So the returned array will have dimension (3,4), where 3 is the number of classes and 4 is the length of the vector.</p>
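One vectorized way to build this, as a sketch: a false positive for class c means the prediction is c but the ground truth is something else, which can be expressed with broadcasting.

```python
import numpy as np

y_true = np.array([5, 6, 7, 5])
y_pred = np.array([5, 7, 7, 5])
classes = np.array([5, 6, 7])

# Row i marks positions where class i was predicted but the truth differs.
fp = ((y_pred[None, :] == classes[:, None]) & (y_true != y_pred)[None, :]).astype(int)
print(fp)
# [[0 0 0 0]
#  [0 0 0 0]
#  [0 1 0 0]]
```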
|
<python><numpy>
|
2024-07-13 16:58:17
| 2
| 1,252
|
Muhammad Ikhwan Perwira
|
78,744,437
| 6,173,214
|
Create an LSTM model based on multiple array events of random numbers
|
<p>I need help creating an LSTM model based on a complex data structure.</p>
<p>I have a multidimensional array. It is an array of over 4000 arrays, each of which consists of six numbers (from 1 to 128).
I have found a partial prediction feature.</p>
<p>Suppose I select a group of 70 arrays [0..69], and I want to predict the 71st sequence. That is practically impossible. But if I could know the numbers in the 71st sequence, I would exclude all sequences from 0 to 69 that contain at least one of the numbers in the 71st sequence.</p>
<p>Then, I take all numbers in the remaining sequences once (without duplicates), and I put them into an array named Exclusion.</p>
<p>If I exclude all numbers in Exclusion from 1 to 128, I get a reduced group of numbers (from 9 to 14 elements). This last reduced array surely contains the numbers of the 71st sequence.</p>
<p>Now this model is replicable for all sequences: [1..70] for the 72nd, [2..71] for the 73rd, and so on, for all sequences.</p>
<p>So, I cannot understand how to create an LSTM model where I need to input a [70, 6] with an output of [70, 1], where [70, 6] is the 70 sequences of 6 numbers, and [70, 1] is the 70 values between 1 and 0, where 1 represents a sequence to exclude.</p>
<p>I need globally to predict which sequences could be excluded.</p>
|
<python><multidimensional-array><lstm>
|
2024-07-13 16:39:37
| 1
| 393
|
Klode
|
78,744,422
| 11,032,590
|
Issues with Creating a Conda Environment from environment.yml
|
<p>I'm trying to create an environment from the environment.yml file I found <a href="https://github.com/liusiyan/UQnet/blob/master/environment.yml" rel="nofollow noreferrer">here</a>. I run the command:</p>
<pre><code>conda env create -f environment.yml
</code></pre>
<p>While everything seems fine up to a certain point, it then terminates with errors. I have also tried creating the environment through Anaconda Navigator. I have uninstalled Anaconda, deleted temporary files and folders, and reinstalled Anaconda. I did everything from scratch again, and still, it has issues. Does anyone know what's going on?</p>
<pre><code>Channels:
- defaults
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: failed
Channels:
- defaults
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: failed
LibMambaUnsatisfiableError: Encountered problems while solving:
- nothing provides requested wurlitzer 2.1.0**
- nothing provides requested unixodbc 2.3.9**
- nothing provides requested secretstorage 3.3.1**
- nothing provides requested readline 8.1**
- nothing provides requested patchelf 0.12**
- nothing provides requested pango 1.45.3**
- nothing provides requested ncurses 6.2**
- nothing provides requested libxcb 1.14**
- nothing provides requested libuuid 1.0.3**
- nothing provides requested libtool 2.4.6**
- nothing provides requested libstdcxx-ng 9.1.0**
- nothing provides requested libgfortran-ng 7.3.0**
- nothing provides requested libgcc-ng 9.1.0**
- nothing provides requested libffi 3.3**
- nothing provides requested libev 4.33**
- nothing provides requested libedit 3.1.20210216**
- nothing provides requested ld_impl_linux-64 2.33.1**
- nothing provides requested jbig 2.1**
- nothing provides requested harfbuzz 2.8.0**
- nothing provides requested gstreamer 1.14.0**
- nothing provides requested gst-plugins-base 1.14.0**
- nothing provides requested gmp 6.2.1**
- nothing provides requested glib 2.68.1**
- nothing provides requested fontconfig 2.13.1**
- nothing provides requested dbus 1.13.18**
- package pyzmq-20.0.0-py36hd77b12b_1 requires zeromq >=4.3.3,<4.3.4.0a0, but none of the providers can be installed
- package libglib-2.78.4-ha17d25a_0 requires zlib >=1.2.13,<1.3.0a0, but none of the providers can be installed
Could not solve for environment specs
The following packages are incompatible
├─ cairo 1.16.0** is installable with the potential options
│  ├─ cairo 1.16.0 would require
│  │  └─ glib >=2.69.1,<3.0a0 with the potential options
│  │     ├─ glib 2.78.4 would require
│  │     │  └─ libglib 2.78.4 ha17d25a_0, which requires
│  │     │     └─ zlib >=1.2.13,<1.3.0a0 , which can be installed;
│  │     └─ glib 2.69.1 would require
│  │        └─ pcre >=8.45,<9.0a0 , which can be installed;
│  └─ cairo 1.16.0 would require
│     └─ fontconfig >=2.14.1,<3.0a0 , which requires
│        └─ expat >=2.4.9,<3.0a0 , which can be installed;
├─ dbus 1.13.18** does not exist (perhaps a typo or a missing channel);
├─ expat 2.3.0** is not installable because it conflicts with any installable versions previously reported;
├─ fontconfig 2.13.1** does not exist (perhaps a typo or a missing channel);
├─ glib 2.68.1** does not exist (perhaps a typo or a missing channel);
├─ gmp 6.2.1** does not exist (perhaps a typo or a missing channel);
├─ gst-plugins-base 1.14.0** does not exist (perhaps a typo or a missing channel);
├─ gstreamer 1.14.0** does not exist (perhaps a typo or a missing channel);
├─ harfbuzz 2.8.0** does not exist (perhaps a typo or a missing channel);
├─ jbig 2.1** does not exist (perhaps a typo or a missing channel);
├─ ld_impl_linux-64 2.33.1** does not exist (perhaps a typo or a missing channel);
├─ libedit 3.1.20210216** does not exist (perhaps a typo or a missing channel);
├─ libev 4.33** does not exist (perhaps a typo or a missing channel);
├─ libffi 3.3** does not exist (perhaps a typo or a missing channel);
├─ libgcc-ng 9.1.0** does not exist (perhaps a typo or a missing channel);
├─ libgfortran-ng 7.3.0** does not exist (perhaps a typo or a missing channel);
├─ libstdcxx-ng 9.1.0** does not exist (perhaps a typo or a missing channel);
├─ libtool 2.4.6** does not exist (perhaps a typo or a missing channel);
├─ libuuid 1.0.3** does not exist (perhaps a typo or a missing channel);
├─ libxcb 1.14** does not exist (perhaps a typo or a missing channel);
├─ ncurses 6.2** does not exist (perhaps a typo or a missing channel);
├─ pango 1.45.3** does not exist (perhaps a typo or a missing channel);
├─ patchelf 0.12** does not exist (perhaps a typo or a missing channel);
├─ pcre 8.44** is not installable because it conflicts with any installable versions previously reported;
├─ pyzmq 20.0.0** is installable and it requires
│  └─ zeromq >=4.3.3,<4.3.4.0a0 , which can be installed;
├─ readline 8.1** does not exist (perhaps a typo or a missing channel);
├─ secretstorage 3.3.1** does not exist (perhaps a typo or a missing channel);
├─ unixodbc 2.3.9** does not exist (perhaps a typo or a missing channel);
├─ wurlitzer 2.1.0** does not exist (perhaps a typo or a missing channel);
├─ zeromq 4.3.4** is not installable because it conflicts with any installable versions previously reported;
└─ zlib 1.2.11** is not installable because it conflicts with any installable versions previously reported.
</code></pre>
|
<python><yaml>
|
2024-07-13 16:30:17
| 0
| 349
|
KΟΟΟΞ±Ο ΞΞΏΟδαΟ
|
78,744,397
| 726,730
|
Connect a server peer (dynamic IP address) with browser peers in JavaScript
|
<p>I want to make live IP video calls between a Python PyQt5 user and a browser (HTML5).</p>
<p>The JavaScript code I use for the client is:</p>
<pre class="lang-py prettyprint-override"><code>var port = 8080
var ip_address = "192.168.1.10"
var main_pc = {
    "name": "",
    "surname": "",
    "pc": null,
    "dc": null,
    "uid": null,
    "local_audio": null,
    "local_video": null,
    "remote_audio": null,
    "remote_video": null
};
var peer_connections = [];
var closing = false
var controller = null;
var signal;
var stop_time_out = null;
function start(name, surname) {
    $("#control_call_button").addClass("d-none");
    $("#stop_call_button").removeClass("d-none");
    $("#signal-audio").trigger("play");
    main_pc = createPeerConnection(main_pc);
    main_pc["name"] = name;
    main_pc["surname"] = surname;
    main_pc["dc"] = main_pc["pc"].createDataChannel('chat', {"ordered": true});
    main_pc["dc"].onmessage = function(evt) {
        data = JSON.parse(evt.data);
        if (data["type"] == "closing") {
            if (main_pc["uid"] == "uid") {
                stop_peer_connection();
            } else {
                //stop_client_peer_connection(data["uid"]);
            }
        }
        if (data["type"] == "uid") {
            uid = data["uid"];
            main_pc["uid"] = uid;
            console.log(main_pc);
        }
        if (data["type"] == "new-client") {
            var uid = data["uid"];
            var client_name = data["name"];
            var client_surname = data["surname"];
            console.log("New client:");
            console.log(uid);
            console.log(client_name);
            console.log(client_surname);
            //start_client(uid, client_name, client_surname);
        }
    };
    main_pc["pc"].onconnectionstatechange = (event) => {
        let newCS = main_pc["pc"].connectionState;
        if (newCS == "disconnected" || newCS == "failed" || newCS == "closed") {
            stop_time_out = setTimeout(stop_with_time_out, 7000);
        } else {
            if (stop_time_out != null) {
                clearTimeout(stop_time_out);
                stop_time_out = null;
            }
        }
    }
    main_pc["pc"].onclose = function() {
        closing = true;
        stop_peer_connection();
        // close data channel
        if (main_pc["dc"]) {
            main_pc["dc"].close();
        }
        // close local audio / video
        main_pc["pc"].getSenders().forEach(function(sender) {
            sender.track.stop();
        });
        // close transceivers
        if (main_pc["pc"].getTransceivers) {
            main_pc["pc"].getTransceivers().forEach(function(transceiver) {
                if (transceiver.stop) {
                    transceiver.stop();
                }
            });
        }
        main_pc["pc"] = null;
        $("#control_call_button").removeClass("d-none");
        $("#stop_call_button").addClass("d-none");
    };
    constraints = {audio: true, video: true};
    navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
        stream.getTracks().forEach(function(track) {
            try {
                main_pc["pc"].addTrack(track, stream);
                if (track.kind == "video") {
                    //correct
                    main_pc["local_video"] = stream;
                    document.getElementById('client-video-1').srcObject = stream;
                } else {
                    main_pc["local_audio"] = stream;
                }
            } catch (e) {
            }
        });
        return negotiate();
    }, function(err) {
        alert('Could not acquire media: ' + err);
    });
}
function createPeerConnection(pc) {
    var config = {
        sdpSemantics: 'unified-plan'
    };
    config.iceServers = [{ urls: ['stun:stun.l.google.com:19302'] }];
    pc["pc"] = new RTCPeerConnection(config);
    // connect audio
    pc["pc"].addEventListener('track', function(evt) {
        if (evt.track.kind == 'audio') {
            $("#signal-audio").trigger("pause");
            $("#signal-audio").currentTime = 0; // Reset time
            document.getElementById('server-audio').srcObject = evt.streams[0];
            $("#control_call_button").addClass("d-none");
            $("#stop_call_button").removeClass("d-none");
            pc["remote_audio"] = evt.streams[0];
        } else if (evt.track.kind == 'video') {
            document.getElementById('server-video').srcObject = evt.streams[0];
            pc["remote_video"] = evt.streams[0];
        }
    });
    return pc;
}
function negotiate() {
    return main_pc["pc"].createOffer({"offerToReceiveAudio": true, "offerToReceiveVideo": true}).then(function(offer) {
        return main_pc["pc"].setLocalDescription(offer);
    }).then(function() {
        // wait for ICE gathering to complete
        return new Promise(function(resolve) {
            if (main_pc["pc"].iceGatheringState === 'complete') {
                resolve();
            } else {
                function checkState() {
                    if (main_pc["pc"].iceGatheringState === 'complete') {
                        main_pc["pc"].removeEventListener('icegatheringstatechange', checkState);
                        resolve();
                    }
                }
                main_pc["pc"].addEventListener('icegatheringstatechange', checkState);
            }
        });
    }).then(function() {
        var offer = main_pc["pc"].localDescription;
        controller = new AbortController();
        signal = controller.signal;
        try {
            promise = timeoutPromise(60000, fetch('http://' + ip_address + ':' + port + '/offer', {
                body: JSON.stringify({
                    sdp: offer.sdp,
                    type: offer.type,
                    "name": name,
                    "surname": surname
                }),
                headers: {
                    'Content-Type': 'application/json'
                },
                method: 'POST',
                signal
            }));
            return promise;
        } catch (error) {
            console.log(error);
            stop_peer_connection();
        }
    }).then(function(response) {
        if (response.ok) {
            return response.json();
        } else {
            stop_peer_connection();
        }
    }).then(function(answer) {
        console.log(answer);
        if (answer.sdp == "" && answer.type == "") {
            stop_peer_connection();
            return null;
        } else {
            return main_pc["pc"].setRemoteDescription(answer);
        }
    }).catch(function(e) {
        console.log(e);
        stop_peer_connection();
        return null;
    });
}
function timeoutPromise(ms, promise) {
    return new Promise((resolve, reject) => {
        const timeoutId = setTimeout(() => {
            reject(new Error("promise timeout"))
        }, ms);
        promise.then(
            (res) => {
                clearTimeout(timeoutId);
                resolve(res);
            },
            (err) => {
                clearTimeout(timeoutId);
                reject(err);
            }
        );
    })
}
function stop_peer_connection(dc_message = true) {
    $("#signal-audio").trigger("pause");
    $("#signal-audio").currentTime = 0; // Reset time
    // send disconnect message because iceconnectionstate slow to go in failed or in closed state
    try {
        if (main_pc["dc"].readyState == "open") {
            if (dc_message) {
                main_pc["dc"].send(JSON.stringify({"type": "disconnected"}));
            }
        }
    } catch (e) {
    }
    try {
        if (main_pc["local_audio"] != null) {
            main_pc["local_audio"].getTracks().forEach(track => track.stop())
            main_pc["local_video"].getTracks().forEach(track => track.stop())
            main_pc["remote_audio"].getTracks().forEach(track => track.stop())
            main_pc["remote_video"].getTracks().forEach(track => track.stop())
            main_pc["local_audio"].stop();
            main_pc["local_video"].stop();
            main_pc["local_audio"] = null;
            main_pc["local_video"] = null;
            //main_pc["remote_audio"].stop();
            //main_pc["remote_video"].stop();
            //main_pc["remote_audio"] = null;
            //main_pc["remote_video"] = null;
        }
    } catch (e) {
    }
    document.getElementById('client-video-1').srcObject = null;
    document.getElementById('server-video').srcObject = null;
    document.getElementById('server-audio').srcObject = null;
    document.getElementById('client-audio-2').srcObject = null;
    document.getElementById('client-video-2').srcObject = null;
    document.getElementById('client-audio-3').srcObject = null;
    document.getElementById('client-video-3').srcObject = null;
    try {
        if (controller != null) {
            controller.abort();
        }
        if (main_pc["dc"].readyState != "open") {
            main_pc["pc"].close();
        }
    } catch (e) {
    }
    $("#control_call_button").removeClass("d-none")
    $("#stop_call_button").addClass("d-none")
}

function stop_with_time_out() {
    stop_peer_connection(false);
    stop_time_out = null;
}
$(document).ready(function() {
    $("#control_call_button").on("click", function() {
        name = $("#name").val();
        surname = $("#surname").val();
        $("#me-name").html(name + " " + surname);
        closing = false;
        controller = null;
        start(name, surname);
    });
    $("#stop_call_button").on("click", function() {
        closing = true;
        stop_peer_connection();
    });
})
</code></pre>
<p>As you can see, in the first two lines I declare the IP address of the server peer and the port the service will use.</p>
<p>But this IP is a local IP address.</p>
<p>I want the site to be populated with the new public address (via the requests module) when the Python user opens the PyQt5 app.</p>
<p>Then, if possible, I want the HTML5 user to connect with the Python user without any need for port forwarding (it's hard to change the router settings every time the user installs the app). I suppose this could be done with the ngrok Python API.</p>
<p>Am I right?</p>
<p>Is there something I am missing?</p>
<p>Is there any alternative solution (tunnelling) without cost and without the need for port forwarding?</p>
|
<python><portforwarding><aiortc>
|
2024-07-13 16:16:05
| 1
| 2,427
|
Chris P
|
78,744,276
| 1,418,326
|
How to use DataFrameMapper to delete rows with a null value in a specific column?
|
<p>I am using <code>sklearn-pandas.DataFrameMapper</code> to preprocess my data. I don't want to impute for a specific column. I just want to drop the row if this column is <code>Null</code>. Is there a way to do that?</p>
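As far as I know, `DataFrameMapper` transformers map values column-wise and cannot change the number of rows, so the drop has to happen on the DataFrame before the mapper runs. A sketch with plain pandas (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"target_col": [1.0, None, 3.0], "other": ["x", "y", "z"]})

# Drop rows where the specific column is null, then feed the result
# to DataFrameMapper / the rest of the pipeline.
df_clean = df.dropna(subset=["target_col"])
print(len(df_clean))  # 2
```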
|
<python><sklearn-pandas><sklearn2pmml>
|
2024-07-13 15:34:57
| 2
| 1,707
|
topcan5
|
78,744,194
| 818,209
|
Get aggregates for a dataframe with different combinations
|
<p>Total <code>pyspark</code> noob here. I have a dataframe similar to this:</p>
<pre><code>df = spark.createDataFrame([
Row(ttype='C', amt='12.99', dt='2024/01/01'),
Row(ttype='D', amt='21.99', dt='2024/02/15'),
Row(ttype='C', amt='16.99', dt='2024/01/21'),
])
</code></pre>
<p>I want to find the average <code>amt</code> for the last 30, 60 and 90 days for <code>ttype='C'</code> and <code>ttype='D'</code></p>
<p>I can do this one at a time:</p>
<pre><code># pseudo code
df.filter('ttype=C for the last 30 days').avg()
df.filter('ttype=D for the last 30 days').avg()
df.filter('ttype=C for the last 60 days').avg()
df.filter('ttype=D for the last 90 days').avg()
...
</code></pre>
<p>but I am looking for a more elegant way to do this.</p>
|
<python><apache-spark><pyspark><group-by>
|
2024-07-13 14:57:53
| 1
| 4,414
|
mithun_daa
|
78,743,993
| 4,599,564
|
File "<frozen runpy>", ModuleNotFoundError: No module named 'pip' on Windows
|
<p>Original title was: <em>Can't install pip on Windows Server 2019 Standard using Windows embeddable package for Python 3.12.4</em>
Edited to: <em>File "&lt;frozen runpy&gt;", ModuleNotFoundError: No module named 'pip' on Windows</em> to make it easier to find.</p>
<p>I'm using a Windows Server 2019 Standard with admin rights.</p>
<p>I had Python 3.11.0 portable under <code>C:\Portable\python</code> and I was using pip without problems.</p>
<p>I thought it was time to upgrade my Python environment, so I renamed <code>C:\Portable\python</code> to <code>C:\Portable\python311</code>, downloaded <code>https://www.python.org/ftp/python/3.12.4/python-3.12.4-embed-amd64.zip</code>, and unzipped its content to a new <code>C:\Portable\python</code> folder.</p>
<p>I tried to use pip, but pip was not included in this portable version.</p>
<p>I ran <code>curl -o C:\Portable\python\get-pip.py https://bootstrap.pypa.io/get-pip.py</code> to download <code>get-pip.py</code></p>
<p>I ran: <code>python get-pip.py</code> and got:</p>
<pre><code>Collecting pip
Using cached pip-24.1.2-py3-none-any.whl.metadata (3.6 kB)
Collecting setuptools
Using cached setuptools-70.3.0-py3-none-any.whl.metadata (5.8 kB)
Collecting wheel
Using cached wheel-0.43.0-py3-none-any.whl.metadata (2.2 kB)
Using cached pip-24.1.2-py3-none-any.whl (1.8 MB)
Using cached setuptools-70.3.0-py3-none-any.whl (931 kB)
Using cached wheel-0.43.0-py3-none-any.whl (65 kB)
Installing collected packages: wheel, setuptools, pip
Successfully installed pip-24.1.2 setuptools-70.3.0 wheel-0.43.0
</code></pre>
<p>Then I ran: <code>python.exe -m pip --version</code> and got:
<code>python.exe: No module named pip</code></p>
<p>If I go to <code>C:\Portable\python\Scripts</code>, I see these files:</p>
<pre><code>pip.exe
pip3.12.exe
pip3.exe
wheel.exe
</code></pre>
<p>If I run <code>C:\Portable\python\Scripts>pip</code> I get this error:</p>
<pre><code>C:\Portable\python\Scripts>pip
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Portable\python\Scripts\pip.exe\__main__.py", line 4, in <module>
ModuleNotFoundError: No module named 'pip'
</code></pre>
<p>I tried using <a href="https://www.python.org/ftp/python/3.12.4/python-3.12.4-embed-win32.zip" rel="nofollow noreferrer">https://www.python.org/ftp/python/3.12.4/python-3.12.4-embed-win32.zip</a> instead of <a href="https://www.python.org/ftp/python/3.12.4/python-3.12.4-embed-amd64.zip" rel="nofollow noreferrer">https://www.python.org/ftp/python/3.12.4/python-3.12.4-embed-amd64.zip</a>.</p>
<p>I don't want to use an 'installable' version on this machine.</p>
<p>If I delete the <code>C:\Portable\python</code> folder, copy <code>C:\Portable\python311</code> to it, go to <code>C:\Portable\python\Scripts</code>, remove all files, and run <code>python get-pip.py</code>, then pip installs perfectly and works without problems.</p>
<p>To be honest, I don't remember where I got my Python 3.11.0 version.</p>
<p>So, what else can I do?
Is the current <code>python-3.12.4-embed-amd64</code> portable version not usable under Windows Server 2019 Standard?
Do I need to take any additional steps before using it?
Is there another portable version that comes with pip already working?</p>
<p>P.S.: I checked some very similar questions with the same problems, but they were either not solved or too old.</p>
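One frequent cause with the embeddable package specifically: it ships a `python312._pth` file that pins `sys.path` and leaves `import site` commented out, so the `Lib\site-packages` directory that `get-pip.py` writes to is never searched, even though the `Scripts\pip.exe` launchers exist. Uncommenting that line is the usual fix; a sketch of the edited file:

```text
# C:\Portable\python\python312._pth (ships with the embeddable package)
python312.zip
.

# Uncomment to run site.main() automatically
import site
```

After removing the leading `#` from `import site` (shown above already uncommented), `python -m pip --version` should find the pip installed by `get-pip.py`.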
|
<python><windows><pip><windows-server-2019>
|
2024-07-13 13:43:16
| 2
| 1,251
|
Juan Antonio Tubío
|
78,743,822
| 1,422,096
|
Split a video with ffmpeg, without reencoding, at timestamps given in a txt file
|
<p>Let's say we have a video <code>input.mp4</code>, and a file <code>split.csv</code> containing:</p>
<pre><code>start;end;name
00:00:27.132;00:07:42.422;"Part A.mp4"
00:07:48.400;00:17:17.921;"Part B.mp4"
</code></pre>
<p>(or I could format the text file in any other format, but the timestamps must be hh:mm:ss.ddd)</p>
<p><strong>How to split the MP4 into different parts with the given start / end timestamps, and the given filename for each part?</strong></p>
<p>Is it possible directly with <code>ffmpeg</code>, and if not with a Python script?</p>
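ffmpeg itself reads only one interval per invocation, so a small Python driver that parses the CSV and emits one stream-copy command per row is a common approach. A sketch (note that with `-c copy` ffmpeg cuts at the nearest keyframes, so the actual cut points may shift slightly from the given timestamps):

```python
import csv
import io

def build_split_commands(csv_text, source="input.mp4"):
    """Build one ffmpeg stream-copy command per row of split.csv."""
    commands = []
    # The CSV uses ';' as delimiter; the csv module strips the quotes
    # around names like "Part A.mp4" automatically.
    for row in csv.DictReader(io.StringIO(csv_text), delimiter=";"):
        commands.append([
            "ffmpeg", "-ss", row["start"], "-to", row["end"],
            "-i", source, "-c", "copy", row["name"],
        ])
    return commands
```

Each returned command can then be executed with `subprocess.run(cmd, check=True)` after reading `split.csv` from disk; for frame-accurate cuts one would drop `-c copy` and re-encode instead.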
|
<python><ffmpeg><split>
|
2024-07-13 12:22:26
| 1
| 47,388
|
Basj
|
78,743,607
| 13,727,105
|
Socket can't find AF_UNIX attribute
|
<p>I'm using an Arch <strong>Linux</strong> machine and trying to run the following code from a Python file.</p>
<pre><code>import socket
import sys
if __name__ == "__main__":
    print(sys.platform)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
</code></pre>
<p>and it keeps telling me that</p>
<pre><code>AttributeError: module 'socket' has no attribute 'AF_UNIX'
</code></pre>
<h3>Things tried</h3>
<ul>
<li>Some posts claim this error occurs on Windows but obviously that isn't the case.
<ul>
<li><code>sys.platform</code> prints <code>linux</code></li>
</ul>
</li>
<li>Code works on my Mac which was running Python3.9
<ul>
<li>Downgraded from Python3.12 to Python3.9 on the Linux machine and still no luck</li>
</ul>
</li>
<li><code>socket.AF_INET</code> has the same issue</li>
<li>running <code>python -c "import socket; socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)"</code> returns the same error</li>
<li>error when using system binary python, conda python and venv python</li>
<li>however, running it in the interactive shell raises no error.</li>
</ul>
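<p>On Linux this error is most often caused by a local file named <code>socket.py</code> (or a stale <code>socket/</code> directory) shadowing the standard-library module — which would also explain why the interactive shell, started from a different working directory, is unaffected. A hedged diagnostic sketch:</p>

```python
import socket

# Where is the module actually coming from? If this prints a path inside
# your project directory rather than the standard library, a local
# socket.py is shadowing the real module.
print(getattr(socket, "__file__", "<built-in / frozen>"))

# The real stdlib module always exposes AF_INET, and AF_UNIX on Linux/macOS.
print(hasattr(socket, "AF_INET"), hasattr(socket, "AF_UNIX"))
```

<p>If the printed path points into the project, renaming that file (and deleting any cached <code>socket.pyc</code> under <code>__pycache__</code>) should restore the real module.</p>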
|
<python><linux><sockets>
|
2024-07-13 10:48:03
| 1
| 369
|
Praanto
|
78,743,539
| 2,783,767
|
How to open a PDF to a specific page and zoom in Python on macOS
|
<p>I am trying to programmatically open a PDF on macOS to a specific page and zoom level, but I am unable to do so.
Here is my code; can somebody please let me know what I need to change?</p>
<pre><code>import subprocess
import os
import traceback
def open_pdf_with_adobe(pdf_path, page_num, zoom):
try:
# Ensure the PDF path is absolute
pdf_path = os.path.abspath(pdf_path)
# Path to Adobe Acrobat Reader on macOS
path_to_acrobat = "/Applications/Adobe Acrobat Reader.app/Contents/MacOS/AdobeReader"
# Construct the command
command = [
path_to_acrobat,
pdf_path,
"/A",
f"page={page_num}&zoom={zoom}"
]
# Execute the command
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
process.wait()
except:
traceback.print_exc()
# Example usage
pdf_path = "/Users/granth.jain/Desktop/Work/Wi-Fi CERTIFIED n Test Plan v2.21_0.pdf"
page_num = 12 # Adjust the page number as needed
zoom = 100 # Adjust the zoom level as needed
open_pdf_with_adobe(pdf_path, page_num, zoom)
</code></pre>
<p>The correct PDF opens, but it does not go to the page number and zoom level that I specified.</p>
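<p>As far as I know, the <code>/A "page=N&amp;zoom=Z"</code> switch is an Acrobat command-line feature on Windows, and the macOS binary ignores it — which would explain why the file opens but the page/zoom arguments do nothing. One hedged workaround that doesn't depend on Acrobat at all is to open the file through a <code>file://</code> URL with the standard PDF open-parameter fragment <code>#page=N&amp;zoom=Z</code>, which browser-based PDF viewers generally honor (the path below is just a placeholder):</p>

```python
import pathlib
import webbrowser

def pdf_url(pdf_path, page_num, zoom):
    # Build a file:// URL with "PDF open parameters" in the fragment.
    # Viewers that implement them (most browser PDF plugins) will jump
    # to the requested page and zoom level.
    path = pathlib.Path(pdf_path).resolve()
    return path.as_uri() + f"#page={page_num}&zoom={zoom}"

url = pdf_url("/tmp/example.pdf", 12, 100)
print(url)
# webbrowser.open(url)  # uncomment to actually open it
```

<p>Whether the fragment is honored depends on the viewer the URL opens in, so this is a sketch rather than a guaranteed fix.</p>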
|
<python><macos><pdf><adobe>
|
2024-07-13 10:25:18
| 1
| 394
|
Granth
|
78,743,275
| 3,486,684
|
How do I pretty print the source code of functions contained in a class instance's attributes?
|
<p>I would like to use <a href="https://pypi.org/project/rich/" rel="nofollow noreferrer"><code>rich</code></a> to pretty print a Python class like so:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
import inspect
from typing import Any, Callable
from rich import print as rprint
import rich.repr
from rich.syntax import Syntax
@dataclass
class MyClass:
fs: list[Callable[..., Any]]
def __rich_repr__(self) -> rich.repr.Result:
for f in self.fs:
yield (
f.__name__,
rprint(
Syntax(
inspect.getsource(f), lexer="python", theme="lightbulb"
),
),
)
def g(x: int, y: str) -> list[str]:
return [y] * x
def h(y: str, zs: list[str]) -> str:
return " ".join([y + z for z in zs])
my_class = MyClass([g, h])
rprint(my_class)
</code></pre>
<p>Which produces as output, roughly:</p>
<pre><code>def g(x: int, y: str) -> list[str]:
return [y] * x
def h(y: str, zs: list[str]) -> str:
return " ".join([y + z for z in zs])
MyClass(g=None, h=None)
</code></pre>
<p>How do I get <code>rich</code> to pretty print the source strings of the functions? I figured that <a href="https://rich.readthedocs.io/en/stable/pretty.html#pretty-renderable" rel="nofollow noreferrer"><code>Pretty</code></a> has something to do with it, but couldn't quite get it to work.</p>
<p><strong>EDIT:</strong> some more experimentation suggests that <code>Syntax</code> is also relevant. I have updated my example code to show something that prints something more like what I'd like, but it's not there yet. Essentially, what's happening is that <code>rprint</code> outputs the source code as I'd like, but the rich representation of <code>MyClass</code> ends up getting <code>None</code>s as the result of the call to <code>rprint</code>. So <code>rprint</code> is clearly not the right function to use there.</p>
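<p>Setting the rich-specific wiring aside, the source-retrieval half of this works with the stdlib alone; a hedged, rich-free sketch of just that part (note <code>inspect.getsource</code> needs the functions to be defined in a real file — it raises <code>OSError</code> for code typed into a bare REPL):</p>

```python
import inspect

class SourcePrinter:
    """Hold callables and expose their source text (stdlib only)."""

    def __init__(self, fs):
        self.fs = fs

    def sources(self):
        # Map each function's name to its source code string.
        return {f.__name__: inspect.getsource(f) for f in self.fs}

def g(x: int, y: str) -> list[str]:
    return [y] * x

sp = SourcePrinter([g])
print(sp.sources()["g"])
```

<p>These strings are what would then be handed to rich's <code>Syntax</code> renderable for highlighting, rather than calling <code>rprint</code> (which returns <code>None</code>) inside <code>__rich_repr__</code>.</p>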
|
<python><rich>
|
2024-07-13 08:24:49
| 1
| 4,654
|
bzm3r
|
78,743,265
| 10,186,547
|
WebSocket from ESP32 to Python too slow
|
<p>I am working on a project where I have a Python server that's connected to an ESP32-CAM (AI Thinker) via a WebSocket.</p>
<p>The ESP32 cam constantly takes frames and sends them to the server, where they are being streamed to the user.</p>
<p>Server code:</p>
<pre><code>from fastapi import FastAPI, WebSocket
from fastapi.responses import StreamingResponse
import uvicorn
import asyncio
app = FastAPI()
queue = asyncio.Queue()
async def stream():
while True:
img_bytes = await queue.get()
yield (b'--frame\r\n'
b'Content-Type: image/jpeg\r\n\r\n' + img_bytes + b'\r\n\r\n')
@app.get("/")
async def root():
return StreamingResponse(
stream(), media_type="multipart/x-mixed-replace;boundary=frame"
)
@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
await websocket.accept()
while True:
res = await websocket.receive()
if "bytes" not in res.keys():
continue
await queue.put(res["bytes"])
print(f"updated queue, queue size is {queue.qsize()}")
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=30000)
</code></pre>
<p>client code</p>
<pre><code>#include <Arduino.h>
#include <WiFi.h>
#include "esp_camera.h"
#include "esp_websocket_client.h"
#define PWDN_GPIO_NUM 32
#define RESET_GPIO_NUM -1
#define XCLK_GPIO_NUM 0
#define SIOD_GPIO_NUM 26
#define SIOC_GPIO_NUM 27
#define Y9_GPIO_NUM 35
#define Y8_GPIO_NUM 34
#define Y7_GPIO_NUM 39
#define Y6_GPIO_NUM 36
#define Y5_GPIO_NUM 21
#define Y4_GPIO_NUM 19
#define Y3_GPIO_NUM 18
#define Y2_GPIO_NUM 5
#define VSYNC_GPIO_NUM 25
#define HREF_GPIO_NUM 23
#define PCLK_GPIO_NUM 22
const char *ssid = "usrname";
const char *password = "password";
esp_websocket_client_handle_t client;
void initSerialMonitor() {
Serial.begin(115200);
Serial.setDebugOutput(true);
Serial.println("Serial monitor initied");
}
void initWiFi() {
WiFi.mode(WIFI_STA);
WiFi.begin(ssid, password);
Serial.print("Connecting to WiFi ..");
while (WiFi.status() != WL_CONNECTED) {
Serial.print('.');
delay(1000);
}
Serial.printf("\nWifi initied ");
Serial.println(WiFi.localIP());
}
void initCamera() {
camera_config_t config;
config.ledc_channel = LEDC_CHANNEL_0;
config.ledc_timer = LEDC_TIMER_0;
config.pin_d0 = Y2_GPIO_NUM;
config.pin_d1 = Y3_GPIO_NUM;
config.pin_d2 = Y4_GPIO_NUM;
config.pin_d3 = Y5_GPIO_NUM;
config.pin_d4 = Y6_GPIO_NUM;
config.pin_d5 = Y7_GPIO_NUM;
config.pin_d6 = Y8_GPIO_NUM;
config.pin_d7 = Y9_GPIO_NUM;
config.pin_xclk = XCLK_GPIO_NUM;
config.pin_pclk = PCLK_GPIO_NUM;
config.pin_vsync = VSYNC_GPIO_NUM;
config.pin_href = HREF_GPIO_NUM;
config.pin_sccb_sda = SIOD_GPIO_NUM;
config.pin_sccb_scl = SIOC_GPIO_NUM;
config.pin_pwdn = PWDN_GPIO_NUM;
config.pin_reset = RESET_GPIO_NUM;
config.xclk_freq_hz = 20000000;
config.frame_size = FRAMESIZE_UXGA;
config.pixel_format = PIXFORMAT_JPEG;
config.grab_mode = CAMERA_GRAB_WHEN_EMPTY;
config.fb_location = CAMERA_FB_IN_PSRAM;
config.jpeg_quality = 12;
config.fb_count = 1;
esp_err_t err = esp_camera_init(&config);
if (err != ESP_OK) {
Serial.printf("Camera init failed with error 0x%x", err);
return;
}
Serial.println("Camera initied");
}
static void websocket_event_handler(void *handler_args, esp_event_base_t base, int32_t event_id, void *event_data)
{
esp_websocket_event_data_t *data = (esp_websocket_event_data_t *)event_data;
switch (event_id) {
case WEBSOCKET_EVENT_CONNECTED:
Serial.println("Websocket connected");
break;
case WEBSOCKET_EVENT_DATA:
Serial.print("Websocket got data: ");
Serial.println((char *)data->data_ptr);
break;
}
}
void initWebsocket() {
esp_websocket_client_config_t websocket_cfg = {};
websocket_cfg.uri = "ws://172.20.10.2:30000/ws";
client = esp_websocket_client_init(&websocket_cfg);
esp_websocket_register_events(client, WEBSOCKET_EVENT_ANY, websocket_event_handler, (void *)client);
esp_websocket_client_start(client);
}
void setup() {
initSerialMonitor();
initCamera();
initWiFi();
initWebsocket();
Serial.println("Finish setup");
}
void loop() {
camera_fb_t *pic = esp_camera_fb_get();
esp_websocket_client_send(client, (char*)pic->buf, pic->len, portMAX_DELAY);
Serial.printf("got picture with len %d\n", pic->len);
esp_camera_fb_return(pic);
delay(10);
}
</code></pre>
<p>Everything works fine, but the function <code>esp_websocket_client_send</code> is way too slow and can take up to a second.</p>
<p>How can I make the websocket faster, and are they even right for this use case?</p>
|
<python><c++><websocket><esp32>
|
2024-07-13 08:19:57
| 0
| 312
|
yardenK
|
78,743,228
| 14,194,418
|
SERVER TIMESTAMP for Realtime Database in Cloud Functions (Python)
|
<p>To get the SERVER_TIMESTAMP for Firestore, it is:</p>
<pre><code>from firebase_admin import firestore
...
{"createdAt": firestore.firestore.SERVER_TIMESTAMP}
</code></pre>
<p>but how do you get the server timestamp for the Realtime Database?</p>
<pre><code>from firebase_admin import db
@https_fn.on_call()
def f1(req: https_fn.CallableRequest):
...
db.reference(f"users/{userId}").set({"createdAt": # SERVER TIMESTAMP refrence ?? })
</code></pre>
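<p>For reference, the Realtime Database doesn't expose a constant in the Admin SDK the way Firestore does; instead the database itself understands the server-value placeholder <code>{".sv": "timestamp"}</code>, which it replaces with the server's epoch milliseconds at write time. A hedged sketch of the payload shape (the <code>users/{userId}</code> path is from the question, and the constant name below is my own):</p>

```python
# Realtime Database server-side timestamp placeholder. When this dict is
# written, the database substitutes the server's time in epoch milliseconds.
SERVER_TIMESTAMP = {".sv": "timestamp"}

payload = {"createdAt": SERVER_TIMESTAMP}
print(payload)

# Inside the question's function this would be used as:
# db.reference(f"users/{userId}").set(payload)
```
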
|
<python><firebase><firebase-realtime-database><google-cloud-functions>
|
2024-07-13 07:58:29
| 0
| 2,551
|
Ibrahim Ali
|
78,743,190
| 43,118
|
How to type-hint Python function that takes tuple of type constructors?
|
<p>I can write a function like</p>
<pre><code>async def par_all(fns: tuple[Awaitable[T], Awaitable[U]]) -> tuple[T,U]: ...
</code></pre>
<p>But how do I extend this to take tuples of any size, without losing type info? (And ideally not <code>*fns</code>, but I'm fine if it needs to be that.)</p>
<p>Tried something like this but it doesn't quite work, at least not with pyright.</p>
<pre><code>from typing import Awaitable, TypeVarTuple, Unpack
Ts = TypeVarTuple('Ts')
async def par_all(fns: tuple[Awaitable[Unpack[Ts]], ...]) -> tuple[Unpack[Ts]]:
return await asyncio.gather(*fns)
import asyncio
async def foo() -> int:
await asyncio.sleep(1)
return 1
async def bar() -> str:
await asyncio.sleep(1)
return "hello"
async def baz() -> float:
await asyncio.sleep(1)
return 3.14
async def main():
result = await par_all((foo(), bar(), baz()))
print(result) # (1, "hello", 3.14)
# The type of result should be tuple[int, str, float], but no luck!
asyncio.run(main())
</code></pre>
<p>(And additionally, what's the best way to alternatively take a <code>Iterable[Awaitable[T]]</code> and return a <code>tuple[T]</code>βjust with overloads?)</p>
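<p>For what it's worth, <code>asyncio.gather</code> itself is typed in typeshed with a stack of fixed-arity overloads for exactly this reason — current variadic generics can't express "a tuple of awaitables of mixed types maps to a tuple of those types". A hedged sketch of the same workaround, with two arities spelled out:</p>

```python
import asyncio
from typing import Any, Awaitable, TypeVar, overload

T = TypeVar("T")
U = TypeVar("U")
V = TypeVar("V")

@overload
async def par_all(fns: tuple[Awaitable[T], Awaitable[U]]) -> tuple[T, U]: ...
@overload
async def par_all(
    fns: tuple[Awaitable[T], Awaitable[U], Awaitable[V]]
) -> tuple[T, U, V]: ...

async def par_all(fns: tuple[Awaitable[Any], ...]) -> tuple[Any, ...]:
    # gather returns a list; convert so the runtime value matches the
    # tuple types promised by the overloads.
    return tuple(await asyncio.gather(*fns))

async def foo() -> int:
    return 1

async def bar() -> str:
    return "hello"

result = asyncio.run(par_all((foo(), bar())))
print(result)  # (1, 'hello')
```

<p>The <code>Iterable[Awaitable[T]] -&gt; tuple[T, ...]</code> case can then be one more overload on the same function, since it doesn't need per-position types.</p>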
|
<python><python-typing>
|
2024-07-13 07:36:51
| 0
| 16,578
|
xyzzyrz
|
78,743,031
| 3,486,684
|
Read-only class attributes that pass type checking, the modern way (3.11+)
|
<p>Relevant older questions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/76249636/class-properties-in-python-3-11">Class properties in Python 3.11+</a></li>
<li><a href="https://stackoverflow.com/questions/76378373/what-are-some-similar-alternatives-to-using-classmethod-and-property-decorators">What are some similar alternatives to using classmethod and property decorators together?</a></li>
</ul>
<p>Out of all of them, <a href="https://stackoverflow.com/a/76378416/3486684">the following answer was most appealing to me</a>. A small rewrite + example (I am using Python 3.12+):</p>
<pre class="lang-py prettyprint-override"><code>from functools import update_wrapper
from typing import Callable
class classprop[T]:
def __init__(self, method: Callable[..., T]):
self.method = method
update_wrapper(self, method)
def __get__(self, obj, cls=None) -> T:
if cls is None:
cls = type(obj)
return self.method(cls)
class MyClass:
@classprop
def hello(self) -> str:
return "world"
x = MyClass()
x.hello
</code></pre>
<p>This raises a type error (silenced in the original answer):</p>
<pre><code>Argument of type "Self@classprop[T@classprop]" cannot be assigned to
parameter "wrapper" of type "(**_PWrapper@update_wrapper) -> _RWrapper@update_wrapper"
in function "update_wrapper"
Type "Self@classprop[T@classprop]" is incompatible with
type "(**_PWrapper@update_wrapper) -> _RWrapper@update_wrapper"
</code></pre>
<p>Is there a way to address this type error, instead of silencing it? (i.e. is it raising a valid concern?) What is the error even saying?</p>
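<p>One hedged reading of the error: <code>update_wrapper</code> is typed to receive a plain callable as the wrapper, and the descriptor instance isn't one, so pyright objects. A sketch that sidesteps it by copying the metadata fields by hand (written with a classic <code>TypeVar</code> so it also runs on older interpreters; the behavior is otherwise the same):</p>

```python
from typing import Callable, Generic, TypeVar

T = TypeVar("T")

class classprop(Generic[T]):
    def __init__(self, method: Callable[..., T]):
        self.method = method
        # Copy the wrapped function's metadata manually instead of calling
        # update_wrapper: assigning onto `self` keeps the checker happy
        # because we never claim the descriptor *is* a callable.
        self.__doc__ = method.__doc__
        self.__name__ = method.__name__

    def __get__(self, obj, cls=None) -> T:
        if cls is None:
            cls = type(obj)
        return self.method(cls)

class MyClass:
    @classprop
    def hello(cls) -> str:
        return "world"

print(MyClass.hello, MyClass().hello)
```

<p>So the type error is raising a real (if pedantic) concern: the descriptor doesn't satisfy the callable signature <code>update_wrapper</code> advertises, even though copying <code>__doc__</code>/<code>__name__</code> onto it works fine at runtime.</p>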
|
<python><mypy><python-typing><pyright>
|
2024-07-13 06:14:47
| 1
| 4,654
|
bzm3r
|
78,742,511
| 4,119,292
|
Cumulative calculation across rows?
|
<p>Suppose I have a function:</p>
<pre><code>def f(prev, curr):
return prev * 2 + curr
</code></pre>
<p>(Just an example, could have been anything)</p>
<p>And a Polars dataframe:</p>
<pre><code>| some_col | other_col |
|----------|-----------|
| 7 | ...
| 3 |
| 9 |
| 2 |
</code></pre>
<p>I would like to use <code>f</code> on my dataframe cumulatively, and the output would be:</p>
<pre><code>| some_col | other_col |
|----------|-----------|
| 7 | ...
| 17 |
| 43 |
| 88 |
</code></pre>
<p>I understand that, naturally, this type of calculation isn't going to be very efficient since it has to be done one row at a time (at least in the general case).</p>
<p>I can obviously loop over rows. But is there an elegant, idiomatic way to do this in Polars?</p>
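<p>For reference, the recurrence itself is exactly what the stdlib <code>itertools.accumulate</code> computes; a sketch of the core calculation, which could then be attached back to the frame (for example via <code>map_batches</code> or by assigning the resulting list as a column):</p>

```python
from itertools import accumulate

def f(prev, curr):
    return prev * 2 + curr

values = [7, 3, 9, 2]

# accumulate feeds each output back in as `prev` for the next element.
cumulative = list(accumulate(values, f))
print(cumulative)  # [7, 17, 43, 88]
```
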
|
<python><python-polars>
|
2024-07-12 23:26:11
| 1
| 1,031
|
ldmat
|
78,742,490
| 14,649,310
|
How to stop Ollama model streaming
|
<p>So I have this class that streams the response from a model:</p>
<pre><code>from langchain_community.llms.ollama import Ollama
from app.config import (
LLM_MODEL_NAME,
MAX_LLM_INPUT_LENGTH,
MAX_LLM_INPUT_SENTENCES,
LLAMA_BASE_URL,
)
from app.utils.text_processing import TextProcessing
from app.utils.persona_mapper import PersonaPromptMapper
from app.enums import Personas
class LLMAssistant:
def __init__(
self, persona: Personas = Personas.PROFESSOR, model_name: str = LLM_MODEL_NAME
):
self.persona = persona
self.persona_prompt = PersonaPromptMapper.get_persona_prompt(persona)
self.model = Ollama(name=model_name, base_url=LLAMA_BASE_URL)
def process(self, question: str):
processed_question = TextProcessing.truncate_text(
question,
max_words=MAX_LLM_INPUT_LENGTH,
max_sentences=MAX_LLM_INPUT_SENTENCES,
)
input_text = f"Persona:'{self.persona_prompt}',Prompt: '{processed_question}'"
accumulated_text = "" # Initialize accumulated text
for chunk in self.model.stream(input_text):
# Append the chunk to accumulated text
accumulated_text += chunk
# Check if we have formed complete sentences
sentences = accumulated_text.split(". ") # Split by sentences
# Join sentences with proper punctuation
complete_sentences = []
for i in range(len(sentences) - 1):
if i == 0:
complete_sentences.append(sentences[i] + ". ") # Include the period
else:
complete_sentences.append(sentences[i] + ". ") # Include the period
# Send the current state of accumulated text (up to complete sentences)
yield "".join(complete_sentences)
# After finishing all chunks, yield any remaining accumulated text
yield accumulated_text
DefaultAssistant = LLMAssistant()
</code></pre>
<p>and this FastAPI endpoint:</p>
<pre><code>@router.get("/assistant/ask/stream")
async def ask_stream(question: str = Query(...)):
try:
# Process the text using LLMAssistant
return EventSourceResponse(DefaultAssistant.process(question))
except Exception as e:
print(e)
raise HTTPException(status_code=500, detail=f"Failed to process text: {str(e)}")
</code></pre>
<p>I listen for this stream on a simple HTML page with some JS methods. The problem is that when the LLM is done sending, I get an empty message and then the stream starts again. It seems that the stream never ends; how can I stop the stream after the model is done outputting?</p>
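<p>One possible explanation: the browser's <code>EventSource</code> auto-reconnects whenever the server closes the connection, so when the generator simply returns, the client reopens the stream and the whole answer replays. A hedged sketch of one common fix — emit an explicit terminal event and have the client close on it (the <code>"end"</code> event name here is my own choice, not part of any API):</p>

```python
def sse_events(chunks):
    """Yield sse-starlette style event dicts, ending with a sentinel.

    The client-side JS would then do:
        source.addEventListener("end", () => source.close());
    so the browser stops auto-reconnecting once the model is done.
    """
    for chunk in chunks:
        yield {"event": "message", "data": chunk}
    # Terminal sentinel: tells the client the stream is complete.
    yield {"event": "end", "data": ""}

events = list(sse_events(["Hello", " world."]))
for e in events:
    print(e)
```

<p>In the question's code, that would mean making <code>process</code> yield such a sentinel after the model's final chunk, and closing the <code>EventSource</code> in the page's JS when it arrives.</p>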
|
<python><websocket><fastapi><langchain><ollama>
|
2024-07-12 23:11:33
| 1
| 4,999
|
KZiovas
|
78,742,371
| 21,540,734
|
WebDriverWait is getting the title, but JavaScript (or something) is changing the browser's title after the page has loaded
|
<p>I'm using Selenium to rename and sort the media into a folder based on the title of the page, but the page is still loading content in the background, and the title of the page changes after Firefox has finished downloading and displaying the content.</p>
<p>This problem happens whenever I click on an episode of a docuseries that isn't the first episode in the series. It returns the episode name rather than the title of the series; after the background content finishes loading, the browser's title and the text of the HTML tag I'm after change to the title of the series, which is what I want.<a href="https://i.sstatic.net/cwxz1cbg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwxz1cbg.png" alt="enter image description here" /></a></p>
<p>I've been searching google, plus searching here on stackoverflow.com for days. I've been through almost every part of the selenium module trying different things, plus I've been through every part of the webpage hoping to find something that I can use to get selenium to wait for the content to finish loading, with no luck.</p>
<p>Also, it was recommended in an answer from another question to use <code>WebDriverWait</code> with <code>expected_conditions</code> and to try to avoid <code>time.sleep</code> with <code>selenium</code>, and I get that. Even on a high speed internet connection there are a number of things that can slow down the load time of a webpage which would make <code>time.sleep</code> inconsistent.</p>
<p>I started out with the title itself using...</p>
<pre class="lang-py prettyprint-override"><code>import selenium.webdriver.support.expected_conditions as ec
from selenium.webdriver.support.wait import WebDriverWait
browser = Firefox()
wait = WebDriverWait(driver = browser, timeout = 30)
browser.get('https://curiositystream.com/video/3558')
wait.until_not(ec.title_is(current_title))
</code></pre>
<p>But, for the moment I've settled with this. The problem I'm having isn't happening as frequently with this, but the problem is still there.</p>
<pre class="lang-py prettyprint-override"><code>print(wait.until(
ec.visibility_of_element_located((
'xpath',
'//button[@aria-expanded="false" and @class="inline-block cursor-pointer"]'
'/span[@class="leading-tight text-lg tablet:text-2xl font-normal" and contains(@aria-label,"Show")]'
))).text, end = ' ')
if len(browser.find_elements(
by = 'xpath',
value = '//div[@class="pt-4"]/p[@class="font-medium text-light pt-2"]'
)) > 0:
print('(Docuseries)')
else:
print('(Documentary)')
</code></pre>
<hr />
<p>This isn't the actual source of what I have written, but it can reproduce the problem I'm having. I'm hoping someone is willing to explore through selenium and <a href="https://curiositystream.com/" rel="nofollow noreferrer">Curiosity Stream</a> to help me come up with a solution that works without using <code>time.sleep</code>.</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import suppress
from os import getpid, kill
from re import compile
from signal import SIGTERM
from time import sleep # noqa
import selenium.common.exceptions as exc
import selenium.webdriver.support.expected_conditions as ec
from selenium.webdriver import Firefox
from selenium.webdriver.support.wait import WebDriverWait
# from library import Firefox
if __name__ == '__main__':
browser = Firefox()
wait = WebDriverWait(driver = browser, timeout = 30)
browser.set_window_rect(x = 960, y = 10, width = 1920, height = 1580)
browser.get('https://curiositystream.com/')
url = compile(r'https://curiositystream.com/video/[0-9]+')
# current_url = current_title = ''
current_url, current_title = browser.current_url, browser.title
try:
while True:
if current_url != browser.current_url:
current_url = browser.current_url
wait.until_not(ec.title_is(current_title))
if url.match(string = browser.current_url):
current_title = browser.title
if (button := next((_ for _ in browser.find_elements(
by = 'xpath',
value = '//button[@class="vjs-big-play-button" and '
'@type="button" and '
'@title="Play Video" and '
'@aria-disabled="false"]'
)), None)) is not None:
with suppress(
exc.StaleElementReferenceException,
exc.ElementNotInteractableException
):
button.click()
# sleep(wait.__dict__['_poll'])
print(wait.until(
ec.visibility_of_element_located((
'xpath',
'//button[@aria-expanded="false" and @class="inline-block cursor-pointer"]'
'/span[@class="leading-tight text-lg tablet:text-2xl font-normal" and contains(@aria-label,"Show")]'
))).text, end = ' ')
if len(browser.find_elements(
by = 'xpath',
value = '//div[@class="pt-4"]/p[@class="font-medium text-light pt-2"]'
)):
print('(Docuseries)')
else:
print('(Documentary)')
except exc.NoSuchWindowException:
kill(getpid(), SIGTERM)
</code></pre>
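<p>For what it's worth, one hedged pattern for "the title keeps changing after load" is a custom wait condition that only succeeds once the title has been unchanged for several consecutive polls — <code>WebDriverWait.until</code> accepts any callable taking the driver. Sketched here with a stub driver so it runs standalone, without a browser:</p>

```python
class TitleStable:
    """Custom expected-condition: truthy once driver.title is unchanged
    for `required` consecutive polls. Usable as wait.until(TitleStable())."""

    def __init__(self, required=3):
        self.required = required
        self.last = None
        self.streak = 0

    def __call__(self, driver):
        title = driver.title
        if title == self.last:
            self.streak += 1
        else:
            self.last = title
            self.streak = 1
        # Return the title (truthy) only once it has been stable long enough.
        return title if self.streak >= self.required else False

# Stub standing in for the real Firefox instance, purely for demonstration.
class FakeDriver:
    def __init__(self, titles):
        self._titles = iter(titles)
        self.title = None

    def poll(self):
        self.title = next(self._titles, self.title)
        return self

cond = TitleStable(required=2)
driver = FakeDriver(["Loading", "Episode 3", "Show Title", "Show Title"])
results = [cond(driver.poll()) for _ in range(4)]
print(results)
```

<p>The <code>required</code> count trades latency for robustness: a larger value tolerates slower background loads at the cost of waiting a few extra poll intervals on every page.</p>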
|
<python><selenium-webdriver><web-scraping><firefox><webdriverwait>
|
2024-07-12 22:14:18
| 1
| 425
|
phpjunkie
|
78,742,278
| 3,367,091
|
Built-in constants in Python not accessible via __builtins__ using dot syntax?
|
<p>Reading <a href="https://docs.python.org/3/library/constants.html" rel="nofollow noreferrer">Built-in Constants</a> one can see that for example <code>False</code> is a built-in constant.</p>
<p>It is possible to retrieve it using:</p>
<pre class="lang-py prettyprint-override"><code>>>> print(getattr(__builtins__, "False"))
False
</code></pre>
<p>But this fails:</p>
<pre class="lang-py prettyprint-override"><code>>>> print(__builtins__.False)
File "<stdin>", line 1
print(__builtins__.False)
^^^^^
SyntaxError: invalid syntax
</code></pre>
<p>I assume this fails because the word <code>False</code> gets recognized by the interpreter as the keyword / constant it refers to and so no attribute lookup takes place?
The <a href="https://docs.python.org/3/library/functions.html#getattr" rel="nofollow noreferrer">documentation</a> says about <code>getattr</code>:</p>
<blockquote>
<p>Return the value of the named attribute of <em>object</em>. <em>name</em> must be a string. If the string is the name of one of the objectβs attributes, the result is the value of that attribute. <strong>For example, <code>getattr(x, 'foobar')</code> is equivalent to <code>x.foobar</code>.</strong></p>
</blockquote>
<p>But for a reserved keyword (?) this does not hold?</p>
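<p>The keyword explanation is right: <code>False</code> is tokenized as a keyword before attribute parsing is even attempted, so the dotted form is a syntax error while the string-based <code>getattr</code> lookup works. A sketch using the importable <code>builtins</code> module (which, unlike the <code>__builtins__</code> shortcut, behaves consistently between scripts and the REPL):</p>

```python
import builtins

# Attribute lookup via a string name side-steps the tokenizer entirely,
# so reserved words like "False" are reachable this way.
value = getattr(builtins, "False")
print(value, value is False)

# The dotted form `builtins.False` is rejected by the parser before any
# lookup can happen -- compile() shows the same SyntaxError:
try:
    compile("builtins.False", "<demo>", "eval")
except SyntaxError as exc:
    print("SyntaxError:", exc.msg)
```
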
|
<python>
|
2024-07-12 21:32:47
| 2
| 2,890
|
jensa
|
78,742,267
| 11,618,586
|
Getting hierarchy from 2 columns that have parent-child relationships
|
<p>I have a dataframe like so:</p>
<pre><code>data = {
'Parent': [None, None, 'A', 'B', 'C', 'I', 'D', 'F', 'G', 'H', 'Z', 'Y', None,None,None,None, 'AA', 'BB', 'CC', 'EE', 'FF', None, None],
'Child': ['A', 'B', 'D', 'D', 'D', 'C', 'E', 'E', 'F', 'F', 'G', 'H', 'Z', 'Y', 'AA', 'BB', 'CC', 'CC', 'DD', 'DD', 'DD', 'EE', 'FF']
}
df = pd.DataFrame(data)
Parent Child
0 None A
1 None B
2 A D
3 B D
4 C D
5 I C
6 D E
7 F E
8 G F
9 H F
10 Z G
11 Y H
12 None Z
13 None Y
14 None AA
15 None BB
16 AA CC
17 BB CC
18 CC DD
19 EE DD
20 FF DD
21 None EE
22 None FF
</code></pre>
<p>I want an output dataframe like so:</p>
<p><a href="https://i.sstatic.net/vTf2wZjo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vTf2wZjo.png" alt="Expected Output" /></a></p>
<p>I tried using the <code>networkx</code> package as suggested in this <a href="https://stackoverflow.com/questions/53935815/find-all-the-ancestors-of-leaf-nodes-in-a-tree-with-pandas">post</a>.
This is the code I used:</p>
<pre><code>import networkx as nx

df['Parent'] = df['Parent'].fillna('No Parent')
leaves = set(df['Parent']).difference(df['Child'])
g = nx.from_pandas_edgelist(df, 'Parent', 'Child', create_using=nx.DiGraph())
ancestors = {
    n: nx.algorithms.dag.ancestors(g, n) for n in leaves
}
df1 = (pd.DataFrame.from_dict(ancestors, orient='index')
       .rename(lambda x: 'parent_{}'.format(x + 1), axis=1)
       .rename_axis('child')
       .fillna('')
       )
</code></pre>
<p>But I get an empty dataframe.
Is there an elegant way to achieve this?</p>
|
<python><pandas><dataframe>
|
2024-07-12 21:28:43
| 2
| 1,264
|
thentangler
|
78,741,922
| 3,486,684
|
Attaching an "in-group index" to each row of sorted data with Polars
|
<p>Here's the solution I came up with for the problem:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import numpy as np
max_groups = 5
max_reps = 3
# print out all rows in our table, for the sake of convenience
pl.Config.set_tbl_rows(max_groups * max_reps)
num_groups = np.random.randint(3, max_groups + 1)
unique_ids = np.random.randint(97, 123, num_groups)
repetitions = np.random.randint(1, max_reps + 1, num_groups)
id_col = "id"
data_col = "point"
index_col = "ixs"
# # Generate data
# convert integers to ascii using `chr`
ids = pl.Series(
id_col,
[c for n, id in zip(repetitions, unique_ids) for c in [chr(id)] * n],
)
data = pl.Series(
data_col,
np.random.rand(len(ids)),
)
df = pl.DataFrame([ids, data])
# # Generate indices
df.sort(id_col, data_col).group_by(id_col).agg(
pl.col(data_col), pl.int_range(pl.len()).alias(index_col)
).explode(data_col, index_col).sort(id_col, data_col)
</code></pre>
<pre><code>shape: (7, 3)
βββββββ¬βββββββββββ¬ββββββ
β id β point β ixs β
β --- β --- β --- β
β str β f64 β i64 β
βββββββͺβββββββββββͺββββββ‘
β g β 0.030686 β 0 β
β g β 0.322024 β 1 β
β k β 0.124792 β 0 β
β k β 0.289025 β 1 β
β s β 0.485742 β 0 β
β s β 0.689397 β 1 β
β u β 0.516705 β 0 β
βββββββ΄βββββββββββ΄ββββββ
</code></pre>
<p>Can I do better? I sort twice, for instance: once before grouping, and once after. I can eliminate the need for the second sort by <code>maintain_order=True</code> in the <code>group_by</code>:</p>
<pre class="lang-py prettyprint-override"><code># # Generate indices, but maintain_order in group_by
df.sort(id_col, data_col).group_by(id_col, maintain_order=True).agg(
pl.col(data_col), pl.int_range(pl.len()).alias(index_col)
).explode(data_col, index_col)
</code></pre>
<p>(Some simple, very naive <code>timeit</code>-based experiments suggest <code>maintain_order=True</code> generally wins over sorting twice, but not by a large margin.)</p>
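<p>As a reference implementation of the "index within each sorted group" idea, the same result can be stated as a single pass with the stdlib (this is not Polars API — just the semantics spelled out):</p>

```python
from itertools import groupby
from operator import itemgetter

rows = [("g", 0.03), ("g", 0.32), ("k", 0.12), ("k", 0.29), ("u", 0.52)]

# Rows are already sorted by (id, point); emit an in-group index by
# enumerating within each consecutive run of the same id.
indexed = [
    (id_, point, i)
    for id_, group in groupby(rows, key=itemgetter(0))
    for i, (_, point) in enumerate(group)
]
for row in indexed:
    print(row)
```

<p>In Polars itself, as far as I know the usual way to avoid both the agg/explode round-trip and the second sort is a window expression — an integer range over the length of each id group via <code>.over(id_col)</code> after a single sort.</p>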
|
<python><dataframe><window-functions><python-polars>
|
2024-07-12 19:14:14
| 2
| 4,654
|
bzm3r
|
78,741,828
| 874,024
|
inverted rows,cols in cv.resize
|
<p>I don't understand why the np.shape passed to cv.resize needs me to swap the tuple elements:</p>
<pre class="lang-py prettyprint-override"><code>
# Our operations on the frame come here
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
# Display the resulting frame
cv.imshow('frame', gray)
(rows, cols) = np.shape(gray)
resized = cv.resize(gray, (cols * 2, rows * 2))
cv.imshow('resized', resized)
</code></pre>
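<p>The swap is expected: NumPy's <code>shape</code> is <code>(rows, cols)</code>, i.e. <code>(height, width)</code>, while OpenCV's <code>dsize</code> argument follows the point/size convention of <code>(width, height)</code>. A minimal illustration with plain tuples:</p>

```python
# numpy shape convention: (rows, cols) == (height, width)
shape = (480, 640)          # a 480-row, 640-column image
rows, cols = shape

# OpenCV's resize convention: dsize = (width, height)
dsize = (cols * 2, rows * 2)
print(dsize)  # (1280, 960)
```
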
|
<python><opencv>
|
2024-07-12 18:43:08
| 0
| 60,094
|
CapelliC
|
78,741,811
| 11,318,930
|
Using rsync to insert a new subdirectory
|
<p>I have an existing remote file system like so:</p>
<pre><code>freshmeat
└── prime
    ├── A
    └── B
</code></pre>
<p>I want to insert an update from a local directory: <code>freshmeat/prime/c/D</code>.
So the end result would be:</p>
<pre><code>freshmeat
└── prime
    ├── A
    ├── B
    └── C
        └── D
</code></pre>
<p>I use <code>rsync -rvzP freshmeat/prime/c/D freshmeat/prime --delete</code>
This throws an error that the directory does not exist. Why does it not work?</p>
|
<python><rsync>
|
2024-07-12 18:36:29
| 1
| 1,287
|
MikeB2019x
|
78,741,573
| 353,337
|
Matplotlib axes caption with automatic wrapping
|
<p>In Matplotlib, I have two plots next to each other and I'd like to add a caption to each to them. I'd like the caption auto-wrapped to fit under the respective axes pair.</p>
<p>I've tried</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
fig, axs = plt.subplots(nrows=1, ncols=2)
axs[0].plot([1.0, 1.2, 0.9])
axs[0].text(
0,
-0.1,
"""
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim
veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea
commodo consequat. Duis aute irure dolor in reprehenderit in voluptate
velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat
cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id
est laborum.
""",
transform=axs[0].transAxes,
wrap=True,
)
plt.show()
</code></pre>
<p>but got</p>
<p><a href="https://i.sstatic.net/5X3p8fHO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5X3p8fHO.png" alt="enter image description here" /></a></p>
<p>The <code>wrap</code> setting doesn't seem to do the trick.</p>
<p>What does?</p>
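<p>As far as I know, <code>wrap=True</code> wraps against the <em>figure</em> boundary rather than the axes, which is why it appears to do nothing here. A common hedged workaround is to pre-wrap the caption with the stdlib <code>textwrap</code> before handing it to <code>ax.text</code>:</p>

```python
import textwrap

caption = (
    "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do "
    "eiusmod tempor incididunt ut labore et dolore magna aliqua."
)

# Wrap to a fixed column count chosen to fit under one axes pair; the
# right width depends on figure size and font, so it is tuned by eye.
wrapped = textwrap.fill(caption, width=40)
print(wrapped)

# Then in matplotlib:
# axs[0].text(0, -0.1, wrapped, transform=axs[0].transAxes, va="top")
```

<p>The downside is that the width is fixed in characters rather than tracking the axes size, so it needs re-tuning if the figure geometry changes.</p>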
|
<python><matplotlib>
|
2024-07-12 17:17:28
| 1
| 59,565
|
Nico SchlΓΆmer
|
78,741,557
| 4,701,426
|
Panda's value_counts() method counting missing values inconsistently
|
<p>Please consider this simple dataframe:</p>
<pre><code>df = pd.DataFrame({'x': [1, 2, 3, 4, 10]}, index = range(5))
df:
x
0 1
1 2
2 3
3 4
4 10
</code></pre>
<p>Some indices:</p>
<pre><code>ff_idx = [1, 2]
sd_idx= [3, 4]
</code></pre>
<p>One way of creating a new column by filtering df based on the above indices:</p>
<pre><code>df['ff_sd_indicator'] = np.nan
df['ff_sd_indicator'][df.index.isin(ff_idx)] = 'ff_count'
df['ff_sd_indicator'][df.index.isin(sd_idx)] = 'sd_count'
</code></pre>
<p>Another way of doing the same thing:</p>
<pre><code>df['ff_sd_indicator2'] = np.select([df.index.isin(ff_idx) , df.index.isin(sd_idx)], ['ff_count','sd_count' ], default=np.nan)
</code></pre>
<p>Notice that while the values of <code>ff_sd_indicator</code> and <code>ff_sd_indicator2</code> are naturally the same, the missing values are printed differently (NaN vs nan):</p>
<pre><code>df:
x ff_sd_indicator ff_sd_indicator2
0 1 NaN nan
1 2 ff_count ff_count
2 3 ff_count ff_count
3 4 sd_count sd_count
4 10 sd_count sd_count
</code></pre>
<p>I don't care about the different prints, but surprisingly the missing values do not show up in the output of:</p>
<pre><code>df['ff_sd_indicator'].value_counts()
</code></pre>
<p>which is:</p>
<pre><code>ff_sd_indicator
ff_count 2
sd_count 2
</code></pre>
<p>But they do show up in the output of:</p>
<pre><code>df['ff_sd_indicator2'].value_counts()
</code></pre>
<p>which is:</p>
<pre><code>ff_sd_indicator2
ff_count 2
sd_count 2
nan 1
</code></pre>
<p>So, what is going on here with value_counts() not counting the missing values in <code>ff_sd_indicator</code> while they were created by the same np.nan as the missing values in <code>ff_sd_indicator2</code> were created?</p>
<p>Edit: <code>df.info()</code>:</p>
<pre><code>RangeIndex: 5 entries, 0 to 4
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 x 5 non-null int64
1 ff_sd_indicator 5 non-null object
2 ff_sd_indicator2 5 non-null object
</code></pre>
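<p>The core of the difference can be seen without pandas: when the choices given to <code>np.select</code> are strings, the result array takes a string dtype, so the float <code>np.nan</code> default gets coerced to the literal string <code>'nan'</code> — a real, non-missing value that <code>value_counts()</code> rightly counts. A minimal stdlib illustration of that coercion:</p>

```python
# What np.select effectively does when the choices are strings: the
# float NaN default is cast into the result's string dtype.
default = float("nan")
coerced = str(default)
print(repr(coerced))  # 'nan'

# A genuine missing value and the coerced string are different things:
print(coerced == "nan", coerced is None)
```

<p>A common fix is to avoid mixing <code>np.nan</code> into string choices in the first place, or to replace the resulting <code>'nan'</code> strings with a real missing value afterward.</p>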
|
<python><pandas><numpy>
|
2024-07-12 17:12:40
| 1
| 2,151
|
Saeed
|
78,741,526
| 6,023
|
How to detect that a built-in function has had its name imported over by another module
|
<h2>Backstory</h2>
<p>A confession, I have used <code>import *</code> and felt the pain.</p>
<p>Specifically I had <code>from pyspark.sql.functions import *</code> and then attempted to use a builtin function that had been overridden by that liberal import.</p>
<p>Looking at how many same named functions exist in both, I guess it was only a matter of time before I snagged on a collision:</p>
<pre><code>>>> from pprint import pprint
>>> from pyspark.sql import functions as F
>>> pprint(list(set(dir(F)) & set(dir(__builtin__))))
['sum',
'__package__',
'__doc__',
'round',
'abs',
'min',
'bin',
'filter',
'ascii',
'__spec__',
'hex',
'__loader__',
'slice',
'hash',
'pow',
'max',
'__name__']
</code></pre>
<h2>Wish</h2>
<p>I have experimented with being a little more proactive, to see what has changed in my global namespace, but specifically the builtins are confusing me.</p>
<pre><code>>>> ORIGINAL_GLOBALS = globals().copy()
>>> from pyspark.sql.functions import *
>>> MUTATED_GLOBALS = {key: value
... for key, value in globals().items()
... if (key in ORIGINAL_GLOBALS
... and ORIGINAL_GLOBALS[key] != value)}
>>> print(MUTATED_GLOBALS)
{}
</code></pre>
<p>I expected the above to list the functions that had been hidden by the <code>import *</code>; instead, it is empty.</p>
<h2>Questions</h2>
<ul>
<li>Am I looking in the right place to interrogate the global namespace?</li>
<li>If so, is there something bespoke happening with <code>builtins</code> that I need to take into account?</li>
<li>And if there is something unusual with <code>builtins</code>, is there anything else that has the same side-door?</li>
</ul>
<h2>TL;DR</h2>
<p>Is there a good, pythonic way for me to detect something having changed in the global namespace?</p>
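<p>For reference, a minimal sketch of the kind of check I'm after (the helper name is my own invention): since <code>import *</code> adds <em>new</em> keys to <code>globals()</code> rather than mutating existing ones, comparing against the <code>builtins</code> module directly seems more promising than diffing snapshots:</p>

```python
import builtins

def shadowed_builtins(namespace):
    # Names bound in the namespace that hide a builtin of the same name.
    # Builtins normally resolve through the builtins module, not globals(),
    # so a shadowing `import *` shows up as *new* globals() keys -- which is
    # why diffing only the pre-existing keys finds nothing.
    return sorted(
        name for name, value in namespace.items()
        if not name.startswith("__")
        and hasattr(builtins, name)
        and value is not getattr(builtins, name)
    )

ns = {"sum": max, "filter": filter, "x": 1}
print(shadowed_builtins(ns))  # ['sum']
```

<p>Run against <code>globals()</code> after the star import, this should list <code>sum</code>, <code>round</code>, <code>abs</code> and friends.</p>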
|
<python><python-import><built-in>
|
2024-07-12 17:03:54
| 1
| 1,340
|
Don Vince
|
78,741,441
| 2,071,807
|
Structural pattern matching binds already defined variables but treats instance attributes as literals: is this documented anywhere?
|
<p><a href="https://peps.python.org/pep-0636/#matching-specific-values" rel="nofollow noreferrer">PEP 636 - Structural Pattern Matching</a> discusses how pattern matching binds values to variables declared on the go:</p>
<blockquote>
<pre class="lang-py prettyprint-override"><code>match command.split():
    case ["quit"]:
        print("Goodbye!")
        quit_game()
    case ["look"]:
        current_room.describe()
    case ["get", obj]:
        character.get(obj, current_room)
</code></pre>
<p>A pattern like <code>["get", obj]</code> will match only 2-element sequences that have a first element equal to "get". <strong>It will also bind <code>obj = subject[1]</code></strong>.</p>
</blockquote>
<p>If <code>obj</code> already had a value, we might expect the value to be used as if it was a literal, but that doesn't happen. Instead <code>obj</code> is overwritten:</p>
<pre class="lang-py prettyprint-override"><code>>>> # I'm using Python 3.12. You can copy/paste this straight into iPython
>>> obj = 3
>>> for i in range(5):
...     match i:
...         case obj:
...             print(i)  # Does this print only 3?
0
1
2
3
4
>>> obj
4
</code></pre>
<p>This doesn't happen if you specify an instance attribute:</p>
<pre class="lang-py prettyprint-override"><code>>>> from dataclasses import dataclass
>>> @dataclass
... class Num:
...     x: int
...
>>> num = Num(x=3)
>>> for i in range(5):
...     match i:
...         case num.x:  # does this bind num.x to i???
...             print(i)
...
3
</code></pre>
<p>In this second case, <code>num.x</code> is unchanged and is used as if it were a literal.</p>
<p>This isn't mentioned anywhere in PEP 636 that I can see. Where can I read about this subtlety? Was this behaviour introduced after PEP 636 was implemented?</p>
<p><em>(Note: I think this behaviour is perfectly sensible and good. I'd just like to know if/where it's documented)</em></p>
|
<python><match><structural-pattern-matching>
|
2024-07-12 16:38:03
| 1
| 79,775
|
LondonRob
|
78,741,393
| 16,527,170
|
AttributeError: 'Index' object has no attribute 'strftime'
|
<p>In DataFrame <code>df</code>, the index column is a <code>Timestamp</code> in which some values are in the format <code>%Y-%m-%d %H:%M:%S.%f</code> and others in the format <code>%Y-%m-%d %H:%M:%S</code>, due to which I am also facing an issue sorting the index with <code>df = df.sort_index()</code>, and the date format is inconsistent.</p>
<p>My Code:</p>
<pre><code>import pandas as pd
# Example DataFrame with mixed datetime formats in the index
data = {
'final_lowerband': [None, None, None, 6698.0, 6698.0, 6698.0],
'final_upperband': [None, None, None, None, None, None],
}
index = [
'2024-07-12 20:38:59.667000',
'2024-07-12 19:38:59.957000',
'2024-07-12 19:36:59.897000',
'2024-07-12 19:13:59.870000',
'2024-07-12 18:15:59',
'2024-07-12 21:35:00',
]
df = pd.DataFrame(data, index=index)
# Convert index to DatetimeIndex
df.index = pd.to_datetime(df.index)
# Convert index to desired format
df.index = df.index.strftime('%Y-%m-%d %H:%M:%S.%f')
# Display the DataFrame with the updated index format
print(df)
</code></pre>
<p>ERROR 1 for Line: <code>df.index = pd.to_datetime(df.index)</code></p>
<pre><code>ValueError: time data "2024-07-12 18:15:59" doesn't match format "%Y-%m-%d %H:%M:%S.%f", at position 4. You might want to try:
- passing `format` if your strings have a consistent format;
- passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;
- passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.
</code></pre>
<p>ERROR 2 From Line: <code>df.index = df.index.strftime('%Y-%m-%d %H:%M:%S.%f')</code></p>
<pre><code>AttributeError: 'Index' object has no attribute 'strftime'
</code></pre>
<p>When I tried sorting the index without <code>strftime</code>, I got this error:</p>
<p>ERROR 3: <code>TypeError: '<' not supported between instances of 'Timestamp' and 'str'</code></p>
<pre><code>df = df.sort_index()
</code></pre>
<p>Expected Output:</p>
<pre><code>Timestamp final_lowerband final_upperband
2024-07-12 18:15:59 6698.0 NaN
2024-07-12 19:13:59 6698.0 NaN
2024-07-12 19:36:59 NaN NaN
2024-07-12 19:38:59 NaN NaN
2024-07-12 20:38:59 NaN NaN
2024-07-12 21:35:00 6698.0 NaN
</code></pre>
|
<python><pandas><dataframe>
|
2024-07-12 16:24:04
| 1
| 1,077
|
Divyank
|
78,741,359
| 1,361,752
|
How to have a pip editable install not install dependencies that are already installed in editable mode
|
<p>I am often working on multiple python packages with interdependencies defined in pyproject.toml. I want them all installed in editable mode, which seems harder than it should be. What I want to do is to:</p>
<ol>
<li>Clone each package locally using git</li>
<li>Install each package in editable mode with <code>pip install -e .</code> Notionally, I'd do this in logical order for dependency.</li>
</ol>
<p>Let us say I have two packages, <code>parent</code> and <code>child</code>, where <code>child</code> lists <code>parent</code> as a dependency in pyproject.toml. If I install <code>parent</code> then <code>child</code> in editable mode then when installing <code>child</code>, <code>pip</code> will try to download and re-install <code>parent</code> from the repository. This can cause confusion if not realized, since you might not realize you've accidently installed a non-editable version of <code>parent</code> into your environment.</p>
<p>Is there any way to tell <code>pip</code> to respect the version of the package you already have installed in editable mode? I would like a way to do this in pyproject.toml, or by providing some flag to pip.</p>
<p><strong>Options Already Considered</strong></p>
<p>I know I could use <code>pip install --no-deps</code> with <code>child</code>, but that causes it to not install <em>any</em> dependencies. Sometimes <code>child</code> has dependencies that you'll want installed in addition to <code>parent</code>.</p>
<p>I could also always install in reverse order (<code>child</code> then <code>parent</code>), which isn't a bad solution. But this can get complicated if you are working on multiple "children", or have more complex dependency trees.</p>
|
<python><pip><pyproject.toml>
|
2024-07-12 16:14:53
| 0
| 4,167
|
Caleb
|
78,741,275
| 2,153,235
|
Using something like MyDataFrame.print() instead of print(MyDataFrame)?
|
<p>When I run a script from the Spyder console using <code>runfile</code>, source file lines containing expressions (e.g., <code>"HELLO"</code>) don't print out to the console. I have to explicitly print, e.g., <code>print("HELLO")</code>.</p>
<p>Is there a way to output the string representation of a pandas DataFrame using a method? Something like the very last chained method below:</p>
<pre><code>dfShipName[['ShpNm','ShipNamesPrShpNm']] \
.drop_duplicates() \
.groupby('ShipNamesPrShpNm',as_index=False)['ShipNamesPrShpNm'] \
.agg(['count']) \
.rename(columns={'count':'nShpNm'}) \
.sort_values(by='nShpNm',ascending=False) \
.head(20).print()
</code></pre>
<p>I could encapsulate the entire thing (sans <code>.print()</code>) in a <code>print()</code> invocation, but that's another level of indentation. Just appending a <code>print()</code>-like method is much cleaner, which makes a big difference when I have lots of these (for exploratory analysis). I can also toggle the output by simply commenting away the <code>.print()</code>.</p>
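<p>The closest I have found so far is <code>DataFrame.pipe</code>, which hands the frame to any callable and can therefore terminate a chain; a minimal sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [3, 1, 2]})

# pipe() passes the frame to print, so no wrapping parentheses are needed
df.sort_values("a").head(2).pipe(print)
```

<p>Commenting away the final <code>.pipe(print)</code> toggles the output just like the hypothetical <code>.print()</code> would.</p>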
|
<python><pandas><dataframe>
|
2024-07-12 15:51:58
| 2
| 1,265
|
user2153235
|
78,741,237
| 14,179,793
|
boto3 copy using SourceClient and access keys results in "AccessDenied"
|
<p>I am trying to determine if using boto3 copy with a <code>SourceClient</code> will work for my current use case. The <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/copy.html#copy" rel="nofollow noreferrer">documentation</a> mentions the <code>SourceClient</code> parameter "The client to be used for operation that may happen at the source object" . I have two buckets that require key access and I need to transfer files between them.</p>
<pre><code>def copy_function(src_bucket, src_key, src_akid, src_sak, dest_bucket, dest_key, dest_akid, dest_sak):
    src_client = boto3.client(
        's3',
        aws_access_key_id=src_akid,
        aws_secret_access_key=src_sak
    )
    dest_client = boto3.client(
        's3',
        aws_access_key_id=dest_akid,
        aws_secret_access_key=dest_sak
    )
    rsp = dest_client.copy(
        CopySource={
            'Bucket': src_bucket,
            'Key': src_key
        },
        Bucket=dest_bucket,
        Key=dest_key,
        SourceClient=src_client
    )
</code></pre>
<p>But this results in the following error:</p>
<pre><code>{
"errorMessage": "An error occurred (AccessDenied) when calling the UploadPartCopy operation: Access Denied",
"errorType": "ClientError",
...
"stackTrace": [
" File \"/var/task/task/lambda_function.py\", line 16, in handler\n results = test_main(event, context)\n",
" File \"/var/task/task/test.py\", line 39, in test_main\n rsp = dest_client.copy(\n",
" File \"/var/task/boto3/s3/inject.py\", line 450, in copy\n return future.result()\n",
" File \"/var/task/s3transfer/futures.py\", line 103, in result\n return self._coordinator.result()\n",
" File \"/var/task/s3transfer/futures.py\", line 266, in result\n raise self._exception\n",
" File \"/var/task/s3transfer/tasks.py\", line 139, in __call__\n return self._execute_main(kwargs)\n",
" File \"/var/task/s3transfer/tasks.py\", line 162, in _execute_main\n return_value = self._main(**kwargs)\n",
" File \"/var/task/s3transfer/copies.py\", line 370, in _main\n response = client.upload_part_copy(\n",
" File \"/var/task/botocore/client.py\", line 565, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
" File \"/var/task/botocore/client.py\", line 1021, in _make_api_call\n raise error_class(parsed_response, operation_name)\n"
]
}
</code></pre>
<p>Which seems to suggest the destination client doesn't have permissions to upload but I know the keys provided allow for this. The following works without producing an error:</p>
<pre><code>def copy_function_2(src_bucket, src_key, src_akid, src_sak, dest_bucket, dest_key, dest_akid, dest_sak):
    src_client = boto3.client(
        's3',
        aws_access_key_id=src_akid,
        aws_secret_access_key=src_sak
    )
    rsp = src_client.get_object(
        Bucket=src_bucket,
        Key=src_key
    )
    dest_client = boto3.client(
        's3',
        aws_access_key_id=dest_akid,
        aws_secret_access_key=dest_sak
    )
    rsp = dest_client.put_object(
        Bucket=dest_bucket,
        Key=dest_key,
        Body=rsp.get('Body').read()
    )
    print(rsp)
</code></pre>
<p>Have I missed something or does the <code>SourceClient</code> in combination with a destination client not work in the manner I am attempting?</p>
|
<python><boto3>
|
2024-07-12 15:43:16
| 1
| 898
|
Cogito Ergo Sum
|
78,741,205
| 11,208,087
|
Unable to access FastApi app that is running as windows service created with pywin32
|
<p>i have a basic fastapi <code>main.py</code> as following:</p>
<pre><code>from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
app = FastAPI()
origins = ["*"]
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/")
def read_root():
    return {"Hello": "World"}
</code></pre>
<p>and a <code>win_service.py</code> as following:</p>
<pre><code>import win32serviceutil
import win32service
import win32event
import servicemanager
import socket
import os
from threading import Thread
import uvicorn
class AppServerSvc(win32serviceutil.ServiceFramework):
    _svc_name_ = "ABCD"
    _svc_display_name_ = "ABCD Windows Service"
    _svc_description_ = "A FastAPI application running as a Windows Service"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
        socket.setdefaulttimeout(60)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.hWaitStop)

    def SvcDoRun(self):
        servicemanager.LogMsg(
            servicemanager.EVENTLOG_INFORMATION_TYPE,
            servicemanager.PYS_SERVICE_STARTED,
            (self._svc_name_, "")
        )
        self.main()

    def main(self):
        def start_server():
            os.chdir(os.path.dirname(os.path.abspath(__file__)))
            uvicorn.run("main:app", host="0.0.0.0", port=8000)

        server_thread = Thread(target=start_server)
        server_thread.start()
        win32event.WaitForSingleObject(self.hWaitStop, win32event.INFINITE)
        server_thread.join()

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(AppServerSvc)
</code></pre>
<p>I am using Windows 11 & Python 3.9. When I install the service (<code>python win_service.py install</code>) from an admin cmd and debug it (<code>python win_service.py debug</code>), it works & I can access it on http://localhost:8000.</p>
<p>When I start it (<code>python win_service.py start</code>), it gets started and the Windows Event Viewer log shows <code>The ABCD service has started.</code>, but it's not accessible at http://localhost:8000.</p>
<p>How can I access it when it's running as a service?</p>
|
<python><python-3.x><windows-services><fastapi><pywin32>
|
2024-07-12 15:36:59
| 1
| 801
|
Jaydeep
|
78,741,177
| 11,037,602
|
How to read environment variables with spiders deployed to scrapyd?
|
<h3>TL;DR:</h3>
<p><code>load_dotenv()</code> loads the env vars locally, but it doesn't when run in scrapyd</p>
<h3>Details</h3>
<p>I have a scrapy project that needs to read some environment variables. These variables are found in the <code>.env</code> file and I use <a href="https://github.com/theskumar/python-dotenv" rel="nofollow noreferrer">python-dotenv</a> to load them. <strong>It works perfectly when running locally</strong>.</p>
<p>However I have a containerized server that runs <a href="https://github.com/scrapy/scrapyd" rel="nofollow noreferrer">scrapyd</a> to which the project is deployed (packaged into an egg). <strong>I have the <code>.env</code> file included in the <code>MANIFEST.in</code> and I confirmed it is included in the egg.</strong></p>
<p>The spiders run on the scrapyd environment, except for the fact that it doesn't load any of the environment variables that it should.</p>
<p>The project is structured as:</p>
<pre><code>testproject/
../setup.py
../scrapy.cfg
../MANIFEST.in
../testproject/
../settings.py
../.env
../spiders/
../testspider.py
</code></pre>
<p>The <code>settings.py</code> file contains:</p>
<pre><code>import os
from dotenv import load_dotenv
load_dotenv() # Load Enviroment Variables
print(
    f"Google Cred? {os.environ.get('GOOGLE_APPLICATION_CREDENTIALS')}"
    f" Proxy? {os.environ.get('HTTP_PROXY')}"
)  # It should print the values, but prints None for both
</code></pre>
<p>And the <code>.env</code> file:</p>
<pre><code>GOOGLE_APPLICATION_CREDENTIALS=somevalue
HTTP_PROXY=somevalue
</code></pre>
<p>How can I load environment variables when running in scrapyd? I don't want to define then in the scrapyd Dockerfile.</p>
<hr />
<p>I have seen this answer <a href="https://stackoverflow.com/a/77964778">https://stackoverflow.com/a/77964778</a> that suggests to not use <code>python-dotenv</code>, but it also provides no solution. It seems that the author assumes that the <code>.env</code> file will just get loaded somehow, I've tried, it didn't work.</p>
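<p>For completeness, a stdlib-only fallback I have been considering (the helper is my own, not part of python-dotenv): it resolves the file relative to settings.py rather than the working directory, which differs when scrapyd unpacks the egg:</p>

```python
import os
from pathlib import Path

def load_env_file(path):
    # Minimal .env loader: KEY=VALUE lines, '#' comments; existing vars win.
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

# In settings.py: look next to this file, not in the process's CWD.
env_path = Path(__file__).resolve().parent / ".env"
if env_path.exists():
    load_env_file(env_path)
```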
|
<python><scrapy><scrapyd><python-dotenv><scrapyd-deploy>
|
2024-07-12 15:30:08
| 1
| 2,081
|
Justcurious
|
78,741,144
| 5,983,080
|
Is there an exponentially weighted moving sum (ewms instead of ewma) function in polars
|
<p>Also here: <a href="https://github.com/pola-rs/polars/issues/17602" rel="nofollow noreferrer">https://github.com/pola-rs/polars/issues/17602</a></p>
<p>In polars ewm_mean, the update is formulated as:</p>
<pre><code>y = (1 - alpha) * y + alpha * x
</code></pre>
<p>I found that sometimes it is useful to use non-decayed update, a.k.a. exponentially weighted moving sum instead:</p>
<pre><code>y = (1 - alpha) * y + x
</code></pre>
<p>For equally spaced updates, ewms is just ewma scaled by 1 / alpha, but the relationship is not so trivial when updates are unequally spaced.</p>
<p>To achieve this I implement some update function on myself and try to use <code>.map_batches()</code> but think it's rather suboptimal.</p>
<p>Is there a better alternative?</p>
<pre class="lang-py prettyprint-override"><code>def ems_iter(values: np.ndarray, intervals: Union[np.ndarray, float], half_lifes: Union[np.ndarray, float]):
    """
    >>> ems_iter([1, 0, 0, 1, 0, 0], 1, 1)
    [1.0, 0.5, 0.25, 1.125, 0.5625, 0.28125]
    """
    res = []
    agg = 0.
    values = np.array(values)
    decays = (1/2) ** (np.array(intervals) / half_lifes)
    decays = np.ones(values.shape) * decays
    for value, decay in zip(values, decays):
        agg = agg * decay + value
        res.append(agg)
    return res


def ems(val_col: str, interval_col: str, hl: int) -> pl.Expr:
    return (
        pl.struct([val_col, interval_col])
        .map_batches(
            lambda x: pl.Series(
                ems_iter(
                    x.struct.field(val_col),
                    x.struct.field(interval_col).dt.total_seconds(),
                    hl
                )
            )
        )
    )
</code></pre>
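<p>For comparison, a vectorized numpy sketch of the same recurrence (my own rewrite via cumulative products; note it is numerically unstable for long series, since the cumulative decay underflows):</p>

```python
import numpy as np

def ems_vectorized(values, intervals, half_life):
    # y_n = sum_i v_i * prod_{j=i+1..n} d_j, with d_j = 0.5 ** (dt_j / half_life)
    # Using P_n = prod_{j<=n} d_j, this becomes  y_n = P_n * cumsum(v_i / P_i)
    v = np.asarray(values, dtype=float)
    d = 0.5 ** (np.asarray(intervals, dtype=float) / half_life)
    P = np.cumprod(np.broadcast_to(d, v.shape))
    return P * np.cumsum(v / P)

print(ems_vectorized([1, 0, 0, 1, 0, 0], 1, 1))
# same values as ems_iter: 1.0, 0.5, 0.25, 1.125, 0.5625, 0.28125
```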
|
<python><python-polars>
|
2024-07-12 15:21:31
| 0
| 381
|
Celsius_Xu
|
78,741,064
| 7,580,944
|
weird shape when indexing a jax array
|
<p>I am experiencing a weird issue when indexing a Jax array using a list.
If I place a debugger in the middle of my code, I have the following:</p>
<p><a href="https://i.sstatic.net/IiaThfWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IiaThfWk.png" alt="indexing inside the code" /></a></p>
<p>This array is created by converting a numpy array.</p>
<p>However, when I try this in a new instance of Python, I have the correct behavior:
<img src="https://i.sstatic.net/YsXRpkx7.png" alt="indexing in a new instance" /></p>
<p>What is happening?</p>
|
<python><numpy><jax>
|
2024-07-12 15:01:07
| 1
| 359
|
Chutlhu
|
78,740,794
| 4,687,489
|
xml.etree.ElementTree. Element object has no attribute 'nsmap'
|
<p>I am trying to process XML data from the website <a href="https://www.marktstammdatenregister.de/MaStR/Datendownload" rel="nofollow noreferrer">https://www.marktstammdatenregister.de/MaStR/Datendownload</a>.
I downloaded some datasets for a certain technology, for example Netzanschlusspunkte. The first step would be to delete certain columns. The second step would be to merge the files.</p>
<pre><code>import pandas as pd
import xml.etree.ElementTree as ET
import os, psutil
from bs4 import BeautifulSoup as b
path='....'
for filename in os.listdir(path):
    if not filename.endswith('.xml'): continue
    fullname = os.path.join(path, filename)
    tree1 = ET.parse(fullname)
    root1 = tree1.getroot()

path = '....'
for filename in os.listdir(path):
    if not filename.endswith('.xml'): continue
    fullname = os.path.join(path, filename)
    tree2 = ET.parse(fullname)
    root2 = tree2.getroot()

names_to_delete = ['NetzanschlusspunktMastrNummer', 'LetzteAenderung']
for grandchild in list(root):
    if grandchild.attrib in names_to_delete:
        root.remove(grandchild)

def merge_namespaces(root1, root2):
    for prefix, uri in root1.nsmap.items():
        if prefix not in root2.nsmap.values():
            ET.register_namespace(prefix, uri)

# Merge namespaces
merge_namespaces(root1, root2)

# Append elements from the second XML file to the first XML file
for elem in root2:
    root1.append(elem)

# Create a new ElementTree object with the merged root
merged_tree = ET.ElementTree(root1)

# Write the merged XML to a new file
merged_tree.write('', encoding='utf-8', xml_declaration=True)
</code></pre>
<p>In the last step I do get the error AttributeError: 'xml.etree.ElementTree.Element' object has no attribute 'nsmap'</p>
<p>I think the problem is related the fact, that I try to convert str to bytes, but I don't know how to slove it.</p>
<p>I also tried to look at the file via <code>tostring</code> to get a better understanding, but the file is too big -> JavaScript Error: too much recursion.</p>
<p>Here is the sample for Wind:
<a href="https://i.sstatic.net/XY3QEUcg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XY3QEUcg.png" alt="enter image description here" /></a></p>
<p>Here is the sample for Netzanschlusspunkt:
<a href="https://i.sstatic.net/Cb0CSKKr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cb0CSKKr.png" alt="enter image description here" /></a></p>
<p>In reality the files have many more columns.</p>
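<p>For context, <code>nsmap</code> is an lxml-only attribute; stdlib <code>xml.etree</code> elements do not carry it. A stdlib sketch that collects the prefix-to-URI pairs from parse events instead:</p>

```python
import io
import xml.etree.ElementTree as ET

def collect_nsmap(source):
    # stdlib substitute for lxml's .nsmap: gather (prefix, URI) pairs
    # emitted as "start-ns" events during parsing
    return dict(ns for _, ns in ET.iterparse(source, events=("start-ns",)))

xml = b'<a xmlns:x="urn:x" xmlns:y="urn:y"><x:b/></a>'
print(collect_nsmap(io.BytesIO(xml)))  # {'x': 'urn:x', 'y': 'urn:y'}
```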
|
<python><xml><filter><merge><grandchild>
|
2024-07-12 14:08:57
| 1
| 359
|
Mlle Blanche
|
78,740,790
| 8,037,521
|
Efficient way of finding nearest pixel
|
<p>I have array of floating points Nx2 which is representing the reprojected 3D -> 2D coordinates in a 2D image. I need to find closest (integer) pixel. The naive solution is pretty simple:</p>
<pre><code>import numpy as np
# Original array of floating-point row and column values
points = np.array([
[327.47910325, 3928.36321963],
[1439.79734793, 3987.02652005],
[304.02698845, 2844.82490694],
[230.43053757, 4090.70452501]
])
def find_nearest_integer_point(point):
    # Extract the row and column values
    row, col = point
    # Four possible nearest integer points
    candidates = np.array([
        [np.floor(row), np.floor(col)],
        [np.floor(row), np.ceil(col)],
        [np.ceil(row), np.floor(col)],
        [np.ceil(row), np.ceil(col)]
    ])
    # Calculate the Euclidean distances to the candidates
    distances = np.linalg.norm(candidates - point, axis=1)
    # Select the candidate with the minimum distance
    nearest_point = candidates[np.argmin(distances)]
    return nearest_point.astype(int)

# Apply the function to each point
rounded_points = np.array([find_nearest_integer_point(point) for point in points])
print(rounded_points)
</code></pre>
<p>But this will surely be extremely inefficient when working with big amounts of data (2D videos instead of a single image).
Could anyone advise me on a speed-up for the given method? Maybe something already implemented in some publicly available library? Or maybe some data structure I should take into account?</p>
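<p>One observation that may help: squared Euclidean distance decomposes per coordinate, so the nearest integer point is just elementwise rounding, which removes the candidate loop entirely (note that <code>np.rint</code> rounds exact halves to even):</p>

```python
import numpy as np

points = np.array([
    [327.47910325, 3928.36321963],
    [1439.79734793, 3987.02652005],
])

# nearest lattice point == round each coordinate independently
rounded_points = np.rint(points).astype(int)
print(rounded_points)
```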
|
<python><numpy>
|
2024-07-12 14:08:25
| 0
| 1,277
|
Valeria
|
78,740,763
| 12,633,371
|
Is it possible to pause a python script that has already started running and check the value of a variable?
|
<p>Even though the title is self-explanatory: I have a python script that has been running for quite a few days, and I want to know if there is a way to pause it and check the value of a variable.</p>
<p>If I haven't already written code in the script for that purpose before running it, is it possible to check the value of this variable now?</p>
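<p>For future runs, one thing I could add up front is a signal handler that drops into a debugger on demand (a Unix-only sketch; for an already-running, unprepared process, external tools such as py-spy can attach from outside instead):</p>

```python
import pdb
import signal

def debug_handler(sig, frame):
    # pause the program and open pdb at the interrupted frame
    pdb.Pdb().set_trace(frame)

# `kill -USR1 <pid>` now pauses the script in an interactive pdb session
signal.signal(signal.SIGUSR1, debug_handler)
```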
|
<python>
|
2024-07-12 14:03:25
| 1
| 603
|
exch_cmmnt_memb
|
78,740,675
| 8,580,574
|
How to programmatically generate some documentation from Python dictionary and insert it into .rst file using Sphinx
|
<p>I am writing a configuration tool that automates the deployment of some monitoring dashboards. Each dashboard has a written description that is available in the tool, and can be found in the deployed dashboard. But we also have some functional documentation that repeats this description so end-users can read it outside of the code. So basically we have the following:</p>
<ol>
<li>description_mapping.py</li>
</ol>
<pre class="lang-py prettyprint-override"><code>dashboard_description_mapping = {"dashboard_1": "This dashboard achieves xyz"}
</code></pre>
<ol start="2">
<li>README.rst</li>
</ol>
<pre><code>Templates
=========
Descriptions
------------
* **dashboard_1**
This dashboard achieves xyz
</code></pre>
<p>I simplified the approach, in the real case we have many dashboard and descriptions that are all duplicated in the file description_mapping.py and in README.rst.</p>
<p>I would like to be able to use some sort of command that gives me the following:</p>
<pre><code>Templates
=========
Descriptions
------------
* **dashboard_1**
.. magic_command: description_mapping.dashboard_description_mapping.get("dashboard_1")
</code></pre>
<p>Which would then render as if the description is automagically grabbed, and basically then I don't violate the DRY principle anymore.</p>
<p>Is this possible? I have searched for a while and came across these former questions, but they do not seem to achieve what I need:</p>
<ol>
<li><a href="https://stackoverflow.com/questions/7250659/how-to-use-python-to-programmatically-generate-part-of-sphinx-documentation/18143318#18143318">How to use Python to programmatically generate part of Sphinx documentation?</a></li>
<li><a href="https://stackoverflow.com/questions/27875455/displaying-dictionary-data-in-sphinx-documentation">Displaying dictionary data in Sphinx documentation</a></li>
</ol>
<p>I would like to grab the string, not just print out the whole dictionary basically.</p>
<p>I tried to use existing functionality of ..autodoc, but this did not give me desired output but simply printed out the dictionary.</p>
|
<python><python-sphinx>
|
2024-07-12 13:40:33
| 1
| 2,542
|
PEREZje
|
78,740,522
| 2,123,706
|
How to split single element of list and maintain order in list?
|
<p>I have a list:</p>
<pre><code>ll=['the big grey fox','want to chase a squirell', 'because they are friends']
</code></pre>
<p>I want to split the 2nd element, <code>ll[1]</code>, into <code>"want to chase"</code> and <code>" a squirell"</code>.</p>
<p>I want to end up with:</p>
<pre><code>['the big grey fox','want to chase', ' a squirell', 'because they are friends']
</code></pre>
<p>Note that the element always needs to be split so that its last 10 characters become a new element of the list, placed immediately after the first n-10 characters of the element it came from, so that the order of the joined text is not altered.</p>
<p>I currently use:</p>
<pre><code>ll[:1],ll[1][:len(ll[1])-10], ll[1][-10:],ll[2:]
</code></pre>
<p>but is there a prettier way of doing this?</p>
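<p>A small helper is the tidiest alternative I can think of (the function name is made up), keeping the slicing in one place; with the last-10-characters rule it splits the same way as my current slicing does:</p>

```python
def split_tail(lst, i, n=10):
    # split element i so that its last n characters become the next element
    return lst[:i] + [lst[i][:-n], lst[i][-n:]] + lst[i + 1:]

ll = ['the big grey fox', 'want to chase a squirell', 'because they are friends']
print(split_tail(ll, 1))
# ['the big grey fox', 'want to chase ', 'a squirell', 'because they are friends']
```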
|
<python><list>
|
2024-07-12 13:08:20
| 1
| 3,810
|
frank
|
78,740,103
| 11,357,695
|
Test multiple functions raise the same error with pytest
|
<p>Is there a way to put multiple tests under one <code>pytest.raises</code> block? I'd like something similar to the below, but I would want to test that both functions raise the error (whereas I believe only one <code>ValueError</code> would need to be raised for <code>pytest</code> to be satisfied here).</p>
<pre><code>with pytest.raises(ValueError):
    # I want to confirm both of these raise ValueError
    do_first_thing()
    do_second_thing()
</code></pre>
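<p>For reference, the closest I have found is giving each call its own block, e.g. via parametrize (a sketch with dummy functions):</p>

```python
import pytest

def do_first_thing():
    raise ValueError("first")

def do_second_thing():
    raise ValueError("second")

# each callable gets its own raises-block, but it reads as a single test
@pytest.mark.parametrize("func", [do_first_thing, do_second_thing])
def test_both_raise(func):
    with pytest.raises(ValueError):
        func()
```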
|
<python><pytest>
|
2024-07-12 11:24:16
| 0
| 756
|
Tim Kirkwood
|
78,739,761
| 1,564,070
|
Visio Container FillForegnd not taking effect
|
<p>I'm controlling Visio from Python using win32com and with good results overall so far. After a recent update to my code, container header fills are no longer taking effect. The code I'm using:</p>
<pre><code>class Container(Shape):
    v_app: Application

    def __init__(self, caption: str, items: list, header_fill: str=None, font: Font=None) -> None:
        # select items for container:
        window = self.v_app.app.ActiveWindow
        selection = window.Selection; selection.DeselectAll()
        for item in items:
            selection.Select(item.shape, 2)  # visSelect = 2
        # drop and format container:
        self.shape = self.v_app.page.DropContainer(self.v_app.doc.Masters.ItemU("Classic"), selection)
        super().__init__(self.shape)
        self.shape.Text = caption
        if font:
            self.set_font(font)
        else:
            self.set_font(self.v_app.font4)
        if header_fill:
            self.shape.Cells("FillForegnd").FormulaForceU = header_fill
        self.shape.ContainerProperties.FitToContents()
</code></pre>
<p>The resulting shapesheet value looks correct:</p>
<p><a href="https://i.sstatic.net/nSk6nrHP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSk6nrHP.png" alt="shapesheet" /></a></p>
<p>However the header fill does not change from the default white. If I use the Visio GUI, I am able to update the fill color successfully.</p>
<p><a href="https://i.sstatic.net/f3bKGj6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f3bKGj6t.png" alt="Container without header fill" /></a></p>
<p>I created a stand-alone program to isolate the issue from my code base:</p>
<pre><code>import win32com.client as w32
app = w32.Dispatch("Visio.Application")
app.Window.SetWindowRect(10, 10, 2000, 1000)
doc = app.Documents.Open(r"C:\Users\hankb\OneDrive\Documents\Python\projects\visio_test\junk.vsdx")
# doc = app.Documents.Add("junk.vsdx")
page = doc.Pages.Add()
b1 = page.DrawRectangle(1, 5, 2, 4.5); b1.Text = "Box 1"
b2 = page.DrawRectangle(1, 4, 2, 3.5); b2.Text = "Box 2"
win = app.ActiveWindow
sel = win.Selection
sel.Select(b1, 2)
sel.Select(b2, 2)
con = page.DropContainer(doc.Masters.ItemU("Classic"), sel)
con.Cells("FillForegnd").FormulaForceU = "RGB(189, 215, 238)" # RGB(0,255,0)"
con.Text = "Container"
pass
</code></pre>
<p>The output from the short program:
<a href="https://i.sstatic.net/3kA2rVlD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3kA2rVlD.png" alt="Output from short program" /></a></p>
<p>The code below solved my issue:</p>
<pre><code>con = page.DropContainer(stencil.Masters.ItemU("Classic"), sel)
con_header = con.Shapes[1]
con_header.Cells("FillForegnd").FormulaForceU = "RGB(189, 215, 238)"
con.Text = "Container"
</code></pre>
|
<python><visio>
|
2024-07-12 10:04:43
| 1
| 401
|
WV_Mapper
|
78,739,630
| 1,232,660
|
LXML automatically converts Windows newlines
|
<p>I am trying to parse an XML string that contains Windows newlines (the CR, LF pair):</p>
<pre class="lang-python prettyprint-override"><code>from lxml.etree import XML
root = XML('<root>_\r\n_\n_</root>')
print(
    [ord(char) for char in root.text],
)
</code></pre>
<p>but the resulting text surprisingly contains only Linux newlines (LF character):</p>
<pre class="lang-python prettyprint-override"><code>[95, 10, 95, 10, 95]
</code></pre>
<p>Is it a feature, documented somewhere? Is it possible to change its behavior to access the unmodified text?</p>
<p>I am using (currently the newest) LXML 5.2.2.</p>
<hr>
<p>The <a href="https://stackoverflow.com/questions/16123277/how-to-control-newline-processing-in-the-lxml-xpath-text-function">linked question</a> is not relevant to my question - it talks about software changes between Fedora 17 and 18, while this behavior, as mentioned here in the comments, turns out to be defined in the XML standard. Also, the answer does not answer my question - it recommends to replace the manually added newlines with a triple quoted string.</p>
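<p>For reference, the stdlib parser behaves the same way, since XML 1.0 (section 2.11, End-of-Line Handling) requires parsers to normalize CRLF to LF before processing; a carriage return only survives as a character reference:</p>

```python
from xml.etree.ElementTree import XML

# a literal CRLF pair is normalized to a single LF by the parser
assert [ord(c) for c in XML('<root>_\r\n_</root>').text] == [95, 10, 95]

# an escaped carriage return (&#13;) is preserved
assert [ord(c) for c in XML('<root>_&#13;\n_</root>').text] == [95, 13, 10, 95]
```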
|
<python><lxml>
|
2024-07-12 09:38:10
| 0
| 3,558
|
Jeyekomon
|
78,739,619
| 6,003,901
|
Best Practice for updating class attribute based on values
|
<p>I have to read a set of csv files which have 5 columns like name, age, address, type and distance. As these column names are strings and I want to iterate over them in my pandas df, I have created a class that stores each name in a variable, so that if a name changes in these files, I only need to update it in one location and it works for all cases.</p>
<pre><code>class PersonDetailName:
    name = "name"
    age = "age"
    type = "type"
    address = "address"
    distance = "distance"
</code></pre>
<p>Now, to access this in iteration, I can directly call it like <code>row[PersonDetailName.name]</code> and it handles the name automatically.</p>
<p>Some files still have these five fields, however instead of <code>age</code> they have <code>personAge</code> and instead of <code>address</code> they have <code>FullAddress</code>. How can I handle this without changing much in the code? My approaches for this are:</p>
<h3>Dynamic Column Name Mapping</h3>
<pre><code>class PaymentDetailName:
    age = "age"
    address = "address"
    dynamic_col_name_mapping = {
        "type1": ("PersonAge", "PersonAddress"),
        "type2": ("PersonAge1", "PersonAddress1"),
        "default": ("age", "address"),
    }

    @classmethod
    def update_payment_name(cls, type):
        cls.age, cls.address = cls.dynamic_col_name_mapping[type]
</code></pre>
<h3>Updating name list in iterator</h3>
<pre><code>age, address = PaymentDetailName.dynamic_col_name_mapping[type]
age = row[age]
type = row[PaymentDetailName.type]
</code></pre>
<p>I want to understand which of these solutions is better, and whether there is another method or programming technique I can use to make it more dynamic with less code. The PaymentDetailName class should remain; the rest I can update to fulfill this requirement.
Creating a base class with two subclasses for this seems like overkill to me.
Please share your insights for a better class design.</p>
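<p>One possible sketch (the alias names below are illustrative examples from the question, not a complete list): keep the canonical names in one place and normalize variant column names as each row is read, instead of mutating class attributes per file type:</p>

```python
# Sketch: canonical column names plus an alias table, so a variant name
# in any file resolves to the one canonical name defined here.

class PersonDetailName:
    name = "name"
    age = "age"
    address = "address"

    # Map every known variant of a column name to its canonical name.
    _aliases = {
        "personAge": age,
        "PersonAge1": age,
        "FullAddress": address,
        "PersonAddress1": address,
    }

    @classmethod
    def canonical(cls, column: str) -> str:
        """Return the canonical name for a possibly-variant column name."""
        return cls._aliases.get(column, column)


def normalize_row(row: dict) -> dict:
    """Rewrite a row's keys to canonical column names."""
    return {PersonDetailName.canonical(k): v for k, v in row.items()}
```

With this, iteration code always uses <code>row[PersonDetailName.age]</code> after normalizing, and new file variants only require a new alias entry.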
|
<python><class><oop><metaprogramming>
|
2024-07-12 09:36:15
| 1
| 647
|
abby37
|
78,739,598
| 14,301,545
|
Python - RGBA image non-zero pixels extracting - numpy mask speed up
|
<p>I need to extract non-zero pixels from an RGBA image. The code below works, but because I need to deal with really huge images, any speed-up would be welcome. Getting "f_mask" is the longest task. Is it possible to somehow make things work faster? How can I delete rows whose values are all zero ([0, 0, 0, 0]) faster?</p>
<pre><code>import numpy as np
import time
img_size = (10000, 10000)
img = np.zeros((*img_size, 4), float) # Make RGBA image
# Put some values for pixels [float, float, float, int]
img[0][1] = [1.1, 2.2, 3.3, 4]
img[1][0] = [0, 0, 0, 10]
img[1][2] = [6.1, 7.1, 8.1, 0]
def f_img_to_pts(f_img): # Get non-zero rows with values from whole img array
f_shp = f_img.shape
f_newshape = (f_shp[0]*f_shp[1], f_shp[2])
f_pts = np.reshape(f_img, f_newshape)
f_mask = ~np.all(f_pts == 0, axis=1)
f_pts = f_pts[f_mask]
return f_pts
t1 = time.time()
pxs = f_img_to_pts(img)
t2 = time.time()
print('PIXELS EXTRACTING TIME: ', t2 - t1)
print(pxs)
</code></pre>
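<p>A sketch of one common micro-optimisation (same result, typically faster): since "not all zero" is the same test as "any non-zero", <code>ndarray.any</code> can build the mask directly, without the intermediate <code>f_pts == 0</code> boolean array:</p>

```python
import numpy as np

def f_img_to_pts_fast(f_img):
    # Flatten to (H*W, C) as a view where possible, then keep rows where
    # any channel is non-zero; any(axis=1) is equivalent to
    # ~np.all(f_pts == 0, axis=1) and skips one temporary array.
    f_pts = f_img.reshape(-1, f_img.shape[2])
    return f_pts[f_pts.any(axis=1)]
```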
|
<python><numpy><image>
|
2024-07-12 09:30:09
| 2
| 369
|
dany
|
78,739,573
| 1,335,492
|
Get object properties without including defaults
|
<p>This is almost what I want:</p>
<pre><code>MyList = list(vars(MyObject).keys())
</code></pre>
<p>But it gives me</p>
<p><code>['__module__', '__dict__', '__weakref__', '__doc__']</code></p>
<p>I want a list without those default properties. What's currently the best way of doing that?</p>
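<p>A common sketch for this (assuming <code>MyObject</code> is a class with some user-defined attributes): filter out the dunder names that <code>vars()</code> reports:</p>

```python
class MyObject:
    x = 1
    y = "a"

# vars() on a class includes interpreter machinery such as __module__ and
# __dict__; dropping names that both start and end with "__" leaves only
# the user-defined attributes.
my_list = [k for k in vars(MyObject)
           if not (k.startswith("__") and k.endswith("__"))]
```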
|
<python>
|
2024-07-12 09:25:32
| 0
| 2,697
|
david
|
78,739,565
| 12,439,683
|
How to use a constant object like a literal type-hint?
|
<p>I have a unique and constant object as a special value, which I would like to use for type-hints in different occasions.</p>
<p>Lets assume I have a situation like this:</p>
<pre class="lang-py prettyprint-override"><code>NO_RESULT = object()
def foo() -> Hashable | ???:
try:
result : Hashable = res()
return result
except:
return NO_RESULT
</code></pre>
<p>Here I would like to annotate the return value <code>foo() -> Literal[NO_RESULT] | Hashable</code>. However, this is not proper and I will get warnings as this is not a literal value.</p>
<hr />
<p>I know I could use the following code and annotate it with <code>NoResultType</code></p>
<pre class="lang-py prettyprint-override"><code>NoResultType = NewType("NoResultType", object)
NO_RESULT = NoResultType(object())
</code></pre>
<p>However, this refers to instances of a type, and not a unique object, which is a semantic difference I do not want to have.</p>
<hr />
<p><strong>How do I have to modify <code>NO_RESULT</code> so that I can use it in type-hints as well as a variable?</strong></p>
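<p>One pattern type checkers do accept (a sketch, not the only option): a single-member <code>Enum</code> used as the sentinel, since enum members are valid inside <code>Literal[...]</code>. The <code>res()</code> call is hypothetical, as in the question:</p>

```python
from enum import Enum
from typing import Hashable, Literal, Union

class NoResult(Enum):
    """Single-member enum used as a unique sentinel value."""
    NO_RESULT = "NO_RESULT"

NO_RESULT = NoResult.NO_RESULT

def foo() -> Union[Hashable, Literal[NoResult.NO_RESULT]]:
    try:
        return res()  # `res` is hypothetical, as in the question
    except Exception:
        return NO_RESULT
```

Callers can then narrow with <code>if result is NO_RESULT:</code>, which checkers understand because the member is a literal.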
|
<python><python-typing>
|
2024-07-12 09:24:05
| 1
| 5,101
|
Daraan
|
78,739,512
| 19,472,100
|
VSCode says 'Cannot access attribute "[attribute]" for class "FunctionType"' for Keras model even though it runs without error
|
<p>I'm trying to import a saved Keras model from disk:</p>
<pre class="lang-py prettyprint-override"><code>from keras._tf_keras.keras.models import load_model
model = load_model('data/model.keras')
print(model.summary())
</code></pre>
<p>(Note: <code>keras</code> is imported like this because everything other solution I tried gave an <code>Import 'keras._' could not be resolved</code> error.)</p>
<p>When I run this using <code>py</code> in a virtual environment:</p>
<pre class="lang-bash prettyprint-override"><code>(env) $ py test.py
2024-07-12 18:40:21.267652: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-07-12 18:40:21.268097: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-12 18:40:21.270249: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-07-12 18:40:21.278556: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:479] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-07-12 18:40:21.294538: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:10575] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-07-12 18:40:21.294580: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1442] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-07-12 18:40:21.304639: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-12 18:40:21.785069: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/home/user/projects/python/bonk_bot/env/lib/python3.12/site-packages/keras/src/saving/saving_lib.py:576: UserWarning: Skipping variable loading for optimizer 'adam', because it has 22 variables whereas the saved optimizer has 2 variables.
saveable.load_own_variables(weights_store.get(inner_path))
Model: "sequential"
ββββββββββββββββββββββββββββββββββββββββ³ββββββββββββββββββββββββββββββ³ββββββββββββββββββ
β Layer (type) β Output Shape β Param # β
β‘βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ©
β conv3d (Conv3D) β (None, 1, 1, 158, 32) β 15,073,312 β
ββββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββΌββββββββββββββββββ€
β activation (Activation) β (None, 1, 1, 158, 32) β 0 β
ββββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββΌββββββββββββββββββ€
β conv3d_1 (Conv3D) β (None, 1, 1, 79, 64) β 131,136 β
ββββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββΌββββββββββββββββββ€
β activation_1 (Activation) β (None, 1, 1, 79, 64) β 0 β
ββββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββΌββββββββββββββββββ€
β conv3d_2 (Conv3D) β (None, 1, 1, 79, 64) β 110,656 β
ββββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββΌββββββββββββββββββ€
β activation_2 (Activation) β (None, 1, 1, 79, 64) β 0 β
ββββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββΌββββββββββββββββββ€
β flatten (Flatten) β (None, 5056) β 0 β
ββββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββΌββββββββββββββββββ€
β dense (Dense) β (None, 512) β 2,589,184 β
ββββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββββββββΌββββββββββββββββββ€
β dense_1 (Dense) β (None, 6) β 3,078 β
ββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββ΄ββββββββββββββββββ
Total params: 53,722,100 (204.93 MB)
Trainable params: 17,907,366 (68.31 MB)
Non-trainable params: 0 (0.00 B)
Optimizer params: 35,814,734 (136.62 MB)
None
</code></pre>
<p>Everything works fine and the model prints the summary as expected (apart from the verbose output it gives out at the beginning). However, VSCode gives an error when opened:</p>
<p><a href="https://i.sstatic.net/TMXNB7fJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMXNB7fJ.png" alt="Cannot access attribute 'summary' for class 'FunctionType'
Attribute 'summary' is unknown" /></a></p>
<p>This happens for every function I try to use on <code>model</code>. The editor I am currently using is Code-OSS version 1.91.0, and my linter is Pyright. I made sure that the editor was using the correct interpreter inside the Python virtual environment. I doubt it's a Keras problem since the editor does this. Does anyone know why my editor thinks it's an error and how to fix it?</p>
|
<python><tensorflow><visual-studio-code><keras><pyright>
|
2024-07-12 09:10:12
| 0
| 328
|
yees_7
|
78,739,444
| 815,170
|
Python pynput not working as expected on osx
|
<p>I'm trying to catch keystrokes in a python script, but it ONLY catches modifier keys (ctrl, cmd, alt) and not ordinary alphanumerical keys. Running on Mac Sonoma 14.1.1, Python 3.10.6.</p>
<pre><code>from pynput.keyboard import Key, Listener
def on_press(key):
try:
print(f'\nYou Entered {key.char}')
except AttributeError:
print(f'\nYou Entered {key}')
if key == Key.delete:
# Stop listener
return False
# Collect all event until released
with Listener(on_press=on_press) as listener:
listener.join()
</code></pre>
<p>The output I get is:</p>
<pre><code>You Entered Key.cmd
You Entered Key.alt
t
e
s
t
You Entered Key.cmd
You Entered Key.alt
</code></pre>
<p>I.e., it's not registering the normal keys. The same problem occurs with the <code>keyboard</code> module, but with the following error:</p>
<pre><code>ValueError: ("Key 's' is not mapped to any known key.", ValueError('Unrecognized character: s'))
</code></pre>
|
<python><pynput>
|
2024-07-12 08:58:09
| 0
| 1,336
|
Hampus Brynolf
|
78,739,349
| 710,955
|
Expand templates to text in wikipedia
|
<p>I have some custom wikitext (which also includes templates) and need to convert it to plain text.
To do this, I use <a href="https://bitbucket.org/wmj/wikiexpand/src/master/" rel="nofollow noreferrer">wikiexpand</a>.</p>
<p>In the README file it is indicated:</p>
<p><em>Modules written in Lua and executed using {{#invoke:}} are not recognised, but can be replaced by implementing callable templates (that is, functions that render Wikicode)</em></p>
<p>The documentation in <a href="https://bitbucket.org/wmj/wikiexpand/src/8fe22d35fb0daac5b9ff6aef539f361cd4abdd85/wikiexpand/expand/templates.py#lines-37" rel="nofollow noreferrer"><code>wikiexpand.expand.templates.TemplateStore.callable_templates</code></a> is not very explicit.</p>
<p>Would it be possible to help me by giving me an example implementation?</p>
<p>For example, for the following wikicode:</p>
<pre><code>{{ISBN|978-2-84066-599-1}}
{{Date de naissance|11|novembre|1866}}
</code></pre>
|
<python><wikipedia><wikitext>
|
2024-07-12 08:38:53
| 1
| 5,809
|
LeMoussel
|
78,739,294
| 13,562,186
|
Tkinter Plot becomes blurry when in a frame
|
<p><strong>TEST_PLOT.py</strong></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import tkinter as tk
class TestPlot:
def __init__(self, master=None):
self.master = master
self.create_plot()
def create_plot(self):
# Generate x values
x = np.linspace(-10, 10, 400)
# Compute y values
y = x**2
# Create the plot
self.fig, self.ax = plt.subplots(figsize=(8, 6))
self.ax.plot(x, y, label='$y = x^2$')
self.ax.set_title('Plot of $y = x^2$')
self.ax.set_xlabel('x')
self.ax.set_ylabel('y')
self.ax.legend()
self.ax.grid(True)
if self.master:
# Embed the plot in the Tkinter frame
self.canvas = FigureCanvasTkAgg(self.fig, master=self.master)
self.canvas.get_tk_widget().pack(fill=tk.BOTH, expand=True)
self.canvas.draw()
else:
# Show the plot standalone
plt.show()
if __name__ == "__main__":
    TestPlot()  # no master: shows the plot standalone
</code></pre>
<p><a href="https://i.sstatic.net/UDLpr0tE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDLpr0tE.png" alt="enter image description here" /></a></p>
<p>All nice and sharp.</p>
<p>But when I utilise this script in a frame it becomes slightly fuzzy:</p>
<p><strong>test1.py</strong></p>
<pre><code>import tkinter as tk
from TEST_PLOT import TestPlot # Assuming the above code is saved in test_plot.py
class Test1(tk.Tk):
def __init__(self):
super().__init__()
self.initUI()
def initUI(self):
self.title('Test1 - Main Window with Plot')
self.geometry('800x600')
# Create a single frame for the plot
frame = tk.Frame(self, bg='lightblue', bd=5, relief=tk.RIDGE)
frame.pack(fill=tk.BOTH, expand=True)
# Embed TestPlot within the frame
test_plot = TestPlot(frame)
test_plot.master.pack(fill=tk.BOTH, expand=True)
if __name__ == '__main__':
app = Test1()
app.mainloop()
</code></pre>
<p><a href="https://i.sstatic.net/IQn6VXWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IQn6VXWk.png" alt="enter image description here" /></a></p>
<p>Sure it's some compression issue but not sure what is the best way to resolve.</p>
<p>Would like to use Tkinter as default with Python. Though I am open to better graphic options.</p>
<p>When I use this in my actual program there's also something very odd happening, in that the Tkinter icon itself becomes blurry:</p>
<p><a href="https://i.sstatic.net/Fy3wtu9V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fy3wtu9V.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/zO3JUgu5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zO3JUgu5.png" alt="enter image description here" /></a></p>
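<p>For reference, blurriness like this on Windows is often display scaling (e.g. 125%) upscaling a non-DPI-aware process. A commonly used workaround, sketched below, is to opt the process into DPI awareness before creating any windows; <code>SetProcessDpiAwareness</code> is a Win32 API, and whether it resolves this specific setup is an assumption:</p>

```python
import sys

def enable_windows_dpi_awareness():
    """Opt the process into DPI awareness on Windows so that a 125%/150%
    display scale does not let the OS upscale (and blur) the Tk window.
    No-op on other platforms. This is a common workaround, not a
    documented Tk or matplotlib API."""
    if sys.platform == "win32":
        import ctypes
        try:
            # Value 1 = system DPI aware; 2 would be per-monitor (Win 8.1+)
            ctypes.windll.shcore.SetProcessDpiAwareness(1)
        except (AttributeError, OSError):
            # Fallback for older Windows versions
            ctypes.windll.user32.SetProcessDPIAware()
```

Calling this once before <code>tk.Tk()</code> (or before embedding the Matplotlib canvas) keeps both the plot and the window icon at native resolution.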
|
<python><matplotlib><tkinter><tkinter-canvas>
|
2024-07-12 08:26:34
| 0
| 927
|
Nick
|
78,739,236
| 3,909,896
|
How to configure pypi repo authentication for an Azure DevOps Artifact Feed in databricks.yml for Databricks Asset Bundles?
|
<p>I have a python_wheel_task in one of my asset bundle jobs which executes the whl file that is being built from my local repository from which I deploy the bundle. This process works fine in itself.</p>
<p>However - <strong>I need to add a custom dependency whl file</strong> (another repo, packaged and published to my Azure Artifact Feed) to the task as a library in order for my local repo's whl file to work completely.</p>
<p>I tried to define it as follows:</p>
<pre><code> - task_key: some_task
job_cluster_key: job_cluster
python_wheel_task:
package_name: my_local_package_name
entry_point: my_entrypoint
named_parameters: { "env": "dev" }
libraries:
- pypi:
package: custom_package==1.0.1
repo: https://pkgs.dev.azure.com/<company>/<some-id>/_packaging/<feed-name>/pypi/simple/
- whl: ../../dist/*.whl # my local repo's whl: being built as part of the asset-bundle
</code></pre>
<p>When I deploy and run the bundle, I get the following error in the job cluster:</p>
<pre><code>24/07/12 07:49:01 ERROR Utils:
Process List(/bin/su, libraries, -c, bash /local_disk0/.ephemeral_nfs/cluster_libraries/python/python_start_clusterwide.sh
/local_disk0/.ephemeral_nfs/cluster_libraries/python/bin/pip install 'custom_package==3.0.1'
--index-url https://pkgs.dev.azure.com/<company>/<some-id>/_packaging/<feed-name>/pypi/simple/
--disable-pip-version-check) exited with code 1, and Looking in indexes:
https://pkgs.dev.azure.com/<company>/<some-id>/_packaging/<feed-name>/pypi/simple/
24/07/12 07:49:01 INFO SharedDriverContext: Failed to attach library
python-pypi;custom_package;;3.0.1;https://pkgs.dev.azure.com/<company>/<some-id>/_packaging/<feed-name>/pypi/simple/
to Spark
</code></pre>
<p>I suppose I need to configure a personal access token / authentication for the feed somewhere, but I cannot find anything in the Databricks documentation about <a href="https://docs.databricks.com/en/dev-tools/bundles/library-dependencies.html#pypi-package" rel="nofollow noreferrer">library dependencies</a>. There is only one sentence about adding a custom index and nothing about authentication.</p>
<p>How can I get this to work?</p>
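<p>One approach sometimes used (a sketch only, not verified against this particular feed): Azure Artifacts accepts HTTP basic auth in the index URL, with any user name and a personal access token as the password, and bundle variables can substitute the token so it is not hardcoded. The variable name <code>azure_devops_pat</code> is hypothetical:</p>

```yaml
# Hypothetical sketch: token injected via a bundle variable
libraries:
  - pypi:
      package: custom_package==1.0.1
      repo: https://user:${var.azure_devops_pat}@pkgs.dev.azure.com/<company>/<some-id>/_packaging/<feed-name>/pypi/simple/
  - whl: ../../dist/*.whl
```

The variable itself can then be supplied at deploy time (for example from a secret) rather than committed to the repository.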
|
<python><databricks><databricks-asset-bundle>
|
2024-07-12 08:09:33
| 1
| 3,013
|
Cribber
|
78,738,969
| 1,371,481
|
pip installing packages via external tools
|
<p>I am trying to install an internal python package via</p>
<pre class="lang-bash prettyprint-override"><code>pip install -r requirements.txt
</code></pre>
<h5>requirements.txt</h5>
<pre class="lang-bash prettyprint-override"><code>black
numpy
scipy
interpackage @ git+ssh://git@github.com/.....
otherpackage @ git+ssh://git@github.com/....
tqdm
</code></pre>
<ul>
<li><p>Due to security reasons, the cluster has disabled <code>port forwarding</code>.</p>
</li>
<li><p>To work with <em>internal</em> & <em>external</em> repositories I am using <a href="https://cli.github.com/" rel="nofollow noreferrer">gh-cli</a> tool.</p>
</li>
</ul>
<p>How can I modify the <strong>requirements.txt</strong> to use the <code>gh-cli</code> tool when trying to clone external repositories?</p>
<p>Do I need to modify all <strong>requirements.txt</strong> in my package <code>dependencies-chain</code> ?</p>
<p>(E.g. do I need to manually modify <code>requirements.txt</code> for <strong>dep1-pkg</strong> & <strong>dep2-pkg</strong>)</p>
<pre><code>                       ------------
                  |--> | dep1-pkg |
 ------------     |    ------------
| main-pkg  |-----|
 ------------     |    ------------
                  |--> | dep2-pkg |
                       ------------
</code></pre>
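<p>A sketch of one workaround (assuming HTTPS egress to GitHub is allowed): a global git URL rewrite makes git, and therefore pip, fetch every <code>git+ssh://git@github.com/...</code> requirement over HTTPS instead, so none of the downstream <code>requirements.txt</code> files in the dependency chain need editing:</p>

```shell
# Rewrite the ssh form that the requirements.txt files use to https;
# pip hands the URL (minus the "git+" prefix) to git, so this rewrite
# covers the whole dependency chain without touching any file.
git config --global url."https://github.com/".insteadOf "ssh://git@github.com/"

# Credentials for the HTTPS fetches can then come from gh's credential
# helper (hypothetical step, normally wired up via: gh auth setup-git).
```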
|
<python><git><pip><github-cli>
|
2024-07-12 06:49:10
| 0
| 1,254
|
DOOM
|
78,738,850
| 205,147
|
Convert trained and pickled sklearn ColumnTransformer from sklearn 1.1.2 to 1.5.1
|
<p>I have a Scikit Learn ColumnTransformer containing a number of other transformers (passthrough, a custom transformer and a OneHotEncoder) that has been "trained" (transformer.fit() has been run on data) and stored in a Pickle file, created with Scikit Learn 1.1.2.</p>
<p>The OneHotEncoder seems to be the biggest challenge as the passthrough and the custom transformer should be compatible between the versions.</p>
<p>I need to find a way to convert this transformer from the old Scikit Learn version (1.1.2) to the newest version (1.5.1). How can this be done while maintaining all information stored in the transformers (what they have "learned")?</p>
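<p>A hedged sketch of one migration path: in an environment with the old scikit-learn, unpickle once and read out what the OneHotEncoder learned (its <code>categories_</code>); in the new environment, rebuild an encoder with those categories pinned so nothing is re-learned. The category values below stand in for whatever the old pickle contained:</p>

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Hypothetical: the category lists recovered from the old pickle's
# fitted encoder (e.g. old_ct.named_transformers_[...].categories_).
learned_categories = [np.array(["a", "b", "c"])]

# Passing `categories` pins the mapping instead of inferring it, so a
# fit on any one valid row reproduces the old encoder's column layout.
enc = OneHotEncoder(categories=learned_categories, handle_unknown="ignore")
enc.fit(np.array([["a"]]))
```

The same idea (extract fitted attributes in the old environment, re-create with them pinned in the new one) applies to the other transformers inside the ColumnTransformer; the passthrough and custom transformer may simply transfer as-is if their code is version-independent.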
|
<python><scikit-learn><migration><pickle>
|
2024-07-12 06:16:43
| 0
| 2,229
|
Hendrik Wiese
|
78,738,602
| 679,824
|
Handling diamond inheritance super class invocations in Python
|
<p>I have a class setup that looks like below</p>
<pre class="lang-py prettyprint-override"><code>from abc import abstractmethod
class Player:
def __init__(self, name, age):
self._player_name = name
self._age = age
@property
def player_name(self):
return self._player_name
@property
def age(self):
return self._age
@abstractmethod
def _prefix(self) -> str:
pass
@abstractmethod
def _suffix(self) -> str:
pass
def pretty_print(self) -> str:
return f"{self.player_name} is a {self._prefix()} and is {self.age} years old. Accomplishments: {self._suffix()}"
class Footballer(Player):
def __init__(self, name, age, goals_scored):
super().__init__(name, age)
self._goals_scored = goals_scored
@property
def goals_scored(self):
return self._goals_scored
def _prefix(self) -> str:
return "Football Player"
def _suffix(self) -> str:
return f"Goals Scored {self._goals_scored}"
class CarRacer(Player):
def __init__(self, name, age, races_won, laps):
super().__init__(name, age)
self._races_won = races_won
self._laps = laps
@property
def laps(self):
return self._laps
@property
def races_won(self):
return self._races_won
def _prefix(self) -> str:
return "Formula 1 racer"
def _suffix(self) -> str:
return f"Races won: {self.races_won}, Laps count: {self.laps}"
class AllRounder(Footballer, CarRacer):
def __init__(self, name, age, goals_scored, races_won, laps):
super().__init__(name, age, goals_scored)
super(CarRacer, self).__init__(name, age, races_won, laps)
def _prefix(self) -> str:
return "All Rounder"
def _suffix(self) -> str:
return f"{Footballer._prefix(self)}, {CarRacer._prefix(self)}"
</code></pre>
<p>Now within my main method, I am doing the following:</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == '__main__':
all_rounder = AllRounder("Jack", 30, 150, 200, 1000)
print(all_rounder.pretty_print())
</code></pre>
<p>When the instantiation kicks in, I keep hitting the below error</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "/Users/kmahadevan/githome/playground/python_projects/playground/pythonProject/oops/diamond_inheritance.py", line 79, in <module>
all_rounder = AllRounder("Jack", 30, 150, 200, 1000)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kmahadevan/githome/playground/python_projects/playground/pythonProject/oops/diamond_inheritance.py", line 68, in __init__
super().__init__(name, age, goals_scored)
File "/Users/kmahadevan/githome/playground/python_projects/playground/pythonProject/oops/diamond_inheritance.py", line 31, in __init__
super().__init__(name, age)
TypeError: CarRacer.__init__() missing 2 required positional arguments: 'races_won' and 'laps'
</code></pre>
<p>I am not entirely sure as to how should the <code>__init__</code> method within <code>AllRounder</code> be called so that both the base classes are invoked.</p>
<p>A pictorial representation of the class hierarchy for a bit more better understanding</p>
<p><a href="https://i.sstatic.net/O9ltZoH1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O9ltZoH1.png" alt="enter image description here" /></a></p>
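<p>A sketch of the usual cooperative-inheritance fix: every <code>__init__</code> accepts <code>**kwargs</code> and calls <code>super().__init__</code> exactly once, letting the MRO (AllRounder β†’ Footballer β†’ CarRacer β†’ Player) thread a single call chain. Properties and the abstract methods are omitted for brevity:</p>

```python
class Player:
    def __init__(self, name, age, **kwargs):
        super().__init__(**kwargs)  # continues the MRO chain (ends at object)
        self._player_name = name
        self._age = age

class Footballer(Player):
    def __init__(self, goals_scored, **kwargs):
        super().__init__(**kwargs)  # next in MRO, not necessarily Player
        self._goals_scored = goals_scored

class CarRacer(Player):
    def __init__(self, races_won, laps, **kwargs):
        super().__init__(**kwargs)
        self._races_won = races_won
        self._laps = laps

class AllRounder(Footballer, CarRacer):
    def __init__(self, name, age, goals_scored, races_won, laps):
        # One call; each class peels off its own keyword arguments.
        super().__init__(name=name, age=age, goals_scored=goals_scored,
                         races_won=races_won, laps=laps)
```

Each class consumes only its own keywords and forwards the rest, so no explicit <code>super(CarRacer, self)</code> call is needed.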
|
<python><python-3.x><oop><diamond-problem>
|
2024-07-12 04:29:54
| 1
| 14,756
|
Krishnan Mahadevan
|
78,738,462
| 3,793,935
|
Regular expression to extract monetary values from an invoice text
|
<p>I have a regular expression:
<code>\b[-+]?(?:\d{1,3}\.)(?:\d{3}\.)*(?:\d*)</code> (Python) that matches numeric values in strings like this:</p>
<pre><code>Amount: 12.234.55222 EUR
Some Text 123.222.22 maybe more text
1.245.455.2
22.34565 Could be at the beginning
It could be at the end of a string 21.1
221. It could be a number like this too (for US invoices, I saw a lot of different stuff)
</code></pre>
<p>Which is what I want.
But it also matches the first 2 parts of a date like this: <strong>08.05</strong>.2023 <br><br>
I know this is happening because of the first and the last group, but I don't know how to prevent that.
I only want to match values that stand by themselves.
<br> Can somebody point me in the right direction?</p>
<p><strong>Edit:</strong>
I forgot to mention that I've tried it with a negative look behind, but that didn't work:</p>
<pre><code>\b([-+]?(?:\d{1,3}\.)(?:\d{3}\.)*(?:\d*))(?!(?:\.d{4}))
</code></pre>
<p>Maybe I'm doing the look behind wrong?</p>
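<p>One way to rule dates out, sketched below: forbid the match from being preceded by <code>digit.</code> (which blocks the <strong>05.2023</strong> tail of a date) and from being followed by a digit or <code>.digit</code> (which blocks the <strong>08.05</strong> head). The exact pattern is an illustration, not a guaranteed fit for every invoice format:</p>

```python
import re

# (?<!\d\.)  not preceded by "digit."   -> rejects the tail of a date
# (?!\.?\d)  not followed by [.]digit   -> rejects the head of a date
AMOUNT_RE = re.compile(r"(?<!\d\.)[-+]?\b\d{1,3}\.(?:\d{3}\.)*\d*(?!\.?\d)")

def find_amounts(text: str):
    """Return the standalone dotted numbers in `text`."""
    return AMOUNT_RE.findall(text)
```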
|
<python><regex>
|
2024-07-12 03:28:14
| 2
| 499
|
user3793935
|
78,738,178
| 816,566
|
Why does shutil.copytree() and distutils.dir_util.copy_tree() copy the contents of a source directory, not the directory itself?
|
<p>I'm using Python 3.11 on Windows. I have a directory "comfyui\js". I want to copy the directory "js" from "comfyui" to a destination "T:\temp" so that the result is "T:\temp\js".</p>
<p>I have found that both <code>distutils.dir_util.copy_tree("comfyui/js","T:/temp")</code> and <code>shutil.copytree("comfyui/js","T:/temp")</code> copy the contents under "comfyui\js" into "T:\temp" but do not create "T:\temp\js".</p>
<pre><code>copy_tree("L:/comfyui/js", "T:/temp")
['T:/temp\\green_diamond_00.png', 'T:/temp\\green_pixel_00.png', 'T:/temp\\green_square_8x8.png', 'T:/temp\\image_transceiver_controller.js', 'T:/temp\\magic_lamp_160x86.png', 'T:/temp\\pict_cha_mockup.html']
</code></pre>
<p>I'm pretty sure I can work around this with something like:</p>
<pre><code> if os.path.isdir(src_path):
copy_tree(src_path, f"{dest_dir}/{src_path.name}")
</code></pre>
<p>But this is not what I expected from reading the docs on both functions, and I don't get why I have to treat directories differently from files.</p>
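<p>For what it's worth, a small helper along the lines of the workaround above makes the <code>cp -r</code>-style behaviour explicit (the function name is made up for illustration):</p>

```python
import shutil
from pathlib import Path

def copy_dir_into(src, dest_dir):
    """Copy the directory `src` itself into `dest_dir` (like `cp -r`),
    by appending src's final name to the destination path."""
    src = Path(src)
    target = Path(dest_dir) / src.name
    # dirs_exist_ok=True (Python 3.8+) tolerates an existing target dir.
    shutil.copytree(src, target, dirs_exist_ok=True)
    return target
```

Note that <code>distutils</code> is deprecated and removed in Python 3.12, so <code>shutil.copytree</code> is the safer long-term choice.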
|
<python><windows><path>
|
2024-07-12 00:59:14
| 0
| 1,641
|
Charlweed
|
78,738,067
| 1,806,566
|
In python, when raising an exception from an except block, is there a way to stop a traceback?
|
<p>This code:</p>
<pre><code>def abc():
raise ValueError()
def xyz():
try:
raise TypeError()
except TypeError:
abc()
xyz()
</code></pre>
<p>creates the following traceback:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 8, in xyz
TypeError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 12, in <module>
File "<stdin>", line 10, in xyz
File "<stdin>", line 4, in abc
ValueError
</code></pre>
<p>The issue is that xyz() realizes that whatever it tries first may not succeed (in this case, it just raises TypeError), and it calls abc() as a backup plan. The fact we raised an exception in the body of xyz() is of no consequence, and if it shows up in a chain of exceptions, it's just misleading noise.</p>
<p>That we got an exception in abc(), however, is a real error, and I want to see it. I would like to find a way to print abc's exception without seeing xyz's exception. The problem is that only xyz knows this is the case.</p>
<p>abc() could easily suppress the incoming chain of exceptions with <code>raise ValueError() from None</code>, but abc() doesn't know about xyz, so it doesn't know that its caller is handling an inconsequential exception, and thus whether 'from None' is a good idea (for all it knows, its caller could have hit a real error).</p>
<p>Similarly, the caller of xyz() could decide just to print the last exception without the rest of a chain, but the caller of xyz() doesn't necessarily know what xyz() is doing internally.</p>
<p>I think the only good place to handle this in inside xyz() itself, but I don't see a way of doing that short of not calling abc() from an except block. That's certainly possible, but the code gets crufty, particularly in larger examples.</p>
<p>Is there a way that xyz can call abc from its except block but not have its own exception "in the chain"?</p>
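<p>For reference, one place xyz() itself can cut the chain without restructuring the control flow is to catch abc()'s exception and re-raise it with <code>from None</code>; this is only a sketch of that idea:</p>

```python
def abc():
    raise ValueError("real error")

def xyz():
    try:
        raise TypeError("inconsequential first attempt")
    except TypeError:
        try:
            abc()
        except ValueError as exc:
            # `from None` severs the implicit chain here, in the one
            # function that knows the TypeError was just a failed plan A.
            raise exc from None
```

The re-raised exception has <code>__suppress_context__</code> set, so the printed traceback shows only the ValueError, not the "During handling of the above exception" block.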
|
<python><try-except>
|
2024-07-11 23:50:44
| 1
| 1,241
|
user1806566
|
78,737,996
| 1,006,183
|
Set Pydantic BaseModel field type dynamically when instantiating an instance?
|
<p>With Pydantic is it possible to build a model where the type of one of its fields is set via an argument when creating an instance?</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>class Animal(BaseModel):
name: str
class Dog(Animal):
    ...  # more custom fields
class Cat(Animal):
    ...  # more custom fields
class Person(BaseModel):
name: str
pet: Cat | Dog
person_args = {
"name": "Dan",
# set which type to use via some kind of argument?
"_pet_type": Dog,
"pet": {
"name": "Fido",
# ...
},
}
person = Person(**person_args)
</code></pre>
<p>I'm aware that typically you could use a discriminated union to resolve which type to use. However in this particular case the information for which type I should validate against exists outside of this set of data in another, related model.</p>
<p>Alternately could I use some kind of private field on the types being discriminated that I set but isn't included in validated output? Something like:</p>
<pre class="lang-py prettyprint-override"><code>person_args = {
"name": "Dan",
"pet": {
"_pet_type": Dog, # exclude from output
"name": "Fido",
# ...
},
}
person = Person(**person_args)
</code></pre>
<p>I'd like to avoid use of a custom validator as I'm using FastAPI and I want the potential types to be reflected properly in the OpenAPI schema.</p>
<p>I'm currently on Pydantic v1.x, but would consider upgrading to v2 if it would help me solve this issue.</p>
|
<python><fastapi><pydantic><pydantic-v2>
|
2024-07-11 23:11:04
| 1
| 11,485
|
Matt Sanders
|
78,737,947
| 5,615,873
|
Pyglet 'window.set_location()' method does not work properly
|
<p>A most weird problem occurred to me a couple of days ago during my exploration of the pyglet package: while <em>window.set_location(x,y)</em>, window centering on the screen etc. were working fine, at a certain point when I wanted to center a pyglet window on the Windows screen as usual, the window was totally off center. See the image below.</p>
<p><a href="https://i.sstatic.net/65VxVTpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65VxVTpB.png" alt="enter image description here" /></a></p>
<p>I then started to investigate the problem in various ways. One of them was to re-install pyglet. The problem insisted.</p>
<p>I also searched on the Web and I couldn't find a single reference about that problem.</p>
<p>I did a few practical tests and found out that there was a constant difference between the calculated (expected) window location and the actual one (what appeared on screen). This constant, or factor, was <strong>1.25</strong>. The coordinates on the screen were always increased by that factor in relation to the calculated coordinates. I thought this had to do with DPI, but DPI in pyglet seems to apply only to text and more specifically to fonts.
So what the heck is this (pixelling?) factor that modifies positioning on screen, and how is it put into effect?</p>
<p>Does anyone know anything about this subject?</p>
<p>Note that this problem is most probably not reproducible, since it occurs unexpectedly and I have no clue why or how. However, I can provide simple code for anyone who would like to try it (it will most probably work fine!):</p>
<pre><code>import pyglet
screen_w, screen_h = 1920, 1080 # Windows screen size
window_w, window_h = 400,400 # pyglet window size
window = pyglet.window.Window(window_w, window_h)
window.set_location((screen_w-window_w)//2, (screen_h-window_h)//2) # Center the window on the screen
pyglet.app.run()
</code></pre>
|
<python><pyglet>
|
2024-07-11 22:51:16
| 1
| 3,537
|
Apostolos
|
78,737,927
| 801,902
|
Playing an audio file on server from Django
|
<p>I have a Raspberry Pi that I have hooked up to a door chime sensor, so it plays different sounds as people enter the building. Right now, I just made a short script to play those sounds from whatever is in a directory, and it works fine.</p>
<p>I decided that it might be easier to upload sounds and organize their play order if I set up a web server, and since I tend to use Django for all web servers, I thought I could get it to work. I know it's a pretty big hammer for such a small nail, but I use it regularly, so it's easy for me to use.</p>
<p>This code, when I put it in the Django InteractiveConsole, plays just fine. When I try to call it from a PUT request in a view, it won't play the sound, but it also doesn't throw any errors. This is the case both on my computer and the Raspberry Pi.</p>
<pre><code>>>> import vlc
>>> media_player = vlc.MediaPlayer()
>>> media = vlc.Media("/home/pi/chime/media/clips/clip1.mp3")
>>> media_player.set_media(media)
>>> media_player.play()
</code></pre>
<p>Is there something that would prevent these kinds of calls from running in a django view? Is there a way to work around it?</p>
<p><strong>EDIT: Django code example</strong></p>
<pre><code>class ClipsList(View):
template_name = "clips/clip_list.html"
    # Ensure we have a CSRF cookie set
@method_decorator(ensure_csrf_cookie)
def get(self, request):
ctx = {
'object_list': Clip.objects.all().order_by('order'),
}
return render(self.request, self.template_name, context=ctx)
# Process POST AJAX Request
def post(self, request):
if request.headers.get('x-requested-with') == 'XMLHttpRequest':
try:
# Parse the JSON payload
data = json.loads(request.body)[0]
# Loop over our list order. The id equals the question id. Update the order and save
for idx, row in enumerate(data):
pq = Clip.objects.get(pk=row['id'])
pq.order = idx + 1
pq.save()
except KeyError:
HttpResponse(status="500", content="Malformed Data!")
return JsonResponse({"success": True}, status=200)
else:
return JsonResponse({"success": False}, status=400)
def put(self, request, pk):
if request.headers.get('x-requested-with') == 'XMLHttpRequest':
try:
data = json.loads(request.body)
clip = Clip.objects.get(pk=pk)
media_player = vlc.MediaPlayer()
media = vlc.Media(os.path.join(settings.BASE_DIR, str(clip.file)))
media_player.set_media(media)
media_player.play()
sleep(3)
media_player.stop()
except KeyError:
HttpResponse(status="500", content="Malformed Data!")
return JsonResponse({"success": True}, status=200)
else:
return JsonResponse({"success": False}, status=400)
</code></pre>
|
<python><django><raspberry-pi>
|
2024-07-11 22:42:37
| 1
| 1,452
|
PoDuck
|
78,737,908
| 3,272
|
Is it possible to get a dict of fields with FieldInfo from the Pydantic Dataclass?
|
<p>When creating classes by inheriting from BaseModel, I can use the model_fields property to get a dict of field names and FieldInfo, but how can I do this if I use the Pydantic dataclass decorator instead of BaseModel?</p>
<p>When using <code>BaseModel</code> I can do this:</p>
<pre><code>from pydantic import BaseModel, Field
class Person(BaseModel):
    first_name: str = Field(alias='firstName')
    age: int

p = Person(firstName='John', age=42)
p.model_fields
</code></pre>
<p><code>p.model_fields</code> will return <code>dict[str, FieldInfo]</code> with all the information about fields:</p>
<pre><code>{'first_name': FieldInfo(annotation=str, required=True, alias='firstName', alias_priority=2),
 'age': FieldInfo(annotation=int, required=True)}
</code></pre>
<p>I want to get the same info about the fields when using Pydantic's dataclasses:</p>
<pre><code>from pydantic.dataclasses import dataclass
from pydantic import Field
@dataclass
class Person:
    first_name: str = Field(alias='firstName')
    age: int

p = Person(firstName='John', age=42)
# p.model_fields doesn't exist
</code></pre>
<p>I looked at <code>RootModel</code> and there's nothing, I tried <code>TypeAdapter</code> which has <code>core_schema</code> but it has different structure and no <code>FieldInfo</code> objects.</p>
<p>Is there no way to get this info from Pydantic's dataclass?</p>
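For what it's worth, one possible approach (assuming Pydantic v2, where the decorator attaches a `__pydantic_fields__` mapping of `FieldInfo` objects to the decorated class — worth verifying against your installed version; note `age` is given a default here so the plain-dataclass ordering rule is satisfied):

```python
from pydantic import Field
from pydantic.dataclasses import dataclass

@dataclass
class Person:
    first_name: str = Field(alias='firstName')
    age: int = 0   # default added so field ordering stays valid

# Pydantic v2 stores the field metadata on the decorated class itself.
fields = Person.__pydantic_fields__   # dict[str, FieldInfo]
print(fields['first_name'].alias)
```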
|
<python><pydantic><pydantic-v2>
|
2024-07-11 22:33:14
| 1
| 971
|
Michal
|
78,737,894
| 1,806,566
|
In python, how do I evaluate a statement from a string and get the value if it's an expression?
|
<p>In python, I would like to take one statement (as a string) and execute it. If the statement is an expression, I'd like to get the value of the expression.</p>
<p><code>eval()</code> doesn't handle statements, only expressions.
<code>exec()</code> handles statements but doesn't give a way to retrieve a value if the statement is an expression.</p>
<p>It seems that <code>eval(compile(x, '<string>', 'single'))</code> is almost what I need, but it <em>prints</em> a non-None return value instead of returning it.</p>
<p>Is there a way to do this?</p>
|
<python><eval>
|
2024-07-11 22:26:42
| 1
| 1,241
|
user1806566
|
78,737,854
| 3,358,927
|
How to unpivot a Pandas dataframe on two ID fields with multiple sets of columns
|
<p>I have a pandas Dataframe that looks like this:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>pid</th>
<th>fid</th>
<th>FirstReasonName</th>
<th>FirstReasonValue</th>
<th>SecondReasonName</th>
<th>SecondReasonValue</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>'x'</td>
<td>'a'</td>
<td>3</td>
<td>'b'</td>
<td>8.2</td>
</tr>
<tr>
<td>2</td>
<td>'y'</td>
<td>'c'</td>
<td>8</td>
<td>'d'</td>
<td>7</td>
</tr>
</tbody>
</table></div>
<p>Now I want to unpivot it on <code>pid</code> and <code>fid</code> so that it would become this:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>pid</th>
<th>fid</th>
<th>Reason</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>'x'</td>
<td>'a'</td>
<td>3</td>
</tr>
<tr>
<td>1</td>
<td>'x'</td>
<td>'b'</td>
<td>8.2</td>
</tr>
<tr>
<td>2</td>
<td>'y'</td>
<td>'c'</td>
<td>8</td>
</tr>
<tr>
<td>2</td>
<td>'y'</td>
<td>'d'</td>
<td>7</td>
</tr>
</tbody>
</table></div>
<p>How do I do this?</p>
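One concat-based sketch, using the column names from the table above (`pd.lreshape` and `pd.wide_to_long` are alternatives worth exploring): rename each (Name, Value) column pair to common names and stack the resulting frames.

```python
import pandas as pd

df = pd.DataFrame({
    "pid": [1, 2],
    "fid": ["x", "y"],
    "FirstReasonName": ["a", "c"],
    "FirstReasonValue": [3, 8],
    "SecondReasonName": ["b", "d"],
    "SecondReasonValue": [8.2, 7],
})

# Stack each (Name, Value) column pair under common names, then concatenate.
pairs = [("FirstReasonName", "FirstReasonValue"),
         ("SecondReasonName", "SecondReasonValue")]
long_df = pd.concat(
    [df[["pid", "fid", name, value]].rename(columns={name: "Reason", value: "Value"})
     for name, value in pairs],
    ignore_index=True,
).sort_values(["pid", "Reason"]).reset_index(drop=True)
print(long_df)
```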
|
<python><pandas><pivot><unpivot>
|
2024-07-11 22:08:33
| 1
| 5,049
|
ddd
|
78,737,778
| 13,989,935
|
ImportError: DLL load failed while importing _rust: The specified procedure could not be found
|
<p>I have been trying to run a quick start <strong>flask app</strong> for the first time via <strong>PyCharm / VSCode</strong>. My pip is fully upgraded to the latest version, and I have installed requirements.txt in my venv, but when I try to run it, I see this error.</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\app\__init__.py", line 3, in <module>
from flask import Flask, session, current_app
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\venv\lib\site-packages\flask\__init__.py", line 2, in <module>
from .app import Flask as Flask
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\venv\lib\site-packages\flask\app.py", line 15, in <module>
import click
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\app\click\__init__.py", line 1, in <module>
from .utils import create_click_api_client
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\app\click\utils.py", line 1, in <module>
from docusign_click import ApiClient
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\venv\lib\site-packages\docusign_click\__init__.py", line 19, in <module>
from .apis.accounts_api import AccountsApi
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\venv\lib\site-packages\docusign_click\apis\__init__.py", line 6, in <module>
from .accounts_api import AccountsApi
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\venv\lib\site-packages\docusign_click\apis\accounts_api.py", line 24, in <module>
from ..client.api_client import ApiClient
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\venv\lib\site-packages\docusign_click\client\api_client.py", line 25, in <module>
import jwt
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\venv\lib\site-packages\jwt\__init__.py", line 1, in <module>
from .api_jwk import PyJWK, PyJWKSet
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\venv\lib\site-packages\jwt\api_jwk.py", line 7, in <module>
from .algorithms import get_default_algorithms, has_crypto, requires_cryptography
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\venv\lib\site-packages\jwt\algorithms.py", line 12, in <module>
from .utils import (
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\venv\lib\site-packages\jwt\utils.py", line 7, in <module>
from cryptography.hazmat.primitives.asymmetric.ec import EllipticCurve
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\venv\lib\site-packages\cryptography\hazmat\primitives\asymmetric\ec.py", line 11, in <module>
from cryptography.hazmat._oid import ObjectIdentifier
File "C:\Users\abdul\Downloads\Quickstart App-1-python\Quickstart App-1-python\venv\lib\site-packages\cryptography\hazmat\_oid.py", line 7, in <module>
from cryptography.hazmat.bindings._rust import (
ImportError: DLL load failed while importing _rust: The specified procedure could not be found.
</code></pre>
<p>I have tried the following solutions:</p>
<ol>
<li>Uninstall and install cryptography</li>
<li>Install rustc</li>
<li>Upgraded pip to the latest version</li>
</ol>
<p>I'm having trouble finding the exact DLL that is causing this problem; this is my requirements.txt:</p>
<pre><code>astroid==3.1.0
certifi==2024.2.2
cffi==1.16.0
chardet==5.2.0
Click
cryptography==42.0.5
docusign-esign==4.0.0rc1
docusign-rooms==1.3.0
docusign-monitor==1.2.0
docusign-click==1.4.0
docusign-admin==1.4.1
docusign-webforms==1.0.2rc12
docusign-maestro==2.0.0rc1
Flask==2.3.3
Flask-OAuthlib==0.9.6
Flask-Session==0.8.0
flask-wtf==1.2.1
flake8==7.0.0
idna==3.7
isort==5.13.2
itsdangerous==2.2.0
Jinja2>=3.1.3
lazy-object-proxy==1.10.0
MarkupSafe==2.1.5
mccabe==0.7.0
oauthlib==2.1.0
pipenv==2023.12.1
py-oauth2==0.0.10
pycparser==2.22
pylint==3.1.0
python-dateutil==2.8.2
python-dotenv==1.0.1
requests>=2.31.0
requests-oauthlib==1.1.0
six==1.16.0
urllib3>=2.2.1
virtualenv==20.25.3
virtualenv-clone==0.5.7
Werkzeug==2.3.8
wrapt==1.16.0
</code></pre>
<p>What are possible ways to investigate this, or, if anyone has faced a similar error, how can it be fixed?</p>
|
<python><flask><requirements.txt><python-cryptography>
|
2024-07-11 21:37:14
| 1
| 715
|
Abdul Ahad Akram
|
78,737,729
| 10,979,307
|
Make the contents of a table in the header and/or footer RTL using Python docx library
|
<p>I've got a word document that has an empty 1 by 3 table (or a table with any dimension for that matter) in the header.
I want to be able to manipulate the cells of the table using the python-docx library. The language of the document is Farsi, and naturally all the Farsi text must run from right to left. Here is what the code looks like:</p>
<pre class="lang-py prettyprint-override"><code>from docx import Document
document = Document("test.docx")
c = document.sections[0].header.tables[0].rows[0].cells[0]
c.text = "سلام"
document.save("output.docx")
</code></pre>
<p>The problem is no matter what kind of trick I use to change the text direction, I can't get it to work. I've tried using the <code>WD_TABLE_DIRECTION.RTL</code> flag to change the direction of the table or changing the direction of the run using <code>run.font.rtl = True</code> or even using special unicode characters such as <code>u'\u202B'</code> and <code>u'\u202C'</code> to change the direction, but to no success.</p>
<p>I even tried altering the xml related to the header but honestly that's beyond me and all my attempts were unfortunately unsuccessful.</p>
<p>I'm writing this question late at night and I cannot express the amount of hatred I have for my native language right now. So any help would be appreciated.</p>
|
<python><ms-word><docx><python-docx>
|
2024-07-11 21:22:36
| 1
| 761
|
Amirreza A.
|
78,737,488
| 14,301,545
|
Python for loop numpy vectorization
|
<p>Do you have any suggestions on how I can speed up the "f_img_update" function in the code below? I tried adding the numba @jit decorator, but without luck. I guess that numpy vectorization might work, but I don't know enough to make it work on my own :/</p>
<p>Code:</p>
<pre><code>import numpy as np
import time
import random
from random import randrange
dimensions = (100, 50) # [m]
resolution = 5 # [cm] 1,2,4,5,10,20,25,50,100
pix_res = int(100 / resolution)
height = dimensions[0] * pix_res
width = dimensions[1] * pix_res
img_size = (height, width)
# Make "RGBA" image
img = np.zeros((*img_size, 4), np.uint16)
# DATA PREPARATION
xmin = 1700000
ymin = 1400000
zmin = 100
xmax = xmin + dimensions[0]
ymax = ymin + dimensions[1]
zmax = 150
data = []
for i in range(100000):
    xyzi = [random.uniform(xmin, xmax), random.uniform(ymin, ymax), random.uniform(zmin, zmax), randrange(255)]
    data.append(xyzi)
# IMAGE UPDATE
def f_img_update(f_img, f_xmin, f_ymin, f_data, f_pix_per_1m):  # IMAGE UPDATE
    for f_xyzi in f_data:
        f_x_img, f_y_img = f_xyzi[0:2]
        f_x_img = int((f_x_img - f_xmin) * 100 / f_pix_per_1m)
        f_y_img = int((f_y_img - f_ymin) * 100 / f_pix_per_1m)
        f_img[f_x_img][f_y_img] = f_xyzi
    return f_img
t1 = time.time()
img = f_img_update(img, xmin, ymin, data, pix_res)
t2 = time.time()
print("image update time: ", t2 - t1)
</code></pre>
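For reference, a vectorized sketch of the same update (assuming all points fall inside the image; note that when two points map to the same pixel, fancy-index assignment keeps one arbitrary write per pixel rather than strictly the last one in loop order):

```python
import numpy as np

def f_img_update_vec(f_img, f_xmin, f_ymin, f_data, f_pix_per_1m):
    arr = np.asarray(f_data)                               # shape (N, 4): x, y, z, i
    xs = ((arr[:, 0] - f_xmin) * 100 / f_pix_per_1m).astype(np.intp)
    ys = ((arr[:, 1] - f_ymin) * 100 / f_pix_per_1m).astype(np.intp)
    f_img[xs, ys] = arr.astype(f_img.dtype)                # one fancy-indexed write
    return f_img

# Small demo with coordinates that fit into uint16.
img = np.zeros((25, 25, 4), np.uint16)
f_img_update_vec(img, 0, 0, [[0.0, 0.0, 5, 7], [10.2, 20.9, 6, 8]], 100)
print(img[10, 20])
```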
|
<python><numpy><vectorization>
|
2024-07-11 20:03:45
| 1
| 369
|
dany
|
78,736,990
| 2,878,290
|
How to loop over a list of tables containing business keys and pass them to a method to execute the data process?
|
<p>We are doing data validation between source SQL views (e.g. <code>TEST_SCH.VIEWNAME</code>) and target data lake delta views (e.g. <code>schema.deltaview</code>), comparing row counts and column data. As a result, we have to write and execute many separate scripts.</p>
<p>I want a single common script driven by a master list of source and target tables, so that one script executes the comparison for every pair of views. For both views we pass a common business key, used to sort the final compared data and confirm there is no difference in the data.</p>
<p>Below are the scripts we currently use; I would like to optimize them into a single script that takes the list of source and target view names along with the business-key column details.</p>
<p>Sample current scripts for us now:</p>
<pre><code>%run "/SQLSERVER/TEST_ConnectionInfo"
</code></pre>
<pre><code>table_name = "[TEST].[SQL_TABLE_VIEW]"
source = spark.read \
    .format("jdbc") \
    .option("url", jdbcUrl) \
    .option("dbtable", table_name) \
    .option("databaseName", database_name) \
    .option("accessToken", access_token) \
    .option("encrypt", "true") \
    .option("hostNameInCertificate", "*.database.windows.net") \
    .load()
source.createOrReplaceTempView("source_view")
</code></pre>
<pre><code>df = spark.sql(f"""select * from target.deltatableviewname""")
df.createOrReplaceTempView("target")
</code></pre>
<pre><code>Source=spark.sql('select * from source_view')
Target=spark.sql('select * from target')
</code></pre>
<p>The first level of check source and target count should match our scripts.</p>
<p>Below one A-B and B-A concept with source-target data operation :</p>
<pre><code>from pyspark.sql.functions import *
df_DataFromsrcDiff=Source.subtract(target).withColumn("Source", lit("A"))
df_DataFromtargetDiff=target.subtract(Source).withColumn("Target", lit("B"))
</code></pre>
<pre><code>from pyspark.sql.functions import col
difference=df_DataFromtargetDiff.union(df_DataFromsrcDiff)
difference=diffdatadf.sort(col("businesskey for column"))
</code></pre>
<pre><code>print(difference.count()) -- should give 0 count
</code></pre>
<p>The final result will be 0 only, if 0 there is no difference in both view data comparisons.</p>
<p>How can we pass the list of views and their business keys into a single script that performs this data validation?</p>
<p>Kindly suggest a single-script solution for the code above. Note that for each view pair we have to pass one or many business keys to sort the final data difference, so please help us.</p>
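One way to structure this is a config-driven loop: hold the view pairs and their business keys in a list of dicts and run the same comparison function for each entry. A minimal skeleton (the `compare_pair` body would contain the existing JDBC read, subtract, union, and sort logic; the view names, business keys, and reader callables here are placeholders):

```python
# Master list: one entry per source/target pair, with one or many business keys.
VALIDATIONS = [
    {"source": "[TEST].[SQL_TABLE_VIEW]", "target": "target.deltatableviewname",
     "business_keys": ["order_id"]},
    {"source": "[TEST].[OTHER_VIEW]", "target": "target.otherdeltaview",
     "business_keys": ["customer_id", "order_date"]},
]

def compare_pair(cfg, read_source, read_target):
    """Return the number of differing rows between one source/target pair."""
    source_df = read_source(cfg["source"])
    target_df = read_target(cfg["target"])
    diff = (source_df.subtract(target_df)
            .union(target_df.subtract(source_df))
            .sort(*cfg["business_keys"]))
    return diff.count()

def run_all(read_source, read_target):
    """Run every configured validation; a result of 0 means no difference."""
    results = {}
    for cfg in VALIDATIONS:
        results[(cfg["source"], cfg["target"])] = compare_pair(cfg, read_source, read_target)
    return results
```

`read_source` and `read_target` would wrap the existing JDBC and delta reads, so the loop itself stays free of connection details.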
|
<python><apache-spark><pyspark>
|
2024-07-11 17:39:49
| 0
| 382
|
Developer Rajinikanth
|
78,736,942
| 1,423,631
|
Is python gzip.open not fully implemented on Windows?
|
<p>At least on my system (Windows 11 Enterprise, Python 3.12.2), using <code>gzip.open(filename, 'wb')</code> to write a compressed file creates an empty .gz file, despite reporting having written data.</p>
<p>Minimal example:</p>
<pre><code>import os
import gzip
data = b'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.'
written = gzip.open('D:\\tmp\\try1.gz', 'wb').write(data)
compressed_written = os.stat('D:\\tmp\\try1.gz').st_size
print("theoretically written", written, "B, compressed size is", compressed_written, "B")
</code></pre>
<p>Which prints out (on my system): <code>theoretically written 445 B, compressed size is 0 B</code></p>
<p>Is this expected behaviour due to it not being supported on Windows, or is this a bug? (and if so, where should I report it?)
Thanks!</p>
<p>P.S.: Same thing occurs when using a <code>with</code> statement block.</p>
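For what it's worth, `gzip.open` behaves the same on Windows as elsewhere; the gzip trailer and buffered data are only guaranteed on disk once the file object is closed (or flushed). Relying on the unreferenced object being garbage-collected, as in `gzip.open(...).write(data)`, is fragile. A sketch that closes deterministically:

```python
import gzip
import os
import tempfile

data = b'Lorem ipsum dolor sit amet, consectetur adipiscing elit.' * 8

path = os.path.join(tempfile.mkdtemp(), 'try1.gz')
with gzip.open(path, 'wb') as f:   # close() runs when the block exits
    written = f.write(data)

compressed = os.path.getsize(path)
print("written", written, "B, compressed size is", compressed, "B")

# Round-trip check: decompressing returns the original bytes.
with gzip.open(path, 'rb') as f:
    assert f.read() == data
```

If a `with` block still yields an empty file on a particular machine, something external (antivirus, a sync client holding the file, or stat-ing a cached directory entry) is worth ruling out.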
|
<python><windows><gzip>
|
2024-07-11 17:27:59
| 0
| 483
|
Oded R.
|
78,736,839
| 5,881,326
|
How to convert variable length datetime column to datetime dtype in pandas dataframe
|
<p>I have a column of data that is formatted as <code>XXh:XXm:XXs</code> but the hours/minutes/seconds can be different lengths depending on their values</p>
<p>Example:
<code>123h:23m:1s</code>
<code>1h:1m:1s</code>
<code>12h:0m:14s</code></p>
<p>How can I convert a column to datetime with these variable lengths.</p>
<p>I have tried:</p>
<pre class="lang-py prettyprint-override"><code>df['col'] = pd.to_datetime(df['col'], format='HHh:MMm:SSs')
</code></pre>
<p>But it fails on the first row that doesn't match that specific length for HH, MM, SS exactly.</p>
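Two notes: `to_datetime` format codes are `%H`/`%M`/`%S` (and they tolerate variable widths), but a duration like `123h:23m:1s` exceeds 24 hours, so a timedelta is the better target type anyway. One sketch (the `col` name is taken from the question):

```python
import pandas as pd

df = pd.DataFrame({'col': ['123h:23m:1s', '1h:1m:1s', '12h:0m:14s']})

# Pull out the three numeric parts, then build Timedeltas from them.
parts = df['col'].str.extract(r'(?P<h>\d+)h:(?P<m>\d+)m:(?P<s>\d+)s').astype(int)
df['col'] = pd.to_timedelta(parts['h'], unit='h') \
          + pd.to_timedelta(parts['m'], unit='m') \
          + pd.to_timedelta(parts['s'], unit='s')
print(df['col'])
```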
|
<python><pandas>
|
2024-07-11 17:01:47
| 0
| 587
|
Evan Brittain
|
78,736,780
| 8,605,685
|
What is the difference between installing a module with pip/setup.py vs adding it to PYTHONPATH?
|
<p>I have a custom Python module I want to use in another project. This module gets deployed to production as well. I have two options:</p>
<ol>
<li>Add a <code>setup.py</code> and install the module locally with <code>pip</code>.</li>
<li>Add the location of the module to the <code>PYTHONPATH</code> environment variable.</li>
</ol>
<p>What is the difference? I know instinctively that using <code>pip</code> with a <code>setup.py</code> should be preferred, but can't point to any concrete evidence why. Does manually adding to PYTHONPATH have adverse effects (maybe by skipping some other actions <code>pip install</code> does in the background)?</p>
|
<python><pip><python-module><python-packaging>
|
2024-07-11 16:46:47
| 1
| 12,587
|
Salvatore
|
78,736,775
| 13,187,876
|
Load a Registered Model in Azure ML Studio in an Interactive Notebook
|
<p>I'm using Azure Machine Learning Studio and I have an <code>sklearn mlflow</code> model stored in my default datastore (blob storage) which I have then registered as a model asset. How can I load this model inside an interactive notebook to perform some quick model inferencing and testing before deploying this as a batch endpoint.</p>
<p>I have seen a post linked <a href="https://learn.microsoft.com/en-us/answers/questions/1578278/using-registered-model-to-make-predictions-without" rel="nofollow noreferrer">here</a> that suggests downloading the model artefacts locally but I shouldn't need to do this. I should be able to load the model directly from the datastore or the registered asset without the need to duplicate the model in multiple places. I have tried the following without success.</p>
<p><strong>Reading from Registered Model Asset</strong></p>
<pre><code>import mlflow
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(), "<subscription_id>", "<resource_group>", "<workspace_id>")
model = ml_client.models.get("<model_name>", version="1")
loaded_model = mlflow.sklearn.load_model(model.id)
>>> OSError: No such file or directory: ...
</code></pre>
<p><strong>Reading from Datastore</strong></p>
<pre><code>import mlflow
model_path = "<datastore_uri_to_model_folder>"
loaded_model = mlflow.sklearn.load_model(model_path)
>>> DeserializationError: Cannot deserialize content-type: text/html
</code></pre>
|
<python><azure><machine-learning><azure-machine-learning-service><mlflow>
|
2024-07-11 16:45:51
| 1
| 773
|
Matt_Haythornthwaite
|
78,736,687
| 251,420
|
How do I change the logging level for ibkr's ibapi?
|
<p>I am trying to get more detailed logging to the console. I have done it before and seen details like</p>
<pre><code>4948305328 connState: None -> 0
Connecting to 127.0.0.1:4001 w/ id:1
4948305328 connState: 0 -> 1
</code></pre>
<p>I can't seem to get it working now. I have even tried changing the client.py source file like this:</p>
<pre><code>logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG) # << new line
</code></pre>
<p>and I still can't get it to log more details.</p>
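One thing worth checking: editing the library source should not be necessary, and setting a logger's level does nothing visible unless a handler is attached somewhere up the hierarchy. A root-logger sketch (the `"ibapi"` logger name is an assumption based on the package using `logging.getLogger(__name__)`):

```python
import logging

# Attach a console handler to the root logger; force=True (Python 3.8+)
# replaces any handlers configured earlier, so the call is never a no-op.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
    force=True,
)

# Optionally target just the library's logger subtree.
logging.getLogger("ibapi").setLevel(logging.DEBUG)
```

Run this before creating the client so the connection messages are captured.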
|
<python><ib-api>
|
2024-07-11 16:24:56
| 1
| 7,791
|
joels
|
78,736,538
| 9,357,484
|
Cannot use GPU, CuPy is not installed
|
<p>I have a GPU enabled machine.</p>
<p>O.S: Ubuntu 20.04.6 LTS</p>
<p>nvcc version: 12.2</p>
<p>Nvidia Driver Version: 535.183.01</p>
<p>Pytorch version 2.3.1+cu121</p>
<p>spaCy version 3.7.5</p>
<p>Python version 3.8.10</p>
<p>Pipelines : en_core_web_sm (3.7.1)</p>
<p>I am using a virtual environment.</p>
<p>I received the output <strong>True</strong> for the following command</p>
<pre><code>>>>import torch
>>> torch.cuda.is_available()
</code></pre>
<p>However, when I ran</p>
<pre><code>>>>import spacy
>>> spacy.require_gpu()
</code></pre>
<p>I got the error:</p>
<pre><code>File "<stdin>", line 1, in <module>
File "/home/abc/spacy_test_env/lib/python3.8/site-packages/thinc/util.py", line 230, in require_gpu
raise ValueError("Cannot use GPU, CuPy is not installed")
ValueError: Cannot use GPU, CuPy is not installed
</code></pre>
<p>I installed the GPU-enabled version of spaCy with the command <code>pip install -U 'spacy[cuda122]'</code>.
While installing with that command I got a <strong>warning that spacy 3.7.5 does not provide the extra 'cuda122'</strong>.</p>
<p>Can anyone please tell me what is wrong with my installation?</p>
|
<python><pytorch><spacy><nvcc><cupy>
|
2024-07-11 15:49:51
| 1
| 3,446
|
Encipher
|
78,736,499
| 5,842,705
|
Why does multiple sequence of wildcard characters not work in re.search?
|
<p>I am trying to use re.search but with multiple wildcard characters and it does not work. Is there something else I have to do? Or is there a better method?</p>
<pre><code>import re
pattern = '2A-CS-*.GPM.DPR.V*'
file = '/home/files/2A-CS-WFF.GPM.DPR.V9-20240130.20240707-S220512-E220609.058829.V07C.HDF5'
print(re.search(pattern, file)) #returns None
</code></pre>
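In a regular expression, `*` means "zero or more of the preceding token", not "anything" as in shell globs. Either translate the glob into regex syntax or use `fnmatch` directly:

```python
import re
from fnmatch import fnmatch

file = '/home/files/2A-CS-WFF.GPM.DPR.V9-20240130.20240707-S220512-E220609.058829.V07C.HDF5'

# Regex version: escape the literal dots, use .* for "any run of characters".
pattern = r'2A-CS-.*\.GPM\.DPR\.V.*'
print(re.search(pattern, file))

# Glob version: fnmatch matches the whole string, hence the leading '*'.
print(fnmatch(file, '*2A-CS-*.GPM.DPR.V*'))
```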
|
<python><string><matching>
|
2024-07-11 15:41:06
| 1
| 407
|
Charanjit Pabla
|
78,736,498
| 20,122,390
|
Why can't I call Run from asyncio twice in the same function?
|
<p>I am using Celery to process background tasks in my application. Sometimes, I need to process asynchronous functions within those tasks, for which I use asyncio. And in this case, I also need to process an asynchronous function in a finally block. So my code is something like this:</p>
<pre><code>@celery_app.task(name="charlie-search-engine-task", retry_backoff=True, acks_late=True, max_retries=5)
def search_process(
order_priority: str,
subject: str,
content: str,
comments: str,
attachment_names: str,
recipients: List[str],
user_ids: List[int],
department_ids: List[int],
current_user_id: int,
temporal_search_id: int
):
print(f"params: {order_priority}, {subject}, {content}, {comments}, {attachment_names}, {recipients}, {user_ids}, {department_ids}, {current_user_id}, {temporal_search_id}")
try:
"""
attachments will be send to searching elk if all params has been received
because it's meaning users are searching in all params
"""
search_params = SearchDefault(
order_priority=order_priority,
subject=subject,
content=content,
comments=comments,
attachment_names=attachment_names,
recipients=recipients,
user_ids=user_ids,
department_ids=department_ids,
current_user_id=current_user_id,
temporal_search_id=temporal_search_id
)
log.info(
f"[INFO]: A process of SEARCHING was started. PARAMS: {search_params.dict()}"
)
start = timeit.default_timer()
asyncio.run(execute_match_search(search_params=search_params))
stop = timeit.default_timer()
#log.info(f"Execution Time: {stop - start}")
except Exception as e:
log.error(
f"Error in task for searching with temporal_search_id: {temporal_search_id}"
)
log.error(e)
log.error(traceback.format_exc())
finally:
asyncio.run(deliver_message_service.emit_notification(
notifier=NOTIFIER,
action=ActionCodes.NO_MORE_RESULTS.value,
room=search_params.current_user_id,
channel="search-engine",
info={},
title=f"No more found for search {search_params.temporal_search_id}",
temporal_search_id=search_params.temporal_search_id,
))
</code></pre>
<p>In the test executions I have done I have not had any errors with the execute_match_search function, so when it finishes executing it goes to the finally block, where I get this error:</p>
<pre><code>Event loop is closed
</code></pre>
<p>I have a feeling that maybe it is due to calling asyncio.run twice in the same function. Why does this happen? What options do I have to solve it?</p>
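Calling `asyncio.run` twice sequentially is allowed in itself — each call creates and closes a fresh event loop. "Event loop is closed" typically means something created inside the first loop (a client, connection, or future) is being reused in the second. One sketch that avoids the issue entirely: run both steps inside a single `asyncio.run`, moving the try/finally into an async wrapper (the two coroutines here are stand-ins for `execute_match_search` and `emit_notification`):

```python
import asyncio

calls = []

async def execute_match_search():   # stand-in for the real search coroutine
    calls.append("search")

async def emit_notification():      # stand-in for the real notifier coroutine
    calls.append("notify")

async def _task_body():
    try:
        await execute_match_search()
    finally:
        # The same loop is still running here, so any loop-bound objects
        # (clients, futures) created above remain usable.
        await emit_notification()

def search_process():
    asyncio.run(_task_body())       # one event loop for the whole task

search_process()
print(calls)
```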
|
<python><celery><python-asyncio>
|
2024-07-11 15:40:36
| 0
| 988
|
Diego L
|
78,736,419
| 1,420,553
|
Tensorflow Understanding Conv2D
|
<p>Beginner using Tensorflow. Started reading the documentation and checking the examples provided in the website:
<a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D</a>
<a href="https://www.tensorflow.org/tutorials/images/cnn" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/images/cnn</a></p>
<p>From a mathematical point of view I understand the convolution, but it is not clear to me how it is implemented using Conv2D.
From the second link, the convolution is implemented as:</p>
<p><code>layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3))</code></p>
<p>From the convolution point of view, how is the filter defined by only one number? How does the convolution work in TensorFlow?</p>
<p>Thanks,
Gus</p>
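In `Conv2D(32, (3, 3))`, the 32 is the number of filters (output channels), not the filter itself; each filter is a learned tensor spanning the full input depth, so the layer's kernel has shape `(3, 3, in_channels, 32)`. A naive NumPy sketch of what one forward pass computes ('valid' padding, stride 1, random weights standing in for the learned ones):

```python
import numpy as np

h, w, in_ch, n_filters = 32, 32, 3, 32
kh, kw = 3, 3

x = np.random.rand(h, w, in_ch)                      # one input image
kernel = np.random.rand(kh, kw, in_ch, n_filters)    # the learned weights
bias = np.zeros(n_filters)

out = np.zeros((h - kh + 1, w - kw + 1, n_filters))  # 'valid' padding, stride 1
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        patch = x[i:i + kh, j:j + kw, :]             # (3, 3, 3) window
        # Each output channel is a full dot product of the patch
        # with one (3, 3, 3) filter, plus its bias.
        out[i, j] = np.tensordot(patch, kernel, axes=3) + bias

print(out.shape)
```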
|
<python><tensorflow>
|
2024-07-11 15:20:22
| 1
| 369
|
gus
|
78,736,338
| 7,465,462
|
"Exception: Power BI report is not embedded" inside Azure Databricks
|
<p>We are trying to use the <code>powerbiclient</code> package inside an Azure Databricks notebook to get information on reports but we are getting the error <code>Exception: Power BI report is not embedded</code>.
The same code works instead if we use it locally on Visual Studio Code.</p>
<ul>
<li>Azure Databricks cluster: Personal Compute, runtime 15.3</li>
<li>Packages of interest:
<ul>
<li>powerbiclient==3.1.1</li>
</ul>
</li>
</ul>
<p>Here is the code we are using:</p>
<pre><code>!pip install powerbiclient==3.1.1
dbutils.library.restartPython()
</code></pre>
<pre><code>from powerbiclient import Report, models
from io import StringIO
from ipywidgets import interact
import requests
import json
</code></pre>
<p>We tried both authenticating via Device Code Login and Service Principal, but we need to stick with the second option:</p>
<pre><code># # option 1
# from powerbiclient.authentication import DeviceCodeLoginAuthentication
# device_auth = DeviceCodeLoginAuthentication()
# option 2
def azuread_auth(tenant_id: str, client_id: str, client_secret: str, resource_url: str):
    """
    Authenticates Service Principal to the provided Resource URL, and returns the OAuth Access Token
    """
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    payload = f'grant_type=client_credentials&client_id={client_id}&client_secret={client_secret}&resource={resource_url}'
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded'
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    access_token = json.loads(response.text)['access_token']
    return access_token
tenant_id = 'XXX'
client_id = 'YYY'
client_secret = 'ZZZ'
scope = 'https://analysis.windows.net/powerbi/api/.default'
resource_url = 'https://analysis.windows.net/powerbi/api'
token = azuread_auth(tenant_id, client_id, client_secret, resource_url)
</code></pre>
<p>And then we call the report:</p>
<pre><code>group_id = '123-456'
dataset_id = 'abc-def'
report_id = '7g8-h9i'
report = Report(group_id=group_id, report_id=report_id, auth=token)
</code></pre>
<p>But we see that it is not embedded:</p>
<pre><code>print(report._embedded)
# False
</code></pre>
<p>If we try to display the report we obtain nothing:</p>
<pre><code>def loaded_callback(event_details):
print('Report is loaded')
report.on('loaded', loaded_callback)
def rendered_callback(event_details):
print('Report is rendered')
report.on('rendered', rendered_callback)
report.set_size(200, 300)
report
</code></pre>
<p>And if we try to get the pages we get the aforementioned error:</p>
<pre><code>pages = report.get_pages()
Exception Traceback (most recent call last)
File <command-2809091020085831>, line 3
1 report_dict = {}
2 # Get list of pages
----> 3 pages = report.get_pages()
4 for page in pages:
5 report.set_active_page(page['name'])
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-48b6b502-a176-4d52-a15d-9ed2a921ac04/lib/python3.11/site-packages/powerbiclient/report.py:566, in Report.get_pages(self)
560 """Returns pages list of the embedded Power BI report
561
562 Returns:
563 list: list of pages
564 """
565 if not self._embedded:
--> 566 raise Exception(self.REPORT_NOT_EMBEDDED_MESSAGE)
568 # Start getting pages on client side
569 self._get_pages_request = True
Exception: Power BI report is not embedded
</code></pre>
<p>Is there a way to embed PowerBI reports inside Azure Databricks notebooks?</p>
|
<python><powerbi><databricks><azure-databricks>
|
2024-07-11 15:05:21
| 1
| 9,318
|
Ric S
|
78,736,182
| 865,169
|
How do I best send Pydantic model objects via put requests?
|
<p>I am quite new to Pydantic. I have noticed how it is often used in validating input to FastAPI.</p>
<p>I have a project where I need to send something to another application's API endpoint and thought I would try structuring my data via a model class that inherits from Pydantic's <code>BaseModel</code>.</p>
<p>I try something like this (MWE):</p>
<pre class="lang-py prettyprint-override"><code>from uuid import UUID
import requests
from pydantic import BaseModel
class Item(BaseModel):
    id: UUID
    content: str

item = Item(id=UUID(int=1), content="Something")
</code></pre>
<p>Now if I try put'ing the object like this:</p>
<pre class="lang-py prettyprint-override"><code>requests.put("http://localhost", json=item)
</code></pre>
<p>it complains with a TypeError saying that "Object of type Item is not JSON serializable". (It does not matter for the purpose of this demonstration that there is no one listening at 'localhost'.) OK, this is easy enough to work around:</p>
<pre class="lang-py prettyprint-override"><code>requests.put("http://localhost", data=item.model_dump_json())
</code></pre>
<p>It turns out I need to put a list of <code>Item</code>s and then it would be convenient to do something like this, which I cannot due to the error in the first example:</p>
<pre class="lang-py prettyprint-override"><code>requests.put("http://localhost", json=[item, item])
</code></pre>
<p>This becomes somewhat more involved to do via serialising the individual <code>Item</code>s.</p>
<p>This makes me wonder: is this the way I should be doing it at all? Probably not, how is it supposed to be done?</p>
<p>Is it the wrong choice in the first place to involve Pydantic here?</p>
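Using Pydantic here seems reasonable; the missing piece is converting the models into JSON-safe Python before handing them to `requests`. Assuming Pydantic v2, `model_dump(mode="json")` handles a single model and `TypeAdapter` handles a list (a sketch; the endpoint is a placeholder):

```python
from uuid import UUID

from pydantic import BaseModel, TypeAdapter

class Item(BaseModel):
    id: UUID
    content: str

item = Item(id=UUID(int=1), content="Something")

# Single model -> JSON-safe dict (the UUID becomes a string).
payload = item.model_dump(mode="json")

# List of models -> JSON-safe list via a TypeAdapter.
list_payload = TypeAdapter(list[Item]).dump_python([item, item], mode="json")

# Both can then go through requests' json= parameter, e.g.:
# requests.put("http://localhost", json=list_payload)
print(payload)
```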
|
<python><python-requests><pydantic>
|
2024-07-11 14:35:07
| 2
| 1,372
|
Thomas Arildsen
|
78,736,144
| 3,618,999
|
Debug flask docker container with breakpoints
|
<p>I am building my Flask application on an Ubuntu image; below is a snippet of my Dockerfile:</p>
<pre><code>FROM ubuntu:24.04
# Install required packages and Python
RUN apt-get update && \
apt-get install -y python3 python3-pip python3-venv default-libmysqlclient-dev build-essential pkg-config curl && \
rm -rf /var/lib/apt/lists/*
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,source=requirements.txt,target=requirements.txt \
. ./venv/bin/activate && \
python3 -m pip install --upgrade pip && \
pip3 install -r requirements.txt && \
pip3 install debugpy
COPY . .
# Expose the port that the application listens on.
EXPOSE 5000
# Run the application.
CMD ["./venv/bin/python", "app.py"]
</code></pre>
<p>The issue is that I want to debug my APIs using breakpoints, but I am not able to. I have tried several methods found on the internet, but none of them seem to work. Please help me solve this.</p>
|
<python><docker><flask><debugging><containers>
|
2024-07-11 14:27:58
| 0
| 579
|
ratnesh
|
78,736,095
| 562,335
|
Architecture/Design pattern for communication between asyncio and non-asyncio contexts
|
<p>I am writing a TCP-based client which does both req/rep and push/pull pattern communication. I tried to employ <code>asyncio</code> and its transport/protocol structure to handle the low-level transport actions, but the APIs exposed to callers do not run in an asyncio context. So I created the following architecture: run asyncio in a thread, and use <code>asyncio.run_coroutine_threadsafe</code> to submit a request and wait for the response using a future.</p>
<pre class="lang-py prettyprint-override"><code>import asyncio as aio
from threading import Thread
class MyProtocol(aio.Protocol):
    ...


async def aio_main(drv, ip, port):
    loop = aio.get_running_loop()
    on_connection_lost = loop.create_future()
    # create_connection is a coroutine, so it must be awaited
    transport, protocol = await loop.create_connection(
        lambda: MyProtocol(drv, on_connection_lost), ip, port
    )
    drv._transport = transport
    drv._protocol = protocol
    drv._loop = loop
    try:
        await on_connection_lost
    finally:
        transport.close()


class Driver:
    def __init__(self):
        self._thread = Thread(target=lambda: aio.run(aio_main(self, '192.168.0.1', 8888)))
        self._thread.start()

    async def _request(self, data):
        loop = aio.get_running_loop()
        fut = loop.create_future()
        self._protocol._rsp = fut
        self._transport.write(data)
        return await fut  # which is set in MyProtocol

    def request(self, data):
        coro = self._request(data)
        fut = aio.run_coroutine_threadsafe(coro, self._loop)
        return fut.result(1.0)  # 1s timeout
</code></pre>
<p>Is this architecture good practice or bad? Is there a better pattern to handle this?</p>
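<p>For reference, the core of this pattern can be reduced to a small self-contained sketch (hedged: generic stand-in names, not the actual Driver/protocol classes) showing the loop-in-a-thread plus <code>run_coroutine_threadsafe</code> bridge:</p>

```python
import asyncio
import threading

def start_background_loop():
    # Run an event loop in a daemon thread; synchronous callers submit
    # coroutines to it and block on a concurrent.futures.Future.
    loop = asyncio.new_event_loop()
    threading.Thread(target=loop.run_forever, daemon=True).start()
    return loop

async def echo(data: bytes) -> bytes:
    await asyncio.sleep(0.01)  # stand-in for awaiting a protocol response future
    return data.upper()

def request(loop, data: bytes) -> bytes:
    # Bridge: submit the coroutine to the loop thread and block for the result.
    fut = asyncio.run_coroutine_threadsafe(echo(data), loop)
    return fut.result(timeout=1.0)  # blocks the calling thread, not the loop

loop = start_background_loop()
print(request(loop, b"hello"))  # b'HELLO'
```

<p>This is essentially the same design as the question's Driver, so the architecture itself is a recognized pattern; the usual caveats are making sure every cross-thread interaction goes through <code>run_coroutine_threadsafe</code> or <code>call_soon_threadsafe</code>.</p>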
|
<python><design-patterns><python-asyncio>
|
2024-07-11 14:17:21
| 1
| 1,241
|
Holmes Conan
|
78,735,850
| 12,760,550
|
Why does one of the bar labels have its name omitted in the graph displayed in Jupyter Notebook?
|
<p>I have the multi-index dataframe named "pivot_dftable" (partial example below) and the code below, which displays the analysis I need for it.</p>
<ul>
<li><p>I would like to understand why, in the graph displayed by the code provided, the bar label that should read "Re-Open" just shows blank.</p>
</li>
<li><p>Also, as you can see, the Queue Category names below the graph are being cut off in the image; how can I display them fully?</p>
</li>
</ul>
<p>Thank you so much for the help!</p>
<p>pivot_dftable =</p>
<pre><code> Status New \
Queue Category Ticket Age - Open Tickets
UAE Queue Older than 3 Months 0.0
Older than 1 Month 1.0
Older than 2 Weeks 1.0
Switzerland Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 1.0
Older than 1 Week 1.0
Less than 1 Week 1.0
HQ Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Older than 1 Week 1.0
Less than 1 Week 1.0
London Queue Older than 3 Months 0.0
Older than 1 Month 2.0
Older than 2 Weeks 0.0
Older than 1 Week 0.0
York Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Denmark Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Finland Queue Older than 3 Months 3.0
Norway Queue Older than 3 Months 0.0
Older than 1 Month 2.0
Older than 2 Weeks 0.0
Sweden Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 1.0
Older than 1 Week 3.0
Less than 1 Week 4.0
France Queue Older than 3 Months 7.0
Older than 1 Month 0.0
Older than 2 Weeks 4.0
Netherlands Queue Older than 3 Months 3.0
Older than 1 Month 14.0
Older than 2 Weeks 14.0
Older than 1 Week 4.0
Less than 1 Week 0.0
Spain Queue Older than 3 Months 2.0
Older than 1 Month 1.0
Older than 2 Weeks 3.0
Czech Queue Older than 3 Months 1.0
Older than 1 Month 0.0
Slovakia Queue Older than 3 Months 1.0
Older than 1 Month 3.0
Older than 2 Weeks 1.0
Portugal Queue Older than 3 Months 0.0
Older than 1 Month 4.0
Older than 2 Weeks 2.0
Older than 1 Week 2.0
Peru Queue Older than 3 Months 16.0
Older than 1 Month 1.0
Older than 2 Weeks 1.0
TMC Queue Older than 3 Months 0.0
Older than 1 Month 1.0
Older than 2 Weeks 5.0
Older than 1 Week 13.0
Less than 1 Week 14.0
CoE - Compensation & Benefits Older than 3 Months 0.0
Older than 1 Month 1.0
CoE - Talent Acquisition Older than 1 Month 0.0
CoE - Learning, Development & Culture Older than 3 Months 0.0
Older than 1 Month 0.0
Global - Hypercare Older than 3 Months 0.0
Older than 1 Month 0.0
Global - HR Business Partners Older than 3 Months 0.0
Older than 1 Month 2.0
Global Central HRIS Team Queue Older than 3 Months 2.0
Older than 1 Month 1.0
Older than 2 Weeks 1.0
Older than 1 Week 5.0
Less than 1 Week 9.0
AMS Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Older than 1 Week 0.0
Less than 1 Week 0.0
Status In Progress \
Queue Category Ticket Age - Open Tickets
UAE Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Switzerland Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Older than 1 Week 2.0
Less than 1 Week 2.0
HQ Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Older than 1 Week 0.0
Less than 1 Week 0.0
London Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Older than 1 Week 0.0
York Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Denmark Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Finland Queue Older than 3 Months 0.0
Norway Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Sweden Queue Older than 3 Months 0.0
Older than 1 Month 2.0
Older than 2 Weeks 3.0
Older than 1 Week 5.0
Less than 1 Week 4.0
France Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Netherlands Queue Older than 3 Months 3.0
Older than 1 Month 3.0
Older than 2 Weeks 0.0
Older than 1 Week 0.0
Less than 1 Week 0.0
Spain Queue Older than 3 Months 2.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Czech Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Slovakia Queue Older than 3 Months 1.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Portugal Queue Older than 3 Months 4.0
Older than 1 Month 3.0
Older than 2 Weeks 0.0
Older than 1 Week 0.0
Peru Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
TMC Queue Older than 3 Months 4.0
Older than 1 Month 13.0
Older than 2 Weeks 12.0
Older than 1 Week 3.0
Less than 1 Week 0.0
CoE - Compensation & Benefits Older than 3 Months 0.0
Older than 1 Month 0.0
CoE - Talent Acquisition Older than 1 Month 0.0
CoE - Learning, Development & Culture Older than 3 Months 0.0
Older than 1 Month 0.0
Global - Hypercare Older than 3 Months 1.0
Older than 1 Month 1.0
Global - HR Business Partners Older than 3 Months 0.0
Older than 1 Month 0.0
Global Central HRIS Team Queue Older than 3 Months 7.0
Older than 1 Month 26.0
Older than 2 Weeks 12.0
Older than 1 Week 4.0
Less than 1 Week 3.0
AMS Queue Older than 3 Months 2.0
Older than 1 Month 1.0
Older than 2 Weeks 7.0
Older than 1 Week 1.0
Less than 1 Week 6.0
Status Re-Open \
Queue Category Ticket Age - Open Tickets
UAE Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Switzerland Queue Older than 3 Months 6.0
Older than 1 Month 5.0
Older than 2 Weeks 1.0
Older than 1 Week 1.0
Less than 1 Week 0.0
HQ Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Older than 1 Week 0.0
Less than 1 Week 0.0
London Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Older than 1 Week 0.0
York Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Denmark Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Finland Queue Older than 3 Months 0.0
Norway Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Sweden Queue Older than 3 Months 0.0
Older than 1 Month 3.0
Older than 2 Weeks 0.0
Older than 1 Week 0.0
Less than 1 Week 0.0
France Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Netherlands Queue Older than 3 Months 2.0
Older than 1 Month 0.0
Older than 2 Weeks 1.0
Older than 1 Week 0.0
Less than 1 Week 0.0
Spain Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Czech Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Slovakia Queue Older than 3 Months 1.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Portugal Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Older than 1 Week 0.0
Peru Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
TMC Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 1.0
Older than 1 Week 0.0
Less than 1 Week 0.0
CoE - Compensation & Benefits Older than 3 Months 1.0
Older than 1 Month 0.0
CoE - Talent Acquisition Older than 1 Month 0.0
CoE - Learning, Development & Culture Older than 3 Months 0.0
Older than 1 Month 0.0
Global - Hypercare Older than 3 Months 0.0
Older than 1 Month 0.0
Global - HR Business Partners Older than 3 Months 0.0
Older than 1 Month 0.0
Global Central HRIS Team Queue Older than 3 Months 0.0
Older than 1 Month 3.0
Older than 2 Weeks 1.0
Older than 1 Week 0.0
Less than 1 Week 0.0
AMS Queue Older than 3 Months 0.0
Older than 1 Month 0.0
Older than 2 Weeks 0.0
Older than 1 Week 0.0
Less than 1 Week 0.0
Status Waiting
Queue Category Ticket Age - Open Tickets
UAE Queue Older than 3 Months 0.0
Older than 1 Month 0.0
</code></pre>
<p>Code:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from matplotlib.offsetbox import OffsetImage, AnnotationBbox, TextArea

# Assuming pivot_df is your DataFrame
# Map the "Queue Category" column using the dictionary
pivot_df.index = pivot_dftable.index.set_levels(pivot_dftable.index.levels[0].map(queuetocountry), level=0)

# Filter the DataFrame to include only the specified countries
countrieslist = ['AE', 'CH', 'London', 'York', 'DK', 'FI', 'NO', 'SE', 'FR', 'NL', 'ES', 'CZ', 'SK', 'PT', 'PE', 'TMC', 'HRIS', 'AMS']
filtered_df = pivot_df[pivot_df.index.get_level_values('Queue Category').isin(countrieslist)]

# Print the filtered DataFrame to debug
print(filtered_df)

# Define queue categories and ticket ages
queue_categories = filtered_df.index.get_level_values('Queue Category').unique()
ticket_ages = ['Older than 3 Months', 'Older than 1 Month', 'Older than 2 Weeks', 'Older than 1 Week', 'Less than 1 Week']
statuses = ['New', 'In Progress', 'Re-Open', 'Waiting']

# Plotting
fig, ax = plt.subplots(figsize=(20, 12))

# Define colors for each ticket age category
colors = {
    'Older than 3 Months': 'red',
    'Older than 1 Month': 'orange',
    'Older than 2 Weeks': 'yellow',
    'Older than 1 Week': 'green',
    'Less than 1 Week': 'blue'
}

width = 0.2  # width of the bars
x = np.arange(len(queue_categories))
n = len(statuses)

# Loop through each status to create the grouped bars
for i, status in enumerate(statuses):
    bottom = np.zeros(len(queue_categories))
    for ticket_age in ticket_ages:
        try:
            current_counts = filtered_df.loc[(slice(None), ticket_age), status].reindex(queue_categories, fill_value=0).values
        except KeyError as e:
            print(f"KeyError for status '{status}' and ticket_age '{ticket_age}': {e}")
            current_counts = np.zeros(len(queue_categories))
        bars = ax.bar(x + (i - n / 2) * width, current_counts, bottom=bottom, width=width, color=colors[ticket_age], edgecolor='white', label=f'{ticket_age}' if i == 0 else None)
        bottom += current_counts
        # Add the total value on top of each bar
        for bar in bars:
            height = bar.get_height()
            if height > 0:  # Only annotate if the height is greater than 0
                ax.annotate(f'{int(height)}',
                            xy=(bar.get_x() + bar.get_width() / 2, bar.get_y() + height),
                            xytext=(0, 0),  # 3 points vertical offset
                            textcoords="offset points",
                            ha='center', va='bottom')

# Set labels and title
ax.set_title('9.1 Open Ticket Ages by Queue Category and Status', fontsize=15)

# Set x-ticks and labels for queue categories
ax.set_xticks(x)
ax.set_xticklabels([])  # Clear the existing labels

# Add status labels below the bars
status_labels = [f'{status}' for status in statuses for _ in range(len(queue_categories))]
status_positions = np.tile(x, n) + np.repeat((np.arange(n) - n / 2) * width, len(queue_categories))
ax.set_xticks(status_positions, minor=True)
ax.set_xticklabels(status_labels, minor=True, rotation=90, fontsize=8, ha='center')

# Move queue category labels down by 2 cm
for i, label in enumerate(queue_categories):
    offsetbox = TextArea(label, textprops=dict(rotation=90, ha='center', va='top', fontsize=10))
    ab = AnnotationBbox(offsetbox, (x[i], 0), xybox=(0, -60),  # Move down by 60 points (approx 5 cm)
                        xycoords='data', boxcoords="offset points", frameon=False)
    ax.add_artist(ab)

# Adjust layout to prevent overlap and increase bottom margin
plt.subplots_adjust(bottom=0.3)

# Create a legend with only the "Ticket Aging" categories
handles, labels = ax.get_legend_handles_labels()
unique_labels = {}
for handle, label in zip(handles, labels):
    if label not in unique_labels:
        unique_labels[label] = handle
ax.legend(unique_labels.values(), unique_labels.keys(), title='Ticket Age')

plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/xFv7DgEi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFv7DgEi.png" alt="enter image description here" /></a></p>
|
<python><pandas><matplotlib><multi-index>
|
2024-07-11 13:31:36
| 0
| 619
|
Paulo Cortez
|
78,735,769
| 1,788,771
|
How to access a translation defined in the base model from views and the admin panel of a child model
|
<p>In a project I have inherited I have a few polymorphic models based on a common model. Something like:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
from polymorphic.models import PolymorphicManager, PolymorphicModel
class Product(PolymorphicModel):
    name = models.CharField(max_length=127)
    objects = PolymorphicManager()


class Book(Product):
    pass


class Drink(Product):
    pass
</code></pre>
<p>Personally, I would not use polymorphism, but it's a bit too late for that and way out of scope. I am tasked with adding translations for the name on the base model using parler; however, the documentation for parler only covers translating fields on the child models.</p>
<p>Nevertheless, I modified the models as such:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
from parler.models import TranslatableModel, TranslatedFields
from parler.managers import TranslatableManager, TranslatableQuerySet
from polymorphic.models import PolymorphicManager, PolymorphicModel
from polymorphic.query import PolymorphicQuerySet
class ProductQuerySet(PolymorphicQuerySet, TranslatableQuerySet):
    pass


class ProductManager(PolymorphicManager, TranslatableManager):
    queryset_class = ProductQuerySet


class Product(PolymorphicModel):
    translations = TranslatedFields(
        name=models.CharField(max_length=127)
    )
    objects = ProductManager()


class Book(Product):
    pass


class Drink(Product):
    pass
</code></pre>
<p>And it seems to work fine in the Django shell:</p>
<pre><code>In [1]: from product.models import Book
In [2]: b0 = Book.objects.first()
In [3]: b0
Out[3]: <Book: A Brief History of Time>
In [4]: b1 = Book()
In [5]: b1.name = 'Hitchhikers Guide to the Galaxy'
In [6]: b1.save()
In [7]: b2 = Book(name='Slaughterhouse V')
In [8]: b2.save()
</code></pre>
<p>When I run my endpoint tests, however, I get a somewhat cryptic error:</p>
<pre><code>.venv/lib/python3.12/site-packages/django/db/models/sql/query.py:1577: in _add_q
child_clause, needed_inner = self.build_filter
.
.
.
> arg, value = filter_expr
E ValueError: too many values to unpack (expected 2)
</code></pre>
<p>And a different error when I try to see the admin panel for the child models:</p>
<pre><code>Cannot resolve keyword 'name' into field.
</code></pre>
<p>I have tried adding computed properties to the child models to see if it might fix the admin panel, but it did not. Does anyone have experience doing something similar?</p>
|
<python><django><django-polymorphic><django-parler>
|
2024-07-11 13:16:01
| 1
| 4,107
|
kaan_atakan
|
78,735,697
| 19,648,465
|
How to Avoid Using --run-syncdb with python manage.py migrate
|
<p>I am working on a Django project that I cloned from GitHub. When I try to run python manage.py migrate, it fails and requires me to use --run-syncdb. However, I want to make it so that python manage.py migrate is sufficient without the need for --run-syncdb.</p>
<p>I noticed that there is no migrations folder in my project. Here are the steps I have taken so far:</p>
<ol>
<li>Verified that the project does not have any existing migration files.</li>
<li>Tried to run python manage.py makemigrations to generate migration files, but it still fails.</li>
</ol>
<p>It fails because after python manage.py makemigrations and python manage.py migrate I get no migrations folder in my apps.</p>
<p>How can I resolve this issue and ensure that python manage.py migrate works without requiring --run-syncdb? Is there a specific reason why the migrations folder might be missing, and how can I recreate it?</p>
|
<python><django><django-rest-framework><database-migration>
|
2024-07-11 13:01:15
| 0
| 705
|
coder
|
78,735,671
| 2,261,553
|
Parallel decorator on Numba routine featuring race condition
|
<p>I have the following routine aiming to calculate in parallel different SVD of random matrices:</p>
<pre><code>import numpy as np
from numba import jit, prange

@jit(nopython=True, parallel=True)
def svd_bn(aa, n):
    res = []
    for k in prange(n):
        u, s, v = np.linalg.svd(aa[k], 0)
        res.append(s)
    return res

aa = [np.random.rand(2*k, 2*(k+1)) for k in range(1, 10)]
res = svd_bn(aa, len(aa))
</code></pre>
<p>On a Windows OS, this works fine and returns the sorted elements in the list. On a Linux OS, this leads to a "double free or corruption (!prev)" error. My suspicion is that the parallelization on Linux leads to a race condition that is avoided on Windows, plus the "res" list is changing its entries dynamically.</p>
<p>Is there a way to overcome this problem using Numba? Note that in principle we don't know the shape of the u,s,v arrays returned from the routine.</p>
|
<python><numpy><numba>
|
2024-07-11 12:54:34
| 0
| 411
|
Zarathustra
|
78,735,592
| 6,930,340
|
How to compute a column in Polars using np.linspace
|
<p>Consider the following <code>pl.DataFrame</code>:</p>
<pre><code>df = pl.DataFrame(
    data={
        "np_linspace_start": [0, 0, 0],
        "np_linspace_stop": [8, 6, 7],
        "np_linspace_num": [5, 4, 4]
    }
)
shape: (3, 3)
βββββββββββββββββββββ¬βββββββββββββββββββ¬ββββββββββββββββββ
β np_linspace_start β np_linspace_stop β np_linspace_num β
β --- β --- β --- β
β i64 β i64 β i64 β
βββββββββββββββββββββͺβββββββββββββββββββͺββββββββββββββββββ‘
β 0 β 8 β 5 β
β 0 β 6 β 4 β
β 0 β 7 β 4 β
βββββββββββββββββββββ΄βββββββββββββββββββ΄ββββββββββββββββββ
</code></pre>
<p>How can I create a new column <code>ls</code>, that is the result of the <code>np.linspace</code> function? This column will hold an <code>np.array</code>.</p>
<p>I was looking for something along those lines:</p>
<pre><code>df.with_columns(
    ls=np.linspace(
        start=pl.col("np_linspace_start"),
        stop=pl.col("np_linspace_stop"),
        num=pl.col("np_linspace_num")
    )
)
</code></pre>
<p>Is there a <code>polars</code> equivalent to <code>np.linspace</code>?</p>
|
<python><dataframe><python-polars>
|
2024-07-11 12:40:08
| 4
| 5,167
|
Andi
|
78,735,399
| 8,622,404
|
Handling 'Too Large for Available Bit Count' Error When Reading Part of an MP3 File in Python
|
<p>I am trying to read a specific part of an MP3 file, but I am encountering an error:</p>
<pre><code>[src/libmpg123/layer3.c:INT123_do_layer3():1771] error: part2_3_length (1376) too large for available bit count (760)
</code></pre>
<p>The audio file can be accessed <em><strong><a href="https://drive.google.com/file/d/1Kfk4bNuM-to5h1zrk7x2pElbailCuvuB/view?usp=drive_link" rel="nofollow noreferrer">here</a></strong></em>.</p>
<p>My environment is set up using this Docker image: <code>pytorch/pytorch:2.2.0-cuda12.1-cudnn8-devel</code>.</p>
<p>I installed librosa and librosa dependent programs with the following commands:</p>
<pre><code>apt install -y git-lfs ffmpeg unzip libsndfile1
conda install -y -c conda-forge libsndfile
pip install numpy
pip install librosa
</code></pre>
<p>I attempted to read the MP3 file with this Python code:</p>
<pre><code>import librosa
audio_array, sr = librosa.load('all.mp3', sr=16_000, duration=5.28, offset=3231.67)
</code></pre>
<p>While I can read the entire file using librosa, I am unable to read this specific part of the file. What should I do? Should I ignore this error since I still get the correct NumPy array?<br> What does this error message mean?</p>
|
<python><librosa><libsndfile>
|
2024-07-11 12:00:29
| 0
| 356
|
kingGarfield
|
78,735,104
| 2,859,449
|
Nested partitions of integers
|
<p>I want to create all nested partitions of an integer - with all possible permutations of numbers and brackets (nests) at all possible positions.</p>
<p>For example, for n = 3, I would like to have</p>
<pre><code>(3,)
(1, 2)
(1, 1, 1)
(1, (1, 1))
(2, 1)
((1, 1), 1)
</code></pre>
<p>In general something like</p>
<pre><code>(2,3,(1,1,(3,4,5),1),5,(1,2,3))
</code></pre>
<p>for a nested partition of 31.</p>
<p>I managed to write this code (based on the standard partition of integers):</p>
<pre><code>def partitions(n):
    yield (n,)
    for i in range(1, n):
        for q in partitions(i):
            for p in partitions(n-i):
                yield q + p
                if len(q) > 1:
                    yield (q,) + p
                if len(p) > 1:
                    yield q + (p,)
                if len(p) > 1 and len(q) > 1:
                    yield (q,) + (p,)
</code></pre>
<p>But it returns duplicates; for n=3, we get the sequence (1, 1, 1) twice. It makes sense, since I split 3 once as 1+2 and once as 2+1, and then we flatten the subpartitions of the 2. Any ideas how to fix my program?</p>
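<p>One simple (if blunt) fix, kept as a sketch, is to leave the generator as-is and deduplicate its output while preserving order; this filters the duplicates rather than preventing the algorithm from generating them:</p>

```python
def partitions(n):
    yield (n,)
    for i in range(1, n):
        for q in partitions(i):
            for p in partitions(n - i):
                yield q + p
                if len(q) > 1:
                    yield (q,) + p
                if len(p) > 1:
                    yield q + (p,)
                if len(p) > 1 and len(q) > 1:
                    yield (q,) + (p,)

def unique_partitions(n):
    # Nested tuples are hashable, so a set suffices for deduplication.
    seen = set()
    for part in partitions(n):
        if part not in seen:
            seen.add(part)
            yield part

print(list(unique_partitions(3)))
# [(3,), (1, 2), (1, 1, 1), (1, (1, 1)), (2, 1), ((1, 1), 1)]
```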
|
<python><nested-lists><integer-partition>
|
2024-07-11 10:56:13
| 2
| 465
|
Jake B.
|
78,735,093
| 275,195
|
Is it possible to validate a pyrsistent data structure with jsonschema?
|
<p>I'm toying around with immutable data structures using the pyrsistent library in python.
One of the nice things when representing data with generic data structures is the ability to check data with schemas. Here an example of an immutable data structure in pyrsistent:</p>
<pre class="lang-py prettyprint-override"><code>from pyrsistent import pmap, pvector

users = pvector([
    pmap({'id': 1, 'name': 'Jack', 'email': 'jack@example.org', 'active': True}),
    pmap({'id': 2, 'name': 'Max', 'email': 'max@example.com', 'active': True}),
    pmap({'id': 3, 'name': 'Allison', 'email': 'allison@example.org', 'active': False}),
    pmap({'id': 4, 'name': 'David', 'email': 'david@example.net', 'active': False})
])
</code></pre>
<p>Here is a jsonschema that I would like to use to validate the data:</p>
<pre class="lang-py prettyprint-override"><code>from jsonschema import validate

schema = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "id": {"type": "integer"},
            "name": {"type": "string"},
            "email": {"type": "string", "format": "email"},
            "active": {"type": "boolean"}
        },
        "required": ["id", "name", "email", "active"]
    }
}

validate(instance=users, schema=schema)
</code></pre>
<p>With the given code there are type errors because jsonschema only works with native types.</p>
<p><strong>Is there an easy or a good way to validate such data structures?</strong></p>
<p>I'm looking for a generalizable solution that also extends to millions of items and deeply nested structures. The ideas are from <a href="https://blog.klipse.tech/dop/2022/06/22/principles-of-dop.html" rel="nofollow noreferrer">data-oriented programming</a>.</p>
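<p>One straightforward approach, sketched below (hedged: it copies the whole structure, which may matter at the millions-of-items scale mentioned): pyrsistent's <code>thaw()</code> recursively converts the persistent structures back into plain lists/dicts, which jsonschema accepts:</p>

```python
from jsonschema import validate
from pyrsistent import pmap, pvector, thaw

users = pvector([
    pmap({'id': 1, 'name': 'Jack', 'email': 'jack@example.org', 'active': True}),
])

schema = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "id": {"type": "integer"},
            "name": {"type": "string"},
            "email": {"type": "string", "format": "email"},
            "active": {"type": "boolean"},
        },
        "required": ["id", "name", "email", "active"],
    },
}

# thaw() turns pvector/pmap into list/dict so jsonschema sees native types.
validate(instance=thaw(users), schema=schema)
```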
|
<python><pyrsistent>
|
2024-07-11 10:53:59
| 1
| 2,338
|
Pascal
|
78,734,820
| 10,739,750
|
How to attach a print format to attachments upon saving the Doctype in ERPNext?
|
<p>I have created a custom print format in ErpNext from the Quotation Doctype.</p>
<p>It gets printed and I can download it as a PDF as well.</p>
<p>Now, what I want is that whenever I save the Quotation Doctype, a PDF of that print format, named <code>Scope of Work</code>, is attached to the Quotation's attachments (in the left sidebar of the Quotation).</p>
<p>I am doing this because when I send the quotation by email, I download the PDF manually when I print it and attach it to the email attachments, which is quite a hectic process: every time I have to download the PDF and then attach it to the email attachments.</p>
<p>With this change, that print format would be attached to the attachments upon saving the Quotation, and when I send the email, that attachment from the Quotation would automatically be attached to the email attachments, which is what I want.</p>
<p>I was trying a custom script, but that did not work as expected, so maybe there is a better way to achieve this.</p>
<p><code>Scope of Work</code> is not my default print format; there are other formats I want to attach as well, like "Scope of Work Arabic", another print format.</p>
<p>I need a script, a function, or even a custom button that will do the job, so that when I click it, the formats are attached to the attachments of the doctype, in this case the Quotation.</p>
<p>My bench version is 14.</p>
|
<python><erpnext><frappe>
|
2024-07-11 09:56:14
| 1
| 1,235
|
Khayam Khan
|
78,734,801
| 2,132,593
|
Python Pandas monthly yoy% change when current month is partial
|
<p>I want to calculate yoy% change for month level data, taking into account that the last (current) period is partial.</p>
<p>This has already been asked: <a href="https://stackoverflow.com/questions/47842534/resampling-and-calculating-year-over-year-with-partial-data">Resampling and calculating year over year with partial data</a> but I cannot understand the answer that was given.</p>
<p>My code is as follows:</p>
<pre><code>import pandas as pd
import numpy as np

np.random.seed(555)

# Create a sample dataframe
df_input = pd.DataFrame({
    'order_date': pd.date_range(start='2022-01-01', end='2024-07-10'),
    'customers': np.random.randint(0, 100, size=(922, )),
    'orders': np.random.randint(0, 100, size=(922, ))
})

df = df_input.copy()
df.set_index('order_date', inplace=True)
df_monthly = df.resample('ME').sum()
print(df_monthly.tail())

            customers  orders
order_date
2024-03-31       1358    1513
2024-04-30       1581    1419
2024-05-31       1584    1565
2024-06-30       1456    1652
2024-07-31        389     378
</code></pre>
<p>Now I calculate yoy % change for every month and add it back to the original dataset:</p>
<pre><code>yoy_change = df_monthly.pct_change(12).mul(100)

for column in df_monthly.columns:
    df_monthly[f'{column}_pct_change'] = yoy_change[column]

            customers  orders  customers_pct_change  orders_pct_change
order_date
2024-03-31       1358    1513             -6.215470         -13.095922
2024-04-30       1581    1419             -1.801242         -11.423221
2024-05-31       1584    1565             22.885958           3.232190
2024-06-30       1456    1652              7.772021          -6.508206
2024-07-31        389     378            -78.460687         -76.330620
</code></pre>
<p>However the pandas resample sums the partial month of July 2024 (through the 10th) and compares it with the full month of last year July 2023 when the percentage change is calculated. This leaves it at a very negative number when that isn't the reality (since we are comparing a full month to a partial one).</p>
<p>For example, the number of customers for July 2023 "up to the 10th" was 513, therefore the yoy % for the month of July 2024 should be -24 not -78.</p>
|
<python><pandas>
|
2024-07-11 09:51:14
| 1
| 1,964
|
Giacomo
|
78,734,626
| 4,689,521
|
Optimizing Parallel Execution of Sequential and Dependent I/O and CPU-bound Tasks in Python
|
<p>I have a simple processing pipeline that consists of just a couple of steps:</p>
<pre><code>def run():
    SIZE = 100
    values = get <SIZE> entries from DB
    for v in values:
        # 1. Update Status for file in DB
        # 2. Then download file from BLOB storage
        # 3. Then generate thumbnail for file (cpu-bound)
        # 4. Then send thumbnail to cloud for processing
        # 5. Then send processing result to cloud to further process
        # 6. Then write processing result to DB
</code></pre>
<p>It's mostly I/O-bound tasks, except for the task of generating a thumbnail. What techniques can I use to process as many files as possible without scaling horizontally?</p>
<p>Right now I am creating multiple threads and running the tasks in parallel, but it's quite limited using a ThreadPoolExecutor with 10 workers.</p>
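<p>A self-contained sketch of one way to structure this (hedged: the stage functions are stand-ins for the real pipeline; a larger thread pool drives the I/O stages, and the CPU-bound stage is funneled through its own small pool, which could be swapped for a <code>ProcessPoolExecutor</code> to escape the GIL):</p>

```python
import concurrent.futures as cf

def download(item):
    return f"blob-{item}"            # stand-in for the BLOB fetch (I/O)

def make_thumbnail(blob):
    return blob.upper()              # stand-in for the CPU-bound step

def upload(thumb):
    return f"ok:{thumb}"             # stand-in for the cloud calls (I/O)

def run(values, io_workers=32, cpu_workers=4):
    # io_pool is sized generously for blocking I/O; cpu_pool is sized
    # around the core count so the CPU stage cannot starve the machine.
    with cf.ThreadPoolExecutor(max_workers=cpu_workers) as cpu_pool, \
         cf.ThreadPoolExecutor(max_workers=io_workers) as io_pool:
        def pipeline(v):
            blob = download(v)                                    # I/O
            thumb = cpu_pool.submit(make_thumbnail, blob).result()  # CPU
            return upload(thumb)                                  # I/O
        return list(io_pool.map(pipeline, values))

print(run(range(3)))  # ['ok:BLOB-0', 'ok:BLOB-1', 'ok:BLOB-2']
```

<p>The key design choice is separating the pool sizes: the I/O worker count can be raised well past 10 because those threads mostly sleep on the network, while the CPU stage stays bounded.</p>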
|
<python><multithreading><asynchronous>
|
2024-07-11 09:12:52
| 0
| 629
|
M4V3N
|
78,734,532
| 73,323
|
Make a word optional between a negative lookbehind and a target word
|
<p>This regex <code>(?<!not)\s(hello\W*world)</code> will not match when <code>hello world</code> is preceded by the word <code>not</code>.</p>
<p>How do I make it not match too when there's a word in between: <code>not a hello world</code>?</p>
<p>I'm trying this regex <code>(?<!not)(?:\s+.*)?\s(hello\W*world)</code> with the <code>(?:\s+.*)?</code> to make the in-between word optional, but it doesn't seem to work.</p>
<p><a href="https://regex101.com/r/HPcJ9F/1" rel="nofollow noreferrer">https://regex101.com/r/HPcJ9F/1</a></p>
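<p>Since Python's <code>re</code> only supports fixed-width lookbehinds, one hedged workaround is to optionally capture up to two preceding words and reject matches programmatically instead of inside the pattern:</p>

```python
import re

# Optionally capture up to two words before "hello world".
pattern = re.compile(r"(?:\b(\w+)\s+)?(?:\b(\w+)\s+)?(hello\W*world)")

def find_hello_world(text):
    # Keep a match only if neither of the (up to two) preceding words is "not".
    return [m.group(3) for m in pattern.finditer(text)
            if "not" not in (m.group(1), m.group(2))]

print(find_hello_world("say hello world"))    # ['hello world']
print(find_hello_world("not hello world"))    # []
print(find_hello_world("not a hello world"))  # []
```

<p>Alternatively, the third-party <code>regex</code> module supports variable-length lookbehinds, which may read more naturally if adding a dependency is acceptable.</p>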
|
<python><regex>
|
2024-07-11 08:53:34
| 2
| 7,713
|
kyw
|
78,734,383
| 5,197,329
|
asyncio how to chain coroutines
|
<p>I have the following test code, where I am trying to chain together different coroutines.
The idea is that I want one coroutine that downloads data, and as soon as the data is downloaded I want to feed it into a second coroutine, which then processes the data.
The code below works whenever I skip the process_data step, but it fails whenever I include the process_data step (trying to chain the coroutines together). How can I fix it?</p>
<pre><code>import asyncio
import time

task_inputs = [0,1,2,3,4,5,4,3,4]


async def download_dummy(url):
    await asyncio.sleep(url)
    data = url
    print(f'downloaded {url}')
    return data


async def process_data(data):
    await asyncio.sleep(1)
    processed_data = data*2
    print(f"processed {data}")
    return processed_data


async def main(task_inputs):
    task_handlers = []
    print(f"started at {time.strftime('%X')}")
    async with asyncio.TaskGroup() as tg:
        for task in task_inputs:
            res = tg.create_task(process_data(download_dummy(task)))
            # res = tg.create_task(download_dummy(task))
            task_handlers.append(res)
    print(f"finished at {time.strftime('%X')}")
    results = [task_handler.result() for task_handler in task_handlers]
    print(results)

asyncio.run(main(task_inputs))
</code></pre>
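<p>For comparison, a minimal sketch of the usual chaining fix (hedged: simplified stand-ins with shortened sleeps, and <code>asyncio.gather</code> instead of <code>TaskGroup</code> for brevity): wrap both awaits in one coroutine, so the download's result is materialized before it reaches the processing step:</p>

```python
import asyncio

async def download_dummy(url):
    await asyncio.sleep(0)   # shortened sleep for the sketch
    return url

async def process_data(data):
    await asyncio.sleep(0)
    return data * 2

async def pipeline(url):
    data = await download_dummy(url)   # actually run stage one...
    return await process_data(data)    # ...then hand its *result* to stage two

async def main(task_inputs):
    return await asyncio.gather(*(pipeline(t) for t in task_inputs))

print(asyncio.run(main([0, 1, 2, 3])))  # [0, 2, 4, 6]
```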
<p>The error I get is rather telling: it seems that the first coroutine is not actually executed when it is passed to the second coroutine, but I am not sure how to fix this elegantly.</p>
<pre><code>+ Exception Group Traceback (most recent call last):
| File "C:\Program Files\JetBrains\PyCharm Community Edition 2024.1.4\plugins\python-ce\helpers\pydev\pydevd.py", line 2252, in <module>
| main()
| File "C:\Program Files\JetBrains\PyCharm Community Edition 2024.1.4\plugins\python-ce\helpers\pydev\pydevd.py", line 2234, in main
| globals = debugger.run(setup['file'], None, None, is_module)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Program Files\JetBrains\PyCharm Community Edition 2024.1.4\plugins\python-ce\helpers\pydev\pydevd.py", line 1544, in run
| return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Program Files\JetBrains\PyCharm Community Edition 2024.1.4\plugins\python-ce\helpers\pydev\pydevd.py", line 1551, in _exec
| pydev_imports.execfile(file, globals, locals) # execute the script
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Program Files\JetBrains\PyCharm Community Edition 2024.1.4\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
| exec(compile(contents+"\n", file, 'exec'), glob, loc)
| File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 31, in <module>
| asyncio.run(main(task_inputs))
| File "C:\Users\Tue J Boesen\AppData\Local\Programs\Python\Python312-arm64\Lib\asyncio\runners.py", line 194, in run
| return runner.run(main)
| ^^^^^^^^^^^^^^^^
| File "C:\Users\Tue J Boesen\AppData\Local\Programs\Python\Python312-arm64\Lib\asyncio\runners.py", line 118, in run
| return self._loop.run_until_complete(task)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Users\Tue J Boesen\AppData\Local\Programs\Python\Python312-arm64\Lib\asyncio\base_events.py", line 687, in run_until_complete
| return future.result()
| ^^^^^^^^^^^^^^^
| File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 21, in main
| async with asyncio.TaskGroup() as tg:
| File "C:\Users\Tue J Boesen\AppData\Local\Programs\Python\Python312-arm64\Lib\asyncio\taskgroups.py", line 145, in __aexit__
| raise me from None
| ExceptionGroup: unhandled errors in a TaskGroup (9 sub-exceptions)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data
| processed_data = data*2
| ~~~~^~
| TypeError: unsupported operand type(s) for *: 'coroutine' and 'int'
+---------------- 2 ----------------
| Traceback (most recent call last):
| File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data
| processed_data = data*2
| ~~~~^~
| TypeError: unsupported operand type(s) for *: 'coroutine' and 'int'
+---------------- 3 ----------------
| Traceback (most recent call last):
| File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data
| processed_data = data*2
| ~~~~^~
| TypeError: unsupported operand type(s) for *: 'coroutine' and 'int'
+---------------- 4 ----------------
| Traceback (most recent call last):
| File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data
| processed_data = data*2
| ~~~~^~
| TypeError: unsupported operand type(s) for *: 'coroutine' and 'int'
+---------------- 5 ----------------
| Traceback (most recent call last):
| File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data
| processed_data = data*2
| ~~~~^~
| TypeError: unsupported operand type(s) for *: 'coroutine' and 'int'
+---------------- 6 ----------------
| Traceback (most recent call last):
| File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data
| processed_data = data*2
| ~~~~^~
| TypeError: unsupported operand type(s) for *: 'coroutine' and 'int'
+---------------- 7 ----------------
| Traceback (most recent call last):
| File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data
| processed_data = data*2
| ~~~~^~
| TypeError: unsupported operand type(s) for *: 'coroutine' and 'int'
+---------------- 8 ----------------
| Traceback (most recent call last):
| File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data
| processed_data = data*2
| ~~~~^~
| TypeError: unsupported operand type(s) for *: 'coroutine' and 'int'
+---------------- 9 ----------------
| Traceback (most recent call last):
| File "C:\Users\Tue J Boesen\Download\pythonProject\test.py", line 14, in process_data
| processed_data = data*2
| ~~~~^~
| TypeError: unsupported operand type(s) for *: 'coroutine' and 'int'
+------------------------------------
</code></pre>
|
<python><python-3.x><python-asyncio>
|
2024-07-11 08:23:25
| 3
| 546
|
Tue
|