| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,503,112
| 11,389,265
|
Bug in Boto3 AWS S3 generate_presigned_url in Lambda Python 3.X with specified region?
|
<p>I tried to write a python lambda function that returns a pre-signed url to put an object.</p>
<pre><code>import os
import json
import boto3

session = boto3.Session(region_name=os.environ['AWS_REGION'])
s3 = session.client('s3', region_name=os.environ['AWS_REGION'])
upload_bucket = 'BUCKET_NAME' # Replace this value with your bucket name!
URL_EXPIRATION_SECONDS = 30000 # Specify how long the pre-signed URL will be valid for

# Main Lambda entry point
def lambda_handler(event, context):
    return get_upload_url(event)

def get_upload_url(event):
    key = 'testimage.jpg' # Random filename we will use when uploading files

    # Get signed URL from S3
    s3_params = {
        'Bucket': upload_bucket,
        'Key': key,
        'Expires': URL_EXPIRATION_SECONDS,
        'ContentType': 'image/jpeg' # Change this to the media type of the files you want to upload
    }

    # Get signed URL
    upload_url = s3.generate_presigned_url(
        'put_object',
        Params=s3_params,
        ExpiresIn=URL_EXPIRATION_SECONDS
    )
    return {
        'statusCode': 200,
        'isBase64Encoded': False,
        'headers': {
            'Access-Control-Allow-Origin': '*'
        },
        'body': json.dumps(upload_url)
    }
</code></pre>
<p>The code itself works and returns a signed URL in the format "https://BUCKET_NAME.s3.amazonaws.com/testimage.jpg?[...]"</p>
<p>However when using POSTMAN to try to put an object, it loads without ending.</p>
<p>Originally I thought it was because of my code, and after a while I wrote a NodeJS function that does the same thing:</p>
<pre><code>const AWS = require('aws-sdk')
AWS.config.update({ region: process.env.AWS_REGION })
const s3 = new AWS.S3()
const uploadBucket = 'BUCKET_NAME' // Replace this value with your bucket name!
const URL_EXPIRATION_SECONDS = 30000 // Specify how long the pre-signed URL will be valid for

// Main Lambda entry point
exports.handler = async (event) => {
  return await getUploadURL(event)
}

const getUploadURL = async function(event) {
  const randomID = parseInt(Math.random() * 10000000)
  const Key = 'testimage.jpg' // Random filename we will use when uploading files

  // Get signed URL from S3
  const s3Params = {
    Bucket: uploadBucket,
    Key,
    Expires: URL_EXPIRATION_SECONDS,
    ContentType: 'image/jpeg' // Change this to the media type of the files you want to upload
  }
  return new Promise((resolve, reject) => {
    // Get signed URL
    let uploadURL = s3.getSignedUrl('putObject', s3Params)
    resolve({
      "statusCode": 200,
      "isBase64Encoded": false,
      "headers": {
        "Access-Control-Allow-Origin": "*"
      },
      "body": JSON.stringify(uploadURL)
    })
  })
}
</code></pre>
<p>The NodeJs version gives me a url in the format of "https://BUCKET_NAME.s3.eu-west-1.amazonaws.com/testimage.jpg?"</p>
<p>The main difference between the two is the AWS subdomain. NodeJS gives me "BUCKET_NAME.<strong>s3.eu-west-1.amazonaws.com</strong>" while Python gives "https://BUCKET_NAME.<strong>s3.amazonaws.com</strong>".</p>
<p>When using Python, the region does not appear.
I tried adding "s3.eu-west-1" manually to the signed URL generated in Python, and IT works!!</p>
<p>Is this a bug in the AWS Boto3 Python library?
As you can see, I tried to specify the region in the Python code, but it has no effect.</p>
<p>Any ideas, guys?
I want to solve this mystery :)</p>
<p>Thanks a lot in advance,</p>
<p>Léo</p>
|
<python><amazon-web-services><aws-lambda><boto3><aws-sdk-nodejs>
|
2023-02-19 20:07:54
| 1
| 557
|
leos
|
75,503,034
| 9,757,174
|
ValueError: [TypeError("'_cffi_backend.FFI' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')] in FastAPI
|
<p>I am working with FastAPI and Firestore and I created a very basic endpoint to read all the documents in a collection. However, I get the following error when I run my code and I can't seem to figure out where it's coming from.</p>
<p><code>ValueError: [TypeError("'_cffi_backend.FFI' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]</code></p>
<p>The code that I have used is below:</p>
<pre class="lang-py prettyprint-override"><code>@router.get("/reviews/{review_id}")
def read_review(review_id: str):
    # Get all documents in the Reviews collection
    db = firestore.client()
    reviews_ref = db.collection('Reviews')
    docs = reviews_ref.stream()

    output = {}
    for doc in docs:
        output[doc.id] = doc.to_dict()
    return output
</code></pre>
<p>I am not sure why this error is being generated and I couldn't find anything related to the FastAPI for this error.</p>
|
<python><google-cloud-firestore><fastapi>
|
2023-02-19 19:51:15
| 1
| 1,086
|
Prakhar Rathi
|
75,503,027
| 1,431,255
|
Programmatically register function as a test function in pytest
|
<p>I would like to programmatically add or mark a function as a test-case in pytest, so instead of writing</p>
<pre><code>def test_my_function():
    pass
</code></pre>
<p>I would like to do something like (pseudo-api, I know neither <code>pytest.add_test</code> nor <code>pytest.testcase</code> exist by that identifier).</p>
<pre><code>def a_function_specification():
    pass

pytest.add_test(a_function_specification)
</code></pre>
<p>or</p>
<p>I would like to do something like</p>
<pre><code>@pytest.testcase
def a_function_specification():
    pass
</code></pre>
<p>Basically I would like to write a test-case-generating decorator that doesn't work exactly like <code>pytest.mark</code>/parametrizing, which is why I started to dig into the internals, but I haven't found an obvious way to do this for Python code.</p>
<p>The YAML example in the pytest docs seems to use <code>pytest.Item</code>, but I have a hard time mapping this to something that would work within Python rather than as part of a non-Python-file test collection.</p>
|
<python><pytest>
|
2023-02-19 19:50:53
| 1
| 3,299
|
wirrbel
|
75,502,998
| 774,133
|
Lists become pd.Series, then again lists with one more dimension
|
<p>I have another problem with pandas; I'm afraid I will never get the hang of this library.</p>
<p>First, this is - I think - how <code>zip()</code> is supposed to work with lists:</p>
<pre><code>import numpy as np
import pandas as pd
a = [1,2]
b = [3,4]
print(type(a))
print(type(b))
vv = zip([1,2], [3,4])
for i, v in enumerate(vv):
    print(f"{i}: {v}")
</code></pre>
<p>with output:</p>
<pre><code><class 'list'>
<class 'list'>
0: (1, 3)
1: (2, 4)
</code></pre>
<p>Problem. I create a dataframe, with list elements (in the actual code the lists come from grouping ops and I cannot change them, basically they contain all the values in a dataframe grouped by a column).</p>
<pre><code># create dataframe
values = [{'x': list( (1, 2, 3) ), 'y': list( (4, 5, 6) )}]
df = pd.DataFrame.from_dict(values)
print(df)
           x          y
0  [1, 2, 3]  [4, 5, 6]
</code></pre>
<p>However, the columns holding the lists are now <code>pd.Series</code>:</p>
<pre><code>print(type(df["x"]))
<class 'pandas.core.series.Series'>
</code></pre>
<p>If I do this:</p>
<pre><code>col1 = df["x"].tolist()
col2 = df["y"].tolist()
print(f"col1 is of type {type(col1)}, with length {len(col1)}, first el is {col1[0]} of type {type(col1[0])}")
col1 is of type <class 'list'>, with length 1, first el is [1, 2, 3] of type <class 'list'>
</code></pre>
<p>Basically, <code>tolist()</code> returned a list of lists (why?):</p>
<p>Indeed:</p>
<pre><code>print("ZIP AND ITER")
vv = zip(col1, col2)
for v in zip(col1, col2):
print(v)
ZIP AND ITER
([1, 2, 3], [4, 5, 6])
</code></pre>
<p>I only need to compute this:</p>
<pre><code># this fails because x (y) is a list
# df['s'] = [np.sqrt(x**2 + y**2) for x, y in zip(df["x"], df["y"])]
</code></pre>
<p>I could use <code>df["x"][0]</code>, but that seems not very elegant.</p>
<p>Question:
How am I supposed to compute <code>sqrt(x^2 + y^2)</code> when <code>x</code> and <code>y</code> are lists stored in the two columns <code>df["x"]</code> and <code>df["y"]</code>?</p>
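<p>One way to compute it (a sketch; <code>np.hypot</code> is just a convenience for <code>sqrt(x**2 + y**2)</code>) is a row-wise <code>apply</code>, since NumPy broadcasts over the lists stored in each cell:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [[1, 2, 3]], "y": [[4, 5, 6]]})

# np.hypot(a, b) == sqrt(a**2 + b**2), element-wise over the two lists in each row
df["s"] = df.apply(lambda row: np.hypot(row["x"], row["y"]).tolist(), axis=1)
print(df["s"][0])
```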
|
<python><pandas><dataframe><numpy>
|
2023-02-19 19:46:48
| 3
| 3,234
|
Antonio Sesto
|
75,502,993
| 14,141,126
|
Looping through file types and attaching them to an email
|
<p>I have a list of text files and HTML files generated by two distinct functions. The files are labeled signal1.txt, signal2.txt, etc. and signal1.html, signal2.html, etc. I need to send an email with each file pair (signal1.txt and signal1.html, signal2.txt and signal2.html, and so forth).</p>
<p>I've tried several different ways, but I keep getting just one file pair attached (the last file number, whatever it is) over and over. I have no problem sending one file type, but it gets messy when I try with two different file types. I'd like to give you as much info as possible and perhaps enough reproducible code for you to try it out on your end if you wish, so my apologies for the long question.</p>
<p>The data is collected from the server. The final result is sorted using the Counter module:</p>
<pre><code>data = Counter({('A user account was locked out ', 47, 'medium', 25): 1, ('An attempt was made to reset an accounts password ', 73, 'high', 2): 1, ('PowerShell Keylogging Script', 73, 'high', 37): 1, ('PowerShell Suspicious Script with Audio Capture Capabilities', 47, 'medium', 36): 1})
</code></pre>
<p>I need the rule name to be used in the email subject, so everything else is junk. For instance, in <code>('A user account was locked out ', 47, 'medium', 25): 1</code>, I only need <code>A user account was locked out</code>. So the following function takes care of all that:</p>
<pre><code>def create_txt_files():
    global regex
    global count
    count = 0
    # Convert dict into string and remove unwanted chars
    for signal in dict(event_dict).keys():
        indiv_signal = (str(signal).replace(",", '').replace('(', '').replace(')', '')
                        .replace("'", '').replace('[', '').replace(']', ''))
        # Further removal of debris using regex
        pattern = '^(\D*)'
        regex = ''.join(re.findall(pattern, indiv_signal, re.MULTILINE))
        count += 1
        with open(f"signal{count}.txt", "w") as fh:
            fh.write(str(regex))

create_txt_files()
</code></pre>
<p>I also need to create html files that will go in the body of the email as a Dataframe. In this case I need almost all the fields in the data file. The dataframe should look like this:</p>
<pre><code> Alert Score Risk Severity Total
0 A user account was locked out 47 medium 26
</code></pre>
<p>The following function takes care of that:</p>
<pre><code># Create individual HTML files
def create_indiv_html_files():
    global html_file
    global count
    count = 0
    # Turn rows into columns
    for items in list(event_dict):
        df = pd.DataFrame(items)
        new_df = df.transpose()
        new_df.columns = ['Alert', 'Score Risk', 'Severity', 'Total']
        html_file = new_df.to_html()
        print(new_df)
        count += 1
        with open(f'signal{count}.html', 'w') as wf:
            wf.write(html_file)

create_indiv_html_files()
</code></pre>
<p>So, up to this point everything is fine and dandy, albeit not as pretty code as I'd like. But it works, and that's all I'm worried about now. The problem is that when I send the email, I'm getting only one rule (the last one) sent over and over. It's not iterating over the txt and html files and attaching them as it should.</p>
<p><a href="https://i.sstatic.net/vjGOl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vjGOl.png" alt="enter image description here" /></a></p>
<p>Here is the email function I'm using. Despite my several different attempts, I still have not been able to figure out what's wrong. Thank you for taking the time to help.</p>
<pre><code>from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email.mime.text import MIMEText
from email import encoders
import smtplib, ssl
import os

dirname = r'C:\Path\To\Files'
ext = ('.txt', 'html')

for files in os.scandir(dirname):
    if files.path.endswith(ext):
        def sendmail():
            html_body = '''
            <html>
            <body>
            <p style="font-size: 12;"> <strong>Alert</strong><br>{html_file}</p>
            </body>
            </html>
            '''.format(html_file=html_file)
            subject = f'Alert: {regex} '
            senders_email = 'mail@mail.comt'
            receiver_email = 'mail@mail.comt'

            # Create a multipart message and set headers
            message = MIMEMultipart('alternative')
            message['From'] = senders_email
            message['To'] = receiver_email
            message['Subject'] = subject

            # Attach email body
            message.attach(MIMEText(html_body, 'html'))

            # Name of the file to be attached
            filename = f'signal{count}.html'

            # Open file in binary mode
            with open(filename, 'rb') as attachment:
                # Add file as application/octet-stream
                part = MIMEBase('application', 'octet-stream')
                part.set_payload(attachment.read())

            # Encodes file in ASCII characters to send via email
            encoders.encode_base64(part)

            # Add header as key/value pair to attachment part
            part.add_header(
                'Content-Disposition',
                f"attachment; filename= {filename}",
            )

            # Add attachment to message and convert message to string
            message.attach(part)
            text = message.as_string()

            # Log into server using secure connection
            context = ssl.create_default_context()
            with smtplib.SMTP("smtp.mail.com", 25) as server:
                # server.starttls(context=context)
                # server.login(senders_email, 'password')
                server.sendmail(senders_email, receiver_email, text)
                print("Email sent!")

        sendmail()
</code></pre>
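<p>The likely culprit is that <code>sendmail</code> reads the module-level globals <code>count</code>, <code>html_file</code> and <code>regex</code>, which all refer to the <em>last</em> file by the time the loop runs. Below is a sketch of a loop that builds one message per pair and attaches both files (the helper name <code>build_messages</code> is mine, and the SMTP send is left out so the sketch stays self-contained):</p>

```python
import os
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart

def build_messages(dirname, sender, receiver):
    """Build one message per signalN pair, attaching both signalN.txt and signalN.html."""
    messages = []
    n = 1
    while os.path.exists(os.path.join(dirname, f"signal{n}.txt")):
        message = MIMEMultipart("alternative")
        message["From"], message["To"] = sender, receiver
        message["Subject"] = f"Alert: signal{n}"
        for ext in (".txt", ".html"):
            filename = f"signal{n}{ext}"
            with open(os.path.join(dirname, filename), "rb") as fh:
                part = MIMEBase("application", "octet-stream")
                part.set_payload(fh.read())
            encoders.encode_base64(part)
            part.add_header("Content-Disposition", f"attachment; filename={filename}")
            message.attach(part)
        messages.append(message)
        n += 1
    return messages
```

<p>Each message can then be sent inside the same loop with <code>server.sendmail(sender, receiver, message.as_string())</code>, so every pair gets its own email.</p>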
|
<python>
|
2023-02-19 19:46:21
| 1
| 959
|
Robin Sage
|
75,502,964
| 2,607,447
|
Different behavior between multiple nested lookups inside .filter and .exclude
|
<p>What's the difference between having multiple nested lookups inside <code>queryset.filter</code> and <code>queryset.exclude</code>?</p>
<p>For example car ratings. User can create ratings of multiple types for any car.</p>
<pre><code>class Car(Model):
    ...

class Rating(Model):
    type = ForeignKey('RatingType') # names like engine, design, handling
    user = ... # user
</code></pre>
<p>Let's try to get a list of cars without rating by user "a" and type "design".</p>
<p><strong>Approach 1</strong></p>
<pre><code>car_ids = Car.objects.filter(
    rating__user="A", rating__type__name="design"
).values_list('id', flat=True)

Car.objects.exclude(id__in=car_ids)
</code></pre>
<p><strong>Approach 2</strong></p>
<pre><code>Car.objects.exclude(
    rating__user="A", rating__type__name="design"
)
</code></pre>
<p><strong>Approach 1</strong> works well for me, whereas <strong>Approach 2</strong> seems to exclude more cars. My suspicion is that nested lookups inside <code>exclude</code> do not behave like AND (for the rating), but rather like OR.</p>
<p>Is that true? If not, why do these two approaches result in different querysets?</p>
|
<python><django><django-queryset><django-orm>
|
2023-02-19 19:42:43
| 1
| 18,885
|
Milano
|
75,502,794
| 5,197,329
|
pytest unittest, how to group test cases?
|
<p>I am new to unit testing and am trying to implement some tests in my latest project. However, I can't seem to get the structure quite right. In the following example I have a bunch of redundant code, and it still isn't working with <code>@pytest.mark.parametrize</code>.</p>
<p>Ideally, I would like my <code>test_select_childnode</code> to be run with various different games, but for each game I need to create a node and an MCTS object, which I then pass to the test along with an integer. I think I need parametrize to achieve this, but it doesn't seem to be working in this example. The example worked when I fed the fixtures directly into <code>test_select_childnode</code>, but then I would need to repeat that function for each game, along with repeating the fixtures, which seems like a lot of boilerplate that I'm sure could be done smarter.</p>
<pre><code>@pytest.fixture
def node_ttt():
    g = TicTacToeGame()
    node = Node(g)
    return node

@pytest.fixture
def mcts_ttt():
    g = TicTacToeGame()
    nnet = TicTacToeNNet(g, nn_args)
    model = NNetWrapper(g, nnet, nn_args)
    mcts_instance = MCTS(model, mcts_args)
    return mcts_instance

@pytest.mark.parametrize("mcts, node, action_idx", [
    (mcts_ttt, node_ttt, 0),
])
def test_select_childnode(mcts, node, action_idx):
    """
    Assert that childnode is creating a new node when needed.
    Assert that childnode is not creating a new node when not needed.
    """
    mcts.nodes[node.id] = node
    child_node = mcts.select_childnode(node, action_idx)
    child_node2 = mcts.select_childnode(node, action_idx)
    assert child_node != child_node2, "child nodes are not unique when they should be"
    mcts.add_node(child_node, node.id, action_idx)
    child_node2 = mcts.select_childnode(node, action_idx)
    assert child_node == child_node2, "accessing the same child node that we previously added, should not create a new node"
Testing started at 8:02 p.m. ...
Connected to pydev debugger (build 223.8617.48)
Launching pytest with arguments /home/tue/PycharmProjects/Hive_nn/tests/test_mcts.py --no-header --no-summary -q in /home/tue/PycharmProjects/Hive_nn/tests
============================= test session starts ==============================
collecting ... collected 1 item
test_mcts.py::test_select_childnode[mcts_ttt-node_ttt-0] FAILED [100%]
test_mcts.py:43 (test_select_childnode[mcts_ttt-node_ttt-0])
mcts = <function mcts_ttt at 0x7fe673be1f30>
node = <function node_ttt at 0x7fe673be1e10>, action_idx = 0
@pytest.mark.parametrize("mcts, node, action_idx", [
(mcts_ttt, node_ttt, 0),
])
def test_select_childnode(mcts, node, action_idx):
"""
Assert that childnode is creating a new node when needed.
Assert that childnode is not creating a new node when not needed.
"""
> mcts.nodes[node.id] = node
E AttributeError: 'function' object has no attribute 'nodes'
test_mcts.py:54: AttributeError
============================== 1 failed in 1.29s ===============================
Process finished with exit code 1
</code></pre>
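<p>The failure happens because the values passed to <code>parametrize</code> are the fixture <em>functions</em> themselves, not their results. One stock pattern is to parametrize over fixture <em>names</em> and resolve them with <code>request.getfixturevalue</code>; the stand-in <code>Node</code> class below is only there to keep the sketch self-contained:</p>

```python
import pytest

class Node:  # minimal stand-in for the real Node class
    def __init__(self, game):
        self.id = id(self)
        self.game = game

@pytest.fixture
def node_ttt():
    return Node("tic-tac-toe")

@pytest.fixture
def node_other():
    return Node("other-game")

@pytest.mark.parametrize("node_fixture, action_idx", [
    ("node_ttt", 0),
    ("node_other", 1),
])
def test_select_childnode(request, node_fixture, action_idx):
    node = request.getfixturevalue(node_fixture)  # resolves the fixture by name
    assert isinstance(node, Node)
    assert action_idx >= 0
```

<p>The same trick works for the <code>mcts</code> fixtures, so each game needs its fixture pair declared only once and the test body is never repeated.</p>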
|
<python><unit-testing><pytest>
|
2023-02-19 19:15:32
| 2
| 546
|
Tue
|
75,502,642
| 12,858,691
|
Pyplot: change size of tick lines on axis
|
<p>Consider this reproducable example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
labels = ['0','1','2','3','4','5','6','7','8','9','10','11','12','13','14','15','16','17','18','19','20','21']
fig, axes = plt.subplots(nrows=3, ncols=3,figsize=(6, 3.5),dpi=300)
plt.subplots_adjust(wspace=0.025, hspace=0.2)
for ax in [a for b in axes for a in b]:
ax.imshow(np.random.randint(2, size=(22,42)))
ax.set_xticks([0,6,12,18,24,30,36,41])
ax.tick_params(axis='x', which='major', labelsize=3)
ax.set_yticks([])
ax.set_aspect('equal')
for ax in [item for sublist in axes for item in sublist][0::3]:
ax.set_yticks(range(len(labels)))
ax.set_yticklabels(labels,fontsize=3)
fig.savefig("example.png",bbox_inches='tight')
</code></pre>
<p><a href="https://i.sstatic.net/gtCg4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gtCg4.png" alt="output png" /></a></p>
<p>My issue is that even though I changed the tick font size, the lines of each tick remain the same. This looks ugly and wastes a lot of space, especially on the X axes. Any ideas how to get those lines smaller, thus that the xlabels are closer to the axis?</p>
<p>PS <code>tight_layout()</code> does not help.</p>
|
<python><matplotlib>
|
2023-02-19 18:53:34
| 3
| 611
|
Viktor
|
75,502,494
| 18,749,472
|
Django ORM vs raw SQL - security
|
<p>My django project makes use of lots and lots of database queries, some are complex and some are basic SELECT queries with no conditions or logic involved.</p>
<p>So far I have been using the <code>sqlite3</code> module to manage my database instead of the Django ORM, which has worked very well. One drawback of raw SQL queries that I am aware of is their security flaws compared to Django's ORM, such as being vulnerable to SQL injection attacks when passing user input into the queries.</p>
<p>My question is - Is it absolutely necessary to use django's ORM for queries involving user input or can I use a general function to remove any potentially malicious characters eg <code>(,' -, *, ;)</code></p>
<pre><code>def remove_characters(string: str):
    characters = ["'", ";", "-", "*"]
    for char in characters:
        if char in string:
            string = string.replace(char, "")
    return string
</code></pre>
<p>Example of a vulnerable query in my project:</p>
<pre><code>username = "logan9997"
password = "x' or 'x' = 'x"
def check_login(self, username, password):
    sql = f"""
        SELECT *
        FROM App_user
        WHERE username = '{remove_characters(username)}'
        AND password = '{remove_characters(password)}'
    """
</code></pre>
<p>Without the <code>remove_characters</code> function, a hacker could gain access to someone else's account if the inputs were not sanitized.</p>
<p>Would this remove ALL threats of an SQL injection attack?</p>
<p>And would it just make more sense to use the ORM for queries involving user input?</p>
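<p>For reference, stripping characters does not remove all threats (and it corrupts legitimate input such as <code>O'Brien</code>). The standard answer, with or without the ORM, is parameterized queries; a minimal sketch with the stdlib <code>sqlite3</code> module and a throwaway in-memory table:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_user (username TEXT, password TEXT)")
conn.execute("INSERT INTO app_user VALUES ('logan9997', 'secret')")

def check_login(conn, username, password):
    # '?' placeholders make sqlite3 treat the input purely as data, never as SQL
    cur = conn.execute(
        "SELECT * FROM app_user WHERE username = ? AND password = ?",
        (username, password),
    )
    return cur.fetchone() is not None

print(check_login(conn, "logan9997", "secret"))          # True
print(check_login(conn, "logan9997", "x' or 'x' = 'x"))  # False: the injection fails
```

<p>Django's ORM performs exactly this parameter binding for you, which is why it is the safer default for anything touching user input.</p>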
|
<python><sql><django><security><sql-injection>
|
2023-02-19 18:30:50
| 0
| 639
|
logan_9997
|
75,502,448
| 654,019
|
split a string and get the first two sections (if they exist) in python
|
<p>I have these strings in Python:</p>
<pre><code>a=['One','one_two','one_two_three','one_two_three_four']
</code></pre>
<p>And I need to get the two first parts (splitting based on '_').</p>
<p>I can write something like this:</p>
<pre><code>for c in a:
    x = c.split("_", 2)
    if len(x) >= 2:
        y = x[0] + '_' + x[1]
    else:
        y = x[0]
    print(y)
</code></pre>
<p>But is there any better way to do this?</p>
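<p>One common idiom is to split with a limit and rejoin the first two parts; slicing past the end of a short list is safe, so the length check disappears:</p>

```python
a = ['One', 'one_two', 'one_two_three', 'one_two_three_four']

result = ['_'.join(c.split('_', 2)[:2]) for c in a]
print(result)  # ['One', 'one_two', 'one_two', 'one_two']
```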
|
<python>
|
2023-02-19 18:23:08
| 4
| 18,400
|
mans
|
75,502,400
| 9,983,652
|
Why is a newly installed version of a package not available in my virtual environment?
|
<p>I used to have yfinance 0.1.67 installed, and today I installed a new version, yfinance 0.2.12, using <code>conda install -c conda-forge yfinance</code>. However, I didn't see this new version; I was still using the old one, so I decided to remove yfinance and start again from scratch.</p>
<p>After removing yfinance completely, I used <code>conda list</code> to show all packages, and I don't see yfinance anymore. However, when I import yfinance, the old version is still there:</p>
<pre><code>
(dash_tf) C:\Users\test>python
Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import yfinance as yf
>>> print(yf.__version__)
0.1.67
</code></pre>
<p>Now I use <code>sys.executable</code> to see which Python I am using, and yes, it is the Python under the correct virtual environment:</p>
<pre><code>>>> import sys
>>> print(sys.executable)
C:\Users\test\miniconda3\envs\dash_tf\python.exe
</code></pre>
<p>So now, how do I remove this old version and install the new one? Thanks</p>
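<p>One way to diagnose this is to ask Python where it would import the module from; if the reported path lies outside the environment's <code>site-packages</code> (for example a leftover user-site or <code>pip</code> install), that shadowing copy is what needs deleting. The sketch uses <code>json</code> as a stand-in so it runs anywhere; substitute <code>"yfinance"</code> in the affected environment:</p>

```python
import importlib.util
import sys

def locate(module_name):
    """Report which interpreter is running and where a module would be imported from."""
    spec = importlib.util.find_spec(module_name)
    return {
        "interpreter": sys.executable,
        "module_path": None if spec is None else spec.origin,
    }

info = locate("json")  # use "yfinance" in the affected environment
print(info["interpreter"])
print(info["module_path"])
```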
|
<python><conda>
|
2023-02-19 18:15:48
| 0
| 4,338
|
roudan
|
75,502,336
| 1,549,736
|
How to properly implement single source project versioning in Python?
|
<p>I love the way <em>CMake</em> allows me to single source version my C/C++ projects, by letting me say:</p>
<pre><code>project(Tutorial VERSION 1.0)
</code></pre>
<p>in my <code>CMakeLists.txt</code> file and, then, use placeholders of the form:</p>
<pre class="lang-c prettyprint-override"><code>#define ver_maj @Tutorial_VERSION_MAJOR@
#define ver_min @Tutorial_VERSION_MINOR@
</code></pre>
<p>in my <code>*.h.in</code> file, which, when run through <code>configure_file()</code>, becomes:</p>
<pre class="lang-c prettyprint-override"><code>#define ver_maj 1
#define ver_min 0
</code></pre>
<p>in my equivalent <code>*.h</code> file.<br />
I'm then able to include that file anywhere I need access to my project version numbers.</p>
<p>This is decidedly different than the experience I have with Python projects.
In that case, I'm often rebuilding, because I forgot to sync. up the version numbers in my <code>pyproject.toml</code> and <code><module>/__init__.py</code> files.</p>
<p><strong>What is the preferred way to achieve something similar to <em>CMake</em> style single source project versioning in a Python project?</strong></p>
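<p>One widely used pattern keeps the version only in <code>pyproject.toml</code> and reads it back at runtime from the installed distribution's metadata (stdlib since Python 3.8); <code>"my-project"</code> below is a hypothetical distribution name:</p>

```python
from importlib.metadata import PackageNotFoundError, version

def get_version(dist_name: str, fallback: str = "0.0.0") -> str:
    """Read the single-sourced version from the installed package's metadata."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        # e.g. running from a source checkout that was never pip-installed
        return fallback

__version__ = get_version("my-project")  # hypothetical distribution name
```

<p>Build backends also support the opposite direction, where <code>pyproject.toml</code> declares the version as dynamic and reads it from <code>__init__.py</code> (e.g. setuptools' <code>version = {attr = "pkg.__version__"}</code>).</p>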
|
<python><cmake><versioning>
|
2023-02-19 18:06:30
| 1
| 2,018
|
David Banas
|
75,502,132
| 354,420
|
Using date picker as filter in django admin
|
<p>How to get date picker in Django Admin Filter?</p>
<p>Using a simple <code>list_filter</code> gives me text choices: Any date, last 7 days, last month, etc. But I want a simple date picker so I can choose just one date. I found django-admin-daterange, but I don't need a range, just ONE date.</p>
|
<python><django><django-admin>
|
2023-02-19 17:38:12
| 2
| 1,544
|
Tomasz Brzezina
|
75,502,050
| 8,913,338
|
Share an enum between C and Python
|
<p>I have an enum that is used for communication between a Python server and a C client.</p>
<p>I want to keep the enum in just a single file, preferably as a Python enum class.</p>
<p>I would also prefer to avoid run-time parsing of the C enum in Python.</p>
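<p>One way to keep the Python <code>enum</code> as the single source of truth is to generate the C header from it as a build step; the enum, names and guard below are all illustrative:</p>

```python
from enum import IntEnum

class MsgType(IntEnum):  # hypothetical protocol enum: the single source of truth
    HELLO = 0
    DATA = 1
    BYE = 2

def emit_c_header(enum_cls, guard):
    """Render an IntEnum as a C typedef enum wrapped in an include guard."""
    prefix = enum_cls.__name__.upper()
    lines = [f"#ifndef {guard}", f"#define {guard}", "typedef enum {"]
    lines += [f"    {prefix}_{m.name} = {m.value}," for m in enum_cls]
    lines += [f"}} {enum_cls.__name__.lower()}_t;", f"#endif /* {guard} */"]
    return "\n".join(lines)

print(emit_c_header(MsgType, "MSG_TYPE_H"))
```

<p>Running the generator from the C build (e.g. a Makefile rule) keeps the two sides in sync without any run-time parsing.</p>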
|
<python><c><enums>
|
2023-02-19 17:25:54
| 1
| 511
|
arye
|
75,501,913
| 2,026,022
|
How do I download PDF invoices from stripe.com for previous year?
|
<p>I need to download all invoices from stripe.com for the past year for accounting purposes. I didn't find a button for that, and when I contacted stripe.com support, they said it's not possible and that I should use the API if I can.</p>
<p>I found <a href="https://modernwebtools.com/stripe-invoice-exporter/" rel="nofollow noreferrer">this page</a>, but it wasn't working. I didn't want to spend much time on it, as I was sure this is a common use case and wondered why a fintech unicorn would not support it. Well, so I wrote a Python script for that. As I spent some time on it, I am sharing it here in the hope it will be useful to somebody else as well.</p>
|
<python><stripe-payments><invoice>
|
2023-02-19 17:08:56
| 1
| 2,347
|
Lucas03
|
75,501,499
| 9,601,748
|
Can't run a Flask server
|
<p>I'm trying to follow <a href="https://levelup.gitconnected.com/using-tensorflow-with-flask-and-react-ba52babe4bb5" rel="nofollow noreferrer">this</a> online tutorial for a Flask/Tensorflow/React application, but I'm having some trouble at the end now trying to run the Flask server.</p>
<p>Flask version: 2.2.3</p>
<p>Python version: 3.10.0</p>
<p>I've searched online for solutions, but nothing I've tried has worked. Here are the ways I've tried to run the application:</p>
<p><a href="https://i.sstatic.net/QNw10.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QNw10.png" alt="Attempts to run flask" /></a></p>
<p>Not sure if this could be helpful in coming to a solution, but in case it is, here's my app.py file:</p>
<pre><code>import os
import numpy as np
from flask import Flask, request
from flask_cors import CORS
from keras.models import load_model
from PIL import Image, ImageOps

app = Flask(__name__) # new
CORS(app) # new

@app.route('/upload', methods=['POST'])
def upload():
    # Disable scientific notation for clarity
    np.set_printoptions(suppress=True)

    # Load the model
    model = load_model("keras_Model.h5", compile=False)

    # Load the labels
    class_names = open("labels.txt", "r").readlines()

    # Create the array of the right shape to feed into the keras model
    # The 'length' or number of images you can put into the array is
    # determined by the first position in the shape tuple, in this case 1
    data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)

    # Replace this with the path to your image
    image = Image.open("<IMAGE_PATH>").convert("RGB")

    # resizing the image to be at least 224x224 and then cropping from the center
    size = (224, 224)
    image = ImageOps.fit(image, size, Image.Resampling.LANCZOS)

    # turn the image into a numpy array
    image_array = np.asarray(image)

    # Normalize the image
    normalized_image_array = (image_array.astype(np.float32) / 127.5) - 1

    # Load the image into the array
    data[0] = normalized_image_array

    # Run the prediction
    prediction = model.predict(data)
    index = np.argmax(prediction)
    class_name = class_names[index]
    confidence_score = prediction[0][index]

    # Print prediction and confidence score
    print("Class:", class_name[2:], end="")
    print("Confidence Score:", confidence_score)

    # A Flask view must return a response
    return {"class": class_name[2:].strip(), "confidence": float(confidence_score)}
</code></pre>
<p>Does anyone know what I'm doing wrong here, is there maybe something obvious I'm missing that's causing the problem? If there's any other info I can add that may be helpful, please let me know, thanks.</p>
<p>Edit:</p>
<p>I've added the execution section at the end of my code:</p>
<pre><code>if __name__ == '__main__':
    app.run(host="127.0.0.1", port=5000)
</code></pre>
<p>And now 'python -m flask run' does attempt to run the app, so the original question is answered. There does seem to be a subsequent problem now though: it constantly returns <a href="https://i.sstatic.net/jLoDx.jpg" rel="nofollow noreferrer">this</a> error. I installed tensorflow using 'pip3 install tensorflow' and it installs successfully, but then it always returns the module-not-found error. Tensorflow doesn't appear in the pip freeze package list; I am now looking into how/why this is.</p>
<p>Edit2: My question was flagged as already answered <a href="https://stackoverflow.com/questions/29882642/how-to-run-a-flask-application">here</a>, though I'm struggling to see how, as that post has absolutely no mention at all on why 'flask run' or any way of trying to run a flask app might not work, or what to do when it doesn't, which is what this question is. That post is simply discussing the 'correct' way to run a flask app, not how to run one at all when it's not running.</p>
|
<python><flask>
|
2023-02-19 16:07:47
| 1
| 311
|
Marcus
|
75,501,283
| 2,221,360
|
Slice array by providing string ':' as index in NumPy
|
<p>I want to extract the whole of the data from an array. The simplest method is simply <code>array[:]</code>. However, I want to automate this as part of a larger project where the index varies with the data format. Is it therefore possible to extract the whole array by passing the slicing string ":" as an index?</p>
<p>To make things clear, here is an example of what I am trying to do.</p>
<p>create an array:</p>
<pre><code>>>> import numpy as np
>>> a = np.random.randint(0,10,(5,5))
>>> a
array([[3, 3, 3, 7, 2],
[8, 6, 8, 6, 3],
[4, 2, 2, 0, 3],
[4, 0, 6, 0, 1],
[1, 2, 0, 2, 8]])
</code></pre>
<p>General slicing of dataset using tradition method:</p>
<pre><code>>>> a[:]
array([[3, 3, 3, 7, 2],
[8, 6, 8, 6, 3],
[4, 2, 2, 0, 3],
[4, 0, 6, 0, 1],
[1, 2, 0, 2, 8]])
</code></pre>
<p>It works. However, I intend to make <code>:</code> a variable and try to extract the data as in the example below:</p>
<pre><code>>>> b = ":"
>>> a[b]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
</code></pre>
<p>Based on the printed error, I made a correction when defining the variable, which also resulted in an error:</p>
<pre><code>>>> c = slice(':')
>>> a[c]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: slice indices must be integers or None or have an __index__ method
</code></pre>
<p>So, is it possible at all to extract data by passing a slicing string ":" as an index to an array?</p>
<p><strong>Update</strong></p>
<p>Thank you all for your comments. It is possible with the following method:</p>
<pre><code>>>> c = np.index_exp[:]
>>> a[c]
array([[3, 3, 3, 7, 2],
[8, 6, 8, 6, 3],
[4, 2, 2, 0, 3],
[4, 0, 6, 0, 1],
[1, 2, 0, 2, 8]])
</code></pre>
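<p>Besides <code>np.index_exp</code>, the string can also be parsed into a <code>slice</code> object directly; this small parser (my own sketch, handling integers and <code>:</code> expressions only) shows the idea:</p>

```python
import numpy as np

def parse_index(expr):
    """Turn strings like ':', '2' or '1:3' into something an array accepts."""
    if not isinstance(expr, str):
        return expr
    parts = expr.split(":")
    if len(parts) == 1:
        return int(parts[0])  # plain integer index
    start, stop, *step = (int(p) if p else None for p in parts)
    return slice(start, stop, step[0] if step else None)

a = np.arange(25).reshape(5, 5)
print(np.array_equal(a[parse_index(":")], a))        # True
print(np.array_equal(a[parse_index("1:3")], a[1:3])) # True
```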
|
<python><numpy><numpy-slicing>
|
2023-02-19 15:39:29
| 0
| 3,910
|
sundar_ima
|
75,501,243
| 6,366,251
|
How to install a package with pip from VCS URL only if not already installed?
|
<p>With pip’s legacy resolver, it was possible to install a package from a VCS URL only if it is not already installed:</p>
<pre><code>% pip install 'git+https://test.invalid/#egg=pip' --use-deprecated=legacy-resolver
Requirement already satisfied: pip from git+https://test.invalid/#egg=pip in ./.virtualenvs/test_vcs_install/lib/python3.10/site-packages (23.0.1)
</code></pre>
<p>However, that doesn’t work with the new resolver:</p>
<pre><code>% pip install 'git+https://test.invalid/#egg=pip'
Collecting pip
Cloning https://test.invalid/ to /tmp/pip-install-bocqltdk/pip_cb75496a41934db7b07abef3e91b78ab
Running command git clone --filter=blob:none --quiet https://test.invalid/ /tmp/pip-install-bocqltdk/pip_cb75496a41934db7b07abef3e91b78ab
fatal: unable to access 'https://test.invalid/': Could not resolve host: test.invalid
</code></pre>
<p>Is there a way to skip installation from a VCS URL if the package is already installed that works with the new resolver (and preferably the legacy resolver, too)?</p>
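A possible workaround (my assumption, not a documented pip feature) is to test for the installed distribution with the stdlib `importlib.metadata` first, and only shell out to pip when it is missing:

```python
import subprocess
import sys
from importlib.metadata import PackageNotFoundError, version

def install_if_missing(dist_name: str, vcs_url: str) -> None:
    """Invoke pip on the VCS URL only when dist_name isn't installed yet."""
    try:
        print(f"Requirement already satisfied: {dist_name} {version(dist_name)}")
    except PackageNotFoundError:
        subprocess.check_call([sys.executable, "-m", "pip", "install", vcs_url])

# 'pip' itself is installed here, so this prints instead of cloning the bogus URL
install_if_missing("pip", "git+https://test.invalid/#egg=pip")
```

This works with either resolver because pip is never invoked for an already-satisfied requirement.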
|
<python><pip>
|
2023-02-19 15:33:29
| 0
| 2,014
|
Manuel Jacob
|
75,501,212
| 12,276,690
|
Create a specific amount of function duplicates and run them simultaneously
|
<p>I have an async program based around a set of multiple infinitely-running functions that run simultaneously.</p>
<p>I want to allow users to run a specific number of duplicates of particular functions.</p>
<p>A code example of what I have now:</p>
<pre class="lang-py prettyprint-override"><code>async def run():
await asyncio.gather(
func_1(arg1, arg2),
func_2(arg2),
func_3(arg1, arg3),
)
loop = asyncio.get_event_loop()
loop.run_until_complete(run())
loop.close()
</code></pre>
<p>Let's say a user wants to run 2 instances of <code>func_2</code>. I want the core code to look like this:</p>
<pre class="lang-py prettyprint-override"><code>async def run():
await asyncio.gather(
func_1(arg1, arg2),
func_2(arg2),
func_2(arg2),
func_3(arg1, arg3),
)
loop = asyncio.get_event_loop()
loop.run_until_complete(run())
loop.close()
</code></pre>
<p>Any way to elegantly achieve this?</p>
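One way to sketch this (the `jobs` structure below is my assumption, not part of the question) is to build the coroutine list dynamically and unpack it into `asyncio.gather`:

```python
import asyncio

async def func_1(arg):
    await asyncio.sleep(0)  # stand-in for an infinitely-running task
    return f"f1({arg})"

async def func_2(arg):
    await asyncio.sleep(0)
    return f"f2({arg})"

async def run(jobs):
    # jobs: list of (coroutine function, args tuple, number of instances)
    coros = [func(*args) for func, args, n in jobs for _ in range(n)]
    return await asyncio.gather(*coros)

results = asyncio.run(run([(func_1, ("x",), 1), (func_2, ("y",), 2)]))
print(results)  # ['f1(x)', 'f2(y)', 'f2(y)']
```

`asyncio.run` is used here in place of the manual event-loop handling; the gather call itself is unchanged.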
|
<python><multithreading><algorithm><concurrency><python-asyncio>
|
2023-02-19 15:28:55
| 1
| 620
|
TimesAndPlaces
|
75,501,133
| 1,614,466
|
Unsupported Interpolation Type using env variables in Hydra
|
<p>What I'm trying to do: use environment variables in a Hydra config.</p>
<p>I worked from the following links: <a href="https://omegaconf.readthedocs.io/en/1.4_branch/usage.html#environment-variable-interpolation" rel="noreferrer">OmegaConf: Environment variable interpolation</a> and <a href="https://hydra.cc/docs/configure_hydra/job/#hydrajobenv_copy" rel="noreferrer">Hydra: Job Configuration</a>.</p>
<p>This is my <code>config.yaml</code>:</p>
<pre><code>hydra:
job:
env_copy:
- EXPNAME
# I also tried hydra:EXPNAME and EXPNAME,
# which return None
test: ${env:EXPNAME}
</code></pre>
<p>Then I set the environment variable (Ubuntu) with:</p>
<pre><code>export EXPNAME="123"
</code></pre>
<p>The error I get is</p>
<pre><code>omegaconf.errors.UnsupportedInterpolationType: Unsupported interpolation type env
full_key: test
object_type=dict
</code></pre>
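For reference, OmegaConf 2.1+ (bundled with recent Hydra releases) renamed the built-in resolver, which is the usual cause of this exact error message; a sketch of the adjusted config, hedged on your Hydra/OmegaConf versions:

```yaml
# Assumes OmegaConf >= 2.1, where the legacy 'env' resolver
# was replaced by 'oc.env':
test: ${oc.env:EXPNAME}
# Optionally with a default for when the variable is unset:
# test: ${oc.env:EXPNAME,some_default}
```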
|
<python><hydra><omegaconf>
|
2023-02-19 15:17:16
| 1
| 839
|
KSHMR
|
75,500,991
| 5,197,270
|
Spoof ongoing datetime in Python
|
<p>I am looking for the simplest way to spoof a live <code>datetime</code>. Specifically, I would like it to start at a specific time, say <code>2023-01-03 15:29</code>, and keep going, so that the clock is ticking, so to speak.</p>
<p>There are plenty of ways to spoof the current <code>datetime</code>, but I haven't found one that does so continuously, so that the fake time keeps moving.</p>
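A minimal stdlib-only sketch, assuming your code can call a helper instead of `datetime.now()` directly: keep a fixed offset from the real clock, so the fake time keeps ticking:

```python
from datetime import datetime

class FakeClock:
    """A 'now' that starts at a chosen moment and keeps moving forward."""

    def __init__(self, start: datetime):
        # Fixed offset between the desired start and the real clock
        self._offset = start - datetime.now()

    def now(self) -> datetime:
        return datetime.now() + self._offset

clock = FakeClock(datetime(2023, 1, 3, 15, 29))
print(clock.now())  # starts near 2023-01-03 15:29 and ticks in real time
```

If patching `datetime` globally is needed, third-party tools such as freezegun also support a ticking clock via `freeze_time(..., tick=True)`.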
|
<python><datetime>
|
2023-02-19 14:55:10
| 2
| 411
|
scott_m
|
75,500,948
| 12,248,328
|
Python arbritary keys in TypedDict
|
<p>Is it possible to make a TypedDict with a set of known keys and then a type for an arbitrary key? For example, in TypeScript, I could do this:</p>
<pre><code>interface Sample {
x: boolean;
y: number;
[name: string]: string;
}
</code></pre>
<p>What is the equivalent in Python?</p>
<hr />
<p>Edit: My problem is that I am making a library where a function's argument expects a type of <code>Mapping[str, SomeCustomClass]</code>. I want to change the type so that there are two special keys, whose types are like this:</p>
<ul>
<li><code>labels: Mapping[str, str]</code></li>
<li><code>aliases: Mapping[str, List[str]]</code></li>
</ul>
<p>But I want it to still be an arbitrary mapping where if a key is not in the two special cases above, it should be of type <code>SomeCustomClass</code>. What is a good way to do this? I am not interested in making a backwards-incompatible change if there is an alternative.</p>
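For context, current `TypedDict` cannot mix declared keys with a catch-all key type. One hedged workaround (with `SomeCustomClass` below as a stand-in for the library's real class) widens the value type to a union and narrows at runtime:

```python
from typing import List, Mapping, Union

class SomeCustomClass:  # hypothetical stand-in for the library's real class
    pass

# Every value is one of three shapes; callers narrow with isinstance checks.
ConfigValue = Union[Mapping[str, str], Mapping[str, List[str]], SomeCustomClass]

config: Mapping[str, ConfigValue] = {
    "labels": {"en": "Label"},
    "aliases": {"en": ["alias-1", "alias-2"]},
    "anything_else": SomeCustomClass(),
}

assert isinstance(config["anything_else"], SomeCustomClass)
```

This stays backwards compatible at runtime (it is still a plain mapping); only the static type widens. A draft proposal (PEP 728, `extra_items`) targets the TypeScript-style pattern directly but, as of this writing, has not shipped.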
|
<python><python-3.x><python-typing><typeddict>
|
2023-02-19 14:48:37
| 1
| 423
|
Epic Programmer
|
75,500,937
| 19,771
|
Setting colors on a trimesh union mesh according to the original component meshes
|
<p>I'm working with the <a href="https://trimsh.org/" rel="nofollow noreferrer">trimesh</a> Python library, and I couldn't wrap my head around working with colors from reading the docs (a Spartan API reference) and examples. I'm not even sure if I'm setting the face colors right, if I can modify <code>mesh.visual.face_colors</code> directly or if I should make a <code>ColorVisuals</code>-type object.</p>
<p>But the main question is, can I set the color of different faces of a mesh <em>approximately</em> (I know there can be nasty edge cases) according to it being "overlapping" to faces in the original meshes I had before?</p>
<pre><code>from shapely import Polygon
import trimesh
pts = ((100, 100), (400, 100), (400, 400), (100, 400))
hole = ((150, 150), (350, 150), (350, 350), (150, 350))
p = Polygon(pts, [hole])
mesh = trimesh.creation.extrude_polygon(p, 100)
other = mesh.copy()
other.apply_translation((150, 50, 50))
mesh = mesh.union(other)
# Silly idea (most faces wouldn't be the same) and it doesn't work, I get an error.
# other_v_set = set(other.vertices)
# colors = [([255, 0, 0] if set(mesh.vertices[v] for v in f).issubset(other_v_set)
# else [0, 255, 0]) for f in mesh.faces]
# mesh.visual = trimesh.visual.ColorVisuals(mesh, colors)
mesh.show()
</code></pre>
<p><a href="https://i.sstatic.net/xvMbG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xvMbG.png" alt="mesh resulting from the union of two frame-like meshes, with text asking the main question about coloring the different parts" /></a></p>
|
<python><trimesh>
|
2023-02-19 14:45:45
| 0
| 368
|
villares
|
75,500,915
| 5,912,144
|
Imported python class instance inheritance
|
<p>Currently I'm inheriting from an external class and importing another one locally:</p>
<pre><code>from locust.contrib.fasthttp import FastHttpUser
from local_helper import TaskDetails
class User_1(FastHttpUser):
@task
def iteration_task(self):
self.client.get(url)
</code></pre>
<p>I want to offload <code>self.client.get(url)</code> to <code>TaskDetails</code>.</p>
<p>What I want to achieve is something like this:</p>
<pre><code>td = TaskDetails()
class User_1(FastHttpUser):
@task
td.client_get(url) # with included FastHttpUser methods from User_1 class
</code></pre>
<p>is it possible to do something like this?</p>
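A composition sketch (the fake classes below are stand-ins so the example runs without locust installed): pass the user instance into the helper so it can use that user's `client`:

```python
class TaskDetails:
    """Helper that performs requests on behalf of any user object."""

    def client_get(self, user, url):
        # `user` must expose a .client with a .get method, like FastHttpUser
        return user.client.get(url)

# Stand-ins so the sketch runs without locust installed:
class FakeClient:
    def get(self, url):
        return f"GET {url}"

class FakeUser:
    client = FakeClient()

td = TaskDetails()
print(td.client_get(FakeUser(), "/health"))  # GET /health
```

In the real class, a `@task`-decorated method would call `td.client_get(self, url)`; the decorator still has to sit on a method, not on a bare statement.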
|
<python><python-3.x><oop><inheritance><locust>
|
2023-02-19 14:42:53
| 0
| 3,035
|
Ardhi
|
75,500,906
| 19,325,656
|
Combine models to get cohesive data
|
<p>I'm writing an app in which I store data in separate models. Now I need to combine this data to use it.</p>
<p><strong>The problem</strong></p>
<p>I have three models:</p>
<pre><code>class User(AbstractBaseUser, PermissionsMixin):
email = models.EmailField(unique=True)
first_name = models.CharField(max_length=50, blank=True)
...
class Contacts(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="user")
contact_user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="contact_user")
class UserPhoto(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
url = models.CharField(max_length=220)
</code></pre>
<p>How can I get the current user's contacts with their names and pictures, serialized like this:</p>
<pre><code>{
{
"contact_user":"1",
"first_name":"Mark ",
"url":first picture that corresponds to contact_user id
},
{
"contact_user":"2",
"first_name":"The Rock",
"url":first picture that corresponds to contact_user id
}
}
</code></pre>
<p>Now I'm querying the <em>Contacts</em> model to get all <code>contact_user</code> ids that the current user has a connection to.</p>
<pre><code>class MatchesSerializer(serializers.ModelSerializer):
class Meta:
model = Contacts
fields = '__all__'
depth = 1
class ContactViewSet(viewsets.ModelViewSet):
serializer_class = ContactsSerializer
def get_queryset(self):
return Contacts.objects.filter(user__id=self.request.user.id)
</code></pre>
|
<python><django><django-models><django-rest-framework><django-serializer>
|
2023-02-19 14:42:06
| 1
| 471
|
rafaelHTML
|
75,500,838
| 13,803,549
|
How to change Discord.py button color and disable on interaction
|
<p>I am trying to change the button color to grey and disable it once it is clicked but nothing I find seems to be working. It really seems like it should be simple enough to do and I tried 'edit_message' but maybe I used it wrong. Here is the code for my button, I took out all the irrelevant code.</p>
<p>I really appreciate any help you can offer.
Thanks!</p>
<pre class="lang-py prettyprint-override"><code> @discord.ui.button(label="Daily Game", style=discord.ButtonStyle.blurple)
async def daily_fantasy(self, interaction:discord.Interaction,
button:discord.ui.Button):
await interaction.response.send_message(content=f"Let's get started!",
ephemeral=True)
</code></pre>
|
<python><discord><discord.py>
|
2023-02-19 14:31:06
| 1
| 526
|
Ryan Thomas
|
75,500,774
| 1,934,903
|
executing java command from script gives error even though same command from cli works fine
|
<p>I'm writing a python script that amongst other things launches a jar file.</p>
<p>I'm using the following method to fire the java command:</p>
<pre><code> def first_launch(self):
if self.is_initialize:
subprocess.run(
[f"-Xmx{self.max_memory}M", f"-Xms{self.min_memory}M", f"-jar server.jar", "nogui"],
executable="/usr/bin/java",
cwd=self.directory
)
else:
raise Exception("Server is not initialized")
</code></pre>
<p>Where the var values are:</p>
<ul>
<li>directory: /Users/cschmitz/Desktop/opt/minecraft/server</li>
<li>max memory: 2048</li>
<li>min memory: 1024</li>
</ul>
<p>which should all come out to be the command:</p>
<pre><code>/usr/bin/java -Xmx2048M -Xms1024M -jar server.jar nogui
</code></pre>
<p>When I run this command from my terminal it works fine:
<a href="https://i.sstatic.net/h4J55.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h4J55.gif" alt="running from the cli" /></a></p>
<p>But when I try firing it from the python subprocess I get the error:</p>
<blockquote>
<p>The operation couldn’t be completed. Unable to locate a Java Runtime that supports -Xmx2048M.
Please visit <a href="http://www.java.com" rel="nofollow noreferrer">http://www.java.com</a> for information on installing Java.</p>
</blockquote>
<p>I'm using an absolute path to my java executable (the same one I'm using in the CLI) so it seems unlikely that I'd be grabbing a different version of the command (like if my CLI path included a different version of the command than what my Python file had access to).</p>
<p>I searched around for answers online and I saw where you'd see this message if you only had the java runtime and not the dev kit, but as far as I can tell I have the jdk installed (or at least I figured that if I installed open jdk I wouldn't get just the runtime environment).</p>
<pre><code>❯ java -version
openjdk version "19.0.1" 2022-10-18
OpenJDK Runtime Environment Homebrew (build 19.0.1)
OpenJDK 64-Bit Server VM Homebrew (build 19.0.1, mixed mode, sharing)
</code></pre>
<p>And really the fact that I can launch it just fine from the cli suggests that I have the right tools installed.</p>
<p>Any ideas of what I'm doing wrong or misunderstanding??</p>
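Two likely issues worth checking (assumptions, but consistent with the error): when `executable=` is given, the first element of the args list is passed as `argv[0]` (the program name), so `-Xmx…` is consumed as the name rather than as an option; and `f"-jar server.jar"` reaches java as one single argument. A sketch that builds the full argument list instead:

```python
import subprocess  # used in the commented-out run line at the bottom

def build_java_cmd(max_memory: int, min_memory: int) -> list:
    # Each flag must be its own list element; "-jar" and "server.jar" separate.
    # Without an executable= override, args[0] is the program to run.
    return [
        "/usr/bin/java",
        f"-Xmx{max_memory}M",
        f"-Xms{min_memory}M",
        "-jar", "server.jar",
        "nogui",
    ]

cmd = build_java_cmd(2048, 1024)
print(" ".join(cmd))  # /usr/bin/java -Xmx2048M -Xms1024M -jar server.jar nogui
# subprocess.run(cmd, cwd="/Users/cschmitz/Desktop/opt/minecraft/server")
```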
|
<python><java>
|
2023-02-19 14:18:50
| 1
| 21,108
|
Chris Schmitz
|
75,500,758
| 19,115,554
|
What is the Group.lostsprites attribute for in pygame?
|
<p>In pygame, groups have a <code>lostsprites</code> attribute. What is this for?<br />
Link to where its first defined in the code: <a href="https://github.com/pygame/pygame/blob/main/src_py/sprite.py#L361" rel="nofollow noreferrer">pygame/src_py/sprite.py</a></p>
<p>It seems to be some sort of internal thing as I was unable to find any documentation on its purpose:</p>
<ul>
<li>Searching on the pygame website yields 1 result (which doesn't explain its purpose):<br />
<a href="https://www.pygame.org/docs/search.html?q=lostsprites" rel="nofollow noreferrer">lostsprite - Search Results</a></li>
<li>I also tried searching on google but I couldn't find anything</li>
</ul>
|
<python><pygame><sprite><internals>
|
2023-02-19 14:16:54
| 1
| 602
|
MarcellPerger
|
75,500,738
| 15,724,084
|
python selenium getting urls from google search results
|
<p>I am trying to get the first 10 URLs from Google search results with Selenium. I know there is a property other than <code>innerHTML</code> that will give me just the text inside the <code>cite</code> tags.</p>
<p>here is code</p>
<pre><code>#open google
from selenium.webdriver.chrome.options import Options
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium.webdriver.common.keys import Keys
chrome_options = Options()
chrome_options.headless = False
chrome_options.add_argument("start-maximized")
# options.add_experimental_option("detach", True)
chrome_options.add_argument("--no-sandbox")
chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
chrome_options.add_experimental_option('excludeSwitches', ['enable-logging'])
chrome_options.add_experimental_option('useAutomationExtension', False)
chrome_options.add_argument('--disable-blink-features=AutomationControlled')
driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=chrome_options)
driver.get('https://www.google.com/')
#paste - write name
#var_inp=input('Write the name to search:')
var_inp='python google search'
#search for image
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.NAME, "q"))).send_keys(var_inp+Keys.RETURN)
#find first 10 companies
res_lst=[]
res=WebDriverWait(driver,10).until(EC.presence_of_all_elements_located((By.TAG_NAME,'cite')))
print(len(res))
for r in res:
print(r.get_attribute('innerHTML'))
#take email addresses from company
#send email
</code></pre>
<p>the result is below</p>
<pre><code>https://github.com<span class="dyjrff qzEoUe" role="text"> › opsdisk</span>
https://blog.apilayer.com<span class="dyjrff qzEoUe" role="text"> › h...</span>
https://blog.apilayer.com<span class="dyjrff qzEoUe" role="text"> › h...</span>
</code></pre>
<p>I want to get rid of the <code>&lt;span&gt;…</code> markup as I need only the URLs. I could strip it with a regex, but I need <code>get_attribute('TEXT')</code> or something else that will easily give the result.</p>
|
<python><selenium-webdriver><getattribute>
|
2023-02-19 14:15:01
| 2
| 741
|
xlmaster
|
75,500,723
| 8,792,159
|
How to allow each subplot in hvplot to have its own x-axis scaling?
|
<p>I have a data frame that has three columns: <code>stage</code> (which has two values, <code>"before"</code> and <code>"after"</code>, which stands for "before and after preprocessing"), <code>variable</code> (holding the name for each variable) and <code>value</code> (holding the values for each variable for different subjects). I would like to plot the histogram of values for each variable separately for each stage (<code>"before"</code> and <code>"after"</code>). I came up with the following solution:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import hvplot.pandas
import hvplot
from holoviews import opts
df = pd.DataFrame(
{'stage':['before','before','before','before','after','after','after','after'],
'variable':['foo','foo','bar','bar','foo','foo','bar','bar'],
'value':[1,1,2,2,1,1,30,30]})
# Create separate histograms for the "before" and "after" groups
histograms_before = df[df['stage'] == 'before'].hvplot.hist(y='value',by='variable',subplots=True)
histograms_after = df[df['stage'] == 'after'].hvplot.hist(y='value',by='variable',subplots=True)
# Combine the "before" and "after" histograms into a single plot with two columns
histograms = histograms_before.cols(1) + histograms_after.cols(1)
# Display the histograms with a scrollbar
histograms.opts(opts.Layout(shared_axes=False)).cols(2)
hvplot.show(histograms)
</code></pre>
<p>which gives you the following facet plot:</p>
<p><a href="https://i.sstatic.net/ySrfH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ySrfH.png" alt="enter image description here" /></a></p>
<p>This however forces all x-axes in each column to have the same x-axis scaling. For example, the variable <code>foo</code> has the same values for <code>"before"</code> and <code>"after"</code> so I would want the x-axis scaling to be the same for this variable.</p>
|
<python><holoviews><hvplot>
|
2023-02-19 14:12:33
| 1
| 1,317
|
Johannes Wiesner
|
75,500,672
| 12,285,101
|
pandas changed column value condition of three other columns
|
<p>I have the following pandas dataframe:</p>
<pre><code>df = pd.DataFrame({'pred': [1, 2, 3, 4],
'a': [0.4, 0.6, 0.35, 0.5],
'b': [0.2, 0.4, 0.32, 0.1],
'c': [0.1, 0, 0.2, 0.2],
'd': [0.3, 0, 0.1, 0.2]})
</code></pre>
<p>I want to change values in the 'pred' column, based on columns a, b, c, d, as follows:</p>
<p><strong>if</strong> the value in column a is larger than the values in columns b, c and d<br />
and<br />
<strong>if</strong> one of the columns b, c or d has a value larger than 0.25</p>
<p>then change the value in 'pred' to 0, so the result should be:</p>
<pre><code> pred a b c d
0 1 0.4 0.2 0.1 0.1
1 0 0.6 0.4 0.0 0.0
2 0 0.35 0.32 0.2 0.3
3 4 0.5 0.1 0.2 0.2
</code></pre>
<p>How can I do this?</p>
|
<python><pandas>
|
2023-02-19 14:05:28
| 2
| 1,592
|
Reut
|
75,500,585
| 4,686,346
|
Does Python load a directory of parquet files in order while creating a Dataframe?
|
<p>A large dataframe is sorted using OrderBy in Spark. The output generated is a directory with 200 parquet files containing the sorted data.</p>
<p>To combine all the parquet files into a single parquet file, I'm using Python. When the directory is loaded into a dataframe using <code>pandas.read_parquet("Path_to/Directory_containing_200parquet_files")</code>, is the sort order maintained, or are the parquet files read in random order?</p>
|
<python><apache-spark><sorting><pyspark><parquet>
|
2023-02-19 13:50:14
| 0
| 620
|
TheLastCoder
|
75,500,307
| 13,184,183
|
How to vectorize slicing operations on torch tensors?
|
<p>I have a bunch of variables expressed as list comprehensions. I want to turn them into <code>torch.tensor</code>s; so far I have:</p>
<pre><code>import torch
n = 10
y = torch.rand(n ** 2, requires_grad=True)
one_node_per_position = torch.FloatTensor([sum(y[k:k + n]) - 1 for k in range(0, n ** 2, n)])
one_node_per_point = torch.FloatTensor([sum(y[j::n]) - 1 for j in range(n)])
connectivity = torch.FloatTensor([sum(y[k:k + n]) - sum(y[k - n:k]) for k in range(n, n ** 2, n)])
</code></pre>
<p>But it obviously doesn't look good. How can I rewrite it to take advantage of vectorization for further use?</p>
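A hedged sketch of the reshape-based vectorization, written with NumPy so it runs here; the same `reshape`/`sum` pattern applies to a torch tensor (with `dim=` in place of `axis=`), and operating on the tensor directly keeps `requires_grad` intact, unlike rebuilding results via `torch.FloatTensor`:

```python
import numpy as np

n = 4
y = np.arange(n * n, dtype=float)  # stand-in for the flat tensor

rows = y.reshape(n, n)
one_node_per_position = rows.sum(axis=1) - 1   # replaces sum(y[k:k+n]) - 1
one_node_per_point = rows.sum(axis=0) - 1      # replaces sum(y[j::n]) - 1
row_sums = rows.sum(axis=1)
connectivity = row_sums[1:] - row_sums[:-1]    # adjacent row-sum differences

# Cross-check against the original list comprehensions:
assert np.allclose(one_node_per_position,
                   [sum(y[k:k + n]) - 1 for k in range(0, n * n, n)])
assert np.allclose(one_node_per_point,
                   [sum(y[j::n]) - 1 for j in range(n)])
assert np.allclose(connectivity,
                   [sum(y[k:k + n]) - sum(y[k - n:k]) for k in range(n, n * n, n)])
```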
|
<python><torch>
|
2023-02-19 12:58:08
| 1
| 956
|
Nourless
|
75,500,303
| 65,545
|
pip wxpython gives ModuleNotFoundError: No module named 'attrdict'
|
<p>Installing wxpython with pip gives the error <code>ModuleNotFoundError: No module named 'attrdict'</code></p>
<h2>Details:</h2>
<p>py -3.10-64 -m pip install -U wxpython</p>
<pre><code>Collecting wxpython
Using cached wxPython-4.2.0.tar.gz (71.0 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [8 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\Bernard\AppData\Local\Temp\pip-install-dokcizpt\wxpython_662eefb4314c47eba7b194b4d07a8e18\setup.py", line 27, in <module>
from buildtools.config import Config, msg, opj, runcmd, canGetSOName, getSOName
File "C:\Users\Bernard\AppData\Local\Temp\pip-install-dokcizpt\wxpython_662eefb4314c47eba7b194b4d07a8e18\buildtools\config.py", line 30, in <module>
from attrdict import AttrDict
ModuleNotFoundError: No module named 'attrdict'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<h2>What works</h2>
<p>Installing other packages works, e.g.:</p>
<pre><code>py -3.10-64 -m pip install -U mido
Requirement already satisfied: mido in c:\python311\lib\site-packages (1.2.10)
</code></pre>
<h2>Version info</h2>
<p>Windows 10 22H2
pip 23.0.1 from C:\Python311\Lib\site-packages\pip (python 3.11)</p>
<h2>Context</h2>
<p>This is used in the fluidpatcher installer, I logged a bug <a href="https://github.com/albedozero/fluidpatcher/issues/74" rel="noreferrer">here</a>.</p>
<h1>Update 1</h1>
<p>Seems to be a known issue reported here: <a href="https://github.com/wxWidgets/Phoenix/issues/2296" rel="noreferrer">https://github.com/wxWidgets/Phoenix/issues/2296</a></p>
<p>Tried workaround of manually installing</p>
<pre><code>py -3.10-64 -m pip install -U attrdict3
</code></pre>
<p>Which installs.</p>
<p>Then retried the wxpython install</p>
<pre><code>py -3.10-64 -m pip install -U wxpython
</code></pre>
<p>Which fails, this time with a different error message</p>
<pre><code>Collecting wxpython
Using cached wxPython-4.2.0.tar.gz (71.0 MB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: pillow in c:\python311\lib\site-packages (from wxpython) (9.4.0)
Requirement already satisfied: six in c:\python311\lib\site-packages (from wxpython) (1.16.0)
Requirement already satisfied: numpy in c:\python311\lib\site-packages (from wxpython) (1.24.2)
Installing collected packages: wxpython
DEPRECATION: wxpython is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for wxpython ... error
error: subprocess-exited-with-error
× Running setup.py install for wxpython did not run successfully.
│ exit code: 1
╰─> [49 lines of output]
C:\Python311\Lib\site-packages\setuptools\dist.py:771: UserWarning: Usage of dash-separated 'license-file' will not be supported in future versions. Please use the underscore name 'license_file' instead
warnings.warn(
C:\Python311\Lib\site-packages\setuptools\config\setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
warnings.warn(msg, warning_class)
C:\Python311\Lib\site-packages\setuptools\dist.py:317: DistDeprecationWarning: use_2to3 is ignored.
warnings.warn(f"{attr} is ignored.", DistDeprecationWarning)
running install
C:\Python311\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
Will build using: "C:\Python311\python.exe"
3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)]
Python's architecture is 64bit
cfg.VERSION: 4.2.0
Running command: build
Running command: build_wx
Command '"C:\Python311\python.exe" -c "import os, sys, setuptools.msvc; setuptools.msvc.isfile = lambda path: path is not None and os.path.isfile(path); ei = setuptools.msvc.EnvironmentInfo('x64', vc_min_ver=14.0); env = ei.return_env(); env['vc_ver'] = ei.vc_ver; env['vs_ver'] = ei.vs_ver; env['arch'] = ei.pi.arch; env['py_ver'] = sys.version_info[:2]; print(env)"' failed with exit code 1.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python311\Lib\site-packages\setuptools\msvc.py", line 1120, in __init__
self.si = SystemInfo(self.ri, vc_ver)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\setuptools\msvc.py", line 596, in __init__
vc_ver or self._find_latest_available_vs_ver())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\setuptools\msvc.py", line 610, in _find_latest_available_vs_ver
raise distutils.errors.DistutilsPlatformError(
distutils.errors.DistutilsPlatformError: No Microsoft Visual C++ version found
Finished command: build_wx (0m1.80s)
Finished command: build (0m1.80s)
WARNING: Building this way assumes that all generated files have been
generated already. If that is not the case then use build.py directly
to generate the source and perform the build stage. You can use
--skip-build with the bdist_* or install commands to avoid this
message and the wxWidgets and Phoenix build steps in the future.
"C:\Python311\python.exe" -u build.py build
Command '"C:\Python311\python.exe" -u build.py build' failed with exit code 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> wxpython
</code></pre>
<h1>Update 2</h1>
<p>Workaround: install Python 3.10.</p>
|
<python><pip><wxpython>
|
2023-02-19 12:57:36
| 1
| 5,146
|
Bernard Vander Beken
|
75,499,934
| 10,499,034
|
How to populate a 2D array from a list left to right with the diagonal being all zeroes
|
<p>I have a list:</p>
<pre><code>idmatrixlist=[0.61, 0.63, 0.54, 0.82, 0.58, 0.57]
</code></pre>
<p>I need to populate an array from left to right while maintaining the zeroes on the diagonal, so that the resulting array looks like this:</p>
<p><a href="https://i.sstatic.net/FwOUe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FwOUe.png" alt="enter image description here" /></a></p>
<p>I have tried the following code but it results in the wrong ordering of the entries.</p>
<pre><code>lowertriangleidmatrix = np.zeros((4,4))
indexer = np.tril_indices(4,k=-1)
lowertriangleidmatrix[indexer] = idmatrixlist
print(lowertriangleidmatrix)
result:
[[0. 0. 0. 0. ]
[0.61 0. 0. 0. ]
[0.63 0.54 0. 0. ]
[0.82 0.58 0.57 0. ]]
</code></pre>
<p>How can this be re-ordered?</p>
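If the intended layout is to consume the list left to right while filling down the columns of the lower triangle (i.e. the transpose of what `tril_indices` yields), one common fix is to fill the upper triangle row by row and then transpose; this is hedged, since the target image isn't reproduced here:

```python
import numpy as np

idmatrixlist = [0.61, 0.63, 0.54, 0.82, 0.58, 0.57]

m = np.zeros((4, 4))
# triu_indices(4, k=1) walks the upper triangle row by row, left to right
m[np.triu_indices(4, k=1)] = idmatrixlist
lowertriangleidmatrix = m.T  # transpose drops the values into the lower triangle
print(lowertriangleidmatrix)
# [[0.   0.   0.   0.  ]
#  [0.61 0.   0.   0.  ]
#  [0.63 0.82 0.   0.  ]
#  [0.54 0.58 0.57 0.  ]]
```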
|
<python><arrays><numpy><sorting><numpy-ndarray>
|
2023-02-19 11:51:16
| 1
| 792
|
Jamie
|
75,499,884
| 15,704,286
|
count number of pages in a pdf file using python's pypdf2 library
|
<p>What is the exact code to find the total number of pages in a PDF using the PyPDF2 library? The old attribute, <code>.numPages</code>, is deprecated.</p>
|
<python><python-3.x><pdf>
|
2023-02-19 11:42:57
| 1
| 842
|
Mounesh
|
75,499,602
| 19,115,554
|
Pass `special_flags` argument to group.draw in pygame
|
<p>Is there a way to pass the <code>special_flags</code> argument to <code>Group.draw</code> so that it calls the <code>.blit</code> method with those flags?
I've tried just passing it as a keyword argument like this:</p>
<pre class="lang-py prettyprint-override"><code>group.draw(surface, special_flags=pygame.BLEND_SOURCE_ALPHA)
</code></pre>
<p>but it gives this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\MarciAdam\PycharmProjects\pygame_stuff_1\main.py", line 394, in <module>
group.draw(surface, special_flags=pygame.BLEND_RGBA_MAX)
TypeError: draw() got an unexpected keyword argument 'special_flags'
</code></pre>
<p>I know I could do something like this:</p>
<pre class="lang-py prettyprint-override"><code>for sprite in group.sprites():
surface.blit(sprite.image, sprite.rect, special_flags=pygame.BLEND_SOURCE_ALPHA)
</code></pre>
<p>but I would need to duplicate a lot of the pygame code for the more complicated group types eg. <code>LayeredUpdates</code>.</p>
|
<python><pygame><drawing><pygame-surface><group>
|
2023-02-19 10:53:36
| 1
| 602
|
MarcellPerger
|
75,499,365
| 9,827,719
|
Python matplotlib dodged bar (series, data and category)
|
<p>I have series, data and categories that I feed into a function to create a dodged bar using matplotlib.</p>
<p>I have managed to created a stacked chart, however I want to create a dodged bar.</p>
<p>This is what I have managed to create (stacked bar):
<a href="https://i.sstatic.net/58IlU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/58IlU.png" alt="enter image description here" /></a></p>
<p>This is what I want to create (dodged bar):
<a href="https://i.sstatic.net/wV7wI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wV7wI.png" alt="enter image description here" /></a></p>
<pre><code>#
# File: bar_dodged.py
# Version 1
# License: https://opensource.org/licenses/GPL-3.0 GNU Public License
#
import matplotlib.pyplot as plt
import numpy as np
def bar_dodged(series_labels: list = ['Minor', 'Low'],
data: list = [
[1, 2, 3, 4],
[5, 6, 7, 8]
],
category_labels: list = ['01/2023', '02/2023', '03/2023', '04/2023'],
bar_background_colors: list = ['tab:orange', 'tab:green'],
bar_text_colors: list = ['white', 'grey'],
direction: str = "vertical",
x_labels_rotation: int = 0,
y_label: str = "Quantity (units)",
figsize: tuple = (18, 5),
reverse: bool = False,
file_path: str = ".",
file_name: str = "bar_dodged.png"):
"""
:param series_labels:
:param data:
:param category_labels:
:param bar_background_colors:
:param bar_text_colors:
:param direction:
:param x_labels_rotation:
:param y_label:
:param figsize:
:param reverse:
:param file_path:
:param file_name:
:return:
"""
# Debugging
print(f"\n")
print(f"bar_dodged() :: series_labels={series_labels}")
print(f"bar_dodged() :: data={data}")
print(f"bar_dodged() :: category_labels={category_labels}")
print(f"bar_dodged() :: bar_background_colors={bar_background_colors}")
# Set size
plt.figure(figsize=figsize)
# Plot!
show_values = True
value_format = "{:.0f}"
grid = False
ny = len(data[0])
ind = list(range(ny))
axes = []
cum_size = np.zeros(ny)
data = np.array(data)
if reverse:
data = np.flip(data, axis=1)
category_labels = reversed(category_labels)
for i, row_data in enumerate(data):
color = bar_background_colors[i] if bar_background_colors is not None else None
axes.append(plt.bar(ind, row_data, bottom=cum_size,
label=series_labels[i], color=color))
cum_size += row_data
if category_labels:
plt.xticks(ind, category_labels)
if y_label:
plt.ylabel(y_label)
plt.legend()
if grid:
plt.grid()
if show_values:
for axis in axes:
for bar in axis:
w, h = bar.get_width(), bar.get_height()
plt.text(bar.get_x() + w/2, bar.get_y() + h/2,
value_format.format(h), ha="center",
va="center")
# Rotate
plt.xticks(rotation=x_labels_rotation)
# Two lines to make our compiler able to draw:
plt.savefig(f"{file_path}/{file_name}", bbox_inches='tight', dpi=200)
if __name__ == '__main__':
# Usage example:
series_labels = ['Globally', 'Customer']
data = [[9, 6, 5, 4, 8], [8, 5, 4, 3, 7]]
category_labels = ['Feb/2023', 'Dec/2022', 'Nov/2022', 'Oct/2022', 'Sep/2022']
bar_background_colors = ['#800080', '#ffa503']
bar_dodged(series_labels=series_labels, data=data, category_labels=category_labels,
bar_background_colors=bar_background_colors)
</code></pre>
<p>What do I have to change in my code in order to make the chart dodged?</p>
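The stacking comes from accumulating `bottom=cum_size`; a dodged chart offsets each series along x instead. A sketch of just the position math (the names are illustrative, not a drop-in rewrite of the function):

```python
import numpy as np

def dodged_offsets(n_categories: int, n_series: int, group_width: float = 0.8):
    """x positions placing each series' bars side by side within a category."""
    width = group_width / n_series
    base = np.arange(n_categories)
    positions = [base - group_width / 2 + (i + 0.5) * width
                 for i in range(n_series)]
    return positions, width

positions, width = dodged_offsets(n_categories=5, n_series=2)
# In the plotting loop, drop the bottom=cum_size accumulation and use:
#   plt.bar(positions[i], row_data, width=width,
#           label=series_labels[i], color=color)
print(positions[0])
```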
|
<python><matplotlib>
|
2023-02-19 10:11:45
| 1
| 1,400
|
Europa
|
75,499,340
| 7,318,120
|
get position of mouse in python on click or button press
|
<p>I want to get the <code>x, y</code> position of the mouse (in <code>windows 11</code>) and use this position in the rest of the code.</p>
<p>I have tried two different modules but neither seem to work.</p>
<ol>
<li>pyautogui (for a mouse click or button press)</li>
<li>keyboard (for a button press)</li>
</ol>
<p>So far, I am able to get the current position (with <code>pyautogui</code>), but I <strong>cannot break out of the while loop</strong> to proceed to the next piece of code or even return from the function.</p>
<p>Here is the function with my attempts:</p>
<pre class="lang-py prettyprint-override"><code>import time
import pyautogui
import keyboard
def spam_ordinates():
    ''' function to determine the mouse coordinates '''
print('press "x" key to lock position...')
while True:
# Check if the left mouse button is clicked
time.sleep(0.1)
print(pyautogui.displayMousePosition())
# various methods i have tried ...
if keyboard.is_pressed('x'):
print('x key pressed...')
break
if pyautogui.mouseDown():
print("Mouse clicked!")
break
if pyautogui.keyDown('x'):
print('x key pressed (autogui)...')
break
# Get the current mouse position
x, y = pyautogui.position()
print(f'spam at position: {x}, {y}')
return x, y
# call function
ords = spam_ordinates()
</code></pre>
<p>I see answers like this:
<a href="https://stackoverflow.com/questions/25848951/python-get-mouse-x-y-position-on-click">Python get mouse x, y position on click</a>, but unfortunately it doesn't actually return a value on the <code>mouse click</code> or <code>button press</code>.</p>
<p>So, how can I break out of the while loop such that the function returns the <code>x, y</code> position of the mouse?</p>
<p><strong>Update</strong></p>
<p>It appears as though <code>print(pyautogui.displayMousePosition())</code> was preventing the code from breaking out of the while loop.</p>
<p>I am not sure why, but commenting out that line corrected the issue.</p>
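For the record, `pyautogui.displayMousePosition()` is itself an interactive loop that only returns on Ctrl-C, which is why nothing after it in the `while` body was reached. A minimal sketch of the loop with the blocking call removed: the key checker and position getter are injected so the logic is testable here; in the real script they would be `keyboard.is_pressed('x')` and `pyautogui.position()` (that substitution is an assumption):

```python
def wait_for_lock(is_pressed, get_position, poll=lambda: None):
    """Poll until the key check fires, then return the mouse position."""
    while True:
        poll()                      # e.g. time.sleep(0.1) in the real script
        if is_pressed():            # keyboard.is_pressed('x') in the real script
            return get_position()   # pyautogui.position() in the real script

# usage sketch with fakes standing in for keyboard/pyautogui:
presses = iter([False, False, True])
pos = wait_for_lock(lambda: next(presses), lambda: (120, 340))
```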
|
<python><keyboard><pyautogui>
|
2023-02-19 10:06:59
| 2
| 6,075
|
darren
|
75,499,291
| 1,593,107
|
Why Cheerio XML parsing with Crawlee doesn't return text() for *some* keys?
|
<p>Considering an <code>XML</code> file like <a href="https://news.google.com/rss/search?q=test&hl=fr&gl=FR&ceid=FR:fr" rel="nofollow noreferrer">this one</a> (Google News RSS feed) and an <code>item</code> like this:</p>
<pre><code><item>
<title>Test Like a Dragon Ishin...</title>
<link>https://news.google.com/rss/articles/CBMie2....</link>
<guid isPermaLink="false">CBMie2h0dHB...</guid>
<pubDate>Fri, 17 Feb 2023 15:00:03 GMT</pubDate>
<description>Test Like a Dragon Ishin...</description>
<source url="https://www.jeuxvideo.com">jeuxvideo.com</source>
</item>
</code></pre>
<p>I try to learn <code>Cheerio</code> (via <code>Crawlee</code>) and wrote the following (quite-working) function:</p>
<pre><code>const crawler = new CheerioCrawler({
async requestHandler({ request, response, body, contentType, $ }) {
$("item").each(function (i, ref) {
const el = $(ref);
const title = el.find("title").text();
const link = el.find('link').text();
const published_on = el.find('pubdate').text();
const published_by = $("source").text();
const snippet = el.find("description").text();
console.log("TITLE: ", title);
console.log("LINK: ", link); // DOESN'T WORK
console.log("PUBLISHED_ON: ", published_on);
console.log("PUBLISHED_BY: ", published_by); // DOESN'T WORK
console.log("SNIPPET: ", snippet )
console.log("AUTHOR: ", author);
});
},
});
</code></pre>
<p>It might be obvious (except for me), but I do not understand why I can't retrieve <code>link</code> and <code>published_by</code> content whereas it's working for the other ones.</p>
<p>Any clue?
Thanks a lot.</p>
|
<python><javascript><xml><parsing><cheerio>
|
2023-02-19 09:59:13
| 2
| 2,955
|
charnould
|
75,499,220
| 386,861
|
Writing to sql database with pandas
|
<p>Confused. Trying to build a scraper of UK news in python.</p>
<pre><code>import feedparser
import pandas as pd
def poll_rss(rss_url):
feed = feedparser.parse(rss_url)
for entry in feed.entries:
print("Title:", entry.title)
print("Description:", entry.description)
print("\n")
# Example usage:
feeds = [{"type": "news","title": "BBC", "url": "http://feeds.bbci.co.uk/news/uk/rss.xml"},
{"type": "news","title": "The Economist", "url": "https://www.economist.com/international/rss.xml"},
{"type": "news","title": "The New Statesman", "url": "https://www.newstatesman.com/feed"},
{"type": "news","title": "The New York Times", "url": "https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml"},
{"type": "news","title": "Metro UK","url": "https://metro.co.uk/feed/"},
{"type": "news", "title": "Evening Standard", "url": "https://www.standard.co.uk/rss.xml"},
{"type": "news","title": "Daily Mail", "url": "https://www.dailymail.co.uk/articles.rss"},
{"type": "news","title": "Sky News", "url": "https://news.sky.com/feeds/rss/home.xml"},
{"type": "news", "title": "The Mirror", "url": "https://www.mirror.co.uk/news/?service=rss"},
{"type": "news", "title": "The Sun", "url": "https://www.thesun.co.uk/news/feed/"},
{"type": "news", "title": "Sky News", "url": "https://news.sky.com/feeds/rss/home.xml"},
{"type": "news", "title": "The Guardian", "url": "https://www.theguardian.com/uk/rss"},
{"type": "news", "title": "The Independent", "url": "https://www.independent.co.uk/news/uk/rss"},
{"type": "news", "title": "The Telegraph", "url": "https://www.telegraph.co.uk/news/rss.xml"},
{"type": "news", "title": "The Times", "url": "https://www.thetimes.co.uk/?service=rss"},
{"type": "news", "title": "The Mirror", "url": "https://www.mirror.co.uk/news/rss.xml"}]
for feed in feeds:
parsed_feed = feedparser.parse(feed['url'])
print("Title:", feed['title'])
print("Number of Articles:", len(parsed_feed.entries))
print("\n")
data = []
for entry in parsed_feed.entries:
title = entry.title
url = entry.link
print(entry.summary)
if entry.summary:
summary = entry.summary
data.append(summary)
else:
entry.summary = "No summary available"
if entry.published:
date = entry.published
data.append (data)
else:
data.append("No data available")
</code></pre>
<p>I then have a bit of code to sort out the saving.</p>
<pre><code>df = pd.DataFrame(data)
df.columns = ['title', 'url', 'summary', 'date']
print("data" + df)
from sqlalchemy import create_engine
import mysql.connector
engine = create_engine('mysql+pymysql://root:password_thingbob@localhost/somedatabase')
df.to_sql('nationals', con = engine, if_exists = 'append', index = False)
</code></pre>
<p>Although the nationals table has been created and the credentials are right, why does it not save?</p>
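One likely culprit, independent of the database side: `data` is filled with bare summary strings, the list itself (`data.append(data)`), and placeholder strings, so the frame never actually has four columns to name. A hedged sketch that collects one dict per entry instead; the field names are assumptions mirroring the intended columns, and the pandas/SQL lines are kept as comments:

```python
def rows_from_entries(entries):
    """Build one {title, url, summary, date} dict per feed entry."""
    rows = []
    for entry in entries:
        rows.append({
            "title": entry.get("title", "No title"),
            "url": entry.get("link", ""),
            "summary": entry.get("summary", "No summary available"),
            "date": entry.get("published", "No date available"),
        })
    return rows

# df = pd.DataFrame(rows_from_entries(parsed_feed.entries))  # columns line up
# df.to_sql('nationals', con=engine, if_exists='append', index=False)

sample = [{"title": "T", "link": "u", "summary": "s", "published": "d"}]
rows = rows_from_entries(sample)
```

feedparser entries are dict-like, so `.get` with a default also covers entries missing `summary` or `published` without the `if entry.summary` branching.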
|
<python><mysql><sql><pandas><sqlalchemy>
|
2023-02-19 09:44:31
| 2
| 7,882
|
elksie5000
|
75,499,144
| 5,833,865
|
Grouping and summing cloudwatch log insights query
|
<p>I have about 10k logs from log insights in the below format (cannot post actual logs due to privacy rules). I am using boto3 to query the logs.</p>
<p>Log insights query:</p>
<pre><code>filter @message like /ERROR/
</code></pre>
<p>Output Logs format:</p>
<pre><code> timestamp:ERROR <some details>Apache error....<error details>
timestamp:ERROR <some details>Connection error.... <error details>
timestamp:ERROR <some details>Database error....<error details>
</code></pre>
<p>What I need is to group the errors that share a similar substring (e.g. group by Connection error, Apache error, Database error, or any other similar errors) and get a count of each group.</p>
<p>Expected output:</p>
<pre><code> Apache error 130
Database error 2253
Connection error 3120
</code></pre>
<p>Is there some regex or any other way I can use to pull out similar substrings and group them and get the sum? Either in python or in log insights.</p>
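On the Python side, one hedged sketch: pull the word immediately before "error" with a regex and tally with `collections.Counter`. The pattern `\w+ error` is an assumption about how your messages are phrased; adjust it to your real log format:

```python
import re
from collections import Counter

lines = [
    "ts ERROR ... Apache error.... details",
    "ts ERROR ... Database error.... details",
    "ts ERROR ... Database error.... details",
    "ts ERROR ... Connection error.... details",
]

# Error "family" = the word right before "error" plus the word "error" itself
family = re.compile(r"(\w+ error)")
counts = Counter(m.group(1) for m in map(family.search, lines) if m)
```

`counts.most_common()` then gives the sorted summary. The same grouping can also be done server-side in Log Insights with `parse @message /(?<family>\w+ error)/ | stats count() by family` (syntax sketch, worth verifying against the service docs).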
|
<python><regex><reporting><amazon-cloudwatchlogs><aws-cloudwatch-log-insights>
|
2023-02-19 09:27:58
| 1
| 770
|
Devang Sanghani
|
75,499,068
| 7,212,686
|
How to iterate over XML children with same name as current element and avoid current element in iteration?
|
<h3>I have</h3>
<p>A given XML (can't change naming) that have same name for a node and its direct children, here <code>items</code></p>
<h3>I want</h3>
<p>To iterate on the children only, the <code>items</code> that have a <code>description</code> field</p>
<h3>My issue</h3>
<p>The parent node of type <code>items</code> appears in the iteration because, if I understand correctly, <code>iter</code> includes the element it is called on.</p>
<pre class="lang-py prettyprint-override"><code>from xml.etree import ElementTree
content = """<?xml version="1.0" encoding="utf-8"?>
<root>
<items>
<items>
<description>foo1</description>
</items>
<items>
<description>foo2</description>
</items>
</items>
</root>
"""
tree = ElementTree.fromstring(content)
print(">>", tree.find("items"))
for item in tree.find("items").iter("items"):
print(item, item.find("description"))
</code></pre>
<p>Current output</p>
<pre><code>>> <Element 'items' at 0x0000020B5CBF8720>
<Element 'items' at 0x0000020B5CBF8720> None
<Element 'items' at 0x0000020B5CBF8770> <Element 'description' at 0x0000020B5CBF87C0>
<Element 'items' at 0x0000020B5CBF8810> <Element 'description' at 0x0000020B5CBF8860>
</code></pre>
<p>Expected output</p>
<pre><code>>> <Element 'items' at 0x0000020B5CBF8720>
<Element 'items' at 0x0000020B5CBF8770> <Element 'description' at 0x0000020B5CBF87C0>
<Element 'items' at 0x0000020B5CBF8810> <Element 'description' at 0x0000020B5CBF8860>
</code></pre>
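`Element.iter()` is documented to iterate over the element itself plus all matching descendants. For direct children only, `findall` (or `iterfind`) produces the expected output:

```python
from xml.etree import ElementTree

content = """<root><items>
  <items><description>foo1</description></items>
  <items><description>foo2</description></items>
</items></root>"""

tree = ElementTree.fromstring(content)
# findall matches direct children only, so the outer <items> is excluded
children = tree.find("items").findall("items")
descriptions = [item.find("description").text for item in children]
```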
|
<python><xml><elementtree>
|
2023-02-19 09:10:37
| 1
| 54,241
|
azro
|
75,499,048
| 6,734,243
|
how to reference metadata in a pyproject.toml file?
|
<p>I was previously using setup.py to package my Python libs. As it seems pyproject.toml is the future way of setuptools I decided to migrate before my next releases.</p>
<p>In setup.py I was using the following string to define the link to the downloadable tarball:</p>
<pre class="lang-py prettyprint-override"><code>setup(
version = "2.13.4"
download_url = "https://github.com/xx/xx/archive/v${metadata:version}.tar.gz"
)
</code></pre>
<p>My objective is to set up the version number just once. Is it still possible in pyproject.toml and if yes how ?</p>
<p>I tried the following but it's not included the version parameter in the url:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
version = "2.13.4"
[project.urls]
Download = "https://github.com/xx/xx/archive/v${metadata:version}.tar.gz"
</code></pre>
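For reference, `pyproject.toml` has no string-interpolation mechanism, so `${metadata:version}` is passed through literally. A hedged sketch of the usual workaround: declare `version` as dynamic and keep the single source of truth in the package itself (the attribute path below is an assumption for your layout); note the download URL still cannot reference the version from within the TOML:

```toml
[project]
name = "xx"
dynamic = ["version"]

[tool.setuptools.dynamic]
version = {attr = "xx.__version__"}   # the one place the number is written
```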
|
<python><setuptools><pyproject.toml>
|
2023-02-19 09:06:32
| 0
| 2,670
|
Pierrick Rambaud
|
75,498,943
| 5,278,594
|
Calling a recursive function inside a class
|
<p>I am trying to call a recursive method in order to find the path from the root to a node in a binary tree. There are a few solutions to this problem on the internet, but I am trying a slightly different approach by implementing a method inside a <code>Node</code> class.</p>
<p>Here is my logic for the solution:</p>
<pre><code> def apend(self, arr, target):
""" arr is the list which has the path from root to target node, self is the root """
if self is None:
return False
arr.append(self.data)
if self.data==target:
return True
if self.left.apend(arr, target) or self.right.apend(arr, target):
return True
arr.pop()
return False
</code></pre>
<p>I am perfectly okay with how this logic works, meaning: if the target is found in either the right or the left subtree, return True.</p>
<p>My question is: what if <code>self</code> is a leaf node, i.e. <code>self.left is None</code> (and the same for <code>self.right</code>)? In that case the recursive call gives an error.
Can I get some help on how to rectify that situation? Thanks</p>
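A minimal self-contained sketch with the guard added: each child is checked for `None` before recursing, which is exactly the leaf-node case (the tiny `Node` class is an assumption standing in for yours; note that the `if self is None` check in the original can never fire, since calling a method on `None` already fails):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

    def apend(self, arr, target):
        arr.append(self.data)
        if self.data == target:
            return True
        # guard: leaf nodes have None children, so never recurse into None
        if ((self.left is not None and self.left.apend(arr, target)) or
                (self.right is not None and self.right.apend(arr, target))):
            return True
        arr.pop()
        return False

root = Node(1, Node(2, Node(4)), Node(3))
path = []
found = root.apend(path, 4)
```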
|
<python><recursion>
|
2023-02-19 08:44:30
| 1
| 1,483
|
jay
|
75,498,889
| 5,281,012
|
How to generate 10 digit unique-id in python?
|
<p>I want to generate a 10 digit unique-id in Python. I have tried the methods below but none of them worked:</p>
<ul>
<li>get_random_string(10) -> It generates a random string, which has a probability of collision</li>
<li>str(uuid.uuid4())[:10] -> Since I am taking a prefix only, it also has a probability of collision</li>
<li>sequential -> I can generate a unique sequential ID by prepending 0s, but it will be sequential and easier to guess, so I want to avoid sequential ids</li>
</ul>
<p>Do we have any proper system to generate 10 digit unique-id?</p>
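Randomness alone can never guarantee uniqueness, so the usual pattern is cryptographically strong random digits plus a uniqueness check against your store. Here the store is sketched as an in-memory set; in Django that check would be a database lookup behind a unique constraint (an assumption about your setup):

```python
import secrets

def new_10_digit_id(existing):
    """Return a fresh 10-digit string id not already in `existing`."""
    while True:
        # 10 uniformly random digits, leading zeros kept -> 10**10 values
        candidate = f"{secrets.randbelow(10**10):010d}"
        if candidate not in existing:
            existing.add(candidate)
            return candidate

issued = set()
a, b = new_10_digit_id(issued), new_10_digit_id(issued)
```

With 10**10 possible values, the birthday bound says collisions become likely around 10**5 issued ids, which is why the explicit check (or a unique DB constraint with retry) is the part that actually guarantees uniqueness.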
|
<python><django><architecture><unique-key>
|
2023-02-19 08:33:06
| 2
| 2,985
|
SHIVAM JINDAL
|
75,498,815
| 1,319,998
|
What's the chance of a collision in Python's secrets. compare_digest function?
|
<p>The closest function I can find to a constant time compare in Python's standard library is <a href="https://docs.python.org/3/library/secrets.html#secrets.compare_digest" rel="nofollow noreferrer">secrets.compare_digest</a></p>
<p>But it makes me wonder, if in the case of using it to verify a secret token:</p>
<ul>
<li><p>What's the chance of a collision? As in, what's the chance of a secret passed that doesn't match the correct token, but the function returns true? (Assuming that both strings passed are the same length)</p>
</li>
<li><p>At what secret-token length does it become pointless to make the secret any longer, at least in terms of mitigating brute-force attacks?</p>
</li>
</ul>
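Worth noting: `compare_digest` is an exact byte-for-byte comparison, not a hash, so there is no collision chance at all. It returns True iff the inputs are equal; the constant-time property only hides *where* they differ, which means the brute-force question reduces purely to the token's entropy. A quick check:

```python
import secrets

token = "a" * 32
# equal inputs -> True; any single differing character -> False
assert secrets.compare_digest(token, "a" * 32) is True
assert secrets.compare_digest(token, "a" * 31 + "b") is False
```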
|
<python><security><hash><cryptography>
|
2023-02-19 08:17:38
| 1
| 27,302
|
Michal Charemza
|
75,498,789
| 16,591,526
|
FastAPI with request queue
|
<p>I have developed an application that takes an image and does some hard work on the GPU. The problem is that if a request is currently being processed (processing some image on the GPU) and another request for image processing comes to the server, then an error occurs related to the logic of using the GPU. Thus, I want each request to be processed by the server sequentially, that is, how to queue requests: do not execute a new request until the previous one has completed. How can this be implemented?</p>
<p>I read about celery and message brokers like RabbitMQ but I don't fully understand whether it should be used in my case</p>
|
<python><request><queue><fastapi><synchronous>
|
2023-02-19 08:11:39
| 0
| 909
|
padu
|
75,498,562
| 8,076,158
|
attrs - how to validate an instance of a Literal or None
|
<p>This is what I have. I believe there are two problems here - the Literal and the None.</p>
<pre><code>from typing import Literal

from attrs import frozen, field
from attrs.validators import instance_of
OK_ARGS = ['a', 'b']
@frozen
class MyClass:
my_field: Literal[OK_ARGS] | None = field(validator=instance_of((Literal[OK_ARGS], None)))
</code></pre>
<p>Error:</p>
<pre><code>TypeError: Subscripted generics cannot be used with class and instance checks
</code></pre>
<p>Edit: I've made a workaround with a custom validator. Not that pretty however:</p>
<pre><code>def _validator_literal_or_none(literal_type):
def inner(instance, attribute, value):
if (isinstance(value, str) and (value in literal_type)) or (value is None):
pass
else:
raise ValueError(f'You need to provide a None, or a string in this list: {literal_type}')
return inner
</code></pre>
|
<python><python-attrs>
|
2023-02-19 07:17:05
| 1
| 1,063
|
GlaceCelery
|
75,498,238
| 19,583,053
|
Fullstack web-hosting services
|
<p>I am totally new to web development, and I am trying to create a website.
From what I understand, if you create websites on Wix, Squarespace or GoDaddy, then there is a lot of security protection included. They will prevent spamming and things of that nature.</p>
<p>I want to create a website that has both a front-end side, and a lot of back-end python scripts.
I'm not even sure how to phrase my question properly (or if it even makes sense) since I am so new to web development, but here goes: are there web-hosting services that allow for back-end development in python? I would like the security that these sites offer and also be able to do full-stack development on them.</p>
|
<python><web-deployment-project><godaddy-api>
|
2023-02-19 05:46:03
| 1
| 307
|
graphtheory123
|
75,498,232
| 10,033,434
|
Convert specific columns to list and then create json
|
<p>I have a spreadsheet like the following:
<a href="https://i.sstatic.net/orDPq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/orDPq.png" alt="enter image description here" /></a></p>
<p>As you can see, there are multiple "tags" columns like this: "tags_0", "tags_1", "tags_2".
And there can be more of them.</p>
<p>I'm trying to find all the "tags" and put them inside a list using a pandas DataFrame, and eventually put them inside a "tags" array in a json file.</p>
<p>I thought of using regex, but I can't find a way to apply it.</p>
<p>This is the function I'm using to output the json file. I added the tags array for reference:</p>
<pre><code>def convert_products():
read_exc = pd.read_excel('./data/products.xlsx')
df = pd.DataFrame(read_exc)
all_data = []
for i in range(len(df)):
js = {
"sku": df['sku'][i],
"brand": df['brand'][i],
"tags": [?]
}
all_data.append(js)
json_object = json.dumps(all_data, ensure_ascii=False, indent=2)
with open("./data/products.json", "w", encoding='utf-8') as outfile:
outfile.write(json_object)
</code></pre>
<p>How can I achieve this?</p>
<p>Thanks</p>
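One approach: select the tag columns by name pattern and drop the empties. Sketched here on a plain dict row so it is self-contained; the commented pandas form mirrors your `df` and is an assumption about your column contents:

```python
import re

tag_col = re.compile(r"^tags_\d+$")

def collect_tags(row):
    # row: mapping of column name -> value for one spreadsheet row
    return [v for k, v in sorted(row.items())
            if tag_col.match(k) and v not in (None, "")]

# pandas equivalent for row i (hypothetical):
#   tags = [v for v in df.filter(regex=r"^tags_\d+$").iloc[i] if pd.notna(v)]

row = {"sku": "S1", "brand": "B", "tags_0": "red", "tags_1": "sale", "tags_2": ""}
tags = collect_tags(row)
```

Note that plain string sorting puts `tags_10` before `tags_2`; if column order matters beyond ten tags, sort on the numeric suffix (`key=lambda kv: int(kv[0].split("_")[1])` over the matching keys).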
|
<python><json><pandas><list>
|
2023-02-19 05:44:14
| 3
| 465
|
user10033434
|
75,498,132
| 14,917,676
|
Is there any method to find the next possible element following the list's pattern?
|
<p>I want a method that gets the next possible element as a continuation of the list, following the pattern in the list.</p>
<p>Say, there is a list ls,
<code>ls = [1,2,2,3,2,2,1,2,2,3]</code></p>
<p>And I want to get the next possible element of the list... In this case, "2".</p>
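If "pattern" means the list repeats with some fixed period, a hedged sketch: find the smallest period consistent with the whole list and extrapolate one step. This is an assumption about what counts as a pattern; input with no period shorter than the list itself just yields None here:

```python
def next_element(ls):
    """Extrapolate one step by finding the smallest repeating period."""
    n = len(ls)
    for period in range(1, n):
        if all(ls[i] == ls[i - period] for i in range(period, n)):
            return ls[n - period]   # the element that continues the cycle
    return None                     # no period shorter than the list fits

guess = next_element([1, 2, 2, 3, 2, 2, 1, 2, 2, 3])
```

For the example list the smallest fitting period is 6 (`[1, 2, 2, 3, 2, 2]`), so the continuation is the element six positions back, i.e. 2. For genuinely noisy sequences a probabilistic model (e.g. Markov transition counts) would be the next step up.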
|
<python><list><probability>
|
2023-02-19 05:10:24
| 2
| 487
|
whmsft
|
75,498,096
| 13,176,726
|
Django Rest Framework Password Rest Confirm Email not showing Form and returning as none
|
<p>In my Django Rest Framework project, users request a password reset, and when the email is received and the link is clicked, the url
<code>password-reset-confirm/<uidb64>/<token>/</code> comes up as requested, but the form is not showing; when I added it as {{ form }}, it displayed NONE.</p>
<p>The password reset process works perfectly fine when I do everything in Django itself, but if I try to reset the password through Django
Rest Framework, the form does not appear.</p>
<p>Here is the main urls.py</p>
<pre><code>urlpatterns = [
path('', include('django.contrib.auth.urls')),
path('password-reset/', auth_views.PasswordResetView.as_view(template_name='users/password_reset.html', success_url=reverse_lazy('password_reset_done')), name='password_reset'),
path('password-reset/done/', auth_views.PasswordResetDoneView.as_view(template_name='users/password_reset_done.html'), name='password_reset_done'),
path('password-reset-confirm/<uidb64>/<token>/',auth_views.PasswordResetConfirmView.as_view(template_name='users/password_reset_confirm.html'),name='password_reset_confirm',),
path('password-reset-complete/', auth_views.PasswordResetCompleteView.as_view(template_name='users/password_reset_complete.html'), name='password_reset_complete'),
path('admin/', admin.site.urls),
path('api/', include('api.urls'), ),
path('users/', include('users.urls'), ),
]
</code></pre>
<p>Here is the API app urls.py that is related to DRF</p>
<pre><code>app_name = 'api'
router = routers.DefaultRouter()
router.register(r'users', UserViewSet, basename='user')
urlpatterns = [
path('', include(router.urls)),
path('dj-rest-auth/', include('dj_rest_auth.urls')),
path('dj-rest-auth/registration/', include('dj_rest_auth.registration.urls')),
path('token/', TokenObtainPairView.as_view(), name='token_obtain_pair'),
path('token/refresh/', TokenRefreshView.as_view(), name='token_refresh'),
]
</code></pre>
<p>here is the template password_reset_confirm.html</p>
<pre><code><main class="mt-5" >
<div class="container dark-grey-text mt-5">
<div class="content-section">
<form method="POST">
{% csrf_token %}
<fieldset class="form-group">
<legend class="border-bottom mb-4">Reset Password</legend>
{{ form|crispy }}
{{ form }}
</fieldset>
<div class="form-group">
<button class="btn btn-outline-info" type="submit">Reset Password</button>
</div>
</form>
</div>
</div>
</main>
</code></pre>
<p>My question is: Why is the form showing as NONE and how do I fix it.</p>
|
<python><django><django-rest-framework><django-urls><django-rest-auth>
|
2023-02-19 04:56:57
| 1
| 982
|
A_K
|
75,498,019
| 6,791,416
|
Tensorflow : Trainable variable not getting learnt
|
<p>I am trying to implement a custom modified ReLU in Tensorflow 1, in which I use two learnable parameters. But the parameters are not getting learnt even after running 1000 training steps, as suggested by printing their values before and after training. I have observed that inside the function, when I execute the commented lines instead, then the coefficients are learnt. Could anyone suggest why the first case results in the trainable coefficients not being learnt and how this can be resolved?</p>
<pre><code>import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
def weight_variable(shape,vari_name):
initial = tf.truncated_normal(shape, stddev=0.1,dtype=tf.float32)
return tf.Variable(initial,name = vari_name)
def init_Prelu_coefficient(var1, var2):
coeff = tf.truncated_normal(([1]), stddev=0.1,dtype=tf.float32)
coeff1 = tf.truncated_normal(([1]), stddev=0.1,dtype=tf.float32)
return tf.Variable(coeff, trainable=True, name=var1), tf.Variable(coeff1, trainable=True, name=var2)
def Prelu(x, coeff, coeff1):
s = int(x.shape[-1])
sop = x[:,:,:,:s//2]*coeff+x[:,:,:,s//2:]*coeff1
sop1 = x[:,:,:,:s//2]*coeff-x[:,:,:,s//2:]*coeff1
copied_variable = tf.concat([sop, sop1], axis=-1)
copied_variable = tf.math.maximum(copied_variable,0)/copied_variable
# copied_variable = tf.identity(x)
# copied_variable = tf.math.maximum(copied_variable*coeff+copied_variable*coeff1,0)/copied_variable
# copied_variable = tf.multiply(copied_variable,x)
return copied_variable
def conv2d_dilate(x, W, dilate_rate):
return tf.nn.convolution(x, W,padding='VALID',dilation_rate = [1,dilate_rate])
matr = np.random.rand(1, 60, 40, 8)
target = np.random.rand(1, 58, 36, 8)
def learning(sess):
# define placeholder for inputs to network
Input = tf.placeholder(tf.float32, [1, 60, 40, 8])
input_Target = tf.placeholder(tf.float32, [1, 58, 36, 8])
kernel = weight_variable([3, 3, 8, 8],'G1')
coeff, coeff1 = init_Prelu_coefficient('alpha', 'alpha1')
conv = Prelu(conv2d_dilate(Input, kernel , 2), coeff, coeff1)
error_norm = 1*tf.norm(input_Target - conv)
print("MOMENTUM LEARNING")
train_step = tf.train.MomentumOptimizer(learning_rate=0.001,momentum=0.9,use_nesterov=False).minimize(error_norm)
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
init = tf.initialize_all_variables()
else:
init = tf.global_variables_initializer()
sess.run(init)
print("INIT coefficient ", sess.run(coeff), sess.run(coeff1))
init_var = tf.trainable_variables()
error_prev = 1 # initial error, we set 1 and it began to decrease.
for i in range(1000):
sess.run(train_step, feed_dict={Input: matr, input_Target: target})
if i % 100 == 0:
error_now=sess.run(error_norm,feed_dict={Input : matr, input_Target: target})
print('The',i,'th iteration gives an error',error_now)
error = sess.run(error_norm,feed_dict={Input: matr, input_Target: target})
print(sess.run(kernel))
print("LEARNT coefficient ", sess.run(coeff), sess.run(coeff1))
sess = tf.Session()
learning(sess)
</code></pre>
|
<python><tensorflow>
|
2023-02-19 04:33:28
| 1
| 386
|
psj
|
75,497,969
| 10,863,293
|
How can I resolve the error 'str' object has no attribute 'is_paused' in discord.py?
|
<blockquote>
<p>[2023-02-19 05:14:12] [ERROR ] discord.ext.commands.bot: Ignoring exception in command play
Traceback (most recent call last):
File "C:\Users\toto\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\ext\commands\core.py", line 229, in wrapped
ret = await coro(*args, **kwargs)
File "C:\Users\toto\PycharmProjects\pythonProject\music_bot-main\main.py", line 101, in play
elif self.is_paused:
AttributeError: 'str' object has no attribute 'is_paused'</p>
</blockquote>
<blockquote>
<p>The above exception was the direct cause of the following exception:</p>
</blockquote>
<blockquote>
<p>Traceback (most recent call last):
File "C:\Users\toto\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\ext\commands\bot.py", line 1349, in invoke
await ctx.command.invoke(ctx)
File "C:\Users\toto\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\ext\commands\core.py", line 1023, in invoke
await injected(*ctx.args, **ctx.kwargs) # type: ignore
File "C:\Users\toto\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\ext\commands\core.py", line 238, in wrapped
raise CommandInvokeError(exc) from exc
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: AttributeError: 'str' object has no attribute 'is_paused'</p>
</blockquote>
<pre><code>import discord
from discord.ext import commands
import os
import asyncio
# import all of the cogs
# from help_cog import help_cog
# from music_cog import music_cog
intents = discord.Intents.all()
client = commands.Bot(command_prefix='.', intents=intents)
@client.event
async def on_ready():
print(f'Logged in as {client.user} (ID: {client.user.id})')
print('------')
# remove the default help command so that we can write out own
client.remove_command('help')
# register the class with the bot
# bot.add_cog(help_cog(bot))
# client.add_cog(music_cog(client))
# code du bot
class music_cog(commands.Cog):
def __init__(self, bot):
self.bot = bot
# all the music related stuff
self.is_playing = False
self.is_paused = False
# 2d array containing [song, channel]
self.music_queue = []
self.YDL_OPTIONS = {'format': 'bestaudio', 'noplaylist': 'True'}
self.FFMPEG_OPTIONS = {'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5',
'options': '-vn'}
self.vc = None
# searching the item on youtube
def search_yt(self, item):
with YoutubeDL(self.YDL_OPTIONS) as ydl:
try:
info = ydl.extract_info("ytsearch:%s" % item, download=False)['entries'][0]
except Exception:
return False
return {'source': info['formats'][0]['url'], 'title': info['title']}
def play_next(self):
if len(self.music_queue) > 0:
self.is_playing = True
# get the first url
m_url = self.music_queue[0][0]['source']
# remove the first element as you are currently playing it
self.music_queue.pop(0)
self.vc.play(discord.FFmpegPCMAudio(m_url, **self.FFMPEG_OPTIONS), after=lambda e: self.play_next())
else:
self.is_playing = False
# infinite loop checking
async def play_music(self, ctx):
if len(self.music_queue) > 0:
self.is_playing = True
m_url = self.music_queue[0][0]['source']
# try to connect to voice channel if you are not already connected
if self.vc == None or not self.vc.is_connected():
self.vc = await self.music_queue[0][1].connect()
# in case we fail to connect
if self.vc == None:
await ctx.send("Could not connect to the voice channel")
return
else:
await self.vc.move_to(self.music_queue[0][1])
# remove the first element as you are currently playing it
self.music_queue.pop(0)
self.vc.play(discord.FFmpegPCMAudio(m_url, **self.FFMPEG_OPTIONS), after=lambda e: self.play_next())
else:
self.is_playing = False
@client.command(name="play", aliases=["p", "playing"], help="Plays a selected song from youtube")
async def play(ctx, self, *args):
query = " ".join(args)
voice_channel = ctx.author.voice.channel
if voice_channel is None:
# you need to be connected so that the bot knows where to go
await ctx.send("Connect to a voice channel!")
elif self.is_paused:
self.vc.resume()
else:
song = self.search_yt(query)
if type(song) == type(True):
await ctx.send(
"Could not download the song. Incorrect format try another keyword. This could be due to playlist or a livestream format.")
else:
await ctx.send("Song added to the queue")
self.music_queue.append([song, voice_channel])
if self.is_playing == False:
await self.play_music(ctx)
@client.command(name="pause", help="Pauses the current song being played")
async def pause(self, ctx, *args):
if self.is_playing:
self.is_playing = False
self.is_paused = True
self.vc.pause()
elif self.is_paused:
self.is_paused = False
self.is_playing = True
self.vc.resume()
@client.command(name="resume", aliases=["r"], help="Resumes playing with the discord bot")
async def resume(self, ctx, *args):
if self.is_paused:
self.is_paused = False
self.is_playing = True
self.vc.resume()
@client.command(name="skip", aliases=["s"], help="Skips the current song being played")
async def skip(self, ctx):
if self.vc != None and self.vc:
self.vc.stop()
# try to play next in the queue if it exists
await self.play_music(ctx)
@client.command(name="queue", aliases=["q"], help="Displays the current songs in queue")
async def queue(self, ctx):
retval = ""
for i in range(0, len(self.music_queue)):
# display a max of 5 songs in the current queue
if (i > 4): break
retval += self.music_queue[i][0]['title'] + "\n"
if retval != "":
await ctx.send(retval)
else:
await ctx.send("No music in queue")
@client.command(name="clear", aliases=["c", "bin"], help="Stops the music and clears the queue")
async def clear(self, ctx):
if self.vc != None and self.is_playing:
self.vc.stop()
self.music_queue = []
await ctx.send("Music queue cleared")
@client.command(name="leave", aliases=["disconnect", "l", "d"], help="Kick the bot from VC")
async def dc(self, ctx):
self.is_playing = False
self.is_paused = False
await self.vc.disconnect()
@client.command()
async def bonjour(ctx):
await ctx.send("Bonjour")
# start the bot with our token
client.run(os.getenv("TOKEN"))
</code></pre>
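The traceback follows directly from the parameter order: these commands are registered on the bot at module level (not inside the cog), so discord.py invokes them as `play(ctx, *user_args)`. The parameter literally named `self` therefore receives the first word the user typed, a `str`. A discord-free reproduction of just that binding, with `fake_invoke` standing in for the library's call:

```python
def fake_invoke(command, ctx, *user_args):
    # discord.py calls a non-cog command as command(ctx, arg1, arg2, ...)
    return command(ctx, *user_args)

def play(ctx, self, *args):        # parameter order from the question
    return type(self).__name__     # whatever `self` ended up bound to

bound_type = fake_invoke(play, object(), "despacito")
```

The usual fix is to keep these as methods of `music_cog` decorated with `@commands.command`, with `self` as the first parameter and `ctx` second, and to register the cog (in discord.py 2.x, `add_cog` is awaited, e.g. from `setup_hook`), so `self` is bound to the cog instance.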
|
<python><discord.py>
|
2023-02-19 04:22:11
| 1
| 898
|
user10863293
|
75,497,932
| 107,083
|
Why does multiprocessing.Queue.put() seem faster at pickling a numpy array than actual pickle?
|
<p>It appears that I can call <code>q.put</code> 1000 times in under 2.5ms. How is that possible when just pickling that very same array 1000 times takes over 2 seconds?</p>
<pre><code>>>> a = np.random.rand(1024,1024)
>>> q = Queue()
>>> timeit.timeit(lambda: q.put(a), number=1000)
0.0025581769878044724
>>> timeit.timeit(lambda: pickle.dumps(a), number=1000)
2.690145633998327
</code></pre>
<p>Obviously, I am not understanding something about how <code>Queue.put</code> works. Can anyone enlighten me?</p>
<p>I also observed the following:</p>
<pre><code>>>> def f():
... q.put(a)
... q.get()
</code></pre>
<pre><code>>>> timeit.timeit(lambda: f(), number=1000)
42.33058542700019
</code></pre>
<p>This appears to be more realistic and suggests to me that simply calling <code>q.put()</code> will return before the object is actually serialized. Is that correct?</p>
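That reading matches how `multiprocessing.Queue` is documented to work: `put()` only appends the object to an internal buffer and returns; a background *feeder* thread pickles it and writes it to the pipe later. So the first `timeit` measured 1000 enqueues, not 1000 serialisations, while the put+get version pays the full pickle, transfer, and unpickle cost. A small round-trip sketch (a bytes payload stands in for the numpy array to stay dependency-free):

```python
import multiprocessing as mp

q = mp.Queue()
payload = b"x" * (1 << 20)   # 1 MiB stands in for the array

q.put(payload)               # returns quickly; the feeder thread pickles later
result = q.get()             # blocks until serialised, sent, and deserialised

q.close()
q.join_thread()              # let the feeder thread finish cleanly
```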
|
<python><numpy><python-multiprocessing>
|
2023-02-19 04:08:56
| 1
| 18,077
|
chaimp
|
75,497,593
| 3,078,473
|
How to correctly use WebDriverWait & presence_of_element_located() in 2023?
|
<p>Here is the HTML I am detecting:</p>
<pre><code><div class="arrowPopup arrowPopup-start">
<div class="arrowPopupText arrowPopupTextTwoLine arrowPopupText-flashOn" style="white-space: nowrap;">type<br>this</div>
</div>
</code></pre>
<p>Here is my code</p>
<pre><code>element = WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.CLASS_NAME, 'arrowPopup arrowPopup-start')))
</code></pre>
<p>This does not work and times out after 20 seconds.
Any help?</p>
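`By.CLASS_NAME` takes a single class name; `'arrowPopup arrowPopup-start'` contains a space, so it can never match and the wait times out. A compound CSS selector expresses "both classes on one element"; the WebDriverWait line is kept as a comment mirroring the question, since only the selector construction is exercised here:

```python
# Turn the space-separated class list into a compound CSS selector
classes = "arrowPopup arrowPopup-start".split()
css = "." + ".".join(classes)          # ".arrowPopup.arrowPopup-start"

# element = WebDriverWait(driver, 20).until(
#     EC.presence_of_element_located((By.CSS_SELECTOR, css)))
```

Alternatively, a single class is enough if it is unique on the page: `(By.CLASS_NAME, "arrowPopup-start")`.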
|
<python><selenium-webdriver><webdriverwait>
|
2023-02-19 02:15:22
| 1
| 419
|
JackOfAll
|
75,497,585
| 257,583
|
Evaluate ANSI escapes in Python string
|
<p>Say I have the string <code>'\033[2KResolving dependencies...\033[2KResolving dependencies...'</code></p>
<p>In the Python console, I can print this, and it'll only display once</p>
<pre><code>Python 3.10.9 (main, Jan 19 2023, 07:59:38) [GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> output = '\033[2KResolving dependencies...\033[2KResolving dependencies...'
>>> print(output)
Resolving dependencies...
</code></pre>
<p>Is there a way to get a string that consists solely of the printed output? In other words, I would like there to be some function</p>
<pre class="lang-py prettyprint-override"><code>def evaluate_ansi_escapes(input: str) -> str:
...
</code></pre>
<p>such that <code>evaluate_ansi_escapes(output) == 'Resolving dependencies...'</code> (ideally with the correct amount of whitespace in front)</p>
<hr />
<p>edit: I've come up with the following stopgap solution</p>
<pre class="lang-py prettyprint-override"><code>import re

def evaluate_ansi_escapes(input: str) -> str:
    erases_regex = r"^.*(\\(033|e)|\x1b)\[2K"
    erases = re.compile(erases_regex)
    no_erases = []
    for line in input.split("\n"):
        while len(erases.findall(line)) > 0:
            line = erases.sub("", line)
        no_erases.append(line)
    return "\n".join(no_erases)
</code></pre>
<p>This does successfully produce output that is close enough to I want:</p>
<pre><code>>>> evaluate_ansi_escapes(output)
'Resolving dependencies...'
</code></pre>
<p>But I would love to know if there is a less hacky way to solve this problem, or if the whitespace preceding <code>'Resolving dependencies...'</code> can be captured as well.</p>
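<p>One possibly less hacky direction (a sketch, not a full terminal emulator: it only interprets the literal <code>ESC[2K</code> erase-line sequence and ignores cursor movement, which is why a terminal-emulation library such as pyte would be needed for full fidelity):</p>

```python
def evaluate_ansi_escapes(text: str) -> str:
    # ESC[2K erases the current line, so only the text after the last
    # erase sequence on each line survives.
    erase = "\x1b[2K"
    return "\n".join(line.rsplit(erase, 1)[-1] for line in text.split("\n"))

evaluate_ansi_escapes('\033[2KResolving dependencies...\033[2KResolving dependencies...')
# -> 'Resolving dependencies...'
```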
|
<python><string><tty><ansi-escape>
|
2023-02-19 02:12:52
| 0
| 18,996
|
wrongusername
|
75,497,581
| 4,372,759
|
How to embed Python code in a fish script?
|
<p>I'm trying to convert over a Bash script that includes the following commands:</p>
<pre><code>PYCODE=$(cat << EOF
#INSERT_PYTHON_CODE_HERE
EOF
)
RESPONSE=$(COLUMNS=999 /usr/bin/env python3 -c "$PYCODE" $@)
</code></pre>
<p>The idea being that a <code>sed</code> find/replace is then used to inject an arbitrary Python script where <code>#INSERT_PYTHON_CODE_HERE</code> is, creating the script that is then run.</p>
<p>The corresponding Fish command would seem to be something like this</p>
<pre><code>set PYCODE "
#INSERT_PYTHON_CODE_HERE
"
set RESPONSE (COLUMNS=999 /usr/bin/env python3 -c "$PYCODE" $argv)
</code></pre>
<p>but this falls apart when you have a Python script that can include both <code>'</code> and <code>"</code> (and any other valid) characters.</p>
<p>What is the correct way to translate this use of <code>EOF</code>?</p>
<p>As a side note, I would prefer not to modify the <code>sed</code> command that is injecting the python code, but for reference here it is:</p>
<pre><code>set FISH_SCRIPT (sed -e "/#INSERT_PYTHON_CODE_HERE/r $BASE_DIR/test_sh.py" $BASE_DIR/../src/mfa.fish)
</code></pre>
|
<python><fish>
|
2023-02-19 02:10:24
| 1
| 3,981
|
flybonzai
|
75,497,503
| 8,713,442
|
How to generate Pyspark dynamic frame name dynamically
|
<p><a href="https://i.sstatic.net/5s4r0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5s4r0.png" alt="enter image description here" /></a></p>
<p>I have a table which has data as shown in the diagram. I want to store the results in dynamically generated data frame names.</p>
<p>For example, in the table below I want to create two different data frame names,
dnb_df and es_df, store the read results in these two frames, and print the structure of each data frame.</p>
<p>When I run the code below, I am getting the error:</p>
<blockquote>
<p>SyntaxError: can't assign to operator (TestGlue2.py, line 66)</p>
</blockquote>
<pre><code>
import sys
import boto3
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
from pyspark.sql.functions import regexp_replace, col
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
#sc.setLogLevel('DEBUG')
glueContext = GlueContext(sc)
spark = glueContext.spark_session
#logger = glueContext.get_logger()
#logger.DEBUG('Hello Glue')
job = Job(glueContext)
job.init(args["JOB_NAME"], args)
client = boto3.client('glue', region_name='XXXXXX')
response = client.get_connection(Name='XXXXXX')
connection_properties = response['Connection']['ConnectionProperties']
URL = connection_properties['JDBC_CONNECTION_URL']
url_list = URL.split("/")
host = "{}".format(url_list[-2][:-5])
new_host=host.split('@',1)[1]
port = url_list[-2][-4:]
database = "{}".format(url_list[-1])
Oracle_Username = "{}".format(connection_properties['USERNAME'])
Oracle_Password = "{}".format(connection_properties['PASSWORD'])
#print("Oracle_Username:",Oracle_Username)
#print("Oracle_Password:",Oracle_Password)
print("Host:",host)
print("New Host:",new_host)
print("Port:",port)
print("Database:",database)
Oracle_jdbc_url="jdbc:oracle:thin:@//"+new_host+":"+port+"/"+database
print("Oracle_jdbc_url:",Oracle_jdbc_url)
source_df = spark.read.format("jdbc").option("url", Oracle_jdbc_url).option("dbtable", "(select * from schema.table order by VENDOR_EXECUTION_ORDER) ").option("user", Oracle_Username).option("password", Oracle_Password).load()
vendor_data=source_df.collect()
for row in vendor_data:
    vendor_query = row.SRC_QUERY
    row.VENDOR_NAME+'_df' = spark.read.format("jdbc").option("url",
        Oracle_jdbc_url).option("dbtable", vendor_query).option("user",
        Oracle_Username).option("password", Oracle_Password).load()
    print(row.VENDOR_NAME+'_df')
</code></pre>
<p>Added the use case in the picture below:
<a href="https://i.sstatic.net/xH8v1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xH8v1.png" alt="enter image description here" /></a></p>
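<p>For reference, the usual substitute for "dynamic variable names" is a dict keyed by the generated name, since assigning to an expression such as <code>row.VENDOR_NAME+'_df' = ...</code> is exactly what raises the <code>SyntaxError</code>. A hedged sketch (plain tuples stand in for the collected Spark rows, and strings stand in for the JDBC reads):</p>

```python
# Stand-in data: in the real job each value would come from
# spark.read.format("jdbc")...option("dbtable", vendor_query).load()
vendor_rows = [("DNB", "select ..."), ("ES", "select ...")]

frames = {}
for vendor_name, vendor_query in vendor_rows:
    frames[f"{vendor_name.lower()}_df"] = f"frame for query: {vendor_query}"

sorted(frames)  # ['dnb_df', 'es_df']
```

<p>Each frame is then reachable as <code>frames["dnb_df"]</code>, and its schema could be printed from there in the real job.</p>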
|
<python><apache-spark><pyspark>
|
2023-02-19 01:44:14
| 2
| 464
|
pbh
|
75,497,496
| 3,163,618
|
Why is 0/1 faster than False/True for this sieve in PyPy?
|
<p>Similar to <a href="https://stackoverflow.com/questions/57838797/why-use-true-is-slower-than-use-1-in-python3">why use True is slower than use 1 in Python3</a> but I'm using pypy3 and not using the sum function.</p>
<pre><code>def sieve_num(n):
    nums = [0] * n
    for i in range(2, n):
        if i * i >= n: break
        if nums[i] == 0:
            for j in range(i*i, n, i):
                nums[j] = 1
    return [i for i in range(2, n) if nums[i] == 0]

def sieve_bool(n):
    nums = [False] * n
    for i in range(2, n):
        if i * i >= n: break
        if nums[i] == False:
            for j in range(i*i, n, i):
                nums[j] = True
    return [i for i in range(2, n) if nums[i] == False]
</code></pre>
<p><code>sieve_num(10**8)</code> takes 2.55 s, but <code>sieve_bool(10**8)</code> takes 4.45 s, which is a noticeable difference.</p>
<p>My suspicion was that <code>[0]*n</code> is somehow smaller than <code>[False]*n</code> and fits into cache better, but <code>sys.getsizeof</code> and vmprof line profiling are unsupported for PyPy. The only info I could get is that <code><listcomp></code> for <code>sieve_num</code> took 116 ms (19% of total execution time) while <code><listcomp></code> for <code>sieve_bool</code> took 450 ms (40% of total execution time).</p>
<p>Using PyPy 7.3.1 implementing Python 3.6.9 on Intel i7-7700HQ with 24 GB RAM on Ubuntu 20.04. With Python 3.8.10 <code>sieve_bool</code> is only slightly slower.</p>
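<p>As an aside, purely a variant worth benchmarking rather than an explanation of the gap: a <code>bytearray</code> stores one byte per flag, which sidesteps list-element boxing entirely:</p>

```python
def sieve_bytearray(n):
    # One byte per flag; same algorithm as sieve_num/sieve_bool.
    nums = bytearray(n)
    for i in range(2, n):
        if i * i >= n: break
        if nums[i] == 0:
            for j in range(i*i, n, i):
                nums[j] = 1
    return [i for i in range(2, n) if nums[i] == 0]

sieve_bytearray(30)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```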
|
<python><performance><pypy>
|
2023-02-19 01:43:16
| 1
| 11,524
|
qwr
|
75,497,331
| 468,807
|
python, woocommerce: how to add categories with translations?
|
<p>I need a Python snippet to create categories with translations, using
Python 3.8 with the woocommerce package.</p>
<p>Wordpress with woocommerce plugin in version 7.3</p>
<p>WPML Multilingual CMS: 4.5.14</p>
<p>I have this snippet in python:</p>
<pre class="lang-py prettyprint-override"><code>from woocommerce import API
# create an instance of the API class
wcapi = API(
url="FQDN",
consumer_key="yourconsumerkey",
consumer_secret="yourconsumersecret",
wp_api=True,
version="wc/v3"
)
# create a dictionary with the category data for the main language
category_data_en = {
"name": "Category Name in English",
"parent": 0,
"meta_data": [
{
"key": "_wpml_language",
"value": "en"
},
{
"key": "_wpml_translation_status",
"value": "0"
},
{
"key": "_wpml_element_type",
"value": "tax_category"
}
]
}
# create the category using the WooCommerce API for the main language
new_category_en = wcapi.post("products/categories", category_data_en).json()
# create a dictionary with the category data for the translation
category_data_pl = {
"name": "Category Name in Polish",
"meta_data": [
{
"key": "_wpml_language",
"value": "pl"
},
{
"key": "_wpml_translation_of",
"value": new_category_en.get("id")
},
{
"key": "_wpml_translation_status",
"value": "1"
},
{
"key": "_wpml_element_type",
"value": "tax_category"
}
]
}
# create the translation using the WooCommerce API
new_category_pl = wcapi.post("products/categories", category_data_pl).json()
</code></pre>
<p>and it creates two categories in the English language on my website. What am I doing wrong?
<a href="https://i.sstatic.net/YOly1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YOly1.png" alt="result of running this snippet" /></a></p>
<p>I can see that in my WPML plugin settings there is the info
"WooCommerce Multilingual: Not installed".
Is it necessary to install it to get those categories created properly?</p>
|
<python><wordpress><woocommerce><wordpress-rest-api><wpml>
|
2023-02-19 00:50:07
| 1
| 799
|
gerpaick
|
75,497,304
| 17,696,880
|
Remove consecutively repeated substring in a string using regex
|
<pre class="lang-py prettyprint-override"><code>import re
input_text = "((PERS)Yo), ((PERS)Yo) ((PERS)yo) hgasghasghsa ((PERS)Yo) ((PERS)Yo) ((PERS)Yo) ((PERS)yo) jhsjhsdhjsdsdh ((PERS)Yo) jhdjfjhdffdj ((PERS)ella) ((PERS)Ella) ((PERS)ellos) asassaasasasassaassaas ((PERS)yo) ssdsdsd"
pattern = re.compile(r'\(\(PERS\)\s*yo\s*\)(?:\(\(PERS\)\s*yo\s*\))+', flags = re.IGNORECASE)
modified_text = re.sub(pattern, '((PERS)yo)', input_text)
print(modified_text)
</code></pre>
<p>Why does this code not eliminate the consecutive repeated occurrences of the character sequence <code>((PERS)\s*yo\s*)</code>?</p>
<p>This should be the correct output:</p>
<pre><code>"((PERS)Yo), ((PERS)yo) hgasghasghsa ((PERS)yo) jhsjhsdhjsdsdh ((PERS)yo) jhdjfjhdffdj ((PERS)ella) ((PERS)Ella) ((PERS)ellos) asassaasasasassaassaas ((PERS)yo) ssdsdsd"
</code></pre>
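<p>For context, one likely reason (an assumption about the intent, not a verified diagnosis): the pattern requires the repeats to be strictly adjacent, while the sample text separates them with spaces. Allowing whitespace between occurrences is a small change; a hedged sketch on a simplified input:</p>

```python
import re

# Permit whitespace between the repeated occurrences.
pattern = re.compile(
    r'\(\(PERS\)\s*yo\s*\)(?:\s+\(\(PERS\)\s*yo\s*\))+',
    flags=re.IGNORECASE,
)
re.sub(pattern, '((PERS)yo)', '((PERS)Yo) ((PERS)Yo) ((PERS)yo) end')
# -> '((PERS)yo) end'
```

<p>Handling the comma-separated first occurrence in the expected output would need extra separator logic beyond this sketch.</p>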
|
<python><python-3.x><regex><replace><regex-group>
|
2023-02-19 00:40:07
| 0
| 875
|
Matt095
|
75,497,274
| 914,641
|
Symbolic simplification of algebraic expressions composed of complex numbers
|
<p>I have a question concerning the symbolic simplification of algebraic expressions composed of complex numbers. I have executed the following Python script:</p>
<pre><code>from sympy import *
expr1 = 3*(2 - 11*I)**Rational(1, 3)*(2 + 11*I)**Rational(2, 3)
expr2 = 3*((2 - 11*I)*(2 + 11*I))**Rational(1, 3)*(2 + 11*I)**Rational(1, 3)
print("expr1 = {0}".format(expr1))
print("expr2 = {0}\n".format(expr2))
print("simplify(expr1) = {0}".format(simplify(expr1)))
print("simplify(expr2) = {0}\n".format(simplify(expr2)))
print("expand(expr1) = {0}".format(expand(expr1)))
print("expand(expr2) = {0}\n".format(expand(expr2)))
print("expr1.equals(expr2) = {0}".format(expr1.equals(expr2)))
</code></pre>
<p>The output is:</p>
<pre><code>expr1 = 3*(2 - 11*I)**(1/3)*(2 + 11*I)**(2/3)
expr2 = 3*((2 - 11*I)*(2 + 11*I))**(1/3)*(2 + 11*I)**(1/3)
simplify(expr1) = 3*(2 - 11*I)**(1/3)*(2 + 11*I)**(2/3)
simplify(expr2) = 15*(2 + 11*I)**(1/3)
expand(expr1) = 3*(2 - 11*I)**(1/3)*(2 + 11*I)**(2/3)
expand(expr2) = 15*(2 + 11*I)**(1/3)
expr1.equals(expr2) = True
</code></pre>
<p>My question is why the simplification does not work for <code>expr1</code> but
works for <code>expr2</code>, though the expressions are algebraically equal.
What has to be done to get the same result from <code>simplify</code> for <code>expr1</code> as for <code>expr2</code>?</p>
<p>Thanks in advance for your replies.</p>
<p>Kind regards</p>
<p>Klaus</p>
|
<python><sympy>
|
2023-02-19 00:27:35
| 2
| 832
|
Klaus Rohe
|
75,497,265
| 2,166,823
|
How to modify which properties show up in the JupyterLab help pop up?
|
<p>For most classes and functions the help pop-up in JupyterLab shows the signature and the docstring, as in this picture:</p>
<p><a href="https://i.sstatic.net/fWy9J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fWy9J.png" alt="enter image description here" /></a></p>
<p>However, for some classes, there are additional fields added, which distracts from the docstring and makes the help pop-up harder to parse:</p>
<p><a href="https://i.sstatic.net/60Ad2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/60Ad2.png" alt="enter image description here" /></a></p>
<p>I wish to remove the fields <code>Call signature</code>, <code>Type</code>, <code>String form</code>, and <code>File</code>, how can I achieve this? If it is not possible I would at least like to move the docstring up so that it comes just after the signature.</p>
<p>I have gone through most of the reasonable suspects from <code>dir(alt.X().bin)</code>, but to no avail. This is all that is returned:</p>
<pre><code>['__call__',
'__class__',
'__delattr__',
'__dict__',
'__dir__',
'__doc__',
'__eq__',
'__format__',
'__ge__',
'__get__',
'__getattribute__',
'__gt__',
'__hash__',
'__init__',
'__init_subclass__',
'__le__',
'__lt__',
'__module__',
'__ne__',
'__new__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__setattr__',
'__signature__',
'__sizeof__',
'__str__',
'__subclasshook__',
'__weakref__',
'cls',
'obj',
'prop',
'schema']
</code></pre>
<p>Unfortunately I can't create an MRE for this one. The class <code>.bin</code> is a patched shortcut to the <code>Bin</code> class to make it accessible directly from the <code>X</code> class via a property setter method and the code that creates these patched classes can be seen in this PR <a href="https://github.com/altair-viz/altair/pull/2795/files" rel="nofollow noreferrer">https://github.com/altair-viz/altair/pull/2795/files</a>, most notably in the changes to <code>altair/utils/schemapi.py</code>. If I look at the <code>Bin</code> class directly I can see the desired help pop up:</p>
<p><a href="https://i.sstatic.net/iKsEp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iKsEp.png" alt="enter image description here" /></a></p>
|
<python><ipython><jupyter-lab><docstring>
|
2023-02-19 00:24:40
| 0
| 49,714
|
joelostblom
|
75,497,033
| 9,008,162
|
How to join different dataframe with specific criteria?
|
<p>In my MySQL database <code>stocks</code>, I have 3 different tables. <strong>I want to join all of those tables to display the EXACT format that I want to see</strong>. Should I join in MySQL first, or should I first extract each table as a dataframe and then join with pandas? How should it be done? I also don't know how to write the code.</p>
<p>This is how I want to display: <a href="https://www.dropbox.com/s/fc3mll0q3vefm3q/expected%20output%20sample.csv?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/fc3mll0q3vefm3q/expected%20output%20sample.csv?dl=0</a></p>
<p><a href="https://i.sstatic.net/o6Imn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o6Imn.png" alt="enter image description here" /></a></p>
<p>Because this is just an example format, it only shows two tickers. The expected one should include all of the tickers from my data.</p>
<p>So each ticker is a row that contains all of the specific columns from my tables.</p>
<p>Additional info:</p>
<ul>
<li><p>I only need the <strong>most recent 8 quarters</strong> for quarterly and <strong>5 years for yearly</strong> to be displayed</p>
</li>
<li><p>The exact date for different tickers for quarterly data may differ. If done by hand, the most recent eight quarters can be easily copied and pasted into the respective columns, but <strong>I have no idea how to do it with a computer to determine which quarter it belongs to and show it in the same column as my example output</strong>. (I use the terms q1 through q8 simply as column names to display. So, if my most recent data is May 30, q8 is not necessarily the final quarter of the second year.)</p>
</li>
<li><p>If the most recent quarter or year for one ticker is not available (as in "ADUS" in the example), but it is available for other tickers such as "BA" in the example, simply leave that one blank.</p>
</li>
</ul>
<p>1st table <code>company_info</code>: <a href="https://www.dropbox.com/s/g95tkczviu84pnz/company_info.csv?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/g95tkczviu84pnz/company_info.csv?dl=0</a> contains company info data:</p>
<p><a href="https://i.sstatic.net/OadPK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OadPK.png" alt="enter image description here" /></a></p>
<p>2nd table <code>income_statement_q</code>: <a href="https://www.dropbox.com/s/znf3ljlz4y24x7u/income_statement_q.csv?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/znf3ljlz4y24x7u/income_statement_q.csv?dl=0</a> contains quarterly data:</p>
<p><a href="https://i.sstatic.net/uRjcd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uRjcd.png" alt="enter image description here" /></a></p>
<p>3rd table <code>income_statement_y</code>: <a href="https://www.dropbox.com/s/zpq79p8lbayqrzn/income_statement_y.csv?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/zpq79p8lbayqrzn/income_statement_y.csv?dl=0</a> contains yearly data:</p>
<p><a href="https://i.sstatic.net/xasEa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xasEa.png" alt="enter image description here" /></a></p>
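<p>As a hedged sketch of the alignment step only (toy data, pandas assumed, and the join to <code>company_info</code> would follow): ranking each ticker's dates from newest to oldest and pivoting puts the most recent N periods in the same columns even when the exact dates differ per ticker.</p>

```python
import pandas as pd

# Toy stand-in for income_statement_q
q = pd.DataFrame({
    "ticker": ["ADUS", "ADUS", "BA", "BA"],
    "date": ["2022-09-30", "2022-12-31", "2022-09-30", "2022-12-31"],
    "revenue": [1, 2, 3, 4],
})

q = q.sort_values("date", ascending=False)
q["qcol"] = q.groupby("ticker").cumcount().add(1)   # 1 = newest period
recent = q[q["qcol"] <= 8]                          # keep the last 8 quarters
wide = recent.pivot(index="ticker", columns="qcol", values="revenue")
# wide.loc["ADUS", 1] == 2, i.e. the newest quarter's revenue
```

<p>Tickers with fewer periods than others simply get NaN in the missing columns, which matches the "leave it blank" requirement.</p>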
|
<python><pandas><dataframe><join><mysql-python>
|
2023-02-18 23:27:42
| 1
| 775
|
saga
|
75,496,854
| 7,019,069
|
How can I make a variable a constant in python for pylint?
|
<p>I'm trying to solve most of the issues I find with pylint, but I'm having troubles with this one:</p>
<blockquote>
<p>C0103: Variable name "ELEMENT_CLASS" doesn't conform to snake_case naming style (invalid-name)</p>
</blockquote>
<p>This is an example of the piece of code:</p>
<pre class="lang-py prettyprint-override"><code>def get_title(driver: Chrome) -> str:
    """
    Extract title of the location section
    """
    ELEMENT_CLASS = 'content-intro'
    try:
        element: WebElement = WebDriverWait(driver, 30).until(
            EC.element_to_be_clickable(
                (
                    By.XPATH,
                    f'//div[contains(@class,"{ELEMENT_CLASS}")]')
            )
        )
    except TimeoutException:
        print(f'{ELEMENT_CLASS} could not be found')
        raise
    return element.text
</code></pre>
<p>I want to make it visible that my class name is hard-coded and is a constant. I thought the convention for that was to write it in upper case (for example, in VS Code it is highlighted in a different color).</p>
<p>What can I do to solve this issue (as pylint would like)?</p>
<p>NOTE: I do not want to add a new rule in the <code>pylintrc</code> or just <code>type: ignore</code> it. I want to know what is the "right way" or most common way of writing this kind of constants.</p>
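<p>For reference, a common resolution (a sketch of the convention, not the only option): pylint treats module-level names as constants, so hoisting the hard-coded value out of the function satisfies C0103 without touching <code>pylintrc</code>.</p>

```python
# Module-level names are expected to be UPPER_CASE, so pylint accepts this:
ELEMENT_CLASS = 'content-intro'

def build_xpath() -> str:   # hypothetical helper for illustration
    return f'//div[contains(@class,"{ELEMENT_CLASS}")]'

build_xpath()  # '//div[contains(@class,"content-intro")]'
```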
|
<python><pylint>
|
2023-02-18 22:47:43
| 0
| 2,288
|
nck
|
75,496,722
| 4,496,756
|
Error: Unable to extract uploader id (yt-dlp Library)
|
<p>I'm running yt-dlp in an iOS App, and 2 days ago they just fixed it because of a YouTube code change:
<a href="https://github.com/yt-dlp/yt-dlp/issues/6247" rel="nofollow noreferrer">https://github.com/yt-dlp/yt-dlp/issues/6247</a></p>
<p>The problem is that I already updated from the yt-dlp library file from the new releases:
<a href="https://github.com/yt-dlp/yt-dlp/releases" rel="nofollow noreferrer">https://github.com/yt-dlp/yt-dlp/releases</a></p>
<p>And I'm getting exactly the same error:
<a href="https://i.sstatic.net/Fr0Sm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fr0Sm.png" alt="enter image description here" /></a></p>
<p>Is there anything that I'm missing?</p>
|
<python><ios><yt-dlp>
|
2023-02-18 22:15:57
| 3
| 577
|
Dani G.
|
75,496,582
| 2,607,447
|
Django - generate a unique suffix to an uploaded file (uuid is cut and django adds another suffix)
|
<p>I'm going to use DigitalOcean Spaces as a file storage and I want to add suffixes to uploaded filenames for two reasons:</p>
<ul>
<li>impossible to guess file url with bruteforce</li>
<li>ensure it is unique, as I'm not sure if Django can check for filename uniqueness on S3</li>
</ul>
<p>This is the method:</p>
<pre><code>def cloudfile_upload_to(instance, filename):
    path = storage_path_service.StoragePathService.cloud_dir(instance.cloud)
    filename, ext = os.path.splitext(filename)
    _uuid = uuid.uuid4()
    return os.path.join(path, f"{filename}-{_uuid}{ext}")
</code></pre>
<p>in the code:</p>
<pre><code>path == "user/11449_bacccbe4-6794-42e3-89c3-5045b024fa11/income/1541/cloud"
filename, ext = "webp", ".webp"
uuid == "f8851579-3aa6-403b-bc08-86923d72e80b"
</code></pre>
<p>When I check the file it is in a correct dir, but the filename is:</p>
<pre><code>"webp-527846f0-4284-4e_YvqcSrf.webp"
</code></pre>
<p>As you can see, the uuid is cut and Django adds its own suffix. How can I make this work?</p>
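<p>For what it's worth, a length check points to one plausible cause (an assumption, not confirmed from the question alone): <code>FileField</code> defaults to <code>max_length=100</code>, and when a generated name is longer, Django's storage truncates it and appends its own random suffix, which would match the observed <code>_YvqcSrf</code>. A sketch of the arithmetic:</p>

```python
import os
import uuid

def generated_name(path: str, filename: str) -> str:
    name, ext = os.path.splitext(filename)
    return os.path.join(path, f"{name}-{uuid.uuid4()}{ext}")

path = "user/11449_bacccbe4-6794-42e3-89c3-5045b024fa11/income/1541/cloud"
candidate = generated_name(path, "webp.webp")
len(candidate) > 100  # True: longer than FileField's default max_length
```

<p>If that is the cause, raising <code>max_length</code> on the field or shortening the path would avoid the truncation.</p>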
|
<python><django><django-file-upload><django-storage>
|
2023-02-18 21:45:14
| 0
| 18,885
|
Milano
|
75,496,537
| 4,573,162
|
Python AWS Apprunner Service Failing Health Check
|
<p>New to Apprunner and trying to get a vanilla Python application to deploy successfully but continuing to fail the TCP Health Checks. The following is some relevant portions of the service and Apprunner console logs:</p>
<p>Service Logs:</p>
<pre><code>2023-02-18T15:20:20.856-05:00 [Build] Step 5/5 : EXPOSE 80
2023-02-18T15:20:20.856-05:00 [Build] ---> Running in abcxyz
2023-02-18T15:20:20.856-05:00 [Build] Removing intermediate container abcxyz
2023-02-18T15:20:20.856-05:00 [Build] ---> f3701b7ee4cf
2023-02-18T15:20:20.856-05:00 [Build] Successfully built abcxyz
2023-02-18T15:20:20.856-05:00 [Build] Successfully tagged application-image:latest
2023-02-18T15:30:49.152-05:00 [AppRunner] Failed to deploy your application source code.
</code></pre>
<p>Console:</p>
<pre><code>02-18-2023 03:30:49 PM [AppRunner] Deployment with ID : 123456789 failed. Failure reason : Health check failed.
02-18-2023 03:30:38 PM [AppRunner] Health check failed on port '80'. Check your configured port number. For more information, read the application logs.
02-18-2023 03:24:21 PM [AppRunner] Performing health check on port '80'.
02-18-2023 03:24:11 PM [AppRunner] Provisioning instances and deploying image for privately accessible service.
02-18-2023 03:23:59 PM [AppRunner] Successfully built source code.
</code></pre>
<p>My app is a vanilla, non-networked Python application into which I've added a <code>SimpleHTTPRequestHandler</code> running on a <code>TCPServer</code> configured to run as a separate thread as follows:</p>
<pre><code>import socketserver
import threading
import time
from http.server import SimpleHTTPRequestHandler

# handler for server
class HealthCheckHandler(SimpleHTTPRequestHandler):
    def do_GET(self) -> None:
        self.send_response(code=200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write("""<html><body>hello, client.</body><html>""".encode('utf-8'))

# runs the server
def run_healthcheck_server():
    with socketserver.TCPServer(("127.0.0.1", 80), HealthCheckHandler) as httpd:
        print(f"Fielding health check requests on: 80")
        httpd.serve_forever()

# dummy
def my_app_logic():
    while True:
        print('hello, server.')
        time.sleep(5)

# wrapper to run primary application logic AND TCPServer
def main():
    # run the server in a thread
    threading.Thread(target=run_healthcheck_server).start()
    # run my app
    my_app_logic()

if __name__ == "__main__":
    main()
</code></pre>
<p>This works fine on my local machine and I see "hello, client." in my browser when going to <code>127.0.0.1</code> and a stream of <code>hello, server.</code> messages every 5 seconds in my console.</p>
<p>I don't know much about networking and the only reason I'm incorporating this into the app is to facilitate the AWS HealthCheck which I can't disable in the AppRunner service. I am trying to understand if the issue is how I'm <em>trying</em> to handle TCP requests from AWS' Health Checker or if it's something else on the Apprunner side.</p>
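<p>One detail worth checking (a hypothesis, not a confirmed fix): inside a container, <code>127.0.0.1</code> only accepts connections that originate from within the container itself, while the health checker connects from outside. Binding to all interfaces is the usual container pattern; a minimal sketch:</p>

```python
import socketserver
from http.server import SimpleHTTPRequestHandler

class HealthCheckHandler(SimpleHTTPRequestHandler):
    def do_GET(self) -> None:
        self.send_response(200)
        self.end_headers()

def make_server(port: int = 80) -> socketserver.TCPServer:
    # "0.0.0.0" listens on every interface; "127.0.0.1" is loopback-only.
    return socketserver.TCPServer(("0.0.0.0", port), HealthCheckHandler)
```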
|
<python><tcpserver><amazon-app-runner>
|
2023-02-18 21:36:11
| 0
| 4,628
|
alphazwest
|
75,496,527
| 20,038,123
|
Keras Transformer - Test Loss Not Changing
|
<p>I'm trying to create a small transformer model with Keras to model stock prices, based off of <a href="https://keras.io/examples/timeseries/timeseries_transformer_classification/" rel="nofollow noreferrer">this tutorial</a> from the Keras docs. The problem is, my test loss is massive and barely changes between epochs, unsurprisingly resulting in severe underfitting, with my outputs all the same arbitrary value.</p>
<p>My code is below:</p>
<pre><code>def transformer_encoder_block(inputs, head_size, num_heads, filters, dropout=0):
    # Normalization and Attention
    x = layers.LayerNormalization(epsilon=1e-6)(inputs)
    x = layers.MultiHeadAttention(
        key_dim=head_size, num_heads=num_heads, dropout=dropout
    )(x, x)
    x = layers.Dropout(dropout)(x)
    res = x + inputs

    # Feed Forward Part
    x = layers.LayerNormalization(epsilon=1e-6)(res)
    x = layers.Conv1D(filters=filters, kernel_size=1, activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
    return x + res
data = ...
input = np.array(
keras.preprocessing.sequence.pad_sequences(data["input"], padding="pre", dtype="float32"))
output = np.array(
keras.preprocessing.sequence.pad_sequences(data["output"], padding="pre", dtype="float32"))
# Input shape: (723, 36, 22)
# Output shape: (723, 36, 1)
# Train data
train_features = input[100:]
train_labels = output[100:]
train_labels = tf.keras.utils.to_categorical(train_labels, num_classes=3)
# Test data
test_features = input[:100]
test_labels = output[:100]
test_labels = tf.keras.utils.to_categorical(test_labels, num_classes=3)
inputs = keras.Input(shape=(None,22), dtype="float32", name="inputs")
# Ignore padding in inputs
x = layers.Masking(mask_value=0)(inputs)
x = transformer_encoder_block(x, head_size=64, num_heads=16, filters=3, dropout=0.2)
# Multiclass = Softmax (decrease, no change, increase)
outputs = layers.TimeDistributed(layers.Dense(3, activation="softmax", name="outputs"))(x)
# Create model
model = keras.Model(inputs=inputs, outputs=outputs)
# Compile model
model.compile(loss="categorical_crossentropy", optimizer=(tf.keras.optimizers.Adam(learning_rate=0.005)), metrics=['accuracy'])
# Train model
history = model.fit(train_features, train_labels, epochs=10, batch_size=32)
# Evaluate on the test data
test_loss = model.evaluate(test_features, test_labels, verbose=0)
print("Test loss:", test_loss)
out = model.predict(test_features)
</code></pre>
<p>After padding, <code>input</code> is of shape <code>(723, 36, 22)</code>, and <code>output</code> is of shape <code>(723, 36, 1)</code> (before converting the output to one-hot, after which there are 3 output classes).</p>
<p>Here's an example output for ten epochs (trust me, more than ten doesn't make it better):</p>
<pre><code>Epoch 1/10
20/20 [==============================] - 2s 62ms/step - loss: 10.7436 - accuracy: 0.3335
Epoch 2/10
20/20 [==============================] - 1s 62ms/step - loss: 10.7083 - accuracy: 0.3354
Epoch 3/10
20/20 [==============================] - 1s 60ms/step - loss: 10.6555 - accuracy: 0.3392
Epoch 4/10
20/20 [==============================] - 1s 62ms/step - loss: 10.7846 - accuracy: 0.3306
Epoch 5/10
20/20 [==============================] - 1s 60ms/step - loss: 10.7600 - accuracy: 0.3322
Epoch 6/10
20/20 [==============================] - 1s 59ms/step - loss: 10.7074 - accuracy: 0.3358
Epoch 7/10
20/20 [==============================] - 1s 59ms/step - loss: 10.6569 - accuracy: 0.3385
Epoch 8/10
20/20 [==============================] - 1s 60ms/step - loss: 10.7767 - accuracy: 0.3314
Epoch 9/10
20/20 [==============================] - 1s 61ms/step - loss: 10.7346 - accuracy: 0.3341
Epoch 10/10
20/20 [==============================] - 1s 62ms/step - loss: 10.7093 - accuracy: 0.3354
Test loss: [10.073813438415527, 0.375]
4/4 [==============================] - 0s 22ms/step
</code></pre>
<p>Using the same data on a simple LSTM model with the same shape yielded a desirable prediction with a constantly decreasing loss.</p>
<p>Tweaking the learning rate appears to have no effect, nor does stacking more <code>transformer_encoder_block()</code>s.</p>
<p>If anyone has any suggestions for how I can solve this, please let me know.</p>
|
<python><tensorflow><machine-learning><keras><transformer-model>
|
2023-02-18 21:34:32
| 0
| 429
|
Twisted Tea
|
75,496,045
| 4,723,405
|
Displaying all the columns of a dataframe after aggregating on one column in a groupby
|
<p>I have a dataframe of games played by different number of players for each game. Simplified, it looks like this:</p>
<pre><code>GameID Player Score
------ ------ -----
1 John 10
1 Alice 20
1 Bob 30
2 John 15
2 Alice 25
</code></pre>
<p>I'm trying to display the dataframe of the winner of each game but I'm having trouble with the syntax after using <code>groupby</code>. I managed to get a Series or a Dataframe of the rows with the highest score in each game by doing either of these:</p>
<pre><code>df.groupby('GameID')['Score'].max()
df.groupby('GameID').aggregate({'Score': max})
</code></pre>
<p>But I'd like to display the entire dataframe, including the <code>Player</code> column (as well as other columns present in the dataframe, which I've simplified here).</p>
<p>What is the correct way to do this?</p>
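<p>One common pattern (pandas assumed, with the data recreated from the question): <code>idxmax</code> returns the row label of each group's maximum, and <code>loc</code> then keeps every column of those rows.</p>

```python
import pandas as pd

df = pd.DataFrame({
    "GameID": [1, 1, 1, 2, 2],
    "Player": ["John", "Alice", "Bob", "John", "Alice"],
    "Score": [10, 20, 30, 15, 25],
})

# Row index of the top score per game, then the full rows.
winners = df.loc[df.groupby("GameID")["Score"].idxmax()]
winners["Player"].tolist()  # ['Bob', 'Alice']
```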
|
<python><pandas>
|
2023-02-18 20:00:35
| 0
| 628
|
Cirrocumulus
|
75,495,963
| 5,212,614
|
How can I find the Optimal Price Point and Group By ID?
|
<p>I have a dataframe that looks like this.</p>
<pre><code>import pandas as pd
# intialise data of lists.
data = {'ID':[101762, 101762, 101762, 101762, 102842, 102842, 102842, 102842, 108615, 108615, 108615, 108615, 108615, 108615],
'Year':[2019, 2019, 2019, 2019, 2020, 2020, 2020, 2020, 2021, 2021, 2021, 2021, 2021, 2021],
'Quantity':[60, 80, 88, 75, 50, 55, 62, 58, 100, 105, 112, 110, 98, 95],
'Price':[2000, 3000, 3330, 4000, 850, 900, 915, 980, 1000, 1250, 1400, 1550, 1600, 1850]}
# Create DataFrame
df = pd.DataFrame(data)
</code></pre>
<pre class="lang-none prettyprint-override"><code> ID Year Quantity Price
0 101762 2019 60 2000
1 101762 2019 80 3000
2 101762 2019 88 3330
3 101762 2019 75 4000
4 102842 2020 50 850
5 102842 2020 55 900
6 102842 2020 62 915
7 102842 2020 58 980
8 108615 2021 100 1000
9 108615 2021 105 1250
10 108615 2021 112 1400
11 108615 2021 110 1550
12 108615 2021 98 1600
13 108615 2021 95 1850
</code></pre>
<p>Here are some plots of the data.</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns

uniques = df['ID'].unique()
for i in uniques:
    fig, ax = plt.subplots()
    fig.set_size_inches(4,3)
    df_single = df[df['ID']==i]
    sns.lineplot(data=df_single, x='Price', y='Quantity')
    ax.set(xlabel='Price', ylabel='Quantity')
    plt.xticks(rotation=45)
    plt.show()
</code></pre>
<p><img src="https://i.sstatic.net/nEgm5.png" alt="enter image description here" /></p>
<p><img src="https://i.sstatic.net/hQ4g1.png" alt="enter image description here" /></p>
<p><a href="https://i.sstatic.net/zU9u3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zU9u3.png" alt="enter image description here" /></a></p>
<p>Now, I am trying to find the optimal price to sell something, before quantity sold starts to decline. I think the code below is pretty close, but when I run the code I get '33272.53'. This doesn't make any sense. I am trying to get the optimal price point per ID. How can I do that?</p>
<pre><code>df["% Change in Quantity"] = df["Quantity"].pct_change()
df["% Change in Price"] = df["Price"].pct_change()
df["Price Elasticity"] = df["% Change in Quantity"] / df["% Change in Price"]
df.columns
import pandas as pd
from sklearn.linear_model import LinearRegression
x = df[["Price"]]
y = df["Quantity"]
# Fit a linear regression model to the data
reg = LinearRegression().fit(x, y)
# Find the optimal price that maximizes the quantity sold
optimal_price = reg.intercept_/reg.coef_[0]
optimal_price
</code></pre>
|
<python><python-3.x><pandas><dataframe><optimization>
|
2023-02-18 19:44:15
| 0
| 20,492
|
ASH
|
75,495,814
| 7,946,143
|
Understanding binary addition in python using bit manipulation
|
<p>I am looking at solutions for this <a href="https://leetcode.com/problems/sum-of-two-integers/description/" rel="nofollow noreferrer">question</a>:</p>
<blockquote>
<p>Given two integers <code>a</code> and <code>b</code>, return the sum of the two integers without using the operators <code>+</code> and <code>-</code>. (Input Limits: <code>-1000</code> <= a, b <= <code>1000</code>)</p>
</blockquote>
<p>In all these solutions, I am struggling to understand why the solutions do <code>~(a ^ mask)</code> when <code>a</code> exceeds 32-bit number max <code>0x7fffffff</code> when evaluating <code>a + b</code> [see code below].</p>
<pre><code>def getSum(self, a: int, b: int) -> int:
    # 32bit mask
    mask = 0xFFFFFFFF  # 8 Fs = all 1s for 32 bits
    while True:
        # Handle addition and carry
        a, b = (a ^ b) & mask, ((a & b) << 1) & mask
        if b == 0:
            break
    max_int = 0x7FFFFFFF
    print("A:", a)
    print("Bin A:", bin(a))
    print("Bin M:", bin(mask))
    print(" A^M:", bin(a ^ mask))
    print("~ A^M:", bin(~(a ^ mask)))
    print(" ~ A:", bin(~a))
    return a if a < max_int else ~(a ^ mask)
</code></pre>
<p>I don't get why we need to mask <code>a</code> again when returning the answer?</p>
<p>When exiting the loop it was already masked: <code>a = (a ^ b) & mask</code>. So why can't we just do <code>~a</code> if the <code>32nd bit</code> is set to <code>1</code> for <code>a</code>?</p>
<p>I looked at <a href="https://stackoverflow.com/questions/46573219/the-meaning-of-bit-wise-not-in-python">The meaning of Bit-wise NOT in Python</a>, to understand ~ operation, but did not get it.</p>
<p>Output for <code>a = -12</code>, <code>b = -8</code>. Correctly returns <code>-20</code>:</p>
<pre><code>A: 4294967276
Bin A: 0b11111111111111111111111111101100
Bin M: 0b11111111111111111111111111111111
A^M: 0b10011
~ A^M: -0b10100
~ A: -0b11111111111111111111111111101101
</code></pre>
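A small demonstration of why plain <code>~a</code> is not enough: after masking, <code>a</code> is a non-negative Python int holding the 32-bit pattern, and Python ints are unbounded, so <code>~a</code> flips infinitely many high bits instead of just 32:

```python
mask = 0xFFFFFFFF
a = -20 & mask           # 4294967276: the unsigned 32-bit pattern of -20
print(~a)                # -4294967277 (wrong: flips bits above bit 31 too)
print(~(a ^ mask))       # -20 (right: XOR first flips only the low 32 bits,
                         # then ~ flips everything back, including the sign)
```

So `a ^ mask` keeps the complement confined to the low 32 bits, and the outer `~` converts that bounded pattern into the correct negative Python int.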
|
<python><bit-manipulation>
|
2023-02-18 19:21:50
| 2
| 3,110
|
Dracula
|
75,495,800
| 21,055,247
|
Error: Unable to extract uploader id - Youtube, Discord.py
|
<p>I have a very powerful bot in Discord (discord.py, Python) and it can play music in voice channels. It gets the music from YouTube (youtube_dl). It <strong>worked perfectly before</strong>, but now it doesn't work with any video.
I tried updating youtube_dl, but it still doesn't work.
I searched everywhere but I still can't find an answer that might help me.</p>
<p>This is the Error: <code>Error: Unable to extract uploader id</code></p>
<p>There is no more information before or after the error in the log.
Can anyone help?</p>
<p>I will leave some of the code that I use for my bot...
The youtube setup settings:</p>
<pre><code>youtube_dl.utils.bug_reports_message = lambda: ''

ytdl_format_options = {
    'format': 'bestaudio/best',
    'outtmpl': '%(extractor)s-%(id)s-%(title)s.%(ext)s',
    'restrictfilenames': True,
    'noplaylist': True,
    'nocheckcertificate': True,
    'ignoreerrors': False,
    'logtostderr': False,
    'quiet': True,
    'no_warnings': True,
    'default_search': 'auto',
    'source_address': '0.0.0.0',  # bind to ipv4 since ipv6 addresses cause issues sometimes
}

ffmpeg_options = {
    'options': '-vn',
}

ytdl = youtube_dl.YoutubeDL(ytdl_format_options)


class YTDLSource(discord.PCMVolumeTransformer):
    def __init__(self, source, *, data, volume=0.5):
        super().__init__(source, volume)
        self.data = data
        self.title = data.get('title')
        self.url = data.get('url')
        self.duration = data.get('duration')
        self.image = data.get("thumbnails")[0]["url"]

    @classmethod
    async def from_url(cls, url, *, loop=None, stream=False):
        loop = loop or asyncio.get_event_loop()
        data = await loop.run_in_executor(None, lambda: ytdl.extract_info(url, download=not stream))
        #print(data)
        if 'entries' in data:
            # take first item from a playlist
            data = data['entries'][0]
        #print(data["thumbnails"][0]["url"])
        #print(data["duration"])
        filename = data['url'] if stream else ytdl.prepare_filename(data)
        return cls(discord.FFmpegPCMAudio(filename, **ffmpeg_options), data=data)
</code></pre>
<p>Approximately the command to run the audio (from my bot):</p>
<pre><code>sessionChannel = message.author.voice.channel
await sessionChannel.connect()
url = matched.group(1)
player = await YTDLSource.from_url(url, loop=client.loop, stream=True)
sessionChannel.guild.voice_client.play(player, after=lambda e: print(
f'Player error: {e}') if e else None)
</code></pre>
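For context, this exact error became widespread in early 2023 and is a known breakage of the unmaintained youtube_dl after a YouTube page change; a commonly reported workaround (an assumption that it fits this bot, but the fork keeps a compatible API) is switching to yt-dlp:

```python
# "Unable to extract uploader id" is a known breakage of the unmaintained
# youtube_dl after a YouTube page change in early 2023. A commonly used
# workaround is the actively maintained fork yt-dlp:
#
#     pip install -U yt-dlp
#
# then swap only the import and keep the rest of the code unchanged:
#
#     import yt_dlp as youtube_dl
```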
|
<python><ffmpeg><discord><youtube-dl>
|
2023-02-18 19:19:23
| 9
| 979
|
nikita goncharov
|
75,495,715
| 14,088,584
|
TFIDFVectorizer making concatenated word tokens
|
<p>I am using the <a href="https://ir.dcs.gla.ac.uk/resources/test_collections/cran/" rel="nofollow noreferrer">Cranfield Dataset</a> to make an Indexer and Query Processor. For that purpose I am using TfidfVectorizer to tokenize the data. But after using TfidfVectorizer, when I check the vocabulary, there are a lot of tokens formed by concatenating two words.</p>
<p>I am using the following code to achieve it:</p>
<pre><code>import re
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np
from nltk import word_tokenize
from nltk.stem import WordNetLemmatizer
#reading the data
with open('cran.all', 'r') as f:
    content_string = ""
    content = [line.replace('\n', '') for line in f]
    content = content_string.join(content)
doc = re.split('.I\s[0-9]{1,4}', content)

#some data cleaning
doc = [line.replace('.T',' ').replace('.B',' ').replace('.A',' ').replace('.W',' ') for line in doc]
del doc[0]
doc = [re.sub('[^A-Za-z]+', ' ', lines) for lines in doc]
vectorizer = TfidfVectorizer(analyzer ='word', ngram_range=(1,1), stop_words=text.ENGLISH_STOP_WORDS,lowercase=True)
X = vectorizer.fit_transform(doc)
print(vectorizer.vocabulary_)
</code></pre>
<p>I have attached below a few examples I obtain when I print vocabulary:</p>
<p><code>'freevibration': 7222, 'slendersharp': 15197, 'frequentlyapproximated': 7249, 'notapplicable': 11347, 'rateof': 13727, 'itsvalue': 9443, 'speedflow': 15516, 'movingwith': 11001, 'speedsolution': 15531, 'centerof': 3314, 'hypersoniclow': 8230, 'neice': 11145, 'rutkowski': 14444, 'chann': 3381, 'layerapproximations': 9828, 'probsteinhave': 13353, 'thishypersonic': 17752</code></p>
<p>This does not happen when I use a small dataset. How can I prevent it?</p>
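The glued tokens like "freevibration" are most likely produced before the vectorizer ever runs: joining the file's lines with an empty separator (the <code>content_string = ""</code> join above) fuses the last word of one line with the first word of the next. A tiny demonstration:

```python
# stand-in for consecutive lines of the raw file
lines = ["free", "vibration of a slender", "sharp edge"]

fused = "".join(lines)   # what an empty-string join produces
fixed = " ".join(lines)  # words stay separated

print(fused)  # 'freevibration of a slendersharp edge'
print(fixed)  # 'free vibration of a slender sharp edge'
```

Joining with `" "` (or keeping the `'\n'` instead of stripping it) keeps line-boundary words apart, and the vectorizer then tokenizes them normally.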
|
<python><scikit-learn><nlp><tfidfvectorizer>
|
2023-02-18 19:03:13
| 2
| 422
|
SHIVANSHU SAHOO
|
75,495,667
| 16,363,897
|
Reference dataframes using indices stored in another dataframe
|
<p>I'm trying to reference data from source dataframes, using indices stored in another dataframe.</p>
<p>For example, let's say we have a "shifts" dataframe with the names of the people on duty on each date (some values can be NaN):</p>
<pre><code> a b c
2023-01-01 Sam Max NaN
2023-01-02 Mia NaN Max
2023-01-03 NaN Sam Mia
</code></pre>
<p>Then we have a "performance" dataframe, with the performance of each employee on each date. Row indices are the same as the shifts dataframe, but column names are different:</p>
<pre><code> Sam Mia Max Ian
2023-01-01 4.5 NaN 3.0 NaN
2023-01-02 NaN 2.0 3.0 NaN
2023-01-03 4.0 3.0 NaN 4.0
</code></pre>
<p>and finally we have a "salary" dataframe, whose structure and indices are different from the other two dataframes:</p>
<pre><code> Employee Salary
0 Sam 100
1 Mia 90
2 Max 80
3 Ian 70
</code></pre>
<p>I need to create two output dataframes, with same structure and indices as "shifts".
In the first one, I need to substitute the employee name with his/her performance on that date.
In the second output dataframe, the employee name is replaced with his/her salary. These are the expected outputs:</p>
<pre><code>Output 1:
a b c
2023-01-01 4.5 3.0 NaN
2023-01-02 2.0 NaN 3.0
2023-01-03 NaN 4.0 3.0
Output 2:
a b c
2023-01-01 100.0 80.0 NaN
2023-01-02 90.0 NaN 80.0
2023-01-03 NaN 100.0 90.0
</code></pre>
<p>Any idea of how to do it? Thanks</p>
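A hedged sketch of one approach (variable names assumed from the question): performance values can be looked up row by row with <code>Series.map</code> against that date's performance row, and salaries with a plain name-to-salary mapping:

```python
import numpy as np
import pandas as pd

dates = pd.to_datetime(["2023-01-01", "2023-01-02", "2023-01-03"])
shifts = pd.DataFrame({"a": ["Sam", "Mia", np.nan],
                       "b": ["Max", np.nan, "Sam"],
                       "c": [np.nan, "Max", "Mia"]}, index=dates)
performance = pd.DataFrame({"Sam": [4.5, np.nan, 4.0],
                            "Mia": [np.nan, 2.0, 3.0],
                            "Max": [3.0, 3.0, np.nan],
                            "Ian": [np.nan, np.nan, 4.0]}, index=dates)
salary = pd.DataFrame({"Employee": ["Sam", "Mia", "Max", "Ian"],
                       "Salary": [100, 90, 80, 70]})

# Output 1: for each date (row), map names through that date's performance row
out1 = shifts.apply(lambda row: row.map(performance.loc[row.name]), axis=1)

# Output 2: names -> salaries via a simple lookup Series (date-independent)
sal_map = salary.set_index("Employee")["Salary"]
out2 = shifts.apply(lambda col: col.map(sal_map))

print(out1)
print(out2)
```

NaN shift cells stay NaN in both outputs, since `map` propagates missing keys as NaN.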
|
<python><pandas><dataframe>
|
2023-02-18 18:55:18
| 2
| 842
|
younggotti
|
75,495,451
| 5,378,816
|
How to combine two "mypy: disable-error-code = ERR" comments?
|
<p>I put these on the top of a module:</p>
<pre><code># mypy: disable-error-code=misc
# mypy: disable-error-code=attr-defined
</code></pre>
<p>but only the last line is honoured; the first one is ignored. The same happens with the order reversed or with three lines: in each case, all lines except the last are ignored. I also tried to aggregate the codes into one line but failed.</p>
<p>What is the correct syntax to suppress multiple mypy errors in the whole module?</p>
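According to the mypy documentation, multiple codes belong in a single comment as a quoted, comma-separated list; a later <code># mypy:</code> line re-sets the same option, which would explain why only the last line took effect:

```python
# mypy: disable-error-code="misc, attr-defined"
```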
|
<python><mypy><python-typing>
|
2023-02-18 18:22:47
| 1
| 17,998
|
VPfB
|
75,495,329
| 1,377,912
|
How to terminate FFMPEG running from Python
|
<p>If I send SIGTERM to FFmpeg, it finishes with a non-zero status. I found a solution that writes "q" to STDIN, but it doesn't work in my case.</p>
<p>I try to do it this way: I launch FFMPEG from Python like this:</p>
<pre class="lang-py prettyprint-override"><code>stop_file_fd, _ = tempfile.mkstemp()
with os.fdopen(stop_file_fd, 'w+') as stop_file_fp:
    p = subprocess.Popen((
        'ffmpeg', '-hide_banner', '-loglevel', 'error',
        '-i', m3u8_playlist_url, '-c:v', 'copy', '-c:a', 'copy', '-f', 'flv',
        rtmp_endpoint,
    ), stdin=stop_file_fp)
    launch_another_thread(p, stop_file_fp)
    p.communicate()
</code></pre>
<p>In another thread:</p>
<pre class="lang-py prettyprint-override"><code>stop_file_fp.write('q')
os.fsync(stop_file_fp.fileno())
stop_file_fp.seek(0)
</code></pre>
<p>It seems like FFmpeg stops streaming the data, but the process doesn't exit.</p>
<p><strong>UPD.</strong> I have created a repository with the problem reproducing: <a href="https://github.com/andre487/ffmpeg-stop-reproducing" rel="nofollow noreferrer">https://github.com/andre487/ffmpeg-stop-reproducing</a></p>
<p>I tried to use <code>p.stdin.write(b'q')</code>. The problem appears this way: when you call stop from a client, it works. But when you try to quit the server with <code>Ctrl+C</code>, FFmpeg stops handling data but the process is still alive.</p>
<p>Also, I have tried <code>'q'.encode("GBK")</code> – it works the same way.</p>
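One commonly used alternative to the stdin trick (hedged, since behavior can differ by ffmpeg version and platform; this assumes POSIX): ffmpeg treats a first SIGINT like pressing <code>q</code> and shuts down gracefully, writing trailers before exiting. A helper sketch:

```python
import signal
import subprocess

def stop_gracefully(p: subprocess.Popen, timeout: float = 30.0) -> int:
    """Ask the process to stop the way Ctrl+C would, then wait for it."""
    p.send_signal(signal.SIGINT)   # ffmpeg handles a first SIGINT like 'q'
    try:
        return p.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        p.kill()                   # escalate only if it refuses to exit
        return p.wait()
```

This avoids the file-backed-stdin plumbing entirely; the pipe/stdin approach can stall because ffmpeg only polls stdin at certain points.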
|
<python><ffmpeg>
|
2023-02-18 18:01:24
| 0
| 1,409
|
andre487
|
75,495,278
| 578,749
|
How to prevent VSCode from reordering python imports across statements?
|
<p>This is the <a href="https://pygobject.readthedocs.io/en/latest/" rel="noreferrer">correct way</a> to import Gtk3 into python:</p>
<pre class="lang-py prettyprint-override"><code>import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk, GObject
</code></pre>
<p>When I save such code in VSCode with <code>"editor.formatOnSave": true</code>, it gets reordered to:</p>
<pre class="lang-py prettyprint-override"><code>from gi.repository import Gtk, Gdk
import gi
gi.require_version('Gtk', '3.0')
</code></pre>
<p>which causes Gtk to be loaded before I have the chance to specify the version I am using, which at the very least leads to the following warning being displayed:</p>
<pre><code>PyGIWarning: Gtk was imported without specifying a version first. Use gi.require_version('Gtk', '4.0') before import to ensure that the right version gets loaded.
</code></pre>
<p>or worse, get me an exception like:</p>
<pre><code>ValueError: Namespace Gtk is already loaded with version 4.0
</code></pre>
<p>Now, I like VSCode code formatting, but I don't want it to reorder my imports, especially not across statements (since imports in Python have side effects). How do I properly use VSCode's Python code formatter with Gtk?</p>
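For what it's worth, the reordering is usually done by import sorting (isort, or the "organize imports" code action) rather than by the formatter itself, so two commonly suggested fixes are (hedged; check which extension is active in your setup):

```python
# 1) keep sorting on, but pin this one import sequence in place:
#
#   import gi
#   gi.require_version('Gtk', '3.0')
#   from gi.repository import Gtk, Gdk, GObject  # noqa: E402  # isort:skip
#
# 2) or disable import organizing on save in settings.json:
#
#   "editor.codeActionsOnSave": { "source.organizeImports": false }
```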
|
<python><visual-studio-code><code-formatting>
|
2023-02-18 17:52:21
| 3
| 13,621
|
lvella
|
75,495,212
| 1,273,987
|
Type hinting numpy arrays and batches
|
<p>I'm trying to create a few array types for a scientific python project. So far, I have created generic types for 1D, 2D and ND numpy arrays:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Generic, Protocol, Tuple, TypeVar
import numpy as np
from numpy.typing import _DType, _GenericAlias
Vector = _GenericAlias(np.ndarray, (Tuple[int], _DType))
Matrix = _GenericAlias(np.ndarray, (Tuple[int, int], _DType))
Tensor = _GenericAlias(np.ndarray, (Tuple[int, ...], _DType))
</code></pre>
<p>The first issue is that mypy says that <code>Vector</code>, <code>Matrix</code> and <code>Tensor</code> are not valid types (e.g. when I try <code>myvar: Vector[int] = np.array([1, 2, 3])</code>).</p>
<p>The second issue is that I'd like to create a generic type <code>Batch</code> that I'd like to use like so: <code>Batch[Vector[complex]]</code> should be like <code>Matrix[complex]</code>, <code>Batch[Matrix[float]]</code> should be like <code>Tensor[float]</code>, and <code>Batch[Tensor[int]]</code> should be like <code>Tensor[int]</code>. By "should be like" I mean that mypy should not complain.</p>
<p>How do I go about this?</p>
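A hedged sketch of what current tooling supports: <code>numpy.typing.NDArray</code> is the public, mypy-friendly alias, but it is generic over dtype only, so <code>Vector</code>/<code>Matrix</code>/<code>Tensor</code> can be honest dtype aliases while their dimensionality stays documentation-only, and a shape-transforming <code>Batch</code> (Vector to Matrix) is not expressible this way:

```python
import numpy as np
import numpy.typing as npt

# dtype is checked by mypy; the 1-D/2-D/N-D distinction is naming only
Vector = npt.NDArray
Matrix = npt.NDArray
Tensor = npt.NDArray

v: Vector[np.complex128] = np.array([1 + 2j])
m: Matrix[np.float64] = np.eye(2)
print(v.dtype, m.shape)
```

The private `_GenericAlias`/`_DType` names are implementation details and are what mypy rejects; sticking to `npt.NDArray` keeps the aliases valid types, at the cost of shape checking, which numpy's typing does not yet support.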
|
<python><numpy><mypy><python-typing>
|
2023-02-18 17:42:51
| 1
| 2,105
|
Ziofil
|
75,495,158
| 544,884
|
Calculating Expected Value With Matrix Values
|
<p>I have the following input data</p>
<pre><code>class_p = [0.0234375, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1748046875, 0.0439453125, 0.0, 0.35302734375, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3828125]
league_p = [0.4765625, 0.0, 0.00634765625, 0.4658203125, 0.0, 0.0, 0.046875, 0.0, 0.0, 0.0029296875, 0.0, 0.0, 0.0, 0.0, 0.0]
a2_p = [0.1171875, 0.0, 0.0, 0.1171875, 0.0, 0.0078125, 0.30322265625, 0.31103515625, 0.0, 0.0, 0.0, 0.1435546875, 0.0, 0.0, 0.0]
p1_p = [0.0, 0.03125, 0.375, 0.09375, 0.0234375, 0.0, 0.46875, 0.0078125, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
p2_p = [0.3984375, 0.0, 0.0, 0.3828125, 0.08935546875, 0.08935546875, 0.023345947265625, 0.007720947265625, 0.0, 0.0, 0.0087890625, 0.00018310546875, 0.0, 0.0, 0.0]
class_v = [55, 75, 55, 75, 500, 10000, 55, 55, 55, 75, 75, 55, 55, 500, 55, 55, 75, 75, 55, 55, 55]
league_v = [0, 0, 0, 0, 0, 0, 0, 0, 40, 40, 40, 40, 1500, 1500, 3000]
a2_v= [0, 0, 0, 0, 0, 0, 0, 0, 40, 40, 40, 40, 1500, 1500, 3000]
p1_v = [0, 0, 0, 0, 0, 0, 0, 40, 40, 40, 40, 40, 1500, 1500, 3000]
p2_v = [0, 0, 0, 0, 0, 0, 0, 0, 40, 40, 40, 40, 1500, 1500, 3000]
</code></pre>
<p>With that data, I am generating the odds of each combination occurring.</p>
<p>As an example to generate the chance of a given combination</p>
<pre><code>class_p[0]
league_p[6]
a2_p[11]
p1_p[7]
p2_p[3]
</code></pre>
<p>I would multiply their values with each other
<code>0.0234375x0.046875x0.1435546875x0.0078125x0.3828125</code></p>
<p>That would give me <code>4.716785042546689510345458984375 × 10^-7</code></p>
<p>Since the given combination had <code>class_p[0], league_p[6], a2_p[11], p1_p[7], p2_p[3]</code>, I would take the following values in the "values" arrays.
I would sum</p>
<p><code>class_v[0] + league_v[6] + a2_v[11] + p1_v[7] + p2_v[3]</code>
That would give me <code>55+0+40+40+0 = 135</code></p>
<p>To finalize the process I would do
<code>(0.0234375*0.046875*0.1435546875*0.0078125*0.3828125)*(55+0+40+40+0) = 0.00006367659807</code></p>
<p>The full final calc is
<code>(0.0234375×0.046875×0.1435546875×0.0078125×0.3828125) (55 + 0 + 40 + 40 + 0)</code>
<code>(combination_chance)*(combination_value)</code>
I need to do this process for all possible combinations of <code>combination_chance</code></p>
<p>This should give me a column of values(1xN). If I sum the values of that column I reach the EV overall, by summing the EV of individual combinations.</p>
<p>Calculating <code>combination_chance</code> is working just fine. My issue is how to line up each combination with its corresponding value sum (<code>combination_value</code>). At the moment, I have additional identifiers attached to the <code>*_p</code> arrays, and I then do a string comparison with them to determine which combination value to use. This is very slow for billions of comparisons, so I am exploring a better approach.</p>
<p>I am using python 3.8 & numpy 1.24</p>
<p><strong>Edit</strong></p>
<p>The question has been adjusted to include much more detail</p>
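One observation worth noting (a sketch, not necessarily what's wanted if per-combination rows are needed): the sum over all combinations factorizes, so the overall EV needs no enumeration or string matching at all. Writing S_k for the sum of slot k's probabilities and E_k for the sum of its probability-weighted values, the EV is the sum over k of E_k times the product of the other slots' S_m:

```python
import itertools
import numpy as np

def expected_value(probs, vals):
    """EV of sum(values) over independent slots, without enumerating combos."""
    probs = [np.asarray(p, dtype=float) for p in probs]
    vals = [np.asarray(v, dtype=float) for v in vals]
    totals = [p.sum() for p in probs]                        # S_k per slot
    weighted = [(p * v).sum() for p, v in zip(probs, vals)]  # E_k per slot
    return sum(w * np.prod([t for j, t in enumerate(totals) if j != k])
               for k, w in enumerate(weighted))

# brute-force check over every combination on tiny toy slots
probs = [[0.2, 0.3], [0.5, 0.25, 0.25], [0.1, 0.9]]
vals = [[55, 75], [0, 40, 1500], [10, 20]]
brute = sum(np.prod([probs[k][i] for k, i in enumerate(idx)]) *
            sum(vals[k][i] for k, i in enumerate(idx))
            for idx in itertools.product(*[range(len(p)) for p in probs]))
print(abs(expected_value(probs, vals) - brute) < 1e-9)  # True
```

This turns billions of combination evaluations into a handful of array sums, because the index position itself lines each probability up with its value, replacing the identifier-string comparison.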
|
<python><numpy><statistics><matrix-multiplication>
|
2023-02-18 17:35:50
| 2
| 1,982
|
Adam
|
75,495,108
| 16,363,897
|
Fast way to get index of non-blank values in row/column
|
<p>Let's say we have the following pandas dataframe:</p>
<pre><code>df = pd.DataFrame({'a': {0: 3.0, 1: 2.0, 2: None}, 'b': {0: 10.0, 1: None, 2: 8.0}, 'c': {0: 4.0, 1: 2.0, 2: 6.0}})
a b c
0 3.0 10.0 4.0
1 2.0 NaN 2.0
2 NaN 8.0 6.0
</code></pre>
<p>I need to get a dataframe with, for each row, the column names of all non-NaN values.
I know I can do the following, which produces the expected output:</p>
<pre><code>df2 = df.apply(lambda x: pd.Series(x.dropna().index), axis=1)
0 1 2
0 a b c
1 a c NaN
2 b c NaN
</code></pre>
<p>Unfortunately, this is quite slow with large datasets. Is there a faster way?</p>
<p>Getting the row indices of non-Null values of each column could work too, as I would just need to transpose the input dataframe. Thanks.</p>
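A vectorized sketch of one faster approach: build the boolean mask once and slice the column array per row, which avoids the per-row `dropna` that makes `apply` slow:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': {0: 3.0, 1: 2.0, 2: None},
                   'b': {0: 10.0, 1: None, 2: 8.0},
                   'c': {0: 4.0, 1: 2.0, 2: 6.0}})

mask = df.notna().to_numpy()        # one boolean matrix for the whole frame
cols = df.columns.to_numpy()
df2 = pd.DataFrame([list(cols[m]) for m in mask], index=df.index)
print(df2)
```

Short rows are padded with NaN on the right, matching the `apply` output's shape.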
|
<python><pandas><dataframe><numpy>
|
2023-02-18 17:28:45
| 1
| 842
|
younggotti
|
75,494,833
| 13,874,745
|
Why are the weights not updating when splitting the model into two `class` in pytorch and torch-geometric?
|
<p>I tried two different ways to build my model:</p>
<ul>
<li>First approach: split the model into two <code>class</code>, one is <code>MainModel()</code> and the other is <code>GinEncoder()</code>, and when I call the <code>MainModel()</code>, it would also call <code>GinEncoder()</code> too.</li>
<li>Second approach: Create a single class: <code>MainModel2()</code> by merging the two classes: <code>MainModel()</code> and <code>GinEncoder()</code>.</li>
</ul>
<p>So the model layer structure of <code>MainModel2()</code> are as same as 『<code>MainModel()</code> + <code>GinEncoder()</code>』, but:</p>
<ul>
<li>In the first approach, <strong>the weights of <code>GinEncoder()</code> cannot be updated</strong>, while the weights of <code>MainModel()</code> can be updated.</li>
<li>In the second approach, <strong>all weights</strong> of <code>MainModel2()</code> can be updated</li>
</ul>
<p>My question is:
Why are the weights not updating when splitting the model into two <code>class</code> in pytorch and torch-geometric? But when I merge the layers of these two classes, all weight can be updated?</p>
<p>Here are partial codes:</p>
<ul>
<li><p>First approach: split the model to two <code>class</code>, one is <code>MainModel</code>, the other <code>GinEncoder</code>, as shown as below:</p>
<pre class="lang-py prettyprint-override"><code>class GinEncoder(torch.nn.Module):
    def __init__(self):
        super(GinEncoder, self).__init__()
        self.gin_convs = torch.nn.ModuleList()
        self.gin_convs.append(GINConv(Sequential(Linear(1, dim_h), ReLU(),
                                                 Linear(dim_h, dim_h), ReLU(),
                                                 BatchNorm1d(dim_h))))
        for _ in range(gin_layer - 1):
            self.gin_convs.append(GINConv(Sequential(Linear(dim_h, dim_h), ReLU(),
                                                     Linear(dim_h, dim_h), ReLU(),
                                                     BatchNorm1d(dim_h))))

    def forward(self, x, edge_index, batch_node_id):
        # Node embeddings
        nodes_emb_layers = []
        for i in range(gin_layer):
            x = self.gin_convs[i](x, edge_index)
            nodes_emb_layers.append(x)

        # Graph-level readout
        nodes_emb_pools = [global_add_pool(nodes_emb, batch_node_id) for nodes_emb in nodes_emb_layers]

        # Concatenate and form the graph embeddings
        graph_embeds = torch.cat(nodes_emb_pools, dim=1)
        return graph_embeds


class MainModel(torch.nn.Module):
    def __init__(self, graph_encoder: torch.nn.Module):
        super(MainModel, self).__init__()
        self.graph_encoder = graph_encoder
        self.lin1 = Linear(dim_h * gin_layer, 4)
        self.lin2 = Linear(4, dim_h * gin_layer)

    def forward(self, x, edge_index, batch_node_id):
        graph_embeds = self.graph_encoder(x, edge_index, batch_node_id)
        out_lin1 = self.lin1(graph_embeds)
        pred = self.lin2(out_lin1)[-1]
        return pred
</code></pre>
</li>
<li><p>Second approach: create <code>MainModel2()</code> by merging layers of the two class: <code>MainModel()</code> and <code>GinEncoder()</code></p>
<pre class="lang-py prettyprint-override"><code>class MainModel2(torch.nn.Module):
    def __init__(self):
        super(MainModel2, self).__init__()
        self.gin_convs = torch.nn.ModuleList()
        self.gin_convs.append(GINConv(Sequential(Linear(1, dim_h), ReLU(),
                                                 Linear(dim_h, dim_h), ReLU(),
                                                 BatchNorm1d(dim_h))))
        self.gin_convs.append(GINConv(Sequential(Linear(dim_h, dim_h), ReLU(),
                                                 Linear(dim_h, dim_h), ReLU(),
                                                 BatchNorm1d(dim_h))))
        self.lin1 = Linear(dim_h * gin_layer, 4)
        self.lin2 = Linear(4, dim_h * gin_layer)

    def forward(self, x, edge_index, batch_node_id):
        # Node embeddings
        nodes_emb_layers = []
        for i in range(2):
            x = self.gin_convs[i](x, edge_index)
            nodes_emb_layers.append(x)

        # Graph-level readout
        nodes_emb_pools = [global_add_pool(nodes_emb, batch_node_id) for nodes_emb in nodes_emb_layers]

        # Concatenate and form the graph embeddings
        graph_embeds = torch.cat(nodes_emb_pools, dim=1)
        out_lin1 = self.lin1(graph_embeds)
        pred = self.lin2(out_lin1)[-1]
        return pred
</code></pre>
</li>
</ul>
<p>PS.</p>
<ul>
<li>I put the complete codes in here:
<a href="https://gist.github.com/theabc50111/8a38b88713f494be1d92d4ea2bdecc5e" rel="nofollow noreferrer">https://gist.github.com/theabc50111/8a38b88713f494be1d92d4ea2bdecc5e</a></li>
<li>I put the training data on Google Drive: <a href="https://drive.google.com/drive/folders/1_KMwCzf1diwS4gGNdSSxG7bnemqQkFxI?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/1_KMwCzf1diwS4gGNdSSxG7bnemqQkFxI?usp=sharing</a></li>
<li>I asked a related question: <a href="https://stackoverflow.com/questions/75444625/how-to-update-the-weights-of-a-composite-model-composed-of-pytorch-and-torch-geo">How to update the weights of a composite model composed of pytorch and torch-geometric?</a></li>
</ul>
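For what it's worth, a submodule assigned in <code>__init__</code> does contribute its parameters to <code>model.parameters()</code>, as this minimal sketch with plain Linear layers (not the GIN layers above; an illustration, not the question's model) shows. If the split encoder's weights stay frozen, the optimizer was most likely constructed from an object that doesn't include them, or from the encoder instance before it was attached:

```python
import torch
from torch import nn

torch.manual_seed(0)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(4, 4)
    def forward(self, x):
        return self.lin(x)

class Main(nn.Module):
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder          # registered as a submodule here
        self.head = nn.Linear(4, 1)
    def forward(self, x):
        return self.head(self.encoder(x))

model = Main(Encoder())
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # sees encoder params too
before = model.encoder.lin.weight.detach().clone()
loss = model(torch.randn(8, 4)).sum()
opt.zero_grad()
loss.backward()
opt.step()
print(torch.equal(before, model.encoder.lin.weight))  # False: encoder updated
```

Printing `model.named_parameters()` against what was passed to the optimizer is a quick way to spot which parameter set is missing.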
|
<python><pytorch><neural-network><pytorch-geometric><graph-neural-network>
|
2023-02-18 16:47:41
| 1
| 451
|
theabc50111
|
75,494,820
| 491,894
|
How do I create an entrypoint in a custom location and activate its virtual environment when executed?
|
<p>I've searched, but either I'm phrasing the question wrong, or it's an anti-pattern.</p>
<p>While I'm not new to python, I've only ever hacked on existing packages.</p>
<p>I'm working on a package that will only be run from wherever its repository has been cloned. I can use <code>pip install -e .</code> during development and <code>pip install .</code> once it's working the way I want.</p>
<p>I'm using a virtual environment, so I don't pollute my namespace, but I want to be able to run my various entrypoints without having first to activate it.</p>
<p><code>pip install</code> installs those entrypoints in <code>venv/bin</code>; thus, when I want to change to that directory while doing something else, I have to run <code>. venv/bin/activate && entrypoint-alpha</code>.</p>
<p>I've written a script to do that in a <code>bin</code> directory at the top level of the repository. However, I just spent an hour trying to figure out why something wasn't working, when I had simply forgotten to include the new entrypoint in that script.</p>
<p>Is there a way to tell <code>pip install</code> to install the entrypoints in this <code>bin</code> directory instead of <code>venv/bin</code> and have the package activate the virtual environment when it's run?</p>
<p><strong>Edit</strong>: I didn't explain myself well enough. To demonstrate what I mean, what I want to do is have a one-time (per cloned instance) installation command. I don't have a problem manually activating the virtual environment here. It's after the initial installation that I want to be able to just run the various endpoints directly, without having to remember to activate the virtual environment.</p>
<p>It's a personal project, no one else is going to be using it.</p>
<p>Installation:</p>
<pre><code>git clone <git-url>
cd newinstance
python -m venv venv
. venv/bin/activate
pip install .
</code></pre>
<p>This is all fine. But the entrypoints are installed in <code>venv/bin</code>. When I come back in a new session I'd like to be able to just run the entrypoint without having to remember to activate the virtual environment.</p>
<pre><code><after logging in again>
cd newinstance
bin/entrypoint-alpha
</code></pre>
<ul>
<li>Can I change the installation of those entrypoints to <code>bin</code> in the top-level of the repository?</li>
<li>Can I modify my script (<strong>not venv</strong>) to activate the virtual environment if it's not already active?</li>
</ul>
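One relevant detail (hedged, assuming a standard `python -m venv` setup): the console scripts pip writes into `venv/bin` hard-code the venv's interpreter in their shebang line, so they already run correctly without activating the environment:

```shell
# console scripts embed the venv interpreter in their shebang; demo:
python3 -m venv demo_venv
head -1 demo_venv/bin/pip        # -> #!<abs path>/demo_venv/bin/python3
demo_venv/bin/pip --version      # works with no `activate` at all

# so a top-level bin/ can simply hold symlinks, created once per clone
# (entrypoint-alpha is the name from the question):
#   ln -s ../venv/bin/entrypoint-alpha bin/entrypoint-alpha
rm -rf demo_venv
```

That makes `bin/entrypoint-alpha` a plain symlink rather than a wrapper script that has to be kept in sync by hand.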
|
<python><virtualenv>
|
2023-02-18 16:44:41
| 0
| 1,304
|
harleypig
|
75,494,698
| 4,560,509
|
Celery Module Tasks are Cached so edits in dev environment do not apply
|
<p>I have a problem which makes local development with Celery very difficult.</p>
<p>If I edit my local files and restart docker containers none of the CODE changes are applied. Not talking about task result caching here... just the actual function execution.</p>
<p>I have to prune everything for them to be applied.
Does anyone have any solution for this?</p>
<p>supervisord.conf</p>
<pre><code>[supervisord]
nodaemon=true
[program:celeryworker]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=celery -A worker.scheduler.schedule.celery_app worker -l info
[program:celerybeat]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=celery -A worker.scheduler.schedule.celery_app beat -l info
</code></pre>
<p>worker.Dockerfile</p>
<pre><code>FROM python:3.10
WORKDIR /usr/src/app
# install supervisord
RUN apt-get update && apt-get install -y supervisor
RUN apt-get install -y python3-pymysql
# copy requirements and install (so that changes to files do not mean rebuild cannot be cached)
RUN mkdir worker
COPY worker/requirements.txt /usr/src/app/worker
COPY worker/supervisord.conf /usr/src/app
RUN pip install -r /usr/src/app/worker/requirements.txt
# copy all files into the container
COPY ./worker /usr/src/app/worker
COPY ./db /usr/src/app/db
# needs to be set else Celery gives an error (because docker runs commands inside container as root)
ENV C_FORCE_ROOT=1
# run supervisord
CMD ["/usr/bin/supervisord"]
</code></pre>
<p>compose subset</p>
<pre class="lang-yaml prettyprint-override"><code>  celery_worker:
    build:
      context: server
      dockerfile: worker.Dockerfile
    volumes:
      - ./worker:/tmp/worker
    environment:
      - CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//
      - CELERY_RESULT_BACKEND=redis://@redis:6000/0
    depends_on:
      - db
      - rabbitmq
      - redis
</code></pre>
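A possible culprit, judging only from the snippets above: the Dockerfile bakes the code into <code>/usr/src/app/worker</code>, but the bind mount puts the live files at <code>/tmp/worker</code>, so Celery keeps importing the stale baked-in copy and only a full rebuild ("prune everything") picks up edits. Mounting over the copied path (the container path below is taken from the Dockerfile's <code>COPY</code> destination, so treat it as an assumption) would make edits visible after a plain restart:

```yaml
    volumes:
      # mount the live code over the baked-in copy instead of /tmp/worker
      - ./worker:/usr/src/app/worker
```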
|
<python><docker><celery><supervisord>
|
2023-02-18 16:22:42
| 0
| 2,897
|
Michael Paccione
|
75,494,645
| 11,380,409
|
Python wildcard search
|
<p>I have a Lambda python function that I inherited which searches and reports on installed packages on EC2 instances. It pulls this information from SSM Inventory where the results are output to an S3 bucket. All of the installed packages have specific names until now. Now we need to report on Palo Alto Cortex XDR. The issue I'm facing is that this product includes the version number in the name and we have different versions installed. If I use the exact name (i.e. Cortex XDR 7.8.1.11343) I get reporting on that particular version but not others. I want to use a wild card to do this. I import regex (<code>import re</code>) on line 7 and then I change line 71 to <code>xdr=line['Cortex*']</code>) but it gives me the following error. I'm a bit new to Python and coding so any explanation as to what I'm doing wrong would be helpful.</p>
<pre><code>File "/var/task/SoeSoftwareCompliance/RequiredSoftwareEmail.py", line 72, in build_html
xdr=line['Cortex*'])
</code></pre>
<pre><code>import configparser
import logging
import csv
import json
from jinja2 import Template
import boto3
import re

# config
config = configparser.ConfigParser()
config.read("config.ini")

# logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)


# @TODO
# refactor common_csv_header so that we use one with variable
# so that we write all content to one template file.
def build_html(account=None,
               ses_email_address=None,
               recipient_email=None):
    """
    :param recipient_email:
    :param ses_email_address:
    :param account:
    """
    account_id = account["id"]
    account_alias = account["alias"]
    linux_ec2s = []
    windows_ec2s = []
    ec2s_not_in_ssm = []
    excluded_ec2s = []

    # linux ec2s html
    with open(f"/tmp/{account_id}_linux_ec2s_required_software_report.csv", "r") as fp:
        lines = csv.DictReader(fp)
        for line in lines:
            if line["platform-type"] == "Linux":
                item = dict(id=line['instance-id'],
                            name=line['instance-name'],
                            ip=line['ip-address'],
                            ssm=line['amazon-ssm-agent'],
                            cw=line['amazon-cloudwatch-agent'],
                            ch=line['cloudhealth-agent'])
                # skip compliant linux ec2s where all values are found
                compliance_status = not all(item.values())
                if compliance_status:
                    linux_ec2s.append(item)

    # windows ec2s html
    with open(f"/tmp/{account_id}_windows_ec2s_required_software_report.csv", "r") as fp:
        lines = csv.DictReader(fp)
        for line in lines:
            if line["platform-type"] == "Windows":
                item = dict(id=line['instance-id'],
                            name=line['instance-name'],
                            ip=line['ip-address'],
                            ssm=line['Amazon SSM Agent'],
                            cw=line['Amazon CloudWatch Agent'],
                            ch=line['CloudHealth Agent'],
                            mav=line['McAfee VirusScan Enterprise'],
                            trx=line['Trellix Agent'],
                            xdr=line['Cortex*'])
                # skip compliant windows ec2s where all values are found
                compliance_status = not all(item.values())
                if compliance_status:
                    windows_ec2s.append(item)

    # ec2s not found in ssm
    with open(f"/tmp/{account_id}_ec2s_not_in_ssm.csv", "r") as fp:
        lines = csv.DictReader(fp)
        for line in lines:
            item = dict(name=line['instance-name'],
                        id=line['instance-id'],
                        ip=line['ip-address'],
                        pg=line['patch-group'])
            ec2s_not_in_ssm.append(item)

    # display or hide excluded ec2s from report
    display_excluded_ec2s_in_report = json.loads(config.get("settings", "display_excluded_ec2s_in_report"))
    if display_excluded_ec2s_in_report == "true":
        with open(f"/tmp/{account_id}_excluded_from_compliance.csv", "r") as fp:
            lines = csv.DictReader(fp)
            for line in lines:
                item = dict(id=line['instance-id'],
                            name=line['instance-name'],
                            pg=line['patch-group'])
                excluded_ec2s.append(item)

    # pass data to html template
    with open('templates/email.html') as file:
        template = Template(file.read())

    # pass parameters to template renderer
    html = template.render(
        linux_ec2s=linux_ec2s,
        windows_ec2s=windows_ec2s,
        ec2s_not_in_ssm=ec2s_not_in_ssm,
        excluded_ec2s=excluded_ec2s,
        account_id=account_id,
        account_alias=account_alias)

    # consolidated html with multiple tables
    tables_html_code = html
    client = boto3.client('ses')
    client.send_email(
        Destination={
            'ToAddresses': [recipient_email],
        },
        Message={
            'Body': {
                'Html':
                    {'Data': tables_html_code}
            },
            'Subject': {
                'Charset': 'UTF-8',
                'Data': f'SOE | Software Compliance | {account_alias}',
            },
        },
        Source=ses_email_address,
    )
    print(tables_html_code)
</code></pre>
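For context on the failure mode: <code>csv.DictReader</code> keys are literal column names, so <code>line['Cortex*']</code> looks up a key that doesn't exist; no wildcard expansion happens in dictionary access. One way to match any "Cortex XDR &lt;version&gt;" column is to scan the keys with a regex (names below mirror the question; the sample row is an assumption):

```python
import re

# hypothetical CSV row as DictReader would yield it
line = {'instance-id': 'i-0abc', 'Cortex XDR 7.8.1.11343': 'installed'}

def match_key(row, pattern):
    """Return the value of the first column whose name matches pattern."""
    for key, value in row.items():
        if re.match(pattern, key):
            return value
    return ''  # falsy, so the later all(item.values()) check flags it

print(match_key(line, r'Cortex XDR'))  # 'installed'
# usage inside the report loop: xdr=match_key(line, r'Cortex XDR')
```

Returning `''` (rather than raising) keeps the existing compliance logic intact when an instance has no Cortex column at all.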
|
<python><python-3.x><aws-lambda>
|
2023-02-18 16:15:27
| 2
| 625
|
TIM02144
|
75,494,626
| 4,402,572
|
Comment / documentation as metadata
|
<p>If I type-annotate my code properly, and use something like Pylance in my IDE, when I hover over a method or function I get helpful information about that code's 'signature': the variables it expects, their types, and what I can expect from that code in terms of a response. All that is great, and I have come to rely on it in my daily coding activities.</p>
<p>Anyway, I was reviewing some old code of mine, and was trying to make sense of it after 18+ months (I'm sure I'm alone in that, right? lol). And although I had commented the code in question back then, looking at it now I found the comments to actually clutter things up, instead of being very helpful.</p>
<p>So it got me to thinking: what if there were some sort of plugin or code-assistant library that, when I hovered over a section of code, not only gave me the code's signature/type annotation info, but also gave me the documentation for that particular section? I could potentially make the comments much longer and clearer without having to worry about code bloat, or its visual impact.</p>
<h2>tldr;</h2>
<p><em><strong>is there a way to make my comments be metadata instead of having to be embedded in the code itself?</strong></em></p>
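For Python specifically, part of this already exists (hedged, since it may not be exactly what's meant by metadata): docstrings are attached to the object itself rather than interleaved with the logic, and hover tooltips in Pylance show them alongside the signature, so documentation can be long without visually bloating the code. The function name below is made up for illustration:

```python
def frobnicate(x: int) -> int:
    """Double x.

    Unlike # comments, this text lives on the function object itself
    (frobnicate.__doc__) and is what editors display on hover, so it
    can be as detailed as needed without cluttering the body below.
    """
    return x * 2

print(frobnicate.__doc__.splitlines()[0])  # 'Double x.'
```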
|
<python><metadata>
|
2023-02-18 16:11:39
| 0
| 1,264
|
Jeff Wright
|
75,494,613
| 16,723,327
|
override a value in async function
|
<p>I have this function in <code>my_module.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>async def foo():
a = 10
b = 10
return a+b
</code></pre>
<p>I am trying to override the value of <code>a</code> in a unit test using <code>aiohttp.test_utils.AioHTTPTestCase</code> and the <code>@patch</code> decorator with a <code>return_value=5</code> argument.
How can I do this?</p>
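<p>For what it's worth, a local variable inside a coroutine cannot be patched on its own; a common workaround (sketched below under that assumption, not taken from any official recipe) is to replace the whole coroutine with <code>unittest.mock.AsyncMock</code> and set its <code>return_value</code> to what the function would return if <code>a</code> were 5:</p>

```python
import asyncio
import sys
from unittest.mock import AsyncMock, patch

async def foo():
    a = 10
    b = 10
    return a + b

# A local variable like `a` cannot be patched directly. Instead, swap
# the whole coroutine for an AsyncMock whose return_value reflects the
# value you want (a == 5 -> 15). patch.object on the module restores
# the real coroutine when the context manager exits.
def test_foo_patched():
    with patch.object(sys.modules[__name__], "foo", AsyncMock(return_value=15)):
        assert asyncio.run(foo()) == 15  # the mock's value, as if a == 5
    assert asyncio.run(foo()) == 20      # real coroutine restored afterwards

test_foo_patched()
```

<p>In a real test module the target would be <code>patch("my_module.foo", ...)</code>; the <code>sys.modules[__name__]</code> form above just keeps the sketch self-contained.</p>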
|
<python><unit-testing><mocking><python-asyncio>
|
2023-02-18 16:08:57
| 0
| 333
|
alibustami
|
75,494,440
| 15,637,940
|
import python module with one file inside
|
<p>I packaged one file:</p>
<pre><code>proxy_master
├── __init__.py
└── proxy_master.py
</code></pre>
<p>After installing it, I need to type:</p>
<pre><code>from proxy_master import proxy_master
</code></pre>
<p>I want to know: is it even possible to just type the following?</p>
<pre><code>import proxy_master
</code></pre>
<p>Thanks! Advice on changing the structure is welcome too; I did everything as described here: <a href="https://packaging.python.org/en/latest/tutorials/packaging-projects/" rel="nofollow noreferrer">https://packaging.python.org/en/latest/tutorials/packaging-projects/</a></p>
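<p>One way this is usually done (sketched below; <code>get_proxy()</code> is a made-up name, and the package layout is rebuilt in a temp directory purely to demonstrate) is to re-export the inner module's names from <code>__init__.py</code>, so that <code>import proxy_master</code> alone is enough:</p>

```python
import os
import sys
import tempfile

# Recreate the layout from the question in a temp dir, with a one-line
# __init__.py that re-exports everything from the inner module.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "proxy_master")
os.makedirs(pkg)
with open(os.path.join(pkg, "proxy_master.py"), "w") as f:
    f.write("def get_proxy():\n    return 'proxy'\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .proxy_master import *\n")  # the one-line fix

sys.path.insert(0, root)
import proxy_master

print(proxy_master.get_proxy())  # prints: proxy
```

<p>With that <code>__init__.py</code>, the package's public names are reachable directly on <code>proxy_master</code> after a plain <code>import proxy_master</code>.</p>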
|
<python><python-3.x><python-packaging>
|
2023-02-18 15:41:45
| 0
| 412
|
555Russich
|
75,494,363
| 2,991,243
|
Substitute a part of string (ignoring special characters)
|
<p>Suppose that I have a list of strings, and I want to substitute different parts of them with <code>re.sub</code>. My problem is that the text to be matched sometimes contains special regex characters, so the function can't match it properly. One example:</p>
<pre><code>import re
txt = 'May enbd company Ltd (Pty) Ltd, formerly known as apple shop Ltd., is a full service firm which is engaged in the sale and servicing of motor vehicles.'
re.sub('May enbd company Ltd (Pty) Ltd', 'PC (Pty) Ltd', txt)
</code></pre>
<p>Here the issue comes from <code>(</code> and <code>)</code>, but other forms that I'm not aware of now may occur as well. So I want these special characters to be treated literally and the matched text replaced with my preferred string. In this case, that means:</p>
<pre><code> 'PC (Pty) Ltd, formerly known as apple shop Ltd., is a full-service firm which is engaged in the sale and servicing of motor vehicles.'
</code></pre>
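<p>The standard tool for this is <code>re.escape()</code>, which backslash-escapes every regex metacharacter so the pattern matches the text literally, parentheses included (sketch below on the example string from the question):</p>

```python
import re

txt = ('May enbd company Ltd (Pty) Ltd, formerly known as apple shop Ltd., '
       'is a full service firm which is engaged in the sale and servicing '
       'of motor vehicles.')

# re.escape() turns '(' into '\(' etc., so the pattern matches literally.
pattern = re.escape('May enbd company Ltd (Pty) Ltd')
result = re.sub(pattern, 'PC (Pty) Ltd', txt)
print(result)

# If no regex features are needed at all, plain str.replace works too:
result2 = txt.replace('May enbd company Ltd (Pty) Ltd', 'PC (Pty) Ltd')
```

<p>If the replacement string may itself contain backslashes, note that the replacement side of <code>re.sub</code> also interprets escapes; <code>str.replace</code> avoids that entirely.</p>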
|
<python><string><replace>
|
2023-02-18 15:29:07
| 1
| 3,823
|
Eghbal
|
75,494,208
| 2,485,063
|
Efficient way to drop subset of columns using a threshold of row numbers
|
<p>I have a 10 million row dataframe like this</p>
<pre><code>>>> df.info(show_counts=True)
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 10000000 non-null datetime64[ns]
1 cust1 6000647 non-null float64
2 cust2 6001585 non-null float64
3 cust3 6000415 non-null float64
4 cust4 9001290 non-null float64
5 cust5 9000402 non-null float64
6 cust6 9000093 non-null float64
7 cust7 8999538 non-null float64
8 cust8 9000211 non-null float64
9 cust9 9000745 non-null float64
10 cust10 9001119 non-null float64
</code></pre>
<p>In the general case, all of the columns contain <code>NA</code> values. In this example columns <code>cust1, cust2, cust3</code> contain around 40% of <code>NA</code> values, 10% for the rest. Column <code>date</code> has no missing values, for the sake of testing - the general problem assumes every column can have any number of <code>NA</code> values.
I'm looking for an idiomatic/efficient way to drop those <code>custXX</code> columns whose rows contain less than 70% (i.e. 7 million) of <code>non-NA</code> values.</p>
<p>I'm treating <code>DataFrame.dropna(axis=1, thresh=thresh)</code> as a baseline result, just to see how much time it would take Pandas to clear the whole dataframe.</p>
<pre><code>%timeit df.dropna(axis=1, thresh=thresh)
701 ms ± 12.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>I can't use the <code>subset</code> parameter, because in this case it would affect rows, not columns.
I've tried the following solutions:</p>
<ol>
<li>Split the dataframe into one containing only the <code>custXX</code> subset of columns, and another containing the <code>date</code> column. Drop NA columns in the first DF, then merge it with the other one using the index:
<blockquote>
<pre><code>def split_merge(df):
date_df = df[['date']]
rest_df = df.drop('date', axis=1)
cleared = rest_df.dropna(thresh=thresh, axis=1)
return date_df.merge(cleared, left_index=True, right_index=True)
</code></pre>
<pre><code>%timeit split_merge(df)
1.65 s ± 49.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
</blockquote>
</li>
<li>Select <code>custXX</code> DF subset, for each column count number of non-NA values, select only those columns where <code>count</code> is at least 70%, then use those columns to select from original dataframe
<blockquote>
<pre><code>def count_select(df):
nan_cols = df.filter(like='cust').columns
non_na_counts = df[nan_cols].notna().sum()
valid_cols = non_na_counts[non_na_counts >= thresh]
all_cols = pd.concat([pd.Series(0, index=['date']), valid_cols]).index
return df[all_cols]
</code></pre>
<pre><code>%timeit count_select(df)
1.73 s ± 79.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
</blockquote>
</li>
<li>Similar to the previous one, but instead of counting, we drop <code>NA</code> values and use all of the resulting columns to select from the original dataframe:
<blockquote>
<pre><code>def select_dropna_select(df):
nan_cols = df.filter(like='cust')
cleared = nan_cols.dropna(axis=1, thresh=thresh).columns
new_cols = ['date', *cleared.values]
return df[new_cols]
</code></pre>
<pre><code>%timeit select_dropna_select(df)
1.54 s ± 14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
</blockquote>
</li>
</ol>
<p>The last one is the fastest, but still more than twice as slow as the baseline solution (clearing the whole dataframe). Is there an idiomatic way to do this that would achieve similar efficiency?</p>
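<p>One more variant worth timing (a sketch on a toy frame, not a benchmarked answer): compute the per-column non-NA fraction once with <code>notna().mean()</code> and index columns with the resulting boolean mask, force-keeping <code>date</code>:</p>

```python
import numpy as np
import pandas as pd

# Toy frame: cust1 is 50% non-NA (dropped at a 70% threshold),
# cust2 is 80% non-NA (kept).
df = pd.DataFrame({
    "date": pd.date_range("2021-01-01", periods=10),
    "cust1": [np.nan] * 5 + [1.0] * 5,
    "cust2": [np.nan] * 2 + [1.0] * 8,
})

# One pass over the notna() mask gives each column's non-NA fraction;
# select columns via .loc with a boolean Series and force-keep 'date'.
keep = df.notna().mean() >= 0.7
keep["date"] = True
out = df.loc[:, keep]

print(list(out.columns))  # ['date', 'cust2']
```

<p>This avoids both the merge and the intermediate column copy, since only the boolean mask is materialized before the final selection.</p>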
|
<python><pandas><dataframe>
|
2023-02-18 15:03:19
| 1
| 732
|
szimon
|
75,494,190
| 12,439,119
|
VSCode IntelliSense thinks a Python 'function()' class exists
|
<p>VSCode / IntelliSense is completing a Python class called <code>function()</code> that does not appear to exist.</p>
<p>For example, this appears to be valid code:</p>
<pre><code>def foo(value):
return function(value)
foo(0)
</code></pre>
<p>But <code>function</code> is not defined in this scope, so running this raises a <code>NameError</code>:</p>
<pre><code>Traceback (most recent call last):
File "/home/hayesall/wip.py", line 4, in <module>
foo(0)
File "/home/hayesall/wip.py", line 2, in foo
return function(value)
NameError: name 'function' is not defined
</code></pre>
<p>I expected IntelliSense to warn me about <code>function</code> being undefined. <code>function()</code> does not appear to have a docstring, and I cannot find anything about it in the wider Python/CPython/VSCode documentation. (<em>Side note</em>: <code>pylint</code> recognizes "Undefined variable 'function'").</p>
<p>What is <code>function()</code>? Or: is there an explanation for why IntelliSense is matching this?</p>
<hr />
<p><strong>Screenshots</strong>:</p>
<p>Writing the word <code>function</code> provides an autocomplete:</p>
<p><a href="https://i.sstatic.net/y4zby.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y4zby.png" alt="Screenshot of vscode, with a def foo like the code listing above. VSCode is autocompleting the name function as a Python class with no documentation." /></a></p>
<hr />
<p><code>function</code> is not defined in this scope, but IntelliSense seems to <em>think</em> that it is:</p>
<p><a href="https://i.sstatic.net/f7vUG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f7vUG.png" alt="The same listing as before, and VSCode has not indicated that function is undefined." /></a></p>
<hr />
<p>Some version info:</p>
<pre><code>Debian
code 1.75.1 (x86)
Pylance v2023.2.30
Python 3.9.15 (CPython, GCC 11.2.0)
</code></pre>
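<p>A possibly relevant observation (my reading, not confirmed from the Pylance side): at runtime every <code>def</code>-style function is an instance of a class whose name is literally <code>function</code> (<code>types.FunctionType</code>), and typeshed's builtins stub declares such a class, which Pylance may be surfacing as a completion even though the name is never bound at runtime:</p>

```python
import types

def foo(value):
    return value

# The runtime class of a plain function is named 'function', even
# though that name is not accessible as a builtin.
print(type(foo).__name__)               # function
print(type(foo) is types.FunctionType)  # True
```
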
|
<python><visual-studio-code><intellisense><pylance>
|
2023-02-18 14:59:35
| 1
| 4,303
|
Alexander L. Hayes
|
75,494,055
| 1,668,622
|
Why is reading a file asynchronously (with aiofile) so much (15x) slower than its synchronous equivalent?
|
<p>I'm experimenting with named pipes and <code>async</code> approaches and was a bit surprised by how slow reading the file I've created seems to be.</p>
<p>And as <a href="https://stackoverflow.com/questions/68957147/aiofiles-take-longer-than-normal-file-operation">this question</a> suggests, this effect is not limited to named pipes as in the example below but applies to 'normal' files as well. Since my final goal is reading those named pipes I prefer to keep the examples below.</p>
<p>So here is what I initially came up with:</p>
<pre class="lang-py prettyprint-override"><code>import sys, os
from asyncio import create_subprocess_exec, gather, run
from asyncio.subprocess import DEVNULL
from aiofile import async_open
async def read_strace(namedpipe):
with open("async.log", "w") as outfp:
async with async_open(namedpipe, "r") as npfp:
async for line in npfp:
outfp.write(line)
async def main(cmd):
try:
myfifo = os.mkfifo('myfifo', 0o600)
process = await create_subprocess_exec(
"strace", "-o", "myfifo", *cmd,
stdout=DEVNULL, stderr=DEVNULL)
await gather(read_strace("myfifo"), process.wait())
finally:
os.unlink("myfifo")
run(main(sys.argv[1:]))
</code></pre>
<p>You can run it like <code>./async_program.py <CMD></code>, e.g. <code>./async_program.py find .</code></p>
<p>This one uses default <code>Popen</code> and reads what <code>strace</code> writes to <code>myfifo</code>:</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen, DEVNULL
import sys, os
def read_strace(namedpipe):
with open("sync.log", "w") as outfp:
with open(namedpipe, "r") as npfp:
for line in npfp:
outfp.write(line)
def main(cmd):
try:
myfifo = os.mkfifo('myfifo', 0o600)
process = Popen(
["strace", "-o", "myfifo", *cmd],
stdout=DEVNULL, stderr=DEVNULL)
        read_strace("myfifo")
finally:
os.unlink("myfifo")
main(sys.argv[1:])
</code></pre>
<p>Running both programs with <code>time</code> reveals that the async program is about 15x slower:</p>
<pre><code>$ time ./async_program.py find .
poetry run ./async_program.py find . 4.06s user 4.75s system 100% cpu 8.727 total
$ time ./sync_program.py find .
poetry run ./sync_program.py find . 0.27s user 0.07s system 76% cpu 0.438 total
</code></pre>
<p>The linked question suggests that <code>aiofile</code> is known to be somewhat slow, but 15x? I'm pretty sure I could still come close to the synchronous approach by using an extra thread and writing to a queue, but admittedly I haven't tried it yet.</p>
<p>Is there a recommended way to read a file asynchronously - maybe even an approach more dedicated to named pipes as I use them in the given example?</p>
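<p>The "extra thread" idea mentioned above can be sketched with <code>asyncio.to_thread</code> (Python 3.9+): keep the fast blocking reader, but run it in a worker thread so the event loop stays free. A temp file stands in for the named pipe here; this is a sketch of the approach, not a benchmark:</p>

```python
import asyncio
import tempfile

def read_blocking(path):
    # The fast synchronous reader from the question, unchanged.
    with open(path) as fp:
        return fp.read()

async def main(path):
    # Blocking I/O runs in a worker thread; other tasks (e.g. the
    # subprocess wait) can proceed on the loop meanwhile.
    return await asyncio.to_thread(read_blocking, path)

# Demonstrate with a temp file in place of the FIFO.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as tmp:
    tmp.write("line1\nline2\n")

print(asyncio.run(main(tmp.name)))
```

<p>In the FIFO case the thread would block on the pipe exactly as the synchronous program does, while <code>gather</code> still works on the asyncio side.</p>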
|
<python><asynchronous><python-asyncio><named-pipes><python-aiofiles>
|
2023-02-18 14:37:46
| 1
| 9,958
|
frans
|
75,493,970
| 19,369,310
|
New column based on last time row value equals some numbers in Pandas dataframe
|
<p>I have a dataframe, sorted by date in descending order, that records the Rank of students in a class and their predicted scores.</p>
<pre><code>Date Student_ID Rank Predicted_Score
4/7/2021 33 2 87
13/6/2021 33 4 88
31/3/2021 33 7 88
28/2/2021 33 2 86
14/2/2021 33 10 86
31/1/2021 33 8 86
23/12/2020 33 1 81
8/11/2020 33 3 80
21/10/2020 33 3 80
23/9/2020 33 4 80
20/5/2020 33 3 80
29/4/2020 33 4 80
15/4/2020 33 2 79
26/2/2020 33 3 79
12/2/2020 33 5 79
29/1/2020 33 1 70
</code></pre>
<p>I want to create a column called <code>Recent_Predicted_Score</code> that records the last <code>Predicted_Score</code> where that student actually ranked in the top 3. So the desired outcome looks like:</p>
<pre><code>Date Student_ID Rank Predicted_Score Recent_Predicted_Score
4/7/2021 33 2 87 86
13/6/2021 33 4 88 86
31/3/2021 33 7 88 86
28/2/2021 33 2 86 81
14/2/2021 33 10 86 81
31/1/2021 33 8 86 81
23/12/2020 33 1 81 80
8/11/2020 33 3 80 80
21/10/2020 33 3 80 80
23/9/2020 33 4 80 80
20/5/2020 33 3 80 79
29/4/2020 33 4 80 79
15/4/2020 33 2 79 79
26/2/2020 33 3 79 70
12/2/2020 33 5 79 70
29/1/2020 33 1 70
</code></pre>
<p>Here's what I have tried, but it doesn't quite work; I'm not sure if I am on the right track:</p>
<pre><code>df.sort_values(by = ['Student_ID', 'Date'], ascending = [True, False], inplace = True)
lp1 = df['Predicted_Score'].where(df['Rank'].isin([1,2,3])).groupby(df['Student_ID']).bfill()
lp2 = df.groupby(['Student_ID', 'Rank'])['Predicted_Score'].shift(-1)
df = df.assign(Recent_Predicted_Score=lp1.mask(df['Rank'].isin([1,2,3]), lp2))
</code></pre>
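<p>For reference, here is a sketch that appears to produce the desired column on a reduced version of the data (assuming the frame is already sorted by date descending within each student, as above): mask out non-top-3 scores, then per student backfill and shift up, so each row sees the first top-3 score strictly below it.</p>

```python
import numpy as np
import pandas as pd

# Reduced version of the frame, already sorted by date descending.
df = pd.DataFrame({
    "Student_ID":      [33] * 6,
    "Rank":            [2,  4,  2,  1,  5,  1],
    "Predicted_Score": [87, 88, 86, 81, 79, 70],
})

# Keep only top-3 scores; per student, bfill() places "first top-3
# score at or below this row" on every row, and shift(-1) turns that
# into "strictly below this row" (i.e. the most recent earlier date).
s = df["Predicted_Score"].where(df["Rank"].le(3))
df["Recent_Predicted_Score"] = (
    s.groupby(df["Student_ID"]).transform(lambda g: g.bfill().shift(-1))
)

print(df["Recent_Predicted_Score"].tolist())
```

<p>On this reduced frame the result is <code>[86, 86, 81, 70, 70, NaN]</code>, matching the pattern in the desired output above.</p>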
<p>Thanks in advance.</p>
|
<python><python-3.x><pandas><dataframe><group-by>
|
2023-02-18 14:23:09
| 3
| 449
|
Apook
|
75,493,853
| 5,070,460
|
Tensorflow: Shape of 3D tensor (of an image) in Filter has None
|
<p>I am following <a href="https://www.tensorflow.org/guide/data#preprocessing_data" rel="nofollow noreferrer">this tutorial</a> on <a href="https://www.tensorflow.org/" rel="nofollow noreferrer">tensorflow.org</a>.</p>
<p>I have a folder <em>images</em> with two subfolders, <em>cat</em> and <em>dog</em>, in it. Following the above tutorial, I am trying to convert .jpg and .png images to features (NumPy arrays) for modeling.</p>
<h3>Problem</h3>
<p>After processing the images into tensors, I found that some images were converted to tensors with shape <code>(28, 28, 4)</code>, so I added a condition to filter out such tensors. This logic works when explicitly looping over each tensor with a <code>for</code> loop, after converting it to a numpy array, but the same logic does not work when used with <code>filter</code>.</p>
<p>Please help me fix this <code>filter()</code>. I went through the <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#filter" rel="nofollow noreferrer"><code>filter()</code></a> documentation and could not find a solution.</p>
<h3>Source code</h3>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import os
print("TensorFlow version:", tf.__version__)
def process_image(file_path_tensor):
parts = tf.strings.split(file_path_tensor, os.sep)
label = parts[-2]
image = tf.io.read_file(file_path_tensor)
image = tf.image.decode_jpeg(image)
image = tf.image.resize(image, [128, 128])
image = tf.image.convert_image_dtype(image, tf.float32)
image = image / 255
return image, label
def check_shape(x, y):
print("\nShape received in filter():", x.shape)
d1, d2, d3 = x.shape
return d3 == 3
images_ds = tf.data.Dataset.list_files("./images/*/*", shuffle=True)
file_path = next(iter(images_ds))
image, label = process_image(file_path)
print("Shape:", image.shape)
print("Class label:", label.numpy().decode())
# ETL pipeline.
X_y_tensors = (
images_ds
    .map(process_image) # Extract and Transform
.filter(check_shape) # Filter
.as_numpy_iterator() # Load
)
print("\nTechnique 1:")
print("Final X count:", len(list(X_y_tensors)))
X_y_tensors = images_ds.map(process_image)
count = 0
for x, y in X_y_tensors:
d1, d2, d3 = x.shape
if d3 > 3:
continue
count += 1
print("\nTechnique 2:")
print("Final X count:", count)
</code></pre>
<h3>Output</h3>
<pre><code>TensorFlow version: 2.6.0
Shape: (128, 128, 3)
Class label: cat
Shape received in filter(): (128, 128, None)
Technique 1:
Final X count: 0
Technique 2:
Final X count: 123
</code></pre>
<p>As it can be seen,</p>
<ol>
<li>Count is 0 when <em>Technique 1</em> is used to filter tensors, since the shape of the tensor received is <code>(128, 128, None)</code>.</li>
<li>Count is 123 (image count after filtering) when <em>Technique 2</em> is used.</li>
</ol>
<p>I do not think <a href="https://stackoverflow.com/questions/58331837/filter-data-in-tensorflow">this</a> is an issue since I am <strong>not using batches</strong>.</p>
<p><a href="https://github.com/DheemanthBhat/tensorflow-image-pipeline" rel="nofollow noreferrer">Full code with dataset</a></p>
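<p>The likely cause (my reading of the output above): inside a <code>tf.data</code> graph the <em>static</em> shape of a decoded image is partially unknown, so <code>x.shape</code> yields <code>(128, 128, None)</code> and the unpacking comparison fails. A sketch of a fix is to compare the <em>dynamic</em> shape with <code>tf.shape</code> instead; the tiny generator dataset below just mimics the unknown last dimension:</p>

```python
import tensorflow as tf

def check_shape(x, y):
    # Static shapes are unknown while tracing, so query the dynamic
    # shape and return a scalar boolean tensor.
    return tf.equal(tf.shape(x)[-1], 3)

# Mimic images whose static channel dimension is None: one 3-channel
# and one 4-channel element.
def gen():
    yield tf.zeros([4, 4, 3]), "cat"
    yield tf.zeros([4, 4, 4]), "dog"

ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(4, 4, None), dtype=tf.float32),
        tf.TensorSpec(shape=(), dtype=tf.string),
    ),
)
print(sum(1 for _ in ds.filter(check_shape)))  # 1
```

<p>The same predicate should slot into the pipeline in place of the shape-unpacking version of <code>check_shape</code>.</p>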
|
<python><numpy><machine-learning><deep-learning><tensorflow2.0>
|
2023-02-18 14:03:38
| 1
| 4,502
|
Dheemanth Bhat
|
75,493,772
| 8,127,672
|
Python Error : Importing library : ModuleNotFoundError: No module named 'InitProject'
|
<p>I am setting up a Python project using Robot Framework, and it is throwing a "No module named" error even though I have added a Library entry in init.resource.</p>
<p>Also, I created an empty <code>__init__.py</code> in the folder in which the file exists so that the Python file can be located.</p>
<p>My project structure is as below:</p>
<p><a href="https://i.sstatic.net/WMABM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WMABM.png" alt="enter image description here" /></a></p>
<p>My Code is as below:</p>
<p>init.robot</p>
<pre><code>*** Settings ***
Library MyLibrary.py
Test Setup Setup
Test Teardown Do Teardown
</code></pre>
<p>HelloVariable.robot</p>
<pre><code>*** Settings ***
Resource init.resource
*** Test Cases ***
My First Robot Test
Say Hello From Library
Log To Console ${Data}
</code></pre>
<p>init.resource</p>
<pre><code>*** Settings ***
Library test.py
Library MyLibrary.py
Library InitProject/ProtoFolder/ProtoService.py
</code></pre>
<p>MyLibrary.py</p>
<pre><code># mylibrary.py
from robot.api.deco import keyword
from robot.libraries.BuiltIn import BuiltIn
from robot.api import logger
import json
from InitProject.ProtoFolder.ProtoService import *
class MyLibrary:
def __init__(self):
self.data = None
@keyword("Setup")
def setup(self):
logger.console("Setting up test environment...")
self.data = {"key1": "value1", "key2": "value2"}
BuiltIn().set_test_variable("${Data}", self.data)
with open('/InitProject/ProtoFolder/RobotFramework/test.json') as f:
data = json.load(f)
logger.console(data)
@keyword("Do Teardown")
def teardown_test_environment(self):
logger.console("Tearing down test environment...")
self.data = None
ProtoService.proto_methods()
</code></pre>
<p>ProtoService.py</p>
<pre><code>from robot.api import logger
from robot.api.deco import keyword
from robot.libraries.BuiltIn import BuiltIn
class ProtoService:
def proto_methods(self):
logger.console("Proto Method Called")
</code></pre>
<p>Actual Error:</p>
<pre><code>er/RobotFramework/MyLibrary.py' failed: ModuleNotFoundError: No module named 'InitProject'
Traceback (most recent call last):
File "/Users/user/Python/pythonProject4/InitProject/ProtoFolder/RobotFramework/MyLibrary.py", line 7, in <module>
from InitProject.ProtoFolder.ProtoService import *
PYTHONPATH:
</code></pre>
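<p>The empty <code>PYTHONPATH:</code> in the traceback suggests the project root is simply not importable when Robot Framework loads <code>MyLibrary.py</code>. One fix is to set <code>PYTHONPATH</code> when invoking robot; another (sketched below, with the helper name <code>add_project_root</code> and the temp-dir demo being my own) is to prepend the project root, three levels above <code>MyLibrary.py</code>, to <code>sys.path</code> before the absolute import runs:</p>

```python
import os
import sys
import tempfile

def add_project_root(library_file):
    # MyLibrary.py lives in InitProject/ProtoFolder/RobotFramework/,
    # so the project root is three directories up from it.
    root = os.path.abspath(
        os.path.join(os.path.dirname(library_file), "..", "..", "..")
    )
    if root not in sys.path:
        sys.path.insert(0, root)
    return root

# Demonstration on a throwaway copy of the layout from the question.
base = tempfile.mkdtemp()
pkg = os.path.join(base, "InitProject", "ProtoFolder", "RobotFramework")
os.makedirs(pkg)
for d in (os.path.join(base, "InitProject"),
          os.path.join(base, "InitProject", "ProtoFolder"),
          pkg):
    open(os.path.join(d, "__init__.py"), "w").close()
with open(os.path.join(base, "InitProject", "ProtoFolder", "ProtoService.py"), "w") as f:
    f.write("class ProtoService:\n    pass\n")

add_project_root(os.path.join(pkg, "MyLibrary.py"))
from InitProject.ProtoFolder.ProtoService import ProtoService
print(ProtoService.__name__)  # ProtoService
```

<p>In the real project, the two <code>add_project_root</code> lines would sit at the top of <code>MyLibrary.py</code>, above <code>from InitProject.ProtoFolder.ProtoService import *</code>.</p>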
|
<python><python-3.x><robotframework>
|
2023-02-18 13:48:55
| 1
| 534
|
JavaMan
|
75,493,729
| 552,916
|
Create python class with type annotations programatically
|
<p>I want to be able to create a python class like the following programmatically:</p>
<pre class="lang-py prettyprint-override"><code>class Foo(BaseModel):
bar: str = "baz"
</code></pre>
<p>The following almost works:</p>
<pre class="lang-py prettyprint-override"><code>Foo = type("Foo", (BaseModel,), {"bar":"baz"})
</code></pre>
<p>But doesn't include the annotation, <code>Foo.__annotations__</code> is set in first example but not the second.</p>
<p>Is there any way to achieve this?</p>
<p>My motivation is to create a class decorator that creates a clone of the decorated class with modified type annotations. The annotations have to be set during class creation (not after the fact) to that the metaclass of BaseModel will see them.</p>
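<p>Passing <code>__annotations__</code> explicitly in the namespace dict makes the annotations part of class creation, so a metaclass sees them exactly as it would for a class body. Shown here with a plain base class to stay dependency-free; the same namespace should work with pydantic's <code>BaseModel</code>:</p>

```python
# Equivalent to:
#     class Foo(object):
#         bar: str = "baz"
# because class-body annotations are collected into the namespace's
# __annotations__ before the metaclass's __new__ runs.
Foo = type("Foo", (object,), {"bar": "baz", "__annotations__": {"bar": str}})

print(Foo.__annotations__)  # {'bar': <class 'str'>}
print(Foo.bar)              # baz
```
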
|
<python><metaprogramming><python-typing>
|
2023-02-18 13:41:38
| 1
| 3,097
|
Nat
|
75,493,512
| 5,618,339
|
Updated Macbook to Ventura - now getting 'command not found: Flutter'
|
<p>Oddly enough, I'm still able to run my Flutter applications, but I'd like to upgrade. However, since I updated to macOS Ventura, Flutter can't be found anymore...</p>
<p>output of <code>echo $PATH</code>:</p>
<pre><code>/Users/me/bin:/usr/local/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/flutter/bin:/Library/Apple/usr/bin:/Library/Frameworks/Mono.framework/Versions/Current/Commands
</code></pre>
<p>What the top of my <code>.zshrc</code> file looks like:</p>
<pre><code># If you come from bash you might have to change your $PATH.
export PATH=$HOME/bin:/usr/local/bin:$PATH
# Path to your oh-my-zsh installation.
export ZSH="$HOME/.oh-my-zsh"
</code></pre>
<p>Any help and info is greatly appreciated</p>
<blockquote>
<p>P.S. Dart and Python also cannot be found, but they're not in my path, so I guess that makes sense.</p>
</blockquote>
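<p>One detail worth checking: the <code>/flutter/bin</code> entry in the <code>$PATH</code> above points at the filesystem root, which is probably not where the SDK lives. A typical fix is an absolute export in <code>~/.zshrc</code> (the SDK location below is an assumption; substitute the real one):</p>

```shell
# In ~/.zshrc -- replace $HOME/development/flutter with wherever the
# Flutter SDK actually lives; the bare '/flutter/bin' entry resolves
# against the filesystem root, not your home directory.
export PATH="$HOME/development/flutter/bin:$PATH"
```

<p>Then run <code>source ~/.zshrc</code> (or open a new terminal) and retry <code>flutter --version</code>.</p>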
|
<python><flutter><macos><dart><macos-ventura>
|
2023-02-18 13:03:06
| 1
| 972
|
King
|
75,493,434
| 11,665,178
|
How can I delete specific sub-directories in Google Cloud Storage with the Python SDK
|
<p>I have the following code:</p>
<pre><code>storage_client = storage.Client()
bucket = storage.Bucket(storage_client, name="mybucket")
blobs = storage_client.list_blobs(bucket_or_name=bucket, prefix="content/")
print('Blobs:')
for blob in blobs:
print(blob.name)
</code></pre>
<p>My storage hierarchy is :</p>
<ul>
<li>"content/{uid}/subContentDirectory1/anyFiles*"</li>
<li>"content/{uid}/subContentDirectory2/anyFiles*"</li>
</ul>
<p>My goal is to be able to delete <code>content/{uid}/subContentDirectory1</code> from the Python code.</p>
<p>With the above code, I am retrieving all the files in all sub-directories of <code>content</code>, but I don't know how to use a wildcard to delete only a specific subdirectory.</p>
<p>I am aware I can do it with <code>gsutil</code>, but it will be running in an AWS Lambda function, so I would rather use the Python SDK.</p>
<p>How can I do that?</p>
<p>EDIT: to make it clearer, the <code>{uid}</code> is the wildcard; it can possibly match millions of results, and under each of these results I want to delete <code>subContentDirectory1</code>.</p>
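<p>Since GCS has no real directories, "deleting a sub-directory" means deleting every object whose name falls under it, and the SDK has no server-side wildcard for the middle path segment. A sketch of the usual approach (the helper names <code>matches_target</code> / <code>delete_sub_dirs</code> are my own; only <code>list_blobs</code> and <code>Blob.delete</code> come from the SDK) is to keep the <code>content/</code> prefix and filter on the second path segment client-side:</p>

```python
def matches_target(blob_name, target="subContentDirectory1"):
    # content/{uid}/subContentDirectory1/anyFile -> parts[2] is the
    # sub-directory segment to match across every uid.
    parts = blob_name.split("/")
    return len(parts) > 2 and parts[2] == target

def delete_sub_dirs(bucket_name, target="subContentDirectory1"):
    # Import kept local so the pure filtering logic above is testable
    # without the google-cloud-storage dependency.
    from google.cloud import storage
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    for blob in client.list_blobs(bucket, prefix="content/"):
        if matches_target(blob.name, target):
            blob.delete()
```

<p>For millions of objects, issuing the deletes through <code>client.batch()</code> (or listing per-uid prefixes and parallelizing) should cut down on round trips, though the listing itself still walks everything under <code>content/</code>.</p>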
|
<python><google-cloud-storage>
|
2023-02-18 12:45:05
| 0
| 2,975
|
Tom3652
|