| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,073,442
| 1,367,655
|
Custom return type for class function based on how it was initialized
|
<p>Is it possible, or does it make sense, to <code>@overload</code> the return type <code>Hashable</code> of a <code>CLASS.function() -> Hashable:</code> based on how the relevant <code>self.obj</code> was initialized?</p>
<pre><code>from typing import Hashable, overload

class CLASS:
    obj = None
    obj_type = Hashable

    def __init__(self, INPUT: Hashable) -> None:
        self.obj = INPUT
        # Specifying the type
        self.obj_type = type(self.obj)

    # This does not work?
    if self.obj_type == int:
        @overload
        def returner() -> int: ...
    def returner() -> Hashable:
        return self.obj
</code></pre>
<p>If I pass a value of a certain type to the constructor, then the type of the value returned is unambiguous.</p>
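<p>For what it's worth, a runtime <code>@overload</code> inside <code>__init__</code> cannot influence a static checker; the usual way to make "return type depends on what the constructor received" checkable is a generic class. A minimal sketch (the class name <code>Wrapper</code> is made up for illustration):</p>

```python
from typing import Generic, Hashable, TypeVar

T = TypeVar("T", bound=Hashable)

class Wrapper(Generic[T]):
    """The type parameter T records what the constructor received."""

    def __init__(self, obj: T) -> None:
        self.obj = obj

    def returner(self) -> T:
        # A checker such as Pyright infers Wrapper(5).returner() as int
        # and Wrapper("x").returner() as str.
        return self.obj

print(Wrapper(5).returner())    # 5
print(Wrapper("x").returner())  # x
```

With this approach the unambiguous return type comes from inference at the construction site rather than from a runtime branch.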
|
<python><python-typing><pyright>
|
2024-02-28 09:50:08
| 0
| 980
|
Radio Controlled
|
78,073,334
| 2,307,441
|
Create MySQL Stored Procedure via Python using MySQL Connector
|
<p>I am building a small application where I need to create a MySQL stored procedure via Python using MySQL Connector.</p>
<p>I have tried the following code, but I am getting the following error:</p>
<blockquote>
<p>"Commands out of sync; you can't run this command now."</p>
</blockquote>
<p>I am using MySQL version <code>5.7.40-43-log</code>.</p>
<p>Below is the Python code snippet I have tried.</p>
<pre class="lang-py prettyprint-override"><code>import mysql.connector

mydb = mysql.connector.connect(host=HOST, user=USERNAME, passwd=PASSWORD,
                               database=DBNAME, port=PORT, allow_local_infile=True)
cursor = mydb.cursor()

sqlquery = """DROP PROCEDURE IF EXISTS usp_myprocedure;
DELIMITER //
CREATE PROCEDURE usp_myprocedure()
BEGIN
    TRUNCATE TABLE mytable;
    INSERT INTO mytable
    SELECT col1
          ,col2
          ,col3
          ,col4
          ,CASE WHEN New_col='%_%' THEN logic1
                ELSE logic2
           END AS mylogic
    FROM oldtable;
    SELECT * FROM mytable;
END //
DELIMITER ;
"""
cursor.execute(sqlquery)
mydb.commit()
</code></pre>
<p>The above MySQL code works fine when run via MySQL Workbench, but when executing it via the Python MySQL connector I get the error <code>Commands out of sync; you can't run this command now.</code></p>
<p>Any help would be appreciated.</p>
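<p>For what it's worth, <code>DELIMITER</code> is a command of interactive MySQL clients (the mysql CLI, Workbench); the server itself never sees it, which is one common reason a multi-statement string like this fails through a connector. A hedged sketch of the idea, with the procedure body abbreviated and the cursor/connection names assumed from the question:</p>

```python
# DELIMITER only tells an interactive client where a statement ends; it is
# not part of the SQL grammar the server understands. So send each
# statement separately and omit DELIMITER entirely (a sketch; cursor and
# mydb are assumed to exist as in the question).
drop_stmt = "DROP PROCEDURE IF EXISTS usp_myprocedure"
create_stmt = """CREATE PROCEDURE usp_myprocedure()
BEGIN
    TRUNCATE TABLE mytable;
    INSERT INTO mytable SELECT col1, col2, col3, col4 FROM oldtable;
    SELECT * FROM mytable;
END"""

# With a live connection:
# cursor.execute(drop_stmt)
# cursor.execute(create_stmt)
# mydb.commit()
print("DELIMITER" in (drop_stmt + create_stmt))  # False
```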
|
<python><mysql><mysql-connector><mysql-5.7>
|
2024-02-28 09:31:02
| 2
| 1,075
|
Roshan
|
78,073,286
| 395,857
|
How can I have the value of a Gradio block be passed to a function's named parameter?
|
<p>Example:</p>
<pre><code>import gradio as gr

def dummy(a, b='Hello', c='!'):
    return '{0} {1} {2}'.format(b, a, c)

with gr.Blocks() as demo:
    txt = gr.Textbox(value="test", label="Query", lines=1)
    txt2 = gr.Textbox(value="test2", label="Query2", lines=1)
    answer = gr.Textbox(value="", label="Answer")
    btn = gr.Button(value="Submit")
    btn.click(dummy, inputs=[txt], outputs=[answer])
    gr.ClearButton([answer])

demo.launch()
</code></pre>
<p><a href="https://i.sstatic.net/pMjIz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pMjIz.png" alt="enter image description here" /></a></p>
<p>How can I have the value of <code>txt2</code> be given to the <code>dummy()</code>'s named parameter <code>c</code>?</p>
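<p>Gradio passes <code>inputs</code> to the callback positionally, so one common workaround (a sketch; other options exist) is a small wrapper that maps the second component onto <code>c</code>:</p>

```python
def dummy(a, b='Hello', c='!'):
    return '{0} {1} {2}'.format(b, a, c)

# The wrapper receives both textbox values positionally and forwards the
# second one as the named parameter c.
def dummy_wrapper(a, c):
    return dummy(a, c=c)

# Hypothetical wiring, reusing the question's component names:
# btn.click(dummy_wrapper, inputs=[txt, txt2], outputs=[answer])
print(dummy_wrapper("test", "test2"))  # Hello test test2
```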
|
<python><named-parameters><gradio>
|
2024-02-28 09:23:28
| 1
| 84,585
|
Franck Dernoncourt
|
78,073,185
| 1,734,545
|
TypeError: 'int' object is not iterable and PCA AssertionError in Python clustering function
|
<p>I'm working on a Python function (<code>cluster_articles</code>) to perform document clustering and return a dictionary of results. However, I'm encountering the following test errors:</p>
<ul>
<li><code>TypeError: 'int' object is not iterable</code> (in <code>test_number_of_observations_kmeans10</code> and possibly <code>test_proper_dict_return</code>)</li>
<li><code>AssertionError</code> at the PCA explained value (in <code>test_pca_explained</code>)</li>
</ul>
<pre><code>import pickle
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import completeness_score, v_measure_score

def cluster_articles(data):
    # K-Means on original data
    kmeans_100 = KMeans(n_clusters=10, random_state=2, tol=0.05, max_iter=50)
    kmeans_100.fit(data['vectors'])
    labels_100 = kmeans_100.labels_

    # PCA dimensionality reduction
    pca = PCA(n_components=10, random_state=2)
    reduced_data = pca.fit_transform(data['vectors'])

    # K-Means on reduced data
    kmeans_10 = KMeans(n_clusters=10, random_state=2, tol=0.05, max_iter=50)
    kmeans_10.fit(reduced_data)
    labels_10 = kmeans_10.labels_

    print(type(kmeans_10.n_iter_))  # Debugging output

    # Results dictionary (potential issue here)
    result = {
        'nobs_100': kmeans_100.n_iter_,
        'nobs_10': kmeans_10.n_iter_,
        'pca_explained': pca.explained_variance_ratio_[0],
        # ... rest of the results
    }
    return result
</code></pre>
<p><strong>Task and Data Description:</strong></p>
<p><strong>Goal:</strong> Cluster documents using K-Means (with and without PCA). Calculate metrics like completeness score, V-measure, and PCA explained variance.</p>
<p><strong>Data Structure (data dictionary):</strong></p>
<ul>
<li>.id: Document IDs</li>
<li>.vectors: Doc2Vec vectors (size 100)</li>
<li>.groups: True group labels (0 to 9)</li>
</ul>
<p><strong>Relevant Packages:</strong></p>
<ul>
<li><p>scikit-learn (0.24.1)</p>
</li>
<li><p>NumPy (1.20.1)</p>
</li>
<li><p>SciPy (1.6.1)</p>
</li>
<li><p>pandas (1.2.3)</p>
</li>
</ul>
<p><strong>Questions:</strong></p>
<ul>
<li>How can I resolve the TypeError: 'int' object is not iterable error?
I suspect the issue is in how I'm constructing the results
dictionary, but I'm not sure how to fix it.</li>
<li>Why is my PCA explained variance failing the assertion? Could this
be due to randomness or different data in the tests?</li>
</ul>
<p><strong>What I've Tried:</strong>
Printing the type of <code>kmeans_10.n_iter_</code> confirms it's an integer.</p>
<p><strong>Additional Notes:</strong></p>
<ul>
<li><p>I don't have access to the test code.</p>
</li>
<li><p>There might be a file "subset_documents.p" which could be relevant.</p>
</li>
</ul>
<p><strong>Thank you for your help!</strong></p>
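<p>One hedged guess, since the test code isn't available: <code>n_iter_</code> is the number of solver iterations, a plain <code>int</code>, so any test that iterates over <code>nobs_10</code> will raise exactly this <code>TypeError</code>. If "number of observations" means per-cluster counts, they can be derived from <code>labels_</code> with the standard library (made-up labels below):</p>

```python
from collections import Counter

# Stand-in for kmeans_10.labels_; the real array comes from the fitted model.
labels = [0, 0, 1, 2, 2, 2]

# Observations per cluster, in cluster order: an iterable, unlike n_iter_.
counts = Counter(labels)
nobs = [counts[k] for k in sorted(counts)]
print(nobs)  # [2, 1, 3]
```

Whether the PCA assertion is about randomness can't be settled without the test, but fixing `random_state` as the code already does makes the explained-variance ratio deterministic for a given input.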
|
<python><cluster-analysis><pickle><pca><doc2vec>
|
2024-02-28 09:07:31
| 0
| 473
|
Sushant Patekar
|
78,073,101
| 23,461,455
|
Delete All Occurrences of a Substring in SQL Statements in Python
|
<p>I have a dump file from a MariaDB database containing 3 GB of SQL statements. The problem is that my SQLite DB doesn't support the contained KEY statements.</p>
<p>Is there a way to edit the strings containing the statements so that all substrings or statements containing the word "key" are cut out?</p>
<p>I tried the following regex pattern:</p>
<pre><code>\n(.*Key.*)
</code></pre>
<p>to filter out the key statements. Any other more efficient way of doing this in Python or with other tools?</p>
<h2>Input:</h2>
<pre><code>CREATE TABLE Persons (
PersonID int,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255),
Primary Key(`PersonID`),
Foreign Key(`City`)
);
</code></pre>
<h2>Desired Output:</h2>
<pre><code>CREATE TABLE Persons (
PersonID int,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255)
);
</code></pre>
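<p>The pattern <code>\n(.*Key.*)</code> is case-sensitive and also leaves a dangling comma on the last surviving column. A stdlib sketch handling both (not a full SQL parser; it assumes each KEY clause sits on its own line):</p>

```python
import re

sql = """CREATE TABLE Persons (
    PersonID int,
    LastName varchar(255),
    City varchar(255),
    Primary Key(`PersonID`),
    Foreign Key(`City`)
);"""

# Drop every line containing the word "key" (any case), then remove the
# comma now dangling before the closing parenthesis.
cleaned = re.sub(r"(?im)^.*\bkey\b.*\n?", "", sql)
cleaned = re.sub(r",(\s*\))", r"\1", cleaned)
print(cleaned)
```

For a 3 GB dump, the same idea can be applied while streaming the file line by line rather than loading it whole.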
|
<python><sql><regex><mariadb><key>
|
2024-02-28 08:53:27
| 2
| 1,284
|
Bending Rodriguez
|
78,072,628
| 6,997,665
|
Pass gradients through min function in PyTorch
|
<p>I have a variable, say, <code>a</code> which has some gradient associated with it from some operations before. Then I have integers <code>b</code> and <code>c</code> which have no gradients. I want to compute the minimum of <code>a</code>, <code>b</code>, and <code>c</code>. MWE is as given below.</p>
<pre><code>import torch
a = torch.tensor([4.], requires_grad=True) # As an example I have defined a leaf node here, in my program I have an actual variable with gradient
b = 5
c = 6
d = torch.min(torch.tensor([a, b, c])) # d does not have gradient associated
</code></pre>
<p>How can I write this differently so that the gradient from <code>a</code> flows through to <code>d</code>? Thanks.</p>
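<p>One workaround that is often suggested (a sketch, not the only option): <code>torch.tensor([a, b, c])</code> copies the values into a brand-new leaf tensor, which severs the autograd graph, whereas concatenating keeps <code>a</code> in the graph:</p>

```python
import torch

a = torch.tensor([4.], requires_grad=True)
b = 5
c = 6

# torch.cat keeps `a` as a node in the graph; the integer constants are
# wrapped in a plain tensor that carries no gradient of its own.
d = torch.min(torch.cat([a, torch.tensor([float(b), float(c)])]))
d.backward()
print(a.grad)  # tensor([1.]) since a holds the minimum here
```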
|
<python><python-3.x><machine-learning><deep-learning><pytorch>
|
2024-02-28 07:29:56
| 2
| 3,502
|
learner
|
78,072,506
| 263,409
|
Upsert table using PySpark in Azure Databricks
|
<p>I have a table in a SQL Server database</p>
<pre><code>create table person (Name varchar(255), Surname varchar(255))
</code></pre>
<p>And I am trying a simple upsert operation with PySpark:</p>
<pre><code># Read data from the "person" table
person_df = spark.read.jdbc(url=database_url, table="person", properties=properties)

# Retrieve all records from the "person" table
person_df.show()

# Check if the person with name 'Jack' already exists
existing_records = person_df.filter(person_df["Name"] == "Jack")

if existing_records.count() == 0:
    # Person with name 'Jack' doesn't exist, so add them to the DataFrame
    print("Not found")
    new_person = [("Jack", "Brown")]
    new_person_df = spark.createDataFrame(new_person, ["Name", "Surname"])
    updated_person_df = person_df.union(new_person_df)
else:
    print("Found")
    # Person with name 'Jack' already exists, so update their surname to 'Brown'
    updated_person_df = person_df.withColumn("Surname", F.when(person_df["Name"] == "Jack", "White")
                                             .otherwise(person_df["Surname"]))

# Show the updated data
updated_person_df.show()

# Save the updated data back to the "person" table
updated_person_df.write.jdbc(url=database_url,
                             table="person",
                             properties=properties,
                             mode="overwrite")
</code></pre>
<p>This works well if there is no record in the table. But after inserting the record, even though the <code>updated_person_df</code> DataFrame is correct, this operation deletes all the records in the table.</p>
|
<python><sql-server><pyspark><azure-databricks>
|
2024-02-28 07:05:03
| 2
| 639
|
fkucuk
|
78,072,303
| 20,075,659
|
Formatting CSV to JSON in Python (Nested Objects)
|
<p>I was trying to convert CSV to JSON objects, and I need to decode the nested objects. For example:</p>
<pre><code>"row_id",_emitted_at,_data
rid_1,124,"{""property_id"":""xx"",""date"":""20220221"",""active1DayUsers"":3}"
rid_2,124,"{""property_id"":""xx"",""date"":""20220222"",""active1DayUsers"":2}"
</code></pre>
<p>So what I need is:</p>
<pre><code>{
row_id: rid_1,
_emitted_at :124,
property_id : xx,
date: 20220221,
active1DayUsers: 3
}
</code></pre>
<p>Is there a third-party library to flatten the nested objects in the CSV in Python?</p>
<p>My current approach is to read the first and second lines and extract the keys.</p>
<p>Thanks</p>
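<p>The standard library may well be enough here, since the nested column is itself JSON. A sketch that assumes the nested column is always named <code>_data</code>:</p>

```python
import csv
import io
import json

raw = '''"row_id",_emitted_at,_data
rid_1,124,"{""property_id"":""xx"",""date"":""20220221"",""active1DayUsers"":3}"
rid_2,124,"{""property_id"":""xx"",""date"":""20220222"",""active1DayUsers"":2}"'''

# The csv module already decodes the doubled quotes, so each _data cell is
# valid JSON; parse it and merge the result into the outer row.
rows = []
for row in csv.DictReader(io.StringIO(raw)):
    nested = json.loads(row.pop("_data"))
    rows.append({**row, **nested})

print(rows[0])
```

The same loop works on a file object instead of `io.StringIO` for real input.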
|
<python>
|
2024-02-28 06:20:23
| 4
| 396
|
Anon
|
78,072,112
| 1,788,205
|
How to run Python code using Android Kotlin code
|
<p>I am trying to run Python code using Android Kotlin code. This is my Python code:</p>
<pre><code>import json
import pandas as pd
from ultralytics import YOLO

model = YOLO("file:///android_asset/YoloV8_model.pt")

def detectOCR(imagePath):
    try:
        results = model(imagePath)
        df = pd.DataFrame(results[0].boxes.data)
        df.sort_values(by=[0], inplace=True)
        df.reset_index(inplace=True, drop=True)
        output = ""
        kwh = False
        names = results[0].names
        for i in range(len(df)):
            if names[int(df[5][i])] == "kwh":
                kwh = True
                continue
            output += names[int(df[5][i])]
        if kwh:
            output += " kwh"
        return json.dumps({"status": "success", "msg": "successfully processed", "result": output})
    except:
        return "error"
</code></pre>
<p>This is how I am trying to install the Python libraries.</p>
<p>All the libraries installed, but when I build the code I get a C++-related error:</p>
<pre><code>CMake Error at /private/var/folders/f1/bdvrvq0j1tsfylmfwl7pbb680000gn/T/pip-build-env-yi3y8_5d/overlay/lib/python3.8/site-packages/cmake/data/share/cmake-3.28/Modules/CMakeDetermineCCompiler.cmake:49 (message):
  Could not find compiler set in environment variable CC:
  Chaquopy_cannot_compile_native_code.
Call Stack (most recent call first):
  CMakeLists.txt:3 (ENABLE_LANGUAGE)
</code></pre>
<p>Please help me understand what I am doing wrong.</p>
|
<python><android>
|
2024-02-28 05:29:49
| 1
| 331
|
MARSH
|
78,072,111
| 1,530,508
|
Unexpected `FrozenInstanceError` in unit test when using a frozen exception
|
<p>I have a unit test that I want to show failing as part of a bug report. It's important not only that the test fail with an error, but that the error message very clearly evidences the underlying bug. I do <strong>NOT</strong> want the test to pass.</p>
<p>Here is a minimal example:</p>
<pre><code>import unittest
import attrs

@attrs.frozen
class FrozenException(Exception):
    pass

class BugException(Exception):
    pass

class Test(unittest.TestCase):
    def testFromNone(self):
        raise BugException

    def testFromFrozen(self):
        try:
            raise FrozenException
        except FrozenException:
            raise BugException

if __name__ == '__main__':
    unittest.main()
</code></pre>
<p>When I execute this file, I expect (and desire) for both test methods to fail and show <code>BugException</code> as the reason for the failure. <code>testFromNone</code> behaves as expected:</p>
<pre><code>$ python3 -m unittest scratch.Test.testFromNone
E
======================================================================
ERROR: testFromNone (scratch.Test.testFromNone)
----------------------------------------------------------------------
Traceback (most recent call last):
...
scratch.BugException
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
</code></pre>
<p>However, running <code>testFromFrozen</code> causes the unit test code itself to crash<sup>1</sup>, hiding the <code>BugException</code> deep in the stack trace<sup>2</sup>:</p>
<pre><code>$ python3 -m unittest scratch.Test.testFromFrozen
Traceback (most recent call last):
...
<very long traceback>
...
attr.exceptions.FrozenInstanceError
</code></pre>
<p>It appears that the <code>unittest</code> framework does some sort of post-processing on exceptions that aren't caught by the test, and that in the course of this processing, it tries to mutate the <code>FrozenException</code> instance stored in the <code>BugException</code>'s traceback, triggering the <code>FrozenInstanceError</code>.</p>
<p>How can I work around this so that the test shows the correct error message? I'd prefer to avoid removing the <code>@attrs.frozen</code> decorator if at all possible<sup>3</sup>.</p>
<p>Python version 3.11.8, <code>attrs</code> version 22.2.0.</p>
<hr />
<p><em><sup>1</sup>This behavior is only observable if the exception is not caught within the body of the test method. If my example is changed to use <code>assertRaises</code> or an explicit <code>try/except</code>, the <code>FrozenInstanceError</code> is not raised.</em></p>
<hr />
<p><em><sup>2</sup>In the real world, beyond the scope of this example, the symptom is even worse because the unexpected error causes the entire test suite to hang, causing no further tests to be executed and the process to just sit there until the test job times out.</em></p>
<hr />
<p><em><sup>3</sup>In the real code, the frozen exception class actually has attributes.</em></p>
|
<python><exception><immutability><python-unittest><python-attrs>
|
2024-02-28 05:29:21
| 1
| 14,373
|
ApproachingDarknessFish
|
78,072,040
| 1,734,545
|
Efficient Algorithm to Make Array Strictly Increasing with Segment Modifications
|
<p>I'm working on a problem where I need to calculate the minimum number of moves required to transform an array into a strictly increasing one. Moves involve selecting a segment of the array and increasing all of its elements by a positive integer.</p>
<p>Problem Description:</p>
<p>Given an array A of N positive integers, I want to find the minimum moves needed to make it strictly increasing (each element, except the last, is less than the next).</p>
<p>Constraints:</p>
<p>N is in the range [1..100,000]
Each element in A is within the range [1..1,000,000,000)
The array elements can exceed the upper bound of the original range after modifications.
Examples:</p>
<p>Input: A = [4, 2, 4, 1, 3, 5]
Output: 2
Explanation: One solution: add 3 to the segment [2, 4] and 8 to the segment [1, 3, 5].</p>
<p>Input: A = [3, 5, 7, 7]
Output: 1</p>
<p>Input: A = [1, 5, 6, 10]
Output: 0 (already strictly increasing)</p>
<pre><code>def solution(A):
    moves = 0
    max_so_far = A[0]
    for i in range(1, len(A)):
        current_val = A[i]
        if current_val <= max_so_far:
            diff = max_so_far + 1 - current_val
            moves += diff
            max_so_far = current_val + diff
        else:
            max_so_far = max(max_so_far, current_val)
    return moves

if __name__ == "__main__":
    import sys
    A = [int(x) for x in sys.argv[1:]]
    result = solution(A)
    print(result)
</code></pre>
<p><strong>With this code, the inputs A = [3, 5, 7, 7] (output 1) and A = [1, 5, 6, 10] (output 0) work fine,</strong></p>
<p><strong>but for A = [4, 2, 4, 1, 3, 5] it returns 20, which is incorrect.</strong></p>
<p><strong>What changes do I need to make so that it returns 2 instead of 20 for A = [4, 2, 4, 1, 3, 5]?</strong></p>
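<p>The posted loop adds up the total amount of increment (<code>moves += diff</code>), but one move may raise a whole segment at once. A reformulation consistent with all three examples, with the reasoning hedged in comments:</p>

```python
# Lower bound: adding x to A[l..r] can repair at most one "break"
# (a position i with A[i] <= A[i-1]), namely the one at l. Upper bound:
# each break can be repaired by one move on the suffix starting at it,
# which leaves every other adjacent difference unchanged. So the minimum
# number of moves is the count of non-increasing positions.
def solution(A):
    return sum(1 for i in range(1, len(A)) if A[i] <= A[i - 1])

print(solution([4, 2, 4, 1, 3, 5]))  # 2
print(solution([3, 5, 7, 7]))        # 1
print(solution([1, 5, 6, 10]))       # 0
```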
|
<python><arrays>
|
2024-02-28 05:04:13
| 1
| 473
|
Sushant Patekar
|
78,071,815
| 13,677,363
|
Design Add and Search Words Data Structure: Leetcode 211
|
<p>I am currently trying to solve the problem <a href="https://leetcode.com/problems/design-add-and-search-words-data-structure/description/?source=submission-noac" rel="nofollow noreferrer">Add and Search Words Data Structure</a> on leetcode. The question is as follows:</p>
<blockquote>
<p>Design a data structure that supports adding new words and finding if
a string matches any previously added string.</p>
<p>Implement the WordDictionary class:</p>
<p><code>WordDictionary()</code> Initializes the object.</p>
<p><code>void addWord(word)</code> Adds <code>word</code> to the data structure, it can be matched later.</p>
<p><code>bool search(word)</code> Returns <code>true</code> if there is any string in the data structure that matches
<code>word</code> or <code>false</code> otherwise. word may contain dots <code>.</code> where dots can be
matched with any letter.</p>
</blockquote>
<p><strong>My Strategy:</strong></p>
<p>My strategy involves representing a trie with a hashmap instead of a traditional linked-list-based tree structure, aiming for better performance and lower complexity. By using a hashmap, we can quickly access the next node without traversing through unnecessary nodes, making operations faster especially with large datasets.</p>
<p>For example, when inserting words like apple and application into this structure, it's organized as nested hashmaps where each character in a word points to another hashmap representing the next character. The end of a word is marked with a special key-value pair {'end': {}}. This way, we efficiently store and search for words with minimal space and time complexity.</p>
<p><strong>My Code:</strong></p>
<pre><code>class WordDictionary(object):

    def __init__(self):
        self.map = {}

    def addWord(self, word):
        """
        :type word: str
        :rtype: None
        """
        current = self.map
        for i in word:
            if i in current:
                current = current[i]
            else:
                current[i] = {}
                current = current[i]
        current['end'] = {}
        return

    def search(self, word):
        """
        :type word: str
        :rtype: bool
        """
        current = self.map
        for i in word:
            if i in current:
                current = current[i]
            elif i == '.':
                current = {key: value for d in current.values() for key, value in d.items()}
            else:
                return False
        if 'end' in current:
            return True
        return False
</code></pre>
<p>The solution seems to be effective for the majority of cases, but I've hit an error with <strong>test case 16</strong>, where it's not giving the right outcome. The length of test case 16 makes it particularly challenging to pinpoint where the mistake is happening. I'm in need of some guidance to track down and <strong>fix this logical error</strong>. Would you be able to lend a hand in sorting this out?</p>
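<p>A note on where the merge can go wrong, with a hedged alternative: when <code>'.'</code> merges all child dicts, two children that share a key collapse into one (the dict comprehension keeps only the last value), silently dropping branches, and <code>'end'</code> markers from different depths get mixed together. A backtracking search over the same nested-dict trie avoids both issues (a sketch; it is not certain this is exactly what test case 16 exercises):</p>

```python
class WordDictionary:
    def __init__(self):
        self.map = {}

    def addWord(self, word):
        current = self.map
        for ch in word:
            current = current.setdefault(ch, {})
        current['end'] = {}

    def search(self, word):
        # Depth-first search that tries every child on '.', instead of
        # merging children into one dict (which loses colliding branches).
        def dfs(node, i):
            if i == len(word):
                return 'end' in node
            ch = word[i]
            if ch == '.':
                return any(dfs(child, i + 1)
                           for key, child in node.items() if key != 'end')
            return ch in node and dfs(node[ch], i + 1)
        return dfs(self.map, 0)

wd = WordDictionary()
wd.addWord("bad")
wd.addWord("dad")
print(wd.search("b.d"), wd.search(".ad"))  # True True
print(wd.search("b."))                     # False
```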
|
<python><dictionary><hashmap><trie>
|
2024-02-28 03:47:59
| 2
| 949
|
Kanchon Gharami
|
78,071,606
| 6,197,439
|
Why does modifying a Python list of dicts in this loop change the dicts' values after completion?
|
<p>I have a list of OrderedDict, each of which has <code>id</code> and <code>content</code> keys. I want to insert new such OrderedDicts into the list, depending on some condition which I need to find for each original item individually (simulated with <code>special_indices</code> in the example below), and therefore I need to loop.</p>
<p>In my other attempts, the loop ran endlessly, so I decided to append instead of insert, and the endless looping stopped; but then, obviously, the entries were not in order in the list. I hoped to sort the list entries by the <code>id</code> key, but I couldn't, because while within the loop I <em>do</em> get correct id numbers, when I print the whole list after the loop is done, the <code>id</code> keys of the appended items have changed?!</p>
<pre class="lang-python prettyprint-override"><code>from collections import OrderedDict
from pprint import pprint

mylist = []
mylist = [ OrderedDict([('id', 15+i), ("content", chr(i+65))]) for i in range(0,11) ]
print("\noriginal:")
pprint(mylist)

special_indices = (2, 7, 9)
i = 0
newid = mylist[0]["id"] - 1
print(len(mylist))
print("")

while i < len(mylist):
    newid += 1
    sel_entry = mylist[i]
    sel_entry["id"] = newid
    if i in special_indices:
        prepend_entry = OrderedDict([
            ('id', sel_entry['id']+0),
            ('content', 'prepend')
        ])
        sel_entry['id'] = sel_entry['id'] + 1
        newid += 1
        print("prepend entry: {}".format(prepend_entry))
        mylist.append(prepend_entry)
    i += 1

print("\nmodified:")
pprint(mylist)
</code></pre>
<p>This outputs:</p>
<pre class="lang-none prettyprint-override"><code>original:
[OrderedDict([('id', 15), ('content', 'A')]),
OrderedDict([('id', 16), ('content', 'B')]),
OrderedDict([('id', 17), ('content', 'C')]),
OrderedDict([('id', 18), ('content', 'D')]),
OrderedDict([('id', 19), ('content', 'E')]),
OrderedDict([('id', 20), ('content', 'F')]),
OrderedDict([('id', 21), ('content', 'G')]),
OrderedDict([('id', 22), ('content', 'H')]),
OrderedDict([('id', 23), ('content', 'I')]),
OrderedDict([('id', 24), ('content', 'J')]),
OrderedDict([('id', 25), ('content', 'K')])]
11
prepend entry: OrderedDict([('id', 17), ('content', 'prepend')])
prepend entry: OrderedDict([('id', 23), ('content', 'prepend')])
prepend entry: OrderedDict([('id', 26), ('content', 'prepend')])
modified:
[OrderedDict([('id', 15), ('content', 'A')]),
OrderedDict([('id', 16), ('content', 'B')]),
OrderedDict([('id', 18), ('content', 'C')]),
OrderedDict([('id', 19), ('content', 'D')]),
OrderedDict([('id', 20), ('content', 'E')]),
OrderedDict([('id', 21), ('content', 'F')]),
OrderedDict([('id', 22), ('content', 'G')]),
OrderedDict([('id', 24), ('content', 'H')]),
OrderedDict([('id', 25), ('content', 'I')]),
OrderedDict([('id', 27), ('content', 'J')]),
OrderedDict([('id', 28), ('content', 'K')]),
OrderedDict([('id', 29), ('content', 'prepend')]),
OrderedDict([('id', 30), ('content', 'prepend')]),
OrderedDict([('id', 31), ('content', 'prepend')])]
</code></pre>
<p>... and instead, I was expecting to get:</p>
<pre class="lang-none prettyprint-override"><code>[OrderedDict([('id', 15), ('content', 'A')]),
OrderedDict([('id', 16), ('content', 'B')]),
OrderedDict([('id', 18), ('content', 'C')]),
OrderedDict([('id', 19), ('content', 'D')]),
OrderedDict([('id', 20), ('content', 'E')]),
OrderedDict([('id', 21), ('content', 'F')]),
OrderedDict([('id', 22), ('content', 'G')]),
OrderedDict([('id', 24), ('content', 'H')]),
OrderedDict([('id', 25), ('content', 'I')]),
OrderedDict([('id', 27), ('content', 'J')]),
OrderedDict([('id', 28), ('content', 'K')]),
OrderedDict([('id', 17), ('content', 'prepend')]),
OrderedDict([('id', 23), ('content', 'prepend')]),
OrderedDict([('id', 26), ('content', 'prepend')])]
</code></pre>
<p>... which would have been sortable by <code>id</code>.</p>
<p>So, how come the <code>id</code> of the appended OrderedDict items changed after the loop was done - even though they were correct when printed from within the loop?!</p>
<p>It's as if <code>('id', sel_entry['id'])</code> assigns by reference - even though sel_entry['id'] is a primitive int, and not an object; but that is why I added the +0 (in hope that an arithmetic operation would be forced, and a new value returned), and yet still, the change of appended object <code>id</code>s occurs still?</p>
<p>(Btw, I need this for Python 2, but can confirm the same behavior in Python 3)</p>
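<p>Nothing is assigned by reference here; <code>int</code> is immutable. The cause is that the loop bound <code>len(mylist)</code> grows with every append, so the <code>while</code> loop eventually walks into the appended entries themselves and overwrites their <code>id</code> via <code>sel_entry["id"] = newid</code>. Freezing the iteration count to the original length reproduces the expected ids (a sketch of the question's own loop):</p>

```python
from collections import OrderedDict

mylist = [OrderedDict([('id', 15 + i), ('content', chr(i + 65))]) for i in range(11)]
special_indices = (2, 7, 9)
newid = mylist[0]['id'] - 1

original_len = len(mylist)   # freeze BEFORE appending, so appended
for i in range(original_len):  # entries are never revisited
    newid += 1
    sel_entry = mylist[i]
    sel_entry['id'] = newid
    if i in special_indices:
        mylist.append(OrderedDict([('id', sel_entry['id']), ('content', 'prepend')]))
        sel_entry['id'] += 1
        newid += 1

print([d['id'] for d in mylist if d['content'] == 'prepend'])  # [17, 23, 26]
```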
|
<python><python-2.7>
|
2024-02-28 02:28:47
| 1
| 5,938
|
sdbbs
|
78,071,552
| 890,803
|
Writing to s3: expected string or bytes-like object
|
<p>I am getting the following error from the very last line of my script in AWS Lambda when trying to write a CSV file to a bucket:</p>
<pre><code>Response
{
"errorMessage": "expected string or bytes-like object, got 's3.Bucket'",
"errorType": "TypeError",
"requestId": "14b29de4-bf60-4125-b59e-e9fb6e3032ed",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 190, in lambda_handler\n s3_dx.Object(dx_bucket, prefix + 'items.csv').put(Body=csv_buffer.getvalue())\n",
" File \"/var/lang/lib/python3.12/site-packages/boto3/resources/factory.py\", line 580, in do_action\n response = action(self, *args, **kwargs)\n",
" File \"/var/lang/lib/python3.12/site-packages/boto3/resources/action.py\", line 88, in __call__\n response = getattr(parent.meta.client, operation_name)(*args, **params)\n",
" File \"/var/lang/lib/python3.12/site-packages/botocore/client.py\", line 535, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
" File \"/var/lang/lib/python3.12/site-packages/botocore/client.py\", line 928, in _make_api_call\n api_params = self._emit_api_params(\n",
" File \"/var/lang/lib/python3.12/site-packages/botocore/client.py\", line 1043, in _emit_api_params\n self.meta.events.emit(\n",
" File \"/var/lang/lib/python3.12/site-packages/botocore/hooks.py\", line 412, in emit\n return self._emitter.emit(aliased_event_name, **kwargs)\n",
" File \"/var/lang/lib/python3.12/site-packages/botocore/hooks.py\", line 256, in emit\n return self._emit(event_name, kwargs)\n",
" File \"/var/lang/lib/python3.12/site-packages/botocore/hooks.py\", line 239, in _emit\n response = handler(**kwargs)\n",
" File \"/var/lang/lib/python3.12/site-packages/botocore/handlers.py\", line 278, in validate_bucket_name\n if not VALID_BUCKET.search(bucket) and not VALID_S3_ARN.search(bucket):\n"
]
}
Function Logs
START RequestId: 14b29de4-bf60-4125-b59e-e9fb6e3032ed Version: $LATEST
[ERROR] TypeError: expected string or bytes-like object, got 's3.Bucket'
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 190, in lambda_handler
s3_dx.Object(dx_bucket, prefix + 'items.csv').put(Body=csv_buffer.getvalue())
File "/var/lang/lib/python3.12/site-packages/boto3/resources/factory.py", line 580, in do_action
response = action(self, *args, **kwargs)
File "/var/lang/lib/python3.12/site-packages/boto3/resources/action.py", line 88, in __call__
response = getattr(parent.meta.client, operation_name)(*args, **params)
File "/var/lang/lib/python3.12/site-packages/botocore/client.py", line 535, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/lang/lib/python3.12/site-packages/botocore/client.py", line 928, in _make_api_call
api_params = self._emit_api_params(
File "/var/lang/lib/python3.12/site-packages/botocore/client.py", line 1043, in _emit_api_params
self.meta.events.emit(
File "/var/lang/lib/python3.12/site-packages/botocore/hooks.py", line 412, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/var/lang/lib/python3.12/site-packages/botocore/hooks.py", line 256, in emit
return self._emit(event_name, kwargs)
File "/var/lang/lib/python3.12/site-packages/botocore/hooks.py", line 239, in _emit
response = handler(**kwargs)
File "/var/lang/lib/python3.12/site-packages/botocore/handlers.py", line 278, in validate_bucket_name
if not VALID_BUCKET.search(bucket) and not VALID_S3_ARN.search(bucket):END RequestId: 14b29de4-bf60-4125-b59e-e9fb6e3032ed
REPORT RequestId: 14b29de4-bf60-4125-b59e-e9fb6e3032ed Duration: 8542.55 ms Billed Duration: 8543 ms Memory Size: 128 MB Max Memory Used: 128 MB Init Duration: 3194.20 ms
</code></pre>
<p>here is my code:</p>
<pre><code>csv_buffer = StringIO()
new_df.to_csv(csv_buffer)
prefix="0001/dates/"
s3_dx = boto3.resource('s3', aws_access_key_id=day, aws_secret_access_key= ask)
s3_dx.Object('bucket', prefix + 'names.csv').put(Body=csv_buffer.getvalue())
</code></pre>
<p>Any thoughts on what's causing the error?</p>
|
<python><amazon-web-services>
|
2024-02-28 02:07:28
| 1
| 1,725
|
DataGuy
|
78,071,487
| 558,639
|
Python spidev xfer2 clarification
|
<p>I've not found good answer to the following, and the spidev documentation isn't much help:</p>
<p>Assume I have a SPI device where you send two bytes plus a pad byte before reading back a four byte response. Here's what I want to send, and what I expect to receive:</p>
<pre><code>byte 0 1 2 3 4 5 6
MOSI: txdata0 txdata1 padding dummy dummy dummy dummy
MISO: ignored ignored ignored data0 data1 data2 data3
</code></pre>
<p>What is the <code>spidev</code> incantation that results in the MOSI / MISO exchange above?
Is it:</p>
<ul>
<li><code>rxdata = spi.xfer2([txdata0, txdata1], 7)</code> # i.e. expecting 7 bytes back total</li>
<li><code>rxdata = spi.xfer2([txdata0, txdata1, 0], 4)</code> # i.e. expect 4 bytes following txbuf plus pad</li>
<li>something else?</li>
</ul>
|
<python><spidev>
|
2024-02-28 01:42:13
| 1
| 35,607
|
fearless_fool
|
78,071,482
| 3,609,976
|
Lists construction in Python Bytecode
|
<p>This python code:</p>
<pre><code>import dis

def f():
    a=[1,2,3]

dis.dis(f)
</code></pre>
<p>generates this output:</p>
<pre><code> 2 0 RESUME 0
3 2 BUILD_LIST 0
4 LOAD_CONST 1 ((1, 2, 3))
6 LIST_EXTEND 1
8 STORE_FAST 0 (a)
10 LOAD_CONST 0 (None)
12 RETURN_VALUE
</code></pre>
<p>I am confused about the array building process.</p>
<p>My guess is that:</p>
<ul>
<li><code>BUILD_LIST 0</code> sets a marker, or maybe a placeholder for the list's pointer?.</li>
<li><code>LOAD_CONST 1</code> puts all the elements of the #1 constant (that happens to be that tuple), one at the time from the last to the first in the machine stack.</li>
<li><code>LIST_EXTEND 1</code> pops and adds one element from the machine stack at the time to the list, until reaching the marker (I don't know what the <code>1</code> would be for).</li>
<li><code>STORE_FAST</code> now we have the pointer on top, so this instruction finally binds the pointer to the #0 identifier that happens to be <code>a</code>.</li>
</ul>
<p>Is this correct? (edit: no, the guess is incorrect)</p>
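<p>For reference, a small simulation of what these opcodes actually do, based on CPython's documented semantics: <code>LOAD_CONST</code> pushes the whole tuple as a single object, and <code>LIST_EXTEND 1</code> pops that one iterable and extends the list beneath it, equivalent to <code>list.extend</code>, not element-by-element pushes (the oparg <code>1</code> says how deep the target list sits under the iterable):</p>

```python
stack = []
stack.append([])          # BUILD_LIST 0: push a new empty list
stack.append((1, 2, 3))   # LOAD_CONST 1: push the tuple as ONE object
seq = stack.pop()         # LIST_EXTEND 1: pop the iterable...
stack[-1].extend(seq)     # ...and extend the list 1 below the top with it
a = stack.pop()           # STORE_FAST 0: pop the list and bind it to `a`
print(a)  # [1, 2, 3]
```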
|
<python><bytecode><cpython>
|
2024-02-28 01:39:44
| 1
| 815
|
onlycparra
|
78,071,474
| 604,296
|
How can I augment class type definition in Python?
|
<p>I have a 3rd party class with the following code</p>
<pre class="lang-py prettyprint-override"><code>class CBlockHeader(ImmutableSerializable):
"""A block header"""
__slots__ = ['nVersion', 'hashPrevBlock', 'hashMerkleRoot', 'nTime', 'nBits', 'nNonce']
def __init__(self, nVersion=2, hashPrevBlock=b'\x00'*32, hashMerkleRoot=b'\x00'*32, nTime=0, nBits=0, nNonce=0):
</code></pre>
<p>I want to tell the type checker that <code>nVersion</code> is of type <code>int</code> and I was able to achieve that by creating a <code>pyi</code> file and putting the following content in it</p>
<pre class="lang-py prettyprint-override"><code>class CBlockHeader:
nVersion: int
</code></pre>
<p>Problem now is, the type checker is only looking at the class definition in my <code>pyi</code> file. So it doesn't see the other attributes or the <code>__init__</code> function or anything. To fix that, I would have to redefine all of these things in my <code>pyi</code> file when all I wanted was the type of the <code>nVersion</code> attribute.</p>
<p>If I can merge the original type definition with my <code>pyi</code> file that would fix the problem.</p>
<p>Is that possible? Or how is this situation handled in Python?</p>
<p>Edit: TypeScript has an ability similar to what I'm describing <a href="https://www.typescriptlang.org/docs/handbook/declaration-merging.html" rel="nofollow noreferrer">https://www.typescriptlang.org/docs/handbook/declaration-merging.html</a> Given the comments I'm seeing on my question, I'm guessing that Python simply doesn't have this.</p>
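<p>For reference, the kind of stub that seems to be required — the whole class redeclared, since the <code>pyi</code> file fully replaces the runtime definition for the type checker (the attribute types other than <code>nVersion</code> are my guesses):</p>

```python
# Hypothetical contents of the .pyi stub: every member has to be redeclared.
class ImmutableSerializable: ...

class CBlockHeader(ImmutableSerializable):
    nVersion: int
    hashPrevBlock: bytes
    hashMerkleRoot: bytes
    nTime: int
    nBits: int
    nNonce: int
    def __init__(
        self,
        nVersion: int = ...,
        hashPrevBlock: bytes = ...,
        hashMerkleRoot: bytes = ...,
        nTime: int = ...,
        nBits: int = ...,
        nNonce: int = ...,
    ) -> None: ...
```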
|
<python><python-typing>
|
2024-02-28 01:36:16
| 0
| 7,139
|
M.K. Safi
|
78,071,393
| 6,653,602
|
How to query LanceDB vector DB using Langchain api with filters?
|
<p>I am trying to use Langchain with LanceDB as the vector database. Here is how I instantiate the database:</p>
<pre><code>db = lancedb.connect("./data/lancedb")
table = db.create_table("my_docs", data=[
{"vector": embeddings.embed_query(chunks[0].page_content), "text": chunks[0].page_content, "id": "1", "file":"bb"}
], mode="overwrite")
</code></pre>
<p>I then load more documents with different <code>file</code> metadata:</p>
<p><code>vectordb = LanceDB.from_documents(chunks[1:], embeddings, connection=table)</code></p>
<p>Then another batch with also a different <code>file</code> metadata value</p>
<p><code>vectordb = LanceDB.from_documents(chunks_ma, embeddings, connection=table)</code></p>
<p>I can see they were loaded successfully and my vector db has the correct number of documents:</p>
<p><code>print(len(db['my_docs']))</code></p>
<p><code>11</code></p>
<p>Now I want to create a retriever that will be able to pre filter the data based on <code>file</code> value:</p>
<p>I tried this</p>
<pre><code>retriever = vectordb.as_retriever(search_kwargs={"k": 6, 'filter':{'file': 'bb'}})
retrieved_docs = retriever.invoke("My query regarding something")
</code></pre>
<p>But when I check the outputs of the query invocation, it's still giving me documents with the wrong <code>file</code> metadata values:</p>
<p><code>print(retrieved_docs[0].metadata['file'])</code></p>
<p><code>'cc'</code></p>
<p>But it was supposed to query only the documents in the database matching the <code>file</code> value.</p>
<p>Is there something I am doing wrong, or what is the correct approach to filter the values before running retrieval query from LanceDB vector DB using Langchain api?</p>
|
<python><langchain><vector-database>
|
2024-02-28 00:58:03
| 0
| 3,918
|
Alex T
|
78,071,316
| 1,030,287
|
matplotlib plot 3D scatter plot where one axis is time or date
|
<p>I'm having problems plotting 3D data where one of the dimensions is a date. I'm wondering how to solve this, because every solution I come up with looks too complicated for such a common problem.</p>
<p>My solution is to drop the date dimension and replace it with an integer. I'd then like to map this integer back to the dates. Any ideas on the best way to do this?</p>
<p>The code below works, but please adjust N to a smaller number if you want.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
N = 1000
bm = pd.DataFrame(
index=pd.bdate_range(start='2012-01-01', periods=N, freq='B'),
data={x: np.random.randn(N) for x in range(1, 11)}
)
# Simulate some zeros
bm = pd.DataFrame(index=bm.index, columns=bm.columns, data=np.where(np.abs(bm.values) < 0.02, 0, bm.values))
# Set zeros to Nan so that I don't plot them
bm = bm.replace({0: np.nan})
# unstack dataframe
flat_bm = bm.reset_index(drop=True).unstack() # DROP DATES AND REPLACE WITH INT
x = flat_bm.index.get_level_values(0)
y = flat_bm.index.get_level_values(1)
z = flat_bm.values
# Set up plot
fig = plt.figure(figsize = (15,10))
ax = plt.axes(projection ='3d')
# plotting
ax.scatter(x, y, z, '.', c=flat_bm.values, cmap='Reds')
</code></pre>
<p>At the end, the plot has an integer instead of date labels.</p>
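<p>A low-tech sketch of the mapping I have in mind, with plain <code>datetime</code> standing in for the <code>bdate_range</code> index (the resulting strings could then be handed to the axis as tick labels):</p>

```python
from datetime import date, timedelta

# Hypothetical stand-in for the DataFrame index: one date per integer position
start = date(2012, 1, 2)
dates = [start + timedelta(days=i) for i in range(10)]
labels = {i: d.isoformat() for i, d in enumerate(dates)}

# Integer tick positions coming back from the plot map back to readable dates
ticks = [0, 4, 9]
tick_labels = [labels[t] for t in ticks]
print(tick_labels)  # ['2012-01-02', '2012-01-06', '2012-01-11']
```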
|
<python><matplotlib><plot>
|
2024-02-28 00:28:49
| 1
| 12,343
|
s5s
|
78,071,313
| 890,803
|
Copy and files between two different S3 buckets in different accounts
|
<p>I am trying to copy a file from an S3 bucket in my account to a different S3 bucket in another account, for which I have assumed a role.</p>
<p>I am getting the following error:</p>
<pre><code>"errorMessage": "An error occurred (403) when calling the HeadObject operation: Forbidden"
</code></pre>
<p>Here is my method and a description of each component:</p>
<ol>
<li>Created a new role with * s3 access and assumes ARN of the role on the destination bucket</li>
<li>BucketA is the origin bucket and has one file in it: names.csv</li>
<li>BucketB is the destination bucket</li>
</ol>
<p>Code:</p>
<pre><code>#resources
bucketB_session = boto3.Session(aws_access_key_id = aak,aws_secret_access_key=ask)
bucketB_s3 = bucketB_session.resource('s3')
#copy files
copy_source = {'Bucket': 'BucketA','Key': 'names.csv'}
bucketB_s3.Object("BucketB", 'names.csv').copy(copy_source)
</code></pre>
|
<python><amazon-web-services>
|
2024-02-28 00:28:00
| 1
| 1,725
|
DataGuy
|
78,071,185
| 2,142,728
|
Building docker python project that uses dbmate
|
<p>From what I've read, migrations can be run separately from the application (dbmate), or as part of the bootstrap of the application (Flyway in the JVM).</p>
<p>I prefer the latter approach, yet most tools in python (and in general) use the former approach.</p>
<p>My intention is to build a docker image that once run, evolves the db/runs the migrations before building the db engine. But how?</p>
<p>When I build the docker image, I first build a skimmed venv that includes the source code of my app and the runtime dependencies. Migration CLI tools like dbmate and alembic are not included. Besides, they don't read from the packages using <code>pkg_resources</code> or <code>importlib.resources</code>; they're intended to be used in the workspace, not by the bundled running app.</p>
<p>I don't know if I'm overcomplicating stuff, so I'd like to know how to achieve migrations during the bootstrap of my application, and not as part of an independent pipeline or the deployment pipeline.</p>
<p>I prefer it that way given that when I'm running tests, I like running pg docker images and run migrations on them (using testcontainers). If I use the migrations as a "separate process", those tests are very complicated to get running.</p>
<p>If you think that such approach is very complicated, please state why, and share how it is usually done so that evolutions and deployment happen one after the other.</p>
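<p>To make the question concrete, the bootstrap I have in mind looks roughly like this (<code>dbmate</code> as the migration CLI is an assumption — any binary bundled into the image would do):</p>

```python
import subprocess

def run_migrations(cmd=("dbmate", "up")):
    """Run the migration CLI before the app serves traffic; raise on failure."""
    result = subprocess.run(list(cmd), capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"migrations failed: {result.stderr}")

# At bootstrap: call run_migrations() first, then hand over to the framework.
```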
|
<python><docker><schema-migration>
|
2024-02-27 23:41:46
| 0
| 3,774
|
caeus
|
78,071,148
| 11,098,908
|
Matplotlib transformation from data coordinates to figure coordinates
|
<p>The following code came from the <a href="https://www.packtpub.com/en-au/product/matplotlib-30-cookbook-9781789135718?type=print&gad_source=1&gclid=CjwKCAiAlcyuBhBnEiwAOGZ2S5AG1tIdmdDusS18Q4CYS6Cv8c3TLClPdVPEMOz85nEEyw5kpOYxmRoCJLoQAvD_BwE" rel="nofollow noreferrer">Matplotlib 3.0 Cookbook</a></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
theta = np.linspace(0, 2*np.pi, 128)
y = np.sin(theta)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(theta, y, 'r-*')
xdata, ydata = 3, 0
xdisplay, ydisplay = ax.transData.transform_point((xdata, ydata))
bbox = dict(boxstyle="round", fc="1.0") # bounding box to be used around the text description
arrowprops = dict(arrowstyle="->", color='green', lw=5,
connectionstyle="angle,angleA=0,angleB=90,rad=10")
offset = 72 # specify the coordinates where the text description is to be placed on the plot
# Annotation for the point in the data co-ordinate system
data = ax.annotate('data = (%.1f, %.1f)' % (xdata, ydata),
(xdata, ydata), xytext=(-2*offset, offset),
textcoords='offset points', bbox=bbox, arrowprops=arrowprops)
# annotation for the point in the display coordinate system
disp = ax.annotate('display = (%.1f, %.1f)' % (xdisplay, ydisplay),
(xdisplay, ydisplay), xytext=(0.5*offset, -offset),
xycoords='figure pixels', textcoords='offset points',
bbox=bbox, arrowprops=arrowprops);
</code></pre>
<p>The above code is supposed to produce this visual (according to the book)
<a href="https://i.sstatic.net/nRdMk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nRdMk.png" alt="enter image description here" /></a></p>
<p>However, when I ran that code in Jupyter Lab (3.5.3), it generated this image
<a href="https://i.sstatic.net/nXPhR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nXPhR.png" alt="enter image description here" /></a></p>
<p>I suspected this was due to this line of code</p>
<pre><code>xdisplay, ydisplay = ax.transData.transform_point((xdata, ydata))
</code></pre>
<p>Which yielded different values compared to those shown in the book's visual (which were 214.5, 144.7)</p>
<p><a href="https://i.sstatic.net/bz9Jw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bz9Jw.png" alt="enter image description here" /></a></p>
<p>Could someone please explain what had gone wrong?</p>
<p><strong>EDIT:</strong></p>
<p>I also ran a similar <a href="https://matplotlib.org/stable/users/explain/artists/transforms_tutorial.html" rel="nofollow noreferrer">tutorial code</a> provided by <code>matplotlib</code> using VS Code, and the outcome wasn't correct either. Was this because of my laptop's display, or because the code doesn't take into account differences in users' hardware?</p>
<p><a href="https://i.sstatic.net/OOdaz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OOdaz.png" alt="enter image description here" /></a></p>
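<p>My current understanding, reduced to arithmetic: display coordinates depend on the figure's pixel size (inches × dpi), so two machines give different numbers for the same data point. A hand-rolled sketch of the linear map, ignoring margins (the pixel extents are made-up values):</p>

```python
import math

def data_to_display(x, xmin, xmax, axes_left_px, axes_width_px):
    """Linear map from data coordinates to display pixels (x direction only)."""
    frac = (x - xmin) / (xmax - xmin)
    return axes_left_px + frac * axes_width_px

# The same data point (x = 3 on a 0..2*pi axis) at two different dpi settings:
px_low = data_to_display(3, 0, 2 * math.pi, 80, 496)    # e.g. 100 dpi
px_high = data_to_display(3, 0, 2 * math.pi, 160, 992)  # e.g. 200 dpi
print(px_low, px_high)  # the second is exactly twice the first
```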
|
<python><matplotlib>
|
2024-02-27 23:33:13
| 0
| 1,306
|
Nemo
|
78,071,104
| 3,237,438
|
EasyOCR segfault when used with Decord
|
<ul>
<li>EasyOCR version: 1.7.1</li>
<li>Decord version: 0.6.0</li>
<li>Python version: 3.11.7</li>
</ul>
<p>Minimum reproducible example:</p>
<pre><code>import decord
import easyocr
ocr = easyocr.Reader(['en'])
</code></pre>
<p><code>[1] 1106258 segmentation fault (core dumped) python</code></p>
<p><a href="https://stackoverflow.com/questions/77287219/python-easyocr-crashing-python-vm-doesnt-weem-to-be-related-to-opencv">This post</a> disables CUDA, which makes processing slow, and <a href="https://stackoverflow.com/questions/70567344/easyocr-segmentation-fault-core-dumped">downgrading OpenCV</a> is not possible due to incompatibility with my Python version and should not be required anyway, as a fix should have been merged.</p>
|
<python><easyocr>
|
2024-02-27 23:20:45
| 1
| 4,856
|
hkchengrex
|
78,071,026
| 1,417,926
|
wxpython alpha colors overwrite previous drawing
|
<p>I create a new bitmap on each frame so that everything draws on a fresh canvas, but apparently something else is going on: each frame seems to be drawing on top of the previous one. Is there something I have to modify in the code below?</p>
<pre><code>import wx
class MyPanel(wx.Window):
def __init__(self, parent, *args, **kwargs):
wx.Window.__init__(self)
self.SetBackgroundStyle(wx.BG_STYLE_TRANSPARENT)
self.Create(parent, style=wx.TRANSPARENT_WINDOW, *args, **kwargs)
self.SetBackgroundColour(wx.TransparentColour)
# ~ self.SetBackgroundColour((200,100,100,10))
self.SetTransparent(0)
self.Bind(wx.EVT_PAINT, self.OnPaint, self)
self.Bind(wx.EVT_ERASE_BACKGROUND, lambda ev: None)
self.timer = wx.Timer(self)
self.timer.Start(20, oneShot=False)
self.Bind(wx.EVT_TIMER, lambda ev: self.Refresh(eraseBackground=True), self.timer)
self.bmp = wx.Bitmap.FromRGBA(*self.GetSize(), 0,0,0,wx.ALPHA_TRANSPARENT)
def OnPaint(self, event):
print("painting")
self.bmp = wx.Bitmap.FromRGBA(*self.GetSize(), 0,0,0,wx.ALPHA_TRANSPARENT)
dc = wx.BufferedDC(wx.ClientDC(self), self.bmp)
# ~ dc = wx.BufferedPaintDC(self, self.bmp)
# ~ dc = wx.PaintDC(self)
dc.SetBackgroundMode(wx.TRANSPARENT)
dc.SetBackground(wx.TRANSPARENT_BRUSH)
dc.Clear() #This should be clearing the drawing!
gc = wx.GCDC(dc)
gc.SetBrush(wx.Brush((255,255,0,10)))
gc.DrawRectangle(0,0,*dc.GetSize())
gc.SetBrush(wx.Brush((255,255,0,55)))
gc.DrawRectangle(100,100,100,100)
gc.DrawText("square", 200,100)
gc.SetBrush(wx.Brush((0, 255, 255, 55)))
gc.DrawRectangle(100,300,100,100)
gc.DrawText("square", 200,300)
class MyFrame(wx.Frame):
def __init__(self, *args, **kwargs):
wx.Frame.__init__(self, *args, **kwargs)
panel = MyPanel(self)
button1 = wx.Button(panel, label="button")
class MyApp(wx.App):
def OnInit(self):
frame = MyFrame(None, title="Frame", size=(400, 800))
frame.SetBackgroundColour("blue")
self.SetTopWindow(frame)
frame.Show(True)
return True
app = MyApp()
app.MainLoop()
</code></pre>
|
<python><wxpython><wxwidgets><repaint><paintevent>
|
2024-02-27 22:59:59
| 0
| 7,569
|
shuji
|
78,070,974
| 4,119,262
|
Splitting a word at an uppercase letter without regex or a function leads to "cannot assign to expression" or "not enough values to unpack"
|
<p>I am trying to learn python. In this context, I need to split a word when encountering an uppercase letter, convert it to lowercase, and insert a "_" before it.</p>
<p>I have seen various quite complex answers <a href="https://stackoverflow.com/questions/63715861/insert-space-if-uppercase-letter-is-preceded-and-followed-by-one-lowercase-lette">here</a> or <a href="https://stackoverflow.com/questions/32713591/format-string-by-adding-a-space-before-each-uppercase-letter">here</a> or <a href="https://stackoverflow.com/questions/56342930/a-pythonic-way-to-insert-a-space-before-capital-letters-but-not-insert-space-bet">here</a>, but I am trying to do it as simply as possible.</p>
<p>So far, here is my code:</p>
<pre><code>word = input("What is the camelCase?")
for i , k in word:
if i.isupper() and k.islower():
word2 = k + "_" + i.lower + k
print(word2)
else:
print(word)
</code></pre>
<p>This leads to "<code>not enough values to unpack (expected 2, got 1)</code>" after I input my word.</p>
<p>Another attempt here:</p>
<pre><code>word = input("What is the camelCase?")
for i and k in word:
if i.isupper() and k.islower():
word2 = k + "_" + i.lower + k
print(word2)
else:
print(word)
</code></pre>
<p>In this case I cannot even write my word, I directly have the error: "<code>cannot assign to expression</code>"</p>
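<p>For completeness, the simplest approach I could picture: loop over the characters one at a time instead of trying to unpack pairs (no regex, no helper function):</p>

```python
word = "camelCase"
result = ""
for ch in word:
    if ch.isupper():
        result += "_" + ch.lower()  # split point: insert "_" and lowercase it
    else:
        result += ch
print(result)  # camel_case
```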
|
<python><loops><iteration><camelcasing>
|
2024-02-27 22:45:44
| 2
| 447
|
Elvino Michel
|
78,070,928
| 11,627,201
|
Allow Python to access the page that is displayed on Google Chrome?
|
<p>I heard that Google's terms of service do not allow automated scraping of search results, and the API is pretty slow, so I thought I might try something like the following:</p>
<ol>
<li>Manually enter a search item</li>
<li>Have the Python script get all URLs from the google search page and analyze them, something like</li>
</ol>
<pre><code>def onPageLoad():
# get python to read the HTML of the current open tab
first_res = select_first_result_from_browser()
if not certain_criteria(first_res):
do_something()
else:
do_something_else()
</code></pre>
<p>Is this possible (if it does not violate the terms of service)?</p>
|
<python>
|
2024-02-27 22:32:13
| 1
| 798
|
qwerty_99
|
78,070,834
| 10,969,942
|
Implementing Rate Limiting in asyncio for Remote Calls with QPS Constraints
|
<p><strong>Note: The answer to <a href="https://stackoverflow.com/questions/38683243/asyncio-rate-limiting">Asyncio & rate limiting</a> that uses sleep method is not what I want.</strong></p>
<p>I'm working with an asynchronous function in Python that performs a remote call, which takes approximately 20 seconds to complete. However, this remote call is subject to a QPS (Queries Per Second) limit imposed by the server: it only allows up to 2 calls per second. Exceeding this limit results in an exception being thrown.</p>
<p>Here's a simplified version of my async function:</p>
<pre class="lang-py prettyprint-override"><code>async def fun(x: int) -> int:
# Simulates a remote call that takes 20 seconds to complete
await asyncio.sleep(20)
return x
</code></pre>
<p>I've attempted two methods to process multiple calls to this function:</p>
<p><strong>Method 1: Sequential Await in a Loop</strong></p>
<pre class="lang-py prettyprint-override"><code>res = [await fun(i) for i in range(1000)]
</code></pre>
<p>This approach does not fully use the asynchronous capabilities, as it waits for each call to complete before making the next one.</p>
<p><strong>Method 2: Using asyncio.gather to Run Concurrently</strong></p>
<pre class="lang-py prettyprint-override"><code>tasks = [fun(i) for i in range(1000)]
res = await asyncio.gather(*tasks)
</code></pre>
<p>While this method attempts to run the calls concurrently, it results in exceeding the QPS limit, leading to errors.</p>
<p>I'm seeking a way to effectively use asyncio for making these remote calls, respecting the QPS limit without causing exceptions due to rate limiting. Specifically, I need a mechanism or pattern that allows me to:</p>
<ol>
<li>Ensure that no more than 2 calls are made per second.</li>
<li>Use asyncio to manage the asynchronous execution efficiently.</li>
</ol>
<p>Certainly, I could manually craft a for loop and introduce some sleep functions within a sliding window to adjust the rate, but this approach feels somewhat inelegant and less in line with Pythonic principles. <strong>The answer to <a href="https://stackoverflow.com/questions/38683243/asyncio-rate-limiting">Asyncio & rate limiting</a> that uses sleep method is not what I want.</strong></p>
<p>Is there a built-in feature in Python's asyncio library or a recommended pattern for adding a rate limiter to manage QPS constraints in such scenarios?</p>
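<p>For illustration, the kind of limiter I was hoping is built in — a hand-rolled sliding-window gate on top of asyncio. Awaiting <code>acquire()</code> caps the rate at which calls <em>start</em> while still letting the slow remote work overlap:</p>

```python
import asyncio
import time

class RateLimiter:
    """Allow at most `rate` acquisitions per `per` seconds (sliding window)."""

    def __init__(self, rate, per=1.0):
        self.rate = rate
        self.per = per
        self._lock = asyncio.Lock()
        self._starts = []

    async def acquire(self):
        async with self._lock:
            while True:
                now = time.monotonic()
                # Drop start times that have aged out of the window
                self._starts = [t for t in self._starts if now - t < self.per]
                if len(self._starts) < self.rate:
                    break
                # Sleep until the oldest start leaves the window
                await asyncio.sleep(self.per - (now - self._starts[0]) + 0.001)
            self._starts.append(time.monotonic())

async def fun(limiter, x):
    await limiter.acquire()    # gate only the start of each remote call
    await asyncio.sleep(0.01)  # stand-in for the 20 s remote call
    return x

async def main(n=5, rate=2, per=1.0):
    limiter = RateLimiter(rate, per)
    return await asyncio.gather(*(fun(limiter, i) for i in range(n)))
```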
|
<python><python-asyncio>
|
2024-02-27 22:10:39
| 0
| 1,795
|
maplemaple
|
78,070,661
| 1,491,229
|
Visualize a vector of booleans as an image grid of symbols with different colors in python
|
<p>For teaching purposes in statistics and probability, I want to give a visual representation of various probabilities by an image grid with objects or symbols in different colors in a Jupyter notebook.</p>
<p>For instance, the following code creates a pixelwise random image.</p>
<pre><code>import numpy as np
from PIL import Image
a = np.random.rand(100, 100)
img = Image.fromarray((a * 255).astype('uint8'), mode='L')
display(img)
</code></pre>
<p><a href="https://i.sstatic.net/rroKI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rroKI.png" alt="boolean array depicted as a black and white image" /></a></p>
<p>What I would like to have instead is something like this.</p>
<p><a href="https://i.sstatic.net/MoKf8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MoKf8.png" alt="enter image description here" /></a></p>
<p>In order to better visualize the connection between sets and probability.</p>
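<p>In case it helps to pin down what I mean, a purely textual version of the grid (the filled vs. hollow circle symbols are an arbitrary choice):</p>

```python
import random

random.seed(0)
rows, cols = 5, 8
# Boolean grid: True with probability 0.3, mimicking the event of interest
grid = [[random.random() < 0.3 for _ in range(cols)] for _ in range(rows)]

lines = ["".join("●" if cell else "○" for cell in row) for row in grid]
print("\n".join(lines))
```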
|
<python><jupyter-notebook><statistics><visualization><probability>
|
2024-02-27 21:27:34
| 1
| 709
|
user1491229
|
78,070,540
| 1,065,175
|
Numpy prints a vector of rows, where I expected a vector of vectors
|
<p>I have a 4x1 vector v, and I reshape it to 2x1x2, i.e. into a vector of vectors, w.</p>
<pre><code>v=np.array([[1],[2],[3],[4]])
print(v)
w=np.reshape(v,(2,1,2))
print(w)
</code></pre>
<p>But it prints a vector of rows:</p>
<pre><code>[[1]
[2]
[3]
[4]]
[[[1 2]]
[[3 4]]]
</code></pre>
<p>I expected something like:</p>
<pre><code>[[[1]
[2]]
[[3]
[4]]]
</code></pre>
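<p>For the record, the shape that matches my expectation is (2, 2, 1) — reshape fills in row-major (C) order, so the trailing axis of length 1 is what keeps the values as column vectors:</p>

```python
import numpy as np

v = np.array([[1], [2], [3], [4]])
w = np.reshape(v, (2, 2, 1))  # two blocks, each a 2x1 column vector
print(w)
```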
|
<python><numpy>
|
2024-02-27 20:58:50
| 0
| 2,331
|
ericj
|
78,070,504
| 832,560
|
"Upgrading" an SQLAlchemy object to a derived class
|
<p>Given these two models:</p>
<pre><code>class Game(Base):
__tablename__ = 'games'
type = Column(String(11))
__mapper_args__ = {
'polymorphic_identity': 'game',
'polymorphic_on': type,
}
class HostedGame(Game):
__tablename__ = 'hosted_games'
name = Column(String, ForeignKey('games.name'), primary_key=True)
__mapper_args__ = {
'polymorphic_identity': 'hosted_game',
'inherit_condition': (name == Game.name),
}
</code></pre>
<p>If I have a Game object in my database and I wish to turn it into a HostedGame, what is the best approach? I'm struggling at the moment to find a simple answer, given that in pure SQL it would simply be a case of inserting a row into the hosted_games table and being done with it.</p>
<p>I can manage this if I use two separate transactions, one for deleting the existing Game and another for inserting the new HostedGame, but if the second transaction breaks then I can't easily roll back the first, so I'd really like to do it in a single transaction. However, something like this:</p>
<pre><code> async with session.begin()
new_hosted_game = await create_hosted_game_object_from_game(game)
new_hosted_game.attr1 = new_host_attr1
# etc
await session.delete(game)
session.add(new_hosted_game)
</code></pre>
<p>Doesn't work, which probably reveals my lack of understanding with ORMs.</p>
<p>The error:</p>
<pre><code>E sqlalchemy.orm.exc.StaleDataError: UPDATE statement on table 'hosted_games' expected to update 1 row(s); 0 were matched.
</code></pre>
<p>Is there a really easy way to achieve this that I've missed, or should I come at it from another angle? Many thanks.</p>
|
<python><sqlalchemy>
|
2024-02-27 20:51:17
| 0
| 830
|
Cerzi
|
78,070,484
| 4,119,262
|
Why is looping over a dictionary (made from a json) leading to an error?
|
<p>I am trying to learn python. In this context, I get a json, and get a specific value. However, I get the following error:</p>
<blockquote>
<p>string indices must be integers, not 'str'</p>
</blockquote>
<p>for my last line of code:</p>
<pre><code>print(dollar["rate_float"])
</code></pre>
<p>I have seen other questions on the topic, such as this <a href="https://stackoverflow.com/questions/6077675/why-am-i-seeing-typeerror-string-indices-must-be-integers">one</a>, but I am not able to apply it or really understand it.</p>
<p>My code is as follows:</p>
<pre><code>import sys
import requests
import json
if len(sys.argv) != 2:
print("Missing command-line argument")
# Check if the user is giving a parameter after the program name
elif sys.argv[1].isalpha():
print("Command-line argument is not a number")
# Check if the user is giving numbers
else:
response = requests.get("https://api.coindesk.com/v1/bpi/currentprice.json")
# Get a json from coindesk
data_store = response.json()
# Store the json in the variable "data_store"
for dollar in data_store["bpi"]:
# Loop over the dictionary to pick the price in dollar
print(dollar["rate_float"])
</code></pre>
<p>After adapting the code as follows, based on the comments, I get another error: "name "bpi" is not defined".</p>
<pre><code># Store the json in the variable "data_store"
for dollar in data_store[bpi.values()]:
# Loop over the dictionary to pick the price in dollar
print(dollar[rate_float.values()])
</code></pre>
<p>The Json is as follows:</p>
<pre><code>{
"time": {
"updated": "Feb 27, 2024 20:16:10 UTC",
"updatedISO": "2024-02-27T20:16:10+00:00",
"updateduk": "Feb 27, 2024 at 20:16 GMT"
},
"disclaimer": "This data was produced from the CoinDesk Bitcoin Price Index (USD). Non-USD currency data converted using hourly conversion rate from openexchangerates.org",
"chartName": "Bitcoin",
"bpi": {
"USD": {
"code": "USD",
"symbol": "&#36;",
"rate": "57,197.495",
"description": "United States Dollar",
"rate_float": 57197.4947
},
"GBP": {
"code": "GBP",
"symbol": "&pound;",
"rate": "45,108.976",
"description": "British Pound Sterling",
"rate_float": 45108.9758
},
"EUR": {
"code": "EUR",
"symbol": "&euro;",
"rate": "52,744.155",
"description": "Euro",
"rate_float": 52744.1549
}
}
}
</code></pre>
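<p>Based on the comments, a trimmed-down sketch of what I believe the loop should look like — iterating over key/value pairs of the nested dict rather than over key strings (the dict below is a cut-down copy of the json above):</p>

```python
data_store = {
    "bpi": {
        "USD": {"code": "USD", "rate_float": 57197.4947},
        "GBP": {"code": "GBP", "rate_float": 45108.9758},
        "EUR": {"code": "EUR", "rate_float": 52744.1549},
    }
}

rates = {}
for currency, info in data_store["bpi"].items():
    rates[currency] = info["rate_float"]  # info is the inner dict, not a string
print(rates["USD"])  # 57197.4947
```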
|
<python><loops><dictionary>
|
2024-02-27 20:46:23
| 1
| 447
|
Elvino Michel
|
78,070,444
| 8,652,920
|
Is there a concise one liner to process and replace values in a dictionary? (using some form of dictionary comprehension)
|
<p>say you have a class like</p>
<pre><code>class Foo:
bar = {"hello": 123, "world": 456}
</code></pre>
<p>And I want to define <code>Foo.bar</code> such that for all the keys and values of <code>bar</code>, all even values are replaced with 0.</p>
<p>Is there a clean way to do that in one line? The issue is that the dictionary <code>{"hello": 123, "world": 456}</code> is not stored into a variable yet (I want <code>bar</code> to be defined with the processing already happened)</p>
<p>So the below is impossible because <code>d</code> is not a variable, but it feels (to me) like <code>d</code> has to be defined to do the one liner I'm asking for.</p>
<pre><code>class Foo:
bar = {key: (d[key] if not d[key] % 2 else 0) for key in {"hello": 123, "world": 456}}
</code></pre>
<p>Is what I'm asking for impossible?</p>
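<p>For comparison, the closest I've come: iterate over the literal's <code>.items()</code> directly, which sidesteps ever naming the dict:</p>

```python
class Foo:
    # The literal is its own iterable: no intermediate variable `d` needed
    bar = {k: (v if v % 2 else 0)
           for k, v in {"hello": 123, "world": 456}.items()}

print(Foo.bar)  # {'hello': 123, 'world': 0}
```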
|
<python><dictionary>
|
2024-02-27 20:35:25
| 3
| 4,239
|
notacorn
|
78,070,439
| 1,030,287
|
matplotlib scatter plot with NaNs and a cmap colour matrix
|
<p>Hi I have a simple 3D scatter plot - a dataframe <code>bm</code> with columns and index as the <code>x</code> and <code>y</code> axes. When I plot it, I want to add a colour map - also simple and I've done it below.</p>
<p>However, in my data <code>bm</code> I have some zeros which I do not want to plot - this is also easy - I set them to <code>NaN</code>. <strong>However, this causes a problem with the colour matrix.</strong> <code>scatter</code> does not like this. I've tried passing the colour matrix both with and without NaNs, and both fail with an error.</p>
<p>The code below is fully functional if you remove the line <code>bm = bm.replace({0: np.nan})</code> it will plot.</p>
<pre><code>N = 100
bm = pd.DataFrame(
index=pd.bdate_range(start='2012-01-01', periods=N, freq='B'),
data={x: np.random.randn(N) for x in range(1, 11)}
)
# Simulate some zeros
bm = pd.DataFrame(index=bm.index, columns=bm.columns, data=np.where(np.abs(bm.values) < 0.02, 0, bm.values))
# Set zeros to Nan so that I don't plot them
bm = bm.replace({0: np.nan})
z = bm.values
x = bm.columns.tolist()
y = bm.reset_index().index.tolist()
x, y = np.meshgrid(x, y)
# Set up plot
fig = plt.figure(figsize = (15,10))
ax = plt.axes(projection ='3d')
# plotting
ax.scatter(x, y, z, '.', c=bm.values, cmap='Reds') # THIS FAILS
ax.xaxis.set_ticklabels(bm.columns);
ax.yaxis.set_ticklabels(bm.index.strftime('%Y-%m-%d'));
</code></pre>
<p>Any help will be welcome</p>
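<p>For reference, the workaround I'm currently considering — dropping the NaN entries from all coordinate arrays (and from the colour values) with a boolean mask before plotting:</p>

```python
import numpy as np

z = np.array([1.0, np.nan, 0.5, np.nan, 2.0])
x = np.arange(len(z), dtype=float)
y = x * 10

mask = ~np.isnan(z)                     # keep only finite z values
xs, ys, zs = x[mask], y[mask], z[mask]
# scatter(xs, ys, zs, c=zs, cmap='Reds') would then receive no NaNs
print(zs)
```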
|
<python><matplotlib><plot>
|
2024-02-27 20:34:41
| 2
| 12,343
|
s5s
|
78,070,421
| 7,422,287
|
How to get a QFormLayout() next to a GLViewWidget() in a QVBoxLayout()
|
<p>I am trying to get a QFormLayout() next to a GLViewWidget() in a QVBoxLayout() in QMainWindow() using the below code.</p>
<p>However, I'm struggling to display both at once; I can only display one at a time.</p>
<p>My aim is to have something like this:</p>
<p><a href="https://i.sstatic.net/6qSV4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6qSV4.png" alt="enter image description here" /></a></p>
<p>Any idea, how to achieve that?</p>
<pre><code>import sys
import pyqtgraph.opengl as gl
from PyQt6.QtWidgets import QMainWindow, QApplication, QVBoxLayout, QHBoxLayout, QGroupBox, QFormLayout,\
QLabel, QLineEdit, QWidget
from PyQt6.QtGui import QIntValidator
class MainWindow(QMainWindow):
def __init__(self):
super(MainWindow, self).__init__()
first_view_widget = gl.GLViewWidget()
second_view_widget = gl.GLViewWidget()
third_view_widget = gl.GLViewWidget()
central_widget = QWidget()
vertical_layout = QVBoxLayout()
central_widget.setLayout(vertical_layout)
self.setCentralWidget(central_widget)
horizontal_layout = QHBoxLayout()
horizontal_layout.addWidget(first_view_widget)
horizontal_layout.addWidget(second_view_widget)
params = QHBoxLayout()
target_distance = QLabel('First row')
distance_input = QLineEdit()
distance_input.setText(str(100))
only_integers = QIntValidator()
distance_input.setValidator(only_integers)
air_pressure = QLabel('Second row')
pressure_input = QLineEdit()
pressure_input.setText(str(1.01325))
only_integers = QIntValidator()
pressure_input.setValidator(only_integers)
f_layout = QFormLayout()
f_layout.addRow(target_distance, distance_input)
f_layout.addRow(air_pressure, pressure_input)
params.addLayout(f_layout)
params.addWidget(third_view_widget)
group_box = QGroupBox('Parameters')
group_box.setLayout(params)
vertical_layout.addWidget(group_box)
vertical_layout.addLayout(params)
vertical_layout.addLayout(horizontal_layout)
if __name__ == '__main__':
app = QApplication(sys.argv)
main = MainWindow()
main.showMaximized()
sys.exit(app.exec())
</code></pre>
|
<python><python-3.x><qwidget><pyqtgraph><pyqt6>
|
2024-02-27 20:30:44
| 0
| 727
|
New2coding
|
78,070,316
| 11,197,957
|
Does passing a string from Rust to Python like this cause memory leaks?
|
<p>I'm currently tinkering with interfacing between <strong>Python and Rust</strong> for a hobby. (Clearly I have too much time on my hands.) The problem I'm working on right now is passing <strong>large integers</strong> between the two languages.</p>
<p>Passing large integers from Python to Rust was surprisingly straightforward. You convert the large integer into an array of 32-bit integers, and Rust is commendably laid back about the whole thing. The problem is going the other way: passing a large integer from Rust to Python. Rust really doesn't like passing arrays across an external interface!</p>
<p>A solution which I have found, which some may find hacky but which I find acceptable, is to convert the large integer into a <strong>string</strong>, and then pass that string to Python, which will then convert it back into an integer. That seems to function, but, on the basis of Vladimir Matveev's answer to <a href="https://stackoverflow.com/questions/30510764/returning-a-string-from-rust-function-to-python">this question</a>, I'm worried that I may be introducing memory leaks. Am I?</p>
<p>(Note that, if I try to call the <code>free</code> function that Vladimir Matveev recommends immediately after receiving the string from Rust, it gives me a <code>core dumped</code> fault.)</p>
<p>This is what my Rust code looks like:</p>
<pre class="lang-rust prettyprint-override"><code>const MAX_DIGITS: usize = 100;
unsafe fn import_bigint(digit_list: PortableDigitList) -> BigInt {
let digits = Vec::from(*digit_list.array);
let result = BigInt::new(Sign::Plus, digits);
return result;
}
fn export_bigint(n: BigInt) -> *mut c_char {
let result_str = n.to_str_radix(STANDARD_RADIX);
let result = CString::new(result_str).unwrap().into_raw();
return result;
}
#[repr(C)]
pub struct PortableDigitList {
array: *const [u32; MAX_DIGITS]
}
#[no_mangle]
pub unsafe extern fn echo_big_integer(
digit_list: PortableDigitList
) -> *mut c_char {
let big_integer = import_bigint(digit_list);
println!("Received big integer: {:?}. Sending it back...", big_integer);
return export_bigint(big_integer);
}
</code></pre>
<hr />
<p>Someone requested a MWE. Here it is. This is my <code>lib.rs</code>:</p>
<pre class="lang-rust prettyprint-override"><code>use std::ffi::CString;
use libc::c_char;
#[no_mangle]
pub unsafe extern fn return_string() -> *mut c_char {
let result_string = "4294967296";
println!("Returning string to Python: {}", result_string);
let result = CString::new(result_string).unwrap().into_raw();
return result;
}
#[no_mangle]
pub unsafe extern fn free_string(pointer: *mut c_char) {
let _ = CString::from_raw(pointer);
}
</code></pre>
<p>And this is my Cargo.toml:</p>
<pre class="lang-ini prettyprint-override"><code>[package]
name = "return_string"
version = "0.1.0"
edition = "2021"
[lib]
name = "return_string"
crate-type = ["dylib"]
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
libc = "0.2.152"
</code></pre>
<p>Using <code>cargo build --release</code>, I then compile this into an .so file, which is what I call from Python, like so:</p>
<pre class="lang-py prettyprint-override"><code>from ctypes import CDLL, c_char_p, cast
lib = CDLL("return_string/target/release/libreturn_string.so")
rust_func = lib.return_string
rust_func.restype = c_char_p
result_pointer = rust_func()
result = cast(result_pointer, c_char_p).value.decode()
print("Rust returned string: "+result)
free_func = lib.free_string
free_func.argtypes = [c_char_p]
free_func(result_pointer)
</code></pre>
<p>If I run the above Python script, it gives me this output:</p>
<pre><code>Returning string to Python: 4294967296
Rust returned string: 4294967296
munmap_chunk(): invalid pointer
Aborted (core dumped)
</code></pre>
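<p>My working theory: with <code>restype = c_char_p</code>, ctypes converts the result to a Python <code>bytes</code> object, so what later reaches <code>free_string</code> is no longer the pointer Rust allocated. Declaring the restype as <code>c_void_p</code> keeps the raw address, which can be decoded and then freed. The cast-back step itself can be sketched with ctypes alone (a local buffer stands in for the Rust allocation):</p>

```python
import ctypes

# Stand-in for the pointer Rust would return: the address of a local C buffer
buf = ctypes.create_string_buffer(b"4294967296")
raw_address = ctypes.addressof(buf)  # what a c_void_p restype would preserve

# Decode the string without consuming the address, as the Python side should:
text = ctypes.cast(raw_address, ctypes.c_char_p).value.decode()
print(text)  # 4294967296
# raw_address is still the original pointer and could be passed to free_string
```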
|
<python><rust><memory-leaks><ffi>
|
2024-02-27 20:06:36
| 0
| 734
|
Tom Hosker
|
78,070,256
| 376,829
|
htmx: hx-target-error fails when hx-target defined (django python)
|
<p>I am using the htmx extension "response-targets" via "hx-ext" in a python django project.</p>
<p><strong>The Problem</strong></p>
<p>"hx-target-error" will not work when "hx-target" is also defined, but works when it is the only <em>target</em>.</p>
<p><strong>Environment</strong></p>
<p>htmx 1.9.10, response-targets 1.9.10, python 3.12, django 5.0.1</p>
<p><strong>Example html</strong>:</p>
<pre><code><div hx-ext="response-targets">
<button
    hx-get="{% url 'appname:endpoint' %}"
hx-target-error="#id-fail"
hx-target="#id-success"
type="submit"
>
Press Me!
</button>
</div>
</code></pre>
<p><strong>hx-target</strong></p>
<p>I have tried with various other types of targets such as "next td", "next span", etc. All of them produce a valid target value "request.htmx.target" in the endpoint defined in views.py.</p>
<p><strong>hx-target-error</strong></p>
<p>I have also tried various alternatives supported in response-targets such as "hx-target-*", "hx-target-404", etc with no change in results.</p>
<p>I verified that "response-targets" is installed correctly and being used correctly because when "hx-target" is removed from the "button", then "hx-target-error" works.</p>
<p><strong>Error generation in views.py</strong></p>
<pre><code>@login_required
def endpoint(request):
return HttpResponseNotFound("Not found 404")
</code></pre>
<p><strong>Logs</strong></p>
<pre><code>Bad Request: /testbed/endpoint/
127.0.0.1 - [27/Feb/2024] "GET /testbed/endpoint/ HTTP/1.1" 404
</code></pre>
<p><strong>response-targets extension</strong></p>
<p><a href="https://htmx.org/extensions/response-targets/" rel="nofollow noreferrer">https://htmx.org/extensions/response-targets/</a></p>
|
<python><django><htmx>
|
2024-02-27 19:56:29
| 1
| 12,696
|
David Manpearl
|
78,070,219
| 10,976,654
|
Some minor tick labels do not show on log axis
|
<p>I'm trying to customize major and minor tick labels on a log y axis where all labels are in <code>%g</code> format.</p>
<p>When the minor 2s and 5s are labeled, both major and minor grid lines show and the tick labels appear as expected.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
N = 100
x = np.linspace(0, 1, N)
np.random.seed(123)
y = np.random.uniform(0.1, 50, N)
fig, ax = plt.subplots()
ax.plot(x, y)
##### Eventually will roll this chunk into a formatter function
ax.set(yscale="log")
# Major ticks
major_ticks = [0.001, 0.01, 0.1, 1, 10, 100]
ax.set_yticks(major_ticks)
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter("%g"))
# Minor ticks
minor_ticks = [0.002, 0.005, 0.02, 0.05, 0.2, 0.5, 2, 5, 20, 50]
minor_locator = ticker.LogLocator(base=10.0, subs=(0.2, 0.5), numticks=12)
def minor_tick_filter(val, pos):
if val in minor_ticks:
return '%g' % val
return ''
ax.yaxis.set_minor_locator(minor_locator)
ax.yaxis.set_minor_formatter(ticker.FuncFormatter(minor_tick_filter))
#####
ax.set(xlim=[0, 1], ylim=[0.1, 100])
plt.minorticks_on()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/38rIp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/38rIp.png" alt="enter image description here" /></a></p>
<p>But if I want to plot the 3s, the label for 0.3 is missing. How can I fix this? I'm using matplotlib version 3.2.1; yes, it's old, but it's a relic environment I'm stuck with for this application.</p>
<pre><code>minor_ticks = [0.003, 0.03, 0.3, 3, 30]
minor_locator = ticker.LogLocator(base=10.0, subs=[0.3], numticks=12)
</code></pre>
<p><a href="https://i.sstatic.net/oYNCV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oYNCV.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><matplotlib><logarithm><yticks>
|
2024-02-27 19:52:29
| 1
| 3,476
|
a11
|
78,070,176
| 10,462,461
|
Python returning 0's instead of value
|
<p>I'm working on a project where I need to convert SAS code to Python. I've gotten most of the script converted but I'm having an issue with the following SAS snippet:</p>
<pre><code>if balloondt_i <= intnx('month',datadt,1,'end') then prinamt = currbal;
else do;
do i = 1 to dpd_mult;
if currbal > 0 then do;
intbal = currbal * rate / 1200 * pmtfreq;
if int_only = 'IO' then prinbal = 0;
else if intbal >= pmtamt then prinbal = mort(currbal,.,rate*pmtfreq/1200,max(1,(intck('month',datadt,balloondt)-((i-1)*pmtfreq)))/pmtfreq) - intbal;
else prinbal = pmtamt - intbal;
currbal = currbal - prinbal;
prinamt = prinamt + prinbal;
intamt = intamt + intbal;
end;
end;
end;
</code></pre>
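<p>One subtlety worth spelling out before the translation: SAS's <code>intnx('month', datadt, 1, 'end')</code> advances one month and then snaps to the <em>end</em> of that month, whereas <code>relativedelta(months=+1)</code> only advances the date. A stdlib sketch of the SAS semantics (the function name is mine):</p>

```python
import calendar
import datetime as dt

def intnx_month_end(d, increment):
    # SAS intnx('month', d, increment, 'end'): move `increment` months forward,
    # then align to the last day of the resulting month
    y = d.year + (d.month - 1 + increment) // 12
    m = (d.month - 1 + increment) % 12 + 1
    return dt.date(y, m, calendar.monthrange(y, m)[1])

print(intnx_month_end(dt.date(2023, 12, 31), 1))  # 2024-01-31
```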
<p>I've created a sample data set below for Python:</p>
<pre><code>df = pd.DataFrame({
'loannum': [111, 222],
'datadt': [dt.datetime(2023, 12, 31), dt.datetime(2023, 12, 31)],
'balloondt_i': [dt.datetime(2026, 6, 1), dt.datetime(2031, 3, 1)],
'balloondt': [dt.datetime(2026, 6, 1), dt.datetime(2031, 3, 1)],
'currbal': [2171044.89, 2020983.87],
'rate': [5.50, 4.17],
'pmtfreq': [1, 1],
'int_only': [None, None],
'dpd_mult': [1, 3],
'prinbal': [0, 0],
'prinamt': [0, 0],
'pmtamt': [77622.93, 26188.26],
'intamt': [0, 0],
'intbal': [0, 0]
})
</code></pre>
<p>The Python code I've come up with to replicate the SAS snippet is below:</p>
<pre><code>for idx in df.index:
datadt = df.loc[idx, 'datadt']
balloondt_i = df.loc[idx, 'balloondt']
currbal = df.loc[idx, 'currbal']
rate = df.loc[idx, 'rate']
pmtfreq = df.loc[idx, 'pmtfreq']
pmtamt = df.loc[idx, 'pmtamt'] if not pd.isna(df.loc[idx, 'pmtamt']) else None
int_only = df.loc[idx, 'int_only']
if balloondt_i <= datadt + relativedelta(months=+1): # Approximates 'intnx'
df.loc[idx, 'prinamt'] = df.loc[idx, 'currbal']
else:
for i in range(df.loc[idx, 'dpd_mult']):
if currbal <= 0: # Break if current balance is zero or negative
break
intbal = df.loc[idx, 'currbal'] * rate / 1200 * pmtfreq
if int_only == 'IO':
prinbal = 0
elif intbal >= pmtamt:
nperiods = max(1, (balloondt_i.year - datadt.year) * 12 + (balloondt_i.month - datadt.month) - (i - 1) * pmtfreq)
prinbal = npf.pmt(rate * pmtfreq / 1200 / 12, nperiods, df.loc[idx, 'currbal']) - intbal # Use np.pmt()
else:
prinbal = pmtamt - intbal
df.loc[idx, 'currbal'] -= prinbal
df.loc[idx, 'prinamt'] += prinbal
df.loc[idx, 'intamt'] += intbal
</code></pre>
<p>The Python code calculates most of the attributes correctly. However, <code>prinbal</code> and <code>intbal</code> are returned with <code>0</code>. I've tried moving the calculation for <code>intbal</code> inside the first <code>if</code> statement before the <code>break</code> but that yielded the same results. My desired output is below:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>loannum</th>
<th>datadt</th>
<th>balloondt_i</th>
<th>balloondt</th>
<th>currbal</th>
<th>rate</th>
<th>pmtfreq</th>
<th>int_only</th>
<th>dpd_mult</th>
<th>prinbal</th>
<th>prinamt</th>
<th>pmtamt</th>
<th>intamt</th>
<th>intbal</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>2023-12-31</td>
<td>2026-06-01</td>
<td>2026-06-01</td>
<td>2171044.89</td>
<td>5.50</td>
<td>1</td>
<td>None</td>
<td>1</td>
<td>67672.307588</td>
<td>67672.307588</td>
<td>77622.93</td>
<td>9950.622413</td>
<td>9950.622413</td>
</tr>
<tr>
<td>222</td>
<td>2023-12-31</td>
<td>2031-03-01</td>
<td>2031-03-01</td>
<td>2020983.87</td>
<td>4.17</td>
<td>1</td>
<td>None</td>
<td>3</td>
<td>19298.771606</td>
<td>57696.053269</td>
<td>26188.26</td>
<td>20868.726731</td>
<td>6889.488395</td>
</tr>
</tbody>
</table></div>
|
<python><pandas><sas>
|
2024-02-27 19:44:03
| 1
| 340
|
gernworm
|
78,070,052
| 22,518,813
|
How to create an uninstaller for my python app?
|
<p>I'm currently developing a Python application for Windows and I want to share it with my friends.
To do so, I converted my Python files into an executable using PyInstaller.
I then created an installer and an updater which retrieve the files from a GitHub repository, download them and unzip them for use.</p>
<p>I now want to create an uninstaller so it is easier to delete the application but I cannot find a great way to do it.</p>
<p>I tried to delete the directory containing the installation but I keep getting permission errors.</p>
<p>Is there a better way to do it?</p>
|
<python><windows><uninstallation>
|
2024-02-27 19:19:16
| 0
| 314
|
fastattack
|
78,069,994
| 6,458,245
|
What are user and global installations when in a conda environment?
|
<p>I am trying to update pip within my conda environment using pip install --upgrade pip. However, I am getting this warning message: "Defaulting to user installation because normal site-packages is not writeable"</p>
<p>What does user and global refer to when I am already within a conda environment?</p>
|
<python><pip><anaconda>
|
2024-02-27 19:06:25
| 0
| 2,356
|
JobHunter69
|
78,069,925
| 1,867,985
|
Why does comparison between empty ndarrays and empty lists behave differently if done inside an f-string?
|
<p>I was messing around a bit with empty <code>ndarray</code>s and uncovered some unexpected behavior. Minimal working example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
print(f"{np.empty(0)} :: {type(np.empty(0))}")
print(f"{[]} :: {type([])}")
print(f"internal -> {np.empty(0) == []} :: {type(np.empty == [])}")
external = np.empty(0) == []
print(f"external -> {external} :: {type(external)}")
</code></pre>
<p>Gives the output:</p>
<pre><code>[] :: <class 'numpy.ndarray'>`
[] :: <class 'list'>
internal -> [] :: <class 'bool'>
external -> [] :: <class 'numpy.ndarray'>
</code></pre>
<p>I have three questions about this, and I suspect that the answers are probably related:</p>
<ol>
<li>Why does numpy return an empty array instead of a boolean when comparing an empty array with an empty list (or an empty tuple, or an empty dict, or an empty set, or another instance of an empty array)?</li>
<li>Why <em>don't</em> we get the same type result when evaluating this inside of an f-string?</li>
<li>Why does numpy <em>not</em> return an empty array when comparing an empty array with an empty string (<code>np.empty(0) == ''</code> returns False)?</li>
</ol>
<p>Based on the <code>FutureWarning</code> that gets raised when trying out the comparison to an empty string, I'm guessing that the answer to (1) probably has something to do with numpy performing an element-wise comparison between iterables with no elements, but I don't really get the details, nor why cases (2) and (3) seem to behave differently.</p>
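<p>To make guess (1) checkable, here is what the shape and dtype of the comparison result look like, which seems to support the element-wise reading (printed values are what I would expect, not verified against every numpy version):</p>

```python
import numpy as np

result = np.empty(0) == []  # the empty list is coerced to an empty array first
print(type(result), result.shape, result.dtype)
```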
|
<python><numpy><numpy-ndarray><f-string>
|
2024-02-27 18:53:20
| 1
| 325
|
realityChemist
|
78,069,777
| 7,418,125
|
Exporting Prometheus Alertmanager Alerts to CSV Using Python, Filtering by Specific Timestamps
|
<p>I'm working with a monitoring platform provisioned on AWS, utilising EKS, and configured with a Prometheus data source and Alertmanager. Currently, alerts triggered by Alertmanager are sent to a Slack channel. I have used Alertmanager API to fetch all the alerts triggered within a specific period of time.</p>
<p>Below is the Python code I've written for this purpose. Please note that I'm not very experienced with Python. The code successfully exports alerts to an Excel sheet, but it includes random alerts from various dates (e.g., 20th, 21st, 26th, 27th of February 2024), rather than only those triggered on a specific day (e.g., 27th of February).</p>
<p>I'm seeking assistance to modify the code so that it exports only alerts triggered within the specified date range. Currently, it downloads the Excel sheet to my PC with numerous random alerts outside the provided dates in the script.</p>
<p>Any help or suggestions would be greatly appreciated. Thank you.</p>
<pre><code>import requests
import pandas as pd
from datetime import datetime
import os # Import the os module
def export_alerts(prometheus_url, start_time, end_time):
# Construct the URL to query the alerts
alerts_url = f"{prometheus_url}/api/v1/alerts"
# Initialize an empty list to store all alerts
all_alerts = []
# Define query parameters for the initial request
params = {
"start": start_time,
"end": end_time,
"limit": 100 # Adjust as needed based on your expected number of alerts per query
}
try:
while True:
# Send a GET request to fetch alerts with pagination
response = requests.get(alerts_url, params=params)
response.raise_for_status() # Raise an exception for any error response
# Extract alerts from the response
alerts_data = response.json()["data"]["alerts"]
# Append alerts to the list of all alerts
all_alerts.extend(alerts_data)
# Check if there are more alerts to fetch
if "next" in response.json()["data"]:
params["start"] = response.json()["data"]["next"]
else:
break # Exit loop if no more alerts to fetch
# Return all alerts
return all_alerts
except requests.RequestException as e:
# Handle any HTTP request errors
print(f"Error fetching alerts: {e}")
return None
# Example usage
prometheus_url = "https://prometheus.dev.spencecom.eu.spence.cloud"
start_time = "2024-02-26T00:00:00Z"
end_time = "2024-02-27T23:59:59Z"
alerts = export_alerts(prometheus_url, start_time, end_time)
if alerts:
# Convert alerts to a DataFrame
df = pd.DataFrame(alerts)
# Print column names for verification
print("Column names in DataFrame:")
print(df.columns)
# Generate timestamp for filename
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
# dir to save excel
# Example: "/Users/your_username/Documents/alerts/"
output_directory = "/Users/spenceebuka/Documents/PYTHON_SCRIPT"
# Ensure that the output directory exists
os.makedirs(output_directory, exist_ok=True)
# Export DataFrame to Excel file with specified column names and timestamp in filename
excel_file_path = os.path.join(output_directory, f"alerts_{timestamp}.xlsx")
df.to_excel(excel_file_path, index=False)
print(f"Alerts exported to {excel_file_path} successfully.")
else:
print("Failed to fetch alerts.")
</code></pre>
|
<python><excel><amazon-web-services><prometheus><prometheus-alertmanager>
|
2024-02-27 18:27:09
| 1
| 504
|
Emmanuel Spencer Egbuniwe
|
78,069,591
| 16,525,263
|
How to read and execute hql file(hive query) and create pyspark dataframe
|
<p>I have a <code>.hql</code> file. I need to read and execute the query to create a dataframe from the query result.
I have the code below:</p>
<pre><code>def read_and_exec_hql(hql_file_path):
with open(hql_file_path, 'r') as f:
hql_query = f.read().strip()
queries = [q.strip() for q in hql_query.splitlines() if q.strip() and not q.startswith('--')]
df = None
for query in queries:
if df is None:
df = spark.sql(query)
else:
df = df.union(spark.sql(query))
return df
hql_file_path = 'path/to/hql/file'
df = read_and_exec_hql(hql_file_path)
df.show()
</code></pre>
<p>I'm getting <code>Py4JJavaError</code> for this.</p>
<p>Is there any other approach to reading and executing HQL files in PySpark?
Please let me know.
Thanks in advance.</p>
|
<python><apache-spark><pyspark><hql>
|
2024-02-27 17:54:00
| 0
| 434
|
user175025
|
78,069,588
| 7,128,934
|
Implications of querying database before lambda_handler
|
<p>Here is the pseudo-code for what current lambda function looks like;</p>
<pre class="lang-py prettyprint-override"><code>import pandas
import pymysql
def get_db_data(con_):
query = "SELECT * FROM mytable"
data = pandas.read_sql(query, con_)
return data
def lambda_handler(event, context):
con = pymysql.connect()
data = get_db_data(con)
"""
do other things with event
"""
con.close()
</code></pre>
<p>I am debating if I can do this instead:</p>
<pre class="lang-py prettyprint-override"><code>import pandas
import pymysql
con = pymysql.connect()
def get_db_data(con_):
query = "SELECT * FROM mytable"
data = pandas.read_sql(query, con_)
return data
data = get_db_data(con)
def lambda_handler(event, context):
"""
do other things with event
"""
con.close()
</code></pre>
<p>But I am not sure if it is a good practice. What implications would the second option have on run-time and cost? Is it against the recommended way?</p>
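<p>For context, my mental model (which may be wrong) is that module-level code runs once per execution environment, while the handler runs once per invocation. A toy stdlib illustration of that difference:</p>

```python
init_count = 0

def connect():
    # stands in for pymysql.connect() plus the upfront query
    global init_count
    init_count += 1
    return "connection"

con = connect()  # module scope: runs once, when the module is imported

def lambda_handler(event, context):
    # each invocation reuses the module-level `con`
    return init_count

results = [lambda_handler(None, None) for _ in range(3)]
print(results)  # [1, 1, 1]
```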
|
<python><amazon-web-services><aws-lambda>
|
2024-02-27 17:53:18
| 1
| 32,598
|
d.b
|
78,069,220
| 16,759,116
|
What sorting algorithm is this? ("merge iterator with itself")
|
<p>As long as the list isn't sorted, keep replacing it with a merge of an iterator with itself. Is that (equivalent to) one of the commonly known sorting algorithms, just implemented weirdly, or is it something new?</p>
<pre class="lang-py prettyprint-override"><code>from random import shuffle
from heapq import merge
from itertools import pairwise
# Create test data
a = list(range(100))
shuffle(a)
# Sort
while any(x > y for x, y in pairwise(a)):
it = iter(a)
a = list(merge(it, it))
print(a)
</code></pre>
<p><a href="https://ato.pxeger.com/run?1=PZAxDsIwDEX3nMISSyqBBBtCgoUjcAJLONRSmwTHFfQsLCxwJy7BGXChxYstf-c_O7dn7rVO8X5_dBoW69c7SGpBMB4tcZuTKJS6C6Eh95VqwnyelJbkNPZZSTSlpkxaRpYLF3JuBnshVAKlonBERYewhYaLeiOdyK-Wy6pyI8djNbw5mIm71NwQYOz9FXbQQ0gC17kVHP8Am682DixYzXZYZLAYGn_Md1HPOjfZSC4LR7Wp39Xj8dMnfAA" rel="nofollow noreferrer">Attempt This Online!</a></p>
<p><code>heapq.merge</code> does merge two inputs in the "standard" way, but it's an implementation detail and its inputs are supposed to be sorted, which they aren't here (and they're also not independent, as they use the same source). To eliminate it being an implementation detail, let <code>merge</code> actually be this (the standard way, always comparing the two "current" values, yielding the smaller one and fetching a replacement for it):</p>
<pre><code>def merge(xs, ys):
none = object()
x = next(xs, none)
y = next(ys, none)
while (x is not none) and (y is not none):
if x <= y:
yield x
x = next(xs, none)
else:
yield y
y = next(ys, none)
if x is not none:
yield x
if y is not none:
yield y
yield from xs
yield from ys
</code></pre>
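<p>To make a single pass concrete: because both arguments share one iterator, each comparison effectively pairs up adjacent elements. For example (using <code>heapq.merge</code>; the custom <code>merge</code> above gives the same result):</p>

```python
from heapq import merge

a = [3, 1, 4, 1, 5]
it = iter(a)
# One pass of "merge the iterator with itself"
print(list(merge(it, it)))  # [1, 3, 1, 4, 5]
```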
|
<python><algorithm><sorting><merge>
|
2024-02-27 16:53:39
| 3
| 10,901
|
no comment
|
78,069,167
| 4,439,019
|
Python: Opening specific file with latest timestamp in name
|
<p>I have set of JSON files in my directory that look like this:</p>
<pre><code>P2096_20240221114151310.json
QA558_20240223154745500.json
QA558_20240223165244591.json
</code></pre>
<p>What I want to do is pass the string to the left of the underscore as an argument, and then open the JSON file with the latest timestamp in its name.</p>
<p>This is what I have so far:</p>
<pre><code>import glob
def get_file(name):
files = glob.glob('./*.json')
correct_files = []
for file in files:
file = file.strip('.\\')
if file.startswith(name):
correct_files.append(file)
print(correct_files)
get_file('QA558')
</code></pre>
<p>And that gets me to here:</p>
<pre><code>['QA558_20240223154745500.json', 'QA558_20240223165244591.json']
</code></pre>
<p>That is where I am stuck. I know after I find the max timestamp, I will add the <code>with</code> statement to load the file as data, I just don't know how to get to that point.</p>
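<p>For what it's worth, since the timestamp part is fixed-width and zero-padded, a plain string <code>max()</code> over the matching names should pick the latest one. A self-contained sketch of the shape I'm aiming for (it writes the example filenames into a temp directory, so the file contents are invented):</p>

```python
import json
import os
import tempfile

names = [
    "P2096_20240221114151310.json",
    "QA558_20240223154745500.json",
    "QA558_20240223165244591.json",
]

with tempfile.TemporaryDirectory() as d:
    # create the example files with dummy JSON content
    for n in names:
        with open(os.path.join(d, n), "w") as f:
            json.dump({"name": n}, f)

    prefix = "QA558"
    matches = [n for n in os.listdir(d) if n.startswith(prefix + "_")]
    latest = max(matches)  # lexicographic max == latest timestamp here
    with open(os.path.join(d, latest)) as f:
        data = json.load(f)

print(latest)  # QA558_20240223165244591.json
```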
|
<python><string>
|
2024-02-27 16:45:35
| 3
| 3,831
|
JD2775
|
78,068,883
| 2,726,900
|
Spark session with .master("yarn"): addFile(...) does not upload file to executor nodes
|
<p>I have a pyspark session (client mode, with <code>.master("yarn")</code>). I have a file that I need to upload to each Spark executor's file system.</p>
<p>I've uploaded the file to HDFS (to "/tmp/dummy.txt"), created a Spark session, and called:</p>
<ul>
<li><code>spark.sparkContext.addFile("hdfs:///tmp/dummy.txt")</code></li>
<li><code>SparkFiles.get("dummy.txt")</code>.</li>
</ul>
<p>And this <code>SparkFiles.get("dummy.txt")</code> gives me a path to some folder in <code>/tmp/spark-blablalba-hash/userFiles-blablabla-hash/dummy.txt</code>. Such path <strong>does not exist on my Spark executors</strong>, but... does exist on my Spark driver (where I don't need it at all).</p>
<p>How can it be fixed and how can I finally copy the file from driver to executors?</p>
<p>P.S. Some code that I did to get such strange behaviour:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark import SparkFiles
if __name__ == "__main__":
spark = SparkSession.builder.enableHiveSupport()\
.master("yarn").getOrCreate()
# I have uploaded a file "dummy.txt" to HDFS (to hdfs:///tmp/dummy.txt)
# I want this file from HDFS to be copied to all executors.
spark.sparkContext.addFile("hdfs:///tmp/dummy.txt")
# Let's look where it was uploaded...
path_on_exectors = SparkFiles.get("dummy.txt")
print(path_on_exectors)
# > '/tmp/spark-6e61ed03-6866-4863-adb6-b7345836caf3/userFiles-28d28f93-e4df-44c3-a1e5-7411d02d4cf5/dummy.txt'
# This path was created on my driver node but DOES NOT EXIST on my executors.
# What can I do to fix it?
# (Quitting programming, moving to Somalia and becoming a pirate is not an option yet).
</code></pre>
|
<python><apache-spark><pyspark>
|
2024-02-27 15:58:54
| 0
| 3,669
|
Felix
|
78,068,806
| 14,230,633
|
Polars aggregate without a groupby
|
<p>Is there a way to call <code>.agg()</code> without grouping first? I want to perform standard aggregations but only want one row in the response, rather than separate rows for separate groups.</p>
<p>I could do something like</p>
<pre><code>df.with_columns(dummy_col=pl.lit("dummy_col")).group_by('dummy_col').agg(<aggregateion>)
</code></pre>
<p>but I'm wondering if there's a way without the dummy stuff</p>
|
<python><dataframe><python-polars>
|
2024-02-27 15:45:49
| 2
| 567
|
dfried
|
78,068,464
| 14,833,503
|
Error: PyType_GetFlags when trying to use keras package
|
<p>I am using <code>library(keras)</code> and <code>library(tensorflow)</code> to estimate an LSTM model. When I try to run the code I am getting the following error message:</p>
<pre><code>MODULE: Removing module /shared/modulefiles-tools/python/miniconda-3.9
Error: PyType_GetFlags - /usr/lib64/libpython2.7.so.1.0: undefined symbol: PyType_GetFlags
</code></pre>
<p>This is the R version used:</p>
<pre><code>$platform
[1] "x86_64-pc-linux-gnu"

$arch
[1] "x86_64"

$os
[1] "linux-gnu"

$system
[1] "x86_64, linux-gnu"

$status
[1] ""

$major
[1] "4"

$minor
[1] "2.3"

$year
[1] "2023"

$month
[1] "03"

$day
[1] "15"

$`svn rev`
[1] "83980"

$language
[1] "R"

$version.string
[1] "R version 4.2.3 (2023-03-15)"

$nickname
[1] "Shortstop Beagle"
</code></pre>
<p>and the Python used is /shared/modulefiles-tools/python/miniconda-3.9.</p>
<p>Has anyone encountered this error before? and has any idea on how to solve it? help is appreciated.</p>
|
<python><r><tensorflow><keras>
|
2024-02-27 14:54:12
| 0
| 405
|
Joe94
|
78,068,423
| 14,034,263
|
When creating Zoom I am getting invalid access token error when my keys are valid
|
<pre><code>import requests
import json
API_KEY = 'API KEY'
API_SEC = 'SECRET KEY'
zoom_api_url = 'https://api.zoom.us/v2/users/me/meetings'
meeting_details = {
"topic": "The title of your Zoom meeting",
"type": 2,
"start_time": "2024-02-27T10:00:00",
"duration": "45",
"timezone": "Europe/Madrid",
"agenda": "Test Meeting",
"settings": {
"host_video": "true",
"participant_video": "true",
"join_before_host": "False",
"mute_upon_entry": "False",
"watermark": "true",
"audio": "voip",
"auto_recording": "cloud"
}
}
headers = {
'Content-Type': 'application/json',
}
auth = (API_KEY, API_SEC)
response = requests.post(zoom_api_url, headers=headers, auth=auth, data=json.dumps(meeting_details))
print(response.json())
</code></pre>
<p>After that I get an access token, but it says the access token is invalid.
I have added the keys and allowed authentication on Zoom.
I have tested it with Postman as well as through my Django app.</p>
<p>From every platform I am getting same error.</p>
<blockquote>
<p><strong>Invalid Access Token</strong></p>
</blockquote>
<p>I tried to create an access token with jwt.io too, but it's still not working.</p>
<p>If anyone can help, what is wrong?</p>
|
<python><django><django-views><zoom-sdk>
|
2024-02-27 14:46:55
| 0
| 359
|
Vishal Pandey
|
78,068,207
| 962,190
|
How to interrupt a sleeping python process
|
<p>I tried to write a generic timeout context manager, and I can't get it to work properly:</p>
<pre class="lang-py prettyprint-override"><code>import signal
import contextlib
from threading import Thread
from _thread import interrupt_main # 3.10 and up only
import time
class CustomInterrupt(Exception):
SIG = 63 # I can only test on linux
def handle_custom_interrupt(signum, _):
raise CustomInterrupt
signal.signal(CustomInterrupt.SIG, handle_custom_interrupt)
@contextlib.contextmanager
def timeout(seconds: float = 3.0):
class TimedOut:
def __init__(self, seconds_: float):
self.seconds: float = seconds_
self.state: bool | None = None
self.thread = Thread(target=self._task)
def _task(self):
time.sleep(self.seconds)
if self.state is None:
interrupt_main(CustomInterrupt.SIG)
def start(self):
self.thread.start()
def __bool__(self):
if self.state is None:
raise ValueError("Can't decide yet, value is only meaningful on exit.")
return self.state
timed_out = TimedOut(seconds)
try:
timed_out.start()
yield timed_out
timed_out.state = False
except CustomInterrupt:
timed_out.state = True
finally:
timed_out.thread.join()
</code></pre>
<p>Interrupting an endless loop works:</p>
<pre class="lang-py prettyprint-override"><code>with timeout(1) as t_o:
while True:
pass
print(f"escaped! timed out: {bool(t_o)}") # prints "escaped! timed out: True" after 1 second
</code></pre>
<p>Interrupting a non-responsive request doesn't:</p>
<pre class="lang-py prettyprint-override"><code>import requests
with timeout(1) as t_o:
requests.get("http://www.google.com:81/")
print(f"escaped! timed out: {bool(t_o)}") # prints "escaped! timed out: True" after 130 seconds
</code></pre>
<p>Interrupting sleep doesn't:</p>
<pre class="lang-py prettyprint-override"><code>with timeout(1) as t_o:
time.sleep(5)
print(f"escaped! timed out: {bool(t_o)}") # prints "escaped! timed out: True" after 5 seconds
</code></pre>
<p>While the fact that it can escape from an infinite loop proves that it works in principle, failing on the two actual use cases is unfortunate.</p>
<p>Some additional context and things I wonder about:</p>
<ul>
<li>Changing the <code>time.sleep</code> instruction in <code>TimedOut._task</code> to <code>subprocess.run(["sleep", str(self.seconds)])</code> doesn't change anything, so it's not a GIL-and-threads thing</li>
<li>Switching from a thread to <code>subprocess.Popen</code> would be the next step, but would make the state-information-exchange a pain, so I wanted to make this post just in case all of this also works with threads and I'm just doing something wrong</li>
<li>The fact that <code>bool(timed_out)</code> returns <code>True</code> even in the cases where the interrupt didn't work means that excepting the <code>CustomInterrupt</code> still took place, but only when the code in the <code>with</code>-block was done running. Why/what is it blocking? Can I make it non-blocking, and send/handle the signal immediately?</li>
<li>When I ctrl+c while the code is running, it crashes just fine. I don't know which conclusion to draw from this, but I did this as a sanity-check.</li>
</ul>
|
<python><multithreading><async-await><signals><interrupt>
|
2024-02-27 14:13:16
| 0
| 20,675
|
Arne
|
78,068,080
| 10,927,050
|
how to handle if root level keys if they are dynamic in pydantic
|
<pre><code>{
"type": "thirdparty",
"key": "postgres",
"host": "test",
"id": "88c8fo90"
}
</code></pre>
<p>or</p>
<pre><code>{
"type": "weburl",
"url": "https://www.google.com"
}
</code></pre>
<p>Here the keys are changing based on type, how do I add a pydantic model for this?</p>
|
<python><pydantic>
|
2024-02-27 13:57:06
| 1
| 668
|
Prajna
|
78,068,060
| 2,646,505
|
`importlib.import_module` not finding module in working directory
|
<p>I would like to import from the current working directory.
For example:</p>
<pre class="lang-py prettyprint-override"><code>import os
import pathlib
import tempfile
from importlib import import_module
with tempfile.TemporaryDirectory() as temp_dir:
tmp_dir = pathlib.Path(temp_dir)
(tmp_dir / "foo").mkdir()
(tmp_dir / "foo" / "__init__.py").write_text("def bar(): pass")
orig = os.getcwd()
os.chdir(tmp_dir)
print(import_module("foo"))
os.chdir(orig)
</code></pre>
<p>However:</p>
<pre><code>Traceback (most recent call last):
File "/Users/tdegeus/data/prog/src/conda_envfile/t.py", line 12, in <module>
print(import_module("foo"))
^^^^^^^^^^^^^^^^^^^^
File "/Users/tdegeus/miniforge3/envs/pre/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1140, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'foo'
</code></pre>
<p>(It does work if the directory exists prior to execution.)</p>
|
<python><python-importlib>
|
2024-02-27 13:53:41
| 1
| 6,043
|
Tom de Geus
|
78,067,992
| 682,661
|
reindex and group dataframe using the repeated ids
|
<p>I have the following dataframe:</p>
<pre><code> doc_id term_candidate tfidf
0 1240600489869402113 space 0.057618
1 1240600489869402113 capacity 0.057618
2 1240600489869402113 idea 0.057618
3 1340608455992815617 priority 0.064477
4 1340608455992815617 typical dates 0.064477
5 1340608455992815617 hope 0.064477
</code></pre>
<p>And I would like to group by the doc_id as follows:</p>
<pre><code>doc_id term_candidate tfidf
1240600489869402113 space 0.057618
capacity 0.057618
idea 0.057618
1340608455992815617 priority 0.064477
typical dates 0.064477
hope 0.064477
</code></pre>
<p>I tried <code>groupby</code> but no luck, any ideas?</p>
<p>I would also like to sort the tfidf values within each doc_id.</p>
|
<python><pandas><dataframe>
|
2024-02-27 13:42:13
| 1
| 907
|
Omnia
|
78,067,953
| 8,050,183
|
How to extract Sign In methods from AuthBlockingEvent dataclass firebase cloud function
|
<p>I would like to extract the associated sign-in method used, like Google or email/password, from the event.</p>
<p>Given the Google documentation for <a href="https://firebase.google.com/docs/auth/extend-with-blocking-functions?gen=2nd#getting_user_and_context_information" rel="nofollow noreferrer">Getting user and context information</a>, some fields are not available on the <code>AuthBlockingEvent</code> dataclass.</p>
<p>For example I can extract the following:</p>
<pre class="lang-py prettyprint-override"><code>from firebase_functions import https_fn, identity_fn, options
@identity_fn.before_user_signed_in()
def handle_user(
event: identity_fn.AuthBlockingEvent,
) -> identity_fn.BeforeSignInResponse | None:
email = event.data.email
uid = event.data.uid
event_id = event.event_id
event_type = event.credential
</code></pre>
<p>However, the following do not exist on the event signature.</p>
<pre class="lang-py prettyprint-override"><code>from firebase_functions import https_fn, identity_fn, options
@identity_fn.before_user_signed_in()
def handle_user(
event: identity_fn.AuthBlockingEvent,
) -> identity_fn.BeforeSignInResponse | None:
event_type = event.event_type
auth_type = event.auth_type
resource = event.resource
# etc
</code></pre>
<p>I also opened an issue on the official repository <a href="https://github.com/firebase/firebase-functions-python/issues/180" rel="nofollow noreferrer">https://github.com/firebase/firebase-functions-python/issues/180</a></p>
|
<python><firebase-authentication><google-cloud-functions><google-cloud-identity>
|
2024-02-27 13:35:21
| 1
| 1,159
|
axelmukwena
|
78,067,735
| 273,593
|
statically inspect a python test suite
|
<p>I have a (unittest-based) test suite for my python project.
Here I have my test classes, with my test methods, etc...
In (some of my) tests I call a function to initialize the scenarios of the tests. Let's call this function <code>generate_scenario(...)</code>, and it has a bunch of parameters.</p>
<p>I was wondering if I could write additional Python code that finds all the places <code>generate_scenario(...)</code> is called, along with the parameters passed, so I could check whether all the "possible" scenarios are actually generated.</p>
<p>Ideally I want an additional test module to check that.</p>
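One possible approach (a sketch, not part of the original question): the standard library's `ast` module can statically collect every call to `generate_scenario(...)` in a test file's source, together with the textual form of the arguments. The function name and the sample source below are illustrative assumptions.

```python
import ast

def find_scenario_calls(source: str, func_name: str = "generate_scenario"):
    """Statically collect calls to `func_name` in the given Python source.

    Returns a list of (line number, positional args, keyword args); the
    arguments are rendered back to source text with ast.unparse.
    """
    calls = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # handle both generate_scenario(...) and self.generate_scenario(...)
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
            if name == func_name:
                args = [ast.unparse(a) for a in node.args]
                kwargs = {kw.arg: ast.unparse(kw.value) for kw in node.keywords}
                calls.append((node.lineno, args, kwargs))
    return calls

test_source = """
class MyTest:
    def test_one(self):
        self.generate_scenario(1, mode='fast')
    def test_two(self):
        generate_scenario(2)
"""
print(find_scenario_calls(test_source))
```

A checker test module could then compare the collected argument sets against the set of scenarios it expects to be covered.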
|
<python><unit-testing><testing><python-unittest>
|
2024-02-27 12:55:55
| 1
| 1,703
|
Vito De Tullio
|
78,067,709
| 11,749,309
|
How to calculate a rolling statistic in Polars, looking back from the `end_date`?
|
<p>I would like to calculate a rolling <code>"1m"</code> statistic on a financial time series. There won't always be an equal number of rows per monthly window, unless the data happens to divide exactly into months, which is uncommon.</p>
<p>I am trying to assign a column <code>window_index</code> to keep track of the rows that are included in the calculation, as I will use the <code>.rolling().over()</code> expression to calculate the statistic on each window. I would like the <code>window_index</code> integer to be assigned starting from the latest date and working its way back.</p>
<p>Here is an image to help explain:
<a href="https://i.sstatic.net/VjMyc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VjMyc.png" alt="Stock Data" /></a></p>
<p>Currently, what I am trying to do is add the <code>window_index</code> in groups denoted by the red and blue pen. Although I would like the data to be grouped by the yellow pen marking. The dataset ends at <code>2023-02-07</code> and one month prior would be <code>2023-01-07</code>, or the closest value <code>2023-01-06</code>.</p>
<p>This is the code I am using to achieve this, but I am not sure how to get the grouping window that I desire.</p>
<pre class="lang-py prettyprint-override"><code>df_window_index = (
data.group_by_dynamic(
index_column="date", every="1m", by="symbol"
)
.agg()
.with_columns(
pl.int_range(0, pl.len()).over("symbol").alias("window_index")
)
)
data = data.join_asof(df_window_index, on="date", by="symbol").sort(
"symbol"
)
</code></pre>
<p>Using a <code>timedelta</code> instead of a string didn't seem to solve the issue.</p>
<p>On the left is the code above, and the image on the right is when I use <code>every=timedelta(days=31)</code>. It still doesn't make sense to me why Polars is pulling those dates for a 31-day timedelta.</p>
<p><a href="https://i.sstatic.net/6PU02.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6PU02.png" alt="Trying TimeDelta" /></a></p>
<p>Any help would be much appreciated, or any tips to point me in the right direction! Thanks!</p>
<h3>Data:</h3>
<pre class="lang-py prettyprint-override"><code>df = pl.read_csv(b"""
date,open,high,low,close,volume,dividends,stock_splits,symbol,window_index
2021-01-04T00:00:00.000000000,133.52,133.61,126.76,129.41,143301900,0.0,0.0,AAPL,0
2021-01-05T00:00:00.000000000,128.89,131.74,128.43,131.01,97664900,0.0,0.0,AAPL,0
2021-01-06T00:00:00.000000000,127.72,131.05,126.38,126.6,155088000,0.0,0.0,AAPL,0
2021-01-07T00:00:00.000000000,128.36,131.63,127.86,130.92,109578200,0.0,0.0,AAPL,1
2021-01-08T00:00:00.000000000,132.43,132.63,130.23,132.05,105158200,0.0,0.0,AAPL,1
2021-01-11T00:00:00.000000000,129.19,130.17,128.5,128.98,100384500,0.0,0.0,AAPL,1
2021-01-12T00:00:00.000000000,128.5,129.69,126.86,128.8,91951100,0.0,0.0,AAPL,1
2021-01-13T00:00:00.000000000,128.76,131.45,128.49,130.89,88636800,0.0,0.0,AAPL,1
2021-01-14T00:00:00.000000000,130.8,131.0,128.76,128.91,90221800,0.0,0.0,AAPL,1
2021-01-15T00:00:00.000000000,128.78,130.22,127.0,127.14,111598500,0.0,0.0,AAPL,1
2021-01-19T00:00:00.000000000,127.78,128.71,126.94,127.83,90757300,0.0,0.0,AAPL,1
2021-01-20T00:00:00.000000000,128.66,132.49,128.55,132.03,104319500,0.0,0.0,AAPL,1
2021-01-21T00:00:00.000000000,133.8,139.67,133.59,136.87,120150900,0.0,0.0,AAPL,1
2021-01-22T00:00:00.000000000,136.28,139.85,135.02,139.07,114459400,0.0,0.0,AAPL,1
2021-01-25T00:00:00.000000000,143.07,145.09,136.54,142.92,157611700,0.0,0.0,AAPL,1
2021-01-26T00:00:00.000000000,143.6,144.3,141.37,143.16,98390600,0.0,0.0,AAPL,1
2021-01-27T00:00:00.000000000,143.43,144.3,140.41,142.06,140843800,0.0,0.0,AAPL,1
2021-01-28T00:00:00.000000000,139.52,141.99,136.7,137.09,142621100,0.0,0.0,AAPL,1
2021-01-29T00:00:00.000000000,135.83,136.74,130.21,131.96,177523800,0.0,0.0,AAPL,1
2021-02-01T00:00:00.000000000,133.75,135.38,130.93,134.14,106239800,0.0,0.0,AAPL,1
2021-02-02T00:00:00.000000000,135.73,136.31,134.61,134.99,83305400,0.0,0.0,AAPL,1
2021-02-03T00:00:00.000000000,135.76,135.77,133.61,133.94,89880900,0.0,0.0,AAPL,1
2021-02-04T00:00:00.000000000,136.3,137.4,134.59,137.39,84183100,0.0,0.0,AAPL,1
2021-02-05T00:00:00.000000000,137.35,137.42,135.86,136.76,75693800,0.2,0.0,AAPL,1
""".strip(), try_parse_dates=True)
</code></pre>
|
<python><python-3.x><python-polars><date-arithmetic><rolling-computation>
|
2024-02-27 12:52:08
| 1
| 373
|
JJ Fantini
|
78,067,592
| 1,472,474
|
Is it possible to dummy/fake types with mypy for problematic imports?
|
<h2>Motivation:</h2>
<p>I have to use external python library ("SDK") which is just a directory of code (i.e. it's not a package and can't be "installed" in any meaninful way).
So the only way to use it is to add it to system path (<code>sys.path</code> in python or <code>PYTHONPATH</code> in shell) manually.
Also, because it's designed for an embedded device, it mostly doesn't work on a normal devel workstation (i.e. my laptop) - that is, some things can be imported, some can't.
Also it's not type annotated in any way.</p>
<h2>What I want</h2>
<p>So I want to have <em>some</em> typing added to my code which calls / uses classes from this library. For example:</p>
<pre><code>from bad_evil_library import BadEvilClass # <- fails
...

def myfunc1(x: BadEvilClass) -> str: # <- not checked
    ...

def myfunc2() -> BadEvilClass: # <- not checked
    ...

x = myfunc1(myfunc2()) # <- not checked
...
</code></pre>
<p>Which means even though this function is not called, pytest stops on broken import (yes, I know I can use <code>--continue-on-collection-errors</code>), and mypy complains.</p>
<p>So I was thinking - is there some way to make "a dummy type" - i.e. a type which would in fact not represent the real type, but which would still be checked in "my code". Perhaps something like this:</p>
<pre><code>BadEvilClass = DummyType()
...

def myfunc1(x: BadEvilClass) -> str: # <- checked!
    ...

def myfunc2() -> BadEvilClass: # <- checked!
    ...

x = myfunc1(myfunc2()) # <- checked!
...
</code></pre>
<p>This would be fine, but it won't be checked against the actual library if it is available (so I won't be able to detect that <code>BadEvilClass</code> has been renamed, lost a method, etc).</p>
<h2>What I don't want</h2>
<p>I don't want, don't need and am not supposed to solve the problem of importing - that's NOT the issue and it won't help me in any meaningful way.</p>
<p>I'm not looking for ways to "modify" the "SDK" to make it proper python package or how to import it more easily.</p>
<p>In other words: the problem with importing is not what I want to solve, it's only a trigger for the real problem, which is: "How to add at least some typing to untyped external classes so I can at least check if they get passed correctly".</p>
<h2>What have I tried?</h2>
<p>Currently, I created "dummy" types, which are all just <code>Any</code> and I moved all imports into one function returning a structure containing these "dummy" types, like this:</p>
<pre><code>BadEvilClass = Any
WrongCruelClass = Any

@dataclass
class MyInterface:
    bad_evil: BadEvilClass
    wrong_cruel: WrongCruelClass

def get_my_interface() -> MyInterface:
    # setup sys.path..
    from bad_evil_library import BadEvilClass
    from bad_evil_library import WrongCruelClass
    return MyInterface(BadEvilClass(), WrongCruelClass())
</code></pre>
<p>Which works (there is more than one "bad evil class"), but I have absolutely no type checking, so mistakes like this are considered fine:</p>
<pre><code>def foo(x: BadEvilClass) -> None:
    ...

my_iface = get_my_interface()
foo(my_iface) # <- I'm passing MyInterface, not BadEvilClass!
              # and it passes, since BadEvilClass is really "Any"
</code></pre>
<p>This is basically the main reason for this question - I want to be able to check if I'm passing the correct types.</p>
<h2>Simplified question:</h2>
<p>Is there a way to "partially" type something which sometimes isn't available, so that if it is, I get the full advantage of type checking, and if it isn't, I at least get checks in my own code?</p>
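One common pattern worth considering (a sketch, not a definitive answer): put a hand-written stub class behind `typing.TYPE_CHECKING`, so mypy checks calls against the stub while the runtime either imports the real class or falls back to `Any`. The class and method names are the hypothetical ones from the question, and - as the question anticipates - the stub is not verified against the real library.

```python
from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
    # Seen only by the type checker: a stub mirroring the parts of the SDK we use.
    class BadEvilClass:
        def do_something(self) -> str: ...
else:
    try:
        from bad_evil_library import BadEvilClass  # real class when the SDK is on sys.path
    except ImportError:
        BadEvilClass = Any  # runtime fallback on the workstation

def myfunc1(x: "BadEvilClass") -> str:
    return "checked"

# mypy now rejects e.g. myfunc1(get_my_interface()), while the module still
# imports and runs on a machine where bad_evil_library is unavailable.
```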
|
<python><python-import><python-typing><mypy>
|
2024-02-27 12:33:12
| 0
| 5,587
|
Jan Spurny
|
78,067,569
| 9,189,389
|
Python - proper logging import to classes
|
<p>I struggle with proper imports of <code>logging</code> into classes. Generally, I would like to avoid showing debug messages in the console and saving several copies of the same message in the log file. However, when running in a Jupyter notebook via imported classes, the second class in the sequence starts to show debug messages in the console during initialization. Also, without <code>if not logger.handlers:</code> there is a doubling of saved messages every time. The classes import the <code>get_logger</code> function during initialization. What should I do differently? Thx</p>
<pre><code>import logging
from logging.handlers import TimedRotatingFileHandler

_log_format = f"%(asctime)s - [%(levelname)s] - %(name)s - " \
              f"(%(filename)s).%(funcName)s(%(lineno)d) - %(message)s"

def get_timed_rotating_file_handler():
    file_handler = TimedRotatingFileHandler("zon_logs.log", when='D', interval=30,
                                            backupCount=15, delay=False, utc=False)
    file_handler.setLevel(logging.DEBUG)  # Log DEBUG and above to file
    file_handler.setFormatter(logging.Formatter(_log_format))
    return file_handler

def get_stream_handler():
    stream_handler = logging.StreamHandler()
    stream_handler.setLevel(logging.INFO)  # Print INFO and above to console
    stream_handler.setFormatter(logging.Formatter(_log_format))
    return stream_handler

def get_logger(name="Admin"):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    if not logger.handlers:
        logger.addHandler(get_timed_rotating_file_handler())
        logger.addHandler(get_stream_handler())
    return logger
</code></pre>
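For reference, a minimal stdlib illustration (added here, not part of the original question) of why the `if not logger.handlers:` guard matters: `logging.getLogger` returns the same logger object for a given name, so an unguarded factory stacks one more handler per call, and every record is then emitted once per handler. A `NullHandler` stands in for the file and stream handlers above.

```python
import logging

def get_logger_unguarded(name="demo"):
    # No `if not logger.handlers:` guard here, on purpose.
    logger = logging.getLogger(name)
    logger.addHandler(logging.NullHandler())  # stands in for the real handlers
    return logger

# Three imports/initializations -> three handlers on the SAME logger object,
# which is exactly what doubles (or triples) the saved messages.
for _ in range(3):
    get_logger_unguarded()
print(len(logging.getLogger("demo").handlers))  # 3
```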
|
<python><logging><python-logging>
|
2024-02-27 12:28:51
| 0
| 464
|
Luckasino
|
78,067,552
| 710,955
|
Error when benchmarking French datasets constituting the MTEB French leaderboard
|
<p>I want to evaluate some French embedding models using the MTEB Semantic Text Similarity (STS) task.
To do this, I took inspiration from this code <a href="https://github.com/embeddings-benchmark/mteb/blob/main/scripts/run_mteb_french.py" rel="nofollow noreferrer">run_mteb_french.py</a></p>
<pre><code>import logging
from sentence_transformers import SentenceTransformer
from mteb import MTEB

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("main")

model_name = "dangvantuan/sentence-camembert-large"
model = SentenceTransformer(model_name)

TASK_LIST_STS = [
    "SummEvalFr",
    "STSBenchmarkMultilingualSTS",
    "STS22",
    "SICKFr"
]

for task in TASK_LIST_STS:
    logger.info(f"Running task: {task}")
    evaluation = MTEB(tasks=[task], task_langs=["fr"])
    evaluation.run(model_name, output_folder=f"fr_results/{model_name}")
</code></pre>
<p>But I got this error:</p>
<blockquote>
<p>Summarization</p>
<pre><code>- SummEvalFr, p2p
</code></pre>
<p>ERROR:mteb.evaluation.MTEB:Error while evaluating SummEvalFr:
'batch_size' is an invalid keyword argument for encode()</p>
<hr />
<p>TypeError Traceback (most recent call
last)</p>
<p> in <cell line: 17>()
18 logger.info(f"Running task: {task}")
19 evaluation = MTEB(tasks=[task], task_langs=["fr"])
---> 20 evaluation.run(model_name, output_folder=f"fr_results/{model_name}")</p>
<p>4 frames</p>
<p>/usr/local/lib/python3.10/dist-packages/mteb/evaluation/evaluators/SummarizationEvaluator.py
in <strong>call</strong>(self, model)
51
52 logger.info(f"Encoding {sum(human_lens)} human summaries...")
---> 53 embs_human_summaries_all = model.encode(
54 [summary for human_summaries in self.human_summaries for summary in human_summaries],
55 batch_size=self.batch_size,</p>
<p>TypeError: 'batch_size' is an invalid keyword argument for encode()</p>
</blockquote>
<p>What should I do ?</p>
|
<python><huggingface-transformers><embedding>
|
2024-02-27 12:26:39
| 1
| 5,809
|
LeMoussel
|
78,067,449
| 243,467
|
Unable to forward HTTP to HTTPS with Gunicorn on Heroku
|
<p>I am running a Python app using Dash library on Heroku and trying to redirect any HTTP request to HTTPS, but I'm unable to get the setting right.</p>
<p>In my Proc file I have:</p>
<pre><code>web: gunicorn --config=gunicorn.conf.py app:server
</code></pre>
<p>And my config file:</p>
<pre><code>forwarded_allow_ips = '*'
secure_scheme_headers = { 'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
</code></pre>
<p>My expectation is that HTTP requests get redirected or fail, but currently they are still served normally over HTTP.</p>
|
<python><heroku><plotly-dash>
|
2024-02-27 12:10:58
| 0
| 659
|
benbyford
|
78,067,336
| 4,186,430
|
rebin an array in python to have at least N counts in each entry
|
<p>I have an array as:</p>
<pre><code>A = [1,8,2,6,4,8,1,0,1,6,7,3,1,4,9,1,2,1,2,1,1,2]
</code></pre>
<p>and I'd like to rebin/group it into a smaller size array, which has at least a value of <code>10</code> in each entry, i.e.:</p>
<pre><code>A_reb = [[1,8,2],[6,4],[8,1,0,1],[6,7],[3,1,4,9],[1,2,1,2,1,1,2]]
A_reb = [11, 10, 10, 13, 17, 10]
</code></pre>
<p>Is there an efficient way to do this in Python? Thanks in advance.</p>
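For what it's worth, a plain-Python sketch of the greedy grouping (assuming a trailing remainder that never reaches 10 should be merged into the previous group - the question doesn't specify):

```python
def rebin_min_count(values, min_count=10):
    """Greedily group consecutive values so each group's sum is >= min_count.

    A trailing remainder below min_count is merged into the previous group
    (an assumption; the question does not specify this case).
    """
    groups, current, current_sum = [], [], 0
    for v in values:
        current.append(v)
        current_sum += v
        if current_sum >= min_count:
            groups.append(current)
            current, current_sum = [], 0
    if current:  # leftover that never reached min_count
        if groups:
            groups[-1].extend(current)
        else:
            groups.append(current)
    return groups

A = [1, 8, 2, 6, 4, 8, 1, 0, 1, 6, 7, 3, 1, 4, 9, 1, 2, 1, 2, 1, 1, 2]
A_groups = rebin_min_count(A)
print(A_groups)                    # [[1, 8, 2], [6, 4], ...]
print([sum(g) for g in A_groups])  # [11, 10, 10, 13, 17, 10]
```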
|
<python><arrays><bin><group>
|
2024-02-27 11:48:48
| 1
| 641
|
urgeo
|
78,067,054
| 13,757,692
|
How to always place axis unit in specific tick label position in Matplotlib?
|
<p>There is a specific way of including units in graphs that I like:
<a href="https://i.sstatic.net/541rp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/541rp.png" alt="Image from wikipedia" /></a></p>
<p>The image is from Wikipedia, from <a href="https://de.wikipedia.org/wiki/DIN_461" rel="nofollow noreferrer">a page about DIN 461</a>.</p>
<p>The unit is always placed in the second to last axis tick. I want to recreate this kind of behaviour using Matplotlib and I have been successful by modifying the <code>xlabels</code> directly using <code>plt.gca().set_xticklabels()</code>. The use of <code>set_xticklabels</code> is however discouraged, because it stops displaying the proper values once you zoom/pan, do whatever with the plot. Instead, I would like to have a solution, where the second to last tick label is always changed to be a unit.</p>
<p>I tried writing a custom tick label formatter and placing the unit depending on the position. This has not yet worked, due to the varying number of ticks that might be shown depending on the zoom, aspect ratio, etc., as well as the fact that pos is often None, for some reason.</p>
<pre><code>from matplotlib.ticker import FuncFormatter

@FuncFormatter
def my_formatter(x, pos):
    return f"{x}" if pos != 6 else 'Unit'
</code></pre>
<p>Alternatively, I used ax.set_xticks with some formatting function Formatter: ax.set_xticks(ax.get_xticks(), Formatter(ax.get_xticks())), but this shows more ticks than I want and also does not change the ticks when zooming, see below:</p>
<p>Example plot:
<a href="https://i.sstatic.net/g5CLAm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g5CLAm.png" alt="enter image description here" /></a></p>
<p>Got xticks, replaced the second to last label with the unit and set xticks:</p>
<pre><code>xticks = ax.get_xticks()
xtick_labels = list(xticks)
xtick_labels[-2] = "mA"
ax.set_xticks(xticks, labels=xtick_labels)
</code></pre>
<p>New ticks appear in the corners and x limits change:
<a href="https://i.sstatic.net/wB4izm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wB4izm.png" alt="![enter image description here" /></a></p>
<p>Zooming in does not update the ticks anymore and the unit label disappears
<a href="https://i.sstatic.net/rH9bfm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rH9bfm.png" alt="![enter image description here" /></a></p>
|
<python><matplotlib>
|
2024-02-27 11:05:59
| 2
| 466
|
Alex V.
|
78,067,053
| 8,792,159
|
How to configure pyproject.toml for a pip package to enable one Python script to execute another via command line?
|
<p>I would like to create a pip package where one python script can execute the other script over the command line.</p>
<p>Take this as an example:</p>
<pre><code>my-package/
└── my-package
    ├── script.py
    └── tests.py
</code></pre>
<p>with <code>script.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
def run():
# run the other script
pytest.main(["tests.py"])
# then do the actual stuff
print("I did something!")
</code></pre>
<p>and <code>tests.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
class Test:
def test_stuff(self):
if not 1 == 2:
pytest.fail('1 does not equal 2')
</code></pre>
<p>How can I configure my <code>pyproject.toml</code> file so that the <code>run()</code> function inside <code>script.py</code> always "knows" where to find <code>tests.py</code>, so it can execute it via <a href="https://docs.pytest.org/en/7.1.x/how-to/usage.html#calling-pytest-from-python-code" rel="nofollow noreferrer"><code>pytest.main()</code></a>? Right now I have the problem that the <code>run()</code> function expects <code>tests.py</code> to always be in the directory that I am currently in. Of course that's not the case. I need a way to tell the <code>run</code> function that it will find <code>tests.py</code> in the same directory where it itself is located.</p>
<p><strong>Note:</strong> I saw a few suggestions but they always refer to specific functions inside scripts that are made available as command-line options. But in my case one script should call the whole other script not just one single function inside of it.</p>
<p>Possibly related:</p>
<p><a href="https://stackoverflow.com/questions/59286983/how-to-run-a-script-using-pyproject-toml-settings-and-poetry">how to run a script using pyproject.toml settings and poetry?</a></p>
|
<python><pip><command><pytest><pyproject.toml>
|
2024-02-27 11:05:54
| 0
| 1,317
|
Johannes Wiesner
|
78,066,983
| 7,724,187
|
Python Struct Assign Bitfield with typesafety
|
<p>Consider I have the following struct:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
class MyStruct(ctypes.BigEndianStructure):
# ! These typehints are added so that the IDE recognizes the fields of the Structure
field_size1: ctypes.c_uint8
field_size7: ctypes.c_uint8
field_size8: ctypes.c_uint8
_pack_ = 1
_fields_ = [
("field_size1", ctypes.c_uint8, 1),
("field_size7", ctypes.c_uint8, 7),
("field_size8", ctypes.c_uint8, 8),
]
</code></pre>
<p>I am able to assign the bits in this struct from Python types and get the bytes I expect.</p>
<pre class="lang-py prettyprint-override"><code>s = MyStruct()
s.field_size1 = 1
s.field_size7 = 2
s.field_size8 = 3
</code></pre>
<p>I get the mypy linting error:</p>
<pre><code>Incompatible types in assignment (expression has type "int", variable has type "c_uint8") Mypy(assignment)
</code></pre>
<p>However, if I try to use the <code>ctype</code>, the program crashes (i.e. the message below isn't printed):</p>
<pre class="lang-py prettyprint-override"><code>s = MyStruct()
s.field_size1 = ctypes.c_uint8(1)
print("completed")
</code></pre>
<p>Which is what I believe I would expect. Trying to shove 8 bits into a 1-bit field wouldn't make sense.</p>
<p>Is there a correct way to use the <code>ctypes</code> types when assigning bitfields, to keep both Python and mypy happy without 'type ignores'?</p>
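One workaround to consider (a sketch based on my reading of the runtime behaviour, not an official recommendation): ctypes bitfields accept and return plain Python ints at runtime, so hinting the attributes as `int` - rather than `c_uint8` - matches what actually happens and keeps mypy quiet. A plain native-order `Structure` is used here for brevity.

```python
import ctypes

class MyStruct(ctypes.Structure):
    # ctypes converts bitfield values to/from plain ints on attribute access,
    # so `int` is the hint that matches runtime behaviour.
    field_size1: int
    field_size7: int
    field_size8: int

    _pack_ = 1
    _fields_ = [
        ("field_size1", ctypes.c_uint8, 1),
        ("field_size7", ctypes.c_uint8, 7),
        ("field_size8", ctypes.c_uint8, 8),
    ]

s = MyStruct()
s.field_size1 = 1   # plain ints: accepted at runtime and by mypy
s.field_size7 = 2
s.field_size8 = 3
print(s.field_size1, type(s.field_size1))  # 1 <class 'int'>
```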
|
<python><ctypes>
|
2024-02-27 10:55:11
| 1
| 621
|
Adehad
|
78,066,450
| 3,809,691
|
Create a binary pytorch tensor with n% ones
|
<p>I want to create a binary tensor containing only 0s and 1s, where n% of the entries are 1s.</p>
<p>I am using this code, but it takes 7+ seconds and uses 19,926 MB of memory.</p>
<p>Is there any faster way to do that?</p>
<pre><code>import torch
import time

start_time = time.time()

def create_random_binary_tensor(shape, percentage, device):
    size = shape[0] * shape[1]
    num_ones = int(size * (percentage / 100.0))
    # Create a tensor filled with zeros
    tensor = torch.zeros(size, dtype=torch.float32, device=device)
    # Set random indices to ones until the desired number of ones is reached
    indices = torch.randperm(size, device=device)[:num_ones]
    tensor[indices] = 1
    # Reshape the tensor to the desired shape
    tensor = tensor.view(shape)
    return tensor

# Set the shape and percentage of ones
shape = (19000, 19000)
percentage = 0.5

# Check if CUDA is available and use it if possible
device = torch.device("cuda:2" if torch.cuda.is_available() else "cpu")

# Create the random binary tensor
tensor = create_random_binary_tensor(shape, percentage, device)

# Print the elapsed time
execution_time = time.time() - start_time
print("Execution Time: {:.4f} seconds".format(execution_time))
</code></pre>
|
<python><pytorch><pytorch-geometric>
|
2024-02-27 09:33:32
| 1
| 3,065
|
Adnan Ali
|
78,066,329
| 13,919,925
|
from okta auth0 how can i increase the azure ad token validity?
|
<p>I'm using Okta as the identity provider for user management and have integrated Azure AD with Auth0 for authentication. However, when i retrieve user information after authentication using this API of auth0</p>
<pre><code>url = 'https://' + AUTH0_DOMAIN + '/api/v2/users/' + user_id
</code></pre>
<p>I notice that the provider's access token remains the same each time I call the API.
Is there a way to get a different token? If not, how can I increase the expiry time of the provider access token,
since my Auth0 expiry time is 72 hours and Azure AD's is 1 hour.</p>
<p>I have tried this, but it didn't help:
<a href="https://learn.microsoft.com/en-us/entra/identity-platform/configure-token-lifetimes#create-a-policy-and-assign-it-to-a-service-principal" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/entra/identity-platform/configure-token-lifetimes#create-a-policy-and-assign-it-to-a-service-principal</a></p>
|
<python><azure-active-directory><auth0><okta>
|
2024-02-27 09:14:56
| 1
| 302
|
sandeepsinghnegi
|
78,066,319
| 2,826,018
|
VS Code in WSL2: Python extension doesn't find references, definitions or dependencies in code
|
<p>I am running a Python-Project in WSL2 using Visual Studio Code.
The extensions I have installed are:</p>
<ul>
<li>isort</li>
<li>Pylance</li>
<li>Python</li>
<li>Python Debugger</li>
</ul>
<p>all of them are specifically installed and enabled on WSL. I open VS-Code by starting a WSL shell, cd-ing to my directory and then typing <code>code .</code>. I don't understand why nothing seems to be working.</p>
<p>Code completion, complete syntax highlighting, jumping to function or variable definitions, finding references, etc. all of them don't work.
Syntax is partially highlighted but not completely.</p>
<p>I am using Python 3.10.2 as an interpreter installed in <code>/usr/local/bin</code>.</p>
<p>If I open my project on Windows instead, everything seems to be working fine.</p>
<p>Anyone who has similar issues?</p>
|
<python><visual-studio-code><windows-subsystem-for-linux>
|
2024-02-27 09:13:06
| 1
| 1,724
|
binaryBigInt
|
78,066,260
| 1,652,219
|
In python how do you get the "last" directory in a path string?
|
<p>I am working on a remote file system, where I don't have direct access to the files/directories, so I cannot check if a string represents a file or a directory.</p>
<p>I have the following paths I need to handle, where I have to get a hold of the "partition column":</p>
<pre><code>path1 = "/path/to/2012-01-01/files/2014-01-31/la.parquet"
path2 = "/path/to/2012-01-01/files/2014-01-31/"
path3 = "/path/to/2012-01-01/files/2014-01-31"
</code></pre>
<p>In all cases, the deepest path (partition column) is "2014-01-31". Is there a way to consistently get this component in a single line of code, or do I have to do all sorts of checks of file names?</p>
<p>I was hoping to do something like:</p>
<pre><code>import os
os.path.dirname(path).split("/")[-1]
</code></pre>
<p>But this doesn't work for <strong>path3</strong>. Does one need to have access to the filesystem to correctly identify the deepest directory, or is there some easy way?</p>
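A heuristic sketch (it cannot be fully reliable without filesystem access, as the question notes): strip any trailing slash, then treat a last component containing a dot as a file name and go one level up. Directory names containing dots would break this assumption.

```python
import os

def deepest_dir(path: str) -> str:
    """Best-effort 'last directory' of a path string.

    Heuristic: a final component with a '.' in it (like 'la.parquet')
    is assumed to be a file name, so we go one level up for it.
    """
    path = path.rstrip("/")
    last = os.path.basename(path)
    if "." in last:  # looks like a file name -> use its parent directory
        last = os.path.basename(os.path.dirname(path))
    return last

for p in ["/path/to/2012-01-01/files/2014-01-31/la.parquet",
          "/path/to/2012-01-01/files/2014-01-31/",
          "/path/to/2012-01-01/files/2014-01-31"]:
    print(deepest_dir(p))  # 2014-01-31 in all three cases
```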
|
<python><path><subdirectory>
|
2024-02-27 09:02:44
| 5
| 3,944
|
Esben Eickhardt
|
78,066,199
| 11,926,970
|
AWS Cost and Usage Report difference from AWS Billing Dashboard
|
<p>I have activated the AWS service called AWS Cost and Usage Report (CUR), which is generated on an <code>Hourly</code> basis.</p>
<p>The format of the CUR is in <code>parquet</code> format.</p>
<p>I have a Python code that takes the <code>parquet</code> and reads it.</p>
<p>The issue is that there is a difference between the total cost calculated from the <code>parquet</code> file and the cost shown in the <code>AWS Billing Dashboard</code>.</p>
<p>The total cost difference is not small but very large; for example, <code>EC2</code> in the Billing Dashboard shows a cost of around <code>$12,000</code>, but the cost calculated from the <code>CUR</code> is roughly <code>$4500</code>.</p>
<p>Here is my Python code:</p>
<pre><code>import sys
import time
from fastparquet import ParquetFile
import json
import datetime
from collections import defaultdict

tags_key = 'resource_tags_'

def read_parquet_file(parquet_file_path):
    start_time_rf = time.time()
    dataset = ParquetFile(parquet_file_path)
    # Convert the Parquet table to a Pandas DataFrame
    df = dataset.to_pandas()
    resource_tags_columns = [col for col in df.columns if col.startswith(tags_key)]
    # print(resource_tags_columns)
    relevant_columns = ['identity_line_item_id', 'product_sku', 'line_item_line_item_type',
                        'line_item_resource_id', 'line_item_product_code', 'product_region',
                        'bill_billing_period_start_date', 'bill_billing_period_end_date',
                        'line_item_usage_start_date', 'line_item_usage_end_date',
                        'product_usagetype'] + resource_tags_columns + ['line_item_unblended_cost']
    df = df[relevant_columns]
    # Perform the aggregation directly without grouping if possible
    grouped_data = df.groupby(relevant_columns[:-1])['line_item_unblended_cost'].sum()
    # Sort the grouped data by line_item_product_code
    grouped_data = grouped_data.sort_index(level='line_item_product_code')
    # Record the end time
    end_time_rf = time.time()
    # Calculate the elapsed time
    elapsed_time = end_time_rf - start_time_rf
    # print(f'Time taken to read the Parquet file: {elapsed_time} seconds')
    return grouped_data, resource_tags_columns

def parse_data(grouped_data, resource_tags_columns):
    start_time_rf = time.time()
    # Print the total cost for each 'identity_line_item_id'
    final_sum = 0
    tax = 0
    usage = 0
    # grouping data based in username
    final_grouped_data = defaultdict(list)

    def convert_cost(cost):
        try:
            # Attempt direct string formatting for potential efficiency
            return f"{cost:.{len(str(cost).split('.')[-1])}f}"
        except ValueError:
            # Handle cases where direct formatting fails (e.g., scientific notation)
            exponent_str = str(cost)
            base, power = exponent_str.split('e')
            decimal = len(base.split('.')[1]) if len(base.split('.')) > 1 else 0
            power = abs(int(power))
            return f"{cost:.{power + decimal}f}"

    for (identity_line_item_id, product_sku, line_item_line_item_type, line_item_resource_id, line_item_product_code,
         product_region, bill_billing_period_start_date,
         bill_billing_period_end_date, line_item_usage_start_date, line_item_usage_end_date,
         product_usagetype, *resource_tags), total_cost in grouped_data.items():
        final_sum += total_cost
        if (line_item_line_item_type == 'Tax'):
            tax += total_cost
        elif (line_item_line_item_type == 'Usage'):
            usage += total_cost
        entry = {
            'identityLineItemId': identity_line_item_id,
            'line_item_resource_id': line_item_resource_id,
            'totalCost': convert_cost(total_cost),
            'product_sku': product_sku,
            'region': product_region,
            'line_item_line_item_type': line_item_line_item_type,
            'line_item_product_code': line_item_product_code,
            'product_usagetype': product_usagetype,
            'start_date': datetime.datetime.strftime(bill_billing_period_start_date, '%Y-%m-%d %H:%M:%S'),
            'end_date': datetime.datetime.strftime(bill_billing_period_end_date, '%Y-%m-%d %H:%M:%S'),
            'usage_start_date': datetime.datetime.strftime(line_item_usage_start_date, '%Y-%m-%d %H:%M:%S'),
            'usage_end_date': datetime.datetime.strftime(line_item_usage_end_date, '%Y-%m-%d %H:%M:%S'),
        }
        final_grouped_data[line_item_product_code].append(entry)
        for col, tag_key in enumerate(resource_tags_columns):
            if tag_value := resource_tags[col]:
                entry[tag_key] = tag_value

    def groupByRegion(data):
        group_by_region = defaultdict(list)  # Use defaultdict for efficient handling of new keys
        for entry in data:
            if region := entry.get('region'):
                group_by_region[region].append(entry)
        return group_by_region

    def groupByIdentityLineItemId(data):
        obj = {}
        for entry in data:
            # Extract values
            identity_id = entry["identityLineItemId"]
            total_cost = float(entry["totalCost"])
            # Add to existing entry or create a new one
            if identity_id in obj:
                obj[identity_id]["total_cost"] = convert_cost(float(obj[identity_id]["total_cost"]) + total_cost)
            else:
                obj[identity_id] = {
                    "total_cost": convert_cost(total_cost),
                    "identityLineItemId": entry['identityLineItemId'],
                    "line_item_resource_id": entry['line_item_resource_id'],
                    "region": entry['region'],
                    "line_item_product_code": entry['line_item_product_code'],
                    "product_usagetype": entry['product_usagetype']
                }
            for key, value in entry.items():
                if key.startswith(tags_key):
                    obj[identity_id][key] = value
        return obj

    def groupByProductUsageType(data, type):
        group_by_type = defaultdict(list)  # Use defaultdict for efficient handling of new keys
        for entry in data:
            if type == 'AmazonEC2':
                if 'EBS' in entry['product_usagetype']:
                    group_by_type['EBS'].append(entry)
                elif 'AmazonEC2Stopped' in entry['product_usagetype']:
                    group_by_type['AmazonEC2Stopped'].append(entry)
                else:
                    group_by_type['AmazonEC2Running'].append(entry)
                # group_by_type['EBS' if 'EBS' in entry['product_usagetype'] else 'AmazonEC2Running'].append(entry)
            else:
                group_by_type[entry['product_usagetype']].append(entry)
        return getCostByproductUsageType(group_by_type)

    def getCostByRegion(data, type):
        group_by_region = defaultdict(list)  # Use defaultdict for efficient handling
        for region, entries in data.items():
            region_wise_sum = sum(float(entry['totalCost']) for entry in entries)  # Sum total cost using generator expression
            details = groupByProductUsageType(entries, type)  # Group by product usage type
            usage_sum = 0
            for entry in entries:
                if entry['line_item_line_item_type'] == 'Usage':
                    usage_sum += float(entry['totalCost'])
            group_by_region[region].append({'total_cost': convert_cost(region_wise_sum),
                                            'usage_cost': convert_cost(usage_sum), 'details': details})
        return group_by_region

    def getCostByproductUsageType(data):
        group_by_type = defaultdict(list)
        for key, value in data.items():
            regionWiseSum = sum(float(entry['totalCost']) for entry in value)  # Efficient summation
            # Group and convert cost within the loop for clarity
            group_by_type[key].append({
                f'{key}_total_cost': convert_cost(regionWiseSum),
                f'{key}_details': groupByIdentityLineItemId(value)
            })
        return group_by_type

    def get_cost_each_line_item_product(grouped_data):
        cost_details = {
            'details': {},
            'cost_details': {
                'total_cost': f'{final_sum:.2f}',
                'tax_cost': f'{tax:.2f}',
                'usage_cost': f'{usage:.2f}'
            }
        }
        for key, data in grouped_data.items():
            total_cost = sum(float(entry['totalCost']) for entry in data)
            tax_service = sum(float(entry['totalCost']) for entry in data if entry['line_item_line_item_type'] == 'Tax')
            usage_service = sum(float(entry['totalCost']) for entry in data if entry['line_item_line_item_type'] == 'Usage')
            region_wise_details = groupByRegion(data)
            cost_details['details'][key] = {
                'total_cost': convert_cost(total_cost),
                'usage_cost': convert_cost(usage_service),
                'tax_cost': convert_cost(tax_service),
                'region_wise_details': getCostByRegion(region_wise_details, key)
            }
        formatted_json = json.dumps(cost_details, indent=2, ensure_ascii=False)
        # with open('final_code_cpy.json', 'w') as json_file:
        #     json.dump(cost_details, json_file)
        return formatted_json

    data = get_cost_each_line_item_product(final_grouped_data)
    print(data)

    # Record the end time
    end_time_rf = time.time()
    elapsed_time = end_time_rf - start_time_rf
    print(f'Time taken to scan the Parquet file: {elapsed_time} seconds')

if __name__ == '__main__':
    if len(sys.argv) < 2:
        sys.exit(1)
    parquet_file_path = sys.argv[1]
    grouped_data, resource_tags_columns = read_parquet_file(parquet_file_path)
    parse_data(grouped_data, resource_tags_columns)
</code></pre>
<blockquote>
<p>Note: the <code>parquet</code> file that I am using is the latest <code>parquet</code> file.</p>
</blockquote>
|
<python><amazon-web-services><parquet><aws-cost-explorer>
|
2024-02-27 08:51:17
| 0
| 2,700
|
Not A Robot
|
78,066,157
| 3,111,290
|
ValueError: setting an array element with a sequence in Python numpy array
|
<p>I'm currently working on a project to build an AI chatbot using Python, and I'm encountering an error that I can't seem to resolve. When attempting to convert my training data into a numpy array, I'm getting the following error:</p>
<pre><code>ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (29, 2) + inhomogeneous part.
</code></pre>
<p>Here's the relevant section of my code:</p>
<pre><code>training = []
output_empty = [0] * len(classes)
for document in documents:
bag = []
word_patterns = document[0]
word_patterns = [lematizer.lemmatize(word.lower()) for word in word_patterns]
for word in words:
bag.append(1) if word in word_patterns else bag.append(0)
output_row = list(output_empty)
output_row[classes.index(document[1])] = 1
training.append([bag,output_row])
random.shuffle(training)
training = np.array(training)
train_x = list(training[:, 0])
train_y = list(training[:, 1])
</code></pre>
<p>My training variable is a list of lists, where each inner list contains a bag of words and an output row. I've checked the dimensions of the training list using np.shape(training), and it returns (29,), indicating that it's a 1-dimensional array. However, when I attempt to convert it into a numpy array, I encounter the aforementioned error.</p>
<p>I've double-checked the contents of my training list, and it seems to be formatted correctly. Each inner list contains a bag of words (a list of integers) and an output row (a list of integers), both of which have consistent lengths across all entries.</p>
<p>I'm not sure why I'm encountering this error or how to resolve it. Any insights or suggestions would be greatly appreciated. Thank you!</p>
<p>Code Files :<br>
<a href="https://github.com/GH0STH4CKER/AI_Chatbot/blob/main/training.py" rel="nofollow noreferrer">https://github.com/GH0STH4CKER/AI_Chatbot/blob/main/training.py</a>
<a href="https://github.com/GH0STH4CKER/AI_Chatbot/blob/main/intents.json" rel="nofollow noreferrer">https://github.com/GH0STH4CKER/AI_Chatbot/blob/main/intents.json</a></p>
|
<python><arrays><numpy><tensorflow><valueerror>
|
2024-02-27 08:43:28
| 1
| 643
|
ghost21blade
|
78,065,964
| 9,394,465
|
Is it possible to load limited columns with pandas for processing but output with entire columns though
|
<p>I have been working with pandas to process very large CSV files lately, and the read_csv(..) itself is taking more time than the actual processing. I am not able to figure out whether it's possible to read only a few columns from the CSV for the task I am doing, so that my read_csv would be faster.
My task involves reading two CSVs and concatenating them by doing the following:</p>
<ol>
<li>Take one of the column from both and change them to something which is defined in a method <code>def resolver(series_1, series_2)</code>. This is done by doing the <code>csv2_df[that_col] = csv1_df[that_col].combine(csv2_df[that_col], resolver)</code>.</li>
<li>Concatenate csv1 with csv2 preferring csv2 for the intersecting rows: <code>result_df = pd.concat([csv1_df[~csv1_df.index.isin(csv2_df.index)], csv2_df])</code></li>
<li>Remove rows that are marked for deletion in the 1st step: <code>result_df = result_df[result_df[that_col] != '$Delete']</code></li>
<li>result_df.to_csv(...)</li>
</ol>
<ul>
<li>Note: Before these steps, my read_csv(..) for those two files resembles the following: <code>pd.read_csv(input_file, sep='\x1D', engine='python', skiprows=1, skipfooter=1, header=0, dtype=str, keep_default_na=False)</code></li>
</ul>
<p><strong>EDIT:</strong>
Reproducible code:</p>
<pre><code>csv2_df = pd.read_csv(csv2, sep='\x1D', engine='python', skiprows=1, skipfooter=1, header=0, dtype=str, keep_default_na=False)
csv1_df = pd.read_csv(csv1, sep='\x1D', engine='python', skiprows=1, skipfooter=1, header=0, dtype=str, keep_default_na=False)
csv2_df.set_index(primary_idx_list, inplace=True, verify_integrity=verify_integrity)
csv1_df.set_index(primary_idx_list, inplace=True, verify_integrity=verify_integrity)
def resolver(v1, v2):
return '$Delete' if v1 == 'A' and v2 == 'B' else (v1 if v1 == 'A' and v2 == 'C' else (v2 if pd.notna(v2) else v1))
csv2_df[that_col] = csv1_df[that_col].combine(csv2_df[that_col], resolver)
result_df = pd.concat([csv1_df[~csv1_df.index.isin(csv2_df.index)], csv2_df])
result_df = result_df[result_df[that_col] != '$Delete']
result_df.to_csv(...)
</code></pre>
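For the read side specifically, `read_csv` has a `usecols` parameter — a small self-contained sketch with a toy in-memory file (the `\x1d` separator mirrors the question; column names are made up):

```python
import io
import pandas as pd

csv_text = "a\x1db\x1dc\n1\x1dx\x1dA\n2\x1dy\x1dB\n"  # toy stand-in for the real files

# usecols restricts parsing to the named columns, which can speed up
# read_csv considerably -- but the resulting frame then only contains
# those columns, so it helps for intermediate computations and not when
# the full set of columns must survive into the output CSV
df = pd.read_csv(io.StringIO(csv_text), sep='\x1d', engine='python',
                 dtype=str, usecols=['a', 'c'])
```

Since the final `to_csv` here needs every column, `usecols` alone cannot produce the complete output; it would only speed up a first pass that decides which rows to keep.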
|
<python><pandas>
|
2024-02-27 08:06:06
| 1
| 513
|
SpaceyBot
|
78,065,466
| 15,412,256
|
Polars groupby map UDF using multiple columns as parameter
|
<p>I have a numba UDF:</p>
<pre class="lang-py prettyprint-override"><code>@numba.jit(nopython=True)
def generate_sample_numba(cumulative_dollar_volume: np.ndarray, dollar_tau: Union[int, np.ndarray]) -> np.ndarray:
""" Generate the sample using numba for speed.
"""
covered_dollar_volume = 0
bar_index = 0
bar_index_array = np.zeros_like(cumulative_dollar_volume, dtype=np.uint32)
if isinstance(dollar_tau, int):
dollar_tau = np.array([dollar_tau] * len(cumulative_dollar_volume))
for i in range(len(cumulative_dollar_volume)):
bar_index_array[i] = bar_index
if cumulative_dollar_volume[i] >= covered_dollar_volume + dollar_tau[i]:
bar_index += 1
covered_dollar_volume = cumulative_dollar_volume[i]
return bar_index_array
</code></pre>
<p>The UDF takes two inputs:</p>
<ol>
<li>The <code>cumulative_dollar_volume</code> numpy array, which is essentially the groups in <code>group_by</code></li>
<li>The <code>dollar_tau</code> threshold, which is either an <em>integer</em> or <em>numpy array</em>.</li>
</ol>
<p>In this question, I am particularly interested in the numpy array configuration. <a href="https://stackoverflow.com/questions/76683540/how-to-vectorize-complex-cumulative-aggregation-problem">This post</a> explains the idea behind the <code>generate_sample_numba</code> function well.</p>
<p>I want to achieve the same results from Pandas by <strong>using polars</strong>:</p>
<pre class="lang-py prettyprint-override"><code>data["bar_index"] = data.groupby(["ticker", "date"]).apply(lambda x: generate_sample_numba(x["cumulative_dollar_volume"].values, x["dollar_tau"].values)).explode().values.astype(int)
</code></pre>
<p>Apparently, the best option in Polars is <code>group_by().agg(pl.col().map_batches())</code>:</p>
<pre class="lang-py prettyprint-override"><code>cqt_sample = cqt_sample.with_columns(
(pl.col("price") * pl.col("size")).alias("dollar_volume")).with_columns(
pl.col("dollar_volume").cum_sum().over(["ticker", "date"]).alias("cumulative_dollar_volume"),
pl.lit(1_000_000).alias("dollar_tau")
)
(cqt_sample
.group_by(["ticker", "date"])
.agg(pl.col(["cumulative_dollar_volume", "dollar_tau"])
.map_batches(lambda x: generate_sample_numba(x["cumulative_dollar_volume"].to_numpy(), 1_000_000))
)#.alias("bar_index")
)#.explode("bar_index")
</code></pre>
<p>but the <code>map_batches()</code> method seems to return some strange results.</p>
<p>However, when I use the integer <code>dollar_tau</code> with one input column it works fine:</p>
<pre class="lang-py prettyprint-override"><code>(cqt_sample
.group_by(["ticker", "date"])
.agg(pl.col("cumulative_dollar_volume")
.map_batches(lambda x: generate_sample_numba(x.to_numpy(), 1_000_000))
).alias("bar_index")
).explode("bar_index")
</code></pre>
|
<python><pandas><numpy><numba><python-polars>
|
2024-02-27 06:15:38
| 2
| 649
|
Kevin Li
|
78,065,443
| 12,314,521
|
How to track the best metric score during training (PyTorch Lightning)
|
<p>I'm learning PyTorch Lightning.</p>
<p>I want to track the loss and performance metric scores of each epoch, so I can save the best model, plot a chart, or find the highest performance. But all I have is the loss and performance score of a batch in the training_step method, like this:</p>
<pre><code>
class Model(pl.LightningModule):
    ...
    def training_step(self, batch, batch_idx):
        ...
        probs = self.forward(input_ids, labels, output_ids)
        loss = self.loss(probs, labels)

        # calculate blue_score
        predictions = self.decode_prediction(probs, input_tokens)
        output_sequences = [[x] for x in output_sequences]
        blue_score = self.blue_score(predictions, output_sequences)

        self.log_dict(
            {'train_loss': loss, 'blue_score': blue_score},
            on_step=True,
            on_epoch=True,
            prog_bar=True
        )
        return loss
</code></pre>
<p>Edit:</p>
<p>After doing some research, I think I figured it out. The default logger already computes the accumulated loss and metric score on each epoch (I set <code>on_epoch = True</code>). Now I will write a custom logger inheriting the Logger class of PyTorch Lightning:</p>
<pre><code>class HistoryLogger(Logger):
def __init__(self):
super().__init__()
self.history = collections.defaultdict(list)
@property
def name(self):
return "HistoryLogger"
@property
def version(self):
return "1.0"
@rank_zero_only
def log_metrics(self, metrics, step):
for metric_name, metric_value in metrics.items():
self.history[metric_name].append(metric_value)
return
logger = HistoryLogger()
trainer = Trainer(logger=logger)
...
# access the history
logger.history
</code></pre>
|
<python><pytorch><pytorch-lightning>
|
2024-02-27 06:07:36
| 0
| 351
|
jupyter
|
78,065,399
| 3,700,524
|
Select consecutive elements that satisfy a certain condition as separate arrays
|
<p>Given an array of values, I want to select multiple sequences of consecutive elements that satisfy a condition. The result should be one array for each sequence of elements.</p>
<p>For example I have an array containing both negative and positive numbers. I need to select sequences of negative numbers, with each sequence in a separate array.<br />
Here is an example :</p>
<pre><code>import numpy as np
# Example data
values = np.array([1, 2, 3, -1, -2, -3, 4, 5, 6, -7, -8, 10])
mask = values < 0
</code></pre>
<p>Here is how the output should look like :</p>
<pre><code>Array 1:
[-1 -2 -3]
Array 2:
[-7 -8]
</code></pre>
<p>I tried to do it using <code>numpy.split</code>, but it became more like spaghetti code. I was wondering is there a Pythonic way to do this task?</p>
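For reference, a compact way that avoids the spaghetti: find the indices where the mask flips and split there — this is essentially the `numpy.split` idea, just with the boundary computation spelled out:

```python
import numpy as np

values = np.array([1, 2, 3, -1, -2, -3, 4, 5, 6, -7, -8, 10])
mask = values < 0

# Indices where the mask changes value mark the run boundaries;
# comparing shifted slices avoids np.diff, which rejects boolean arrays
split_points = np.flatnonzero(mask[1:] != mask[:-1]) + 1
groups = [g for g in np.split(values, split_points) if g.size and g[0] < 0]
```

`groups` then holds one array per negative run, matching the expected output above.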
|
<python><arrays><numpy>
|
2024-02-27 05:52:32
| 4
| 3,421
|
Mohsen_Fatemi
|
78,065,151
| 758,982
|
Provide schema while reading avro files using pyspark
|
<p>I am trying to read avro files using pyspark. I want to provide my own schema while reading the file. Below is the sample code.</p>
<pre><code>json_schema = """
{
"type": "record",
"name": "User",
"fields": [
{
"name": "routingNumber",
"type": "string"
}
]
}
"""
schema_dict = json.loads(json_schema)
avro_schema = StructType.fromJson(schema_dict)
spark = SparkSession.builder.appName("AvroReadExample")\
.config('spark.jars', '/Users/harbeerkadian/Documents/workspace/learn-spark/jars/spark-avro_2.12-3.5.0.jar')\
.getOrCreate()
df = spark.read.format("avro").schema(avro_schema).load("/Users/harbeerkadian/Downloads/accounts.avro")
</code></pre>
<p>It is giving errors where I am trying to convert the schema JSON into a StructType.</p>
<p>Below is the error details</p>
<pre><code>Traceback (most recent call last):
File "/Users/harbeerkadian/Documents/workspace/learn-python/solution.py", line 992, in <module>
avro_using_files()
File "/Users/harbeerkadian/Documents/workspace/learn-python/solution.py", line 954, in avro_using_files
avro_schema = StructType.fromJson(schema_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harbeerkadian/Documents/workspace/python_env/ncr-env/lib/python3.11/site-packages/pyspark/sql/types.py", line 1017, in fromJson
return StructType([StructField.fromJson(f) for f in json["fields"]])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harbeerkadian/Documents/workspace/python_env/ncr-env/lib/python3.11/site-packages/pyspark/sql/types.py", line 1017, in <listcomp>
return StructType([StructField.fromJson(f) for f in json["fields"]])
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/harbeerkadian/Documents/workspace/python_env/ncr-env/lib/python3.11/site-packages/pyspark/sql/types.py", line 709, in fromJson
json["nullable"],
~~~~^^^^^^^^^^^^
KeyError: 'nullable'
</code></pre>
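For context, the KeyError is a format mismatch rather than a bad schema: `StructType.fromJson` expects Spark's own schema JSON, where every field carries `nullable` and `metadata` keys, while the posted JSON is an Avro record schema. A hedged conversion sketch that only covers simple primitive field types:

```python
import json

avro_schema = json.loads("""
{
  "type": "record",
  "name": "User",
  "fields": [{"name": "routingNumber", "type": "string"}]
}
""")

# Rebuild the field list in Spark's layout; the missing "nullable"
# key is exactly what raises KeyError: 'nullable' in StructType.fromJson
spark_schema_json = {
    "type": "struct",
    "fields": [
        {"name": f["name"], "type": f["type"], "nullable": True, "metadata": {}}
        for f in avro_schema["fields"]
    ],
}
```

Passing `spark_schema_json` (rather than the raw Avro dict) to `StructType.fromJson` should then construct; complex Avro types (unions, nested records, logical types) would need a fuller mapping, and for Avro files Spark can usually infer the schema from the file itself.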
|
<python><apache-spark><pyspark><schema><avro>
|
2024-02-27 04:24:45
| 2
| 394
|
Harbeer Kadian
|
78,065,114
| 10,200,497
|
How can I conditionally change background color and font color of a cell when using Pandas to_excel?
|
<p>This is my DataFrame:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(
    {
        'a': [2, 2, 2, -4, np.nan, np.nan, 4, -3, 2, -2, -6],
        'b': [2, 2, 2, 4, 4, 4, 4, 3, 2, 2, 6]
    }
)
</code></pre>
<p>I want to change background color and font color of cells only for column <code>a</code>. For example, if <code>df.a > 0</code> background color is green otherwise it is red.</p>
<p>This code does that:</p>
<pre><code>def highlight_cells(s):
if s.name=='a':
conds = [s > 0, s < 0, s == 0]
labels = ['background-color: lime', 'background-color: pink', 'background-color: gold']
array = np.select(conds, labels, default='')
return array
else:
return ['']*s.shape[0]
df.style.apply(highlight_cells).to_excel('df.xlsx', index=False)
</code></pre>
<p>But I don't know how to add font color to that. I can add the same conditional with <code>color</code> property and repeat these lines but I want to know if there is a better way.</p>
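One hedged option: a Styler label can carry several `;`-separated CSS declarations at once, so the font color can ride along with the background in the same `np.select` labels (the color choices here are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [2, -4, 0], 'b': [1, 2, 3]})

def highlight_cells(s):
    if s.name == 'a':
        conds = [s > 0, s < 0, s == 0]
        # Several CSS declarations can live in one label, separated by ';'
        labels = ['background-color: lime; color: darkgreen',
                  'background-color: pink; color: darkred',
                  'background-color: gold; color: black']
        return np.select(conds, labels, default='')
    return [''] * s.shape[0]

styles = highlight_cells(df['a'])
```

The same `df.style.apply(highlight_cells).to_excel('df.xlsx', index=False)` call then writes both properties per cell.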
|
<python><pandas>
|
2024-02-27 04:12:18
| 2
| 2,679
|
AmirX
|
78,065,058
| 2,955,541
|
Knapsack Problem: Find Top-K Lower Profit Solutions
|
<p>In the classic 0-1 knapsack problem, I am using the following (dynamic programming) algorithm to construct a "dp table":</p>
<pre><code>def knapsack(weights, values, capacity):
n = len(weights)
weights = np.concatenate(([0], weights)) # Prepend a zero
values = np.concatenate(([0], values)) # Prepend a zero
table = np.zeros((n+1, capacity+1), dtype=np.int64) # The first row/column are zeros
for i in range(n+1):
for w in range(capacity+1):
if i == 0 or w == 0:
table[i, w] = 0
elif weights[i] <= w:
table[i, w] = max(
table[i-1, w-weights[i]] + values[i],
table[i-1, w]
)
else:
table[i, w] = table[i-1, w]
return table
</code></pre>
<p>Then, by traversing the table using the following code, I am able to identify the items that make up the optimal solution:</p>
<pre><code>def get_items(weights, capacity, table):
items = []
i = len(weights)
j = capacity
weights = np.concatenate(([0], weights)) # Prepend a zero
table_copy = table.copy()
while i > 0 and j > 0:
if table_copy[i, j] == table_copy[i-1, j]:
pass # Item is excluded
else:
items.append(i-1) # Item is included, fix shifted index due to prepending zero
j = j - weights[i]
i = i-1
return items
</code></pre>
<p>This is great for finding the items that make up the single optimal solution (i.e., highest total summed value). However, I can't seem to figure out how to retrieve, say, the top-3 or top-5 solutions from this table that have a total weight less than or equal to the maximum capacity.</p>
<p>For example, the top-3 solutions for the following input would be:</p>
<pre><code>weights = [1, 2, 3, 2, 2]
values = [6, 10, 12, 6, 5]
capacity = 5
# with corresponding "dp table"
array([[ 0, 0, 0, 0, 0, 0],
[ 0, 6, 6, 6, 6, 6],
[ 0, 6, 10, 16, 16, 16],
[ 0, 6, 10, 16, 18, 22],
[ 0, 6, 10, 16, 18, 22]])
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Total Summed Value</th>
<th>Items (Zero-based Index)</th>
</tr>
</thead>
<tbody>
<tr>
<td>22</td>
<td>1, 2</td>
</tr>
<tr>
<td>22</td>
<td>0, 1, 3</td>
</tr>
<tr>
<td>21</td>
<td>0, 1, 4</td>
</tr>
</tbody>
</table></div>
<p>Note that there is a tie for both first place and so we'd truncate the solutions after the first 3 rows (though, getting all ties is preferred if possible). Is there an efficient way to obtain the top-k solutions from the "dp table"?</p>
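Not an answer recoverable from the single dp table above, but one hedged alternative is to carry the k best partial solutions per capacity instead of a single value. The standard argument for why truncating to k per state stays exact: any overall top-k solution's prefix must itself be among the top-k for its weight state, since otherwise k better prefixes would each extend with the same suffix into k distinct, no-worse solutions. A sketch:

```python
def knapsack_topk(weights, values, capacity, k=3):
    # dp[w] keeps the k best (total value, item index tuple) pairs
    # achievable with total weight <= w
    dp = [[(0, ())] for _ in range(capacity + 1)]
    for i, (wt, val) in enumerate(zip(weights, values)):
        for w in range(capacity, wt - 1, -1):  # reverse order => each item used once
            candidates = dp[w] + [(v + val, items + (i,)) for v, items in dp[w - wt]]
            candidates.sort(key=lambda t: t[0], reverse=True)
            dp[w] = candidates[:k]
    return dp[capacity]

top3 = knapsack_topk([1, 2, 3, 2, 2], [6, 10, 12, 6, 5], 5, k=3)
```

This runs in O(n · capacity · k log k), so for small k the overhead over the plain table is modest; capturing *all* ties would mean growing k until the k-th value strictly drops.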
|
<python><optimization><dynamic-programming><knapsack-problem>
|
2024-02-27 03:49:06
| 1
| 6,989
|
slaw
|
78,065,048
| 1,938,410
|
How to use Pillow to set RGB value of a PNG based on a binary mask
|
<p>I have a PNG image and a black-and-white mask. This post (<a href="https://note.nkmk.me/en/python-pillow-putalpha/" rel="nofollow noreferrer">https://note.nkmk.me/en/python-pillow-putalpha/</a>) demonstrated how to use <code>putalpha()</code> to set the alpha channel of the image so the masked-out area becomes transparent. I wonder if there is any function in Pillow that can actually change the RGB value of the masked-out area to all 255?</p>
<p>Following is an example. The resulting image should have the entire white area actually carry an RGB value of 255, instead of only having its alpha set to 0.</p>
<p><a href="https://i.sstatic.net/sxqxU.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sxqxU.jpg" alt="The original image" /></a> <a href="https://i.sstatic.net/ynq7c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ynq7c.png" alt="The mask" /></a> <a href="https://i.sstatic.net/DfqoA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DfqoA.png" alt="The image I want to get, the white area should have RGB value of 255." /></a></p>
<p><strong>Updated question</strong></p>
<p>(I run into some error when I try to use <code>ImageChops</code>:</p>
<p>Here is the code I ran:</p>
<pre><code>from PIL import Image, ImageChops
im = Image.open('lena.jpg')
mask = Image.open('mask_circle.png')
print(im, mask)
darker = ImageChops.darker(im, mask)
darker.save('darker.png')
</code></pre>
<p>And the error I got:</p>
<pre><code><PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=400x225 at 0x24725330BF0> <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=400x225 at 0x24724F2CD40>
Traceback (most recent call last):
File "E:\test.py", line 7, in <module>
darker = ImageChops.darker(im, mask)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\...\Local\Programs\Python\Python312\Lib\site-packages\PIL\ImageChops.py", line 81, in darker
return image1._new(image1.im.chop_darker(image2.im))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: images do not match
</code></pre>
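The ValueError in the update comes from the mode mismatch (an RGB photo vs. an RGBA mask) — `ImageChops` requires both operands to share a mode. A hedged sketch with tiny in-memory stand-in images, using `lighter()` (per-pixel maximum) so white mask pixels force the RGB value to 255 while black mask pixels leave the photo untouched:

```python
from PIL import Image, ImageChops

im = Image.new('RGB', (4, 4), (10, 20, 30))       # stand-in for lena.jpg
mask = Image.new('RGBA', (4, 4), (0, 0, 0, 255))  # stand-in for the mask PNG
mask.paste((255, 255, 255, 255), (2, 0, 4, 4))    # right half = masked out (white)

# convert() aligns the modes before chopping; lighter() takes the
# per-pixel maximum, whitening exactly the white-mask region
result = ImageChops.lighter(im, mask.convert('RGB'))
```

If the actual mask is white where pixels should be *kept*, invert it first (e.g. `ImageChops.invert(mask.convert('RGB'))`); `darker()` with a white-keep mask would instead blacken the excluded area.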
|
<python><image-processing><python-imaging-library><png>
|
2024-02-27 03:44:53
| 1
| 507
|
SamTest
|
78,065,046
| 12,545,137
|
How can I find all hex codes for a Unicode character?
|
<p>I have to find some special characters using a Python regular expression. For example, for the character 'à',
I found some hex codes to build a regex pattern:</p>
<pre><code>r'\x61\x300|\xe0|\x61\x300'
</code></pre>
<p>But I am afraid that I may miss some other hex codes.
How can I find all possible hex codes for a character?</p>
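One hedged way to sidestep enumerating every hex spelling: 'à' has a precomposed form (U+00E0) and a decomposed form ('a' followed by combining grave, U+0300), and Unicode normalization maps both to one canonical form — so normalizing the text (and the pattern) before matching covers all equivalent encodings:

```python
import unicodedata

precomposed = '\u00e0'   # 'à' as a single code point
decomposed = 'a\u0300'   # 'a' + COMBINING GRAVE ACCENT

# NFC composes, NFD decomposes; either one, applied to both sides,
# makes the two spellings compare (and regex-match) identically
same = unicodedata.normalize('NFC', precomposed) == unicodedata.normalize('NFC', decomposed)
```

`unicodedata.decomposition('\u00e0')` can also be queried to discover the decomposed code points for any single character.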
|
<python><regex><unicode><escaping><python-re>
|
2024-02-27 03:44:34
| 2
| 584
|
Dinh Quang Tuan
|
78,064,799
| 7,938,796
|
Python script works locally but not on AWS Lambda Container
|
<p>I'm trying to run a linear optimization problem using <code>scipy</code> within AWS Lambda. <code>scipy</code> is over the limit to be added as a layer, so I've created a container and deployed to AWS Lambda. I was able to run a script that imports and uses basic parts of <code>scipy</code>, so I know the process works in general but it's getting hung up on certain functions. Here is my entire script:</p>
<pre><code>import json
import pandas as pd
import numpy as np
import scipy.sparse
from scipy.optimize import milp, LinearConstraint, Bounds
def lambda_handler(event, context):
    # Constraints
    maxSal = 3000
    totalItems = 4

    # Setup variables
    itemId = pd.DataFrame([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], columns=['itemId'])
    itemCount = pd.DataFrame([1, 1, 1, 1, 1, 1, 1, 1, 1, 1], columns=['itemCount'])
    itemCost = pd.DataFrame([710, 720, 730, 740, 750, 760, 770, 780, 790, 800], columns=['itemCost'])
    itemValue = [20, 30, 40, 50, 60, 70, 80, 90, 100, 110]

    # Dummy DF to be used to select the optimal items
    itemDF = pd.concat([itemId, itemCount, itemCost], axis=1)

    # Compile all constraint vectors
    ConVec_StaticAllFixed = pd.concat([itemCount, itemCost], axis=1)
    print(ConVec_StaticAllFixed)
    print(type(ConVec_StaticAllFixed))

    # Setup constraints
    ConVal_AllGT = [totalItems, 0]
    ConVal_AllLT = [totalItems, maxSal]

    # Setup problem
    A = scipy.sparse.csc_array(
        scipy.sparse.vstack((
            scipy.sparse.csr_array(ConVec_StaticAllFixed.iloc[:, i]) for i in range(len(ConVec_StaticAllFixed.columns))
        )))

    # All decisions are binary
    integrality = np.ones(shape=len(itemValue), dtype=np.uint8)
    bounds = Bounds(
        lb=np.zeros_like(itemValue),
        ub=np.ones_like(itemValue),  # 685
    )

    ## Scrub things up so it'll run ##
    ConValGT_MILP = np.block(ConVal_AllGT)
    ConValLT_MILP = np.block(ConVal_AllLT)
    milpValues_Final = pd.Series(itemValue).astype(float)

    # Run milp
    result = milp(
        c=-milpValues_Final, integrality=integrality, bounds=bounds,
        constraints=(LinearConstraint(A=A, lb=ConValGT_MILP, ub=ConValLT_MILP),),
        options={'time_limit': 2},
    )

    if result.success:
        outputs = pd.Series(name='Assigned', data=result.x)
        selections = outputs[outputs > 0.5].index
        filtered_selections = [x for x in selections if x < len(itemDF.index)]  # only keep selections within itemDF
        itemDFSolved = itemDF.loc[filtered_selections]
        itemDFSolved["ItemValue"] = milpValues_Final
    else:
        itemDFSolved = pd.DataFrame()

    print(itemDFSolved)
    totalVal = sum(itemDFSolved.ItemValue)

    return {
        'statusCode': 200,
        'response': json.dumps('Successfully ran scipy MILP in lambda!'),
        'body': totalVal
    }
</code></pre>
<p>I've confirmed that this works locally, it spits out an optimized total value of 240 which is correct. However on AWS Lambda it's getting hung up on <code>scipy.sparse.vstack</code> which is used to build the matrix that gets optimized. Here is the error traceback in aws:</p>
<pre><code>{
"errorMessage": "iteration over a 0-d array",
"errorType": "TypeError",
"requestId": "702c28fc-a594-4b7b-a984-d911df3e0fad",
"stackTrace": [
" File \"/var/task/my_lambda_function.py\", line 29, in lambda_handler\n scipy.sparse.vstack((\n",
" File \"/var/lang/lib/python3.11/site-packages/scipy/sparse/_construct.py\", line 781, in vstack\n return _block([[b] for b in blocks], format, dtype, return_spmatrix=True)\n"
]
}
</code></pre>
<p>So to me it seems like the error stems from the lambda function reading <code>ConVec_StaticAllFixed</code> as a scalar rather than a pandas dataframe since the error refers to it as a 0-d array. But in the cloud watch logs I can see that it prints out the type as <code><class 'pandas.core.frame.DataFrame'></code>:</p>
<p><a href="https://i.sstatic.net/m503s.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m503s.png" alt="Cloudwatch Logs" /></a></p>
<p>So I'm stumped. I'm pretty new to containers and AWS in general, so any advice on how to sort through differences between local and containerized code would be appreciated.</p>
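As a hedged sanity check that sidesteps the failing path, the same matrix can be assembled from concrete NumPy rows and a real list. One plausible culprit is version skew between the local and container environments in how `csr_array` treats a pandas column slice handed over inside a generator expression; this construction avoids both (the row contents below are made-up stand-ins):

```python
import numpy as np
import pandas as pd
import scipy.sparse

vec = pd.concat([pd.Series([1, 1, 1], name='itemCount'),
                 pd.Series([710, 720, 730], name='itemCost')], axis=1)

# Convert each column to an explicit 2-D NumPy row and pass vstack a
# materialized list rather than a bare generator of pandas slices
blocks = [scipy.sparse.csr_array(vec.iloc[:, i].to_numpy().reshape(1, -1))
          for i in range(len(vec.columns))]
A = scipy.sparse.csc_array(scipy.sparse.vstack(blocks))
```

Pinning the exact same numpy/scipy/pandas versions in the container image as on the working local machine would also narrow things down quickly.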
|
<python><amazon-web-services><docker><aws-lambda>
|
2024-02-27 02:04:24
| 0
| 767
|
CoolGuyHasChillDay
|
78,064,721
| 7,631,480
|
Why does my AWS Boto3 API request asking to create some Route53 records silently fail?
|
<p>I have some code that has always just worked. It uses the AWS API to request information about some ECS containers, then queries Route53 to get information about existing records, and then finally makes a Boto3 Route53 call to update a DNS zone to match the ECS information.</p>
<p>This code has worked in the past, but now I'm tracking down the source of some bad behavior, and the above logic seems to be the problem. When I step over the <em><strong>Boto3</strong></em> call to <em><strong>change_resource_record_sets</strong></em>, the call succeeds. It doesn't throw an exception, and its HTTP Response Code is 200. However, the changes I've asked to have applied to my DNS zone do not occur. From what I can tell, the call didn't do anything.</p>
<p>I'm wondering if anyone can tell me why I'm getting this bad behavior. Why is the code below silently having no effect on my environment:</p>
<pre><code>try:
r = get_primary_client('route53').change_resource_record_sets(
HostedZoneId=dns_zone['Id'],
ChangeBatch={'Changes': changes}
)
processing_info['r'] = copy.deepcopy(r)
assert r['ResponseMetadata']['HTTPStatusCode'] == 200
except Exception as ex:
return str(ex)
</code></pre>
<p>The value of <em><strong>dns_zone</strong></em> dumped as JSON is:</p>
<pre><code>{
"CallerReference": "RISWorkflow-3bdd50ce72fe59acd56ed296752aab38",
"Config": {
"Comment": "HostedZone created by Route53 Registrar",
"PrivateZone": false
},
"Id": "/hostedzone/Z30M8MWTQXQ2O2",
"Name": "ourdomain.com.",
"ResourceRecordSetCount": 54
}
</code></pre>
<p>The value of the <em><strong>changes</strong></em> param, dumped as JSON, is:</p>
<pre><code> {
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "appserver2h.devqa.ourdomain.com",
"ResourceRecords": [
{
"Value": "10.13.11.98"
}
],
"TTL": 60,
"Type": "A"
}
},
{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "grinder2h.devqa.ourdomain.com",
"ResourceRecords": [
{
"Value": "10.13.12.172"
}
],
"TTL": 60,
"Type": "A"
}
}
],
</code></pre>
<p>This code skips the except clause (doesn't throw an exception) and the assert statement succeeds (the request returns a response code of 200). And yet, nothing changes in my DNS zone. In this case, the following names do not have records in my DNS zone, as you'd expect after this call:</p>
<pre><code>grinder2h.devqa.ourdomain.com
appserver2h.devqa.ourdomain.com
</code></pre>
<p>Here is a JSON dump of the what the call returns:</p>
<pre><code>{
"ChangeInfo": {
"Id": "/change/C102227247EPWZ3JJQCS",
"Status": "PENDING",
"SubmittedAt": "Tue, 27 Feb 2024 01:13:50 GMT"
},
"ResponseMetadata": {
"HTTPHeaders": {
"content-length": "282",
"content-type": "text/xml",
"date": "Tue, 27 Feb 2024 01:13:49 GMT",
"x-amzn-requestid": "735f38f8-bd9d-440b-b29d-b68ebfd0f45e"
},
"HTTPStatusCode": 200,
"RequestId": "735f38f8-bd9d-440b-b29d-b68ebfd0f45e",
"RetryAttempts": 0
}
</code></pre>
<p>I did notice that the status returned is "Pending". I had to read up on what this is about. Apparently the changes are made asynchronously. To get the status of my call, I do this with the AWS CLI:</p>
<pre><code>> aws route53 get-change --id "/change/C06636843RDY913EELESJ"
{
    "ChangeInfo": {
        "Id": "/change/C06636843RDY913EELESJ",
        "Status": "INSYNC",
        "SubmittedAt": "2024-02-27T00:22:50.985000+00:00"
    }
}
</code></pre>
<p>A status of "INSYNC" apparently means that the requested changes have been made.</p>
<p>Huh? I've been banging my head and staring at this for hours. I can't figure out what's going on. Anyone?</p>
|
<python><amazon-web-services><dns><boto3><route53>
|
2024-02-27 01:35:11
| 0
| 23,355
|
CryptoFool
|
78,064,599
| 890,803
|
copy one file from one S3 bucket to a key within another S3 bucket
|
<p>I am using boto3 and have two s3 buckets:</p>
<pre><code>source s3 bucket: 'final-files'
destination s3 bucket: 'exchange-files'
prefix for destination s3 bucket: 'timed/0001'
key: final_file[-1] #this is assigned earlier and is a file name
</code></pre>
<p>I need to copy a single file from the source bucket to a folder within the destination bucket, and I'm unsure how to add the prefix to the destination; here's my code:</p>
<pre><code>#create a source dictionary that specifies bucket name and key name of the object to be copied
copy_source = {
'Bucket': 'final-files',
'Key': final_file[-1]
}
bucket = s3.Bucket('exchange-files')
prefix="timed/0001"
bucket.copy(copy_source, prefix + key)
# Printing the Information That the File Is Copied.
print('Single File is copied')
</code></pre>
<p>This is the error I'm getting:</p>
<pre><code>"errorMessage": "expected string or bytes-like object, got 's3.Bucket'"
</code></pre>
<p>What am I missing?</p>
|
<python><amazon-web-services><amazon-s3>
|
2024-02-27 00:48:05
| 1
| 1,725
|
DataGuy
|
78,064,540
| 9,659,759
|
Unit testing flask errorhandler
|
<p>I am trying to unit test my Flask app's <a href="https://flask.palletsprojects.com/en/1.1.x/errorhandling/#registering" rel="nofollow noreferrer">errorhandler</a>, which looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>
from http.client import INTERNAL_SERVER_ERROR

bp = flask.Blueprint('my_app', __name__, url_prefix='/my_app')

@bp.route('/get_metadata', methods=['GET'])
def get_metadata():
    raise CustomException("custom message")

@bp.errorhandler(INTERNAL_SERVER_ERROR)
def handle_exception(e):
    response = flask.make_response()
    response.status_code = 500
    # custom stuff here
    return response
</code></pre>
<p>This code works fine for my actual application - the CustomException is raised, caught by the errorhandler, processed, and then returned as a 500 response.</p>
<p>For my unit test, I have something like this:</p>
<pre class="lang-py prettyprint-override"><code>class AppTests(parameterized.TestCase):
def setup(self):
self.flask_app = app.create_flask_app('localhost:5000', 'localhost:3000')
@parameterized.named_parameters(('failed_server_call', 500))
def test_get_branch_metadata_endpoint(self, expected_status_code):
with self.flask_app.test_client() as client:
response = client.get('/my_app/get_metadata')
self.assertEqual(expected_status_code, response.status_code)
if __name__ == '__main__':
absltest.main()
</code></pre>
<p>Lots of implementation omitted, but I think those are the relevant portions. In the unit testing context, the CustomException is raised, but not handled by the @errorhandler (in fact, the @errorhandler is never invoked at all), and the test basically just errors out because of the unhandled exception. My expectation is that the "fake" flask client (from <code>test_client()</code>) should behave exactly like the real client, and correctly invoke the @errorhandler without any additional work from me.</p>
<p>For background, we are still using <code>Flask 1.4</code> (Upgrade coming soon(tm)) and <code>absl.py 1.3</code>. Tests are executed using <code>bazel test</code>, <code>Bazel 6.4</code>. <code>Python 3.10.9</code>. I am something of a Python newbie, so unsure what else might be relevant.</p>
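For what it's worth, a tiny self-contained app (names made up; handler registered on the app rather than the blueprint) shows the mechanism at play: Flask only routes unhandled view exceptions to the 500 handler when exception propagation is off, and both `TESTING` and `DEBUG` implicitly switch propagation on.

```python
import flask

app = flask.Flask(__name__)

@app.route('/my_app/get_metadata')
def get_metadata():
    raise RuntimeError('custom message')  # stand-in for CustomException

@app.errorhandler(500)
def handle_exception(e):
    return flask.make_response('handled', 500)

# Pin propagation off so the unhandled exception is converted to a 500
# and dispatched to the handler instead of being re-raised in the test
app.config['PROPAGATE_EXCEPTIONS'] = False

with app.test_client() as client:
    resp = client.get('/my_app/get_metadata')
```

If `create_flask_app` enables `TESTING` (or `DEBUG`), that alone would explain the handler never firing; it is also worth checking whether your Flask version consults blueprint-level 500 handlers for unhandled exceptions, since app-level registration is the safer bet.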
|
<python><unit-testing><flask>
|
2024-02-27 00:20:50
| 1
| 883
|
Jake
|
78,064,494
| 5,159,284
|
Importing file from another directory python
|
<p><strong>Disclaimer</strong>: I have tried adding the root of my project folder to the path and also followed <a href="https://stackoverflow.com/questions/14132789/relative-imports-for-the-billionth-time">this post</a>, but I am still finding issues.</p>
<p>My project structure is:</p>
<pre><code>package_name
- __init__.py
folder1
- __init__.py
- file_to_be_tested.py
- tests
- __init__.py
- test_my_test.py
</code></pre>
<p>How do I import a function from the <code>file_to_be_tested.py</code> in <code>test_my_test.py</code>? I have tried the following:</p>
<pre><code>from package.folder1.file_to_be_tested import my_func
from folder1.file_to_be_tested import my_func
</code></pre>
<p>And for running the test file, I run the following command from the root of the package (inside <code>package_name</code>) folder:</p>
<pre><code>python folder1/tests/test_my_test.py
</code></pre>
<p>Apologies if I didn't read enough before posting.</p>
|
<python>
|
2024-02-27 00:04:01
| 1
| 2,635
|
Prabhjot Singh Rai
|
78,064,460
| 2,307,570
|
Why does `__builtins__` not contain itself?
|
<p>The following code seems like a good way to get an overview of the builtin objects in Python.<br></p>
<pre class="lang-py prettyprint-override"><code>dunder = [] # 8
lower_fun = [] # 41
lower_type = [] # 26
lower_sitebuiltins = [] # 6
upper_exception = [] # 50
upper_warning = [] # 11
upper_other = [] # 10
for name, obj in vars(__builtins__).items():
first_char = name[0]
if first_char == '_':
dunder.append(name)
else:
if first_char.islower():
t = str(type(obj))
if t == "<class 'builtin_function_or_method'>":
lower_fun.append(name)
elif t == "<class 'type'>":
lower_type.append(name)
elif '_sitebuiltins' in t:
lower_sitebuiltins.append(name)
else:
assert False # does not happen
else:
if 'Error' in name or 'Exception' in name:
upper_exception.append(name)
elif 'Warning' in name:
upper_warning.append(name)
else:
upper_other.append(name)
</code></pre>
<p>I would have expected <code>__builtins__</code> to appear in <code>dunder</code>. But it is not there.</p>
<pre class="lang-py prettyprint-override"><code>dunder == [
'__name__', '__doc__', '__package__', '__loader__',
'__spec__', '__build_class__', '__import__', '__debug__'
]
</code></pre>
<p>Apparently I do not understand what "built in" means.<br>
So what are the 152 entries of <code>vars(__builtins__)</code>?<br>
And what (like <code>__builtins__</code>) is missing?</p>
<hr />
<p><strong>Edit:</strong> <code>builtins</code> and <code>sys</code> are also not in these lists.</p>
<p>The 152 entries came from running this code from a Python file.<br>
In a console <code>dir(__builtins__)</code> looks completely different, and has only 41 entries.<br>
<code>vars(__builtins__)</code> causes a TypeError in the console.</p>
<p>Initially the code used <code>dir(__builtins__)</code>. Following the advice in a comment, it was rewritten using <code>vars(__builtins__)</code>. The result is essentially the same, but the lists have a different order.</p>
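<p>A short check illustrating why: the interpreter injects <code>__builtins__</code> into each executing namespace; it is not an attribute of the <code>builtins</code> module itself, so it can never appear in <code>vars(__builtins__)</code>. Likewise, <code>builtins</code> and <code>sys</code> are modules found through the import system, not names in the builtin namespace:</p>

```python
import builtins

# __builtins__ is injected into module globals by the interpreter
injected = '__builtins__' in globals()

# In __main__ it is the builtins module itself; in imported modules it
# may be the module's __dict__ instead - which also helps explain why
# the console and a script disagree about dir(__builtins__).
is_module_or_dict = (__builtins__ is builtins) or (__builtins__ is vars(builtins))

print(injected, is_module_or_dict)
```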
|
<python><python-builtins>
|
2024-02-26 23:50:45
| 0
| 1,209
|
Watchduck
|
78,064,166
| 11,618,586
|
converting http directory listing into a dataframe and access path in each row
|
<p>I am able to access an HTTP URL and retrieve the directory listing. Then I go line by line, check whether each URL has a <code>.txt</code> extension, access it using <code>requests.content</code>, and decode the text file.
However, I want to be able to filter the listing based on the date. The directory listing looks like this:</p>
<pre><code><HTML><HEAD><TITLE>IP Address/Log</TITLE>
<BODY>
<H1>Log</H1><HR>
<PRE><A HREF="/Main/">[To Parent Directory]</A>
23/11/18 19:07 314 <A HREF="/Log/Alarm_231118.txt">Alarm_231118.txt</A>
23/11/16 23:59 150516 <A HREF="/Log/Temperature%20Detail_Data%20Log_231116.txt">Temperature Detail_Data Log_231116.txt</A>
23/11/28 15:22 450 <A HREF="/Log/Alarm_231128.txt">Alarm_231128.txt</A>
23/11/17 0:00 450536 <A HREF="/Log/Temperature%20Detail_Data%20Log.log">Temperature Detail_Data Log.log</A>
23/11/16 23:59 110148 <A HREF="/Log/Water%20Temp%20Trend_Data%20Log_231116.txt">Water Temp Trend_Data Log_231116.txt</A>
</PRE><HR></BODY></HTML>
</code></pre>
<p>I'm only interested in the rows that have the date, time, size, and the HREF link. I want to create a dataframe where the first column is the date, the second is the time, the third is the size, and the fourth is the link.
To access each link I use the following code:</p>
<pre><code>for line in lines:
if ".txt" in line:
filename = line.split('"')[1]
if filename.startswith(file_prefix_all) and filename.endswith(".txt"):
file_url = url_root + filename
print(file_url)
file_response = requests.get(file_url, auth=auth)
if file_response.status_code == 200:
# Read the CSV content into a Pandas DataFrame
file_content = file_response.content.decode('utf-8')
df = pd.read_csv(StringIO(file_content), encoding='utf-8', sep='\t')
files_dataframes.append(df)
</code></pre>
<p>Will I be able to use the same method once I get the listing into a dataframe?
Any help/suggestions would be greatly appreciated!</p>
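<p>One hedged approach (the regex layout is inferred from the sample above): parse each file line into (date, time, size, link) columns with a single pattern, convert the date column, and filter before fetching anything — the same link-by-link download loop then runs over the filtered rows:</p>

```python
import re
import pandas as pd

listing = '''23/11/18 19:07 314 <A HREF="/Log/Alarm_231118.txt">Alarm_231118.txt</A>
23/11/16 23:59 150516 <A HREF="/Log/Temperature%20Detail_Data%20Log_231116.txt">Temperature Detail_Data Log_231116.txt</A>'''

# One row per file line: date, time, size, href
pattern = re.compile(r'(\S+)\s+(\S+)\s+(\d+)\s+<A HREF="([^"]+)"')
rows = pattern.findall(listing)
df = pd.DataFrame(rows, columns=["date", "time", "size", "link"])
df["date"] = pd.to_datetime(df["date"], format="%y/%m/%d")
df["size"] = df["size"].astype(int)

# Filter by date, then iterate the remaining links as before
recent = df[df["date"] > "2023-11-17"]
print(recent["link"].tolist())
```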
|
<python><python-3.x><pandas><urllib>
|
2024-02-26 22:07:46
| 1
| 1,264
|
thentangler
|
78,064,126
| 903,521
|
Read and Write files using Threadpool
|
<p>I want to read a large number of small files (1000s) from AWS S3 and write them out as a few big partitions (100s) in AWS S3. So, I group the files into chunks based on their size and read/write the files in threads to parallelize the whole process.</p>
<pre><code>from multiprocessing.pool import ThreadPool
import s3fs

fs = s3fs.S3FileSystem()

def read_file(key):
    with fs.open(key, 'rb') as f:  # read one small source object
        return f.read()

def read_write(dest_path, keys):
    with fs.open(dest_path, 'wb') as f:  # write one big partition
        f.write(b''.join(read_file(k) for k in keys))

ThreadPool(10).starmap(read_write, file_dict.items())
</code></pre>
<p>However, I get Out Of Memory after processing about 10 - 15 group of file chunks. Anything wrong with this approach?</p>
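<p>A likely culprit is that each task materialises entire objects in memory (<code>.read()</code> on whole files, plus joined byte strings). A hedged sketch of a chunked copy that bounds peak memory per worker — demonstrated here on fsspec's in-memory filesystem, since <code>s3fs</code> exposes the same fsspec file API (the paths are hypothetical):</p>

```python
import fsspec

CHUNK = 8 * 1024 * 1024  # 8 MiB per read bounds peak memory per worker

def merge_keys(fs, keys, dest_path):
    # Stream every source file into the destination in fixed-size
    # chunks instead of loading whole objects with .read().
    with fs.open(dest_path, "wb") as out:
        for key in keys:
            with fs.open(key, "rb") as src:
                while True:
                    buf = src.read(CHUNK)
                    if not buf:
                        break
                    out.write(buf)

# Demo on the in-memory filesystem
fs = fsspec.filesystem("memory")
with fs.open("/a.txt", "wb") as f:
    f.write(b"hello ")
with fs.open("/b.txt", "wb") as f:
    f.write(b"world")
merge_keys(fs, ["/a.txt", "/b.txt"], "/merged.txt")
merged = fs.open("/merged.txt", "rb").read()
print(merged)
```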
|
<python><python-3.x>
|
2024-02-26 21:59:46
| 0
| 3,628
|
syv
|
78,064,016
| 746,100
|
how can i programmatically set a python breakpoint in visual studio code on Macos
|
<p>That is, in my Python3 code on MacOS, is there some Python command I can put in my code to cause a breakpoint in Visual Studio Code?</p>
<p>I believe in Java they use "debugger" for this.</p>
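<p>Yes — since Python 3.7 the built-in <code>breakpoint()</code> is the equivalent of JavaScript's <code>debugger</code> statement. Run the file under VS Code's Python debugger (debugpy) and execution pauses at that line; with no debugger attached it falls back to <code>pdb</code>. A small sketch (the environment variable is set here only so the snippet runs non-interactively):</p>

```python
import os

# PYTHONBREAKPOINT=0 turns every breakpoint() into a no-op; set here
# only so this demo does not stop in pdb when run directly.
os.environ["PYTHONBREAKPOINT"] = "0"

def compute(x):
    y = x * 2
    breakpoint()  # under VS Code/debugpy, execution pauses here
    return y + 1

print(compute(5))  # -> 11
```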
|
<python><visual-studio-code>
|
2024-02-26 21:35:40
| 1
| 8,387
|
Doug Null
|
78,063,706
| 3,475,298
|
Pairs of ragged sequences as input for Tensorflow model
|
<p>I have a model that takes pairs of sequences. The sequences are of variable length.</p>
<p>My model does this:</p>
<ol>
<li>embeds each item of both sequences</li>
<li>sums embeddings of all items for each sequence</li>
<li>calculates dot product of the sums</li>
</ol>
<p>This is the minimal code illustrating the idea:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
inputs = tf.keras.Input(shape=(2, None), ragged=True, dtype=tf.int32)
embedder = tf.keras.layers.Embedding(input_dim=16, output_dim=16)
x = embedder(inputs)
x = tf.reduce_sum(x, axis=-2)
v1 = x[:, 0, :]
v2 = x[:, 1, :]
outputs = tf.reduce_sum(tf.multiply(v1, v2), axis=1)
model = tf.keras.models.Model(inputs=[inputs], outputs=[outputs])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
xs = tf.ragged.constant([
[[0, 1, 2], [3, 4]],
[[2, 0], [5]],
])
ys = tf.constant([0, 1])
dataset = tf.data.Dataset.from_tensor_slices((xs, ys))
model.fit(dataset)
</code></pre>
<p>An error occurs:</p>
<pre><code>TypeError: Exception encountered when calling layer 'tf.__operators__.ragged_getitem' (type SlicingOpLambda).
Ragged __getitem__ expects a ragged_tensor.
Call arguments received by layer 'tf.__operators__.ragged_getitem' (type SlicingOpLambda):
• rt_input=tf.Tensor(shape=(None, 16), dtype=float32)
• key=({'start': 'None', 'stop': 'None', 'step': 'None'}, '0', {'start': 'None', 'stop': 'None', 'step': 'None'})
</code></pre>
<p>My questions are these:</p>
<ul>
<li>Is it possible to do <code>x[:, 0, :]</code> indexing without getting the error?</li>
<li>What is the most idiomatic way to implement the idea using Tensorflow 2 API?</li>
</ul>
|
<python><tensorflow><ragged-tensors>
|
2024-02-26 20:32:46
| 0
| 743
|
Maxim Blumental
|
78,063,667
| 1,663,462
|
How to call python function with default values when they are using typer decorated functions
|
<p>I'm currently migrating some Python functions to be used with the Typer library for a CLI application. Previously, these functions worked well with default parameters, but I'm encountering issues when applying Typer's argument and option decorators, specifically when dealing with default parameter values. I am using Python <code>3.10</code>.</p>
<p>Here's an example of my code where the issue occurs:</p>
<p>main.py:</p>
<pre><code>import typer
app = typer.Typer()
log_level_arg = typer.Argument(default="debug", help="The output logging level. This can be one of: debug, info, warning, error, and critical.")
@app.command()
def view(log_level: str = log_level_arg):
"""
View IP address of switch in environment if it exists
"""
print("log level:")
print(log_level)
print(type(log_level))
@app.command()
def view_old(log_level: str = "debug"):
print("log level:")
print(log_level)
print(type(log_level))
print("view:")
view()
print()
print("view old:")
view_old()
</code></pre>
<p>Running <code>python main.py view</code> outputs:</p>
<pre><code>python main.py view
view:
log level:
<typer.models.ArgumentInfo object at 0x7f9804e4ceb0>
<class 'typer.models.ArgumentInfo'>
view old:
log level:
debug
<class 'str'>
log level:
debug
<class 'str'>
</code></pre>
<p>This bit here is the troublesome part:</p>
<pre><code>view:
log level:
<typer.models.ArgumentInfo object at 0x7f9804e4ceb0>
<class 'typer.models.ArgumentInfo'>
</code></pre>
<p>Instead of the log_level argument being recognized as a string with the default value "debug," it's being treated as a typer.models.ArgumentInfo object. This behavior is unexpected and causes subsequent functionality that depends on log_level being a string (e.g., lower() method calls) to fail.</p>
<p>So my question: I guess I can workaround this with additional functions that just wrap the existing ones, but is there perhaps a way to avoid this and do this natively?</p>
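<p>A common pattern that keeps the wrapper boilerplate to a single line: put the logic in a plain function with ordinary defaults and let the Typer command be a thin CLI shim, so direct Python calls never receive <code>ArgumentInfo</code> objects (a sketch; the help text is illustrative):</p>

```python
import typer

app = typer.Typer()

def view_impl(log_level: str = "debug") -> str:
    # Plain function: safe to call from other Python code
    return log_level.lower()

@app.command()
def view(log_level: str = typer.Argument("debug", help="The output logging level.")):
    # CLI entry point: Typer resolves the Argument default before calling
    print(view_impl(log_level))

print(view_impl())        # -> debug
print(view_impl("INFO"))  # -> info
```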
|
<python><typer>
|
2024-02-26 20:24:29
| 1
| 34,785
|
Chris Stryczynski
|
78,063,556
| 6,641,693
|
How to Set a Custom Logger for the Requests Library in Python?
|
<p>I am trying to set a custom logger for the requests library in Python, but I am having trouble finding the correct way to do it. I have already created a custom logger and configured it with my desired settings, such as the log level, format, and file handler.</p>
<p>No matter what I try, the requests library keeps using the default logger, and there seems to be no way of setting a custom logger function:</p>
<p>This is my custom logger, but I don't think it's the issue:</p>
<pre><code>logger = logging.getLogger('logger')
handler = TimedRotatingFileHandler(
filename=os.path.join(log_dir, 'logger.log'),
when='D',
interval=7,
backupCount=7,
encoding='utf-8'
)
handler.setFormatter(logging.Formatter('%(asctime)s | %(levelname)s | %(message)s'))
logger.addHandler(handler)
logger.propagate = False
</code></pre>
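<p>For context, requests never takes a logger object; it emits records through the standard logging tree under the logger names <code>urllib3</code> (connection-level details) and <code>requests</code>. Attaching the same handler to those loggers routes their output to the custom destination (this sketch uses a <code>StreamHandler</code> as a stand-in for the rotating file handler above):</p>

```python
import logging

handler = logging.StreamHandler()  # stand-in for TimedRotatingFileHandler
handler.setFormatter(logging.Formatter('%(asctime)s | %(levelname)s | %(name)s | %(message)s'))

for name in ("requests", "urllib3"):
    lib_logger = logging.getLogger(name)
    lib_logger.setLevel(logging.DEBUG)   # urllib3 logs connections at DEBUG
    lib_logger.addHandler(handler)
    lib_logger.propagate = False         # keep records out of the root logger

print(handler in logging.getLogger("urllib3").handlers)
```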
|
<python><logging><python-requests>
|
2024-02-26 19:57:45
| 1
| 1,656
|
Kaki Master Of Time
|
78,063,452
| 4,373,898
|
How to access index value in `pd.Series` aggregation or mapping?
|
<p>After a <code>groupby</code> operation, I have a DataFrame with a MultiIndex (say, level names are <code>i1</code>, <code>i2</code>).</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(
{
"i1": list("abcde"),
"i2": range(10, 20, 2),
"c1": range(1, 11, 2),
"c2": list("fghij"),
}
).set_index(["i1", "i2"])
print(df)
</code></pre>
<pre><code> c1 c2
i1 i2
a 10 1 f
b 12 3 g
c 14 5 h
d 16 7 i
e 18 9 j
</code></pre>
<p>I would like to update the column <code>c1</code> according to a condition based on both <code>i1</code> and <code>c2</code>.
I think of something like:</p>
<pre class="lang-py prettyprint-override"><code>def _agg(row) -> float:
# See "Edit 1" for more details
return my_custom_function(row["c2"], row["i1"])
df["c1"] += df.agg(_agg, axis="columns")
</code></pre>
<p>However, this raises an exception because "i1" is not available.</p>
<pre><code>KeyError: 'i1'
</code></pre>
<p>I tried with <code>reset_index</code> but this raises another exception because you cannot sum two series with non-overlapping index:</p>
<pre><code>df["c1"] += df.reset_index().agg(_agg, axis="columns")
</code></pre>
<pre><code>ValueError: cannot join with no overlapping index names
</code></pre>
<p>Kind regards,
Alexis</p>
<hr />
<h1>Edit 1</h1>
<p>In my first example, I use only numeric columns and my aggregation function was a "simple" numeric operation:</p>
<pre class="lang-py prettyprint-override"><code>def _agg(row):
return row["c2"] * (row["i1"] % 4)
</code></pre>
<p>For such kind of computation, we can avoid the <code>df.agg</code> call as suggested by <a href="https://stackoverflow.com/a/78064304/4373898">the answer</a> of <a href="https://stackoverflow.com/users/10035985/andrej-kesely">@Andrej Kesely</a>. This is a valid answer but is not applicable to my use case.</p>
<p>I tried to craft a more representative aggregation function something like this, with more weird conditionals and some "side-effects" (accessing environment variables or something similar):</p>
<pre class="lang-py prettyprint-override"><code>def _agg(row) -> float:
import os
if str(row["c2"]).startswith("f"):
return float(os.environ.get(str(row["i1"]).upper(), -1))
elif not str(row["c2"]).startswith("j"):
return float(os.environ.get(f"MY_VAR_{str(row['i1']).upper()}", 0))
else:
return 1
</code></pre>
<p>I don't think that a solution based on accessing the level values through <code>df.index.get_level_values</code> can be applied for such case.</p>
<h1>Edit 2</h1>
<p>One of my untold idea was to conserve the index (because my dataset can be large and having an index sounds good, as well as it makes sure that the right rows are summed).</p>
<p>I looked for a method, similar to <code>reset_index(level="i1")</code>, which add the <code>i1</code> column to the dataset, but keep it in the index too !</p>
<p>This doesn't seem to exist. Is <code>df.assign({"i1": df.index.get_level_values("i1")})</code> an acceptable workaround (what is the cost of adding such column) ?</p>
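<p>For the general case (arbitrary Python logic with side effects), the index labels are already available inside a row-wise function as <code>row.name</code>, which is a tuple for a MultiIndex — no <code>reset_index</code> or extra column is needed, and index alignment for the addition is preserved (a sketch with a toy condition standing in for the real one):</p>

```python
import pandas as pd

df = pd.DataFrame(
    {"i1": list("ab"), "i2": [10, 12], "c1": [1, 3], "c2": ["f", "g"]}
).set_index(["i1", "i2"])

def _agg(row) -> float:
    i1, i2 = row.name  # MultiIndex labels of this row
    # any arbitrary logic may go here (env lookups, conditionals, ...)
    return 10.0 if str(row["c2"]).startswith("f") and i1 == "a" else 0.0

df["c1"] = df["c1"] + df.apply(_agg, axis="columns")
print(df["c1"].tolist())  # -> [11.0, 3.0]
```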
|
<python><pandas><dataframe>
|
2024-02-26 19:35:16
| 1
| 3,169
|
AlexisBRENON
|
78,063,406
| 4,119,262
|
Why is my function always outputting "invalid" as a result?
|
<p>I am learning Python. In this context, I am doing an exercise in which I need to perform a series of tests on a string. If these checks are met, Python should output "Valid"; if not, "Invalid".</p>
<p>Here is my code, with the comments describing the goal of each functions:</p>
<pre><code>import string
def main():
plate = input("Plate: ")
if is_valid(plate):
print("Valid")
else:
print("Invalid")
# Function "main" calls the "is_valid" function
def is_valid(s):
if lenght_valid(s) == True and starts_alpha(s) == True and check_ponctuation(s) == False:
return True
else:
return False
# Function "is_valid" calls all the other functions checking individual criteria
def lenght_valid(s):
if len(s) > 6 or len(s) < 2:
return False
else:
return True
# Function "lenght_valid" checks the proper length (2 characters min., 6 max.)
def starts_alpha(s):
s = s[0:2]
if s.isalpha():
return True
else:
return False
# Function "starts_alpha" check that the first two characters are letters
def check_ponctuation(s):
return (character in string.punctuation for character in s)
# Function "check_punctuation" checks if punctuation is present. If "True" then this will be rejected in the above function
main()
</code></pre>
<p>I do not understand why it always outputs "Invalid". I tested "KA" and "KOKO", which should be valid; however, they are reported as invalid.</p>
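<p>The culprit is <code>check_ponctuation</code>: it returns a generator object, and a generator is never <code>== False</code>, so <code>is_valid</code> fails for every input. Wrapping the expression in <code>any()</code> produces an actual boolean; a condensed sketch of the corrected checks:</p>

```python
import string

def check_punctuation(s):
    # any() consumes the generator and returns a real bool
    return any(character in string.punctuation for character in s)

def is_valid(s):
    return 2 <= len(s) <= 6 and s[:2].isalpha() and not check_punctuation(s)

print(is_valid("KA"), is_valid("KOKO"), is_valid("K!"))  # True True False
```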
|
<python><function>
|
2024-02-26 19:24:30
| 1
| 447
|
Elvino Michel
|
78,063,283
| 6,779,049
|
How to use Numpy as Default for Code Generation In Sympy?
|
<p>I'm digging into SymPy's code generation capabilities and am struggling with a few basic things. Ironically, of all the languages that SymPy supports for code generation, the documentation for python code generation seems to be the most minimal/lacking. I would like to use <code>numpy as np</code> as the default for all math functions in the code conversion but am struggling. Looking at the source code (since it is not documented), it looks like you can input a <code>settings</code> dict that has a <code>user_functions</code> field which maps SymPy functions to custom functions. I have a basic example below:</p>
<pre><code>import sympy as sp
from sympy.printing.pycode import PythonCodePrinter
class MyPrinter(PythonCodePrinter):
def __init__(self):
super().__init__({'user_functions':{'cos':'np.cos', 'sin':'np.sin', 'sqrt':'np.sqrt'}})
x = sp.symbols('x')
expr = sp.sqrt(x) + sp.cos(x)
mpr = MyPrinter()
mpr.doprint(expr)
</code></pre>
<p>This produces the following output:
<code>'math.sqrt(x) + np.cos(x)'</code></p>
<p>You can see that the mapping worked correctly for <code>cos</code> but not for <code>sqrt</code>.</p>
<ol>
<li>Why am I getting this behavior?</li>
<li>Is there a better way to do this than to manually specify numpy funcs for every function I want to use?</li>
</ol>
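<p>On question 1: <code>sqrt(x)</code> is really <code>Pow(x, Rational(1, 2))</code>, so it is rendered by the printer's <code>Pow</code> handler (hard-wired to <code>math.sqrt</code> in <code>PythonCodePrinter</code>) and is never looked up in the <code>user_functions</code> table — which is why only <code>cos</code> was remapped. On question 2: SymPy ships a <code>NumPyPrinter</code> that already targets NumPy for everything, and <code>lambdify</code> can produce a NumPy-backed callable directly (a sketch; exact term order in the printed string may vary):</p>

```python
import sympy as sp
from sympy.printing.numpy import NumPyPrinter

x = sp.symbols('x')
expr = sp.sqrt(x) + sp.cos(x)

# Every supported function, including sqrt via the Pow handler,
# is printed against the numpy module - no user_functions needed.
code = NumPyPrinter().doprint(expr)
print(code)  # e.g. 'numpy.sqrt(x) + numpy.cos(x)'

# For a callable rather than source text:
f = sp.lambdify(x, expr, modules='numpy')
print(f(0.0))  # sqrt(0) + cos(0) = 1.0
```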
|
<python><sympy><code-generation>
|
2024-02-26 18:57:23
| 1
| 398
|
Nick-H
|
78,063,175
| 10,425,150
|
XML to DATAFRAME and back to XML in python
|
<p>I would like to convert an XML file into a pandas dataframe.</p>
<p>Afterwards I would like to convert the dataframe back to exactly the same XML.</p>
<p><strong>[First step]</strong>: save the XML file from a string:</p>
<pre><code>import xml.etree.ElementTree as ET

xml = '''<system>
    <station id="1" type="DeviceName Functional Test">
        <stationId>Functional_Test</stationId>
    </station>
    <software>
        <component version="1.0.0.0" date="01/01/1889">Functional_Test.exe</component>
    </software>
</system>'''

element = ET.XML(xml)
ET.indent(element)
pretty_xml = ET.tostring(element, encoding='unicode')

xml_file_name = "save_xml.xml"
with open(xml_file_name, "w") as xml_file:
    xml_file.write(pretty_xml)
</code></pre>
</code></pre>
<p><strong>[Second step]:</strong> read the XML and save it as CSV using pandas:</p>
<pre><code>import pandas as pd
df = pd.read_xml(xml_file_name)
df.to_csv("xml_to_csv.csv")
</code></pre>
<p><a href="https://i.sstatic.net/lRPe2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lRPe2.png" alt="enter image description here" /></a></p>
<p><strong>[Third step]:</strong> convert dataframe back to XML:</p>
<pre><code>back_to_xml = df.to_xml(index = False, root_name = "system", xml_declaration = False)
remove_rows = back_to_xml.replace("<row>", "").replace("</row>", "")
print(remove_rows)
</code></pre>
<p>Unfortunately the output XML is not the same as the input XML.</p>
<pre><code><system>
<id>1.0</id>
<type>DeviceName Functional Test</type>
<stationId>Functional_Test</stationId>
<component/>
<id/>
<type/>
<stationId/>
<component>Functional_Test.exe</component>
</system>
</code></pre>
<p>I would much appreciate any support and guidance.</p>
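<p>The root cause is that <code>pd.read_xml</code> deliberately flattens the children of the root into rows, discarding the tree structure, so no string surgery on the <code>to_xml</code> output can restore it. A hedged sketch of a lossless round-trip — one dataframe row per element, carrying its path, attributes, and text (it assumes element paths are unique among siblings, as in this file):</p>

```python
import xml.etree.ElementTree as ET
import pandas as pd

xml = '<system><station id="1" type="T"><stationId>Functional_Test</stationId></station></system>'

# Flatten: one row per element, keeping enough structure to rebuild
rows = []
def walk(elem, path=""):
    p = f"{path}/{elem.tag}"
    rows.append({"path": p, "attrib": dict(elem.attrib), "text": (elem.text or "").strip()})
    for child in elem:
        walk(child, p)

walk(ET.fromstring(xml))
df = pd.DataFrame(rows)

# Rebuild: recreate each element under its recorded parent path
def rebuild(df):
    nodes, root = {}, None
    for r in df.to_dict("records"):
        parent, _, tag = r["path"].rpartition("/")
        if parent in nodes:
            el = ET.SubElement(nodes[parent], tag, r["attrib"])
        else:
            el = ET.Element(tag, r["attrib"])
            root = el
        if r["text"]:
            el.text = r["text"]
        nodes[r["path"]] = el
    return root

out = ET.tostring(rebuild(df), encoding="unicode")
print(out == xml)
```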
|
<python><pandas><xml><dataframe>
|
2024-02-26 18:35:43
| 1
| 1,051
|
Gооd_Mаn
|
78,063,035
| 2,075,630
|
How to use context managers with callback functions?
|
<p>How do you correctly use resources when the resource is expected to be released asynchronously?</p>
<p>Motivated by <code>tempfile.TemporaryDirectory</code> emitting a warning, when the temporary directory is cleaned up by the destructor rather than explicitly, and this leading to problems in <code>PYTHONSTARTUP</code>. Note that the parameter <code>ignore_cleanup_errors</code> does not affect the <em>warning</em>.</p>
<h2>Toy example with lexical closures.</h2>
<p>Consider e.g. the following nonsense example:</p>
<pre><code>import tempfile, os
def make_writer():
tmpdir = tempfile.TemporaryDirectory()
counter = 0
def writer():
nonlocal counter
counter += 1
filename = os.path.join(tmpdir.name, f"{counter}.dat")
with open(filename, "w") as fout:
print(f"{counter = }", file=fout)
os.system(f"tree '{tmpdir.name}'")
return writer
writer = make_writer()
writer()
writer()
</code></pre>
<p>This will output e.g.</p>
<pre><code>/tmp/tmp9ke4swfe
└── 1.dat
0 directories, 1 file
/tmp/tmp9ke4swfe
├── 1.dat
└── 2.dat
0 directories, 2 files
/usr/local/lib/python3.11/tempfile.py:934: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmp9ke4swfe'>
_warnings.warn(warn_message, ResourceWarning)
</code></pre>
<p>Normally, I'd rewrite this to somehow use a context manager, but what, if <code>writer</code> is used as e.g. an asynchronous callback? Or a utility to be used in interactive python shells? In that case, using a context manager doesn't seem possible.</p>
<h2>Toy example with a class.</h2>
<p>So, next I tried to write a wrapper class, that explicitly cleans up the temporary directory, when <em>it</em> is destroyed. Remember, this is going into <code>PYTHONSTARTUP</code>, so we can't explicitly clean up the object.</p>
<pre><code>import tempfile, os, sys
class Writer:
def __init__(self):
self.tempdir = tempfile.TemporaryDirectory()
self.counter = 0
def __call__(self):
self.counter += 1
filename = os.path.join(self.tempdir.name, f"{self.counter}.dat")
with open(filename, "w") as fout:
print(f"{self.counter = }", file=fout)
os.system(f"tree '{self.tempdir.name}'")
def __del__(self):
sys.stdout.flush()
sys.stderr.flush()
print("Executed after the cleanup warning?")
sys.stdout.flush()
sys.stderr.flush()
self.tempdir.cleanup()
writer = Writer()
writer()
writer()
</code></pre>
<p>Which generates:</p>
<pre><code>/tmp/tmpmwrb2nhf
└── 1.dat
0 directories, 1 file
/tmp/tmpmwrb2nhf
├── 1.dat
└── 2.dat
0 directories, 2 files
/usr/local/lib/python3.11/tempfile.py:934: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmpmwrb2nhf'>
_warnings.warn(warn_message, ResourceWarning)
Executed after the cleanup warning?
</code></pre>
<p>Note that the warning is already emitted, before the destructor of the manager class is called. That part I currently don't understand at all.</p>
<h2>Toy example using generators</h2>
<p>For completeness sake, this also doesn't solve the problem:</p>
<pre><code>import tempfile, os
def make_writer():
def make_writer_obj():
with tempfile.TemporaryDirectory() as tempdir:
counter = 0
while True:
counter += 1
filename = os.path.join(tempdir, f"{counter}.dat")
with open(filename, "w") as fout:
print(f"{counter = }", file=fout)
os.system(f"tree '{tempdir}'")
yield
writer_obj = make_writer_obj()
return lambda: next(writer_obj)
writer = make_writer()
writer()
writer()
</code></pre>
<p>Output:</p>
<pre><code>/tmp/tmpa6t11t5e
└── 1.dat
0 directories, 1 file
/tmp/tmpa6t11t5e
├── 1.dat
└── 2.dat
0 directories, 2 files
/usr/local/lib/python3.11/tempfile.py:934: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmpa6t11t5e'>
_warnings.warn(warn_message, ResourceWarning)
</code></pre>
<h2>Current solution: atexit.register</h2>
<p>The current use-case of customizing the interactive shell I have solved by using</p>
<pre><code>tempdir = tempfile.TemporaryDirectory()
atexit.register(tempdir.cleanup)
</code></pre>
<p>The downside of this solution is, that it will potentially do the cleanup unnecessarily late for uses, where cleanup could be performed earlier.</p>
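<p>Two observations. First, in the class example the warning fires before <code>__del__</code> because <code>TemporaryDirectory</code>'s implicit cleanup is itself a <code>weakref.finalize</code>, and all surviving finalizers run via an <code>atexit</code> hook before module globals (and hence <code>writer</code>) are torn down. Second, <code>weakref.finalize</code> also addresses the "cleanup too late" downside: it fires as soon as the owner is collected and, unlike a plain <code>atexit.register</code>, no later than interpreter exit. A sketch (the immediate effect of <code>del</code> relies on CPython's reference counting):</p>

```python
import os
import shutil
import tempfile
import weakref

class Writer:
    def __init__(self):
        # mkdtemp: a bare directory with no warning-emitting destructor
        self.path = tempfile.mkdtemp()
        # The callback must not reference self, or it would keep self alive
        self._finalizer = weakref.finalize(
            self, shutil.rmtree, self.path, ignore_errors=True
        )

w = Writer()
p = w.path
assert os.path.isdir(p)
del w  # finalizer runs now (and would run at exit otherwise)
still_there = os.path.isdir(p)
print(still_there)  # False
```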
|
<python><python-3.x><contextmanager><resource-management>
|
2024-02-26 18:09:37
| 0
| 4,456
|
kdb
|
78,062,935
| 2,148,414
|
Snowflake ARRAY column as input to Snowpark modeling.decomposition
|
<p>I have a Snowflake table with an ARRAY column containing custom embeddings (with array size>1000).
These arrays are sparse, and I would like to reduce their dimension with SVD (or one of the Snowpark <code>ml.modeling.decomposition</code> methods).
A toy example of the dataframe would be:</p>
<pre><code>df = session.sql("""
select 'doc1' as doc_id, array_construct(0.1, 0.3, 0.5, 0.7) as doc_vec
union
select 'doc2' as doc_id, array_construct(0.2, 0.4, 0.6, 0.8) as doc_vec
""")
print(df)
# DOC_ID | DOC_VEC
# doc1 | [ 0.1, 0.3, 0.5, 0.7 ]
# doc2 | [ 0.2, 0.4, 0.6, 0.8 ]
</code></pre>
<p>However, when I try to fit this dataframe</p>
<pre><code>from snowflake.ml.modeling.decomposition import TruncatedSVD
tsvd = TruncatedSVD(input_cols = 'doc_vec', output_cols='out_svd')
print(tsvd)
out = tsvd.fit(df)
</code></pre>
<p>I get</p>
<pre><code> File "snowflake/ml/modeling/_internal/snowpark_trainer.py", line 218, in fit_wrapper_function
args = {"X": df[input_cols]}
~~^^^^^^^^^^^^ File "pandas/core/frame.py", line 3767, in __getitem__
indexer = self.columns._get_indexer_strict(key, "columns")[1]
<...snip...>
KeyError: "None of [Index(['doc_vec'], dtype='object')] are in the [columns]"
</code></pre>
<p>Based on the information in this tutorial <a href="https://github.com/Snowflake-Labs/sfquickstarts/blob/master/site/sfguides/src/text_embedding_as_snowpark_python_udf/assets/notebook.ipynb" rel="nofollow noreferrer">text_embedding_as_snowpark_python_udf</a>,
I suspect the Snowpark array needs to be converted to a <code>np.ndarray</code> before being fed to the underlying <a href="https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html" rel="nofollow noreferrer">sklearn.decomposition.TruncatedSVD</a>.</p>
<p>Can someone point me to any example using Snowflake arrays as inputs to the Snowpark models, please?</p>
|
<python><arrays><snowflake-cloud-data-platform><dimensionality-reduction>
|
2024-02-26 17:50:50
| 1
| 394
|
user2148414
|
78,062,820
| 1,014,217
|
TypeError: 'function' object is not iterable in flask with langchain
|
<h3>Description</h3>
<p>I am creating a REST API with Flask, using LangChain and AzureOpenAI; however, <code>llm.stream</code> doesn't seem to work correctly in my code.</p>
<p>How can I stream AzureOPENAI responses using flask?</p>
<h3>Example Code</h3>
<pre class="lang-py prettyprint-override"><code>@app.route('/stream2', methods=['GET'])
def stream2():
try:
user_query = request.json.get('user_query')
if not user_query:
return "No user query provided", 400
callback_handler = StreamHandler()
callback_manager = CallbackManager([callback_handler])
llm = AzureChatOpenAI(
azure_endpoint=AZURE_OPENAI_ENDPOINT,
openai_api_version=OPENAI_API_VERSION,
deployment_name=OPENAI_DEPLOYMENT_NAME,
openai_api_key=OPENAI_API_KEY,
openai_api_type=OPENAI_API_TYPE,
model_name=OPENAI_MODEL_NAME,
streaming=True,
model_kwargs={
"logprobs": None,
"best_of": None,
"echo": None
},
#callback_manager=callback_manager,
temperature=0)
@stream_with_context
async def generate():
async for chunk in llm.stream(user_query):
yield chunk
return Response(generate(), mimetype='text/event-stream')
except Exception as e:
logger.error(f"An error occurred: {e}")
return "An error occurred", 500
</code></pre>
<h3>Error Message and Stack Trace (if applicable)</h3>
<pre><code>Traceback (most recent call last):
File "c:\Users\x\repos\y-chatbot\backend-flask-appservice\.venv\lib\site-packages\werkzeug\serving.py", line 362, in run_wsgi
execute(self.server.app)
File "c:\Users\x\repos\y-chatbot\backend-flask-appservice\.venv\lib\site-packages\werkzeug\serving.py", line 325, in execute
for data in application_iter:
File "c:\Users\x\repos\y-chatbot\backend-flask-appservice\.venv\lib\site-packages\werkzeug\wsgi.py", line 256, in __next__
return self._next()
File "c:\Users\x\repos\y-chatbot\backend-flask-appservice\.venv\lib\site-packages\werkzeug\wrappers\response.py", line 32, in _iter_encoded
for item in iterable:
TypeError: 'function' object is not iterable
</code></pre>
<h3>System Info</h3>
<pre><code>langchain==0.1.0
langchain-community==0.0.12
langchain-core==0.1.12
langchainhub==0.1.14
</code></pre>
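<p>The immediate error is that Flask's WSGI layer iterates a <em>synchronous</em> iterable, and an <code>async def</code> generator (here wrapped by <code>stream_with_context</code>) isn't one. LangChain chat models also expose a synchronous <code>.stream()</code>, which fits Flask 1.x naturally. A self-contained sketch with a hypothetical stand-in for the model:</p>

```python
from flask import Flask, Response, stream_with_context

app = Flask(__name__)

def fake_llm_stream(query):
    # Stand-in for llm.stream(user_query); real chunks are message
    # objects whose text lives in chunk.content.
    yield from ["Hello", " ", "world"]

@app.route('/stream2')
def stream2():
    @stream_with_context
    def generate():
        # plain sync generator: Flask can iterate this
        for chunk in fake_llm_stream("hi"):
            yield str(chunk)
    return Response(generate(), mimetype='text/event-stream')

client = app.test_client()
body = client.get('/stream2').get_data(as_text=True)
print(body)
```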
|
<python><flask><langchain>
|
2024-02-26 17:29:48
| 1
| 34,314
|
Luis Valencia
|
78,062,685
| 7,453,074
|
Issue in iterating over Python dictionaries with lists and sub-dictionaries
|
<p>I have a data structure I have read from a file which looks like below -</p>
<pre><code>data = {"test":['[{"Day":"Monday","Device":"Android","Data":[1, 2, 3]}, {"Day":"Tuesday","Device":"Iphone","Data":[10, 20, 30]}]']}
</code></pre>
<p>I am trying to access the elements of the structure like below, peeling off one by one -</p>
<pre><code>value = data["test"]
print(value)
#['[{"Day":"Monday","Device":"Android","Data":[1, 2, 3]}, {"Day":"Tuesday","Device":"Iphone","Data":[10, 20, 30]}]']
value1 = value[0]
print(value1)
#[{"Day":"Monday","Device":"Android","Data":[1, 2, 3]}, {"Day":"Tuesday","Device":"Iphone","Data":[10, 20, 30]}]
print(value1[0])
#I get only [ as if it is a string
</code></pre>
<p>Does the incoming structure have some problems, or am I accessing it the wrong way?</p>
<p>Here is the link to reproduce - <a href="https://godbolt.org/" rel="nofollow noreferrer">https://godbolt.org/</a></p>
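<p>The inner value is a JSON-encoded <em>string</em> (note the quotes wrapping the whole <code>[...]</code>), so indexing it yields single characters. Parsing it with <code>json.loads</code> gives the list of dicts:</p>

```python
import json

data = {"test": ['[{"Day":"Monday","Device":"Android","Data":[1, 2, 3]}, {"Day":"Tuesday","Device":"Iphone","Data":[10, 20, 30]}]']}

# data["test"][0] is a str, not a list - parse it first
records = json.loads(data["test"][0])
print(records[0]["Day"])   # Monday
print(records[1]["Data"])  # [10, 20, 30]
```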
|
<python><dictionary><parsing>
|
2024-02-26 17:05:05
| 1
| 371
|
Nitron_707
|