QuestionId int64 74.8M–79.8M | UserId int64 56–29.4M | QuestionTitle stringlengths 15–150 | QuestionBody stringlengths 40–40.3k | Tags stringlengths 8–101 | CreationDate stringdate 2022-12-10 09:42:47 – 2025-11-01 19:08:18 | AnswerCount int64 0–44 | UserExpertiseLevel int64 301–888k | UserDisplayName stringlengths 3–30 ⌀ |
|---|---|---|---|---|---|---|---|---|
78,097,219
| 5,790,653
|
Python for loop comparing entries of different lists shows incorrect output
|
<p>This is my python code:</p>
<pre class="lang-py prettyprint-override"><code>statuses = [
{'id': '1', 'plan_id': '124124124', 'ip': '1.1.1.1', 'name': 'Saeed1', 'status': 'active'},
{'id': '2', 'plan_id': '124224124', 'ip': '2.2.2.2', 'name': 'Saeed2', 'status': 'suspended'},
{'id': '3', 'plan_id': '164124124', 'ip': '3.3.3.3', 'name': 'Saeed3', 'status': 'suspended'},
{'id': '4', 'plan_id': '164124124', 'ip': '4.4.4.4', 'name': 'Saeed4', 'status': 'suspended'},
{'id': '5', 'plan_id': '124124124', 'ip': '5.5.5.5', 'name': 'Saeed51', 'status': 'active'},
]
all_servers = [
{'id': '1', 'name': 'Saeed1', 'addresses': {'External_Network': [{'addr': '1.1.1.1'}]}, 'plan': 'planA', 'status': 'suspended'},
{'id': '2', 'name': 'Saeed2', 'addresses': {'External_Network': [{'addr': '6.6.6.6'}]}, 'plan': 'planB', 'status': 'suspended'},
{'id': '3', 'name': 'Saeed3', 'addresses': {'External_Network': [{'addr': '3.3.3.3'}]}, 'plan': 'planG', 'status': 'active'},
{'id': '4', 'name': 'Saeed4', 'addresses': {}, 'plan': 'planC', 'status': 'active'},
{'id': '5', 'name': 'Saeed5', 'addresses': {'External_Network': [{'addr': '8.8.8.8'}]}, 'plan': 'planA', 'status': 'suspended'},
]
all_plans = [
{'name': 'planA', 'id': '124124124'},
{'name': 'planB', 'id': '124224124'},
{'name': 'planC', 'id': '164124124'},
{'name': 'planG', 'id': '174124124'},
]
tmp = []
final = []
for status in statuses:
for server in all_servers:
for plan in all_plans:
if status['id'] == server['id'] and status['id'] not in tmp:
tmp.append(status['id'])
if (status['plan_id'] != plan['id']) and (plan['name'] != server['plan']):
final.append({'name': status['name'], 'code': 'plan_mismatch'})
if 'External_Network' not in server['addresses']:
final.append({'name': server['name'], 'code': 'has no ip'})
else:
for addr in server['addresses']['External_Network']:
if addr['addr'] != status['ip']:
final.append({'name': status['name'], 'code': 'ip_mismatch'})
if status['status'] != server['status']:
final.append({'name': status['name'], 'code': 'status_mismatch'})
if status['name'] != server['name']:
final.append({'name': status['name'], 'code': 'names_mismatch'})
</code></pre>
<p>This is the logic I'm trying to implement:</p>
<pre><code>from `statuses` and `all_servers`, first check if `id` matches, then do this:
1) if `status['plan_id'] != plan['id']` and also `plan['name'] != server['plan']`, print plan_mismatch
2) if `status['name'] != server['name']`, print names_mismatch.
3) if `status['ip'] != server['addresses']['External_Network']['addr']`, print ip_mismatch.
4) if `status['status'] != server['status']`, print status_mismatch.
</code></pre>
<p>Based on the current <code>final</code> list, these are the issues that shouldn't be here, but I don't know why:</p>
<ol>
<li>{'name': 'Saeed2', 'code': 'plan_mismatch'}</li>
<li>{'name': 'Saeed4', 'code': 'plan_mismatch'}</li>
</ol>
<p>All of the bugs to be printed should be these:</p>
<ol>
<li>Saeed1 status_mismatch</li>
<li>Saeed2 ip_mismatch</li>
<li>Saeed3 plan_mismatch</li>
<li>Saeed3 status_mismatch</li>
<li>Saeed4 has no ip</li>
<li>Saeed4 status_mismatch</li>
<li>Saeed51 names_mismatch</li>
<li>Saeed51 ip_mismatch</li>
<li>Saeed51 status_mismatch</li>
</ol>
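For reference, the per-id comparison described above can be sketched without the triple nested loop by indexing the servers and plans by id first (the dict-lookup restructuring is mine; the data is copied from the question):

```python
statuses = [
    {'id': '1', 'plan_id': '124124124', 'ip': '1.1.1.1', 'name': 'Saeed1', 'status': 'active'},
    {'id': '2', 'plan_id': '124224124', 'ip': '2.2.2.2', 'name': 'Saeed2', 'status': 'suspended'},
    {'id': '3', 'plan_id': '164124124', 'ip': '3.3.3.3', 'name': 'Saeed3', 'status': 'suspended'},
    {'id': '4', 'plan_id': '164124124', 'ip': '4.4.4.4', 'name': 'Saeed4', 'status': 'suspended'},
    {'id': '5', 'plan_id': '124124124', 'ip': '5.5.5.5', 'name': 'Saeed51', 'status': 'active'},
]
all_servers = [
    {'id': '1', 'name': 'Saeed1', 'addresses': {'External_Network': [{'addr': '1.1.1.1'}]}, 'plan': 'planA', 'status': 'suspended'},
    {'id': '2', 'name': 'Saeed2', 'addresses': {'External_Network': [{'addr': '6.6.6.6'}]}, 'plan': 'planB', 'status': 'suspended'},
    {'id': '3', 'name': 'Saeed3', 'addresses': {'External_Network': [{'addr': '3.3.3.3'}]}, 'plan': 'planG', 'status': 'active'},
    {'id': '4', 'name': 'Saeed4', 'addresses': {}, 'plan': 'planC', 'status': 'active'},
    {'id': '5', 'name': 'Saeed5', 'addresses': {'External_Network': [{'addr': '8.8.8.8'}]}, 'plan': 'planA', 'status': 'suspended'},
]
all_plans = [
    {'name': 'planA', 'id': '124124124'},
    {'name': 'planB', 'id': '124224124'},
    {'name': 'planC', 'id': '164124124'},
    {'name': 'planG', 'id': '174124124'},
]

# Index once by id so each status is compared against exactly one server and
# one plan, instead of re-running the checks inside a triple nested loop.
servers_by_id = {s['id']: s for s in all_servers}
plans_by_id = {p['id']: p for p in all_plans}

final = []
for status in statuses:
    server = servers_by_id.get(status['id'])
    if server is None:
        continue
    plan = plans_by_id.get(status['plan_id'])
    if plan is None or plan['name'] != server['plan']:
        final.append({'name': status['name'], 'code': 'plan_mismatch'})
    if 'External_Network' not in server['addresses']:
        final.append({'name': server['name'], 'code': 'has no ip'})
    elif any(a['addr'] != status['ip'] for a in server['addresses']['External_Network']):
        final.append({'name': status['name'], 'code': 'ip_mismatch'})
    if status['status'] != server['status']:
        final.append({'name': status['name'], 'code': 'status_mismatch'})
    if status['name'] != server['name']:
        final.append({'name': status['name'], 'code': 'names_mismatch'})
```

This produces one entry per real mismatch (nine in total for the data above), without the spurious plan_mismatch entries for Saeed2 and Saeed4.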
|
<python>
|
2024-03-03 16:59:17
| 1
| 4,175
|
Saeed
|
78,097,177
| 9,601,258
|
How to get specific unique combinations of a dataframe using only dataframe operations?
|
<p>I have data containing entries of 2 players playing a game one after another; after they both win or lose, they get a shared score (the logic behind this is unimportant and the numbers are random anyway; this is just an example to describe what I want).</p>
<p>So there are scores obtained for every possible outcome after player P1 and player P2 have played.</p>
<p>The logic of the game is not important. All I want to know is whether I can create a new dataframe of all unique combinations of these 4 players playing, using my initial dataframe. So: calculate a new score for all possible combinations of these 4 players if they all play and get a score together; let's say their total scores would be summed up.</p>
<p>Example:</p>
<pre><code>Player_1 Player_2 Player_3 Player_4 Outcome_1 Outcome_2 Outcome_3 Outcome_4 Score
P1 P2 P3 P4 win win win win 72
</code></pre>
<p>and other possible unique combinations.</p>
<p>The key is to get a score of 30 from the combination where both P1 and P2 win and get 42 from the combination where both P3 and P4 have won and sum them to create the score if these 4 players have played and they have all won.</p>
<p>I can do this by generating unique combinations etc., but in a real use case with larger parameters it takes too long and results in dirty, hard-to-read code. What I want to know is: is there a way to achieve this using only operations such as merge, groupby, join, agg, etc.?</p>
<pre><code>import pandas as pd
data = {
"Player_1": ["P1", "P1", "P1", "P1", "P2", "P2", "P2", "P2", "P1", "P1", "P1", "P1", "P3", "P3", "P3", "P3"],
"Player_2": ["P2", "P2", "P2", "P2", "P3", "P3", "P3", "P3", "P4", "P4", "P4", "P4", "P4", "P4", "P4", "P4"],
"Outcome_1": ["win", "win", "lose", "lose", "win", "win", "lose", "lose", "win", "win", "lose", "lose", "win", "win", "lose", "lose"],
"Outcome_2": ["win", "lose", "win", "lose", "win", "lose", "win", "lose", "win", "lose", "win", "lose", "win", "lose", "win", "lose"],
"Score": [30, 45, 12, 78, 56, 21, 67, 90, 15, 32, 68, 88, 42, 74, 8, 93]
}
df = pd.DataFrame(data)
print(df)
</code></pre>
<pre><code> Player_1 Player_2 Outcome_1 Outcome_2 Score
0 P1 P2 win win 30
1 P1 P2 win lose 45
2 P1 P2 lose win 12
3 P1 P2 lose lose 78
4 P2 P3 win win 56
5 P2 P3 win lose 21
6 P2 P3 lose win 67
7 P2 P3 lose lose 90
8 P1 P4 win win 15
9 P1 P4 win lose 32
10 P1 P4 lose win 68
11 P1 P4 lose lose 88
12 P3 P4 win win 42
13 P3 P4 win lose 74
14 P3 P4 lose win 8
15 P3 P4 lose lose 93
</code></pre>
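A sketch of one way this might be done with a single cross merge (hedged: `how="cross"` needs pandas 1.2+, and the disjoint-pair filter, the column suffixes, and the dedup rule are my own naming and assumptions):

```python
import pandas as pd

data = {
    "Player_1": ["P1", "P1", "P1", "P1", "P2", "P2", "P2", "P2", "P1", "P1", "P1", "P1", "P3", "P3", "P3", "P3"],
    "Player_2": ["P2", "P2", "P2", "P2", "P3", "P3", "P3", "P3", "P4", "P4", "P4", "P4", "P4", "P4", "P4", "P4"],
    "Outcome_1": ["win", "win", "lose", "lose", "win", "win", "lose", "lose", "win", "win", "lose", "lose", "win", "win", "lose", "lose"],
    "Outcome_2": ["win", "lose", "win", "lose", "win", "lose", "win", "lose", "win", "lose", "win", "lose", "win", "lose", "win", "lose"],
    "Score": [30, 45, 12, 78, 56, 21, 67, 90, 15, 32, 68, 88, 42, 74, 8, 93]
}
df = pd.DataFrame(data)

# Pair every row with every other row, then keep only pairs of games whose
# four players are all distinct (the two pairs cover 4 different players).
combined = df.merge(df, how="cross", suffixes=("_a", "_b"))
disjoint = (
    (combined["Player_1_a"] != combined["Player_1_b"])
    & (combined["Player_1_a"] != combined["Player_2_b"])
    & (combined["Player_2_a"] != combined["Player_1_b"])
    & (combined["Player_2_a"] != combined["Player_2_b"])
)
# Avoid counting (game A, game B) and (game B, game A) twice.
ordered = combined["Player_1_a"] < combined["Player_1_b"]
result = combined[disjoint & ordered].copy()
result["Score"] = result["Score_a"] + result["Score_b"]
result = result.rename(columns={
    "Player_1_a": "Player_1", "Player_2_a": "Player_2",
    "Player_1_b": "Player_3", "Player_2_b": "Player_4",
    "Outcome_1_a": "Outcome_1", "Outcome_2_a": "Outcome_2",
    "Outcome_1_b": "Outcome_3", "Outcome_2_b": "Outcome_4",
})[["Player_1", "Player_2", "Player_3", "Player_4",
    "Outcome_1", "Outcome_2", "Outcome_3", "Outcome_4", "Score"]]
```

For the sample data this yields, among others, the row `P1 P2 P3 P4 win win win win 72` from the question (30 + 42).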
|
<python><pandas><dataframe>
|
2024-03-03 16:48:58
| 1
| 925
|
Cem Koçak
|
78,097,126
| 1,290,485
|
How do I resolve the TypeError with databricks.proto for databricks-registry-webhooks?
|
<p>I am trying to create webhooks for <a href="https://docs.databricks.com/en/_extras/notebooks/source/mlflow/mlflow-model-registry-webhooks-python-client-example.html" rel="nofollow noreferrer">MLflow in Databricks</a>. However, I am getting the following TypeError when importing <code>from databricks_registry_webhooks import RegistryWebhooksClient, JobSpec, HttpUrlSpec</code></p>
<p><code>TypeError: Couldn't build proto file into descriptor pool: duplicate file name databricks.proto</code></p>
<p>It is the same issue I found on the community site <a href="https://community.databricks.com/t5/community-discussions/error-loading-databricks-proto/td-p/45303" rel="nofollow noreferrer">here</a>.</p>
|
<python><typeerror><databricks><webhooks><mlflow>
|
2024-03-03 16:36:33
| 1
| 6,832
|
Climbs_lika_Spyder
|
78,096,938
| 16,667,620
|
Selective Thresholding in OpenCV
|
<p>I am exploring OpenCV and stuck at this point.
I want to do a kind of selective thresholding on an image, so that RGB / non-text images don't get distorted. I am able to threshold to get the following result:<br>
Code:</p>
<pre><code>Mat adap_threshold(Mat img)
{
Mat img_gray;
cvtColor(img, img_gray, COLOR_BGR2GRAY);
Mat thresh1;
adaptiveThreshold(img_gray, thresh1, 255, ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, 21, 15); // BETTER
// imshow("src", thresh1);
// waitKey(0);
return thresh1;
}
</code></pre>
<p>Output :<br>
1.(Correct Result) <br>
<img src="https://i.sstatic.net/bZ9ka.jpg" height="400" />
<img src="https://i.sstatic.net/X7htO.jpg" height="400" /></p>
<p>2. (Incorrect result for a non-text / more RGB-clustered image) <br>
<br>In result 2, I only want to auto-detect the desired part and apply the threshold/enhancement there only (look at the rightmost output for the expected output). <br>
<img src="https://i.sstatic.net/LMZa7.jpg" height="400" />
<img src="https://i.sstatic.net/s7tyj.jpg" height="400" />
<br></p>
<p>3. Expected result of the above image <br>
<img src="https://i.sstatic.net/5ELin.jpg" height="400" /></p>
<p>The expected output is the result of enhancement done by optimizing the contrast and brightness of the image. I am searching for a way to <strong>identify whether a particular image should be enhanced via adaptive threshold or via a contrast and brightness fix</strong>.
How do I fix this?</p>
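One heuristic that might help make that decision (my own sketch, not an OpenCV API: scanned text pages tend to have a strongly bimodal grayscale histogram, mostly paper-white and ink-dark, while natural photos spread across the whole range):

```python
import numpy as np

def looks_like_text_page(gray, extreme_fraction=0.7):
    # gray: 2-D uint8 grayscale array (e.g. the output of cvtColor BGR2GRAY).
    # Text scans concentrate pixel mass in the darkest and brightest parts of
    # the histogram; photos distribute it more evenly across the mid-tones.
    hist, _ = np.histogram(gray, bins=32, range=(0, 256))
    hist = hist / hist.sum()
    extreme_mass = hist[:8].sum() + hist[-8:].sum()
    return bool(extreme_mass > extreme_fraction)
```

An image that passes could be routed to `adaptiveThreshold`, while one that fails would get the contrast/brightness enhancement instead; the 0.7 cutoff is a guess that would need tuning on real pages.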
|
<python><c++><opencv><image-processing><ocr>
|
2024-03-03 15:37:32
| 1
| 426
|
Ayush Yadav
|
78,096,863
| 13,130,804
|
Working outside of application context when using flask_api_key
|
<p>Based on <a href="https://pypi.org/project/flask-api-key/" rel="nofollow noreferrer">https://pypi.org/project/flask-api-key/</a> I am trying to implement:</p>
<pre><code>from flask import Flask
from flask_api_key import APIKeyManager, api_key_required
app = Flask(__name__)
my_key_manager = APIKeyManager(app)
my_key_manager.create("First_key")
@app.route("/")
def home():
return "hi Home"
@app.route("/protected")
@api_key_required
def protected():
return "hi protected"
if __name__ == "__main__":
app.run(debug=True)
</code></pre>
<p><strong>Error message:</strong></p>
<blockquote>
<p>"RuntimeError: Working outside of application context. This typically means that you attempted to use functionality that needed
to interface with the current application object in some way. To solve
this, set up an application context with app.app_context(). See the
documentation for more information."</p>
</blockquote>
|
<python><flask><api-key>
|
2024-03-03 15:14:08
| 1
| 446
|
Miguel Gonzalez
|
78,096,759
| 632,472
|
How to edit Windows file details?
|
<p>In <strong>windows explorer</strong>, there is a long list of details you can choose to view:</p>
<p><a href="https://i.sstatic.net/jLsz3.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jLsz3.jpg" alt="right click in columns and select More..." /></a></p>
<p>How can I programmatically edit these items in the files with <strong>Python</strong> or <strong>PowerShell</strong>?</p>
<p>E.g. I want all my files to have an "Anniversary" value equal to something. How can I edit it?</p>
|
<python><windows><powershell><windows-explorer>
|
2024-03-03 14:48:28
| 0
| 12,804
|
Rodrigo
|
78,095,982
| 4,429,265
|
Having one vector column for multiple text columns on Qdrant
|
<p>I have a products table that has a lot of columns, which from these, the following ones are important for our search:</p>
<ol>
<li>Title 1 to Title 6 (title in 6 different languages)</li>
<li>Brand name (in 6 different languages)</li>
<li>Category name (in 6 different languages)</li>
<li>Product attributes like size, color, etc. (in 6 different languages)</li>
</ol>
<p>We are planning on using Qdrant vector search to implement fast vector queries. But the problem is that all the data important for searching is in different columns, and <em>I do not think</em> (correct me if I am wrong) that generating vector embeddings separately for all the columns is the best solution.</p>
<p>I came up with the idea of mixing the columns together and generating separate collections; I chose this because the title, category, brand, and attribute columns are essentially the same, just in different languages.</p>
<p>Also I use the <a href="https://huggingface.co/BAAI/bge-m3" rel="nofollow noreferrer">"BAAI/bge-m3"</a> model which is a multilingual text embedding model that supports more than 100 langs.</p>
<p>So, in short, I created different collections for different languages. For each collection I have a vector column containing the vector for the combined text of title, brand, color, and category in that language. When a search happens, because we already know which language the website is in, we search in that specific language's collection.</p>
<p>Now, the question is, is this a valid method? What are the pros and cons of this method? I know for sure that when combined, I can not give different weights to different parts of this vector. For example one combined text of title, category, color, and brand may look like this:</p>
<p>"Koala patterned hoodie children blue Bubito"</p>
<p>or Something like:</p>
<p>"Striped t-shirt men navy blue Zara"</p>
<p>Now, a user may search "blue hoodie for men", but due to the unweighted structure of the combined vector, it may not retrieve the best results.</p>
<p>I may be wrong and this may be one of the best approaches, but please tell me more about the pros and cons of this method, and if you can, give me a better idea.</p>
<p>It is important to note that currently we have more than 300,000 (300K) products, and they will grow to more than 1,000,000 (1M) in the near future.</p>
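If per-field weighting turns out to matter, one alternative sketch (my own; the field names and weights are hypothetical) is to embed the fields separately once at indexing time, combine the vectors with weights, and store only the combined vector:

```python
import numpy as np

def combine_field_vectors(field_vecs, weights):
    # field_vecs: {"title": [...], "brand": [...], ...} - one embedding per field
    # weights:    {"title": 2.0, "brand": 0.5, ...}    - hypothetical importances
    dim = len(next(iter(field_vecs.values())))
    out = np.zeros(dim)
    for name, vec in field_vecs.items():
        out += weights.get(name, 1.0) * np.asarray(vec, dtype=float)
    norm = np.linalg.norm(out)
    # L2-normalize so cosine similarity in the index stays comparable across products
    return out / norm if norm > 0 else out
```

This keeps a single vector per point, so the per-language collection layout stays unchanged while still letting (say) the title dominate the brand, at the cost of one embedding call per field. Qdrant's named vectors would be the other route, storing one vector per field and weighting at query time.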
|
<python><vector><embedding><vector-search><qdrant>
|
2024-03-03 10:46:29
| 3
| 417
|
Vahid
|
78,095,746
| 20,292,449
|
FileNotFoundError: rtfparse Python package
|
<p>Has anyone experienced this with this package? I have tried many things and it does not work. I have checked whether the file exists many times, with both absolute and relative paths, and it still does not work.</p>
<pre><code>D:\pythonCodePycharmProjects\carProject\venv\Scripts\python.exe D:\pythonCodePycharmProjects\carProject\main.py
Traceback (most recent call last):
File "D:\pythonCodePycharmProjects\carProject\main.py", line 10, in <module>
parsed = parser.parse_file()
^^^^^^^^^^^^^^^^^^^
File "D:\pythonCodePycharmProjects\carProject\venv\Lib\site-packages\rtfparse\parser.py", line 66, in parse_file
file = open(self.rtf_path, mode="rb")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'D:\\pythonCodePycharmProjects\\carProject\\rtf\\test.rtf'
Process finished with exit code 1
</code></pre>
<pre><code>import pathlib
from rtfparse.parser import Rtf_Parser
from rtfparse.renderers import de_encapsulate_html
source_path = pathlib.Path(r"D:\pythonCodePycharmProjects\carProject\rtf\test.rtf")
target_path = pathlib.Path(r"D:\pythonCodePycharmProjects\carProject\rtf\test.html")
parser = Rtf_Parser(rtf_path=source_path)
parsed = parser.parse_file()
renderer = de_encapsulate_html.De_encapsulate_HTML()
with open(target_path, mode="w", encoding="utf-8") as html_file:
renderer.render(parsed, html_file)
</code></pre>
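Since the traceback shows a plain `FileNotFoundError` raised by `open()`, the parser itself is not at fault: Python simply does not see that path. On Windows a common cause is Explorer hiding the real extension, so the file on disk is actually `test.rtf.rtf`. A quick diagnostic sketch (the function is my own, not part of rtfparse):

```python
import pathlib

def diagnose_path(p):
    # Report whether the file and its directory exist, and list the
    # directory's real file names - which catches Explorer hiding a second
    # extension (e.g. "test.rtf.rtf") or a subtly different directory name.
    p = pathlib.Path(p)
    parent_exists = p.parent.exists()
    return {
        "exists": p.exists(),
        "parent_exists": parent_exists,
        "siblings": sorted(x.name for x in p.parent.iterdir()) if parent_exists else [],
    }
```

Calling `print(diagnose_path(source_path))` before constructing `Rtf_Parser` should show whether Python sees the same file name Explorer does.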
|
<python><pip><rtf>
|
2024-03-03 09:21:25
| 1
| 532
|
ayex
|
78,095,743
| 1,626,977
|
python ldap search returns error when ou is filtered
|
<p>I have a Python script to search in an LDAP server. I used to search filtering on the DC components in my base DN. Now, I want to expand my filter to OUs too. But when I use an OU in my base DN, I get the following error:</p>
<pre><code>/usr/bin/python3.6 /home/zeinab/PycharmProjects/ldap-test/main.py
Traceback (most recent call last):
File "/home/zeinab/PycharmProjects/ldap-test/main.py", line 29, in <module>
search_scope=SUBTREE)
File "/home/zeinab/.local/lib/python3.6/site-packages/ldap3/core/connection.py", line 853, in search
response = self.post_send_search(self.send('searchRequest', request, controls))
File "/home/zeinab/.local/lib/python3.6/site-packages/ldap3/strategy/sync.py", line 178, in post_send_search
responses, result = self.get_response(message_id)
File "/home/zeinab/.local/lib/python3.6/site-packages/ldap3/strategy/base.py", line 403, in get_response
raise LDAPOperationResult(result=result['result'], description=result['description'], dn=result['dn'], message=result['message'], response_type=result['type'])
ldap3.core.exceptions.LDAPOperationsErrorResult: LDAPOperationsErrorResult - 1 - operationsError - None - 000020D6: SvcErr: DSID-03100837, problem 5012 (DIR_ERROR), data 0
- searchResDone - None
</code></pre>
<p>This is my script:</p>
<pre><code>from ldap3 import Server, Connection, SIMPLE, ALL, SUBTREE
server = Server(host=host, port=port, get_info=ALL, connect_timeout=10, use_ssl=True)
connection = Connection(
server=server,
user=username,
password=password,
raise_exceptions=True,
authentication=SIMPLE,
)
connection.open()
connection.bind()
base_dn = "dc=dc1,dc=local,ou=ou0,ou=ou1"
search_filter = "(&(objectClass=person)(mail=john*))"
search_attribute = ['mail']
connection.search(search_base=base_dn,
search_filter=search_filter,
attributes=search_attribute,
search_scope=SUBTREE)
resp = connection.response
print(len(resp))
print(resp)
connection.unbind()
</code></pre>
<p><strong>EDIT 1:</strong>
I have also tried <a href="https://stackoverflow.com/a/28346124/1626977">this solution</a>:</p>
<pre><code>base_dn = "dc=dc1,dc=local,ou=ou0,ou=ou1"
search_filter = "(&(objectClass=person)(mail=john*)(ou:dn:=ou1))"
</code></pre>
<p>But I got the exact same error.</p>
<p><strong>EDIT 2:</strong>
I tried <a href="https://stackoverflow.com/a/28346124/1626977">previous solution</a> this way:</p>
<pre><code>base_dn = "dc=dc1,dc=local"
search_filter = "(&(objectClass=person)(mail=john*)(ou:dn:=ou1))"
</code></pre>
<p>And also this way:</p>
<pre><code>base_dn = "dc=dc1,dc=local"
search_filter = "(&(objectClass=person)(mail=john*)(ou:=ou1))"
</code></pre>
<p>These two didn't raise any error, but returned no results, even though I have entries matching the filter.</p>
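For what it's worth, LDAP DNs are written most-specific component first, so OU components belong before the DC components; the `dc=...,ou=...` ordering used in the question is not a valid DN, which may be why the server answers with an operationsError. A small sketch of building the base DN the other way around (the nesting order of ou0/ou1 is my assumption about the directory layout):

```python
def build_base_dn(ous, dcs):
    # ous: OU names, innermost first; dcs: domain components in order.
    # Produces e.g. "ou=ou1,ou=ou0,dc=dc1,dc=local".
    return ",".join([f"ou={o}" for o in ous] + [f"dc={d}" for d in dcs])

base_dn = build_base_dn(["ou1", "ou0"], ["dc1", "local"])
```

That `base_dn` would then be passed to `connection.search(...)` exactly as in the script above.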
|
<python><ldap><ou>
|
2024-03-03 09:18:48
| 1
| 10,557
|
Zeinab Abbasimazar
|
78,095,712
| 18,483,009
|
ValueError: The following model_kwargs are not used by the model: ['return_full_text'] when using RetrievalQA in LangChain
|
<p>I'm encountering a ValueError while using the RetrievalQA class in LangChain. The error message indicates that the model_kwargs parameter contains an unused parameter, 'return_full_text'. I've reviewed my code and ensured that this parameter is not being passed to the model initialization or the RetrievalQA instantiation.</p>
<pre class="lang-py prettyprint-override"><code>qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=retriever,
client_settings=CHROMA_SETTINGS
)
</code></pre>
<p>Despite removing the return_full_text parameter, the error persists. I suspect that the issue might be related to how the RetrievalQA class interacts with the underlying model.</p>
|
<python><large-language-model><langchain-js>
|
2024-03-03 09:08:34
| 0
| 583
|
AmrShams07
|
78,095,645
| 922,712
|
How do I get Selenium on python to connect to an existing instance of Firefox - I am unable to connect to the marionette port
|
<p>I am trying to use Selenium to connect to existing instance of Firefox - the documentation says to use something like this</p>
<pre><code>options=webdriver.FirefoxOptions()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
webdriver_service = Service(r'c:\tmp\geckodriver.exe')
driver = webdriver.Firefox(service = webdriver_service, service_args=['--marionette-port', '2828', '--connect-existing'])
</code></pre>
<p>However, I get the error</p>
<pre><code> driver = webdriver.Firefox(service = webdriver_service, service_args=['--marionette-port', '2828', '--connect-existing'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: WebDriver.__init__() got an unexpected keyword argument 'service_args'
</code></pre>
<p>I see other questions about "unexpected keyword argument"; they say the latest versions of Selenium have other ways of passing arguments, through Options.</p>
<p>I tried</p>
<pre><code>options.add_argument('--marionette-port')
options.add_argument('2828')
options.add_argument('--connect-existing')
</code></pre>
<p>But it still seems to create a new instance of Firefox</p>
<p>I have started firefox with the following arguments</p>
<p>"C:\Program Files\Mozilla Firefox\firefox.exe" -marionette -start-debugger-server 2828</p>
<p>How do I fix this?</p>
<hr />
<p>These are my versions</p>
<p>Python version</p>
<pre><code>python --version
Python 3.12.2
</code></pre>
<p>Selenium version</p>
<pre><code>pip show selenium
Name: selenium
Version: 4.18.1
</code></pre>
<p>Geckodriver version</p>
<pre><code>geckodriver --version
geckodriver 0.34.0 (c44f0d09630a 2024-01-02 15:36 +0000)
</code></pre>
<p>Firefox 123.0 (64-bit)</p>
<p>Windows 11</p>
|
<python><selenium-webdriver><firefox><automation>
|
2024-03-03 08:41:10
| 2
| 14,081
|
user93353
|
78,095,616
| 16,813,096
|
Syncing audio/video in pyav decoding
|
<p>I am making a simple video player with pyav, and using the <code>container.decode(video=0, audio=0)</code> method to decode both audio and video frames, then syncing them using PIL and pyaudio.</p>
<p>Here is the summary code/method I used:</p>
<pre class="lang-py prettyprint-override"><code>import av
import pyaudio
from PIL import ImageTk
import tkinter as tk
...
with av.open(path, "r") as self._container:
audio_stream = self._container.streams.audio[0]
p = pyaudio.PyAudio()
audio_stream = p.open(format=pyaudio.paFloat32,
channels=audio_stream.channels,
rate=audio_stream.rate, output=True)
while True:
try:
frame = next(self._container.decode(video=0, audio=0)) # decode both audio and video simultaneously
if 'Video' in repr(frame):
# show the video frame
self._current_img = frame.to_image() # converts the video frame to pil image
self._current_imgtk = ImageTk.PhotoImage(self._current_img)
self.label.config(image=self._current_imgtk) # display the image in a tk label
else:
# play the audio frame
audio_data = frame.to_ndarray().astype('float32')
interleaved_data = audio_data.T.flatten().tobytes()
audio_stream.write(interleaved_data)
except (StopIteration, av.error.EOFError, tk.TclError):
break
self._container = None
audio_stream.stop_stream()
audio_stream.close()
p.terminate()
...
</code></pre>
<p>The video/audio is properly synced when decoding is done like this (according to pts):</p>
<pre><code>video_frame
audio_frame
audio_frame
video_frame
audio_frame
audio_frame
...
</code></pre>
<p>But not all videos are decoded like this; instead, a continuous series is decoded and is not synced properly:</p>
<pre><code>video_frame
video_frame
video_frame
video_frame
audio_frame
audio_frame
audio_frame
audio_frame
audio_frame
audio_frame
video_frame
video_frame
video_frame
video_frame
...
</code></pre>
<p>It just stutters between audio and video frames.</p>
<p><strong>Is there any method to decode both kinds of frames evenly for all videos?</strong>
Or can you at least point me to any similar video player project using pyav?</p>
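Since containers are free to interleave packets unevenly, one way to even things out (a sketch without PyAV; representing frames as `(pts_seconds, kind, payload)` tuples is my own framing) is to decode each stream separately and merge by presentation timestamp:

```python
import heapq

def interleave_by_pts(video_frames, audio_frames):
    # Each argument is an iterable of (pts_seconds, kind, payload) tuples,
    # already sorted by pts within its own stream (decoders emit frames in
    # presentation order per stream). heapq.merge yields the union in global
    # pts order without reading either stream fully into memory.
    yield from heapq.merge(video_frames, audio_frames, key=lambda f: f[0])
```

With PyAV this would mean decoding the two streams separately (e.g. two `av.open` handles, one iterating `decode(video=0)` and one `decode(audio=0)`) feeding this merger, plus a clock that waits until each frame's pts before displaying or writing it.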
<p>I am actually working on this project: <a href="https://github.com/Akascape/tkVideoPlayer" rel="nofollow noreferrer">https://github.com/Akascape/tkVideoPlayer</a></p>
<p>Thank you</p>
|
<python><playback><pyaudio><video-player><pyav>
|
2024-03-03 08:31:25
| 0
| 582
|
Akascape
|
78,095,596
| 4,522,501
|
How to continue looping over a list without waiting for new messages (Telethon, Python)
|
<p>I'm new to Telethon, and I have a Telethon Python script that listens for Telegram messages. All of this works correctly, but I'm improving my script, and what I need to achieve is the following:</p>
<p>I'm looping over a list, doing something that takes around 10-15 seconds for each item. If I do this right after a message arrives, I lose those 10-15 seconds, so I decided to change my code. But now the next item is not iterated, because Telethon is waiting for the next message to arrive, and the code starts failing. I'm new to async functions; this is probably something easy, but I can't get it. Please see my code example below:</p>
<p>Working code:</p>
<pre><code>client = TelegramClient('session', API_ID, API_HASH)
@client.on(events.NewMessage(chats=chatId))
async def my_event_handler(event):
text = event.raw_text
for item in items:
# do something, it works fine, but I need to have other stuff ready
# before telegram message arrives
client.start()
client.run_until_disconnected()
</code></pre>
<p>Looking for something like this?</p>
<pre><code>for item in items:
# do something before a new message arrives
client = TelegramClient('session', API_ID, API_HASH)
@client.on(events.NewMessage(chats=chatId))
async def my_event_handler(event):
text = event.raw_text
# do all the stuff that I need to do right after message is arrived
await client.disconnect()
client.start()
client.run_until_disconnected()
</code></pre>
<p>The issue with the code above is that once the client is disconnected, the script finishes, but I want to continue with the next iteration.</p>
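One possible structure (a hedged sketch: the client/handler wiring from the question is only indicated in comments, and the per-item work is a placeholder) is to schedule the list processing as a background task on the same event loop, so it runs while the message handler stays registered:

```python
import asyncio

processed = []

async def prepare_items(items):
    # Placeholder for the 10-15 s per-item job. Real blocking work should be
    # handed off with asyncio.to_thread(...) so the Telegram event handler
    # keeps running while the items are being prepared.
    for item in items:
        await asyncio.sleep(0)  # stand-in for "do something"
        processed.append(item)

# In the real script the task would share the client's event loop, e.g.:
#   async def main():
#       asyncio.create_task(prepare_items(items))
#       await client.run_until_disconnected()
#   client.loop.run_until_complete(main())
```

This way there is no need to disconnect inside the handler at all: the loop over the list and the wait for new messages proceed concurrently.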
|
<python><for-loop><telegram><telethon>
|
2024-03-03 08:24:35
| 0
| 1,188
|
Javier Salas
|
78,095,381
| 23,106,915
|
Is it possible to host a selenium app on a server?
|
<h2>Description:</h2>
<p>Hi there, I have created a Selenium app that's interactive and changeable according to users' needs, with the help of a Streamlit interface. The app itself works perfectly fine without any issue; additionally, all the packages that need to be installed are listed in the <code>requirements.txt</code> file, so there is no package issue at all.</p>
<h2>Issue:</h2>
<p>The issue is that when I upload the app to Streamlit Cloud or any other web server such as GitHub Codespaces, the app doesn't run. The error is generated by this line:</p>
<pre class="lang-py prettyprint-override"><code>driver = webdriver.Chrome()
</code></pre>
<h2>Tried Solutions:</h2>
<p>I have tried downloading the ChromeDriver binary, placing it inside the working directory, and using the <code>external_path = "./chromedriver.exe"</code> argument, but no luck there. I am open to any suggestions, thank you.</p>
<h2>Code Snippet:</h2>
<pre class="lang-py prettyprint-override"><code>if st.button("Scrape Data"):
driver = webdriver.Chrome()
driver.get(f'https://www.airbnb.com{"" if location == f"I{comma}m flexible" else f"/s/{location}"}/homes?tab_id=home_tab&refinement_paths%5B%5D=%2Fhomes&price_filter_input_type=0&channel=EXPLORE&date_picker_type=calendar&source=structured_search_input_header&search_type=filter_change&price_filter_num_nights=6&checkin={check_in_date}&checkout={check_out_date}{f"&adults={adults_count}" if adults_count > 0 else ""}{f"&children={children_count}" if children_count > 0 else ""}{f"&infants={infants_count}" if infants_count > 0 else ""}{f"&pets={pets_count}" if pets_count > 0 else ""}&flexible_trip_lengths%5B%5D={"one_week" if additional_search_filters == False else stay}{f"&max_price={max_price}" if additional_search_filters else ""}{f"&min_price={min_price}" if additional_search_filters and min_price > 10 else ""}')
titles = driver.find_elements(By.CSS_SELECTOR, 'div.t1jojoys.atm_g3_1kw7nm4.atm_ks_15vqwwr.atm_sq_1l2sidv.atm_9s_cj1kg8.atm_6w_1e54zos.atm_fy_1vgr820.atm_7l_18pqv07.atm_cs_qo5vgd.atm_w4_1eetg7c.atm_ks_zryt35__1rgatj2.dir.dir-ltr')
subtitles = driver.find_elements(By.CSS_SELECTOR, 'span.t6mzqp7.atm_g3_1kw7nm4.atm_ks_15vqwwr.atm_sq_1l2sidv.atm_9s_cj1kg8.atm_6w_1e54zos.atm_fy_kb7nvz.atm_7l_12u4tyr.atm_am_qk3dho.atm_ks_zryt35__1rgatj2.dir.dir-ltr')
prices = driver.find_elements(By.CSS_SELECTOR, 'span._tyxjp1')
ratings = driver.find_elements(By.CSS_SELECTOR, 'span.r1dxllyb.atm_7l_18pqv07.atm_cp_1ts48j8.dir.dir-ltr')
links = driver.find_elements(By.CSS_SELECTOR, 'a.l1ovpqvx.atm_1y33qqm_1ggndnn_10saat9.atm_17zvjtw_zk357r_10saat9.atm_w3cb4q_il40rs_10saat9.atm_1cumors_fps5y7_10saat9.atm_52zhnh_1s82m0i_10saat9.atm_jiyzzr_1d07xhn_10saat9.bn2bl2p.atm_5j_8todto.atm_9s_1ulexfb.atm_e2_1osqo2v.atm_fq_idpfg4.atm_mk_stnw88.atm_tk_idpfg4.atm_vy_1osqo2v.atm_26_1j28jx2.atm_3f_glywfm.atm_kd_glywfm.atm_3f_glywfm_jo46a5.atm_l8_idpfg4_jo46a5.atm_gi_idpfg4_jo46a5.atm_3f_glywfm_1icshfk.atm_kd_glywfm_19774hq.atm_uc_x37zl0_1w3cfyq_oggzyc.atm_70_thabx4_1w3cfyq_oggzyc.atm_uc_glywfm_1w3cfyq_pynvjw.atm_uc_x37zl0_18zk5v0_oggzyc.atm_70_thabx4_18zk5v0_oggzyc.atm_uc_glywfm_18zk5v0_pynvjw.dir.dir-ltr')
for title, subtitle, price, rating, link in zip(titles, subtitles, prices, ratings, links):
if output == "Markdown":
data["Title"].append(title.text)
data["Subtitle"].append(subtitle.text)
data["Price"].append(price.text)
data["Rating"].append(f"⭐{rating.text}")
data["Link"].append(link.get_attribute("href"))
df = pd.DataFrame(data)
</code></pre>
<h2>Error:</h2>
<pre><code>Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "/mount/src/airbnb-webscraper/main.py", line 83, in <module>
driver = webdriver.Chrome()
^^^^^^^^^^^^^^^^^^
File "/home/adminuser/venv/lib/python3.11/site-packages/selenium/webdriver/chrome/webdriver.py", line 45, in __init__
super().__init__(
File "/home/adminuser/venv/lib/python3.11/site-packages/selenium/webdriver/chromium/webdriver.py", line 50, in __init__
self.service.start()
File "/home/adminuser/venv/lib/python3.11/site-packages/selenium/webdriver/common/service.py", line 102, in start
self.assert_process_still_running()
File "/home/adminuser/venv/lib/python3.11/site-packages/selenium/webdriver/common/service.py", line 115, in assert_process_still_running
raise WebDriverException(f"Service {self._path} unexpectedly exited. Status code was: {return_code}")
selenium.common.exceptions.WebDriverException: Message: Service /home/appuser/.cache/selenium/chromedriver/linux64/122.0.6261.94/chromedriver unexpectedly exited. Status code was: 127
</code></pre>
|
<python><selenium-webdriver><selenium-chromedriver><streamlit>
|
2024-03-03 06:48:21
| 1
| 546
|
AshhadDevLab
|
78,095,361
| 1,418,326
|
Not able to debug unittest in vscode for python
|
<p>I am able to run unittest for Python in VS Code, but not debug it. What did I do wrong?</p>
<p>Structure:</p>
<pre><code>myproject
\mymodule
\matrix.py
\test
\matrix_test.py
</code></pre>
<p>in matrix_test.py:</p>
<pre><code>import unittest
from mymodule import matrix
class MyTestCase(unittest.TestCase):
def test_convert(self):
self.assertEqual('abc', matrix.convert('abc'))
</code></pre>
<p>I am getting error:</p>
<pre><code>ModuleNotFoundError: No module named 'mymodule'
</code></pre>
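A likely cause (my assumption) is that the debugger starts from a different working directory than the test runner, so `myproject` is not on `sys.path`. A hedged sketch of a `.vscode/launch.json` configuration that pins both the working directory and `PYTHONPATH` for test debugging:

```json
{
    "name": "Debug Unit Tests",
    "type": "debugpy",
    "request": "launch",
    "purpose": ["debug-test"],
    "cwd": "${workspaceFolder}",
    "env": { "PYTHONPATH": "${workspaceFolder}" },
    "justMyCode": false
}
```

The `"purpose": ["debug-test"]` entry is what tells the Python extension to use this configuration when debugging tests instead of the default one.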
|
<python><python-unittest><vscode-debugger>
|
2024-03-03 06:37:31
| 0
| 1,707
|
topcan5
|
78,095,320
| 713,200
|
How to grab a text from a particular tag?
|
<p>I have the following source and I want to get text from a particular attribute of tag <code>image</code>
<a href="https://i.sstatic.net/Cokj3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cokj3.png" alt="enter image description here" /></a></p>
<p>I'm able to get to the image tag using the following xpath.</p>
<p><code>//*[name()='g' and contains(@entityid, '61042482270050282')]/*[name()='image']</code></p>
<p>But I do not know how I can grab the text that is underlined in red in the image, i.e. <code>down_state_16x16</code>, using Python in Selenium.</p>
|
<python><selenium-webdriver><xpath>
|
2024-03-03 06:23:20
| 1
| 950
|
mac
|
78,095,279
| 1,226,676
|
Create a queryset that automatically queries OneToOne objects
|
<p>I'm building a django application that has a model with multiple <code>OneToOne</code> relationships:</p>
<pre><code>class MyApp(BaseModel):
site_1 = models.OneToOneField(
Site, on_delete=models.CASCADE, related_name="site1", null=True
)
site_2 = models.OneToOneField(
Site, on_delete=models.CASCADE, related_name="site2", null=True
)
site_3 = models.OneToOneField(
Site, on_delete=models.CASCADE, related_name="site3", null=True
)
def __str__(self):
return self.title
class Meta:
verbose_name = "Main Application"
verbose_name_plural = "Main Application"
</code></pre>
<p>I'm trying to create a view that returns the full JSON object with all data. When I create a queryset using <code>MyApp.objects.all()</code> it returns an object with just the primary keys:</p>
<pre><code> {
"model": "mesoamerica.mesoamericaapp",
"pk": 1,
"fields": {
"site_1": 1,
"site_2": 2,
"site_3": 3
    }
}
</code></pre>
<p>I can go ahead and manually do the queries, convert back and forth from json, and manually link to the querysets:</p>
<pre><code> app = MyApp.objects.all().first()
sites = Site.objects.filter(id__in=(app.site_1.pk, app.site_2.pk, app.site_3.pk))
# look up sites and attach data
serializer = JSONSerializer()
app_data = serializer.serialize([app])
site_data = serializer.serialize(sites)
data_dict = json.loads(app_data)
data_dict[0]["fields"]["site_1"] = json.loads(site_data)[0]
data_dict[0]["fields"]["site_2"] = json.loads(site_data)[1]
data_dict[0]["fields"]["site_3"] = json.loads(site_data)[2]
app_data = json.dumps(data_dict)
return HttpResponse(app_data, content_type="application/json")
</code></pre>
<p>and this gives me the result that I'm looking for.</p>
<p>However, I'd like to create a queryset that grabs each of the <code>Site</code> models by pk and attaches them to the query, rather than having to do that myself. Is there a queryset that will do this?</p>
|
<python><django><django-queryset>
|
2024-03-03 06:00:35
| 1
| 5,568
|
nathan lachenmyer
|
78,094,992
| 4,159,833
|
How to obtain and click the URL of a dynamically loaded website?
|
<p>I would like to scrape the marathon results from the link (call it page A): <a href="https://www.marathon.tokyo/2023/result/index.php" rel="nofollow noreferrer">https://www.marathon.tokyo/2023/result/index.php</a></p>
<p>Suppose I choose the 'Marathon Men' in the first option and then search, I get to the following webpage showing the results (call it page B):</p>
<p><a href="https://i.sstatic.net/kukBa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kukBa.png" alt="enter image description here" /></a></p>
<p>When I click the names, I then get to the result of each individual athlete (page C):</p>
<p><a href="https://i.sstatic.net/wI1kE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wI1kE.png" alt="enter image description here" /></a></p>
<p>My question is, how to get from page A to page C? I have no problems scraping the data I want from page C. The problem is getting from page A to B, obtain all the URLs pointing to the individual result entry (page C), and then navigate to page C.</p>
<p>To get from page A to page B, I have something like the following:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
url = 'https://www.marathon.tokyo/2023/result/index.php'
driver = webdriver.Chrome()
driver.get(url)
options = driver.find_elements(By.TAG_NAME, "option")
for option in options:
if 'Marathon Men' in option.text:
print(option.text)
option.click() # click on the option 100
break
</code></pre>
<p>It does automatically select the correct option (Marathon Men), but I don't know how to click the 'search' button.</p>
<p>To get from page B to C, I try the following code while at page B:</p>
<pre><code>raw_links = driver.find_elements(By.XPATH, '//a [@href]')
for link in raw_links:
l = link.get_attribute("href")
print("raw_link:{}".format(l))
</code></pre>
<p>And I get the following output:</p>
<pre><code>raw_link:javascript:page(2);
raw_link:javascript:page(3);
raw_link:javascript:page(4);
# and so on
</code></pre>
<p>Again, the problem is I don't know how to convert those to clickable URLs and navigate to them.</p>
<p>Any help to get me started would be greatly appreciated.</p>
|
<javascript><python><selenium-webdriver><web-scraping><beautifulsoup>
|
2024-03-03 03:09:51
| 1
| 3,068
|
Physicist
|
78,094,964
| 2,739,700
|
Example for Azure Alert create using Python SDK
|
<p>I am trying to create an Azure alert using the Python SDK. Below is the code:</p>
<pre><code>from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import RuleMetricDataSource
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor.models import ThresholdRuleCondition
from azure.mgmt.monitor.models import RuleEmailAction
subscription_id = 'xxxxxx-xxxxxxxxxx-xxxxxxxxx'
resource_group_name = 'example-rg'
vm_name = 'example_vm'
# Set up the credentials
credentials = DefaultAzureCredential()
resource_id = (
"subscriptions/{}/"
"resourceGroups/{}/"
"providers/Microsoft.Compute/virtualMachines/{}"
).format(subscription_id, resource_group_name, vm_name)
# create client
client = MonitorManagementClient(
credentials,
subscription_id
)
# I need a subclass of "RuleDataSource"
data_source = RuleMetricDataSource(
resource_uri=resource_id,
metric_name='Percentage CPU'
)
# I need a subclasses of "RuleCondition"
rule_condition = ThresholdRuleCondition(
data_source=data_source,
operator='GreaterThanOrEqual',
threshold=90,
window_size='PT5M',
time_aggregation='Average'
)
# I need a subclass of "RuleAction"
rule_action = RuleEmailAction(
send_to_service_owners=True,
custom_emails=[
'abc@gmail.com'
]
)
rule_name = 'MyPyTestAlertRule'
my_alert = client.alert_rules.create_or_update(
resource_group_name,
rule_name,
{
'location': 'North Central US',
'alert_rule_resource_name': rule_name,
'description': 'Testing Alert rule creation',
'is_enabled': True,
'condition': rule_condition,
'actions': [
rule_action
]
}
)
</code></pre>
<p>Unfortunately I am getting the error below, and it is difficult to find any valid example code on the website. It would be really great if somebody could provide an example of creating an Azure alert using the Python SDK.</p>
<p>As per the error, classic alert rules based on this metric are no longer supported.</p>
<pre><code>Traceback (most recent call last):
File "/Users/testuser/test/alert/test.py", line 71, in <module>
my_alert = monitor_client.alert_rules.create_or_update(
File "/usr/local/lib/python3.10/site-packages/azure/core/tracing/decorator.py", line 78, in wrapper_use_tracer
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/azure/mgmt/monitor/v2016_03_01/operations/_alert_rules_operations.py", line 370, in create_or_update
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
azure.core.exceptions.HttpResponseError: (BadRequest) Creating or editing classic alert rules based on this metric is no longer supported. To learn about new alert rules see https://aka.ms/create-metric-alerts
Code: BadRequest
Message: Creating or editing classic alert rules based on this metric is no longer supported. To learn about new alert rules see https://aka.ms/create-metric-alerts
</code></pre>
|
<python><azure><azure-monitoring><azure-alerts>
|
2024-03-03 02:51:59
| 1
| 404
|
GoneCase123
|
78,094,928
| 2,585,024
|
CommandError: Could not load shell runner: 'IPython Notebook'
|
<p>I'm having issues starting Jupyter on my system (macOS 14.2.1), even on new Django projects.</p>
<pre class="lang-bash prettyprint-override"><code>django-admin startproject mysite
cd mysite/
pip install virtualenv
virtualenv --python=$(pyenv root)/versions/3.9.18/bin/python3 env
pip install jupyter ipython django-extensions
# add django_extensions to INSTALLED_APPS
vim mysite/settings
python manage.py shell_plus --notebook
</code></pre>
<p>Running the above has the following error:</p>
<pre><code>$ python manage.py shell_plus --notebook
Traceback (most recent call last):
File "/path/to/project/mysite/env/lib/python3.9/site-packages/django_extensions/management/commands/shell_plus.py", line 281, in get_notebook
from notebook.notebookapp import NotebookApp
ModuleNotFoundError: No module named 'notebook.notebookapp'
CommandError: Could not load shell runner: 'IPython Notebook'.
</code></pre>
<p>I have a feeling the answer is something really straightforward, but I've been stuck on this for weeks now and haven't gotten to the bottom of it.</p>
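The traceback itself points at the likely cause: <code>notebook.notebookapp</code> was removed in Notebook 7, and older <code>django-extensions</code> releases still import it. Assuming that is what is happening here, either pinning the notebook package below 7 or upgrading <code>django-extensions</code> (if a release supporting Notebook 7 is available) should work — a sketch:

```shell
# Inside the project's virtualenv: either pin Notebook to the 6.x line...
pip install "notebook<7"
# ...or try moving to newer releases of the involved packages
pip install --upgrade django-extensions jupyter
```
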
|
<python><django><jupyter-notebook>
|
2024-03-03 02:33:49
| 1
| 1,262
|
Android
|
78,094,841
| 11,170,350
|
Django save post via celery
|
<p>I have a particular use case where I write text for a blog post in the admin dashboard. Then I want to generate the embeddings of the post.
I am doing this in <code>admin.py</code>:</p>
<pre><code>@admin.register(Post)
class PostAdmin(admin.ModelAdmin):
list_display = ('title', 'author', 'created_at')
readonly_fields = ('embeddings_calculated','emebddings_updated_at')
def save_model(self, request, obj, form, change):
if not obj.meta_title:
obj.meta_title = obj.title
obj.meta_description = obj.content[:160]
super().save_model(request, obj, form, change)
print("django post id",obj.id)
post = get_object_or_404(Post, id=obj.id)
print("django post title",post)
if not change: # if new post not an edit post
task_id = perform_task.delay(obj.id)
result = AsyncResult(str(task_id), app=app)
print("AsyncResultx", result)
print("AsyncResultx_state", result.state)
</code></pre>
<p>here is my celery task</p>
<pre><code>@shared_task()
def perform_task(post_id):
print("celery post id",post_id)
post = get_object_or_404(Post, id=post_id)
print("celery post title",post)
</code></pre>
<p>I am getting this error</p>
<pre><code>django.http.response.Http404: No Post matches the given query.
</code></pre>
<p>The print statements such as <code>django post id</code> and <code>django post title</code> show me the correct result, but the issue lies in Celery: I can't see <code>celery post title</code> in the console.</p>
<p>I also tried</p>
<pre><code>task_id = transaction.on_commit(lambda: perform_task.delay(obj.id))
</code></pre>
<p>But that did not work either. Without Celery everything works well, but I want to use Celery.</p>
|
<python><django><celery>
|
2024-03-03 01:23:06
| 0
| 2,979
|
Talha Anwar
|
78,094,521
| 3,259,222
|
How to shift matrix upper right triangular values to lower right
|
<p>Given the following upper triangular matrix, I need to shift the values to the bottom as shown:</p>
<pre><code>input_array = np.array([
[np.nan, 1, 2, 3],
[np.nan, np.nan, 4, 5],
[np.nan, np.nan, np.nan, 6],
[np.nan, np.nan, np.nan, np.nan],
])
result = np.array([
[np.nan, np.nan, np.nan, np.nan],
[np.nan, np.nan, np.nan, 3],
[np.nan, np.nan, 2, 5],
[np.nan, 1, 4, 6],
])
</code></pre>
<p>What is the best way to do this in numpy in terms of fewer lines and speed? Here is the solution I've got using meshgrid:</p>
<pre><code>import numpy as np
n = input_array.shape[0]
j, i = np.meshgrid(np.arange(n), np.arange(n))
i2 = i - np.arange(1,n+1)[::-1]
result = input_array[i2, j]
</code></pre>
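For comparison, an equivalent per-column view of the same transformation: column <code>j</code> simply slides down by <code>n - j</code> rows, and since the wrapped-around values are all NaN, <code>np.roll</code> can do the work without any masking — a sketch using the question's own example data:

```python
import numpy as np

input_array = np.array([
    [np.nan, 1, 2, 3],
    [np.nan, np.nan, 4, 5],
    [np.nan, np.nan, np.nan, 6],
    [np.nan, np.nan, np.nan, np.nan],
])

n = input_array.shape[0]
# Column j slides down by (n - j) rows; np.roll's wrap-around only
# recycles NaNs here, so no extra masking is needed.
result = np.column_stack(
    [np.roll(input_array[:, j], n - j) for j in range(n)]
)
```

This trades the meshgrid index arithmetic for one roll per column; for large `n` the single fancy-indexing call in the question is likely faster, but the per-column form is easier to read.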
|
<python><numpy>
|
2024-03-02 22:33:14
| 2
| 431
|
Konstantin
|
78,094,405
| 10,292,638
|
how to append a distinct value for each iteration to a pandas dataframe through a loop?
|
<p>I am trying to add the stock name value for each iteration to its corresponding bulk of data appended to a pandas dataframe:</p>
<p>This is what I've tried so far:</p>
<pre><code>from pandas_datareader import data as pdr
import requests
from bs4 import BeautifulSoup
import json, requests
import pandas as pd
import re
import numpy as np
import pandas_datareader.data as web
import yfinance as yfin
from tqdm import tqdm
import numpy as np
import datetime
from datetime import timedelta
################# fetch series names for sic ######################
sic_emisoras_df = pd.json_normalize(
json.loads(
requests.get('https://www.bmv.com.mx/es/Grupo_BMV/BmvJsonGeneric?idSitioPagina=6&mercado=CGEN_SCSOP&tipoValor=CGEN_CASEO&random=5845')
.text
.split(';(', 1)[-1]
.split(')')[0]
)['response']['resultado']
).dropna(axis=1, how='all')
####################################################################
# define time range:
start=datetime.date.today()-datetime.timedelta(days=14)
end=datetime.date.today()
# fetch data
# get all SIC names as list
stock_names = sic_emisoras_df["cveCorta"].values.tolist()
# append information per stock name
sic_market_df = pd.DataFrame([])
sic_market_df["stock_name"] = np.nan
for i in tqdm(stock_names):
# fetch data per stock_name
try:
yfin.pdr_override()
# append stock name
sic_market_df["stock_name"]=i
# fetch information by stock name
data = web.DataReader(i,start,end)
# append rows to empty dataframe
sic_market_df = sic_market_df.append(data)
except KeyError:
pass
print("Fetched sic_market_df!")
</code></pre>
<p>And the output only fetches the name for the first iteration but gets NaN for every other bulk upload:</p>
<pre><code> stock_name Open High Low Close Adj Close Volume
2024-02-20 ZS 14.500000 14.950000 14.490000 14.700000 14.700000 30253100.0
2024-02-21 ZS 14.590000 14.860000 14.570000 14.790000 14.790000 23032400.0
2024-02-22 ZS 14.940000 15.280000 14.890000 15.240000 15.240000 35702500.0
2024-02-23 ZS 15.150000 15.290000 14.950000 15.130000 15.130000 22914900.0
2024-02-26 ZS 15.130000 15.480000 15.130000 15.280000 15.280000 23675800.0
</code></pre>
<p>I would like to get a dataframe that identifies each iteration's bulk upload with its unique stock name, i.e., something like this:</p>
<pre><code> stock_name Open High Low Close Adj Close Volume
2024-02-20 ZS 14.500000 14.950000 14.490000 14.700000 14.700000 30253100.0
2024-02-21 ZS 14.590000 14.860000 14.570000 14.790000 14.790000 23032400.0
2024-02-22 ZS 14.940000 15.280000 14.890000 15.240000 15.240000 35702500.0
2024-02-23 ZS 15.150000 15.290000 14.950000 15.130000 15.130000 22914900.0
2024-02-26 ZS 15.130000 15.480000 15.130000 15.280000 15.280000 23675800.0
... ... ... ... ... ... ... ...
2024-02-20 AAPL 14.500000 14.950000 14.490000 14.700000 14.700000 30253100.0
2024-02-21 AAPL 14.590000 14.860000 14.570000 14.790000 14.790000 23032400.0
2024-02-22 AAPL 14.940000 15.280000 14.890000 15.240000 15.240000 35702500.0
2024-02-23 AAPL 15.150000 15.290000 14.950000 15.130000 15.130000 22914900.0
2024-02-26 AAPL 15.130000 15.480000 15.130000 15.280000 15.280000 23675800.0
</code></pre>
<p>package versions:</p>
<pre><code>!pip show pandas. #1.5.3
!pip show beautifulsoup4 #4.12.3
!pip show pandas-datareader #0.10.0
</code></pre>
<p>Could you please assist on how to accomplish this?</p>
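The root cause is that <code>sic_market_df["stock_name"] = i</code> assigns the name onto the accumulator frame rather than onto the freshly downloaded bulk, so names and rows never line up. Stamping the name onto each downloaded frame and concatenating once at the end fixes it (and also sidesteps <code>DataFrame.append</code>, which was removed in pandas 2.x). A minimal sketch of the pattern, with a dummy <code>fetch</code> standing in for the <code>web.DataReader</code> call:

```python
import pandas as pd

# Stand-in for the per-ticker download (web.DataReader in the question).
def fetch(stock_name):
    return pd.DataFrame({"Open": [1.0, 2.0], "Close": [1.5, 2.5]})

frames = []
for name in ["ZS", "AAPL"]:
    data = fetch(name)
    data["stock_name"] = name  # stamp the name on THIS bulk, not the accumulator
    frames.append(data)

# One concat at the end replaces the removed DataFrame.append
sic_market_df = pd.concat(frames)
```

Collecting the pieces in a list and concatenating once is also much faster than growing the dataframe inside the loop.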
|
<python><pandas><sorting><for-loop><row>
|
2024-03-02 21:46:11
| 1
| 1,055
|
AlSub
|
78,094,377
| 1,743,843
|
Handling "InterfaceError: another operation is in progress" with Async SQLAlchemy and FastAPI
|
<p>I'm developing a FastAPI application where I use SQLModel with the <code>asyncpg</code> driver for asynchronous database operations. Despite following the asynchronous patterns and ensuring proper await usage on database calls, I encounter the following error during my <code>pytest</code> tests:</p>
<p><code>InterfaceError: cannot perform operation: another operation is in progress</code></p>
<p>This error arises when executing database operations, seemingly due to concurrent access or overlapping database transactions. I've tried ensuring that each test and request uses its own <code>AsyncSession</code> and that all sessions and transactions are properly closed and committed.</p>
<pre><code>import random
import string
import pytest
import pytest_asyncio
from httpx import AsyncClient, ASGITransport
from main import app # Make sure this import points to your FastAPI app instance
@pytest_asyncio.fixture
async def client():
async with AsyncClient(transport=ASGITransport(app=app), base_url="http://test") as client:
yield client
@pytest_asyncio.fixture
def generate_random_phone_number():
def _generate(length=10):
return ''.join(random.choices(string.digits, k=length))
return _generate
@pytest_asyncio.fixture
def generate_random_phone_prefix():
def _generate():
prefix_length = random.randint(1, 3)
return '+' + ''.join(random.choices(string.digits, k=prefix_length))
return _generate
@pytest.mark.asyncio
async def test_create_user(client: AsyncClient, generate_random_phone_number, generate_random_phone_prefix):
user_data = {
"phone_number": generate_random_phone_number(),
"phone_prefix": generate_random_phone_prefix()
}
response = await client.post("/api/user/", json=user_data)
assert response.status_code == 201
data = response.json()
assert data["phone_number"] == user_data["phone_number"]
assert data["phone_prefix"] == user_data["phone_prefix"]
@pytest.mark.asyncio
async def test_duplicate_user(client: AsyncClient, generate_random_phone_number, generate_random_phone_prefix):
phone_number = generate_random_phone_number()
phone_prefix = generate_random_phone_prefix()
user_data = {
"phone_number": phone_number,
"phone_prefix": phone_prefix
}
await client.post("/api/user/", json=user_data)
response = await client.post("/api/user/", json=user_data)
assert response.status_code == 400
data = response.json()
assert data["detail"] == "A user with the given phone number and prefix already exists."
@pytest.mark.asyncio
async def test_create_interest(client: AsyncClient, generate_random_phone_number, generate_random_phone_prefix):
# First, create a user
user_data = {
"phone_number": generate_random_phone_number(),
"phone_prefix": generate_random_phone_prefix()
}
user_response = await client.post("/api/user/", json=user_data)
assert user_response.status_code == 201
user = user_response.json()
interest_data = {
"topic": "Sample Topic",
"found": 1,
"search": True
}
headers = {"User-ID": str(user["id"])}
response = await client.post("/api/interest/", json=interest_data, headers=headers)
assert response.status_code == 201
interest = response.json()
assert interest["topic"] == interest_data["topic"]
assert interest["found"] == interest_data["found"]
assert interest["search"] == interest_data["search"]
@pytest.mark.asyncio
async def test_get_interest(client: AsyncClient, generate_random_phone_number, generate_random_phone_prefix):
user_data = {
"phone_number": generate_random_phone_number(),
"phone_prefix": generate_random_phone_prefix()
}
user_response = await client.post("/api/user/", json=user_data)
assert user_response.status_code == 201
user = user_response.json()
headers = {"User-ID": str(user["id"])}
interest_1 = {
"topic": "Sample Topic1",
"found": 1,
"search": True
}
response = await client.post("/api/interest/", json=interest_1, headers=headers)
assert response.status_code == 201
interest_2 = {
"topic": "Sample Topic2",
"found": 0,
"search": True
}
response = await client.post("/api/interest/", json=interest_2, headers=headers)
assert response.status_code == 201
response = await client.get("/api/interest/", headers=headers)
interests = response.json()
assert len(interests) == 2
# Validate the content of the first interest object
interest_1_response = interests[0]
assert interest_1_response["topic"] == "Sample Topic1"
assert interest_1_response["found"] == 1
assert interest_1_response["search"] is True
assert "created_at" in interest_1_response
assert "updated_at" in interest_1_response
assert interest_1_response["created_at"] <= interest_1_response["updated_at"]
# Validate the content of the second interest object
interest_2_response = interests[1]
assert interest_2_response["topic"] == "Sample Topic2"
assert interest_2_response["found"] == 0
assert interest_2_response["search"] is True
assert "created_at" in interest_2_response
assert "updated_at" in interest_2_response
assert interest_2_response["created_at"] <= interest_2_response["updated_at"]
</code></pre>
<p>I've also ensured that my <code>AsyncClient</code> for testing is properly set up and that each test function is marked with <code>@pytest.mark.asyncio</code> to run in an async context.</p>
<p>I'm looking for insights or solutions to properly handle this error and ensure that my asynchronous database operations don't conflict with each other.</p>
<p><strong>Update</strong>
Here is all the code.
Endpoints:</p>
<pre><code>@asynccontextmanager
async def lifespan(app: FastAPI):
    await init_db()
    yield
app = FastAPI(lifespan=lifespan)
allowed_origins = [
"http://127.0.0.1:5173",
]
app.add_middleware(
CORSMiddleware,
allow_origins=allowed_origins, # List of allowed origins
allow_credentials=True,
allow_methods=["*"], # Allows all methods
allow_headers=["*"], # Allows all headers
)
def user_id_from_header(user_id: str = Header(...)) -> str:
if not user_id:
raise HTTPException(status_code=400, detail="User-ID is missing")
return user_id
# Health
@app.get("/")
async def health():
return {"health": "ok"}
# User
@app.get("/api/user/{user_id}", status_code=status.HTTP_200_OK, response_model=UserRead)
async def get_user(*, session: AsyncSession = Depends(get_session), user_id: uuid.UUID):
user = await session.get(User, user_id)
if not user:
raise HTTPException(status_code=404, detail="User not found")
return user
@app.post("/api/user/", response_model=UserRead, status_code=status.HTTP_201_CREATED)
async def create_user(*, user_create: UserCreate, session: AsyncSession = Depends(get_session)):
existing_user = await session.exec(
select(User).where(
User.phone_number == user_create.phone_number,
User.phone_prefix == user_create.phone_prefix
)
)
if existing_user.first():
raise HTTPException(
status_code=400,
detail="A user with the given phone number and prefix already exists."
)
db_user = User.model_validate(user_create)
session.add(db_user)
await session.commit()
await session.refresh(db_user)
return db_user
# Interest
@app.post("/api/interest/", response_model=InterestRead, status_code=status.HTTP_201_CREATED)
async def create_interest(*, interest_create: InterestCreate, session: AsyncSession = Depends(get_session),
user_id: uuid.UUID = Depends(user_id_from_header)):
user = await session.get(User, user_id)
if not user:
raise HTTPException(status_code=404, detail="User not found")
db_interest = Interest(**interest_create.model_dump(), user=user)
session.add(db_interest)
await session.commit()
await session.refresh(db_interest)
return db_interest
@app.get("/api/interest/", response_model=List[InterestRead])
async def read_interests(*, user_id: uuid.UUID = Depends(user_id_from_header),
session: AsyncSession = Depends(get_session)):
interests = select(Interest).where(Interest.user_id == user_id)
results = await session.exec(interests)
return results
</code></pre>
<p>DB connector</p>
<pre><code>engine = create_async_engine("postgresql+asyncpg://xxxxxxxxx:xxxx@127.0.0.1:5432/dobotsvc", echo=True, future=True)
async def init_db():
async with engine.begin() as conn:
# await conn.run_sync(SQLModel.metadata.drop_all)
await conn.run_sync(SQLModel.metadata.create_all)
async def get_session() -> AsyncSession:
async_session = sessionmaker(bind=engine, class_=AsyncSession, expire_on_commit=False)
async with async_session() as session:
yield session
</code></pre>
<p>and models</p>
<pre><code>class UserBase(SQLModel):
id: UUID = Field(default_factory=uuid4, primary_key=True)
phone_number: str = Field(max_length=255)
phone_prefix: str = Field(max_length=10)
class User(UserBase, table=True):
__table_args__ = (
UniqueConstraint("phone_number", "phone_prefix", name="phone_numbe_phone_prefix_constraint"),
)
registered_at: datetime = Field(sa_column=sa.Column(sa.DateTime(timezone=True), nullable=False),
default_factory=lambda: datetime.now(timezone.utc))
interests: List["Interest"] = Relationship(back_populates="user")
class UserRead(UserBase):
pass
class UserCreate(UserBase):
pass
class InterestBase(SQLModel):
id: Optional[int] = Field(default=None, primary_key=True)
topic: str = Field(max_length=100)
found: int = 0
search: bool = Field(default=False)
created_at: datetime = Field(sa_column=sa.Column(sa.DateTime(timezone=True), nullable=False),
default_factory=lambda: datetime.now(timezone.utc))
updated_at: datetime = Field(sa_column=sa.Column(sa.DateTime(timezone=True), nullable=False),
default_factory=lambda: datetime.now(timezone.utc))
class Interest(InterestBase, table=True):
user_id: UUID = Field(foreign_key="user.id")
user: User = Relationship(back_populates="interests")
proposals: List["Proposal"] = Relationship(back_populates="interest")
class InterestCreate(InterestBase):
pass
class InterestRead(InterestBase):
pass
class ProposalBase(SQLModel):
id: Optional[int] = Field(default=None, primary_key=True)
interest_id: int = Field(foreign_key="interest.id")
created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
text: str
class Proposal(ProposalBase, table=True):
interest: Interest = Relationship(back_populates="proposals")
</code></pre>
|
<python><asynchronous><fastapi><sqlmodel>
|
2024-03-02 21:32:32
| 0
| 34,339
|
softshipper
|
78,094,364
| 16,332,690
|
using list of strings in Numba jitclass
|
<p>What is the proper way to include a list of strings in a Numba jitclass? The documentation here is very limited and I am currently encountering DeprecationWarnings.</p>
<p>Should I use an array of strings instead?</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numba.experimental import jitclass
from numba import types
spec = [
('datetime', types.NPDatetime('s')),
('strings', types.List(types.unicode_type)),
]
@jitclass(spec)
class DateTimeStringClass:
def __init__(self, datetime, strings):
self.datetime = datetime
self.strings = strings
# Example usage
datetime_obj = np.datetime64('2024-03-02 02:00:00')
string_list = ['string1', '323', 'string3']
obj = DateTimeStringClass(datetime_obj, string_list)
</code></pre>
<pre><code><string>:3: NumbaPendingDeprecationWarning:
Encountered the use of a type that is scheduled for deprecation: type 'reflected list' found for argument 'strings' of function 'DateTimeStringClass.__init__'.
For more information visit https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-reflection-for-list-and-set-types
File "<stdin>", line 3:
<source missing, REPL/exec in use?>
C:\numba\core\ir_utils.py:2172: NumbaPendingDeprecationWarning:
Encountered the use of a type that is scheduled for deprecation: type 'reflected list' found for argument 'strings' of function 'ctor'.
For more information visit https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-reflection-for-list-and-set-types
File "<string>", line 2:
<source missing, REPL/exec in use?>
warnings.warn(NumbaPendingDeprecationWarning(msg, loc=loc))
</code></pre>
|
<python><numba>
|
2024-03-02 21:27:22
| 0
| 308
|
brokkoo
|
78,094,346
| 395,857
|
How can I find out the location of the endpoint when using openai Python library and Azure OpenAI?
|
<p>E.g., when I make a basic Azure OpenAI request, I don't see the endpoint in the response object:</p>
<pre><code>#Note: This code sample requires OpenAI Python library version 1.0.0 or higher.
import json
import pprint
from openai import AzureOpenAI
client = AzureOpenAI(
azure_endpoint = "https://xxxxxx.openai.azure.com/",
api_key='xxxxxxxxxxxxxxxxxxxxx',
api_version="2023-07-01-preview"
)
message_text = [{"role":"system","content":"You are an AI assistant that helps people find information."}]
completion = client.chat.completions.create(
model="gpt-4xxxxxxxx",
messages = message_text,
temperature=0.7,
max_tokens=800,
top_p=0.95,
frequency_penalty=0,
presence_penalty=0,
stop=None
)
print('completion:\n')
pprint.pprint(completion)
# Convert Python object to JSON
json_data = json.dumps(completion, default=lambda o: o.__dict__, indent=4)
# Print JSON
print(json_data)
</code></pre>
<p>Looking at the output, the response object <code>completion</code> contains:</p>
<pre><code>ChatCompletion(id='chatcmpl-xxxxxxxxx', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Great! How can I assist you today?', role='assistant', function_call=None, tool_calls=None), content_filter_results={'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}})], created=1709313222, model='gpt-4', object='chat.completion', system_fingerprint='fp_xxxxx', usage=CompletionUsage(completion_tokens=9, prompt_tokens=18, total_tokens=27), prompt_filter_results=[{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}])
</code></pre>
<p>How can I find out the location of the endpoint when using openai Python library and Azure OpenAI?</p>
<hr />
<p>I know that one may view the location on <a href="https://portal.azure.com/" rel="nofollow noreferrer">https://portal.azure.com/</a>:</p>
<p><a href="https://i.sstatic.net/CFwjd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CFwjd.png" alt="enter image description here" /></a></p>
<p>but I don't have access to all Azure OpenAI instances that I work with in my account.</p>
|
<python><openai-api><azure-openai><gpt-4>
|
2024-03-02 21:20:36
| 1
| 84,585
|
Franck Dernoncourt
|
78,094,136
| 1,830,639
|
How to create strongly typed langchain Runnables and pass data through several steps?
|
<p>The latest langchain LCEL enable us to create <code>Runnable</code>s. <code>Runnable</code> abstraction can be used for a lot of things, even outside of <em>chains</em> or <em>prompts</em>.</p>
<p>In my scenario, as part of the chain pipeline, the first steps are not LLM or Prompts.</p>
<p>I want that every step (<code>Runnable</code>) receives a strongly typed <em>input</em> named <code>ElementSelectionContext</code>, but they can <em>output</em> different data. As part of the pipeline, between every step, I need to update the <code>ElementSelectionContext</code> fields with the output of the previous step, and pass it to the next step.</p>
<p>Enough talking, here's the code attempt</p>
<pre class="lang-py prettyprint-override"><code>class ElementSelectionContext(BaseModel):
element_name: str = Field(frozen=True)
objective: str = Field(frozen=True)
page: Optional[Analysis] = None
class ElementSelectionPipeline(RunnableSerializable[ElementSelectionContext, Optional[Element]]):
def invoke(self, input: ElementSelectionContext, config: RunnableConfig | None = None) -> Optional[InterestingElement]:
page = Page()
exact_match = ExactMatch()
pipeline = (
page
| {"page": RunnablePassthrough()} # WHAT do I add here to update the context and pass it through the next step??
| exact_match
)
return pipeline.invoke(input)
class Page(RunnableSerializable[ElementSelectionContext,Analysis]):
def invoke(self, input: ElementSelectionContext, config: RunnableConfig | None = None) -> Analysis:
current_dir = os.path.dirname(os.path.abspath(__file__))
file_path = os.path.join(current_dir, 'sample.json')
with open(file_path, 'r', encoding="utf-8") as file:
data = json.load(file)
response = Analysis(**data)
return response
class ExactMatch(RunnableSerializable[Analysis, Optional[Element]]):
def invoke(self, input: ElementSelectionContext, config: RunnableConfig | None = None) -> Optional[Element]:
#print(input)
# ERROR in the next line since the 'input' is a dictionary, not 'ElementSelectionContext'
first_value = next(iter(input.page.map.values()))
return first_value
# USAGE EXAMPLE
pipeline = ElementSelectionPipeline()
response = pipeline.invoke(ElementSelectionContext(element_name="joba", objective="joba"))
</code></pre>
<ol>
<li>In the <code>ExactMatch</code> the <code>input</code> is a <code>dict</code> and not <code>ElementSelectionContext</code>.</li>
<li>In the <code>ElementSelectionPipeline</code> I do not know how to update the <code>ElementSelectionContext</code> instance and pass it through the <code>ExactMatch</code>.</li>
</ol>
<p>Any insights?</p>
|
<python><langchain><py-langchain>
|
2024-03-02 19:56:59
| 0
| 1,033
|
JobaDiniz
|
78,093,891
| 1,043,882
|
Ensuring Deterministic Outputs in Neural Network Training
|
<p>I am new to neural networks and currently working with TensorFlow. For an experiment, I would like to build a model that consistently produces the same output for identical inputs. However, my initial attempt using a trivial test and setting the <code>batch_size</code> equal to the size of the training data did not achieve this goal:</p>
<pre><code>model = keras.Sequential([keras.layers.Dense(1)])
model.compile( loss="MSE", metrics=[keras.metrics.BinaryAccuracy()])
model.fit(
training_inputs,
training_targets,
epochs=5,
batch_size=1000,
validation_data=(val_inputs, val_targets)
)
</code></pre>
<p>I suspect that the default optimizer, <code>SGD</code> (Stochastic Gradient Descent), might be causing random outputs.</p>
<p>My questions are:</p>
<ul>
<li>Are there any other factors in the above code, besides the default optimizer (<code>SGD</code>), that can introduce randomness into the output of the above neural network model?</li>
<li>How can I modify the provided code to ensure that the model produces the same output for the same input?</li>
</ul>
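<p>For what it's worth, the seeding idea itself can be sketched with the stdlib alone. In the snippet below the "weights" are just random floats; in TensorFlow the equivalent one-liner would be <code>tf.keras.utils.set_random_seed(seed)</code> (plus <code>tf.config.experimental.enable_op_determinism()</code> for op-level determinism) — mentioned here only as pointers, not as tested TF code:</p>

```python
import random

def init_weights(n, seed=None):
    # Weight initialization is one source of run-to-run randomness beyond
    # the optimizer itself; fixing the seed makes it reproducible.
    rng = random.Random(seed)
    return [rng.uniform(-0.05, 0.05) for _ in range(n)]

# Same seed -> identical "initial weights"; no seed -> no such guarantee.
a = init_weights(4, seed=42)
b = init_weights(4, seed=42)
print(a == b)  # True
```

<p>Data shuffling during <code>fit</code> is another seeded source of randomness; with <code>batch_size</code> equal to the dataset size the shuffle order stops mattering, but the initialization still does.</p>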
<p>Thank you for your assistance.</p>
|
<python><tensorflow><machine-learning><keras><neural-network>
|
2024-03-02 18:30:36
| 1
| 14,064
|
hasanghaforian
|
78,093,582
| 1,592,764
|
Tuple problem with parameterized SQLite query
|
<p>I'm working on a telethon-based telegram chatbot that can query a customer db given a last name in the following format: <code>/search thompson</code>, but am having some trouble using the fill function to keep the queries safe.</p>
<p>I'm getting one of two errors - one is <code>tuple index out of range</code> when accessing <code>query</code> directly, and <code>Incorrect number of bindings supplied. The current statement uses 1, and there are 8 supplied.</code> when I use join (method 2 above). What am I doing incorrectly here?</p>
<p>UPDATE - this is the full error traceback followed by the code:</p>
<pre><code>Traceback (most recent call last):
File "/Users/.../.../.../.../script.py", line 155, in select
test_message = create_message_select_query(res)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/.../.../.../.../script.py", line 115, in create_message_select_query
lname = i[1]
~^^^
IndexError: tuple index out of range
</code></pre>
<p>And associated code:</p>
<pre><code># create message listing matches
def create_message_select_query(ans):
text = ""
for i in ans:
id = i[0]
lname = i[1]
fname = i[2]
creation_date = i[3]
text += "<b>"+ str(id) +"</b> | " + "<b>"+ str(lname) +"</b> | " + "<b>"+ str(fname)+"</b> | " + "<b>"+ str(creation_date)+"</b>\n"
message = "Information about customers:\n\n"+text
return message
@client.on(events.NewMessage(pattern="(?i)/search"))
async def select(event):
try:
# Get the sender of the message
sender = await event.get_sender()
SENDER = sender.id
# Get the text of the user AFTER the /search command
list_of_words = event.message.text.split(" ")
# accessing first item
query = list_of_words[1]
sql = "select (SELECT * from customers where lname = ?)"
#args = ''.join((query))
args = query
cursor = conn.execute(sql, (args,))
res = cursor.fetchall() # fetch all the results
# If there is at least 1 row selected, print a message with matches
# The message is created using the function defined above
if(res):
test_message = create_message_select_query(res)
await client.send_message(SENDER, test_message, parse_mode='html')
        # Otherwise, print a default text
else:
text = "No matching customers found"
await client.send_message(SENDER, text, parse_mode='html')
</code></pre>
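<p>For comparison, here is a minimal self-contained sketch of single-parameter binding with <code>sqlite3</code> (an in-memory table with made-up rows; not necessarily the whole fix for the bot above). Note the plain <code>SELECT</code> without the extra wrapping parentheses, and the one-element tuple: passing a bare string as the parameters makes sqlite3 treat each of its characters as a separate binding, which is one way an "8 supplied" count for <code>thompson</code> can arise:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, lname TEXT, fname TEXT, creation_date TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'thompson', 'anna', '2024-01-01')")
conn.execute("INSERT INTO customers VALUES (2, 'smith', 'bob', '2024-01-02')")

query = "thompson"  # e.g. parsed from "/search thompson"
# One placeholder -> a one-element tuple (query,).
cursor = conn.execute("SELECT * FROM customers WHERE lname = ?", (query,))
rows = cursor.fetchall()
print(rows)  # [(1, 'thompson', 'anna', '2024-01-01')]
```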
|
<python><tuples><parameterization><sqlite3-python>
|
2024-03-02 17:02:17
| 1
| 1,695
|
Marcatectura
|
78,093,343
| 4,575,197
|
ValueError: invalid on specified as date_x, must be a column (of DataFrame), an Index or None
|
<p>I'm trying to take a monthly rolling average of the <code>Tone</code> column, grouped by the <code>ISIN</code> column. This is the df:</p>
<pre><code>import pandas as pd
# Given data lists
tone = [-0.397617, -1.217575, 0.101528, -0.736255, 1.077126]
date_x = ["2014-01-01 00:00:00", "2014-02-01 00:00:00", "2014-03-01 00:00:00", "2014-04-01 00:00:00", "2014-05-01 00:00:00"]
isin = ["DE0007664005", "DE0007664005", "DE0007664005", "DE0007664005", "DE0007664005"]
# Create DataFrame
df = pd.DataFrame({'ISIN': isin, 'date_x': date_x, 'Tone': tone})
# Convert 'date_x' to datetime
df['date_x'] = pd.to_datetime(df['date_x'])
</code></pre>
<p>so this is my code:</p>
<pre><code>news.groupby('ISIN')['Tone'].transform(lambda s: s.rolling('30D',on='date_x').mean())
</code></pre>
<p>and this is my error:</p>
<pre><code>----> 4 news.groupby('ISIN')['Tone'].transform(lambda s: s.rolling('30D',on='date_x').mean())
File c:\Users\user\anaconda3\envs\PythonCourse2023\Lib\site-packages\pandas\core\groupby\generic.py:517, in SeriesGroupBy.transform(self, func, engine, engine_kwargs, *args,
**kwargs)
514 @Substitution(klass="Series", example=__examples_series_doc)
515 @Appender(_transform_template)
516 def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
--> 517 return self._transform(
518 func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs
519 )
File c:\Users\user\anaconda3\envs\PythonCourse2023\Lib\site-packages\pandas\core\groupby\groupby.py:2021, in GroupBy._transform(self, func, engine, engine_kwargs, *args,
**kwargs) 2018 warn_alias_replacement(self, orig_func, func) 2020 if not isinstance(func, str):
-> 2021 return self._transform_general(func, engine, engine_kwargs, *args, **kwargs) 2023 elif func not in base.transform_kernel_allowlist: 2024 msg = f"'{func}' is not a valid function name for transform(name)"
File c:\Users\user\anaconda3\envs\PythonCourse2023\Lib\site-packages\pandas\core\groupby\generic.py:557, in SeriesGroupBy._transform_general(self, func, engine, engine_kwargs,
*args, **kwargs)
552 for name, group in self._grouper.get_iterator(
553 self._obj_with_exclusions, axis=self.axis
554 ):
555 # this setattr is needed for test_transform_lambda_with_datetimetz
556 object.__setattr__(group, "name", name)
--> 557 res = func(group, *args, **kwargs)
559 results.append(klass(res, index=group.index))
561 # check for empty "results" to avoid concat ValueError
Cell In[18], line 4
1 # news.set_index('date_x', inplace=True)
2
3 # news.groupby('Player').rolling(window='30D',on='date_x')['Tone'].mean()
----> 4 news.groupby('ISIN')['Tone'].transform(lambda s: s.rolling('30D',on='date_x').mean())
5 # news.reset_index(inplace=True)
File c:\Users\user\anaconda3\envs\PythonCourse2023\Lib\site-packages\pandas\core\generic.py:12573, in NDFrame.rolling(self, window, min_periods, center, win_type, on, axis, closed, step, method) 12559 if win_type is not None: 12560 return Window( 12561 self, 12562 window=window, (...) 12570 method=method, 12571 )
> 12573 return Rolling( 12574 self, 12575 window=window, 12576 min_periods=min_periods, 12577 center=center, 12578 win_type=win_type, 12579 on=on, 12580 axis=axis, 12581 closed=closed, 12582 step=step, 12583 method=method, 12584 )
File c:\Users\user\anaconda3\envs\PythonCourse2023\Lib\site-packages\pandas\core\window\rolling.py:164, in BaseWindow.__init__(self, obj, window, min_periods, center, win_type, axis, on, closed, step, method, selection)
162 self._on = Index(self.obj[self.on])
163 else:
--> 164 raise ValueError(
165 f"invalid on specified as {self.on}, "
166 "must be a column (of DataFrame), an Index or None"
167 )
169 self._selection = selection
170 self._validate()
ValueError: invalid on specified as date_x, must be a column (of DataFrame), an Index or None
</code></pre>
<p>How can I take the rolling average of Tone based on the ISIN on a monthly basis?</p>
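<p>One pattern that sidesteps the error is to put <code>date_x</code> on the index before grouping, so each per-group window can be time-based without <code>on=</code> (inside <code>transform</code> the per-group Series no longer carries the <code>date_x</code> column, hence the "invalid on" message). A sketch with two made-up rows:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ISIN": ["DE0007664005", "DE0007664005"],
    "date_x": pd.to_datetime(["2014-01-01", "2014-01-11"]),
    "Tone": [1.0, 3.0],
})

# Time-based windows need a DatetimeIndex (or on=) on the object that
# .rolling() is called on, so move date_x onto the index first.
out = (df.sort_values("date_x")
         .set_index("date_x")
         .groupby("ISIN")["Tone"]
         .rolling("30D")
         .mean())
print(out.tolist())  # [1.0, 2.0]
```

<p>The result is indexed by <code>(ISIN, date_x)</code>; <code>reset_index()</code> gets it back into column form. For a calendar-month average (rather than a trailing 30 days), a <code>pd.Grouper(freq="ME")</code> aggregation may be the better fit.</p>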
|
<python><pandas><dataframe><rolling-computation>
|
2024-03-02 15:58:22
| 1
| 10,490
|
Mostafa Bouzari
|
78,093,025
| 3,362,334
|
ERR_TUNNEL_CONNECTION_FAILED when using proxy in undetected chromedriver
|
<p>When I set up my proxy this way:</p>
<pre><code>options = uc.ChromeOptions()
options.add_argument(f'--proxy-server={host}:{port}')
</code></pre>
<p>I get ERR_TUNNEL_CONNECTION_FAILED error on many websites.</p>
<p>When I followed the advice <a href="https://stackoverflow.com/questions/65156932/selenium-proxy-server-argument-unknown-error-neterr-tunnel-connection-faile">here</a> to use desired_capabilities to set the proxy, I ran into a wall, because I can't make desired capabilities work with undetected chromedriver.</p>
<p>I tried this:</p>
<pre><code>import random
import time
from selenium.webdriver import DesiredCapabilities
from undetected_chromedriver import Chrome, ChromeOptions
# Read user agents from file
with open("user-agents.txt", "r") as file:
user_agents = file.readlines()
# Choose a random user agent
random_user_agent = random.choice(user_agents).strip()
# Generate a random number between 0 and 9
random_number = random.randint(0, 9)
# Define proxy settings
proxy_host = "gate.smartproxy.com"
proxy_port = 10000 + random_number
proxy = f"{proxy_host}:{proxy_port}"
# Set up Chrome options
options = ChromeOptions()
# Set user agent
options.add_argument(f"user-agent={random_user_agent}")
capabilities = DesiredCapabilities.CHROME.copy()
capabilities.CHROME['proxy'] = {
"httpProxy": proxy,
"ftpProxy": proxy,
"sslProxy": proxy,
"proxyType": "MANUAL",
}
# Add proxy configuration to Chrome options
options.add_argument("--disable-blink-features=AutomationControlled")
# Create the Undetected Chrome driver instance with configured options
driver = Chrome(options=options, desired_capabilities=capabilities)
# Navigate to a website to check the IP address
driver.get("https://www.whatismyip.com/")
time.sleep(10) # Allow time for the page to load
# Keep the browser open for a long time for testing purposes
time.sleep(3000000)
</code></pre>
<p>Does anyone have a solution to this problem? How can I configure the proxy to work on all websites with undetected chromedriver?</p>
|
<python><selenium-webdriver><proxy><undetected-chromedriver>
|
2024-03-02 14:24:08
| 1
| 2,228
|
user3362334
|
78,092,914
| 5,747,092
|
Django can't change username in custom User model
|
<p>I have the following User model in Django's <code>models.py</code>:</p>
<pre><code>class User(AbstractBaseUser):
username = models.CharField(max_length=30, unique=True, primary_key=True)
full_name = models.CharField(max_length=65, null=True, blank=True)
email = models.EmailField(
max_length=255, unique=True, validators=[EmailValidator()]
)
</code></pre>
<p>When trying to update a username via shell <code>python manage.py shell</code>:</p>
<pre><code>userobj = User.objects.get(username="username1")
userobj.username = userobj.username.lower()
userobj.save()
</code></pre>
<p>It seems that it is trying to create a new user, for which the UNIQUE constraint of the e-mail is violated:</p>
<pre><code>django.db.utils.IntegrityError: UNIQUE constraint failed: data_app_user.email
</code></pre>
<p>Any solutions? Thank you!</p>
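<p>The mechanism can be reproduced outside Django: because <code>username</code> is the <code>primary_key</code>, saving the object under a new key value behaves like inserting a brand-new row, and the row's other unique columns then collide. A <code>sqlite3</code> sketch of that collision (table and values are made up):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (username TEXT PRIMARY KEY, email TEXT UNIQUE)")
conn.execute("INSERT INTO user VALUES ('Username1', 'a@example.com')")

# "Saving" under a changed primary key acts as an INSERT of a new row,
# so the UNIQUE email constraint fires first.
try:
    conn.execute("INSERT INTO user VALUES ('username1', 'a@example.com')")
    collided = False
except sqlite3.IntegrityError:
    collided = True
print(collided)  # True
```

<p>Within Django, a queryset update such as <code>User.objects.filter(username=old).update(username=new)</code> issues a plain SQL <code>UPDATE</code> instead, which avoids the insert path (at the cost of skipping <code>save()</code> signals).</p>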
|
<python><django><django-models>
|
2024-03-02 13:48:43
| 2
| 383
|
Daniyal Shahrokhian
|
78,092,833
| 474,563
|
Delegate class construction leads to recursion error
|
<p>I'm building a framework and want to allow users to easily influence the construction of an object (e.g. singletons)</p>
<p>However no matter what I've tried I always get recursion. I wonder if there's a way to achieve this <strong>without altering <code>create_instance</code></strong>. I'm sure I can probably achieve this by passing all the necessary stuff from the metaclass to the function, but the key is keeping it extremely simple for end users.</p>
<pre class="lang-py prettyprint-override"><code>def create_instance(cls, *args, **kwargs):
print("CUSTOM LOGIC TO CREATE INSTANCE")
return cls(*args, **kwargs)
class PersonMetaclass(type):
def __call__(cls, *args, **kwargs):
# Delegate creation elsewhere
return create_instance(cls, *args, **kwargs)
class Person(metaclass=PersonMetaclass):
def __init__(self, *args, **kwargs):
print("Person instance created")
person = Person()
</code></pre>
<p>Output:</p>
<pre><code>CUSTOM LOGIC TO CREATE INSTANCE
CUSTOM LOGIC TO CREATE INSTANCE
CUSTOM LOGIC TO CREATE INSTANCE
CUSTOM LOGIC TO CREATE INSTANCE
..
E RecursionError: maximum recursion depth exceeded while calling a Python object
!!! Recursion detected (same locals & position)
</code></pre>
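<p>The recursion happens because <code>cls(*args, **kwargs)</code> inside <code>create_instance</code> goes straight back through <code>PersonMetaclass.__call__</code>. One way to break the cycle without touching <code>create_instance</code> is for the metaclass to hand it a callable that bypasses itself, e.g. <code>type.__call__</code> bound to the class. A hedged sketch of the trick (an illustration, not necessarily the framework's final design):</p>

```python
import functools

def create_instance(cls, *args, **kwargs):
    # Unchanged user hook: custom construction logic could go here.
    return cls(*args, **kwargs)

class PersonMetaclass(type):
    def __call__(cls, *args, **kwargs):
        # Calling cls(...) would re-enter this method; instead hand the
        # hook a constructor that goes straight through type.__call__.
        direct = functools.partial(type.__call__, cls)
        return create_instance(direct, *args, **kwargs)

class Person(metaclass=PersonMetaclass):
    pass

person = Person()
print(type(person).__name__)  # Person
```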
|
<python><python-3.x><recursion><constructor><metaclass>
|
2024-03-02 13:25:30
| 2
| 20,558
|
Pithikos
|
78,092,770
| 68,862
|
What is the correct way to set multiple columns in Pandas from a list?
|
<p>This must be buried somewhere in the Pandas docs, but I think I am so overwhelmed at this point that I can't find the right doc page. I have some code that worked with python3.9 pandas==1.4.3/numpy==1.23.1 but recently started failing with the newer versions on python3.11 pandas==2.2.1/numpy==1.26.4.</p>
<p>I have some multi-column results in <code>pd.Series</code> that I want to set to the same number of columns inside <code>pd.DataFrame</code>. If I have code similar to:</p>
<pre class="lang-py prettyprint-override"><code>from unittest.mock import MagicMock
import pandas as pd
def test_assign_results() -> None:
df = pd.DataFrame([{"is_ok": True, "failed": None, "col1": None, "col2": None}])
m1 = MagicMock()
m1.name = "col1_val"
results = pd.Series([(m1, None)])
df.loc[df["is_ok"] == 1, ["col1", "col2"]] = results.to_numpy().tolist()
row = df.iloc[0]
assert row["col1"].name == "col1_val"
assert row["col2"] is None
</code></pre>
<p>The code used to work like this before but now it results in the error: <a href="https://gist.github.com/mrlifetime/b7b7ac1d7d4767128b286b9d06a753a2" rel="nofollow noreferrer"><code>AttributeError: 'list' object has no attribute 'ndim'</code></a></p>
<p>I suspect this code was brittle to begin with, so what would be the correct way to take 2 values and assign them to 2 columns within a DataFrame?</p>
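<p>One pattern that stays explicit about alignment is to expand the tuples into a small DataFrame whose index matches the selected rows, then assign that (a sketch, not claimed to be the only fix for pandas 2.2):</p>

```python
import pandas as pd

df = pd.DataFrame([{"is_ok": True, "col1": None, "col2": None}])
results = pd.Series([("col1_val", None)])  # one (col1, col2) tuple per matching row

mask = df["is_ok"] == 1
# Build the right-hand side with the same index and column names as the
# target slice, so pandas aligns values instead of guessing from a list.
rhs = pd.DataFrame(results.tolist(), index=df.index[mask], columns=["col1", "col2"])
df.loc[mask, ["col1", "col2"]] = rhs
print(df.loc[0, "col1"])  # col1_val
```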
|
<python><pandas><numpy>
|
2024-03-02 13:05:25
| 1
| 2,226
|
sneg
|
78,092,742
| 7,212,686
|
How to graph rolling average dated-expenses per category?
|
<h3>Goal</h3>
<p>I have data that represent expenses, with date and category, and I want to plot the <strong>rolling average per category over time</strong></p>
<hr />
<h3>Sources</h3>
<p>I have tried using and combining the following without success</p>
<ul>
<li><a href="https://stackoverflow.com/questions/53339021/python-pandas-calculate-moving-average-within-group">Python Pandas: Calculate moving average within group</a> doesn't handle dates</li>
<li><a href="https://stackoverflow.com/questions/51914445/calculating-monthly-aggregate-of-expenses-with-pandas">Calculating monthly aggregate of expenses with pandas</a> doesn't handle rolling average</li>
</ul>
<hr />
<h3>Tries and MCVE</h3>
<p>The best I've come with, using the second link is this</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from matplotlib import pyplot as plt
from random import randrange, seed
from datetime import datetime
seed(321)
nb = 24
df = pd.DataFrame({
"date": [datetime(2023, 1 + i // 2, 5) for i in range(nb)],
"category": [item for _ in range(nb // 2) for item in ["food", "wear"]],
"value": [randrange(10, 120) for _ in range(nb)],
})
df.set_index("date", inplace=True)
all_s = []
for x in set(df["category"]):
s = df.loc[df['category'] == x, "value"]
s = s.groupby(pd.Grouper(freq="ME")).sum()
all_s.append(s.rename(x))
df = pd.concat(all_s, axis=1).fillna(0).asfreq("ME", fill_value=0)
df.plot(style='.-', figsize=(15, 20), ylim=(0, 130))
plt.show()
</code></pre>
<p>Rendering in</p>
<p><a href="https://i.sstatic.net/NAJsG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NAJsG.png" alt="enter image description here" /></a></p>
<hr />
<h3>Expectation</h3>
<p>I expect to add something like a <code>.rolling(window=3, min_periods=1)</code> somewhere, to get kind of a flat line on the graph, to avoid peaks and just have an average over a given period</p>
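<p>That intuition looks right: the rolling call can slot in directly after the monthly aggregation, applied column-wise to the wide frame. A sketch with made-up numbers for a single category (<code>min_periods=1</code> keeps the first months defined instead of NaN):</p>

```python
import pandas as pd

idx = pd.to_datetime(["2023-01-31", "2023-02-28", "2023-03-31", "2023-04-30"])
monthly = pd.DataFrame({"food": [100.0, 50.0, 150.0, 100.0]}, index=idx)

# Trailing 3-month mean per category smooths out month-to-month peaks.
smoothed = monthly.rolling(window=3, min_periods=1).mean()
print(smoothed["food"].tolist())  # [100.0, 75.0, 100.0, 100.0]
```

<p>In the code above that would be <code>df.rolling(window=3, min_periods=1).mean().plot(...)</code> on the concatenated frame, right before plotting.</p>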
|
<python><pandas><matplotlib>
|
2024-03-02 12:55:36
| 1
| 54,241
|
azro
|
78,092,661
| 4,181,509
|
Which one should I use to check the uniqueness of enum values in Python?
|
<p>I have an enum in Python, which I want to make sure that its values are unique. I see that there are 2 ways I can use to achieve it:</p>
<ul>
<li>Wrapping the class with <code>@verify(UNIQUE)</code></li>
<li>Wrapping the class with <code>@unique</code></li>
</ul>
<p>What is the difference with using each one of them? Which one should I use to gain the best performance?</p>
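<p>For reference: <code>@unique</code> has been available since Python 3.4 and only rejects aliases (two members with the same value), while <code>@verify(UNIQUE)</code> arrived in 3.11 as the newer, more general mechanism (the same decorator also accepts <code>CONTINUOUS</code> and <code>NAMED_FLAGS</code> checks). Both run exactly once, at class-creation time, so there is no runtime performance difference to speak of. A sketch of the check itself:</p>

```python
from enum import Enum, unique

# @unique raises ValueError at class-creation time if two members alias.
try:
    @unique
    class Color(Enum):
        RED = 1
        CRIMSON = 1  # alias for RED -> rejected
    duplicated = False
except ValueError:
    duplicated = True

print(duplicated)  # True
```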
|
<python><python-3.x><enums>
|
2024-03-02 12:27:45
| 3
| 10,102
|
Yuval Pruss
|
78,092,648
| 2,604,247
|
How to Make Batch Jobs Logs Available When the Jobs Run Inside Ephemeral Docker Containers?
|
<h4>Context</h4>
<p>So, basically I am running a cron job (python ETL script) via a docker container. That means, every day at 12.30 am my cron job runs</p>
<pre class="lang-bash prettyprint-override"><code>docker run $IMAGE
</code></pre>
<p>In the Dockerfile I have the script like</p>
<pre><code># Run the script at container boot time.
CMD ["./run_manager.sh"]
</code></pre>
<p>This is what the <code>run_manager.sh</code> looks like:</p>
<pre><code>python3 main.py>>main.log 2>&1
</code></pre>
<p>I am using the python <code>logging</code> module like this</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# encoding: utf-8
"""
This file contains the script
"""
import logging
from contextlib import AbstractContextManager
import polars as pl
import tensorflow as tf
import sqlalchemy as sa
logging.basicConfig(format='%(asctime)s|%(levelname)s: %(message)s',
datefmt='%H:%M:%S, %d-%b-%Y', level=logging.INFO)
...
# Other codes
</code></pre>
<h4>Question</h4>
<p>Since the container is ephemeral, created and destroyed every day as the cron is triggered, I have no way to access the logs. How do I change this setup so that the logs persist, rotate, and remain visible outside the container? Is there a way?</p>
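<p>Whichever volume or log driver ends up carrying the logs out of the container (e.g. a bind mount like <code>docker run -v /var/log/etl:/logs $IMAGE</code>, or logging to stdout so <code>docker logs</code> and Cloud Logging can pick it up), rotation itself can live inside the script via the stdlib. A sketch; the target directory below is a temp dir standing in for a mounted <code>/logs</code> path, which is an assumption:</p>

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()  # stand-in for a bind-mounted /logs volume

# Rotate at ~1 MB, keeping 3 old files alongside main.log.
handler = logging.handlers.RotatingFileHandler(
    os.path.join(log_dir, "main.log"), maxBytes=1_000_000, backupCount=3
)
logging.basicConfig(
    format="%(asctime)s|%(levelname)s: %(message)s",
    datefmt="%H:%M:%S, %d-%b-%Y",
    level=logging.INFO,
    handlers=[handler],
    force=True,  # replace any handlers configured earlier
)
logging.info("ETL run started")
handler.flush()
print(os.path.exists(os.path.join(log_dir, "main.log")))  # True
```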
<h4>Addendum</h4>
<p>Right now it is running as a cron on an on-prem Ubuntu instance. But I am going to migrate it to google cloud scheduler very soon, keeping the design intact as much as possible. Is there any solution in that case as well, basically, to be able to see the logs of past jobs?</p>
|
<python><docker><google-cloud-platform><logging><cron>
|
2024-03-02 12:24:06
| 2
| 1,720
|
Della
|
78,092,404
| 11,267,783
|
Display 3D Matrix with colors Pyqtgraph
|
<p>I want to display a 3D matrix in Pyqtgraph using GLVolumeItem (with x, y and z axis) with the specific colors corresponding to the input data.</p>
<p>How can I proceed with my code?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pyqtgraph as pg
import pyqtgraph.opengl as gl
from pyqtgraph import functions as fn
app = pg.mkQApp("GLVolumeItem Example")
w = gl.GLViewWidget()
w.show()
w.setWindowTitle('pyqtgraph example: GLVolumeItem')
w.setCameraPosition(distance=200)
g = gl.GLGridItem()
g.scale(10, 10, 1)
w.addItem(g)
x = np.linspace(0,10,11)
y = np.linspace(0,10,11)
z = np.linspace(0,10,11)
data = np.zeros((10,10,10))
data[1,:,:] = 10
data[3,:,:] = 10
d2 = np.empty(data.shape + (4,), dtype=np.ubyte)
v = gl.GLVolumeItem(d2)
w.addItem(v)
ax = gl.GLAxisItem()
w.addItem(ax)
if __name__ == '__main__':
pg.exec()
</code></pre>
<p>The goal is to get a 3D representation with different colors corresponding to the levels of the data.</p>
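<p>Setting the GUI aside, the missing step is filling <code>d2</code>: <code>GLVolumeItem</code> takes an <code>(X, Y, Z, 4)</code> <code>uint8</code> array of RGBA values per voxel, so the data levels have to be mapped to colors explicitly. A numpy-only sketch of one such mapping (the color scheme is arbitrary):</p>

```python
import numpy as np

data = np.zeros((10, 10, 10))
data[1, :, :] = 10
data[3, :, :] = 10

# One RGBA quadruple per voxel, derived from the normalized data level.
d2 = np.zeros(data.shape + (4,), dtype=np.ubyte)
levels = data / data.max()                    # 0..1
d2[..., 0] = 255 * levels                     # red rises with the level
d2[..., 2] = 255 - d2[..., 0]                 # blue for the low levels
d2[..., 3] = (levels * 180).astype(np.ubyte)  # zero-data voxels stay transparent
print(int(d2[1, 0, 0, 3]))  # 180
```

<p>Passing this <code>d2</code> to <code>gl.GLVolumeItem(d2)</code> in the script above would then render the two filled slabs in color; using <code>np.zeros</code> rather than <code>np.empty</code> also avoids uninitialized alpha garbage.</p>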
|
<python><pyqtgraph>
|
2024-03-02 11:09:00
| 1
| 322
|
Mo0nKizz
|
78,092,146
| 12,314,521
|
How to calculate euclidean distance between 2D and 3D tensors in Pytorch
|
<p>Given:</p>
<ul>
<li>a tensor A has shape <code>(batch_size, dim)</code></li>
<li>a tensor B has shape <code>(batch_size, N, dim)</code></li>
</ul>
<p>I want to calculate the euclidean distance between each row in A and the corresponding row in B, which has shape <code>(N, dim)</code></p>
<p>The expected result has shape <code>(batch_size, N)</code></p>
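<p>Broadcasting over an inserted length-1 axis gives exactly that shape; in PyTorch the same can likely be written as <code>torch.linalg.norm(B - A.unsqueeze(1), dim=-1)</code> or via <code>torch.cdist</code>. A NumPy sketch of the broadcasting route (random made-up tensors):</p>

```python
import numpy as np

batch_size, N, dim = 2, 3, 4
rng = np.random.default_rng(0)
A = rng.standard_normal((batch_size, dim))
B = rng.standard_normal((batch_size, N, dim))

# A[:, None, :] has shape (batch_size, 1, dim), so the subtraction
# broadcasts along N; the norm over the last axis yields (batch_size, N).
dist = np.linalg.norm(B - A[:, None, :], axis=-1)
print(dist.shape)  # (2, 3)
```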
|
<python><pytorch><tensor>
|
2024-03-02 09:40:08
| 1
| 351
|
jupyter
|
78,091,649
| 14,250,641
|
Finding overlapping regions in large DataFrame
|
<p>I have a Pandas DataFrame with genomic regions represented by their chromosome, start position, and stop position. I'm trying to identify overlapping regions within the same chromosome and compile them along with their corresponding labels. I'm not sure if the way I'm doing it is correct. I also want an efficient approach, since my df is very large (3 million rows), so a for loop is not ideal.</p>
<p>I have access to 8 cores. I assumed my code was using them (not sure), but should I add something to parallelize my code to make it faster?</p>
<p>Here's my current approach:</p>
<pre><code>import pandas as pd
# Sample DataFrame
data = {
'chromosome': ['chr1', 'chr1', 'chr1', 'chr1', 'chr1'],
'start': [10, 15, 35, 45, 55],
'stop': [20, 25, 55, 56, 60],
'hg_38_locs': ['chr1:10-20', 'chr1:15-25', 'chr1:35-55', 'chr1:45-56', 'chr1:55-60'],
'main_category': ['label1', 'label2', 'label2', 'label3', 'label1']
}
full_df = pd.DataFrame(data)
# Initialize lists to store overlapping regions and their labels
overlapping_regions = []
overlapping_labels = []
# Iterate through each row and check for overlaps
for i in range(len(full_df)):
for j in range(i + 1, len(full_df)):
# Check if chromosomes match
if full_df['chromosome'][i] != full_df['chromosome'][j]:
continue
# Check for overlap using start and stop positions
if full_df['stop'][i] >= full_df['start'][j] and full_df['start'][i] <= full_df['stop'][j]:
overlapping_regions.append((full_df['hg_38_locs'][i], full_df['hg_38_locs'][j]))
overlapping_labels.append((full_df['main_category'][i], full_df['main_category'][j]))
# Create a new DataFrame to compile the information
compiled_df = pd.DataFrame({
'overlapping_regions': overlapping_regions,
'overlapping_labels': overlapping_labels
})
# Display the compiled DataFrame
print(compiled_df)
</code></pre>
<p>Output:</p>
<pre><code> overlapping_regions overlapping_labels
0 (chr1:10-20, chr1:15-25) (label1, label2)
1 (chr1:10-20, chr1:35-55) (label1, label2)
2 (chr1:15-25, chr1:35-55) (label2, label2)
3 (chr1:35-55, chr1:45-56) (label2, label3)
4 (chr1:45-56, chr1:55-60) (label3, label1)
</code></pre>
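<p>For scale, one way to avoid the full O(n²) pairing is to sort by <code>start</code> within each chromosome and break out of the inner scan as soon as an interval starts past the current stop; with sparse overlaps this behaves close to O(n log n). A sketch using the same inclusive boundary test as the loop above (on this sample it reports 4 overlapping pairs):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "chromosome": ["chr1"] * 5,
    "start": [10, 15, 35, 45, 55],
    "stop": [20, 25, 55, 56, 60],
    "hg_38_locs": ["chr1:10-20", "chr1:15-25", "chr1:35-55", "chr1:45-56", "chr1:55-60"],
})

pairs = []
for _, grp in df.sort_values("start").groupby("chromosome"):
    rows = grp.to_dict("records")
    for i, a in enumerate(rows):
        for b in rows[i + 1:]:
            if b["start"] > a["stop"]:
                break  # sorted by start: nothing later can overlap a
            pairs.append((a["hg_38_locs"], b["hg_38_locs"]))
print(len(pairs))  # 4
```

<p>For 3 million rows, dedicated interval structures (e.g. interval trees) are the usual next step; splitting the work per chromosome also parallelizes naturally across the 8 cores, since plain pandas code will not use them automatically.</p>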
|
<python><pandas><dataframe>
|
2024-03-02 06:15:32
| 0
| 514
|
youtube
|
78,091,510
| 489,088
|
How can I iterate through a Pandas dataframe and get the value for all columns associated with a given datetime, forward filling values that are -1?
|
<p>Suppose you have a pandas dataframe with 5 columns: one called <code>code</code>, which contains a string; one named <code>calendardate</code> of <code>datetime</code> format, which contains date and time information; and three others named <code>A</code>, <code>B</code> and <code>C</code>, each containing an integer from -1 to 100.</p>
<p>For each code value, there can be multiple entries of calendardate. These entries have a typical interval between them (say, 5 minutes, or 1 hour, etc). This interval may vary sometimes.</p>
<p>How can we iterate through each unique datetime value in the <code>calendardate</code> column (let's call it N), and for each value N get an array with the value N, plus the values of columns code, A, B and C, such that if either A, B or C is equal to -1, we get the most recent previous value of A, B or C that is not -1, associated with that same code?</p>
<p>In other words, I want to print out each slice of the dataframe for each given datetime value with the most recent values of A, B and C up to N that are not -1.</p>
<p>Here is my attempt:</p>
<pre><code>import pandas as pd
# Sample DataFrame
data = {
'code': ['A1', 'A1', 'A1', 'A1', 'A1', 'A1', 'B1', 'B1', 'B1', 'B1'],
'calendardate': [
'2024-02-29 09:00:00', '2024-02-29 09:05:00', '2024-02-29 09:10:00', '2024-02-29 09:15:00',
'2024-02-29 09:20:00', '2024-02-29 09:25:00', '2024-02-29 09:00:00', '2024-02-29 09:05:00',
'2024-02-29 09:10:00', '2024-02-29 09:15:00'
],
'A': [10, -1, 20, -1, 30, 40, 50, -1, -1, 60],
'B': [-1, 15, -1, 25, -1, 35, -1, 45, -1, -1],
'C': [-1, -1, -1, 35, -1, -1, -1, -1, 55, 65]
}
df = pd.DataFrame(data)
df['calendardate'] = pd.to_datetime(df['calendardate'])
# Define a function to fill missing values with the most recent non-missing value for each column independently
def forward_fill(group):
# Sort the group by 'calendardate' in ascending order
group = group.sort_values(by='calendardate')
# Forward fill missing values in column A, B and C
group['A'] = group['A'].replace(-1, pd.NA).ffill()
group['B'] = group['B'].replace(-1, pd.NA).ffill()
group['C'] = group['C'].replace(-1, pd.NA).ffill()
return group
# Apply the function within each group defined by 'calendardate'
filled_df = df.groupby('calendardate').apply(forward_fill).reset_index(drop=True)
# Iterate through each row and print the values for columns A, B, and C
for index, row in filled_df.iterrows():
print("Datetime:", row['calendardate'])
print("Code:", row['code'])
print("A:", row['A'])
print("B:", row['B'])
print("C:", row['C'])
print("------------------------")
</code></pre>
<p>It is very close, but for the first three iterations I get this:</p>
<pre><code>Datetime: 2024-02-29 09:00:00
Code: A1
A: 10
B: <NA>
C: nan
------------------------
Datetime: 2024-02-29 09:00:00
Code: B1
A: 50
B: <NA>
C: nan
------------------------
Datetime: 2024-02-29 09:05:00
Code: A1
A: nan
B: 15
C: nan
</code></pre>
<p>When the third iteration should have printed:</p>
<pre><code>Datetime: 2024-02-29 09:05:00
Code: A1
A: 10
B: 15
C: nan
</code></pre>
<p>Because for column A there was a value prior to timestamp <code>2024-02-29 09:05:00</code> that was not -1 (so the most recent non -1 value should be printed instead of <code>NaN</code>).</p>
<p>How can I accomplish this in Pandas? I'm using version 2.2.1</p>
<p>Thank you!</p>
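<p>The grouping key looks like the culprit: for "most recent previous value for the same code", the forward fill has to run per <code>code</code> along time, whereas <code>groupby('calendardate')</code> confines it to rows sharing one timestamp. A reduced sketch of grouping by <code>code</code> instead (three made-up rows):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "code": ["A1", "A1", "B1"],
    "calendardate": pd.to_datetime(
        ["2024-02-29 09:00:00", "2024-02-29 09:05:00", "2024-02-29 09:00:00"]
    ),
    "A": [10, -1, 50],
})

# Sort by time, then carry each code's last non-(-1) value forward.
df = df.sort_values("calendardate")
df["A"] = df.groupby("code")["A"].transform(lambda s: s.replace(-1, pd.NA).ffill())
print(df["A"].tolist())  # [10, 50, 10]
```

<p>Applied to the full frame (the same <code>transform</code> for columns A, B and C), the 09:05 row of code A1 would then report A as 10 rather than NaN.</p>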
|
<python><pandas><dataframe>
|
2024-03-02 04:55:31
| 1
| 6,306
|
Edy Bourne
|
78,091,460
| 6,182,064
|
Can you concatenate some, but not all, entries of a Pandas Series using str.cat when using a search string to iterate over a different column?
|
<p>I'm using the data and dataframe below to use a search_string to query one column/series and then, when the string is a match, update information in a different column/series. I can get it done, but not how I want to: I want spaces between the text updates. I have exhausted my searching and looking at the documentation. The closest I have found is str.cat, but that only seems to work for the entire series. I repeatedly receive the "ValueError: Did you mean to supply a <code>sep</code> keyword?"</p>
<p>Below is shows what works (albeit without spaces) commented out and what currently does NOT work.</p>
<pre><code>import pandas as pd
search_str = ['STRAUSS', 'STREET', 'STUBBY\'S']
data = {
"calories": ['STRAUSS_STREET', 'ten', 'twenty'],
"duration": [50, 40, 45],
"test": ['not_yet_set', 'not_yet_set', 'not_yet_set']
}
df_1 = pd.DataFrame(data)
df_1["calories"] = pd.Series(df_1["calories"], dtype=pd.StringDtype)
for k in range(len(search_str)):
#df_1.loc[df_1['calories'].str.contains(search_str[k]), 'test'] += search_str[k]
    df_1.loc[df_1['calories'].str.contains(search_str[k]), 'test'] = df_1['test'].str.cat(search_str[k], sep=',', na_rep='-')
df_1
</code></pre>
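<p>For updating only the matching slice with separators, plain string addition on that slice may be the simplest route, since the scalar broadcasts to every matching row (<code>str.cat</code> instead expects <code>others</code> to be a Series/list aligned with the slice, which is what the ValueError is hinting at). A sketch:</p>

```python
import pandas as pd

search_str = ["STRAUSS", "STREET", "STUBBY'S"]
df_1 = pd.DataFrame({
    "calories": ["STRAUSS_STREET", "ten", "twenty"],
    "test": ["not_yet_set", "not_yet_set", "not_yet_set"],
})

for term in search_str:
    mask = df_1["calories"].str.contains(term, regex=False)
    # "+" broadcasts the scalar separator and term to each matching row.
    df_1.loc[mask, "test"] = df_1.loc[mask, "test"] + " " + term

print(df_1.loc[0, "test"])  # not_yet_set STRAUSS STREET
```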
|
<python><pandas><string><series><string-concatenation>
|
2024-03-02 04:24:51
| 2
| 765
|
wiseass
|
78,091,298
| 5,178,988
|
Minimal example of docker oci_image with custom python toolchain in bazel
|
<p>I am trying to produce a docker image using the <code>rules_oci</code> bazel repo.</p>
<p>I am using a custom python toolchain I have registered in my <code>WORKSPACE</code>. The following code builds a docker image that contains the python toolchain and <code>run_api_server.py</code> and its relevant dependencies.</p>
<p>My questions are:</p>
<ol>
<li>How can I correctly set the python toolchain and <code>run_api_server</code> as entrypoint of the docker image such that all bazel paths work out of the box?</li>
<li>How can I isolate the relevant code that goes on the bazel project to a separate folder <code>/app</code> on the docker image?</li>
</ol>
<p><code>BUILD</code>:</p>
<pre class="lang-py prettyprint-override"><code>load("@rules_oci//oci:defs.bzl", "oci_tarball")
load("@rules_python//python:defs.bzl", "py_binary")
load(":py_image.bzl", "py_docker_image")
py_binary(
name = "run_api_server",
    srcs = ["run_api_server.py"],
    deps = [
"//myproject:my_py_library",
],
)
py_docker_image(
name = "run_api_server_docker",
base = "@ubuntu_2204_base",
binary = ":run_api_server",
)
</code></pre>
<p><code>py_image.bzl</code> (taken from <a href="https://github.com/aspect-build/bazel-examples/blob/main/oci_python_image/py_layer.bzl" rel="nofollow noreferrer">https://github.com/aspect-build/bazel-examples/blob/main/oci_python_image/py_layer.bzl</a>):</p>
<pre class="lang-py prettyprint-override"><code>load("@aspect_bazel_lib//lib:tar.bzl", "mtree_spec", "tar")
load("@rules_oci//oci:defs.bzl", "oci_image", "oci_tarball")
# match *only* external repositories that have the string "python"
# e.g. this will match
# `/hello_world/hello_world_bin.runfiles/rules_python~0.21.0~python~python3_9_aarch64-unknown-linux-gnu/bin/python3`
# but not match
# `/hello_world/hello_world_bin.runfiles/_main/python_app`
PY_INTERPRETER_REGEX = "\\.runfiles/.*python.*-.*"
# match *only* external pip like repositozries that contain the string "site-packages"
SITE_PACKAGES_REGEX = "\\.runfiles/.*/site-packages/.*"
def py_layers(name, binary):
"""
Create three layers for a py_binary target: interpreter, third-party dependencies, and application code.
This allows a container image to have smaller uploads, since the application layer usually changes more
than the other two.
Args:
name: prefix for generated targets, to ensure they are unique within the package
binary: a py_binary target
Returns:
a list of labels for the layers, which are tar files
"""
# Produce layers in this order, as the app changes most often
layers = ["interpreter", "packages", "app"]
# Produce the manifest for a tar file of our py_binary, but don't tar it up yet, so we can split
# into fine-grained layers for better docker performance.
mtree_spec(
name = name + ".mf",
srcs = [binary],
)
native.genrule(
name = name + ".interpreter_tar_manifest",
srcs = [name + ".mf"],
outs = [name + ".interpreter_tar_manifest.spec"],
cmd = "grep '{}' $< >$@".format(PY_INTERPRETER_REGEX),
)
native.genrule(
name = name + ".packages_tar_manifest",
srcs = [name + ".mf"],
outs = [name + ".packages_tar_manifest.spec"],
cmd = "grep '{}' $< >$@".format(SITE_PACKAGES_REGEX),
)
# Any lines that didn't match one of the two grep above
native.genrule(
name = name + ".app_tar_manifest",
srcs = [name + ".mf"],
outs = [name + ".app_tar_manifest.spec"],
cmd = "grep -v '{}' $< | grep -v '{}' >$@".format(SITE_PACKAGES_REGEX, PY_INTERPRETER_REGEX),
)
result = []
for layer in layers:
layer_target = "{}.{}_layer".format(name, layer)
result.append(layer_target)
tar(
name = layer_target,
srcs = [binary],
mtree = "{}.{}_tar_manifest".format(name, layer),
)
return result
def py_oci_image(name, binary, tars = [], **kwargs):
"""
Wrapper around oci_image that splits the py_binary into layers.
Note you need to wrap the result of this rule in oci_tarball to produce
an image that can be loaded by docker
Args:
name: name for the target
binary: a py_binary target
tars: extra docker layers, apart from `binary` dependencies
kwargs: see oci_image https://github.com/bazel-contrib/rules_oci/blob/main/docs/image.md
"""
oci_image(
name = name,
tars = tars + py_layers(name, binary),
**kwargs
)
def py_docker_image(name, binary, repo_tags = [], **kwargs):
py_oci_image(
name = name + ".image",
binary = binary,
**kwargs,
)
# Wrap in oci_tarball to produce an image that can be loaded by docker
oci_tarball(
name = name,
image = ":" + name + ".image",
repo_tags = ["my.repo/py_{}:latest".format(name)] + repo_tags,
)
</code></pre>
<p><code>WORKSPACE</code>:</p>
<pre class="lang-py prettyprint-override"><code>load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# -----------------------------------------------------------------------------
# Load rules_python repository - contains rules for building python code
# -----------------------------------------------------------------------------
http_archive(
name = "rules_python",
sha256 = "9acc0944c94adb23fba1c9988b48768b1bacc6583b52a2586895c5b7491e2e31",
strip_prefix = "rules_python-0.27.0",
url = "https://github.com/bazelbuild/rules_python/releases/download/0.27.0/rules_python-0.27.0.tar.gz",
)
load("@rules_python//python:repositories.bzl", "py_repositories", "python_register_toolchains")
py_repositories()
python_register_toolchains(name = "python_runtime", python_version = "3.10")
load("@python_runtime//:defs.bzl", python_interpreter = "interpreter")
# -----------------------------------------------------------------------------
# Load rules_oci repository - contains rules for (docker) containers
# -----------------------------------------------------------------------------
http_archive(
name = "rules_oci",
sha256 = "4a276e9566c03491649eef63f27c2816cc222f41ccdebd97d2c5159e84917c3b",
strip_prefix = "rules_oci-1.7.4",
url = "https://github.com/bazel-contrib/rules_oci/releases/download/v1.7.4/rules_oci-v1.7.4.tar.gz",
)
load("@rules_oci//oci:dependencies.bzl", "rules_oci_dependencies")
rules_oci_dependencies()
load("@rules_oci//oci:repositories.bzl", "LATEST_CRANE_VERSION", "oci_register_toolchains")
oci_register_toolchains(name = "oci", crane_version = LATEST_CRANE_VERSION)
load("@rules_oci//oci:pull.bzl", "oci_pull")
oci_pull(
name = "ubuntu_2204_base",
digest = "sha256:81bba8d1dde7fc1883b6e95cd46d6c9f4874374f2b360c8db82620b33f6b5ca1",
registry = "index.docker.io",
repository = "library/ubuntu",
)
# -----------------------------------------------------------------------------
# Load aspect_bazel_lib repository - useful for constructing rulesets and BUILD files
# -----------------------------------------------------------------------------
http_archive(
name = "aspect_bazel_lib",
sha256 = "f5ea76682b209cc0bd90d0f5a3b26d2f7a6a2885f0c5f615e72913f4805dbb0d",
strip_prefix = "bazel-lib-2.5.0",
url = "https://github.com/aspect-build/bazel-lib/releases/download/v2.5.0/bazel-lib-v2.5.0.tar.gz",
)
load("@aspect_bazel_lib//lib:repositories.bzl", "aspect_bazel_lib_dependencies", "aspect_bazel_lib_register_toolchains")
aspect_bazel_lib_dependencies() # Required bazel-lib dependencies
aspect_bazel_lib_register_toolchains() # Register bazel-lib toolchains
</code></pre>
|
<python><docker><bazel><rules-oci>
|
2024-03-02 02:46:30
| 0
| 1,178
|
niko
|
78,091,161
| 6,449,621
|
PyTorch segmentation model (.pt) not converting to CoreML
|
<p>According to the Apple article <a href="https://developer.apple.com/videos/play/tech-talks/10154/#" rel="nofollow noreferrer">link</a>, we need to wrap the model to allow tracing, which I have done as shown below.</p>
<pre><code>class WrappedDeeplabv3Resnet1011(nn.Module):
def __init__(self):
super(WrappedDeeplabv3Resnet1011, self).__init__()
self.model = torch.load('/content/aircraft_best_model.pt',map_location ='cpu').eval()
def forward(self, x):
res = self.model(x)
# extract the tensor we want from the output dictionary
x = res['out']
return x
</code></pre>
<p>I see that my model doesn't have the key "out", but it does have the other keys shown below</p>
<pre><code>[{'boxes': tensor([[ 510.2429, 229.1375, 1011.1587, 399.5730],
[ 550.1007, 202.8524, 1047.5089, 376.9215],
[ 457.9409, 196.4182, 947.7454, 412.4210],
[ 333.6804, 204.8605, 1073.0546, 442.6238]],
grad_fn=<StackBackward0>), 'labels': tensor([1, 2, 3, 1]), 'scores': tensor([0.0870, 0.0631, 0.0587, 0.0531], grad_fn=<IndexBackward0>)}]
</code></pre>
<p>When I use any of these keys as the output, it throws the error shown below</p>
<pre><code>TypeError: list indices must be integers or slices, not str
</code></pre>
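<p>The error itself hints at the fix: a torchvision detection model returns a list of per-image dicts, so the list has to be indexed before the dict key. A minimal stand-alone sketch (the values are simplified stand-ins for the tensors above):</p>

```python
# The detection output is a list with one dict per image; indexing the
# list with a string is exactly what raises the TypeError. Values are
# simplified stand-ins for the tensors shown above.
res = [{'boxes': [[510.2429, 229.1375, 1011.1587, 399.5730]],
        'labels': [1, 2, 3, 1],
        'scores': [0.0870, 0.0631, 0.0587, 0.0531]}]

try:
    res['boxes']                 # what res['out']-style access does here
except TypeError as exc:
    print(exc)                   # list indices must be integers or slices, not str

boxes = res[0]['boxes']          # index the list first, then the dict
print(len(boxes))                # 1
```

<p>So in the wrapper's <code>forward</code>, something like <code>res[0]['boxes']</code> (assuming the boxes are the desired output) would be needed instead of <code>res['out']</code>.</p>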
<p>My model's <code>eval()</code> output is shown below</p>
<pre><code>FasterRCNN(
(transform): GeneralizedRCNNTransform(
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
Resize(min_size=(800,), max_size=1333, mode='bilinear')
)
(backbone): BackboneWithFPN(
(body): IntermediateLayerGetter(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): FrozenBatchNorm2d(64, eps=0.0)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): Bottleneck(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(64, eps=0.0)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(64, eps=0.0)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(256, eps=0.0)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): FrozenBatchNorm2d(256, eps=0.0)
)
)
(1): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(64, eps=0.0)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(64, eps=0.0)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(256, eps=0.0)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(64, eps=0.0)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(64, eps=0.0)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(256, eps=0.0)
(relu): ReLU(inplace=True)
)
)
(layer2): Sequential(
(0): Bottleneck(
(conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(128, eps=0.0)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(128, eps=0.0)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(512, eps=0.0)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): FrozenBatchNorm2d(512, eps=0.0)
)
)
(1): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(128, eps=0.0)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(128, eps=0.0)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(512, eps=0.0)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(128, eps=0.0)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(128, eps=0.0)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(512, eps=0.0)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(128, eps=0.0)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(128, eps=0.0)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(512, eps=0.0)
(relu): ReLU(inplace=True)
)
)
(layer3): Sequential(
(0): Bottleneck(
(conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(256, eps=0.0)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(256, eps=0.0)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(1024, eps=0.0)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): FrozenBatchNorm2d(1024, eps=0.0)
)
)
(1): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(256, eps=0.0)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(256, eps=0.0)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(1024, eps=0.0)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(256, eps=0.0)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(256, eps=0.0)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(1024, eps=0.0)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(256, eps=0.0)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(256, eps=0.0)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(1024, eps=0.0)
(relu): ReLU(inplace=True)
)
(4): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(256, eps=0.0)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(256, eps=0.0)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(1024, eps=0.0)
(relu): ReLU(inplace=True)
)
(5): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(256, eps=0.0)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(256, eps=0.0)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(1024, eps=0.0)
(relu): ReLU(inplace=True)
)
)
(layer4): Sequential(
(0): Bottleneck(
(conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(512, eps=0.0)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(512, eps=0.0)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(2048, eps=0.0)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): FrozenBatchNorm2d(2048, eps=0.0)
)
)
(1): Bottleneck(
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(512, eps=0.0)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(512, eps=0.0)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(2048, eps=0.0)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): FrozenBatchNorm2d(512, eps=0.0)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): FrozenBatchNorm2d(512, eps=0.0)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): FrozenBatchNorm2d(2048, eps=0.0)
(relu): ReLU(inplace=True)
)
)
)
(fpn): FeaturePyramidNetwork(
(inner_blocks): ModuleList(
(0): Conv2dNormActivation(
(0): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
)
(1): Conv2dNormActivation(
(0): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
)
(2): Conv2dNormActivation(
(0): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
)
(3): Conv2dNormActivation(
(0): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
)
)
(layer_blocks): ModuleList(
(0-3): 4 x Conv2dNormActivation(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
)
(extra_blocks): LastLevelMaxPool()
)
)
(rpn): RegionProposalNetwork(
(anchor_generator): AnchorGenerator()
(head): RPNHead(
(conv): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
)
)
(cls_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))
(bbox_pred): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))
)
)
(roi_heads): RoIHeads(
(box_roi_pool): MultiScaleRoIAlign(featmap_names=['0', '1', '2', '3'], output_size=(7, 7), sampling_ratio=2)
(box_head): TwoMLPHead(
(fc6): Linear(in_features=12544, out_features=1024, bias=True)
(fc7): Linear(in_features=1024, out_features=1024, bias=True)
)
(box_predictor): FastRCNNPredictor(
(cls_score): Linear(in_features=1024, out_features=6, bias=True)
(bbox_pred): Linear(in_features=1024, out_features=24, bias=True)
)
)
)
</code></pre>
<p>How can I convert this PyTorch model to CoreML?</p>
|
<python><machine-learning><coreml><faster-rcnn>
|
2024-03-02 01:26:48
| 2
| 465
|
anandyn02
|
78,091,058
| 1,576,804
|
Can't we vectorize code with nested loops to update matrix values?
|
<p>I wrote a piece of code, but I am not sure if we can get rid of the loops and vectorize it to make it faster. Can you please give suggestions? I am just updating the co-occurrence matrix.</p>
<pre><code> M = np.zeros((num_words,num_words))
word2Ind = {words[i]:i for i in range(len(words))}
for document in corpus:
for i,word in enumerate(document):
for j in range(i - window_size ,i + window_size + 1):
if i != j and j >= 0 and j <= len(document) - 1:
M[word2Ind[document[i]],word2Ind[document[j]]] += 1
</code></pre>
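<p>For comparison, the innermost loop can usually be folded into <code>np.add.at</code> calls, one per window offset; <code>np.add.at</code> is needed instead of fancy-indexed <code>+=</code> because it accumulates repeated index pairs. A sketch, with a toy <code>words</code>/<code>corpus</code> assumed just to make it runnable:</p>

```python
import numpy as np

# Toy vocabulary/corpus, assumed just to make the sketch self-contained.
words = ['a', 'b', 'c']
corpus = [['a', 'b', 'a', 'c']]
window_size = 1
num_words = len(words)
word2Ind = {w: i for i, w in enumerate(words)}

M = np.zeros((num_words, num_words))
for document in corpus:
    idx = np.array([word2Ind[w] for w in document])
    for offset in range(1, window_size + 1):
        # np.add.at accumulates duplicate index pairs, unlike M[i, j] += 1
        # with fancy indexing, which would only count each pair once.
        np.add.at(M, (idx[:-offset], idx[offset:]), 1)  # right neighbours
        np.add.at(M, (idx[offset:], idx[:-offset]), 1)  # left neighbours

print(M)
```

<p>This removes the per-position inner loop while keeping the outer loops over documents and window offsets.</p>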
|
<python><numpy><refactoring>
|
2024-03-02 00:28:40
| 1
| 4,234
|
vkaul11
|
78,091,050
| 902,657
|
Unable to configure safety settings in gemini-pro: google.api_core.exceptions.InvalidArgument
|
<p>The code below runs fine, but when I uncomment any of the safety settings it throws:</p>
<pre><code>google.api_core.exceptions.InvalidArgument: 400 Request contains an invalid argument
</code></pre>
<p>The reason I'm trying to customize the safety settings is because in a real scenario gemini-pro is throwing me an error: response was blocked. This happens when I'm trying to use it to generate SQL.</p>
<pre><code>from vertexai.preview.generative_models import GenerativeModel
from vertexai.preview import generative_models
safety_config = {
#generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
#generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
#generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH: generative_models.HarmBlockThreshold.BLOCK_NONE,
#generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
#generative_models.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: generative_models.HarmBlockThreshold.BLOCK_NONE,
#generative_models.HarmCategory.HARM_CATEGORY_UNSPECIFIED: generative_models.HarmBlockThreshold.BLOCK_NONE,
}
model = GenerativeModel(
"gemini-pro",
safety_settings=safety_config
)
response = model.generate_content("What is the future of AI in one sentence?")
print(response.text)
</code></pre>
|
<python><google-gemini>
|
2024-03-02 00:25:24
| 1
| 2,717
|
Ya.
|
78,090,683
| 3,316,842
|
Python Dependecies for DataprocCreateBatchOperator
|
<p>I cannot submit a Python job to Dataproc Serverless when third-party Python dependencies are needed. It works fine when no dependencies are needed. I push the PySpark Python file up to a Cloud Storage bucket, and then the DataprocCreateBatchOperator reads in that file. I was hoping I could just pass in a list of pip packages, but this might not be baked into the operator.</p>
<p>From the docs, Dataproc Serverless offers a <a href="https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/operators/dataproc/index.html#airflow.providers.google.cloud.operators.dataproc.DataprocCreateBatchOperator" rel="nofollow noreferrer">metadata</a> option, which I presumed is how we inform Dataproc to install additional Python dependencies, but it's not working, as seen below.</p>
<p><strong>DAG</strong></p>
<pre><code>from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import DataprocCreateBatchOperator
from datetime import datetime, timedelta
PROJECT_ID = "foobar"
REGION = "us-central1"
IMPERSONATION_CHAIN = "demo@foobar.iam.gserviceaccount.com"
BUCKET = "gs://my-bucket"
JOB="countingwords.py"
BATCH_CONFIG = {
"pyspark_batch": {
"main_python_file_uri": f"{BUCKET}/python/latest/{JOB}",
"args": ["gs://pub/shakespeare/rose.txt", f"{BUCKET}/sample-output-data"]
},
"environment_config": {
"execution_config": {
"network_uri": f"projects/{PROJECT_ID}/global/networks/main-vpc-prd",
"subnetwork_uri": f"https://www.googleapis.com/compute/v1/projects/{PROJECT_ID}/regions/{REGION}/subnetworks/data-prd",
"service_account": IMPERSONATION_CHAIN,
}
}
}
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email_on_failure': False,
'email_on_retry': False,
'retries': 0,
'retry_delay': timedelta(minutes=1)
}
with DAG(
dag_id="serverless_countwords_py",
start_date=datetime(2024, 1, 1),
schedule_interval='@daily',
default_args=default_args,
catchup=False,
) as dag:
submit_serverless = DataprocCreateBatchOperator(
task_id="submit_batch",
project_id=PROJECT_ID,
region=REGION,
batch=BATCH_CONFIG,
gcp_conn_id="google_cloud_default",
metadata={'PIP_PACKAGES':'requests'}, ## <--- ISSUE
impersonation_chain=IMPERSONATION_CHAIN,
)
submit_serverless
</code></pre>
<p><strong>Error</strong></p>
<p><code>Failed to execute job 103 for task submit_batch (too many values to unpack (expected 2)</code></p>
<p><strong>Other Thoughts</strong></p>
<p>Maybe I can somehow just zip my dependencies up, dump the zip into a bucket, and then have Dataproc read that zip? BATCH_CONFIG.pyspark_batch has an additional field named python_file_uris where maybe I can pass in the location of a zipped file containing all the dependencies. I'm unsure of the commands needed to achieve this zipped approach.</p>
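<p>A sketch of that zipped approach (paths and the dummy module are assumptions): install the packages into a local folder with <code>pip install -t deps &lt;packages&gt;</code>, zip the folder's contents so packages sit at the zip root, upload the zip to the bucket, and reference it in <code>python_file_uris</code>:</p>

```python
import pathlib
import tempfile
import zipfile

# Stand-in dependency tree; in practice this folder would come from
# `pip install -t deps <packages>`.
workdir = pathlib.Path(tempfile.mkdtemp())
deps_dir = workdir / "deps"
deps_dir.mkdir()
(deps_dir / "mypkg.py").write_text("x = 1\n")

archive = workdir / "deps.zip"
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
    for path in deps_dir.rglob("*"):
        if path.is_file():
            # Arcnames relative to deps/ so packages sit at the zip root,
            # where PySpark expects them on the import path.
            zf.write(path, path.relative_to(deps_dir))

print(zipfile.ZipFile(archive).namelist())  # ['mypkg.py']
```

<p>The resulting zip, once copied to the bucket, could then be listed in <code>python_file_uris</code> alongside <code>main_python_file_uri</code>.</p>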
|
<python><google-cloud-platform><pip><airflow><dataproc>
|
2024-03-01 22:06:18
| 2
| 782
|
baskInEminence
|
78,090,576
| 4,928,433
|
Two figures in one plot
|
<p>I'm trying to plot two figures from colour.plotting in one plot. I know there's a <code>show</code> parameter, so I tried showing only the second figure, but it doesn't work:</p>
<pre><code>import colour
colour.plotting.plot_planckian_locus_in_chromaticity_diagram_CIE1931(["A", "B", "C"],show=False);
colour.plotting.plot_RGB_colourspaces_in_chromaticity_diagram_CIE1931(["ITU-R BT.709", "ITU-R BT.2020"],show=True)
</code></pre>
|
<python><colors>
|
2024-03-01 21:39:30
| 1
| 311
|
FLX
|
78,090,575
| 13,454,049
|
Bug when using typing & daemon threads
|
<p>When I run this code on Python 3.11 it runs fine, but if I import <code>typing_minimal</code> (or <code>typing</code>) in <code>reader.py</code> I get this error:</p>
<pre class="lang-none prettyprint-override"><code>Fatal Python error: _enter_buffered_busy: could not acquire lock for <_io.BufferedReader name='<stdin>'> at interpreter shutdown, possibly due to daemon threads
Python runtime state: finalizing (tstate=0x00007fff13ab6960)
Current thread 0x00003944 (most recent call first):
<no Python frame>
</code></pre>
<p>main.py:</p>
<pre class="lang-py prettyprint-override"><code>import reader
from typing_minimal import Generic, TypeVar
VALUE = TypeVar("VALUE")
class BaseClass(Generic[VALUE]):
def __init__(self):
pass
class SubClass(BaseClass[VALUE]):
pass
reader.thread.start()
</code></pre>
<p>reader.py:</p>
<pre class="lang-py prettyprint-override"><code>from sys import stdin
from threading import Thread
# BUG: I get an error when I uncomment this import:
# import typing_minimal
thread = Thread(target=lambda: stdin.buffer.read(), daemon=True)
</code></pre>
<p>typing_minimal.py:</p>
<pre class="lang-py prettyprint-override"><code>from functools import lru_cache, wraps
def _tp_cache(func):
cached = lru_cache()(func)
@wraps(func)
def inner(*args, **kwds):
return cached(*args, **kwds)
return inner
class TypeVar:
def __init__(self, _):
pass
class _GenericAlias:
def __init__(self, origin):
self.origin = origin
def __eq__(self, other):
if isinstance(other, _GenericAlias):
return self.origin is other.origin
return NotImplemented
def __hash__(self):
return hash(self.origin)
def __mro_entries__(self, _):
return (self.origin,)
class Generic:
@_tp_cache
def __class_getitem__(cls, *_):
return _GenericAlias(cls)
</code></pre>
<p>Does anyone have any idea what's going on here? I'm very confused.</p>
<p><em>Note: while this code produces the same error, this question is about why importing <code>typing_minimal</code> in <code>reader.py</code> breaks things:</em></p>
<pre class="lang-py prettyprint-override"><code>from sys import stdin
from threading import Thread
Thread(target=stdin.buffer.read, daemon=True).start()
</code></pre>
|
<python><multithreading>
|
2024-03-01 21:39:22
| 2
| 1,205
|
Nice Zombies
|
78,090,424
| 1,561,777
|
Getting Error Attempting to Debug Python Azure Function - attempted relative import with no known parent package
|
<p>I need to be able to debug an Azure function that was written in Python. I am just starting to learn Python, so I apologize if this is overly simplistic. I do not understand why I am getting this error <code>attempted relative import with no known parent package</code>.</p>
<p>I am using <code>VS Code</code> and a Python extension to debug. My project is organized as follows:</p>
<pre><code>/folder1 // the following files are in folder1
.funcignore
host.json
proxies.json
requirements.txt
/folder1/subfolder1 // the following files are in subfolder1
__init__.py
function.json
host.json
file1.py
file2.py
file3.py
</code></pre>
<p>function.json</p>
<pre><code>{
"scriptFile": "__init__.py",
"bindings": [{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"post",
"get"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
</code></pre>
<p>.vscode/launch.json</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python Debugger: Current File",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
}
]
}
</code></pre>
<p>.vscode/settings.json</p>
<pre><code>{
"azureFunctions.projectSubpath": "folder1"
}
</code></pre>
<p>__init__.py</p>
<pre><code>import logging
import sys, os, json
import azure.functions as func
from datetime import datetime
from . import file1 as foo
from . file2 import *
</code></pre>
<p>The error, <code>attempted relative import with no known parent package</code> happens on the line <code>from . import file1 as foo</code> and line <code>from . file2 import *</code>.</p>
<p>This function works well on Azure. It is only an issue when trying to run locally. I have read post after post saying that <code>you're not supposed to use relative imports when running your script as the main module.</code> See <a href="https://stackoverflow.com/a/73434418/1561777">here</a> for example. That post also states that the simplest fix is <code>Be sure that your root folder is in sys.path so that interpreter always finds it. (If not, append it manually)</code>. But it doesn't describe how to do that.</p>
<p>One thing I should mention is that I installed all of the packages into a virtual environment if that makes a difference. I am also trying to debug with the virtual environment activated.</p>
<p>From the console, if I execute <code>echo $env:pythonpath</code>, I get <code>C:\Users\me\repo\ProjectName\folder1</code></p>
<p>I did some reading about <code>PYTHONPATH</code>: See <a href="https://docs.python.org/3/using/cmdline.html#envvar-PYTHONPATH" rel="nofollow noreferrer">Python Docs</a>. I set my PYTHONPATH as
<a href="https://i.sstatic.net/YXhFD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YXhFD.png" alt="PYTHONPATH" /></a></p>
<p>Where the last folder is equivalent to Folder1 in my example. I also tried it with the SubFolder1. Each time I changed the environment variable, I logged out and back in.</p>
<p>I can make the error go away by getting rid of the dot, which I believe means relative path, but the code works in production and we don't want to change the code to make it work locally.</p>
<p>What is the best and most correct way to fix this issue?</p>
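<p>For reference, one commonly suggested setup for Azure Functions in VS Code is to attach the debugger to the Functions host instead of launching the current file directly, since running <code>__init__.py</code> as a top-level script is what strips it of its parent package. A sketch of such a <code>launch.json</code> entry, following the defaults the Azure Functions extension generates (the port and task name are those defaults, not verified against this project):</p>

```json
{
    "name": "Attach to Python Functions",
    "type": "debugpy",
    "request": "attach",
    "connect": { "host": "localhost", "port": 9091 },
    "preLaunchTask": "func: host start"
}
```

<p>With this configuration the host imports <code>subfolder1</code> as a package, so the relative imports resolve the same way they do in Azure.</p>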
|
<python><azure><azure-functions>
|
2024-03-01 20:58:30
| 1
| 772
|
David.Warwick
|
78,090,257
| 3,446,927
|
Azure Functions Core Tools fails to start: Value cannot be null. (Parameter 'provider')
|
<p>When attempting to start my Azure Functions project in local debug mode in Visual Studio Code on my dev box on prem, I receive the following error:</p>
<pre><code>[2024-03-01T20:11:07.673Z] Error building configuration in an external startup class.
[2024-03-01T20:11:07.675Z] Error building configuration in an external startup class. System.Net.Http: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (functionscdn.azureedge.net:443). System.Net.Sockets: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
[2024-03-01T20:11:07.732Z] A host error has occurred during startup operation 'cb26a4d1-0d3d-492f-8b7f-e04cff13d48e'.
[2024-03-01T20:11:07.733Z] Microsoft.Azure.WebJobs.Script: Error building configuration in an external startup class. System.Net.Http: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (functionscdn.azureedge.net:443). System.Net.Sockets: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Value cannot be null. (Parameter 'provider')
</code></pre>
<p>Looking at the <a href="https://learn.microsoft.com/en-us/visualstudio/install/install-and-use-visual-studio-behind-a-firewall-or-proxy-server?view=vs-2022#use-visual-studio-and-azure-services" rel="nofollow noreferrer">Azure Services networking requirements for Visual Studio behind a firewall</a>, I can see that <code>functionscdn.azureedge.net</code> access is used "for checking for updated versions of the Azure Functions CLI."</p>
<p>How can I disable this behavior to enable me to debug my Azure Functions project without making this network change?</p>
|
<python><azure><azure-functions>
|
2024-03-01 20:18:43
| 1
| 539
|
Joe Plumb
|
78,090,245
| 10,251,414
|
OpenVINOInferencer on GPU
|
<p>I've trained a model (Padim with resnet 50_2) with the anomalib package (python). The recall & precision are quite good, so I want to implement it for a demo.</p>
<p>Currently I'm using this code:</p>
<pre><code>inferencer = OpenVINOInferencer(
path=openvino_model_path,
metadata=metadata_path,
device="CPU",
)
predictions = inferencer.predict(image=image)
</code></pre>
<p>This works perfectly but only gets around 40 fps. When I try to use it with "GPU", the build fails (as I have an Nvidia card).</p>
<p>My application requires 1000 - 2000 fps (which I've been able to achieve with CNNs).</p>
<p>How can I do inference on the GPU?</p>
<p>My training process outputs multiple model file types:</p>
<pre><code>- model.bin
- model.onnx
- model.xml
- metadata.json
</code></pre>
|
<python><deep-learning><anomaly-detection><openvino>
|
2024-03-01 20:14:30
| 1
| 5,850
|
Karel Debedts
|
78,090,095
| 825,227
|
Relatively easy way to overlay a seaborn histogram with a normal distribution plot
|
<p>I have a dataframe for which I've created a histogram in Seaborn:</p>
<pre><code>import seaborn as sns
sns.histplot(df, bins=100)
</code></pre>
<p><a href="https://i.sstatic.net/0rr5x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0rr5x.png" alt="enter image description here" /></a></p>
<p>I'm also able to create a pdf from its distribution:</p>
<pre><code>import scipy.stats as stats
import numpy as np
import matplotlib.pyplot as plt
a = df.describe()
x = np.linspace(a['min'], a['max'])
plt.plot(x, stats.norm.pdf(x, a['mean'], a['std']))
</code></pre>
<p><a href="https://i.sstatic.net/S9MaC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S9MaC.png" alt="enter image description here" /></a></p>
<p>But when I combine them, the scales don't match, so the PDF isn't visible.</p>
<p>Is there a quick way to do this (and the same for a lognormal distribution)?</p>
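<p>A likely cause of the mismatch: <code>histplot</code> defaults to counts on the y-axis, while the pdf integrates to 1. Passing <code>stat="density"</code> to <code>histplot</code> should put both on the same scale. A numpy-only sketch of the pdf side (the seaborn/matplotlib calls are commented out, and the data is a random stand-in for the dataframe):</p>

```python
import numpy as np

data = np.random.default_rng(0).normal(10, 2, 1000)   # stand-in for df
mu, sigma = data.mean(), data.std()
x = np.linspace(data.min(), data.max(), 200)
pdf = np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# With both on the density scale the curve overlays the bars:
# sns.histplot(data, bins=100, stat="density")
# plt.plot(x, pdf)

# Sanity check: the pdf integrates to ~1, i.e. the density scale.
area = ((pdf[1:] + pdf[:-1]) / 2 * np.diff(x)).sum()
print(round(area, 2))
```

<p>The same idea applies to the lognormal case, e.g. with <code>stats.lognorm</code> fitted to the data.</p>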
|
<python><pandas><statistics><seaborn><visualization>
|
2024-03-01 19:41:26
| 1
| 1,702
|
Chris
|
78,089,788
| 759,880
|
Scikit-learn t-SNE plot
|
<p>I am doing a t-SNE plot for a time series of vectors, showing that the vectors end up in different clusters depending on "jumps" in the values of some components of the vectors. The visualization clearly shows 3 clusters corresponding to the 3 average vectors in the time series. But to highlight the temporal aspect, I'd like to plot arrows from one point to the next in the time series, following the time order. How could I do that?</p>
<p>Code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
# Create random vector time series
X = np.random.random(size=(1000,5))
X[500:,3] += 3*np.ones(500)
X[750:,4] += 2*np.ones(250)
y = np.hstack((np.zeros(500), np.ones(250), 2*np.ones(250)))
# Calculate 2d embedding and display
X_embedded = TSNE(n_components=2, learning_rate='auto', init='random', perplexity=6).fit_transform(X)
fig, ax = plt.subplots(1,1)
plt.scatter(X_embedded[:,0], X_embedded[:,1], c = y, cmap=plt.cm.rainbow);
</code></pre>
<p>Result:</p>
<p><a href="https://i.sstatic.net/8E3wT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8E3wT.png" alt="enter image description here" /></a></p>
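<p>One option for the arrows is a quiver plot over the consecutive differences of the embedding: the segment vectors are just <code>np.diff</code> along the time order. A numpy-only sketch (the <code>quiver</code> call is commented out because it needs the axes from the code above; the styling values are just one reasonable choice):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.random((10, 2))               # stand-in for X_embedded
dx = np.diff(emb[:, 0])                 # x-component of each arrow
dy = np.diff(emb[:, 1])                 # y-component of each arrow

# On the axes from the question one could then draw:
# ax.quiver(emb[:-1, 0], emb[:-1, 1], dx, dy, angles='xy',
#           scale_units='xy', scale=1, width=0.003, color='gray')

print(dx.shape, dy.shape)               # (9,) (9,)
```

<p>With <code>angles='xy'</code>, <code>scale_units='xy'</code>, and <code>scale=1</code>, each arrow runs exactly from one embedded point to the next in time order.</p>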
|
<python><scikit-learn><tsne>
|
2024-03-01 18:31:27
| 0
| 4,483
|
ToBeOrNotToBe
|
78,089,666
| 12,305,582
|
Why is 'None' in builtins.__dict__
|
<p>The following are true:</p>
<ul>
<li><code>'None' in keyword.kwlist</code></li>
<li><code>'None' in builtins.__dict__ # import builtins</code></li>
</ul>
<p>My understanding:</p>
<ul>
<li>Python evals identifier <code>x</code> by getting object <code>builtins.__dict__[x]</code></li>
<li>Python evals keyword <code>x</code> in a special way that depends on what <code>x</code> is</li>
</ul>
<p>This implies that Python evals keyword <code>None</code> to the value of type <code>NoneType</code> (which is interned) without using <code>builtins.__dict__</code>. So why does <code>builtins.__dict__</code> contain <code>'None'</code>?</p>
<p>(the same question applies to <code>True</code> and <code>False</code>)</p>
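<p>A quick experiment confirms the two observations can coexist: the entry exists in <code>builtins.__dict__</code>, but the compiled bytecode for an expression using <code>None</code> performs no name lookup at all, so that entry is not what the evaluator consults (exact opcodes vary by CPython version):</p>

```python
import builtins
import dis

assert builtins.__dict__['None'] is None      # the entry exists...

# ...but the compiler treats None as a constant, not a looked-up name:
ops = {i.opname for i in dis.get_instructions(lambda: None)}
print({'LOAD_GLOBAL', 'LOAD_NAME'} & ops)     # set()
```
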
|
<python><keyword><python-builtins>
|
2024-03-01 18:06:08
| 1
| 579
|
user615536
|
78,089,594
| 164,171
|
Python: Reading long lines with asyncio.StreamReader.readline()
|
<p>The asyncio version of <code>readline</code>() (1) allows reading a single line from a stream asynchronously. However, if it encounters a line that is longer than a limit, it will raise an exception (2). It is unclear how to resume reading after such an exception is raised.</p>
<p>I would like a function similar to <code>readline</code>(), that simply discards parts of lines that exceed the limit, and continues reading until the stream ends. Does such a method exist? If not, how to write one?</p>
<p>1: <a href="https://docs.python.org/3/library/asyncio-stream.html#streamreader" rel="nofollow noreferrer">https://docs.python.org/3/library/asyncio-stream.html#streamreader</a></p>
<p>2: <a href="https://github.com/python/cpython/blob/3.12/Lib/asyncio/streams.py#L549" rel="nofollow noreferrer">https://github.com/python/cpython/blob/3.12/Lib/asyncio/streams.py#L549</a></p>
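<p>As far as I know there is no built-in method for this, but a sketch of one can be written on top of <code>StreamReader.read()</code>: keep at most <code>limit</code> bytes of each line and silently drop the rest until the newline (byte-at-a-time here for simplicity, so slow; a chunked version would need its own buffer):</p>

```python
import asyncio

async def readline_discard(reader: asyncio.StreamReader, limit: int) -> bytes:
    """Like readline(), but drops bytes past `limit` instead of raising.

    Returns b"" at EOF; keeps the trailing newline when it fits.
    """
    line = bytearray()
    while True:
        b = await reader.read(1)      # byte-at-a-time: simple, not fast
        if not b:                     # EOF
            return bytes(line)
        if len(line) < limit:
            line += b
        if b == b"\n":
            return bytes(line)

async def demo():
    reader = asyncio.StreamReader()
    reader.feed_data(b"short\n" + b"x" * 50 + b"\ntail\n")
    reader.feed_eof()
    return [await readline_discard(reader, 10) for _ in range(3)]

print(asyncio.run(demo()))  # [b'short\n', b'xxxxxxxxxx', b'tail\n']
```

<p>Unlike <code>readline()</code>, an over-long line yields its truncated prefix and reading resumes cleanly at the next line.</p>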
|
<python><asynchronous><io><python-asyncio>
|
2024-03-01 17:51:26
| 4
| 56,902
|
static_rtti
|
78,089,551
| 12,390,973
|
How to create a simple capacity expansion model while maximizing revenue using Pyomo?
|
<p>I have created a capacity expansion model using Pyomo, with 2 main generators <strong>gen1</strong> and <strong>gen2</strong> and one backup generator <strong>lost_load</strong>. The logic is pretty simple: gen1 and gen2 run to fulfill the demand profile and earn revenue for that, and there is a negative (fixed) cost associated with the gen1 and gen2 capacities. Here is the code:</p>
<pre><code>import datetime
import pandas as pd
import numpy as np
from pyomo.environ import *
model = ConcreteModel()
np.random.seed(24)
load_profile = np.random.randint(90, 120, 24)
model.m_index = Set(initialize=list(range(len(load_profile))))
model.grid = Var(model.m_index, domain=NonNegativeReals)
# Gen1 variable
model.gen1_cap = Var(domain=NonNegativeReals)
model.gen1_use = Var(model.m_index, domain=NonNegativeReals)
# Gen2 variables
model.gen2_cap = Var(domain=NonNegativeReals)
model.gen2_use = Var(model.m_index, domain=NonNegativeReals)
# Load profile
model.load_profile = Param(model.m_index, initialize=dict(zip(model.m_index, load_profile)))
model.lost_load = Var(model.m_index, domain=NonNegativeReals)
# Objective function
def revenue(model):
total_revenue = sum(
model.gen1_use[m] * 5.2 +
model.gen2_use[m] * 6.1 +
model.lost_load[m] * -100
for m in model.m_index)
total_fixed_cost = model.gen1_cap * -45 + model.gen2_cap * -50
total_cost = total_revenue + total_fixed_cost
return total_cost
model.obj = Objective(rule=revenue, sense=maximize)
# When i=0
def energy_balance1(model, m):
return model.grid[m] <= model.gen1_use[m] + model.gen2_use[m] + model.lost_load[m]
model.energy_balance1 = Constraint(model.m_index, rule=energy_balance1)
def grid_limit(model, m):
return model.grid[m] == model.load_profile[m]
model.grid_limit = Constraint(model.m_index, rule=grid_limit)
def max_gen1(model, m):
eq = model.gen1_use[m] <= model.gen1_cap
return eq
model.max_gen1 = Constraint(model.m_index, rule=max_gen1)
def max_gen2(model, m):
eq = model.gen2_use[m] <= model.gen2_cap
return eq
model.max_gen2 = Constraint(model.m_index, rule=max_gen2)
Solver = SolverFactory('gurobi')
Solver.options['LogFile'] = "gurobiLog"
# Solver.options['MIPGap'] = 0.50
print('\nConnecting to Gurobi Server...')
results = Solver.solve(model)
if (results.solver.status == SolverStatus.ok):
if (results.solver.termination_condition == TerminationCondition.optimal):
print("\n\n***Optimal solution found***")
print('obj returned:', round(value(model.obj), 2))
else:
print("\n\n***No optimal solution found***")
if (results.solver.termination_condition == TerminationCondition.infeasible):
print("Infeasible solution")
exit()
else:
print("\n\n***Solver terminated abnormally***")
exit()
grid_use = []
gen1 = []
gen2 = []
lost_load = []
load = []
for i in range(len(load_profile)):
grid_use.append(value(model.grid[i]))
gen1.append(value(model.gen1_use[i]))
gen2.append(value(model.gen2_use[i]))
lost_load.append(value(model.lost_load[i]))
load.append(value(model.load_profile[i]))
print('gen1 capacity: ', value(model.gen1_cap))
print('gen2 capacity: ', value(model.gen2_cap))
pd.DataFrame({
'Grid': grid_use,
'Gen1': gen1,
'Gen2': gen2,
'Shortfall': lost_load,
'Load': load
}).to_excel('capacity expansion.xlsx')
</code></pre>
<p>This model works if:</p>
<pre><code>def energy_balance1(model, m):
return model.grid[m] == model.gen1_use[m] + model.gen2_use[m] + model.lost_load[m]
model.energy_balance1 = Constraint(model.m_index, rule=energy_balance1)
</code></pre>
<p>But if I relax it to <strong>(<=)</strong> less than or equal, it gives me an infeasible error. I want this relaxed condition because I actually want gen1 and gen2 to run at <strong>100% capacity</strong> in all 24 periods. After all, the objective function has a positive revenue coefficient on <strong>gen1_use</strong> and <strong>gen2_use</strong>, so to maximize revenue they should run at 100% capacity even once the load profile is fulfilled. Here is the output I am getting:</p>
<pre><code>optimal gen1 capacity: 9MW
optimal gen2 capacity: 108MW
</code></pre>
<p>However, I want the dispatch to look something like this:
<a href="https://i.sstatic.net/KrJOM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KrJOM.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/uNSeU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uNSeU.png" alt="enter image description here" /></a></p>
|
<python><pyomo><capacity-planning>
|
2024-03-01 17:43:28
| 1
| 845
|
Vesper
|
78,089,393
| 395,857
|
Why is my response JSON object missing `prompt_filter_results` when serializing the Azure OpenAI response object into a JSON object?
|
<p>I run some Azure OpenAI request, and try to convert the response object into JSON:</p>
<pre><code>#Note: This code sample requires OpenAI Python library version 1.0.0 or higher.
import json
import pprint
from openai import AzureOpenAI
client = AzureOpenAI(
azure_endpoint = "https://xxxxxx.openai.azure.com/",
api_key='xxxxxxxxxxxxxxxxxxxxx',
api_version="2023-07-01-preview"
)
message_text = [{"role":"system","content":"You are an AI assistant that helps people find information."}]
completion = client.chat.completions.create(
model="gpt-4xxxxxxxx",
messages = message_text,
temperature=0.7,
max_tokens=800,
top_p=0.95,
frequency_penalty=0,
presence_penalty=0,
stop=None
)
print('completion:\n')
pprint.pprint(completion)
# Convert Python object to JSON
json_data = json.dumps(completion, default=lambda o: o.__dict__, indent=4)
# Print JSON
print(json_data)
</code></pre>
<p>Looking at the output, the response object <code>completion</code> contains:</p>
<pre><code>ChatCompletion(id='chatcmpl-xxxxxxxxx', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Great! How can I assist you today?', role='assistant', function_call=None, tool_calls=None), content_filter_results={'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}})], created=1709313222, model='gpt-4', object='chat.completion', system_fingerprint='fp_xxxxx', usage=CompletionUsage(completion_tokens=9, prompt_tokens=18, total_tokens=27), prompt_filter_results=[{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}])
</code></pre>
<p>However, the corresponding JSON object is missing <code>prompt_filter_results</code>:</p>
<pre><code>{
"id": "chatcmpl-xxxxxxx",
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": "Great! How can I assist you today?",
"role": "assistant",
"function_call": null,
"tool_calls": null
}
}
],
"created": 1709313222,
"model": "gpt-4",
"object": "chat.completion",
"system_fingerprint": "fp_xxxxx",
"usage": {
"completion_tokens": 9,
"prompt_tokens": 18,
"total_tokens": 27
}
}
</code></pre>
<p>Why is my response JSON object missing <code>prompt_filter_results</code> when serializing the Azure OpenAI response object into a JSON object?</p>
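<p>A minimal reproduction of what I suspect is happening, using plain pydantic v2 (<code>FakeCompletion</code> below is a stand-in I made up, not the real openai <code>ChatCompletion</code> type): models declared with <code>extra="allow"</code> keep extra fields like <code>prompt_filter_results</code> in <code>__pydantic_extra__</code>, not in <code>__dict__</code>, so the <code>default=lambda o: o.__dict__</code> trick silently drops them, while <code>model_dump_json()</code> keeps them.</p>

```python
import json

from pydantic import BaseModel, ConfigDict

class FakeCompletion(BaseModel):
    # Stand-in for the openai response model; extras are permitted,
    # which is how Azure's prompt_filter_results key gets attached.
    model_config = ConfigDict(extra="allow")
    id: str

c = FakeCompletion(id="chatcmpl-xxx", prompt_filter_results=[{"prompt_index": 0}])

via_dict = json.dumps(c, default=lambda o: o.__dict__)  # drops the extra field
via_dump = c.model_dump_json()                          # keeps it
```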
|
<python><json><serialization><azure-openai>
|
2024-03-01 17:16:34
| 1
| 84,585
|
Franck Dernoncourt
|
78,089,384
| 5,799,799
|
A scalable way of checking if a string column is contained within another string column in Polars
|
<p>Is there a scalable way of creating the column <code>B_in_A</code> below that doesn't rely on map_elements?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({"A":["foo","bar","foo"],"B":["f","b","s"]})
df = (
df
.with_columns(
pl.struct(["A","B"])
.map_elements(lambda row: (
row["B"] in row["A"]
), return_dtype=pl.Boolean).alias("B_in_A")
)
)
print(df)
</code></pre>
<p>output is</p>
<p>shape: (3, 3)</p>
<pre><code>┌─────┬─────┬────────┐
│ A ┆ B ┆ B_in_A │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ bool │
╞═════╪═════╪════════╡
│ foo ┆ f ┆ true │
│ bar ┆ b ┆ true │
│ foo ┆ s ┆ false │
└─────┴─────┴────────┘
</code></pre>
|
<python><dataframe><python-polars>
|
2024-03-01 17:14:30
| 1
| 435
|
DataJack
|
78,089,288
| 759,991
|
Bar Chart Not Stacking
|
<p>I have been trying to adapt this tutorial, <a href="https://simpleisbetterthancomplex.com/tutorial/2020/01/19/how-to-use-chart-js-with-django.html" rel="nofollow noreferrer">How to Use Chart.js with Django</a>, to my AWS billing reports. I want to tweak this graph from the tutorial:</p>
<p><a href="https://i.sstatic.net/Fyq2r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fyq2r.png" alt="enter image description here" /></a></p>
<p>I want to "stack" the costs from the various AWS services for each month, but when I tried to stack the cost I get this chart:</p>
<p><a href="https://i.sstatic.net/1M2TJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1M2TJ.png" alt="My chart" /></a></p>
<p>Here is the pertinent code I have written:</p>
<p><code>monthly_cost/views.py</code>:</p>
<pre><code>from django.shortcuts import render
from django.db.models import Sum
from rest_framework import viewsets
from django.http import JsonResponse
from .models import MonthlyCostReport
from .serializers import MonthlyCostReportSerializer
def home(request):
return render(request, 'home.html')
def product_cost_chart(request):
colors = ['#DFFF00', '#FFBF00', '#FF7F50', '#DE3163', '#9FE2BF', '#40E0D0', '#6495ED', '#CCCCFF', '#9CC2BF',
'#40E011', '#641111', '#CCCC00']
labels = [label.get('bill_billing_period_start_date').strftime('%Y-%m-%d') for label in
MonthlyCostReport.objects.values('bill_billing_period_start_date').order_by(
'bill_billing_period_start_date').distinct()]
datasets = []
for i, product_cost_pair in \
enumerate(MonthlyCostReport.objects.filter(bill_billing_period_start_date=labels[0]).values(
'line_item_product_code').annotate(product_cost=Sum('line_item_blended_cost')).order_by(
'-product_cost')):
dataset = {
'label': product_cost_pair.get('line_item_product_code'),
'backgroundColor': colors[i % len(colors)],
'data': [pc.get('product_cost') for pc in MonthlyCostReport.objects \
.filter(line_item_product_code=product_cost_pair.get('line_item_product_code')) \
.values('line_item_product_code', 'bill_billing_period_start_date') \
.annotate(product_cost=Sum('line_item_unblended_cost')).order_by('bill_billing_period_start_date')]
}
datasets.append(dataset)
return JsonResponse(data={
'labels': labels,
'datasets': datasets,
})
</code></pre>
<p><code>templates/home.html</code></p>
<pre><code>{% block content %}
<div id="container" style="width: 75%;">
<canvas id="product-cost-chart" data-url="{% url 'product-cost-chart' %}"></canvas>
</div>
<script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
<script>
$(function () {
var $productCostChart = $("#product-cost-chart");
$.ajax({
url: $productCostChart.data("url"),
success: function (data) {
console.log(data);
var ctx = $productCostChart[0].getContext("2d");
new Chart(ctx, {
type: 'bar',
data: { labels: data.labels, datasets: data.datasets, },
options: {
plugins: { title: { display: true, text: 'Stacked Bar chart for pollution status' }, },
scales: { x: { stacked: true, }, y: { stacked: true } }
}
});
}
});
});
</script>
{% endblock %}
</code></pre>
<p>My guess is that there is something wrong with my "options" section.</p>
<h2>Update</h2>
<p>Thanks @kikon for the suggestion. I am now using this line:</p>
<pre><code><script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
</code></pre>
<p>and the chart now looks like this:</p>
<p><a href="https://i.sstatic.net/O6TVO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O6TVO.png" alt="enter image description here" /></a></p>
|
<javascript><python><django><chart.js>
|
2024-03-01 16:57:55
| 0
| 10,590
|
Red Cricket
|
78,089,263
| 17,323,391
|
Python UDP socket not sending all packets, or not all packets are received on the other end
|
<p>I'm working on an implementation of a logical data diode, which means that data can only flow in one direction. No ACKs are allowed. Therefore, I have chosen UDP. This is the gist of the protocol:</p>
<ol>
<li>Split the payload into small chunks</li>
<li>Give each chunk a sequence number</li>
<li>Transmit the chunk to the receiver using UDP</li>
</ol>
<p>For small payloads, this works flawlessly. For larger payloads, however, at around 600 datagrams sent, the packets seem to mysteriously disappear in transit. All packets are <em>logged</em> as sent, but on the receiving end, the packets just stop.</p>
<p>Here is a code snippet:</p>
<pre><code> for i in range(redundancy + 1):
seq = 0
sent_bytes = 0
bytes_to_send = len(session.encrypted_data)
while sent_bytes < bytes_to_send:
logger.info("Sending payload chunk %d", seq)
header = concat_bytes(str(seq).zfill(16).encode(), session.session_uuid.bytes_le)
remaining_room = BUFFER_SIZE - len(header)
data = session.encrypted_data[sent_bytes : sent_bytes + remaining_room]
payload = concat_bytes(header, data)
self._transmit_bytes(payload)
sent_bytes += len(data)
seq += 1
</code></pre>
<p><code>_transmit_bytes</code>:</p>
<pre><code> def _transmit_bytes(self, message: bytes):
self.server_socket.sendto(message, self.addr)
time.sleep(MESSAGE_DELAY)
</code></pre>
<p>The initialization of <code>server_socket</code>:</p>
<pre><code> self.server_socket: LDDSocket = LDDSocket(listen=False)
so_linger_options = struct.pack("ii", 1, CLOSE_TIMEOUT)
send_buffer_size = 1024 * 1024 * 100 # 100 MB
self.server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, so_linger_options)
self.server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, send_buffer_size)
</code></pre>
<p><code>LDDSocket</code> is an extremely simple wrapper for <code>socket.socket</code>. There is nothing you need to know about it (<code>listen=False</code> just makes it not bind).</p>
<p>As you can see, I have tried some tips that I found online, including:</p>
<ul>
<li>Setting SO_LINGER options (<code>CLOSE_TIMEOUT</code> is 10 seconds)</li>
<li>Setting SO_SNDBUF options</li>
<li>Before closing the socket, I'm waiting 10 seconds. I expected that I would still see packets being received during these 10 seconds, but the receiving stops well before that timeout elapses and the subsequent <code>socket.close()</code> call happens.</li>
</ul>
<p>The last point is done in the <code>__exit__</code> method of the "transmitter class", which is used as a context manager:</p>
<pre><code> def __exit__(self, exc_type, exc_val, exc_tb):
cleanup_grace_period = 10 # seconds
logger.info("Waiting %d seconds for cleanup...", cleanup_grace_period)
time.sleep(cleanup_grace_period)
logger.info("Cleanup presumed to be complete; closing socket.")
self.close()
logger.info("Socket closed.")
</code></pre>
<p>Again, the receiving end stops receiving messages well before this <code>__exit__</code> method is called.</p>
<p>What could cause a socket to stop sending data after a certain amount of data has been sent? (It always ends roughly around the same sequence number, but not exactly.)</p>
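<p>For completeness, one thing I haven't tried yet is enlarging the <em>receiver's</em> buffer rather than the sender's — my current suspicion is that drops happen in the receiving socket's kernel buffer when the reader can't keep up. A sketch of the request (note the kernel may clamp the granted size, e.g. via <code>net.core.rmem_max</code> on Linux):</p>

```python
import socket

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask for a large receive buffer; getsockopt shows what was actually granted.
recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
granted = recv_sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
recv_sock.close()
```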
|
<python><sockets><network-programming><udp><python-sockets>
|
2024-03-01 16:54:26
| 1
| 310
|
404usernamenotfound
|
78,089,139
| 1,504,082
|
FastAPI / pydantic: field_validator not considered when using empty Depends()
|
<p>I am following <strong>Method 2</strong> of this <a href="https://stackoverflow.com/a/70640522/1504082">answer</a> to be able to upload multiple files in combination with additional data using fastapi. It is working fine.</p>
<p>After starting to implement the handling of the additional data including validation using pydantic's <code>BaseModel</code> i am facing an issue:</p>
<p>My custom <code>field_validator</code> is working when using the model class directly but it is not working as expected, when using it via FastAPI and <code>Depends()</code>.</p>
<p>The key point is that i want to use a python <code>Enum</code> and i want to be able to use the Enum's <code>names</code> in the additional data (query parameters). For this reason i am using a custom validator to only allow names which exist in the enum.</p>
<p>When i initialize the model manually, the validation works as expected:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
from pydantic import BaseModel, field_validator, ValidationInfo, ValidationError
class VehicleSubSystems(Enum):
A = "A Verbose"
B = "B Verbose"
class EvaluationArguments(BaseModel):
vehicle_sub_system: VehicleSubSystems
@field_validator("vehicle_sub_system", mode='before')
@classmethod
def validate_vehicle_sub_system(cls, vehicle_sub_system: str, _info: ValidationInfo) -> VehicleSubSystems:
""" Allows using the enum names instead of the values """
try:
return VehicleSubSystems[vehicle_sub_system]
except KeyError:
raise ValueError(f"Can not find vehicle subsystem '{vehicle_sub_system}'. "
f"Allowed values: {[e.name for e in VehicleSubSystems]}")
def test_validation_is_performed():
""" Test that the validation is performed """
EvaluationArguments(vehicle_sub_system="A")
try:
EvaluationArguments(vehicle_sub_system="DOES_NOT_EXIST")
except ValidationError:
print("Test passed")
else:
print("Test failed")
if __name__ == '__main__':
test_validation_is_performed()
# prints "Test passed" as expected
</code></pre>
<p>Combining this with the FastAPI application shows unexpected behavior: The field_validator is not considered. Instead the default behavior of the model class is used.</p>
<p>Server code:</p>
<pre class="lang-py prettyprint-override"><code>import uvicorn
from typing import List
from fastapi import FastAPI, File, Depends, UploadFile
app = FastAPI()
def create_args(vehicle_sub_system: str):
return EvaluationArguments(vehicle_sub_system=vehicle_sub_system)
@app.post("/process-works")
def process_works(files: List[UploadFile] = File(...), eval_args: EvaluationArguments = Depends(create_args)):
return f"Got {len(files)} files and {eval_args}"
@app.post("/process-fails")
def process_fails(files: List[UploadFile] = File(...), eval_args: EvaluationArguments = Depends()):
return f"Got {len(files)} files and {eval_args}"
if __name__ == '__main__':
uvicorn.run(app, host="0.0.0.0", port=8000)
</code></pre>
<p>Client code:</p>
<pre class="lang-py prettyprint-override"><code>import requests
if __name__ == '__main__':
url = 'http://127.0.0.1:8000'
files = [('files', open('d:/temp/a.txt', 'rb')), ('files', open('d:/temp/b.txt', 'rb'))]
params = {"vehicle_sub_system": "A"}
print("Calling process-works")
resp = requests.post(url=f"{url}/process-works", params=params, files=files)
print(resp.json())
print("Calling process-fails")
resp = requests.post(url=f"{url}/process-fails", params=params, files=files)
print(resp.json())
# Output
# Calling process-works
# Got 2 files and vehicle_sub_system=<VehicleSubSystems.A: 'A Verbose'>
# Calling process-fails
# {'detail': [{'type': 'enum', 'loc': ['query', 'vehicle_sub_system'], 'msg': "Input should be 'A Verbose' or 'B Verbose'", 'input': 'A', 'ctx': {'expected': "'A Verbose' or 'B Verbose'"}}]}
</code></pre>
<p>The <code>process-works</code> endpoint shows the expected behavior but only when using a separate dependency <code>Depends(create_args)</code> which mimics the direct usage of the model class.</p>
<p>The <code>process-fails</code> endpoint (using <code>Depends()</code>) shows the issue. I would expect <code>Depends()</code> to simply make FastAPI call the init method of the model class and use the validation as expected, but somehow it just ignores it.</p>
<p>I could not figure out why. Can somebody explain what happens here, and whether there is a solution without the workaround?</p>
|
<python><dependency-injection><enums><fastapi><pydantic>
|
2024-03-01 16:30:42
| 1
| 4,475
|
maggie
|
78,089,081
| 5,759,359
|
Release redis connection back to pool using python
|
<p>I have a scheduled job that writes data to a Redis cache.
For each data-set row, I fetch a connection from the pool and use a Redis pipeline to insert it into Redis. But when I monitor via <code>INFO clients</code>, the <code>connected_clients</code> number keeps increasing. I see <code>maxclients</code> as 7500, but when <code>connected_clients</code> reaches 1049, the job halts and does not process further with the error <code>redis.exceptions.TimeoutError: Timeout connecting to server</code>.</p>
<p><a href="https://i.sstatic.net/me212.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/me212.png" alt="enter image description here" /></a></p>
<p>I fetch the connection via the below method</p>
<pre><code>from redis import asyncio as aioredis
def get_redis_connection(db_name: int) -> aioredis.Redis:
redis_protocol = "rediss" if redis_settings.redis_ssl else "redis"
return aioredis.Redis(
auto_close_connection_pool=True,
host=redis_settings.redis_hostname,
port=redis_settings.redis_port,
db=db_name,
password=redis_settings.redis_password,
ssl=redis_settings.redis_ssl,
connection_pool=aioredis.ConnectionPool.from_url(
f"{redis_protocol}://:{redis_settings.redis_password}@{redis_settings.redis_hostname}:{redis_settings.redis_port}/{db_name}",
connection_class=aioredis.Connection,
max_connections=redis_settings.redis_pool_size,
),
health_check_interval=HEALTH_CHECK_INTERVAL,
)
</code></pre>
<p>This is how I use the connection</p>
<pre><code>async def populate_data_to_cache(bars: Iterator[BarsUnique]):
grouped_bars = {}
for bar in bars:
key = f"{bar.code}:{bar.dir}"
if key in grouped_bars:
grouped_bars[key].append(encode_data(bar))
else:
grouped_bars[key] = [encode_data(bar)]
async with cache.get_redis_connection(BAR_DB) as redis:
for key in grouped_bars.keys():
async with redis.pipeline(transaction=True) as p:
await p.delete(key)
                await p.rpush(key, *grouped_bars[key])
await p.expire(key, timedelta(days=cache.DEFAULT_EXPIRATION_DAYS))
await p.execute()
</code></pre>
<p>Currently, I am using Python 3.10 and redis 5.0.0</p>
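<p>One thing I noticed while writing this up: <code>get_redis_connection</code> builds a brand-new <code>ConnectionPool.from_url(...)</code> on every call, so each caller gets its own pool. I'm now wondering whether memoizing one pool per database is the fix. A stand-in sketch of the pattern (the <code>object()</code> factory below is a placeholder for the real <code>aioredis.ConnectionPool</code> construction):</p>

```python
from functools import lru_cache

def _make_pool(db_name: int):
    # Placeholder for aioredis.ConnectionPool.from_url(...) — built once per db.
    return object()

@lru_cache(maxsize=None)
def get_pool(db_name: int):
    # Every caller for the same db shares one pool instead of creating
    # (and leaking) a fresh pool per call.
    return _make_pool(db_name)
```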
|
<python><caching><redis>
|
2024-03-01 16:20:23
| 1
| 477
|
Kashyap
|
78,089,057
| 1,145,808
|
Accessing `self` and method name from dynamically-created methods
|
<p>I have a class of which several methods could be dynamically generated, which would save some boilerplate:</p>
<pre><code>class A:
def save_foo(self, **kwargs):
do_foo_stuff_here()
def save_bar(self, **kwargs):
do_bar_stuff_here()
def save_foobar(self, **kwargs):
do_foobar_stuff_here()
</code></pre>
<p><a href="https://stackoverflow.com/questions/17929543/how-can-i-dynamically-create-class-methods-for-a-class-in-python">This stackoverflow answer</a> taught me that I can create the methods using</p>
<pre><code>class A():
pass
def _method_generator(cls, **kwargs):
if method_name == "save_foo":
do_foo_stuff_here()
    elif method_name == "save_bar":
do_bar_stuff_here()
elif method_name == "save_foobar":
do_foobar_stuff_here()
for method_name in ["save_foo", "save_bar", "save_foobar"]:
setattr(A, method_name, classmethod(_method_generator))
</code></pre>
<p>My problem is twofold:</p>
<ol>
<li>From <code>_method_generator</code>, I can access the class via <code>cls</code>, but I don't see a way to access the instance, e.g. <code>self</code>. How can I do this?</li>
<li>In order to replicate the logic of <code>save_foo</code>, <code>save_bar</code>, <code>save_foobar</code> from <code>_method_generator</code>, I need to know which one was called. I've seen answers like <a href="https://stackoverflow.com/questions/2654113/how-to-get-the-callers-method-name-in-the-called-method">this one</a>, but that gives me <code>_method_generator</code>, not the method/attribute that called it. How can I get this?</li>
</ol>
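<p>The closest I've come is a closure-based factory, which at least shows the shape I'm after: each generated function receives <code>self</code> as an ordinary first argument and remembers its own name via the closure, so no caller-inspection is needed. (The <code>_make_saver</code> helper and its return string are my own sketch.)</p>

```python
class A:
    pass

def _make_saver(name):
    def method(self, **kwargs):
        # `self` is the instance; `name` is captured per-method by the closure,
        # so each generated method knows which "save_*" it is.
        return f"{name} on {type(self).__name__} with {kwargs}"
    method.__name__ = name
    return method

for method_name in ["save_foo", "save_bar", "save_foobar"]:
    setattr(A, method_name, _make_saver(method_name))

a = A()
result = a.save_bar(x=1)
```

<p>Binding works because a plain function stored on the class becomes a regular bound method, unlike the <code>classmethod</code> wrapper in my attempt above.</p>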
|
<python>
|
2024-03-01 16:16:52
| 1
| 829
|
DobbyTheElf
|
78,088,987
| 2,707,864
|
Msys2: Python 2.7 and Python 3.8 side by side, how to load correct site.py
|
<p>I have portable msys2 under Windows.
It works great.</p>
<p>I have both python 2.7 and python 3.8 installed.
If I load python 3.8 it works fine.</p>
<pre><code>$ python
Python 3.8.2 (default, Apr 16 2020, 15:31:48)
[GCC 9.3.0] on msys
Type "help", "copyright", "credits" or "license" for more information.
Reading /home/user/.pythonrc
readline is in /usr/lib/python3.8/lib-dynload/readline.cpython-38-i386-msys.dll
>>>
</code></pre>
<p>But I can't load python2</p>
<pre><code>$ python2
File "/c/Users/user1/Documents/appls_mydocs/PortableApps/MSYS2Portable/App/msys32/mingw64/lib/python3.8/site.py", line 178
file=sys.stderr)
^
SyntaxError: invalid syntax
</code></pre>
<p>How can I set up msys2 correctly for both versions?
Where is the directory that contains <code>site.py</code> specified?</p>
|
<python><msys2>
|
2024-03-01 16:04:02
| 1
| 15,820
|
sancho.s ReinstateMonicaCellio
|
78,088,922
| 5,987
|
Create TZ string from Python ZoneInfo
|
<p>I can easily get a <code>ZoneInfo</code> object for the time zone of my choice:</p>
<pre><code>import zoneinfo
ct = zoneinfo.ZoneInfo('America/Chicago')
</code></pre>
<p>I need to create a <code>TZ</code> environment variable for an embedded system that doesn't have a timezone database, so I can't simply reuse the location string as you might do on a regular Linux system. I need to embed the actual daylight savings transition rules in the string, for my example it would look like:</p>
<pre><code>TZ=CST06CDT05,M3.2.0/2,M11.1.0/2
</code></pre>
<p>I haven't been able to find any way to build that string. It doesn't bother me that the string might become obsolete if the rules change.</p>
<p>This also needs to work for time zones outside the U.S.</p>
<p>If there's a better way to get the information than using ZoneInfo, I'm willing to entertain answers for that too.</p>
|
<python><timezone><zoneinfo>
|
2024-03-01 15:54:55
| 1
| 309,773
|
Mark Ransom
|
78,088,887
| 12,691,626
|
No response when getting tables from a web page using the BeautifulSoup library
|
<p>I'm trying to get two data tables from a <a href="https://www.avamet.org/" rel="nofollow noreferrer">web page</a>. I'm using the BeautifulSoup Python library from Google Colab. The URL to download is the following: <a href="https://www.avamet.org/mx-consulta-diaria.php?id=%%25%%2525&ini=2024-02-27&fin=2024-02-28&token=s0141%21" rel="nofollow noreferrer">https://www.avamet.org/mx-consulta-diaria.php?id=%%25%%2525&ini=2024-02-27&fin=2024-02-28&token=s0141%21</a> You can see the two tables there!</p>
<p>I'm trying to do the following:</p>
<pre><code>import requests
import pandas as pd
from bs4 import BeautifulSoup
def get_avamet_data(START_DATE=None, END_DATE=None, region='%%25%%25'):
# All stations: %%25%%25 -------------------------------------------> Doesn't work
# Only the station 15: c15%25 ---------------------------------------> It works!!!
url = 'https://www.avamet.org/mx-consulta-diaria.php?id=' + region + '25&ini=' + START_DATE + '&fin=' + END_DATE + '&token=s0141%21'
print('Getting data from: ' + url)
response = requests.get(url)
web_html = BeautifulSoup(response.content, 'html.parser')
selector = 'table'
tables = web_html.find_all(selector)
table_1 = tables[0]
table_2 = tables[1]
data_1 = []
for row in table_1.find_all('tr'):
cols = row.find_all(['td', 'th'])
cols = [ele.text.strip() for ele in cols]
data_1.append([ele for ele in cols if ele])
data_2 = []
for row in table_2.find_all('tr'):
cols = row.find_all(['td', 'th'])
cols = [ele.text.strip() for ele in cols]
data_2.append([ele for ele in cols if ele])
return([pd.DataFrame(data_1), pd.DataFrame(data_2)])
</code></pre>
<p>Then:</p>
<pre><code>a = get_avamet_data(START_DATE='2024-02-27', END_DATE='2024-02-28')
a[0]
a[1]
</code></pre>
<p>But I obtain an empty list in the <code>tables</code> variable. However, when I change the region argument from <code>'%%25%%25'</code> to <code>'c15%25'</code>, it works.</p>
<p>Where is the problem?</p>
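<p>My own suspicion is that building the URL by string concatenation double-encodes the <code>%</code> characters. Assuming the server-side wildcard is the literal string <code>%%</code> (a guess based on the working <code>c15%25</code>, which decodes to <code>c15%</code>), letting <code>urlencode</code> do the percent-encoding would produce a clean query:</p>

```python
from urllib.parse import urlencode

params = {
    "id": "%%",          # all stations; urlencode turns each % into %25
    "ini": "2024-02-27",
    "fin": "2024-02-28",
    "token": "s0141!",   # the ! is encoded as %21 automatically
}
query = urlencode(params)
url = "https://www.avamet.org/mx-consulta-diaria.php?" + query
```

<p>With <code>requests</code>, passing <code>params=params</code> to <code>requests.get</code> achieves the same thing without manual concatenation.</p>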
|
<python><html><web-scraping><beautifulsoup><google-colaboratory>
|
2024-03-01 15:48:31
| 1
| 327
|
sermomon
|
78,088,608
| 4,704,065
|
Save the Matplotlib plot in html format
|
<p>I have a plot written in matplotlib.</p>
<p>Currently I save that plot as a PDF, but I also want to save it in .html format for better visualization.</p>
<p>I am not familiar with HTML, so I need some input here.</p>
<p>Below is my code:</p>
<pre><code> output_file = config["output_dir"] / "report.pdf"
with PdfPages(output_file) as pdf:
fig, ax = plt.subplots(2, constrained_layout = True)
for x in new_hpg_groupdf:
ax[0].plot(x[1]['iTOW'] , x[1]['ionoEst'] , label=f"{satellite} {x[0][1]}")
ax[0].set_ylabel("Iono Estimation (TECU)")
ax[0].set_xlabel("Itows (mSec)")
ax[0].grid(True)
ax[0].legend(fontsize=4 , loc='upper left' , bbox_to_anchor=[1.01, 1.02])
ax[0].set_title('Iono Estimation Output')
ax[1].plot(x[1]['iTOW'] , x[1]['ionoEstAcc'] , label=f"{satellite} {x[0][1]}")
ax[1].set_ylabel("Iono Accuracy (TECU)")
ax[1].set_xlabel("Itows (mSec)")
ax[1].grid(True)
ax[1].legend(fontsize=4 , loc='upper left' , bbox_to_anchor=[1.01, 1.02])
ax[1].set_title('Iono Estimation Accuracy Output')
pdf.savefig()
plt.close()
</code></pre>
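<p>One approach I'm considering is embedding the figure as a base64 PNG inside a minimal HTML page. Sketched below with placeholder bytes where <code>fig.savefig(buf, format='png')</code> would write the real image; the helper name <code>figure_to_html</code> is my own:</p>

```python
import base64

def figure_to_html(png_bytes: bytes, title: str = "report") -> str:
    """Wrap raw PNG bytes in a standalone HTML page via a data URI."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return (
        f"<html><head><title>{title}</title></head><body>"
        f'<img src="data:image/png;base64,{b64}"/>'
        "</body></html>"
    )

# With matplotlib, the bytes would come from:
#   buf = io.BytesIO(); fig.savefig(buf, format="png"); png = buf.getvalue()
html = figure_to_html(b"\x89PNG\r\n\x1a\nplaceholder")
```

<p>This keeps the page self-contained (no external image files), though the plot stays static; for pan/zoom interactivity a library such as mpld3 or Plotly would be needed instead.</p>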
|
<python><html><matplotlib>
|
2024-03-01 14:57:34
| 1
| 321
|
Kapil
|
78,088,237
| 6,891,461
|
Encountering Race Conditions Despite Using Django's Atomic Transactions and select_for_update
|
<p>I'm encountering race conditions in my Django application despite implementing atomic transactions and utilizing the select_for_update method. Here's an overview of the problem and the steps I've taken so far:</p>
<p><strong>Problem</strong>:
I have two Django models, Transaction and Account, where Transaction instances affect the balance of associated Account instances. However, due to the high volume of transactions, I'm experiencing race conditions, resulting in incorrect outcomes and incorrect balance updates.</p>
<p><strong>Database Schema</strong>:</p>
<pre><code>Transaction:
id (Primary Key)
amount
account_id (Foreign Key referencing Account.id)
Account:
id (Primary Key)
balance
</code></pre>
<p><strong>Steps Taken</strong></p>
<ol>
<li><p>Atomic Transactions: I've wrapped the critical sections of my code that involve database operations within Django's atomic transactions using the <code>@transaction.atomic</code> decorator or transaction.atomic context manager.</p>
</li>
<li><p>Row-Level Locking: I've utilized the <code>select_for_update</code> method to lock the rows in the Account table during reads, ensuring that concurrent transactions don't interfere with each other's updates.</p>
</li>
</ol>
<p>Despite implementing these measures, I'm still encountering race conditions, leading to incorrect balance calculations and data inconsistency.</p>
<p><strong>Additional Context</strong>:</p>
<p>I'm using a PostgreSQL database for my Django application.
The race conditions seem to occur when multiple transactions attempt to update the balances of related accounts simultaneously.
I've verified that the critical sections of my code are indeed encapsulated within atomic transactions and that <code>select_for_update</code> is applied appropriately.</p>
<p><strong>Question</strong>:
What additional steps or considerations should I take to mitigate race conditions in my Django application? Are there any common pitfalls or overlooked factors that might be contributing to this issue despite using atomic transactions and row-level locking? Any insights or recommendations would be greatly appreciated.</p>
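<p>One direction I'm evaluating, independent of the locking: pushing the arithmetic into the UPDATE statement itself (Django's <code>F('balance') + amount</code>), so there is no read-modify-write window at all. The idea in raw SQL, sketched here with stdlib sqlite3 as a stand-in for PostgreSQL:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100)")

def apply_transaction(amount: int) -> None:
    # The database computes balance + amount atomically inside the UPDATE,
    # instead of Python reading the balance and writing a stale value back.
    conn.execute("UPDATE account SET balance = balance + ? WHERE id = 1", (amount,))

for amt in (10, -30, 5):
    apply_transaction(amt)

balance = conn.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
```

<p>In Django this would be <code>Account.objects.filter(id=pk).update(balance=F("balance") + amount)</code> — but I'd like to understand whether that is the right mitigation or just papering over the real cause.</p>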
|
<python><django><postgresql><transactions><race-condition>
|
2024-03-01 13:55:16
| 1
| 1,209
|
l.b.vasoya
|
78,088,133
| 13,086,128
|
All columns to uppercase in a DataFrame?
|
<p>Suppose there are hundreds of columns in a DataFrame. Some of the column names are in lower case and some are in upper case.</p>
<p>Now, I want to convert all the column names to upper case.</p>
<pre><code>import polars as pl
df = pl.DataFrame({
"foo": [1, 2, 3, 4, 5, 8],
"baz": [5, 4, 3, 2, 1, 9],
})
</code></pre>
<p>What I tried:</p>
<pre><code>df.columns = [x.upper() for x in df.columns]
</code></pre>
<p>It worked, but is there any other way, preferably without a for loop?</p>
|
<python><python-3.x><dataframe><python-polars>
|
2024-03-01 13:38:12
| 2
| 30,560
|
Talha Tayyab
|
78,088,123
| 4,593,642
|
tesseract opens console with pyinstaller
|
<p>I have a program that does ocr using tesseract and pyqt5</p>
<p><strong>thread.py</strong></p>
<pre><code>class SearchThread(QThread, QObject):
signal = pyqtSignal(str)
finished = pyqtSignal()
def __init__(self, data):
super(QThread, self).__init__()
super(QObject, self).__init__()
self.data = data
def search(self):
pdf_files = PDF.get_pdf_files(self.data['folder'])
for pdf in pdf_files:
if pdf.search(self.data['query']):
self.signal.emit(pdf.pdf_path)
self.finished.emit()
def run(self):
try:
self.search()
except Exception as e:
            self.signal.emit(f'Error : {str(e)}')
            raise
</code></pre>
<p><strong>util.py</strong></p>
<pre><code>class PDF:
    def __init__(self, pdf_path):
        self.pdf_path = pdf_path

    def search(self, query):
        pytesseract.pytesseract.tesseract_cmd = OCR_EXEC_PATH
        pytesseract_config = f'--tessdata-dir "{OCR_DATA_PATH}"'
        images = pdf2image.convert_from_path(self.pdf_path)
        for image in images:
            text = pytesseract.image_to_string(image, lang='ara', config=pytesseract_config)
            for keyword in query.split():
                if keyword in text:
                    return True

    @staticmethod
    def get_pdf_files(folder):
        path = pathlib.Path(folder)
        return [PDF(str(file.resolve())) for file in path.glob('**/*.pdf')]
</code></pre>
<p><strong>ui.py</strong></p>
<pre><code>class UI(QMainWindow):
    def __init__(self):
        super(UI, self).__init__()
        self.search_thread = None
        try:
            import os, sys
            os.chdir(sys._MEIPASS)
        except:
            pass
        loadUi(DESIGNER_FILE, self)
        ....
        self.show()

    def on_search_btn_click(self):
        if self.search_thread and self.search_thread.isRunning():
            return
        else:
            data = {'query': self.query.text().lower().strip(), 'folder': self.folder.text()}
            self.search_thread = SearchThread(data)
            self.search_thread.signal.connect(self.on_signal_received)
            self.search_thread.finished.connect(self.on_finished)
            self.search_thread.start()

    def on_signal_received(self, value):
        self.add_item(value)

    def on_finished(self, value):
        self.cancel_btn.setEnabled(False)
</code></pre>
<p><strong>main.py</strong></p>
<pre><code>if __name__ == '__main__':
    app = QApplication(argv)
    window = UI()
    app.exec_()
</code></pre>
<p><strong>build.bat</strong></p>
<p><code>pyinstaller --noconsole --onefile --name ArchiveSearch --distpath . --add-data="data:data" main.py</code></p>
<p>Every time a pdf is processed, a console window opens and closes in less than a second when running as an exe. When running directly from python it works fine.</p>
|
<python><pyqt5><pyinstaller><python-tesseract>
|
2024-03-01 13:36:56
| 0
| 2,339
|
Amine Messaoudi
|
78,088,003
| 534,238
|
How to get IDEs (VS Code) to work with syntax highlighting with Protobufs (Python)
|
<p>I am using VS Code, but I suspect the following problem exists across all IDEs.</p>
<hr />
<p>When compiling a protobuf file (<code>.proto</code>) to Python (<code>_pb2.py</code>), the Messages <em>will be available as classes, but Google's mechanism to expose the classes is difficult for IDEs to discover</em>.</p>
<p>For instance, with the following toy example</p>
<p><code>person.proto</code></p>
<pre><code>syntax = "proto3";
message Person {
  string name = 1;
  string address = 2;
  int32 age = 3;
}
</code></pre>
<p>if I run</p>
<pre class="lang-bash prettyprint-override"><code>> protoc -I=. --python_out=. person.proto
</code></pre>
<p>it will generate a <code>person_pb2.py</code> file that will have a <code>Person</code> class that I can create an object from, looking something like this:</p>
<pre class="lang-py prettyprint-override"><code>from person_pb2 import Person
me = Person(name="me", address="heaven", age=2)
</code></pre>
<hr />
<p>But the line <code>from person_pb2 import Person</code> will be highlighted as an error.</p>
<p>Specifically, in VS Code with Pylance installed, it will say</p>
<blockquote>
<p>"Person" is unknown import symbol <em>[Pylance (reportAttributeAccessIssue)]</em></p>
<p>(import) Person: Unknown</p>
</blockquote>
<p>The reason for this is clear: if I were to look at the <code>person_pb2.py</code> file <strong>there is no top level object <code>Person</code> exported</strong>! This will be true for any protobuf messages, at any depth. They all get mangled into complicated objects and then populated to the runtime via</p>
<pre class="lang-py prettyprint-override"><code>_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'person_pb2', globals())
</code></pre>
<p>This is a very strange edge case for populating the environment that I wouldn't really expect the IDEs to discover.</p>
<p>So, what can I do - or what do most people do - to have the IDE properly identify protobufs?</p>
<p>For now, I am just littering all my code with <code># type: ignore</code> annotation, which is not ideal, since I'm "solving" the problem by just turning off the tool. Is there a better option?</p>
<hr />
<p>[To be clear, I currently "solve" this by doing things like <code>from person_pb2 import Person # type: ignore</code> in order to turn off Pylance's complaints. But I actually like Pylance's advice... the problem is that Google is hiding its exposed objects through this weird <code>globals()</code> trick.]</p>
|
<python><ide><protocol-buffers>
|
2024-03-01 13:14:02
| 1
| 3,558
|
Mike Williamson
|
78,087,955
| 11,222,963
|
Key Error when slicing in Pandas, but column exists?
|
<p>My columns:</p>
<pre><code>
df = pd.read_excel('data.xlsx')
print(df.columns)
Index(['ID', 'Start time', 'Completion time', 'Email', 'Name',
'Last modified time', 'Report #1 Name', 'Type of Report',
'Location of Report', 'Purpose of Report', 'Report Owner / Creator',
'How often do you use this report?', 'Report #2 Name',
'Type of Report2', 'Location of Report2', 'Purpose of Report2',
'Report Owner / Creator2', 'How often do you use this report?2',
'Report #3 Name', 'Type of Report3', 'Location of Report3',
'Purpose of Report3', 'Report Owner / Creator3',
'How often do you use this report?3'],
dtype='object')
</code></pre>
<p>Let's check the data for column <code>Report #2 Name</code></p>
<pre><code>print(df['Report #2 Name'])
KeyError Traceback (most recent call last)
~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
3360 try:
-> 3361 return self._engine.get_loc(casted_key)
3362 except KeyError as err:
~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'Report #2 Name'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_14316\2316622378.py in <module>
----> 1 df['Report #2 Name']
~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\pandas\core\frame.py in __getitem__(self, key)
3456 if self.columns.nlevels > 1:
...
-> 3363 raise KeyError(key) from err
3364
3365 if is_scalar(key) and isna(key) and not self.hasnans:
KeyError: 'Report #2 Name'
</code></pre>
<p>The same error occurs for <code>df['Report #3 Name']</code></p>
<p>This works fine with no error:</p>
<pre><code>df['Report #1 Name']
df['Purpose of Report2']
df['Type of Report3']
</code></pre>
<p>What's going on here? I thought the hash # symbol might be causing an error, but <code>df['Report #1 Name']</code> works fine.</p>
<p>edit: some more data for Panda Kim below in comments:</p>
<pre><code>print('{}, {}'.format(df.columns, df.get('Report #2 Name', 'don exist')))
Index(['ID', 'Start time', 'Completion time', 'Email', 'Name',
'Last modified time', 'Report #1 Name', 'Type of Report',
'Location of Report', 'Purpose of Report', 'Report Owner / Creator',
'How often do you use this report?', 'Report #2 Name',
'Type of Report2', 'Location of Report2', 'Purpose of Report2',
'Report Owner / Creator2', 'How often do you use this report?2',
'Report #3 Name', 'Type of Report3', 'Location of Report3',
'Purpose of Report3', 'Report Owner / Creator3',
'How often do you use this report?3'],
dtype='object'), don exist
</code></pre>
<p>Some more tests from comments below:</p>
<pre><code>df.columns.union(['Report #2 Name'], sort=False)
Index(['ID', 'Start time', 'Completion time', 'Email', 'Name',
'Last modified time', 'Report #1 Name', 'Type of Report',
'Location of Report', 'Purpose of Report', 'Report Owner / Creator',
'How often do you use this report?', 'Report #2 Name',
'Type of Report2', 'Location of Report2', 'Purpose of Report2',
'Report Owner / Creator2', 'How often do you use this report?2',
'Report #3 Name', 'Type of Report3', 'Location of Report3',
'Purpose of Report3', 'Report Owner / Creator3',
'How often do you use this report?3', 'Report #2 Name'],
dtype='object')
</code></pre>
<p>And</p>
<pre><code>df.columns[12] == 'Report #2 Name'
False
</code></pre>
<p>And this one seems to be the culprit:</p>
<pre><code>df.columns[12].encode('utf-8')
b'Report #2\xc2\xa0Name'
</code></pre>
|
<python><pandas>
|
2024-03-01 13:07:56
| 1
| 3,416
|
SCool
|
78,087,811
| 2,938,491
|
Unable to deploy function on Azure Function App requiring pytorch
|
<p>I am unable to make a successful deployment of an Azure Function when I require torch for my service. Earlier, I could deploy if I commented out torch from the requirements.txt file, as explained after the following update section:</p>
<p><strong>UPDATE:</strong> I partially solved it. The problem is with the dependencies listed in the requirements.txt file. I am using spacy and downloading the 'en_core_web_sm' model. So I added the following line to the requirements.txt file and it was able to load en_core_web_sm as well:</p>
<pre><code>https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1.tar.gz
</code></pre>
<p>Now, the basic functionality is working. But for one of the services I need torch. If I comment out this service and torch from the requirements.txt file, then the deployment is successful and I can access the endpoints. As soon as I try to install torch, the deployment fails with an error in the output:
<a href="https://i.sstatic.net/Y1zU0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y1zU0.png" alt="enter image description here" /></a></p>
<p>I am installing torch by having this line in requirements.txt file:</p>
<pre><code>torch==2.2.0 --index-url https://download.pytorch.org/whl/cpu
</code></pre>
<p><strong>Question</strong> now: Does it have to do with the size of the torch package? (I am running the function in an Azure App Service Plan with pricing plan = Y1)</p>
<p>My app looks like this (one function shown here) in the function_app.py file:</p>
<pre><code>@app.function_name(name='Tokenizer')
@app.route(route='tokenization')
def tokenization(req: func.HttpRequest) -> func.HttpResponse:
    if req.method == 'POST':
        logging.info("Processing POST Request ... ")
        return _post_handler_tokenization(req)
    elif req.method == 'GET':
        logging.info("Processing GET Request ... ")
        return _get_handler_tokenization(req)
    return func.HttpResponse('[INFO] Service isnt ready', mimetype='text/plain', status_code=200)
</code></pre>
<p>I am able to send and get HTTP req/res locally as shown below:
<a href="https://i.sstatic.net/9xtWl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9xtWl.png" alt="enter image description here" /></a></p>
<p><em><strong>====NOTE====: The problem below was solved by commenting out torch (as discussed above) and providing correct spacy dependency in requirements.txt file!</strong></em></p>
<p>Now when I try to deploy this function to an Azure Function App, I am able to deploy it, but the function doesn't appear on Azure, as shown below:</p>
<p><a href="https://i.sstatic.net/KDpNz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KDpNz.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/TORjT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TORjT.png" alt="enter image description here" /></a></p>
<p>I restarted the Azure Function and tried the deployment again, but the function is not accessible. In the output section I saw logs, and I sense a problem. I was expecting it to detect the http trigger URLs, as I saw on a sample azure function deployment, but I see this:</p>
<p><a href="https://i.sstatic.net/iikLa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iikLa.png" alt="enter image description here" /></a></p>
<p>And I also tried to do deployment via terminal using this command:</p>
<blockquote>
<p>func azure functionapp publish <Azure_Function_App_Name> --build remote</p>
</blockquote>
<p>And the result on terminal was this:</p>
<p><a href="https://i.sstatic.net/SJIbX.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SJIbX.jpg" alt="enter image description here" /></a></p>
<p>But the function doesn't show up on Azure Function App. Can anyone help me where I am going wrong? Thanks</p>
|
<python><azure><azure-functions>
|
2024-03-01 12:37:20
| 1
| 683
|
Amir
|
78,087,806
| 874,829
|
Unexpected numpy image reshape into grid problem
|
<p>I'm trying to reshape a numpy array image into a grid image of 16x16 elements per cell. I think that</p>
<pre><code>grid_image = image.reshape((h//16, w//16, -1))
</code></pre>
<p>should do the trick. In the example below grid_image has a (1, 2, 256) shape. However, when I compare elements grid_image[0,0] with the elements in image[:16, :16].ravel() they are not the same. The code below prints False. My logic must be wrong but I don't know where. any help?</p>
<pre><code>import numpy as np
h,w = 16,32
image = np.random.rand(h, w)
grid_image = image.reshape((h//16, w//16, -1))
print((image[:16, :16].ravel() == grid_image[0,0]).all())
</code></pre>
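<p>For context, a plain <code>reshape</code> keeps elements in row-major order, so <code>grid_image[0,0]</code> ends up holding the first 256 elements of the flattened image rather than the top-left 16x16 block. A common blocking pattern (a sketch, assuming square 16x16 cells) inserts a <code>swapaxes</code> between two reshapes:</p>

```python
import numpy as np

h, w, bs = 16, 32, 16
image = np.random.rand(h, w)

# split rows into (h//bs, bs) and columns into (w//bs, bs),
# then bring the two block indices to the front
blocks = image.reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
grid_image = blocks.reshape(h // bs, w // bs, -1)

# now cell (0, 0) really is the top-left 16x16 block
print((grid_image[0, 0] == image[:bs, :bs].ravel()).all())  # True
```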
|
<python><numpy>
|
2024-03-01 12:36:54
| 1
| 2,784
|
martinako
|
78,087,630
| 5,726,057
|
Are there conventional UDF in Spark 3.5.0 not using arrow?
|
<p>I have used Pandas UDFs and I am aware that they use pyarrow to avoid object translation between the JVM and the Python interpreter running in the workers. I am also aware of the property <code>spark.sql.execution.arrow.enabled</code> that can be set to <code>true</code> in order to optimize interactions between Pandas DFs and Spark DFs. My questions are:</p>
<ul>
<li>Are there still conventional UDFs not affected by pyarrow in pyspark 3.5.0?</li>
<li>Is it necessary to set <code>spark.sql.execution.arrow.enabled</code> to <code>true</code> in order to use <strong>Pandas UDF</strong> and avoid converting JVM objects <-> Python objects?</li>
<li>When the property is enabled, does it mean that even with conventional UDFs, there is no conversion between Java and Python? Or the improvement is only for Pandas UDF specifically, but not for general UDFs? (this is basically a re-word of my first question)</li>
</ul>
|
<python><apache-spark><pyspark><user-defined-functions><pyarrow>
|
2024-03-01 12:06:56
| 0
| 368
|
Pablo
|
78,087,560
| 11,809,811
|
Selenium in a for loop reuses first value
|
<p>I want to use selenium to check multiple accounts on a website.</p>
<p>I have a url and then a list with the data</p>
<pre><code>url = 'website.com'
data = [(username1, password1), (username2, password2), (username3, password3)]
</code></pre>
<p>And then I am using a for loop to get the data:</p>
<pre><code>for username, password:
driver = webdriver.Chrome()
driver.get(url)
username = driver.find_element(By.ID, "username")
password = driver.find_element(By.ID, "password")
username.send_keys(username)
password.send_keys(password)
# etc
</code></pre>
<p>The problem is that selenium always uses the first value pair of the list; hence I know the code works but I only get the right data for the first user.</p>
<p>Do I need to reset the webdriver or something?</p>
|
<python><selenium-webdriver><web-scraping>
|
2024-03-01 11:55:27
| 1
| 830
|
Another_coder
|
78,087,546
| 1,714,692
|
Type hint a Python dictionary having integers as keys
|
<p>Suppose I have a function that returns a dictionary having integers as keys and instances of a defined custom class as values:</p>
<pre><code>from typing import Dict
class A:
    a = 4

def my_f() -> dict:
    return {0: A()}
</code></pre>
<p>I want to type hint the type of keys and values. Many posts suggest to use <code>TypedDict</code> but this is not possible when using integers as keys.</p>
<p>How can I type hint a dictionary with integers as keys?</p>
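<p>For illustration, the generic <code>Dict[int, A]</code> form (or the builtin <code>dict[int, A]</code> on Python 3.9+) accepts arbitrary key types, unlike <code>TypedDict</code>. A minimal sketch:</p>

```python
from typing import Dict

class A:
    a = 4

def my_f() -> Dict[int, A]:  # keys are ints, values are A instances
    return {0: A()}

result = my_f()
assert isinstance(result[0], A)
```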
|
<python><python-typing>
|
2024-03-01 11:53:23
| 1
| 9,606
|
roschach
|
78,087,517
| 11,749,309
|
Is this the most performant way to rename a Polars DF column?
|
<h2>Issue:</h2>
<p>I have a column name that can change its prefix and suffix based on some function arguments, but there is a section of the column name that is always the same. I need to rename that column to something easy for reference in a different workflow. I am in search of the quickest way to find the column I am looking for and rename it to my desired name.</p>
<p>I am using a for loop to check if the part of the string is in each column, but I don't think that this is the most performant way to rename a column based on regex filtering.</p>
<h2>Solution + Reprex</h2>
<p>This is what I have come up with:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
data = pl.DataFrame({
    "foo": [1, 2, 3, 4, 5],
    "bar": [5, 4, 3, 2, 1],
    "std_volatility_pct_21D": [0.1, 0.2, 0.15, 0.18, 0.16]
})

for col in data.columns:
    if "volatility_pct" in col:
        new_data = data.rename({col: "realized_volatility"})
</code></pre>
<h2>Performance</h2>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import polars.selectors as cs
data = pl.DataFrame(
    {
        "foo": [1, 2, 3, 4, 5],
        "bar": [5, 4, 3, 2, 1],
        "std_volatility_pct_21D": [0.1, 0.2, 0.15, 0.18, 0.16],
    }
)

# 1
def rename_volatility_column(data):
    for col in data.columns:
        if "volatility_pct" in col:
            return data.rename({col: "realized_volatility"})
    return data

# 2
def adjust_volatility_column(data):
    return data.select(
        ~cs.contains("volatility_pct"),
        cs.contains("volatility_pct").alias("realized_volatility"),
    )

%timeit rename_volatility_column(data)
%timeit adjust_volatility_column(data)

# 3
%timeit data.rename(lambda col: "realized_volatility" if "volatility_pct" in col else col)
</code></pre>
<pre><code>#1
18.8 µs ± 636 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
#2
330 µs ± 11.7 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
#3
133 µs ± 7.71 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
</code></pre>
|
<python><dataframe><for-loop><rename><python-polars>
|
2024-03-01 11:46:42
| 2
| 373
|
JJ Fantini
|
78,087,223
| 5,195,209
|
"psycopg.OperationalError: sending query and params failed: another command is already in progress" when trying to update table
|
<p>I am trying to efficiently update a row in a postgresql table. For that I use a temp table and try to use <code>COPY</code>. My code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>conn = psycopg.connect(
    host="localhost", dbname=DB_NAME, user=DB_USER, password=DB_PASSWORD
)

with conn.cursor(name="wordfreq_cursor") as cur, conn.cursor() as ins_cur:
    ins_cur.execute("CREATE TEMP TABLE temp_frequency(id INTEGER NOT NULL, frequency FLOAT4) ON COMMIT DROP")
    cur.itersize = 20_000
    cur.execute("SELECT id, lang_code, word FROM etymology LIMIT 3000")  # The limit is only for debugging
    i = 1
    pbar = tqdm(total=20_000_000)
    with ins_cur.copy("COPY temp_frequency (id, frequency) FROM STDIN") as copy:
        for row in cur:
            id, lang_code, word = row
            if lang_code in langs:  # langs is a set of strings
                frequency = zipf_frequency(word, lang_code)
                copy.write_row((id, frequency))
            pbar.update(1)
        ins_cur.execute("UPDATE etymology e SET e.frequency = t.frequency FROM temp_frequency t WHERE e.id = t.id")
        i += 1
    conn.commit()
</code></pre>
<p>However, I am hitting a <code>psycopg.OperationalError: sending query and params failed: another command is already in progress</code>. When googling this error I only find issues related to concurrency, but in my case I don't use multithreading at all.</p>
<p>How can I fix this error?</p>
<p>I am using Windows 10, Python 3.12, psycopg 3.1.18. The full stack trace is as follows:</p>
<pre><code>Traceback (most recent call last):
File "c:\Users\hanne\Documents\Programme\ultimate-dictionary-api\ebook_dictionary_creator\ebook_dictionary_creator\add_wordfreq_to_db.py", line 75, in <module>
temp_table_solution()
File "c:\Users\hanne\Documents\Programme\ultimate-dictionary-api\ebook_dictionary_creator\ebook_dictionary_creator\add_wordfreq_to_db.py", line 70, in temp_table_solution
ins_cur.execute("UPDATE etymology e SET e.frequency = t.frequency FROM temp_frequency t WHERE e.id = t.id")
File "C:\Users\hanne\Documents\Programme\ultimate-dictionary-api\ebook_dictionary_creator\.venv\Lib\site-packages\psycopg\cursor.py", line 732, in execute
raise ex.with_traceback(None)
psycopg.OperationalError: sending query failed: another command is already in progress
0%| | 3000/20000000 [00:04<7:26:13, 746.90it/s]
PS C:\Users\hanne\Documents\Programme\ultimate-dictionary-api> & C:/Users/hanne/Documents/Programme/ultimate-dictionary-api/ebook_dictionary_creator/.venv/Scripts/python.exe c:/Users/hanne/Documents/Programme/ultimate-dictionary-api/ebook_dictionary_creator/ebook_dictionary_creator/add_wordfreq_to_db.py
0%| | 0/20000000 [00:00<?, ?it/s]Traceback (most recent call last):
File "c:\Users\hanne\Documents\Programme\ultimate-dictionary-api\ebook_dictionary_creator\ebook_dictionary_creator\add_wordfreq_to_db.py", line 75, in <module>
temp_table_solution()
File "c:\Users\hanne\Documents\Programme\ultimate-dictionary-api\ebook_dictionary_creator\ebook_dictionary_creator\add_wordfreq_to_db.py", line 64, in temp_table_solution
for row in cur:
File "C:\Users\hanne\Documents\Programme\ultimate-dictionary-api\ebook_dictionary_creator\.venv\Lib\site-packages\psycopg\server_cursor.py", line 332, in __iter__
recs = self._conn.wait(self._fetch_gen(self.itersize))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hanne\Documents\Programme\ultimate-dictionary-api\ebook_dictionary_creator\.venv\Lib\site-packages\psycopg\connection.py", line 969, in wait
return waiting.wait(gen, self.pgconn.socket, timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hanne\Documents\Programme\ultimate-dictionary-api\ebook_dictionary_creator\.venv\Lib\site-packages\psycopg\waiting.py", line 228, in wait_select
s = next(gen)
^^^^^^^^^
File "C:\Users\hanne\Documents\Programme\ultimate-dictionary-api\ebook_dictionary_creator\.venv\Lib\site-packages\psycopg\server_cursor.py", line 173, in _fetch_gen
res = yield from self._conn._exec_command(query, result_format=self._format)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hanne\Documents\Programme\ultimate-dictionary-api\ebook_dictionary_creator\.venv\Lib\site-packages\psycopg\connection.py", line 467, in _exec_command
self.pgconn.send_query_params(command, None, result_format=result_format)
File "psycopg_binary\\pq/pgconn.pyx", line 276, in psycopg_binary.pq.PGconn.send_query_params
psycopg.OperationalError: sending query and params failed: another command is already in progress
</code></pre>
|
<python><postgresql><psycopg3>
|
2024-03-01 10:48:03
| 1
| 587
|
Pux
|
78,087,188
| 3,358,488
|
Is Python's copyreg supposed to be used in regular Python code?
|
<p>Python's <a href="https://docs.python.org/3/library/copyreg.html" rel="nofollow noreferrer">copyreg documentation</a> looks like it's just a way to customize the pickling of specific classes defined in Python.</p>
<p>However, <a href="https://github.com/python/cpython/blob/010aac7c1a43afd63b4c4019c4f217f1e3a72689/Lib/copyreg.py#L3" rel="nofollow noreferrer">a comment in its source code</a> says that "This is only useful to add pickle support for extension types defined in C, not for instances of user-defined classes."</p>
<p>Is that really true? If so, it is odd that it's not mentioned in the actual documentation page.</p>
<p>If it is true, can it actually be used for customizing the pickling of user-defined Python classes?</p>
|
<python><pickle>
|
2024-03-01 10:41:59
| 1
| 5,872
|
user118967
|
78,087,152
| 5,618,856
|
pandas compare dataframes and write/read result to/from excel - how to get the compare format back into a dataframe
|
<p>How can I read the pandas.dataframe.compare result back from excel into an equivalent dataframe?</p>
<p>If I do</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df1 = pd.DataFrame({
    'A': [1, 2, 3],
    'B': [4, 5, 6]
})
df2 = pd.DataFrame({
    'A': [1, 2, 4],  # Changed here, '3' to '4'
    'B': [4, 5, 6]
})

df1.compare(df2, keep_equal=True, keep_shape=True)
</code></pre>
<p>I get a nice comparison and can write it to excel:</p>
<p><a href="https://i.sstatic.net/5ayHy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5ayHy.png" alt="" /></a></p>
<p>But how can I read this excel into a dataframe and get the same layout? <code>dfd = pd.read_excel(tmp_file,header=[0,1])</code> does not work (see <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html" rel="nofollow noreferrer">docs</a>).</p>
|
<python><pandas><excel><dataframe>
|
2024-03-01 10:36:57
| 1
| 603
|
Fred
|
78,086,980
| 885,770
|
Merging multiple dataframes (time series)
|
<p>I'm trying to merge a list of dataframes.
It's a time series; each dataframe has the same columns.
They contain the same id points; the only variables that change are the <code>utc</code> and <code>temp</code>.</p>
<p>df1</p>
<pre><code>id name2 geom utc temp
140826 AAA140826 POLYGON ((...)) 2010-07-01T00:00:00.000000000 15.3
140827 AAA140827 POLYGON ((...)) 2010-07-01T00:00:00.000000000 17.3
140828 AAA140828 POLYGON ((...)) 2020-07-01T00:00:00.000000000 10.0
</code></pre>
<p>df2</p>
<pre><code>id name2 geom utc temp
140826 AAA140826 POLYGON ((...)) 2010-08-01T00:00:00.000000000 11.3
140827 AAA140827 POLYGON ((...)) 2010-08-01T00:00:00.000000000 10.3
140828 AAA140828 POLYGON ((...)) 2010-08-01T00:00:00.000000000 12.0
</code></pre>
<p>df3</p>
<pre><code>id name2 geom utc temp
140826 AAA140826 POLYGON ((...)) 2010-09-01T00:00:00.000000000 13.3
140827 AAA140827 POLYGON ((...)) 2010-09-01T00:00:00.000000000 18.3
140828 AAA140828 POLYGON ((...)) 2010-09-01T00:00:00.000000000 12.0
</code></pre>
<p>(...)</p>
<p>My output should be something like this:</p>
<pre><code>id name2 geom utc temp
140826 AAA140826 POLYGON ((...)) 2010-07-01T00:00:00.000000000 15.3
140826 AAA140826 POLYGON ((...)) 2010-08-01T00:00:00.000000000 11.3
140826 AAA140826 POLYGON ((...)) 2010-09-01T00:00:00.000000000 13.0
140827 AAA140827 POLYGON ((...)) 2010-07-01T00:00:00.000000000 17.3
140827 AAA140827 POLYGON ((...)) 2010-08-01T00:00:00.000000000 10.3
140827 AAA140827 POLYGON ((...)) 2010-09-01T00:00:00.000000000 18.0
140828 AAA140828 POLYGON ((...)) 2010-07-01T00:00:00.000000000 10.0
140828 AAA140828 POLYGON ((...)) 2010-08-01T00:00:00.000000000 12.0
140828 AAA140828 POLYGON ((...)) 2010-09-01T00:00:00.000000000 12.0
</code></pre>
<p>I did something similar previously, but each dataframe had a different variable.
I tried to use pd.merge and merge one by one, but I get the error <code>Passing 'suffixes' which cause duplicate columns {'temp_x'} is not allowed</code>.</p>
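<p>Since every frame shares the same columns and only <code>utc</code> and <code>temp</code> vary, row-wise concatenation rather than merging may be the simpler fit. A sketch with simplified, hypothetical data:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"id": [140826, 140827], "utc": ["2010-07-01", "2010-07-01"], "temp": [15.3, 17.3]})
df2 = pd.DataFrame({"id": [140826, 140827], "utc": ["2010-08-01", "2010-08-01"], "temp": [11.3, 10.3]})

# stack all frames row-wise, then order by point id and timestamp;
# this avoids the suffix clash that merge raises on identical columns
out = (
    pd.concat([df1, df2], ignore_index=True)
    .sort_values(["id", "utc"])
    .reset_index(drop=True)
)
```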
<p>Thank you everybody for your help!</p>
|
<python><pandas>
|
2024-03-01 10:07:03
| 0
| 1,971
|
Gago-Silva
|
78,086,896
| 13,242,312
|
SHA1 implementation differs from builtin SHA1
|
<p>I'm trying to reimplement the SHA1 algorithm in Python, but the result of my function differs from the one in the builtin hashlib. At first I thought the problem was in my rotate function, but after fixing it there is still a difference in the outputs. Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>from hashlib import sha1 as builtin_sha1
def rotl32(value: int, count: int) -> int:
    return ((value << count) | (value >> (32 - count))) & 0xffffffff


def default_sha1(data: bytes) -> bytes:
    return builtin_sha1(data).digest()


def sha1(data: bytes) -> bytes:
    # initialize variables
    h0 = 0x67452301
    h1 = 0xefcdab89
    h2 = 0x98badcfe
    h3 = 0x10325476
    h4 = 0xc3d2e1f0

    msg_len = len(data)
    # append 0x80
    data += b"\x80"
    # append 0x00 until msg_len % 64 == 56
    data += b"\x00" * ((56 - msg_len % 64) % 64)
    # append bit length as 64-bit big-endian integer
    data += (msg_len * 8).to_bytes(8, "big")
    # get the new length (now a multiple of 64)
    msg_len = len(data)

    for i in range(0, msg_len, 64):
        # for each chunk of 64 bytes
        # break the chunk into sixteen 32-bit big-endian words
        words = [int.from_bytes(data[i + j:i + j + 4], "big")
                 for j in range(0, 64, 4)]
        # extend the sixteen 32-bit words into eighty 32-bit words
        for j in range(16, 80):
            words.append(
                rotl32((words[j - 3] ^ words[j - 8] ^ words[j - 14] ^ words[j - 16]), 1)
            )

        # initialize hash value for this chunk
        a = h0
        b = h1
        c = h2
        d = h3
        e = h4

        for j in range(80):
            if 0 <= j <= 19:
                f = (b & c) | ((~b) & d)
                k = 0x5a827999
            elif 20 <= j <= 39:
                f = b ^ c ^ d
                k = 0x6ed9eba1
            elif 40 <= j <= 59:
                f = (b & c) | (b & d) | (c & d)
                k = 0x8f1bbcdc
            else:  # 60 <= j <= 79
                f = b ^ c ^ d
                k = 0xca62c1d6
            temp = (rotl32(a, 5) + f + e + k + words[j]) & 0xffffffff
            e = d
            d = c
            c = rotl32(b, 30)
            b = a
            a = temp

        # add this chunk's hash to result so far
        h0 = (h0 + a) & 0xffffffff
        h1 = (h1 + b) & 0xffffffff
        h2 = (h2 + c) & 0xffffffff
        h3 = (h3 + d) & 0xffffffff
        h4 = (h4 + e) & 0xffffffff

    # produce the final hash value
    return ((h0 << 128) | (h1 << 96) | (h2 << 64) | (h3 << 32) | h4).to_bytes(20, "big")


if __name__ == "__main__":
    assert(sha1(b"hello") == default_sha1(b"hello"))  # diff
</code></pre>
<p>Maybe it's an endianness problem, but I use big endian everywhere. I used the pseudocode on Wikipedia as a reference: <a href="https://en.wikipedia.org/wiki/SHA-1" rel="nofollow noreferrer">SHA1</a>.</p>
<p>EDIT: I was appending the byte length of the message instead of the bit length. The code above is now corrected and appends the bit length, but it still differs from the builtin implementation.</p>
|
<python><hash><sha1>
|
2024-03-01 09:51:49
| 1
| 1,463
|
Fayeure
|
78,086,853
| 6,207,773
|
python pg8000 unable to connect: no pg_hba.conf entry for host
|
<p>I am using pg8000==1.30.5 in python 3.12</p>
<p>I was migrating from psycopg2</p>
<p>But on connecting I got an error:</p>
<pre><code>pg8000.Connection(self.db_username, database=self.schema_name, host=self.db_host,
                  password=self.db_password,
                  port=self.db_port)
</code></pre>
<p>The error is the following:</p>
<pre><code>pg8000.exceptions.DatabaseError: {'S': 'FATAL', 'V': 'FATAL', 'C': '28000', 'M': 'no pg_hba.conf entry for host "xxx", user "xx", database "xx", no encryption', 'F': 'auth.c', 'L': '543', 'R': 'ClientAuthentication'}
</code></pre>
<p>You might think it is a postgres server problem: firewall, my vpn/ip, access control, permissions, etc.</p>
<p>However I was working with "psycopg2" without any problem:</p>
<pre><code>psycopg2.connect(database=self.schema_name, host=self.db_host, user=self.db_username,
                 password=self.db_password,
                 port=self.db_port)
</code></pre>
<p>I cannot understand why one works and the other does not.</p>
|
<python><postgresql><pg8000>
|
2024-03-01 09:43:03
| 0
| 316
|
Lucke
|
78,086,715
| 963,844
|
How to check whether a character belongs to a specific code page?
|
<p>I want to print the character only if it does not belong to a specific code page.</p>
<p>What function can I use for this purpose?</p>
<pre><code>with open('in.txt', 'r', encoding="utf-16-le") as f:
    while True:
        c = f.read(1)
        if not c:
            break
        if not c.isprintable():
            continue
        if not ?????(c):
            print(c)
</code></pre>
<hr />
<p>Version 2:</p>
<pre><code>def is_supported(char, encoding):
    try:
        char.encode(encoding)
    except UnicodeEncodeError:
        return False
    return True

with open('in.txt', 'r', encoding="utf-16-le") as f:
    while True:
        c = f.read(1)
        if not c:
            break
        if not c.isprintable():
            continue
        if is_supported(c, 'cp950'):
            print(c + "(yes)")
        else:
            print(c + "(no)")
</code></pre>
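<p>As a self-contained check of the version 2 approach (with the file handling stripped out), the encode round-trip does distinguish characters inside and outside a code page:</p>

```python
def is_supported(char: str, encoding: str) -> bool:
    # a character belongs to the code page iff it can be encoded with it
    try:
        char.encode(encoding)
    except UnicodeEncodeError:
        return False
    return True

print(is_supported("A", "cp950"))           # True: ASCII is part of cp950
print(is_supported("\U0001F600", "cp950"))  # False: an emoji is not in Big5/cp950
```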
|
<python><python-3.x><unicode><python-unicode><codepages>
|
2024-03-01 09:23:46
| 1
| 3,769
|
CL So
|
78,086,553
| 2,107,030
|
symbol lookup error: undefined symbol: __pow_finite
|
<p>I am trying to run <a href="https://github.com/scottransom/presto/tree/master" rel="nofollow noreferrer">a software</a> that ran flawlessly before upgrading from Ubuntu 20.04 to 22.04, but now I receive the error</p>
<pre><code>symbol lookup error: /Softwares/presto/lib/libpresto.so: undefined symbol: __pow_finite
</code></pre>
<p><a href="https://stackoverflow.com/questions/72363848/cannot-load-log-finite-from-libm-so-6-with-ctypes">Another Stack Overflow post</a> suggests running:</p>
<pre><code>nm -gD libpresto.so | grep __pow_finite
U __pow_finite
</code></pre>
<p><a href="https://stackoverflow.com/questions/62334452/fast-math-cause-undefined-reference-to-pow-finite">This other post</a> says it might be a Clang bug. I quickly checked, and the library doesn't seem to have a <code>.comment</code> section or a similar place where such info is recorded. How do I find out which Clang version the software was compiled with? If it is a version &lt; 10, should I reinstall Clang first and then re-<code>make</code>?</p>
|
<python><clang><shared-libraries><glibc>
|
2024-03-01 08:52:54
| 0
| 2,166
|
Py-ser
|
78,086,541
| 4,106,261
|
Pandas resample behaves differently with SUM and MEAN
|
<pre><code>python==3.8.10
pandas==2.0.3 (cannot update to python >= 3.9, so I am stuck with this version)
</code></pre>
<p>A dataframe contains some hourly data for a year. I filter the data to remove the months from April to October, and then aggregate by DAY. That should mean 151 days are left.</p>
<p><strong>Is there any reason why MEAN and SUM are calculated on a different number of rows?</strong></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_parquet(BASE_DIR + "hour_fw_temp2022.parquet") # load hour data
df = df[(df.index.month < 4)|(df.index.month > 10)] # remove April to October
df = df.resample("1D").mean() # aggregate per day
df.describe()
</code></pre>
<p>returns</p>
<pre><code> temp rh% fw
count 151.000000 151.000000 151.000000 <=== MEAN calculated on 151 days
mean 3.725442 77.780077 96365.618102
std 4.281750 12.593718 21074.110945
min -9.304167 25.500000 44666.666667
25% 1.254167 72.125000 82250.000000
50% 3.420833 80.125000 100083.333333
75% 6.814583 85.937500 109395.833333
max 14.091667 98.041667 166333.333333
</code></pre>
<p>while</p>
<pre class="lang-py prettyprint-override"><code>df = df.resample("1D").sum() # aggregate per day
</code></pre>
<p>returns</p>
<pre><code> temp rh% fw
count 365.000000 365.000000 3.650000e+02 <== SUM calculated on 365 days
mean 36.989041 772.260274 9.567918e+05
std 79.347405 940.838631 1.185907e+06
min -223.300000 0.000000 0.000000e+00 <== removed rows are filled with 0
25% 0.000000 0.000000 0.000000e+00 and all the stats are wrong...
50% 0.000000 0.000000 0.000000e+00
75% 53.600000 1849.000000 2.224000e+06
max 338.200000 2353.000000 3.992000e+06
</code></pre>
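<p>The behavior can be reproduced with a tiny series (a sketch, not the poster's data): resampling creates a bin for every day in the date range; <code>mean()</code> fills empty bins with NaN, which <code>describe()</code> skips, while <code>sum()</code> fills them with 0, which it counts.</p>

```python
import pandas as pd

# Daily data with a gap: values on Jan 1-3, then nothing until Jan 10.
idx = pd.to_datetime(['2022-01-01', '2022-01-02', '2022-01-03', '2022-01-10'])
s = pd.Series([1.0, 2.0, 3.0, 4.0], index=idx)

daily_mean = s.resample('1D').mean()  # empty days -> NaN (skipped by describe)
daily_sum = s.resample('1D').sum()    # empty days -> 0.0 (counted by describe)

print(daily_mean.describe()['count'])  # 4.0  -- NaN rows are excluded
print(daily_sum.describe()['count'])   # 10.0 -- zero-filled rows are included
```

In this pandas version, passing <code>min_count=1</code> to <code>sum()</code> should make empty bins NaN as well (<code>s.resample('1D').sum(min_count=1)</code>), which would align the two counts.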
|
<python><pandas><dataframe><pandas-resample>
|
2024-03-01 08:51:11
| 2
| 2,566
|
Alex Poca
|
78,086,235
| 3,179,698
|
Using exclamation followed by command in jupyter cell and directly using command in terminal has different result, windows system
|
<p>I just noticed this in Windows 11, running Jupyter.</p>
<p>We have the command <code>where pip</code>.</p>
<p>However, running this command in four different places gives different results.</p>
<p>1 I run inside jupyter notebook's cell:</p>
<pre><code>!where pip
</code></pre>
<p>I got pip's location.</p>
<p>2 I run using jupyter's terminal:</p>
<pre><code>where pip
</code></pre>
<p>I got an empty result.</p>
<p>3 I run using windows powershell:</p>
<pre><code>where pip
</code></pre>
<p>Still an empty result.</p>
<p>4 I run using windows cmd:</p>
<pre><code>where pip
</code></pre>
<p>failed with command not found.</p>
<p>So only the Jupyter cell gave useful info. A Jupyter cell prefixed with <code>!</code> should invoke some shell, but none of the shells I mentioned gives the same result. So which specific shell is it using, if it is using one at all? And why can no other terminal on Windows give the same result?</p>
|
<python><powershell><jupyter>
|
2024-03-01 07:47:12
| 0
| 1,504
|
cloudscomputes
|
78,086,220
| 11,793,491
|
Expand a grouped value in pandas
|
<p>I have this dataset:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'prod':['A','A','A','B','B'],'quant':[2,3,4,3,1]})
prod quant
0 A 2
1 A 3
2 A 4
3 B 3
4 B 1
</code></pre>
<p>And I want to aggregate the values by <code>prod</code> but to keep the values as a new column. This is the expected data frame:</p>
<pre class="lang-py prettyprint-override"><code> prod quant added_gr
0 A 2 9
1 A 3 9
2 A 4 9
3 B 3 4
4 B 1 4
</code></pre>
<p>I tried this</p>
<pre class="lang-py prettyprint-override"><code>df['added_gr'] = df.groupby('prod')['quant'].sum()
</code></pre>
<p>But it returns a column of all NaNs. Could you point out what I am doing wrong?</p>
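<p>For context, the usual idiom for broadcasting a group aggregate back onto the original rows is <code>transform</code>, which returns a result aligned to the original index (a sketch on the sample data above):</p>

```python
import pandas as pd

df = pd.DataFrame({'prod': ['A', 'A', 'A', 'B', 'B'],
                   'quant': [2, 3, 4, 3, 1]})

# transform('sum') keeps the original row index, so the assignment aligns
# row by row instead of producing NaNs.
df['added_gr'] = df.groupby('prod')['quant'].transform('sum')
print(df['added_gr'].tolist())  # [9, 9, 9, 4, 4]
```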
|
<python><pandas>
|
2024-03-01 07:43:10
| 1
| 2,304
|
Alexis
|
78,085,989
| 17,729,094
|
Deprecation warning: explicitly call ax.remove() as needed
|
<p>I have the following code that works correctly for me with <code>matplotlib==3.8.2</code>:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
# Random data
r = np.random.randn(1000)
theta = np.random.randn(1000)
fig = plt.figure()
# Ignore all other subplots for MWE
# ...
axc = plt.subplot(236)
ax = plt.subplot(236, projection='polar')
ax.set(xticklabels=[], yticklabels=[])
ax.grid(False)
hist, phi_edges, r_edges = np.histogram2d(theta, r, bins=50)
axc.set_xlim(-r_edges[-1], r_edges[-1])
axc.set_ylim(-r_edges[-1], r_edges[-1])
X, Y = np.meshgrid(phi_edges, r_edges)
pc = ax.pcolormesh(X, Y, hist.T)
cbar = plt.colorbar(pc, ax=[axc, ax], location='right')
cbar.set_label("Counts", rotation=270, labelpad=15)
plt.show()
</code></pre>
<p>But it fails using <code>matplotlib==3.7.5</code> with:</p>
<pre><code>/path/to/scripts/./bin/program.py:108: MatplotlibDeprecationWarning: Auto-removal of overlapping axes is deprecated since 3.6 and will be removed two minor releases later; explicitly call ax.remove() as needed.
ax = plt.subplot(236, projection="polar")
Traceback (most recent call last):
File "/path/to/scripts/./bin/program.py", line 113, in <module>
cbar = fig.colorbar(pc, ax=[ax, axc], location="right")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/scripts/.venv/lib/python3.11/site-packages/matplotlib/figure.py", line 1300, in colorbar
cax, kwargs = cbar.make_axes(ax, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/scripts/.venv/lib/python3.11/site-packages/matplotlib/colorbar.py", line 1434, in make_axes
raise ValueError('Unable to create a colorbar axes as not all '
ValueError: Unable to create a colorbar axes as not all parents share the same figure.
</code></pre>
<p>I need my script to run with Python 3.8.0, and that is why I now need it to run with matplotlib 3.7.5. Can someone suggest how to get this working? I have tried calling <code>ax.remove()</code> and <code>plt.clf()</code> before creating my second axes, but of course that does not work, because I need both axes to draw the colorbar where I need it.</p>
|
<python><matplotlib>
|
2024-03-01 06:52:01
| 0
| 954
|
DJDuque
|
78,085,938
| 23,461,455
|
Pandas pd.read_csv() using single quote as quotechar throws SyntaxError: incomplete input
|
<p>I am currently trying to read in a <code>.csv</code> with the following structure:</p>
<pre><code> samplecsv = """ 'column A', 'column b', 'column c',
'valueA', 'valueb', 'valuec,d',
'valueA', 'valueb', 'valuecd',
'valueA', 'valueb', 'valuecd',
'valueA', 'valueb', 'valuec,d'
"""
</code></pre>
<p>Note that I don't include it as a string; I stored data with the same structure in a <code>.txt</code> file.</p>
<p>I want Python to ignore the comma in <code>'valuec,d'</code> during separation, so that this value is stored in a single column later. I found out that you can pass a <code>quotechar</code> argument so that delimiters inside quotes are ignored, yet I can't get it to run with a single quote. I have tried:</p>
<pre><code>import pandas as pd
DF = pd.read_csv(r'Myfilepath', sep = ',', quotechar =''')
</code></pre>
<p>As well as:</p>
<pre><code>DF = pd.read_csv(r'Myfilepath', sep = ',', quotechar ='''')
</code></pre>
<p>I also tried:</p>
<pre><code>DF = pd.read_csv(r'Myfilepath', sep = ',', quotechar ="'")
</code></pre>
<p>All 3 versions gave me <code>SyntaxError: incomplete input</code>. What am I doing wrong?</p>
<p>Note: The question marked as duplicate only explains the difference between single and double quotes but not how to exclude them in <code>pd.read_csv</code> or why this doesn't work in this case.</p>
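<p>For reference, a minimal in-memory reproduction (my simplification of the sample data, without the leading spaces): the first two attempts are Python string-literal syntax errors, since <code>'''</code> opens a triple-quoted string, while the double-quoted form parses as intended here:</p>

```python
import io
import pandas as pd

samplecsv = ("'column A','column b','column c'\n"
             "'valueA','valueb','valuec,d'\n")

# quotechar="'" wraps the single-quote character in double quotes, which is
# valid Python; the comma inside 'valuec,d' then stays in one field.
df = pd.read_csv(io.StringIO(samplecsv), sep=',', quotechar="'")
print(df.iloc[0].tolist())  # ['valueA', 'valueb', 'valuec,d']
```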
|
<python><pandas><dataframe><csv>
|
2024-03-01 06:39:12
| 1
| 1,284
|
Bending Rodriguez
|
78,085,909
| 31,352
|
How can I see past versions of a Google sheet when I update it via the API
|
<p>I am updating a Google sheet via the api, Python code snippet that I'm using here (this works fine):</p>
<pre><code>sheet = service.spreadsheets()
result = (
sheet.values()
.update(
spreadsheetId=SPREADSHEET_ID,
range="Sheet1!A1:A4",
valueInputOption='RAW',
body=dict(
majorDimension='COLUMNS',
values= [
["Item 3", "Wheel 3", "Door 3", "Engine 3"]
]
)
).execute()
)
</code></pre>
<p>But when I call that repeatedly (with different values, so it <em>is</em> changing the sheet), and I look at the version history for the sheet, there are no versions. How can I make it so I can see all the past versions that I've updated?</p>
|
<python><google-sheets><google-drive-api><google-sheets-api>
|
2024-03-01 06:32:27
| 0
| 748
|
Brad
|
78,085,841
| 12,314,521
|
How to reduce mean ignore padded rows in 3D tensor
|
<p>I have a 3D tensor <code>A</code> with shape <code>(batch_size, N, dim)</code> and a 3D tensor <code>B</code> with shape <code>(batch_size, N, 2)</code>. <code>B</code> contains some padded rows to fill up to N (which are not zero vectors, as they have already passed through some functions). To know which rows are padded, I have to look at tensor <code>B</code>: if row <code>k</code> is padded, the value at row <code>k</code> in tensor <code>B</code> is <code>[0, 0]</code>. I want to filter out these padded rows in <code>A</code> before calculating the mean.</p>
<p>After reducing A to its mean along <code>dim=1</code>, the result has a shape of <code>(batch_size, dim)</code>.</p>
<p>Edit: I figured out one solution. Any other solution is welcome!</p>
<pre><code># squeeze to 2D
A = A.view(-1, A.size(-1))
B = B.view(-1, B.size(-1))
# mask out padded row
mask = torch.sum(B, dim=-1)
mask[mask!=0] = 1
# the point is I need to assign padded row in A by zero
A = mask.unsqueeze(-1)*A
# calculate the real size of each cluster.
mask = mask.view(batch_size, -1)
batch_cluster_size = torch.sum(mask, dim=-1, keepdim=True)
# convert back to original shape
A = A.view(batch_size, -1, A.size(-1))
output = torch.sum(A, dim=1)/batch_cluster_size
</code></pre>
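<p>An alternative sketch that builds the mask without reshaping (assuming, as above, that a row is padded exactly when its entry in <code>B</code> is <code>[0, 0]</code>):</p>

```python
import torch

A = torch.tensor([[[1., 2.], [3., 4.], [5., 6.]]])  # (batch=1, N=3, dim=2)
B = torch.tensor([[[1., 0.], [0., 1.], [0., 0.]]])  # (batch=1, N=3, 2)

# A row is padded iff its B entry is exactly [0, 0]; abs() guards against
# coordinates that would cancel out when summed directly.
mask = (B.abs().sum(dim=-1) != 0).float()            # (batch, N)
summed = (A * mask.unsqueeze(-1)).sum(dim=1)         # (batch, dim)
counts = mask.sum(dim=1, keepdim=True).clamp(min=1)  # avoid division by zero
mean = summed / counts
print(mean)  # mean over the two non-padded rows: [[2., 3.]]
```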
|
<python><pytorch><tensor>
|
2024-03-01 06:16:01
| 1
| 351
|
jupyter
|
78,085,597
| 9,707,286
|
processing hundreds of csv files one row at a time for embedding, upload to pinecone using OpenAI embeddings
|
<p>This is my current code which works for a while and then throws an error of "can't start a new thread." Tried both threading and multi-processing and both cause this error eventually.</p>
<pre><code>def process_file(file_path):
print(f'file: {file_path}')
def process_row(row):
text = row['text']
row2data = row['row2data']
year = row['year']
group_id = row['group_id']
docs = embedder(text, text, year, group_id)
my_index = pc_store.from_documents(docs, embeddings, index_name=PINECONE_INDEX_NAME)
with open(file_path, 'r') as file:
reader = csv.DictReader(file)
for row in reader:
process_row(row)
if __name__ == '__main__':
file_paths = ['file1', 'file2', 'file3']
processes = []
for file_path in file_paths:
p = Process(target=process_file, args=(file_path,))
p.start()
processes.append(p)
for p in processes:
p.join()
</code></pre>
<p>Here is the stack trace of the error:</p>
<pre><code>Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/pool.py", line 215, in __init__
self._repopulate_pool()
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/pool.py", line 329, in _repopulate_pool_static
w.start()
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/dummy/__init__.py", line 51, in start
threading.Thread.start(self)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/threading.py", line 971, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
</code></pre>
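<p>One common mitigation (a sketch, not a verified fix for this setup) is to cap the number of workers with a fixed-size <code>multiprocessing.Pool</code> rather than starting one process per file; note the traceback also shows <code>multiprocessing.dummy</code>, a thread-backed pool, so something in the stack is creating thread pools as well. The body of <code>process_file</code> below is a placeholder for the real per-row embedding and upsert work:</p>

```python
from multiprocessing import Pool

def process_file(file_path):
    # placeholder for the real per-file work (CSV rows -> embeddings -> upsert)
    return len(file_path)

if __name__ == '__main__':
    file_paths = ['file1', 'file2', 'file3']
    # A bounded pool reuses a fixed number of worker processes instead of
    # spawning a new process (and its threads) for every file.
    with Pool(processes=2) as pool:
        results = pool.map(process_file, file_paths)
    print(results)  # [5, 5, 5]
```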
|
<python><multithreading><multiprocessing><langchain><pinecone>
|
2024-03-01 04:51:41
| 0
| 747
|
John Taylor
|
78,085,526
| 15,093,600
|
Simulate stdout in fake subprocess.Popen
|
<p>I would like to test a function, which invokes <code>subprocess.Popen</code> and captures <code>stdout</code>. In particular, I need to test <code>stdout</code> content physically captured in a file on disc without ever calling the actual process.</p>
<p>Sample function:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
def func():
with open('stdout.txt', 'wt') as out:
subprocess.Popen(_cmdline(), stdout=out)
def _cmdline() -> list[str]:
# Or whatever process we want to run
return ['ls']
</code></pre>
<p>Test function:</p>
<pre class="lang-py prettyprint-override"><code>from unittest import mock
def test_captures_stdout():
with mock.patch('subprocess.Popen') as mock_popen:
func()
# Try to intercept stdout and write to it
[(_, kwargs)] = mock_popen.call_args_list
        with kwargs['stdout'] as buf:
            buf.write('some standard output')
with open('stdout.txt') as buf:
assert buf.read() == 'some standard output'
</code></pre>
<p>Here I mock <code>subprocess.Popen</code>, then I intercept <code>stdout</code> passed to its constructor and try to write to the buffer. Then I intend to run assertions on the content of <code>stdout.txt</code> file.</p>
<p>Apparently, when I try to write to <code>stdout</code> buffer, it is already closed and I get IO error.</p>
<pre><code>================================== FAILURES ===================================
____________________________ test_captures_stdout _____________________________
def test_captures_stdout():
with mock.patch('subprocess.Popen') as mock_popen:
func()
# Try to intercept stdout and write to it
[(_, kwargs)] = mock_popen.call_args_list
> with kwargs['stdout']:
E ValueError: I/O operation on closed file.
test_subprocess.py:22: ValueError
=========================== short test summary info ===========================
</code></pre>
<p>I wonder if there is a convenient way of mocking <code>Popen</code> and somehow simulating <code>stdout</code> written to a file.</p>
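<p>One way around the closed-file problem (a sketch, assuming the file only needs to contain the fake output) is to do the writing inside a <code>side_effect</code>, while <code>func()</code> still has the file open:</p>

```python
import subprocess
from unittest import mock

def func():
    # the function under test: runs a process with stdout captured to a file
    with open('stdout.txt', 'wt') as out:
        subprocess.Popen(['ls'], stdout=out)

def test_captures_stdout():
    def fake_popen(cmd, stdout=None, **kwargs):
        stdout.write('some standard output')  # the file is still open here
        return mock.Mock()

    # side_effect runs our fake while Popen is patched out entirely
    with mock.patch('subprocess.Popen', side_effect=fake_popen):
        func()

    with open('stdout.txt') as buf:
        assert buf.read() == 'some standard output'

test_captures_stdout()
```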
|
<python><testing><mocking><subprocess>
|
2024-03-01 04:20:31
| 1
| 460
|
Maxim Ivanov
|
78,085,327
| 15,474,507
|
Remove links using myjdapi
|
<p>I am trying to remove links that have <code>.opus</code> in the filename using <a href="https://github.com/mmarquezs/My.Jdownloader-API-Python-Library/blob/master/myjdapi/myjdapi.py" rel="nofollow noreferrer">myjdapi</a>, but the removal fails.</p>
<p>JD API reference : <a href="https://my.jdownloader.org/developers/" rel="nofollow noreferrer">here</a></p>
<p>Example of code that I use:</p>
<p><a href="https://i.sstatic.net/aGXOe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aGXOe.png" alt="jd link" /></a></p>
<pre><code>import myjdapi
import sys
try:
jd = myjdapi.Myjdapi()
jd.set_app_key("JDPeter") # name of your choice
# Connect using your username and password
jd.connect("xxxx@gmail.com", "password")
# connection
if jd.is_connected():
print("Connected successfully.")
else:
print("Failed to connect.")
sys.exit()
# update device
jd.update_devices()
    # print devices
    print("Devices: ", jd.list_devices())
# Get the device you want to use
    device = jd.get_device("JDownloader@abc") # Replace with the name of your device
# Test your device
if device:
print("Device obtained successfully.")
else:
print("Unable to obtain device.")
sys.exit()
# Now you can use the various API functions. For example, to get packages from the LinkGrabber:
packages = device.linkgrabber.query_packages([{
"bytesLoaded": True,
"bytesTotal": True,
"enabled": True,
"eta": True,
"finished": True,
"running": True,
"speed": True,
"status": True,
"childCount": True,
"hosts": True,
"saveTo": True,
"maxResults": -1,
"startAt": 0,
}])
# Iterate through packages
for package in packages:
# Print the package name
        print("Package: ", package['name'])
# Check the connection
if not jd.is_connected():
            print("Connection lost. Attempting to reconnect...")
            jd.connect("xxxx.com", "xxxx") # Replace with your email and password
            if not jd.is_connected():
                print("Unable to re-establish the connection.")
                sys.exit()
            else:
                print("Connection re-established successfully.")
# Get package details
package_links = device.linkgrabber.query_links([{
"packageUUIDs": [package['uuid']]
}])
# Check if there are .m4a and .opus files in the package
m4a_exists = any(link['name'].endswith('.m4a') for link in package_links)
opus_links = [link for link in package_links if link['name'].endswith('.opus')]
# If a .m4a file exists, delete all .opus files
if m4a_exists and opus_links:
for opus_link in opus_links:
response = device.linkgrabber.remove_links([opus_link['uuid']], [package['uuid']])
print("Risposta da remove_links: ", response)
except Exception as e:
print("Errore: ", str(e))
</code></pre>
<p>The problem: essentially, my script lists both the packages and the names of the files within these packages, but it does not remove the entries whose names contain the <code>.opus</code> extension.</p>
|
<python><jdownloader>
|
2024-03-01 02:51:18
| 0
| 307
|
Alex Doc
|
78,085,129
| 673,167
|
Installing Python BeautifulSoup package with Homebrew on macOS
|
<p>I installed Python on my macOS using Homebrew. Now I'm attempting to use <code>BeautifulSoup</code>, so I executed the following command: <code>brew install python-beautifulsoup4</code>. However, I encountered a message stating: "Warning: No available formula with the name <code>python-beautifulsoup4</code>".</p>
<p>What could be the issue here? How should I properly install and use this Python package?</p>
|
<python><beautifulsoup><homebrew>
|
2024-03-01 01:34:44
| 2
| 1,548
|
notGeek
|
78,085,119
| 4,549,682
|
How can I pre-warm a Python azure function or set up pre-warmed instances?
|
<p>We have Python Azure httpTrigger functions that have a cold start problem when we scale out. The response time spikes to 10-40s after creating a new instance for scale out. I found 2 possible solutions for this:</p>
<ul>
<li>use a <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-warmup?tabs=isolated-process%2Cnodejs-v4&pivots=programming-language-python" rel="nofollow noreferrer">warmupTrigger</a></li>
<li>use <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-premium-plan?tabs=azurecli#prewarmed-instances" rel="nofollow noreferrer">pre-warmed instances</a></li>
</ul>
<p>It looks like prewarmed instances can be set with something like <code>preWarmedInstanceCount=n</code> but I'm not sure if this is applicable for the Premium P3V3 plan we're on. And can I just set that in the function app environment settings, or have to do it via CLI like in the docs?</p>
<p>On the warmupTrigger, I added it as another function within the function app, but it's not firing.</p>
<p>Is there some other option for having scale-out instances wait to take requests until the function code is fully loaded and ready?</p>
|
<python><azure><serverless>
|
2024-03-01 01:29:05
| 1
| 16,136
|
wordsforthewise
|
78,085,089
| 21,935,028
|
Antlr Python error processing simple PLSQL
|
<p>Antlr4 was installed on Ubuntu 22.04 with Python as follows:</p>
<pre class="lang-bash prettyprint-override"><code>wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/Python3/PlSqlLexerBase.py
wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/Python3/PlSqlParserBase.py
wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/PlSqlLexer.g4
wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/PlSqlParser.g4
</code></pre>
<p>Install Antlr4 for Python</p>
<pre><code>pip3 install antlr4-python3-runtime==4.13.1
</code></pre>
<p>The following test script is used to parse the simple SQL, PLSQL files:</p>
<pre class="lang-py prettyprint-override"><code>import sys

from antlr4 import CommonTokenStream, InputStream
from antlr4.tree.Tree import TerminalNodeImpl

from PlSqlLexer import PlSqlLexer
from PlSqlParser import PlSqlParser

def main():
with open(sys.argv[1], 'r') as file:
filesrc = file.read()
lexer = PlSqlLexer(InputStream(filesrc))
parser = PlSqlParser(CommonTokenStream(lexer))
tree = parser.sql_script()
traverse(tree, parser.ruleNames)
def traverse(tree, rule_names, indent = 0):
if tree.getText() == "<EOF>":
return
elif isinstance(tree, TerminalNodeImpl):
print("{0}TOKEN='{1}'".format(" " * indent, tree.getText()))
else:
print("{0}{1}".format(" " * indent, rule_names[tree.getRuleIndex()]))
for child in tree.children:
traverse(child, rule_names, indent + 1)
if __name__ == '__main__':
main()
</code></pre>
<p>I have a simple input file as follows to test the above with, which is processed happily without errors by the Python script:</p>
<pre class="lang-sql prettyprint-override"><code>DECLARE
l_x NUMBER;
BEGIN
SELECT length(c1)
INTO l_x
FROM the_table
WHERE c2 = 'X';
END;
</code></pre>
<p>Which gives:</p>
<pre><code>python3 ./runPLSQLFile.py test.sql
sql_script
unit_statement
anonymous_block
TOKEN='DECLARE'
seq_of_declare_specs
declare_spec
variable_declaration
identifier
id_expression
regular_id
TOKEN='l_x'
type_spec
datatype
native_datatype_element
TOKEN='NUMBER'
TOKEN=';'
TOKEN='BEGIN'
seq_of_statements
statement
sql_statement
data_manipulation_language_statements
select_statement
select_only_statement
subquery
subquery_basic_elements
query_block
TOKEN='SELECT'
selected_list
select_list_elements
expression
logical_expression
unary_logical_expression
multiset_expression
relational_expression
compound_expression
concatenation
model_expression
unary_expression
atom
general_element
general_element_part
id_expression
regular_id
non_reserved_keywords_pre12c
TOKEN='length'
function_argument
TOKEN='('
argument
expression
logical_expression
unary_logical_expression
multiset_expression
relational_expression
compound_expression
concatenation
model_expression
unary_expression
atom
general_element
general_element_part
id_expression
regular_id
TOKEN='c1'
TOKEN=')'
into_clause
TOKEN='INTO'
general_element
general_element_part
id_expression
regular_id
TOKEN='l_x'
from_clause
TOKEN='FROM'
table_ref_list
table_ref
table_ref_aux
table_ref_aux_internal
dml_table_expression_clause
tableview_name
identifier
id_expression
regular_id
TOKEN='the_table'
where_clause
TOKEN='WHERE'
condition
expression
logical_expression
unary_logical_expression
multiset_expression
relational_expression
relational_expression
compound_expression
concatenation
model_expression
unary_expression
atom
general_element
general_element_part
id_expression
regular_id
TOKEN='c2'
relational_operator
TOKEN='='
relational_expression
compound_expression
concatenation
model_expression
unary_expression
atom
constant
quoted_string
TOKEN=''X''
TOKEN=';'
TOKEN='END'
TOKEN=';'
</code></pre>
<p>But when I run it against this script I get an error:</p>
<pre class="lang-sql prettyprint-override"><code>CREATE OR REPLACE PACKAGE pa_tsheet AS
--
PROCEDURE pr_new_tsheet_template
(
p_act_id IN timesheets.act_id %TYPE,
p_apd_id IN timesheets.apd_id %TYPE,
p_weekend_yn IN VARCHAR2,
p_job_desc IN timesheet_items.job_details %TYPE,
p_job_rate IN timesheet_items.rate %TYPE,
p_job_hours IN timesheet_items.hours %TYPE,
p_tms_id IN OUT timesheets.id %TYPE
);
--
END pa_tsheet;
/
</code></pre>
<p>Error:</p>
<pre class="lang-bash prettyprint-override"><code>python3 ./runPLSQLFiles.py ../pa_tsheet.pkh 2>&1
Traceback (most recent call last):
File "antlr_plsql/grammars/./runPLSQLFiles.py", line 33, in <module>
main()
File "antlr_plsql/grammars/./runPLSQLFiles.py", line 19, in main
tree = parser.sql_script()
File "antlr_plsql/grammars/PlSqlParser.py", line 15340, in sql_script
self.unit_statement()
File "antlr_plsql/grammars/PlSqlParser.py", line 16370, in unit_statement
self.create_package()
File "antlr_plsql/grammars/PlSqlParser.py", line 25046, in create_package
self.consume()
File ".local/lib/python3.10/site-packages/antlr4/Parser.py", line 348, in consume
self.getInputStream().consume()
File ".local/lib/python3.10/site-packages/antlr4/BufferedTokenStream.py", line 101, in consume
self.index = self.adjustSeekIndex(self.index + 1)
File ".local/lib/python3.10/site-packages/antlr4/CommonTokenStream.py", line 45, in adjustSeekIndex
return self.nextTokenOnChannel(i, self.channel)
File ".local/lib/python3.10/site-packages/antlr4/BufferedTokenStream.py", line 214, in nextTokenOnChannel
self.sync(i)
File ".local/lib/python3.10/site-packages/antlr4/BufferedTokenStream.py", line 112, in sync
fetched = self.fetch(n)
File ".local/lib/python3.10/site-packages/antlr4/BufferedTokenStream.py", line 124, in fetch
t = self.tokenSource.nextToken()
File ".local/lib/python3.10/site-packages/antlr4/Lexer.py", line 137, in nextToken
ttype = self._interp.match(self._input, self._mode)
File ".local/lib/python3.10/site-packages/antlr4/atn/LexerATNSimulator.py", line 104, in match
return self.execATN(input, dfa.s0)
File ".local/lib/python3.10/site-packages/antlr4/atn/LexerATNSimulator.py", line 173, in execATN
target = self.computeTargetState(input, s, t)
File ".local/lib/python3.10/site-packages/antlr4/atn/LexerATNSimulator.py", line 231, in computeTargetState
self.getReachableConfigSet(input, s.configs, reach, t)
File ".local/lib/python3.10/site-packages/antlr4/atn/LexerATNSimulator.py", line 280, in getReachableConfigSet
if self.closure(input, config, reach, currentAltReachedAcceptState, True, treatEofAsEpsilon):
File ".local/lib/python3.10/site-packages/antlr4/atn/LexerATNSimulator.py", line 359, in closure
currentAltReachedAcceptState = self.closure(input, c, configs, currentAltReachedAcceptState, speculative, treatEofAsEpsilon)
File ".local/lib/python3.10/site-packages/antlr4/atn/LexerATNSimulator.py", line 357, in closure
c = self.getEpsilonTarget(input, config, t, configs, speculative, treatEofAsEpsilon)
File ".local/lib/python3.10/site-packages/antlr4/atn/LexerATNSimulator.py", line 396, in getEpsilonTarget
if self.evaluatePredicate(input, t.ruleIndex, t.predIndex, speculative):
File ".local/lib/python3.10/site-packages/antlr4/atn/LexerATNSimulator.py", line 465, in evaluatePredicate
return self.recog.sempred(None, ruleIndex, predIndex)
File "antlr_plsql/grammars/PlSqlLexer.py", line 16736, in sempred
return pred(localctx, predIndex)
File "antlr_plsql/grammars/PlSqlLexer.py", line 16747, in PROMPT_MESSAGE_sempred
return this.IsNewlineAtPos(-4)
NameError: name 'this' is not defined
</code></pre>
<p>What is the issue?</p>
|
<python><antlr4>
|
2024-03-01 01:12:40
| 1
| 419
|
Pro West
|