QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,483,077 | 2,615,236 | python is hard crashing with a simple mp3 play call | <p>The following code is hard crashing. It was working fine then suddenly something changed and I don't know what, so I created this mini test app, and it's still crashing python in the call to play. The try isn't getting caught, it's a hard crash of the python interpreter. It does correctly play the audio, then crashes before returning. Any help or guidance is appreciated, this is driving me crazy</p>
<p>I've tried running from a net new virt env and the host directly</p>
<p>I did install the following at the host level:
flac and ffmpeg</p>
<p>Then in the virtual env I've installed gtts and pydub. There are other things installed too, but I didn't think they would be the culprit, although if I create a virtual environment with only gtts and pydub I get an error 13 calling play, whereas my other virtual env doesn't give the error 13, and I don't know what's causing that.</p>
<p>The error 13 is on the path: C:\Users\larry\AppData\Local\Temp\tmpm6l7dmzm.wav</p>
<p>even though tts.save("output.mp3") puts the output.mp3 file into the local working folder</p>
<hr />
<pre><code>from gtts import gTTS  # used to convert text to audio
from pydub import AudioSegment
from pydub.playback import play

def text_to_speech(text):
    try:
        # Convert text to audio using gTTS and save it to a file
        tts = gTTS(text)
        tts.save("output.mp3")
        # Load the saved MP3 file into an AudioSegment
        audio = AudioSegment.from_file("output.mp3", format="mp3")
        play(audio)  # <---- hard crashing in this call
    except Exception as e:
        print(f"failed to play sound, {str(e)}")

text_to_speech("This is a test, this is only a test")
</code></pre>
| <python><audio><mp3><text-to-speech> | 2023-11-14 18:38:18 | 0 | 551 | Quadgnim |
77,483,016 | 6,141,238 | In IDLE, what is the simplest way to limit the number of lines displayed by a pprint call? | <p>I am working with large nested dictionaries in Python and would like to quickly check their structure in IDLE. The <a href="https://docs.python.org/3/library/pprint.html" rel="nofollow noreferrer"><code>pprint</code></a> module nicely displays this structure, but leads IDLE to hang due to the size the variables. Is there a simple way to limit the number of lines that <code>pprint</code> prints, or similarly limit the number of key-value pairs displayed at each level of structure? (I may be missing something obvious here.)</p>
<p>Of note, <a href="https://stackoverflow.com/questions/29321969/limiting-print-output">there appears</a> to be a <a href="https://docs.python.org/3.4/library/reprlib.html" rel="nofollow noreferrer">reprlib</a> module made for the similar goal of truncating the output of <code>print</code> after a certain number of characters. However, <code>print</code> does not display the structure of large nested dictionaries in a readable fashion, so this module seems to be impractical for my purpose.</p>
| <python><dictionary><large-data><python-idle><pprint> | 2023-11-14 18:26:37 | 3 | 427 | SapereAude |
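An editorial sketch (not part of the original post): the stdlib `pprint` module already bounds nesting via its `depth` parameter, which replaces levels below the limit with `...`, so huge nested dictionaries stay short enough for IDLE.

```python
from pprint import pformat

# depth=2 keeps only the outermost two levels of structure; anything
# deeper is rendered as "..." instead of being expanded.
nested = {"a": {"b": {"c": {"d": 1}}}, "e": [1, [2, [3, [4]]]]}
short = pformat(nested, depth=2, compact=True)
print(short)
```

Note that `depth` limits nesting rather than line count; for a hard line cap, one option (an assumption, not a documented feature) is slicing `pformat(...).splitlines()[:n]`.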
77,482,903 | 401,173 | What is the proper way to build and reference an editable local Python module? | <p>I am trying to build a module that will be shared among more than one Python application in a project.</p>
<p>I have a folder with a <code>pyproject.toml</code> with the following content.</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "data_audit_shared"
version = "0.0.1"
authors = [
    { name="Josh Russo", email="jrusso@example.com" },
]
description = "Shared code for data audit"
requires-python = ">=3.8"
classifiers = [
    "Programming Language :: Python :: 3",
    "License :: OSI Approved :: MIT License",
    "Operating System :: OS Independent",
]
</code></pre>
<p>When I try to reference <code>data_audit_shared</code> I get the message <code>No module named 'data_audit_shared'</code>.</p>
<p>My dependency in my project is <code>-e ../../data_audit_shared</code> and it installs properly.</p>
<p>This is what my venv <code>site-packages</code> looks like.</p>
<p><a href="https://i.sstatic.net/hKGzY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hKGzY.png" alt="enter image description here" /></a></p>
<p>How do I update module or the reference to be able to use this module?</p>
<hr />
<p>Edit - Tried removing <code>-e</code></p>
<p>So when I remove <code>-e</code> I get the module listed in the installed modules. Is there something I am missing to be able to properly install and reference my module with <code>-e</code>?</p>
<hr />
<p>Edit - Relative paths</p>
<p>I'm assuming that this is because I'm trying to reference a package outside of my project folder root. The strange part in my mind is that you can use this syntax with a Git repo. Why would it not behave similarly?</p>
| <python><pip><python-module><pyproject.toml> | 2023-11-14 18:07:55 | 0 | 3,241 | Josh Russo |
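One hedged possibility (the layout below is an assumption, not taken from the post): editable installs of a `pyproject.toml`-only project need an explicit `[build-system]` table (setuptools >= 64 implements PEP 660 editable wheels), and the package directory must be discoverable by the backend.

```toml
[build-system]
requires = ["setuptools>=64"]   # setuptools >= 64 supports PEP 660 editable installs
build-backend = "setuptools.build_meta"

[project]
name = "data_audit_shared"
version = "0.0.1"

[tool.setuptools.packages.find]
where = ["src"]   # assumes a src/ layout; adjust to the actual tree
```

With an older setuptools or no `[build-system]` table at all, `pip install -e` can fall back to legacy behavior that needs a `setup.py`, which would explain `-e` failing while a plain install works.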
77,482,831 | 5,529,155 | SMTP.starttls() got an unexpected keyword argument 'keyfile' | <p>I'm trying to send email with this code:</p>
<pre><code>send_mail(
    subject='Restore password',
    message='Restore password for ' + email,
    from_email=os.environ.get('GMAIL_EMAIL', ''),
    html_message=html_message,
    recipient_list=[email]
)
</code></pre>
<p>And having this error</p>
<pre><code>SMTP.starttls() got an unexpected keyword argument 'keyfile'
</code></pre>
<p>My python version is 3.12.0</p>
<p>My Django version is 4.0.1</p>
| <python><python-3.x><django> | 2023-11-14 17:56:07 | 5 | 467 | Yuriy |
77,482,825 | 8,560,600 | Can I add a Python compiler/interpreter in a NextJS application, for user to run and test small Python code? | <p>I am building an educational application in NextJS (v13), where I would like to allow user to write and run/test small Python code. I want to build somewhat like an online compiler/interpreter. Is that possible? If so, could someone point me in the right direction and/or resources?</p>
<p>Thanks!</p>
| <python><next.js> | 2023-11-14 17:55:15 | 1 | 1,608 | Stefan Radonjic |
77,482,714 | 2,056,201 | cv2_imshow in Google Colab causes images to be stacked on top of each other instead of displaying them in the same feed | <p>Im not sure if this is a limitation of Colab, Im trying to implement a pong game with opencv, Im using modified code from a tutorial <a href="https://github.com/darthdaenerys/Pong-Game/blob/master/game.py" rel="nofollow noreferrer">https://github.com/darthdaenerys/Pong-Game/blob/master/game.py</a></p>
<p>My code modifications are below</p>
<p>After replacing cv2.show with cv2_show as requested by Colab, it seems to produce infinite frames as images and stacks them on top of each other instead of producing a moving video in the same space.</p>
<p>Is it possible to display the images in the same space as a game would, or is that impossible with Colab, and I need to install Jupyter to get it to work?</p>
<p>If it is possible, please post recommended code change</p>
<p>if it is not possible, is there at least a way to remove the previous image before showing the next one so its not stacking images on top of each other</p>
<p>Thanks</p>
<pre><code>import cv2
import numpy as np
import random
from google.colab.patches import cv2_imshow

# parameters
width=1280
height=720
paddlewidth=150
paddleheight=20
paddlecolor=(75, 153, 242)
lives=3
deltax=10
deltay=-10
xpos=640
ypos=400
level=1
ball=True
ballradius=8
highscore=0
bgcolor=(30,79,24)
scorecolor=(149, 129,252)
prevval=683
myscore=0

while True:
    background=np.zeros([720,1366,3],dtype=np.uint8)
    background[:,:]=bgcolor
    val = random.randint(0, 1000)
    if val==0:
        val=prevval
    else:
        prevval=val
    cv2.rectangle(background,(1366-(val-paddlewidth//2),720-paddleheight),(1366-(val+paddlewidth//2),720),paddlecolor,-1)
    paddlerightcorner=(1366-(val-paddlewidth//2),720-paddleheight)
    paddleleftcorner=(1366-(val+paddlewidth//2),720-paddleheight)
    if ball==True:
        cv2.circle(background,(xpos,ypos),ballradius,(255,255,255),-1)
        xpos+=deltax
        ypos+=deltay
    if xpos>=1360 or xpos<=5:
        deltax=-deltax
    if ypos<=10:
        deltay=-deltay
    if xpos<=paddlerightcorner[0] and xpos>=paddleleftcorner[0]:
        if ypos>=695 and ypos<=710:
            deltay=-deltay
            myscore+=1
            if myscore%5==0 and myscore>=5:
                level+=1
                if deltax<0:
                    deltax-=2
                else:
                    deltax+=2
                if deltay<0:
                    deltay-=1
                else:
                    deltay+=1
    cv2.putText(background,'Lives: '+str(lives),(1200,35),cv2.FONT_HERSHEY_PLAIN,2,scorecolor,2)
    cv2.putText(background,'Level: '+str(level),(1200,68),cv2.FONT_HERSHEY_PLAIN,2,scorecolor,2)
    cv2.putText(background,'Score: '+str(myscore),(1200,101),cv2.FONT_HERSHEY_PLAIN,2,scorecolor,2)
    if ypos>=720:
        lives-=1
        temp=cv2.blur(background,(15,15))
        cv2.putText(temp,'You Lost a Life !',(300,360),cv2.FONT_HERSHEY_DUPLEX,3,(185,89,200),3,1)
        cv2_imshow(temp)
        cv2.waitKey(2000)
        xpos=640
        ypos=400
        if deltay>0:
            deltay=-deltay
    if lives==0:
        ball=False
        deltax=10
        deltay=-10
        level=0
        background=cv2.blur(background,(20,20))
        cv2.waitKey(1000)
        for i in range(0,720,10):
            background[i:i+10,:]=(24,34,255)
            cv2_imshow(background)
            cv2.waitKey(10)
        cv2.putText(background,'GAME OVER',(400,360),cv2.FONT_HERSHEY_DUPLEX,3,(0,255,255),2)
        cv2.putText(background,'Your Score: '+str(myscore),(420,420),cv2.FONT_HERSHEY_PLAIN,2,(0,255,35),2)
        if myscore>highscore:
            highscore=myscore
        cv2.putText(background,'HIGH SCORE: '+str(highscore),(420,480),cv2.FONT_HERSHEY_PLAIN,2,(0,255,56),2)
        cv2.putText(background,'Press q twice to exit or game restarts in 5 seconds',(300,550),cv2.FONT_HERSHEY_DUPLEX,1,(255,0,0),2)
        cv2_imshow(background)
        cv2.waitKey(5000)
        ball=True
        myscore=0
        lives=3
    if cv2.waitKey(1) & 0xff==ord('q'):
        break
    cv2_imshow(background)
</code></pre>
| <python><user-interface><jupyter-notebook><google-colaboratory> | 2023-11-14 17:38:28 | 1 | 3,706 | Mich |
77,482,602 | 9,997,212 | How to create a chunked list of lists with almost-equal lengths? | <p>I need to split a number into 3 parts. The following function (<a href="https://stackoverflow.com/a/53117390/9997212">source</a>) does that correctly:</p>
<pre class="lang-py prettyprint-override"><code>def parts(num: int, div: int) -> list[int]:
    """Split a number into equal parts."""
    return [num // div + (1 if x < num % div else 0) for x in range(div)]
</code></pre>
<p>So</p>
<pre class="lang-py prettyprint-override"><code>assert parts(8, 3) == [3, 3, 2]
assert parts(9, 3) == [3, 3, 3]
assert parts(10, 3) == [4, 3, 3]
</code></pre>
<p>From that list, I need a list of lists containing a range from 0 to <code>num + 3</code>, such that every sublist</p>
<ul>
<li>at an even index has a length of 1</li>
<li>at an odd index has a length of <code>n</code>, where <code>n</code> is fetched from <code>parts</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code>assert expected(8, 3) == [[0], [1, 2, 3], [4], [5, 6, 7], [8], [9, 10]]
assert expected(9, 3) == [[0], [1, 2, 3], [4], [5, 6, 7], [8], [9, 10, 11]]
assert expected(10, 3) == [[0], [1, 2, 3, 4], [5], [6, 7, 8], [9], [10, 11, 12]]
</code></pre>
<p>This is what I've tried:</p>
<pre class="lang-py prettyprint-override"><code>def actual(num: int, div: int) -> list[list[int]]:
    matrix: list[list[int]] = []
    for i, length in enumerate(parts(num, div)):
        base = i * (length + 1)
        matrix.append([base])
        matrix.append([base + j + 1 for j in range(length)])
    return matrix
</code></pre>
<p>However, it only works when <code>num</code> is divisible by <code>div</code>. How can I fix it?</p>
<p>These are the outputs for my function:</p>
<pre class="lang-py prettyprint-override"><code>assert actual(8, 3) == [[0], [1, 2, 3], [4], [5, 6, 7], [6], [7, 8]]
assert actual(9, 3) == [[0], [1, 2, 3], [4], [5, 6, 7], [8], [9, 10, 11]]
assert actual(10, 3) == [[0], [1, 2, 3, 4], [4], [5, 6, 7], [8], [9, 10, 11]]
</code></pre>
| <python><function><python-3.8> | 2023-11-14 17:20:34 | 2 | 11,559 | enzo |
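One way to fix the bug (an editorial sketch, not the only answer): the base offset cannot be derived from the index once the parts are unequal, so track a running offset instead.

```python
def parts(num: int, div: int) -> list[int]:
    """Split a number into almost-equal parts (as in the question)."""
    return [num // div + (1 if x < num % div else 0) for x in range(div)]

def expected(num: int, div: int) -> list[list[int]]:
    # Keep a running base: each iteration consumes one singleton slot
    # plus `length` slots, so the next base is base + length + 1.
    matrix: list[list[int]] = []
    base = 0
    for length in parts(num, div):
        matrix.append([base])
        matrix.append(list(range(base + 1, base + 1 + length)))
        base += length + 1
    return matrix

print(expected(8, 3))   # [[0], [1, 2, 3], [4], [5, 6, 7], [8], [9, 10]]
```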
77,482,582 | 8,006,721 | Pandas compare columns and consider NULLS equal | <p>I used Pandas to join two dataframes together and want to compare if column values are equal.</p>
<p>However, whenever I encounter NULLs (or NA ?) values, my comparison returns <code>False</code></p>
<pre><code>import pandas as pd
# create test dataframes
df_1 = pd.DataFrame({'key': [1, 2, 3, 4, 5], 'field1': ['foo', pd.NA, None, None, 6]})
df_2 = pd.DataFrame({'pk': [1, 2, 3, 4, 6], 'field2': ['foo', pd.NA, pd.NA, None, 6.0]})
# left join
df_joined = df_1.merge(df_2, 'left', left_on='key', right_on='pk')
# calculate comparison field
df_joined['compare'] = df_joined['field1'] == df_joined['field2']
print(df_joined)
</code></pre>
<p>yields</p>
<pre><code> key field1 pk field2 compare
0 1 foo 1 foo True
1 2 <NA> 2 <NA> False
2 3 None 3 None False
3 4 <NA> 4 None False
4 5 10 5 10.0 True
</code></pre>
<p>I want all values in <code>compare</code> to be <code>True</code>.</p>
<p>I know this could be accomplished with functions and Pandas <code>apply</code>, but I was hoping for a nice one line comparison similar to what I have already in the example.</p>
| <python><pandas><dataframe> | 2023-11-14 17:18:32 | 2 | 332 | sushi |
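One-line idea (an illustration, not the only answer): treat two values as equal when they compare equal OR when both are missing; `Series.isna()` counts `pd.NA`, `None` and `NaN` alike, which sidesteps the rule that missing values never compare equal.

```python
import pandas as pd

# Stand-ins for the joined columns; dtype=object mirrors the question.
a = pd.Series(["foo", pd.NA, None, pd.NA, 6], dtype="object")
b = pd.Series(["foo", pd.NA, pd.NA, None, 6.0], dtype="object")

# Equal values OR both-missing count as True; fillna guards against
# pd.NA leaking through the elementwise comparison.
compare = ((a == b) | (a.isna() & b.isna())).fillna(False).astype(bool)
print(compare.tolist())
```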
77,482,580 | 11,426,624 | merge two dataframes on first part of a string | <p>I have two dataframes</p>
<pre><code>df1 = pd.DataFrame({'id':['XYZ', 'ABC1', 'CDS'], 'col1':[1,2,3]})
df2 = pd.DataFrame({'id':['XYZ1', 'XYZ2', 'ABC1', 'ABC11', 'CDSS', 'CDS', 'ABC2', 'ABC', 'XYA'],
                    'col2':[1,2,3,4,5,6,7,8,9]})
</code></pre>
<pre><code> id col1
0 XYZ 1
1 ABC1 2
2 CDS 3
</code></pre>
<p>and</p>
<pre><code> id col2
0 XYZ1 1
1 XYZ2 2
2 ABC1 3
3 ABC11 4
4 CDSS 5
5 CDS 6
6 ABC2 7
7 ABC 8
8 XYA 9
</code></pre>
<p>and I would like to merge df1 to df2 on the full id of df1 and the first characters of df2 that match it, such that I get this dataframe</p>
<pre><code> id col2 col1
0 XYZ1 1 1.0
1 XYZ2 2 1.0
2 ABC1 3 2.0
3 ABC11 4 2.0
4 CDSS 5 3.0
5 CDS 6 3.0
6 ABC2 7 NaN
7 ABC 8 NaN
8 XYA 9 NaN
</code></pre>
<p>How can I do this?</p>
| <python><pandas><dataframe><merge> | 2023-11-14 17:18:20 | 3 | 734 | corianne1234 |
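One possible approach (an editorial sketch, not from the post): map each `df2` id to the longest `df1` id that is a prefix of it, then do an ordinary left merge on that derived key.

```python
import pandas as pd

df1 = pd.DataFrame({'id': ['XYZ', 'ABC1', 'CDS'], 'col1': [1, 2, 3]})
df2 = pd.DataFrame({'id': ['XYZ1', 'XYZ2', 'ABC1', 'ABC11', 'CDSS', 'CDS', 'ABC2', 'ABC', 'XYA'],
                    'col2': [1, 2, 3, 4, 5, 6, 7, 8, 9]})

# Longest prefixes first, so 'ABC1' wins over a hypothetical 'ABC'.
prefixes = sorted(df1['id'], key=len, reverse=True)

def longest_prefix(s):
    return next((p for p in prefixes if s.startswith(p)), None)

out = (df2.assign(key=df2['id'].map(longest_prefix))
          .merge(df1.rename(columns={'id': 'key'}), on='key', how='left')
          .drop(columns='key'))
print(out)
```

Rows whose id starts with no `df1` id get `key=None` and therefore `NaN` in `col1`, matching the desired output.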
77,482,489 | 10,951,092 | How to remove a row of a Dataframe if more than one value is zero? | <p>I am analyzing data from an accelerometer. It returns acceleration in the x, y, and z directions as well as a timestamp. These values are stored in a Pandas DataFrame. Sometimes the accelerometer returns bad data. In this case, at least two of the values (x, y, or z) will be zero. How do I find and remove rows from a dataframe where any two values are zero?</p>
| <python><pandas><dataframe> | 2023-11-14 16:59:43 | 1 | 382 | PetSven |
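A short sketch (column names `x`, `y`, `z` are assumptions, since the post does not name them): count the zeros per row across the three axis columns and keep rows with fewer than two.

```python
import pandas as pd

df = pd.DataFrame({'t': [1, 2, 3],
                   'x': [0.1, 0.0, 0.0],
                   'y': [0.2, 0.0, 0.3],
                   'z': [0.3, 0.5, 0.0]})

# Boolean frame of exact zeros, summed across the axis columns per row.
zero_count = (df[['x', 'y', 'z']] == 0).sum(axis=1)
clean = df[zero_count < 2]
print(clean)
```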
77,482,416 | 308,827 | Convert Dekad to start and end date in Python | <p>I have a pandas dataframe containing a column with Dekad (10-day) information, e.g. Dekad 1 is Jan 1st to Jan 10th, Dekad 2 is Jan 11th to Jan 20th, and so on.</p>
<p>Is there a way to get start and end day information given the dekad number? I can do this by hardcoding but a more pythonic solution would be better.</p>
<p>The dataframe looks like so, and I want to add a new column called date_range_dekad that contains the start and end date concatenated into a string</p>
<pre><code> CEI Dekad
0 160.004352 1
1 0.000000 2
2 0.000000 3
3 0.000000 4
4 0.000000 5
</code></pre>
| <python><pandas><datetime> | 2023-11-14 16:47:33 | 1 | 22,341 | user308827 |
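A sketch that assumes dekads are numbered 1..36 across a year, three per month (the usual agro-meteorological convention, not stated in the post): dekads 1 and 2 of each month are fixed 10-day spans, and dekad 3 runs from the 21st to the month's last day.

```python
import calendar
from datetime import date, timedelta

def dekad_range(dekad: int, year: int) -> tuple[date, date]:
    month = (dekad - 1) // 3 + 1
    pos = (dekad - 1) % 3              # 0, 1 or 2 within the month
    start = date(year, month, 1 + 10 * pos)
    if pos < 2:
        end = start + timedelta(days=9)
    else:
        # third dekad absorbs the month's remaining days (28-31)
        end = date(year, month, calendar.monthrange(year, month)[1])
    return start, end
```

Given a known year, the new column could then be built with something like `df['date_range_dekad'] = df['Dekad'].map(lambda d: '{} - {}'.format(*dekad_range(d, 2023)))`.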
77,482,375 | 1,016,065 | Python fuzzy string search with `regex` | <p>Trying to understand fuzzy pattern matching with <a href="https://pypi.org/project/regex/" rel="nofollow noreferrer">regex</a>. What I want: I have a string, and I want to find identical or similar strings in other, perhaps larger strings. (Does one field in a database record occur, perhaps as a fuzzy substring, in any other field in that database record?)</p>
<p>Here's a sample. Comments indicate character positions.</p>
<pre><code>import regex
to_search = "1990 /"
#123456
# ^^ ^
search_in = "V CAD-0000:0000[01] ISS 23/10/91"
#12345678901234567890123456789012
# ^^ ^
m = regex.search(f'({to_search}){{e<4}}', search_in, regex.BESTMATCH)
</code></pre>
<p>result:</p>
<pre><code>>>> m
<regex.Match object; span=(27, 30), match='10/', fuzzy_counts=(0, 0, 3)>
>>> m.fuzzy_changes
([], [], [28, 29, 31])
</code></pre>
<p>No substitutions, no insertions, 3 deletions at positions 28, 29 and 31. The order "substitutions insertions deletions" matters, it's taken from <a href="https://pypi.org/project/regex/" rel="nofollow noreferrer">here</a>.</p>
<p>Question: how to interpret this, in normal human language? What it <em>says</em> (I think):</p>
<blockquote>
<p>"You have a match from substring <code>10/</code> in your <code>search_in</code>, if you delete positions 28, 29 and 31 in it."</p>
</blockquote>
<p>I probably got that wrong. This is true tho':</p>
<blockquote>
<p>"If you delete positions 5, 3 and 2, in that order, in <code>to_search</code>, you have an exact match at substring <code>10/</code> in <code>search_in</code>, yay!"</p>
</blockquote>
<p>Fortunately, I found <a href="https://maxhalford.github.io/blog/fuzzy-regex-matching-in-python/" rel="nofollow noreferrer">a guru!</a> So I did</p>
<pre><code>>>> import orc
>>> m = regex.search(f'({to_search}){{e<4}}', search_in, regex.BESTMATCH)
>>> m
<regex.Match object; span=(27, 30), match='10/', fuzzy_counts=(0, 0, 3)>
>>> near_match = orc.NearMatch.from_regex(m, to_search)
>>> print(near_match)
10/
I
190/
I
1990/
I
1990 /
</code></pre>
<p>Hmm... so the order of <code>fuzzy_counts</code>, is in fact, something, something, <em>insertions</em>?</p>
<p>I'd appreciate if anyone could shed some light on this.</p>
| <python><regex><fuzzy-search> | 2023-11-14 16:41:30 | 1 | 3,912 | RolfBly |
77,482,299 | 5,627,023 | Docker build with python-3.10-alpine | <p>I am facing a compatibility issue with python-3.10-alpine when building the docker image.</p>
<p>Requirements.txt</p>
<pre><code>numpy==1.21.2
dask==2021.9.1
pandas==1.4.4
pyarrow==8.0.0
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM python:3.10-alpine
COPY requirements.txt .
RUN pip install -r requirements.txt
</code></pre>
<p>When I run <code>docker build .</code>, installing pandas throws an error: Numpy 1.21.6 may not yet be supported by Python 3.10.</p>
| <python><dockerfile> | 2023-11-14 16:31:20 | 0 | 305 | Anji |
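A hedged alternative rather than a guaranteed fix: Alpine uses musl libc, so pip often cannot use the prebuilt manylinux wheels for numpy/pandas/pyarrow and falls back to compiling from source, which is where version-support errors like this tend to surface. A Debian-based slim image avoids the source builds; the numpy pin may also need raising, since numpy only added official Python 3.10 support in later 1.21.x releases.

```dockerfile
# Debian-based image: manylinux wheels install without compilation
FROM python:3.10-slim
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
```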
77,482,271 | 10,143,378 | Multi variable in multi processing with one common | <p>I have a script in python and a function that I need to execute with multiprocessing; however, one of the parameters (df) of this function is the same for each call. Let me explain with the code:</p>
<pre><code>def main():
    for country in all_countries:
        df = SqlManager().sql_query('MyDb', "SELECT * FROM MyTable WHERE Country=''")
        args = [(
            df,
            lvl,
            timescale,
            adm_code
        ) for lvl in ['Country', 'Region', 'County']
          for timescale in ['month', 'week']
          for adm_code in list(df[lvl].unique())
        ]
        pool = Pool(cpu_count())
        entry_list = list(tqdm(pool.imap(parallel_func, args), total=len(args)))
</code></pre>
<p>where</p>
<pre><code>def parallel_func(args):
    return function_to_run(*args)

def function_to_run(df, lvl, timescale, adm_code):
    """my code"""
</code></pre>
<p>The problem with that is my df is very big (more than 50 million rows), and with this process I store it 10,000 times in args even though it is exactly the same everywhere. I also don't want to do the SQL call inside my parallel function, because what I would gain in memory I would lose by calling SQL several times for the same data.</p>
<p>I tried to use a global variable, but since I am working in parallel, the processes don't share the same variable.</p>
| <python><sql><multiprocessing><arguments> | 2023-11-14 16:27:20 | 1 | 576 | kilag |
77,482,223 | 6,544,849 | How to make a shamrock image | <p>I am trying to make a shamrock crystal for later fabrication, and am using the nazca design tool to do it.</p>
<p>How can I place the trapezoid so that the sides of the trapezoid are parallel to the circles and are always at the end of the sharp corners?</p>
<pre><code>import nazca as nd
import numpy as np

# Define parameters
a = 500
semib = 0.255 * a
semia = semib * 0.75
shift = 0.22 * a
thickness = 0.65 * a
f = 2 * semia / np.sqrt(3)
r = semia

with nd.Cell(name="shamrock") as sham:
    # Create circles for the shamrock pattern
    circ1 = nd.geometries.circle(radius=semia, N=100)
    circ2 = nd.geometries.circle(radius=semia, N=100)
    circ3 = nd.geometries.circle(radius=semia, N=100)

    # Create Polygon objects for circles
    circ1_polygon = nd.Polygon(layer=56, points=circ1)
    circ2_polygon = nd.Polygon(layer=56, points=circ2)
    circ3_polygon = nd.Polygon(layer=56, points=circ3)

    # Place circles for the shamrock pattern using put
    circ1_polygon.put(0, shift)
    circ2_polygon.put(-shift * 0.5 * np.sqrt(3), -shift * 0.5)
    circ3_polygon.put(shift * 0.5 * np.sqrt(3), -shift * 0.5)

    # Identify sharp corners
    sharp_corner1 = ((-shift * 0.5 * np.sqrt(3)) / 2, (shift + (-shift * 0.5)) / 2)
    sharp_corner2 = ((0 + (shift * 0.5 * np.sqrt(3))) / 2, (shift + (-shift * 0.5)) / 2)
    sharp_corner3 = ((-shift * 0.5 * np.sqrt(3) + (shift * 0.5 * np.sqrt(3))) / 2, (-shift * 0.5 + (-shift * 0.5)) / 2)

    semicircle_radius = f - r

    # Create trapezoid using the trapezoid function
    trapezoid_shape = nd.geometries.trapezoid(length=50, height=160, angle1=132, angle2=132, position=1)

    # Create Polygon object for trapezoid
    trapezoid_polygon = nd.Polygon(layer=56, points=trapezoid_shape)

    # Place trapezoid at the sharp corners
    trapezoid_polygon.put(25, 75, 180)

nd.export_plt(sham)
</code></pre>
<p>And here is the picture.</p>
<p><a href="https://i.sstatic.net/nLXom.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nLXom.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/U8IiV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U8IiV.png" alt="enter image description here" /></a></p>
<p>Ideally it would be nice to have something like this with smoothed corners. But I was thinking to put a trapezoid on top to smooth the sharper corners.</p>
| <python><matplotlib><geometry><trigonometry> | 2023-11-14 16:20:51 | 1 | 321 | wosker4yan |
77,482,207 | 10,438,528 | Is there a way to encrypt two messages into a single string? | <p>How do I encrypt two strings into one so that I can use two keys to get two different messages from a single string?</p>
<p>I need a method that should work kind of like this:</p>
<pre><code>key1 = "asdf"
key2= "xcvb"
encrypted_message = "qpoweiurtz"
decrypt(encrypted_message, key1)
decrypt(encrypted_message, key2)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>apple
pear
</code></pre>
<p>Is something like that possible?</p>
| <python><cryptography> | 2023-11-14 16:18:51 | 0 | 482 | LGR |
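Yes, at least as a proof of concept (the sketch below is a one-time-pad XOR construction, not production cryptography): pick a random "ciphertext" first, then derive each key as ciphertext XOR message. Decrypting the same string with either key recovers either message, because the keys are the same length as the ciphertext.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    # zip truncates to the shorter input, so messages may differ in length
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_pair(msg1: bytes, msg2: bytes):
    cipher = os.urandom(max(len(msg1), len(msg2)))
    return cipher, xor(cipher, msg1), xor(cipher, msg2)

def decrypt(cipher: bytes, key: bytes) -> bytes:
    return xor(cipher, key)

cipher, key1, key2 = encrypt_pair(b"apple", b"pear")
print(decrypt(cipher, key1), decrypt(cipher, key2))
```

The catch, as with any one-time pad, is that each key is as long as its message and reveals it completely, so this is a deniable-encryption toy rather than a scheme with short reusable passwords.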
77,481,897 | 2,149,718 | Capture a video stream from rtsp camera and write it to a file | <p>I am trying to capture a video stream coming from an rtsp camera and write it to a file. Using Jetson Xavier AGX with Jetpack 4.5 [L4T 32.5.0]</p>
<p>I am using below python app to perform the task:</p>
<pre><code>cap = cv2.VideoCapture("rtspsrc location=rtsp://10.34.134.1/Streaming/channels/1/ user-id=myuser user-pw=mypass ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink")

w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
fps = cap.get(cv2.CAP_PROP_FPS)
print('Src opened, %dx%d @ %d fps' % (w, h, fps))

gst_out = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location=test.mkv "
out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (int(w), int(h)))
if not out.isOpened():
    print("Failed to open output")
    exit()

if cap.isOpened():
    while True:
        ret_val, img = cap.read()
        if not ret_val:
            break
        out.write(img)
        cv2.waitKey(1)
else:
    print("pipeline open failed")

cap.release()
out.release()
</code></pre>
<p>Opening the stream doesn’t work. I get the error below:</p>
<blockquote>
<p>[ERROR:0@0.041] global /io/opencv/modules/videoio/src/cap.cpp (164)
open VIDEOIO(CV_IMAGES): raised OpenCV exception:</p>
<p>OpenCV(4.6.0) /io/opencv/modules/videoio/src/cap_images.cpp:253:
error: (-5:Bad argument) CAP_IMAGES: can’t find starting number (in
the name of file): rtspsrc
location=rtsp://10.34.134.1/Streaming/channels/1/ user-id=myuser
user-pw=mypass ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv
! video/x-raw, format=BGRx ! videoconvert ! video/x-raw,format=BGR !
appsink in function ‘icvExtractPattern’</p>
</blockquote>
<p>Can I somehow modify the string supplied to cv2.VideoCapture in order to properly read the rtsp stream?</p>
| <python><opencv><video-streaming><gstreamer><nvidia-jetson> | 2023-11-14 15:30:53 | 1 | 72,345 | Giorgos Betsos |
77,481,878 | 2,586,950 | Is there any relation between classes which use the same type variable? | <p>The <a href="https://docs.python.org/3/library/typing.html#typing.TypeVar" rel="nofollow noreferrer"><code>typing.TypeVar</code></a> class allows one to specify reusable type variables. With Python 3.12 / <a href="https://docs.python.org/3/whatsnew/3.12.html#pep-695-type-parameter-syntax" rel="nofollow noreferrer">PEP 695</a>, one can define a class <code>A</code>/<code>B</code> with type variable <code>T</code> like this:</p>
<pre><code>class A[T]:
    ...

class B[T]:
    ...
</code></pre>
<p>Beforehand, with Python 3.11, you would do it like this:</p>
<pre><code>from typing import TypeVar, Generic

T = TypeVar("T")

class A(Generic[T]):
    ...

class B(Generic[T]):
    ...
</code></pre>
<p>For the first example, the <code>T</code> is defined in the class scope, so they do not relate to each other.
Is there any difference to the second 'old' example? Or: Is there any connection between the two classes <code>A</code> and <code>B</code>?</p>
| <python><mypy><typing> | 2023-11-14 15:28:36 | 1 | 711 | JHK |
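A small demonstration of the 'old' spelling (an editorial addition): sharing one module-level `TypeVar` creates no link between `A` and `B`. Each generic class binds `T` independently when it is parameterized, just like the scoped 3.12 syntax; the module-level `T` is merely a reusable name.

```python
from typing import TypeVar, Generic

T = TypeVar("T")

class A(Generic[T]):
    def __init__(self, value: T) -> None:
        self.value = value

class B(Generic[T]):
    def __init__(self, value: T) -> None:
        self.value = value

a: A[int] = A(1)      # T is solved as int for A...
b: B[str] = B("x")    # ...and, independently, as str for B
```

At runtime the parameterized aliases are also distinct objects: `A[int]` and `B[int]` compare unequal because their origins differ.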
77,481,842 | 190,887 | kivy buildozer UrlRequest fails on android with Python3 | <p>I've read and tried out the solution from <a href="https://stackoverflow.com/questions/51300399/kivy-buildozer-android-https-request-fail">kivy buildozer android https request fail</a>, but possibly Python3 changes things?</p>
<p>I'm trying to run a <code>UrlRequest</code>; it works fine when I run it on my laptop with <code>python main.py</code>, but when I try to run it on my Android device, I keep seeing the log line</p>
<pre><code>11-14 10:12:14.202 6785 6939 I python : [INFO ] REQ ERROR -- <UrlRequestUrllib(Thread-1, started daemon 517489732944)> <<[Errno 7] No address associated with hostname>>
</code></pre>
<p>and the label widget doesn't change text as expected. If I change <code>UrlRequest</code> to <code>UrlRequestRequests</code>, and add the requisite word to the <code>buildozer.spec</code> file, I instead get the longer but not particularly more informative</p>
<pre><code>11-14 10:21:32.098 7626 7719 I python : [INFO ] [REQ ERROR -- <UrlRequestRequests(Thread-1, started daemon 517496282448)> <<HTTPSConnectionPool(host='www.google.com', port=443)] Max retries exceeded with url: / (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x787e579ba0>: Failed to resolve 'www.google.com' ([Errno 7] No address associated with hostname)"))>>
</code></pre>
<p>I've followed the <code>buildozer</code> installation instructions at <a href="https://buildozer.readthedocs.io/en/latest/installation.html" rel="nofollow noreferrer">https://buildozer.readthedocs.io/en/latest/installation.html</a> (including installing <code>libssl-dev</code> first, and doing <code>buildozer android clean</code> just for good measure).</p>
<p>The minimum reproducible error is:</p>
<pre><code># main.py
import kivy
kivy.require('2.2.1') # replace with your current kivy version !

import markdownify
from kivy.app import App
from kivy.logger import Logger
from kivy.network.urlrequest import UrlRequest
from kivy.uix.label import Label

class HelloApp(App):
    def build(self):
        lbl = Label(text="Placeholder Text")

        def _setlbl(new_text):
            Logger.info("REQ callback started. Setting label text...")
            lbl.text = new_text
            Logger.info("REQ text set")
            return new_text

        url = "https://www.google.com/"
        Logger.info("REQ Running UrlRequest...")
        UrlRequest(
            url,
            on_success=lambda req, res: _setlbl("Got a response!"),
            on_failure=lambda req, res: Logger.info(f"REQ FAIL -- {req} {res}"),
            on_error=lambda req, err: Logger.info(f"REQ ERROR -- {req} <<{err}>>")
        )
        Logger.info("REQ request started...")
        return lbl

if __name__ == '__main__':
    HelloApp().run()
</code></pre>
<pre><code># buildozer.spec (with a giant pile of comments removed)
[app]
# (str) Title of your application
title = My Application
# (str) Package name
package.name = myapp
# (str) Package domain (needed for android/ios packaging)
package.domain = org.test
# (str) Source code where the main.py live
source.dir = .
# (list) Source files to include (let empty to include all the files)
source.include_exts = py,png,jpg,kv,atlas,ttf
# (list) List of inclusions using pattern matching
#source.include_patterns = assets/*,images/*.png
source.include_patterns = fonts
# (str) Application versioning (method 1)
version = 0.26
# (list) Application requirements
# comma separated e.g. requirements = sqlite3,kivy
requirements = openssl,hostpython3,python3,kivy
# (list) Supported orientations
# Valid options are: landscape, portrait, portrait-reverse or landscape-reverse
orientation = landscape, portrait
# change the major version of python used by the app
osx.python_version = 3
# Kivy version to use
osx.kivy_version = 1.9.1
#
# Android specific
#
# (bool) Indicate if the application should be fullscreen or not
fullscreen = 0
# (list) Permissions
# (See https://python-for-android.readthedocs.io/en/latest/buildoptions/#build-options-1 for all the supported syntaxes and properties)
android.permissions = android.permission.INTERNET #, (name=android.permission.WRITE_EXTERNAL_STORAGE;maxSdkVersion=18)
# (list) The Android archs to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64
# In past, was `android.arch` as we weren't supporting builds for multiple archs at the same time.
android.archs = arm64-v8a, armeabi-v7a
# (int) overrides automatic versionCode computation (used in build.gradle)
# this is not the same as app version and should only be edited if you know what you're doing
# android.numeric_version = 1
# (bool) enables Android auto backup feature (Android API >=23)
android.allow_backup = True
#
# iOS specific
#
# (str) Path to a custom kivy-ios folder
#ios.kivy_ios_dir = ../kivy-ios
# Alternately, specify the URL and branch of a git checkout:
ios.kivy_ios_url = https://github.com/kivy/kivy-ios
ios.kivy_ios_branch = master
# Another platform dependency: ios-deploy
# Uncomment to use a custom checkout
#ios.ios_deploy_dir = ../ios_deploy
# Or specify URL and branch
ios.ios_deploy_url = https://github.com/phonegap/ios-deploy
ios.ios_deploy_branch = 1.10.0
# (bool) Whether or not to sign the code
ios.codesign.allowed = false
[buildozer]
# (int) Log level (0 = error only, 1 = info, 2 = debug (with command output))
log_level = 2
# (int) Display warning if buildozer is run as root (0 = False, 1 = True)
warn_on_root = 1
</code></pre>
<p>What am I doing wrong here?</p>
<p>EDIT:</p>
<p>Adding a runtime permission request doesn't on its own resolve things. If I add</p>
<pre><code>from kivy.utils import platform

Logger.info(f"PLATFORM {kivy.utils.platform}")
if platform == 'android':
    Logger.info("  ON ANDROID - REQUESTING PERMISSIONS")
    from android.permissions import Permission, request_permissions
    Logger.info("    imported permission things...")
    perms = [Permission.INTERNET]
    Logger.info(f"    asking for `{perms}`...")
    res = request_permissions(perms, None)
    Logger.info(f"    permissions requested {res} ...")
</code></pre>
<p>to <code>main.py</code> between the existing imports and <code>HelloApp</code> declaration, I can see each individual log line hit terminal, but it still results in a request error.</p>
<p>The complete log dump is:</p>
<pre><code>11-14 14:32:56.597 20951 21073 I python : Initializing Python for Android
11-14 14:32:56.597 20951 21073 I python : Setting additional env vars from p4a_env_vars.txt
11-14 14:32:56.597 20951 21073 I python : Changing directory to the one provided by ANDROID_ARGUMENT
11-14 14:32:56.597 20951 21073 I python : /data/user/0/org.test.myapp/files/app
11-14 14:32:56.597 20951 21073 I python : Preparing to initialize python
11-14 14:32:56.597 20951 21073 I python : _python_bundle dir exists
11-14 14:32:56.597 20951 21073 I python : calculated paths to be...
11-14 14:32:56.597 20951 21073 I python : /data/user/0/org.test.myapp/files/app/_python_bundle/stdlib.zip:/data/user/0/org.test.myapp/files/app/_python_bundle/modules
11-14 14:32:56.597 20951 21073 I python : set wchar paths...
11-14 14:32:56.723 20951 21073 I python : Initialized python
11-14 14:32:56.723 20951 21073 I python : AND: Init threads
11-14 14:32:56.724 20951 21073 I python : testing python print redirection
11-14 14:32:56.727 20951 21073 I python : Android path ['.', '/data/user/0/org.test.myapp/files/app/_python_bundle/stdlib.zip', '/data/user/0/org.test.myapp/files/app/_python_bundle/modules', '/data/user/0/org.test.myapp/files/app/_python_bundle/site-packages']
11-14 14:32:56.727 20951 21073 I python : os.environ is environ({'PATH': '/sbin:/system/sbin:/product/bin:/apex/com.android.runtime/bin:/system/bin:/system/xbin:/odm/bin:/vendor/bin:/vendor/xbin', 'ANDROID_BOOTLOGO': '1', 'ANDROID_ROOT': '/system', 'ANDROID_ASSETS': '/system/app', 'ANDROID_DATA': '/data', 'ANDROID_STORAGE': '/storage', 'ANDROID_RUNTIME_ROOT': '/apex/com.android.runtime', 'ANDROID_TZDATA_ROOT': '/apex/com.android.tzdata', 'EXTERNAL_STORAGE': '/sdcard', 'ASEC_MOUNTPOINT': '/mnt/asec', 'BOOTCLASSPATH': '/apex/com.android.runtime/javalib/core-oj.jar:/apex/com.android.runtime/javalib/core-libart.jar:/apex/com.android.runtime/javalib/okhttp.jar:/apex/com.android.runtime/javalib/bouncycastle.jar:/apex/com.android.runtime/javalib/apache-xml.jar:/system/framework/QPerformance.jar:/system/framework/UxPerformance.jar:/system/framework/framework.jar:/system/framework/ext.jar:/system/framework/telephony-common.jar:/system/framework/voip-common.jar:/system/framework/ims-common.jar:/system/framework/android.test.base.jar:/system/framework/tcmiface.jar:/system/framework/telephony-ext.jar:/system/framework/WfdCommon.jar:/system/framework/qcom.fmradio.jar:/system/framework/com.nxp.nfc.nq.jar:/apex/com.android.conscrypt/javalib/conscrypt.jar:/apex/com.android.media/javalib/updatable-media.jar', 'DEX2OATBOOTCLASSPATH': 
'/apex/com.android.runtime/javalib/core-oj.jar:/apex/com.android.runtime/javalib/core-libart.jar:/apex/com.android.runtime/javalib/okhttp.jar:/apex/com.android.runtime/javalib/bouncycastle.jar:/apex/com.android.runtime/javalib/apache-xml.jar:/system/framework/QPerformance.jar:/system/framework/UxPerformance.jar:/system/framework/framework.jar:/system/framework/ext.jar:/system/framework/telephony-common.jar:/system/framework/voip-common.jar:/system/framework/ims-common.jar:/system/framework/android.test.base.jar:/system/framework/tcmiface.jar:/system/framework/telephony-ext.jar:/system/framework/WfdCommon.jar:/system/framework/qcom.fmradio.jar:/system/framework/com.nxp.nfc.nq.jar', 'SYSTEMSERVERCLASSPATH': '/system/framework/services.jar:/system/framework/ethernet-service.jar:/system/framework/wifi-service.jar:/system/framework/com.android.location.provider.jar', 'DOWNLOAD_CACHE': '/data/cache', 'ANDROID_SOCKET_zygote': '18', 'ANDROID_SOCKET_usap_pool_primary': '19', 'ANDROID_ENTRYPOINT': 'main.pyc', 'ANDROID_ARGUMENT': '/data/user/0/org.test.myapp/files/app', 'ANDROID_APP_PATH': '/data/user/0/org.test.myapp/files/app', 'ANDROID_PRIVATE': '/data/user/0/org.test.myapp/files', 'ANDROID_UNPACK': '/data/user/0/org.test.myapp/files/app', 'PYTHONHOME': '/data/user/0/org.test.myapp/files/app', 'PYTHONPATH': '/data/user/0/org.test.myapp/files/app:/data/user/0/org.test.myapp/files/app/lib', 'PYTHONOPTIMIZE': '2', 'P4A_BOOTSTRAP': 'SDL2', 'PYTHON_NAME': 'python', 'P4A_IS_WINDOWED': 'True', 'KIVY_ORIENTATION': 'LandscapeLeft Portrait', 'P4A_NUMERIC_VERSION': 'None', 'P4A_MINSDK': '21', 'LC_CTYPE': 'C.UTF-8'})
11-14 14:32:56.727 20951 21073 I python : Android kivy bootstrap done. __name__ is __main__
11-14 14:32:56.727 20951 21073 I python : AND: Ran string
11-14 14:32:56.727 20951 21073 I python : Run user program, change dir and execute entrypoint
11-14 14:32:57.057 20951 21073 I python : [INFO ] [Logger ] Record log in /data/user/0/org.test.myapp/files/app/.kivy/logs/kivy_23-11-14_1.txt
11-14 14:32:57.058 20951 21073 I python : [INFO ] [Kivy ] v2.2.1
11-14 14:32:57.059 20951 21073 I python : [INFO ] [Kivy ] Installed at "/data/user/0/org.test.myapp/files/app/_python_bundle/site-packages/kivy/__init__.pyc"
11-14 14:32:57.059 20951 21073 I python : [INFO ] [Python ] v3.10.10 (main, Nov 14 2023, 02:57:53) [Clang 14.0.6 (https://android.googlesource.com/toolchain/llvm-project 4c603efb0
11-14 14:32:57.060 20951 21073 I python : [INFO ] [Python ] Interpreter at ""
11-14 14:32:57.061 20951 21073 I python : [INFO ] [Logger ] Purge log fired. Processing...
11-14 14:32:57.062 20951 21073 I python : [INFO ] [Logger ] Purge finished!
11-14 14:32:59.358 20951 21073 I python : [INFO ] [Factory ] 190 symbols loaded
11-14 14:32:59.878 20951 21073 I python : [INFO ] [Image ] Providers: img_tex, img_dds, img_sdl2 (img_pil, img_ffpyplayer ignored)
11-14 14:33:00.246 20951 21073 I python : [INFO ] [Text ] Provider: sdl2
11-14 14:33:00.251 20951 21073 I python : [INFO ] PLATFORM android
11-14 14:33:00.252 20951 21073 I python : [INFO ] ON ANDROID - REQUESTING PERMISSIONS
11-14 14:33:00.256 20951 21073 I python : [INFO ] imported permission things...
11-14 14:33:00.256 20951 21073 I python : [INFO ] asking for `['android.permission.INTERNET']`...
11-14 14:33:00.375 20951 21073 I python : [INFO ] permissions requested None ...
11-14 14:33:00.455 20951 21073 I python : [INFO ] [Window ] Provider: sdl2
11-14 14:33:00.495 20951 21073 I python : [INFO ] [GL ] Using the "OpenGL ES 2" graphics system
11-14 14:33:00.504 20951 21073 I python : [INFO ] [GL ] Backend used <sdl2>
11-14 14:33:00.505 20951 21073 I python : [INFO ] [GL ] OpenGL version <b'OpenGL ES 3.2 V@415.0 (GIT@389f9e1, I4b4012dc33, 1609770135) (Date:01/04/21)'>
11-14 14:33:00.506 20951 21073 I python : [INFO ] [GL ] OpenGL vendor <b'Qualcomm'>
11-14 14:33:00.506 20951 21073 I python : [INFO ] [GL ] OpenGL renderer <b'Adreno (TM) 508'>
11-14 14:33:00.507 20951 21073 I python : [INFO ] [GL ] OpenGL parsed version: 3, 2
11-14 14:33:00.507 20951 21073 I python : [INFO ] [GL ] Texture max size <16384>
11-14 14:33:00.507 20951 21073 I python : [INFO ] [GL ] Texture max units <16>
11-14 14:33:00.632 20951 21073 I python : [INFO ] [Window ] auto add sdl2 input provider
11-14 14:33:00.634 20951 21073 I python : [INFO ] [Window ] virtual keyboard not allowed, single mode, not docked
11-14 14:33:00.825 20951 21073 I python : [INFO ] REQ Running UrlRequest...
11-14 14:33:00.829 20951 21073 I python : [INFO ] REQ request started...
11-14 14:33:00.831 20951 21073 I python : [WARNING] [Base ] Unknown <android> provider
11-14 14:33:00.832 20951 21073 I python : [INFO ] [Base ] Start application main loop
11-14 14:33:00.842 20951 21073 I python : [INFO ] [GL ] NPOT texture support is available
11-14 14:33:00.914 20951 21073 I python : [INFO ] REQ ERROR -- <UrlRequestUrllib(Thread-1, started daemon 517499501904)> <<[Errno 7] No address associated with hostname>>
</code></pre>
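<p>Worth noting (a hedged observation, not a diagnosis from the log alone): the final failure, <code>[Errno 7] No address associated with hostname</code>, is a DNS-resolution error rather than a Kivy one. On Android this often means the <code>INTERNET</code> permission was never baked into the APK — it must be declared at build time (e.g. <code>android.permissions = INTERNET</code> in <code>buildozer.spec</code>, if it isn't there already); requesting it at runtime is not enough. A quick way to confirm whether name resolution works at all, independent of <code>UrlRequest</code>, is a small socket probe — <code>example.com</code> is just a placeholder host:</p>

```python
import socket

def can_resolve(host: str, port: int = 443) -> bool:
    """Return True if DNS resolution succeeds for the given host."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror as exc:
        print(f"resolution failed for {host}: {exc}")
        return False

print(can_resolve("example.com"))
```

<p>If this probe fails on the device while the browser resolves the same host fine, the missing install-time permission is the likely culprit; if it succeeds, the problem is in the request layer instead.</p>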
| <python><android><python-3.x><kivy> | 2023-11-14 15:23:47 | 2 | 14,105 | Inaimathi |
77,481,795 | 5,753,145 | Why am I getting an S3 AccessDenied AmazonS3Exception in this AWS Glue Job | <p>I am running an AWS Glue Job, and the job is doing what it should: it takes records from the Kinesis stream and puts them into the data lake. But it ends with a failure, and the error is given below:</p>
<pre><code> StreamingQueryException: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied;
</code></pre>
<p>Logs from cloudwatch:</p>
<pre><code>23/11/06 16:05:59 ERROR GlueExceptionAnalysisListener: [Glue Exception Analysis] {
"Event": "GlueETLJobExceptionEvent",
"Timestamp": 1699286759755,
"Failure Reason": "Traceback (most recent call last):\n File \"/tmp/azure-activity-to-ocsf-pyspark2.py\", line 300, in <module>\n \"checkpointLocation\": args[\"TempDir\"] + \"/\" + args[\"JOB_NAME\"] + \"/checkpoint/\",\n File \"/opt/amazon/lib/python3.6/site-packages/awsglue/context.py\", line 678, in forEachBatch\n raise e\n File \"/opt/amazon/lib/python3.6/site-packages/awsglue/context.py\", line 668, in forEachBatch\n query.start().awaitTermination()\n File \"/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/streaming.py\", line 101, in awaitTermination\n return self._jsq.awaitTermination()\n File \"/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py\", line 1305, in __call__\n answer, self.gateway_client, self.target_id, self.name)\n File \"/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py\", line 117, in deco\n raise converted from None\npyspark.sql.utils.StreamingQueryException: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: KKMW0JW7VXKSRCCJ; S3 Extended Request ID: K+qiigS/kCTXbG9A7Tc/I7QtLBpsVSfUxzCApZZAtOwkLjYZgoZPFIiHi7+DlHvYwed9syWsx78=; Proxy: null), S3 Extended Request ID: K+qiigS/kCTXbG9A7Tc/I7QtLBpsVSfUxzCApZZAtOwkLjYZgoZPFIiHi7+DlHvYwed9syWsx78=\n=== Streaming Query ===\nIdentifier: [id = b3618842-c4c3-4b21-be2e-679f15677208, runId = 68cf2099-0c14-4fbd-a976-51c9e91d6506]\nCurrent Committed Offsets: {KinesisSource[securityLakeAzureActivityStream]: {\"shardId-000000000002\":{\"iteratorType\":\"TRIM_HORIZON\",\"iteratorPosition\":\"\"},\"metadata\":{\"streamName\":\"securityLakeAzureActivityStream\",\"batchId\":\"2\"},\"shardId-000000000001\":{\"iteratorType\":\"TRIM_HORIZON\",\"iteratorPosition\":\"\"},\"shardId-000000000000\":{\"iteratorType\":\"AFTER_SEQUENCE_NUMBER\",\"iteratorPosition\":\"49645809341923458802793491468334882006293674050208137218\"}}}\nCurrent Available Offsets: 
{KinesisSource[securityLakeAzureActivityStream]: {\"shardId-000000000002\":{\"iteratorType\":\"TRIM_HORIZON\",\"iteratorPosition\":\"\"},\"metadata\":{\"streamName\":\"securityLakeAzureActivityStream\",\"batchId\":\"3\"},\"shardId-000000000001\":{\"iteratorType\":\"TRIM_HORIZON\",\"iteratorPosition\":\"\"},\"shardId-000000000000\":{\"iteratorType\":\"AFTER_SEQUENCE_NUMBER\",\"iteratorPosition\":\"49645809341923458802793546272532794821692601702159482882\"}}}\n\nCurrent State: ACTIVE\nThread State: RUNNABLE\n\nLogical Plan:\nProject [cast(data#18 as string) AS $json$data_infer_schema$_temporary$#27]\n+- Project [UDF(data#5) AS data#18, streamName#6, partitionKey#7, sequenceNumber#8, approximateArrivalTimestamp#9]\n +- StreamingExecutionRelation KinesisSource[securityLakeAzureActivityStream], [data#5, streamName#6, partitionKey#7, sequenceNumber#8, approximateArrivalTimestamp#9]\n",
"Stack Trace": [
{
"Declaring Class": "deco",
"Method Name": "raise converted from None",
"File Name": "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py",
"Line Number": 117
},
{
"Declaring Class": "__call__",
"Method Name": "answer, self.gateway_client, self.target_id, self.name)",
"File Name": "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py",
"Line Number": 1305
},
{
"Declaring Class": "awaitTermination",
"Method Name": "return self._jsq.awaitTermination()",
"File Name": "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/streaming.py",
"Line Number": 101
},
{
"Declaring Class": "forEachBatch",
"Method Name": "query.start().awaitTermination()",
"File Name": "/opt/amazon/lib/python3.6/site-packages/awsglue/context.py",
"Line Number": 668
},
{
"Declaring Class": "forEachBatch",
"Method Name": "raise e",
"File Name": "/opt/amazon/lib/python3.6/site-packages/awsglue/context.py",
"Line Number": 678
},
{
"Declaring Class": "<module>",
"Method Name": "\"checkpointLocation\": args[\"TempDir\"] + \"/\" + args[\"JOB_NAME\"] + \"/checkpoint/\",",
"File Name": "/tmp/azure-activity-to-ocsf-pyspark2.py",
"Line Number": 300
}
],
"Last Executed Line number": 300,
"script": "azure-activity-to-ocsf-pyspark2.py"
}
</code></pre>
<p>Line 300 is this:</p>
<pre><code>glueContext.forEachBatch(
frame=dataframe_KinesisStream_node1,
batch_function=processBatch,
options={
"windowSize": "100 seconds",
        "checkpointLocation": args["TempDir"] + "/" + args["JOB_NAME"] + "/checkpoint/",
    },
)
</code></pre>
<p>The checkpoint folder is created in S3; I can see that. The temporary folder given in the job settings is being accessed just fine; I can see the files created by the job there.</p>
<p>Here is the code in the Glue Job: <a href="https://github.com/aws-samples/amazon-security-lake-custom-data/blob/main/azure-activity-to-ocsf-pyspark.py" rel="nofollow noreferrer">GitHub link</a></p>
<p>Access Role policy:</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::aws-security-data-lake-*/*",
"arn:aws:s3:::securitylake-glue-assets-*/*"
],
"Effect": "Allow"
}
]
}
</code></pre>
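<p>One thing worth checking (hedged — this is a common cause of 403s on streaming checkpoints, not a certain diagnosis from the log): maintaining a Spark streaming checkpoint issues more than <code>GetObject</code>/<code>PutObject</code>; it also lists the bucket (<code>s3:ListBucket</code>, which applies to the bucket ARN itself, not the <code>/*</code> object ARN) and deletes temporary files (<code>s3:DeleteObject</code>). A sketch of the extra statement that would cover listing — bucket names are the same placeholders used above:</p>

```json
{
    "Action": ["s3:ListBucket"],
    "Resource": [
        "arn:aws:s3:::aws-security-data-lake-*",
        "arn:aws:s3:::securitylake-glue-assets-*"
    ],
    "Effect": "Allow"
}
```

<p>Also note that a healthy Spark <em>streaming</em> query runs indefinitely by design — so "runs forever" after adding <code>DeleteObject</code> may actually mean the job started working as intended.</p>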
<p>Why might this error be happening? Which bucket is the error for?</p>
<p>Another observation: if I add the DeleteObject permission to the above, the Glue Job runs forever.</p>
<p>There is no problem with ACLs, SCPs, etc.</p>
<p>Bucket (Glue script, temporary bucket) Policy:</p>
<pre><code>{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "Stmt1683139153218",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::securitylake-glue-assets-123456789-us-east-1/*",
"Condition": {
"Bool": {
"aws:SecureTransport": "true"
},
"ArnEquals": {
"aws:PrincipalArn": "arn:aws:iam::123456789:role/securityLakeGlueStreamingRole"
}
}
}
]
}
</code></pre>
<p>Security Lake Bucket policy (Where Glue Job writes to. I can see new files here)</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::aws-security-data-lake-us-east-1-ogt1oa9bot0jmeduqbjxzmxzvu9eij/*",
"arn:aws:s3:::aws-security-data-lake-us-east-1-ogt1oa9bot0jmeduqbjxzmxzvu9eij"
],
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
},
{
"Sid": "PutSecurityLakeObject",
"Effect": "Allow",
"Principal": {
"Service": "securitylake.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": [
"arn:aws:s3:::aws-security-data-lake-us-east-1-ogt1oa9bot0jmeduqbjxzmxzvu9eij/*",
"arn:aws:s3:::aws-security-data-lake-us-east-1-ogt1oa9bot0jmeduqbjxzmxzvu9eij"
],
"Condition": {
"StringEquals": {
"aws:SourceAccount": "644107485976",
"s3:x-amz-acl": "bucket-owner-full-control"
},
"ArnLike": {
"aws:SourceArn": "arn:aws:securitylake:us-east-1:644107485976:*"
}
}
}
]
}
</code></pre>
| <python><amazon-web-services><amazon-s3><aws-glue> | 2023-11-14 15:17:27 | 1 | 475 | manu muraleedharan |
77,481,743 | 3,973,175 | Adding a figure legend outside of the box | <p>I am attempting to make a scatterplot with a figure legend outside of the box, as suggested by <a href="https://stackoverflow.com/questions/4700614/how-to-put-the-legend-outside-the-plot">How to put the legend outside the plot</a>, while also plotting multiple sets of data (<a href="https://stackoverflow.com/questions/46106912/one-colorbar-for-multiple-scatter-plots">One colorbar for multiple scatter plots</a>). The following minimal working example has most sets removed; there can be 10 sets in the real script:</p>
<pre><code>import matplotlib.pyplot as plt
norm = plt.Normalize(55,1954)
plt.scatter([75,75,63,145,47],[0.979687,1.07782,2.83995,4.35468,4.44244], c = [75,75,70,70,70], norm = norm, label = 's1', cmap = 'gist_rainbow', marker = 'o')
plt.scatter([173,65],[0.263218,3.12112], c = [77,68], norm = norm, label = 's2', cmap = 'gist_rainbow', marker = 'v')
plt.colorbar().set_label('score')
plt.legend(loc = "outside center left")
plt.savefig('file.svg', bbox_inches='tight', pad_inches = 0.1)
</code></pre>
<p>but I get an error:</p>
<pre><code>/home/con/.local/lib/python3.10/site-packages/matplotlib/projections/__init__.py:63: UserWarning: Unable to import Axes3D. This may be due to multiple versions of Matplotlib being installed (e.g. as a system package and as a pip package). As a result, the 3D projection is not available.
warnings.warn("Unable to import Axes3D. This may be due to multiple versions of "
Traceback (most recent call last):
File "/tmp/2Le599Dl8S.py", line 6, in <module>
plt.legend(loc = "outside center left")
File "/home/con/.local/lib/python3.10/site-packages/matplotlib/pyplot.py", line 3372, in legend
return gca().legend(*args, **kwargs)
File "/home/con/.local/lib/python3.10/site-packages/matplotlib/axes/_axes.py", line 323, in legend
self.legend_ = mlegend.Legend(self, handles, labels, **kwargs)
File "/home/con/.local/lib/python3.10/site-packages/matplotlib/legend.py", line 566, in __init__
self.set_loc(loc)
File "/home/con/.local/lib/python3.10/site-packages/matplotlib/legend.py", line 703, in set_loc
raise ValueError(
ValueError: 'outside' option for loc='outside center left' keyword argument only works for figure legends
</code></pre>
<p>I have also tried:</p>
<pre><code>import matplotlib.pyplot as plt
norm = plt.Normalize(55,1954)
fig = plt.scatter([75,75,63,145,47],[0.979687,1.07782,2.83995,4.35468,4.44244], c = [75,75,70,70,70], norm = norm, label = 's1', cmap = 'gist_rainbow', marker = 'o')
plt.scatter([173,65],[0.263218,3.12112], c = [77,68], norm = norm, label = 's2', cmap = 'gist_rainbow', marker = 'v')
plt.colorbar().set_label('score')
fig.legend(loc = "outside center left")
plt.savefig('file.svg', bbox_inches='tight', pad_inches = 0.1)
</code></pre>
<p>but this produces another problem:</p>
<pre><code>/home/con/.local/lib/python3.10/site-packages/matplotlib/projections/__init__.py:63: UserWarning: Unable to import Axes3D. This may be due to multiple versions of Matplotlib being installed (e.g. as a system package and as a pip package). As a result, the 3D projection is not available.
warnings.warn("Unable to import Axes3D. This may be due to multiple versions of "
Traceback (most recent call last):
File "/tmp/2Le599Dl8S.py", line 6, in <module>
fig.legend(loc = "outside center left")
AttributeError: 'PathCollection' object has no attribute 'legend'
</code></pre>
<p>I also looked at <a href="https://stackoverflow.com/questions/38666527/what-is-the-necessity-of-plt-figure-in-matplotlib">What is the necessity of plt.figure() in matplotlib?</a>, but there is nothing in there about legends, so that post did not solve my problem. It didn't show up on Google search either.</p>
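<p>For reference, a hedged sketch of one way around the error: the <code>"outside ..."</code> locations only exist for <em>figure</em> legends, i.e. <code>legend</code> must be called on the <code>Figure</code> object (with a constrained layout, matplotlib ≥ 3.7), not on an axes via <code>plt.legend</code>. On older versions the portable route is an axes legend pushed out with <code>bbox_to_anchor</code>; the anchor value below is a guess that clears the colorbar and may need tuning:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

norm = plt.Normalize(55, 1954)
fig, ax = plt.subplots()
sc = ax.scatter([75, 75, 63, 145, 47], [0.979687, 1.07782, 2.83995, 4.35468, 4.44244],
                c=[75, 75, 70, 70, 70], norm=norm, label="s1",
                cmap="gist_rainbow", marker="o")
ax.scatter([173, 65], [0.263218, 3.12112], c=[77, 68], norm=norm,
           label="s2", cmap="gist_rainbow", marker="v")
fig.colorbar(sc, ax=ax).set_label("score")
# An axes legend anchored outside the axes; bbox_inches="tight" keeps it in the file.
ax.legend(loc="center left", bbox_to_anchor=(1.35, 0.5))
fig.savefig("file.svg", bbox_inches="tight", pad_inches=0.1)
```

<p>With matplotlib ≥ 3.7 the equivalent would be <code>fig, ax = plt.subplots(layout="constrained")</code> followed by <code>fig.legend(loc="outside center left")</code>.</p>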
| <python><python-3.x><matplotlib> | 2023-11-14 15:11:42 | 1 | 6,227 | con |
77,481,634 | 13,023,224 | Pandas groupby count unique non cumulative | <p>I have this toy data set</p>
<pre><code>df=pd.DataFrame({'user':['John','Steve','Steve','Steve','Jane','Jane','Jane','Jane','Alice','Alice','Alice'],
'days':[1,1,2,3,1,2,3,4,1,2,3]})
</code></pre>
<p>yielding</p>
<pre><code>user days
John 1
Steve 1
Steve 2
Steve 3
Jane 1
Jane 2
Jane 3
Jane 4
Alice 1
Alice 2
Alice 3
</code></pre>
<p>I wish to count, for each number of days, how many users have exactly that many days (only 1 day, only 2 days, only 3 days, and so on).</p>
<p>Desired output</p>
<pre><code>days days_count
1 1
3 2
4 1
</code></pre>
<p>I have tried code from this <a href="https://stackoverflow.com/questions/18554920/pandas-aggregate-count-distinct">answer</a> and from this <a href="https://stackoverflow.com/questions/35759120/finding-the-cumulative-number-of-unique-values">answer</a>, but none yielded the above (or a similar) result.</p>
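<p>A hedged sketch of one way to get there (the column names are my own choice): count the distinct <code>days</code> per user with <code>nunique</code> (or <code>size</code>, if the rows are already unique), then run <code>value_counts</code> over those per-user totals — the result is indexed by <em>number of days</em>, not by user:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "user": ["John", "Steve", "Steve", "Steve", "Jane", "Jane", "Jane", "Jane",
             "Alice", "Alice", "Alice"],
    "days": [1, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3],
})

# Distinct days per user, then how many users share each per-user total.
out = (
    df.groupby("user")["days"].nunique()   # John 1, Steve 3, Jane 4, Alice 3
      .value_counts()                      # 1 -> 1 user, 3 -> 2 users, 4 -> 1 user
      .rename_axis("days")
      .reset_index(name="days_count")
      .sort_values("days")
      .reset_index(drop=True)
)
print(out)
```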
| <python><pandas><count><cumulative-sum> | 2023-11-14 14:58:50 | 2 | 571 | josepmaria |
77,481,523 | 7,497,912 | What is the most effective way to flatten a nested json response structure with PySpark? | <p>Given a json array from an API response like the below:</p>
<pre><code>[
{
"id": 1,
"collection_name": "gym_equipment",
"total_price": 5400,
"lineitems": [
{
"item_no": 1,
"item_name": "dumbell",
"quantity": 5,
"price": 200,
"splitaccountings": [
{
"internalorder": "yes",
"percentage": 50,
"test": "no"
},
{
"internalorder": "no",
"percentage": 50,
"test": "yes"
}
]
},
{
"item_no": 2,
"item_name": "kettlebell",
"quantity": 5,
"price": 300,
"splitaccountings": [
{
"internalorder": "yes",
"percentage": 50,
"test": "no"
},
{
"internalorder": "no",
"percentage": 50,
"test": "yes"
}
]
},
{
"item_no": 2,
"item_name": "weight-set",
"quantity": 15,
"price": 420,
"splitaccountings": [
{
"internalorder": "yes",
"percentage": 50,
"test": "no"
},
{
"internalorder": "no",
"percentage": 50,
"test": "yes"
}
]
}
]
},
{
"id": 2,
"collection_name": "holiday_equipment",
"total_price": 5400,
"lineitems": [
{
"item_no": 1,
"item_name": "suncream",
"quantity": 5,
"price": 200,
"splitaccountings": [
{
"internalorder": "yes",
"percentage": 50,
"test": "no"
},
{
"internalorder": "no",
"percentage": 50,
"test": "yes"
}
]
},
{
"item_no": 2,
"item_name": "beer",
"quantity": 15,
"price": 420,
"splitaccountings": [
{
"internalorder": "yes",
"percentage": 100,
"test": "no"
}
]
}
]
},
{
"id": 3,
"collection_name": "hiking_equipment",
"total_price": 5400,
"lineitems": [
{
"item_no": 1,
"item_name": "100",
"quantity": 5,
"price": 200,
"splitaccountings": [
{
"internalorder": "yes",
"percentage": 50,
"test": "no"
},
{
"internalorder": "no",
"percentage": 50,
"test": "yes"
}
]
}
]
}
]
</code></pre>
<p>As you can see there are nested objects within the json array. I want to return a single dataframe which contains all the fields from the json response. This is the output I need:</p>
<pre><code>+--------+-----------------+-----------+------------------+--------------------+-------------------+----------------+-------------------------------+----------------------------+----------------------+
|order_id|collection_name |total_price|**lineitem_item_no|**lineitem_item_name|**lineitem_quantity|**lineitem_price|**splitaccounting_internalorder|**splitaccounting_percentage|**splitaccounting_test|
+--------+-----------------+-----------+------------------+--------------------+-------------------+----------------+-------------------------------+----------------------------+----------------------+
|1 |gym_equipment |5400 |1 |dumbell |5 |200 |yes |50 |no |
|1 |gym_equipment |5400 |1 |dumbell |5 |200 |no |50 |yes |
|1 |gym_equipment |5400 |2 |kettlebell |5 |300 |yes |50 |no |
|1 |gym_equipment |5400 |2 |kettlebell |5 |300 |no |50 |yes |
|1 |gym_equipment |5400 |2 |weight-set |15 |420 |yes |50 |no |
|1 |gym_equipment |5400 |2 |weight-set |15 |420 |no |50 |yes |
|2 |holiday_equipment|5400 |1 |suncream |5 |200 |yes |50 |no |
|2 |holiday_equipment|5400 |1 |suncream |5 |200 |no |50 |yes |
|2 |holiday_equipment|5400 |2 |beer |15 |420 |yes |100 |no |
|3 |hiking_equipment |5400 |1 |100 |5 |200 |yes |50 |no |
|3 |hiking_equipment |5400 |1 |100 |5 |200 |no |50 |yes |
+--------+-----------------+-----------+------------------+--------------------+-------------------+----------------+-------------------------------+----------------------------+----------------------+
</code></pre>
<p>This is my current way of achieveing this:</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, col

spark = SparkSession.builder.appName("nested_json_processing").getOrCreate()
df = spark.read.format("json").load("/content/test.json")
df.show(truncate=False)
# Explode 'lineitems'
lineitems_exploded = df.select(
col("id").alias("order_id"),
col("collection_name"),
col("total_price"),
explode(col("lineitems")).alias("lineitem")
)
lineitems_exploded.show()
# Further explode 'splitaccountings' from the lineitems
splitaccountings_exploded = lineitems_exploded.select(
col("order_id"),
col("collection_name"),
col("total_price"),
col("lineitem.item_no").alias("**lineitem_item_no"),
col("lineitem.item_name").alias("**lineitem_item_name"),
col("lineitem.quantity").alias("**lineitem_quantity"),
col("lineitem.price").alias("**lineitem_price"),
explode(col("lineitem.splitaccountings")).alias("splitaccounting")
)
splitaccountings_exploded.show()
# Flatten all fields
flattened_df = splitaccountings_exploded.select(
col("order_id"),
col("collection_name"),
col("total_price"),
col("**lineitem_item_no"),
col("**lineitem_item_name"),
col("**lineitem_quantity"),
col("**lineitem_price"),
col("splitaccounting.internalorder").alias("**splitaccounting_internalorder"),
col("splitaccounting.percentage").alias("**splitaccounting_percentage"),
col("splitaccounting.test").alias("**splitaccounting_test")
)
# Show the flattened DataFrame
flattened_df.show(truncate=False)
# Stop the Spark session if no longer needed
spark.stop()
</code></pre>
<p>I am pretty new to Spark, so I am curious if there are better ways to do this.</p>
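<p>One hedged aside: the double-<code>explode</code> pattern above is essentially the idiomatic PySpark approach. If a given response is small enough to fit in driver memory, though, <code>pandas.json_normalize</code> performs the same two-level flatten in a single call, which can be handy for ad-hoc checks (this bypasses Spark entirely, so it is only a sketch for small payloads):</p>

```python
import pandas as pd

# A trimmed-down version of the API response from the question.
data = [
    {"id": 1, "collection_name": "gym_equipment", "total_price": 5400,
     "lineitems": [
         {"item_no": 1, "item_name": "dumbell", "quantity": 5, "price": 200,
          "splitaccountings": [
              {"internalorder": "yes", "percentage": 50, "test": "no"},
              {"internalorder": "no", "percentage": 50, "test": "yes"},
          ]},
     ]},
]

# record_path walks both nested arrays; meta pulls parent fields down to each row.
flat = pd.json_normalize(
    data,
    record_path=["lineitems", "splitaccountings"],
    meta=["id", "collection_name", "total_price",
          ["lineitems", "item_no"], ["lineitems", "item_name"],
          ["lineitems", "quantity"], ["lineitems", "price"]],
    sep="_",
)
print(flat)
```

<p>Each row is one split-accounting entry, with the lineitem and order fields repeated alongside it — the same shape as the desired PySpark output.</p>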
| <python><apache-spark><pyspark><apache-spark-sql> | 2023-11-14 14:45:02 | 1 | 417 | Looz |
77,481,362 | 1,106,951 | How to filter Excel data before loading to Pandas | <p>I want to load only users whose status is <code>"Disabled"</code> (in the <code>Status</code> column) into a Pandas data frame. My initial code is like this:</p>
<pre><code>import pandas as pd
df = pd.read_excel('Users.XLSX', sheet_name='WebUsers', usecols="A,B")
print(df)
</code></pre>
<p>This brings in all users, whether they are "Disabled" or "Active". How can I apply a filter like <code>WHERE 'Status' == 'Disabled'</code> and load only disabled users into the frame?</p>
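<p>A hedged sketch: <code>read_excel</code> has no row-level <code>WHERE</code> clause, so the usual pattern is to include the <code>Status</code> column in <code>usecols</code>, load everything, and then keep only the wanted rows with a boolean mask. The filtering step is the same either way, so it is shown here on an in-memory frame (the column names are assumptions):</p>

```python
import pandas as pd

# Stand-in for pd.read_excel("Users.XLSX", sheet_name="WebUsers", usecols="A,B,C"),
# where column C is assumed to hold Status.
df = pd.DataFrame({
    "User": ["alice", "bob", "carol"],
    "Email": ["a@x", "b@x", "c@x"],
    "Status": ["Disabled", "Active", "Disabled"],
})

disabled = df[df["Status"] == "Disabled"]   # the WHERE 'Status' == 'Disabled'
disabled = disabled.drop(columns="Status")  # drop the column again if unneeded
print(disabled)
```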
| <python><pandas> | 2023-11-14 14:23:37 | 2 | 6,336 | Behseini |
77,481,307 | 2,411,173 | Cover the surface of a polygon with rectangles | <p>I am struggling with a Python challenge.
I have a polygon defined by a list of corners; all edges are axis-aligned, so every corner is a right angle.</p>
<p>I am trying to partition the surface of the polygon with the smallest possible number of rectangles. How can I do it?</p>
<p>Example:</p>
<pre><code>corners = [[91570, 49055],
[91570, 48870],
[91570, 48778],
[91690, 48778],
[91815, 48778],
[91815, 48892],
[91695, 48892],
[91695, 48930],
[92245, 48930],
[92245, 48892],
[92137, 48892],
[92137, 48778],
[92370, 48778],
[92370, 49055]]
</code></pre>
<p>polygon:</p>
<p><a href="https://i.sstatic.net/EoNve.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EoNve.png" alt="enter image description here" /></a></p>
<p>I think the result should have around 5 rectangles.</p>
<p>Thanks a lot</p>
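<p>For context (hedged — this is the standard result for hole-free rectilinear polygons, see e.g. Lipski et al.): the minimum number of rectangles equals <code>R − L + 1</code>, where <code>R</code> is the number of reflex (concave) corners and <code>L</code> is the maximum number of non-crossing axis-parallel chords joining two reflex corners. Counting the reflex corners is straightforward and, since this particular polygon admits no such chords, already pins down the answer — a sketch assuming the corners are listed in order around the boundary:</p>

```python
def count_reflex(corners):
    """Count reflex (concave) corners of an axis-aligned polygon whose
    vertices are listed in order around the boundary (collinear points
    are tolerated and not counted)."""
    n = len(corners)
    # Shoelace sum: positive means the vertices run counter-clockwise.
    area2 = sum(corners[i][0] * corners[(i + 1) % n][1]
                - corners[(i + 1) % n][0] * corners[i][1] for i in range(n))
    ccw = area2 > 0
    reflex = 0
    for i in range(n):
        (ax, ay), (bx, by), (cx, cy) = corners[i - 1], corners[i], corners[(i + 1) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0 and (cross < 0) == ccw:  # a turn against the orientation
            reflex += 1
    return reflex

corners = [[91570, 49055], [91570, 48870], [91570, 48778], [91690, 48778],
           [91815, 48778], [91815, 48892], [91695, 48892], [91695, 48930],
           [92245, 48930], [92245, 48892], [92137, 48892], [92137, 48778],
           [92370, 48778], [92370, 49055]]

r = count_reflex(corners)
print(r, "reflex corners ->", r + 1, "rectangles (no independent chords here)")
```

<p>Here <code>R = 4</code> and <code>L = 0</code>, giving 5 rectangles, which matches the estimate in the question. For general inputs the chord term matters: computing <code>L</code> exactly reduces to a maximum matching between horizontal and vertical chords.</p>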
| <python><algorithm><computational-geometry> | 2023-11-14 14:16:39 | 1 | 17,747 | Donbeo |
77,480,943 | 446,347 | Nginx/Gunicorn 502s under load | <p>I have an application serving WebSockets that proxies to Gunicorn using Nginx. I am seeing that when a bunch of clients (say 30) are already busy interacting with the server, new attempts to connect through the proxy fail with 502 errors.</p>
<p>I understand this could just be load, but what is strange is that these 502s always happen after 10 seconds (per the Nginx logs). I've upped every timeout in both Nginx and Gunicorn, but it's always ~10 seconds.</p>
<p>Need some guidance as to what controls that timeout and what I might change to get the 502s to go away.</p>
<p>Also, worth noting, if the 30 clients just connect and don't start interacting with the websocket, I can connect 100 more clients just fine.</p>
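<p>For reference (hedged — the directive names below are standard nginx, the values are placeholders to experiment with): a WebSocket location normally needs the HTTP/1.1 upgrade headers plus explicit timeouts, since nginx returns a 502 when the upstream never completes the handshake within <code>proxy_connect_timeout</code>. A cutoff that no nginx timeout explains often points at the upstream's own limits instead — e.g. worker saturation in Gunicorn, whose sync workers cannot hold many long-lived WebSocket connections at once (an async worker class such as <code>gevent</code> or <code>eventlet</code> is the usual choice):</p>

```nginx
location /ws/ {
    proxy_pass http://127.0.0.1:8000;     # placeholder upstream
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_connect_timeout 75s;            # time allowed to reach the upstream
    proxy_read_timeout 3600s;             # keep idle sockets open
    proxy_send_timeout 3600s;
}
```

<p>The observation that 100 idle clients connect fine while 30 busy ones block newcomers is consistent with worker saturation rather than a proxy timeout.</p>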
| <python><nginx><websocket><gunicorn> | 2023-11-14 13:25:22 | 0 | 634 | LiteWait |
77,480,789 | 15,263,719 | Can isort work with the Ruff formatter? (not the linter) | <p>I am migrating my project to Ruff. The migration was fairly straightforward, but for some reason, isort rules are not being applied when I run this command in VSCode: <code>Format code with > Ruff</code></p>
<p>I see this when my imports are unsorted:</p>
<p><a href="https://i.sstatic.net/qVg7q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qVg7q.png" alt="enter image description here" /></a></p>
<p>I can click on the <code>Fast fix</code> option (<code>Ctrl+.</code>; sorry, my VSCode is in Spanish) and it gets fixed, but it does not get fixed when I use VSCode's Format Code With > Ruff.</p>
<p>In case you want to know, the imports gets fixed when I run <code>ruff check . --fix</code>.</p>
<p>Here is my <code>ruff.toml</code></p>
<pre class="lang-ini prettyprint-override"><code>
[lint]
extend-select = [
"E501", # Add the `line-too-long` rule to the enforced rule set. # By default, Ruff omits rules that overlap with the use of a formatter
]
ignore = [
"E712", # (x == True) is allowed.
"D100", # Docstring allowed rules
"D105",
"D107",
"D203",
"D213",
]
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.
# Unlike Flake8, Ruff doesn't enable pycodestyle warnings (`W`) or McCabe complexity (`C901`) by default.
# Also add ISort (`I`) rules.
select = ["E4", "E7", "E9", "F", "I"]
# Allow fix for all enabled rules (when `--fix`) is provided. (only safes fixes will be applied)
fixable = ["ALL"]
unfixable = []
# Allow unused variables when underscore-prefixed.
dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"
[format]
# Like Black, use double quotes for strings.
quote-style = "double"
# Like Black, indent with spaces, rather than tabs.
indent-style = "space"
# Like Black, respect magic trailing commas.
skip-magic-trailing-comma = false
# Like Black, automatically detect the appropriate line ending.
line-ending = "auto"
</code></pre>
<p>I am probably doing something wrong, because these isort fixes used to work with the Black formatter. And I have this option selected in VSCode settings.</p>
<p><a href="https://i.sstatic.net/hlFXe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hlFXe.png" alt="enter image description here" /></a></p>
<p>Thanks in advance.</p>
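<p>One hedged note that may explain the behaviour: in Ruff, import sorting is a <em>lint</em> rule (the <code>I</code> codes), not part of the formatter — <code>ruff format</code> intentionally leaves import order alone, which is why <code>ruff check --fix</code> sorts imports while "Format Document" does not. In VSCode, the usual workaround is to run the organize-imports code action alongside formatting; a sketch of the relevant <code>settings.json</code> keys (extension id and action name as documented by the Ruff VSCode extension):</p>

```json
{
    "[python]": {
        "editor.defaultFormatter": "charliermarsh.ruff",
        "editor.formatOnSave": true,
        "editor.codeActionsOnSave": {
            "source.organizeImports": "explicit"
        }
    }
}
```

<p>On older VSCode versions the code-action value is a boolean (<code>true</code>) rather than <code>"explicit"</code>.</p>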
| <python><formatter><isort><ruff> | 2023-11-14 12:57:40 | 1 | 360 | Danilo Bassi |
77,480,740 | 2,523,635 | I am getting a ValueError when running any pip command on my Windows machine | <p><strong>Background</strong></p>
<p>I have installed Python 3.10.11 on my Windows 10 machine using the installer 'python-3.10.11-amd64.exe'. But now, whenever I try to run any <code>pip</code> command, it shows an error.</p>
<p>The error is as below:</p>
<pre><code>C:\WINDOWS\system32>pip -version
Traceback (most recent call last):
File "C:\Users\innabal1\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\innabal1\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\innabal1\AppData\Local\Programs\Python\Python310\Scripts\pip.exe\__main__.py", line 4, in <module>
File "C:\Users\innabal1\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\cli\main.py", line 9, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "C:\Users\innabal1\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\cli\autocompletion.py", line 10, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "C:\Users\innabal1\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\cli\main_parser.py", line 9, in <module>
from pip._internal.build_env import get_runnable_pip
File "C:\Users\innabal1\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_internal\build_env.py", line 15, in <module>
from pip._vendor.packaging.requirements import Requirement
File "C:\Users\innabal1\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\packaging\__init__.py", line 5, in <module>
from .__about__ import (
ValueError: source code string cannot contain null bytes
</code></pre>
<p>I am unable to rectify this error.</p>
<p>Python version command shows as below</p>
<pre><code>C:\WINDOWS\system32>python --version
Python 3.10.11
</code></pre>
<p><strong>What I tried</strong></p>
<p>I tried uninstalling and re-installing Python but it did not help.</p>
<p><strong>Note</strong></p>
<p>I earlier had Python 3.8.*. I have both VS Code and PyCharm installed on the same machine.</p>
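The <code>ValueError: source code string cannot contain null bytes</code> usually means a <code>.py</code> file on disk was corrupted (often by antivirus software or an interrupted install). A minimal sketch to locate such files under a site-packages tree (the helper name and the idea of scanning are illustrative, not part of pip):

```python
import pathlib

def find_corrupt_sources(root):
    """Return .py files containing NUL bytes, which is what triggers
    'source code string cannot contain null bytes' on import."""
    return [p for p in pathlib.Path(root).rglob("*.py")
            if b"\x00" in p.read_bytes()]
```

Deleting the affected package directory and reinstalling it (for pip itself, e.g. via `python -m ensurepip --upgrade`) is a common fix once the corrupt files are found.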
| <python><visual-studio-code><pip> | 2023-11-14 12:49:54 | 2 | 665 | KBNanda |
77,480,719 | 1,538,701 | Can I exit an "as_completed" pool executor loop (cancelling all remaining tasks)? | <p>The idea is to run a set of tasks, process them as they finish, and as soon as one returns the desired result, cancel the remaining tasks. I don't mind waiting for tasks already started to finish, but starting new tasks should be forbidden.</p>
<p>I've tried calling <code>pool.shutdown(cancel_futures=True)</code>, but it doesn't seem to work here:</p>
<pre class="lang-py prettyprint-override"><code>import time
import multiprocessing
import concurrent.futures
import random
filelist = ['001', '002', '003', '004', '005', '006', '007', '008', '009', '010']
parallel = 2
def worker(name, lock, index):
with lock:
index.value += 1
pc = index.value*10
print(f'starting: {name} ({pc})')
d = random.randrange(1, 5)
time.sleep(d)
return pc == 60
print('INIT')
with multiprocessing.Manager() as manager:
lock = manager.Lock()
index = manager.Value('b', 0)
if parallel > 0:
with concurrent.futures.ProcessPoolExecutor(max_workers=parallel) as pool:
tasks = [pool.submit(worker, filename, lock, index) for filename in filelist]
for task in concurrent.futures.as_completed(tasks):
rc = task.result()
if rc:
print('Found!')
pool.shutdown(cancel_futures=True)
break
else:
for filename in filelist:
rc = worker(filename, lock, index)
if rc:
print('Found!')
break
print('END')
</code></pre>
<p>When running in serial mode (<code>parallel = 0</code>), I get the desired result:</p>
<pre><code>INIT
starting: 001 (10)
starting: 002 (20)
starting: 003 (30)
starting: 004 (40)
starting: 005 (50)
starting: 006 (60)
Found!
END
</code></pre>
<p>But with parallel activated, it just continues till the end:</p>
<pre><code>INIT
starting: 001 (10)
starting: 002 (20)
starting: 003 (30)
starting: 004 (40)
starting: 005 (50)
starting: 006 (60)
Found!
starting: 007 (70)
starting: 008 (80)
starting: 009 (90)
END
</code></pre>
<p>So, is there any way to exit the <code>as_completed</code> loop?</p>
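For comparison, here is a minimal thread-based sketch of one workaround: explicitly cancel every remaining future before breaking out of the loop. Already-running futures cannot be cancelled, and with a process pool, work items already dispatched to a worker may still run, so this is a best-effort pattern, not a guarantee:

```python
import concurrent.futures
import time

def worker(n):
    time.sleep(0.05)
    return n == 3

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    tasks = [pool.submit(worker, n) for n in range(10)]
    found = False
    for task in concurrent.futures.as_completed(tasks):
        if task.result():
            found = True
            # Cancel everything still pending before leaving the loop;
            # cancel() is a no-op for futures that already started.
            for t in tasks:
                t.cancel()
            break
```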
| <python><python-multiprocessing><concurrent.futures> | 2023-11-14 12:46:29 | 1 | 2,714 | Jellby |
77,480,495 | 4,575,197 | How to clean a long text from repetitive (duplicate) Paragraphs? | <p>So i have 100000 rows in a dataframe, all containing a text column. which i want to clean before further analysis. i found this <a href="https://stackoverflow.com/a/62333168/4575197">answer</a> which gave me a lot of information. However still i have duplicated sentences even in clean list. It's important to note that the text's language is German.</p>
<p>here's the example Text and code.</p>
<pre><code>from nltk.tokenize import sent_tokenize
corpus = '''
Monsanto:Bayer will keinen Genpflanzenzwang für Europa Nach derFusion mit dem US-Unternehmen Monsantowill der ChemiekonzernBayernicht zwangsläufig genetisch verändertes Saatgut in Europa vertreiben. Das sagte Bayer-Chef Werner Baumann derSüddeutschen Zeitung."Wir wollenMonsantonicht übernehmen, um genveränderte Pflanzen in Europa zu etablieren", sagte Baumann. Wenn die Gesellschaft
gentechnisch verändertes Saatgut ablehne, akzeptiereBayerdies.
"Und wir werden nicht über Umwege versuchen, etwas anderes
durchzudrücken", sagte Baumann weiter. Monsantohabe zu Beginn des Jahrzehnts versucht, die
Einführung von genveränderten Pflanzen in Europa gegen große
Widerstände voranzutreiben und sei dabei zu wenig auf Bedenken
eingegangen, sagte der Bayer-Chef. "Dieser Schuss ist nach hinten losgegangen." Unter
seiner Führung solle damit Schluss sein, kündigte Werner Baumann. "Wir bei Bayer haben einen partnerschaftlichen Ansatz, mit
unseren Kunden und allen gesellschaftlichen Gruppen umzugehen."
Nach diesem Maßstab werde man auch das kombinierte
Saatgutgeschäft führen. Außerdem werde auch er persönlich den
Dialog mit Kritikern stärker suchen. Die Befürchtungen, Gen-Saatgut könne verstärkt nachEuropakommen, waren gewachsen, nachdem der Deal zwischen Bayer und Monsanto öffentlich wurde. Umwelt- und Naturschutzverbände kritisieren die Übernahme heftig. "Sollten die Kartellbehörden die
Fusion durchwinken, würde der neu entstehende Megakonzern eine
marktbeherrschende Stellung im Bereich Saatgut, Gentechnik und
Pestizide bekommen", sagte Heike Moldenhauer, Gentechnikexpertin beim BUND. Sie fürchtet, dass der Konzern künftig diktieren
wolle, was Landwirte anbauen und welche Produkte auf dem Markt
verfügbar sind. Zudem würde die Umwelt durch noch mehr
Monokulturen und Gentechpflanzen leiden. Der Geld-Newsletter Geld oder Leben? Warum nicht beides! Jeden Dienstag bringt unser Newsletter Finanzwelt und Familie, Börse und Beziehung in Ihrem Postfach zusammen. Mit Ihrer Registrierung nehmen Sie dieDatenschutzerklärungzur Kenntnis. Vielen Dank! Wir haben Ihnen eine E-Mail geschickt. Diese E-Mail-Adresse ist bereits registriert. Die Übernahme von Monsanto ist die teuerste, die ein
deutsches Unternehmen jemals gewagt hat. Rund 66 Milliarden
Dollar (58,8 Milliarden Euro) will Bayer für den Saatgutanbieter zahlen. Allerdings
müssen der Übernahme noch die Wettbewerbshüter in etwa 30
Ländern zustimmen. Durch den Kauf wird Bayer zum weltweit führenden Anbieter für Saatgut und
Pflanzenschutzmittel. Zwar ist der Kauf laut Marktexperten für Bayer
strategisch sinnvoll, weil sich die beiden Unternehmen ergänzen. Monsanto steht in Europa aber seit Jahren wegen
seiner gentechnisch veränderten Produkte in der Kritik. Nicht zuletzt,
weil der Konzern den UnkrautvernichterGlyphosatvertreibt, der im
Verdacht steht, krebserregend zu sein. Nach derFusion mit dem US-Unternehmen Monsantowill der ChemiekonzernBayernicht zwangsläufig genetisch verändertes Saatgut in Europa vertreiben. Das sagte Bayer-Chef Werner Baumann derSüddeutschen Zeitung."Wir wollenMonsantonicht übernehmen, um genveränderte Pflanzen in Europa zu etablieren", sagte Baumann. Wenn die Gesellschaft
gentechnisch verändertes Saatgut ablehne, akzeptiereBayerdies.
"Und wir werden nicht über Umwege versuchen, etwas anderes
durchzudrücken", sagte Baumann weiter. Monsantohabe zu Beginn des Jahrzehnts versucht, die
Einführung von genveränderten Pflanzen in Europa gegen große
Widerstände voranzutreiben und sei dabei zu wenig auf Bedenken
eingegangen, sagte der Bayer-Chef. "Dieser Schuss ist nach hinten losgegangen." Unter
seiner Führung solle damit Schluss sein, kündigte Werner Baumann. "Wir bei Bayer haben einen partnerschaftlichen Ansatz, mit
unseren Kunden und allen gesellschaftlichen Gruppen umzugehen."
Nach diesem Maßstab werde man auch das kombinierte
Saatgutgeschäft führen. Außerdem werde auch er persönlich den
Dialog mit Kritikern stärker suchen. Jetzt teilen auf:
'''
sentences = sent_tokenize(corpus,language='german')
duplicates = []
cleaned = []
for s in sentences:
if s in cleaned:
if s in duplicates:
continue
else:
duplicates.append(s)
else:
cleaned.append(s)
cleaned
</code></pre>
<p>I get rid of most of the duplicated sentences, but in this case half of a sentence still remains in the cleaned list. How can I fully clean this text? This is the half sentence that remains:</p>
<pre><code>Nach derFusion mit dem US-Unternehmen Monsantowill der ChemiekonzernBayernicht zwangsläufig genetisch verändertes Saatgut in Europa vertreiben
</code></pre>
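Exact string matching misses near-duplicates like this one, where the repeated block was tokenized slightly differently. A hedged sketch using a similarity threshold instead of equality (the 0.9 threshold is illustrative; note this is quadratic in the number of sentences, so apply it per document rather than across the whole corpus):

```python
import difflib

def dedupe_fuzzy(sentences, threshold=0.9):
    # Keep a sentence only if it is not near-identical to one already kept.
    kept = []
    for s in sentences:
        if not any(difflib.SequenceMatcher(None, s, k).ratio() >= threshold
                   for k in kept):
            kept.append(s)
    return kept
```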
| <python><text><nltk><data-cleaning> | 2023-11-14 12:09:50 | 2 | 10,490 | Mostafa Bouzari |
77,480,317 | 17,487,457 | Mean absolute value along all columns of a 3D Numpy array | <p>I always find myself in a state of confusion when dealing with a multi-dimensional array.
Imagine having the following array of arrays, where each array contains feature importance scores (3 features) for each class in the dataset (5 classes). The dataset contains 4 samples in all.</p>
<pre class="lang-py prettyprint-override"><code>arr = np.random.randn(5,4,3).round(1)
arr
array([[[ 0.7, -0.1, 0.6], # class 0 feature importances
[-0.8, -0.7, 1.4],
[ 1.4, -0.1, 1.4],
[-1.8, -1.2, -1.6]],
[[-0.3, 2.1, 0.5], # class 1 feature importances
[-1.2, 1.4, -0.4],
[ 0. , -1. , 0.8],
[-0.8, 2.3, 0.3]],
[[ 0.2, 0.6, -0.1], # class 2 feature importances
[-1.8, -0.2, 1.2],
[-0.5, 0.5, 1. ],
[ 1.3, 0.4, -2.6]],
[[-1. , 0.8, -0.4], # class 3 feature importances
[ 1.2, 1.5, -0.5],
[ 0.1, -0.5, 0.8],
[ 2.5, -1.6, -0.6]],
[[-1.2, 0.3, -0.9], # class 4 feature importances
[ 1. , -1. , -0.5],
[ 0.3, 1.4, 0.5],
[-2.3, 0.6, 0.2]]])
</code></pre>
<p>I am interested in computing the <code>mean absolute</code> value of feature importances across the classes (overall). Ideally the resultant array should be rank 1, shape <code>(3,)</code>, since there are three features:</p>
<pre class="lang-py prettyprint-override"><code>Feature1 = sum( abs(0.7, -0.8, 1.4, -1.8, -0.3, -1.2, 0.0, -0.8, 0.2, -1.8, -0.5, 1.3,
                    -1.0, 1.2, 0.1, 2.5, -1.2, 1.0, 0.3, -2.3) ) / 20  # n = 20 values for feature 1
</code></pre>
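The reduction above collapses to a single vectorized call: take the absolute value, then average over the class and sample axes together, leaving one value per feature:

```python
import numpy as np

arr = np.random.randn(5, 4, 3)            # (classes, samples, features)
mean_abs = np.abs(arr).mean(axis=(0, 1))  # average over classes AND samples
# mean_abs has shape (3,): one mean-absolute importance per feature
```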
| <python><arrays><numpy><multidimensional-array><numpy-ndarray> | 2023-11-14 11:40:20 | 1 | 305 | Amina Umar |
77,480,234 | 859,141 | PySide6 - Color specific row with QTableView QAbstractTableModel and Pandas | <p>This is the first application I have created with Python Qt, so apologies if I have missed something really fundamental.</p>
<p>The following table is populated via a QAbstractTableModel linked to a pandas DataFrame. I have a PlayerIDX (e.g. 30, who in the example is running 25th) and want that row to be highlighted. At the moment I have the following function:</p>
<pre><code>def highlight_player_row(self, row):
for column in range(0,self.columnCount()):
ix = self.index(row, column)
print(ix)
self.colors[(row, column)] = QBrush(Qt.darkCyan)
self.dataChanged.emit(ix, ix, (Qt.BackgroundRole,))
</code></pre>
<p>which called with <em>highlight_player_row(30)</em> results in the following:</p>
<p><a href="https://i.sstatic.net/itLyt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/itLyt.png" alt="enter image description here" /></a></p>
<p>Rather than row 24/31 being highlighted I would like row 30/25 (marked in yellow).</p>
<p>I don't know if it makes any difference but I am sorting the dataframe prior with:</p>
<pre><code>self.idx_data = idx_df.sort_values(by=['CarIdxPosition'])
</code></pre>
<p>Do I need to go back to where the dataframe is formed from the incoming data, and flag the row there?</p>
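One hedged sketch of the likely fix: after sorting, translate the player id into the <em>positional</em> row of the sorted frame and pass that to <code>highlight_player_row</code>. The column names follow the question; the model is assumed to read rows positionally from the sorted frame:

```python
import pandas as pd

idx_df = pd.DataFrame({"PlayerIDX": [30, 10, 20],
                       "CarIdxPosition": [25, 1, 2]})
idx_data = idx_df.sort_values(by=["CarIdxPosition"]).reset_index(drop=True)
# Positional row of the player in the *sorted* frame -- this is the row
# number the model/view expects, not the original dataframe index.
player_row = idx_data.index[idx_data["PlayerIDX"] == 30][0]
```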
| <python><pandas><pyside><qtableview><qabstracttablemodel> | 2023-11-14 11:26:47 | 1 | 1,184 | Byte Insight |
77,479,695 | 4,474,576 | Scrapy how to make async request from FSFilesStore/media pipeline? | <p>What I need is to stat a file based on a HEAD request, instead of downloading the entire file. The file is a video (it can be big) and always has the same name. I can determine whether it was updated from the size and last-modified headers of the HEAD response.</p>
<p>Code:</p>
<pre><code>import treq
from scrapy.pipelines.files import FilesPipeline
from scrapy.pipelines.files import FSFilesStore
class FSFilesStoreDBCacheExtended(FSFilesStore):
async def stat_file(self, path, info):
try:
head_resp = await treq.head('https://google.com')
except Exception as e:
print(e) # ConnectError here
raise e
size = int(head_resp.headers['content-length'])
mod_timestamp = head_resp.headers['last-modified']
# ... do some stuff
return super().stat_file(path, info)
class ImageDownload(FilesPipeline):
def __init__(self, store_uri, download_func=None, settings=None):
self.STORE_SCHEMES[''] = FSFilesStoreDBCacheExtended
self.STORE_SCHEMES['file'] = FSFilesStoreDBCacheExtended
super().__init__(store_uri, download_func, settings)
</code></pre>
<p><code>treq</code> fails with error:</p>
<p><code>[<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion: Connection lost.>]</code></p>
<p>Most probably <code>async</code> is not allowed here.</p>
<p>There is <a href="https://stackoverflow.com/questions/41624350/making-a-non-blocking-http-request-from-scrapy-pipeline">this answer</a>, but there is no access to the crawler or spider there, and it looks very hacky: I don't want media-processing logic to be in the spider.</p>
<p><strong>Question</strong>: how to make non blocking HTTP call from <code>stat_file</code> method with <code>treq</code> or any other lib or by scrapy tool?</p>
| <python><async-await><scrapy><twisted> | 2023-11-14 10:03:46 | 0 | 1,660 | frenzy |
77,479,601 | 12,133,068 | Dask array operation on chunks with no return | <p>I have a dask array of dimension <code>(C, Y, X)</code> (for instance, <code>(100, 50000, 50000)</code>).</p>
<p>I want to perform an operation on each chunk that will add some value to a small numpy array, that we will call <code>x</code>. Note that the operation on each chunk doesn't need to return any value, it will just update one row of <code>x</code>.</p>
<p>I have the current implementation below (pseudo-code) that works but is quite ugly and seems to use too much memory (I have some memory errors sometimes). I was wondering if there is a better solution for this?</p>
<pre class="lang-py prettyprint-override"><code>import dask.array as da
import numpy as np
image = ... # dask array of shape (C, Y, X)
x = np.zeros(100, 20)
def func(chunk, block_info=None):
row_index = ... # the coordinates of the chunk is needed to get the row index that we will update
new_value = ... # here, I perform an operation on my chunk
x[row_index] += new_value
    return np.zeros_like(chunk) # I have to return something that has the shape of the chunk (a numpy array, since map_blocks operates on numpy blocks)
image = image.rechunk({0: -1}) # I don't want chunks on the channel axis (each chunk is still quite light in term of RAM)
image.map_blocks(func).compute()
</code></pre>
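A common alternative pattern is to have each block <em>return</em> its small result instead of mutating shared state, then aggregate afterwards. A numpy-only sketch of the idea (in dask this would be <code>map_blocks</code> returning the per-chunk values, followed by a reduction; shapes are illustrative):

```python
import numpy as np

image = np.ones((4, 100, 100))   # stand-in for the dask array

def func(chunk):
    # Compute the small per-chunk value and RETURN it,
    # instead of writing into a shared array.
    return chunk.sum()

# Emulate chunking along the last axis only (channel axis unchunked):
chunks = np.split(image, 2, axis=2)
x = np.array([func(c) for c in chunks])   # aggregate the returned values
```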
| <python><dask> | 2023-11-14 09:50:25 | 1 | 334 | Quentin BLAMPEY |
77,479,584 | 571,692 | Local Azure Function: Customer packages not in sys path. This should never happen | <p>I'm encountering a weird warning with azure functions locally.
Whenever I <code>func start</code> my function, I get these error messages:</p>
<pre><code>Found Python version 3.10.12 (python3).
Azure Functions Core Tools
Core Tools Version: 4.0.5455 Commit hash: N/A (64-bit)
Function Runtime Version: 4.27.5.21554
[2023-11-14T10:02:39.795Z] Customer packages not in sys path. This should never happen!
[2023-11-14T10:02:42.194Z] Worker process started and initialized.
</code></pre>
<p>In the host.json, extensionBundle version is <code>[3.*, 4.0.0)</code>
In the local.settings.json, <code>"FUNCTIONS_WORKER_RUNTIME": "python"</code>
The function app is based on the new model of python azure function (func init MyProjFolder --worker-runtime python --model V2 <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=linux%2Cisolated-process%2Cnode-v4%2Cpython-v2%2Chttp-trigger%2Ccontainer-apps&pivots=programming-language-python" rel="noreferrer">https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=linux%2Cisolated-process%2Cnode-v4%2Cpython-v2%2Chttp-trigger%2Ccontainer-apps&pivots=programming-language-python</a>)</p>
<p>My first interrogation is the first warning:</p>
<p><code>Customer packages not in sys path. This should never happen!</code>. I'm using a virtual environment.</p>
<p>The function is starting correctly, but what is this warning?</p>
<p>local.settings.json:</p>
<pre class="lang-json prettyprint-override"><code>{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "python",
"AzureWebJobsStorage": "UseDevelopmentStorage=true",
"AzureWebJobsFeatureFlags": "EnableWorkerIndexing"
}
}
</code></pre>
| <python><azure-functions><azure-functions-core-tools> | 2023-11-14 09:47:27 | 2 | 717 | Morti |
77,479,458 | 189,336 | Python: write dataset as hive-partitioned and clustered parquet files (no JVM) | <p>I would like to write a table stored in a dataframe-like object (e.g. pandas dataframe, duckdb table, pyarrow table) in the parquet format that is both hive partitioned <em>and</em> clustered. Here's what I mean:</p>
<ul>
<li><strong>Hive partitioning</strong>: I can specify a set of partitioning columns (e.g. <code>(year, month, day, foo_col1)</code>), and this will result in data from each partition being written to a different path, e.g. <code>year=2024/month=01/day=01/foo_col1=bar_val/</code></li>
<li><strong>Clustering</strong> (a.k.a. bucketing). I can also specify a set of clustering columns, and this will co-locate data with the same values to adjacent rows in the parquet files within each partition.</li>
</ul>
<p>Note that to achieve clustering, it is also sufficient to be able to sort rows within each partition by a set of columns.</p>
<p>I can do this in spark (and pyspark) by sorting a dataframe and then writing the output with parquet and specifying the partitionBy columns. <em>However, spark is a JVM-based framework that I am trying to avoid.</em> I would love to achieve this using a package like <code>pyarrow</code>, <code>pandas</code>, or <code>duckdb</code> that does not require an external runtime like Java (and which often has lower serialization/deserialization cost).</p>
<p>I have tried doing this in duckdb by first creating a sorted table and then using <code>COPY TO</code> with the relevant hive partitioning options. This creates hive-partitioned output that's not quite right: within each individual file the sorting seems to be respected, but it is not respected across all files in the same hive partition. This prevents the optimization offered by clustering/bucketing, in which all identical values of the clustered column appear in a contiguous block of rows within the partition.</p>
| <python><parquet><pyarrow><duckdb> | 2023-11-14 09:26:16 | 0 | 13,905 | conradlee |
77,479,371 | 573,191 | CuDF KeyError: 'Field "None" does not exist in schema' | <p>I am reading in a single file (1.4 GB) using cuDF. I am actually using the RAPIDS pandas implementation, but since I was getting the same error I tried directly with cuDF. The shape is (847942, 4) and dtypes shows:</p>
<pre><code>@timestamp object
message object
syslog_program object
syslog_hostname object
dtype: object
DTPYES: dict = {
'@timestamp': str,
'syslog_program': str,
'syslog_hostname': str,
'message': str,
}
COLUMN_ORDER:List[str] = [
'@timestamp',
'syslog_program',
'syslog_hostname',
'message',
]
gdf = cudf.read_csv(matching_files[:1][0],
dtype=DTPYES,
usecols=COLUMN_ORDER,
delimiter=",",
)
print(gdf.shape)
</code></pre>
<p>but if I try <code>gdf.head(5)</code> I get the following error:</p>
<blockquote>
<p>return libcudf.interop.to_arrow([self], [("None", self.dtype)])["None"].chunk(0)</p>
<p>File ~/.local/lib/python3.9/site-packages/pyarrow/table.pxi:1525, in
pyarrow.lib._Tabular.<strong>getitem</strong>()</p>
<p>File ~/.local/lib/python3.9/site-packages/pyarrow/table.pxi:1611, in
pyarrow.lib._Tabular.column()</p>
<p>File ~/.local/lib/python3.9/site-packages/pyarrow/table.pxi:1547, in
pyarrow.lib._Tabular._ensure_integer_index()</p>
<p>KeyError: 'Field "None" does not exist in schema'</p>
</blockquote>
<p>How can I overcome this issue, considering that I do not have it with Dask or Polars?</p>
| <python><dataframe><pyarrow><cudf> | 2023-11-14 09:11:15 | 0 | 325 | fabio.geraci |
77,479,170 | 4,505,998 | Save figure with centered plot | <p>I want to save the figure with a Y-axis title and ticks, while keeping the frame of the graph horizontally centered.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
plt.plot(x, y)
plt.ylabel('Y-axis')
plt.savefig('/tmp/test.png')
</code></pre>
<p>While the displayed figure in my Jupyter Notebook is centered:
<a href="https://i.sstatic.net/98NxG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/98NxG.png" alt="enter image description here" /></a></p>
<p>The file <code>/tmp/test.png</code> is not:</p>
<p><a href="https://i.sstatic.net/5vOxa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5vOxa.png" alt="enter image description here" /></a></p>
<p>I'd like it to be the same space between the black vertical bar in the left with the left edge of the image, and the black vertical bar in the right with the right edge of the image.</p>
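A hedged sketch of one way to force symmetric margins: fix the subplot box explicitly, so the left and right edges get equal whitespace regardless of the space the y-label consumes (the 0.15/0.85 values are illustrative, as is the headless backend):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, an assumption for the sketch
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x))
ax.set_ylabel("Y-axis")
# Equal left/right margins keep the axes frame horizontally centered.
fig.subplots_adjust(left=0.15, right=0.85)
fig.savefig("test.png")
```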
| <python><matplotlib> | 2023-11-14 08:33:47 | 0 | 813 | David Davó |
77,478,949 | 7,585,973 | NotImplementedError: cannot instantiate 'WindowsPath' on your system when loading a .pkl file in the prod environment | <p>I try to load a pickle file that was initially saved from <code>nlp = spacy.load("en_core_web_md")</code>. It loads and works fine in a Jupyter notebook, but raises <code>NotImplementedError: cannot instantiate 'WindowsPath' on your system</code> when loading the .pkl file in the prod environment.</p>
<p>This is what I've done:</p>
<ol>
<li>changing the path to use backslashes</li>
</ol>
<p>But the error is still the same. Do you have any ideas?</p>
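This error typically appears when a pickle created on Windows (which stores <code>WindowsPath</code> objects) is loaded on Linux. A widely used workaround sketch is to alias <code>WindowsPath</code> to <code>PosixPath</code> before unpickling; it is a hack, but harmless on Windows itself:

```python
import pathlib
import platform

if platform.system() != "Windows":
    # Let pickles containing WindowsPath objects deserialize on POSIX.
    pathlib.WindowsPath = pathlib.PosixPath

# model = pickle.load(open("model.pkl", "rb"))  # then load as usual
```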
| <python><pandas> | 2023-11-14 07:50:52 | 1 | 7,445 | Nabih Bawazir |
77,478,821 | 4,494,781 | How to emit a 64 bit unsigned integer with a PyQt signal | <p>I ran into integer overflow issues when using <code>PyQt5.QtCore.pyqtSignal(int)</code> in the past, because it seems that the python int becomes a C int (32 bit signed integer) in the C++ call. This limits the maximum value that the signal can emit to 2,147,483,647.</p>
<p>I worked around this issue by emitting a <code>PyQt5.QtCore.pyqtSignal(object)</code> instead, but I imagine this adds a lot of overhead to the signal.</p>
<p>Is there a way to specify exactly which flavor of integer I want the signal to emit in my python code?</p>
| <python><pyqt><pyqt5><integer-overflow> | 2023-11-14 07:26:59 | 0 | 1,105 | PiRK |
77,478,792 | 2,000,548 | Why does `astype(int)` change the number? | <p>I am using Python 3.11.</p>
<p>I have some data which is a nanosecond Unix timestamp, but stored as float. I try to convert it to int.</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{"float_time": [1698873816748837120.0, 1698873795853176060.0]}
)
df["time"] = df["float_time"].astype(int)
</code></pre>
<p>However, this fails</p>
<pre><code>assert df.equals(
pd.DataFrame(
{
"float_time": [1698873816748837120.0, 1698873795853176060.0],
"time": [1698873816748837120, 1698873795853176060] # <- Note last number ends with 0
}
)
)
</code></pre>
<p>This succeeds</p>
<pre><code>assert df.equals(
pd.DataFrame(
{
"float_time": [1698873816748837120.0, 1698873795853176060.0],
"time": [1698873816748837120, 1698873795853176064] # <- Note last number ends with 4
}
)
)
</code></pre>
<p>You can see <code>astype(int)</code> changes <code>1698873795853176060.0</code> to <code>1698873795853176064</code>.</p>
<p>I am wondering why <code>astype(int)</code> changes the number?</p>
<p>I was thinking decimals in the float may change, but didn't expect the integer part changes.</p>
<p>Could someone explain? Thanks!</p>
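This is float64 precision, not pandas: a double has a 53-bit significand, so integers above 2**53 (about 9.0e15) are rounded to the nearest representable value (at this magnitude, a multiple of 256). The conversion can be checked in plain Python:

```python
x = 1698873795853176060.0
# 1698873795853176060 exceeds 2**53, so it cannot be stored exactly in a
# float64; the nearest representable double is ...176064.
assert 1698873795853176060 > 2**53
assert int(x) == 1698873795853176064
```

To keep nanosecond timestamps exact, avoid the float round-trip entirely, e.g. parse them as integers (or strings) from the start.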
| <python><pandas> | 2023-11-14 07:21:48 | 0 | 50,638 | Hongbo Miao |
77,478,727 | 12,242,085 | How to create new column in Data Frame to sort values for each id based on date column in Python Pandas? | <p>I have Data Frame in Python Pandas like below:</p>
<ul>
<li><p>Column my_date is in datetime format.</p>
</li>
<li><p>In my real DataFrame I have many more columns.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>my_date</th>
<th>col1</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>2023-05-15</td>
<td>1</td>
</tr>
<tr>
<td>111</td>
<td>2023-05-14</td>
<td>11</td>
</tr>
<tr>
<td>111</td>
<td>2023-05-13</td>
<td>2</td>
</tr>
<tr>
<td>222</td>
<td>2023-10-11</td>
<td>3</td>
</tr>
<tr>
<td>222</td>
<td>2023-10-12</td>
<td>55</td>
</tr>
</tbody>
</table>
</div></li>
</ul>
<p>And I need to create a new column <code>col_x</code> that, for each <code>id</code>, holds the values 1, 2, 3 and so on based on the date in <code>my_date</code>. So, for each <code>id</code>, <code>col_x</code> has to be 1 for the earliest date in <code>my_date</code>, and so on.</p>
<p>So, as a result I need something like below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>my_date</th>
<th>col1</th>
<th>col_x</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>2023-05-15</td>
<td>1</td>
<td>3</td>
</tr>
<tr>
<td>111</td>
<td>2023-05-14</td>
<td>11</td>
<td>2</td>
</tr>
<tr>
<td>111</td>
<td>2023-05-13</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>222</td>
<td>2023-10-11</td>
<td>3</td>
<td>1</td>
</tr>
<tr>
<td>222</td>
<td>2023-10-12</td>
<td>55</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>How can I do that in Python Pandas ?</p>
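A hedged sketch of one way to do it: rank the dates within each <code>id</code> group, so the earliest date gets 1 (this assumes <code>my_date</code> is already datetime, as stated):

```python
import pandas as pd

df = pd.DataFrame({
    "id": [111, 111, 111, 222, 222],
    "my_date": pd.to_datetime(["2023-05-15", "2023-05-14", "2023-05-13",
                               "2023-10-11", "2023-10-12"]),
    "col1": [1, 11, 2, 3, 55],
})
# Rank dates within each id; the earliest date gets 1.
df["col_x"] = df.groupby("id")["my_date"].rank(method="first").astype(int)
```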
| <python><pandas><dataframe><date><datetime> | 2023-11-14 07:10:27 | 1 | 2,350 | dingaro |
77,478,698 | 13,803,549 | How to work in development with Discord OAuth2 live | <p>I have a django app with Discord OAuth2 authentication live on the deployed site.</p>
<p>With all the redirects leading back to the deployed app, what is the procedure for doing development/debugging when I can’t see the progress on the localhost?</p>
<p>Thanks!</p>
| <python><django><oauth-2.0><discord.py> | 2023-11-14 07:03:17 | 1 | 526 | Ryan Thomas |
77,478,534 | 11,580,993 | Microsoft Visual C++ 14.0 or greater is required | <p>I am trying to install the Bloomberg API using <code>python -m pip install --index-url=https://bcms.bloomberg.com/pip/simple blpapi</code> as per their website. But I get an error:</p>
<pre><code> error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
</code></pre>
<p>Which is strange, because I do have Microsoft Visual C++ 2015-2022 installed. I even added the environment variable: C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.37.32822\bin\Hostx64\x64.</p>
<p>Does anyone know a solution? The Bloomberg helpdesk recommends reading their documentation, which does not cover this problem.</p>
| <python> | 2023-11-14 06:26:12 | 0 | 1,003 | Rene Chan |
77,478,398 | 4,281,353 | Using list comprehensions in Tensorflow | <p>Can we use Python list comprehensions in TensorFlow graph (non-eager) execution?</p>
<pre><code>IOU: tf.Tensor = tf.concat(
values=[
intersection_over_union(
box_pred[..., j, 1:5],
box_true[..., 1:5]
)
for j in range(self.B) # <--- predicted boxes
],
axis=-1,
name="IOU"
)
</code></pre>
<p>I think a graph cannot contain a Python loop by default and one needs to use either <code>@tf.function</code> or <code>tf.while_loop</code>, but some posts used comprehensions, hence I would like to confirm.</p>
<p><a href="https://stackoverflow.com/questions/44284477/equivalent-list-comprehension-in-tensorflow">equivalent list comprehension in tensorflow</a></p>
<pre><code>vals = [dict[tensor1[k]] for k in range(tensor1.get_shape().as_list()[0])]
tensor2 = tf.stack(vals, axis=0)
</code></pre>
| <python><tensorflow><list-comprehension> | 2023-11-14 05:47:18 | 1 | 22,964 | mon |
77,478,026 | 9,747,182 | How to compute the moving average over 3D array with a step size? | <p>I need to calculate a moving average over a 3D array with a step size set by me. What I am doing right now is</p>
<pre><code>import numpy as np
import bottleneck

img = np.ones((10, 10, 50))
img_new = bottleneck.move.move_mean(img, window=5, axis=2)
</code></pre>
<p>While <code>bottleneck.move.move_mean</code> is fast enough to deal with images, unfortunately it does not let me set the step size. This leads to a lot of overhead, because the mean of 4 out of 5 windows is calculated unnecessarily.</p>
<p>Is there a similar function to bottleneck.move.move_mean where I can set the step size?</p>
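A pure-numpy sketch that only computes the windows you keep, using <code>sliding_window_view</code> plus a stride (the window and step values follow the question; the strided result is a view, so memory cost only appears at the final mean):

```python
import numpy as np

img = np.ones((10, 10, 50))
window, step = 5, 5
# All length-5 windows along axis 2 as a view, then keep every 5th one:
v = np.lib.stride_tricks.sliding_window_view(img, window, axis=2)[:, :, ::step]
img_new = v.mean(axis=-1)   # shape (10, 10, 10): one mean per kept window
```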
| <python><numpy-ndarray><moving-average><array-broadcasting><numpy-slicing> | 2023-11-14 03:26:24 | 1 | 701 | emely_pi |
77,477,931 | 189,247 | Compute the order of non-unique array elements | <p>I'm looking for an efficient method to compute the "order" of each
item in a numpy array, with "order" defined as the number of
preceding elements equal to the element. Example:</p>
<pre><code>order([4, 2, 3, 2, 6, 4, 4, 6, 2, 4])
[0 0 0 1 0 1 2 1 2 3]
</code></pre>
<p>Current solution loops in pure Python and is not fast enough:</p>
<pre><code>from collections import defaultdict

import numpy as np

def order(A):
cnt = defaultdict(int)
O = np.zeros_like(A)
for i, r in enumerate(A):
O[i] = cnt[r]
cnt[r] += 1
return O
</code></pre>
<p>I'm using <code>order</code> to implement <code>scatter</code>:</p>
<pre><code>def scatter(A, c):
R = A % c
I = c * order(R) + R
B = np.full(np.max(I) + 1, -1)
B[I] = A
return B
</code></pre>
<p>This is useful for multi-threading. For example, if the scattered
array contains addresses to write to then no two threads processing
the array in parallel will see the same address.</p>
<p>The question is: are there numpy built-ins that I'm missing that I can use
to make <code>order</code> faster and remove the explicit looping?</p>
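A vectorized sketch using a stable argsort: equal values become contiguous runs in the sorted order, the within-run position is a running index minus the run's start, and an inverse scatter restores the original order:

```python
import numpy as np

def order_vec(A):
    A = np.asarray(A)
    perm = np.argsort(A, kind="stable")   # groups equal values, keeps order
    s = A[perm]
    # Index where each run of equal values starts:
    run_start = np.r_[0, np.flatnonzero(s[1:] != s[:-1]) + 1]
    run_len = np.diff(np.r_[run_start, len(s)])
    # Position within the run = global index minus the run's start index:
    counts = np.arange(len(s)) - np.repeat(run_start, run_len)
    out = np.empty_like(counts)
    out[perm] = counts                    # scatter back to original order
    return out
```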
| <python><arrays><numpy> | 2023-11-14 02:52:02 | 1 | 20,695 | Gaslight Deceive Subvert |
77,477,648 | 15,233,792 | pkg_resources.DistributionNotFound: The 'pip==20.0.2' distribution was not found and is required by the application | <pre><code>python --version
Python 2.7.18
python3 --version
Python 3.10.0
</code></pre>
<p>After I installed pip3 using <code>sudo apt-get install python3-pip</code></p>
<p>I run <code>pip3</code></p>
<p>It shows error below:</p>
<pre class="lang-bash prettyprint-override"><code>$ pip3
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/pkg_resources/__init__.py", line 568, in _build_master
ws.require(__requires__)
File "/usr/local/lib/python3.10/site-packages/pkg_resources/__init__.py", line 886, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/local/lib/python3.10/site-packages/pkg_resources/__init__.py", line 777, in resolve
raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.VersionConflict: (pip 21.2.3 (/usr/local/lib/python3.10/site-packages), Requirement.parse('pip==20.0.2'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/pip3", line 6, in <module>
from pkg_resources import load_entry_point
File "/usr/local/lib/python3.10/site-packages/pkg_resources/__init__.py", line 3243, in <module>
def _initialize_master_working_set():
File "/usr/local/lib/python3.10/site-packages/pkg_resources/__init__.py", line 3226, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/pkg_resources/__init__.py", line 3255, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/local/lib/python3.10/site-packages/pkg_resources/__init__.py", line 570, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/local/lib/python3.10/site-packages/pkg_resources/__init__.py", line 583, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/local/lib/python3.10/site-packages/pkg_resources/__init__.py", line 772, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==20.0.2' distribution was not found and is required by the application
</code></pre>
<p>I would appreciate it if someone could provide any ideas to solve this issue</p>
<p>Thanks</p>
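A minimal shell sketch of the usual workaround: bypass the stale <code>/usr/bin/pip3</code> wrapper script (whose entry point pins <code>pip==20.0.2</code>) by invoking pip as a module of the interpreter you actually want:

```shell
# Run the pip that lives in this interpreter's site-packages,
# ignoring the outdated /usr/bin/pip3 entry-point script:
python3 -m pip --version
```

From there, `python3 -m pip install --upgrade pip` can regenerate a consistent wrapper; as long as the wrapper stays broken, always prefer the `python3 -m pip` form.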
| <python><python-3.x><pip> | 2023-11-14 01:02:54 | 1 | 2,713 | stevezkw |
77,477,547 | 564,709 | Specify pydantic model fields from a data source | <p>I have a need for a pydantic model that would dynamically create model fields from another data source. I'd like to do something like this:</p>
<pre class="lang-py prettyprint-override"><code>import pydantic
import my_data
_data = my_data.load() # _data[variable] = value
class MyModel(pydantic.BaseModel):
# this obviously doesn't work. Is there a way to create a suite of variable
# names from the keys in _data
for _key in _data:
_key: float = None
</code></pre>
<p>I've tried to do <code>setattr(MyModel, _key, None)</code> but that doesn't work either. The tricky part is that there are a few hundred fields that I would like to specify on <code>MyModel</code>, and I'm not sure how to do this effectively. How can I create an arbitrary set of fields on a pydantic model based on data in a dictionary? More generally, is there a way to specify class attributes and their associated type hints from data in a dictionary?</p>
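For reference, pydantic ships a helper for exactly this: <code>create_model</code> takes field definitions as <code>(type, default)</code> tuples, so the dictionary keys can be turned into typed fields. A sketch with a stand-in dictionary for <code>my_data.load()</code>:

```python
import pydantic

_data = {"alpha": 1.0, "beta": 2.5}   # stand-in for my_data.load()

# One optional float field per key in the data source:
MyModel = pydantic.create_model(
    "MyModel", **{key: (float, None) for key in _data}
)

m = MyModel(**_data)
```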
| <python><python-typing><pydantic> | 2023-11-14 00:23:59 | 2 | 3,336 | dino |
77,477,374 | 2,518,602 | Jupyter Notebook hangs after trying to upgrade to new version of Python | <p>I recently realized that while I have Python 3.12.0 installed, Jupyter notebook was using Python 3.7.3. I thought it would be a good idea to try and get Jupyter to use the same version of Python that <code>python3</code> is referring to. I spent a few hours on this and seem to have completely broken Jupyter. I am using MacOS 14.0.</p>
<p>Jupyter appears to start up just fine, although I do see this warning, which I believe is new:</p>
<pre><code>/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/json/encoder.py:257: UserWarning: date_default is deprecated since jupyter_client 7.0.0. Use jupyter_client.jsonutil.json_default.
return _iterencode(o, 0)
</code></pre>
<p>But when I try to create a new notebook it hangs. I see a ton of output on the console. The output is probably too long to print in its entirety but it starts with:</p>
<pre><code>[I 15:08:35.774 NotebookApp] Creating new notebook in
/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/json/encoder.py:257: UserWarning: date_default is deprecated since jupyter_client 7.0.0. Use jupyter_client.jsonutil.json_default.
return _iterencode(o, 0)
Exception in callback <TaskWakeupMethWrapper object at 0x10fa70fd8>(<Future finis...c7b"\r\n\r\n'>)
handle: <Handle <TaskWakeupMethWrapper object at 0x10fa70fd8>(<Future finis...c7b"\r\n\r\n'>)>
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
RuntimeError: Cannot enter into task <Task pending coro=<HTTP1ServerConnection._server_request_loop() running at /Users/arilamstein/Library/Python/3.7/lib/python/site-packages/tornado/http1connection.py:825> wait_for=<Future finished result=b'GET /api/co...ac7b"\r\n\r\n'> cb=[IOLoop.add_future.<locals>.<lambda>() at /Users/arilamstein/Library/Python/3.7/lib/python/site-packages/tornado/ioloop.py:687]> while another task <Task pending coro=<RequestHandler._execute() running at /Users/arilamstein/Library/Python/3.7/lib/python/site-packages/tornado/web.py:1711> cb=[_HandlerDelegate.execute.<locals>.<lambda>() at /Users/arilamstein/Library/Python/3.7/lib/python/site-packages/tornado/web.py:2361]> is being executed.
Exception in callback <TaskWakeupMethWrapper object at 0x10f0dd6a8>(<Future finis...c7b"\r\n\r\n'>)
handle: <Handle <TaskWakeupMethWrapper object at 0x10f0dd6a8>(<Future finis...c7b"\r\n\r\n'>)>
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
RuntimeError: Cannot enter into task <Task pending coro=<HTTP1ServerConnection._server_request_loop() running at /Users/arilamstein/Library/Python/3.7/lib/python/site-packages/tornado/http1connection.py:825> wait_for=<Future finished result=b'GET /nbexte...ac7b"\r\n\r\n'> cb=[IOLoop.add_future.<locals>.<lambda>() at /Users/arilamstein/Library/Python/3.7/lib/python/site-packages/tornado/ioloop.py:687]> while another task <Task pending coro=<RequestHandler._execute() running at /Users/arilamstein/Library/Python/3.7/lib/python/site-packages/tornado/web.py:1711> cb=[_HandlerDelegate.execute.<locals>.<lambda>() at /Users/arilamstein/Library/Python/3.7/lib/python/site-packages/tornado/web.py:2361]> is being executed.
Exception in callback <TaskWakeupMethWrapper object at 0x10f0dd528>(<Future finis... GMT\r\n\r\n'>)
handle: <Handle <TaskWakeupMethWrapper object at 0x10f0dd528>(<Future finis... GMT\r\n\r\n'>)>
</code></pre>
<p>While I would still like to have Jupyter use the same version of Python that <code>python3</code> refers to, at this point I would settle for simply unbreaking Jupyter.</p>
<p>Any help would be appreciated!</p>
| <python><jupyter-notebook><jupyter> | 2023-11-13 23:19:59 | 2 | 2,023 | Ari |
77,477,353 | 678,572 | How to write to a Python dictionary without loading the dictionary into memory? | <p>I have a large table that I want to convert to a Python dictionary but I don't want to load all of the data into memory.</p>
<p><strong>Is it possible to actively write to a pickle dump without building the object first?</strong></p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>import gzip
f_out = open("output.dict.pkl.gz", "wb")
with open("table.tsv", "r") as f_in:
for line in f_in:
line = line.strip()
if line:
fields = line.split("\t")
k = fields[3]
v = fields[1]
# Pseudocode
f_out[k] = v # I know this won't work but just so you can see my goal
# Close the pickle file
f_out.close()
</code></pre>
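A pickle file itself can't be written key-by-key like that, but the standard library's `shelve` module provides exactly this pattern: a persistent, dict-like object whose values are pickled to disk one entry at a time, so the full mapping never has to exist in memory. A minimal sketch (the file path is a placeholder):

```python
import os
import shelve
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "output.dict")  # placeholder path

# Each assignment pickles a single value straight to the on-disk store;
# the complete mapping is never built in memory.
with shelve.open(db_path) as db:
    for k, v in [("k1", "v1"), ("k2", "v2")]:
        db[k] = v

# Reopen later and look entries up lazily, like a dict.
with shelve.open(db_path) as db:
    print(db["k1"])  # v1
```

In the question's loop, `db[k] = v` would take the place of the `f_out[k] = v` pseudocode line.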
| <python><dictionary><bigdata><pickle><large-data> | 2023-11-13 23:12:01 | 1 | 30,977 | O.rka |
77,477,327 | 377,022 | Is there a reason some (all?) libraries don't implement log_softmax via log1p(x) = log(1 + x)? | <p>As I understand it, PyTorch implements <code>log_softmax(x)</code> as <code>x - x.max() - (x - x.max()).exp().sum().log()</code>, for added numerical stability. (See, e.g., <a href="https://stackoverflow.com/a/61568783/377022">here</a>.) However, when the largest value is much larger than the rest of the values (about 16 larger for float32, about 36 larger for float64), <code>log_softmax</code> returns <code>0</code> at the maximum value, when it could give a much more precise answer.</p>
<p>Numerically, we have</p>
<pre class="lang-py prettyprint-override"><code>>>> eps = torch.tensor(torch.finfo(torch.float32).eps)
>>> -torch.tensor([1-(2*eps).log(), 0]).log_softmax(dim=0)
tensor([1.1921e-07, 1.6249e+01])
>>> -torch.tensor([1-eps.log(), 0]).log_softmax(dim=0)
tensor([-0.0000, 16.9424])
</code></pre>
<p>(For float16, this is 7.2e-4.)</p>
<p>As I understand it, <code>log</code> and <code>exp</code> are implemented under the hood using some sort of polynomial(?) series expansion. (cf, eg, <a href="https://stackoverflow.com/a/40519989/377022">this answer</a>) 1.19e-7 is the unit of least precision for float32 around 1, but float32 can represent values as small as about 1.18e-38 normally (<code>torch.finfo(torch.float32).smallest_normal</code>, <code>2**-126 = 2**-(2**7 - 2)</code>) or about 1.18e-45 (<code>2**-126 * 2**-23</code>) subnormally. So there's a lot of room for more precision, which could be achieved by having functions like <code>expm1(x)</code> which is <code>exp(x) - 1</code> but with more precision, and <code>log1p(x)</code> which is <code>log(1 + x)</code> but with more precision on tiny values of <code>x</code>. Then we could implement <code>log_softmax(x)</code> as</p>
<pre class="lang-py prettyprint-override"><code>maxi = x.argmax()
xoffset = x - x[maxi]
xoffsetexp = xoffset.exp()
# xoffsetexp[maxi] is currently about 1
xoffsetexp[maxi] = 0
xoffsetexp_sum_m1 = xoffsetexp.sum()
return xoffset - xoffsetexp_sum_m1.log1p()
</code></pre>
<p>This might, for example, allow model training to not be dominated by floating point errors quite as early, in some cases.</p>
<p><strong>Do any libraries implement <code>log_softmax</code> this way? Are there reasons to avoid implementing <code>log_softmax</code> this way?</strong></p>
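The precision difference described above can be reproduced without any tensor library. Below is a pure-Python sketch of the proposed `log1p` formulation next to the usual max-shifted one (illustrative only — not how PyTorch or any other library actually implements it):

```python
import math

def log_softmax_naive(xs):
    # Usual formulation: x - max - log(sum(exp(x - max))).
    m = max(xs)
    lse = math.log(sum(math.exp(x - m) for x in xs))
    return [x - m - lse for x in xs]

def log_softmax_log1p(xs):
    i_max = max(range(len(xs)), key=xs.__getitem__)
    shifted = [x - xs[i_max] for x in xs]
    # exp(shifted[i_max]) is exactly 1, so sum only the *other* terms
    # and use log1p to keep precision when that sum is tiny.
    rest = sum(math.exp(s) for i, s in enumerate(shifted) if i != i_max)
    lse = math.log1p(rest)
    return [s - lse for s in shifted]

# With a gap of 40, the naive version rounds the entry at the max to
# exactly 0.0, while the log1p version keeps roughly -exp(-40).
print(log_softmax_naive([40.0, 0.0])[0])   # 0.0
print(log_softmax_log1p([40.0, 0.0])[0])
```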
| <python><tensorflow><pytorch><numerical-methods> | 2023-11-13 23:00:58 | 0 | 6,168 | Jason Gross |
77,477,158 | 628,228 | Is there a natural way to handle values of Enum class in a pandas column? | <p>Enum classes are often defined as a mapping from names to numeric constants. I was looking to take advantage of this mapping to show natural labels in a categorical pandas column, while keeping the numbers for efficiency.</p>
<p>Initially I felt this could be achieved naturally by using the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html" rel="nofollow noreferrer"><code>Categorical</code></a> type, so I started working from the following approach:</p>
<pre class="lang-py prettyprint-override"><code>from enum import IntEnum
import pandas as pd
class CategoryType(IntEnum):
RED = 0
YELLOW = 1
GREEN = 2
cat = pd.Categorical([2,2,1,2,0], categories = [x for x in CategoryType])
cat_data = pd.DataFrame(cat)
</code></pre>
<p>The above will work, but all of the label information is of course lost so you only see the numbers when printing the data frame.</p>
<p>We can use <code>map</code> or other similar transformations to make everything work with <code>str</code> types, but this seems like a huge waste of time given that <code>Categorical</code> is already supposed to store strings internally as numerical codes, so it seems backwards to do this when we already have the codes.</p>
<p>Furthermore, you lose the ability to do something like <code>cat_data == CategoryType.RED</code>, since your categorical will be string-based and so knows nothing about numbers.</p>
<p>I've searched everywhere but for some reason I can only find suggestions about mapping from numbers to categorical strings, but never about preserving the numbers from an existing Enum type and somehow assigning labels to the codes for display purposes.</p>
<p>I was a bit surprised by this since it looks to me like this would be something people want to do fairly often. Am I missing something?</p>
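One possibility (a sketch, not necessarily the canonical answer) is `pd.Categorical.from_codes`, which accepts the integer codes directly and attaches labels only for display. It works cleanly here because the `IntEnum` values are the contiguous integers 0..2, so the enum values can be reused as category codes as-is:

```python
from enum import IntEnum

import pandas as pd

class CategoryType(IntEnum):
    RED = 0
    YELLOW = 1
    GREEN = 2

raw = [2, 2, 1, 2, 0]  # enum values, used directly as category codes

# Categories are listed in enum-value order, so code i maps to the member
# whose value is i; internally pandas stores only the small-int codes.
cat = pd.Categorical.from_codes(raw, categories=[m.name for m in CategoryType])
df = pd.DataFrame({"color": cat})

print(df["color"].tolist())                     # ['GREEN', 'GREEN', 'YELLOW', 'GREEN', 'RED']
print([int(c) for c in df["color"].cat.codes])  # [2, 2, 1, 2, 0]
```

Comparisons then go through the name, e.g. `df["color"] == CategoryType.RED.name`, and the original integers remain recoverable from `.cat.codes`; the approach assumes the enum values are contiguous and start at 0.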
| <python><pandas> | 2023-11-13 22:12:31 | 3 | 4,430 | glopes |
77,477,006 | 2,642,356 | Poetry & Cython - cross-platfrom / build on install | <p>I'm using Poetry v1.4.1 with Cython 3.0.5. My package contains a <code>.pyx</code> file that I wish to compile. If I'm using <code>pyximport</code> instead of pre-compiling, importing takes a long time. However, if I'm building (with the code below) using <code>poetry build</code>, instead of a version- & platform-agnostic wheel file <code>my_package-1.0.0-py3-none-any.whl</code>, I get the very specific wheel <code>my_package-1.0.0-cp39-cp39-manylinux_2_31_x86_64.whl</code>.</p>
<p>Is there a way to build a wheel that's fully agnostic? Maybe cythonize/compile the C files at the time of the wheel's installation somehow?</p>
<p>Thanks!</p>
<hr />
<h2>Code:</h2>
<p><code>pyproject.toml</code> (partial):</p>
<pre class="lang-ini prettyprint-override"><code>...
[tool.poetry.build]
generate-setup-file = false
script = 'build.py'
[build-system]
requires = ["poetry-core", "Cython", "numpy"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p><code>build.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import os
import shutil
from distutils.core import Distribution
import warnings
if os.environ.get("NO_BUILD", False):
exit(0)
try:
from Cython.Build import build_ext, cythonize
import numpy
ext_modules = cythonize("my_package/**/*.pyx", output_dir="build", include_path=["dmcommon"],
build_dir="build", aliases={'NUMPY': numpy.get_include()})
dist = Distribution({"ext_modules": ext_modules})
cmd = build_ext(dist)
cmd.ensure_finalized()
cmd.run()
# Move compiled near origin files.
for output in cmd.get_outputs():
relative_extension = os.path.relpath(output, cmd.build_lib)
relative_output_folder = os.path.dirname(relative_extension)
os.makedirs(relative_output_folder, exist_ok=True)
shutil.copyfile(output, relative_extension)
except Exception as ex:
warnings.warn(f"Failed to build Cython extensions: {ex}")
</code></pre>
| <python><cython><python-poetry><distutils><cythonize> | 2023-11-13 21:33:27 | 0 | 1,864 | EZLearner |
77,476,965 | 3,219,759 | Django / Pytest / Splinter : IntegrityError duplicate key in test only | <p>I know it's a very common problem and I read a lot of similar questions. But I can't find any solution, so, here I am with the 987th question on Stackoverflow about a Django Integrity error.</p>
<p>I'm starting a Django project with a Postgres db to learn about the framework. I did the classic Profile creation for users, automated with a post_save signal. Here is the model:</p>
<pre><code>class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
description = models.TextField(max_length=280, blank=True)
contacts = models.ManyToManyField(
"self",
symmetrical=True,
blank=True
)
</code></pre>
<p>And this is the signal that goes with it:</p>
<pre><code>def create_profile(sender, instance, created, **kwargs):
if created:
user_profile = Profile(user=instance)
user_profile.save()
post_save.connect(create_profile, sender=User, dispatch_uid="profile_creation")
</code></pre>
<p>The project is just starting, and for now I only create users in the admin view. With the post_save signal, it's supposed to create a Profile with the same form.</p>
<p>Here is the admin setup:</p>
<pre><code>class ProfileInline(admin.StackedInline):
model = Profile
class UserAdmin(admin.ModelAdmin):
model = User
list_display = ["username", "is_superuser"]
fields = ["username", "is_superuser"]
inlines = [ProfileInline]
</code></pre>
<p>I'm using pytest and Splinter for my tests, and this is the integration test that doesn't work:</p>
<pre><code>@pytest.mark.django_db
class TestAdminPage:
def test_profile_creation_from_admin(self, browser, admin_user):
browser.visit('/admin/login/')
username_field = browser.find_by_css('form input[name="username"]')
password_field = browser.find_by_css('form input[name="password"]')
username_field.fill(admin_user.username)
password_field.fill('password')
submit = browser.find_by_css('form input[type="submit"]')
submit.click()
browser.links.find_by_text("Users").click()
browser.links.find_by_partial_href("/user/add/").click()
browser.find_by_css('form input[name="username"]').fill('Super_pseudo')
browser.find_by_css('textarea[name="profile-0-description"]').fill('Super description')
browser.find_by_css('input[name="_save"]').click()
assert browser.url is '/admin/auth/user/'
assert Profile.objects.last().description is 'Super description'
</code></pre>
<p>When I run this, I get this error:</p>
<pre><code>django.db.utils.IntegrityError: duplicate key value violates unique constraint "profiles_profile_user_id_key"
DETAIL: Key (user_id)=(2) already exists.
</code></pre>
<p>At first, I also saw this error when I was creating a user using my local server, but only if I wrote a description. If I left the description field empty, everything was working fine. So I wrote this integration test to solve the issue. And then I read a lot, tweaked a few things, and the error stopped happening in my local browser. But not in my test suite.</p>
<p>So I used a breakpoint, there :</p>
<pre><code>def create_profile(sender, instance, created, **kwargs):
breakpoint()
if created:
user_profile = Profile(user=instance)
user_profile.save()
</code></pre>
<p>And that's where the fun begins. This single test is calling the signal 3 times.</p>
<p>The first time it's called by the <code>admin_user</code> fixture that I'm using.</p>
<pre><code>(Pdb) from profiles.models import Profile
(Pdb) instance
<User: admin>
(Pdb) created
True
(Pdb) instance.profile
*** django.contrib.auth.models.User.profile.RelatedObjectDoesNotExist: User has no profile.
(Pdb) Profile.objects.count()
0
(Pdb) continue
</code></pre>
<p>Seems legit, the admin user doesn't have a profile, why not. Then the signal is called again on the same instance.</p>
<pre><code>(Pdb) instance
<User: admin>
(Pdb) instance.profile
<Profile: admin>
(Pdb) Profile.objects.count()
1
(Pdb) Profile.objects.last()
<Profile: admin>
(Pdb) created
False
(Pdb) continue
</code></pre>
<p>Still legit, weird, but it's not doing anything. <code>created</code> is False, so it's not creating a second profile. Didn't need the first one, but it's not making the test fail. And then:</p>
<pre><code>(Pdb) instance
<User: Super_pseudo>
(Pdb) created
True
(Pdb) instance.profile
<Profile: Super_pseudo>
(Pdb) Profile.objects.count()
1
(Pdb) Profile.objects.last()
<Profile: admin>
</code></pre>
<p>This is so weird. The profile is not saved, but it's raising an IntegrityError anyway. It looks instantiated (why?), and when I call <code>instance.profile</code> I get something (how?), but it doesn't look like it's saved in the db. Yet the error happens anyway. I have no clue, I've spent a few hours already, and I don't know where to look.</p>
<p>Feels like I'm missing something important, and that's why I'm asking for your help.</p>
<h2>Edit</h2>
<p>I tried updating the signal with <code>if created and not kwargs.get('raw', False):</code>, but it doesn't work.</p>
<p>Just in case, the error message in full :</p>
<pre><code>=================================== FAILURES ===================================
________________ TestAdminPage.test_profile_creation_from_admin ________________
self = <django.db.backends.utils.CursorWrapper object at 0x7f6f62521790>
sql = 'INSERT INTO "profiles_profile" ("user_id", "description") VALUES (%s, %s) RETURNING "profiles_profile"."id"'
params = (2, 'Super description')
ignored_wrapper_args = (False, {'connection': <DatabaseWrapper vendor='postgresql' alias='default'>, 'cursor': <django.db.backends.utils.CursorWrapper object at 0x7f6f62521790>})
def _execute(self, sql, params, *ignored_wrapper_args):
self.db.validate_no_broken_transaction()
with self.db.wrap_database_errors:
if params is None:
# params default might be backend specific.
return self.cursor.execute(sql)
else:
> return self.cursor.execute(sql, params)
E psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "profiles_profile_user_id_key"
E DETAIL: Key (user_id)=(2) already exists.
/usr/local/lib/python3.12/site-packages/django/db/backends/utils.py:89: UniqueViolation
The above exception was the direct cause of the following exception:
self = <profiles.tests.test_admin.TestAdminPage object at 0x7f6f62a543b0>
browser = <splinter.driver.djangoclient.DjangoClient object at 0x7f6f62822840>
admin_user = <User: admin>
def test_profile_creation_from_admin(self, browser, admin_user):
self.login_as_admin(browser, admin_user)
browser.links.find_by_text("Users").click()
browser.links.find_by_partial_href("/user/add/").click()
browser.find_by_css('form input[name="username"]').fill('Super_pseudo')
browser.find_by_css('textarea[name="profile-0-description"]').fill('Super description')
> browser.find_by_css('input[name="_save"]').click()
profiles/tests/test_admin.py:28:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.12/site-packages/splinter/driver/lxmldriver.py:433: in click
return self.parent.submit_data(parent_form)
/usr/local/lib/python3.12/site-packages/splinter/driver/djangoclient.py:130: in submit_data
return super(DjangoClient, self).submit(form).content
/usr/local/lib/python3.12/site-packages/splinter/driver/lxmldriver.py:89: in submit
self._do_method(method, url, data=data)
/usr/local/lib/python3.12/site-packages/splinter/driver/djangoclient.py:118: in _do_method
self._response = func_method(url, data=data, follow=True, **extra)
/usr/local/lib/python3.12/site-packages/django/test/client.py:948: in post
response = super().post(
/usr/local/lib/python3.12/site-packages/django/test/client.py:482: in post
return self.generic(
/usr/local/lib/python3.12/site-packages/django/test/client.py:609: in generic
return self.request(**r)
/usr/local/lib/python3.12/site-packages/django/test/client.py:891: in request
self.check_exception(response)
/usr/local/lib/python3.12/site-packages/django/test/client.py:738: in check_exception
raise exc_value
/usr/local/lib/python3.12/site-packages/django/core/handlers/exception.py:55: in inner
response = get_response(request)
/usr/local/lib/python3.12/site-packages/django/core/handlers/base.py:197: in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
/usr/local/lib/python3.12/contextlib.py:81: in inner
return func(*args, **kwds)
/usr/local/lib/python3.12/site-packages/django/contrib/admin/options.py:688: in wrapper
return self.admin_site.admin_view(view)(*args, **kwargs)
/usr/local/lib/python3.12/site-packages/django/utils/decorators.py:134: in _wrapper_view
response = view_func(request, *args, **kwargs)
/usr/local/lib/python3.12/site-packages/django/views/decorators/cache.py:62: in _wrapper_view_func
response = view_func(request, *args, **kwargs)
/usr/local/lib/python3.12/site-packages/django/contrib/admin/sites.py:242: in inner
return view(request, *args, **kwargs)
/usr/local/lib/python3.12/site-packages/django/contrib/admin/options.py:1886: in add_view
return self.changeform_view(request, None, form_url, extra_context)
/usr/local/lib/python3.12/site-packages/django/utils/decorators.py:46: in _wrapper
return bound_method(*args, **kwargs)
/usr/local/lib/python3.12/site-packages/django/utils/decorators.py:134: in _wrapper_view
response = view_func(request, *args, **kwargs)
/usr/local/lib/python3.12/site-packages/django/contrib/admin/options.py:1747: in changeform_view
return self._changeform_view(request, object_id, form_url, extra_context)
/usr/local/lib/python3.12/site-packages/django/contrib/admin/options.py:1799: in _changeform_view
self.save_related(request, form, formsets, not add)
/usr/local/lib/python3.12/site-packages/django/contrib/admin/options.py:1255: in save_related
self.save_formset(request, form, formset, change=change)
/usr/local/lib/python3.12/site-packages/django/contrib/admin/options.py:1243: in save_formset
formset.save()
/usr/local/lib/python3.12/site-packages/django/forms/models.py:784: in save
return self.save_existing_objects(commit) + self.save_new_objects(commit)
/usr/local/lib/python3.12/site-packages/django/forms/models.py:944: in save_new_objects
self.new_objects.append(self.save_new(form, commit=commit))
/usr/local/lib/python3.12/site-packages/django/forms/models.py:1142: in save_new
return super().save_new(form, commit=commit)
/usr/local/lib/python3.12/site-packages/django/forms/models.py:757: in save_new
return form.save(commit=commit)
/usr/local/lib/python3.12/site-packages/django/forms/models.py:542: in save
self.instance.save()
/usr/local/lib/python3.12/site-packages/django/db/models/base.py:814: in save
self.save_base(
/usr/local/lib/python3.12/site-packages/django/db/models/base.py:877: in save_base
updated = self._save_table(
/usr/local/lib/python3.12/site-packages/django/db/models/base.py:1020: in _save_table
results = self._do_insert(
/usr/local/lib/python3.12/site-packages/django/db/models/base.py:1061: in _do_insert
return manager._insert(
/usr/local/lib/python3.12/site-packages/django/db/models/manager.py:87: in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
/usr/local/lib/python3.12/site-packages/django/db/models/query.py:1805: in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
/usr/local/lib/python3.12/site-packages/django/db/models/sql/compiler.py:1820: in execute_sql
cursor.execute(sql, params)
/usr/local/lib/python3.12/site-packages/django/db/backends/utils.py:67: in execute
return self._execute_with_wrappers(
/usr/local/lib/python3.12/site-packages/django/db/backends/utils.py:80: in _execute_with_wrappers
return executor(sql, params, many, context)
/usr/local/lib/python3.12/site-packages/django/db/backends/utils.py:84: in _execute
with self.db.wrap_database_errors:
/usr/local/lib/python3.12/site-packages/django/db/utils.py:91: in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <django.db.backends.utils.CursorWrapper object at 0x7f6f62521790>
sql = 'INSERT INTO "profiles_profile" ("user_id", "description") VALUES (%s, %s) RETURNING "profiles_profile"."id"'
params = (2, 'Super description')
ignored_wrapper_args = (False, {'connection': <DatabaseWrapper vendor='postgresql' alias='default'>, 'cursor': <django.db.backends.utils.CursorWrapper object at 0x7f6f62521790>})
def _execute(self, sql, params, *ignored_wrapper_args):
self.db.validate_no_broken_transaction()
with self.db.wrap_database_errors:
if params is None:
# params default might be backend specific.
return self.cursor.execute(sql)
else:
> return self.cursor.execute(sql, params)
E django.db.utils.IntegrityError: duplicate key value violates unique constraint "profiles_profile_user_id_key"
E DETAIL: Key (user_id)=(2) already exists.
/usr/local/lib/python3.12/site-packages/django/db/backends/utils.py:89: IntegrityError
</code></pre>
| <python><django><postgresql><pytest><splinter> | 2023-11-13 21:23:18 | 0 | 1,222 | Ruff9 |
77,476,750 | 3,116,231 | Issue with upsert: SQLAlchemy insert works, update doesn't | <p>My session manager:</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.exc import SQLAlchemyError
import contextlib
class DatabaseSessionManager:
def __init__(self, connection_string):
self.engine = create_engine(connection_string)
self.session_factory = sessionmaker(bind=self.engine)
self.Session = scoped_session(self.session_factory)
@contextlib.contextmanager
def session_scope(self):
"""Provide a transactional scope around a series of operations."""
session = self.Session()
try:
yield session
session.commit()
except SQLAlchemyError as e:
session.rollback()
raise
finally:
session.close()
</code></pre>
<p>the upsert:</p>
<pre><code>def upsert(session, customer):
try:
# Attempt to find the customer by id
existing_customer = session.query(Customer).filter_by(id=customer.id).first()
if existing_customer:
# Update existing customer
for key, value in vars(customer).items():
if hasattr(existing_customer, key) and value is not None:
setattr(existing_customer, key, value)
else:
# Create new customer
session.add(customer)
except SQLAlchemyError as e:
print(f"Error occurred: {e}")
session.rollback()
finally:
session.close()
</code></pre>
<p>When inserting an object with a key that doesn't exist yet, the data is written to the database:</p>
<pre><code>from datetime import datetime
test_customer = Customer(
id=6962763399483, # Manually assign a unique ID
first_name='Melanie',
last_name='Dopp',
email='john.doe@example.com',
orders_count='9000',
total_spent='19000',
created_at=datetime.utcnow(),
last_order_created_at=datetime.utcnow() # Assign a relevant datetime or None
)
</code></pre>
<p>When I edit the object, without changing the key, the changes aren't committed. I also tried with a modified database manager class using <code>__enter__</code> and <code>__exit__</code>:</p>
<pre><code>with dbm.session_scope() as session:
print(session.query(Customer).filter_by(id=6962763399483).first().last_name)
# returns Ruby
</code></pre>
<p>My data:
<a href="https://i.sstatic.net/tYIfh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tYIfh.png" alt="enter image description here" /></a></p>
<p>What is going wrong here?</p>
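For comparison: the find-then-update-or-insert logic in `upsert()` is what SQLAlchemy's `Session.merge()` does in a single call. A self-contained in-memory sketch (a simplified `Customer` model, not the full schema from the question):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    last_name = Column(String)

engine = create_engine("sqlite://")  # in-memory database for the sketch
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

with Session() as session:
    session.merge(Customer(id=1, last_name="Dopp"))  # no row yet -> INSERT
    session.commit()

with Session() as session:
    session.merge(Customer(id=1, last_name="Ruby"))  # row exists -> UPDATE
    session.commit()

with Session() as session:
    print(session.get(Customer, 1).last_name)  # Ruby
```

Note also that `upsert()` above calls `session.close()` in its `finally` block, which runs before the surrounding `session_scope` gets a chance to `commit()` — that ordering may be why pending updates are being discarded.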
| <python><postgresql><sqlalchemy> | 2023-11-13 20:43:46 | 1 | 1,704 | Zin Yosrim |
77,476,482 | 10,326,759 | Can I return a dataclass from the forward method of a pytorch Module? | <p>Can I return a dataclass from the forward method of a pytorch Module?</p>
<p>It appears to be possible in simple examples, and I can't find any documented obstruction. However, I've not seen it done, and coming from Tensorflow I worry that it will blow up when things get complicated (e.g. in distributed training).</p>
| <python><pytorch> | 2023-11-13 19:47:22 | 0 | 497 | Andrea Allais |
77,476,304 | 2,983,568 | Why does a Python import fail on the first run but work on subsequent ones (noisereduce package)? | <p>Working with VS Code and Jupyter notebook extension (among others), in a virtual environment that is properly activated.<br>
I am trying to run the following import:</p>
<pre><code>import noisereduce
</code></pre>
<p>The <strong>first</strong> time I run the code, this error is shown:</p>
<pre><code>---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_2444\169068538.py in <module>
----> 1 import noisereduce
c:\Users\Username\miniconda3\envs\adsml\lib\site-packages\noisereduce\__init__.py in <module>
----> 1 from noisereduce.noisereduce import reduce_noise
c:\Users\Username\miniconda3\envs\adsml\lib\site-packages\noisereduce\noisereduce.py in <module>
----> 1 from noisereduce.spectralgate.stationary import SpectralGateStationary
2 from noisereduce.spectralgate.nonstationary import SpectralGateNonStationary
3
4 try:
5 import torch
c:\Users\Username\miniconda3\envs\adsml\lib\site-packages\noisereduce\spectralgate\__init__.py in <module>
1 from .nonstationary import SpectralGateNonStationary
2 from .stationary import SpectralGateStationary
----> 3 from .streamed_torch_gate import StreamedTorchGate
c:\Users\Username\miniconda3\envs\adsml\lib\site-packages\noisereduce\spectralgate\streamed_torch_gate.py in <module>
----> 1 import torch
2 from noisereduce.spectralgate.base import SpectralGate
3 from noisereduce.torchgate import TorchGate as TG
4 import numpy as np
5
ModuleNotFoundError: No module named 'torch'
</code></pre>
<p>But then, running it again works, until I restart the kernel, reload the window, or close/reopen VS Code.<br>
Why is that? Should I try to manually install <code>torch</code> in this venv?
<br><br>
<strong>Edit</strong><br>
This [ugly hack] works:</p>
<pre><code>try:
import noisereduce
except:
import noisereduce # must run twice
</code></pre>
<p><strong>Edit 2</strong><br>
Versions</p>
<pre><code>python 3.9.16
noisereduce 3.0.0
VSCode 1.84.2
</code></pre>
| <python><visual-studio-code><import><dependencies> | 2023-11-13 19:13:50 | 1 | 4,665 | evilmandarine |
77,476,261 | 19,694,624 | How do I reply with a mention in pycord | <p>I am running a Discord bot, but it responds without mentioning the user.</p>
<p>In discord.py I could do something like this:</p>
<pre><code>@bot.event
async def on_message(message):
if message.content == "hi":
await message.reply("Hello!", mention_author=True)
</code></pre>
<p>And it would mention the user and reply to their message. But since discord.py was deprecated, I switched to pycord, and I just can't find out how to do the same thing in pycord.</p>
<p>Here is a simple echo bot to replicate the issue:</p>
<pre><code>import discord
bot = discord.Bot()
@bot.event
async def on_ready():
print(f"{bot.user} is ready and online!")
@bot.slash_command(name="chat", description="some desc")
async def chat(ctx, msg):
await ctx.respond(msg)
bot.run(TOKEN)
</code></pre>
<p>The result I want to achieve, but with slash command
<a href="https://i.sstatic.net/g0JZT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g0JZT.png" alt="enter image description here" /></a></p>
| <python><python-3.x><discord><discord.py><pycord> | 2023-11-13 19:05:20 | 2 | 303 | syrok |
77,476,250 | 9,100,431 | Can't upload file to Sharepoint site using Graph API. The resource could not be found | <p>I can see the files in the drive, but the target supposedly doesn't exist when I try to write a file.</p>
<p>This is how I look for files:</p>
<pre><code>site_url = "https://graph.microsoft.com/v1.0/sites"
get_url = f"{site_url}/{site_id}/drives/{drive_id}/root/children"
headers = {
"Authorization": f"Bearer {access_token}",
"Content-Type": "application/json;odata.metadata=minimal",
"Accept": "application/json;odata.metadata=minimal"
}
response = requests.get(get_url, headers=headers)
if response.status_code == 200:
print(response.text)
</code></pre>
<p>And get the response:</p>
<pre><code>{
"value": [
{
"createdDateTime": "2023-11-13T18:21:22Z",
"lastModifiedDateTime": "2023-11-13T18:21:22Z",
"name": "Reporte",
"webUrl": "{site}/Shared%20Documents/Reporte",
},
{
"@microsoft.graph.downloadUrl": "{download_url}",
"name": "Open pip.txt"
}
]
}
</code></pre>
<p>This confirms I can access the drive. But when I try to write a new file:</p>
<pre><code>file_name = "Reporte Z.xlsx"
upload_url = f"{site_url}/{site_id}/drives/{drive_id}/root:/{file_name}:/content"

# Set headers
headers = {
    "Authorization": f"Bearer {access_token}",
    "Connection": "Keep-alive",
    "Content-Type": "text/plain"
}

# Read file content
with open(file_path, "rb") as file:
    file_content = file.read()

# Make POST request
response = requests.post(upload_url, headers=headers, data=file_content)

# Check response status
if response.status_code == 201:
    print("File uploaded successfully")
else:
    print("Error uploading file:", response.status_code, response.text)
</code></pre>
<p>I get the error</p>
<pre><code>Error uploading file: 404 {"error":{"code":"itemNotFound","message":"The resource could not be found.","innerError":{"date":"2023-11-13T18:47:51","request-id":"[string]","client-request-id":"[string]"}}}
</code></pre>
<p>I've tried removing the ":" in root, but then I get the "Entity only allows writes with a JSON Content-Type header" error.</p>
<p>I'm following this tutorial (<a href="https://learn.microsoft.com/en-us/graph/api/driveitem-put-content?view=graph-rest-1.0&tabs=http" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/graph/api/driveitem-put-content?view=graph-rest-1.0&tabs=http</a>) to no avail.</p>
<p>I've looked in many threads, but many seem to never have found a solution.</p>
| <python><sharepoint><python-requests><microsoft-graph-api> | 2023-11-13 19:03:05 | 1 | 660 | Diego |
77,476,241 | 1,175,788 | Is it possible to rename Pyspark exploded columns all at once? | <p>I have to explode two different struct columns, both of which have the same underlying structure, meaning there are overlapping names. Is it possible to rename/alias the columns that are returned from <code>explode()</code> all at once, and avoid a bunch of <code>alias()</code>?</p>
<p>Here's what I have:</p>
<pre><code>from pyspark.sql import functions as f
df = df.withColumn("first_struct_field", f.explode("first_struct_field"))\
       .withColumn("second_struct_field", f.explode("second_struct_field"))\
       .select("id_field", "first_struct_field.*", "second_struct_field.*").show(n=5)
</code></pre>
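<p>One way to cut down on repeated <code>alias()</code> calls (a sketch; the field names here are assumptions based on the sample rows, and <code>selectExpr</code> accepts SQL-style <code>AS</code> aliases) is to generate the expressions programmatically:</p>

```python
# Hypothetical field lists; in practice these could be read from df.schema
# instead of being hard-coded.
struct_fields = {
    "first_struct_field": ["hidden", "type", "description"],
    "second_struct_field": ["active", "type", "description"],
}

# Build "parent.child AS parent_child" expressions for df.selectExpr(...)
select_exprs = ["id_field"] + [
    f"{s}.{f} AS {s}_{f}" for s, fields in struct_fields.items() for f in fields
]

# df.selectExpr(*select_exprs) would then yield uniquely named columns.
print(select_exprs[2])  # first_struct_field.type AS first_struct_field_type
```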
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id_field</th>
<th>first_struct_field</th>
<th>second_struct_field</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>{"hidden": true, "type": "Personal", "description": "High"}</td>
<td>{"active": false, "type": "Fruit", "description": "Yellow"}</td>
</tr>
<tr>
<td>Second</td>
<td>{"hidden": true, "type": "Business", "description": "Low"}</td>
<td>{"active": true, "type": "Vehicle", "description": "Purple"}</td>
</tr>
</tbody>
</table>
</div>
<p>I tried to make it obvious that in the data, the two overlapping sub-items are <code>type</code> and <code>description</code> even though they don't have the same meanings between the two columns. I would want something like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id_field</th>
<th>first_struct_field_type</th>
<th>first_struct_field_description</th>
<th>second_struct_field_type</th>
<th>second_struct_field_description</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>Personal</td>
<td>High</td>
<td>Fruit</td>
<td>Yellow</td>
</tr>
<tr>
<td>Second</td>
<td>Business</td>
<td>Low</td>
<td>Vehicle</td>
<td>Purple</td>
</tr>
</tbody>
</table>
</div> | <python><apache-spark><pyspark><apache-spark-sql> | 2023-11-13 19:01:41 | 1 | 3,011 | simplycoding |
77,476,114 | 11,500,371 | In a pandas dataframe where the column names are datetime objects, how can I find the earliest instance of a True value for a given row? | <p>I have a dataframe where the column ids are datetime objects, and the values are True or False.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>2023-10-30 00:00:00</th>
<th>2023-11-01 00:00:00</th>
<th>2023-11-03 00:00:00</th>
<th>2023-11-06 00:00:00</th>
<th>2023-11-08 00:00:00</th>
<th>2023-11-13 00:00:00</th>
</tr>
</thead>
<tbody>
<tr>
<td>Canada</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>true</td>
</tr>
<tr>
<td>France</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>true</td>
<td>true</td>
</tr>
<tr>
<td>Argentina</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
</tr>
<tr>
<td>Australia</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
</tr>
<tr>
<td>Morocco</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
</tr>
<tr>
<td>Ethiopia</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>true</td>
<td>true</td>
</tr>
<tr>
<td>Nepal</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
</tr>
</tbody>
</table>
</div>
<p>I want to add a new column that identifies when the earliest True value occurred for a given row. For example, Canada's first True entry occurred on 11/13, while Australia's occurred on 10/30. The final output would look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>2023-10-30 00:00:00</th>
<th>2023-11-01 00:00:00</th>
<th>2023-11-03 00:00:00</th>
<th>2023-11-06 00:00:00</th>
<th>2023-11-08 00:00:00</th>
<th>2023-11-13 00:00:00</th>
<th>Earliest True</th>
</tr>
</thead>
<tbody>
<tr>
<td>Canada</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>true</td>
<td>2023-11-13 00:00:00</td>
</tr>
<tr>
<td>France</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>true</td>
<td>true</td>
<td>2023-11-08 00:00:00</td>
</tr>
<tr>
<td>Argentina</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>2023-10-30 00:00:00</td>
</tr>
<tr>
<td>Australia</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>2023-10-30 00:00:00</td>
</tr>
<tr>
<td>Morocco</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>2023-10-30 00:00:00</td>
</tr>
<tr>
<td>Ethiopia</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>False</td>
<td>true</td>
<td>true</td>
<td>2023-11-08 00:00:00</td>
</tr>
<tr>
<td>Nepal</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>true</td>
<td>2023-10-30 00:00:00</td>
</tr>
</tbody>
</table>
</div>
<p>Any ideas on how to accomplish this?</p>
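<p>A possible approach (a sketch on a toy frame, assuming the country names are the index and all remaining columns are booleans): <code>idxmax(axis=1)</code> returns the first column label where the row's maximum occurs, and <code>where()</code> can blank out rows that contain no True at all:</p>

```python
import pandas as pd

cols = pd.to_datetime(["2023-11-08", "2023-11-13"])  # toy date columns
df = pd.DataFrame([[False, True], [True, True], [False, False]],
                  index=["Canada", "Nepal", "Nowhere"], columns=cols)

# idxmax(axis=1) picks the first column label holding the row maximum
# (True > False); where() replaces all-False rows with NaT.
df["Earliest True"] = df[cols].idxmax(axis=1).where(df[cols].any(axis=1))
print(df["Earliest True"])
```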
| <python><pandas><datetime> | 2023-11-13 18:37:47 | 1 | 337 | Sean R |
77,476,044 | 12,042,094 | Azure function unable to install azure-identity dependencies | <p>I am attempting to install an Azure function using the <code>WEBSITE_RUN_FROM_PACKAGE = &lt;STORAGE_BLOB_URL&gt;</code> approach, with a Python runtime and Linux OS, to a dedicated subnet so as to eliminate public internet traffic.</p>
<p>the specific required installs are:</p>
<pre><code>import logging
import os
import azure.functions as func
from azure.identity import DefaultAzureCredential, CredentialUnavailableError
from azure.keyvault.secrets import SecretClient
from azure.core.exceptions import ResourceNotFoundError, ClientAuthenticationError
</code></pre>
<p>Prior to uploading the function folder in zip format, I isolate the function folder through a virtual environment and install the complete requirements.txt:</p>
<pre><code>azure-functions
azure-identity
azure-keyvault-secrets
pymssql
</code></pre>
<p>without error. A subsequent <code>func start</code> confirms this allows local deploy. I use a <code>freeze</code> command to isolate the versions and update my requirements.txt. I then install all of these dependencies and packages locally.</p>
<p>My own python runtime is also 3.10 64bit.</p>
<p>These are installed locally with the --upgrade flag specified.</p>
<pre><code>pip install --target="./.python_packages/lib/site-packages" --upgrade -r requirements.txt
</code></pre>
<p>I then upload to blob storage and create the function app in a dedicated subnet with a <code>Microsoft.Storage</code> endpoint.</p>
<p>Despite this, looking at the function monitoring output, the error seems to show that it is still unable to resolve dependencies:</p>
<pre><code>Result: Failure Exception: ImportError: cannot import name 'x509' from 'cryptography.hazmat.bindings._rust' (unknown location). Please check the requirements.txt file for the missing module. For more info, please refer the troubleshooting guide: https://aka.ms/functions-modulenotfound Stack: File "/azure-functions-host/workers/python/3.10/LINUX/X64/azure_functions_worker/dispatcher.py", line 387, in _handle__function_load_request func = loader.load_function( File "/azure-functions-host/workers/python/3.10/LINUX/X64/azure_functions_worker/utils/wrappers.py", line 48, in call raise extend_exception_message(e, message) File "/azure-functions-host/workers/python/3.10/LINUX/X64/azure_functions_worker/utils/wrappers.py", line 44, in call return func(*args, **kwargs) File "/azure-functions-host/workers/python/3.10/LINUX/X64/azure_functions_worker/loader.py", line 194, in load_function mod = importlib.import_module(fullmodname) File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1050, in _gcd_import File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/home/site/wwwroot/function_folder/__init__.py", line 4, in <module> from azure.identity import DefaultAzureCredential, CredentialUnavailableError File "/home/site/wwwroot/.python_packages/lib/site-packages/azure/identity/__init__.py", line 10, in <module> from ._credentials import ( File "/home/site/wwwroot/.python_packages/lib/site-packages/azure/identity/_credentials/__init__.py", line 5, in <module> from .authorization_code import AuthorizationCodeCredential File 
"/home/site/wwwroot/.python_packages/lib/site-packages/azure/identity/_credentials/authorization_code.py", line 9, in <module> from .._internal.aad_client import AadClient File "/home/site/wwwroot/.python_packages/lib/site-packages/azure/identity/_internal/__init__.py", line 5, in <module> from .aad_client import AadClient File "/home/site/wwwroot/.python_packages/lib/site-packages/azure/identity/_internal/aad_client.py", line 11, in <module> from .aad_client_base import AadClientBase File "/home/site/wwwroot/.python_packages/lib/site-packages/azure/identity/_internal/aad_client_base.py", line 20, in <module> from .aadclient_certificate import AadClientCertificate File "/home/site/wwwroot/.python_packages/lib/site-packages/azure/identity/_internal/aadclient_certificate.py", line 7, in <module> from cryptography import x509 File "/home/site/wwwroot/.python_packages/lib/site-packages/cryptography/x509/__init__.py", line 7, in <module> from cryptography.x509 import certificate_transparency File "/home/site/wwwroot/.python_packages/lib/site-packages/cryptography/x509/certificate_transparency.py", line 11, in <module> from cryptography.hazmat.bindings._rust import x509 as rust_x509
</code></pre>
<p>My question is: is this an <code>azure-identity</code> / <code>cryptography</code> dependency-specific error that others have encountered and resolved, or is there a different approach that must be taken when deploying a function to an Azure Function App, contrary to the documentation?</p>
<p>Note that the specific error message is:</p>
<pre><code>Result: Failure Exception: ImportError: cannot import name 'x509' from 'cryptography.hazmat.bindings._rust'
</code></pre>
<p>Additional application settings have been configured to facilitate a python runtime:</p>
<pre><code>SCM_DO_BUILD_DURING_DEPLOYMENT = true
ENABLE_ORYX_BUILD = true
FUNCTIONS_WORKER_RUNTIME = true
</code></pre>
<p>Note that since VNet integration is required, this is on a Basic service plan rather than Consumption. I have tried using Kudu to SSH into wwwroot and install the necessary libraries, without any success. The error remains.</p>
| <python><azure><terraform><azure-functions><azure-blob-storage> | 2023-11-13 18:23:14 | 1 | 486 | RAH |
77,475,846 | 9,112,151 | How to mock __next__ magic method? | <p>How to mock <code>Queue.__next__</code> magic method? The code below with <code>side_effect</code> is not working:</p>
<pre><code>from unittest.mock import MagicMock


class Queue:
    def __init__(self):
        self.nums = [1, 2, 3]

    def __iter__(self):
        return self

    def __next__(self):
        try:
            num = self.nums.pop()
        except Exception:
            raise StopIteration
        return num


class Consumer:
    def __init__(self, queue):
        self._queue = queue

    def consume(self):
        for msg in self._queue:
            raise ValueError(msg)


def test_it():
    queue = MagicMock()
    queue.__next__.side_effect = [4, 5, 6]  # not working
    consumer = Consumer(queue)
    consumer.consume()
</code></pre>
<p>The Python interpreter never even enters the <code>__next__</code> method. How can I make it work?</p>
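<p>For reference, a sketch of why the <code>side_effect</code> never fires: a <code>for</code> loop calls <code>iter()</code> on the object and then calls <code>next()</code> on the iterator that call returns, so on a <code>MagicMock</code> it is <code>__iter__</code> that needs configuring:</p>

```python
from unittest.mock import MagicMock

queue = MagicMock()
# A for-loop does iter(queue) and then next() on the *returned* iterator,
# so the mock's own __next__ is never consulted.
queue.__iter__.return_value = iter([4, 5, 6])
print(list(queue))  # [4, 5, 6]
```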
| <python><pytest><python-unittest><python-unittest.mock> | 2023-11-13 17:49:12 | 2 | 1,019 | Альберт Александров |
77,475,822 | 1,775,010 | Python/Mido crashing without any error message | <p>I'm pretty new to Python and MIDI processing; however, what I see does not seem normal to me.</p>
<p>In this code, the <code>listen_input</code> MIDI callback crashes without any error:</p>
<pre class="lang-py prettyprint-override"><code>import copy
import mido
import time

def listen_input(message):
    print('input:', message)
    print(1)
    filtered_on_message = copy.copy(message)
    filtered_on_message.velocity = 50
    output_message(filtered_on_message)
    print(2)

inport = mido.open_input('reface CP', callback=listen_input)
outport = mido.open_output('reface CP')

while True:
    time.sleep(1)
</code></pre>
<p>outputs:</p>
<pre><code>input: note_on channel=0 note=79 velocity=81 time=0
1
input: note_on channel=0 note=79 velocity=0 time=0
1
input: note_on channel=0 note=79 velocity=77 time=0
1
input: note_on channel=0 note=79 velocity=0 time=0
1
</code></pre>
<p><code>print(2)</code> is never called.</p>
<p>I'm guessing those errors are displayed in a separate process or something, or maybe a log, due to the callback?</p>
<p>Any idea?</p>
| <python><midi> | 2023-11-13 17:43:27 | 1 | 1,068 | theredled |
77,475,807 | 7,827,848 | Response time for GPT models via OpenAI API vs internet version | <p>I am making a small timing test of the OpenAI API, from my local internet connection and laptop, but I get times that are much larger than expected, using the following code:</p>
<pre><code>import openai
import time
import tiktoken

OPENAI_KEY = 'xxx'
openai.api_key = OPENAI_KEY
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

def get_completion(prompt, model="gpt-3.5-turbo", temperature=0.5):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,  # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]

for i in range(5):
    tic = time.time()
    answer = get_completion('Hello. My name is Bob. I would like to know how can I have a good day')
    print(answer)
    delta = (time.time() - tic) * 1000
    tokens = len(encoding.encode(answer))
    print('time: ', delta)
    print('tokens: ', tokens)
    print('milliseconds/token: ', delta / tokens)
</code></pre>
<p>I get typical times like:</p>
<pre><code>time: 39341.29881858826 (milliseconds)
tokens: 292
milliseconds/token: 134.73047540612416
</code></pre>
<p>whereas using the standard chatGPT online version at <a href="https://chat.openai.com/" rel="nofollow noreferrer">https://chat.openai.com/</a> I am getting times around 18 sec (to compare with a typical value of around 40 secs and more with the API), for the same request/prompt and with a similar output token length, therefore the online version is at least twice as fast. These experiments use the same model.</p>
<p>My question is this: should I expect such a difference in time response between the model when used on the internet and with the API or am I making some mistake? I tried also considering node.js and I get similar timings (but the python code should be simpler to debug).</p>
<p>I searched on the internet but I did not get yet to the conclusion whether my test has some issue or not.</p>
<p>UPDATE: it does not seem to be related to my account, even if it would be great to have a confirmation from users with different API-keys.</p>
| <python><openai-api><latency> | 2023-11-13 17:41:13 | 0 | 331 | Thomas |
77,475,591 | 15,412,041 | Python modules collection from requirements.txt | <p>I have a question about downloading Python modules based on <code>requirements.txt</code>. Let's say I have a bundle of folders, A and B, each containing some subdirectories.</p>
<p>Each folder contains <code>requirements.txt</code>, which contains a specific package:</p>
<pre><code>$ cat requirements.txt
boto3
</code></pre>
<p>When I run a bash script, the script will always download the package from the internet, which takes time each time.</p>
<p>To increase build speed, instead of downloading packages from the internet, how can I re-use the package until I change the version inside the <code>requirements.txt</code> file?</p>
<pre class="lang-bash prettyprint-override"><code>cat /workspace/source/A/Lambda/requirements.txt
boto3
cat /workspace/source/B/Lambda/requirements.txt
pymongo=4.6.0
</code></pre>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash

startdir=(
    "/workspace/source/A/Lambda"
    "/workspace/source/B/Lambda")

SRC_DIR=/workspace/source/src_lambda

if [ ! -d $SRC_DIR ]
then
    mkdir $SRC_DIR
else
    echo "Source Python Modules Directory Cleaning"
    cd "$SRC_DIR" && rm -rf *.zip
    cd ..
fi

for dir in "${startdir[@]}"; do
    (
        cd "$dir" || continue
        for sub_dir in */ ; do
            cd "$sub_dir" || exit
            echo "---> Finding pip modules based on requirements.txt file"
            pip install --platform manylinux2014_x86_64 --implementation cp --python-version 3.11 --only-binary=:all: -r requirements.txt -t .
            zip -r "${sub_dir%/}.zip" .
            echo "---> Copying the zip files "${sub_dir%/}".zip into destination location "$SRC_DIR""
            cp -r "${sub_dir%/}".zip $SRC_DIR
            cd ..
        done
    )
done

echo "zip files has been moved to SRC_DIR"
</code></pre>
| <python><pip> | 2023-11-13 17:03:39 | 0 | 611 | Gowmi |
77,475,570 | 893,254 | How can I serialize a Python list as a single line in JSON? | <p>It appears that by default, Python's <code>json</code> module serializes a Python list with a new line between each element, like so:</p>
<pre><code>"data": [
-6.150000000000006,
-0.5,
0.539999999999992,
0.5800000000000125,
-4.6299999999999955,
12.0,
2.829999999999984,
-1.4199999999999875,
1.759999999999991,
-1.25,
</code></pre>
<p>I would like to serialize the contents of this list on a single line, if possible. How can I do that?</p>
<p>I am using <code>json.dumps(data, indent=4)</code> to perform the serialization step.</p>
<p>Please note that this list structure is part of a larger dictionary structure, and I would like to retain the indentation for the serialization of that dictionary.</p>
<p>The intended purpose is that the dictionary should be human legible, whereas the contents of the list can be on a single line, and it is still relatively easy to read. The data in the list is also less important for human legibility.</p>
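<p>There is no built-in flag for this, but one workaround (a sketch that assumes the lists contain only scalars, with no nested brackets and no string values containing <code>]</code>) is to serialize with <code>indent</code> as usual and then collapse the bracketed spans with a regex:</p>

```python
import json
import re

def dumps_compact_lists(obj, indent=4):
    """json.dumps with indentation, but with flat lists folded onto one line."""
    text = json.dumps(obj, indent=indent)
    # Match [...] spans containing no nested brackets/braces and rejoin them.
    return re.sub(
        r"\[\s*([^\[\]{}]*?)\s*\]",
        lambda m: "[" + ", ".join(p.strip() for p in m.group(1).split(",")) + "]",
        text,
        flags=re.S,
    )

doc = {"meta": {"name": "run1"}, "data": [-6.15, -0.5, 0.54]}
print(dumps_compact_lists(doc))
```

The surrounding dictionary keeps its indented, human-legible layout; only the innermost lists are folded.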
| <python><json> | 2023-11-13 17:01:05 | 2 | 18,579 | user2138149 |
77,475,508 | 21,346,793 | How to fix vertical naming of endpoints in swagger | <p>I'm writing my project in DRF, and for auto-documentation I use drf-spectacular, but I have one problem: <a href="https://i.sstatic.net/ECTz7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ECTz7.png" alt="enter image description here" /></a></p>
<p>How can I fix this?</p>
| <python><django-rest-framework><swagger><drf-spectacular> | 2023-11-13 16:52:14 | 2 | 400 | Ubuty_programmist_7 |
77,475,285 | 4,537,160 | Pytorch CrossEntropy Loss, getting error: "RuntimeError: Boolean value of Tensor with more than one value is ambiguous" | <p>I have a classification model producing predictions for 4 classes in a tensor of shape (256, 1, 4), where 256 is the batch size and the "1" in the second dimension is due to some internal model logic and can be removed:</p>
<pre><code>preds.shape
torch.Size([256, 1, 4])
</code></pre>
<p>The corresponding annotations are one-hot encoded, in a tensor of the same shape:</p>
<pre><code>targets.shape
torch.Size([256, 1, 4])
</code></pre>
<p>so, in every row there is only one non-zero element:</p>
<pre><code>targets[0][0] = [1, 0, 0, 0]
</code></pre>
<p>I need to calculate the CrossEntropy loss of the model. I know that <code>CrossEntropyLoss</code> expects class indices as the target, so I tried using argmax to determine the position of the 1 for each sample, and squeezed the extra dimension:</p>
<pre><code>predictions_squeezed = preds.squeeze(1)
targets = torch.argmax(targets.squeeze(1), dim=1)
losses = torch.nn.CrossEntropyLoss(predictions_squeezed, targets)
</code></pre>
<p>But I'm getting the error:</p>
<pre><code>RuntimeError: Boolean value of Tensor with more than one value is ambiguous
</code></pre>
<p>What am I doing wrong here?</p>
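<p>As a side note, a minimal sketch of the usual instantiate-then-call pattern for <code>CrossEntropyLoss</code> (the random tensors below are hypothetical stand-ins for the real predictions and targets):</p>

```python
import torch

preds = torch.randn(256, 4)            # stand-in logits after squeeze(1)
targets = torch.randint(0, 4, (256,))  # stand-in class indices from argmax

loss_fn = torch.nn.CrossEntropyLoss()  # instantiate the loss module first...
loss = loss_fn(preds, targets)         # ...then call it on (input, target)
print(loss.dim())  # 0: the default "mean" reduction yields a scalar
```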
| <python><pytorch><cross-entropy> | 2023-11-13 16:13:58 | 1 | 1,630 | Carlo |
77,474,965 | 4,382,305 | Implement XOR with Pytorch | <p>I want to implement the XOR logical operator, but I don't get the optimal answer. I need three output columns for subsequent operations.
I have a data.csv file as the data file (XOR):</p>
<pre><code>in1,in2,in3,in4,out1,out2,out3
0,0,0,0,0,0,0
0,0,0,1,0,0,1
0,0,1,0,0,0,1
0,0,1,1,0,0,0
0,1,0,0,0,0,1
0,1,0,1,0,0,0
0,1,1,0,0,0,0
1,0,0,0,0,0,1
1,0,0,1,0,0,0
1,0,1,0,0,0,0
1,0,1,1,0,0,1
1,1,0,0,0,0,0
1,1,0,1,0,0,1
1,1,1,0,0,0,1
</code></pre>
<p>The Python code is:</p>
<pre><code>import torch
import torch.nn as nn
import pandas as pd
import numpy as np

# Defining input size, hidden layer size and output size respectively
n_in, n_h, n_out = 4, 5, 3

# Create input and target tensors (data)
df = pd.read_csv('data.csv')

input_cols = ['in1', 'in2', 'in3', 'in4']
output_cols = ['out1', 'out2', 'out3']

input_np_array = df[input_cols].to_numpy()
target_np_array = df[output_cols].to_numpy()

inputs = torch.tensor(input_np_array, dtype=torch.float32)
targets = torch.tensor(target_np_array, dtype=torch.float32)

model = nn.Sequential(
    nn.Linear(n_in, n_h),
    nn.Sigmoid(),
    nn.Linear(n_h, n_out),
    nn.Sigmoid())

# Construct the loss function
criterion = torch.nn.MSELoss()

# Construct the optimizer (Stochastic Gradient Descent in this case)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Gradient Descent
for epoch in range(500000):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(inputs)

    # Compute and print loss
    loss = criterion(y_pred, targets)
    if epoch % 100 == 0:
        print('epoch: ', epoch, ' loss: ', loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()

    # perform a backward pass (backpropagation)
    loss.backward()

    # Update the parameters
    optimizer.step()

test_data = torch.tensor([1, 1, 1, 1], dtype=torch.float32)  # 0,0,0
output = model(test_data)
probabilities = torch.nn.functional.sigmoid(output)
# probabilities = torch.nn.functional.softmax(output, dim=0)
print(probabilities)

test_data = torch.tensor([0, 1, 1, 1], dtype=torch.float32)  # 0,0,1
output = model(test_data)
probabilities = torch.nn.functional.sigmoid(output)
print(probabilities)
</code></pre>
<p>The output is:</p>
<pre><code>tensor([0.5058, 0.5060, 0.6247], grad_fn=<SigmoidBackward0>)
tensor([0.5052, 0.5058, 0.6214], grad_fn=<SigmoidBackward0>)
</code></pre>
<p>Why am I not getting the optimal answer?</p>
| <python><pytorch><neural-network> | 2023-11-13 15:18:39 | 1 | 2,091 | Darwin |
77,474,923 | 17,487,457 | Calculate the mean of absolute SHAP values across all classes | <p>Suppose I have the following model, built from this synthetic data.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
X, y = make_classification(n_samples=1000,
                           n_features=50,
                           n_informative=9,
                           n_redundant=0,
                           n_repeated=0,
                           n_classes=10,
                           n_clusters_per_class=1,
                           class_sep=9,
                           flip_y=0.2,
                           random_state=17)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestClassifier()
model.fit(X_train, y_train)
</code></pre>
<p>And I calculate features shap values:</p>
<pre class="lang-py prettyprint-override"><code>explainer = shap.Explainer(model)
shap_values = explainer.shap_values(X_test)
type(shap_values)
list
</code></pre>
<p>To calculate each class' SHAP values separately, I do:</p>
<pre class="lang-py prettyprint-override"><code>abs_sv = np.abs(shap_values)
avg_feature_importance_per_class = np.mean(abs_sv, axis=1)
avg_feature_importance_per_class.shape
(10, 50)
</code></pre>
<p><strong>Question</strong></p>
<p>Now, how do I calculate the mean of absolute SHAP values across all classes, which I can consider as the model's feature importance (derived from SHAP values).</p>
<p>I do it like this:</p>
<pre class="lang-py prettyprint-override"><code>feature_importance_overall = np.mean(abs_sv, axis=0)
</code></pre>
<p>But then I got myself confused. Am I really doing this right? Especially if I look at the shape:</p>
<pre class="lang-py prettyprint-override"><code>feature_importance_overall.shape
(250, 50)
</code></pre>
<p>I was expecting something of the shape <code>(number_of_features_,)</code>, similar to what I get from:</p>
<pre class="lang-py prettyprint-override"><code>model.feature_importances_.shape
(50,)
</code></pre>
<p><code>avg_feature_importance_per_class.shape</code> also shows this but for <code>number_of_classes</code> (i.e. <code>(10, 50)</code>) since this is computed for individual classes separately.</p>
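<p>For what it's worth, a small sketch with random numbers standing in for the absolute SHAP values (shapes as in the question: 10 classes, 250 test samples, 50 features): collapsing to one importance per feature means averaging over both the class axis and the sample axis.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for np.abs(shap_values): (n_classes, n_samples, n_features)
abs_sv = np.abs(rng.normal(size=(10, 250, 50)))

per_class = abs_sv.mean(axis=1)     # (10, 50): one importance row per class
overall = abs_sv.mean(axis=(0, 1))  # (50,): averaged over classes and samples

print(per_class.shape, overall.shape)  # (10, 50) (50,)
```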
| <python><numpy><shap> | 2023-11-13 15:12:00 | 1 | 305 | Amina Umar |
77,474,917 | 13,423,905 | How do I interpolate a missing area in a 2D image in python using Gaussians? | <p>I have been looking for some version of the answer to this question in many places, but I feel like the indexing is just getting the better of me. I have a 2D Gaussian image, whose grayscale rendering appears below.</p>
<p>Visual examination shows that the area with the slit spans rows 169 to 185 and columns 0 to 179. Below is code that loads the image from a TIFF file ('Cropped.tif') and converts it from RGB to grayscale:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

def rgb2gray(rgb):
    return np.dot(rgb[..., :3], [0.2989, 0.5870, 0.1140])

# plot the grayscale image
plt.title("Image Grayscale")
plt.xlabel("X pixels")
plt.ylabel("Y Pixels Label")

image_unprocessed = mpimg.imread('Cropped.tif')
plt.imshow(rgb2gray(image_unprocessed))
plt.show()

image_grayscale = rgb2gray(image_unprocessed)
</code></pre>
<p><a href="https://i.sstatic.net/KDj6w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KDj6w.png" alt="enter image description here" /></a></p>
<p>Now I want to interpolate in that small slit that appears in the middle. What I have essentially done is set the values in the slit to NaN manually. The image itself is <code>image_grayscale</code>, a 354x342 numpy array, for which I want to interpolate in the region [169:195, 0:195]. My MWE is as follows:</p>
<pre><code>from scipy import interpolate

image_grayscale[169:195, 0:195] = np.nan

x = np.arange(0, image_grayscale.shape[1])
y = np.arange(0, image_grayscale.shape[0])

image_grayscale = np.ma.masked_invalid(image_grayscale)
xx, yy = np.meshgrid(x, y)

x1 = xx[~image_grayscale.mask]
y1 = yy[~image_grayscale.mask]
new_imagegreyscale = image_grayscale[~image_grayscale.mask]

GD1 = interpolate.griddata((x1, y1), new_imagegreyscale.ravel(),
                           (xx, yy), method='linear')

plt.title("Image interpolated")
plt.xlabel("X pixels")
plt.ylabel("Y Pixels Label")
plt.imshow(GD1)
plt.show()
</code></pre>
<p>This returns the following image, which isn't bad but definitely isn't perfect: you can see that there are imperfections near the middle which I want to get rid of.</p>
<p><a href="https://i.sstatic.net/JWlHA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JWlHA.png" alt="enter image description here" /></a></p>
<p>However, I feel like if I could just let the interpolator know to use a Gaussian fit instead of a linear or cubic fit, the interpolated image might be better, but I am not sure how this would change the syntax. I can see that scipy.interpolate has a lot of different interpolators, but I am unsure of 1) which algorithm would be best suited for this case and 2) how the syntax of implementing that algorithm differs from what I have above. I saw that <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RBFInterpolator.html#scipy.interpolate.RBFInterpolator" rel="nofollow noreferrer">RBFInterpolator</a> has an option to use Gaussian basis functions, but when I try the RBF interpolator as follows:</p>
<pre><code>from scipy.interpolate import RBFInterpolator

image_grayscale[169:195, 0:195] = np.nan

x = np.arange(0, image_grayscale.shape[1])
y = np.arange(0, image_grayscale.shape[0])

image_grayscale = np.ma.masked_invalid(image_grayscale)
xx, yy = np.meshgrid(x, y)

x1 = xx[~image_grayscale.mask]
y1 = yy[~image_grayscale.mask]
new_imagegreyscale = image_grayscale[~image_grayscale.mask]

GD1 = RBFInterpolator((x1, y1), new_imagegreyscale.ravel(),
                      (xx, yy))

plt.title("Image interpolated")
plt.xlabel("X pixels")
plt.ylabel("Y Pixels Label")
plt.imshow(GD1)
plt.show()
</code></pre>
<p>I get the value error "Expected the first axis of <code>d</code> to have length 2.". I can see that this is a syntax issue, but I have been really struggling to make sense of the correct syntax. Sorry if this has been asked before, but the other threads I found on 2D interpolation didn't quite give me the clarity I was looking for.</p>
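<p>On the syntax point only, here is a sketch with synthetic data (not the asker's image): <code>RBFInterpolator</code>, unlike <code>griddata</code>, expects the observation coordinates stacked into one <code>(n, ndim)</code> array, and passing a <code>(x1, y1)</code> tuple is what triggers the "first axis of <code>d</code>" error.</p>

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
x1 = rng.uniform(0, 10, 300)
y1 = rng.uniform(0, 10, 300)
vals = np.exp(-((x1 - 5) ** 2 + (y1 - 5) ** 2) / 8.0)  # synthetic smooth surface

pts = np.column_stack([x1, y1])  # shape (300, 2), not the tuple (x1, y1)
rbf = RBFInterpolator(pts, vals, kernel="gaussian", epsilon=0.5, neighbors=50)

query = np.array([[5.0, 5.0], [0.0, 0.0]])
print(rbf(query).shape)  # (2,)
```

For a full grid of query points, the coordinates would likewise be stacked, e.g. <code>np.column_stack([xx.ravel(), yy.ravel()])</code>, and the result reshaped back to the image shape.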
| <python><image><scipy><interpolation> | 2023-11-13 15:11:13 | 0 | 369 | hex93 |
77,474,902 | 17,124,619 | Retrieve columns from async oracle sql | <p>I am attempting to retrieve column names from my Oracle SQL query, but I am using FastAPI with an Oracle DB database. I know that <code>cx_Oracle</code> and <code>oracledb</code> do not have asyncio features yet and that this is being worked on.</p>
<p>Therefore, how do I extract column names with the library <code>cx_Oracle_async</code>, which integrates asyncio with <code>cx_Oracle</code>? I checked the source code and no <code>description</code> attribute is exposed.</p>
<p>For example:</p>
<pre><code>class SQLFast:
    """SQL allows you to perform any type of SQL command in Python"""

    async def __aenter__(self):
        # Using cx_Oracle_async.create_pool to create an asynchronous connection pool
        self.pool = await cx_Oracle_async.create_pool(
            user=DB_USERNAME,
            password=DB_PASSWORD,
            dsn=f"{DB_HOST}:{DB_PORT}/{DB_SERVICE}",
        )
        self.connection = await self.pool.acquire()
        # Creating an asynchronous cursor
        self.cursor = await self.connection.cursor()
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self.pool.release(self.connection)

    async def sql_query(self, query: str, return_type=pd.DataFrame):
        """Returns either a DataFrame or a list from a query"""
        try:
            await self.cursor.execute(query)
            if return_type == pd.DataFrame:
                result = await self.cursor.fetchall()
                if result:
                    # Fetch columns directly from the cursor description
                    columns = [desc[0] for desc in self.cursor.description]
                    if columns:
                        return pd.DataFrame(result, columns=columns)
                    else:
                        return pd.DataFrame(result)
                else:
                    return pd.DataFrame()  # Return an empty DataFrame for no result
            elif return_type == list:
                return result
        except Exception as e:
            logger.error(f"Error executing SQL query: {e}")
            raise
</code></pre>
<p>When I attempt to read a table from sql, I will get the following error:</p>
<blockquote>
<p>columns = [desc[0] for desc in self.cursor.description]
AttributeError: 'AsyncCursorWrapper' object has no attribute 'description'</p>
</blockquote>
| <python><fastapi> | 2023-11-13 15:09:10 | 0 | 309 | Emil11 |
77,474,676 | 1,581,090 | How to plot times on the x-axis with matplotlib? | <p>Following the example given <a href="https://www.tutorialspoint.com/how-to-show-date-and-time-on-the-x-axis-in-matplotlib" rel="nofollow noreferrer">HERE</a> I want to create a plot that shows just the time on the x-axis, not the date. So I modified the example code to the following:</p>
<pre><code>from datetime import datetime as dt
from matplotlib import pyplot as plt, dates as mdates
plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True
dates = ["01/02/2020 10:00", "01/02/2020 10:05", "01/02/2020 10:10"]
x_values = [dt.strptime(d, "%m/%d/%Y %H:%M").date() for d in dates]
y_values = [1, 2, 3]
ax = plt.gca()
formatter = mdates.DateFormatter("%H:%M")
ax.xaxis.set_major_formatter(formatter)
locator = mdates.HourLocator()
ax.xaxis.set_major_locator(locator)
plt.plot(x_values, y_values)
plt.show()
</code></pre>
<p>but instead of showing a range in time from 10:00 to 10:10 I get the following:</p>
<p><a href="https://i.sstatic.net/urifv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/urifv.png" alt="enter image description here" /></a></p>
<p>What is wrong?</p>
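<p>One thing worth checking in isolation is what <code>.date()</code> does to the parsed values; this stdlib-only sketch shows the clock time being discarded:</p>

```python
from datetime import datetime as dt

dates = ["01/02/2020 10:00", "01/02/2020 10:05", "01/02/2020 10:10"]

with_time = [dt.strptime(d, "%m/%d/%Y %H:%M") for d in dates]
date_only = [dt.strptime(d, "%m/%d/%Y %H:%M").date() for d in dates]

# .date() discards hours and minutes, so all three x-values collapse
# onto the same calendar day.
print(with_time)   # three distinct datetimes, 5 minutes apart
print(date_only)   # three identical dates
```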
| <python><matplotlib> | 2023-11-13 14:38:59 | 1 | 45,023 | Alex |
77,474,534 | 1,585,696 | "BlockingIOError: [Errno 11] Resource temporarily unavailable" when installing Python 3.10.13 | <p>I'm running into a blocking issue when installing Python 3.10.13 on a server running Ubuntu 20.04.6 LTS.</p>
<p>I download the source, <code>configure</code> with <code>--prefix</code> specified, <code>make</code>, and then <code>make install</code>. The error occurs during the <code>make install</code> portion, at the point when the Makefile is executing this command:</p>
<pre><code>PYTHONPATH=/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10 \
./python -E -Wi /home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/compileall.py \
-j0 -d /home/dh_2i85ds/opt/python-3.10.13/lib/python3.10 -f \
-x 'bad_coding|badsyntax|site-packages|lib2to3/tests/data' \
/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10
</code></pre>
<p>The traceback is:</p>
<pre><code>Listing '/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10'...
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/compileall.py", line 462, in <module>
exit_status = int(not main())
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/compileall.py", line 439, in main
if not compile_dir(dest, maxlevels, args.ddir,
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/compileall.py", line 103, in compile_dir
results = executor.map(partial(compile_file,
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 766, in map
results = super().map(partial(_process_chunk, fn),
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/_base.py", line 610, in map
fs = [self.submit(fn, *args) for args in zip(*iterables)]
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/_base.py", line 610, in <listcomp>
fs = [self.submit(fn, *args) for args in zip(*iterables)]
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 738, in submit
self._start_executor_manager_thread()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 678, in _start_executor_manager_thread
self._launch_processes()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 705, in _launch_processes
self._spawn_process()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 714, in _spawn_process
p.start()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/context.py", line 281, in _Popen
return Popen(process_obj)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/popen_fork.py", line 66, in _launch
self.pid = os.fork()
BlockingIOError: [Errno 11] Resource temporarily unavailable
^CProcess ForkProcess-19:
Process ForkProcess-14:
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkProcess-17:
Process ForkProcess-10:
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkProcess-15:
Process ForkProcess-8:
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkProcess-12:
Process ForkProcess-6:
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkProcess-2:
Traceback (most recent call last):
Process ForkProcess-4:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 103, in get
res = self._recv_bytes()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/connection.py", line 414, in _recv_bytes
buf = self._recv(4)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
Exception ignored in atexit callback: <function _exit_function at 0x6ae9377b52d0>
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/util.py", line 357, in _exit_function
p.join()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 149, in join
res = self._popen.wait(timeout)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/popen_fork.py", line 43, in wait
return self.poll(os.WNOHANG if timeout == 0.0 else 0)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/popen_fork.py", line 27, in poll
pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt:
Process ForkProcess-9:
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkProcess-11:
Process ForkProcess-13:
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
Traceback (most recent call last):
KeyboardInterrupt
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkProcess-7:
Process ForkProcess-3:
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkProcess-18:
Process ForkProcess-5:
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkProcess-1:
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkProcess-16:
Traceback (most recent call last):
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/concurrent/futures/process.py", line 240, in _process_worker
call_item = call_queue.get(block=True)
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/queues.py", line 102, in get
with self._rlock:
File "/home/dh_2i85ds/opt/python-3.10.13/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
make: [Makefile:1551: libinstall] Error 1 (ignored)
</code></pre>
<p><strong>Edit</strong>: The soft and hard max user processes limit is 2061293:</p>
<pre><code>[dh_2i85ds@pdx1-shared-a1-08:~/Python-3.10.13]$ ulimit -aH | grep "\-u"
max user processes (-u) 2061293
[dh_2i85ds@pdx1-shared-a1-08:~/Python-3.10.13]$ ulimit -aS | grep "\-u"
max user processes (-u) 2061293
</code></pre>
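<p>As a side note, <code>os.fork()</code> failing with EAGAIN can also come from limits other than the classic <code>ulimit -u</code> (for example a cgroup <code>pids.max</code> on shared hosts). The soft limit the running process actually sees can be inspected from Python itself; a Unix-only sketch:</p>

```python
import resource

# RLIMIT_NPROC is the per-user process/thread cap the kernel enforces for
# this process; fork() raises EAGAIN when exceeding it.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("soft:", soft, "hard:", hard)
```

On a shared server it may also be worth checking the relevant <code>pids.max</code> under <code>/sys/fs/cgroup</code>, since that limit does not show up in <code>ulimit</code> output.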
| <python><python-3.x><ubuntu><makefile> | 2023-11-13 14:17:06 | 0 | 594 | Tyler D |
77,474,487 | 5,873,325 | Compare dataframe columns by another column elements and compute accuracy percentage | <p>Let's say I have two dataframes of unequal lengths, <strong>df1</strong> & <strong>df2</strong>.</p>
<p>The first dataframe <strong>df1</strong> looks like this :</p>
<pre><code>id | a | b | c | d
--------------------------
i1 | 1,2 | 0 | Nan | 3
i2 | 1 | 4 | 2 | 0,3
i3 | 0 | 1 | 2,3 | Nan
</code></pre>
<p>And the second dataframe <strong>df2</strong> looks like this :</p>
<pre><code>id | a | b | c | d
--------------------------
i5 | 0,3 | 1 | 2,4 | Nan
i2 | 1 | 4 | 2 | 0
i3 | 0 | 1 | 2,3 | Nan
i4 | 0 | 1 | 2,3 | Nan
i1 | 1 | 0 | Nan | 3
</code></pre>
<p>Both dataframes have the same column names: <strong>a, b, c & d</strong>. They might have the same elements in the column <strong>id</strong>, but not necessarily in the same order.</p>
<p>Now for every common id number in the column <strong>id</strong>, I want to compare all the columns <strong>a, b, c & d</strong> values and compute an accuracy percentage.</p>
<p>For example, let's look at the id number <strong>i1</strong>: in df1 the values in column <strong>a</strong> for this id number are 1,2, but in df2, with the same id number <strong>i1</strong>, the values in column <strong>a</strong> are 1. So the value 2 is missing and therefore the accuracy should be something like 50%. This will be repeated for every column and for every row. If an id number in df2 does not exist in df1, or the opposite, it should be ignored. I want to compute the final accuracy percentage and be able to visualize the elements that match and the ones that do not.</p>
<p>To compare column a, I have tried something like this :</p>
<pre><code>new_df2 = df2[df2.id.isin(df1.id)]
acc = (df1.sort_values('id')['a'].reset_index()==new_df2.sort_values('id')['a'].reset_index()).mean()
</code></pre>
<p>I am not sure if this is correct or if it's the best way to do it. Any suggestions?</p>
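<p>One possible direction, sketched below under the assumption that each cell is a comma-separated set of tokens and that matching should be order-insensitive. The Jaccard-style per-cell score is my own choice of metric (it gives 50% for "1,2" vs "1"), not the only reasonable one:</p>

```python
import pandas as pd

# Mini versions of df1 and df2 from the question.
df1 = pd.DataFrame({
    "id": ["i1", "i2", "i3"],
    "a": ["1,2", "1", "0"],
    "b": ["0", "4", "1"],
    "c": [None, "2", "2,3"],
    "d": ["3", "0,3", None],
})
df2 = pd.DataFrame({
    "id": ["i5", "i2", "i3", "i4", "i1"],
    "a": ["0,3", "1", "0", "0", "1"],
    "b": ["1", "4", "1", "1", "0"],
    "c": ["2,4", "2", "2,3", "2,3", None],
    "d": [None, "0", None, None, "3"],
})

def cell_accuracy(left, right):
    # Treat each cell as a set of comma-separated tokens; two NaNs match.
    if pd.isna(left) and pd.isna(right):
        return 1.0
    if pd.isna(left) or pd.isna(right):
        return 0.0
    l, r = set(str(left).split(",")), set(str(right).split(","))
    return len(l & r) / len(l | r)   # Jaccard overlap: "1,2" vs "1" -> 0.5

# Inner merge keeps only the ids present in both frames.
merged = df1.merge(df2, on="id", suffixes=("_1", "_2"))
cols = ["a", "b", "c", "d"]
acc = pd.DataFrame(
    {c: [cell_accuracy(row[f"{c}_1"], row[f"{c}_2"])
         for _, row in merged.iterrows()] for c in cols},
    index=pd.Index(merged["id"], name="id"),
)
print(acc)                           # per-cell accuracy, 1.0 = exact match
print("overall:", acc.values.mean())
```

The <code>acc</code> frame gives the per-cell view of what matched, and its mean gives the final percentage.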
| <python><pandas><dataframe> | 2023-11-13 14:10:14 | 1 | 640 | Mejdi Dallel |
77,474,455 | 11,932,905 | Pandas - drop records if one specific column value is equal to previous record | <p>I have a dataframe like the following (sorted by id, time and status):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>status</th>
<th>timestamp</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>A</td>
<td>29.08.2023 12:39</td>
</tr>
<tr>
<td>111</td>
<td>A</td>
<td>29.08.2023 12:45</td>
</tr>
<tr>
<td>111</td>
<td>B</td>
<td>29.08.2023 12:47</td>
</tr>
<tr>
<td>111</td>
<td>C</td>
<td>29.08.2023 12:50</td>
</tr>
<tr>
<td>111</td>
<td>A</td>
<td>29.08.2023 12:50</td>
</tr>
<tr>
<td>112</td>
<td>A</td>
<td>29.08.2023 12:50</td>
</tr>
<tr>
<td>112</td>
<td>B</td>
<td>29.08.2023 13:09</td>
</tr>
<tr>
<td>112</td>
<td>C</td>
<td>29.08.2023 13:40</td>
</tr>
<tr>
<td>112</td>
<td>B</td>
<td>29.08.2023 13:50</td>
</tr>
<tr>
<td>112</td>
<td>A</td>
<td>29.08.2023 13:55</td>
</tr>
</tbody>
</table>
</div>
<p>I need to remove repeats in status sequences, but only for cases where the status repeats in directly adjacent records. Meaning that id 111 can have multiple statuses == 'A', but I need to remove the second 'A' if the previous status in time was also 'A'.
So the new table should look like this (with the second line with the same status removed):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>status</th>
<th>timestamp</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>A</td>
<td>29.08.2023 12:39</td>
</tr>
<tr>
<td>111</td>
<td>B</td>
<td>29.08.2023 12:47</td>
</tr>
<tr>
<td>111</td>
<td>C</td>
<td>29.08.2023 12:50</td>
</tr>
<tr>
<td>111</td>
<td>A</td>
<td>29.08.2023 12:50</td>
</tr>
<tr>
<td>112</td>
<td>A</td>
<td>29.08.2023 12:50</td>
</tr>
<tr>
<td>112</td>
<td>B</td>
<td>29.08.2023 13:09</td>
</tr>
<tr>
<td>112</td>
<td>C</td>
<td>29.08.2023 13:40</td>
</tr>
<tr>
<td>112</td>
<td>B</td>
<td>29.08.2023 13:50</td>
</tr>
<tr>
<td>112</td>
<td>A</td>
<td>29.08.2023 13:55</td>
</tr>
</tbody>
</table>
</div>
<p>So in the end we have a unique sequences of statuses for each id without repeats.</p>
<p>I'd appreciate any help, because I'm stuck with a very slow, blunt approach that compares each row with the previous one, and the dataset is huge: > 5M records.</p>
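<p>A vectorized sketch, assuming the frame is already sorted by id and timestamp as shown (column names match the question; the frame below is a small stand-in):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "id": [111, 111, 111, 111, 111, 112, 112, 112, 112, 112],
    "status": ["A", "A", "B", "C", "A", "A", "B", "C", "B", "A"],
    "timestamp": ["29.08.2023 12:39", "29.08.2023 12:45", "29.08.2023 12:47",
                  "29.08.2023 12:50", "29.08.2023 12:50", "29.08.2023 12:50",
                  "29.08.2023 13:09", "29.08.2023 13:40", "29.08.2023 13:50",
                  "29.08.2023 13:55"],
})

# Keep a row unless it has the same id AND the same status as the row
# directly above it; shift() compares each row to its predecessor without
# a Python-level loop, so this stays fast on millions of rows.
keep = (df["status"] != df["status"].shift()) | (df["id"] != df["id"].shift())
out = df[keep].reset_index(drop=True)
print(out)
```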
| <python><pandas><group-by><duplicates> | 2023-11-13 14:04:27 | 1 | 608 | Alex_Y |
77,474,360 | 5,618,856 | get text from stdout in subprocess.run without UnicodeDecodeError | <p>I read the output from a windows command line call like so:</p>
<pre class="lang-py prettyprint-override"><code>result = subprocess.run(["cmd", "/c", "dir","c:\mypath"], stdout=subprocess.PIPE, text=True,check=True)
</code></pre>
<p>The result may contain unexpected characters and I get a UnicodeDecodeError. I tried to sanitize it with <code>text = result.stdout.encode('ascii','replace').decode('ascii')</code> but this doesn't always help.</p>
<p>How do I robustly read the text avoiding any UnicodeDecodeError?</p>
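<p>One hedged option is to stop guessing the encoding after the fact and instead tell the decoder what to do with undecodable bytes: <code>subprocess.run</code> accepts an <code>errors=</code> argument (forwarded to the stream decoder), and the same substitution can be done manually on raw bytes. A platform-neutral sketch, using a child process that deliberately emits invalid UTF-8:</p>

```python
import subprocess
import sys

# Child process writes bytes that are not valid UTF-8.
proc = subprocess.run(
    [sys.executable, "-c",
     "import sys; sys.stdout.buffer.write(b'ok \\xff end')"],
    stdout=subprocess.PIPE,
    check=True,
)

# Decode with errors="replace": undecodable bytes become U+FFFD instead of
# raising UnicodeDecodeError. Passing text=True, errors="replace" directly
# to subprocess.run achieves the same thing without manual decoding.
text = proc.stdout.decode("utf-8", errors="replace")
print(text)
```

On Windows, <code>dir</code> output is written in the console code page rather than UTF-8, so pairing <code>errors="replace"</code> with an explicit <code>encoding=</code> (the relevant OEM code page) may recover more of the real characters; which code page applies depends on the system.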
| <python><python-3.x><subprocess> | 2023-11-13 13:48:16 | 1 | 603 | Fred |
77,474,250 | 4,399,016 | Pandas Pivot Table returning no Values | <p>I have a pandas data <a href="https://stackoverflow.com/a/77446000/4399016">frame</a> <code>df</code> that looks like this.</p>
<p><a href="https://i.sstatic.net/5GvGX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5GvGX.png" alt="enter image description here" /></a></p>
<pre><code>import pandas as pd
url = "https://www-genesis.destatis.de/genesisWS/rest/2020/data/tablefile?username=DEB924AL95&password=P@ssword1234&name=42153-0002&area=all&compress=false&transpose=false&startyear=1900&endyear=&timeslices=&regionalvariable=&regionalkey=&classifyingvariable1=WERT03&classifyingkey1=BV4TB&classifyingvariable2=WZ08V2&classifyingkey2=&classifyingvariable3=&classifyingkey3=&format=xlsx&job=false&stand=01.01.1970&language=en"
df = pd.read_excel(url, engine='openpyxl')
df = df.iloc[5:-3]
df.columns = ['Variable', 'Date', 'Value']
m = df['Date'].isna()
df['Date'] += '-' + df['Variable'].ffill()
df['Variable'] = df['Variable'].where(m).ffill()
df
import numpy as np
# Reshape your dataframe
out = (df[~m].replace('...', np.nan)
.pivot_table(index='Date', columns='Variable',
values='Value', sort=False)
.reset_index().rename_axis(columns=None))
out
</code></pre>
<p>This gives me only a <code>Date</code> column and no values.</p>
<pre><code> Date
0 January-1991
1 February-1991
2 March-1991
3 April-1991
</code></pre>
<p>What changes do I need to make for this to work?</p>
| <python><pandas><pivot-table> | 2023-11-13 13:28:35 | 1 | 680 | prashanth manohar |
77,474,231 | 19,672,778 | TypeError: string indices must be integers in pandas_datareader.DataReader | <p>I am trying to get Facebook stock prices with pandas_datareader, but it gives me this error. Can anyone help me out?</p>
<pre class="lang-py prettyprint-override"><code>import datetime as dt
import pandas_datareader as pdr

company = 'FB'
start = dt.datetime(2012, 1, 1)
end = dt.datetime.now()
data = pdr.DataReader(company, 'yahoo', start, end)
</code></pre>
| <python><python-3.x><pandas><yahoo-finance> | 2023-11-13 13:24:26 | 1 | 319 | NikoMolecule |
77,473,949 | 10,973,108 | Is it possible to decrease network consumption from a Selenium application? | <p>Today I have a scraper that was implemented in Selenium, and the target website has a lot of barriers to pass, such as Cloudflare checks, IP blocks, etc.</p>
<p>The application already passes these barriers, but we are using a strategy that relies on a network with usage limits.</p>
<p>Is it possible to decrease Selenium's network usage? I think not: because Selenium simulates a real browser, the server will send the full page content.</p>
<p>I'm just asking to be sure.</p>
<p>edit1: target url - <a href="https://precodahora.tcepb.tc.br/" rel="nofollow noreferrer">https://precodahora.tcepb.tc.br/</a></p>
| <python><selenium-webdriver><web-scraping><web-crawler> | 2023-11-13 12:38:12 | 0 | 348 | Daniel Bailo |
77,473,922 | 7,318,488 | Casting a Polars pl.Object column to pl.String raises a ComputeError | <p>I have a pl.LazyFrame with a column of type Object that contains date representations; it also includes missing values (None).<br />
As a first step I would like to convert the column from <code>Object</code> to <code>String</code>; however, this results in a ComputeError. I cannot seem to figure out why. I suppose it is due to the None values; sadly, I cannot drop those at this point.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import polars as pl
rng = np.random.default_rng(12345)
df = pl.LazyFrame(
data={
"date": rng.choice(
[None, "03.04.1998", "03.05.1834", "05.06.2025"], 100
),
}
)
df.with_columns(pl.col("date").cast(pl.String)).collect()
</code></pre>
| <python><dataframe><casting><type-conversion><python-polars> | 2023-11-13 12:33:03 | 4 | 1,840 | Björn |
77,473,880 | 1,841,839 | How to convert a Google Analytics API response to a Pandas data frame | <p>I have some code which exports data from Google Analytics using the Google Analytics Data API.</p>
<pre><code>from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import DateRange, Dimension, Metric, RunReportRequest

def sample_run_report(credentials=None, property_id="YOUR-GA4-PROPERTY-ID"):
    """Runs a simple report on a Google Analytics 4 property."""
    client = BetaAnalyticsDataClient(credentials=credentials)

    dates = [DateRange(start_date="7daysAgo", end_date='today')]
    metrics = [Metric(name='activeUsers')]
    dimensions = []  # [Dimension(name='city')]

    request = RunReportRequest(
        property=f'properties/{property_id}',
        metrics=metrics,
        date_ranges=dates,
    )
    response = client.run_report(request)
    print(response)
</code></pre>
<p>This code works fine; it exports the data in the standard response format for the API.</p>
<pre><code>{
"dimensionHeaders": [
{
object (DimensionHeader)
}
],
"metricHeaders": [
{
object (MetricHeader)
}
],
"rows": [
{
object (Row)
}
]
.....
}
</code></pre>
<p>I would like to format this as a pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html" rel="nofollow noreferrer">DataFrame</a>, but I have not been able to find any method to output it as one.</p>
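<p>For reference, the response object can be walked row by row; the sketch below uses stand-in objects shaped like a GA4 Data API <code>RunReportResponse</code> (the attribute names follow the protobuf message, but verify them against your client version):</p>

```python
from types import SimpleNamespace
import pandas as pd

def report_to_frame(response):
    # Column names come from the dimension and metric headers; each row
    # carries parallel lists of dimension_values and metric_values.
    columns = ([h.name for h in response.dimension_headers]
               + [h.name for h in response.metric_headers])
    records = [[d.value for d in row.dimension_values]
               + [m.value for m in row.metric_values]
               for row in response.rows]
    return pd.DataFrame(records, columns=columns)

# Stand-in response mimicking the shape of RunReportResponse.
response = SimpleNamespace(
    dimension_headers=[SimpleNamespace(name="city")],
    metric_headers=[SimpleNamespace(name="activeUsers")],
    rows=[
        SimpleNamespace(dimension_values=[SimpleNamespace(value="Berlin")],
                        metric_values=[SimpleNamespace(value="42")]),
        SimpleNamespace(dimension_values=[SimpleNamespace(value="Paris")],
                        metric_values=[SimpleNamespace(value="17")]),
    ],
)

df = report_to_frame(response)
print(df)
```

Note that metric values arrive as strings, so a cast (for example <code>df["activeUsers"].astype(int)</code>) is usually wanted afterwards.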
| <python><pandas><google-analytics><google-analytics-api><google-api-python-client> | 2023-11-13 12:25:55 | 1 | 118,263 | Linda Lawton - DaImTo |
77,473,875 | 5,224,236 | Your Kedro project version 0.17.6 does not match Kedro package version | <p>When running <code>kedro ipython</code> in a cloned repo, I get:</p>
<pre><code>_version_mismatch_error(metadata_dict["kedro_init_version"]))
ValueError: Your Kedro project version 0.17.6 does not match Kedro package version
0.18.14 you are running. Make sure to update your project template. See https://github.com/kedro-org/kedro/blob/main/RELEASE.md for how to migrate your Kedro project.
</code></pre>
<p>The thing is, I don't have admin privileges to downgrade Python to 3.8, which would be required to install the matching project version 0.17.6. Any help?</p>
| <python><kedro> | 2023-11-13 12:25:31 | 1 | 6,028 | gaut |
77,473,858 | 9,106,985 | How to add bar values, set_ylim, and set_ylabel on secondary_y | <p>I currently have two problems with the following script:</p>
<ol>
<li><p>cannot add bar labels to sets 8/9</p>
</li>
<li><p>set 8 is "off the charts" regardless of what ylim I set.</p>
<pre><code> import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
categories = ['Cat1', 'Cat2', 'Cat3',
              'Cat4', 'Cat5']
data_sets = {'Set1': [0.151, 0.015, 0.110, 0.204, 0.110],
             'Set2': [0.146, 0.025, 0.088, 0.151, 0.088],
             'Set3': [0.161, 0.027, 0.122, 0.217, 0.122],
             'Set4': [0.145, 0.015, 0.095, 0.174, 0.095],
             'Set5': [0.216, 0.020, 0.160, 0.300, 0.160],
             'Set6': [0.069, 0.050, 0.069, 0.088, 0.088],
             'Set7': [0.069, 0.050, 0.069, 0.088, 0.088],
             'Set8': [132, 131, 139, 137, 135],
             'Set9': [132, 131, 139, 137, 135]}
df = pd.DataFrame(data_sets, index=categories)
ax = df.plot(kind='bar', secondary_y='Set9', rot=0,
             ylabel='test', ylim=[0, .35], grid=True)
plt.ylabel('Check')
for bar in ax.containers:
    ax.bar_label(bar, rotation=90)
fig = ax.get_figure()
ax = fig.get_axes()
ax[1].set_ylim(0, 200)
</code></pre>
</li>
</ol>
| <python><pandas><matplotlib><plot-annotations><grouped-bar-chart> | 2023-11-13 12:22:19 | 1 | 575 | shoggananna |
77,473,850 | 9,434,803 | Trying to set up Polyglot notebook to work with Python and R | <p>I recently installed the Polyglot notebook extension for VS Code, and I am trying to set it up to work with Python and R. According to <a href="https://github.com/dotnet/interactive/blob/main/docs/jupyter-in-polyglot-notebooks.md" rel="nofollow noreferrer">the developer's tutorial</a>, it should be a simple matter of running the following lines in a notebook cell (I am not using Anaconda, and R is already configured for Jupyter notebooks):</p>
<pre><code>#!connect jupyter --kernel-name pythonkernel --kernel-spec python3
#!connect jupyter --kernel-name Rkernel --kernel-spec ir
</code></pre>
<p>Executing a cell run with either of these commands will result in the cell timer running indefinitely without any errors. However, nothing happens.</p>
<p>When posting here, I tried retrieving the version using <code>#!about</code> as suggested in the issue template. Like the above commands, the cell timer runs as if the cell is being executed, but nothing happens.</p>
<p>Seemingly, Polyglot Notebooks is not set up correctly, so I am wondering what I can do to get it to work.</p>
<hr />
<h4>Edit:</h4>
<p>Anyway, I checked the Polyglot notebook: diagnostics console output, and it keeps reiterating the following message indefinitely:</p>
<pre><code>Process 'dotnet' with PID 2588758 exited with code 150 and signal null
Started process 2588770: dotnet tool run dotnet-interactive -- notebook-parser
process 2588770 stderr: You must install or update .NET to run this application.
App: /home/pal_bjartan/.nuget/packages/microsoft.dotnet-interactive/1.0.456201/tools/net7.0/any/Microsoft.DotNet.Interactive.App.dll
Architecture: x64
Framework: 'Microsoft.AspNetCore.App', version '7.0.0' (x64)
.NET location: /usr/share/dotnet/
No frameworks were found.
Learn about framework resolution:
https://aka.ms/dotnet/app-launch-failed
To install missing framework, download:
https://aka.ms/dotnet-core-applaunch?framework=Microsoft.AspNetCore.App&framework_version=7.0.0&arch=x64&rid=arch-x64
</code></pre>
<p>According to the <a href="https://aka.ms/dotnet/app-launch-failed" rel="nofollow noreferrer">linked guide</a> above, it seems this error is caused by the required framework either being missing or not installed correctly. However, querying the package manager, the necessary packages seem to be installed:</p>
<pre><code>> pacman -Qs dotnet 1
local/dotnet-host 7.0.13.sdk113-1
A generic driver for the .NET Core Command Line Interface
local/dotnet-runtime 7.0.13.sdk113-1
The .NET Core runtime
local/dotnet-sdk 7.0.13.sdk113-1
The .NET Core SDK
local/dotnet-targeting-pack 7.0.13.sdk113-1
The .NET Core targeting pack
</code></pre>
<hr />
<p>.NET:</p>
<pre><code>v. 7.0.113
</code></pre>
<p>VS Code:</p>
<pre><code>Version: 1.84.2
Commit: 1a5daa3a0231a0fbba4f14db7ec463cf99d7768e
Date: 2023-11-09T10:50:47.800Z
Electron: 25.9.2
ElectronBuildId: 24603566
Chromium: 114.0.5735.289
Node.js: 18.15.0
V8: 11.4.183.29-electron.0
OS: Linux x64 6.6.1-arch1-1
</code></pre>
<p>Extensions:</p>
<pre><code>.NET Install Tool: v2.0.0
Polyglot Notebooks: v1.0.4562011 Pre-Release
</code></pre>
| <python><r><.net><visual-studio-code><polyglot-notebooks> | 2023-11-13 12:21:13 | 1 | 935 | Pål Bjartan |
77,473,770 | 374,437 | VS Code not debugging into third-party packages | <p>I'm new to VS Code and it seems fine, except that it doesn't seem to allow debugging into Python packages I didn't write (not under the project folder?), nor does it allow me to add breakpoints to code that wasn't written by me.</p>
<p>I tried to set the <code>"justMyCode": false</code> configuration flag, but it doesn't seem to affect anything.</p>
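<p>For reference, this is roughly what my <code>launch.json</code> looks like (the configuration name and program path are illustrative, not my exact setup):</p>

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": false
        }
    ]
}
```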
<p>The standard Microsoft Python and Pylint extensions are installed.</p>
<p>Added according to request:</p>
<p>Version: 1.83.1 (Universal)
Commit: f1b07bd25dfad64b0167beb15359ae573aecd2cc
Date: 2023-10-10T23:46:55.789Z
Electron: 25.8.4
ElectronBuildId: 24154031
Chromium: 114.0.5735.289
Node.js: 18.15.0
V8: 11.4.183.29-electron.0
OS: Darwin arm64 23.1.0</p>
<p>extensions:</p>
<p>Python
v2023.20.0
Microsoft
microsoft.com</p>
<p>Python Extension Pack
v1.7.0
Don Jayamanne</p>
| <python><visual-studio-code><debugging><ide> | 2023-11-13 12:05:52 | 2 | 974 | Veltzer Doron |
77,473,693 | 726,730 | python - pyqt5 QScrollBar of QListWidget | <pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'untitled_1.ui'
#
# Created by: PyQt5 UI code generator 5.15.7
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.
from PyQt5 import QtCore, QtGui, QtWidgets


class Ui_Form(object):
    def setupUi(self, Form):
        Form.setObjectName("Form")
        Form.resize(400, 300)
        self.verticalLayout = QtWidgets.QVBoxLayout(Form)
        self.verticalLayout.setContentsMargins(0, 0, 0, 0)
        self.verticalLayout.setObjectName("verticalLayout")
        self.scrollArea = QtWidgets.QScrollArea(Form)
        self.scrollArea.setWidgetResizable(True)
        self.scrollArea.setObjectName("scrollArea")
        self.scrollAreaWidgetContents = QtWidgets.QWidget()
        self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 398, 298))
        self.scrollAreaWidgetContents.setObjectName("scrollAreaWidgetContents")
        self.horizontalLayout = QtWidgets.QHBoxLayout(self.scrollAreaWidgetContents)
        self.horizontalLayout.setObjectName("horizontalLayout")
        self.listWidget = QtWidgets.QListWidget(self.scrollAreaWidgetContents)
        self.listWidget.setSizeAdjustPolicy(QtWidgets.QAbstractScrollArea.AdjustToContents)
        self.listWidget.setObjectName("listWidget")
        # The generated code adds 37 items one by one; collapsed into a loop here.
        for _ in range(37):
            self.listWidget.addItem(QtWidgets.QListWidgetItem())
        self.horizontalLayout.addWidget(self.listWidget)
        self.frame = QtWidgets.QFrame(self.scrollAreaWidgetContents)
        self.frame.setFrameShape(QtWidgets.QFrame.StyledPanel)
        self.frame.setFrameShadow(QtWidgets.QFrame.Raised)
        self.frame.setObjectName("frame")
        self.horizontalLayout.addWidget(self.frame)
        self.scrollArea.setWidget(self.scrollAreaWidgetContents)
        self.verticalLayout.addWidget(self.scrollArea)

        self.retranslateUi(Form)
        QtCore.QMetaObject.connectSlotsByName(Form)

    def retranslateUi(self, Form):
        _translate = QtCore.QCoreApplication.translate
        Form.setWindowTitle(_translate("Form", "Form"))
        __sortingEnabled = self.listWidget.isSortingEnabled()
        self.listWidget.setSortingEnabled(False)
        # The generated code sets the text of items 0..36 one by one; collapsed here.
        for i in range(37):
            self.listWidget.item(i).setText(_translate("Form", "1"))
        self.listWidget.setSortingEnabled(__sortingEnabled)


if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    Form = QtWidgets.QWidget()
    ui = Ui_Form()
    ui.setupUi(Form)
    Form.showMaximized()
    sys.exit(app.exec_())
<p>In this example how can i hide the scrollbar of listwidget and instead have scrollbar of QMainWindow - QScrollArea?</p>
| <python><pyqt5><qlistwidget><qscrollarea><qscrollbar> | 2023-11-13 11:52:40 | 0 | 2,427 | Chris P |
77,473,528 | 6,653,602 | Is it possible to show only sender name when sending email from Python? | <p>I am trying to send email using Python and <code>smtplib</code>. Is it possible to format the message so that it shows the sender's name only, instead of the name and email?</p>
<p>Currently I am composing the email message like this</p>
<pre><code>from email.utils import formataddr
from email.message import EmailMessage
from email.header import Header
msg = EmailMessage()
msg['From'] = formataddr((str(Header('John Smith', 'utf-8')), 'john.smith@outlook.com'))
msg['To'] = "abc@outlook.com"
msg['Subject'] = 'title'
msg.set_content('content message')
</code></pre>
<p>In the inbox it will show the name only (<code>John Smith</code>) as expected, but when I open the message it shows the name and the email in angle brackets: <code>John Smith &lt;john.smith@outlook.com&gt;</code>.</p>
<p>Is it possible to show only the name John Smith, without the email, in the message? I still want to keep the email somehow, so that the name is correctly associated with it when I hover over it in Outlook. If I send the name only, like <code>msg['From'] = 'John Smith'</code>, then the name will be greyed out and won't have any details for that person from the Outlook address book.</p>
| <python><email><smtplib> | 2023-11-13 11:24:35 | 3 | 3,918 | Alex T |
77,473,473 | 2,414,988 | How to use mypy with helper functions | <p>I have a function <code>my_function</code> that takes two arguments, <code>x</code> and <code>method</code>. Depending on the value of <code>method</code>, either one of two standalone functions will be used, <code>func1</code> and <code>func2</code>. These two functions take <code>x</code> as argument: <code>func1</code> expects it to be an integer and <code>func2</code> expects it to be a float. Both functions may be used independently by the user and thus check the type of <code>x</code> and raise an error if the type is wrong. Below is my code with type hints:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal
def func1(x: int):
    """Perform some work on integers."""
    if not isinstance(x, int):
        raise ValueError("x should be an integer.")
    # Do some stuff.


def func2(x: float):
    """Perform some work on floats."""
    if not isinstance(x, float):
        raise ValueError("x should be a float.")
    # Do some stuff.


def my_function(x: int | float, method: Literal[1, 2]):
    """Perform some work using one of two methods."""
    match method:
        case 1:
            result = func1(x)
        case 2:
            result = func2(x)
        case _:
            raise ValueError(f"Invalid method: {method}.")
    # Do some more work on result.
    return result
</code></pre>
<p>Mypy will raise an error on the call to <code>func1</code>, as the <code>int | float</code> type of <code>x</code> is incompatible with the expected type <code>int</code> of <code>func1</code>. What's the recommended way to please mypy here? Should I repeat the type checking in <code>my_function</code> just before calling <code>func1</code> and <code>func2</code>?</p>
<p><strong>EDIT</strong>: I should note that I do not want to simply rely on the type of <code>x</code> inside <code>my_function</code> to choose between functions, as it would be very easy for users to confuse integer and floats (e.g. by omitting the <code>.0</code> in <code>1.0</code>).</p>
| <python><mypy><python-typing> | 2023-11-13 11:15:10 | 2 | 715 | Biblot |
77,473,455 | 13,184,263 | Problems dealing with big dataframes in databricks notebook | <p>I am having problems dealing with a big dataframe in databricks notebook. This is the schema of my dataframe:</p>
<pre><code>root
|-- PAYMENT_DATE: string (nullable = true)
|-- PAYMENT_REFERENCE: string (nullable = true)
|-- ACCOUNT_NUMBER: string (nullable = true)
|-- ROUTING_NUMBER: string (nullable = true)
|-- FUND_AMT: string (nullable = true)
|-- BATCH_REFERENCE: string (nullable = true)
|-- BATCH_TYPE: string (nullable = true)
|-- CUSTOMER_BATCH_REFERENCE: string (nullable = true)
|-- CUSTOMER_NAME: string (nullable = true)
|-- MERCHANT_NUMBER: string (nullable = true)
|-- EXTERNAL_MID: string (nullable = true)
|-- STORE_NUMBER: string (nullable = true)
|-- CHAIN: string (nullable = true)
|-- BATCH_AMT: string (nullable = true)
|-- AMOUNT: string (nullable = true)
|-- CARD_TYPE: string (nullable = true)
|-- CHARGE_TYPE: string (nullable = true)
|-- CHARGE_TYPE_DESCRIPTION: string (nullable = true)
|-- CARD_PLAN: string (nullable = true)
|-- CARD_NO: string (nullable = true)
|-- TRANSACTION_DATE: string (nullable = true)
|-- SETTLEMENT_DATE: string (nullable = true)
|-- AUTHORIZATION_CODE: string (nullable = true)
|-- CHARGEBACK_CONTROL_NO: string (nullable = true)
|-- ROC_TEXT: string (nullable = true)
|-- TRN_ACI: string (nullable = true)
|-- LIFE_CYCLE_ID: string (nullable = true)
|-- CARD_SCHEME_REF: string (nullable = true)
|-- TRN_REF_NUM: string (nullable = true)
|-- SETTLEMENT_METHOD: string (nullable = true)
|-- CURRENCY_CODE: string (nullable = true)
|-- CB_ACQ_REF_ID: string (nullable = true)
|-- CHGBK_RSN_CODE: string (nullable = true)
|-- CHGBK_RSN_DESC: string (nullable = true)
|-- MER_REF: string (nullable = true)
|-- CUST_COD: string (nullable = true)
|-- TRN_ARN: string (nullable = true)
|-- TERM_ID: string (nullable = true)
|-- ENT_NUM: string (nullable = true)
|-- ingestionDate: string (nullable = true)
|-- _rescued_data: string (nullable = true)
|-- source_metadata: struct (nullable = true)
| |-- file_path: string (nullable = true)
| |-- file_name: string (nullable = true)
| |-- file_size: long (nullable = true)
| |-- file_block_start: long (nullable = true)
| |-- file_block_length: long (nullable = true)
| |-- file_modification_time: timestamp (nullable = true)
|-- PAYMENT_DATE: string (nullable = true)
|-- ACCOUNT_NUMBER: string (nullable = true)
|-- ROUTING_NUMBER: string (nullable = true)
|-- FUND_AMT: string (nullable = true)
|-- BATCH_REFERENCE: string (nullable = true)
|-- BATCH_TYPE: string (nullable = true)
|-- CUSTOMER_BATCH_REFERENCE: string (nullable = true)
|-- CUSTOMER_NAME: string (nullable = true)
|-- MERCHANT_NUMBER: string (nullable = true)
|-- EXTERNAL_MID: string (nullable = true)
|-- STORE_NUMBER: string (nullable = true)
|-- CHAIN: string (nullable = true)
|-- BATCH_AMT: string (nullable = true)
|-- AMOUNT: string (nullable = true)
|-- CARD_TYPE: string (nullable = true)
|-- CHARGE_TYPE: string (nullable = true)
|-- CHARGE_TYPE_DESCRIPTION: string (nullable = true)
|-- CARD_PLAN: string (nullable = true)
|-- CARD_NO: string (nullable = true)
|-- TRANSACTION_DATE: string (nullable = true)
|-- SETTLEMENT_DATE: string (nullable = true)
|-- AUTHORIZATION_CODE: string (nullable = true)
|-- CHARGEBACK_CONTROL_NO: string (nullable = true)
|-- ROC_TEXT: string (nullable = true)
|-- TRN_ACI: string (nullable = true)
|-- LIFE_CYCLE_ID: string (nullable = true)
|-- CARD_SCHEME_REF: string (nullable = true)
|-- TRN_REF_NUM: string (nullable = true)
|-- SETTLEMENT_METHOD: string (nullable = true)
|-- CURRENCY_CODE: string (nullable = true)
|-- CB_ACQ_REF_ID: string (nullable = true)
|-- CHGBK_RSN_CODE: string (nullable = true)
|-- CHGBK_RSN_DESC: string (nullable = true)
|-- MER_REF: string (nullable = true)
|-- CUST_COD: string (nullable = true)
|-- TRN_ARN: string (nullable = true)
|-- TERM_ID: string (nullable = true)
|-- ENT_NUM: string (nullable = true)
|-- SURCHG_AMOUNT: string (nullable = true)
|-- CONVNCE_AMT: string (nullable = true)
|-- PURCH_ID: string (nullable = true)
|-- CHK_NUM: string (nullable = true)
|-- ISS_CCT_COD: string (nullable = true)
|-- POS_ENT_MODE: string (nullable = true)
|-- CH_ID_METH: string (nullable = true)
|-- MOTO_IND: string (nullable = true)
|-- CAT_IND: string (nullable = true)
|-- CVV2_RESULT: string (nullable = true)
|-- DR_SERVICE_CODE: string (nullable = true)
|-- TDSXT_TRN_DWNGRD: string (nullable = true)
|-- AUTH_FAIL_APPROVAL_CDE_IND: string (nullable = true)
|-- AUTH_FAIL_AUTH_RESP_IND: string (nullable = true)
|-- AUTH_FAIL_TRN_ID_IND: string (nullable = true)
|-- AUTH_FAIL_AUTH_DTE_IND: string (nullable = true)
|-- AUTH_FAIL_TRN_ACI_IND: string (nullable = true)
|-- AUTH_FAIL_TRK_DATA_COND_IND: string (nullable = true)
|-- SETTL_FAIL_TIMELINESS_IND: string (nullable = true)
|-- SETTL_FAIL_AVS_RESP_IND: string (nullable = true)
|-- SETTL_FAIL_CVV_RESP_IND: string (nullable = true)
|-- SETTL_FAIL_TAX_INCL_FLG_IND: string (nullable = true)
|-- SETTL_FAIL_TAX_AMT_IND: string (nullable = true)
|-- SETTL_FAIL_CUST_CDE_IND: string (nullable = true)
|-- SETTL_FAIL_AUTH2SETTL_AMT_IND: string (nullable = true)
|-- SETTL_FAIL_INVC_NUM_IND: string (nullable = true)
|-- SETTL_FAIL_CUST_SVC_NUM_IND: string (nullable = true)
|-- SETTL_FAIL_POS_ENTRY_IND: string (nullable = true)
|-- RENT_FAIL_CUST_REF_IND: string (nullable = true)
|-- RENT_FAIL_RENTER_NAM_IND: string (nullable = true)
|-- RENT_FAIL_RETURN_CITY_IND: string (nullable = true)
|-- RENT_FAIL_RETURN_ST_IND: string (nullable = true)
|-- RENT_FAIL_RETURN_CTRY_IND: string (nullable = true)
|-- RENT_FAIL_RETURN_LOC_IND: string (nullable = true)
|-- RENT_FAIL_CKOUT_RETURN_DTE_IND: string (nullable = true)
|-- RENT_FAIL_CUST_SVC_NUM_IND: string (nullable = true)
|-- RENT_FAIL_PROP_PHN_NUM_IND: string (nullable = true)
|-- RENT_FAIL_MKT_SPECFC_IND: string (nullable = true)
|-- RENT_FAIL_CK_IN_OUT_IND: string (nullable = true)
|-- TRVL_FAIL_PASSGR_NAM_IND: string (nullable = true)
|-- TRVL_FAIL_DEPART_DTE_IND: string (nullable = true)
|-- TRVL_FAIL_ORIG_CITY_IND: string (nullable = true)
|-- TRVL_FAIL_LEG1_IND: string (nullable = true)
|-- TRVL_FAIL_RESTR_TIC_IND: string (nullable = true)
|-- TRVL_FAIL_MULT_SEQ_NUM_IND: string (nullable = true)
|-- TRVL_FAIL_MULT_SEQ_CNT_IND: string (nullable = true)
|-- TRVL_FAIL_ANCIL_TKT_NUM_IND: string (nullable = true)
|-- TRVL_FAIL_ANCIL_SVC_CAT_IND: string (nullable = true)
|-- TRVL_FAIL_ANCIL_CONN_TKT_IND: string (nullable = true)
|-- TRVL_FAIL_ANCIL_PASSGR_DAT_IND: string (nullable = true)
|-- FUEL_FAIL_PURCH_TYP_IND: string (nullable = true)
|-- FUEL_FAIL_FUEL_TYP_IND: string (nullable = true)
|-- FUEL_FAIL_UNIT_MEAS_IND: string (nullable = true)
|-- FUEL_FAIL_FUEL_QTY_IND: string (nullable = true)
|-- FUEL_FAIL_PRICE_IND: string (nullable = true)
|-- FUEL_FAIL_TAX_EXEMPT_IND: string (nullable = true)
|-- FUEL_FAIL_COMPANY_NAM_IND: string (nullable = true)
|-- FUEL_FAIL_PURCH_TIM_IND: string (nullable = true)
|-- FUEL_FAIL_QTY_EXPONENT_IND: string (nullable = true)
|-- FUEL_FAIL_SLS_AMT_IND: string (nullable = true)
</code></pre>
<p>It contains 65 million records; however, I loaded only 100 into the Databricks notebook in order to test a cleansing method on the data.
This is the method I am using:</p>
<pre><code># Define the characters you want to remove
characters_to_remove = ['=', '"', '$']
# Remove specified characters from all columns
for char in characters_to_remove:
    for column in df.columns:
        if column != 'source_metadata':
            df = df.withColumn(column, regexp_replace(col(column), char, ''))
</code></pre>
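<p>As a side note, the three sequential replaces should be equivalent to a single character-class replace. I sanity-checked the pattern itself with plain Python outside Spark (the sample string is made up):</p>

```python
import re

# One character class covering all three characters to strip: =, ", $
pattern = re.compile(r'[="$]')

sample = '="$123.45"'
print(pattern.sub('', sample))  # 123.45
```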
<p>However, I am getting an error when displaying <code>df</code>:</p>
<pre><code>grpc_shaded.com.google.protobuf.InvalidProtocolBufferException: Protocol message had too many levels of nesting. May be malicious. Use CodedInputStream.setRecursionLimit() to increase the depth limit.
</code></pre>
<p>I would like to know how to deal with such dataframes in a Databricks notebook, in the silver layer, in an efficient way. Thank you!</p>
| <python><dataframe><pyspark><databricks> | 2023-11-13 11:13:11 | 0 | 415 | Jenny |
77,473,419 | 7,431,005 | output from GridSearchCV to log (with parallel joblib) | <p>I'm running some hyperparameter tuning on a cluster.
The cluster automatically kills my job after some time.
When this happens, I lose all my preliminary results.
It would be great to store the preliminary results from <code>GridSearchCV</code> (the output that is printed to the console) in some file.
My plan was to use the logging library for this task.</p>
<p><strong>What I want to do:</strong>
I would like to be able to redirect the output from <code>GridSearchCV</code> to my logger.
I tried to simply redirect all output from <code>sys.stdout</code> to my logger based on some suggestions found on this site.
However, it is not working.</p>
<p>Based on the information I found, I think the problem I face is that each process spawned by <code>GridSearchCV</code> is independent of my main stdout.
I'm also not sure if there is a smarter way to do it - I tried to solve it with callbacks but with no success.</p>
<p>This is a minimal example:</p>
<pre><code>import xgboost as xgb
from sklearn.model_selection import GridSearchCV
import numpy as np
import logging
import sys
class StreamToLogger(object):
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level
        self.linebuf = ''

    def write(self, buf):
        for line in buf.rstrip().splitlines():
            self.logger.log(self.level, line.rstrip())

    def flush(self):
        pass


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format='[%(asctime)s] [%(levelname)s] %(message)s')
    logger = logging.getLogger('sklearn.model_selection._search')

    sys.stdout = StreamToLogger(logger, logging.INFO)
    sys.stderr = StreamToLogger(logger, logging.ERROR)

    X = np.random.randn(100, 2)
    y = np.random.randn(100)
    reg = xgb.XGBRegressor(max_depth=10, n_estimators=100)
    param_grid = {"gamma": [0.1]}
    gs = GridSearchCV(reg, param_grid, cv=5, verbose=3, n_jobs=1)
    gs.fit(X, y)
</code></pre>
<p>with <code>n_jobs=1</code> I get the preliminary results redirected to my logger:</p>
<pre><code>[2023-11-13 12:01:23,382] [INFO] Fitting 5 folds for each of 1 candidates, totalling 5 fits
[2023-11-13 12:01:23,434] [INFO] [CV 1/5] END ........................gamma=0.1;, score=-0.730 total time= 0.1s
[2023-11-13 12:01:23,451] [INFO] [CV 2/5] END ........................gamma=0.1;, score=-1.663 total time= 0.0s
[2023-11-13 12:01:23,467] [INFO] [CV 3/5] END ........................gamma=0.1;, score=-0.310 total time= 0.0s
[2023-11-13 12:01:23,483] [INFO] [CV 4/5] END ........................gamma=0.1;, score=-1.536 total time= 0.0s
[2023-11-13 12:01:23,507] [INFO] [CV 5/5] END ........................gamma=0.1;, score=-0.841 total time= 0.0s
</code></pre>
<p>When I change it to <code>n_jobs=2</code>, the output becomes</p>
<pre><code>[2023-11-13 11:58:54,220] [INFO] Fitting 5 folds for each of 1 candidates, totalling 5 fits
[CV 2/5] END ........................gamma=0.1;, score=-0.793 total time= 0.0s
[CV 1/5] END ........................gamma=0.1;, score=-0.111 total time= 0.0s
[CV 3/5] END ........................gamma=0.1;, score=-1.091 total time= 0.0s
[CV 4/5] END ........................gamma=0.1;, score=-0.139 total time= 0.0s
[CV 5/5] END ........................gamma=0.1;, score=-0.621 total time= 0.0s
</code></pre>
<p>(note that the first line is still forwarded to my logger but all the [CV X/5] lines are no longer redirected to my logger - thus, they would not be included in a .log file if I add a file handler).</p>
<p>An alternative approach via calling my script using <code>$ python file.py > "file.log" 2>&1</code> seems to not flush the buffer when the process is killed... Thus, I would prefer to use the logging approach.</p>
| <python><logging><scikit-learn> | 2023-11-13 11:08:18 | 0 | 4,667 | user7431005 |
77,472,990 | 14,270,305 | Using concurrency with python asyncio in requests in Google Cloud Functions | <p>I am trying to send requests to the OpenAI API to translate a list of texts via a Google Cloud Function. Since the API takes a little while to answer and I am trying to translate a few hundred texts per day, I tried speeding up the process by sending the requests concurrently from the GCF to the OpenAI API.</p>
<p>So, I prototyped the function in Python in a Jupyter Notebook and managed to get a concurrent function running, knowing that there are some differences in the event loop management, but in the end reducing the run time from about 30 minutes down to 4-5. Then I tried to figure out the differences between running the function in Jupyter and in the Google Cloud Function and in the end managed to get it running there as well.</p>
<p>However, it just doesn't seem to speed up the GCF and still runs for 30 minutes and more, causing TimeoutErrors in other parts of the platform.</p>
<p>I have checked:</p>
<ol>
<li>Resource utilization: 400 MB of 8 GB are used</li>
<li>Concurrent request limit: set to 8, then realized this setting is only meant for inbound traffic</li>
</ol>
<p>Here is my code for the cloud function</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
import os
import asyncio
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import HumanMessagePromptTemplate, ChatPromptTemplate
from langchain.output_parsers import PydanticOutputParser
from langchain.schema.output_parser import OutputParserException
from langchain.schema.messages import get_buffer_string
from langchain.chains import LLMChain
from functools import wraps
import platform
if platform.system() == "Windows":
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

OPEN_AI_KEY = os.environ.get("OPENAI_TOKEN")
OPENAI_MODEL = "gpt-3.5-turbo-1106"


def request_concurrency_limit_decorator(limit=5):
    # Bind the default event loop
    sem = asyncio.Semaphore(limit)

    def executor(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            async with sem:
                return await func(*args, **kwargs)
        return wrapper

    return executor


async def translate_concurrently(
    df: pd.DataFrame, openai_api_key: str = OPEN_AI_KEY, model_name: str = OPENAI_MODEL
):
    # Creating the prompt
    PROMPT_TRANSLATE_REVIEW = """Translate this into English: '{input}'"""
    llm = ChatOpenAI(openai_api_key=openai_api_key, model_name=model_name)
    message = HumanMessagePromptTemplate.from_template(
        template=PROMPT_TRANSLATE_REVIEW,
    )
    chat_prompt = ChatPromptTemplate.from_messages([message])
    chain = LLMChain(llm=llm, prompt=chat_prompt)

    # start async translation requests
    @request_concurrency_limit_decorator(limit=5)
    async def async_translate(chain: LLMChain, input: str):
        resp = await chain.arun({"input": input})
        return resp

    tasks = [async_translate(chain, input=review) for review in df["original_text"]]
    # Row order is guaranteed by asyncio.gather
    df["text_translation"] = await asyncio.gather(*tasks)


def main(ctx):
    # preparing the translation_df with a column "original_text"
    translation_df = pd.DataFrame(...)

    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(translate_concurrently(translation_df))
    loop.close()

    return "OK", 200
</code></pre>
<p>I would appreciate any hints as to where I went wrong, and why the GCF doesn't speed up as much as my prototyped function in the Jupyter Notebook.</p>
| <python><google-cloud-platform><concurrency><google-cloud-functions><google-cloud-run> | 2023-11-13 09:51:36 | 1 | 320 | Manuel Huppertz |
77,472,784 | 1,568,919 | Making fake DataFrame/TimeDataFrame in Pandas | <p>Just found out that <code>pandas.util.testing.makeDataFrame</code> no longer exists.</p>
<p>Where has it been moved to? Or, if it is completely gone, is there a substitute for easy testing functionality?</p>
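<p>For context, in the meantime I have been using a small hand-rolled substitute like the one below (the function name and shape defaults are my own, not from pandas):</p>

```python
import numpy as np
import pandas as pd


def make_fake_dataframe(rows: int = 30, cols: int = 4, seed: int = 0) -> pd.DataFrame:
    """Rough stand-in for the removed pandas.util.testing.makeDataFrame:
    random float columns named A, B, C, ... with string row labels."""
    rng = np.random.default_rng(seed)
    columns = [chr(ord("A") + i) for i in range(cols)]
    index = [f"row_{i}" for i in range(rows)]
    return pd.DataFrame(rng.standard_normal((rows, cols)), index=index, columns=columns)


df = make_fake_dataframe()
print(df.shape)  # (30, 4)
```

<p>But I would prefer an official replacement if one exists.</p>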
| <python><pandas> | 2023-11-13 09:12:10 | 0 | 7,411 | jf328 |
77,472,748 | 5,868,293 | How to add text at barchart, when y is a list, using plotly express | <p>I have the following pandas dataframe</p>
<pre><code>import pandas as pd
foo = pd.DataFrame({'country': {0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e'},
'unweighted': {0: 18.0, 1: 16.9, 2: 13.3, 3: 11.3, 4: 13.1},
'weighted_1': {0: 17.7, 1: 15.8, 2: 14.0, 3: 11.2, 4: 12.8},
'weighted_2': {0: 17.8, 1: 15.8, 2: 14.0, 3: 11.2, 4: 12.8}})
country unweighted weighted_1 weighted_2
0 a 18.0 17.7 17.8
1 b 16.9 15.8 15.8
2 c 13.3 14.0 14.0
3 d 11.3 11.2 11.2
4 e 13.1 12.8 12.8
</code></pre>
<p>And I am using the following code to produce the bar chart using <code>plotly.express</code>:</p>
<pre><code>import plotly.express as px
px.bar(
    foo,
    x='country',
    y=['unweighted', 'weighted_1', 'weighted_2'],
    barmode='group',
)
</code></pre>
<p><a href="https://i.sstatic.net/FM9Aq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FM9Aq.png" alt="enter image description here" /></a></p>
<p>I would like to display also the <code>text</code> at each bar.
I tried</p>
<pre><code>px.bar(
foo,
x='country',
y=['unweighted', 'weighted_1', 'weighted_2'],
text=['unweighted', 'weighted_1', 'weighted_2'],
barmode='group',
)
</code></pre>
<p>but it doesn't work.</p>
<p>How could I do that?</p>
| <python><plotly> | 2023-11-13 09:05:44 | 1 | 4,512 | quant |
77,472,746 | 5,269,892 | Python - keep output XML namespace definitions at non-root nodes with ElementTree | <p>I want to generate an output XML from a modified input XML in Python using <em>xml.etree.ElementTree</em> (short: <em>ElementTree</em>). The input XML has multiple namespaces and the output XML should also have these namespaces, <strong>defined at the same nodes as the input XML</strong>; the latter is important for specific reasons. However, <em>ElementTree</em> by default sets the namespaces at the root node. How can I prevent this if possible? Below is a minimal example (without modifications to the input XML):</p>
<p><strong>Read + write the XML (<em>demo_input_xml.py</em>):</strong></p>
<pre><code>import sys
import xml.etree.ElementTree as ET
infile = sys.argv[1]
outfile = sys.argv[2]
# read the input XML file
tree = ET.parse(infile)
# get the tree root
root = tree.getroot()
# insert modifications to XML here
# open the output file and write out the new XML tree
with open(outfile, 'w') as ofile:
    tree.write(ofile)
</code></pre>
<p><strong>Input XML (<em>demo_temp_input.xml</em>):</strong></p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<tag0 xmlns="namespace0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="someLocation0">
<tag1a>someText</tag1a>
<tag1b xmlns="namespace1" xmlns:n1="namespace2" xsi:schemaLocation="someLocation1">
<tag2>
<tag3 xmlns="namespace3" xsi:schemaLocation="someLocation2">
</tag3>
</tag2>
</tag1b>
</tag0>
</code></pre>
<p><strong>Actual output XML (<em>demo_temp_output.xml</em>):</strong></p>
<pre><code><ns0:tag0 xmlns:ns0="namespace0" xmlns:ns2="namespace1" xmlns:ns3="namespace3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="someLocation0">
<ns0:tag1a>someText</ns0:tag1a>
<ns2:tag1b xsi:schemaLocation="someLocation1">
<ns2:tag2>
<ns3:tag3 xsi:schemaLocation="someLocation2">
</ns3:tag3>
</ns2:tag2>
</ns2:tag1b>
</ns0:tag0>
</code></pre>
<p><strong>Desired output XML:</strong></p>
<pre><code><ns0:tag0 xmlns:ns0="namespace0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="someLocation0">
<ns0:tag1a>someText</ns0:tag1a>
<ns2:tag1b xmlns:ns2="namespace1" xsi:schemaLocation="someLocation1">
<ns2:tag2>
<ns3:tag3 xmlns:ns3="namespace3" xsi:schemaLocation="someLocation2">
</ns3:tag3>
</ns2:tag2>
</ns2:tag1b>
</ns0:tag0>
</code></pre>
<p>Things to consider:</p>
<ul>
<li>The python version used is 2.7.18.</li>
<li>The <em>lxml</em> package is not available and I do not have permissions to install it.</li>
<li>The input XML has multiple default namespaces (i.e. no prefixes). Ideally, the output XML would also have multiple default namespaces (I already played around with <em>ET.register_namespaces()</em> and <em>ET._namespace_map</em>, without success), but this is not a priority. Thus, other namespace prefixes, also ones generated by <em>ElementTree</em>, e.g. <em>ns0</em>, <em>ns2</em> etc., are in principle ok (side question: why is the prefix <em><strong>ns1</strong></em> not used by <em>ElementTree</em>?).</li>
<li><em>ElementTree</em> does not store unused namespaces (in the minimal example the prefix <em>n1</em>, <em>"namespace2"</em>). Once any tag below that namespace uses the prefix in the input XML, <em>ElementTree</em> puts such namespaces with the (input) prefixes at the root node in the output XML. I had expected the default behavior would be to keep any namespaces no matter whether they are later used or not, but this is fine.</li>
<li><em>ElementTree</em> omits the XML version header line from the input XML; this is fine.</li>
</ul>
| <python><xml><xml-parsing><elementtree> | 2023-11-13 09:05:28 | 0 | 1,314 | silence_of_the_lambdas |
77,472,734 | 13,438,431 | asyncio.Task cancel() does not cancel the task if it's not in event loop | <p>I've stumbled upon a rather counterintuitive behavior of Python. Version is 3.10.12, Ubuntu 22.04.</p>
<p>Consider this script:</p>
<pre><code>import asyncio
async def main():
t = asyncio.create_task(asyncio.sleep(5))
t.cancel()
print(t.cancelled())
await asyncio.sleep(0)
print(t.cancelled())
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>It writes:</p>
<pre><code>False
True
</code></pre>
<p>Shouldn't it write</p>
<pre><code>True
True
</code></pre>
<p>instead?</p>
<p>I assume the task first has to be picked up by the event loop before it can be cancelled. But isn't that really a bug? Shouldn't I be able to cancel the task regardless?</p>
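<p>For reference (not from the question): per the asyncio documentation, <code>Task.cancel()</code> only <em>requests</em> cancellation; the <code>CancelledError</code> is delivered the next time the event loop runs the task. A sketch that makes the hand-off explicit by awaiting the task:</p>

```python
import asyncio

async def main():
    t = asyncio.create_task(asyncio.sleep(5))
    t.cancel()              # only schedules a CancelledError for the task
    print(t.cancelled())    # False: the loop has not run the task yet
    try:
        await t             # yields to the loop, which delivers the cancellation
    except asyncio.CancelledError:
        pass
    print(t.cancelled())    # True

asyncio.run(main())
```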
| <python><async-await><task><python-asyncio> | 2023-11-13 09:03:31 | 0 | 2,104 | winwin |
77,472,710 | 15,948,240 | Using .loc with on subset of levels from a DataFrame with MultIndex | <p>Given a dataframe with 3 levels of multiindex:</p>
<pre><code>import pandas as pd
df = pd.concat({'a': pd.Series([1,2,3,1]),
'b': pd.Series([5,4,3,5]),
'c': pd.Series(range(9,13)),
'd': pd.Series(range(13,17))}, axis=1).set_index(['a', 'b', 'c'])
>>> df
        d
a b c
1 5 9   13
2 4 10  14
3 3 11  15
1 5 12  16
</code></pre>
<p>I want to use loc with a list of indices from the first 2 levels:</p>
<pre><code>idx = pd.MultiIndex.from_arrays([[1, 2], [5, 4]], names=('a', 'b'))
>>> idx
MultiIndex([(1, 5),
            (2, 4)],
           names=['a', 'b'])
</code></pre>
<p>I tried to use .loc with individual indices:</p>
<pre><code>>>> df.loc[idx[0]]
     d
c
9   13
12  16
>>> df.loc[idx[1]]
     d
c
10  14
</code></pre>
<p>I expected <code>df.loc[idx]</code> to return the same result as</p>
<pre><code>>>> pd.concat([df.loc[i] for i in idx])
     d
c
9   13
12  16
10  14
</code></pre>
<p>But <code>df.loc[idx]</code> raises</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (2,2) (3,) (2,2)
</code></pre>
<p>Is there something cleaner than <code>pd.concat([df.loc[i] for i in idx])</code> to obtain the expected result?</p>
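<p>Not from the question — one alternative sketch: drop the <code>c</code> level from the frame's index and test membership against <code>idx</code> with <code>isin</code>. Note this keeps the frame's own row order rather than the order of <code>idx</code>:</p>

```python
import pandas as pd

df = pd.concat({'a': pd.Series([1, 2, 3, 1]),
                'b': pd.Series([5, 4, 3, 5]),
                'c': pd.Series(range(9, 13)),
                'd': pd.Series(range(13, 17))}, axis=1).set_index(['a', 'b', 'c'])
idx = pd.MultiIndex.from_arrays([[1, 2], [5, 4]], names=('a', 'b'))

# Boolean mask over the (a, b) pairs; matching rows keep df's original order.
out = df[df.index.droplevel('c').isin(idx)]
```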
| <python><pandas><indexing><multi-index> | 2023-11-13 08:58:40 | 2 | 1,075 | endive1783 |
77,472,315 | 8,741,562 | How to get the actual cost for a resource group in python? | <p>I am working on connecting to the Azure portal and fetching information like resource groups, tags and cost data. I am able to fetch resource groups and tags, but I am stuck on getting the cost information. In particular, I need the actual cost from the Azure portal, as shown in the image below.</p>
<p><a href="https://i.sstatic.net/UcGmf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UcGmf.png" alt="actual amount" /></a></p>
<p>Python code below:</p>
<pre><code>import requests
url = "https://management.azure.com/subscriptions/000000-0000-0000-0000-00000000000000/providers/Microsoft.Consumption/usageDetails?api-version=2023-05-01&metric=actualcost"
# Make the GET request to the Azure Consumption API
headers = {
"Authorization": f"Bearer {access_token.token}"
}
response = requests.get(url, headers=headers)
if response.status_code == 200:
cost_details = response.json()
print(cost_details)
</code></pre>
<p>Sample response I am getting is below:</p>
<pre><code>{"value": [
{
"kind": "legacy",
"id": "/subscriptions/blurred/providers/Microsoft.Billing/billingPeriods/blurred/providers/Microsoft.Consumption/usageDetails/blurred",
"name": "blurred",
"type": "Microsoft.Consumption/usageDetails",
"tags": {
"DEPLOYMENT_ID": "blurred",
"ENVIRONMENT": "blurred",
"OWNER": "blurred",
"ROLE_PURPOSE": "storageAccounts"
},
"properties": {
"billingAccountId": "00000",
"billingAccountName": "TEST NAME",
"billingPeriodStartDate": "2023-11-01T00:00:00.0000000Z",
"billingPeriodEndDate": "2023-11-30T00:00:00.0000000Z",
"billingProfileId": "00000",
"billingProfileName": "TEST NAME",
"accountOwnerId": "blurred",
"accountName": "blurred",
"subscriptionId": "00000-00000-000-0000",
"subscriptionName": "blurred",
"date": "2023-11-05T00:00:00.0000000Z",
"product": "blurred",
"partNumber": "blurred",
"meterId": "blurred",
"quantity": 0.000288,
"effectivePrice": 0.013,
"cost": 0.000003744,
"unitPrice": 0.013,
"billingCurrency": "USD",
"resourceLocation": "EastUS",
"consumedService": "Microsoft.Storage",
"resourceId": "/subscriptions/ blurred/resourceGroups/lineage-ingestion-rg/providers/Microsoft.Storage/storageAccounts/blurred",
"resourceName": "blurred",
"invoiceSection": "blurred",
"costCenter": "blurred",
"resourceGroup": "blurred",
"offerId": "blurred",
"isAzureCreditEligible": true,
"publisherName": "Microsoft",
"publisherType": "Azure",
"planName": "Hot",
"chargeType": "Usage",
"frequency": "UsageBased",
"payGPrice": 0.0191,
"pricingModel": "OnDemand",
"meterDetails": null
}}
</code></pre>
<p>I am expecting the $16.20 shown in the portal to be printed for my resource group.</p>
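<p>Not part of the question — a hedged note: <code>usageDetails</code> returns per-meter line items, so the portal's "actual cost" figure is the sum of <code>properties.cost</code> over all items (and all pages, followed via <code>nextLink</code>) for the billing period. A minimal aggregation sketch, assuming the JSON shape shown above:</p>

```python
from collections import defaultdict

def total_cost_by_resource_group(cost_details):
    # cost_details is the parsed JSON of one usageDetails page; each item
    # carries one meter's cost, so sum them per resource group.
    totals = defaultdict(float)
    for item in cost_details.get("value", []):
        props = item.get("properties", {})
        totals[props.get("resourceGroup", "unknown")] += props.get("cost", 0.0)
    return dict(totals)

# The API paginates: keep GETting cost_details.get("nextLink") (if present)
# and merge the per-page totals until no nextLink remains.
```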
| <python><azure><azure-functions><azure-rest-api> | 2023-11-13 07:34:12 | 1 | 1,070 | Navi |