Dataset schema (column: dtype, value range):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: date string, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30
78,734,373
9,116,959
Erratic Behavior in xTerm Terminal While Running a Long Python Script with SSH Connections
<p>I am using an <code>xTerm</code> terminal to run a long Python script. Randomly, while the script is running, the terminal's behavior becomes erratic: new lines and indentation get garbled. For example, I see</p> <pre><code>****************************************************************************************************
********************** Object&lt;i&gt;: ***********************
********************** Phase 1 Method Completed !11_07_2024_09_07_39_133768 ***********************
****************************************************************************************************
****************************************************************************************************
**************************** Object&lt;i&gt; *****************************
**************************** Phase 1 Method runTime is 8.907342 [sec] *****************************
****************************************************************************************************
</code></pre> <p>Instead of</p> <pre><code>****************************************************************************************************
************************ Object&lt;i&gt;: ************************
************************ Phase 1 Completed !11_07_2024_10_13_01_760 ************************
****************************************************************************************************
****************************************************************************************************
**************************** Object&lt;i&gt;: *****************************
**************************** Phase 1 runTime is 9.403279 [sec] *****************************
****************************************************************************************************
</code></pre> <p>Additionally, when scrolling down with the mouse, I get &quot;BOBOBO&quot; instead of scrolling.</p> <p>I suspect that the issue often occurs around the time the script attempts to SSH into another device. What could be causing this, and how can it be resolved?</p>
<python><terminal><indentation><xterm>
2024-07-11 08:19:06
0
632
Kfir Ettinger
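A likely mechanism, offered as a hypothesis rather than a confirmed diagnosis: during the SSH step, raw control bytes from the remote side reach the local terminal and flip xterm modes (mouse reporting left enabled is exactly the kind of mode that turns scroll-wheel motion into junk input such as "BOBOBO"). A quick manual fix is running `reset` or `stty sane` in the affected terminal. As a preventive measure, remote output can be sanitized before it is printed; the sketch below assumes the script shells out for its SSH step, and the command in the usage comment is hypothetical:

```python
import re
import subprocess

# Strip CSI escape sequences and stray control bytes (keeping \t and \n)
# so remote output cannot change the local terminal's modes.
CONTROL_BYTES = re.compile(rb"\x1b\[[0-9;?]*[a-zA-Z]|[\x00-\x08\x0b-\x1f]")

def run_and_print_clean(cmd):
    """Run cmd and print its stdout with control bytes removed."""
    out = subprocess.run(cmd, capture_output=True).stdout
    print(CONTROL_BYTES.sub(b"", out).decode(errors="replace"), end="")

# Hypothetical usage for the SSH step:
# run_and_print_clean(["ssh", "user@device", "some-remote-command"])
```

If the script uses a library such as paramiko instead of a subprocess, the same filtering can be applied to whatever bytes are echoed to stdout.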
78,734,366
11,148,741
Sending email via EmailMultiAlternatives suddenly stopped working
<p>I am encountering a strange error: email sending stopped working yesterday, although everything was fine before and I did not change anything at all.</p> <p>The error message is:</p> <pre><code>2024-07-11 09:35:01,147 share.email.service ERROR Traceback (most recent call last):
  File &quot;/srv/deduu/./share/email/service.py&quot;, line 318, in send_email_from_template
    cls.send_admin_email(subject, recipient_list, html_message=message)
  File &quot;/srv/deduu/./share/email/service.py&quot;, line 330, in send_admin_email
    cls.send_email(subject, from_mail, recipient_list, message, html_message)
  File &quot;/srv/deduu/./share/email/service.py&quot;, line 279, in send_email
    return email_wrapper.send_mail()
  File &quot;/srv/deduu/./share/email/service.py&quot;, line 91, in send_mail
    return_value = bool(self.mail.send())
  File &quot;/srv/deduu/.venv/lib/python3.8/site-packages/django/core/mail/message.py&quot;, line 284, in send
    return self.get_connection(fail_silently).send_messages([self])
  File &quot;/srv/deduu/.venv/lib/python3.8/site-packages/django/core/mail/backends/smtp.py&quot;, line 109, in send_messages
    sent = self._send(message)
  File &quot;/srv/deduu/.venv/lib/python3.8/site-packages/django/core/mail/backends/smtp.py&quot;, line 125, in _send
    self.connection.sendmail(from_email, recipients, message.as_bytes(linesep='\r\n'))
  File &quot;/usr/lib/python3.8/smtplib.py&quot;, line 874, in sendmail
    (code, resp) = self.mail(from_addr, esmtp_opts)
  File &quot;/usr/lib/python3.8/smtplib.py&quot;, line 539, in mail
    return self.getreply()
  File &quot;/usr/lib/python3.8/smtplib.py&quot;, line 398, in getreply
    raise SMTPServerDisconnected(&quot;Connection unexpectedly closed&quot;)
smtplib.SMTPServerDisconnected: Connection unexpectedly closed
</code></pre> <p>Strangely, sending the mail works from a test script with the same credentials and settings, using this snippet:</p> <pre><code>from email.message import EmailMessage
import smtplib

username = &quot;info@test.de&quot;
password = r&quot;*****&quot;
smtp_server = &quot;mail.test.de&quot;
smtp_port = 47999
receiver_email = &quot;user@test.de&quot;
subject = 'Test Email'
body = 'This is a test email sent using smtplib in Python.'

# Create message
em = EmailMessage()
em['From'] = username
em['To'] = receiver_email
em['Subject'] = subject
em.set_content(body)

try:
    # Connect to the SMTP server
    with smtplib.SMTP(smtp_server, smtp_port) as smtp:
        smtp.starttls()
        smtp.set_debuglevel(1)
        smtp.login(username, password)
        smtp.sendmail(username, receiver_email, em.as_string())
        smtp.quit()
except Exception as e:
    print(f&quot;Unexpected error: {e}&quot;)
</code></pre> <p>The email is sent as follows (excerpt):</p> <pre><code>email_connection = get_connection()
email_connection.host = &quot;mail.test.de&quot;
email_connection.port = 47999
email_connection.username = &quot;info@test.de&quot;
email_connection.password = r&quot;*****&quot;
email_connection.use_tls = True

mail = EmailMultiAlternatives(subject, message, from_email, recipient_list)
mail.connection = email_connection
if html_message:
    mail.attach_alternative(html_message, &quot;text/html&quot;)
mail.send()
</code></pre> <p>Does anyone have an idea?</p>
<python><django><smtplib>
2024-07-11 08:17:41
0
489
Matt
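One plausible explanation, not confirmed by the traceback alone: `SMTPServerDisconnected` at `MAIL FROM` often means a cached or kept-alive connection was closed by the server between sends, whereas the test script always opens a fresh connection. A hedged sketch of sending with a fresh, explicitly configured connection per message follows; the function name is invented and the host, port, and credentials are the placeholder values from the question:

```python
def send_with_fresh_connection(subject, message, from_email, recipient_list,
                               html_message=None):
    """Sketch: open a fresh, fully configured SMTP connection per send,
    with an explicit timeout, and close it afterwards."""
    # Imported inside the function so this sketch can be read (and the
    # file imported) without Django installed.
    from django.core.mail import EmailMultiAlternatives, get_connection

    connection = get_connection(
        host="mail.test.de",      # placeholder values from the question
        port=47999,
        username="info@test.de",
        password="*****",
        use_tls=True,
        timeout=30,
    )
    mail = EmailMultiAlternatives(subject, message, from_email,
                                  recipient_list, connection=connection)
    if html_message:
        mail.attach_alternative(html_message, "text/html")
    try:
        return mail.send()
    finally:
        connection.close()
```

Passing the settings to `get_connection()` as keyword arguments (rather than mutating attributes on an already-created backend) also rules out the connection being opened before the custom host and port are set.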
78,734,268
2,466,784
I have designed a separate database file for handling database queries for a SQLite database in Python, but I am facing an issue in the execute function
<p>I am working on a SQLite database Python-based project. To keep the code neat and clean, I am passing the values of my form fields, along with the database name and table name, to another Python file whose only job is to handle database queries. I have framed the query and everything is working fine, but I am getting an error in the execute() function.</p> <pre><code>import sqlite3

from PyQt5.QtWidgets import QMessageBox

class Db:
    def __init__(self):
        pass

    def DbConnection(self, databaseName, databaseTable, tableColumnValueList):
        try:
            print(databaseName, databaseTable, tableColumnValueList)
            connection = sqlite3.connect(databaseName)
            # Create a cursor object
            cursor = connection.cursor()
            # Query to get column names
            cursor.execute(f&quot;PRAGMA table_info({databaseTable});&quot;)
            # Fetch all results
            columns_info = cursor.fetchall()
            # Extract and print column names
            column_names = [info[1] for info in columns_info]
            print(&quot;Column names:&quot;, column_names)
            w = &quot;?,&quot; * (len(column_names) - 1)
            w = w[:-1] + &quot;&quot;
            print(w)
            query = &quot;'INSERT into&quot; + &quot; &quot; + databaseTable + &quot; (&quot;
            # Appending column names to the query string
            for i in range(1, len(column_names)):
                query += column_names[i] + &quot;,&quot;
            query = query[:-1] + &quot;) VALUES(&quot; + w + &quot;)',(&quot;
            # Appending column values to the query string
            for i in range(len(tableColumnValueList)):
                query += str(tableColumnValueList[i]) + &quot;,&quot;
            query = query[:-1] + &quot;)&quot;
            print(query)
            cursor.execute(query)
            connection.commit()
            cursor.close()
            # Close the connection
            connection.close()
        except Exception as e:
            # QMessageBox.about(self, &quot;Exception&quot;, 'Error: &quot;{}&quot;'.format(e))
            print(e)
</code></pre>
<python><sqlite>
2024-07-11 08:01:14
0
805
Arun Agarwal
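For reference, a hedged sketch of the intended insert using parameter binding. `insert_row` is an illustrative name, not from the question; the key point is that `cursor.execute()` takes the SQL text and the values as two separate arguments, so no quotes or string concatenation of values is needed (which is where the string-built query above goes wrong):

```python
import sqlite3

def insert_row(database_name, table_name, values):
    """Insert one row, binding values with ? placeholders.
    Skips the first column, assumed (as in the question) to be an
    auto-generated id. Table and column names cannot be bound as
    parameters; here they are taken to come from trusted code."""
    connection = sqlite3.connect(database_name)
    cursor = connection.cursor()
    cursor.execute(f"PRAGMA table_info({table_name});")
    column_names = [info[1] for info in cursor.fetchall()][1:]
    placeholders = ",".join("?" * len(column_names))
    sql = (f"INSERT INTO {table_name} "
           f"({','.join(column_names)}) VALUES ({placeholders})")
    cursor.execute(sql, values)  # SQL and values passed separately
    connection.commit()
    connection.close()
```

Binding also removes the need to quote or stringify each value by hand, and protects against SQL injection from form-field input.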
78,733,931
1,716,733
pip wheel installation overrides version
<p>I am trying to install a Python package from a wheel:</p> <pre><code>pip install PyMuPDF==1.24.7 --no-index --find-links file:///Users/myusername/Downloads/PyMuPDF-1.24.7-cp310-none-macosx_11_0_arm64.whl
</code></pre> <p>But I am getting this error:</p> <pre><code>ERROR: Could not find a version that satisfies the requirement PyMuPDFb==1.24.6 (from pymupdf) (from versions: none)
ERROR: No matching distribution found for PyMuPDFb==1.24.6
</code></pre> <p>Why is it looking for a different version, and how can I fix this?</p> <p>I am on Python 3.10.14 on macOS (M1).</p> <p>Update: I tried the version it insisted on and am still getting the same error:</p> <pre><code>pip install PyMuPDF --no-index --find-links file:///Users/myusername/Downloads/PyMuPDF-1.24.6-cp310-none-macosx_11_0_arm64.whl
</code></pre> <p>error:</p> <pre><code>ERROR: Could not find a version that satisfies the requirement PyMuPDFb==1.24.6 (from pymupdf) (from versions: none)
ERROR: No matching distribution found for PyMuPDFb==1.24.6
</code></pre>
<python><pip><python-wheel>
2024-07-11 06:44:47
1
13,164
Dima Lituiev
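Reading the error closely, pip is not asking for a different version of PyMuPDF itself: the wheel declares a dependency on a separate package, `PyMuPDFb` (the binary component), and with `--no-index` pip can only satisfy dependencies from what `--find-links` exposes, which here is a single PyMuPDF wheel. A hedged sketch of an offline-friendly workflow (paths are illustrative):

```shell
# On a machine with network access, download PyMuPDF plus everything it
# depends on (including the PyMuPDFb wheel) into one directory:
mkdir -p ~/Downloads/wheels
pip download 'PyMuPDF==1.24.7' -d ~/Downloads/wheels

# Offline, point --find-links at the directory (not at a single file):
pip install 'PyMuPDF==1.24.7' --no-index --find-links ~/Downloads/wheels
```

The same idea explains the update in the question: downloading `PyMuPDF-1.24.6` does not help, because the missing distribution is `PyMuPDFb`, a different package name.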
78,733,661
395,857
How can I extract which Python lines are executed when running a Python script, including lines in the libraries it uses?
<p>Example: I have this Python script <code>SE_Q_coverage.py</code>:</p> <pre><code># pip install transformers from transformers import AutoModelForTokenClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('huawei-noah/TinyBERT_General_4L_312D') if False: print('this line is not hit') tokenized_inputs = tokenizer('hello world !'.split(' '), is_split_into_words=True, return_tensors=&quot;pt&quot;) print(tokenized_inputs) </code></pre> <p>How can I automatically or semi-automatically extract which Python lines are used when running this Python script?</p> <hr /> <p>I am aware of <code>coverage.py</code>, but it only shows which lines in <code>SE_Q_coverage.py</code> are used when running this Python script. It does not show which lines in <code>SE_Q_coverage.py</code> are used in the <code>transformers</code> lib. Example:</p> <pre><code>pip install coverage coverage html SE_Q_coverage.py </code></pre> <p>outputs:</p> <p><a href="https://i.sstatic.net/IYMsBgiW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYMsBgiW.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/TglsxGJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TglsxGJj.png" alt="enter image description here" /></a></p> <p>I am also aware of <code>trace</code>, but it outputs each line executed, including lines in any used library (here, <code>transformers</code> and possibly libs that <code>transformers</code> itself uses).</p> <pre><code>python -m trace --trace .\SE_Q_coverage.py &gt; trace.txt </code></pre> <p>outputs a 50+MB log file:</p> <pre><code>--- modulename: SE_Q_coverage, funcname: &lt;module&gt; SE_Q_coverage.py(2): from transformers import AutoModelForTokenClassification, AutoTokenizer --- modulename: _bootstrap, funcname: _find_and_load &lt;frozen importlib._bootstrap&gt;(1170): &lt;frozen importlib._bootstrap&gt;(1171): &lt;frozen importlib._bootstrap&gt;(1173): --- modulename: _bootstrap, funcname: __init__ 
&lt;frozen importlib._bootstrap&gt;(166): &lt;frozen importlib._bootstrap&gt;(167): --- modulename: _bootstrap, funcname: __enter__ &lt;frozen importlib._bootstrap&gt;(170): --- modulename: _bootstrap, funcname: _get_module_lock &lt;frozen importlib._bootstrap&gt;(185): &lt;frozen importlib._bootstrap&gt;(186): &lt;frozen importlib._bootstrap&gt;(187): &lt;frozen importlib._bootstrap&gt;(188): &lt;frozen importlib._bootstrap&gt;(189): &lt;frozen importlib._bootstrap&gt;(190): &lt;frozen importlib._bootstrap&gt;(192): &lt;frozen importlib._bootstrap&gt;(193): &lt;frozen importlib._bootstrap&gt;(196): --- modulename: _bootstrap, funcname: __init__ &lt;frozen importlib._bootstrap&gt;(72): &lt;frozen importlib._bootstrap&gt;(73): &lt;frozen importlib._bootstrap&gt;(74): &lt;frozen importlib._bootstrap&gt;(75): &lt;frozen importlib._bootstrap&gt;(76): &lt;frozen importlib._bootstrap&gt;(77): &lt;frozen importlib._bootstrap&gt;(198): &lt;frozen importlib._bootstrap&gt;(209): &lt;frozen importlib._bootstrap&gt;(211): &lt;frozen importlib._bootstrap&gt;(213): &lt;frozen importlib._bootstrap&gt;(171): --- modulename: _bootstrap, funcname: acquire &lt;frozen importlib._bootstrap&gt;(106): &lt;frozen importlib._bootstrap&gt;(107): &lt;frozen importlib._bootstrap&gt;(108): &lt;frozen importlib._bootstrap&gt;(109): &lt;frozen importlib._bootstrap&gt;(110): &lt;frozen importlib._bootstrap&gt;(111): &lt;frozen importlib._bootstrap&gt;(112): &lt;frozen importlib._bootstrap&gt;(113): &lt;frozen importlib._bootstrap&gt;(114): &lt;frozen importlib._bootstrap&gt;(110): &lt;frozen importlib._bootstrap&gt;(123): &lt;frozen importlib._bootstrap&gt;(1174): &lt;frozen importlib._bootstrap&gt;(1175): &lt;frozen importlib._bootstrap&gt;(1176): --- modulename: _bootstrap, funcname: _find_and_load_unlocked &lt;frozen importlib._bootstrap&gt;(1121): &lt;frozen importlib._bootstrap&gt;(1122): &lt;frozen importlib._bootstrap&gt;(1123): &lt;frozen importlib._bootstrap&gt;(1124): &lt;frozen 
importlib._bootstrap&gt;(1138): --- modulename: _bootstrap, funcname: _find_spec &lt;frozen importlib._bootstrap&gt;(1056): &lt;frozen importlib._bootstrap&gt;(1057): &lt;frozen importlib._bootstrap&gt;(1062): &lt;frozen importlib._bootstrap&gt;(1068): &lt;frozen importlib._bootstrap&gt;(1069): &lt;frozen importlib._bootstrap&gt;(1070): --- modulename: _bootstrap, funcname: __enter__ &lt;frozen importlib._bootstrap&gt;(1028): &lt;frozen importlib._bootstrap&gt;(1071): &lt;frozen importlib._bootstrap&gt;(1072): &lt;frozen importlib._bootstrap&gt;(1078): --- modulename: __init__, funcname: find_spec __init__.py(92): if path is not None and not fullname.startswith('test.'): __init__.py(95): method_name = 'spec_for_{fullname}'.format(**locals()) __init__.py(96): method = getattr(self, method_name, lambda: None) __init__.py(97): return method() --- modulename: __init__, funcname: &lt;lambda&gt; __init__.py(96): method = getattr(self, method_name, lambda: None) &lt;frozen importlib._bootstrap&gt;(1070): --- modulename: _bootstrap, funcname: __exit__ &lt;frozen importlib._bootstrap&gt;(1032): &lt;frozen importlib._bootstrap&gt;(1079): &lt;frozen importlib._bootstrap&gt;(1069): &lt;frozen importlib._bootstrap&gt;(1070): --- modulename: _bootstrap, funcname: __enter__ &lt;frozen importlib._bootstrap&gt;(1028): &lt;frozen importlib._bootstrap&gt;(1071): &lt;frozen importlib._bootstrap&gt;(1072): &lt;frozen importlib._bootstrap&gt;(1078): --- modulename: _bootstrap, funcname: find_spec &lt;frozen importlib._bootstrap&gt;(750): &lt;frozen importlib._bootstrap&gt;(753): &lt;frozen importlib._bootstrap&gt;(1070): --- modulename: _bootstrap, funcname: __exit__ &lt;frozen importlib._bootstrap&gt;(1032): &lt;frozen importlib._bootstrap&gt;(1079): &lt;frozen importlib._bootstrap&gt;(1069): &lt;frozen importlib._bootstrap&gt;(1070): --- modulename: _bootstrap, funcname: __enter__ &lt;frozen importlib._bootstrap&gt;(1028): &lt;frozen importlib._bootstrap&gt;(1071): &lt;frozen 
importlib._bootstrap&gt;(1072): &lt;frozen importlib._bootstrap&gt;(1078): --- modulename: _bootstrap, funcname: find_spec &lt;frozen importlib._bootstrap&gt;(922): --- modulename: _bootstrap, funcname: _call_with_frames_removed &lt;frozen importlib._bootstrap&gt;(241): &lt;frozen importlib._bootstrap&gt;(923): &lt;frozen importlib._bootstrap&gt;(924): &lt;frozen importlib._bootstrap&gt;(1070): --- modulename: _bootstrap, funcname: __exit__ &lt;frozen importlib._bootstrap&gt;(1032): &lt;frozen importlib._bootstrap&gt;(1079): &lt;frozen importlib._bootstrap&gt;(1069): &lt;frozen importlib._bootstrap&gt;(1070): --- modulename: _bootstrap, funcname: __enter__ &lt;frozen importlib._bootstrap&gt;(1028): &lt;frozen importlib._bootstrap&gt;(1071): &lt;frozen importlib._bootstrap&gt;(1072): &lt;frozen importlib._bootstrap&gt;(1078): --- modulename: _bootstrap_external, funcname: find_spec &lt;frozen importlib._bootstrap_external&gt;(1502): &lt;frozen importlib._bootstrap_external&gt;(1503): &lt;frozen importlib._bootstrap_external&gt;(1504): --- modulename: _bootstrap_external, funcname: _get_spec &lt;frozen importlib._bootstrap_external&gt;(1469): &lt;frozen importlib._bootstrap_external&gt;(1470): &lt;frozen importlib._bootstrap_external&gt;(1471): &lt;frozen importlib._bootstrap_external&gt;(1473): --- modulename: _bootstrap_external, funcname: _path_importer_cache &lt;frozen importlib._bootstrap_external&gt;(1429): &lt;frozen importlib._bootstrap_external&gt;(1436): &lt;frozen importlib._bootstrap_external&gt;(1437): &lt;frozen importlib._bootstrap_external&gt;(1438): &lt;frozen importlib._bootstrap_external&gt;(1439): --- modulename: _bootstrap_external, funcname: _path_hooks &lt;frozen importlib._bootstrap_external&gt;(1411): &lt;frozen importlib._bootstrap_external&gt;(1413): &lt;frozen importlib._bootstrap_external&gt;(1414): &lt;frozen importlib._bootstrap_external&gt;(1415): --- modulename: zipimport, funcname: __init__ &lt;frozen zipimport&gt;(65): &lt;frozen 
zipimport&gt;(67): &lt;frozen zipimport&gt;(69): &lt;frozen zipimport&gt;(70): &lt;frozen zipimport&gt;(72): &lt;frozen zipimport&gt;(73): &lt;frozen zipimport&gt;(74): &lt;frozen zipimport&gt;(75): --- modulename: _bootstrap_external, funcname: _path_stat &lt;frozen importlib._bootstrap_external&gt;(147): &lt;frozen zipimport&gt;(86): &lt;frozen zipimport&gt;(88): &lt;frozen importlib._bootstrap_external&gt;(1416): &lt;frozen importlib._bootstrap_external&gt;(1417): &lt;frozen importlib._bootstrap_external&gt;(1413): &lt;frozen importlib._bootstrap_external&gt;(1414): &lt;frozen importlib._bootstrap_external&gt;(1415): --- modulename: _bootstrap_external, funcname: path_hook_for_FileFinder &lt;frozen importlib._bootstrap_external&gt;(1698): --- modulename: _bootstrap_external, funcname: _path_isdir &lt;frozen importlib._bootstrap_external&gt;(166): &lt;frozen importlib._bootstrap_external&gt;(168): --- modulename: _bootstrap_external, funcname: _path_is_mode_type &lt;frozen importlib._bootstrap_external&gt;(152): &lt;frozen importlib._bootstrap_external&gt;(153): --- modulename: _bootstrap_external, funcname: _path_stat &lt;frozen importlib._bootstrap_external&gt;(147): &lt;frozen importlib._bootstrap_external&gt;(156): &lt;frozen importlib._bootstrap_external&gt;(1700): --- modulename: _bootstrap_external, funcname: __init__ &lt;frozen importlib._bootstrap_external&gt;(1563): &lt;frozen importlib._bootstrap_external&gt;(1564): &lt;frozen importlib._bootstrap_external&gt;(1565): --- modulename: _bootstrap_external, funcname: &lt;genexpr&gt; &lt;frozen importlib._bootstrap_external&gt;(1565): --- modulename: _bootstrap_external, funcname: &lt;genexpr&gt; &lt;frozen importlib._bootstrap_external&gt;(1565): --- modulename: _bootstrap_external, funcname: &lt;genexpr&gt; &lt;frozen importlib._bootstrap_external&gt;(1565): &lt;frozen importlib._bootstrap_external&gt;(1564): &lt;frozen importlib._bootstrap_external&gt;(1565): --- modulename: _bootstrap_external, 
funcname: &lt;genexpr&gt; &lt;frozen importlib._bootstrap_external&gt;(1565): --- modulename: _bootstrap_external, funcname: &lt;genexpr&gt; &lt;frozen importlib._bootstrap_external&gt;(1565): --- modulename: _bootstrap_external, funcname: &lt;genexpr&gt; &lt;frozen importlib._bootstrap_external&gt;(1565): &lt;frozen importlib._bootstrap_external&gt;(1564): &lt;frozen importlib._bootstrap_external&gt;(1565): --- modulename: _bootstrap_external, funcname: &lt;genexpr&gt; &lt;frozen importlib._bootstrap_external&gt;(1565): --- modulename: _bootstrap_external, funcname: &lt;genexpr&gt; &lt;frozen importlib._bootstrap_external&gt;(1565): &lt;frozen importlib._bootstrap_external&gt;(1564): &lt;frozen importlib._bootstrap_external&gt;(1566): &lt;frozen importlib._bootstrap_external&gt;(1568): &lt;frozen importlib._bootstrap_external&gt;(1569): &lt;frozen importlib._bootstrap_external&gt;(1574): &lt;frozen importlib._bootstrap_external&gt;(1575): &lt;frozen importlib._bootstrap_external&gt;(1576): &lt;frozen importlib._bootstrap_external&gt;(1440): &lt;frozen importlib._bootstrap_external&gt;(1441): &lt;frozen importlib._bootstrap_external&gt;(1474): &lt;frozen importlib._bootstrap_external&gt;(1475): &lt;frozen importlib._bootstrap_external&gt;(1476): --- modulename: _bootstrap_external, funcname: find_spec &lt;frozen importlib._bootstrap_external&gt;(1609): &lt;frozen importlib._bootstrap_external&gt;(1610): &lt;frozen importlib._bootstrap_external&gt;(1611): &lt;frozen importlib._bootstrap_external&gt;(1612): --- modulename: _bootstrap_external, funcname: _path_stat &lt;frozen importlib._bootstrap_external&gt;(147): &lt;frozen importlib._bootstrap_external&gt;(1615): &lt;frozen importlib._bootstrap_external&gt;(1616): --- modulename: _bootstrap_external, funcname: _fill_cache &lt;frozen importlib._bootstrap_external&gt;(1657): &lt;frozen importlib._bootstrap_external&gt;(1658): &lt;frozen importlib._bootstrap_external&gt;(1659): &lt;frozen 
importlib._bootstrap_external&gt;(1666): &lt;frozen importlib._bootstrap_external&gt;(1674): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen 
importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen 
importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1680): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1680): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1680): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1680): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1680): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen 
importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1680): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1680): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1680): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): &lt;frozen 
importlib._bootstrap_external&gt;(1675): &lt;frozen importlib._bootstrap_external&gt;(1676): &lt;frozen importlib._bootstrap_external&gt;(1677): &lt;frozen importlib._bootstrap_external&gt;(1678): &lt;frozen importlib._bootstrap_external&gt;(1681): [... the same few &lt;frozen importlib._bootstrap_external&gt; lines (1675-1684) repeat for hundreds of entries ...] &lt;frozen importlib._bootstrap_external&gt;(1682): &lt;frozen importlib._bootstrap_external&gt;(1683): &lt;frozen importlib._bootstrap_external&gt;(1684): --- modulename: _bootstrap_external, funcname: &lt;setcomp&gt; [...] </code></pre> <p>Instead, I would like to see which lines are hit in <code>\Lib\site-packages\transformers\models\auto\tokenization_auto.py</code> and other files in the transformers lib.</p>
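One way to cut the noise down to specific files, instead of fighting the `trace` module's filtering: install a custom `sys.settrace` hook that only records `'line'` events whose source path contains a chosen substring. This is a sketch, not part of any library; `make_line_tracer` and the `"transformers"` substring are illustrative names.

```python
import sys

def make_line_tracer(path_substr, hits):
    """Return a trace function that records (filename, lineno) for 'line'
    events coming from files whose path contains path_substr."""
    def tracer(frame, event, arg):
        if event == "line" and path_substr in frame.f_code.co_filename:
            hits.append((frame.f_code.co_filename, frame.f_lineno))
        return tracer  # keep tracing inside nested calls
    return tracer

# sketch of intended use:
# hits = []
# sys.settrace(make_line_tracer("transformers", hits))
# ...code that imports/uses the transformers lib...
# sys.settrace(None)
```

Because the hook filters on `frame.f_code.co_filename`, all the `<frozen importlib._bootstrap_external>` frames simply never match and never get printed.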
<python>
2024-07-11 05:10:39
1
84,585
Franck Dernoncourt
78,733,473
319,058
How to pass all Python's traffic through a SOCKS proxy?
<p>There is <a href="https://stackoverflow.com/a/31641300/319058">How to pass all Python's traffics through a http proxy?</a> However, that answer does not deal with a SOCKS proxy. I want to use a SOCKS proxy, which is easy to get with SSH tunneling:</p> <pre><code>ssh -D 5005 user@server </code></pre>
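One route, assuming a client based on `requests`, is to point it at the SOCKS tunnel via a proxies mapping; with the socks extra installed (`pip install "requests[socks]"`), the `socks5h` scheme also pushes DNS resolution through the tunnel. For process-wide redirection of all sockets, PySocks' `socks.set_default_proxy(...)` followed by replacing `socket.socket` with `socks.socksocket` is the usual monkey-patch. A sketch of the first approach (the port is the `-D` value from the question):

```python
SOCKS_PORT = 5005  # the -D port from `ssh -D 5005 user@server`

# socks5h:// (rather than socks5://) resolves hostnames on the proxy side
proxies = {
    "http": f"socks5h://127.0.0.1:{SOCKS_PORT}",
    "https": f"socks5h://127.0.0.1:{SOCKS_PORT}",
}

# usage (assuming requests with the socks extra is installed):
# import requests
# requests.get("https://example.com", proxies=proxies)
```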
<python><proxy><tinyproxy>
2024-07-11 03:51:49
1
5,477
Win Myo Htet
78,733,472
4,732,111
How to initialise a list defined at the class level from a common method outside of the class in Python?
<p>I have a list defined at the class level in Python and below is my code:</p> <pre><code>class TestStudentReg(unittest.TestCase): student_id_list = [] </code></pre> <p>I'm trying to initialise the list by reading values from a csv file and below is the method that I use within the same class, i.e., <em>TestStudentReg</em>:</p> <pre><code>@classmethod def initialise_student_id_list(cls): filename = 'input/student_testdata.csv' with open(filename, 'r') as csvfile: datareader = (csv.reader(csvfile, delimiter=&quot;|&quot;)) next(datareader, None) # skip the headers for col in datareader: cls.student_id_list.append(col[2]) </code></pre> <p>Since this method is common across several classes, I'm planning to make it a reusable function and define it in a separate Python file (utilities.py), which will look like this:</p> <pre><code>import csv import logging import time import timeit def get_date(dateformat=&quot;%Y-%m-%d&quot;, subtract_num_of_days=0): getdate = date.today() - timedelta(subtract_num_of_days) return getdate.strftime(dateformat) def initialise_student_id_list(**kwargs): filename = 'input/student_testdata.csv' with open(filename, 'r') as csvfile: datareader = (csv.reader(csvfile, delimiter=&quot;|&quot;)) next(datareader, None) # skip the headers for col in datareader: cls.student_id_list.append(col[2]) </code></pre> <p>I would like to understand how to initialise student_id_list from the utilities file.</p> <p>One way I thought of implementing this was to have the method return a list and initialise the value by calling this method:</p> <pre><code>def initialise_student_id_list(**kwargs): id_list=[] filename = 'input/student_testdata.csv' with open(filename, 'r') as csvfile: datareader = (csv.reader(csvfile, delimiter=&quot;|&quot;)) next(datareader, None) # skip the headers for col in datareader: id_list.append(col[2]) return id_list </code></pre> <p>I'm not sure if this is the correct approach. It would be great if someone could help me with this.</p>
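A reusable loader along these lines can then be assigned to the class attribute from `setUpClass`. This is a sketch; `load_student_ids` is an illustrative name, and the file name, delimiter, and column index are taken from the question:

```python
import csv

def load_student_ids(filename="input/student_testdata.csv", column=2, delimiter="|"):
    """Read one column from a delimited file, skipping the header row."""
    ids = []
    with open(filename, newline="") as csvfile:
        reader = csv.reader(csvfile, delimiter=delimiter)
        next(reader, None)  # skip the headers
        for row in reader:
            ids.append(row[column])
    return ids

# In each test class, assign the returned list to the class attribute:
# class TestStudentReg(unittest.TestCase):
#     @classmethod
#     def setUpClass(cls):
#         cls.student_id_list = load_student_ids()
```

Keeping the function free of any `cls` reference is what makes it reusable; each class decides where the returned list goes.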
<python><python-3.x>
2024-07-11 03:51:21
1
363
Balaji Venkatachalam
78,733,467
898,249
Python script crashes on Windows; fixed by running a separate memory-update Python script
<p>I have a Python Flask API server that crashes after a few seconds when running on my new Windows machine (a Dell work station). There is usually no error message, but sometimes there are errors related to Python libraries (not my code). <a href="https://i.sstatic.net/eAlNlnIv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAlNlnIv.png" alt="enter image description here" /></a> This same program runs stably and without errors on 5 other Windows machines with the same OS version (but lower hardware specifications).</p> <p>here' another possible error message (varied from time to time) <a href="https://i.sstatic.net/itCtei1j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/itCtei1j.png" alt="enter image description here" /></a></p> <hr /> <p>Yesterday, I made a new <strong>discovery</strong>: I wrote an unrelated Python script that continuously updates memory information (updates an array variable, takes up 1K of memory) in a while True loop. When I run this <code>update_memory script</code> and then start the Flask server, the Flask program runs normally and does not exit.</p> <p>However, <strong>as soon as I close the <code>update_memory script</code>, the Flask program immediately terminates automatically</strong>. Again, there is usually no error message, but sometimes there are errors related to Python libraries (still, not my code).</p> <hr /> <p>Additional Details:</p> <p>When the Flask py script quits, there's always this event in Windows eventvwr. It goes like:</p> <pre><code>EventData python.exe 3.10.6150.1013 62e84c21 python310.dll 3.10.6150.1013 62e84bd6 c0000005 0000000000073fe5 36f0 01dad2a089e5de44 C:\Users\user\AppData\Local\Programs\Python\Python310\python.exe C:\Users\user\AppData\Local\Programs\Python\Python310\python310.dll ac4f916b-29e0-4081-9ea1-b4640148abe9 </code></pre> <p>I noticed that there is a &quot;c0000005&quot; in it and did some research, it could be something related to memory. 
So I wrote the <code>update_memory script</code> and was shocked by what happened.</p> <hr /> <p>BTW, I have installed different Windows OS versions (Win10, Win11) on this machine, but I always get the same kind of error, without exception.</p> <p>I suspect a hardware issue or a hardware compatibility issue. How can I investigate this further?</p>
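Not a fix, but a way to get more signal: exception code 0xc0000005 is a Windows access violation, and the stdlib `faulthandler` module can dump the Python-level stack when the interpreter dies on one, instead of exiting silently. A minimal sketch:

```python
import faulthandler

# Dump the Python traceback of all threads to stderr when the interpreter
# hits a fatal error (segfault / access violation) instead of dying silently.
faulthandler.enable(all_threads=True)
```

The same effect is available without code changes by starting the process with `python -X faulthandler app.py` (or setting `PYTHONFAULTHANDLER=1`); the resulting traceback usually points at the extension module that triggered the violation.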
<python><windows><flask><memory><hardware>
2024-07-11 03:48:03
2
689
Randy Lam
78,733,361
24,758,287
How to take the average of all previous entries in a group?
<p>I'd like to do the following in python using the polars library:</p> <p>Input:</p> <pre class="lang-py prettyprint-override"><code>df = pl.from_repr(&quot;&quot;&quot; ┌──────┬────────┐ │ Name ┆ Number │ │ --- ┆ --- │ │ str ┆ i64 │ ╞══════╪════════╡ │ Mr.A ┆ 1 │ │ Mr.A ┆ 4 │ │ Mr.A ┆ 5 │ │ Mr.B ┆ 3 │ │ Mr.B ┆ 5 │ │ Mr.B ┆ 6 │ │ Mr.B ┆ 10 │ └──────┴────────┘ &quot;&quot;&quot;) </code></pre> <p>Output:</p> <pre class="lang-py prettyprint-override"><code>shape: (7, 3) ┌──────┬────────┬──────────┐ │ Name ┆ Number ┆ average │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ f64 │ ╞══════╪════════╪══════════╡ │ Mr.A ┆ 1 ┆ 0.0 │ │ Mr.A ┆ 4 ┆ 1.0 │ │ Mr.A ┆ 5 ┆ 2.5 │ │ Mr.B ┆ 3 ┆ 0.0 │ │ Mr.B ┆ 5 ┆ 3.0 │ │ Mr.B ┆ 6 ┆ 4.0 │ │ Mr.B ┆ 10 ┆ 4.666667 │ └──────┴────────┴──────────┘ </code></pre> <p>That is to say:</p> <ol> <li>For every first entry of a person, set the average to zero.</li> <li>For every subsequent entry, calculate the average based on the previous entries</li> </ol> <p>Example:</p> <p>Mr. A started off with average=0 and the Number=1.</p> <p>Then, Mr. A has the Number=4, thus it took the average of the previous entry (1/1 data=1)</p> <p>Then, Mr. A has the Number=5, thus the previous average was: (1+4) / (2 data) = 5/2 = 2.5</p> <p>And so on</p> <p>I've tried the rolling mean function (using a Polars Dataframe, df), however, I'm restricted by rolling_mean's window size (i.e. it calculates only the past 2 entries, plus it averages the current entry as well; I want to average only the previous entries)</p> <p>Does anyone have an idea? Much appreciated!:</p> <pre class="lang-py prettyprint-override"><code>df.group_by(&quot;Name&quot;).agg(pl.col(&quot;Number&quot;).rolling_mean(window_size=2)) </code></pre>
<python><grouping><python-polars>
2024-07-11 02:54:25
1
301
user24758287
78,733,192
11,628,437
How to seed `gymnasium` environment resets when using `stable_baselines3`?
<p>I would like to seed my gymnasium environment. From the <a href="https://gymnasium.farama.org/" rel="nofollow noreferrer">official documentation</a>, the way I'd do it is -</p> <pre class="lang-py prettyprint-override"><code>import gymnasium as gym env = gym.make(&quot;LunarLander-v2&quot;, render_mode=&quot;human&quot;) observation, info = env.reset(seed=42) </code></pre> <p>However, stable_baselines3 doesn't seem to require resets from the user side as shown in the program below -</p> <pre><code>import gymnasium as gym from stable_baselines3 import PPO from stable_baselines3.common.env_util import make_vec_env # Parallel environments vec_env = make_vec_env(&quot;CartPole-v1&quot;, n_envs=4) model = PPO(&quot;MlpPolicy&quot;, vec_env, verbose=1) model.learn(total_timesteps=25000) model.save(&quot;ppo_cartpole&quot;) del model # remove to demonstrate saving and loading model = PPO.load(&quot;ppo_cartpole&quot;) </code></pre> <p>How do I place a seed with <code>stable_baselines3</code>? I tried placing <code>np.random.seed(24)</code> but that didn't work.</p>
<python><reinforcement-learning><stable-baselines><gymnasium>
2024-07-11 01:17:10
1
1,851
desert_ranger
78,733,174
5,754,215
How can I use styles from an existing docx file in my new document?
<p>I have a beautiful .docx file generated by a teammate and I would like to use the styles in it as defaults for the new documents I am generating programmatically via python-docx.</p> <p>I am able to load the existing docx file, but it isn't clear how to use the styles from that document in my newly created documents.</p> <p>Would I need to perform something like the following pseudo code for each style I use from the existing file?</p> <pre><code># Read in our template file to enable style reuse template = Document(&quot;MyTemplate.docx&quot;) # Create our new, empty document doc = Document() set_style(doc, 'Heading 1', template.styles['Heading 1']) set_style(doc, 'Title', template.styles['Title']) # etc </code></pre>
<python><python-docx>
2024-07-11 01:00:02
1
551
Not a machine
78,733,059
11,748,924
Broadcasting multiple versions of X_data that pair with the same y_data
<p>My deep learning architecture accepts an input vector with size <strong>512</strong> and an output vector with size <strong>512</strong> too.</p> <p>The problem is that I have multiple versions of <code>X_data</code> that pair with the same <code>y_data</code>.</p> <p>I have these tensors:</p> <pre><code>(4, 8, 512) -&gt; (batch_size, number of X_data versions, input size to model architecture) (list of X_data) (4, 512) -&gt; (batch_size, output size to model architecture) (y_data) </code></pre> <p>This means:</p> <pre><code>X_data[0,0,:] pairs with y_data[0,:] X_data[0,1,:] pairs with y_data[0,:] ... X_data[0,7,:] pairs with y_data[0,:] X_data[1,0,:] pairs with y_data[1,:] X_data[1,1,:] pairs with y_data[1,:] ... X_data[1,7,:] pairs with y_data[1,:] ... X_data[3,7,:] pairs with y_data[3,:] </code></pre> <p>What is the final tensor shape of X_data and y_data so that I can train the model?</p> <p>Could you do that in NumPy?</p>
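Flattening the version axis into the batch axis, and repeating each target to match, yields an aligned (32, 512) / (32, 512) pair. Because `reshape` walks the last axes fastest, row `k*8 + v` of the flattened input is `X[k, v, :]`, and `np.repeat` emits `y[k]` eight consecutive times, so the pairing listed above is preserved. A sketch (`flatten_pairs` is an illustrative name):

```python
import numpy as np

def flatten_pairs(X, y):
    # X: (batch, versions, features), y: (batch, features)
    n_versions = X.shape[1]
    X_flat = X.reshape(-1, X.shape[-1])           # (batch*versions, features)
    y_flat = np.repeat(y, n_versions, axis=0)     # each y row repeated `versions` times
    return X_flat, y_flat
```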
<python><numpy><tensorflow><keras><deep-learning>
2024-07-10 23:38:28
1
1,252
Muhammad Ikhwan Perwira
78,732,981
3,884,734
Snowflake Native app with container service, grant imported privilege on Snowflake DB
<p>How can a snowflake native application built with container services request or grant imported privilege on Snowflake DB?</p> <p>According to <a href="https://other-docs.snowflake.com/en/native-apps/consumer-granting-privs#grant-the-imported-privileges-privilege-on-the-snowflake-database" rel="nofollow noreferrer">Snowflake Documentation</a>, the grant can only be added through SQL commands. Once I create the application, and run the below SQL, It shows the error</p> <pre><code>GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO APPLICATION my_app; -- Error Privilege 'IMPORTED PRIVILEGES ON SNOWFLAKE DB' cannot be granted because it is not requested by current application version </code></pre> <p>Sharing the manifest.yml for reference:</p> <pre><code>manifest_version: 1 version: name: v1_9 label: &quot;v1_9&quot; comment: &quot;My Application&quot; artifacts: setup_script: setup.sql readme: readme.md container_services: images: - /insta_spcs_db/app_schema/repo_stage/iqr_app_image default_web_endpoint: service: core.iqr_service endpoint: iq configuration: log_level: debug trace_level: always grant_callback: app_public.grant_callback lifecycle_callbacks: version_initializer: app_public.version_init privileges: - CREATE COMPUTE POOL: description: &quot;Enable application to create its own compute pool(s)&quot; - BIND SERVICE ENDPOINT: description: &quot;Enables application to expose service endpoints&quot; - CREATE WAREHOUSE: description: &quot;Enables application to create its own WAREHOUSE&quot; references: - snowflake_query_history: label: &quot;Snowflake Query History&quot; description: &quot;A database in the consumer account that exists outside the APPLICATION object.&quot; privileges: - SELECT object_type: VIEW multi_valued: false register_callback: app_public.register_single_reference </code></pre>
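The error text ("not requested by current application version") suggests the privilege has to be declared in the manifest and shipped in a new version before a consumer can grant it. If I'm reading the Native Apps manifest schema correctly, that means an entry under `privileges` along these lines (a hedged sketch; verify the exact privilege string against the current Snowflake documentation):

```yaml
privileges:
  # ...existing entries (CREATE COMPUTE POOL, BIND SERVICE ENDPOINT, CREATE WAREHOUSE)...
  - IMPORTED PRIVILEGES ON SNOWFLAKE DB:
      description: "Enables the application to query views in the shared SNOWFLAKE database"
```

After publishing a new version with this manifest and upgrading the installed application, the `GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO APPLICATION my_app;` statement should no longer be rejected.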
<python><sql><docker><snowflake-cloud-data-platform>
2024-07-10 22:58:06
1
4,411
Sajal
78,732,956
12,224,591
Save 16-bit PGM Image with Python PIL?
<p>I am attempting to convert an image in PNG format to a 16-bit PGM format and save it using Python's <a href="https://python-pillow.org/" rel="nofollow noreferrer"><code>PIL</code></a> library. I'm using Python 3.12.4 in all examples shown.</p> <hr /> <p>Using the following <code>test.png</code> image:</p> <p><a href="https://i.sstatic.net/DdniArJ4m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdniArJ4m.png" alt="enter image description here" /></a></p> <p>Attempting a simple script like this with the <a href="https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.Image.save" rel="nofollow noreferrer"><code>Image.save</code></a> function:</p> <pre><code>from PIL import Image image = Image.open(&quot;test.png&quot;) image.save(&quot;test.pgm&quot;) </code></pre> <p>This resaves the image as PGM; however, it is always saved as an 8-bit image, as shown by GIMP:</p> <p><a href="https://i.sstatic.net/GPOTKa1Qm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPOTKa1Qm.png" alt="enter image description here" /></a></p> <hr /> <p>Attempting to specify the number of bits per pixel via the optional <code>bits</code> argument as such:</p> <pre><code>from PIL import Image image = Image.open(&quot;test.png&quot;) image.save(&quot;test.pgm&quot;, bits = 16) </code></pre> <p>Also results in an 8-bit PGM image being saved.</p> <hr /> <p>Attempting to manually create a <code>numpy</code> array of the <code>np.uint16</code> type &amp; then creating an image from it using the <a href="https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.fromarray" rel="nofollow noreferrer"><code>Image.fromarray</code></a> function as such:</p> <pre><code>from PIL import Image import numpy as np image = Image.open(&quot;test.png&quot;) imageData = np.array(image.getdata(), dtype = np.uint16) newImage = Image.fromarray(imageData) newImage.save(&quot;test.pgm&quot;, bits = 16) </code></pre> <p>Results in:</p> <pre><code>OSError: cannot write mode I;16 as PPM </code></pre> <p>I also noticed that this appears to break the image, as saving with <code>np.uint32</code> saves a blank image with a very narrow horizontal size and a very large vertical size.</p> <hr /> <p>What is the proper way to save 16-bit PGM images using the Python PIL library?</p>
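If the installed Pillow cannot write I;16 as PGM (support for 16-bit PPM/PGM landed only in newer releases), the binary P5 format is simple enough to emit by hand: per the Netpbm spec, when maxval exceeds 255 each sample is two bytes, most significant byte first. A sketch (`save_pgm16` is an illustrative name):

```python
import numpy as np

def save_pgm16(path, arr):
    """Write a 2-D uint16 array as a binary 16-bit PGM (P5) file."""
    arr = np.asarray(arr, dtype=np.uint16)
    height, width = arr.shape
    with open(path, "wb") as f:
        f.write(f"P5\n{width} {height}\n65535\n".encode("ascii"))
        f.write(arr.astype(">u2").tobytes())  # maxval > 255 -> big-endian 16-bit samples

# To feed it from the PNG (assuming it opens as 8-bit grayscale,
# scaled up so 255 maps to 65535):
# img = Image.open("test.png").convert("I")
# save_pgm16("test.pgm", np.asarray(img, dtype=np.uint16) * 257)
```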
<python><python-imaging-library><pgm>
2024-07-10 22:42:54
2
705
Runsva
78,732,871
2,153,235
Global namespace changes depending on scope?
<p>I have the following function defined in my Spyder startup file:</p> <pre><code># startup.py #----------- def del_globals(*names): for name1 in names: try: del globals()[name1] except KeyError: pass # Ignore if name1 doesn't exist </code></pre> <p>At the Spyder console, I run the following script with the debugger to create a variable <code>cat</code> before descending into a function:</p> <pre><code># MyScript.py #------------ runfile('The/Path/to/startup.py') # In case functions are deleted cat='dog' del_globals('cat') </code></pre> <p>I have a breakpoint at the last line <code>del_globals('cat')</code>. Upon breaking, I confirm that <code>cat</code> is a variable in the global namespace:</p> <pre><code>sorted(globals().keys()) # ['TKraiseCFG', '__builtins__', '__doc__', '__file__', # '__loader__', '__name__', '__nonzero__', '__package__', # '__spec__', 'cat', 'del_globals'] </code></pre> <p>I then step into <code>del_globals('cat')</code> and at the very first line (the <code>for</code> statement), confirmed that <code>cat</code> is not in the global namespace</p> <pre><code>sorted(globals().keys()) ['TKraiseCFG', '__builtins__', '__doc__', '__loader__', '__name__', '__nonzero__', '__package__', '__spec__', '_spyderpdb_builtins_locals', '_spyderpdb_code', '_spyderpdb_locals', 'del_globals'] </code></pre> <p>Should there be just one global namespace, and should that be the prevailing namespace at the REPL command line?</p> <p>P.S. For background, the context is described <a href="https://stackoverflow.com/questions/78727436">here</a> and <a href="https://stackoverflow.com/questions/78732473">here</a>.</p>
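The behaviour follows from the fact that `globals()` inside a function is the namespace of the module where the function was *defined*, not of the caller; so `del_globals`, defined in startup.py, never sees the console's `cat`. A minimal, self-contained demonstration (the module name `startup_sim` is invented for the sketch):

```python
import types

# simulate startup.py as its own module object
startup = types.ModuleType("startup_sim")
exec(
    "def peek_globals():\n"
    "    return sorted(globals().keys())\n",
    startup.__dict__,
)

cat = "dog"  # lives in *this* module's globals
# peek_globals() reports startup_sim's globals, which never contain `cat`
print("cat" in startup.peek_globals())
```

So there is one global namespace *per module*, and the debugger's REPL frame and startup.py are different modules; that is why the two `globals()` listings disagree.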
<python><global-namespace>
2024-07-10 22:05:57
1
1,265
user2153235
78,732,772
601,684
How to get bot's conversation history with user in aiogram?
<p>I'm making a telegram bot using the aiogram (v3.9.0) library for python (v3.11) and I need to access some previous messages in a conversation with a user. Please note that we are not talking about a group or channel, but about a private chat, where the user starts the bot with the <code>/start</code> command.</p> <p>It so happened that I do not maintain a database of messages (I admit my mistake) and I do not have the ids of previous messages saved anywhere.</p> <p>Is there a way to get the history of messages in a personal dialogue between a user and a bot, since it is stored on the server and is available to the user?</p>
<python><telegram-bot><aiogram>
2024-07-10 21:30:36
0
836
alexeyprog
78,732,770
20,394
Type for heterogeneous *args
<p>In typed Python, how do you type a <code>*args</code> list that you're expecting to pass to another argument that is a higher kinded function?</p> <p>For example, this function takes a function and its first argument and internally just routes <code>*args</code> through, but its type signature depends on the type of that, possibly heterogeneous, argument list:</p> <pre><code>def curry_one[T, R](x: T, f: Callable[[T, WHAT_GOES_HERE], R]) -&gt; Callable[[WHAT_GOES_HERE], R]: def curried(*args: WHAT_GOES_HERE) -&gt; R: return f(x, *args) return curried </code></pre> <p>Can <em>Expand</em> be used with a generic type parameter over some kind of heterogeneous array?</p> <p>(I'm not interested in a curry function in the standard library. I'm writing some specific function adapters and am just using curry as an example.)</p>
<python><variadic-functions><python-typing><higher-order-functions>
2024-07-10 21:29:54
1
120,803
Mike Samuel
78,732,560
18,150,609
Applying styles to Pandas multiindex, using all index levels for context
<p>I am attempting to style a dataframe with cell colors based on categories provided via the index. Index levels 0 and 1 provide the fruit and part. I want every cell to be colored based on distinct categories of fruit parts. This works fine when trying to format the cells values throughout the dataframe, but produces issues when trying to style indices.</p> <p>Below is a sample script to style a sample dataframe of fruit parts.</p> <pre><code>import pandas as pd # Dynamically sets style based on index values def style_row(row): color_map = { ('apple', 'skin'): 'red', ('apple', 'seed'): 'blue', ('banana', 'skin'): 'red', ('banana', 'seed'): 'blue', } fruit, part, _ = row.name color = color_map.get((fruit, part)) return [f'background-color: {color}']*len(row) # Sample data data = [ ('apple', 'skin', 'color', 'red', 2), ('apple', 'skin', 'surface_area', 15, 3), ('apple', 'skin', 'texture', 'smooth', 1), ('apple', 'seed', 'color', 'grey', 3), ('apple', 'seed', 'weight', 4, 2), ('banana', 'skin', 'color', 'yellow', 3), ('banana', 'skin', 'surface_area', 25, 21), ('banana', 'skin', 'texture', 'smooth', 5), ('banana', 'seed', 'color', 'grey', 5), ('banana', 'seed', 'weight', 6, 2) ] # Create a DataFrame df = pd.DataFrame(data, columns=['fruit', 'part', 'property', 'value', 'observances']) df = df.set_index(['fruit', 'part', 'property']) # Styled DataFrame styled_df = df.style.apply(lambda row: style_row(row), axis=1) styled_df </code></pre> <p><strong>Result:</strong><br /> <a href="https://i.sstatic.net/zOSDagh5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOSDagh5.png" alt="Semi-complete, without colored index." /></a></p> <p>That produces a dataframe with red and blue cells, as desired. To finish up, I now only need to style the index level 1 and 2 values, based on the index level 0 and 1 values. This appears to be impossible.</p> <p>The issue seems to stem from the capabilities of the <code>pandas.io.formats.style.Styler</code> api. 
Methods such as <code>Styler.map_index</code>, <code>Styler.format_index</code>, and <code>Styler.apply_index</code> each take a callback function in order to style each index value. However, with a multi-index, the callback function is called with values from each index level independently. This API decision decouples the index values and prevents any callback function from making decisions that depend on more than one index level at the same time.</p> <p>Docs on <a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.map_index.html#pandas.io.formats.style.Styler.map_index" rel="nofollow noreferrer">map_index</a>, <a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.apply_index.html#pandas.io.formats.style.Styler.apply_index" rel="nofollow noreferrer">apply_index</a>, and <a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.format_index.html" rel="nofollow noreferrer">format_index</a>.</p> <p><strong>Attempt 1 to bypass this issue:</strong><br /> The approach below attempts to enrich each level of the multiindex with context from the other levels, then apply the style, then remove the redundancies previously added to each level of the index. In theory it should work well, but (for some reason) <code>styled_df.index = new_multiindex</code> produces no effect on the values passed to <code>styled_df.map_index(callback_func)</code>. 
It's as though <code>styled_df.index = new_multiindex</code> does nothing!</p> <pre><code># Function to style indices def style_indices(index_val): if isinstance(index_val, tuple): fruit, part, _ = index_val color_map = { ('apple', 'skin'): 'red', ('apple', 'seed'): 'blue', ('banana', 'skin'): 'red', ('banana', 'seed'): 'blue', } return f'background-color: {color_map.get((fruit, part), &quot;white&quot;)}' # Convert MultiIndex to flat index for modification flat_index = styled_df.index.to_flat_index() # Sets index values with more context for callback functions new_tuples = [(a, flat_index[idx], flat_index[idx]) for idx, (a, b, _) in enumerate(styled_df.index)] new_multiindex = pd.MultiIndex.from_tuples(new_tuples) styled_df.index = new_multiindex # This does not do what you would expect! # Apply styles to the index styled_df = styled_df.map_index(lambda index_val: style_indices(index_val), axis=0) # Revert the index back to its original form simplified_index = pd.MultiIndex.from_tuples([(a, b[1], c[2]) for a, b, c in styled_df.index]) styled_df.index = simplified_index # This does not do what you would expect! styled_df </code></pre> <p>Notes:</p> <ul> <li>Its important that this dataframe remains multiindexed, so that <code>df.to_excel()</code> automatically applies Merge and Center styling.</li> <li>The solution must be compatible with the <code>.to_excel()</code> function, so that the Excel file output retains the styles.</li> </ul>
<python><python-3.x><excel><pandas><dataframe>
2024-07-10 20:07:53
1
364
MrChadMWood
78,732,502
12,021,983
Embedded python script in golang with go routines
<p>I'm working on a Go application that interfaces with Python via CGO to process store data. I'm using goroutines because I have more than 4M store.</p> <p>Issue: The execution gets blocked after some iterations.</p> <p>Here's a simplified overview of the relevant code snippets:</p> <p>Go Code (main package):</p> <pre class="lang-golang prettyprint-override"><code>func init() { defer validation.FinalizePython() } func main() { const CHUNK_SIZE = 1000 var resultsChan = make(chan stores.Store, CHUNK_SIZE) var done = make(chan struct{}) batches := len(records) / CHUNK_SIZE + 1 var wg sync.WaitGroup for i := 0; i &lt; batches; i++ { start := i * CHUNK_SIZE end := (i + 1) * CHUNK_SIZE if end &gt; len(records) { end = len(records) } wg.Add(1) go func(rows [][]string) { defer wg.Done() for _, row := range rows { store := validator.ValidateStore(row) fmt.Println(&quot;Processed store:&quot;, store) resultsChan &lt;- store } }(records[start:end]) } wg.Wait() close(resultsChan) &lt;-done //skipping some unnecessary code go func() { for store := range resultsChan { bqWriter.AddStore(store) if store.Cleaned { atomic.AddInt64(&amp;addedstore, 1) } else { atomic.AddInt64(&amp;removedstore, 1) } } bqWriter.Done() done &lt;- struct{}{} }() } </code></pre> <p>Package validation</p> <pre><code>/* #cgo CFLAGS: -I/usr/local/Cellar/python@3.8/3.8.19/Frameworks/Python.framework/Versions/3.8/include/python3.8 #cgo LDFLAGS: -L/usr/local/Cellar/python@3.8/3.8.19/Frameworks/Python.framework/Versions/3.8/lib -lpython3.8 #include &quot;cleaner_bridge.h&quot; */ import &quot;C&quot; import ( &quot;encoding/json&quot; &quot;fmt&quot; &quot;sync&quot; &quot;unsafe&quot; &quot;github.com/stores&quot; type Validator struct { Headers struct { ID int Name int Address int Country int } } func (validator *Validator) ValidateStore(fields []string) stores.Store { InitializePython() fmt.Println(&quot;Validating store&quot;, fields) var store stores.Store store.ID = GetRowField(fields, 
validator.Headers.ID)
	store.Name = GetRowField(fields, validator.Headers.Name)
	store.Address = GetRowField(fields, validator.Headers.Address)
	store.Country = GetRowField(fields, validator.Headers.Country)

	fmt.Println(&quot;to pass to function Cleaned store data&quot;, store)

	storeCode := C.CString(store.ID)
	defer C.free(unsafe.Pointer(storeCode))
	storeName := C.CString(store.Name)
	defer C.free(unsafe.Pointer(storeName))
	storeAddress := C.CString(store.Address)
	defer C.free(unsafe.Pointer(storeAddress))
	storeCountry := C.CString(store.Country)
	defer C.free(unsafe.Pointer(storeCountry))

	fmt.Println(&quot;Calling C function clean_store&quot;)
	result := C.clean_store(storeCode, storeName, storeAddress, storeCountry, C.int(0))
	defer C.free(unsafe.Pointer(result))
	fmt.Println(&quot;C function clean_store returned&quot;)

	if result != nil {
		cleanedStore := &amp;stores.Store{}
		err := json.Unmarshal([]byte(C.GoString(result)), cleanedStore)
		if err != nil {
			fmt.Println(&quot;Error unmarshalling JSON&quot;, err)
		}
		// Use cleanedStore as needed
	}
	return store
}

func GetRowField(fields []string, index int) string {
	if index &gt;= 0 &amp;&amp; index &lt; len(fields) {
		return fields[index]
	}
	return &quot;&quot;
}

var initOnce sync.Once
var finalizeOnce sync.Once

func InitializePython() {
	initOnce.Do(func() {
		C.initialize_python()
		fmt.Println(&quot;Python initialized&quot;)
	})
}

func FinalizePython() {
	finalizeOnce.Do(func() {
		C.finalize_python()
		fmt.Println(&quot;Python finalized&quot;)
	})
}
</code></pre> <p><code>cleaner_bridge.h</code></p> <pre class="lang-c prettyprint-override"><code>#ifndef CLEANER_BRIDGE_H
#define CLEANER_BRIDGE_H

void initialize_python();
void finalize_python();
const char* clean_store(const char* store_code, const char* store_name, const char* store_address, const char* store_country, int address_pars_flag);

#endif // CLEANER_BRIDGE_H
</code></pre> <p><code>cleaner_bridge.c</code></p> <pre class="lang-c prettyprint-override"><code>#include &quot;cleaner_bridge.h&quot;
#include &lt;Python.h&gt;
#include &lt;pthread.h&gt;
#include &lt;stdio.h&gt;

static pthread_once_t init_once = PTHREAD_ONCE_INIT;
static pthread_mutex_t gil_lock;
static int python_initialized = 0;

void initialize_python_once() {
    Py_Initialize();
    pthread_mutex_init(&amp;gil_lock, NULL);
    PyRun_SimpleString(&quot;import sys&quot;);
    PyRun_SimpleString(&quot;sys.path.append('validation')&quot;);
    PyRun_SimpleString(&quot;import cleaner&quot;);
    python_initialized = 1;
    printf(&quot;Python initialized\n&quot;);
}

void initialize_python() {
    pthread_once(&amp;init_once, initialize_python_once);
}

void finalize_python() {
    if (python_initialized) {
        Py_Finalize();
        python_initialized = 0;
        printf(&quot;Python finalized\n&quot;);
    }
}

const char* clean_store(const char* store_code, const char* store_name, const char* store_address, const char* store_country, int address_pars_flag) {
    pthread_mutex_lock(&amp;gil_lock);
    PyGILState_STATE gstate = PyGILState_Ensure();
    printf(&quot;GIL acquired\n&quot;);

    PyObject* sysPath = PySys_GetObject(&quot;path&quot;);
    PyObject* currentDir = PyUnicode_FromString(&quot;validation&quot;);
    PyList_Append(sysPath, currentDir);
    Py_DECREF(currentDir);

    PyObject* pModule = PyImport_ImportModule(&quot;cleaner&quot;);
    if (!pModule) {
        PyErr_Print();
        PyGILState_Release(gstate);
        pthread_mutex_unlock(&amp;gil_lock);
        printf(&quot;Error importing module\n&quot;);
        return NULL;
    }

    PyObject* pFunc = PyObject_GetAttrString(pModule, &quot;clean_store&quot;);
    if (!pFunc || !PyCallable_Check(pFunc)) {
        if (PyErr_Occurred()) PyErr_Print();
        Py_DECREF(pFunc);
        Py_DECREF(pModule);
        PyGILState_Release(gstate);
        pthread_mutex_unlock(&amp;gil_lock);
        printf(&quot;Error accessing function\n&quot;);
        return NULL;
    }

    PyObject* pArgs = PyTuple_New(5);
    PyTuple_SetItem(pArgs, 0, PyUnicode_FromString(store_code));
    PyTuple_SetItem(pArgs, 1, PyUnicode_FromString(store_name));
    PyTuple_SetItem(pArgs, 2, PyUnicode_FromString(store_address));
    PyTuple_SetItem(pArgs, 3, PyUnicode_FromString(store_country));
    PyTuple_SetItem(pArgs, 4, PyLong_FromLong(address_pars_flag));

    PyObject* pResult = PyObject_CallObject(pFunc, pArgs);
    Py_DECREF(pArgs);
    if (!pResult) {
        PyErr_Print();
        Py_DECREF(pFunc);
        Py_DECREF(pModule);
        PyGILState_Release(gstate);
        pthread_mutex_unlock(&amp;gil_lock);
        printf(&quot;Error calling function\n&quot;);
        return NULL;
    }

    char* result = PyUnicode_AsUTF8(PyUnicode_FromObject(pResult));
    Py_DECREF(pResult);
    Py_DECREF(pFunc);
    Py_DECREF(pModule);

    PyGILState_Release(gstate);
    pthread_mutex_unlock(&amp;gil_lock);
    printf(&quot;GIL released\n&quot;);
    return result;
}
</code></pre> <p><code>cleaner.py</code></p> <pre class="lang-py prettyprint-override"><code>import json  # needed for json.dumps below


def clean_store(store_code, store_name, store_address, store_country, address_pars_flag):
    try:
        store = {
            &quot;store_code&quot;: store_code,
            &quot;store_name&quot;: store_name,
            &quot;store_address&quot;: store_address,
            &quot;store_country&quot;: store_country,
            &quot;address_pars_flag&quot;: address_pars_flag
        }
        return json.dumps(store)
    except Exception as e:
        print(f&quot;Error cleaning store: {e}&quot;)
        return json.dumps({&quot;error&quot;: str(e), &quot;store_code&quot;: store_code, &quot;store_name&quot;: store_name, &quot;store_address&quot;: store_address, &quot;store_country&quot;: store_country})
</code></pre> <p>Here are some logs:</p> <pre class="lang-none prettyprint-override"><code>Python finalized
Connecting to Bigquery...
Connecting to GCS...
Connecting to S3...
Fetching CSV from S3...
Parsing the CSV...
Processing 104 stores
Processed 0 so far
Processed 0 so far
Processed 0 so far
Python initialized
Python initialized
Validating store [100 8378346 Bird In Bush Elsdon Village Green Elsdon GB 55.23311 -2.103 Village Green]
Validating store [30 7085141 Black Bull Godmanchester 32 Post Street, Godmanchester Huntingdon GB 52.32233 -0.1758 32 Post Street, Godmanchester]
Validating store [10 1455570 La Coquille Napoléon Route Napoléon Saint-Jean-Pied-de-Port FR 43.1528 -1.24208 Route Napoléon]
Validating store [20 374314 Sibton White Horse Inn Halesworth Road, Sibton Saxmundham GB 52.27933 1.45492 Halesworth Road, Sibton]
Validating store [60 9145596 The Castle Inn Bradford on Avon 10 Mount Pleasant Bradford on Avon GB 51.35035 -2.24903 10 Mount Pleasant]
to pass to function Cleaned store data {7085141 Black Bull Godmanchester 32 Post Street, Godmanchester Huntingdon GB 52.32233 -0.1758 true 100 [] }
Calling C function clean_store
to pass to function Cleaned store data {8378346 Bird In Bush Elsdon Village Green Elsdon GB 55.23311 -2.103 true 66.66666666666666 [No number exists in the store address] }
Calling C function clean_store
to pass to function Cleaned store data {1455570 La Coquille Napoléon Route Napoléon Saint-Jean-Pied-de-Port FR 43.1528 -1.24208 true 66.66666666666666 [No number exists in the store address] }
Calling C function clean_store
Validating store [40 1334335 The Oakwheel 17 19 Coastal Road, Burniston Scarborough GB 54.32133 -0.44181 17 19 Coastal Road, Burniston]
to pass to function Cleaned store data {374314 Sibton White Horse Inn Halesworth Road, Sibton Saxmundham GB 52.27933 1.45492 inn true 66.66666666666666 [No number exists in the store address] }
Validating store [0 11081738 Real fisherman's cabins in Ballstad, Lofoten - nr. 11, Johnbua 46 Kræmmervikveien Ballstad NO 68.06593 13.53547 Kræmmervikveien 46 Ballstad]
to pass to function Cleaned store data {9145596 The Castle Inn Bradford on Avon 10 Mount Pleasant Bradford on Avon GB 51.35035 -2.24903 inn true 100 [] }
Calling C function clean_store
Calling C function clean_store
Validating store [50 2536808 The Rodney 67 Winwick Road Warrington GB 53.39456 -2.59368 67 Winwick Road]
to pass to function Cleaned store data {1334335 The Oakwheel 17 19 Coastal Road, Burniston Scarborough GB 54.32133 -0.44181 true 100 [] }
Calling C function clean_store
Validating store [70 387155 Turfcutters Arms Main Road Boldre GB 50.80243 -1.47022 Main Road]
to pass to function Cleaned store data {2536808 The Rodney 67 Winwick Road Warrington GB 53.39456 -2.59368 true 100 [] }
Validating store [80 542412 Victoria Inn Sigingstone Cowbridge GB 51.43524 -3.47953 Moorshead Farm Cowbridge UK]
to pass to function Cleaned store data {387155 Turfcutters Arms Main Road Boldre GB 50.80243 -1.47022 true 66.66666666666666 [No number exists in the store address] }
Calling C function clean_store
Validating store [90 340537 The Bulls Head 1 Woodville Road Swadlincote GB 52.78309 -1.51679 1 Woodville Road]
Calling C function clean_store
to pass to function Cleaned store data {11081738 Real fisherman's cabins in Ballstad, Lofoten - nr. 11, Johnbua 46 Kræmmervikveien Ballstad NO 68.06593 13.53547 true 66.66666666666666 [A number exists in the store name] }
Calling C function clean_store
to pass to function Cleaned store data {542412 Victoria Inn Sigingstone Cowbridge GB 51.43524 -3.47953 inn true 66.66666666666666 [No number exists in the store address] }
Calling C function clean_store
to pass to function Cleaned store data {340537 The Bulls Head 1 Woodville Road Swadlincote GB 52.78309 -1.51679 true 100 [] }
Calling C function clean_store
Processed 0 so far
Processed 0 so far
Processed 0 so far
Processed 0 so far
Processed 0 so far
store </code></pre>
<python><c><go><cgo>
2024-07-10 19:55:37
0
531
sokida
78,732,473
2,153,235
"del" a list of variables, some which don't exist, without invoking globals()
<p>In <a href="https://stackoverflow.com/questions/78727436">this Q&amp;A</a>, I describe why I wish to issue (say) <code>del a,b,c</code> where some of the variables do not exist. All solutions work, but they make use of <code>globals()</code> because that is the prevailing namespace. That is, I run a script and then work from the console. The script is constantly changing, so hiving sections of it into functions is not practical.</p> <p>However, I am finding that using <code>globals()</code> is not practical when some of the objects take a lot of memory. For example, I have a list of 90+ thousand nested dictionaries read from a JSON file, which the ensuing code munges into a DataFrame. Even invoking <code>globals()</code> by itself causes the Python console to freeze. From trial and error I found the point at which it doesn't come back is about 20K records.</p> <p>What is a good alternative to using <code>globals()</code> if I want to apply <code>del</code> to a list of variable names, only some of which exist?</p> <p>The solution in the Q&amp;A cited above uses <code>try</code>, but it still requires the invocation <code>globals()</code> to generate the dictionary to look up each variable in the list. Here is the original version of that solution (it has since been modified):</p> <pre><code>def del_globals(*names):
    for name1 in names:
        try:
            del globals()[name1]
        except KeyError:
            pass  # Ignore if name1 doesn't exist
</code></pre>
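One possible direction (a sketch, and an assumption not stated in the question: the freeze may come from materialising or echoing a representation of the whole namespace, not from looking names up in it) is to take the namespace dict as a parameter and use `dict.pop` with a default, which never raises for missing keys and never builds a repr of the dict. The helper name `del_names` is illustrative:

```python
def del_names(namespace, *names):
    """Drop each name from the given namespace dict if present."""
    for name in names:
        # dict.pop with a default never raises KeyError, and nothing
        # here prints or builds a representation of the whole namespace.
        namespace.pop(name, None)

# Hypothetical usage from the console: del_names(globals(), "a", "b", "c")
ns = {"a": 1, "b": 2}
del_names(ns, "a", "no_such_name")
```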
<python>
2024-07-10 19:46:26
1
1,265
user2153235
78,732,261
14,230,633
How to turn off linting in ipynb notebooks?
<p>Is it possible to turn off the squiggly underline linting in notebooks but not other (Python) scripts?</p>
<python><visual-studio-code><jupyter-notebook>
2024-07-10 18:39:25
1
567
dfried
78,732,237
8,545,026
FastAPI uvicorn starlette gradio package errors
<p>I am trying to upgrade a gradio app to the latest version. I have been able to get the app running again but it's throwing server errors when I go to the browser. I am getting the following errors listed:</p> <pre><code>ValueError: too many values to unpack (expected 2)
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File &quot;/home/gil/miniconda3/envs/diarize-env/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py&quot;, line 399, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/home/gil/miniconda3/envs/diarize-env/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py&quot;, line 70, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/home/gil/miniconda3/envs/diarize-env/lib/python3.12/site-packages/fastapi/applications.py&quot;, line 292, in __call__
    await super().__call__(scope, receive, send)
  File &quot;/home/gil/miniconda3/envs/diarize-env/lib/python3.12/site-packages/starlette/applications.py&quot;, line 122, in __call__
    self.middleware_stack = self.build_middleware_stack()
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File &quot;/home/gil/miniconda3/envs/diarize-env/lib/python3.12/site-packages/fastapi/applications.py&quot;, line 213, in build_middleware_stack
    for cls, options in reversed(middleware):
        ^^^^^^^^^^^^
ValueError: too many values to unpack (expected 2)
</code></pre> <p>I am guessing that there is some package conflict that I am missing here. Has anyone who updated to the latest version of FastAPI experienced similar issues?</p>
<python><fastapi><uvicorn><gradio>
2024-07-10 18:30:42
1
550
Husk Rekoms
78,732,010
899,862
Use of defaultdict such that the default is a dict with an empty list
<p>The main object is a dictionary whose keys are strings, values a 2nd dictionary.<br /> The 2nd dictionary has string keys, and list values.</p> <p>I want to append a new value to the empty list created using defaultdict()</p> <p>Here is what I am trying to do:</p> <pre class="lang-py prettyprint-override"><code>import collections

my_dict = collections.defaultdict(dict, {'missing_key': []})
my_dict['foo']['bar'].append('1')
</code></pre> <p>And the result produced by Idle u3.10.11:</p> <pre><code>Traceback (most recent call last):
  File &quot;&lt;pyshell#14&gt;&quot;, line 1, in &lt;module&gt;
    my_dict['foo']['bar'].append('1')
KeyError: 'bar'
</code></pre>
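One way to get the behaviour the question seems to be after (a sketch): make the *default factory itself* produce a `defaultdict(list)`, so that a missing outer key creates an inner dict and a missing inner key creates an empty list:

```python
from collections import defaultdict

# Missing outer keys create an inner defaultdict(list);
# missing inner keys on that create an empty list to append to.
my_dict = defaultdict(lambda: defaultdict(list))
my_dict['foo']['bar'].append('1')
```

With `defaultdict(dict, ...)` the inner value is a plain `dict`, which raises `KeyError` on missing keys, which is exactly the traceback shown.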
<python><dictionary><defaultdict>
2024-07-10 17:31:09
1
2,620
Mikef
78,731,945
251,589
Get the name of the current python distribution (name field in pyproject.toml)
<p>I am getting the version of my package using this code:</p> <pre class="lang-py prettyprint-override"><code>import importlib.metadata

importlib.metadata.version(&quot;mypackage&quot;)
</code></pre> <p><a href="https://stackoverflow.com/a/73904403/251589">As documented in this question</a></p> <p>I would like to avoid hardcoding &quot;mypackage&quot; in this code and instead retrieve the package name from <em>somewhere</em>. How can I do that?</p> <p>Other info - I am using poetry to manage dependencies. The package name is in my <code>pyproject.toml</code> file.</p> <p><strong>Option #1</strong></p> <p>Parse the <code>__name__</code> variable. Something like this probably works: <code>__name__.split(&quot;.&quot;)[0]</code></p> <p><a href="https://stackoverflow.com/a/76740696/251589">This seems like a bad idea</a>.</p> <p><strong>Option #2</strong> - Parse the <code>pyproject.toml</code> file and grab the name from there.</p>
<python><package>
2024-07-10 17:14:32
0
27,385
sixtyfootersdude
78,731,803
9,884,998
Create Matrix by Multiplying ith and jth Vector-Element in Numpy
<p>I need to calculate the error function V</p> <pre><code>V = Σi Σj X[i] X[j] σ[i][j]
</code></pre> <p>Where <code>σ[i][j]</code> is a given matrix and I need a relatively fast solution. To do this I want to create another matrix Y where</p> <pre><code>Y[i][j] = X[i]*X[j]
</code></pre> <p>So that I can simply sum over <code>Y * σ</code>. Is there a good way to implement this with numpy functions?</p> <p>So far I have tried to meshgrid(X, X) and then apply np.prod to each row, however this did not yield the expected output and would have required a for-loop in python.</p> <p>Edit: Minimal reproducible example:</p> <pre><code>cov = np.array(((0.1, 0.05), (0.05, 0.25)))
x = np.array((0.6, 0.4))
desired = (x[0]*x[0]*cov[0][0] + x[0]*x[1]*cov[0][1]
           + x[1]*x[0]*cov[1][0] + x[1]*x[1]*cov[1][1])
</code></pre>
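With the minimal example above, the intermediate matrix is exactly `np.outer(x, x)`, and the whole double sum is a quadratic form, so it can also be computed without building Y at all. A sketch:

```python
import numpy as np

cov = np.array(((0.1, 0.05), (0.05, 0.25)))
x = np.array((0.6, 0.4))

Y = np.outer(x, x)        # Y[i][j] = x[i] * x[j]
V = np.sum(Y * cov)       # Σi Σj x[i] x[j] σ[i][j]

V_direct = x @ cov @ x    # same quadratic form, no intermediate matrix
```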
<python><numpy>
2024-07-10 16:40:50
2
529
David K.
78,731,732
9,854,132
Cannot find an element with Selenium in Python
<p>From this url: <a href="https://ec.europa.eu/economy_finance/recovery-and-resilience-scoreboard/milestones_and_targets.html?lang=en" rel="nofollow noreferrer">https://ec.europa.eu/economy_finance/recovery-and-resilience-scoreboard/milestones_and_targets.html?lang=en</a></p> <p>I want to click to the following button:</p> <p><a href="https://i.sstatic.net/nS3XNgCP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nS3XNgCP.png" alt="enter image description here" /></a></p> <p>and then click View data table but I cannot find the corresponding element. The code I am using is the following:</p> <pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By  # needed for By.XPATH

driver = webdriver.Firefox()
driver.get('https://ec.europa.eu/economy_finance/recovery-and-resilience-scoreboard/milestones_and_targets.html?lang=en')
element = driver.find_element(By.XPATH, '//*[@id=&quot;highcharts-ldz37cw-0&quot;]/svg/g[7]/g')
element.click()
</code></pre> <p>The error I am getting is:</p> <p>NoSuchElementException: Unable to locate element: //*[@id=&quot;highcharts-ldz37cw-0&quot;]/svg/g[7]/g; For documentation on this error, please visit: <a href="https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception" rel="nofollow noreferrer">https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception</a></p> <p>What could the problem be?</p>
<python><selenium-webdriver>
2024-07-10 16:25:28
2
316
Theodosis Siomos
78,731,615
5,547,553
How to rewrite Oracle sql's hierarchical query to polars?
<p>In Oracle I have the following hierarchical sql:</p> <pre class="lang-sql prettyprint-override"><code>with prod as (
  select 10 id, '1008' code from dual union all
  select 11 id, '1582' code from dual union all
  select 12 id, '1583' code from dual union all
  select 13 id, '2023' code from dual union all
  select 14 id, '2025' code from dual union all
  select 15 id, '2030' code from dual union all
  select 16 id, '2222' code from dual
),
prre as (
  select 10 detail_product_id, 90 master_product_id from dual union all
  select 12 detail_product_id, 11 master_product_id from dual union all
  select 91 detail_product_id, 92 master_product_id from dual union all
  select 14 detail_product_id, 12 master_product_id from dual union all
  select 90 detail_product_id, 93 master_product_id from dual union all
  select 11 detail_product_id, 91 master_product_id from dual union all
  select 15 detail_product_id, 12 master_product_id from dual union all
  select 13 detail_product_id, 12 master_product_id from dual union all
  select 94 detail_product_id, 95 master_product_id from dual
)
select prod.code, connect_by_root prod.code group_type
  from prre, prod
 where prre.detail_product_id = prod.id
connect by nocycle prior prre.detail_product_id = prre.master_product_id
start with prod.code in ('1008', '1582')
</code></pre> <p>Resulting in:</p> <pre><code>CODE  GROUP_TYPE
1582  1582
1583  1582
2030  1582
2025  1582
2023  1582
1008  1008
</code></pre> <p>How can I rewrite it to polars? The starting dfs will be like:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl

prod = pl.DataFrame({'id': [10, 11, 12, 13, 14, 15, 16],
                     'code': ['1008', '1582', '1583', '2023', '2025', '2030', '2222']
                     })
prre = pl.DataFrame({'detail_product_id': [10, 12, 91, 14, 90, 11, 15, 13, 94],
                     'master_product_id': [90, 11, 92, 12, 93, 91, 12, 12, 95]
                     })
</code></pre> <p>But how to go on?<br> The task in plain English is:</p> <ul> <li>start out from prod df, where the code is '1008' or '1582'</li> <li>take the corresponding id</li> <li>using this id loop through the prre df: wherever you find this id in the detail_product_id field the corresponding code of the prod df will be in the result list along with the root code value (named group_type)</li> <li>look through the master_product_id-s to these detail_product_id-s recursively, repeating the previous step</li> </ul> <p>The found structure looks like this:</p> <pre><code>11 -&gt; 1582
|-12 -&gt; 1583
  |-13 -&gt; 2023
  |-14 -&gt; 2025
  |-15 -&gt; 2030
</code></pre>
<python><sql><python-polars><recursive-query>
2024-07-10 15:56:30
1
1,174
lmocsi
78,731,458
941,397
How to start gunicorn when using a custom Post Deployment Action on Azure App Service
<p>I am trying to use a Post Deployment Action on Azure App Service with a Flask app. I followed <a href="https://stackoverflow.com/questions/64895608/run-post-deployment-action-on-azure-app-service">this</a> Stack Overflow post to accomplish the Post Deployment Action, but this results in the container not starting with this error: <code>Container has finished running with exit code: 0</code>. According to <a href="https://learn.microsoft.com/en-us/azure/spring-apps/enterprise/troubleshoot-exit-code" rel="nofollow noreferrer">this</a> that means that the container has run to completion due to not running continuously, so I assumed that this means that I also need to start the Flask app via gunicorn at the end of my Post Deployment Action.</p> <p>I therefore added this at the end of my <code>/home/site/deployments/tools/deploy.sh</code> as per <a href="https://learn.microsoft.com/en-us/azure/app-service/configure-language-python#customize-startup-command" rel="nofollow noreferrer">this</a>:</p> <p>gunicorn --bind=0.0.0.0 --timeout 600 --workers=3 app:app</p> <p>But this results in this error:</p> <p><code>ModuleNotFoundError: No module named 'app'</code></p> <p>My app.py is in the root of my deployment and I use <code>app = Flask(__name__)</code> as per <a href="https://learn.microsoft.com/en-us/azure/app-service/quickstart-python?tabs=flask%2Cwindows%2Cazure-cli%2Czip-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli" rel="nofollow noreferrer">this</a> App Service start Flask tutorial, so I am not sure where I am going wrong. When the container deploys the code is copied to something like <code>/tmp/8dca0e6e81ee3cf</code> so I am not sure whether I have to magically change directory there, which would be impossible since the directory changes on every start of the app. 
Where am I going wrong?</p> <p>The error log:</p> <pre><code>2024-07-10T15:21:31.7220801Z [2024-07-10 15:21:31 +0000] [715] [ERROR] Exception in worker process 2024-07-10T15:21:31.7221200Z Traceback (most recent call last): 2024-07-10T15:21:31.7221276Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 609, in spawn_worker 2024-07-10T15:21:31.7221309Z worker.init_process() 2024-07-10T15:21:31.7221342Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/workers/base.py&quot;, line 134, in init_process 2024-07-10T15:21:31.7221375Z self.load_wsgi() 2024-07-10T15:21:31.7221407Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/workers/base.py&quot;, line 146, in load_wsgi 2024-07-10T15:21:31.7221435Z self.wsgi = self.app.wsgi() 2024-07-10T15:21:31.7221463Z ^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.7221523Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/app/base.py&quot;, line 67, in wsgi 2024-07-10T15:21:31.7221552Z self.callable = self.load() 2024-07-10T15:21:31.7221580Z ^^^^^^^^^^^ 2024-07-10T15:21:31.7221612Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/app/wsgiapp.py&quot;, line 58, in load 2024-07-10T15:21:31.7221642Z return self.load_wsgiapp() 2024-07-10T15:21:31.7221670Z ^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.7221703Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/app/wsgiapp.py&quot;, line 48, in load_wsgiapp 2024-07-10T15:21:31.7221731Z return util.import_app(self.app_uri) 2024-07-10T15:21:31.7221784Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.7221816Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/util.py&quot;, line 371, in import_app 2024-07-10T15:21:31.7221845Z mod = importlib.import_module(module) 2024-07-10T15:21:31.7221874Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.7221908Z File 
&quot;/opt/python/3.12.2/lib/python3.12/importlib/__init__.py&quot;, line 90, in import_module 2024-07-10T15:21:31.7221938Z return _bootstrap._gcd_import(name[level:], package, level) 2024-07-10T15:21:31.7221968Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.7221998Z File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1387, in _gcd_import 2024-07-10T15:21:31.7222050Z File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1360, in _find_and_load 2024-07-10T15:21:31.7222081Z File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1324, in _find_and_load_unlocked 2024-07-10T15:21:31.7222110Z ModuleNotFoundError: No module named 'app' 2024-07-10T15:21:31.7222139Z [2024-07-10 15:21:31 +0000] [715] [INFO] Worker exiting (pid: 715) 2024-07-10T15:21:31.7415049Z [2024-07-10 15:21:31 +0000] [716] [INFO] Booting worker with pid: 716 2024-07-10T15:21:31.7840966Z [2024-07-10 15:21:31 +0000] [716] [ERROR] Exception in worker process 2024-07-10T15:21:31.7841281Z Traceback (most recent call last): 2024-07-10T15:21:31.7841356Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 609, in spawn_worker 2024-07-10T15:21:31.7841392Z worker.init_process() 2024-07-10T15:21:31.7841424Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/workers/base.py&quot;, line 134, in init_process 2024-07-10T15:21:31.7841451Z self.load_wsgi() 2024-07-10T15:21:31.7841481Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/workers/base.py&quot;, line 146, in load_wsgi 2024-07-10T15:21:31.7841514Z self.wsgi = self.app.wsgi()2024-07-10T15:21:31.7841541Z ^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.7841571Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/app/base.py&quot;, line 67, in wsgi 2024-07-10T15:21:31.7841618Z self.callable = self.load() 2024-07-10T15:21:31.7841650Z ^^^^^^^^^^^ 2024-07-10T15:21:31.7841681Z File 
&quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/app/wsgiapp.py&quot;, line 58, in load 2024-07-10T15:21:31.7841707Z return self.load_wsgiapp() 2024-07-10T15:21:31.7841734Z ^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.7841766Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/app/wsgiapp.py&quot;, line 48, in load_wsgiapp 2024-07-10T15:21:31.7841793Z return util.import_app(self.app_uri) 2024-07-10T15:21:31.7841819Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.7841867Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/util.py&quot;, line 371, in import_app 2024-07-10T15:21:31.7841896Z mod = importlib.import_module(module) 2024-07-10T15:21:31.7841924Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.7841953Z File &quot;/opt/python/3.12.2/lib/python3.12/importlib/__init__.py&quot;, line 90, in import_module 2024-07-10T15:21:31.8196675Z return _bootstrap._gcd_import(name[level:], package, level) 2024-07-10T15:21:31.8201442Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.8204620Z File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1387, in _gcd_import 2024-07-10T15:21:31.8209799Z File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1360, in _find_and_load 2024-07-10T15:21:31.8213087Z File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1324, in _find_and_load_unlocked 2024-07-10T15:21:31.8216209Z ModuleNotFoundError: No module named 'app' 2024-07-10T15:21:31.8219509Z [2024-07-10 15:21:31 +0000] [716] [INFO] Worker exiting (pid: 716) 2024-07-10T15:21:31.8292405Z [2024-07-10 15:21:31 +0000] [717] [INFO] Booting worker with pid: 717 2024-07-10T15:21:31.8810430Z [2024-07-10 15:21:31 +0000] [717] [ERROR] Exception in worker process 2024-07-10T15:21:31.8811034Z Traceback (most recent call last): 2024-07-10T15:21:31.8811083Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 609, in spawn_worker 
2024-07-10T15:21:31.8811148Z worker.init_process() 2024-07-10T15:21:31.8811181Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/workers/base.py&quot;, line 134, in init_process 2024-07-10T15:21:31.8811209Z self.load_wsgi() 2024-07-10T15:21:31.8811239Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/workers/base.py&quot;, line 146, in load_wsgi 2024-07-10T15:21:31.8811265Z self.wsgi = self.app.wsgi() 2024-07-10T15:21:31.8811291Z ^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.8811324Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/app/base.py&quot;, line 67, in wsgi 2024-07-10T15:21:31.8811351Z self.callable = self.load() 2024-07-10T15:21:31.8811396Z ^^^^^^^^^^^ 2024-07-10T15:21:31.8811427Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/app/wsgiapp.py&quot;, line 58, in load 2024-07-10T15:21:31.8811455Z return self.load_wsgiapp() 2024-07-10T15:21:31.8811482Z ^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.8811512Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/app/wsgiapp.py&quot;, line 48, in load_wsgiapp 2024-07-10T15:21:31.8811539Z return util.import_app(self.app_uri) 2024-07-10T15:21:31.8811567Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.8811598Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/util.py&quot;, line 371, in import_app 2024-07-10T15:21:31.8811647Z mod = importlib.import_module(module) 2024-07-10T15:21:31.8811675Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.8811705Z File &quot;/opt/python/3.12.2/lib/python3.12/importlib/__init__.py&quot;, line 90, in import_module 2024-07-10T15:21:31.8811734Z return _bootstrap._gcd_import(name[level:], package, level) 2024-07-10T15:21:31.8811762Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:31.8811790Z File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1387, in _gcd_import 2024-07-10T15:21:31.8811820Z File 
&quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1360, in _find_and_load 2024-07-10T15:21:31.8811848Z File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1324, in _find_and_load_unlocked 2024-07-10T15:21:31.8811897Z ModuleNotFoundError: No module named 'app' 2024-07-10T15:21:31.9068132Z [2024-07-10 15:21:31 +0000] [717] [INFO] Worker exiting (pid: 717) 2024-07-10T15:21:32.2200387Z [2024-07-10 15:21:32 +0000] [714] [ERROR] Worker (pid:715) exited with code 3 2024-07-10T15:21:32.2379308Z [2024-07-10 15:21:32 +0000] [714] [ERROR] Worker (pid:716) exited with code 3 2024-07-10T15:21:32.2385166Z Traceback (most recent call last): 2024-07-10T15:21:32.2390468Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 209, in run 2024-07-10T15:21:32.2396694Z self.sleep() 2024-07-10T15:21:32.2399813Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 360, in sleep 2024-07-10T15:21:32.2571431Z ready = select.select([self.PIPE[0]], [], [], 1.0) 2024-07-10T15:21:32.2576312Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:32.2580328Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 242, in handle_chld 2024-07-10T15:21:32.2589456Z self.reap_workers() 2024-07-10T15:21:32.2595316Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 530, in reap_workers 2024-07-10T15:21:32.2603327Z raise HaltServer(reason, self.WORKER_BOOT_ERROR) 2024-07-10T15:21:32.2684416Z gunicorn.errors.HaltServer: &lt;HaltServer 'Worker failed to boot.' 
3&gt; 2024-07-10T15:21:32.2687674Z 2024-07-10T15:21:32.2690949Z During handling of the above exception, another exception occurred: 2024-07-10T15:21:32.2693934Z 2024-07-10T15:21:32.2698294Z Traceback (most recent call last): 2024-07-10T15:21:32.2701064Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 662, in kill_worker 2024-07-10T15:21:32.2860795Z os.kill(pid, sig) 2024-07-10T15:21:32.2861212Z ProcessLookupError: [Errno 3] No such process 2024-07-10T15:21:32.2861251Z 2024-07-10T15:21:32.2861284Z During handling of the above exception, another exception occurred: 2024-07-10T15:21:32.2861313Z 2024-07-10T15:21:32.2861374Z Traceback (most recent call last): 2024-07-10T15:21:32.2861410Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/bin/gunicorn&quot;, line 8, in &lt;module&gt; 2024-07-10T15:21:32.2861440Z sys.exit(run()) 2024-07-10T15:21:32.2861469Z ^^^^^ 2024-07-10T15:21:32.2861503Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/app/wsgiapp.py&quot;, line 67, in run 2024-07-10T15:21:32.2861538Z WSGIApplication(&quot;%(prog)s [OPTIONS] [APP_MODULE]&quot;, prog=prog).run() 2024-07-10T15:21:32.2861794Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/app/base.py&quot;, line 236, in run 2024-07-10T15:21:32.2861823Z super().run() 2024-07-10T15:21:32.2861877Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/app/base.py&quot;, line 72, in run 2024-07-10T15:21:32.2861908Z Arbiter(self).run() 2024-07-10T15:21:32.2862115Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 229, in run 2024-07-10T15:21:32.2862149Z self.halt(reason=inst.reason, exit_status=inst.exit_status) 2024-07-10T15:21:32.2862183Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 342, in halt 2024-07-10T15:21:32.2862214Z self.stop() 2024-07-10T15:21:32.2862248Z File 
&quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 393, in stop 2024-07-10T15:21:32.2862278Z self.kill_workers(sig) 2024-07-10T15:21:32.2862331Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 652, in kill_workers 2024-07-10T15:21:32.2871261Z self.kill_worker(pid, sig) 2024-07-10T15:21:32.2879052Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 668, in kill_worker 2024-07-10T15:21:32.2885088Z self.cfg.worker_exit(self, worker) 2024-07-10T15:21:32.2889894Z ^^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:32.2893789Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/config.py&quot;, line 67, in __getattr__ 2024-07-10T15:21:32.2976486Z return self.settings[name].get() 2024-07-10T15:21:32.2981331Z ^^^^^^^^^^^^^^^^^^^^^^^^^ 2024-07-10T15:21:32.2984913Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/config.py&quot;, line 308, in get 2024-07-10T15:21:32.2990859Z def get(self): 2024-07-10T15:21:32.2993990Z 2024-07-10T15:21:32.3152847Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 242, in handle_chld 2024-07-10T15:21:32.3153153Z self.reap_workers() 2024-07-10T15:21:32.3153205Z File &quot;/tmp/8dca0e6e81ee3cf/antenv/lib/python3.12/site-packages/gunicorn/arbiter.py&quot;, line 530, in reap_workers 2024-07-10T15:21:32.3153241Z raise HaltServer(reason, self.WORKER_BOOT_ERROR) 2024-07-10T15:21:32.3153278Z gunicorn.errors.HaltServer: &lt;HaltServer 'Worker failed to boot.' 3&gt; 2024-07-10T15:21:32.5498266Z Container has finished running with exit code: 1. 2024-07-10T15:21:32.5548285Z Container is terminating. Grace period: 5 seconds. 2024-07-10T15:21:32.5698740Z Stop and delete container. Retry count = 0 2024-07-10T15:21:32.5704224Z Stopping container: gmtpythonfunctionapp_924ed61c. 
2024-07-10T15:21:32.8155945Z Deleting container: gmtpythonfunctionapp_924ed61c. Retry count = 0 2024-07-10T15:21:33.8339373Z Container spec TerminationMessagePolicy path 2024-07-10T15:21:33.8406716Z Container is terminated. Total time elapsed: 1284 ms. </code></pre>
<python><azure><flask><deployment><azure-web-app-service>
2024-07-10 15:25:25
1
8,133
Superdooperhero
78,731,437
353,337
Linear interpolation function `interp1d`, replacement after deprecation
<p>In SciPy, there are numerous interpolation functions that return a function, e.g., <code>Akima1DInterpolator</code>. I'd like to have the same for a simple linear interpolator. <code>interp1d</code> does the trick, but is deprecated. What can I replace it with?</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.interpolate import Akima1DInterpolator, interp1d
import matplotlib.pyplot as plt

t = np.linspace(0.0, 1.0, 4)
y = np.array([1.0, 0.8, 0.7, 1.2])

# ak = Akima1DInterpolator(t, y)
# works, but deprecated:
ak = interp1d(t, y, kind=&quot;linear&quot;)

tt = np.linspace(0.0, 1.0, 200)
yy = ak(tt)

plt.plot(tt, yy)
plt.plot(t, y, &quot;x&quot;)
plt.gca().set_aspect(&quot;equal&quot;)
plt.show()
</code></pre> <p><a href="https://i.sstatic.net/oThJGsaA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oThJGsaA.png" alt="enter image description here" /></a></p>
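For plain linear interpolation, one candidate replacement is `np.interp` (a sketch; wrapping it in a lambda restores the "returns a callable" interface that `interp1d` provided). `scipy.interpolate.make_interp_spline(t, y, k=1)` is another commonly suggested option, but the numpy route avoids scipy entirely:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 4)
y = np.array([1.0, 0.8, 0.7, 1.2])

# np.interp evaluates piecewise-linear interpolation directly;
# the lambda makes it a reusable callable like interp1d returned.
lin = lambda tt: np.interp(tt, t, y)
```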
<python><scipy>
2024-07-10 15:20:27
2
59,565
Nico Schlömer
78,731,396
13,100,938
Fixed windowing not producing synchronous output
<p>I am using the Apache Beam Python SDK to create a Dataflow pipeline.</p> <p>Steps:</p> <ol> <li>ingest synchronous pubsub messages</li> <li>window into 1 second windows with 600ms allowed latency and a 2 second processing trigger</li> <li>transform data</li> <li>write to Firestore synchronously (1Hz)</li> </ol> <p>Currently the output is as synchronous as I can get it to be but it is not precise and often I encounter delays.</p> <pre class="lang-py prettyprint-override"><code>| &quot;Fixed Window&quot; &gt;&gt; beam.WindowInto(
      window.FixedWindows(1),
      trigger=AfterProcessingTime(2),
      accumulation_mode=AccumulationMode.DISCARDING,
      # if message late by 600ms, still accept
      allowed_lateness=window.Duration(seconds=0.6)
  )
</code></pre> <p>What would be the recommended way to do this?</p>
<python><google-cloud-firestore><apache-beam>
2024-07-10 15:12:12
1
2,023
Joe Moore
78,731,355
251,589
Why is `__name__` of `__init__` different when importing directly?
<p>I have a directory structure like this:</p> <pre><code>.
├── main.py
└── silly_test
    ├── __init__.py
    └── foo.py
</code></pre> <pre><code># main.py
import silly_test.foo
import silly_test.__init__  # This line is unusual - I wouldn't typically do this.
</code></pre> <pre><code># silly_test/__init__.py
print(f&quot;Init name: {__name__}&quot;)
</code></pre> <pre><code># silly_test/foo.py
print(&quot;Hi from foo&quot;)
</code></pre> <p>This prints:</p> <pre><code>Init name: silly_test
Hi from foo
Init name: silly_test.__init__  &lt;-- WHAT?!
</code></pre> <p>I am surprised by the last line of the output for multiple reasons.</p> <ol> <li>Generally top level code in a file is only executed once. To my surprise, this is executing twice.</li> <li>The <code>__name__</code> property of the file is different.</li> </ol> <p>What is going on here? :)</p> <p>Are there other cases where you can have a many-to-one relationship between <code>__name__</code> and a file?</p>
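A minimal sketch that reproduces the effect without the on-disk project (the package name and print line mirror the question; the temp-directory scaffolding is an assumption for self-containment): `import silly_test.__init__` treats `__init__` as a *submodule* name, so the same file is executed a second time under a second, distinct entry in `sys.modules`.

```python
import os
import sys
import tempfile

# Scaffolding: build a throwaway 'silly_test' package on the fly
pkg_root = tempfile.mkdtemp()
os.makedirs(os.path.join(pkg_root, "silly_test"))
with open(os.path.join(pkg_root, "silly_test", "__init__.py"), "w") as f:
    f.write("print(f'Init name: {__name__}')\n")
sys.path.insert(0, pkg_root)

import silly_test            # executes __init__.py as module 'silly_test'
import silly_test.__init__   # executes the SAME file again as 'silly_test.__init__'

# Two distinct module objects now exist for one file:
print(sorted(n for n in sys.modules if n.startswith("silly_test")))
```

This is also why `python -m silly_test.__init__` and similar spellings are discouraged: each spelling creates its own module object, so module-level state is duplicated.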
<python><import>
2024-07-10 15:04:07
1
27,385
sixtyfootersdude
78,731,243
20,176,161
Convert a dataset into a dataframe with two columns
<p>I have a dataset <code>X</code> which I have tried to convert into a dataframe.</p> <p>The dataset <code>X</code> is as follows:</p> <pre><code>X = woe_transform.fit_transform(df)

   libelle_situation_professionnelle:AUTRES  libelle_situation_professionnelle:RETRAITE  ...  montant_echeance_d:(1347.0, 1561.0]  montant_echeance_d:(1709.0, 15508500.0]
0                                         0                                           0  ...                                False                                    False

[1 rows x 26 columns]
</code></pre> <p>I am trying to convert it to a dataframe (I managed to do it for one column only).</p> <pre><code>feature_name = X.columns.values
feature_values = X.iloc[0].values
summary_table = pd.DataFrame(columns=[&quot;Feature name&quot;], data=feature_name)
print(summary_table)

                                     Feature name
0        libelle_situation_professionnelle:AUTRES
1      libelle_situation_professionnelle:RETRAITE
2       libelle_situation_professionnelle:SALARIE
3   libelle_situation_professionnelle:TRAVAILLEUR ...
4               solde_trim1_d:(-6691655.0, 436.0]
5                   solde_trim1_d:(436.0, 3895.0]
6                 solde_trim1_d:(3895.0, 33317.0]
7                      duree_dossier_d:(120, 180]
8                      duree_dossier_d:(180, 240]
9                      duree_dossier_d:(240, 400]
10                  montant_nominal_d:(0, 140000]
11             montant_nominal_d:(170500, 180500]
12             montant_nominal_d:(180500, 205000]
13             montant_nominal_d:(205000, 220000]
14             montant_nominal_d:(220000, 412000]
15     montant_nominal_d:(412000, 10000000000000]
16                     taux_interets_d:(0.0, 4.2]
17                     taux_interets_d:(5.5, 6.4]
18                     taux_interets_d:(6.4, 8.0]
19                      mois_anciennete_d:(6, 12]
20                     mois_anciennete_d:(12, 24]
21                     mois_anciennete_d:(24, 48]
22                    mois_anciennete_d:(48, 500]
23               montant_echeance_d:(0.0, 1347.0]
24            montant_echeance_d:(1347.0, 1561.0]
25        montant_echeance_d:(1709.0, 15508500.0]
</code></pre> <p>I am looking to do it for both columns <code>feature_values</code> and <code>feature_name</code> in order to have a dataframe with two columns as opposed to one.</p> <p>I have tried this code but it failed.</p> <pre><code>summary_table = pd.DataFrame(columns=[&quot;Feature name&quot;, 'Feature Values'], data=[feature_name, features_values])
</code></pre> <p>Can someone help please?</p>
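A sketch of the two-column construction (the stand-in `X` below is hypothetical; the real one comes from `woe_transform.fit_transform(df)`): `data=` expects *rows*, so passing `[feature_name, feature_values]` produces two rows of 26 values each. Building the frame from a dict gives one column per array instead.

```python
import pandas as pd

# Hypothetical stand-in for the transformed frame X in the question
X = pd.DataFrame([[0, 0, False]], columns=["feat_a", "feat_b", "feat_c"])

feature_name = X.columns.values
feature_values = X.iloc[0].values

# A dict maps each column label to a full column of values
summary_table = pd.DataFrame({
    "Feature name": feature_name,
    "Feature Values": feature_values,
})
print(summary_table)
```

Equivalently, `pd.DataFrame(data=list(zip(feature_name, feature_values)), columns=[...])` builds the same frame row by row.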
<python><pandas><dataframe>
2024-07-10 14:42:39
1
419
bravopapa
78,731,018
2,618,377
Non-supported wavelets in PyWavelets cwt()
<p>I am refreshing my knowledge of wavelets by working through some of the examples in Fugal's book, &quot;Conceptual Wavelets in Digital Signal Processing&quot; (2006), using PyWavelets rather than Matlab. I have run into a problem in the very first introductory chapter. Mr. Fugal presents two examples of using the continuous wavelet transform (CWT) to solve a couple of problems. The first localizes a chirp in frequency and the second localizes a discontinuity in time. However, for the first Fugal uses the Db20 as the basis, and in the second he uses the Db4. The Db family is not one of the supported wavelets that can be used by cwt().</p> <p>I made a couple of substitutions (Complex Morlet for the chirp and Complex Shannon for the discontinuity) that seemed to work well. I was just wondering if there is some way to repeat Fugal's use of the Db family in PyWavelets with the cwt() function?</p>
<python><matlab><pywavelets>
2024-07-10 13:56:14
1
421
Pat B.
78,730,970
2,521,423
Checking subclass against metaclass type
<p>I have a set of plugins that inherit from a metaclass. The metaclass is defined like so:</p> <pre><code>from abc import ABC, abstractmethod

class MetaReader(ABC):
    def __init__(self, arg1, arg2, **kwargs):
        ...
</code></pre> <p>and the subclass like so:</p> <pre><code>from utils.MetaReader import MetaReader

class ABF2Reader(MetaReader):
    ...
</code></pre> <p>This works, as far as I can tell, and I can instantiate classes and use them as intended. However, at some points I need to load these classes as plugins without instantiating them, and when I do, I need to check their subclass against the proper metaclass to know what to do with them when loaded. And there, I get some weird results:</p> <pre><code>from utils.MetaReader import MetaReader
from plugins.datareaders.ABF2Reader import ABF2Reader

print(type(reader))      # reader is an instance of ABF2Reader
print(type(MetaReader))  # MetaReader is just the metaclass, not instantiated
print(type(ABF2Reader))  # Just the subclass, not instantiated
</code></pre> <p>prints:</p> <pre><code>&lt;class 'plugins.datareaders.ABF2Reader.ABF2Reader'&gt;
&lt;class 'abc.ABCMeta'&gt;
&lt;class 'abc.ABCMeta'&gt;
</code></pre> <p>The type for <code>reader</code> is what I expect: it's an instance of ABF2Reader. But the types for the other two are not. <code>type(MetaReader)</code> is also fine, but I expect <code>type(ABF2Reader)</code> to give me <code>&lt;class 'MetaReader'&gt;</code>, not <code>&lt;class 'abc.ABCMeta'&gt;</code></p> <p>Clearly I am using metaclasses wrong. Can someone shed some light on where I am screwing this up?</p>
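A sketch of the distinction (class bodies trimmed from the question, no plugin loading involved): `type(SomeClass)` returns the *metaclass* that constructed the class object, which for any `ABC` descendant is `abc.ABCMeta`. The inheritance check that works on an uninstantiated class is `issubclass`.

```python
from abc import ABC

class MetaReader(ABC):
    """Abstract base class, as in the question (body trimmed)."""

class ABF2Reader(MetaReader):
    """Concrete plugin class."""

# type() of a *class* is its metaclass, so both report ABCMeta:
print(type(MetaReader), type(ABF2Reader))

# Checking the plugin class without instantiating it:
print(issubclass(ABF2Reader, MetaReader))

# Checking an instance:
reader = ABF2Reader()
print(isinstance(reader, MetaReader))
```

So a plugin loader can filter candidate classes with `issubclass(cls, MetaReader)` (guarding against `cls is MetaReader` if the base itself should be excluded).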
<python><metaclass><abc>
2024-07-10 13:47:07
1
1,488
KBriggs
78,730,773
5,224,881
How to make clickable link compatible with IDEs and mkdocs?
<p>My docstrings contain links. I want to make these links clickable in IDE, but I want to use the <a href="https://github.com/mkdocstrings/python" rel="nofollow noreferrer">https://github.com/mkdocstrings/python</a> to generate mkdocs api reference. I want to use this package, because the rest of my documentation is in markdown and thus generated using mkdocs too, and it's easier to tweak it than to merge mkdocs with e.g. sphinx.</p> <p>I know there are 3 python docstring formats the mkdocs can process: <a href="https://mkdocstrings.github.io/griffe/docstrings/" rel="nofollow noreferrer">https://mkdocstrings.github.io/griffe/docstrings/</a></p> <p>google, numpy and sphinx. I went through the documentation of individual formats:</p> <ul> <li><a href="https://sphinx-rtd-tutorial.readthedocs.io/en/latest/docstrings.html#the-sphinx-docstring-format" rel="nofollow noreferrer">https://sphinx-rtd-tutorial.readthedocs.io/en/latest/docstrings.html#the-sphinx-docstring-format</a></li> <li><a href="https://numpydoc.readthedocs.io/en/latest/format.html" rel="nofollow noreferrer">https://numpydoc.readthedocs.io/en/latest/format.html</a></li> <li><a href="https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html" rel="nofollow noreferrer">https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html</a></li> </ul> <p>but did not find the satisfying answer: I have tried to put the following links to my docstring:</p> <pre><code>def foo():
    &quot;&quot;&quot;
    link1 `Example &lt;http://www.example.com&gt;`_
    link2 `Example &lt;http://www.example.com&gt;`
    link3 &lt;http://www.example.com&gt;
    link4 _Example: http://www.example.com
    link5 _`Example &lt;http://www.example.com&gt;`_
    link6 http://www.example.com
    link7 [example](http://www.example.com)
    link8 &lt;a href=&quot;http://www.example.com&quot;&gt;example&lt;/a&gt;
    &quot;&quot;&quot;
</code></pre> <p>which renders the following result in the doc visualization in PyCharm: <a href="https://i.sstatic.net/trklhu3y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/trklhu3y.png" alt="enter image description here" /></a></p> <p>then I used</p> <pre class="lang-markdown prettyprint-override"><code>::: my_package_name
</code></pre> <p>for the api reference, and the following <code>mkdocs.yaml</code> is:</p> <pre class="lang-yaml prettyprint-override"><code>plugins:
  - search
  - mkdocstrings:
      default_handler: python
      handlers:
        python:
          paths: [src]
          options:
            allow_inspection: true
            show_submodules: true
</code></pre> <p>but the rendered html looked like this: <a href="https://i.sstatic.net/1K00Sxg3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1K00Sxg3.png" alt="enter image description here" /></a></p> <p>so there is no single format that would be rendered in the same way in both mkdocstring and the native documentation rendering.</p> <p>Is there any way to achieve it, or do I need to write my own mkdocs preprocessor?</p>
<python><docstring><mkdocs><mkdocstrings>
2024-07-10 13:08:33
1
1,814
Matěj Račinský
78,730,743
7,746,472
Transpose XLSX and send to database, using Pandas
<p>I have a xlsx file that is structured in an unorthodox way. Simplified, it looks like this:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th></th> <th>A</th> <th>B</th> <th>C</th> <th>D</th> <th>E</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>2</td> <td></td> <td>member_id</td> <td>101</td> <td>102</td> <td>102</td> <td>103</td> </tr> <tr> <td>3</td> <td></td> <td>first_name</td> <td>paul</td> <td>john</td> <td>george</td> <td>ringo</td> </tr> <tr> <td>4</td> <td></td> <td>last_name</td> <td>mccartney</td> <td>lennon</td> <td>harrison</td> <td>starr</td> </tr> </tbody> </table></div> <p>Note the member_id is NOT unique (it makes sense in the original data).</p> <p>My goal is to have a table that looks like this:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>member_id</th> <th>first_name</th> <th>last_name</th> </tr> </thead> <tbody> <tr> <td>101</td> <td>paul</td> <td>mccartney</td> </tr> <tr> <td>102</td> <td>john</td> <td>lennon</td> </tr> <tr> <td>102</td> <td>george</td> <td>harrison</td> </tr> <tr> <td>103</td> <td>ringo</td> <td>starr</td> </tr> </tbody> </table></div> <p>(note that member_id is still not unique)</p> <p>So my approach is to read this table into Python, transpose it, and write it to a database.</p> <p>This is what I have so far:</p> <pre><code>import pandas as pd
import openpyxl

df = pd.read_excel('sample.xlsx', sheet_name='data', engine='openpyxl', skiprows=[0])
</code></pre> <p>This is already off to a bad start, as it seems that Pandas wants my columns to be unique (note that the columns are called &quot;102&quot; and &quot;102.1&quot;):</p> <pre><code>  Unnamed: 0   member_id        101     102     102.1    104
0        NaN  first_name       paul    john    george  ringo
1        NaN   last_name  mccartney  lennon  harrison  starr
</code></pre> <p>If I got the issue with the member_id fixed, I would drop the empty column:</p> <pre><code>df = df.drop(df.columns[[0]], axis='columns')
</code></pre> <p>Which would give me</p> <pre><code>    member_id        101     102     102.1    104
0  first_name       paul    john    george  ringo
1   last_name  mccartney  lennon  harrison  starr
</code></pre> <p>Then I would transpose the table like this:</p> <pre><code>df = df.transpose()

                    0          1
member_id  first_name  last_name
101              paul  mccartney
102              john     lennon
102.1          george   harrison
104             ringo      starr
</code></pre> <p>Which has an odd first row of what used to be the index, which I can't seem to drop.</p> <p>Help is greatly appreciated!</p>
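One possible sketch of the whole pipeline (an in-memory frame stands in for the spreadsheet; with the real file the same applies after `pd.read_excel(..., header=None, skiprows=1)`): reading with `header=None` keeps the duplicate member_ids as *data*, so pandas never de-duplicates them into labels like `102.1`, and `set_index` before transposing turns the attribute names into real columns instead of a leftover index row.

```python
import numpy as np
import pandas as pd

# Stand-in for:
# raw = pd.read_excel('sample.xlsx', sheet_name='data',
#                     engine='openpyxl', header=None, skiprows=1)
raw = pd.DataFrame([
    [np.nan, "member_id", 101, 102, 102, 103],
    [np.nan, "first_name", "paul", "john", "george", "ringo"],
    [np.nan, "last_name", "mccartney", "lennon", "harrison", "starr"],
])

out = (
    raw.drop(columns=0)         # drop the empty leading column
       .set_index(1)            # attribute names -> index (columns after .T)
       .T
       .reset_index(drop=True)  # discard the old positional row labels
)
out.columns.name = None
print(out)
```

Duplicate member_ids survive intact because they were never column labels, only cell values.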
<python><python-3.x><pandas><dataframe>
2024-07-10 13:01:56
3
1,191
Sebastian
78,730,642
8,037,521
Non-blocking Tkinter window
<p>I have read multiple similar questions &amp; answers, but somehow still have not understood how to apply them to my particular use case. The best I found was using Tkinter with <code>Thread</code>:</p> <pre><code>from Tkinter import *
import threading

class App(threading.Thread):
    def __init__(self, tk_root):
        self.root = tk_root
        threading.Thread.__init__(self)
        self.start()

    def run(self):
        loop_active = True
        while loop_active:
            user_input = raw_input(&quot;Give me your command! Just type \&quot;exit\&quot; to close: &quot;)
            if user_input == &quot;exit&quot;:
                loop_active = False
                self.root.quit()
                self.root.update()
            else:
                label = Label(self.root, text=user_input)
                label.pack()

ROOT = Tk()
APP = App(ROOT)
LABEL = Label(ROOT, text=&quot;Hello, world!&quot;)
LABEL.pack()
ROOT.mainloop()
</code></pre> <p>But here the parallel task is the <code>raw_input</code>, which is already part of the <code>App</code> itself. But what if I want to have instead two instances of <code>App</code> each in its thread running in parallel? Or <code>App</code> + <code>open3d</code> visualization? How would I modify this code to have responsive Tkinter GUI in one thread, without explicitly relying on some function input?</p> <p>Another solution I found utilized <code>Toplevel</code>. Is that indeed the correct approach for this problem? Would it not complain that Tkinter does not like working with threads, as multiple other answers to similar questions pointed out? Example:</p> <pre><code>from tkinter import *
import threading
import time

class Show_Azure_Message(Toplevel):
    def __init__(self, master, message):
        Toplevel.__init__(self, master)  #master have to be Toplevel, Tk or subclass of Tk/Toplevel
        self.title('')
        self.attributes('WM_DELETE_WINDOW', self.callback)
        self.resizable(False, False)
        Label(self, text=message, font='-size 25').pack(fill=BOTH, expand=True)
        self.geometry('250x50+%i+%i' % ((self.winfo_screenwidth()-250)//2, (self.winfo_screenheight()-50)//2))

    def callback(self):
        pass

BasicApp = Tk()
App = Show_Azure_Message(BasicApp, 'Hello')
for i in range(0, 2):
    print(i)
    time.sleep(1)
App.destroy()
</code></pre>
<python><multithreading><tkinter>
2024-07-10 12:40:22
0
1,277
Valeria
78,730,606
21,152,416
How to update nested attribute name for specific DynamoDB items
<p>I use <code>boto3</code> client for managing DynamoDB items. Let's assume that the item structure is the following:</p> <pre class="lang-json prettyprint-override"><code>{
  &quot;id&quot;: &quot;item_id&quot;,
  &quot;name&quot;: &quot;item_name&quot;,
  &quot;age&quot;: 23,
  &quot;body&quot;: {
    &quot;nested&quot;: &quot;nested_data&quot;
  }
}
</code></pre> <p>There was a bug at some point, and instead of the <code>nested</code> key, a <code>nested_error</code> key was inserted for some items:</p> <pre class="lang-json prettyprint-override"><code>{
  &quot;id&quot;: &quot;item_id&quot;,
  &quot;name&quot;: &quot;item_name&quot;,
  &quot;age&quot;: 23,
  &quot;body&quot;: {
    &quot;nested_error&quot;: &quot;nested_data&quot;
  }
}
</code></pre> <p>The bug has been fixed and I would like to implement a &quot;migration&quot; that will rename the <code>nested_error</code> key to the <code>nested</code> one for all items affected. The action should be skipped if the name of the attribute is already <code>nested</code>.</p> <p>What is the correct way to do such a migration? Should I implement a script with <code>boto3</code>, or would an <code>awscli</code> command be enough? Or maybe it can be done via the AWS Console? What command should I use?</p>
<python><amazon-web-services><amazon-dynamodb><boto3>
2024-07-10 12:34:31
1
1,197
Victor Egiazarian
78,730,254
351,410
Matplotlib: adjust z-order of bar chart after adding all bars
<p>When adding a series of bars to a stacked bar chart in <code>matplotlib.pyplot</code>, there is only one <code>zorder</code> argument for the entire series, and it only accepts a single integer (not an array-like, which would be preferable). My chart has several series, and the intended z-order changes at each bar position (i.e., x-axis position), depending on which series happens to have the tallest bar at that position--otherwise that tall bar will cover the others, which totally defeats the purpose of stacked bars. Alternatively, it's possible to add the bars one-by-one, setting the z-order each time, but this introduces many other complications.</p> <p>Adding a series uniformly:</p> <pre><code>plot.bar(label_list, height_list, zorder=should_not_be_the_same_for_every_bar)
</code></pre> <p>Adding a series bar-by-bar:</p> <pre><code>for label, height in zip(label_list, height_list):
    plot.bar(label, height, zorder=here_it_can_be_specified_per_bar)
</code></pre> <p>To avoid the messy bar-by-bar approach, is there a way to iterate over the bars that have been added to the plot, and change the z-order of each bar?</p> <hr /> <p>Update: solution mentioned by JohanC looks like this:</p> <pre><code>for bars in zip(*axes.containers):
    z = len(bars)
    for bar in sorted(bars, key=lambda x: x.get_height()):
        bar.set_zorder(z)
        z -= 1
</code></pre> <hr /> <p>Not a duplicate because this question and its solution do not use the <code>seaborn</code> module, whereas the proposed duplicate is exclusively possible using <code>seaborn</code>.</p>
<python><matplotlib>
2024-07-10 11:22:41
0
2,715
Byron Hawkins
78,730,136
6,583,936
Scatter 3d animation frames dont display all traces
<p>I have searched for proper animation script and come up with following function on how to add traces:</p> <pre><code>from plotly import graph_objects as go

def add_anim_frames(figure: go.Figure, traces):
    sliders_dict = {
        &quot;active&quot;: 0,
        # &quot;yanchor&quot;: &quot;top&quot;,
        # &quot;xanchor&quot;: &quot;left&quot;,
        &quot;currentvalue&quot;: {
            &quot;font&quot;: {&quot;size&quot;: 20},
            &quot;prefix&quot;: &quot;&quot;,
            &quot;visible&quot;: True,
            &quot;xanchor&quot;: &quot;right&quot;
        },
        &quot;transition&quot;: {&quot;duration&quot;: 300, &quot;easing&quot;: &quot;cubic-in-out&quot;},
        &quot;len&quot;: 0.9,
        # &quot;x&quot;: 0.1,
        # &quot;y&quot;: 0,
        &quot;steps&quot;: [],
        'pad': dict(zip('lrtb', [0, 0, 0, 0])),
    }
    frames = [
        go.Frame(data=trace_list, traces=list(range(len(trace_list))), name=str(i))
        for i, trace_list in enumerate(traces)
    ]
    f_debug = frames[0]
    print(len(f_debug.data))
    print(f_debug.traces)
    sliders_dict['steps'] = [
        {&quot;args&quot;: [[fr.name],
                  {&quot;frame&quot;: {&quot;duration&quot;: 0, &quot;redraw&quot;: True},
                   &quot;mode&quot;: &quot;immediate&quot;,
                   &quot;transition&quot;: {&quot;duration&quot;: 0}}],
         &quot;label&quot;: i,
         &quot;method&quot;: &quot;animate&quot;}
        for i, fr in enumerate(frames)]
    figure.frames = frames
    figure.layout.updatemenus = [
        go.layout.Updatemenu(
            type='buttons',
            buttons=[
                go.layout.updatemenu.Button(
                    label=&quot;Play&quot;,
                    method=&quot;animate&quot;,
                    args=[
                        None,
                        {'frame': {&quot;duration&quot;: 50, &quot;redraw&quot;: True},
                         &quot;mode&quot;: &quot;immediate&quot;,
                         &quot;fromcurrent&quot;: True,
                         &quot;transition&quot;: {&quot;duration&quot;: 50, &quot;easing&quot;: &quot;linear&quot;}},
                    ]),
                go.layout.updatemenu.Button(
                    label='Pause',
                    method='animate',
                    args=[
                        [None],
                        {&quot;frame&quot;: {&quot;duration&quot;: 0, &quot;redraw&quot;: False},
                         &quot;mode&quot;: &quot;immediate&quot;,
                         &quot;transition&quot;: {&quot;duration&quot;: 0}}
                    ]),
            ],
            bgcolor='rgba(0, 0, 0, 0)',
            active=99,
            bordercolor='black',
            font=dict(size=11, color='white', ),
            showactive=False,
            **{
                &quot;direction&quot;: &quot;right&quot;,
                'pad': dict(zip('lrtb', [0, 0, 0, 0])),
                &quot;x&quot;: 0.5,
                &quot;xanchor&quot;: &quot;right&quot;,
                &quot;y&quot;: 1,
                &quot;yanchor&quot;: &quot;top&quot;
            }
        ),
    ]
    figure.layout.sliders = [sliders_dict]
</code></pre> <p>And i display my figure like this:</p> <pre><code>import numpy as np
from plotly import graph_objects as go

cube_points = np.zeros((8, 3))
cube_points[[2, 3, 4, 5], 0] = 1
cube_points[[1, 2, 5, 6], 1] = 1
cube_points[[4, 5, 6, 7], 2] = 1

N_FRAMES = 40
anim_points = np.stack([
    np.sin(np.arange(N_FRAMES)),
    np.cos(np.arange(N_FRAMES)),
    np.arange(N_FRAMES)/10
], axis=-1)

static_traces = [
    go.Scatter3d(**dict(zip('xyz', cube_points.T*5)), mode='markers', name='static'),
]
anim_traces = [
    [
        go.Scatter3d(**dict(zip('xyz', anim_points[[i-1, i, i+1]].T + 2.5)), mode='lines', marker_color='lime'),
        go.Scatter3d(**dict(zip('xyz', anim_points[[i-1, i, i+1]].T + 2.7)), mode='lines', marker_color='olive'),
        go.Scatter3d(**dict(zip('xyz', anim_points[[i-1, i, i+1]].T + 2.3)), mode='lines', marker_color='blue'),
    ]
    for i in range(1, N_FRAMES-1)]

fig = go.Figure(data=[go.Scatter3d()])
fig.add_traces(static_traces)
fig.update_layout(plt.get_layout_3d(show_grid=True), height=700)
add_anim_frames(fig, anim_traces)
fig.show()
</code></pre> <p>The problem is that the <code>Frame</code> object seems capable of displaying only two traces at once: if I leave only one trace in the anim traces list, the static trace will be shown; if I add an extra anim trace, only the anim traces will be shown; and if I add even more, only the first two anim traces will be shown. Please help</p>
<python><animation><plotly><scatter3d>
2024-07-10 10:59:57
1
320
mcstarioni
78,729,859
11,748,924
Numpythonic way to fill value based on range indices reference (label encoding from given range indices)
<p>I have this tensor dimension:</p> <pre><code>(batch_size, class_id, range_indices) -&gt; (4, 3, 2) int64

[[[1250 1302]
  [1324 1374]
  [1458 1572]]

 [[1911 1955]
  [1979 2028]
  [2120 2224]]

 [[2546 2599]
  [2624 2668]
  [2765 2871]]

 [[3223 3270]
  [3286 3347]
  [3434 3539]]]
</code></pre> <p>How do I construct dense representation with filled value with this rule?</p> <p>Since there are 3 class IDs, therefore:</p> <ol> <li>Class ID 0: filled with 1</li> <li>Class ID 1: filled with 2</li> <li>Class ID 2: filled with 3</li> <li>Default: filled with 0</li> </ol> <p>Therefore, it will outputting vector like this:</p> <pre><code>[0 0 0 ...(until 1250)... 1 1 1 ...(until 1302)... 0 0 0 ...(until 1324)... 2 2 2 ...(until 1374)... and so on]
</code></pre> <p>Here is a copiable code:</p> <pre><code>data = np.array([[[1250, 1302], [1324, 1374], [1458, 1572]],
                 [[1911, 1955], [1979, 2028], [2120, 2224]],
                 [[2546, 2599], [2624, 2668], [2765, 2871]],
                 [[3223, 3270], [3286, 3347], [3434, 3539]]])
</code></pre> <p>Here is code generated by ChatGPT, but I'm not sure it's Numpythonic since it's using list comprehension:</p> <pre><code>import numpy as np

# Given tensor
tensor = np.array([[[1250, 1302], [1324, 1374], [1458, 1572]],
                   [[1911, 1955], [1979, 2028], [2120, 2224]],
                   [[2546, 2599], [2624, 2668], [2765, 2871]],
                   [[3223, 3270], [3286, 3347], [3434, 3539]]])

# Determine the maximum value in the tensor to define the size of the output array
max_value = tensor.max()

# Create an empty array filled with zeros of size max_value + 1
dense_representation = np.zeros(max_value + 1, dtype=int)

# Generate the class_ids array, replicated for each batch
class_ids = np.tile(np.arange(1, tensor.shape[1] + 1), tensor.shape[0])

# Generate start and end indices
start_indices = tensor[:, :, 0].ravel()
end_indices = tensor[:, :, 1].ravel()

# Create an array of indices to fill
indices = np.hstack([np.arange(start, end) for start, end in zip(start_indices, end_indices)])

# Create an array of values to fill
values = np.hstack([np.full(end - start, class_id) for start, end, class_id in zip(start_indices, end_indices, class_ids)])

# Fill the dense representation array
dense_representation[indices] = values

# The resulting dense representation
print(dense_representation)
print(dense_representation[1249:1303])
print(dense_representation[1323:1375])
print(dense_representation[1457:1573])
print(dense_representation[1910:1956])
</code></pre> <p>Output:</p> <pre><code>[0 0 0 ... 3 3 0]
[0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0]
[0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0]
[0 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 0]
[0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0]
</code></pre>
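A fully vectorized sketch with no Python-level loop over the ranges (same fill convention as the code above: start inclusive, end exclusive). The only non-obvious step is building a per-range `arange` out of one global `arange` via `np.repeat` of the cumulative segment starts:

```python
import numpy as np

tensor = np.array([[[1250, 1302], [1324, 1374], [1458, 1572]],
                   [[1911, 1955], [1979, 2028], [2120, 2224]],
                   [[2546, 2599], [2624, 2668], [2765, 2871]],
                   [[3223, 3270], [3286, 3347], [3434, 3539]]])

starts = tensor[:, :, 0].ravel()
ends = tensor[:, :, 1].ravel()
lengths = ends - starts

# Global arange minus each segment's start offset gives 0..len-1 per range
offsets = np.arange(lengths.sum()) - np.repeat(np.cumsum(lengths) - lengths, lengths)
indices = np.repeat(starts, lengths) + offsets

# Class labels 1..3 tiled across batches, then repeated per range length
values = np.repeat(np.tile(np.arange(1, tensor.shape[1] + 1), tensor.shape[0]), lengths)

dense = np.zeros(tensor.max() + 1, dtype=int)
dense[indices] = values
```

This produces the same array as the list-comprehension version, but every step is a single NumPy ufunc call over contiguous arrays.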
<python><numpy>
2024-07-10 09:59:35
2
1,252
Muhammad Ikhwan Perwira
78,729,186
7,448,592
dynamic subplot dimensions
<p>I want to plot several subplots. Depending on the data it might be an N x M array of plots, but also only a single plot or a 1 x M or N x 1 array.</p> <p>The code I'm using is as follows, which works for N x M arrays. However, if N or M (or both) are 1, I get the error: <em>matplotlib IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed</em></p> <p>Sure I could do a switch case for the 4 different possibilities but there must be a better way?</p> <pre class="lang-py prettyprint-override"><code>fig, axs = plt.subplots(max_N, max_N)
for N in range(max_N):
    for M in range(max_N):
        axs[N, M].plot(...)
</code></pre>
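One way to avoid the four-case switch (a sketch; the Agg backend is used only so it runs headless): `plt.subplots(..., squeeze=False)` always returns a 2-D array of axes, so `axs[n, m]` indexing works even for 1 x 1, 1 x M, and N x 1 grids.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt

for n_rows, n_cols in [(1, 1), (1, 3), (2, 1), (3, 4)]:
    fig, axs = plt.subplots(n_rows, n_cols, squeeze=False)
    assert axs.shape == (n_rows, n_cols)  # always 2-D with squeeze=False
    for n in range(n_rows):
        for m in range(n_cols):
            axs[n, m].plot([0, 1], [0, 1])
    plt.close(fig)
```

Alternatively, `for ax in axs.flat: ...` iterates over all axes regardless of grid shape.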
<python><matplotlib>
2024-07-10 07:38:39
1
485
goaran
78,728,925
7,735,258
How to generate canonicalized element using c14n11
<p>I am signing my XML document using Python's signxml library. Here's my code:</p> <pre><code>signer = XMLSigner(
    method=methods.enveloped,
    signature_algorithm=&quot;rsa-sha256&quot;,
    c14n_algorithm=&quot;http://www.w3.org/2006/12/xml-c14n11&quot;,
    digest_algorithm=&quot;sha256&quot;,
)
signed_root = signer.sign(
    document_to_sign, key=private_key, cert=public_key, key_info=key_info
)
</code></pre> <p>Now, since I have to pass few additional details to the signature post signing, I have to generate a digest value of the canonicalized XML element. I am facing an issue in doing that. I am using lxml to perform that operation and it does not support c14n11. Here's my code for that:</p> <pre><code>def generate_custom_digest_value(signature_element, element, namespace):
    element_to_digest = signature_element.xpath(element, namespaces=namespace)[0]

    # Serialize the element to canonical XML
    canonicalized_element = etree.tostring(
        element_to_digest, method=&quot;c14n11&quot;, exclusive=False
    )  # this does not work and throws an error because there's no method &quot;c14n11&quot;

    # Calculate the SHA-256 digest
    digest_value = hashlib.sha256(canonicalized_element).digest()

    # Convert digest value to base64 for inclusion in XML
    digest_value_base64 = base64.b64encode(digest_value).decode(&quot;utf-8&quot;)
    return digest_value_base64
</code></pre> <p>My question is that how can I implement the c14n11 using lxml or any other library, because I have tried using <strong>xmlsec</strong> and it also does not support c14n11. Any help would be appreciated.</p>
<python><xml><hash><digital-signature><c14n>
2024-07-10 06:28:23
0
321
Package.JSON
78,728,639
10,855,529
Separate a column with hybrid data types in polars
<p>I have a column containing data of multiple data types as strings. Now, I am wondering how I can split the data into separate columns (one for each data type).</p> <p><strong>Input.</strong></p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
    &quot;hybrid_column&quot;: [&quot;1&quot;, &quot;hello&quot;, &quot;{'a': 1}&quot;, &quot;True&quot;]
})
</code></pre> <p><strong>Output.</strong></p> <pre><code>shape: (4, 5)
┌───────────────┬─────────┬────────┬───────────┬─────────┐
│ hybrid_column │ integer │ string │ struct    │ boolean │
│ ---           │ ---     │ ---    │ ---       │ ---     │
│ str           │ i64     │ str    │ struct[?] │ bool    │
├───────────────┼─────────┼────────┼───────────┼─────────┤
│ 1             │ 1       │ null   │ null      │ null    │
│ hello         │ null    │ hello  │ null      │ null    │
│ {'a': 1}      │ null    │ null   │ {&quot;a&quot;: 1}  │ null    │
│ True          │ null    │ null   │ null      │ true    │
└───────────────┴─────────┴────────┴───────────┴─────────┘
</code></pre>
<python><python-polars>
2024-07-10 04:48:48
1
3,833
apostofes
78,728,631
3,377,314
How to render a static SVG in Shiny for Python?
<p>Is there a way to render an SVG in a shiny-python app directly, rather than convert it to a png and then render it from an <code>ImgData</code> by calling <code>ui.output_plot</code>?</p>
<python><py-shiny>
2024-07-10 04:46:07
1
969
Devil
78,728,112
11,439,134
How to create python wheel for each platform and only include binaries for that specific platform
<p>My project has the structure:</p> <pre><code>project
- binaries
  - win32
  - win64
  - linux32
  - linux64
  - linuxarm
  - darwin64
  - darwinarm
</code></pre> <p>Each of the folders contains binaries that the project uses that are specific to the platform.</p> <p>I'm trying to create a wheel for the project. I'm thinking that I should have a wheel for each of the platforms listed and include only the folder for that platform. Ideally, I can rename the folder to something generic so that I can simplify the code that invokes the needed binary.</p> <p>Would anyone have any thoughts on how I could do this?</p> <p>Apologies if this question is a bit vague. I'm very new to using wheels.</p>
<python><setuptools><python-wheel>
2024-07-09 23:49:27
0
1,058
Andereoo
78,728,033
11,748,924
Numpythonic ways of converting sparse to dense based on referenced index
<p>I have this sparse vector <code>val</code> with this Pythonic way:</p> <pre><code>idx = [2, 5, 6]
val = [69, 12, 15]

_ = np.zeros(idx[-1]+1)
for i in idx:
    _[i] = val[idx.index(i)]

print(_)
dbg(_)
</code></pre> <p>Here is the output:</p> <pre><code>[ 0.  0. 69.  0.  0. 12. 15.]
(7,) float64
</code></pre> <p>How do I achieve my goal without <code>for</code> loops, instead using a Numpythonic way?</p>
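A sketch of the usual vectorized form: NumPy's fancy indexing assigns every `(index, value)` pair in one step, so no loop (and no `list.index` lookup, which is itself O(n) per element) is needed.

```python
import numpy as np

idx = np.array([2, 5, 6])
val = np.array([69, 12, 15])

dense = np.zeros(idx[-1] + 1)  # length 7 here
dense[idx] = val               # one vectorized scatter-assignment
print(dense, dense.shape, dense.dtype)
```

If `idx` were unsorted, `idx.max() + 1` would be the safer length; `idx[-1] + 1` matches the question's assumption of sorted indices.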
<python><numpy>
2024-07-09 23:05:06
2
1,252
Muhammad Ikhwan Perwira
78,727,846
14,301,545
TypeError: cannot pickle 'builtins.DT' object - multiprocessing, pickle, pathos
<p>I'm using the &quot;startinpy&quot; library (<a href="https://startinpy.readthedocs.io/en/0.10.2/" rel="nofollow noreferrer">https://startinpy.readthedocs.io/en/0.10.2/</a>) in my project. It generally works great, but when I want to use it in a second process, the pickle error shows up:</p> <p><code>TypeError: cannot pickle 'builtins.DT' object</code></p> <p>MRE:</p> <pre><code>import time
import startinpy
from multiprocessing import Process

class WorkCl:
    def __init__(self):
        self.dtm = DTM()  # PROCESS
        self.dtm.start()
        time.sleep(1)

class DTM(Process):
    def __init__(self):
        Process.__init__(self)
        self.dt = startinpy.DT()

if __name__ == '__main__':
    wcl = WorkCl()
</code></pre> <p>Does anyone know how to make this work? I need to use the &quot;dt&quot; object as a class object, because I'm using it inside various class methods. Also I'm using a multiprocessing Event object to stop the process, and SharedMemory and ShareableList to share information between processes.</p> <p>I have found a library called &quot;pathos&quot; (<a href="https://pathos.readthedocs.io/en/latest/pathos.html" rel="nofollow noreferrer">https://pathos.readthedocs.io/en/latest/pathos.html</a>) but I don't understand how exactly to replace the standard multiprocessing module with it. Can anyone help?</p>
<python><multiprocessing><pathos>
2024-07-09 21:44:50
1
369
dany
78,727,843
4,894,593
cimport cython module that wraps C libraries
<p>I have this kind of C/Cython project:</p> <pre><code>project/ ├── src/ │ └── modules/ │ ├── cython1.pyx │ ├── cython1.pxd │ ├── cython2.pyx │ ├── cython2.pxd │ ├── includes/ │ │ ├── c1.h │ │ ├── c1.c │ │ ├── c2.h │ │ ├── c2.c │ └── ... └── setup.py </code></pre> <p>with this <code>setup.py</code>:</p> <pre><code>filenames = ['cython1', 'cython2'] extensions = [Extension(name=f&quot;modules.{name}&quot;, sources=[f&quot;modules/{name}.pyx&quot;], include_dirs=[&quot;modules/includes&quot;, ]) for name in filenames] setup( ext_modules=cythonize(extensions), packages=['project'], include_package_data=True, package_data = { 'module': ['*.pxd', '*.so'], }, ) </code></pre> <p>I have several issues.</p> <p>In pxd files, I use definitions from C files as follows: <code>cdef extern from &quot;includes/c1.c&quot;</code> and the project compiles and runs without any issue. When I try to <code>cimport module1</code> in another context, I have a fatal error: 'c1.c' file not found.</p> <p>When I use <code>cdef extern from &quot;includes/c1.h&quot;</code> in the pxd files, the project compiles but its execution gives errors (<code>symbol not found in flat namespace</code>: functions from the C files are not in the namespace).</p> <p>I tried to add the corresponding C files to the source list for each module (<code>sources=[f&quot;modules/{name}.pyx&quot;, &quot;c1.c&quot;]</code>), then I get a message saying that C functions are declared multiple times. The fact is that I use C functions in the different Cython modules. Moreover, the different c/h files interact with each other (for instance there can be an <code>#include &quot;c1.h&quot;</code> in &quot;c2.c&quot;).</p> <p>In the end, I just can't manage it, and can't figure out how to structure the project so that it runs properly and can also be cimported. 
I understand from the forums that one solution might be to precompile my C file as a shared library, but I can't get this procedure to work directly in <code>setup.py</code>.</p> <p>What's the best way to structure the project and to write the <code>setup.py</code>?</p> <p><strong>EDIT:</strong></p> <p>I changed the structure of the project:</p> <pre><code>project/ ├── src/ │ ├── modules_clib/ │ │ ├── c1.h │ │ ├── c1.c │ │ ├── c2.h │ │ └── c2.c │ │ │ └── modules/ │ ├── cython1.pyx │ ├── cython1.pxd │ ├── cython2.pyx │ ├── cython2.pxd │ └── includes.pxd └── setup.py </code></pre> <p>and the setup.py to this:</p> <pre><code>filenames = ['cython1', 'cython2'] extensions = [Extension(name=f&quot;modules.{name}&quot;, sources=[f&quot;./src/modules/{name}.pyx&quot;], include_dirs=[&quot;./src/modules_clib&quot;, ]) for name in filenames] setup( ext_modules=cythonize(extensions), include_package_data=True, packages=['modules_clib', 'modules'], package_dir={'': 'src'}, package_data={'modules_clib': ['*.c', '*.h'], 'modules': ['*.pxd', '*.so']}, ) </code></pre> <p>All the <code>cdef extern from &quot;../modules_clib/c1.c&quot;</code> are now gathered in the <code>includes.pxd</code> file. In the <code>moduleX.pyx</code> files, I <code>cimport</code> all necessary C functions from <code>modules.includes</code>. I can now also <code>cimport modules.includes</code> or <code>cimport modules.cythonX</code> from another project/notebook to use C functions since <code>libc</code> is copied in the <code>site-packages</code> directory. I don't know if it is a good thing that <code>libc</code> is copied directly in the <code>site-packages</code> directory, but it works.</p> <p>However, I'm still forced to use C files directly. Using headers still leads to <code>symbol not found in flat namespace</code>. 
From what I understand, when C files are used, Cython compiles them directly with the <code>*.pyx</code> files that use them, which is not the case if headers are used.</p>
<python><c><cython>
2024-07-09 21:42:54
0
1,080
Ipse Lium
78,727,839
23,260,297
Create config file for python script with variables
<p>I have a config file for a Python script that stores multiple values.</p> <p>My <code>config.ini</code> looks like this:</p> <pre><code>[DHR] key1 = &quot;\\path1\...&quot; key2 = &quot;\\path2\...&quot; key3 = &quot;\\path3\file-{today}.xlsx&quot; </code></pre> <p>My <code>.py</code> has a date variable that gets today's date like:</p> <pre><code>today = str(date.today().strftime('%y%m%d')) </code></pre> <p>However, when I read the <code>.ini</code> file, the variable does not get substituted into the value as I expected.</p> <pre><code>print(config.read(&quot;config.ini&quot;)) &quot;\\path3\file-{today}.xlsx&quot; </code></pre> <p>How can I adjust my script to substitute the variable into the path so that it looks like this:</p> <pre><code>&quot;\\path3\file-240709.xlsx&quot; </code></pre>
<python>
2024-07-09 21:42:38
2
2,185
iBeMeltin
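One possible answer, sketched below: `configparser` never fills `{...}` placeholders itself (its own interpolation uses `%` syntax), so the usual approach is to leave `{today}` in the file and fill it with `str.format` after reading. Note that `configparser` would keep surrounding quotes as part of the value, so they are dropped in this stand-in config.

```python
import configparser
from datetime import date

# stand-in for the real config.ini contents
ini_text = r"""
[DHR]
key3 = \\path3\file-{today}.xlsx
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

today = date.today().strftime('%y%m%d')
# fill the {today} placeholder at read time
path = config['DHR']['key3'].format(today=today)  # e.g. \\path3\file-240709.xlsx
```

The same `.format(today=today)` call works on a value read with `config.read("config.ini")` from disk.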
78,727,781
2,397,318
Showing one x tick per month on a pandas plot
<p>I have a time series with daily data that I want to plot, showing only one x-tick per month. I have tried multiple approaches (including the ones described <a href="https://stackoverflow.com/questions/69101233/using-dateformatter-resets-starting-date-to-1970">here</a>), but it seems that pandas considers my data to start in 1970, even though the date index starts at another date. Example below:</p> <pre><code>import numpy as np import pandas as pd periods = 120 date_range = pd.date_range(start='2023-01-01', periods=periods) data = { 'A': np.random.randint(1, 10, periods), 'B': np.random.randint(1, 10, periods), } df = pd.DataFrame(data, index=date_range) df df['total'] = df.sum(axis=1) df['rolling'] = df['total'].rolling(3).mean() ax = df[['A', 'B']].plot(kind='bar', stacked=True) # this one defaults to 1970 - which seems to be a known problem # https://stackoverflow.com/questions/69101233/using-dateformatter-resets-starting-date-to-1970 # ax.xaxis.set_major_locator(MonthLocator()) # ax.xaxis.set_major_formatter(DateFormatter('%b %Y')) # this one also doesn't work, because the data is thought to be also from 1970 so the picture comes out very wrong monthly_ticks = pd.date_range(start=df.index.min(), end=df.index.max(), freq='ME') ax.set_xticks(monthly_ticks) ax.set_xticklabels(monthly_ticks, rotation=45) </code></pre>
<python><pandas><matplotlib><bar-chart>
2024-07-09 21:24:12
1
3,769
meto
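The underlying issue in the question above is that `DataFrame.plot(kind='bar')` treats the axis as categorical: bars sit at integer positions 0..n-1, so date-based locators interpret those integers as days since 1970. A sketch that works with bar positions instead (assuming a 120-day range for illustration):

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend for the example

periods = 120
date_range = pd.date_range(start='2023-01-01', periods=periods)
df = pd.DataFrame({'A': np.random.randint(1, 10, periods),
                   'B': np.random.randint(1, 10, periods)}, index=date_range)

ax = df[['A', 'B']].plot(kind='bar', stacked=True)

# find the integer bar positions that fall on the first of a month,
# and label those positions with the corresponding dates
month_starts = np.flatnonzero(df.index.day == 1)
ax.set_xticks(month_starts)
ax.set_xticklabels(df.index[month_starts].strftime('%b %Y'), rotation=45)
```

Since the tick positions are plain integers matched to the index, no `MonthLocator`/`DateFormatter` is involved and the 1970 offset never appears.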
78,727,605
6,573,259
How to safely run a Python script indefinitely?
<p>I'm planning on making a Python script that monitors some external parameters and acts based on the retrieved data. The check is basically an HTTP call to a server on the network; if the response is not as expected, it makes another HTTP call to another server on the network.</p> <p>The script will check on a 60-second interval and will keep checking, hopefully indefinitely.</p> <p>Here is a trimmed-down version of what I currently have:</p> <pre><code>import time if __name__ == '__main__': while True: runCheckFunction() time.sleep(60) </code></pre> <p>It's very straightforward, which makes me doubt it; it can't be that simple, right?</p> <p>Concerns:</p> <ol> <li><p>During sleep I am worried that this script might be using excessive CPU resources, which might be a concern since it will be run on an embedded machine.</p> </li> <li><p>Memory leak. This is very unlikely, and if it does happen it won't be in the infinite-loop part; it would be in <code>runCheckFunction()</code> due to my bad programming. After the function exits, is it safe to assume that it frees up all memory it used? I don't have global variables and everything is self-contained within that function.</p> </li> <li><p>Are there better methods for this? Like a standard module that I do not know of that is made to handle this exact purpose?</p> </li> </ol>
<python><python-3.x>
2024-07-09 20:19:36
1
752
Jake quin
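On concern 1 in the question above: `time.sleep` blocks inside the OS, so the process uses essentially zero CPU while sleeping, and the loop shown is a legitimate pattern. The main hardening worth adding is catching exceptions so one failed check doesn't kill the monitor. A sketch (the `max_iterations` parameter exists only to make the loop testable):

```python
import time


def run_forever(check, interval=60, max_iterations=None):
    """Call `check` every `interval` seconds; the sleep burns no CPU."""
    count = 0
    while max_iterations is None or count < max_iterations:
        try:
            check()
        except Exception as exc:
            # a failed check is reported, not fatal to the monitor
            print(f"check failed: {exc}")
        count += 1
        time.sleep(interval)


calls = []
run_forever(lambda: calls.append(1), interval=0, max_iterations=3)
```

For production deployment the usual answer is to keep the script this simple and let a supervisor (systemd, cron, a container restart policy) handle crashes and restarts.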
78,727,436
2,153,235
One-liner to specify many variables for "del" and ignore those that don't exist
<p>I am using Spyder for exploratory data analysis -- not for production code. I like to think of my script as a sequence of major sections. At the end of each section, I want to delete the temporary objects so that the Variable Explorer in Spyder isn't too crowded. However, the objects that exist depend on the control flow through the code. If I have just one <code>del</code> statement at the end of each section to delete all objects that may come into existence regardless of the control flow through the code, I get <code>NameError</code> for those objects that haven't been created.</p> <p><em><strong>Is there a simple one-liner that I can use, similar to <code>del a,b,c,d</code> but not yielding an error if some of the variables don't exist?</strong></em> I want a short one-liner because I don't want to take attention away from the main functions of the code.</p> <p>A try/except block is a multi-line control structure that contributes too much cognitive noise to the code, in my opinion. Something like the following doesn't seem legal, and it seems to be that a one-liner can't consist of an if-statement within a for-loop:</p> <pre><code># Syntax error at the &quot;if&quot; for vName in ['gbCluster']: if vName in locals(): del locals()[vName] # Works without &quot;if&quot; for vName in ['gbCluster']: del locals()[vName] </code></pre>
<python>
2024-07-09 19:27:25
5
1,265
user2153235
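One hedged possibility for the module-level (Spyder console) case described above, where `locals()` is the same dict as `globals()`: `dict.pop` with a default ignores missing names, so a single `for` line deletes whatever exists and skips the rest. (Inside a function this would not work, since mutating `locals()` there has no effect.)

```python
a = 1
b = 2

# one line; 'c' was never defined, yet no NameError is raised
for name in ('a', 'b', 'c'): globals().pop(name, None)
```

The same idea compresses to an expression if preferred: `[globals().pop(n, None) for n in ('a', 'b', 'c')]`.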
78,727,326
6,622,697
How to run a function at startup before requests are handled in Django?
<p>I want to run some code when my Django server starts up in order to clean up state left over from the server's previous run. How can I run some code once and only once at startup, before the server processes any requests? I need to access the database during this time.</p> <p>I've read a couple of things online that seem a bit outdated and are also not guaranteed to run only once.</p> <h1>Update</h1> <p>I've looked at the suggested answer by looking into the <code>ready</code> function. That appears to work, but it is documented that you should not access the database there (<a href="https://docs.djangoproject.com/en/dev/ref/applications/#django.apps.AppConfig.ready" rel="nofollow noreferrer">https://docs.djangoproject.com/en/dev/ref/applications/#django.apps.AppConfig.ready</a>)</p> <p>There are many suggestions at <a href="https://stackoverflow.com/questions/6791911/execute-code-when-django-starts-once-only">Execute code when Django starts ONCE only?</a>, but since that post is a few years old, I thought there might be some additional solutions.</p> <p>This looks promising:</p> <pre><code>from django.dispatch import receiver from django.db.backends.signals import connection_created @receiver(connection_created) def my_receiver(connection, **kwargs): with connection.cursor() as cursor: # do something to the database connection_created.disconnect(my_receiver) </code></pre> <p>Any other thoughts?</p>
<python><django>
2024-07-09 18:57:44
1
1,348
Peter Kronenberg
78,727,261
9,951,273
How to dynamically define class variables?
<p>Let's say I want to create a class:</p> <pre><code>class Foo: hello = &quot;world&quot; goodbye = &quot;moon&quot; </code></pre> <p>But both of those class variables are dynamically provided.</p> <pre><code>attrs = [(&quot;hello&quot;, &quot;world&quot;), (&quot;goodbye&quot;, &quot;moon&quot;)] def create_class(attrs): class Foo: # INSERT DYNAMIC ATTRS HERE pass create_class(attrs) </code></pre> <p>How can I dynamically set those attributes as class variables at the time the class is defined? I know I can use <code>setattr</code> to set them <strong>after</strong> the class is created, but this is not what I'm looking for.</p> <p>I'm thinking this is possible with <code>__build_class__</code> but I'm struggling to implement it myself.</p>
<python><class>
2024-07-09 18:39:25
1
1,777
Matt
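A sketch of one answer to the question above, using the three-argument form of `type`, which is what `class` statements (via `__build_class__`) ultimately call: the attribute dict is part of the class creation itself, not patched on afterwards with `setattr`.

```python
attrs = [("hello", "world"), ("goodbye", "moon")]


def create_class(attrs):
    # type(name, bases, namespace) creates the class with the
    # attributes present from the moment the class object exists
    return type("Foo", (), dict(attrs))


Foo = create_class(attrs)
```

Base classes and methods slot into the same call: `type("Foo", (Base,), {**dict(attrs), "method": some_function})`.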
78,727,228
56,207
Replace removed function PyArray_GetCastFunc in numpy 2
<p>I'm migrating a Python C extension to numpy 2. The extension basically gets a list of 2D numpy arrays and generates a new 2D array by combining them (average, median, etc.). The difficulty is that the input and output arrays are byteswapped. I cannot byteswap the input arrays to machine order (they are too many to fit in memory). So</p> <ul> <li>I read an element of each input array,</li> <li>byteswap them to machine order,</li> <li>then cast them to a list of doubles,</li> <li>perform my operation on the list to obtain a double,</li> <li>cast the double to the dtype of the output array,</li> <li>byteswap again,</li> <li>and write the result in the output array.</li> </ul> <p>To achieve this (using the numpy 1.x C-API) I was using something like:</p> <pre><code>PyArray_Descr* descr_in = PyArray_DESCR((PyArrayObject*)input_frame_1); PyArray_CopySwapFunc* swap_in = descr_in-&gt;f-&gt;copyswap; PyArray_VectorUnaryFunc* cast_in = PyArray_GetCastFunc(descr_in, NPY_DOUBLE); bool need_to_swap_in = PyArray_ISBYTESWAPPED((PyArrayObject*)input_frame_1); </code></pre> <p>And something slightly different but similar for the output. I use the function <code>swap_in</code> to read a value from the input array, byteswap it and write it into a buffer, and then <code>cast_in</code> to cast the contents of the buffer into a double.</p> <p>In numpy 2, the <code>copyswap</code> function is still accessible with a different syntax:</p> <pre><code>PyArray_CopySwapFunc* swap_in = PyDataType_GetArrFuncs(descr_in)-&gt;copyswap; </code></pre> <p>But the <code>cast</code> function is not. Although the member is still in the struct, most of its values are NULL. So this doesn't work:</p> <pre><code>PyArray_VectorUnaryFunc* cast_in = PyDataType_GetArrFuncs(descr_in)-&gt;cast[NPY_DOUBLE]; </code></pre> <p>The <a href="https://numpy.org/doc/stable/release/2.0.0-notes.html#numpy-2-0-c-api-removals" rel="nofollow noreferrer">documentation</a> says</p> <blockquote> <p>PyArray_GetCastFunc is removed. 
Note that custom legacy user dtypes can still provide a castfunc as their implementation, but any access to them is now removed. The reason for this is that NumPy never used these internally for many years. If you use simple numeric types, please just use C casts directly. In case you require an alternative, please let us know so we can create new API such as PyArray_CastBuffer() which could use old or new cast functions depending on the NumPy version.</p> </blockquote> <p>So the function has been removed, but there isn't a clear path to substitute it with something else. What is the correct way to read and write values from/to byteswapped arrays?</p> <p>Here is more detailed sample code. It just iterates over the input and saves the value in a double.</p> <pre><code>double d_val = 0; char buffer[NPY_BUFSIZE]; PyObject* input_frame_1; // input_frame_1 is initialized over here // Conversion PyArray_Descr* descr_in = PyArray_DESCR((PyArrayObject*)input_frame_1); PyArray_CopySwapFunc* swap_in = descr_in-&gt;f-&gt;copyswap; PyArray_VectorUnaryFunc* cast_in = PyArray_GetCastFunc(descr_in, NPY_DOUBLE); bool need_to_swap_in = PyArray_ISBYTESWAPPED((PyArrayObject*)input_frame_1); // Iterator PyArrayIterObject* iter = PyArray_IterNew(input_frame_1); // Just reads the value and casts it into a double d_val while (iter-&gt;index &lt; iter-&gt;size) { d_val = 0; // Swap the value if needed and store it in the buffer swap_in(buffer, iter-&gt;dataptr, need_to_swap_in, NULL); cast_in(buffer, &amp;d_val, 1, NULL, NULL); /* Code to advance iter comes here */ } </code></pre>
<python><numpy>
2024-07-09 18:26:15
1
727
Sergio
78,727,183
3,821,009
Distinct elements across subgroups for each group in polars
<p>Consider this data frame:</p> <pre><code>df = (polars .DataFrame( dict( j=[1,1,1,1,2,2,3,3,3,3,3,3,3], k=[1,1,2,2,3,3,4,4,5,5,6,6,6], l=[1,2,1,2,2,2,3,4,3,3,3,4,3], u=[1,1,1,1,2,2,3,3,3,3,3,3,3], ) ) ) j k l u i64 i64 i64 i64 1 1 1 1 1 1 2 1 1 2 1 1 1 2 2 1 2 3 2 2 2 3 2 2 3 4 3 3 3 4 4 3 3 5 3 3 3 5 3 3 3 6 3 3 3 6 4 3 3 6 3 3 shape: (13, 4) </code></pre> <p>I can do this:</p> <pre><code>dfj = (df .group_by('j', 'k', maintain_order=True) .agg( i=polars.struct('l', 'u').unique() ) ) j k i i64 i64 list[struct[2]] 1 1 [{2,1}, {1,1}] 1 2 [{2,1}, {1,1}] 2 3 [{2,2}] 3 4 [{3,3}, {4,3}] 3 5 [{3,3}] 3 6 [{3,3}, {4,3}] shape: (6, 3) </code></pre> <p><strong>Question 1</strong>: Why did <code>agg</code> not aggregate rows in order? E.g. I'd expect the first row to be this instead:</p> <pre><code> j k i 1 1 [{1,1}, {2,1}] </code></pre> <p>Can this be resolved somehow?</p> <p>I can then do this:</p> <pre><code>dfk = (dfj .group_by('j', maintain_order=True) .agg( o=polars.col('i').unique() ) ) polars.exceptions.InvalidOperationError: `unique` operation not supported for dtype `list[struct[2]]` </code></pre> <p><strong>Question 2</strong>: Why is unique not working for lists (of structs)?</p> <p>The above is my XY problem.</p> <p>Now consider <code>j</code> a group, <code>k</code> a subgroup within <code>j</code> and let's call <code>i</code> a value of a subgroup. 
Consider the first <code>group_by</code> above: after fixing the ordering of structs within <code>i</code>, it logically breaks into:</p> <pre><code> j k i i64 i64 list[struct[2]] --- 1 1 [{1,1}, {2,1}] 1 2 [{1,1}, {2,1}] --- 2 3 [{2,2}] --- 3 4 [{3,3}, {4,3}] 3 5 [{3,3}] 3 6 [{3,3}, {4,3}] shape: (6, 3) </code></pre> <p>If you look at all groups:</p> <ul> <li>Group 1 has two subgroups 1 and 2 and both their values are the same <code>[{1,1}, {2,1}]</code></li> <li>Group 2 has one subgroup 3 and a unique value [{2,2}]</li> <li>Group 3 has 3 subgroups 4, 5 and 6 and they have two distinct values <code>[{3,3}, {4,3}]</code> and <code>[{3,3}]</code></li> </ul> <p><strong>Question 3</strong>: (My second-level XY problem) Given a starting data frame with at least 3 columns, where <code>j</code> is a group and <code>k</code> is a subgroup, get a list of all unique values in all the remaining columns, per group. For the above, I'd expect this output (or something similar - e.g. the order within the list may not matter necessarily):</p> <pre><code> j i i64 list[list[struct[2]]] 1 [[{1,1}, {2,1}]] 2 [[{2,2}]] 3 [[{3,3}, {4,3}], [{3,3}]] shape: (3, 2) </code></pre> <p>I'm only sligtly interested in both XY problems, as they require some data wrangling that might be useful in general / in the future.</p> <p><strong>Question 4</strong>: (This is as far from an XY problem as I managed to get) Is there a way to list all subgroups that have different set of values in the remaining columns? For example, the output could be:</p> <pre><code> j k l u i64 i64 i64 i64 3 4 3 3 3 4 4 3 3 5 3 3 3 5 3 3 3 6 3 3 3 6 4 3 3 6 3 3 shape: (7, 4) </code></pre> <p>or any other structural variation of that that's more convenient - as long as it lists either <code>j</code> or <code>k</code> in some way (e.g. as separate rows or packed into a struct or a list) and at least one row / list element for each of the distinct (<code>l</code>, <code>u</code>) values. 
This would be another acceptable output:</p> <pre><code> k l u i64 i64 i64 4 3 3 4 4 3 5 3 3 6 3 3 6 4 3 shape: (5, 3) </code></pre>
<python><python-polars>
2024-07-09 18:15:48
1
4,641
levant pied
78,727,159
1,600,870
How do I add a location filter when using the LinkedIn Voyager API?
<p>I'm trying to scrape LinkedIn job postings using the Voyager API, adapting code from <a href="https://github.com/ArshKA/LinkedIn-Job-Scraper" rel="nofollow noreferrer">here</a>. Here's the relevant portion of the code:</p> <pre><code>class JobSearchRetriever: def __init__(self): self.job_search_link = 'https://www.linkedin.com/voyager/api/voyagerJobsDashJobCards?decorationId=com.linkedin.voyager.dash.deco.jobs.search.JobSearchCardsCollection-187&amp;count=100&amp;q=jobSearch&amp;query=(origin:JOB_SEARCH_PAGE_OTHER_ENTRY,selectedFilters:(sortBy:List(DD)),spellCorrectionEnabled:true)&amp;start=0' emails, passwords = get_logins('search') self.sessions = [create_session(email, password) for email, password in zip(emails, passwords)] self.session_index = 0 self.headers = [{ 'Authority': 'www.linkedin.com', 'Method': 'GET', 'Path': 'voyager/api/voyagerJobsDashJobCards?decorationId=com.linkedin.voyager.dash.deco.jobs.search.JobSearchCardsCollection-187&amp;count=25&amp;q=jobSearch&amp;query=(origin:JOB_SEARCH_PAGE_OTHER_ENTRY,selectedFilters:(sortBy:List(DD)),spellCorrectionEnabled:true)&amp;start=0', 'Scheme': 'https', 'Accept': 'application/vnd.linkedin.normalized+json+2.1', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'en-US,en;q=0.9', 'Cookie': &quot;; &quot;.join([f&quot;{key}={value}&quot; for key, value in session.cookies.items()]), 'Csrf-Token': session.cookies.get('JSESSIONID').strip('&quot;'), # 'TE': 'Trailers', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36', # 'X-Li-Track': 
'{&quot;clientVersion&quot;:&quot;1.12.7990&quot;,&quot;mpVersion&quot;:&quot;1.12.7990&quot;,&quot;osName&quot;:&quot;web&quot;,&quot;timezoneOffset&quot;:-7,&quot;timezone&quot;:&quot;America/Los_Angeles&quot;,&quot;deviceFormFactor&quot;:&quot;DESKTOP&quot;,&quot;mpName&quot;:&quot;voyager-web&quot;,&quot;displayDensity&quot;:1,&quot;displayWidth&quot;:1920,&quot;displayHeight&quot;:1080}' 'X-Li-Track': '{&quot;clientVersion&quot;:&quot;1.13.5589&quot;,&quot;mpVersion&quot;:&quot;1.13.5589&quot;,&quot;osName&quot;:&quot;web&quot;,&quot;timezoneOffset&quot;:-7,&quot;timezone&quot;:&quot;America/Los_Angeles&quot;,&quot;deviceFormFactor&quot;:&quot;DESKTOP&quot;,&quot;mpName&quot;:&quot;voyager-web&quot;,&quot;displayDensity&quot;:1,&quot;displayWidth&quot;:360,&quot;displayHeight&quot;:800}' } for session in self.sessions] </code></pre> <p>I want to filter the job postings to only those from a specific state - let's say, Michigan. Performing a search myself on the LinkedIn website, I find (from the URL) that the GeoID for Michigan is 103051080. I've tried editing the <code>job_search_link</code> in various ways, for example by adding <code>geoUrn:List(103051080)</code> to the query, but I'm still getting posts from all over the US.</p> <p>How do I edit the query to only get posts from a specific location? I haven't found specific documentation; <a href="https://learn.microsoft.com/en-us/linkedin/shared/integrations/people/profile-api" rel="nofollow noreferrer">this guide</a> to locations in the Profile API looks relevant, but I'm still not sure how to incorporate that into the query.</p>
<python><web-scraping><linkedin-api>
2024-07-09 18:08:53
1
2,095
perigon
78,727,123
3,107,798
Size of pyarrow Table in bytes
<p>I have a basic <a href="https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#" rel="nofollow noreferrer">pyarrow.Table</a>. What's the best way to get its size in bytes?</p> <p>Here is an example table:</p> <pre><code>import pyarrow as pa n_legs = pa.array([2, 4, 5, 100]) animals = pa.array([&quot;Flamingo&quot;, &quot;Horse&quot;, &quot;Brittle stars&quot;, &quot;Centipede&quot;]) names = [&quot;n_legs&quot;, &quot;animals&quot;] table = pa.table([n_legs, animals], names=names) </code></pre>
<python><pyarrow><apache-arrow>
2024-07-09 18:01:07
1
11,245
jjbskir
78,726,982
2,152,371
Coinbase Advance Trading API returns "Error: account not available" when placing orders
<p>For the past while I have been trying to get this crypto trading bot to work. The problem is that while I can get/list accounts when I place an order I get a 400 error.</p> <p>The code:</p> <pre><code>def place_order(symbol, side, amount, price=None): url = 'https://api.coinbase.com/api/v3/brokerage/orders' headers = { 'Authorization': 'Bearer '+encryption_build.build_jwt( 'POST api.coinbase.com/api/v3/brokerage/orders', api_key, api_secret), 'Content-Type': 'application/json' } client_order_id = uuid.uuid4() try: # Order details order_data = { &quot;client_order_id&quot;: str(client_order_id), &quot;product_id&quot;: symbol, &quot;side&quot;: side, &quot;order_configuration&quot;: { &quot;market_market_ioc&quot;: { &quot;base_size&quot;: str(amount) } }, &quot;retail_portfolio_id&quot;: &quot;41864a16-1fd4-50bb-a633-278d9c39f846&quot; } res = requests.post(url=url, headers=headers, data=json.dumps(order_data)) except Exception as e: print(f&quot;An error occurred: {e}&quot;) place_order('BTC-USD', 'SELL', amount=0.01) </code></pre> <p>The response:</p> <pre><code>{ &quot;error&quot;:&quot;INVALID_ARGUMENT&quot;, &quot;error_details&quot;:&quot;account is not available&quot;, &quot;message&quot;:&quot;account is not available&quot; } </code></pre> <p>The API key has view/trade permissions. The List Accounts endpoint returns my accounts on Coinbase. A BTC and fiat account in CAD are present. Both are active. 
Specifying the retail portfolio, or using other libraries to do this didn't make a difference.</p> <p>I expect a 200 status code with the order pending or filled on Coinbase's end.</p> <p>EDIT: Running it through Postman produced the same error.</p> <pre><code>POST https://api.coinbase.com/api/v3/brokerage/orders 400 111 ms POST /api/v3/brokerage/orders HTTP/1.1 Content-Type: application/json Authorization: Bearer eyJhbGciOiJFUzI1NiIsImtpZCI6Im9yZ2FuaXphdGlvbnMvNjUwYjZhYjEtMTk2My00ZTQ1LThhYWMtNmUyYTdkNGU0MmQxL2FwaUtleXMvOTg0ZTE4ZjMtZDEzZi00NjllLTk3ZDItNTNkZDEzYzM1NDJhIiwibm9uY2UiOiJlODE3MmZmZDZjMzQ3ZDdhNzRjZDk3YWFlNmJkZTEzZTYyZWY5ZTk3ZjEyZTBlOTU4NTgwNzMyMzUwZTBlMmFiIiwidHlwIjoiSldUIn0.eyJzdWIiOiJvcmdhbml6YXRpb25zLzY1MGI2YWIxLTE5NjMtNGU0NS04YWFjLTZlMmE3ZDRlNDJkMS9hcGlLZXlzLzk4NGUxOGYzLWQxM2YtNDY5ZS05N2QyLTUzZGQxM2MzNTQyYSIsImlzcyI6ImNkcCIsIm5iZiI6MTcyMDU2MTE2MSwiZXhwIjoxNzIwNTYxMjgxLCJ1cmkiOiJQT1NUIGFwaS5jb2luYmFzZS5jb20vYXBpL3YzL2Jyb2tlcmFnZS9vcmRlcnMifQ.0ECu5nM8XTTLv7caGPxSBmoVUDoqW-z07z4Jz8wDljjIf2rreGjZ-L-ZAzMcDMU8FafSEQeo81FExUkUe2TVww User-Agent: PostmanRuntime/7.39.0 Accept: */* Postman-Token: 9506a059-e326-402b-8a4f-7aefcba39eaf Host: api.coinbase.com Accept-Encoding: gzip, deflate, br Connection: keep-alive Content-Length: 231 Cookie: __cf_bm=wG1twa3RotRq8PB5plk5UPsC8Luskeyddh9famNqHE4-1720561189-1.0.1.1-lWThs07H99ZxAB78K2ZUdY.4aDP0LEg49rLxEqPyrbL2ivpjLLQFKbsSMcieMCZ.eVdyusgBCyVspRfrdP.5LQ; cb_dm=16c8bf71-6e14-4c79-8c05-83251661ce3b {&quot;client_order_id&quot;: &quot;b372a61a-7585-4dd1-bb6c-f5990967412b&quot;, &quot;product_id&quot;: &quot;BTC-USD&quot;, &quot;side&quot;: &quot;BUY&quot;, &quot;order_configuration&quot;: {&quot;market_market_ioc&quot;: {&quot;base_size&quot;: &quot;0.01&quot;}}, &quot;retail_portfolio_id&quot;: &quot;d641518f-dd81-49fc-96d1-ce1e782f697f&quot;} HTTP/1.1 400 Bad Request Date: Tue, 09 Jul 2024 21:40:17 GMT Content-Type: application/json; charset=utf-8 Content-Length: 108 Connection: keep-alive access-control-allow-headers: 
Content-Type, Accept, Second-Factor-Proof-Token, Client-Id, Access-Token, X-Cb-Project-Name, X-Cb-Is-Logged-In, X-Cb-Platform, X-Cb-Session-Uuid, X-Cb-Pagekey, X-Cb-Ujs, Fingerprint-Tokens, X-Cb-Device-Id, X-Cb-Version-Name access-control-allow-methods: GET,POST,DELETE,PUT access-control-allow-private-network: true access-control-expose-headers: access-control-max-age: 7200 Cache-Control: no-store strict-transport-security: max-age=31536000; includeSubDomains; preload trace-id: 7311873497407553797 trace-id: 7311873497407553797 vary: Origin x-content-type-options: nosniff x-dns-prefetch-control: off x-download-options: noopen x-frame-options: SAMEORIGIN x-xss-protection: 1; mode=block x-envoy-upstream-service-time: 51 CF-Cache-Status: DYNAMIC Server: cloudflare CF-RAY: 8a0b78b7bf7caca0-YYZ {&quot;error&quot;:&quot;INVALID_ARGUMENT&quot;,&quot;error_details&quot;:&quot;account is not available&quot;,&quot;message&quot;:&quot;account is not available&quot;} </code></pre> <p>There is very little on this on the internet. Any help would be appreciated. Thank you.</p>
<python><rest><cryptocurrency><http-status-code-400><coinbase-api>
2024-07-09 17:24:42
1
470
Miko
78,726,975
11,628,437
How do I log observations after reset in Stable_Baselines3?
<p>I want to log each <code>observation</code> obtained after <code>reset</code> during training, while using SB3.</p> <p>Based on <a href="https://github.com/DLR-RM/stable-baselines3/issues/137#issuecomment-669862467" rel="nofollow noreferrer">this</a> issue message, I decided to use the <code>Monitor</code> wrapper instead of a callback.</p> <p>However, the <code>Monitor</code> wrapper is giving me an error. Here is my code -</p> <pre class="lang-py prettyprint-override"><code>import gym from stable_baselines3 import PPO from stable_baselines3.common.callbacks import BaseCallback from stable_baselines3.common.monitor import Monitor class CustomMonitor(Monitor): def __init__(self, env, filename=None, allow_early_resets=True, reset_keywords=(), info_keywords=()): super(CustomMonitor, self).__init__(env) self.reset_observations = [] def reset(self, **kwargs): observation = super(CustomMonitor, self).reset(**kwargs) self.reset_observations.append(observation) return observation env = gym.make('LunarLander-v2') env = CustomMonitor(env) model = PPO('MlpPolicy', env, verbose=1) # Train the model model.learn(total_timesteps=1000000) # Save the model model.save(&quot;ppo_lunarlander_mutant&quot;) </code></pre> <p>However, after running it, I am getting the following error -</p> <pre><code>Traceback (most recent call last): File &quot;minimal_example.py&quot;, line 21, in &lt;module&gt; model = PPO('MlpPolicy', env, verbose=1) File &quot;/home/thoma/anaconda3/envs/wp/lib/python3.8/site-packages/stable_baselines3/ppo/ppo.py&quot;, line 109, in __init__ super().__init__( File &quot;/home/thoma/anaconda3/envs/wp/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py&quot;, line 85, in __init__ super().__init__( File &quot;/home/thoma/anaconda3/envs/wp/lib/python3.8/site-packages/stable_baselines3/common/base_class.py&quot;, line 180, in __init__ assert isinstance(self.action_space, supported_action_spaces), ( AssertionError: The algorithm only supports 
(&lt;class 'gymnasium.spaces.box.Box'&gt;, &lt;class 'gymnasium.spaces.discrete.Discrete'&gt;, &lt;class 'gymnasium.spaces.multi_discrete.MultiDiscrete'&gt;, &lt;class 'gymnasium.spaces.multi_binary.MultiBinary'&gt;) as action spaces but Discrete(4) was provided </code></pre>
<python><reinforcement-learning><openai-gym><stable-baselines>
2024-07-09 17:22:57
1
1,851
desert_ranger
78,726,881
11,318,930
holoviews how to set margin around a layout
<p>Using <code>holoviews</code> with the <code>plotly</code> extension, I am rendering a layout with several <code>overlay</code> plots. Each plot has a 4 line title. Unfortunately, the result clips the title from 4 lines to two. It feels like I need to set margins but doing so does not seem to help. How can I make it so that all of the title shows?</p> <p>This is a simplified example code with the result.</p> <pre><code>height = 500 width = 500 x = range(0,21) y = [3.00 + 0.500*i for i in x] data = pd.DataFrame(zip(x,y),columns=['x','y']) lin_reg = hv.Scatter(data).opts(height=height, width=width , color='red' , marker='diamond', size=3 ) title = &quot;Pearson={} &lt;br&gt;Spearman={} &lt;br&gt;Kendall={} &lt;br&gt;Xi={}&quot;.format(1, 2, 3, 4) temp = hv.Scatter(data).opts(title=title, height=height, width=width , margins=(10, 100, 10, 100)) final = temp * lin_reg hv.Layout([final]).cols(1).opts(shared_axes=False, width=800) </code></pre> <p><a href="https://i.sstatic.net/8Or9wQTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8Or9wQTK.png" alt="enter image description here" /></a></p>
<python><plotly><holoviews>
2024-07-09 16:59:31
0
1,287
MikeB2019x
78,726,750
10,721,627
How can I install packages using `uv pip install` without creating a virtual environment in CI/CD pipeline?
<p>I would like to install Python packages in the CI/CD pipeline using the <a href="https://github.com/astral-sh/uv" rel="nofollow noreferrer">uv</a> package manager. I did not create a virtual environment because I would like to use the virtual machine's global Python interpreter. When I run the <code>uv pip install &lt;package&gt;</code> script, I get the following error:</p> <pre class="lang-none prettyprint-override"><code>Requirement already satisfied: pip in /opt/hostedtoolcache/Python/3.12.4/x64/lib/python3.12/site-packages (24.1.1) Collecting pip Downloading pip-24.1.2-py3-none-any.whl.metadata (3.6 kB) Downloading pip-24.1.2-py3-none-any.whl (1.8 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 32.0 MB/s eta 0:00:00 Installing collected packages: pip Attempting uninstall: pip Found existing installation: pip 24.1.1 Uninstalling pip-24.1.1: Successfully uninstalled pip-24.1.1 Successfully installed pip-24.1.2 Collecting uv Downloading uv-0.2.23-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (35 kB) Downloading uv-0.2.23-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.4 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.4/13.4 MB 104.1 MB/s eta 0:00:00 Installing collected packages: uv Successfully installed uv-0.2.23 error: No virtual environment found </code></pre> <p>Is it possible to install it into the global Python environment?</p>
<python><pip><continuous-integration><uv>
2024-07-09 16:28:42
1
2,482
Péter Szilvási
78,726,599
8,554,833
SQL FetchMany in the middle
<p>So I'm fetching data from large tables. I'm using fetchmany to get 100,000 rows at a time. However, during my export, I will occasionally get a network disconnect, or need to stop the process for one reason or another.</p> <p>I was wondering if there is a way to perform a fetchmany after a certain point. For instance, if I have a table with 1 Million rows. The program stopped in the middle and I can see I got the first 500,000 rows. Is there a way I can tell my fetchmany to start after row 500,000?</p> <p>Edit: here's some code of what I'm doing. Moving data from one place to another.</p> <pre><code>curr.execute( ''' SELECT * FROM Database.dbo.Table_Name ; ''' ) progress = 0 while True: rows = curr.fetchmany(ROWS_AT_ONCE) if not rows: break df_temp = pd.DataFrame(rows, columns=cols) df_temp.to_sql(Table_Name, index=False, if_exists='append') </code></pre>
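A cursor forgets its position once the connection drops, so resuming generally means re-issuing the query with a predicate on a unique, ordered column (keyset pagination). A minimal sketch using stdlib `sqlite3` — the table and column names are hypothetical stand-ins for the asker's schema:

```python
import sqlite3

def fetch_from(conn, last_id, batch_size=3):
    """Yield batches of rows whose id is greater than last_id."""
    cur = conn.cursor()
    # ORDER BY a unique, indexed column makes the restart point deterministic.
    cur.execute("SELECT id, payload FROM t WHERE id > ? ORDER BY id", (last_id,))
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        yield rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 11)])

# Pretend the export died after row 5: restart with WHERE id > 5.
resumed = [row for batch in fetch_from(conn, last_id=5) for row in batch]
print(resumed[0])   # (6, 'row6')
```

Persisting the last successfully written id after each `to_sql` call gives a restart point; `OFFSET 500000` also works but rescans the skipped rows on most databases.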
<python><sql>
2024-07-09 15:52:30
0
728
David 54321
78,726,520
11,402,025
Swagger API: should not accept null
<p>I have added the following to the Swagger definition for the API.</p> <pre><code>class BooleanEnum(str, Enum): true = &quot;true&quot; false = &quot;false&quot; @classmethod def _missing_(cls, value): return cls.__members__.get(value.lower(), None) value: BooleanEnum = Query(False, alias=&quot;value&quot;) </code></pre> <p>The Swagger docs still accept a null value and show it as one of the selectable options, even though the default is set to False.</p> <p>How can I limit the selection to just true and false, so the user is required to select one of the options and has no way to submit a null response?</p>
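On the FastAPI side, declaring the parameter without a default (`Query(...)`) is the usual way to make it required; that part is not shown here since it needs FastAPI installed. What can be shown with the stdlib alone is the enum-side rejection: when `_missing_` returns `None` rather than a member, construction raises, so unexpected inputs (including `None`) are refused:

```python
from enum import Enum

class BooleanEnum(str, Enum):
    true = "true"
    false = "false"

    @classmethod
    def _missing_(cls, value):
        # Returning None instead of a member makes Enum construction raise
        # ValueError, so null/unknown inputs are rejected, not coerced.
        if isinstance(value, str):
            return cls.__members__.get(value.lower())
        return None

print(BooleanEnum("TRUE") is BooleanEnum.true)   # True
try:
    BooleanEnum(None)
except ValueError:
    print("null rejected")
```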
<python><swagger><fastapi><swagger-ui><pydantic>
2024-07-09 15:33:27
1
1,712
Tanu
78,726,485
8,964,393
How to convert a Python string to a Python object
<p>I have the following python lists</p> <pre><code>import pandas as pd import numpy as np listOfChars = ['feature1','feature2'] listOfBins = [[0,1,2],[15,20,30]] </code></pre> <p>I need to define each of the elements in <code>listOfChars</code> and assign them the correspondent <code>listOfBins</code> element.</p> <p>For example, I'd like to get:</p> <pre><code>feature1 = [0,1,2] feature2 = [15,20,30] </code></pre> <p>Notice that <code>feature1</code> and <code>feature2</code> have no quotes.</p> <p>And if I print <code>feature1</code> I get:</p> <pre><code>[0,1,2] </code></pre> <p>Does anybody know how to do it in Python?</p>
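For completeness, the idiomatic answer to this pattern is usually a dict keyed by the name rather than dynamically created variables — a short sketch:

```python
listOfChars = ['feature1', 'feature2']
listOfBins = [[0, 1, 2], [15, 20, 30]]

# A dict keyed by feature name avoids creating variables dynamically
# (globals()[name] = bins works but is fragile and hard to debug).
features = dict(zip(listOfChars, listOfBins))

print(features['feature1'])   # [0, 1, 2]
print(features['feature2'])   # [15, 20, 30]
```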
<python><list><function><assign>
2024-07-09 15:25:46
2
1,762
Giampaolo Levorato
78,726,475
674,039
How can I ignore a specific breakpoint interactively?
<p>Consider this script:</p> <pre><code>print(&quot;before loop&quot;) for i in range(100): breakpoint() print(&quot;after loop&quot;) breakpoint() print(&quot;exit&quot;) </code></pre> <p>Short of pressing &quot;c&quot; one hundred times, how can you get past the breakpoint within the loop at L3 and proceed to L5?</p> <p>I've tried the <a href="https://docs.python.org/3/library/pdb.html#pdbcommand-ignore" rel="nofollow noreferrer">ignore</a> command, but I couldn't work it out:</p> <pre class="lang-none prettyprint-override"><code>$ python3 example.py before loop &gt; /tmp/example.py(2)&lt;module&gt;() -&gt; for i in range(100): (Pdb) ignore 0 *** Breakpoint 0 already deleted (Pdb) c &gt; /tmp/example.py(2)&lt;module&gt;() -&gt; for i in range(100): (Pdb) ignore 0 *** Breakpoint 0 already deleted (Pdb) c &gt; /tmp/example.py(2)&lt;module&gt;() -&gt; for i in range(100): </code></pre> <p>I want to execute the remainder of the loop, without tripping the breakpoint on L3 again, and then print &quot;after loop&quot; and break before printing &quot;exit&quot;, remaining in the debugger. The answer must not require exiting the debugger and reentering the runtime, or modifying the source code.</p>
<python><pdb>
2024-07-09 15:24:12
3
367,866
wim
78,726,212
6,104,011
Can you order the execution of all expanded jobs in a single rule?
<p>I have a snakemake pipeline that runs a genetic analysis. The genome has been split into many 'regions'. These regions can run in parallel, and therefore I've used <code>expand()</code>, and they all run as expected.</p> <pre><code>regions = ['r1', 'r2', 'r3', 'r4'] some_pattern = '{region}/file.tsv' rule all: input: expand(some_pattern, region=regions) </code></pre> <p>And there is a subsequent rule:</p> <pre><code>rule process_region: output: some_pattern run: ... </code></pre> <p>The issue is that some regions are much more computationally complex and therefore take more time to run, so I would prefer that they were front-loaded and didn't hold up the pipeline for a long time at the end. Is there a way I can order the execution of the expanded pattern?</p>
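Snakemake schedules ready jobs by priority rather than by the order of `expand()`. One hedged sketch (the `heavy` list and rule names are hypothetical) splits the slow regions into a clone of the rule carrying a higher `priority` directive, so the scheduler starts them first when cores are contended:

```
heavy = ['r2']                           # hypothetical: regions known to be slow
light = [r for r in regions if r not in heavy]

rule process_region_heavy:
    output: '{region}/file.tsv'
    wildcard_constraints: region='|'.join(heavy)
    priority: 100                        # scheduled before lower-priority jobs
    run: ...

rule process_region:
    output: '{region}/file.tsv'
    wildcard_constraints: region='|'.join(light)
    priority: 0
    run: ...
```

The disjoint `wildcard_constraints` keep the two rules from both matching the same output.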
<python><snakemake>
2024-07-09 14:30:20
2
1,064
iquestionshard
78,726,188
8,037,521
Simultaneous matplotlib and open3d visualization
<p>I am looking for a way to simultaneously visualize two simple interfaces that I have for 3d (point cloud) and 2d (photo) data visualization. One is with tkinter and matplotlib, and another with open3d. Separately or sequentially they work without any issues. However, I want user to be able to have them side-by-side since they represent same dataset, and visual comparison might be of use.</p> <p>I somewhat naively tried to do it with threads (I have no significant experience with threads), but it seems that this is not supported(?).</p> <p>After throwing away all code irrelevant to the question (including tkinter, as problem is in matplotlib itself):</p> <pre><code>import threading import numpy as np import matplotlib.pyplot as plt import open3d as o3d # Function to plot random 2D data using matplotlib def plot_random_2d(): plt.ion() # Turn on interactive mode fig, ax = plt.subplots() x = np.random.rand(100) y = np.random.rand(100) ax.scatter(x, y) ax.set_title(&quot;Random 2D Data&quot;) plt.show() # Function to plot random 3D data using open3d def plot_random_3d(): pcd = o3d.geometry.PointCloud() points = np.random.rand(100, 3) pcd.points = o3d.utility.Vector3dVector(points) vis = o3d.visualization.Visualizer() vis.create_window() vis.add_geometry(pcd) vis.run() vis.destroy_window() # Main function to run both plots concurrently def main(): thread_2d = threading.Thread(target=plot_random_2d) thread_3d = threading.Thread(target=plot_random_3d) thread_2d.start() thread_3d.start() thread_2d.join() thread_3d.join() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Dependencies: <code>pip install matplotlib numpy open3d</code></p> <p>Note: <strong>both plots are interactive</strong> in the final use-case, so any proposed solution should support interactivity.</p> <p>Is there any way to solve this issue with the current approach? Or should I trash it completely and look for different library entirely? 
(I thought about Plotly, but not sure how much is supports 2d/3d points selection, and also it seems to be slow for point cloud visualization).</p>
<python><multithreading><matplotlib><open3d>
2024-07-09 14:24:35
0
1,277
Valeria
78,726,092
1,883,154
ValueError: Appended dtypes differ when appending two simple tables with dask
<p>I am using Dask to write multiple very large dataframes to a single parquet dataset in python.</p> <p>The dataframes are simple and all the column types are either floats or strings.</p> <p>I iterate over the dask dataframes, and call <code>to_parquet</code> on each one, overwritting if this is the first dataframe, and appending if it is not.</p> <pre><code>ddf.to_parquet(options.output_prefix, append=True if n&gt; 0 else False, engine=&quot;pyarrow&quot;, write_index=False, write_metadata_file=True, compute=True) </code></pre> <p>On the second call I get the following error:</p> <pre><code>ValueError: Appended dtypes differ. {('IJC_SAMPLE_1', 'object'), ('riExonStart_0base', dtype('float64')), ('riExonEnd', 'float64'), ('PValue', dtype('float64')), ('IncLevelDifference', dtype('float64')), ('chr', 'object'), ('SkipFormLen', dtype('float64')), ('ID', 'float64'), ('SJC_SAMPLE_1', 'object'), ('geneSymbol', 'object'), ('ID.1', 'float64'), ('downstreamEE', dtype('float64')), ('downstreamES', 'float64'), ('strand', 'object'), ('SJC_SAMPLE_2', 'object'), ('upstreamES', dtype('float64')), ('IncFormLen', 'float64'), ('FDR', dtype('float64')), ('IJC_SAMPLE_2', 'object'), ('ID.1', dtype('float64')), ('downstreamES', dtype('float64')), ('FDR', 'float64'), ('IJC_SAMPLE_1', string[pyarrow]), ('GeneID', 'object'), ('IncFormLen', dtype('float64')), ('chr', string[pyarrow]), ('GeneID', string[pyarrow]), ('IncLevel1', 'object'), ('SJC_SAMPLE_1', string[pyarrow]), ('geneSymbol', string[pyarrow]), ('riExonStart_0base', 'float64'), ('PValue', 'float64'), ('upstreamEE', dtype('float64')), ('IncLevelDifference', 'float64'), ('IncLevel1', string[pyarrow]), ('strand', string[pyarrow]), ('SJC_SAMPLE_2', string[pyarrow]), ('IncLevel2', 'object'), ('SkipFormLen', 'float64'), ('riExonEnd', dtype('float64')), ('IncLevel2', string[pyarrow]), ('upstreamEE', 'float64'), ('downstreamEE', 'float64'), ('ID', dtype('float64')), ('IJC_SAMPLE_2', string[pyarrow]), ('upstreamES', 'float64')} 
</code></pre> <p>The dtypes of the first and second dataframes are the same:</p> <pre><code>dtypes for frame 0: ID float64 GeneID string[pyarrow] geneSymbol string[pyarrow] chr string[pyarrow] strand string[pyarrow] riExonStart_0base float64 riExonEnd float64 upstreamES float64 upstreamEE float64 downstreamES float64 downstreamEE float64 ID.1 float64 IJC_SAMPLE_1 string[pyarrow] SJC_SAMPLE_1 string[pyarrow] IJC_SAMPLE_2 string[pyarrow] SJC_SAMPLE_2 string[pyarrow] IncFormLen float64 SkipFormLen float64 PValue float64 FDR float64 IncLevel1 string[pyarrow] IncLevel2 string[pyarrow] IncLevelDifference float64 dtype: object dtypes for frame 1: ID float64 GeneID string[pyarrow] geneSymbol string[pyarrow] chr string[pyarrow] strand string[pyarrow] riExonStart_0base float64 riExonEnd float64 upstreamES float64 upstreamEE float64 downstreamES float64 downstreamEE float64 ID.1 float64 IJC_SAMPLE_1 string[pyarrow] SJC_SAMPLE_1 string[pyarrow] IJC_SAMPLE_2 string[pyarrow] SJC_SAMPLE_2 string[pyarrow] IncFormLen float64 SkipFormLen float64 PValue float64 FDR float64 IncLevel1 string[pyarrow] IncLevel2 string[pyarrow] IncLevelDifference float64 dtype: object </code></pre> <p>It seems that the <code>string[pyarrow]</code> columns are being stored as <code>object</code> the first time round, and then on the second time round dask/pyarrow sees these as different dtypes. I've tried various things around using the schema parameter to <code>to_parquet</code>, including setting all the <code>string[pyarrow]</code> columns to <code>object</code> or <code>np.object_</code> or <code>pa.from_numpy_dtype(np.object_)</code> (unsupported), or enforcing storing them as &quot;string[pyarrow]&quot; (no effect).</p>
<python><dask><parquet>
2024-07-09 14:06:40
0
1,738
Ian Sudbery
78,726,032
3,368,980
Detecting DRM encrypted files
<p>My company uses DRM software (NASCA) for file encryption/protection that is hooked deep into the OS. I'd like to detect whether a file has been encrypted or not, in Python.</p> <p>An easy way would be to check whether the file begins with <code>&lt;## NASCA DRM FILE - VER1.00 ##&gt;</code>, as all of our encrypted files do.</p> <p>However, due to the DRM software rules, when reading a DRM'd file into Python (as in <code>with open(file, &quot;rb&quot;) as handle:</code>), Python may or may not receive the <strong>unencrypted</strong> file. Thus, in these circumstances, I cannot check for the existence of the NASCA header.</p> <p>Is there some other way or trick to read a DRM'd file (in Python) and ensure that it <strong>remains</strong> encrypted?</p>
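Whether the raw bytes can be read at all depends on how NASCA's filter driver whitelists processes, so there is no portable bypass to offer; reading through an unhooked path (e.g. raw device access) is OS-specific and likely policy-restricted. What can be packaged is the header check together with its caveat — a sketch using a fake "encrypted" file as a stand-in:

```python
import os
import tempfile

MAGIC = b"<## NASCA DRM FILE - VER1.00 ##>"

def looks_drm_encrypted(path):
    """Best-effort check: do the bytes this process reads start with the header?

    Caveat: if the DRM filter driver transparently decrypts reads for this
    process, this function sees plaintext and returns False -- so a False
    result does not prove the file is unprotected.
    """
    with open(path, "rb") as fh:
        return fh.read(len(MAGIC)) == MAGIC

# Demo with a fabricated file standing in for a NASCA-protected one.
demo = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(demo, "wb") as fh:
    fh.write(MAGIC + b"ciphertext...")
print(looks_drm_encrypted(demo))   # True
```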
<python><encryption>
2024-07-09 13:56:16
0
441
Abstracted
78,725,967
7,775,166
Translate Pandas groupby plus resample to Polars in Python
<p>I have this code that generates a toy DataFrame (the production df is much more complex):</p> <pre><code>import polars as pl import numpy as np import pandas as pd def create_timeseries_df(num_rows): date_rng = pd.date_range(start='1/1/2020', end='1/01/2021', freq='T') data = { 'date': np.random.choice(date_rng, num_rows), 'category': np.random.choice(['A', 'B', 'C', 'D'], num_rows), 'subcategory': np.random.choice(['X', 'Y', 'Z'], num_rows), 'value': np.random.rand(num_rows) * 100 } df = pd.DataFrame(data) df = df.sort_values('date') df.set_index('date', inplace=True, drop=False) df.index = pd.to_datetime(df.index) return df num_rows = 1000000 # for example df = create_timeseries_df(num_rows) </code></pre> <p>Then I perform these transformations with Pandas.</p> <pre><code>df_pd = df.copy() df_pd = df_pd.groupby(['category', 'subcategory']) df_pd = df_pd.resample('W-MON') df_pd.agg({ 'value': ['sum', 'mean', 'max', 'min'] }).reset_index() </code></pre> <p>But, obviously it is quite slow with Pandas (at least in production). Thus, I'd like to use Polars to speed things up. This is what I have so far:</p> <pre><code>#Convert to Polars DataFrame df_pl = pl.from_pandas(df) #Groupby, resample and aggregate df_pl = df_pl.group_by('category', 'subcategory') df_pl = df_pl.group_by_dynamic('date', every='1w', closed='right') df_pl.agg( pl.col('value').sum().alias('value_sum'), pl.col('value').mean().alias('value_mean'), pl.col('value').max().alias('value_max'), pl.col('value').min().alias('value_min') ) </code></pre> <p>But I get <code>AttributeError: 'GroupBy' object has no attribute 'group_by_dynamic'</code>. Any ideas on how to use <code>groupby</code> followed by <code>resample</code> in Polars?</p>
<python><dataframe><group-by><python-polars>
2024-07-09 13:42:48
1
732
girdeux
78,725,930
6,622,697
How to validate an enum in Django using serializers
<p>I'm using serializers to validate my Post data. How can I validate an Enum?</p> <p>I have a class like this:</p> <pre><code>class DataTypeEnum(StrEnum): FLOAT = 'float' INTEGER = 'integer' BOOLEAN = 'boolean' </code></pre> <p>And my Post input contains</p> <pre><code>{ ... &quot;value&quot; : &lt;datatype&gt; ... } </code></pre> <p>where <code>value</code> should have the values of <code>float</code>, <code>integer</code> or <code>boolean</code></p>
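In DRF the usual tool is `serializers.ChoiceField`, with its choices derived from the enum so the two cannot drift apart. The `ChoiceField` line below is shown as a comment because it requires `djangorestframework`; the derivation itself is stdlib-only (and uses `(str, Enum)` so it also runs on Python < 3.11, where `StrEnum` does not exist):

```python
from enum import Enum

class DataTypeEnum(str, Enum):
    FLOAT = 'float'
    INTEGER = 'integer'
    BOOLEAN = 'boolean'

# (value, label) pairs in the shape Django/DRF choices expect:
DATA_TYPE_CHOICES = [(member.value, member.name.title()) for member in DataTypeEnum]
print(DATA_TYPE_CHOICES)

# In the serializer (requires djangorestframework):
# value = serializers.ChoiceField(choices=DATA_TYPE_CHOICES)
```

`ChoiceField` then rejects any posted value outside the list during `is_valid()`.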
<python><django><validation><django-rest-framework>
2024-07-09 13:36:20
1
1,348
Peter Kronenberg
78,725,809
19,067,218
How to hide the __init__.py in every directory
<p>I have a repo filled with lots of nested directories; in almost everyone, I have a <code>__init__.py</code> file. The problem is it makes it harder to visually search directories for the files, and it was noticeable when the directory only contained 1 Python code file alongside <code>__init__.py</code></p> <p>So my question is:</p> <ul> <li>Is there any way to hide the file in IDE (PyCharm in my case) to not show this file without affecting the functionality?</li> </ul> <p>I have tried to exclude it from PyCharm from <code>General &gt; File Types</code> but that breaks the code because it is not simply hiding the files from the codebase.</p> <p>for reference:</p> <p>I am new to the project and not so experienced, so hiding the init files will help me to focus only on the code files and folder construction design, and learn about the code that I'm going to deal with, for now.</p>
<python><configuration><pycharm><ide><init>
2024-07-09 13:10:03
1
344
llRub3Nll
78,725,722
2,767,937
Passing Additional Information in LangChain abatch Calls
<p>Given an <code>abatch</code> call for a LangChain chain, I need to pass additional information, beyond just the content, to the function so that this information is available in the callback, specifically in the <code>on_chat_model_start</code> method.</p> <p>Here is the code:</p> <pre class="lang-py prettyprint-override"><code> from langchain_openai import ChatOpenAI from langchain_core.prompts import ChatPromptTemplate class ModelAsyncHandler(AsyncCallbackHandler): def __init__(self): super().__init__() async def on_chat_model_start( self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], run_id: UUID, **kwargs: Any) -&gt; None: await asyncio.sleep(0.3) # How to get here additional data from the abatch call? model_async_handler = ModelAsyncHandler() model = ChatOpenAI( api_key=&quot;API_KEY&quot;, model=&quot;MY_MODEL&quot;, ) prompt = ChatPromptTemplate.from_template(&quot;A sample template&quot;) llm_chain = prompt | model responses = await llm_chain.abatch( inputs=[{&quot;topic&quot;: &quot;MY_CONTENT_1&quot;}, {&quot;topic&quot;: &quot;MY_CONTENT_2&quot;}], config={'callbacks': [model_async_handler]}, ) </code></pre> <p>This additional information could include the origin of the documents (such as file names). I couldn't find another way to include this information other than through the <code>inputs</code> parameter of the <code>abatch</code> function. The same issue applies to <code>batch</code>, <code>invoke</code>, or similar calls.</p> <p>Does anyone else have the same problem and a smart solution for it?</p>
<python><langchain><large-language-model>
2024-07-09 12:53:53
0
629
TantrixRobotBoy
78,725,603
20,920,790
How to get requirements from my Airflow python environment
<p>How do I perform pip freeze &gt; requirements.txt for an Airflow environment? Or get the list of installed packages another way?</p> <p>Airflow v. 2.8.2, installed with Docker.</p>
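For the Docker install, the shell route is typically `docker exec <container> pip freeze > requirements.txt` (container name is yours). From inside Python — e.g. a throwaway task, or `docker exec ... python` — the stdlib can produce the same list without shelling out:

```python
from importlib.metadata import distributions

def freeze():
    """pip-freeze-style 'name==version' lines for the running interpreter."""
    return sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in distributions()
        if dist.metadata["Name"]   # skip distributions with broken metadata
    )

for line in freeze():
    print(line)
```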
<python><airflow>
2024-07-09 12:29:41
1
402
John Doe
78,725,262
4,277,485
Convert TSV file data to a dataframe, which can be pushed to a database
<p>We have TSV files which holds IOT data, want to convert to table like structure using pandas. I have worked on TSV data, similar to given below, were the logics goes like</p> <ol> <li>read the file</li> <li>Add new column names</li> <li>do transpose</li> <li>reindex</li> </ol> <p>This is bit challenging as explained, col1 to col3 is dimension data and remaining is fact data</p> <p><strong>tsv file data looks as below</strong></p> <p>col1 qweqweq<br> col2 345435<br> col3 01/01/2024 35:08:09<br> col4 1<br> col5 0<br> col4 0<br> col5 0<br> col4 1<br> col5 1<br> col4 0<br> col5 1<br></p> <p><strong>Want to project as table like structure</strong></p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: left;">col1</th> <th style="text-align: center;">col2</th> <th style="text-align: right;">col3</th> <th style="text-align: center;">col4</th> <th style="text-align: right;">col5</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">qweqweq</td> <td style="text-align: center;">345435</td> <td style="text-align: right;">01/01/2024 35:08:09</td> <td style="text-align: center;">1</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">qweqweq</td> <td style="text-align: center;">345435</td> <td style="text-align: right;">01/01/2024 35:08:09</td> <td style="text-align: center;">0</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: left;">qweqweq</td> <td style="text-align: center;">345435</td> <td style="text-align: right;">01/01/2024 35:08:09</td> <td style="text-align: center;">1</td> <td style="text-align: right;">1</td> </tr> <tr> <td style="text-align: left;">qweqweq</td> <td style="text-align: center;">345435</td> <td style="text-align: right;">01/01/2024 35:08:09</td> <td style="text-align: center;">0</td> <td style="text-align: right;">1</td> </tr> </tbody> </table></div> <p>col4 and col5 can differ in each IOT file. How to achieve with python, pandas?</p>
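One stdlib sketch of the parsing step (a shortened two-record version of the sample; whenever a fact key repeats, the accumulated facts plus the current dimension values are flushed as a row — the pandas/transpose machinery then becomes a plain `pd.DataFrame(rows)`):

```python
import csv
import io

# Stand-in for the file contents; in practice: open(path, newline='')
raw = (
    "col1\tqweqweq\n"
    "col2\t345435\n"
    "col3\t01/01/2024 35:08:09\n"
    "col4\t1\n"
    "col5\t0\n"
    "col4\t0\n"
    "col5\t0\n"
)

DIM_KEYS = {"col1", "col2", "col3"}  # dimension columns, repeated on every row

def tsv_to_rows(text):
    dims, rows, fact = {}, [], {}
    for record in csv.reader(io.StringIO(text), delimiter="\t"):
        if not record:
            continue
        key, value = record
        if key in DIM_KEYS:
            dims[key] = value
        elif key in fact:              # a repeating fact key starts a new row
            rows.append({**dims, **fact})
            fact = {key: value}
        else:
            fact[key] = value
    if fact:
        rows.append({**dims, **fact})
    return rows

rows = tsv_to_rows(raw)
print(rows)
```

This also copes with fact columns that differ between files, since each row dict carries whatever keys that file used.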
<python><pandas><csv><file>
2024-07-09 11:16:24
1
438
Kavya shree
78,725,115
1,824,064
Is there a way to toggle develop mode for path dependencies during install?
<p>I have a project that depends upon local libraries, which are installed as path dependencies in edit mode:</p> <pre><code>mylib = {path=&quot;../../libs/mylib&quot;, develop=true} </code></pre> <p>This is what I want during development so that library changes are immediately reflected in the project. What is the proper way to install path dependencies for production? Ideally, I'd like to be able to issue some type of flag or environment variable that forces <code>develop</code> mode off for all path dependencies, so that they get installed into the project's virtual environment. Barring that, I am stuck using the <code>--no-directory</code> flag during install, then manually copying over all libraries, which is not ideal.</p>
<python><python-poetry>
2024-07-09 10:39:37
1
1,998
amnesia
78,724,819
1,838,076
How can I redirect stdout and stderr of subprocess in python to same file without losing the order
<p>I have a simple script to mimic a program writing to stdout and stderr streams interleaved.</p> <pre><code>import sys import time for i in range(5): print(int(time.time()), &quot;This is Stdout&quot;) print(int(time.time()), &quot;Stderr&quot;, file=sys.stderr) time.sleep(1) </code></pre> <p>When I run such a program with Python <code>subprocess.Popen</code> I am losing the order of stderr and stdout.</p> <pre><code>import subprocess file = open('./stdout1.log', 'w', encoding='utf-8') subprocess.Popen('./stderrout.py', stdout=file, stderr=file) </code></pre> <p>Output</p> <pre><code>1720517080 Stderr 1720517081 Stderr 1720517082 Stderr 1720517083 Stderr 1720517084 Stderr 1720517080 This is Stdout 1720517081 This is Stdout 1720517082 This is Stdout 1720517083 This is Stdout 1720517084 This is Stdout </code></pre> <p>I understand that the stderr gets flushed quicker or something of that sort, but how do I preserve the order?</p> <p>I also tried using <code>subprocess.STDOUT</code> for stderr and played with <code>text</code> and <code>bufsize</code> arguments with no luck. Also tried adding <code>buffering</code> to the file handle. 
<strong>The output is identical in all cases.</strong></p> <p>Here is the full program</p> <pre><code>import subprocess file = open('./stdout1.log', 'w', encoding='utf-8') ; subprocess.Popen('./stderrout.py', stdout=file, stderr=file) file = open('./stdout2.log', 'w', encoding='utf-8') ; subprocess.Popen('./stderrout.py', stdout=file, stderr=file, text=True, bufsize=0) file = open('./stdout3.log', 'w', encoding='utf-8') ; subprocess.Popen('./stderrout.py', stdout=file, stderr=file, text=True, bufsize=1) file = open('./stdout4.log', 'w', encoding='utf-8') ; subprocess.Popen('./stderrout.py', stdout=file, stderr=subprocess.STDOUT) file = open('./stdout5.log', 'w', encoding='utf-8') ; subprocess.Popen('./stderrout.py', stdout=file, stderr=subprocess.STDOUT, text=True, bufsize=0) file = open('./stdout6.log', 'w', encoding='utf-8') ; subprocess.Popen('./stderrout.py', stdout=file, stderr=subprocess.STDOUT, text=True, bufsize=1) file = open('./stdout1b.log', 'w', encoding='utf-8', buffering=1) ; subprocess.Popen('./stderrout.py', stdout=file, stderr=file) file = open('./stdout2b.log', 'w', encoding='utf-8', buffering=1) ; subprocess.Popen('./stderrout.py', stdout=file, stderr=file, text=True, bufsize=0) file = open('./stdout3b.log', 'w', encoding='utf-8', buffering=1) ; subprocess.Popen('./stderrout.py', stdout=file, stderr=file, text=True, bufsize=1) file = open('./stdout4b.log', 'w', encoding='utf-8', buffering=1) ; subprocess.Popen('./stderrout.py', stdout=file, stderr=subprocess.STDOUT) file = open('./stdout5b.log', 'w', encoding='utf-8', buffering=1) ; subprocess.Popen('./stderrout.py', stdout=file, stderr=subprocess.STDOUT, text=True, bufsize=0) file = open('./stdout6b.log', 'w', encoding='utf-8', buffering=1) ; subprocess.Popen('./stderrout.py', stdout=file, stderr=subprocess.STDOUT, text=True, bufsize=1) </code></pre> <p>Is there a way to redirect both streams to the same file without losing the order?</p> <p>If not what is the significance of 
<code>bufsize</code> argument?</p>
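The reordering happens inside the *child*: when its stdout is redirected to a file it becomes block-buffered, while stderr stays unbuffered, so stdout lines arrive late. The parent's `bufsize` cannot fix this — it only sizes the pipe objects the parent creates for `subprocess.PIPE`; with a plain file object the child writes to that descriptor directly. Disabling the child's buffering (`python -u`, or `PYTHONUNBUFFERED=1` in its environment) preserves the order, sketched here with an inline child script:

```python
import os
import subprocess
import sys
import tempfile

# Child writing to both streams; '-u' disables its internal buffering so each
# print reaches the shared file in the order it was made.
child = (
    "import sys\n"
    "for i in range(3):\n"
    "    print(i, 'stdout')\n"
    "    print(i, 'stderr', file=sys.stderr)\n"
)

log_path = os.path.join(tempfile.mkdtemp(), "combined.log")
with open(log_path, "w", encoding="utf-8") as log:
    subprocess.run([sys.executable, "-u", "-c", child], stdout=log, stderr=log)

with open(log_path, encoding="utf-8") as fh:
    print(fh.read())
```

For a non-Python child the equivalent is whatever disables its stdio buffering (e.g. `stdbuf -o0` on Linux).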
<python><subprocess>
2024-07-09 09:35:42
1
1,622
Krishna
78,724,742
956,539
Speed-up literal_eval in a DataFrame apply
<p>I have a <code>pandas</code> DataFrame with the following columns:</p> <pre><code>id | value | somedate ------------------------------ 1 | [10, 13, 14] | 2024-06-01 2 | [5, 6, 7] | 2024-07-01 3 | [1, 2, 3] | 2024-06-01 </code></pre> <p>I'm doing the following transformation to parse the <code>value</code> column and explode the DataFrame:</p> <pre><code>import ast import pandas as pd data = pd.DataFrame({&quot;id&quot;: [1, 2, 3], &quot;value&quot;: [&quot;[10, 13, 14]&quot;, &quot;[5, 6, 7]&quot;, &quot;[1, 2, 3]&quot;], &quot;somedate&quot;: [&quot;2024-06-01&quot;, &quot;2024-07-01&quot;, &quot;2024-06-01&quot;]}) data[&quot;parsed_value&quot;] = data[&quot;value&quot;].apply(lambda x: ast.literal_eval(x)) data = data.explode(column=&quot;parsed_value&quot;) </code></pre> <p>This works well on small DataFrames, but becomes incredibly slow at very large DataFrames (tens to hundreds of millions of rows).</p> <p>Obviously, there are options like parallel processing, switching to other packages like <code>dask</code> or <code>polars</code>, but I just wanted to first make sure that I'm not missing some obvious solution within the current tech stack.</p>
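One low-effort option within the same stack: when the strings happen to be valid JSON (lists of numbers qualify; tuples, `None`, or single-quoted strings do not), `json.loads` parses the same values and is typically several times faster than `ast.literal_eval`:

```python
import ast
import json
import timeit

s = "[10, 13, 14]"

# Same parse result for this subset of inputs.
assert ast.literal_eval(s) == json.loads(s)

t_ast = timeit.timeit(lambda: ast.literal_eval(s), number=20_000)
t_json = timeit.timeit(lambda: json.loads(s), number=20_000)
print(f"literal_eval: {t_ast:.4f}s  json.loads: {t_json:.4f}s")
```

Applied to the DataFrame, that is `data["value"] = data["value"].map(json.loads)` before the `explode`.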
<python><pandas><dataframe>
2024-07-09 09:20:35
2
2,891
abudis
78,724,741
447,426
How to format docs within :param <name>: in reStructuredText markup?
<p>I am trying to document certain <code>:param</code> entries and want to give examples like</p> <pre><code> :param mapper_matrix: lookup table with columns ref_col, ref_col_2 and value.\n |**Example:** | [(&quot;s1&quot;, &quot;p1&quot;, &quot;state1&quot;), | (&quot;s1&quot;, &quot;p2&quot;, special), | (&quot;s2&quot;, &quot;p1&quot;, &quot;state3&quot;), | (&quot;s2&quot;, &quot;p2&quot;, &quot;state4&quot;)] Every missing mapping will result in a null value in the new column. </code></pre> <p>My main problem is line breaks and paragraphs. As you can see, I already added <code>|</code> and <code>\n</code>; all of it is ignored. If I add a new line, everything after it is simply not rendered.</p> <p>For rendering I use IntelliJ set to reStructuredText. I even tried to indent by <a href="https://devguide.python.org/documentation/markup/" rel="nofollow noreferrer">3 spaces as the documentation tells</a>.</p> <p>So is there a way to apply formatting within <code>:param</code> or similar parts?</p> <p>I also tried to put the example within the normal doc above the &quot;:param&quot;s, but it is also not rendered correctly (I tried <code>::</code> and <code>.. code-block:: python</code>). Can someone please provide an example of how to put such a list of tuples, or code in general, into a pydoc with reStructuredText? Perhaps the IntelliJ renderer has a bug?</p>
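In a reStructuredText field list, everything belonging to a `:param:` must be indented under the field name; a literal block introduced by `::` then preserves line breaks. A sketch of the shape (whether it renders depends on the IDE's reST support — the function name is hypothetical):

```python
def remap(mapper_matrix):
    """Remap states.

    :param mapper_matrix: lookup table with columns ref_col, ref_col_2
        and value. Example::

            [("s1", "p1", "state1"),
             ("s1", "p2", "state2"),
             ("s2", "p1", "state3"),
             ("s2", "p2", "state4")]

        Every missing mapping will result in a null value in the new column.
    """


print(remap.__doc__)
```

The key points: continuation lines are indented relative to `:param`, the literal block is separated by blank lines, and its body is indented one level further.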
<python><restructuredtext><pydoc>
2024-07-09 09:20:32
1
13,125
dermoritz
78,724,606
1,982,032
How can I calculate an American put option's vega and rho?
<p>The QuantLib's version in my os:</p> <pre><code>import QuantLib as ql ql.__version__ '1.34' </code></pre> <p>All the arguments related to the put option:</p> <pre><code>settlementDate = ql.Date(11, ql.July, 2019) maturity = ql.Date(19, ql.July, 2019) stock = 0.28 strike = 0.5 riskFreeRate = 0.05 volatility = 1.7 </code></pre> <p>The other part of price valuation:</p> <pre><code>calendar = ql.UnitedStates(ql.UnitedStates.NYSE) dayCounter = ql.Actual365Fixed(ql.Actual365Fixed.Standard) ql.Settings.instance().evaluationDate = todayDate AmericanExercise(earliestDate, latestDate, payoffAtExpiry=False) AmericanExercise = ql.AmericanExercise(todayDate,maturity) optionType = ql.Option.Put payoff = ql.PlainVanillaPayoff(type=optionType, strike=strike) AmericanOption = ql.VanillaOption(payoff=payoff,exercise=AmericanExercise) underlying = ql.SimpleQuote(stock) underlyingH = ql.QuoteHandle(underlying) flatRiskFreeTS = ql.YieldTermStructureHandle( ql.FlatForward( settlementDate, riskFreeRate, dayCounter)) flatVolTS = ql.BlackVolTermStructureHandle( ql.BlackConstantVol( settlementDate, calendar, volatility, dayCounter)) bsProcess = ql.BlackScholesProcess( s0=underlyingH, riskFreeTS=flatRiskFreeTS, volTS=flatVolTS) steps = 200 binomial_engine = ql.BinomialVanillaEngine(bsProcess, &quot;crr&quot;, steps) AmericanOption.setPricingEngine(binomial_engine) </code></pre> <p>The put option's price:</p> <pre><code>print(&quot;Option value =&quot;, AmericanOption.NPV()) Option value = 0.22013426651607249 </code></pre> <p>Other Greeks(Delta,Gamma,theta)value:</p> <pre><code>print(&quot;Delta value =&quot;, AmericanOption.delta()) Delta value = -0.988975537620728 print(&quot;Gamma value =&quot;, AmericanOption.gamma()) Gamma value = 0.5635976654806573 print(&quot;Theta value =&quot;, AmericanOption.theta()) Theta value = -0.03899648147441449 </code></pre> <p>It can't get American put option's vega,rho:</p> <pre><code>print(&quot;Theta value =&quot;, AmericanOption.theta()) Theta value = 
-0.03899648147441449 &gt;&gt;&gt; print(&quot;Vega value =&quot;, AmericanOption.vega()) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/debian/mydoc/lib/python3.11/site-packages/QuantLib/QuantLib.py&quot;, line 17245, in vega return _QuantLib.OneAssetOption_vega(self) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: vega not provided &gt;&gt;&gt; print(&quot;Rho value =&quot;, AmericanOption.rho()) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/home/debian/mydoc/lib/python3.11/site-packages/QuantLib/QuantLib.py&quot;, line 17249, in rho return _QuantLib.OneAssetOption_rho(self) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: rho not provided </code></pre> <p>The AmericanOption contains <code>vega</code> and <code>rho</code>:</p> <pre><code>'vega' and 'rho' in dir(AmericanOption) True </code></pre> <p>Why are 'vega' and 'rho' not provided at runtime?<br /> How can I calculate the American put option's vega and rho?<br /> Are there other Python libs that can work for the American BS model?</p>
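Tree-based engines like `BinomialVanillaEngine` only expose the Greeks that fall out of the lattice (delta, gamma, theta); vega and rho are usually obtained by bumping the input and repricing. A generic central-difference sketch with a toy pricer standing in for the QuantLib repricing step (for QuantLib, `price_fn` would set the bumped value on the relevant quote/handle and return `AmericanOption.NPV()`):

```python
def numeric_greek(price_fn, x, h=1e-4):
    """Central finite difference d(price)/dx around x."""
    return (price_fn(x + h) - price_fn(x - h)) / (2.0 * h)

# Toy pricer, linear in volatility, so the numeric "vega" should recover
# the slope 0.5 almost exactly.
toy_price = lambda sigma: 0.1 + 0.5 * sigma

vega = numeric_greek(toy_price, 1.7)
print(vega)
```

The same helper with the risk-free rate as the bumped input gives a numeric rho.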
<python><quantlib>
2024-07-09 08:55:07
1
355
showkey
78,724,558
5,269,892
Pandas alignment error during elementwise comparison
<p>When checking element-wise equality of multiple columns of a dataframe against a single column, pandas raises a <code>ValueError: Operands are not aligned. Do 'left, right = left.align(right, axis=1, copy=False)' before operating.</code>.</p> <pre><code>import pandas as pd df1 = pd.DataFrame({ 'A': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'B': [11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 'C': [21, 22, 23, 24, 25, 26, 27, 28, 29, 30] }) df2 = pd.DataFrame({ 'X': [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000] }) cond1 = (df1['A'] == df2['X']) cond2 = (df1[['A','B','C']] == 5) cond3 = (df1[['A','B','C']] == df2['X']) # raises an alignment error cond3 = (df1[['A','B','C']] == df1['A']) # raises an alignment error </code></pre> <p>Why does pandas raise this error? I would have assumed that pandas performs an element-wise comparison without issues, either aligning on the existing index of the columns (which is the same between the dataframes) or on an ad-hoc-assigned new index (from 0 to N-1).</p> <p>Is there a way to avoid the <code>left.align()</code> suggestion without converting to numpy arrays as shown below?</p> <pre><code>cond3 = (df1[['A','B','C']].values == df2['X'].values[:,None]) </code></pre>
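The error arises because `==` with a Series aligns the Series' *index* against the frame's *column labels* (row-wise broadcasting). The comparison methods take an `axis` argument that flips this: `eq(other, axis=0)` broadcasts the Series down the rows, aligned on the shared index, with no numpy conversion needed. A small sketch:

```python
import pandas as pd

df1 = pd.DataFrame({
    'A': [1, 2, 3],
    'B': [11, 2, 13],
    'C': [1, 22, 3],
})

# Compare every column against column A, row by row.
cond = df1[['A', 'B', 'C']].eq(df1['A'], axis=0)
print(cond)
```

The same call works against a column from another frame (`df1[['A','B','C']].eq(df2['X'], axis=0)`) as long as the indexes line up.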
<python><pandas><comparison>
2024-07-09 08:42:38
2
1,314
silence_of_the_lambdas
78,724,345
6,930,340
Extending the Polars API for both DataFrame and LazyFrame
<p>I am extending the <code>polars</code> DataFrame and LazyFrame as described in the <a href="https://docs.pola.rs/api/python/stable/reference/api.html" rel="nofollow noreferrer">docs</a>.</p> <p>Let's go with their <code>split</code> example for <code>pl.DataFrame</code>. Let's say I also wanted to extend the <code>pl.LazyFrame</code> with the same <code>split</code> function.</p> <p>The code would look pretty much the same, with the exception of the decorator (<code>@pl.api.register_dataframe_namespace(&quot;split&quot;)</code> vs. <code>@pl.api.register_lazyframe_namespace(&quot;split&quot;)</code>), the input argument (<code>df</code> vs. <code>ldf</code>) and the return type (<code>list[pl.DataFrame]</code> vs. <code>list[pl.LazyFrame]</code>).</p> <p>This pretty much violates the DRY mantra.</p> <p>What is best-practice to extend the API on multiple fronts (DataFrame, LazyFrame, Series)?</p> <p>To put it differently, how can I apply an extension to both a <code>pl.DataFrame</code> and a <code>pl.LazyFrame</code>? And can this extension share the same namespace?</p>
<python><python-polars>
2024-07-09 07:52:16
1
5,167
Andi
78,724,330
893,254
VS Code + pytest - How to move the whole project to a subdirectory?
<p>I have an existing Python project which looks like this:</p> <pre><code>.venv/ ... example_package/ __init__.py tests/ test_example_package.py </code></pre> <p>Imagine that I worked on this project for a while and then decided to add a frontend to some kind of web service using React.</p> <p>I created a new directory called <code>React</code>, and a new directory called <code>Python</code> to organize my code.</p> <p>The project structure now looks like this:</p> <pre><code>Python/ .venv/ ... example_package/ __init__.py tests/ test_example_package.py React/ ... </code></pre> <p>But there is a problem. My tests no longer work. So I ran the VS Code &quot;configure tests&quot; command and set the subdirectory where the tests are to be found to <code>Python</code>. But this did not fix the whole issue, and the <code>pytest</code> discovery process fails with the following error:</p> <pre><code>ModuleNotFoundError: No module named 'example_package' </code></pre> <p>It seems relatively obvious as to what is happening. <code>pytest</code> is being run by VS Code, with a current working directory set to <code>.</code> instead of <code>./Python/</code>.</p> <p>How can I fix this?</p>
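One possible fix (an assumption on my part, not from the question): give pytest an ini file inside the `Python` directory so that, when VS Code's test discovery is pointed there, pytest both anchors its rootdir and puts the package directory on `sys.path`:

```ini
; Python/pytest.ini — hypothetical placement, inside the Python/ directory
[pytest]
pythonpath = .
testpaths = tests
```

The built-in `pythonpath` ini option requires pytest 7.0+; on older versions the `pytest-pythonpath` plugin or an editable install of the package would be needed instead.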
<python><visual-studio-code><pytest>
2024-07-09 07:49:47
1
18,579
user2138149
78,724,252
1,838,076
Why concatenation can't handle Nones in categorical columns when the DF can hold it in the first place
<p>I have 2 DFs with <code>object</code> type columns, which work fine with concatenation.</p> <h3>Code</h3> <pre class="lang-py prettyprint-override"><code>df1 = pd.DataFrame({'A': ['A0', 'A1'], 'B': ['B0', None]}) df2 = pd.DataFrame({'A': ['A4', 'A5'], 'B': [None, None]}) print(&quot;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Original DFs&quot;) print(df1) print(df2) print(&quot;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Original DTypes&quot;) print(df1.dtypes) print(df2.dtypes) </code></pre> <h3>Corresponding Output</h3> <pre><code>&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Original DFs A B 0 A0 B0 1 A1 None A B 0 A4 None 1 A5 None &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Original DTypes A object B object dtype: object A object B object dtype: object &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Concatenation 1 - No Warning A B 0 A0 B0 1 A1 None 0 A4 None 1 A5 None </code></pre> <p>But if I do the same with <code>categorical</code> columns, I get a <code>FutureWarning</code></p> <h3>Code with Categorical data type</h3> <pre class="lang-py prettyprint-override"><code>print(&quot;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Categorical DTypes&quot;) df1 = df1.astype('category') df2 = df2.astype('category') print(df1.dtypes) print(df2.dtypes) print(&quot;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Concatenation 2 - Gives warning&quot;) print(pd.concat([df1, df2])) </code></pre> <h3>Corresponding Output</h3> <pre><code>&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Categorical DTypes A category B category dtype: object A category B category dtype: object &gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Concatenation 2 - Gives warning bla.py:37: FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. 
In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation. print(pd.concat([df1, df2])) A B 0 A0 B0 1 A1 NaN 0 A4 NaN 1 A5 NaN </code></pre> <p><code>df2</code> had a <code>NaN</code> to start with and there is no problem with it, but when I try to concatenate with all <code>NaN</code> columns, I get the warning. The suggestion is to remove such entries altogether. Why is this the case? Why does concatenation seem to have issues with <code>NaNs</code>?</p> <p>Here is the full code</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd def bla(): '''The main function, that can also be called fromother scripts as an API''' df1 = pd.DataFrame({'A': ['A0', 'A1'], 'B': ['B0', None]}) df2 = pd.DataFrame({'A': ['A4', 'A5'], 'B': [None, None]}) print(pd.__version__) print(&quot;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Original DFs&quot;) print(df1) print(df2) print(&quot;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Original DTypes&quot;) print(df1.dtypes) print(df2.dtypes) print(&quot;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Concatenation 1 - No Warning&quot;) print(pd.concat([df1, df2])) print(&quot;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Categorical DTypes&quot;) df1 = df1.astype('category') df2 = df2.astype('category') print(df1.dtypes) print(df2.dtypes) print(&quot;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt;&gt; Concatenation 2 - Gives warning&quot;) print(pd.concat([df1, df2])) if __name__ == '__main__': bla() </code></pre> <p>PS: Pandas version is 2.2.2</p>
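As a sketch of a workaround (my assumption, not from the question): the deprecation concerns all-NA *object* columns influencing the result dtype. Giving the all-NA column the same categorical dtype as its partner before concatenating keeps the result dtype categorical, which in my understanding is what avoids the warning path:

```python
import pandas as pd

df1 = pd.DataFrame({'A': ['A0', 'A1'], 'B': ['B0', None]}).astype('category')
df2 = pd.DataFrame({'A': ['A4', 'A5'], 'B': [None, None]})

# Reuse df1['B']'s categorical dtype (including its categories) for the
# all-NA column, so it is an all-NA *categorical* rather than all-NA object.
# Note: 'A' gets its own categories; reusing df1['A'].dtype would turn
# 'A4'/'A5' into NaN because they are not among df1's categories.
df2 = df2.astype({'A': 'category', 'B': df1['B'].dtype})

out = pd.concat([df1, df2])
print(out.dtypes)
```

Column `A` still falls back to object dtype after concat (the two sides have different category sets), exactly as in the question's own output.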
<python><pandas><dataframe><categorical-data>
2024-07-09 07:32:42
1
1,622
Krishna
78,724,178
3,581,875
URL Fragment Encoding - Dollar? [AWS]
<p>What is the standard way to encode the fragment portion of a URL? (after the <code>#</code> symbol)</p> <p>I noticed that on AWS CloudWatch for example, percent characters (<code>%</code>) in the fragment portion are encoded as '$25', so you'd see something like this:</p> <pre><code>.../home?region=us-west-1#logsV2:log-groups/log-group/$252Faws$252Flambda$252Ft... </code></pre> <p>The portion after <code>/log-group/</code> should decode back to <code>/aws/lambda/</code>, which means some kind of double quoting, i.e.: <code>/</code> -&gt; <code>%2F</code> -&gt; <code>$252F</code></p> <p>Is that some standard convention, or is it specific to AWS? And if it is standard, how does one accomplish this functionality with Python? (currently I have <code>urllib.parse.quote(fragment, safe = '').replace('%', '$25')</code> which seems a bit hacky).</p>
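For the Python side, the double quoting itself is plain `urllib.parse.quote` applied twice; the `%` → `$` substitution appears to be an AWS-console-specific convention rather than any standard, so the final `replace` is hard to avoid (this is my reading, not an authoritative statement). A minimal sketch reproducing the observed pattern:

```python
from urllib.parse import quote

path = "/aws/lambda/test"
once = quote(path, safe="")          # '%2Faws%2Flambda%2Ftest'
twice = quote(once, safe="")         # '%252Faws%252Flambda%252Ftest'
aws_style = twice.replace("%", "$")  # '$252Faws$252Flambda$252Ftest'
print(aws_style)
```

This matches the `$252F` segments seen in the CloudWatch URL, so the existing `quote(...).replace('%', '$25')` one-liner seems to be essentially what the console does.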
<python><amazon-web-services><url><encoding>
2024-07-09 07:16:07
0
1,152
giladrv
78,723,898
10,426,490
How to determine the text responsible for Google Gemini `block_reason: OTHER`?
<p>I've spent a long time setting up Google Gemini. Now that I've:</p> <ul> <li>Set up a Google Workspace</li> <li>Connected a GCP Billing Account (If you don't do this, you'll receive <code>Status 429 exceeds quota</code>(?))</li> <li>Etc.</li> </ul> <p>Anyway, I can now send API requests. These requests contain very large text files. Now I'm getting the error <code>block_reason: OTHER</code>.</p> <p>I set the <code>safety_settings</code> as shown below, but I'm still receiving the block...</p> <p>How do I determine what text in this 200-500k token text is the culprit?</p> <p>Here is the function I'm using:</p> <pre><code>def start_gemini_chat(api_key, system_message, user_message): genai.configure(api_key=api_key) safety_settings = [ {&quot;category&quot;: &quot;HARM_CATEGORY_HARASSMENT&quot;, &quot;threshold&quot;: &quot;BLOCK_NONE&quot;}, {&quot;category&quot;: &quot;HARM_CATEGORY_HATE_SPEECH&quot;, &quot;threshold&quot;: &quot;BLOCK_NONE&quot;}, {&quot;category&quot;: &quot;HARM_CATEGORY_SEXUALLY_EXPLICIT&quot;, &quot;threshold&quot;: &quot;BLOCK_NONE&quot;}, {&quot;category&quot;: &quot;HARM_CATEGORY_DANGEROUS_CONTENT&quot;, &quot;threshold&quot;: &quot;BLOCK_NONE&quot;}] generation_config = { &quot;temperature&quot;: 0, &quot;max_output_tokens&quot;: 8192 } model = genai.GenerativeModel( model_name=&quot;gemini-1.5-pro-latest&quot;, generation_config=generation_config, system_instruction=system_message, safety_settings=safety_settings ) chat_session = model.start_chat(history=[]) response = chat_session.send_message(user_message) return response.text </code></pre>
<python><google-gemini><google-generativeai>
2024-07-09 06:07:35
1
2,046
ericOnline
78,723,649
20,087,266
How to set the default colour of a PyQtGraph ImageView's Histogram LUT?
<p>PyQtGraph's <a href="https://pyqtgraph.readthedocs.io/en/latest/api_reference/widgets/imageview.html" rel="nofollow noreferrer">ImageView</a> widget includes a histogram with:</p> <ol> <li>a moveable region that defines dark and light levels. as well as</li> <li>the ability to edit a colour gradient.</li> </ol> <p>These two features allow users to edit/generate a colour lookup table (LUT) to pseudo-colour the displayed image.</p> <p>By default, the colour gradient is set to black and white as shown below:</p> <p><a href="https://i.sstatic.net/1ruCzq3Lm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1ruCzq3Lm.png" alt="Default PyQtGraph ImageView histogram LUT" /></a></p> <p>How can one set a default gradient to some other than the default black and white?</p> <p><strong>Minimal Reproducible Example Code</strong></p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pyqtgraph as pg data = np.array( [ [0.0, 0.0, 0.5, 2.5, 2.5, 3.5], [2.0, 2.0, 1.5, 0.5, 2.5, 3.5], [2.0, 2.0, 2.0, 2.5, 0.5, 1.5], [3.0, 3.0, 2.5, 3.0, 1.5, 0.5], [3.0, 3.0, 3.0, 4.5, 1.5, 1.5], ] ) # Get QApplication instance if it exists, else create new one app = pg.Qt.mkQApp(&quot;Example Cost Matrix Visualisation&quot;) # Create and configure plot plot = pg.PlotItem() plot.setTitle(&quot;Cost Matrix&quot;) plot.setLabel(axis=&quot;top&quot;, text=&quot;Y Signal&quot;) plot.setLabel(axis=&quot;left&quot;, text=&quot;X Signal&quot;) # Create image item and view im = pg.ImageItem(data, axisOrder=&quot;row-major&quot;) imv = pg.ImageView(imageItem=im, view=plot) imv.show() imv.setHistogramLabel(&quot;Cost Matrix Histogram&quot;) app.exec() </code></pre>
<python><image><user-interface><pyqt><pyqtgraph>
2024-07-09 04:11:32
1
4,086
Kyle F. Hartzenberg
78,723,533
1,942,868
push reload or invoke javascript function from server
<p>I have a Django + React + uWSGI application.</p> <p>uWSGI accesses the database.</p> <p>Now I want the Django (React) web application to reload when the database changes.</p> <p>For example:</p> <ol> <li>A user opens the web application.</li> <li>An administrator runs a MySQL command on the server: <code>Insert into mytable(name) values(&quot;data&quot;)</code></li> <li>The user's application is reloaded, or some JavaScript function is called.</li> </ol> <p>It's something like a push reload from the server.</p> <p>My idea is that the web application polls the database every second to check whether the value has changed.</p> <p>However, that requires hitting the API every second.</p> <p>(And at best it still has a one-second delay, or requires too many API calls.)</p> <p>Is this good practice?</p> <p>Is there a better method for this purpose?</p>
<javascript><python><django>
2024-07-09 03:05:50
0
12,599
whitebear
78,723,313
1,636,016
Variables set inside the rcfile are NOT accessible through Python
<p>I've a shell script and a python script organized as follows (<strong>CAN NOT</strong> change the directory structure):</p> <pre class="lang-none prettyprint-override"><code>~ ├── a.py ├── b └── c.sh </code></pre> <p>The MWEs are as follows:</p> <p><code>c.sh</code></p> <pre class="lang-bash prettyprint-override"><code>#!/bin/sh rcfile=$(mktemp) export TEST1='hello' cat &lt;&lt;-EOF &gt; ${rcfile} export TEST2='world' EOF ${SHELL} --rcfile ${rcfile} rm -f ${rcfile} </code></pre> <p><code>a.py</code></p> <pre class="lang-py prettyprint-override"><code>from subprocess import Popen, PIPE p = Popen('./c.sh', cwd='./b', stdin=PIPE) p.communicate(b'echo $TEST1\necho $TEST2\nexit') </code></pre> <p>When I execute the shell script <code>c.sh</code>, it shows both the variables as expected.</p> <p><a href="https://i.sstatic.net/EDFLtwlZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDFLtwlZ.png" alt="enter image description here" /></a></p> <p>However, when executed from the python script <code>a.py</code>, the inner variable i.e. <code>TEST2</code> is not set.</p> <p><a href="https://i.sstatic.net/XItWRrKc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XItWRrKc.png" alt="enter image description here" /></a></p> <p>I <strong>CAN NOT</strong> modify the shell script.</p> <p>Is there a way to access both the variables <code>TEST1</code> and <code>TEST2</code> from the python script?</p>
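A sketch that reproduces the likely cause (my diagnosis, hedged): `bash` only reads the `--rcfile` when it considers itself interactive, and with stdin connected to a pipe it starts non-interactively, so `TEST2` is never set. This assumes a POSIX system with `bash` available:

```python
import os
import subprocess
import tempfile

# Write a throwaway rcfile, mirroring what c.sh does with mktemp.
with tempfile.NamedTemporaryFile("w", suffix=".rc", delete=False) as f:
    f.write("export TEST2=world\n")
    rcfile = f.name

env = dict(os.environ, TEST1="hello")

# stdin is a pipe here, so bash is non-interactive and skips the rcfile.
p = subprocess.run(
    ["bash", "--rcfile", rcfile],
    input="echo T1=$TEST1 T2=$TEST2\nexit\n",
    capture_output=True, text=True, env=env,
)
os.unlink(rcfile)
print(p.stdout)  # TEST1 survives via the environment; TEST2 is empty
```

Forcing interactive mode (`bash -i --rcfile ...`) makes the rcfile load even with piped stdin; since `c.sh` cannot be modified, one hypothetical workaround is pointing the `SHELL` environment variable at a wrapper script that adds `-i` — an idea, not something tested against the original setup.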
<python><subprocess><sh><subshell>
2024-07-09 01:09:52
2
563
dibyendu
78,723,116
3,486,684
Python + Polars: efficiently looking up a value in another DataFrame: replace or join?
<p><strong>Note to potential editors:</strong> please leave both &quot;Python&quot; and &quot;Polars&quot; in the question title, because:</p> <ul> <li>there are many questions about looking up values in another dataframe in the <code>pandas</code> context;</li> <li>not everyone (e.g. search engines, or beginners) knows how to use the <code>[python-polars]</code> tag to drill down to polars specific questions.</li> </ul> <hr /> <p><a href="https://stackoverflow.com/questions/76681016/python-pandas-x-polars-values-mapping-lookup-value">Python - Pandas x Polars - Values mapping (Lookup value)</a> discusses a solution involving <code>join</code> or <code>replace</code>. What are the benefits of using <code>replace</code> over join, or vice versa?</p>
<python><python-polars>
2024-07-08 22:51:42
0
4,654
bzm3r
78,722,956
169,252
Python: await to read from socket OR shut down on event
<p>Given this function:</p> <pre><code>async def read_data(self, stream: UStreams) -&gt; None: while True: read_bytes = await stream.read(MAX) #handle the bytes </code></pre> <p>This however will keep the function running forever, of course.</p> <p>I'd like to have this function do this, but also shutdown (therefore exit the function) in case an event occurs:</p> <pre><code>async def read_data(self, stream: UStreams, evt: trio.Event) -&gt; None: while True: read_bytes = await stream.read(MAX) || await evt.wait() # &lt;- how do I do this? #handle the bytes or exit </code></pre> <p>I can't simply put <code>evt.wait()</code> as first statement in the loop, as the stream would never be read (and the other way around also doesn't work, as then the event would never be awaited if there is no stuff to read). <code>gather</code> type of code also doesn't help, I don't want to wait for both: only to the first one occurring (in <code>go</code> this would be solved with a <code>select</code> statement).</p>
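In trio itself this is usually expressed with a nursery plus a cancel scope; since I cannot verify a trio snippet here, below is a sketch of the same "first of two awaits wins" pattern using stdlib asyncio (an analogue, not the trio answer), with an `asyncio.Queue` standing in for the stream:

```python
import asyncio

async def read_data(queue: asyncio.Queue, evt: asyncio.Event) -> str:
    """Read items until the shutdown event fires, whichever comes first."""
    while True:
        read_task = asyncio.ensure_future(queue.get())
        stop_task = asyncio.ensure_future(evt.wait())
        done, pending = await asyncio.wait(
            {read_task, stop_task}, return_when=asyncio.FIRST_COMPLETED
        )
        # Cancel whichever branch lost the race and reap it cleanly.
        for task in pending:
            task.cancel()
        await asyncio.gather(*pending, return_exceptions=True)
        if stop_task in done:
            return "shut down"
        data = read_task.result()
        # ... handle data ...

async def main():
    queue, evt = asyncio.Queue(), asyncio.Event()
    evt.set()  # request shutdown immediately, just for the demo
    return await read_data(queue, evt)

print(asyncio.run(main()))
```

The trio equivalent of this race is typically a nursery where one child reads the stream and another waits on the event, with the winner cancelling the nursery's cancel scope.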
<python><asynchronous><async-await><python-trio>
2024-07-08 21:45:21
1
6,390
unsafe_where_true
78,722,955
2,478,983
child item is not using parent coordinates
<p>I have a problem with the PySide coordinate system. I create a custom item (NodeItem) that inherits from QGraphicsRectItem and is positioned in the scene. This custom item has a child (LabelItem) that inherits from QGraphicsTextItem. When I add the parent item (NodeItem) to the scene, it is positioned at 150,150 in scene coordinates, but the child LabelItem is positioned in scene coordinates, not in its parent's (NodeItem) coordinates.</p> <p>As you see in the image, the &quot;R1&quot; LabelItem is at the origin of the scene, not the origin of the NodeItem (the rectangle).</p> <pre><code>from PySide6.QtWidgets import QGraphicsRectItem, QGraphicsTextItem, QGraphicsScene, QGraphicsView, QApplication from PySide6.QtGui import QColor, QPen, QBrush from PySide6.QtCore import QRectF from PySide6 import QtWidgets class LabelItem(QtWidgets.QGraphicsTextItem): def __init__(self, parent=None): super().__init__(parent) self.setFlags(QtWidgets.QGraphicsItem.ItemIsMovable | QtWidgets.QGraphicsItem.ItemIsSelectable) self.setZValue(2) class NodeItem(QGraphicsRectItem): def __init__(self, x, y, width, height): super().__init__(x, y, width, height) pen = QPen(QColor(&quot;#000000&quot;)) brush = QBrush(QColor(&quot;#FFD700&quot;)) self.setPen(pen) self.setBrush(brush) self.setFlag(QtWidgets.QGraphicsItem.ItemIsMovable) self.setFlag(QtWidgets.QGraphicsItem.ItemIsSelectable) self.setFlag(QtWidgets.QGraphicsItem.ItemIsFocusable) self.setFlag(QtWidgets.QGraphicsItem.ItemSendsGeometryChanges) # LabelItem created as a child of this item self.label = LabelItem(self) #Child item self.label.setPlainText(&quot;R1&quot;) self.update_label_position() print(self.label.parentItem()) def update_label_position(self): self.label.setPos(0, 0) #Child item positioned at 0,0 class MyGraphicsView(QGraphicsView): def __init__(self): super().__init__() self.setScene(QGraphicsScene(self)) self.setMouseTracking(True) # Node item creation and positioned at 150,150 of the scene node_item = NodeItem(150, 150, 100, 50) 
self.scene().addItem(node_item) if __name__ == &quot;__main__&quot;: app = QApplication([]) view = MyGraphicsView() view.setSceneRect(0, 0, 400, 300) view.show() app.exec() </code></pre> <p><a href="https://i.postimg.cc/ydZWGMrW/pyside.png" rel="nofollow noreferrer"><img src="https://i.postimg.cc/ydZWGMrW/pyside.png" alt="enter image description here" /></a></p>
<python><pyside6>
2024-07-08 21:44:57
1
370
Carlitos_30
78,722,940
17,556,733
How do I create and freely delete parts of a black layer with python and tkinter
<p>Using python and the tkinter library, I want to create a program that will open an image, then draw a completely black layer on top of it, and using mouse click + movement, delete parts of that black layer (think of it as a fog-of-war in a table top game) or restore parts of the black layer.</p> <p>When I want to delete part of the black layer, I basically draw a transparent shape, and when I restore the black layer, I just draw a fully black shape. Here is my current implementation:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk from tkinter import filedialog from PIL import Image, ImageTk, ImageDraw class ImageEraserApp: def __init__(self, root): self.root = root self.root.title(&quot;Image Eraser App&quot;) # Open image file dialog self.image_path = filedialog.askopenfilename( filetypes=[(&quot;Image files&quot;, &quot;*.jpg *.jpeg *.png *.bmp *.gif&quot;)] ) if not self.image_path: self.root.quit() # Load original image and create initial resized image self.original_image = Image.open(self.image_path) self.resized_image = self.original_image.copy() self.tk_image = ImageTk.PhotoImage(self.resized_image) # Create canvas to display image self.canvas = tk.Canvas(self.root) self.canvas.pack(fill=tk.BOTH, expand=True) self.canvas_image = self.canvas.create_image(0, 0, anchor=tk.NW, image=self.tk_image) # Initialize black layer for erasing and painting self.black_layer = Image.new('RGBA', self.original_image.size, (0, 0, 0, 255)) self.layer_draw = ImageDraw.Draw(self.black_layer) self.tk_black_layer = ImageTk.PhotoImage(self.black_layer) # Eraser size self.eraser_size = 20 # Bind mouse events self.canvas.bind(&quot;&lt;B1-Motion&gt;&quot;, self.erase) self.canvas.bind(&quot;&lt;B3-Motion&gt;&quot;, self.paint_black) self.canvas.bind(&quot;&lt;MouseWheel&gt;&quot;, self.adjust_eraser_size) # Display initial black layer self.black_layer_id = self.canvas.create_image(0, 0, anchor=tk.NW, image=self.tk_black_layer) def adjust_eraser_size(self, 
event): # Adjust the eraser size based on mouse wheel scroll if event.delta &gt; 0: self.eraser_size = min(100, self.eraser_size + 20) elif event.delta &lt; 0: self.eraser_size = max(20, self.eraser_size - 20) def erase(self, event): # Draw on the black layer to &quot;erase&quot; it self.layer_draw.ellipse([ (event.x - self.eraser_size, event.y - self.eraser_size), (event.x + self.eraser_size, event.y + self.eraser_size) ], fill=(0, 0, 0, 0)) # Update the black layer on the canvas self.tk_black_layer = ImageTk.PhotoImage(self.black_layer) self.canvas.itemconfig(self.black_layer_id, image=self.tk_black_layer) def paint_black(self, event): # Draw on the black layer to &quot;paint&quot; it self.layer_draw.ellipse([ (event.x - self.eraser_size, event.y - self.eraser_size), (event.x + self.eraser_size, event.y + self.eraser_size) ], fill=(0, 0, 0, 255)) # Update the black layer on the canvas self.tk_black_layer = ImageTk.PhotoImage(self.black_layer) self.canvas.itemconfig(self.black_layer_id, image=self.tk_black_layer) if __name__ == &quot;__main__&quot;: root = tk.Tk() app = ImageEraserApp(root) root.mainloop() </code></pre> <p>Now as the number of the shapes I draw grows, the app becomes more and more unresponsive (if I try to do some resizing operations/zoom-in/zoom-out on top of this, it gets even worse (easiest to notice when loading a very large image)</p> <p>My question is how can I optimize this? Is the way I am trying to implement it even the correct way to solve this kind of problem?</p>
<python><tkinter><optimization><canvas><layer>
2024-07-08 21:40:13
1
495
TheMemeMachine
78,722,914
14,336,726
An ordinary numpy array produces a type error?
<p>Could someone explain why I get this warning</p> <pre><code>&lt;&gt;:18: SyntaxWarning: list indices must be integers or slices, not tuple; perhaps you missed a comma? </code></pre> <p>and this error message</p> <pre><code>TypeError Traceback (most recent call last) Cell In[62], line 18 12 weights.extend([w / d] * d) 14 return maut_method(D, weights, ['min'] * 54 + ['max'] * 10 + ['min'] * 1 + ['max'] * 1 + ['min'] * 1, ['exp'] * 67, graph=True) #poistin tästä 0.01 joka on varmaan step funktion koko 17 dataset = np.array([ ---&gt; 18 [1, 1, 1, 1, 1, 2, 2, 2, 3, 59.4, 4.13, 4, 4, 2, 3, 4, 4, 1, 1, 1, 1, 1, 1, 1550, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.43, 187, 1.87e-05, 0.0698, 0.149, 1, 0.0398, 1, 1, 1, 1, 1, 1, 315, 6030, 1, 2910, 0.00134, 1, 183, 27.2, 30.6, 3, 48, 3, 23, 14, 3.3, 1, 3.65, 0.025, 2, 0.0] #SnPb 19 [1, 1, 1, 1, 1, 2, 2, 2, 3, 57.5, 5.04, 3, 3, 1, 3, 3, 3, 1, 1, 1, 1, 1, 1, 198, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 11, 357, 4.13e-05, 0.551, 1.47, 1, 0.12, 1, 1, 1, 1, 1, 1, 768, 8760, 1, 5770, 0.0214, 1, 218, 28, 32, 3.4, 22.1, 3, 27, 17, 13, 5, 8.936, 0.304, 2, 0.0] #SAC water 20 [1, 1, 1, 1, 1, 2, 2, 2, 3, 57.5, 5.04, 3, 3, 1, 3, 3, 3, 1, 1, 1, 1, 1, 1, 198, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 11, 357, 4.13e-05, 0.551, 1.47, 1, 0.12, 1, 1, 1, 1, 1, 1, 768, 8760, 1, 5770, 0.0214, 1, 218, 20, 30, 6.2, 26.1, 3, 27, 17, 13, 5, 8.936, 0.304, 2, 0.0] #SAC air 21 [1, 1, 1, 1, 1, 2, 2, 2, 3, 54.9, 2.58, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 8.7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.53, 216, 1.78e-05, 0.0706, 0.199, 1, 0.0364, 1, 1, 1, 1, 1, 1, 312, 5830, 1, 3400, 0.00133, 1, 227, 15, 19, 5.4, 20.8, 2, 21.5, 18.5, 8.6, 2.1, 1.125, 0.41, 2, 0.0] #SnCu water 22 [1, 1, 1, 1, 1, 2, 2, 2, 3, 54.9, 2.58, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 8.7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.53, 216, 1.78e-05, 0.0706, 0.199, 1, 0.0364, 1, 1, 1, 1, 1, 1, 312, 5830, 1, 3400, 0.00133, 1, 227, 16, 22, 9.1, 41.2, 2, 21.5, 18.5, 8.6, 2.1, 1.125, 0.41, 2, 0.0] #SnCu air 23 ]) 27 print(get_ranks(dataset)) 
TypeError: list indices must be integers or slices, not tuple </code></pre> <p>I think my data is a proper numpy array containing integers and decimal numbers. So what is the problem?</p> <p>Here is my code</p> <pre><code>import pyDecision import numpy as np from pyDecision.algorithm import maut_method def get_ranks(D): Ws, Ds = [13.75, 20.83, 18.75, 18.33, 14.58, 13.75], [5, 18, 10, 20, 11, 3] weights = [] for w, d in zip(Ws, Ds): weights.extend([w / d] * d) return maut_method(D, weights, ['min'] * 54 + ['max'] * 10 + ['min'] * 1 + ['max'] * 1 + ['min'] * 1, ['exp'] * 67, graph=True) dataset = np.array([ [1, 1, 1, 1, 1, 2, 2, 2, 3, 59.4, 4.13, 4, 4, 2, 3, 4, 4, 1, 1, 1, 1, 1, 1, 1550, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.43, 187, 1.87e-05, 0.0698, 0.149, 1, 0.0398, 1, 1, 1, 1, 1, 1, 315, 6030, 1, 2910, 0.00134, 1, 183, 27.2, 30.6, 3, 48, 3, 23, 14, 3.3, 1, 3.65, 0.025, 2, 0.0] #a1 [1, 1, 1, 1, 1, 2, 2, 2, 3, 57.5, 5.04, 3, 3, 1, 3, 3, 3, 1, 1, 1, 1, 1, 1, 198, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 11, 357, 4.13e-05, 0.551, 1.47, 1, 0.12, 1, 1, 1, 1, 1, 1, 768, 8760, 1, 5770, 0.0214, 1, 218, 28, 32, 3.4, 22.1, 3, 27, 17, 13, 5, 8.936, 0.304, 2, 0.0] #a2 [1, 1, 1, 1, 1, 2, 2, 2, 3, 57.5, 5.04, 3, 3, 1, 3, 3, 3, 1, 1, 1, 1, 1, 1, 198, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 11, 357, 4.13e-05, 0.551, 1.47, 1, 0.12, 1, 1, 1, 1, 1, 1, 768, 8760, 1, 5770, 0.0214, 1, 218, 20, 30, 6.2, 26.1, 3, 27, 17, 13, 5, 8.936, 0.304, 2, 0.0] #a3 [1, 1, 1, 1, 1, 2, 2, 2, 3, 54.9, 2.58, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 8.7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.53, 216, 1.78e-05, 0.0706, 0.199, 1, 0.0364, 1, 1, 1, 1, 1, 1, 312, 5830, 1, 3400, 0.00133, 1, 227, 15, 19, 5.4, 20.8, 2, 21.5, 18.5, 8.6, 2.1, 1.125, 0.41, 2, 0.0] #a4 [1, 1, 1, 1, 1, 2, 2, 2, 3, 54.9, 2.58, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 8.7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.53, 216, 1.78e-05, 0.0706, 0.199, 1, 0.0364, 1, 1, 1, 1, 1, 1, 312, 5830, 1, 3400, 0.00133, 1, 227, 16, 22, 9.1, 41.2, 2, 21.5, 18.5, 8.6, 2.1, 1.125, 0.41, 2, 0.0] #a5 ]) 
print(get_ranks(dataset)) </code></pre>
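The traceback is not about NumPy at all: the `dataset` literal is missing commas between the row lists (note the `#a1`, `#a2`, ... comments with no comma before them), so Python parses two adjacent bracketed lists as an indexing expression — exactly what the `SyntaxWarning`'s "perhaps you missed a comma?" hint is pointing at. A minimal reproduction of the same failure:

```python
import numpy as np

message = None
try:
    data = [
        [1, 2, 3]   # <-- missing comma after this row
        [4, 5, 6],  # Python parses this as [1, 2, 3][4, 5, 6]
    ]
except TypeError as exc:
    message = str(exc)

print(message)  # list indices must be integers or slices, not tuple

# With the commas in place, np.array builds the 2-D array as intended.
fixed = np.array([
    [1, 2, 3],
    [4, 5, 6],
])
print(fixed.shape)  # (2, 3)
```

Adding a comma after each `... , 0.0]` row in `dataset` should make the original script parse and run.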
<python><arrays><numpy>
2024-07-08 21:30:32
1
480
Espejito
78,722,890
14,301,911
Where can I find an exhaustive list of actions for spark?
<p>I want to know exactly what I can do in Spark without triggering the computation of the Spark RDD/DataFrame.</p> <p>It's my understanding that only actions trigger the execution of the transformations in order to produce a DataFrame. The problem is that I'm unable to find a comprehensive list of Spark actions.</p> <p>The <a href="https://spark.apache.org/docs/latest/rdd-programming-guide.html#actions" rel="noreferrer">Spark documentation</a> lists some actions, but it's not exhaustive. For example, <code>show</code> is not there, but it is considered an action.</p> <ul> <li>Where can I find a full list of actions?</li> <li>Can I assume that all methods listed <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.html#pyspark.sql.DataFrame" rel="noreferrer">here</a> are also actions?</li> </ul>
<python><dataframe><apache-spark><pyspark>
2024-07-08 21:20:53
2
504
HappilyCoding
78,722,888
9,097,114
Python sendgrid - Concat/Add a string to HTML_CONTENT
<p>I am trying to add a string to HTML content, and the output I am getting is not as required.<br /> The value of <code>str1</code> should be substituted in place of <code>str1</code> in 'Attached is the report of your data - + str1'.</p> <pre><code>str1= 'StringToBeAdded' html_content1 = ''' Hi All, &lt;br&gt; &lt;br&gt; Greetings from xyz! &lt;br&gt; Attached is the report of your data - + str1 &lt;br&gt; &lt;br&gt; Note: This is an automated email. In case of any queries, please reach us at &lt;br&gt; Regards, &lt;br&gt; abc ''' message = Mail( from_email='from@example.com', to_emails='to@example.com', subject='Mail', html_content=html_content1) </code></pre> <p>Expected Output</p> <p><strong>Attached is the report of your data - StringToBeAdded</strong></p> <p>Thanks in advance</p>
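A likely fix (my sketch; the surrounding `Mail(...)` call is from the question): Python does not substitute variables inside a plain `'''...'''` literal, so `+ str1` stays as literal text. An f-string (or `.format`, or concatenation) interpolates the value:

```python
str1 = 'StringToBeAdded'

# The f prefix makes {str1} interpolate at runtime; a plain triple-quoted
# literal would keep the characters "+ str1" verbatim.
html_content1 = f'''
Hi All, <br> <br>
Greetings from xyz! <br>
Attached is the report of your data - {str1} <br> <br>
Note: This is an automated email. In case of any queries, please reach us at <br>
Regards, <br>
abc
'''

expected = f'Attached is the report of your data - {str1}'
print(expected in html_content1)  # True
```

The same `html_content1` can then be passed to `Mail(..., html_content=html_content1)` unchanged.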
<python><sendgrid>
2024-07-08 21:19:59
1
523
san1
78,722,881
9,363,181
Collect values as dictionary in parent column using Pyspark
<p>I have code and data like below:</p> <pre><code>df_renamed = df.withColumnRenamed(&quot;id&quot;,&quot;steps.id&quot;).withColumnRenamed(&quot;status_1&quot;,&quot;steps.status&quot;).withColumnRenamed(&quot;severity&quot;,&quot;steps.error.severity&quot;) df_renamed.show(truncate=False) +----------+-------+------+-----------------------+------------+--------------------+ |apiVersion|expired|status|steps.id |steps.status|steps.error.severity| +----------+-------+------+-----------------------+------------+--------------------+ |2 |false |200 |mexican-curp-validation|200 |null | +----------+-------+------+-----------------------+------------+--------------------+ </code></pre> <p>Now I want to transform this data as below:</p> <pre><code>+----------+-------+------+-----------------------+------------+--------------------+ |apiVersion|expired|status|steps | +----------+-------+------+-----------------------+------------+--------------------+ |2 |false |200 |{&quot;id&quot;:&quot;mexican-curp-validation&quot;, &quot;status&quot;:200 ,&quot;error&quot;:{&quot;severity&quot;:null}} | +----------+-------+------+-----------------------+------------+--------------------+ </code></pre> <p>where it can be seen that based on the dot notation of the column names, JSON struct is formed in the data. 
For this reason, I used below code:</p> <pre><code>cols_list = [name for name in df_renamed.columns if &quot;.&quot; in name] df_new = df_renamed.withColumn(&quot;steps&quot;,F.to_json(F.struct(*cols_list))) df_new.show() </code></pre> <p>But it gives below error even though the column is present:</p> <pre><code> df_new = df_renamed.withColumn(&quot;steps&quot;,F.to_json(F.struct(*cols_list))) File &quot;/Users/../IdeaProjects/pocs/venvsd/lib/python3.9/site-packages/pyspark/sql/dataframe.py&quot;, line 3036, in withColumn return DataFrame(self._jdf.withColumn(colName, col._jc), self.sparkSession) File &quot;/Users/../IdeaProjects/pocs/venvsd/lib/python3.9/site-packages/py4j/java_gateway.py&quot;, line 1321, in __call__ return_value = get_return_value( File &quot;/Users/../IdeaProjects/pocs/venvsd/lib/python3.9/site-packages/pyspark/sql/utils.py&quot;, line 196, in deco raise converted from None pyspark.sql.utils.AnalysisException: Column 'steps.id' does not exist. Did you mean one of the following? [steps.id, expired, status, steps.status, apiVersion, steps.error.severity]; 'Project [apiVersion#17, expired#18, status#19, steps.id#29, steps.status#37, steps.error.severity#44, to_json(struct(id, 'steps.id, status, 'steps.status, severity, 'steps.error.severity), Some(GMT+05:30)) AS steps#82] +- Project [apiVersion#17, expired#18, status#19, steps.id#29, steps.status#37, severity#22 AS steps.error.severity#44] +- Project [apiVersion#17, expired#18, status#19, steps.id#29, status_1#21 AS steps.status#37, severity#22] +- Project [apiVersion#17, expired#18, status#19, id#20 AS steps.id#29, status_1#21, severity#22] +- Relation [apiVersion#17,expired#18,status#19,id#20,status_1#21,severity#22] csv </code></pre> <p>Where am I going wrong? Any help is much appreciated.</p>
<python><python-3.x><dictionary><apache-spark><pyspark>
2024-07-08 21:16:46
2
645
RushHour
78,722,524
2,700,344
pyvis - How to make nodes IDs selectable (to copy into buffer) in browser
<p>This code generates HTML files with sub-graphs. The problem is that it is not possible to copy a node ID from the HTML page in order to paste it into some other tool.</p> <pre><code>G = nx.from_pandas_edgelist(df, source='fromKey', target='toKey', create_using=nx.DiGraph) i = 0 for h in nx.weakly_connected_components(G): i += 1 #Create subgraph g=nx.subgraph(G, h) #----Draw using pyvis net = Network(height='1000px', width='1000px', directed=True#, #heading='Subgraph '+str(i) ) net.from_nx(g) net.repulsion(central_gravity=0.1) net.set_edge_smooth('dynamic') net.show('subgraph_'+str(i)+'.html', notebook=False) </code></pre> <p>The graph looks good, but it is not possible to select a particular node ID.</p> <p>Say there is a node with id=100040003003405 - is there any property that allows copying the ID with the mouse cursor, so it can be used as a parameter for an ad-hoc query rather than typed manually?</p>
<python><networkx><pyvis>
2024-07-08 19:03:40
1
38,483
leftjoin