| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,475,150
| 8,512,262
|
How can I update an attribute that's added to a method via a decorator?
|
<p>I've created a decorator that wraps tkinter's <code>after()</code> method to make looping functions easier (i.e., having a function call itself periodically).</p>
<pre><code>import tkinter as tk
from functools import wraps

# decorator
def after_loop(interval: int):
    """Loop the decorated method at the specified `interval` using `after()`"""
    def _after_loop(function):
        @wraps(function)
        def wrapper(self, *args, **kwargs):
            value = function(self, *args, **kwargs)
            self.after(
                interval,
                lambda s=self, a=args, k=kwargs: wrapper(s, *a, **k)
            )
            return value
        return wrapper
    return _after_loop
</code></pre>
<p>This works perfectly fine as written above...</p>
<pre><code># example tkinter application
class App(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title('After Loop Test')
        self.label_val = 0
        self.label = tk.Label(self, text=self.label_val)
        self.label.pack()
        self.label_update()  # begin updates

    @after_loop(1000)  # update this label every second
    def label_update(self):
        self.label_val += 1
        self.label.config(text=self.label_val)

if __name__ == '__main__':
    app = App()
    app.mainloop()
</code></pre>
<p>however, I'm struggling to implement an abstraction around <code>after_cancel()</code> so the user (I) can cancel the loop if desired. I've tried the following:</p>
<pre><code>def after_loop(interval: int):
    """Loop the decorated method at the specified `interval` using `after()`"""
    def _after_loop(function):
        function.loop_cancel = False  # add attribute to decorated function
        @wraps(function)
        def wrapper(self, *args, **kwargs):
            value = function(self, *args, **kwargs)
            _loop_id = self.after(
                interval,
                lambda s=self, a=args, k=kwargs: wrapper(s, *a, **k)
            )
            if wrapper.loop_cancel:  # break out of the 'after' loop when set
                self.after_cancel(_loop_id)
            return value
        wrapper.loop_cancel = False  # add attribute to the wrapper function
        return wrapper
    return _after_loop
</code></pre>
<p>But when I attempt to write to the <code>loop_cancel</code> attribute flag, I'm greeted with <code>AttributeError: 'method' object has no attribute 'loop_cancel'</code>, even though it shows up in <code>label_update</code>'s <code>__dict__</code> and I can read the attribute's value just fine with <code>print(label_update.loop_cancel)</code></p>
<pre><code>@after_loop(1000)  # update this label every second
def label_update(self):
    self.label_val += 1
    self.label.config(text=self.label_val)
    # test reading the 'loop_cancel' attribute (works as expected)
    print(self.label_update.loop_cancel)  # => False
    print(self.label_update.__dict__)  # => {'loop_cancel': False, ...
    # attempt to set the 'loop_cancel' flag and break out of the loop (no dice!)
    if self.label_val >= 5:
        self.label_update.loop_cancel = True  # raises AttributeError
        # setattr(self.label_update, 'loop_cancel', True) has the same problem
</code></pre>
<p>Are attributes added via a decorator inherently read-only, or am I implementing the decorator incorrectly somehow? Any help is much appreciated, as ever.</p>
<hr>
<p><em>Edit</em> - to say that if I <em>don't</em> add the <code>loop_cancel</code> attribute to the function immediately inside <code>_after_loop</code>, the <code>AttributeError</code> is raised immediately upon trying to access <code>self.label_update.loop_cancel</code>, which isn't entirely surprising.</p>
<pre><code>def _after_loop(function):
    function.loop_cancel = False  # add attribute to decorated function
</code></pre>
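The `AttributeError` can be reproduced without tkinter: bound-method objects delegate attribute reads to the underlying function but reject attribute writes. A minimal sketch (the class name `C` is purely illustrative), including one workaround via `__func__`:

```python
class C:
    def m(self):
        pass

C.m.loop_cancel = False  # setting on the function itself: works
obj = C()

# reading through the bound method delegates to the function
print(obj.m.loop_cancel)  # => False

# writing through the bound method raises AttributeError
try:
    obj.m.loop_cancel = True
except AttributeError as e:
    print(e)

# writing to the underlying function object works
obj.m.__func__.loop_cancel = True
print(obj.m.loop_cancel)  # => True
```

Applied to the question's code, `self.label_update.__func__.loop_cancel = True` would reach the same `wrapper` attribute the loop checks.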
|
<python><tkinter><attributes><decorator><python-decorators>
|
2023-02-16 16:22:02
| 1
| 7,190
|
JRiggles
|
75,475,141
| 579,228
|
Using SQLite Python and Multithreading
|
<p>I'm desperately trying to get this code to work from about 70 threads, which won't run at exactly the same time, but pretty close together. All I really want is a way of saying: try to insert this, and if you can't, back off for a while and try again, without breaking the database. I'm using no options when creating the database except for the filename. The only problem is I'm getting lots of <strong>disk I/O errors</strong> and <strong>database disk image is malformed</strong>. I'm trying to run this in a transaction, so if anything goes wrong it should roll back. I've tried the <code>isolation_level=None</code> option on the connection, which didn't really help. I'm using the Python sqlite3 module.</p>
<p>Here's the code</p>
<pre><code>update_simulations_end_time_sql = """update simulations set end_time=?, completion_status =? where id=?;"""

def __set_time(sql_command, data):
    retries = 0
    while retries < 5:
        try:
            with create_tables.create_connection() as conn:
                cur = conn.cursor()
                cur.execute("begin")
                cur.execute(sql_command, data)
                return
        except Exception as e:
            print(f"__set_time has failed with {sql_command}")
            print(e)
            sleep_time = uniform(0.1, 4)
            print(f"Sleeping for {sleep_time}")
            sleep(sleep_time)
            retries += 1
    raise Exception(f"__set_time failed after {retries}")
</code></pre>
<p>Here's the options sqlite was compiled with</p>
<pre><code>sqlite> SELECT * FROM pragma_compile_options;
COMPILER=gcc-9.4.0
ENABLE_COLUMN_METADATA
ENABLE_DBSTAT_VTAB
ENABLE_FTS3
ENABLE_FTS3_PARENTHESIS
ENABLE_FTS3_TOKENIZER
ENABLE_FTS4
ENABLE_FTS5
ENABLE_JSON1
ENABLE_LOAD_EXTENSION
ENABLE_PREUPDATE_HOOK
ENABLE_RTREE
ENABLE_SESSION
ENABLE_STMTVTAB
ENABLE_UNKNOWN_SQL_FUNCTION
ENABLE_UNLOCK_NOTIFY
ENABLE_UPDATE_DELETE_LIMIT
HAVE_ISNAN
LIKE_DOESNT_MATCH_BLOBS
MAX_SCHEMA_RETRY=25
MAX_VARIABLE_NUMBER=250000
OMIT_LOOKASIDE
SECURE_DELETE
SOUNDEX
THREADSAFE=1
USE_URI
</code></pre>
<p>If anyone has any ideas on how to solve this, I would be amazingly grateful.</p>
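Not part of the question, but the retry loop above can be tightened so that each thread uses its own short-lived connection, sqlite itself waits on locks via `timeout`, and only `sqlite3.OperationalError` triggers a back-off. A hedged sketch (the WAL pragma is an assumption about the workload, and the question's `create_tables.create_connection` is replaced by a plain `sqlite3.connect`):

```python
import random
import sqlite3
import time

def update_with_retry(db_path, sql, params, retries=5):
    """Run one UPDATE, backing off and retrying if the database is locked."""
    for attempt in range(retries):
        conn = sqlite3.connect(db_path, timeout=30)  # wait up to 30s on locks
        try:
            conn.execute("pragma journal_mode=wal")  # assumption: WAL suits this workload
            with conn:  # commits on success, rolls back on exception
                conn.execute(sql, params)
            return
        except sqlite3.OperationalError:
            time.sleep(random.uniform(0.1, 0.5))  # jittered back-off
        finally:
            conn.close()
    raise RuntimeError(f"update failed after {retries} retries")
```

The `with conn:` block handles commit/rollback, so no explicit `begin` is needed; note it does not close the connection, which is why the `finally` does.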
|
<python><sqlite>
|
2023-02-16 16:21:19
| 1
| 1,850
|
James
|
75,474,913
| 13,525,512
|
Launch different tkinter version from python app compiled with pyinstaller on Windows
|
<p>I have a tkinter GUI that allows me to start any kind of program:</p>
<pre><code># main_app.py
import tkinter as tk
import subprocess

root = tk.Tk()
cmd_entry = tk.Entry(width=50)
cmd_entry.pack(side='left')

def run_script():
    sp = subprocess.run(cmd_entry.get().split(), shell=True)

run_btn = tk.Button(text="Run Command", command=run_script)
run_btn.pack(side='left')
root.mainloop()
</code></pre>
<p>It looks like this:</p>
<p><a href="https://i.sstatic.net/FsYBy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FsYBy.png" alt="enter image description here" /></a></p>
<p>I can start another tkinter script from this window, for instance:</p>
<pre><code># dummy_app.py
import tkinter as tk
root = tk.Tk()
root.mainloop()
</code></pre>
<p>It even works when starting <code>dummy_app.py</code> with a different version of python. For example, I can start <code>main_app.py</code> with Python 3.10.8 and run the following:</p>
<pre><code>C:\Path\to\python3.9\python.exe dummy_app.py
</code></pre>
<p>However, if I compile <code>main_app.py</code> into an executable with pyinstaller (v5.6.2):</p>
<pre><code>pyinstaller.exe .\main_app.py --onefile
</code></pre>
<p>Then I get the following error when trying to run <code>C:\Path\to\python3.9\python.exe dummy_app.py</code> from <code>main_app.exe</code>:</p>
<pre><code>C:/Users/.../AppData/Local/Temp/_MEI76562/tcl/init.tcl: version conflict for package "Tcl": have 8.6.9, need exactly 8.6.12
version conflict for package "Tcl": have 8.6.9, need exactly 8.6.12
while executing
"package require -exact Tcl 8.6.12"
(file "C:/Users/.../AppData/Local/Temp/_MEI76562/tcl/init.tcl" line 19)
invoked from within
"source C:/Users/.../AppData/Local/Temp/_MEI76562/tcl/init.tcl"
("uplevel" body line 1)
invoked from within
"uplevel #0 [list source $tclfile]"
This probably means that Tcl wasn't installed properly.
</code></pre>
<p><code>python dummy_app.py</code> works fine however.</p>
<p>Why does the tcl version have to be the same when starting the script from the compiled executable? Is there a way around this?</p>
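A common explanation (an assumption here, since the question doesn't confirm it) is that the PyInstaller bootloader sets `TCL_LIBRARY` and `TK_LIBRARY` to its unpacked `_MEIxxxx/tcl` directory, and the spawned python inherits them, so the child's Tcl 8.6.9 runtime tries to load the bundled 8.6.12 `init.tcl`. A sketch that strips those variables before spawning:

```python
import os
import subprocess

def run_script_clean_env(cmd_parts):
    """Run a command without the Tcl/Tk paths a frozen app may have injected."""
    env = os.environ.copy()
    for var in ("TCL_LIBRARY", "TK_LIBRARY"):
        env.pop(var, None)  # let the child python locate its own Tcl/Tk
    return subprocess.run(cmd_parts, env=env)
```

In `main_app.py` this would take the place of the plain `subprocess.run(...)` call, e.g. `run_script_clean_env(cmd_entry.get().split())`.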
|
<python><python-3.x><tkinter><tcl><pyinstaller>
|
2023-02-16 16:04:37
| 2
| 12,821
|
Tranbi
|
75,474,678
| 11,974,163
|
Pyodbc.connect is not connecting with created login and user
|
<p>Been stuck on this for a few days. I've seen quite a few Stack Overflow posts on this, none of which resolved it for me, and I've also read the Microsoft and pyodbc docs, but my issue seems to be niche and I'd like some help.</p>
<p><strong>Goal:</strong> I want to connect to SQL Server via a Python script using <code>pyodbc</code>. I've built a <code>docker-compose.yml</code> which, for the SQL Server image, points at a <code>Dockerfile</code> with <code>build: .</code>; the Dockerfile then runs a <code>setup.sql</code> script so the db, user and login are created automatically instead of me creating them every time I spin up the container.</p>
<p>On my laptop (checked in ODBC Data Source Administrator) I have <code>ODBC Driver 17 for SQL Server</code>, which is why I've used it in the code below:</p>
<pre><code>cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=testdb;UID=kafkaUser;PWD=Kafka_Us3R!;')
</code></pre>
<p>(I've seen on other posts that using <code>trusted_connection=yes;</code> should resolve this, but sadly it doesn't for me.) <strong>NOTE:</strong> I get this error whether I use <code>sa</code>, <code>kafkaUser</code>, or <code>kafkaLogin</code> with their designated passwords. I've just copied and pasted the above line from my last try before resorting to posting here.</p>
<p><code>Dockerfile:</code></p>
<pre><code>FROM mcr.microsoft.com/mssql/server
WORKDIR /topics
# Env vars for sql server
ENV ACCEPT_EULA Y
ENV MSSQL_SA_PASSWORD <YourStrong!Passw0rd>
ENV MSSQL_PID Developer
ENV MSSQL_HOST localhost
ENV MSSQL_USER kafkaUser
ENV MSSQL_PASSWORD Kafka_Us3R!
ENV MSSQL_DATABASE testdb
EXPOSE 1433:1433
COPY topics/proposal-created-hl/setup.sql setup.sql
COPY topics/proposal-created-hl/setup_database.sh setup_database.sh
COPY topics/proposal-created-hl/entrypoint.sh entrypoint.sh
RUN /opt/mssql/bin/sqlservr & ./setup_database.sh
</code></pre>
<p><code>setup.sql</code></p>
<pre><code>-- MSSQL file for local testing with docker container
IF NOT EXISTS ( SELECT * FROM sys.databases WHERE name = 'testdb' )
    CREATE DATABASE [testdb];
GO

USE [testdb]
GO

IF NOT EXISTS ( SELECT name FROM sys.server_principals WHERE name = 'kafkaLogin' )
BEGIN
    CREATE LOGIN [kafkaLogin] WITH PASSWORD = 'Kafka_Us3R!', CHECK_EXPIRATION = OFF, CHECK_POLICY = OFF;
    ALTER SERVER ROLE [sysadmin] ADD MEMBER [kafkaLogin];
    ALTER LOGIN [kafkaLogin] WITH DEFAULT_DATABASE = [testdb];
    CREATE USER [kafkaUser] FOR LOGIN [kafkaLogin];
END
GO
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>pyodbc.InterfaceError: ('28000', "[28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user 'kafkaUser'. (18456) (SQLDriverConnect); [28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user 'kafkaUser'. (18456)")
</code></pre>
<p><code>setup_database.sh</code>:</p>
<pre><code>#!/usr/bin/env bash
# Resource: https://stackoverflow.com/questions/58309452/docker-initialize-database-tables-and-records-in-sql-server

# Wait for database to startup
sleep 20
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>' -i setup.sql
</code></pre>
<p>I have also checked, from inside the container (via <code>docker exec</code>), that the db, user and login have been created. So what detail am I missing here that has caused this error? I've tried the solutions from various posts about the same error, but none of them resolved it for me. Is there something else I need to look at?</p>
|
<python><sql-server><pyodbc>
|
2023-02-16 15:46:04
| 1
| 457
|
pragmatic learner
|
75,474,670
| 5,453,284
|
Is calling str.replace() twice the best solution for overlapping matches?
|
<p>When I execute the following code I expect all ' a ' to be replaced by ' b ', yet only non-overlapping matches are replaced.</p>
<pre><code>" a a a a a a a a ".replace(' a ', ' b ')
>>>' b a b a b a b a'
</code></pre>
<p>So I use the following:</p>
<pre><code>" a a a a a a a a ".replace(' a ', ' b ').replace(' a ', ' b ')
>>>' b b b b b b b b '
</code></pre>
<p>Is this a bug or a feature of <em>replace</em>?</p>
<p>From the <a href="https://docs.python.org/3/library/stdtypes.html#str.replace" rel="nofollow noreferrer">docs</a>, <em>ALL OCCURRENCES</em> are replaced.</p>
<pre><code>str.replace(old, new[, count])
Return a copy of the string with all occurrences of substring old replaced by new. If the optional argument count is given, only the first count occurrences are replaced.
</code></pre>
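This is documented behaviour rather than a bug: `replace` scans left to right and resumes scanning after the text it just substituted, so the space consumed by one `' a '` match can never start the next match. A regex lookahead keeps the trailing space out of the match, so one pass suffices (assuming the targets really are space-delimited `a`s):

```python
import re

s = " a a a a a a a a "

# str.replace consumes the trailing space, so adjacent matches can't overlap
print(s.replace(' a ', ' b '))  # => ' b a b a b a b a '

# the lookahead (?= ) requires a following space without consuming it
print(re.sub(r' a(?= )', ' b', s))  # => ' b b b b b b b b '
```

Calling `replace` twice works too, as the question found; the lookahead just makes the intent explicit in a single pass.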
|
<python><replace>
|
2023-02-16 15:45:32
| 2
| 327
|
thomas
|
75,474,642
| 12,875,823
|
Interpret hex as signed integer in Python
|
<p>I'm aware I could just do <code>0x0 - 9223372036854775807 - 1</code>, but is there a bit shift operation I could do instead to make this faster? Context: I'm fed a uint64 number in hex string form but I want to store this number inside an 8 byte signed integer attr in PostgreSQL. Also, I would need a way to convert it back from signed integer to unsigned hex</p>
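For illustration (the function names are made up, not from the question): reinterpreting a uint64 hex string as a signed 64-bit value needs only an xor and a subtraction on the sign bit, and the reverse direction is a mask plus `format`:

```python
SIGN = 1 << 63
MASK = (1 << 64) - 1

def hex_to_i64(h: str) -> int:
    """Reinterpret a uint64 hex string as a signed 64-bit integer."""
    v = int(h, 16)
    return (v ^ SIGN) - SIGN  # values >= 2**63 wrap to negatives

def i64_to_hex(i: int) -> str:
    """Reinterpret a signed 64-bit integer as a uint64 hex string."""
    return format(i & MASK, '016x')
```

For example, `hex_to_i64('ffffffffffffffff')` gives `-1`, and `i64_to_hex(-1)` gives the hex string back, so the pair round-trips for storage in a PostgreSQL `bigint`.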
|
<python><python-3.x><bit-manipulation>
|
2023-02-16 15:43:21
| 0
| 998
|
acw
|
75,474,603
| 5,510,540
|
Python: replace numbers in an array
|
<p>I have the following array:</p>
<pre><code>array = array([4., 0., 2., 8., 8., 8., 8., 2., 0.])
</code></pre>
<p>and I would like to replace 0 with 0.5 so as to get:</p>
<pre><code>array = array([4., 0.5, 2., 8., 8., 8., 8., 2., 0.5])
</code></pre>
<p>so far I have tried:</p>
<pre><code>array.replace(0.5, 0)
</code></pre>
<p>with little success:</p>
<pre><code>AttributeError: 'numpy.ndarray' object has no attribute 'replace'
</code></pre>
<p>any idea on how to keep the array format but replace numbers inside it?</p>
<p>cheers!</p>
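A boolean-mask assignment does exactly this for NumPy arrays; `np.where` is the non-mutating variant. A minimal sketch:

```python
import numpy as np

arr = np.array([4., 0., 2., 8., 8., 8., 8., 2., 0.])

# in place: a boolean mask selects the zeros, assignment fills them
arr[arr == 0] = 0.5

# non-mutating alternative: build a new array
original = np.array([4., 0., 2., 8., 8., 8., 8., 2., 0.])
replaced = np.where(original == 0, 0.5, original)

assert arr.tolist() == replaced.tolist() == [4., 0.5, 2., 8., 8., 8., 8., 2., 0.5]
```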
|
<python><arrays>
|
2023-02-16 15:40:36
| 1
| 1,642
|
Economist_Ayahuasca
|
75,474,589
| 4,494,781
|
How to download a file via https using QNetworkAccessManager
|
<p>I'm trying to write a class using <code>QtNetwork</code> to download a file without freezing my GUI.
This seems to work with http URLs (tested with "http://webcode.me"), but not with the <code>https</code> URL from my example.</p>
<pre><code>import os
from typing import List, Optional
import urllib.parse

from PyQt5.QtCore import pyqtSignal, QByteArray, QFile, QObject, QUrl
from PyQt5.QtNetwork import (QNetworkAccessManager, QNetworkReply,
                             QNetworkRequest, QSslError)


class AsyncDownloader(QObject):
    def __init__(self, url: str, filename: str, parent=None):
        super().__init__(parent)
        self.net_mgr = QNetworkAccessManager()
        self.req = QNetworkRequest(QUrl(url))
        self.fetch_task: Optional[QNetworkReply] = None
        self.data: Optional[QByteArray] = None
        self.file = QFile(filename)
        self.net_mgr.sslErrors.connect(self._ignore_ssl_errors)

    def start_fetch(self):
        self.fetch_task = self.net_mgr.get(self.req)
        self.fetch_task.downloadProgress.connect(self.on_progress)
        self.fetch_task.finished.connect(self.on_finished)

    def _ignore_ssl_errors(self, reply: QNetworkReply, errors: List[QSslError]):
        print(f"errors {errors}")
        reply.ignoreSslErrors(errors)

    def on_progress(self, bytes_received: int, bytes_total: int):
        print(f"bytes received {bytes_received} (total {bytes_total})")

    def on_finished(self):
        print("finished")
        self.data = self.fetch_task.readAll()
        if not self.file.open(QFile.WriteOnly):
            raise IOError(f"Unable to write to {self.file.fileName()}")
        self.file.write(self.data)
        self.file.close()
        print(f"file written to {self.file.fileName()}")


if __name__ == '__main__':
    from pathlib import Path
    from PyQt5.QtWidgets import QApplication

    dl_path = os.path.join(str(Path.home()), "test_dl")
    os.makedirs(dl_path, exist_ok=True)
    app = QApplication([])
    downloader = AsyncDownloader(
        "https://github.com/PiRK/Electrum-ABC-Build-Tools/releases/download/v1.0/tor-linux",
        os.path.join(dl_path, "tor")
    )
    downloader.start_fetch()
    app.exec_()
</code></pre>
<p>The errors (or warnings?) I'm getting are:</p>
<pre><code>qt.network.ssl: QSslSocket: cannot resolve EVP_PKEY_base_id
qt.network.ssl: QSslSocket: cannot resolve SSL_get_peer_certificate
qt.network.ssl: QSslSocket: cannot call unresolved function SSL_get_peer_certificate
errors [<PyQt5.QtNetwork.QSslError object at 0x7fad867112a0>]
qt.network.ssl: QSslSocket: cannot call unresolved function EVP_PKEY_base_id
bytes received 0 (total 0)
finished
file written to /home/myname/test_dl/tor
</code></pre>
<p>The file that is written is empty.</p>
<p>I tried adding the following lines just after <code>self.net_mgr = ....</code>:</p>
<pre><code>        parsed_url = urllib.parse.urlparse(url)
        if parsed_url.scheme == "https":
            self.net_mgr.connectToHostEncrypted(parsed_url.hostname)
</code></pre>
<p>This does not help.</p>
<p>The download works fine with <code>wget</code>:</p>
<pre><code>$ wget "https://github.com/PiRK/Electrum-ABC-Build-Tools/releases/download/v1.0/tor-linux"
...
tor-linux 100%[=============================================================================================>] 15,34M 985KB/s in 16s
2023-02-16 16:36:51 (969 KB/s) - ‘tor-linux’ saved [16090880/16090880]
</code></pre>
|
<python><qtnetwork>
|
2023-02-16 15:39:12
| 1
| 1,105
|
PiRK
|
75,474,511
| 2,061,944
|
How to select and click on option based on city name in mat-option-text?
|
<p>I need to find and click on a specific city from a drop down list using selenium. I have tried using the xpath but the id number for the city keeps changing with every refresh of the page. The element from the website is below:</p>
<pre><code><mat-option role="option" class="mat-option mat-focus-indicator mat-active ng-tns-c88-11 ng-star-inserted mat-selected" id="mat-option-14" tabindex="0" aria-disabled="false" style="" aria-selected="true"><!---->
<span class="mat-option-text"> Melbourne </span><!----><div mat-ripple="" class="mat-ripple mat-option-ripple"></div></mat-option>
</code></pre>
<p>My code currently uses xpath:</p>
<pre><code>WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH,'//*[@id="mat-option-14"]/span'))).click()
</code></pre>
<p>The code works only when the id is 14 but it can be any number so when it changes the code breaks. Also there are multiple cities with different ids. Can I find and click by the city name instead somehow? Thanks</p>
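Matching on the visible text instead of the generated `id` sidesteps the changing number. The helper below is a sketch: the element structure is taken from the snippet above, and `normalize-space()` strips the padding around the city name:

```python
def city_option_xpath(city: str) -> str:
    """Build an XPath that finds a mat-option by its visible city name."""
    return ("//mat-option/span[contains(@class, 'mat-option-text')]"
            f"[normalize-space()='{city}']")
```

Hypothetical usage with the question's wait: `WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, city_option_xpath("Melbourne")))).click()`.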
|
<python><selenium-webdriver><xpath>
|
2023-02-16 15:33:10
| 2
| 329
|
user2061944
|
75,474,448
| 13,517,174
|
pytest: How can you use mocker.patch on a function that is defined inside of a test?
|
<p>I have a pytest file called <code>test_util</code> that looks like this:</p>
<pre><code>import pytest

class TestUtil:
    def test_split_kwargs(self, mocker):
        def testfunction_extra(e='5', f='6'):
            return e + f

        mocker.patch(...)
</code></pre>
<p>I would like to use the <code>assert_has_calls</code> method on my <code>testfunction_extra</code> function, but I'm not sure what to put into my <code>mocker.patch</code> statement. I have already tried</p>
<pre><code>mocker.patch(__name__ + '.TestUtil.test_split_kwargs.testfunction_extra')
</code></pre>
<p>but this returns the error <code>AttributeError: <function TestUtil.test_split_kwargs at 0x7fce6612e790> does not have the attribute 'testfunction_extra'</code></p>
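`mocker.patch` needs an importable dotted path, and a function defined inside a test body has none, which is why the attempt above fails. One workaround (a sketch, not the only option) is to wrap the local function in a plain `unittest.mock.Mock` with `side_effect`, which makes call assertions such as `assert_has_calls` available while still running the real code:

```python
from unittest.mock import Mock, call

def test_split_kwargs_with_spy():
    def testfunction_extra(e='5', f='6'):
        return e + f

    # wrap the local function; calls pass through via side_effect
    spy = Mock(side_effect=testfunction_extra)

    assert spy('1', f='2') == '12'  # the real function still executes
    spy.assert_has_calls([call('1', f='2')])

test_split_kwargs_with_spy()
```

The spy is then passed around wherever the local function would have been, so there is nothing to patch by name.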
|
<python><mocking><pytest>
|
2023-02-16 15:28:23
| 1
| 453
|
Yes
|
75,474,347
| 9,911,256
|
Convert bytea to ndarray of nd.float32
|
<p>I have an ndarray of np.float32 that is saved in a Postgres database in the <a href="https://www.postgresql.org/docs/current/datatype-binary.html#id-1.5.7.12.9" rel="nofollow noreferrer"><em>bytea</em></a> format:</p>
<pre><code>import pandas as pd
import numpy as np
import sqlite3
myndarray=np.array([-3.55219245e-02, 1.33227497e-01, -4.96977456e-02, 2.16857344e-01], dtype=np.float32)
myarray=[myndarray.tobytes()]
mydataframe=pd.DataFrame(myarray, columns=['Column1'])
mydataframe.to_sql('mytable', sqlite3.connect("/tmp/floats.sqlite"))
</code></pre>
<p>In SQLITE3, this will produce:</p>
<pre><code>CREATE TABLE IF NOT EXISTS "mytable" ("index" INTEGER, "Column1" TEXT);
INSERT INTO mytable VALUES(0,X'707f11bdca6c083edd8f4bbdda0f5e3e');
</code></pre>
<p>In Postgresql, this will produce:</p>
<pre><code>mydatabase=# select * from mytable;
index | Column1
-------+------------------------------------
0 | \x707f11bdca6c083edd8f4bbdda0f5e3e
</code></pre>
<p>Which is the <a href="https://www.postgresql.org/docs/current/datatype-binary.html#id-1.5.7.12.9" rel="nofollow noreferrer"><em>bytea</em></a> format. How do I convert that <code>\x707f...</code> back to <code>myndarray</code>? I'm no expert here; I've found a lot of obscure documentation about <code>frombuffer()</code>, python2 <code>buffer()</code>, and <code>memoryview()</code>, but I am far from a proper result.</p>
<p>My best so far is:</p>
<pre><code>np.frombuffer(bytearray('707f11bdca6c083edd8f4bbdda0f5e3e', 'utf-8'), dtype=np.float32)
</code></pre>
<p>which is completely wrong (myndarray has 4 values):</p>
<pre><code>[2.1627062e+23 1.6690035e+22 3.3643249e+21 5.2896255e+22 2.1769183e+23
1.6704162e+22 2.0823326e+23 5.2948159e+22]
</code></pre>
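The missing step in the attempt above is decoding the hex text into raw bytes before `frombuffer`; `bytearray('707f...', 'utf-8')` yields the ASCII codes of the 32 hex digits instead of the 16 original float bytes, hence the 8 wrong values. A sketch assuming the driver returns the `\x`-prefixed text form (with psycopg2 the column often already arrives as `bytes`/`memoryview`, in which case `np.frombuffer(bytes(value), dtype=np.float32)` is enough):

```python
import numpy as np

# the bytea column as psql prints it (taken from the question)
bytea_text = r'\x707f11bdca6c083edd8f4bbdda0f5e3e'

# strip the \x prefix, then decode the hex digits into the original bytes
hex_digits = bytea_text[2:] if bytea_text.startswith('\\x') else bytea_text
arr = np.frombuffer(bytes.fromhex(hex_digits), dtype=np.float32)

print(arr)  # recovers the four original float32 values
```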
|
<python><pandas><postgresql><numpy><bytea>
|
2023-02-16 15:20:32
| 1
| 954
|
RodolfoAP
|
75,474,316
| 4,700,367
|
Python threading memory error / bug / race condition
|
<p>I have an app where the following happens:</p>
<ul>
<li>starts a thread to generate "work"
<ul>
<li>this thread then starts a thread pool with 5 workers to generate "work" and put it on to a FIFO queue</li>
</ul>
</li>
<li>starts a thread pool of 20 workers to get work from the FIFO queue and executes it on a thread in the pool</li>
</ul>
<p>When running just one piece of "work" through the system, it works great. When running multiple, it starts failing.</p>
<p>I logged the <code>id()</code> of the objects retrieved from the queue, and it seems that memory addresses are being re-used repeatedly for some reason, rather than each object getting a new memory address. I suspect there is then a data race where multiple threads access an object (which in my view IS a different object) at the same memory address, thereby overwriting each other's variables etc.</p>
<p>See the following snippet from the log:</p>
<pre><code>[2023-02-16 14:33:02,695] INFO | App started with main PID: 26600
[2023-02-16 14:33:02,695] DEBUG | Max workers: 20
[2023-02-16 14:33:02,695] DEBUG | Max queue size: 60
[2023-02-16 14:33:02,695] INFO | Creating a work queue with size: 60
[2023-02-16 14:33:02,695] INFO | Starting the work generator thread
[2023-02-16 14:33:02,696] INFO | Creating a work consumer thread pool with max workers: 20
[2023-02-16 14:33:02,697] INFO | Found automation 'automation_d'
[2023-02-16 14:33:02,697] DEBUG | Submitting automation file to the work generator thread pool for execution
>>>>>>>>>>>>>>>>>>>id()==140299908643808
[2023-02-16 14:33:03,181] DEBUG | Putting 'T2149393' on to the queue for automation 'automation_d'
[2023-02-16 14:33:03,181] DEBUG | Putting 'T2149388' on to the queue for automation 'automation_d'
[2023-02-16 14:33:03,181] DEBUG | Putting 'T2149389' on to the queue for automation 'automation_d'
[2023-02-16 14:33:03,198] DEBUG | Retrieved a work item from the queue
[2023-02-16 14:33:03,198] DEBUG | Submitting work to the work consumer thread pool for execution
[2023-02-16 14:33:03,199] DEBUG | ==========================================================================================
>>>>>>>>>>>>>>>>>>>id()==140299908643808
[2023-02-16 14:33:03,199] DEBUG | <automation.TAutomation object at 0x7f9a1e377be0>
[2023-02-16 14:33:03,199] DEBUG | Task(num="T2149393", req="R2396580", who="", grp="AG1", desc="REQ - T"
[2023-02-16 14:33:03,199] DEBUG | ==========================================================================================
[2023-02-16 14:33:03,199] INFO | Running automation_d against T2149393 with internal automation id 18aa2e51-c94d-4d83-a033-44e30cca9dd3 in thread 140299891414784
[2023-02-16 14:33:03,199] INFO | Assigning T2149393 to API user
[2023-02-16 14:33:03,199] DEBUG | Retrieved a work item from the queue
[2023-02-16 14:33:03,201] DEBUG | Submitting work to the work consumer thread pool for execution
[2023-02-16 14:33:03,202] DEBUG | ==========================================================================================
>>>>>>>>>>>>>>>>>>>id()==140299908643808
[2023-02-16 14:33:03,202] DEBUG | <automation.TAutomation object at 0x7f9a1e377be0>
[2023-02-16 14:33:03,202] DEBUG | Task(num="T2149388", req="R2396575", who="", grp="AG1", desc="REQ - T"
[2023-02-16 14:33:03,202] DEBUG | ==========================================================================================
[2023-02-16 14:33:03,202] INFO | Running automation_d against T2149388 with internal automation id 18aa2e51-c94d-4d83-a033-44e30cca9dd3 in thread 140299883022080
[2023-02-16 14:33:03,202] DEBUG | Retrieved a work item from the queue
[2023-02-16 14:33:03,202] INFO | Assigning T2149388 to API user
[2023-02-16 14:33:03,203] DEBUG | Submitting work to the work consumer thread pool for execution
[2023-02-16 14:33:03,204] DEBUG | ==========================================================================================
>>>>>>>>>>>>>>>>>>>id()==140299908643808
[2023-02-16 14:33:03,204] DEBUG | <automation.TAutomation object at 0x7f9a1e377be0>
[2023-02-16 14:33:03,204] DEBUG | Task(num="T2149389", req="R2396576", who="", grp="AG1", desc="REQ - T"
[2023-02-16 14:33:03,205] DEBUG | ==========================================================================================
[2023-02-16 14:33:03,205] INFO | Running automation_d against T2149389 with internal automation id 18aa2e51-c94d-4d83-a033-44e30cca9dd3 in thread 140299670124288
</code></pre>
<p>As can be seen above, the <code>id()</code> is the same for all executions. The actual memory address of the object is also the same each time, as is the internal automation id, which is an attribute on the object. Meaning when I eventually put this into the queue and it gets consumed and passed to another thread for execution, every thread has a pointer/reference to the same object, which is causing the execution to fail in weird ways.</p>
<p>The below code sample is not intended to be a reproducible way to generate the error or the above log; it's intended as a visualisation and an example of how the app is currently structured. There is way too much code and custom logic to share here.</p>
<p>Rough, high-level code here:</p>
<pre class="lang-py prettyprint-override"><code>import json
import os
import sys
import time
from concurrent.futures import (CancelledError, Future, ThreadPoolExecutor,
                                TimeoutError)
from dataclasses import dataclass
from logging import Logger
from pathlib import Path, PurePath
from queue import Empty, Full, Queue
from threading import Event, Thread
from types import FrameType
from typing import Any, Dict, List, Optional

import requests
import urllib3


@dataclass()
class WorkItem:
    automation_object: Automation
    target: AutomationTarget
    config: AutomationConfig


def generate_work(work_queue, app_config, automation_file, automation_name):
    automation_config_raw = load_automation_file(automation_file)
    validate_automation_file(automation_config=automation_config_raw)
    automation_config = build_automation_config(
        automation_name=automation_name,
        automation_config_raw=automation_config_raw,
        log_dir=app_config.log_dir
    )
    automation_object = build_automation(automation_config=automation_config)
    records = automation_object.get_records()
    for record in records:
        work_item = WorkItem(
            automation_object=automation_object,
            target=record,
            config=automation_config
        )
        work_queue.put(item=work_item, block=False)


def work_generator(stop_app_event, app_config, app_logger, work_queue):
    work_generator_thread_pool = ThreadPoolExecutor(max_workers=5)
    while True:
        automation_files = get_automation_files(app_config.automations_dir)
        for automation_file in automation_files:
            automation_name = PurePath(automation_file).stem
            work_generator_thread_pool.submit(generate_work, work_queue, app_config, automation_file, automation_name)


def main():
    work_generator_thread = Thread(target=work_generator, args=(stop_app_event, app_config, app_logger, work_queue))
    work_generator_thread.start()
    work_consumer_thread_pool = ThreadPoolExecutor(max_workers=max_workers)
    while True:
        work_item = work_queue.get()
        work_consumer_thread_pool.submit(work_item.automation_object.execute, work_item.target)


if __name__ == "__main__":
    main()
</code></pre>
<p>So, at a high level we have 1 thread generating work using a thread pool, and another thread consuming + executing work from the queue.</p>
<p>Why is Python re-using the same piece of memory repeatedly and how can I force it to use a new piece of memory when creating these objects?</p>
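Two things are worth separating here. CPython recycles `id()` values (and the `0x...` addresses in `repr`) as soon as an object is garbage-collected, so identical ids across log lines do not by themselves prove sharing. In this code, however, the sharing is real: `generate_work` builds one `automation_object` per file and stores that same instance in every `WorkItem`. A sketch (class and field names are stand-ins, not the question's real classes) of giving each work item an independent copy instead:

```python
import copy
from dataclasses import dataclass

@dataclass
class Automation:
    state: str = "idle"  # mutable per-run state that must not be shared

template = Automation()
records = ["T2149393", "T2149388", "T2149389"]

# one independent object per work item instead of one shared instance
work_items = [(copy.deepcopy(template), record) for record in records]

objs = [obj for obj, _ in work_items]
assert len({id(obj) for obj in objs}) == len(objs)  # all alive, all distinct
```

Whether to deep-copy or to construct a fresh object per record depends on how expensive `build_automation` is; either way, per-thread mutable state stops being shared.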
|
<python><python-3.x><multithreading><python-multithreading><data-race>
|
2023-02-16 15:18:21
| 1
| 438
|
Sam Wood
|
75,474,163
| 1,900,954
|
Disabling tf.function decorators for code coverage pytest run
|
<p>As discussed <a href="https://github.com/tensorflow/tensorflow/issues/33759" rel="nofollow noreferrer">here</a>, code coverage tools do not work nicely with tensorflow due to its code transformation. One suggested workaround is to use <code>tf.config.experimental_run_functions_eagerly(True)</code> when generating reports (though it's worth noting that this still does not handle all cases, e.g. <code>tf.map_fn</code>).</p>
<p><strong>My question is: is there a simple way to do this automatically for tests run using <code>pytest --cov</code>?</strong> Is there perhaps something I could add to <code>conftest.py</code> that would allow me to make all executions run eagerly whenever I pass a given command line argument, such as <code>pytest --cov --eagerly</code>?</p>
|
<python><tensorflow><pytest><coverage.py><pytest-cov>
|
2023-02-16 15:06:05
| 1
| 1,974
|
Uri Granta
|
75,474,160
| 6,817,178
|
Python consume RabbitMQ and run SocketIO server
|
<p><strong>Setup</strong></p>
<p>I have a python application, which should consume messages from a RabbitMQ and act as a SocketIO server to a Vue2 APP. When it receives messages from RabbitMQ it should send out a message over SocketIO to the Vue2 APP. Therefore I wrote 2 classes <code>RabbitMQHandler</code> and <code>SocketIOHandler</code>. I am starting the <code>RabbitMQHandler</code> in a separate thread so that both the RabbitMQ consume and the wsgi server can run in parallel.</p>
<p><strong>Code</strong></p>
<pre><code>import random
import threading
import socketio
import eventlet
import sys
import os
import uuid
import pika
from dotenv import load_dotenv
import logging


class RabbitMQHandler():
    def __init__(self, RABBITMQ_USER, RABBITMQ_PW, RABBITMQ_IP):
        self.queue_name = 'myqueue'
        self.exchange_name = 'myqueue'
        credentials = pika.PlainCredentials(RABBITMQ_USER, RABBITMQ_PW)
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(RABBITMQ_IP, 5672, '/', credentials))
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue=self.queue_name)
        self.channel.exchange_declare(exchange=self.exchange_name, exchange_type='fanout')
        self.channel.queue_bind(exchange=self.exchange_name, queue=self.queue_name)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.connection.close()

    def run(self, callback):
        logging.info('start consuming messages...')
        self.channel.basic_consume(queue=self.queue_name, auto_ack=True, on_message_callback=callback)
        self.channel.start_consuming()


class SocketIOHandler():
    def __init__(self):
        self.id = str(uuid.uuid4())
        # create a Socket.IO server
        self.sio = socketio.Server(async_mode='eventlet', cors_allowed_origins='*')
        # wrap with a WSGI application
        self.app = socketio.WSGIApp(self.sio)
        self.sio.on('connect_to_backend', self.handle_connect)
        self.sio.on('random_number', self.handle_random_number)

    def handle_connect(self, sid, msg):
        logging.info('new socket io message')
        self.emit('connect_success', {
            'success': True,
        })

    def handle_random_number(self, sid, msg):
        logging.info('handle_random_number')
        self.emit('response_random_number', {'number': random.randint(0, 10)})

    def emit(self, event, msg):
        logging.info('socket server: {}'.format(self.id))
        logging.info('sending event: "{}"'.format(event))
        self.sio.emit(event, msg)
        logging.info('sent event: "{}"'.format(event))

    def run(self):
        logging.info('start web socket on port 8765...')
        eventlet.wsgi.server(eventlet.listen(('', 8765)), self.app)


def start_rabbitmq_handler(socketio_handler, RABBITMQ_USER, RABBITMQ_PW, RABBITMQ_IP):
    def callback(ch, method, properties, body):
        logging.info('rabbitmq handler')
        socketio_handler.emit('response_random_number', {'number': random.randint(0, 10)})

    with RabbitMQHandler(RABBITMQ_USER, RABBITMQ_PW, RABBITMQ_IP) as rabbitmq_handler:
        rabbitmq_handler.run(callback=callback)


threads = []


def main():
    global threads
    load_dotenv()
    RABBITMQ_USER = os.getenv('RABBITMQ_USER')
    RABBITMQ_PW = os.getenv('RABBITMQ_PW')
    RABBITMQ_IP = os.getenv('RABBITMQ_IP')
    socketio_handler = SocketIOHandler()
    rabbitmq_thread = threading.Thread(target=start_rabbitmq_handler, args=(socketio_handler, RABBITMQ_USER, RABBITMQ_PW, RABBITMQ_IP))
    threads.append(rabbitmq_thread)
    rabbitmq_thread.start()
    socketio_handler.run()


if __name__ == '__main__':
    try:
        logging.basicConfig(level=logging.INFO)
        logging.getLogger("pika").propagate = False
        main()
    except KeyboardInterrupt:
        try:
            for t in threads:
                t.exit()
            sys.exit(0)
        except SystemExit:
            for t in threads:
                t.exit()
            os._exit(0)
</code></pre>
<p><strong>Problem</strong></p>
<p>The problem is that when the <code>RabbitMQHandler</code> receives a message, the <code>response_random_number</code> event does not get through to the Vue2 APP, even though it is emitted in the <code>callback</code> function. When I send the <code>random_number</code> event from the Vue2 APP to the Python application, I do get the <code>response_random_number</code> event back in the Vue2 APP.</p>
<p>So all connections work on their own, but not together. My guess would be that there is some sort of thread-communication error. I added the <code>id</code> to the <code>SocketIOHandler</code> class to make sure it is the same instantiated object, and the prints are the same.</p>
<p>The logs <code>socket server: ...</code>, <code>sending event: ...</code> and <code>sent event: ...</code> tell me that the function is being called correctly.</p>
|
<python><socket.io><rabbitmq><python-multithreading>
|
2023-02-16 15:05:41
| 1
| 4,935
|
Raphael Hippe
|
75,474,110
| 2,397,711
|
How to fix vulnerabilities from AWS ECR Image Scans
|
<p>I'm trying to fix some Common Vulnerabilities and Exposures from my docker images hosted at AWS ECR.</p>
<p>I have a Debian Bullseye that is basically a copy from the official python 3.11 bullseye slim image.</p>
<p>When I run a scan on it, ECR reports CVE-2019-8457: SQLite3 from 3.6.0 up to and including 3.27.2 is vulnerable to a heap out-of-bound read in the rtreenode() function when handling invalid rtree tables.</p>
<p>I tried to remove both sqlite3 and db5.3 from the image, but the <a href="https://security-tracker.debian.org/tracker/CVE-2019-8457" rel="nofollow noreferrer">warning persists</a>.</p>
<pre><code>RUN apt-get purge --auto-remove sqlite3 -y
RUN apt-get purge --auto-remove libsqlite3-dev -y
RUN apt-get purge --auto-remove db5.3-util -y
</code></pre>
<p>So my final questions are: Is the scan performed on the base image? Is there a way to fix these CVEs?</p>
|
<python><amazon-web-services><docker><security><amazon-ecr>
|
2023-02-16 15:01:29
| 0
| 1,874
|
Rafael Marques
|
75,474,093
| 6,364,850
|
How to read a spreadsheet using python API and Applications Default Credentials?
|
<p>I have the following code:</p>
<pre><code> scope = ['https://www.googleapis.com/auth/cloud-platform',
'https://www.googleapis.com/auth/drive',
'https://www.googleapis.com/auth/spreadsheets',
'https://www.googleapis.com/auth/spreadsheets.readonly']
credentials, project = google.auth.default(scopes=scope)
service = discovery.build('sheets', 'v4', credentials=credentials, cache_discovery=False)
sheet = service.spreadsheets()
result_input = sheet.values().get(spreadsheetId=id,range=range).execute()
</code></pre>
<p>But I'm getting a 403, even when the sheet is public:
<code>googleapiclient.errors.HttpError: &lt;HttpError 403 when requesting</code></p>
<pre><code>https://sheets.googleapis.com/v4/spreadsheets/<SHEET_ID_HERE>/values/<RANGE_HERE>?alt=json returned "Request had insufficient authentication scopes.". Details: "[{'@type': 'type.googleapis.com/google.rpc.ErrorInfo', 'reason': 'ACCESS_TOKEN_SCOPE_INSUFFICIENT', 'domain': 'googleapis.com', 'metadata': {'service': 'sheets.googleapis.com', 'method': 'google.apps.sheets.v4.SpreadsheetsService.GetValues'}}]">
</code></pre>
<p>I have</p>
<ul>
<li>Google SDK well configured</li>
<li>I am the owner of the GCP project</li>
<li>I am the owner of the google sheet with the same email (that know is public)</li>
</ul>
<p>I didn't want to download any .json API key. Do I have another option? What am I doing wrong?</p>
<p><strong>EDIT</strong>:
It seems to be impossible to use ADC to access the Google Sheets API. It MUST be done via a service account or an OAuth2 configuration.</p>
<p>Here's a reference to another similar question:
<a href="https://stackoverflow.com/questions/72526314/google-sheet-api-access-with-application-default-credentials-using-scopes-giving">Google Sheet API access with Application Default credentials using scopes giving Insufficient Scopes error while running on GCP VM</a></p>
<p>Very sad ending for my search</p>
|
<python><google-cloud-platform><scope><google-oauth><google-sheets-api>
|
2023-02-16 14:59:21
| 0
| 627
|
Matheus Oliveira
|
75,474,086
| 284,932
|
How to train FLAN-T5 to summarization task with a custom dataset of legal documents in pt-br?
|
<p>So, I would like to create a small proof-of-concept using +- 4,000 legal texts (already extracted into txt files) divided into:</p>
<ol>
<li>2,000 initial petitions / complaints (*.txt files)</li>
<li>2,000 summaries of each initial petition (txt files too)</li>
</ol>
<p>PS.: <strong>all text files are in brazilian portuguese (pt-br)</strong></p>
<p>So how can I use these txt files to train a new transformer able to generate new summaries (using FLAN-T5)?</p>
|
<python><nlp><transformer-model><summarization>
|
2023-02-16 14:58:59
| 2
| 474
|
celsowm
|
75,474,055
| 11,117,255
|
What can I do to pivot and group a dataframe?
|
<p>I have this dataframe</p>
<pre><code>time id type value
9:04 1 A 23
9:04 1 B 12
9:10 2 A 81
9:10 2 B 17
9:12 3 A 11
9:12 3 B 2
9:14 4 A 88
9:14 4 B 17
</code></pre>
<p>I can use this code to turn it into this dataframe</p>
<pre><code>dataframe = dataframe.pivot(index=['time', 'id'], columns='type', values='value')
dataframe.columns = ['_'.join(col) for col in dataframe.columns]
time id A_type B_type
9:04 1 23 NaN
9:04 1 NaN 12
9:10 2 81 NaN
9:10 2 NaN 17
9:12 3 11 NaN
9:12 3 NaN 2
9:14 4 88 NaN
9:14 4 NaN 17
</code></pre>
<p>How do I turn this into this dataframe?</p>
<pre><code>time id A_type B_type
9:04 1 23 12
9:10 2 81 17
9:12 3 11 2
9:14 4 88 17
</code></pre>
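For what it's worth, on a small reconstruction of this frame (column names are taken from the sample above), a pivot with a composite index already collapses each (time, id) pair into a single row, so no NaN-filling step is needed:

```python
import pandas as pd

# small sample mirroring the frame above
df = pd.DataFrame({
    'time':  ['9:04', '9:04', '9:10', '9:10'],
    'id':    [1, 1, 2, 2],
    'type':  ['A', 'B', 'A', 'B'],
    'value': [23, 12, 81, 17],
})

wide = (df.pivot(index=['time', 'id'], columns='type', values='value')
          .add_suffix('_type')      # A -> A_type, B -> B_type
          .reset_index())
print(wide)
```

If the frame already has the NaN-padded shape shown above, `dataframe.groupby(['time', 'id']).first()` collapses the paired rows the same way.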
|
<python><pandas><dataframe>
|
2023-02-16 14:56:53
| 0
| 2,759
|
Cauder
|
75,473,964
| 10,353,865
|
Pandas: "Zip-like" selection
|
<p>Currently, when using the lookup method on a df one obtains zip-like selection, e.g.</p>
<pre><code>df.lookup(["one","two"],["a","b"])
</code></pre>
<p>will select two values: one with rowlabel "one" and collabel "a" and another with "two" and "b".
Now, when using the method a warning appears that the method will not be available in future versions and that one should use the "loc" method.</p>
<p>I really don't know how to obtain the same "zip-like" behavior with loc. Can anyone explain/help?</p>
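For reference, the replacement the deprecation points at (as I understand it) pairs up positional row and column indexers and indexes the underlying NumPy array once — a minimal sketch on toy data:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]}, index=['one', 'two'])

# zip-like selection: one value per (row label, column label) pair,
# equivalent to the old df.lookup(['one', 'two'], ['a', 'b'])
rows = df.index.get_indexer(['one', 'two'])
cols = df.columns.get_indexer(['a', 'b'])
values = df.to_numpy()[rows, cols]
print(values)  # → [1 4]
```

This picks the value at ("one", "a") and at ("two", "b"), not the full cross-product that `loc` would give.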
|
<python><pandas>
|
2023-02-16 14:49:16
| 1
| 702
|
P.Jo
|
75,473,891
| 11,809,811
|
make plt graph fill area without padding
|
<p>I am trying to integrate matplotlib into tkinter; this is doable with FigureCanvasTkAgg. However, in my case I want to remove the y-axis ticks and then make the graph cover the entire available area (i.e. remove the padding). Removing the y-axis was not difficult, but I cannot find any information on removing the padding.</p>
<p>The solutions I found (for example: <a href="https://stackoverflow.com/questions/11637929/remove-padding-from-matplotlib-plotting">Remove padding from matplotlib plotting</a>) involve using plt.savefig, but saving the image wouldn't help me. I guess I could save the image and display it that way, although that would feel pretty hacky. Is there a better way?</p>
<p>My code so far:</p>
<pre><code>import customtkinter
import matplotlib.pyplot as plt
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure
import matplotlib
import numpy as np
root = customtkinter.CTk()
root.geometry('600x500')
# figure
figure = plt.Figure()
ax = figure.add_subplot(111)
ax.yaxis.set_visible(False)
t = np.arange(0, 3, .01)
ax.plot(t, 2 * np.sin(2 * np.pi * t))
# add widget
canvas = FigureCanvasTkAgg(figure, master=root)
canvas.draw()
canvas.get_tk_widget().pack(fill='both', expand=True)
root.mainloop()
</code></pre>
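One possibility (a sketch, not verified against customtkinter specifically): instead of saving the figure, zero out the subplot margins on the Figure object so the axes fill the whole canvas — note this also leaves no room for the tick labels of the top x-axis:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, only so this sketch runs standalone
from matplotlib.figure import Figure
import numpy as np

figure = Figure()
ax = figure.add_subplot(111)
ax.yaxis.set_visible(False)
t = np.arange(0, 3, .01)
ax.plot(t, 2 * np.sin(2 * np.pi * t))

# stretch the axes to the full figure area, removing the outer padding
figure.subplots_adjust(left=0, right=1, top=1, bottom=0)
```

The same `figure.subplots_adjust(...)` call should drop into the tkinter code before `canvas.draw()`.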
|
<python><matplotlib><tkinter>
|
2023-02-16 14:44:51
| 0
| 830
|
Another_coder
|
75,473,825
| 11,391,711
|
Ignored_column is not working when using grid search in h2o library - Python
|
<p>The parameter called <code>ignored_columns</code> (see <a href="https://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/algo-params/ignored_columns.html" rel="nofollow noreferrer">link</a>) lets the user specify features that should be ignored when building a model.</p>
<p>When I build a simple ML model and analyze the feature importance, I can see that <code>h2o</code> ignores the column I specified during the training process, which can be observed from the feature importance. As shown below, column <code>c</code> is not used during training.</p>
<pre><code>import pandas as pd
import h2o
from h2o.estimators import H2ODeepLearningEstimator
from h2o.grid.grid_search import H2OGridSearch
from h2o.estimators.random_forest import H2ORandomForestEstimator

h2o.init()

x = pd.DataFrame([[0, 1, 4], [5, 1, 6], [15, 2, 0], [25, 5, 32],
                  [35, 11, 89], [45, 15, 1], [55, 34, 3], [60, 35, 4]], columns=['a', 'b', 'c'])
y = pd.DataFrame([4, 5, 20, 14, 32, 22, 38, 43], columns=['label'])
hf = h2o.H2OFrame(pd.concat([x, y], axis="columns"))

X = hf.col_names[:-1]
y = hf.col_names[-1]

model = H2ORandomForestEstimator(ignored_columns=['c'])
model.train(y=y, training_frame=hf)
model.varimp(use_pandas=True)

  variable  relative_importance  scaled_importance  percentage
0        b         33876.328125           1.000000    0.540893
1        a         28753.998047           0.848793    0.459107
</code></pre>
<p>However, when I turn on grid search for hyperparameter tuning, it does not seem to work.</p>
<pre><code>params = {'max_depth': list(range(7, 16)), 'sample_rate': [0.8]}
criteria = {'strategy': 'RandomDiscrete', 'max_models': 4}

grid = H2OGridSearch(model=H2ORandomForestEstimator(ignored_columns=['c']),
                     search_criteria=criteria,
                     hyper_params=params)
grid.train(y=y, training_frame=hf)
best_model = grid.get_grid(sort_by='rmse', decreasing=False)[0]
best_model.varimp(use_pandas=True)

  variable  relative_importance  scaled_importance  percentage
0        a         33525.109375           1.000000    0.516545
1        b         23314.916016           0.695446    0.359230
2        c          8062.515137           0.240492    0.124225
</code></pre>
|
<python><debugging><h2o><hyperparameters>
|
2023-02-16 14:39:06
| 0
| 488
|
whitepanda
|
75,473,766
| 14,256,643
|
python fastapi can't create multiple variation object in database for parent product
|
<p>I am trying to create multiple sizes and colors for each of my parent products. When I send a POST request using the FastAPI Swagger docs, I get this error:</p>
<pre><code>{
"detail": [
{
"loc": [
"body",
"variations",
0
],
"msg": "value is not a valid dict",
"type": "type_error.dict"
}
]
}
</code></pre>
<p><strong>here is my code</strong></p>
<pre><code>class Variation(BaseModel):
    size: str = None
    color: str = None

@dataclass
class Product():
    title: str = Form()
    variations: List[Variation] = Form()

@router.post("/main_product", status_code=status.HTTP_200_OK)
async def create_main_product(product: Product = Depends(), current_user: get_current_user = Depends(), db: Session = Depends(get_db)):
    create_parent_product = models.ParentProduct(
        product_title=product.title
    )
    db.add(create_parent_product)
    db.commit()
    db.refresh(create_parent_product)
    for variation in product.variations:
        variation_str = f"size:'{variation.size}',color:'{variation.color}'"
        variation_product = models.VariationProduct(parent_product_id=create_parent_product.id, variations=variation_str)
        db.add(variation_product)
        db.commit()
        db.refresh(variation_product)
</code></pre>
|
<python><python-3.x><fastapi>
|
2023-02-16 14:34:52
| 0
| 1,647
|
boyenec
|
75,473,633
| 11,402,025
|
Database Connection Timeout Issue
|
<p>I have Python code that takes a set of inputs (possibly 250) and then looks up the values corresponding to each input in 2 databases.</p>
<pre><code>engine = create_engine(database_url1)
result = {}
for input in input_list:
    sql_query = f"""SELECT * FROM table1 WHERE name = '{input}' LIMIT 1;"""
    db_results = engine.execute(sql_query).fetchall()
    if len(db_results) <= 0:
        sql_query = f"""SELECT * FROM table2 WHERE name = '{input}' LIMIT 1;"""
        db_results = engine.execute(sql_query).fetchall()
    if len(db_results) <= 0:
        sql_query = f"""SELECT * FROM table3 WHERE name = '{input}' LIMIT 1;"""
        db_results = engine.execute(sql_query).fetchall()
    if len(db_results) <= 0:
        engine_database2 = create_engine(database_url2)
        sql_query = f"""SELECT * FROM table4 WHERE name = '{input}' LIMIT 1;"""
        db_results = engine_database2.execute(sql_query).fetchall()
</code></pre>
<p>I end up getting timeout issues for large sets of inputs.
What is the best way/practice to approach this, so that I am not opening and closing multiple database connections and performance stays good with a large set of inputs?</p>
<p>Should I handle the inputs in batches? Will it increase the performance?</p>
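One common pattern (a sketch; the table and column names below are placeholders) is to reuse a single engine and query in batches with an IN clause instead of one round-trip per input:

```python
def batched(items, size):
    """Yield successive slices of `items` with at most `size` elements each."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# hypothetical follow-up with SQLAlchemy (placeholder names; parameters
# bound by the driver rather than interpolated with f-strings):
# for batch in batched(input_list, 50):
#     placeholders = ', '.join(['%s'] * len(batch))
#     sql = f"SELECT * FROM table1 WHERE name IN ({placeholders})"
#     db_results = engine.execute(sql, tuple(batch)).fetchall()

print(list(batched(['a', 'b', 'c', 'd', 'e'], 2)))  # → [['a', 'b'], ['c', 'd'], ['e']]
```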
|
<python><mysql><sql><query-optimization><database-performance>
|
2023-02-16 14:25:04
| 0
| 1,712
|
Tanu
|
75,473,318
| 8,881,495
|
Running gpt-neo on GPU
|
<p>The following code runs the <code>EleutherAI/gpt-neo-1.3B</code> model. The model runs on the CPU, but I don't understand why it does not use my GPU. Did I miss something?</p>
<pre><code>from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
prompt = ("What is the capital of France?")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=50 )
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print (gen_text)
</code></pre>
<p>By the way, here is the output of the <code>nvidia-smi</code> command</p>
<pre><code>Thu Feb 16 14:58:28 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.108.03 Driver Version: 510.108.03 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:73:00.0 On | N/A |
| 30% 31C P8 34W / 350W | 814MiB / 24576MiB | 22% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA RTX A5000 Off | 00000000:A6:00.0 Off | Off |
| 30% 31C P8 16W / 230W | 8MiB / 24564MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 3484 G /usr/lib/xorg/Xorg 378MiB |
| 0 N/A N/A 3660 G /usr/bin/gnome-shell 62MiB |
| 0 N/A N/A 4364 G ...662097787256072160,131072 225MiB |
| 0 N/A N/A 37532 G ...6/usr/lib/firefox/firefox 142MiB |
| 1 N/A N/A 3484 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+
</code></pre>
|
<python><gpu>
|
2023-02-16 13:59:38
| 1
| 3,655
|
Fifi
|
75,473,244
| 11,913,986
|
Pyspark groupby and merge column values to get aggregated frequency of row count
|
<p>I have a spark dataframe like:</p>
<p>df:</p>
<pre><code>id name subregion streak
8103178 A Western Asia 1
8103178 A Southern Asia 1
8344002 B North America 1
5225081 B South America 1
5225081 C Eastern Europe 1
5225081 D Northern Europe 1
5225081 E Southern Europe 1
5225081 F South-Eastern Asia 1
5225081 G Southern Africa 1
5225081 H Central America 1
5225081 I Northern Africa 1
5225081 I Caribbean 2
5225081 J Eastern Asia 2
8103178 A Western Asia 3
8103178 A Southern Asia 4
8344002 B North America 5
5225081 B South America 3
5225081 C Eastern Europe 4
5225081 D Northern Europe 3
5225081 E Southern Europe 4
5225081 F South-Eastern Asia 5
5225081 G Southern Africa 3
5225081 H Central America 4
5225081 I Northern Africa 5
5225081 I Caribbean 6
5225081 J Eastern Asia 3
</code></pre>
<p>And I have a distribution of region lists like this:</p>
<pre><code>'APAC'=['Southern Asia', 'South-Eastern Asia', 'Central Asia']
'EU/UK' = ['Western Europe', 'Eastern Europe', 'Northern Europe', 'Southern Europe']
'MEA'= ['Western Asia', 'Western Africa', 'Southern Africa', 'Northern Africa', 'Eastern Asia', 'Eastern Africa']
'NA'=['North America']
'LATAM' = ['Caribbean', 'Central America', 'South America']
</code></pre>
<p>What I want to do is count, for every entry in 'subregion' of df, the number of rows with 'streak' == 1 and with 'streak' >= 3, and write it to a new column while mapping the subregions to regions, like this:</p>
<p>result:</p>
<p><a href="https://i.sstatic.net/0JDwx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0JDwx.png" alt="enter image description here" /></a></p>
<p>I am new to pyspark. I can simply do a groupby and an aggregation based on the streak column and subregion, but for mapping the subregions to regions and creating the result dataframe I am looking for insights. Any help is appreciated.</p>
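A possible starting point (names here are assumptions): invert the region lists into a subregion-to-region dict in plain Python, then expose it to Spark, for example via `F.create_map` or a join on a small mapping DataFrame:

```python
regions = {
    'APAC':  ['Southern Asia', 'South-Eastern Asia', 'Central Asia'],
    'NA':    ['North America'],
    'LATAM': ['Caribbean', 'Central America', 'South America'],
}

# invert: subregion -> region
sub_to_region = {sub: region for region, subs in regions.items() for sub in subs}
print(sub_to_region['Caribbean'])  # → LATAM

# hypothetical PySpark follow-up (untested sketch):
# from itertools import chain
# from pyspark.sql import functions as F
# mapping = F.create_map([F.lit(x) for x in chain(*sub_to_region.items())])
# df = df.withColumn('region', mapping[F.col('subregion')])
```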
|
<python><pandas><list><dataframe><pyspark>
|
2023-02-16 13:53:29
| 1
| 739
|
Strayhorn
|
75,473,164
| 19,325,656
|
subprocess.CalledProcessError: Command 'netsh advfirewall returned non-zero exit status 1
|
<p>Hi guys, I'm trying to edit firewall rules via an executable script (.exe).</p>
<p>This is the basic code snippet that I'm running:</p>
<pre class="lang-py prettyprint-override"><code>command = "netsh advfirewall firewall add rule name=dupa8 action=allow profile=any localip=192.162.250.0/24 remoteip=192.162.250.0/24 dir=in"
encoding = "cp949"
out = str(subprocess.check_output(command).decode(encoding=encoding, errors="ignore"))
print(f'Out result {out}')
</code></pre>
<p>This works okay when I run it as a script or in Python via cmd, but when I compile it to an exe via pyinstaller I get:</p>
<p><code>subprocess.CalledProcessError: Command 'netsh advfirewall ...' returned non-zero exit status 1.</code></p>
|
<python><firewall><netsh>
|
2023-02-16 13:46:15
| 0
| 471
|
rafaelHTML
|
75,473,145
| 3,030,875
|
How to configure pip so that pip install is always called with argument --user?
|
<p>I'm configuring a jupyterlab single-user instance. The users will need to install Python packages using</p>
<pre><code>pip install <package-name>
</code></pre>
<p>I want to configure pip so that <code>pip install</code> is always called with argument <code>--user</code>, even if it is invoked without <code>--user</code>. Is it possible to achieve this? If so, how?</p>
<p>The reason behind it is that the $HOME directory is mounted on a persistent volume, and I want the user to still have its packages installed after restarting the jupyterlab server. By installing packages using <code>pip install --user <package-name></code>, the packages will be stored in <code>$HOME/.local/lib/python${PY_MAJOR}.${PY_MINOR}/site-packages</code> and will therefore persist even after a server restart.</p>
|
<python><pip><persistence><jupyter-lab>
|
2023-02-16 13:44:32
| 2
| 1,778
|
Brainless
|
75,472,993
| 482,819
|
Proper way to specialize generic classes in Python
|
<p>I am having a hard time using Python typing annotations when dealing with generics and compound types.</p>
<p>Consider the following class:</p>
<pre class="lang-py prettyprint-override"><code>import typing as ty

T = ty.TypeVar("T")
CT = tuple[bool, T, str]

class MyClass(ty.Generic[T]):
    internal1: tuple[bool, T, str]
    internal2: CT[T]
    internal3: CT[float]

class DerivedMyClass(MyClass[float]):
    pass

print(ty.get_type_hints(MyClass))
print(ty.get_type_hints(DerivedMyClass))
</code></pre>
<p>where the type of <code>internal</code> 1, 2, 3 is actually a much more lengthy type annotation. The output is:</p>
<pre class="lang-py prettyprint-override"><code>{
'internal1': tuple[bool, ~T, str],
'internal2': tuple[bool, ~T, str],
'internal3': tuple[bool, float, str]
}
{
'internal1': tuple[bool, ~T, str],
'internal2': tuple[bool, ~T, str],
'internal3': tuple[bool, float, str]
}
</code></pre>
<p>Is there a way to make <code>CT</code> aware of the type in the derived class?</p>
|
<python><generics><typing><specialized-annotation>
|
2023-02-16 13:28:34
| 1
| 6,143
|
Hernan
|
75,472,971
| 14,301,545
|
Python bytes to binary string - how 4 bytes can be 29 bits?
|
<p>I need to read time data from sensor. Here are the instructions from manual:</p>
<p><a href="https://i.sstatic.net/3Udmf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3Udmf.png" alt="enter image description here" /></a></p>
<p>I have written code in Python, but I feel like there should be some better way:</p>
<pre><code># 1
data_bytes = b'\x43\x32\x21\x10'
print('data_bytes: ', data_bytes, len(data_bytes)) # why it changes b'\x43\x32\x21\x10' to b'C2!\x10' ???
# 2
data_binary = bin(int.from_bytes(data_bytes, 'little')) # remove '0b' from string
print('data_binary: ', data_binary, len(data_binary))
data_binary = data_binary[2:]
print('data_binary: ', data_binary, len(data_binary)) # should be: 0001 0000 0010 0001 0011 0010 0100 0011, 32
# 3
sec = data_binary[0:-20]
print(sec, len(sec)) # should be: 0001 0000 0010, 12
sec = int(sec, 2)
print(sec)
usec = data_binary[-20:]
print(usec, len(usec)) # 0001 0011 0010 0100 0011, 20
usec = int(usec, 2)
print(usec)
# 4
print('time: ', sec + usec/1000000) # should be: 258.078403
</code></pre>
<p>Results:</p>
<pre><code>data_bytes: b'C2!\x10' 4
data_binary: 0b10000001000010011001001000011 31
data_binary: 10000001000010011001001000011 29
100000010 9
258
00010011001001000011 20
78403
time: 258.078403
</code></pre>
<p>I have questions:</p>
<ol>
<li>Why does Python change b'\x43\x32\x21\x10' to b'C2!\x10'?</li>
<li>Why is the length of the message 29 bits and not 32?</li>
<li>Is it possible to do this in a better/cleaner/faster way?</li>
</ol>
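For question 3, a shorter route is to decode the 4 bytes as one little-endian 32-bit integer with `struct` and split it with shifts and masks, which also sidesteps the dropped leading zeros of `bin()`:

```python
import struct

raw = b'\x43\x32\x21\x10'
(word,) = struct.unpack('<I', raw)  # little-endian unsigned 32-bit

sec = word >> 20          # top 12 bits
usec = word & 0xFFFFF     # low 20 bits
print(sec + usec / 1_000_000)  # → 258.078403
```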
<p>Thanks!</p>
|
<python><binary><byte>
|
2023-02-16 13:27:14
| 4
| 369
|
dany
|
75,472,814
| 2,536,951
|
Python Bokeh Not Changing the Colour of the Text when Updating
|
<p>I am trying to update the colour of a text on a plot I am creating.</p>
<p><a href="https://i.sstatic.net/REgYW.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/REgYW.jpg" alt="enter image description here" /></a></p>
<p>The code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>
plot = figure(
    x_axis_location="above", tools="hover,save",
    x_range=list(reversed(names)), y_range=names,
    tooltips=[('names', '@yname, @xname'), ('count', '@count')]
)

plot.width = 4500
plot.height = 4500
plot.grid.grid_line_color = 'pink'
plot.axis.axis_line_color = 'pink'
plot.axis.major_tick_line_color = 'white'
plot.axis.major_tick_line_color = None
plot.axis.major_label_text_font_size = "22px"
plot.axis.major_label_standoff = 3
plot.xaxis.major_label_orientation = np.pi/2

plot.rect('xname', 'yname', 1.0, 1.0, source=data,
          color='colors', alpha='alphas', line_color=None,
          hover_line_color='pink', hover_color='colors')

save(plot, title='plot.html', filename="plot.html")
</code></pre>
<p>According to the documentation it should be pretty simple:</p>
<pre class="lang-py prettyprint-override"><code>
plot.axis.axis_label_text_color = 'white'
</code></pre>
<p>However, Bokeh refuses to change the color of any of the axis text. I'm pretty befuddled about how to get the axis labels to be white. What is going on here?</p>
|
<python><bokeh>
|
2023-02-16 13:14:05
| 1
| 597
|
Suliman Sharif
|
75,472,690
| 1,907,765
|
Python GUI - "<Return>" event only seems to run once
|
<p>I'm trying to create a Tkinter GUI that will take a value from a textbox and put another value in a second textbox when the user presses Enter. So far I'm stuck on binding the event correctly. Here is the code:</p>
<pre><code># Event handler functions
def get_input(event):
    print("hello world")
    input_str = tbx_input.get("1.0", "end-1c")
    tbx_output.delete("1.0", "end")
    tbx_output.insert("1.0", input_str)
root = tk.Tk()
# Widget definitions
tbx_input = tk.Text(
    fg="black",
    bg="white",
    width=40,
    height=1,
)
tbx_output = tk.Text(
    fg="black",
    bg="white",
    width=40,
    height=1,
)
# Event binding
root.bind("<Return>", get_input)
# Display logic
tbx_input.pack()
tbx_output.pack()
root.mainloop()
</code></pre>
<p>Here are the steps I take in the GUI.</p>
<ol>
<li>In the first textbox, type something:</li>
</ol>
<p><a href="https://i.sstatic.net/aSs7l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aSs7l.png" alt="enter image description here" /></a></p>
<ol start="2">
<li>Press Enter, and expect the text I typed to appear in the second textbox (As expected)</li>
</ol>
<p><a href="https://i.sstatic.net/fUeXp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fUeXp.png" alt="enter image description here" /></a></p>
<ol start="3">
<li>Then I type something else in the first textbox</li>
</ol>
<p><a href="https://i.sstatic.net/vKhJ0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vKhJ0.png" alt="enter image description here" /></a></p>
<ol start="4">
<li>And I expect the value to appear in the 2nd textbox. It does not.</li>
</ol>
<p><a href="https://i.sstatic.net/WdKdE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WdKdE.png" alt="enter image description here" /></a></p>
<p>However the terminal output shows that the event function is being triggered, as "hello world" is printed twice:</p>
<pre><code>$ python testapp.py
hello world
hello world
</code></pre>
<p>So while "hello world" gets printed each time, the "delete" and "insert" calls to the second textbox only take effect the first time the function is called.</p>
<p>Why is this?</p>
|
<python><tkinter>
|
2023-02-16 13:02:50
| 1
| 2,527
|
Lou
|
75,472,688
| 2,964,170
|
How can we sync different databases and application in django
|
<p>How can we access another database's tables from a Django application? Here the 'default' database belongs to the main application, and 'cl_db' belongs to another application, developed in ASP.NET but using the same database server. How can we sync the 'cl_db' database with the 'default' database?</p>
<pre><code>DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'fail_over',
        'USER': 'SomeUser',
        'PASSWORD': 'SomePassword',
        'HOST': '127.0.0.1',
        'PORT': '',
    },
    'cl_db': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'cl_dev',
        'USER': 'SomeUser',
        'PASSWORD': 'SomePassword',
        'HOST': '127.0.0.1',
        'PORT': '',
    },
}
</code></pre>
<pre><code>DATABASE_ROUTERS = ['app.dbrouters.AuthRounter']
</code></pre>
<pre><code>class AuthRouter:
    """
    A router to control all database operations on models in the
    auth and contenttypes applications.
    """
    route_app_labels = {'auth', 'contenttypes'}

    def db_for_read(self, model, **hints):
        """
        Attempts to read auth and contenttypes models go to auth_db.
        """
        if model._meta.app_label in self.route_app_labels:
            return 'default'
        return None

    def db_for_write(self, model, **hints):
        """
        Attempts to write auth and contenttypes models go to auth_db.
        """
        if model._meta.app_label in self.route_app_labels:
            return 'default'
        return None

    def allow_relation(self, obj1, obj2, **hints):
        """
        Allow relations if a model in the auth or contenttypes apps is
        involved.
        """
        if (
            obj1._meta.app_label in self.route_app_labels or
            obj2._meta.app_label in self.route_app_labels
        ):
            return True
        return None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        """
        Make sure the auth and contenttypes apps only appear in the
        'auth_db' database.
        """
        if app_label in self.route_app_labels:
            return db == 'default'
        return None
</code></pre>
<p>models.py</p>
<pre><code>class usermaster(models.Model):
    userid = models.AutoField(primary_key=True, unique=True)

    class Meta:
        app_label = 'user_master'
</code></pre>
|
<python><django><postgresql><django-rest-framework><postgis>
|
2023-02-16 13:02:47
| 0
| 425
|
Vas
|
75,472,405
| 20,240,835
|
count interval length with overlap in a larger data
|
<p>I have a list of tuples representing genomic intervals, where each tuple contains the start and end coordinates of the interval. I want to compute the total length of all the intervals, but I need to account for overlapping regions only once. The problem is that the intervals may not be sorted, and some intervals may overlap with multiple others, making it difficult to determine which overlaps should be counted. For example, consider the following list of intervals:</p>
<pre><code>#sorted data by start pos
[(3, 9), (3, 5), (6, 9)]
</code></pre>
<p>The last interval overlaps with the first one, but not with the second one, and so the correct total length should be 6 (not 9 or 10). How can I write a Python code to solve this problem in an efficient way?</p>
<p>Note that this is a large dataset and may contain a lot of intervals.</p>
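A standard sweep handles this in O(n log n): sort by start, merge overlapping runs, then sum the merged lengths — a minimal sketch:

```python
def covered_length(intervals):
    """Total length covered by the intervals, counting overlapping regions once."""
    total = 0
    cur_start = cur_end = None
    for start, end in sorted(intervals):
        if cur_end is None or start > cur_end:   # new disjoint run begins
            if cur_end is not None:
                total += cur_end - cur_start
            cur_start, cur_end = start, end
        else:                                    # overlap: extend the current run
            cur_end = max(cur_end, end)
    if cur_end is not None:
        total += cur_end - cur_start
    return total

print(covered_length([(3, 9), (3, 5), (6, 9)]))  # → 6
```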
|
<python><dataframe><algorithm><sorting>
|
2023-02-16 12:39:02
| 1
| 689
|
zhang
|
75,472,376
| 2,254,024
|
Postgres database includes a table but I can not see any table when I connect via command line in Docker container
|
<p>I try to keep postgresql in a separate service. This is the Dockerfile</p>
<pre><code>FROM python:3.10.0-alpine
WORKDIR /app
RUN pip install --upgrade pip && \
pip install poetry
COPY poetry.lock pyproject.toml /app/
RUN poetry install
COPY . /app/
RUN python manage.py makemigrations kapp && \
python manage.py migrate
CMD ["poetry", "run", "python", "manage.py", "runserver", "0.0.0.0:8000"]
</code></pre>
<p>and docker-compose:</p>
<pre><code>version: "3.8"

services:
  db:
    container_name: postgresql_db
    image: postgres
    restart: always
    ports:
      - 54321:5432
    environment:
      - POSTGRES_USER=kuser
      - POSTGRES_PASSWORD=kpass
      - POSTGRES_DB=kdb
    networks:
      - backend
  app:
    container_name: kapp
    build: .
    ports:
      - 8000:8000
    depends_on:
      - db
    restart: always
    volumes:
      - .:/kapp
</code></pre>
<p>My <code>models.py</code>:</p>
<pre><code>class Ktable(models.Model):
    class Meta:
        app_label = "kapp"

    id = models.AutoField(primary_key=True)
    number = models.PositiveIntegerField(unique=True)
</code></pre>
<p>How I insert data:</p>
<pre><code>Ktable.objects.update_or_create(
    number=number,
)
</code></pre>
<p>and how I fetch data from Ktable:</p>
<pre><code>res = Ktable.objects.all()
</code></pre>
<p>The kapp works fine and res contains what I expected. However, when I connect to the kdb database in the Docker container (via Docker desktop), I do not see any table in the public schema.</p>
<pre><code># psql -U kuser -d kdb
kdb=# \dt
Did not find any relations.
</code></pre>
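For context, Django only writes to this Postgres service if its settings point at it; a common pitfall is that `DATABASES` defaults to SQLite or uses `localhost` instead of the compose service name. The fragment below is an assumption about what the project's `settings.py` would need (credentials reused from the compose file; the host must be the service name `db`):

```python
# settings.py (hypothetical fragment) -- HOST must be the compose service
# name "db"; with localhost or the default SQLite backend, data never
# reaches the postgresql_db container
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "kdb",
        "USER": "kuser",
        "PASSWORD": "kpass",
        "HOST": "db",
        "PORT": "5432",
    }
}
```

Note also that the Dockerfile shown runs `makemigrations`/`migrate` at image build time, when the `db` service is not yet reachable, so those migrations cannot have landed in that Postgres instance.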
|
<python><postgresql><docker><docker-compose><dockerfile>
|
2023-02-16 12:36:32
| 0
| 422
|
Karimai
|
75,472,350
| 10,667,216
|
How to resolve "ERROR: Could not build wheels for matplotlib, which is required to install pyproject.toml-based projects" with Python 3.9.12?
|
<p>I am encountering the following error when attempting to install matplotlib in an alpine Docker image:</p>
<pre class="lang-none prettyprint-override"><code> error: Failed to download any of the following: ['http://www.qhull.org/download/qhull-2020-src-8.0.2.tgz']. Please download one of these urls and extract it into 'build/' at the top-level of the source repository.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for matplotlib
Failed to build matplotlib
ERROR: Could not build wheels for matplotlib, which is required to install pyproject.toml-based projects
</code></pre>
<p>My Python version is 3.9.12. How can I resolve this error?</p>
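The error output itself suggests a workaround: download the qhull source tarball manually and extract it into `build/` at the top level of the matplotlib source tree before pip builds it. A hedged Dockerfile-style sketch (URL and paths taken from the error message; it assumes the build network can reach qhull.org and that `wget`/`tar` are available in the image, which may require `apk add` on alpine):

```dockerfile
# inside the matplotlib source checkout, before the pip build step
RUN mkdir -p build \
    && wget http://www.qhull.org/download/qhull-2020-src-8.0.2.tgz -O /tmp/qhull.tgz \
    && tar -xzf /tmp/qhull.tgz -C build/
```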
|
<python><matplotlib><alpine-linux>
|
2023-02-16 12:34:25
| 2
| 483
|
Davood
|
75,472,331
| 6,357,916
|
Getting error "'list' object is not callable" for authentication_classes
|
<p>I have the following method:</p>
<pre><code>class AbcViewSet(viewsets.ModelViewSet):
@action(detail=False, permission_classes=(IsOwnerOrReadOnly,))
@debugger_queries
def get_xyz(self, request, pk=None):
# ...
</code></pre>
<p>Inside this method, <code>request.user</code> was always <code>AnonymousUser</code>. I suspected this was because I had not specified any kind of authentication for the method. I skimmed through the codebase, found other developers using a decorator, and so tried adding <code>@authentication_classes([TokenAuthentication,])</code> as follows:</p>
<pre><code>class AbcViewSet(viewsets.ModelViewSet):
@action(detail=False, permission_classes=(IsOwnerOrReadOnly,))
@debugger_queries
@authentication_classes([TokenAuthentication,])
def get_xyz(self, request, pk=None):
# ...
</code></pre>
<p>But it started giving me a <code>'list' object is not callable</code> error on the newly added line. I was expecting it to work, as we can see similar code here: <a href="https://github.com/encode/django-rest-framework/issues/992" rel="nofollow noreferrer">1</a>, <a href="https://stackoverflow.com/questions/56526740/must-i-include-authentication-classes-attribute-in-my-django-class-view">2</a>. The Django REST framework docs also describe it <a href="https://www.django-rest-framework.org/api-guide/views/#api-policy-decorators" rel="nofollow noreferrer">here</a>.</p>
<p>Is it that they are used with function-based views and are not allowed with viewsets? What am I missing here?</p>
<p><strong>PS</strong>:</p>
<p>My settings.py does have <code>TokenAuthentication</code> specified:</p>
<pre><code>REST_FRAMEWORK = {
# ...
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.SessionAuthentication',
'rest_framework.authentication.TokenAuthentication',
),
# ...
}
</code></pre>
<p><strong>PS2:</strong> I also tried the following, though it's somewhat unrelated to the error:</p>
<pre><code>@authentication_classes((TokenAuthentication))
</code></pre>
<p>and</p>
<pre><code>@authentication_classes((TokenAuthentication,))
</code></pre>
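As a hedged illustration of how this exact TypeError can arise: if the name `authentication_classes` resolves to a list (for example, a class attribute shadowing the `rest_framework.decorators` import) rather than the decorator function, then `@authentication_classes([...])` calls the list itself. A stdlib-only sketch (the names here are stand-ins, not DRF internals):

```python
class TokenAuthentication:  # stand-in for the real class
    pass

authentication_classes = []  # the name bound to a list instead of the decorator

err = None
try:
    @authentication_classes([TokenAuthentication])  # calls the list itself
    def get_xyz(request):
        pass
except TypeError as exc:
    err = str(exc)

print(err)  # 'list' object is not callable
```

In a ViewSet, authentication is normally configured via the `authentication_classes` class attribute, while the decorator form is intended for `@api_view` function-based views, per the DRF docs linked in the question.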
|
<python><django><django-rest-framework><django-views><django-viewsets>
|
2023-02-16 12:33:01
| 1
| 3,029
|
MsA
|
75,472,184
| 16,981,638
|
How to use the scrapy package with Jupyter Notebook
|
<p>I'm trying to learn web scraping/crawling and tried to run the code below in a Jupyter Notebook, but it doesn't show any output. Can anyone guide me on how to use the Scrapy package in a Jupyter notebook?</p>
<p><strong>The code:</strong></p>
<pre><code>import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
class BooksCrawlSpider(CrawlSpider):
name = 'books_crawl'
allowed_domains = ['books.toscrape.com']
start_urls = ['https://books.toscrape.com/catalogue/category/books/sequential-art_5/page-1.html']
le_book_details = LinkExtractor(restrict_css='h3 > a')
le_next = LinkExtractor(restrict_css='.next > a') # next_button
le_cats = LinkExtractor(restrict_css='.side_categories > ul > li > ul > li a') # Categories
rule_book_details = Rule(le_book_details, callback='parse_item', follow=False)
rule_next = Rule(le_next, follow=True)
rule_cats = Rule(le_cats, follow=True)
rules = (
rule_book_details,
rule_next,
rule_cats
)
def parse_item(self, response):
yield {
'Title': response.css('h1 ::text').get(),
'Category': response.xpath('//ul[@class="breadcrumb"]/li[last()-1]/a/text()').get(),
'Link': response.url
}
</code></pre>
<p><strong>The final result shows no output:</strong></p>
<p><a href="https://i.sstatic.net/XUQdI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XUQdI.png" alt="enter image description here" /></a></p>
|
<python><pandas><web-scraping><scrapy><jupyter>
|
2023-02-16 12:21:08
| 1
| 303
|
Mahmoud Badr
|
75,472,089
| 6,134,218
|
Python unittest - assert called with is not working
|
<p>I make a GitLab API call via <code>put_request</code>, and in my test I want to mock this call and assert that it was called with the specific input.</p>
<p>The problem is that I make two put requests with different input in my method, which causes problems in my test.</p>
<p>Code:</p>
<pre><code>def set_project_settings(project_id: int) -> None:
gitlabapi.put_request(("projects"), project_id, {"merge_pipelines_enabled": "true"})
gitlabapi.put_request(("projects"), project_id, {"merge_trains_enabled": "true"})
</code></pre>
<p>Test:</p>
<pre><code> @patch('python_check_gitlab_module.GitlabApi.put_request')
def test_set_merged_results_pipeline_settings(self, api_mock)-> None:
project_id = 100
uut.set_project_settings(project_id)
api_mock.assert_called_with(
("projects"), project_id, {"merge_pipelines_enabled": "true"})
api_mock.assert_called_with(
("projects"), project_id, {"merge_trains_enabled": "true"})
</code></pre>
<p>Error:</p>
<pre><code>AssertionError: expected call not found.
</code></pre>
<p>FYI: if I make only one <code>put_request</code> call in my <code>set_project_settings</code> method and test with <code>assert_called_once_with</code>, then it works.</p>
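For context on the failing assertion: `assert_called_with` only checks the most recent call, so the first of the two assertions can never pass once a second call has happened. `unittest.mock` offers `assert_any_call` and `assert_has_calls` for this; a minimal stdlib sketch:

```python
from unittest.mock import Mock, call

api_mock = Mock()
api_mock("projects", 100, {"merge_pipelines_enabled": "true"})
api_mock("projects", 100, {"merge_trains_enabled": "true"})

# checks that these calls happened, in this order
api_mock.assert_has_calls([
    call("projects", 100, {"merge_pipelines_enabled": "true"}),
    call("projects", 100, {"merge_trains_enabled": "true"}),
])

# or check each call individually, at any position in the call list
api_mock.assert_any_call("projects", 100, {"merge_pipelines_enabled": "true"})
api_mock.assert_any_call("projects", 100, {"merge_trains_enabled": "true"})
```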
|
<python><pytest><python-unittest><gitlab-api>
|
2023-02-16 12:13:13
| 1
| 742
|
Shalomi90
|
75,471,806
| 5,931,672
|
Understanding Plotly Time Difference units
|
<p>So, I have a problem similar to <a href="https://stackoverflow.com/questions/66635281/fitting-timedelta-on-plotly-y-axis">this</a> question. I have a DataFrame with a column <code>'diff'</code> and a column <code>'date'</code> with the following dtypes:</p>
<pre><code>delta_df['diff'].dtype
>>> dtype('<m8[ns]')
delta_df['date'].dtype
>>> datetime64[ns, UTC]
</code></pre>
<p>According to <a href="https://stackoverflow.com/a/29218694/5931672">this</a> answer, these are (kind of) equivalent. However, when I plot using plotly (I tried histogram and scatter), the <code>'diff'</code> axis has weird units, something like <code>2T, 2.5T, 3T, etc.</code> What is this? The data in the <code>'diff'</code> column looks like <code>0 days 00:29:36.000001</code>, so I don't understand what is happening (the <code>'date'</code> column looks like <code>2018-06-11 01:04:25.000005+00:00</code>).</p>
<p>BTW, the diff column was generated using <code>df['date'].diff()</code>.</p>
<p>So my question is:</p>
<ol>
<li>What is this T? Is it a unit chosen by plotly, like 30 mins, so that 2T is 1 hour? If so, how can I check the value of the chosen T?</li>
<li>Maybe more importantly, how can I plot with the axis formatted as the values appear in the column, so it's easier to read?</li>
</ol>
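One plausible reading (an assumption, not confirmed by the question): plotly may be treating the `timedelta64[ns]` values as raw nanosecond counts and applying an SI suffix, where `T` is tera (10^12). That would be consistent with the data shown, since `0 days 00:29:36` ≈ 1.78×10^12 ns ≈ `1.78T`. Either way, a common workaround for the second question is to convert the timedeltas to a plain numeric unit such as minutes before plotting, so the axis shows ordinary numbers. A stdlib sketch of the conversion:

```python
from datetime import timedelta

diff = timedelta(minutes=29, seconds=36)
minutes = diff.total_seconds() / 60  # plot this numeric column instead
print(round(minutes, 2))  # 29.6
```

With pandas this would be a vectorized `df['diff'].dt.total_seconds() / 60` over the whole column.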
|
<python><pandas><plotly>
|
2023-02-16 11:46:39
| 1
| 4,192
|
J Agustin Barrachina
|
75,471,704
| 9,909,598
|
Masking a polars dataframe for complex operations
|
<p>If I have a polars Dataframe and want to perform masked operations, I currently see two options:</p>
<pre class="lang-py prettyprint-override"><code># create data
df = pl.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], schema = ['a', 'b']).lazy()
# create a second dataframe for added fun
df2 = pl.DataFrame([[8, 6, 7, 5], [15, 16, 17, 18]], schema=["b", "d"]).lazy()
# define mask
mask = pl.col('a').is_between(2, 3)
</code></pre>
<h2>Option 1: create filtered dataframe, perform operations and join back to the original dataframe</h2>
<pre class="lang-py prettyprint-override"><code>masked_df = df.filter(mask)
masked_df = masked_df.with_columns( # calculate some columns
[
pl.col("a").sin().alias("new_1"),
pl.col("a").cos().alias("new_2"),
(pl.col("a") / pl.col("b")).alias("new_3"),
]
).join( # throw a join into the mix
df2, on="b", how="left"
)
res = df.join(masked_df, how="left", on=["a", "b"])
print(res.collect())
</code></pre>
<h2>Option 2: mask each operation individually</h2>
<pre class="lang-py prettyprint-override"><code>res = df.with_columns( # calculate some columns - we have to add `pl.when(mask).then()` to each column now
[
pl.when(mask).then(pl.col("a").sin()).alias("new_1"),
pl.when(mask).then(pl.col("a").cos()).alias("new_2"),
pl.when(mask).then(pl.col("a") / pl.col("b")).alias("new_3"),
]
).join( # we have to construct a convoluted back-and-forth join to apply the mask to the join
df2.join(df.filter(mask), on="b", how="semi"), on="b", how="left"
)
print(res.collect())
</code></pre>
<h2>Output:</h2>
<pre><code>shape: (4, 6)
┌─────┬─────┬──────────┬───────────┬──────────┬──────┐
│ a ┆ b ┆ new_1 ┆ new_2 ┆ new_3 ┆ d │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ f64 ┆ f64 ┆ f64 ┆ i64 │
╞═════╪═════╪══════════╪═══════════╪══════════╪══════╡
│ 1 ┆ 5 ┆ null ┆ null ┆ null ┆ null │
│ 2 ┆ 6 ┆ 0.909297 ┆ -0.416147 ┆ 0.333333 ┆ 16 │
│ 3 ┆ 7 ┆ 0.14112 ┆ -0.989992 ┆ 0.428571 ┆ 17 │
│ 4 ┆ 8 ┆ null ┆ null ┆ null ┆ null │
└─────┴─────┴──────────┴───────────┴──────────┴──────┘
</code></pre>
<p>Most of the time, option 2 will be faster, but it gets pretty verbose and is generally harder to read than option 1 when any sort of complexity is involved.</p>
<p>Is there a way to apply a mask more generically to cover multiple subsequent operations?</p>
|
<python><dataframe><python-polars>
|
2023-02-16 11:37:01
| 2
| 451
|
DataWiz
|
75,471,591
| 4,183,498
|
Python list.sort(key=lambda x: ...) type hints
|
<p>I am sorting a list of dicts based on a key like below</p>
<pre><code>def my_function() -> list[dict]:
data: list[dict] = []
# Populate data ...
if condition:
data.sort(key=lambda x: x["position"])
return data
</code></pre>
<p>However, mypy complains about <code>Returning Any from function declared to return "Union[SupportsDunderLT[Any], SupportsDunderGT[Any]]"</code>. Is it possible to update the above snippet so that mypy doesn't raise a <code>no-any-return</code> error?</p>
<p><strong>EDIT</strong></p>
<p>Versions: Python 3.10.9 and mypy 1.0.0 (compiled: yes)</p>
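One way to silence this class of error (a sketch, under the assumption that `position` holds ints) is to make the key's type explicit, e.g. via `typing.cast`, so the lambda no longer returns `Any`:

```python
from typing import Any, cast

def my_function() -> list[dict]:
    data: list[dict[str, Any]] = [{"position": 2}, {"position": 1}]
    # cast narrows the Any coming out of the dict lookup to a sortable type
    data.sort(key=lambda x: cast(int, x["position"]))
    return data

print(my_function())  # [{'position': 1}, {'position': 2}]
```

Declaring a more precise element type than bare `dict` (e.g. a `TypedDict`) would achieve the same without the per-call cast.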
|
<python><lambda><types><type-hinting><mypy>
|
2023-02-16 11:26:30
| 1
| 10,009
|
Dušan Maďar
|
75,471,585
| 6,775,670
|
How to create a class which can be initialized by its own instance
|
<p><strong>Task:</strong></p>
<p>Implement some class that accepts at least one argument and can be either initialized by original data, or its own instance.</p>
<p><strong>Minimal example of usage:</strong></p>
<pre><code>arg = {} # whatever necessary for the real object
instance1 = NewClass(arg)
instance2 = NewClass(instance1)
assert instance2 is instance1 # or at least, ==
</code></pre>
<p><strong>More complex example of usage:</strong></p>
<pre><code>from typing import Mapping, Union
class NewClass:
"""
Incomplete
Should somehow act like described in the task
"""
def __init__(self, data: Mapping):
self.data = data
def cool_method(self):
assert isinstance(self.data, Mapping)
# do smth with self.data
return ...
...
class AnotherClass:
"""
Accepts both mappings and NewClass instances,
but needs NewClass internally
"""
def __init__(self, obj: Union[Mapping, NewClass]):
self.cool = NewClass(obj).cool_method()
...
</code></pre>
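One common pattern for the minimal example (a sketch, not the only possible design) is to intercept construction in `__new__` and hand back the existing instance unchanged:

```python
from typing import Mapping, Union

class NewClass:
    def __new__(cls, data: Union[Mapping, "NewClass"]):
        if isinstance(data, cls):
            return data  # reuse the existing instance
        return super().__new__(cls)

    def __init__(self, data: Union[Mapping, "NewClass"]) -> None:
        # __init__ runs again on the reused instance; skip re-initialisation
        if data is self:
            return
        self.data = data

arg: Mapping = {}
instance1 = NewClass(arg)
instance2 = NewClass(instance1)
assert instance2 is instance1
```

The `data is self` guard matters because Python still calls `__init__` on whatever `__new__` returns when it is an instance of the class.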
|
<python><oop>
|
2023-02-16 11:25:55
| 3
| 1,312
|
Nikolay Prokopyev
|
75,471,540
| 6,560,267
|
What is the use of PyArrow Tensor class?
|
<p>In the Arrow documentation there is a class named <code>Tensor</code> that is created from numpy ndarrays. However, the documentation is pretty sparse, and after playing with it a bit I haven't found a use case for it. For example, you can't construct a table with it:</p>
<pre class="lang-py prettyprint-override"><code>import pyarrow as pa
import numpy as np
x = np.random.normal(0, 1.5, size=(4, 3, 2))
T = pa.Tensor.from_numpy(x, dim_names="xyz")
# error
pa.table([pa.array([0, 1, 2, 3]), T], names=["f1", "f2"])
</code></pre>
<p>Nor is there a type for schemas and structs. So my question is: what is it there for? Can someone provide a simple example using them?</p>
<p>Here's a <a href="https://stackoverflow.com/q/46794345/6560267">related question</a> from over 5 years ago, but it asked about Parquet. While I'm interested in persisting these tensors, before that I should understand how to use them, and as of today, I don't.</p>
|
<python><pyarrow><apache-arrow>
|
2023-02-16 11:21:41
| 2
| 913
|
Adrian
|
75,471,448
| 12,814,680
|
List manipulation with indexes
|
<p>I have two lists.</p>
<p>List A :</p>
<pre><code>A = ["apple","cherry","pear","mango","banana","grape","kiwi","orange","pineapple"]
</code></pre>
<p>List B :</p>
<pre><code>B = [{"offset":0, "xx":789},{"offset":3, "xx":921},{"offset":6, "xx":89}]
</code></pre>
<p>The idea is to use the offset from each item in B as an index offset for setting the xx values in our results array.
For instance, this would be the expected result:</p>
<pre><code>C=[
{"fruit":"apple","xx":789},
{"fruit":"cherry","xx":789},
{"fruit":"pear","xx":789},
{"fruit":"mango","xx":921},
{"fruit":"banana","xx":921},
{"fruit":"grape","xx":921},
{"fruit":"kiwi","xx":89},
{"fruit":"orange","xx":89},
{"fruit":"pineapple","xx":89},
]
</code></pre>
<p>For example, B[0] has an "offset" of 0; this means that C items with index >= 0 will have an "xx" value of B[0]['xx']. Then B[1]['offset'] is 3, which sets a new "xx" value (B[1]['xx']) for the C items with index >= 3, and so on.</p>
<p>I am able to achieve a similar result using dataframes and pandas, but since the pandas library is quite heavy, I have been asked to do it without pandas.</p>
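A pandas-free sketch (assuming B is sorted by offset, as in the example): walk A once and advance through B whenever the next entry's offset is reached:

```python
A = ["apple", "cherry", "pear", "mango", "banana", "grape", "kiwi", "orange", "pineapple"]
B = [{"offset": 0, "xx": 789}, {"offset": 3, "xx": 921}, {"offset": 6, "xx": 89}]

C = []
j = 0  # index of the current B entry
for i, fruit in enumerate(A):
    # move to the next B entry once its offset is reached
    if j + 1 < len(B) and i >= B[j + 1]["offset"]:
        j += 1
    C.append({"fruit": fruit, "xx": B[j]["xx"]})

print(C[0], C[3], C[8])
```

This is a single O(len(A) + len(B)) pass, so it stays cheap even for large inputs.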
|
<python>
|
2023-02-16 11:13:48
| 3
| 499
|
JK2018
|
75,471,388
| 462,707
|
Sentry: rate limit errors sent to prevent depletion of error quota
|
<p>When an infrastructure incident happens, the application will start to generate thousands of occurrences of the same error. Is it possible to configure some kind of rate limiting or anything like that on the sentry client (or server) to avoid depleting the error quota?</p>
<p>I'm using Python, Django and Celery mostly.</p>
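As context for what client-side throttling could look like: the Python SDK's real `before_send` hook can drop events before they leave the process. The throttling logic below is a hypothetical illustration (the keying and window are assumptions, not a sentry-sdk feature):

```python
import time
from collections import defaultdict

MAX_EVENTS_PER_MINUTE = 10
_recent = defaultdict(list)  # error key -> timestamps of recently sent events

def before_send(event, hint):
    """Drop an event if the same error was already sent too often in the last minute."""
    key = str(event.get("exception") or event.get("logentry"))
    now = time.time()
    _recent[key] = [t for t in _recent[key] if now - t < 60]
    if len(_recent[key]) >= MAX_EVENTS_PER_MINUTE:
        return None  # returning None tells the SDK to discard the event
    _recent[key].append(now)
    return event

# wiring (assumption about your setup):
# sentry_sdk.init(dsn=..., before_send=before_send)
```

Server-side, Sentry also supports per-project rate limits and spike protection, which are worth combining with any client-side filter.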
|
<python><django><celery><sentry>
|
2023-02-16 11:09:19
| 1
| 8,649
|
nemesifier
|
75,471,237
| 3,672,883
|
How can I interpret this Python memory profile?
|
<p>I am running a Python script and I use memray to watch the memory usage. I got the following graph:</p>
<p><a href="https://i.sstatic.net/wNOl0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wNOl0.png" alt="enter image description here" /></a></p>
<p>I don't understand the difference between heap and resident size.</p>
<p>I read <a href="https://pythonspeed.com/articles/measuring-memory-python/" rel="nofollow noreferrer">here</a> that resident size is the RAM usage, but I can't find information about heap size. Is it the memory I really need to execute the script? Or is it only space that Python reserves for execution but doesn't use?</p>
<p>How can I read this chart?</p>
|
<python><memory>
|
2023-02-16 10:54:55
| 1
| 5,342
|
Tlaloc-ES
|
75,471,070
| 7,959,614
|
Uncompress a payload using Python
|
<p>I have an encoded string, <code>payload</code>, from a request that I want to uncompress.</p>
<pre><code>payload = 'H4sIAAAAAAAAA+19bXMcN5LmX2Fw76NIIZFAAvA3e+yddax3rPDobmPiYkJBiS2LNzKpoCh7dHP+7/dkVXX1C6pNAtWu6oqwpdFIbBIFoIAnM598+9f5q1cPnz+sbq9+Wp1/cf7Xd3cPX68erm7efzx/dn598/HD+6vPL/E5Pvvyu+/wtZtr/PUHaywbwxfM5OjC4esPd5/udZDbh2+3vwOf6BCr++arzbfjS/d3n27xb/fs/P3N7epPd+/v7vHpvxkT3b//Oz7/afXx49WP+lD8493d+9XH8y/+978OTfU/8A3d9/3l00+vVxiL+sn/x9ZXz/XZH67umyd/vrq/bp7hLP718eHq4ROecv7Vtz98/e03+L6Pb+7u9Tm6hB/vV6vb8y8e7j+tMPmr23/oWF5XvVp9//Pq/ueb1S/nX+xN8MXdze3D92+/vX1Y3a8+Pvzp7u7+Wnf1/ertw8u7H25+fLf+2v5P/vXh/u4fq+bDm9urh5X+1D/PvzCXxjmJQZwVMoaSS1iGfj1a8hyYk5CQ9fjy/z3/4oJ+fXb++u7h4e6nl3cv7z6UPiwftHlW8p7Jsg8cU4rGr5/1a7Mbf2536vfeigt36YMzAROxSdjZ0M1OAiXsDot3JvpwlI3IxsSTfP78fhs+3NxOeSgSJhWdBDHBWOe42wlrTSI24qO3+E/8UfZiYNTmafiqw3GxnGwK3gfa3Y+JjgXelfUmiI1COKemvyBOv8iG8f/RG2PlKJsxMGrzNA4uWQ7EERc2+e29UJB6cfPmH+9XX22ei2e8e3j48PGL588//Hh1oVB6gQ25fPP+7pM+8v7z5Zu7n57f/AS8ev7pw/u7q+vnb169vXn//tnrV/c/vv7ibfPfs6tXNpjnDV6+Uvx9df8KEPzKk3l1hwP5bnV1/ertp/fvX9Hl//nw4/n2bL7bbPmRZlM3j+agzLQ1DcofmtPkG7Q7m4/NGRwQgyrf2gOq3/Xu7pf/urr/hwq7Vlj91Pzr5eqfOmsbTT/UlqR8izm/78b44u3V+4+rRno+XN2+WbU/dfa5uZftT3bqwF9f/vD9f6qgvDuEdSqhv7u5/Udzc/77/urDBzzwSbd7+wf7K/f2/u6nKoFJUZzYEAzxlsBUqQbRL9h/s4WND3flkJO8U9wTCIcQYvcIb1QyRO8J4EAmbYHAEyDnaDuQr3QjxSMxJLy1KcqYDchX2qGgcV6FEpQ+lRXc78CvvUZ1aiem1SusCd7gzMRIwfeKhYWYJ4E2FA0L1W/YBV0SdsoGa4RTYuH+pQT8E08x3kUvIcx0ZPKlrhWevY2pPzL5SvEIO7AvmxOjyvxXn1/gzw0mnT3cnenhOHt7dXP/y9XnZ2c4ausP3rVmwebnvrt6vXrf7dQZnf/6bASU4jkZlNpHobSb3SKhtBDmqk6FSUnwR2BrjJHQa7I+wDRMsEEckZ/rWpTCXM0O5EvttGuoB/jDCu5NYuuXAKXFMFcpfK2YaJKDPUJxfSr1oAaJHKOLZLy4mc5MIc7V7MDAUtfnMnCihEfjtuIwHQTTLcxsztKzMz4Dpvqzm9vLx4FUcfDnm+vVXcPwCK6uFeuYiGwBxHZYuYuxGcLyHsK2wLwFsP3EFwmxhfhXB7FlZMHE96UU/+owtpTCOF2QLQfAOpAtI1UmPjTFCFgFs6Vczz7Kfnt79vBu9QiY8vmvf89A8zeZbjvMdNtBplslTc90v/jyhy2a2x2guV/KDDS3ZxirMD0MwdwAXK2t9iQAR7yFEEXSkdjdfNBWSXHCABtI6OgS02w0d+DGUlfYhvLZXu8LbBFOIgkmHK0zgMpAR9iMA+M21l82j7no7hhhiLBR9xI1960VYIxbziZK8BG33sbjsN35qM3TsC/GSUoB+wQ83HGCTMl2O0zQcQyevYPg9P1mBGH8JzDZcZM8H2kzslFbCxC74IWsE+cdTKSTY7vtibDd+TzmZ7sPzGkmttsek+0Gol
ew3fipxVI0rdwkGwjonXziLbkJzdmIDarAjeJoxHvYrUlH9FDIU08DRciNgKcAi3yci6PJV7olzGOwyluaUbpwvtBWDXQuJA14SNAQUzCyAOOh0y6U11J6wnUETasFQODhDLHDLw8teoTReUGXRv0MqjlEwp75tRhVxzqRgdXiPAzfOMuhObTcXuvZ2Z8RFydbbEMF7W6NHDQfOlxSPuZez8YW5W3TNJS3zQmZJ1DelhaLp4VYV3MqYlBlinwAbgfTU6Q2Wu9g0nobAFiwY2fjvMvQroqPydfaKtnWepgaDlAOlV/cIiC1GOtqdowdNP7oE37DCgm9nHfR2oANMzGkPU18Wv9hCdJVkTHZQts99jBCyBn8hoVoD3MxHShtM95emeP0VMp7JLHNjzLbA4pp1BnyUrntUqCrAtPIyh9bNTKi9LQRzgp5R0DyYMWyn+teFANdHbudLbajLCTgqoSQghDDUF8AmpYjXRWY2ID9Ug7F2hgib3wiZMWRg2bInKzhuejtMrCrOzTZUjtqx/qYlNkJAZh5WDt98ekB2NQj1OWzM6t/t0/DUz66o9A95ijsp7dMMC0EujowLeJZJ3cUFsJc1bUopn9PGUuLca4OTMsY6cmPTRnQ1Z2aQp68ylPoij2FPOwp5LWn0G95Cr2Lv5UTc8hZSHEGZ6EeZQcw4KjvlNfe32RwDNVrG403yW+RfiN8IPmg7bNwZMTD3BEKImZbwZzQWegvRVNSlOYMEA3kOsOLAWHOxAioDBSOkvswNCieJfkMZnMTBkwQwhFvjAPRlrkBoRY0yEMFxXEcY9mgnVoviSCETBKx2Ki5nITsNXwmxIiJkOkhL4SYIAZiDAQl+xju44ExW03VaWCTJdLbk07QQcgn4iDM5zG/g/DAnGZyEPJRHYTe1zgIvV8sod3KSmZIKujJvQbYSDVOEGueItMobtIn6JjRRvKy8b85C5UL0hEKkIluLv4lX+aW8BaBCgGzSkYxUNlCu0A56JYEuejYpSWEFbaqhHo+PPRX8c75LV0CBgRZgzWOS4LB2JGcNxpWpInha7mE3cPWebVrvZvJ85EvdK3d7G1K/VHJ19lscL4rhxyCLQ4NOQSjm8YhGFKNQzCkxeJnCbhVcS7ifbQBurMFOvNak7TJw9ZIAG682uBmIyILwK3KcM6W2erR7BwsDGwLzAzcmSWwLeXwVrdhkGVKMzgRwEEfJuucC2Sjx3OZvJ3puBQCXNV9yVba7rFJVrNuYJWxOMcHMbTDog2G3qjd9dPq+gZPUSAtILHHOgXzYLXHfYLLDVUrRLq60AoLyzvocYOx019AGzWqnEMitnYrkXdqJrIQ7OoiK/aX2hETwWDLXUPUpCZE4eQBtRjsquIqvDWkfD8bTj1vG5rEeXwNWl/yYS6PRyHWVamk2Uo7/iakCCWYJYlx4TCaJoXLsIbLLrLC8sk6AUnnS7RYL2AZwNVhaAmLOnk8RRm81QanFXG7pwyhZfhWh6AlXPPUCmkRvFUBaBn/PZHjzw07/tza8cdbjj/SEJsDKYIHK+HRDF4/7DPBCLd4YzH6HpokweSglIxz1rEcx+uXD9rRphDHMHpD8ni5brvCz4ReP7qELRWEIw4fpMDaAaqBqJixkHr99UQew9c1NGhj1OVTmM3vB5TmoE5Jp+nQfcQyKecLzdE44zxvV68ZcTAGRu20eOsFSjy0NU085t3tmMzz5xNHzwJBmHxM0gfPQUDi9QUc6pgoxeOkBw6M2vFDQk4YSgo+NizbhO5puP/cibj/8nnM7/47MKeZ3H/umO4/mNkV7j/81GJJl0ZohuChMVMwwW1kJpT7BNUE2q7jUemBQXHGW5+0tsamhAng0UVnOUD9ijaFuerx5EvdSHJrNcYWs7PjXID5UlujScHPBXwcQki8iGp4rW7hvDin5X6936gWoqWwIO2dSTzK58EY3zRvo6lUuD6TVpghNUQc+blcHkML7dWdnV0ZU1tlf6Hd+j0USwpeAu
zvw17ADo4GvIDS1EKSp5EvIz2BtqYYnl0ukJaiXJ0vULxmo1ITqNorb2yAI7goXjgxLNq5fIGFKFeFDNlSO+WaneDyYYMMHhEWAKSlKFcXeKPEBt6KHhlr+yJIMWo2PMFqgxVrZhO8RTBXdVqyha7ND5xO70kPTAiHyRfxO3jZsdduh9L+fT2AOYg+Wu+un94iYbQQ4upQtIwOmPhaFEJc1b0o5ShOGUYLQa4SR4tIk6lPTBnMVSZEFjI5E9W688NEtj8fqnUXTHGtO7IzENneqvcqWadBMWz6qFSjV191Bu8dheNwdPmgXXmcoGfGBB8hjmUmIpsvozXJiUApIMiFPqvRYXJeS45Hiz/lSKXd8lHxNJfPYb46dyY6lpRs43PYZNKxNFmfeGkJ9/NIpd32B+3yHJNKJmOwUxpzMVcGCwSYAeAl1oZDaytck/ksAAkvD0q1ptodqcxdPmqXck76dNwgnct2cfpT4bH9ifDY+Tzm57EPzGkmHtv/kcYyiseGzPRQA40DQsbUSwq9nACJoFV6YWaOUoIkGS2t4ACNsNi5r6usiSIQGOKhJQnTfLks+0vtJbmDHpggvYTG1WUaWGpnniWtx+GDqNEuSygk0uoWEWZDsIkpUB/8ATUAeyWqAJgY3AjfxwV2DLIJqjn2RXUHXsckeBgrycH2ciFAzs7VCShf6lrh2d6YUaZTttDGW5BvS3lKC/lpUlooD8N+QkoLLTcQuxTn6kqJBKA0ae14Qyb2XLEWVXSas9+YmXMx2YUwVxVFuL/STr22ZHzUsvEJX18GkhajXB2QRI1qZdX2fZL+lRAgDFaKghieM1uBu0Kcq2PtsqW29xK2SSDtxGDEQfj/hmeQwl6NO/IFHsGxZLY8SmYP4GiTb+MWS2cXwVxdOHbSE8bGeVgWPX9jmTQ3wliKHgcyzBaQXQZzdcV1sqV2AdnYDxtFc2p0j9wCsLQc56qwlIBUUblcnM6+i7PWtQ0cgiQXoQHEuY5MMc5VHZp8sR2lozQoaT9r0viOR+rbbeBpXd/uiVg6V307WSySFuJcnVJaxK9ODaWlOFcHpoW070ljaRnO1UFpGQ899Zkphbm6UqGl9PhEWS4y7ByUQefgTnm7pzkH3Qy+QW0oCgmpgtH5HgM0rS04C5UKstHycfxh+aAtoai9XgVqnXijBvI8rkF7GUlrbuD6iTaX7Cpx4uuwwzQtD6KCSGi7qnh9F6yhUfE0HpjEXM7BpFU2lbQR58T2sou04V3C5SSA97Gq22WDNs8yWsxS2RHtCxZcCrubMZ1vMLKWXcTUAD6+j0OECQ5BQDFqpp85UgesfNAW+ki0+iQ0CcgGt9MG8TQcg3IijsF8HvM7Bg/MaSbHoBzTMQjLssIxiJ9aLJmtApM9Ef7QXuWp79Kp0o2146taFnFckq9AxTTGa3dEY31vsQZc/ghZEVVd57moyXyhvRgXaG34w43zi2bLbBkLKJcpueS1IOx2lPHpWgydUmGCh84qjh3xllKBpfmkPZjVXzbKKRg1iNCRE2IKvq862MjTwCYQVHaZKS57cK29qrO7NWMs7f2lNnR5vjEHDIYOkTK34NnrT7dAO63TZKbxDkqVd1AW7B0swrqqhuQwJxWXLJQoNn3BePxVOEQ1Z32i7XS5iWtGFMFdXZZLttJWwQ4cVZIFKLiChy2BhSmHu6rAFIwj2hwa72BTxdX7ZJXQE8g9nq2BfSHWVd2YbKWt/9UKJD/DUNuh8zO3oOz1EWyO0seb61UPp176ukhTZL2YCk+hNYuF1FLAqzoiWuKZDGlA91alY2ui11JG3ok4aISzOX1KIa/O6ZMttiMuhHzU/ubk1Ou0jOp3RZhXl/YC8c6ajheTOiPXkt4GDTRUOiwaobmCdIowr8qsyda5zujVRBfnDeHMOjrMavu0Uyt0O32QzDRuwjx/8FE34WZ+y8TSQpyrw9ISqnVyl08hyFUhaSkDfMpAWohzVVBaRklPbdEUIl0VmhYS5R
O5CMOwizCcDxTCa5qAFRbC8zO4CCMupAlRXEoGmtP6sKkGFbQCo7EiIR3JR5gN2qohZH1Iltgbk8KO+3vaOniCg6UMJ8CQYo+FkozYqHmrLvidgz2qJOD+oG1hmGwKs6UPNq0bUyCtDRx7N70nCAnTIJRG/R9lL7Ix2yiSpkGiwSYBAollz1s6nYPQKuwA7CDCZFO8USh5zSOCUuk19fY4pyIbtMW9oJtjPQnujuwGiJyGhzCciIcwn8f8HsIDc5rJQxiO6iHknHx5goeQl8u+qLzEJQ1qMoaEv+zIy4Qb6p11MraLCwAAgsDrq6S+5ghMFeti1xZ0NkY7X2kvxCPMHGUYrDGj1L98pV3FEc1UhFbstDtvXELFkVarYA12swm/8VK3tAo2pHVCmnqCozzKKhmgN2hnMuPT+sBYDZbHhqkjDpJ8Lko7X+la1dnel5GNbPZX2t4YFdoBO2M1AM7wQXuhQ6Qt4qWlY57KYZf4BQeZlzzQ4nG/YFugLy2VeylEuSogtU47eVO00Ww6fnhLaqbiWKRg/WzptIUgV0UkZCttHxG8tn6PFDQLbQmxFsUQV7VbJmEkFmy60UKumxJzRE2JOZc0b28upq4M4upKie6vtLU+ksE9FWZYZLhMh12DTZLLBpUuO/raT+MHzCH0adXv/GIBtAjdqvCziAOYOom2CNzq0jXKeIlTxs9CcKuTNkVEyeQFRIvArQo/S9mbicrexWHaOp4PZbZoicvCzJaXMgNvrXtsCJcQuhNBHq53O1I0mvpnoShYZ4+Tw5AP2vnPxfkUAY8MVc3MRFw7nG3IgGRgmEt0nX8NlmeK0Yuof02TzY6S2jIwKJ7l8xnMlthikpaVBtppcUvp0/hwMcmIVkhMGjRynL4+A6O2ns1kyAfYvVDlk6bNzZbbQtZjWlpkG9DcOzSi+nTxwtjiCh2lsc/QoJ3LLgbngIo4LEDBeHLMdTwR5jqfx/zM9YE5zcRcx6MWvaOa5i34qeUy15CYqssbaCKJeBOM3Qo3gFiy2olsXG4LQRQSFMFgzFanM0/q6gQKiCb9zsZE5ivtxbgq9EkjGqKMtBn2V9qxWtBTIhRBWCuaEy8LMBpaxUIX0+ybdBRdpwNonVsJUGqtGVGoCINZA7HB2HqvsUG9P4GCPtpoL05V2+fJhxpc61rh2d2Z+iOTr7Thx/N9OVT1jvoWLno48uyWQNNkt4QaFrub3SIhtRDtqqIHSVEbhyAl7FRfXM6Ks0kcQ79MgJa5unKUol2VbydbaqtjRwVtT04jd3FRFoGoxWhXFz5otYAYwzJRvqMPH1RiLLH2KQtWdioxTHlkCtGurovu/ko7UyRoBTGJzUfOH47F3kLMdek7mrD0XV6O+Qml75bNZRfCXBWSmsjQeg3sYVjDvTFsHY4JhLk2L4D9Eee6F6UwV1lGdH+tHV9BIs4k2Omw0l2kBWBpMc5VQWmQkEQ7xohNoW8XIyrTvMUhIhzZMNeRKcW5KoMuW2r7DAsQZ+0YwwFY/ljhu40fsCt8l0678N1igypKQa4OSMso1umLiJaBXBWSFjO/J42kZTBXp5QWUdFTOwfLUK4KSAv58YlSWtKwbzAN+gb9dtW7r77/8zd/2/IO+gPewTiDc5CBUSGx0V94dz3RpwUOtR6ox/2VbRQc1xNrb9D2WQIs0IqK+svt9FmdtCcWQa10ga3XQrnrGjXqrLfaBCWS4aghAcdJ5MgG/awdInanMGdOS0rekyVtfKJN6PpcTNbeoprRJp7jkXqlDYzamvpGW9+GaJrirGmn0sS0TbGSNqd2DMCHlOxbIQatfpWcYp9Vc+QomzEwagt/ug3QWKC9enzL6ZW+SyfiHsznMb978MCcZnIPpqMmtqRY4R7ETy2Wy1aZCXtSC2fEGN1GjplgjEbdJgnabXeUEiiUko2UgMGqU/UsDxT1YCI5fMCzRYjlK+0FuXVJi60QNSbVGC57f6UtCnLEg0nbKmml3EUktqhmAeWKcF
w0Fsj0AfyqBGi/ZNV21TcwqvhdCCFqg02foEJY176TC/UwsIO2LNhHIezZXGcmX2yn8WxvzYgjM7zUpiRUvjWHkltaVFK2BSPf/Lw6u7pfXT07w/yO7xccYmEuavyCF4NuwRff/OXL717+7cSx1FstcB618jXjePTtB2CSKMgkvf9+nDld9ISpGZj9WazDYIkiaUVycXYcm1D2hFOGUa3npeHr2kdBoOBQD3EwEoAfxjoACfTjNELy1jxl2jiLwYk0MLc38XHipPQhOZ2NZ7x/+Pz7+f4o1cRQpCGs/PqH719UAOWU0TVdQNGY+KG9IX6d1p//OyygFKz+eF+zv68diPj6/u7D2c3tnq4V46O61tjoqzyg9QlhA2G5Aa2/v5YVlZZiCtFpoxrT91ASYwSfQenmRHE2R9fvrmblC+1SlIN1sGTJe2FJS0iAm0bPaiqUBUkc1GtkxfcRcXhNeCtWG/FoyT07T9LkJGrW4GJbZgAmMhs8FI8JMGp/I/5qN6L1Rmnvn1bXN3jKszPxfRu940YQDPGA7tEIggEa0C0WUwsRry58wBqTbBSyHL1ft/+12vWLWKNGRdvqzRWeWAZ5deUw91fahQ6EBDjXD4T0mi4AVisAr47t8FbjiUV99KmP9NI8Y63/jJdCLpi5ggdK4a6qIlK21E7Yu6AB6MwuRON+I7U4mJ2wq3WVm52aw4/GFhy5urB/LBarn94isbQQ6KoU1EKH9tQaaiHS1aFpqZv9lOG0EOnqMtTK/P5Tl0UqRLoqNC2NRqgKxvLFwVhkhqOxSKss5uFYWkuusFRDM9LU0Vje4Spal0xsStb0lRqitVreWaOVnfgjVWrIBm2duM4DKYM3iZoCMXNFYxmOrN1dWDOnqOtDIJe2aeOuqMj6DUfYiYExG8/k7gT2mm5OWqiBPMQjQwiQ9qjty4hLwDsCPqkVsn0HR52KgVG77AqHDYK+rzQS1Hve3Y/JYrFc0sKIATclaszipk2U1oFOkTwsYpeOshXZmK2uarRsu8U1IQ1Y3Ka0TiMOi8yJBGINTGT+SKxDk5opFKufznFKNRjOTIcnlGowvFgWRiVmCFEDPrR7d68GQrixTQnQ6ZoO0qPi0YOJAAD2MF+V8ltDDvmIT2xiSExoQ/P1Qdtfai/HrSUTOWpT8XGN8/KltltgE9mmThg+CsugtxvNoilX51g7GnXyvtUCRLNUIps0JoHyQgv5Wm5CkJQ83nBjAUPHBNUZclTcTLnoAytd6ztb2zLqvOTrbJjzfFcO1WloESlrQ6rZxTJNhYa66ILh8IJFIGkpytWVaAjstIaMJS9s1zyPg7oZbVBqNOBo0Fz2dCHKVXm9sqV22rVeSGx+0M5rIS0irrUY5qo4GBbH6oNThsOZuCE5iEUL/cJY9XP5lQtxrq4D695CO9uPvQTYQN6mADn/Gw5CSvsFGuKEpYalJtIiritILBJIC0GuDkc9Wa+Vhh077jPVYLtbCcwJj4guzVMIqhzjqq5FttQ1SxHxzKCF4bUv2jIqtpdhXJ3YSaKyDTvlohby7WkTL8rsCBRBDmEuHC0EuSoTJlvpms1hgnyHNuzJHw5Va4sz9MDUFGeYpC5D3jXo0boMZtm+wDJsq4TPIkJ16ttQiG2V+FlI854ygJbCWyWEFhDPU5suRehWh59FZPhENRlUuRh0A9KgG1A3ptQNOIMX0KUI/QmXUuMO/eY4J6tNoESd9dqKeOt9jinKkA3aPssbwpuOwCEHNW7bJp/UDaj16gm3WmvXe2LfxZTDKIeFjikaWOrBpWMUIhgctaHG8knMWJcBV5pS9EZroKzfFtlIDJWaBW9Sm3Qf5WgMjLoOGAHeRqO5ueIpztZvlJhwR7RMM95Mz9dYiM2gRYWst8EZc5x7MjBqFwkhEXdEKyJzkLR9Nk7EH0in4g/MJ3IC/sADk5rLH0hH9Qfamqaj+KnFstiN7KRASkBYH3hXdgIxNfoUYm1cE0kFQANojN6x74uAWW
3mo51OTYpmrvzQoZVuBLoW1YJUD25sE8n9lba6sEqnBClJzosLi6gz3GoY4pg1ZM64zhLqdAFKHuAO4PduhMV5AbmNN4Fdc07tE+psuoZBNxjfOO9Ie07N4/oYXGyv+GztzUif6P5SG7J8b2cOJxR3sDTkFPQ8jVPQ5aVunuAUdMstdVOIdHWB2azQpJGsHLea0wHBE6k2DhRLdq7gikKkq0sd3F9pp2SLCjN2Jooq2ksA03Kcq2JinLpXtOercUZ6G0jrCoqG+VOilgOaJ2+wEOfqorL3l9qyMV79K6pgJp/4cPfRDpG2fII2lvRwHl25IddKH/cK2rbN9FJp7UKYq0xxEWHrfXR4Us9QwopX4jgmabol+7mgtBDnqqA0X+qasGCW4Bx5p5WmFpIwWIZzdRku3mqN6igp+tBzGtZpjxLgiCXGp7MF5BTBXF2Z//2FdpyOC1qemthoRdZ0EEkbt+AGmS6fTVWuvdYtuNhy7aXgVgmgReTq1ABaim6VEFrI+Z4yhBbiWx2ElpHQUxswZQhXBaKlzPhUzkF7wDlo185B3nIOYgEHnYN8yDnopvcOWu0dTskxa0bmOt2BteiqBAu7wwXaClEdkyCYDdpGd2qT7mi8iASoJjN5BklLwQL0bIKGFLUl1np6hnEgAeKJyEd/nK3IB/2srbj2prAdMDKxXzCEGCG2HM6FtpXuSxxhXqHRlyDZkj1OL+d80BbxLJGWV8LJYPKeZ8sQtIJXRCkBlrSM/jqCKGByOMqYdoAIPU6KYDZmq6AGdrCxjRFL0fP2DTkRl6A9FZdgPpETcAkemNRcLkF7TJcgSZ7Y8rhLsCuAvEjLoRGYBpLMBe2w2RMPjXizBPmWIpT+cSQ2sfNOgcAEwu9eA4oUNNLOKTzTXFWG85VuBHkCQnqrLVnH5bVkK22thijeeZucFrPH7yWUFmlVC7xLzYPSyORemqoWoH28YIJa6D7jaosw9t0Ia7RRCH2QZ3CaydkcU/Fxtm61+UrX+s7WvoyytfOFrmO+oVZgbzQEzprfqNW+VZN9TWI3PIx5GgEztlL7AIf9uCuQF87BlKFcXUNajMwMuxQWRur7XuCJnhwZr/X9ieaq0lSKclXAkC21064xtGgMIgAcF3Ah/EsZylVJHmswtFYc1MYjfe0wGGsGZkgwMAH8XORLKchV3Zi9hXYWSMLIGtuvbWrcIwT2BpYun53RExns0U7AR32AGYPddJYOS0XPQmirQ88iFmDi21CKbHVZGoXcxCmjZxG01WFnEVUyNXqWIFsddBaxN1W0NZfT1nyAtuY1be23aGuvwUebTqPf/vD1t9+cPyGtJUzPXBunuUghBaCTCcn2SWfJNE2botNKNuE41e2yMbsg2KbCg4E8Zi1BOw937YBHRoBSlpPx6wRE7eEFiDTQFyLr4Y7H4CgHR8XTfD6H2erbmRA4ktjAQCLptepAGnVHTStkS8cpb5cP2qY96mmkyMFqUWtmOxN3DcUQ55KY8Rug3CcZEmmeHWHColfnSJ1G81Hbp0nCP2NT1EJN/dOjr/lU6Ot8IidAXx+Y1Fz0NR83oyUvjv2UjJbl9m5RqRnVix+iNr707DdSMxlgd1DnLI0sG6/4C9EswUE+97Do2XpIbECjWi1utoyWbKW9MKfgA5R7bZU6sszQ/ko7Sx1CAfoCPsUp361YdqJmQ6tdBJGgkS9Rulzti4aNaAKILOn/RuRy0KXz1mProSFrAji5XozC7gw+arYIZMpMoaYDS10rPLvbUn9cBhbaEuTZvhxMZrGDnUatC9OkssSq+nZxuW7AQoir8gKq5mS1Y7Nom+LY61MuUozEJkZn7VzmdCHEVeUlZCvtgtxZS5kECK+mrbVdAoyWg1wdkIgnVb7ZNQlWa9kWvbEClVz5GRvDbA1bCnGu6tZkS+3UG3ZRjXTRFMRwOHqwA6VNXuD93acf3z07e2rP5hIiewBJw+NEdg6kYbBl8yJwtBDk6vpekQ0AimjUoU
Y9jHhDPorRCkUx8FzlmUpBrioKO1tqBz2aRYYrl9QDC3tgAUBajnFVKfl4ETZI05lPbC/cKAi+rA0t8BgxcbYInDKMqzoy+VK7dKFggnKgLjEDsg/iaItJ2/EUjb+Np0loCRmKPprQspnfIoG0EOXqgLSIWZ06n6UQ5OpuRSHfe8pAWgpyVUBayEBPfWYKUa6u20EhLz5VPos74Bh050PF7iL/lmPwUErLSzuDY1CzhwCG2vjKu42fOzlHuKbBMV7CcdyC2ZhdcaDotPq9dr1yfqeK46RuwWjZ4/QBm6Baczu7C39pnCbeBUzYAMiP4QgbGrThyLIZzFfozkfILNL6G9b2sYZWk5FwSJSzgAUqx6j7Nzhql/ToSJvcGsMaBbuf4DNhSovVik6kDm3SiIVufoAi/FObqmJ+Ph6jG9rQoF1YRAia+MmQ2rgq23fkRLyC7lS8gvlETsAreGBSc3kF3VG9gpz3GXiCV5BlsSyMikxgt4OSDFjaJLWofGuK7hut6zymYpcCgdd6JVCsJDJJXyZbaWKrGrU6UtJcJnW+0l6UW2s8rKpA7EbpwvlKWz1QjXZnoIvrrmz3fz1d46FVLnBi2JNuDzu/pV0ox5tYq2jLCA7ignAsU6Mya8KVkLW965GdqKVl2EOE+HlcIENrXSs9ezszosxdvtSGMc935pBnkGXQMwh7bBrPIPsazyD7xWJpIc7VVRbRshmM4RiD8RqsKEHZNg37A4PbzJYgWAp0VUxMvtZOx9ZqK7DIoiRDULOXgKblSFeXVCkEZBKi1j7te2EkTeXESyEYsybOkw1VDnWV/dL2ltoaI8RRYBPF2FIHh52DLS5ts9pcwGqPTXLJ64U+muSymd8iwbQU6erQtIwbmJqjLEW6qqI7xZTF6cJpOdLVafNlJMrUMrgM6eoMujJmZ6qEF3+A1/bnQwkvqp8fqNPkD5DaTerM5KS219jUlFxgsr5PL4ohWahZHqqVdccK5c8H7Yxhp5HDxnirDaN4Hl5btASHExcafx53xcX5MjjRrkEJxx3nzxyDvRwaFM8K+Qxm47UB2UH7y0I1gm7d+/DEWyPOMOF/WnThONku+ajt05yP1uPAaAErn2SuUk0M+Sjql8WptX5T4BOTThZilFSIbReLHrEX+aAd+sOysQkyGbd1t2fmidDa/lRo7XwiJ0BrH5jUXLS2PyqtbbiG1m6bpy/SemiEpirP5BiqrbXbQlNULRKtEz1Kc8b4MEyjJg9T4LSme0QzxD0QAMYFFK7ZIsWylW4kucRkosaQjTOf8pV2geBaDYo81MBoxSwhJKZTLXyIHFhlWtqoFrZlJXQTRzG6VlsfYhiB5YYRuQtftHglHGBOMJ5NKj1mOTBDS+0Vnp19qd2AoYU2e5xvyyFG2/BggDYO9DSM9kDJuycw2gsueVcIcVW1E5Tx82yS2Jg2wXtsNPYZH8XQ6N9zpQwWYlxVmFi+1JaGwl2BsaHde8loHM0SgLQY5ap8Y1q3X7QktNpi64rCdGkZ8h76fxCGHu7n4e2Kca58B4ZW2mzA7r78Bpst6UALrPDEtMGRyS4xz75+PNklLjf5uhTlqthspT7xiyJwyVBfcF+MkhjQgZMz+DzOhKTFMFfXAytbbLsJSn7iV4R2I5rSuwQsLca5KukT8R9brcLOPsW+tJ5X9t+pE4aNdTNF55QCXV3fhf2F9joOSdDy+Nq/4nD1pmj3HIPk1PFmp0l3yePV9tNdBjTS0PSZMYt1DRYCXV0yKQzlBN2XXFTdt38GHoKzl8iEqBWh5nINluJcHZhmi+1Y3+C1dimwXKcgy3ANlgJdXcZH8knZFYKpHE2f9eIt/lPnTUgMc2G2zP0yqKtC02yl7SOiUe+BZlAF6AGHNdOmjugWQF0+OyuJs3BHLybqH4uzWHiYRSHO1WFpmatqciwthLk6LC31oJ00lhbiXBWWlvn0pobSMpyrg9IyR2NVlIUvj7KQA1EWso6y2OmGFQ5HWZxS6i
BRiBoD5qFMGTa8VqhiEG1KpvqW4LAfp/FRPmj7rKi9NQKOu9gkOy7kSRtiaQC7IZw9rTnek74eyKXxEBpxloI5Tt+jfNCGLMxmMFeURWCxQe8gbE3DoW/b7XFOvIPkDHaX/h1TRjMbtOVHYPnaAFwRlQ92rtRBHxlCXMsHap6M7ZvIKpcTXGi6yCVnjnNBBkZtaXft/eu1Uh+kjw1pm+E9kSgLOZUoi3wiJxBlcWBSc0VZyFE7Yg1UcHpCR6wFl3DSXhxR3WMO0Ag1bROaKBGmJsSYZsiPa1EfgAXa7QACU8GxD+i1JgEIyLsQbRQzV4x2vtS1KFeq23kvqiOO2oGBpXaCAfJRxEENFX3QItIHG+2CBWthbf20KeKkmgBrKTgD1dqPy7f0GroOYzNYWHSpL7jnmuhkbBY0DS9uLu4uX+la59nblxFHJltpuwGa7gBDE5LTwqQ9bDpQXsOJ+9Yuv39PrDzb5fEwC2uX3dSlEObqoFSgVhttLSXepF6Fg1qp1eY0tpzDbP7BYpirIhWyta517BhS0mZkrCWiF4CkxShXBaRRUzwsaxA8QKM3fzA+E0WKWvaF01zMXSHM1TH62VI7SyRGGGaCR8FANL9RDq9htDfopJ2x/DRJg3lnwad1xlosmV0IcJUYWkIITB1jUQZvdQhaxlKcNIIWwlsdhJbxJlOfmFJ8q4twLKVzpsoYDAe47HA+UAmPaadF1vd//uZv50/IGZTp2WwTSAJhRwFR2Ps+/SFBu4pabTjByAjhOMlh+aDrAjqsJWo15k0jIudhs+2lVmCMRpOcBceta02nXydx5HE9HTkOx0gaHBq0ifzMpzAbn63VOYzR1Ggt1L+mK2w04tgB0LWvBR/nXOSDts+iEF1KmoOsPRT8XDmDmBqAxwtkAGnRz614xCQJbytBgIbAxyH3B0ZtaQ/xEA0OMKyUUbKnR2iHUyG084mcAKF9YFJzEdrhmIS2TTVpg/ipxRLaKjeTg4qsfZY14WlLbkrUBhYmjOwUGlh7p0BRtxq5usYcAAIbdfaTSWm2tuR7y+zFuPGwpgBSTvyoCPV8mZ30gSEB/U80lNHbJYTAtHqFidEr98CWuiogrQrAVmzUIJ8oo4LpbPTROW6iAnyKbl0bCvq6bhS+5iBXwkz8y9Ba1+rO3s6MSPUZWOvn9iapoiVN1Ibbrni2bzp0eKR0yy8Y+f7ZWUhNeCFNQ2Zf5F7Bx9nsi0Gn4Itv/vLldy//duIgGqDlesPaO5y1htP6lierGTDR4EhHaH/jvIKFz5gYSAfm0c2PoftqOTMWb8b1yCp9xgmjKSbNotp4CkofEXNfBo+YmnpGXrm9MNIvWP6UqSvhDUykRTtRDAxQ/nhk3YKKh+SUNp7x/uHz78dc51ktj0PmYA3mr3/4/kUFXE74zluheTGqjebeEL9O69H/HRZQClV/vK/Z39cOQHx9f/cB2tWZLmtdpSE1aXHmUaVrXGIx1yQW84ITi6dQtqILojntxFGLpW8UDR9I4//xID+bz2saXWt/qR2VCWsVW+Nh+8BG5iVYrlPpWs6KE81TCKxB9VunxuN92GSSi9EZOw/XMYmqNbjYlvWFScua3ALjGRt1uCdpCju5xF0w1lZ63O+fYZxzgI9nGMdlJxgXAl4dppKRyOof0VSQHrej0RLqwsZ6mi+9uBTv6sit/aV2z4iJSbz4YLWI7zIwtRjtqiCVtaKbvg+j3qT+GY5wXpNxLmlaL80VwFcKdlWHJl9r50YL2pfdpdCkyx0OJGiiseJ2evFTo7HGphbn0ViPphYvPBqrDOLqULTInT31hShEuEoULXOynzSKFgJcHYgWuv2nPjSFCFeHooXBCFOlFscD4ViaEJA3JmV3MLVYO5kOBWNpAcepY7G0G2jAu2TjSLRgYC8XAz5xTSVxIOVx8mn3x2x9uNoR1QosvuQjy7ZXZMJQLNYkB0vJBcwMGNglOWiZKzY+mRihXkZuItnHx2INjYqnuf1JzB
eLFZv2NurEtAGCshcPWm4TRgfm6a3l4+RZ54O21ScskCSpjxmGsLa/my0WS5trQz/EG9NgwXWpA0Na5QsnF+IzynG2Ih+0wz6XDEF914JKZmcnTiQQK55KIFY+kRMIxDowqbkCseJx25LmrPZT2pIul9ZuhKaHeuKcNiDqUzlUvjGQjAFbwPdxXWxUBFnvOBrvk/SR705EQ7S8I2jtabYs0WylG1EO4xrGjnatGhVelK+0S38wXm0GgZTUYkULMB1a3YKiaLVqgrUVtnUL8UrkMeRKGpPH0AzmiTTFRas2bTo/QZDjFAV86IKEmXLRB9e61ni2t2aUwZ2t9HNTjnN3X9xBw6HDJGVbGt9gX3eYDE1Swz3VRGOl5ZZoKAS5yvJuWrDIRENJbG+w2iDkIo6EsDGe5sLRQpCrC2vdX+m6tJsWk0qkNHqIi+iDUQxxVelglimpEm5VvvXavotaADrCCBAhmGjzNQQvwri6hqzZUluDJ0g0UVjwdL/XwnbXL9ij5dv7m9sfV8/OOD4tvOIo8VmUOwUfj7Kg5eYFFEJcFYpGo03+vCVHOH8bKttKtDH5YPGL3GwRrWUYV0dlZ0ttn0FJWw+6EH1kNdMXAKTlIFenwBPMUG3q6aVNum6NpMYRlzwxtjKmuUpkloJcVXZ1ttR2B3wMSbuRkuM2S/+QPpr6ugy7xW4mCa3IddH90Iphl2CTurBMIC1EuSokLeNVJ3cKFqJcHZQW0r2njKSFKFfnFCzin6emgspArooLKyTFqxyCTh2Cf29g8eF/3Vyv7rL5tV99dn5zjX9AdGpdPcdE1FQEvnl4r9/0n6vVj1e3Z1/dX12/X30++39nSsWe/fd/nb14d7e6vfnn2fcYD1/+4e7T7fWZw9/o40MzNfy1U38x2jtA7u3VzfstWniADf7w41VDFH+4v7vuqOG3q4c3756/e/U//vVupRfg12e/4O+/3Fw/vPv1mRLGD8/XA755e9H8zMfLTx8vVlcfHy7oUke6fH33/uH69uPl7erh+c/0XL2YN2+ei5bhV1zUggHPNUTg9Wt6ffEWmHnhOL25uDLOXWBTrlc+XL9xNj2n69eJ+SpcvBa7unDxtb24krS6eBOu+bUk//at8xjJ/DOZ5z9d6cSbCXXs9Ie7jw+Nh3XJO+CiuzLx+sKu3sYLd8Wri/TG8IW8jVfmTbwWdu45wNX8k6C/D23Cp9fXuAta2VpLZRgv3hjTuJbvHz5++ZB98OZ+he+/zr+Or/54d6/3ARv3YYUDe3Oncnr9wdetb/sv7WH/0873vL2HhH9383G1+9P/pj9w8frzhf7/9rftjqV37eyrz2cv2u/SF/anu+vmkz9/+fL7/6m+8etP91fNw77QitP6w6v75sK13onmzjFDMOtjbu4/PnRjt/dNQf+q/1p3+Vod577/znO93p9X6qk/11vZTUVv9+3Dt/oA07Sivder2bv33V4YQKsX4aL2X1B7dQMbA96UP4DjD+D4Azj+AI4B4Pj7r/8fN2LZVzH7AQA='
</code></pre>
<p>My process / goal is explained <a href="https://documentation.softwareag.com/webmethods/api_gateway/yai10-11/10-11_API_Gateway_webhelp/index.html#page/api-gateway-integrated-webhelp/ta-uncompress_payload.html" rel="nofollow noreferrer">here</a>. However, I want to stay with Python.</p>
<p>First, I believe I need to decode the data. With the help of this <a href="https://stackoverflow.com/a/3470583/7959614">answer</a> I write:</p>
<pre><code>import base64
payload_in_bytes = base64.b64decode(payload)
</code></pre>
<p>Next, I assume that the end-result is a dictionary so I use <code>json.loads()</code> as the <a href="https://docs.python.org/3/library/json.html#json.loads" rel="nofollow noreferrer">documentation</a> states it accepts bytes.</p>
<pre><code>import json
data = json.loads(payload_in_bytes)
</code></pre>
<p>However, this results in a <code>UnicodeDecodeError</code>:</p>
<blockquote>
<p>UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position
1: invalid start byte</p>
</blockquote>
<p>What am I doing wrong?</p>
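<p>The <code>0x8b</code> byte in the error is a strong hint: gzip streams start with the magic bytes <code>0x1f 0x8b</code>, so the base64-decoded payload is most likely gzip-compressed JSON and needs to be decompressed before <code>json.loads()</code>. A minimal sketch of the full pipeline, round-tripping a dummy payload since the real one is not reproduced here:</p>

```python
import base64
import gzip
import json

# Round-trip demo: in the real scenario `payload` comes from the API instead.
original = {"status": "ok", "items": [1, 2, 3]}
payload = base64.b64encode(gzip.compress(json.dumps(original).encode("utf-8")))

payload_in_bytes = base64.b64decode(payload)      # starts with b"\x1f\x8b" (gzip magic)
decompressed = gzip.decompress(payload_in_bytes)  # this step was missing
data = json.loads(decompressed)
print(data)  # {'status': 'ok', 'items': [1, 2, 3]}
```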
|
<python><json><byte><decode><payload>
|
2023-02-16 10:40:24
| 1
| 406
|
HJA24
|
75,471,046
| 9,391,359
|
mlflow.exceptions.MlflowException: Could not find a registered artifact repository
|
<p>I have an MLflow experiment on a remote server.</p>
<p>I use <code>pytorch_lightning</code> to create the experiment:</p>
<pre><code>from pytorch_lightning.loggers import MLFlowLogger
logger = MLFlowLogger(
experiment_name=EXPERIMENT_NAME,
tracking_uri=MLFLOW_TRACKING_URI,
run_name=args.prefix
)
</code></pre>
<p>I can see in the MLflow UI that my experiment and run were created,
and I can successfully call <code>log_hyperparams</code>:</p>
<pre><code>logger.log_hyperparams({"run_id": self.logger.run_id})
</code></pre>
<p>But when I try to log a dict</p>
<pre><code> logger.experiment.log_dict(mlf_logger.run_id, {1:1 , 2:2}, 'test.json')
</code></pre>
<p>I get the following error:</p>
<pre><code>mlflow.exceptions.MlflowException: Could not find a registered artifact repository for: mlflow-artifacts:/7/d736955490f6444b916d5881b067aac1/artifacts. Currently registered schemes are: ['', 'file', 's3', 'gs', 'wasbs', 'ftp', 'sftp', 'dbfs', 'hdfs', 'viewfs', 'runs', 'models']
</code></pre>
|
<python><mlflow>
|
2023-02-16 10:38:53
| 1
| 941
|
Alex Nikitin
|
75,471,037
| 5,868,293
|
How to use dictionary on np.where clause in pandas
|
<p>I have the following dataframe</p>
<pre><code>import pandas as pd
foo = pd.DataFrame({'id': [1,1,1,2,2,2],
'time': [1,2,3,1,2,3],
'col_id': ['ffp','ffp','ffp', 'hie', 'hie', 'ttt'],
'col_a': [1,2,3,4,5,6],
'col_b': [-1,-2,-3,-4,-5,-6],
'col_c': [10,20,30,40,50,60]})
id time col_id col_a col_b col_c
0 1 1 ffp 1 -1 10
1 1 2 ffp 2 -2 20
2 1 3 ffp 3 -3 30
3 2 1 hie 4 -4 40
4 2 2 hie 5 -5 50
5 2 3 ttt 6 -6 60
</code></pre>
<p>I would like to create a new <code>col</code> in <code>foo</code>, which will take the value of either <code>col_a</code> or <code>col_b</code> or <code>col_c</code>, depending on the value of <code>col_id</code>.</p>
<p>I am doing the following:</p>
<pre><code>foo['col'] = np.where(foo.col_id == "ffp", foo.col_a,
np.where(foo.col_id == "hie",foo.col_b, foo.col_c))
</code></pre>
<p>which gives</p>
<pre><code> id time col_id col_a col_b col_c col
0 1 1 ffp 1 -1 10 1
1 1 2 ffp 2 -2 20 2
2 1 3 ffp 3 -3 30 3
3 2 1 hie 4 -4 40 -4
4 2 2 hie 5 -5 50 -5
5 2 3 ttt 6 -6 60 60
</code></pre>
<p>Since I have a lot of columns, I was wondering if there is a cleaner way to do that, with using a dictionary for example:</p>
<pre><code>dict_cols_matching = {"ffp" : "col_a", "hie": "col_b", "ttt": "col_c"}
</code></pre>
<p>Any ideas ?</p>
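<p>One possible approach (a sketch, using only the columns named above): map <code>col_id</code> through the dictionary to get the target column name for each row, then pick one value per row with NumPy integer indexing instead of nesting <code>np.where</code> calls:</p>

```python
import numpy as np
import pandas as pd

foo = pd.DataFrame({'id': [1, 1, 1, 2, 2, 2],
                    'time': [1, 2, 3, 1, 2, 3],
                    'col_id': ['ffp', 'ffp', 'ffp', 'hie', 'hie', 'ttt'],
                    'col_a': [1, 2, 3, 4, 5, 6],
                    'col_b': [-1, -2, -3, -4, -5, -6],
                    'col_c': [10, 20, 30, 40, 50, 60]})
dict_cols_matching = {"ffp": "col_a", "hie": "col_b", "ttt": "col_c"}

value_cols = list(dict_cols_matching.values())
# For each row: the name of the column whose value we want
target = foo["col_id"].map(dict_cols_matching)
# Pick one value per row via (row index, column index) integer indexing
arr = foo[value_cols].to_numpy()
foo["col"] = arr[np.arange(len(foo)), pd.Index(value_cols).get_indexer(target)]
print(foo["col"].tolist())  # [1, 2, 3, -4, -5, 60]
```

<p>Adding a new <code>col_id</code> then only requires extending the dictionary, not adding another nested <code>np.where</code>.</p>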
|
<python><pandas>
|
2023-02-16 10:38:12
| 4
| 4,512
|
quant
|
75,470,878
| 458,661
|
Passing results of BigQuery query task to the next task while using template macro
|
<p>This seems like a peculiar struggle, so I'm sure I'm missing something.
Somehow I can't seem to pass values using XCom, unless I use functions to execute the tasks that provide and consume the information and call them from a PythonOperator.
This works; so far so good.</p>
<p>But now I need to use the execution date in the SQL query. Since the query is embedded within a function, it isn't parsed by Jinja.
I understand why the {{ ds }} macro is not available outside of the operators; I'm just struggling with how to solve this in this case.</p>
<p>Example of what I'm doing currently:</p>
<pre><code>def get_some_values(**context):
hook = BigQueryHook(use_legacy_sql=False)
conn = hook.get_conn()
cursor = conn.cursor()
cursor.execute(
"SELECT value1, value2, value3 FROM some_dataset.some_table__{{ ds }}"
)
results = cursor.fetchone()
# Store the results in XCom
if results is not None:
for i, result in enumerate(results):
context['ti'].xcom_push(f'value{i+1}', result)
def send_slack_message(**context):
# Retrieve the results from XCom
value1 = context['ti'].xcom_pull(key='value1')
value2 = context['ti'].xcom_pull(key='value2')
value3 = context['ti'].xcom_pull(key='value3')
slack_msg = """values returned: {}, {}, {} """.format(value1, value2, value3)
send_slack_message = SlackWebhookOperator(
task_id='slack_test',
http_conn_id=SLACK_CONN_ID,
webhook_token=slack_webhook_token,
channel = '#some_channel',
message=slack_msg,
username='airflow',
dag=dag,
)
send_slack_message.execute(context=context)
dag = DAG(
'test',
default_args=default_args,
schedule_interval='0 12 * * *',
catchup=False,
)
get_values_to_output = PythonOperator(
task_id='get_values_to_output',
python_callable=get_some_values,
provide_context=True,
dag=dag
)
send_slack_message = PythonOperator(
task_id='send_slack_message',
python_callable=send_slack_message,
provide_context=True,
dag=dag
)
</code></pre>
<p>In this case the query fails because it literally tries to select from the <code>some_table__{{ ds }}</code> table.
How do I get the execution date in here? Or how do I pass values from a query to the next task without using a function?
('Current date' is not good enough, since I want to be able to do back runs.)</p>
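<p>One possible way around this (a sketch, untested against a live Airflow deployment): skip Jinja entirely inside the callable. With <code>provide_context=True</code>, the execution date is already present in the context dict under the key <code>"ds"</code>, so the table suffix can be built with ordinary string formatting:</p>

```python
def get_some_values(**context):
    # "ds" is the execution date Airflow puts into the context (YYYY-MM-DD);
    # no Jinja templating is needed inside a python_callable.
    ds = context["ds"]
    query = f"SELECT value1, value2, value3 FROM some_dataset.some_table__{ds}"
    return query  # in the real task this string would go to cursor.execute(query)

# Simulated context, as Airflow would pass it for a backfilled run
print(get_some_values(ds="2023-02-16"))
```

<p>Because <code>ds</code> comes from the context rather than <code>datetime.now()</code>, backfilled runs get their logical execution date, not the current date.</p>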
|
<python><airflow><google-cloud-composer>
|
2023-02-16 10:25:54
| 2
| 1,956
|
Chrisvdberge
|
75,470,756
| 13,517,174
|
How can you override the argspec of a python function, e.g. to make the result of the help() function more useful?
|
<p>I have a python function that only takes in keyword arguments:</p>
<pre><code>def my_func(**kwargs):
</code></pre>
<p>I am splitting the keyword arguments among two separate functions, which have their keyword arguments defined explicitly:</p>
<pre><code>def my_subfunc_1(a=None,b=None):
def my_subfunc_2(c=None,d=None):
</code></pre>
<p>When I issue <code>help(my_func)</code> I only get the description for <code>my_func(**kwargs)</code>. However, ideally I would like the result of this to be <code>my_func(a=None,b=None,c=None,d=None)</code>.</p>
<p>I can fetch the arguments of <code>my_subfunc_1</code> and <code>my_subfunc_2</code> with <code>inspect.getfullargspec()</code>. However, I am not sure how to use this information to override the part of <code>my_func</code> that the <code>help()</code> function reads from to fetch the displayed <code>**kwargs</code>.</p>
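<p>One approach (a sketch): <code>help()</code> consults a function's <code>__signature__</code> attribute when it is present, so a combined <code>inspect.Signature</code> can be built from the sub-functions' parameters and attached to <code>my_func</code>:</p>

```python
import inspect

def my_subfunc_1(a=None, b=None):
    pass

def my_subfunc_2(c=None, d=None):
    pass

def my_func(**kwargs):
    pass

# Collect the parameters of both sub-functions into one synthetic signature
params = []
for func in (my_subfunc_1, my_subfunc_2):
    params.extend(inspect.signature(func).parameters.values())
my_func.__signature__ = inspect.Signature(params)

print(inspect.signature(my_func))  # (a=None, b=None, c=None, d=None)
```

<p><code>help(my_func)</code> then displays the merged parameter list instead of <code>**kwargs</code>. Note that duplicate parameter names across the sub-functions would raise a <code>ValueError</code> when building the signature.</p>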
|
<python><built-in>
|
2023-02-16 10:14:26
| 1
| 453
|
Yes
|
75,470,619
| 1,145,808
|
Python multprocessing callback
|
<p>Using <a href="https://stackoverflow.com/questions/11515944/how-to-use-multiprocessing-queue-in-python">this post</a> as inspiration, I am trying to add a callback. I am using GLib.add_timeout to poll for the result, as I want to use it in a Gtk app. However, the main_quit() is not called properly, and thus the following code hangs after finishing:</p>
<pre><code>import multiprocessing
import queue
import collections
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import GLib, Gtk
Msg = collections.namedtuple("Msg", ["event", "args"])
class BaseProcess(multiprocessing.Process):
"A process backed by internal queues for simple messaging"
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.requests = multiprocessing.Queue()
self.responses = multiprocessing.Queue()
def send(self, event, *args, finished_callback=None):
"Puts the event and args as a `Msg` on the requests queue"
msg = Msg(event, args)
self.requests.put(msg)
GLib.timeout_add(100, self._monitor_process, finished_callback)
def run(self):
while True:
event, args = self.requests.get()
if event == "quit":
break
handler = getattr(self, "do_%s" % event, None)
if not handler:
raise NotImplementedError("Process has no handler for [%s]" % event)
msg = handler(*args)
self.responses.put(msg)
def _monitor_process(self, finished_callback):
print(f"in _monitor_process {finished_callback}", flush=True)
try:
result = self.responses.get(False)
if finished_callback is not None:
finished_callback(result)
except queue.Empty:
return GLib.SOURCE_CONTINUE
return GLib.SOURCE_REMOVE
class MyProcess(BaseProcess):
"test process class"
def do_sum(self, arg1, arg2):
"test method"
print(f"do_sum {arg1 + arg2}", flush=True)
return arg1 + arg2
def finished_callback(result):
print(f"result {result}", flush=True)
Gtk.main_quit()
if __name__ == "__main__":
process = MyProcess()
process.start()
process.send('sum', 1, 2, finished_callback=finished_callback)
Gtk.main()
</code></pre>
<p>How can I prevent the code from hanging?</p>
<p>Edit: I see from <a href="https://jameswestby.net/tech/14-caution-python-multiprocessing-and-glib-dont-mix.html" rel="nofollow noreferrer">this page</a> that others have noted problems. How can I build a Gtk-based app to control long-running processes like scanners without blocking the main thread?</p>
|
<python><multiprocessing><glib>
|
2023-02-16 10:02:52
| 1
| 829
|
DobbyTheElf
|
75,470,473
| 9,720,161
|
Custom Network and Policy in Stable-Baselines3
|
<p>I am attempting to create a small working example of how to use MultiDiscrete actions spaces together with a Box observation space. One of the problems that I have run into is that the dimension returned by utilizing a normal policy does not fit with the Box dimensions. The base policy returns something of size 25, whereas I need something that is (5,5).</p>
<p>I have tried to alleviate this problem by generating a custom "policy" (actually a network) where I, as the last step, reshape the output to (5,5) rather than 25. This has resulted in an array of problems. I have attempted to read the documentation for how to create custom policies; however, I cannot for the life of me find the issue.</p>
<ol>
<li><p>I have attempted to use policy_kwargs; however, I don't know how to write that the NN should be reshaped.</p>
</li>
<li><p>I have attempted to use a BaseFeaturesExtractor, with no luck as well.</p>
</li>
<li><p>Various combinations of 1 and 2.</p>
</li>
</ol>
<p>I have included some of the error messages that I get for the various different attempts that I have made. Does anyone know what I am missing? Is it something completely fundamental that I have misunderstood?</p>
<pre><code>import numpy as np
import gym
import torch.nn as nn
import torch as th
from stable_baselines3 import PPO
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor # don't know if this is necessary
# -------- Attempt using BaseFeaturesExtractor
# class CustomPolicy(BaseFeaturesExtractor): # Don't know if BaseFeaturesExtractor is correct
# def __init__(self, observation_space, action_space, features_dim: int = 25): # Features should perhaps be (5,5)
# super().__init__(observation_space, features_dim)
# --------
# Define a custom neural network architecture
class CustomPolicy():
def __init__(self, observation_space, action_space):
super().__init__()
# Define the layers of the neural network
self.fc1 = nn.Linear(observation_space.shape[0], 64)
self.fc2 = nn.Linear(64, 64)
self.fc3 = nn.Linear(64, action_space.shape[0])
# Reshape the output to match the Box observation space shape
def forward(self, x):
x = nn.functional.relu(self.fc1(x))
x = nn.functional.relu(self.fc2(x))
x = self.fc3(x)
x = th.reshape(x, (5, 5))
return x
# Define the grid world environment
class GridWorldEnv(gym.Env):
def __init__(self):
self.observation_space = gym.spaces.Box(low=0, high=1, shape=(5, 5), dtype=np.float32)
self.action_space = gym.spaces.MultiDiscrete([5, 3]) # 5 movement directions, 3 movement distances
self.state = np.zeros((5, 5))
self.state[0, 0] = 1 # Start location
self.goal = (4, 4) # Goal location
self.steps = 0
self.state.flatten()
def reset(self):
self.state = np.zeros((5, 5))
self.state[0, 0] = 1 # Start location
self.goal = (4, 4) # Goal location
self.steps = 0
return self.state.flatten()
def step(self, action):
direction, distance = action
reward = -1
done = False
# Calculate the movement offset based on the selected direction and distance
if direction == 0:
offset = (distance, 0)
elif direction == 1:
offset = (-distance, 0)
elif direction == 2:
offset = (0, distance)
elif direction == 3:
offset = (0, -distance)
else:
offset = (0, 0)
# Calculate the new position based on the current position and movement offset
current_pos = np.argwhere(self.state == 1)[0]
new_pos = tuple(np.clip(current_pos + np.array(offset), 0, 4))
# Update the state with the new position
self.state[current_pos] = 0
self.state[new_pos] = 1
# Check if the agent has reached the goal
if np.argmax(self.state) == np.ravel_multi_index(self.goal, self.state.shape):
reward = 10
done = True
# Increment step count and check if episode should end
self.steps += 1
if self.steps >= 50:
done = True
return self.state, reward, done, {}
# Press the green button in the gutter to run the script.
if __name__ == '__main__':
# Create an environment with the CustomEnv environment
env = GridWorldEnv()
# Create policy
policy = CustomPolicy(env.observation_space, env.action_space)
# Create a PPO agent with the CustomPolicy
model = PPO(policy=policy, env=env, verbose=1)
# --------- TypeError: 'CustomPolicy' object is not callable
# --------- Attempt at using policy_kwargs
# policy_kwargs = dict(activation_fn=th.nn.ReLU,
# net_arch=dict(pi=[32, 32], vf=[32, 32]))
# model = PPO("MlpPolicy", env=env, verbose=1, policy_kwargs=policy_kwargs)
# --------- ValueError: could not broadcast input array from shape (25,) into shape (5,5)
# --------- Attempt at using policy_kwargs with custom policy
# policy_kwargs = dict(
# features_extractor_class=CustomPolicy,
# features_extractor_kwargs=dict(features_dim=25), # should perhaps be (5,5)
# )
# model = PPO(policy=policy, env=env, verbose=1, policy_kwargs=policy_kwargs)
# --------- TypeError: CustomPolicy.forward() got an unexpected keyword argument 'use_sde'
# Train the agent for 1000 steps
model.learn(total_timesteps=1000)
</code></pre>
|
<python><reinforcement-learning><openai-gym><stable-baselines>
|
2023-02-16 09:49:56
| 1
| 303
|
AliG
|
75,470,298
| 12,546,311
|
How to sum area if a threshold is reached in pandas dataframe?
|
<p>I have a pandas data frame <code>df</code> in which I try to find, per state, the sum of hectares that need to be harvested (<code>area</code>) before the threshold day (<code>doy</code>) in the other pandas data frame <code>lst</code> is reached.</p>
<pre><code>lst = pd.DataFrame()
lst['ST'] = ['CA', 'MA', 'TX', 'FL', 'OH', 'WY', 'AK']
lst['doy'] = [140, 150, 160, 170, 180, 190, 200]
</code></pre>
<pre><code>print(df)
doy ST ... area left
0 111 AK ... 4.293174e+05 760964.996900
1 120 AK ... 4.722491e+06 760535.679500
2 121 AK ... 8.586347e+06 760149.293900
3 122 AK ... 2.683233e+07 758324.695200
4 122 AK ... 2.962290e+07 758045.638900
.. ... ... ... ... ...
111 211 AK ... 7.609006e+09 107.329336
112 212 AK ... 7.609221e+09 85.863469
113 213 AK ... 7.609435e+09 64.397602
114 214 AK ... 7.609650e+09 42.931735
115 215 AK ... 7.610079e+09 0.000000
</code></pre>
<p>So I would end up with a data frame that sums up all the <code>area</code> before the threshold <code>doy</code> in <code>lst</code></p>
<pre><code> area ST
5.0000+05 CA
4.0123+05 MA
3.1941+05 TX
4.0011+05 FL
1.2346+05 OH
87.318+05 WY
0.7133+05 AK
</code></pre>
<p>How can I achieve this?</p>
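<p>One way to sketch this (on synthetic data, assuming each row of <code>df</code> contributes one <code>area</code> value and "before the threshold" means <code>doy</code> strictly less than the threshold): merge the threshold onto <code>df</code> by state, filter, and aggregate:</p>

```python
import pandas as pd

# Toy stand-ins for the real frames
lst = pd.DataFrame({'ST': ['CA', 'AK'], 'doy': [140, 200]})
df = pd.DataFrame({'ST': ['CA', 'CA', 'AK', 'AK'],
                   'doy': [120, 150, 111, 215],
                   'area': [10.0, 5.0, 2.0, 7.0]})

# Attach each state's threshold day, keep rows before it, then sum per state
merged = df.merge(lst, on='ST', suffixes=('', '_threshold'))
before = merged[merged['doy'] < merged['doy_threshold']]
result = before.groupby('ST', as_index=False)['area'].sum()
print(result)
```

<p>Here CA keeps only the <code>doy=120</code> row (10.0) and AK only the <code>doy=111</code> row (2.0). If <code>area</code> in the real data is cumulative rather than per-row, taking the last value before the threshold instead of the sum would be the variant to use.</p>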
|
<python><pandas><merge>
|
2023-02-16 09:34:43
| 2
| 501
|
Thomas
|
75,470,290
| 4,729,280
|
Adding a coroutine with a webcam feed to a running asyncio loop stops tasks from running
|
<p>If I run one or more async greeting loops, everything works fine. But whenever I add the <code>runCamera</code> coroutine, the greeting loops stop running. Why is this happening? Or is this rather a case for threading? I want the greeting loops to run and the webcam image to show at the same time.</p>
<pre><code> def main():
loop = asyncio.get_event_loop()
asyncio.ensure_future(greet_every_two_seconds())
asyncio.ensure_future(runCamera())
loop.run_forever()
async def greet_every_two_seconds():
while True:
print('Hello World')
await asyncio.sleep(1)
async def runCamera():
vid = cv.VideoCapture(0)
while (True):
ret, frame = vid.read()
cv.imshow('frame', frame)
if cv.waitKey(1) & 0xFF == ord('q'):
break
vid.release()
cv.destroyAllWindows()
</code></pre>
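<p>The likely culprit is that <code>runCamera</code> never <code>await</code>s anything, so it never yields control back to the event loop and the greeting coroutine is starved (the blocking calls <code>vid.read()</code> and <code>cv.waitKey()</code> would ideally also be pushed to a thread via <code>loop.run_in_executor</code>). A minimal camera-free sketch of the yielding principle:</p>

```python
import asyncio

async def greeter(log):
    for _ in range(3):
        log.append("hello")
        await asyncio.sleep(0)

async def camera_like(log):
    for _ in range(3):
        log.append("frame")
        # Without this await, this loop would monopolize the event loop
        # and greeter() would never get a chance to run.
        await asyncio.sleep(0)

async def main():
    log = []
    await asyncio.gather(greeter(log), camera_like(log))
    return log

log = asyncio.run(main())
print(log)  # the two coroutines interleave instead of starving each other
```

<p>In the real app, <code>await asyncio.sleep(0)</code> inside the camera loop lets the greeter run between frames; for truly blocking work like <code>vid.read()</code>, a thread via <code>run_in_executor</code> is the more robust option.</p>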
|
<python><python-asyncio><video-capture><opencv>
|
2023-02-16 09:34:23
| 1
| 362
|
ill
|
75,470,010
| 3,922,727
|
Python unable to read excel file using openpyxl received from API call as FileStorage
|
<p>We are trying to read an Excel file, sent in a request from an Angular web app, into a Python script.</p>
<p>We read the file as follows:</p>
<pre><code>if 'sfile' in req.files:
sfile = (req.files['sfile'].read())
</code></pre>
<p>We need to pass <code>sfile</code> into another function to read it with openpyxl and do some transformations:</p>
<pre><code>readTest = openpyxl.load_workbook(open(sub_odk))
</code></pre>
<p>The following error occurs:</p>
<blockquote>
<p>'utf-8' codec can't decode bytes in position 14-15: invalid
continuation byte</p>
</blockquote>
<p>We made sure that the file is strictly with a format of <code>.xlsx</code>.</p>
<p>Then we tried adding <code>rb</code>, and a similar error occurred:</p>
<blockquote>
<p>'utf-8' codec can't decode bytes in position 14-15: invalid
continuation byte</p>
</blockquote>
<p>We tried reading the file as BytesIO as mentioned in this <a href="https://stackoverflow.com/questions/49194667/how-to-read-a-xlsx-stream-file-using-openpyxl">post</a>:</p>
<pre><code>sfile = io.BytesIO(req.files['sfile'].read())
</code></pre>
<p>And we got this error:</p>
<blockquote>
<p>expected str, bytes or os.PathLike object, not BytesIO</p>
</blockquote>
<p>Then, we tried with the following:</p>
<pre><code>openpyxl.load_workbook(req.files['sfile'].read())
</code></pre>
<p>and got this error:</p>
<blockquote>
<p>openpyxl does not support
b'.relspk\x05\x06\x00\x00\x00\x00\x19\x00\x19\x00\xb3\x06\x00\x00\xeem\x00\x00\x00\x00'
file format, please check you can open it with Excel first. Supported
formats are: .xlsx,.xlsm,.xltx,.xltm</p>
</blockquote>
<p>the file type itself is <code>FileStorage</code> coming from the API request.</p>
<p>How to read excel files using openpyxl that are coming from a request in python?</p>
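<p>A possible explanation: <code>load_workbook</code> accepts a file-like object directly, so the <code>BytesIO</code> wrapper should be passed to <code>load_workbook</code> itself, not to <code>open()</code> (the "expected str, bytes or os.PathLike object, not BytesIO" error suggests the <code>BytesIO</code> ended up inside an <code>open()</code> call). A sketch, round-tripping a workbook through memory to stand in for <code>req.files['sfile']</code>:</p>

```python
import io
import openpyxl

# Build a small workbook in memory to stand in for the uploaded file
wb = openpyxl.Workbook()
wb.active["A1"] = "hello"
buf = io.BytesIO()
wb.save(buf)
raw_bytes = buf.getvalue()  # what req.files['sfile'].read() would return

# The fix: wrap the raw bytes in BytesIO and hand that to load_workbook directly
readTest = openpyxl.load_workbook(io.BytesIO(raw_bytes))
print(readTest.active["A1"].value)  # hello
```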
|
<python><openpyxl>
|
2023-02-16 09:13:33
| 0
| 5,012
|
alim1990
|
75,469,860
| 1,394,590
|
Is it possible to add type hints to an object whose attributes are defined on instantiation?
|
<p>I'm trying to define a class whose attributes are (mostly) provided by its users on instantiation. (There's more functionality to it, but this is what matters for the question.) My approach so far has been to pass keyword arguments to <code>__init__</code> and then these kwargs are transformed into attributes (pretty much the same as <code>argparse.Namespace</code>).</p>
<p>It can be boiled down to this:</p>
<pre><code>class Bag:
def __init__(self, **kwargs):
vars(self).update(kwargs)
</code></pre>
<p>And then it's used like this:</p>
<pre><code>b = Bag(a=some_function, b=2)
</code></pre>
<p>This works as expected, the problem is that IDEs have no chance to provide autocomplete suggestions and that makes it less comfortable to use.</p>
<p>The question is whether it's possible to somehow propagate the types of the passed arguments to the <code>Bag</code> instance itself (obviously only when the call can be analyzed statically), or whether there's a more elegant alternative.</p>
<p>Thanks a lot in advance.</p>
<p><strong>EDIT:</strong></p>
<p>A hacky alternative I came up with was to use a class declaration with a metaclass to provide my extra functionality, and then use the class as if it was an object. Something like this:</p>
<pre><code>class some_bag(metaclass=MagicBag):
a = "hello"
b = 2
some_bag.a # <-- recognized by the IDE
</code></pre>
<p>(This has however its own share of problems)</p>
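<p>For the static-typing side, one conventional alternative (a sketch; it trades the fully dynamic <code>**kwargs</code> for a declared set of fields) is a <code>dataclass</code>, which IDEs and type checkers understand natively:</p>

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Bag:
    # Fields are declared once; IDEs can autocomplete b.a and b.b
    # and infer their types.
    a: Optional[Callable[[], None]] = None
    b: int = 0

def some_function() -> None:
    pass

b = Bag(a=some_function, b=2)
print(b.b)  # 2
```

<p>Extra functionality can still live on the class as methods; the cost is that the attribute set is fixed at class-definition time rather than per instantiation.</p>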
|
<python><python-typing>
|
2023-02-16 08:59:02
| 0
| 15,387
|
bgusach
|
75,469,596
| 3,337,994
|
Why is Numba parallel slower than a normal Python loop?
|
<p>The following is a normal Python loop (I copied the example from the official documentation - <a href="https://numba.readthedocs.io/en/stable/user/parallel.html" rel="nofollow noreferrer">https://numba.readthedocs.io/en/stable/user/parallel.html</a>)</p>
<pre><code>def two_d_array_reduction_prod(n):
shp = (13, 17)
result1 = 2 * np.ones(shp, np.int_)
tmp = 2 * np.ones_like(result1)
for i in range(n):
result1 *= tmp
return result1
</code></pre>
<p>I called function like:</p>
<pre><code>two_d_array_reduction_prod(50000)
</code></pre>
<p>It takes around 0.7482060070033185 seconds.</p>
<p>Numba parallel code</p>
<pre><code>@nb.njit(parallel=True)
def two_d_array_reduction_prod(n):
shp = (13, 17)
result1 = 2 * np.ones(shp, np.int_)
tmp = 2 * np.ones_like(result1)
for i in nb.prange(n):
result1 *= tmp
return result1
</code></pre>
<p>I called function like:</p>
<pre><code>two_d_array_reduction_prod(50000)
</code></pre>
<p>It takes 3.9858204890042543 seconds.</p>
<p>My environment:</p>
<ol>
<li>Amazon Linux 2, x86_64 processor</li>
<li>8 CPUs</li>
<li>32G memory</li>
</ol>
|
<python><numba>
|
2023-02-16 08:32:23
| 1
| 2,945
|
Krishna Sunuwar
|
75,469,573
| 15,023,255
|
Simple Neural Network Using Pytorch
|
<p>I want to build a simple neural network with PyTorch and train it.</p>
<p>The network is
y = w (weight) * x + b (bias),
with w = 3 and b = 0,</p>
<p>so I have the data
x = [1, 2, 3, 4, 5, 6],
y = [3, 6, 9, 12, 15, 18].</p>
<p>But I have some problem while building this Simple Neural Network</p>
<pre><code>import torch
import torch.nn as nn
class MyNeuralNetwork(nn.Module):
def __init__(self):
super(MyNeuralNetwork, self).__init__()
self.layer=nn.Linear(in_features=1, out_features=1, bias=True)
weight = torch.rand(1)
bias = torch.rand(1)
self.layer.weight = nn.Parameter(weight)
self.layer.bias = nn.Parameter(bias)
def forward(self, input):
output = self.layer(input)
return output
model = MyNeuralNetwork().to("cpu")
print(model)
print(f"weight : {model.layer.weight}")
print(f"bias : {model.layer.bias}")
input = torch.tensor([1.]).view(-1,1)
model(input)
# out = model(input)
# print(out)
# y = torch.tensor([3.,6.,9.,12.,15.,18.])
</code></pre>
<p>I get an error that says
"RuntimeError: mat2 must be a matrix, got 1-D tensor".</p>
<p>What should I do to fix this problem?
Thanks OTL....</p>
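<p>The error is consistent with the weight tensor being 1-D: <code>nn.Linear(1, 1)</code> expects its weight to have shape <code>(out_features, in_features)</code>, i.e. <code>(1, 1)</code>, but <code>torch.rand(1)</code> produces shape <code>(1,)</code>. A sketch of the fix:</p>

```python
import torch
import torch.nn as nn

class MyNeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(in_features=1, out_features=1, bias=True)
        # Linear weights must be 2-D with shape (out_features, in_features)
        self.layer.weight = nn.Parameter(torch.rand(1, 1))
        self.layer.bias = nn.Parameter(torch.rand(1))

    def forward(self, x):
        return self.layer(x)

model = MyNeuralNetwork()
out = model(torch.tensor([1.]).view(-1, 1))
print(out.shape)  # torch.Size([1, 1])
```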
|
<python><deep-learning><pytorch><neural-network><gradient-descent>
|
2023-02-16 08:29:54
| 1
| 403
|
Umgee
|
75,469,506
| 8,406,122
|
Finding the difference in number of words in a sentence in python
|
<p>Say I have 2 sentences,</p>
<pre><code>Ref: Q R CODE SCANNER APP EXIT KARE
Hyp: WORKOUTS SCANNER APP EXIT KARE
</code></pre>
<p>Here we can see that <code>Ref</code> differs from <code>Hyp</code> by 3 words, since <code>Q R CODE</code> is not present in Hyp. Is there any built-in function in Python that will check this and output 3 as a result?</p>
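<p>There isn't a single built-in that does exactly this, but the standard library's <code>collections.Counter</code> gets close in a couple of lines (a sketch; it counts words present in <code>Ref</code> but missing from <code>Hyp</code>, with multiplicity):</p>

```python
from collections import Counter

ref = "Q R CODE SCANNER APP EXIT KARE"
hyp = "WORKOUTS SCANNER APP EXIT KARE"

# Counter subtraction keeps only words Ref has more copies of than Hyp
missing = Counter(ref.split()) - Counter(hyp.split())
print(sum(missing.values()))  # 3  (Q, R and CODE)
```

<p>Note this treats the sentences as bags of words and ignores word order; order-aware comparison would need something like <code>difflib.SequenceMatcher</code>.</p>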
|
<python>
|
2023-02-16 08:21:33
| 1
| 377
|
Turing101
|
75,469,436
| 11,267,783
|
Interpolation Scipy in Python with meshgrid
|
<p>I would like to do the same interpolation as MATLAB in Python with scipy. Here is an example of my code.</p>
<p>This is what I have in MATLAB :</p>
<pre class="lang-matlab prettyprint-override"><code>x = linspace(-10,10,10);
y = linspace(-5,5,10);
DATA = rand(10,10);
[XX,YY] = ndgrid(x,y);
XX2 = XX;
YY2 = YY;
DATA2 = interpn(XX,YY,DATA,XX2,YY2);
</code></pre>
<p>I try to do it in Python, but it seems difficult to do with matrices in meshgrid format.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import scipy.interpolate
x = np.linspace(-10,10,10)
y = np.linspace(-5,5,10)
DATA = np.random.rand(10,10)
[XX,YY] = np.meshgrid(x,y,indexing='ij')
XX2 = XX
YY2 = YY
DATA2 = scipy.interpolate.interpn(XX,YY,DATA,XX2,YY2) # NOT WORKING
</code></pre>
<p>Any ideas on how to solve this issue ?</p>
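<p>One possible fix (a sketch): <code>scipy.interpolate.interpn</code> takes the grid as a tuple of 1-D coordinate arrays, not the full meshgrid matrices, and the query points as an array of coordinate pairs. Here the data is interpolated back onto the same grid, so the result should reproduce <code>DATA</code>:</p>

```python
import numpy as np
import scipy.interpolate

x = np.linspace(-10, 10, 10)
y = np.linspace(-5, 5, 10)
DATA = np.random.rand(10, 10)
XX2, YY2 = np.meshgrid(x, y, indexing='ij')

# Grid goes in as 1-D vectors; query points as an (N, 2) array of (x, y) pairs
points = np.stack([XX2.ravel(), YY2.ravel()], axis=-1)
DATA2 = scipy.interpolate.interpn((x, y), DATA, points).reshape(XX2.shape)
print(np.allclose(DATA, DATA2))  # True
```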
|
<python><numpy><matlab><scipy><interpolation>
|
2023-02-16 08:13:24
| 1
| 322
|
Mo0nKizz
|
75,469,194
| 1,409,374
|
How to filter DataFrame by any string column matching any regex pattern in a list?
|
<p>String columns can be selected with:</p>
<pre class="lang-python prettyprint-override"><code>df.select(pl.col(pl.String))
</code></pre>
<p>and a Dataframe's rows can be filtered with a regex pattern for a single column, with something like:</p>
<pre class="lang-python prettyprint-override"><code>df.filter(pl.col("feature").str.contains("dangerous"))
</code></pre>
<p>How can a DataFrame be filtered with a list of regex patterns that could appear in any string column? I.e., if any string in a row matches any regex pattern, then keep that entire row, discard the rest.</p>
<p><strong>EDIT 1</strong></p>
<p>Here's a generated <code>df</code> and <code>patterns</code> to test functionality and performance.</p>
<pre class="lang-python prettyprint-override"><code>import random
from faker import Faker
import polars as pl
random.seed(42)
Faker.seed(42)
faker = Faker()
df_len = 10000
df = pl.DataFrame(
[
pl.Series("a", [random.randint(0, 511) for _ in range(df_len)]).cast(pl.Binary),
pl.Series("b", [random.randint(0, 1) for _ in range(df_len)]).cast(pl.Boolean),
pl.Series("c", faker.sentences(df_len), pl.String),
pl.Series("d", [random.randint(0, 255) for _ in range(df_len)], pl.UInt8),
pl.Series("e", faker.words(df_len), pl.String),
pl.Series(
"f",
[random.randint(0, 255) * random.TWOPI for _ in range(df_len)],
pl.Float32,
),
pl.Series("g", faker.words(df_len), pl.String),
]
)
patterns = [r"(?i)dangerous", r"always", r"(?i)prevent"]
</code></pre>
<p><code>print(df)</code> yields:</p>
<pre class="lang-none prettyprint-override"><code>shape: (10_000, 7)
┌────────┬───────┬─────────────────────────────────┬─────┬───────────┬─────────────┬──────────┐
│ a ┆ b ┆ c ┆ d ┆ e ┆ f ┆ g │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ binary ┆ bool ┆ str ┆ u8 ┆ str ┆ f32 ┆ str │
╞════════╪═══════╪═════════════════════════════════╪═════╪═══════════╪═════════════╪══════════╡
│ b"114" ┆ false ┆ Agent every development say. ┆ 164 ┆ let ┆ 980.17688 ┆ yard │
│ b"25" ┆ true ┆ Beautiful instead ahead despit… ┆ 210 ┆ reach ┆ 458.672516 ┆ son │
│ b"281" ┆ false ┆ Information last everything th… ┆ 230 ┆ arm ┆ 50.265484 ┆ standard │
│ b"250" ┆ false ┆ Choice whatever from behavior … ┆ 29 ┆ operation ┆ 929.911438 ┆ final │
│ b"228" ┆ false ┆ Page southern role movie win h… ┆ 242 ┆ coach ┆ 1149.822876 ┆ none │
│ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … │
│ b"30" ┆ true ┆ Huge course partner. ┆ 249 ┆ media ┆ 1118.406982 ┆ movement │
│ b"33" ┆ true ┆ Building sign recently avoid u… ┆ 132 ┆ practice ┆ 282.743347 ┆ big │
│ b"346" ┆ false ┆ Paper will board. ┆ 72 ┆ similar ┆ 376.991119 ┆ just │
│ b"431" ┆ true ┆ Technology money worker spring… ┆ 140 ┆ sign ┆ 94.24778 ┆ audience │
│ b"267" ┆ false ┆ A third traditional ago. ┆ 40 ┆ available ┆ 615.752136 ┆ always │
└────────┴───────┴─────────────────────────────────┴─────┴───────────┴─────────────┴──────────┘
</code></pre>
<p><strong>EDIT 2</strong></p>
<p>Using <a href="https://stackoverflow.com/a/75470301/1409374">@jqurious's answer</a> (the fastest so far), the correct output of <code>df.filter(pl.any_horizontal(pl.col(pl.String).str.contains(regex)))</code> is:</p>
<pre class="lang-none prettyprint-override"><code>shape: (146, 7)
┌────────┬───────┬─────────────────────────────────┬─────┬───────────┬─────────────┬──────────┐
│ a ┆ b ┆ c ┆ d ┆ e ┆ f ┆ g │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ binary ┆ bool ┆ str ┆ u8 ┆ str ┆ f32 ┆ str │
╞════════╪═══════╪═════════════════════════════════╪═════╪═══════════╪═════════════╪══════════╡
│ b"57" ┆ true ┆ During prevent accept seem sho… ┆ 137 ┆ various ┆ 471.238892 ┆ customer │
│ b"269" ┆ true ┆ Ball always it focus economy b… ┆ 179 ┆ key ┆ 471.238892 ┆ guy │
│ b"250" ┆ false ┆ Admit attack energy always. ┆ 175 ┆ purpose ┆ 1281.769775 ┆ wonder │
│ b"82" ┆ false ┆ Beyond prevent entire staff. ┆ 242 ┆ hair ┆ 904.778687 ┆ around │
│ b"186" ┆ false ┆ Suffer accept letter visit alw… ┆ 134 ┆ magazine ┆ 12.566371 ┆ dream │
│ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … │
│ b"464" ┆ true ┆ Kid prevent avoid quite brothe… ┆ 153 ┆ visit ┆ 879.645935 ┆ anything │
│ b"426" ┆ true ┆ Your sure piece simple always … ┆ 247 ┆ recently ┆ 1055.575073 ┆ laugh │
│ b"403" ┆ false ┆ Difference all machine let cha… ┆ 178 ┆ former ┆ 1061.858276 ┆ always │
│ b"184" ┆ true ┆ Morning carry event tell preve… ┆ 3 ┆ entire ┆ 1432.566284 ┆ hit │
│ b"267" ┆ false ┆ A third traditional ago. ┆ 40 ┆ available ┆ 615.752136 ┆ always │
└────────┴───────┴─────────────────────────────────┴─────┴───────────┴─────────────┴──────────┘
</code></pre>
|
<python><python-polars>
|
2023-02-16 07:46:49
| 2
| 12,042
|
rickhg12hs
|
75,468,967
| 8,406,122
|
Extracting and replacing a particular string from a sentence in python
|
<p>Say I have a string,</p>
<pre><code>s1="Hey Siri open call up duty"
</code></pre>
<p>and another string</p>
<p><code>s2="call up duty"</code>.</p>
<p>Now I know that "call up duty" should be replaced by "call of duty".</p>
<p>Say <code>s3="call of duty"</code>.</p>
<p>So what I want to do is delete s2 from s1 and place s3 in its location. I am not sure how this can be done. Can anyone please guide me, as I am new to Python? The answer should be</p>
<pre><code>"Hey siri open call of duty"
</code></pre>
<p>Note: <code>s2</code> can appear anywhere within the string <code>s1</code> and need not be at the end every time.</p>
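<p>Since <code>str.replace</code> substitutes a substring wherever it occurs in the string, a sketch of the simplest approach:</p>

```python
s1 = "Hey Siri open call up duty"
s2 = "call up duty"
s3 = "call of duty"

# Replace every occurrence of s2 inside s1 with s3, wherever it appears
result = s1.replace(s2, s3)
print(result)  # Hey Siri open call of duty
```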
|
<python><string>
|
2023-02-16 07:21:02
| 2
| 377
|
Turing101
|
75,468,853
| 10,097,229
|
Calculate difference of rows in Pandas
|
<p>I have a timeseries dataframe where there are alerts for some particular rows. The dataframe looks like-</p>
<pre><code>machineID  time              vibration  alerts
1          2023-02-15 11:45  220        1
1          2023-02-15 12:00  221        0
1          2023-02-15 12:15  219        0
1          2023-02-15 12:30  220        1
1          2023-02-16 11:45  220        1
1          2023-02-16 12:00  221        1
1          2023-02-16 12:15  219        0
1          2023-02-16 12:30  220        1
</code></pre>
<p>I want to calculate the difference of the <code>alerts</code> column for each day. But since the <code>time</code> column is at 15-minute intervals, I am not sure how to group by whole day, i.e., sum the alerts for each day and compare them with the sum of all alerts of the previous day.</p>
<p>In short, I need a way to sum all alerts for each day and subtract the previous day's sum. The result should be another dataframe with a date column and a difference-of-alerts column. In this case, the new dataframe will be-</p>
<pre><code>time diff_alerts
2023-02-16 1
</code></pre>
<p>since there is difference of 1 alert on the next day i.e. 16-02-2023</p>
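<p>One way to sketch this (assuming <code>time</code> is, or can be parsed to, datetimes): group by calendar date, sum the alerts, take the day-over-day difference, and drop the first day, which has no predecessor:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "time": pd.to_datetime([
        "2023-02-15 11:45", "2023-02-15 12:00", "2023-02-15 12:15", "2023-02-15 12:30",
        "2023-02-16 11:45", "2023-02-16 12:00", "2023-02-16 12:15", "2023-02-16 12:30",
    ]),
    "alerts": [1, 0, 0, 1, 1, 1, 0, 1],
})

# Daily alert totals: 2023-02-15 -> 2, 2023-02-16 -> 3
daily = df.groupby(df["time"].dt.date)["alerts"].sum()
# Day-over-day difference; the first day has no previous day to compare with
result = daily.diff().dropna().astype(int).rename("diff_alerts").reset_index()
print(result)
```

<p>With multiple machines, the same idea applies after grouping by <code>["machineID", df["time"].dt.date]</code> and taking the diff within each machine.</p>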
|
<python><pandas><datetime><time-series>
|
2023-02-16 07:06:34
| 1
| 1,137
|
PeakyBlinder
|
75,468,851
| 7,873,570
|
BigQuery : Python Client not capturing errors found in audit logs when running multiple statements in a single query
|
<p>BigQuery: the Python client is not capturing errors found in the audit logs when running multiple statements in a single query (for example, a DECLARE variable statement before a CREATE TABLE statement).</p>
<p>There was a syntax issue in the variable declaration in the query. The Python client is not able to catch the error and deems the status as done. However, inspection of the audit log trail shows the error.</p>
<p>BQ Client Code Snippet:</p>
<pre><code> query_job = self.client.query(query)
print(dir(query_job))
# print(query_job.exception())
with suppress(TypeError):
##https://github.com/googleapis/python-bigquery/issues/1459
exc = query_job.exception()
if exc:
raise exc
while (query_job.state != 'DONE'):
print("Job {} is currently in state {}".format(query_job.job_id, query_job.state))
if query_job.errors is not None:
raise Exception("Bigquery job failed with error {}".format(query_job.errors))
query_job = self.client.get_job(query_job.job_id)
print(query_job.total_bytes_processed)
print(query_job.errors)
time.sleep(wait)
</code></pre>
<p>The log trail shows below error :</p>
<pre><code>jobStatus: {
error: {
code: 11
message: "Query error: Unrecognized name: varname at [4:56]"
}
state: "DONE"
}}}}
serviceName: "bigquery.googleapis.com"
</code></pre>
<p>Is there a way to capture such errors/exceptions?</p>
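One possible direction, sketched below under the assumption that each statement of a multi-statement script runs as a child job of the parent: block on <code>result()</code> (which raises on SQL errors) and then inspect child jobs via <code>list_jobs(parent_job=...)</code>. The function name and the <code>RuntimeError</code> are illustrative, not part of the BigQuery API; <code>client</code> is assumed to be an authenticated <code>google.cloud.bigquery.Client</code>.

```python
def raise_script_errors(client, query):
    """Run a (possibly multi-statement) query and surface child-job errors.

    `client` is assumed to be a google.cloud.bigquery.Client; only public
    client methods (query, list_jobs) are used here.
    """
    parent = client.query(query)
    # result() blocks until the script finishes and raises on SQL errors
    # that exception() may miss for multi-statement jobs.
    parent.result()
    # Each statement in a script runs as a child job; check them individually.
    for child in client.list_jobs(parent_job=parent.job_id):
        if child.errors:
            raise RuntimeError(
                "Child job {} failed: {}".format(child.job_id, child.errors))
    return parent
```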
|
<python><exception><google-cloud-platform><google-bigquery>
|
2023-02-16 07:06:32
| 2
| 330
|
Nixon Raphy
|
75,468,753
| 18,096,205
|
python -V and python3 -V give different results
|
<p><code>python -V</code> and <code>python3 -V</code> give different results.</p>
<h1>Current status.</h1>
<pre><code>$ python -V
Python 3.10.10
</code></pre>
<pre><code>$ python3 -V
Python 3.9.16
</code></pre>
<p>Output results will be different.</p>
<h1>What I did</h1>
<p>Switched versions in pyenv.</p>
<pre><code>$ pyenv global 3.10.10
</code></pre>
<h1>What I want to do.</h1>
<p>I want both to be Python 3.10.10.
I've tried everything I could think of, but I couldn't figure out how to do it.
I would appreciate it if you could tell me how.</p>
|
<python><python-3.x><pyenv>
|
2023-02-16 06:53:19
| 0
| 349
|
Tdayo
|
75,468,689
| 13,142,245
|
How to define an MDP as a python function?
|
<p>I’m interested in defining a Markov decision process (MDP) as a Python function. It would need to interface with the PyTorch API for reinforcement learning, and that constraint shapes the function’s form, inputs and outputs.</p>
<p>For context, my problem involves optimally placing items in a warehouse without knowing the value of future items that might arrive. Anticipating these arrivals would limit the greedy behavior of the algorithm, effectively reserving some high-value locations for high-value items that might arrive, as learned by the RL model.</p>
<p>How can I best define such a function? (I'm not asking about the business logic, but about the requirements on its form, inputs, outputs, etc.) What does PyTorch expect of an MDP?</p>
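PyTorch itself does not prescribe an MDP interface, but most RL code written with it expects a Gym-style environment object exposing <code>reset()</code> and <code>step(action)</code> rather than a single function. A minimal sketch of that shape; the class name, state encoding, and reward are hypothetical placeholders, not the asker's actual warehouse logic:

```python
class WarehouseEnv:
    """Gym-style MDP sketch: reset() -> obs, step(action) -> (obs, reward, done, info)."""

    def __init__(self, n_locations=10):
        self.n_locations = n_locations
        self.free = None

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.free = [True] * self.n_locations
        return self._obs()

    def step(self, action):
        """Apply an action (a location index); return (obs, reward, done, info)."""
        reward = 1.0 if self.free[action] else -1.0  # placeholder reward
        self.free[action] = False
        done = not any(self.free)
        return self._obs(), reward, done, {}

    def _obs(self):
        # Encode state as a flat list of 0/1 occupancy flags, easy to
        # convert to a torch tensor on the agent side.
        return [1.0 if f else 0.0 for f in self.free]
```

An RL training loop then converts observations to tensors and feeds them to a policy network; the environment itself stays framework-agnostic.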
|
<python><optimization><pytorch><reinforcement-learning><markov-decision-process>
|
2023-02-16 06:44:26
| 1
| 1,238
|
jbuddy_13
|
75,468,512
| 9,768,643
|
Create DataFrame from df1 and df2 and take empty value from df2 for column value if not exist in df1 column value
|
<pre><code>df1 = pd.DataFrame({'call_sign': ['ASD','BSD','CDSF','GFDFD','FHHH'],'frn':['123','124','','656','']})
df2 = pd.DataFrame({'call_sign': ['ASD','CDSF','BSD','GFDFD','FHHH'],'frn':['1234','','124','','765']})
</code></pre>
<p>need to get a new df like</p>
<pre><code>df2 = pd.DataFrame({'call_sign': ['ASD','BSD','CDSF','GFDFD','FHHH'],'frn':['123','','124','656','765']})
</code></pre>
<p>I need to take <code>frn</code> from df2 when it's missing in df1 and create a new df.</p>
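One way to sketch this, treating empty strings as missing and filling from df2 matched on <code>call_sign</code>:

```python
import pandas as pd

df1 = pd.DataFrame({'call_sign': ['ASD', 'BSD', 'CDSF', 'GFDFD', 'FHHH'],
                    'frn': ['123', '124', '', '656', '']})
df2 = pd.DataFrame({'call_sign': ['ASD', 'CDSF', 'BSD', 'GFDFD', 'FHHH'],
                    'frn': ['1234', '', '124', '', '765']})

# Build a call_sign -> frn lookup from df2, treating '' as missing
lookup = df2.set_index('call_sign')['frn']
lookup = lookup.mask(lookup == '')

result = df1.copy()
# Where df1's frn is empty, take df2's value; if both are empty, keep ''
result['frn'] = (result['frn']
                 .mask(result['frn'] == '', result['call_sign'].map(lookup))
                 .fillna(''))
```

Note this keeps df1's value when both frames have one (e.g. BSD stays '124'), which differs slightly from the expected output shown above; adjust the precedence if df2 should win.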
|
<python><pandas><dataframe>
|
2023-02-16 06:18:43
| 3
| 836
|
abhi krishnan
|
75,468,349
| 498,739
|
requirements.txt: any version < 1.5 (incl dev and rc versions)
|
<p>I am looking for a pattern for Python's requirements.txt (for use with pip and Python 3.10) which will cover all versions available up to version 1.5, e.g.:</p>
<ul>
<li>1.4.2.dev5+g470a8b8</li>
<li>1.4.dev22+g2be722f</li>
<li>1.4</li>
<li>1.4rc0</li>
<li>1.5rc1</li>
</ul>
<p>And: is there a clever way to test this without actually running "pip install" in a fresh venv?</p>
|
<python><pip><python-packaging>
|
2023-02-16 05:54:07
| 1
| 2,793
|
herrjeh42
|
75,468,333
| 8,481,155
|
Apache Beam Find Top N elements Python SDK
|
<p>My requirement is to read from a BQ table, do some processing, select the Top N rows on the "score" column and write it to the BQ Table and also publish the rows as a PubSub message.</p>
<p>I made a sample below to create a PCollection and select top 5 rows from this PCollection based on the value of "score".</p>
<pre><code>import apache_beam as beam
with beam.Pipeline() as p:
elements = (
p
| beam.Create([{"name": "Bob", "score": 5},
{"name":"Adam", "score": 5},
{"name":"John", "score": 2},
{"name":"Jose", "score": 1},
{"name":"Tim", "score": 1},
{"name":"Bryan", "score": 1},
{"name":"Kim", "score":1}])
| "Filter Top 5 scores" >> beam.combiners.Top.Of(5, key=lambda t: t['score'])
| "Print" >> beam.Map(print)
)
# Write to BQ
# Publish to PubSub
</code></pre>
<p>This returns a list instead of individual elements, so I'm unable to write it back to a BQ table or publish to PubSub in this format.</p>
<p>What is the best way to select the top N elements but keep them as a PCollection?</p>
<p>In my real use case I might have around 2 million rows and need to select 800k records from them based on a column.
Also, in one of the Apache Beam Summit videos, I remember hearing that the Top function keeps the results on one worker node instead of distributing them. That's why I assume it is a list. What would be the maximum number of rows before this could break?</p>
|
<python><google-cloud-dataflow><apache-beam>
|
2023-02-16 05:52:05
| 1
| 701
|
Ashok KS
|
75,468,249
| 6,013,016
|
Deleting elements from list or recreating new list (python)
|
<p>What is the best/fastest way to delete objects from a list?
Deleting some objects:</p>
<pre><code>[objects.remove(o) for o in objects if o.x <= 0]
</code></pre>
<p>or recreating the list:</p>
<pre><code>new_objects = [o for o in objects if o.x > 0]
</code></pre>
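Beyond speed, the first form is subtly buggy: removing items while iterating over the same list shifts elements under the loop and skips some of them. A small demonstration with ints standing in for objects (<code>o &lt;= 0</code> plays the role of <code>o.x &lt;= 0</code>):

```python
# Mutating while iterating: after removing -1, the list shifts left and
# the loop's next index lands on 3, skipping -2 entirely.
objs = [-1, -2, 3]
[objs.remove(o) for o in objs if o <= 0]
# objs is now [-2, 3] -- the -2 survived

# Rebuilding the list is both correct and typically faster
kept = [o for o in [-1, -2, 3] if o > 0]
# kept is [3]
```

So the second approach (building a new list, or assigning back with <code>objects[:] = [...]</code> if other references must see the change) is the usual idiom.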
|
<python><list><object>
|
2023-02-16 05:39:30
| 3
| 5,926
|
Scott
|
75,467,961
| 2,399,453
|
docker compose Flask app returning empty response
|
<p>I am trying to play with a dockerized Flask app using docker-compose. The Flask app is a hello-world app that works correctly when I test it on my host.</p>
<p>The docker-compose file looks like this:</p>
<pre><code>version: '3'
services:
web:
image: ubuntu
build:
context: ./
dockerfile: Dockerfile
ports:
- "6000:5000"
</code></pre>
<p>My dockerfile looks like this:</p>
<pre><code>FROM ubuntu
RUN apt-get update
RUN apt-get install -y python3 python3-pip
RUN pip3 install Flask
COPY app.py .
EXPOSE 5000
CMD ["python3", "app.py"]
</code></pre>
<p>The hello world app looks like this:</p>
<pre><code>from flask import Flask, request
app = Flask(__name__)
@app.route("/")
def index():
return "Hello, World!"
if __name__ == "__main__":
app.run(debug=True)
</code></pre>
<p>When I bring up the containers using docker-compose up -d, there is no error. When I curl to localhost:6000, I get this :</p>
<pre><code>curl -X PUT localhost:6000
curl: (52) Empty reply from server
</code></pre>
<p>It seems like the app is responding, but not the way it does when I run it on my host: it just returns an empty reply instead of "Hello, World!" when I curl it. What am I doing wrong?</p>
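A commonly reported cause of this symptom (a sketch, not verified against this exact setup): Flask's dev server binds to 127.0.0.1 by default, which inside a container is unreachable from the published port. Binding to all interfaces makes the <code>6000:5000</code> mapping work:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, World!"

if __name__ == "__main__":
    # The default host 127.0.0.1 is only reachable from inside the
    # container; bind to 0.0.0.0 so the published port can reach it.
    app.run(host="0.0.0.0", port=5000, debug=True)
```

Separately, <code>curl -X PUT</code> sends a PUT to a route that only accepts GET, so a plain <code>curl localhost:6000</code> is the cleaner test once the host binding is fixed.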
|
<python><flask><docker-compose><dockerfile>
|
2023-02-16 04:48:17
| 1
| 3,152
|
user2399453
|
75,467,927
| 887,290
|
AttributeError: 'str' object has no attribute 'items' when using RawframeDataset with mvit in mmaction2
|
<p>I am using <a href="https://github.com/open-mmlab/mmaction2/tree/1.x" rel="nofollow noreferrer">mmaction2 branch 1.x</a>. I recently migrated from <a href="https://github.com/open-mmlab/mmaction2/tree/v0.24.1" rel="nofollow noreferrer">0.24</a> and want to use <a href="https://github.com/open-mmlab/mmaction2/blob/1.x/configs/recognition/mvit/mvit-large-p244_u40_sthv2-rgb.py" rel="nofollow noreferrer">mvit</a> model. When I train my configuration with RawframeDataset, it stops with message: <code>AttributeError: 'str' object has no attribute 'items'</code> (please see below for detailed log). Any suggestion? Thank you.</p>
<h2>Configuration</h2>
<pre><code>#something_mvit.py
_base_ = [
'mmaction2/configs/_base_/models/mvit_small.py', 'mmaction2/configs/_base_/default_runtime.py'
]
repeat_times = 1
num_classes = 10
batch_size = 1
# model settings
model = dict(
backbone=dict(
arch='large',
temporal_size=40,
spatial_size=312,
drop_path_rate=0.75,
),
data_preprocessor=dict(
type='ActionDataPreprocessor',
mean=[114.75, 114.75, 114.75],
std=[57.375, 57.375, 57.375],
blending=dict(
type='RandomBatchAugment',
augments=[
dict(type='MixupBlending', alpha=0.8, num_classes=400),
dict(type='CutmixBlending', alpha=1, num_classes=400)
]),
format_shape='NCTHW'),
cls_head=dict(in_channels=1152, num_classes=num_classes),
test_cfg=dict(max_testing_views=5))
# dataset settings
dataset_type = 'RawframeDataset'
data_root = 'dataset'
data_root_val = 'dataset'
ann_file_train = 'dataset/train_rawdataset.txt'
ann_file_val = 'dataset/val_rawdataset.txt'
ann_file_test = 'dataset/test_rawdataset.txt'
file_client_args = dict(io_backend='disk')
train_pipeline = [
dict(type='UniformSampleFrames', clip_len=40),
dict(type='RawFrameDecode', **file_client_args),
dict(type='Resize', scale=(-1, 256)),
dict(
type='PytorchVideoWrapper',
op='RandAugment',
magnitude=7,
num_layers=4),
dict(type='RandomErasing', erase_prob=0.25, mode='rand'),
dict(type='FormatShape', input_format='NCTHW'),
dict(type='PackActionInputs')
]
val_pipeline = [
dict(type='UniformSampleFrames', clip_len=40, test_mode=True),
dict(type='RawFrameDecode', **file_client_args),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(type='FormatShape', input_format='NCTHW'),
dict(type='PackActionInputs')
]
test_pipeline = [
dict(type='UniformSampleFrames', clip_len=40, test_mode=True),
dict(type='RawFrameDecode', **file_client_args),
dict(type='Resize', scale=(-1, 224)),
dict(type='ThreeCrop', crop_size=224),
dict(type='FormatShape', input_format='NCTHW'),
dict(type='PackActionInputs')
]
with_offset = True
filename_tmpl = '{:05}.jpg'
train_dataloader = dict(
batch_size=batch_size,
num_workers=1,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=True),
dataset=dict(
type='RepeatDataset',
times=repeat_times,
dataset=dict(
type=dataset_type,
ann_file=ann_file_train,
data_prefix=data_root,
num_classes=num_classes,
with_offset=with_offset,
filename_tmpl=filename_tmpl,
pipeline=train_pipeline)))
val_dataloader = dict(
batch_size=batch_size,
num_workers=1,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type='RepeatDataset',
times=repeat_times,
dataset=dict(
type=dataset_type,
ann_file=ann_file_val,
data_prefix=data_root_val,
num_classes=num_classes,
with_offset=with_offset,
filename_tmpl=filename_tmpl,
pipeline=val_pipeline)))
test_dataloader = dict(
batch_size=1,
num_workers=1,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type=dataset_type,
ann_file=ann_file_test,
data_prefix=data_root_val,
num_classes=num_classes,
with_offset=with_offset,
filename_tmpl=filename_tmpl,
pipeline=test_pipeline))
val_evaluator = dict(type='AccMetric')
test_evaluator = val_evaluator
train_cfg = dict(
type='EpochBasedTrainLoop', max_epochs=100, val_begin=1, val_interval=3)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
base_lr = 1.6e-3
optim_wrapper = dict(
type='AmpOptimWrapper',
optimizer=dict(
type='AdamW', lr=base_lr, betas=(0.9, 0.999), weight_decay=0.05))
param_scheduler = [
dict(
type='LinearLR',
start_factor=0.1,
by_epoch=True,
begin=0,
end=30,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=70,
eta_min=base_lr / 100,
by_epoch=True,
begin=30,
end=100,
convert_to_iter_based=True)
]
default_hooks = dict(
checkpoint=dict(interval=3, max_keep_ckpts=5), logger=dict(interval=10))
# Default setting for scaling LR automatically
# - `enable` means enable scaling LR automatically
# or not by default.
# - `base_batch_size` = (8 GPUs) x (8 samples per GPU).
auto_scale_lr = dict(enable=False, base_batch_size=64)
# runtime settings
# checkpoint_config = dict(interval=5)
work_dir = 'runs/'
</code></pre>
<h2>Log</h2>
<pre><code>thomas@RYZEN9:~/proyek$ pyenv exec mmaction2/tools/dist_train.sh something_mvit.py 1
+ CONFIG=something_mvit.py
+ GPUS=1
+ NNODES=1
+ NODE_RANK=0
+ PORT=29500
+ MASTER_ADDR=127.0.0.1
++ dirname /home/thomas/proyek/mmaction2/tools/dist_train.sh
++ dirname /home/thomas/proyek/mmaction2/tools/dist_train.sh
+ PYTHONPATH=/home/thomas/proyek/mmaction2/tools/..:
+ python -m torch.distributed.launch --nnodes=1 --node_rank=0 --master_addr=127.0.0.1 --nproc_per_node=1 --master_port=29500 /home/thomas/proyek/mmaction2/tools/train.py something_mvit.py --launcher pytorch
/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/torch/distributed/launch.py:180: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
warnings.warn(
/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/utils/dl_utils/setup_env.py:46: UserWarning: Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
warnings.warn(
/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/utils/dl_utils/setup_env.py:56: UserWarning: Setting MKL_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
warnings.warn(
02/16 09:14:50 - mmengine - WARNING - The "log_processor" registry in mmaction did not set import location. Fallback to call `mmaction.utils.register_all_modules` instead.
02/16 09:14:51 - mmengine - INFO -
------------------------------------------------------------
System environment:
sys.platform: linux
Python: 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0]
CUDA available: True
numpy_random_seed: 209724671
GPU 0: NVIDIA GeForce RTX 3070
CUDA_HOME: None
GCC: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
PyTorch: 1.13.1+cu117
PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.7
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
- CuDNN 8.5
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.13.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.14.1+cu117
OpenCV: 4.7.0
MMEngine: 0.5.0
Runtime environment:
cudnn_benchmark: False
mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
dist_cfg: {'backend': 'nccl'}
seed: None
diff_rank_seed: False
deterministic: False
Distributed launcher: pytorch
Distributed training: True
GPU number: 1
------------------------------------------------------------
02/16 09:14:51 - mmengine - INFO - Config:
model = dict(
type='Recognizer3D',
backbone=dict(
type='MViT',
arch='large',
drop_path_rate=0.75,
temporal_size=40,
spatial_size=312),
data_preprocessor=dict(
type='ActionDataPreprocessor',
mean=[114.75, 114.75, 114.75],
std=[57.375, 57.375, 57.375],
format_shape='NCTHW',
blending=dict(
type='RandomBatchAugment',
augments=[
dict(type='MixupBlending', alpha=0.8, num_classes=400),
dict(type='CutmixBlending', alpha=1, num_classes=400)
])),
cls_head=dict(
type='MViTHead',
in_channels=1152,
num_classes=10,
label_smooth_eps=0.1,
average_clips='prob'),
test_cfg=dict(max_testing_views=5))
default_scope = 'mmaction'
default_hooks = dict(
runtime_info=dict(type='RuntimeInfoHook'),
timer=dict(type='IterTimerHook'),
logger=dict(type='LoggerHook', interval=10, ignore_last=False),
param_scheduler=dict(type='ParamSchedulerHook'),
checkpoint=dict(
type='CheckpointHook', interval=3, save_best='auto', max_keep_ckpts=5),
sampler_seed=dict(type='DistSamplerSeedHook'),
sync_buffers=dict(type='SyncBuffersHook'))
env_cfg = dict(
cudnn_benchmark=False,
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
dist_cfg=dict(backend='nccl'))
log_processor = dict(type='LogProcessor', window_size=20, by_epoch=True)
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
type='ActionVisualizer', vis_backends=[dict(type='LocalVisBackend')])
log_level = 'INFO'
load_from = None
resume = False
repeat_times = 1
num_classes = 10
batch_size = 1
dataset_type = 'RawframeDataset'
data_root = 'dataset'
data_root_val = 'dataset'
ann_file_train = 'dataset/train_rawdataset.txt'
ann_file_val = 'dataset/val_rawdataset.txt'
ann_file_test = 'dataset/test_rawdataset.txt'
file_client_args = dict(io_backend='disk')
train_pipeline = [
dict(type='UniformSampleFrames', clip_len=40),
dict(type='RawFrameDecode', io_backend='disk'),
dict(type='Resize', scale=(-1, 256)),
dict(
type='PytorchVideoWrapper',
op='RandAugment',
magnitude=7,
num_layers=4),
dict(type='RandomErasing', erase_prob=0.25, mode='rand'),
dict(type='FormatShape', input_format='NCTHW'),
dict(type='PackActionInputs')
]
val_pipeline = [
dict(type='UniformSampleFrames', clip_len=40, test_mode=True),
dict(type='RawFrameDecode', io_backend='disk'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(type='FormatShape', input_format='NCTHW'),
dict(type='PackActionInputs')
]
test_pipeline = [
dict(type='UniformSampleFrames', clip_len=40, test_mode=True),
dict(type='RawFrameDecode', io_backend='disk'),
dict(type='Resize', scale=(-1, 224)),
dict(type='ThreeCrop', crop_size=224),
dict(type='FormatShape', input_format='NCTHW'),
dict(type='PackActionInputs')
]
with_offset = True
filename_tmpl = '{:05}.jpg'
train_dataloader = dict(
batch_size=1,
num_workers=1,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=True),
dataset=dict(
type='RepeatDataset',
times=1,
dataset=dict(
type='RawframeDataset',
ann_file='dataset/train_rawdataset.txt',
data_prefix='dataset',
num_classes=10,
with_offset=True,
filename_tmpl='{:05}.jpg',
pipeline=[
dict(type='UniformSampleFrames', clip_len=40),
dict(type='RawFrameDecode', io_backend='disk'),
dict(type='Resize', scale=(-1, 256)),
dict(
type='PytorchVideoWrapper',
op='RandAugment',
magnitude=7,
num_layers=4),
dict(type='RandomErasing', erase_prob=0.25, mode='rand'),
dict(type='FormatShape', input_format='NCTHW'),
dict(type='PackActionInputs')
])))
val_dataloader = dict(
batch_size=1,
num_workers=1,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type='RepeatDataset',
times=1,
dataset=dict(
type='RawframeDataset',
ann_file='dataset/val_rawdataset.txt',
data_prefix='dataset',
num_classes=10,
with_offset=True,
filename_tmpl='{:05}.jpg',
pipeline=[
dict(type='UniformSampleFrames', clip_len=40, test_mode=True),
dict(type='RawFrameDecode', io_backend='disk'),
dict(type='Resize', scale=(-1, 256)),
dict(type='CenterCrop', crop_size=224),
dict(type='FormatShape', input_format='NCTHW'),
dict(type='PackActionInputs')
])))
test_dataloader = dict(
batch_size=1,
num_workers=1,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type='RawframeDataset',
ann_file='dataset/test_rawdataset.txt',
data_prefix='dataset',
num_classes=10,
with_offset=True,
filename_tmpl='{:05}.jpg',
pipeline=[
dict(type='UniformSampleFrames', clip_len=40, test_mode=True),
dict(type='RawFrameDecode', io_backend='disk'),
dict(type='Resize', scale=(-1, 224)),
dict(type='ThreeCrop', crop_size=224),
dict(type='FormatShape', input_format='NCTHW'),
dict(type='PackActionInputs')
]))
val_evaluator = dict(type='AccMetric')
test_evaluator = dict(type='AccMetric')
train_cfg = dict(
type='EpochBasedTrainLoop', max_epochs=100, val_begin=1, val_interval=3)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
base_lr = 0.0016
optim_wrapper = dict(
type='AmpOptimWrapper',
optimizer=dict(
type='AdamW', lr=0.0016, betas=(0.9, 0.999), weight_decay=0.05))
param_scheduler = [
dict(
type='LinearLR',
start_factor=0.1,
by_epoch=True,
begin=0,
end=30,
convert_to_iter_based=True),
dict(
type='CosineAnnealingLR',
T_max=70,
eta_min=1.6e-05,
by_epoch=True,
begin=30,
end=100,
convert_to_iter_based=True)
]
auto_scale_lr = dict(enable=False, base_batch_size=64)
work_dir = 'runs/'
launcher = 'pytorch'
randomness = dict(seed=None, diff_rank_seed=False, deterministic=False)
02/16 09:14:51 - mmengine - WARNING - The "visualizer" registry in mmaction did not set import location. Fallback to call `mmaction.utils.register_all_modules` instead.
02/16 09:14:51 - mmengine - WARNING - The "vis_backend" registry in mmaction did not set import location. Fallback to call `mmaction.utils.register_all_modules` instead.
02/16 09:14:51 - mmengine - WARNING - The "model" registry in mmaction did not set import location. Fallback to call `mmaction.utils.register_all_modules` instead.
02/16 09:14:53 - mmengine - WARNING - The "hook" registry in mmaction did not set import location. Fallback to call `mmaction.utils.register_all_modules` instead.
02/16 09:14:53 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
--------------------
before_train:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook
--------------------
before_train_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook
--------------------
before_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
--------------------
after_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
after_train_epoch:
(NORMAL ) IterTimerHook
(NORMAL ) SyncBuffersHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
before_val_epoch:
(NORMAL ) IterTimerHook
--------------------
before_val_iter:
(NORMAL ) IterTimerHook
--------------------
after_val_iter:
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_val_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
before_test_epoch:
(NORMAL ) IterTimerHook
--------------------
before_test_iter:
(NORMAL ) IterTimerHook
--------------------
after_test_iter:
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_run:
(BELOW_NORMAL) LoggerHook
--------------------
02/16 09:14:53 - mmengine - WARNING - The "loop" registry in mmaction did not set import location. Fallback to call `mmaction.utils.register_all_modules` instead.
02/16 09:14:53 - mmengine - WARNING - The "dataset" registry in mmaction did not set import location. Fallback to call `mmaction.utils.register_all_modules` instead.
Traceback (most recent call last):
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
obj = obj_cls(**args) # type: ignore
File "/home/thomas/proyek/mmaction2/mmaction/datasets/rawframe_dataset.py", line 99, in __init__
super().__init__(
File "/home/thomas/proyek/mmaction2/mmaction/datasets/base.py", line 48, in __init__
super().__init__(
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/dataset/base_dataset.py", line 241, in __init__
self._join_prefix()
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/dataset/base_dataset.py", line 538, in _join_prefix
for data_key, prefix in self.data_prefix.items():
AttributeError: 'str' object has no attribute 'items'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
obj = obj_cls(**args) # type: ignore
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/dataset/dataset_wrapper.py", line 207, in __init__
self.dataset = DATASETS.build(dataset)
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/registry/registry.py", line 521, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 135, in build_from_cfg
raise type(e)(
AttributeError: class `RawframeDataset` in mmaction/datasets/rawframe_dataset.py: 'str' object has no attribute 'items'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
obj = obj_cls(**args) # type: ignore
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/runner/loops.py", line 43, in __init__
super().__init__(runner, dataloader)
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/runner/base_loop.py", line 26, in __init__
self.dataloader = runner.build_dataloader(
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1333, in build_dataloader
dataset = DATASETS.build(dataset_cfg)
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/registry/registry.py", line 521, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 135, in build_from_cfg
raise type(e)(
AttributeError: class `RepeatDataset` in mmengine/dataset/dataset_wrapper.py: class `RawframeDataset` in mmaction/datasets/rawframe_dataset.py: 'str' object has no attribute 'items'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/thomas/proyek/mmaction2/tools/train.py", line 141, in <module>
main()
File "/home/thomas/proyek/mmaction2/tools/train.py", line 137, in main
runner.train()
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1656, in train
self._train_loop = self.build_train_loop(
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1448, in build_train_loop
loop = LOOPS.build(
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/registry/registry.py", line 521, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 135, in build_from_cfg
raise type(e)(
AttributeError: class `EpochBasedTrainLoop` in mmengine/runner/loops.py: class `RepeatDataset` in mmengine/dataset/dataset_wrapper.py: class `RawframeDataset` in mmaction/datasets/rawframe_dataset.py: 'str' object has no attribute 'items'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2909183) of binary: /home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/bin/python
Traceback (most recent call last):
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/torch/distributed/launch.py", line 195, in <module>
main()
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/torch/distributed/launch.py", line 191, in main
launch(args)
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/torch/distributed/launch.py", line 176, in launch
run(args)
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/thomas/.pyenv/versions/miniconda3-3.10-22.11.1-1/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/home/thomas/proyek/mmaction2/tools/train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-02-16_09:14:54
host : RYZEN9
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 2909183)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
</code></pre>
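The traceback points at <code>self.data_prefix.items()</code> in mmengine's <code>BaseDataset</code>, which implies that in the 1.x API <code>data_prefix</code> is expected to be a dict of prefix keys, not a bare string as in 0.24. A sketch of the changed dataset fields; the key name <code>img</code> is an assumption to verify against the installed mmaction2 version:

```python
# mmengine's BaseDataset iterates data_prefix.items(), so the 0.24-style
# string value triggers "'str' object has no attribute 'items'".
train_dataset_cfg = dict(
    type='RawframeDataset',
    ann_file='dataset/train_rawdataset.txt',
    data_prefix=dict(img='dataset'),   # was: data_prefix='dataset'
    with_offset=True,
    filename_tmpl='{:05}.jpg',
)
```

The same change would apply to the val and test dataset blocks in the config.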
|
<python><machine-learning><pytorch><computer-vision>
|
2023-02-16 04:42:01
| 2
| 2,153
|
ThomasEdwin
|
75,467,533
| 7,267,480
|
access to the row of a dataframe using the conditions and values for unnamed columns
|
<p>I have a dataframe:</p>
<pre><code>params = pd.DataFrame({ 'dE' : {'3.0':20.0, '4.0':15.0, '-4.0':15.0},
'Gg' : {'3.0':80.0, '4.0':55.0, '-4.0':55.0},
'gn2' : {'3.0':50.0, '4.0':10.0, '-4.0':10.0} })
</code></pre>
<p>The data inside:</p>
<pre><code> dE Gg gn2
3.0 20.0 80.0 50.0
4.0 15.0 55.0 10.0
-4.0 15.0 55.0 10.0
</code></pre>
<p>How do I access the row of the dataframe where the first unnamed column has the value 4.0?
And how do I create a subset using that unnamed column?</p>
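The "unnamed column" here is the dataframe's index, and with the dict construction above its labels are the strings <code>'4.0'</code>, not floats. A sketch of both lookups:

```python
import pandas as pd

params = pd.DataFrame({'dE':  {'3.0': 20.0, '4.0': 15.0, '-4.0': 15.0},
                       'Gg':  {'3.0': 80.0, '4.0': 55.0, '-4.0': 55.0},
                       'gn2': {'3.0': 50.0, '4.0': 10.0, '-4.0': 10.0}})

row = params.loc['4.0']               # single row as a Series (string label!)
subset = params.loc[['4.0', '-4.0']]  # multiple rows as a DataFrame
```

If the index were built from actual floats, the lookup would be <code>params.loc[4.0]</code> instead; <code>params.index</code> shows which kind of labels are present.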
|
<python><pandas><dataframe>
|
2023-02-16 03:23:48
| 2
| 496
|
twistfire
|
75,467,411
| 2,762,570
|
conda: what difference does it make if we set pip_interop_enabled=True?
|
<p>There are many posts on this site which reference, typically in passing, the idea of setting <code>pip_interop_enabled=True</code> within some environment. This makes conda and pip3 somehow interact better, I am told. To be precise, people say conda will search PyPI for packages that don't exist in the main channels if this is true. They also say it's "experimental."</p>
<p>Here is <a href="https://docs.conda.io/projects/conda/en/latest/user-guide/configuration/pip-interoperability.html" rel="nofollow noreferrer">conda's documentation</a> about this. It notes that much of conda's behavior in recent versions has also improved even with pip_interop_enabled=False, leading to questions about what this setting even does.</p>
<p>Here is my question: in real terms, what does all of this mean?</p>
<ul>
<li>Is the <em>only</em> difference that conda will search PyPI if this is True and not if it's False?</li>
<li>Are there other things that it does? For instance, if I need to install some package from pip, will conda know better not to clobber it if this setting is True?</li>
<li>What, to be precise, goes wrong if I set this to True? Are there known edge cases that somehow break things if this "experimental" setting is set to True?</li>
<li>Why would I ever not want to set this?</li>
</ul>
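<p>For anyone experimenting with the questions above, the flag can be toggled per environment and inspected with <code>conda config</code>; a minimal sketch (<code>--env</code> scopes the change to the active environment):</p>

```shell
# enable the experimental pip interoperability for the active environment only
conda config --env --set pip_interop_enabled true

# check the effective value and which configuration file it comes from
conda config --show pip_interop_enabled
conda config --show-sources
```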
|
<python><pip><anaconda><conda><pypi>
|
2023-02-16 03:00:38
| 1
| 405
|
Mike Battaglia
|
75,467,304
| 417,896
|
Python Eel - Close windows from js when connection to eel is lost
|
<p>During dev it's a pain to close all the eel windows when you need to restart eel for python code changes.</p>
<p>For development purposes, and since we don't yet have live-reload for Python using eel, it would be helpful to close all the windows when the Python program quits.</p>
<p>I have the limitation that I am unable to call js from python in case that is part of your solution.</p>
<p>I thought of using a ping to eel and closing each window using this little script, but eel doesn't seem to throw the error while the function is being called.</p>
<pre><code>setInterval(pingEel, 250);
function pingEel(a, b) {
try {
eel.ping_eel();
} catch (error) {
console.log("Ping to eel failed, python must be closed. Closing windows.")
window.close()
}
}
</code></pre>
|
<javascript><python><eel>
|
2023-02-16 02:38:28
| 0
| 17,480
|
BAR
|
75,467,166
| 18,096,205
|
I want to switch python versions in paperspace
|
<p>I want to switch my Python version from 3.9.16 to 3.10 in Paperspace, but it doesn't work,
so I need your help.</p>
<h1>device info</h1>
<pre><code>$root@nu1mmmnfz5:/notebooks/LoRA# cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS"
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
</code></pre>
<pre><code>$root@nu1mmmnfz5:/notebooks/LoRA# which python
/usr/local/bin/python
</code></pre>
<pre><code>$root@nu1mmmnfz5:/notebooks/LoRA# ls /usr/local/bin/ | grep python
ipython
ipython3
python
python3
</code></pre>
<pre><code>$ apt -y install python3.10
</code></pre>
<p>I was struggling because this would not install properly.</p>
<pre><code>$ python -V
Python 3.9.16
</code></pre>
<pre><code>$ which python
/usr/local/bin/python
$ which python3
/usr/local/bin/python3
</code></pre>
<pre><code> $ which python3.10
/usr/bin/python3.10
</code></pre>
<p>There are two locations, '/usr/local/bin' and '/usr/bin', and 'apt -y install python3.10' has gone into '/usr/bin'.</p>
<h1>Tried</h1>
<pre><code>$apt update -y
$apt upgrade -y
$apt -y install python3.10
</code></pre>
<p>update-alternatives
<a href="https://i.sstatic.net/TJcJn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TJcJn.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><linux><ubuntu>
|
2023-02-16 02:07:43
| 5
| 349
|
Tdayo
|
75,466,856
| 3,431,407
|
How to close clickable popup to continue scraping through Selenium in python
|
<p>I'm trying to scrape some information from clickable popups in a table on a website into a <code>pandas</code> dataframe using <code>Selenium</code> in <code>python</code> and it seems to be able to do this if the popups have information.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.select import Select
import pandas as pd
import time
driver = webdriver.Chrome()
driver.get('https://mspotrace.org.my/Sccs_list')
time.sleep(20)
# Select maximum number of entries
elem = driver.find_element_by_css_selector('select[name=dTable_length]')
select = Select(elem)
select.select_by_value('500')
time.sleep(15)
# Get list of elements
elements = WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.XPATH, "//a[@title='View on Map']")))
# Loop through element popups and pull details of facilities into DF
pos = 0
df = pd.DataFrame(columns=['facility_name','other_details'])
try:
for element in elements:
data = []
element.click()
time.sleep(3)
facility_name = driver.find_element_by_xpath('//h4[@class="modal-title"]').text
other_details = driver.find_element_by_xpath('//div[@class="modal-body"]').text
data.append(facility_name)
data.append(other_details)
df.loc[pos] = data
WebDriverWait(driver,20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[aria-label='Close'] > span"))).click() # close popup window
time.sleep(10)
pos+=1
except:
print("No geo location information")
pass
print(df)
</code></pre>
<p>However, there are cases when a window like the one below appears, and I need to click 'OK' on it to resume scraping the other rows on the web page, but I can't seem to find the element to click to do this.</p>
<p><a href="https://i.sstatic.net/NdPqk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NdPqk.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver><web-scraping>
|
2023-02-16 01:01:48
| 2
| 661
|
Funkeh-Monkeh
|
75,466,824
| 2,062,269
|
Python: Conditionally throwing an exception based on module-level variable
|
<p>I have a quite sizeable chunk of logic (say 100k lines of code) intended to validate a Python object, which makes refactoring around thrown <code>Exceptions</code> hard unless it is done incrementally.</p>
<p>The logic above will throw an <code>Exception</code> when an invalid object is passed in.</p>
<p>The logic might look something like this:</p>
<pre><code>def someCheck(self):
if not_valid1(self.some_object):
raise CustomException(...)
if not_valid2(self.some_object):
raise CustomException(...)
...
</code></pre>
<p>I would like to be able to annotate or wrap the exceptions so that some <code>not_valid*()</code> checks are silently ignored when a module-level variable is set to <code>True</code>.</p>
<p>For example, if <code>IGNORE_CHECKS = True</code>, then we should not raise the <code>CustomException</code> when <code>not_valid1(...)</code> is <code>True</code> above, but the <code>is_valid2(...)</code> check should still run.</p>
<p>Is there an easy way to do this?</p>
<p>For example, something like:</p>
<pre><code>def someCheck(self):
if not_valid1(self.some_object):
        raise CustomExceptionThatIsSilentlyIgnoredWhenIgnoreChecksIsTrue(...)
if not_valid2(self.some_object):
raise CustomException(...)
...
</code></pre>
<p>or</p>
<pre><code>def someCheck(self):
if not_valid1(self.some_object):
        raise CustomExceptionThatIsSilentlyIgnoredWhenIgnoreChecksIsTrue(CustomException(...))
if not_valid2(self.some_object):
raise CustomException(...)
...
</code></pre>
<p>I can do something like:</p>
<pre><code>def someCheck(self):
if not IGNORE_CHECKS and not_valid1(self.some_object):
raise CustomException(...)
if not_valid2(self.some_object):
raise CustomException(...)
...
</code></pre>
<p>but it gets unwieldy if <code>IGNORE_CHECKS</code> needs to be sprinkled in hundreds or thousands of places across the often complex code.</p>
|
<python>
|
2023-02-16 00:54:23
| 0
| 960
|
Andy
|
75,466,731
| 2,690,251
|
Python int dunder methods return NotImplemented
|
<p>What really happens under the hood with <code>1/3.5</code> (or <code>1+3.5</code> or <code>1-3.5</code>)?</p>
<h2>Preface</h2>
<p>I will use division as example but this is true for all the methods</p>
<p>In my code I need to use operations as functions.<br />
I found the method <code>__truediv__</code> on both <code>int</code> and <code>float</code>.<br />
I do</p>
<pre><code>div = int.__truediv__
</code></pre>
<p>and then use it as:</p>
<pre><code>div(a,b)
</code></pre>
<p>After some testing and some troubleshooting, I found that <code>div</code> returns <code>NotImplemented</code> instead of some number when there is a float inside.</p>
<pre><code>>>> a = 1
>>> b = 3.5
>>> a/b
0.2857142857142857
>>> int.__truediv__(a, b)
NotImplemented
>>> float.__truediv__(a, b)
TypeError: descriptor '__truediv__' requires a 'float' object but received a 'int'
</code></pre>
<p>What I found to be working is:</p>
<pre><code>>>> float.__truediv__(float(a), float(b))
0.2857142857142857
</code></pre>
<p>Honestly I don't like this, but most of all, <em>I don't understand</em> what <code>1/3.5</code> is really doing!<br />
Is it casting to float under the hood? I know that <code>3/1 = 3.0</code> but I hope the answer is somewhat different.</p>
<p>If I use directly the int methods, the problem is the same:</p>
<pre><code>>>> a.__truediv__(b)
NotImplemented
>>> a.__rtruediv__(b)
NotImplemented
</code></pre>
<h2>Going Back to title</h2>
<p>As far as I know <code>a OPERATION b</code> gets <code>a</code> as main object, and uses a dunder method to evaluate result of operation with <code>b</code>.</p>
<pre><code>a.__operation__(b)
</code></pre>
<p>But in <code>int</code> dunder methods I see that <code>b</code> must be <code>int</code> too, for example:</p>
<pre><code>int.__add__(1, 3.5)
NotImplemented
</code></pre>
<p>After some hours of research I didn't find what it does under the hood. I searched the source code for the int class, but had no success in understanding it at all.</p>
<p>I found 2 solutions in my code, and will test if one is better than the other, for example, for addition:</p>
<pre><code>add = lambda x,y: x+y
</code></pre>
<p>and</p>
<pre><code>add = operator.__add__
</code></pre>
<p>So the question is: What really happens under the hood with <code>1/3.5</code> (or <code>1+3.5</code> or <code>1-3.5</code>)?</p>
<h2>Edit for the closure and about duplicate:</h2>
<p>As seen in comments, this is <em><strong>not a duplicate</strong></em> of this question: <a href="https://stackoverflow.com/questions/10490610/operator-overloading-in-python-with-the-object-on-the-right-hand-side-of-the-ope">Operator overloading in python with the object on the right hand side of the operator</a><br />
In my question I use out of the box types from Python, not custom made classes with overloads.</p>
<p>The question is about what's really happening when you use the operators (<code>+-*/</code>), due to the fact, using the first item class overload method doesn't always work as we may expect.</p>
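<p>What the interpreter does for <code>a / b</code> (per the Python data model) is roughly: try the left operand's <code>__truediv__</code>; if that returns <code>NotImplemented</code>, try the right operand's reflected <code>__rtruediv__</code>; only if both decline is a <code>TypeError</code> raised. No up-front cast of <code>1</code> to <code>float</code> happens. A simplified sketch (it omits the rule that a proper subclass on the right gets first try):</p>

```python
def true_div(a, b):
    """Rough sketch of the interpreter's binary-operator dispatch for a / b."""
    result = type(a).__truediv__(a, b)        # try the left operand first
    if result is NotImplemented:
        result = type(b).__rtruediv__(b, a)   # fall back to the reflected method
    if result is NotImplemented:
        raise TypeError(f"unsupported operand types: {type(a)} and {type(b)}")
    return result

print(true_div(1, 3.5))   # 0.2857142857142857, same as 1 / 3.5
```

<p>This is why <code>int.__truediv__(1, 3.5)</code> alone returns <code>NotImplemented</code>: the int slot only handles int operands, and it is <code>float.__rtruediv__</code> that supplies the result. <code>operator.truediv</code> runs the full dispatch for you.</p>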
|
<python><magic-methods>
|
2023-02-16 00:30:52
| 0
| 377
|
Raikoug
|
75,466,701
| 9,663,207
|
SQLAlchemy relationship() via intermediary table
|
<p>I am struggling to define methods in SQLAlchemy to retrieve related records via an intermediary table.</p>
<p>Consider the following schema:</p>
<ul>
<li>Users can create multiple posts, each post belongs to 1 user</li>
<li>Each post can have multiple comments on it, with each comment belonging to 1 post</li>
</ul>
<p>What I want is to be able to, for a given user instance, retrieve <em>all</em> of the comments from <em>all</em> of their posts.</p>
<p>I have set this up as follows:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship
class Base(DeclarativeBase):
id: Mapped[int] = mapped_column(primary_key=True)
# define model classes
class User(Base):
__tablename__ = "users"
name: Mapped[str] = mapped_column()
posts: Mapped[list["Post"]] = relationship(back_populates="user")
def __repr__(self) -> str:
return f"(<{__class__.__name__}> name: {self.name})"
class Post(Base):
__tablename__ = "posts"
title: Mapped[str] = mapped_column()
user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))
user: Mapped["User"] = relationship(back_populates="posts")
comments: Mapped[list["Comment"]] = relationship(back_populates="post")
def __repr__(self) -> str:
return f"(<{__class__.__name__}> title: {self.title})"
class Comment(Base):
__tablename__ = "comments"
body: Mapped[str] = mapped_column()
post_id: Mapped[int] = mapped_column(ForeignKey("posts.id"))
post: Mapped["Post"] = relationship(back_populates="comments")
def __repr__(self) -> str:
return f"(<{__class__.__name__}> body: {self.body})"
</code></pre>
<p>If I create a few instances of these models, you can see how things are related:</p>
<pre class="lang-py prettyprint-override"><code># create instances
user = User(name="greta")
post_1 = Post(title="First post", user=user)
post_2 = Post(title="Second post", user=user)
comment_1 = Comment(body="yeah wotever", post=post_1)
comment_2 = Comment(body="lol good one", post=post_1)
comment_3 = Comment(body="lmfao", post=post_2)
# show all posts, and their comments
print(user)
for post in user.posts:
print(f" └── {post}")
for comment in post.comments:
print(f" └── {comment}")
</code></pre>
<pre><code>(<User> name: greta)
└── (<Post> title: First post)
└── (<Comment> body: yeah wotever)
└── (<Comment> body: lol good one)
└── (<Post> title: Second post)
└── (<Comment> body: lmfao)
</code></pre>
<p>I am unsure of how to use <code>relationship()</code> to define a method <code>all_comments()</code> in the <code>User</code> class, which would return a list of all of the comments across all of a <code>user</code> instance's <code>posts</code>.</p>
<p>Can anyone point me in the right direction?</p>
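<p>Setting the exact <code>relationship()</code> incantation aside (I'm not certain of the version-specific form), the traversal itself is just a flattening of <code>posts</code> into their <code>comments</code>, which can live in a plain <code>@property</code> on <code>User</code>. A framework-free sketch with stand-in classes:</p>

```python
class Comment:
    def __init__(self, body):
        self.body = body

class Post:
    def __init__(self, comments):
        self.comments = comments

class User:
    def __init__(self, posts):
        self.posts = posts

    @property
    def all_comments(self):
        # flatten: every comment of every post, in post order
        return [c for post in self.posts for c in post.comments]

user = User([Post([Comment("yeah wotever"), Comment("lol good one")]),
             Post([Comment("lmfao")])])
print([c.body for c in user.all_comments])
# ['yeah wotever', 'lol good one', 'lmfao']
```

<p>In SQLAlchemy terms, I believe the same shape is commonly expressed with <code>sqlalchemy.ext.associationproxy.association_proxy</code> or a <code>viewonly=True</code> relationship; the docs are worth checking for the exact form.</p>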
|
<python><sqlalchemy><orm>
|
2023-02-16 00:24:43
| 1
| 724
|
g_t_m
|
75,466,507
| 3,059,546
|
How can you wait for an end-to-end Step Function test to finish when triggered by EventBridge?
|
<p>We are currently using an event-based approach to process files loaded into S3. When a file is uploaded it triggers an EventBridge S3 event, which kicks off a Step Function. When the data is moved to the target the Step Function finishes.</p>
<p>We are interested in implementing an automated end-to-end test in our Python testing environments (both local and CICD) for this.</p>
<p>How can we properly wait for the Step Function to finish before checking our output location?</p>
<p>There are a few approaches I've considered, each with a drawback:</p>
<ol>
<li>Set a timer, wait n minutes for job to finish - inefficient</li>
<li>Add final step to Step Function which writes to SNS, constantly poll SNS for incoming messages - requires new infrastructure just for testing, would be a waste in Production, still inefficient</li>
<li>Add tags or parameters to Step Function so we can search for it and find its ARN, then use that to poll for when it finishes - we can't find a way to attach tags when the Step Function is triggered by EventBridge, still somewhat inefficient</li>
<li>Constantly check our output location - inefficient</li>
<li>Test Step Function using Step Functions Local - still doesn't duplicate the EventBridge trigger, so we're not exactly testing the full end-to-end</li>
</ol>
<p>Is there a way to find the ARN of the Step Function which has been triggered by an EventBridge event that you intentionally initiated? Is there another effective approach?</p>
|
<python><testing><aws-step-functions><aws-event-bridge>
|
2023-02-15 23:51:49
| 1
| 1,010
|
WarSame
|
75,466,461
| 1,447,953
|
xarray/numpy how to create large views on arrays?
|
<p>For later processing convenience, I want to be able to create a very large view onto an xarray DataArray. Here's a small example that works:</p>
<pre><code>data = xr.DataArray([list(i+np.arange(10,20)) for i in range(5)], dims=["t", "x"])
indices = xr.DataArray(list([i]*20 for i in range(len(data))), dims=["y", "i"])
print(data)
print(indices)
selection = data.isel(t=indices)
print("selection:")
print(selection)
</code></pre>
<p>Output:</p>
<pre><code><xarray.DataArray (t: 5, x: 10)>
array([[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[11, 12, 13, 14, 15, 16, 17, 18, 19, 20],
[12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
[13, 14, 15, 16, 17, 18, 19, 20, 21, 22],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23]])
Dimensions without coordinates: t, x
<xarray.DataArray (y: 5, i: 20)>
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
[4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4]])
Dimensions without coordinates: y, i
selection:
<xarray.DataArray (y: 5, i: 20, x: 10)>
array([[[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]],
...
[[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
[14, 15, 16, 17, 18, 19, 20, 21, 22, 23]]])
Dimensions without coordinates: y, i, x
</code></pre>
<p>So we have a data array, an indexing array, and we use vectorised indexing to select from the first array into a structure that is convenient for later use. So far, great. But it looks like xarray isn't very smart about this, and is copying a lot of data rather than using views back on to the original data (where we have many views on to the same data so it shouldn't take so much RAM to create the view).</p>
<p>To demonstrate the problem, here's a scaled up version:</p>
<pre><code>data = xr.DataArray([list(i+np.arange(10,20000)) for i in range(5)], dims=["t", "x"])
indices = xr.DataArray(list([i]*50000 for i in range(len(data))), dims=["y", "i"])
selection = data.isel(t=indices)
print("big selection")
print(selection)
</code></pre>
<p>Output:</p>
<pre><code>Traceback (most recent call last):
File "xarray_test.py", line 15, in <module>
selection = data.isel(t=indices)
File "/data2/users/bfarmer/envs/bfarmer_dev_py38_clone_w_numba_TEST/lib/python3.8/site-packages/xarray/core/dataarray.py", line 1183, in isel
ds = self._to_temp_dataset()._isel_fancy(
File "/data2/users/bfarmer/envs/bfarmer_dev_py38_clone_w_numba_TEST/lib/python3.8/site-packages/xarray/core/dataset.py", line 2389, in _isel_fancy
new_var = var.isel(indexers=var_indexers)
File "/data2/users/bfarmer/envs/bfarmer_dev_py38_clone_w_numba_TEST/lib/python3.8/site-packages/xarray/core/variable.py", line 1156, in isel
return self[key]
File "/data2/users/bfarmer/envs/bfarmer_dev_py38_clone_w_numba_TEST/lib/python3.8/site-packages/xarray/core/variable.py", line 777, in __getitem__
data = as_indexable(self._data)[indexer]
File "/data2/users/bfarmer/envs/bfarmer_dev_py38_clone_w_numba_TEST/lib/python3.8/site-packages/xarray/core/indexing.py", line 1159, in __getitem__
return array[key]
File "/data2/users/bfarmer/envs/bfarmer_dev_py38_clone_w_numba_TEST/lib/python3.8/site-packages/xarray/core/nputils.py", line 126, in __getitem__
return np.moveaxis(self._array[key], mixed_positions, vindex_positions)
numpy.core._exceptions.MemoryError: Unable to allocate 37.2 GiB for an array with shape (5, 50000, 19990) and data type int64
</code></pre>
<p>This shouldn't take 40 GB of RAM, since it's just many views onto the same data. Yes, there is some overhead in the indexing, but we should only need one index per row of 20000 in <code>data</code>. We shouldn't have to copy that row of 20000 into a new array.</p>
<p>Is there a way to make xarray do this in a smarter way? Can I more explicitly tell it to use Dask or something, or structure things differently somehow?</p>
<p>Actually I'm also totally happy just doing this with straight up numpy if that's easier, or any other library that can do this efficiently. I just used xarray because I thought it would do something smart with this operation, but I guess not, at least not automatically.</p>
<p>Edit: Ok I just found this question suggesting it is impossible with numpy: <a href="https://stackoverflow.com/questions/5127991/can-i-get-a-view-of-a-numpy-array-at-specified-indexes-a-view-from-fancy-inde">Can I get a view of a numpy array at specified indexes? (a view from "fancy indexing")</a>. Not sure if this implies that xarray can't do it either though...</p>
<p>Edit 2: Ok the documentations for xarray.DataArray.sel (<a href="https://docs.xarray.dev/en/stable/generated/xarray.Dataset.sel.html" rel="nofollow noreferrer">https://docs.xarray.dev/en/stable/generated/xarray.Dataset.sel.html</a>) says this:</p>
<blockquote>
<p>Returns obj (Dataset) – A new Dataset with the same contents as this
dataset, except each variable and dimension is indexed by the
appropriate indexers. If indexer DataArrays have coordinates that do
not conflict with this object, then these coordinates will be
attached. In general, each array’s data will be a view of the array’s
data in this dataset, unless vectorized indexing was triggered by
using an array indexer, in which case the data will be a copy.</p>
</blockquote>
<p>So I guess xarray <em>does</em> try to be smart about selections in general, but not in the vectorised indexing case I want, which is rather annoying... I guess I have to think of a way to do this without vectorised indexing...</p>
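<p>For the specific pattern in the example, where every output row repeats one <code>t</code> slice, plain NumPy can produce the expanded shape as a genuine read-only view via <code>np.broadcast_to</code>, which uses zero strides instead of copying. A sketch under that assumption (it does not cover arbitrary vectorised indexing):</p>

```python
import numpy as np

data = np.array([i + np.arange(10, 20) for i in range(5)])   # shape (5, 10)

# view of shape (5, 20000, 10): each row t repeated 20000 times, no copy made
view = np.broadcast_to(data[:, None, :], (5, 20000, 10))

print(view.base is not None)   # True: it is a view onto data's buffer
print(view.nbytes)             # the *reported* size; no extra memory is allocated
```

<p>Because the repeated axis has stride 0, writes are disallowed, but reads and further slicing work as usual.</p>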
|
<python><numpy><python-xarray>
|
2023-02-15 23:44:22
| 1
| 2,974
|
Ben Farmer
|
75,466,349
| 1,818,059
|
From ansi encoding to utf8 (and hex bytes)
|
<p>I have some texts encoded in an ANSI Windows codepage. It is known which codepage it is.</p>
<p>The data is stored in text files.</p>
<p>I would like to do the following:</p>
<ul>
<li>convert the to utf-8</li>
<li>print the resulting utf-8 as bytes</li>
</ul>
<p>Did read <a href="https://realpython.com/python-encodings-guide/" rel="nofollow noreferrer">python encoding guide</a>, but I could not get the answer.</p>
<p>So, take the minimum example here:</p>
<pre class="lang-py prettyprint-override"><code>import codecs
chinaAnsi = '\xCE\xD2' # 我 in chinese GBK CJK Unified Ideograph-6211
# 0xE6 0x88 0x91 in UTF8
print(chinaAnsi.encode('utf-8').decode('utf-8'))
# results in b'\xc3\x8e\xc3\x92' or ÎÒ
# which is meaningless.
# --> utf-8 representation of \xCE\xD2 in LATIN-1 (windows cp1252)
</code></pre>
<p>As can be seen from the above, the cross coding works; my machine is Windows CP1252. <em>Except</em> my input is in codepage 936.</p>
<p>So how do I deal with ANSI input that is not from my own codepage?</p>
<p>My final desired output from the minimal example would be the string in utf-8 followed by the utf-8 bytes.</p>
<pre class="lang-none prettyprint-override"><code>我;e68891
</code></pre>
<p>The conversion of the string would mimic <code>iconv -f cp936 -t utf-8 theInput > theOutput</code></p>
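<p>For reference, the core of the fix is decoding the raw bytes with the codec of the known codepage (Python exposes cp936 as <code>'gbk'</code>) rather than re-encoding an already-decoded string. A sketch mirroring the <code>iconv</code> call:</p>

```python
raw = b'\xce\xd2'               # bytes as read from a cp936/GBK file

text = raw.decode('gbk')        # -> '我' (cp936 is the 'gbk' codec in Python)
utf8 = text.encode('utf-8')     # -> b'\xe6\x88\x91'

print(f"{text};{utf8.hex()}")   # 我;e68891
```

<p>For whole files, <code>open('theInput', encoding='gbk')</code> for reading and <code>encoding='utf-8'</code> for writing do the same without manual byte handling.</p>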
|
<python><encoding><utf-8>
|
2023-02-15 23:23:10
| 1
| 1,176
|
MyICQ
|
75,466,209
| 7,873,949
|
How can PyPDF2 read the correct size of a PDF page
|
<p>I tried to get width and height of pdf with PyPDF2 with</p>
<pre><code>w, h = page.mediaBox.getWidth(), page.mediaBox.getHeight()
print(w, h) # showing 595 842
</code></pre>
<p>However, the PDF is actually 842 x 595 (11.7 x 8.27 inch):
<a href="https://i.sstatic.net/vtJah.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vtJah.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><pypdf>
|
2023-02-15 22:58:33
| 1
| 601
|
Mas Zero
|
75,466,013
| 12,106,577
|
Matplotlib major and minor ticks misaligned when using pandas date_range
|
<p>I am trying to use pandas <code>date_range</code> in the x-axis of a matplotlib.pyplot graph, while setting years to be the major ticks and months to be the minor ticks (for a timeline plot).</p>
<p>I came across a seemingly unexpected behaviour (noticed also as part of <a href="https://stackoverflow.com/questions/26700598/matplotlib-showing-x-tick-labels-overlapping">this SO question</a> but unsolved), where the ticks are not aligned, and they are exhibiting an offset.</p>
<p>This code reproduces it for me (v3.5.2):</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import matplotlib.dates as mdates
from matplotlib.ticker import AutoMinorLocator
x_range = pd.date_range('2015-01-01', '2021-06-01', freq='m')
y = np.linspace(1,1, len(x_range))
fig, ax = plt.subplots(figsize=(10, 1))
ax.plot(x_range, y)
loc = mdates.MonthLocator(interval=12)
ax.xaxis.set_major_locator(loc)
ax.xaxis.set_minor_locator(AutoMinorLocator(12))
fmt = mdates.DateFormatter('%Y')
ax.xaxis.set_major_formatter(fmt)
</code></pre>
<p><a href="https://i.sstatic.net/7wF4e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7wF4e.png" alt="enter image description here" /></a></p>
<p>Anyone dealt with this before? I suspect it has to do with months having 28, 30 and 31 days but could not investigate further.</p>
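<p>For what it's worth, the offset is consistent with <code>AutoMinorLocator</code> splitting each major interval into equal numeric steps: a year is 365 or 366 days, so "one twelfth of a year" drifts off the calendar month boundaries. Keeping both locators date-aware pins every tick to an actual month start; a sketch:</p>

```python
import matplotlib
matplotlib.use("Agg")   # non-interactive backend for the sketch
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import numpy as np

x = np.arange('2015-01', '2021-06', dtype='datetime64[M]')   # month starts
fig, ax = plt.subplots(figsize=(10, 1))
ax.plot(x, np.ones(len(x)))

ax.xaxis.set_major_locator(mdates.YearLocator())      # a tick every Jan 1
ax.xaxis.set_minor_locator(mdates.MonthLocator())     # a tick every month start
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
fig.canvas.draw()   # force tick computation
```

<p>Every minor tick now falls on the first of a month, so the majors line up exactly with every twelfth minor.</p>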
|
<python><pandas><matplotlib><date-range><xticks>
|
2023-02-15 22:28:41
| 1
| 399
|
John Karkas
|
75,465,692
| 19,491,471
|
Jupyter packages
|
<p>I am trying to import certain packages as I am working with Jupyter notebook files, and most of the packages seem to be missing, even though I have installed them. For example, when I do the command: <code>from bs4 import BeautifulSoup</code> or <code>import requests</code>
I get the error <code>ModuleNotFoundError: No module named 'bs4'</code> for the first one, and a similar error when importing requests. I have tried <code>pip install requests</code> and <code>pip install bs4</code>, but the same issue persists. I installed them from:
"<code>(base) aminnazemzadeh@amins-MacBook-Pro ~ %</code>", which seems to be my home directory, and I also have anaconda3 installed alongside python3. Why can I not import these modules?</p>
<p>I am using Visual Studio, if it makes any difference.</p>
<p>Once I add :</p>
<pre><code>!pip install requests
!pip install bs4
</code></pre>
<p>I get:</p>
<pre><code>/Users/aminnazemzadeh/.zshenv:.:1: no such file or directory: /Users/aminnazemzadeh/.cargo/env
Requirement already satisfied: requests in /Users/aminnazemzadeh/opt/anaconda3/lib/python3.9/site-packages (2.28.1)
Requirement already satisfied: charset-normalizer<3,>=2 in /Users/aminnazemzadeh/opt/anaconda3/lib/python3.9/site-packages (from requests) (2.0.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/aminnazemzadeh/opt/anaconda3/lib/python3.9/site-packages (from requests) (1.26.11)
Requirement already satisfied: idna<4,>=2.5 in /Users/aminnazemzadeh/opt/anaconda3/lib/python3.9/site-packages (from requests) (3.3)
Requirement already satisfied: certifi>=2017.4.17 in /Users/aminnazemzadeh/opt/anaconda3/lib/python3.9/site-packages (from requests) (2022.9.24)
/Users/aminnazemzadeh/.zshenv:.:1: no such file or directory: /Users/aminnazemzadeh/.cargo/env
Requirement already satisfied: bs4 in /Users/aminnazemzadeh/opt/anaconda3/lib/python3.9/site-packages (0.0.1)
Requirement already satisfied: beautifulsoup4 in /Users/aminnazemzadeh/opt/anaconda3/lib/python3.9/site-packages (from bs4) (4.11.1)
Requirement already satisfied: soupsieve>1.2 in /Users/aminnazemzadeh/opt/anaconda3/lib/python3.9/site-packages (from beautifulsoup4->bs4) (2.3.1)
</code></pre>
<p>followed by this warning:</p>
<pre><code>ModuleNotFoundError Traceback (most recent call last)
Cell In[7], line 4
2 get_ipython().system('pip install bs4')
3 from urllib.request import urlopen
----> 4 from bs4 import BeautifulSoup
ModuleNotFoundError: No module named 'bs4'
</code></pre>
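<p>"Requirement already satisfied" followed by <code>ModuleNotFoundError</code> usually means the notebook kernel runs a different interpreter than the <code>pip</code> on the PATH. A quick check, plus the usual way to target the kernel's own interpreter (paths will differ per machine):</p>

```python
import sys

# the interpreter this notebook kernel actually runs; compare it with
# the pip used on the command line (`which pip` / `pip --version`)
print(sys.executable)
print(sys.version)

# installing with *this* interpreter targets the right site-packages:
#   !{sys.executable} -m pip install beautifulsoup4 requests
```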
<p>Thanks</p>
|
<python><jupyter-notebook><pip><jupyter>
|
2023-02-15 21:40:01
| 2
| 327
|
Amin
|
75,465,666
| 8,474,432
|
What causes neural network accuracy to sharply increase after only one epoch?
|
<p>I'm using a relatively simple neural network with fully connected layers in keras. For some reason, the accuracy drastically increases basically to its final value after only one training epoch (likewise, the loss sharply decreases). I've tried architectures with larger and smaller numbers of hidden layers too. This network also performs poorly on the testing data, so I am trying to find a more optimal architecture or improve my training set accordingly.</p>
<p>It is trained on a set of 6500 1D array-like data, and I'm using a batch size of 512.</p>
<p><a href="https://i.sstatic.net/m4vuR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m4vuR.png" alt="" /></a></p>
|
<python><tensorflow><keras><deep-learning><neural-network>
|
2023-02-15 21:35:19
| 1
| 1,216
|
curious_cosmo
|
75,465,503
| 10,006,534
|
How do I reshape my data so that IDs with multiple observations are grouped as all possible pairs of observations by ID?
|
<p>Given a data frame like this:</p>
<pre><code>import pandas as pd
pd.DataFrame({'id':[1,1,1,2,2], 'letter':['a','b','c','b','d'], 'value':[0,0,0,1,1]})
id let val
0 1 a 0
1 1 b 0
2 1 c 0
3 2 b 1
4 2 d 1
</code></pre>
<p>I want to generate a version where the 'letter' column holds all possible pairs by id. Order doesn't matter: (b,d) is the same as (d,b). The pairs don't necessarily need to be represented as tuples either.</p>
<pre><code>pd.DataFrame({'id':[1,1,1,2], 'letter':[('a','b'),('a','c'),('b','c'),('b','d')], 'value':[0,0,0,1]})
id let val
0 1 (a, b) 0
1 1 (a, c) 0
2 1 (b, c) 0
3 2 (b, d) 1
</code></pre>
<p>How can I transform my data to the desired output? Thanks!</p>
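<p>A sketch of one route there: group by <code>id</code> (and <code>value</code>, assuming it is constant within an id, as in the example), build all unordered pairs with <code>itertools.combinations</code>, then <code>explode</code> back to one row per pair:</p>

```python
from itertools import combinations

import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 2, 2],
                   'letter': ['a', 'b', 'c', 'b', 'd'],
                   'value': [0, 0, 0, 1, 1]})

pairs = (df.groupby(['id', 'value'])['letter']
           .apply(lambda s: list(combinations(s, 2)))   # all unordered pairs per group
           .explode()                                   # one row per pair
           .reset_index()[['id', 'letter', 'value']])
print(pairs)
```

<p><code>combinations</code> emits each pair once in input order, so (b,d) appears but (d,b) never does.</p>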
|
<python><pandas><group-by><transformation>
|
2023-02-15 21:16:33
| 1
| 581
|
Slash
|
75,465,299
| 6,035,977
|
Undo Marking Folder as Source Directory in PyCharm
|
<p>So I've developed a larger package <code>my_package</code> in PyCharm and throughout the development process, I had marked the <code>my_package</code> directory as a source directory, and PyCharm automatically set up the import statements like</p>
<pre><code>from path1.to.module import something
from path2.to.another.module import more
import path3
[Code of a module in a package that uses something and more...]
</code></pre>
<p>where <code>path1</code>, <code>path2</code> and <code>path3</code> all reside as subfolders directly under <code>my_package</code>. Now I want to install and ship my code as a package however. After installation and import to the Python shell, however, I get <code>ModuleNotFoundError: No module named 'path1'</code>, because outside PyCharm's source directory magic Python would only recognize</p>
<pre><code>from my_package.path1.to.module import something
from my_package.path2.to.another.module import more
from my_package import path3
[Code of a module in a package that uses something and more...]
</code></pre>
<p>How can I fix all my import statements efficiently? I have 70+ files, so doing it by hand would be tough.</p>
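<p>A possible bulk-rewrite sketch (run it on a copy of the tree; the regex is a blunt instrument, and <code>path1</code>/<code>path2</code>/<code>path3</code> stand in for the real subpackage names):</p>

```python
import re
from pathlib import Path

# Top-level subpackages that need the my_package. prefix (assumed names).
SUBPACKAGES = ("path1", "path2", "path3")
pattern = re.compile(r"^(\s*)(from|import)\s+(%s)\b" % "|".join(SUBPACKAGES), re.M)

def fix_source(text):
    # Rewrite "from path1..." / "import path1..." at the start of a line
    # into "from my_package.path1..." / "import my_package.path1...".
    return pattern.sub(
        lambda m: "{}{} my_package.{}".format(m.group(1), m.group(2), m.group(3)),
        text,
    )

def fix_tree(root):
    # Apply the rewrite to every .py file under root (run on a COPY first).
    for py_file in Path(root).rglob("*.py"):
        py_file.write_text(fix_source(py_file.read_text()))
```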
|
<python><installation><import><pycharm><packaging>
|
2023-02-15 20:49:03
| 1
| 333
|
Corram
|
75,465,032
| 4,218,755
|
Displaying a shared bar chart from groupby results
|
<p>I have this type of Dataframe :</p>
<pre><code>         origin  delta
month
2021-09   admin  -1000
2021-09     ext     20
2021-10     ext    648
2021-11   admin  -1000
2021-11     ext    590
# monthframe.shape: (32, 3)
</code></pre>
<p>(I built the "month" index from dates in a column that was initially parsed as datetime.)
I tried to reproduce
<a href="https://stackoverflow.com/questions/41494942/pandas-dataframe-groupby-plot">Pandas dataframe groupby plot</a>
<a href="https://stackoverflow.com/questions/42988302/pandas-groupby-results-on-the-same-plot">Pandas groupby results on the same plot</a></p>
<p>with</p>
<pre><code>monthframe.groupby("origin").plot(kind="bar",stacked=True, legend=True, xlabel="Month", ylabel="Delta", layout=(20,10),subplots=False)
</code></pre>
<p>having set month as the index beforehand so that it serves as the x-axis.</p>
<p>But I can't get it to display the bars in the same graph with distinct colors.
I only get one graph when I do</p>
<pre><code>monthframe.plot(kind="bar",stacked=True, legend=True, xlabel="Month", ylabel="Delta", layout=(20,10),subplots=False)
</code></pre>
<p>but then all origins are merged into the same months, everything is blue, and it really isn't informative.</p>
<p>I tried everything I could find (setting the axes beforehand, etc.), but the plot doesn't even take its named arguments into account.</p>
<p>What should I do, please?</p>
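<p>A hedged sketch of one way to get a single chart with one colored series per origin (the frame below is a hypothetical reconstruction, since the full <code>monthframe</code> isn't shown): pivot <code>origin</code> into columns and plot once.</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import pandas as pd

# Hypothetical reconstruction of the frame described in the post.
monthframe = pd.DataFrame({
    "month": ["2021-09", "2021-09", "2021-10", "2021-11", "2021-11"],
    "origin": ["admin", "ext", "ext", "admin", "ext"],
    "delta": [-1000, 20, 648, -1000, 590],
}).set_index("month")

# Pivot origins into columns: each origin becomes its own colored series in
# ONE stacked bar chart, instead of one chart per group.
wide = monthframe.pivot_table(index="month", columns="origin",
                              values="delta", aggfunc="sum", fill_value=0)
ax = wide.plot(kind="bar", stacked=True, legend=True,
               xlabel="Month", ylabel="Delta")
print(wide)
```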
|
<python><pandas><matplotlib><group-by>
|
2023-02-15 20:14:55
| 1
| 1,049
|
Ando Jurai
|
75,464,990
| 12,014,637
|
Speeding up numpy operations
|
<p>Using a 2D numpy array, I want to create a new array that expands the original one using a moving window. Let me explain what I mean using an example code:</p>
<pre class="lang-py prettyprint-override"><code># Simulate some data
import numpy as np
np.random.seed(1)
t = 20000 # total observations
location = np.random.randint(1, 5, (t,1))
var_id = np.random.randint(1, 8, (t,1))
hour = np.repeat(np.arange(0, (t/5)), 5).reshape(-1,1)
value = np.random.rand(t,1)
df = np.concatenate((location,var_id,hour,value),axis = 1)
</code></pre>
<p>Given "df", I want to create a new array "results" as below:</p>
<pre class="lang-py prettyprint-override"><code># length of moving window
window = 10
hours = df[:,2]
# create an empty array to store the results
results = np.empty((0,4))
for i in range(len(set(hours))-window+1):
    obs_data = df[(hours >= i) & (hours <= i+window)]
    results = np.concatenate((results, obs_data), axis=0)
</code></pre>
<p>My problem is that the concatenation is very slow (on my system the operation takes 1.4 seconds without the concatenation and 16 seconds with it). I have over a million data points and want to speed up this code. Does anyone know a way to create the new array faster (possibly without using np.concatenate)?</p>
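<p>A sketch of one way to avoid the per-iteration copies (the output is identical to the loop in the question): collect the window slices in a Python list and call <code>np.concatenate</code> once at the end.</p>

```python
import numpy as np

# Same simulated data as in the question.
np.random.seed(1)
t = 20000
location = np.random.randint(1, 5, (t, 1))
var_id = np.random.randint(1, 8, (t, 1))
hour = np.repeat(np.arange(0, (t / 5)), 5).reshape(-1, 1)
value = np.random.rand(t, 1)
df = np.concatenate((location, var_id, hour, value), axis=1)

window = 10
hours = df[:, 2]

# Build the list of window slices first, then concatenate ONCE: every
# np.concatenate call copies the whole accumulated array, so calling it
# inside the loop is quadratic in the total output size.
chunks = [df[(hours >= i) & (hours <= i + window)]
          for i in range(len(set(hours)) - window + 1)]
results = np.concatenate(chunks, axis=0)
print(results.shape)
```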
|
<python><numpy>
|
2023-02-15 20:10:30
| 2
| 618
|
Amin Shn
|
75,464,972
| 9,210,912
|
Is there a way to parse Golang printed maps in Python?
|
<p>I mean something like</p>
<pre class="lang-py prettyprint-override"><code>go_map = 'map[customer_ids:[Test1 Test2]]'
parse(go_map) # = {'customer_ids': ['Test1', 'Test2']}
</code></pre>
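<p>Go's <code>fmt</code> output isn't a standard serialization format, so there is no stdlib parser for it; a minimal sketch that handles exactly the shape in the example (one key whose value is a <code>[a b]</code>-style list), and nothing more general:</p>

```python
import re

def parse(go_str):
    """Tiny sketch: one level of map[...] with a single key whose value is a
    space-separated [a b] list -- NOT a general parser for Go's fmt output."""
    m = re.fullmatch(r'map\[(\w+):\[([^\]]*)\]\]', go_str)
    if not m:
        raise ValueError('unsupported Go map literal: %r' % go_str)
    key, items = m.groups()
    return {key: items.split()}

print(parse('map[customer_ids:[Test1 Test2]]'))
```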
|
<python><go><parsing>
|
2023-02-15 20:08:39
| 0
| 319
|
LiaVa
|
75,464,959
| 912,757
|
RSA_private_decrypt padding
|
<p>I'm encrypting a key with a public key with the <code>cryptography</code> library, in Python.</p>
<pre><code>import secrets

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

key_path = "key.bin"
key = secrets.token_bytes(32)
with open(key_path, "w") as key_file:
    key_file.write(key.hex())

with open(public_key, "rb") as public_key_file:
    public_key = serialization.load_pem_public_key(public_key_file.read())

padding_config = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None
)

enc_path = key_path + ".enc"
with open(enc_path, "wb") as enc_file:
    bytes_array = public_key.encrypt(key, padding_config)
    enc_file.write(bytes_array)
</code></pre>
<p>This works well, afaik, but the code reading this key is in <a href="https://docs.rs/openssl/latest/openssl/rsa/struct.RsaRef.html#method.private_decrypt" rel="nofollow noreferrer">Rust</a>, which is simply an FFI to openssl C calls. There aren't many options with openssl: you can't choose an "algorithm", an "mgf", or a "label"; padding is simply an enum, so I picked the obvious one, <code>PKCS1_OAEP</code>.</p>
<pre><code>use std::fs::File;
use std::io::Read;

use openssl::{
    cipher::Cipher,
    cipher_ctx::CipherCtx,
    pkey::Private,
    rsa::{Padding, Rsa},
};

pub fn decrypt(mut key_file: File, pk: &str, pass: &str) -> String {
    let rsa = Rsa::private_key_from_pem_passphrase(pk.as_bytes(), pass.as_bytes())
        .expect("Can't build RSA object from PEM");
    let mut encrypted = vec![];
    key_file.read_to_end(&mut encrypted).expect("Can't read encrypted key file");
    let mut decrypted: Vec<u8> = vec![0; rsa.size() as usize];
    rsa.private_decrypt(&encrypted, &mut decrypted, Padding::PKCS1_OAEP).unwrap();
    String::from_utf8(decrypted).expect("Decrypted key is not valid UTF-8")
}
</code></pre>
<p>But I get this error:</p>
<pre><code>ErrorStack([
Error { code: 67571871, library: "rsa routines", function: "RSA_padding_check_PKCS1_type_2", reason: "pkcs decoding error", file: "../crypto/rsa/rsa_pk1.c", line: 251 },
Error { code: 67522674, library: "rsa routines", function: "rsa_ossl_private_decrypt", reason: "padding check failed", file: "../crypto/rsa/rsa_ossl.c", line: 500 }
])
</code></pre>
<p>I know that the Rust code is "right" because it was working well (with <code>Padding::PKCS1</code>) in a previous version when I was using subprocess calls to openssl instead of the <code>cryptography</code> library. And anyway there's only the <code>Padding</code> enum to change here.</p>
<p><a href="https://www.openssl.org/docs/man1.0.2/man3/RSA_private_decrypt.html" rel="nofollow noreferrer">openssl documentation</a> tells me that</p>
<blockquote>
<p>RSA_PKCS1_OAEP_PADDING</p>
<p>EME-OAEP as defined in PKCS #1 v2.0 with SHA-1, MGF1 and an empty encoding parameter. This mode is recommended for all new applications.</p>
</blockquote>
<p>but using <code>hashes.SHA1()</code> didn't change anything. How should I set up my padding so that openssl accepts decrypting it?</p>
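<p>A hedged guess at the mismatch (an assumption, not verified against the poster's setup): openssl's <code>RSA_PKCS1_OAEP_PADDING</code> uses SHA-1 both as the OAEP hash and inside MGF1, so on the Python side <em>both</em> parameters would need SHA-1, not just <code>algorithm</code>:</p>

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# SHA-1 in BOTH places to match openssl's default RSA_PKCS1_OAEP_PADDING;
# changing only `algorithm` while MGF1 stays SHA-256 would still fail.
padding_config = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA1()),
    algorithm=hashes.SHA1(),
    label=None,
)

# Quick self-check that this padding config round-trips in this library.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ct = key.public_key().encrypt(b"secret", padding_config)
print(key.decrypt(ct, padding_config))
```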
|
<python><rust><openssl><cryptography>
|
2023-02-15 20:06:53
| 1
| 2,397
|
Nil
|
75,464,957
| 5,235,665
|
Detecting Excel column data types in Python Pandas
|
<p>New to Python and Pandas here. I am trying to read an Excel file off of S3 (using boto3) and read the headers (first row of the spreadsheet) and determine what data type each header is, <em>if this is possible to do</em>. If it is, I need a map of key-value pairs where each key is the header name and value is its data type. So for example if the file I fetch from S3 has the following data in it:</p>
<pre><code>Date,Name,Balance
02/01/2022,Jerry Jingleheimer,45.07
02/14/2022,Jane Jingleheimer,102.29
</code></pre>
<p>Then I would be looking for a map of KV pairs like so:</p>
<ul>
<li>Key 1: "Date", Value 1: "datetime" (or whatever is the appropriate date type)</li>
<li>Key 2: "Name", Value 2: "string" (or whatever is the appropriate date type)</li>
<li>Key 3: "Balance", Value 3: "numeric" (or whatever is the appropriate date type)</li>
</ul>
<p>So far I have:</p>
<pre><code>import io

import boto3
import pandas as pd

s3_client = boto3.client('s3')
obj = s3_client.get_object(Bucket="some-bucket", Key="some-key")
file_headers = pd.read_excel(io.BytesIO(obj['Body'].read()), engine="openpyxl").columns.tolist()
</code></pre>
<p>I'm just not sure how to extract the data types that Pandas has detected, or how to generate the map.</p>
<p>Can anyone point me in the right direction please?</p>
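<p>A sketch of one possible direction (using an in-memory CSV as a stand-in for the S3 Excel object, since no workbook is at hand; the real code would pass <code>io.BytesIO(obj['Body'].read())</code> to <code>pd.read_excel</code> instead): after reading, <code>df.dtypes</code> already maps each header to the type pandas inferred.</p>

```python
import io

import pandas as pd

# Hypothetical stand-in for the S3 object's body.
csv_bytes = (b"Date,Name,Balance\n"
             b"02/01/2022,Jerry Jingleheimer,45.07\n"
             b"02/14/2022,Jane Jingleheimer,102.29\n")
df = pd.read_csv(io.BytesIO(csv_bytes), parse_dates=["Date"])

# df.dtypes maps each header to the dtype pandas inferred for its column;
# stringify the dtypes to get a plain key-value map.
type_map = {col: str(dtype) for col, dtype in df.dtypes.items()}
print(type_map)
```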
|
<python><pandas>
|
2023-02-15 20:06:36
| 2
| 845
|
hotmeatballsoup
|
75,464,834
| 18,091,372
|
What do I need to change so my tornado code can post successfully?
|
<p>I have the following (with some strings modified) which works when using the <code>requests</code> library.</p>
<pre><code>import requests
from pprint import pprint
import json
PEM = '/full/path/to/my.pem'
client_id='cliendID'
client_secret='clientSecret'
USER='myuser'
PSWD='mypwd'
url = 'https://theurl.com/request/'
data = {
    "grant_type": "password",
    "username": USER,
    "password": PSWD
}
auth = (client_id, client_secret)
response = requests.post( url, auth=auth, data=data, verify=PEM )
answer = response.json()['answer']
print( answer )
</code></pre>
<p>The answer printed is what I expect.</p>
<p>The following usage of <code>curl</code> also works:</p>
<pre><code>curl -iv --user cliendID:clientSecret --key /full/path/to/server.key --cert /full/path/to/my.pem -F grant_type=password -F username=myuser -F password=mypwd https://theurl.com/request/
</code></pre>
<p>However, when I try to do the same using Tornado and AsyncHTTPClient, I get a "Bad Request" response. Some sample Tornado code is:</p>
<pre><code>from tornado.httpclient import AsyncHTTPClient
from tornado.ioloop import IOLoop
import json
async def get_content():
    PEM = '/full/path/to/my.pem'
    client_id = 'cliendID'
    client_secret = 'clientSecret'
    url = 'https://theurl.com/request/'
    USER = 'myuser'
    PSWD = 'mypwd'
    data = {
        "grant_type": "password",
        "username": USER,
        "password": PSWD
    }
    bodyData = json.dumps(data)
    http_client = AsyncHTTPClient()
    response = await http_client.fetch(url,
                                       method='POST',
                                       body=bodyData,
                                       auth_username=client_id,
                                       auth_password=client_secret,
                                       ca_certs=PEM)
    print(response.body.decode())

async def main():
    await get_content()

if __name__ == "__main__":
    io_loop = IOLoop.current()
    io_loop.run_sync(main)
</code></pre>
<p>If I had to guess, I believe the issue is with how I am sending the bodyData.</p>
<p>What do I need to change in the Tornado code so this will work?</p>
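<p>One possible culprit (a guess, not verified against the service): <code>requests.post(url, auth=auth, data=data)</code> sends a form-encoded body with <code>Content-Type: application/x-www-form-urlencoded</code>, while the Tornado version sends a JSON string. A sketch of encoding the body the way requests does:</p>

```python
from urllib.parse import urlencode

data = {"grant_type": "password", "username": "myuser", "password": "mypwd"}

# requests.post(..., data=data) form-encodes the dict like this.
bodyData = urlencode(data)
print(bodyData)
# then pass body=bodyData to http_client.fetch(...) as before
```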
|
<python><curl><python-requests><tornado>
|
2023-02-15 19:54:23
| 1
| 796
|
Eric G
|
75,464,642
| 10,963,057
|
3d animated line plot with plotly in python
|
<p>I saw this 3D plot. It was animated and added a new value every day. I have not found an example of how to recreate it with Plotly in Python.</p>
<p><a href="https://i.sstatic.net/p2XUR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p2XUR.png" alt="enter image description here" /></a></p>
<p>The plot should start with the value from the first row (100), and the start value should remain (no rolling values). The plot should be animated so that each row's value is added one after the other and the x-axis expands. The following data frame (df_stocks) contains the values and dates to plot. Assigning colors would be a great addition: the more positive, the deeper the green; the more negative, the darker the red.</p>
<pre><code>import yfinance as yf
import pandas as pd
stocks = ["AAPL", "MSFT"]
df_stocks = pd.DataFrame()
for stock in stocks:
    df = yf.download(stock, start="2022-01-01", end="2022-07-01", group_by='ticker')
    df['perct'] = df['Close'].pct_change()
    df_stocks[stock] = df['perct']

df_stocks.iloc[0] = 0
df_stocks += 1
df_stocks = df_stocks.cumprod()*100
df_stocks -= 100
</code></pre>
|
<python><3d><plotly><interactive>
|
2023-02-15 19:32:41
| 1
| 1,151
|
Alex
|
75,464,574
| 14,900,791
|
Parallel downloading doesn't work with Python threading
|
<p>I'm building a parallel download library using <code>threading</code> module.</p>
<p>When I use my library, it downloads the file without error, but <strong>the video file doesn't have the same content as if I downloaded it through the browser</strong>.</p>
<p>I use <code>threading</code> for parallel downloading and I think I have a problem with <code>threading.Lock</code> and <code>file.seek</code>, but I can't figure out how to fix the problem.</p>
<p>This is my code:</p>
<pre><code>import requests
import threading
from tqdm import tqdm
DOWNLOAD_CHUNK_SIZE = 1 << 20 # 1 MiB
class DownloadPart:
    def __init__(self, url, byte_range) -> None:
        self.url = url
        self.byte_range = byte_range
        self.lock = threading.Lock()

    def download(self, file, pbar=None):
        response = requests.get(
            self.url,
            headers={"Range": "bytes={}-{}".format(*self.byte_range)},
            allow_redirects=True,
            stream=True,
        )
        written = 0
        for chunk in response.iter_content(chunk_size=DOWNLOAD_CHUNK_SIZE):
            if chunk:
                self.lock.acquire()
                file.seek(self.byte_range[0] + written)
                length = file.write(chunk)
                file.flush()
                written += length
                pbar.update(length)
                self.lock.release()


class Downloader:
    def __init__(self, url, parts=10):
        self.url = url
        self.parts = parts

    def _get_file_size(self) -> int:
        info = requests.head(self.url, allow_redirects=True)
        info.raise_for_status()
        size = info.headers.get("content-length", None)
        assert size
        return int(size)

    def download(self, filename):
        file_size = self._get_file_size()
        # file_size = 1024
        size_per_part = file_size // self.parts
        print(file_size, size_per_part)
        file = open(filename, "wb")
        pbar = tqdm(total=file_size)
        threads = []
        for index in range(self.parts):
            # fix: the last part gets any remaining bytes
            if index + 1 == self.parts:
                byte_range = (size_per_part * index, file_size - 1)
            else:
                byte_range = (size_per_part * index, size_per_part * (index + 1) - 1)
            thread = threading.Thread(
                target=DownloadPart(self.url, byte_range).download, args=(file,), kwargs={"pbar": pbar}
            )
            thread.start()
            threads.append(thread)
        for thread in threads:
            thread.join()
        file.close()
URL = "https://s-delivery38.mxdcontent.net/v/8a5f59673042ed97c402be84ceeb20d9.mp4?s=TfiDzO2oBLrhub_GhToCiQ&e=1676489987&_t=1676476332"
d = Downloader(URL)
d.download("video.mp4")
</code></pre>
<p>How can I solve the problem with my library and <strong>get the same data in the file</strong>? Thank you for any help.</p>
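<p>One thing worth checking (an assumption about the cause, not a verified fix): each <code>DownloadPart</code> creates its <em>own</em> <code>threading.Lock</code>, so a lock never actually serializes the <code>seek</code>/<code>write</code> pairs of two different threads against each other. A minimal sketch with a single lock shared by all writers, using an in-memory buffer as a stand-in for the output file:</p>

```python
import io
import threading

shared_lock = threading.Lock()  # ONE lock shared by every part

def write_part(f, offset, chunk, lock):
    # seek + write must happen atomically relative to the other threads,
    # otherwise another thread can move the file position in between.
    with lock:
        f.seek(offset)
        f.write(chunk)

buf = io.BytesIO(bytes(8))  # stand-in for the output file
parts = [(0, b"abcd"), (4, b"efgh")]
threads = [threading.Thread(target=write_part, args=(buf, off, data, shared_lock))
           for off, data in parts]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(buf.getvalue())
```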
|
<python><multithreading><python-requests><download>
|
2023-02-15 19:25:03
| 1
| 1,171
|
Jurakin
|
75,464,250
| 217,586
|
Unable to move the Stable Diffusion pipeline to my M1 MacBook
|
<p>I am following the steps stated here: <a href="https://huggingface.co/docs/diffusers/optimization/mps#how-to-use-stable-diffusion-in-apple-silicon-m1m2" rel="nofollow noreferrer">How to use Stable Diffusion in Apple Silicon (M1/M2)</a>.</p>
<p>On my local M1 MacBook, I saved the script below in a file named stable-diffusion.py:</p>
<pre><code># make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")
# Recommended if your computer has < 64 GB of RAM
pipe.enable_attention_slicing()
prompt = "a photo of an astronaut riding a horse on mars"
# First-time "warmup" pass (see explanation above)
_ = pipe(prompt, num_inference_steps=1)
# Results match those from the CPU device after the warmup pass.
image = pipe(prompt).images[0]
</code></pre>
<p>Now when I try to execute <code>python stable-diffusion.py</code> from the Terminal, I get the following error:</p>
<pre><code>Traceback (most recent call last):
  File "/Users/apple/Desktop/area_51/stable-diffusion.py", line 2, in <module>
    from diffusers import StableDiffusionPipeline
ModuleNotFoundError: No module named 'diffusers'
</code></pre>
<p>To fix it, I even tried <code>pip install diffusers</code>; however, I still got the same error.</p>
<p>Am I missing anything over here?</p>
|
<python><stable-diffusion>
|
2023-02-15 18:51:12
| 1
| 16,798
|
Devarshi
|
75,464,139
| 6,488,274
|
"ImportError: libta_lib.so.0: cannot open shared object file: No such file or directory" after restarting Google Colab
|
<p>I can successfully install ta-lib in Google Colab as follows:</p>
<pre><code>import os, sys
from google.colab import drive
drive.mount('/content/gdrive')
nb_path = '/content/notebooks'
os.symlink('/content/gdrive/My Drive/Colab Notebooks', nb_path)
sys.path.insert(0, nb_path) # or append(nb_path)
!wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz
!tar xvf ta-lib-0.4.0-src.tar.gz
!ls {nb_path}
!cd {nb_path}/ta-lib && ./configure --prefix=/usr
!cd {nb_path}/ta-lib && make
!cd {nb_path}/ta-lib && sudo make install
!pip install --target=$nb_path numpy
!cd {nb_path}/ta-lib && pip install --target=$nb_path --upgrade --force-reinstall TA-Lib
!wget -P {nb_path} https://files.pythonhosted.org/packages/90/05/d4c6a778d7a7de0be366bc4a850b4ffaeac2abad927f95fa8ba6f355a082/TA-Lib-0.4.17.tar.gz
!cd {nb_path} && tar xvf TA-Lib-0.4.17.tar.gz
!cd '/content/gdrive/My Drive/Colab Notebooks/TA-Lib-0.4.17' && python setup.py install
import talib
</code></pre>
<p>When I restart the notebook a second time, I run the following:</p>
<pre><code>from google.colab import drive
drive.mount('/content/gdrive')
import os, sys
nb_path = '/content/notebooks'
os.symlink('/content/gdrive/My Drive/Colab Notebooks', nb_path)
sys.path.append(nb_path)
import talib
</code></pre>
<p>But I got the following error:</p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-9-1ee486ccef90> in <module>
----> 1 import talib
/content/notebooks/talib/__init__.py in <module>
91
92
---> 93 from ._ta_lib import (
94 _ta_initialize, _ta_shutdown, MA_Type, __ta_version__,
95 _ta_set_unstable_period as set_unstable_period,
ImportError: libta_lib.so.0: cannot open shared object file: No such file or directory
</code></pre>
<p>I have to reinstall ta-lib from scratch, just like the first time. This is surely not what I want.</p>
|
<python><google-colaboratory><ta-lib>
|
2023-02-15 18:39:49
| 0
| 387
|
thomas2004ch
|