QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,863,889 | 1,601,580 | How does one fix an interleaved data set from only sampling one data set? | <p>The following</p>
<pre><code>from datasets import load_dataset
from datasets import interleave_datasets
# Preprocess each dataset
c4 = load_dataset("c4", "en", split="train", streaming=True)
wikitext = load_dataset("wikitext", "wikitext-103-v1", split="train", streaming=True)
# Interleave the preprocessed datasets
datasets = [c4, wikitext]
for dataset in datasets:
    print(dataset.description)
interleaved = interleave_datasets(datasets, probabilities=[0.5, 0.5])
print(interleaved)
</code></pre>
<p>only samples from one dataset. Why?</p>
<pre><code>example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
example.keys()=dict_keys(['text', 'timestamp', 'url'])
counts=100
</code></pre>
<p>colab: <a href="https://colab.research.google.com/drive/1VIR66U1d7qk3Q1vU_URoo5tHEEheORpN?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1VIR66U1d7qk3Q1vU_URoo5tHEEheORpN?usp=sharing</a></p>
<hr />
<p>cross:</p>
<ul>
<li>hf discord: <a href="https://discord.com/channels/879548962464493619/1138632039197835354" rel="nofollow noreferrer">https://discord.com/channels/879548962464493619/1138632039197835354</a></li>
<li>hf discuss: <a href="https://discuss.huggingface.co/t/how-does-one-fix-an-interleaved-data-set-from-only-sampling-one-data-set/50041" rel="nofollow noreferrer">https://discuss.huggingface.co/t/how-does-one-fix-an-interleaved-data-set-from-only-sampling-one-data-set/50041</a></li>
</ul>
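<p>For context: both datasets share only the <code>text</code> key, while c4 rows also carry <code>timestamp</code>/<code>url</code>, so the key sets above suggest every sample came from c4. As a baseline for what a fair 50/50 draw should look like, here is a stdlib-only simulation of probabilistic interleaving (this is not the <code>datasets</code> implementation; note also that <code>interleave_datasets</code> accepts a <code>seed</code> argument, which is worth fixing while debugging):</p>

```python
import random
from itertools import count

def interleave(sources, probabilities, n, seed=0):
    """Yield n items, picking the source of each item with the given probability."""
    rng = random.Random(seed)
    iters = [iter(s) for s in sources]
    for _ in range(n):
        idx = rng.choices(range(len(iters)), weights=probabilities)[0]
        yield next(iters[idx])

# Two fake streaming datasets, distinguishable by their keys (like c4 vs wikitext).
c4_like = ({"text": f"c4-{i}", "url": f"u{i}"} for i in count())
wiki_like = ({"text": f"wiki-{i}"} for i in count())

samples = list(interleave([c4_like, wiki_like], [0.5, 0.5], 100))
n_c4 = sum("url" in s for s in samples)
print(n_c4, 100 - n_c4)  # both counts should be well away from 0
```

<p>If the real run shows one count pinned at 0 (as the printed keys suggest), the problem is in how the examples are drawn or identified, not in the 50/50 probabilities themselves.</p>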
| <python><huggingface-transformers><huggingface><huggingface-datasets> | 2023-08-09 00:37:43 | 1 | 6,126 | Charlie Parker |
76,863,562 | 3,821,009 | Find matching pairs and lay them out as columns in polars | <p>Say I have this:</p>
<pre><code>import numpy
import polars

df = polars.DataFrame(dict(
    j=numpy.random.randint(10, 99, 9),
    k=numpy.tile([1, 2, 2], 3),
))
j (i64) k (i64)
47 1
22 2
82 2
19 1
85 2
15 2
89 1
74 2
26 2
shape: (9, 2)
</code></pre>
<p>where column <code>k</code> is kind of a marker - <code>1</code> starts and then there are one or more <code>2</code>s (in the above example always two for simplicity, but in practice one or more). I'd like to get values in <code>j</code> that correspond to <code>k=1</code> and the last corresponding <code>k=2</code>. For the above:</p>
<pre><code> j (i64) k (i64)
47 1 >-\
22 2 | these are the 1 and the last of its matching 2s
82 2 <-/
19 1 >-\
85 2 | these are the 1 and the last of its matching 2s
15 2 <-/
89 1 >-\
74 2 | these are the 1 and the last of its matching 2s
26 2 <-/
shape: (9, 2)
</code></pre>
<p>and I'd like to put these in two columns, so I get this:</p>
<pre><code> j (i64) k (i64)
47 82
19 15
89 26
shape: (3, 2)
</code></pre>
<p>How would I approach this in polars?</p>
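<p>One way to think about it: group rows by a running count of the <code>k == 1</code> markers, then pair each group's first <code>j</code> with its last <code>j</code>. In polars, a <code>group_by</code> over <code>(pl.col('k') == 1).cum_sum()</code> with <code>first()</code>/<code>last()</code> aggregations should express the same idea (exact API names depend on your polars version). The grouping logic itself, in plain Python:</p>

```python
j = [47, 22, 82, 19, 85, 15, 89, 74, 26]
k = [1, 2, 2, 1, 2, 2, 1, 2, 2]

pairs = []
for jv, kv in zip(j, k):
    if kv == 1:              # a 1 starts a new group
        pairs.append([jv, None])
    else:                    # each subsequent 2 overwrites the group's "last"
        pairs[-1][1] = jv

print(pairs)  # [[47, 82], [19, 15], [89, 26]]
```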
| <python><python-polars> | 2023-08-08 22:37:57 | 2 | 4,641 | levant pied |
76,863,438 | 21,115 | Narrowing types for multiprocessing manager proxies | <p><a href="https://docs.python.org/3/library/multiprocessing.html#multiprocessing.managers.SyncManager.list" rel="nofollow noreferrer"><code>multiprocessing.managers.SyncManager.list</code></a> returns a <a href="https://docs.python.org/3/library/multiprocessing.html#proxy-objects" rel="nofollow noreferrer"><code>ListProxy[Any]</code></a>, but I don't see a way to narrow <code>Any</code>?</p>
<p>As a workaround I can:</p>
<pre><code>int_list = cast("ListProxy[int]", m.list())
</code></pre>
<p>And <code>int_list</code> is <code>ListProxy[int]</code>.</p>
<p>Note also that quotes are required around the type to avoid:</p>
<pre><code>TypeError: type 'ListProxy' is not subscriptable
</code></pre>
<hr />
<p>Is there a better way?</p>
| <python><generics><multiprocessing><python-typing> | 2023-08-08 21:58:15 | 1 | 18,140 | davetapley |
76,863,351 | 21,115 | Expected no type arguments for class "Self" | <p>I have a generic class, with a <code>next</code> method which returns an instance of itself parameterized with a different type variable.</p>
<p>I can specify the <code>next</code> method's return type using a quoted forward reference to the class itself:</p>
<pre><code>DataIn = TypeVar('DataIn')
DataOut = TypeVar('DataOut')

@dataclass
class Msg(Generic[DataIn]):
    source: str
    data: DataIn

    def next(self, data: DataOut) -> "Msg[DataOut]":
        """
        Create a new message with the same source but different data
        """
        return Msg(source=self.source, data=data)
</code></pre>
<p>I would like to use the <a href="https://peps.python.org/pep-0673" rel="nofollow noreferrer">PEP 673 <code>Self</code> type</a> to avoid the quoted forward reference:</p>
<pre><code>    def next(self, data: DataOut) -> Self[DataOut]:
</code></pre>
<p>But that fails to type check in Pylance / Pyright:</p>
<pre><code> Expected no type arguments for class "Self"
</code></pre>
<p><a href="https://peps.python.org/pep-0673/#use-in-generic-classes" rel="nofollow noreferrer">The docs</a> say: "<em>Self can also be used in generic class methods</em>", but don't show this specific use case.</p>
<p>Is it supported?</p>
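<p>For reference, the Pyright message is literal: <code>Self</code> takes no type arguments, so parameterizing it is not supported; the quoted annotation version both type-checks and works at runtime. A minimal runnable check of that version:</p>

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

DataIn = TypeVar('DataIn')
DataOut = TypeVar('DataOut')

@dataclass
class Msg(Generic[DataIn]):
    source: str
    data: DataIn

    def next(self, data: DataOut) -> "Msg[DataOut]":
        # New message: same source, different data (and data type)
        return Msg(source=self.source, data=data)

m = Msg(source="sensor", data=1)
m2 = m.next("hello")
print(m2)  # Msg(source='sensor', data='hello')
```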
| <python><generics><typing><pylance><pyright> | 2023-08-08 21:37:31 | 1 | 18,140 | davetapley |
76,863,340 | 10,853,071 | Python connection class - Checking if connection is alive | <p>I am constructing a class in which I will encapsulate the saspy lib. What I want to develop now is a method to check whether the SAS connection is still alive so that, if not, I can call it to reconnect.</p>
<p>I am looking for a magic method that is executed every time my instance is accessed. Is there one? I am quite new to writing classes.</p>
<pre><code>class Sas_session:
    def __init__(self) -> None:
        self.iomhost: str = sasanl.host  # type: ignore
        self.iomport: int = sasanl.port  # type: ignore
        self.appserver: str = sasanl.appserver  # type: ignore
        self.omruser: str = ""  # type: ignore
        self.authkey: str = ""  # type: ignore
        self.cfgname: str = "iomwin"  # type: ignore
        self.timeout: int = 30  # type: ignore
        self.cfgfile = str(Path(__file__).parent / "ConexaoSAS.py")
        self.conectar()

    def conectar(self):
        self.session = saspy.SASsession(
            cfgname=self.cfgname,  # type: ignore
            cfgfile=self.cfgfile,  # type: ignore
            omruser=self.omruser,  # type: ignore
            iomhost=self.iomhost,  # type: ignore
            iomport=self.iomport,  # type: ignore
            appserver=self.appserver,  # type: ignore
            authkey=self.authkey,  # type: ignore
            timeout=self.timeout)  # type: ignore
        print('conexão instanciada')  # "connection instantiated"
        return self
</code></pre>
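<p>There is no magic method that runs when the instance as a whole is "used", but <code>__getattribute__</code> fires on every attribute access, so a liveness check can be hooked there. A hedged stdlib-only sketch (the <code>_alive</code>/<code>_connect</code> parts stand in for saspy, which isn't reproduced here; beware the recursion guard):</p>

```python
class AutoReconnect:
    def __init__(self):
        self._alive = False
        self._connect()

    def _connect(self):
        # Stand-in for saspy.SASsession(...); a real check might ping the server.
        self._alive = True

    def __getattribute__(self, name):
        # Let internal names through untouched to avoid infinite recursion.
        if name.startswith('_'):
            return object.__getattribute__(self, name)
        # Any public access first re-establishes the connection if needed.
        if not object.__getattribute__(self, '_alive'):
            object.__getattribute__(self, '_connect')()
        return object.__getattribute__(self, name)

    def query(self):
        return "ok"

s = AutoReconnect()
s._alive = False      # simulate a dropped connection
print(s.query())      # the attribute access triggers _connect() first
```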
| <python><saspy> | 2023-08-08 21:35:14 | 0 | 457 | FábioRB |
76,863,325 | 807,797 | Install Python from Powershell script | <p>The following command successfully installs Python on Windows 11 when run from the PowerShell command line as Administrator:</p>
<pre><code>c:/temp/python-3.11.4-amd64.exe /quiet InstallAllUsers=0 InstallLauncherAllUsers=0 PrependPath=1 Include_test=0
</code></pre>
<p>But when that same command is placed inside a script <code>myinstallscript.ps1</code>, and when that script is called from the PowerShell command line as <code>.\myinstallscript.ps1</code> , the installation fails without throwing any error.</p>
<p>Here is the relevant script code, including the same command that does not work when it is invoked in a script:</p>
<pre><code>Write-Output "About to create temp folder. "
New-Item -ItemType Directory -Force -Path C:\temp
Write-Output "About to set security protocol. "
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Write-Output "About to download Python executable to temp folder. "
Invoke-WebRequest -Uri "https://www.python.org/ftp/python/3.11.4/python-3.11.4-amd64.exe" -OutFile "c:/temp/python-3.11.4-amd64.exe"
Write-Output "About to install Python. "
c:/temp/python-3.11.4-amd64.exe /quiet InstallAllUsers=0 InstallLauncherAllUsers=0 PrependPath=1 Include_test=0
Write-Output "About to append python to path temporarily. "
$env:Path += ";$($env:LOCALAPPDATA)\Programs\Python\Python311\;$($env:LOCALAPPDATA)\Programs\Python\Python311\Scripts\"
</code></pre>
<p>What specific syntax needs to be used in <code>myinstallscript.ps1</code> in order to successfully execute the <code>c:/temp/python-3.11.4-amd64.exe /quiet InstallAllUsers=0 InstallLauncherAllUsers=0 PrependPath=1 Include_test=0</code> command? And what specific syntax would be needed in order for the command to gracefully break the program with a useful error message in the event that the installation fails for some unforeseen reason?</p>
| <python><windows><powershell> | 2023-08-08 21:31:46 | 1 | 9,239 | CodeMed |
76,863,152 | 1,786,040 | How can I format Flask Werkzeug logging or move the request values to my logger format? | <p>I am pretty green to Flask but not Python. I'm setting up the logging for my app like so:</p>
<p>In logger.py:</p>
<pre><code>logfile = f"{ basedir }/app.log"
log_format = '%(asctime)s - %(levelname)s - (%(threadName)-10s) - %(message)s'
logging.basicConfig(filename = logfile, format = log_format, level=logging.DEBUG)
log = logging.getLogger()
</code></pre>
<p>Then in each file:</p>
<pre><code>from logger import log
log.debug("Oh no, the sky is falling!")
</code></pre>
<p>When my webapp gets an API call, it emits the following into the log:</p>
<pre><code>2023-08-08 16:48:21,559 - INFO - (Thread-80 (process_request_thread)) - 111.222.333.444 - - [08/Aug/2023 16:48:21] "GET /v1/user/ HTTP/1.1" 200 -
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
</code></pre>
<p>The first half is per my formatting. Werkzeug adds additional info to the end of the line, which I've marked. Note that its time/date is superfluous. I do like the IP and request information.</p>
<p>How can I either:</p>
<p>a. Format that Werkzeug output to remove the time/date and possibly
add additional meaningful values?</p>
<p>b. Just remove the Werkzeug output but add the IP and request info to the
normal logger output?</p>
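<p>Werkzeug logs through the stdlib <code>logging</code> module under the logger name <code>werkzeug</code>, so both (a) and (b) come down to attaching your own handler, formatter, and, if needed, a filter to that logger. A stdlib-only sketch (the regex that strips Werkzeug's own bracketed date is an assumption about its message format; a <code>StringIO</code> stands in for <code>app.log</code>):</p>

```python
import io
import logging
import re

log_format = '%(asctime)s - %(levelname)s - (%(threadName)-10s) - %(message)s'

class StripWerkzeugDate(logging.Filter):
    # Werkzeug puts its own "[08/Aug/2023 16:48:21]" inside the message;
    # strip the bracketed chunk so only our formatter's asctime remains.
    def filter(self, record):
        record.msg = re.sub(r'\s*\[[^]]*\]', '', str(record.msg))
        return True

stream = io.StringIO()                      # stand-in for the app.log file
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(log_format))

wz = logging.getLogger('werkzeug')
wz.handlers = [handler]        # replace Werkzeug's default handler
wz.addFilter(StripWerkzeugDate())
wz.propagate = False           # don't also emit via the root logger
wz.setLevel(logging.INFO)

# Simulate the line Werkzeug would emit for a request:
wz.info('111.222.333.444 - - [08/Aug/2023 16:48:21] "GET /v1/user/ HTTP/1.1" 200 -')
print(stream.getvalue().strip())
```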
| <python><flask><python-logging> | 2023-08-08 20:57:17 | 2 | 393 | Chris |
76,862,911 | 2,259,468 | Basic rpy in Jupyter failing | <p><strong>My thoughts</strong>:<br />
I don't know if this is a conda-problem or an rpy-problem.</p>
<p><strong>Background</strong>:<br />
I'm trying to get rpy to run in jupyter/python.</p>
<p>Here is what I am using:</p>
<ul>
<li>MS Windows 11 Pro</li>
<li>Anaconda 22.9.0</li>
<li>Python 3.7.16</li>
<li>R-base 3.6.1</li>
</ul>
<p>I also have installed a couple of helper packages:</p>
<ul>
<li>numpy version: 1.23.5</li>
<li>scipy version: 1.10.0</li>
<li>opencv version: 4.6.0</li>
<li>matplotlib version: 3.6.2</li>
</ul>
<p>I installed rpy2 version 2.9.4</p>
<p>When I execute the following code:</p>
<pre><code>import rpy2
import rpy2.situation
for row in rpy2.situation.iter_info():
    print(row)
</code></pre>
<p>I get the following output:</p>
<pre><code>rpy2 version:
3.5.11
Python version:
3.9.16 (main, Jan 11 2023, 16:16:36) [MSC v.1916 64 bit (AMD64)]
Looking for R's HOME:
Environment variable R_HOME: None
InstallPath in the registry: C:\Program Files\R\R-4.2.2
Environment variable R_USER: None
Environment variable R_LIBS_USER: None
R version:
In the PATH: R version 4.1.3 (2022-03-10) -- "One Push-Up"
Copyright (C) 2022 The R Foundation for Statistical Computing
Platform: x86_64-w64-mingw32/x64 (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under the terms of the
GNU General Public License versions 2 or 3.
For more information about these matters see
https://www.gnu.org/licenses/.
Loading R library from rpy2: OK
Additional directories to load R packages from:
None
C extension compilation:
Warning: Unable to get R compilation flags.
Directory for the R shared library:
</code></pre>
<p>At this point it gives me a long string of errors that starts with this:</p>
<pre><code>---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
Cell In[14], line 5
2 import rpy2
4 import rpy2.situation
----> 5 for row in rpy2.situation.iter_info():
6 print(row)
</code></pre>
<p>And ends with this:</p>
<pre><code>File c:\Users\username\.conda\envs\this_env_name\lib\subprocess.py:528, in run(input, capture_output, timeout, check, *popenargs, **kwargs)
526 retcode = process.poll()
527 if check and retcode:
--> 528 raise CalledProcessError(retcode, process.args,
529 output=stdout, stderr=stderr)
530 return CompletedProcess(process.args, retcode, stdout, stderr)
CalledProcessError: Command '('C:\\Program Files\\R\\R-4.2.2\\bin\\x64\\R', 'CMD', 'config', 'LIBnn')' returned non-zero exit status 1.
</code></pre>
<p>When I try to run this:</p>
<pre><code>import rpy2.robjects
</code></pre>
<p>The following is the first bit and last bit of the errors produced.</p>
<pre><code>R[write to console]: Error in gettext(fmt, domain = domain, trim = trim) :
3 arguments passed to .Internal(gettext) which requires 2
---------------------------------------------------------------------------
RRuntimeError Traceback (most recent call last)
Cell In[16], line 4
1 #
2 import rpy2
----> 4 import rpy2.robjects
File c:\Users\username\.conda\envs\this_env_name\lib\site-packages\rpy2\rinterface.py:817, in SexpClosure.__call__(self, *args, **kwargs)
810 res = rmemory.protect(
811 openrlib.rlib.R_tryEval(
812 call_r,
813 call_context.__sexp__._cdata,
814 error_occured)
815 )
816 if error_occured[0]:
--> 817 raise embedded.RRuntimeError(_rinterface._geterrmessage())
818 return res
RRuntimeError: Error in gettext(fmt, domain = domain, trim = trim) :
3 arguments passed to .Internal(gettext) which requires 2
</code></pre>
<p>These did not have a way forward, although the last seemed relevant:</p>
<ul>
<li><a href="https://stackoverflow.com/a/62986815/2259468">https://stackoverflow.com/a/62986815/2259468</a></li>
<li><a href="https://rpy2.github.io/doc/latest/html/introduction.html" rel="nofollow noreferrer">https://rpy2.github.io/doc/latest/html/introduction.html</a></li>
<li><a href="https://rpy2.github.io/doc/latest/html/generated_rst/notebooks.html" rel="nofollow noreferrer">https://rpy2.github.io/doc/latest/html/generated_rst/notebooks.html</a></li>
<li><a href="https://github.com/rpy2/rpy2/issues/874" rel="nofollow noreferrer">https://github.com/rpy2/rpy2/issues/874</a></li>
<li><a href="https://stackoverflow.com/questions/64498353/jupyter-notebook-rpy2-cannot-find-r-libraries">Jupyter notebook - rpy2 - Cannot find R libraries</a></li>
<li><a href="https://stackoverflow.com/questions/73319238/rwrite-to-console-error-in-gettextfmt-domain-domain-trim-trim-3-arg">R[write to console]: Error in gettext(fmt, domain = domain, trim = trim) : 3 arguments passed to .Internal(gettext) which requires 2</a></li>
</ul>
<p>This one says to use pip not conda, but that is a great way to poison conda environments. They can be hacked together but they can also break each other.<br />
<a href="https://stackoverflow.com/questions/74358650/how-to-correctly-install-rpy2-in-python">How to correctly install rpy2 in python?</a></p>
<p><strong>Observations</strong>:<br />
The R it should be using is R-3.6 or so, not R-4.2.2.</p>
<p><strong>Question</strong>:<br />
How do I get rpy to run in jupyter without breaking conda? I just want to have python call an r-script, and have it run on objects in the python workspace, then return a new object to the python workspace.</p>
| <python><r><jupyter-notebook><jupyter><rpy2> | 2023-08-08 20:10:01 | 1 | 2,022 | EngrStudent |
76,862,904 | 1,207,665 | Python ruamel.yaml library adds new lines where not expected | <p>I'm using <a href="https://yaml.readthedocs.io/en/latest/" rel="nofollow noreferrer">ruamel.yaml</a> to load and edit a specific property in a yaml file.</p>
<p>I need to preserve everything else as-is. So far, the following code is working almost perfectly:</p>
<pre class="lang-py prettyprint-override"><code>yaml = ruamel.yaml.YAML()
yaml.preserve_quotes=True
yaml.explicit_start=True
yaml.indent(mapping=6, sequence=4, offset=2)
data = {}
with open("my.yaml", "r") as f:
    data = yaml.load(f)

data["my::property::user::name"] = "me"

with open("my.yaml", "w") as f:
    yaml.dump(data, f)
</code></pre>
<p>The yaml file is big, with a lot of properties and I can't get the following to work:</p>
<h3><code>yaml.dump</code> adds a new line for the following key:</h3>
<p><code>my::property::group_name: "path\\Domain Admins"</code></p>
<p>Resulting in:</p>
<pre class="lang-yaml prettyprint-override"><code>my::property::group_name: "%{path\\Domain
Admins"
</code></pre>
<h3>For some properties, it adds a new line right after the <code>:</code>:</h3>
<p><code>my::property::value: some-really-big-string-here</code></p>
<p>Result in:</p>
<pre class="lang-yaml prettyprint-override"><code>my::property::value:
some-really-big-string-here
</code></pre>
<p><strong>EDITED:</strong></p>
<p>The following two lines will have a third <code>\</code> added, and the line will also break:</p>
<pre class="lang-yaml prettyprint-override"><code>some::random::name: "\\\\%{expression}\\%{expression}"
another::random::name: "\\\\%{expression}\\pathname\\"
</code></pre>
<p>The result is:</p>
<pre class="lang-yaml prettyprint-override"><code>some::random::name: "\\\\%{expression}\\\
%{expression}"
another::random::name: "\\\\%{expression}\\\
pathname\\"
</code></pre>
<p>Maybe it's my yaml file that needs some data fix, but is it possible to avoid this at the parser level?</p>
| <python><ruamel.yaml> | 2023-08-08 20:09:13 | 1 | 672 | Giuliani Sanches |
76,862,713 | 5,338,465 | SQLAlchemy 2.0 ORM filter show wrong type in Pycharm | <p>I'm using Pycharm to develop an app with SQLAlchemy 2.0.</p>
<p>When I attempt to query a table using the ORM approach, PyCharm always displays a type error in the filter query.</p>
<p>For example, in the code snippet below:</p>
<pre><code>with Session(engine) as session:
    session.scalars(select(Albums.AlbumId).where(Albums.Id > user_last_sync_id))
                                                 ^^^^^^^^^ shows wrong type
</code></pre>
<p>I get the following message:</p>
<pre><code>Expected type 'ColumnElement[bool] | _HasClauseElement | SQLCoreOperations[bool] | ExpressionElementRole[bool] | () -> ColumnElement[bool] | LambdaElement',, got 'bool' instead
</code></pre>
<p><a href="https://i.sstatic.net/BGCKJ.png" rel="noreferrer"><img src="https://i.sstatic.net/BGCKJ.png" alt="enter image description here" /></a></p>
<p>Even though it indicates a type error, the script still executes (and returns correct data) without showing any error messages.</p>
<p>What could potentially be causing this issue in the code? Is there a way to make the code more "correct" so that PyCharm does not display the type error?</p>
| <python><python-3.x><sqlalchemy><pycharm> | 2023-08-08 19:33:57 | 4 | 1,050 | Vic |
76,862,692 | 2,192,824 | Can VSCode attach to a remote Python process and debug locally? | <p>I have a .par file built from Python scripts, and it runs on a remote machine. I have the source code on my local machine. Is it possible to attach my local Python source to the process running on the remote machine and debug with breakpoints, stepping, etc.?</p>
<p>I know Visual Studio has similar features for C# debugging, but I'm not sure whether VS Code can do the same thing for Python scripts.</p>
| <python><visual-studio-code><vscode-debugger><vscode-remote> | 2023-08-08 19:30:45 | 1 | 417 | Ames ISU |
76,862,459 | 10,881,963 | What is the correct way to calculate mean IoU in PyTorch when there are absent classes? | <p>My use case is pretty much the same as this example:</p>
<pre><code>from torchmetrics import JaccardIndex
import torch
pred = torch.tensor([1, 2, 19, 17, 17])
target = torch.tensor([1, 2, 3, 17, 4])
jaccard = JaccardIndex(num_classes=21)
jaccard(pred, pred)
Out: tensor([0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
0., 1., 0.])
</code></pre>
<p>How can I correctly calculate mIoU between pred and target when there are non-present classes? In other words, I don't want it to simply assign zero to classes that were not even present in the test dataset. Also, a zero should only count when it reflects a genuinely wrong prediction, not an absent class.</p>
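<p>One common convention is to compute per-class IoU only over classes that appear in the prediction or the target, and average over just those (recent torchmetrics versions expose knobs like <code>average</code> and, in older releases, <code>absent_score</code> for this; check your version's docs for the exact names). The arithmetic itself, in plain Python:</p>

```python
def mean_iou(pred, target, num_classes):
    """Average IoU over classes present in pred or target; absent classes are skipped."""
    ious = []
    for c in range(num_classes):
        p = {i for i, v in enumerate(pred) if v == c}
        t = {i for i, v in enumerate(target) if v == c}
        if not p and not t:
            continue                      # class absent everywhere: don't count it
        ious.append(len(p & t) / len(p | t))
    return sum(ious) / len(ious)

pred   = [1, 2, 19, 17, 17]
target = [1, 2,  3, 17,  4]
# Classes 1 and 2 give IoU 1, class 17 gives 0.5, classes 3/4/19 give 0,
# and the other 15 classes are skipped entirely.
print(mean_iou(pred, target, 21))
```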
| <python><machine-learning><deep-learning><pytorch><metrics> | 2023-08-08 18:54:21 | 2 | 461 | Jim Wang |
76,862,296 | 5,947,365 | Retrieving data from EventStream (server side events) | <p>I got the following page from Veikkaus: <a href="https://www.veikkaus.fi/fi/vedonlyonti/pitkaveto" rel="nofollow noreferrer">https://www.veikkaus.fi/fi/vedonlyonti/pitkaveto</a>, which loads an EventStream (Server-Sent Events (SSE)) to retrieve data from the server. See the picture below.</p>
<p><a href="https://i.sstatic.net/WsgVI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WsgVI.png" alt="EventStream" /></a></p>
<p>I'm trying to reproduce this EventStream in Postman; however, when I copy the network request as cURL and import it into Postman, nothing is returned by the EventStream.</p>
<p>I furthermore tried to debug it using Python, but this yields the same result; the code I used is shown below (the URL is dynamic).</p>
<pre><code>import requests
from sseclient import SSEClient
# URL of the SSE endpoint
sse_url = "https://api-fip.sbtech.com/veikkaus/sportsdata/v2/stream/events?query=%24filter%3Did%20eq%20%2729185250%27&locale=fi&initialData=true&includeMarkets=all&jwt=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTE1MjQ1NjEsImlzcyI6IlZlaWtrYXVzIiwiU2l0ZUlkIjozNTksIklzQW5vbnltb3VzIjp0cnVlLCJTZXNzaW9uSWQiOiI5MTNkZjc2My02ODdmLTQzOWUtYmZhYy01YjVkODg3NjM4NWQiLCJuYmYiOjE2OTE1MTczNjEsImlhdCI6MTY5MTUxNzM2MX0.aARFZpjCI-4PAX9UHVE6P5fb_g2eRn8N3DAcX7xIS_DYDSU4yCk3eWYHuDsPd3dCTTskTsYg39_bw0VOOqNj-vln8EWYsCVDe5nH0aqudWSm-ly9YZ6ecA9Bb014OJbS4oBl1l11DTyoEVXdpK6wZLwikcdCyQyqPZGO0qz4RhQbG91KnGSzv64BbEcyWop6f0-DE6rjPthwhi2w0OXaYSIhFLL83STLP_EHmd6VadXLCk5fvBWcRDeApy-gIkLzlD8XwFcMXQj02YL2jV1DNe2_VUz7Qzb4mJjpoy3iuZGjGQbHKhQzTOrjr8MmJAKY_2_GXt8AEY-MSf3kU7VozA"
headers = {
    'authority': 'api-fip.sbtech.com',
    'accept': 'text/event-stream',
    'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8',
    'cache-control': 'no-cache',
    'origin': 'https://www.veikkaus.fi',
    'referer': 'https://www.veikkaus.fi/',
    'sec-ch-ua': '"Not/A)Brand";v="99", "Google Chrome";v="115", "Chromium";v="115"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"macOS"',
    'sec-fetch-dest': 'empty',
    'sec-fetch-mode': 'cors',
    'sec-fetch-site': 'cross-site',
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36',
    'Cookie': 'lb_sess=c76e48e72ae4aee2c9c3031bc3f7ede4'
}
# Make a GET request to establish the SSE connection
response = requests.get(sse_url, headers=headers, stream=True)
print(response.status_code)
if response.status_code == 200:
    # Create an SSE client
    sse_client = SSEClient(response.content)

    # Iterate through SSE events
    for event in sse_client:
        print("Received update:", event.data)
else:
    print("Failed to establish SSE connection.")
</code></pre>
<p>What am I doing wrong?</p>
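<p>One thing that stands out in the snippet: <code>response.content</code> eagerly reads the <em>entire</em> body, which for an endless event stream never returns, and SSE client libraries typically want the URL itself or an iterator over the raw stream rather than bytes (the exact argument depends on whether the installed package is <code>sseclient</code> or <code>sseclient-py</code>, so check its docs). To make the wire format concrete, here is a tiny stdlib parser over the kind of lines an SSE endpoint sends:</p>

```python
def parse_sse(lines):
    """Minimal SSE parser: yields the data of each event (a blank line dispatches)."""
    data = []
    for line in lines:
        if line == "":                 # blank line terminates an event
            if data:
                yield "\n".join(data)
            data = []
        elif line.startswith("data:"):
            data.append(line[5:].lstrip())

# Simulated raw stream lines (the eventId is made up for illustration):
stream = [
    'data: {"eventId": 29185250}',
    '',
    'data: first part',
    'data: second part',
    '',
]
print(list(parse_sse(stream)))
# ['{"eventId": 29185250}', 'first part\nsecond part']
```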
| <python><web-scraping><event-stream> | 2023-08-08 18:30:39 | 0 | 607 | Sander Bakker |
76,862,275 | 7,535,168 | How to make Labels in RecycleView change size according to texture size during runtime? | <p>I have a <code>RecycleView</code> which consists of labels, and an <code>MDSlider</code> which I use to change the <code>font_size</code> of those labels during runtime. I'm wondering if it's possible to align the labels' size to their <code>texture_size</code> property. Precisely, whenever I change the <code>font_size</code> such that a label's <code>texture_size</code> would overflow its current size, I want to increase the label size so the text fits.</p>
<p>Since the labels live inside the <code>RecycleView</code>, I'm not sure how to approach the problem. I'm initially setting <code>height</code> to <code>self.minimum_height</code> of the <code>RecycleBoxLayout</code>, which I presume should be updated in the process. I noticed that I can manually change the <code>RecycleBoxLayout</code>'s <code>default_size</code> during runtime, but I'm not sure how exactly to pass the <code>texture_size</code> of the label to adjust the size. I tried the <code>on_size</code> and <code>on_texture</code> methods to pass the property and use it to calculate <code>default_size</code>, but things get really complicated and I always end up with gaps between the labels.</p>
<p>Ideally I would want a solution that uses some kind of binding of the labels' sizes/texture (similar to what I already have with <code>app.fontSize</code>) so I get automatic resizing, because any manual calculation of <code>RecycleView</code> properties and the consequent updates significantly slow down my program when testing on Android.</p>
<p>Any ideas?</p>
<p>EDIT: I haven't mentioned it, I'm only interested in the height resizing. Width doesn't matter.</p>
<pre><code>from kivymd.app import MDApp
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager
from kivy.uix.screenmanager import Screen
from kivymd.uix.boxlayout import MDBoxLayout
from kivy.properties import StringProperty

kv = """
<MyLabel@MDLabel>:
    font_size: app.fontSize
    halign: 'center'
    # Using these settings would be awesome, but cannot make it happen.
    # Or there might be more elegant solution?
    #size: self.texture_size
    #size_hint_y: None
    #text_size: self.width, None

<DailyService>:
    day: ''
    service: ''
    MDGridLayout:
        rows: 2
        MyLabel:
            id: firstLabelId
            text: root.day
            md_bg_color: app.theme_cls.accent_color
        MyLabel:
            id: secondLabelId
            md_bg_color: app.theme_cls.primary_dark
            text: root.service

<MainScreen>:
    name: 'mainScreen'
    myRv: rvId
    MDRelativeLayout:
        orientation: 'vertical'
        MDRecycleView:
            viewclass: 'DailyService'
            id: rvId
            rbl: rblId
            RecycleBoxLayout:
                id: rblId
                default_size: None, dp(200)
                default_size_hint: 1, None
                size_hint_y: None
                height: self.minimum_height
                orientation: 'vertical'
        MDSlider:
            color: 'white'
            orientation: 'horizontal'
            size_hint: (0.2, 0.2)
            pos_hint: {"x":0.4, "top": 1}
            min: 10
            value: 20
            max: 150
            on_value_normalized: root.fontSizeSlider(self.value)

MyScreenManager:
    mainScreen: mainScreenId
    MainScreen:
        id: mainScreenId
"""


class DailyService(MDBoxLayout):
    pass


class MainScreen(Screen):
    def __init__(self, **kwargs):
        super(MainScreen, self).__init__(**kwargs)

    def fontSizeSlider(self, value):
        app = MDApp.get_running_app()
        app.fontSize = str(int(value)) + 'dp'
        self.myRv.refresh_from_data()


class MyScreenManager(ScreenManager):
    def __init__(self, **kwargs):
        super(MyScreenManager, self).__init__(**kwargs)


class MyApp(MDApp):
    fontSize = StringProperty('20dp')

    def on_start(self):
        data = []
        for i in range(10):
            data.append({'day': 'DAY\nDAY',
                         'service': 'SERVICE\nSERVICE'})
        self.root.ids.mainScreenId.myRv.data = data

    def build(self):
        self.theme_cls.theme_style = 'Dark'
        self.theme_cls.primary_palette = 'Blue'
        self.theme_cls.accent_palette = 'Amber'
        return Builder.load_string(kv)


if __name__ == '__main__':
    MyApp().run()
</code></pre>
| <python><kivy><kivymd> | 2023-08-08 18:28:23 | 1 | 601 | domdrag |
76,862,214 | 15,175,417 | Merge two DataFrames based on a string aggregated column | <p>I have the below DF (DF1), which contains:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Lookup</th>
<th>Definition</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>Apple</td>
</tr>
<tr>
<td>B</td>
<td>Banana</td>
</tr>
<tr>
<td>C</td>
<td>Carrot</td>
</tr>
</tbody>
</table>
</div>
<p>I have another DF2:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>SNo</th>
<th>Lookup Values</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td><code>['A', 'B']</code></td>
</tr>
<tr>
<td>2</td>
<td><code>['A', 'C']</code></td>
</tr>
<tr>
<td>3</td>
<td><code>['B', 'C']</code></td>
</tr>
</tbody>
</table>
</div>
<p><strong>Note</strong>: "Lookup Values" column is a list of strings</p>
<p>What is the simplest way of performing the JOIN to get the below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>SNo</th>
<th>Lookup Values</th>
<th>Lookup Definitions</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>A, B</td>
<td>Apple, Banana</td>
</tr>
<tr>
<td>2</td>
<td>A, C</td>
<td>Apple, Carrot</td>
</tr>
<tr>
<td>3</td>
<td>B, C</td>
<td>Banana, Carrot</td>
</tr>
</tbody>
</table>
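<p>With a plain dict lookup, the join reduces to mapping each list through the definitions and joining with <code>', '</code>. In pandas the idiomatic route is usually <code>explode</code> + <code>merge</code> + <code>groupby(...).agg(', '.join)</code>, but the core transformation is just the following (plain-Python sketch of the same rows):</p>

```python
definitions = {'A': 'Apple', 'B': 'Banana', 'C': 'Carrot'}
rows = [
    {'SNo': 1, 'Lookup Values': ['A', 'B']},
    {'SNo': 2, 'Lookup Values': ['A', 'C']},
    {'SNo': 3, 'Lookup Values': ['B', 'C']},
]

# Map each lookup code to its definition, then collapse to a string column.
for row in rows:
    row['Lookup Definitions'] = ', '.join(definitions[v] for v in row['Lookup Values'])

print([r['Lookup Definitions'] for r in rows])
# ['Apple, Banana', 'Apple, Carrot', 'Banana, Carrot']
```

<p>In pandas, the same mapping can be applied in one line with something like <code>df2['Lookup Values'].map(lambda vs: ', '.join(lookup_map[v] for v in vs))</code>, where <code>lookup_map</code> is built from DF1.</p>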
</div> | <python><pandas><dataframe><merge> | 2023-08-08 18:16:14 | 4 | 347 | Parsh |
76,862,146 | 308,421 | How to allow string initialization for enums in pybind? | <p>In Python, I can initialize an enum from a string using standard construction:</p>
<pre><code>from enum import Enum

class A(Enum):
    A = 'a'

a = A('a')
</code></pre>
<p>This is helpful for reading configurations from external files.</p>
<p>However, once I move this enum to C++ and bind it with pybind, <code>__init__()</code> can only accept ints...</p>
<p>Is there an option to bind a custom initializer to the bound enum so that the string-initializer syntax <code>A('a')</code> still works?</p>
| <python><c++><pybind11> | 2023-08-08 18:04:32 | 0 | 1,592 | liorda |
76,862,089 | 14,374,614 | Calculate list of percentage to get a final percentage | <p>I'm having trouble combining a list of percentages into a single final percentage.</p>
<p>my actual code is</p>
<pre><code>bonus = [120, 11, 23, 92]
</code></pre>
<p>Everything in this list is already a percentage,</p>
<p>and I want to write a function that combines all these percentages into a final one, like</p>
<pre><code>120% + 11% + 23% + 92% = X%
</code></pre>
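<p>The answer depends on what the bonuses mean, which the question doesn't pin down. If they stack additively, the final figure is just the sum; if each one is a multiplier applied on top of the previous, they compound. Both readings:</p>

```python
bonus = [120, 11, 23, 92]

# Additive stacking: 120% + 11% + 23% + 92%
additive = sum(bonus)
print(additive)  # 246

# Multiplicative stacking: +120%, then +11% on top of that, etc.
multiplier = 1.0
for b in bonus:
    multiplier *= 1 + b / 100
compound = (multiplier - 1) * 100
print(round(compound, 2))  # 476.7
```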
| <python> | 2023-08-08 17:56:00 | 3 | 432 | SoyNeko |
76,862,067 | 11,064,604 | Bigquery get_project function? | <p>In BigQuery, I am looking for a function that tells me whether a project exists, a la</p>
<pre><code>from google.cloud import bigquery
from google.api_core.exceptions import NotFound

client = bigquery.Client()
try:
    client.get_project("my_project")
except NotFound:
    print("`my_project` not found.")
</code></pre>
<p>This is accomplished for datasets and tables through <code>get_dataset</code> and <code>get_table</code> respectively. I could not find anything for <code>get_project</code>, unfortunately.</p>
<p>As a complete aside, I cannot find documentation listing the methods of <code>bigquery.Client</code> anywhere. Specifically, I just want the page that lists all methods, similar to <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers" rel="nofollow noreferrer">this page</a> for <code>tf.keras.layers</code>. I looked for documentation on the methods (complete with arguments and their respective purposes) but could find neither a GitHub repo nor official documentation. The best I found is <a href="https://cloud.google.com/python/docs/reference/bigquery/latest/index.html" rel="nofollow noreferrer">here</a>. Am I missing something obvious?</p>
| <python><google-bigquery> | 2023-08-08 17:53:35 | 1 | 353 | Ottpocket |
76,862,034 | 2,929,914 | Python Polars - Create sequence in list from integer | <p>Considering that:</p>
<blockquote>
<p>Using <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.map_elements.html#polars-expr-map-elements" rel="nofollow noreferrer">map_elements</a> is much slower than the native expressions API</p>
</blockquote>
<p>How can I use Polars native expression API to generate a new column with a list containing all the integers from 1 to the number of another column?</p>
<p>This means, going from this:</p>
<pre><code>shape: (3, 1)
┌─────┐
│ Qty │
│ --- │
│ i64 │
╞═════╡
│ 1 │
│ 2 │
│ 3 │
└─────┘
</code></pre>
<p>To this:</p>
<pre><code>shape: (3, 2)
┌─────┬───────────┐
│ Qty ┆ list │
│ --- ┆ --- │
│ i64 ┆ list[i64] │
╞═════╪═══════════╡
│ 1 ┆ [1] │
│ 2 ┆ [1, 2] │
│ 3 ┆ [1, 2, 3] │
└─────┴───────────┘
</code></pre>
<p>My current attempt using <code>map_elements</code>:</p>
<pre class="lang-py prettyprint-override"><code># Import package.
import polars as pl
# Create the DataFrame.
df = pl.DataFrame({'Qty': [1, 2, 3]})
# Add the 'List' column.
df = df.with_columns(
    pl.col('Qty').map_elements(lambda x: [i+1 for i in range(x)]).alias('List')
)
</code></pre>
| <python><python-polars> | 2023-08-08 17:47:44 | 1 | 705 | Danilo Setton |
76,861,844 | 4,451,315 | multi-key argsort in polars | <p>say I have</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({'a': [1, 1, 2, 1], 'b': [2, 1, 3, 3]})
</code></pre>
<p>I'd like to find an Expr indices such that
<code>df.sort(indices)</code> would give the same result as <code>df.sort(by=['a', 'b'])</code></p>
<p>110% hacky solution:</p>
<pre class="lang-py prettyprint-override"><code>pl.from_pandas(df.to_pandas().sort_values(['a', 'b']).index.to_series())
</code></pre>
| <python><python-polars> | 2023-08-08 17:15:30 | 1 | 11,062 | ignoring_gravity |
76,861,836 | 4,391,249 | Is there a way to make an iterable that gets populated concurrently but can still be used as a regular iterable? | <p>Say I have a function that somehow procures or generates "things" (like web page data, images from a camera, etc) in an I/O bound fashion. I also have a function that does something with those things in a CPU-bound fashion. Below is a working example.</p>
<pre class="lang-py prettyprint-override"><code>import time
def make_things(n_things) -> list:
    things = []
    for i in range(n_things):
        # Simulate IO bound.
        time.sleep(1)
        things.append(i)
    return things

def do_something_with_things(things: list):
    for thing in things:
        # Simulate CPU bound.
        start_cpu_bound = time.time()
        while time.time() - start_cpu_bound < 1:
            _ = 1 + 1
        print(f"{time.time() - start_time:.1f}", f"#{thing}")

start_time = time.time()
things = make_things(2)
do_something_with_things(things)
print(f"Total time: {time.time() - start_time:.1f}")
</code></pre>
<p>It prints:</p>
<pre><code>3.0 #0
4.0 #1
Total time: 4.0
</code></pre>
<p>I could draw a diagram showing the order of execution. First two things are made, then each is processed.</p>
<p><a href="https://i.sstatic.net/xOtAd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xOtAd.png" alt="enter image description here" /></a></p>
<p>What I would like to achieve is:</p>
<p><a href="https://i.sstatic.net/nSwE5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSwE5.png" alt="enter image description here" /></a></p>
<p><strong>Importantly, I have two requirements to add</strong>.</p>
<ol>
<li>I want to leave the second function mostly unchanged. I could relax it to taking in an iterable rather than a list (which amounts to changing the type hint).</li>
<li>When I'm finished going through the things once, they become like a regular list. The things are cached because I've already done the work of procuring/generating them.</li>
</ol>
<p>If we drop the toy example for a moment, I can reframe my question. Is there some sort of way I can make something that is like an iterable which behaves as follows?</p>
<pre class="lang-py prettyprint-override"><code># This line will hang until the next item is available, or it will
# exit the for loop if my special_iterable says it's done:
for item in special_iterable:
    # Now, the machinery that's populating the special iterable
    # is still at work, preparing the next item for me via
    # I/O bound work (or not, if there's nothing left to prepare).
    # Here I do some CPU bound stuff.
    ...

# Once everything is done, my special iterable is no longer so special.
# I can iterate over it again and it has all the values cached.
# It's basically a list.
for item in special_iterable:
    # I immediately have access to the same items from before.
    ...
</code></pre>
<p>The advantage of this special iterable is that all code that needs to iterate through it can act as if it's iterating through a list. This means I don't have to write the code differently for whatever it is that happens to consume the special iterable first.</p>
<p>Edit: I realised it's not necessary to wait till the first for-loop to begin populating the iterable. I could also create the iterable, do a bunch of CPU-bound stuff that has nothing to do with the iterable, and then iterate through it. Maybe by then it already has some elements populated, it doesn't matter. The point is that I can start iterating through it even when it's not done populating, but that I won't reach a StopIteration until it's fully populated.</p>
<pre><code>iterable = make_iterable_somehow() # it will have 10 items, each taking 1 second to procure/generate
# I do a bunch of CPU-bound stuff that takes me 5 seconds.

# This will go through the first 5 items very fast but then have
# to wait 1 second for each of the next 5
for item in iterable:
    ...  # Something that takes 0.00001 seconds

# This will go through all the items very fast.
for item in iterable:
    ...  # Something that takes 0.00001 seconds
</code></pre>
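<p>One stdlib-only sketch of such a "populate in the background, then behave like a list" iterable (the class and names are hypothetical, and it is not safe for overlapping concurrent iterations):</p>

```python
import threading
from queue import Queue

class PrefetchList:
    """Iterable filled by a background thread; once exhausted it behaves
    like a cached, list-like iterable on every later pass."""
    _DONE = object()

    def __init__(self, producer):
        self._cache = []
        self._queue = Queue()
        self._finished = False
        threading.Thread(target=self._fill, args=(producer,), daemon=True).start()

    def _fill(self, producer):
        for item in producer():
            self._queue.put(item)        # the I/O-bound work happens here
        self._queue.put(self._DONE)

    def __iter__(self):
        yield from self._cache           # items pulled on an earlier pass
        while not self._finished:
            item = self._queue.get()     # blocks until the next item is ready
            if item is self._DONE:
                self._finished = True
                break
            self._cache.append(item)
            yield item

special_iterable = PrefetchList(lambda: iter(range(5)))
first_pass = list(special_iterable)   # may block while items are produced
second_pass = list(special_iterable)  # served instantly from the cache
```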
| <python><concurrency> | 2023-08-08 17:14:26 | 3 | 3,347 | Alexander Soare |
76,861,819 | 10,450,762 | using Python requests library to log into reddit | <p>I'm trying to scrape html data from reddit when I am logged in, as the information I need is included in the logged-in page, not in the webpage when I am logged out(from <a href="https://stackoverflow.com/questions/76843989/find-elements-by-xpath-does-not-work-and-returns-an-empty-list">find_elements_by_xpath does not work and returns an empty list</a>).</p>
<p>I am using the following code to request login, assuming the login URL is <a href="https://www.reddit.com/login/" rel="nofollow noreferrer">https://www.reddit.com/login/</a>.</p>
<pre><code>import requests
username="myuser"
password="password"
payload = {
    'loginUsername': username,
    'loginPassword': password
}
# Use 'with' to ensure the session context is closed after use.
s = requests.Session()
headers = {'user-Agent': 'Mozilla/5.0'}
s.headers = headers
#login_url = f"https://www.reddit.com/user/{username}"
#print(login_url)
p = s.post("https://www.reddit.com/login/", data=payload)
# print the html returned or something more intelligent to see if it's a successful login page.
print(p.text)
print(p.status_code)
</code></pre>
<p>However, the status code returned is 404 and I get the following for <code>p.text</code>:</p>
<pre><code><!DOCTYPE html>
<html lang="en-CA">
<head>
<title>
reddit.com: Not found
</title>
<link rel="shortcut icon" type="image/png" sizes="512x512" href="https://www.redditstatic.com/accountmanager/favicon/favicon-512x512.png">
<link rel="shortcut icon" type="image/png" sizes="192x192" href="https://www.redditstatic.com/accountmanager/favicon/favicon-192x192.png">
<link rel="shortcut icon" type="image/png" sizes="32x32" href="https://www.redditstatic.com/accountmanager/favicon/favicon-32x32.png">
<link rel="shortcut icon" type="image/png" sizes="16x16" href="https://www.redditstatic.com/accountmanager/favicon/favicon-16x16.png">
<link rel="apple-touch-icon" sizes="180x180" href="https://www.redditstatic.com/accountmanager/favicon/apple-touch-icon-180x180.png">
<link rel="mask-icon" href="https://www.redditstatic.com/accountmanager/favicon/safari-pinned-tab.svg" color="#5bbad5">
<meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover">
<meta name="msapplication-TileColor" content="#ffffff"/>
<meta name="msapplication-TileImage" content="https://www.redditstatic.com/accountmanager/favicon/mstile-310x310.png"/>
<meta name="msapplication-TileImage" content="https://www.redditstatic.com/accountmanager/favicon/mstile-310x150.png"/>
<meta name="theme-color" content="#ffffff">
<link rel="stylesheet" href="https://www.redditstatic.com/accountmanager/vendor.4edfac426c2c4357e34e.css">
<link rel="stylesheet" href="https://www.redditstatic.com/accountmanager/theme.02a88d7effc337a0c765.css">
</head>
<body>
<div class="Container m-desktop">
<div class="PageColumns">
<div class="PageColumn PageColumn__left">
<div class="Art"></div>
</div>
<div class="PageColumn PageColumn__right">
<div class="ColumnContainer">
<div class="SnooIcon"></div>
<h1 class="Title">404&mdash;Not found</h1>
<p>
The page you are looking for does not exist.
</p>
</div>
</div>
</div>
</div>
<script>
//<![CDATA
window.___r = {"config": {"tracker_endpoint": "https://events.reddit.com/v2", "tracker_key": "AccountManager3", "tracker_secret": "V2FpZ2FlMlZpZTJ3aWVyMWFpc2hhaGhvaHNoZWl3"}};
//]]>
</script>
<script type="text/javascript" src="https://www.redditstatic.com/accountmanager/vendor.33ac2d92b89a211b0483.js"></script>
<script type="text/javascript" src="https://www.redditstatic.com/accountmanager/theme.5333e8893b6d5b30d258.js"></script>
<script type="text/javascript" src="https://www.redditstatic.com/accountmanager/sentry.d25b8843def9b86b36ac.js"></script>
</body>
</html>
</code></pre>
<p>I tried using the login URL <code>login_url = f"https://www.reddit.com/user/{username}"</code>, but it still does not work.
I tried using <code>https://www.reddit.com/login</code> without the slash at the end, and the status is 400 and there is no output for <code>p.text</code>.
I believe the username and password I put in are correct. Should the login URL be something different?</p>
<p>I noticed at <code>https://www.reddit.com/login</code>, the action is as follows:</p>
<p><code><form class="AnimatedForm" action="/login" method="post"></code></p>
| <python><html><web-scraping><python-requests> | 2023-08-08 17:11:35 | 1 | 453 | wkde |
76,861,518 | 7,169,895 | Drawing text to right side of tab prevents new tabs from having text | <p>I am trying to draw my tab text to the right side of the tab in part so it does not cut off for long tab names and it looks nicer (at least to me). However, my <code>paintEvent</code> seems to prevent the default behavior of setting the tab text. Where am I going wrong?</p>
<pre><code>from PySide6 import QtWidgets
from PySide6 import QtCore
from PySide6.QtCore import Signal, QSize
from PySide6.QtWidgets import QLabel, QWidget, QVBoxLayout, QStyle, QStyleOptionTab, QStylePainter
class ShrinkTabBar(QtWidgets.QTabBar):
    _widthHint = -1
    _initialized = False
    _recursiveCheck = False
    addClicked = Signal()

    def __init__(self, parent):
        super(ShrinkTabBar, self).__init__(parent)
        self.setElideMode(QtCore.Qt.TextElideMode.ElideLeft)
        self.setExpanding(False)
        self.setMovable(True)
        self.addButton = QtWidgets.QToolButton(self.parent(), text='+')
        self.addButton.clicked.connect(self.addClicked)
        self._recursiveTimer = QtCore.QTimer(singleShot=True, timeout=self._unsetRecursiveCheck, interval=0)
        self._tabHint = QSize(0, 0)
        self._minimumHint = QSize(0, 0)

    def _unsetRecursiveCheck(self):
        self._recursiveCheck = False

    def _computeHints(self):
        if not self.count() or self._recursiveCheck:
            return
        self._recursiveCheck = True
        opt = QtWidgets.QStyleOptionTab()
        self.initStyleOption(opt, 0)
        width = self.style().pixelMetric(QtWidgets.QStyle.PixelMetric.PM_TabBarTabHSpace, opt, self)
        iconWidth = self.iconSize().width() + 2
        self._minimumWidth = width + iconWidth
        # default text widths are arbitrary
        fm = self.fontMetrics()
        self._minimumCloseWidth = self._minimumWidth + fm.horizontalAdvance('x' * 4) + iconWidth
        self._defaultWidth = width + fm.horizontalAdvance('x' * 26)
        self._defaultHeight = super().tabSizeHint(0).height()
        self._minimumHint = QtCore.QSize(self._minimumWidth, self._defaultHeight)
        self._defaultHint = self._tabHint = QtCore.QSize(self._defaultWidth, self._defaultHeight)
        self._initialized = True
        self._recursiveTimer.start()

    def _updateSize(self):
        if not self.count():
            return
        frameWidth = self.style().pixelMetric(
            QtWidgets.QStyle.PixelMetric.PM_DefaultFrameWidth, None, self.parent())
        buttonWidth = self.addButton.sizeHint().width()
        self._widthHint = (self.parent().width() - frameWidth - buttonWidth) // self.count()
        self._tabHint = QtCore.QSize(min(self._widthHint, self._defaultWidth), self._defaultHeight)
        # dirty trick to ensure that the layout is updated
        if not self._recursiveCheck:
            self._recursiveCheck = True
            self.setIconSize(self.iconSize())
            self._recursiveTimer.start()

    # BUG: is somewhere in here
    def paintEvent(self, event):
        painter = QStylePainter(self)
        option = QStyleOptionTab()
        for index in range(self.count()):
            self.initStyleOption(option, index)
            tabRect = self.tabRect(index)
            tabRect.moveLeft(10)
            painter.drawControl(QStyle.ControlElement.CE_TabBarTabShape, option)
            painter.drawText(tabRect,
                             QtCore.Qt.AlignmentFlag.AlignVCenter |
                             QtCore.Qt.TextFlag.TextDontClip,
                             self.tabText(index))

class ShrinkTabWidget(QtWidgets.QTabWidget):
    addClicked = Signal()

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._tabBar = ShrinkTabBar(self)
        self.setTabBar(self._tabBar)
        self._tabBar.setTabsClosable(True)
        self._tabBar.addClicked.connect(self.get_page_choices)

    def get_page_choices(self):
        selection_page = QLabel("This worked")
        index = self.addTab(selection_page, 'New Tab')  # BUG: New Tab does not show up only a blank
        self._tabBar.setCurrentIndex(index)  # Set as current tab
| <python><pyside6> | 2023-08-08 16:29:30 | 0 | 786 | David Frick |
76,861,440 | 13,950,870 | SQLAlchemy: modifications to other object in same table before insert are not persisted | <p>I'm trying to add an event listener to a table that sets the previously inserted row to inactive (active=False). I can see that during the run, the previous_date object is changed and the property active is set to False (I used breakpoints). However, after the insert is committed I only see the new row in the database. The active property of the previous_date object has not changed. Why is this? See the code below.<br>
models.py:</p>
<pre class="lang-py prettyprint-override"><code>binded_maker = sessionmaker(bind=engine)
class ScenarioTrainingSchedule(Base):
    __tablename__ = "scenario_training_schedule"

    id = Column(Integer, primary_key=True)
    company_id = Column(Integer, nullable=False)
    scenario_id = Column(String, nullable=False)
    training_date = Column(Date, nullable=False)
    creation_date = Column(DateTime, nullable=False, server_default=func.now())
    active = Column(Boolean, nullable=False, default=True)

def update_previous_training_date_activity(mapper, connection, target):
    session = binded_maker.object_session(target)
    if session is not None:
        previous_date = (
            session.query(ScenarioTrainingSchedule)
            .filter(ScenarioTrainingSchedule.scenario_id == target.scenario_id)
            .filter(ScenarioTrainingSchedule.active == 1)
            # .order_by(TrainingDate.training_date.desc())
            .first()
        )
        if previous_date:
            attributes.set_committed_value(previous_date, "active", False)
</code></pre>
<p>database.py:</p>
<pre class="lang-py prettyprint-override"><code>binded_maker = sessionmaker(bind=engine)
session = binded_maker()
</code></pre>
<p>Other service file that imports the session from database.py and inserts new ScenarioTrainingSchedule:</p>
<pre class="lang-py prettyprint-override"><code>def insert_scenario_training_date(company_id, scenario_name, scenario_date):
    try:
        company_id = int(company_id)
        training_date = datetime.datetime.strptime(scenario_date, "%Y-%m-%d").date()
        training = ScenarioTrainingSchedule(
            company_id=company_id, scenario_id=scenario_name, training_date=training_date
        )
        session.add(training)
        session.commit()
    except Exception as e:
        session.rollback()
        logging.error(e)
</code></pre>
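<p>For comparison, here is a minimal self-contained sketch (hypothetical model, in-memory SQLite) of the commonly recommended pattern: inside a mapper-level <code>before_insert</code> listener, ORM attribute changes made to <em>other</em> objects are not part of the ongoing flush, so the deactivation is issued directly through the event's <code>connection</code> instead:</p>

```python
from sqlalchemy import Boolean, Column, Integer, String, create_engine, event, update
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Schedule(Base):  # hypothetical minimal model
    __tablename__ = 'schedule'
    id = Column(Integer, primary_key=True)
    scenario_id = Column(String, nullable=False)
    active = Column(Boolean, nullable=False, default=True)

@event.listens_for(Schedule, 'before_insert')
def deactivate_previous(mapper, connection, target):
    # Write through the event's connection: mutating other ORM objects
    # from inside this hook is not picked up by the current flush.
    connection.execute(
        update(Schedule)
        .where(Schedule.scenario_id == target.scenario_id, Schedule.active.is_(True))
        .values(active=False)
    )

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Schedule(scenario_id='s1'))
session.commit()
session.add(Schedule(scenario_id='s1'))
session.commit()
session.expire_all()
rows = session.query(Schedule).order_by(Schedule.id).all()
```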
| <python><sqlalchemy> | 2023-08-08 16:17:22 | 1 | 672 | RogerKint |
76,861,418 | 275,552 | Unable to set ymin/ymax properly in matplotlib axvspan | <p>I have a political compass style scatter plot, and I'd like each of the 4 quadrants to have a different background color:</p>
<p><a href="https://i.sstatic.net/kA7WG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kA7WG.png" alt="enter image description here" /></a></p>
<p>Trying to do this with <code>axvspan</code>. Here's me trying to do the lower-left quadrant:</p>
<pre><code>ymin,ymax = ax.get_ylim()
xmin,xmax = ax.get_xlim()
plt.axvspan(xmin,0,ymin,0,facecolor='g', alpha=0.5)
</code></pre>
<p>Nothing happens, the plot is still completely white. However if I set the <code>ymin</code> and <code>ymax</code> to positive values, it sort of works - it just fills the entire y axis, ignoring the specific ymin/ymax values I set.</p>
<pre><code>ymin,ymax = ax.get_ylim()
xmin,xmax = ax.get_xlim()
plt.axvspan(xmin,0,0,1,facecolor='g', alpha=0.5)
</code></pre>
<p><a href="https://i.sstatic.net/DnIXl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DnIXl.png" alt="enter image description here" /></a></p>
<p>So to summarize, <code>axvspan</code> doesn't like a negative range for the y values and won't show anything. And if I set the y values to a positive range, those particular values are ignored and instead the entire y axis is filled. <code>axhspan</code> works similarly with the x value range. Why is it like this? Is there an alternative I can use for coloring the 4 quadrants?</p>
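<p>Note that per matplotlib's documentation, <code>axvspan</code>'s <code>ymin</code>/<code>ymax</code> are fractions of the axes height (0 at the bottom, 1 at the top), not data coordinates, which explains both symptoms; a minimal sketch for the lower-left quadrant:</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(-10, 10)
ax.set_ylim(-10, 10)

# ymin/ymax are axes fractions in [0, 1], so the lower half is 0 to 0.5:
span = ax.axvspan(-10, 0, ymin=0, ymax=0.5, facecolor='g', alpha=0.5)
```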
| <python><matplotlib><plot> | 2023-08-08 16:15:09 | 0 | 16,225 | herpderp |
76,861,309 | 5,547,553 | Find number of months between today and a dataframe date field in python polars | <p>I'd like to find the number of months between today and a date field in a python polars dataframe.<br>
How do you do that?</p>
<pre><code>(datetime.today() - pl.col('mydate')).dt.days()
</code></pre>
<p>returns it only in days.</p>
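<p>For a plain-Python baseline (independent of Polars), a whole-calendar-month difference can be sketched as:</p>

```python
from datetime import date

def months_between(earlier: date, later: date) -> int:
    # Whole calendar-month difference; ignores the day-of-month component.
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

months_between(date(2023, 1, 15), date(2023, 8, 8))  # 7
```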
| <python><python-polars> | 2023-08-08 16:00:21 | 3 | 1,174 | lmocsi |
76,861,013 | 342,882 | Python type signature for builtin api | <p>Python typing is making progress.
There are tools, such as mypy, for transforming application code to insert type signatures.</p>
<p>I checked Python 3.12, and <code>inspect.signature()</code> returns no trace of types for the builtin function <code>pickle.dumps</code>:</p>
<pre><code>Python 3.12.0a6 (main, Mar 7 2023, 21:53:22) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import inspect
>>> import pickle
>>> inspect.signature(pickle.dumps)
<Signature (obj, protocol=None, *, fix_imports=True, buffer_callback=None)>
>>> inspect.signature(pickle.dumps).return_annotation
<class 'inspect._empty'>
</code></pre>
<p>Is there a project trying to do best effort in covering Builtin Python API with type signatures?</p>
| <python><python-typing><mypy> | 2023-08-08 15:25:41 | 0 | 6,169 | Daniil Iaitskov |
76,860,870 | 4,451,315 | Sort polars dataframe by indices in column | <p>Say I have:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.from_repr("""
shape: (4, 4)
┌─────┬─────┬─────┬──────┐
│ idx ┆ a ┆ b ┆ idx2 │
│ --- ┆ --- ┆ --- ┆ --- │
│ u32 ┆ i64 ┆ i64 ┆ u32 │
╞═════╪═════╪═════╪══════╡
│ 0 ┆ 1 ┆ 4 ┆ 3 │
│ 2 ┆ 1 ┆ 3 ┆ 0 │
│ 3 ┆ 2 ┆ 1 ┆ 2 │
│ 4 ┆ 2 ┆ 2 ┆ 4 │
└─────┴─────┴─────┴──────┘
""")
</code></pre>
<p>I would like to sort it so that column 'idx' is in the order given by 'idx2'.</p>
<p>Desired output:</p>
<pre class="lang-py prettyprint-override"><code>shape: (4, 4)
┌─────┬─────┬─────┬──────┐
│ idx ┆ a ┆ b ┆ idx2 │
│ --- ┆ --- ┆ --- ┆ --- │
│ u32 ┆ i64 ┆ i64 ┆ u32 │
╞═════╪═════╪═════╪══════╡
│ 3 ┆ 2 ┆ 1 ┆ 2 │
│ 0 ┆ 1 ┆ 4 ┆ 3 │
│ 2 ┆ 1 ┆ 3 ┆ 0 │
│ 4 ┆ 2 ┆ 2 ┆ 4 │
└─────┴─────┴─────┴──────┘
</code></pre>
<p>How can I do this, in a way which will work in both lazy and eager mode?</p>
| <python><python-polars> | 2023-08-08 15:08:46 | 3 | 11,062 | ignoring_gravity |
76,860,614 | 1,264,018 | New Pinhole Camera Intrinsics Matrix for the Cropped Image | <p>I give the following example to illustrate my question: suppose I have a camera intrinsics matrix that follows the pin-hole camera model, and its intrinsics matrix has the following structure:</p>
<pre><code>ori_intrinsics = [[fx, 0, cx],
                  [0, fy, cy],
                  [0,  0,  1]]
</code></pre>
<p>I also know the image it can generate is of size <code>[ori_width, ori_height]</code>. Now I crop the image at the image position <code>[ori_left, ori_top, ori_right, ori_bottom]</code>, giving me a new image. My question is as follows: imagine now I have a similar camera (the same pin-hole camera) and it generates the same new image without any cropping operation as I did before. What will be the camera intrinsics for this camera?</p>
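<p>For reference, under the pin-hole model a pure crop leaves the focal lengths unchanged and only shifts the principal point by the crop offset; a minimal plain-Python sketch with hypothetical values:</p>

```python
def crop_intrinsics(K, left, top):
    # Cropping removes `left` columns and `top` rows, so the principal
    # point moves by exactly that offset; fx and fy are unchanged.
    fx, _, cx = K[0]
    _, fy, cy = K[1]
    return [[fx, 0, cx - left],
            [0, fy, cy - top],
            [0, 0, 1]]

K_new = crop_intrinsics([[500, 0, 320], [0, 500, 240], [0, 0, 1]], left=100, top=50)
```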
| <python><camera><camera-calibration> | 2023-08-08 14:38:52 | 1 | 11,853 | feelfree |
76,860,465 | 20,655,861 | What does "0o", "0x", and "0b" mean in Python? | <p>I don't understand what these prefixes mean:</p>
<pre><code>a = 0o1010
b = 0x1010
c = 0b1010
print(a)
print(b)
print(c)
</code></pre>
<p>It outputs the following values, but what does the <code>0o</code>, <code>0x</code> and <code>0b</code> parts mean ?</p>
<pre><code>520
4112
10
</code></pre>
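<p>For context, these are Python's integer-literal base prefixes: <code>0o</code> is octal (base 8), <code>0x</code> is hexadecimal (base 16) and <code>0b</code> is binary (base 2); a quick check confirms the printed values:</p>

```python
a = 0o1010  # octal (base 8):   1*8**3 + 0*8**2 + 1*8 + 0 = 520
b = 0x1010  # hexadecimal (16): 1*16**3 + 0 + 1*16 + 0   = 4112
c = 0b1010  # binary (base 2):  1*8 + 0*4 + 1*2 + 0      = 10

# int() with an explicit base, and oct()/hex()/bin(), convert both ways:
assert int('1010', 8) == 520 and oct(520) == '0o1010'
assert int('1010', 16) == 4112 and hex(4112) == '0x1010'
assert int('1010', 2) == 10 and bin(10) == '0b1010'
```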
| <python> | 2023-08-08 14:21:07 | 2 | 386 | tribhuwan |
76,860,418 | 2,801,589 | Assign Python method generic argument to class generic argument by default | <p>Say I want a method to return a generic type, and I would like it to default to the type argument of <em>the class</em> but still be overridable at the call site, like this:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar('T', bound=str)
U = TypeVar('U')
class Foo(Generic[T]):
    def bar(self, response_type: Type[U] = T) -> U:
        pass
</code></pre>
<p>So you could have:</p>
<pre class="lang-py prettyprint-override"><code>x: Foo[int] = ...
x.bar() # Returns int
x.bar(response_type=str) # Returns str
</code></pre>
<p>Does any Python type checker type system support this?</p>
<p>Additionally, I would also like <code>Foo</code> to default to <code>str</code>, so that:</p>
<pre class="lang-py prettyprint-override"><code>x = Foo(...)
x.bar() # Returns str by default even if unspecified when constructing Foo
</code></pre>
<p>Does the <code>bound</code> parameter enable this?</p>
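<p>As a point of comparison, here is a runtime-level sketch of the "class-level default, call-site override" pattern (this is not a static-typing answer; the names and the <code>default_type</code> parameter are hypothetical):</p>

```python
from typing import Generic, Optional, Type, TypeVar

T = TypeVar('T')
U = TypeVar('U')

class Foo(Generic[T]):
    def __init__(self, default_type: Type[T] = str):
        # Runtime stand-in for the class's type argument.
        self._default_type = default_type

    def bar(self, response_type: Optional[Type[U]] = None):
        # Fall back to the class-level default when no override is given.
        cls = response_type if response_type is not None else self._default_type
        return cls()

f = Foo(int)
default_result = f.bar()                    # int() == 0
override_result = f.bar(response_type=str)  # str() == ''
```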
| <python><python-typing> | 2023-08-08 14:14:09 | 1 | 2,201 | Pop Flamingo |
76,860,328 | 5,573,074 | Cloud Function deployment does not respect Symbolic Linking | <p>As the design specification makes clear, the source directories of Google Cloud Functions are expected to include <code>main.py</code>, <code>requirements.txt</code>, and <code>__init__.py</code>. Additional local dependencies (i.e., code) may be specified so long as their imports are within the source directory, as described <a href="https://cloud.google.com/functions/docs/writing/specifying-dependencies-python#packaging_local_dependencies" rel="nofollow noreferrer">here</a>. This precludes the importing of sibling or parent directories.</p>
<p>In this directory setup, <code>main.py</code> can <code>import internal.code_internal</code> and, if <code>base/</code> has been added to the PythonPath, can <code>import base.code_sibling</code>. The limitations (by design) of Cloud Functions does not allow this latter import, as only the contents of <code>functions/f/</code> will be deployed to its servers. My question refers to workarounds and uses of symbolic links.</p>
<pre><code>base/
__init__.py
code_sibling.py
functions/
f/
__init
main.py
requirements.txt
internal/
__init__.py
code_internal.py
</code></pre>
<p>In conventional Python, a symbolic link can be added to <code>functions/f/</code> which points to <code>base/</code>, and which then makes the contents of <code>base/</code> able to be imported as though they are directly in the <code>functions/f/</code> directory: in other words, as though the file <code>functions/f/base/code_sibling.py</code> exists. However, this improvement does not change the deployment behavior of Cloud Functions: the symbolic link is seemingly ignored by <code>gcloud functions deploy</code>. Instead, I am finding myself directly copying the <code>base/</code> directory into <code>functions/f/</code>, then deploying the Cloud Function, then deleting the copied files of <code>functions/f/base/</code>.</p>
<p>Has anyone been able to support symbolic links, or are there other workarounds that better address the situation?
Thank you.</p>
<p>Cross-posted to <code>functions-framework-python</code> with this <a href="https://github.com/GoogleCloudPlatform/functions-framework-python/issues/265" rel="nofollow noreferrer">ticket</a>.</p>
| <python><google-cloud-functions><symlink><functions-framework> | 2023-08-08 14:04:36 | 0 | 362 | David Bernat |
76,860,320 | 2,537,394 | Plotting a Pandas DataFrame with RGB values and coordinates | <p>I have a pandas DataFrame with the columns <code>["x", "y", "r", "g", "b"]</code> where x and y denote the coordinates of a pixel and r, g, b denote its RGB value. The rows contain entries for each coordinate of a grid of pixels and are unique. How can I display this DataFrame using matplotlibs's <code>imshow()</code>? This requires reshaping the data into a array of shape <code>(M, N, 3)</code>.</p>
<p>My usual approach of using <code>plt.imshow(df.pivot(columns="x", index="y", values="i"), interpolation="nearest")</code> does only work for greyscale images. Placing <code>["r", "g", "b"]</code> as the values argument yields a DataFrame with a MultiIndex as columns. However I fail to convert this into a correct image. Simply calling <code>.reshape(M, N, 3)</code> creates a wrong image.</p>
<p>I also had the idea of creating a new column with <code>df["rgb"] = list(zip(df.r, df.g, df.b))</code> However I'm not sure on how to convert the resulting tuples into a new axis for the ndarray.</p>
| <python><pandas><dataframe><matplotlib><imshow> | 2023-08-08 14:03:22 | 2 | 731 | YPOC |
76,860,142 | 10,281,244 | Python Script to Copy S3 contents encrypted through client side KMS Key between S3 buckets | <p>I have a use case where I need to copy contents between S3 buckets. The catch is that the contents of the source S3 bucket are encrypted through a CMK KMS key, and I need to decrypt them using a different CMK KMS key before storing them in another bucket.</p>
<p>I tried to first fetch the file from S3, and then decrypt it using the KMS key</p>
<pre class="lang-py prettyprint-override"><code>import boto3
source_bucket = "sourceBucket"
destination_bucket = "destinationBucket"
source_kms_key_id = "kmsKey"
dest_kms_key_id = "destKmsKeyId"
s3_client = boto3.client('s3')
kms_client = boto3.client('kms', region_name='us-west-2')
def get_files_from_s3(bucket_name, destination_bucket, kms_key_id):
    response = s3_client.list_objects_v2(Bucket=bucket_name)
    if 'Contents' in response:
        objects = response['Contents']
        for obj in objects:
            file_key = obj['Key']
            file_object = s3_client.get_object(Bucket=bucket_name, Key=file_key)
            encrypted_data = file_object['Body'].read()
            hello = str(encrypted_data)
            # print(f"Encrypted Data: {encrypted_data}")
            print("aa")
            decrypted_data = kms_client.decrypt(CiphertextBlob=encrypted_data, KeyId=kms_key_id)['Plaintext']
            s3_client.put_object(Bucket=destination_bucket, Key=file_key, Body=decrypted_data)
            print(f"File key: {file_key}")

get_files_from_s3(source_bucket, destination_bucket, source_kms_key_id)
</code></pre>
<p>but this throws an error</p>
<pre class="lang-py prettyprint-override"><code>1 validation error detected: Value at 'ciphertextBlob' failed to satisfy constraint: Member must have length less than or equal to 6144 (Service: AWSKMS; Status Code: 400; Error Code: ValidationException; Request ID: 68c37cf6-0f1a-405c-9bf2-72f0d43c45ee; Proxy: null)
</code></pre>
<p>In Java, I was able to achieve the same using the following S3 client</p>
<pre class="lang-java prettyprint-override"><code>public AmazonS3 getS3Client(@NonNull final ClientConfiguration clientConfiguration) {
    final String awsRegion = region().toString();
    final com.amazonaws.regions.Region kmsRegion = RegionUtils.getRegion(awsRegion);
    final CryptoConfiguration cryptoConfiguration = new CryptoConfiguration()
            .withAwsKmsRegion(kmsRegion);
    final KMSEncryptionMaterialsProvider materialsProvider = new KMSEncryptionMaterialsProvider(getS3KmsId());
    return AmazonS3EncryptionClientBuilder
            .standard()
            .withCryptoConfiguration(cryptoConfiguration)
            .withClientConfiguration(clientConfiguration)
            .withEncryptionMaterials(materialsProvider)
            .withForceGlobalBucketAccessEnabled(true)
            .withRegion(Regions.US_EAST_1)
            .build();
}
</code></pre>
<p>I wonder if there's any similar/other solution available to this problem?</p>
<p>TIA</p>
| <python><amazon-web-services><amazon-s3> | 2023-08-08 13:41:05 | 1 | 340 | Diksha Goyal |
76,860,119 | 8,467,078 | Append first item to end of iterable in Python | <p>I need to append the first item of a (general) iterable as the final item of that iterable (thus "closing the loop"). I've come up with the following:</p>
<pre><code>from collections.abc import Iterable
from itertools import chain
def close_loop(iterable: Iterable) -> Iterable:
    iterator = iter(iterable)
    first = next(iterator)
    return chain([first], iterator, [first])
# Examples:
list(close_loop(range(5))) # [0, 1, 2, 3, 4, 0]
''.join(close_loop('abc')) # 'abca'
</code></pre>
<p>Which works, but seems a bit "clumsy". I was wondering if there's a more straightforward approach using the magic of <code>itertools</code>. Solutions using <code>more_itertools</code> are also highly welcome.</p>
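<p>One behaviorally equivalent alternative (for non-empty inputs) is a plain generator, which avoids building the two single-element lists:</p>

```python
from collections.abc import Iterable, Iterator

def close_loop_gen(iterable: Iterable) -> Iterator:
    it = iter(iterable)
    first = next(it)  # note: empty input surfaces as RuntimeError under PEP 479
    yield first
    yield from it
    yield first

list(close_loop_gen(range(5)))  # [0, 1, 2, 3, 4, 0]
''.join(close_loop_gen('abc'))  # 'abca'
```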
| <python><python-itertools> | 2023-08-08 13:39:13 | 2 | 345 | VY_CMa |
76,860,108 | 11,942,410 | Row-wise Cumulative sum of Dataframe in pyspark | <p>This is the <strong>input</strong> DF</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>origin</th>
<th>destination</th>
<th>10+ Days</th>
<th>10 Days</th>
<th>9 Days</th>
<th>8 Days</th>
<th>7 Days</th>
<th>6 Days</th>
<th>5 Days</th>
<th>4 Days</th>
<th>3 Days</th>
<th>2 Days</th>
<th>1 Day</th>
</tr>
</thead>
<tbody>
<tr>
<td>CWCJ</td>
<td>MDCC</td>
<td>66</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>2</td>
<td>1</td>
<td>13</td>
<td>8</td>
<td>11</td>
<td>2</td>
<td>63</td>
</tr>
<tr>
<td>CWCJ</td>
<td>PPSP</td>
<td>21</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>2</td>
<td>1</td>
<td>13</td>
<td>8</td>
<td>8</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>PCWD</td>
<td>MDCC</td>
<td>50</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>PCWD</td>
<td>PPSP</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>3</td>
<td>0</td>
<td>39</td>
</tr>
<tr>
<td>DPMT</td>
<td>JNPT</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>21</td>
</tr>
<tr>
<td>PMKM</td>
<td>PPSP</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>PMKM</td>
<td>MDCC</td>
<td>2</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>I am trying to convert the input into the following output using a cumulative sum (the cumulative sum starts from the 10+ Days column).</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>oirigin</th>
<th>destination</th>
<th>10+days</th>
<th>8-Aug</th>
<th>9-Aug</th>
<th>10-Aug</th>
<th>11-Aug</th>
<th>12-Aug</th>
<th>13-Aug</th>
<th>14-Aug</th>
<th>15-Aug</th>
<th>16-Aug</th>
<th>17-Aug</th>
<th>18-Aug</th>
</tr>
</thead>
<tbody>
<tr>
<td>CWCJ</td>
<td>MDCC</td>
<td>66</td>
<td>66</td>
<td>66</td>
<td>66</td>
<td>66</td>
<td>68</td>
<td>69</td>
<td>82</td>
<td>90</td>
<td>101</td>
<td>103</td>
<td>166</td>
</tr>
<tr>
<td>CWCJ</td>
<td>PPSP</td>
<td>21</td>
<td>21</td>
<td>21</td>
<td>21</td>
<td>21</td>
<td>23</td>
<td>24</td>
<td>37</td>
<td>45</td>
<td>53</td>
<td>55</td>
<td>58</td>
</tr>
<tr>
<td>PCWD</td>
<td>MDCC</td>
<td>50</td>
<td>50</td>
<td>50</td>
<td>50</td>
<td>50</td>
<td>50</td>
<td>50</td>
<td>50</td>
<td>50</td>
<td>50</td>
<td>50</td>
<td>50</td>
</tr>
<tr>
<td>PCWD</td>
<td>PPSP</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>3</td>
<td>3</td>
<td>42</td>
</tr>
<tr>
<td>DPMT</td>
<td>JNPT</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>21</td>
</tr>
<tr>
<td>PMKM</td>
<td>PPSP</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>PMKM</td>
<td>MDCC</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>The column names should be today's date, today's date +1, today's date +2, ... today's date +10.</p>
<p>I have tried the following PySpark/Python code; however, it is not giving the expected output.</p>
<pre><code>from pyspark.sql.window import Window
from pyspark.sql import functions as F
from datetime import datetime, timedelta

todays_date = datetime.today().date()
future_dates = [todays_date + timedelta(days=i) for i in range(1, 12)]
columns_to_sum = ["9", "8", "7", "6", "5", "4", "3", "2", "1"]
window_spec = Window.orderBy("origin", "destination")

for i, date in enumerate(future_dates):
    col_name = date.strftime("%d-%b")
    for col in columns_to_sum:
        summary_export_dwell_df_temp = summary_export_dwell_df_temp.withColumn(col_name, F.when(F.col(col) > 0, F.sum(col).over(window_spec)).otherwise(0).cast("int"))

window_spec = Window.orderBy("origin", "destination")
selected_columns = ["origin", "destination", "10+"] + [date.strftime("%d-%b") for date in future_dates]
selected_df = summary_export_dwell_df_temp.select(*selected_columns)
selected_df.show()
</code></pre>
<p>However, I think there is a mistake in the cumulative sum logic, for which I need your input, as well as for the date logic after implementing the cumulative sum.</p>
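<p>Note that the desired transformation is a running total <em>across the day columns of each row</em>, not down the rows, so a row-ordered window is not needed at all. The intended logic can be sketched in pandas with <code>cumsum(axis=1)</code> (the column names below are taken from the input table; the single-row frame is just for illustration):</p>

```python
import pandas as pd

# Day columns in the order the running total should accumulate
day_cols = ["10+ Days", "10 Days", "9 Days", "8 Days", "7 Days", "6 Days",
            "5 Days", "4 Days", "3 Days", "2 Days", "1 Day"]

df = pd.DataFrame({
    "origin": ["CWCJ"], "destination": ["MDCC"],
    "10+ Days": [66], "10 Days": [0], "9 Days": [0], "8 Days": [0],
    "7 Days": [2], "6 Days": [1], "5 Days": [13], "4 Days": [8],
    "3 Days": [11], "2 Days": [2], "1 Day": [63],
})

# Row-wise cumulative sum across the day columns
df[day_cols] = df[day_cols].cumsum(axis=1)
```

<p>In PySpark the equivalent is to add each column to the previous cumulative column in a loop (e.g. <code>F.col(prev_name) + F.col(cur_name)</code>), whereas <code>F.sum(col).over(Window.orderBy(...))</code> produces a running total down the rows, which matches the wrong output shown.</p>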
<p>Below is the <strong>output</strong> I am getting:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>origin</th>
<th>destination</th>
<th>10+</th>
<th>09-Aug</th>
<th>10-Aug</th>
<th>11-Aug</th>
<th>12-Aug</th>
<th>13-Aug</th>
<th>14-Aug</th>
<th>15-Aug</th>
<th>16-Aug</th>
<th>17-Aug</th>
<th>18-Aug</th>
<th>19-Aug</th>
</tr>
</thead>
<tbody>
<tr>
<td>CWCJ</td>
<td>MDCC</td>
<td>66</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<td>CWCJ</td>
<td>PPSP</td>
<td>21</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>DPMT</td>
<td>JNPT</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>PCWD</td>
<td>MDCC</td>
<td>50</td>
<td>42</td>
<td>42</td>
<td>42</td>
<td>42</td>
<td>42</td>
<td>42</td>
<td>42</td>
<td>42</td>
<td>42</td>
<td>42</td>
<td>42</td>
</tr>
<tr>
<td>PCWD</td>
<td>PPSP</td>
<td>0</td>
<td>63</td>
<td>63</td>
<td>63</td>
<td>63</td>
<td>63</td>
<td>63</td>
<td>63</td>
<td>63</td>
<td>63</td>
<td>63</td>
<td>63</td>
</tr>
<tr>
<td>PMKM</td>
<td>MDCC</td>
<td>2</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>PMKM</td>
<td>PPSP</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Total</td>
<td></td>
<td>139</td>
<td>126</td>
<td>126</td>
<td>126</td>
<td>126</td>
<td>126</td>
<td>126</td>
<td>126</td>
<td>126</td>
<td>126</td>
<td>126</td>
<td>126</td>
</tr>
</tbody>
</table>
</div> | <python><pandas><pyspark> | 2023-08-08 13:37:49 | 3 | 326 | vish |
76,860,082 | 604,388 | How to optimize working with an SQLite database? | <p>I have the following code to work with my files. If a file was already added to the database, it is just skipped (<code>pass</code>).</p>
<pre><code>import sqlite3 as lite

for i, file in enumerate(allMedia):
    con = lite.connect(DB_PATH)
    con.text_factory = str
    with con:
        cur = con.cursor()
        cur.execute("SELECT rowid,files_id,path,set_id,md5,tagged FROM files WHERE path = ?", (file,))
        row = cur.fetchone()
        if (row is not None):
            pass
</code></pre>
<p>The problem with this code is slow processing (2-3 seconds for each file found in the database). The database size is ~30 Mb. And thousands of files should be processed.</p>
<p>Is there any way to speed up the process?</p>
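<p>The two dominant costs here are re-opening the connection for every file and the full table scan each <code>SELECT</code> performs when there is no index on <code>path</code>. A minimal sketch (table and column names taken from the question; the <code>CREATE</code> statements and sample data are assumptions for the demo) that opens the connection once and adds an index:</p>

```python
import sqlite3 as lite

DB_PATH = ":memory:"  # stand-in for the real database file
allMedia = ["/media/a.jpg", "/media/b.jpg"]

con = lite.connect(DB_PATH)  # connect once, not once per file
con.text_factory = str
cur = con.cursor()
cur.execute("CREATE TABLE files (files_id INTEGER, path TEXT, set_id INTEGER, md5 TEXT, tagged TEXT)")
cur.execute("INSERT INTO files VALUES (1, '/media/a.jpg', 1, 'abc', NULL)")
# An index turns each lookup from a full table scan into a B-tree search
cur.execute("CREATE INDEX IF NOT EXISTS idx_files_path ON files(path)")
con.commit()

found = []
for file in allMedia:
    cur.execute("SELECT rowid, files_id, path, set_id, md5, tagged FROM files WHERE path = ?", (file,))
    if cur.fetchone() is not None:
        found.append(file)
```

<p>With the index in place, thousands of lookups should take well under a second on a ~30 MB database.</p>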
| <python><python-2.7><sqlite> | 2023-08-08 13:34:28 | 1 | 20,489 | LA_ |
76,859,988 | 11,229,812 | How to speed up a Python script that updates columns in an existing SQL table? | <p>I have an Orders table in SQL that contains StoreOrderNumber, State and Country. State and Country are empty columns and I have to populate them based on the dataframe below:</p>
<pre><code>data = {
    'StoreOrderNumber': [1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010],
    'State': ['CA', 'NY', 'TX', 'IL', 'FL', 'PA', 'OH', 'GA', 'MI', 'NC'],
    'Country': ['USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA']
}
</code></pre>
<p>The python code i have been using is:</p>
<pre><code># Adding data to Orders table
sn_conn = DatabaseConnection(config_file)
sn_conn.sqldb(env=env)

for each in tqdm(orders_final.StoreOrderNumber):
    state = orders_final.loc[orders_final.StoreOrderNumber == each, 'State'].values[0]
    country = orders_final.loc[orders_final.StoreOrderNumber == each, 'Country'].values[0]
    sn_conn.engine.execute(f"""UPDATE Orders SET State = '{state}' WHERE StoreOrderNumber = '{each}' """)
    sn_conn.engine.execute(f"""UPDATE Orders SET Country = '{country}' WHERE StoreOrderNumber = '{each}' """)

sn_conn.close()
</code></pre>
<p>While this works, it is extremely slow: it performs about 7 updates per second.
Is there any way I could modify this script to improve the speed of updating State and Country inside the SQL Orders table using Python?</p>
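<p>Row-by-row <code>UPDATE</code> statements pay one round trip (and often one implicit transaction) each. A common fix is a single parameterized <code>executemany</code> batch. The sketch below uses the stdlib <code>sqlite3</code> driver as a stand-in for the question's <code>DatabaseConnection</code>/engine, so the connection setup is an assumption; the batching pattern is the point:</p>

```python
import sqlite3

data = {
    'StoreOrderNumber': [1001, 1002, 1003],
    'State': ['CA', 'NY', 'TX'],
    'Country': ['USA', 'USA', 'USA'],
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (StoreOrderNumber INTEGER PRIMARY KEY, State TEXT, Country TEXT)")
conn.executemany("INSERT INTO Orders (StoreOrderNumber) VALUES (?)",
                 [(n,) for n in data['StoreOrderNumber']])

# One statement, one batch; bound parameters instead of f-strings
# (this also avoids the SQL-injection risk of the original code)
rows = list(zip(data['State'], data['Country'], data['StoreOrderNumber']))
conn.executemany("UPDATE Orders SET State = ?, Country = ? WHERE StoreOrderNumber = ?", rows)
conn.commit()
```

<p>With SQLAlchemy the same idea applies: pass a list of parameter dicts to a single <code>execute</code>/<code>executemany</code> call so the whole batch runs in one transaction.</p>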
| <python><python-3.x><pandas> | 2023-08-08 13:22:48 | 0 | 767 | Slavisha84 |
76,859,963 | 41,782 | How can I log SQL queries in SQLModel? | <p>How could I see/log the queries sent by <a href="https://github.com/tiangolo/sqlmodel" rel="nofollow noreferrer">sqlmodel</a> to the database?</p>
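<p>SQLModel delegates to SQLAlchemy, so SQLAlchemy's two usual mechanisms should apply: pass <code>echo=True</code> to <code>create_engine</code>, or configure the <code>sqlalchemy.engine</code> logger. A sketch of the logging route (only the stdlib part is executed here; the engine line is illustrative and the database URL is an assumption):</p>

```python
import logging

# SQLAlchemy (and therefore SQLModel) emits SQL statements on this logger at INFO level
logging.basicConfig()
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)

# Alternatively, when creating the engine:
# from sqlmodel import create_engine
# engine = create_engine("sqlite:///database.db", echo=True)
```
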
| <python><debugging><logging><sqlalchemy><sqlmodel> | 2023-08-08 13:18:46 | 1 | 14,759 | Atilla Ozgur |
76,859,841 | 18,020,941 | Wagtail always uses default site for URLs | <p>I am using the Django site framework with wagtail.
Wagtail correctly gives me the settings I specified for each site.
As of now I have 2 sites registered:</p>
<pre><code>127.0.0.1
172.1.16.155
</code></pre>
<p>So, when I visit 127.0.0.1, I get all the site settings for 127.0.0.1, and vice versa.
The issue is, I have 172.1.16.155 as my default site. When I navigate to a URL from 127.0.0.1, (example: <code>127.0.0.1:8080/home/</code>) it will go to 172.1.16.1 (<code>172.1.16.155:8080/home/</code>), and the same thing the other way around if I set the default site to the loopback addr.</p>
<p>I have tried using both <code>page.full_url</code> and <code>page.url</code>, but to no avail.
I have tried adding <code>http://127.0.0.1</code> to the site settings (though I know this is incorrect), also to no avail.</p>
<p>I am using the same root for both sites.<br />
This is a requirement for my project, as I require the same pages to be displayed no matter the domain, but the icons and such must differ on each domain. This works. It's just that the HTML contains anchor tags which lead me off the domain I am currently on.</p>
<p>Some possibly relevant Django settings:</p>
<pre class="lang-py prettyprint-override"><code># SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env("SECRET_KEY", default="django-insecure-#=($lwdhz$drt5@h752_puh!y^5aqxh!bn($t_$^x(qd#1p5=e")

DEBUG = str(env("DEBUG", default="False")).lower() in ['true', '1', 't', 'y', 'yes', 'yeah', 'yup', 'certainly', 'uh-huh']

SECURE_CROSS_ORIGIN_OPENER_POLICY = None

ALLOWED_HOSTS = env("DJANGO_ALLOWED_HOSTS", default="127.0.0.1 localhost").split(" ")
CSRF_TRUSTED_ORIGINS = env("CSRF_TRUSTED_ORIGINS", default="http://127.0.0.1 http://localhost").split(" ")

# Search
# https://docs.wagtail.org/en/stable/topics/search/backends.html
WAGTAILSEARCH_BACKENDS = {
    "default": {
        "BACKEND": "wagtail.search.backends.database",
    }
}

# Base URL to use when referring to full URLs within the Wagtail admin backend -
# e.g. in notification emails. Don't include '/admin' or a trailing slash
WAGTAILADMIN_BASE_URL = env("WAGTAILADMIN_BASE_URL", "http://127.0.0.1:8080")
WAGTAIL_SITE_NAME = env("WAGTAIL_SITE_NAME", "flex")

# Application definition
INSTALLED_APPS = [
    "flex.apps.FlexConfig",
    "blog.apps.BlogConfig",
    'reviews.apps.ReviewsConfig',
    "contact.apps.ContactConfig",
    "search",
    'wagtail_color_panel',
    "site_settings.apps.SiteSettingsConfig",
    "wagtail.contrib.settings",
    "wagtail.contrib.forms",
    "wagtail.contrib.redirects",
    "wagtail.embeds",
    "wagtail.sites",
    "wagtail.users",
    "wagtail.snippets",
    "wagtail.documents",
    "wagtail.images",
    "wagtail.search",
    "wagtail.admin",
    "wagtail.contrib.modeladmin",
    'treemodeladmin',
    "wagtail",
    "modelcluster",
    "taggit",
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
]

WAGTAILIMAGES_EXTENSIONS = ["gif", "jpg", "jpeg", "png", "webp", "svg"]

MIDDLEWARE = [
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
    "django.middleware.clickjacking.XFrameOptionsMiddleware",
    "django.middleware.security.SecurityMiddleware",
    "wagtail.contrib.redirects.middleware.RedirectMiddleware",
]

ROOT_URLCONF = "core.urls"

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [
            os.path.join(PROJECT_DIR, "templates"),
        ],
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.debug",
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
                'wagtail.contrib.settings.context_processors.settings',
            ],
            "builtins": [
                "core.templatetags.core_tags",
            ],
        },
    },
]
</code></pre>
<p>For debugging purposes I have added print statements to the wagtail get_url method, like so:</p>
<pre><code>    def get_url(self, request=None, current_site=None):
        """
        Return the 'most appropriate' URL for referring to this page from the pages we serve,
        within the Wagtail backend and actual website templates;
        this is the local URL (starting with '/') if we're only running a single site
        (i.e. we know that whatever the current page is being served from, this link will be on the
        same domain), and the full URL (with domain) if not.
        Return None if the page is not routable.
        Accepts an optional but recommended ``request`` keyword argument that, if provided, will
        be used to cache site-level URL information (thereby avoiding repeated database / cache
        lookups) and, via the ``Site.find_for_request()`` function, determine whether a relative
        or full URL is most appropriate.
        """
        # ``current_site`` is purposefully undocumented, as one can simply pass the request and get
        # a relative URL based on ``Site.find_for_request()``. Nonetheless, support it here to avoid
        # copy/pasting the code to the ``relative_url`` method below.
        print("getting url")
        if current_site is None and request is not None:
            site = Site.find_for_request(request)
            current_site = site
            print("site", site)
        print("current_site", current_site)  # None
        # print(current_site, request, request.get_host() if request is not None else None)  # None None None
        url_parts = self.get_url_parts(request=request)
        print("url_parts", url_parts)
        ....
</code></pre>
<p>and inside find_for_request:</p>
<pre><code>    @staticmethod
    def find_for_request(request):
        if request is None:
            return None
        if not hasattr(request, "_wagtail_site"):
            site = Site._find_for_request(request)
            setattr(request, "_wagtail_site", site)
        print("Site: from_request", request._wagtail_site)
        return request._wagtail_site
</code></pre>
<p>This provides me with the following terminal output:</p>
<pre class="lang-bash prettyprint-override"><code>System check identified no issues (0 silenced).
August 08, 2023 - 15:10:33
Django version 4.2.1, using settings 'core.settings'
Starting development server at http://0.0.0.0:8080/
Quit the server with CTRL-BREAK.
Site: from_request Localhost
Site: from_request Localhost
Site: from_request Localhost
Site: from_request Localhost
Site: from_request Localhost
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/autoschade/')  # I AM NAVIGATING FROM 127.0.0.1
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/caravan-en-camperschade/')  # I AM NAVIGATING FROM 127.0.0.1
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/ruitschade/')  # I AM NAVIGATING FROM 127.0.0.1
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/banden/')  # I AM NAVIGATING FROM 127.0.0.1
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/over-ons/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/duurzaamheid/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/abs-actueel/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/autoschade/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/ruitschade/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/caravan-en-camperschade/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/banden/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/opdrachtgevers/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/abs-actueel/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/caravan-en-camperschade/')
getting url
current_site None
url_parts (2, 'http://172.16.1.155:8080', '/ruitschade/')
[08/Aug/2023 15:10:33] "GET / HTTP/1.1" 200 26173
[08/Aug/2023 15:10:33] "GET /static/css/core.css HTTP/1.1" 200 6927
[08/Aug/2023 15:10:33] "GET /static/css/bootstrap.min.css HTTP/1.1" 200 241729
[08/Aug/2023 15:10:33] "GET /static/js/bold-and-bright.js HTTP/1.1" 200 1395
[08/Aug/2023 15:10:33] "GET /static/js/core.js HTTP/1.1" 200 12097
[08/Aug/2023 15:10:33] "GET /static/js/bootstrap.min.js HTTP/1.1" 200 80372
[08/Aug/2023 15:10:33] "GET /media/images/slide-01-home.scale-100.jpg HTTP/1.1" 200 203911
[08/Aug/2023 15:10:33] "GET /media/images/Beckers_en_mulder_0000_BeckersMulder_W1C2171.scale-100.jpg HTTP/1.1" 200 181518
[08/Aug/2023 15:10:33] "GET /media/images/Beckers_en_mulder_0015_BeckersMulder_W1C1873_c.scale-100.jpg HTTP/1.1" 200 127941
[08/Aug/2023 15:10:33] "GET /media/images/anwb.width-200.svg HTTP/1.1" 200 6256
[08/Aug/2023 15:10:33] "GET /media/images/interpolis.width-200.svg HTTP/1.1" 200 7216
[08/Aug/2023 15:10:33] "GET /media/images/icon-caravan-500.scale-100.png HTTP/1.1" 200 64961
[08/Aug/2023 15:10:33] "GET /media/images/icon-ruitschade-500.scale-100.png HTTP/1.1" 200 54691
[08/Aug/2023 15:10:33] "GET /media/images/icon-autoschade-500.scale-100.png HTTP/1.1" 200 80444
[08/Aug/2023 15:10:33] "GET /media/images/icon-banden-500.scale-100.png HTTP/1.1" 200 68924
[08/Aug/2023 15:10:33] "GET /media/images/alphabet.width-200.svg HTTP/1.1" 200 5460
[08/Aug/2023 15:10:33] "GET /media/images/HEMA.width-200.svg HTTP/1.1" 200 1148
[08/Aug/2023 15:10:33] "GET /media/images/wave-pattern-2.original.svg HTTP/1.1" 200 747
Site: from_request Localhost
Site: from_request Localhost
Not Found: /favicon.ico
[08/Aug/2023 15:10:33] "GET /favicon.ico HTTP/1.1" 404 3472
</code></pre>
| <python><django><wagtail> | 2023-08-08 13:02:01 | 1 | 1,925 | nigel239 |
76,859,839 | 9,476,917 | python oracledb - update statement in procedure is executed without effect | <p>I have the following Oracle procedure that I want to trigger via Python's oracledb:</p>
<pre><code>create or replace procedure update_reported_delta(
update_tbl_name in varchar2,
update_key_col in varchar2,
update_key_vals in varchar2
)
AUTHID CURRENT_USER -- needed to grant create/update table right within stored procedure
AS
sql_cmd varchar(32767);
BEGIN
sql_cmd:= 'UPDATE ' || update_tbl_name || '
SET DELTA_REPORTED = ''Y''
WHERE '|| update_key_col ||' in
('||update_key_vals||') AND DELTA_REPORTED IS NULL';
dbms_output.put_line(sql_cmd);
execute immediate sql_cmd;
END;
</code></pre>
<p>When executed via SQL-Developer like:</p>
<pre><code>set serveroutput on;
begin
update_reported_delta('my_tbl', 'my_id_col', '''MY_ID_VAL''');
end;
</code></pre>
<p>the corresponding records are updated and the sql_cmd variable looks like:</p>
<pre><code>UPDATE my_tbl
SET DELTA_REPORTED = 'Y'
WHERE my_id_col in
('MY_ID_VAL') AND DELTA_REPORTED IS NULL
</code></pre>
<p>If I execute it using the oracledb library via python I see that the created sql statement is exactly the same. However, the values are not updated (I reset the <code>DELTA_REPORTED</code> column to <code>null</code> again, before executing the same statement - when the python console output <code>sql_cmd</code> is copy pasted into the SQL-Developer it also works...)</p>
<p>My python code looks like:</p>
<pre><code>if debug == False:
    cursor.callproc(procedure_name, params_list)
elif debug == True:
    print("""START DEBUG SESSION - OUTPUT ENABLED""")
    cursor.callproc("dbms_output.enable")
    cursor.callproc(procedure_name, params_list)
    self.__write_debug_output_to_console(cursor)
print(f"Executed {procedure_name} successfully!")
</code></pre>
<p>where <code>procedure_name</code>= UPDATE_REPORTED_DELTA and <code>params_list</code> looks like <code>['my_tbl', 'my_id_col', "'MY_ID_VAL'"]</code>.</p>
<p>I can see that the procedure is triggered via Python, as I can provoke errors that throw ORA errors in my Python console - e.g. when adding non-existent table columns to the SQL query within the procedure...</p>
<p>Any help/hint appreciated!</p>
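<p>One likely culprit worth checking: python-oracledb, like DB-API drivers generally, runs with autocommit off, so the <code>UPDATE</code> executed inside the procedure is rolled back when the connection closes unless <code>connection.commit()</code> is called after <code>cursor.callproc(...)</code> (or <code>connection.autocommit = True</code> is set). The sketch below illustrates the same pitfall with the stdlib <code>sqlite3</code> driver, since the behaviour is driver-independent:</p>

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(db)
writer.execute("CREATE TABLE t (delta_reported TEXT)")
writer.execute("INSERT INTO t VALUES (NULL)")
writer.commit()

writer.execute("UPDATE t SET delta_reported = 'Y'")
# Without a commit, a second connection still sees the old value
reader = sqlite3.connect(db)
before = reader.execute("SELECT delta_reported FROM t").fetchone()[0]

writer.commit()  # the equivalent of connection.commit() after cursor.callproc(...)
after = reader.execute("SELECT delta_reported FROM t").fetchone()[0]
```

<p>SQL Developer commits for you (or keeps the session open), which would explain why the same statement "works" there but appears to have no effect from Python.</p>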
| <python><oracle-database><stored-procedures><sql-update><python-oracledb> | 2023-08-08 13:01:55 | 0 | 755 | Maeaex1 |
76,859,744 | 2,802,576 | Pandas: Check if Pandas Dataframe column contains a Dataframe | <p>I have a DataFrame with a column whose values are themselves DataFrames. I want to check whether all the values in that column are of type DataFrame.</p>
<pre><code>import pandas as pd
from pandas import DataFrame

t_df = DataFrame(data={
    "r": [1, 2, 3],
    "s": ['a', 'b', 'c']
})

d = {
    "ts": [1e7, 1e8, 1e9],
    "value": [100, 200, 300],
    "value2": [1, 2, 3],
    "tvalue": [t_df, t_df, t_df]
}
df = pd.DataFrame(d)
</code></pre>
<p>If I check <code>type(df['tvalue'])</code> it shows <code><class 'pandas.core.series.Series'></code>, and if I do the same check for the <code>value2</code> column it also shows <code><class 'pandas.core.series.Series'></code>.</p>
<p>Is there a way to determine whether all the values inside <code>tvalue</code> are of type <code>DataFrame</code>?</p>
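<p>Note that <code>type(df['tvalue'])</code> asks for the type of the column object itself, which is always a Series; checking the elements requires a per-value <code>isinstance</code> test. A minimal sketch reusing frames like the question's:</p>

```python
import pandas as pd

t_df = pd.DataFrame({"r": [1, 2, 3], "s": ["a", "b", "c"]})
df = pd.DataFrame({
    "value2": [1, 2, 3],
    "tvalue": [t_df, t_df, t_df],
})

# True only if every element of the column is a DataFrame
all_frames = df["tvalue"].map(lambda v: isinstance(v, pd.DataFrame)).all()
not_frames = df["value2"].map(lambda v: isinstance(v, pd.DataFrame)).all()
```
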
| <python><pandas><dataframe> | 2023-08-08 12:51:17 | 1 | 801 | arpymastro |
76,859,705 | 10,155,573 | Error when installing Prophet in docker image | <p>I am trying to install prophet, yet (only when trying to build my Docker image) the following error is thrown:</p>
<pre><code>19.18 Collecting ujson (from cmdstanpy==0.9.68->prophet)
19.57 Downloading https://files.pythonhosted.org/packages/21/93/ba928551a83251be01f673755819f95a568cda0bfb9e0859be80086dce93/ujson-4.3.0.tar.gz (7.1MB)
23.28 Complete output from command python setup.py egg_info:
23.28 Traceback (most recent call last):
23.28 File "<string>", line 1, in <module>
23.28 File "/tmp/pip-build-hzpc95u_/ujson/setup.py", line 38, in <module>
23.28 "write_to_template": version_template,
23.28 File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 129, in setup
23.28 return distutils.core.setup(**attrs)
23.28 File "/usr/lib/python3.6/distutils/core.py", line 108, in setup
23.28 _setup_distribution = dist = klass(attrs)
23.28 File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 372, in __init__
23.28 _Distribution.__init__(self, attrs)
23.28 File "/usr/lib/python3.6/distutils/dist.py", line 281, in __init__
23.28 self.finalize_options()
23.28 File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 528, in finalize_options
23.28 ep.load()(self, ep.name, value)
23.28 File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2324, in load
23.28 return self.resolve()
23.28 File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2330, in resolve
23.28 module = __import__(self.module_name, fromlist=['__name__'], level=0)
23.28 File "/tmp/pip-build-hzpc95u_/ujson/.eggs/setuptools_scm-7.1.0-py3.6.egg/setuptools_scm/__init__.py", line 5
23.28 from __future__ import annotations
23.28 ^
23.28 SyntaxError: future feature annotations is not defined
23.28
23.28 ----------------------------------------
23.69 Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-hzpc95u_/ujson/
------
failed to solve: process "/bin/sh -c python3 -m pip install prophet" did not complete successfully: exit code: 1
</code></pre>
<p>The above issue does not appear on my local execution. I have installed prophet using the <code>pip install prophet</code> command and everything works properly. The issue arises only when trying to create the Docker image.</p>
<p>The Dockerfile I use is the following:</p>
<pre><code>FROM ubuntu:18.04
RUN apt-get update
# To avoid the questions related to the geographic area
RUN ln -snf /usr/share/zoneinfo/$CONTAINER_TIMEZONE /etc/localtime && echo $CONTAINER_TIMEZONE > /etc/timezone
RUN apt-get install -y python3.10 python3-pip
RUN pip3 install flask pandas numpy cython
RUN python3 -m pip install prophet
COPY app.py app.py
COPY checks.py checks.py
COPY predict.py predict.py
EXPOSE 5000
CMD ["python3", "-u", "app.py"]
</code></pre>
<p>I've also tried installing <code>ujson</code> first and then installing prophet. <code>ujson</code> installs, yet afterwards the same issue arises.</p>
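<p>The traceback paths (<code>/usr/lib/python3.6/...</code>) show the build is actually running under Python 3.6: on Ubuntu 18.04, apt's <code>python3-pip</code> is wired to the distribution's 3.6 interpreter even if <code>python3.10</code> is installed alongside it, and ujson's build requires 3.7+ (<code>from __future__ import annotations</code>). A sketch of a Dockerfile that sidesteps this by starting from an official Python image (the version choice and extra build tools are assumptions):</p>

```dockerfile
FROM python:3.10-slim

# Compilers that prophet's dependencies may need at build time
RUN apt-get update && apt-get install -y --no-install-recommends gcc g++ && \
    rm -rf /var/lib/apt/lists/*

RUN pip install --no-cache-dir flask pandas numpy cython prophet

COPY app.py checks.py predict.py ./
EXPOSE 5000
CMD ["python", "-u", "app.py"]
```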
| <python><python-3.x><docker><facebook-prophet> | 2023-08-08 12:44:28 | 1 | 707 | csymvoul |
76,859,518 | 8,565,438 | How to save a numpy array to a webp file? | <pre><code>import numpy as np
from PIL import Image
</code></pre>
<hr />
<p>I have a numpy array <code>res</code>:</p>
<pre><code>res = \
np.array([[[ 1, 142, 68],
           [ 1, 142, 74]],
          [[ 1, 142, 70],
           [ 1, 142, 77]],
          [[ 1, 142, 72],
           [ 1, 142, 79]],
          [[ 1, 142, 75],
           [ 1, 142, 82]]])
<p>I would like to save it to a webp file and check if I can recover the result.</p>
<p><strong>Attempt I</strong></p>
<pre><code># save:
Image.fromarray(res.astype(np.uint8), mode='RGB').save("test.webp", "WEBP")
# check result:
np.array(Image.open("test.webp"))
</code></pre>
<p>This outputs:</p>
<pre><code>array([[[ 2, 143, 75],
        [ 2, 143, 75]],

       [[ 2, 143, 75],
        [ 2, 143, 75]],

       [[ 2, 143, 75],
        [ 2, 143, 75]],

       [[ 2, 143, 75],
        [ 2, 143, 75]]], dtype=uint8)
</code></pre>
<p>Attempt failed, as these aren't the same numbers as I started with in <code>res</code>.</p>
<p><strong>Attempt II</strong></p>
<p>Now without <code>.astype(np.uint8)</code>:</p>
<pre><code># save:
Image.fromarray(res,mode='RGB').save("test.webp", "WEBP")
# check result:
np.array(Image.open("test.webp"))
</code></pre>
<p>This outputs:</p>
<pre><code>array([[[ 2, 5, 31],
        [ 0, 0, 27]],

       [[17, 21, 40],
        [ 0, 0, 15]],

       [[ 0, 0, 3],
        [25, 32, 34]],

       [[ 0, 8, 1],
        [ 0, 7, 0]]], dtype=uint8)
</code></pre>
<p>This is even worse than Attempt I.</p>
<p><strong>How can I save a numpy array to a webp file?</strong></p>
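<p>WebP is lossy by default, which explains the small value shifts in Attempt I; Attempt II additionally hands Pillow an int64 buffer while declaring 8-bit RGB, which scrambles the data entirely. Pillow's WebP writer accepts <code>lossless=True</code>, after which the uint8 values should round-trip exactly. A sketch (written to an in-memory buffer as a stand-in for <code>test.webp</code>):</p>

```python
import io

import numpy as np
from PIL import Image

res = np.array([[[1, 142, 68], [1, 142, 74]],
                [[1, 142, 70], [1, 142, 77]],
                [[1, 142, 72], [1, 142, 79]],
                [[1, 142, 75], [1, 142, 82]]])

buf = io.BytesIO()  # stand-in for "test.webp" on disk
Image.fromarray(res.astype(np.uint8), mode="RGB").save(buf, "WEBP", lossless=True)

buf.seek(0)
recovered = np.array(Image.open(buf))
```
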
| <python><numpy><python-imaging-library><rgb><webp> | 2023-08-08 12:21:29 | 1 | 8,082 | zabop |
76,859,397 | 21,346,793 | How can I shuffle in Django forms? | <p>I have a quiz-like project, but I need to shuffle the answers within each question.
Here is my code:
template.html</p>
<pre><code><form method="post">
    {% csrf_token %}
    <h3>{{ current_question.text }}</h3>
    {{ form.selected_answer }}
    <button type="submit">Next</button>
</form>
</code></pre>
<p>views.py</p>
<pre><code>    if request.method == 'POST':
        form = QuestionForm(request.POST, question=current_question)
        if form.is_valid():
            user_answer = form.cleaned_data['selected_answer']
            user_test.useranswer_set.update_or_create(question=current_question,
                                                      defaults={'selected_answer': user_answer})
            return redirect('test', test_id=test_id)
    else:
        form = QuestionForm(question=current_question)
</code></pre>
<p>Here is my Django form. I tried it like this but it doesn't work:</p>
<pre><code>from django import forms
from .models import Answer
import random
from django.db.models.query import QuerySet


class QuestionForm(forms.Form):
    selected_answer = forms.ModelChoiceField(
        queryset=Answer.objects.none(),
        widget=forms.RadioSelect,
        empty_label=None
    )

    def __init__(self, *args, question=None, **kwargs):
        super().__init__(*args, **kwargs)
        if question:
            answers = list(question.answer_set.all())
            random.shuffle(answers)
            self.fields['selected_answer'].queryset = answers
</code></pre>
| <python><django> | 2023-08-08 12:06:14 | 1 | 400 | Ubuty_programmist_7 |
76,859,293 | 3,247,006 | function vs class vs module vs package vs session for fixture scopes in Pytest | <p>I set 5 fixtures with <code>function</code>, <code>class</code>, <code>module</code>, <code>package</code> and <code>session</code> scopes to <code>test1()</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>import pytest


@pytest.fixture(scope='function')
def fixture_function():
    print('function')


@pytest.fixture(scope='class')
def fixture_class():
    print('class')


@pytest.fixture(scope='module')
def fixture_module():
    print('module')


@pytest.fixture(scope='package')
def fixture_package():
    print('package')


@pytest.fixture(scope='session')
def fixture_session():
    print('session')


class Test1:
    def test1(
        self,
        fixture_function,
        fixture_class,
        fixture_module,
        fixture_package,
        fixture_session
    ):
        pass
</code></pre>
<p>Then, I ran the command below:</p>
<pre class="lang-none prettyprint-override"><code>pytest -q -rP
</code></pre>
<p>Then, each fixture ran once according to the result below:</p>
<pre class="lang-none prettyprint-override"><code>$ pytest -q -rP
. [100%]
=============== PASSES ===============
____________ Test1.test1 _____________
_______ Captured stdout setup ________
session
package
module
class
function
1 passed in 0.10s
</code></pre>
<p>My questions:</p>
<ol>
<li><p>What is the difference between <code>function</code>, <code>class</code>, <code>module</code>, <code>package</code> and <code>session</code> for fixture scopes in Pytest?</p>
</li>
<li><p>When should I use fixture scopes in Pytest?</p>
</li>
</ol>
| <python><pytest><fixtures><pytest-fixtures> | 2023-08-08 11:51:31 | 3 | 42,516 | Super Kai - Kazuya Ito |
76,859,150 | 3,423,825 | How to keep the first boolean of a consecutive group of values in Pandas? | <p>I would like to keep a <code>True</code> when it's isolated between 2 <code>False</code>, and keep only the first <code>True</code> of a group of consecutive <code>True</code> values.</p>
<p>What is a single-line solution to do that?</p>
<pre><code>Date
2023-07-10 00:00:00+00:00 False
2023-07-11 00:00:00+00:00 False
2023-07-12 00:00:00+00:00 False
2023-07-13 00:00:00+00:00 False
2023-07-14 00:00:00+00:00 False
2023-07-15 00:00:00+00:00 False
2023-07-16 00:00:00+00:00 False
2023-07-17 00:00:00+00:00 False
2023-07-18 00:00:00+00:00 False
2023-07-19 00:00:00+00:00 False
2023-07-20 00:00:00+00:00 False
2023-07-21 00:00:00+00:00 True
2023-07-22 00:00:00+00:00 False
2023-07-23 00:00:00+00:00 False
2023-07-24 00:00:00+00:00 False
2023-07-25 00:00:00+00:00 False
2023-07-26 00:00:00+00:00 False
2023-07-27 00:00:00+00:00 False
2023-07-28 00:00:00+00:00 False
2023-07-29 00:00:00+00:00 True
2023-07-30 00:00:00+00:00 True
2023-07-31 00:00:00+00:00 False
2023-08-01 00:00:00+00:00 False
2023-08-02 00:00:00+00:00 False
2023-08-03 00:00:00+00:00 False
2023-08-04 00:00:00+00:00 False
2023-08-05 00:00:00+00:00 True
2023-08-06 00:00:00+00:00 True
2023-08-07 00:00:00+00:00 True
2023-08-08 00:00:00+00:00 True
Freq: D, Name: Close, dtype: bool
</code></pre>
<p>Desired result would be:</p>
<pre><code>Date
2023-07-10 00:00:00+00:00 False
2023-07-11 00:00:00+00:00 False
2023-07-12 00:00:00+00:00 False
2023-07-13 00:00:00+00:00 False
2023-07-14 00:00:00+00:00 False
2023-07-15 00:00:00+00:00 False
2023-07-16 00:00:00+00:00 False
2023-07-17 00:00:00+00:00 False
2023-07-18 00:00:00+00:00 False
2023-07-19 00:00:00+00:00 False
2023-07-20 00:00:00+00:00 False
2023-07-21 00:00:00+00:00 True
2023-07-22 00:00:00+00:00 False
2023-07-23 00:00:00+00:00 False
2023-07-24 00:00:00+00:00 False
2023-07-25 00:00:00+00:00 False
2023-07-26 00:00:00+00:00 False
2023-07-27 00:00:00+00:00 False
2023-07-28 00:00:00+00:00 False
2023-07-29 00:00:00+00:00 True
2023-07-30 00:00:00+00:00 False
2023-07-31 00:00:00+00:00 False
2023-08-01 00:00:00+00:00 False
2023-08-02 00:00:00+00:00 False
2023-08-03 00:00:00+00:00 False
2023-08-04 00:00:00+00:00 False
2023-08-05 00:00:00+00:00 True
2023-08-06 00:00:00+00:00 False
2023-08-07 00:00:00+00:00 False
2023-08-08 00:00:00+00:00 False
Freq: D, Name: Close, dtype: bool
</code></pre>
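<p>A single-line approach is to mask out any <code>True</code> whose previous value was also <code>True</code>, e.g. <code>s & ~s.shift(fill_value=False)</code>. A sketch on a shortened boolean series with the same kinds of runs:</p>

```python
import pandas as pd

# isolated True, run of two, run of three
s = pd.Series([False, True, False, True, True, False, True, True, True])

# Keep a True only when the previous element was False
first_of_run = s & ~s.shift(fill_value=False)
```

<p><code>fill_value=False</code> keeps the dtype boolean and treats the element before the series as <code>False</code>, so a run starting at index 0 is also handled.</p>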
| <python><pandas> | 2023-08-08 11:33:23 | 1 | 1,948 | Florent |
76,859,015 | 7,187,868 | E1/3 function in python? | <p>I am analyzing some lab results with Python and I need to fit the data to a distribution that includes the generalized exponential integral E<sub>1/3</sub>.
The closest thing I could find, besides calculating the integral myself (which is very slow), is <code>scipy.special.expn(n, x)</code>, but it only supports integer values for n, and I need <code>n=1/3</code>.</p>
<p>Does anybody know if such a function exists?</p>
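<p>One vectorized route uses the identity E<sub>ν</sub>(x) = x<sup>ν−1</sup> Γ(1−ν, x); for 0 &lt; ν &lt; 1 the upper incomplete gamma is available in SciPy as <code>gamma(1-nu) * gammaincc(1-nu, x)</code> (<code>gammaincc</code> is the regularized version). A sketch, self-checked against direct quadrature of the defining integral:</p>

```python
import numpy as np
from scipy.special import gamma, gammaincc

def expn_frac(nu, x):
    """Generalized exponential integral E_nu(x) for 0 < nu < 1 and x > 0,
    via E_nu(x) = x**(nu-1) * Gamma(1-nu, x)."""
    a = 1.0 - nu
    return x ** (nu - 1.0) * gamma(a) * gammaincc(a, x)

e13 = expn_frac(1.0 / 3.0, 1.0)
```

<p>If arbitrary precision is needed, <code>mpmath.expint(1/3, x)</code> accepts non-integer orders and gives an independent cross-check (though it is much slower than the SciPy route).</p>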
| <python><scipy><data-analysis><physics> | 2023-08-08 11:16:45 | 2 | 461 | bloop |
76,858,988 | 1,039,302 | python: html string to pdf via Pdfkit: avoid image spanning 2 pages | <p>I want to output an <code>html string</code> to PDF via <code>Pdfkit</code> and <code>python</code>. The HTML string includes an image. The problem is that the image spans 2 pages, as shown below.</p>
<p><a href="https://i.sstatic.net/2JYt3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2JYt3.png" alt="enter image description here" /></a></p>
<p>Assume the image can be held in one page,</p>
<ul>
<li>how to make the image not span into 2 pages via Pdfkit and python?</li>
<li>Or if Pdfkit can't do it, any other methods?</li>
</ul>
<p>The source is <code>html string</code>. Therefore, I can't calculate if the space left in one page can hold the size of the image. Any idea? Thank you.</p>
<p>The code follows. <code>qqaa</code> stands for <strong>base64 image data</strong>. If I included the data, it would exceed Stack Overflow's 30,000-character limit, so the code below won't run as-is. I didn't know how else I could attach the Python script.</p>
<pre><code>import pdfkit
html_str = ''
for i in range(1,15):
html_str += '<p>many row</p>'
html_str += '<h3>3.1.12 Draw</h3><h4>3.1.13.1 3D</h4><img src="data:image/png;base64,qqaa" alt="3D Structure">'
opt = {'encoding': 'UTF-8', 'orientation': 'Landscape', 'margin-top': '0.5in', 'margin-bottom': '0.5in', 'margin-left': '0.75in', 'margin-right': '0.75in', 'outline-depth': 6, 'header-center': 'whatever', 'header-right': 'Page: [page]/[toPage]', 'header-line': '', 'header-spacing': 2, 'footer-right': 'Date: [date]', 'footer-line': '', 'footer-spacing': 2, 'enable-local-file-access': None}
pdfkit.from_string(html_str, 'out.pdf', options=opt)
</code></pre>
<p><strong>Edit:</strong> one solution is to put <code><P style="page-break-before: always"></code> directly in the html string.</p>
<pre><code><P style="page-break-before: always"><img src="data:image/png;base64,qqaa" alt="3D Structure">
</code></pre>
<p><a href="https://i.sstatic.net/RF611.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RF611.png" alt="enter image description here" /></a></p>
| <python><image><python-pdfkit> | 2023-08-08 11:14:05 | 1 | 1,713 | warem |
76,858,868 | 12,493,545 | How to thread-safely capture stdout output from a function call? | <p>I have a program that now will get a REST API. Instead of rewriting a bunch of code I thought I would just run the program, capture the prints and put that into a message response.</p>
<p>On <a href="https://stackoverflow.com/q/16571150/3001761">How to capture stdout output from a Python function call?</a> I saw:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/16571150/how-to-capture-stdout-output-from-a-python-function-call#answer-16571630">this solution</a> which is apparently not thread-safe. I tried it with concurrent futures where it worked, but I trust the assessment there that it is not thread safe.</li>
<li>I also noticed the implementation <a href="https://stackoverflow.com/questions/16571150/how-to-capture-stdout-output-from-a-python-function-call#answer-62397337">here</a>, but while it uses threads to be asynchronous, it never explicitly states that it is itself thread-safe.</li>
</ul>
<p>This is how I tried to test whether it is thread-safe, before I read that it is apparently not. I never had any issues with this (i.e. the output was always <code>['hello world', 'hello world2']</code>), but maybe this is a characteristic of concurrent futures that is not present when using asynchronous functions from REST API modules (FastAPI in my case).</p>
<pre class="lang-py prettyprint-override"><code>import concurrent.futures
from io import StringIO
import sys
def main():
num_tests = 30
with concurrent.futures.ThreadPoolExecutor() as executor:
futures = [executor.submit(test) for _ in range(num_tests)]
for future in concurrent.futures.as_completed(futures):
try:
result = future.result()
except Exception as e:
print(f"An error occurred: {e}")
class Capturing(list):
def __enter__(self):
self._stdout = sys.stdout
sys.stdout = self._stringio = StringIO()
return self
def __exit__(self, *args):
self.extend(self._stringio.getvalue().splitlines())
del self._stringio # free up some memory
sys.stdout = self._stdout
def test():
with Capturing() as output:
print('hello world')
print('displays on screen')
with Capturing(output) as output: # note the constructor argument
print('hello world2')
print('done')
print('output:', output)
main()
</code></pre>
<p>In the best case I'm looking for an idea how to capture stdout asynchronously. In the worst case an explanation why that isn't possible. In that case I will be looking for other solutions.</p>
| <python><multithreading><stdout><python-multithreading> | 2023-08-08 10:57:45 | 0 | 1,133 | Natan |
76,858,406 | 342,553 | Relationship between Python asyncio loop and executor | <p>I generally understand the concept of async vs threads/processes, I am just a bit confused when I am reading about the <a href="https://docs.python.org/3/library/asyncio-eventloop.html" rel="nofollow noreferrer">event loop</a> of asyncio.</p>
<p>When you use <code>asyncio.run()</code> I presume it creates an event loop? Does this event loop use an executor? The link above says the event loop will use the default executor, which after a brief google search it appears to be <code>ThreadPoolExecutor</code>.</p>
<p>I am a bit confused: if I am writing coroutines, why would that use <code>ThreadPoolExecutor</code>, since I don't want to execute my async code in threads?</p>
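To make the distinction concrete, a small illustration of my own (not from the docs): coroutines run on the event loop's single thread; the default <code>ThreadPoolExecutor</code> is only used when you explicitly call <code>run_in_executor</code> (or the loop needs it internally, e.g. for DNS resolution):

```python
import asyncio
import threading
import time

def blocking_io():
    # ordinary blocking call; runs in a worker thread of the default executor
    time.sleep(0.05)
    return threading.current_thread() is threading.main_thread()

async def main():
    loop = asyncio.get_running_loop()
    in_main_before = threading.current_thread() is threading.main_thread()
    # None selects the loop's default executor (a ThreadPoolExecutor)
    ran_in_main = await loop.run_in_executor(None, blocking_io)
    return in_main_before, ran_in_main

coroutine_in_main, executor_in_main = asyncio.run(main())
```

Coroutine code itself stays on the loop thread; only the explicitly offloaded call lands in the pool.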
| <python><python-asyncio> | 2023-08-08 09:52:19 | 2 | 26,828 | James Lin |
76,858,382 | 18,092,798 | How to add memory usage to Snakemake reports | <p>It seems that by default, Snakemake includes runtime in its reports. Is there any way to also add memory usage to the report as well?</p>
<p>Also, is there any way for us to see maximum memory usage of Snakemake over the course of a pipeline? I know you can use the <code>benchmark</code> directive to look at memory usage of an individual process, but this doesn't take into account how Snakemake runs things in parallel. This means that the total memory usage of Snakemake cannot be represented so easily using just <code>benchmark</code>.</p>
| <python><snakemake> | 2023-08-08 09:49:57 | 0 | 581 | yippingAppa |
76,858,353 | 8,792,159 | Possibility to trace back which packages hinder conda from installing most recent python version? | <p>I have a quite large <code>.yml</code> file that I use to create a conda environment. However, with the current specifications, I only get Python 3.8. Is there any way to trace back which packages and/or their combination hinder conda from installing the most recent python version? I would be willing to sacrifice those packages if it means that I can use a faster python version. Here's my <code>.yml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>channels:
- conda-forge
- pyviz
- plotly
- pytorch
- ejolly
- districtdatalabs
- bokeh
- defaults
dependencies:
- bokeh
- python
- sklearn-pandas
- datalad
- tableone
- spyder
- scikit-learn
- nilearn
- nipype
- dash
- dash-bootstrap-components
- statannotations
- fastcluster
- pingouin
- mne
- tensorboardx
- neurokit2
- oct2py
- pybids
- yellowbrick
- pymer4
- plotly-orca
- plotly
- pytorch
- holoviews
- hvplot
- r-essentials
- r-matrixstats
- r-repr
- r-devtools
- jupyterlab
- r-base
- r-pma
- r-dendextend
- r-circlize
- r-reticulate
- pip
- pip:
- xmltodict
- adjustText
- dash-cytoscape
- brainrender
- network_control
- cca-zoo
- bctpy
- gemmr
- gower
- antropy
- pyvis
- distro
- abagen
- groupyr
- clustergrammer
- nisupply
- fmriprep-docker
- lckr-jupyterlab-variableinspector
</code></pre>
| <python><dependencies><conda><package-managers> | 2023-08-08 09:44:29 | 1 | 1,317 | Johannes Wiesner |
76,858,314 | 7,195,666 | How to properly type client.post response in django test? | <pre><code> @pytest.mark.django_db
def my_test(
self,
client,
):
with monkeypatch_requests(resp_json={'values': []}):
r: JsonResponse = client.post(
self.endpoint,
self.request_payload,
content_type='application/json',
)
assert r.status_code == 201, r.content
assert r.json() == {'key': value}
</code></pre>
<p>PyCharm is complaining <code>Unresolved attribute reference 'json' for class 'JsonResponse'</code>, but I can use it at runtime.</p>
<p>When I run the test in debugger and hover over <code>json</code> I can see <code>functools.partial(<bound method ClientMixin._parse_json of <django.test.client.Client object at 0x7f3b81399d30>>, <JsonResponse status_code=201, "application/json">)</code></p>
| <python><django><unit-testing><typing> | 2023-08-08 09:39:24 | 1 | 2,271 | Vulwsztyn |
76,857,930 | 14,301,545 | CRC calculation in Python (from C++ code conversion) | <p>I am trying to rewrite the code for calculating the CRC given in the documentation of the sensor I am trying to communicate with.</p>
<p>Datasheet screens:
<a href="https://i.sstatic.net/RMd0B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RMd0B.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/hgqqL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hgqqL.png" alt="enter image description here" /></a></p>
<p>Example message I want to send:
<a href="https://i.sstatic.net/iTQ1r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iTQ1r.png" alt="enter image description here" /></a></p>
<p>My Python code:</p>
<pre><code>def calc_crc(p_buffer, buffer_size):
poly = 0x8408
crc = 0
for j in range(buffer_size):
crc = crc ^ p_buffer[j]
for i_bits in range(8):
carry = crc & 1
crc = crc / 2
if carry:
crc = crc^poly
return crc
# MESSAGE: IMU_GET_PROTOCOL_MODE------(0x13)
# Used to get the communication baud rate
def imu_get_protocol_mode():
frame_sync_byte = 'FF'
frame_start_byte = '02'
addr = '00'
cmd = '13'
len_msb = '00'
len_lsb = '00'
data = ''
frame_end_byte = '03'
to_calc_crc = addr + cmd + len_msb + len_lsb + data
to_calc_crc_b = bytearray.fromhex(to_calc_crc)
crc = calc_crc(to_calc_crc_b, 4)
command = frame_sync_byte + frame_start_byte + to_calc_crc + crc + frame_end_byte
print(command)
imu_get_protocol_mode()
</code></pre>
<p>Unfortunately, I'm weak in bit operations. The script returns an error:</p>
<pre><code>Traceback (most recent call last):
File "C:\DANE\PyCharm Projects\INC\testy.py", line 41, in <module>
imu_get_protocol_mode()
File "C:\DANE\PyCharm Projects\INC\testy.py", line 35, in imu_get_protocol_mode
crc = calc_crc(to_calc_crc_b, 4)
File "C:\DANE\PyCharm Projects\INC\testy.py", line 11, in calc_crc
carry = crc & 1
TypeError: unsupported operand type(s) for &: 'float' and 'int'
</code></pre>
<p>I don't understand it, because I expected both <code>crc</code> and <code>1</code> to be of type <code>int</code>.</p>
<p>Can anyone guide me to make these CRC calculations correct? Thanks!</p>
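For context, a hedged sketch of what the fix might look like: in Python 3, <code>/</code> is true division and yields a <code>float</code>, so the C++ <code>crc / 2</code> should become an integer right shift (<code>crc &gt;&gt;= 1</code>) or floor division (<code>crc //= 2</code>). With that one change the routine appears to match the reflected CRC-16 family with polynomial <code>0x8408</code> and initial value 0 (the byte order of the CRC in the final frame is an assumption that must be checked against the datasheet):

```python
def calc_crc(buffer):
    # reflected CRC-16, polynomial 0x8408, initial value 0
    poly = 0x8408
    crc = 0
    for byte in buffer:
        crc ^= byte
        for _ in range(8):
            carry = crc & 1
            crc >>= 1          # integer shift: '/' would turn crc into a float
            if carry:
                crc ^= poly
    return crc

frame = bytearray.fromhex("00130000")  # addr + cmd + len_msb + len_lsb
crc = calc_crc(frame)
command = "FF02" + "00130000" + format(crc, "04x") + "03"
```

The original also tried to concatenate the integer <code>crc</code> onto a string, which would raise a <code>TypeError</code>; <code>format(crc, "04x")</code> handles that.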
| <python><crc> | 2023-08-08 08:52:30 | 2 | 369 | dany |
76,857,679 | 3,906,786 | How to add line numbers to large files fast? | <p>I'm still working on an ETL program to load lots of data files into a Netezza database. One requirement of the import is that a leading line number is added in the datafile. Those datafiles may have all different sizes, starting from a few bytes up to > 4GB (approx. 13.000.000 lines and more).</p>
<p>The loader I wrote uses the Python threading module to spawn up to 10 threads at the same time. After lots of testing I figured out that whenever a thread is working on adding a line number to each line and write it to a new datafile, all other threads are blocked.</p>
<p>The counting part, i.e. adding the number to each line, cannot be skipped, so I have to think about another solution. This could be either using multiprocessing instead of multithreading, or changing the way the number is added to each line.</p>
<p>Here's the code I currently use for the counting part:</p>
<pre class="lang-py prettyprint-override"><code>from itertools import count
counter = count(0)
cnt = 0
file_in = open(orig_filename, 'rb')
with open(filename_tmp, 'wb') as file_out:
skip = False
for line in iter(file_in.readline, b''):
if header_included is True:
if skip is False:
skip = True
cnt = next(counter)
continue
else:
if cnt == 0:
cnt = next(counter)
cnt = next(counter)
new_line = f'{cnt}\t'.encode(encoding) + line
file_out.write(new_line)
if line:
field_len = len(line.decode(encoding).split('\t'))
file_in.close()
</code></pre>
<p>Usually this piece of code is quite performant, but it seems that adding the line number is less I/O-bound and more processor-bound.</p>
<p>The whole script itself does a lot more, like moving files around, etc. but it seems that when a huge file is being counted, all other things are just waiting. Btw, don't know if this is relevant, but I have to use Python 3.6 due to system OS limitations. :(</p>
<p>Any hints or tips for me? Many thanks in advance, kind regards, Thomas</p>
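As a point of comparison (a sketch of one possible approach, not the asker's code): keeping the counter inside <code>enumerate()</code> and formatting with a bytes template removes most of the per-line Python work, and a self-contained function like this could also be handed to a <code>ProcessPoolExecutor</code> to sidestep the GIL:

```python
def number_lines(src_path, dst_path, skip_header=False):
    # enumerate() keeps the line counter in C instead of Python-level next(counter)
    with open(src_path, "rb") as fin, \
         open(dst_path, "wb", buffering=1024 * 1024) as fout:
        lines = iter(fin)
        if skip_header:
            next(lines, None)
        write = fout.write
        for n, line in enumerate(lines, start=1):
            # bytes %-formatting avoids a per-line str -> bytes encode round-trip
            write(b"%d\t%s" % (n, line))
```

Working on raw bytes means the field-count/encoding logic of the original would have to be done elsewhere.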
| <python><python-3.x><python-multiprocessing><python-multithreading> | 2023-08-08 08:19:32 | 1 | 983 | brillenheini |
76,857,524 | 4,405,477 | Python virtual environment in NX monorepo | <p>I am using NX monorepo for one of my large javascript projects and one part of it relies on an external project that creates its own python virtual environment. Parts of my nodejs code are executing python commands and then retrieve the data required for the rest of JS build process.</p>
<p>In order to run NX scripts or tests, that virtualenv needs to be turned on, but the problem is that NX spawns its own processes that are isolated from the rest of operating system.</p>
<p>If I try to add NPM script that starts virtualenv <code>conda activate testenv</code> with <code>nx:run-script</code></p>
<pre><code> "virtualenv-conda": {
"executor": "nx:run-script",
"options": {
"script": "venv-conda"
}
}
</code></pre>
<p>or set it with <code>nx:run-commands</code> with</p>
<pre><code> "virtualenv-conda": {
"executor": "nx:run-commands",
"options": {
"commands": [
"conda activate testenv"
],
"parallel": false
}
}
</code></pre>
<p>I get the following error:</p>
<pre><code>CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
</code></pre>
<p>I am using ZSH on MacOS.</p>
<p>My questions are:</p>
<ol>
<li>How do I start python virtual environment before specific tests?</li>
<li>How do I set up custom environment that will be used in these spawned processes?</li>
</ol>
| <python><node.js><nx-monorepo> | 2023-08-08 07:55:21 | 2 | 4,016 | vlasterx |
76,857,519 | 1,843,511 | Dunder methods for overloading | <p>For dunder methods like <code>__getattribute__</code> and <code>__getattr__</code> we have a function which access this special methods like <code>getattr()</code>, so we only have to use these dunder methods for overloading (i.e. add/change behavior of this special method in your class).</p>
<p>For <code>__members__</code> in <code>Enum</code> classes this is unclear. Which function to use to access the values which this dunder method returns, without explicitly accessing this dunder method? Is there one?</p>
<p>Or is it actually OK to use this dunder method directly? It doesn't feel right, though.</p>
<p>Is there any clear reference anywhere which built-in functions use which dunder methods? Because I'm not able to find them in their docs that quickly.</p>
| <python> | 2023-08-08 07:54:45 | 2 | 5,005 | Erik van de Ven |
76,857,480 | 1,039,302 | Regex on pandas column | <p>pandas column has <code>0.0(nan)</code> and <code>0(nan)</code>. I want to get <code>0</code> for both cases. Followed is the code.</p>
<pre><code>import pandas as pd
import re
df = pd.DataFrame.from_dict({'col1': ['0.0(nan)','0(nan)']})
df['col2'] = df['col1'].astype(str).apply(lambda x: re.sub('(.*?)\(nan\)', '\\1', re.sub('(.*?)\.0*\(nan\)', '\\1', x)))
print(df)
</code></pre>
<p>Below is the output. For the regex, I didn't know how to handle either <code>.0</code> or <code>0</code> before the <code>(</code>. This is why I used one <code>re.sub</code> inside another. My question is how to write the regex with a <code>single re.sub</code>. Or are there any other methods? Thank you.</p>
<pre><code> col1 col2
0 0.0(nan) 0
1 0(nan) 0
</code></pre>
<p><strong>Edit:</strong> per the comment of @mozway:</p>
<pre><code>df['col2'] = df['col1'].astype(str).apply(lambda x: re.sub('(.*?)(?:\.0)?\(nan\)', '\\1', x))
</code></pre>
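For comparison, the same idea can be expressed without <code>apply</code> at all via the vectorized <code>str.replace</code> (a sketch; column names match the question):

```python
import pandas as pd

df = pd.DataFrame({'col1': ['0.0(nan)', '0(nan)']})
# (?:\.0)? optionally consumes a trailing ".0" right before the "(nan)" suffix
df['col2'] = df['col1'].str.replace(r'(?:\.0)?\(nan\)$', '', regex=True)
```

The anchored `$` keeps the pattern from touching anything other than the suffix.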
| <python><pandas><regex> | 2023-08-08 07:50:41 | 0 | 1,713 | warem |
76,857,408 | 13,882,618 | Can I apply functools.wraps to a normal function, which is not a decorator or a decorator factory? | <p>I have a function like this</p>
<pre class="lang-py prettyprint-override"><code>def load_resources(path: Path, src_type: str, *, load_all: bool = False) -> bool:
... # something happens here and return a value
</code></pre>
<p>Then the IDE (VS Code in this case) gives me useful information when using <code>load_resources</code>, including this kind of type hint.</p>
<pre><code>(function) def load_resources(path: Path, src_type: str, *, load_all: bool = False) -> bool
</code></pre>
<p>Calling the function takes some time so I wanted to put <code>functools.lru_cache</code> on the function. (I am using python3.8)</p>
<pre class="lang-py prettyprint-override"><code>@functools.lru_cache()
def load_resources(path: Path, src_type: str, *, load_all: bool = False) -> bool:
...
</code></pre>
<p>Then <code>lru_cache</code> shades the information of <code>load_resources</code>.</p>
<pre><code>(function) load_resources: _lru_cache_wrapper[Unknown]
</code></pre>
<p>So I tried <code>functools.wraps</code> like this.</p>
<pre class="lang-py prettyprint-override"><code>def __load_resources(path: Path, src_type: str, *, load_all: bool = False) -> bool:
...
@functools.wraps(__load_resources)
@functools.lru_cache()
def load_resources(*args, **kwargs):
return __load_resources(*args, **kwargs)
</code></pre>
<p>but it gives me a weird type hint like this.</p>
<pre><code>(function) load_resources: _Wrapped[(__load_resources(path: Path, src_type: str, *, load_all: bool = False), bool, (*args: Hashable, **kwargs: Hashable), Unknown]
</code></pre>
<p>I knew that the <code>__name__</code> of the function would come out wrong, but I thought the other attributes would be applied properly.</p>
<p>Is it simply wrong to use <code>functools.wraps</code> outside the context of decorators? If so, how can I attach type hints to functions decorated with <code>functools.lru_cache</code>?</p>
<hr />
<p>In the end I just did this:</p>
<pre><code>def load_resources(path: Path, src_type: str, *, load_all: bool = False) -> bool:
return __get_context(path, src_type, load_all=load_all)
</code></pre>
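For reference, one pattern often suggested for this situation (a sketch with a made-up function body and simplified types, not the asker's real loader): cache a private function at runtime, and alias the original for the type checker only:

```python
import functools
from typing import TYPE_CHECKING

def _load_resources(path: str, src_type: str, *, load_all: bool = False) -> bool:
    # placeholder body standing in for the real (slow) loader
    return load_all or src_type == "default"

# runtime name is the cached wrapper
load_resources = functools.lru_cache(maxsize=None)(_load_resources)

if TYPE_CHECKING:
    # the checker now sees the fully typed original signature, not the wrapper
    load_resources = _load_resources
```

At runtime the cache works as usual; static analysis reports the original signature because the reassignment is only visible under `TYPE_CHECKING`.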
| <python><visual-studio-code><python-typing><pylance> | 2023-08-08 07:40:51 | 1 | 401 | jolim |
76,857,044 | 13,338,404 | Getting bash: python command not found error even though I am using zsh shell in my macOS | <p>I am trying to build and install this <a href="https://github.com/AcademySoftwareFoundation/OpenRV/" rel="nofollow noreferrer">software</a>
So, I did these steps</p>
<pre><code># recursive repo cloning
git clone --recursive https://github.com/AcademySoftwareFoundation/OpenRV.git
# sourcing there scripts
source rvcmds.sh
# configure
cmake -B_build -H. -DCMAKE_BUILD_TYPE=Release -DRV_DEPS_QT5_LOCATION=/Users/macos/Qt/5.15.2/clang_64
# building
cmake --build _build --config Release -v --target main_executable
</code></pre>
<p>now during build I get this error</p>
<pre><code>OpenGL descriptors
--------------------------------------------------------------------
rm -rf extensions/gl
cp -r glfixes/gl/specs/ANGLE OpenGL-Registry/extensions
cp -r glfixes/gl/specs/REGAL OpenGL-Registry/extensions
bin/update_ext.sh extensions/gl OpenGL-Registry/extensions blacklist
--------------------------------------------------------------------
WGL descriptors
--------------------------------------------------------------------
rm -f extensions/gl/WGL_*
python bin/parse_xml.py OpenGL-Registry/xml/wgl.xml --api wgl --extensions extensions/gl
bash: python: command not found
make[4]: *** [extensions/gl/.dummy] Error 127
make[3]: *** [cmake/dependencies/RV_DEPS_GLEW-prefix/src/RV_DEPS_GLEW-stamp/RV_DEPS_GLEW-configure] Error 2
make[2]: *** [cmake/dependencies/CMakeFiles/RV_DEPS_GLEW.dir/all] Error 2
make[1]: *** [CMakeFiles/main_executable.dir/rule] Error 2
make: *** [main_executable] Error 2
</code></pre>
<p>So, according to the error, <code>python</code> is not found. To fix it, in my <code>.zprofile</code> I set up the alias <code>alias python="python3"</code>;
when I run <code>python --version</code> I get the same result as <code>python3 --version</code>,
but since CMake runs the command in a bash shell (for some unknown reason), I still get the error.</p>
<p>I even tried changing my default shell to bash and adding the alias there, but CMake still gives the error.</p>
<p>I am not sure how to fix this; if anyone has any idea, it would be really appreciated.</p>
| <python><bash><cmake><zsh><macos-ventura> | 2023-08-08 06:44:43 | 1 | 547 | Pragyan |
76,856,953 | 11,629,296 | get median of a columns based on the weights from another column | <p>I have a data frame like this,</p>
<pre><code>col1 col2
100 3
200 2
300 4
400 1
</code></pre>
<p>Now I want the median of <code>col1</code>, where the <code>col2</code> values act as the weights for the corresponding <code>col1</code> values, like this:</p>
<pre><code>median of [100, 100, 100, 200, 200, 300, 300, 300, 300, 400] # 100 is 3 times as the weight is 3
</code></pre>
<p>I can do it by creating multiple rows based on the weights, but I can't afford the extra rows. Is there any way to do it more efficiently, without creating multiple rows, in either Python or PySpark?</p>
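One way to sketch this without materializing the repeated rows (my own illustration; it reproduces the expanded-list median of 250 for the sample data): sort by value and find where the cumulative weight crosses half of the total weight:

```python
import numpy as np

def weighted_median(values, weights):
    values, weights = np.asarray(values), np.asarray(weights)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cw = np.cumsum(w)
    half = w.sum() / 2.0
    idx = np.searchsorted(cw, half)
    # exact boundary: the expanded list splits evenly here, so average neighbors
    if cw[idx] == half and idx + 1 < len(v):
        return (v[idx] + v[idx + 1]) / 2.0
    return v[idx]
```

The same cumulative-weight idea translates to PySpark with a window-function `sum` over sorted values.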
| <python><pandas><dataframe><pyspark><pyspark-pandas> | 2023-08-08 06:29:48 | 1 | 2,189 | Kallol |
76,856,867 | 7,458,826 | Read .continuous file from open ephys using python | <p>I have a series of .continuous files containing EEG data from Openephys. I am trying to read the information in order to recover the time-series to analyze them using Python tools. However, every recommended library I tried returned a similar error; for example: using <code>pyopenephys</code> (<code>continuous_data = pyopenephys.File(continuous_file)</code>), I get the error <code>NotADirectoryError: [Errno 20] Not a directory</code>; when I use the MNE library, I get the same error; when using h5py, I get <code>OSError: Unable to open file (file signature not found)</code>. How can I read the data from the .continuous file into Python code?</p>
| <python><python-3.x> | 2023-08-08 06:14:42 | 1 | 636 | donut |
76,856,758 | 2,840,134 | contour clabel custom text | <p>I'm plotting hyperbolas in their general form</p>
<pre class="lang-py prettyprint-override"><code>cntr = ax.contour(x, y, (A * x ** 2 + B * x * y + C * y ** 2 + D * x + E * y - 1), [0])
</code></pre>
<p>and I'd like to draw their names on the contour, for example, from <code>0</code> to <code>N-1</code>. How to do this?</p>
<p>There is a function</p>
<pre><code>ax.clabel(cntr, fmt="%2.1f", use_clabeltext=True)
</code></pre>
<p>but it does not allow passing custom text.</p>
<p>Below is my drawing of what I'd like to achieve. The position of custom labels can be any although ideally, I'd like to be them as in the <code>clabel</code> function.</p>
<p><a href="https://i.sstatic.net/rvcLX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rvcLX.png" alt="enter image description here" /></a></p>
<hr />
<p>Update 1.</p>
<p>A hacky way would be to shift each contour level by its index and enumerate them:</p>
<pre class="lang-py prettyprint-override"><code>cntr = ax.contour(x, y, (A * x ** 2 + B * x * y + C * y ** 2 + D * x + E * y - 1 + idx), [idx])
</code></pre>
<p>This approach wouldn't work for custom text though.</p>
| <python><matplotlib><label><contour> | 2023-08-08 05:49:18 | 1 | 710 | dizcza |
76,856,699 | 4,352,728 | Python web app 504 gateway timeout on Nginx and Gunicorn | <p>I'm running a Python web application upon Gunicorn and Nginx, on a Ubuntu server at Digital Ocean Droplet. The application times-out if the load is over 60s and goes to 504 Gateway Timeout page. I've made several changes to my nginx.conf file, by increasing the proxy timeout duration and I've tried I've increasing the gunicorn timeout duration.</p>
<p>Nothing seems to resolve this issue; it always returns 504 after 60s.
Following are my Nginx and Gunicorn config samples. Any help is appreciated. Thanks!</p>
<p>nginx.conf</p>
<pre><code>user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 900s;
proxy_connect_timeout 900s;
proxy_read_timeout 900s;
proxy_send_timeout 900s;
send_timeout 900s;
types_hash_max_size 2048;
# server_tokens off;
...
...
}
</code></pre>
<p><strong>I made sure to include --timeout 900 in Gunicorn config as well.</strong></p>
| <python><ubuntu><nginx><gunicorn><digital-ocean> | 2023-08-08 05:35:47 | 0 | 2,118 | Ye Win |
76,856,611 | 5,987 | Why do char strings and byte strings iterate differently? | <p>I've noticed something odd about for loops iterating over Python strings. For char strings, you get strings with a single character each.</p>
<pre><code>>>> for c in 'Hello':
print(type(c), repr(c))
<class 'str'> 'H'
<class 'str'> 'e'
<class 'str'> 'l'
<class 'str'> 'l'
<class 'str'> 'o'
</code></pre>
<p>For byte strings, you get integers.</p>
<pre><code>>>> for c in b'Hello':
print(type(c), repr(c))
<class 'int'> 72
<class 'int'> 101
<class 'int'> 108
<class 'int'> 108
<class 'int'> 111
</code></pre>
<p>Why do I care? I'd like to write a function that takes either a file or a string as input. For a text file/character string, this is easy; you just use two loops. For a string input the second loop is redundant, but it works anyway.</p>
<pre><code>def process_chars(string_or_file):
for chunk in string_or_file:
for char in chunk:
# do something with char
</code></pre>
<p>You can't do the same trick with binary files and byte strings, because with a byte string the results from the first loop are not iterable.</p>
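One workaround sketch (mine, not from the post): index with slices instead of iterating directly, since slicing preserves the sequence type for both <code>str</code> and <code>bytes</code>:

```python
def iter_elems(data):
    # data[i:i+1] is 'H' for str input but b'H' for bytes input,
    # whereas direct iteration over bytes yields ints (72, 101, ...)
    for i in range(len(data)):
        yield data[i:i + 1]
```

With this, the inner loop of `process_chars` sees one-element sequences of the same type regardless of whether the chunk came from a text or binary source.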
| <python><string><for-loop> | 2023-08-08 05:10:22 | 0 | 309,773 | Mark Ransom |
76,856,535 | 19,106,705 | pytorch multi hot vectors | <p>I want to implement a multi-hot vector in PyTorch.</p>
<ol>
<li>create a zero tensor of size len(x) x (multi_hot_num * max_num)</li>
<li>Then, for each element in 'x', fill the corresponding range of indices from x[i] * multi_hot_num to (x[i]+1) * multi_hot_num in the tensor with 1.</li>
</ol>
<p>The following code succinctly demonstrates the desired behavior:</p>
<pre class="lang-py prettyprint-override"><code>max_num = 4
multi_hot_num = 3
x = torch.tensor([0, 2, 1])
</code></pre>
<p>expect output:</p>
<pre class="lang-py prettyprint-override"><code>tensor([[1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0 ,0, 0]])
</code></pre>
<p>So my question is, how to create an expected output using x, max_num, and multi_hot_num.</p>
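One candidate sketch (note that the expected output actually has <code>multi_hot_num * (max_num + 1)</code> = 15 columns, which is what this follows; <code>scatter_</code> does the per-row filling without a Python loop):

```python
import torch

max_num = 4
multi_hot_num = 3
x = torch.tensor([0, 2, 1])

out = torch.zeros(len(x), multi_hot_num * (max_num + 1), dtype=torch.long)
# row i gets ones at columns x[i]*multi_hot_num .. x[i]*multi_hot_num + multi_hot_num - 1
cols = x.unsqueeze(1) * multi_hot_num + torch.arange(multi_hot_num)
out.scatter_(1, cols, 1)
```

`cols` has shape `(len(x), multi_hot_num)`, so `scatter_` along dim 1 writes each row's contiguous block in one shot.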
| <python><pytorch><tensor> | 2023-08-08 04:46:34 | 1 | 870 | core_not_dumped |
76,856,502 | 12,224,591 | Fit Data to Reciprocal Function with Exponent | <p>I want to fit a set of XY data points to a reciprocal exponential function in Python 3.8. I'm attempting to use the <code>leastsq</code> function of the <code>scipy.optimize</code> library.</p>
<p>I wish to fit my data to a function akin to the following:</p>
<p><a href="https://i.sstatic.net/H3VUt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3VUt.png" alt="enter image description here" /></a></p>
<p>Where <code>d</code>, <code>s</code>, and <code>e</code> are all the function parameters I wish to fit to my data.</p>
<p>I have some example data which I'd like to fit the function above to, and the following testing script:</p>
<pre><code>import numpy as np
from scipy.optimize import leastsq
import matplotlib.pyplot as plt
def main():
# Declare example data
dataX = [0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 8.5, 9, 9.5, 10, 10.5, 11, 11.5, 12, 12.5, 13, 13.5, 14.5, 15, 15.5, 16, 16.5, 17]
dataY = [0.00000000e+00, 2.47076895e-03, 9.66638150e-03, 1.97670203e-02, 3.42218835e-02, 4.97943540e-02, 6.54004261e-02, 7.93484613e-02, 8.61083796e-02, 8.49273950e-02, 6.70327841e-02, 3.45946900e-02, -2.24559129e-02, -1.17023372e-01, -2.49244700e-01, -4.27837601e-01, -6.61529080e-01, -9.62240010e-01, -1.35215046e+00, -1.84177480e+00, -2.46428273e+00, -3.25198094e+00, -4.24238794e+00, -5.53021367e+00, -7.23278230e+00, -9.57603920e+00, -1.31432215e+01, -1.99465466e+01, -1.45862198e+01, -1.01563667e+01, -7.19977087e+00, -5.03349403e+00, -3.37202972e+00, -2.07966002e+00]
# Fit to reciprocal exponential function
guessDenom = 1
guessPow = 2
guessShift = -10
optimalFx = lambda args: -(args[0] / pow(dataX + args[2], args[1])) - dataY
finalDenom, finalPow, finalShift = leastsq(optimalFx, [guessDenom, guessPow, guessShift])[0]
dataYFitted = []
for x in dataX:
dataYFitted.append(-finalDenom / pow(x + finalShift, finalPow))
# Plot original & fitted data
plt.xlim([0, 17])
plt.ylim([-30, 30])
plt.plot(dataX, dataYFitted, label = "Fitted Function", color = "dodgerblue")
plt.plot(dataX, dataY, label = "Original Data", color = "gray", alpha = 0.5)
plt.legend()
plt.show()
plt.close()
plt.clf()
if (__name__ == "__main__"):
main()
</code></pre>
<p>The script above takes the example data, and attempts to fit the function above using a lambda expression. It also graphs the original data along the fitted function at the discrete X intervals. I've used this approach before for other functions and it worked as intended, however with this reciprocal exponential function it doesn't appear to work properly.</p>
<p>Firstly, upon running the script above, I get the following warnings:</p>
<pre><code>RuntimeWarning: divide by zero encountered in divide
optimalFx = lambda args: -(args[0] / pow(dataX + args[2], args[1])) - dataY
RuntimeWarning: invalid value encountered in power
optimalFx = lambda args: -(args[0] / pow(dataX + args[2], args[1])) - dataY
RuntimeWarning: Number of calls to function has reached maxfev = 800.
warnings.warn(errors[info][0], RuntimeWarning)
</code></pre>
<p>And the following final plot is generated:
<a href="https://i.sstatic.net/UPkU2l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UPkU2l.png" alt="enter image description here" /></a></p>
<p>Since the supposed fitted function is not even visible, something is clearly going wrong.</p>
<p>What I assume is happening is that at some point, while the library searches for a good fit, it encounters a division by zero and fails. This becomes evident if one simply plots the reciprocal exponential function:</p>
<p><a href="https://i.sstatic.net/uNfsol.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uNfsol.png" alt="enter image description here" /></a></p>
<p>At a certain point an indeterminate (<code>ind</code>) result must be encountered.</p>
<p>How does one fit the reciprocal exponential function to a data set with the <code>scipy.optimize</code> library in Python without running into this issue?
Is there perhaps a way to restrict the checking domains for each argument provided to the <code>leastsq</code> function?</p>
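As a pointer, <code>scipy.optimize.least_squares</code> (unlike <code>leastsq</code>) accepts a <code>bounds</code> argument, which is one way to keep <code>x + s</code> positive over the sampled range; a sketch on synthetic data (not the question's data set, and the bound values are my own assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y = -3.0 / (x + 1.5) ** 2.0          # synthetic data generated with d=3, e=2, s=1.5

def residuals(p):
    d, e, s = p
    return -d / np.power(x + s, e) - y

# the lower bound on s keeps x + s > 0 everywhere, so pow() never sees a zero base
fit = least_squares(residuals, x0=[1.0, 2.0, 1.0],
                    bounds=([0.0, 0.1, -x.min() + 1e-6], [np.inf, 10.0, np.inf]))
d, e, s = fit.x
```

With the box constraints in place, the trust-region solver never steps into the singular region that triggers the divide-by-zero warnings.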
| <python><scipy><data-fitting> | 2023-08-08 04:34:25 | 1 | 705 | Runsva |
76,856,461 | 14,471,688 | How to speed up multiple nested for loops in python? | <p>I wonder if there is another way to speed up my multiple nested <code>for</code> loops in a matrix function.</p>
<p>Here is my function:</p>
<pre><code>def matrix(Xbin, y):
labels = np.unique(y)
con_matrix = []
start = time.time()
for i in range(len(labels)):
for j in range(i + 1, len(labels)):
# Crossover
for u in Xbin[y == labels[i]]:
for v in Xbin[y == labels[j]]:
con_matrix.append(np.bitwise_xor(u, v))
end = time.time()
duration = end - start
print("total time for nested loop: ", duration)
constraint_matrix = np.array(con_matrix)
bin_attr_dim = [i for i in range(1, Xbin.shape[1] + 1)]
df = pd.DataFrame(constraint_matrix, columns=bin_attr_dim)
return df
</code></pre>
<p>Please note that <code>Xbin</code> is a <code>numpy.ndarray</code> and <code>y</code> denotes the group label (1, 2, 3). The picture below shows that kind of <code>ndarray</code> (columns a to h), Figure 1:</p>
<p><a href="https://i.sstatic.net/LeQaI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LeQaI.png" alt="enter image description here" /></a></p>
<p>My function matrix described above will generate the output as a <code>DataFrame</code> corresponding to the picture below (columns a to h). It is the combination between elements in different groups. Figure 2:</p>
<p><a href="https://i.sstatic.net/Gc7XX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gc7XX.png" alt="enter image description here" /></a></p>
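The two inner loops above (every <code>u</code> against every <code>v</code>) are exactly a broadcasted pairwise XOR, which NumPy can produce in one call; a sketch of mine on toy data:

```python
import numpy as np

def cross_xor(A, B):
    # pair every row of A with every row of B, in the same order as
    # "for u in A: for v in B:", then flatten back to 2-D
    return np.bitwise_xor(A[:, None, :], B[None, :, :]).reshape(-1, A.shape[1])

A = np.array([[True, False], [False, True]])
B = np.array([[True, True]])
```

In the original function this would replace the appends with one `cross_xor(Xbin[y == labels[i]], Xbin[y == labels[j]])` per class pair, at the cost of allocating the full pairwise block at once.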
<p>Here is my code to generate a dataset in binarize format as shown in Figure 1:</p>
<pre><code>def binarize_dataset(X, y):
    cutpoints = {}
    att = -1
    for row in X.T:
        att += 1
        labels = None  # Previous labels
        u = -9999  # Previous xi
        # Finding transitions
        for v in sorted(np.unique(row)):
            variation = v - u  # Current - Previous
            # Classes where current v appears
            indexes = np.where(row == v)[0]
            # current label
            __labels = set(y[indexes])
            # Main condition
            if labels is not None and variation > 0:
                # Testing for transition to find the essential cut-points
                if (len(labels) > 1 or len(__labels) > 1) or labels != __labels:
                    # cut-point id
                    cid = len(cutpoints)
                    cutpoints[cid] = (att, u + variation / 2.0)
            labels = __labels
            # previous equals current
            u = v
    new_dict = {}
    # Iterate over the values in the original dictionary
    for key, value in cutpoints.items():
        first_element = value[0]
        second_element = value[1]
        # Check if the first_element is already a key in new_dict
        if first_element in new_dict:
            new_dict[first_element].append(second_element)
        else:
            new_dict[first_element] = [second_element]
    # Generate combinations of the second elements within each group
    for key, value in new_dict.items():
        comb = combinations(value, 2)
        # Append the combinations to the value list
        for c in comb:
            new_dict[key].append(c)
    arrays = []
    for attr, cutpoints in new_dict.items():
        for cutpoint in cutpoints:
            row = X.T[attr]
            if isinstance(cutpoint, tuple):
                lowerbound = cutpoint[0] <= row.reshape(X.shape[0], 1)
                upperbound = row.reshape(X.shape[0], 1) < cutpoint[1]
                row = np.logical_and(lowerbound, upperbound)
                arrays.append(row)
            else:
                row = row.reshape(X.shape[0], 1) >= cutpoint
                arrays.append(row)
    Xbin = np.concatenate(arrays, axis=1)
    bin_attr_dim = [i for i in range(1, Xbin.shape[1] + 1)]
    df = pd.DataFrame(Xbin, columns=bin_attr_dim)
    start = 0
    dict_parent_children = {}
    for key, list_value in new_dict.items():
        dict_parent_children[key] = list(df.columns[start: start + len(list_value)])
        start += len(list_value)
    return Xbin, df, dict_parent_children
</code></pre>
<p>When I tested with the iris dataset, which is a small dataset, it works really fast.</p>
<pre><code>X, y = datasets.load_iris(return_X_y=True)
bin_dataset, data, dict_parent_children = binarize_dataset(X, y)
con_matrix = matrix(bin_dataset, y)
</code></pre>
<p>When I tested with a bigger dataset like breast cancer, it started taking longer and longer.</p>
<pre><code>X, y = datasets.load_breast_cancer(return_X_y=True)
bin_dataset, data, dict_parent_children = binarize_dataset(X, y)
con_matrix = matrix(bin_dataset, y)
</code></pre>
<p>Imagine testing with a dataset even bigger than breast cancer. How can I speed up my <code>matrix</code> function as much as possible in this case, or is there a faster way to rewrite it?</p>
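<p>As a sketch of one possible speed-up (my assumption about the intent, not the asker's code): the two inner Python loops can be replaced with a single broadcasted XOR per pair of groups, which preserves the row order the original loops produce:</p>

```python
import numpy as np

def matrix_fast(Xbin, y):
    # For each pair of groups, XOR every row of one group against every
    # row of the other via broadcasting instead of Python-level loops.
    labels = np.unique(y)
    blocks = []
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            a = Xbin[y == labels[i]]  # shape (n_i, m)
            b = Xbin[y == labels[j]]  # shape (n_j, m)
            # (n_i, 1, m) ^ (1, n_j, m) -> (n_i, n_j, m), then flatten pairs
            blocks.append(np.bitwise_xor(a[:, None, :], b[None, :, :])
                          .reshape(-1, Xbin.shape[1]))
    return np.vstack(blocks)
```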
| <python><numpy><for-loop><optimization><nested> | 2023-08-08 04:15:25 | 1 | 381 | Erwin |
76,856,428 | 2,566,565 | Python module resolution for Google App Engine vs Google Cloud NDB | <p>I'm trying to create a Python 3 application for Google App Engine (using <code>PyCharm</code>).</p>
<p>I think I'm running into a basic problem in which a module named "google" is defined in 2 places: in the Google App Engine directory created by <code>gcloud install</code>, and in my project's venv directory when I pip install add-ons such as <code>google-cloud-ndb</code> via the <code>requirements.txt</code> file.</p>
<p>When I select <code>PyCharm</code> Settings | Frameworks | App Engine and specify the directory into which <code>gcloud</code> installed <code>App Engine</code>, the editor can resolve things like <code>import google.appengine</code> but not <code>import google.cloud</code>. If I unselect the <code>App Engine</code> Framework, then the editor can find <code>import google.cloud</code>, but not <code>import google.appengine</code>.</p>
<p>All the code examples just list <code>import google.xyz</code>, so I imagine when I deploy, <code>Google</code> will resolve all this correctly. But it also means I probably shouldn't play around with naming or imports either.</p>
<p>This is so basic. How is this handled?</p>
| <python><google-app-engine><pycharm> | 2023-08-08 04:05:14 | 2 | 728 | Dev93 |
76,856,342 | 4,095,108 | SQLAlchemy - SQL statement keeps running after closing session | <p>I have a python script to truncate a table on SQL Server. Here is the code:</p>
<pre><code># 2. Create connection
from sqlalchemy.engine import URL, create_engine
from sqlalchemy.sql import text
from sqlalchemy.orm import sessionmaker

connection_url = URL.create(
    "mssql+pyodbc",
    # username="",
    # password="",
    host="SERVER NAME",
    # port=,
    database="DB NAME",
    query={
        "driver": "SQL Server Native Client 11.0",
        "TrustServerCertificate": "yes",
        # "authentication": "ActiveDirectoryIntegrated",
    },
)

# 4. Truncate table
engine = create_engine(connection_url)
Session = sessionmaker(bind=engine)
session = Session()
try:
    session.execute(text('''TRUNCATE TABLE [dbo].[MyTable]'''))
    session.commit()
except Exception as e:
    print("an error occured:", e)
finally:
    session.close()
</code></pre>
<p>The code seems to work, but when I run <code>sp_whoisactive</code> on the SQL database, I see that 3+ hours later the query is still running. I don't understand why it is still running even after I've closed the session.</p>
<p>Here is a screenshot from that result. I had two Truncate statements. :<br />
<a href="https://i.sstatic.net/J0Un6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J0Un6.png" alt="enter image description here" /></a></p>
| <python><sql-server><sqlalchemy> | 2023-08-08 03:35:25 | 0 | 1,685 | jmich738 |
76,856,308 | 11,846,953 | Python decorator to automatically call superclass method | <p>I have a few class hierarchies with a common method, where I want the subclasses to call the parent method before executing the subclassed method. But when writing the root classes, obviously there's no parent method to call.</p>
<p>It's not a big deal to add <code>super().f(...)</code> to the subclass methods, but it's annoying that I sometimes forget that a class is not a root class and miss that until I trace the calls to find where the call chain stops. Is there a decorator that will do that automatically?</p>
<p>I tried this:</p>
<pre><code>def extend_super(orig_method):
    def decorated(*args, **kwargs):
        if len(args) >= 1:
            selfarg = args[0]
            selfarg_qualname = selfarg.__class__.__qualname__
            expect_qualname = f"{selfarg_qualname}.{orig_method.__name__}"
            if expect_qualname == orig_method.__qualname__:
                try:
                    super_method = getattr(super(type(selfarg), selfarg), orig_method.__name__)
                    super_method.__func__(*args, **kwargs)
                except AttributeError:
                    pass
        return orig_method(*args, **kwargs)
    return decorated
</code></pre>
<p>It works - it will call the parent method, stop at the root, and will even tolerate decorating functions if you do that accidentally. But it seems awkward since it uses string operations to work. Is there an existing decorator that will do what I want, or is there a better way to make this one?</p>
<p>Edit:</p>
<p>Second edit to add that this doesn't work: explanation below.</p>
<p>I wanted to move the overhead of finding any parent class method out of the function call itself. I looked at <code>gc</code> to get a list of all references to the function passed to the decorator, but there are no classes in that list. This is because the decorator will always be called before the class has been created, so there is no class to find.</p>
<p>I found an example of a Python decorator which counts the number of times it's called. It does this by adding a field to the decorated function itself, and storing the count there (<a href="https://python-course.eu/advanced-python/decorators-decoration.php" rel="nofollow noreferrer">https://python-course.eu/advanced-python/decorators-decoration.php</a>).</p>
<p>So I figured the decorator could find the superclass method the first time it's called, and save it. That part would have to be done once whatever method was used, but all subsequent calls would just used the saved method, so it can now be used efficiently. This is what it looks like:</p>
<pre><code>def extend_super(orig_method):
    def decorated(*args, **kwargs):
        if not decorated.checked_super_method:
            decorated.checked_super_method = True
            if len(args) >= 1:
                selfarg = args[0]
                selfarg_qualname = selfarg.__class__.__qualname__
                expect_qualname = f"{selfarg_qualname}.{orig_method.__name__}"
                if expect_qualname == orig_method.__qualname__:
                    super_class = super(type(selfarg), selfarg)
                    try:
                        decorated.super_method = getattr(super_class, orig_method.__name__)
                    except AttributeError:
                        pass
        if decorated.super_method is not None:
            decorated.super_method.__func__(*args, **kwargs)
        return orig_method(*args, **kwargs)
    decorated.checked_super_method = False
    decorated.super_method = None
    return decorated
</code></pre>
<p>I wasn't sure about adding attributes to a function, but it looks like that ability was added intentionally. Caching the result of an operation is an example use for it, which is what I'm doing here, so I'm satisfied with this.</p>
<p>Edit:</p>
<p>This doesn't work above the very lowest level, because the <code>self</code> arg will always be of the class of the first call. <code>super()</code> will find the level above, and the code will call that, but that call will still have the original <code>self</code> arg, and <code>super()</code> will find the same level as before. This will repeat until the recursion limit is reached.</p>
<p>A real solution might be able to manually take the original <code>selfarg.__class__.__bases__</code> list and search it until the class of the current function call is found, or there are no more parent classes (<code>s.__class__.__bases__[0] == type</code>), but that is getting to be a lot more work than I want to do, to simplify something pretty minor that I thought would be easy.</p>
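<p>For what it's worth, here is a hedged sketch of the "search the bases manually" idea from the last paragraph, walking the MRO instead of recursing through <code>super()</code>. The unwrapping convention (via <code>functools.wraps</code> and <code>__wrapped__</code>) and the class names are my assumptions, not taken from the original code:</p>

```python
import functools

def extend_super(orig_method):
    """Sketch: call each base-class implementation of this method
    (base-most first) before running the decorated one, by walking
    the instance's MRO instead of recursing through super()."""
    @functools.wraps(orig_method)
    def decorated(self, *args, **kwargs):
        name = orig_method.__name__
        for cls in reversed(type(self).__mro__):
            func = cls.__dict__.get(name)
            if func is None:
                continue
            # functools.wraps exposes the undecorated function as
            # __wrapped__, so decorated bases are invoked directly
            # without re-walking the MRO.
            func = getattr(func, '__wrapped__', func)
            if func is orig_method:
                break  # reached our own definition; stop the chain here
            func(self, *args, **kwargs)
        return orig_method(self, *args, **kwargs)
    return decorated

class Root:
    @extend_super
    def setup(self, log):
        log.append('Root')

class Child(Root):
    @extend_super
    def setup(self, log):
        log.append('Child')
```

Calling <code>Child().setup(log)</code> appends <code>'Root'</code> and then <code>'Child'</code>, while <code>Root().setup(log)</code> appends only <code>'Root'</code>, so the chain stops correctly at the root.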
| <python><python-3.x><python-decorators> | 2023-08-08 03:24:03 | 0 | 1,139 | John Bayko |
76,856,144 | 14,860 | Cleaning up old directories left around by crashing PyInstaller one-file mode programs | <p>We have a Python application built with PyInstaller into a one-file executable (for Linux) and it therefore creates a <code>/tmp/_MEIxxxxxx</code> directory for unpacking all the files into, before running the Python portion.</p>
<p>Due to the use of certain third party shared libraries, this program occasionally dumps core, which means the directory where it was unpacked does not get deleted. We have a workaround in place that recovers by restarting the application but the <code>/tmp</code> filesystem grows in size until we run into <em>other</em> problems.</p>
<p>We <em>are</em> working on fixing the underlying issue but, in the meantime, we'd like to try to clean up the directories ourselves until that happens.</p>
<p>I know about the <code>--runtime-tmpdir</code> build-time option but this appears to only affect where the individual <code>_MEIxxxxxx</code> directories are created, not where the code is directly unpacked to.</p>
<p>But even if we could cause the executable to go to one specific place (rather than a random directory under that one specific place), we have the added issue that we actually run several copies of this application on each box and each needs its <em>own</em> directory.</p>
<p>So, ideally there would be an option you could run during invocation (not at build-time) of the executable which would tell PyInstaller where all the files should go, something like:</p>
<pre class="lang-bash prettyprint-override"><code>my_prog.exe --unpack-to-dir /tmp/MYPROG-1 ... python arguments for instance 1
my_prog.exe --unpack-to-dir /tmp/MYPROG-2 ... python arguments for instance 2
</code></pre>
<p>That would run instance one by unpacking to <code>/tmp/MYPROG-1</code> and kicking up the Python code. Ditto for instance two but in the other directory.</p>
<p>But, since I can't see any way to do that, I'm looking for other solutions which allow me to keep the directories as clean as possible but still keep instances away from each other.</p>
<p>One option I have thought of is to have a cleanup task run periodically which can recognise relevant directories for <code>my_prog</code> (there may be <em>other</em> PyInstaller things running) and, if no active process is using it, clean the directory. It would just have to run often enough to quickly delete directories that were "orphaned" and avoid conflicts in the time between "starting to unpack" and "starting to run".</p>
<p>Recognition could be based on certain files within the directory. Active process detection could use <code>procfs</code> or the use of a watchdog file that the running Python code updated every minute (for example).
This seems ... sub-optimal ... so I'm hoping there's a better solution.</p>
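<p>A sketch of that cleanup task (hedged: the <code>/tmp/_MEI*</code> pattern and the procfs checks are assumptions about a typical Linux PyInstaller setup, not tested against this deployment):</p>

```python
import os, glob, shutil

def in_use(path):
    """Return True if any running process appears to use `path`
    (its cwd/exe symlink, an open fd, or a mapped file under it)."""
    path = os.path.abspath(path)
    for pid in filter(str.isdigit, os.listdir('/proc')):
        proc = os.path.join('/proc', pid)
        try:
            for link in ('cwd', 'exe'):
                if os.readlink(os.path.join(proc, link)).startswith(path):
                    return True
            for fd in os.listdir(os.path.join(proc, 'fd')):
                if os.readlink(os.path.join(proc, 'fd', fd)).startswith(path):
                    return True
            # Shared libraries unpacked by PyInstaller show up in maps.
            with open(os.path.join(proc, 'maps')) as maps:
                if path in maps.read():
                    return True
        except (OSError, IOError):
            continue  # process exited, or we lack permission
    return False

# Reap orphaned unpack directories no process is using any more.
for d in glob.glob('/tmp/_MEI*'):
    if not in_use(d):
        shutil.rmtree(d, ignore_errors=True)
```

Run from cron (or a systemd timer) often enough to reclaim orphans quickly; the process-liveness check is what guards the window between "starting to unpack" and "starting to run".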
<p>The Python version in use is 2.7.12 (it's a legacy product, so very little chance of upgrading unfortunately) and PyInstaller is 3.6.</p>
| <python><linux><pyinstaller><temp> | 2023-08-08 02:28:36 | 1 | 887,953 | paxdiablo |
76,855,977 | 2,683,447 | Collision detection in pygame. Error: no attribute 'rect' | <p>New to pygame, I am writing a little code to try and detect collisions. Here is a small test file I wrote to demonstrate the error; it instantiates two sprites and tries to detect a collision between them.</p>
<pre><code>import pygame

class Box(pygame.sprite.Sprite):
    def __init__(self):
        super().__init__()

class Circle(pygame.sprite.Sprite):
    def __init__(self):
        super().__init__()

box = Box()
circle = Circle()

group = pygame.sprite.Group()
group.add(circle)

pygame.sprite.spritecollideany(box, group)
</code></pre>
<p>However this returns an error:</p>
<pre><code>default_sprite_collide_func = sprite.rect.colliderect
^^^^^^^^^^^
AttributeError: 'Box' object has no attribute 'rect'
</code></pre>
<p>I'm not really sure what I should be doing to remedy the situation. How can I detect collision between two sprites in pygame?</p>
<p>NOTE: edited to include proposed solution from an answer. I'm not making a new question since I'm still getting a similar error.</p>
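<p>A hedged sketch of what the fix presumably looks like: <code>spritecollideany</code> compares sprites via their <code>.rect</code> attributes, so each <code>Sprite</code> subclass must assign one (and normally an <code>.image</code> too). The sizes and positions below are arbitrary placeholders:</p>

```python
import pygame

class Box(pygame.sprite.Sprite):
    def __init__(self):
        super().__init__()
        # Collision functions in pygame.sprite require self.rect.
        self.image = pygame.Surface((20, 20))
        self.rect = self.image.get_rect(topleft=(0, 0))

class Circle(pygame.sprite.Sprite):
    def __init__(self):
        super().__init__()
        self.image = pygame.Surface((20, 20))
        self.rect = self.image.get_rect(topleft=(10, 10))

box = Box()
circle = Circle()
group = pygame.sprite.Group(circle)
hit = pygame.sprite.spritecollideany(box, group)  # overlapping rects
```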
| <python><pygame><sprite><collision-detection> | 2023-08-08 01:15:56 | 1 | 4,187 | Math chiller |
76,855,925 | 5,563,324 | Split Large File Into Blocks Based on Regex Criteria | <p>I have a large file filled with content in the following pattern, and I would like to split it into email blocks. A block runs from one <code>Subject: </code> line up to the next <code>Subject: </code> line that is immediately followed by a <code>From:</code> line.</p>
<pre><code>Subject: Hello
From: John Doe
Date: Fri, 12 Feb 2010 09:13:51 +0200
Lorem ipsum...
Subject: How are you.
I am fine
Subject: Howdy
From: Jane Doe
Date: Fri, 12 Feb 2010 09:58:14 +0200
Lorem ipsum...
Subject: Re: Howdy
From: Eminem
</code></pre>
<p>In the example above, first email block would be:</p>
<pre><code>Subject: Hello
From: John Doe
Date: Fri, 12 Feb 2010 09:13:51 +0200
Lorem ipsum...
Subject: How are you.
I am fine
</code></pre>
<p>Second email block:</p>
<pre><code>Subject: Howdy
From: Jane Doe
Date: Fri, 12 Feb 2010 09:58:14 +0200
Lorem ipsum...
</code></pre>
<p>I have tried the following method but it doesn't work for all the cases.</p>
<p><code>email_blocks = re.split(r'\n(?=Subject:)', email_data)</code></p>
<p>It incorrectly splits the first block into two separate blocks because it only looks for the keyword <code>Subject:</code>. What I need is a way to split from <code>Subject:</code> to second <code>Subject:</code> only if followed by <code>From:</code>.</p>
<p>I have also tried the following but it didn't create an array of blocks. It only returned the last block:</p>
<p><code>email_blocks = re.findall(r'Subject:.*?(?=Subject:|\nFrom:|$)', email_data, re.DOTALL)</code></p>
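<p>One possible sketch (my reading of the requirement): split before a <code>Subject:</code> line only when the line after it begins with <code>From:</code>, expressed as a lookahead:</p>

```python
import re

email_data = """Subject: Hello
From: John Doe
Lorem ipsum...
Subject: How are you.
I am fine
Subject: Howdy
From: Jane Doe
Lorem ipsum...
Subject: Re: Howdy
From: Eminem"""

# Split at a newline only if "Subject: ...\nFrom:" follows it.
email_blocks = re.split(r'\n(?=Subject:[^\n]*\nFrom:)', email_data)
```

With the sample above this yields three blocks, and the embedded <code>Subject: How are you.</code> (not followed by <code>From:</code>) stays inside the first block.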
| <python><regex><file><python-re> | 2023-08-08 00:57:28 | 1 | 325 | misaligar |
76,855,756 | 11,922,765 | Raspberry PI: Counting the rising and falling edges at a frequency? | <p>I am reading a 1/0 pulse on GPIO physical pin 22 and I am trying to count the number of pulses occurring at a frequency of 100 Hz. I have two questions:</p>
<ol>
<li><p>I am reading and detecting the edges successfully. But failing to figure out the rest (counting the number of pulses).</p>
</li>
<li><p>Also, how do I print 1s continuously to the shell until 0 is detected and print 0s until 1 is detected? Presently, it prints 1/0 only when the edge is detected. Why? I want to see the square wave output rather than the triangle output (below screenshot) in the Thonny plotter.</p>
</li>
</ol>
<p><a href="https://i.sstatic.net/Sunr8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Sunr8.png" alt="enter image description here" /></a></p>
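<p>For question 1, a hardware-free sketch of the counting logic (the <code>samples</code> list stands in for repeated <code>GPIO.input()</code> reads taken at the 100 Hz sampling rate; names are my assumptions):</p>

```python
def count_edges(samples):
    """Count rising and falling edges in a sequence of 0/1 samples
    by comparing each sample with the previous one."""
    rising = falling = 0
    prev = samples[0]
    for s in samples[1:]:
        if prev == 0 and s == 1:
            rising += 1
        elif prev == 1 and s == 0:
            falling += 1
        prev = s
    return rising, falling
```

For question 2: to see a square wave in the Thonny plotter, print the current pin level on every iteration of a fixed-rate loop, not only inside the edge-detection callback; printing only on edges is why the plot degenerates into a triangle.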
| <python><raspberry-pi><raspberry-pi3><gpio><edge-detection> | 2023-08-07 23:52:32 | 0 | 4,702 | Mainland |
76,855,616 | 5,090,107 | How can I get hover info in VS Code for Google/Sphinx-style Python class attributes documentation? (like "#: text") | <p>I'm trying to document instance variables in my Python classes so they show up in VS Code when I hover over them. I've found that this works:</p>
<pre><code>class TsCity:
    def __init__(self) -> None:
        self.name: str = ""
        """The city name."""
</code></pre>
<p>But this is pretty ugly. I would ideally like to use the Google-style docstring instead:</p>
<pre><code>self.name: str = "" #: Doc comment *inline* with attribute
</code></pre>
<p>But this doesn't show up in VS Code properly. Is there a way to make this type of docstring work in VS Code?</p>
| <python><visual-studio-code><docstring> | 2023-08-07 23:05:24 | 1 | 606 | needarubberduck |
76,855,613 | 3,457,513 | Use generics to indicate return type of child method | <p>I have a mixin that is used by several child classes and would like to make sure PyCharm and <code>mypy</code> can infer that the return type of the <code>add_value</code> method on a child object is the type of the child class, not the type of the mixin. Any ideas how to do this with Generics? Currently PyCharm indicates that the return type of calling <code>add_value</code> from an instance of <code>MyMixinUser</code> is <code>MyMixin</code>, not <code>MyMixinUser</code></p>
<pre><code>T = TypeVar('T', bounds='MyMixin')

class MyMixin(Generic[T]):
    value: int

    def add_value(self: T, value) -> T:
        return type(self)(self.value + value)

class MyMixinUser(
    MyMixin,
    object
):
    def __init__(self, value: int):
        self.value = value
</code></pre>
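<p>A sketch of the usual pattern that makes both checkers infer the subclass: bind the <code>TypeVar</code> only on the method's <code>self</code> parameter (note the keyword is <code>bound</code>, not <code>bounds</code>), with no need for the mixin itself to be <code>Generic</code>:</p>

```python
from typing import TypeVar

T = TypeVar('T', bound='MyMixin')

class MyMixin:
    value: int

    def add_value(self: T, value: int) -> T:
        # type(self) is the runtime subclass, so the return type is T.
        return type(self)(self.value + value)

class MyMixinUser(MyMixin):
    def __init__(self, value: int):
        self.value = value

user = MyMixinUser(1).add_value(2)  # inferred as MyMixinUser
```

On Python 3.11+, <code>from typing import Self</code> and annotating the return type as <code>Self</code> achieves the same without a <code>TypeVar</code>.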
| <python><generics> | 2023-08-07 23:04:57 | 0 | 1,045 | vahndi |
76,855,609 | 2,403,531 | Automatically send all elements in list inside of a for loop list comprehension into a function | <p>I access a function via this list comprehension and have made it work by explicitly creating a variable for each element of the <code>list_of_lists</code>. I want a better way to access the elements in the function in a list comprehension. Example:</p>
<pre><code>list_of_lists = [[0, 1, 2, 3], [0, 1, 2, 3], ...]
[function(i, j, k, l) for i, j, k, l in list_of_lists]
</code></pre>
<p>It's a very annoying syntax as I need to update <code>(i, j, k, l) for i, j, k, l</code> if the number of elements in a sub-list of <code>list_of_lists</code> changes.</p>
<p>E.g., a change to <code>[[0, 1, 2, 3, 4, 5, 6], ...]</code> needs the syntax to be <code>(i, j, k, l, m, n) for i, j, k, l, m, n</code> and I need to be sure I do not miscount. This gets worse for more elements per sub-list and if the function changes during coding.</p>
<p>Is there a way to say something like:</p>
<pre><code>[function(*) for * in list_of_lists]
</code></pre>
<p>So my woes are ameliorated?</p>
<p>I tried searching for something like this but it's clear I don't know the right words to be able to search this.</p>
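<p>For what it's worth, a sketch of the usual idioms (the <code>function</code> below is a placeholder): unpack each sub-list with <code>*</code> at the call site, or use <code>itertools.starmap</code>; neither needs updating when the sub-list length changes:</p>

```python
from itertools import starmap

def function(*args):  # placeholder for the real function
    return sum(args)

list_of_lists = [[0, 1, 2, 3], [4, 5, 6, 7]]

# Star-unpacking works for any sub-list length:
results = [function(*sub) for sub in list_of_lists]

# Equivalent, without an explicit loop variable:
results2 = list(starmap(function, list_of_lists))
```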
| <python><list><for-loop><list-comprehension> | 2023-08-07 23:03:49 | 2 | 730 | user2403531 |
76,855,605 | 3,718,501 | Pandas DataFrame: Split and shift specific rows | <p>The columns in a few rows are concatenated together with a delimiter, causing the data to shift left. How can I split and shift the values at the same time? Here is an example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>date</th>
<th>account</th>
<th>site</th>
<th>balance</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>12/21/2022</td>
<td>JE</td>
<td>1189</td>
<td>40</td>
</tr>
<tr>
<td>2</td>
<td>12/21/2022</td>
<td>PR</td>
<td>1120</td>
<td>60</td>
</tr>
<tr>
<td>3</td>
<td>12/21/2022</td>
<td>JE</td>
<td>1130</td>
<td>90</td>
</tr>
<tr>
<td>4</td>
<td>12/31/2022\nJE\n1131</td>
<td>60</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>In line 4, the values for date, account, and site are concatenated by the newline character (<code>\n</code>).</p>
<p>How do I split and shift the values to the right?</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>date</th>
<th>account</th>
<th>site</th>
<th>balance</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>12/21/2022</td>
<td>JE</td>
<td>1189</td>
<td>40</td>
</tr>
<tr>
<td>2</td>
<td>12/21/2022</td>
<td>PR</td>
<td>1120</td>
<td>60</td>
</tr>
<tr>
<td>3</td>
<td>12/21/2022</td>
<td>JE</td>
<td>1130</td>
<td>90</td>
</tr>
<tr>
<td>4</td>
<td>12/31/2022</td>
<td>JE</td>
<td>1131</td>
<td>60</td>
</tr>
</tbody>
</table>
</div>
<p>code:</p>
<pre><code>import pandas

lst = [[1, 45281, 'JE', 1189, 40], [2, 45281, 'PR', 1120, 60],
       [3, 45281, 'JE', 1130, 90], [4, '12/31/2023\nJE\n1131', '60']]
columns = ['id', 'date', 'account', 'site', 'balance']
new = pandas.DataFrame(data=lst, columns=columns)
</code></pre>
<p>I tried using <code>new['date'].str.split('\n',expand=True)</code> but it did not work</p>
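<p>A hedged sketch of one way to do it (it assumes the concatenated cell always holds exactly date, account, and site joined by <code>\n</code>, and that the shifted balance landed in the <code>account</code> column):</p>

```python
import pandas as pd

lst = [[1, '12/21/2022', 'JE', '1189', '40'],
       [2, '12/21/2022', 'PR', '1120', '60'],
       [3, '12/21/2022', 'JE', '1130', '90'],
       [4, '12/31/2022\nJE\n1131', '60', None, None]]
df = pd.DataFrame(lst, columns=['id', 'date', 'account', 'site', 'balance'])

# Rows whose date cell still holds the three concatenated fields:
mask = df['date'].str.contains('\n')
parts = df.loc[mask, 'date'].str.split('\n', expand=True)

df.loc[mask, 'balance'] = df.loc[mask, 'account']   # shift balance right first
df.loc[mask, ['date', 'account', 'site']] = parts.values
```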
| <python><python-3.x><pandas><dataframe> | 2023-08-07 23:03:06 | 1 | 405 | Learner27 |
76,855,256 | 160,808 | Segmentation fault when freeing up memory assigned to pointer | <p>I am working with the Google Assistant Python library. I believe I have found a bug in the library, but the library is now deprecated. I don't really have a choice but to use it, because it supports the hotword (i.e. the "hey Google" keyword).</p>
<p>AFAIK, when they deprecated the API, they didn't provide an alternative implementation that uses the service and provides hotword support.</p>
<p>I think the problem is here:</p>
<p><code>self._lib.assistant_free(self._inst)</code></p>
<p>but I don't understand why it crashes, because it has clearly been initialised within the constructor. Also, I am using the instance of the class it's holding throughout the main program.</p>
<p>I can get the segmentation fault to occur when I try adding custom voice commands and when I quit the program, which makes a lot of sense since the exit function is there for cleanup when the thread is closed.</p>
<p>Stacktrace:</p>
<pre><code>Thread 0xb6f8fb40 (most recent call first):
  File "/home/steven/env/lib/python3.7/site-packages/google/assistant/library/assistant.py", line 119 in __exit__
  File "/home/steven/GassistPi/src/main.py", line 1105 in main
  File "/home/steven/GassistPi/src/main.py", line 1110 in <module>
Segmentation fault
</code></pre>
<p>code:</p>
<pre><code>def __init__(self, credentials, device_model_id):
    warnings.warn('Google Assistant Library for Python is deprecated', DeprecationWarning)
    if not device_model_id:
        raise ValueError("device_model_id must be a non-empty string")
    self._event_queue = IterableEventQueue()
    self._load_lib()
    self._credentials_refresher = None
    self._shutdown = False
    self._event_callback = EVENT_CALLBACK(self)
    self._inst = c_void_p(
        self._lib.assistant_new(
            self._event_callback,
            device_model_id.encode('ASCII'), None))
    print(self._inst)
    self._credentials_refresher = CredentialsRefresher(
        credentials, self._set_credentials)
    self._credentials_refresher.start()

def __exit__(self, exception_type, exception_value, traceback):
    """Frees allocated memory belonging to the Assistant."""
    if self._credentials_refresher:
        self._credentials_refresher.stop()
        self._credentials_refresher = None
    self._shutdown = True
    self._lib.assistant_free(self._inst)
</code></pre>
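<p>This is not a diagnosis of the library's actual bug (its native side is closed source), but one classic cause of this crash pattern with <code>ctypes</code> wrappers is worth illustrating: if a C function returns a pointer and <code>restype</code> is left at its default of <code>c_int</code>, the value is truncated on 64-bit platforms and freeing it later can segfault. Here <code>libc</code> stands in purely as an example library:</p>

```python
import ctypes

libc = ctypes.CDLL(None)  # the running process's C library (Linux)

# Declaring the types up front avoids pointer truncation on 64-bit:
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

p = libc.malloc(64)
libc.free(p)  # safe: the full 64-bit pointer value round-trips intact
```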
| <python><google-assist-api> | 2023-08-07 21:25:07 | 1 | 2,311 | Ageis |
76,855,141 | 2,131,106 | Changing one list in an object altering lists in other objects | <p>This might be some Python language thing, but I am not able to wrap my head around it.</p>
<p>Putting values into the children of one TreeNode object makes the children of other objects point to the children list of the original object.</p>
<pre><code>def createTree(parent, value):
    nmap = {}
    for i, v in enumerate(value):
        nmap[i] = TreeNode(v)
    for i, v in enumerate(parent):
        if v is not -1:
            print("adding child for node: " + str(v))
            nmap[v].children.append(nmap[i])
        else:
            print("Not adding")

class TreeNode:
    val = None
    children = []

    def __init__(self, v):
        self.val = v

createTree([-1,0,0,1,2,2,2],[0,1,2,3,4,5,6])
</code></pre>
<p><a href="https://i.sstatic.net/tE3xO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tE3xO.png" alt="enter image description here" /></a></p>
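<p>The usual explanation for this behaviour, sketched: <code>children = []</code> at class level creates one list object shared by every instance, so appending through any node mutates the list all nodes see. Creating the list inside <code>__init__</code> gives each node its own:</p>

```python
class TreeNode:
    def __init__(self, v):
        self.val = v
        self.children = []  # per-instance list, not a shared class attribute

a = TreeNode(1)
b = TreeNode(2)
a.children.append(TreeNode(3))
# b.children stays empty; with the class-level list, both would share it
```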
| <python><python-3.x> | 2023-08-07 20:59:40 | 0 | 566 | Sensiblewings47 |
76,854,874 | 5,835,423 | How to get SYSTEM_USER of SQL Server in Python | <p>I am using an Azure Function to create a SQL query and insert data into the database. One of the parameters I need to give for the column <code>Added_by</code> should be <code>system_user</code>.</p>
<p>If I run <code>Select SYSTEM_USER;</code> I get username in SQL Server. But if I run the same command using Python in Azure function I get None.</p>
<p>How do I get <code>SYSTEM_USER</code>? And is <code>sql_user_name</code> the same as <code>SYSTEM_USER</code>?</p>
<p>This is part of the code for reference:</p>
<pre><code>sql_fetch_system_user = "SELECT SYSTEM_USER;"
system_user = exec_sql_query_no_params(sql_fetch_system_user, sql_server, database, sql_user_name, sql_password, sql_driver)
print("sql_user_name", system_user)

def exec_sql_query_no_params(query, sql_server, database, sql_user_name, sql_password, sql_driver):
    try:
        with pyodbc.connect('DRIVER='+sql_driver+';SERVER=tcp:'+sql_server+';PORT=1433;DATABASE='+database+';UID='+sql_user_name+';PWD=' + sql_password) as conn:
            with conn.cursor() as cursor:
                conn.autocommit = True
                cursor.execute(query)
    except pyodbc.Error as e:
        logging.error(f"SQL query failed: {e}")
        raise
</code></pre>
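<p>Note, separately from the <code>SYSTEM_USER</code> question itself, that the helper above never fetches or returns the result set, which would explain the <code>None</code>. A generic sketch of the fetch-and-return pattern (sqlite3 stands in for pyodbc here; both follow the same DB-API flow where <code>execute()</code> alone discards the rows):</p>

```python
import sqlite3

def exec_scalar(conn, query):
    # execute() runs the query; the scalar must be fetched and returned.
    cur = conn.cursor()
    cur.execute(query)
    row = cur.fetchone()
    return row[0] if row else None

conn = sqlite3.connect(':memory:')
value = exec_scalar(conn, "SELECT 'some_login';")
```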
| <python><sql><sql-server><azure><azure-functions> | 2023-08-07 20:05:40 | 1 | 3,003 | heman123 |
76,854,861 | 1,493,116 | Microsoft teams echo bot sample keeps failing with "access_token" error | <p>I am having very consistent issues relating to the access_token with my Teams bot. I am using the "02.echo-bot" sample written in Python.</p>
<p>When I go to Azure portal -> app registrations -> my app -> certificates & services, my "value" in the only client secret I have is consistent with <code>APP_PASSWORD</code> in my <code>config.py</code> file. My <code>APP_ID</code> value is also consistent as what I have on the Overview page for Application (client) ID.</p>
<p>I've got ngrok set up correctly to forward traffic to the server that is spun up by <code>app.py</code> and I can see traffic flowing when I send messages from the "Test" tab of the bot framework web page.</p>
<p>However, every time I send a message, I see the following error:</p>
<pre><code>root@test:~/teams-bot-test/botbuilder-samples/samples/python/02.echo-bot# python3 app.py
/usr/local/lib/python3.10/dist-packages/botbuilder/schema/__init__.py:80: UserWarning: The Bot Framework Python SDK is being retired with final long-term support ending in November 2023, after which this repository will be archived. There will be no further feature development, with only critical security and bug fixes within this repository being undertaken. Existing bots built with this SDK will continue to function. For all new bot development we recommend that you adopt Power Virtual Agents.
warn(
======== Running on http://localhost:3978 ========
(Press CTRL+C to quit)
[on_turn_error] unhandled error: 'access_token'
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/botbuilder/core/bot_adapter.py", line 128, in run_pipeline
    return await self._middleware.receive_activity_with_status(
  File "/usr/local/lib/python3.10/dist-packages/botbuilder/core/middleware_set.py", line 69, in receive_activity_with_status
    return await self.receive_activity_internal(context, callback)
  File "/usr/local/lib/python3.10/dist-packages/botbuilder/core/middleware_set.py", line 79, in receive_activity_internal
    return await callback(context)
  File "/usr/local/lib/python3.10/dist-packages/botbuilder/core/activity_handler.py", line 70, in on_turn
    await self.on_message_activity(turn_context)
  File "/root/teams-bot-test/botbuilder-samples/samples/python/02.echo-bot/bots/echo_bot.py", line 17, in on_message_activity
    return await turn_context.send_activity(
  File "/usr/local/lib/python3.10/dist-packages/botbuilder/core/turn_context.py", line 174, in send_activity
    result = await self.send_activities([activity_or_text])
  File "/usr/local/lib/python3.10/dist-packages/botbuilder/core/turn_context.py", line 226, in send_activities
    return await self._emit(self._on_send_activities, output, logic())
  File "/usr/local/lib/python3.10/dist-packages/botbuilder/core/turn_context.py", line 304, in _emit
    return await logic
  File "/usr/local/lib/python3.10/dist-packages/botbuilder/core/turn_context.py", line 221, in logic
    responses = await self.adapter.send_activities(self, output)
  File "/usr/local/lib/python3.10/dist-packages/botbuilder/core/bot_framework_adapter.py", line 735, in send_activities
    raise error
  File "/usr/local/lib/python3.10/dist-packages/botbuilder/core/bot_framework_adapter.py", line 720, in send_activities
    response = await client.conversations.reply_to_activity(
  File "/usr/local/lib/python3.10/dist-packages/botframework/connector/aio/operations_async/_conversations_operations_async.py", line 524, in reply_to_activity
    response = await self._client.async_send(
  File "/usr/local/lib/python3.10/dist-packages/msrest/async_client.py", line 115, in async_send
    pipeline_response = await self.config.pipeline.run(request, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/msrest/pipeline/async_abc.py", line 159, in run
    return await first_node.send(pipeline_request, **kwargs)  # type: ignore
  File "/usr/local/lib/python3.10/dist-packages/msrest/pipeline/async_abc.py", line 79, in send
    response = await self.next.send(request, **kwargs)  # type: ignore
  File "/usr/local/lib/python3.10/dist-packages/msrest/pipeline/async_requests.py", line 99, in send
    self._creds.signed_session(session)
  File "/usr/local/lib/python3.10/dist-packages/botframework/connector/auth/app_credentials.py", line 98, in signed_session
    auth_token = self.get_access_token()
  File "/usr/local/lib/python3.10/dist-packages/botframework/connector/auth/microsoft_app_credentials.py", line 65, in get_access_token
    return auth_token["access_token"]
KeyError: 'access_token'
</code></pre>
<p>This issue seems to be consistent with an incorrect app password, according to other articles that I've seen; however, I've confirmed and double checked this several times. I've even created a new client secret to get a new secret, and it still gives me the exact same error.</p>
<p>Setting this test up seems relatively easy, with minimal configuration changes. What else could be contributing to this?</p>
| <python><botframework> | 2023-08-07 20:02:52 | 2 | 6,032 | LewlSauce |
76,854,844 | 7,321,700 | Sum specific rows inside of a Dataframe | <p><strong>Scenario:</strong> I have a dataframe in which one of the rows is empty. Its value should be a sum of two of the other rows.</p>
<p><strong>Data Input:</strong></p>
<pre><code> ISIC2 2018 2019 2020 2021
0 A0 68 95 98 39
1 B0 95 19 5 98
2 B1
3 B2 58 86 10 90
9 C0 36 74 53 97
</code></pre>
<p><strong>Expected output:</strong></p>
<pre><code> ISIC2 2018 2019 2020 2021
0 A0 68 95 98 39
1 B0 95 19 5 98
2 B1 153 105 15 188
3 B2 58 86 10 90
9 C0 36 74 53 97
</code></pre>
<p>Here, where row ISIC2 == "B1", it should give the value of B0+B2 for each column.</p>
<p><strong>What I tried:</strong> I was trying to do this in a loop with fixed references for the rows, but that does not seem to be a very effective way to do this:</p>
<pre><code>year_list_2 = [2018, 2019, 2020, 2021]
for year_var in year_list_2:
    for index1 in step4_1.iterrows():
        if step4_1.at[index1, "ISIC2"] == "B1":
            step4_1.at[index1, year_var] = step4_1.at[index1 - 1, year_var] + step4_1.at[index1 + 1, year_var]
</code></pre>
<p><strong>Question:</strong> Is there a simpler way to do this?</p>
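<p>One loop-free possibility (a sketch, assuming the <code>B1</code> row is the only empty one and the year columns are known):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ISIC2": ["A0", "B0", "B1", "B2", "C0"],
    2018: [68, 95, None, 58, 36],
    2019: [95, 19, None, 86, 74],
    2020: [98, 5, None, 10, 53],
    2021: [39, 98, None, 90, 97],
})
year_cols = [2018, 2019, 2020, 2021]

# Write the column-wise sum of the B0 and B2 rows into the B1 row.
df.loc[df["ISIC2"] == "B1", year_cols] = (
    df.loc[df["ISIC2"].isin(["B0", "B2"]), year_cols].sum().values
)
print(df)
```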
| <python><pandas><dataframe> | 2023-08-07 19:59:11 | 2 | 1,711 | DGMS89 |
76,854,702 | 10,133,797 | Numpy allclose tolerances for single and half precision? | <p><code>atol=1e-8</code> isn't suitable for single or half precision, unsure of <code>rtol=1e-5</code> - they appear calibrated for <code>float64</code>.</p>
<p>What should be the standard for <code>float32</code> and <code>float16</code>?</p>
<p>A reference is preferred - I found <a href="https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/" rel="nofollow noreferrer">this</a> but there's no summary and I don't know how applicable it is to <code>np.allclose</code>. Simply reasoning also works.</p>
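<p>As far as I know there is no official NumPy recommendation; one sketch (the <code>eps_multiple</code> below is an illustrative assumption, not a standard) is to scale both tolerances by the machine epsilon of the dtype under test:</p>

```python
import numpy as np

def close(a, b, eps_multiple=1e3):
    """allclose with tolerances scaled to the dtype's machine epsilon.
    float64 eps ~ 2.2e-16, float32 eps ~ 1.2e-7, float16 eps ~ 9.8e-4."""
    eps = np.finfo(np.result_type(a, b)).eps
    return np.allclose(a, b, rtol=eps_multiple * eps, atol=eps_multiple * eps)

x = np.float32(1.0)
y = x + np.float32(10) * np.finfo(np.float32).eps  # 10 ulps away
print(close(x, y))                 # a 10-ulp difference passes at float32 scale
print(close(x, np.float32(1.1)))   # a genuinely different value does not
```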
| <python><numpy><precision> | 2023-08-07 19:35:19 | 0 | 19,954 | OverLordGoldDragon |
76,854,680 | 5,864,426 | RGB to TSL color space conversion issue in Python | <p>I've written code to convert an RGB image into the TSL (tint, saturation, lightness) color space using Python. I've also implemented the reverse function to check whether the input image can be regenerated, as a way of verifying my implementation.</p>
<p>I have doubts about my implementation, as I am not able to regenerate the input image.</p>
<p>I am not sure exactly where I am making the error. I've followed the Wikipedia formula for <a href="https://en.wikipedia.org/wiki/TSL_color_space#cite_ref-terrillon1_1-0" rel="nofollow noreferrer">TSL to RGB</a></p>
<p>For the RGB to TSL conversion, I've referred to the original paper <a href="https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.6037&rep=rep1&type=pdf" rel="nofollow noreferrer">RGB to TSL</a></p>
<p>Following the input from @statemachine, I have incorporated his edits.</p>
<pre><code>import cv2
import numpy as np

def rgb_tsl(image_path, gamma_factor):
# BGR order is read by default
object_data = cv2.imread(image_path)
# scaled data (convert 0-255 to 0-1 scale)
scaled_data = object_data / 255
# Power Law transformation for gamma correction
corrected_image_gamma = np.power(scaled_data, gamma_factor)
# Blue channel
base_blue_channel = corrected_image_gamma[:, :, 0]
base_green_channel = corrected_image_gamma[:, :, 1]
base_red_channel = corrected_image_gamma[:, :, 2]
# T, S, L value calculations
common_divisor = (base_red_channel + base_green_channel + base_blue_channel)
small_r = base_red_channel / common_divisor
small_g = base_green_channel / common_divisor
r_hyphen = small_r - (1/3)
g_hyphen = small_g - (1/3)
#print("Small g", small_g)
#print("G hyphen calculation", g_hyphen)
luma = (0.299 * base_red_channel) + (0.587 * base_green_channel) + (0.114 * base_blue_channel)
saturation = np.sqrt((9/5)*(np.power(r_hyphen, 2) + np.power(g_hyphen, 2)))
# Not sure if this is correct
tint_arr = np.zeros_like(g_hyphen)
for index, item in np.ndenumerate(g_hyphen):
if item == 0:
tint_arr[index] = 0
else:
# Fetch corresponding r_hyphen value
corresponding_r_hyphen = r_hyphen[index]
if item < 0:
arctan_value_less_zero = ((np.arctan(corresponding_r_hyphen / item)) / (2 * np.pi)) + (3/4)
#original_value = tint_arr[index]
tint_arr[index] = arctan_value_less_zero
#print(f"Original value and final value is {original_value}, {tint_arr[index]}")
else:
arctan_value_greater_zero = ((np.arctan(corresponding_r_hyphen / item)) / (2 * np.pi)) + (1 / 4)
# original_value = tint_arr[index]
tint_arr[index] = arctan_value_greater_zero
# print(f"Original value and final value is {original_value}, {tint_arr[index]}")
merged_image = cv2.merge([tint_arr, saturation, luma])
    merged_image = (255 * merged_image).astype(np.uint8)
    # Return the original BGR image as well, since the caller unpacks two values
    return object_data, merged_image
def tsl_rgb(image):
tint = image[:, :, 0] / 255.0
saturation = image[:, :, 1] / 255.0
luma = image[:, :, 2] / 255.0
x_val = np.power(np.tan((2 * np.pi) * (tint - (1/4))), 2)
r_hyphen_tsl = np.sqrt((5 * np.power(saturation, 2)) / 9 * ((1/x_val) + 1))
g_hyphen_tsl = np.sqrt((5 * np.power(saturation, 2)) / 9 * (x_val + 1))
r_tsl = r_hyphen_tsl + (1/3)
g_tsl = g_hyphen_tsl + (1/3)
k = luma / ((0.185 * r_tsl) + (0.473 * g_tsl) + 0.114)
final_r = k * r_tsl
final_g = k * g_tsl
final_b = k * (1-r_tsl-g_tsl)
# Multiplying by 255 to get the rescaled values (Since these values are in 0-1 range)
final_rgb_image = cv2.merge([final_r, final_g, final_b])
clippedImg = np.clip(final_rgb_image, 0, 1) # Clip all values to 0-1
final_rgb_image = (255 * clippedImg).astype(np.uint8)
return final_rgb_image
# Height is 450 and width is 600
#current_image_path = '/home/xyz/Data_Science/Skin Cancer/ISIC_0034316_dullrazor.jpg'
current_image_path = '/home/xyz/Data_Sciencet/Skin Cancer/ISIC_0034202_dullrazor.jpg'
# Standard Gamma values https://www.viewsonic.com/de/colorpro/articles/detail/accurate-gamma_128
original_img, converted_image_tsl = rgb_tsl(current_image_path, 1.5)
reverse_image = tsl_rgb(converted_image_tsl)
cv2.imshow("Original RGB image", original_img)
cv2.imshow("TSL image", converted_image_tsl)
cv2.imshow("Re-constructed RGB image", reverse_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>Here is how the output currently looks; maybe it helps.</p>
<p>Original RGB Image</p>
<p><a href="https://i.sstatic.net/b6tsy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b6tsy.png" alt="Original RGB Image" /></a></p>
<p>TSL from RGB</p>
<p><a href="https://i.sstatic.net/VGZPI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VGZPI.png" alt="TSL Image from RGB" /></a></p>
<p>Re-constructed RGB from TSL</p>
<p><a href="https://i.sstatic.net/vfd7f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vfd7f.png" alt="Reconstructed RGB from TSL" /></a></p>
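<p>Separately from the correctness question, the per-pixel <code>np.ndenumerate</code> loop can be vectorized. A sketch implementing the same piecewise formula with <code>np.where</code> (assuming <code>g' == 0</code> maps to tint 0, as in the code above):</p>

```python
import numpy as np

def tint_vectorized(r_hyphen, g_hyphen):
    # Same piecewise arctan formula as the loop, evaluated array-wide.
    with np.errstate(divide="ignore", invalid="ignore"):
        base = np.arctan(r_hyphen / g_hyphen) / (2 * np.pi)
    return np.where(g_hyphen > 0, base + 0.25,
                    np.where(g_hyphen < 0, base + 0.75, 0.0))
```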
| <python><numpy><opencv><image-processing><colors> | 2023-08-07 19:31:27 | 1 | 1,094 | Dhruv Marwha |
76,854,598 | 7,106,508 | In Python why is `KeyboardInterrupt` preventing the pickling of this object? | <p>I need a better method, or I need to learn how to use the method I'm using more correctly. Normally, when I press Ctrl-C my work gets pickled, but one time it did not. The subsequent time that I tried to open the pickle I got a <code>ran out of input</code> error. I need to know why this happened so that it does not happen again. As my code runs, every one hundred loops I save the pickle, and if I hit <code>KeyboardInterrupt</code>, my work is theoretically supposed to be pickled before the program stops. My hunch is that if I press Ctrl-C while <code>pickle.dump(obj, temp)</code> is executing, the new file starts overwriting the old one, and if the program is killed in the middle of the overwrite the file is left half-written. What I also do not understand is that after I hit <code>KeyboardInterrupt</code> the program should execute the line <code>print("help me")</code>, but it did not, at least on the sole occasion I tried.</p>
<pre><code>import pickle
import time
def save_pickle(obj, file):
temp = open(file, "wb")
##i have a feeling that if i hit keyboard interrupt
##while the below line is being executed that it won't
## save the pickle properly
pickle.dump(obj, temp)
temp.close()
class help_me():
def __init__(self):
pass
def func1(self):
try:
self.func2()
except KeyboardInterrupt:
pass
print('help me') # this line did not get activated
save_pickle(obj, file)
def func2(self):
#this function will take several days
for i in range(600_000):
time.sleep(1)
if i % 100 == 0:
save_pickle(obj, file)
</code></pre>
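<p>The hunch is right: <code>open(file, "wb")</code> truncates the old file immediately, so an interrupt in the middle of <code>pickle.dump</code> leaves a truncated pickle behind. A sketch of an atomic save (assuming POSIX-style <code>os.replace</code> semantics): dump into a temporary file, then rename it over the target, so the old pickle survives until the new one is complete.</p>

```python
import os
import pickle
import tempfile

def save_pickle(obj, path):
    # Dump into a temp file in the same directory, then atomically swap it
    # in; an interrupt mid-dump leaves the old pickle untouched.
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as tmp:
            pickle.dump(obj, tmp)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.replace(tmp_path, path)  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp_path)
        raise

target = os.path.join(tempfile.mkdtemp(), "work.pkl")
save_pickle({"progress": 100}, target)
with open(target, "rb") as f:
    print(pickle.load(f))  # {'progress': 100}
```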
| <python><pickle><keyboardinterrupt> | 2023-08-07 19:16:19 | 2 | 304 | bobsmith76 |
76,854,457 | 15,845,509 | cocoeval change the number of keypoints and self.kpt_oks_sigmas into 14 but receive error | <p>I am trying to build keypoint detection using a transformer and the cocoapi. For evaluation, I use cocoeval and change <code>self.kpt_oks_sigmas</code> from 17 values to 14:</p>
<pre><code>self.kpt_oks_sigmas = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62,.62,
1.07, 1.07, .87, .87, .89, .89])/10.0
</code></pre>
<p>into</p>
<pre><code>self.kpt_oks_sigmas = np.array([1.07, .87, .89, 1.07, .87, .89, 1., 1., .79, .72, .62, .79, .72, .62])/10.0
</code></pre>
<p>However, I received error message that says:</p>
<p><a href="https://i.sstatic.net/hddrK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hddrK.png" alt="error message" /></a></p>
<p>Does anyone know what I should do to fix this?
Thank you.</p>
| <python><evaluation><transformer-model> | 2023-08-07 18:51:56 | 1 | 369 | ryan chandra |
76,854,448 | 10,083,382 | Merge Pandas Column based on Conditions | <p>Suppose I have two pandas data frames: the first is a lookup table and the second is a data table that needs to be populated with an additional column <code>Category</code>, extracted from the lookup table using two conditions: the Region should match and the distance should be minimal. Both data frames can be generated with the code below.</p>
<pre><code>lookup_data = {'Category' : ['A1', 'A2', 'B1', 'C1', 'D1', 'D2'],
'Region':['A', 'A', 'B', 'C', 'D', 'D'],
'Distance':[109, 200, 300, 400, 500, 600]}
lookup_data_df = pd.DataFrame(lookup_data)
actual_data = {'Region':['A', 'A', 'B', 'C', 'D', 'D', 'E'],
'Distance':[95, 199, 10, 350, 550, 560, 200]}
actual_df = pd.DataFrame(actual_data)
</code></pre>
<p>I want a solution without using loops. The expected output data frame can be generated using code below.</p>
<pre><code>expected_data = {'Region':['A', 'A', 'B', 'C', 'D', 'D', 'E'],
'Category' : ['A1', 'A2', 'B1', 'C1', 'D1', 'D2', 'A2'],
'Distance':[95, 199, 10, 350, 550, 560, 200]}
expected_data_df = pd.DataFrame(expected_data)
</code></pre>
<p>Edit: In case of unseen region for example region <code>E</code> disregard the region and just choose <code>Category</code> with minimum distance that is <code>A2</code> in current scenario.</p>
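<p>One loop-free sketch uses <code>merge_asof</code> with <code>direction="nearest"</code> (caveat: the D row at distance 550 is equidistant from 500 and 600, so which of D1/D2 wins there depends on <code>merge_asof</code>'s tie-breaking):</p>

```python
import pandas as pd

lookup_data_df = pd.DataFrame({
    "Category": ["A1", "A2", "B1", "C1", "D1", "D2"],
    "Region":   ["A",  "A",  "B",  "C",  "D",  "D"],
    "Distance": [109, 200, 300, 400, 500, 600],
})
actual_df = pd.DataFrame({
    "Region":   ["A", "A", "B", "C", "D", "D", "E"],
    "Distance": [95, 199, 10, 350, 550, 560, 200],
})

# merge_asof needs both sides sorted on the key; by="Region" restricts
# matches to the same region, direction="nearest" picks the closest Distance.
left = actual_df.reset_index().sort_values("Distance")
right = lookup_data_df.sort_values("Distance")
out = pd.merge_asof(left, right, on="Distance", by="Region",
                    direction="nearest")

# Unseen regions (e.g. "E") get no match; fall back to the globally
# nearest Distance, ignoring Region.
fallback = pd.merge_asof(left.drop(columns="Region"),
                         right.drop(columns="Region"),
                         on="Distance", direction="nearest")
out["Category"] = out["Category"].fillna(fallback["Category"])
out = out.sort_values("index").drop(columns="index").reset_index(drop=True)
print(out)
```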
| <python><pandas><dataframe><join><merge> | 2023-08-07 18:50:12 | 2 | 394 | Lopez |
76,854,333 | 4,414,359 | Error: "botocore.exceptions.NoCredentialsError: Unable to locate credentials" when doing Docker build | <p>I'm trying to get the secret from AWS like so:</p>
<pre><code>import boto3
import os
mysql_secret = os.environ['MYSQL_SECRET']
def get_secret():
region_name = "us-west-2"
# Create a Secrets Manager client
session = boto3.session.Session()
client = session.client(
service_name='secretsmanager',
region_name=region_name
)
get_secret_value_response = client.get_secret_value(SecretId=mysql_secret)
# Decrypts secret using the associated KMS key.
secret = get_secret_value_response['SecretString']
return secret
secret = get_secret()
</code></pre>
<p>with Dockerfile</p>
<pre><code># Top level build args
ARG build_for=linux/arm64/v8
FROM --platform=$build_for python:3.11.4-bullseye as base
# Set docker basics
VOLUME /usr/app
ARG MYSQL_SECRET='mysql_secret'
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY='some_key'
ARG AWS_DEFAULT_REGION='us-west-2'
ARG AWS_SECURITY_TOKEN='some_token'
RUN apt-get update -y
RUN apt-get install libpq-dev -y
RUN apt-get install default-libmysqlclient-dev -y
RUN apt-get install pkg-config -y
RUN python -m pip install boto3
COPY ./test.py /usr/app/test.py
RUN python /usr/app/test.py
</code></pre>
<p>I looked around SO for a while and tried adding</p>
<pre><code>ENV AWS_CONFIG_FILE=/root/.aws/config
ENV AWS_SDK_LOAD_CONFIG=1
</code></pre>
<p>to the Dockerfile</p>
<p>I tried passing the credentials directly like</p>
<pre><code>docker build . -t test:0.1 \
--build-arg AWS_ACCESS_KEY_ID=${access_key_here} \
--build-arg AWS_SECRET_ACCESS_KEY=${secret_key_here} \
--build-arg AWS_DEFAULT_REGION=${us-west-2} \
--build-arg AWS_SECURITY_TOKEN=${token_here}
</code></pre>
<p>Nothing seems to be working.</p>
<p>UPDATE:
I added</p>
<pre><code>ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY='some_key'
ARG AWS_DEFAULT_REGION='us-west-2'
ARG AWS_SECURITY_TOKEN='some_token'
</code></pre>
<p>as recommended by Vasyl Herman and hard coded 3 out of 4 arguments, leaving out AWS_ACCESS_KEY_ID.
I then tried running
<code>docker build . -t test:0.1 --build-arg AWS_ACCESS_KEY_ID=${some_key}</code> but still getting the same error. Even though it works if access key is also hard coded in.</p>
<p>If I hard-code the access key in but leave out the secret key, I get a different error when running
<code>docker build . -t test:0.1 --build-arg AWS_SECRET_ACCESS_KEY=${some_key}</code>:
<code>zsh: bad substitution</code></p>
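<p>One thing worth verifying (an assumption, since the full build context isn't shown): a declared <code>ARG</code> that is given no value via <code>--build-arg</code> and has no default stays empty during build-time <code>RUN</code> steps, and boto3's environment provider looks for <code>AWS_SESSION_TOKEN</code> rather than <code>AWS_SECURITY_TOKEN</code> (the latter is a legacy name). A tiny diagnostic that could run in the build step before <code>test.py</code>:</p>

```python
import os

# Print which credential variables the build step actually sees; boto3's
# environment provider looks these up by exactly these names.
names = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY",
         "AWS_SESSION_TOKEN", "AWS_DEFAULT_REGION")
report = {name: ("set" if os.environ.get(name) else "MISSING") for name in names}
for name, status in report.items():
    print(name, status)
```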
| <python><docker><boto3><aws-secrets-manager> | 2023-08-07 18:33:01 | 1 | 1,727 | Raksha |
76,854,187 | 10,687,615 | Extract Text After a String | <p>I have a data table that look like this:</p>
<pre><code>ID RESULT
1 blah blah asc rdsd Critical Care Time : 3,342 job ded...
2 apple asc red Critical Care Time : 322 ED none...
3 computer Critical Care Time : 7,777 cul ninea....
</code></pre>
<p>I want to extract the time amount after "Critical Care Time :" to make a new column so the database reads:</p>
<pre><code>ID RESULT Minutes
1 blah blah asc rdsd Critical Care Time : 3,342 job ded... 3,342
2 apple asc red Critical Care Time : 322 ED none... 322
3 computer Critical Care Time : 7,777 cul ninea.... 7,777
</code></pre>
<p>I have code that looks like this, but it's still extracting everything after the minutes:</p>
<pre><code>DF['Minutes'] = DF.RESULT.str.extract(r'Critical Care Time : \s*([^\.]*)\s*\ ', expand=False)
</code></pre>
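<p>The trailing <code>\s*\ </code> in the pattern requires a space after the capture, and <code>[^\.]*</code> greedily grabs everything up to a period. Restricting the capture to digits and thousands separators avoids both problems; a sketch (assuming the amounts always look like <code>3,342</code>):</p>

```python
import pandas as pd

DF = pd.DataFrame({
    "ID": [1, 2, 3],
    "RESULT": [
        "blah blah asc rdsd Critical Care Time : 3,342 job ded",
        "apple asc red Critical Care Time : 322 ED none",
        "computer Critical Care Time : 7,777 cul ninea",
    ],
})

# Capture only digits and commas immediately after the label.
DF["Minutes"] = DF["RESULT"].str.extract(
    r"Critical Care Time\s*:\s*([\d,]+)", expand=False)
print(DF["Minutes"].tolist())  # ['3,342', '322', '7,777']
```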
| <python><pandas> | 2023-08-07 18:08:26 | 1 | 859 | Raven |
76,854,081 | 5,838,180 | In pandas adding a time offset to a subset of the dataframe has no effect | <p>I noticed a strange behaviour of the pandas package that leads to an unexpected failure to add time offsets in some cases.</p>
<p>Suppose I have the following dataframe:</p>
<pre><code>df = pd.DataFrame({'time': ['2022-01-24', '2022-02-24', '2022-03-24'],
'value': [10, 20, 30]})
</code></pre>
<p>I can successfully add a time offset to it using this syntax:</p>
<pre><code>df.set_index(['time'], inplace=True)
df.index = pd.to_datetime(df.index, format='%Y-%m-%d')
df.index = df.index + pd.offsets.DateOffset(years=100)
</code></pre>
<p>But there is a fail, when I want to add the offset only to a subset of the dataframe, e.g. only to dates after <code>2022-02-25</code>, see below:</p>
<pre><code>df.set_index(['time'], inplace=True)
df.index = pd.to_datetime(df.index, format='%Y-%m-%d')
df[df.index>pd.to_datetime('2022-02-25')].index = df[df.index>pd.to_datetime('2022-02-25')].index + pd.offsets.DateOffset(years=100)
</code></pre>
<p>The second snippet leads to no change in the <code>time</code> index of <code>df</code>. Why does nothing change when I apply the addition only to a slice, and how do I do it successfully? Thanks</p>
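<p>The reason is that <code>df[df.index>...].index = ...</code> assigns a new index to the temporary copy returned by <code>df[mask]</code>, so <code>df</code> itself is never modified. Operating on <code>df.index</code> directly works; a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({"time": ["2022-01-24", "2022-02-24", "2022-03-24"],
                   "value": [10, 20, 30]})
df["time"] = pd.to_datetime(df["time"], format="%Y-%m-%d")
df = df.set_index("time")

# Build a new index in one shot: shifted where the mask holds, unchanged elsewhere.
mask = df.index > pd.to_datetime("2022-02-25")
df.index = df.index.where(~mask, df.index + pd.offsets.DateOffset(years=100))
print(df.index.tolist())
```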
| <python><pandas><datetime><slice> | 2023-08-07 17:52:41 | 1 | 2,072 | NeStack |
76,853,986 | 7,853,533 | Type hints when only a subset of attribute types are accepted | <p>Say I have a class with multiple possible types for an attribute and a function that only works on some of these types.</p>
<p>How would I type-hint that function so that it specifies it only accepts some of the types accepted by the original class?</p>
<pre class="lang-py prettyprint-override"><code>import dataclasses
@dataclasses.dataclass
class Foo:
a: int | str
def plus_1_a(foo: Foo) -> None:
"""This will only work if foo.a is an int, not a string."""
foo.a += 1
</code></pre>
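<p>One option (a sketch; it changes <code>Foo</code> into a generic class, which may or may not fit the real code) is to parametrise the attribute type, so the function can demand <code>Foo[int]</code>:</p>

```python
import dataclasses
from typing import Generic, TypeVar

T = TypeVar("T", int, str)

@dataclasses.dataclass
class Foo(Generic[T]):
    a: T

def plus_1_a(foo: Foo[int]) -> None:
    """mypy accepts Foo[int] here and rejects Foo[str]."""
    foo.a += 1

f = Foo(41)
plus_1_a(f)
print(f.a)  # 42
```

<p>An alternative that keeps <code>Foo</code> unchanged is a runtime narrowing check inside the function (<code>assert isinstance(foo.a, int)</code>), which mypy also understands.</p>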
| <python><mypy><python-typing> | 2023-08-07 17:39:01 | 1 | 1,725 | Paulo Costa |
76,853,978 | 5,662,005 | Sync a frequently changing variable across concurrent processes | <p>Say I have three processes running</p>
<ul>
<li><p>SimonsSubconsious - Randomly deciding whether he wants to say "simon says"</p>
</li>
<li><p>SimonsGame1 - Giving out commands for game 1</p>
</li>
<li><p>SimonsGame2 - Giving out commands for game 2</p>
</li>
</ul>
<p>This is an attempt, but it really doesn't do what I need it to, because Game 1 and Game 2 aren't in sync. Simon is making separate random decisions.</p>
<p><strong>SimonsSubconcious.py</strong></p>
<pre><code>import time
import random
def main():
while True:
simon_says = random.randint(0, 1)
time.sleep(1)
print(simon_says)
if __name__ == '__main__':
main()
</code></pre>
<p><strong>SimonsGame1.py</strong></p>
<pre><code>import time
from SimonsSubconsious import simon_says
commands = ['jump', 'sit', 'clap']
while True:
for command in commands:
print(f"{'Simon says' if simon_says else ''} {command}")
time.sleep(1)
</code></pre>
<p><strong>SimonsGame2.py</strong></p>
<pre><code>import time
from SimonsSubconsious import simon_says
commands = ['bark', 'moo', 'meow']
while True:
for command in commands:
print(f"{'Simon says' if simon_says else ''} {command}")
time.sleep(1)
</code></pre>
<p>So SimonsSubconsious periodically decides to switch it up, and that needs to be reflected in both games. If it's "Simon says " for Game 1, I want to ensure it's always "Simon says " for Game 2.</p>
| <python> | 2023-08-07 17:38:04 | 0 | 3,899 | Error_2646 |
76,853,953 | 7,925,579 | Flask 2.3 removed the before_first_request decorator | <p>I am using Flask-Security and SQLAlchemy to store user credentials,
but unfortunately Flask 2.3 does not support <code>security = Security(app, user_datastore)</code>.
Any assistance would be appreciated.</p>
<p>I tried the following:
<code>user_datastore = SQLAlchemyUserDatastore(db, User, Role) security = Security(app, user_datastore)</code>
but I get
<code>AttributeError: 'Flask' object has no attribute 'before_first_request'. Did you mean: '_got_first_request'?</code>.
When I disable the <code>security = Security(app, user_datastore)</code> line, it works fine, but it will not store the first-time login credentials in the database.</p>
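<p><code>before_first_request</code> was deprecated in Flask 2.2 and removed in 2.3, and the original Flask-Security package (no longer maintained) still calls it; the maintained fork, Flask-Security-Too, supports current Flask and keeps the same <code>Security(app, user_datastore)</code> API. If switching packages isn't an option, the general replacement for the removed hook is to run one-time setup eagerly inside an application context; a sketch (the <code>init_once</code> body is a placeholder for whatever your hook did):</p>

```python
from flask import Flask

app = Flask(__name__)

def init_once():
    # Placeholder for the work the old before_first_request hook performed,
    # e.g. creating tables or seeding the user datastore.
    app.config["INITIALIZED"] = True

# Run it once at startup instead of lazily on the first request.
with app.app_context():
    init_once()

print(app.config["INITIALIZED"])  # True
```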
| <python><flask><sqlalchemy><flask-sqlalchemy><flask-security> | 2023-08-07 17:34:25 | 2 | 979 | Dejene T. |
76,853,926 | 2,340,690 | Why can Python 3.11 parse this ISO datetime but Python 3.9 can't? | <p>On the same input <code>datetime.fromisoformat</code> succeeds in Python 3.11 but fails in Python 3.9. Did the ISO format change between minor versions of Python? If so, what changed?</p>
<pre><code>Python 3.11.3 (main, Jun 5 2023, 09:32:32) [GCC 13.1.1 20230429] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datetime import datetime
>>> datetime.fromisoformat('2022-11-21T19:44:27.292977Z')
datetime.datetime(2022, 11, 21, 19, 44, 27, 292977, tzinfo=datetime.timezone.utc)
</code></pre>
<pre><code>Python 3.9.17 (main, Jun 26 2023, 03:30:25)
[GCC 13.1.1 20230429] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datetime import datetime
>>> datetime.fromisoformat('2022-11-21T19:44:27.292977Z')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: Invalid isoformat string: '2022-11-21T19:44:27.292977Z'
</code></pre>
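<p>What changed is the parser, not the ISO format: before 3.11, <code>fromisoformat</code> was documented to parse only strings produced by <code>datetime.isoformat()</code>, which never emits a trailing <code>Z</code>; 3.11 extended it to accept most ISO 8601 strings, including the <code>Z</code> suffix. A version-portable sketch:</p>

```python
from datetime import datetime

def parse_iso(s: str) -> datetime:
    """Portable across 3.7+: map a trailing 'Z' to '+00:00',
    which fromisoformat accepts on every version since 3.7."""
    if s.endswith("Z"):
        s = s[:-1] + "+00:00"
    return datetime.fromisoformat(s)

dt = parse_iso("2022-11-21T19:44:27.292977Z")
print(dt.tzinfo)  # UTC
```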
| <python><datetime><iso8601> | 2023-08-07 17:30:32 | 0 | 2,945 | cheezsteak |
76,853,872 | 6,394,617 | slice(start, stop, None) vs slice(start, stop, 1) | <p>I was surprised to read <a href="https://docs.python.org/3/library/functions.html#slice" rel="nofollow noreferrer">here</a> that</p>
<blockquote>
<p>The <code>start</code> and <code>step</code> arguments default to <code>None</code></p>
</blockquote>
<p>since it also says:</p>
<blockquote>
<pre><code>slice(start, stop, step=1)
</code></pre>
<p>Return a slice object representing the set of indices specified by range(start, stop, step).</p>
</blockquote>
<p>So I expected the default argument value for the <code>step</code> parameter to be <code>1</code>.</p>
<p>I know that <code>slice(a, b, None) == slice(a, b, 1)</code> returns <code>False</code>, but I am curious if <code>slice(a, b, None)</code> always returns the same slice as <code>slice(a, b, 1)</code>, or if there is some example that I haven't been able to think of for which they will return different slices.</p>
<p>I couldn't find anything about this in the extensive post on slicing <a href="https://stackoverflow.com/questions/509211/how-slicing-in-python-works">here</a></p>
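<p>For built-in sequences the two are interchangeable: <code>slice.indices()</code>, which sequences use to normalise a slice against their length, maps a <code>None</code> step to 1. (A user-defined <code>__getitem__</code> could in principle distinguish them, since the slice objects themselves compare unequal.) A quick check:</p>

```python
# slice.indices(length) returns the concrete (start, stop, step) triple a
# sequence of that length would use; a None step normalises to 1.
for n in range(20):
    assert slice(2, 7, None).indices(n) == slice(2, 7, 1).indices(n)

seq = list(range(10))
print(seq[slice(2, 7, None)] == seq[slice(2, 7, 1)])  # True
```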
| <python><slice> | 2023-08-07 17:20:01 | 1 | 913 | Joe |
76,853,807 | 2,541,276 | AttributeError: module 'redis' has no attribute 'RedisCluster' | <p>I'm trying the connect to a redis cluster with the below code.</p>
<pre class="lang-py prettyprint-override"><code>import redis
ssl_ca_certs='<path_to_ca_certfile>'
r = redis.RedisCluster(
host='<RedisHOST>',
port=6379,
ssl=True,
password='<password>',
ssl_ca_certs=ssl_ca_certs
)
</code></pre>
<p>The code was working fine for some time, but recently I'm getting this error:</p>
<pre><code>Traceback (most recent call last):
  File "/home/rnatarajan/network-signals/py-demo/get-redis-cli.py", line 7, in <module>
    r = redis.RedisCluster(
AttributeError: module 'redis' has no attribute 'RedisCluster'
</code></pre>
<p>I tried uninstalling and reinstalling the redis package.</p>
<p>I uninstalled <code>redis-py-cluster</code> package.</p>
<p>Note: I'm using ubuntu 22.04</p>
<pre class="lang-bash prettyprint-override"><code>$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.2 LTS
Release: 22.04
Codename: jammy
</code></pre>
<p>I'm using python 3.10</p>
<pre class="lang-bash prettyprint-override"><code>$ python3 --version
Python 3.10.12
</code></pre>
<p>Any idea how to fix this error?</p>
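<p><code>RedisCluster</code> was added to redis-py in version 4.1 (which is also when the separate <code>redis-py-cluster</code> package became unnecessary), so this error usually means an older redis-py is being imported, or a local file named <code>redis.py</code> is shadowing the installed package. A quick diagnostic (a sketch; it only reports what this interpreter resolves <code>redis</code> to):</p>

```python
import importlib.util

spec = importlib.util.find_spec("redis")
if spec is None:
    print("redis is not importable in this interpreter")
else:
    # The real package resolves to .../redis/__init__.py; a stray local
    # redis.py in the working directory would show up here instead.
    print(spec.origin)
```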
| <python><redis><redis-cluster><redisclient> | 2023-08-07 17:09:21 | 2 | 10,555 | user51 |
76,853,797 | 1,592,334 | Issues connecting to postgresql using python on Mac | <p>I am trying to connect to postgresql using the following code</p>
<pre><code>import psycopg2
hostname = 'localhost'
database = 'demo'
username = 'postgres'
pwd = 'johndoe'
port_d= 5432
conn = psycopg2.connect(
host=hostname,
dbname=database,
user = username,
password=pwd,
port = port_d)
conn.close()
</code></pre>
<p>I have installed psycopg2 using <code>pip install psycopg2</code>.</p>
<p>Somehow, when I try to establish a connection, I get the error below.
I'm not sure what I am doing wrong:</p>
<pre><code>Traceback (most recent call last):
File "/Users/abiodunadeoye/KINGS-PLATFORM/Postgres/ETL/connect.py", line 1, in <module>
import psycopg2
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/psycopg2/__init__.py", line 51, in <module>
from psycopg2._psycopg import ( # noqa
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/psycopg2/_psycopg.cpython-37m-darwin.so, 0x0002): Library not loaded: @rpath/libpq.5.dylib
Referenced from: <D5013EE3-2219-32DC-A518-A4DA08CF3918> /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/psycopg2/_psycopg.cpython-37m-darwin.so
Reason: tried: '/usr/local/lib/libpq.5.dylib' (no such file), '/usr/lib/libpq.5.dylib' (no such file, not in dyld cache)
</code></pre>
| <python><postgresql> | 2023-08-07 17:08:18 | 3 | 1,095 | Abiodun Adeoye |