QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
β |
|---|---|---|---|---|---|---|---|---|
78,949,007
| 5,269,892
|
Pandas operations between non-float types and NaN
|
<p>What are the reasons behind pandas allowing operations between sets / strings / other non-float types and NaN (yielding NaN), whereas pure Python does not?</p>
<pre><code>import pandas as pd
import numpy as np
pd.Series([np.nan]) - pd.Series([{5}]) # yields a NaN-series
pd.Series([np.nan]) - set([5]) # throws error "TypeError: unsupported operand type(s) for -: 'float' and 'set'"
</code></pre>
|
<python><pandas>
|
2024-09-04 13:47:40
| 1
| 1,314
|
silence_of_the_lambdas
|
78,948,870
| 1,942,555
|
How to debug Qt Creator's debugging helpers
|
<p>Debugging in Qt Creator can be extended with custom debugging helpers:<br />
<a href="https://doc.qt.io/qtcreator/creator-debugging-helpers.html" rel="nofollow noreferrer">https://doc.qt.io/qtcreator/creator-debugging-helpers.html</a></p>
<p>How can I debug the debugging helpers? E.g. is there a python snippet, which setups a breakpoint and where I can debug my custom <code>qdump__Class(d, value)</code> functions?</p>
|
<python><debugging><gdb><qt-creator><lldb>
|
2024-09-04 13:17:05
| 0
| 658
|
elsamuko
|
78,948,830
| 4,412,929
|
Efficient way to subtract yearly mean data from monthly data in Xarray?
|
<p>Suppose I have the following Xarray dataarray:</p>
<pre><code>>>> da
<xarray.DataArray 'precip' (time: 521, lat: 72, lon: 144)> Size: 22MB
[5401728 values with dtype=float32]
Coordinates:
* lat (lat) float32 288B 88.75 86.25 83.75 81.25 ... -83.75 -86.25 -88.75
* lon (lon) float32 576B 1.25 3.75 6.25 8.75 ... 351.2 353.8 356.2 358.8
* time (time) datetime64[ns] 4kB 1979-01-01 1979-02-01 ... 2022-05-01
</code></pre>
<p>which contains monthly mean data from 1979 to 2022. I have calculated the yearly mean data from this data:</p>
<pre><code>>>> yearly_mean=da.resample({'time':'YS'}).mean()
>>> print(yearly_mean)
<xarray.DataArray 'precip' (time: 44, lat: 72, lon: 144)> Size: 2MB [25/1031]
array([[[ 0.5208333 , 0.5141667 , 0.51 , ..., 0.54833335,
0.53833336, 0.5283333 ],
[ 0.4725 , 0.4766667 , 0.4883333 , ..., 0.48499998,
0.47416666, 0.47166666],
[ 0.6399999 , 0.68 , 0.7308333 , ..., 0.5975 ,
0.6016666 , 0.61333334],
...,
[ 0.05666666, 0.05833334, 0.05833334, ..., 0.05416666,
0.05416666, 0.05083333],
[ 0.0725 , 0.07166667, 0.07166667, ..., 0.07416666,
0.0725 , 0.06583334],
[ 0.11333334, 0.11166667, 0.11083334, ..., 0.11583333,
0.115 , 0.10166666]],
[[ 0.5125 , 0.50916666, 0.505 , ..., 0.53416663,
0.525 , 0.5183333 ],
[ 0.43249997, 0.43916664, 0.45000002, ..., 0.4358333 ,
0.4308333 , 0.42999998],
[ 0.5983333 , 0.6266666 , 0.6716667 , ..., 0.5758334 ,
0.5783333 , 0.5841667 ],
...
[ 0.13250001, 0.10916666, 0.09833334, ..., 0.21333332,
0.1875 , 0.16 ],
[ 0.07666666, 0.07583333, 0.075 , ..., 0.0775 ,
0.0775 , 0.07666666],
[ 0.06333333, 0.06333333, 0.06166667, ..., 0.06416667,
0.06333333, 0.06333333]],
[[ 0.35200003, 0.34199998, 0.336 , ..., 0.38599998,
0.374 , 0.36200002],
[ 0.42599997, 0.444 , 0.45999998, ..., 0.394 ,
0.40399998, 0.41399997],
[ 0.49 , 0.528 , 0.582 , ..., 0.43800002,
0.45 , 0.468 ],
...,
[ 1.8119999 , 2.0379999 , 2.35 , ..., 1.436 ,
1.5239999 , 1.6200001 ],
[ 6.03 , 6.708 , 7.484 , ..., 4.4660006 ,
4.9140005 , 5.4300003 ],
[13.785998 , 14.286 , 14.806 , ..., 12.434 ,
12.855998 , 13.309999 ]]], dtype=float32)
Coordinates:
* lat (lat) float32 288B 88.75 86.25 83.75 81.25 ... -83.75 -86.25 -88.75
* lon (lon) float32 576B 1.25 3.75 6.25 8.75 ... 351.2 353.8 356.2 358.8
* time (time) datetime64[ns] 352B 1979-01-01 1980-01-01 ... 2022-01-01
</code></pre>
<p>I want to calculate the difference between yearly mean and the monthly means. I could have just done <code>da-yearly_data</code>, if there were only a single year's data. Since I have data for multiple years, this would not work correctly. Xarray seems to successfully complete the operation but the results are incorrect, even the shape of the resulting dataarray is not what I was expecting:</p>
<pre><code>>>> d=da-yearly_mean
>>> print(d)
<xarray.DataArray 'precip' (time: 44, lat: 72, lon: 144)> Size: 2MB
array([[[-3.10833335e-01, -3.04166734e-01, -3.00000012e-01, ...,
-3.18333328e-01, -3.18333358e-01, -3.08333308e-01],
[-3.52499992e-01, -3.46666694e-01, -3.38333309e-01, ...,
-3.64999980e-01, -3.64166677e-01, -3.61666679e-01],
[-4.19999927e-01, -4.30000007e-01, -4.30833280e-01, ...,
-3.97500038e-01, -4.01666641e-01, -4.13333356e-01],
...,
[ 1.33333392e-02, 2.16666609e-02, 2.16666609e-02, ...,
5.83333522e-03, 5.83333522e-03, 9.16666538e-03],
[-1.24999993e-02, -1.16666667e-02, -1.16666667e-02, ...,
-2.41666622e-02, -2.24999972e-02, -1.58333369e-02],
[-3.33333388e-02, -4.16666716e-02, -4.08333391e-02, ...,
-3.58333364e-02, -3.50000039e-02, -3.16666588e-02]],
[[-1.92499995e-01, -1.99166656e-01, -1.94999993e-01, ...,
-1.84166640e-01, -1.84999973e-01, -1.88333303e-01],
[-1.02499962e-01, -1.19166642e-01, -1.30000025e-01, ...,
-5.58333099e-02, -6.08333051e-02, -7.99999833e-02],
[ 1.66672468e-03, -3.66666317e-02, -7.16666579e-02, ...,
6.41666055e-02, 6.16666675e-02, 3.58332992e-02],
...
-6.33333176e-02, -7.75000006e-02, -6.99999928e-02],
[-7.66666606e-02, -7.58333281e-02, -7.49999955e-02, ...,
-7.75000006e-02, -7.75000006e-02, -7.66666606e-02],
[-6.33333325e-02, -6.33333325e-02, -6.16666675e-02, ...,
-6.41666725e-02, -6.33333325e-02, -6.33333325e-02]],
[[-2.00003386e-03, -1.99997425e-03, -5.99998236e-03, ...,
1.40000284e-02, 5.99998236e-03, 7.99998641e-03],
[ 2.04000026e-01, 2.05999970e-01, 1.89999998e-01, ...,
1.85999990e-01, 1.96000040e-01, 2.06000030e-01],
[ 4.20000017e-01, 4.42000031e-01, 4.37999964e-01, ...,
3.81999969e-01, 4.00000036e-01, 4.12000000e-01],
...,
[ 6.08000159e-01, 1.19200015e+00, 1.97000027e+00, ...,
-3.76000047e-01, -1.43999934e-01, 1.29999876e-01],
[ 1.02699986e+01, 1.19820004e+01, 1.39359999e+01, ...,
6.39399910e+00, 7.50599957e+00, 8.78999996e+00],
[ 2.75340004e+01, 2.88440018e+01, 3.02239990e+01, ...,
2.39559994e+01, 2.50839996e+01, 2.62800007e+01]]],
dtype=float32)
Coordinates:
* lat (lat) float32 288B 88.75 86.25 83.75 81.25 ... -83.75 -86.25 -88.75
* lon (lon) float32 576B 1.25 3.75 6.25 8.75 ... 351.2 353.8 356.2 358.8
* time (time) datetime64[ns] 352B 1979-01-01 1980-01-01 ... 2022-01-01
</code></pre>
<p>The shape of the output should be the same as <code>da</code> (monthly data).</p>
<p>The following is a crude method of doing what I want using a for-loop:</p>
<pre><code>>>> years=yearly_mean.indexes['time'].year
>>> d=da.copy()
>>> for idx, year in enumerate(years):
... d[idx*12:(idx*12)+12]=da[idx*12:(idx*12)+12]-yearly_mean[idx]
>>> print(d)
<xarray.DataArray 'precip' (time: 521, lat: 72, lon: 144)> Size: 22MB
array([[[-3.108333e-01, -3.041667e-01, ..., -3.183334e-01, -3.083333e-01],
[-3.525000e-01, -3.466667e-01, ..., -3.641667e-01, -3.616667e-01],
...,
[-1.250000e-02, -1.166667e-02, ..., -2.250000e-02, -1.583334e-02],
[-3.333334e-02, -4.166667e-02, ..., -3.500000e-02, -3.166666e-02]],
[[-2.208333e-01, -2.141667e-01, ..., -2.283334e-01, -2.283333e-01],
[-2.125000e-01, -2.066667e-01, ..., -2.241667e-01, -2.216667e-01],
...,
[-6.250000e-02, -6.166667e-02, ..., -6.250000e-02, -5.583334e-02],
[-1.033333e-01, -1.016667e-01, ..., -1.050000e-01, -9.166666e-02]],
...,
[[-1.620000e-01, -1.620000e-01, ..., -1.840000e-01, -1.720000e-01],
[-2.860000e-01, -2.940000e-01, ..., -2.740000e-01, -2.840000e-01],
...,
[-2.080000e+00, -2.828000e+00, ..., -8.640003e-01, -1.430000e+00],
[-1.093600e+01, -1.149600e+01, ..., -9.905998e+00, -1.041000e+01]],
[[-2.000034e-03, -1.999974e-03, ..., 5.999982e-03, -2.000004e-03],
[-1.360000e-01, -1.440000e-01, ..., -1.140000e-01, -1.240000e-01],
...,
[-5.680000e+00, -6.368000e+00, ..., -4.564001e+00, -5.080000e+00],
[-1.351600e+01, -1.401600e+01, ..., -1.258600e+01, -1.304000e+01]]],
dtype=float32)
Coordinates:
* lat (lat) float32 288B 88.75 86.25 83.75 81.25 ... -83.75 -86.25 -88.75
* lon (lon) float32 576B 1.25 3.75 6.25 8.75 ... 351.2 353.8 356.2 358.8
* time (time) datetime64[ns] 4kB 1979-01-01 1979-02-01 ... 2022-05-01
Attributes:
long_name: Average Monthly Rate of Precipitation
valid_range: [ 0. 70.]
units: mm/day
precision: 2
var_desc: Precipitation
dataset: CPC Merged Analysis of Precipitation Enhanced
level_desc: Surface
statistic: Mean
parent_stat: Mean
actual_range: [ 0. 144.49]
</code></pre>
<p>That is, I want to do subtract from each of the monthly data in <code>da</code> with the corresponding year's annual mean which is in <code>yearly_mean</code>.</p>
<p>Is there a way to do it efficiently in Xarray, instead of using a for-loop?</p>
|
<python><python-xarray><weather>
|
2024-09-04 13:11:22
| 1
| 363
|
RogUE
|
78,948,777
| 15,587,034
|
Ruff formater lengthens the string, how to make sure that it does not change strings that are already less than the length specified in the settings
|
<p>The problem is that this is the line:</p>
<pre><code>__all__ = (
"Base",
"TimestampMixin",
"BaseUser",
"User",
"Locale",
"DBBot"
)
</code></pre>
<p>It turns you into such a</p>
<pre><code>__all__ = ("Base", "TimestampMixin", "BaseUser", "User", "Locale", "DBBot")
</code></pre>
|
<python><python-3.x><formatter><linter><ruff>
|
2024-09-04 13:00:13
| 1
| 360
|
Charls Ken
|
78,948,402
| 8,849,071
|
How does MyPy work when considering MagicMock
|
<p>I was thinking about enabling the same <code>mypy</code> rules in my tests as in my production code. I started doing some tinkering in <code>mypy</code> playground and found some things I do not quite understand about how <code>mypy</code> and <code>MagicMock</code> play together. Let's say I have the following class:</p>
<pre class="lang-py prettyprint-override"><code>class A:
def method(self, variable: int) -> None:
print("Hey!")
</code></pre>
<p>Now, I want to create a mock of that class on another test, so I would do something like this:</p>
<pre class="lang-py prettyprint-override"><code>mock = MagicMock(spec_set=A)
</code></pre>
<p>Because I have used <code>spec_set</code> if I try to do something like:</p>
<pre class="lang-py prettyprint-override"><code>mock.not_existing.return_value = "Meh"
</code></pre>
<p>Then I will get a runtime error, but surprisingly (?) I will not get any error from <code>mypy</code>. This made me think that maybe mypy just ignored everything related to <code>MagicMock</code>. Then I tried something like:</p>
<pre class="lang-py prettyprint-override"><code>def function(some: A) -> None:
print(some)
# no mypy error either
function(mock)
</code></pre>
<p>Again, it does not trigger any <code>mypy</code> error, which I guess it's the expected behavior because that mock is mocking <code>A</code>. Nonetheless, if you change it to:</p>
<pre class="lang-py prettyprint-override"><code>class B:
def method_2(self, variable: int) -> None:
print("Hey!")
mock_2 = MagicMock(spec_set=B)
function(mock_2)
</code></pre>
<p>Then, no <code>mypy</code> error either. I also tried the following:</p>
<pre><code>mock: A = MagicMock(spec_set=A)
mock.not_existing.something() # runtime and mypy error
mock.method.return_value = 1 # mypy error due to return_value not existing
</code></pre>
<p>So, is there any way to change this behavior? Without this, I'm not sure if it makes sense to add <code>mypy</code> to my tests because I would be getting false negatives I guess.</p>
<p>As a bonus part of the question, I also thought about this:</p>
<pre><code>mock.method.return_value = "Meh"
</code></pre>
<p>Which does not trigger any error (runtime or from <code>mypy</code>). Ideally, it should throw both, because I'm breaking the spec (so I guess <code>spec_set</code> should do something about it) and also because I'm not returning <code>None</code>, as the type hint suggests.</p>
<p><a href="https://mypy-play.net/?mypy=latest&python=3.12&flags=strict-equality%2Cwarn-incomplete-stub%2Cwarn-redundant-casts%2Cwarn-return-any%2Cwarn-unreachable%2Cwarn-unused-configs%2Cwarn-unused-ignores&gist=6746a40f7cf534a5caccc9be4254a42a" rel="nofollow noreferrer">Here you can test the same example in <code>mypy</code> playground</a></p>
|
<python><unit-testing><python-typing><mypy><magicmock>
|
2024-09-04 11:28:53
| 1
| 2,163
|
Antonio Gamiz Delgado
|
78,948,398
| 6,689,867
|
Pandas style.to_latex: how to add a \cmidrule in the header?
|
<p>I have <code>mycsv.csv</code>:</p>
<pre><code>Nr,A,B,C
1,a,b,g
2,c,d,h
3,e,f,i
</code></pre>
<p>With this Phyton code:</p>
<pre><code>import pandas as pd
testo = pd.read_csv("mycsv.csv")
columns = [
('','Something'),
('Multicolumn','A'),
('Multicolumn','B'),
('Something else',''),
]
testo.columns = pd.MultiIndex.from_tuples(columns)
testo.head()
testo.style.hide(axis="index").to_latex("myout.tex",hrules=True,
multicol_align='c')
</code></pre>
<p>I got <code>myout.tex</code>:</p>
<pre><code>\begin{tabular}{rlll}
\toprule
& \multicolumn{2}{c}{Multicolumn} & Something else \\
Something & A & B & \\
\midrule
1 & a & b & g \\
2 & c & d & h \\
3 & e & f & i \\
\bottomrule
\end{tabular}
</code></pre>
<p>I would like to add a <code>\cmidrule{2-3}</code> in the headers:</p>
<pre><code>\documentclass[10pt]{article}
\usepackage{booktabs}
\begin{document}
\begin{tabular}{rlll}
\toprule
& \multicolumn{2}{c}{Multicolumn} & Something else \\
\cmidrule{2-3}
Something & A & B & \\
\midrule
1 & a & b & g \\
2 & c & d & h \\
3 & e & f & i \\
\bottomrule
\end{tabular}
\end{document}
</code></pre>
<p>How can I do it?</p>
|
<python><pandas><csv><latex>
|
2024-09-04 11:28:40
| 1
| 305
|
CarLaTeX
|
78,948,296
| 2,243,490
|
pytest nested parameterization
|
<p>Code</p>
<pre><code>@pytest.mark.parametrize("arg1", [1,2])
@pytest.mark.parametrize("arg2", [["A1", "A2"], ["B1", "B2", "B3"]])
def test_stackoverflow(arg1, arg2):
print(arg1, arg2)
</code></pre>
<p>Current output</p>
<pre><code>1 [A1, A2]
2 [A1, A2]
1 [B1, B2, B3]
2 [B1, B2, B3]
</code></pre>
<p>But I am expecting something like below. Is this even possible, if so how?</p>
<pre><code>1 A1
1 A2
2 B1
2 B2
2 B3
</code></pre>
|
<python><python-3.x><pytest>
|
2024-09-04 11:07:00
| 2
| 1,886
|
Dinesh
|
78,948,247
| 4,483,043
|
Gradio How to add user avatar in chat interface
|
<p>I have a very basic code for Gradio chat interface, how can i add user avatar in Gradio chat interface</p>
<pre><code>import gradio as gr
import random
import time
with gr.Blocks(theme=gr.themes.Soft()) as demo:
chatbot = gr.Chatbot()
msg = gr.Textbox()
clear = gr.ClearButton([msg, chatbot])
def respond(message, chat_history):
bot_message = random.choice(["How are you?", "I love you", "I'm very hungry"])
chat_history.append((message, bot_message))
time.sleep(1)
return "", chat_history
msg.submit(respond, [msg, chatbot], [msg, chatbot])
demo.launch()
</code></pre>
|
<python><chatbot><large-language-model><gradio>
|
2024-09-04 10:54:41
| 2
| 437
|
Farooq Zaman
|
78,948,116
| 854,101
|
Django error on forms when running makemigrations
|
<p>I'm getting the following error when trying to make migrations for my models. This is against a clean DB so it is trying to generate the initial migrations.</p>
<pre><code>File "/Users/luketimothy/Library/Mobile Documents/com~apple~CloudDocs/LifePlanner/LifePlanner/LifePlanner/urls.py", line 20, in <module>
from . import views
File "/Users/luketimothy/Library/Mobile Documents/com~apple~CloudDocs/LifePlanner/LifePlanner/LifePlanner/views.py", line 7, in <module>
from .forms import AppUserForm, IncomeSourceForm, AccountForm, SpouseForm, DependentForm
File "/Users/luketimothy/Library/Mobile Documents/com~apple~CloudDocs/LifePlanner/LifePlanner/LifePlanner/forms.py", line 32, in <module>
class AccountForm(forms.ModelForm):
File "/Users/luketimothy/Library/Mobile Documents/com~apple~CloudDocs/LifePlanner/LifePlanner/ProjectEnv/lib/python3.11/site-packages/django/forms/models.py", line 312, in __new__
fields = fields_for_model(
^^^^^^^^^^^^^^^^^
File "/Users/luketimothy/Library/Mobile Documents/com~apple~CloudDocs/LifePlanner/LifePlanner/ProjectEnv/lib/python3.11/site-packages/django/forms/models.py", line 237, in fields_for_model
formfield = f.formfield(**kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/luketimothy/Library/Mobile Documents/com~apple~CloudDocs/LifePlanner/LifePlanner/ProjectEnv/lib/python3.11/site-packages/django/db/models/fields/related.py", line 1165, in formfield
"queryset": self.remote_field.model._default_manager.using(using),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'using'
</code></pre>
<p>This is the form in my forms.py:</p>
<pre><code>class AccountForm(forms.ModelForm):
class Meta:
model = Account
fields = ['account_name','owner','account_provider','balance']
labels = {'account_name': 'Name','account_provider': 'Account Provider','balance': 'Balance','owner': 'Owner'}
</code></pre>
<p>And here are the relevant Models in models.py:</p>
<pre><code>class Projectable(models.Model):
class Meta:
abstract = True
def project_annual_values(self, years):
raise NotImplementedError("Subclasses should implement this method.")
def project_monthly_values(self, months):
raise NotImplementedError("Subclasses should implement this method.")
class AccountProvider(models.Model):
name = models.CharField(max_length=64)
web_url = models.CharField(max_length=256)
login_url = models.CharField(max_length=256)
logo_file= models.CharField(max_length=64)
def __str__(self):
return f"{self.name}"
class Account(Projectable):
account_name = models.CharField(max_length=100)
balance = models.DecimalField(max_digits=15, decimal_places=2)
owner = models.ForeignKey(Agent, on_delete=models.CASCADE)
account_provider = models.ForeignKey(AccountProvider, on_delete=models.SET_NULL)
def get_balance(self):
return self.balance
def get_account_type(self):
raise NotImplementedError("Subclasses should implement this method.")
</code></pre>
<p>I have searched all over and I cannot quite find any example of the same issue. I think it might be the way I have the inheritance set up on the models but I'm not really sure, as far as I'm aware this should work.</p>
<p>EDIT:</p>
<p>Agent Model:</p>
<pre><code>class Agent(models.Model):
retirement_age = models.IntegerField()
date_of_birth = models.DateField()
name = models.CharField(max_length=100)
class Meta:
abstract = True
</code></pre>
|
<python><django>
|
2024-09-04 10:22:51
| 1
| 2,496
|
Luke
|
78,947,991
| 891,959
|
How do I get sqlalchemy subqueries to retain their ORM type?
|
<p>I have a query with subquery. It partitions based on ID and selects the newest of each one in the partitions. It works fine:</p>
<pre><code>subquery = db.query(
func.rank()
.over(
order_by=Table.CreatedAt.desc(),
partition_by=[Table.ID],
)
.label("rank"),
Table,
).subquery()
rows: list[Table] = db.query(subquery).filter(subquery.c.rank == 1).all()
</code></pre>
<p>The problems is the type of <code>rows</code> isn't <code>list[Table]</code> but rather <code>list[Rows]</code> and it loses all of the ORM information.</p>
<p>How do I get it to return <code>list[Table]</code>?</p>
|
<python><sqlalchemy><orm>
|
2024-09-04 09:51:35
| 1
| 321
|
InformationEntropy
|
78,947,680
| 110,963
|
Type annotations for parameters of Luigi tasks
|
<p>I'm using <a href="https://luigi.readthedocs.io/en/stable/" rel="nofollow noreferrer">Luigi</a> in my Python project, so I have classes that look like this:</p>
<pre><code>class MyTask(luigi.Task):
my_attribute = luigi.IntParameter()
</code></pre>
<p>I would like to add a type annotation to <code>my_attribute</code> so that <code>mypy</code> will be aware that it is an integer. Or rather "will be an integer", because obviously it is not yet. It will become an integer due to "metaclass magic":</p>
<pre><code>t = MyTask(my_attribute=5)
print(t.my_attribute) # <- t.my_attribute is an int, not an IntParameter
</code></pre>
<p>What's the proper way to annotate this attribute? Is it possible at all? I'm just a Luigi user and not a maintainer or contributor, so changing Luigi is not an option. At least not short term.</p>
|
<python><python-typing><mypy><luigi>
|
2024-09-04 08:41:47
| 1
| 15,684
|
Achim
|
78,947,551
| 10,240,072
|
Python package and conda environnement
|
<p>I have encountered a behavior relative to package versions in different conda environment that I donβt understand and seems illogical. There is probably something fundamental that I do not understand in the behavior.
Basically :</p>
<ol>
<li>I have a conda environment 'env' with pandas version 2.0.3</li>
<li>I clone this environment with : "conda create --name env_save --clone env</li>
<li>I activate 'env'</li>
<li>I update pandas to 2.2.2</li>
<li>I switch back to 'env_save' (the one with pandas 2.0.3, I assume)</li>
<li>I launch jupyter lab</li>
<li>In jupyter lab, in one cell, I import pandas and check for <code>pandas.__version__</code> : result :2.2.2</li>
<li>In the next cell, I check !pip list : result: pandas 2.0.3</li>
</ol>
<p>Is this a bug or is there a possibility that this 'spillage' is as intended ? (the behavior persists even after reboot)</p>
|
<python><conda><environment>
|
2024-09-04 08:18:12
| 0
| 313
|
Fred Dujardin
|
78,947,384
| 5,704,198
|
What does this code mean: "assert result == repeat, (result, repeat)"?
|
<p>From <a href="https://hypothesis.readthedocs.io/en/latest/ghostwriter.html#hypothesis.extra.ghostwriter.idempotent" rel="nofollow noreferrer">here</a>:</p>
<pre><code>from hypothesis import given, strategies as st
@given(seq=st.one_of(st.binary(), st.binary().map(bytearray), st.lists(st.integers())))
def test_idempotent_timsort(seq):
result = timsort(seq=seq)
repeat = timsort(seq=result)
assert result == repeat, (result, repeat)
</code></pre>
<p>What does the last assert mean? I understand <code>result == repeat</code> but what's the rest?</p>
|
<python><assert>
|
2024-09-04 07:41:58
| 1
| 1,385
|
fabio
|
78,947,382
| 3,540,161
|
Inconsistent Python import behaviour with subdirectories
|
<p>When splitting my python code into simple modules and putting them in a subdirectory, I see seemingly inconsistent behaviour that I was not able to solve by adding an <code>__init__.py</code> file in the module. I am also unable to understand why things not seem to work consistently. Please explain or point to documentation where this "logic" is explained:</p>
<p>I have the following file structure:</p>
<pre><code>a.py
module/b.py
module/c.py
</code></pre>
<p>The files contain the following contents:</p>
<p>a.py:</p>
<pre class="lang-py prettyprint-override"><code>from module.b import hello
hello()
</code></pre>
<p>b.py:</p>
<pre class="lang-py prettyprint-override"><code>from c import helloworld
def hello():
helloworld()
hello()
</code></pre>
<p>c.py:</p>
<pre class="lang-py prettyprint-override"><code>def helloworld():
print("Hello world")
</code></pre>
<p>When I run b.py everything works:</p>
<pre class="lang-bash prettyprint-override"><code># cd module/
# python b.py
Hello world
</code></pre>
<p>But when I run a.py from the parent directory suddenly python does not know where to find c.py:</p>
<pre class="lang-bash prettyprint-override"><code># python a.py
Traceback (most recent call last):
File "python-playground/a.py", line 1, in <module>
from module.b import hello
File "python-playground/module/b.py", line 1, in <module>
from c import helloworld
ModuleNotFoundError: No module named 'c'
</code></pre>
<p>How can I make it so that I can run both <code>module/b.py</code> and <code>a.py</code> without Python complaining about imports without making my code too complicated?</p>
|
<python><python-3.x><python-import>
|
2024-09-04 07:41:25
| 2
| 7,308
|
Rolf
|
78,947,332
| 1,866,775
|
How to install torch without nvidia?
|
<p>While trying to reduce the size of a Docker image, I noticed <code>pip install torch</code> adds a few GB. A big chunk of this comes from <code>[...]/site-packages/nvidia</code>. Since I'm not using a GPU, I'd like to not install the <code>nvidia</code> things.</p>
<p>Here is a minimal example:</p>
<pre><code>FROM python:3.12.5
RUN pip install torch
</code></pre>
<p>(Ignoring <code>-slim</code> base images, since this is not the point here.)</p>
<p>Resulting size:</p>
<ul>
<li><code>FROM python:3.12.5</code> -> <code>1.02GB</code></li>
<li>After <code>RUN pip install torch</code> -> <code>8.98GB</code></li>
<li>With <code>RUN pip install torch && pip freeze | grep nvidia | xargs pip uninstall -y</code> instead -> <code>6.19GB</code>.</li>
</ul>
<p>While the last point reduces the final size, all the nvidia stuff is still <a href="https://gist.github.com/Dobiasd/4ebb2cdce927408f7953cb0cd65962c3" rel="noreferrer">downloaded and installed</a>, which costs time and bandwidth.</p>
<p>So, how can I install <code>torch</code> without nvidia directly?</p>
<p>Using <code>--no-deps</code> is not a convenient solution, because of the other transitive dependencies, that I would like to install.</p>
<p>Of course, I could explicitly list every single one, but looking at this list of packages installed with <code>torch</code></p>
<pre><code>mpmath
typing-extensions
sympy
nvidia-nvtx-cu12
nvidia-nvjitlink-cu12
nvidia-nccl-cu12
nvidia-curand-cu12
nvidia-cufft-cu12
nvidia-cuda-runtime-cu12
nvidia-cuda-nvrtc-cu12
nvidia-cuda-cupti-cu12
nvidia-cublas-cu12
networkx
MarkupSafe
fsspec
filelock
triton
nvidia-cusparse-cu12
nvidia-cudnn-cu12
jinja2
nvidia-cusolver-cu12
torch
</code></pre>
<p>I'd like to avoid manually maintaining this list since it would change with future versions of <code>torch</code>.</p>
|
<python><pytorch><pip><dockerfile><torch>
|
2024-09-04 07:27:02
| 1
| 11,227
|
Tobias Hermann
|
78,947,142
| 4,483,043
|
how to change favicon in Gradio python
|
<p>I have a very basic code for Gradio chat interface, how can i change favicon in Gradio chat interface</p>
<pre><code>import time
import gradio as gr
def slow_echo(message, history):
for i in range(len(message)):
time.sleep(0.3)
yield "You typed: " + message[: i+1]
gr.ChatInterface(slow_echo).launch()
</code></pre>
|
<python><chatbot><large-language-model><gradio>
|
2024-09-04 06:33:45
| 1
| 437
|
Farooq Zaman
|
78,946,789
| 7,471,830
|
Python Asyncio source code analysis: Why does `_get_running_loop` in Python execute the C implementation instead of the Python one?
|
<p>I've been exploring the <code>async</code> source code and noticed that the function <code>_get_running_loop()</code> is defined both in Python and has a note stating it's implemented in C (in <code>_asynciomodule.c</code>).</p>
<pre class="lang-py prettyprint-override"><code># python3.11/asyncio/events.py
def get_running_loop():
"""Return the running event loop. Raise a RuntimeError if there is none.
This function is thread-specific.
"""
# NOTE: this function is implemented in C (see _asynciomodule.c)
loop = _get_running_loop()
if loop is None:
raise RuntimeError('no running event loop')
return loop
def _get_running_loop():
"""Return the running event loop or None.
This is a low-level function intended to be used by event loops.
This function is thread-specific.
"""
# NOTE: this function is implemented in C (see _asynciomodule.c)
running_loop, pid = _running_loop.loop_pid
if running_loop is not None and pid == os.getpid():
return running_loop
</code></pre>
<p>If <code>_get_running_loop()</code> is already defined in Python, why is it said the actual implementation is written in C?</p>
<p>The following code in <a href="https://github.com/python/cpython/blob/135dad9bd70bba5a7b432c744f2993476915cf07/Modules/_asynciomodule.c#L3268" rel="nofollow noreferrer">Cpython</a> looks like the real implementation:</p>
<pre class="lang-c prettyprint-override"><code>static PyObject *
_asyncio_get_running_loop_impl(PyObject *module)
/*[clinic end generated code: output=c247b5f9e529530e input=2a3bf02ba39f173d]*/
{
PyObject *loop;
_PyThreadStateImpl *ts = (_PyThreadStateImpl *)_PyThreadState_GET();
loop = Py_XNewRef(ts->asyncio_running_loop);
if (loop == NULL) {
/* There's no currently running event loop */
PyErr_SetString(
PyExc_RuntimeError, "no running event loop");
return NULL;
}
return loop;
}
</code></pre>
<p>I'm aware that <code>asyncio</code> uses C extensions for performance reasons, but Iβd like to understand how this mechanism works, specifically how Python binds the Python-level function to the C-level implementation.</p>
<p>Thank you!</p>
|
<python><python-asyncio><cpython><event-loop>
|
2024-09-04 04:16:28
| 1
| 831
|
OOD Waterball
|
78,946,628
| 10,778,476
|
Python imaplib: Is there a consistent way to authenticate/login to Outlook via IMAP?
|
<p>I'm literally junior to Python programming after years of C++ and C programming, and am trying to work in using Python to save all messages & attachments from Outlook, specifically <code>imaplib</code>.</p>
<p>My experiments so far led me to understand that imaplib module could succeessly connect to an email server using the IMAP protocol over SSL when I enabled my account and password for outlook server.</p>
<p>I've simply written a program. The program demonstrates that imaplib module connect to the server by <code>imaplib.IMAP4_SSL</code>:</p>
<pre><code>import imaplib
ImapPort = 993
# Connect to the server
mail = imaplib.IMAP4_SSL("outlook.office365.com", ImapPort)
# Login to the account
mail.login(email_user, email_pass)
</code></pre>
<p>This is a very simple process ...but the authentication fails with "Login failed" for few attempts:</p>
<pre><code>> imaplib.IMAP4.error: b'LOGIN failed.'
</code></pre>
<p>I tried another way of looking at this -- but I'm not sure if this is consistent -- that let program demonstrates more than one attempt:</p>
<pre><code># Connect to the server
import imaplib
import logging
import sys
import time
IMAPserver = "outlook.office365.com"
ImapPort = 993
# Configure logging to write to both file and console
log_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
# Console handler
log_handler_console = logging.StreamHandler(sys.stdout)
log_handler_console.setFormatter(log_formatter)
# Create a logger
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.addHandler(log_handler_console)
def connect_to_server(server, port):
try:
mail = imaplib.IMAP4_SSL(server, port)
logging.info(f"Connected to the email server {server}.")
return mail
except imaplib.IMAP4.error as e:
logging.error(f"Connection failed: {str(e)}")
return None
def login_with_retries(server, port, email_user, email_pass, max_retries=30, retry_delay=5):
for attempt in range(max_retries):
mail = connect_to_server(server, port)
if mail is None:
logging.error(f"Retrying connection in {retry_delay} seconds...")
time.sleep(retry_delay)
continue
try:
mail.login(email_user, email_pass)
logging.info(f"Logged into the email account {email_user}.")
return mail # Return the mail object if login is successful
except imaplib.IMAP4.error as e:
logging.error(f"LOGIN failed on attempt {attempt + 1} of {max_retries} (Retry in {retry_delay} seconds). Error: {str(e)}")
mail.logout() # Ensure proper logout before retrying connection
time.sleep(retry_delay)
logging.error("All login attempts failed. Check your email credentials and settings.")
return None # Return False if all attempts fail
try:
# Connect to server
mail = login_with_retries(IMAPserver, ImapPort, email_user, email_pass)
if mail is None:
# Handle login failure
raise RuntimeError("Failed to IMAP login after multiple attempts.")
# Select mailbox
mail.select("inbox")
logging.info("Mailbox inbox folder selected.")
except imaplib.IMAP4.error as e:
logging.error(f"An IMAP error occurred: {e}", exc_info=True)
sys.exit(1)
except Exception as e:
logging.error(f"An unexpected error occurred: {e}", exc_info=True)
sys.exit(1)
</code></pre>
<p>The problem is authentication fails with "Login failed" for first few attempts and finally succeeds.</p>
<pre><code>2024-09-04 10:29:34 - INFO - Connected to the email server outlook.office365.com.
2024-09-04 10:29:38 - ERROR - LOGIN failed on attempt 1 of 30 (Retry in 5 seconds). Error: b'LOGIN failed.'
2024-09-04 10:29:43 - INFO - Connected to the email server outlook.office365.com.
2024-09-04 10:29:47 - ERROR - LOGIN failed on attempt 2 of 30 (Retry in 5 seconds). Error: b'LOGIN failed.'
2024-09-04 10:29:52 - INFO - Connected to the email server outlook.office365.com.
2024-09-04 10:29:54 - INFO - Logged into the email account s041978@hotmail.com.
2024-09-04 10:29:54 - INFO - Mailbox inbox folder selected.
</code></pre>
<p>Is there a correct interpretation of why validating email credentials to log in with same account and password appears to be inconsistent by using imaplib.IMAP4_SSL?</p>
|
<python><outlook><imaplib>
|
2024-09-04 02:43:42
| 2
| 884
|
Shelton Liu
|
78,946,545
| 736,662
|
Getting nested JSON value using a for-loop
|
<p>I make the use of a function in my Python script to get a certain value. The value I want is the so-called 'parentComponentId' using the condition 'name' equal to 'Vinje'.
Here is the function:</p>
<pre><code>def get_parentcomponentid(data, name):
return next(
element["parentComponentId"]
for element in data
for x in element["meta"]
if x["name"] == name)
</code></pre>
<p>The response I want to traverse is as follows:</p>
<pre><code> [
{
'powerPlant': {
'name': 'Nore 2',
'componentId': 7304,
'spotlessId': 11,
'timeSeries': None,
'units': [
{
'generatorName': 'Nore 2 G2',
'xyTimeSeries': None,
'componentId': 73042,
'timeSeries': None
},
{
'generatorName': 'Nore 2 G1',
'xyTimeSeries': None,
'componentId': 73041,
'timeSeries': None
}
]
},
'meta': {
'parentComponentId': 10024,
'technologyType': 'Hydro',
'regionId': 10,
'priceArea': 1
}
},
{
'powerPlant': {
'name': 'Vinje',
'componentId': 7814,
'spotlessId': 19,
'timeSeries': None,
'units': [
{
'generatorName': 'Vinje G1',
'xyTimeSeries': None,
'componentId': 78141,
'timeSeries': None
},
{
'generatorName': 'Vinje G3',
'xyTimeSeries': None,
'componentId': 78143,
'timeSeries': None
},
{
'generatorName': 'Vinje G2',
'xyTimeSeries': None,
'componentId': 78142,
'timeSeries': None
}
]
},
'meta': {
'parentComponentId': 10034,
'technologyType': 'Hydro',
'regionId': 10,
'priceArea': 2
}
}
]
</code></pre>
<p>However, using this set-up I get the error:</p>
<pre><code> return next(
element["parentComponentId"]
for element in data
for x in element["meta"]
> if x["name"] == name)
E TypeError: string indices must be integers
</code></pre>
<p>Is it the structure of the function that expects an integer?</p>
|
<python>
|
2024-09-04 01:45:24
| 1
| 1,003
|
Magnus Jensen
|
78,946,519
| 1,862,823
|
How to convert query string parameters from Datatables.js, like columns[0][name] into an object in Python/Django?
|
<p>I'm using DataTables.js and trying to hook up server-side processing. I'm using Django on the server.</p>
<p>Currently, the data to Django looks like:</p>
<pre><code>{'draw': '1',
'columns[0][data]': '0',
'columns[0][name]': 'Brand',
'columns[0][searchable]': 'true',
'columns[0][orderable]': 'true',
'columns[0][search][value]': '',
'columns[0][search][regex]': 'false',
'columns[1][data]': '1',
'columns[1][name]': 'Sku',
'columns[1][searchable]': 'true',
'columns[1][orderable]': 'true',
'columns[1][search][value]': '',
'columns[1][search][regex]': 'false',
'columns[2][data]': '2',
'columns[2][name]': 'Name',
'columns[2][searchable]': 'true',
'columns[2][orderable]': 'true',
'columns[2][search][value]': '',
'columns[2][search][regex]': 'false',
'order[0][column]': '0',
'order[0][dir]': 'asc',
'order[0][name]': 'Brand',
'start': '0',
'length': '10',
'search[value]': '',
'search[regex]': 'false',
'_': '1725412765180'}
</code></pre>
<p>(as a dictionary)</p>
<p>However, there's a variable number of columns and order values that might come through. So I'd like to convert all of this into a few key variables:</p>
<ol>
<li>start</li>
<li>length</li>
<li>search value</li>
<li>search regex</li>
<li>draw</li>
<li>array/list of column objects</li>
<li>array/list of order objects</li>
</ol>
<p>But I don't know a lot of python</p>
|
<python><django><dictionary><datatables>
|
2024-09-04 01:28:49
| 1
| 2,353
|
NeomerArcana
|
78,946,450
| 11,638,153
|
Python how to fit value, weight using different distributions
|
<p>I have tabular data in form of <code>values, weight</code> and would like to fit different distributions like normal, lognormal, etc. and get sum squared error of fitting. I tried normal distribution so far using <code>scipy.stats.norm.fit()</code> but it does not give any indication of error. Is there built-in way in python to fit weighted data and get sum squared error? Looks like <code>fitter</code> library can fit different distributions and get an error indication, but it doesn't work on weighted distributions.</p>
<p>My data looks like</p>
<pre><code>value,weight
11.12,0.001
12.112,0.05
...
</code></pre>
<p>For example, value 11.12 occurs 0.1% in the total data.</p>
|
<python><pandas><numpy><scipy>
|
2024-09-04 00:40:16
| 1
| 441
|
ewr3243
|
78,946,447
| 391,161
|
Is it possible to change the index URL for fetching `rules_python` itself in bazel?
|
<p>I am currently attempting to do a custom build of <code>envoy</code> on a machine that does not have access to PyPi. My company's security team requires us to use a corporate proxy with a different URL to access the PyPi repos.</p>
<p>When I try to run <code>bazel build ...</code>, I get the following error:</p>
<pre><code>ERROR: An error occurred during the fetch of repository 'pypi__pip_tools':
Traceback (most recent call last):
File "/home/ubuntu/.cache/bazel/_bazel_ubuntu/b4e0fd0e207e6fdf5e33997b6741cf2d/external/bazel_tools/tools/build_defs/repo/http.bzl", line 132, column 45, in _http_archive_impl
download_info = ctx.download_and_extract(
Error in download_and_extract: java.io.IOException: Error downloading [https://files.pythonhosted.org/packages/0d/dc/38f4ce065e92c66f058ea7a368a9c5de4e702272b479c0992059f7693941/pip_tools-7.4.1-py3-none-any.whl] to /home/ubuntu/.cache/bazel/_bazel_ubuntu/b4e0fd0e207e6fdf5e33997b6741cf2d/external/pypi__pip_tools/temp12252431299166532778/pip_tools-7.4.1-py3-none-any.whl.zip: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
</code></pre>
<p>The security error is a red herring in this case because I need to replace <code>https://files.pythonhosted.org</code> with a different domain.</p>
<p>For command line <code>pip</code>, I had to do the following:</p>
<pre><code>python3 -m pip config set global.index-url <new_url_here>
</code></pre>
<p><strong>Is there some equivalent way to force <code>bazel</code> to use a different domain when fetching Python tools?</strong></p>
<p>Note that I have already seen <a href="https://github.com/bazelbuild/rules_python/issues/503" rel="nofollow noreferrer">this issue</a>, but it does not help because this error is happening as part of the process of installing pip itself, before any <code>pip_install</code> call is made.</p>
|
<python><bazel><pypi>
|
2024-09-04 00:39:09
| 1
| 76,345
|
merlin2011
|
78,946,446
| 484,944
|
Forward-over-reverse mode Hessian-vector product in Jax: how smart is jax.jvp at re-using computations?
|
<p>How smart is <a href="https://jax.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">Jax</a> at re-using intermediate computations when computing Hessian-vector products via forward-over-reverse mode automatic differentiation via <a href="https://jax.readthedocs.io/en/latest/_autosummary/jax.jvp.html" rel="nofollow noreferrer">jax.jvp</a> over <a href="https://jax.readthedocs.io/en/latest/_autosummary/jax.grad.html" rel="nofollow noreferrer">jax.grad</a>?</p>
<p>For example, something like this:</p>
<pre><code>def objective(x, other):
...expensive function, returns a scalar...
gradient = jax.grad(objective, argnums=0)
@jax.jit
def hessian_vector_product(p, x, other): # computes H(x) @ p
g, Hp = jax.jvp(lambda z: gradient(z, other), (x,), (p,))
return Hp
</code></pre>
<p>Now we call</p>
<pre><code>hessian_vector_product(p1, x, other)
hessian_vector_product(p2, x, other)
hessian_vector_product(p3, x, other)
...
</code></pre>
<p>with different p1, p2, p3, ..., but the same x, other.</p>
<p>A common scenario where this occurs is inexact Newton-CG optimization. In this case, many Hessian vector products are taken at the same point x, but for different directions p which are chosen by the conjugate gradient algorithm.</p>
<p>From a mathematical point of view, we know that the forward tangent variables (found when computing the objective), and adjoint tangent variables (found when computing the gradient), remain the same and can be re-used as long as x and other do not change. Does Jax figure that out, or are the forward and adjoint variables re-computed every time, thereby roughly doubling the cost?</p>
<p>If the code above unnecessarily re-computes intermediate variables, is there another, better way in Jax to do Hessian vector products? Preferably, using forward-over-reverse mode AD, because that is generally the recommended approach in books on AD (e.g., see chapter 3 in "The Art of Differentiating Computer Programs" by Uwe Naumann).</p>
|
<python><jax><automatic-differentiation>
|
2024-09-04 00:37:23
| 0
| 1,114
|
Nick Alger
|
78,946,237
| 825,227
|
Python groupby rank in two different directions
|
<p>I have a dataframe, <code>d</code>:</p>
<pre><code> Position Operation Side Price Size
9 9 0 1 0.7289 -16
8 8 0 1 0.729 -427
7 7 0 1 0.7291 -267
6 6 0 1 0.7292 -15
5 5 0 1 0.7293 -16
4 4 0 1 0.7294 -16
3 3 0 1 0.7295 -426
2 2 0 1 0.7296 -8
1 1 0 1 0.7297 -14
0 0 0 1 0.7298 -37
10 0 0 0 0.7299 6
11 1 0 0 0.73 34
12 2 0 0 0.7301 7
13 3 0 0 0.7302 9
14 4 0 0 0.7303 16
15 5 0 0 0.7304 15
16 6 0 0 0.7305 429
17 7 0 0 0.7306 16
18 8 0 0 0.7307 265
19 9 0 0 0.7308 18
</code></pre>
<p>Using the below for updates to <code>d</code> to recalculate <code>Position</code>:</p>
<p><code>d['Position'] = d.groupby('Side')['Price'].rank().astype('int').sub(1)</code></p>
<p>But as the order of the sort is different for each <code>Side</code> grouping, is there a way to sort <code>ascending</code> for one group and <code>descending</code> for another?</p>
|
<python><pandas><dataframe><group-by>
|
2024-09-03 22:17:34
| 3
| 1,702
|
Chris
|
78,946,135
| 127,682
|
Convert a list of strings to conversion functions
|
<p>Currently I can create a list of conversion functions like the following:</p>
<pre class="lang-py prettyprint-override"><code>casts = [float, float, int, str, int, str, str]
</code></pre>
<p>but I would like to do it in the following manner:</p>
<pre class="lang-py prettyprint-override"><code>casts = input("Enter the casts: ") # this would be str str int int float
</code></pre>
<p>I tried just doing <code>cast.split()</code>
but that only outputs a list of strings and not the conversion functions.
How would I go about achieving my objective?</p>
|
<python><list>
|
2024-09-03 21:33:04
| 3
| 465
|
capnhud
|
78,946,027
| 986,612
|
Change console icon on taskbar
|
<p>I don't want to change the shortcut icon:</p>
<p><a href="https://stackoverflow.com/questions/16782047/how-to-add-an-icon-of-my-own-to-a-python-program">How To Add An Icon Of My Own To A Python Program</a></p>
<p>I want to change the console (cmd) icon that runs a python script:</p>
<p><a href="https://stackoverflow.com/questions/2986853/is-there-a-way-to-change-the-console-icon-at-runtime">Is there a way to change the console Icon at runtime</a></p>
<p><a href="https://stackoverflow.com/questions/71823553/how-do-you-change-the-windows-console-icon-in-python-code">How do you change the windows console icon in python code?</a></p>
<p>However, it requires the window handle of the console:</p>
<p><a href="https://stackoverflow.com/questions/71729157/getting-the-instance-handle-of-a-window-in-python-ctypes">Getting the instance handle of a window in python (ctypes)</a></p>
<p><a href="https://stackoverflow.com/questions/66262195/get-position-of-console-window-in-python">Get position of console window in python</a></p>
<p>but the returned handle is for some small PseudoConsoleWindow rather than the real one.</p>
<hr />
<p>You can experiment with</p>
<pre><code>import win32api
import win32console
import win32gui, win32con, win32process
import psutil
img = r'c:\prog\icons\timer.ico'
def get_hwnds_for_pid( pid ):
def callback( hwnd, hwnds ):
found_pid = -1
if win32gui.IsWindowVisible(hwnd) and win32gui.IsWindowEnabled(hwnd):
_, found_pid = win32process.GetWindowThreadProcessId(hwnd)
if found_pid == pid:
hwnds.append( hwnd )
return True
hwnds = []
win32gui.EnumWindows(callback, hwnds)
return hwnds
if 0:
pid = psutil.Process().pid
print( 'pid', pid )
pid = psutil.Process().ppid()
print( 'ppid', pid )
hwnds = get_hwnds_for_pid( pid )
hwnd = hwnds[0]
else:
hwnd = win32console.GetConsoleWindow()
print( hex(hwnd) )
win32console.SetConsoleTitle( 'ha1' ) # works as long as the script is running
win32gui.SetWindowText( hwnd, 'ha2' ) # doesn't work
icon_flags = win32con.LR_LOADFROMFILE | win32con.LR_DEFAULTSIZE
hicon = win32gui.LoadImage( None, img, win32con.IMAGE_ICON, 0, 0, icon_flags )
# doesn't work
win32gui.SendMessage( hwnd, win32con.WM_SETICON, win32con.ICON_SMALL, hicon )
win32gui.SendMessage( hwnd, win32con.WM_SETICON, win32con.ICON_BIG, hicon )
input()
</code></pre>
<p>Find the window with spyxx.exe.</p>
|
<python><windows>
|
2024-09-03 20:51:44
| 1
| 779
|
Zohar Levi
|
78,945,969
| 610,569
|
How to set max_memory pool of pyarrow to just use max available on the instance?
|
<p>I've a machine with 80GB RAM but whenever it does the <a href="https://arrow.apache.org/docs/python/generated/pyarrow.concat_tables.html" rel="nofollow noreferrer">pa.concat_tables</a> operation, it goes out of memory.</p>
<p>I've tried doing the following to set the default memory pool but it's showing the <code>total_allocated_bytes</code> as 0:</p>
<pre><code>>>> import pyarrow as pa
>>> pa.set_memory_pool(pa.system_memory_pool())
>>> pa.total_allocated_bytes()
0
</code></pre>
<p>How to set max_memory pool of pyarrow to just use max available on the instance?</p>
<hr />
<p>I've also tried this and it seems to work and <code>pa.total_allocated_bytes()</code> is no longer zero but the docs are unclear</p>
<pre><code>import pyarrow as pa
pa.allocate_buffer(1024 * 1024 * 80, resizable=False) # Does this mean 80GB?
</code></pre>
|
<python><pyarrow><memory-pool>
|
2024-09-03 20:34:07
| 0
| 123,325
|
alvas
|
78,945,955
| 13,557,319
|
Spotify API 403 Forbidden Error When Adding Tracks to Playlist Despite Correct Token and Scopes
|
<p>I'm experiencing a 403 Forbidden error when trying to add a track to a Spotify playlist using the Spotify Web API. Despite having a correctly configured token and permissions, Iβm still facing this issue.</p>
<p>Details:</p>
<ul>
<li>Spotify Client ID: My client ID</li>
<li>Spotify Client Secret: Client Secrete</li>
<li>Redirect URI: <a href="http://127.0.0.1:8000/callback" rel="nofollow noreferrer">http://127.0.0.1:8000/callback</a></li>
</ul>
<p><strong>Access Token Details:</strong></p>
<pre><code>{
"access_token": "my access token",
"token_type": "Bearer",
"expires_in": 3600,
"refresh_token": "my refresh token",
"scope": "playlist-modify-private playlist-modify-public playlist-read-private playlist-read-collaborative",
"expires_at": 1725396428
}
</code></pre>
<p><strong>Error Log:</strong></p>
<pre><code>INFO 2024-09-03 20:16:02,097 cron Distributing 1 playlists to job scheduled at 2024-09-03 20:47:02.097874 for Order ID 28.
INFO 2024-09-03 20:16:02,100 cron Adding track 0TT2Tzi8mEETCqYZ1ffiHh to playlist 4CeTjVTCZOFzTBdO8yaLvG
ERROR 2024-09-03 20:16:02,557 cron Spotify API error: https://api.spotify.com/v1/playlists/4CeTjVTCZOFzTBdO8yaLvG/tracks:
Check settings on developer.spotify.com/dashboard, the user may not be registered.
ERROR 2024-09-03 20:16:02,558 cron Check if the user 31rizduwj674dch67g22bjqy7sue is correctly registered and has permissions to modify the playlist.
</code></pre>
<p><em><strong>What Iβve Tried</strong></em></p>
<p><strong>1- Verified Token Scopes:</strong>
The token includes scopes <code>playlist-modify-private playlist-modify-public playlist-read-private playlist-read-collaborative</code>.</p>
<p><strong>2- Checked Token Validity:</strong>
The token is valid and has not expired. I also attempted to refresh it.</p>
<p><strong>3- Confirmed Playlist Ownership:</strong>
Ensured that the playlist is either owned by or shared with the user whose token is being used.</p>
<p><strong>4- Re-authenticated:</strong>
Re-authenticated and re-authorized the application to confirm there are no issues with authentication.</p>
<p><strong>5- API Endpoint & Request Format:</strong>
Verified that the API endpoint and request format conform to the Spotify API documentation.</p>
<p><strong>6- User Permissions:</strong>
Confirmed that the user has the necessary permissions to modify the playlist.</p>
<p><strong>Questions</strong></p>
<p>1- Is there any additional configuration or setting I might be missing on the Spotify Developer Dashboard?</p>
<p>2- Could the issue be related to specific permissions or configurations for the playlist or user account?</p>
<p>3- Are there other common issues or troubleshooting steps for resolving 403 Forbidden errors with the Spotify API?</p>
<p>Environment Details:</p>
<ul>
<li>Django latest</li>
<li>Spotipy for Spotify OAuth</li>
<li>Python 3.12</li>
<li>Ubuntu 22.04</li>
</ul>
<p><strong>Request sample - Curl:</strong></p>
<pre><code>curl --request POST \
--url 'https://api.spotify.com/v1/playlists/6QoiIukPdZNHILKt4S7SDF/tracks?uris=spotify%3Atrack%3A0TT2Tzi8mEETCqYZ1ffiHh' \
--header 'Authorization: Bearer 1POdFZRZbvb...qqillRxMr2z' \
--header 'Content-Type: application/json' \
--data '{
"uris": [
"spotify:track:0TT2Tzi8mEETCqYZ1ffiHh"
],
"position": 0
}'
</code></pre>
<p>Any assistance or suggestions to resolve this issue would be greatly appreciated.</p>
|
<python><django><oauth-2.0><cron><spotipy>
|
2024-09-03 20:26:45
| 0
| 342
|
Linear Data Structure
|
78,945,806
| 4,748,483
|
Creating a standalone python executable with pywin32 and missed required module
|
<p><br/>I'm going to create a standalone python executable that rely on a python interpreter on windows (I don't want to create an exe file).
<br/>So, I'm used this helpful article:
<br/><a href="https://n8henrie.com/2022/08/easily-create-almost-standalone-python-executables-with-the-builtin-zipapp-module/#google_vignette" rel="nofollow noreferrer">https://n8henrie.com/2022/08/easily-create-almost-standalone-python-executables-with-the-builtin-zipapp-module/#google_vignette</a>
<br/>My script python uses <strong>pywin32</strong> for converting word to pdf,
<br/>After running this command:
<br/><code>> python -m pip install -t myapp pywin32</code>
<br/>I tried to test working of the standalone script on <strong>myapp</strong> directory. so I ran:
<br/><code>> python .\myapp\main.py</code>
<br/>And I get this error:
<br/><code>Traceback (most recent call last): File "D:\Parsa Projects\Python-PDF\myapp\main.py", line 3, in <module> import win32com.client File "D:\Parsa Projects\Python-PDF\myapp\win32com\__init__.py", line 8, in <module> import pythoncom File "D:\Parsa Projects\Python-PDF\myapp\pythoncom.py", line 2, in <module> import pywintypes ModuleNotFoundError: No module named 'pywintypes'</code></p>
<p><br/>I searched about the problem and I found installing <strong>pypiwin32</strong> solves the problem, so I Installed it on the <strong>myapp</strong> directory, but it not solved the problem.
<br/>Can anyone help me?
<br/>What I can do?
<br/>This is my python code:</p>
<pre><code>import argparse
import pathlib
import win32com.client
import os
def convert_word_to_pdf(word_file_path, pdf_file_path):
if not os.path.exists(word_file_path):
raise FileNotFoundError(f"The file {word_file_path} does not exist.")
word = win32com.client.Dispatch('Word.Application')
word.Visible = False
try:
doc = word.Documents.Open(word_file_path)
doc.SaveAs(pdf_file_path, FileFormat=17)
doc.Close()
except Exception as e:
print(f"An error occurred: {e}")
finally:
word.Quit()
def main():
filepath = os.path.abspath('D:\\Parsa Projects\\Python-PDF\\file-sample_100kB.docx')
print(filepath)
# pdf_path = pathlib.Path('./file-sample_100kB.pdf').absolute()
pdf_path = os.path.abspath('D:\\Parsa Projects\\Python-PDF\\file-sample_100kB.pdf')
convert_word_to_pdf(filepath, pdf_path)
# parser = argparse.ArgumentParser("Arguments")
# if os.environ.get("MY_ENV_VAR") is not None:
# parser.add_argument("--my-env-var", required=True, action="store_true")
# if os.environ.get("MY_ENV_VAR2") is not None:
# parser.add_argument("--my-env-var2", required=True, action="store_true")
# parser.parse_args()
if __name__ == "__main__":
# main()
# exit()
# doc_path = pathlib.Path('./file-sample_100kB.docx').absolute()
filepath = os.path.abspath('D:\\Parsa Projects\\Python-PDF\\file-sample_100kB.docx')
print(filepath)
# pdf_path = pathlib.Path('./file-sample_100kB.pdf').absolute()
pdf_path = os.path.abspath('D:\\Parsa Projects\\Python-PDF\\file-sample_100kB.pdf')
convert_word_to_pdf(filepath, pdf_path)
</code></pre>
<p><br/>For making the standalone python package, On this my directory <code>D:\Parsa Projects\Python-PDF</code> I opened cmd and ran:</p>
<blockquote>
<p><br/>> mkdir myapp
<br/>> xcopy /f /y main.py .\myapp\main.py
<br/>> python -m pip install -t myapp pywin32 pypiwin32
<br/>> python -m zipapp -p
"C:\Users\Parsa\AppData\Local\Programs\Python\Python312\python.exe" -m
main:main -c -o myapps myapp</p>
</blockquote>
|
<python><pywin32><python-standalone>
|
2024-09-03 19:29:38
| 1
| 1,463
|
Parsa Saei
|
78,945,659
| 2,893,712
|
Check if Series has Values in Range
|
<p>I have a Pandas dataframe that has user information and also has a column for their permissions:</p>
<pre><code>UserName Permissions
John Doe 02
John Doe 11
Example 09
Example 08
User3 11
</code></pre>
<p>I am trying to create a new column called <code>User Class</code> that is based on their Permissions (looking at all of the users permissions). If a user has all permissions <10, they are considered <code>Admin</code>. If a user has all permission >=10, they are considered <code>User</code>. However if they have permissions that are both <10 and >=10, then they will be coded as <code>Admin/User</code>. So my resulting output would be:</p>
<pre><code>UserName Permissions User Class
John Doe 02 Admin/User
John Doe 11 Admin/User
Example 09 Admin
Example 08 Admin
User3 11 User
</code></pre>
<p>What would be the best way to do this? My original idea was to do:</p>
<pre><code>for UserName, User_df in df.groupby(by='UserName'):
LT10 = (User_df['Permissions'] < 10).any()
GTE10 = (User_df['Permissions'] >= 10).any()
if (LT10 & GTE10):
UserClass = 'Admin/User'
elif LT10:
UserClass = 'Admin'
elif GTE10:
UserClass = 'User'
df.at[User_df.index, 'User Class'] = UserClass
</code></pre>
<p>However these seems very inefficient because <code>df</code> has ~800K records</p>
|
<python><pandas>
|
2024-09-03 18:32:34
| 5
| 8,806
|
Bijan
|
78,945,511
| 20,302,906
|
Mock instantiated class in class attribute
|
<p>I'm working on a project that uses a class called <code>Deck</code> to fetch data from an API. This class is instantiated inside another class called <code>Game Manager</code> through its <code>__init__</code> (it's a blackjack game btw) like this:</p>
<p><em>server/game_manager.py</em></p>
<pre><code>import .deck import Deck
class GameManager:
self._deck = Deck()
</code></pre>
<p><em>server/deck.py</em></p>
<pre><code>class Deck:
def fetch_cards(self):
'''Method that gets cards from api'''
pass
</code></pre>
<p>Right now I get an <code>attribute not found</code> error that I can't get around when mocking <code>Deck</code> class to avoid connecting to the API. My test goes in this way:</p>
<p><em>server/tests/game_manager_test.py</em></p>
<pre><code>import unittest
from unittest.mock import patch, Mock
from ..game_manager import *
gm = GameManager()
class TestCardDraw(unittest.TestCase):
@patch("server.game_manager.deck")
def test_draw_game_start_cards(self, deck_mock):
deck_mock = Mock()
deck_mock.fetch_cards.return_value = [7,2]
self.assertEqual(len(gm._game_state["player_one"]["cards"]), 2)
Attribute Error: module 'server.game_manager' does not the attribute 'deck'
</code></pre>
<p>I'd like to ask the community a couple of questions to understand what I'm doing wrong.</p>
<ol>
<li>Should I mock the imported <code>deck</code> module from <em>server/game_manager.py</em> or the module <code>deck</code> from the module <em>server/deck.py</em> itself?</li>
<li>What's the difference between doing one or the other?</li>
<li>I'm creating an instance of <code>GameManager</code> at the top of my test suites to test some methods from the same instance. I've changed a value of this instance in one of the test suites to get the result I wanted with <code>setUp</code>. Should I mock <code>fetch_cards</code> method for this instance I'm using for tests or should I keep my strategy of mocking for the specific tests I want to make. What's the difference between this two approaches?</li>
</ol>
<p>Lastly, I've followed the examples in the documentation as well as Stack Overflow answers all around but at this point I feel confused with understanding what to mock and how to mocking. It's always cool to get your answers in code but I'd like to see where are my knowledge gaps in my thought process.</p>
|
<python><unit-testing><mocking>
|
2024-09-03 17:42:44
| 1
| 367
|
wavesinaroom
|
78,945,487
| 2,726,900
|
How to read and write large amounts of data to Cassandra DB?
|
<p>I have a client application that connects to two or more Cassandra DB servers -- and has to copy some tables from one server to another.</p>
<p>What is the best way to copy big amounts of Cassandra data, especially using Python?</p>
|
<python><cassandra>
|
2024-09-03 17:35:07
| 1
| 3,669
|
Felix
|
78,945,462
| 1,700,890
|
Import module from subfolder - invalid syntax
|
<p>Here is my Python project folder structure.</p>
<pre><code>project\
main_code.py
code\
__init__.py
s_utils.py
data\
</code></pre>
<p>in <code>main_code.py</code> I tried:</p>
<pre><code>import os
os.chdir('absolute path to project folder')
from .code import s_utils
</code></pre>
<p>The last line returns error:</p>
<pre><code>ImportError: attempted relative import with no known parent package
</code></pre>
<p>What is wrong here? According to this <a href="https://stackoverflow.com/questions/8953844/import-module-from-subfolder">post</a> it should work.</p>
|
<python><import><subdirectory><relative-path>
|
2024-09-03 17:25:54
| 1
| 7,802
|
user1700890
|
78,945,437
| 2,386,113
|
How to arrange figures in a grid?
|
<p>I have six pairs of vertically stacked figures. I want to arrange the figures into a 3x2 grid. However, I don't find a way to do it. Since I will receive the required figures from an existing function (here as a dummy, it's <code>stacked_lineplots()</code>), the organization of the figures into the grid MUST be done in a separate function called <code>arrange_stacked_subfigures_into_3x2_grid(figures)</code>.</p>
<p><strong>MWE:</strong></p>
<pre><code>import matplotlib.pyplot as plt
def arrange_stacked_subfigures_into_3x2_grid(figures):
# HOW DO I DO THIS?
pass
def stacked_lineplots():
# Create a figure with two vertically stacked subplots
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
# Dummy data for the plots
x = [0, 1, 2, 3, 4]
y1 = [0, 1, 4, 9, 16]
y2 = [16, 9, 4, 1, 0]
# Plot the data on the first subplot
ax1.plot(x, y1, label='y = x^2')
ax2.plot(x, y2, label='y = 16 - x^2', color='orange')
# Adjust layout to prevent overlap
plt.tight_layout()
return fig
# To get the figure, simply call the function:
six_figures = []
for i in range(6):
stacked_sub_figure = stacked_lineplots()
six_figures.append(stacked_sub_figure)
# Now, you can arrange these figures into a 3x2 grid
grid_figure = arrange_stacked_subfigures_into_3x2_grid(six_figures)
print("Done")
</code></pre>
<p><em><strong>Sample Stacked SUB-figure:</strong></em></p>
<p><a href="https://i.sstatic.net/xFmwzKAim.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFmwzKAim.png" alt="enter image description here" /></a></p>
<p><strong>Required final Figure:</strong> A grid of 3 columns and 2 rows of the stacked pairs.</p>
<p><a href="https://i.sstatic.net/cwI4W4xg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwI4W4xg.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><figure>
|
2024-09-03 17:18:54
| 1
| 5,777
|
skm
|
78,945,394
| 4,746,081
|
Matplotlib imshow and dna_features_viewer: Align X axis
|
<p>I can't find a solution to align on the X-axis a matplotlib imshow with a <a href="https://edinburgh-genome-foundry.github.io/DnaFeaturesViewer/" rel="nofollow noreferrer">dna features viewer</a> plot.</p>
<p>The python3 code I used is:</p>
<pre class="lang-py prettyprint-override"><code>from dna_features_viewer import GraphicFeature, GraphicRecord
import matplotlib
from matplotlib import rcParams
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
rcParams["figure.figsize"] = 15, 12
fig, (ax0, ax1) = plt.subplots(2, 1, height_ratios=[5, 1], sharex=True)
# ax0
data_heatmap = np.random.rand(850, 850)
heatmap = ax0.imshow(data_heatmap, cmap="bone")
fig.colorbar(heatmap, ax=ax0)
ax0.set_xlabel("Scored Residue")
ax0.set_ylabel("Aligned Residue")
ax0.set_ylim(len(data_heatmap) + 1, 1)
# ax1
domains = pd.DataFrame({"domain": ["dom1", "dom2", "dom3"],
"start": [9, 516, 714],
"end": [459, 689, 850],
"color": ["#0000b6", "#02eded", "#ab0000"]})
features = []
for _, row in domains.iterrows():
features.append(GraphicFeature(start=row["start"], end=row["end"], strand=+1, color=row["color"],
label=row["domain"]))
record = GraphicRecord(sequence_length=row["end"] + 1, features=features, plots_indexing="genbank")
record.plot(ax=ax1)
plt.tight_layout()
plt.show()
</code></pre>
<p>Which produces the following plot:</p>
<p><a href="https://i.sstatic.net/eWVr0QvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eWVr0QvI.png" alt="enter image description here" /></a></p>
<p>But I would like the X axis of ax0 and ax1 to be on the same physical scale (same length).</p>
<p><a href="https://i.sstatic.net/yrDEO6N0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrDEO6N0.png" alt="enter image description here" /></a></p>
<p>Any idea?</p>
|
<python><matplotlib>
|
2024-09-03 17:07:55
| 1
| 341
|
Mesmer
|
78,945,294
| 7,519,700
|
Elasticsearch search query returning "empty" response using elasticsearch python API
|
<p>I am using <a href="https://github.com/elastic/elasticsearch-py" rel="nofollow noreferrer">elasticsearch py client</a> in my real time application.</p>
<p>The application performs search query aggregations like "sum of field settledAmount in last 10 days".</p>
<p>Most of queries work without problems. However a tiny percent fail silently returning the following.</p>
<pre><code>{
"took":0,
"timed_out":false,
"_shards":{
"total":0,
"successful":0,
"skipped":0,
"failed":0
},
"hits":{
"total":{
"value":0,
"relation":"eq"
},
"max_score":0.0,
"hits":[
]
}
}
</code></pre>
<p>When the same query is repeated the response is correct containing the aggregations field and the expected value.</p>
<pre><code>{
"took" : 191,
"timed_out" : false,
"_shards" : {
"total" : 756,
"successful" : 756,
"skipped" : 632,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 0,
"relation" : "eq"
},
"max_score" : null,
"hits" : [ ]
},
"aggregations" : {
"total_settled_amount" : {
"value" : 0.0
},
"unique_customer_count" : {
"value" : 0
}
}
}
</code></pre>
<p>I added some retries and it seems to be related with a load problem, since retrying the query without a significant delay (1s) will end in the exact same wrong result.</p>
<p>This is a query example</p>
<pre><code>GET cases-*/_search
{
"from": 0,
"size": 1000,
"query": {
"bool": {
"filter": [
{
"range": {
"transactionAnnouncedEvent.transaction.processedAt": {
"from": "2024-08-29T00:00:00.000Z",
"to": "2024-08-29T13:46:49.000Z",
"include_lower": true,
"include_upper": true,
"boost": 1
}
}
},
{
"bool": {
"should": [
{
"term": {
"field1": {
"value": "abc",
"boost": 1
}
}
}
],
"adjust_pure_negative": true,
"boost": 1
}
}
],
"adjust_pure_negative": true,
"boost": 1
}
},
"_source": {
"includes": [
"caseId"
],
"excludes": []
},
"aggs": {
"total_settled_amount": {
"sum": {
"field": "settledAmount"
}
}
}
}
</code></pre>
<p>What might be the reason of elasticsearch returning such "empty response" instead of a "429 status code?"</p>
|
<python><elasticsearch>
|
2024-09-03 16:34:38
| 1
| 1,033
|
room13
|
78,945,268
| 1,088,979
|
Efficient Conversion of Timezone-Aware Timestamps to datetime64[m] in Pandas
|
<p>I have the following code that creates a DataFrame representing the data I have in my system:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = {
"date": [
"2021-03-12 19:50:00-05:00", "2021-03-12 19:51:00-05:00", "2021-03-12 19:52:00-05:00",
"2021-03-12 19:53:00-05:00", "2021-03-12 19:54:00-05:00", "2021-03-12 19:55:00-05:00",
"2021-03-12 19:56:00-05:00", "2021-03-12 19:57:00-05:00", "2021-03-12 19:58:00-05:00",
"2021-03-12 19:59:00-05:00", "2021-03-15 04:00:00-04:00", "2021-03-15 04:01:00-04:00",
"2021-03-15 04:02:00-04:00", "2021-03-15 04:03:00-04:00", "2021-03-15 04:04:00-04:00",
"2021-03-15 04:05:00-04:00", "2021-03-15 04:06:00-04:00", "2021-03-15 04:07:00-04:00",
"2021-03-15 04:08:00-04:00", "2021-03-15 04:09:00-04:00"
],
"open": [81.15, 81.14, 81.15, 81.15, 81.15, 81.17, 81.19, 81.19, 81.20, 81.23, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05],
"high": [81.15, 81.14, 81.15, 81.15, 81.17, 81.17, 81.19, 81.19, 81.20, 81.23, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05],
"low": [81.14, 81.14, 81.14, 81.15, 81.15, 81.17, 81.19, 81.19, 81.20, 81.23, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05],
"close": [81.14, 81.14, 81.15, 81.15, 81.17, 81.17, 81.19, 81.19, 81.20, 81.23, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05, 81.05],
"volume": [300.0, 100.0, 1684.0, 0.0, 1680.0, 150.0, 448.0, 0.0, 1500.0, 380.0, 162.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
}
df = pd.DataFrame(data)
print(df.info())
</code></pre>
<p>The output is:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 20 entries, 0 to 19
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 20 non-null object
1 open 20 non-null float64
2 high 20 non-null float64
3 low 20 non-null float64
4 close 20 non-null float64
5 volume 20 non-null float64
dtypes: float64(5), object(1)
memory usage: 1.1+ KB
</code></pre>
<p>The data type of the <code>date</code> column is <code>object</code> - it is timezone aware timestamp.</p>
<p>The timestamps contain timezone information that I need to remove then convert the <code>date</code> column to <code>datetime64[m]</code> (minute precision), but after applying the following conversion code:</p>
<pre class="lang-py prettyprint-override"><code>df['date'] = df['date'].apply(lambda ts: pd.Timestamp(ts).tz_localize(None).to_numpy().astype('datetime64[m]'))
print(df.info())
</code></pre>
<p>The output shows that the <code>date</code> column has a data type of <code>datetime64[ns]</code> instead of <code>datetime64[m]</code>:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 20 entries, 0 to 19
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 20 non-null datetime64[ns]
1 open 20 non-null float64
2 high 20 non-null float64
3 low 20 non-null float64
4 close 20 non-null float64
5 volume 20 non-null float64
dtypes: datetime64 , float64(5)
memory usage: 1.1 KB
</code></pre>
<p>How can I correctly convert the <code>date</code> column with timezone data to <code>datetime64[m]</code> in the most memory-efficient way?</p>
|
<python><pandas><datetime><time-series>
|
2024-09-03 16:25:13
| 1
| 9,584
|
Allan Xu
|
78,945,163
| 1,632,519
|
argparse parse arbitary number groups of arguments
|
<p>I have this snippet for illustrative purpose that doesn't work at all.</p>
<pre><code>#!/usr/bin/env python3
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--action", nargs="+")
parser.add_argument("--number", nargs="+", default=1)
parser.add_argument("--some-parameter-to-action", nargs="+", required=True)
args = parser.parse_args()
</code></pre>
<p>I'm trying to parse an arbitrary number of <code>action</code> parameters, each <code>action</code> should have an associated <code>number</code> and <code>some-parameter-to-action</code>, potentially with a default value. Passing another <code>action</code> should create a new one and the following <code>--number</code> and <code>--some-parameter-to-action</code> flags should be associated with the <code>action</code> that preceeded it.</p>
<p>Examples, I want to be able to pass all or a subset of these in a single invocation of the script:</p>
<pre><code>--action say -n 3 --some-parameter-to-action foo
--action bla --some-parameter-to-action no
--action nothing # error
</code></pre>
<p>Is something like this possible in python?</p>
|
<python><argparse>
|
2024-09-03 15:52:28
| 1
| 2,126
|
Philippe
|
78,945,099
| 5,904,690
|
Reading lines in certain interval from a file in python: alternatives to readline()
|
<p>I realize that how to read lines from a file in interval [start, stop] is a common question, however many of the standard answers don't work well for my data set.</p>
<p>Specifically, I have data files with 500K lines and 100K columns. Each block of 50 rows is a separate data set which I need to read as a block, analyze, and then move on to the next block. Using readlines() to create a data object which I can sample in increments of 50 will not work, because the data objects take up too much memory.</p>
<p>I thought that something like the following would work (for the example below, I created a test file with 150 lines (3 replicates of 50). "myfunction()" is just a placeholder for the processing of each line)</p>
<pre><code>infile = open("test_file", "r")
outfile = open("out_test_file", "w")
for rep in range(0:3):
to_sample = list(range(rep*50, rep*50+50))
i = 0
for line in infile:
if i in to_sample:
something_useful = my_function(line)
i=i+1
outfile.write(str(something_useful))
outfile.close()
</code></pre>
<p>The script gets me through the first iteration of 50, but then cannot proceed, presumably because the</p>
<pre><code>for line infile
</code></pre>
<p>loop doesn't start at the beginning of the file during the next iteration of rep, since it had already read the last line of the infile.</p>
<p>As I stated, if the data files were of manageable size I could just use readlines and then sample the matrix in the desired intervals using the loop over rep and line number, but this isn't feasible for this dataset. What would be an efficient alternative?</p>
|
<python>
|
2024-09-03 15:38:28
| 2
| 789
|
Max
|
78,944,749
| 2,287,458
|
Explode Polars rows on multiple columns but with different logic
|
<p>I have this code, which splits a <code>product</code> column into a list, and then uses <code>explode</code> to expand it:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import datetime as dt
from dateutil.relativedelta import relativedelta
def get_3_month_splits(product: str) -> list[str]:
front, start_dt, total_m = product.rsplit('.', 2)
start_dt = dt.datetime.strptime(start_dt, '%Y%m')
total_m = int(total_m)
return [f'{front}.{(start_dt+relativedelta(months=m)).strftime("%Y%m")}.3' for m in range(0, total_m, 3)]
df = pl.DataFrame({
'product': ['CHECK.GB.202403.12', 'CHECK.DE.202506.6', 'CASH.US.202509.12'],
'qty': [10, -20, 50],
'price_paid': [1400, -3300, 900],
})
print(df.with_columns(pl.col('product').map_elements(get_3_month_splits, return_dtype=pl.List(str))).explode('product'))
</code></pre>
<p>This currently gives</p>
<pre><code>shape: (10, 3)
βββββββββββββββββββββ¬ββββββ¬βββββββββββββ
β product β qty β price_paid β
β --- β --- β --- β
β str β i64 β i64 β
βββββββββββββββββββββͺββββββͺβββββββββββββ‘
β CHECK.GB.202403.3 β 10 β 1400 β
β CHECK.GB.202406.3 β 10 β 1400 β
β CHECK.GB.202409.3 β 10 β 1400 β
β CHECK.GB.202412.3 β 10 β 1400 β
β CHECK.DE.202506.3 β -20 β -3300 β
β CHECK.DE.202509.3 β -20 β -3300 β
β CASH.US.202509.3 β 50 β 900 β
β CASH.US.202512.3 β 50 β 900 β
β CASH.US.202603.3 β 50 β 900 β
β CASH.US.202606.3 β 50 β 900 β
βββββββββββββββββββββ΄ββββββ΄βββββββββββββ
</code></pre>
<p>However, I want to keep the total <code>price paid</code> the same. So after splitting the rows into several "sub categories", I want to change the table to this:</p>
<pre><code>shape: (10, 3)
βββββββββββββββββββββ¬ββββββ¬βββββββββββββ
β product β qty β price_paid β
β --- β --- β --- β
β str β i64 β i64 β
βββββββββββββββββββββͺββββββͺβββββββββββββ‘
β CHECK.GB.202403.3 β 10 β 1400 β
β CHECK.GB.202406.3 β 10 β 0 β
β CHECK.GB.202409.3 β 10 β 0 β
β CHECK.GB.202412.3 β 10 β 0 β
β CHECK.DE.202506.3 β -20 β -3300 β
β CHECK.DE.202509.3 β -20 β 0 β
β CASH.US.202509.3 β 50 β 900 β
β CASH.US.202512.3 β 50 β 0 β
β CASH.US.202603.3 β 50 β 0 β
β CASH.US.202606.3 β 50 β 0 β
βββββββββββββββββββββ΄ββββββ΄βββββββββββββ
</code></pre>
<p>i.e. only keeping the <code>price_paid</code> in the first expanded row. So my total price paid remains the same. The <code>qty</code> is okay to stay the way it is.</p>
<p>I tried e.g. <code>with_columns(price_arr=pl.col('product').cast(pl.List(pl.Float64)))</code> but was then unable to add anything to first element of the list. Or <code>with_columns(price_arr=pl.col(['product', 'price_paid']).map_elements(price_func))</code> but it did not seem possible to use <code>map_elements</code> on <code>pl.col([...])</code>.</p>
|
<python><dataframe><python-polars>
|
2024-09-03 14:07:15
| 2
| 3,591
|
Phil-ZXX
|
78,944,702
| 39,590
|
Convert log(x)/log(2) to log_2(x) in sympy
|
<p>I received a sympy equation from a library. It makes extensive use of log2, but the output was converted to <code>log(x)/log(2)</code>. This makes reading the results messy.</p>
<p>I would like to have sympy simplify this equation again with a focus on using log2 directly where possible.</p>
<p>How could this be done?</p>
<p>Example:</p>
<pre><code>(log(A) / log(2)) * (log(B + C) / log(2))
</code></pre>
|
<python><sympy>
|
2024-09-03 13:52:27
| 1
| 33,018
|
mafu
|
78,944,602
| 1,028,133
|
How can I override the default behavior of `list(MyEnum)`?
|
<p>I have a custom <code>enum</code>, <code>MyEnum</code>, with some elements that have different names but the same value.</p>
<pre><code>from enum import Enum
class MyEnum(Enum):
A = 1
B = 2
C = 3
D = 1 # Same value as A
</code></pre>
<p>Consequently, <code>list(MyEnum)</code> returns only the names of some of the members (the first name for each value):</p>
<pre><code>>>>list(MyEnum)
[<MyEnum.A: 1>, <MyEnum.B: 2>, <MyEnum.C: 3>]
</code></pre>
<p>Apparently, <code>list(MyEnum.__members__)</code> returns all the names:</p>
<pre><code>>>>list(MyEnum.__members__)
['A', 'B', 'C', 'D']
</code></pre>
<p>However, if I try to override the <code>__iter__()</code> method for my enum, the override seems to fail:</p>
<pre><code>class MyEnum(Enum):
A = 1
B = 2
C = 3
D = 1 # Same value as A
@classmethod # an attempt to override list(MyEnum) that doesn't change anything
def __iter__(cls):
return iter(list(cls.__members__))
</code></pre>
<p>Apparently <code>list(MyEnum)</code> doesn't ever hit the custom <code>__iter__()</code> (as indicated by, say, adding a <code>print()</code> before returning in our custom <code>__iter__()</code>).</p>
<p>Why is that?</p>
<p>How can I override the default behavior of <code>list(MyEnum)</code> so that I get all the distinct names?</p>
|
<python><enums>
|
2024-09-03 13:29:49
| 1
| 744
|
the.real.gruycho
|
78,944,123
| 13,806,869
|
How to change the value of a column based on a condition and an inner join?
|
<p>I have two Pandas dataframes; let's call them df_a and df_b.</p>
<p>df_a looks like this:</p>
<pre><code>account | threshold | standardised_threshold
--------|-----------|-----------------------
A | 39.5 | 40
B | 42.6 | 45
C | 47.4 | 45
D | 53.5 | 50
</code></pre>
<p>df_b looks like this:</p>
<pre><code>account
-------
B
C
</code></pre>
<p>In df_a, I want to add 5 to the standardised_threshold column for all rows that fulfil both of the following conditions:</p>
<ul>
<li>Account is in df_b</li>
<li>The threshold column in df_a is below a fixed number (let's say 45 for this example)</li>
</ul>
<p>In other words, df_a should look like this:</p>
<pre><code>account | threshold | standardised_threshold
--------|-----------|-----------------------
A | 39.5 | 40
B | 42.6 | 50
C | 47.4 | 45
D | 53.5 | 50
</code></pre>
<p>Does anyone know the best way to do this please?</p>
|
<python><pandas><dataframe>
|
2024-09-03 11:31:52
| 3
| 521
|
SRJCoding
|
78,944,113
| 5,379,182
|
How to recover from a closed channel
|
<p>I am using <a href="https://aio-pika.readthedocs.io/en/latest/" rel="nofollow noreferrer">aio-pika</a> for my RabbitMQ workers. I ran into a <code>ChannelInvalidStateError</code> exception when the worker wanted to ack the message at the end of its processing but the channel was already closed.</p>
<p>What are the best practices in aio-pika to avoid this issue / recover from it, especially in the scenario of long-running tasks?</p>
<p>Here is mini example of a worker where I simulate this problem</p>
<pre><code>import asyncio
from loguru import logger
import aio_pika as ap
from aio_pika.abc import AbstractIncomingMessage
async def main() -> None:
logger.info("Starting worker...")
conn_str = "amqp://rabbit:password@localhost:5672"
connection = await ap.connect_robust(conn_str, heartbeat=10)
queue_name = "my_queue"
async with connection:
channel = await connection.channel()
await channel.set_qos(prefetch_count=1)
queue = await channel.declare_queue(queue_name, durable=True)
async with queue.iterator() as queue_iter:
async for message in queue_iter:
try:
await process_message(message)
except Exception as e:
logger.error("Cannot process message")
logger.exception(e)
await asyncio.sleep(0.1)
while True:
await asyncio.sleep(1)
async def process_message(message: AbstractIncomingMessage):
body = message.body.decode()
logger.info(f"Got message: {body}")
await asyncio.sleep(2)
await message.channel.close() # simulate a channel close
await message.ack()
if __name__ == '__main__':
asyncio.run(main())
</code></pre>
|
<python><rabbitmq>
|
2024-09-03 11:27:15
| 0
| 3,003
|
tenticon
|
78,943,949
| 243,031
|
flask babel not able to load the translated files
|
<p>I am trying to translate my current flask project and I follow the step mentioned in <a href="https://python-babel.github.io/flask-babel/#module-flask_babel" rel="nofollow noreferrer"><code>flask-babel</code></a>.</p>
<p>First I tried with new <a href="https://flask-restx.readthedocs.io/en/latest/" rel="nofollow noreferrer"><code>flask-restx</code></a> app. It works fine. When I compile the messages, it get those messages when I pass <code>Accept-Language</code> header with that language.</p>
<p>When I try to integrate same with already developed project, it always give <code>English</code> translation even I pass <code>Accept-Language</code> header. I tried printing the headers in my function, it comes as non english.</p>
<p>My code is as below.</p>
<pre><code>from flask import Flask, request, jsonify
from flask_babel import Babel, gettext
from flask_restx import Api, Resource, fields
def get_locale():
# otherwise try to guess the language from the user accept
# header the browser transmits. We support de/fr/en in this
# example. The best match wins.
print("*"*80)
return request.accept_languages.best_match(['en', 'ar'])
app = Flask(__name__)
babel = Babel(app,
locale_selector=get_locale)
api = Api(app, version='1.0', title='My API', doc='/api-swagger')
...
...
</code></pre>
<p>One thing I notice between new app and already developed app was, <code>print</code> statement in the <code>get_locale</code> is executing in new app, but it wont execute in my developed app.</p>
<p>I tried enabling debug mode and remove all <code>pyc</code> files. Also check the file permission of <code>messages.mo</code>, those are same between new app and already developed app.</p>
<p>Is there any other way to know, why <code>Accept-Language</code> headers wont tell <code>flask</code> to use translation files ?</p>
|
<python><flask><translation><python-babel><flask-restx>
|
2024-09-03 10:46:53
| 0
| 21,411
|
NPatel
|
78,943,716
| 270,043
|
Pairwise comparison of multiple fields in Pyspark dataframes
|
<p>I have a pyspark dataframe that looks like the one below.</p>
<pre><code>key_field fieldA fieldB fieldC
ddd A1 B1 C1
ddd A2 B2 C2
ddd A2 B2 C2
eee A1 B1 C1
eee A3 B3 C3
</code></pre>
<p>The goal is to group by <code>key_field</code> (but mapping them to numbers for easier pairwise comparison via a loop later), and store unique groups of <code>< fieldA,fieldB,fieldC ></code> by the <code>key_field</code>. And then compare the groups of <code>< fieldA,fieldB,fieldC ></code> belonging to 2 <code>key_field</code> to see if there is any common group (i.e. intersection).</p>
<p>So in this example, I would have</p>
<pre><code>1: (A1,B1,C1), (A2,B2,C2) <-- key_field: ddd mapped to 1
2: (A1,B1,C1), (A3,B3,C3) <-- key_field: eee mapped to 2
</code></pre>
<p><code>1</code> and <code>2</code> have the common <code>(A1,B1,C1)</code>.</p>
<p>I'm not really sure how to go about doing this.</p>
<p>Initially, I thought a dictionary of sets could be used to store the data, and then I could iterate through the dictionary keys and compare the sets pairwise.</p>
<p>So far, I have created a list out of the unique <code>key_field</code> values, so that I can use the item's index as the dictionary key when storing the group <code>< fieldA,fieldB,fieldC ></code>.</p>
<p><code>key_list = list(df.select(df.key_field).distinct().toPandas()["key_field"])</code></p>
<p>Then I was thinking I could create the dictionary with something like</p>
<p><code>new_dict[key_list.index("key_field")].append((df.fieldA, df.fieldB, df.fieldC))</code></p>
<p>by iterating through the dataframe row by row. But I'm not sure how this should be done, or even if this is the best way to do it.</p>
<p>Please let me know if you have any suggestions on how I can ultimately do a pairwise comparison between the groups of <code>< fieldA,fieldB,fieldC ></code> belonging to each <code>key_field</code>.</p>
|
<python><pyspark>
|
2024-09-03 09:53:49
| 0
| 15,187
|
Rayne
|
78,943,401
| 3,745,149
|
Fine-tuning a Pretrained Model with Quantization and AMP: Scaler Error "Attempting to Unscale FP16 Gradients"
|
<p>I am trying to fine-tune a pretrained model with limited VRAM. To achieve this, I am using quantization and automatic mixed precision (AMP). However, I am encountering an issue that I can't seem to resolve. Could you please help me identify the problem?</p>
<p>Here is a minimal example:</p>
<pre class="lang-none prettyprint-override"><code>import os
from transformers import BitsAndBytesConfig, OPTForCausalLM, GPT2TokenizerFast
import torch
from torch.cuda.amp import GradScaler, autocast
model_name = "facebook/opt-1.3b"
cache_dir = './models'
os.environ["CUDA_VISIBLE_DEVICES"] = "7"
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16
)
pretrained_model:OPTForCausalLM = OPTForCausalLM.from_pretrained(model_name,
cache_dir=cache_dir,
quantization_config=quantization_config)
tokenizer:GPT2TokenizerFast = GPT2TokenizerFast.from_pretrained(model_name,
cache_dir=cache_dir)
optimizer = torch.optim.AdamW(pretrained_model.parameters(), lr=1e-4)
scaler = GradScaler()
input_ids = torch.LongTensor([[0, 1, 2, 3]]).to(0)
labels = torch.LongTensor([[1, 2, 3, 4]]).to(0)
with torch.autocast(device_type='cuda'):
out = pretrained_model(input_ids=input_ids, labels=labels)
loss = out.loss
scaler.scale(out.loss).backward()
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
print(f'End')
</code></pre>
<p>At the line <code>scaler.step(optimizer)</code>, an error occurs:</p>
<pre><code>Exception has occurred: ValueError: Attempting to unscale FP16 gradients.
</code></pre>
|
<python><pytorch><nlp><huggingface-transformers><fine-tuning>
|
2024-09-03 08:38:23
| 1
| 770
|
landings
|
78,943,385
| 8,458,083
|
How to configure Visual Studio Code to recognize external stub files for Python development?
|
<p>I'm developing a Python plugin that uses a specific external library. I have the following setup:</p>
<ul>
<li>I'm using Visual Studio Code as my IDE</li>
<li>I have access to the stub files (<code>.pyi</code>) for the library I'm using</li>
<li>The library is not installed in my development environment, as it will be available in the runtime environment where my plugin will be executed</li>
</ul>
<p>Currently, Visual Studio Code is showing error messages for objects from this library, marking them as unknown, even though they exist in the library.</p>
<p>Is there a config file where I can tell VS Code to search for the definition of the objects from the library in the directory of the stub files? If yes how can I edit them,</p>
<p>I tried to modifiy the settings. under .vscode in my project root. (You can do that in file, preferrence, setting)</p>
<p>According to <a href="https://code.visualstudio.com/docs/python/settings-reference" rel="nofollow noreferrer">https://code.visualstudio.com/docs/python/settings-reference</a> , I have to ovewrite stubPath and set languageServer to Default or Pylance</p>
<pre><code>{
"python.analysis.stubPath": "C:\\Program Files\\Stubs",
"python.languageServer": "Pylance",
}
</code></pre>
<p>But noting happends</p>
<p>I tried with other settings like "extapath" but it didn t work either</p>
|
<python><visual-studio-code>
|
2024-09-03 08:34:18
| 1
| 2,017
|
Pierre-olivier Gendraud
|
78,942,969
| 8,849,755
|
Python dominate do not escape characters
|
<p>I am using <a href="https://github.com/Knio/dominate" rel="nofollow noreferrer">dominate</a> to create an HTML document in Python. In one part of my workflow there is an already hardcoded HTML code in a string which I want to insert inside a tag similarly as <code>innerHTML</code> would do in JavaScript. However, I cannot figure out how to do this, because it always escapes the special HTML characters (as you would normally want to do):</p>
<pre class="lang-py prettyprint-override"><code>import dominate
hardcoded_html_string = 'Hello, <em>how are you doin?</em>'
doc = dominate.tags.html(title='doc')
with doc:
dominate.tags.p(
text = hardcoded_html_string,
# do_not_escape_HTML_chars = True, # I would like to have this!
)
print(doc)
</code></pre>
<p>prints</p>
<pre class="lang-html prettyprint-override"><code><html title="doc">
<p>Hello, &lt;em&gt;how are you doin?&lt;/em&gt;</p>
</html>
</code></pre>
<p>and my expected result is</p>
<pre class="lang-html prettyprint-override"><code><html title="doc">
<p>Hello, <em>how are you doin?<em></p>
</html>
</code></pre>
<p>How can I achieve this?</p>
|
<python><html><dominate>
|
2024-09-03 06:43:48
| 1
| 3,245
|
user171780
|
78,942,907
| 341,840
|
How to get the y values from Bokeh RangeTool selection pan?
|
<p>I'm working in an app which will show the temperatures recorded by meteorological stations.</p>
<p>I'm using Bokeh 3.5.1 with two figures: one is the main graph, and the other one holds a RangeTool to get a view of the data.</p>
<p>What I want is to get the y values (maximum, minimum and mean temperatures) in the selected range to calculate its arithmetic mean in javascript client side. I'm inspecting the RangeTool object (rngt parameter on the callback function), the Plot object (select parameter) and the ColumnDataSource object (source parameter) when a RangesUpdate event is triggered.</p>
<p>But I'm unable to see a method or a property to be used to get the start and end indices of the RangeTool selected range. Any help would be appreciated.</p>
<p>The <em>max</em>, <em>min</em>, <em>start</em> or <em>end</em> properties of the RangeTool <em>x_range</em> object (<em>rngt</em> in the javascript event) gives me the dates in time format of the selected view. But this integer does not exist in the source data: <em>source.data['x_axis'].indexOf(rngt.x_range.min)</em> returns -1.</p>
<p>The relevant code (bok_data is the ColumnDataSource object):</p>
<pre><code>def _configure_main_graph(self, dict_keys, station_data, bok_data, legend):
"""Configure the main graph to plot into the html file."""
station_id = station_data[1]
station_name = station_data[2].capitalize()
cn = station_data[3]
lat = station_data[4]
lon = station_data[5]
height = station_data[6]
country = countries.get(alpha_2=cn)
# Adapted code from https://docs.bokeh.org/en/latest/docs/user_guide/topics/timeseries.html RangeTool
# Normal graph
lat = self.dd_to_dms(float(lat), 'lat')
lon = self.dd_to_dms(float(lon), 'lon')
title = f'Station: {station_id} - {station_name} - Country: {country.name} - Latitude: {lat} - Longitude: {lon} - Height: {height} m.'
maing = figure(tools="xpan", toolbar_location=None,
x_axis_type="datetime", x_axis_location="above",
background_fill_color="#efefef", sizing_mode="scale_both", title=title)
line = None
legend_items = []
tooltips = []
tooltips.append(("Date:", "@x_axis{%F}"))
renderers = []
# For each y axis list of data, plot a line
for ind, y_name in enumerate(dict_keys):
line = maing.line('x_axis', y_name, source=bok_data, line_color=self.graph_lines_colors[ind])
legend_items.append((legend[ind], [line]))
format_y = f"@{y_name}"
format_y += '{0.0} ΒΊC'
tooltips.append((f"Temp. {legend[ind]}:", format_y))
renderers.append(line)
maing.add_tools(HoverTool(tooltips=tooltips, renderers=renderers, formatters={"@x_axis": "datetime"}, mode="vline"))
lines_legend = Legend(items=legend_items, location="center")
maing.add_layout(lines_legend, 'right')
# Set the y axis label
maing.yaxis.axis_label = 'Temperature ΒΊC'
return maing
def _configure_selection_graph(self, maing, dict_keys, bok_data):
# Range selection graph
select = figure(title="Drag the middle and edges of the selection box to change the range above",
height=300, width_policy="max", y_range=maing.y_range,
x_axis_type="datetime", y_axis_type=None,
tools="", toolbar_location=None, background_fill_color="#efefef")
# For each y axis list of data, plot a line
for ind, y_name in enumerate(dict_keys):
select.line(x='x_axis', y=y_name, source=bok_data, line_color=self.graph_lines_colors[ind])
select.ygrid.grid_line_color = None
# Selection tool
range_tool = RangeTool(x_range=maing.x_range, start_gesture="pan")
range_tool.overlay.fill_color = "navy"
range_tool.overlay.fill_alpha = 0.2
select.add_tools(range_tool)
callback = CustomJS(args=dict(source=bok_data, rngt=range_tool, select=select), code="""debugger;""")
select.js_on_event(RangesUpdate, callback)
if self.acknowledgement:
select.add_layout(Title(text=self.acknowledgement, align="center", text_align="center", text_font_style="normal", text_color="gray"), "below")
return select
</code></pre>
<p>And here is a screen capture:</p>
<p><a href="https://i.sstatic.net/zOfIyIp5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOfIyIp5.png" alt="enter image description here" /></a></p>
|
<python><bokeh><bokehjs>
|
2024-09-03 06:15:10
| 1
| 1,075
|
quimm2003
|
78,942,885
| 13,447,006
|
Logging configuration does not take effect
|
<pre><code> logging.basicConfig(
filename=f"{output_location}/log.txt",
format="{asctime} - {levelname} - {filename}: {message}",
datefmt = "%d %b %H:%M",
style="{",
level=logging.INFO
)
</code></pre>
<p>This is my code and it used to work just fine but after I changed a line of code it stopped working. Outcome before it stopped working:</p>
<ol>
<li>Have the updated logging format as specified by the format parameter.</li>
<li>Have the file <code>log.txt</code> created with the logs in it.</li>
</ol>
<p>Now the logs look like this: <code>WARNING:root</code> when the <code>format</code> parameter says otherwise. In additon the <code>log.txt</code> file is not created.</p>
<p>The line of code I changed is <code>output_location = os.getcwd() + r"/../output"</code> where it is changed from <code>r"/output"</code> to <code>r"/../output"</code>. This line of code is in this function:</p>
<pre><code>def create_output_dir() -> Path:
"""Creates the output directory where all the log files and user facing sheets will go."""
output_location = os.getcwd() + r"/output"
path = Path(output_location)
if not path.exists():
msg = "A folder named 'output' has been created. All files generated will go there."
print(msg)
logging.info(msg)
path.mkdir()
return path
</code></pre>
<p>After I reverted the code to remove the <code>/..</code> part the logging format stays as the default which is <code>WARNING:root</code>. I'm not entirely sure what happened but it just stopped working out of nowhere.</p>
|
<python><logging><python-logging>
|
2024-09-03 06:09:14
| 1
| 565
|
AlphabetsAlphabets
|
78,942,791
| 12,035,739
|
Why won't Matplotlib's imshow plot 0.5 as grey?
|
<p>I remember this working. Zero meant fully dark and one meant fully lit up and in-between meant some shade of grey for plotting with <code>pyplot.imshow</code>. I remember plotting the MNIST data of handwritten digits like that. I wrote the following,</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
matrix = 0.5*np.identity(5)
plt.imshow(matrix, cmap='grey')
plt.show()
</code></pre>
<p>If I do not multiply the identity matrix with a half, I get what I expect. But--multiplying it with a half, I expected to see greys along the diagonal. However, it still plots whites there.</p>
<p>Am I missing some small detail? Is there a way to get the behaviour that I expect?</p>
|
<python><matplotlib>
|
2024-09-03 05:27:10
| 1
| 886
|
scribe
|
78,942,768
| 20,762,114
|
Faster Sequential Joins
|
<p>In a regular Polars join where there are duplicates, the result is the cartesian product of the matched rows.</p>
<p>However, I would like to join the dataframes such that if there are duplicates, the rows are matched in a sequential manner.</p>
<p>Example below:</p>
<pre class="lang-py prettyprint-override"><code>dfA = pl.from_repr('''
βββββββββ¬ββββββ
β group β A β
β --- β --- β
β str β i64 β
βββββββββͺββββββ‘
β X β 0 β
β X β 1 β
β Y β 2 β
β Y β 3 β
βββββββββ΄ββββββ
''')
dfB = pl.from_repr('''
βββββββββ¬ββββββ
β group β B β
β --- β --- β
β str β i64 β
βββββββββͺββββββ‘
β X β 4 β
β X β 5 β
β Y β 6 β
β Y β 7 β
βββββββββ΄ββββββ
''')
</code></pre>
<pre><code>Current implementation: dfA.join(dfB, on='group')
βββββββββ¬ββββββ¬ββββββ
β group β A β B β
β --- β --- β --- β
β str β i64 β i64 β
βββββββββͺββββββͺββββββ‘
β X β 0 β 4 β
β X β 0 β 5 β
β X β 1 β 4 β
β X β 1 β 5 β
β Y β 2 β 6 β
β Y β 2 β 7 β
β Y β 3 β 6 β
β Y β 3 β 7 β
βββββββββ΄ββββββ΄ββββββ
Desired outcome: Sequential Left Join on 'group'
βββββββββ¬ββββββ¬ββββββ
β group β A β B β
β --- β --- β --- β
β str β i64 β i64 β
βββββββββͺββββββͺββββββ‘
β X β 0 β 4 β
β X β 1 β 5 β
β Y β 2 β 6 β
β Y β 3 β 7 β
βββββββββ΄ββββββ΄ββββββ
</code></pre>
<p>I'm not sure if there's already a term for such a join, so I have termed it a Sequential Join.</p>
<p>Currently, to achieve the desired outcome, I create an 'index' column on both dataframes then join on both the group and index.</p>
<pre class="lang-py prettyprint-override"><code>(
dfA
.with_columns(
index = pl.int_range(0, pl.len()).over('group')
)
.join(
dfB
.with_columns(
index = pl.int_range(0, pl.len()).over('group')
),
on = ['group', 'index']
)
)
</code></pre>
<p>However, this can be very slow when joining on multiple columns or if there are many distinct groups.</p>
<p>For example, my main use case is for joining large time-series datasets with duplicated timestamps, so doing an <code>pl.int_range(0, pl.len()).over('timestamp')</code> can be very slow. Hence, I'm wondering if there is a faster or better way to do this.</p>
<p>Otherwise, is there a way to add custom join logic in Polars?</p>
|
<python><dataframe><python-polars>
|
2024-09-03 05:13:47
| 3
| 317
|
T.H Rice
|
78,942,542
| 4,500,749
|
Typing for tuple or list of tuples in recursive function
|
<p>Here is an example I wrote.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Tuple, List
def performances(
pt: Tuple[float, float] | List[Tuple[float, float]]
) -> float | list[float]:
if isinstance(pt, list):
output = []
for pt_i in pt:
output.append(performances(pt_i))
else:
output = pt[0] + pt[1]
return output
if __name__ == "__main__":
# This work as intended
t1 = performances((1, 2))
print(t1)
# This work as intended
t2 = performances([(1, 2), (3, 4)])
print(t2)
# This work while I was expecting it to throw a TypeError
t3 = performances((1, 2, 3, 4))
print(t3)
</code></pre>
<p>How can I apply type checking in this case to be sure I only accept a list of tuple with two elements?</p>
<p>I also tried without the <code>Tuple</code> and <code>List</code> type from <code>typing</code>, just using <code>list[tuple[float, float]]</code> but the result it the same.</p>
<p>And:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Tuple, Iterable
type Point = Tuple[float, float]
def performances(pt: Point | Iterable[Point]) -> float | list[float]:
...
</code></pre>
<p>But same.</p>
<p>There is probably something I don't understand about typing but I don't know what.
Thank you.</p>
|
<python><recursion><python-typing>
|
2024-09-03 02:57:21
| 1
| 326
|
Romn
|
78,942,488
| 127,320
|
Resolve no validator found for <class '__main__.Resume'>, see `arbitrary_types_allowed` in Config
|
<p>Getting the <code>no validator found</code> error with the following code. Here are the Library versions:</p>
<pre><code>LangChain version: 0.0.284
Pydantic version: 2.8.2
</code></pre>
<p>Code:</p>
<pre><code>from typing import Optional
from pydantic import BaseModel, Field, ValidationError
from config import set_environment
from langchain.chains import create_extraction_chain_pydantic
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
class Resume(BaseModel):
first_name: Optional[str] = Field(None, description="First Name")
last_name: Optional[str] = Field(None, description="Last Name")
email: Optional[str] = Field(None, description="Email Address")
phone: Optional[str] = Field(None, description="Phone Number")
class Config:
arbitrary_types_allowed = True
set_environment()
# Sample Data
sample_data = {
"first_name": "John",
"last_name": "Doe",
"email": "john.doe@example.com",
"phone": "123-456-7890"
}
# Validate resume
try:
resume = Resume(**sample_data)
print("Validation successful:", resume)
except ValidationError as e:
print("Validation error:", e)
pdf_file_path = "laverne-resume.pdf"
pdf_loader = PyPDFLoader(pdf_file_path)
docs = pdf_loader.load_and_split()
chat_model = ChatOpenAI(model_name="gpt-3.5-turbo")
chain = create_extraction_chain_pydantic(pydantic_schema=Resume, llm=chat_model)
resp = chain.run(docs)
print(resp)
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "/ch4/information_extraction.py", line 56, in <module>
chain = create_extraction_chain_pydantic(pydantic_schema=Resume, llm=chat_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "ws/gen-ai/venv/lib/python3.12/site-packages/langchain/chains/openai_functions/extraction.py", line 97, in create_extraction_chain_pydantic
class PydanticSchema(BaseModel):
File "ws/gen-ai/venv/lib/python3.12/site-packages/pydantic/v1/main.py", line 197, in __new__
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "ws/gen-ai/venv/lib/python3.12/site-packages/pydantic/v1/fields.py", line 504, in infer
return cls(
^^^^
File "ws/gen-ai/venv/lib/python3.12/site-packages/pydantic/v1/fields.py", line 434, in __init__
self.prepare()
File "ws/gen-ai/venv/lib/python3.12/site-packages/pydantic/v1/fields.py", line 550, in prepare
self._type_analysis()
File "ws/gen-ai/venv/lib/python3.12/site-packages/pydantic/v1/fields.py", line 756, in _type_analysis
self.sub_fields = [self._create_sub_type(self.type_, '_' + self.name)]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "ws/gen-ai/venv/lib/python3.12/site-packages/pydantic/v1/fields.py", line 806, in _create_sub_type
return self.__class__(
^^^^^^^^^^^^^^^
File "ws/gen-ai/venv/lib/python3.12/site-packages/pydantic/v1/fields.py", line 434, in __init__
self.prepare()
File "ws/gen-ai/venv/lib/python3.12/site-packages/pydantic/v1/fields.py", line 555, in prepare
self.populate_validators()
File "ws/gen-ai/venv/lib/python3.12/site-packages/pydantic/v1/fields.py", line 829, in populate_validators
*(get_validators() if get_validators else list(find_validators(self.type_, self.model_config))),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "ws/gen-ai/venv/lib/python3.12/site-packages/pydantic/v1/validators.py", line 765, in find_validators
raise RuntimeError(f'no validator found for {type_}, see `arbitrary_types_allowed` in Config')
RuntimeError: no validator found for <class '__main__.Resume'>, see `arbitrary_types_allowed` in Config
</code></pre>
|
<python><pydantic><langchain><py-langchain>
|
2024-09-03 02:21:38
| 1
| 80,467
|
Aravind Yarram
|
78,942,459
| 1,457,380
|
Bar chart with slanted lines instead of horizontal lines
|
<p>I wish to display a barchart over a time series canvas, where the bars have width that match the duration and where the edges connect the first value with the last value. In other words, how could I have slanted bars at the top to match the data?</p>
<p>I know how to make barcharts using either the last value (example 1) or the first value (example 2), but what I'm looking for are polygons that would follow the black line shown.</p>
<p><strong>Example 1</strong></p>
<p><a href="https://i.sstatic.net/ypaab50w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ypaab50w.png" alt="enter image description here" /></a></p>
<p><strong>Example 2</strong></p>
<p><a href="https://i.sstatic.net/YTLY6dx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YTLY6dx7.png" alt="enter image description here" /></a></p>
<p><strong>Code:</strong></p>
<pre><code> import pandas as pd
from pandas import Timestamp
import datetime
import matplotlib.pyplot as plt
import numpy as np # np.nan
dd = {'Name': {0: 'A', 1: 'B', 2: 'C'}, 'Start': {0: Timestamp('1800-01-01 00:00:00'), 1: Timestamp('1850-01-01 00:00:00'), 2: Timestamp('1950-01-01 00:00:00')}, 'End': {0: Timestamp('1849-12-31 00:00:00'), 1: Timestamp('1949-12-31 00:00:00'), 2: Timestamp('1979-12-31 00:00:00')}, 'Team': {0: 'Red', 1: 'Blue', 2: 'Red'}, 'Duration': {0: 50*365-1, 1: 100*365-1, 2: 30*365-1}, 'First': {0: 5, 1: 10, 2: 8}, 'Last': {0: 10, 1: 8, 2: 12}}
d = pd.DataFrame.from_dict(dd)
d.dtypes
d
# set up colors for team
colors = {'Red': '#E81B23', 'Blue': '#00AEF3'}
# reshape data to get a single Date | is there a better way?
def reshape(data):
d1 = data[['Start', 'Name', 'Team', 'Duration', 'First']].rename(columns={'Start': 'Date', 'First': 'value'})
d2 = data[['End', 'Name', 'Team', 'Duration', 'Last']].rename(columns={'End': 'Date', 'Last': 'value'})
return pd.concat([d1, d2]).sort_values(by='Date').reset_index(drop=True)
df = reshape(d)
df.dtypes
df
plt.plot(df['Date'], df['value'], color='black')
plt.bar(d['Start'], height=d['Last'], align='edge',
width=list(+d['Duration']),
edgecolor='white', linewidth=2,
color=[colors[key] for key in d['Team']])
plt.show()
plt.plot(df['Date'], df['value'], color='black')
plt.bar(d['End'], height=d['First'], align='edge',
width=list(-d['Duration']),
edgecolor='white', linewidth=2,
color=[colors[key] for key in d['Team']])
plt.show()
</code></pre>
|
<python><matplotlib><bar-chart>
|
2024-09-03 02:06:16
| 2
| 10,646
|
PatrickT
|
78,942,406
| 2,382,483
|
How to "smooth" a discrete/stepped signal in a vectorized way with numpy/scipy?
|
<p>I have a signal like the orange one in the following plot that can only have integer values:
<a href="https://i.sstatic.net/2h9odqM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2h9odqM6.png" alt="enter image description here" /></a></p>
<p>As you can see, the orange signal in a bit noisy and "waffles" between levels sometimes when its about to change to a new steady state. I'd like to "smooth" this effect and achieve the blue signal. The blue signal is the orange one filtered such that the transitions don't occur until 3 samples in a row have made the jump to the next step. This is pretty easy if I loop through each sample manually and use a couple state variables to track how many times in a row I've jumped to a new step, but its also slow. I'd like to find a way to vectorize this in numpy. Any ideas?</p>
<p>Here's an example of the non-vectorized way that seems to do what I want:</p>
<pre><code>up_count = 0
dn_count = 0
out = x.copy()
for i in range(len(out)-1):
if out[i+1] > out[i]:
up_count += 1
dn_count = 0
if up_count == 3:
up_count = 0
out[i+1] = out[i+1]
else:
out[i+1] = out[i]
elif out[i+1] < out[i]:
up_count = 0
dn_count += 1
if dn_count == 3:
dn_count = 0
out[i+1] = out[i+1]
else:
out[i+1] = out[i]
else:
dn_count = 0
up_count = 0
</code></pre>
<p><strong>EDIT:</strong><br />
Thanks to @Bogdan Shevchenko for this solution. I already have numpy and scipy available, so rather than get pandas involved here's my numpy/scipy version of his answer:</p>
<pre><code>def ffill(arr, mask):
idx = np.where(~mask, np.arange(mask.shape[0])[:, None], 0)
np.maximum.accumulate(idx, axis=0, out=idx)
return arr[idx, np.arange(idx.shape[1])]
x_max = scipy.ndimage.maximum_filter1d(x, 3, axis=0, origin=1, mode="nearest")
x_min = scipy.ndimage.minimum_filter1d(x, 3, axis=0, origin=1, mode="nearest")
x_smooth = ffill(x, x_max!=x_min)
</code></pre>
|
<python><numpy><scipy>
|
2024-09-03 01:38:50
| 1
| 3,557
|
Rob Allsopp
|
78,942,402
| 8,997,728
|
Python: all items that have a higher than average price in array
|
<p>As part of a Python course IΒ΄m taking I wonder whether anyone can help me with the following question: given a list of food names along with their calories and prices information in the format of [name, calories, price] I want to create a function in Python that returns all items in a new array that have a higher than average price.</p>
<p>Say the input for the above is the following array:</p>
<pre><code>input =[
['ABUELO SUCIO (16oz)',400,26],
['Chick-fil-A Chicken Sandwich',400,6],
['Chicken in Lettuce Cups',900,19],
['Classic French Dip',900,16],
['Grilled Chicken Teriyaki',400,18],
['Medium 8 pc Wing Combo',300,10],
['Pad See You',1000,19],
['Tea Leaf Rice',400,15],
['Udon',300,12],
['Very Cherry Ghirardelli Chocolate Cheesecake',900,10]
]
</code></pre>
<p>I am trying to create the method with the def below but not sure what the optimal solution is as am a beginner in Python:</p>
<pre><code>def get_food_selection(food_items):
return sorted(overlap_items)
</code></pre>
<p>Any help is appreciated.</p>
|
<python><arrays><list>
|
2024-09-03 01:34:23
| 1
| 309
|
Estrobelai
|
78,942,254
| 647,002
|
How to "borrow" an instance property type in Python type annotations?
|
<p>In Python type annotations, is there some way to declare that a property has the same type as a property of another class? To "borrow" or copy the type from the other class?</p>
<p>For example, in this code, how could you say that <code>bar</code> should have the same type as the <code>foo</code> property of a <code>Foo</code> class instance?</p>
<pre class="lang-py prettyprint-override"><code># Python
from some_library import Foo # class Foo
class Bar(Foo):
bar: ??? typeof Foo.foo ??? # what goes here?
</code></pre>
<p>I'm looking for the equivalent of this TypeScript:</p>
<pre class="lang-js prettyprint-override"><code>// TypeScript
import { Foo } from "some_library"; // class Foo
class Bar extends Foo {
bar: Foo["foo"]; // bar has same type as `new Foo().foo`
}
</code></pre>
<hr />
<p>More info: <code>Foo</code> is defined in type stubs that are not under my control. I'm looking for a solution that doesn't involve modifying third-party code or importing its private underscore types.</p>
<pre class="lang-py prettyprint-override"><code># some_library.pyi
from typing import TypedDict
class _FooErrorTypeDef(TypedDict, total=False):
message: str
severity: str
class _FooPropTypeDef(TypedDict, total=False):
status: str
code: str | number
errors: list[_FooErrorTypeDef] | None
class Foo:
def __init__(self):
self.foo: _FooPropTypeDef
</code></pre>
|
<python><python-typing><mypy>
|
2024-09-02 23:33:04
| 0
| 6,291
|
medmunds
|
78,942,252
| 11,154,841
|
How can I get TSQL from easy MS Access SQL with little to no handiwork?
|
<p>I have 500+ queries in a bunch of MS Access databases. The queries are rather easy.</p>
<ul>
<li>(1.) I read them out with VBA into an Excel file as columns <code>A</code> to <code>H</code>, with the columns "ID, Datenbank, Objektname, LastUpdated, Objekttyp, Objektart, SourceTableName, Abfrage_SQL", with the last column being the query,</li>
<li>(2.) split the query with Regex into columns <code>I</code> to <code>P</code> as the split SQL blocks "Select, Into, From, Where, Group_By, Having, Order_By",</li>
<li>(3.) shortened the code with aliases into two new columns <code>Q</code> (<code>New SQL Codes</code>) and <code>R</code> (<code>Mapping</code>),</li>
<li>and my aim is to make (4.) a column <code>S</code> (<code>TSQL</code>) as the TSQL that I can take to feed an SSIS data flow.</li>
</ul>
<p>For (3.), see <a href="https://stackoverflow.com/questions/78940118/in-a-standard-ms-access-sql-query-output-that-does-not-have-any-aliases-how-do">In a standard MS Access SQL query output that does not have any aliases, how do I replace the full names by their "first-letters" aliases?</a>, and there are also the links (1.)+(2.).</p>
<p>I need to change the MS Access VBA that is embedded in MS Access SQL, like <code>Format()</code> functions and special formats like <code>#my_date#</code>, and I do not want to Regex-replace the code dozens of times by hand in some Regex-Search-Replace menu. Instead, I want to loop over the replacements with Python, taking the <code>output_file.xlsx</code> from (3.) as the input and renaming the new output to <code>output_file_tsql.xlsx</code>. The output of the new TSQL code shall be put in a new column <code>S</code> (<code>TSQL</code>).</p>
<p>Which kind of Regex replacements might help as patterns for anyone to begin with? I am sure that the patterns in my 500+ queries are only a small share of what you can run into, but on the other hand, they should be a good sample for a cold start. You will have other patterns as well so that you cannot rely only on the examples. But then just answer and share what you found.</p>
<p>Is there a better way to get the TSQL from MS Access SQL than by Regex replacements? Any tool or trick can be an answer to this question.</p>
<p>How can I get TSQL from easy MS Access SQL with little to no handiwork?</p>
<p><em>(Old question was: "Which Regex replacements help when rewriting MS Access SQL queries as mere TSQL queries? How can these be looped over with Excel as input and output?")</em></p>
|
<python><excel><t-sql><ms-access>
|
2024-09-02 23:32:08
| 1
| 9,916
|
questionto42
|
78,942,124
| 6,907,703
|
How to install langchain-openai that is compatible with existing openai installation?
|
<p>I am migrating to <code>langchain</code> version 0.2 in my project, which now requires installing LLM models separately. I attempted to install <code>langchain-openai</code> using:</p>
<pre class="lang-bash prettyprint-override"><code>pipenv install langchain-openai
</code></pre>
<p>However, this conflicts with another package in my environment:</p>
<pre class="lang-toml prettyprint-override"><code>openai = "==1.3.5"
</code></pre>
<p>It seems I need to find a compatible version of <code>langchain-openai</code> that works with <code>openai==1.3.5</code> or consider upgrading <code>openai</code> without breaking my current code.</p>
<p>How can I determine which version of the <code>openai</code> SDK is compatible with specific versions of <code>langchain-openai</code>? I've checked PyPI, but I couldn't find detailed information about dependencies between these packages.</p>
<p>Terminal output:</p>
<pre class="lang-none prettyprint-override"><code>pipenv install langchain_openai
Loading .env environment variables...
Installing langchain_openai...
Resolving langchain_openai...
Added langchain-openai to Pipfile's [packages] ...
Installation Succeeded
Pipfile.lock (244de0) out of date, updating to (fca94c)...
Locking [packages] dependencies...
Building requirements...
Resolving dependencies...
Locking Failed!
[== ] Locking...False
CRITICAL:pipenv.patched.pip._internal.resolution.resolvelib.factory:Cannot install -r C:\Users\MUBASH~1\AppData\Local\Temp\pipenv-uudymc5q-requirements\pipenv-ustatlmc-constraints.txt (line 48), -r C:\Users\MUBASH~1\AppData\Local\Temp\pipenv-uudymc5q-requirements\pipenv-ustatlmc-constraints.txt (line 68) and openai==1.3.5 because these package versions have conflicting dependencies.
</code></pre>
|
<python><openai-api><langchain><pipenv><py-langchain>
|
2024-09-02 21:55:28
| 1
| 1,281
|
Muhammad Mubashirullah Durrani
|
78,942,113
| 13,792,730
|
SymPy having trouble plotting Bessel functions - cannot recognize ```besselj``` within its own namespace
|
<p>It seems like SymPy is having some issues plotting a Bessel function. This is the working example:</p>
<pre><code>import sympy as sp
x = sp.symbols('x')
sp.plot(sp.besselj(4,x), (x,1,2))
</code></pre>
<p>When I try running the block above, I get the following stack trace:</p>
<pre><code>File "/Users/..../test_bessel.py", line 5, in <module>
sp.plot(sp.besselj(4,x), (x,1,2))
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/sympy/plotting/plot.py", line 419, in plot
plots.show()
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/sympy/plotting/backends/textbackend/text.py", line 21, in show
textplot(ser.expr, ser.start, ser.end)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/sympy/plotting/textplot.py", line 167, in textplot
for line in textplot_str(expr, a, b, W, H):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/sympy/plotting/textplot.py", line 67, in textplot_str
y.append(f(val))
^^^^^^
File "<lambdifygenerated-1>", line 2, in _lambdifygenerated
NameError: name 'besselj' is not defined
</code></pre>
<p>I don't quite understand how <code>besselj</code> is not defined, as I imported it with the sympy namespace. Any pointers here are greatly appreciated!</p>
|
<python><plot><sympy><bessel-functions>
|
2024-09-02 21:49:51
| 0
| 321
|
laplacian_07
|
78,941,843
| 5,134,817
|
Several derived instances of an abstract base class throwing mypy error
|
<p>I cannot figure out the correct way to get <code>mypy</code> to not complain about the type hinting when using abstract base classes, specifically a container of several of these (more than 2).</p>
<p>As an example:</p>
<pre class="lang-py prettyprint-override"><code>mapping_1 = {type(i()).__name__: i() for i in [A, B]} # Fine.
mapping_2 = {type(i()).__name__: i() for i in [A, B, C]} # Complains.
</code></pre>
<p>The full MWE:</p>
<pre class="lang-py prettyprint-override"><code>import abc
from typing import Mapping, Type
class I(abc.ABC):
@abc.abstractmethod
def foo(self): ...
class A(I):
def foo(self): ...
class B(I):
def foo(self): ...
class C(I):
def foo(self): ...
mapping_1 = {type(i()).__name__: i() for i in [A, B]} # Fine.
mapping_2 = {type(i()).__name__: i() for i in [A, B, C]} # Complains.
mapping_3: Mapping[str, I] = {type(i()).__name__: i() for i in [A, B, C]} # Complains.
mapping_4: Mapping[str, Type[I]] = {type(i()).__name__: i for i in [A, B, C]} # Complains.
mapping_5: Mapping[str, I] = {type(i()).__name__: i for i in [A, B, C]} # Complains.
</code></pre>
<p>I've seen a few posts about adding <code>Type[I]</code> and similar stuff, but I've not been able to get anything working together to appease <code>mypy</code> (version 1.11.2).</p>
<p>Whether I have a container of instances or objects is not so important, albeit I would prefer a container of objects rather than instances.</p>
<p>Some of the complaints:</p>
<pre><code>$ mypy misc.py
misc.py:23: error: Cannot instantiate abstract class "I" with abstract attribute "foo" [abstract]
misc.py:24: error: Cannot instantiate abstract class "I" with abstract attribute "foo" [abstract]
misc.py:25: error: Cannot instantiate abstract class "I" with abstract attribute "foo" [abstract]
misc.py:25: error: Only concrete class can be given where "type[I]" is expected [type-abstract]
misc.py:26: error: Cannot instantiate abstract class "I" with abstract attribute "foo" [abstract]
misc.py:26: error: Value expression in dictionary comprehension has incompatible type "type[I]"; expected type "I" [misc]
Found 6 errors in 1 file (checked 1 source file)
</code></pre>
|
<python><python-typing><mypy>
|
2024-09-02 19:31:57
| 0
| 1,987
|
oliversm
|
78,941,736
| 1,277,488
|
I'm getting a 400 error and the endpoint isn't even entered. Why?
|
<p>I've encountered a situation with my Django Rest Framework (DRF) app in which a client mobile app is calling one of my endpoints and is getting a 400 error, but I can't debug what's going wrong. This is what appears in the log:</p>
<pre><code>Sep 02 11:11:29 myapp heroku/router at=info method=POST path="/users/generate_otp/?device_token=cBB2x2Y2L04qpQCbYukR31%3AAPA91bEHqCZ3ztI9wim0EzxVZ1Nv6clZMfsDxw7_6reWIVkm5dQcxuWlifnfxkn7Ope_2wTM75_TMw6BV-pAZyEYdrICz1dsk2gX6_aYJwz0-H3SDR2vLWvceiioUs21dTwXQIMwmBN1&mobile=%2B13104014335&device_type=ios" host=myhost.com request_id=fad10208-c01d-4f64-9640-1aff889ab3ee fwd="66.246.86.216" dyno=web.1 connect=0ms service=1ms status=400 bytes=28 protocol=https
</code></pre>
<p>If I send the same URL via Postman the endpoint returns successfully:</p>
<pre><code>Sep 02 11:41:48 myapp heroku/router at=info method=POST path="/users/generate_otp/?device_token=cKY2x2Y2L04qpQCbYukR31%3AAPA91bEHqCZ3ztI9wim0EzxVZ1Nv6clZMfsDxw7_6reWIVkm5dQcxuWlifnfxkn7Ope_2wTM75_TMw6BV-pAZyEYdrICz1dsk2gX6_aYJwz0-H3SDR2vLWvceiioUs21dTwXQIMwmBN1&mobile=%2B13104014335&device_type=ios" host=myhost.com request_id=4ddabe5f-619d-4665-aff5-28afb9ac277d fwd="66.246.86.216" dyno=web.1 connect=0ms service=572ms status=200 bytes=1686 protocol=https
</code></pre>
<p>I've tried to examine the request that comes in, by making these the first lines of the endpoint:</p>
<pre><code>@action(detail=False, methods=["POST"])
def generate_otp(self, request):
"""
Text a one-time password to the given mobile number to login.
If there's not yet a user with that number, register them.
"""
print("IN GENERATE OTP", flush=True)
print("IN GENERATE OTP, request.data =", request.data, flush=True)
print("IN GENERATE OTP, request.headers =", request.headers, flush=True)
</code></pre>
<p>But those print statements never get called -- so it's as if it's not entering the endpoint at all.
β¨To try to debug what's being sent in the request further, I added this middleware:</p>
<pre><code>class RequestLoggingMiddleware:
def __init__(self, get_response):
self.get_response = get_response
def __call__(self, request):
print("--- RequestLoggingMiddleware ---")
print(request.method, request.path)
print(request.GET)
print(request.POST)
print(request.body)
print("--------------------------------", flush=True)
response = self.get_response(request)
return response
</code></pre>
<p>But again nothing is being printed.</p>
<p>I've also set <code>DEBUG = True</code> in settings.py, and have even added:</p>
<pre><code>logger = logging.getLogger("django.request")
logger.setLevel(logging.DEBUG)
</code></pre>
<p>All are attempts to try to get more information on how I could be getting a 400 error for a POST request that seems to work everywhere else. To make matters even more complicated, someone else is running the client mobile app and isn't seeing these results.</p>
<p>What might be going on? What circumstances could be giving me a 400 error before the endpoint is even entered? How can I debug and fix this?</p>
|
<python><django><django-rest-framework><request><http-status-code-400>
|
2024-09-02 18:47:40
| 2
| 2,385
|
Dylan
|
78,941,620
| 1,116,354
|
TemplateSyntaxError jinja2.exceptions.TemplateSyntaxError: Encountered unknown tag 'result1'
|
<p>I am trying to implement an application for comparison. There is a method as you can see below in the method. <code>get_site_analysis</code></p>
<pre><code>@app.route('/comparison/<site_id1>/<site_id2>', methods=['GET'])
def comparison(site_id1, site_id2):
arable = ArableData()
site1_result, site1_images, site1_site_id = get_site_analysis(arable, site_id1)
site2_result, site2_images, site2_site_id = get_site_analysis(arable, site_id2)
return render_template('comparison.html', data=[[site1_result, site1_images, site1_site_id], [site2_result, site2_images, site2_site_id]])
</code></pre>
<p>I am passing data to template <code>data=[[site1_result, site1_images, site1_site_id], [site2_result, site2_images, site2_site_id]])</code>.</p>
<p>Trying to access it as below gives me the error in subjext.</p>
<p>TemplateSyntaxError jinja2.exceptions.TemplateSyntaxError: Encountered unknown tag 'result1'</p>
<p>The way I am trying to access is as below,</p>
<pre><code> {%
result1 = data[0][0]
image_list1 = data[0][1]
site1_id = data[0][2]
result2 = data[0][0]
image_list2 = data[0][1]
site2_id = data[0][2]
%}
</code></pre>
<p>What am I doing wrong here? Please help...</p>
|
<python><jinja2>
|
2024-09-02 17:54:52
| 1
| 6,877
|
Vinay
|
78,941,537
| 7,475,143
|
OpenCV not able to detect aruco marker within image created with opencv
|
<p>I encountered an issue while trying out a simple example of creating and detecting aruco-images.
In the following code-snippet, I generate aruco images, save them to a file and then load one of these files for detection:</p>
<pre><code>import cv2
aruco_dict= cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_ARUCO_ORIGINAL)
params = cv2.aruco.DetectorParameters()
##create aruco
marker_size = 500 #mm
for i in range(6):
marker_image = cv2.aruco.generateImageMarker(aruco_dict, i, marker_size)
cv2.imwrite(f"marker_{i}.png", marker_image)
##load aruco image
img = cv2.imread("marker_5.png")
#detect markers
aruco_detector = cv2.aruco.ArucoDetector(aruco_dict,params)
corners, ids, _ = aruco_detector.detectMarkers(img)
print(corners)
print(ids)
</code></pre>
<p>This code results in an empty list of corners (e.g. the detector wasn't able to find the aruco). I assumed it should be able to detect it easily if I simply re-use the image created by OpenCV.</p>
<p>Does someone have an idea what I did wrong or where the issue lies?
Any tips are welcome.
Kind regards!</p>
|
<python><opencv><image-processing><augmented-reality><aruco>
|
2024-09-02 17:29:33
| 1
| 563
|
Bobipuegi
|
78,941,483
| 243,031
|
Want to use COALESCE on related fields on django model
|
<p>I have model structure as below.</p>
<pre><code>from django.db import models
class Switch(models.Model):
fqdn = models.CharField(unique=True)
class Meta:
db_table = 'Switch'
class Colo(models.Model):
name = models.CharField()
class Meta:
db_table = 'Colo'
class Clstr(models.Model):
colo = models.ForeignKey('Colo',
db_column='colo',
related_name='clstrs')
name = models.CharField()
class Meta:
db_table = 'Clstr'
class ESwtch(models.Model):
switch = models.OneToOneField(Switch,
db_column='switch',
primary_key=True,
related_name='e_swtch')
clstr = models.ForeignKey('Clstr',
db_column='Clstr',
related_name="e_swtches")
class Meta:
db_table = 'ESwtch'
class BSwtch(models.Model):
switch = models.OneToOneField(Switch,
db_column='switch',
primary_key=True,
related_name='b_swtch')
clstr = models.ForeignKey('Clstr',
db_column='clstr',
related_name='b_swtches')
class Meta:
db_table = 'BSwtch'
class VChas(models.Model):
clstr = models.ForeignKey('Clstr',
db_column='clstr',
related_name='v_chas')
vc_num = models.IntegerField()
class Meta:
db_table = 'VChas'
class VSwtch(models.Model):
switch = models.OneToOneField(Switch,
db_column='switch',
primary_key=True,
related_name='v_swtch')
v_chas = models.ForeignKey(VChas,
db_column='v_chas',
related_name='v_switches')
role = models.CharField()
class Meta:
db_table = 'VSwtch'
class CTrig(models.Model):
switch = models.ForeignKey(Switch,
db_column='switch',
related_name='c_trig')
config_trigger = models.CharField()
class Meta:
db_table = 'CTrig'
</code></pre>
<p>I want to get the <code>CTrig</code> based on <code>Colo.name</code>. I ran raw query as below.</p>
<pre><code>SELECT `CTrig`.`switch`,
`CTrig`.`config_trigger`,
`Switch`.`fqdn`
FROM `CTrig`
LEFT OUTER JOIN `Switch` ON (`CTrig`.`switch` = `Switch`.`id`)
LEFT OUTER JOIN `BSwtch` ON (`Switch`.`id` = `BSwtch`.`switch`)
LEFT OUTER JOIN `ESwtch` ON (`Switch`.`id` = `ESwtch`.`switch`)
LEFT OUTER JOIN `VSwtch` ON (`Switch`.`id` = `VSwtch`.`switch`)
LEFT OUTER JOIN `VChas` ON (`VSwtch`.`v_chas` = `VChas`.`id`)
LEFT OUTER JOIN `Cluster` ON (COALESCE(`BSwtch`.`clstr`,
`VChas`.`clstr`,
`ESwtch`.`clstr`) = `Clstr`.`id`)
LEFT OUTER JOIN `Colo` ON (`Clstr`.`colo` = `Colo`.`id`)
WHERE `Colo`.`name` = 'coloname'
ORDER BY `ConfigTrigger`.`triggered_at` DESC
LIMIT 1;
</code></pre>
<p>This query works fine, but when I try to implement this using DJango model, its giving error.</p>
<p>I tried as below.</p>
<pre><code>from django.db.models.functions import Coalesce
...
...
CTrig.objects.select_related("switch").filter(
Coalesce("switch__b_swtch__clstr__colo__name",
"switch__e_swtch__clstr__colo__name",
"switch__v_swtch__v_chas__clstr__colo__name")=="coloname")
...
...
</code></pre>
<p>But this gives error, as below.</p>
<pre><code>File "/home/user/var/virtualenvs/myenv/lib/python3.8/site-packages/pkg/mod/api/views/ctrig.py", line 49, in get_queryset
ret_val = ret_val.filter(
File "/home/user/var/virtualenvs/myenv/lib/python3.8/site-packages/django/db/models/query.py", line 941, in filter
return self._filter_or_exclude(False, args, kwargs)
File "/home/user/var/virtualenvs/myenv/lib/python3.8/site-packages/django/db/models/query.py", line 961, in _filter_or_exclude
clone._filter_or_exclude_inplace(negate, args, kwargs)
File "/home/user/var/virtualenvs/myenv/lib/python3.8/site-packages/django/db/models/query.py", line 968, in _filter_or_exclude_inplace
self._query.add_q(Q(*args, **kwargs))
File "/home/user/var/virtualenvs/myenv/lib/python3.8/site-packages/django/db/models/sql/query.py", line 1391, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
File "/home/user/var/virtualenvs/myenv/lib/python3.8/site-packages/django/db/models/sql/query.py", line 1410, in _add_q
child_clause, needed_inner = self.build_filter(
File "/home/user/var/virtualenvs/myenv/lib/python3.8/site-packages/django/db/models/sql/query.py", line 1281, in build_filter
arg, value = filter_expr
TypeError: cannot unpack non-iterable bool object
</code></pre>
<p><code>coalesce</code> is not able to convert the leftside related fields. Doc for the <a href="https://docs.djangoproject.com/en/5.1/ref/models/database-functions/#coalesce" rel="nofollow noreferrer"><code>coalesce</code></a> give example where this function used on left right side of the equal operator.</p>
|
<python><django><orm><coalesce>
|
2024-09-02 17:09:41
| 0
| 21,411
|
NPatel
|
78,941,367
| 7,326,981
|
Binance - Spot Market Profit Calculator
|
<p>I have a Python method that calculates the profit after taking into account the commission structure. However, it fails to replicate the exact values from Binance trade history. For example I bought <code>ETH/USDT</code> using <code>LIMIT</code> order at a price of <code>2595</code> with a buy amount of <code>57.609 USDT</code>. Then I sold using <code>LIMIT</code> order at a price of <code>2700</code>. As per my understanding the precision value for this pair is 8 but still the calculator fails to give correct results. Below is the code I am using.</p>
<pre><code>def calculate_profit_after_commission_binance(buy_price: float, buy_amount_usdt: float, sell_price: float,
order_type_buy: str = 'MARKET', order_type_sell: str = 'MARKET',
fee_rate_maker: float = 0.001, fee_rate_taker: float = 0.001) -> float:
if order_type_buy == 'MARKET':
buy_fee_rate = fee_rate_taker
else:
buy_fee_rate = fee_rate_maker
if order_type_sell == 'MARKET':
sell_fee_rate = fee_rate_taker
else:
sell_fee_rate = fee_rate_maker
# Calculate the amount of asset bought
asset_quantity = buy_amount_usdt / buy_price
# Deduct commission on the asset bought
asset_quantity_available = asset_quantity * (1 - buy_fee_rate)
# Selling commission in USDT
sell_commission = asset_quantity_available * sell_price * sell_fee_rate
# Profit after sell commission
profit = asset_quantity_available * sell_price - sell_commission - buy_amount_usdt
return profit
</code></pre>
<p><a href="https://i.sstatic.net/AOyrO78J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AOyrO78J.png" alt="enter image description here" /></a></p>
|
<python><trading><binance>
|
2024-09-02 16:25:03
| 2
| 1,298
|
Furqan Hashim
|
78,941,273
| 685,984
|
Python requests: No module named 'werkzeug.wrappers.json'
|
<p>I'm trying to use the requests library in Python, but when I do so, I get an error</p>
<pre><code> from werkzeug.wrappers.json import JSONMixin
ModuleNotFoundError: No module named 'werkzeug.wrappers.json'
</code></pre>
<p>Below is an example (using a repo I forked specially for this, and a very fine grained access token that will expire soon, in case you're concerned about me sharing it).</p>
<p>import requests
import json</p>
<pre><code># GitHub API URL to create an issue in the crobarcro/Spoon-Knife repository
url = 'https://api.github.com/repos/crobarcro/Spoon-Knife/issues'
# Example payload for creating a GitHub issue
payload = {
"title": "Test Issue from API",
"body": "This is a test issue created via the GitHub API in the crobarcro/Spoon-Knife repository.",
"assignees": [], # You can assign this to specific users if needed
"labels": ["bug"] # Optional, you can add labels to the issue
}
# Headers with authentication (you need a GitHub token)
headers = {
'Authorization': 'token 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', # Replace with your GitHub token
'Accept': 'application/vnd.github.v3+json',
'Content-Type': 'application/json'
}
# Sending the POST request
response = requests.post(url, headers=headers, data=json.dumps(payload))
# Checking the response
if response.status_code == 201:
print('Issue created successfully:', response.json())
else:
print(f'Failed to create issue. Status code {response.status_code}:', response.text)
</code></pre>
<p>The full error message:</p>
<pre><code>Traceback (most recent call last):
File "/snap/pycharm-community/405/plugins/python-ce/helpers/pycharm/_jb_pytest_runner.py", line 75, in <module>
sys.exit(pytest.main(args, plugins_to_load + [Plugin]))
File "/home/<redacted>/.local/lib/python3.10/site-packages/_pytest/config/__init__.py", line 159, in main
config = _prepareconfig(args, plugins)
File "/home/<redacted>/.local/lib/python3.10/site-packages/_pytest/config/__init__.py", line 346, in _prepareconfig
config = pluginmanager.hook.pytest_cmdline_parse(
File "/home/<redacted>/.local/lib/python3.10/site-packages/pluggy/_hooks.py", line 513, in __call__
return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
File "/home/<redacted>/.local/lib/python3.10/site-packages/pluggy/_manager.py", line 120, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/<redacted>/.local/lib/python3.10/site-packages/pluggy/_callers.py", line 139, in _multicall
raise exception.with_traceback(exception.__traceback__)
File "/home/<redacted>/.local/lib/python3.10/site-packages/pluggy/_callers.py", line 122, in _multicall
teardown.throw(exception) # type: ignore[union-attr]
File "/home/<redacted>/.local/lib/python3.10/site-packages/_pytest/helpconfig.py", line 106, in pytest_cmdline_parse
config = yield
File "/home/<redacted>/.local/lib/python3.10/site-packages/pluggy/_callers.py", line 103, in _multicall
res = hook_impl.function(*args)
File "/home/<redacted>/.local/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1152, in pytest_cmdline_parse
self.parse(args)
File "/home/<redacted>/.local/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1501, in parse
self._preparse(args, addopts=addopts)
File "/home/<redacted>/.local/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1388, in _preparse
self.pluginmanager.load_setuptools_entrypoints("pytest11")
File "/home/<redacted>/.local/lib/python3.10/site-packages/pluggy/_manager.py", line 421, in load_setuptools_entrypoints
plugin = ep.load()
File "/usr/lib/python3.10/importlib/metadata/__init__.py", line 171, in load
module = import_module(match.group('module'))
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "/home/<redacted>/.local/lib/python3.10/site-packages/_pytest/assertion/rewrite.py", line 178, in exec_module
exec(co, module.__dict__)
File "/home/<redacted>/.local/lib/python3.10/site-packages/schemathesis/__init__.py", line 2, in <module>
from ._hypothesis import init_default_strategies, register_string_format
File "/home/<redacted>/.local/lib/python3.10/site-packages/_pytest/assertion/rewrite.py", line 178, in exec_module
exec(co, module.__dict__)
File "/home/<redacted>/.local/lib/python3.10/site-packages/schemathesis/_hypothesis.py", line 13, in <module>
from . import utils
File "/home/<redacted>/.local/lib/python3.10/site-packages/_pytest/assertion/rewrite.py", line 178, in exec_module
exec(co, module.__dict__)
File "/home/<redacted>/.local/lib/python3.10/site-packages/schemathesis/utils.py", line 17, in <module>
from werkzeug.wrappers.json import JSONMixin
ModuleNotFoundError: No module named 'werkzeug.wrappers.json'
</code></pre>
|
<python><python-requests>
|
2024-09-02 15:55:09
| 1
| 2,949
|
crobar
|
78,941,136
| 1,714,385
|
Can requests-aws4auth.AWS4Auth accept three arguments?
|
<p>I'm trying to connect to AWS via a script. The script works on my colleague's machine, but doesn't on mine, even if I install all of his python packages via pip freeze. I can't even run the example from the <a href="https://pypi.org/project/requests-aws4auth/" rel="nofollow noreferrer"><code>requests-aws4auth</code> page</a>:</p>
<pre><code>from requests_aws4auth import AWS4Auth
from botocore.session import Session
credentials = Session().get_credentials()
auth = AWS4Auth(region='eu-west-1', service='es',
refreshable_credentials=credentials)
</code></pre>
<p>if I try, I get the following error:</p>
<pre><code>TypeError: AWS4Auth() takes 2, 4 or 5 arguments, 0 given
</code></pre>
<p>How can I run said piece of code successfully?</p>
<p>I use <code>requests-aws4auth==1.3.1</code></p>
|
<python>
|
2024-09-02 15:11:56
| 2
| 4,417
|
Ferdinando Randisi
|
78,941,054
| 3,033,634
|
error message using multiple %s string substitutions
|
<p>I haven't been able to find another example of string substitution like in this error message where %s%s is doubled up as it is here: <a href="https://github.com/django/django/blob/387475c5b2f1aa32103dbe21cb281d3b35165a0c/django/contrib/gis/utils/layermapping.py#L260" rel="nofollow noreferrer">https://github.com/django/django/blob/387475c5b2f1aa32103dbe21cb281d3b35165a0c/django/contrib/gis/utils/layermapping.py#L260</a></p>
<p>The %s%s and subsequent single %s are resulting in different values when I run code that raises that error. Looking for an explanation of the double usage.</p>
<p>here is the code linked:</p>
<pre class="lang-py prettyprint-override"><code>raise LayerMapError(
"Invalid mapping geometry; model has %s%s, "
"layer geometry type is %s."
% (fld_name, "(dim=3)" if coord_dim == 3 else "", ltype)
)
</code></pre>
|
<python><string>
|
2024-09-02 14:49:54
| 1
| 1,050
|
Hugh_Kelley
|
78,940,790
| 13,023,224
|
Pandas order column with lists by pairs
|
<p>Here is the dataframe:</p>
<pre><code>df1 = pd.DataFrame( {'st': {0: 1, 1: 0, 2: 2, 3: 0, 4: 1, 5: 5, 6: 0, 7: 7, 8: 19, 9: 0, 10: 0, 11: 0, 12: 3, 13: 0}, 'gen': {0: 'B1', 1: 'A0,B0', 2: 'A1,B1', 3: 'A0,B0', 4: 'B109', 5: 'B4,A1', 6: 'A0,B0', 7: 'A4,B3', 8: 'B15,A4', 9: 'A0,B0', 10: 'A0,B0', 11: 'A0,B0', 12: 'A123', 13: 'A0,B0'}, 'gen2': {0: 'B(1)', 1: 'A(0),B(0)', 2: 'A(1),B(1)', 3: 'A(0),B(0)', 4: 'B(109)', 5: 'A(1),B(4)', 6: 'A(0),B(0)', 7: 'A(4),B(3)', 8: 'A(4),B(15)', 9: 'A(0),B(0)', 10: 'A(0),B(0)', 11: 'A(0),B(0)', 12: 'A(123)', 13: 'A(0),B(0)'}} )
</code></pre>
<p>It yields:</p>
<pre><code> st gen gen2
0 1 B1 B(1)
1 0 A0,B0 A(0),B(0)
2 2 A1,B1 A(1),B(1)
3 0 A0,B0 A(0),B(0)
4 1 B109 B(109)
5 5 B4,A1 A(1),B(4)
6 0 A0,B0 A(0),B(0)
7 7 A4,B3 A(4),B(3)
8 19 B15,A4 A(4),B(15)
9 0 A0,B0 A(0),B(0)
10 0 A0,B0 A(0),B(0)
11 0 A0,B0 A(0),B(0)
12 3 A123 A(123)
13 0 A0,B0 A(0),B(0)
</code></pre>
<p>Note that column ['gen2'] is the sought after result</p>
<p>Separated by coma, column ['gen'] presents value pairs. Each value pair is made by a letter (A or B), and a number(integer).</p>
<p>I would like ['gen'] column to dispaly results ordered by pairs, prenting first 'A+value' and then 'B+value'. Also note that ['gen1'] presents all values after the letter in brackets.</p>
<p>See in sought after result column, where changes happen on index numbers 5 and 14.</p>
<pre><code>Index 5, column['gen2'] reorders column ['gen'] from B4,A1 to **A(1),B(4)**.
Index 8, column['gen2'] reorders column ['gen'] from B14,A4 to **A(4),B(15)**.
</code></pre>
|
<python><pandas><list><sorting>
|
2024-09-02 13:48:18
| 1
| 571
|
josepmaria
|
78,940,757
| 13,606,345
|
How to annotate after group by and order_by in django?
|
<p>I have a DB table which has a field "created_at". This is a auto_now_add=True field.</p>
<p>This table is inserted data once everyday. What I want to do is filter data that corresponds to the last day of each month of each year.</p>
<p>I have a query as follows:</p>
<pre class="lang-py prettyprint-override"><code>qs = (
self.model.objects().with_months()
.with_years()
.values('year', 'month')
.annotate(last_day=Max('created_at'))
.order_by('year', 'month')
)
</code></pre>
<p>with_months() and with_years() methods are queryset methods that annotate year and month.</p>
<p>What I want to do is to annotate other fields of this model in this queryset; however, since values() returns a list of dictionaries, I am not able to do that. I cannot add other fields inside values() as they won't be grouped.</p>
<p>How can I achieve this?</p>
|
<python><django><django-orm>
|
2024-09-02 13:39:14
| 1
| 323
|
Burakhan Aksoy
|
78,940,747
| 5,089,311
|
Python Tkinter Treeview display checkbox as a value
|
<p>I need display checkbox as a value for entries in my TreeView.<br />
Basically this:<br />
<a href="https://i.sstatic.net/31xi5tlD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/31xi5tlD.png" alt="enter image description here" /></a></p>
<p>In the example on the screenshot I use UTF-8 symbols β
and β respectively. And it is somewhat OK, but there are couple issues.</p>
<p>First, "checked" and "unchecked" look differently (both size and shape) and I can't find stylistically matching checked/unchecked boxes in UTF-8. There are 2 version of "checked" though - ββ
- which genuinely surprises me why.<br />
Second, the above doesn't work on Linux, i.e. the "unchecked" box is still shown, but instead of "checked" it shows empty space.<br />
And last, when I "edit" the boolean cell, I put in-place <code>ttk.Checkbutton</code> which looks different from both symbols. This discrepancy is less important though, the two above issues are more annoying.</p>
<p>I thought about using <code>tkinter.PhotoImage</code> but it allows to put image only as an left-most icon for entire row, not as specific column value.</p>
<p>Any idea how to make it look nice and be cross-platform?<br />
if it matters, MacOS irrelevant, need only Windows and Linux.</p>
|
<python><tkinter>
|
2024-09-02 13:37:09
| 1
| 408
|
Noob
|
78,940,177
| 12,439,683
|
ValueError: Space not allowed in string format specifier
|
<p>I was modernizing some old <code>%</code> formatted string via regex substitution to change:</p>
<pre class="lang-py prettyprint-override"><code># Old:
'Map: % 20s' % name
# to New:
'Map: {: 20s}'.format(name)
</code></pre>
<p>and was surprised a</p>
<ul>
<li><code>ValueError: Space not allowed in string format specifier</code></li>
</ul>
<p>As the same syntax is valid for numerics and this error did not yet show up on my search on StackOverflow.
Why does this error happen in the new format and how to correctly write an equivalent formatting?</p>
<hr />
<p>Sitenote:</p>
<ul>
<li>I explicitly do not want to use f-strings (yet) because there are many similar lines and it looks much neater using <code>format</code> instead.</li>
</ul>
<hr />
<p>Related Questions:</p>
<ul>
<li><p><sub> Other alignment characters:
<a href="https://stackoverflow.com/q/54415323/12439683">ValueError: '=' alignment not allowed in string format specifier</a> </sub></p>
</li>
<li><p><sub> Same problem for f-strings.
<a href="https://stackoverflow.com/q/72319355/12439683">Space in f-string leads to ValueError: Invalid format specifier</a> </sub></p>
</li>
<li><p><sub> <a href="https://stackoverflow.com/q/5082452/12439683">String formatting: % vs. .format vs. f-string literal</a> </sub></p>
</li>
</ul>
|
<python><string><format><string-formatting>
|
2024-09-02 11:20:43
| 1
| 5,101
|
Daraan
|
78,940,132
| 12,466,687
|
How to extract only unique values from string using regex in Python?
|
<p>I have this piece of String <code>"Desirable: < 200 Borderline HIgh: 200 - 240 High: > 240"</code> where I want to extract only unique Number or decimal values.</p>
<p>To extract <code>Number,Decimal,-</code> I was using this regex code <code>r'[^0-9.-]+'</code> but it doesn't return unique values:</p>
<pre><code>import re
check = "Desirable: < 200 Borderline HIgh: 200 - 240 High: > 240"
re.sub(r'[^0-9.-]+', '',check)
</code></pre>
<p><strong>output:</strong>
<code>200200-240240</code></p>
<p><strong>Desired output:</strong>
<code>200-240</code></p>
<p>Please Note: Its important to able to extract <code>Numbers, Decimals,-</code> from the string.</p>
|
<python><regex>
|
2024-09-02 11:10:08
| 1
| 2,357
|
ViSa
|
78,940,125
| 17,721,722
|
How to Perform SQL-like Update Operations on a PySpark DataFrame Using SQL Queries?
|
<p>I am trying to perform in-memory updates on a very large PySpark DataFrame instead of making disk-based updates in a PostgreSQL database. I chose PySpark for its speed and scalability over direct database updates.</p>
<p><strong>Here's what I'm doing:</strong></p>
<ol>
<li>I have a <code>.csv</code> file named <code>source_table_details</code> that I read and convert into a PySpark DataFrame.</li>
<li>I perform some data manipulations like:
<pre class="lang-py prettyprint-override"><code>df = df.withColumn('column1', df['column2'] * 2)
df = df.withColumn('column2', concat(['column1', 'column2']))
</code></pre>
</li>
<li>The final step would be to load this DataFrame into a database table.</li>
</ol>
<p>However, I want to perform complex update operations similar to what I would do in SQL. For example, I have a SQL query that updates a table based on joins and conditions involving other database tables:</p>
<pre class="lang-sql prettyprint-override"><code>UPDATE source_table_details
SET remark1 = 'AUTO REVERSAL'
FROM source_lrsapirefund_tmp_details
WHERE source_lrscbs_tmp_details.creditdebit = 'CREDIT'
AND source_lrsapirefund_tmp_details.status NOT IN ('1','2')
AND source_lrscbs_tmp_details.utrrrnmerge = source_lrsapirefund_tmp_details.refutrno
AND source_lrsapirefund_tmp_details.refundtype IN ('1','2')
AND source_lrscbs_tmp_details.remark1 IS NULL;
</code></pre>
<p>To execute this query in PySpark, I created a temporary view:</p>
<pre class="lang-py prettyprint-override"><code>df.createOrReplaceTempView("source_table_details")
</code></pre>
<p>I tried to use <code>spark.sql()</code> with this query:</p>
<pre class="lang-py prettyprint-override"><code>query = """
UPDATE source_table_details
SET remark1 = 'AUTO REVERSAL'
FROM source_lrsapirefund_tmp_details
WHERE source_lrscbs_tmp_details.creditdebit = 'CREDIT'
AND source_lrsapirefund_tmp_details.status NOT IN ('1','2')
AND source_lrscbs_tmp_details.utrrrnmerge = source_lrsapirefund_tmp_details.refutrno
AND source_lrsapirefund_tmp_details.refundtype IN ('1','2')
AND source_lrscbs_tmp_details.remark1 IS NULL;
"""
df_ops.df = spark.sql(query)
</code></pre>
<p>But I encountered the error: <code>UPDATE TABLE is not supported temporarily</code>.</p>
<p><strong>Question:</strong></p>
<p>Is there an optimized way to perform SQL-like UPDATE queries directly on a PySpark DataFrame? Given that the other tables (source_lrsapirefund_tmp_details and source_lrscbs_tmp_details) are in the database, is it possible to execute these queries with minimal modification?</p>
<p><strong>Any help or guidance would be appreciated.</strong></p>
|
<python><sql><apache-spark><pyspark><apache-spark-sql>
|
2024-09-02 11:09:09
| 0
| 501
|
Purushottam Nawale
|
78,940,118
| 11,154,841
|
In a standard MS Access SQL query output that does not have any aliases, how do I replace the full names by their "first-letters" aliases?
|
<p>I have a lot of queries from a bunch of MS Access databases. I read them out and split them into their SQL-blocks by the standard SQL keywords with:</p>
<ul>
<li><a href="https://superuser.com/questions/1840186/how-do-i-get-the-full-sql-code-for-all-queries-in-all-ms-access-databases-of-a-f">How do I get the full sql code for all queries in all MS Access databases of a folder?</a></li>
<li><a href="https://stackoverflow.com/questions/78380260/the-big-sql-regex-how-do-i-regex-split-an-easy-sql-query-select-into">"The big SQL RegEx": How do I RegEx split an easy SQL query (SELECT ... INTO ... FROM ... WHERE ... GROUP BY ... HAVING ... ORDER BY)?</a></li>
</ul>
<p>Thus, you need to put the objects from the MS Access databases in an Excel file of that same format to go on with answering this question. You might say that this is too detailed, and yes, it is, but without doing so, you will not get the split of the <code>FROM</code>-block from the rest of the query, which is needed to answer the question at all.</p>
<p>The columns of that input Excel file:</p>
<p><a href="https://i.sstatic.net/XIzwvh7c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XIzwvh7c.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/vYp27Eo7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vYp27Eo7.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/6H02i6rB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6H02i6rB.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/puL0mwfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/puL0mwfg.png" alt="enter image description here" /></a></p>
<p>Or altogether:</p>
<p><a href="https://i.sstatic.net/CowXD9rk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CowXD9rk.png" alt="enter image description here" /></a></p>
<blockquote>
<p>ID Datenbank Objektname LastUpdated Objekttyp Objektart SourceTableName Abfrage_SQL Fehler Select Into From Where Group_By Having Order_By <code>New SQL Codes</code> Mapping</p>
</blockquote>
<p>The main SQL query is in column <code>H</code> (<code>Abfrage_SQL</code>, "Abfrage" means "Query"), while the split into blocks ranges from columns <code>J</code> to <code>P</code>. In the code, you will need column <code>H</code> and column <code>L</code> as the input.</p>
<p>You might get to an answer without this Excel input file and with other code, but you will not get around splitting the Code with some Regex, and I did not want to reinvent the wheel, therefore check the link how to get there. The queries from the MS Access databases at hand do not have aliases.</p>
<p><em>Mind that queries in your MS Access databases might have aliases. If you have them all the time, you do not need this question. But if they are there only sometimes, you need to change the code of the answer.</em></p>
<h4>Task</h4>
<p>I do not want to put the aliases with Regex or even by hand and in many steps. Instead, I want to run Python code on it that does it all in one go.</p>
<p>I need to replace the full table and view names with their standardized aliases. The alias shall be built with the first letters of each name that is split by "_" so that "my_random_table" becomes "mrt". If more than one full name is assigned to an abbreviation, the repeated abbreviation must be given an ascending number.</p>
<p>The full query from MS Access might look like this:</p>
<p><code>select my_random_table.* from my_random_table</code></p>
<p>This shall be shortened by aliases like this:</p>
<p><code>select mrt.* from my_random_table mrt</code></p>
<p>The Excel input file affords a list of 100+ queries in column <code>H</code> and their <code>FROM</code>-block in column <code>L</code>.</p>
<p>In a standard MS Access SQL query output that does not have any aliases, how do I replace the full names by their "first-letters" aliases? This shall be done with Python code that is run on an Excel input file. This Excel input file can be built with the help of the links listed above.</p>
<h4>PS</h4>
<p>The tags "excel" and "ms-access" are not the core of the question, they are not even needed. The answer can help in any other SQL settings as well. I make this clear since the answer takes up the output from MS Access and MS Excel, but you will get around that by re-writing the code for another software setting.</p>
|
<python><sql><excel><ms-access>
|
2024-09-02 11:07:43
| 1
| 9,916
|
questionto42
|
78,939,932
| 4,451,521
|
Why gradio image resizes the image in one environment?
|
<p>I am running the same gradio script in two different computers.</p>
<p>It uses</p>
<pre><code>image_display = gr.Image(label="Image Display", interactive=False)
</code></pre>
<p>however in one PC it shows the image occupying all the width of the Image element, and in the other, the image appears smaller</p>
<p><a href="https://i.sstatic.net/A29FBwb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A29FBwb8.png" alt="enter image description here" /></a></p>
<p>I don't know why this is happening (except maybe a difference of the gradio versions), but how can I make it explicitly that the image element have a width so that the image occupies everything?</p>
<p>EDIT: Turns out it was the gradio version
When I tried a different environment with 4.40.0 it showed the figure in full</p>
<p><strong>Now the question is, with the latest gradio how can I configure gr.Image to display the figure in full?</strong></p>
|
<python><gradio>
|
2024-09-02 10:11:52
| 0
| 10,576
|
KansaiRobot
|
78,939,900
| 10,595,871
|
Exploding multiple column list in pandas
|
<p>I've already tried everything posted here but nothing is working, so please don't mark this as duplicate because I think the problem is different.</p>
<p>I have a json like this:</p>
<pre><code>[{'Id': 1,
'Design': ["09",
'10',
'13'
],
'Research': ['Eng',
'Math']
}]
</code></pre>
<p>Plus other non-list colums. This is repeated for 500 ids.</p>
<p>I need to explode the list columns. the final output should be an excel file, I don't care if the explosion is done directly in json or in pandas.</p>
<p>Already tried:</p>
<pre><code> def lenx(x):
return len(x) if isinstance(x,(list, tuple, np.ndarray, pd.Series)) else 1
def cell_size_equalize2(row, cols='', fill_mode='internal', fill_value=''):
jcols = [j for j,v in enumerate(row.index) if v in cols]
if len(jcols)<1:
jcols = range(len(row.index))
Ls = [lenx(x) for x in row.values]
if not Ls[:-1]==Ls[1:]:
vals = [v if isinstance(v,list) else [v] for v in row.values]
if fill_mode=='external':
vals = [[e] + [fill_value]*(max(Ls)-1) if (not j in jcols) and (isinstance(row.values[j],list))
else e + [fill_value]*(max(Ls)-lenx(e))
for j,e in enumerate(vals)]
elif fill_mode == 'internal':
vals = [[e]+[e]*(max(Ls)-1) if (not j in jcols) and (isinstance(row.values[j],list))
else e+[e[-1]]*(max(Ls)-lenx(e))
for j,e in enumerate(vals)]
else:
vals = [e[0:min(Ls)] for e in vals]
row = pd.Series(vals,index=row.index.tolist())
return row
</code></pre>
<p>Leads to index error</p>
<pre><code>df.explode(['B', 'C', 'D', 'E']).reset_index(drop=True)
</code></pre>
<p>Columns must have same lenght</p>
<pre><code>df1 = pd.concat([df[x].explode().to_frame()
.assign(g=lambda x: x.groupby(level=0).cumcount())
.set_index('g', append=True)
for x in cols_to_explode], axis=1)
</code></pre>
<p>Somehow it creates lots of rows, I think it just explodes a column after another, and it leads to memory error.</p>
<p>Desired Output:</p>
<pre><code>Id Design Research
1 09 Eng
1 10 Math
1 13
</code></pre>
|
<python><pandas>
|
2024-09-02 10:00:43
| 2
| 691
|
Federicofkt
|
78,939,798
| 9,159,407
|
Greenhouse harvest API pagination doesn't work as expected
|
<p>I'm working with greenhouse API and tried to work with some API and their pagination mechanism.</p>
<p>Most of their APIs worked as documented, but the "User Permissions" API pagination didn't work for me.</p>
<p>Does anyone familiar with that issue and have a soltution?</p>
<p>Reference: <a href="https://developers.greenhouse.io/harvest.html#user-permissions" rel="nofollow noreferrer">https://developers.greenhouse.io/harvest.html#user-permissions</a></p>
<p>This user have more than 100 results for this API.</p>
<pre><code>response = requests.get('https://harvest.greenhouse.io/v1/users/1372428/permissions/jobs?page=1&per_page=100', auth=('api_key',''))
print(response.headers.get("link"))
Results:
'<https://harvest.greenhouse.io/v1/users/1372428/permissions/jobs?page=1&per_page=100&since_id=133383843>; rel="next"'
</code></pre>
<p>You can see from the results and the next page is the current page.</p>
|
<python><pagination>
|
2024-09-02 09:35:14
| 0
| 386
|
omer blechman
|
78,939,671
| 2,293,659
|
Format a SELECT query that avoids SQL injection taking multiple parameters
|
<p>I'm using Python 3 and SQLAlchemy to create dynamically a query that selects all the products that fulfill the conditions and avoids SQL injection issues.</p>
<p>I get the following error: "List argument must consist only of tuples or dictionaries"</p>
<p>The code is something like this</p>
<pre><code>data =[{"deparment_id": "vegies", "country": "Mexico"},
{"deparment_id": "vegies", "country": "Australia"},
{"deparment_id": "beef", "country": "Australia"}]
sql= """SELECT * FROM products WHERE (department_id = %s AND country = %s) OR (department_id = %s AND country = %s)
OR (department_id = %s AND country = %s)"""
param = ['vegies', 'Mexico', 'vegies', 'Australia','beef', 'Australia']
res = session.execute(text(sql), param).fetchall()
</code></pre>
|
<python><sqlalchemy><sql-injection>
|
2024-09-02 09:03:20
| 1
| 820
|
PepeVelez
|
78,939,607
| 158,049
|
Generic `NamedTuple`
|
<p>I work with a set of <code>NamedTuple</code> which share two common attributes (<code>key</code> and <code>value</code>), such as:</p>
<pre class="lang-py prettyprint-override"><code>CompanyIdentifier = NamedTuple("CompanyIdentifier", [
("key", str), ("value": str), ("some", str), ("other", str), ("fields", str)])
PeopleIdentifier = NamedTuple("PeopleIdentifier", [
("key", str), ("value": str), ("yet", str), ("another", str), ("field", str)])
WhateverIdentifier = NamedTuple("WhateverIdentifier", [
("key", str), ("value": str)])
</code></pre>
<p>I don't have control over of all these possible <code>NamedTuple</code>, the user of the library is free to create his own, etc.</p>
<p>Now I have created a number of functions to manipulate these classes, let's consider a dummy example:</p>
<pre class="lang-py prettyprint-override"><code>def print_identifier(i: ???):
print(i.key)
print(i.value)
</code></pre>
<p>I know that <code>print_identifier</code> can work on any <code>NamedTuple</code> that has a <code>key</code> and <code>value</code> field, but I struggle to express that to the type hint system.</p>
<p>I tried creating some kind of least common denominator idenfitier <code>NamedTuple</code> such as:</p>
<pre class="lang-py prettyprint-override"><code>AnyIdentifier = NamedTuple("AnyIdentifier", [("key", str), ("value": str)])
</code></pre>
<p>But when I try to provide specific real identifiers, mypy and pyright will complain that the real identifiers are not compatible with <code>AnyIdentifier</code>:</p>
<pre class="lang-py prettyprint-override"><code>from typing import NamedTuple
CompanyIdentifier = NamedTuple(
"CompanyIdentifier",
[
("key", str),
("value", str),
("some", str),
("other", str),
("field", str),
],
)
AnyIdentifier = NamedTuple(
"AnyIdentifier",
[
("key", str),
("value", str),
],
)
def func(identifier: AnyIdentifier):
print(identifier.key)
print(identifier.value)
ci = CompanyIdentifier(key="K", value="V", some="S", other="O", field="F")
gi = AnyIdentifier(key="K", value="V")
func(gi)
func(ci) # <- Error
</code></pre>
<pre><code>Argument of type "CompanyIdentifier" cannot be assigned to parameter "identifier" of type "AnyIdentifier" of function "func".
"CompanyIdentifier" is incompatible with "AnyIdentifier".
</code></pre>
|
<python><python-typing><mypy><namedtuple><pyright>
|
2024-09-02 08:43:18
| 1
| 2,547
|
NewbiZ
|
78,939,142
| 1,993,709
|
How to connect to existing logged in chrome instance in Playwright on MacOs?
|
<p>My goal is to automate a task on a website that requires login. So I want to login once manually and let the automation run from there. I tried using CRD but when my code runs, it opens a new window where my account is not logged in. Here is an example code I am running:</p>
<pre class="lang-py prettyprint-override"><code>from playwright.sync_api import sync_playwright
import time
def connect_to_chrome_debugger():
with sync_playwright() as p:
# Connect to the running Chrome instance using CDP
browser = p.chromium.connect_over_cdp("http://localhost:9234")
# Create a new page in the connected browser
page = browser.new_page()
# Navigate to a URL
page.goto("https://gmail.com")
time.sleep(10)
# Print the title of the page
print(page.title())
# Perform additional actions as needed
# Close the page (not the entire browser)
page.close()
if __name__ == "__main__":
connect_to_chrome_debugger()
</code></pre>
<p>Now in this example, I have already logged into my gmail on chrome and when I launch the CRD session using:</p>
<pre><code>/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --remote-debugging-port=9234
</code></pre>
<p>it shows me as logged in user. However, when playwright runs and opens a new window, then it behaves as if I am logged out.</p>
|
<python><playwright><playwright-python>
|
2024-09-02 06:18:33
| 1
| 4,238
|
Adi
|
78,938,961
| 6,309,590
|
What does it mean if a model acts normal on a training set but is abnormal on validation set
|
<p>I am trying to classify either an image of 25x25 px stacked together as 50x25 px is the same(1) or different(0). I am using keras to create the NN layers. The Keras sequential layers are shown below:</p>
<pre><code>layers.Input((2*imsize,imsize,3)), # shape of input with 3 channels
layers.Reshape((2,imsize,imsize,3)), # Turn the input into two 25x25 images
layers.LayerNormalization(axis=[-1,-2,-3]), # Normalize images
layers.Flatten(), # Flatten array
layers.Dense(16,activation='relu'), # 16 outputs hidden layer
layers.Dense(2,activation='softmax')
</code></pre>
<p>Then I compiled the layers using adam optimiser with loss and accuracy like so:</p>
<pre><code>ml.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
metrics=['accuracy'])
</code></pre>
<p>After that, I trained the model using <code>epoch=20</code> and <code>batch_size=100</code>. I got these as a result after plotting it based on epoch.</p>
<p><strong>Results</strong></p>
<p><a href="https://i.sstatic.net/22zUKmM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/22zUKmM6.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/VCLflBYt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCLflBYt.png" alt="enter image description here" /></a></p>
<p><em>Current Evaluation:</em></p>
<p>My current observation is that</p>
<ol>
<li>the model is overfitting as it performs normally only on the training set?</li>
<li>the model is learning the wrong things as the loss is increasing instead of decreasing on the validation set</li>
</ol>
<p><strong>My question is</strong>: Is my evaluation of the model correct? How should I understand this results in order to improve it?</p>
<p><strong>Update:</strong>
Sample Dataset of same(1):</p>
<p><a href="https://i.sstatic.net/l2VI1S9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l2VI1S9F.png" alt="enter image description here" /></a></p>
<p>Sample Dataset of different(0):</p>
<p><a href="https://i.sstatic.net/nuDs9bXP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuDs9bXP.png" alt="enter image description here" /></a></p>
|
<python><machine-learning><keras><deep-learning><evaluation>
|
2024-09-02 04:51:20
| 2
| 447
|
Squish
|
78,938,875
| 865,220
|
Fix wrong spelling after a run of correct spellings in a sorted dataframe in pandas
|
<p>I have a dataframe like this:</p>
<pre><code>Bevonce,2008,296853
Beyonce,2007,1210744
Beyonce,2007,1222003
Beyonce,2007,1222003
Beyonce,2007,1222007
Beyoncel,2007,1222002
Nicki Mina,2015,2717068
Nicki Minaj,2015,2741567
Nicki Minaj,2015,2741567
Nicki Minaj,2015,2743565
Nicki Minajl,2015,2744974
Nicki Minal,2015,2741562
Nickl Minaj,2015,2741867
</code></pre>
<p>I want to clean it to:</p>
<pre><code>Beyonce,2008,296853
Beyonce,2007,1210744
Beyonce,2007,1222003
Beyonce,2007,1222003
Beyonce,2007,1222007
Beyonce,2007,1222002
Nicki Minaj,2015,2717068
Nicki Minaj,2015,2741567
Nicki Minaj,2015,2741567
Nicki Minaj,2015,2743565
Nicki Minaj,2015,2744974
Nicki Minaj,2015,2741562
Nicki Minaj,2015,2741867
</code></pre>
<p>Note: the kind of errors are</p>
<pre><code> - single character wrong
- single character more
- single character less
- or combination of two of the above
</code></pre>
<p>And to avoid false positive, I am thinking of cleaning a string only if I find its variation after a run of at least three rows of identical first columns (and hence expected to be correct) and that run of identical values can be used as source of truth for the following or preceding mismatching string.</p>
<p>As you can see in the example, <code>Bevonce</code> is the variation from the following 4 run of <code>Beyonce</code> and also <code>Beyoncel</code> is mismatching from 4 run of preceding <code>Beyonce</code>.</p>
<p>Also you can assume my dataframe is always expected to be in sorted order of first column.</p>
|
<python><pandas><dataframe><spelling>
|
2024-09-02 04:02:56
| 6
| 18,382
|
ishandutta2007
|
78,938,799
| 4,057,790
|
How to read responses from request in Robot Selenium
|
<p>I am looking for similar to <code>cypress.intercept()</code> in Robot framework where we read API responses for get and post requests already happening for API testing from UI without additional calls. I've not found any suitable docs. Is it doable from Robot or any helper library?</p>
<pre><code>cy.intercept('POST', '**/login').as('login-request');
cy.wait('@login-request', { responseTimeout: TIME_OUT.pageLoad }).then(
(intercept) => {
const { statusCode, body } = intercept.response;
expect(statusCode).to.eq(200);
expect(body).property('idToken').to.not.be.oneOf([null, undefined]);
Cypress.env('idToken', body.idToken);
}
);
</code></pre>
|
<python><selenium-webdriver><automation><robotframework><ui-automation>
|
2024-09-02 03:12:21
| 2
| 3,653
|
Mithun Shreevatsa
|
78,938,754
| 260,345
|
Set x-axis scale for Altair bar chart
|
<p>I am using Python's Altair and Streamlit to create a bar chart. I'd like to have the x-axis represent time, and the y-axis be a list of people. Start and end times are plotted on the chart for each person. The problem is, I am unable to figure out how to set the scale of the x-axis. <strong>I would like the scale to start at "0:00" (midnight) and end after 24 hours (or rather 23:59:59).</strong> I have attempted many different ways, but am unable to discover how to set the scale. How can I achieve my desired scale of 0 to 24 hours?</p>
<p>The chart looks fine when I do <strong>not</strong> specify the scale - except that the x-axis doesn't go from 0 to 24 hours.</p>
<p><a href="https://i.sstatic.net/AJgFuNp8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJgFuNp8.png" alt="Chart" /></a></p>
<p>Here is my sample code:</p>
<pre><code>import pandas as pd
import streamlit as st
import altair as alt
sample_df = pd.DataFrame({
"Name": ["Abby", "Cathy", "Ellen", "Holly"],
"Start time": [
"1900-01-01 09:00:00",
"1900-01-01 09:30:00",
"1900-01-01 10:00:00",
"1900-01-01 10:00:00"
],
"End time": [
"1900-01-01 10:00:00",
"1900-01-01 10:30:00",
"1900-01-01 11:00:00",
"1900-01-01 11:00:00"
],
})
x_axis = alt.Axis(format="%-H:%M", tickCount=24)
c = (
alt.Chart(sample_df)
.mark_bar()
.encode(
x=alt.X("Start time", type="temporal", timeUnit="hoursminutes", axis=x_axis).title("Time (24-hour format)"),
x2=alt.X2("End time", timeUnit="hoursminutes"),
y=alt.Y('Name:N').title(None)
)
)
st.altair_chart(c, use_container_width=True)
</code></pre>
<p>I tried using this code to set the scale, but it just makes the x-axis disappear in the chart. Is there another way? Or am I doing something wrong?</p>
<pre><code>scale=alt.Scale(domain=[pd.Timestamp("1900-01-01 00:00:00"), pd.Timestamp("1900-01-01 23:59:59")])
</code></pre>
|
<python><streamlit><altair>
|
2024-09-02 02:38:43
| 1
| 2,982
|
Dylan Klomparens
|
78,938,725
| 7,228,093
|
How to include C libraries in cibuildwheel Github Action for Cython module?
|
<p>I'm trying to separate a few libraries from my Python project, into a C++ submodule, that should improve the performance, using Cython. Now, I've managed to make the module an compile it, but now I want to make possible for it to be used in more platforms, not just my computer, and that's when I found that Github provides a way to do this through GHA, moreso, I found that there is cibuildwheel which helps to manage all this work, but I haven't been able to make it work. Seems like the problem might come from the fact that my CPP module requires 2 non-native libraries(libass, and libjsoncpp), and apparently the platforms GH use for the build are not exactly equal, hence I haven't been able to discover how to include them in the build.</p>
<p>setup.py</p>
<pre class="lang-py prettyprint-override"><code>from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
import os
import sys
# Determine platform-specific library paths
if sys.platform == "win32":
# Windows
vcpkg_root = os.getenv('VCPKG_ROOT', 'C:\\vcpkg') # Set your vcpkg path
libass_head = os.path.join(vcpkg_root, 'installed', 'x64-windows', 'include')
libass_lib = os.path.join(vcpkg_root, 'packages', 'libass_x64-windows', 'lib', 'ass.lib')
json_lib = os.path.join(vcpkg_root, 'installed', 'x64-windows', 'lib', 'jsoncpp.lib')
json_head = os.path.join(vcpkg_root, 'installed', 'x64-windows', 'include')
else:
# Linux
libass_lib = "/usr/lib"
libass_head = "/usr/include/ass"
json_lib = "/usr/local/lib"
json_head = "/usr/include/"
ext_module = Extension(
"test_module",
sources=["test_module.pyx", "testmodule.cpp"],
language="c++",
extra_compile_args=["-std=c++11"],
libraries=["ass", "jsoncpp"],
library_dirs=[libass_lib, json_lib],
include_dirs=[libass_head, json_head],
)
setup(
name="test_module",
version='0.1.0',
ext_modules=cythonize([ext_module]),
)
</code></pre>
<p>testmodule.cpp</p>
<pre><code>#include <iostream>
#include <fstream>
#include <regex>
#include <map>
#include <cstring>
#include <ass/ass.h>
#include <json/json.h>
using namespace std;
// Rest of the file
</code></pre>
<p>build_wheel.yml</p>
<pre><code>name: Python application
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
permissions:
contents: read
jobs:
build:
name: Build wheel for ${{ matrix.python }}-${{ matrix.buildplat[1] }}
runs-on: ${{ matrix.buildplat[0] }}
strategy:
fail-fast: false
matrix:
buildplat:
- [ubuntu-20.04, manylinux_x86_64]
#- [ubuntu-20.04, manylinux_i686]
#- [windows-2019, win_amd64]
- [windows-2019, win32]
python: ["cp39", "cp310"]
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Install dependencies (Windows)
if: contains(matrix.buildplat[0], 'windows')
run: |
# Alternative Windows method using vcpkg
git clone https://github.com/microsoft/vcpkg.git C:\vcpkg
C:\vcpkg\bootstrap-vcpkg.bat
C:\vcpkg\vcpkg install libass
C:\vcpkg\vcpkg install jsoncpp
- name: Install dependencies (Linux)
if: contains(matrix.buildplat[0], 'ubuntu')
run: |
sudo apt-get update
sudo apt-get install -y libass-dev libjsoncpp-dev
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt # Install dependencies from requirements.txt
- name: Show DEBUG Linux
if: contains(matrix.buildplat[0], 'ubuntu')
run: |
whereis libass
whereis libjsoncpp
whereis ass
whereis jsoncpp
ls /usr/include/ass
ls /usr/include
- name: Show DEBUG Windows
if: contains(matrix.buildplat[0], 'windows')
run: |
ls C:\vcpkg\installed\x64-windows\include
ls C:\vcpkg\packages
ls C:\vcpkg\packages\libass_x64-windows
ls C:\vcpkg\packages\libass_x64-windows\lib
ls C:\vcpkg\packages\libass_x64-windows\include
- name: Build wheels
uses: pypa/cibuildwheel@v2.8.1
with:
output-dir: wheelhouse
env:
CIBW_BUILD: ${{ matrix.python }}-${{ matrix.buildplat[1] }}
CIBW_SKIP: "cp34-* cp35-*"
CIBW_BEFORE_BUILD: |
pip install setuptools==59.6.0 wheel Cython
CIBW_ENVIRONMENT_WINDOWS: >
INCLUDE="C:\vcpkg\installed\x64-windows\include"
LIB="C:\vcpkg\installed\x64-windows\lib"
CXXFLAGS="-IC:\vcpkg\installed\x64-windows\include"
LDFLAGS="-LC:\vcpkg\installed\x64-windows\lib"
CIBW_ENVIRONMENT_LINUX: >
CXXFLAGS="-I/usr/include"
LDFLAGS="-L/usr/lib"
- name: Show generated wheels
run: ls wheelhouse
- name: Upload wheels
uses: actions/upload-artifact@v3
with:
name: built-wheels
path: ./wheelhouse/*.whl
</code></pre>
<p>Sorry if the setup.py and yml looks very messy, but I've been trying many things to understand what's happening inside of the instance doing the build.</p>
<p>In the case of my Linux builds, it's throwing this error, which seems to imply is not finding either the headers and/or the code of the libraries.</p>
<pre><code>testmodule.cpp:6:10: fatal error: ass/ass.h: No such file or directory
6 | #include <ass/ass.h>
</code></pre>
<p>And in the case of the Windows builds, seems like I've managed to make them point to the right headers, but it's throwing <code>LINK : fatal error LNK1181: cannot open input file 'ass.lib'</code>, which I understand means the lib files are not the right ones(?) I've tried to use the ones at installed and packages in vcpkg, but neither are working.</p>
<p>Does someone knows what might be wrong either on my setup.py or build_wheels.yml?</p>
|
<python><github-actions><cython>
|
2024-09-02 02:18:36
| 2
| 515
|
EfraΓn
|
78,938,722
| 200,783
|
What are valid assignment targets in Python?
|
<p>In Python, obviously it's possible to assign to names, e.g. <code>a = 1</code>. It's also valid to assign to attributes (<code>a.b = 1</code> - AFAIK this corresponds to the <code>__setattr__</code> special method) and indexing operations (<code>a[b] = 1</code>, which corresponds to <code>__setitem__</code>).</p>
<p>In Lua, those are the only valid assignment targets - the language's <em>grammar</em> enforces that restriction: <code>var ::= Name | prefixexp '[' exp ']' | prefixexp '.' Name</code>.</p>
<p>Are those three forms also an exhaustive list of valid assignment targets in Python or are there others? (For example, according to <a href="https://stackoverflow.com/a/74453438/200783">this answer</a>, it's not valid to assign to a function call: <code>f() = 1</code> is invalid.)</p>
<p>Also, is the restriction to valid assignment targets enforced by the grammar as in Lua, or later in the source => bytecode compilation process?</p>
|
<python><lua><variable-assignment>
|
2024-09-02 02:13:41
| 0
| 14,493
|
user200783
|
78,938,424
| 3,995,472
|
Why the Airflow dag is not getting triggered?
|
<p><a href="https://i.sstatic.net/FyMNxccV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyMNxccV.png" alt="Attached the screenshot of Schedules shown on Airflow UI" /></a>I have attached the code. It's 1st Sept 2024, Sunday, 21:45 UTC. Dag should have triggered. What's the problem with it ? I had to manually trigger it. BTW, rest of the dags are running fine, getting triggered on the scheduled time.</p>
<pre><code>from datetime import timedelta, datetime
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.operators.bash_operator import BashOperator
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email': ['abc@abc.com'],
'email_on_failure': False,
'email_on_retry': False,
'start_date': datetime(2024, 8, 26),
'retries': 1,
'retry_delay': timedelta(minutes=5),
}
dag = DAG("your_random_dag",
default_args=default_args,
schedule_interval='0 21 * * 0',
dagrun_timeout=None,
catchup=False,
)
begin = DummyOperator(dag=dag, task_id="BEGIN")
end = DummyOperator(dag=dag, task_id="END")
t1 = BashOperator(
task_id="your_random_dag",
bash_command='python3 /scripts/prod/run_me.py' ,
dag=dag)
begin >> t1 >> end
</code></pre>
|
<python><python-3.x><airflow>
|
2024-09-01 21:51:32
| 0
| 501
|
learner57
|
78,938,395
| 1,267,833
|
Matplotlib broke after Ubuntu update
|
<p>I just updated Ubuntu to 24.04.1 LTS and I'm having some trouble with Matplotlib in my Anaconda Python setup. Most of the other scientific libraries I use appear to be fine.</p>
<p>I'll try to run</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
plt.hist(np.random.normal(10))
</code></pre>
<p>and I get</p>
<pre><code>plt.hist(np.random.normal(10))
Out[7]:
(array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.]),
array([8.40468485, 8.50468485, 8.60468485, 8.70468485, 8.80468485,
8.90468485, 9.00468485, 9.10468485, 9.20468485, 9.30468485,
9.40468485]),
<BarContainer object of 10 artists>)Error in callback <function _draw_all_if_interactive at 0x7ff6a0b9cc20> (for post_execute), with arguments args (),kwargs {}:
Traceback (most recent call last):
File ~/miniconda3/lib/python3.11/site-packages/matplotlib/pyplot.py:268 in _draw_all_if_interactive
draw_all()
File ~/miniconda3/lib/python3.11/site-packages/matplotlib/_pylab_helpers.py:131 in draw_all
manager.canvas.draw_idle()
File ~/miniconda3/lib/python3.11/site-packages/matplotlib/backend_bases.py:1905 in draw_idle
self.draw(*args, **kwargs)
File ~/miniconda3/lib/python3.11/site-packages/matplotlib/backends/backend_agg.py:387 in draw
self.figure.draw(self.renderer)
File ~/miniconda3/lib/python3.11/site-packages/matplotlib/artist.py:95 in draw_wrapper
result = draw(artist, renderer, *args, **kwargs)
File ~/miniconda3/lib/python3.11/site-packages/matplotlib/artist.py:72 in draw_wrapper
return draw(artist, renderer)
File ~/miniconda3/lib/python3.11/site-packages/matplotlib/figure.py:3161 in draw
self.patch.draw(renderer)
File ~/miniconda3/lib/python3.11/site-packages/matplotlib/artist.py:72 in draw_wrapper
return draw(artist, renderer)
File ~/miniconda3/lib/python3.11/site-packages/matplotlib/patches.py:632 in draw
self._draw_paths_with_artist_properties(
File ~/miniconda3/lib/python3.11/site-packages/matplotlib/patches.py:617 in _draw_paths_with_artist_properties
renderer.draw_path(gc, *draw_path_args)
File ~/miniconda3/lib/python3.11/site-packages/matplotlib/backends/backend_agg.py:131 in draw_path
self._renderer.draw_path(gc, path, transform, rgbFace)
ValueError: object __array__ method not producing an array
</code></pre>
<p>Numpy version: 1.26.4</p>
<p>Matplotlib version: 3.9.1</p>
|
<python><matplotlib><ubuntu-24.04>
|
2024-09-01 21:20:03
| 0
| 2,157
|
Taylor
|
78,938,260
| 9,655,667
|
Passing Exception type and type hinting
|
<p>I have the following python code:</p>
<pre><code>from pathlib import Path
def ffind_overview_ex(base_dir: Path, exc: Exception = FileNotFoundError) -> Path:
try:
# do something
except Exception as err:
raise exc("hello") from err
## do some more
if some_extraordinary_condition:
raise exc("Wow, it really happened!?")
return result # result is of type Path
test = foo(SystemExit)
</code></pre>
<p>The code works pretty fine, however MyPy is angry with that code:</p>
<pre><code>lib\cnmerge\merger.py:53: error: Incompatible default for argument "exc" (default has type "type[FileNotFoundError]", argument has type "Exception") [assignment]
lib\cnmerge\merger.py:59: error: "Exception" not callable [operator]
lib\cnmerge\merger.py:62: error: "Exception" not callable [operator]
lib\cnmerge\merger.py:67: error: "Exception" not callable [operator]
test.py:96: error: Argument 2 to "ffind_overview_ex" has incompatible type "type[SystemExit]"; expected "Exception" [arg-type]
Found 5 errors in 2 files (checked 1 source file)
</code></pre>
<p>My intention is that to control what type of Exception ffind_overview_ex() are throwing, rather than creating either a complex except block to handle the myriad exceptions ffind_overview_ex() could throw, or create a sink exception handler.</p>
<p>So, while the code works, what to do with MyPy? How can I "appease" it? AFAIK both FileNotFoundError and SystemExit are inherited from Exception.</p>
|
<python><exception><python-typing><mypy>
|
2024-09-01 19:57:47
| 1
| 455
|
Rick Manix
|
78,938,211
| 6,930,340
|
Computing cross-sectional rankings using a tidy polars dataframe
|
<p>I need to compute cross-sectional rankings across a number of trading securities. Consider the following <code>pl.DataFrame</code> in long (tidy) format. It comprises three different symbols with respective prices, where each symbol also has a dedicated (i.e. local) trading calendar.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"symbol": [*["symbol1"] * 6, *["symbol2"] * 5, *["symbol3"] * 5],
"date": [
"2023-12-30", "2023-12-31", "2024-01-03", "2024-01-04", "2024-01-05", "2024-01-06",
"2023-12-30", "2024-01-03", "2024-01-04", "2024-01-05", "2024-01-06",
"2023-12-30", "2023-12-31", "2024-01-03", "2024-01-04", "2024-01-05",
],
"price": [
100, 105, 110, 115, 120, 125,
200, 210, 220, 230, 240,
3000, 3100, 3200, 3300, 3400,
],
}
)
print(df)
</code></pre>
<pre><code>shape: (16, 3)
βββββββββββ¬βββββββββββββ¬ββββββββ
β symbol β date β price β
β --- β --- β --- β
β str β str β i64 β
βββββββββββͺβββββββββββββͺββββββββ‘
β symbol1 β 2023-12-30 β 100 β
β symbol1 β 2023-12-31 β 105 β
β symbol1 β 2024-01-03 β 110 β
β symbol1 β 2024-01-04 β 115 β
β symbol1 β 2024-01-05 β 120 β
β β¦ β β¦ β β¦ β
β symbol3 β 2023-12-30 β 3000 β
β symbol3 β 2023-12-31 β 3100 β
β symbol3 β 2024-01-03 β 3200 β
β symbol3 β 2024-01-04 β 3300 β
β symbol3 β 2024-01-05 β 3400 β
βββββββββββ΄βββββββββββββ΄ββββββββ
</code></pre>
<p>The first step is to compute the periodic returns using <code>pct_change</code> and subsequently using <code>pivot</code> to align the symbols per date.</p>
<pre class="lang-py prettyprint-override"><code>returns = df.drop_nulls().with_columns(
pl.col("price").pct_change(n=2).over("symbol").alias("return")
).pivot(on="symbol", index="date", values="return")
print(returns)
</code></pre>
<pre><code>shape: (6, 4)
ββββββββββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ
β date β symbol1 β symbol2 β symbol3 β
β --- β --- β --- β --- β
β str β f64 β f64 β f64 β
ββββββββββββββͺβββββββββββͺβββββββββββͺβββββββββββ‘
β 2023-12-30 β null β null β null β
β 2023-12-31 β null β null β null β
β 2024-01-03 β 0.1 β null β 0.066667 β
β 2024-01-04 β 0.095238 β 0.1 β 0.064516 β
β 2024-01-05 β 0.090909 β 0.095238 β 0.0625 β
β 2024-01-06 β 0.086957 β 0.090909 β null β
ββββββββββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ
</code></pre>
<p>The next step is to use <code>concat_list</code> to create a <code>list</code> to compute the ranks per row (descending, i.e. highest return gets rank 1).</p>
<pre class="lang-py prettyprint-override"><code>ranks = (
returns.with_columns(all_symbols=pl.concat_list(pl.all().exclude("date")))
.select(
pl.all().exclude("all_symbols"),
pl.col("all_symbols")
.list.eval(
pl.element().rank(descending=True, method="ordinal").cast(pl.UInt8)
)
.alias("rank"),
)
)
print(ranks)
</code></pre>
<pre><code>shape: (6, 5)
ββββββββββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββββββββββββ
β date β symbol1 β symbol2 β symbol3 β rank β
β --- β --- β --- β --- β --- β
β str β f64 β f64 β f64 β list[u8] β
ββββββββββββββͺβββββββββββͺβββββββββββͺβββββββββββͺβββββββββββββββββββββ‘
β 2023-12-30 β null β null β null β [null, null, null] β
β 2023-12-31 β null β null β null β [null, null, null] β
β 2024-01-03 β 0.1 β null β 0.066667 β [1, null, 2] β
β 2024-01-04 β 0.095238 β 0.1 β 0.064516 β [2, 1, 3] β
β 2024-01-05 β 0.090909 β 0.095238 β 0.0625 β [2, 1, 3] β
β 2024-01-06 β 0.086957 β 0.090909 β null β [2, 1, null] β
ββββββββββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββββββββββββ
</code></pre>
<p>Now we are finally getting to the actual question:<br />
I would like to unpivot <code>ranks</code> again and produce a tidy dataframe. I am looking for the following columns: <code>symbol</code>, <code>date</code>, <code>return</code>, and <code>rank</code>. I was thinking about creating three new columns (basically using <code>explode</code> to unpack the list, but this will only create new rows rather than columns).</p>
<p>Also, I am wondering if I am required to pivot <code>df</code> in the first place or if there's a better way to directly operate on the original <code>df</code> in tidy format? I am actually looking for performance as <code>df</code> could have millions of rows.</p>
|
<python><dataframe><python-polars>
|
2024-09-01 19:28:42
| 1
| 5,167
|
Andi
|
78,938,115
| 4,699,441
|
Python (pyparsing): parsing legacy curly braces file format
|
<p>I have to parse with Python (pyparse) a legacy file format that is not well defined.</p>
<p>It is of the curly brace family (im fact, having to parse arbitrary curly brace formats is a recurring issue because people are always like "XML is too verbose, let's invent our own format").</p>
<p>So I have things like</p>
<pre><code>Variable = [1, 2]
Variable2 = {
Variable,
"Literal",
}
Variable[1] = Variable2
Namespace::Function(argument, {"string literal",
100})
</code></pre>
<p>Newlines are significant outside of expressions (separating statements like <code>;</code> does in C) but not significant in any kind of braces (<code>{}</code>, <code>()</code> and <code>[]</code>).</p>
<p>To make sense of it, it needs to be parsed such that:</p>
<ol>
<li>Each line is a child of the root of the AST (where "line" refers to lines delimited by line breaks not inside any kind of braces).</li>
<li>Each kind of parentheses constitutes a node (annotated with the type of parentheses).</li>
<li>The children of a parenthesis list are comma separated (in this sense newlines also constitute a kind of brace).</li>
<li>Anything else is treated as a single token to be handled in another pass.</li>
</ol>
<p>So for the above example the AST should look like this:</p>
<pre><code>Root
* Brace: \n
* Token: 'Variable ='
* Brace: []
* Token: '1'
* Token: '2'
* Brace: \n
* Token: 'Variable2 ='
* Brace: {}
* Token: 'Variable'
* Token: '"Literal"'
* Brace: \n
* Token: 'Variable'
* Brace: []
* Token: '1'
* Token: '= Variable2'
* Brace: \n
* Token: 'Namespace:Function'
* Brace: ()
* Token: 'argument'
* Brace: {}
* Token: '"string literal"'
* Token: '100'
</code></pre>
<p>Represented in Python as something like:</p>
<pre><code>[
Brace('\n', [
'Variable =',
Brace('[]', ['1', '2'])
]),
...
</code></pre>
<p>The issues are</p>
<ol>
<li>Specifying the syntax for the parser,</li>
<li>Converting the result from the parser into a data structure that I can use afterwards.</li>
</ol>
<p>(I used a dedicated parser package that did (1), but the result was (2) completely unusable for any further steps, so I'm trying again, this time with <code>pyparsing</code>.)</p>
|
<python><parsing><text-parsing><curly-braces>
|
2024-09-01 18:39:16
| 1
| 1,078
|
user66554
|
78,938,085
| 14,944,414
|
Making a user-friendly input for subdividing a square into coordinates
|
<p>I am working on a program (in Python) that involves cutting a square(s) into smaller pieces.
The user has to enter a 'code', which the program will automatically convert into the coordinates for each individual rectangle. Each rectangle also has a value associated with it.</p>
<p>So far, I came up with the following code:</p>
<pre><code>1 # the entire square
0.5;0.5 # square split in half from top to bottom
1:0.5,0.5 # square split in half from side to side
0.5;0.5:0.15,0.35,0.5 # square split in half from top to bottom, with the right side being further subdivided
</code></pre>
<p><a href="https://i.sstatic.net/kkDDzFb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kkDDzFb8.png" alt="Crude representation of examples above drawn with MS Paint" /></a></p>
<p>The <code>;</code> handles divisions from left to right, and the <code>:</code>+<code>,</code> handle subdividions from top to bottom.</p>
<p>The problem I am running into is that with this current method it is impossible to represent something relatively simple:</p>
<p><a href="https://i.sstatic.net/6TI8vzBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6TI8vzBM.png" alt="Impossible to represent figures" /></a></p>
<p>The only thing I can think of to solve this would be some kind of recursive algorithm that will go through some kind of code and generate the coordinates when prompted. It is a priority that this remains as user-friendly as possible, as these codes may be inputted many times and a confusing/lengthy code will defeat the purpose of it (could just enter a nested <code>dict</code>/<code>list</code>). I still cannot wrap my head around how something like this would even work...</p>
<p>Another solution would be to use sone kind of <code>tkinter</code> GUI. I am using it for my project and as the input for this code, so some kind of interactive box (like those sites where there is a slider in the middle and it shows you the before & after?) would also work.</p>
<h4>Cheers!</h4>
<h5>NOTE 0:</h5>
<p>This part of the program handles splitting <a href="https://en.wikipedia.org/wiki/Kanji" rel="nofollow noreferrer">Japanese Kanji (ζΌ’ε)</a> into constituent <a href="https://en.wikipedia.org/wiki/Chinese_character_radicals" rel="nofollow noreferrer">radicals</a>/parts. Most Kanji can already be represented by my solution, but there are lots that cannot! The characters are displayed on a <code>tkinter Canvas</code> object, with rectangles drawn behind each radical. The user selects a number of these radicals with the mouse (I have already done this part).</p>
<h5>NOTE 1:</h5>
<p>As I mentioned, each rectangle has an assigned value to it which has a default value. This meant in my attempt before converting into coordinates I used a nested list to store values:</p>
<pre class="lang-py prettyprint-override"><code>[[0.5,[[1,1]]],[0.5,[[0.15,1],[0.35,1],[0.5,1]]]] # generated from the fourth example
^ assigmed value ^ ^ ^
</code></pre>
<p>The coordinates were calculated based on the sidelength of the <code>Canvas</code> object.</p>
<h5>NOTE 2:</h5>
<p>Although not strictly necessary, but a way to combine these rectangles within the code/with the tkinter widget would be extremely useful, as some radicals can have weird shapes (like an L) or even wrap around everything (like ε£).</p>
|
<python><python-3.x><tkinter><cjk><kanji>
|
2024-09-01 18:23:14
| 1
| 307
|
Leo
|
78,938,073
| 12,016,688
|
Why doesn't the "repeated" number go up in the traceback as I increase the recursion limit?
|
<p>When I run a recursive function and it exceeds the recursion depth limit, the below error is displayed:</p>
<pre class="lang-py prettyprint-override"><code>Python 3.12.4+ (heads/3.12:99bc8589f0, Jul 27 2024, 11:20:07) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def f(): f()
...
>>> f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
[Previous line repeated 996 more times]
RecursionError: maximum recursion depth exceeded
</code></pre>
<p>From what I understand, because the traceback is all the same <code>File "<stdin>", line 1, in f</code>, it does not show it all (because obviously it's not really helpful) and only tells me that this line was repeated 996 times more.
When I manually change the recursion limit, I expect that the traceback size grows as well. But it does not:</p>
<pre class="lang-py prettyprint-override"><code>>>> sys.setrecursionlimit(2000)
>>>
>>> f()
Traceback (most recent call last):
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
[Previous line repeated 997 more times]
RecursionError: maximum recursion depth exceeded
</code></pre>
<p>I doubled the recursion limit, so now I expect that traceback size doubles, but it says that the previous line is repeated 997 times. Why is this the case?</p>
<h2>Note</h2>
<p>I also found this question which seems same as my question, but it isn't. My question is specifically about the size of traceback.</p>
<p><a href="https://stackoverflow.com/questions/64321761/why-does-increasing-the-recursion-depth-result-in-stack-overflow-error">Why does increasing the recursion depth result in stack overflow error?</a></p>
|
<python><recursion><traceback>
|
2024-09-01 18:12:17
| 1
| 2,470
|
Amir reza Riahi
|
78,937,921
| 6,357,916
|
Unable to convert cuda:0 device type tensor to numpy
|
<p>I have <code>y_hat</code> variable of type <code>list</code>. I am unable to convert it to numpy array. It gives following error:</p>
<pre><code>TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
</code></pre>
<p>Also I cannot call <code>.cpu()</code> on it to move <code>y_hat</code> from CUDA to RAM.</p>
<p>Here is the screen grab of vscode debug console:</p>
<p><a href="https://i.sstatic.net/oTxWR3eA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTxWR3eA.png" alt="enter image description here" /></a></p>
<p>What I am missing here?</p>
|
<python><python-3.x><pytorch>
|
2024-09-01 17:10:19
| 1
| 3,029
|
MsA
|
78,937,912
| 2,774,885
|
is there a better (?) way to maintain a bidirectional mapping of strings
|
<p>I've got a use case for what I'll call an "invertible dictionary" -- because I don't know a better term for it. I've got a set of data where the values for some properties are repeated all over the place... think about a database where there are a million records but they share maybe a few dozen unique values for cities or states or countries or whatever.</p>
<p>The optimization assumes that you have a much smaller number of unique values, and that you're doing lookups on the key string far more often than you're doing inserts or modifications.</p>
<p>What I came up with was what I call a string dictionary, which I hacked up below. It basically takes a dict, then builds a list of the unique entries so we can use the index of that list to very quickly get the string. The indirection allows me to store just the index for each record instead of the whole string.</p>
<p>Anyway... this works fine, but it seems like python always has some builtin or well-maintained version of everything I write like this, so I figured I'd ask if such a thing already exists...</p>
<pre><code>class StrMap(dict):
""" provides a fast mapping of strings to integer indices in a list.
all keys of type int have values of type str
all keys of type str have values of type int
examples: d['ace'] : 0, d['bob'] : 1, d[0] = 'ace', d[1] = 'bob' and so on
"""
def __init__(self):
dict.__init__(self)
self.L = []
def add(self, newval : str) -> int:
assert isinstance(newval, str)
if newval not in self:
newidx = len(self.L)
self.L.append(newval)
self[newval] = newidx
return newidx
def retr(self, multikey, addOnMiss=True):
if isinstance(multikey, int):
return self.L[multikey]
elif isinstance(multikey, str):
try:
return self[multikey]
except KeyError:
if addOnMiss:
return self.add(multikey)
else:
print(f"addOnMiss is not set and string key {multikey} is not in dict")
raise KeyError
</code></pre>
|
<python><data-structures>
|
2024-09-01 17:06:50
| 1
| 1,028
|
ljwobker
|
78,937,897
| 1,332,263
|
Confirm if Python script process is running
|
<p>I want to check if a specific Python script is running. The script is "EXIF_GUI_EXPERIMENT.py". Running the script below returns nothing.</p>
<p>If I run <code>ps -fA</code> in the terminal I see the script is running as process "python3 /home/pi/scripts/tkinter/EXIF/EXIF_GUI_EXPERIMENT.py. What is the best way to see if this script is running?</p>
<pre><code>#!/usr/bin/python3\
import subprocess
cmd = ["ps -fA | pgrep "EXIF_GUI_EXPERIMENT.py""]
result = subprocess.run(cmd, capture_output=True, text=True, shell=True)
print (result)
</code></pre>
|
<python><python-3.x>
|
2024-09-01 16:55:46
| 1
| 417
|
bob_the_bob
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.